Reverse image search is a content-based image retrieval (CBIR) query technique that involves providing the CBIR system with a sample image that it will then base its search upon; in terms of information retrieval, the sample image formulates the search query. In particular, reverse image search is characterized by a lack of search terms, which removes the need for a user to guess at keywords or terms that may or may not return a correct result. Reverse image search also allows users to discover content that is related to a specific sample image, gauge the popularity of an image, and find manipulated versions and derivative works.
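At its core, a query-by-example system of this kind reduces each image to a feature vector and ranks the indexed images by their distance to the query's vector. The following minimal sketch uses a simple global color histogram as the feature; this choice, and all names in the snippet, are illustrative assumptions rather than any particular engine's method (production systems use far richer features and approximate indexes).

```python
import numpy as np

def color_histogram(image, bins=8):
    """Global feature vector: a joint RGB histogram, L1-normalized.
    `image` is an (H, W, 3) uint8 array."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = hist.flatten()
    return hist / hist.sum()

def query_by_example(query_image, index_features, k=5):
    """Rank indexed images by Euclidean distance to the query's feature vector."""
    q = color_histogram(query_image)
    distances = np.linalg.norm(index_features - q, axis=1)
    return np.argsort(distances)[:k]          # indices of the k closest images

# Index a few synthetic "images", then use one of them as the sample image.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(100)]
index = np.stack([color_histogram(im) for im in images])
print(query_by_example(images[42], index))    # image 42 ranks first
```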
Google Images
Google's Search by Image is a feature that uses reverse image search and allows users to search for related images by uploading an image or providing an image URL. Google accomplishes this by analyzing the submitted picture and constructing a mathematical model of it using advanced algorithms. This model is then compared with billions of other images in Google's databases before matching and similar results are returned. When available, Google also uses metadata about the image, such as its description.
TinEye
TinEye is a search engine specialized for reverse image search. When an image is submitted, TinEye creates a "unique and compact digital signature or fingerprint" of the image and matches it against other indexed images. This procedure is able to match even heavily edited versions of the submitted image, but will not usually return similar images in the results.
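TinEye's fingerprinting algorithm is proprietary, but a common family of techniques with similar behavior is perceptual hashing, where a compact bit signature changes little under resizing, recompression, or mild edits. The sketch below shows a difference hash (dHash), used here only as an illustrative stand-in for the signature described above.

```python
import numpy as np

def dhash(gray, hash_size=8):
    """Difference hash: shrink the image to (hash_size x hash_size+1) by block
    averaging, then record whether each pixel is brighter than its right-hand
    neighbour. `gray` is a 2-D array of luminance values."""
    rows = np.array_split(np.arange(gray.shape[0]), hash_size)
    cols = np.array_split(np.arange(gray.shape[1]), hash_size + 1)
    small = np.array([[gray[np.ix_(r, c)].mean() for c in cols] for r in rows])
    return (small[:, 1:] > small[:, :-1]).flatten()   # 64 comparison bits

def hamming(a, b):
    """Number of differing bits; a small distance indicates a near-duplicate."""
    return int(np.count_nonzero(a != b))

# An image and a mildly edited copy (here: slightly brightened) hash similarly.
rng = np.random.default_rng(1)
img = rng.integers(0, 200, (120, 160)).astype(float)
print(hamming(dhash(img), dhash(img * 1.1)))          # small distance
```

Two images whose hashes differ in only a few bits are treated as copies or edited versions of the same picture, which matches the behavior described above: edited copies are found, while merely similar images are not.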
Pixsy
Pixsy's reverse image search technology detects image matches on the public internet for images uploaded to the Pixsy platform. New matches are automatically detected and alerts are sent to the user. For unauthorized commercial use of an image owner's work, Pixsy offers a compensation recovery service. Pixsy partners with over 25 law firms and attorneys around the world to pursue resolution for copyright infringement. Pixsy is the strategic image monitoring service for the Flickr platform and its users.
SK Planet
SK Planet uses reverse image search to find related fashion items on its e-commerce website. It developed its vision encoder network based on the TensorFlow Inception-v3 model, chosen for its speed of convergence and generalization for production usage. A recurrent neural network is used for multi-class classification, and fashion-product region-of-interest detection is based on Faster R-CNN. SK Planet's reverse image search system was built in less than 100 man-months.
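Details of SK Planet's production system beyond the description above are not given here; the following sketch only illustrates the general pattern of using a pretrained Inception-v3 as a vision encoder whose pooled activations serve as embeddings for similarity search. The tf.keras interface and the cosine-similarity lookup are assumptions for illustration, not SK Planet's actual code.

```python
import numpy as np
import tensorflow as tf

# Pretrained Inception-v3 with the classification head removed; the global
# average pooling output is a fixed-length embedding for each image.
encoder = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg"
)

def embed(images):
    """images: float array of shape (N, 299, 299, 3) with values in [0, 255]."""
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    return encoder.predict(x, verbose=0)      # shape (N, 2048)

def most_similar(query_vec, catalogue_vecs, k=5):
    """Return indices of the k catalogue embeddings closest to the query
    embedding under cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    c = catalogue_vecs / np.linalg.norm(catalogue_vecs, axis=1, keepdims=True)
    return np.argsort(-c @ q)[:k]
```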
Alibaba
Alibaba released the Pailitao application in 2014. Pailitao allows users to search for items on Alibaba's e-commerce platform by taking a photo of the query object. The application uses a deep CNN model with branches for joint detection and feature learning, which produces a detection mask and discriminative features without background disturbance. GoogLeNet V1 is employed as the base model for category prediction and feature learning.
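Pailitao's exact architecture is described in Alibaba's publications; the toy model below merely illustrates the idea of a shared backbone with two branches, one predicting a coarse foreground mask and one producing an embedding pooled over the masked features, so that background clutter contributes little to the final descriptor. Layer sizes and names are illustrative assumptions, not the production network.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy two-branch network: a shared convolutional backbone feeding
# (a) a coarse foreground-mask branch and (b) an embedding branch.
inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 3, strides=2, activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
features = layers.Conv2D(128, 3, padding="same", activation="relu")(x)

mask = layers.Conv2D(1, 1, activation="sigmoid", name="detection_mask")(features)
masked = layers.Multiply()([features, mask])          # suppress background activations
pooled = layers.GlobalAveragePooling2D()(masked)
embedding = layers.Dense(256, name="embedding")(pooled)

model = tf.keras.Model(inputs, [mask, embedding])
model.summary()
```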
Pinterest
Pinterest acquired the startup company VisualGraph in 2014 and introduced visual search on its platform. In 2015, Pinterest published a paper at the ACM Conference on Knowledge Discovery and Data Mining disclosing the architecture of the system. The pipeline uses Apache Hadoop, the open-source Caffe convolutional neural network framework, Cascading for batch processing, PinLater for messaging, and Apache HBase for storage. Image characteristics, including local features, deep features, salient color signatures, and salient pixels, are extracted from user uploads. The system runs on Amazon EC2 and requires only a cluster of 5 GPU instances to handle daily image uploads to Pinterest. By using reverse image search, Pinterest is able to extract visual features from fashion objects and offer product recommendations that look similar.
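Of the features listed above, a salient color signature is the simplest to illustrate. The sketch below is a simplification: it clusters all pixels rather than only salient regions, and the cluster count and other parameters are illustrative assumptions rather than Pinterest's published settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_signature(image, n_colors=5):
    """Cluster an image's pixels and return the dominant colors with their
    weights: a compact signature that can be compared across images.
    `image` is an (H, W, 3) uint8 array."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    weights = np.bincount(km.labels_, minlength=n_colors) / len(pixels)
    order = np.argsort(-weights)              # most prominent colors first
    return km.cluster_centers_[order], weights[order]
```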
Research systems
Microsoft's Beijing Lab published a paper in the Proceedings of the IEEE on the Arista-SS and Arista-DS systems. Arista-DS performs only duplicate-search algorithms, such as principal component analysis on global image features, to lower computational and memory costs. Arista-DS is able to perform duplicate search on 2 billion images with 10 servers, but with the trade-off of not detecting near duplicates.
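The cited paper gives the full method; the sketch below only illustrates the general approach of projecting global feature vectors onto principal components and binarizing them into compact codes, so that exact copies collide while edited versions generally do not, which is the trade-off noted above. The code sizes and threshold are illustrative assumptions.

```python
import numpy as np

def pca_codes(features, n_components=32):
    """Project global feature vectors onto their top principal components and
    binarize by sign, giving compact codes for large-scale duplicate search."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # principal axes
    components = vt[:n_components]
    codes = (centered @ components.T) > 0                     # boolean codes
    return codes, mean, components

def is_duplicate(code_a, code_b, max_differing_bits=2):
    """Treat two images as duplicates when their codes differ in at most a few bits."""
    return int(np.count_nonzero(code_a != code_b)) <= max_differing_bits
```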