Reverse image search
{{short description|Content-based image retrieval}}
Reverse image search is a content-based image retrieval (CBIR) query technique that involves providing the CBIR system with a sample image that it will then base its search upon; in terms of information retrieval, the sample image itself formulates the search query. In particular, reverse image search is characterized by a lack of search terms, which removes the need for a user to guess at keywords or terms that may or may not return a correct result. Reverse image search also allows users to discover content that is related to a specific sample image,{{cite web|title=How to search by image|url=https://support.google.com/images/answer/1325808?hl=en|accessdate=2 November 2013}} to gauge the popularity of an image, and to discover manipulated versions and derivative works.{{cite web|title=Video searching with Frompo|url=http://blogs.scientificamerican.com/compound-eye/2011/08/16/googles-reverse-image-search/|publisher=Frompo.com|accessdate=2 November 2013}}
A visual search engine is a search engine designed to search for information on the World Wide Web through a reverse image search. The information retrieved may consist of web pages, locations, other images and other types of documents. This type of search engine is mostly used on the mobile Internet to identify an unknown object captured in an image (an unknown search query), such as a building in a foreign city. These search engines often use content-based image retrieval techniques.
A visual search engine analyzes the submitted image for patterns it can recognize and returns related information based on pattern-matching techniques.
Uses
Reverse image search may be used to:{{cite web|url=https://www.tineye.com/faq#why|title=FAQ - TinEye - Why use TinEye?|work=TinEye}}
- Locate the source of an image.
- Find higher resolution versions.
- Discover webpages where the image appears.
- Find the content creator.
- Get information about an image.
Algorithms
Commonly used reverse image search algorithms include feature bundling for large-scale partial-duplicate web image search.[http://research.microsoft.com/pubs/80803/CVPR%5F2009%5Fbundle.pdf Bundling Features for Large Scale Partial-Duplicate Web Image Search] Microsoft.
Visual information searchers
= Image search =
An image search engine is a search engine designed to find images. The search can be based on keywords, a picture, or a web link to a picture. The results depend on the search criterion, such as metadata, distribution of color, or shape, and on the search technique the engine uses.
== Image search techniques ==
Two techniques are currently used in image search:
Search by metadata: the search is based on a comparison of the metadata associated with each image, such as keywords and descriptive text, and returns a set of images sorted by relevance. The metadata associated with each image can reference the title of the image, its format, its color and so on, and can be generated manually or automatically. This metadata generation process is called audiovisual indexing.
Search by example: in this technique, also called reverse image search, results are obtained by comparing images directly, using content-based image retrieval computer vision techniques. During the search, the content of the image is examined: its color, shape, texture, or any other visual information that can be extracted from it. This approach is computationally more expensive, but is more reliable and accurate than search by metadata.
Some image search engines combine both techniques: for example, a first search is performed by entering text, and the images obtained are then used to refine the search.
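As an illustration of search by example, the sketch below computes a tiny average-hash signature from a grayscale pixel grid. This is a simplified stand-in for the far richer features (local descriptors, deep embeddings) that production systems extract; the toy 2x4 "images" are invented for the example.

```python
# Illustrative sketch of "search by example": a minimal average-hash
# signature computed from a grayscale pixel grid. Real systems use
# much richer features, but the principle is the same: reduce each
# image to a compact signature and compare signatures, not pixels.

def average_hash(pixels):
    """Return a bit-tuple signature: 1 where a pixel is brighter than
    the image's mean brightness, 0 otherwise. `pixels` is a 2D list
    of grayscale values (already downscaled, e.g. to 8x8)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(a, b):
    """Number of differing bits; a small distance means similar images."""
    return sum(x != y for x, y in zip(a, b))

# Two tiny 2x4 "images": the second is a slightly brightened copy.
img_a = [[10, 200, 30, 220], [15, 210, 25, 205]]
img_b = [[12, 205, 33, 225], [18, 215, 28, 210]]
sig_a = average_hash(img_a)
sig_b = average_hash(img_b)
print(hamming_distance(sig_a, sig_b))  # 0 — same signature despite the edit
```

Because the signature depends only on each pixel's brightness relative to the image mean, a uniform brightness change leaves the signature unchanged, which is exactly the robustness such signatures aim for.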
= Video search =
A video search engine is a search engine designed to search for video on the net. Some video search engines process the search directly on the Internet, while others host the videos on which the search is performed. Some also allow the format or the length of the video to be used as search parameters. The results usually come with a thumbnail capture of the video.
== Video search techniques ==
Currently, almost all video search engines rely on keywords (search by metadata) to perform searches. These keywords can be found in the title of the video or the text accompanying it, or can be defined by the author. An example of this type of search is YouTube.
= 3D model search =
A 3D model search engine aims to find the file of a 3D modeling object in a database or network. At first glance, such search engines may seem unnecessary, but as the volume of documents on the Internet continues to grow, indexing this information becomes increasingly necessary.
== 3D model search techniques ==
3D model search has traditionally relied on text-based techniques (keywords/tags), with the authors of the indexed material, or Internet users, contributing the tags or keywords. Because this is not always effective, recent research has investigated search engines that combine text search with comparisons against 2D drawings, 3D drawings and 3D models.
Princeton University has developed a search engine that combines all these parameters to perform the search, thus increasing the efficiency of search.{{cite journal | last1= Funkhouser | first1= Thomas | first2 = Patrick | last2 = Min | first3 = Michael | last3 = Kazhdan | first4 = Joyce | last4 = Chen | first5 = Alex | last5 = Halderman | first6 = David | last6 = Dobkin | first7 = David | last7 = Jacobs | year= 2002 | title= A Search Engine for 3D Models | journal= ACM Transactions on Graphics |url=https://www.cs.princeton.edu/~funk/tog03.pdf | volume= 22 | issue =1 | pages= 83–105 |doi = 10.1145/588272.588279 | s2cid= 1178691 }}
= Mobile visual search =
A mobile image search engine is a type of search engine designed for mobile phones, which can find information on the Internet using an image taken with the phone's camera or using keywords. Mobile visual search (MVS) solutions allow image recognition capabilities to be integrated into branded mobile applications, bridging the gap between online and offline media by linking users to digital content.
== Introduction ==
Mobile phones have evolved into powerful image and video processing devices equipped with high-resolution cameras, color displays, and hardware-accelerated graphics. They are also increasingly equipped with global positioning systems and connected to broadband wireless networks. All this enables a new class of applications that use the camera phone to initiate search queries about objects in visual proximity to the user. Such applications can be used, for example, for identifying products, comparison shopping, or finding information about movies, compact discs (CDs), real estate, print media, or artworks.
== Process ==
Typically, this type of search engine uses query-by-example techniques, which use the content, shape, texture and color of the image to compare it against a database and then deliver approximate results for the query.
The process used by these search engines on mobile phones is as follows:
First, the image is sent to the server application. On the server, the image is analyzed by several specialized analysis modules, each covering a different aspect of the image. Each module then decides whether the submitted image contains features within its specialty.
Once this procedure is complete, a central component aggregates the data, creates a results page ranked by each module's confidence, and finally sends it back to the mobile phone.
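The fan-out-and-rank structure of this process can be sketched as follows; the analyzer names and confidence scores are hypothetical, purely to illustrate the shape of the pipeline, not any vendor's actual API.

```python
# Hypothetical sketch of the server-side process described above:
# several specialized analyzers each score the query image for their
# own domain, and a central step filters and ranks their verdicts.
# All names, thresholds and scores here are illustrative.

def landmark_analyzer(image):
    return 0.2  # pretend confidence that the image shows a landmark

def product_analyzer(image):
    return 0.9  # pretend confidence that the image shows a product

def text_analyzer(image):
    return 0.1  # pretend confidence that the image contains text

def build_results_page(image, analyzers, threshold=0.5):
    """Run every analyzer on the image, keep those confident enough,
    and return (name, confidence) pairs sorted best-first."""
    scores = {name: fn(image) for name, fn in analyzers.items()}
    confident = {n: s for n, s in scores.items() if s >= threshold}
    return sorted(confident.items(), key=lambda kv: kv[1], reverse=True)

analyzers = {
    "landmark": landmark_analyzer,
    "product": product_analyzer,
    "text": text_analyzer,
}
page = build_results_page(b"<image bytes>", analyzers)
print(page)  # [('product', 0.9)] — only the product analyzer is confident
```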
Application in popular search systems
=Yandex=
Yandex Images offers a global reverse image and photo search. The site uses standard content-based image retrieval (CBIR) technology, as many other sites do, but additionally uses artificial-intelligence-based technology to locate further results based on the query.{{cite web |url=https://www.buddinggeek.com/yandex-reverse-image-search/ |title=How Does Yandex Reverse Image Search Work? Detailed Guide |date=February 27, 2022 |editor-last=Raj |editor-first=Abhishek |website=www.buddinggeek.com |publisher=Budding Geek |access-date=May 5, 2022}} Users can drag and drop images to the toolbar for the site to search the internet for similar-looking images. Yandex Images also searches some obscure social media sites in addition to more common ones, offering content owners a means of tracking plagiarism of their image or photo intellectual property.
=Google Images=
Google's Search by Image is a feature that uses reverse image search and allows users to search for related images by uploading an image or copying the image URL. Google accomplishes this by analyzing the submitted picture and constructing a mathematical model of it, which is then compared with other images in Google's databases before matching and similar results are returned. When available, Google also uses metadata about the image, such as its description. In 2022 the feature was replaced by Google Lens as the default visual search method on Google, and the old Search by Image function remains available within Google Lens.{{cite web |last1=Li |first1=Abner |title=Google Images on the web now uses Google Lens |url=https://9to5google.com/2022/08/10/google-lens-image-search/ |website=9to5Google |access-date=2 December 2022 |date=10 August 2022}}
=TinEye=
TinEye is a search engine specialized for reverse image search. Upon submitting an image, TinEye creates a "unique and compact digital signature or fingerprint" of said image and matches it with other indexed images.{{cite web|url=http://www.tineye.com/faq#how|title=FAQ - TinEye - How does TinEye work?|work=TinEye}} This procedure is able to match even very edited versions of the submitted image, but will not usually return similar images in the results.{{cite web|url=http://www.tineye.com/faq#similar|title=FAQ - TinEye - Can TinEye find similar images??|work=TinEye}}
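This matching step can be illustrated with a simplified model in which fingerprints are short bit tuples and a match is any indexed image whose fingerprint differs from the query's in at most a few bits. TinEye's actual fingerprints are proprietary and far more elaborate; the filenames and signatures below are invented.

```python
# Simplified sketch of fingerprint matching against an index.
# Signatures are short bit tuples here; real fingerprints are
# proprietary, much longer, and indexed for sub-linear lookup.

INDEX = {
    "cat.jpg":    (1, 0, 1, 1, 0, 0, 1, 0),
    "dog.jpg":    (0, 1, 1, 0, 1, 0, 0, 1),
    "cat_v2.jpg": (1, 0, 1, 1, 0, 1, 1, 0),  # edited copy of cat.jpg
}

def find_matches(query, index, max_distance=2):
    """Return (filename, distance) pairs whose fingerprint differs
    from the query in at most `max_distance` bits, nearest first."""
    hits = []
    for name, sig in index.items():
        dist = sum(a != b for a, b in zip(query, sig))
        if dist <= max_distance:
            hits.append((name, dist))
    return sorted(hits, key=lambda h: h[1])

query = (1, 0, 1, 1, 0, 0, 1, 0)  # fingerprint of the submitted image
print(find_matches(query, INDEX))
# [('cat.jpg', 0), ('cat_v2.jpg', 1)] — the edited copy still matches
```

The small distance threshold is what lets heavily edited copies match while genuinely different images (here, `dog.jpg`) stay out of the results.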
=Pixsy=
Pixsy reverse image search technology detects image matches{{Cite news|url=http://www.pixsy.com/find/|title=Find stolen images - Pixsy|work=Pixsy|access-date=2017-10-20|language=en-US}} on the public internet for images uploaded to the Pixsy platform.{{Cite news|url=https://theabundantartist.com/pixsy-review-find-fight-image-theft/|title=Pixsy.com review: Find & Fight Image Theft - Online Marketing for Artists -|date=2015-07-02|work=Online Marketing for Artists|access-date=2017-10-20|language=en-US}} New matches are automatically detected and alerts are sent to the user. For unauthorized commercial use of an image owner's work, Pixsy offers a compensation recovery service.{{Cite web|url=https://artlawjournal.com/pixsy-infringement-tool/|title=Pixsy: Find and Get Paid for Image Theft|date=2014-10-18|website=artlawjournal.com|language=en-US|access-date=2017-10-20}}{{Cite news|url=http://www.pixsy.com/resolve/|title=Resolve image theft - Pixsy|work=Pixsy|access-date=2017-10-20|language=en-US}} Pixsy partners with over 25 law firms and attorneys around the world to bring resolution for copyright infringement. Pixsy is the strategic image monitoring service for the Flickr platform and its users.{{Cite web|url=https://petapixel.com/2019/04/09/flickr-teams-up-with-pixsy-for-the-first-end-to-end-photo-theft-solution/|title=Flickr Teams Up with Pixsy to Get You Paid When Photos Are Stolen|website=petapixel.com|date=9 April 2019 |access-date=2019-12-12}}
= eBay =
eBay ShopBot uses reverse image search to find products from a photo uploaded by the user. eBay uses a ResNet-50 network for category recognition; image hashes are stored in Google Bigtable; Apache Spark jobs run on Google Cloud Dataproc for image hash extraction; and the image ranking service is deployed on Kubernetes.{{cite book|url=https://dl.acm.org/citation.cfm?id=3098162|work=acm.org|year=2017 |doi=10.1145/3097983.3098162 |last1=Yang |first1=Fan |last2=Kale |first2=Ajinkya |last3=Bubnov |first3=Yury |last4=Stein |first4=Leon |last5=Wang |first5=Qiaosong |last6=Kiapour |first6=Hadi |last7=Piramuthu |first7=Robinson |title=Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining |chapter=Visual Search at eBay |pages=2101–2110 |arxiv=1706.03154 |isbn=9781450348874 |s2cid=22367645 }}
= SK Planet =
SK Planet uses reverse image search to find related fashion items on its e-commerce website. It developed its vision encoder network based on TensorFlow's Inception-v3, chosen for its speed of convergence and generalization in production use. A recurrent neural network is used for multi-class classification, and fashion-product region-of-interest detection is based on Faster R-CNN. SK Planet's reverse image search system was built in less than 100 man-months.[https://arxiv.org/abs/1609.07859 Visual Fashion-Product Search at SK Planet]
= Alibaba =
Alibaba released the Pailitao application in 2014. Pailitao ({{zh|拍立淘}}, literally "shopping through a camera") allows users to search for items on Alibaba's e-commerce platforms by taking a photo of the query object. The Pailitao application uses a deep CNN model with branches for joint detection and feature learning to discover a detection mask and exact discriminative features without background disturbance. GoogLeNet V1 is employed as the base model for category prediction and feature learning.{{cite book|url=https://dl.acm.org/citation.cfm?id=3219820|work=acm.org|year=2018 |doi=10.1145/3219819.3219820 |last1=Zhang |first1=Yanhao |last2=Pan |first2=Pan |last3=Zheng |first3=Yun |last4=Zhao |first4=Kang |last5=Zhang |first5=Yingya |last6=Ren |first6=Xiaofeng |last7=Jin |first7=Rong |title=Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining |chapter=Visual Search at Alibaba |pages=993–1001 |arxiv=2102.04674 |isbn=9781450355520 |s2cid=50776405 }}{{cite web|url=https://medium.com/coinmonks/shopping-with-your-camera-visual-image-search-meets-e-commerce-at-alibaba-8551925746d0|title=Shopping With Your Camera: Visual Image Search Meets E-Commerce at Alibaba|work=Alibaba Tech|date=September 2020 }}
=Pinterest=
Pinterest acquired startup company VisualGraph in 2014 and introduced visual search on its platform.{{cite web|url=https://techcrunch.com/2014/01/06/pinterest-visualgraph/|title=Pinterest Acquires Image Recognition And Visual Search Startup VisualGraph|author=Josh Constine|publisher=AOL|work=TechCrunch|date=6 January 2014 }} In 2015, Pinterest published a paper at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining disclosing the architecture of the system. The pipeline uses Apache Hadoop, the open-source Caffe convolutional neural network framework, Cascading for batch processing, PinLater for messaging, and Apache HBase for storage. Image characteristics, including local features, deep features, salient color signatures and salient pixels, are extracted from user uploads. The system is operated on Amazon EC2 and requires only a cluster of 5 GPU instances to handle daily image uploads to Pinterest. By using reverse image search, Pinterest is able to extract visual features from fashion objects (e.g. shoes, dresses, glasses, bags, watches, pants, shorts, bikinis, earrings) and offer product recommendations that look similar.{{cite book|url=http://dl.acm.org/citation.cfm?id=2788621|work=acm.org|year=2015 |doi=10.1145/2783258.2788621 |last1=Jing |first1=Yushi |last2=Liu |first2=David |last3=Kislyuk |first3=Dmitry |last4=Zhai |first4=Andrew |last5=Xu |first5=Jiajing |last6=Donahue |first6=Jeff |last7=Tavel |first7=Sarah |title=Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining |chapter=Visual Search at Pinterest |pages=1889–1898 |isbn=9781450336642 |s2cid=1153609 }}{{cite web|url=https://engineering.pinterest.com/blog/building-scalable-machine-vision-pipeline|title=Building a scalable machine vision pipeline|work=Pinterest Engineering |archive-url=https://web.archive.org/web/20150906230722/https://engineering.pinterest.com/blog/building-scalable-machine-vision-pipeline|archive-date=2015-09-06}}
=JD.com=
JD.com disclosed the design and implementation of its real-time visual search system at the Middleware '18 conference. The peer-reviewed paper focuses on the algorithms used by JD's distributed hierarchical image feature extraction, indexing and retrieval system, which serves 300 million daily active users. The system was able to sustain 80 million updates to its database per hour when it was deployed in production in 2018.{{cite book|url=https://dl.acm.org/doi/10.1145/3284028.3284030|website=acm.org|year=2018 |doi=10.1145/3284028.3284030 |last1=Li |first1=Jie |last2=Liu |first2=Haifeng |last3=Gui |first3=Chuanghua |last4=Chen |first4=Jianyu |last5=Ni |first5=Zhenyuan |last6=Wang |first6=Ning |last7=Chen |first7=Yuan |title=Proceedings of the 19th International Middleware Conference Industry |chapter=The Design and Implementation of a Real Time Visual Search System on JD E-commerce Platform |pages=9–16 |arxiv=1908.07389 |isbn=9781450360166 |s2cid=53713854 }}
=Bing=
Microsoft Bing published the architecture of its reverse image search system at the KDD '18 conference. The paper states that a variety of features from a query image submitted by a user are used to describe its content, including deep neural network encoders, category recognition features, face recognition features, color features and duplicate detection features.{{cite book|url=https://dl.acm.org/doi/abs/10.1145/3219819.3219843|website=acm.org|year=2018 |doi=10.1145/3219819.3219843 |last1=Hu |first1=Houdong |last2=Wang |first2=Yan |last3=Yang |first3=Linjun |last4=Komlev |first4=Pavel |last5=Huang |first5=Li |last6=Chen |first6=Xi (Stephen) |last7=Huang |first7=Jiapei |last8=Wu |first8=Ye |last9=Merchant |first9=Meenaz |last10=Sacheti |first10=Arun |title=Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining |chapter=Web-Scale Responsive Visual Search at Bing |pages=359–367 |isbn=9781450355520 |s2cid=3427399 }}
=Amazon=
Amazon.com disclosed the architecture of a visual search engine for fashion and home products, named Amazon Shop the Look, in a paper published at the KDD '22 conference. The paper describes lessons learned by Amazon from deploying the system in a production environment, including image-synthesis-based data augmentation for retrieval performance optimization and accuracy improvement.[https://dl.acm.org/doi/10.1145/3534678.3539071 Amazon Shop the Look: A Visual Search System for Fashion and Home]
Research systems
Microsoft Research Asia's Beijing Lab published a paper in the Proceedings of the IEEE on the Arista-SS (Similar Search) and the Arista-DS (Duplicate Search) systems. Arista-DS only performs duplicate search algorithms such as principal component analysis on global image features to lower computational and memory costs. Arista-DS is able to perform duplicate search on 2 billion images with 10 servers but with the trade-off of not detecting near duplicates.[https://ieeexplore.ieee.org/document/6210348 Duplicate-Search-Based Image Annotation Using Web-Scale Data] Microsoft.
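The idea of duplicate search over reduced global features can be illustrated with a small sketch; the feature vectors and the reduction step below are hypothetical stand-ins for the paper's PCA-based pipeline, kept deliberately simple.

```python
# Illustrative sketch of exact-duplicate search on reduced global
# features: each image's feature vector is reduced to a compact,
# hashable key, and images sharing a key are grouped as duplicates.
# The reduction below is a stand-in for real dimensionality
# reduction such as PCA; the vectors are invented.

def reduce_features(vec, keep=2):
    """Stand-in for dimensionality reduction: keep the first `keep`
    components, rounded, as a hashable key."""
    return tuple(round(x, 1) for x in vec[:keep])

def group_duplicates(features):
    """Group image names by reduced feature key; return the groups
    that contain more than one image (i.e. the duplicates)."""
    groups = {}
    for name, vec in features.items():
        groups.setdefault(reduce_features(vec), []).append(name)
    return [names for names in groups.values() if len(names) > 1]

features = {
    "a.jpg":      [0.50, 0.20, 0.90],
    "a_copy.jpg": [0.50, 0.20, 0.10],  # identical after reduction
    "b.jpg":      [0.10, 0.80, 0.30],
}
print(group_duplicates(features))  # [['a.jpg', 'a_copy.jpg']]
```

Because grouping is by exact key equality, lookup is cheap (a hash table) but near duplicates whose reduced keys differ slightly are missed, mirroring the trade-off described above.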
Open-source implementations
In 2007, the Puzzle library was released under the ISC license. Puzzle is designed to find visually similar images for reverse image search, even after the images have been resized, re-compressed, recolored and/or slightly modified.[https://github.com/jedisct1/libpuzzle The Puzzle library]
The image-match open-source project was released in 2016. The project, licensed under the Apache License, implements a reverse image search engine written in Python.[https://github.com/ProvenanceLabs/image-match ProvenanceLabs / image-match]
Both the Puzzle library and the image-match projects use algorithms published at an IEEE ICIP conference.[https://ieeexplore.ieee.org/abstract/document/1038047 An image signature for any kind of image]
In 2019, a book published by O'Reilly documented how a simple reverse image search system can be built in a few hours. The book covers image feature extraction and similarity search, together with more advanced topics such as scalability using GPUs and tuning search accuracy.{{cite book |last=Koul |first=Anirudh |date= October 2019 |title=Practical Deep Learning for Cloud, Mobile, and Edge |chapter= Chapter 4. Building a Reverse Image Search Engine: Understanding Embeddings |url=https://www.oreilly.com/library/view/practical-deep-learning/9781492034858/ch04.html |publisher=O'Reilly Media |isbn=9781492034865 }} The code for the system was made available freely on GitHub.[https://github.com/PracticalDL/Practical-Deep-Learning-Book/ Practical-Deep-Learning-Book source repository]
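In the spirit of the embedding-based approach such tutorials describe, reverse image search reduces to nearest-neighbor lookup over feature vectors. In the sketch below, the vectors are hand-written stand-ins for CNN embeddings and the filenames are invented; a real system would compute embeddings with a pretrained network and use an approximate-nearest-neighbor index.

```python
# Minimal embedding-based reverse image search: images are feature
# vectors (hand-written here; produced by a CNN in practice), and
# search is brute-force nearest-neighbor by cosine similarity.

import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

DATABASE = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "forest.jpg": [0.1, 0.9, 0.1],
    "coast.jpg":  [0.7, 0.3, 0.1],
}

def reverse_search(query_vec, database, top_k=2):
    """Return the names of the top_k most similar database images."""
    scored = [(name, cosine_similarity(query_vec, vec))
              for name, vec in database.items()]
    return [name for name, _ in sorted(scored, key=lambda s: -s[1])][:top_k]

print(reverse_search([0.85, 0.15, 0.05], DATABASE))
# ['beach.jpg', 'coast.jpg'] — beach-like images rank first
```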
Reverse video search
The processing demands of reverse video search are extremely high, and there is no simple tool for uploading a video to find matching results. At present, no technology can reliably perform a reverse video search.{{cite web|url=https://www.searchenginejournal.com/how-reverse-video-search/464654/|title=How to Use Reverse Video Search (& Why It's Useful)|work=searchenginejournal|date=September 2022}}{{cite web|url=https://www.digitbin.com/how-find-source-of-video/|title=How to Find Source of a Video with Reverse Image Search?|work=DigitBin|date=October 2020}}
See also
{{Commonscat}}