DeepPeep
{{short description|Defunct search engine}}
DeepPeep was a search engine that aimed to crawl and index every database on the public Web.{{cite news|url=https://www.nytimes.com/2009/02/23/technology/internet/23search.html?th&emc=th|title=Exploring a 'Deep Web' That Google Can't Grasp|last=Wright|first=Alex|date=2009-02-22|work=The New York Times|pages=|access-date=2009-02-23}}{{cite web|url=https://www.lanline.de/forscher-wollen-versteckte-datenbanken-im-web-zuganglich-machen-html|title=DeepPeep: Forscher wollen verborgene Datenbanken im Web zugänglich machen|last=Franke|first=Susanne|date=2009-02-24|publisher=Comp. Ztg.|via=lanline.de|trans-title=DeepPeep: Researchers want to make hidden databases accessible on the web|access-date=2009-02-25}} Unlike traditional search engines, which crawl existing webpages and their hyperlinks, DeepPeep aimed to allow access to the so-called Deep web: World Wide Web content that is available only through, for instance, typed queries into databases.{{cite web|url=http://web20.telecomtv.com/pages/?newsid=44572&id=e9381817-0593-417a-8639-c4c53e2a2a10&view=news|title=DeepPeep lets light in to the hidden Web|last=Warwick|first=Martyn|date=2009-02-25|publisher=TelecomTV|access-date=2009-02-25}}{{Dead link|date=July 2011|fix-attempted=yes}} The project started at the University of Utah and was overseen by Juliana Freire, an associate professor at the university's School of Computing WebDB group.{{cite news|url=http://www.livemint.com/2010/03/09211503/Crawling-the-deep-web.html|title=Crawling the deep web|last=Sawant|first=Nimish|date=2010-03-09|work=LiveMint|publisher=Mint|access-date=2010-12-13}}{{cite web|url=http://webdb.cs.utah.edu/webdb/index.php/Main_Page|title=Main Page|last=|date=2008-10-04|publisher=University of Utah School of Computing|archive-url=https://web.archive.org/web/20090227092755/http://webdb.cs.utah.edu/webdb/index.php/Main_Page|archive-date=2009-02-27|work=WebDB|access-date=2009-02-23}} The goal was to make 90% of all WWW content accessible, according to Freire.{{cite 
news|url=http://www.pressetext.at/pte.mc?pte=090223012|title=Suchansätze dringen in die Tiefen des Internets: Erforschen von Datenbanken als wichtiger Schritt|last=Pichler|first=Thomas|date=2009-02-23|agency=Pressetext|language=de|trans-title=Search phrases penetrate the depths of the Internet: Researching databases as an important step|access-date=2009-02-23}}{{cite news|url=http://www.nachrichten.ch/detail/373727.htm |title=Suchansätze dringen in die Tiefen des Internets |date=2009-02-24 |work=nachrichten.ch |language=de |trans-title=Search phrases penetrate the depths of the Internet |access-date=2010-12-13 |url-status=dead|archive-url=https://web.archive.org/web/20110707003227/http://www.nachrichten.ch/detail/373727.htm |archive-date=2011-07-07 }} The project ran a beta search engine and was sponsored by the University of Utah and a $243,000 grant from the National Science Foundation.{{cite web|url=https://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0713637|title=Award Abstract #0713637: III-COR: Discovering and Organizing Hidden-Web Sources|last=|date=|publisher=National Science Foundation|work=NSF Award Search|access-date=2009-02-23}} It generated worldwide interest.{{cite web|url=http://www.lsdi.it/2009/03/05/esplorando-il-deepweb/|title=Esplorando il DeepWeb, i fondali della Rete dove Google non arriva|last=|date=2009-04-05|location=Italy|language=it|trans-title=Exploring the DeepWeb, the depths of the Net where Google does not arrive|type=This is an Italian translation of the New York Times article "Exploring a ‘Deep Web’ That Google Can’t Grasp" by Alex Wright|work=Liberta di Stampa Diritto all'Informazione|access-date=2009-03-05}}{{cite web|url=http://www.sg.hu/cikkek/65846/az_internet_melyet_kutatja_a_deeppeep|title=Az internet mélyét kutatja a DeepPeep|last=Sándor|first=Berta|date=2009-02-24|publisher=SG (Hungary)|language=hu|trans-title=The internet researching the depths of DeepPeep|doi=|work=sg.hu|access-date=2009-03-05}}{{cite 
web|url=http://www.dutchcowboys.nl/search/16400|title=Niet alles is te vinden met Google|last=|date=2009-03-04|publisher=Dutch Cowboys|language=nl|trans-title=Not everything can be found with Google|access-date=2009-03-05}}{{cite web|url=http://tech.92jn.com/net/200903/1968.html |title=探索谷歌尚未把持的'深层网络' |date=2006-03-03 |language=zh |trans-title=Explore Google's not yet dominated 'deep network' |type=This is a Chinese translation of the New York Times article "Exploring a ‘Deep Web’ That Google Can’t Grasp" by Alex Wright |archive-url=https://web.archive.org/web/20110707071035/http://tech.92jn.com/net/200903/1968.html |access-date=2009-03-05 |url-status=dead|archive-date=2011-07-07 }}{{cite news|url=http://www.ilmessaggero.it/articolo.php?id=47843&sez=HOME_SCIENZA |title=Sfida al deep web: la Kosmix prova a svelare le pagine nascoste di internet |date=2009-02-23 |newspaper=Messagg. |trans-title=Challenge to the deep web: Kosmix tries to reveal the hidden pages of the internet |archive-url=https://archive.today/20120804003555/http://www.ilmessaggero.it/articolo.php?id=47843&sez=HOME_SCIENZA |archive-date=2012-08-04 |access-date=2010-12-13 |url-status=dead}}
How it works
Similar to Google, Yahoo, and other search engines, DeepPeep allowed users to type in a keyword and returned a list of links and databases with information related to that keyword.
However, what separated DeepPeep from other search engines is that it used the [https://ache.readthedocs.io/ ACHE crawler], '[https://www.sciencedirect.com/science/article/pii/002002557490005X Hierarchical Form Identification]', '[https://hal.science/hal-01934281/file/ContextQualityExpressive.pdf Context-Aware Form Clustering]' and 'LabelEx' to locate, analyze, and organize web forms so that users could access them easily.{{Cite book|last1=Barbosa|first1=Luciano|last2=Nguyen|first2=Hoa|last3=Nguyen|first3=Thanh|last4=Pinnamaneni|first4=Ramesh|last5=Freire|first5=Juliana|title=Proceedings of the 2010 ACM SIGMOD International Conference on Management of data |chapter=Creating and exploring web form repositories |date=2010-01-01|series=SIGMOD '10|location=New York, NY, USA|publisher=ACM|pages=1175–1178|doi=10.1145/1807167.1807311|isbn=9781450300322|s2cid=15471440 }}
= ACHE Crawler =
The [https://ache.readthedocs.io/ ACHE Crawler] is used to gather links and employs a learning strategy that increases the rate at which relevant links are collected as the crawl proceeds. Like other focused crawlers, it gathers Web pages that have specific properties or keywords; what sets the ACHE Crawler apart is that it also includes a page classifier, which filters out irrelevant pages within a domain, and a link classifier, which ranks each link by its relevance to a topic. As a result, the ACHE Crawler downloads the most relevant links first and saves resources by not downloading irrelevant content.{{Cite web|url=https://github.com/ViDA-NYU/ache|title=ViDA-NYU/ache|website=GitHub|access-date=2016-11-06}}
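The focused-crawling strategy described above can be sketched in a few lines. The toy "web", topic vocabulary, and word-overlap scoring below are illustrative assumptions, not ACHE's actual classifiers; the point is the interplay of a link classifier (which orders the frontier) and a page classifier (which decides what is kept):

```python
import heapq

# Hypothetical in-memory "web" standing in for real pages (illustrative data).
PAGES = {
    "seed": {"text": "airfare search form", "links": ["a", "b"]},
    "a": {"text": "cheap flight database query form", "links": ["c"]},
    "b": {"text": "celebrity gossip", "links": []},
    "c": {"text": "airline ticket search", "links": []},
}

TOPIC = {"flight", "airfare", "airline", "ticket", "form", "search"}

def page_relevance(text):
    """Toy page classifier: fraction of topic vocabulary present in the page."""
    return len(set(text.split()) & TOPIC) / len(TOPIC)

def link_score(url):
    """Toy link classifier: here it just scores the target page's text;
    a real classifier would use anchor text and surrounding features."""
    return page_relevance(PAGES[url]["text"])

def crawl(seed, threshold=0.1):
    """Frontier is a priority queue ordered by the link classifier's score;
    visited pages are kept only if the page classifier deems them relevant."""
    frontier = [(-link_score(seed), seed)]
    seen, kept = {seed}, []
    while frontier:
        _, url = heapq.heappop(frontier)      # most promising link first
        page = PAGES[url]
        if page_relevance(page["text"]) >= threshold:
            kept.append(url)                  # page classifier accepts it
        for nxt in page["links"]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-link_score(nxt), nxt))
    return kept
```

Crawling from `"seed"` visits the high-scoring pages first and discards the off-topic page `"b"`, which is the resource saving the text describes.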
= Hierarchical Form Identification =
In order to further eliminate irrelevant links and search results, DeepPeep uses the [https://www.sciencedirect.com/science/article/pii/002002557490005X Hierarchical Form Identification (HIFI)] framework, which classifies links and search results based on a website's structure and content. Unlike other forms of classification that rely solely on web form labels, [https://www.sciencedirect.com/science/article/pii/002002557490005X HIFI] uses both the structure and the content of the web form. Combining these two classifiers, HIFI organizes web forms into a hierarchy that ranks each web form's relevance to the target keyword.{{Cite journal|last=Duygulu|first=Pinar|editor-first1=Daniel P. |editor-first2=Jiangying |editor-last1=Lopresti |editor-last2=Zhou |date=1999-12-22|title=Hierarchical representation of form documents for identification and retrieval|url=https://www.deepdyve.com/lp/spie/hierarchical-representation-of-form-documents-for-identification-and-TCutkdAHSf|journal=Proceedings of SPIE|series=Document Recognition and Retrieval VII |volume=3967|issue=1|page=128 |doi=10.1117/12.373486|bibcode=1999SPIE.3967..128D |s2cid=28128295 |issn=0277-786X|url-access=subscription}}
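The two-level idea of classifying by structure first and content second can be sketched as follows. The feature names, thresholds, and domain vocabularies are illustrative assumptions, not HIFI's actual features:

```python
# Minimal sketch of hierarchical form classification: level 1 uses the
# form's structure (is it a searchable query form at all?), level 2 uses
# its content (which domain does it belong to?).

def is_searchable(form):
    """Structural classifier (toy heuristic): keyword-style search forms
    tend to have a free-text field, no password field, and few inputs."""
    return (form["text_fields"] >= 1
            and form["password_fields"] == 0
            and form["total_fields"] <= 5)

def domain_of(form, domain_terms):
    """Content classifier: pick the domain whose vocabulary overlaps most
    with the form's visible text."""
    words = set(form["text"].split())
    return max(domain_terms, key=lambda d: len(words & domain_terms[d]))

def classify(form, domain_terms):
    """Two-level hierarchy: structure first, content second."""
    if not is_searchable(form):
        return None        # rejected at the structural level
    return domain_of(form, domain_terms)
```

A flight-search form would pass the structural test and land in an "airfare" domain, while a login form is rejected at the first level regardless of its text.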
= Context-Aware Clustering =
When there is no domain of interest, or the specified domain has multiple meanings, DeepPeep must separate the web forms and cluster them into similar domains. The search engine uses context-aware clustering to group similar links in the same domain, modeling each web form as a set of hyperlinks and using its context for comparison. Unlike other techniques that require complicated label extraction and manual pre-processing of web forms, context-aware clustering is performed automatically and uses metadata to handle web forms that are content-rich and contain multiple attributes.
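Grouping forms by the similarity of their surrounding context can be sketched with a set-overlap measure and a single greedy pass. The Jaccard measure, the threshold, and the data are illustrative assumptions rather than DeepPeep's published algorithm:

```python
# Minimal sketch of context-based clustering: each form is represented by
# the set of terms from its context, and forms whose term sets overlap
# enough are placed in the same cluster.

def jaccard(a, b):
    """Similarity of two term sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_forms(forms, threshold=0.3):
    """forms: list of (form_id, set_of_context_terms). Each form joins the
    first cluster whose representative is similar enough; otherwise it
    starts a new cluster."""
    clusters = []  # list of (representative_terms, [form_ids])
    for fid, terms in forms:
        for rep, members in clusters:
            if jaccard(terms, rep) >= threshold:
                members.append(fid)
                break
        else:
            clusters.append((terms, [fid]))
    return [members for _, members in clusters]
```

Two flight-search forms sharing terms like "flight" and "depart" end up in one cluster, while a book-search form starts its own.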
= LabelEx =
DeepPeep further extracts metadata from these pages to improve the ranking of links and databases, using LabelEx, an approach for the automatic decomposition and extraction of metadata. Metadata here is data from web links that provides information about other domains. LabelEx identifies element-label mappings and uses them to extract metadata accurately, unlike conventional approaches that relied on manually specified extraction rules.
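The core of element-label mapping can be illustrated with a deliberately simplified rule: pair each form input with the label text that precedes it. The flattened `(kind, value)` representation of a form is an assumption for this sketch; LabelEx itself learns these mappings rather than applying a fixed rule:

```python
# Illustrative sketch of element-label mapping: walk a form's elements in
# document order and attach each input to the nearest preceding label.

def map_labels(elements):
    """elements: ordered list of ('label', text) or ('input', name) tuples
    as they appear in a form. Returns {input_name: label_text_or_None}."""
    mapping, last_label = {}, None
    for kind, value in elements:
        if kind == "label":
            last_label = value
        elif kind == "input":
            mapping[value] = last_label
            last_label = None  # a label is consumed by at most one input
    return mapping
```

An input with no preceding label maps to `None`, which is exactly the kind of gap that makes naive rule-based extraction brittle and motivates a learned approach.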
= Ranking =
When the search results appear after the user has entered a keyword, DeepPeep ranks the links based on three features: term content, number of backlinks, and PageRank. The term content score is determined by the content of the web link and its relevance to the keyword. Backlinks are incoming hyperlinks that point to a website from other sites. PageRank ranks websites in search engine results by counting the number and quality of links to a website to estimate its importance. PageRank and backlink information are obtained from outside sources such as Google, Yahoo, and Bing.
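Combining the three signals into a single ranking can be sketched as a weighted sum. The weights, the linear combination, and the normalization of backlink counts are illustrative assumptions, not DeepPeep's actual formula:

```python
# Sketch of ranking by term content, backlinks, and PageRank combined.

def rank(results, w_content=0.5, w_backlinks=0.3, w_pagerank=0.2):
    """results: list of dicts with 'url', 'content_score' (0..1),
    'backlinks' (a raw count), and 'pagerank' (0..1). Backlink counts are
    normalized by the maximum so all three signals share a 0..1 scale."""
    max_bl = max((r["backlinks"] for r in results), default=0) or 1
    def score(r):
        return (w_content * r["content_score"]
                + w_backlinks * r["backlinks"] / max_bl
                + w_pagerank * r["pagerank"])
    return sorted(results, key=score, reverse=True)
```

With these example weights, a highly relevant page can outrank a page that has more backlinks and a higher PageRank but weak term content.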
Beta Launch
DeepPeep Beta was launched covering only seven domains: auto, airfare, biology, book, hotel, job, and rental. Under these seven domains, DeepPeep offered access to 13,000 Web forms.{{Cite news|url=https://www.theguardian.com/technology/2009/nov/26/dark-side-internet-freenet|title=The dark side of the internet|last=Beckett|first=Andy|date=2009-11-25|newspaper=The Guardian|language=en-GB|issn=0261-3077|access-date=2016-11-06}} The website could be accessed at DeepPeep.org, but it has been inactive since the beta version was taken down.
References
{{reflist|30em}}
External links
- {{URL|http://www.deeppeep.org/|DeepPeep.org site}}, found dead in November 2016, with the domain appearing as a Register.com placeholder page. Last archived version: {{Cite web |url=http://www.deeppeep.org/ |title=DeepPeep: Discover the hidden web |access-date=2009-02-23 |archive-url=https://web.archive.org/web/20120509073423/http://www.deeppeep.org/ |archive-date=2012-05-09 |url-status=bot: unknown }}.