Google Books Ngram Viewer

{{Short description|Online search engine}}

[[File:Example of a google Ngram.jpg|thumb]]

The '''Google Books Ngram Viewer''' is an online search engine that charts the frequencies of any set of search strings using a yearly count of n-grams found in printed sources published between 1500 and 2022{{Cite journal

|last1=Michel|first1=Jean-Baptiste

|last2=Shen|first2=Yuan K.

|last3=Aiden|first3=Aviva P.

|last4=Veres|first4=Adrian

|last5=Gray|first5=Matthew K.

|last6=((The Google Books Team))

|last7=Pickett|first7=Joseph P.

|last8=Hoiberg|first8=Dale|author-link8=Dale Hoiberg

|last9=Clancy|first9=Dan

|last10=Norvig|first10=Peter|author-link10=Peter Norvig

|last11=Orwant|first11=Jon

|last12=Pinker|first12=Steven|author-link12=Steven Pinker

|last13=Nowak|first13=Martin A.

|last14=Aiden|first14=Erez L.|author-link14=Erez Lieberman Aiden

|year=2011

|title=Quantitative Analysis of Culture Using Millions of Digitized Books

|journal=Science|volume=331|issue=6014|pages=176–182

|doi=10.1126/science.1199644

|pmid=21163965

|pmc=3279742

}}{{cite web

|last1=Bosker|first1=Bianca |author-link1=Bianca Bosker

|date=2010-12-17

|title=Google Ngram Database Tracks Popularity Of 500 Billion Words

|url=http://www.huffingtonpost.com/2010/12/17/google-ngram-database-tra_n_798150.html

|website=The Huffington Post

|accessdate=2012-05-31

}}{{cite web

|last=Whitney|first=Lance

|date=2010-12-17

|title=Google's Ngram Viewer: A time machine for wordplay

|url=http://news.cnet.com/8301-1023_3-20025979-93.html

|archive-url=https://web.archive.org/web/20140123012004/http://news.cnet.com/8301-1023_3-20025979-93.html

|archive-date=2014-01-23

|publisher=Cnet.com

|accessdate=2012-05-31

}}{{cite tweet

|title=The Google Books Ngram Viewer has now been updated with fresh data through 2019

|user=searchliaison

|number=1282797986863386624

|access-date=2020-08-11

|language=en}} in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish.{{cite web

|date=2011-08-22

|title=Google Books Ngram Viewer - University at Buffalo Libraries

|url=http://libweb.lib.buffalo.edu/pdp/index.asp?ID=497

|archive-url=https://web.archive.org/web/20130702054244/http://libweb.lib.buffalo.edu/pdp/index.asp?ID=497

|archive-date=2013-07-02

|publisher=Lib.Buffalo.edu

|accessdate=2012-05-31

}}

There are also some specialized English corpora, such as American English, British English, and English Fiction.{{cite web|title=Google Books Ngram Viewer - Information|url=https://books.google.com/ngrams/info|accessdate=2024-06-01}}

The program can search for a word or a phrase, including misspellings or gibberish. The n-grams are matched with the text within the selected corpus, and if found in 40 or more books, are then displayed as a [https://books.google.com/ngrams graph]. The Google Books Ngram Viewer supports searches for parts of speech and wildcards. It is routinely used in research.{{cite journal

|last1=Greenfield|first1=Patricia M.

|year=2013

|title=The Changing Psychology of Culture From 1800 Through 2000

|url=http://journals.sagepub.com/doi/10.1177/0956797613479387

|journal=Psychological Science|volume=24|issue=9|pages=1722–1731

|doi=10.1177/0956797613479387|pmid=23925305|s2cid=6123553|issn=0956-7976

|url-access=subscription}}{{cite journal

|last1=Younes|first1=Nadja

|last2=Reips|first2=Ulf-Dietrich|author-link2=Ulf-Dietrich Reips

|year=2018

|title=The changing psychology of culture in German-speaking countries: A Google Ngram study

|url=https://onlinelibrary.wiley.com/doi/10.1002/ijop.12428

|journal=International Journal of Psychology|volume=53|pages=53–62

|doi=10.1002/ijop.12428|pmid=28474338 |s2cid=7440938

}}

== History ==

During development, Google teamed up with two Harvard researchers, Jean-Baptiste Michel and Erez Lieberman Aiden, and quietly released the program on December 16, 2010.{{Cite web

|title=In 500 Billion Words, New Window on Culture

|date=2010-12-16

|url=https://www.nytimes.com/2010/12/17/books/17words.html

|work=The New York Times

|accessdate=2024-06-01

}}

Before the release, it was difficult to quantify the rate of linguistic change because no database had been designed for that purpose, said Steven Pinker,{{cite web

|url=https://www.youtube.com/watch?v=5S1d3cNge24&t=56m45s

|title=Steven Pinker – The Stuff of Thought: Language as a window into human nature

|publisher=Royal Society of Arts

|date=2010-02-04

|via=YouTube

|accessdate=2024-06-02

}}

a well-known linguist and one of the co-authors of the Science paper published the same day. The Google Books Ngram Viewer was developed in the hope of opening a new window onto quantitative research in the humanities, and from the very beginning its publicly available database contained 500 billion words from 5.2 million books.

The intended audience was scholarly, but the Google Books Ngram Viewer made it possible for anyone with a computer to easily graph the diachronic change in the use of words and phrases. Lieberman told The New York Times that the developers aimed to give even children the ability to browse cultural trends throughout history. In the Science paper, Lieberman and his collaborators called this method of high-volume data analysis of digitized texts "culturomics".

== Usage ==

Search terms are delimited by commas, and each comma-separated term is looked up in the database as an n-gram (for example, "nursery school" is a 2-gram or bigram). The Ngram Viewer then returns a plotted line chart. Because of limitations on the size of the Ngram database, only n-grams found in at least 40 books are indexed.
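Besides the web form, the chart data can also be retrieved programmatically. The following Python sketch queries the JSON endpoint that the Viewer's own page calls internally; that endpoint is unofficial and undocumented, so the parameter names, the corpus label, and the response fields shown here are assumptions based on observed behavior and may change without notice.

<syntaxhighlight lang="python">
import requests  # third-party HTTP client: pip install requests

# Hypothetical sketch: query the unofficial JSON endpoint behind the
# Ngram Viewer page. Parameter names and response shape are assumptions.
params = {
    "content": "nursery school,kindergarten",  # comma-delimited n-grams
    "year_start": 1800,
    "year_end": 2019,
    "corpus": "en-2019",  # assumed label for the English 2019 corpus
    "smoothing": 3,       # moving average over +/- 3 years (Viewer default)
}
resp = requests.get("https://books.google.com/ngrams/json", params=params)
resp.raise_for_status()

for series in resp.json():
    # Each element is assumed to carry the n-gram string and its yearly
    # relative frequencies under "timeseries".
    print(series["ngram"], series["timeseries"][:3])
</syntaxhighlight>

If the endpoint behaves as assumed, the returned values correspond to the relative frequencies plotted by the web chart.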

== Limitations ==

The data sets of the Ngram Viewer have been criticized for their reliance upon inaccurate optical character recognition (OCR) and for including large numbers of incorrectly dated and categorized texts.{{cite web

|last=Nunberg|first=Geoff|author-link=Geoffrey Nunberg

|title=Humanities research with the Google Books corpus

|date=2010-12-16

|url=http://languagelog.ldc.upenn.edu/nll/?p=2847

|url-status=live

|archive-url=https://web.archive.org/web/20160310035741/http://languagelog.ldc.upenn.edu/nll/?p=2847

|archive-date=2016-03-10

|accessdate=2015-04-19

}}

Because of these errors, and because the corpora are not controlled for bias{{cite journal

|last1=Pechenick|first1=Eitan Adam

|last2=Danforth|first2=Christopher M.

|last3=Dodds|first3=Peter Sheridan

|editor1-last=Barrat|editor1-first=Alain

|date=2015-10-07

|title=Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution

|journal=PLOS ONE|volume=10|issue=10|pages=e0137041

|doi=10.1371/journal.pone.0137041|pmid=26445406|arxiv=1501.00960|pmc=4596490|bibcode=2015PLoSO..1037041P|doi-access=free

}}

(such as the increasing amount of scientific literature, which causes other terms to appear to decline in popularity), care must be taken when using the corpora to study language or test theories.{{Cite magazine

|url=https://www.wired.com/2015/10/pitfalls-of-studying-language-with-google-ngram/

|title=The Pitfalls of Using Google Ngram to Study Language

|last=Zhang|first=Sarah

|magazine=WIRED

|access-date=2017-05-24

|language=en-US

}}

Furthermore, the data sets may not reflect general linguistic or cultural change and can only hint at such effects, because they include no metadata such as publication date,{{dubious |reason=The results are indexed by date. Does the cited source really say there is no information about publication dates? |date=August 2024}} author, length, or genre, an omission intended to avoid potential copyright infringement.{{Cite journal

|last=Koplenig|first=Alexander

|date=2015-09-02

|publication-date=2017-04-01

|title=The impact of lacking metadata for the measurement of cultural and linguistic change using the Google Ngram data sets—Reconstructing the composition of the German corpus in times of WWII

|url=https://academic.oup.com/dsh/article-abstract/32/1/169/2957375/The-impact-of-lacking-metadata-for-the-measurement

|url-access=subscription

|journal=Digital Scholarship in the Humanities|volume=32|issue=1|pages=169–188

|publisher=Oxford Academic

|doi=10.1093/llc/fqv037|issn=2055-7671

}}

Systematic errors, such as the confusion of s and f in pre-19th-century texts (due to the use of ſ, the long s, which is similar in appearance to f), can introduce systematic bias. Although the Google Books team claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that the frequencies reported for languages such as Chinese may be accurate only from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.{{cite web

|title=Google n-grams and pre-modern Chinese

|url=http://digitalsinology.org/google-ngrams-pre-modern-chinese/

|website=digitalsinology.org

|accessdate=2015-04-19

}}{{cite web

|title=When n-grams go bad

|url=http://digitalsinology.org/when-n-grams-go-bad/

|website=digitalsinology.org

|accessdate=2015-04-19

}}{{better source |reason=These two citations appear to reference an anonymously published blog that has little content on the site (the archives show only about 15 articles) and accepts user-generated content. Some articles identify an author, but these do not, and the site itself does not identify who or what organization publishes it. The site's About page just provides a Gmail address as contact information (with no name). |date=August 2024}}

Guidelines for research with Google Ngram data have been proposed that address some of the issues discussed above.{{Cite journal

|last1=Younes|first1=Nadja

|last2=Reips|first2=Ulf-Dietrich|author-link2=Ulf-Dietrich Reips

|date=2019-03-22

|title=Guideline for improving the reliability of Google Ngram studies: Evidence from religious terms

|journal=PLOS One |volume=14|issue=3|pages=e0213554

|language=en

|doi=10.1371/journal.pone.0213554|issn=1932-6203|pmc=6430395|pmid=30901329|bibcode=2019PLoSO..1413554Y|doi-access=free

}}


== References ==

{{reflist|2}}

== Bibliography ==

* {{cite journal |first1=Yuri |last1=Lin |first2=Jean-Baptiste |last2=Michel |first3=Erez Lieberman |last3=Aiden |author-link3=Erez Lieberman Aiden |first4=Jon |last4=Orwant |first5=Will |last5=Brockman |first6=Slav |last6=Petrov |display-authors=1 |date=July 2012 |title=Syntactic Annotations for the Google Books Ngram Corpus |url=http://aclweb.org/anthology/P/P12/P12-3029.pdf |format=PDF |journal=Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics |series=Demo Papers |location=Jeju, Republic of Korea |publisher=Association for Computational Linguistics |volume=2 |pages=169–174 |id=2390499 |quote=Whitepaper presenting the 2012 edition of the Google Books Ngram Corpus }}