Timeline of machine learning

{{short description|None}}

{{Update|date=August 2021}}

This page is a timeline of machine learning. Major discoveries, achievements, milestones and other notable events in machine learning are included.

== Overview ==

class="wikitable sortable"
DecadeSummary
pre-
1950
Statistical methods are discovered and refined.
1950sPioneering machine learning research is conducted using simple algorithms.
1960sBayesian methods are introduced for probabilistic inference in machine learning.{{cite journal |last1=Solomonoff |first1=R.J. |title=A formal theory of inductive inference. Part II |journal=Information and Control |date=June 1964 |volume=7 |issue=2 |pages=224–254 |doi=10.1016/S0019-9958(64)90131-7 |doi-access= }}
1970s'AI winter' caused by pessimism about machine learning effectiveness.
1980sRediscovery of backpropagation causes a resurgence in machine learning research.
1990sWork on Machine learning shifts from a knowledge-driven approach to a data-driven approach. Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions{{snd}} or "learn"{{snd}} from the results.{{harvnb|Marr|2016}}. Support-vector machines (SVMs) and recurrent neural networks (RNNs) become popular.{{cite journal |last1=Siegelmann |first1=H.T. |last2=Sontag |first2=E.D. |title=On the Computational Power of Neural Nets |journal=Journal of Computer and System Sciences |date=February 1995 |volume=50 |issue=1 |pages=132–150 |doi=10.1006/jcss.1995.1013 |doi-access=free }} The fields of computational complexity via neural networks and super-Turing computation started.{{cite journal |last1=Siegelmann |first1=Hava |title=Computation Beyond the Turing Limit |journal=Journal of Computer and System Sciences |volume=238 |issue=28 |year=1995 |pages=632–637 |doi=10.1126/science.268.5210.545 |pmid=17756722 |bibcode=1995Sci...268..545S |s2cid=17495161 }}
2000sSupport-Vector Clustering{{cite journal |first1=Asa |last1=Ben-Hur |first2=David |last2=Horn |first3=Hava |last3=Siegelmann |first4=Vladimir|last4=Vapnik |title=Support vector clustering |journal=Journal of Machine Learning Research|volume=2 |year=2001 |pages=51–86}} and other kernel methods{{cite journal |last1=Hofmann |first1=Thomas |first2=Bernhard |last2=Schölkopf |first3=Alexander J. |last3=Smola |title=Kernel methods in machine learning |journal=The Annals of Statistics |volume=36 |issue=3 |year=2008 |pages=1171–1220 |jstor=25464664 |arxiv=math/0701907| doi=10.1214/009053607000000677 |doi-access=free }} and unsupervised machine learning methods become widespread.{{cite journal |first1=James |last1=Bennett |first2=Stan |last2=Lanning |title=The netflix prize |journal=Proceedings of KDD Cup and Workshop 2007 |date=2007 |url=https://www.cs.uic.edu/~liub/KDD-cup-2007/NetflixPrize-description.pdf }}
2010sDeep learning becomes feasible, which leads to machine learning becoming integral to many widely used software services and applications. Deep learning spurs huge advances in vision and text processing.
2020s

|Generative AI leads to revolutionary models, creating a proliferation of foundation models both proprietary and open source, notably enabling products such as ChatGPT (text-based) and Stable Diffusion (image based). Machine learning and AI enter the wider public consciousness. The commercial potential of AI based on machine learning causes large increases in valuations of companies linked to AI.

== Timeline ==

{{Incomplete list|date=December 2022}}

class="wikitable sortable"
YearEvent typeCaptionEvent
|-
| 1763 || Discovery || The Underpinnings of Bayes' Theorem || Thomas Bayes's work ''An Essay Towards Solving a Problem in the Doctrine of Chances'' is published two years after his death, having been amended and edited by a friend of Bayes, Richard Price.{{cite journal|last1=Bayes|first1=Thomas|title=An Essay Towards Solving a Problem in the Doctrine of Chance|journal=Philosophical Transactions|date=1 January 1763|volume=53|pages=370–418|doi=10.1098/rstl.1763.0053|jstor=105741|doi-access=free}} The essay presents work which underpins Bayes' theorem.
|-
| 1805 || Discovery || Least Squares || Adrien-Marie Legendre describes the "méthode des moindres carrés", known in English as the least squares method.{{cite book|last1=Legendre|first1=Adrien-Marie|title=Nouvelles méthodes pour la détermination des orbites des comètes|date=1805|publisher=Firmin Didot|location=Paris|page=viii|url=https://archive.org/details/bub_gb_FRcOAAAAQAAJ|accessdate=13 June 2016|language=French}} The least squares method is used widely in data fitting.
|-
| 1812 || || Bayes' Theorem || Pierre-Simon Laplace publishes ''Théorie Analytique des Probabilités'', in which he expands upon the work of Bayes and defines what is now known as Bayes' Theorem.{{cite web|last1=O'Connor|first1=J J|last2=Robertson|first2=E F|title=Pierre-Simon Laplace|url=http://www-history.mcs.st-and.ac.uk/Biographies/Laplace.html|publisher=School of Mathematics and Statistics, University of St Andrews, Scotland|accessdate=15 June 2016}}
|-
| 1843 || Visionary || Visionary Pioneer || Ada Lovelace's most significant relationship was with Charles Babbage, the inventor of the Analytical Engine, which is considered the first conceptual blueprint for a modern computer.{{cite web|title=Ada Lovelace|date=11 September 2024 |url=https://aivips.org/ada-lovelace/|publisher=AI VIPs}} Lovelace's vision extended beyond Babbage's own understanding of his machine. She saw the Analytical Engine as more than a calculator; she believed it could process and manipulate any form of symbolic data, such as music or text. This early vision of machines processing more than just numbers laid the groundwork for the development of symbolic AI and machine learning.{{cite web|last1=Zwolak|first1=Justyna|title=Ada Lovelace: The World's First Computer Programmer Who Predicted Artificial Intelligence|work=NIST |date=22 March 2023 |url=https://www.nist.gov/blogs/taking-measure/ada-lovelace-worlds-first-computer-programmer-who-predicted-artificial|publisher=National Institute of Standards and Technology}} Her contributions included what is now considered the first algorithm designed to be executed by a machine, making her the world's first computer programmer.{{cite web|last1=Gregersen|first1=Erik|title=Ada Lovelace: The First Computer Programmer|url=https://www.britannica.com/story/ada-lovelace-the-first-computer-programmer|publisher=Encyclopaedia Britannica}} Lovelace's understanding of the computational potential of machines continues to influence modern technologies like artificial intelligence.
|-
| 1913 || Discovery || Markov Chains || Andrey Markov first describes techniques he used to analyse a poem. The techniques later become known as Markov chains.{{cite journal |last1=Langston |first1=Nancy |title=Mining the Boreal North |journal=American Scientist |date=2013 |volume=101 |issue=2 |page=1 |doi=10.1511/2013.101.1 |quote=Delving into the text of Alexander Pushkin's novel in verse Eugene Onegin, Markov spent hours sifting through patterns of vowels and consonants. On January 23, 1913, he summarized his findings in an address to the Imperial Academy of Sciences in St. Petersburg. His analysis did not alter the understanding or appreciation of Pushkin's poem, but the technique he developed—now known as a Markov chain—extended the theory of probability in a new direction. }}
|-
| 1943 || Discovery || Artificial Neuron || Warren McCulloch and Walter Pitts develop a mathematical model that imitates the functioning of a biological neuron, the artificial neuron, which is considered to be the first neural model invented.{{cite journal |last1=McCulloch |first1=Warren S. |last2=Pitts |first2=Walter |title=A logical calculus of the ideas immanent in nervous activity |journal=The Bulletin of Mathematical Biophysics |date=December 1943 |volume=5 |issue=4 |pages=115–133 |doi=10.1007/BF02478259 }}
|-
| 1950 || || Turing's Learning Machine || Alan Turing proposes a 'learning machine' that could learn and become artificially intelligent. Turing's specific proposal foreshadows genetic algorithms.{{cite journal |last1=Turing |first1=A. M. |title=I.—COMPUTING MACHINERY AND INTELLIGENCE |journal=Mind |date=1 October 1950 |volume=LIX |issue=236 |pages=433–460 |doi=10.1093/mind/LIX.236.433 }}
|-
| 1951 || || First Neural Network Machine || Marvin Minsky and Dean Edmonds build the first neural network machine, able to learn, the SNARC.{{Harvnb|Crevier|1993|pp=34–35}} and {{Harvnb|Russell|Norvig|2003|p=17}}.
|-
| 1952 || || Machines Playing Checkers || Arthur Samuel joins IBM's Poughkeepsie Laboratory and begins working on some of the first machine learning programs, first creating programs that play checkers.{{cite journal |last1=McCarthy |first1=J. |last2=Feigenbaum |first2=E. |title=In memoriam—Arthur Samuel (1901–1990) |journal=AI Magazine |date=1 September 1990 |volume=11 |issue=3 |pages=10–11 |url=https://dl.acm.org/doi/10.5555/95618.95622 }}
|-
| 1957 || Discovery || Perceptron || Frank Rosenblatt invents the perceptron while working at the Cornell Aeronautical Laboratory.{{cite journal |last1=Rosenblatt |first1=F. |title=The perceptron: A probabilistic model for information storage and organization in the brain. |journal=Psychological Review |date=1958 |volume=65 |issue=6 |pages=386–408 |doi=10.1037/h0042519 |pmid=13602029 |citeseerx=10.1.1.588.3775 |s2cid=12781225 }} The invention of the perceptron generated a great deal of excitement and was widely covered in the media.{{cite magazine|last1=Mason|first1=Harding|last2=Stewart|first2=D|last3=Gill|first3=Brendan|title=Rival|url=http://www.newyorker.com/magazine/1958/12/06/rival-2|accessdate=5 June 2016|magazine=The New Yorker|date=6 December 1958}}
|-
| 1963 || Achievement || Machines Playing Tic-Tac-Toe || Donald Michie creates a 'machine' consisting of 304 match boxes and beads, which uses reinforcement learning to play Tic-tac-toe (also known as noughts and crosses).{{cite web|last1=Child|first1=Oliver|title=Menace: the Machine Educable Noughts And Crosses Engine|url=http://chalkdustmagazine.com/features/menace-machine-educable-noughts-crosses-engine/#more-3326|website=Chalkdust Magazine |date=13 March 2016|accessdate=16 Jan 2018}}
|-
| 1967 || || Nearest Neighbor || The nearest neighbor algorithm is created, marking the start of basic pattern recognition. The algorithm is used to map routes.
|-
| 1969 || || Limitations of Neural Networks || Marvin Minsky and Seymour Papert publish their book ''Perceptrons'', describing some of the limitations of perceptrons and neural networks. The interpretation that the book shows neural networks to be fundamentally limited is seen as a hindrance to research into neural networks.{{cite web|last1=Cohen|first1=Harvey|title=The Perceptron|url=http://harveycohen.net/image/perceptron.html|accessdate=5 June 2016}}
|-
| 1970 || || Automatic Differentiation (Backpropagation) || Seppo Linnainmaa publishes the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.{{cite thesis |authorlink1=Seppo Linnainmaa |first1=Seppo |last1=Linnainmaa |year=1970 |title=Algoritmin kumulatiivinen pyoristysvirhe yksittaisten pyoristysvirheiden taylor-kehitelmana |trans-title=The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors |language=Finnish |pages=6–7 |url=https://people.idsia.ch/~juergen/linnainmaa1970thesis.pdf }}{{cite journal |first=Seppo |last=Linnainmaa |authorlink=Seppo Linnainmaa |year=1976 |title=Taylor expansion of the accumulated rounding error |journal=BIT Numerical Mathematics |volume=16 |issue=2 |pages=146–160 |doi=10.1007/BF01931367 |s2cid=122357351 }} This corresponds to the modern version of backpropagation, but is not yet named as such.{{cite journal |last=Griewank |first=Andreas |year=2012 |title=Who Invented the Reverse Mode of Differentiation? |journal=Documenta Mathematica, Extra Volume ISMP |series=Documenta Mathematica Series |volume=6 |pages=389–400|doi=10.4171/dms/6/38 |doi-access=free |isbn=978-3-936609-58-5 }}{{cite book|last1=Griewank|first1=Andreas|last2=Walther|first2=A.|title=Principles and Techniques of Algorithmic Differentiation|url=https://books.google.com/books?id=qMLUIsgCwvUC|language=en|edition=Second|publisher=SIAM|year=2008|isbn=978-0898716597}}{{cite journal |authorlink=Jürgen Schmidhuber |last=Schmidhuber |first=Jürgen |year=2015 |title=Deep learning in neural networks: An overview |journal=Neural Networks |volume=61 |pages=85–117 |arxiv=1404.7828|bibcode=2014arXiv1404.7828S |doi=10.1016/j.neunet.2014.09.003 |pmid=25462637|s2cid=11715509 }}{{cite journal | last1 = Schmidhuber | first1 = Jürgen | authorlink = Jürgen Schmidhuber | year = 2015 | title = Deep Learning (Section on Backpropagation) | journal = Scholarpedia | volume = 10 | issue = 11| page = 32832 | doi = 10.4249/scholarpedia.32832 | bibcode = 2015SchpJ..1032832S | doi-access = free }}
|-
| 1976 || Discovery || Transfer Learning || Stevo Bozinovski and Ante Fulgosi introduce the transfer learning method for neural network training. Stevo Bozinovski and Ante Fulgosi (1976) "The influence of pattern similarity and transfer learning upon training of a base perceptron" (original in Croatian), Proceedings of Symposium Informatica 3-121-5, Bled. Stevo Bozinovski (2020) "Reminder of the first paper on transfer learning in neural networks, 1976". Informatica 44: 291–302.
|-
| 1979 || || Stanford Cart || Students at Stanford University develop a cart that can navigate and avoid obstacles in a room.
|-
| 1979 || Discovery || Neocognitron || Kunihiko Fukushima first publishes his work on the neocognitron, a type of artificial neural network (ANN).{{cite journal |last=Fukushima |first=Kunihiko |date=October 1979 |title=位置ずれに影響されないパターン認識機構の神経回路のモデル --- ネオコグニトロン --- |trans-title=Neural network model for a mechanism of pattern recognition unaffected by shift in position — Neocognitron — |language=Japanese |journal=Trans. IECE |volume=J62-A |issue=10 |pages=658–665 |url=https://search.ieice.org/bin/summary.php?id=j62-a_10_658 }}{{cite journal |last1=Fukushima |first1=Kunihiko |title=Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position |journal=Biological Cybernetics |date=April 1980 |volume=36 |issue=4 |pages=193–202 |doi=10.1007/BF00344251 |pmid=7370364 |s2cid=206775608 }} The neocognitron later inspires convolutional neural networks (CNNs).{{cite journal|last1=Le Cun|first1=Yann|title=Deep Learning|citeseerx=10.1.1.297.6176}}
|-
| 1981 || Achievement || Learning to recognize 40 patterns || Stevo Bozinovski shows an experiment of neural network supervised learning for recognition of 40 linearly dependent patterns: 26 letters, 10 numbers, and 4 special symbols from a computer terminal. S. Bozinovski (1981) "Teaching space: A representation concept for adaptive pattern classification" COINS Technical Report No. 81-28, Computer and Information Science Department, University of Massachusetts at Amherst, MA, 1981. UM-CS-1981-028.pdf
|-
| 1981 || || Explanation Based Learning || Gerald Dejong introduces explanation-based learning, in which a computer algorithm analyses data and creates a general rule it can follow, discarding unimportant data.
|-
| 1982 || Discovery || Recurrent Neural Network || John Hopfield popularizes Hopfield networks, a type of recurrent neural network that can serve as content-addressable memory systems.{{cite journal |last1=Hopfield |first1=J J |title=Neural networks and physical systems with emergent collective computational abilities. |journal=Proceedings of the National Academy of Sciences |date=April 1982 |volume=79 |issue=8 |pages=2554–2558 |doi=10.1073/pnas.79.8.2554 |pmid=6953413 |pmc=346238 |bibcode=1982PNAS...79.2554H |doi-access=free }}
|-
| 1982 || Discovery || Self Learning || Stevo Bozinovski develops a self-learning paradigm in which an agent does not use external reinforcement. Instead, the agent learns using internal state evaluations, represented by emotions. He introduces the Crossbar Adaptive Array (CAA) architecture capable of self-learning. Bozinovski, S. (1982). "A self-learning system using secondary reinforcement". In Trappl, Robert (ed.). Cybernetics and Systems Research: Proceedings of the Sixth European Meeting on Cybernetics and Systems Research. North-Holland. pp. 397–402. ISBN 978-0-444-86488-8. Bozinovski S. (1995) "Adaptive parallel distributed processing, neural and genetic agents. Part I: Neuro-genetic agents and structural theory of self-reinforcement learning systems". CMPSCI Technical Report 95-107, University of Massachusetts at Amherst, UM-CS-1995-107
|-
| 1982 || Achievement || Delayed reinforcement learning || Stevo Bozinovski addresses the challenge of reinforcement learning with delayed rewards. Using the Crossbar Adaptive Array (CAA), he presents solutions to two tasks: 1) learning a path in a graph and 2) balancing an inverted pendulum. Bozinovski, S. (1999) "Crossbar Adaptive Array: The first connectionist network that solved the delayed reinforcement learning problem". In A. Dobnikar, N. Steele, D. Pearson, R. Albert (Eds.) Artificial Neural Networks and Genetic Algorithms, Springer Verlag, pp. 320–325, 1999, ISBN 3-211-83364-1
|-
| 1985 || || NETtalk || Terry Sejnowski develops NETtalk, a program that learns to pronounce words the same way a baby does.
|-
| 1986 || Application || Backpropagation || Seppo Linnainmaa's reverse mode of automatic differentiation (first applied to neural networks by Paul Werbos) is used in experiments by David Rumelhart, Geoff Hinton and Ronald J. Williams to learn internal representations.{{cite journal |last1=Rumelhart |first1=David E. |last2=Hinton |first2=Geoffrey E. |last3=Williams |first3=Ronald J. |title=Learning representations by back-propagating errors |journal=Nature |date=October 1986 |volume=323 |issue=6088 |pages=533–536 |doi=10.1038/323533a0 |bibcode=1986Natur.323..533R |s2cid=205001834 }}
|-
| 1988 || Discovery || Universal approximation theorem || {{ill|Kurt Hornik|de}} proves that standard multilayer feedforward networks are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
|-
| 1989 || Discovery || Reinforcement Learning || Christopher Watkins develops Q-learning, which greatly improves the practicality and feasibility of reinforcement learning.{{cite journal|last1=Watkins|first1=Christopher|title=Learning from Delayed Rewards|date=1 May 1989|url=http://www.cs.rhul.ac.uk/~chrisw/new_thesis.pdf}}
|-
| 1989 || Commercialization || Commercialization of Machine Learning on Personal Computers || Axcelis, Inc. releases Evolver, the first software package to commercialize the use of genetic algorithms on personal computers.{{cite news|last1=Markoff|first1=John|title=BUSINESS TECHNOLOGY; What's the Best Answer? It's Survival of the Fittest|url=https://www.nytimes.com/1990/08/29/business/business-technology-what-s-the-best-answer-it-s-survival-of-the-fittest.html|accessdate=8 June 2016|work=New York Times|date=29 August 1990}}
|-
| 1992 || Achievement || Machines Playing Backgammon || Gerald Tesauro develops TD-Gammon, a computer backgammon program that uses an artificial neural network trained using temporal-difference learning (hence the 'TD' in the name). TD-Gammon is able to rival, but not consistently surpass, the abilities of top human backgammon players.{{cite journal |last1=Tesauro |first1=Gerald |title=Temporal difference learning and TD-Gammon |journal=Communications of the ACM |date=March 1995 |volume=38 |issue=3 |pages=58–68 |doi=10.1145/203330.203343 |s2cid=8763243 }}
|-
| 1995 || Discovery || Random Forest Algorithm || Tin Kam Ho publishes a paper describing random decision forests.{{cite book |doi=10.1109/ICDAR.1995.598994 |chapter=Random decision forests |title=Proceedings of 3rd International Conference on Document Analysis and Recognition |year=1995 |last1=Tin Kam Ho |volume=1 |pages=278–282 |isbn=0-8186-7128-9 }}
|-
| 1995 || Discovery || Support-Vector Machines || Corinna Cortes and Vladimir Vapnik publish their work on support-vector machines.{{cite journal |last1=Cortes |first1=Corinna |last2=Vapnik |first2=Vladimir |title=Support-vector networks |journal=Machine Learning |date=September 1995 |volume=20 |issue=3 |pages=273–297 |doi=10.1007/BF00994018 |doi-access=free }}
|-
| 1997 || Achievement || IBM Deep Blue Beats Kasparov || IBM's Deep Blue beats the world chess champion, Garry Kasparov.
|-
| 1997 || Discovery || LSTM || Sepp Hochreiter and Jürgen Schmidhuber invent long short-term memory (LSTM) recurrent neural networks,{{cite journal |last1=Hochreiter |first1=Sepp |last2=Schmidhuber |first2=Jürgen |title=Long Short-Term Memory |journal=Neural Computation |date=1 November 1997 |volume=9 |issue=8 |pages=1735–1780 |doi=10.1162/neco.1997.9.8.1735 |pmid=9377276 |s2cid=1915014 }} greatly improving the efficiency and practicality of recurrent neural networks.
|-
| 1998 || || MNIST database || A team led by Yann LeCun releases the MNIST database, a dataset comprising a mix of handwritten digits from American Census Bureau employees and American high school students.{{cite web|last1=LeCun|first1=Yann|last2=Cortes|first2=Corinna|last3=Burges|first3=Christopher|title=THE MNIST DATABASE of handwritten digits|url=http://yann.lecun.com/exdb/mnist/|accessdate=16 June 2016}} The MNIST database has since become a benchmark for evaluating handwriting recognition.
|-
| 2002 || Project || Torch Machine Learning Library || Torch, a software library for machine learning, is first released.{{cite journal|last1=Collobert|first1=Ronan|last2=Bengio|first2=Samy|last3=Mariethoz|first3=Johnny|title=Torch: a modular machine learning software library|date=30 October 2002|url=http://www.idiap.ch/ftp/reports/2002/rr02-46.pdf|accessdate=5 June 2016|archive-date=6 August 2016|archive-url=https://web.archive.org/web/20160806084735/http://www.idiap.ch/ftp/reports/2002/rr02-46.pdf|url-status=dead}}
|-
| 2006 || || The Netflix Prize || The Netflix Prize competition is launched by Netflix. The aim of the competition is to use machine learning to beat the accuracy of Netflix's own recommendation software in predicting a user's rating for a film, given their ratings for previous films, by at least 10%.{{cite web|title=The Netflix Prize Rules|url=http://www.netflixprize.com/rules|website=Netflix Prize|publisher=Netflix|accessdate=16 June 2016|url-status=dead|archiveurl=https://web.archive.org/web/20120303162455/http://www.netflixprize.com/rules|archivedate=3 March 2012}} The prize was won in 2009.
|-
| 2009 || Achievement || ImageNet || ImageNet is created. ImageNet is a large visual database envisioned by Fei-Fei Li from Stanford University, who realized that the best machine learning algorithms wouldn't work well if the data didn't reflect the real world.{{Cite web|url=https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/|title=ImageNet: the data that spawned the current AI boom — Quartz|last=Gershgorn|first=Dave|website=qz.com|date=26 July 2017 |language=en-US|access-date=2018-03-30}} For many, ImageNet was the catalyst for the AI boom{{cite news |last1=Hardy |first1=Quentin |title=Reasons to Believe the A.I. Boom Is Real |url=https://www.nytimes.com/2016/07/19/technology/reasons-to-believe-the-ai-boom-is-real.html |work=The New York Times |date=18 July 2016 }} of the 21st century.
|-
| 2010 || Project || Kaggle Competition || Kaggle, a website that serves as a platform for machine learning competitions, is launched.{{cite web|title=About|url=https://www.kaggle.com/about|website=Kaggle|publisher=Kaggle Inc|accessdate=16 June 2016|archive-date=18 March 2016|archive-url=https://web.archive.org/web/20160318210802/https://www.kaggle.com/about|url-status=dead}}
|-
| 2011 || Achievement || Beating Humans in Jeopardy || Using a combination of machine learning, natural language processing and information retrieval techniques, IBM's Watson beats two human champions in a Jeopardy! competition.{{cite news |last1=Markoff |first1=John |title=Computer Wins on 'Jeopardy!': Trivial, It's Not |url=https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html |work=The New York Times |date=16 February 2011 |page=A1 }}
|-
| 2012 || Achievement || Recognizing Cats on YouTube || The Google Brain team, led by Andrew Ng and Jeff Dean, creates a neural network that learns to recognize cats by watching unlabeled images taken from frames of YouTube videos.{{cite book |doi=10.1109/ICASSP.2013.6639343 |chapter=Building high-level features using large scale unsupervised learning |title=2013 IEEE International Conference on Acoustics, Speech and Signal Processing |year=2013 |last1=Le |first1=Quoc V. |pages=8595–8598 |isbn=978-1-4799-0356-6 |s2cid=206741597 }}{{cite news|last1=Markoff|first1=John|title=How Many Computers to Identify a Cat? 16,000|url=https://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html|accessdate=5 June 2016|work=New York Times|date=26 June 2012|page=B1}}
|-
| 2012 || Discovery || Visual Recognition || The AlexNet paper and algorithm achieve breakthrough results in image recognition on the ImageNet benchmark, popularizing deep neural networks.{{Cite web |date=2017-07-26 |title=The data that transformed AI research—and possibly the world |url=https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world |access-date=2023-09-12 |website=Quartz |language=en}}
|-
| 2013 || Discovery || Word Embeddings || A widely cited paper nicknamed word2vec shows how each word can be converted into a vector of numbers (a word embedding); the use of these embeddings revolutionizes the processing of text in machine learning.
|-
| 2014 || Achievement || Leap in Face Recognition || Facebook researchers publish their work on DeepFace, a system that uses neural networks to identify faces with 97.35% accuracy. The results are an improvement of more than 27% over previous systems and rival human performance.{{cite journal|last1=Taigman|first1=Yaniv|last2=Yang|first2=Ming|last3=Ranzato|first3=Marc'Aurelio|last4=Wolf|first4=Lior|title=DeepFace: Closing the Gap to Human-Level Performance in Face Verification|journal=Conference on Computer Vision and Pattern Recognition|date=24 June 2014|url=https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/|accessdate=8 June 2016}}
|-
| 2014 || || Sibyl || Researchers from Google detail their work on Sibyl,{{cite web|last1=Canini|first1=Kevin|last2=Chandra|first2=Tushar|last3=Ie|first3=Eugene|last4=McFadden|first4=Jim|last5=Goldman|first5=Ken|last6=Gunter|first6=Mike|last7=Harmsen|first7=Jeremiah|last8=LeFevre|first8=Kristen|last9=Lepikhin|first9=Dmitry|last10=Llinares|first10=Tomas Lloret|last11=Mukherjee|first11=Indraneel|last12=Pereira|first12=Fernando|last13=Redstone|first13=Josh|last14=Shaked|first14=Tal|last15=Singer|first15=Yoram|title=Sibyl: A system for large scale supervised machine learning|url=https://users.soe.ucsc.edu/~niejiazhong/slides/chandra.pdf|website=Jack Baskin School of Engineering|publisher=UC Santa Cruz|accessdate=8 June 2016|archive-date=15 August 2017|archive-url=https://web.archive.org/web/20170815072141/https://users.soe.ucsc.edu/~niejiazhong/slides/chandra.pdf|url-status=dead}} a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.{{cite news|last1=Woodie|first1=Alex|title=Inside Sibyl, Google's Massively Parallel Machine Learning Platform|url=http://www.datanami.com/2014/07/17/inside-sibyl-googles-massively-parallel-machine-learning-platform/|accessdate=8 June 2016|work=Datanami|publisher=Tabor Communications|date=17 July 2014}}
|-
| 2016 || Achievement || Beating Humans in Go || Google's AlphaGo program becomes the first computer Go program to beat an unhandicapped professional human player{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=https://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}} using a combination of machine learning and tree search techniques.{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016|archive-date=30 January 2016|archive-url=https://web.archive.org/web/20160130230207/http://www.deepmind.com/alpha-go.html|url-status=dead}} It is later improved as AlphaGo Zero and then generalized in 2017 to chess and other two-player games with AlphaZero.
|-
| 2017 || Discovery || Transformer || A team at Google Brain invents the transformer architecture,{{Cite journal |last1=Vaswani |first1=Ashish |last2=Shazeer |first2=Noam |last3=Parmar |first3=Niki |last4=Uszkoreit |first4=Jakob |last5=Jones |first5=Llion |last6=Gomez |first6=Aidan N. |last7=Kaiser |first7=Lukasz |last8=Polosukhin |first8=Illia |date=2017 |title=Attention Is All You Need |arxiv=1706.03762}} which allows for faster parallel training of neural networks on sequential data like text.
|-
| 2018 || Achievement || Protein Structure Prediction || AlphaFold 1 (2018) placed first in the overall rankings of the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) in December 2018.{{cite news |last1=Sample |first1=Ian |title=Google's DeepMind predicts 3D shapes of proteins |url=https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins |work=The Guardian |date=2 December 2018 }}
|-
| 2021 || Achievement || Protein Structure Prediction || A team using AlphaFold 2 (2020) repeats the first-place ranking in the CASP competition held in November 2020. The team achieves a level of accuracy much higher than any other group, scoring above 90 for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree to which a structure predicted by a computational program is similar to the structure determined by lab experiment, with 100 being a complete match, within the distance cutoff used for calculating GDT.{{cite journal |last1=Eisenstein |first1=Michael |title=Artificial intelligence powers protein-folding predictions |journal=Nature |date=23 November 2021 |volume=599 |issue=7886 |pages=706–708 |doi=10.1038/d41586-021-03499-y |s2cid=244528561 }}
|}

== See also ==

== References ==

=== Citations ===

{{Reflist}}

=== Works cited ===

* {{Cite book |first=Daniel |last=Crevier |year=1993 |title=AI: The Tumultuous Search for Artificial Intelligence |publisher=BasicBooks |isbn=0-465-02997-3 |location=New York |author-link=Daniel Crevier}}
* {{cite news |last1=Marr |first1=Bernard |title=A Short History of Machine Learning -- Every Manager Should Read |url=https://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/ |work=Forbes |date=19 February 2016 |access-date=2022-12-25 |archive-url=https://web.archive.org/web/20221205135114/https://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/ |archive-date=2022-12-05 |url-status=live}}
* {{Cite book |first1=Stuart |last1=Russell |first2=Peter |last2=Norvig |year=2003 |title=Artificial Intelligence: A Modern Approach |publisher=Pearson Education |isbn=0-137-90395-2 |location=London |author-link1=Stuart J. Russell |author-link2=Peter Norvig}}

{{Timelines of computing}}

[[Category:Machine learning]]
