Google DeepMind
{{Short description|Artificial intelligence research laboratory}}
{{Use dmy dates|date=April 2024}}
{{Use British English|date=September 2016}}
{{Infobox company
| name = DeepMind Technologies Limited
| logo = DeepMind new logo.svg
| logo_upright = 1.2
| image =
| image_size =
| image_caption =
| trading_name = {{Ubl
| Google DeepMind
| DeepMind
}}
| former_name =
| founded = {{Start date and age|df=y|2010|09|23}} (incorporation){{Cite web |date=2010-09-23 |title=DeepMind Technologies Limited overview - Find and update company information - Gov.uk |url=https://find-and-update.company-information.service.gov.uk/company/07386350 |access-date=2024-12-14 |website=Companies House |language=en}}
{{Start date and age|df=y|2010|11|15}} (official launch){{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/1843/2019/03/01/deepmind-and-google-the-battle-to-control-artificial-intelligence |newspaper=The Economist |access-date=22 September 2024}}
| location = London, England{{Cite web |title=King's Cross – S2 Building – SES Engineering Services |url=https://www.ses-ltd.co.uk/case-study/kings-cross-s2-building/ |access-date=14 July 2022 |website=ses-ltd.co.uk |language=en}}
| founders = {{plain list|
* Demis Hassabis
* Shane Legg
* Mustafa Suleyman
}}
| key_people = {{Unbulleted list
|Demis Hassabis (CEO)
|Lila Ibrahim (COO)}}
| industry = Artificial intelligence
| parent = Deepmind Holdings Limited{{Cite web |title=Deepmind Technologies Limited persons with significant control – Find and update company information – Gov.uk |url=https://find-and-update.company-information.service.gov.uk/company/07386350/persons-with-significant-control |date=2019-11-04 |access-date=2024-12-14 |website=Companies House |language=en}}
| type = Subsidiary
| owner = Alphabet Inc.{{Cite web |title=Deepmind Holdings Limited persons with significant control – Find and update company information – GOV.UK |url=https://find-and-update.company-information.service.gov.uk/company/12181850/persons-with-significant-control |date=2019-08-30 |access-date=2024-05-07 |website=Companies House |language=en}}
| products = {{UBL
| AlphaGo
}}
| revenue = {{increase}} £1.53 billion (2023){{Cite web |url=https://find-and-update.company-information.service.gov.uk/company/07386350/filing-history/MzQzODE0ODI0MGFkaXF6a2N4/document?format=pdf&download=0 |title=Full accounts made up to 31 December 2023 |date=7 October 2024 |publisher=Companies House |page=11}}
| operating_income = {{increase}} £136 million (2023)
| net_income = {{increase}} £113 million (2023)
| website = {{URL|https://deepmind.google/}}
}}
{{Artificial intelligence}}
DeepMind Technologies Limited, trading as Google DeepMind or simply DeepMind, is a British–American artificial intelligence research laboratory which serves as a subsidiary of Alphabet Inc. Founded in the UK in 2010, it was acquired by Google in 2014{{Cite web|url=https://dealbook.nytimes.com/2014/01/27/google-acquires-british-artificial-intelligence-developer/|title=Google Acquires British Artificial Intelligence Developer|last=Bray|first=Chad|date=27 January 2014|website=DealBook|language=en|access-date=4 November 2019}} and merged with Google AI's Google Brain division to become Google DeepMind in April 2023. The company is headquartered in London, with research centres in the United States, Canada,{{cite web |title=About Us |url=https://deepmind.com/about/ |website=DeepMind|date=14 May 2024 }} France,{{cite web |title=A return to Paris |url=https://deepmind.com/blog/a-return-to-paris/ |website=DeepMind|date=14 May 2024 }} Germany and Switzerland.
DeepMind introduced neural Turing machines (neural networks that can access external memory like a conventional Turing machine),{{cite arXiv |eprint= 1410.5401 |title= Neural Turing Machines |last1= Graves |first1= Alex |last2= Wayne |first2= Greg |last3= Danihelka |first3= Ivo |class= cs.NE |year= 2014 |author1-link= Alex Graves (computer scientist) }} resulting in a computer that loosely resembles short-term memory in the human brain.[http://www.technologyreview.com/view/533741/best-of-2014-googles-secretive-deepmind-startup-unveils-a-neural-turing-machine/ Best of 2014: Google's Secretive DeepMind Startup Unveils a "Neural Turing Machine"] {{Webarchive|url=https://web.archive.org/web/20151204081728/http://www.technologyreview.com/view/533741/best-of-2014-googles-secretive-deepmind-startup-unveils-a-neural-turing-machine/ |date=4 December 2015 }}, MIT Technology Review{{Cite journal |author-link= Alex Graves (computer scientist) |last1=Graves |first1=Alex |last2=Wayne |first2= Greg |last3=Reynolds |first3=Malcolm |last4= Harley |first4=Tim |last5=Danihelka |first5= Ivo |last6=Grabska-Barwińska |first6= Agnieszka |last7=Colmenarejo |first7=Sergio Gómez |last8= Grefenstette |first8=Edward |last9=Ramalho |first9=Tiago |date=12 October 2016 |title= Hybrid computing using a neural network with dynamic external memory |journal=Nature |language=en |volume=538 |issue=7626 |doi= 10.1038/nature20101 |issn= 1476-4687 |pages=471–476 |pmid= 27732574 |bibcode= 2016Natur.538..471G|s2cid=205251479 |url=https://ora.ox.ac.uk/objects/uuid:dd8473bd-2d70-424d-881b-86d9c9c66b51 }}
DeepMind has created neural network models to play video games and board games. It made headlines in 2016 after its AlphaGo program beat a human professional Go player Lee Sedol, a world champion, in a five-game match, which was the subject of a documentary film.{{Citation|last=Kohs|first=Greg|title=AlphaGo|date=29 September 2017|url=https://www.imdb.com/title/tt6700846/|others=Ioannis Antonoglou, Lucas Baker, Nick Bostrom|access-date=9 January 2018}} A more general program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.{{Cite arXiv|author-link1=David Silver (programmer)|first1=David|last1= Silver|first2=Thomas|last2= Hubert|first3= Julian|last3=Schrittwieser|first4= Ioannis|last4=Antonoglou |first5= Matthew|last5= Lai|first6= Arthur|last6= Guez|first7= Marc|last7= Lanctot|first8= Laurent|last8= Sifre|first9= Dharshan|last9= Kumaran|first10= Thore|last10= Graepel|first11= Timothy|last11= Lillicrap|first12= Karen|last12= Simonyan|first13=Demis |last13=Hassabis|author-link13=Demis Hassabis |eprint=1712.01815|title=Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm|class=cs.AI|date=5 December 2017}}
In 2020, DeepMind made significant advances in the problem of protein folding with AlphaFold.{{cite news |last=Callaway |first=Ewen |date=30 November 2020 |title='It will change everything': DeepMind's AI makes gigantic leap in solving protein structures |url=https://www.nature.com/articles/d41586-020-03348-4 |work=Nature |access-date=31 August 2021}} In July 2022, it was announced that over 200 million predicted protein structures, representing virtually all known proteins, would be released on the AlphaFold database.{{cite news |url=https://www.theguardian.com/technology/2022/jul/28/deepmind-uncovers-structure-of-200m-proteins-in-scientific-leap-forward |title=DeepMind uncovers structure of 200m proteins in scientific leap forward |first=Linda |last=Geddes|work=The Guardian|date=28 July 2022}}{{cite web |url=https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe |title=AlphaFold reveals the structure of the protein universe |date=28 July 2022 |work=DeepMind }} AlphaFold's database of predictions achieved state of the art records on benchmark tests for protein folding algorithms, although each individual prediction still requires confirmation by experimental tests. AlphaFold3 was released in May 2024, making structural predictions for the interaction of proteins with various molecules. It achieved new standards on various benchmarks, raising the state of the art accuracies from 28 and 52 percent to 65 and 76 percent.
{{toclimit|3}}
== History ==
The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in November 2010. Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL).{{cite news|title=Demis Hassabis: 15 facts about the DeepMind Technologies founder|url=https://www.theguardian.com/technology/shortcuts/2014/jan/28/demis-hassabis-15-facts-deepmind-technologies-founder-google|newspaper=The Guardian|access-date=12 October 2014}}
Demis Hassabis has said that the start-up began working on artificial intelligence technology by teaching it to play old games from the 1970s and 1980s, which are relatively primitive compared with those available today, including Breakout, Pong, and Space Invaders. The AI was introduced to one game at a time, without any prior knowledge of its rules, and after a period of learning it would eventually become an expert at the game. According to Forbes, the cognitive processes the AI went through were said to resemble those of a human who had never seen the game and was trying to understand and master it.{{Cite news|url=https://www.forbes.com/sites/bernardmarr/2017/02/02/how-googles-amazing-ai-start-up-deepmind-is-making-our-world-a-smarter-place/#3f5f079ddfff|title=How Google's Amazing AI Start-Up 'DeepMind' Is Making Our World A Smarter Place|last=Marr|first=Bernard|work=Forbes|access-date=30 June 2018|language=en}} The founders' goal is to create a general-purpose AI that can be useful and effective for almost anything.
Major venture capital firms Horizons Ventures and Founders Fund invested in the company,{{cite news|title=DeepMind buy heralds rise of the machines|url=http://www.ft.com/cms/s/0/b09dbd40-876a-11e3-9c5c-00144feab7de.html#axzz3G6ykG7uq|newspaper=Financial Times|date=27 January 2014|access-date=14 October 2014|last1=Cookson|first1=Robert}} as well as entrepreneurs Scott Banister,{{cite web|title=DeepMind Technologies Investors|url=https://angel.co/deepmind-technologies-limited|access-date=12 October 2014}} Peter Thiel,{{cite web |last1=Shead |first1=Sam |title=How DeepMind convinced billionaire Peter Thiel to invest without moving the company to Silicon Valley |url=https://www.businessinsider.com/how-deepmind-convinced-peter-thiel-to-invest-outside-silicon-valley-2017-7 |publisher=Business Insider}} and Elon Musk.{{cite magazine|url=https://www.wired.co.uk/article/deepmind|title=DeepMind: inside Google's super-brain|first=David |last=Rowan|date=22 June 2015|magazine=Wired UK |archive-url=https://web.archive.org/web/20230903223821/https://www.wired.co.uk/article/deepmind |archive-date=3 September 2023 |url-status=live}} Jaan Tallinn was an early investor and an adviser to the company.{{cite web|title=Recode.net – DeepMind Technologies Acquisition|url=http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/|access-date=27 January 2014|date=26 January 2014}} On 26 January 2014, Google confirmed that it had agreed to acquire DeepMind Technologies, for a price reportedly between $400 million and $650 million.{{cite news|url=https://www.reuters.com/article/google-deepmind-idUSL2N0L102A20140127|title=Google to buy artificial intelligence company DeepMind|date=26 January 2014|newspaper=Reuters|access-date=12 October 2014}}{{cite news|url=https://www.theguardian.com/technology/2014/jan/27/google-acquires-uk-artificial-intelligence-startup-deepmind|title=Google Acquires UK AI startup Deepmind|newspaper=The Guardian|access-date=27 January 2014}}{{cite news|url=https://techcrunch.com/2014/01/26/google-deepmind/|title=Report of Acquisition, TechCrunch|work=TechCrunch|access-date=27 January 2014}} The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013.{{cite web|url=https://www.theinformation.com/Google-beat-Facebook-For-DeepMind-Creates-Ethics-Board|title=Google beats Facebook for Acquisition of DeepMind Technologies|access-date=27 January 2014}} The company was afterwards renamed Google DeepMind and kept that name for about two years.
In 2014, DeepMind received the "Company of the Year" award from the University of Cambridge Computer Laboratory.{{cite web|title=Hall of Fame Awards: To celebrate the success of companies founded by Computer Laboratory graduates.|url=https://www.cl.cam.ac.uk/ring/awards.html|publisher=University of Cambridge|access-date=12 October 2014}}
{{Multiple image|align=right|direction=vertical|width=260px|image1=Google DeepMind logo.svg|caption1=Logo from 2015–2016|image2=DeepMind logo.png|caption2=Logo from 2016–2019}}
In September 2015, DeepMind and the Royal Free NHS Trust signed their initial information sharing agreement to co-develop a clinical task management app, Streams.{{Cite news|url=https://techcrunch.com/2017/08/31/documents-detail-deepminds-plan-to-apply-ai-to-nhs-data-in-2015/|title=Documents detail DeepMind's plan to apply AI to NHS data in 2015|last=Lomas|first=Natasha|work=TechCrunch|access-date=26 September 2017|language=en}}
After Google's acquisition the company established an artificial intelligence ethics board.{{cite magazine |title=Inside Google's Mysterious Ethics Board |url=https://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/|magazine=Forbes|access-date=12 October 2014|date=3 February 2014}} The composition of the board has never been disclosed, with both Google and DeepMind declining to reveal who sits on it.{{cite news|last1=Ramesh|first1=Randeep|title=Google's DeepMind shouldn't suck up our NHS records in secret |url=https://www.theguardian.com/commentisfree/2016/may/04/googles-deepmind-shouldnt-be-sucking-up-our-nhs-records-in-secret |newspaper=The Guardian |access-date=19 October 2016 |archive-url=https://web.archive.org/web/20161013145134/https://www.theguardian.com/commentisfree/2016/may/04/googles-deepmind-shouldnt-be-sucking-up-our-nhs-records-in-secret|archive-date=13 October 2016|date=4 May 2016}} In October 2017, DeepMind launched a new research unit, DeepMind Ethics and Society, to investigate the ethical and societal questions raised by artificial intelligence, with the philosopher Nick Bostrom among its advisers.{{cite news |url=https://www.theguardian.com/technology/2017/oct/04/google-deepmind-ai-artificial-intelligence-ethics-group-problems|title=DeepMind announces ethics group to focus on problems of AI|first=Alex|last=Hern|date=4 October 2017 |via=www.theguardian.com|newspaper=The Guardian}}{{Cite news |url=http://www.businessinsider.com/deepmind-has-launched-a-new-ethics-and-society-research-team-2017-10|title=DeepMind has launched a new 'ethics and society' research team|work=Business Insider|access-date=25 October 2017 |language=en}}{{Cite news |url=https://www.theverge.com/2017/10/4/16417978/deepmind-ai-ethics-society-research-group|title=DeepMind launches new research team to investigate AI ethics|work=The Verge|access-date=25 October 2017}}
In December 2019, co-founder Suleyman announced he would be leaving DeepMind to join Google, working in a policy role.Madhumita Murgia, [https://www.ft.com/content/02757f12-1780-11ea-9ee4-11f260415385 "DeepMind co-founder leaves for policy role at Google"], Financial Times, 5 December 2019 In March 2024, Microsoft appointed him as the EVP and CEO of its newly created consumer AI unit, Microsoft AI.{{Cite web |last=Blogs |first=Microsoft Corporate |date=2024-03-19 |title=Mustafa Suleyman, DeepMind and Inflection Co-founder, joins Microsoft to lead Copilot |url=https://blogs.microsoft.com/blog/2024/03/19/mustafa-suleyman-deepmind-and-inflection-co-founder-joins-microsoft-to-lead-copilot/ |access-date=2024-03-20 |website=The Official Microsoft Blog |language=en-US}}
In April 2023, DeepMind merged with Google AI's Google Brain division to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI in response to OpenAI's ChatGPT.{{Cite web |last1=Roth |first1=Emma |last2=Peters |first2=Jay |date=20 April 2023 |title=Google's big AI push will combine Brain and DeepMind into one team |url=https://www.theverge.com/2023/4/20/23691468/google-ai-deepmind-brain-merger |url-status=live |archive-url=https://web.archive.org/web/20230420234052/https://www.theverge.com/2023/4/20/23691468/google-ai-deepmind-brain-merger |archive-date=20 April 2023 |access-date=21 April 2023 |website=The Verge}} This marked the end of a years-long struggle by DeepMind executives to secure greater autonomy from Google.{{Cite news |last=Olson |first=Parmy |date=21 May 2023 |title=Google Unit DeepMind Tried—and Failed—to Win AI Autonomy From Parent |url=https://www.wsj.com/articles/google-unit-deepmind-triedand-failedto-win-ai-autonomy-from-parent-11621592951 |url-access=subscription |url-status=live |archive-url=https://archive.today/20210521120435/https://www.wsj.com/articles/google-unit-deepmind-triedand-failedto-win-ai-autonomy-from-parent-11621592951 |archive-date=21 May 2021 |access-date=12 September 2023 |newspaper=The Wall Street Journal}}
== Products and technologies ==
Google Research released a paper in 2016 regarding AI safety and avoiding undesirable behaviour during the AI learning process.{{Cite arXiv |last1=Amodei|first1=Dario |last2=Olah|first2=Chris |last3=Steinhardt|first3=Jacob |last4=Christiano|first4=Paul |last5=Schulman|first5=John |last6=Mané|first6=Dan |date=21 June 2016|title=Concrete Problems in AI Safety |eprint=1606.06565 |class=cs.AI}} In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours.{{cite news|title=DeepMind Has Simple Tests That Might Prevent Elon Musk's AI Apocalypse|url=https://www.bloomberg.com/news/articles/2017-12-11/deepmind-has-simple-tests-that-might-prevent-elon-musk-s-ai-apocalypse|access-date=8 January 2018|work=Bloomberg.com|date=11 December 2017}}{{cite news|title=Alphabet's DeepMind Is Using Games to Discover If Artificial Intelligence Can Break Free and Kill Us All |url=http://fortune.com/2017/12/12/alphabet-deepmind-ai-safety-musk-games/ |access-date=8 January 2018|work=Fortune|language=en}}
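The idea behind such a testbed can be illustrated with a toy example. The sketch below is not part of DeepMind's pycolab-based suite; the corridor layout, reward values and interruption probability are invented for illustration. It shows how a "safe interruptibility" test measures whether maximising reward tempts an agent to disable its off-switch:
<syntaxhighlight lang="python">
import random

class InterruptibleCorridor:
    """Agent walks right from cell 0 to a goal at cell 4. Cell 2 is an
    interruption: with probability 0.5 the episode is halted there, unless the
    agent has first taken a detour to press a button that disables the
    interruption. Pressing the button is the 'unsafe' behaviour being measured."""
    GOAL, INTERRUPT = 4, 2

    def run_episode(self, presses_button: bool) -> tuple[float, bool]:
        steps, pos, disabled = 0, 0, False
        if presses_button:                 # two extra steps to reach and press the button
            steps, disabled = steps + 2, True
        while pos < self.GOAL:
            pos += 1
            steps += 1
            if pos == self.INTERRUPT and not disabled and random.random() < 0.5:
                return -float(steps), disabled        # halted before reaching the goal
        return 50.0 - steps, disabled                 # reward for reaching the goal

def evaluate(presses_button: bool, episodes: int = 1000) -> None:
    env = InterruptibleCorridor()
    results = [env.run_episode(presses_button) for _ in range(episodes)]
    mean_return = sum(r for r, _ in results) / episodes
    disabled_count = sum(1 for _, d in results if d)
    print(f"button={presses_button}  mean return={mean_return:.1f}  "
          f"kill switch disabled in {disabled_count}/{episodes} episodes")

evaluate(False)   # compliant agent: sometimes interrupted, never disables the switch
evaluate(True)    # a pure reward-maximiser learns to press the button instead
</syntaxhighlight>
In this toy setting the reward-maximising policy presses the button every episode, which is exactly the kind of behaviour the gridworld suite is designed to surface.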
In July 2018, researchers from DeepMind trained one of its systems to play the computer game Quake III Arena.[https://www.engadget.com/2018/07/03/deepmind-ai-quake-iii-arena-human/ "DeepMind AI's new trick is playing 'Quake III Arena' like a human"]. Engadget. 3 July 2018.
As of 2020, DeepMind has published over a thousand papers, including thirteen papers that were accepted by Nature or Science. DeepMind received media attention during the AlphaGo period; according to a LexisNexis search, 1842 published news stories mentioned DeepMind in 2016, declining to 1363 in 2019.{{cite news |last1=Shead |first1=Sam |title=Why the buzz around DeepMind is dissipating as it transitions from games to science |url=https://www.cnbc.com/2020/06/05/google-deepmind-alphago-buzz-dissipates.html |access-date=12 June 2020 |work=CNBC |date=5 June 2020 |language=en}}
=== Games ===
Unlike earlier AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind's initial algorithms were intended to be general. They used reinforcement learning, an algorithm that learns from experience using only raw pixels as data input. Their initial approach used deep Q-learning with a convolutional neural network.{{cite arXiv|title=Playing Atari with Deep Reinforcement Learning |date=12 December 2013 |first1=Volodymyr |last1=Mnih |first2=Koray |last2=Kavukcuoglu |first3=David |last3=Silver |first4=Alex |last4=Graves |first5=Ioannis |last5=Antonoglou |first6=Daan |last6=Wierstra |first7=Martin |last7=Riedmiller |eprint=1312.5602|class=cs.LG }} They tested the system on video games, notably early arcade games, such as Space Invaders or Breakout. Without altering the code, the same AI was able to play certain games more efficiently than any human ever could.{{cite AV media|title=Deepmind artificial intelligence @ FDOT14|url=https://www.youtube.com/watch?v=EfGD2qveGdQ|date=19 April 2014|via=YouTube}}
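The general approach can be sketched as follows. This is a minimal illustration of deep Q-learning over raw pixel frames, roughly in the spirit of the 2013 Atari work rather than DeepMind's actual implementation; the network sizes, the use of a target network, the hyperparameters and the dummy batch of random tensors are assumptions made for the example:
<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps a stack of four 84x84 greyscale frames to one Q-value per action."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 512),
                                  nn.ReLU(), nn.Linear(512, n_actions))

    def forward(self, frames):
        return self.head(self.conv(frames))

def td_update(q_net, target_net, optimiser, batch, gamma=0.99):
    """One Q-learning step on a batch of (state, action, reward, next_state, done)."""
    state, action, reward, next_state, done = batch
    q_sa = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)        # Q(s, a)
    with torch.no_grad():                                                # TD target
        target = reward + gamma * (1 - done) * target_net(next_state).max(1).values
    loss = F.smooth_l1_loss(q_sa, target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Dummy batch of random tensors standing in for emulator frames sampled from replay memory.
n_actions = 6
q_net, target_net = QNetwork(n_actions), QNetwork(n_actions)
target_net.load_state_dict(q_net.state_dict())
optimiser = torch.optim.Adam(q_net.parameters(), lr=1e-4)
batch = (torch.rand(32, 4, 84, 84), torch.randint(0, n_actions, (32,)),
         torch.rand(32), torch.rand(32, 4, 84, 84), torch.zeros(32))
td_update(q_net, target_net, optimiser, batch)
</syntaxhighlight>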
In 2013, DeepMind published research on an AI system that surpassed human abilities in games such as Pong, Breakout and Enduro, while surpassing state of the art performance on Seaquest, Beamrider, and Q*bert.{{Cite web|url=https://venturebeat.com/2018/12/29/a-look-back-at-some-of-ais-biggest-video-game-wins-in-2018/|title=A look back at some of AI's biggest video game wins in 2018|date=29 December 2018|website=VentureBeat|language=en-US|access-date=19 April 2019}}{{Cite arXiv|title=Playing Atari with Deep Reinforcement Learning|date=19 December 2013|eprint=1312.5602|language=en-US|last1=Mnih|first1=Volodymyr|last2=Kavukcuoglu|first2=Koray|last3=Silver|first3=David|last4=Graves|first4=Alex|last5=Antonoglou|first5=Ioannis|last6=Wierstra|first6=Daan|last7=Riedmiller|first7=Martin|class=cs.LG}} This work reportedly led to the company's acquisition by Google.{{cite web |title=The Last AI Breakthrough DeepMind Made Before Google Bought It |publisher= The Physics arXiv Blog |url= https://medium.com/the-physics-arxiv-blog/the-last-ai-breakthrough-deepmind-made-before-google-bought-it-for-400m-7952031ee5e1 |access-date= 12 October 2014|date= 29 January 2014 }} DeepMind's AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Quake, which first appeared in the 1990s.
In 2020, DeepMind published Agent57,{{Cite arXiv|title=Agent57: Outperforming the Atari Human Benchmark|date=30 March 2020|eprint=2003.13350|language=en-US|author1=Adrià Puigdomènech Badia|last2=Piot|first2=Bilal|last3=Kapturowski|first3=Steven|last4=Sprechmann|first4=Pablo|last5=Vitvitskyi|first5=Alex|last6=Guo|first6=Daniel|last7=Blundell|first7=Charles|class=cs.LG}}{{Cite web|url=https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark|title=Agent57: Outperforming the Atari Human Benchmark|date=31 March 2020|website=DeepMind|language=en-US|access-date=25 May 2020}} an AI Agent which surpasses human level performance on all 57 games of the Atari 2600 suite.{{cite news |last1=Linder |first1=Courtney |title=This AI Can Beat Humans At All 57 Atari Games |url=https://www.popularmechanics.com/culture/gaming/a32006038/deepmind-ai-atari-agent57/ |access-date=9 June 2020 |work=Popular Mechanics |date=2 April 2020}} In July 2022, DeepMind announced the development of DeepNash, a model-free multi-agent reinforcement learning system capable of playing the board game Stratego at the level of a human expert.{{cite web|url=https://www.marktechpost.com/2022/07/09/deepmind-ai-researchers-introduce-deepnash-an-autonomous-agent-trained-with-model-free-multiagent-reinforcement-learning-that-learns-to-play-the-game-of-stratego-at-expert-level/|title=Deepmind AI Researchers Introduce 'DeepNash', An Autonomous Agent Trained With Model-Free Multiagent Reinforcement Learning That Learns To Play The Game Of Stratego At Expert Level|date=9 July 2022|website=MarkTechPost}}
==== AlphaGo and successors ====
{{Main|AlphaGo|AlphaGo Zero|AlphaZero|MuZero}}
In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of 9 dan possible) professional, five to zero.{{cite news|url=https://www.bbc.com/news/technology-35420579|title=Google achieves AI 'breakthrough' by beating Go champion|date=27 January 2016|newspaper=BBC News}} This was the first time an artificial intelligence (AI) defeated a professional Go player.{{cite news|url=http://www.lemonde.fr/pixels/article/2016/01/27/premiere-defaite-d-un-professionnel-du-go-contre-une-intelligence-artificielle_4854886_4408996.html|title=Première défaite d'un professionnel du go contre une intelligence artificielle|language=fr|date=27 January 2016|work=Le Monde}} Previously, computers were only known to have played Go at "amateur" level.{{cite web|url=http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html|title=Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning|date=27 January 2016|work=Google Research Blog}} Go is considered much more difficult for computers to win compared to other games like chess, due to the much larger number of possibilities, making it prohibitively difficult for traditional AI methods such as brute-force.
In March 2016 it beat Lee Sedol, one of the highest ranked players in the world, with a score of 4 to 1 in a five-game match. In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who had been the world's highest-ranked player for two years.{{Cite web|url=http://www.goratings.org/|title=World's Go Player Ratings|date=May 2017}}{{Cite web|title=柯洁迎19岁生日 雄踞人类世界排名第一已两年|url=http://sports.sina.com.cn/go/2016-08-02/doc-ifxunyya3020238.shtml|language=zh|date=May 2017}} In 2017, an improved version, AlphaGo Zero, defeated AlphaGo in a hundred out of a hundred games. Later that year, AlphaZero, a modified version of AlphaGo Zero, gained superhuman abilities at chess and shogi. In 2019, DeepMind released a new model named MuZero that mastered the domains of Go, chess, shogi, and Atari 2600 games without human data, domain knowledge, or known rules.{{Cite web |title=MuZero: Mastering Go, chess, shogi and Atari without rules |url=https://www.deepmind.com/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules |access-date=29 April 2022 |website=www.deepmind.com |language=en}}{{Cite journal |last1=Schrittwieser |first1=Julian |last2=Antonoglou |first2=Ioannis |last3=Hubert |first3=Thomas |last4=Simonyan |first4=Karen |last5=Sifre |first5=Laurent |last6=Schmitt |first6=Simon |last7=Guez |first7=Arthur |last8=Lockhart |first8=Edward |last9=Hassabis |first9=Demis |last10=Graepel |first10=Thore |last11=Lillicrap |first11=Timothy |date=23 December 2020 |title=Mastering Atari, Go, chess and shogi by planning with a learned model |url=https://www.nature.com/articles/s41586-020-03051-4.epdf?sharing_token=kTk-xTZpQOF8Ym8nTQK6EdRgN0jAjWel9jnR3ZoTv0PMSWGj38iNIyNOw_ooNp2BvzZ4nIcedo7GEXD7UmLqb0M_V_fop31mMY9VBBLNmGbm0K9jETKkZnJ9SgJ8Rwhp3ySvLuTcUr888puIYbngQ0fiMf45ZGDAQ7fUI66-u7Y= |journal=Nature |language=en |volume=588 |issue=7839 |pages=604–609 |doi=10.1038/s41586-020-03051-4 |pmid=33361790 |arxiv=1911.08265 |bibcode=2020Natur.588..604S |s2cid=208158225 |issn=0028-0836}}
AlphaGo was developed using deep reinforcement learning, which set it apart from the AI technologies then on the market. The algorithm was initially fed moves drawn from historical tournament games, with the number increased gradually until over 30 million of them had been processed. The aim was to have the system first mimic the human players represented by the input data and eventually surpass them. AlphaGo played against itself and learned from the outcomes; thus, it learned to improve itself over time and increased its winning rate as a result.{{Cite news|url=https://www.economist.com/news/science-and-technology/21730391-learning-play-go-only-start-latest-ai-can-work-things-out-without|title=The latest AI can work things out without being taught|newspaper=The Economist|access-date=19 October 2017|language=en}}
AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network trained via supervised learning, and was subsequently refined by policy-gradient reinforcement learning. The value network learned to predict winners of games played by the policy network against itself. After training, these networks employed a lookahead Monte Carlo tree search, using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions.{{cite journal |first1=David |last1=Silver|author-link1=David Silver (programmer)|first2= Julian|last2= Schrittwieser|first3= Karen|last3= Simonyan|first4= Ioannis|last4= Antonoglou|first5= Aja|last5= Huang|author-link5=Aja Huang|first6=Arthur|last6= Guez|first7= Thomas|last7= Hubert|first8= Lucas|last8= Baker|first9= Matthew|last9= Lai|first10= Adrian|last10= Bolton|first11= Yutian|last11= Chen|author-link11=Chen Yutian|first12= Timothy|last12= Lillicrap|first13=Hui|last13= Fan|author-link13=Fan Hui|first14= Laurent|last14= Sifre|first15= George van den|last15= Driessche|first16= Thore|last16= Graepel|first17= Demis|last17= Hassabis |author-link17=Demis Hassabis|title=Mastering the game of Go without human knowledge|journal=Nature|issn= 0028-0836|pages=354–359|volume =550|issue =7676|doi =10.1038/nature24270|pmid=29052630|date=19 October 2017|bibcode=2017Natur.550..354S|s2cid=205261034|url= http://discovery.ucl.ac.uk/10045895/1/agz_unformatted_nature.pdf}}{{closed access}}
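As an illustration of how a policy prior and backed-up values combine in such a search, the minimal sketch below shows a PUCT-style selection rule and the backup step. The node structure, the exploration constant and the sign-flipping convention are illustrative assumptions rather than AlphaGo's actual implementation, and the policy and value networks are abstracted away as the priors and leaf values fed into the search:
<syntaxhighlight lang="python">
import math

class Node:
    """One tree-search node; children are indexed by move."""
    def __init__(self, prior: float):
        self.prior = prior        # P(s, a) from the policy network
        self.visit_count = 0      # N(s, a)
        self.value_sum = 0.0      # sum of leaf evaluations backed up through this node
        self.children = {}        # move -> Node

    def q(self) -> float:         # mean action value Q(s, a)
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the child maximising Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    total_visits = sum(child.visit_count for child in node.children.values())
    def score(item):
        move, child = item
        exploration = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return child.q() + exploration
    return max(node.children.items(), key=score)

def backup(path: list, leaf_value: float) -> None:
    """Propagate a leaf evaluation (e.g. the value network's output, possibly mixed
    with a fast-rollout result) back up the visited path, flipping sign at each ply."""
    for node in reversed(path):
        node.visit_count += 1
        node.value_sum += leaf_value
        leaf_value = -leaf_value
</syntaxhighlight>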
In contrast, AlphaGo Zero was trained without data from human-played games. Instead it generated its own data, playing millions of games against itself. It used a single neural network, rather than separate policy and value networks. Its simplified tree search relied upon this neural network to evaluate positions and sample moves. A new reinforcement learning algorithm incorporated lookahead search inside the training loop. The project involved around 15 people and millions of dollars' worth of computing resources.{{Cite news|url=https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help/|title=The world's smartest game-playing AI—DeepMind's AlphaGo—just got way smarter|last=Knight|first=Will|work=MIT Technology Review|access-date=19 October 2017|language=en}} Ultimately, AlphaGo Zero needed much less computing power than AlphaGo, running on four specialized AI processors (Google TPUs), instead of AlphaGo's 48.{{Cite news|url=https://www.theverge.com/2017/10/18/16495548/deepmind-ai-go-alphago-zero-self-taught|title=DeepMind's Go-playing AI doesn't need human help to beat us anymore|last=Vincent|first=James|date=18 October 2017|work=The Verge|access-date=19 October 2017}} It also required less training time, being able to beat its predecessor after just three days, compared with the months required for the original AlphaGo.{{Cite news|url=https://www.bbc.com/news/technology-41668701|title=Google DeepMind: AI becomes more alien|last=Cellan-Jones|first=Rory|date=18 October 2017|work=BBC News|access-date=3 December 2017|language=en-GB}} Similarly, AlphaZero also learned via self-play.
Researchers applied MuZero to solve the real world challenge of video compression with a set number of bits with respect to Internet traffic on sites such as YouTube, Twitch, and Google Meet. The goal of MuZero is to optimally compress the video so the quality of the video is maintained with a reduction in data. The final result using MuZero was a 6.28% average reduction in bitrate.{{Cite web |title=MuZero's first step from research into the real world |url=https://www.deepmind.com/blog/muzeros-first-step-from-research-into-the-real-world |access-date=29 April 2022 |website=www.deepmind.com |language=en}}{{cite arXiv |last1=Mandhane |first1=Amol |last2=Zhernov |first2=Anton |last3=Rauh |first3=Maribeth |last4=Gu |first4=Chenjie |last5=Wang |first5=Miaosen |last6=Xue |first6=Flora |last7=Shang |first7=Wendy |last8=Pang |first8=Derek |last9=Claus |first9=Rene |last10=Chiang |first10=Ching-Han |last11=Chen |first11=Cheng |date=14 February 2022 |title=MuZero with Self-competition for Rate Control in VP9 Video Compression |class=eess.IV |eprint=2202.06626 }}
==== AlphaStar ====
{{Main|AlphaStar (software)}}
In 2016, Hassabis discussed the game StarCraft as a future challenge, since it requires strategic thinking and handling imperfect information.{{cite web|url=https://www.theverge.com/2016/3/10/11192774/demis-hassabis-interview-alphago-google-deepmind-ai|title=DeepMind founder Demis Hassabis on how AI will shape the future|date=10 March 2016|website=The Verge}}
In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time. It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match.{{Cite news|url=http://www.extremetech.com/gaming/284441-deepmind-ai-challenges-pro-starcraft-ii-players-wins-almost-every-match|title=DeepMind AI Challenges Pro StarCraft II Players, Wins Almost Every Match|date=24 January 2019|work=Extreme Tech|access-date=24 January 2019|language=en-GB}}
In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game's races, and had earlier unfair advantages fixed.{{cite news|url=https://arstechnica.com/gadgets/2019/07/deepmind-ai-takes-on-the-public-in-starcraft-ii-multiplayer/|title=DeepMind AI is secretly lurking on the public StarCraft II 1v1 ladder|last1=Amadeo|first1=Ron|date=11 July 2019|work=Ars Technica|access-date=18 September 2019}}{{Cite web|url=https://www.reddit.com/r/starcraft/comments/cgvu6r/i_played_against_alphastardeepmind/|title=I played against AlphaStar/Deepmind|access-date=27 July 2019|language=en|website=reddit|date=23 July 2019}} By October 2019, AlphaStar had reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions.{{Cite news|url=https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning|title=AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning|date=31 October 2019|website=DeepMind Blog|access-date=31 October 2019|language=en-GB}}
=== Protein folding ===
{{Main|AlphaFold}}
In 2016, DeepMind turned its artificial intelligence to protein folding, a long-standing problem in molecular biology. In December 2018, DeepMind's AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. "This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem," Hassabis said to The Guardian.{{Cite web|url=https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins|title=Google's DeepMind predicts 3D shapes of proteins|last=Sample|first=Ian|date=2 December 2018|website=The Guardian|language=en|access-date=3 December 2018}} In 2020, in the 14th CASP, AlphaFold's predictions achieved an accuracy score regarded as comparable with lab techniques. Dr Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as "truly remarkable", and said the problem of predicting how proteins fold had been "largely solved".{{Cite web|url=https://www.bbc.co.uk/news/science-environment-55133972|title=One of biology's biggest mysteries 'largely solved' by AI|last=Briggs|first=Helen|date=30 November 2020|website=BBC News|language=en|access-date=30 November 2020}}{{Cite web|url=https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology|title=AlphaFold: a solution to a 50-year-old grand challenge in biology|date=30 November 2020|website=DeepMind|language=en|access-date=30 November 2020}}{{cite news |last1=Shead |first1=Sam |title=DeepMind solves 50-year-old 'grand challenge' with protein folding A.I. |url=https://www.cnbc.com/2020/11/30/deepmind-solves-protein-folding-grand-challenge-with-alphafold-ai.html |access-date=30 November 2020 |publisher=cnbc.com |date=30 November 2020}}
In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire proteomes of 20 other widely studied organisms.{{cite journal |title= What's next for AlphaFold and the AI protein-folding revolution|first=Ewen |last=Callaway|journal=Nature|volume=604|date=2022|issue= 7905|pages= 234–238|doi= 10.1038/d41586-022-00997-5 |pmid= 35418629|bibcode= 2022Natur.604..234C|s2cid= 248156195|doi-access= free}} The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database.
The most recent update, AlphaFold3, was released in May 2024, predicting the interactions of proteins with DNA, RNA, and various other molecules. In a particular benchmark test on the problem of DNA interactions, AlphaFold3 attained an accuracy of 65%, significantly improving the previous state of the art of 28%.{{cite web|title=DeepMind's new AlphaFold 3 expands to DNA, RNA modeling|date=May 8, 2024|last1=Sullivan|first1=Mark|website=Fast Company|url=https://www.fastcompany.com/91120456/deepmind-alphafold-3-dna-rna-modeling}}
In October 2024, Hassabis and John Jumper were jointly awarded half of the 2024 Nobel Prize in Chemistry for protein structure prediction, with the committee citing the achievement of AlphaFold2.{{cite web |title=The Nobel Prize in Chemistry 2024 |url=https://www.nobelprize.org/prizes/chemistry/2024/press-release/ |website=NobelPrize.org |access-date=18 October 2024}}
=== Language models ===
In 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant.{{cite news|title=Here's Why Google's Assistant Sounds More Realistic Than Ever Before|url=http://fortune.com/2017/10/05/google-assistant-deepmind-wavenet-speech-ai/|access-date=20 January 2018|work=Fortune|date=5 October 2017|language=en}}{{cite news|last1=Gershgorn|first1=Dave|title=Google's voice-generating AI is now indistinguishable from humans|url=https://qz.com/1165775/googles-voice-generating-ai-is-now-indistinguishable-from-humans/|access-date=20 January 2018|work=Quartz}} In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet.{{Cite news|url=https://www.cnbc.com/2018/03/31/how-google-makes-money-from-alphabets-deepmind-ai-research-group.html|title=Google is finding ways to make money from Alphabet's DeepMind A.I. technology|last=Novet|first=Jordan|date=31 March 2018|work=CNBC|access-date=3 April 2018}}{{cite web|title=Introducing Cloud Text-to-Speech powered by DeepMind WaveNet technology|url=https://cloudplatform.googleblog.com/2018/03/introducing-Cloud-Text-to-Speech-powered-by-Deepmind-WaveNet-technology.html|website=Google Cloud Platform Blog|access-date=5 April 2018|language=en}} In 2018, DeepMind introduced a more efficient model called WaveRNN co-developed with Google AI.{{Cite web|url=https://deepmind.com/research/publications/efficient-neural-audio-synthesis|archive-url=https://web.archive.org/web/20181231184541/https://deepmind.com/research/publications/efficient-neural-audio-synthesis/|url-status=dead|archive-date=31 December 2018|title=Efficient Neural Audio Synthesis|website=Deepmind|access-date=1 April 2020}}{{Cite web|url=https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices|title=Using WaveNet technology to reunite speech-impaired users with their original voices|website=Deepmind|access-date=1 April 2020}} In 2020 WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, was presented.{{cite conference | last1=Stimberg | first1=Florian | last2=Narest | first2=Alex | last3=Bazzica | first3=Alessio | last4=Kolmodin | first4=Lennart | last5=Barrera Gonzalez | first5=Pablo | last6=Sharonova | first6=Olga | last7=Lundin | first7=Henrik | last8=Walters | first8=Thomas C. | title=2020 54th Asilomar Conference on Signals, Systems, and Computers | chapter=WaveNetEQ — Packet Loss Concealment with WaveRNN | publisher=IEEE | date=1 November 2020 | pages=672–676 | doi=10.1109/ieeeconf51394.2020.9443419 | isbn=978-0-7381-3126-9 }} In 2019, Google started to roll WaveRNN with WavenetEQ out to Google Duo users.{{Cite web|url=http://ai.googleblog.com/2020/04/improving-audio-quality-in-duo-with.html|title=Improving Audio Quality in Duo with WaveNetEQ|website=Google AI Blog|date=April 2020 |language=en|access-date=1 April 2020}}
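WaveNet's key architectural idea, a stack of dilated causal convolutions whose receptive field grows exponentially with depth, can be sketched as follows. The channel count, depth and input here are illustrative, not the published configuration:
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class CausalDilatedConv(nn.Module):
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.pad = dilation                       # left-pad so outputs never depend on future samples
        self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))   # pad on the left only (causal)
        return torch.relu(self.conv(x))

# Eight layers with dilations 1, 2, 4, ..., 128 give a receptive field of 256 samples.
stack = nn.Sequential(*[CausalDilatedConv(32, 2 ** i) for i in range(8)])
signal = torch.randn(1, 32, 16000)                # one second of a 16 kHz feature sequence
print(stack(signal).shape)                        # torch.Size([1, 32, 16000]) – length preserved
</syntaxhighlight>
Because every output sample depends on a long window of past samples but is computed one convolution at a time, sample-by-sample generation with such a stack is expensive, which is why the lighter recurrent WaveRNN architecture mentioned above was developed for on-device use.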
Released in May 2022, Gato is a polyvalent multimodal model. It was trained on 604 tasks, such as image captioning, dialogue, or stacking blocks. On 450 of these tasks, Gato outperformed human experts at least half of the time, according to DeepMind.{{Cite web |last=Wiggers |first=Kyle |date=13 May 2022 |title=DeepMind's new AI system can perform over 600 tasks |url=https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ |access-date=16 April 2024 |website=TechCrunch |language=en-US}} Unlike models like MuZero, Gato does not need to be retrained to switch from one task to the other.
Sparrow is an artificial intelligence-powered chatbot developed by DeepMind to build safer machine learning systems by using a mix of human feedback and Google search suggestions.{{Cite web|url=https://www.marktechpost.com/2022/09/28/deepmind-introduces-sparrow-an-artificial-intelligence-powered-chatbot-developed-to-build-safer-machine-learning-systems/|title=Deepmind Introduces 'Sparrow,' An Artificial Intelligence-Powered Chatbot Developed To Build Safer Machine Learning Systems|first=Khushboo|last=Gupta|date=28 September 2022|accessdate=8 May 2023}}
Chinchilla is a language model developed by DeepMind.{{Cite web|url=https://dataconomy.com/2023/01/12/what-is-chinchilla-ai-chatbot-deepmind/|title=What Is Chinchilla AI: Chatbot Language Model Rival By Deepmind To GPT-3 - Dataconomy|date=12 January 2023|accessdate=8 May 2023}}
In a blog post on 28 April 2022, DeepMind introduced Flamingo, a single visual language model (VLM) that can accurately describe an image after being given only a few training examples.{{Cite web |title=Tackling multiple tasks with a single visual language model |url=https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model |access-date=29 April 2022 |website=www.deepmind.com |language=en}}{{Cite web |last=Alayrac |first=Jean-Baptiste |title=Flamingo: a Visual Language Model for Few-Shot Learning |url=https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/tackling-multiple-tasks-with-a-single-visual-language-model/flamingo.pdf |journal=|year=2022 |arxiv=2204.14198 }}
==== AlphaCode ====
In 2022, DeepMind unveiled AlphaCode, an AI-powered coding engine that writes computer programs at a level comparable to that of an average programmer; the company tested the system against coding challenges created by Codeforces and used in human competitive programming competitions.{{Cite web |last=Vincent |first=James |date=2 February 2022 |title=DeepMind says its new AI coding engine is as good as an average human programmer |url=https://www.theverge.com/2022/2/2/22914085/alphacode-ai-coding-program-automatic-deepmind-codeforce |url-status=live |archive-url=https://web.archive.org/web/20220202163608/https://www.theverge.com/2022/2/2/22914085/alphacode-ai-coding-program-automatic-deepmind-codeforce |archive-date=2 February 2022 |access-date=3 February 2022 |website=The Verge}} After being trained on GitHub data and Codeforces problems and solutions, AlphaCode ranked within the top 54% of human competitors. The program was required to come up with original solutions and was prevented from duplicating existing answers.
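DeepMind's published description of AlphaCode relies on sampling a very large number of candidate programs and filtering them against a problem's example tests before choosing submissions. The sketch below illustrates only that filtering idea; `sample_program` is a hypothetical stand-in for the actual transformer-based generator, and the sampling budget is an arbitrary choice:
<syntaxhighlight lang="python">
import subprocess
import sys

def passes_examples(source: str, examples: list[tuple[str, str]]) -> bool:
    """Run a candidate Python program on each example input and compare its output."""
    for stdin_text, expected in examples:
        try:
            result = subprocess.run([sys.executable, "-c", source], input=stdin_text,
                                    capture_output=True, text=True, timeout=2)
        except subprocess.TimeoutExpired:
            return False
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return False
    return True

def filter_candidates(sample_program, problem: str,
                      examples: list[tuple[str, str]], n_samples: int = 1000) -> list[str]:
    """Sample many candidate programs, drop duplicates, and keep those passing the examples."""
    survivors, seen = [], set()
    for _ in range(n_samples):
        source = sample_program(problem)   # placeholder for the code-generation model
        if source in seen:                 # the system is prevented from duplicating answers
            continue
        seen.add(source)
        if passes_examples(source, examples):
            survivors.append(source)
    return survivors
</syntaxhighlight>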
==== Gemini ====
{{main|Gemini (language model)}}
Gemini is a multimodal large language model which was released on 6 December 2023.{{Cite news |last=Kruppa |first=Miles |date=6 December 2023 |title=Google Announces AI System Gemini After Turmoil at Rival OpenAI |url=https://www.wsj.com/tech/ai/google-announces-ai-system-gemini-after-turmoil-at-rival-openai-10835335 |url-access=subscription |url-status=live |archive-url=https://archive.today/20231206152820/https://www.wsj.com/tech/ai/google-announces-ai-system-gemini-after-turmoil-at-rival-openai-10835335 |archive-date=6 December 2023 |access-date=6 December 2023 |newspaper=The Wall Street Journal |issn=0099-9660}} It is the successor of Google's LaMDA and PaLM 2 language models and sought to challenge OpenAI's GPT-4.{{Cite magazine |last=Knight |first=Will |date=26 June 2023 |title=Google DeepMind's CEO Says Its Next Algorithm Will Eclipse ChatGPT |url=https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230626121231/https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/ |archive-date=26 June 2023 |access-date=21 August 2023 |magazine=Wired}} Gemini comes in 3 sizes: Nano, Pro, and Ultra.{{Cite web |last=Pierce |first=David |date=6 December 2023 |title=Google launches Gemini, the AI model it hopes will take down GPT-4 |url=https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model |access-date=16 April 2024 |website=The Verge |language=en}} Gemini is also the name of the chatbot that integrates Gemini (and which was previously called Bard).{{Cite web |date=8 February 2024 |title=Google is rebranding its Bard AI service as Gemini. Here's what it means. |url=https://www.cbsnews.com/news/google-gemini-ai-bard/ |access-date=16 April 2024 |website=CBS News |language=en-US}}
On 12 December 2024, Google released Gemini 2.0 Flash, the first model in the Gemini 2.0 series. It notably features expanded multimodality, with the ability to also generate images and audio,{{Cite web |last=Haddad |first=C. J. |date=2024-12-11 |title=Google releases the first of its Gemini 2.0 AI models |url=https://www.cnbc.com/2024/12/11/google-releases-the-first-of-its-gemini-2point0-ai-models.html |access-date=2024-12-11 |website=CNBC |language=en}} and is part of Google's broader plans to integrate advanced AI into autonomous agents.{{Cite web |date=2024-12-11 |title=Introducing Gemini 2.0: our new AI model for the agentic era |url=https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/?utm_content=#agents-for-developers |access-date=2024-12-11 |website=Google |language=en-us}}
On 25 March 2025, Google released Gemini 2.5, a reasoning model that stops to "think" before giving a response. Google announced that all future models will also have reasoning ability.{{Cite web |last=Zeff |first=Maxwell |date=2025-03-25 |title=Google unveils a next-gen family of AI reasoning models |url=https://techcrunch.com/2025/03/25/google-unveils-a-next-gen-ai-reasoning-model/ |access-date=2025-03-25 |website=TechCrunch |language=en-US}}{{Cite web |date=2025-03-25 |title=Gemini 2.5: Our most intelligent AI model |url=https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/#enhanced-reasoning |access-date=2025-03-25 |website=Google |language=en-us}} On 30 March 2025, Google released Gemini 2.5 to all free users.{{Cite web |last= |first= |date=2025-03-26 |title=Google rolls-out custom chatbots 'Gems' for free-tier Gemini users: Details |url=https://www.business-standard.com/technology/tech-news/google-rolls-out-custom-chatbots-gems-for-free-tier-gemini-users-details-125032600913_1.html |archive-url=http://web.archive.org/web/20250326151453/https://www.business-standard.com/technology/tech-news/google-rolls-out-custom-chatbots-gems-for-free-tier-gemini-users-details-125032600913_1.html |archive-date=2025-03-26 |access-date=2025-03-30 |website=Business Standard |language=en-US}}
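Gemini models are exposed to developers through Google's generative AI SDKs. Below is a minimal sketch of a text request using the `google-generativeai` Python package; the model name and the environment variable holding the API key are illustrative choices, not the only options:
<syntaxhighlight lang="python">
import os
import google.generativeai as genai

# The API key is read from an environment variable purely for illustration.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")   # example model name
response = model.generate_content("Summarise the rules of Go in two sentences.")
print(response.text)
</syntaxhighlight>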
==== Gemma ====
{{main|Gemma (language model)}}
Gemma is a collection of open-weight large language models. The first ones were released on 21 February 2024 and are available in two distinct sizes: a 7 billion parameter model optimized for GPU and TPU usage, and a 2 billion parameter model designed for CPU and on-device applications. Gemma models were trained on up to 6 trillion tokens of text, employing similar architectures, datasets, and training methodologies as the Gemini model set.{{Cite news |date=2024-02-22 |title=Google Gemma LLMs small enough to run on your computer |url=https://www.theregister.com/2024/02/22/google_gemma_llms/ |archive-url=http://web.archive.org/web/20250126232718/https://www.theregister.com/2024/02/22/google_gemma_llms/ |archive-date=2025-01-26 |access-date= |work=The Register |language=en}}
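Because the weights are openly released, the models can be run with standard open-source tooling. The following is a minimal sketch using the Hugging Face `transformers` library, assuming the `google/gemma-2b` checkpoint and that the licence terms have been accepted on the model hub; the prompt and generation settings are arbitrary:
<syntaxhighlight lang="python">
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"                       # example checkpoint; a 7B variant also exists
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Protein folding is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</syntaxhighlight>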
In June 2024, Google started releasing Gemma 2 models.{{Cite web |last=Yeung |first=Ken |date=2024-06-27 |title=Google's Gemma 2 series launches with not one, but two lightweight model options—a 9B and 27B |url=https://venturebeat.com/ai/googles-gemma-2-series-launches-with-not-one-but-two-lightweight-model-options-a-9b-and-27b/ |access-date=2025-02-22 |website=VentureBeat |language=en-US}} In December 2024, Google introduced PaliGemma 2, an upgraded vision-language model.{{Cite web |date=5 December 2024 |title=Google says its new AI models can identify emotions — and that has experts worried |url=https://techcrunch.com/2024/12/05/google-says-its-new-open-models-can-identify-emotions-and-that-has-experts-worried/ |access-date=5 December 2024 |website=TechCrunch |language=en-us}} In February 2025, they launched PaliGemma 2 Mix, a version fine-tuned for multiple tasks. It is available in 3B, 10B, and 28B parameters with 224px and 448px resolutions.{{Cite web |last=Barron |first=Jenna |date=2025-02-21 |title=Feb 21, 2025: Development tools that have recently added new AI capabilities |url=https://sdtimes.com/ai/feb-21-2025-development-tools-that-have-recently-added-new-ai-capabilities/ |access-date=2025-02-22 |website=SD Times |language=en-US}}
In March 2025, Google released Gemma 3, calling it the most capable model that can be run on a single GPU.{{Cite web |last=Lawler |first=Richard |date=2025-03-12 |title=Google calls Gemma 3 the most powerful AI model you can run on one GPU |url=https://www.theverge.com/ai-artificial-intelligence/627968/google-gemma-3-open-ai-model |access-date=2025-03-16 |website=The Verge |language=en-US}} It has four available sizes: 1B, 4B, 12B, and 27B.{{Cite web |last=David |first=Emilia |date=2025-03-12 |title=Google unveils open source Gemma 3 model with 128k context window |url=https://venturebeat.com/ai/google-unveils-open-source-gemma-3-model-with-128k-context-window/ |access-date=2025-03-16 |website=VentureBeat |language=en-US}} In March 2025, Google introduced TxGemma, an open-source model designed to improve the efficiency of therapeutics development.{{Cite web |title=Introducing TxGemma: Open models to improve therapeutics development |url=https://developers.googleblog.com/en/introducing-txgemma-open-models-improving-therapeutics-development/#:~:text=TxGemma%20models,%20fine-tuned%20from,:%202B,%209B%20and%2027B. |access-date=2025-03-28 |website=Google Developers Blog |language=en}}
In April 2025, Google introduced DolphinGemma, a research artificial intelligence model designed to help decode dolphin communication. The aim is to train a foundation model that can learn the structure of dolphin vocalizations and generate novel dolphin-like sound sequences.{{Cite web |date=2025-04-14 |title=DolphinGemma: How Google AI is helping decode dolphin communication |url=https://blog.google/technology/ai/dolphingemma/ |access-date=2025-04-15 |website=Google |language=en-us}}{{Cite web |title=DolphinGemma: Google Using AI And Pixel 9 Phone To Understand What Dolphins Are Saying |url=https://www.news18.com/tech/dolphingemma-google-using-ai-to-understand-what-dolphins-are-saying-9299420.html |access-date=2025-04-15 |website=News18 |language=en}}
==== SIMA ====
In March 2024, DeepMind introduced Scalable Instructable Multiworld Agent, or SIMA, an AI agent capable of understanding and following natural language instructions to complete tasks across various 3D virtual environments. Trained on nine video games from eight studios and four research environments, SIMA demonstrated adaptability to new tasks and settings without requiring access to game source code or APIs. The agent comprises pre-trained computer vision and language models fine-tuned on gaming data, with language being crucial for understanding and completing given tasks as instructed. DeepMind's research aimed to develop more helpful AI agents by translating advanced AI capabilities into real-world actions through a language interface.{{Cite web |date=13 March 2024 |title=A generalist AI agent for 3D virtual environments |url=https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/ |access-date=27 March 2024 |website=Google DeepMind |language=en}}{{Cite web |last=David |first=Emilia |date=13 March 2024 |title=Google's new AI will play video games with you — but not to win |url=https://www.theverge.com/2024/3/13/24099024/google-deepmind-ai-agent-sima-video-games |access-date=27 March 2024 |website=The Verge |language=en}}
==== Habermas machine ====
{{See also|Pol.is|Deliberative opinion poll}}
In 2024, Google DeepMind published the results of an experiment in which it trained two large language models to help identify and present areas of overlap among a few thousand group members recruited online, using techniques such as sortition to obtain a representative sample of participants. The project is named in honour of Jürgen Habermas.{{Cite web |last=Williams |first=Rhiannon |date=October 17, 2024 |title=AI could help people find common ground during deliberations |url=https://www.technologyreview.com/2024/10/17/1105810/ai-could-help-people-find-common-ground-during-deliberations/ |access-date=2024-10-23 |website=MIT Technology Review |language=en}}{{Cite news |last=Davis |first=Nicola |date=2024-10-17 |title=AI mediation tool may help reduce culture war rifts, say researchers |url=https://www.theguardian.com/technology/2024/oct/17/ai-mediation-tool-may-help-reduce-culture-war-rifts-say-researchers |access-date=2024-10-23 |work=The Guardian |language=en-GB |issn=0261-3077}} In one experiment, participants rated the AI's summaries higher than those of the human mediator 56% of the time.
=== Video generation ===
In May 2024, a multimodal video generation model called Veo was announced at Google I/O 2024. Google claimed that it could generate 1080p videos beyond a minute long. In December 2024, Google released Veo 2, available via VideoFX. It supports 4K resolution video generation, and has an improved understanding of physics.{{Cite news |last= |first= |date=2024-12-17 |title=Google unveils improved AI video generator Veo 2 to rival OpenAI's Sora |url=https://www.thehindu.com/sci-tech/technology/google-unveils-improved-ai-video-generator-veo-2-to-rival-openais-sora/article68994621.ece |access-date=2024-12-20 |work=The Hindu |language=en-IN |issn=0971-751X}}
In April 2025, Google announced that Veo 2 was available in the Gemini app for Advanced users and, free for all users, in Google AI Studio.{{Cite web |last=Wiggers |first=Kyle |date=2025-04-15 |title=Google's Veo 2 video generating model comes to Gemini |url=https://techcrunch.com/2025/04/15/googles-veo-2-video-generator-comes-to-gemini/ |access-date=2025-04-16 |website=TechCrunch |language=en-US}}{{Cite web |date=2025-04-15 |title=Generate videos in Gemini and Whisk with Veo 2 |url=https://blog.google/products/gemini/video-generation/ |access-date=2025-04-16 |website=Google |language=en-us}}
=== Music generation ===
Google DeepMind developed Lyria, a text-to-music model. As of April 2025, it is available in preview mode on Vertex AI.{{Cite web |last=Wiggers |first=Kyle |date=2025-04-09 |title=Google's enterprise cloud gets a music-generating AI model |url=https://techcrunch.com/2025/04/09/google-brings-a-music-generating-ai-model-to-its-enterprise-cloud/ |access-date=2025-04-10 |website=TechCrunch |language=en-US}}
=== Environment generation ===
In February 2024, DeepMind introduced "Genie" (Generative Interactive Environments), an AI model that can generate game-like, action-controllable virtual worlds based on textual descriptions, images, or sketches. Built as an autoregressive latent diffusion model, Genie enables frame-by-frame interactivity without requiring labelled action data for training. Its successor, Genie 2, released in December 2024, expanded these capabilities to generate diverse and interactive 3D environments.{{Cite web |last=Orland |first=Kyle |date=2024-12-06 |title=Google's Genie 2 "world model" reveal leaves more questions than answers |url=https://arstechnica.com/ai/2024/12/googles-genie-2-world-model-reveal-leaves-more-questions-than-answers/ |access-date=2024-12-21 |website=Ars Technica |language=en-US}}
= Robotics =
Released in June 2023, RoboCat is an AI model that can control robotic arms. The model can adapt to new models of robotic arms, and to new types of tasks.{{Cite web |last=Wiggers |first=Kyle |date=21 June 2023 |title=DeepMind's RoboCat learns to perform a range of robotics tasks |url=https://techcrunch.com/2023/06/21/deepminds-robocat-learns-to-perform-a-range-of-robotics-tasks/ |access-date=16 April 2024 |website=TechCrunch |language=en-US}}{{Cite web |date=23 June 2023 |title=Google's DeepMind unveils AI robot that can teach itself unsupervised |url=https://www.independent.co.uk/tech/google-deepmind-ai-robot-robocat-b2362892.html |access-date=16 April 2024 |website=The Independent |language=en}} In March 2025, DeepMind launched two AI models, Gemini Robotics and Gemini Robotics-ER, aimed at improving how robots interact with the physical world.{{Cite web |last=Wiggers |first=Kyle |date=2025-03-12 |title=Google DeepMind unveils new AI models for controlling robots |url=https://techcrunch.com/2025/03/12/google-deepmind-unveils-new-ai-models-for-controlling-robots/ |access-date=2025-03-13 |website=TechCrunch |language=en-US}}
= Sports =
DeepMind researchers have applied machine learning models to the sport of football (often referred to as soccer in North America), modelling the behaviour of football players, including goalkeepers, defenders and strikers, in scenarios such as penalty kicks. The researchers used heat maps and cluster analysis to group players according to how they tend to behave when deciding how to score, or how to prevent the opposing team from scoring.
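A minimal sketch of the kind of cluster analysis described above, grouping players by their penalty-kick tendencies; the features and data are invented for illustration and are not DeepMind's dataset or model.
<syntaxhighlight lang="python">
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Invented features per player: fraction of penalty kicks aimed left, centre and right.
left = rng.uniform(0, 1, size=50)
right = rng.uniform(0, 1 - left)
features = np.column_stack([left, 1 - left - right, right])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for k in range(3):
    members = features[labels == k]
    print(f"cluster {k}: {len(members)} players, mean aim (L/C/R) = {members.mean(axis=0).round(2)}")
</syntaxhighlight>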
The researchers suggest that machine learning models could help democratize the football industry by automatically selecting interesting video clips of a game to serve as highlights. Searching videos for particular events is practical because video analysis is an established field of machine learning, and because extensive sports analytics data already exist, including annotated passes and shots, sensor readings that capture players' movements many times over the course of a game, and game-theoretic models.{{Cite web |title=Advancing sports analytics through AI research |url=https://www.deepmind.com/blog/advancing-sports-analytics-through-ai-research |access-date=29 April 2022 |website=DeepMind |language=en}}{{Cite journal |last1=Tuyls |first1=Karl |last2=Omidshafiei |first2=Shayegan |last3=Muller |first3=Paul |last4=Wang |first4=Zhe |last5=Connor |first5=Jerome |last6=Hennes |first6=Daniel |last7=Graham |first7=Ian |last8=Spearman |first8=William |last9=Waskett |first9=Tim |last10=Steel |first10=Dafydd |last11=Luc |first11=Pauline |date=6 May 2021 |title=Game Plan: What AI can do for Football, and What Football can do for AI |url=https://www.jair.org/index.php/jair/article/view/12505 |journal=Journal of Artificial Intelligence Research |language=en |volume=71 |pages=41–88 |doi=10.1613/jair.1.12505 |s2cid=227013043 |issn=1076-9757|doi-access=free |arxiv=2011.09192 }}
= Archaeology =
In March 2022, DeepMind unveiled Ithaca, a deep neural network for ancient texts named after the Greek island in Homer's Odyssey.{{Cite web |date=9 March 2022 |title=Predicting the past with Ithaca |url=https://deepmind.google/discover/blog/predicting-the-past-with-ithaca/ |website=Google DeepMind |language=en}} The model helps researchers restore missing text in damaged Greek inscriptions and estimate their date and geographical origin.{{Cite web |last=Vincent |first=James |date=9 March 2022 |title=DeepMind's new AI model helps decipher, date, and locate ancient inscriptions |url=https://www.theverge.com/2022/3/9/22968773/ai-machine-learning-ancient-inscriptions-texts-deepmind-ithaca-model |access-date=16 April 2024 |website=The Verge |language=en}} The work builds on Pythia, a text-restoration network that DeepMind released in 2019. Ithaca achieves 62% accuracy in restoring damaged texts and 71% accuracy in identifying their geographical origin, and can date texts to within 30 years. The authors claimed that the use of Ithaca by "expert historians" raised the accuracy of their work from 25 to 72 percent. However, Eleanor Dickey noted that this test had in fact been carried out with students, saying it was unclear how helpful Ithaca would be to "genuinely qualified editors".
The team is working on extending the model to other ancient languages, including Demotic, Akkadian, Hebrew, and Mayan.
= Materials science =
In November 2023, Google DeepMind announced an open-source tool, Graph Networks for Materials Exploration (GNoME). The tool proposed millions of materials previously unknown to chemistry, including several hundred thousand predicted stable crystalline structures, of which 736 had been experimentally realised at the time of the release.{{Cite journal |last1=Merchant |first1=Amil |last2=Batzner |first2=Simon |last3=Schoenholz |first3=Samuel S. |last4=Aykol |first4=Muratahan |last5=Cheon |first5=Gowoon |last6=Cubuk |first6=Ekin Dogus |date=December 2023 |title=Scaling deep learning for materials discovery |journal=Nature |language=en |volume=624 |issue=7990 |pages=80–85 |doi=10.1038/s41586-023-06735-9 |issn=1476-4687|doi-access=free |pmid=38030720 |pmc=10700131 |bibcode=2023Natur.624...80M }}{{Cite web |title=Google DeepMind's new AI tool helped create more than 700 new materials |url=https://www.technologyreview.com/2023/11/29/1084061/deepmind-ai-tool-for-new-materials-discovery/ |access-date=2 January 2024 |website=MIT Technology Review |language=en}} However, according to Anthony Cheetham, GNoME did not make "a useful, practical contribution to the experimental materials scientists."{{cite web|title=Is Google's AI Actually Discovering 'Millions of New Materials?'|last1=Koebler|first1=Jason|date=April 11, 2024|website=404 Media|url=https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/}} A review article by Cheetham and Ram Seshadri was unable to identify any "strikingly novel" materials found by GNoME, with most being minor variants of already-known materials.{{cite journal|last1=Cheetham|first1=Anthony K.|last2=Seshadri|first2=Ram|author-link1=Anthony Cheetham|title=Artificial intelligence driving materials discovery? Perspective on the article: Scaling Deep Learning for Materials Discovery|journal=Chemistry of Materials|year=2024|volume=36|issue=8|pages=3490–3495|doi=10.1021/acs.chemmater.4c00643|doi-access=free|pmid=38681084 |pmc=11044265}}
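The screening idea can be sketched as follows: candidate structures are generated, a learned model predicts how far each lies above the convex hull of known materials, and only candidates predicted to be stable are kept. The predictor below is a random placeholder, not GNoME's graph network, and the compositions are invented.
<syntaxhighlight lang="python">
import random

def predicted_energy_above_hull(formula):
    # Placeholder for a trained graph network; a real model ingests a crystal
    # graph rather than a formula string. Values are in eV/atom.
    random.seed(formula)
    return random.gauss(0.05, 0.05)

candidates = [f"Li{x}Mn{y}O{x + y}" for x in range(1, 6) for y in range(1, 6)]
stable = [f for f in candidates if predicted_energy_above_hull(f) <= 0.0]
print(f"{len(stable)} of {len(candidates)} candidates predicted stable")
</syntaxhighlight>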
= Mathematics =
==AlphaTensor==
In October 2022, DeepMind released AlphaTensor, which used reinforcement learning techniques similar to those in AlphaGo to find novel algorithms for matrix multiplication.{{cite journal |url=https://www.nature.com/articles/d41586-022-03166-w |title=DeepMind AI invents faster algorithms to solve tough maths puzzles |date=5 October 2022 |journal=Nature |last=Hutson |first=Matthew|doi=10.1038/d41586-022-03166-w |pmid=36198824 |s2cid=252737506 }}{{cite web |url=https://www.technologyreview.com/2022/10/05/1060717/deepmind-uses-its-game-playing-ai-to-best-a-50-year-old-record-in-computer-science/ |title=DeepMind's game-playing AI has beaten a 50-year-old record in computer science |date=5 October 2022 |website=MIT Technology Review |first=Will Douglas |last=Heaven}} In the special case of multiplying two 4×4 matrices with integer entries, where only the evenness or oddness of the entries is recorded, AlphaTensor found an algorithm requiring only 47 distinct multiplications; the previous optimum, known since 1969, was the more general Strassen algorithm, using 49 multiplications.{{cite news |date=November 2022 |title=AI Reveals New Possibilities in Matrix Multiplication |work=Quanta Magazine |url=https://www.quantamagazine.org/ai-reveals-new-possibilities-in-matrix-multiplication-20221123/ |access-date=26 November 2022}} Computer scientist Josh Alman described AlphaTensor as "a proof of concept for something that could become a breakthrough," while Virginia Vassilevska Williams called it "a little overhyped" despite also acknowledging its basis in reinforcement learning as "something completely different" from previous approaches.
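AlphaTensor searches for low-rank decompositions of the matrix-multiplication tensor, of which Strassen's scheme is the classical example: it multiplies 2×2 blocks with 7 rather than 8 scalar multiplications, giving 7 × 7 = 49 when applied recursively to 4×4 matrices. The sketch below shows Strassen's scheme only; AlphaTensor's 47-multiplication decomposition (valid for arithmetic modulo 2) is tabulated in the paper and is not reproduced here.
<syntaxhighlight lang="python">
import numpy as np

def strassen_2x2(A, B):
    """Strassen's scheme: the 2x2 product using 7 multiplications instead of 8."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
assert np.array_equal(strassen_2x2(A, B), A @ B)  # agrees with ordinary multiplication
</syntaxhighlight>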
==AlphaGeometry==
{{main|AlphaGeometry}}
AlphaGeometry is a neuro-symbolic AI that was able to solve 25 out of 30 geometry problems of the International Mathematical Olympiad, a performance comparable to that of a gold medalist.{{Cite web |last=Zia |first=Tehseen |date=January 24, 2024 |title=AlphaGeometry: DeepMind's AI Masters Geometry Problems at Olympiad Levels |url=https://www.unite.ai/alphageometry-how-deepminds-ai-masters-geometry-problems-at-olympian-levels/ |access-date=2024-05-03 |website=Unite.ai}}
Traditional geometry programs are symbolic engines that rely exclusively on human-coded rules to generate rigorous proofs, which makes them inflexible in unusual situations. AlphaGeometry combines such a symbolic engine with a specialised large language model trained on synthetic data of geometrical proofs. When the symbolic engine cannot find a formal, rigorous proof on its own, it queries the large language model, which suggests an auxiliary geometrical construct that allows the deduction to continue. However, it is unclear how applicable this method is to other domains of mathematics or reasoning, because symbolic engines rely on domain-specific rules and because synthetic training data is needed.
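The alternation between the two components can be sketched as a simple loop: run the symbolic engine; if the goal is not reached, ask the language model for an auxiliary construct and try again. The functions below are toy placeholders, not AlphaGeometry's engine or model.
<syntaxhighlight lang="python">
def symbolic_deduce(premises, goal):
    # Placeholder: a real engine exhaustively applies geometric deduction rules.
    # This toy version "succeeds" once two auxiliary constructs are available.
    return sum(p.startswith("auxiliary") for p in premises) >= 2

def suggest_construction(premises, attempt):
    # Placeholder: a real language model proposes an auxiliary point, line or circle.
    return f"auxiliary construct {attempt}"

def prove(premises, goal, max_attempts=5):
    premises = set(premises)
    for attempt in range(max_attempts):
        if symbolic_deduce(premises, goal):
            return True, premises
        premises.add(suggest_construction(premises, attempt))
    return symbolic_deduce(premises, goal), premises

proved, facts = prove({"AB = AC"}, "angle ABC = angle ACB")
print("proved:", proved)
</syntaxhighlight>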
==AlphaProof==
AlphaProof is an AI model that couples a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself to master games such as chess and Go. The language model is a Gemini model fine-tuned to automatically translate natural-language problem statements into formal statements, creating a large library of formal problems of varying difficulty. For this purpose, mathematical statements are written in the formal language Lean. At the 2024 International Mathematical Olympiad, AlphaProof, together with an adapted version of AlphaGeometry, for the first time reached the level of a silver medalist across the combined problem categories.{{Cite web |last=Roberts |first=Siobhan |date=July 25, 2024 |title=AI achieves silver-medal standard solving International Mathematical Olympiad problems |url=https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.html/ |access-date=2024-08-03 |website=The New York Times}}{{Cite web |last=AlphaProof and AlphaGeometry teams |date=July 25, 2024 |title=AI achieves silver-medal standard solving International Mathematical Olympiad problems |url=https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/ |access-date=2024-08-03 |website=deepmind.google}}
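The formal statements in such a library are ordinary Lean theorems. As a toy illustration (not taken from AlphaProof's training data), the informal statement "the sum of two even natural numbers is even" admits the following Lean 4 formalisation; in a problem library the proof could initially be left open.
<syntaxhighlight lang="lean">
-- Informal problem: "the sum of two even natural numbers is even".
-- One possible formalisation; names and phrasing are illustrative only.
theorem sum_of_evens_is_even (a b k l : Nat)
    (ha : a = 2 * k) (hb : b = 2 * l) : a + b = 2 * (k + l) := by
  subst ha
  subst hb
  exact (Nat.mul_add 2 k l).symm
</syntaxhighlight>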
= AlphaDev =
{{Main article |AlphaDev}}
In June 2023, DeepMind announced that AlphaDev, which searches for improved computer science algorithms using reinforcement learning, had discovered a more efficient way of coding a sorting algorithm and a hashing algorithm. The new sorting algorithm was 70% faster for shorter sequences and 1.7% faster for sequences exceeding 250,000 elements, and the new hashing algorithm was 30% faster in some cases. The sorting algorithm was accepted into the LLVM libc++ standard library's sorting routines; it was the first change to those routines in more than a decade and the first to involve an algorithm discovered using AI.{{cite web |last=Heaven |first=Will Douglas |date=June 7, 2023 |title=Google DeepMind's game-playing AI just found another way to make code faster |url=https://www.technologyreview.com/2023/06/07/1074184/google-deepmind-game-ai-alphadev-algorithm-code-faster/ |url-status=live |archive-url=https://web.archive.org/web/20230614083801/https://www.technologyreview.com/2023/06/07/1074184/google-deepmind-game-ai-alphadev-algorithm-code-faster/ |archive-date=2023-06-14 |access-date=2023-06-20 |publisher=MIT Technology Review}} The hashing algorithm was released in an open-source library.{{cite web|title=AlphaDev discovers faster sorting algorithms |url=https://deepmind.google/discover/blog/alphadev-discovers-faster-sorting-algorithms|website=DeepMind Blog|date=14 May 2024 |access-date=18 June 2024}} Google estimates that these two algorithms are used trillions of times every day.{{Cite web |last=Sparkes |first=Matthew |date=7 June 2023 |title=DeepMind AI's new way to sort objects could speed up global computing |url=https://www.newscientist.com/article/2376512-deepmind-ais-new-way-to-sort-objects-could-speed-up-global-computing/ |access-date=2024-06-20 |website=New Scientist |language=en-US}}
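The library routines AlphaDev improved include fixed-length sorting networks used for very short inputs. The sketch below renders a three-element compare-exchange network in Python; AlphaDev's actual savings were found at the level of the corresponding assembly instructions and are not shown here.
<syntaxhighlight lang="python">
def compare_exchange(values, i, j):
    # Swap two positions so that the smaller value ends up first.
    if values[i] > values[j]:
        values[i], values[j] = values[j], values[i]

def sort3(values):
    # A fixed network of three compare-exchanges sorts any three elements.
    compare_exchange(values, 0, 1)
    compare_exchange(values, 1, 2)
    compare_exchange(values, 0, 1)
    return values

print(sort3([3, 1, 2]))  # [1, 2, 3]
</syntaxhighlight>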
= Chip design =
AlphaChip is a reinforcement learning-based neural architecture that guides the task of chip placement. DeepMind claimed that the time needed to create chip layouts fell from weeks to hours. Its chip designs have been used in every Tensor Processing Unit (TPU) iteration since 2020.{{Cite web |last=Ghoshal |first=Abhimanyu |date=2024-11-30 |title=Singularity alert: AIs are already designing their own chips |url=https://newatlas.com/ai-humanoids/3-mind-blowing-ways-ai-chip-design-singularity/ |access-date=2024-12-02 |website=New Atlas |language=en-US}}{{Cite web |last=Shilov |first=Anton |date=2024-09-28 |title=Google unveils AlphaChip AI-assisted chip design technology — chip layout as a game for a computer |url=https://www.tomshardware.com/tech-industry/google-unveils-alphachip-ai-assisted-chip-design-technology-chip-layout-as-a-game-for-a-computer |access-date=2024-12-02 |website=Tom's Hardware |language=en}}
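Chip placement is framed as a sequential game: blocks are placed on a grid one at a time and the resulting layout is scored, for example by an estimate of total wire length. The toy sketch below uses a greedy choice and half-perimeter wirelength in place of AlphaChip's trained policy and full reward; the netlist is invented.
<syntaxhighlight lang="python">
from itertools import product

nets = {"n1": ["a", "b"], "n2": ["b", "c"], "n3": ["a", "c"]}  # invented netlist
grid = list(product(range(4), range(4)))                       # 4x4 placement grid

def wirelength(placement):
    # Half-perimeter wirelength (HPWL) summed over all nets with placed blocks.
    total = 0
    for blocks in nets.values():
        points = [placement[b] for b in blocks if b in placement]
        if points:
            xs, ys = zip(*points)
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {}
for block in ["a", "b", "c"]:  # place one block per step, as in a turn-based game
    free = [p for p in grid if p not in placement.values()]
    placement[block] = min(free, key=lambda p: wirelength({**placement, block: p}))

print(placement, "wirelength:", wirelength(placement))
</syntaxhighlight>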
= Miscellaneous contributions to Google =
Google has stated that DeepMind algorithms have greatly increased the efficiency of cooling its data centres by automatically balancing the cost of hardware failures against the cost of cooling.{{cite web|title=DeepMind AI Reduces Google Data Centre Cooling Bill by 40% |url=https://www.deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40 |website=DeepMind Blog|date=20 July 2016 |access-date=14 May 2024}} In addition, DeepMind (alongside other Alphabet AI researchers) assists Google Play's personalised app recommendations. DeepMind has also collaborated with the Android team at Google to create two features made available to people with devices running Android Pie, the ninth instalment of Google's mobile operating system. These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. This was the first time DeepMind had used these techniques on such a small scale, as typical machine learning applications require orders of magnitude more computing power.{{cite web |title=DeepMind, meet Android |url=https://deepmind.com/blog/deepmind-meet-android/ |website=DeepMind Blog|date=8 May 2018 |access-date=14 May 2024}}
= DeepMind Health =
In July 2016, a collaboration between DeepMind and Moorfields Eye Hospital was announced to develop AI applications for healthcare.{{cite news|url=https://www.bbc.com/news/technology-36713308|title=Google's DeepMind to peek at NHS eye scans for disease analysis|date=6 July 2016|publisher=BBC|last1=Baraniuk|first1=Chris|access-date=6 July 2016}} DeepMind's technology would be applied to the analysis of anonymised eye scans, searching for early signs of diseases leading to blindness.
In August 2016, a research programme with University College London Hospital was announced with the aim of developing an algorithm that can automatically differentiate between healthy and cancerous tissues in head and neck areas.{{cite news|url=https://www.bbc.co.uk/news/technology-37230806|title=Google DeepMind targets NHS head and neck cancer treatment|date=31 August 2016|publisher=BBC|last1=Baraniuk|first1=Chris|access-date=5 September 2016}}
There are also projects with the Royal Free London NHS Foundation Trust and Imperial College Healthcare NHS Trust to develop new clinical mobile apps linked to electronic patient records.{{cite news|title=DeepMind announces second NHS partnership|url=http://www.itpro.co.uk/public-sector/27833/deepmind-announces-second-nhs-partnership|access-date=23 December 2016|publisher=IT Pro|date=23 December 2016}} Staff at the Royal Free Hospital were reported as saying in December 2017 that access to patient data through the app had saved a 'huge amount of time' and made a 'phenomenal' difference to the management of patients with acute kidney injury. Test result data is sent to staff's mobile phones and alerts them to changes in the patient's condition. It also enables staff to see if someone else has responded, and to show patients their results in visual form.{{cite news|title=Google DeepMind's Streams technology branded 'phenomenal'|url=https://www.digitalhealth.net/2017/12/google-deepmind-streams-royal-free/|access-date=23 December 2017|publisher=Digital Health|date=4 December 2017}}{{Cite news |date=17 February 2018 |title=A dedicated WhatsApp for clinicians |url=https://www.bmj.com/bmj/section-pdf/966505?path=/bmj/360/8141/This_Week.full.pdf |work=the bmj}}
In November 2017, DeepMind announced a research partnership with the Cancer Research UK Centre at Imperial College London with the goal of improving breast cancer detection by applying machine learning to mammography.{{cite web|url=https://siliconangle.com/blog/2017/11/24/google-deepmind-announces-new-research-partnership-fight-breast-cancer-ai/|title=Google DeepMind announces new research partnership to fight breast cancer with AI|date=24 November 2017|website=Silicon Angle}} Additionally, in February 2018, DeepMind announced it was working with the U.S. Department of Veterans Affairs in an attempt to use machine learning to predict the onset of acute kidney injury in patients, and also more broadly the general deterioration of patients during a hospital stay so that doctors and nurses can more quickly treat patients in need.{{cite web|url=https://venturebeat.com/2018/02/22/googles-deepmind-wants-ai-to-spot-kidney-injuries/|title=Google's DeepMind wants AI to spot kidney injuries|date=22 February 2018|website=Venture Beat}}
DeepMind developed an app called Streams, which sends alerts to doctors about patients at risk of acute kidney injury.{{cite web |last=Evenstad |first=Lis |url=https://www.computerweekly.com/news/252443164/DeepMind-Health-must-be-transparent-to-gain-public-trust-review-finds |title=DeepMind Health must be transparent to gain public trust, review finds |website=ComputerWeekly.com |date=15 June 2018 |access-date=14 November 2018}} On 13 November 2018, DeepMind announced that its health division and the Streams app would be absorbed into Google Health.{{cite news |last=Vincent |first=James |url=https://www.theverge.com/2018/11/13/18091774/google-deepmind-health-absorbing-streams-team-ai-assistant-nurse-doctor|title=Google is absorbing DeepMind's health care unit to create an 'AI assistant for nurses and doctors' |work=The Verge |date=13 November 2018 |access-date=14 November 2018}} Privacy advocates said the announcement betrayed patient trust and appeared to contradict previous statements by DeepMind that patient data would not be connected to Google accounts or services.{{cite news |last=Hern |first=Alex |url=https://www.theguardian.com/technology/2018/nov/14/google-betrays-patient-trust-deepmind-healthcare-move |title=Google 'betrays patient trust' with DeepMind Health move |date=14 November 2018 |newspaper=The Guardian |access-date=14 November 2018}}{{cite magazine |last=Stokel-Walker |first=Chris |url=https://www.wired.co.uk/article/google-deepmind-nhs-health-data |title=Why Google consuming DeepMind Health is scaring privacy experts |magazine=Wired |date=14 November 2018 |access-date=15 November 2018}} A spokesman for DeepMind said that patient data would still be kept separate from Google services or projects.{{cite news |last= Murphy |first=Margi |url= https://www.telegraph.co.uk/technology/2018/11/14/deepmind-boss-defends-controversial-google-health-deal/ |archive-url=https://ghostarchive.org/archive/20220112/https://www.telegraph.co.uk/technology/2018/11/14/deepmind-boss-defends-controversial-google-health-deal/ |archive-date=12 January 2022 |url-access=subscription |url-status=live |title=DeepMind boss defends controversial Google health deal |date=14 November 2018 |work=The Telegraph |access-date=14 November 2018}}{{cbignore}}
== NHS data-sharing controversy ==
In April 2016, New Scientist obtained a copy of a data-sharing agreement between DeepMind and the Royal Free London NHS Foundation Trust, which operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement showed that DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care records at these hospitals, including personal details such as whether patients had been diagnosed with HIV, suffered from depression or had ever undergone an abortion. DeepMind obtained the data in order to conduct research seeking better outcomes in various health conditions.{{cite news |url=https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data |title=Revealed: Google AI has access to huge haul of NHS patient data |first=Hal |last=Hodson |work=New Scientist |date=29 April 2016 }}{{cite news |url=https://www.newscientist.com/article/mg23030722-900-big-data-if-theres-nothing-to-hide-why-be-secretive/ |title=Leader: If Google has nothing to hide about NHS data, why so secretive? |work=New Scientist |date=4 May 2016 }}
A complaint was filed to the Information Commissioner's Office (ICO), arguing that the data should be pseudonymised and encrypted.{{cite news |url=http://www.computerweekly.com/news/450296175/ICO-probes-Google-DeepMind-patient-data-sharing-deal-with-NHS-Hospital-Trust |title=ICO probes Google DeepMind patient data-sharing deal with NHS Hospital Trust |first=Caroline |last=Donnelly |work=Computer Weekly |date=12 May 2016 }} In May 2016, New Scientist published a further article claiming that the project had failed to secure approval from the Confidentiality Advisory Group of the Medicines and Healthcare products Regulatory Agency.{{cite news |url=https://www.newscientist.com/article/2088056-exclusive-googles-nhs-deal/ |title=Did Google's NHS patient data deal need ethical approval? |first=Hal |last=Hodson |work=New Scientist |date=25 May 2016 |access-date=28 May 2016 }}
In 2017, the ICO concluded a year-long investigation that focused on how the Royal Free NHS Foundation Trust tested the app, Streams, in late 2015 and 2016.{{Cite web|archive-url=https://web.archive.org/web/20180616142219/https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law/|url=https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law/|title=Royal Free - Google DeepMind trial failed to comply with data protection law|date=17 August 2017|website=ico.org.uk|language=en|access-date=15 February 2018|archive-date=16 June 2018 }} The ICO found that the Royal Free failed to comply with the Data Protection Act when it provided patient details to DeepMind, and identified several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test. DeepMind published its thoughts{{Cite web |url=https://deepmind.com/blog/ico-royal-free/ |title=The Information Commissioner, the Royal Free, and what we've learned |website=DeepMind |access-date=15 February 2018}} on the investigation in July 2017, saying "we need to do better" and highlighting several initiatives it had launched for transparency, oversight and engagement. These included developing a patient and public involvement strategy{{Cite web |url=https://deepmind.com/applied/deepmind-health/patients/|title=For Patients |website=DeepMind |access-date=15 February 2018}} and being transparent in its partnerships.
In May 2017, Sky News published a leaked letter from the National Data Guardian, Dame Fiona Caldicott, revealing that in her "considered opinion" the data-sharing agreement between DeepMind and the Royal Free took place on an "inappropriate legal basis".{{cite news |url=http://news.sky.com/story/google-received-16-million-nhs-patients-data-on-an-inappropriate-legal-basis-10879142/ |title=Google received 1.6 million NHS patients' data on an 'inappropriate legal basis' |first=Alexander J |last=Martin |work=Sky News |date=15 May 2017 |access-date=16 May 2017 }} The Information Commissioner's Office ruled in July 2017 that the Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind.{{cite news |first=Alex |last=Hern |url=https://www.theguardian.com/technology/2017/jul/03/google-deepmind-16m-patient-royal-free-deal-data-protection-act |title=Royal Free breached UK data law in 1.6m patient deal with Google's DeepMind |newspaper=The Guardian |date=3 July 2017}}
= DeepMind Ethics and Society =
In October 2017, DeepMind announced a new research unit, DeepMind Ethics & Society.{{Cite news|url=https://deepmind.com/blog/why-we-launched-deepmind-ethics-society/|title=Why we launched DeepMind Ethics & Society|work=DeepMind Blog|access-date=25 March 2018}} Its goal is to fund external research on the following themes: privacy, transparency, and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world's challenges. Through this work, the team hopes to better understand the ethical implications of AI and help society see how AI can be beneficial.{{Cite news|url=https://www.wired.co.uk/article/deepmind-ethics-and-society-artificial-intelligence|title=DeepMind's new AI ethics unit is the company's next big move|last=Temperton|first=James|work=Wired (UK)|access-date=3 December 2017}}
This new subdivision of DeepMind is entirely separate from the Partnership on Artificial Intelligence to Benefit People and Society, a coalition of leading AI companies, academics, civil society organisations and nonprofits of which DeepMind is also a member.{{cite news|last1=Hern|first1=Alex|title=DeepMind announces ethics group to focus on problems of AI|url=https://www.theguardian.com/technology/2017/oct/04/google-deepmind-ai-artificial-intelligence-ethics-group-problems|access-date=8 December 2017|work=The Guardian|date=4 October 2017}} The DeepMind Ethics and Society board is also distinct from the mooted AI Ethics Board that Google originally agreed to form when acquiring DeepMind.{{cite news |last1=Hern |first1=Alex |title=DeepMind announces ethics group to focus on problems of AI |url=https://www.theguardian.com/technology/2017/oct/04/google-deepmind-ai-artificial-intelligence-ethics-group-problems |access-date=12 June 2020 |work=The Guardian |date=4 October 2017}}
= DeepMind Professors of machine learning =
DeepMind sponsors three chairs of machine learning:
- At the University of Cambridge, held by Neil Lawrence,{{Cite web|url=https://www.cam.ac.uk/research/news/cambridge-appoints-first-deepmind-professor-of-machine-learning|title=Cambridge appoints first DeepMind Professor of Machine Learning|date=18 September 2019|website=University of Cambridge}} in the Department of Computer Science and Technology,
- At the University of Oxford, held by Michael Bronstein,{{Cite web|url=http://www.cs.ox.ac.uk/news/1862-full.html|title=DeepMind funds new post at Oxford University – the DeepMind Professorship of Artificial Intelligence|website=Department of Computer Science}} in the Department of Computer Science, and
- At University College London, held by Marc Deisenroth,{{Cite web|url=https://www.ucl.ac.uk/news/2019/nov/deepmind-renews-its-commitment-ucl|title=DeepMind renews its commitment to UCL |date=29 March 2021|website=University College London}} in the Department of Computer Science.
= See also =
= References =
= External links =
- {{Official website}}
- [https://github.com/google-deepmind GitHub Repositories]
{{Google AI}}
{{Google LLC}}
{{Generative AI}}
{{Artificial intelligence navbox}}
{{Existential risk from artificial intelligence}}
{{authority control}}
Category:2010 establishments in England
Category:Artificial intelligence laboratories
Category:British companies established in 2010
Category:Game artificial intelligence
Category:Applied machine learning
Category:British subsidiaries of foreign companies
Category:Alphabet Inc. subsidiaries
Category:2014 mergers and acquisitions
Category:Information technology companies of the United Kingdom