AlphaGo versus Lee Sedol

{{Short description|Go match between AI and human}}

{{Use dmy dates|date=November 2020}}

{{Infobox multigame series

| title = AlphaGo versus Lee Sedol
4–1

| bgcolor =

| fgcolor =

| compact = yes

| teams =

| location1 = Seoul, South Korea, 9–15 March 2016

| date1 =

| result1 = AlphaGo W+R

| date2 =

| result2 = AlphaGo B+R

| date3 =

| result3 = AlphaGo W+R

| date4 =

| result4 = Lee Sedol W+R

| date5 =

| result5 = AlphaGo W+R

| previous =

| next =

}}

AlphaGo versus Lee Sedol, also known as the DeepMind Challenge Match, was a five-game Go match between top Go player Lee Sedol and AlphaGo, a computer Go program developed by DeepMind, played in Seoul, South Korea between 9 and 15 March 2016. AlphaGo won all but the fourth game;{{cite web | title= Artificial intelligence: Go master Lee Se-dol wins against AlphaGo program |url= https://www.bbc.co.uk/news/technology-35797102| author=|date= 13 March 2016| website= BBC News Online | access-date= 13 March 2016 }} all games were won by resignation.{{cite web|url=https://gogameguru.com/go-news/computer-go/|title=Computer Go|publisher=Go Game Guru|access-date=13 March 2016|archive-url=https://web.archive.org/web/20160314060214/https://gogameguru.com/go-news/computer-go/|archive-date=14 March 2016|url-status=usurped}} The match has been compared with the historic chess match between Deep Blue and Garry Kasparov in 1997.

The winner of the match was slated to win $1 million. Since AlphaGo won, Google DeepMind stated that the prize would be donated to charities, including UNICEF, and Go organisations.{{cite news|url=http://hosted.ap.org/dynamic/stories/A/AS_SKOREA_GAME_MAN_VS_COMPUTER_ASOL-?SITE=SCAND&SECTION=HOME&TEMPLATE=DEFAULT |archive-url=https://web.archive.org/web/20181222173604/http://hosted.ap.org/dynamic/stories/A/AS_SKOREA_GAME_MAN_VS_COMPUTER_ASOL-?SITE=SCAND&SECTION=HOME&TEMPLATE=DEFAULT |url-status=dead |archive-date=22 December 2018 |title=Human champion certain he'll beat AI at ancient Chinese game |agency=Associated Press|date=22 February 2016 |access-date=22 February 2016 }} Lee received $170,000 ($150,000 for participating in the five games and an additional $20,000 for winning one game).{{cite web|url=http://www.baduk.or.kr/news/report_view.asp?news_no=1671|title=이세돌 vs 알파고, '구글 딥마인드 챌린지 매치' 기자회견 열려|publisher=Korea Baduk Association|date=22 February 2016|access-date=22 February 2016|language=ko|archive-url=https://web.archive.org/web/20160303210212/http://www.baduk.or.kr/news/report_view.asp?news_no=1671|archive-date=3 March 2016|url-status=dead}}

After the match, The Korea Baduk Association awarded AlphaGo the highest Go grandmaster rank – an "honorary 9 dan". It was given in recognition of AlphaGo's "sincere efforts" to master Go.{{cite web|url=http://www.straitstimes.com/asia/east-asia/googles-alphago-gets-divine-go-ranking|title=Google's AlphaGo gets 'divine' Go ranking|work=The Straits Times|date=15 March 2016}} This match was chosen by Science as one of the runners-up for Breakthrough of the Year, on 22 December 2016.{{cite web|url=https://www.science.org/content/article/ai-protein-folding-our-breakthrough-runners|title=From AI to protein folding: Our Breakthrough runners-up|work=Science|date=22 December 2016|access-date=30 December 2016}}

Background

=Difficult challenge in artificial intelligence=

{{main|Computer Go}}

{{external media | width = 210px | float = right | headerimage = | video1 = [https://www.youtube.com/watch?v=hQm4SKT3YO8&index=27&list=PLKB8zkQFlMyJ7iBJqT9pnnwTfXz_jtxbu Machine trains self to beat humans at world's hardest game, Retro Report], 2:51, Retro Report{{cite web | title =Machine trains self to beat humans at world's hardest game | date =10 March 2016 | publisher =Retro Report | url =https://www.retroreport.org/voices/machine-trains-self-to-beat-humans-at-worlds-hardest-game/ | access-date =15 December 2016 | archive-date =20 December 2016 | archive-url =https://web.archive.org/web/20161220132811/https://www.retroreport.org/voices/machine-trains-self-to-beat-humans-at-worlds-hardest-game/ | url-status =dead }} }}

Go is a complex board game that requires intuition, creativity, and strategic thinking.{{cite magazine|url=https://www.wired.com/2016/03/googles-ai-wins-first-game-historic-match-go-champion/|title=Google's AI Wins First Game in Historic Match With Go Champion|date=9 March 2016|magazine=WIRED}}{{cite web|url=https://www.koreatimes.co.kr/www/news/tech/2016/03/325_200068.html |title=AlphaGo victorious once again |work=The Korea Times|date=11 March 2016 |access-date=16 March 2016}} It has long been considered a difficult challenge in the field of artificial intelligence (AI). It is considerably more difficult{{cite journal|last1=Bouzy|first1=Bruno|last2=Cazenave|first2=Tristan|title=Computer Go: An AI oriented survey|journal=Artificial Intelligence|date= 9 August 2001|volume=132|issue=1|pages=39–103|doi=10.1016/S0004-3702(01)00127-8|doi-access=}} to solve than chess. Many researchers in artificial intelligence consider Go to require more elements that mimic human thought than chess does.{{Citation| url=https://query.nytimes.com/gst/fullpage.html?res=9C04EFD6123AF93AA15754C0A961958260 | title=To Test a Powerful Computer, Play an Ancient Game | last=Johnson | first=George | work=The New York Times| date=29 July 1997 | access-date = 16 June 2008}} Mathematician I. J. Good wrote in 1965:{{cite web|first=I J |last=Good |url=http://www.chilton-computing.org.uk/acl/literature/reports/p019.htm |title=Go, Jack Good |work=New Scientist |date=21 January 1965 |access-date=16 March 2016 |via=Atlas Computer Laboratory, Chilton}}

{{blockquote|Go on a computer? – In order to program a computer to play a reasonable game of Go, rather than merely a legal game – it is necessary to formalise the principles of good strategy, or to design a learning program. The principles are more qualitative and mysterious than in chess, and depend more on judgement. So, I think it will be even more difficult to program a computer to play a reasonable game of Go than of chess.}}
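The scale gap Good alludes to can be made concrete with the rough game-tree estimate b^d (branching factor b, typical game length d) used in the computer-Go literature. The figures below are commonly cited approximations, not exact counts:

```python
# Rough game-tree size estimate b**d for chess and Go, using the
# commonly cited approximations (chess: b ~ 35, d ~ 80; Go: b ~ 250,
# d ~ 150). These are order-of-magnitude illustrations only.
import math

def tree_size_log10(branching: float, depth: int) -> float:
    """Return log10 of the approximate game-tree size b**d."""
    return depth * math.log10(branching)

chess = tree_size_log10(35, 80)    # chess: ~35**80
go = tree_size_log10(250, 150)     # Go: ~250**150

print(f"chess ~10^{chess:.1f}, go ~10^{go:.1f}")
```

On these assumptions Go's game tree is over two hundred orders of magnitude larger than chess's, which is why exhaustive search was never an option.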

Prior to 2015,{{Cite journal|title = Mastering the game of Go with deep neural networks and tree search|journal = Nature| issn= 0028-0836|pages = 484–489|volume = 529|issue = 7587|doi = 10.1038/nature16961|pmid = 26819042|first1 = David|last1 = Silver|author-link1=David Silver (programmer)|first2 = Aja|last2 = Huang|author-link2=Aja Huang|first3 = Chris J.|last3 = Maddison|first4 = Arthur|last4 = Guez|first5 = Laurent|last5 = Sifre|first6 = George van den|last6 = Driessche|first7 = Julian|last7 = Schrittwieser|first8 = Ioannis|last8 = Antonoglou|first9 = Veda|last9 = Panneershelvam|first10= Marc|last10= Lanctot|first11= Sander|last11= Dieleman|first12=Dominik|last12= Grewe|first13= John|last13= Nham|first14= Nal|last14= Kalchbrenner|first15= Ilya|last15= Sutskever|author-link15=Ilya Sutskever|first16= Timothy|last16= Lillicrap|first17= Madeleine|last17= Leach|first18= Koray|last18= Kavukcuoglu|first19= Thore|last19= Graepel|first20= Demis |last20=Hassabis|author-link20=Demis Hassabis|date= 28 January 2016|bibcode = 2016Natur.529..484S|s2cid = 515925}}{{closed access}} the best Go programs only managed to reach amateur dan level.{{cite web|last=Wedd|first=Nick|title=Human-Computer Go Challenges|url=http://www.computer-go.info/h-c/index.html|work=computer-go.info|access-date=28 October 2011}} On the small 9×9 board, the computer fared better, and some programs managed to win a fraction of their 9×9 games against professional players. 
Before AlphaGo, some researchers had claimed that computers would never defeat top humans at Go.{{cite web|last=Cho|first=Adrian|url=https://www.science.org/content/article/huge-leap-forward-computer-mimics-human-brain-beats-professional-game-go|title='Huge leap forward': Computer that mimics human brain beats professional at game of Go|work=Science |date=27 January 2016}} Elon Musk, an early investor in DeepMind, said in 2016 that experts in the field thought AI was 10 years away from achieving a victory against a top professional Go player.{{cite web|url=https://www.inverse.com/article/12620-elon-musk-says-google-deepmind-s-go-victory-is-a-10-year-jump-for-a-i|title=Elon Musk Says Google Deepmind's Go Victory Is a 10-Year Jump For A.I.|first=William|last=Hoffman|date=9 March 2016|work=Inverse|access-date=12 March 2016}}

The AlphaGo versus Lee Sedol match has been compared to the 1997 chess match in which Garry Kasparov lost to the IBM computer Deep Blue. Kasparov's loss to Deep Blue is considered the moment a computer became better than humans at chess.{{cite news|url=https://www.bbc.com/news/technology-35785875|title=Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol|work=BBC News|date=12 March 2016}}

AlphaGo differs significantly from previous AI efforts. Instead of relying on probability algorithms hard-coded by human programmers, AlphaGo uses neural networks to estimate its probability of winning. AlphaGo analyses an extensive library of Go material, including matches, players, analytics, literature, and games played by AlphaGo against itself and other players. Once set up, AlphaGo operates independently of its developer team and evaluates the best path to winning the game. By combining neural networks with Monte Carlo tree search, AlphaGo evaluates enormous numbers of candidate continuations many moves into the future.{{Citation needed|date=June 2021}}
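Schematically, the combination of a learned evaluation with Monte Carlo tree search can be illustrated on a toy game. The sketch below runs a bare-bones UCT search where a random rollout stands in for AlphaGo's value-network estimate; the game, names, and constants are illustrative, not DeepMind's code:

```python
# Bare-bones Monte Carlo tree search (UCT) on a toy take-away game:
# players alternately remove 1-3 stones; taking the last stone wins.
# AlphaGo couples a search like this with deep policy/value networks;
# here a random rollout stands in for the value-network estimate.
import math
import random

MOVES = (1, 2, 3)

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining (the game state)
        self.parent = parent
        self.move = move          # move that produced this state
        self.children = []
        self.visits = 0
        self.wins = 0.0           # value for the player who moved into this node

def legal(stones):
    return [m for m in MOVES if m <= stones]

def rollout(stones):
    """Random playout; win-probability estimate for the player to move."""
    if stones == 0:
        return 0.0                # no stones left: the previous player won
    return 1.0 - rollout(stones - random.choice(legal(stones)))

def ucb(child):
    """UCB1: exploit average value, explore rarely visited children."""
    return (child.wins / child.visits
            + 1.4 * math.sqrt(math.log(child.parent.visits) / child.visits))

def best_move(stones, iterations=5000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and not set(legal(node.stones)) - {c.move for c in node.children}:
            node = max(node.children, key=ucb)
        # Expansion: add one untried child, if any remain.
        untried = list(set(legal(node.stones)) - {c.move for c in node.children})
        if untried:
            m = random.choice(untried)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # Evaluation: value for the player to move at the new node.
        value = rollout(node.stones)
        # Backpropagation: flip perspective at each level up the tree.
        while node is not None:
            node.visits += 1
            node.wins += 1.0 - value
            node = node.parent
            value = 1.0 - value
    return max(root.children, key=lambda c: c.visits).move

random.seed(1)
print(best_move(10))  # optimal play removes 2, leaving a multiple of 4
```

The search never enumerates the full game tree; it concentrates visits on promising branches, which is the property that made Monte Carlo methods viable for Go.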

Related research results are being applied to fields such as cognitive science, pattern recognition and machine learning.{{cite journal|last=Müller|first=Martin|title=Computer Go|journal=Artificial Intelligence|volume=134|date=January 2002|issue=1–2|pages=145–179|doi=10.1016/S0004-3702(01)00121-7|doi-access=}}{{rp|150}}

=Match against Fan Hui=

{{main|AlphaGo versus Fan Hui}}

File:FHvAG5.jpg

AlphaGo defeated European champion Fan Hui, a 2 dan professional, 5–0 in October 2015, the first time an AI had beaten a human professional player at the game on a full-sized board without a handicap.{{cite web |url=https://www.bbc.com/news/technology-35420579 |title=Google achieves AI 'breakthrough' by beating Go champion |work=BBC News |date=27 January 2016|access-date=28 January 2016}}{{citation |url=http://www.nature.com/news/go-players-react-to-computer-defeat-1.19255 |title=Go players react to computer defeat |first=Elizabeth |last=Gibney |date=27 January 2016 |journal=Nature |doi=10.1038/nature.2016.19255|s2cid=146868978 |url-access=subscription }} Some commentators stressed the gulf between Fan and Lee, who is ranked 9 dan professional. The computer programs Zen and Crazy Stone had previously defeated human players ranked 9 dan professional with handicaps of four or five stones.{{cite web|url=https://gogameguru.com/zen-computer-go-program-beats-takemiya-masaki-4-stones/|title=Zen computer Go program beats Takemiya Masaki with just 4 stones!|publisher=Go Game Guru|access-date=28 January 2016|archive-url=https://web.archive.org/web/20160201162313/https://gogameguru.com/zen-computer-go-program-beats-takemiya-masaki-4-stones/|archive-date=1 February 2016|url-status=usurped}}{{cite web|title=「アマ六段の力。天才かも」囲碁棋士、コンピューターに敗れる 初の公式戦 |url=http://sankei.jp.msn.com/life/news/130320/igo13032020420000-n1.htm |publisher=MSN Sankei News |access-date=27 March 2013 |url-status=dead |archive-url=https://web.archive.org/web/20130324221549/http://sankei.jp.msn.com/life/news/130320/igo13032020420000-n1.htm |archive-date=24 March 2013 }} Canadian AI specialist Jonathan Schaeffer, commenting after the win against Fan, compared AlphaGo to a "child prodigy" that lacked experience, and considered that "the real achievement will be when the program plays a player in the true top echelon." At the time, he believed that Lee would win the match in March 2016.
Hajin Lee, a professional Go player and the International Go Federation's secretary-general, commented that she was "very excited" at the prospect of an AI challenging Lee, and thought the two players had an equal chance of winning.

In the aftermath of his match against AlphaGo, Fan Hui noted that the game had taught him to be a better player and to see things he had not previously seen. By March 2016, Wired reported that his ranking had risen from 633 in the world to around 300.{{cite magazine|url=https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go|title=The Sadness and Beauty of Watching Google's AI Play Go|date=11 March 2016|magazine=WIRED}}

=Preparation=

Go experts found errors in AlphaGo's play against Fan, particularly relating to a lack of awareness of the entire board. Before the game against Lee, it was unknown how much the program had improved its game since its October match.{{cite journal |url=https://www.science.org/content/article/update-why-week-s-man-versus-machine-go-match-doesn-t-matter-and-what-does |title=Update: Why this week's man-versus-machine Go match doesn't matter (and what does) |first=Dana |last=Mackenzie |journal=Science |date=9 March 2016 |doi=10.1126/science.aaf4152 |url-access=subscription }}{{cite web |url=http://gogameguru.com/can-alphago-defeat-lee-sedol/ |title=Can AlphaGo defeat Lee Sedol? |first=Ben |last=Kloester |publisher=Go Game Guru |date=4 March 2016 |access-date=10 March 2016 |archive-url=https://web.archive.org/web/20160311010256/https://gogameguru.com/can-alphago-defeat-lee-sedol/ |archive-date=11 March 2016 |url-status=usurped}} AlphaGo's original training dataset started with games of strong amateur players from internet Go servers, after which AlphaGo trained by playing against itself for tens of millions of games.{{cite web |url=https://www.youtube.com/watch?v=yCALyQRN3hw |title=Match 4 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo |website=YouTube |time=6:09:35-6:11:20 |date=13 March 2016 |access-date=24 March 2016 }}{{cite web |url=https://www.youtube.com/watch?v=qUAmTYHEyM8 |title=Match 3 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo |website=YouTube |date=12 March 2016 |access-date=20 March 2016 }}

Players

=AlphaGo=

{{main|AlphaGo}}

File:Alphago logo Reversed.svg

AlphaGo is a computer program developed by Google DeepMind to play the board game Go. AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. The system's neural networks were initially bootstrapped from human game-play expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a KGS Go Server database of around 30 million moves from 160,000 games by KGS 6 to 9 dan human players.{{cite magazine|title = In Major AI Breakthrough, Google System Secretly Beats Top Player at the Ancient Game of Go|url = https://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/|magazine = WIRED|access-date = 1 February 2016|date = 27 January 2016|last = Metz|first = Cade}} Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.{{cite web |url=http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html|title=Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning |date=27 January 2016 |work=Google Research Blog}} The system does not use a "database" of moves to play. As one of the creators of AlphaGo explained:{{cite magazine|url=https://www.wired.com/2016/03/googles-ai-wins-pivotal-game-two-match-go-grandmaster/|title=Google's AI Wins Pivotal Second Game in Match With Go Grandmaster|date=10 March 2016|magazine=WIRED|access-date=12 March 2016}}

{{blockquote|Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands—and much better than we, as Go players, could come up with.}}
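The two-stage pipeline described above — supervised imitation of expert games followed by self-play reinforcement learning — can be sketched in miniature. The toy below fits a tabular policy to "expert" demonstrations and then sharpens it by self-play on a simple take-away game; everything here is illustrative, since AlphaGo used deep networks over board positions, not tables:

```python
# Toy analogue of AlphaGo's two-stage training: a tabular policy over a
# take-away game (remove 1-3 stones; taking the last stone wins) is first
# fitted to "expert" demonstrations (cf. the KGS supervised stage), then
# improved by self-play reinforcement (cf. the self-play RL stage).
import random

random.seed(0)
MOVES = (1, 2, 3)

def legal(stones):
    return [m for m in MOVES if m <= stones]

# Policy: weights[state][move]; higher weight -> move sampled more often.
def new_policy(max_stones):
    return {n: {m: 1.0 for m in legal(n)} for n in range(1, max_stones + 1)}

def sample(policy, stones):
    moves, weights = zip(*policy[stones].items())
    return random.choices(moves, weights=weights)[0]

def expert_move(stones):
    """Optimal play: leave a multiple of 4 when possible."""
    return stones % 4 if stones % 4 in MOVES else random.choice(legal(stones))

# Stage 1: supervised imitation of expert games.
def imitate(policy, games=2000, max_stones=10):
    for _ in range(games):
        n = max_stones
        while n > 0:
            m = expert_move(n)
            policy[n][m] += 1.0
            n -= m

# Stage 2: self-play reinforcement - upweight the winner's moves.
def self_play(policy, games=2000, max_stones=10, lr=0.05):
    for _ in range(games):
        n, player, history = max_stones, 0, []
        while n > 0:
            m = sample(policy, n)
            history.append((player, n, m))
            n -= m
            player ^= 1
        winner = 1 - player  # the player who took the last stone
        for p, state, move in history:
            policy[state][move] *= (1 + lr) if p == winner else (1 - lr)

policy = new_policy(10)
imitate(policy)
self_play(policy)
best = max(policy[10], key=policy[10].get)
print("preferred opening move from 10 stones:", best)  # optimal play takes 2
```

As in AlphaGo, imitation gives the policy a strong starting point, and self-play then reinforces whatever actually wins rather than whatever experts happened to play.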

In the match against Lee, AlphaGo used about the same computing power as it had in the match against Fan Hui,{{cite tweet |user=demishassabis|number=708488229750591488|date=11 March 2016|title=We are using roughly same amount of compute power as in Fan Hui match: distributing search over further machines has diminishing returns|first=Demis |last=Hassabis|author-link=Demis_Hassabis|access-date=14 March 2016}} where it used 1,202 CPUs and 176 GPUs. The Economist reported that it used 1,920 CPUs and 280 GPUs.{{cite news|url=https://www.economist.com/news/science-and-technology/21694540-win-or-lose-best-five-battle-contest-another-milestone|title=Showdown|newspaper=The Economist|date=12 March 2016|access-date=19 November 2016}} Google has also stated that its proprietary tensor processing units were used in the match against Lee Sedol.{{Cite web|url=https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html|title=Google supercharges machine learning tasks with TPU custom chip|author-link1=Norman Jouppi|last=Jouppi|first=Norm|date=18 May 2016|website=Google Cloud Platform Blog|access-date=2016-06-26}}

= Lee Sedol =

{{main|Lee Sedol}}

File:Lee Se-dol 2012.jpg

Lee Sedol is a professional Go player of 9 dan rank ([http://gobase.org/information/players/?pp=Lee+SeDol Lee SeDol], gobase.org, retrieved 22 June 2010) and one of the strongest players in the history of Go. He started his career in 1996 (promoted to professional dan rank at the age of 12) and has won 18 international titles since then.{{cite web|url=http://www.shanghaidaily.com/article/article_xinhua.aspx?id=322918|title=Lee Sedol expects 'not easy' game with AlphaGo in 3rd Go match |work=Shanghai Daily}} He is a "national hero" in his native South Korea, known for his unconventional and creative play.{{cite web|url=https://www.newscientist.com/article/2079871-im-in-shock-how-an-ai-beat-the-worlds-best-human-at-go/|title='I'm in shock!' How an AI beat the world's best human at Go|first=Mark|last=Zastrow|work=New Scientist}} Lee Sedol initially predicted he would defeat AlphaGo in a "landslide". Some weeks before the match he won the Korean Myungin title, a major championship.{{cite web|url=https://gogameguru.com/go-commentary-lee-sedol-vs-park-junghwan-43rd-myeongin-final-game-4/|title=Go Commentary: Lee Sedol vs Park Junghwan – 43rd Myeongin Final, Game 4|publisher=Go Game Guru|access-date=13 March 2016|archive-url=https://web.archive.org/web/20160503045849/https://gogameguru.com/go-commentary-lee-sedol-vs-park-junghwan-43rd-myeongin-final-game-4/|archive-date=3 May 2016|url-status=usurped}}

Games

The match was a five-game match with one million US dollars as the grand prize, using Chinese rules with a 7.5-point komi. For each game there was a two-hour set time limit for each player followed by three 60-second byo-yomi overtime periods. Each game started at 13:00 KST (04:00 GMT).{{cite web|url=https://www.deepmind.com/alpha-go.html|title=AlphaGo|publisher=Google DeepMind|access-date=10 March 2016|archive-url=https://web.archive.org/web/20160130230207/http://www.deepmind.com/alpha-go.html|archive-date=30 January 2016|url-status=dead}}
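Under Chinese rules, the winner is decided by area (stones plus surrounded territory), with White receiving 7.5 points of komi as compensation for moving second; since the komi is fractional, a drawn game is impossible. A minimal sketch of the decision rule (the area totals are illustrative inputs, not computed from a real board):

```python
# Decide a game under Chinese area scoring with 7.5 komi, as used in the
# DeepMind Challenge Match. Area totals are given as inputs here; a real
# scorer would count stones plus enclosed territory from the final board.
KOMI = 7.5

def result(black_area: float, white_area: float) -> str:
    margin = black_area - (white_area + KOMI)   # komi compensates White
    if margin > 0:
        return f"B+{margin}"
    return f"W+{-margin}"

# On a 19x19 board the two areas sum to 361, so Black needs at least
# 185 points to win under a 7.5 komi.
print(result(185, 176))  # B+1.5
print(result(184, 177))  # W+0.5
```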

The match was played at the Four Seasons Hotel in Seoul, South Korea in March 2016 and was video-streamed live with commentary; the English language commentary was done by Michael Redmond (9-dan professional) and Chris Garlock.{{cite web|url=https://www.theguardian.com/technology/2016/feb/05/google-ai-alphago-world-no-1-lee-se-dol-live-broadcast|title=Google's AI AlphaGo to take on world No 1 Lee Se-dol in live broadcast|work=The Guardian|date=5 February 2016|access-date=15 February 2016}}{{cite web|url=http://www.businessinsider.com/google-deepmind-to-play-go-against-lee-sedol-in-south-korea-four-seasons-2016-2?r=UK&IR=T|title=Google DeepMind is going to take on the world's best Go player in a luxury 5-star hotel in South Korea|work=Business Insider|date=22 February 2016|access-date=23 February 2016}}{{cite web|title = YouTube will livestream Google's AI playing Go superstar Lee Sedol in March|url = https://venturebeat.com/2016/02/04/youtube-will-livestream-googles-ai-playing-go-superstar-lee-sedol-in-march/|website = VentureBeat|access-date = 7 February 2016|date = 4 February 2016|last = Novet|first = Jordan}} Aja Huang, a DeepMind team member and amateur 6-dan Go player, placed stones on the Go board for AlphaGo, which ran through the Google Cloud Platform with its server located in the United States.{{cite web|url=http://chinese.joins.com/gb/article.do?method=detail&art_id=148225&category=005001|title=李世乭:即使Alpha Go得到升级也一样能赢|publisher=JoongAng Ilbo|date=23 February 2016|access-date=24 February 2016|language=zh}}

=Summary=

{| class="wikitable" style="float:left;"
|-
! scope="col" | Game
! scope="col" | Date
! scope="col" | Black
! scope="col" | White
! scope="col" | Result
! scope="col" | Moves
|-
| 1 || 9 March 2016 || style="text-align:center" | Lee Sedol || {{win|AlphaGo}} || Lee Sedol resigned || 186 [http://gokifu.com/s/2ipk-gokifu-20160309-Lee_Sedol(9p)-AlphaGo(9p).html Game 1]
|-
| 2 || 10 March 2016 || {{win|AlphaGo}} || style="text-align:center" | Lee Sedol || Lee Sedol resigned || 211 [http://gokifu.com/s/2ipv-gokifu-20160310-AlphaGo(9p)-Lee_Sedol(9p).html Game 2]
|-
| 3 || 12 March 2016 || style="text-align:center" | Lee Sedol || {{win|AlphaGo}} || Lee Sedol resigned || 176 [http://gokifu.com/s/2iq2-gokifu-20160312-Lee_Sedol(9p)-AlphaGo(9p).html Game 3]
|-
| 4 || 13 March 2016 || style="text-align:center" | AlphaGo || {{win|Lee Sedol}} || AlphaGo resigned || 180 [http://gokifu.com/s/2iq8-gokifu-20160313-Alphago(9p)-Lee_Sedol(9p).html Game 4]
|-
| 5 || 15 March 2016 || style="text-align:center" | Lee Sedol{{Ref |note1|[note 1]}} || {{win|AlphaGo}} || Lee Sedol resigned || 280 [http://gokifu.com/s/2iqt-gokifu-20160315-Lee_Sedol(9p)-AlphaGo(9p).html Game 5]
|-
| colspan=6 | Result: AlphaGo 4 – 1 Lee Sedol
|-
| colspan=6 | {{note|note1}}note 1: For Game Five, under the official rules, it was intended that the colour assignments would be done at random.{{cite magazine|url=https://www.wired.com/2016/03/final-game-alphago-lee-sedol-big-deal-humanity/ |title=Why the Final Game Between AlphaGo and Lee Sedol Is Such a Big Deal for Humanity|magazine=Wired |date=14 March 2016|access-date=18 March 2016}} However, during the press conference after the fourth match, Lee requested "... since I won with white, I really do hope that in the fifth match I could win with black because winning with black is much more valuable."{{cite web|url=https://www.youtube.com/watch?v=yCALyQRN3hw&t=22412|title=Match 4 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo|publisher=DeepMind}} Hassabis agreed to allow Sedol to play with black.
|}

{{clear}}

=Game 1=

AlphaGo (white) won the first game. Lee appeared to be in control throughout much of the match, but AlphaGo gained the advantage in the final 20 minutes, and Lee resigned. Lee stated afterwards that he had made a critical error at the beginning of the match; he said that the computer's strategy in the early part of the game was "excellent" and that the AI had made one unusual move that no human Go player would have made.{{cite news |url=https://www.bbc.co.uk/news/technology-35761246 |title=Google's AI beats world Go champion in first of five matches |publisher=BBC |date=9 March 2016 |access-date=9 March 2016 }} David Ormerod, commenting on the game at Go Game Guru, described Lee's seventh stone as "a strange move to test AlphaGo's strength in the opening", characterising the move as a mistake and AlphaGo's response as "accurate and efficient". He described AlphaGo's position as favourable in the first part of the game, considering that Lee started to come back with move 81 before making "questionable" moves at 119 and 123, followed by a "losing" move at 129.{{cite web|url=https://gogameguru.com/alphago-defeats-lee-sedol-game-1/|title=AlphaGo defeats Lee Sedol in first game of historic man vs machine match|date=9 March 2016|access-date=9 March 2016|publisher=Go Game Guru|archive-url=https://web.archive.org/web/20160503045838/https://gogameguru.com/alphago-defeats-lee-sedol-game-1/|archive-date=3 May 2016|url-status=usurped}} Professional Go player Cho Hanseung commented that AlphaGo's game had greatly improved from when it beat Fan Hui in October 2015. Michael Redmond described the computer's game as being more aggressive than against Fan.{{cite journal |url=http://www.nature.com/news/the-go-files-ai-computer-wins-first-match-against-master-go-player-1.19544 |title=The Go Files: AI computer wins first match against master Go player |first=Tanguy |last=Chouard |journal=Nature |date=9 March 2016 |doi=10.1038/nature.2016.19544 |s2cid=180287588 |url-access=subscription }}

According to 9-dan Go grandmaster Kim Seong-ryong, Lee seemed stunned by AlphaGo's strong play on the 102nd stone.{{cite web|url=http://english.hani.co.kr/arti/english_edition/e_international/734275.html|title=Surprised at his loss, Lee Se-dol says he's looking forward to another chance|work=The Hankyoreh}} After watching AlphaGo make the game's 102nd move, Lee mulled over his options for more than 10 minutes.

{| style="display:inline; display:inline-table;"
|-
| style="border: solid thin; padding: 2px;" |
{{Goban
| | | | | | | | | |98| | | | | | | | |
| | | | | | | | |48|97|54| |43|53|49| | |96|
| | | | | |b5| | |32|31| |55| |52| |12|10|11|
| | | |w2| | |44| |18|33| |b9| | |w8| x|b1| |
| | | | | | | | |34|35|24|25| |29|30| | |13|
| | |w6| | | |66|41|37|36|28|19|99| |14| | | |
| | | | | | |80| | |38|27|26| | |16|15| | |
| | | | | | | | |42|68|23|22| | |20|17| | |
| | | | | | | |64|67| |45|39|40| | |21| | |
| | | | x| | |65|63|69|61| | |46| | | x| | |
| | | | | | | | | | |47|60| | | | | | |
| | | |93| | | | | |62|58|59|50| | | |b7| |
| | |92|83| | | | |77| | |51|56| | | | | |
| | |81|82|89| | | | | | |57|70| | | | | |
| | |88|90| | | | | | | |71|72| | | |78| |
| | | |w4|91|84|85| | | x| |73|74| | | x| | |
| | | | | |79|86| | | | |75|76| | |b3| | |
| | | | | |87| | | | |95|94| | | | | | |
| | | | | | | | | | | | | | | | | | |
|20}}
|-
| style="text-align:center" | First 99 moves
|}

{| style="display:inline; display:inline-table;"
|-
| style="border: solid thin; padding: 2px;" |
{{Goban
| | | | | | | | | | w| | | | | | | | |
| | | | | | | | | w| b| w| | b| b| b|67|68| w|
| | |16| | | b| | | w| b| | b|79| w|78| w| w| b|
| | | | w| | | w| | w| b| | b| |01| w|36| b|70|
| | | | | | | | | w| b| w| b| | b| w| |14| b|
| | | w| | | | w| b| b| w| w| b| b| | w|13|10|12|69
| | | | | | | w|52| | w| b| w| | | w| b| |21|
|72|74| | | |86|85|51| w| w| b| w|00| | w| b|15|08|
|71|66|50| | |64|75| w| b| | b| b| w| | | b| |11|
|73|65|82|54| |76| b| b| b| b| | | w| | |03|02| |
| | | | |63|77| | | | | b| w| | |07|06|04| |
|83|19|45| b| | | | | | w| w| b| w| |49|05| b|09|
| |18| w| b| | | | | b| | | b| w|80|48|81| | |
|46|17| b| w| b| | | | | | | b| w| |29|26|43|41|
| |20| w| w| | | | | | | | b| w| |27|23| w|42|44
|62|58|61| w| b| w| b| | | x| | b| w| |31|30|28| |
|84|55|56|59| | b| w| | | |25| b| w| |35| b|32| |
| |60|57|53| | b| | | |24| b| w|22| |33|34|37|38|
| | | | | | | | | | |47| | | | |39| |40|
|20}}
|-
| style="text-align:center" | Moves 100–186
|}

=Game 2=

AlphaGo (black) won the second game. Lee stated afterwards that "AlphaGo played a nearly perfect game",{{cite news |url=https://www.bbc.co.uk/news/technology-35771705 |title=Google AI wins second Go game against world champion |publisher=BBC |date=10 March 2016 |access-date=10 March 2016}} "from very beginning of the game I did not feel like there was a point that I was leading".{{cite web|url=https://www.theverge.com/2016/3/10/11191184/lee-sedol-alphago-go-deepmind-google-match-2-result|title=Google's DeepMind beats Lee Se-dol again to go 2-0 up in historic Go series|first=Sam |last=Byford|date=10 March 2016|work=The Verge}} One of the creators of AlphaGo, Demis Hassabis, said that the system was confident of victory from the midway point of the game, even though the professional commentators could not tell which player was ahead.

Michael Redmond (9p) noted that AlphaGo's 19th stone (move 37) was "creative" and "unique" – a move that no human would ever have made. Lee took an unusually long time to respond. An Younggil (8p) called AlphaGo's move 37 "a rare and intriguing shoulder hit" but said Lee's counter was "exquisite". He stated that control passed between the players several times before the endgame, and especially praised AlphaGo's moves 151, 157, and 159, calling them "brilliant".{{cite web |url=https://gogameguru.com/alphago-races-ahead-2-0-lee-sedol/ |title=AlphaGo races ahead 2–0 against Lee Sedol |first=David |last=Ormerod |publisher=Go Game Guru |date=10 March 2016 |access-date=11 March 2016 |archive-url=https://web.archive.org/web/20160311075132/https://gogameguru.com/alphago-races-ahead-2-0-lee-sedol/ |archive-date=11 March 2016 |url-status=usurped}}

AlphaGo showed anomalies and moves from a broader perspective, which professional Go players described as looking like mistakes at first sight but as an intentional strategy in hindsight.{{cite news |url=http://www.shanghaidaily.com/article/article_xinhua.aspx?id=322918 |title=Lee Sedol expects 'not easy' game with AlphaGo in 3rd Go match |work=Shanghai Daily |date= 10 March 2016 |access-date=10 March 2016 }} As one of the creators of the system explained, AlphaGo does not attempt to maximize its points or its margin of victory, but tries to maximize its probability of winning. If AlphaGo must choose between a scenario where it will win by 20 points with 80 percent probability and another where it will win by 1.5 points with 99 percent probability, it will choose the latter, even if it must give up points to achieve it. In particular, move 167 by AlphaGo seemed to give Lee a fighting chance, and commentators said it looked like a blatant mistake. An Younggil said, "So when AlphaGo plays a slack looking move, we may regard it as a mistake, but perhaps it should more accurately be viewed as a declaration of victory?"
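This decision rule follows directly from maximizing win probability rather than expected margin, as the hypothetical numbers from the example above show:

```python
# AlphaGo optimizes probability of winning, not margin of victory.
# With the hypothetical candidate moves below (numbers taken from the
# scenario in the text), maximizing expected margin and maximizing win
# probability pick different moves.
candidates = {
    "aggressive": {"p_win": 0.80, "margin": 20.0},
    "safe":       {"p_win": 0.99, "margin": 1.5},
}

by_margin = max(candidates, key=lambda m: candidates[m]["p_win"] * candidates[m]["margin"])
by_p_win = max(candidates, key=lambda m: candidates[m]["p_win"])

print(by_margin)  # aggressive: 0.80*20 = 16 expected points vs 0.99*1.5 ~ 1.5
print(by_p_win)   # safe: the win-probability criterion prefers the 99% line
```

A margin-maximizing player would take the aggressive line; a win-probability maximizer happily trades points for certainty, which is why "slack" endgame moves by AlphaGo could be winning moves.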

{| style="display:inline; display:inline-table;"
|-
| style="border: solid thin; padding: 2px;" |
{{Goban
| | | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | |
| | | | |97|99|80|95|13|83| | | | | | | | |
| | |b3| x|31|94|93|91|82| x|67| | |35| |b1| |71|
| | | | | |96|84|92| | | | | | | | |33|70|
| | |86|81| | | | | | | | | | | |34|32|65|72
| | |30|85| | | | | | | | | | | | | |66|
| |88|87|89|61| |98| | | | | |40| | | | | |
| |90| |62| |73| | | | | | | | |38|36|64| |
| | | |14| | | | | | x| | | |39|37|63| | |
| | | |58|57| |74| | | | | | | | | | | |
| | |60|59|52|55| |77| | | | | | | | | | |
| | |28|44|51|53| |75| | | | | | |69| | | |
| |24|b9|43|49|50|54|68|76| | | | | | | |12| |
| |22|20|23|48|45|56|47| | |78| | | | |15|16| |
| |21|17|w2|26| |41|46| |29|79| |11| |b5| x|w4| |
| |19|18|25| |10|42| | | | | | |b7|w6|w8| | |
| | | |27| | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | |
|20}}
|-
| style="text-align:center" | First 99 moves
|}

{| style="display:inline; display:inline-table;"
|-
| style="border: solid thin; padding: 2px;" |
{{Goban
| | | | | | | | | | | |97|96|66|98|44| | |
| | | | | | | | | | | | |71|45|42|43|38| |
| | | | | b| b| w| b| b| b| | |73|70|74|33|30|32|
|95|37| b|99| b| w| b| b| w| x| b| | | b|72| b|31| b|34
|93|92|36|46|47| w| w| w| | | | |53| | | | b| w|
|94| | w| b| | | | |09| | | |52|51| | w| w| | w
| | | w| b| | |35|29| |40|63|59|61|58|62| | | w|
| | w| b| b| b|55| w|49| |65|28|60| w|57| | | | |
| | w|17| w| | b|00|39|01|69|67|15|68|64| w| w| w| |
| | |18| w|50| |48|41|06|08|16|12|19| b| b| b| | |
| | | | w| b| | w|02|03| | |13| | | | | |54|
| | | w| b|56| b| | b|04|07|11| | | | | |83|84|
| |88| w| w| b| b| | b|05| |10|27| | | b| |85|86|
| | w| b| b| b| w| w| w| w|80|90|89| | | | | w| |
|87| w| w| b| w| b| w| b|77|79| w|14|91| | | b| w| |
| | b| b| w| w| | b| w|78| b| b|23| b|25| b| x| w| |
| | b| w| b| | w| w| | | | |22|24| b| w| w| | |
| | | | b| | | |82|81| |75|26|21|20| | | | |
| | | | | | | | | | | | |76| | | | | |
|20}}
|-
| style="text-align:center" | Moves 100–199
|}

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

| | | | | | | | | | | | b| w| w| w| w| | |

| | | | | | | | | | | |01| b| b| w| b| w| |

| | | | | b| b| w| b| b| b| | | b| w| w| b| w| w|

| b| b| b| b| b| w| b| b| w| x| b| | | b| w| b| b| b| w

| b| w| w| w| b| w| w| w| | | | | b| | | | b| w|

| w| | w| b| | | | | b| | | | w| b| | w| w| | w

| |00| w| b| | | b| b| | w| b| b| b| w| w| | | w|

| | w| b| b| b| b| w| b| | b| w| w| w| b| | | | |

| | w| b| w| | b| w| b| b| b| b| b| w| w| w| w| w| |

| | | w| w| w| | w| b| w| w| w| w| b| b| b| b|09|10|

| | | | w| b| | w| w| b| | | b| | | | | | w|

| | | w| | w| b| | b| | b| b| | | | | | b| w|

| | w| w| w| b| b| | b| b| | w| b| | | b| | b| w|

| | w| b| b| b| w| w| w| w| w| w| b| | |03|02| w| |

| b| w| w| b| w| b| w| b| b| b| w| w| b| | | b| w| |

| | b| b| w| w|06| b| w| w| b| b| b| b| b| b| x| w| |

| | b| w| b| | w| w|08|07| | | w| w| b| w| w| | |

| | |05| b|04| | | w| b| | b| w| | w| | | | |

| | | | | | | | | | |11| | w| | | | | |

|20}}

style="text-align:center" | Moves 200–211

=Game 3=

AlphaGo (white) won the third game.{{cite news| url=https://www.bbc.co.uk/news/technology-35785875 |title=Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol |publisher=BBC |date=12 March 2016 |access-date=12 March 2016 }}

After the second game, players still had doubts about whether AlphaGo was truly a strong player in the sense that a human might be. The third game was described as removing that doubt, with analysts commenting that: {{blockquote|AlphaGo won so convincingly as to remove all doubt about its strength from the minds of experienced players. In fact, it played so well that it was almost scary ... In forcing AlphaGo to withstand a very severe, one-sided attack, Lee revealed its hitherto undetected power ... Lee wasn't gaining enough profit from his attack ... One of the greatest virtuosos of the middle game had just been upstaged in black and white clarity. }}

According to An Younggil (8p) and David Ormerod, the game showed that "AlphaGo is simply stronger than any known human Go player."{{cite web |url=https://gogameguru.com/alphago-shows-true-strength-3rd-victory-lee-sedol/ |title=AlphaGo shows its true strength in 3rd victory against Lee Sedol |first=David |last=Ormerod |publisher=Go Game Guru |date=12 March 2016 |access-date=12 March 2016 |archive-url=https://web.archive.org/web/20160313032049/https://gogameguru.com/alphago-shows-true-strength-3rd-victory-lee-sedol/ |archive-date=13 March 2016 |url-status=usurped}} AlphaGo was seen to capably navigate tricky situations known as ko that did not come up in the previous two matches.{{cite web|url=https://www.theverge.com/2016/3/12/11210650/alphago-deepmind-go-match-3-result|title=AlphaGo beats Lee Se-dol again to take Google DeepMind Challenge series|first=Sam |last=Byford|date=12 March 2016|work=The Verge|access-date=12 March 2016}} An and Ormerod consider move 148 to be particularly notable: in the middle of a complex ko fight, AlphaGo displayed sufficient "confidence" that it was winning the game to play a significant move elsewhere.

Lee, playing black, opened with a High Chinese formation and generated a large area of black influence, which AlphaGo invaded at move 12. This required the program to defend a weak group, which it did successfully. An Younggil described Lee's move 31 as possibly the "losing move" and Andy Jackson of the American Go Association considered that the outcome had already been decided by move 35.{{cite journal |url=http://www.nature.com/news/the-go-files-ai-computer-clinches-victory-against-go-champion-1.19553 |title=The Go Files: AI computer clinches victory against Go champion |first=Tanguy |last=Chouard |journal=Nature |date=12 March 2016 |doi=10.1038/nature.2016.19553|s2cid=155164502 |url-access=subscription }} AlphaGo had gained control of the game by move 48, and forced Lee onto the defensive. Lee counterattacked at moves 77/79, but AlphaGo's response was effective, and its move 90 succeeded in simplifying the position. It then gained a large area of control at the bottom of the board, strengthening its position with moves from 102 to 112 described by An as "sophisticated". Lee attacked again at moves 115 and 125, but AlphaGo's responses were again effective. Lee eventually attempted a complex ko from move 131 without forcing an error from the program, and he resigned at move 176.

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

| | | | | | | | | | | | | | | | | | |

| | | | | | | | | | | | | | | | | | |

| | | |13|19|22| |90|27| | | | |b9| | | | |

| |23|b3| |12|20| |26|b7| x| | | | | |b1| | |

| |38|18|16| | | |24|25| | | | | | | |99| |

| | | |17|15|21| | |29| | | | | | | |w8| |

| |36| |33|14|50|30|28| | | | | | | | | | |

| |37|45|41|51|47| | | |31| | | | | | | | |

| |39|32|34|35| |46|66|56|68| | | | | | |88| |

|63|62|40|11|43| |67|65| | x| | | | | | x| | |

| |61|44|42|53| |55| | | | | | | |87| | | |

| | | | |54|60| | |69| | |89| | |83|78|82| |

| | |57|48| |52| |64| | | | | | |80|77|10| |

| | |b5|49|59| | | | |70| |98| | |84|81|79|85|

| | | | | |58| | | | | | | | |86| | | |

| | |71|w4| | | | | | x| | | | | |w2|92|93|

| |75|72|73|76|w6| | | | | | | | | |94|91|95|

| | | |74| | | | | | | | | | | | |96|97|

| | | | | | | | | | | | | | | | | | |

|20}}

style="text-align:center" | First 99 moves

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

| |11| | | | | | | | | | | | | | | | |

| |04|07| |10| |12| |48| | | | | | | | | |

| |08| | b| b| w| | w| b| | | | | b| | | | |

|09| b| b|05| w| w| | w| b| x| | | | | | b|03|01|

| | w| w| w|06| | | w| b| | | | | | |02| b|00|

| | | | b| b| b| | | b| | | | | | |24| w| |

| | w| | b| w| w| w| w| | | | | | | | | | |

| | b| b| b| b| b| | | | b| | | | |68|62|61| |

| | b| w| w| b| | w| w| w| w| | | | | |67| w| |

| b| w| w| b| b| | b| b| | x| | | | | |17|18| |

| | b| w| w| b| | b| | | | | | | | b|14|13|20|

| | | | | w| w| | | b| | | b| | | b| w| w|19|

| | | b| w| | w| | w| | | | | | | w| b| w|21|

| |74| b| b| b|41|42|52|40| w|30| w|16| | w| b| b| b|

| |73|72|57|56| w|43|37|34|29| |15| | | w| | | |

| |70|bT| w|58|55| |33|32| x| | | | | | w| w| b|

| | b|wT|51| w| w|38|39|25|28|26|27| | | | w| b| b|

| |65|23| w|59| |44|31| |35|36|50| | | | | w| b|

| | |76|53| |46|45|60|47| |49| | | | | | | |

|20}}

style="text-align:center" | Moves 100–176 (122 at 113,
154 at 20x20px, 163 at 145, 164 at 151,
166 and 171 at 160, 169 at 145, 175 at 20x20px)

=Game 4=

Lee (white) won the fourth game. Lee chose to play a type of extreme strategy, known as amashi, in response to AlphaGo's apparent preference for Souba Go (attempting to win by many small gains when the opportunity arises), taking territory at the perimeter rather than the center.{{cite web |url=https://gogameguru.com/lee-sedol-defeats-alphago-masterful-comeback-game-4/ |title=Lee Sedol defeats AlphaGo in masterful comeback – Game 4 |first=David |last=Ormerod |publisher=Go Game Guru |date=13 March 2016 |access-date=13 March 2016 |archive-url=https://web.archive.org/web/20161116082508/https://gogameguru.com/lee-sedol-defeats-alphago-masterful-comeback-game-4/ |archive-date=16 November 2016 |url-status=usurped}} By doing so, his apparent aim was to force an "all or nothing" style of situation – a possible weakness for an opponent strong at negotiation types of play, and one which might make AlphaGo's capability of deciding slim advantages largely irrelevant.

The first 11 moves were identical to the second game, where Lee also played white. In the early game, Lee concentrated on taking territory in the edges and corners of the board, allowing AlphaGo to gain influence in the top and centre. Lee then invaded AlphaGo's region of influence at the top with moves 40 to 48, following the amashi strategy. AlphaGo responded with a shoulder hit at move 47, sacrificing four stones elsewhere and gaining the initiative with moves 47 to 53 and 69. Lee tested AlphaGo with moves 72 to 76 without provoking an error, and by this point in the game, commentators had begun to feel Lee's play was a lost cause. However, an unexpected play at white 78, described as "a brilliant tesuji", turned the game around. The move developed a white wedge at the centre, and increased the game's complexity.{{cite magazine |url=https://www.wired.com/2016/03/go-grandmaster-lee-sedol-grabs-consolation-win-googles-ai/ |title=Go Grandmaster Lee Sedol Grabs Consolation Win Against Google's AI |first=Cade |last=Metz |magazine=Wired |date=13 March 2016 |access-date=14 March 2016 }} Gu Li (9p) described it as a "divine move" and stated that the move had been completely unforeseen by him.

AlphaGo responded poorly on move 79, at which time it estimated it had a 70% chance to win the game. Lee followed up with a strong move at white 82. AlphaGo's initial response in moves 83 to 85 was appropriate, but at move 87, its estimate of its chances to win suddenly plummeted,{{cite web|title=Twitter post (12:09 a.m. – 13 Mar 2016) |first=Demis |last=Hassabis |url=https://twitter.com/demishassabis/status/708928006400581632 |access-date=13 March 2016}}{{Primary source inline|date=November 2020}}{{cite web|title=Twitter post (12:36 a.m. – 13 Mar 2016) |first=Demis |last=Hassabis |url=https://twitter.com/demishassabis/status/708934687926804482 |access-date=13 March 2016}}{{Primary source inline|date=November 2020}} provoking it to make a series of very bad moves from black 87 to 101. David Ormerod characterised moves 87 to 101 as typical of Monte Carlo-based program mistakes. Lee took the lead by white 92, and An Younggil described black 105 as the final losing move. Despite good tactics during moves 131 to 141, AlphaGo could not recover during the endgame and resigned. AlphaGo's resignation was triggered when it evaluated its chance of winning to be less than 20%; this is intended to match the decision of professionals who resign rather than play to the end when their position is felt to be irrecoverable.
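The resignation rule mentioned above can be expressed as a one-line threshold check. This is a hypothetical sketch based only on the 20% figure reported for the match; the names and interface are illustrative, not DeepMind's actual API.

```python
# Hypothetical sketch of the reported resignation rule: resign once the
# program's estimated win probability drops below 20%.

RESIGN_THRESHOLD = 0.20

def should_resign(estimated_win_probability: float) -> bool:
    """Resign when the position is judged practically irrecoverable."""
    return estimated_win_probability < RESIGN_THRESHOLD

should_resign(0.70)  # False: still ahead, as after move 79
should_resign(0.15)  # True: below the threshold, triggering resignation
```

The threshold is deliberately well below 50%, mirroring professionals who play on through a disadvantage and resign only when they judge the position hopeless.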

An Younggil at Go Game Guru concluded that the game was "a masterpiece for Lee Sedol and will almost certainly become a famous game in the history of Go". Lee commented after the match that he considered AlphaGo was strongest when playing white (second). Lee Sedol in Google DeepMind Challenge Match 4 post-match press conference (13 March 2016) For this reason, he requested to play black in the fifth game, even though playing black is considered riskier.

David Ormerod of Go Game Guru stated that although an analysis of AlphaGo's play around 79–87 was not yet available, he believed it resulted from a known weakness in play algorithms that use Monte Carlo tree search. In essence, the search attempts to prune less relevant sequences. In some cases, a play can lead to a particular line of play which is significant but which is overlooked when the tree is pruned, and this outcome is therefore "off the search radar".{{cite web|url=https://gogameguru.com/lee-sedol-defeats-alphago-masterful-comeback-game-4/|title=Lee Sedol defeats AlphaGo in masterful comeback - Game 4|publisher=Go Game Guru|access-date=13 March 2016|archive-url=https://web.archive.org/web/20161116082508/https://gogameguru.com/lee-sedol-defeats-alphago-masterful-comeback-game-4/|archive-date=16 November 2016|url-status=usurped}}
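The pruning behaviour Ormerod describes can be seen in the standard UCT selection rule used by Monte Carlo tree search. The sketch below is a generic illustration of that rule, not AlphaGo's actual search code: a branch whose early simulations look unpromising receives a low score and therefore few further visits, so a critical move buried inside it can stay "off the search radar".

```python
import math

# Illustrative UCT (Upper Confidence bound for Trees) selection rule, as used
# in generic Monte Carlo tree search. Not AlphaGo's implementation.

def uct_score(wins: int, visits: int, parent_visits: int, c: float = 1.4) -> float:
    """Exploitation term plus exploration bonus; unvisited nodes get priority."""
    if visits == 0:
        return float("inf")
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# With equal visit counts, a branch with a poor early win rate scores lower
# and so attracts fewer future simulations than a superficially better sibling:
parent_visits = 1000
weak_looking = uct_score(wins=10, visits=100, parent_visits=parent_visits)
strong_looking = uct_score(wins=60, visits=100, parent_visits=parent_visits)
# Search effort concentrates on the "strong-looking" branch, even if the
# neglected one hides the only refutation of an unexpected move such as white 78.
```

The exploration bonus shrinks only logarithmically, so once a branch falls behind in visits, recovering attention for it is slow; this is one mechanism by which a significant line of play can be effectively pruned away.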

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

| | | | | | | | | | | | | | | | | | |

| | | | | | | | | | | | | |43| | | | |

| | | |19|14|77| |45| | | |42|40|21| | | | |

| | |b3| x|17|76| |15|44| x| | |41|61| |b1| | |

| | | | | | |74|75| | | | |65|60|99| | | |

| | | | | |38|39| | | | |63|58|66| | | | |

| | |16| | |36|37| | |46| | |55|54| |95| | |

| | | | |32|33|72| | |82|81|67|53|52| | | | |

| | | | |30|31| | |92|71|78|59|56|51|50|48|22| |

| |34|18|28|29| | |73| |79|91|80|62|57|49|47|68| |

| | |27|25| |35| | |70| |69| |64|84|86|87|89|93|

| | | | | | | | | | | | |83|90|85|88|96| |

| | | | | | | | | | | | | | |94| | | |

| | |b9| | | | | | | | | | | | | | | |

| | |98| |24| | | | | | | | | | |12| | |

| |20|97|w2|23|26| | | | x| | |11| |b5| x|w4| |

| | | | | |10| | |13| | | | |b7|w6|w8| | |

| | | | | | | | | | | | | | | | | | |

| | | | | | | | | | | | | | | | | | |

|20}}

style="text-align:center" | First 99 moves

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

| | | | | | | | | | | | | | | | | | |

| | | | |15| | | |68| |66| | | b| | | | |

| | | | b| w| b| | b| |44|65| w| w| b| | | | |

| | | b| | b| w|27| b| w| x|63| | b| b| | b| |49|

| | | | |14|28| w| b|64| | | | b| w| b| | | |47

| | | |21| | w| b|29|26| | | b| w| w|75| |43|41|46

| |16| w|17|18| w| b|03| | w| | | b| w| | b|42|36|48

|61|31|13|22| w| b| w| |30| w| b| b| b| w|73| |69|70|

|62| |32| | w| b|06| | w| b| | b| w|bS| w| w| w| |

| | w| w| w| b|07| | b|04| b| b| w| w|bT| b| b| w|02|76

|52|51| b| b| | b|67|24| w| | b| | w| w| w| b| b| b|01

| |50|53| | |11|10| | |08|09| | b| w| | w| w|00|74

| | | | | | |12| |23| |05| | | | w| | | |

| |25| b|37|19| | |54|55| | | | | | | | | |

| | | w| | w|20|34|33| | | | | | |79| w| | |

| | w| b| w| b| w| |56| |80| | | b| | b| x| w| |

| | |38| | | w| | | b| | | | | b| w| w| | |

| | | | |58|57|45| | | | | | | |35|39|40| |

| | | |60|59| | | | | | | | | | | |71|72|

|20}}

style="text-align:center" | Moves 100–180 (177 at 20x20px, 178 at 20x20px)

=Game 5=

AlphaGo (white) won the fifth game. The game was described as being close. Hassabis stated that the result came after the program made a "bad mistake" early in the game.{{cite web|url=https://www.theverge.com/2016/3/15/11213518/alphago-deepmind-go-match-5-result |title=Google's AlphaGo AI beats Lee Se-dol again to win Go series 4-1 |first=Sam |last=Byford|work=The Verge |date=15 March 2016 |access-date=15 March 2016}}

Lee, playing black, opened similarly to the first game and began to stake out territory in the right and top left corners – a similar strategy to the one he employed successfully in game 4 – while AlphaGo gained influence in the centre of the board. The game remained even until white moves 48 to 58, which AlphaGo played in the bottom right. These moves unnecessarily lost ko threats and aji, allowing Lee to take the lead. Michael Redmond (9p) speculated that perhaps AlphaGo had missed black's "tombstone squeeze" tesuji. Humans are taught to recognize this specific pattern, but it involves a long sequence of moves that is difficult to find when computed from scratch.

AlphaGo then started to develop the top of the board and the centre and defended successfully against an attack by Lee in moves 69 to 81 that David Ormerod characterised as over-cautious. By white 90, AlphaGo had regained equality and then played a series of moves described by Ormerod as "unusual... but subtly impressive", which gained a slight advantage. Lee tried a Hail Mary pass with moves 167 and 169, but AlphaGo's defence was successful. An Younggil noted white moves 154, 186, and 194 as being particularly strong, and the program played an impeccable endgame, maintaining its lead until Lee resigned.{{citation |url=https://gogameguru.com/alphago-defeats-lee-sedol-4-1/ |title=AlphaGo defeats Lee Sedol 4–1 in Google DeepMind Challenge Match |first=David |last=Ormerod |publisher=Go Game Guru |date=16 March 2016 |access-date=16 March 2016 |archive-url=https://web.archive.org/web/20160317095008/https://gogameguru.com/alphago-defeats-lee-sedol-4-1/ |archive-date=17 March 2016 |url-status=usurped}}

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

| | | | | |98|94|96|97| |99| | | |67| | | |

| | | | | |86|93|92|95|81| | |66|61|62|63| | |

| | |34| | |28|71|89| |83| | |68|60|b5|65| | |

| |29| |w2| |72|69| |79|82| | | | |64| x|b1|15|

| | | | | |88|73| | |84| | | | |18|13|12| |

| | |27| | |74|70|75|77|80| | | | | |16|14| |

| | | |33| | |87|76|78| | | | | | | | | |

| |39|30|31| | |85| | | | | | | | | | | |

| |37|35|32| | | |90| | | | | | | | | | |

| | |36|38| | | | | | x| | | | | | x|24| |

| | | | | | | | | | | | | |44|22| | | |

| | | | | | | | | |91| |46|40|43|20|19|17|51|

| | | | | | | | | | | |42|41|23|21|10| |50|58

| | | | | | | | | | |47|45|48| | |54|w8|25|53

| | | | | | | | | | | | |49|57|55|w6|b7|52|

| | | |w4| | | | | | x| | | | |56|59|b9| |

| | | | | | |26| | | | | | |11| |b3| | |

| | | | | | | | | | | | | | | | | | |

| | | | | | | | | | | | | | | | | | |

|20}}

style="text-align:center" | First 99 moves

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

| | | | | | w| w| w| b| | b| | | | b| | | |

| | | | | | w| b| w| b| b| | | w| b| | b| | |

| |86| w| | | w| b| b| | b| | | w| w| b| b| | |

| | b| | w| | w| b| | b| w| | | | | w| x| b| b|

| | |93| | | w| b| | | w| | | | | w| b| w|98|99

| | | b| | | w| w| b| b| w| | |90| | | w| w| |

| | | | b| |46| b| w| w| | |83| | |89| | |96|95

|65| b| w| b|45|38| b|02| | | | |81|91| |92| | |

|64| b| b| w|54|37|04| w|01| | |49|50|82| | | | |

|57|56| w| w| |76|05|03|40|41|53|48|47|52| | x| w| |

|78|55| | | | | | |36|39| |51|94| w| w| | |62|

|79|67|66|72| |75|97| |42| b| | w| w| b| w| b| b| b|63

| |68|71|69|73| | | |44|43| | w| b| b| b| w| | w| w

| |77|06|70|74|22| | |24| | b| b| w| | | w| w|bS| b

| |87|14|13|21|20| | | | | | | b| b| b| w| b| w|60

| |88| | w|15|11|12| | | x| | | | | w| b| b| |

| | |08|07|10|09| w| |00|26|30|32|33| b| | b| | |

| | | |16|17|80|19|23|25|28|27|31|34|35| | | | |

| | | | | | | |59|58|29|84|85| | | | | | |

|20}}

style="text-align:center" | Moves 100–199 (118 at 107, 161 at 20x20px)

style="display:inline; display:inline-table;"
style="border: solid thin; padding: 2px;" |

{{Goban

|17| | | | | w| w| w| b| | b| |42|43| b| | | |

|06|15|16| | | w| b| w| b| b|11| | w| b| | b| | |

|05| w| w| | | w| b| b| | b|10|14| w| w| b| b| | |

|07| b|08| w|64| w| b| | b| w|66|03| | | w|67| b| b|41

| | | b|61|63| w| b|45|44| w|65|26|04|59| w| b| w| w| b

| |09| b| | | w| w| b| b| w|27|23| w|13|60| w| w|01|00

| | | | b| | w| | w| w|28|22| b| |69| b|68| | w| b

| b| b| w| b| b| w| | w| |21|24|25| b| b|12| w| | |02

| | b| b| w| w| | w| w| b| |31| b| w| w| | | | |

| b| w| w| w| | w| b| b| w| b| b|30| b| w| | x| w|80|

| | b|77|78| |72|18|20| w| b|58|bT| w| w| w|38|79| w|70

| b| b| w| w|54| b| b|19| w| b|73| w| w| b| w| b| b| b| b

|57|wS| b| b| b|55|74|56| w| b|33| w| b| b| b| w|39| w| w

| | b| w| w| w| w| | | w|32| b| b| w| |35| w| w|bS|34

|29| b| w| | | w| | | | |48|51| b| b| b| w| b| w| w

|62| w| | w| | | w| | | x|50|49| | | w| b| b|36|

| | | w| w| w| | w|46| w| w| w| w| b| b| | b| |37|

| | | | w| b| w| b| b| b| w| b| b| w| b| | | | |

| | | | | |52| | b| | b|53| b|47| | | | | |

|20}}

style="text-align:center" | Moves 200–280 (240 at 200, 271 at 20x20px,
275 at 20x20px, 276 at 20x20px)

Coverage

Live video of the games and associated commentary was broadcast in Korean, Chinese, Japanese, and English. Korean-language coverage was made available through Baduk TV.{{cite web|url=http://www.tvbaduk.com/|title=바둑TV|publisher=Baduk TV}} Chinese-language coverage of game 1 with commentary by 9-dan players Gu Li and Ke Jie was provided by Tencent and LeTV respectively, reaching about 60 million viewers.{{cite magazine|url=https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/|title=The Sadness and Beauty of Watching Google's AI Play Go|date=11 March 2016|magazine=WIRED|access-date=12 March 2016}} Online English-language coverage presented by US 9-dan Michael Redmond and Chris Garlock, a vice-president of the American Go Association, reached an average 80 thousand viewers with a peak of 100 thousand viewers near the end of game 1.{{cite web|url=http://www.golem.de/news/kuenstliche-intelligenz-alpha-go-spielt-wie-eine-goettin-1603-119646.html|title=Künstliche Intelligenz: "Alpha Go spielt wie eine Göttin"|publisher=Golem.de}}

Responses

=AI community=

AlphaGo's victory was a major milestone in artificial intelligence research.{{cite news|author1=Steven Borowiec|author2=Tracey Lien|title=AlphaGo beats human Go champ in milestone for artificial intelligence|url=https://www.latimes.com/world/asia/la-fg-korea-alphago-20160312-story.html|access-date=13 March 2016|work=Los Angeles Times|date=12 March 2016}} Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time.{{cite news |title=A computer has beaten a professional at the world's most complex board game |url=https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-alphago-computer-beats-professional-at-worlds-most-complex-board-game-go-a6837506.html |archive-url=https://web.archive.org/web/20160128012935/http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-alphago-computer-beats-professional-at-worlds-most-complex-board-game-go-a6837506.html |archive-date=2016-01-28 |url-access=limited |url-status=live |newspaper=The Independent |access-date=28 January 2016 |date=27 January 2016 |last=Connor |first=Steve}}{{cite news |title=Google's AI beats human champion at Go |url=http://www.cbc.ca/news/technology/alphago-ai-1.3422347 |publisher=CBC News |access-date=28 January 2016 |date=27 January 2016}} Most experts thought a Go program as powerful as AlphaGo was at least five years away;{{cite news|author1=Dave Gershgorn|title=Google's AlphaGo Beats World Champion in Third Match to Win Entire Series|url=http://www.popsci.com/googles-alphago-beats-world-champion-in-third-match-to-win-entire-series|access-date=13 March 2016|work=Popular Science|date=12 March 2016}} some experts thought that it would take at least another decade before computers would beat Go champions.{{cite news|title=Google DeepMind computer AlphaGo sweeps human champ in Go matches|url=http://www.cbc.ca/news/technology/go-google-alphago-lee-sedol-deepmind-1.3488913|access-date=13 March 
2016|publisher=CBC News|agency=Associated Press|date=12 March 2016}}{{cite news|author1=Sofia Yan|title=A Google computer victorious over the world's 'Go' champion|url=https://money.cnn.com/2016/03/12/technology/google-deepmind-alphago-wins/|access-date=13 March 2016|work=CNN Money|date=12 March 2016}} Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo.

With games such as checkers, chess, and now Go won by computer players, victories at popular board games can no longer serve as significant milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on."

When compared with Deep Blue or with Watson, AlphaGo's underlying algorithms are potentially more general-purpose and may be evidence that the scientific community is making progress toward artificial general intelligence.{{cite news|title=AlphaGo: Google's artificial intelligence to take on world champion of ancient Chinese board game|url=http://www.abc.net.au/news/2016-03-08/google-artificial-intelligence-to-face-board-game-champion/7231192|access-date=13 March 2016|publisher=Australian Broadcasting Corporation|date=8 March 2016}} Some commentators believe AlphaGo's victory makes for a good opportunity for society to start discussing preparations for the possible future impact of machines with general purpose intelligence. In March 2016, AI researcher Stuart Russell stated that "AI methods are progressing much faster than expected, (which) makes the question of the long-term outcome more urgent," adding that "to ensure that increasingly powerful AI systems remain completely under human control... there is a lot of work to do."{{cite news|author1=Mariëtte Le Roux|title=Rise of the Machines: Keep an eye on AI, experts warn|url=http://phys.org/news/2016-03-machines-eye-ai-experts.html|access-date=13 March 2016|work=Phys.org|date=12 March 2016}} Some scholars, such as physicist Stephen Hawking, warn that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible",{{cite news|title=Game over? New AI challenge to human smarts (Update)|url=http://phys.org/news/2016-03-game-ai-human-smarts.html|access-date=13 March 2016|work=phys.org}}{{cite news|author1=Mariëtte Le Roux|author2=Pascale Mollard|title=Game over? 
New AI challenge to human smarts (Update)|url=http://phys.org/news/2016-03-game-ai-human-smarts.html|access-date=13 March 2016|work=phys.org|date=8 March 2016}} and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration." Richard Sutton said, "I don't think people should be scared... but I do think people should be paying attention."{{cite news|author1=Tanya Lewis|title=An AI expert says Google's Go-playing program is missing 1 key feature of human intelligence|url=http://www.businessinsider.com/what-does-googles-deepmind-victory-mean-for-ai-2016-3|access-date=13 March 2016|work=Business Insider|date=11 March 2016}}

The DeepMind AlphaGo Team received the Inaugural IJCAI Marvin Minsky Medal for Outstanding Achievements in AI. "AlphaGo is a wonderful achievement, and a perfect example of what the Minsky Medal was initiated to recognise", said Professor Michael Wooldridge, Chair of the IJCAI Awards Committee. "What particularly impressed IJCAI was that AlphaGo achieves what it does through a brilliant combination of classic AI techniques as well as the state-of-the-art machine learning techniques that DeepMind is so closely associated with. It's a breathtaking demonstration of contemporary AI, and we are delighted to be able to recognise it with this award".{{cite news|title=Marvin Minsky Medal for Outstanding Achievements in AI|url=http://www.ijcai.org/awards/minsky_medal|access-date=21 October 2017|work=International Joint Conference on Artificial Intelligence|date=19 October 2017|language=en}}

=Go community=

Go is a popular game in South Korea, China, and Japan. This match was watched and analyzed by millions of people worldwide. Many top Go players characterized AlphaGo's unorthodox plays as seemingly questionable moves that initially befuddled onlookers but made sense in hindsight: "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself." AlphaGo appeared to have unexpectedly become much stronger, even when compared with its October 2015 match against Fan Hui,{{cite news|author1=John Ribeiro|title=Google's AlphaGo AI program strong but not perfect, says defeated South Korean Go player|url=http://www.pcworld.com/article/3043211/big-win-for-ai-as-google-alphago-program-trounces-korean-player-in-go-tournament.html|access-date=13 March 2016|work=PC World|date=12 March 2016}} in which a computer had beaten a Go professional for the first time without the advantage of a handicap.

China's number one player, Ke Jie, who was at the time the top-ranked player worldwide, initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style".{{cite news|author1=Neil Connor|title=Google AlphaGo 'can't beat me' says China Go grandmaster|url=https://www.telegraph.co.uk/news/worldnews/asia/china/12190917/Google-AlphaGo-cant-beat-me-says-China-Go-grandmaster.html|access-date=13 March 2016|work=The Telegraph (UK)|date=11 March 2016}} As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analyzing the first three matches,{{cite web|url=http://english.donga.com/List/3/all/26/527586/1|title=Chinese Go master Ke Jie says he could lose to AlphaGo|work=The Dong-a Ilbo|date=14 March 2016|access-date=17 March 2016}} but regained confidence after the fourth match:{{cite web|url=http://m.hankooki.com/m_sp_view.php?WM=sp&FILE_NO=c3AyMDE2MDMxNDE4MDIzMDEzNjU3MC5odG0=&ref=search.naver.com|title='첫 불계승' 이세돌, 커제 9단 태도 좌우…알파고와의 5국 중계는 어디서?|work=Hankook Ilbo|date=14 March 2016|access-date=17 March 2016|language=ko|archive-date=15 March 2016|archive-url=https://web.archive.org/web/20160315153117/http://m.hankooki.com/m_sp_view.php?WM=sp&FILE_NO=c3AyMDE2MDMxNDE4MDIzMDEzNjU3MC5odG0=&ref=search.naver.com|url-status=dead}} "...if today's performance was its true capability, then it doesn't deserve to play against me." In the end, Ke Jie played AlphaGo the following year and was defeated in three games.

Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills.{{cite journal |url=http://www.nature.com/news/go-players-react-to-computer-defeat-1.19255 |title=Go players react to computer defeat |first=Elizabeth |last=Gibney |journal=Nature |year=2016 |doi=10.1038/nature.2016.19255|s2cid=146868978 |url-access=subscription }}

After game three, Lee apologized for his losses and stated, "I misjudged the capabilities of AlphaGo and felt powerless." He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of mankind".{{cite news|author1=Yoon Sung-won|title=Lee Se-dol shows AlphaGo beatable|url=https://www.koreatimes.co.kr/www/news/tech/2016/03/133_200267.html|access-date=15 March 2016|work=The Korea Times|date=14 March 2016}} Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do." Lee called his game four victory a "priceless win that I (would) not exchange for anything."

=Government=

In response to the match the South Korean government announced on 17 March 2016 that it would invest 1 trillion won (US$863 million) in artificial-intelligence (AI) research over the next five years.{{cite journal|url=http://www.nature.com/news/south-korea-trumpets-860-million-ai-fund-after-alphago-shock-1.19595|title=South Korea trumpets $860-million AI fund after AlphaGo 'shock'|journal=Nature|first=Mark|last=Zastrow|date=18 March 2016|access-date=20 March 2016|doi=10.1038/nature.2016.19595|s2cid=167331855|url-access=subscription}}

=Other human vs. AI competitors=

Ken Jennings, who together with Brad Rutter was famously defeated by the IBM Watson supercomputer in 2011 in a two-game Jeopardy! exhibition match, aired as the three-episode special Jeopardy! The IBM Challenge, wrote about the event in Slate magazine. He stated, "The nightmarish robot dystopias of science-fiction movies just got one benchmark closer."{{cite web |last1=Jennings |first1=Kenneth W. |title=The Go Champion, the Grandmaster, and Me |url=https://slate.com/technology/2016/03/googles-alphago-defeated-go-champion-lee-sedol-ken-jennings-explains-what-that-feels-like.html |website=Slate Magazine |publisher=Microsoft |access-date=2025-03-15}}

{{blockquote| There’s a disorienting, airless vibe to facing an artificial challenger. You feel unexpectedly alone in the spotlight, but at the same time you’re hyperaware of the millions of tech dollars and labs full of anonymous nerds arrayed against you. Your new opponent, unlike everyone you’ve ever played in the past, can never become overconfident or intimidated. There’s no way to play it psychologically at all, because it has no psychology.}}

Jennings compared AlphaGo to Kurt Vonnegut's 1952 novel Player Piano, in which artificial intelligence has eliminated almost all careers; in Vonnegut's novel, those whose jobs were replaced by machines are placed into a government Works Progress Administration-style organisation, which revolts because its members have lost their self-respect to automation. He described the novel's vision as being as "charmingly retrofuturistic as Walt Disney’s Tomorrowland."

Jennings, who was eventually named interim host of Jeopardy! on 29 October 2020 and its permanent full-time host on 15 December 2023, concluded his article with the following:

{{blockquote|I assume that Lee, like (Garry) Kasparov and me before him, will eventually make it through the five stages of automation obsolescence and accept his pioneering role in the early history of “thinking” machines. But what about all those newly replaceable souls who come after us, in a seismic shift that seems about to reshape our entire economy? For now, it’s just a handful of chess and Go and Jeopardy! champions who no longer feel needed and useful. But what happens to society when it’s tens of millions of us?}}

=In media=

An award-winning documentary film about the matches, AlphaGo, was made in 2017.{{Cite web|url=https://www.alphagomovie.com/|title=AlphaGo Movie|website=AlphaGo Movie}}{{Cite web|url=https://www.rottentomatoes.com/m/alphago/|title = AlphaGo|website = Rotten Tomatoes}} On 13 March 2020, the film was made freely available on the DeepMind YouTube channel.{{Cite web|url=https://www.youtube.com/watch?v=WXuK6gekU1Y|title=AlphaGo - The Movie {{!}} Full Documentary|website=YouTube|access-date=20 March 2020}}

The matches were featured in Benjamin Labatut's 2023 novel, The MANIAC.{{cite web |last1=Simon |first1=Ed |title=Nightmares of Reason: On Benjamín Labatut's "The MANIAC" |url=https://lareviewofbooks.org/article/nightmares-of-reason-on-benjamin-labatuts-the-maniac/ |website=Los Angeles Review of Books |access-date=30 December 2023 |date=25 November 2023}}{{cite news |last1=Rothfeld |first1=Becca |title=Review {{!}} 'The MANIAC' blends fiction and history at the edge of reason |url=https://www.washingtonpost.com/books/2023/09/21/maniac-benjamin-labatut-review-cease-understand/ |newspaper=Washington Post |access-date=30 December 2023 |date=21 September 2023}}

=See also=

=References=

{{Reflist|30em}}

=Official match commentary=

Official match commentary by Michael Redmond (9-dan pro) and Chris Garlock on Google DeepMind's YouTube channel:

* [https://www.youtube.com/watch?v=vFr3K2DORc8&t=1670 Game 1] ([https://www.youtube.com/watch?v=bIQxOsRAXCo 15-minute summary])
* [https://www.youtube.com/watch?v=l-GsfyVCBu0&t=1212 Game 2] ([https://www.youtube.com/watch?v=1aMt7ulL6EI 15-minute summary])
* [https://www.youtube.com/watch?v=qUAmTYHEyM8&t=912 Game 3] ([https://www.youtube.com/watch?v=6hROM_bxZ9E 15-minute summary])
* [https://www.youtube.com/watch?v=yCALyQRN3hw&t=899 Game 4] ([https://www.youtube.com/watch?v=G5gJ-pVo1gs 15-minute summary])
* [https://www.youtube.com/watch?v=mzpW10DPHeQ&t=598 Game 5] ([https://www.youtube.com/watch?v=QxHdPdRcMhw 15-minute summary])

=[[Smart Game Format|SGF]] files=

* {{usurped|1=[https://web.archive.org/web/20160309131718/https://gogameguru.com/i/2016/03/Lee-Sedol-vs-AlphaGo-20160309.sgf Game 1]}} (Go Game Guru)
* {{usurped|1=[https://web.archive.org/web/20160314092959/https://gogameguru.com/i/2016/03/AlphaGo-vs-Lee-Sedol-20160310.sgf Game 2]}} (Go Game Guru)
* {{usurped|1=[https://web.archive.org/web/20160313075810/https://gogameguru.com/i/2016/03/20160312-Lee-Sedol-vs-AlphaGo.sgf Game 3]}} (Go Game Guru)
* {{usurped|1=[https://web.archive.org/web/20160314022423/https://gogameguru.com/i/2016/03/20160313-AlphaGo-vs-Lee-Sedol.sgf Game 4]}} (Go Game Guru)
* {{usurped|1=[https://web.archive.org/web/20160315204036/https://gogameguru.com/i/2016/03/Lee-Sedol-vs-AlphaGo-20160315.sgf Game 5]}} (Go Game Guru)

{{coord|37.5706|N|126.9754|E|region:KR_type:event|display=title}}

{{Google AI}}

{{Go (game)}}

Category:Computer Go games

Category:Sport in Seoul

Category:2016 in South Korean sport

Category:2016 in computing

Category:2010s in Seoul

Category:2016 in South Korea

Category:Human versus computer matches

Category:2016 in go

Category:March 2016 sports events in Asia

Category:AlphaGo