Roman Yampolskiy

{{Short description|Latvian computer scientist (born 1979)}}

{{Use American English|date=December 2022}}

{{Use dmy dates|date=December 2022}}

{{Family name hatnote|Vladimirovich|Yampolskiy|lang=Eastern Slavic}}

{{Infobox scientist

| honorific_prefix =

| name = Roman Yampolskiy

| honorific_suffix =

| native_name = {{Nobold|Роман Ямпольский}}

| native_name_lang = ru

| image = SCH1473 (52734656870) (cropped).jpg

| image_size =

| alt =

| caption = Yampolskiy in 2023

| birth_name = Roman Vladimirovich Yampolskiy

| birth_date = {{birth date and age|1979|08|13|df=y}}

| birth_place = Riga, Latvian SSR, Soviet Union

| death_date =

| death_place =

| death_cause =

| resting_place =

| resting_place_coordinates =

| other_names =

| residence =

| citizenship =

| nationality = Latvian

| fields = Computer science

| workplaces = University of Louisville

| patrons =

| education =

| alma_mater = University at Buffalo

| thesis_title =

| thesis_url =

| thesis_year =

| doctoral_advisor =

| academic_advisors =

| doctoral_students =

| notable_students =

| known_for =

| influences =

| influenced =

| awards =

| author_abbrev_bot =

| author_abbrev_zoo =

| spouse =

| partner =

| children =

| signature =

| signature_alt =

| website =

| footnotes =

}}

Roman Vladimirovich Yampolskiy ({{langx|ru|link=no|Роман Владимирович Ямпольский}}; born 13 August 1979) is a Latvian computer scientist at the University of Louisville, known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008).<ref>{{cite web|url=http://cecs.louisville.edu/ry/ |title=Dr. Roman V. Yampolskiy, Computer Science, Speed School, University of Louisville, KY |publisher=Cecs.louisville.edu |accessdate=25 September 2012}}</ref> He is the founder and current director of the Cyber Security Lab in the Department of Computer Engineering and Computer Science at the University of Louisville's Speed School of Engineering.<ref>{{cite web |title=Cyber-Security Lab |url=http://cecs.louisville.edu/security/ |accessdate=25 September 2012 |publisher=University of Louisville}}</ref>

Yampolskiy is the author of about 100 publications,<ref>{{cite web |title=Roman V. Yampolskiy |url=https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en |accessdate=25 September 2012 |website=Google Scholar}}</ref> including several books.<ref>{{cite web|url=https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=roman+yampolskiy |title=roman yampolskiy |publisher=Amazon.com |accessdate=25 September 2012}}</ref>

==AI safety==

Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.<ref>{{cite news|last1=Hsu|first1=Jeremy|title=Control dangerous AI before it controls us, one expert says|url=https://www.nbcnews.com/id/wbna46590591|access-date=28 January 2016|publisher=NBC News|date=1 March 2012}}</ref> In 2018, Yampolskiy and his collaborator Michaël Trazzi proposed introducing "Achilles' heels" into potentially dangerous AI, for example by barring an AI from accessing and modifying its own source code.<ref>{{cite news |last1=Baraniuk |first1=Chris |title=Artificial stupidity could help save humanity from an AI takeover |url=https://www.newscientist.com/article/2177656-artificial-stupidity-could-help-save-humanity-from-an-ai-takeover/ |accessdate=12 April 2020 |work=New Scientist |date=23 August 2018}}</ref><ref>{{cite arXiv |last1=Trazzi |first1=Michaël |last2=Yampolskiy |first2=Roman V. |date=2018 |title=Building safer AGI by introducing artificial stupidity |arxiv=1808.03644}}</ref> Another of his proposals is to apply a "security mindset" to AI safety, itemizing potential outcomes in order to better evaluate proposed safety mechanisms.<ref>{{cite news |last1=Baraniuk |first1=Chris |title=Checklist of worst-case scenarios could help prepare for evil AI |url=https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/ |accessdate=12 April 2020 |work=New Scientist |date=23 May 2016}}</ref>

He has argued that there is no evidence that superintelligent AI can be controlled and has proposed pausing AI development, saying that "Imagining humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it".<ref>{{Cite web |date=2024-02-12 |title=There is no evidence that AI can be controlled, expert says |url=https://www.independent.co.uk/tech/ai-artificial-intelligence-safety-b2494909.html |access-date=2024-07-04 |website=The Independent |language=en}}</ref><ref>{{Cite web |last=McMillan |first=Tim |date=2024-02-28 |title=AI Superintelligence Alert: Expert Warns of Uncontrollable Risks, Calling It a Potential 'An Existential Catastrophe' |url=https://thedebrief.org/ai-superintelligence-alert-expert-warns-of-uncontrollable-risks-calling-it-a-potential-an-existential-catastrophe/ |access-date=2024-07-04 |website=The Debrief |language=en-US}}</ref> He joined AI researchers such as Yoshua Bengio and Stuart Russell in signing "Pause Giant AI Experiments: An Open Letter".<ref>{{Cite web |title=Pause Giant AI Experiments: An Open Letter |url=https://futureoflife.org/open-letter/pause-giant-ai-experiments/ |access-date=2024-07-04 |website=Future of Life Institute |language=en-US}}</ref>

In a 2024 appearance on the Lex Fridman podcast, Yampolskiy estimated the chance that AI could lead to human extinction at "99.9% within the next hundred years".<ref>{{Cite web |last=Altchek |first=Ana |title=Why this AI researcher thinks there's a 99.9% chance AI wipes us out |url=https://www.businessinsider.com/ai-researcher-roman-yampolskiy-lex-fridman-human-extinction-prediction-2024-6 |access-date=2024-06-13 |website=Business Insider |language=en-US}}</ref>

Yampolskiy has been a research advisor of the Machine Intelligence Research Institute and an AI safety fellow of the Foresight Institute.<ref>{{Cite web |title=Roman Yampolskiy |url=https://futureoflife.org/person/prof-roman-yampolskiy/ |access-date=2024-07-03 |website=Future of Life Institute |language=en-US}}</ref>

==Intellectology==

In 2015, Yampolskiy introduced intellectology, a proposed field of study that analyzes the forms and limits of intelligence.<ref>{{cite book|last=Yampolskiy |first=Roman V.|title=Artificial Superintelligence: a Futuristic Approach |year=2015|publisher=Chapman and Hall/CRC Press (Taylor & Francis Group)|isbn=978-1482234435}}</ref><ref>{{cite web|url=http://www.technicallysentient.com/blog/2015/9/11/intellectology-and-other-ideas-a-review-of-artificial-superintelligence |title=Intellectology and Other Ideas: A Review of Artificial Superintelligence |publisher=Technically Sentient |date=20 September 2015 |accessdate=22 November 2016 |url-status=bot: unknown |archiveurl=https://web.archive.org/web/20160807183217/http://www.technicallysentient.com/blog/2015/9/11/intellectology-and-other-ideas-a-review-of-artificial-superintelligence |archivedate=7 August 2016}}</ref><ref>{{cite web|url=https://www.singularityweblog.com/roman-yampolskiy-on-artificial-superintelligence/ |title=Roman Yampolskiy on Artificial Superintelligence |publisher=Singularity Weblog |date=7 September 2015 |accessdate=22 November 2016}}</ref> He considers artificial intelligence a sub-field of intellectology. An example of his intellectology work is an attempt to determine the relation between various types of minds and the accessible "fun space", i.e. the space of non-boring activities.<ref>{{cite conference |last1=Ziesche |first1=Soenke |last2=Yampolskiy |first2=Roman V. |title=Artificial Fun: Mapping Minds to the Space of Fun |book-title=3rd Annual Global Online Conference on Information and Computer Technology (GOCICT16). Louisville, KY, USA. 16–18 November 2016 |year=2016 |arxiv=1606.07092}}</ref>

==AI-completeness==

Yampolskiy has worked on developing the theory of AI-completeness, suggesting the Turing test as a defining example.<ref>Roman V. Yampolskiy. "Turing Test as a Defining Feature of AI-Completeness". In ''Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) – In the Footsteps of Alan Turing''. Xin-She Yang (ed.). pp. 3–17 (Chapter 1). Springer, London, 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf|date=2013-05-22}}</ref>

==Books==

* ''Feature Extraction Approaches for Optical Character Recognition''. Briviba Scientific Press, 2007, {{ISBN|0-6151-5511-1}}
* ''Computer Security: from Passwords to Behavioral Biometrics''. New Academic Publishing, 2008, {{ISBN|0-6152-1818-0}}
* ''Game Strategy: a Novel Behavioral Biometric''. Independent University Press, 2009, {{ISBN|0-578-03685-1}}
* ''Artificial Superintelligence: a Futuristic Approach''. Chapman and Hall/CRC Press (Taylor & Francis Group), 2015, {{ISBN|978-1482234435}}
* ''AI: Unexplainable, Unpredictable, Uncontrollable''. Chapman & Hall/CRC Press, 2024, {{ISBN|978-1032576268}}

==References==

{{Reflist}}