{{Short description|US AI safety research center}}
{{Infobox organization
| name = Center for Human-Compatible Artificial Intelligence
| image = Center for Human-Compatible Artificial Intelligence.png
| formation = {{start date and age|2016}}
| headquarters = Berkeley, California
| leader_title = Director
| leader_name = Stuart J. Russell
| leader_title2 = Executive director
| leader_name2 = Mark Nitzberg
| parent_organization = University of California, Berkeley
| homepage = {{URL|https://humancompatible.ai/}}
}}
The '''Center for Human-Compatible Artificial Intelligence''' ('''CHAI''') is a research center at the [[University of California, Berkeley]] focusing on methods for making advanced artificial intelligence (AI) systems safe. The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert [[Stuart J. Russell]].<ref>{{cite web |url=https://vcresearch.berkeley.edu/news/uc-berkeley-launches-center-human-compatible-artificial-intelligence |title=UC Berkeley launches Center for Human-Compatible Artificial Intelligence |last=Norris |first=Jeffrey |date=Aug 29, 2016 |accessdate=Dec 27, 2019}}</ref><ref>{{cite web |url=https://www.theguardian.com/technology/2016/aug/30/rise-of-robots-evil-artificial-intelligence-uc-berkeley |title=The rise of robots: forget evil AI – the real risk is far more insidious |last=Solon |first=Olivia |date=Aug 30, 2016 |accessdate=Dec 27, 2019 |work=The Guardian}}</ref> Russell is known for co-authoring the widely used AI textbook ''[[Artificial Intelligence: A Modern Approach]]''.
CHAI's faculty membership includes Russell, [[Pieter Abbeel]] and [[Anca Dragan]] from Berkeley, [[Bart Selman]] and [[Joseph Halpern]] from Cornell,<ref>{{cite web |url=https://research.cornell.edu/research/human-compatible-ai |title=Human-Compatible AI |author=Cornell University |accessdate=Dec 27, 2019}}</ref> Michael Wellman and Satinder Singh Baveja from the University of Michigan, and Tom Griffiths and Tania Lombrozo from Princeton.<ref>{{cite web |url=https://humancompatible.ai/people |title=People |author=Center for Human-Compatible Artificial Intelligence |accessdate=Dec 27, 2019}}</ref> In 2016, the Open Philanthropy Project (OpenPhil) recommended that [[Good Ventures]] provide CHAI with $5,555,550 in support over five years.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai |author=Open Philanthropy Project |title=UC Berkeley — Center for Human-Compatible AI (2016) |date=Aug 2016 |accessdate=Dec 27, 2019}}</ref> CHAI has since received over $12,000,000 in additional grants from OpenPhil and Good Ventures, including for collaborations with the [[World Economic Forum]] and Global AI Council.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 |author=Open Philanthropy Project |title=UC Berkeley — Center for Human-Compatible AI (2019) |date=Nov 2019 |accessdate=Dec 27, 2019}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/grants/uc-berkeley-center-for-human-compatible-artificial-intelligence-2021/ |title=UC Berkeley — Center for Human-Compatible Artificial Intelligence (2021) |website=openphilanthropy.org}}</ref><ref>{{Cite web |date=April 2020 |title=World Economic Forum — Global AI Council Workshop |url=https://www.openphilanthropy.org/grants/world-economic-forum-global-ai-council-workshop/ |url-status=live |archive-url=https://archive.today/20230901170003/https://www.openphilanthropy.org/grants/world-economic-forum-global-ai-council-workshop/ |archive-date=2023-09-01 |access-date=2023-09-01 |website=Open Philanthropy |language=en-us}}</ref>
== Research ==
CHAI's approach to AI safety research focuses on value alignment strategies, particularly inverse reinforcement learning, in which the AI infers human values from observing human behavior.<ref>{{cite web |url=https://futureoflife.org/2016/08/31/new-center-human-compatible-ai/ |title=New Center for Human-Compatible AI |last=Conn |first=Ariel |publisher=Future of Life Institute |date=Aug 31, 2016 |accessdate=Dec 27, 2019}}</ref> It has also worked on modeling human-machine interaction in scenarios where intelligent machines have an "off-switch" that they are capable of overriding.<ref>{{cite web |url=https://www.thetimes.com/business-money/technology/article/making-robots-less-confident-could-prevent-them-taking-over-gnsblq7lx |title=Making robots less confident could prevent them taking over |last=Bridge |first=Mark |website=The Times |date=June 10, 2017}}</ref>
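These two ideas can be illustrated with standard formulations from the AI alignment literature; the following is a simplified sketch using generic notation rather than notation taken from CHAI's publications. In maximum-entropy inverse reinforcement learning, a learner observes trajectories <math>\tau = (s_1, a_1, \ldots, s_T, a_T)</math> of states and actions and infers a parameterized reward function <math>r_\theta</math> by assuming that demonstrated behavior is exponentially more likely when it achieves higher reward:

<math display="block">P(\tau \mid \theta) \propto \exp\left(\sum_{t=1}^{T} r_\theta(s_t, a_t)\right).</math>

The parameters <math>\theta</math> are then fit by maximizing the likelihood of the observed demonstrations, yielding an estimate of the values implicit in human behavior. In a simplified off-switch model, a robot that is uncertain about the human utility <math>u</math> of a proposed action can act immediately (expected value <math>\operatorname{E}[u]</math>), switch itself off (value 0), or defer to a human who permits the action only when <math>u > 0</math>. Deferring yields <math>\operatorname{E}[\max(u, 0)] \ge \max(\operatorname{E}[u], 0)</math>, so uncertainty about human preferences gives the robot an incentive to leave its off-switch under human control.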
== References ==
{{reflist|30em}}
== External links ==
* {{official|https://humancompatible.ai/}}
[[Category:Artificial intelligence associations]]
[[Category:Existential risk from artificial general intelligence]]
[[Category:Existential risk organizations]]
[[Category:Organizations established in 2016]]
{{existential risk from artificial intelligence}}
{{US-org-stub}}