PauseAI

{{Short description|Advocacy movement}}

{{Infobox organization

| name = PauseAI

| formation = {{start date and age|2023|05}}

| logo = PauseAI Logo.svg

| logo_size = 250

| founder = Joep Meindertsma

| founding_location = Utrecht, Netherlands

| type = Advocacy group, Nonprofit

| purpose = Mitigating the existential risk from artificial general intelligence and other risks of advanced artificial intelligence

| region = International

| website = [https://pauseai.info pauseai.info]

}}

PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely and keep them under democratic control.{{Cite web |title=PauseAI Proposal |url=https://pauseai.info/proposal |access-date=2024-05-02 |website=PauseAI |language=en}} The movement was established in Utrecht in May 2023 by software entrepreneur Joep Meindertsma.{{Cite magazine |last=Meaker |first=Morgan |title=Meet the AI Protest Group Campaigning Against Human Extinction |url=https://www.wired.com/story/pause-ai-existential-risk/ |access-date=2024-04-30 |magazine=Wired |language=en-US |issn=1059-1028}}{{Cite magazine |last=Reynolds |first=Matt |title=Protesters Are Fighting to Stop AI, but They're Split on How to Do It |url=https://www.wired.com/story/protesters-pause-ai-split-stop/ |access-date=2024-08-20 |magazine=Wired |language=en-US |issn=1059-1028}}{{Cite web |date=2023-05-24 |title=The rag-tag group trying to pause AI in Brussels |url=https://www.politico.eu/article/microsoft-brussels-elon-musk-anti-ai-protesters-well-five-of-them-descend-on-brussels/ |access-date=2024-04-30 |website=Politico |language=en-GB}}

== Proposal ==

PauseAI's stated goal is to "implement a pause on the training of AI systems more powerful than GPT-4". Its website lists proposed steps towards this goal:

* Set up an international AI safety agency, similar to the IAEA.
* Allow the training of general AI systems more powerful than GPT-4 only if their safety can be guaranteed.
* Allow the deployment of such models only after verification that no dangerous capabilities are present.

== Background ==

During the late 2010s and early 2020s, the capabilities of artificial intelligence models improved rapidly in a period known as the AI boom, which included the release of the large language model GPT-3, its more powerful successor GPT-4, and the image generation models Midjourney and DALL-E. This progress heightened concern about the risks of advanced AI and prompted the Future of Life Institute to release an open letter calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The letter was signed by thousands of AI researchers and industry figures, including Yoshua Bengio, Stuart Russell, and Elon Musk.{{Cite news |last1=Hern |first1=Alex |date=2023-03-29 |title=Elon Musk joins call for pause in creation of giant AI 'digital minds' |url=https://www.theguardian.com/technology/2023/mar/29/elon-musk-joins-call-for-pause-in-creation-of-giant-ai-digital-minds |access-date=2024-08-20 |work=The Guardian |language=en-GB |issn=0261-3077}}{{Cite news |last1=Metz |first1=Cade |last2=Schmidt |first2=Gregory |date=2023-03-29 |title=Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society' |url=https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html |access-date=2024-08-20 |work=The New York Times |language=en-US |issn=0362-4331}}{{Cite web |title=Pause Giant AI Experiments: An Open Letter |url=https://futureoflife.org/open-letter/pause-giant-ai-experiments/ |access-date=2024-08-20 |website=Future of Life Institute |language=en-US}}

== History ==

Founder Joep Meindertsma first became worried about the existential risk from artificial general intelligence after reading philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies. He founded PauseAI in May 2023, putting his job as the CEO of a software firm on hold. Meindertsma argued that progress in AI alignment research lags behind progress in AI capabilities, saying "there is a chance that we are facing extinction in a short frame of time", and that he therefore felt an urge to organise people to act.{{Cite web |date=2023-06-14 |title=Could AI lead us to extinction? This activist group believes so |url=https://www.euronews.com/next/2023/06/14/could-ai-lead-us-to-extinction-this-brussels-based-group-believes-so |access-date=2024-11-06 |website=euronews |language=en}}

PauseAI's first public action was a protest in front of Microsoft's Brussels lobbying office in May 2023, during an event on artificial intelligence. In November of the same year, members protested outside the inaugural AI Safety Summit at Bletchley Park.{{Cite web |title=What happens in Bletchley, stays in… |url=https://www.islingtontribune.co.uk/article/what-happens-in-bletchley-stays-in |access-date=2024-05-08 |website=Islington Tribune |language=en-gb}} Meindertsma regarded the Bletchley Declaration signed at the summit, which acknowledged the potential for catastrophic risks stemming from AI, as a small first step, but argued that "binding international treaties" are needed. He cited the Montreal Protocol and the treaties banning blinding laser weapons as examples of successful global agreements.

In February 2024, members of PauseAI gathered outside OpenAI's headquarters in San Francisco, in part due to OpenAI changing its usage policy that prohibited the use of its models for military purposes.{{Cite web |last=Nuñez |first=Michael |date=2024-02-13 |title=Protesters gather outside OpenAI office, opposing military AI and AGI |url=https://venturebeat.com/ai/protesters-gather-outside-openai-office-opposing-military-ai-and-agi/ |access-date=2024-08-20 |website=VentureBeat |language=en-US}}

On 13 May 2024, protests were held in thirteen countries ahead of the AI Seoul Summit, including the United States, the United Kingdom, Brazil, Germany, Australia, and Norway. Meindertsma said that those attending the summit "need to realize that they are the only ones who have the power to stop this race". Protesters in San Francisco held signs reading "When in doubt, pause" and "Quit your job at OpenAI. Trust your conscience".{{Cite magazine |last=Gordon |first=Anna |date=2024-05-13 |title=Why Protesters Are Demanding Pause on AI Development |url=https://time.com/6977680/ai-protests-international/ |access-date=2024-08-20 |magazine=TIME |language=en}}{{Cite web |last=Rodriguez |first=Joe Fitzgerald |date=2024-05-13 |title=As OpenAI Unveils Big Update, Protesters Call for Pause in Risky 'Frontier' Tech {{!}} KQED |url=https://www.kqed.org/news/11985949/as-openai-unveils-big-update-protesters-call-for-pause-in-risky-frontier-tech |access-date=2024-08-20 |website=www.kqed.org |language=en}}{{Cite web |date=2024-05-14 |title=OpenAI launches new AI model GPT-4o, a conversational digital personal assistant |url=https://abc7news.com/post/openai-launches-new-ai-model-gpt-4o/14809989/ |access-date=2024-08-20 |website=ABC7 San Francisco |language=en}} Jan Leike, head of the "superalignment" team at OpenAI, resigned two days later, citing his belief that "safety culture and processes [had] taken a backseat to shiny products".{{Cite web |last=Robison |first=Kylie |date=2024-05-17 |title=OpenAI researcher resigns, claiming safety has taken "a backseat to shiny products" |url=https://www.theverge.com/2024/5/17/24159095/openai-jan-leike-superalignment-sam-altman-ai-safety |access-date=2024-08-20 |website=The Verge |language=en}}

== References ==
{{Reflist}}