Margaret Mitchell (scientist)
{{Short description|American computer scientist}}
{{Other people|Margaret Mitchell}}
{{BLP primary sources|date=April 2023}}
{{Infobox scientist
| name = Margaret Mitchell
| image = MargaretMitchell2022.jpg
| caption = Mitchell (2022)
| birth_place = United States
| nationality =
| workplaces = {{ubl|Hugging Face|Google|Microsoft Research|Johns Hopkins University}}
| alma_mater = {{ubl|University of Aberdeen (PhD in Computer Science)|University of Washington (MSc in Computational Linguistics)|Reed College (BA in Linguistics)}}
| thesis_title = Generating Reference to Visible Objects
| thesis_year = 2013
| thesis_url = http://www.m-mitchell.com/papers/Mitchell-2013-Thesis.pdf
| fields = Computer science
| known_for = {{ubl|Algorithmic bias|Fairness in machine learning|Computer vision|Natural language processing}}
| website = [http://www.m-mitchell.com/ Personal website]
}}
'''Margaret Mitchell''' is an American computer scientist who works on algorithmic bias and fairness in machine learning. She is best known for her work on automatically removing undesired biases concerning demographic groups from machine learning models,{{cite conference |title=Mitigating Unwanted Biases with Adversarial Learning |last1=Zhang |first1=Brian Hu |last2=Lemoine |first2=Blake |last3=Mitchell |first3=Margaret |book-title=Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society |date=2018-12-01 |pages=335–340 |doi=10.1145/3278721.3278779 |conference=AAAI/ACM Conference on AI, Ethics, and Society |doi-access=free |arxiv=1801.07593}} as well as on more transparent reporting of their intended use.{{cite conference |url=https://dl.acm.org/doi/abs/10.1145/3287560.3287596 |title=Model Cards for Model Reporting |last1=Mitchell |first1=Margaret |last2=Wu |first2=Simone |last3=Zaldivar |first3=Andrew |last4=Barnes |first4=Parker |last5=Vasserman |first5=Lucy |last6=Hutchinson |first6=Ben |last7=Spitzer |first7=Elena |last8=Raji |first8=Inioluwa Deborah |last9=Gebru |first9=Timnit |date=2019-01-29 |book-title=Proceedings of the Conference on Fairness, Accountability, and Transparency |doi=10.1145/3287560.3287596 |conference=Conference on Fairness, Accountability, and Transparency |arxiv=1810.03993}}
== Education ==
Mitchell obtained a bachelor's degree in linguistics from Reed College in Portland, Oregon, in 2005. After working as a research assistant at the OGI School of Science and Engineering for two years, she obtained a master's degree in computational linguistics from the University of Washington in 2009. She then enrolled in a PhD program at the University of Aberdeen, where she wrote a doctoral thesis titled ''Generating Reference to Visible Objects'',{{cite thesis |type=PhD |last=Mitchell |first=Margaret |date=2013 |title=Generating Reference to Visible Objects |publisher=University of Aberdeen |url=http://www.m-mitchell.com/papers/Mitchell-2013-Thesis.pdf}} graduating in 2013.
== Career and research ==
Mitchell is best known for her work on fairness in machine learning and methods for mitigating algorithmic bias. This includes introducing the concept of "model cards" for more transparent model reporting, and methods for debiasing machine learning models using adversarial learning. In the adversarial framework, a predictor trained on the main task is paired with an adversary that tries to recover a protected attribute, such as the demographic group of interest, from the predictor's output; the predictor is penalized whenever the adversary succeeds, which reduces the unwanted bias (see the sketch below).{{Cite book|last1=Zhang|first1=Brian Hu|last2=Lemoine|first2=Blake|last3=Mitchell|first3=Margaret|title=Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society |chapter=Mitigating Unwanted Biases with Adversarial Learning |date=2018-12-27|chapter-url=https://dl.acm.org/doi/10.1145/3278721.3278779|series=AIES '18|language=en|location=New Orleans LA USA|publisher=ACM|pages=335–340|doi=10.1145/3278721.3278779|arxiv=1801.07593|isbn=978-1-4503-6012-8|s2cid=9424845}}
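The mechanism can be illustrated with a short, hypothetical sketch. The following PyTorch example is not code from the paper: the data, model sizes, and the <code>alpha</code> weight are invented for illustration, and the paper's projection-based gradient correction is omitted for brevity. It alternates between training an adversary to recover the protected attribute from the predictor's output and training the predictor to both fit its task and defeat the adversary.

<syntaxhighlight lang="python">
# Simplified sketch of adversarial debiasing in the spirit of
# Zhang, Lemoine & Mitchell (2018). Illustrative only: data and
# hyperparameters are hypothetical, and the paper's projection-based
# gradient correction is omitted.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: features X, task label y, and a protected attribute z
# (e.g. membership in a demographic group) that leaks into the labels.
n, d = 256, 8
X = torch.randn(n, d)
z = torch.randint(0, 2, (n, 1)).float()
y = ((X[:, :1] + 2.0 * z + 0.1 * torch.randn(n, 1)) > 1.0).float()

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0  # strength of the debiasing penalty (hypothetical value)

for step in range(200):
    # 1) Train the adversary to recover z from the predictor's output.
    y_logit = predictor(X).detach()
    loss_a = bce(adversary(y_logit), z)
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()

    # 2) Train the predictor on its task while making the adversary's
    #    job harder, so its output carries less information about z.
    y_logit = predictor(X)
    loss_p = bce(y_logit, y) - alpha * bce(adversary(y_logit), z)
    opt_p.zero_grad()
    loss_p.backward()
    opt_p.step()
</syntaxhighlight>

In such a setup, larger values of the penalty weight trade task accuracy for a weaker statistical dependence between the model's output and the protected attribute.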
In 2012, Mitchell joined the Human Language Technology Center of Excellence at Johns Hopkins University as a postdoctoral researcher, before taking up a position at Microsoft Research in 2013.{{Cite web|last=Mitchell|first=Margaret|date=February 14, 2017|title=Margaret Mitchell (Google Research) "Algorithmic Bias in Artificial Intelligence: The Seen and Unseen Factors Influencing Machine Perception of Images and Language"|url=https://www.clsp.jhu.edu/events/margaret-mitchell-google-research/#.YYr0vprMKlF|access-date=November 9, 2021|website=Johns Hopkins}} At Microsoft, Mitchell was the research lead of the Seeing AI project, an app that supports visually impaired users by describing text and images aloud.{{cite web |url=https://www.microsoft.com/en-us/ai/seeing-ai |title=Seeing AI in New Languages |website=Microsoft |access-date=February 20, 2021}}
In November 2016, she joined Google as a senior research scientist in its Research and Machine Intelligence group. While at Google, she founded and co-led the Ethical Artificial Intelligence team together with Timnit Gebru. In May 2018, she represented Google in the Partnership on AI.
In February 2018, she gave a TED talk titled "How we can build AI to help humans, not hurt us".{{cite web |url=https://www.ted.com/speakers/margaret_mitchell |title=Margaret Mitchell's TED talk |date=February 2018 |website=TED |access-date=February 20, 2021}}
In January 2021, after Timnit Gebru's termination from Google, Mitchell reportedly used a script to search through her corporate account and download emails that allegedly documented discriminatory incidents involving Gebru. An automated system subsequently locked her account. Following media attention, Google claimed that she "exfiltrated thousands of files and shared them with multiple external accounts".{{Cite news |last=Murphy |first=Margi |date=20 February 2021 |title=Google sacks second ethical AI researcher amid censorship storm |work=The Daily Telegraph |url=https://www.telegraph.co.uk/technology/2021/02/20/google-sacks-second-ethical-ai-researcher-amid-censorship-storm/ |access-date=April 2, 2023}}{{Cite web |last=Fried |first=Ina |date=2021-01-20 |title=Scoop: Google is investigating the actions of another top AI ethicist |url=https://www.axios.com/2021/01/20/scoop-google-is-investigating-the-actions-of-another-top-ai-ethicist |access-date=2023-04-02 |website=Axios |language=en}}{{Cite magazine |last=Simonite |first=Tom |title=What Really Happened When Google Ousted Timnit Gebru |language=en-US |magazine=Wired |url=https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/ |access-date=2023-04-02 |issn=1059-1028}} After a five-week investigation, Mitchell was fired.{{cite web|date=February 20, 2021|title=Google fires Margaret Mitchell, another top researcher on its AI ethics team|url=https://www.theguardian.com/technology/2021/feb/19/google-fires-margaret-mitchell-ai-ethics-team|access-date=February 20, 2021|website=The Guardian}}{{cite web|date=February 20, 2021|title=Margaret Mitchell: Google fires AI ethics founder|url=https://www.bbc.com/news/technology-56135817|access-date=February 20, 2021|website=BBC}}{{cite web|date=February 20, 2021|title=Google fires Ethical AI lead Margaret Mitchell|url=https://venturebeat.com/2021/02/19/google-fires-ethical-ai-lead-margaret-mitchell/|access-date=February 20, 2021|website=VentureBeat}} Prior to her dismissal, Mitchell had been a vocal advocate for diversity at Google and had voiced concerns about research censorship at the company.{{Cite web|last=Osborne|first=Charlie|title=Google fires top ethical AI expert Margaret Mitchell|url=https://www.zdnet.com/article/google-fires-top-ethical-ai-expert-margaret-mitchell/|access-date=2021-03-22|website=ZDNet|language=en}}
In late 2021, she joined AI start-up Hugging Face.{{cite news | url=https://www.bloomberg.com/news/articles/2021-08-24/fired-at-google-after-critical-work-ai-researcher-mitchell-to-join-hugging-face | title=Fired from Google After Critical Work, AI Researcher Mitchell to Join Startup | newspaper=Bloomberg.com | date=24 August 2021 }}
== Leadership ==
Mitchell co-founded Widening NLP, a special interest group within the Association for Computational Linguistics that seeks to increase the proportion of women and minorities working in natural language processing.{{Cite magazine |last=Johnson |first=Khari |title=Black and Queer AI Groups Say They'll Spurn Google Funding |language=en-US |magazine=Wired |url=https://www.wired.com/story/black-queer-ai-groups-spurn-google-funding/ |access-date=2023-04-02 |issn=1059-1028}}
== References ==
{{reflist}}
{{Authority control}}
{{DEFAULTSORT:Mitchell, Margaret}}
Category:Alumni of the University of Aberdeen
Category:American women computer scientists
Category:American computer scientists
Category:Machine learning researchers
Category:Natural language processing researchers
Category:University of Washington alumni
Category:Year of birth missing (living people)