Algorithmic Justice League
{{Short description|Digital advocacy non-profit organization}}
{{Use mdy dates|date=June 2022}}
{{Infobox organization
| name = Algorithmic Justice League
| abbreviation = AJL
| logo = Algorithmic_Justice_League_Logo.png
| logo_size = 150px
| formation = 2016
| founder = Joy Buolamwini
| purpose = AI activism
| location_city = Cambridge, Massachusetts
| website = {{URL|https://www.ajl.org/}}
}}
The '''Algorithmic Justice League''' ('''AJL''') is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to raise public awareness of the use of artificial intelligence (AI) in society and of the harms and biases that AI systems can pose.{{Cite web |title=Learn More |publisher=The Algorithmic Justice League |url=https://www.ajl.org/learn-more |access-date=April 7, 2022 |archive-date=March 29, 2022 |archive-url=https://web.archive.org/web/20220329220526/https://www.ajl.org/learn-more |url-status=live }} The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and to promote industry and government action against the creation and deployment of biased AI systems. In 2021, Fast Company named the AJL one of the 10 most innovative AI companies in the world.{{Cite news |date=March 9, 2021 |title=The 10 most innovative companies in artificial intelligence |url=https://www.fastcompany.com/90600124/artificial-intelligence-most-innovative-companies-2021 |access-date=April 7, 2022 |website=Fast Company |archive-date=April 7, 2022 |archive-url=https://web.archive.org/web/20220407230345/https://www.fastcompany.com/90600124/artificial-intelligence-most-innovative-companies-2021 |url-status=live }}{{cite news |last1=Villoro |first1=Elías |title=Coded Bias and the Algorithm Justice League |url=https://boingboing.net/2023/02/16/coded-bias-and-the-algorithm-justice-league.html |work=Boing Boing |date=16 February 2023}}
== History ==
Buolamwini founded the Algorithmic Justice League in 2016 as a graduate student in the MIT Media Lab. While experimenting with facial detection software in her research, she found that the software could not detect her "highly melanated" face until she donned a white mask.{{Cite web |title=Documentary 'Coded Bias' Unmasks The Racism Of Artificial Intelligence |date=November 18, 2020 |url=https://www.wbur.org/news/2020/11/18/documentary-coded-bias-review |access-date=April 7, 2022 |publisher=WBUR-FM |archive-date=January 4, 2022 |archive-url=https://web.archive.org/web/20220104212834/https://www.wbur.org/news/2020/11/18/documentary-coded-bias-review |url-status=live }} This incident inspired Buolamwini to found the AJL to draw public attention to the existence of bias in artificial intelligence and the threat it can pose to civil rights. Early AJL campaigns focused primarily on bias in facial recognition software; recent campaigns have dealt more broadly with questions of equitability and accountability in AI, including algorithmic bias, algorithmic decision-making, algorithmic governance, and algorithmic auditing.
The AJL is part of a broader community of organizations working toward similar goals, including Data & Society, Data for Black Lives, the Distributed Artificial Intelligence Research Institute (DAIR), and Fight for the Future.{{Cite news |title=Google fired its star AI researcher one year ago. Now she's launching her own institute |newspaper=The Washington Post |url=https://www.washingtonpost.com/technology/2021/12/02/timnit-gebru-dair/ |access-date=April 7, 2022 |issn=0190-8286 |archive-date=December 2, 2021 |archive-url=https://web.archive.org/web/20211202165034/https://www.washingtonpost.com/technology/2021/12/02/timnit-gebru-dair/ |url-status=live }}{{Cite web |title=DAIR |url=https://www.dair-institute.org/ |access-date=April 7, 2022 |publisher=Distributed AI Research Institute |archive-date=April 7, 2022 |archive-url=https://web.archive.org/web/20220407104645/https://www.dair-institute.org/ |url-status=live }}{{Cite web |first=Rachel |last=Metz |title=Activists pushed the IRS to drop facial recognition. They won, but they're not done yet |url=https://www.cnn.com/2022/03/07/tech/facial-recognition-activists-irs/index.html |access-date=April 7, 2022 |publisher=CNN |date=March 7, 2022 |archive-date=March 31, 2022 |archive-url=https://web.archive.org/web/20220331222755/https://www.cnn.com/2022/03/07/tech/facial-recognition-activists-irs/index.html |url-status=live }}
== Notable work ==
=== Facial recognition ===
AJL founder Buolamwini collaborated with AI ethicist Timnit Gebru to release a 2018 study on racial and gender bias in facial recognition algorithms used by commercial systems from Microsoft, IBM, and Face++. Their research, entitled "Gender Shades", determined that machine learning models released by IBM and Microsoft were less accurate when analyzing dark-skinned and feminine faces compared to performance on light-skinned and masculine faces.{{Cite journal |last=Buolamwini, Joy; Gebru, Timnit |date=2018 |title=Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification |url=http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf |journal=Proceedings of the 1st Conference on Fairness, Accountability and Transparency |volume=81 |pages=77–91 |via=December 12, 2020 |access-date=December 12, 2020 |archive-date=December 12, 2020 |archive-url=https://web.archive.org/web/20201212034350/http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf |url-status=live }}{{Cite web |title=Gender Shades |url=http://gendershades.org/index.html |access-date=April 7, 2022 |website=gendershades.org |archive-date=May 29, 2022 |archive-url=https://web.archive.org/web/20220529045504/https://gendershades.org/index.html |url-status=live }}{{Cite web |first=Spencer |last=Buell |date=February 23, 2018 |title=MIT Researcher: AI Has a Race Problem, and We Need to Fix It |url=https://www.bostonmagazine.com/news/2018/02/23/artificial-intelligence-race-dark-skin-bias/ |access-date=April 7, 2022 |website=Boston |archive-date=April 7, 2022 |archive-url=https://web.archive.org/web/20220407042644/https://www.bostonmagazine.com/news/2018/02/23/artificial-intelligence-race-dark-skin-bias/ |url-status=live }} The "Gender Shades" paper was accompanied by the launch of the Safe Face Pledge, an initiative designed with the Georgetown Center on Privacy & Technology that urged technology organizations and governments to prohibit lethal use of facial recognition technologies.{{Cite web |date=January 20, 2021 |title=Announcement - Safe Face Pledge |url=https://www.safefacepledge.org/press-release |access-date=April 7, 2022 |archive-url=https://web.archive.org/web/20210120165837/https://www.safefacepledge.org/press-release |archive-date=January 20, 2021 }} The Gender Shades project and subsequent advocacy undertaken by AJL and similar groups led multiple tech companies, including Amazon and IBM, to address biases in the development of their algorithms and even temporarily ban the use of their products by police in 2020.{{Cite web |title=The two-year fight to stop Amazon from selling face recognition to the police |url=https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/ |access-date=April 7, 2022 |website=MIT Technology Review |language=en |archive-date=April 7, 2022 |archive-url=https://web.archive.org/web/20220407232840/https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/ |url-status=live }}{{Cite web |title=IBM pulls out of facial recognition, fearing racial profiling and mass surveillance |url=https://fortune.com/2020/06/09/george-floyd-ibm-exits-facial-recognition-bias-human-rights/ |access-date=April 7, 2022 |website=Fortune |language=en |archive-date=April 7, 2022 |archive-url=https://web.archive.org/web/20220407232811/https://fortune.com/2020/06/09/george-floyd-ibm-exits-facial-recognition-bias-human-rights/ |url-status=live }}
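The "Gender Shades" methodology rests on disaggregated evaluation: rather than reporting a single overall accuracy, a classifier's accuracy is computed separately for each intersectional subgroup (for example, darker-skinned women versus lighter-skinned men). The following Python sketch illustrates that style of evaluation on toy data; the column names and values are illustrative assumptions, not the study's actual code or dataset.
<syntaxhighlight lang="python">
# Minimal sketch of disaggregated (intersectional) accuracy, in the spirit of "Gender Shades".
# The column names and toy data below are assumptions for illustration only.
import pandas as pd

def subgroup_accuracy(df: pd.DataFrame) -> pd.DataFrame:
    """Classification accuracy for every skin-type x gender subgroup."""
    correct = df["y_true"].eq(df["y_pred"])                      # per-row correctness
    acc = correct.groupby([df["skin_type"], df["gender"]]).mean()  # mean correctness per subgroup
    return acc.rename("accuracy").reset_index()

# Toy predictions from a hypothetical gender classifier
predictions = pd.DataFrame({
    "skin_type": ["lighter", "lighter", "darker", "darker"],
    "gender":    ["male",    "female",  "male",   "female"],
    "y_true":    ["male",    "female",  "male",   "female"],
    "y_pred":    ["male",    "female",  "male",   "male"],   # error concentrated in one subgroup
})

print(subgroup_accuracy(predictions))
</syntaxhighlight>
A large gap between subgroup accuracies, rather than a low overall accuracy, is the kind of disparity such an audit reports.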
Buolamwini and AJL were featured in the 2020 Netflix documentary Coded Bias, which premiered at the Sundance Film Festival.{{Cite news |last=Lee |first=Jennifer 8 |date=February 8, 2020 |title=When Bias Is Coded Into Our Technology |language=en |work=NPR |url=https://www.npr.org/sections/codeswitch/2020/02/08/770174171/when-bias-is-coded-into-our-technology |access-date=April 7, 2022 |archive-date=March 26, 2022 |archive-url=https://web.archive.org/web/20220326025113/https://www.npr.org/sections/codeswitch/2020/02/08/770174171/when-bias-is-coded-into-our-technology |url-status=live }}{{Cite web |title=Watch Coded Bias {{!}} Netflix |url=https://www.netflix.com/title/81328723 |access-date=April 8, 2022 |website=www.netflix.com |language=en |archive-date=March 24, 2022 |archive-url=https://web.archive.org/web/20220324014331/https://www.netflix.com/title/81328723 |url-status=live }} This documentary focused on the AJL's research and advocacy efforts to spread awareness of algorithmic bias in facial recognition systems.
A research collaboration involving the AJL released a white paper in May 2020 calling for the creation of a new United States federal government office to regulate the development and deployment of facial recognition technologies.{{Cite web |last=Burt |first=Chris |date=June 8, 2020 |title=Biometrics experts call for creation of FDA-style government body to regulate facial recognition {{!}} Biometric Update |url=https://www.biometricupdate.com/202006/biometrics-experts-call-for-creation-of-fda-style-government-body-to-regulate-facial-recognition |access-date=April 7, 2022 |website=www.biometricupdate.com |language=en-US |archive-date=April 7, 2022 |archive-url=https://web.archive.org/web/20220407232811/https://www.biometricupdate.com/202006/biometrics-experts-call-for-creation-of-fda-style-government-body-to-regulate-facial-recognition |url-status=live }} The white paper argued that such an office would help reduce the risks of mass surveillance and bias that facial recognition technologies pose to vulnerable populations.{{Cite journal |last=Miller, Erik Learned; Ordóñez, Vicente; Morgenstern, Jamie; Buolamwini, Joy |date=2020 |title=Facial Recognition Technologies in the Wild: A Call for a Federal Office |url=https://people.cs.umass.edu/~elm/papers/FRTintheWild.pdf |journal=White Paper |pages=3–49 |access-date=April 8, 2022 |archive-date=January 21, 2022 |archive-url=https://web.archive.org/web/20220121215051/https://people.cs.umass.edu/~elm/papers/FRTintheWild.pdf |url-status=live }}
=== Bias in speech recognition ===
The AJL has run initiatives to raise public awareness of algorithmic bias and inequities in the performance of AI systems for speech and language modeling across gender and racial populations. Its work in this space highlights gender and racial disparities in the performance of commercial speech recognition and natural language processing systems, which have been shown to underperform for racial minorities and to reinforce gender stereotypes.{{Cite book |last1=Bender |first1=Emily M. |last2=Gebru |first2=Timnit |last3=McMillan-Major |first3=Angelina |last4=Shmitchell |first4=Shmargaret |title=Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency |chapter=On the Dangers of Stochastic Parrots: Can Language Models be Too Big? |date=March 3, 2021 |series=FAccT '21 |language=en |location=Virtual Event Canada |publisher=ACM |pages=610–623 |doi=10.1145/3442188.3445922 |isbn=978-1-4503-8309-7 |s2cid=232040593 |doi-access=free }}{{Cite book |last=Kiritchenko, Svetlana; Mohammad, Saif M. |title=Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics |chapter=Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems |date=2018 |chapter-url=https://aclanthology.org/S18-2005.pdf |series=Proceedings of the 7th Joint Conference on Lexical and Computational Semantics |pages=43–53 |doi=10.18653/v1/S18-2005 |arxiv=1805.04508 |s2cid=21670658 |via=June 5–6, 2018. |access-date=April 8, 2022 |archive-date=March 8, 2022 |archive-url=https://web.archive.org/web/20220308101838/https://aclanthology.org/S18-2005.pdf |url-status=live }}{{Cite journal |last1=Koenecke |first1=Allison |author-link=Allison Koenecke |last2=Nam |first2=Andrew |last3=Lake |first3=Emily |last4=Nudell |first4=Joe |last5=Quartey |first5=Minnie |last6=Mengesha |first6=Zion |last7=Toups |first7=Connor |last8=Rickford |first8=John R. |last9=Jurafsky |first9=Dan |last10=Goel |first10=Sharad |date=April 7, 2020 |title=Racial disparities in automated speech recognition |journal=Proceedings of the National Academy of Sciences |language=en |volume=117 |issue=14 |pages=7684–7689 |doi=10.1073/pnas.1915768117 |issn=0027-8424 |pmc=7149386 |pmid=32205437 |bibcode=2020PNAS..117.7684K |doi-access=free}}
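Studies such as the PNAS paper cited above quantify these disparities by comparing the average word error rate (WER) of a speech recognizer across speaker groups. The sketch below shows, on toy data, how such a per-group comparison could be computed with the open-source jiwer library; the group labels and transcripts are illustrative assumptions, not data from the study.
<syntaxhighlight lang="python">
# Illustrative per-group word error rate (WER) comparison for an ASR system.
# Group labels and transcripts are toy assumptions, not the PNAS study's data.
from collections import defaultdict
import jiwer  # pip install jiwer

samples = [
    # (speaker group, human reference transcript, ASR hypothesis)
    ("group_a", "turn the lights off", "turn the lights off"),
    ("group_a", "call my sister now", "call my sister now"),
    ("group_b", "turn the lights off", "turn delights off"),
    ("group_b", "call my sister now", "call my system now"),
]

by_group = defaultdict(lambda: ([], []))
for group, reference, hypothesis in samples:
    refs, hyps = by_group[group]
    refs.append(reference)
    hyps.append(hypothesis)

for group, (refs, hyps) in sorted(by_group.items()):
    # jiwer.wer pools substitutions, deletions, and insertions over all sentence pairs
    print(f"{group}: WER = {jiwer.wer(refs, hyps):.2f}")
</syntaxhighlight>
A consistently higher WER for one speaker group indicates the kind of disparity described in this section.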
In March 2020, the AJL released a spoken-word artistic piece, titled Voicing Erasure, to raise public awareness of racial bias in automatic speech recognition (ASR) systems. The piece was performed by female and non-binary researchers and scholars, including Ruha Benjamin, Sasha Costanza-Chock, Safiya Noble, and Kimberlé Crenshaw.{{Cite web |date=April 1, 2020 |title=Algorithmic Justice League protests bias in voice AI and media coverage |url=https://venturebeat.com/2020/03/31/algorithmic-justice-league-protests-bias-voice-ai-and-media-coverage/ |access-date=April 7, 2022 |website=VentureBeat |language=en-US |archive-date=March 31, 2022 |archive-url=https://web.archive.org/web/20220331224520/https://venturebeat.com/2020/03/31/algorithmic-justice-league-protests-bias-voice-ai-and-media-coverage/ |url-status=live }}{{Cite web |title=Voicing Erasure |url=https://www.ajl.org/voicing-erasure |access-date=April 7, 2022 |website=www.ajl.org |archive-date=April 11, 2022 |archive-url=https://web.archive.org/web/20220411180534/https://www.ajl.org/voicing-erasure |url-status=live }} The AJL based Voicing Erasure on a 2020 PNAS paper, "Racial disparities in automated speech recognition", which identified racial disparities in the performance of five commercial ASR systems.
=== Algorithmic governance ===
In 2019, Buolamwini represented the AJL at a congressional hearing of the US House Committee on Science, Space, and Technology to discuss the commercial and government applications of facial recognition technologies.{{Cite news |last=Quach |first=Katyanna |title=We listened to more than 3 hours of US Congress testimony on facial recognition so you didn't have to go through it |url=https://www.theregister.com/2019/05/22/congress_facial_recognition/ |access-date=April 8, 2022 |website=The Register |language=en |date=22 May 2019 |archive-date=January 21, 2022 |archive-url=https://web.archive.org/web/20220121182029/https://www.theregister.com/2019/05/22/congress_facial_recognition/ |url-status=live }}{{Cite web |title=Artificial Intelligence: Societal and Ethical Implications |url=https://science.house.gov/hearings/artificial-intelligence-societal-and-ethical-implications |access-date=April 8, 2022 |website=House Committee on Science, Space and Technology |language=en |date=June 26, 2019 |archive-date=March 15, 2022 |archive-url=https://web.archive.org/web/20220315210601/https://science.house.gov/hearings/artificial-intelligence-societal-and-ethical-implications |url-status=live }} Serving as a witness at the hearing, Buolamwini spoke on the underperformance of facial recognition technologies in identifying people with darker skin and feminine features, supporting her position with research from the AJL's "Gender Shades" project.{{Cite web |last=Rodrigo |first=Chris Mills |date=July 2, 2020 |title=Dozens of advocacy groups push for Congress to ban facial recognition technology |url=https://thehill.com/policy/technology/505563-dozens-of-advocacy-groups-push-for-congress-to-ban-facial-recognition/ |access-date=April 8, 2022 |website=The Hill |language=en-US |archive-date=April 8, 2022 |archive-url=https://web.archive.org/web/20220408092616/https://thehill.com/policy/technology/505563-dozens-of-advocacy-groups-push-for-congress-to-ban-facial-recognition/ |url-status=live }}{{Cite news |date=December 19, 2019 |title=U.S. government study finds racial bias in facial recognition tools |language=en |work=Reuters |url=https://www.reuters.com/article/us-usa-crime-face-idUSKBN1YN2V1 |access-date=April 8, 2022 |archive-date=April 8, 2022 |archive-url=https://web.archive.org/web/20220408092344/https://www.reuters.com/article/us-usa-crime-face-idUSKBN1YN2V1 |url-status=live }}
In January 2022, the AJL collaborated with Fight for the Future and the Electronic Privacy Information Center to release an online petition, DumpID.me, calling on the IRS to halt its use of ID.me, a facial recognition technology it required users to pass when logging in. The AJL and other organizations also sent letters to legislators urging them to press the IRS to stop the program. In February 2022, the IRS agreed to halt the program and stop using facial recognition technology.{{Cite web |author=Rachel Metz |title=IRS halts plan to require facial recognition for logging in to user accounts |url=https://www.cnn.com/2022/02/07/tech/irs-facial-recognition-idme/index.html |access-date=April 8, 2022 |website=CNN |date=February 7, 2022 |archive-date=April 8, 2022 |archive-url=https://web.archive.org/web/20220408092616/https://www.cnn.com/2022/02/07/tech/irs-facial-recognition-idme/index.html |url-status=live }} The AJL has since shifted its efforts toward convincing other government agencies to stop using facial recognition technology; as of March 2022, the DumpID.me petition calls for ending the use of ID.me across all government agencies.{{Cite web |title=Demand All Government Agencies Drop ID.me |url=https://dumpid.me/ |access-date=April 8, 2022 |website=Fight for the Future |language=en-US |archive-date=April 26, 2022 |archive-url=https://web.archive.org/web/20220426062217/https://www.dumpid.me/ |url-status=live }}
=== Olay Decode the Bias campaign ===
In September 2021, Olay collaborated with the AJL and O'Neil Risk Consulting & Algorithmic Auditing (ORCAA) on the Decode the Bias campaign, which included an audit exploring whether the Olay Skin Advisor (OSA) system was biased against women of color.{{Cite web |title=Decode the Bias & Face Anything {{!}} Women in STEM {{!}} OLAY |url=https://www.olay.com/decodethebias |access-date=April 8, 2022 |website=www.olay.com |language=en |archive-date=April 11, 2022 |archive-url=https://web.archive.org/web/20220411180821/https://www.olay.com/decodethebias |url-status=live }} The AJL chose to collaborate with Olay because of Olay's commitment to obtaining customer consent before using their selfies and skin data in the audit.{{Cite web |last=Shacknai |first=Gabby |title=Olay Teams Up With Algorithmic Justice Pioneer Joy Buolamwini To #DecodetheBias In Beauty |url=https://www.forbes.com/sites/gabbyshacknai/2021/09/14/olay-teams-up-with-algorithmic-justice-pioneer-joy-buolamwini-to-decodethebias-in-beauty/ |access-date=April 8, 2022 |website=Forbes |language=en |date=September 14, 2021 |archive-date=March 28, 2022 |archive-url=https://web.archive.org/web/20220328031033/https://www.forbes.com/sites/gabbyshacknai/2021/09/14/olay-teams-up-with-algorithmic-justice-pioneer-joy-buolamwini-to-decodethebias-in-beauty/ |url-status=live }} The AJL and ORCAA audit found that the OSA system's performance varied with participants' skin color and age: the system was more accurate for participants with lighter skin tones, as measured on the Fitzpatrick Skin Type and individual typology angle classification scales, and for participants aged 30–39.{{Cite web |title=ORCAA's Report |url=https://www.olay.com/decodethebias/orcaa |access-date=April 8, 2022 |website=www.olay.com |language=en |archive-date=April 11, 2022 |archive-url=https://web.archive.org/web/20220411181339/https://www.olay.com/decodethebias/orcaa |url-status=live }} Olay has since taken steps to audit the OSA system internally and mitigate its bias. Olay has also funded 1,000 girls to attend the Black Girls Code camp, to encourage African-American girls to pursue STEM careers.
=== CRASH project ===
In July 2020, the Community Reporting of Algorithmic System Harms (CRASH) Project was launched by the AJL.{{Cite web |title=Algorithmic Vulnerability Bounty Project (AVBP) |url=https://www.ajl.org/avbp |access-date=April 8, 2022 |website=www.ajl.org |archive-date=March 18, 2022 |archive-url=https://web.archive.org/web/20220318075336/https://www.ajl.org/avbp |url-status=live }} The project began in 2019 when Buolamwini and digital security researcher Camille François met at the Bellagio Center Residency Program, hosted by The Rockefeller Foundation. Since then, the project has also been co-led by MIT professor and AJL research director Sasha Costanza-Chock. The CRASH project focused on creating a framework for the development of bug-bounty programs (BBPs) that would incentivize individuals to uncover and report instances of algorithmic bias in AI technologies.{{Cite web |last=Laas |first=Molly |title=Bug Bounties For Algorithmic Harms? {{!}} Algorithmic Justice League |url=https://mediawell.ssrc.org/2022/01/27/bug-bounties-for-algorithmic-harms-algorithmic-justice-league/ |access-date=April 8, 2022 |website=MediaWell |language=en-US |date=January 27, 2022 |archive-date=January 18, 2023 |archive-url=https://web.archive.org/web/20230118040849/https://mediawell.ssrc.org/2022/01/27/bug-bounties-for-algorithmic-harms-algorithmic-justice-league/ |url-status=live }} After conducting interviews with BBP participants and a case study of Twitter's BBP,{{Cite web |last=Vincent |first=James |date=August 10, 2021 |title=Twitter's photo-cropping algorithm prefers young, beautiful, and light-skinned faces |url=https://www.theverge.com/2021/8/10/22617972/twitter-photo-cropping-algorithm-ai-bias-bug-bounty-results |access-date=April 8, 2022 |website=The Verge |language=en |archive-date=April 8, 2022 |archive-url=https://web.archive.org/web/20220408093559/https://www.theverge.com/2021/8/10/22617972/twitter-photo-cropping-algorithm-ai-bias-bug-bounty-results |url-status=live }} AJL researchers developed and proposed a conceptual framework for designing BBPs that compensate and encourage individuals to locate and disclose the existence of bias in AI systems.{{Cite web |title=AJL Bug Bounties Report.pdf |url=https://drive.google.com/file/d/1f4hVwQNiwp13zy62wUhwIg84lOq0ciG_/view |access-date=April 8, 2022 |website=Google Docs |archive-date=January 31, 2022 |archive-url=https://web.archive.org/web/20220131151416/https://drive.google.com/file/d/1f4hVwQNiwp13zy62wUhwIg84lOq0ciG_/view |url-status=live }} The AJL intends for the CRASH framework to give individuals, especially those who have traditionally been excluded from the design of AI technologies, the ability to report algorithmic harms and stimulate change in the AI technologies deployed by companies.{{Cite web |last=League |first=Algorithmic Justice |date=August 4, 2021 |title=Happy Hacker Summer Camp Season! |url=https://medium.com/@ajlunited/happy-hacker-summer-camp-season-e1f6fdaf7694 |access-date=April 8, 2022 |website=Medium |language=en |archive-date=November 16, 2021 |archive-url=https://web.archive.org/web/20211116013310/https://medium.com/@ajlunited/happy-hacker-summer-camp-season-e1f6fdaf7694 |url-status=live }}{{Cite journal |last=Ellis, Ryan; Stevens, Yuan |date=January 2022 |title=Bounty Everything: Hackers and the Making of the Global Bug Marketplace |url=https://datasociety.net/wp-content/uploads/2022/01/BountyEverythingFinal01052022.pdf |journal=Data & Society |pages=3–86 |access-date=April 8, 2022 |archive-date=February 24, 2022 |archive-url=https://web.archive.org/web/20220224002402/https://datasociety.net/wp-content/uploads/2022/01/BountyEverythingFinal01052022.pdf |url-status=live }}
== Support and media appearances ==
AJL initiatives have been funded by the Ford Foundation, the MacArthur Foundation, the Alfred P. Sloan Foundation, the Rockefeller Foundation, the Mozilla Foundation and individual private donors.{{Cite web |title=AJL Bug Bounties Report.pdf |url=https://drive.google.com/file/d/1f4hVwQNiwp13zy62wUhwIg84lOq0ciG_/view?usp=embed_facebook |access-date=April 8, 2022 |website=Google Docs |archive-date=April 8, 2022 |archive-url=https://web.archive.org/web/20220408093554/https://drive.google.com/file/d/1f4hVwQNiwp13zy62wUhwIg84lOq0ciG_/view?usp=embed_facebook |url-status=live }} Fast Company recognized AJL as one of the 10 most innovative AI companies in 2021. Additionally, venues such as Time magazine, The New York Times, NPR, and CNN have featured Buolamwini's work with the AJL in several interviews and articles.{{Cite web |title=Joy Buolamwini: How Do Biased Algorithms Damage Marginalized Communities? |url=https://www.npr.org/2020/10/30/929204946/joy-buolamwini-how-do-biased-algorithms-damage-marginalized-communities |access-date=April 8, 2022 |website=NPR |archive-date=April 3, 2022 |archive-url=https://web.archive.org/web/20220403094310/https://www.npr.org/2020/10/30/929204946/joy-buolamwini-how-do-biased-algorithms-damage-marginalized-communities |url-status=live }}
== References ==
{{reflist}}
== External links ==
* [https://www.ajl.org/ Official website]
[[Category:Civil liberties advocacy groups in the United States]]
[[Category:Digital rights organizations]]
[[Category:Artificial intelligence associations]]
[[Category:Politics and technology]]
[[Category:Ethics of science and technology]]
[[Category:Diversity in computing]]
[[Category:Existential risk from artificial general intelligence]]
[[Category:Organizations based in Cambridge, Massachusetts]]
[[Category:Government by algorithm]]
[[Category:Non-profit organizations based in Massachusetts]]
[[Category:Data and information organizations]]
[[Category:Social welfare charities based in the United States]]