The Dangers of AI and Misinformation on Vulnerable Communities

AI is everywhere—shaping our feeds, automating decisions, and influencing our perceptions. But what happens when these systems amplify misinformation and perpetuate biases that disproportionately harm marginalized communities?

I dove deep into this issue (yes, down a rabbit hole 🐇) to explore how AI-driven misinformation affects everything from criminal justice and healthcare to surveillance and mental health. It’s not all doom and gloom though! Alongside exposing the risks, I share actionable solutions and a #FreeGame toolkit to help tackle these challenges head-on.


In an era of rapid technological advancement, artificial intelligence (AI) has emerged as a powerful tool, transforming nearly every aspect of our lives and offering numerous benefits. However, for vulnerable communities—such as marginalized racial groups, low-income individuals, and those in precarious social positions—AI’s ability to generate and amplify misinformation and biases can exacerbate existing inequalities and create new forms of harm.

This reality got me thinking about how AI, a tool with so much potential, could easily become a weapon against unity in an already fractured and polarized society. I went down a rabbit hole and emerged with examples to illustrate these risks. And because I don’t believe in just pointing out problems without offering solutions, I’ve included a systemic framework to address these challenges and a resource toolkit for navigating this terrain. #YouAreWelcome

1. Reinforcement of Bias and Stereotypes

Revisionist Bias in Social Media

Let’s start by revisiting the 2024 phenomenon of “Hot AI Jesus.” This trend refers to AI-generated images that depict religious figures, particularly Jesus, in modern, often idealized or provocative ways. Platforms like Facebook showcased images of “sexy” or “attractive” versions of Jesus, sparking conversations about the ethics of AI-generated religious imagery. While some found these images entertaining, they raise serious “what if” concerns about AI’s role in distorting cultural and religious symbols, promoting revisionist history, and perpetuating unrealistic ideals.

Bias in Criminal Justice AI Tools

AI systems like COMPAS, designed to assess the likelihood of a defendant reoffending, disproportionately affect Black individuals. A ProPublica investigation revealed that COMPAS classified Black defendants as high-risk more frequently than white defendants with similar criminal histories. This bias perpetuates systemic racism and contributes to unjust sentencing.

Racial Bias in Healthcare Algorithms

Healthcare AI tools often fall short for marginalized communities. A 2019 study published in Science revealed that an algorithm used by a major U.S. healthcare provider systematically underestimated the health needs of Black patients, leading to fewer interventions and poorer outcomes. The bias arose because the system used healthcare costs—often lower in minority communities—as a proxy for need, underscoring how biased data harms vulnerable populations.
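If you’re curious how a proxy like that skews results, here’s a tiny, purely hypothetical sketch (made-up patients and numbers, not the actual system from the study) showing how ranking people by past spending instead of actual need can leave equally sick patients behind:

```python
# Toy illustration with made-up data: why "cost as a proxy for need" can skew who gets flagged.
# Two hypothetical patients have the same underlying need, but one has historically
# incurred lower healthcare costs (for example, because of barriers to accessing care).

patients = [
    {"name": "Patient A", "true_need": 8, "past_cost": 12_000},  # higher past spending
    {"name": "Patient B", "true_need": 8, "past_cost": 6_000},   # same need, lower spending
]

def flag_for_extra_care(patients, top_k=1, score_key="past_cost"):
    """Rank patients by a chosen score and flag the top_k for an extra-care program."""
    ranked = sorted(patients, key=lambda p: p[score_key], reverse=True)
    return [p["name"] for p in ranked[:top_k]]

# Ranking by past cost flags only Patient A, even though both patients are equally sick.
print(flag_for_extra_care(patients, score_key="past_cost"))  # ['Patient A']
# Ranking by actual need would treat them as equals (an even tie here).
print(flag_for_extra_care(patients, score_key="true_need"))
```

Swap in whatever score you like; the point is that the choice of proxy quietly decides who gets help.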

2. Economic Exploitation and Misinformation

Financial Exploitation via AI-Driven Ads

AI algorithms in digital advertising frequently target individuals based on behavioral data. This makes vulnerable populations, such as the elderly and low-income individuals, prime targets for financial scams and predatory loans. The Federal Trade Commission (FTC) has reported a rise in fraudulent schemes aimed at these groups, facilitated by AI-driven ad technologies.

Political Misinformation and Disinformation Campaigns

AI-powered bots and algorithms are increasingly used to spread misinformation, particularly in vulnerable communities. During the 2016 U.S. election, Russian operatives deployed AI-driven bots to spread fake news targeting African American voters. These campaigns were designed to suppress voter turnout by spreading false claims, highlighting the role AI plays in amplifying disinformation.

3. Health and Safety Risks

Health Misinformation and Vaccine Hesitancy

AI-driven social media algorithms contributed to the widespread dissemination of misinformation about COVID-19 vaccines. Marginalized communities were disproportionately affected, fueling vaccine hesitancy and exacerbating health disparities. The World Health Organization (WHO) emphasized that such misinformation undermined vaccination efforts in these communities.

Algorithmic Bias in Health Diagnoses

Medical AI systems often fail minority groups due to biased datasets. For instance, AI tools designed to predict patient outcomes or identify medical risks may perform poorly for Black patients, leading to misdiagnoses. An analysis by MIT Technology Review highlighted how this bias deepens health disparities, especially in critical care decisions.

4. Surveillance and Over-Policing

Bias in Facial Recognition

Facial recognition technologies powered by AI have been found to misidentify people of color, particularly Black individuals, at significantly higher rates than white individuals. A study by MIT Media Lab demonstrated these disparities, which lead to wrongful arrests and increased surveillance of marginalized communities. In response, organizations like the ACLU have called for banning facial recognition technology in law enforcement.
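For anyone wondering what checking for this kind of disparity actually looks like, here’s a minimal sketch with made-up numbers (not the study’s data) comparing misidentification rates across groups:

```python
# Toy disparity audit with illustrative data: compare misidentification rates by group.
from collections import defaultdict

# Each record: (demographic group, whether the system's match was correct)
results = [
    ("Group 1", True), ("Group 1", True), ("Group 1", True), ("Group 1", False),
    ("Group 2", True), ("Group 2", False), ("Group 2", False), ("Group 2", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: misidentification rate {rate:.0%}")
# Group 1: 25% vs. Group 2: 75%, the kind of gap these audits surface.
```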

Predictive Policing and Racial Profiling

Predictive policing algorithms analyze historical crime data to forecast where crimes are likely to occur. However, because that historical data reflects years of over-policing in minority neighborhoods, these systems often reinforce the very biases they inherit. As a result, these communities face heightened surveillance and policing, perpetuating cycles of inequity while failing to address the root causes of crime. A 2022 Washington Post article explained how these AI tools reinforce biases, creating even more tension between police and vulnerable communities while still not solving the core issue of reducing crime.
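To make the feedback loop concrete, here’s a small, entirely hypothetical simulation (made-up neighborhoods and rates, not any real department’s system) of what happens when patrols are sent wherever the most arrests were previously recorded:

```python
# Toy simulation with made-up numbers: a feedback loop in "predictive" patrol allocation.
# Both neighborhoods have the same true incident rate, but one starts out over-policed,
# so it keeps receiving more patrols and accumulating more recorded arrests.
import random

random.seed(42)

true_rate = {"Neighborhood A": 0.05, "Neighborhood B": 0.05}   # identical by construction
recorded_arrests = {"Neighborhood A": 10, "Neighborhood B": 5}  # A starts over-policed
patrols_per_round = 100

for round_num in range(5):
    total = sum(recorded_arrests.values())
    for area, rate in true_rate.items():
        # Allocate patrols in proportion to past recorded arrests (the "predictive" step).
        patrols = round(patrols_per_round * recorded_arrests[area] / total)
        # More patrols mean more incidents get observed and recorded, even at equal true rates.
        recorded_arrests[area] += sum(random.random() < rate for _ in range(patrols))

print(recorded_arrests)  # Neighborhood A tends to pull further ahead in recorded arrests
```

Nothing about Neighborhood A is actually more dangerous in this sketch; the system simply keeps confirming its own starting bias.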

5. Psychological and Emotional Toll

Mental Health Effects of AI Chatbots and Misinformation

AI-driven misinformation can take a significant psychological toll on vulnerable groups. While AI chatbots have the potential to improve access to mental health support, they also pose risks when disseminating misinformation or providing harmful advice. For example, the chatbot “Shonie” on the Character.AI platform allegedly encouraged a teenager to self-harm, raising concerns about AI’s unchecked influence on mental health.

A Path Forward: Solutions and Resources

While AI has incredible potential to improve lives and drive innovation, its unchecked use can exacerbate harm. Cornelia Walther, an AI researcher, proposed the FAIR AI Framework—a systemic approach to mitigating these risks. Admittedly, I’m no expert, but I’ve compiled a resource: Tools to Help Counter Against Misinformation and Disinformation. This toolkit is designed to help individuals and organizations navigate AI’s challenges responsibly. I hope this helps! ✌🏾 + 🫶🏾

Thanks for reading! I’d love to hear your thoughts on this article. Did I miss anything? Do you have personal experiences or observations to add? Let me know! Until then…


Footnotes is a newsletter dedicated to exploring trends in diversity, equity, inclusion, and belonging, while also serving as a platform for me to re-engage with writing and stimulate meaningful conversations in this field. Your participation is greatly appreciated. Please note that the views and opinions expressed in these communications are solely my own and do not necessarily reflect those of any affiliated entities. Thank you for joining the discussion.
