The AI Doctor You Didn’t Ask For
AI is everywhere—from helping doctors make medical decisions to filtering out “harmful” content on social media. It promises to revolutionize everything from how we manage our health to how we interact online. But here’s the problem: the very systems we’re relying on to shape our future are built on biases that may harm marginalized communities, and we don’t even know it.
AI’s big pitch in healthcare is that it can speed up diagnoses, identify trends, and personalize treatments faster and more efficiently than human doctors. And sure, that sounds great—until you realize that the data AI relies on is historically biased. When the machine is trained on medical records dominated by white, middle-class patients, you’re going to see some serious blind spots for everyone else.
Take, for example, the AI used in hospitals to predict which patients need extra care. A 2019 study found that the algorithm was biased against Black patients because it used healthcare spending as a metric for “health needs.” But Black patients, on average, have less access to healthcare, meaning the algorithm falsely assumed they were less sick and therefore didn’t need extra care. This is no glitch—it’s an example of the kind of systemic oversight that reinforces existing disparities.
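To see how that kind of proxy bias plays out, here's a minimal, hypothetical simulation (made-up numbers, not the actual hospital algorithm from the study): two groups are equally sick, but one has less access to care and therefore lower spending, so a model that ranks patients by spending quietly routes the extra care to the other group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying illness burden (all numbers are made up).
group = rng.integers(0, 2, size=n)               # 0 = Group A, 1 = Group B
illness = rng.normal(loc=5.0, scale=1.0, size=n)

# Group B has less access to care, so the same illness produces less spending.
access = np.where(group == 1, 0.6, 1.0)
spending = illness * access * 1_000 + rng.normal(0.0, 200.0, size=n)

# The "risk score" here is just predicted spending (the biased proxy):
# the top 10% of spenders get flagged for an extra-care program.
threshold = np.percentile(spending, 90)
flagged = spending >= threshold

for g, name in [(0, "Group A"), (1, "Group B")]:
    mask = group == g
    print(f"{name}: mean illness = {illness[mask].mean():.2f}, "
          f"flagged for extra care = {100 * flagged[mask].mean():.1f}%")
```

Run it and Group B, despite being exactly as sick, barely gets flagged at all. That's the study's finding in miniature: the metric looked neutral, but it encoded unequal access to care.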
And don’t even get us started on AI’s shortcomings in diagnosing skin conditions. Several studies have shown that machine learning models trained to detect skin cancer perform worse on darker skin tones, in large part because they were trained on images of predominantly lighter skin. That’s a problem. A big one. And it’s just the tip of the iceberg.
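One way researchers surface that gap is to report a model's accuracy separately for each skin tone group instead of quoting a single overall number. Here's a toy sketch of that kind of audit, using invented predictions and a simplified lighter/darker grouping, purely to show why the breakdown matters.

```python
from collections import defaultdict

# Hypothetical evaluation records: (skin_tone_group, true_label, model_prediction).
# Real audits use labeled benchmark images; these rows are invented for illustration.
results = [
    ("lighter", "malignant", "malignant"), ("lighter", "benign", "benign"),
    ("lighter", "malignant", "malignant"), ("lighter", "benign", "benign"),
    ("darker",  "malignant", "benign"),    ("darker",  "benign", "benign"),
    ("darker",  "malignant", "benign"),    ("darker",  "benign", "benign"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

overall = sum(correct.values()) / len(results)
print(f"overall accuracy = {overall:.0%}")   # 75%: looks tolerable on its own
for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.0%}")
# The single overall number hides that every malignant lesion on darker
# skin in this toy sample was missed.
```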
The AI Content Police Are Watching You
It’s not just healthcare where AI is messing up. Take a stroll through any social media platform, and you’ll encounter a digital army of algorithms designed to keep the peace by flagging harmful content—hate speech, fake news, graphic violence. Sounds good, right? Except that these content moderation algorithms are often woefully inadequate at distinguishing between what’s harmful and what’s actually a part of marginalized communities’ everyday lives.
For example, the same algorithms that are meant to protect you from hate speech are also silencing important discussions about race, mental health, and LGBTQ+ issues. In fact, content related to racial justice or queer activism has been disproportionately flagged and removed, not because it’s harmful, but because the AI simply can’t recognize the context.
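As a toy illustration of that context blindness (not any platform's real system, which uses far more complex models but often fails the same way), a keyword-style filter has no way to tell a survivor recounting harm, or a content-warned community discussion, from an actual threat:

```python
# A toy keyword filter; the blocklist terms and posts are invented for illustration.
BLOCKLIST = {"attack", "violence", "slur"}

def is_flagged(post: str) -> bool:
    words = {w.strip(".,!?:").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "The attack on our community center shook all of us.",                # recounting harm
    "Content warning: talking about the violence queer elders survived.", # community history
    "Keep posting and I will attack you.",                                 # an actual threat
]

for p in posts:
    print(is_flagged(p), "->", p)
# All three come back flagged; the filter can't separate testimony
# and context from the threat it was built to catch.
```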
And then there’s the problem of “sensitive content” filtering, where algorithms mistakenly block images or posts about body positivity, mental health struggles, or trauma. Essentially, these algorithms treat conversations about real human experiences as dangerous and remove them from view.
The Hidden Cost of Trusting the Machine
So, what’s the bottom line here? AI might be faster, more efficient, and less prone to human error, but it also reflects the biases of the people who build it and the data it learns from, and those biases get baked into the system. When it comes to healthcare, those mistakes are literally life and death. When it comes to content moderation, the risk is just as severe: erasing voices that need to be heard and reinforcing harmful stereotypes.
But here’s the kicker: these algorithms are getting smarter. They’re learning from us, processing data faster than we can keep up, and potentially amplifying our worst tendencies. Without oversight, we run the risk of a future where AI doesn’t just reflect human society—it defines it.
Who’s Watching the Watchers?
Here’s the thing we don’t hear enough about: who’s holding these AI systems accountable? We need oversight that’s not just about making sure algorithms work, but about making sure they work for everyone. And right now, that’s not happening. The lack of diversity on tech teams, the lack of inclusive datasets, and the overall lack of transparency mean that too often, the people developing these systems are blind to their own biases. And the result? People get left out: marginalized groups, those with rare diseases, communities living at the intersection of multiple identities.
But it doesn’t have to be this way. We need better practices for training these systems, more inclusive datasets, and robust systems of accountability that keep tech companies in check. If we’re serious about AI in healthcare or on digital platforms, we need to ensure that these systems don’t just serve the privileged few. We need them to be more than efficient; they need to be ethical.
Conclusion: The Future Isn’t Fixed Yet—But It’s Ours to Shape
AI has the potential to change everything—but only if we make sure it doesn’t change things for the worse. We need to ask ourselves: are we willing to let algorithms decide who gets care and who gets heard? Or are we ready to demand better, more transparent systems that don’t just reflect the world we live in but help build the world we want to see?
At Glint, we understand that technology is not neutral. It has power—and so do we. By questioning the systems that are shaping our lives, we can ensure they serve all of us, not just the ones who built them.
Want to hear more about how we survive the AI revolution and the messiness of life? Tune in to Glint’s podcast for laughs, survival tips, and a whole lot of real talk. Plus, let us know—have you ever felt left out by AI or content moderation? We want to hear your stories. Because here at Glint, we believe in amplifying voices that matter, even when the system doesn’t.
PS: We dropped our second podcast episode, “We Bleed,” on Glint. Find it on Spotify here!