The Alarming Reality of Racial Bias in AI

The rise of personal computing in the ’80s and ’90s gave birth to the idea that technology could finally be used to empower the individual, democratize knowledge, and create new opportunities. The optimism surrounding those ideas exploded with the dot-com boom of the late ’90s and early 2000s, and it was amplified further by the rise of social media and the smartphone revolution of the 2010s. Now, we’re witnessing the rise of artificial intelligence (AI), which is rapidly transforming industries and entire spheres of our lives.

But has the tech really made our society better, or has it left us more confused and far too reliant on the conveniences it promises? There’s an enduring and harmful notion that technology is objective and neutral when it comes to race and gender. For simple tools, that may largely be true. But does the same hold for machines trained to think and draw conclusions in ways that mimic humans? Apparently not, because there’s ample evidence of AI subtly perpetuating, and even amplifying, the biases of its creators, including racial biases.

In 2018, Joy Buolamwini (whom we mentioned in our discussion of 5 Black Female CEOs Leading the Charge in Tech) discovered that the facial recognition algorithms in her lab couldn’t adequately detect the faces of people of African descent. She even tried scanning her own face, and the algorithms couldn’t recognize her unless she put on a white mask. This pointed to bias in AI systems that are often trained on images of predominantly light-skinned men, who happen to make up the vast majority of the tech industry.

Buolamwini’s research uncovered large gender and racial biases in AI systems developed by major tech giants such as Microsoft, IBM, and Amazon. For context, these systems were substantially better at classifying male faces than female faces, with less than one percent error for light-skinned males. For darker-skinned women, the error rate soared to an alarming 35%. And the problem isn’t confined to facial recognition; other AI systems exhibit racial and gender discrimination as well.
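To make those numbers concrete: the heart of such an audit is simply computing a model’s error rate separately for each demographic subgroup instead of in aggregate. Here’s a minimal Python sketch of that idea, using entirely made-up records (this is not the actual Gender Shades methodology or data):

```python
# Illustrative sketch: auditing a classifier by disaggregating its error
# rate across demographic subgroups. All records below are invented.
from collections import defaultdict

# Each record: (subgroup label, true label, model prediction)
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassification
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),    # misclassification
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in results:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

# A single aggregate accuracy figure would hide the disparity;
# only the per-group breakdown reveals who the system fails.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate over {totals[group]} samples")
```

A single overall accuracy number can look impressive while the system fails one group almost entirely, which is exactly why Buolamwini’s disaggregated reporting was so revealing.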

For example, image-generating AI systems are trained on billions of images, and many of them initially depicted women as “homemakers” and people of color as criminals or janitors. In fact, many generative engines couldn’t even produce an image that resembled a Black woman; artist Stephanie Dinkins reported that some algorithms she worked with would produce pink-shaded humanoids shrouded in black cloaks.

Generative AI has come a long way since then, but racial bias remains deeply rooted in AI systems because of the way they’re built. Predictive policing is a painfully obvious example of AI systems reproducing the racial biases found in their training data. These systems make assessments about future crimes, who might commit them, and where, based on data such as location and personal information. And therein lies the issue: such predictions can entrench policing, and over-policing, of communities along racial and ethnic lines.

This is where data really comes into play; historically speaking, law enforcement focuses more on marginalized communities and neighborhoods, and as a result, members of those communities are overrepresented in police records that comprise the bulk of AI training data. Predictive policing AI systems will then use said data to predict where future crimes will occur, leading to increased police deployments in such communities. To make things even worse, recording new crimes in overpoliced areas will create a “positive” feedback loop, which will reinforce the algorithms’ biased predictions.
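The loop is easy to see in a toy simulation. The Python sketch below uses entirely invented numbers: two neighborhoods have the same true crime rate, but one starts out with more patrols, and because crime only enters the data when a patrol records it, the “predictive” step keeps sending officers back to the already over-policed neighborhood:

```python
# Toy simulation of the feedback loop described above (invented numbers).
# Two neighborhoods share the SAME true crime rate, but B starts out
# over-policed, so more of its crime gets recorded.
import random

random.seed(42)
TRUE_CRIME_RATE = 0.10          # identical in both neighborhoods
patrols = {"A": 10, "B": 50}    # neighborhood B starts with more patrols
recorded = {"A": 0, "B": 0}

for _ in range(20):             # 20 rounds of "predict, deploy, record"
    for hood, n_patrols in patrols.items():
        # Crime is recorded only if a patrol happens to witness it,
        # so more patrols mean more recorded crime.
        recorded[hood] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(n_patrols)
        )
    # "Predictive" step: reallocate 60 patrols in proportion to the
    # recorded (not true) crime counts, mimicking an algorithm trained
    # on historical police records.
    total = recorded["A"] + recorded["B"] or 1
    patrols["A"] = max(1, round(60 * recorded["A"] / total))
    patrols["B"] = 60 - patrols["A"]

print(recorded, patrols)
# In a typical run, B ends up with most of the recorded crime and most
# of the patrols, even though its true crime rate is identical to A's.
```

Nothing in the loop ever compares recorded crime against actual crime, so the initial disparity is treated as signal and never corrected; that’s the feedback loop in miniature.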

So the real issue is the bias in the data used to train AI systems, data that is often unrepresentative of, or outright discriminatory toward, people of color, women, and other marginalized groups. And law enforcement isn’t the only domain affected by these faulty systems. AI housing discrimination is also very real, as algorithms now influence mortgage qualification, tenant screening, employment (we previously talked about this), and lending decisions.

Most of these systems evaluate tenants, job applicants, and potential borrowers using court records, eviction and criminal histories, and many other datasets that carry the systemic racism and sexism baked into them. As a result, people are denied housing despite their ability to pay the rent, and qualified professionals are denied job opportunities simply because an algorithm deemed them ineligible, unqualified, or unworthy.

The case of Shudu Gram, a Black digital supermodel, adds another layer to the controversy around racial bias in AI. Shudu was designed by a white British photographer to look like a dark-skinned Black woman, and she quickly gained popularity on social media despite not being a real person. Though praised for her beauty, Shudu’s creation sparked backlash over what many called a new form of digital blackface.

Furthermore, critics argue that when a white creator profits from a virtual Black identity without hiring, crediting, or collaborating with real Black women, it creates the illusion of diversity without delivering it. This serves as a troubling reminder that AI is not only replicating biased data but also being used to control and commodify identities that have long been marginalized and excluded from industries like fashion.

The truth is that AI systems aren’t inherently racist; they pick it up from the humans who train them. The datasets these systems are trained on are incomplete and imperfect, and even when race is excluded from the inputs outright, proxy variables such as education level, socioeconomic background, and location can carry the same biases into a model’s decisions. So we can only conclude that the biases these systems exhibit reflect the biases of their creators, whether or not those biases are intentional on a personal level.
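That proxy effect is worth spelling out. In the hypothetical sketch below (invented data, and assuming scikit-learn is available), a lending model is never shown race at all, yet it reproduces a discriminatory pattern because zip code correlates with race in the historical labels it learns from:

```python
# Illustrative sketch (invented data): even when "race" is removed from
# the inputs, a correlated proxy like zip code can reintroduce the bias.
from sklearn.linear_model import LogisticRegression

# Feature: zip code only; race is never given to the model.
# The historical labels reflect past discriminatory lending: zip 1
# (standing in for a historically redlined area) was mostly denied.
X = [[0], [0], [0], [0], [1], [1], [1], [1]]
y = [1, 1, 1, 0, 0, 0, 0, 1]   # 1 = loan approved, 0 = denied

model = LogisticRegression().fit(X, y)
print(model.predict([[0], [1]]))   # approves zip 0, denies zip 1

# The model looks "race-blind," but because zip code correlates with
# race in the historical data, it repeats the old discriminatory pattern.
```

Removing the sensitive attribute isn’t enough, in other words; the bias rides in on whatever variables correlate with it.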

AI is rapidly transforming industry after industry, but its broad, unregulated, and often unethical applications are concerning. Facial recognition systems misidentify people of color, hiring algorithms discriminate unintentionally, and AI amplifies systemic inequities across critical sectors like law enforcement, healthcare, and education. The underlying causes, and the potential solutions, reveal the urgency of addressing these flaws to ensure the technologies shaping our future don’t perpetuate the injustices of our past.
