
The Dark Side Of AI: Racism, Environmental Justice, And Covert Bias
Artificial intelligence (AI) has become nearly ubiquitous, and its reach can seem impossible to escape. But the rise of this new technology has come with warnings about its inherent dangers.
For instance, in July, Grammy Award-winning R&B singer SZA took to Instagram, cautioning her followers about the consequences of this new technology, writing in her story, “AI is killing and polluting Black and Brown cities. None of you care ‘cause [you’re] codependent on a machine. Have a great life,” Complex reported.
SZA was on target with respect to the overreliance on technology. According to a survey conducted by Nsoft, an AI-security company, “58.1% of respondents report an increased reliance on AI for decision-making.”
For further context on the environmental harms: every query AI responds to consumes an enormous amount of energy for the computing required to produce an answer. Per the Guardian, “In the US, the majority of that electricity comes from burning fossil fuels.” The outlet called out the Memphis operations of Elon Musk’s AI company, xAI. The company has moved into the city, and its facilities sit near “several residential neighborhoods that have long dealt with industrial pollution. This area is historically Black and has higher rates of cancer and asthma and a lower life expectancy than other parts of the city.”
MEMPHIS, TN – APRIL 25: Demonstrators gather in opposition to a plan by Elon Musk’s xAI to use gas turbines for a new data center during a rally outside of Fairley High School ahead of a public comment meeting on the project in Memphis, TN on April 25, 2025. (Photo by Brandon Dill for The Washington Post via Getty Images)
Furthermore, as Energy and Policy Institute researcher Shelby Green states, this problem is not unique to Memphis and is becoming endemic in other Black communities as well. “Most Black households, especially rural ones in the South, are not using AI or as much computing power, but they are having to pay for that demand in both money and dirty air.”
Beyond the obvious environmental justice issues, and given that Pandora’s box has already been opened with the introduction of AI into society, is it possible for BIPOC to avoid the racial bias on the dark side of this technology moving forward?
AFROPUNK sat down with University of Chicago Assistant Professor Sharese King to discuss racial bias, code switching, and the future implications of continued AI use, because as the Neubauer Family Assistant Professor of Linguistics has said, “If we continue to ignore the field of AI as a space where racism can emerge, then we’ll continue to perpetuate the stereotypes and the harms against African Americans.”
King told AFROPUNK, “the scary thing about AI is we don’t know the extent of the harm yet because they are not asking the right questions before they make the technology. They ask them after, so it’s especially scary for [Black people], not knowing beforehand what the extent of the damage can and will be.”
One central problem starts with the data AI companies use to train their bots. “You can get data from all parts of the internet that can be cesspools of bigotry,” King explained. “What happens is that data gets baked into the overall algorithm, and so it’s a way of perpetuating stereotypes about Black people without explicitly teaching it to do so, but it happens because of the kind of data used to train it.”
When asked whether it would be possible to trick the bots, akin to the 2018 film Sorry to Bother You, in which Lakeith Stanfield’s character successfully code switches by using a “white voice,” King noted that, on one hand, it “would be harder to trick AI into thinking that you were white, considering all the kinds of information that it already has on the user.” In addition, the linguistic scholar noted, “People can code switch, but I don’t think a lot of people realize that even in their version of code switching, there are still aspects of the speech that gets marked as Black without them even knowing it.
“These are just small features that people don’t always recognize. The user is racialized as Black because of what the technology can pick up, and so I would say it’s harder to code switch if you yourself don’t even know all of the cues that are being used to racialize you.”
As the discussion concluded, King emphasized that while she did talk “about how this technology disproportionately can harm communities of color,” she hopes that Black leaders who “come into positions of power that require us to use this technology” will “be more thoughtful and mindful about how we’re using AI, whether that be in employment, education, in how we’re using the technology to sort of give people or deny people opportunities.”