
On Race, AI, and Representation Or, Why Democracy Now Needs To Redo Its June 1 Segment

On June 1, Democracy Now featured a roundtable discussion hosted by Amy Goodman and Nermeen Shaikh, with three experts on Artificial Intelligence (AI), about their views on AI in the world. They included Yoshua Bengio, a computer scientist at the Université de Montréal, long considered a “godfather of AI,” Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL), and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory on the letter (as is Elon Musk). The AJL has been around since 2016, and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.

Bengio is French-Canadian and of Moroccan descent, Tegmark is Swedish-American: both identify as white.  Petty, the only woman on the panel, is African-American.

There was, certainly, an interesting discussion about what the concerns about AI today might or should be, but what we also saw was evidence of what numerous people of colour have known for centuries: that a certain kind of white man can’t hear a critique of “white men” without taking it personally, especially when the criticism comes from a person of colour, and he won’t hesitate to lash out at length in petulant and bitter anger. Just as critically, the segment exposed flaws in the structure of Democracy Now, an important news show that struggles with issues of race and identity.

A transcript of the episode makes it easy to see what, exactly, transpired, but it’s worth watching the segment to get a sense of its tensions. The discussion begins with the two men expounding at length, for about 20 minutes,  on why they think AI poses a challenge. For Bengio, “It suffices that just a small organisation or somebody with crazy beliefs, conspiracy theory, terrorists, a military organisation decides to use this without the right safety mechanisms, and it could be catastrophic for humanity.”  Tegmark says, “We’re, as a species, confronting the most dramatic thing that has ever happened to us, where we may be losing control over our future, and almost no one is talking about it.” 

Goodman then reels Petty into the discussion: “You are not only warning people about the future; you’re talking about the uses of AI right now and how they can be racially discriminatory. Can you explain?” Petty responds that “many women have been warning about the existing harms of artificial intelligence many years prior to now — Timnit Gebru, Dr. Joy Buolamwini and so many others, Safiya Noble, Ruha Benjamin, and so — and Dr. Alondra Nelson…” She points out that “existing harms of algorithmic discrimination that date back many years prior to this most robust narrative-reshaping conversation that has been happening over the last several months with artificial general intelligence. So, we’re already seeing harm with algorithmic discrimination in medicine. We’re seeing the pervasive surveillance that is happening with law enforcement using face detection systems to target community members during protests, squashing not only our civil liberties and rights to organise and protest, but also the misidentifications that are happening with regard to false arrests…”

She continues, “And so, there are many examples of existing harms that it would have been really great to have these voices of mostly white men who are in the tech industry, who did not pay attention to the voices of all those women who were lifting up these issues many years ago. And they’re talking about these futuristic possible risks, when we have so many risks that are happening today.”

Petty isn’t just providing a perspective and information based on her racial identity but implicitly—and very elegantly—correcting DN’s framing of the issue, providing a detailed and incisive history of what racialised technology actually looks like, in terms of the harms it perpetuates in an astoundingly large number of public and private spaces.

At this point, Shaikh turns to Tegmark with a question that’s just as elegantly worded and presented with her trademark precision, asking him to respond to “the fact that others have also said that the risks have been vastly overstated in that letter, and, more importantly, given what Tawana has said, that it distracts from already-existing effects of artificial intelligence that are widely in use already?”

Tegmark never responds to Shaikh’s question and, instead, deliberately and wilfully misinterprets what Petty has said as a personal attack on him, huffing, “I have spoken up a lot on social justice risks, as well, and threats.” And then, in a classic turn that any organiser of colour will recognise from white-dominated spaces, he makes her the problem, claiming that Petty’s response was divisive and dangerously so: “It just plays into the hands of the tech lobbyist if it looks like there’s infighting between people who are trying to rein in Big Tech for one reason and people who are trying to rein in Big Tech for other reasons. Let’s all work together and realise that society can work on both cancer prevention and stroke prevention.”

At this point, Goodman continues to ask Petty about what kinds of regulations might work and, after getting a response, turns to Bengio with the same question. Instead of answering her right away, Bengio snips about wanting to make a “correction” and launches into a list of all the ways in which he, personally, has been “dealing with the negative social impact of AI for many years” and concludes, “So, I think these accusations are just false.” At the end of the segment, Petty feels compelled to address what the two white men have said and to clarify that her “commentary is not to attack any of the existing efforts or previous efforts or years’ worth of work that these two gentlemen have been involved in. I greatly respect efforts to address racial inequity and ethics in artificial intelligence.” 

To be fair to all concerned: as a tightly-run show (it goes live every morning at 8 a.m. EST, for exactly one hour), Democracy Now’s hosts have to juggle multiple considerations during such roundtables. At any given moment, should the hosts pause and dwell upon an argument or keep moving the discussion forward? How might they deal with any conflict that arises: let it bloom for views, or cut it short in some way so that there’s an actual discussion? And so on.

And yet. 

The June 1 AI segment lasted about 37 minutes: the two men spoke for approximately 20 minutes, while Petty got 10, if that (the rest of the time was taken up by host introductions and questions).

The minutes matter not only because airtime does signify the importance ascribed to particular figures but because, in the case of Democracy Now, what happened on June 1 is emblematic of the show’s larger inability to deal with academics and with people of colour. Democracy Now allowed two white men to dominate the conversation from the start—both expounded on their introductory points for much longer than the time taken up by Petty—and then let them personalise a necessary political point made by a Black woman who then found herself in the position of spending her valuable airtime clarifying her points (and in effect, presenting what might at least appear as an apology to the men). 

Too often, Democracy Now has no idea what to do with many of the experts and academics it brings on board. Until fairly recently, academics being interviewed would simply read from prepared speeches instead of answering questions (I take credit for the show having ended this practice, after I lobbed numerous tweets about it at them). Goodman in particular is overly deferential towards experts and practically melts into the floor in a breathless puddle if an interviewee is even a minor celebrity. Shaikh is better than Goodman at being firm about asking for responses, but there have been instances of white men in particular barrelling past her. In this segment, both Shaikh and Goodman address the men with the honorific “Professor,” but mostly address Petty as “Tawana.” Both men spoke for too long, uninterrupted by either of the two women: this is a recurring problem with male academics on the show.

Which brings up the other problem with Democracy Now: despite the presence of two co-hosts of colour (Shaikh and Juan González), the show is soaked in the style and politics of what I call the Nice White Lady approach, represented by Amy Goodman. Democracy Now has a race problem but it’s not the kind that’s easily recognised as a problem because it’s so well disguised as a sincere interest in representation for the sake of representation.

I have an entire piece, “The Nice White Ladies’ Club,” forthcoming, where I describe this phenomenon (pervasive throughout media and culture at large). Goodman is an excellent example of a certain kind of media approach that treats people of colour as authentic interview/roundtable participants while failing to recognise and interact with them as intellectuals and political agents in their own right. Petty’s points were an effort to turn the conversation from simply a problem with representation—she was clearly brought on to be The Black Woman—to one about how AI technology has been imbued with bias, discrimination, and deathliness from the start. Her point that women of colour, like the ones she named, have been raising issues about AI for much longer than the recent letter might indicate is a deeply crucial one. It makes people of colour a generative force in the larger discussion whereas Goodman’s more simplistic framing makes them passive objects. What Goodman wanted—and what the white men expected—was that Petty would deliver some charming and easily digestible little bit about how Black and other people of colour suffered because of AI’s issues, and then remain silent. What they did not expect was what she gave instead: a sharp, critical, intelligent, materialist analysis of the systems of knowledge and production that had raised and elevated AI to its current status.

As to the “white men” in question: she didn’t say “these voices,” signifying the two men on the show, but “these voices of mostly white men who are in the tech industry.” Even putting aside the fact that it’s entirely possible to misunderstand what’s being said in a show that is, every day, being counted down as the minutes tick on, it’s notable that both men first responded as part of their tribe, of white men (which is why I identify Bengio as white: he clearly took umbrage as one).  Having done that, their instinct was to turn the conversation around and accuse Petty of being divisive and then to expound on what they, personally, had done for the good of people of colour everywhere. 

African Americans in particular face this constantly in the United States: bring up racism to a certain kind of white person, in any context at all, and the instant response is often some version of, “Listen, Missy, I marched for you in Selma” (given the number of white people who make this claim, every day, I’m always astonished that the historic photographs aren’t of a sea of white faces). This is not just a generational problem: white people too young to even have parents who may have marched with Martin Luther King have too often imbibed, in literal and metaphorical mother’s milk, the message of self-proclaimed anti-racists everywhere: We Must Make Sure Black People Understand Our Sacrifices.

Petty’s analysis and response, comprehensively and quickly wrought in the precious little time she was given, complicated the idea that she would be speaking for all Black people or any people of colour and instead compelled an understanding that Black and other people were and are also people who think about technology: they don’t just respond to it as people affected by its uses. She’s absolutely right that the current and very, very recent uproar about AI makes it clear that alarms are only raised when white “godfathers” like Hinton and Bengio decide to change their minds (an entirely different essay questioning the timing of all this panic comes to mind, and is discreetly shelved for later). In recent months, the public conversation has suddenly and sharply veered from something like, “Yes, AI can be occasionally incompetent, we should keep an eye on it” to, “AI could lead to our extinction.” This seems like an extraordinarily quick shift in the discourse and we have to ask how that happened instead of bowing and scraping in front of every white man who, having unleashed the technology on the world, now feels compelled to stand up and criticise it. What might be the institutional and even—perhaps—financial motivations behind such changes, after decades of activists have been pointing out the problems? Why such a change, now?

Given the treatment of Tawana Petty on this roundtable, what needs to happen next? Tegmark has already, in a rather slimy fashion, tried to refashion the narrative of what happened via a June 1 tweet, implying that all three of them came to some peaceable resolution. In fact, he and Bengio evaded questions that might have cast doubt on their recent callouts about AI and, instead, distracted the hosts and the audience by making Petty the problem. Again, because this can never be emphasised enough: any person of colour who has tried to organise with and around white people has faced this at least once (if not multiple times). Even a hint of criticism of whiteness is enough to make the loudest, chest-thumping anti-racist person pivot and hiss, “But I’m not like that. And look at everything I’ve done for you people!” An anti-racist will loudly decry identifiable white racists and make a big show of it all (and often even claim “heritage” in some non-white culture), but if a person of colour echoes even the slightest critique of whiteness, they immediately turn into All White People As One and take offence on behalf of their imagined collective and make it very, very personal.

There needs to be a better discussion of the issues surrounding AI, and that may well involve bringing Petty back to the show, without her being cut short by white men, even if it means spending the time to ask her probing questions as well (something that Goodman is unlikely to do well, being fearful of losing her Nice White Lady cred). The two men need to be questioned more critically, and actually made to answer questions. Democracy Now has many problems, but it’s also a show that many of us watch religiously because there are few other venues that allow for sustained and intelligent discussions on complex matters. The show needs to fix its race problem and move away from the Nice White Lady setup dominated by Goodman. It needs to stop being so deferential towards academics (of all colours!) and it needs to end its often condescending fetishisation of people of colour. It should stop treating people of colour as authentic representations of every other person of colour, and instead engage them as critically and thoughtfully as it engages everyone else. The show’s focus should no longer be on representation for its own sake. Representation absolutely matters, but it can’t be fetishised. The point of having a Black person or an Indigenous person on a show is not to ask them how they think or feel as a Black or Indigenous person and then push them aside so that the white people can keep going on with their analysis. The point is to think about knowledge production and analysis as inherently racialised from their inception to both exclude and brutalise entire populations. That kind of complicated approach in a clickbait-filled world is neither impossible nor easy, as many of us doing it can testify. But the effort is worth it.

See also:
“March As Feminists, Not As Women.”

This piece is not behind a paywall, but represents many hours of original research and writing. Please make sure to cite it, using my name and a link, should it be useful in your own work. I can and will use legal resources if I find you’ve plagiarised my work in any way. And if you’d like to support me, please donate and/or subscribe, or get me something from my wish list. Thank you.

Image courtesy of Nathan J. Robinson, using the Midjourney AI image generator.
You should also check out his stunning new book, Echoland: A Comprehensive Guide to a Nonexistent Place.
