The insane bias in generative images... and whether it is possible to fix
PLUS where AI regulation is going in Africa
I got an email after last week’s letter about the image I used. It was generated with ChatGPT and DALL-E 3 and, on reflection, it depicted damaging stereotypes: a young Ethiopian man explaining AI to a group of old, bewildered Ethiopian men. I swapped it for something more palatable (three young people gazing over a tablet).
What was interesting is that I had indulged in a large amount of “prompt engineering” even to get to that admittedly biased image. It proved very difficult to stop DALL-E 3 from serving up an image of a white person explaining AI to a group of black Ethiopians. Attempt after attempt produced the same racial bias, and no version of the image showed a black person in a position of authority explaining something to a group of white people. I had to mention race explicitly, and spell out that I didn’t want white people in the image, before they went away.
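For the curious, being that explicit looks something like this when you call the image API directly. This is a minimal sketch assuming the current OpenAI Python SDK; the prompt wording is purely an illustration, not the exact prompt I used:

```python
# Sketch of an explicit, bias-countering DALL-E 3 prompt via the API.
# Assumes `pip install openai` (v1.0+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    # Spelling out exactly who should (and should not) appear in the scene:
    prompt=(
        "An Ethiopian expert presenting AI research to an attentive, diverse "
        "audience. The presenter is a black Ethiopian professional. Do not "
        "include any white presenters or authority figures."
    ),
    size="1024x1024",
    n=1,  # DALL-E 3 only generates one image per request
)

print(result.data[0].url)  # URL of the generated image
```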
At the African Investigative Journalism Conference on Monday, Ron Nixon (Vice President for news, investigative, enterprise and grants and partnerships at the Associated Press) alerted me to the book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by Dr. Joy Buolamwini, which explores coded bias in AI. As a graduate student at MIT, she discovered that some facial-analysis systems couldn't detect her dark-skinned face until she donned an expressionless white mask. “I was literally not seen by technology,” she says. That experience prompted her to start an accountability movement called the Algorithmic Justice League.
AI image tools learn from gargantuan amounts of training data. Instead of billions of words, they are fed billions of pairs of images and captions. AI products like ChatGPT scrape the internet, and we already know the internet is heavily biased (and deals in stereotypes). Generative AI is largely serving up a white person’s view of the world. Hot takes from certain professors even argue that we should allow these systems to remain biased, the idea being that this kind of well-intended “AI neurosurgery” could have unforeseen or damaging effects. And manual attempts to make ChatGPT less toxic have come with scandals of their own, like paying Kenyan workers less than $2 an hour to sift through horrendous imagery of violence and sexual abuse and flag it.
It isn’t like the services aren’t aware of the problem, but as the Washington Post points out, they are just attacking whatever issue people are drawing attention to in the moment.
The scary part for me is that it leaves the onus to spot bias on the creator. I had a fascinating chat the other day with Prof Vukosi Marivate from Lelapa AI (who are doing incredible work with African language translation), and he reiterated that we can’t leave it to the consumer to spot disinformation. That stuff needs to be regulated and filtered out at source. I think the same is true for these images. You can spot bias when it involves your own situation, but the confidence of these models means that someone in a different country, unwittingly creating images of places he or she has never been, isn’t going to know the bias or stereotype is even there. And once those biased images are pushed out into the world, the AI models will be ready to gobble them up and use them as training data all over again.
Coding Corner (the gradual process of a journalist learning how to code)
OpenAI upgraded its API, so my code needs to be updated. However, ChatGPT’s knowledge currently only runs up to April 2023, so it isn’t aware of the update. That makes it difficult to use my usual strategy of simply asking ChatGPT for the code.
The workaround is to point it towards more recent web pages (which explain how to update the code but are pretty long and complicated) and ask it to write the code based on what it reads.
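For anyone in the same boat, here is a minimal sketch of the kind of change the upgrade involves, assuming the official Python SDK (version 1.0 and later); the model name and prompt are just illustrations:

```python
# Sketch of moving code to the upgraded OpenAI Python SDK (v1.0+).
# Assumes `pip install --upgrade openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

# The new SDK uses a client object instead of module-level calls.
client = OpenAI()

# Old style (pre-1.0), which no longer works:
#   import openai
#   response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
#   text = response["choices"][0]["message"]["content"]

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model your old code was calling
    messages=[{"role": "user", "content": "Summarise this paragraph in one sentence."}],
)

# Responses are now typed objects, accessed with attributes rather than dict keys.
print(response.choices[0].message.content)
```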
What AI was used in creating this newsletter?
None (the above image is a book cover), but when I asked ChatGPT to create an image for the letter today, it said: “Creating an image that effectively conveys the concept of ‘the insane bias in generative images’ and its potential resolution is complex and abstract. However, I can generate an image that symbolizes this concept. Let's visualize a scene where two groups of images are being compared: one set represents biased and distorted imagery, perhaps with exaggerated or skewed features, and the other set represents more balanced and diverse images. This contrast can highlight the issue of bias in generative images and the need for correction.”
The irony is… the faces in the resulting image were 95% white. So, I didn’t use it.
In the news…
The OpenAI CEO scandal has been unfolding on X in the slow-motion car-crash way that the platform was built for. Meta, though, sneaked this in during the chaos: it has disbanded its Responsible AI team.
What’s new at Develop AI? I am at AIJC this week and Develop AI is getting involved in advising on the regulation of AI in Africa.
I was honoured to be part of a workshop last week run by Media Monitoring Africa on behalf of the Forum on Information & Democracy about its report on AI. It was great to be able to contribute and to showcase our generative podcasting AI tool. The workshop has inspired me to start a podcast on AI regulation in Africa, which I’ll be launching in the new year.
I presented remotely last week at Brave New Media Forum & Innovation Days in Serbia and talked about “AI in Podcasting: The Future of Audio Content Creation”.
The 19th African Investigative Journalism Conference kicked off on Monday. I presented on Tuesday with Teresa Chirwa-Ndanga on how to use the MethodKit for Podcasts to create the best version of your investigative podcast series. Thanks so much to Anton Harber, Izak Minnaar and the conference for the opportunity and DW Akademie for bringing me to Johannesburg.
Next week… I’m heading to The Egypt Media Forum in Cairo to talk about “The Future is Here; Skills of Tomorrow Needed Now”.
See you next week. All the best,
Join our WhatsApp Community, listen to our AI generated podcast, visit our website, or contact us on X, Threads, LinkedIn, Instagram or TikTok.
Physically we are based in Cape Town, South Africa.
You can email me directly on paul@developai.co.za.
If you aren’t subscribed to this newsletter, click here.