Five ways AI can save and endanger Public Service Media
PLUS How to develop more linguistic diversity with AI
A version of this article was originally published by Public Media Alliance.
Last year I was training journalists in Nairobi, Kenya, on how to use AI ethically and effectively. On the second day of the workshop, when honesty was finally free-flowing, one editor revealed that he was in a bind: there was no way he could disclose to his audience that he was using AI, because he was convinced they would consider him too “lazy” if they discovered AI was being used. And regardless of the accuracy or their enjoyment of the content, he was certain audience members would turn away from the outlet if AI was involved.
This has struck me as a vital note in the AI insanity of the last few years. Media organisations have become largely brainwashed by large donors and impact investors to prioritise “membership models” and “audience engagement” as the road to revenue. And though there are success stories of membership models making money, by design they can only work for the top few. The rest are left berating themselves and agonising over their newsletter open rates. So, the idea of AI shattering this close human connection between the writer and the reader is understandably distressing. But the truth is, AI is reshaping journalism, and the audience doesn’t know what they are going to be upset or enthralled by until they see it.
Ideally, we need to acknowledge where AI is involved in the reporting process and be honest with audiences about what that means. This is usually solved by talking to your audience thoroughly and meeting them halfway: they may be okay with certain tasks being AI-augmented, like data analysis, but still want stuttering human voices on a podcast (for example).
Here are five ways AI could support public service journalism and how each could also cause harm if we forget what the point of journalism is in the first place:
1. Automating the boring stuff
The positive: AI can free up journalists to focus on original reporting, investigations, and community engagement.
Tool example: Notta provides AI-based voice-to-text transcription supporting over 100 languages, facilitating multilingual content creation.
The negative: If media organisations start to rely on AI to create journalism instead of supporting it, you end up with filler content: articles with no soul, no context, and no public value. Public interest takes a back seat to “content velocity.”
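To make “the boring stuff” concrete, here is a tiny sketch of one such chore: scrubbing filler words out of an auto-generated interview transcript before an editor reads it. The transcript and the filler list are invented for illustration, and this is not how a hosted service like Notta works under the hood.

```python
import re

# Hypothetical list of fillers to strip from raw transcripts.
FILLERS = {"um", "uh", "you know", "like"}

def clean_transcript(text):
    """Remove common filler words from a transcript.

    A tiny example of the drudge work worth automating so
    reporters can spend their time on reporting instead.
    """
    pattern = r"\b(" + "|".join(re.escape(f) for f in FILLERS) + r")\b[,]?\s*"
    # Note: leftover punctuation would still need a human (or second) pass.
    return re.sub(pattern, "", text, flags=re.IGNORECASE).strip()

out = clean_transcript("Um, the mayor, you know, denied the, uh, allegations")
print(out)
```

The point of automating at this level is that the journalist still reads and signs off on the result; the machine only removes the tedium.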
2. Reaching new audiences
The positive: AI-driven personalisation can help newsrooms tailor stories to different languages, regions, or even individual readers. This could be transformative in multilingual societies where access to news in your own language isn’t guaranteed. It’s also an opportunity to reach younger audiences on the platforms they actually use.
Tool example: Adobe Target helps with personalised content delivery based on user behaviour, enhancing reader engagement.
The negative: Hyper-personalisation also creates filter bubbles. If every user gets a version of the news tailored to their preferences, we lose shared narratives. Worse, editorial priorities can shift away from what’s important to what’s clickable for each audience segment.
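The filter-bubble mechanic can be shown in miniature. This is a toy, invented for illustration (it is not how Adobe Target or any real recommender works): a naive engagement-only ranker where, after a few clicks on one topic, that topic leads every feed and never-clicked coverage sinks regardless of importance.

```python
from collections import Counter

def rank_stories(stories, click_history):
    """Rank stories by how often the reader clicked each topic.

    A naive engagement-only ranker: topics the reader has clicked
    before float to the top; everything else sinks.
    """
    topic_clicks = Counter(click_history)
    return sorted(stories, key=lambda s: topic_clicks[s["topic"]], reverse=True)

stories = [
    {"title": "Budget vote passes", "topic": "politics"},
    {"title": "Local team wins derby", "topic": "sport"},
    {"title": "Drought hits maize farmers", "topic": "climate"},
]

# A reader who has mostly clicked sport...
clicks = ["sport", "sport", "sport", "politics"]
feed = rank_stories(stories, clicks)
print([s["title"] for s in feed])
```

The climate story, never clicked, always ranks last no matter how important it is, which is exactly the shift from “what’s important” to “what’s clickable” described above.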
3. Investigating at scale
The positive: AI can analyse massive datasets, such as leaked documents, financial records, and social media networks, to spot corruption, disinformation, or environmental abuse. These tools empower small teams to punch way above their weight.
Tool example: Bellingcat’s Online Investigations Toolkit gives a unique set of tools including geolocation and social media analysis.
The negative: Investigative work that leans too heavily on AI can become detached from human context. Worse, these tools could flag the wrong patterns, sending journalists down rabbit holes that waste time or mislead. And if the source of your insights is a black-box model with unclear logic, how do you defend your reporting?
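As a toy illustration of pattern-spotting at scale (the ledger, vendor names, and the 100,000 threshold are all invented), this sketch flags duplicate invoice numbers and outsized payments, the kind of first pass a small team might run over leaked financial records:

```python
from collections import Counter

# Invented sample ledger standing in for leaked financial records.
payments = [
    {"invoice": "INV-001", "vendor": "Acme Ltd",  "amount": 12_000},
    {"invoice": "INV-002", "vendor": "Acme Ltd",  "amount": 950_000},
    {"invoice": "INV-001", "vendor": "Bravo plc", "amount": 12_000},
    {"invoice": "INV-003", "vendor": "Civic Co",  "amount": 8_500},
]

def flag_anomalies(records, amount_threshold=100_000):
    """Flag duplicate invoice numbers and unusually large payments."""
    counts = Counter(r["invoice"] for r in records)
    flags = []
    for r in records:
        if counts[r["invoice"]] > 1:
            flags.append((r["invoice"], "duplicate invoice number"))
        if r["amount"] > amount_threshold:
            flags.append((r["invoice"], "amount above threshold"))
    return flags

for invoice, reason in flag_anomalies(payments):
    print(invoice, reason)
```

Flags like these are leads, not findings: a duplicate invoice number may be a clerical error, which is why the human verification the paragraph above calls for still matters.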
4. Fighting misinformation and disinformation
The positive: AI tools can identify coordinated disinformation campaigns and trace the spread of viral lies. That’s huge. It arms journalists and the public alike with defences against bad actors trying to pollute the information ecosystem.
Tool example: Meedan’s Check is a fantastic service where you can build a tip line for your community and analyse the results.
The negative: Automated content moderation often fails to understand irony, dissent, or cultural context. Legitimate voices, especially from marginalised communities, can be suppressed by blunt AI filters trained on the biases of the dominant web.
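Here is a toy version of coordination detection (the accounts, messages, and “three identical posts” rule are all invented, and real systems are far more sophisticated): flag any message posted verbatim by several different accounts.

```python
from collections import defaultdict

# Invented sample posts: three accounts pushing the same line verbatim.
posts = [
    ("acct_01", "Vote no on the water bill, it's a scam!"),
    ("acct_02", "Vote no on the water bill, it's a scam!"),
    ("acct_03", "Vote no on the water bill, it's a scam!"),
    ("acct_04", "Lovely sunset over the harbour tonight."),
]

def flag_coordinated(posts, min_accounts=3):
    """Flag messages posted verbatim by at least `min_accounts` accounts."""
    by_message = defaultdict(set)
    for account, message in posts:
        by_message[message].add(account)
    return [msg for msg, accounts in by_message.items()
            if len(accounts) >= min_accounts]

flagged = flag_coordinated(posts)
print(flagged)
```

Note how blunt this is: a slogan shared sincerely by three real people gets flagged exactly the same way, which is the over-blocking risk for legitimate voices described above.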
5. Enhancing legal and ethical oversight
The positive: AI is being used by journalists and editors to identify potential legal and ethical issues. This includes detecting defamatory statements and ensuring compliance with privacy laws.
Tool example: Harvey.ai is a legal AI assistant. I assume the name is a reference to Suits.
The negative: These tools may not fully grasp context or nuanced legal standards, potentially leading to oversight of critical issues.
The big threat
One of the biggest threats to freedom of the press today isn’t an oppressive government or a corrupt billionaire; it is that trust in all information has been shattered. If the public can’t trust anything they see, then it is hard for legitimate media to convince them otherwise. We’re already seeing AI-created deepfakes used to discredit reporters and attack their credibility online (there was a fascinating piece on this in The Daily Maverick). It gets worse when you consider that AI-powered facial recognition and metadata analysis can be used to track journalists, monitor their communications and expose their sources. In some parts of the world, this kind of technology has been used to intimidate and silence critical voices.
An AI model doesn’t care whether it’s helping you investigate corruption or making up a plausible sounding conspiracy theory. It just does what it’s trained to do: predict the next word. This recent podcast episode highlights very well how we shouldn’t overuse the technology.
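“Predict the next word” can be shown in miniature with a bigram model. Real models use vastly more context and billions of parameters; this nine-word corpus is invented purely for illustration:

```python
from collections import defaultdict, Counter

# A tiny invented "training corpus".
corpus = "the minister denied the claim and the minister resigned".split()

# Count which word follows which: a one-word-of-context language model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "minister" follows "the" most often here
```

The model has no notion of truth, only of which strings followed which strings in its training data, which is exactly why it will complete a plausible-sounding conspiracy theory as fluently as it completes an investigation.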
I have been told that Meta plans to give preferential treatment on Facebook to posts created with Meta AI (rather than just ones you write yourself or create with other AI models), so be prepared to go to war with the platforms in getting your content seen in the coming months.
However, the ethical guardrails around AI need to be “sold” to newsrooms as a necessity: really, the only way they are going to stand above every YouTuber and TikToker in the land. If they get down in the mud with the rest of the content creators, it is unlikely that they’ll win.
What is happening at Develop AI?
I have recently been in Tanzania at the Africa retreat for International Media Support (IMS). I was presenting on “AI and New Opportunities For Public Interest Journalism”. They are all incredibly smart, lovely people.
I am honoured to have been invited to attend a workshop called “Reclaiming Our Languages” in Germany in July, hosted by Deutsche Welle (as part of The Media Development Think Tank). It is a way for experts in the field from across the world to trade concepts on how to develop more linguistic diversity in AI applications.
For the formidable Thomson Reuters Foundation I am guiding several newsrooms from across South Africa to build AI products for their newsrooms and create robust AI policies. I will keep you posted on what they create.
I will be doing an online training in June for Public Media Alliance entitled “Responsible AI For Pacific Public Media” which I am really looking forward to.
I have been asked to contribute towards the highly revered State of The Newsroom, published each year by the Wits Centre for Journalism. I’ll be writing about the state of AI and media in South Africa.
See you next week. All the best,
Develop AI is an innovative consulting and training company that creates AI strategies for businesses & newsrooms so they can implement AI effectively and responsibly.
Check out Develop AI’s consulting services, training workshops and conference appearances.
Email me directly on paul@developai.co.za. Or find me on our WhatsApp Community.
Follow us on LinkedIn / Facebook / Instagram / TikTok / Threads and X. Visit the website or watch us on YouTube.
Thank you to all our paid subscribers. For $5 a month you can help support this newsletter (and keep it free for everyone).