Is more "media literacy" the answer to AI slop? Not if we keep doing it the same way.
PLUS What AI does to your brain and what is going on at Politico
These stories first appeared on my daily LinkedIn newsletter, Develop AI Daily.
An evangelical tinge has crept into media literacy over the years. Even when the “facts” were correct, much of media literacy was underpinned by the idea that as soon as you become Media Literate you will believe X, Y and Z. People sensed this agenda and rejected it. Brexit and Trump were obvious examples: media organisations believed that if the public understood Critical Thinking and all the rest, then the LOGICAL step would be not to vote for Brexit or Trump, or to think a certain way about COVID. And people immediately saw the smug, elite attitude behind that. You can be Media Literate and "informed on the issues" and still have voted for Brexit (even if I don’t personally agree with voting for it). You can’t have a political agenda about what you want a person to believe if you are sincerely trying to teach them to be critical on their own terms.
So, media literacy became a skills game, similar to fact-checking: teaching people the classic "reverse image search" to spot where an image originated. And we largely disregarded the psychological and behavioural aspects of how misinformation and propaganda work. If we are going to double down on teaching the world to assess the media better (rather than fighting and debunking every piece of content) in the age of AI, then we need to be clear about what we want our media literacy training to lead to, and it probably shouldn’t be as pre-determined as before. For example, if all roads in this new education have to lead to the students hating the tech companies (which, personally, is probably how I would design it at the moment, given how much I hate them), then we would see a similar rejection of the very skills of analysis and critical thinking that would be useful to them.
When I was at the Global Media Forum in Bonn earlier this month I attended a session hosted by an organisation called Lie Detectors. They send journalists into schools in Austria, Germany and other EU countries to teach kids how to understand and fact-check their social media feeds. It is an inspired idea, and they work with particularly young kids before they become too hardened. There is also a soft-power element: mainstream journalists punt their undercover influencers (funded by their media outlets) to kids in the hope that they will follow them, absorb better content and ultimately become customers of their adult news when they are old enough.
This is your brain... on AI
If you are old enough to remember the absurd 80s adverts from the US of “this is your brain… this is your brain on drugs”, where men talked sternly to the camera and illustrated your drug-addled fate by menacingly cracking an egg and frying it (that was your BRAIN being fried), then you will wonder when we are going to get similar PSAs for AI. There was a big splash when MIT released a study in June saying that if you use an LLM your brain is under-engaged compared to even using a search engine, and that there should be concern about the “long-term educational implications of LLM reliance”. But I would posit that this is a result of nascent AI use, where we are all still sitting and waiting for AI results and doing the same tasks as two years ago but with AI “help”. This phase will be short-lived, and I think the hardest part of the AI jobapocalypse is going to be that people will steadily use less brain engagement to do the same work, and feel steadily more guilty about it, until they are fired. Employers (with AI training and coaching) really need to start recalibrating jobs now so they can keep their staff engaged and useful.
The real kicker is this article from Harvard Business Review (citing a study in the Journal of Marketing), which found that the people who study and know more about AI are not necessarily the ones who embrace it. Those who knew less about AI saw it as less capable and less ethical (than those who knew more), but were struck by the “magic” of its abilities, and this fuelled their enthusiasm.
If you are enjoying what you’re reading please consider paying $5 a month to help support this newsletter.
Africa, AI and the M20
So, a group of researchers and journalists (some of them friends of mine) are pulling together the M20, which will ride alongside the G20 in Johannesburg in November and address the media’s “relevance”. You have all the powerful people in one room, and it feels pertinent for them to consider the state of the media. It feels crazy to talk about “the media” as one entity with a cohesive perspective or agenda in 2025, but in modern times we do like a team-up, and it is true that journalists need to cover the horrific developments of AI while actually adopting AI methods and tools to improve their output.
The M20 wants to address how power is concentrated in a few tech companies, how Africans have been paid excruciatingly low wages to label images and help big tech companies build their billions and, most interestingly, how journalists have reported on AI poorly. They want to address the “knowledge gaps” in journalism and stop every tech story being PR fluff. This issue came up when I was part of the DW Freedom: The Media Development Think Tank workshop on AI earlier this month at the Global Media Forum in Bonn. It was an honour to be part of that group, and I’ll be writing more about the work of DW Freedom. Though, I will say, besmirching big companies and turning them into enemies shouldn’t really be something journalists need help with.
The meat of the M20 proposal is around journalism safety, repression of press freedom and combating the use of AI for large-scale information manipulation. We all need assistance with these issues, and this has to be stressed at a regulatory level. The need for more African languages to be incorporated into the AI ecosystem is also crucial. IMS (International Media Support) is deep into bringing together media partners, universities and technologists to try and solve the African language problem in AI. Figuring out how to pay for that work without it being completely controlled by the big tech companies is a huge challenge. I’ll be covering more of the M20 as it rolls around.
My Faith Is Fading
Last week the scales started to fall from my eyes on the AI Hype Cycle. Politico, the shining star of AI newsroom product development (I wrote about how they rolled out exciting AI products to their readers at a premium price so readers could do their jobs more efficiently), has shown the behaviour that every journalist fears when the so-called "business side" gets hold of the content. It turns out they have been releasing AI products that don’t work (they are inaccurate a large amount of the time) and that undermine the journalism in the process. Meanwhile, 404 Media wrote a scathing article about the difficulty of stopping LLMs from scraping their website and argued that positioning your media company as “AI First” is a buzzword that won’t save your business. This is because the tech companies aren’t interested in your “media” company long-term. They are actively trying to scrape paywalled content globally, right now. Digital Digging tested AI systems in June 2025 across a host of publications using established open-source intelligence methodologies. The results showed that OpenAI's ChatGPT, Perplexity and xAI's Grok successfully accessed protected content approximately 50% of the time, while Anthropic's Claude achieved 35% and Google's Gemini proved the least aggressive.
What AI was used in creating this newsletter?
None, besides the image.
What is happening at Develop AI?
I am busy developing AI strategies for newsrooms in exile for the Thomson Reuters Foundation. And for International Media Support we will have a fully accessible online IMS Learn course on AI implementation soon.
We are also running an AI online clinic with newsrooms in the Pacific thanks to Public Media Alliance.
See you next week. All the best,
Develop AI is an innovative consulting and training company that creates AI strategies for businesses & newsrooms so they can implement AI effectively and responsibly.
Contact Develop AI to arrange AI training (online and in person) for you and your team, and mentoring for your business or newsroom to implement AI responsibly and build AI products efficiently.
Email me directly on paul@developai.co.za, find me on LinkedIn or chat to me on our WhatsApp Community.
We have a podcast on YouTube and Spotify called Burn It Down about AI and what it could do to the world.
I use AI strategies to work with IMS (International Media Support), Thomson Reuters Foundation, DW Akademie, Public Media Alliance and others to improve the impact of media globally.