Let’s not dismiss the EU AI Act… but if Meta and Apple don’t comply, what’s the point?
PLUS: Why would you interview an AI avatar of a dead teenager?
Versions of these stories first appeared on my LinkedIn newsletter, Develop AI Daily.
The EU AI Act is here… and central to its ambition is to have us think about the RISK of different types of AI. It categorises AI systems as “unacceptable”, “high”, “limited” or “minimal” risk. Under scrutiny as “unacceptable” are AI systems used for social scoring, real-time facial recognition in public, and manipulative systems that prey on the vulnerable. If your model distorts behaviour and causes harm, it’s banned. Fair enough.
But what they consider “high risk” are AI tools used in education, healthcare, policing, employment, migration and justice, the argument being that the stakes are human rights. However, these are also areas where AI can provide incredible benefit. Providers must comply with strict rules, hand over technical documentation, and submit to human oversight, testing and transparency requirements. If your model evaluates job applicants, allocates school placements or assists in a criminal trial, then you are going to need to put in the work. I am for this in theory; we have already seen rampant bias and racism in areas like recruitment. But arguably the red tape could stop plenty of innovations from ever happening.
Importantly, you can’t dodge responsibility just because your company’s based in Nairobi or New York. If your AI’s output affects anyone in the EU, then you can be held accountable. But it makes me wonder: could I just turn off my app for people in the EU? If they don’t want me then I don’t want them…
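For what it’s worth, the crude version of that switch-off is easy to sketch. Here is a minimal, purely illustrative example in Python, assuming the app sits behind Cloudflare (which does set a CF-IPCountry header on each request); the Flask app and the choice to block rather than comply are entirely hypothetical:

```python
# A crude EU geo-block: reject any request whose CDN-reported country
# is an EU member state. Assumes the app runs behind Cloudflare, which
# adds the CF-IPCountry header; the routes here are illustrative only.
from flask import Flask, request, abort

app = Flask(__name__)

# ISO 3166-1 alpha-2 codes for the 27 EU member states.
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
}

@app.before_request
def block_eu_traffic():
    country = request.headers.get("CF-IPCountry", "")
    if country.upper() in EU_COUNTRIES:
        # HTTP 451: "Unavailable For Legal Reasons"
        abort(451, description="This service is not available in the EU.")

@app.route("/")
def home():
    return "Hello, non-EU world."
```

Of course, a header check like this is trivially dodged with a VPN, and it does nothing about EU users your output has already affected, which is rather the point of the Act’s extraterritorial reach.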
I’ve said before that the AI Act is grossly pro-government: go back to that facial recognition rule, which stops security companies from developing systems but allows the police to create their own. If you move this to an African context, where services are increasingly privatised, it’s ludicrous. This will simply pool the high-end tech in the governments and institutions that need the most oversight.
So, what does it mean for OpenAI, Meta and Google? These companies build foundation models trained on enormous datasets, capable of powering thousands of downstream apps. Well, there is a ticking clock: AI models already on the market before August 2nd, 2025 (just the other day) have until August 2nd, 2027 to ensure compliance, and August 2nd, 2026 is when most remaining provisions become enforceable.
The big LLM makers will need to disclose their training data, comply with copyright law and publish how their models work. This is great news. And most companies have voluntarily signed a code of practice to show goodwill… except Meta and Apple. People are speculating that their refusal to sign reflects “legal limbo”, or quite the opposite.
I would strongly recommend Luiza's Newsletter for AI legal updates.
If you are enjoying what you’re reading please consider paying $5 a month to help support this newsletter.
One journalist thought it was a good idea to interview an AI avatar of a dead teenager
Resident ex-CNN blowhard Jim Acosta probably imagined he would gain plenty of subscribers for his YouTube channel earlier this week. Unfortunately, a Google search for “Jim Acosta youtube” brings his channel up in seventh place, below half a dozen outraged articles and videos ABOUT what he did rather than his actual video.
Joaquin Oliver, who died at age 17 in the Parkland school shooting in 2018, was interviewed by Acosta on Tuesday with the “magic” of AI and the endorsement of his father, Manuel.
The outrage at the tastelessness is obvious. And the counterargument, that it raises awareness for gun control in the US, is maybe more so. But what should be stressed is that the AI implementation is bad. Truly horrendous. It isn’t clear if he is interacting with the avatar in real time or if everything is scripted and pre-prepared. Either way, it is stunted. The voice is robotic and the avatar expressionless. And for this to be the flashpoint that gives people a display of what AI cloning can offer in 2025, even with off-the-shelf apps, is disingenuous.
And the larger point is what this means for the ecosystem of washed-up TV news anchors. For Acosta to do this for 36k views on YouTube (after three days), with the tag “Don't give in to the lies. Don't give up on the truth”, seems to say: 1) even with global publicity there is no guarantee that people will visit your actual channel or content, and 2) we are now increasingly being forced to consume media that isn’t created by a team, but by one guy in his house. This post you are reading is the same, written by just me in my home office, without a sub-editor or an editor in sight.
And this solo creation, by design, means the quality of the journalism is going to be compromised. Even if the freedom is greater (the conspiracy theory is that Acosta was ousted from CNN for being against Trump) and the stunts more daring, the product is going to be diminished. I am pro Substack (naturally), but we have to acknowledge that the biggest effect of technology on news isn’t AI avatars, but that we now want deep personalisation. And this isn’t yet “liquid content”, where articles or podcasts are produced by AI specifically for us, but singular voices (like Joe Rogan or Piers Morgan or Trevor Noah) who will cozy up to us and tell us exactly what they think. The problem is, these guys don’t have checks or balances or press codes to hinder them. And even though Acosta seems to be terrible at it, this style of news is winning our attention, and that means we are destined for just more noise and less substance.
What is happening at Develop AI?
I am honoured to be talking at the M20 Summit at the start of September on a panel about the significance of AI in journalism. The M20 is an initiative to amplify media, journalism and information integrity issues relevant to the G20.
I’ll also be presenting at a UNESCO conference (remotely, for their office in Bangkok) on the topic of “AI-Driven Newsrooms and Journalism Education” towards the end of August.
See you next week. All the best,
Develop AI is an innovative consulting and training company that creates AI strategies for businesses & newsrooms so they can implement AI effectively and responsibly.
I work with IMS (International Media Support), Thomson Reuters Foundation, DW Akademie, Public Media Alliance and others, using AI strategies to improve the impact of media globally.
Contact Develop AI to arrange AI training (online and in person) for you and your team, as well as mentoring for your business or newsroom to implement AI responsibly and build AI products efficiently. And listen to our podcast on YouTube and Spotify.
Email me directly on paul@developai.co.za, find me on LinkedIn or chat to me on our WhatsApp Community. We have also recently started chatting on Discord and uploading to Hugging Face.