What is the AI EU Act backtracking on?
PLUS: How is AI going to change your job?
I was in Copenhagen last week, which is a lovely theme park type of city. I was speaking at IMS’s Copenhagen Conference on Information Integrity on a panel about AI and the future of media. One of my fellow panellists was the Danish Tech Ambassador, Anne Marie Engtoft Meldgaard. She was the first in the world to hold this kind of position, and she had just come from the US, where everyone she spoke to was obsessed with AGI (artificial general intelligence, the development of a super intelligence that will replace us all). In contrast, the tone and incredible calibre of speakers at the conference (which took place at UN City) made me feel like the EU is focused on the moral and humane use of tech in the world and how we can harness it for good.
This was true for the delegates in that room, but meanwhile, this month, the long-established EU AI Act has been put under strain. (I wrote about the Act’s details here.) It was adopted by the EU and entered into force on 1 August 2024, but many key obligations didn’t apply immediately, particularly for “high-risk” systems, which include everything from healthcare to certain toys.
Important parts of its mandate may be postponed or discarded, allegedly in an attempt to woo the very tech companies that we all spent the week at the conference lambasting. The Commission denies being influenced by such companies.
Here is what is potentially changing with the AI Act:
The EU is preparing to present a “digital omnibus” reform package (scheduled for today, the 19th of November 2025) which will include targeted amendments to the AI Act and other digital legislation.
These potential amendments are framed around simplification: the idea that the regulatory burden might be too heavy or too complex, especially for smaller players or certain sectors. BUT there are rumours that the Commission is considering giving a one-year “grace period” for high-risk or generative AI systems already on the market. One year in AI is like ten or a hundred regular years.
To be fair, the technical standards needed to implement most of the Act’s provisions are not yet finalised. And, of course, very few people building AI products want the Act to apply in full just yet. It isn’t just Meta and Musk; the guy who got laid off last year and is now vibe-coding his heart out also doesn’t want a range of hurdles to stop his million-dollar app getting to market.
Why this all matters
The Act is partially intended to build public trust in AI by showing governments are acting. Delays can signal weakness or capture by industry. That erodes the legitimacy of the regulation and may reduce public acceptance of AI systems in the long run.
If harmful incidents occur during the “delay window,” the absence of regulation could lead to an old fashioned backlash.
What is the flip side?
Some argue that rushing regulation can be harmful. It may over-regulate, stifle innovation, mis-classify technologies or become quickly outdated given how fast AI evolves.
Delaying the AI Act may not necessarily fix its problems. The regulation’s architecture is already showing strain: the original sector-based risk logic collapsed in the face of general-purpose models like ChatGPT, forcing the EU to add a capabilities-based layer. Revision of the Act is largely limited to annual reviews which are slow and subject to politics.
I’ll be covering this slow (yet rather interesting) story in future letters.
The AI tool of the week that you can use
A while back I wrote about downloading your own LLM and running it on your laptop. Now you can do it with far greater ease. Using Ollama, you can load a host of LLMs directly onto your computer. This means you can have an AI that isn’t connected to the Internet for “sensitive” tasks. Even though these models are inferior to the likes of ChatGPT 5.1, they are, of course, free.
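If you want to try it, the basic workflow looks something like this (a sketch assuming you have installed Ollama from ollama.com, and using Llama 3 as an example model; swap in whichever model suits your machine):

```shell
# Download a model to your machine (a one-time download while online)
ollama pull llama3

# Chat with it interactively in your terminal
ollama run llama3

# Or send it a single prompt non-interactively
ollama run llama3 "Summarise this paragraph in one sentence."

# See which models you have stored locally
ollama list
```

Once the model has been pulled, you can disconnect from the internet entirely: the model runs on your own hardware, so nothing you type leaves your machine.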
AI & Work: how is your job going to change?
Two months ago, Steven Bartlett (who occupies the smarmiest corner of Dragon’s Den) managed to pause for two seconds from creating clickbait for medicine that doesn’t work to do a competent interview about the future of work in the age of AI.
I have talked plenty (for years) about how the web will be decimated because of AI, but what is talked about less is how the type of person who works in media is going to change if news continues to personalise and the content we consume is written by AI rather than a human.
The job of a journalist is ostensibly pious, but it is also propped up by a tremendous need for recognition. Awards ceremonies fuel the industry (I have been to plenty, and even though journalists want societal impact, they also want to see their names on those bylines and statues). With start-ups like Scroll in India, we are about to enter an era where journalists write their stories and submit them to an editor for checking, but the stories are then processed and audience members never see a word of what was written. They will receive their content in the tone, language or format that they prefer. Maybe as a podcast or a series of tweets, but not the content the journalist actually produced.

If you remove that ego aspect, different types of people will enter the profession. Not necessarily worse people to do the job of investigating, but certainly different. And this is going to be true for all jobs: as skills change, so will the intangible benefits. Think of an animator working on a huge Marvel film, one name among hundreds at the end of the movie. Most journalists (myself included when I was working on investigations) wouldn’t cope with that. The transition period with AI has always been pitched as one about skills, but it is also going to be social and personal.
What is happening at Develop AI?
Last month I was in Chișinău, Moldova (which is a city that pops with energy and is the only place I’ve visited that is cheaper than South Africa) giving a full workshop on how AI is shaping the newsroom to a host of very engaged journalists. I was also talking at Moldova’s Podcast Fest presenting a 90 minute Masterclass on AI and Podcasting and how the tech is shaping the business of audio and video. Thanks to DW Akademie, The Moldova School of Journalism and Olena Ponomarenko for the amazing support and partnerships. It has been really great working with everyone.
Two weeks ago I was on home turf at The African Investigative Journalism Conference in Johannesburg giving a workshop with Caroline James on how AI can ethically complement investigative journalism. Thanks to the Thomson Reuters Foundation and the fantastic work they are doing in this space.
I have been consulting with Just Detention International on how to create an ethical AI policy for their staff and output. It feels like every organisation needs to go through this process and great results can come for those who do it sooner rather than later.
See you next week. Cheers.
Develop AI is an innovative consulting and training company that creates AI strategies for companies, newsrooms and individuals so they can implement AI effectively to improve their work and help mitigate the upheaval that AI will inevitably cause.
I use AI strategies to work with IMS (International Media Support), Thomson Reuters Foundation, DW Akademie, Public Media Alliance, Agence Française de Développement and others to improve the impact of media globally.
Contact Develop AI to arrange AI training (online and in person) for you and your team. And mentoring for your business or newsroom to implement AI responsibly and build AI products efficiently.
Email me directly on paul@developai.co.za, find me on LinkedIn or chat to me on our WhatsApp Community.