Straight out of the gate, Apple “Intelligence” was all promise and no delivery. The only decent tech reviewer on the Internet pointed out that last September’s iPhone was sold on the PROMISE of AI (it was plastered all over the marketing) without a single AI feature available to the public when it shipped. Now those features are creeping into our lives, and though every day I think about how I would like AI to eat all the software on my phone and just be a chatbot, as Microsoft are promising, instead we have a host of largely irrelevant and now destructive features.
Yesterday the BBC ran a rather meek story on its website saying that it has been complaining for ages about Apple’s AI service garbling its stories through the fancy notification rewriting feature. So, if you get a story alert from the BBC app, your iPhone takes the story, imbues it with errors and then puts it in your notification feed.
Apple has previously said its notification summaries allow you to "scan for key details". However, the BBC says, "These AI summarisations by Apple do not reflect – and in some cases completely contradict – the original BBC content."
It is particularly weird behaviour from the tech giant, because I think anyone who has even sparingly used ChatGPT probably asked themselves how Apple was going to use AI to compress and rewrite notifications while mitigating the mistakes and hallucinations inherent in the rewriting process. Turns out, they didn’t think that would be a problem…
“On Friday, Apple's AI inaccurately summarised BBC app notifications to claim that Luke Littler had won the PDC World Darts Championship hours before it began - and that the Spanish tennis star Rafael Nadal had come out as gay,” said the BBC article. Basically, it’s like having your own Kent Brockman in your pocket. It is declaring winners of events that haven’t even happened and outing people who aren’t gay.
The sad part about this, even for a juggernaut like the BBC, is that they are at the mercy of Apple, because it is through this AI feature that iPhone users will largely be absorbing their news.
I have spent the last few years teaching AI implementation in newsrooms all around the world, and a golden lesson I try to impress on journalists is: don’t use AI when you can do the job better yourself, especially if it introduces a high risk of creating errors in the process. I can’t think of a more glaring example of how AI should not be used than what Apple is doing here, because the original headlines, summaries and notifications already exist (written diligently by a BBC employee) and they are in your phone for you to read. We agonise about misinformation going viral, but Apple is essentially blocking the correct story from you and creating a fake one right on your handset.
The tech giant has said it will gracefully deliver us an update that will “clarify” the notifications, but it is still a formidable advert for how bumpy the AI revolution is going to be.
If you are enjoying what you’re reading please consider paying $5 a month to help support this newsletter.
What AI was used in creating this newsletter?
Nothing, except the image below and the strangely Photoshopped looking picture above. They were both created with Meta AI.
Judging AI, the legal landscape
According to Luiza’s Newsletter, AI copyright lawsuits will slow down in 2025, and on the flip side we will see more licensing deals. Writers, artists, record labels and media companies have all formed a long queue to sue AI companies for using their copyrighted work to train AI models. It is going to take a while for the courts to figure out this mess, but essentially people are learning to make love, not war. It is also likely that tech companies will find new, legal and financially sustainable ways to train AI models while ensuring creators can earn a living. Let’s all say it together: “Too little! Too late!”
How to code with AI
I advised a friend on how to build an email writer “in his image” this week. He was deep in using Python code to fine-tune a model that would hopefully sound like him. He had 900 old emails in a clean format, but every time he ran his code (given to him by ChatGPT) the application would hang. The truth is - and strangely ChatGPT did not suggest this easy solution - you can create a bot in your image with nothing more than a ChatGPT subscription. Just click “Explore GPTs”, then “Create”, upload your cleaned emails as PDFs and tell it that you would like the GPT to act like you and write emails mimicking the ones you have uploaded. Then you can prompt it with an email that you have received and ask it for a response. My friend described it as a “good quality bad copy” and added, “it often sounds like someone parodying me, but it’ll work for some things”. The question is, are we ethically obliged to put “written by ChatGPT” on the bottom of our emails now? With a little extra effort you can plug your Gmail dashboard into your custom GPT clone - get in touch if you would like a few tips on how to do this.
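For readers who do want to stay in Python, there is a middle ground between fine-tuning and the custom GPT interface: few-shot prompting, where you paste a handful of your old emails into the prompt as style examples. A minimal sketch of how that prompt could be assembled (the model name and the commented-out API call are assumptions based on OpenAI’s official Python client, not something from my friend’s code):

```python
def build_messages(example_emails, incoming_email):
    """Assemble a chat prompt: a style instruction, a handful of the
    user's past emails as style examples, and the email to answer."""
    system = (
        "You write email replies that mimic the tone, phrasing and "
        "sign-off style of the example emails provided."
    )
    # Cap the examples so the prompt stays within the context window.
    examples = "\n\n---\n\n".join(example_emails[:10])
    user = (
        f"Example emails I have written:\n\n{examples}\n\n"
        f"Reply to this email in my style:\n\n{incoming_email}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# To actually generate a draft (requires an API key; model name assumed):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_messages(my_emails, new_email),
# )
# print(resp.choices[0].message.content)
```

It will not sound more like you than the custom GPT does, but it is cheap, it will not hang, and you can see exactly what the model is being shown.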
I am obsessed with AI assisted coding. The newsletter The Pragmatic Engineer recently sketched out the “70% problem” with this type of coding. Tools like ChatGPT and Copilot make the first stages of development feel magical, but the final 30%, refining and debugging, becomes painfully complex. For non-engineers, like me, this process often feels like whack-a-mole, with fixes creating new problems in a cycle they lack the expertise to resolve. Experienced developers thrive here, using AI to speed up tasks while applying their deep knowledge to refine and stabilise the output. For beginners, AI can become a crutch, offering quick results but bypassing the learning of fundamentals. Ironically, AI benefits experts more, as they can guide it effectively. I have learnt this the hard way, but without a doubt AI coding creates an entry point for you to learn that did not exist before.
This week’s AI tool for people to use
OpenStreetMap is like if Wikipedia was a map of the world: free, editable and built largely from scratch by volunteers, and released under an open-content licence. A bolt-on to the service is SPOT, a geospatial search tool from DW Innovation. It’s similar to Bellingcat’s OpenStreetMap search tool, except it allows you to submit a natural language prompt for a specific location type. For example, “Show me all shopping malls near a traffic light with a park within 300 meters in Nairobi.” Learn more in their tutorial.
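Under the hood, tools like these compile your prompt down to a query against OpenStreetMap’s data, typically via the Overpass API. As a rough sketch of what the “malls with a park within 300 metres in Nairobi” example could look like in Overpass QL (the tags `shop=mall` and `leisure=park` are real OSM tags; the exact query SPOT generates is an assumption on my part):

```python
OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def malls_near_parks_query(area_name: str, radius_m: int = 300) -> str:
    """Build an Overpass QL query for shopping malls in `area_name`
    that lie within `radius_m` metres of a park."""
    return f"""
    [out:json][timeout:25];
    area["name"="{area_name}"]->.searchArea;
    nwr["leisure"="park"](area.searchArea)->.parks;
    nwr["shop"="mall"](area.searchArea)(around.parks:{radius_m});
    out center;
    """

# To run the search (network call, needs the `requests` package):
# import requests
# resp = requests.post(
#     OVERPASS_URL, data={"data": malls_near_parks_query("Nairobi")}
# )
# for el in resp.json()["elements"]:
#     print(el.get("tags", {}).get("name"))
```

The point of SPOT is precisely that you do not have to write this yourself, but seeing the query makes it clear why natural language search over OSM is such a useful shortcut.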
What is happening at Develop AI?
This year is going to be immense for Develop AI. We have trips to Zambia, Botswana and Georgia (the country, not the state) already planned for conferences and AI workshops. If you would like a full AI workshop for your team (remote or in person) then get in touch.
See you next week. All the best,
Develop AI is an innovative consulting and training company that creates AI strategies for businesses & newsrooms so they can implement AI effectively.
Check out Develop AI’s consulting services and conference appearances.
Look at our training workshops (and how your team could benefit from being trained in using AI).
This newsletter is syndicated to millions of people on the Daily Maverick.
Email me directly on paul@developai.co.za. Or find me on our WhatsApp Community.
Watch us on YouTube. Follow us on LinkedIn / Facebook / Instagram / TikTok / Threads and X. Or visit the website.
Listen to Develop Audio’s investigative podcasts, including Alibi, Asylum and The Last Afternoon In The Garden. Contact them for a podcast of your own.
Thank you to all our paid subscribers. For $5 a month you can help support this newsletter (and keep it free for everyone).