Under the dazzling floodlights of the Yas Marina Formula 1 circuit in Abu Dhabi, the Autonomous Racing League sought to pioneer a new era of motorsport. Not a single human was driving anywhere on the track. It was an event poised to showcase the zenith of AI, but it unfolded more like a cautionary tale than a triumph. The promise was that human drivers would soon be replaced by algorithms. But the crowd quickly realised that dystopia was not going to be delivered…
The stage was set for a spectacle: eight teams from various corners of the globe, each equipped with a state-of-the-art Dallara racing car loaded with the latest in LiDAR, radar, cameras, and an intricate mesh of sensors. These vehicles, built for speed and precision, were programmed to navigate the demanding track autonomously at speeds exceeding 250 km/h.
However, reality quickly veered into view. The teams, including the well-prepared squad from the Technical University of Munich (TUM), encountered a string of setbacks.
During the qualifying time trials, the cars struggled to complete a full lap, marred by technical glitches and crashes. Most dispiriting of all were the moments when cars simply pulled over for no apparent reason and took a little break on the side of the track.
TUM, despite their rigorous preparation and technical acumen, only managed a third-place finish in the time trials due to these unforeseen issues.
The final race, intended to be a seamless display of advanced technology, turned chaotic when one car spun out on the very first lap, triggering a domino effect. The remaining vehicles, programmed to put safety first, halted abruptly behind it, unable to navigate the unexpected obstacle of a car out of place. The race ground to a premature stop, with thousands of spectators witnessing the limitations of AI. It was a stark reminder of the formidable challenge of replicating human intuition and reflexes. But I can imagine there was also relief in that crowd, because do we really want everything to be replaced and automated?
The aftermath saw feeble humans scrambling around to reset their vehicles for another attempt. During this second chance, despite a more cautious start, the mechanical and software issues persisted.
Amidst the technological turmoil, the TUM team, led by Prof. Markus Lienkamp and team leader Simon Hoffmann, rallied their collective expertise to address each challenge. Their vehicle, equipped with an array of sensors processing massive data streams, showcased brief moments of brilliance, hinting at the potential that might one day be realised.
The event closed with an exaggerated shrug. Yet, with the right kind of eyes, you could just about see progress. Prof. Lienkamp viewed the event as a critical learning experience, saying it brought his team closer to understanding the complex interplay of technology and racing dynamics. Though, apparently, plenty of spectators were so bored they left before the race even finished. They wouldn't even have seen TUM win it.
I have to say, from developing entirely AI-generated content (like our podcast), it is easier to take a "big red button" approach and make the whole project AI-dependent, but that doesn't necessarily bring the best results. It is a cute gimmick to say no humans were used in producing a podcast (or driving an F1 car), but the future is certainly a less glitzy mix of people and emerging tech blended together. So, expect to see drivers in F1 cars maybe forever, but for their jobs to get infinitely easier going forward.
What AI was used in creating this newsletter?
I asked ChatGPT to create the image for this newsletter and to help write the main story. Initially it got the verdict of the F1 race completely wrong. Despite my giving the AI a range of stories and information, it wrote the article as if everything had transpired perfectly. It was only after I told ChatGPT that the race had been a disaster that it picked out the negative facts and included them in the story. As always, I had to rewrite the article to remove the generic "ChatGPT voice" that has emerged in the last year.
In the news…
The bad: OpenAI is finally paying for news content (but way too late). In a move that feels surprising for modern media, OpenAI is paying the FT for its content. Intuitively this feels like good news, as it cements the AI giant's ability to ingest training material without legal risk while throwing a few bucks to providers for their work. Though I would say this is too little, too late: basic lip service and good PR for OpenAI after gobbling up oceans of data to train its models for free. It is especially interesting when you consider the backdrop of The New York Times' lawsuit against OpenAI (and a whole bunch of other papers suing them too). We have a long way to go before we figure out the rules of engagement between AI and content.
The good: Deepfakes are being outlawed in the UK. The UK government has already pledged that creating sexually explicit "deepfake" images will be made a criminal offence in England and Wales. Next, musicians want to be protected. Professional whiners Mumford & Sons and Sam Smith (plus others) are saying that AI is taking their voices and faces and is "a destroyer of creators' livelihoods". In response, MPs in the UK are scrambling to update their archaic laws for modern times. But I think what should really have artists riled up is that AI can pump out music so easily at this point that we might not even need their original tunes soon enough.
This week’s AI tool for people to use…
I have started testing the various AI platforms to see which one can best plan my day. I have extreme to-do-list-making disease: I will happily make comprehensive lists of what I need to do rather than do any work. I have also found that giving these AI platforms access to the internet hasn't necessarily been for the best. Microsoft's Copilot is obsessed with handing out links as if it were Google instead of doing what I asked, whereas ChatGPT will take your to-do list and build a comprehensive plan, particularly if you tell it your working hours and deadlines.
What is happening at Develop AI?
We are proud to be offering more training workshops in AI. In the coming months we will be heading to Kenya (in person) and to Moldova (remotely) to teach journalists how they can use AI efficiently and ethically.
See you next week. All the best,
Develop AI is an innovative company that reports on AI, builds AI-focused projects and provides training on how to use AI responsibly.
Check out Develop AI’s press and conference appearances.
Listen to our completely AI generated podcast (and ask us to make you one of your own).
Also, look at our training workshops (and see how your team could benefit from being trained in using AI).
This newsletter is syndicated to millions of people on the Daily Maverick.
Email me directly on paul@developai.co.za. Or find me on our WhatsApp Community.
Follow us on TikTok / LinkedIn / X / Instagram. Or visit the website.
Physically we are based in Cape Town, South Africa.
If you aren’t subscribed to this newsletter, click here.