Is AI funnier than we want to accept?
PLUS: Is generative AI going to be an “all or nothing” decision in 2024, and how do you scrape websites?
My first true AI moment was the release of “Nothing, Forever”, a continuous, AI-generated stream of content inspired by the 90s sitcom Seinfeld. Crudely drawn recreations of the show’s characters perform scripts pumped out by OpenAI’s GPT-3 models. Back in December 2022, the idea that the project was generating constant content derived from my favourite TV show was remarkable. Within three months, in February 2023, Twitch (the service on which the stream was running) temporarily banned it because the “Jerry” character came out on stage and started doing anti-trans material. The audience doesn’t laugh at the supposed jokes, so Jerry says he will stop.
What is curious in hindsight is the creators’ defence: they said they had been forced to shift from the newer GPT-3 Davinci model back to the older GPT-3 Curie, and that this rollback was what caused the transphobic remarks. In 2024 we are much further along in terms of models.
But no one mentioned that there was a pretty heavy dose of self-awareness in what the AI was generating. The headlines glossed over the fact that Jerry asked the audience whether he should do that type of material, and nobody laughed, giving him his answer: he shouldn’t. The AI already knew that anti-trans content was being joked about but wasn’t funny. Ironically, once Jerry’s words were taken out of the context the AI had created, the service was banned in the real world.
A year later we are still poking our AI machines to be funny. Elon Musk and xAI have gone all in on claiming their new chatbot Grok is “funny”. And apparently it isn’t.
Coding Corner (the gradual process of a journalist learning how to code)
I have been busy retooling the AI-generated podcast so it scrapes Google News and builds a daily show around African politics. The scraper leans on a Python package called Beautiful Soup, which parses whatever HTML (or RSS) a site returns; point it at a page with any series of search terms and you can pull that content into your program (a minimal sketch is below). I am trying my best to give appropriate credit to news organisations in the new scripts, and that part is proving harder than anything else. I’m finally acknowledging that most “content” in the lives of people I know is simply recordings of random people talking into webcams. This may be political analysis or reviews of TV shows, but all of it feels like it is gurgling around the same low-production plughole. And though this is mildly heartbreaking if you love creating documentaries, it is encouraging if you are exploring how to make competent content with AI. Watch out for a relaunch of our AI pod next week.
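Here is a minimal sketch of that kind of scraper, assuming Google News’s public RSS search feed rather than the HTML front end (the feed is easier to parse and carries a source tag naming each publisher, which is what makes the crediting possible). The search query, the locale parameter and the credit formatting are my illustrative choices, not the actual code behind the podcast; it needs the requests, beautifulsoup4 and lxml packages installed.

```python
import requests
from bs4 import BeautifulSoup


def fetch_headlines(query, max_items=10):
    """Return recent stories (title, source, link) for a search term."""
    resp = requests.get(
        "https://news.google.com/rss/search",
        # "hl" sets the feed language/region; "en-ZA" is an assumption here.
        params={"q": query, "hl": "en-ZA"},
        timeout=10,
    )
    resp.raise_for_status()
    # The feed is XML, so we ask Beautiful Soup for its XML parser (lxml).
    soup = BeautifulSoup(resp.text, "xml")
    stories = []
    for item in soup.find_all("item")[:max_items]:
        source_tag = item.find("source")  # the publisher to credit
        stories.append({
            "title": item.title.get_text(),
            "source": source_tag.get_text() if source_tag else "unknown",
            "link": item.link.get_text(),
        })
    return stories


if __name__ == "__main__":
    for story in fetch_headlines("African politics"):
        print(f"{story['title']} (credit: {story['source']})")
```

Run as-is, it prints each headline alongside the publisher it should be credited to, which is exactly the raw material a daily show script gets built from.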
This week’s AI tool for people to use…
Social media platforms are forcing us to be explicit when we use AI. However, they currently offer no way for us to elaborate on how much we used it or which methods we implemented. That feels like it is pointing to an “all or nothing” mentality for the tools going forward: if you are admitting to your audience (and the algorithm) that you are using generative AI, and suffering the consequences in how your work is perceived, then why not push it as far as you can?
The pleasure and the craft of producing something without AI are already lost once you have used the tools even a little. And the joy, at least for me, is seeing what can be produced with the least amount of effort.
So, I predict content will fall into two strict camps: work maxed out on AI and work that is “pure”. And audiences will comfortably flip from one to the other.
What AI was used in creating this newsletter?
Nothing, apart from using ChatGPT to create the “humorous” image above.
In the news…
The brilliant technologist and human rights advocate Sam Gregory gives a terrifying (and ultimately uplifting) TED Talk titled “When AI can fake reality, who can you trust?”. AI needs to flourish in newsrooms so we can stop disinformation; the era of putting the onus on the audience to figure out what is real is over. Or rather, if we keep doing that then we are all going to drown in this stuff. Gregory points out in his talk that the AI tools being developed to detect disinformation aren’t being made available to the public, because then they would also be in the hands of the baddies. The idea is to distribute certain detectors only to journalists and activists with influence.
The fight between The New York Times and OpenAI over the training of AI models on the newspaper’s content continues. The AI giant claims in a new blog post that publishers have a way to opt out, which The New York Times adopted in August 2023. But The Times argues that using its content for training is not “fair use”, partly because it competes with ChatGPT as a source of knowledge.
What’s new at Develop AI?
Develop Audio has two free online courses we would love for you to sign up for. The lessons are sent out every day over email: one course is on Investigative Podcasting and the other on Branded Podcasting. We are also developing one (probably unsurprisingly) on AI and Podcasting. If you sign up for one of the above, let us know how you get on.
See you next week. All the best,
Join our WhatsApp Community, visit our website or contact us on X, Threads, LinkedIn, Instagram or TikTok.
Physically we are based in Cape Town, South Africa.
You can email me directly on paul@developai.co.za.
If you aren’t subscribed to this newsletter, click here.