AI at The Met Gala and the horror that is to come...
PLUS How to write an academic essay with AI
The Met Gala is a fundraiser in New York City for the Costume Institute and an excuse for the rich and famous to look fabulous. If it had been reported on X on Monday that an alien invasion, complete with decapitations, explosions and AI-generated rubble, had happened at The Met Gala, we all would have guffawed and raised our eyebrows at how amazing this tech is becoming.
However, the exhilarating and genuinely terrifying reality was that people used AI for far subtler acts of trickery. They changed the dresses of Met Gala participants or put people in the room who weren’t there (but could reasonably have been). They made everyone following the event on social media doubt the reality of what they were seeing.
I have never shown interest in The Met Gala - I am colour blind and have no dress sense - but this year highlights where AI and disinformation are going, and just because it is a fashion event we shouldn’t dismiss these Deepfakes as trivial.
Katy Perry was not at the event despite her likeness appearing in several photos… she even LIKED the AI-generated images on X. She did admit, though, that her own mother was fooled by the photos.
Meanwhile, Rihanna is fashionably late for the event every year and promised her fans that this year she would be punctual. Sure enough, images of her got millions of views as she smiled into the camera wearing a dress covered in birds and flowers. It turns out she had flu and didn’t attend. The clue (besides her not being in the room for real): days before, she had dyed her hair pink, which was not its colour in the AI photos.
Ready for the real twist? Various pictures of Zendaya surfaced from the gala, a few in a blue-green outfit and others in a black leather gown… immediately we should be suspicious, but she did a costume change in the middle of the event and all of the images were real.
Shrinking newsrooms have, over the years, forced journalists to rely increasingly on social media to help with their reporting. Now, the only way to know for sure that Katy Perry was absent was to be at the event yourself. Who is going to pay to put that journalist in the room? And who is going to have the discipline to only go to “official” news sites in the age of social media?
Now, imagine this type of trickery being used on images of a disaster or a war, or during an election cycle. Creative deception from people who understand what is possible in a given situation is so dangerous. And I think the Zendaya example is important for political situations, because if we lose trust in everything we see and are forced to question it constantly, how long will we stay engaged? We do need to check and double-check what we see and share, but if that becomes the new normal it will rapidly diminish the enjoyment of our time online. And we will be at risk of shutting off from the discourse completely.
This event dovetailed nicely with plenty of news from the tech giants about how they are responding to the problem of Deepfakes.
OpenAI says it now has a tool that can detect (usually, but not always) images created by its own software. It claims a 98.8% success rate in identifying images produced by DALL-E 3, though it struggles to detect media created by other systems. That renders the achievement largely useless: if the image was created by different software, it generally won’t be flagged. It points to a need for massive collaboration across the industry.
McAfee, a brand famous for virus detection and taking drugs with 18-year-olds in Belize, has developed Project Mockingbird, an advanced AI model capable of detecting AI-generated audio with a high degree of accuracy. This technology is designed to protect us from scams and misinformation. Of course, it will only work if consumers know that fakery at this level is possible and know where to test content to see if it is real.
Intel, teaming up with the Graphics and Image Computing laboratory at Binghamton University, has created FakeCatcher, a tool that can identify fake portrait videos. The way it does this is pretty amazing: it checks the person in the video for “signs of life”, like blood flow. When our hearts pump blood, our veins change colour. FakeCatcher claims to collect these blood-flow signals from all over the face and instantly detect whether there is a real person behind those pixels.
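If you want a feel for the idea, the research term for this kind of “signs of life” check is remote photoplethysmography: skin colour fluctuates very slightly with every heartbeat, and that rhythm is hard for a generator to fake. Below is a minimal toy sketch of the concept in Python. It is emphatically not Intel’s implementation - the frame shapes, the heart-rate band and the function names are all my own illustrative assumptions - but it shows how averaging colour over a face and hunting for a heartbeat-like frequency could work.

```python
# Toy sketch of remote photoplethysmography (rPPG), the general idea behind
# "signs of life" detectors. Assumes you already have a cropped face region
# for every frame of a portrait video, stored as a NumPy array.

import numpy as np


def green_channel_signal(face_frames: np.ndarray) -> np.ndarray:
    """Average the green channel of each cropped face frame.

    face_frames: shape (num_frames, height, width, 3), RGB values.
    Blood flow shows up as a tiny periodic wobble in this per-frame average.
    """
    return face_frames[..., 1].mean(axis=(1, 2))


def dominant_frequency_bpm(signal: np.ndarray, fps: float) -> float:
    """Find the strongest periodic component, expressed in beats per minute."""
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)
    # Only consider a plausible human heart-rate band (40-180 bpm).
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    if not band.any() or spectrum[band].max() == 0:
        return 0.0
    return float(freqs[band][np.argmax(spectrum[band])] * 60)


def looks_alive(face_frames: np.ndarray, fps: float = 30.0) -> bool:
    """Crude check: does the face show a heartbeat-like colour rhythm?"""
    bpm = dominant_frequency_bpm(green_channel_signal(face_frames), fps)
    return 40 <= bpm <= 180


if __name__ == "__main__":
    # Synthetic example: 10 seconds of a "face" whose green channel pulses at ~72 bpm.
    fps, seconds = 30.0, 10
    t = np.arange(int(fps * seconds)) / fps
    pulse = 2 * np.sin(2 * np.pi * (72 / 60) * t)  # tiny periodic brightness change
    frames = np.full((len(t), 64, 64, 3), 128.0)
    frames[..., 1] += pulse[:, None, None]
    print("Signs of life detected:", looks_alive(frames, fps))
```

A real detector has to do far more than this - face tracking, multiple colour spaces, robustness to compression and lighting - but the underlying signal is exactly this kind of tiny, rhythmic colour change.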
The last point is one of digital literacy. OpenAI, in a joint effort with Microsoft, recently invested $2 million in an initiative to educate the public about identifying AI-generated content… this would be a gallant gesture from a different pair of companies, but when their platforms are helping to generate the fake content, it feels like they are saying that sifting through this mess is our problem rather than theirs.
What AI was used in creating this newsletter?
I feel like we are in a perpetual game of restriction pass-the-parcel. For today’s image of the old man and his Deepfake, ChatGPT couldn’t produce anything in the style of Ralph Steadman “due to content policy restrictions”, Gemini is still struggling with images of people (as I wrote about here), and Claude can’t do images at all. Bing tried, but the results were terrible. Meta.AI (on its third attempt) managed to dig deep and produce the picture above.
In the news…
The podcast Newsroom Robots is excellent, and this week it does well to explain how The Guardian is using AI in an ethical way. It also underlines how you need to consider the resources necessary when adopting AI processes for your business. I am aggressively building AI workshops at the minute, and the key elements are cost and the agency that people across an organisation have to implement changes.
This week’s AI tool for people to use…
It has been decades since I had to write an academic essay, but I am interested in the art of plagiarism in the age of AI. This curiosity got me to try Essay Genius, and though it was thin on citations (and I fed it a topic about my favourite movie), I was shocked by how it took just a title and tied an argument together.
See you next week. All the best,
Develop AI is an innovative company that reports on AI, builds AI-focused projects and provides training on how to use AI responsibly.
Check out Develop AI’s press and conference appearances.
Listen to our completely AI generated podcast (and ask us to make you one of your own).
Also, look at our training workshops (and see how your team could benefit from being trained in using AI).
This newsletter is syndicated to millions of people on the Daily Maverick.
Email me directly on paul@developai.co.za. Or find me on our WhatsApp Community.
Follow us on TikTok / LinkedIn / X / Instagram. Or visit the website.
Physically we are based in Cape Town, South Africa.
If you aren’t subscribed to this newsletter, click here.