ChatGPT drinks 500ml of fresh water for every 50 texts you send it...
PLUS Why voice cloning doesn't always work
When the undersea Internet cables off the West Coast of Africa - the ones that bring our streamed episodes of Suits into our homes - broke the other week, it reminded us all that our intangible digital world, and the fancy AI we all love, still needs kilometres of chunky metal to work.
Similarly, as we ramp up towards a world where AI constantly generates content for us, we are being asked to consider the environmental cost of that production. A paper published late last year estimates that ChatGPT is thirsty for 500ml of fresh water for every 5 to 50 prompts or questions it answers. The range varies depending on where its servers are located and the season, and the estimate includes the indirect water needed to cool the power plants that supply the data centres with electricity. And, frankly, this is only the beginning…
The big guys can’t hide how much more water they now need. In its environmental report, Microsoft said that its global water consumption spiked 34% from 2021 to 2022 (to over 6 billion litres) - a sharp increase compared to previous years, which researchers reckon has to do with all its work on AI.
Shaolei Ren, a researcher at the University of California, Riverside, and one of the authors of the paper cited above, says that training GPT-3 in Microsoft’s state-of-the-art U.S. data centres used 700,000 litres of clean freshwater. The real problem, though, will come as the public becomes increasingly obsessed with asking their AI assistants questions. The paper estimates that global AI demand could mean we withdraw between 4.2 and 6.6 billion cubic metres of water in 2027 - roughly half of the total water the UK currently uses in an entire year. I had to look this up, but a cubic metre of water is the same as 1,000 litres.
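If you want to sanity-check those numbers yourself, the arithmetic is simple - the per-prompt range and the 2027 projection come from the paper; the litre conversion is just the 1,000-to-one rule above:

```python
# Rough arithmetic behind the water figures above.
# The paper estimates 500 ml of fresh water per 5-50 prompts,
# depending on data-centre location and season.
ml_per_prompt_low = 500 / 50   # best case: 10 ml per prompt
ml_per_prompt_high = 500 / 5   # worst case: 100 ml per prompt

# Projected global AI water withdrawal for 2027 (lower bound).
cubic_metres = 4.2e9           # in m^3
litres = cubic_metres * 1000   # 1 m^3 = 1,000 litres

print(f"{ml_per_prompt_low:.0f}-{ml_per_prompt_high:.0f} ml per prompt")
print(f"{litres:.1e} litres at the lower bound")
```

That lower bound alone is 4.2 trillion litres - which is why the comparison to an entire country’s annual usage isn’t hyperbole.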
Addressing these problems is made harder by our expectation that AI models keep getting faster and better. One lever we do have is reducing computational intensity (and so the environmental impact) by sacrificing some precision and accuracy in our models. This process is called quantization, and it may seem like a step backwards, but not every use case needs a model working at its highest level, so being able to moderate this will reduce the footprint to a degree.
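To make quantization concrete, here is a minimal toy sketch (not any particular vendor’s method): 32-bit floating-point weights are mapped onto 8-bit integers, making the model roughly four times smaller and cheaper to run, at the cost of small rounding errors.

```python
import numpy as np

# Minimal sketch of post-training quantization: map 32-bit float
# weights onto 8-bit integers, trading a little precision for a
# much smaller (and cheaper-to-run) model.
weights = np.array([0.12, -0.83, 0.45, 0.99, -0.31], dtype=np.float32)

scale = np.abs(weights).max() / 127          # one scale for the whole tensor
quantized = np.round(weights / scale).astype(np.int8)   # 4x smaller storage
dequantized = quantized.astype(np.float32) * scale      # approximate recovery

print(quantized)                             # int8 values in [-127, 127]
print(np.abs(weights - dequantized).max())   # small rounding error
```

The recovered weights are close to, but not exactly, the originals - that gap is the “sacrificed precision” the paragraph above describes.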
A positive is that models use less energy after they have been trained, but a negative is that an AI can forget what it has already learned as it learns new things (which reminds me of when Homer learnt how to make wine and forgot how to drive), so it may need to be taught a second time - and consume additional power all over again.
As you can imagine, a whole cottage industry around “green software” has sprouted up during AI’s ascent. But the real pressure - the kind that might actually save the environment - is going to be the cost to businesses. Every time your piece of software accesses an LLM like ChatGPT, you pay. Building an AI chatbot is cheap right now, but if it becomes wildly popular it can be pricey to maintain: every message your website’s chatbot sends to ChatGPT for an AI-generated response is charged. As a result, the bot world is already settling on a hybrid of automated conversations (with preplanned responses), AI-generated conversations and the option for a human to jump in and take over.
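That hybrid pattern can be sketched in a few lines. This is a toy illustration, not any vendor’s product - the function names and canned answers are all made up - but it shows the economics: predictable questions are answered for free from a lookup, and the metered LLM call only happens when nothing matches.

```python
# Toy sketch of the hybrid bot pattern: answer cheap, predictable
# questions from canned responses and only pay for an LLM call
# when nothing matches. All names here are hypothetical.
CANNED = {
    "opening hours": "We're open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
}

def call_llm(message: str) -> str:
    # Placeholder for a real (billed) LLM API request.
    return f"[LLM-generated reply to: {message}]"

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in CANNED.items():
        if keyword in text:
            return answer          # free: no LLM call, no charge
    return call_llm(message)       # paid: one metered API request

print(reply("What are your opening hours?"))
```

Add the human-handover option on top of this and you have the three-tier setup described above.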
The chatter around the environmental impact of AI is going to increase and, as usual, people don’t want to sacrifice what they see as progress, but they also always want to save a few bucks. This thrifty nature, at least, could slightly curb the environmental chaos that’s coming.
This week’s AI tool for people NOT to use…
I am a recent Descript convert for producing podcasts. It is incredible. You load all the audio into the platform and then you can edit the episode - shifting sections around, cutting out mistakes - as if it were a Word document. However, they have recently jumped on the AI voice cloning bandwagon. And it doesn’t work. The promise is that you can clone your voice and then simply type in the words you want it to say for your voiceover. The promotional materials even claim that you will save money because you won’t need to buy any recording equipment. The truth is I upgraded my plan to access the AI features and my voice clone sounded robotic and nothing like me. It is a waste of time. But the new Rode PodMic USB is dynamite.
What AI was used in creating this newsletter?
ChatGPT was used to create the picture for the article above.
In the news…
Apple have frustratingly claimed for years that Siri is more than it is. Their new AI move, ReALM, may actually reverse Siri’s lobotomy and make it useful. The model is tiny compared to GPT-4, but that is because all it has to do is reconstruct your iPhone’s screen and label each part of what you are seeing with text. That text can only be seen by the phone’s voice assistant, which sits ready to receive your instructions. So if you are scrolling through a website you can ask Siri to “call the business” and it will be able to “see” the phone number and ring it. I mean, that is great… but Siri launched back in 2011, and if a voice assistant can’t do this sort of task then there has been no point in us tolerating it for over a decade.
Jon Stewart finally got his boot in around AI, though it felt rather toothless. For a better analysis of AI and disinformation read this by Julius Endert.
Updates from our WhatsApp Community
Here are this week’s best links and tools from our beloved WhatsApp Community (which we would love you to join):
A huge thanks goes to Kim Fox for sharing this resource of assignments and materials for educators who are curious about how AI may affect their students and what they should be teaching.
We have also been sharing this rather gloomy report from Reuters on the future of news in the age of generative AI.
What is happening at Develop Audio?
Our sister company does investigative podcast production and training. We’re excited to share with you our new podcast series, Asylum (on Apple, Spotify and YouTube).
Investigative journalist Opoka p'Arop Otto has asylum in The Netherlands. He was forced to flee his country of South Sudan because he was in danger of being killed. The incredible journalism he produced about the situation in his country meant he could no longer live there. He is now rebuilding his life in Europe, forced to work as a cleaner in order to make money for his family... and yet he is still helping other journalists back in South Sudan as they fight to survive. (Those links again: Spotify, Apple & YouTube).
Also, sign up for our free email courses on how to produce investigative and branded podcasts.
See you next week. All the best,
Develop AI is an innovative company that reports on AI, builds AI focused projects and provides training on how to use AI responsibly.
Check out Develop AI’s press and conference appearances & our completely AI generated podcast.
Visit our website. Contact us on TikTok, LinkedIn, X and Instagram.
This newsletter is syndicated to millions of people on the Daily Maverick.
Email me directly on paul@developai.co.za. Or find me on our WhatsApp Community.
Physically we are based in Cape Town, South Africa.
If you aren’t subscribed to this newsletter, click here.