When AI systems fail... on purpose
PLUS Our AI Data Tracker is Getting Better
What’s new at Develop AI?
We are spending our days mentoring newsrooms, building tools for them and developing ethical AI policies. We are currently (and simultaneously) doing this in Zimbabwe, Tanzania, Zambia, Kenya and South Africa, as a partner of the Thomson Reuters Foundation and the Digital News Transformation Fund. It is getting incredibly in-depth and exciting. I will be sure to share the results in this newsletter.
When AI systems fail... on purpose
I drive everyone in my life nuts ranting on about how Africa is so intensely vulnerable to data extraction. Data that we aren’t even aware we are creating will be taken from us and monetised. I was interviewed for a podcast the other day and started ranting about AI toothbrushes… and there is validity in that rant: the new Oral-B range tracks your brush movements (via an app you download), ostensibly as a way to improve your technique, but they are certainly keeping all that data for something.
This week, Africa Uncensored, Lighthouse Reports and The Guardian published an investigation that is a premium piece of journalism. The story is about Kenya’s Social Health Authority (SHA), the centrepiece of President Ruto’s overhaul of public health insurance. SHA uses an AI model to estimate household income based on dozens of proxies (roof material, whether you own a radio, education level, access to electricity and type of toilet) because most Kenyans who work informally have no payslips for the system to read. The predicted income determines what each household pays each year for health cover.
The journalists, led by Purity Mukami, Joy Kirigia, Gabriel Geiger, Tomas Statius and Naipanoi Lepapa, filed access-to-information requests under Kenyan law; when those were ignored, they escalated the issue to the Ombudsman. They managed to obtain the underlying formula and the household survey the AI model was trained on, then rebuilt the system themselves and tested it. They found that the model systematically overcharged the poorest Kenyans for health insurance and undercharged the wealthiest.
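To see how a proxy-based income model can produce exactly that skew, here is a toy sketch. To be clear, this is not SHA’s actual formula: the base amount, weights, proxies and households below are all invented for illustration. The point is that once you have the formula and the data, you can rebuild the system and test who it overcharges, which is what the journalists did.

```python
# Toy proxy-means-test: predict household income from observable proxies.
# All numbers here are invented, not SHA's real model.

def predict_income(household, weights, base=2_000):
    # Linear score: a base income plus a contribution per observed proxy.
    return base + sum(weights[k] * v for k, v in household.items())

WEIGHTS = {"metal_roof": 1_500, "owns_radio": 400,
           "electricity": 2_500, "secondary_education": 3_000}

households = [
    # (true monthly income, proxy indicators)
    (1_000,  {"metal_roof": 0, "owns_radio": 1, "electricity": 0, "secondary_education": 0}),
    (12_000, {"metal_roof": 1, "owns_radio": 1, "electricity": 1, "secondary_education": 1}),
]

for true_income, proxies in households:
    est = predict_income(proxies, WEIGHTS)
    # If predicted income exceeds true income, the premium is set too high.
    verdict = "overcharged" if est > true_income else "undercharged"
    print(f"true {true_income}, predicted {est}: {verdict}")
```

Run it and the poor household (true income 1,000) is predicted at 2,400 and overcharged, while the wealthy one (true income 12,000) is predicted at 9,400 and undercharged: a fixed base amount plus crude proxies compresses everyone toward the middle.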
There is a temptation, especially among those of us who spend a lot of time around AI, to receive this kind of finding as an algorithmic error story: the model was poorly calibrated, the training data was unrepresentative, no one is to blame, fix the maths and try again. However, a confidential report by IDinsight (the consultancy hired to evaluate SHA’s system before launch, never published until this investigation surfaced it) tells a different story. The consultancy has the laughable slogan, “We transform how the world fights poverty. Improving lives with data and evidence.”
And to quote Mark Ruffalo, “They Knew!” The consultants knew the system was flawed and could not be made reliably accurate, especially for poor people. As a fix, they suggested putting a complaints system in place and launching anyway, conveniently putting the burden on the user. The training data over-represented middle-income households and had almost no coverage of the disadvantaged people the system would most affect. The journalists also ran a series of tests on the adjusted model proposed in the consultants’ report: the suggested changes did not improve the system’s accuracy and actually made things worse.
The World Bank had advised against deploying the system in its current form. So had the ILO. So had the UN. Sources told the reporters that internal Ministry of Health resistance was overridden because pressure was coming from, in their words, “a very high place.” One engineer involved in the build was asked, after early figures came in, whether there was a way to make the model “have a higher yield so that people contribute more.”
This is not a story about a model that needs better training data, although it would have benefitted from some. It is a story about a state knowingly deploying a system that would load the cost of public healthcare onto people who could not pay, while preserving deniability about who made that call. The AI is doing the political work of distancing the decision from the people who took it.
Access-to-information laws across the region are uneven but not useless. The next time a government tells you that AI will help it deliver social benefits more fairly, ask whose definition of fair the system was tuned to. Ask what data it was trained on, and when. Ask what happens to people who are misclassified, and whether that mechanism has been tested with the actual people it was designed to protect.
The AI tool for you to use this week
Airtable. It is not known as one of the big AI tools (even though, like every other app, it has been glutted with AI features recently), but it can fully revolutionise your AI work by serving as the backbone for your AI projects. The issue when chatting to an LLM like Claude about a work project is that the damn thing forgets. My life has become a 50 First Dates situation where I patiently explain my life to one of these machines every morning. Airtable is a fancy database platform that can connect directly to Claude, and any insight or idea Claude produces that is worth remembering, you can ask it to store in your Airtable. More importantly, you can then mine your Airtable with Claude for ideas, or for insights on who to contact next or where to push your business. The huge drawback of LLMs is this lack of history, even though they fake it pretty well at a surface level. And crucially, this keeps your data from simply being buried inside an LLM; yes, it is on another platform, but at least you can see, edit and export it from Airtable. Also, it is intensely satisfying to tell Claude to build and populate a database with your info, then go over to Airtable and find it full, without having done any of that work yourself.
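For the curious, the plumbing under a setup like this is simple. The sketch below shows the shape of a record you would send to Airtable’s REST API (records are created with a POST to `/v0/{baseId}/{tableName}`); the base ID, table name and field names here are hypothetical, and no request is actually made.

```python
import json

# Hypothetical base ID and table name. Airtable's real endpoint pattern is
# POST https://api.airtable.com/v0/{baseId}/{tableName}, authenticated
# with a Bearer token in the Authorization header.
API_URL = "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Insights"

def build_insight_payload(summary, source, tags):
    # Airtable's create-records API expects {"records": [{"fields": {...}}]}.
    return {"records": [{"fields": {
        "Summary": summary,
        "Source": source,
        "Tags": ", ".join(tags),
    }}]}

payload = build_insight_payload(
    "Readers reply more when the subject line names a country",
    "Claude planning chat",
    ["newsletter", "growth"],
)
print(json.dumps(payload, indent=2))
```

In practice you would not write this yourself: Claude can talk to Airtable directly, and this payload is what gets assembled behind the scenes when you ask it to “store that insight”.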
OpenAI gets a prime place in our data security tracker
We are expanding our AI Tracker (which already covers AI legal and regulation developments globally) to accommodate AI data security. The first story to go into this section of the tracker feels practically benign: free ChatGPT users are opted into marketing cookies by default. These track your chats and your behaviour across the web, and here is the kicker: the goal is to convert you from the free tier to the paid version, so OpenAI can better understand what makes users convert and which frustrations make them pay.
And as my life obsession is becoming data tracking, I want to stress that cookies do not just “remember your preferences”. This pushes OpenAI into the realm of Google and Meta, but with the intimate nature of an assistant. ChatGPT is where people ask personal questions and think out loud to a machine, and while there is an understanding that the data is being harvested for “training”, in reality ChatGPT’s answers are now constantly nudging you to go paid. So the answers have an ulterior motive. Cynics will respond to this with a “this is the world, kid” shrug, but it is important to know this is happening. Every use of AI is a security decision; even when it is not a risk, you need to make that decision deliberately.
What AI was used in creating this newsletter?
The image of me below is now created with ChatGPT’s new image model. I fed it a pic of me taking a photo of myself in a mirror, phone showing and bag on my back, and it polished it up. Corporate photo shoots just became extinct.
See you next week. Cheers.
Develop AI is an AI consulting company that builds AI solutions for newsrooms and businesses. We have trained and worked with hundreds of organisations globally so they can effectively implement AI and develop ethical AI prototypes and policies.
Contact Develop AI to arrange an AI training (online and in person) for you and your team. And ask about our mentoring so your business can build efficient AI workflows.
We have implemented AI strategies for The Digital News Transformation Fund, Thomson Reuters Foundation, DW Akademie, Public Media Alliance, International Media Support, Agence Française de Développement and others to improve the impact of AI globally.
Email me directly on paul@developai.co.za, find me on LinkedIn or chat to me on our WhatsApp Community.