AI Detection & The University Implosion
PLUS: The type of AI content that is a threat to journalism we didn't see coming
These stories first appeared on my daily LinkedIn newsletter, Develop AI Daily.
Sometimes we want a solution so badly that we will collectively see it when it isn't even there. The software used for AI detection at universities is an example of this. We want to know if the kids are using ChatGPT to write their essays! The university where I studied (UCT) has just discontinued its AI detection software, Turnitin, in disgrace. I would be heartily sceptical of sending my kids there (or to any university that uses it, or GPTZero, or any one of these packages) given the overwhelming evidence that none of them work. They are easily tricked, or they condemn work as AI-generated when it isn't. I met a father in North Macedonia last year who was locked in a battle with his daughter's college over an AI dispute that was threatening to get her expelled. It is evident that tertiary institutions don't know what to do.
The solution I have been touting is simple and, I think, effective: build an LLM for your campus, tailor it specifically with the books and materials your students need, and let them use it as much as they want. You will have the capacity to monitor all their chats and AI generations on the backend, and you will be able to analyse and score the quality of their interactions with the chatbot and even put that towards their mark. The rule, though, is that they can only use the designated LLM; if they stray, they are penalised. And here comes the tricky part: we need to reinvent what we ask of our students so that it actually tests them in this new world order. What is a useful skill if all the written content in a person's future job is to be AI-generated? The often-touted secret sauce of a university education is "critical thinking". Well, what does that look like when AI becomes competent at thinking for us? Universities across the globe need to grapple with this, instead of scratching in the mud for ineffective ways to catch out their students.
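To make the backend idea concrete, here is a rough sketch of what logging and scoring student interactions with a campus chatbot could look like. Everything here is hypothetical (the class names and the toy scoring rule are mine, not any real product); a real deployment would score question quality with a proper rubric, not word counts.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ChatTurn:
    """One student prompt and the LLM's response, timestamped for audit."""
    student_id: str
    prompt: str
    response: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class CampusChatLog:
    """Hypothetical backend store: every exchange is kept for staff review."""

    def __init__(self) -> None:
        self.turns: list[ChatTurn] = []

    def record(self, student_id: str, prompt: str, response: str) -> None:
        self.turns.append(ChatTurn(student_id, prompt, response))

    def engagement_score(self, student_id: str) -> float:
        """Toy metric: average prompt length in words, capped at 10.
        Stands in for the 'analyse and score quality' step."""
        prompts = [t.prompt for t in self.turns if t.student_id == student_id]
        if not prompts:
            return 0.0
        avg_words = sum(len(p.split()) for p in prompts) / len(prompts)
        return min(avg_words, 10.0)
```

For example, `log.record("s001", "Explain the causes of the 1913 Land Act", answer)` followed by `log.engagement_score("s001")` would let staff see both the conversation and a number they could feed into assessment.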
I am busy building these private LLMs for certain organisations and believe universities in Africa need them immediately. Get in touch on how we can strategise.
If you are enjoying what you're reading, please consider paying $5 a month to help support this newsletter.
AI self-made content is the threat to media we didn't see coming
While media organisations are fretting over the competing "user-generated" content that they can see, hear and read, it may be time for them to add a new layer of fretting: content that will never be published. Yesterday Google launched a significant new feature in my most-loved AI tool. Ostensibly for "research", NotebookLM lets you upload a "notebook" of PDFs and links and then chat to them so you can understand a subject better. Last year they snuck in a way to turn your dry PDFs into an annoying, zany, American-style chat podcast. They have since allowed those podcast hosts to chat in dozens of different languages, and now they are adding a video option, so you can create your own YouTube video on the subject of your choice.

NotebookLM is already an app I use to absorb a pile of complicated documents in a short space of time (via 20 minutes of audio rather than reading hundreds of pages), but the use case is becoming compelling for me to quickly create my morning news with the service and gobble it up as an audience of one. It is the ultimate echo chamber, where I will only bump up against the bias inherent in Google's Gemini (the LLM writing the scripts from the uploaded material). And media organisations can't contest not being credited, because nothing is ever strictly "published" outside of my own private use. But when you duplicate that private use millions of times, where does that leave the creators of the "source material"?
AI website crawling is having a bit of a moment. The crawlers that companies like OpenAI use to land-grab the world's data don't play as nicely as we hoped. But the self-created content future of NotebookLM (and whatever comes after it) weirdly makes this issue obsolete: each user will spend a couple of minutes copying and pasting, exporting websites to PDFs and grabbing the transcripts of podcasts, then serving them straight into Google's mouth. They will then spend the next hour consuming their morning content with no adverts and without a cent passing into the hands of a journalist.
The only thing that can stop this future is if the users (you and me) resist the temptation, and content creators are going to need to build a pretty compelling case for us to do that. Their stuff needs to be better than what we can make ourselves... and they don't even know us.
What AI was used in creating this newsletter?
None, besides the image.
What is happening at Develop AI?
I use AI strategies to work with IMS (International Media Support), Thomson Reuters Foundation, DW Akademie, Public Media Alliance and others to improve the impact of media globally. But we are now branching out into schools and ways to fix education in Africa with AI.
See you next week. All the best,
Develop AI is an innovative consulting and training company that creates AI strategies for businesses & newsrooms so they can implement AI effectively and responsibly.
Contact Develop AI to arrange AI training (online and in person) for you and your team, and mentoring for your business or newsroom to implement AI responsibly and build AI products efficiently.
Email me directly on paul@developai.co.za, find me on LinkedIn or chat to me on our WhatsApp Community.
We have a podcast on YouTube and Spotify called Burn It Down about AI and what it could do to the world.