radomski.co.nz

AI news roundup - June 2023

Patrick · 3 min read

Trying to stay on top of AI news can be exhausting - there is just so much of it, and only the most hyped or most gloomy tends to make it to our feeds. Here are a few news articles, studies and opinion pieces I've come across in the last few weeks that I found useful and insightful.

Using AI in an ER

A really interesting article from Josh Tamayo-Sarver, MD, PhD about how he uses AI as an ER doctor - and a lot of it rings true with how I see AI too.
"I’ve come to realize that dealing with ChatGPT is like working with an incredibly brilliant, hard-working — and occasionally hungover — intern. That’s become my mental model for considering the usefulness of ChatGPT.

Now, for any potential application, I think, “Would a dedicated but occasionally hungover intern working on this make things easier for me and my staff — or would the work required managing them end up being more effort than just doing it without their involvement?”"

I’m an ER doctor. Here’s how I’m already using ChatGPT to help treat patients - Fast Company - Josh Tamayo-Sarver, MD, PhD - 15th May 2023

New Zealand's Ministry of Business, Innovation and Employment (MBIE) bans staff from using AI

In New Zealand, MBIE has pressed the pause button while it performs internal risk assessments:
"Documents show that in March, MBIE blocked staff access to a number of AI tools including ChatGPT.

The ministry was worried staff could put sensitive information into the technology which could later resurface.

MBIE has hit the pause button while it works out if the technology can be used safely."

There doesn't seem to be a whole-of-government approach to AI in New Zealand yet:

"Privacy Commissioner Michael Webster said it was up to individual government agencies and companies to decide if, and how, they use AI."
Government ministry blocks AI technology from staff use - Radio New Zealand, 6 June 2023

Legal ramifications of AI Hallucinations

Something I've been thinking about for a while - who is responsible for decisions or actions based on AI-generated content? One of the first legal cases to tackle this has started in the US.
"The case alleges, ChatGPT "hallucinated" that Mark Walters' name was attached to a criminal complaint -- and moreover, that it falsely accused him of embezzling money"
Man sues OpenAI claiming ChatGPT 'hallucination' said he embezzled money - The Register - 8th June 2023

AI doesn't help programmers

A great write-up of the limitations and untrustworthiness of AI assistants, and why they aren't useful for programmers (yet).
"Fascinating as they are, AI assistants are not works of logic; they are works of words. They have become incredibly good at producing text that looks right. For many applications that is enough. Not for programming."
AI doesn't help programmers - CACM - Bertrand Meyer - 3rd June 2023

AI is going to eat itself: Experiment shows people training bots are using bots

And finally, a slightly scary study into how some crowd-sourced AI trainers are training new AI models using AI - and how this might create a death loop.
"The academics recruited 44 Mechanical Turk serfs to summarize the abstracts of 16 medical research papers, and estimated that 33 to 46 percent of passages of text submitted by the workers were generated using large language models."
AI is going to eat itself - The Register - 16th June 2023