Mind Readings: Change Your Prompts When AI Models Change


In today’s episode, you’ll learn why treating your AI prompts like code is crucial in a landscape where AI models are constantly evolving. Discover how to adapt your prompts for optimal performance with each model update, ensuring your AI tools remain effective. I’ll share actionable insights on how to leverage new model capabilities while mitigating the impact of performance changes. Tune in to master the art of AI prompt management and stay ahead of the curve!

https://youtu.be/IMpZI3FY3jo

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, let’s talk about models and changing your code. When we talk about generative AI and code, we’re specifically talking about your prompts. When you write a prompt, you are coding. And if you’re writing really good prompts that are detailed, that have examples of how things should work, you are writing extensive code.

When you do that, depending on the model you’re using, you may get different results — you will get different results. So, ideally, you’re using the model to help you write prompts for that model. If you’re using ChatGPT to do stuff, ideally, you’re using ChatGPT to also help you write the prompt.

But when the underlying model changes — for example, ChatGPT just replaced its GPT-4 Turbo model with its new GPT-4 Omni model — when a model changes, its underlying architecture changes, its knowledge changes.

Think about it: if we think of an AI model that powers something like ChatGPT as a library, and inside this model there are these technical terms called retrieval heads — think of them as the librarians. You go to the library, ask the librarian for a book, the librarian goes and gets the book, comes back with the book and says, “Here’s your book.”

When a model changes, it’s kind of like having the library rearrange and reorganize everything. They close down for a week; all the shelves are in different places. Now, if you just tell the librarian to go to “the third shelf in the back there,” those might not be cookbooks anymore; they might be geography books or painting books. The librarian comes back with the book from that physical spot in the library, and you’re like, “This is not what I asked for.”

When a model changes, that’s effectively what’s happening: it’s a new library, and maybe even new librarians. If you go in asking for the same things in the same way, the librarian may come back with something, and you’ll say, “That’s not what I asked for.”

So, when that happens, you want to have the new librarian and the new library read through your prompt and say, “Here’s a better way of doing this task,” because it will adapt the prompt for the way that particular model works. This is especially important if you’re using the Trust Insights PAR Framework, where we tell you to prime the model by asking it what it knows about a topic. When a new model comes out, it probably has new knowledge. You want to ask it, “Okay, what do you know about this now that you didn’t know before, or that is different from before?” Doing so will change the model’s outputs.
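To make that re-priming question repeatable rather than retyping it every time a model changes, you can keep it as a template. Here is a minimal sketch in Python; the function name, the topic, and the prior-summary text are all hypothetical illustrations, and you would paste the resulting prompt into whatever model you actually use:

```python
def build_repriming_prompt(topic: str, prior_summary: str) -> str:
    """Build a priming question for a newly released model.

    Asks the model what it knows about the topic now, and how that
    differs from a summary captured from the previous model version.
    """
    return (
        f"What do you know about {topic}? "
        "Here is what the previous model told me:\n\n"
        f"{prior_summary}\n\n"
        "What do you know now that is new or different from the above?"
    )

# Hypothetical example values for illustration only.
prompt = build_repriming_prompt(
    "B2B marketing attribution",
    "Attribution assigns credit for conversions across channels.",
)
print(prompt)
```

Saving the previous model's answer alongside the template is what makes the "what's different from before?" comparison possible.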

Now, for the average everyday user, this just means maintaining your prompts, keeping them up to date, testing them out, and paying careful attention to see whether the quality of the responses increases or decreases over time. For developers, or for people publishing models and model apps — for example, if you use OpenAI’s custom GPTs and you built a GPT for this, that, and the other thing — if you built it under the previous version of one of OpenAI’s models, you want to go back and revise it.
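One lightweight way to "pay careful attention" to response quality is a prompt regression check: rerun your saved prompts and verify the responses still contain what you need. Below is a minimal sketch under stated assumptions; `run_model` is a canned stand-in for a real API call, and the model names and responses are made up so the example runs without a live service:

```python
# Hypothetical stand-in for a real model API call (OpenAI, Gemini, etc.).
def run_model(model: str, prompt: str) -> str:
    canned = {
        "old-model": "Our Q3 campaign drove 1,200 leads via email and search.",
        "new-model": "Leads came from several channels last quarter.",
    }
    return canned[model]

def passes_checks(response: str, required_terms: list[str]) -> bool:
    """A crude quality gate: did the response mention everything we need?"""
    return all(term.lower() in response.lower() for term in required_terms)

prompt = "Summarize last quarter's campaign performance with channel detail."
required = ["leads", "email", "search"]

for model in ("old-model", "new-model"):
    ok = passes_checks(run_model(model, prompt), required)
    print(f"{model}: {'PASS' if ok else 'FAIL, revise this prompt'}")
```

In this made-up example the new model drops the channel detail, so the check fails and tells you exactly which prompt needs revising, which is the signal you want when a model swap happens under the hood.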

You want to go back, take that prompt that you built the first time around, have the new model evaluate it, and say, “Well, here’s how you should do this now” so that your software works best with the new models. When Google’s Gemini changes models under the hood, you go back and you have Gemini, the new version, rebuild its prompts so that it works better — you rebuild those system instructions.

This is all about change management and data governance within AI. As my partner and CEO, Katie Robbert, says all the time, new technology doesn’t fix old problems. Just because generative AI is a new technology doesn’t mean the old disciplines go away: things like data governance, dataset governance, process management, and change management are still really important.

They’re really important because the field of AI itself is moving so quickly, and models are changing so fast, that if you don’t have good governance, you can very quickly find yourself with something that worked great six months ago and suddenly just doesn’t work at all. Whereas if you’ve been maintaining it the entire time, then as the models shifted and drifted, you could move with them.

It’d be kind of like being a surfer: you move with the wave, you don’t just stay in one spot in the ocean. If you do, you’re going to get taken under. The same is true for using AI models.

So, regardless of whether you are just using prompts and storing them in your prompt library (you are storing them in a prompt library, right?), or you’re building applications like custom GPTs, or building full-fledged AI apps, you absolutely need to make sure that when a model changes, you change your code with it. Ensure you have a process for keeping things up to date, one that takes advantage of new capabilities and addresses performance changes in the code you have.
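A prompt library only supports that process if you track which prompts were validated against which model version. Here is a minimal sketch of that bookkeeping; the record fields, prompt names, and model names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class PromptRecord:
    name: str
    text: str
    tested_against: str  # model version this prompt was last validated on

def needs_review(library: list[PromptRecord], current_model: str) -> list[str]:
    """Return the prompts last validated on a different model version."""
    return [p.name for p in library if p.tested_against != current_model]

# Hypothetical library entries for illustration.
library = [
    PromptRecord("blog-outline", "You are an expert editor...", "gpt-4-turbo"),
    PromptRecord("seo-brief", "You are an SEO strategist...", "gpt-4o"),
]
print(needs_review(library, "gpt-4o"))  # → ['blog-outline']
```

When a provider announces a model swap, one call tells you which prompts to re-evaluate with the new model instead of auditing the whole library by hand.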

That’s going to do it for today’s episode. Thanks for tuning in! I’ll talk to you soon. If you enjoyed this video, please hit the like button. Subscribe to my channel if you haven’t already. If you want to know when new videos are available, hit the bell button to be notified as soon as new content is live!




Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


