Almost Timely News: 🗞️ How to Build a Generative AI Priming Representation (2024-06-30)


👉 Catch the replay of my Generative AI for Sales webinar!

Content Authenticity Statement

90% of this week’s newsletter was generated by me, the human. You’ll see screenshot outputs from generative AI in the opening section, and the country song lyrics are also AI output. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.

Watch This Newsletter On YouTube 📺


Click here for the video 📺 version of this newsletter on YouTube »

Click here for an MP3 audio 🎧 only version »

What’s On My Mind: How to Build a Generative AI Priming Representation

Let’s get a little nerdy today with generative AI and the concept of the priming representation. I can immediately hear people asking, “Is that even English?” and yes, it is.

What Is a Priming Representation?

First, we should go over what a priming representation is. Inside today’s language and multimodal models are components called retrieval heads. When you prompt a model, the retrieval heads activate, triggering tokens throughout the model and returning the result.

In concrete terms, retrieval heads are the librarians, and the rest of the model is the library. You give the librarian a request, and the librarian goes into the library and gets the materials you requested.

So far, so good. Now, the challenge with prompts is that people very often are asking for things that are either too generic, or don’t trigger the retrieval heads well. If you go to the librarian and ask for a book – literally, “Hey, can I get a book?” – the librarian will come back with a book. If you specifically wanted a book on 17th century French literature, but you only asked for a book, you’d probably be irritated when the librarian handed you “The Complete Book of Baseball Statistics”.

Yet, that’s what an awful lot of people do. They give incredibly generic prompts, and then wonder why the machine’s responses are dull, boring, and generic. They gave the librarian almost no guidance, and the librarian came back with a best effort that didn’t match the instructions the person had in their head but was unwilling or unable to type out.

With that understanding of the basics of how these models work, let’s talk about a priming representation. This is the first part of the Trust Insights PARE Framework. When we use the word priming in the context of AI, what we’re talking about is giving the right instructions – clear, complete instructions – to a model to trigger the best activation sequence. It’s like giving the librarian a written-out list of exactly what kinds of books we want, in great detail and precision. With that list, the librarian can go and get us exactly what we want.

So what does one of these things look like? In today’s modern language models, it’s as simple as an outline – but there’s a catch. The catch is that the model needs to be the one to generate it for it to have maximum impact.

Here’s why: as with a real library, our conceptual AI librarian knows the layout of the library way better than we do, and understands what books are in the library and what aren’t. That means that if we have the language model build our outline, it’ll contain references to known books in the library, metaphorically speaking. The retrieval heads will have known activation tokens to go after.

Why does this matter? Because if we want specific AND accurate results with less risk of hallucination, then we’re better off having the librarian tell us what it knows about a topic than for us to tell the librarian what they should know. As with real librarians, AI tries to do its best to be helpful, harmless, and truthful, which means that if we give it specific instructions, there’s a chance it’ll make things up (hallucination) to fulfill its directives of being helpful.

Ok, so a priming representation is an outline that a model makes. Great. What do you do with that information? Priming representations are the instructions you give to a model for a very specific task, like writing song lyrics or creating analogies. If you want it to do a specific task in a very exacting way, you want to build a priming representation to guide it.

Priming Representation In Practice

Let’s take this out of the theoretical and put it into practice. Let’s use the example of writing song lyrics. Suppose you wanted a generative AI model to help you write some song lyrics that you’re going to turn into a song. How would you go about doing this?

Well, the absolute worst way to go about it would be to say, “Write me a song.” Will a language model do it? Yes. Will you get anything usable? Probably not. Let’s get a lot more specific. Suppose I wanted to write an absolutely ridiculous song, maybe about the curious phenomenon that McDonald’s french fries are amazing when they’re very hot, but they go from great to awful in an insanely short period of time. That’s a very specific topic.

First, we should approach our model and ask it what it knows about writing song lyrics, the first part of the PARE framework. We want to leverage the Trust Insights RACE framework as well for the initial prompt structure, so we might start with:

“You are a Grammy-award winning songwriter who specializes in writing lyrics to songs. Your lyrics have won multiple Grammys over the years, and your specialty is country music lyrics. What do you know about best practices for writing song lyrics?”

That gives the model a solid start. For this demo, I’ll use ChatGPT’s GPT-4o (omni) model, but you can use this process in any modern language model. It’ll spit out results that look similar to this:

ChatGPT Lyrics Basics

Next, let’s ask an augmentation question. Ideally, you should have some background in the prompt and topic area. In this case, I might ask, “What specific lyric writing techniques are most common or most unique in country music that aren’t in other genres of music?”

It will return some more details:

ChatGPT Country Music Lyrics Specifics

Next, we move into the refresh question. I’ve become a huge fan of contrastive prompting in this stage, asking what not to do. In this case, I might ask, “What are some common mistakes in lyric writing made by less experienced country music lyric writers?”

We’ll get some very useful specifics here:

ChatGPT Country Music Lyrics Mistakes

Finally, we finish off with an evaluation question, something that can expand or drill down into very niche specifics. In this case, I might ask, “Given everything we’ve talked about so far, what country music lyrics writing techniques or practices would a Grammy-award winning writer know that we haven’t discussed yet? What secrets or tricks of the trade would you add?”

It’ll return some useful extras:

ChatGPT Country Music Lyrics Tricks

We’ve now gone through four stages of building out this conversation, extracting a lot of knowledge from the model. You’ll note that I haven’t written huge prompts per se, but the conversation so far HAS been huge, with lots and lots of detail.
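The four-stage conversation above can be sketched as a short script. This is a minimal, hypothetical sketch – the `ask` function is a placeholder for whatever chat model you actually use, and the stage questions are paraphrased from the walkthrough:

```python
# A minimal sketch of the four PARE stages as a scripted conversation.
# The `ask` function is a placeholder: swap in a real chat-model call
# from your preferred API client to run it for real.

ROLE = (
    "You are a Grammy-award winning songwriter who specializes in "
    "writing lyrics to songs. Your specialty is country music lyrics."
)

PARE_STAGES = [
    ("prime", "What do you know about best practices for writing song lyrics?"),
    ("augment", "What lyric writing techniques are most common or most "
                "unique in country music that aren't in other genres?"),
    ("refresh", "What are some common mistakes made by less experienced "
                "country music lyric writers?"),
    ("evaluate", "Given everything so far, what techniques would a "
                 "Grammy-award winning writer know that we haven't discussed?"),
]

def ask(messages):
    """Placeholder for a chat-model call; returns a canned reply here."""
    return f"(model reply to: {messages[-1]['content'][:40]}...)"

def run_pare(role=ROLE, stages=PARE_STAGES):
    """Walk the model through each PARE stage, accumulating the transcript."""
    messages = [{"role": "system", "content": role}]
    for _stage_name, question in stages:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": ask(messages)})
    return messages

transcript = run_pare()
# One system message, plus a user/assistant pair per stage.
```

The point of the sketch is the shape of the conversation: each stage builds on the full transcript so far, which is why the later answers get more specific.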

This is the secret to our priming representation method: you don’t have to bring all the knowledge. You can leverage what’s in the model – and if you don’t like that, you can upload your own.

From here, we need to turn this into system instructions so that we can use this conversation over and over again. This is the true priming representation, the specific directions we want, and to get to detailed specifics, we’ll need to go in stages. In this case, I might ask, “Building on everything we’ve talked about – best practices, mistakes to avoid, tips and tricks, and overall great lyrics writing skills, create a high-level outline of the lyric writing best practices, incorporating everything we’ve discussed so far.”

We’ll get responses like this:

ChatGPT Country Music Outline

From here, we have the model write out each section. In this case, I might direct it: “Now write a detailed outline just of section I: Foundation Principles.”

We’ll get a response like this:

ChatGPT Section 1 Outline

Repeat this for as many sections of the outline as you have, copying and pasting them into a text document for storage.

When we’re done, we’ll have a master prompt – THE master prompt – on writing country music lyrics. The prompt is about 3,000 words long, and you can now use it over and over again. That’s the key, the value of a priming representation outline. Every time I want to accomplish this specific task, I can simply inject my outline into the session and get up and running very quickly while still generating very high quality results.
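In code, reusing that stored outline might look like the minimal sketch below. `PRIMING_OUTLINE` is a hypothetical stand-in for the saved ~3,000-word master prompt, and the message format follows the common chat-API convention:

```python
# A sketch of reusing a stored priming representation in a new session.
# PRIMING_OUTLINE stands in for the master prompt saved from the earlier
# conversation; here it's abbreviated to a single line.

PRIMING_OUTLINE = "Country lyric best practices: I. Foundation Principles ..."

def new_session(priming, task):
    """Open a fresh chat with the priming representation injected first."""
    system = (
        "You are a Grammy-award winning songwriter specializing in "
        "country music lyrics. Here are comprehensive best practices "
        "for writing song lyrics. Read through these and acknowledge "
        "them.\n\n" + priming
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = new_session(
    PRIMING_OUTLINE,
    "Write a country song about McDonald's fries going cold.",
)
```

From here you’d send `messages` to whatever chat model you use; the heavy lifting lives in the stored outline, not in the short per-session prompt.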

Priming Representation In Action

Let’s put it to use. We’ll start a new chat session like this:

“You are a Grammy-award winning songwriter who specializes in writing lyrics to songs. Your lyrics have won multiple Grammys over the years, and your specialty is country music lyrics. Today we will be composing some song lyrics for a song idea I have. First, I’m going to give you some comprehensive best practices for writing song lyrics. Read through these and acknowledge them.” (paste the mega-prompt after this!)

Then we give the language model our song idea, written out in standard prose:

“I want you to write a country song about a very specific annoyance. McDonald’s makes French fries, but they have this one really weird, annoying quirk. They’re amazingly delicious when they’re piping hot right out of the fryer. Two minutes later, they’re mediocre as they cool down. And five minutes later? You may as well be eating moist cardboard. What the hell is up with that? Why can’t their fries be delicious for longer? It’s so frustrating, because instead of the dining experience becoming more enjoyable over time, it becomes less enjoyable.”

What do we get?

Title: “Fries That Fade Away”

Verse 1:
Met a love so deep at the Golden Arches’ door,
Golden, crispy magic, couldn’t ask for more.
But this romance, it ain’t built to last,
Those fries go cold, and the magic’s in the past.

Chorus:
Oh, McDonald’s fries, why can’t you stay?
You start out hot, then you fade away.
Two minutes in, you’re just okay,
Five minutes later, you’re cardboard on my tray.

Verse 2:
First bite’s heaven, can’t believe my luck,
But the clock is tickin’, man, what the heck?
I dream of fries that hold their charm,
But I’m left with soggy sadness in my arms.

Chorus:
Oh, McDonald’s fries, why can’t you stay?
You start out hot, then you fade away.
Two minutes in, you’re just okay,
Five minutes later, you’re cardboard on my tray.

Bridge:
Maybe it’s the salt, or the grease that cools too fast,
All I know is that this love, it just won’t last.
Can’t you find a way to keep the love alive,
Give me fries that don’t just survive?

Chorus:
Oh, McDonald’s fries, why can’t you stay?
You start out hot, then you fade away.
Two minutes in, you’re just okay,
Five minutes later, you’re cardboard on my tray.

Outro:
So I’ll keep on dreamin’ of a better fry,
One that stays golden, never says goodbye.
Till then, I’ll savor those first few bites,
And curse the day they lose their light.

If you’d like to hear the whole song, just watch the YouTube video until the end.

Here’s the key point reiterated. That outline, that priming representation, allows me to generate high quality results with a very short prompt. One paragraph ordinarily would not be enough detail to create something really effective. But with a priming representation of very specific, clear details about all the best practices we collected, one paragraph is enough.

Critically, I can then take that priming representation and give it to a coworker, colleague, or friend, and they can be up and running with even less preparatory time. This allows me to scale our AI practices much faster and more robustly – and skipping this step is the reason why individuals, teams, and companies aren’t getting as much value out of AI as they could be.

If you want to really amplify the impact of AI, use the PARE and RACE frameworks along with the detailed process we walked through here to build priming representations for your most valuable tasks. Constructing the representation takes a lot of time and effort up front, but it saves you time and increases quality for every invocation after.

As always, shameless plug, if you want help with scaling your generative AI systems and processes like I did in this example, this is literally what my company does, so if getting started with this use of generative AI is of interest, hit me up.

How Was This Issue?

Rate this week’s newsletter issue with a single click. Your feedback over time helps me figure out what content to create for you.

Here’s The Unsubscribe

It took me a while to find a convenient way to link it up, but here’s how to get to the unsubscribe.

Click me to unsubscribe!

If you don’t see anything, here’s the text link to copy and paste:

https://almosttimely.substack.com/action/disable_email

Share With a Friend or Colleague

If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague:

https://www.christopherspenn.com/newsletter

For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.

ICYMI: In Case You Missed it

Besides the newly updated Generative AI for Marketers course I’m relentlessly flogging, this week we talked through basic and advanced tips for building a custom GPT and similar types of software.

Skill Up With Classes

These are just a few of the classes I have available over at the Trust Insights website that you can take.

Premium

Free

Advertisement: Generative AI Workshops & Courses

Imagine a world where your marketing strategies are supercharged by the most cutting-edge technology available – Generative AI. Generative AI has the potential to save you incredible amounts of time and money, and you have the opportunity to be at the forefront. Get up to speed on using generative AI in your business in a thoughtful way with Trust Insights’ new offering, Generative AI for Marketers, which comes in two flavors, workshops and a course.

Workshops: Offer the Generative AI for Marketers half and full day workshops at your company. These hands-on sessions are packed with exercises, resources and practical tips that you can implement immediately.

👉 Click/tap here to book a workshop

Course: We’ve turned our most popular full-day workshop into a self-paced course. The Generative AI for Marketers online course is now available and just updated as of April 12! Use discount code ALMOSTTIMELY for $50 off the course tuition.

👉 Click/tap here to pre-register for the course

If you work at a company or organization that wants to do bulk licensing, let me know!

Get Back to Work

Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you’re looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.

Advertisement: Free Generative AI Cheat Sheets

Grab the Trust Insights cheat sheet bundle with the RACE Prompt Engineering framework, the PARE prompt refinement framework, and the TRIPS AI task identification framework AND worksheet, all in one convenient bundle, the generative AI power pack!

Download the bundle now for free!

How to Stay in Touch

Let’s make sure we’re connected in the places it suits you best. Here’s where you can find different content:

Listen to my theme song as a new single:

Advertisement: Ukraine 🇺🇦 Humanitarian Fund

The war to free Ukraine continues. If you’d like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia’s illegal invasion needs your ongoing support.

👉 Donate today to the Ukraine Humanitarian Relief Fund »

Events I’ll Be At

Here are the public events where I’m speaking and attending. Say hi if you’re at an event also:

  • MAICON, Cleveland, September 2024
  • MarketingProfs B2B Forum, Boston, November 2024

There are also private events that aren’t open to the public.

If you’re an event organizer, let me help your event shine. Visit my speaking page for more details.

Can’t be at an event? Stop by my private Slack group instead, Analytics for Marketers.

Required Disclosures

Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.

Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.

My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.

Thank You

Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.

See you next week,

Christopher S. Penn



