Mind Readings: How Apple's On-Device AI Strategy Should Inform Our AI Strategy

In today’s episode, you’ll delve into Apple’s strategic emphasis on on-device AI and what it signifies for the future of AI applications. You’ll learn how this approach could dramatically shift the cost dynamics of generative AI, potentially allowing for powerful AI capabilities without the traditional hefty price tag. Discover how this trend could give rise to a new generation of AI companies and explore the implications for your own ventures. This episode might just spark the inspiration you need to become the next AI unicorn.

https://youtu.be/R-J4LM29aiQ

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Lots of folks have had reactions recently to everything Apple revealed at its Worldwide Developer Conference, or WWDC. Lots of folks, including me, enjoyed the big show with the keynote: the flashy presentation, the high-level explanation of everything that's going on and what it means for us. But just as the devil is in the details, the good stuff is in the details, too, in the technical sessions and in the Platform State of the Union, which was squarely targeted at the developer community.

The keynote was sort of for everybody. The Platform State of the Union was terrific, and it was 100% for developers. There were sessions on training and building models to use with Core ML on-device that were highly technical but absolutely brilliant. If you sat through those sessions, you now have a much better idea of how a lot of this stuff is going to be brought to life.

One lesson that I think is being overshadowed by all the hype around the big announcements is that Apple is leaning really, really hard into on-device AI. They've been focused on on-device processing for a while; if you look at the history of Apple hardware, this has been in the works for a long time.

The first Apple hardware with a dedicated AI processor was the iPhone 8; that was when the first Neural Engine was built into Apple's chips. More recently, from the A15 chip that shipped with the iPhone 13 through the M series chips that shipped with the new Macs, these are AI machines. They have huge processors dedicated just to AI.

Obviously, these devices have been around for a while: the first M series computers came out in 2020, and the iPhone 13 came out in 2021. Apple has been laying the groundwork for this for a very long time. When you look at the tooling they're sharing so people can use technologies like MLX and Core ML to compress models and run them on-device, they are opening up opportunities for technically savvy companies to do the same thing. If you follow their architecture and their directions and use their tooling, you can take your own AI models and run them on Apple devices.
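To make that concrete, here's a minimal sketch of what "bring your own model" looks like with Apple's coremltools library: convert a trained PyTorch model to Core ML, then quantize its weights so it's small enough to ship on-device. The model, input shape, and names here are placeholders for illustration, not a production recipe.

```python
import torch
import coremltools as ct

# Hypothetical stand-in for your trained model; swap in the real one.
my_model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
my_model.eval()

# Core ML conversion works from a traced (or scripted) PyTorch model.
example_input = torch.rand(1, 128)
traced = torch.jit.trace(my_model, example_input)

# Convert to an ML Program package that Core ML can schedule on the
# Neural Engine, GPU, or CPU as it sees fit.
mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="features", shape=example_input.shape)],
)

# Shrink the weights with 8-bit linear quantization (coremltools 7+ API).
op_config = ct.optimize.coreml.OpLinearQuantizerConfig(mode="linear_symmetric")
config = ct.optimize.coreml.OptimizationConfig(global_config=op_config)
compressed = ct.optimize.coreml.linear_quantize_weights(mlmodel, config)

compressed.save("ShirtModel.mlpackage")
```

From there, the saved package drops into an Xcode project like any other Core ML model.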

That is a really big deal. "Bring your own model" has been the dream of the technical generative AI community for a while, because it changes the cost equation for generative AI. The traditional generative AI startup, or the company looking to build generative AI into its products, needs big server rooms, big GPU clusters, big energy bills, and big sustainability problems. Or you end up working with a company like Google or OpenAI, and you pay (boy, do you pay) tens of thousands, hundreds of thousands, even millions of dollars a month to use somebody else's AI on somebody else's hardware.

If you are just dipping your toe in the water, trying to prove out a feature or something like that, and suddenly you get a massive bill, you're like, "I don't know that there is an ROI to AI." Apple leaning hard into on-device AI models means that the phone itself, the physical phone, is the server room; your customers are walking around with the server room in their pockets.

Now, in terms of performance, you don't get as much performance out of a tiny model on a phone as you do from, say, a room full of H200 GPUs, for sure. But you're also not paying a gazillion dollars; one H200 is something like $50,000. That is a lot of budget just to run models on one piece of hardware, and you need a lot more than one. The big foundation models like GPT-4o or Gemini 1.5 have more capabilities, yes, but at a much greater cost. Take a model like Meta's Llama 3: it proved you can squeeze a lot of capability and a lot of quality into a relatively small model by training on a much larger dataset for much longer.
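For a sense of how approachable the small-model route already is, here's a short sketch using Apple's MLX framework through the mlx-lm Python package to run a quantized Llama 3 model locally on an Apple silicon Mac. The model repo name is an example from the mlx-community hub and is an assumption; swap in whatever quantized model you're licensed to use.

```python
from mlx_lm import load, generate

# Example repo name (an assumption); any MLX-format quantized model works.
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

# Generate entirely on local hardware, with no API bill attached.
prompt = "In one sentence, why do small on-device models cut serving costs?"
response = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(response)
```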

So, think about this: say you're a fashion company, and you want a generative AI model, a chatbot, that can talk about shirts. It knows everything there is to know about shirts and nothing about anything else. Ask it about the weather or who won the Super Bowl, and it will have no clue what you're talking about.

But boy, does it know shirts. You have trained this model, you have tuned this model, and it is the shirt model. You can now take that model, use Apple's technology to compress it down until it's super tiny, and run it on their hardware. And if you build that into your shopping app, guess what? You provide natural language conversation about shirts: "What kind of shirt should I buy? What shirt goes with this? What shirt works for this kind of event?" The model will know all of that and answer really, really smartly. But you don't pay server room costs, because the model is running on-device.
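As a sketch of that last step: once you've saved a compressed Core ML package like the hypothetical ShirtModel.mlpackage above, you can sanity-check it from Python on a Mac before wiring it into the app itself (where you'd call it through Core ML in Swift). The input name and shape are assumptions carried over from the earlier sketch.

```python
import numpy as np
import coremltools as ct

# Load the compressed package produced earlier (hypothetical name).
model = ct.models.MLModel("ShirtModel.mlpackage")

# Placeholder input; the name "features" and shape (1, 128) must match
# whatever the model was converted with.
features = np.random.rand(1, 128).astype(np.float32)
prediction = model.predict({"features": features})
print(prediction)
```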

See how big a deal that is? You can have generative AI capabilities without the generative AI cost if you focus on on-device, and it's still the high-quality experience you want people to have: when they're using the shirt app, they get great answers about shirts. But you're not paying the "OpenAI tax," as it's called.

The next unicorn AI company might have a server bill of $0 because they figured out how to make a model efficient and on-device. And perhaps that next unicorn is you.

So, take this trend of on-device AI, and it's not just Apple; Google has talked about the same thing with Gemini models in Android, and the flagship phones from Google, Samsung, and so on all have substantial AI processing capabilities as well. Take the concept overall: if you had a mobile app, or even a mobile website, and there was an on-device model tuned for your specific use case, how easily could you add generative AI capabilities without generative AI's massive ongoing costs by creating and tuning a model and deploying it on-device?

That’s going to do it for today’s episode. Thanks for tuning in. Talk to you next time! If you enjoyed this video, please hit the like button, subscribe to my channel if you haven’t already. And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.


Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


