
Craig Federighi & John Giannandrea talk Apple Intelligence at WWDC

iJustine, John Giannandrea, and Craig Federighi (from left to right)

After the WWDC keynote, Apple's Craig Federighi and John Giannandrea sat down to talk in more depth about the company's AI efforts with Apple Intelligence, including how it trained the underlying models.

During the WWDC 2024 keynote, nearly 40 minutes were devoted to Apple Intelligence, the iPhone maker's big push into AI, or what it is calling "personal intelligence."

It was no surprise, then, that Apple's execs spent more time talking on the record about why their approach is different and how the company plans to balance useful features with privacy. AppleInsider was in attendance at the discussion, which was hosted by iJustine.

Privacy at its core

The chat took place inside the Steve Jobs Theater, with iJustine lobbing questions at Craig Federighi and John Giannandrea about the announcements.

Even though Apple is making a big deal about Apple Intelligence, it's quick to point out that it hasn't been stagnant. Your iPhone already has over 200 machine learning models that do everything from detecting a car crash to suggesting curated memories.

"We want AI not to replace our users, but to empower them," Federighi said. He's pushing the narrative that this isn't the same artificial intelligence features we've seen before.

Traditional chatbots know plenty about the world at large, but they don't know enough about you. Closing that gap is what Apple is calling "personal intelligence."

For example, if you asked a simple question like how long it would take to get to your son's baseball game, a chatbot wouldn't know. It doesn't know how you travel (car, public transport, or walking), nor does it know about the baseball game or who your son is.

Giannandrea believes it's Apple's focus on privacy that enables it to use so much of your information, the kind of data other chatbots wouldn't even touch because of the privacy implications.

How Apple is using AI

Chatbots, as Federighi says, are the most open-ended way to explore AI, and that open-endedness is also their weakness. Apple's approach was to focus on a series of experiences it knew would be reliable and work well.

This results in a more curated set of use cases for Apple users, even if some feel that it's limited.

Even so, Apple Intelligence uses quite a bit of resources. That's why it is limited to Apple's M-series chips and the A17 Pro found in the latest iPhone 15 Pro and iPhone 15 Pro Max. The models require too much compute power and draw too much energy to be feasible on anything else.

Federighi points out that the A17 Pro's Neural Engine is twice as powerful as the previous year's version and has other benefits that suit Apple Intelligence.

Apple's training of AI models was, in fact, ethical

Alongside which devices are supported, another frequently asked question is what data Apple used to train its AI models.

Giannandrea says that Apple uses a variety of sources. The company includes public web data, which publishers are free to opt out of, alongside licensed data such as stock photography, textbooks, and more.

To be clear, Apple paid for the licensed data, as we reported in April. Other AI firms have argued that they shouldn't have to, because it's too hard to properly source data.

The fine-tuning and training data set was created in-house by Apple itself.

Private Cloud Compute

"We're really excited about this because it solves a big problem around many capabilities of AI," said Federighi as the topic turned to its new Private Cloud Computing infrastructure.

The idea is that much of what you do will be processed locally on your phone, but some requests are too broad and benefit from larger server-side models. So that's what Apple has built, with a focus on privacy.
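Apple hasn't published a developer-facing API for this routing decision, so the following Swift sketch is purely conceptual: every type, property, and threshold in it is hypothetical, and it only illustrates the local-first idea of falling back to Private Cloud Compute when a request is too large for the on-device model.

import Foundation

// Purely illustrative; none of these types are real Apple APIs.
enum IntelligenceBackend {
    case onDevice             // handled entirely by the local model
    case privateCloudCompute  // sent, anonymized, to Apple Silicon servers
}

struct IntelligenceRequest {
    let prompt: String
    let estimatedComplexity: Int  // hypothetical measure of how "broad" the task is
}

// Prefer the on-device model, and only fall back to Private Cloud Compute
// when the request exceeds what the local hardware can handle.
func route(_ request: IntelligenceRequest, onDeviceLimit: Int = 10) -> IntelligenceBackend {
    request.estimatedComplexity <= onDeviceLimit ? .onDevice : .privateCloudCompute
}

In practice the system makes this call automatically; users never choose which backend handles a given request.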

The pair of execs were keen to highlight the security issues with other cloud providers. Even if a company says it won't do anything with your data, things could change at any time.

Only those running the server truly know how safe your data is and what happens to it. It could be saved in a log file, stored in a database, or tied to your profile.

With Private Cloud Compute, only a tiny amount of data is uploaded to the servers, and what is sent is anonymized. Those servers are incapable of logging or storing anything.

"It's essential to know that no one, not even Apple, has access to the data used to process your request," Federighi mentions.

Not only that, but your devices won't even communicate with servers whose software hasn't been publicly published. That allows independent third parties to verify Apple's privacy claims, setting a high bar.
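Apple's exact protocol wasn't detailed in the session, but the behavior described resembles software attestation against a public log of known-good server builds. A rough Swift sketch of that idea, with every name hypothetical:

import Foundation
import CryptoKit

// Illustrative only; this is not Apple's real Private Cloud Compute protocol.
struct ServerAttestation {
    let softwareMeasurement: SHA256.Digest  // hash of the software image the server claims to run
}

struct PublicBuildLog {
    let publishedMeasurements: Set<Data>    // hashes of publicly published server builds

    func contains(_ digest: SHA256.Digest) -> Bool {
        publishedMeasurements.contains(Data(digest))
    }
}

// The device refuses to send a request unless the server proves it is running
// software that has been published for outside inspection.
func shouldSendRequest(to attestation: ServerAttestation, log: PublicBuildLog) -> Bool {
    log.contains(attestation.softwareMeasurement)
}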

These servers, by the way, run on Apple Silicon. Not only is it incredibly power efficient, which is crucial for Apple's environmental efforts, but it's also secure from the start, with features like the Secure Enclave, Secure Boot, and more.

And yes, they all run on 100 percent renewable energy.

Siri & ChatGPT

AppleInsider broke the news before WWDC about many of the new features we'd see in Siri. Even so, this is just the beginning of Apple's efforts.

"This is a huge journey for us and our developers," adds Giannandrea as they talked about how developers can leverage these features by adding app intents.

App intents are individual in-app actions, like applying a filter to a photo, sending an email, or buying a game in the PlayStation app. They require developer work, but can unlock a lot of new AI functionality.
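For developers, these actions are exposed through the App Intents framework. As a minimal sketch, the photo-filter example above might look something like the following in Swift; FilterEngine here is a hypothetical stand-in for an app's own image-processing code.

import AppIntents

// Hypothetical placeholder for the app's own filtering logic.
enum FilterEngine {
    static func apply(_ name: String) async throws {
        // The app's real image-processing work would go here.
    }
}

// Declaring the action as an App Intent lets Siri and Apple Intelligence invoke it.
struct ApplyFilterIntent: AppIntent {
    static var title: LocalizedStringResource = "Apply Filter"

    @Parameter(title: "Filter Name")
    var filterName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        try await FilterEngine.apply(filterName)
        return .result(dialog: "Applied the \(filterName) filter.")
    }
}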

Siri is also able to pass certain queries over to other LLMs, such as ChatGPT. At launch, it will offer integration with ChatGPT powered by GPT-4o, but Apple is open to working with others, including Google's Gemini.

Federighi sees this as a way to bring knowledge that Apple isn't an expert in to Siri. For example, we could see models specializing in medical data, legal data, coding, or creative writing.

Users will be able to add the models of their choice that make sense for what they need to do. When this happens, though, it's made clear that you're passing that data off to the other provider, subject to whatever data policies it has.

Much more to come from Apple Intelligence

Apple Intelligence is just the beginning. Technically, it's not even here yet, as it's absent from the first betas of Apple's new operating systems.

It will arrive in a later beta and launch generally as a beta feature this fall.

"It's very early innings here, when we look at these major trends. The introducing of the internet, of mobility. We're now with iPhone... 18 is the number on the OS," mused Federighi. "It's been incredible from what I think was an amazing start for iPhone to see where it has come over the years and this is the beginning of a long and exciting journey.

"What's exciting to me is what this means for developers as platform to bring together users and developers as the glue, I think will create incredible value and will be an exciting number of years ahead," Federighi concluded.



4 Comments

blastdoor 15 Years · 3594 comments

I sure would love to know more about the Apple silicon used in those servers 

atonaldenim 4 Years · 68 comments

What was the name of this session, is a recording of it available anywhere? In the Developer app or elsewhere?

sevenfeet 16 Years · 471 comments

blastdoor said:
I sure would love to know more about the Apple silicon used in those servers 

This was rumored before the event and verified by Apple during the show. The rumor was that Apple was building custom servers based on the M2 chip. This makes sense for a few reasons. They have a lot of experience with making this chip now, the yields are probably pretty good and even though they aren't 3 nm, they are still far more power efficient than anything else out there per watt, especially Nvidia. Nvidia right now has the class leading AI chip that everyone wants, but Apple doesn't have to use it for their Private Compute Cloud. And the programming behind the scenes for Apple engineers is identical to what they are doing on M2 Macs. And since the chip is in a server environment, it can run at full speed without having to worry about power draw.

Of course it's interesting that Apple is building servers at all, but they are well suited to do so. There is likely a lightweight server version of MacOS that they are running (Darwin + necessary libraries). The real trick is how you handle things like high availability, high speed networking and keeping the pipelines fed. Since Apple has experience handling a lot of Siri requests a day, I'm sure they have some idea of how to build what they need.

Finally, I'm sure that Apple could build a custom AI server chip of their own that biases itself toward a massive neural engine. Whether that happens or if Apple just chooses to build custom boards with M2s or later chips on them will remain to be seen.

JinTech 9 Years · 1061 comments

atonaldenim said:
What was the name of this session, is a recording of it available anywhere? In the Developer app or elsewhere?

Great questions.

Edit: iJustine just uploaded a video of her interview with Tim Cook and mentioned that the full interview referenced above will come soon.

https://youtu.be/W5X0x9zq5U0?si=Swavch_5CXIZo_G7