
Apple Intelligence & Private Cloud Compute are Apple's answer to generative AI

Apple Intelligence is a cross-platform AI initiative


Following months of rumors, Apple Intelligence has finally arrived, promising users more than just generative images.

Towards the end of its WWDC 2024 keynote, after the main operating system announcements, Apple moved on to the main event: its much-rumored push into machine learning.

Named Apple Intelligence, the new addition is all about using large language models (LLMs) to handle tasks involving text, images, and in-app actions.

For a start, the system can summarize notifications, surfacing the most important items based on context. Key details are pulled out for each notification, and the new Reduce Interruptions Focus shows only those that need immediate attention.

Apple Intelligence text options in Mail on iPadOS 18

System-wide Writing Tools can write, proofread, and summarize text for users. While this may sound like a feature for messages and other short text, it also works on longer pieces, such as blog posts.

The Writing Tools' Rewrite feature provides multiple versions of the user's existing text to choose from. These can vary in tone, depending on the audience the user is writing for.

Writing Tools are available automatically across built-in and third-party apps.
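For developers, Apple showed that standard text views pick this up with little work. A minimal sketch, assuming a UIKit app built against the iOS 18 SDK:

```swift
import UIKit

// A standard UITextView gets Writing Tools automatically on iOS 18.
// The writingToolsBehavior property controls how much of the feature
// is offered: .complete allows full inline rewrites, while .limited
// keeps results in an overlay panel instead of editing text in place.
let textView = UITextView()
textView.writingToolsBehavior = .complete
```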

Images

Apple Intelligence can also create images, again across many built-in apps. This includes personalized images for conversations with specific contacts in Messages, for example.

These images are created in three styles: Sketch, Animation, and Realism.

In apps like Messages, the Image Playground lets users create images quickly by selecting concepts like themes, costumes, and places, or by typing a description. Images from a user's photo library can also be added to the composition.

This can be performed within Messages itself, or through the dedicated Image Playground app.
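Apps can also present the same generation experience themselves. A sketch, assuming a UIKit app and Apple's ImagePlayground framework; the exact type and method names here should be treated as illustrative rather than definitive:

```swift
import ImagePlayground
import UIKit

// Sketch: presenting the system image-generation sheet from an app.
final class ComposerViewController: UIViewController, ImagePlaygroundViewController.Delegate {
    func showPlayground() {
        let playground = ImagePlaygroundViewController()
        // Seed the sheet with a starting concept, as a user would.
        playground.concepts = [.text("astronaut in a sketch style")]
        playground.delegate = self
        present(playground, animated: true)
    }

    func imagePlaygroundViewController(_ controller: ImagePlaygroundViewController,
                                       didCreateImageAt imageURL: URL) {
        // The generated image is handed back as a file URL to load.
        dismiss(animated: true)
    }
}
```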

Image Playground in iPadOS

In Notes, Image Playground can be accessed through the new Image Wand tool, which uses elements of the current note to work out the most appropriate image to create and add.

The image generation even extends to emoji, with Genmoji allowing users to create their own custom icons. From a description, a Genmoji can be generated and inserted inline into messages.
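On the developer side, Apple's SDK treats Genmoji as inline glyphs in attributed text. A minimal sketch, assuming the iOS 18 SDK; the property name is from memory of the announced API and should be treated as illustrative:

```swift
import UIKit

// Sketch: opting a text view into accepting Genmoji. Custom emoji
// arrive as NSAdaptiveImageGlyph values embedded in the attributed
// string, letting them sit inline alongside ordinary characters.
let messageField = UITextView()
messageField.supportsAdaptiveImageGlyphs = true
```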

A more conventional addition is in Photos, where Apple Intelligence enables natural-language searches for specific photos or video clips. A new Clean Up tool removes distracting objects from a photo's background.

Actions and Context

Apple Intelligence can also perform actions within apps on behalf of the user. For example, it could open Photos and show images of specific groups of people from a request.

Apple Intelligence will support a wide array of apps

Apple also says Apple Intelligence grounds each request in the context of the user's data. For example, it can work out who the user's family members are, or whether meetings overlap or clash.

One example of a complex query that can trigger actions in an app is "Play that podcast that Jamie recommended." Apple Intelligence will determine the episode by checking all of the user's conversations for the reference, and then open the Podcasts app to that specific episode.
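Apple has said such in-app actions build on the existing App Intents framework. A hypothetical sketch of how a podcast app might expose playback to the system; only the framework itself is confirmed here, and the intent and its parameter are illustrative:

```swift
import AppIntents

// Hypothetical intent: lets the system play a named podcast episode.
struct PlayPodcastEpisodeIntent: AppIntent {
    static var title: LocalizedStringResource = "Play Podcast Episode"

    @Parameter(title: "Episode Name")
    var episodeName: String

    func perform() async throws -> some IntentResult {
        // A real app would resolve episodeName against its library
        // and begin playback before returning.
        return .result()
    }
}
```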

In Mail, Priority messages surface the most urgent emails at the top of the list, as selected by Apple Intelligence. This includes a summary of each email, giving users a better idea of its contents and why it was selected.

Apple Intelligence in Notes

Notes users can record, transcribe, and summarize audio thanks to Apple Intelligence. If a call is recorded, a summary transcript can be created when it ends, with all participants automatically informed when recording begins.
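Apple hasn't said which API backs the feature in Notes, but on-device transcription of a recording is already possible with the long-standing Speech framework. An illustrative sketch, assuming the user has granted speech recognition authorization:

```swift
import Speech

// Illustrative only: transcribing an audio file locally. Authorization
// via SFSpeechRecognizer.requestAuthorization is required beforehand.
func transcribe(fileAt url: URL) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else { return }
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true  // keep audio off the network
    recognizer.recognitionTask(with: request) { result, _ in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```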

Naturally, Siri has been considerably upgraded with Apple Intelligence, including being able to understand users in specific contexts.

As part of this, Siri can also refer queries to ChatGPT.

Private Cloud Compute

Much of this relies on on-device processing for security and privacy. The A17 Pro chip in the iPhone 15 Pro line is said to be powerful enough to handle that level of processing.

Many of the models run on-device, but some requests demand more than local hardware can provide. Sending them to the cloud would normally raise security concerns, but Apple's approach is different.

Private Cloud Compute allows Apple Intelligence to work in the cloud while preserving security and privacy. Models run on servers built around Apple Silicon, taking advantage of the security properties of Swift.

On-device processes work out whether a request can be handled locally or must be sent to the cloud servers.

Apple insists the servers are secure, don't store user data, and use cryptographic measures to maintain that security. This includes devices refusing to talk to a server unless its software has been publicly logged for inspection by independent experts.
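Apple hasn't published client code for this, but the logged-server guarantee amounts to a simple rule, sketched conceptually below: the device only sends a request to a server that proves it runs software whose measurement appears in a public transparency log.

```swift
// Conceptual sketch only; all names here are hypothetical.
struct ServerAttestation {
    let softwareMeasurement: String  // hash of the server's OS image
}

func shouldSendRequest(to server: ServerAttestation,
                       publicLog: Set<String>) -> Bool {
    // Reject any server whose software hasn't been publicly logged
    // for inspection by independent researchers.
    publicLog.contains(server.softwareMeasurement)
}
```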

Not for all users

While Apple Intelligence will be beneficial to many users, its hardware requirements will freeze out many others.

On iPhone, the requirement for an A17 Pro chip means only iPhone 15 Pro and Pro Max owners can try the beta. iPads with M-series chips and Apple Silicon Macs can use it as well.

It will be available in beta "this fall," in U.S. English only.