Apple has embarked on a billion-dollar push to develop generative AI, as the iPhone maker attempts to bring more AI to the next versions of its operating systems.
Following a wave of technological progress driven by ChatGPT and its rivals, Apple needs to be seen catching up with the rest of the market. Apple CEO Tim Cook confirmed in September that the company is "of course" working on generative AI, but did not say what was on the way.
According to Sunday's "Power On" newsletter for Bloomberg, Apple executives were "caught off guard" by the influx of AI, and the company has been working to make up for lost time since late 2022.
"There's a lot of anxiety about this and it's considered a pretty big miss internally," a source told the report.
Along with creating an internal chatbot called "Apple GPT," the company is keen to work out how to add the technology to its products.
The AI push is headed up by SVPs John Giannandrea and Craig Federighi, in charge of AI and software respectively. Services chief Eddy Cue is also apparently on board with the project, which is set to cost Apple about $1 billion per year.
Giannandrea is managing the development of the underlying technology for a new AI system, as well as a revamp of Siri to use it. It is thought that a new and improved Siri built on the technology could arrive as soon as 2024.
The software engineering group controlled by Federighi will be working to add AI to the next edition of iOS. This will apparently involve features that use Apple's large language model, and could improve how Siri fields questions and how Messages auto-completes sentences.
There is also exploration in adding generative AI to development tools such as Xcode. Doing so could help developers create apps for Apple's platforms more quickly.
The services group will also work to add AI wherever it can across its various apps, such as auto-generated Apple Music playlists. This could also include helping users write in Pages or create slides in Keynote.
Apple is also trialling using generative AI for internal customer service apps under AppleCare.
As development continues, there is apparently some internal debate over whether to keep pushing for on-device processing or to rely on cloud-based LLMs. The former is more privacy-focused, but the latter could allow for more advanced features.