Apple has contributed 20 new Core ML (Core Machine Learning) models to the open source AI repository Hugging Face, adding to its existing public models and research papers.
In April 2024, Apple publicly released a series of four Open-source Efficient Language Models (OpenELM) via Hugging Face, a collaborative platform used for hosting AI models, training them, and, in particular, letting people work together to improve them.
As spotted by VentureBeat, Apple has added 20 new Core ML models, plus a series of datasets, to Hugging Face, which now hosts an Apple-specific section.
"This is a major update by uploading many models to their Hugging Face repo with their Core ML framework," said Hugging Face co-founder and CEO, Clement Delangue. "[Apple's] update includes exciting new models focused on text and images, such as image classification or depth segmentation."
Apple's new additions to the open-source platform include models for image classification and semantic segmentation. The former identifies what an image contains as a whole, while the latter labels the individual regions or objects within it.
"Imagine an app that can effortlessly remove unwanted backgrounds from photos," said Delangue, "or instantly identify objects in front of you and provide their names in a foreign language."
While this is Apple's first public release since announcing Apple Intelligence at WWDC, it's far from the company's first contribution to AI research. As well as the four OpenELMs added to Hugging Face in April 2024, Apple's researchers released "Ferret," a large language model (LLM) for image queries, on GitHub in October 2023.
"Ferret" has since been updated. And Apple has also published research papers about generative AI animation tools and the creation of AI avatars.
The latter may subsequently have been used in Apple Intelligence's forthcoming Genmoji feature in iOS 18.
The original release of research papers and the first OpenELMs was widely seen as Apple's attempt to counter the common claim that it lagged the rest of the industry on AI. It now seems that while Apple has used AI successfully in the form of machine learning for years, it reportedly wasn't until Craig Federighi tried Microsoft Copilot that the company began taking generative AI more seriously.
3 Comments
Super happy about this trend of Apple embracing the OSS AI movement. MLX has been a game changer as well, and the recent addition of distributed sharing of models across multiple Apple devices is fantastic. X has a lot of chatter about multiple Mac Studios linked via Thunderbolt cables achieving runs of massive models at good performance levels.
This could spark a lot of upsells among companies that don't want to go full Nvidia on-prem.
I think there is a stealth way for Apple to enter that market through PCC (Private Cloud Compute) once it reaches some scale.
I don't think Apple will enter the discrete server business, but instead sell IaaS/PaaS services via their cloud.
The first segment could be app/AI devs targeting iOS who host their backends on AWS but would prefer an Xcode-native way to manage everything end to end.