
Google ML Kit aims to help developers add machine learning to their iOS apps


Google has launched a new machine learning SDK called ML Kit, which gives developers a way to add machine learning-based features to their Android and iOS apps. The new framework arrives almost a year after Apple introduced the similar Core ML platform.

Introduced at Google I/O on Tuesday, ML Kit consists of a number of APIs developers can incorporate into their apps with little prior knowledge of machine learning. The APIs are all provided by Google and backed by extensively trained models, saving developers from building their own models and spending resources training them correctly.

The existing models offered by Google in ML Kit enable text recognition, face detection, image labeling, landmark recognition, and barcode scanning, all of which rely on imaging data from the device's camera. Future additions include an API for adding smart replies to messages, and a high-density face contour feature for the face detection API that will be useful for adding imaging effects to photographs.
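For iOS developers, these APIs are surfaced through the Firebase SDK. The following is a hedged sketch of on-device text recognition; the `Vision`, `VisionImage`, and `textDetector` names follow Google's early-preview documentation and should be treated as assumptions that may change as the SDK matures.

```swift
// Sketch of ML Kit's on-device text recognition, assuming the FirebaseMLVision
// module from the early preview; class and method names follow Google's launch
// documentation and may change in later releases.
import UIKit
import Firebase

func recognizeText(in photo: UIImage) {
    let vision = Vision.vision()                    // entry point to ML Kit's vision APIs
    let textDetector = vision.textDetector()        // on-device model, works offline
    let visionImage = VisionImage(image: photo)     // wraps the UIImage for ML Kit

    textDetector.detect(in: visionImage) { features, error in
        guard error == nil, let features = features else {
            print("Text detection failed: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        for block in features {
            // Each detected block reports the recognized string and its bounding box.
            print(block.text, block.frame)
        }
    }
}
```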

The ML Kit APIs are offered in two versions with different tradeoffs. The cloud-based version requires an Internet connection but offers high accuracy, while the on-device version is less accurate and depends on the device's processing power, but can be used offline.

For example, while the offline version is capable of identifying a dog within a photograph, it is unlikely to ascertain more specific details about the animal. With the online version, the API could also suggest what breed of dog is pictured.
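In practice, that tradeoff comes down to which detector an app instantiates. Below is a hedged sketch assuming the early-preview Firebase ML Kit SDK; `labelDetector()` and `cloudLabelDetector()` are the names from Google's launch documentation and may differ in shipping releases.

```swift
// Sketch of choosing between ML Kit's on-device and cloud-based image labeling,
// assuming the early-preview Firebase ML Kit SDK; the detector names and result
// handling follow Google's launch documentation and may change.
import UIKit
import Firebase

func labelPhoto(_ photo: UIImage, online: Bool) {
    let vision = Vision.vision()
    let image = VisionImage(image: photo)

    if online {
        // Cloud model: higher accuracy (e.g. the breed of the dog), but requires an
        // Internet connection and, beyond the free tier, a paid Firebase plan.
        vision.cloudLabelDetector().detect(in: image) { labels, error in
            if let error = error { print("Cloud labeling failed:", error); return }
            labels?.forEach { print($0) }
        }
    } else {
        // On-device model: coarser results (likely just "dog"), but free and offline.
        vision.labelDetector().detect(in: image) { labels, error in
            if let error = error { print("On-device labeling failed:", error); return }
            labels?.forEach { print($0) }
        }
    }
}
```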

While both API versions will be offered to developers, only the on-device version will be completely free. Developers opting for the cloud-based APIs will need to use Firebase, Google's mobile and web application platform, which charges a fee.

Google is initially offering access to the APIs in a limited early preview, but has already provided documentation to start using ML Kit.

The cross-platform nature of ML Kit puts it in competition with Apple's own Core ML, a machine learning framework introduced at WWDC 2017. Similar in scope, Core ML lets developers use machine learning to improve their apps, supporting a broad variety of model types and taking advantage of Apple's low-level technologies, including Metal and Accelerate.

The initial APIs offered under Core ML included computer vision elements including face tracking and detection, landmarks, text detection, barcode detection, object tracking, and image registration. There are also natural language processing APIs available, which offer language identification, tokenization, lemmatization, and named entity recognition functions.
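On the Apple side, these computer vision features are consumed through the Vision framework's request-and-handler pattern, with custom Core ML models slotting into the same pipeline via VNCoreMLRequest. A minimal sketch of face detection using the built-in Vision request:

```swift
// Sketch of Vision-framework face detection, the kind of computer vision feature
// that sits alongside Core ML on iOS; a custom .mlmodel could be swapped into
// the same pipeline via VNCoreMLRequest.
import UIKit
import Vision

func detectFaces(in photo: UIImage) {
    guard let cgImage = photo.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil, let faces = request.results as? [VNFaceObservation] else {
            print("Face detection failed: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        // Bounding boxes are reported in normalized image coordinates.
        faces.forEach { print("Face at \($0.boundingBox)") }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```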



14 Comments

perpetual3 12 Years · 52 comments

I really hope people can see that making Google the end-all-be-all of Machine Learning is one of the worst ideas imaginable.

mr lizard 15 Years · 354 comments

Hopefully developers will be required by Apple to state clearly if their app makes use of Google’s ML technology, so that those of us who care about our privacy can avoid downloading and using them. 

ivanh 12 Years · 596 comments

It will be interesting to see the effect of mutual learning and competition between two Machine Learning kits on the same app. Next iPhone models should have 4GB or 8GB of memory. Anything less than 4GB is planned obsolescence.

bestkeptsecret 13 Years · 4289 comments

ivanh said:
 planned obsolescence.

That is such a cliché.

You may never get the highest spec'd phone with an iPhone, but you'll definitely get the best performing one.

If Apple figures that they need to have more memory for ML, they'll add it.