
Google ML Kit aims to help developers add machine learning to their iOS apps

Google has launched a new machine learning SDK called ML Kit, which gives developers a way to add machine learning-based features to their Android and iOS apps. The new framework arrives almost a year after Apple introduced its similar Core ML platform.

Introduced at Google I/O on Tuesday, ML Kit consists of a number of APIs developers can incorporate into their apps with little prior knowledge of machine learning. The APIs are all backed by models that Google has already trained extensively, saving developers from building their own models and spending resources to train them correctly.

The models Google offers in ML Kit at launch cover text recognition, face detection, image labeling, landmark recognition, and barcode scanning, all of which rely on imaging data from the device's camera. Future additions include an API for adding smart replies to messages, and a high-density face contour feature for the face detection API that will be useful for applying effects to an image.

The ML Kit APIs are offered in two versions with different tradeoffs: the cloud-based version requires an Internet connection but offers higher accuracy, while the on-device version is less accurate and depends on the device's processing power, but can be used offline.

For example, while the offline version is capable of identifying a dog within a photograph, it is unlikely to ascertain more specific details about the animal. The cloud-based version, given the same image, could also suggest what breed of dog is pictured.
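As a rough illustration of how that tradeoff surfaces in code, the sketch below labels an image with either the on-device or the cloud detector through the Firebase-hosted ML Kit SDK. The class and method names (Vision, labelDetector(), cloudLabelDetector(), detect(in:)) are an approximation of the early-preview API surface and may differ from the shipping SDK, so treat this as a sketch rather than a reference.

```swift
import UIKit
import Firebase // assumes the ML Kit early-preview Firebase pods are installed

func labelPhoto(_ photo: UIImage, useCloud: Bool) {
    let vision = Vision.vision()
    let image = VisionImage(image: photo)

    if useCloud {
        // Cloud detector: needs a network connection and a paid Firebase tier,
        // but returns finer-grained labels (e.g. a specific dog breed).
        let detector = vision.cloudLabelDetector()
        detector.detect(in: image) { labels, error in
            guard error == nil, let labels = labels else { return }
            for label in labels { print(label.label, label.confidence) }
        }
    } else {
        // On-device detector: free and works offline, but coarser --
        // it may report "dog" without identifying the breed.
        let detector = vision.labelDetector()
        detector.detect(in: image) { labels, error in
            guard error == nil, let labels = labels else { return }
            for label in labels { print(label.label, label.confidence) }
        }
    }
}
```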

While both API versions will be offered to developers, only the on-device version will be completely free. Developers opting for the cloud-based APIs will need to use Firebase, Google's mobile and web application platform, which charges a fee for that usage.
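Whichever tier a developer picks, ML Kit is distributed through Firebase, so the app has to be registered as a Firebase project and initialized at launch. A minimal setup, assuming the standard Firebase iOS SDK, looks something like this:

```swift
import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Reads GoogleService-Info.plist and connects the app to its Firebase project,
        // which is required before any ML Kit detector can be created.
        FirebaseApp.configure()
        return true
    }
}
```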

Google is initially offering access to the APIs in a limited early preview, but has already published documentation for getting started with ML Kit.

The cross-platform nature of ML Kit puts it in competition with Apple's own Core ML, a machine learning framework introduced at WWDC 2017. Similar in nature, Core ML lets developers use machine learning to improve their apps, supports a broad variety of model types, and takes advantage of Apple's low-level technologies, including Metal and Accelerate.
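For comparison, running an image classifier through Core ML typically goes via the Vision framework, which handles image preparation and returns classification observations. The sketch below assumes a classification model has already been added to the Xcode project; MobileNetV2 is used purely as a placeholder name for the Swift class Xcode generates from the model file.

```swift
import Vision
import CoreML

func classify(_ image: CGImage) {
    // MobileNetV2 stands in for any .mlmodel added to the project;
    // Xcode generates a Swift class of the same name for loading it.
    guard let coreMLModel = try? VNCoreMLModel(for: MobileNetV2().model) else { return }

    let request = VNCoreMLRequest(model: coreMLModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    // Vision runs the model using Apple's low-level frameworks (Metal, Accelerate) under the hood.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```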

The initial APIs offered under Core ML cover computer vision elements, including face tracking and detection, landmark detection, text detection, barcode detection, object tracking, and image registration. Natural language processing APIs are also available, offering language identification, tokenization, lemmatization, and named entity recognition.
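Those natural language features are exposed on Apple's platforms through NSLinguisticTagger. A brief sketch of language identification and named entity recognition, using an arbitrary sample sentence, might look like this:

```swift
import Foundation

let text = "Tim Cook introduced Core ML at WWDC in San Jose."
let tagger = NSLinguisticTagger(tagSchemes: [.nameType, .lemma, .language], options: 0)
tagger.string = text

// Language identification
print(tagger.dominantLanguage ?? "unknown")

// Named entity recognition over word tokens
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]
let range = NSRange(location: 0, length: text.utf16.count)
tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    if let tag = tag, tag != .otherWord {
        let token = (text as NSString).substring(with: tokenRange)
        print("\(token): \(tag.rawValue)") // e.g. "Tim Cook: PersonalName"
    }
}
```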