At Google I/O, the company introduced LaMDA, a model for generating dynamic responses to queries that could grant Google Assistant better conversational capabilities.
Revealed at Google I/O, LaMDA is a language model for dialogue applications. While touting improvements to Search and Google Assistant, Google CEO Sundar Pichai acknowledged that results are sometimes not as natural as they could be.
"Sensible responses keep conversations going," said Sundar, before a demonstration of LaMDA itself.
The idea is a continuation of Google's BERT system for understanding the context of terms in a search query. By learning concepts about a subject and how they are expressed in language, LaMDA can provide more natural-sounding responses that flow into a conversation.
In demonstrations, LaMDA held conversations while playing the roles of Pluto and of a paper airplane. Each instance carried its own side of a conversation with a user, answering innocuous questions such as "Tell me what I would see if I visited" with a description of Pluto's landscape and the frozen icebergs on its surface.
Google is still developing LaMDA, which Pichai admits is "still early research," and it currently works only with text. Google intends to make it multimodal, able to understand images and video alongside text.
Eventually, this could make Google Assistant more conversational for its users. The research may also push Apple towards further advancements with Siri, bringing its own virtual assistant up to speed with the search company's offering and making it chattier in turn.