AppleInsider is supported by its audience and may earn commission as an Amazon Associate and affiliate partner on qualifying purchases. These affiliate partnerships do not influence our editorial content.
A security research organization in Germany placed eight 'smart spies' in both the Amazon Alexa and Google Home app stores to demonstrate how easily eavesdropping and phishing can be done over smart speakers.
German organization Security Research Labs has demonstrated both that malicious apps can be created for Alexa and Google Home, and that they can pass security vetting. The company successfully created eight such apps that they called "Smart Spies." Each was designed to eavesdrop or phish, and each was then approved by Amazon and Google.
"It was always clear that those voice assistants have privacy implications— with Google and Amazon receiving your speech, and this possibly being triggered on accident sometimes," Fabian Braunlein, senior security consultant at SRLabs, told Ars Technica.
"We now show that, not only the manufacturers, but... also hackers can abuse those voice assistants to intrude on someone's privacy," he continued.
The Smart Spies skills on Alexa or actions on Google Home were all able to eavesdrop on users after they should have stopped listening. Some were phishing ones that told users there was an update and asked for passwords.
According to SRLabs documentation, the company relied on how certain elements of an Alexa voice skill can be changed after it has passed Amazon's review process.
It also took advantage of the ability for developers to insert very long pauses in the speech output of either Alexa skills or Google actions. This is achieved by asking either smart speaker to repeatedly say an unpronounceable series of ASCII or ISO codes.
This meant the voice apps would go silent and so appear to have ended, when in reality they were waiting up to a minute to ask phishing questions.
SRLabs disclosed the apps and its research to Amazon and Google, both of whom have now removed the apps. Both companies then responded to SRLabs with statements about preventing this being done again.
"This is no longer possible for skills being submitted for certification," said an Amazon spokesperson in a written statement to SRLabs. "We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified."
"All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies," said a Google spokesperson in a similar statement.
"We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers," continued Google's spokesperson. "We are putting additional mechanisms in place to prevent these issues from occurring in the future."
Ars Technica reports that Google is now reviewing all third-party Google Home actions.
Previously, Amazon has been reported to use thousands of workers to monitor recordings of spoken commands issued to the company's smart speakers and other devices. Google has done the same, and so has Apple.