Wednesday , November 13 2019
Home / australia / The sound of silence is actually the sound of a malicious smart speaker app listening to you • The Register

The sound of silence is actually the sound of a malicious smart speaker app listening to you • The Register



Google Home and Amazon Alexa can be easily hacked to eavesdrop on users or extract information by asking questions that appear to come from each smart speaker provider, according to researchers.

Both platforms can be extended by third-party developers. Such apps are called Skills for Alexa and Actions for Google Home. These are invoked by voice commands – "Alexa" or "OK ​​Google" followed by the name for the third-party app.

Is it possible for these third-party applications to be malicious? According to Security Research Labs, it is. The team demonstrated a simple hack whereby the application appears to receive an error message stating that the requested app is not available in that country, but in fact keeps running, listening and potentially recording any speech.

It was possible to prolong this period by giving the system unpronounceable characters, the audio equivalent to a blank space in the text. The voice assistant thinks it's still speaking but nothing is audible, and so it lists for longer.

Users are vulnerable after hearing a fake error message, the researchers claimed, because they do not think the third-party app is running. Therefore the app can now claim to be Google or Alexa. The example shows the user being told: "There's a new update for your Alexa device. To start it, please say Start followed by your Amazon password."

Person hides in shocked anticipation of something horrible. Photo via shutterstock

You know that silly fear about Alexa recording everything and leaking it online? It just happened

READ MORE

In reality, these systems never ask for your password, but just as malicious users pretending to be your bank can call you on the phone and extract security information from some subset of people, the same could be true of a voice app. The researchers call this "vishing" – voice phishing.

One troubling aspect of this demonstration is that researchers are required to submit their apps for review by Amazon and Google, and then change their behavior after successfully passing the review.

"Using a new voice app should be approached with a similar level of caution as installing a new app on your smartphone," the researchers said. One problem, though, is that these apps are not installed as such, but are automatically available.

"What the researchers at SR Labs have been demonstrating is something security and privacy advocates have been saying for some time: having a device in your home that can listen to your conversations is not a good idea," security analyst Graham Cluley told The Reg. "Amazon and Google should not be so naiveve as to think that a single check when an app is first submitted is enough to verify that the app is always behaving benignly. More needs to be done to protect users of such devices from privacy-busting apps. "

The researchers, who shared their work with Amazon and Google, suggest a more thorough review process for third-party voice apps, detection of unpronounceable characters, and monitoring for suspicious output such as asking for a password.

It is still early days for voice assistants and concerns to date have been more about data gathering by Amazon and Google than misuse by third-party applications. In fact, a blatant example such as that shown by SR Labs would probably be picked up quickly, but that doesn't eliminate the possibility of more subtle misbehavior.

We asked both Amazon and Google for comment. On Monday, a spokesperson for Amazon told us:

On the subject of why a skill was able to continue working even after it was stopped by a customer, Amazon's PR added: "This is no longer possible for skills to be submitted for certification. We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified. "

Also, it should no longer be possible to trick people with bogus security updates. "We have put mutations in place to prevent and detect this type of skill behavior and to reject or take them down when identified," the spokesperson continued.

"This includes preventing customers from asking for their Amazon passwords. It is also important that customers know we provide automatic security updates for our devices, and will never ask them to share their password."

Meanwhile Google had this to say about Google Home Actions, its name for add-on apps for the AI ​​assistant: "All Google Actions are required to follow our developer policies, and we prohibit and remove any Act that violates these policies. We have review processes to identify the type of behavior described in this report, and we have removed the Actions that we have found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future. " ®

Sponsored:
Serverless Computing London – 6-8 Nov 2019


Source link