Hackers Can Hijack Amazon Alexa Voice Assistants Using Audio Commands

Cybersecurity researchers from Royal Holloway, University of London have reported that Amazon Alexa devices can be made to wake themselves and execute voice commands issued by a remote attacker.

An attacker with access to a smart speaker can exploit a vulnerability in the device to make it issue voice commands to itself or to other smart speakers nearby. This lets the attacker control other smart devices in the house, make purchases, and eavesdrop on users.

The vulnerability, classified as improper neutralization of audio output (CVE-2022-25809), has been dubbed Alexa versus Alexa (AvA) and affects third- and fourth-generation Echo Dot devices. Initiating an attack is as simple as getting an Alexa-enabled device to play specially crafted audio: the malicious files can be hosted on an Internet radio station that the Amazon Echo is tuned to. In this scenario, the attacker only needs to set up an Internet radio station, which effectively acts as a command server, to gain control of the device.
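To make the "radio station as command server" idea concrete, the sketch below stands up a minimal HTTP endpoint that serves a pre-recorded audio file, the way a self-hosted Internet radio stream would. The file name `command.mp3` and the port are illustrative assumptions, not details from the research; in the AvA scenario, the served audio would contain the spoken commands the Echo ends up playing to itself.

```python
# Minimal sketch of a self-hosted "Internet radio" endpoint (illustrative only).
# Assumptions: an MP3 file named command.mp3 exists in the working directory,
# and port 8000 is arbitrary. This simply streams a static audio file over
# HTTP, the delivery mechanism an AvA-style command server would rely on.
from http.server import BaseHTTPRequestHandler, HTTPServer

AUDIO_FILE = "command.mp3"  # hypothetical pre-recorded audio payload

class AudioStreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same audio file regardless of the requested path.
        with open(AUDIO_FILE, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "audio/mpeg")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any device pointed at http://<host>:8000/ receives the audio stream.
    HTTPServer(("0.0.0.0", 8000), AudioStreamHandler).serve_forever()
```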

Executing the attack relies on Amazon Alexa Skills. Speech Synthesis Markup Language (SSML) gives attackers a second avenue through Alexa Skills, distinct from the streaming-radio approach. SSML is a markup language that lets developers control how Alexa speaks in particular situations; for example, an SSML tag can instruct Alexa to whisper or to speak in an upbeat tone.
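As a rough illustration of what such markup looks like, the Python snippet below assembles a skill response whose speech is wrapped in SSML tags. The whispered-effect and emotion tags are documented Alexa SSML features; the surrounding response structure is a simplified sketch rather than a complete skill, and the phrasing is invented for illustration.

```python
import json

# Sketch of an Alexa skill response using SSML (simplified; a real skill
# would return this JSON from its request handler).
ssml = (
    "<speak>"
    '<amazon:effect name="whispered">This part is whispered.</amazon:effect> '
    '<amazon:emotion name="excited" intensity="medium">'
    "And this part sounds upbeat."
    "</amazon:emotion>"
    "</speak>"
)

response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "SSML", "ssml": ssml},
        "shouldEndSession": True,
    },
}

print(json.dumps(response, indent=2))
```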

Anyone can create a new Alexa Skill and publish it to the Alexa Skills store. Skills need no special privileges to run on an Alexa-enabled device, although Amazon says it vets them before they go live.
