Nonsense Attacks on Google Assistant

08/06/2018
by Mary K. Bispham, et al.

This paper presents a novel attack on voice-controlled digital assistants using nonsensical word sequences. We present experimental results demonstrating that malicious actors can gain covert access to a voice-controlled system by hiding commands in apparently nonsensical sounds whose meaning is opaque to humans. We identified several nonsensical word sequences that triggered a target command in a voice-controlled digital assistant but were incomprehensible to humans, as shown in tests with human experimental subjects. Our work confirms the potential for hiding malicious commands to voice-controlled digital assistants, or other speech-controlled devices, in speech sounds that humans perceive as nonsensical.
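The abstract does not detail the attack pipeline, but a minimal sketch of the evaluation step it implies, checking whether audio of a candidate nonsense phrase is transcribed as a target command by an off-the-shelf speech recognizer, might look like the following. The use of the Python speech_recognition library, the file names, and the target_command value are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch (assumption, not the authors' pipeline): test whether
# recordings of candidate nonsense phrases are transcribed as a target
# command by an off-the-shelf speech recognizer.
import speech_recognition as sr

# Hypothetical inputs: pre-recorded audio of nonsense phrases (WAV files)
# and the command we hope the recognizer "hears" in them.
candidate_files = ["nonsense_01.wav", "nonsense_02.wav", "nonsense_03.wav"]
target_command = "turn off the lights"

recognizer = sr.Recognizer()

for path in candidate_files:
    with sr.AudioFile(path) as source:
        audio = recognizer.record(source)  # read the whole file
    try:
        # Google's free web speech API; any ASR back end could stand in here.
        transcript = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        transcript = ""  # recognizer could not decode the audio
    hit = target_command in transcript
    print(f"{path}: transcribed as {transcript!r} -> triggers command: {hit}")
```

A phrase that yields a hit here, while remaining meaningless to human listeners in a comprehension test, would be a candidate for the kind of covert command the paper describes.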
