We are all stuck in a long-term abusive relationship—with technology and voice assistant privacy. Time and time again, we learn about some outrageous new privacy breach. And, time and time again, the predictable excuses follow:
- Let us explain why this is not actually a problem.
- What we’re doing is common industry practice.
- It was in the user agreement, so we have your consent.
- We’re using your data in order to improve our products and serve you better.
Or, my favorite:
- We have heard your concerns, and we have changed our practices. This will never happen again.
Eventually the dust settles, and we find ourselves lining up yet again for the latest new device. And, we begin the countdown to the next inevitable privacy horror show. The cycle of abuse continues.
Voice assistants first hit the mainstream in November 2014, when Amazon introduced the world to Alexa via its new Echo. Right out of the gate, tech pundits waved their caution flags. “After the Snowden leaks, it’s a big ask of Amazon or any other company for us to put Internet-connected, always-listening microphones in our homes,” observed Time magazine’s Alex Fitzpatrick. “The 1984 comparisons write themselves, and aren’t totally without precedent.”
Nevertheless, millions of families soon invited Alexa into their homes. Many of them simply brushed privacy concerns aside, beguiled by the ability to use a simple voice command to launch a playlist, deliver a weather forecast, or find out how many ounces are in a half cup. Others overcame their initial reluctance, soothed by Amazon’s reassurances. The device only begins recording when it hears the “wake” phrase, Amazon told us (but anyone with an Echo has seen it activated by random sounds). You have the ability to review and delete any recordings the Echo uploads, they said (although doing so “may degrade your Alexa experience”). And, of course, you can always turn the microphone off (leaving you with a voice-activated assistant that can’t hear your voice).
Then, in 2019, the Orwellian stories began to hit the internet. “Apple contractors ‘regularly hear confidential details’” via Siri, the Guardian wrote. “Google is always listening. Now it’s watching, too,” wrote the Washington Post. “Amazon confirms it retains your Alexa voice recordings,” TNW reported. As the pundits leaped on this new scandal, major tech players scrambled to trot out their usual reassurances.
At some point, though, those reassurances ring hollow. Surely, people will eventually unplug their voice assistants, or reject the idea of purchasing one in the first place.
Or, will they? At the height of this latest scandal, Bloomberg reported, “Consumers are worried machines by Amazon and Google are eavesdropping on them at home, but that’s not stopping purchases.” Even though we say we value privacy, we apparently can’t live without our tech toys.
So, what are we to do, if we can’t trust big tech but we love our voice assistants?
Edmonton, Canada-based Paranoid Inc. believes it has the answer. In a cheeky sci-fi video, the company introduces Paranoid, a simple hardware add-on that controls what your voice assistant hears. Essentially, you use a “wake” phrase to tell Paranoid it’s okay for the voice assistant to listen. Then, you use your device as normal. Afterward, Paranoid effectively plugs Alexa’s or Google’s ears until the next time you need your device.
Of course, once you awaken your device via Paranoid, your subsequent voice commands are recorded, uploaded, and processed. But, at least you’ll know when your device is listening, and when you’re truly alone.
In our ongoing relationship with big tech, it’s time to take charge.