“What would it mean for your business if you could target potential clients who are actively discussing their need for your services in their day-to-day conversations? No, it's not a Black Mirror episode—it's Voice Data, and CMG has the capabilities to use it to your business advantage.”
Aren’t they all already listening, all the time? I mean, how else would it hear you say “Ay yo Siri”?
No.
Both Android and iOS enforce microphone permissions: an application that hasn’t been granted explicit access can’t listen constantly.
For example, Google Assistant is usually a privileged, preinstalled app, so it is allowed to listen. Even then, it does so efficiently, by listening for exactly one sound: the hotword “Ok Google”.
Other applications not only have to obtain user permission; often that permission is restricted to “While app is in use”, meaning the app is on screen, in the foreground, showing a notification, or was recently opened. That restriction prevents most microphone abuse unless someone is actively using the app.
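To make the “explicit access” part concrete, here’s a minimal Kotlin sketch of what an ordinary Android app has to go through just to open the mic. The class and method names are my own placeholders; the permission calls are the standard AndroidX ones, and the app also has to declare `<uses-permission android:name="android.permission.RECORD_AUDIO" />` in its manifest:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class MicDemoActivity : AppCompatActivity() {

    private val micRequestCode = 42 // arbitrary request code, echoed back in the result callback

    fun startListeningIfAllowed() {
        val granted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.RECORD_AUDIO
        ) == PackageManager.PERMISSION_GRANTED

        if (!granted) {
            // Pops the system dialog; on modern Android the user can choose
            // "Only while using the app", and the OS (not the app) enforces that choice.
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.RECORD_AUDIO), micRequestCode
            )
            return
        }

        // Only past this point could the app open an audio capture session at all,
        // and recent Android versions light up the mic-in-use indicator while it records.
    }
}
```

The point is that the gatekeeping happens in the OS and in the permission dialog, not in anything the app author promises.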
The phone’s processor has the wake word hardcoded, so it’s not like an ad company can add a new one on a whim. And it uses passive listening, so it’s not recording everything you say; I’ve seen it compared to sitting in class and not paying attention until the teacher says your name.
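Roughly, the “not paying attention until you hear your name” idea looks like this (a toy Kotlin sketch under my own assumptions, not any vendor’s actual code; `looksLikeOkGoogle` stands in for the fixed, low-power detector baked into the chip):

```kotlin
import java.util.ArrayDeque

// Toy sketch of passive hotword listening: keep only a tiny rolling window of audio,
// check it against ONE fixed pattern, and discard everything else immediately.
const val WINDOW_FRAMES = 50 // ~1 second of ~20 ms frames, nothing more

fun listenPassively(
    nextFrame: () -> ShortArray,                       // yields the newest ~20 ms of audio
    looksLikeOkGoogle: (List<ShortArray>) -> Boolean   // hypothetical hardcoded detector
) {
    val window = ArrayDeque<ShortArray>()
    while (true) {
        window.addLast(nextFrame())
        if (window.size > WINDOW_FRAMES) {
            window.removeFirst() // older audio is dropped, never stored or uploaded
        }
        if (looksLikeOkGoogle(window.toList())) {
            // Only now does the full assistant wake up, start actually recording,
            // and show the listening UI.
            break
        }
    }
}
```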
Have you seen this code though? Every time I hear a statement like that, I have to wonder if you’re all just taking their word for it.
I don’t take their word for it unless they show me the code and prove that it’s what is actually running on all the devices in use.
Do you also personally audit all open source software that you use?
Your rebuttal makes no sense.
The issue with proprietary “smart” assistants is that we can only guess how they work.
Okay so that’s a no, then, thanks!
No, but when they’re available I do review the code audits that certified professionals publish for the things I use. I also don’t use any voice assistants, and I only run open-source smartphone ROMs such as GrapheneOS.
Basically, I use the opsec methods available to me to prevent as much of the rampant spying as I can. The last thing I would do is put an open mic to Amazon’s audio-harvesting bots in my home, because that would be incredibly careless.
That’s what I figured!
Thanks for the reply.
There’s no way that an app with mic permissions could basically do the same thing and pick up on certain preprogrammed words like Ford or Coke, which could then be parsed by AI and used by advertisers? It certainly seems like that isn’t out of the realm of physical possibility, but I’m definitely no expert. Would they have had to pay the OS maker to hardcode it into the OS? Could that be done in an update at a later time?
Only if you want the phone to start burning battery and data while displaying the “microphone in use” indicator all the time.
Not to mention that the specific phrases have been picked to cause as few false positives as possible (which is why you can’t change them yourself), and you can still fool Google Assistant by saying “hey booboo” or “okay boomer”. Good luck making it reliably recognize “Ford”, lol.
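For scale (my own back-of-the-envelope, not a measured figure): uncompressed 16 kHz, 16-bit mono audio is about 32 KB per second, which works out to roughly 2.7 GB per day if an app tried to ship everything it heard off the device. Even squeezed through a ~16 kbps speech codec, that’s still on the order of 170 MB a day, which would be hard to miss on a data-usage graph, on top of the battery drain and the mic indicator.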
Huh, TIL. I figured “if they can do it with one thing they could do it with more than one thing.”
For that I think they use special hardware; that’s the reason you can’t modify the wake word, and why they still notify you when the voice assistant is disabled. I don’t know if this is actually true, or if the companies just hide behind it, or if I’m remembering it incorrectly.
That same hardware couldn’t also have a brand name added as a code word for ads, like, say, “Pepsi”?