Amazon has many thousands of people doing quality control on Alexa, meaning that they're listening to incoming audio captured on Echo devices. This shouldn't be surprising. The question is how they're doing it, and what policies they have around privacy when doing so. I don't personally see a major problem here. But at the same time I'd never put a Facebook device in my home. To me it's more about the company and its incentives than anything else. Link
A number of FBI-affiliated websites were hacked, and information on thousands of federal agents and law enforcement officers is now being sold online. Link
Chinese schools are using facial recognition on students, combined with ML to determine whether they're paying attention, distracted, etc. Link
Sift is a service that builds a risk profile on you so merchants can determine whether you're a benign actor or someone about to commit fraud. I think people need to accept that continuous risk scoring for people and situations is both inevitable and actually already happening. The moment you try to block bad actors by looking at their behavior, you quickly end up with a score that determines action based on various thresholds. And the moment you do it for bad actors, you're kind of implicitly doing it for good actors as well. There are better and worse ways to approach this, but profile scoring is not something we're going to be able to avoid going forward. Let's accept this reality and start having the conversations about how to make (and keep) this functionality as benign as possible. Link
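To make that threshold mechanic concrete, here's a minimal sketch of how behavioral signals tend to collapse into a single score that drives action. The signal names, weights, and cutoffs are illustrative assumptions on my part, not Sift's actual model.

```python
# Minimal sketch of threshold-based risk scoring. The signals, weights,
# and thresholds are illustrative assumptions, not any vendor's real model.

# Behavioral signals for a session, each normalized to the range 0..1.
ILLUSTRATIVE_WEIGHTS = {
    "new_device": 0.2,
    "mismatched_billing_country": 0.35,
    "rapid_fire_checkouts": 0.3,
    "disposable_email_domain": 0.15,
}

def risk_score(signals: dict) -> float:
    """Combine weighted behavioral signals into a single 0..1 risk score."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

def action_for(score: float) -> str:
    """Map the score onto actions via thresholds -- the point where
    'blocking bad actors' quietly becomes 'scoring everyone'."""
    if score >= 0.8:
        return "block"
    if score >= 0.5:
        return "step_up_verification"
    return "allow"

# Example: a legitimate customer on a new device still gets scored.
session = {"new_device": 1.0, "mismatched_billing_country": 0.0,
           "rapid_fire_checkouts": 0.2, "disposable_email_domain": 0.0}
print(action_for(risk_score(session)))  # -> "allow" (score 0.26)
```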
A Dutch F-16 was damaged by rounds from its own 20mm cannon. So it fired bullets, and then flew into them. Life is awesome. Link
Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.