Linus’ thread: (CW: bigotry and racism in the comments) https://social.kernel.org/notice/AWSXomDbvdxKgOxVAm (you need to scroll down, i can’t seem to link to the comment in the screenshot)
It’d be the business’s fault, not ours; you shouldn’t use unreliable user data for something so important.
Legally you are correct but ethically you are wrong. If they include false data that causes a crash, everyone that intentionally contributed that data is morally at fault. You don’t get to wash your hands of it just because the business is the one legally liable for it.
I mean, ethically it’s a debatable topic. If I don’t help fix someone’s car and then he crashes it, it’s not my fault; he shouldn’t have driven it while it was broken.
Same with user-generated or AI data: it works 99.9% of the time, but that 0.1% is too dangerous to deploy in a life-endangering situation.
You’ve got a bit of a point there, I’ll give you that, but it’s an apples-to-oranges comparison unless you’re intentionally trying to cause them to crash by not helping them fix their car. The person I originally replied to was advocating intentionally trying to cause a crash.
I think it was more of a tongue-in-cheek reference to the incompetence of the companies and how they will use that data in practice, but I might have read too much into it. Regardless, intentionally clicking the wrong items on captchas shouldn’t cause a crash unless the companies make it possible by cutting corners.
It doesn’t matter if it was tongue in cheek: if my dumbass took it seriously, then you know other dumbass people will take it seriously. And I guess my main issue is the vocal intent to cause harm, which is demonstrated by their mention of making sure to stay safe on the sidewalk.