![](https://lemmy.world/pictrs/image/c93fd967-da58-4df9-986c-bc56740a6e3b.png)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
I’m sure the legion of bots that comprise their user base won’t mind.
You’re giving Elmo too much credit as some Machiavellian character. He’s an ordinary capitalist. Appeasing your investors is the most common commercial reason. It’s just that advertisers can’t really provide Twitter with business anymore on account of his bigoted optics, but dangerous governments can.
He’s so transparent with his intentions, it’s embarrassing. The only explanation for removing your own tool to combat misinformation is that it does not align with your own interests. There’s no way to spin that fact in a positive light, and yet there will still be people using Twitter. It’s actually getting really fucking hard not to be a misanthrope.
Yeah, I just looked it up. Serving stuff through CF runs a check for illicit material. Pretty neat. Be that as it may, the original complaint is that Lemmy is lacking moderation tools. Such a moderation tool would be something that prevents CSAM from even being stored on the server in the first place.
The developers of LemmyNet are being asked for the ability to define a subroutine by which uploaded images are preprocessed and then denied or passed. There is no such feature right now. Even if they wanted to use Cloudflare’s CSAM protection, they couldn’t. That’s the entire problem. This preprocessing routine could use Microsoft PhotoDNA and Google CSAI Match, it could use a self-hosted alternative as db0 desires, or it could even be your own custom solution that doesn’t destroy CSAM but stores it on a computer you own and stops it from being posted.
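To make the idea concrete, here is a minimal sketch of what such a preprocessing hook could look like. All names (`ScanResult`, `handle_upload`, `store`) are hypothetical, invented for illustration; nothing here reflects Lemmy’s actual internals or any vendor API:

```python
# Hypothetical sketch of the requested feature: a scan callback that runs
# before an upload is ever persisted. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScanResult:
    allow: bool
    reason: str = ""

# An instance admin could plug in any scanner here: PhotoDNA, CSAI Match,
# a self-hosted classifier, or a custom blocklist.
Scanner = Callable[[bytes], ScanResult]

def handle_upload(image_bytes: bytes, scanner: Scanner) -> str:
    result = scanner(image_bytes)
    if not result.allow:
        # Deny before anything touches disk; a real deployment
        # would also report and possibly ban the uploader.
        raise PermissionError(f"upload rejected: {result.reason}")
    return store(image_bytes)  # only reached if the scan passes

def store(image_bytes: bytes) -> str:
    # Stand-in for writing to pict-rs or object storage.
    return "stored:%d-bytes" % len(image_bytes)
```

The key property is that the scanner runs before storage, so rejected material never exists on the instance owner’s hardware.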
Imagine if you were the owner of a really large computer with CSAM in it. And there is in fact no good way to prevent creeps from putting more into it. And when police come to have a look at your CSAM, you are liable for legal bullshit. Now imagine you had dependents. You would also be well past the point of being respectful.
On that note, the captain db0 has raised an issue on the GitHub repository of LemmyNet, requesting essentially the ability to add middleware that checks the nature of uploaded images (issue #3920 if anyone wants to check). Point being, the ball is squarely in their court now.
Traditional hashes like MD5 and SHA-256 are not locality-sensitive, so they can’t be used to detect approximate matches. Otherwise, yes, you are correct. Perceptual hashes can create false positives. Very unlikely, but yes, it is possible. This is not a problem with a perfect solution. Extraordinary edge cases must be resolved on a case-by-case basis.
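The difference is easy to demonstrate. Below is a toy “average hash” (a real but deliberately simplified perceptual-hash technique) over a flat list of grayscale pixel values; production systems like pHash or PhotoDNA are far more sophisticated, and the pixel data here is made up:

```python
# Toy demo: a one-pixel edit completely changes a cryptographic hash,
# but barely moves a perceptual hash.
import hashlib

def average_hash(pixels):
    """1 bit per pixel: is it brighter than the mean brightness?"""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [10, 200, 30, 180, 50, 220, 70, 240]
tweaked  = [12, 200, 30, 180, 50, 220, 70, 240]  # one pixel nudged

# MD5 digests of the two "images" share nothing recognizable...
md5_a = hashlib.md5(bytes(original)).hexdigest()
md5_b = hashlib.md5(bytes(tweaked)).hexdigest()

# ...but the perceptual hashes differ by zero or very few bits.
distance = hamming(average_hash(original), average_hash(tweaked))
```

A matcher then accepts anything within a small Hamming-distance threshold, which is exactly where the rare false positive can sneak in.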
And yes, the simplest solutions must always be implemented first: tracking post reputation, captchas before posting, waiting for an account to mature before it can post, etc. The problem is that right now the only defense we have access to is mods. Mods are people, usually with eyeballs. Eyeballs which will be poisoned by CSAM so we can post memes and funnies without issue. This is not fair to them. We must do all we can, and if all we can includes perceptual hashing, we have a moral obligation to do so.
I agree. Perhaps what the Lemmy developers can do is add a slot for generic middleware in front of whatever POST request the Lemmy API uses for uploading content. That way, the owner of an instance can choose whatever CSAM-filtering middleware they want, and we are not dependent on the Lemmy developers for a solution to the pedo problem.
Good question. Yes. Compression artefacts can also fuck it up. However, hash comparison returns a percentage match; if the match is good enough, it is CSAM. Go ahead, ban. There is a bigger issue for the developers of Lemmy, however, I assume. It is a philosophical mess: if we elect to use PhotoDNA and CSAI Match, Lemmy is now at the whims of Microsoft and Google respectively.
I guess it’d be a matter of incorporating something that hashes whatever is being uploaded, then checks that hash against a database of known CSAM. If it matches: stop the upload, ban the user, and report to the closest officer of the law. Reddit uses PhotoDNA and CSAI Match. This is not a simple task.
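In its very simplest (exact-match) form, that flow looks like the sketch below. The database and function names are placeholders; a real deployment would use a vendor-maintained perceptual-hash database (PhotoDNA, CSAI Match) with distance-threshold matching rather than a Python set of MD5 digests:

```python
# Minimal exact-match sketch of the hash-and-check flow. In reality the
# hash would be perceptual and the database vendor-maintained.
import hashlib

# Placeholder entry: the MD5 of the empty byte string, standing in for
# a real known-bad digest.
known_hashes = {"d41d8cd98f00b204e9800998ecf8427e"}

def check_upload(image_bytes: bytes) -> str:
    digest = hashlib.md5(image_bytes).hexdigest()
    if digest in known_hashes:
        # Real deployments would block, ban the account, and file a
        # report with the relevant authority here.
        return "rejected"
    return "accepted"
```

Exact matching is trivially defeated by re-encoding the image, which is why the perceptual-hash approach discussed elsewhere in the thread is the one that actually gets used.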
Of all the failures of positive role-model behaviour one could exhibit, it had to be this. Seeing that shit kind of fucked me up, NGL. Good health to the mods who are running defense for us!
I understand him having these views. Money was exchanged behind closed doors, deals were struck, whatever. I can imagine a financial incentive for him to sow dissent via shitty meme. I don’t understand what’s in it for his followers. Is it just about being contrarian? What more must he do or say for it to be clear to them that he’s just kind of a bozo?