What are the odds that the AI model actually promotes harassment?
And what’s their plan for using it to fight power-tripping mods?
Have it set to IP ban anyone who calls them out. Duh!
If it weren’t for power-tripping mods, your favorite subreddits would fall to pieces from being too civil and having people actually follow the rules instead of being arbitrarily banned because their username contains the letter “Q”.
/s
Yeahhhhhhh, so you’re a liar, because that’s not REALLY that sarcastic…. That’s fairly realistic
Good Luck I’m behind 7 proxy’s.jpg
Ask a question on a big sub and the bot will downvote you and tell you to gargle battery acid
Battery acid story time!
Long, long ago, I worked at a Jiffy Lube. I was under the hood, some pothead noob was in the pit below. We’re supposed to call out what we’re doing, for a bunch of reasons, one of which was safety.
I called out “Checking battery, bay 3!” because I was going to pull the caps off of a non-sealed battery, when pothead noob comes right underneath to say “What?” – and then he starts screaming, because he got a drop of battery acid in his eye.
I raced down the stairs, grabbed him by his shirt, dragged him over to the eye wash station, turned the water on, and shoved his face in it. Fucking Mike.
4 out of 5 heils
By removing any sensible comments and retaining the delusional ones.
Reddit is fucking awful at this sort of thing, and I would not be at all surprised if this system punishes people who didn’t harass anyone: partly because it’s broken, and partly because trolls will game it to harass people precisely because it’s broken, while openly harassing people themselves and getting away with it because, again, it’s broken and Reddit won’t bother to fix it. Just like the current system.
Calling it now.
Worked exactly like that with their “anti-evil operations” team, which I was convinced was either an LLM or some overseas worker with barely any English knowledge.
Nah, it was just a collection of shitty bots and scripts.
That was their “admin” team… The anti evil team was something worse than that.
They were all pretty useless. There was one guy who openly admitted, in a post on reddit, to abusing botnets to mess with the site, and then he got me banned.
After I got banned, it took 5 days for the appeal to process, and the guy who did it didn’t get banned until a month later, and only after multiple people filed reports (not to mention that his post was cited in my appeal).
I’m sure they turned right around and made new accounts right away too.
Reddit is horribly managed and it’s only going to keep getting worse as far as I can tell: I’m just happy there’s Lemmy now.
deleted by creator
We all knew they would use bots for moderation eventually; I just didn’t think it would happen this quickly.
They’ve been relying on an absolute dumpster fire of an automated system for years now. It is regularly abused to harass people, even getting them banned site-wide when they didn’t do anything, while other people openly troll and harass with impunity.
AI will not fix this, and there’s a good chance it’ll even make things worse. What Reddit needs to do is hire adequate staffing and put an effective system in place, but they absolutely will not do that, because they don’t care about the users; they just want to try to make money no matter what.
They already were. Auto-mod was limited, but it did stuff automatically already.
While I’m skeptical AI technology is ready for this, I actually think it’s one of the better changes they’ve proposed. A truly impartial AI moderator can enforce polite discourse instead of flamewars.
Of course I don’t trust Reddit to do it right, but theoretically I dig it.
A truly impartial AI moderator can enforce polite discourse instead of flamewars.
They’re basing it on data mining existing flagged comments, though. So their dumb bot will be trained on the dumbest samples. And it may not be able to tell the difference between why someone would get banned from r/politics vs. r/conservative vs. r/catsstandingup.
It seems like it would be straightforward to add the sub’s rules and moderation policies to the bot’s context whenever it’s operating on something in a particular sub.
Though it sounds like this initial implementation is aimed at enforcing site-wide rules, in which circumstance the AI shouldn’t care what subreddit you’re posting in.
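To make the idea concrete, here’s a minimal sketch of what injecting a sub’s rules into the filter’s context might look like. Everything here is hypothetical: the function name, the rule text, and the prompt shape are illustrative, not Reddit’s actual implementation, and the actual model call is omitted.

```python
# Hypothetical sketch: prepend site-wide rules, then any per-subreddit
# rules, to the prompt an LLM moderation filter would see for a comment.
# None of these names come from Reddit's real system.

SITE_WIDE_RULES = [
    "No harassment or threats of violence.",
    "No doxxing or sharing personal information.",
]

def build_moderation_prompt(comment, sub_rules=None):
    """Assemble the context an LLM filter would classify one comment against."""
    rules = SITE_WIDE_RULES + (sub_rules or [])
    rule_lines = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return (
        "You are a content moderation filter. Flag the comment below "
        "only if it violates one of these rules:\n"
        f"{rule_lines}\n\n"
        f"Comment: {comment}\n"
        "Answer with FLAG or PASS."
    )

# Per-sub rules get appended after the site-wide ones, so the same
# comment can be judged differently depending on where it was posted.
prompt = build_moderation_prompt(
    "Here is my cat sitting down.",
    sub_rules=["Posts must contain a cat standing up."],
)
```

For a site-wide-only rollout like the one described here, you’d just call it with no `sub_rules` and the per-community context never enters the picture.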
Knowing them, even if it only starts out enforcing site-wide rules, I expect it to start banning random people and IPs for no discernible reason, followed by r€dd!t coming out and saying it’s a great success.
Yep. Conceptually, it actually sounds like an appropriate application of the technology, but I expect Reddit to faceplant on the actual implementation. I mean honestly, the only way they were able to make a mobile app that people enjoyed using was by basically outlawing third-party clients.
deleted by creator
~it’s just branch statements all the way down~
Reddit is planning on having even fewer moderators or paid admins, in their typical cheapskate fashion.
“The filter is powered by a Large Language Model (LLM) that’s trained on moderator actions and content removed by Reddit’s internal tools and enforcement teams,” reads an excerpt from the page.
Eww, gross. I was never a moderator, but that would annoy me.
deleted by creator
deleted by creator