Wow you mean reddit is banning real users and replacing them with bots???
You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.
Please elaborate. I would love to understand this from Black Mirror but I don’t get it.
It hurts them right in the feels when someone uses their platform better than them. How dare those researchers manipulate their manipulations!
Personally I love how they found the AI could be very persuasive by lying.
why wouldn’t that be the case, all the most persuasive humans are liars too. fantasy sells better than the truth.
I mean, the joke is that AI doesn’t tell you things that are meaningfully true, but rather is a machine for guessing next words to a standard of utility. And yes, lying is a good way to arbitrarily persuade people, especially if you’re unmoored to any social relation with them.
LOL (while I cry)
Realistic AI generated faces have been available for longer than realistic AI generated conversation ability.
Meh. Believe none of what you hear and very little of what you can see
Unless a person is in front of you, don’t assume anything is real online. I mean it. Nothing online can’t be faked, and nothing online HASN’T been faked.
The least trustworthy place in the universe. Is the internet.
Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep
Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn’t useful. It’s dangerous.
Sure, but still less dangerous than bots undermining our democracies and trying to destroy our social fabric.
Welcome to the internet? Learn skepticism?
Humans pretend to be experts in front of each other and constantly lie on the internet every day.
Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.
that doesn’t mean we should exacerbate the issue with AI.
If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.
Don’t worry tho, popular sites on the internet are dead since they’re all bots anyway. It’s over.
If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.
These two groups are not mutually exclusive
What a bunch of fear mongering, anti science idiots.
You think it’s anti science to want complete disclosure when you as a person are being experimented on?
What kind of backwards thinking is that?
Not when disclosure ruins the experiment. Nobody was harmed or even could be harmed unless they are dead stupid, in which case the harm is already inevitable. This was posting on social media, not injecting people with random pathogens. Have a little perspective.
You do realize the ends do not justify the means?
You do realize that MANY people on social media have emotional and mental situations occurring, and that these experiments can have ramifications that cannot be traced?
This is just a small reason why this is so damn unethical
In that case, any interaction would be unethical. How do you know that I don’t have an intense fear of the words “justify the means”? You could have just doomed me to a downward spiral ending in my demise. As if I didn’t have enough trouble. You not only made me see it, you tricked me into typing it.
you are being beyond silly.
In no way is what you just posited true. Unsuspecting and non-malicious social faux pas are in no way equal to intentionally secretive manipulation used to garner data from unsuspecting people.
that was an embarrassingly bad attempt to defend an indefensible position, and one no-one would blame you for deleting and re-trying
Well, you are trying embarrassingly hard to silence me, at least. That is fine. I was definitely positing an unlikely but possible case; I do suffer from extreme anxiety, and what sets it off has nothing to do with logic. But you are also overstating the ethics violation by suggesting that any harm they could cause is real or significant in a way that wouldn’t happen with regular interaction on random forums.
The key result
When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters.
While that is indeed what was reported, we and the researchers will never know if the posters with shifted opinions were human or in fact also AI bots.
The whole thing is dodgy for lack of controls, this isn’t science it’s marketing
If they were personalized wouldn’t that mean they shouldn’t really receive that many upvotes other than maybe from the person they were personalized for?
Their success metric was to get the OP to award them a ‘Delta’, which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.
I would assume that people in a similar demographics are interested in similar topics. Adjusting the answer to a person within a demographic would therefore adjust it to all people within that demographic and interested in that specific topic.
Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.
Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”
What are they going to do? Ban the last humans on there having a differing opinion?
Next step for those fucks is verification that you are an AI when signing up.
This is probably the most ethical you’ll ever see it. There are definitely organizations committing far worse experiments.
Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.
Yeah I was thinking exactly this.
It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?
Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.
actors all over the world are performing trials exactly like this all the time
In marketing speak this is called A/B testing.
But you aren’t allowed to mention Luigi
You’re banned for inciting violence.
Free Luigi
Eat the rich
The police are a terrorist organization
Trump and Epstein bff
Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.
You put it better than I could. I’ve noticed this too.
I used to just disengage. Now when I find myself talking to someone like this I use my own local LLM to generate replies just to waste their time. I’m doing this by prompting the LLM to take a chastising tone, point out their fallacies and to lecture them on good faith participation in online conversations.
It is horrifying to see how many bots you catch like this. It is certainly bots, or else there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation despite talking to something that is obviously (to a trained human) just generating comments.
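The commenter doesn’t name their setup, so as a rough sketch only: this assumes a local model served through Ollama’s REST API (`/api/generate`), with `llama3` as a placeholder model name. The system prompt encodes the chastising, fallacy-pointing, good-faith-lecturing tone described above.

```python
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are replying to a suspected bot. Take a chastising tone, "
    "point out any logical fallacies in the comment, and lecture the "
    "author on good-faith participation in online conversations. "
    "Be verbose; the goal is to waste the bot's time, not to persuade it."
)

def build_request(comment: str, model: str = "llama3") -> dict:
    """Assemble an Ollama /api/generate payload for a time-wasting reply."""
    return {
        "model": model,
        "system": SYSTEM_PROMPT,
        "prompt": f"Reply to this comment:\n\n{comment}",
        "stream": False,  # ask for one complete response, not a token stream
    }

def generate_reply(comment: str, host: str = "http://localhost:11434") -> str:
    """Send the payload to a locally running Ollama server and return its reply."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_request(comment)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Any local inference server with a comparable HTTP endpoint would work the same way; only the payload shape would change.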
Would you mind elaborating? I’m naive and don’t really know what to look for…
I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. Over a long time of talking online, you get used to talking with people and seeing how they respond to different rhetorical strategies.
In these bot infested social spaces it seems like there are a large number of commenters who just seem to argue way too well and also deploy a huge amount of fallacies. This could be explained, individually, by a person who is simply choosing to argue in bad faith; but, in these online spaces there seem to be too many commenters who seem to deploy these tactics compared to the baseline that I’ve established in my decades of talking to people online.
In addition, what you see in some of these spaces are commenters who seem to have a very structured way of arguing. Like they’ve picked your comment apart into bullet points and then selected arguments against each point which are technically on topic but misleading in a way.
I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.
For example, if you could somehow measure how many good faith comments vs how many fallacy-laden comments in a given community there would likely be a ratio that is normal (i.e. there are 10 people who are bad at arguing for every 1 person who is good at arguing and, of those skilled arguers 10% of them are commenting in bad faith and using fallacies) and you could compare this ratio to various online topics to discover the ones that appear to be botted.
That way you could objectively say that, on the topic of Gun Control in one specific subreddit, we’re seeing an elevated bad-faith:good-faith commenter ratio, and therefore know that this topic/subreddit is being actively LLM-botted. This information could be used to deploy anti-bot countermeasures (captchas, for example).
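The ratio idea above can be sketched as a simple screening pass. The 1-in-10 baseline and the flagging threshold are illustrative assumptions taken from the hypothetical in the comment, not measured values, and the bad/good-faith labels are assumed to come from some upstream classifier:

```python
# Flag topics whose bad-faith:good-faith comment ratio sits well above
# an assumed site-wide baseline.
BASELINE_BAD_PER_GOOD = 0.1  # assumption: ~1 bad-faith commenter per 10 good-faith
FLAG_MULTIPLIER = 3.0        # assumption: 3x baseline counts as suspicious

def flag_botted_topics(counts: dict) -> list:
    """counts maps topic -> (bad_faith_comments, good_faith_comments).

    Returns the topics whose ratio exceeds the assumed baseline by the
    flagging multiplier.
    """
    flagged = []
    for topic, (bad, good) in counts.items():
        ratio = bad / max(good, 1)  # guard against division by zero
        if ratio > BASELINE_BAD_PER_GOOD * FLAG_MULTIPLIER:
            flagged.append(topic)
    return flagged

# Example: a gun-control thread with 40 bad-faith vs 50 good-faith comments
# has ratio 0.8, well above 3x the assumed 0.1 baseline, so it is flagged;
# a gardening thread at 2 vs 100 (ratio 0.02) is not.
```

The hard part, of course, is the upstream good-faith/bad-faith scoring itself, which is exactly the objective metric the comment says is still missing.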
Thanks for replying
Do you think response time could also indicate that a user is a bot? I’ve had an interaction that I chalked up to someone using AI, but looking back now I’m questioning if there was much human involvement at all just due to how quickly the detailed replies were coming in…
It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.
If they’re responding within a second or two with a giant wall of text it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.
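The back-of-the-envelope check being discussed can be written down directly; the ~100 WPM figure comes from the comment above, and the function is just an illustration of the heuristic, not a reliable detector (as noted, a fast typist camped on the notification screen also passes small gaps):

```python
def humanly_typable(reply_text: str, seconds_elapsed: float,
                    max_wpm: float = 100.0) -> bool:
    """Could a fast human typist (~100 WPM, per the comment above) have
    produced this reply in the time elapsed since the parent comment?"""
    words = len(reply_text.split())
    min_seconds_needed = words / max_wpm * 60.0
    return seconds_elapsed >= min_seconds_needed

# A 300-word wall of text arriving 20 seconds later fails the check:
# 300 words / 100 WPM = 3 minutes of typing at minimum.
```

It only ever rules a reply *out* as human-typed; a slow reply proves nothing, since a bot can simply wait before posting.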
Added to idcaboutprivacy (which is open source). If there are any other similar links, feel free to add them or send them my way.
This is a really interesting paragraph to me because I definitely think these results shouldn’t be published or we’ll only get more of these “whoopsie” experiments.
At the same time though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later when they become even more persuasive and natural-sounding. The article mentions that in studies humans already have trouble telling the difference between AI written sentences and human ones.
This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.
I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.
To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.
The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?
black on white, ew
I’m pretty sure that only applies because a majority of people are morons. There’s a vast gap between the most intelligent 2% (1 in 50) and average intelligence.
Also please put digital text on white on black instead of the other way around
I agree, but that doesn’t change anything, right? Even if you are in the 2% most intelligent and you’re somehow immune, you still have to live with the rest who do get influenced by AI. And they vote. So it’s never just a they problem.
What? Intelligent people get fooled all the time. The NXIVM cult was made up mostly of reasonably intelligent women. Shit that motherfucker selected for intelligent women.
You’re not immune. Even if you were, you’re incredibly dependent on people of average to lower intelligence on a daily basis. Our planet runs on the average intelligence.
propaganda matters.
Yes. Much more than we peasants all realized.
Not sure how everyone hasn’t expected that Russia has been doing this the whole time on conservative subreddits…
Those of us who are not idiots have known this for a long time.
They beat the USA without firing a shot.
Or somebody else is doing the manipulation and is successfully putting the blame on Russia.
Mainly I didn’t really expect it, since the old pre-AI methods of propaganda worked so well for the US conservatives’ self-destructive agenda that AI didn’t seem necessary.
Russia are every bit as active in leftist groups whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.
They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.
Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.
There have been a few times over the last few years that my “bullshit, this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.
Meaning these comments/videos are aimed to look like they come from left-leaning folks, but are meant to make the left look bad/extremist in order to push people away from working-class movements.
I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the Internet and do proper vetting of their sources.
The difference is in which groups are consequentially making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).
100% agree though.
Lol, coming from the people who sold all of your data with no consent for AI research
The quote is not coming from Reddit, but from a professor at Georgia Institute of Technology