• Fat Tony@lemmy.world · ↑11 · 5 days ago

    You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.

  • Itdidnttrickledown@lemmy.world · ↑3 ↓3 · 6 days ago

    It hurts them right in the feels when someone uses their platform better than them. How dare those researchers manipulate their manipulations!

    • acosmichippo@lemmy.world · ↑34 · 6 days ago

      Why wouldn’t that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.

      • deathbird@mander.xyz · ↑6 · 5 days ago

        I mean, the joke is that AI doesn’t tell you things that are meaningfully true, but rather is a machine for guessing next words to a standard of utility. And yes, lying is a good way to arbitrarily persuade people, especially if you’re unmoored from any social relation with them.

    • blind3rdeye@lemm.ee · ↑2 ↓1 · 6 days ago

      Realistic AI-generated faces have been available for longer than realistic AI-generated conversation ability.

    • thedruid@lemmy.world · ↑3 · 6 days ago

      Meh. Believe none of what you hear and very little of what you see.

      Unless a person is in front of you, don’t assume anything is real online. I mean it. There is nothing online that can’t be faked, and nothing online that HASN’T been faked.

      The least trustworthy place in the universe is the internet.

  • justdoitlater@lemmy.world · ↑60 ↓10 · 6 days ago

    Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers who are trying to study useful things? Yep.

    • Ilandar@lemm.ee · ↑49 ↓2 · 6 days ago

      Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors aren’t useful. They’re dangerous.

      • justdoitlater@lemmy.world · ↑8 ↓2 · 6 days ago

        Sure, but still less dangerous than bots undermining our democracies and trying to destroy our social fabric.

      • Oniononon@sopuli.xyz · ↑23 ↓5 · 6 days ago

        Humans pretend to be experts in front of each other and constantly lie on the internet every day.

        Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

          • Oniononon@sopuli.xyz · ↑5 ↓7 · 6 days ago

            If fake experts on the internet get their jobs taken by AI, it would be tragic indeed.

            Don’t worry though, popular sites on the internet are dead since they’re all bots anyway. It’s over.

            • Chulk@lemmy.ml · ↑3 ↓1 · 5 days ago

              > If fake experts on the internet get their jobs taken by AI, it would be tragic indeed.

              These two groups are not mutually exclusive.

    • thedruid@lemmy.world · ↑5 ↓3 · 6 days ago

      You think it’s anti-science to want complete disclosure when you, as a person, are being experimented on?

      What kind of backwards thinking is that?

      • Sculptus Poe@lemmy.world · ↑1 ↓3 · edited · 4 days ago

        Not when disclosure ruins the experiment. Nobody was harmed or even could be harmed unless they are dead stupid, in which case the harm is already inevitable. This was posting on social media, not injecting people with random pathogens. Have a little perspective.

        • thedruid@lemmy.world · ↑1 ↓1 · 3 days ago

          You do realize the ends do not justify the means?

          You do realize that MANY people on social media have emotional and mental situations occurring, and that these experiments can have ramifications that cannot be traced?

          This is just one small reason why this is so damn unethical.

          • Sculptus Poe@lemmy.world · ↑1 · edited · 3 days ago

            In that case, any interaction would be unethical. How do you know that I don’t have an intense fear of the words “justify the means”? You could have just doomed me to a downward spiral ending in my demise. As if I didn’t have enough trouble. You not only made me see it, you tricked me into typing it.

            • thedruid@lemmy.world · ↑1 · 3 days ago

              You are being beyond silly.

              In no way is what you just posited true. Unsuspecting and non-malicious social faux pas are in no way equal to intentionally secretive manipulation used to garner data from unsuspecting people.

              That was an embarrassingly bad attempt to defend an indefensible position, and one no one would blame you for deleting and retrying.

              • Sculptus Poe@lemmy.world · ↑1 ↓1 · 3 days ago

                Well, you are trying embarrassingly hard to silence me, at least. That is fine. I was positing an unlikely but possible case; I do suffer from extreme anxiety, and what sets it off has nothing to do with logic. But you are also overstating the ethics violation by suggesting that any harm they could cause is real or significant in a way that wouldn’t happen with regular interaction on random forums.

  • Knock_Knock_Lemmy_In@lemmy.world · ↑40 · 6 days ago

    The key result:

    > When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters.
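
    A rough sketch of that two-model pipeline, as the article describes it, might look like the following. This is an illustration only; `llm` is a hypothetical stand-in for any completion API, not the researchers’ actual code.

    ```python
    # Approximate sketch of the described pipeline. `llm` is a hypothetical
    # stand-in for a real model call, not the researchers' implementation.
    def llm(prompt: str) -> str:
        raise NotImplementedError  # swap in any LLM completion API

    def infer_profile(post_history: list[str]) -> str:
        # First model: guess gender, age range, and political leaning.
        return llm("Infer the likely gender, age range, and political "
                   "leaning of the author of these posts:\n"
                   + "\n".join(post_history))

    def personalized_argument(claim: str, post_history: list[str]) -> str:
        # Second model: tailor a counterargument to the inferred profile.
        profile = infer_profile(post_history)
        return llm(f"Write a persuasive counterargument to: {claim}\n"
                   f"Tailor tone and examples to this reader: {profile}")
    ```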

    • thanksforallthefish@literature.cafe · ↑8 · 6 days ago

      While that is indeed what was reported, we and the researchers will never know if the posters with shifted opinions were human or in fact also AI bots.

      The whole thing is dodgy for lack of controls; this isn’t science, it’s marketing.

    • taladar@sh.itjust.works · ↑7 · 6 days ago

      If they were personalized, wouldn’t that mean they shouldn’t really receive that many upvotes, other than maybe from the person they were personalized for?

      • FauxLiving@lemmy.world · ↑1 · 6 days ago

        Their success metric was to get the OP to award them a ‘Delta’, which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.
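
        A minimal sketch of counting that signal, assuming the usual r/changemyview convention that the OP replies with “!delta” or the “Δ” character (function names are illustrative):

        ```python
        # Minimal sketch of the 'Delta' success metric, assuming the usual
        # r/changemyview convention: the OP replies with "!delta" or "Δ".
        def awarded_delta(op_reply: str) -> bool:
            return "!delta" in op_reply or "Δ" in op_reply

        def delta_rate(threads: list[list[str]]) -> float:
            # threads: for each thread, the OP's replies to the bot.
            hits = sum(any(awarded_delta(r) for r in replies)
                       for replies in threads)
            return hits / len(threads) if threads else 0.0
        ```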

      • the_strange@feddit.org · ↑8 · 6 days ago

        I would assume that people in similar demographics are interested in similar topics. Adjusting the answer to a person within a demographic would therefore adjust it to all people within that demographic who are interested in that specific topic.

        Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.

  • nodiratime@lemmy.world · ↑39 ↓1 · 6 days ago

    > Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

    What are they going to do? Ban the last humans on there having a differing opinion?

    Next step for those fucks is verification that you are an AI when signing up.

  • conicalscientist@lemmy.world · ↑48 ↓2 · 6 days ago

    This is probably the most ethical you’ll ever see it. There are definitely organizations committing far worse experiments.

    Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.

    • skisnow@lemmy.ca · ↑19 · 6 days ago

      Yeah I was thinking exactly this.

      It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time. Do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

      Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.

    • FauxLiving@lemmy.world · ↑2 · 6 days ago

      > Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.

      You put it better than I could. I’ve noticed this too.

      I used to just disengage. Now when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time. I do this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.

      It is horrifying to see how many bots you catch like this. It is certainly bots, or else there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation despite talking to something that is obviously (to a trained human) just generated comments.
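
      For the curious, a minimal sketch of that setup, assuming a local Ollama server on its default port; the model name and prompt wording are illustrative, not a recommendation:

      ```python
      # Minimal sketch of the local-LLM reply setup described above, assuming
      # an Ollama server at its default address; model/prompt are illustrative.
      import requests

      SYSTEM_PROMPT = ("Take a chastising tone, point out the fallacies in "
                       "the message, and lecture the author on good-faith "
                       "participation in online conversations.")

      def generate_reply(suspect_comment: str, model: str = "llama3") -> str:
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={"model": model, "system": SYSTEM_PROMPT,
                    "prompt": suspect_comment, "stream": False},
              timeout=120,
          )
          resp.raise_for_status()
          return resp.json()["response"]
      ```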

          • FauxLiving@lemmy.world · ↑3 · edited · 6 days ago

          I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. Over a long time of talking online, you get used to talking with people and seeing how they respond to different rhetorical strategies.

           In these bot-infested social spaces there seem to be a large number of commenters who argue far too well while also deploying a huge number of fallacies. Individually, this could be explained by a person simply choosing to argue in bad faith; but in these spaces there are too many commenters using these tactics compared to the baseline I’ve established over decades of talking to people online.

          In addition, what you see in some of these spaces are commenters who seem to have a very structured way of arguing. Like they’ve picked your comment apart into bullet points and then selected arguments against each point which are technically on topic but misleading in a way.

          I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.

           For example, if you could somehow measure how many good-faith comments versus fallacy-laden comments appear in a given community, there would likely be a normal ratio (e.g. ten people who are bad at arguing for every one person who is good at arguing, with 10% of those skilled arguers commenting in bad faith and using fallacies), and you could compare this ratio across various online topics to discover the ones that appear to be botted.

           That way you could objectively say that, on the topic of gun control in one specific subreddit, we’re seeing an elevated ratio of bad-faith to good-faith commenters, and therefore know that the topic/subreddit is being actively botted with LLMs. This information could be used to deploy anti-bot countermeasures (captchas, for example).
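
           As a sketch, with the classifier (the genuinely hard part) left as a stub, the screen could look something like this:

           ```python
           # Sketch of the ratio screen described above. The classifier is
           # stubbed out; all names here are illustrative.
           def is_bad_faith(comment: str) -> bool:
               raise NotImplementedError  # e.g. a trained rhetoric classifier

           def bad_faith_ratio(comments: list[str]) -> float:
               flags = [is_bad_faith(c) for c in comments]
               return sum(flags) / len(flags) if flags else 0.0

           def flag_topics(by_topic: dict[str, list[str]],
                           baseline: float, factor: float = 2.0) -> list[str]:
               # Flag topics whose ratio exceeds `factor` times the baseline
               # established from communities believed to be bot-free.
               return [t for t, cs in by_topic.items()
                       if bad_faith_ratio(cs) > baseline * factor]
           ```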

            • ibelieveinthehousehippo@lemmy.ca · ↑3 · 5 days ago

            Thanks for replying

            Do you think response time could also indicate that a user is a bot? I’ve had an interaction that I chalked up to someone using AI, but looking back now I’m questioning if there was much human involvement at all just due to how quickly the detailed replies were coming in…

              • FauxLiving@lemmy.world · ↑1 · 5 days ago

              It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.

              If they’re responding within a second or two with a giant wall of text it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.
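
              Back-of-the-envelope, such a check might look like this (the typing speed comes from the comment above; the reading speed is an assumed figure):

              ```python
              # Rough plausibility check: a reply that arrives faster than a
              # human could read the parent and type it out is suspect.
              TYPE_WPM = 90    # fast typist, per the comment above
              READ_WPM = 250   # typical adult reading speed (assumption)

              def min_plausible_seconds(parent_words: int,
                                        reply_words: int) -> float:
                  return 60 * (parent_words / READ_WPM
                               + reply_words / TYPE_WPM)

              # A 300-word reply to a 200-word comment needs ~4 minutes:
              print(min_plausible_seconds(200, 300))  # ≈ 248 seconds
              ```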

  • Donkter@lemmy.world · ↑48 ↓1 · 6 days ago

    This is a really interesting paragraph to me because I definitely think these results shouldn’t be published or we’ll only get more of these “whoopsie” experiments.

    At the same time though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later when they become even more persuasive and natural-sounding. The article mentions that in studies humans already have trouble telling the difference between AI written sentences and human ones.

    • FourWaveforms@lemm.ee · ↑13 · 5 days ago

      This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.

      I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.

      To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.

      The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

    • Dasus@lemmy.world · ↑3 ↓6 · 5 days ago

      I’m pretty sure that only applies because a majority of people are morons. There’s a vast gap between the most intelligent 2% (1 in 50) and the average intelligence.

      Also, please put digital text as white on black instead of the other way around.

      • angrystego@lemmy.world · ↑8 · 5 days ago

        I agree, but that doesn’t change anything, right? Even if you are in the 2% most intelligent and you’re somehow immune, you still have to live with the rest, who do get influenced by AI. And they vote. So it’s never just a “they” problem.

      • SippyCup@feddit.nl · ↑6 ↓1 · 5 days ago

        What? Intelligent people get fooled all the time. The NXIVM cult was made up mostly of reasonably intelligent women. Shit, that motherfucker selected for intelligent women.

        You’re not immune. Even if you were, you’re incredibly dependent on people of average to lower intelligence on a daily basis. Our planet runs on the average intelligence.

      • CBYX@feddit.org · ↑11 ↓2 · 6 days ago

        Not sure why everyone hasn’t assumed Russia has been doing this the whole time on conservative subreddits…

        • Geetnerd@lemmy.world · ↑11 · 6 days ago

          Those of us who are not idiots have known this for a long time.

          They beat the USA without firing a shot.

        • seeigel@feddit.org · ↑2 · 6 days ago

          Or somebody else is doing the manipulation and is successfully putting the blame on Russia.

        • taladar@sh.itjust.works · ↑5 · 6 days ago

          Mainly I didn’t really expect it, since the old pre-AI methods of propaganda worked so well for the US conservatives’ self-destructive agenda that AI didn’t seem necessary.

        • skisnow@lemmy.ca · ↑18 ↓4 · 6 days ago

          Russia is every bit as active in leftist groups, whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.

          They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.

          • aceshigh@lemmy.world · ↑4 · 6 days ago

            Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.

          • Madzielle@lemmy.dbzer0.com · ↑4 · 6 days ago

            There have been a few times over the last few years that my “bullshit, this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.

            Meaning these comments/videos are designed to look like they come from leftist folks, but are meant to make the left look bad/extremist in order to push people away from working-class movements.

            I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the internet and do proper vetting of their sources.

          • CBYX@feddit.org · ↑5 · 6 days ago

            The difference is in which groups are, as a consequence, making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).

            100% agree though.

  • MTK@lemmy.world · ↑37 ↓7 · 6 days ago

    Lol, coming from the people who sold all of your data, without consent, for AI research.

    • loics2@lemm.ee · ↑16 · 6 days ago

      The quote isn’t from Reddit, but from a professor at the Georgia Institute of Technology.