• perestroika@lemm.ee

    With generative AI (I would say language models are a subset of this, since they generate text), you can trick a reasonably naive person for a long period (and a reasonably skeptical person for a short period):

    a) into believing that something really happened

    b) into believing that many people support an idea

    This is potentially highly damaging to electoral representative democracy.

    There exists a form of representative democracy (representative direct democracy) that's invulnerable to this - it's called sortition and allocates offices by lottery - but no country currently uses it and no country seems prepared to.

    So in the short term, the solution is to be skeptical of the information we receive. For many people, that's quite a lot to ask.

    • DarkWinterNights@lemmy.world

      Especially online - detection is possible, at best, only for the most critical readers and only over sustained interactions, but you don't get that online, and especially not with drive-by commenting, which is the norm on media/social media.

      Phony consensus and bad rhetoric are one system prompt away, and I'd argue there's probably no escaping it, even for the most civic-minded and informed people. Your best bet in the coming months is to stay aware that it's happening largely undetected, accept that we've all fallen for it, and explain it to as many people as possible.

      The big problem will be that the vast majority think they can tell, that they're uninfluenced, and that they have the inside line.