We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

  • Lumidaub@feddit.org · 4 days ago

    adding missing information

    Did you mean: hallucinate on purpose?

    Wasn’t he going to lay off the ketamine for a while?

    Edit: … I hadn’t seen the More Context and now I need a fucking beer or twenty fffffffffu-

    • Carmakazi@lemmy.world · 4 days ago

      He means rewrite every narrative to his liking, like the benevolent god-sage he thinks he is.

    • BreadstickNinja@lemmy.world · 4 days ago

      Yeah, let’s take a technology already known for filling in gaps with invented nonsense and use that as our new training paradigm.

    • Phoenicianpirate@lemm.ee · 3 days ago

      Wasn’t he the children’s author who published that book about talking animals learning the value of hard work or something?

      • rmuk@feddit.uk · 3 days ago

        That’d be esteemed British author Georgie Orrell, author of such whimsical classics as “Now the Animals Are Running The Farm!”, “My Big Day Out At Wigan Pier” and, of course, “Winston’s Zany Eighties Adventure”.

  • FaceDeer@fedia.io · 4 days ago

    I’m interested to see how this turns out. My prediction is that the AI trained from the results will be insane, in the unable-to-reason-effectively sense, because we don’t yet have AIs capable of rewriting all that knowledge and keeping it consistent. Each little bit of it considered in isolation will fit the criteria that Musk provides, but taken as a whole it’ll be a giant mess of contradictions.

    Sure, the existing corpus of knowledge doesn’t all say the same thing either, but its contradictions can be traced to deeper, consistent patterns. An AI trained off of Reddit will learn drastically different outlooks and information from /r/conservative comments than it would from /r/news comments, but the fact that those are two identifiable communities means it’d see a higher-order consistency to this. If anything, that’ll help it understand that there are different views in the world.

    • leftzero@lemmynsfw.com · 4 days ago

      in the unable-to-reason-effectively sense

      That’s all LLMs by definition.

      They’re probabilistic text generators, not AI. They’re fundamentally incapable of reasoning in any way, shape or form.

      They just take a text and produce the most probable word to follow it, according to their model; that’s all.
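
      To make that concrete, here’s a minimal sketch of that loop: a toy bigram model that always emits the word it saw most often as a follower during training. The corpus here is made up purely for illustration:

      ```python
      from collections import Counter, defaultdict

      # Made-up "training corpus", purely for illustration.
      corpus = "the model predicts the next word the model saw most often".split()

      # Count how often each word follows each other word.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def most_probable_next(word):
          """Return the follower seen most often in training, if any."""
          counts = following.get(word)
          return counts.most_common(1)[0][0] if counts else None

      # Generate by greedily appending the most probable word, over and over.
      text = ["the"]
      for _ in range(6):
          nxt = most_probable_next(text[-1])
          if nxt is None:
              break
          text.append(nxt)
      print(" ".join(text))  # "the model predicts the model predicts the"
      ```

      A real LLM replaces the lookup table with a transformer over subword tokens, but the generation loop is the same: score continuations, pick a likely one, append, repeat.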

      What Musk’s plan (using an LLM to regurgitate as much of its model as it can, expunging all references to Musk being a pedophile and whatnot from the resulting garbage, adding some racism and disinformation for good measure, and training a new model exclusively on that slop) will produce is a significantly more limited model, one more prone to hallucinations, that occasionally spews racism and disinformation.

    • ricecake@sh.itjust.works · 4 days ago

      LLMs are prediction tools. What it will produce is a corpus that doesn’t use certain phrases, or will use others more heavily, but will have the same aggregate statistical “shape”.

      It’ll also be preposterously hard for them to work out, since the data it was trained on always has someone eventually disagreeing with the racist fascist bullshit they’ll get it to focus on. Sooner or later it’ll start saying things that contradict whatever it was supposed to be saying, because statistically some manner of contrary opinion is eventually voiced.
      They won’t be able to check the entire corpus for weird stuff like that, or delights like MLK speeches being rewritten to be anti-integration, so the next version will have the same basic information, but passed through a filter that makes it sound like a drunk incel talking about Asian women.
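
      To put a rough number on that “same aggregate shape” point, here’s a toy sketch (corpus and banned phrase invented for illustration): delete every line containing a banned phrase and the overall word-frequency distribution barely moves, because the offending material is a sliver of the whole:

      ```python
      from collections import Counter

      # Invented mini-corpus: one "banned" line among many ordinary ones.
      corpus = [
          "the weather is nice today",
          "the game was close last night",
          "BANNED_PHRASE appears in this line",
      ] + ["the report covers the usual topics"] * 97

      # "Sanitize" by dropping every line containing the banned phrase.
      filtered = [line for line in corpus if "BANNED_PHRASE" not in line]

      def word_freqs(lines):
          counts = Counter(w for line in lines for w in line.split())
          total = sum(counts.values())
          return {w: c / total for w, c in counts.items()}

      before, after = word_freqs(corpus), word_freqs(filtered)

      # Total variation distance between the two word distributions.
      words = set(before) | set(after)
      tv = 0.5 * sum(abs(before.get(w, 0) - after.get(w, 0)) for w in words)
      print(f"total variation distance: {tv:.4f}")  # tiny: the shape survives
      ```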

  • finitebanjo@lemmy.world · 4 days ago

    “If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!”

    ~Fucking Dumbass
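
    Joking aside, accuracies compound rather than add. A back-of-the-envelope simulation, with made-up numbers, of a student model trained purely on a teacher that’s right 84% of the time:

    ```python
    import random

    random.seed(0)
    N = 100_000
    TEACHER_ACC = 0.84       # chance the teacher labels an item correctly
    STUDENT_FIDELITY = 0.84  # chance the student reproduces the teacher's label

    student_correct = 0
    for _ in range(N):
        truth = random.random() < 0.5                    # ground-truth label
        teacher = truth if random.random() < TEACHER_ACC else not truth
        student = teacher if random.random() < STUDENT_FIDELITY else not teacher
        student_correct += (student == truth)

    # Expected: 0.84 * 0.84 + 0.16 * 0.16 = 0.7312 -- not 1.68, not even 0.84.
    print(student_correct / N)
    ```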

  • AbidanYre@lemmy.world · 4 days ago

    I have Twitter blocked at my router.

    Please tell me one of the “politically incorrect but objectively true” facts was that Elon is a pedophile.

  • squaresinger@lemmy.world · 4 days ago

    First error to correct:

    We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing ~~information~~ errors and deleting ~~errors~~ information.

  • FreakinSteve@lemmy.world · 4 days ago

    By the way, when you refuse to band together, organize, and dispose of these people, they entrench themselves further in power. Everyone ignored Kari Lake as a harmless kook and she just destroyed Voice of America. That loudmouthed MAGA asshole in your neighborhood is going to commit a murder.

  • GreenKnight23@lemmy.world · 4 days ago

    if you won’t tell my truth I’ll force you to acknowledge my truth.

    nothing says abusive asshole more than this.

  • ThePowerOfGeek@lemmy.world · 4 days ago

    That’s not how knowledge works. You can’t just have an LLM hallucinate to fill in the gaps in knowledge and call it good.

    • D_C@lemm.ee · 4 days ago

      I’ll have you know he’s seeing a medical professional at least once a day. Sometimes multiple times!!!

      (On an absolutely and completely unrelated note ketamine dealers are medical professionals, yeah?)

  • antihumanitarian@lemmy.world · 4 days ago

    Most, if not all, leading models use synthetic data extensively to do exactly this. However, the synthetic data needs to be well defined and essentially programmed by the data scientists. If you don’t define the data very carefully, ideally as math or programs you can verify as correct automatically, it’s worse than useless. The scope is usually very narrow, no Hitchhiker’s Guide to the Galaxy rewrite.
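
    For illustration, a minimal sketch of that generate-then-verify pattern, with everything (names, the 10% error rate) invented: propose arithmetic Q/A pairs, recompute the answer with an independent checker, and keep only the pairs that pass:

    ```python
    import random

    random.seed(42)

    def propose_example():
        """Simulate a model proposing a synthetic Q/A pair; sometimes wrong."""
        a, b = random.randint(1, 999), random.randint(1, 999)
        answer = a + b if random.random() < 0.9 else a + b + random.randint(1, 9)
        return {"prompt": f"What is {a} + {b}?", "a": a, "b": b, "answer": answer}

    def verified(ex):
        """Automatic checker: recompute the ground truth and compare."""
        return ex["answer"] == ex["a"] + ex["b"]

    # Only pairs that pass verification enter the synthetic training set.
    dataset = [ex for ex in (propose_example() for _ in range(1000)) if verified(ex)]
    print(f"kept {len(dataset)} of 1000 proposed examples")
    ```

    That kind of automatic check exists for arithmetic, or for code with tests. It doesn’t exist for “the entire corpus of human knowledge”, which is the point.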

    But in any case he’s probably just parroting whatever his engineers pitched him to look smart and in charge.