• PenguinTD@lemmy.ca · 30 points · 1 year ago

    So, similar to say, a redditor trying to sound smart by googling and debating another redditor while neither has any qualifications on the topic. Got it.

    • Laneus@beehaw.org · 9 points · 1 year ago

      I wonder how much of that is an inherent part of how neural networks behave, and how much LLMs only do because they learned it from humans.

      • Kata1yst@kbin.social · 4 points · 1 year ago

        More the latter. Neural networks have been used in biomed for about a decade now fairly successfully. Look into their use of genetic algorithms, where we are effectively using the power of evolution to discover new therapies, in many cases even new uses for existing (approved) drugs.

        But ChatGPT has no way to test or improve any “designs”, it simply uses existing indexed data to infer what you want to hear as best it can. The goal is to sound smart, not be smart.
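The "power of evolution" idea above can be sketched in a few lines. This is a minimal, hypothetical toy (not any real biomed pipeline): candidates are bitstrings, "efficacy" is just the count of 1-bits (the classic OneMax stand-in for a fitness measure), and each generation keeps the fittest half and breeds mutated children from them. The key contrast with an LLM is the test-and-improve loop: every candidate is actually evaluated against a fitness function before the next round.

```python
import random

random.seed(0)  # deterministic run for reproducibility

GENOME_LEN = 20     # length of each candidate "design"
POP_SIZE = 30       # candidates per generation
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy stand-in for "measured efficacy": count of 1-bits (OneMax).
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parent genomes together.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

# Random initial population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fittest half (elitism, so the best never regresses).
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Breed mutated offspring from random parent pairs.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```

The evaluate-select-mutate loop is the part ChatGPT lacks: there is no fitness function scoring its output against reality, only a guess at what a plausible answer looks like.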

    • Hexorg@beehaw.org · 1 point · 1 year ago

      That’s actually a pretty good analogy, though a random redditor is still smarter than ChatGPT because they can actually analyze Google results, not just pattern-match situations and stitch them together.