Apparently, stealing other people’s work to create a product for money is now “fair use” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • hascat@programming.dev
    1 year ago

    That’s not the point though. The point is that the human comedian and the AI both benefit from consuming creative works covered by copyright.

    • vexikron@lemmy.zip
      1 year ago

And human comedians regularly get called out when they outright steal others’ material and present it as their own.

      The word for this is plagiarism.

And under OpenAI’s framework, when it is used in a relevant commercial context, they are functionally operating, and profiting from, the world’s most comprehensive plagiarism software.

    • Phanatik@kbin.social
      1 year ago

Yeah, except a machine is owned by a company and doesn’t consume the same way. It breaks down copyrighted works into data points so it can find the best way of putting those data points together again. If you understand anything at all about how these models work, you know they do not consume media the same way we do. It is not an entity with a thought process or consciousness (despite what the misleading marketing of “AI” would have you believe); it’s an optimisation algorithm.
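To make the “optimisation algorithm” point concrete, here is a minimal toy sketch (nothing like OpenAI’s actual training code, and deliberately oversimplified): fit next-token statistics from a corpus, then “generate” by replaying the most frequent continuation seen in training.

```python
from collections import Counter, defaultdict

# Toy "training": count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(token):
    """Return the most frequent continuation seen during 'training'."""
    return following[token].most_common(1)[0][0]

# "Generation" is just replaying the statistics of the training data:
# "cat" follows "the" twice in the corpus, "mat" only once.
print(most_likely_next("the"))
```

Real LLMs replace the counting with gradient descent over billions of parameters, but the objective is the same shape: minimise error at predicting the next token of the training data.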

        • Phanatik@kbin.social
          1 year ago

It’s so funny that this is being treated as something new. This was Grammarly’s whole schtick since before ChatGPT, so how is Grammarly’s AI any different?

          • vexikron@lemmy.zip
            1 year ago

Here is the bigger picture: the vast majority of tech-illiterate people think something is AI because, duh, it’s called AI.

It’s literally just the power of branding and marketing on the minds of poorly informed humans.

            Unfortunately this is essentially a reverse Turing Test.

The vast majority of humans know nothing about AI, and a huge majority of them, at least in some cases, can barely tell the difference between the output of what is basically brute-force, internet-wide plagiarism-and-synthesis software and actual human-created content.

            To me this basically just means that about 99% of the time, most humans are actually literally NPCs, and they only do actual creative and unpredictable things very very rarely.

            • intensely_human@lemm.ee
              1 year ago

              I call it AI because it’s artificial and it’s intelligent. It’s not that complicated.

The thing we have to remember is how scary and disruptive AI is. Given that fear, acknowledging that we have AI emerging into our world is frightening, and that pushes us to want to ignore it.

              It’s called denial, and it’s the best explanation for why people aren’t willing to acknowledge that LLMs are AI.

              • vexikron@lemmy.zip
                1 year ago

                It meets almost none of the conceptions of intelligence at all.

                It is not capable of abstraction.

It is capable of brute-force matching of similarities between various images and texts, and of presenting a wide array of text and images containing elements that reasonably well emulate a wide array of descriptors.

This convinces many people that it has a large knowledge set.

                But that is not abstraction.

                It is not capable of logic.

It is only capable of, again, brute-force analysis of an astounding amount of content, and then producing essentially the consensus view on answers to common logical problems.
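A toy caricature of that “consensus view” claim (the question/answer data here is entirely hypothetical, and this is not how any real model is implemented): collect every answer the corpus contains for a question and return the most common one, right or wrong, with nothing to fall back on for questions never seen before.

```python
from collections import Counter

# Hypothetical scraped question/answer pairs; the popular answer wins
# regardless of whether it is logically correct.
scraped_answers = {
    "is 0.999... equal to 1": ["yes", "yes", "no", "yes"],
    "monty hall: should you switch": ["yes", "no", "yes"],
}

def consensus_answer(question):
    """Return the most frequent answer seen for this question, if any."""
    answers = scraped_answers.get(question)
    if not answers:
        return None  # a truly novel question has no consensus to echo
    return Counter(answers).most_common(1)[0][0]

print(consensus_answer("is 0.999... equal to 1"))  # majority answer wins
print(consensus_answer("what is 7^7^7 mod 13"))    # nothing to echo
```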

                Ask it any complex logical question that has never been answered on the internet before and it will output irrelevant or inaccurate nonsense, likely just finding an answer to a similar but not identical question.

                The same goes for reasoning, planning, critical thinking and problem solving.

If you ask it to do any of these things in a highly specific situation, even giving it as much information as possible, and your situation is novel or simply too complex, it will again spit out a nonsense answer. That answer will be inadequate and faulty, because it just draws elements together from the closest things it has been trained on, and so it is nearly certain to be contradictory or entirely dubious, unable to account for a particularly uncommon constraint, or for constraints that are very rarely faced simultaneously.

                It is not creative, in the sense of being able to generate something novel or new.

All it does is plagiarize elements of things that are popular and that it has many examples of, and then attempt to mix them together; it will never generate a new art style or a new genre of music.

It does not even really infer things; it is not really capable of inference.

                It simply has a massive, astounding data set, and the ability to synthesize elements from this in a convincing way.

In conclusion, you have no idea what you are talking about, and you yourself have literally failed the reverse Turing Test, likely because you are not very well versed in the technicals of how this stuff actually works, thus proving my point that you simply believe it is AI because of its branding, with no critical thought applied whatsoever.

              • ParsnipWitch@feddit.de
                1 year ago

Current models aren’t intelligent. Not even by the flimsy and imprecise definition of intelligence we currently have.

Wanted to post a whole rant, but then saw vexikron already did, so I’ll spare you xD