• stevedidwhat_infosec@infosec.pub
    9 months ago

    Considering I responded to your three comments? No, it wasn’t, but nice try at insulting me lmao.

    At least I can pay attention to who I’m talking to in a thread, if you wanna start throwing stones 😂

    • AVincentInSpace@pawb.social
      9 months ago

      Why would my unrelated attempts to explain why people would see AI art as valuable, or to explain that there is only one computer in the world right now powerful enough to run Midjourney (and no, the much-less-capable local models don’t count), matter to this discussion at all?

      State your counterargument to my claim that AI art serves no purpose other than to let people who don’t want to put in the effort to get good at art “create” art by stealing art from other people, or admit that you have none.

      • stevedidwhat_infosec@infosec.pub
        9 months ago

        You’re purposefully downplaying and oversimplifying what AI models do. I’m not going to continue arguing with someone who can’t debate fairly.

        Learning models don’t fucking collage shit. That’s not how the tech works.

        I’m not going to debate this shit with someone arguing in such blatant bad faith as you are, goodbye.

        If anyone else wants to actually discuss or learn more about the tech in a civil way, lmk.

        • AVincentInSpace@pawb.social
          9 months ago

          I know perfectly well how the tech works. It’s given a bunch of images and randomly rolls dice to generate weights until it can generate things that approximate its training data, then continues in that general direction using a hill climbing algorithm to approximate that data as closely as possible. Every output of a generative neural network is a combination of random noise and a pattern of pixels that appeared in its training data (possibly across several input images, but that appeared nonetheless). You cannot get something out that did not, at some point, go in. Legally speaking, that makes them a collage tool.
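          The loop described above — roll random weights, then keep nudging them in whatever direction brings the output closer to the training data — can be sketched as a toy hill climber. This is a simplification (real generative models use gradient descent over millions of parameters, not random proposals), and every name and number here is illustrative:

```python
import random

def hill_climb(target, steps=5000, step_size=0.1, seed=0):
    """Toy hill climbing: fit a weight vector to `target` data."""
    rng = random.Random(seed)
    # Initial random "dice roll" for the weights
    weights = [rng.uniform(-1, 1) for _ in target]

    def loss(w):
        # Squared distance between current output and the training data
        return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

    best = loss(weights)
    for _ in range(steps):
        # Propose a small random nudge to one weight...
        i = rng.randrange(len(weights))
        candidate = weights[:]
        candidate[i] += rng.uniform(-step_size, step_size)
        # ...and keep it only if it approximates the data more closely
        cand_loss = loss(candidate)
        if cand_loss < best:
            weights, best = candidate, cand_loss
    return weights, best

weights, final_loss = hill_climb(target=[0.2, -0.5, 0.9])
print(final_loss)  # near zero: the output closely approximates the "training data"
```

          The point the sketch makes concrete: the optimizer has no objective other than reproducing its target data as closely as possible — its output is random noise plus whatever pattern went in.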

          I ask again: do you have an argument or are you going to continue to make appeals to ignorance against mine?