First off, sorry if this is the wrong community to post to - I’ll move it somewhere else if it doesn’t fit here.

My best friend is quite often a contrarian just for the sake of it, I feel. Discussing politics, veganism, the problems with using Amazon, what have you, with him is nigh impossible because he insists on his opinion and won’t budge. I feel like he considers himself superior to other people, or at least to me: he won’t change his mind, doesn’t hear other sides, and argues for the sake of arguing.

Now, in a recent discussion, I asked him if he knew why images aren’t displayed properly in my Firefox-fork browser (Mull). He gave an answer and asked why I would use a custom browser instead of Firefox itself, to which I responded that it’s more privacy-focused and that I didn’t like Mozilla’s implementation of AI in their browser.

Long story short, it devolved into a lengthy discussion about AI: how the fear of AI is based on ignorance and a lack of knowledge, how it’s fine that AI is used for creative projects because in most cases it’s an assisting tool that aids creativity, doesn’t steal jobs, etc. Essentially, that it’s just a tool to be used like a hammer would be.

What pisses me off the most about all this is that he subtly implies that I don’t know enough about the subject to have an opinion on it and that I don’t have any sources to prove my points so they’re essentially void.

How do I deal with this? Whatever facts I name, he just shrugs off with “counter”-arguments. I’ve sent him articles that he doesn’t accept as sources. This has been going on for a couple of hours now and I don’t know what to tell him. Do you guys have sources I could shove in his face? Any other facts I should throw his way?

Thank you in advance

Edit: A thing to add: I wasn’t trying to convince him that AI itself is bad - there are useful applications of AI that I won’t ignore. What concerned me is the way AI is put into any and all products nowadays that don’t need AI to function at all, like some AI-powered light bulbs or whatever; that creative jobs and the arts are actively harmed by people scraping data and art from artists to create derivative “art”; and that it’s used to influence politics (Trump, Gaza). These things. The way AI is used, unmonitored as it is, is just dangerous, I feel.

  • rand_alpha19@moist.catsweat.com · 3 months ago
    It’s not theft; the artist still has their work. If anything, it’s copyright infringement. When some 16-year-old aspiring artist uses another artist’s work as a reference or traces something, what’s that?

    I guess you could call it practice, but then doesn’t AI do the same thing by iterating based on its dataset? Some AI outputs look terrifying and janky - so did my art when I was younger.

    I dunno, like this issue isn’t as simple as I used to think it was. If we look outside of economics (because artists need money to survive, like all of us) is there actually a problem here?

    I’m still trying to figure out how I feel about all of this, but it’s pretty obvious AI isn’t just gonna go away like NFTs did. I really am interested in discussion, I’m not trolling.

    • howrar@lemmy.ca · 3 months ago
      Whatever you decide to call it, the problem exists.

      When you trace or use existing art as a reference, you’re using it to learn, not passing it off as your own design; training an AI model is the same in that respect, so I don’t think the training part is a problem. The problem comes when producing work. A generative model will only produce things that are essentially interpolations of the artworks it has trained on. A human artist interpolates between artworks they have seen from other artists as well as their own lived experiences, and extrapolates by evaluating how more avant-garde elements tickle their emotions. Herein lies the argument that generative AI in its current state doesn’t produce anything novel and just regurgitates what it has seen.

      There’s also the problem of “putting words in someone else’s mouth”. Everyone has a unique art style (to a certain extent), just like how everyone has a unique writing style or a unique voice. I’ll speak on voice first, since more of us can relate to that. Having someone copy your voice to make it say things you did not say is something many will be very uncomfortable with. To an artist, having their art style or writing style copied feels the same.

      The economic side is also a problem. And while I don’t expect generative AI to go away, it can be done in a way that is fair to the people whose work has made it possible and allows them to continue doing what they do. We should be striving towards that.

      • rand_alpha19@moist.catsweat.com · 3 months ago
        The problem comes when producing work. A generative model will only produce things that are essentially interpolations of the artworks it has trained on. A human artist interpolates between artworks they have seen from other artists as well as their own lived experiences, and extrapolates […].

        Yes, but how does that negate its usefulness as a tool or a foundation to start from? I never made any assertion that AI is able to make connections or possess any sort of creativity.

        Herein lies the argument that generative AI in its current state doesn’t produce anything novel and just regurgitates what it has seen.

        There’s a common saying that there is no such thing as an original story, because all fiction builds on other fiction. Can you see how that would apply here? Just because thing A and thing B exist doesn’t mean that thing C cannot possibly be interesting or substantially different. The brainstorming potential of an AI with a significant dataset seems functionally identical to an artist searching for references on Google (or Pixiv).

        Having someone copy your voice to make it say things you did not say is something many will be very uncomfortable with.

        So is this your main issue? I’m just not sure that’s really a valid reason, since many people are very uncomfortable with, like, organ donation, pig heart valves, animal agriculture, ghostwriters, real-person fanfiction, or data collection by Google. I’m sure there is something in the world that most people see as either positive or neutral that makes you very uncomfortable. For me, it’s policing.

        On the economic front, I agree - these companies should have been licensing these images from the start and we should be striving to create some sort of open database for artists so that they are compensated. It’s possible that awarding royalties, while flawed, may be a good framework since they could potentially be paid for all derivative works and not simply the image itself. But that may be prohibitively expensive due to the sheer number of iterations being performed, so it’s hard to say.

        • howrar@lemmy.ca · 3 months ago
          Yes, but how does that negate its usefulness as a tool or a foundation to start from? I never made any assertion that AI is able to make connections or possess any sort of creativity.

          It is useful. I never said it wasn’t. I’m pointing out problematic uses of an otherwise good tool.

          Maybe it’s easier to think about this through the lens of the end goal. We want good art to exist, and we want good art to continue being produced for the foreseeable future. What inhibits this from happening? If artists stop producing art and AI can’t replace them, then we stop getting art. The point about current AI not being able to create the kind of art we care about is that we still need human artists. So how do we ensure that human artists keep producing? By making sure they get properly compensated for the value they produce and that their work does not get used in ways they don’t like. I’m personally not a fan of forcing people to work, so my preferred solution would be to give artists what they want in exchange for their work.

          There’s a common saying that there is no such thing as an original story, because all fiction builds on other fiction. Can you see how that would apply here? Just because thing A and thing B exist doesn’t mean that thing C cannot possibly be interesting or substantially different. The brainstorming potential of an AI with a significant dataset seems functionally identical to an artist searching for references on Google (or Pixiv).

          I’m not sure I understand this correctly. Are you saying that an interpolation between two existing artworks can still make interesting artwork? If so, then yes, but if that’s all you’re doing, it severely limits the space of art you have access to compared to something that also interpolates with a human being’s unique life experiences and is capable of extrapolating by optimizing for the emotional cost function.