• 0 Posts
  • 122 Comments
Joined 1 year ago
Cake day: June 15th, 2023





  • Your first point is misguided and incorrect. If you’ve ever learned something by ‘cramming’, i.e. repeatedly ingesting material until you remember it completely, you know you no longer need the book in front of you to write the material down verbatim in a test. You still discarded your training material even though you know its exact contents. If this was all the AI could do, it would indeed be an infringement machine. But you said it yourself: you need to trick the AI into doing this. It’s not made to do this, but certain sentences are indeed almost certain to show up with the right conditioning. Which is something anyone using an AI should be aware of and avoid. (Which in practice often just means: don’t ask the AI to make something infringing.)


  • This would be a good point if that were the explicit purpose of the AI. Which it isn’t. It can quote certain information verbatim despite not containing that data verbatim, through the process of learning, for the same reason we can.

    I can ask you to quote famous lines from books all day as well. Knowing those lines doesn’t mean you infringed on copyright. Now, if you were to put them to paper and sell them, you might get a cease and desist or a lawsuit. Therein lies the difference: your goal would be explicitly to infringe on the specific expression of those words. Any human who explicitly tries to get an AI to produce infringing material… would be infringing. And unknowing infringement… well, there are countless court cases where both sides think they did nothing wrong.

    You don’t even need AI for that: if you followed the Infinite Monkey Theorem and just happened to stumble upon a work under copyright, you still could not sell it, even though it was produced by a purely random process.

    Another great example is the Mona Lisa. Most people know what it looks like and, with sufficient talent, could mimic it 1:1. However, there are numerous adaptations of the Mona Lisa that are not infringing (by today’s standards), because they transform the work to the point where it’s no longer the original expression but a re-expression of the same idea. Anything less than that is pretty much completely safe infringement-wise.

    You’re right though that OpenAI tries to cover their ass by implementing safeguards. Which is to be expected, because it’s a standard legal argument that once they become aware of a problem they have to take steps to limit harm. They indeed cannot prevent it completely, but it’s the effort that counts. Practically no moderation of that kind is 100% effective. Otherwise we’d live in a pretty good world.





  • ClamDrinker@lemmy.world to solarpunk memes@slrpnk.net · Gatekeep ideas, not people · 29 days ago (edited)

    Yeah… who doesn’t love moral absolutism… The honest answer to all of these questions is: it depends.

    “Are these tools ethical or environmentally sustainable?”

    AI doesn’t just consist of LLMs, which are indeed notoriously expensive to train and run. Using an image generator, for example, can be done on something as simple as a gaming-grade GPU. And other AI technologies are already so lightweight your phone can handle them. Do we assign the same negativity to gaming, even though it’s just people using electricity for entertainment? Producing a game also costs a lot more than it does for an end user to play it. It’s all about the balance between the two. And yes, AI technologies should rightfully be criticized for being wasteful, such as being implemented in places they have no business in, or forgoing efficiency improvements.

    The ethics of AI is also a deeply nuanced topic with no clear consensus. Nor does every company that works with AI use it in the same way. Court cases are pending, and none have been conclusive thus far. Implying it’s one-sided is just incredibly dishonest.

    “but do they enable great things that people want?”

    This is probably the silliest one of them all, because AI technologies are groundbreaking in medical research. They are seemingly pivotal in healing the sick people of tomorrow. And creative AI allows creative people to be more creative. But these uses are ignored. They are shoved to the side because they don’t fit the “AI bad” narrative. Even though we should be acknowledging them, and seeing them as allies against the big companies trying to hoard AI technology for themselves. It is those companies that produce problematic AI, not the small artists, creatives, researchers, or anyone else using AI ethically.

    “but are they being made by well meaning people for good reasons?”

    Who, exactly? You must realize there are far more parties creating AI than Google, Meta, and Microsoft, right? Companies and groups you’ve most likely never heard of, creating open source AI for everyone to benefit from, not just those hoarding it for themselves. It’s just incredibly narrow-minded to assign maliciousness to such a large group of people on the basis of what technology they work with.

    “Maybe you’re not being negative enough”

    Maybe you are not being open-minded enough, or have been blinded by hate. Because this shit isn’t healthy. It’s echo-chamber behaviour. I have a lot more respect for people who don’t like AI but base it on rational reasons. There’s plenty that’s genuinely bad about AI and has to be addressed, but instead you find yourself in a divide between people who cozy up to spreading borderline misinformation to get what they want, and genuine people who simply want their voices and concerns about AI to be heard.


  • Every piece of legislation ever needs to deal with the emotions of its subjects. An unempathetic but coldly rational law will be nothing short of tyrannical most of the time. Laws are for humans to follow, and humans have emotions that need to be understood for a law to be successful and supported long into the future. A law that isn’t supported by its subjects eventually leads to revolution (big and small).

    How logical and rational a person can be is highly dependent on their emotional intelligence. You might be able to suppress your emotions when there is no stress at all, but if you cave during a stressful situation and start lashing out, that does impact your overall intelligence. Intelligence is just the collection of behaviors and training that make you effective at doing what you want to do, and being rational and logical is definitely good, but it’s not the end of it all.





  • Exactly. Thinking that’s what I was getting is what pulled me over the edge. I sometimes remember a music video I want to listen to on my phone during my commute, and I don’t want to spend 30 minutes either getting on my PC to download it with a tool, or using a third-party downloader, which can at times be shady. So upgrading that to a single click in the app seemed like a great deal. I was crushingly disappointed when I found out how it actually worked. Turns out the real answer was NewPipe, which I don’t even have to pay for.


  • Yeah, I thought it was a nice compromise. It seemed sensible that if Premium is the ‘compliant’ response to not wanting ads, the ‘compliant’ response to using third-party tools to download videos would be being able to do things more easily and with more options through Premium as well. But apparently they wanted to advertise something that anyone who’s ever wanted to download a YouTube video would not describe as ‘downloading’, and which is easily outcompeted by free (but at times shady) tools.



  • I like those perks too, but if I pay more to be able to download videos (which, again, I could’ve used a free tool for), I want to be able to do whatever I want with them. Download means getting a file I can watch with my own video player and store for later, even if YouTube dies tomorrow, if I go on holiday without internet, or if my internet goes down for a week. Anything.

    If Google is going to be “Uhm aksually, you are technically downloading it, thats why we can advertise it like that”, then I’m already downloading literally every video I watch. And that’s not the kind of bullshit you give to a paying customer. That is spitting in my face for paying you. Why does a non-Premium user get better service from free third-party YouTube downloaders?

    It’s a matter of principle.


  • ClamDrinker@lemmy.world to Memes@lemmy.ml · Me but ublock origin · 1 month ago (edited)

    I gave Premium a shot. Then the one time I wanted to use the feature Google said I was paying for - being able to download videos - I found out that it was just a glorified pre-buffer.

    • Can’t view the video outside the YouTube app or the website; the source video file is encrypted ✅️
    • Can’t view the video if you haven’t connected to the internet in 3 days ✅️
    • Does less than your average YouTube downloader that you can find for free with one search ✅️
    • Literally just saves YouTube bandwidth, because they destroyed every benefit you would get if it was actually reasonable ✅️

    Enshittification isn’t just limited to free users, folks. I slammed that cancellation button right then and there. Good luck earning back my trust; I’d have been happy to keep paying if you hadn’t screamed so loudly that, even if I paid, you were going to treat me like shit anyway.