It’s so ridiculous: when corporations steal everyone’s work for their own profit, no one bats an eye, but when a group of individuals does the same to make education and knowledge free for everyone, it’s somehow illegal, unethical, immoral, and what not.
Using publicly available data to train isn’t stealing.
Daily reminder that the ones pushing this narrative are literally corporations like OpenAI. If you can’t use copyrighted materials freely to train on, it drives up the cost so much that only a handful of companies can afford the data.
They want to kill the open-source scene and are manipulating you to do so. Don’t build their moat for them.
And using publicly available data to train gets you a shitty chatbot…
Hell, even using copyrighted data to train isn’t that great.
Like, what do you even think they’re doing here for your conspiracy?
You think OpenAI is saying they should pay for the data? They’re trying to use it for free.
Was this a meta joke and you had a chatbot write your comment?
if someone said this to me I’d cry
The point being made was that publicly available data includes a huge amount of copyrighted data to begin with, and it’s pretty much impossible to filter it out. A prime example: the Eiffel Tower in Paris is not copyright protected, but the lights on it are, so you can only use pictures of the Eiffel Tower taken during the day, and only if the picture itself isn’t copyright protected by the original photographer. Copyright law has all these complex caveats and exceptions that make it impossible to tell at a glance whether or not something is protected.
This in turn means that if AI cannot legally train on copyrighted materials it finds online without paying huge sums of money, then effectively only mega-corporations that can pay copyright fines as a cost of doing business will be able to afford training decent AI.
The only other option for producing AI of this type is a very narrow, curated set of known materials with a public-use license, but that is not going to get you anything competent on its own.
EDIT: In case it isn’t clear, I am clarifying what I understood from [email protected]’s comment, not adding to it.
That’s insane logic…
Like you’re essentially saying I can copy/paste any article without a paywall to my own blog and sell adspace on it…
And you’re still saying OpenAI is trying to make AI companies pay?
Like, do you think AI runs off free cloud services? The hardware is insanely expensive.
And OpenAI is trying to argue the opposite, that AI companies shouldn’t have to pay to use copyrighted works.
You have zero idea what is going on, but you are really confident you do
I clarified the comment above, which was misunderstood; whether it makes a moral/sane argument is subjective, and I am not covering that.
I am not sure why you think there is a claim that OpenAI is trying to make companies pay. On the contrary, the comment I was clarifying (so not my opinion/words) states that OpenAI is arguing that anyone should be able to use copyrighted materials for free to train AI.
The cost of running an online service like ChatGPT is wildly beside the argument presented. You can run your own open-source large language models at home about as well as you can run Bethesda’s Starfield on a similarly specced PC.
Those open-source large language models are trained on the same collections of data, including copyrighted data.
The logic being used here is:
If it becomes globally forbidden to train AI on copyrighted materials, or there is a large price or fine for using them in training, then the non-corporate, free, open-source side of AI will perish or have to go underground, while the for-profit mega-corporations will continue to exploit and train AI as usual because they can pay to settle in court.
The ethical dilemma as I understand it is:
Allowing AI to train for free is a direct threat to creatives and a win for BigProfit Entertainment; not allowing it to train for free is a threat to public, democratic AI and a win for BigTech merging with BigCrime.
That is very well put, I really wish I could have started with that.
Though I envision it as a loss for BigProfit Entertainment, since I see this as a real boon for the indie gaming, animation, and eventually filmmaking industries.
It’s definitely overall quite a messy situation.
I didn’t want any of this shit. IDGAF if we don’t have AI. I’m still not sure the internet actually improved anything, let alone what the benefits of AI are supposed to be.
It doesn’t matter what you want. What matters is if corporations can extract $ from you, gain an efficiency, or cut their workforce using it.
That’s what the drive for AI is all about.
No doubt.
You don’t have to use it. You can even disconnect from the internet completely.
What’s the benefit of stopping me from using it?
If the data has to be paid for, OpenAI will gladly do it with a smile on their face. It guarantees them a monopoly and ownership of the economy.
Paying more but having no competition except google is a good deal for them.
Eh, the issue is lots of people wouldn’t be willing to sell, though.
Like, you think an author wants the chatbot to read their collected works and use that? Regardless of if it’s quoting full texts or “creating” text in their style.
No author is going to want that.
And if it’s up to publishers, they likely won’t either. Why take one small payday if that could potentially lead to loss of sales a few years down the road?
It’s not like the people making the chatbots just need to buy a retail copy of the text to be in the legal clear.
The publishers will absolutely sell, imo. They just publish; the book will be worth the same with or without the help of AI to write it.
I guess there is a possibility that people start replacing bought books with personalized book llm outputs but that strikes me as unlikely.
We have a mechanism for people to make their work publicly visible while reserving certain rights for themselves.
Are you saying that creators cannot (or ought not be able to) reserve the right to ML training for themselves? What if they want to selectively permit that right to FOSS or non-profits?
Essentially yes. There isn’t a happy solution where FOSS gets the best images and remains competitive. The amount of data needed is outside what can be donated. Any open source work will be so low in quality as to be unusable.
It also won’t be up to them. The platforms where the images are posted will be selling and brokering. No individual is getting a call unless they are a household name.
None of the artists are getting paid either way so yeah, I’m thinking of society in general first.
They want to kill the open-source scene
Yeah, by using the argument you just gave as an excuse to “launder” copyleft works in the training data into permissively-licensed output.
Including even a single copyleft work in the training data ought to force every output of the system to be copyleft. Or if it doesn’t, then the alternative is that the output shouldn’t be legal to use at all.
100% agree, making all outputs copyleft is a great solution. We get to keep the economic and cultural boom that AI brings while keeping the big companies in check.
The point is that the entire concept of AI training on people’s work to make profit for others is wrong without the permission of, and compensation for, the creator, regardless of whether it’s corporate or open source.
I think I’ve decided to not publish anything that I want to keep ownership of, just in case. There’s an entire planet’s worth of countries, which will all have their own sets of laws. It takes waay too long to polish something, only to just give it away for free haha. Someone else is free to do that work if it is that easy. No skin off my back.
I think it’s similar to many other hand-made crafts/items. Most people will buy their clothes from stores, but there are definitely still people who make beautiful clothing from hand better than machines could.
Don’t even get me started on stuff like knitting. It already costs the creator a crap ton of money just for the materials. It takes a crap ton of time to make those, too. Despite the costs, many people just expect those knitted pieces for practically free. The people who expect that pricing are also free to go with machine-produced crafts/items instead.
It comes down to what people want, and what they’re willing to pay, imo. Some people will find value in something physically being put together by another human, and other people will find value in having more for less. Neither is “wrong” necessarily, so long as no one is literally ripped off. (With over 8 billion people, it’s bound to happen at least once. I feel bad for whoever that is.)
That being said, we’ll never be able to honestly say that the specific skills and techniques that are currently required are the exact same. It would be like calling a photographer amazing at realism painting because their photo looks like real life. Photographers and painters both have their place, but they are not the exact same.
I think that’s also part of what’s frustrating so many artists. Coding AI is not the same as using the colour wheel, choosing materials, working fine motor control, etc. It’s not learning about shadows, contrast, focal points, etc. I can definitely understand people not wanting those aspects to be brushed off, especially since it usually takes most of a lifetime to achieve. A music generator and a violin may both make great music, but they are not the same, and they require different technical skills.
I’ll never buy AI art if I have any say in the matter. I’ll support handmade stuff first, every time.
There is definitely more value in handmade art. Even the fanciest prints on canvas can’t compare, and I don’t think AI art will be evoking the same feelings a John Waterhouse exhibit does any time soon.
On the subject of publishing, I’ve chosen to embrace it personally. My view is that even the hidden stuff on our computers ends up in Chinese or US databases anyway.
OpenAI is definitely not the one arguing that they have stolen data to train their AIs, and Disney will be fine whether AI requires owning the rights to training materials or not. Small artists, the ones protesting the most against it, will not. They are already seeing jobs and commission opportunities decline because of it.
Being publicly available in some form is not permission to use and reproduce those works however you feel like. Only the actual owner has the right to decide. We on the internet have always been a bit blasé about it, sometimes deservedly, but as we get to the point where we are driving away the very artists we enjoy and are inspired by, maybe we should be a bit more understanding about their position.
That’s basically my main point: Disney doesn’t need the data, and neither does Getty. AI isn’t going away, and the jobs will be lost no matter what.
Putting a price tag in the high millions on any kind of generative model only benefits the big players.
I feel for the artists. It was already a very competitive domain that didn’t really pay well, and it’s now much worse, but if they aren’t a household name, they aren’t getting a dime out of any new laws.
I’m not ready to give the economy to Microsoft, Google, Getty and Adobe so GRRM can get a fat payday.
If AI companies lose, small artists may have the recourse of seeking compensation for the use and imitation of their art too. Just feeling for them is not enough if they are going to be left to the wolves.
There isn’t a scenario here in which big media companies lose so talking of it like it’s taking a stand against them doesn’t make much sense. What are we fighting for here? That we get to generate pictures of Goofy? The small AI user’s win here seems like such a silly novelty that I can’t see how it justifies just taking for granted that artists will have it much rougher than they already have.
The reality here is that even if AI gets the free pass, large media and tech companies are still primed to profit from them far more than any small user. They will be the one making AI-assisted movies and integrating chat AI into their systems. They don’t lose in either situation.
There are ways to train AI without relying on unauthorized copyrighted data. Even if OpenAI loses, it wouldn’t be the death of the technology. It may be more efficient and effective to train them with that data, but why is “efficiency” enough to justify this overreach?
And is it even wise to be so callous about it? Because it’s not going to stop with artists. This technology has the potential to replace large swaths of service industries. If we don’t think of the human costs now, it will be even harder to make a case for everyone else.
I fully believe AI will be able to replace 50% or more of desk jobs in the near future. It’s definitely a complicated situation and you make good points.
First and foremost, I think it’s imperative the barrier for entry for model training is as low as possible. Anything else basically gives a select few companies the ability to charge a huge subscription fee on all our goods and services.
The amount of data needed is pretty heavy as well; it’s not very feasible to go off donated or public-domain data.
I also think any job loss is virtually guaranteed and trying to save them is misguided as well as not really benefiting most of those affected.
And yea, the big companies win either way but if it’s easier to use this new tech, we might not lose as hard. Disney for instance doesn’t have any competition but if a bunch of indie animation companies and groups start popping up, it levels the playing field a bit.
This is because the technocrats are allowed to steal from you, but when you steal from them what they’ve stolen from actual researchers, that’s a problem.
Time to make OpenASci?
/rimshot
More people need to think like you. Why isn’t “Total War: Warhammer” just called “Total Warhammer”? These are the questions that keep me up at night
I agree with you, but also Total War is the trademark brand and they’re also gonna use it.
Total War: Hammer!
Stop, Hammer time!
“Go with the flow”, it is said
What really breaks the suspension of disbelief in this reality of ours is that fucking advertising is the most privacy invasive activity in the world. Seriously, even George Orwell would call bullshit on that.
Kind of a strawman, I’d like everything to be FOSS, and if we keep Capitalism (which we shouldn’t), it should be HEAVILY regulated not the laissez-faire corporatocracy / oligarchy we have now.
I don’t want any for-profit capitalists to have any control of AI. It should all be owned by the public and all productive gains from it taxed at 100%. But open source AI models, right on.
And team SciHub–FUCK YEAH!
And people wonder why there’s so much push back against everything corps/gov does these days. They do not act in a manner which encourages trust.
What’s scihub?
A website where you can download paywalled scientific literature. Most scientific literature is paywalled by publishers and costs a significant amount to read (around $30–50 per article if you don’t have a subscription).
Sci-Hub basically just pirates it, and has been shut down several times. But since most scientific studies are already paid for with public money, Sci-Hub isn’t that unethical at all.
Lots of scientists will just send you their article if you email them. They don’t get the money when you pay to read it - often they pay to submit. Reviewing journal articles is a privilege and doesn’t get you paid. The prestige of a scientific article is from the number of times people have cited it. The only “harm” done is that the publisher doesn’t get to make 100% profit for doing nothing.
Journal publishing is mostly a way to extract money from universities. Elsevier and its ilk name whatever price they think a research university can afford.
This is different. AI as a transformative tech is going to usher the US economy into the next boom of prosperity. The AI revolution will change the world and allow people to decide whether they want to work for money or not (read: UBI). In case you haven’t caught on, I am being sarcastic.
All this despite ChatGPT being a total complete joke.
So, I feel taking an .epub and putting it in a .zip is pretty transformative.
Also you can make ChatGPT (or Copilot) print out quotes with a bit of effort, now that it has Internet.
I follow a few researchers with interesting youtube channels, and they often mention that if you ask them or their colleagues for a publication of theirs, chances are they’ll be glad to send it to you.
A lot of them love sharing their work, and don’t care at all for science journal paywalls.
Don’t mind? Hell, we want people to read that shit. We don’t profit at all if it’s paywalled; it hurts us and hurts science in general. The paywalls are 100% the doing of the for-profit scientific journals.
Academics don’t care because they don’t get paid for them anyway. A lot of the time you have to pay to have your paper published. Then companies like Elsevier just sit back and make money.
The IP system, which goes to great lengths to block things like open-access scientific publications, is borked borked borked borked borked.
If OpenAI and other generative AI projects are the means by which we finally break it so we can have culture and a public domain again, well, we had to nail Capone with tax evasion.
Yes, industrialists want to use AI [exactly the way they want to use every other idea, plausible or not] to automate more of their industries so they can pay fewer people less money for more productivity. And this is a problem in which generative AI figures centrally, but it’s not really all that new, and eventually we’re going to have to force our society to recognize that it works for the public and not money. I don’t think AI is going to break the system and lead us to communist revolution ( The owning class will tremble…! ), but eventually it will be 1789 all over again. Or we’ll crush the fash and realize the only way we can keep the fash from coming back is by restoring and extending FDR’s New Deal.
I am skeptical the latter can happen without piles of elite heads and rivers of politician blood.
If this ends with LLMs getting shutdown to some degree, I wonder if it’s going to result in something like a Pirate Bai.
Not to be confused with, “Pirate Bae”, the pirate dating site for those endowed with abundant doubloons.
OpenAI isn’t really proven as legal. They claim it is, and it’s very difficult to mount a challenge, but there definitely is an argument that they have no fair use protection - their “research” is in fact development of a commercial product.
Yes, because 1:1 duplication of copyrighted works violates copyright, but summaries of those works and relaying facts stated in those works are perfectly legal (by an AI or not).
If by “perfectly legal” you mean a fair use claim, then could you please explain how a commercial for-profit company using the works, sometimes echoing results verbatim, qualifies as fair use?
I do not mean a fair use claim. To quote the Copyright Office: “Copyright does not protect facts, ideas, systems, or methods of operation, although it may protect the way these things are expressed” source
Facts and ideas cannot be copyrighted, so what I was specifically referring to is that if I or an AI read a paper about jellyfish being ocean creatures, then later talk about jellyfish being ocean creatures, there are no restrictions on that whatsoever as long as we don’t reproduce the paper word for word.
Now, most of the time AI summarizes things or collects facts, and since those themselves cannot be protected by copyright, it’s perfectly legal. On the occasions when AI spits out copyrighted work, that’s a gray area, and liability, if any, will probably be decided in the courts.
I pirated 90% of the texts I used to write my thesis at university, because those books would have cost me hundreds of euros that I didn’t have.
Fuck you, capitalism.
Lemmy users: Copyright law is broken and stupid.
Also Lemmy users: A.I. violates copyright law!
A.I. doesn’t violate copyright law. It is the data mining done to train A.I. and the regurgitation of said data in the responses that ultimately violate the law. A model trained on privately owned, properly licensed, or exclusively public works wouldn’t be a problem.
Even then, I would argue that lack of attribution is a bigger problem than merely violating copyright. A big part of the LLM mystique is in how it can spit out a few lines of Shakespeare without accreditation and convince its users that it’s some kind of master poet.
Copyright law is stupid and broken. But plagiarism is a problem in its own right, as it seeks to effectively sell people their own creative commons at an absurd markup.
A model trained on privately owned, properly licensed, or exclusively public works wouldn’t be a problem.
This is how we end up with only corpo-owned AIs being allowed to exist, imo. Places like stock photo sites are the only ones with repositories of images large enough to train an AI that they hold all the legal rights to.
The way I see it, either generative AI is legal, free for everyone to run locally, and the created works are public domain, OR, everyone pays $20/mo to massive faceless corpos for the rest of their lives to have the privilege of access to it because they’re the only ones who own all (or have enough money to license) the IP needed to train them
This is how we end up with only corpo owned AIs being allowed to exist imo
It’s how you end up with sixteen different streaming services that each vend only a sliver of the total available content, sure. But the underlying technology of AI grows independently of what it’s trained on.
The way I see it, either generative AI is legal, free for everyone to run locally, and the created works are public domain, OR, everyone pays $20/mo to massive faceless corpos for the rest of their lives to have the privilege of access to it
There are other alternatives. These sites can be restricted to data within the public domain. And we can increase our investment in public media. The problem of NYT articles being digested and regurgitated as ChatGPT info-vomit isn’t a problem if the NYT is a publicly owned and operated enterprise. Then it’s not struggling to profit off journalism, but treating this information as a loss-leading public service open to all, with ChatGPT simply operating as a tool to store, process, and present the data.
Similarly, if you limit generative AI to the old Mickey Mouse and Winnie-the-Pooh films from the 1930s, you leave plenty of room for original artists to create new works without fear that their livelihoods get chewed up and fed back into the system. If you invest in public art exhibitions, then these artists can get paid to pursue their craft, the art becomes public domain immediately, and digital tools that want to riff on the originals are free to do so without undermining the artists themselves.