LLM is AI. So are NPCs in video games that just use if-else statements.
Don’t confuse AI in real-life with AI in fiction (like movies).
It’s a writing style. I like it. I even turn off auto-capitalization on my phone keyboard so my chats are all lowercase.
Despite how bad Google Search has become, DuckDuckGo and Bing are somehow still worse. While Google shows the result within the first few entries, DDG and Bing have no idea what I’m looking for.
Gotta try Kagi sometime.
Training on copyrighted data should be allowed as long as it’s something publicly posted.
If you make it reproduce copyrighted media, it is a problem.
As long as the stuff it generates doesn’t resemble any copyrighted works, even if it was trained on them, I don’t see why that should be a problem.
Vista was amazing and 8/8.1 was refreshing. Vista also introduced hardware-accelerated desktop rendering to Windows; finally, no more tearing. I enjoyed using them, and I personally haven’t had any gripes with any of the recent Windows versions.
I can buy one here in Malaysia.
This smells like investor-baiting. Studios don’t really need to announce that they’re being “aggressive” about using a certain tool.
Don’t talk to me if your average ratio is less than 1.0.
It’s not about using it for bad deeds; any other tool, like a kitchen knife, can be used the same way. It’s more about the necessity of working in order to live.
I love LLMs for coming up with patterns to solve the problem but that’s about it. I’ll do the implementation myself.
I wonder if they’d release the weights and the training/inference code. They did it for LLaMA.
There have been a lot of open-source alternatives to Stable Diffusion lately, and it’s great.
This is what AI actually is. Not the super-intelligent “AI” that you see in movies, those are fiction.
The NPC you see in video games with a few branches of if-else statements? Yeah that’s AI too.
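To be concrete, here’s a minimal sketch (the function name, states, and thresholds are all invented for illustration) of the kind of if-else decision logic that has always counted as game “AI”:

```python
# Hypothetical sketch: the names and thresholds are made up, but this is
# roughly the if-else decision logic behind a classic guard NPC.
def npc_action(player_distance: float, npc_health: int) -> str:
    """Pick the next action for a hypothetical guard NPC."""
    if npc_health < 20:
        return "flee"    # low on health: run away
    if player_distance < 5.0:
        return "attack"  # player is in range: engage
    if player_distance < 15.0:
        return "chase"   # player spotted: close the distance
    return "patrol"      # nothing interesting: keep walking the route

print(npc_action(player_distance=3.0, npc_health=80))  # -> "attack"
```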
I don’t think they would care if it hadn’t gotten popular, with thousands of people trying it out and eating up huge amounts of compute resources.
It’s a known quirk of LLMs.
It’s definitely about cost. There are other ways to make it generate text similar to its training data without having it endlessly repeat words, so I doubt OpenAI cares about that aspect.
“leak training data”? What? That’s not how LLMs work. I guess a sensational headline attracts more clicks than a factually accurate one.
If you do this and then later you face some sort of issue with Windows, remember that it might not be Windows’ fault.
Are we defending/justifying toxicity now?