How are you using new AI technology? Maybe you’re only using tools like ChatGPT to summarize long texts or draft mindless emails. But what are you losing by taking these shortcuts? And is this tech taking away our ability to think?
To be fair, this can also be said of teachers. It’s important to recognise that AIs are only as accurate as any single source, and you should always check everything yourself. I have concerns about a future where our only available sources are through AI.
The level of psychopathy required for a human to lie as blatantly as an LLM does is almost unachievable.
Bruh, so much of our lives is made up of people lying, either intentionally or unintentionally by spreading misinformation.
I remember being in 5th grade when my public school science teacher was teaching the “theory” of evolution, but then she mentioned there are “other theories, like intelligent design”.
She wasn’t doing it to be malicious, just a brainwashed idiot.
And that’s why we, as humans, know how to look for signs of this in other humans. It’s a skill we’ve had to learn precisely because of that. Not only is it not applicable when you’re reading generated bullshit, it actually works against you.
Some people are mistaken, and some people are actively misleading, but almost no one has the combination of being wrong just enough, and confident just enough, to sneak their bullshit under the bullshit detector.
Took that in a slightly different direction than I was expecting. My point is that we have to be on the lookout for bullshit when getting info from other people, so it’s really no different when getting info from an LLM.
However, you took it to mean the LLM can’t distinguish between what’s true and what’s false, which is obviously true, but an interesting point to make nonetheless.
It’s not that the LLM can’t know truth; that’s obvious but beside the point. It’s that the user can’t really determine where the lies are, not to the degree that you can when getting info from a human.
So you really need to check everything: every claim, every word, every sound. You can’t assume good intentions, because there are no intentions in the real sense of the word, and you can’t extrapolate or interpolate. Every word of the data you’re getting might be a lie with the same likelihood as any other word.
It requires so much effort to check properly that you either skip some of it or spend more time than you would have without the layer of lies.
I don’t see how that’s different, honestly. Then again, I’m not usually asking LLMs for absolute truth, more so to explain concepts I can’t fully grasp by restating things another way, or for small coding stuff where I can check essentially immediately whether it works or not lol.
See, this is the problem I’m talking about. You think you can gauge whether the code works or not, but even for small pieces (and in some cases, especially for small pieces) there is a world of very bad, very dangerous shit that lies between “works” and “not works”.
And it’s just as dangerous when you trust it to explain something for you. It’s by definition something you don’t know and therefore can’t check.
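To make the “works” vs. “not works” gap above concrete, here is a minimal TypeScript sketch. It is a hypothetical example, not code from this thread: a small validator that passes a quick manual check while still being wrong in a way that matters.

```typescript
// Hypothetical example: an ID "validator" that looks fine under a quick test.
// The regex is not anchored, so any string that merely CONTAINS a digit passes.
function isNumericId(input: string): boolean {
  return /\d+/.test(input); // should be /^\d+$/ to match the whole string
}

console.log(isNumericId("123"));             // true  -> quick check "works"
console.log(isNumericId("abc"));             // false -> quick check "works"
console.log(isNumericId("1; DROP TABLE x")); // true  -> the dangerous middle ground
```

Both obvious test cases behave as expected, which is exactly why a quick “it runs” check doesn’t close the gap being described.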
I mean, I can literally test it immediately lol, a Node-RED JS function isn’t going to be dangerous lol
Or an AHK script that displays a keystroke on screen, or cleaning up a docker command into docker compose, simple shit lol
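For contrast, here is a minimal sketch of the kind of tiny, immediately testable helper being described, roughly what a Node-RED function node does. The function name, payload shape, and values are assumptions for illustration, not taken from the thread.

```typescript
// Hypothetical Node-RED-style function node body: add a Fahrenheit reading
// alongside an incoming Celsius value.
interface Msg {
  payload: { tempC: number; tempF?: number };
}

function onMessage(msg: Msg): Msg {
  msg.payload.tempF = Math.round((msg.payload.tempC * 9 / 5 + 32) * 10) / 10;
  return msg;
}

// Feeding in a known value makes the spot check obvious: 20 °C -> 68 °F.
console.log(onMessage({ payload: { tempC: 20 } }));
```

Whether a single spot check like this really covers the gap between “works” and “not works” is precisely what the two commenters disagree about.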