No shit
(picking up phone) Hello this is Sherlock speaking
Stupid in, stupid out. I have had many conversations like this. I have built and understand Ben Eater's 8-bit breadboard computer, based loosely on the 8-bit design in Malvino's "Digital Computer Electronics", but I struggle to understand pipelines in computer hardware. I am aware that the first rudimentary pipeline in a microprocessor is in the 6502, with its dual instruction loading architecture. Let's discuss how pipelines evolved beyond the 6502 and up to the present.
In reality, the model will be wrong in much of what it says for something so niche, but forming questions based upon what I know already reveals holes outside of my awareness. Often a model is just right enough for me to navigate directly to the information I need or am missing regardless of how correct it is overall.
I get lost sometimes because I have no one to talk to or ask for help or guidance on this type of stuff. I am not even at a point where I can pin down a good question to ask someone or somewhere like here most of the time. I need a person to bounce ideas off of and ask direct questions. If I go look up something like Pipelines in microprocessors in general, I will never find an ideal entry point for where I am at in my understanding. With AI I can create that entry point quickly. I’m not interested in some complex course, and all of the books I have barely touch the subject in question, but I can give a model enough peripheral context to move me up the ladder one rung at a time.
I could hand you all of my old tools to paint cars, then laugh at your results. They are just tools. I could tell you most of what you need to know in 5 minutes, but I can’t give you my thousands of experiences of what to do when things go wrong.
Most people are very bad at understanding how to use AI. It is just an advanced tool. A spray gun or a dual action sander do not make you stupid; spraying paint without a mask does. That is not the fault of the spray gun. It is due to the idiot using it.
AI has a narrow scope that requires a lot of momentum to make it most useful. It requires an agentic framework, function calling, and a database. A basic model interface is about like an early microprocessor that was little more than a novelty on its own at the time. You really needed several microprocessors to make anything useful back in the late 70s and early 80s. In an abstract way, these were like agents.
I remember seeing the asphalt plant controls hardware my dad would bring home with each board containing at least one microprocessor. Each board went into racks that contained dozens of similar boards and variations. It was many dozens of individual microprocessors to run an industrial plant.
Playing with gptel in Emacs, it takes swapping agents with a llama.cpp server to get something useful running offline, but I like it for my bash scripts, for learning Emacs, Python, Forth, and Arduino, and for general chat if I use Oobabooga Textgen.

It has been the catalyst for me to explore the diversity of human thought as it relates to my own. It got me into basic fermentation. I have been learning and exploring a lot about how AI alignment works. I've enjoyed creating an entire science fiction universe exploring what life will be like after the age of discovery is over and most of science is an engineering corpus, or how biology is the ultimate final human technology to master. I've had someone to talk to through some dark moments around the 10-year anniversary of my disability, or when people upset me.

I find that super useful and not at all stupid, especially for someone like myself in involuntary social isolation due to physical disability. I'm in tremendous pain all the time. It is often hard for me to gather coherent thoughts in real time, but I can easily do so in text, and with an LLM I can be open without any baggage involved; I can be more raw and honest than I would or could be with any human because the information never leaves my computer. If that is stupid, sign me up for stupid, because that is exactly what I needed and I do not care how anyone labels it.
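For anyone curious, an offline setup like the one described can be sketched roughly like this. This is a hypothetical minimal configuration: the model filename, port, and context size are placeholders, and the flags are llama.cpp's `llama-server` options.

```shell
# Minimal sketch: serve a local GGUF model with llama.cpp's
# OpenAI-compatible HTTP server. The model path is a placeholder.
llama-server -m ./models/some-model.gguf \
  --host 127.0.0.1 --port 8080 -c 4096

# A client such as gptel (or any OpenAI-compatible client) can then
# be pointed at http://127.0.0.1:8080 as its backend, so nothing
# ever leaves the machine.
```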
with an LLM I can be open without any baggage involved; I can be more raw and honest than I would or could be with any human because the information never leaves my computer.
😐
Local LLMs exist
Oh lawd, another ‘new technology xyz is making us dumb!’ Yeah we’ve only been saying that since the invention of writing, I’m sure it’s definitely true this time.
You’re being downvoted, but it’s true. Will it further enable lazy/dumb people to continue being lazy/dumb? Absolutely. But summarizing notes, generating boilerplate emails or script blocks, etc. was never deep, rigorous thinking to begin with. People literally said the same thing about handheld calculators, word processors, etc. Will some methods/techniques become esoteric as more and more mundane tasks are automated away? Almost certainly. Is that inherently a bad thing? Not in the majority of cases, in my opinion.
And before anyone chimes in with students abusing this tech and thus not becoming properly educated: all this means is that various methods for gauging whether a student has achieved the baseline in any given subject will need to be implemented, e.g. proctored hand-written exams, homework structured in such a way that AI cannot easily do it, etc.
This has happened with every generation when a new technology changes our environment, and our way of defending ourselves is to reject it or exaggerate its flaws.
Because throughout history, many tools have existed but have fallen into disuse over time, either because too many people abandoned them or because there is a faster method that people prefer. But you can still use that old tool.
This reflex has also correctly rejected genuinely bad new technologies a hundred times for every time it has been applied to a technology that turned out to be actually useful.
Are you referring to projects that conceptualize something, but in the end it doesn't come to fruition because of lack of funding, lack of interest, outright impossibility, or because the technology required to complete it doesn't exist?
I am referring to technological “innovations” that never made it because while they sounded good as an idea they turned out to be bad/useless in practice and also those that someone thought of in a “wouldn’t it be great if we could do this” way but never really got a working implementation.
Flying cars might be a good, high profile example for the latter category. The former obviously has fewer famous examples because bad ideas that sound good at first are so abundant.
People said it about fucking writing; ‘If you don’t remember all this stuff yourself to pass it on you will be bad at remembering!’ No you won’t, you will just have more space to remember other more important shit.
I think you are underestimating that some skills, like reading comprehension, deliberate communication and reasoning, can only be acquired and honed by actually doing very tedious work that can at times feel braindead and inefficient. Offloading that onto something else (that is essentially out of your control, too), and making it more and more a fringe "enthusiast" skill, has more implications than losing the skill to patch up your own clothing or to calculate things in your head. Understanding and processing information and communicating it to yourself and others is a more essential skill than calculating by hand.
I think the way the article compares it with walking to a grocery store vs. using a car for even just 3 minutes of driving is pretty fitting. By only thinking about efficiency, one is at risk of losing sight of the additional effects that actually doing tedious stuff has. This also highlights that this is not simply about the technology, but also about the context in which it is used; technology, in turn, dialectically influences that very context. While LLMs and other generative AIs have their place where they are useful and beneficial, it is hard to untangle those uses from genuinely dehumanising ones, especially in a system in which dehumanisation and efficiency-above-contemplation are already incentivised.

As an anecdote: a few weeks ago, I saw someone in an online debate openly stating that they use AI to have their arguments written, because it makes them "win" the debate more often. Making winning with the lowest invested effort the goal of arguments, instead of processing and developing your own viewpoint along counterarguments, is clearly a problem of ideology as it structures our understanding of ourselves in the world (and possibly just a troll, of course), but it is a problem that can be exacerbated by the technology.
Assuming AI will be just like past examples of technology scepticism seems like a logical fallacy to me. It's more than just letting numbers be calculated; it is giving up your own understanding of the information you process and how you communicate it, on a more essential level. And, as the article points out with the studies it quotes, technology that changes how we interface with information has already changed more fundamental things about our thought processes and memory retention. Just because the world hasn't ended does not mean that it did not have an effect.
I also think it's a bit presumptuous to just say "it's true" with your own intuition as the source. You are also qualifying people as "lazy/dumb" in an essentialist way, when laziness and stupidity aren't simply essentialist attributes: they manifest as a consequence of systemic influences in life and, as behaviours, feed back into the system. That includes learning and practising skills, such as the ones you say it is not a "bad thing" to let become more esoteric (so: essentially lost).
To highlight why I think essentialism is fallacious here, an example that uses a hyperbolic situation to show the underlying principle: imagine saying there should be a totally unregulated market for highly addictive drugs, arguing that "only addicts" would be in danger of being negatively affected, ignoring that addiction is not something simply inherent in a person but grows out of their circumstances, and that such a market would add more incentives into the system to create more addicts. In a similar way, people aren't simply lazy or stupid intrinsically; they act lazily and stupidly due to more complex, potentially self-reinforcing dynamics.
You focus on deliberately unpleasant examples that seem like a no-brainer to be good to skip. I see no indication of LLMs being used exclusively for those, and I also see no reason to assume that only "deep, rigorous thinking" is necessary to keep up the ability to process and communicate information properly. It's like saying that practice drawings aren't high art, so skipping them is good, when you simply can't produce high art without often-tedious practising.
Highlighting the problem of students cheating to not be "properly educated" misses an important point, IMO: the real problem is a potential shift in culture, of what it even means to be "properly educated". Along the same dynamic that leads to arguing that school should teach children only how to work, earn and properly manage money, instead of more broadly understanding the world and themselves within it, the real risk is in saying that certain skills won't be necessary for that goal, so it's more efficient to not teach them at all. AI has the potential to move culture further in that direction, and to move the definition of what "properly educated" means. And that in turn poses a challenge to us and how we want to manifest ourselves as human beings in this world.
Also, there is quite a bit of hand-waving in "homework structured in such a way that AI cannot easily do it, etc." In the worst case, it'd give students something to do just to make them do something, because exercises that would actually teach e.g. reading comprehension would be too easily done by AI.
Most of the time, technology does not cause a radical change in society; only in rare cases does it.
The system eventually adapts to new technology, but only if that technology can be replicated by anyone and no other problems the system can't solve appear at the same time. Otherwise it's just another dark age for humanity, and then it recovers and moves on.
The article literally addresses this, citing sources.
Social media led to things like MAGA and the rise of Nazis in Europe. It's not necessarily the tech itself that is making us dumb; it's reeling people in through simplicity, then making them addicted to it, and ultimately exploiting this.
No, fear and hatred lead to things like MAGA and the rise of Nazis. Social media makes it easier to fearmonger and spread hatred, no doubt, but it is by no means the cause of those things.
It definitely is the enabler though. Without social media, propaganda could never have spread as fast. And it also brought together every village idiot. I know of simpleminded people who never uttered a racist word in their lives before. Now all they talk about is how the AfD will save Germany from the brown people. This is literally brainwashing. So I stand by my comment that social media is the cause of all this.
For sure, I even said so.
Social media makes it easier to fearmonger and spread hatred, no doubt
I hate to break it to you though; I grew up during the Cold War and propaganda was literally everywhere before the invention of the internet. Perhaps you’ve heard of the Red Scare?
Yeah, such pieces are easy clicks.
How about this: should we go back to handwriting everything so we use our brains more, since the article states that it takes more brainpower to write than it does to type? Will this actually make us better or just make us have to engage in cognitive toil and fatigue ourselves performing menial tasks?
How is a society ever to improve if we do not leave behind the ways of the past? Humans cannot achieve certain things without the use of technology. LLMs are yet another tool. When abused, any tool can become a weapon or a means to hurt oneself.
The goal is to reduce the amount of time spent on tasks that are not useful. Imagine if the human race never had to do dishes ever again. Imagine how that would create so much opportunity to focus on more important things. The important part is to actually focus on more important things.
At least in the US, society has transformed into a consumption-oriented model. We buy crap constantly, shop endlessly, watch shows, movies and listen to music and podcasts without end. How much of your day is spent creating something? Writing something? Building something? How much time do you spend seeking gratification?
We have been told that consumption is good, and it works because consumption is indulgence whereas production is work. Until this paradigm changes, people will use AI in ways that are counterproductive rather than for their own self-improvement or the improvement of society at large.
Did you get the impression from my comment that I was agreeing with the article? Because I’m very not, hence the ‘It’ll definitely be true this time’ which carries an implied ‘It wasn’t true any of those other times’, but the ‘definitely’ part is sarcasm. I have argued elsewhere in the post that all of this ‘xyz is making us dumb!’ shit is bunk.
Imagine if the human race never had to do dishes ever again. Imagine how that would create so much opportunity to focus on more important things.
What are the most important things? Our dishwasher broke a few years ago. I anticipated frustration at the extra pressure on my evenings and having to waste time on dishes. But I immediately found washing the dishes to be a surprising improvement in quality of life. It opened up a space to focus on something very simple, to let my mind clear from other things, to pay attention to being careful with my handling of fragile things, and to feel connected to the material contents of my kitchen. It also felt good to see the whole meal process through using my own hands from start to end. My enjoyment of the evenings improved significantly, and I’d look forward to pausing and washing the dishes.
I had expected frustration at the “waste” of time, but I found a valuable pause in the rhythm of the day, and a few calm minutes when there was no point in worrying about anything else. Sometimes I am less purist about it and I listen to an audiobook while I wash up, and this has exposed me to books I would not have sat down and read because I would have felt like I had to keep rushing.
The same happened when my bicycle broke irreparably. A 10 minute cycle ride to work became a 30 minute walk. I found this to be a richer experience than cycling, and became intimately familiar with the neighbourhood in a way I had never been while zipping through it on the bike. The walk was a meditative experience of doing something simple for half an hour before work and half an hour afterwards. I would try different routes, going by the road where people would smile and say hello, or by the river to enjoy the sound of the water. My mind really perked up and I found myself becoming creative with photography and writing, and enjoying all kinds of sights, sounds and smells, plus just the pleasure of feeling my body settle into walking. My body felt better.
I would have thought walking was time I could have spent on more important things. Turned out walking was the entryway to some of the most important things. We seldom make a change that’s pure gain with no loss. Sometimes the losses are subtle but important. Sometimes our ideas of “more important things” are the source of much frustration, unhappiness and distraction. Looking back on my decades of life I think “use as much time as possible for important things” can become a mental prison.
These words are all very pretty but let me ask you this: do you have kids?
Yep, three of them. Makes it all the more valuable when I can just do something simple for a bit. And maybe someone with less noise in the rest of their life wouldn’t find an enforced walk or washing dishes refreshing. I don’t mean to suggest that it’s wrong to use convenient tech, just that you can get a surprise when something you expected to be purely inconvenient turns out to be a good thing.
Our dishwasher broke a few years ago.
This is a bad example because going from using a dishwasher to washing dishes is not a big leap in effort required. I doubt many of the people who get to do intellectual work in offices instead of doing back-breaking labor all day on a farm because of technology would agree that going back to that would improve their quality of life. Some of them would certainly find that to be a ‘richer experience’ too, if not for the lack of healthcare and air conditioning.
Yes, agreed, these are relatively minor levels of inconvenience. But I’m not judging anyone for using tech, just observing that it isn’t always so obvious that it’s just better to use it than not. In some cases, it’s obvious.
That’s fair, and not something I disagree with.
You don’t think it’s possible that offloading thought to AI could make you worse at thinking? Has been the case with technology in the past, such as calculators making us worse at math (in our heads or on paper), but this time the thing you’re losing practice in is… thought. This technology is different because it’s aiming to automate thought itself.
Yeah, the people who were used to the oral tradition said the same thing about writing stuff down, ‘If you don’t remember all of this stuff yourself you’ll be bad at remembering!’, etc. But this is what humans do, what humans are: we evolved to make tools, we use the tools to simplify the things in our life so we can spend more time working on (and thinking about - or do you sincerely think people will just stop thinking altogether?) the shit we care about. Offloading mental labor likewise lets us focus our mental capacities on deeper, more important, more profound stuff. This is how human society, which requires specialization and division of labor at every level to function, works.
I’m old enough to remember when people started saying the same thing about the internet. Well I’ve been on the internet from pretty much the first moment it was even slightly publicly available (around 1992) and have been what is now called ‘terminally online’ ever since. If the internet is making us dumb I am the best possible candidate you could have to test that theory, but you know what I do when I’m not remembering phone numbers and handwriting everything and looking shit up in paper encyclopedias at the library? I’m reading and thinking about science, philosophy, religion, etc. That capacity didn’t go away, it just got turned to another purpose.
The people who were used to the oral tradition were right. Memorising things is good for your memory. No, I don't think people will stop thinking altogether (please don't be reductive like this lmao), just as people didn't stop remembering things. But people did get worse at remembering things. Just as people might get worse at applying critical thinking if they continually offload those processes to AI. We know that using tools makes us worse at whatever the tool automates, because without practice you become worse at things. This just hasn't really been a problem before, as the tools generally make those things obsolete.
The people who were used to the oral tradition were right. Memorising things is good for your memory.
Except people didn't stop memorizing things. I went to school in the 1970s (unarguably a long-ass time after we stopped using the oral tradition as the primary method to transmit culture) and I was memorizing shit left and right. I still remember those multiplication tables, 'in 1492 Columbus sailed the ocean blue', etc., 40-odd years later.
No, I don’t think people will stop thinking altogether (please don’t be reductive like this lmao)
Sorry, I thought it was pretty clear that I was expressing skepticism at the idea that anyone actually thinks this.
But people did get worse at remembering things.
If you have evidence that suggests that people got worse at remembering things between, say, ancient Greece and the Industrial Revolution I’d love to see it.
We know that using tools makes us worse at whatever the tool automates, because without practice you become worse at things.
Likewise if you have evidence that people stopped thinking with the invention of books, the calculator, computers, the internet, etc, don’t be shy about it.
because without practice you become worse at things.
You assume that offloading some mental processes to AI means we will stop practicing them. I argue that we’ll just use the capabilities we have for other things. I use ChatGPT to help me worldbuild, structure my writing projects, come up with thematically-consistent names, etc, for example, but it’s not writing for me and I still come up with names and such all the time.
Again, you're being reductive. My argument is not that we will stop practising critical thinking altogether, but that we will not need to practise it as often. Less practice always makes you worse at something. I do not need evidence for that as it is obvious.
I don’t see a point to continuing this conversation if you keep reducing my argument to “nobody will think anymore”.
I am glad you use AI for reasons that don’t make you stupid, but I have seen how today’s students are using it instead of using their brains. It’s not good. We teach critical thinking in schools for a reason, because it’s something that does not always come naturally, and these students are getting AI to do the work for them instead of learning how to think.
Replace ‘stop remembering things’ with ‘remember fewer things’ at your own leisure if it makes you happy, I’m exaggerating slightly to make a point.
My argument is not that we will stop practising critical thinking altogether, but that we will not need to practise it as often.
And mine is that as far as I know we have no evidence (or at least nothing more than anecdotal evidence at best) for that because society has only gotten more complex, not less, and requires more thought, memory, etc to navigate it. Now instead of remembering which cow was sick last week and which field I’m going to plant tomorrow I have to remember shit like how to navigate a city that’s larger than the range in which most people traveled their entire lives, I have to figure out what this weird error my PC just threw means, I have to calculate the risk-vs-reward of trying to buy a house now or renting for a year to save up for a better down payment and improve my credit, etc. These are just examples, pick your own if you don’t like them.
Less practice always makes you worse at something. I do not need evidence for that as it is obvious.
Now who’s being reductive? I’m not asking for evidence that less practice makes you worse at something, I’m asking for evidence that labor-saving devices result in people doing less labor (mental or otherwise), because I think that’s a lot less obvious.
I have seen how today’s students are using it instead of using their brains
This is a bad example because learning is a different matter. People using it instead of learning will not learn the subject matter as well as those who don’t, obviously. But it’s a lot less obvious in other fields/adult life. Will I be less good at code because I use an LLM to generate some now and then? Probably not, both because I’ve been coding off and on for 30 years, but also because my time instead is spent on tackling the thornier problems that AI can’t do or has difficulty with, managing large projects because AI has a limited memory window, etc.
We teach critical thinking in schools for a reason, because it’s something that does not always come naturally, and these students are getting AI to do the work for them instead of learning how to think.
That’s debatable, though I guess it depends on where you’re from and what the schools are like there. They certainly didn’t teach critical thinking when I was in (US public) school, I had to figure that shit out largely on my own. But that’s beside the point. Shortcutting learning is bad, I agree. Shortcutting work is a lot more nebulous and uncertain in the absence of that evidence I keep asking for.
In this day and age, no, we aren’t offloading for deeper shit. We aren’t getting that extra time to chill and vibe like 50s sci-fi wrote about.
We’re doing it because there is now a greater demand for our time and attention. From work mostly, but also family and friends (if we’re lucky enough to have those), to various forms of entertainment (which we usually use as a distraction from IRL shit like work).
This seems like a capitalism problem, not a technology problem. That endless drive to greater productivity so that others can extract the bulk of the value thereof for their own benefit instead of the benefit of everyone is a big part of what’s eating up the purported leisure-time. But also that’s a choice you can make: I choose to spend my spare mental capacity learning about how the world works and engaging with ideas about how it ought to work. If people choose to spend that extra capacity doom-scrolling social media and keeping up with the virtual Joneses or whatever then that’s on them, but I’m not here to judge, I do that sometimes too. Life takes it out of you, sometimes you just need some low-effort destressing. But the point stands: offloading labor (mental or otherwise) to technology and then turning that time/energy/etc to stuff that’s more important is just how humans work.
Did you read the article?
Joke’s on you, I was already stupid to begin with.
The less you use your own brains, the more stupid you eventually become. That’s a fact, like it or don’t.
This is the next step towards Idiocracy. I use AI for things like summarizing Zoom meetings so I don't need to take notes, and I can't imagine I'll stop there in the future. It's like how I forgot everyone's telephone numbers once we got cell phones…we used to have to know numbers back then. AI is a big leap in that direction. I'm thinking the long-term effects are all of us just getting dumber and shifting more and more "little unimportant" things to AI until we end up in an Idiocracy scene. Sadly I will be there with everyone else.
Another perspective: outsourcing unimportant tasks frees our time to think deeper and be innovative. It removes the entry barrier, allowing people who would ordinarily not be able to do things to actually do them.
That’s the claim from like every AI company and wow do I hope that’s what happens. Maybe I’m just a Luddite with AI. I really hope I’m wrong since it’s here to stay.
If paying attention and taking a few notes in a meeting is an unimportant task, you need to ask why you were even at said meeting. That’s a bigger work culture problem though
It allows people who can’t do things to create filler content instead of dropping the ball entirely. The person relying on the AI will not be part of the dialogue for very long, not because of automation, but because people who can’t do things are softly encouraged to get better or leave, and they will not be getting better.
What you’re describing isn’t anything unique when a new industry comes out.
It doesn't need to be specifically for public consumption. Currently I'm wrapping up several personal projects that I started pre-COVID but couldn't finish because I struggle with a few lower-level tasks. It's kind of like someone who struggles to manually perform 100 seven-digit number calculations. Using Excel solves this issue, and isn't "cheating", because the goal is beyond the ability to accurately add everything.
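The Excel analogy is easy to make concrete. A tiny sketch (the numbers here are invented purely for illustration):

```python
import random

random.seed(0)  # deterministic, so the example is repeatable

# 100 seven-digit numbers: tedious and error-prone to add by hand,
# trivial for a tool. Using the tool isn't "cheating" when the goal
# is what the total is *for*, not the act of adding itself.
numbers = [random.randint(1_000_000, 9_999_999) for _ in range(100)]
total = sum(numbers)
print(f"{len(numbers)} numbers, total = {total}")
```

The point is the same as with a spreadsheet: the lower-level mechanics get offloaded so the higher-level goal stays reachable.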
I used to able to navigate all of Massachusetts from memory with nothing but a paper atlas book to help me. Now I’m lucky if I remember an alternate route to the pharmacy that’s 9 minutes away.
Lewis and Clark are proud of you.
See, I agree, but the phone number example has me going…so what? I know my wife's number, my siblings', and my parents'. They're easy to learn. What do all those landlines I remember from childhood contribute? Why do I need any others now? I need to recall my wife's for documents, that's about it, and I could use my phone to do it. I need to know it like every 4 years maybe lol
One example: getting arrested
You might not. But you might (especially with this current admin). Cops will never let you use your phone after you’ve been detained. Unless you go free the same night, expect to never have a phone call with anyone but a lawyer or bail bonds agency.
Yes but why do I need to know a grade school friend’s number? As I said I know my wife’s. I know my siblings’. These have changed too, so I’ve memorized them in the smartphone era. If you know no emergency number that’s just bad prep. Everyone should do that.
But memorizing lots of numbers? Pointless.
Yeah that’s a big part of it…shifting off the stuff that we don’t think is important (and probably isn’t). My view is that it’s escalated to where I’m using my phone calculator for stuff I did in my head in high school (I was a cashier in HS so it was easy)…which is also not a big deal but getting a little bigger than the phone number thing. From there, what if I used it to leverage a new programming API as opposed to using the docs site. Probably not a big deal but bigger than the calculator thing to me. My point is that it’s all these little things that don’t individually matter but together add up to some big changes in the way we think. We are outsourcing our thinking which would be helpful if we used the free capacity for higher level thinking but I’m not sure if we will.
Your parents likely also can’t do quick mental math. That’s not smart phones, that’s just aging. You aren’t drilled anymore. You don’t do it everyday.
I taught middle schoolers remedial math for years in my 20s, so I actually am very fast at basic arithmetic in my head. It's because it's more recent for me. That's what made shows like Are You Smarter Than a 5th Grader kind of deceptive. If you were taught something recently, or are currently being drilled on it basically every day, then you're going to know it better than anybody, regardless of their tools or age or intelligence.
An assistant at my job used AI to summarize a meeting she couldn’t attend, and then she posted the results with the AI-produced disclaimer that the summary might be inaccurate and should be checked for errors.
If I read a summary of a meeting I didn’t attend and I have to check it for errors, I’d have to rewatch the meeting to know if it was accurate or not. Literally what the fuck is the point of the summary in that case?
PS: the summary wasn’t really accurate at all
Add it to the list
A new One UI update on my Samsung phone has allowed me to disable Gemini from the start. I wasted no time doing so.
My favorite feature about my Pixel phone is GrapheneOS compatibility, which doesn’t ship AI by default, but I can opt in if I want (i.e. on a separate profile).
Good thing I don’t use it.
AI, or your brain?
Yes
Actually it’s taking me quite a lot of effort and learning to setup AI’s that I run locally as I don’t trust them (any of them) with my data. If anything, it’s got me interested in learning again.
That’s the kind of effort in thought and learning that the article is calling out as being lost when it comes to reading and writing. You’re taking the time to learn and struggle with the effort; as long as you’re not giving that up once you have the AI running, you’re not losing it.
I have difficulty learning, but using AI has helped me quite a lot. It’s like a teacher who will never get angry, no matter how dumb your question is or how many times you ask it.
Mind you, I am not in school and I understand hallucinations, but having someone who is this understanding in a discourse helps immensely.
It’s a wonderful tool for learning, especially for those who can’t follow the normal pacing. :)
It’s not normal for a teacher to get angry. Those people should be replaced by good teachers, not by a nicely-lying-to-you-bot. It’s not a jab at you, of course, but at the system.
I agree, I’ve been traumatized by the system. Whatever I’ve learnt that’s been useful to me has happened through the internet, give or take a few good teachers.
I still think it’s a good auxiliary tool. If you understand its constraints, it’s useful.
It’s just really unfortunate that it’s a for profit tool that will be used to try and replace us all.
Yeah, same. I have to learn now to learn in spite of all the old disillusioned creatures that hated their lives almost as much as they hated students.
And yet, I’m afraid learning from chatbots might be even worse. Learning how to learn is so important. I only learned that as an adult.
The problem is if it’s wrong, you have no way to know without double checking everything it says
It’s not a big deal if you aren’t completely stupid. I don’t use LLMs to learn topics I know nothing about, but I do use them to assist me in figuring out solutions to things I’m somewhat familiar with. In my case I find it easy to catch incorrect info, and even when I don’t catch it, most of the time if you just occasionally tell it to double-check what it said, it self-corrects.
It is a big deal. There is a whole set of ways humans can gauge the validity of info, and they’re perpendicular to the way we interact with fancy autocomplete.
Every single word might be false, with no pattern to it. So if you can and do check it, you’re just wasting your time and humanity’s resources instead of finding the info yourself in the first place. If you don’t, or if you only think you do, it’s even worse: you are being fed lies and believing them extra hard.
To be fair, this can also be said of teachers. It’s important to recognise that AIs are only as accurate as any single source, and you should always check everything yourself. I have concerns about a future where our only available sources are through AI.
The level of psychopathy required for a human to lie as blatantly as an LLM is almost unachievable.
Bruh so much of our lives is made up of people lying, either intentionally or unintentionally via spreading misinformation.
I remember being in 5th grade and my science teacher in a public school was teaching the “theory” of evolution but then she mentioned there are “other theories like intelligent design”
She wasn’t doing it to be malicious, just a brainwashed idiot.
so much of our lives is made up of people lying
And that’s why we, as humans, know how to look for signs of this in other humans. It’s a skill we have to learn precisely because of that. Not only is it not applicable when you read generated bullshit, it actually does the opposite.
Some people are mistaken, some people are actively misleading, but almost no one has the combination of being wrong just enough, and confident just enough, to sneak their bullshit under the bullshit detector.

Took that a slightly different way than I was expecting; my point is we have to be on the lookout for bullshit when getting info from other people, so it’s really no different when getting info from an LLM.
However, you took it to “the LLM can’t distinguish between what’s true and false,” which is obviously true but an interesting point to make nonetheless.
I understand that. I am careful to not use it as my main teaching source, rather a supplement. It helps when I want to dive into the root cause of something, which I then double check with real sources.
But like, why not go to the real sources directly in the first place? Why add an unnecessary layer that doesn’t really add anything?
I do go to the real source first. But sometimes, I just need a very simple explanation before I can dive deep into the topic.
My brain sucks; I give up very easily if I don’t understand something. (This has been true since way before short-form content and the internet.)
If I had to say how much I use it to learn, I’d say it’s about 30% of the total learning. It can’t teach you course work from scratch like a real person can (even through videos), but it can help clear doubts.
Ironically, the author waffles more than most LLMs do.
I feel like that might have been the point. Rather than “using a car to go from A to B” they walked.
What does it mean to “waffle”?
To “waffle” comes from the 1956 movie Archie and the Waffle House. It’s a reference how the main character Archie famously ate a giant stack of waffles and became a town hero.
— AI, probably
Hahaha let’s keep going with Archie and the Waffle House hallucinations
To “grill” comes from the 1956 movie Archie and the Waffle House. It’s a reference to the chef cooking the waffles, which the main character Archie famously ate a giant stack of, and became the town hero.
Either to take a very long time to get to the point, or to go off on a tangent.
Writing concisely is a lost art, it seems.
I wrote concisely until I started giving fiction writing a try. Suddenly writing concisely was a negative :x (not always, obviously, but a lot of the time I found I wrote too concisely).
IDK, that kinda depends on the writer and their style. Concise is usually a safe bet for easy reading, but it doesn’t leave room for a lot of fancy details. When I think verbose vs. concise, I think of Frank Herbert and Kurt Vonnegut for reference.
concisely
Precisely.
Building up imagery in fiction isn’t the opposite of being concise.
It’s not. I just wrote the comment because it was relevant to recent events for me.
I started practicing writing fiction recently as a hobby. While writing fiction, I noticed that being concise 100% of the time is not good. Sometimes I did want to write concisely, other times I did not. When I read my writing back, I realized how deliberate you had to be about how much or how little detail you gave. It felt like a lot of the rules of English went out the window. 100% grammatical correctness was not necessary if it meant better flow or pacing. Unnecessary details and repetition became tools instead of taboos. The whole experience felt like painting with words, and as long as I could give the reader the experience I wanted, nothing else mattered.
It really highlighted the contrast between fiction and non-fiction writing. It was an eye-opening experience.
I’d be careful with this one. Being verbose in non-fiction does not automatically produce good writing. In my opinion, the best writers in the world have an economy of words but are still eloquent and rich in their expression.
Of course being verbose doesn’t mean your writing is good. It’s just that you need to deliberately choose when to be more verbose and when to give no description at all. It’s all about the experience you want to craft. If you write about how mundane a character’s life is, you can write out their day in detail and give your readers the experience of having such a life, that is if that was your goal. It all depends on the experience you want to craft and the story you want to tell.
To put my experience more simply, I did not realize how much of an art writing could be, and how few rules there are when you write artistically or creatively.
I did not have time to write a short letter, so I wrote a long one instead
How are you using new AI technology?
For porn, mostly.
I did have it create a few walking tours on a vacation recently, which was pretty neat.
Unlike social media?
Kek
that picture is kinky as hell, yo
Does the nose insertion tube feed me cocaine?
I’m in
suspiciously specific
I was annoyed that it wasn’t over her mouth to implant the egg.
It implants ideas, so it goes through the eyes.
A human would have known that the xenomorph should be impregnating that girl through her throat…