How are you using new AI technology? Maybe you're only using tools like ChatGPT to summarize long texts or draft mindless emails. But what are you losing by taking these shortcuts? And is this tech taking away our ability to think?
I think you are underestimating that some skills, like reading comprehension, deliberate communication, and reasoning, can only be acquired and honed by actually doing very tedious work that can at times feel braindead and inefficient. Offloading that onto something else (something essentially out of your control, too), and letting it become more and more a fringe “enthusiast” skill, has bigger implications than losing the ability to patch your own clothing or do arithmetic in your head. Understanding and processing information, and communicating it to yourself and others, is a more essential skill than calculating by hand.
I think the article’s comparison with walking to a grocery store vs. driving even just 3 minutes is pretty fitting. By thinking only about efficiency, one risks losing sight of the additional effects that actually doing tedious things has. This also highlights that this is not simply about the technology but about the context in which it is used; yet technology also dialectically influences that very context. While LLMs and other generative AIs have their place where they are useful and beneficial, it is hard to untangle those uses from genuinely dehumanising ones, especially in a system in which dehumanisation and efficiency-above-contemplation are already incentivised. As an anecdote: a few weeks ago, I saw someone in an online debate openly state that they use AI to write their arguments because it makes them “win” the debate more often. That makes winning with the lowest invested effort the goal of argument, instead of processing and developing your own viewpoint against counterarguments. It is clearly a problem of ideology, in how it structures our understanding of ourselves in the world (and possibly just a troll, of course), but it is a problem the technology can exacerbate.
Assuming AI will just be like past examples of technology scepticism seems like a logical fallacy to me. It’s more than letting numbers be calculated for you; it means giving up your own understanding of the information you process, and of how you communicate it, at a more essential level. And, as the article points out with the studies it quotes, technology that changes how we interface with information has already changed more fundamental things about our thought processes and memory retention. Just because the world hasn’t ended does not mean it had no effect.
I also think it’s a bit presumptuous to just say “it’s true” with your own intuition as the source. You are also treating the existence of “lazy/dumb” people as an essentialist claim, when laziness and stupidity aren’t simply essential attributes; they manifest as a consequence of systemic influences in a person’s life, and those behaviours then feed back into the system. That includes learning and practising skills, such as the ones you say it would not be a “bad thing” to see become more esoteric (that is: essentially lost).
To show why I think essentialism is fallacious here, a hyperbolic example that illustrates the underlying principle: imagine arguing for a totally unregulated market for highly addictive drugs on the grounds that “only addicts” would be in danger of being negatively affected. That ignores that addiction is not simply inherent in a person but grows out of their circumstances, and that such a market would add more incentives that create more addicts. In a similar way, people aren’t intrinsically lazy or stupid; they act lazily and stupidly due to more complex, potentially self-reinforcing dynamics.
You focus on deliberately unpleasant examples that seem like no-brainers to skip. I see no indication that LLMs are being used exclusively for those, and I also see no reason to assume that only “deep, rigorous thinking” is needed to maintain the ability to process and communicate information properly. It’s like saying that practice drawings aren’t high art, so skipping them is good, when you simply can’t produce high art without the often tedious practice.
Framing the problem of students cheating as them not being “properly educated” misses an important point, IMO: the real problem is a potential shift in culture, in what it even means to be “properly educated”. Along the same dynamic that leads to arguing that school should teach children only how to work, earn, and manage money, instead of more broadly understanding the world and themselves within it, the real risk lies in declaring that certain skills won’t be necessary for that goal, so it’s more efficient not to teach them at all. AI has the potential to move culture further in that direction and to shift the definition of “properly educated”. That in turn poses a challenge to us and to how we want to manifest ourselves as human beings in this world.
Also, there is quite a bit of hand-waving in “homework structured in such a way that AI cannot easily do it, etc.” In the worst case, it’d give students busywork just to keep them occupied, because the exercises that would actually teach, say, reading comprehension are easy enough for an AI to do.
Most of the time, technology does not cause radical change in society; only in rare cases does it.
The system eventually adapts to new technology, but only if that technology can be replicated by anyone; meanwhile, other problems suddenly appear that the system can’t solve all at once. It’s just another dark age for humanity, and then it recovers and moves on.