You’re a necessary accomplice. Organise. Burn shit up and build better things on top of the ashes.
I’m both, and while I do hate myself, I don’t think it’s related, so I’m not sure I get it.
(I hate computers more, though, except when they’re turned off; no bugs when they’re off. But they’re the only thing I’m good enough at to make a living off of.)
Well, this is going to be golden, isn’t it…?
makes it sound like they’re all equal, and there hasn’t been any progression
Programming peaked with Lisp (and SQL for database stuff).
Every “progression” made since Lisp has been other languages adding features to do (partially, but not quite completely) stuff that could already be done in Lisp, only less well implemented (though probably with fewer parentheses).
They are all flawed and they all encourage some bad design patterns.
On the other hand, Lisp.
Wait until you learn about Steller’s sea cows…
Yeah. Turns out penguins have been extinct for almost two centuries, and what we’ve been calling penguins are a completely unrelated type of bird that just happens to look vaguely similar.
What’s worse is that half the coordinates probably ended up as dates…
Are search engines worse than they used to be?
Definitely.
Am I still successfully using them several times a day to learn how to do what I want to do (and to help colleagues who use LLMs instead of search engines learn how to do what they want to do once they get frustrated enough to start swearing loudly enough for me to hear them)?
Also yes. And it’s not taking significantly longer than it did when they were less enshittified.
Are LLMs a viable alternative to search engines, even as enshittified as they are today?
Fuck, no. They’re slower, they’re harder and more cumbersome to use, their results are useless on a good day and harmful on most, and they give you no context or sources to learn from, so best case scenario you get a suboptimal partial buggy solution to your problem which you can’t learn anything useful from (even worse, if you learn it as the correct solution you’ll never learn why it’s suboptimal or, more probably, downright harmful).
If search engines ever get enshittified to the point of being truly useless, the alternative isn’t LLMs. The alternative is to grab a fucking book (after making sure it wasn’t defecated by an LLM), like we did before search engines were a thing.
I’ve been finding it a lot harder recently to find what I’m looking for when it comes to coding knowledge on search engines
Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.
That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take, and where an LLM would simply give me a single answer (which I’d then have to test and fix), a search engine gives me multiple commented answers which I can compare and learn from.
I remembered another example: I was checking a pull request and it wouldn’t compile; the programmer had apparently used an obscure internal function to check if a string was empty instead of string.IsNullOrWhiteSpace() (in C#, internal means “I designed my classes wrong and I don’t have time to redesign them from scratch; this member should be private or protected, but I need to access it from outside the class hierarchy, so I’ll allow other classes in the same assembly to access it, but not ones outside of the assembly”; similar use case as friend in C++; it’s used a lot in the standard .NET libraries).
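A quick sketch of the access rule in question (the class and method names here are invented for illustration; the PR’s actual function isn’t shown):

```csharp
// internal widens access from "this class" to "this whole assembly",
// but no further. Names below are hypothetical.
public class Parser
{
    // Visible to every class in the same assembly, invisible outside it.
    internal static bool IsBlank(string s) => string.IsNullOrWhiteSpace(s);
}

public class SameAssemblyCaller
{
    // Compiles fine: we're in the same assembly as Parser.
    public static bool Check(string s) => Parser.IsBlank(s);
}

// In a *different* assembly referencing this one, Parser.IsBlank("  ")
// would be a compile-time error (CS0122, "inaccessible due to its
// protection level"), even though Parser itself is public.
```

Which is exactly why an LLM suggesting some library’s internal helper hands you code that can’t compile in your project.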
Now, that particular internal function isn’t documented practically anywhere, and, being internal, can’t be used outside its particular library, so it wouldn’t pop up in any example the coder might have seen… but .NET is open source, and the library’s source code is on GitHub, so chatgpt/copilot has been trained on it, so that’s where the coder must have gotten it from.
The thing, though, is that LLMs, being essentially statistical engines that’ll just pop out the most statistically likely token after a given sequence of tokens, have no way whatsoever to “know” that a function is internal. Or private, or protected, for that matter.
That function is used in the code they’ve been trained on to check whether a string is empty, so they’re just as likely to output it as string.IsNullOrWhiteSpace() or string.IsNullOrEmpty().
Hell, if(condition) and if(!condition) are probably also equally likely in most places… and I for one don’t want to have to debug code generated by something that can’t tell those apart.
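And those two standard helpers aren’t even interchangeable with each other, never mind with some internal lookalike; they disagree on whitespace-only strings:

```csharp
using System;

class Demo
{
    static void Main()
    {
        // Empty string: both helpers agree.
        Console.WriteLine(string.IsNullOrEmpty(""));         // True
        Console.WriteLine(string.IsNullOrWhiteSpace(""));    // True

        // Whitespace-only string: only IsNullOrWhiteSpace catches it.
        Console.WriteLine(string.IsNullOrEmpty("   "));      // False
        Console.WriteLine(string.IsNullOrWhiteSpace("   ")); // True
    }
}
```

So a model that treats the two as statistically interchangeable is quietly picking a semantics for you.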
It could be, in a monkeys with typewriters sort of way… 🤷‍♂️
Sure, but if you’re copying from stack overflow or reddit and ignore the dozens of comments telling you why the code you’re copying is wrong for your use case, that’s on you.
An LLM on the other hand will confidently tell you that its garbage is perfect and will do exactly what you asked for, and leave you to figure out why it doesn’t by yourself, without any context.
An inexperienced programmer who’s willing to learn won’t fall for the first case and will actually learn from the comments and alternative answers, but will be completely lost if the hallucinating LLM is all they’ve got.
The other day we were going over some SQL query with a younger colleague and I went “wait, what was the function for the length of a string in SQL Server?”, so he typed the whole question into chatgpt, which replied (extremely slowly) with some unrelated garbage.
I asked him to let me take the keyboard, typed “sql server string length” into google, saw LEN in the excerpt from the first result, and went on to do what I’d wanted to do, while in another tab chatgpt was still spewing nonsense.
LLMs are slower, several orders of magnitude less accurate, and harder to use than existing alternatives, but they’re extremely good at convincing their users that they know what they’re doing and what they’re talking about.
That causes the people using them to blindly copy their useless buggy code (that even if it worked and wasn’t incomplete and full of bugs would be intended to solve a completely different problem, since users are incapable of properly asking what they want and LLMs would produce the wrong code most of the time even if asked properly), wasting everyone’s time and learning nothing.
Not that blindly copying from stack overflow is any better, of course, but stack overflow or reddit answers come with comments and alternative answers that if you read them will go a long way to telling you whether the code you’re copying will work for your particular situation or not.
LLMs give you none of that context, and are fundamentally incapable of doing the reasoning (and learning) that you’d do given different commented answers.
They’ll just very convincingly tell you that their code is right, correct, and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to find out without any hints why it’s not.
The usability has been plummeting with every single redesign for quite a while, though.
Used to be everything could be found and done in two or three clicks. Now it’s five minutes clicking and scrolling through the useless single-windowed chaos of the configuration app, looking for where the last update randomly moved it to (finding one or two options that are almost what you’re looking for, but can’t do what used to take just a couple of clicks); five minutes looking it up on what’s left of the internet while avoiding ads, spam, and hallucinating LLMs, only to find out this setting you and everyone you know had been using almost daily was removed by the last update “to improve usability”; and five minutes writing eldritch incantations into the registry, group policies, or powershell to finally configure the fucking setting…
Protestors and counter protestors both have a right to express their views
No. For a just, tolerant, and civilised society to exist, intolerance can’t be tolerated.
Those eyes! Blurriness was clearly about to take place!
American standards of decency require genital mutilation.
We are, though, if we’re not actively doing our best to stop all this. We’re all at least necessary accomplices.