I don’t know why you’re getting downvoted. It is hilarious in a black humor way. For us Europeans that’s a giant monster, because it is, while a lot of Americans would consider it a little baby truck because they are so brainwashed.
DocFx could do what you’re looking for. You would write your stuff in markdown and it generates an interactive and customizable site.
As others have already said, that is a lot of pasta. If you regularly cook volumes like that, it would really make sense to invest in a large pot as well. A cheap 10 L pot will do just fine for boiling pasta, and it sounds like you would get plenty of use out of it.
Do you cook your pasta in a large pot, with plenty of boiling water, and a good amount of salt? Usually I stir once just after putting the pasta in, and I never have noodles sticking together.
That is what I remember too.
It is more of a “For typical cards, expect very competitive options from AMD. If you want top performance, buy Nvidia.”
In theory it allows them to focus more on the cards that actually get bought, and thus they could make those cards better products.
As the great operation begins.
It is a hardware failure. Screens are complex and sensitive parts that are exposed to a lot of (ab)use. What is cryptic about that?
Am I missing something? I thought the outage was caused by CrowdStrike and had nothing to do with Microsoft or Windows?
They say there are 16 screens inside, each with a 16k resolution. Such a screen would have 16x as many pixels as a 4k screen. The GPUs power those as well.
For the number of GPUs it appears to make sense. 150 GPUs for the equivalent of about 256 4k screens means each GPU handles roughly two 4k screens. That doesn’t sound like a lot, but it could make sense.
The power draw of 28 MW still seems ridiculous to me though. They claim about 45 kW for the GPUs, which leaves 27955 kW for everything else. Even if we assume the screens are stupid and use 1 kW per 4k segment, that only accounts for 256 kW, leaving 27699 kW. Where the fuck does all that energy go?! Am I missing something?
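As a back-of-the-envelope check, here is a minimal sketch in C using only the figures above (16 screens, 16 4k-equivalents each, 150 GPUs, 28 MW total, 45 kW for the GPUs, and the deliberately pessimistic 1 kW per 4k segment):

#include <stdio.h>

int main(void) {
    int screens = 16;                  /* physical screens */
    int k4_per_screen = 16;            /* a 16k panel has 16x the pixels of a 4k panel */
    int gpus = 150;
    double total_kw = 28000.0;         /* 28 MW total draw */
    double gpu_kw = 45.0;              /* claimed GPU power */
    double kw_per_segment = 1.0;       /* deliberately pessimistic guess */

    int k4_equiv = screens * k4_per_screen;               /* 256 4k equivalents */
    double per_gpu = (double)k4_equiv / gpus;             /* about 1.7 per GPU */
    double screens_kw = k4_equiv * kw_per_segment;        /* 256 kW */
    double unaccounted = total_kw - gpu_kw - screens_kw;  /* 27699 kW */

    printf("%d 4k equivalents, %.1f per GPU, %.0f kW unaccounted\n",
           k4_equiv, per_gpu, unaccounted);
    return 0;
}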
Yeah that’s what I thought too. The horrors are described well, they just typically don’t get described through their physical form. As you say, because the human mind cannot comprehend it. There is a lot more focus on impressions, comparisons, and effects, rather than on a real physical description. Personally I thought it was quite neat!
AI is a field of research in computer science, and LLMs are definitely part of that field. In that sense, LLMs are AI. On the other hand, you’re right that there is definitely no real intelligence in an LLM.
That’s a good tip, but I assume he meant he drinks juice of burned beans, rather than burned juice of beans. After all, coffee beans do need to be roasted (burned) before you use them!
You couldn’t really do that with beer, because beer is typically carbonated and thus you’ll need a very strong bag inside of the box. So strong that you’ll end up with a can or bottle.
It would also be very hard to compete with products that are this mature. Linux, Windows, and macOS have been under development for a long time, with a lot of people. If you create a new OS, people will inevitably compare your new immature product with those mature products. If you had the same resources and time, then maybe your new OS would beat them, but you don’t. So at launch you will have fewer optimizations, features, security audits, compatibility, etc., and few people would actually consider using your OS.
That is true, but from a human perspective it can still seem non-deterministic! The behaviour of the program as a whole will be deterministic if all inputs are always the same, arrive in the same order, and there is no multithreading. On the other hand, a specific function call that is executed multiple times with the same input may occasionally give a different result.
Most programs also have input that changes between executions. Hence you may get the same input record, but at a different place in the execution. Thus you can get a different result for the same record as well.
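A minimal sketch of what that looks like, with a hypothetical function whose result depends on hidden state rather than only on its argument:

#include <stdio.h>

/* Hypothetical example: the result for the same argument depends on how
   often the function has already been called. */
int scaled(int x) {
    static int calls = 0;   /* hidden state that persists between calls */
    calls++;
    return x * calls;
}

int main(void) {
    printf("%d\n", scaled(5));   /* prints 5 on the first call */
    printf("%d\n", scaled(5));   /* prints 10 on the second call */
    /* The program as a whole is still deterministic, but a single call
       with the same input does not always give the same result. */
    return 0;
}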
That exact version will end up making “true” false any time it appears on a line number that is divisible by 10.
During the compilation, “true” would be replaced by that expression, and within the expression, “__LINE__” would be replaced by the line number of the current line. So at runtime, you end up with the line number modulo 10 (%10). In C, something is true if its value is not 0. So for lines 4, 17, 116, and 39, for example, it ends up being true. For line numbers that can be divided by 10, the result is zero, and thus false.
In reality the compiler would optimise that modulo operation away and pre-calculate the result during compilation.
The original version constantly behaves differently at runtime; this version would always give the same result… unless you change any line and recompile.
The original version is also super likely to be actually true. This version would be false quite often. You could reduce the likelihood of it being false by increasing the 10, but you can’t make it too high or it will never be triggered.
One downside compared to the original version is that the value of “true” can be 10 different things (anything between 0 and 9), so you would get a lot more weird behaviour since “1 == true” would not always be true.
A slightly more consistent version would be
((__LINE__ % 10) > 0)
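To make that concrete, here is a small self-contained sketch (my own illustration, not from the thread) showing the __LINE__-based define and the “1 == true” pitfall:

#include <stdio.h>

/* The version discussed above: "true" expands to the current line number
   modulo 10, so it is 0 (false) on any line whose number is divisible by 10
   and some value between 1 and 9 everywhere else. */
#define true (__LINE__ % 10)

/* The slightly more consistent variant: always 0 or 1, so "1 == true"
   behaves as expected, but it is still false on every tenth line. */
/* #define true ((__LINE__ % 10) > 0) */

int main(void) {
    if (true)   /* truthy unless this line number happens to be divisible by 10 */
        printf("probably reached\n");

    /* With the first variant, "true" can be any value from 1 to 9 here,
       so this comparison is usually 0 even when the condition above held. */
    printf("1 == true: %d\n", 1 == true);
    return 0;
}

(Caveat: this assumes “true” is not already a macro or keyword, so it needs extra care after including stdbool.h or in C23.)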
Bing is managing hilarious malicious compliance!
Correct! The translation is fine, except that “fan” was interpreted as the device that moves air.