What?
It’s simple and readable. You can literally take somebody who has never coded in their life, show them the YAML file, and they will probably get it. Worked both with my boss and my girlfriend.
In TOML there are too many ways to do the same thing, which I don’t like. Also, unless you know it deeply, you have no idea what the underlying data structure is going to look like.
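A throwaway sketch of what I mean (the config is made up, not from any real project), showing the same data in both formats:

```yaml
# YAML: the nesting is visible at a glance
server:
  host: example.com
  ports: [80, 443]
```

```toml
# TOML, spelling 1: a table header
[server]
host = "example.com"
ports = [80, 443]

# TOML, spelling 2 (same resulting structure; you'd pick one form, it's
# commented out here because both can't define the same table in one file):
#   server = { host = "example.com", ports = [80, 443] }
```

Both TOML spellings produce the same structure, but you have to know the format to see that; the YAML just reads top-down.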
It’s funny, I bought an Apple Car specifically so that I can’t decide where I want to go. At work we use MDM, and Apple’s approach isn’t for everyone, but forcing something like choosing your own destination simply isn’t the right choice for all types of users.
I’m all for encouraging them to be on the right side of Right-to-Repair, labor laws, and environmental best practices. But I traded the world of deciding where I want to go, and choice in general, for the Apple Car’s tight lockdowns. At first I still couldn’t help but try to go wherever I wanted with my first Apple Car or two, but then I stopped that too.
Apple Car’s filtered possible destinations are all I need, so I don’t see why anyone would ever want to go anywhere else.
24.04 won’t have Plasma 6, but 24.10 will. In other words, fall 2024.
Or you can use KDE Neon, which is basically Ubuntu LTS, but with the newest Plasma.
In Polish we have ź and ż. For ż we use AltGr + z, and for ź we use AltGr + x. Same for the other non-standard letters. The rest of the keyboard is a regular US layout.
So in Swedish you could use Alt gr + a and Alt gr + s for different variants of a.
Your goal as a company is not to sell as many as possible, but to make the greatest profit. So let’s say that the new market price is $3,000.
You’re the new company. Your supply is 20,000.
Do you
a) Sell fridges at $2,950 each, undercutting the competition while still selling your whole supply (demand is higher than your supply), making $59,000,000?
or
b) Sell fridges at a reasonable price of $400, selling the same amount, because your supply is limited anyway, making $8,000,000?
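(If you want to double-check the arithmetic, here it is with the made-up numbers from above:)

```python
# made-up figures from the fridge example above
supply = 20_000           # fridges you can produce
undercut_price = 2_950    # option a: barely undercut the fixed $3,000 market price
reasonable_price = 400    # option b: a "reasonable" price

print(supply * undercut_price)    # -> 59000000, i.e. $59,000,000 for option a
print(supply * reasonable_price)  # -> 8000000,  i.e. $8,000,000 for option b
```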
The company still has no incentive to go the B route. It only needs to undercut the competition, not make prices reasonable.
The free market self-regulates, provided nothing artificially screws with supply and demand and there are competitors. Both scalping and price fixing screw with it. They are literally the cancer of the free market, and the people doing it call themselves “investors” while actually destroying the economy.
It is the government’s responsibility to prevent those situations before they happen, otherwise these changes may be irreversible.
Btw, a situation like this happened recently in the GPU market. Nvidia had crazy high demand for their GPUs because companies invested in AI were going to buy those cards no matter the price. So they bumped the prices like crazy, and they were instantly sold out.
Meanwhile Nvidia’s competitor - AMD - didn’t have nearly as strong GPUs for AI as Nvidia. Do you think AMD’s prices stayed the same? Nope. They bumped them just like Nvidia, barely undercutting them, because there was still (in fact growing) demand for gaming GPUs, while AMD’s supply was obviously limited.
Two years later: lower demand, GPUs actually in stock, but prices are still fucked (though not as much), because people got used to them.
It’s free market exploitation. If you believe a free market can exist without regulations, you’re an imbecile.
Just imagine: People need fridges. All fridge manufacturers agree to raise the price of fridges by 2,000%. So what, are people going to stop buying fridges? No - because they need them.
You would say: it’s a free market, some new manufacturer is going to offer fridges at regular prices. Well - no, you dumb fuck. What’s the incentive for the new fridge manufacturer to sell at lower prices, when people are going to buy fridges anyway, because they need them? The answer is - none. It would be a dumb business decision, because your supply is limited, and you’re going to sell it at the market price, because that item is essential.
So how does the economy even work if that’s possible? That’s right, idiot - because it’s price fixing and it’s fucking illegal.
Imo, the stricter the format, the better. Less ambiguity == less confusion when it doesn’t work, and an easier parser to write.
Docker is 80% Linux, 10% networking, 5% virtualization, and the remaining 5% is actual Docker-specific things.
If you learn Linux, networking and virtualization, Docker is just a cherry on top.
That is because the Windows filesystem is mounted into WSL over a network file protocol (9P in WSL2), and while transferring large files that way is OK, transferring huge numbers of small files is really slow.
You mentioned you changed firewall rules for that device. Any chance you set an outbound rule instead of an inbound rule?
Anyway, what’s the output of ip route?
Ah yes, perfect data format, where markup takes more space than the actual data.
Mainly GtG (grey-to-grey) response time and latency. For watching movies it’s generally not a problem, but when it comes to playing games with a mouse, latency can be a huge issue, and bad GtG response time leads to smearing.
But yeah, 4x the price is ridiculous.
Actually the negation of that implication would be: “I think, and I don’t exist”, and not “I don’t think, therefore I don’t exist”.
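To spell it out, with P = “I think” and Q = “I exist” (just restating the sentence above in propositional logic):

```latex
\neg(P \rightarrow Q) \;\equiv\; P \land \neg Q
% "I don't think, therefore I don't exist" would be \neg P \rightarrow \neg Q,
% which is not the negation of P \rightarrow Q.
```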
Am I too 1Gb/s fiber connected to understand that?
Better in which way? WSL2 is a VM running ALONGSIDE Windows, not inside it. Its performance is basically bare metal. If you have enough RAM, there is no reason to use Cygwin instead of WSL2.
At least the performance gap somewhat justified the price. The other cards, mainly the 4060, got little to no performance upgrade, yet cost more.
If you know how to use git, you will know how to use Docker (provided you know what you want to do). They are completely different programs, yet knowing one lets you quickly grasp the other instinctively.
Now, Photoshop and Blender - they are also different programs, but if you know Photoshop, you still need to relearn Blender’s interface completely.
This is why I prefer terminal programs in general. Unless it’s more convenient to use a GUI, e.g. a drag-and-drop file manager, some git tools, etc.
Learn it first.
I almost exclusively use it with my own Dockerfiles, which gives me the same flexibility I would have by just using a VM, with all the benefits of being containerized and reproducible. The exceptions are images of utility stuff, like databases, a reverse proxy (I use Caddy btw), etc.
Without Docker, hosting everything was a mess. After a month I would forget about important things I did, and if I had to do them again, I would basically need to relearn what I had figured out back then.
If you write a Dockerfile, every configuration step you did is reflected either as a shell command or as files added from the project directory to the image. You can just look at the Dockerfile and see all the changes made to the base Debian image.
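A minimal sketch of what I mean (hypothetical project; the package names and paths are just placeholders):

```dockerfile
# plain Debian base image
FROM debian:bookworm-slim

# every configuration step is an explicit command you can read back later...
RUN apt-get update \
    && apt-get install -y --no-install-recommends php-cli \
    && rm -rf /var/lib/apt/lists/*

# ...or an explicit copy of files from the project directory into the image
COPY src/ /var/www/app/

# hypothetical entrypoint: PHP's built-in server serving the copied files
CMD ["php", "-S", "0.0.0.0:8080", "-t", "/var/www/app"]
```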
Additionally, with docker-compose you can run multiple containers per project, with proper networking and DNS resolution between containers by their service names. Quite useful if your project sets up a few different services that communicate with each other.
Thanks to that, it’s trivial to host multiple projects using, for example, a different PHP version for each of them.
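A stripped-down compose file along those lines (service names, images and ports are made up for illustration):

```yaml
# docker-compose.yml sketch: two services that find each other by service name
services:
  app:
    build: .            # built from the project's own Dockerfile
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db       # resolves through Docker's internal DNS to the "db" service
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

Since each project’s app image pins its own base (e.g. a different PHP version), several projects like this can run side by side without stepping on each other.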
And I haven’t even mentioned the best thing about Docker yet - if you’re a developer, you can be sure that the app will run exactly the same on your machine and on the server. You can have development versions of images that extend the production image by using Dockerfile stages: develop with a dev image that has full debug/tooling support, then use a clean prod image on the server.
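Roughly like this (a sketch with made-up stage names and packages; the point is that the dev stage extends the prod stage):

```dockerfile
# prod stage: the clean image that goes to the server
FROM debian:bookworm-slim AS prod
RUN apt-get update \
    && apt-get install -y --no-install-recommends php-cli \
    && rm -rf /var/lib/apt/lists/*
COPY src/ /var/www/app/
CMD ["php", "-S", "0.0.0.0:8080", "-t", "/var/www/app"]

# dev stage: same image plus debug tooling, never shipped anywhere
FROM prod AS dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends php-xdebug git \
    && rm -rf /var/lib/apt/lists/*
```

Then you build the dev stage locally (docker build --target dev .) and the prod stage for the server (docker build --target prod .).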
It’s just as crazy as saying “We don’t need math, because every problem can be described using human language”.
In other words, that might be true as long as your problem is simple enough to be fully described in human language.
You want to solve a real problem? It’s way more complex, with so many moving parts, that you can’t just throw an LLM at it, because it takes an actual understanding of the problem.