![](https://programming.dev/pictrs/image/fbb0fc6c-aade-4537-ac38-36a7e6437814.jpeg)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
Google was working on a feature that would do just that, but I can’t recall the name of it.
They backed down for now due to public outcry, but I expect they’re just biding their time.
Not with this announcement, but it was.
It depends on the model you run. Mistral, Gemma, or Phi are great for a majority of devices, even with CPU or integrated graphics inference.
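For what it's worth, here's a minimal sketch of CPU-only inference using the llama-cpp-python bindings (the model path is hypothetical; grab whichever quantized GGUF you prefer):

```python
from llama_cpp import Llama

# Hypothetical path to a quantized 7B model; any small Mistral/Gemma/Phi
# quant works the same way. n_gpu_layers=0 keeps inference entirely on the CPU.
llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # assumption: downloaded beforehand
    n_ctx=2048,      # context window
    n_threads=8,     # tune to your physical core count
    n_gpu_layers=0,  # pure CPU inference
)

out = llm("Q: Why is the sky blue? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```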
I’m also going to put forward Tilda, which has been my preferred one for a while because of how minimal the UI is.
Pixel Experience is unfortunately dead now. 🙁
Yeah - the operating system (or perhaps the display hardware itself, not sure) has to stretch each software pixel across a fractional number of hardware pixels. In the case of upscaling 720p to 1080p, each 720p software pixel has to cover 1.5 hardware pixels along each axis. This forces blending to occur, which makes the image less sharp. (Quick sketch of the arithmetic below.)
The worst part of this in my opinion is reading text.
You also lose integer scaling if you need to run a game at common resolutions below 1080p. (720p/800p, etc.)
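To make the arithmetic concrete, here's a tiny Python sketch (nothing display-specific, just the scale factors): a resolution pair only avoids blending when the factor is a whole number, and 720p to 1080p isn't.

```python
# Integer factors map each source pixel to an exact block of hardware pixels;
# fractional factors force neighboring source pixels to be blended together.
for src, dst in [(720, 1440), (720, 1080), (800, 1080), (1080, 2160)]:
    s = dst / src
    kind = "integer: sharp pixel duplication" if s.is_integer() else "fractional: blending required"
    print(f"{src}p -> {dst}p: {s:.2f}x ({kind})")
```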
They added a video player with version 3, I think.
Now the question is - are they open sourcing the original Winamp, or the awful replacement?
I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to that guide, you don’t want to install the Linux Nvidia driver inside WSL; you only want to install the cuda-toolkit metapackage. I’d follow the instructions from that link closely.
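Once the toolkit is installed, a quick sanity check from inside WSL (assuming PyTorch is installed in that environment; substitute whatever framework you're actually using):

```python
import torch

# If this prints False, the CUDA/WSL plumbing isn't working yet and
# everything will silently fall back to the CPU.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```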
You may also run into performance issues within WSL due to the virtual machine overhead.
Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)
Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)
I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models. (A quick way to check is sketched below.)
On my RTX 3060, I generally get responses in seconds.
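Here's a rough way to check, sketched in Python (it just shells out to nvidia-smi, which ships with the driver): run it while a prompt is generating. If utilization sits near 0% and memory barely moves, inference is running on the CPU.

```python
import subprocess
import time

# Poll GPU utilization and memory a few times mid-generation.
for _ in range(10):
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
    time.sleep(2)
```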
I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn’t have to release LLaMA at all. They’re practically the only reason we have viable open-source weights/models and an engine.
That’s the funny thing about UI/UX - sometimes changing non-functional colors can hurt things.
At some point you lose productivity, and trials of reduced work weeks have shown that productivity can actually increase.
My go-to solution for this is the Android FolderSync app with an SFTP connection.
I’m not familiar with creating fonts specifically, but you’ll want to commit any resources necessary to recreate the font file, including any build scripts to help ease the process and instructions specifying compatible versions of tooling (FontForge in this case). Don’t include FontForge in the repository, of course.
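For example, a minimal build script using FontForge's Python module might look like this (file names are hypothetical, and you'll need to run it with the Python that FontForge provides, since the fontforge module isn't on PyPI):

```python
import os
import fontforge

# Open the editable source (the file you commit) and generate the
# binary artifacts (the files you publish under releases).
os.makedirs("build", exist_ok=True)
font = fontforge.open("MyFont.sfd")  # hypothetical source file name
for ext in ("ttf", "otf"):
    font.generate(os.path.join("build", f"MyFont.{ext}"))
font.close()
```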
The compiled font files should be under releases in GitHub for the repository.
Git isn’t really designed for binary resources, but as long as they’re not too large they’ll be fine. You just won’t have a meaningful way to diff changes.
I mean, sysvinit was just a bunch of root-executed bash scripts. I’m not sure if systemd is really much worse.
Thank you! I was struggling to remember the proposal name.