They also created Ghidra! Probably the second best.
Training a diffusion model that only outputs one image with slight differences is, I think, not possible. You could do image-to-image and fix the seed so you get a consistent result, then pick the output that's the nearest to an identical copy.
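The fixed-seed trick above comes down to deterministic random number generation: if the sampler's RNG starts from the same seed, the whole generation trajectory repeats. A minimal sketch of just that principle with NumPy (a toy stand-in, not an actual diffusion pipeline):

```python
import numpy as np

def toy_generate(seed: int, steps: int = 4) -> np.ndarray:
    # Stand-in for a diffusion sampler: each step consumes randomness,
    # so the whole trajectory is fixed once the seed is fixed.
    rng = np.random.default_rng(seed)
    latent = rng.standard_normal(8)
    for _ in range(steps):
        latent = latent + 0.1 * rng.standard_normal(8)
    return latent

a = toy_generate(seed=1234)
b = toy_generate(seed=1234)
c = toy_generate(seed=9999)

same = np.allclose(a, b)      # same seed -> identical result
diff = not np.allclose(a, c)  # different seed -> different result
```

Real pipelines expose the same knob, e.g. passing a manually seeded generator to the sampler, which is what makes "fix the seed, vary the input image" workable.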
Sometimes my brain plays funny tricks on me and I'm not sure if I like it or not… I just didn't realize this at all.
Ollama is a big thing. Do you want it to be fast? Then you will need a GPU. How large is the model you'll be running? 7/8B on CPU is not as fast, but no problem; 13B is slow on CPU but possible.
Sir, you just made my day thank you!
They probably also do some OCR on that and then let something else run over the result to see if the text makes sense (basically letting another AI grade the output, which is commonly done to judge what's a good dataset and what isn't), and then feed it to the AI again. Today there's a shortage of training data because the internet is too small (yes, I know it sounds crazy), so I wouldn't be surprised if they actually tried to use pictures and OCR to gather a bit more usable data.
Now I get why it does what it does and how it works. I never thought that the colon was the variable name but it makes so much sense!
"Eigentlich fertig" (German for "actually done") was for an IP subnet calculator that I programmed with a fellow student.
Meh, that sucks. I even have a perfectly working DDNS setup. I know I don't get something like a PTR record, but I wish mail hosters would allow for more self-hosting options.
At least here in Germany it's like that: if you get a new number, you can be 99.9% certain that number is on WhatsApp. It's inevitable; it's the main chat platform for everyone. So if you wanted to switch platforms, you'd have to convince a lot of people, and most wouldn't be ready to do that, since why bother when you can just use WhatsApp?
Oh yeah, I heard about this and saw that Mutahar (Some Ordinary Gamers) did it once on Windows with a 4090. I would love to do that on my GPU and then split it between my host and my VM.
I had some problems using Proton Experimental 9 in Helldivers. I'd try some 8.x version, preferably from Proton-GE as another commenter said, in hopes that it could fix the issue you're facing.
Wonderful thank you so much!
I need that wallpaper! Is there a way you could provide me that?
Probably, or the AI. If I had to guess, the backend is using something like LocalAI, koboldcpp, or llama.cpp.
I used llama.cpp with OpenCL, but a couple of months ago they added ROCm support, which is even faster.
Just want to piggyback on this. You will probably need more than 6 GB of VRAM to run good-enough models with acceptable speed and coherent output, but the more the better.
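The 6 GB figure follows from simple arithmetic: weight memory is roughly parameter count times bits per weight, plus some headroom for the KV cache and activations. A back-of-envelope sketch (the 20% overhead factor is my rough assumption, not a measured number):

```python
def model_vram_gb(n_params_billion: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights only, plus ~20% assumed headroom
    for KV cache and activations."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 7B model at 4-bit quantization: ~4.2 GB, fits in 6 GB of VRAM
seven_b = model_vram_gb(7, 4)
# 13B at 4-bit: ~7.8 GB, already too big for a 6 GB card
thirteen_b = model_vram_gb(13, 4)
```

So on 6 GB you're realistically limited to quantized 7/8B models; 13B at 4-bit already spills over, and anything bigger needs offloading to system RAM, which is where it gets slow.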
Is Kagi a metasearch engine? Or does it have its own crawler and so on?
I need all of these! I already have them for data structures and agile but this is also golden!