Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by Hugging Face. It is the first open-source model to achieve an average score above 80.

  • Miss Brainfarts@lemmy.blahaj.zone

That’s nice and all, but what are some FOSS models I can run on a GPU with only 4GB?

    I’ve tried Deepseek Coder, and it’s pretty nice for what I use it for. Then there’s TinyLlama, which… well it’s fast, but I need to be veeeery exact in how I prompt it.

    • Fisch@lemmy.ml

      Unfortunately, LLMs need a lot of VRAM. You could try koboldcpp: it runs on the CPU but lets you offload layers onto the GPU. That way you might be able to stay within those 4GB even with larger models.
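
      If you wanna script that same idea, the llama-cpp-python bindings expose layer offloading too (a rough sketch only, not koboldcpp itself; the model path and layer count are placeholders):

          from llama_cpp import Llama

          llm = Llama(
              model_path="model.Q4_K_M.gguf",  # placeholder path to a GGUF model
              n_gpu_layers=15,                 # how many layers to push onto the GPU
              n_ctx=2048,                      # context window size
          )
          out = llm("Q: What is layer offloading? A:", max_tokens=64)
          print(out["choices"][0]["text"])

      Start with a low n_gpu_layers value and raise it until VRAM runs out.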

      Edit: I forgot to mention there’s a fork of koboldcpp with ROCm support for AMD cards, which is about twice as fast if I remember correctly. Only relevant if you have an AMD card tho.

      Edit 2: This is the model I use btw

      • Miss Brainfarts@lemmy.blahaj.zone

        I’m currently playing around with the Jan client, which uses the Nitro engine. I think I need to read up on it more, because when I set the ngl value to 15 to offload 50% to the GPU like the Jan guide says, nothing happens. Though that could be an issue specific to Jan.
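
        (If I understand it right, ngl is just the number of layers handed to the GPU, so what percentage that works out to depends on the model’s total layer count. Quick sanity check, assuming a hypothetical 30-layer model:)

            total_layers = 30   # varies per model; 30 is only an example
            ngl = 15            # layers offloaded to the GPU
            print(f"offloaded: {ngl / total_layers:.0%}")   # -> offloaded: 50%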

        • Fisch@lemmy.ml

          Maybe 50% GPU is already using too much VRAM and it crashes. You could try setting it to 0% GPU and see if that works.

          • Miss Brainfarts@lemmy.blahaj.zone

            I may need to lower it a bit more, yeah. Though when I try to use offloading, I can see that VRAM usage doesn’t increase at all.

            When I leave the setting at its default value of 100, on the other hand, I see VRAM usage climb until it stops because there isn’t enough of it.

            So I guess not all models support offloading?
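
            (On an NVIDIA card you can watch VRAM directly while the model loads, which makes it easy to tell whether offloading is doing anything. A sketch using the nvidia-ml-py package, which is my assumption, not something Jan ships:)

                from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo

                nvmlInit()
                handle = nvmlDeviceGetHandleByIndex(0)   # first GPU
                info = nvmlDeviceGetMemoryInfo(handle)   # .used / .total are in bytes
                print(f"used: {info.used / 2**20:.0f} MiB of {info.total / 2**20:.0f} MiB")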

            • General_Effort@lemmy.world

              Most formats don’t support it. It has to be the GGUF format, afaik. You can usually find a conversion on Hugging Face. Prefer offerings by TheBloke, for the detailed documentation if nothing else.

            • Fisch@lemmy.ml

              The models you have should be .gguf files, right? I think those are the only ones where that’s supported.
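
              One quick way to check: GGUF files start with the four ASCII bytes “GGUF”, so you can inspect the header without loading anything (sketch, the path is a placeholder):

                  # A GGUF file begins with the magic bytes b"GGUF"
                  with open("model.gguf", "rb") as f:
                      magic = f.read(4)
                  print("looks like GGUF" if magic == b"GGUF" else "not a GGUF file")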

    • Toes♀@ani.social

      4GB is practically nothing in this space. Ideally you want at least 10GB of dedicated VRAM, if not more. Keep in mind you’re probably also sharing that VRAM with your operating system, so it’s more like ~3GB before you’ve even started.

      Koboldcpp is capable of using your GPU and CPU together through a feature called layers, so you might wanna consider that. There’s a trade-off between the memory available, the quality of the output, and the speed of the calculation.

      The model mentioned in this post can be run on the CPU with enough system RAM or swap.

      If you wanna keep it all on the GPU, check out 4-bit models. There’s also been a lot of work on running these models on the Raspberry Pi, and I suspect that work could help you out here as well.
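
      As a rule of thumb, the weights alone take roughly parameter count times bits per weight divided by 8 bytes, which is why 4-bit quantization makes small cards viable (back-of-envelope sketch):

          params = 7e9   # e.g. a 7B-parameter model
          bits = 4       # 4-bit quantization
          print(f"~{params * bits / 8 / 2**30:.1f} GiB for weights alone")  # ~3.3 GiB, before context overhead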

    • General_Effort@lemmy.world

      Depends on your needs. Best to look around in [email protected] or similar. (I don’t wanna say Reddit, but r/localLlama is much larger.)

      If you’re more into creative writing, maybe look for places that discuss SillyTavern (r/SillyTavernAI is an option). It’s software for role-play chats, which may not be what you want. But the community is (relatively) large and likely to have good tips for non-coding/less technical applications.