• noneabove1182@sh.itjust.worksOP · 1 year ago

    lollms-webui is the jankiest of the images, but that one’s new to the scene and I’m working with the dev a bit to get it nicer (the main current problem is its requirement for CLI prompts, which he’ll be removing). Koboldcpp and text-gen are in a good place though; I’m happy with how those are running.

  • ffhein@lemmy.world · 1 year ago

    Awesome work! Going to try out koboldcpp right away. Currently running llama.cpp in Docker on my workstation because it would be such a mess to get the CUDA toolkit installed natively…

    Out of curiosity, isn’t conda a bit redundant in docker since it already is an isolated environment?
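    For reference, a minimal sketch of the kind of setup being described, running llama.cpp’s server inside a CUDA-enabled container so the toolkit never touches the host. The image tag, model filename, and port are placeholders, not the commenter’s actual config:

    ```shell
    # Run llama.cpp's HTTP server in a GPU-enabled container.
    # Image tag and paths are illustrative; substitute your own.
    docker run --rm --gpus all \
      -v "$PWD/models:/models" \
      -p 8080:8080 \
      ghcr.io/ggerganov/llama.cpp:server-cuda \
      -m /models/model.gguf --host 0.0.0.0 --port 8080 -ngl 99
    ```

    The `--gpus all` flag hands the NVIDIA devices through to the container, which is what lets the host get away with only the driver installed rather than the full toolkit.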

    • noneabove1182@sh.itjust.worksOP · 1 year ago

      Yes, that’s a good candidate for an FAQ because I get it a lot, and it’s a very good question haha. The reason I use it is image size: the base nvidia devel image is needed for a lot of compilation during Python package installation and is huge, so instead I build everything in conda and transfer it to the nvidia-runtime image, which is… also pretty big, but it saves several GB of space, so it’s a worthwhile hack :)

      But yes, avoiding CUDA messes on my bare machine is definitely my biggest motivation.
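      The trick described above can be sketched as a multi-stage Dockerfile: compile against the heavy devel image, then copy only the finished conda environment into the slimmer runtime image. The CUDA tags, install paths, and package list here are illustrative, not the project’s actual Dockerfile:

      ```dockerfile
      # Stage 1: build Python packages against the full CUDA toolchain (nvcc etc.).
      FROM nvidia/cuda:12.1.1-devel-ubuntu22.04 AS builder
      RUN apt-get update && apt-get install -y wget && \
          wget -qO /tmp/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
          bash /tmp/miniconda.sh -b -p /opt/conda
      ENV PATH=/opt/conda/bin:$PATH
      # Packages that need the devel toolchain at install time (placeholder example).
      RUN pip install llama-cpp-python

      # Stage 2: ship only the runtime image plus the finished environment.
      FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04
      COPY --from=builder /opt/conda /opt/conda
      ENV PATH=/opt/conda/bin:$PATH
      ```

      The devel layers never reach the final image, which is where the multi-GB saving comes from.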

      • ffhein@lemmy.world · 1 year ago

        Ah, nice.

        Btw, perhaps you’d like to add:

        build: .

        to docker-compose.yml so you can just write “docker-compose build” instead of having to do it with a separate docker command. I would submit a PR for it, but I have made a bunch of other changes to that file, so it’s probably faster if you do it.
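        In context, the suggested change would look something like this; the service name, image tag, and port mapping are illustrative, not the repo’s actual compose file:

        ```yaml
        services:
          koboldcpp:          # placeholder service name
            image: koboldcpp  # tag applied when "docker-compose build" runs
            build: .          # build from the Dockerfile in this directory
            ports:
              - "5001:5001"
        ```

        With `build: .` present, `docker-compose build` (or `docker-compose up --build`) rebuilds the image in one step instead of requiring a separate `docker build` invocation.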