• Echo Dot@feddit.uk
    2 days ago

    You have to pay a lot of money for a rig capable of hosting an LLM locally. Having said that, the wait time on these rigs is something like 4 to 5 months for delivery, so clearly there is a market.

    As far as OpenAI is concerned, I think what they’re doing is releasing the model weights so people can run the AI locally, but not the source code or training data. So you can still fine-tune the model with your own data, but you can’t see the underlying training data.

    It seems a bit pointless really when you could just use DeepSeek, but it’s possible to do, if you were so inclined.