I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama and LibreChat.

Some questions to get a nice discussion going:

  • Any of you have experience with this?
  • What are your motivations?
  • What are you using in terms of hardware?
  • Considerations regarding energy efficiency and associated costs?
  • What about renting a GPU? Privacy implications?
  • Evotech@lemmy.world · 6 months ago

    Currently setting this up at work: Ollama with Danswer as a RAG frontend, running on a bunch of NVIDIA L4/L40 GPUs on Kubernetes.

    It’s pretty plug and play, honestly.
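    If anyone wants a feel for how little glue code that kind of setup needs, here’s a rough sketch of hitting Ollama’s REST API directly. It assumes the default port 11434 (on Kubernetes you’d swap in your Service hostname) and a model you’ve already pulled; “llama3” is just a placeholder.

    ```python
    import json
    import requests

    # Assumption: Ollama is reachable at its default port; replace with
    # your Service DNS name if it's running in a cluster.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask(prompt: str, model: str = "llama3") -> str:
        """Send a prompt and collect the streamed response text."""
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt},
            stream=True,
            timeout=120,
        )
        resp.raise_for_status()
        chunks = []
        # Ollama streams one JSON object per line until "done" is true.
        for line in resp.iter_lines():
            if not line:
                continue
            part = json.loads(line)
            chunks.append(part.get("response", ""))
            if part.get("done"):
                break
        return "".join(chunks)

    print(ask("Why self-host an LLM?"))
    ```

    Frontends like Danswer or LibreChat then just point at the Ollama server and add the chat UI and (in Danswer’s case) the retrieval layer on top.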