• 1 Post
  • 10 Comments
Joined 1 year ago
Cake day: June 8th, 2023


  • Mostly via terminal, yeah. It’s convenient when you’re used to it, and I am.

    Let’s see, my inference speed now is:

    • ~60-65 tok/s for an 8B model in Q5_K/Q6_K (entirely in VRAM);
    • ~36 tok/s for a 14B model in Q6_K (entirely in VRAM);
    • ~4.5 tok/s for a 35B model in Q5_K_M (16/41 layers in VRAM);
    • ~12.5 tok/s for an 8x7B model in Q4_K_M (18/33 layers in VRAM);
    • ~4.5 tok/s for a 70B model in Q2_K (44/81 layers in VRAM);
    • ~2.5 tok/s for a 70B model in Q3_K_L (28/81 layers in VRAM).
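    The partial-offload layer counts above follow from simple arithmetic: divide usable VRAM by the per-layer size of the quantised model file. A rough shell sketch (all sizes and the 1 GiB headroom figure are illustrative assumptions, not measured values from this setup):

    ```shell
    # Rough estimate of how many transformer layers of a GGUF model fit in VRAM.
    # Inputs: model file size in MiB, total layer count, VRAM in MiB.
    # Reserves ~1 GiB of headroom for the KV cache and compute buffers (assumption).
    layers_in_vram() {
      local model_mib=$1 total_layers=$2 vram_mib=$3
      echo $(( (vram_mib - 1024) * total_layers / model_mib ))
    }

    # Hypothetical 70B Q2_K file of ~26000 MiB with 81 layers on a 16 GiB card:
    layers_in_vram 26000 81 16384   # prints 47; real offload ends up lower once the KV cache grows
    ```

    The estimate is optimistic on purpose; in practice you nudge `-ngl` down until llama.cpp stops running out of memory.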

    As for quality, I try to avoid quantisation below Q5, or Q4 at the very least. I also don’t see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.


  • Have been using llama.cpp, whisper.cpp and Stable Diffusion for a long while (most often the first one). My “hub” is a collection of bash scripts and an SSH server.
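    As an illustration of what such a script hub might look like (everything here is a hypothetical sketch: the paths, model filenames and -ngl offload values are assumptions, not the author's actual setup):

    ```shell
    #!/usr/bin/env bash
    # Hypothetical wrapper: map a short model alias to a llama.cpp invocation.
    # Model filenames, paths and -ngl (GPU layer offload) values are made up.
    MODEL_DIR="${MODEL_DIR:-$HOME/models}"

    pick_cmd() {
      case "$1" in
        8b)  echo "llama-cli -m $MODEL_DIR/llama3-8b.Q6_K.gguf -ngl 99" ;;   # fits entirely in VRAM
        70b) echo "llama-cli -m $MODEL_DIR/llama3-70b.Q2_K.gguf -ngl 44" ;;  # partial offload
        *)   echo "usage: pick_cmd {8b|70b}" >&2; return 1 ;;
      esac
    }

    # Dry-run: print the command rather than executing it.
    pick_cmd 8b
    ```

    A real script would exec the printed command and append a prompt via `-p`; printing keeps the sketch safe to dry-run.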

    I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.

    I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm can be tricky to set up for various libraries and inference engines, but once that’s done, it just works. I don’t rent hardware - I don’t want any data to leave my machine.
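    The “tricky to set up” part of ROCm usually comes down to environment overrides like these (values are illustrative only; the right gfx target depends on the specific card and ROCm version):

    ```shell
    # Common ROCm workarounds for consumer AMD cards (illustrative values):
    export HSA_OVERRIDE_GFX_VERSION=11.0.0   # report a supported gfx target to ROCm
    export HIP_VISIBLE_DEVICES=0             # pin workloads to the first GPU
    ```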

    My use isn’t intensive enough to warrant measuring energy costs.



  • Audalin@lemmy.world to Technology@lemmy.world: Why AI is going to be a shitshow.
    7 months ago

    it cuts out the middle man of having to find facts on your own

    Nope.

    Even without corporate tuning or filtering.

    A language model is useful when you know what to expect from it, but it’s just another kind of secondary information source, not an oracle. In some sense it draws random narratives from the noosphere.

    And if you give it search results as part of its input in the hope of increasing its reliability, how will you know they haven’t been manipulated by SEO? Search engines are slowly failing these days. A language model won’t recognise new kinds of bullshit as readily as you will.

    Education is still important.