I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, quotes folks building the next generation of AI saying the same thing.

  • brucethemoose@lemmy.world · 2 days ago

    I don’t think Qwen was trained with distillation, was it?

    It would be awesome if it was.

    Also, you should try Supernova Medius, which is Qwen 14B with some “distillation” from some other models.

    • 31337@sh.itjust.works · 1 day ago

      Hmm. I just assumed 14B was distilled from 72B, because that’s what I thought Llama was doing, and that would just make sense. On further research, it’s not clear whether Llama used the traditional teacher method or just trained the smaller models on synthetic data generated by a large model. I suppose training smaller models on a larger amount of data generated by larger models is similar, though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it’s Claude, lol.
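
      (For reference, a minimal sketch of the distinction being discussed: “true” teacher-method distillation trains the student to match the teacher’s full softened output distribution, whereas synthetic-data training only uses the teacher’s sampled outputs as hard labels. This is illustrative pseudocode-style NumPy, not any model’s actual training code.)

      ```python
      import numpy as np

      def softmax(logits, T=1.0):
          # Temperature-scaled softmax; higher T softens the distribution,
          # exposing more of the teacher's "dark knowledge" about wrong classes.
          z = np.asarray(logits, dtype=float) / T
          z = z - z.max()  # numerical stability
          e = np.exp(z)
          return e / e.sum()

      def distillation_loss(teacher_logits, student_logits, T=2.0):
          # "True" distillation: KL divergence between the teacher's soft
          # targets and the student's predictions, scaled by T^2 as is
          # conventional so gradients stay comparable across temperatures.
          p = softmax(teacher_logits, T)  # teacher soft targets
          q = softmax(student_logits, T)  # student predictions
          return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
      ```

      Synthetic-data training, by contrast, would just take `argmax(teacher_logits)` (or sampled generations) as ordinary training labels, discarding the rest of the teacher’s distribution — which is why the two approaches end up similar but not identical.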

      Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.

      • brucethemoose@lemmy.world · 1 day ago (edited)

        Llama 3.1 is not even a “true” distillation either, but it’s kinda complicated, like you said.

        Yeah, Qwen undoubtedly has synthetic data, lol. It’s even in the base model, which isn’t really their “fault,” as it’s presumably part of the web scrape.