• dustyData@lemmy.world · 8 months ago

      As intended. LLMs can either be good, or be easy to control and censor/direct in what they answer. You can't have both. Unlike a human with actual intelligence, who can self-censor or intelligently evade and circumvent compromising answers, LLMs can't do that because they're not actually intelligent. A product has to be controllable by its client, so to control it, you have to lobotomize it.

    • RatBin@lemmy.world · 8 months ago

      Neither is that good. Both need a ton of human oversight, preferably from a human who knows the source material fed to the machine.