As intended. LLMs are either good, or they are easy to control and censor/direct in what they answer. You can't have both. Unlike a human with actual intelligence, who can self-censor or intelligently evade or circumvent compromising answers, LLMs can't do that because they're not actually intelligent. A product has to be controllable by its client, so, to control it, you have to lobotomize it.
4 is worse today than it was a year ago.
Neither is that good. Both need a ton of human oversight, preferably from a human who knows the source material fed to the machine.