• fidodo@lemmy.world · 9 months ago

    LLMs build on the context you provide, which makes them very impressionable and agreeable. Keep that in mind when trying to get good answers out of them: you need to word questions carefully to avoid biasing the response.
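
    For example (a minimal sketch using the OpenAI Python client; the model name and prompts are just illustrative, not anything from this thread), the same comparison can be asked in a leading or a neutral way, and the leading phrasing tends to pull the answer toward its premise:

    ```python
    # Minimal sketch: model name and prompt wording are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    leading = "PostgreSQL is better than MySQL, right? Why is it better?"
    neutral = "Compare PostgreSQL and MySQL for a small web app. What are the trade-offs?"

    for prompt in (leading, neutral):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt)
        print(reply.choices[0].message.content, "\n")
    ```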

    That can easily create false confidence in people who are simply being told what they want to hear but interpret it as validation, which was already a bad enough problem in the pre-LLM world.

  • antihumanitarian@lemmy.world · 9 months ago

    So this is probably another example of Google using instruments that are too blunt for AI. LLMs are very suggestible, and leading questions can severely bias responses. Most people who use them without knowing much about the field will ask “bad” questions, so the model likely has instructions to avoid answering “which is better” and to instead present pros and cons for the user to weigh themselves.
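
    Something like the following system prompt would produce exactly that behaviour (a hypothetical sketch; Google’s actual instructions aren’t public, this just illustrates the pattern):

    ```python
    # Hypothetical sketch of a blunt "no winners" system prompt; the wording
    # is invented to illustrate the pattern, not Google's real instructions.
    from openai import OpenAI

    client = OpenAI()

    system_prompt = (
        "Never declare one option better than another. "
        "When asked to compare, list pros and cons of each option "
        "and let the user decide."
    )

    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Which is better, Vim or Emacs?"},
        ],
    )
    print(reply.choices[0].message.content)
    ```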

    Edit: I don’t mean to excuse this, just to explain it. If anything, the implication is that Google rushed it out after slapping band-aids on serious problems. OpenAI and Anthropic, for example, have said that alignment training and human adjustment take up a majority of development time. Since Google is in self-described emergency mode, cutting that process short seems a likely explanation.