Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • theluddite@lemmy.ml
    10 months ago

    When you’re creating something new, production is research. We can’t expect Dr. Frankenstein to be unbiased, but that doesn’t mean he doesn’t have insights worth knowing.

    Yes and no. It’s the same word, but it’s a different thing. I do R&D for a living. When you’re doing R&D and you want to communicate your results, you write something like a whitepaper or a report, not a journal article. It’s not a perfect distinction, and there are some real places where there’s bleed-through, but this thing where companies have decided that their employees are just regular scientists publishing their internal research on arXiv is an abuse of that service.

    LLMs are pretty new; how many experts even exist outside the industry?

    … a lot, actually? I happen to be married to one. Her lab is at a university, where there are many other people who are also experts.