Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • Boiglenoight@lemmy.world
    link
    fedilink
    English
    arrow-up
    14
    ·
    10 months ago

    Just use imagination. An AI is programmed for battle and is ordered to hold fire. It shoots instead.

    • StaticFalconar@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      10 months ago

      I thought the point of AI is to not specifically program it for anything hence you can ask the chatbot thats suppose to help make a sale, do your homework problems.

      • fidodo@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        10 months ago

        Imagine if there was a specific series of words that would turn any human into a rogue agent en masse. Some guy discovers that a special input causes killbot 2000 to go haywire and they broadcast it to an entire army that all has the same underlying program.