Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • _number8_@lemmy.world
    link
    fedilink
    English
    arrow-up
    44
    arrow-down
    3
    ·
    10 months ago

    ‘went rogue’ is a bit of an alarmist way to say ‘typed scary text’

    i’d love to see an AI that could legitimately scare me

    • maegul (he/they)@lemmy.ml
      link
      fedilink
      English
      arrow-up
      42
      arrow-down
      3
      ·
      10 months ago

      It controls a military drone.

      It controls surgical equipment.

      It’s filtering your CV before any human sees it.

      It controls a robot taking care of your children.

      It’s involved in law enforcement or legal judgments.

      It’s involved in government policy setting.

      • normanwall@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        ·
        edit-2
        10 months ago

        It controls all power infrastructure, can find new exploits to build it’s own botnet and is able to reprogram firmware of devices (routers/switches/servers)

        It can send press releases, emails, tweets using language similar to any user it’s read from before

    • Boiglenoight@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      10 months ago

      Just use imagination. An AI is programmed for battle and is ordered to hold fire. It shoots instead.

      • StaticFalconar@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        10 months ago

        I thought the point of AI is to not specifically program it for anything hence you can ask the chatbot thats suppose to help make a sale, do your homework problems.

        • fidodo@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          10 months ago

          Imagine if there was a specific series of words that would turn any human into a rogue agent en masse. Some guy discovers that a special input causes killbot 2000 to go haywire and they broadcast it to an entire army that all has the same underlying program.

    • fidodo@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      10 months ago

      Programming is “just text”. They doesn’t mean that programming isn’t incredibly powerful or that it can’t be used to do dangerous things. Maybe the missing piece that you’re unaware of is that LLMs are already very effective at programming and usage APIs. You don’t even need to have an LLM that’s good at programming to cause damage, it just needs access to APIs that can cause damage.