• pixxelkick@lemmy.world · 8 months ago

    I’ve been calling this for a while now.

    I’ve been calling it the Ouroboros effect.

    There are even bigger factors at play that the paper didn’t dig into, and that’s selection bias due to human intervention.

    See, at first, let’s say an AI has 100 unique outputs for a given prompt.

    However, humans will favor, let’s say, half of them. Humans will naturally regenerate a couple of times and pick their preferred “cream of the crop” result.

    This will then ouroboros for an iteration.

    Now the next iteration only has, say, 50 unique responses, as half of them have been ouroboros’d away by humans picking the ones they like more.

    Repeat, each time “half-lifing” the originality.
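
    Here’s a rough toy sketch of what I mean in Python (the 100 outputs, the 50% keep rate, and the function names are all made-up illustrative numbers, not anything measured): after k rounds you’re down to roughly 100 · (1/2)^k distinct outputs.

    ```python
    # Toy simulation of the selection loop described above: start with a pool
    # of distinct outputs, let "human preference" keep only the favored half
    # each round, and train the next generation on what survived. All numbers
    # here are illustrative assumptions.
    import random

    def simulate_selection(n_unique: int = 100, keep_fraction: float = 0.5,
                           rounds: int = 5) -> list[int]:
        """Track how many distinct outputs survive each selection round."""
        outputs = list(range(n_unique))  # stand-ins for distinct responses
        history = [len(outputs)]
        for _ in range(rounds):
            # Humans regenerate a few times and keep only what they favor;
            # the next model iteration only ever sees the survivors.
            kept = max(1, int(len(outputs) * keep_fraction))
            outputs = random.sample(outputs, kept)
            history.append(len(outputs))
        return history

    print(simulate_selection())  # e.g. [100, 50, 25, 12, 6]
    ```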

    Over time, everything will get more and more samey. Models will degrade in originality as everything muddles into corporate speak.

    You know how every corporate website uses the same useless, doesn’t-mean-anything string of jargon, to say a lot without actually saying anything?

    That’s the local minimum AI is converging to, too, as it keeps getting selectively “bred” to speak in an appealing and nonspecific way for the majority of online content.

    • yokonzo@lemmy.world (OP) · 8 months ago

      I mean, that’s kind of how ChatGPT is now. I’ve been slowly getting into LLaMA; I can’t quite get it as good as GPT yet, but I’m learning.