ChatGPT has meltdown and starts sending alarming messages to users
AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

  • Sanctus@lemmy.world · +48/-2 · 9 months ago

    It's being trained on us. Of course it's acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn't work out.

  • SomeGuy69@lemmy.world · +45/-2 · edited · 9 months ago

    Someone probably found a way to hack or poison it.

    Another theory: Reddit just recently sold data access to an unnamed AI company, so maybe that's where the data went.

  • Coreidan@lemmy.world · +47/-12 · edited · 9 months ago

    We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict what you’ll say next.
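
    (A toy illustration of that "predict what comes next" claim: a minimal Python sketch of a bigram predictor. The corpus and every name here are invented for illustration; a real LLM is vastly larger, but the training objective is the same.)

    from collections import Counter, defaultdict

    # Tiny invented corpus; a real model trains on trillions of tokens.
    corpus = "the cat sat on the mat and the cat ate".split()

    # Count which word follows which (a bigram model).
    follows: defaultdict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        # Return the statistically most likely continuation;
        # no understanding involved, just frequency.
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat"

    Scaled up by many orders of magnitude and conditioned on far more context, that frequency game is the core objective the comment is pointing at.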

    • EnderMB@lemmy.world · +11 · 9 months ago

      (Disclosure: I work on LLMs)

      While you’re not wrong, how is this different to many existing techniques and compositional models that are used practically everywhere in tech?

      Similarly, it's probably safe to assume that the LLM's prediction isn't the only system in use. There will be lots of auxiliary services giving an orchestrator information to reason with. In this instance, if you have a system that is trying to figure out what to say next, with several knowledge stores and feedback services telling it "you were just discussing this" or "you can access the weather from here", is that all that different from "intelligence"?

      At a certain point, it's arguing semantics. Are any AI techniques true intelligence? Probably not, but then again, we don't really know what true intelligence is.
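
      (A minimal sketch of the orchestrator-with-auxiliary-services setup described above. All names, the weather stub, and the model call are hypothetical, invented for illustration, not any real product's API.)

      from dataclasses import dataclass, field

      def llm_predict(prompt: str) -> str:
          # Stand-in for the actual language model call.
          return f"[model reply given context: {prompt!r}]"

      @dataclass
      class Orchestrator:
          history: list[str] = field(default_factory=list)

          def memory_service(self) -> str:
              # Feedback service: "you were just discussing this".
              return f"you were just discussing: {self.history[-1]}" if self.history else "no prior context"

          def weather_service(self, city: str) -> str:
              # Knowledge store: "you can access the weather from here".
              return f"weather in {city}: 21 C, clear"

          def respond(self, user_msg: str) -> str:
              # Gather auxiliary facts, then let the LLM reason over them.
              context = [self.memory_service(), self.weather_service("Berlin")]
              self.history.append(user_msg)
              return llm_predict("\n".join(context) + "\n" + user_msg)

      print(Orchestrator().respond("Should I bring an umbrella?"))

      Whether wiring deterministic memory and knowledge services around a predictor amounts to "intelligence" is exactly the semantic question raised above.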

      • Coreidan@lemmy.world · +7/-3 · 9 months ago

        how is this different to many existing techniques and compositional models that are used practically everywhere in tech?

        It's not. An LLM is just a statistical model. Nothing special about it. Nothing different from what we've already been doing for a while. This only validates my statement that we call just about anything "AI" these days.

        We don't even know what true intelligence is, yet we are quick to claim that this is "AI". There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce. Anyone who thinks otherwise is just fooling themselves.

        It’s a buzz word to get people riled up. It’s completely disingenuous.

        • sailingbythelee@lemmy.world · +4 · 9 months ago

          I think the point of the Turing test is to avoid thorny questions about the definition of intelligence. We can't precisely define intelligence, but we know that normally functioning humans are intelligent. Therefore, if we talk to a computer and it is indistinguishable from a human in conversation, then it is intelligent by definition.

        • QuaternionsRock@lemmy.world · +2 · 9 months ago

          There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce.

          Of all of these qualities, only the last one, the ability to reason or deduce, is a widely accepted prerequisite for intelligence.

          I would also argue that contemporary LLMs demonstrate the ability to reason by correctly deriving mathematical proofs that do not appear in the training datasets. How would you be able to accomplish such a feat without some degree of reasoning?

        • EnderMB@lemmy.world · +3/-1 · 9 months ago

          So, by your definition, no AI is AI, and we don’t know what AI is, since we don’t know what the I is?

          While I hate that AI is just a buzzword for scam artists and tech influencers nowadays, dismissing a term seems a bit overkill. It also seems overkill when it’s not something that academics/scholars seem particularly bothered by.

      • fidodo@lemmy.world · +3 · 9 months ago

        The worrisome thing is that LLMs are being given control over more and more actions. With traditional programming, sure, there are bugs, but at least they're consistent: the context may make a bug hard to track down, but at the end of the day the code is executed by the processor exactly as written. LLMs can go haywire for reasons that are impossible to diagnose. Deploying them safely in utilities where they control external systems will require a lot of extra non-LLM safeguards, and I don't see those being added nearly enough, which is concerning.
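
        (A minimal sketch of the kind of non-LLM safeguard meant here; the action names and ranges are invented for illustration. The model only proposes an action, and plain deterministic code decides whether it runs.)

        # Allow-list and bounds are ordinary code: consistent and auditable,
        # unlike the model that proposes the actions.
        ALLOWED_ACTIONS = {"read_temperature", "set_thermostat"}
        SAFE_RANGES = {"set_thermostat": (10.0, 30.0)}  # degrees Celsius

        def guard(action: str, value: float | None = None) -> bool:
            if action not in ALLOWED_ACTIONS:
                return False  # reject anything outside the allow-list
            bounds = SAFE_RANGES.get(action)
            if bounds is not None and value is not None:
                lo, hi = bounds
                return lo <= value <= hi  # reject out-of-range values
            return True

        def execute(action: str, value: float | None = None) -> None:
            # The model never touches the external system directly.
            if not guard(action, value):
                raise PermissionError(f"blocked LLM-proposed action: {action}={value}")
            print(f"executing {action} with {value}")

        execute("set_thermostat", 21.0)    # runs
        # execute("set_thermostat", 95.0)  # would raise PermissionError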

    • lanolinoil@lemmy.world · +5 · edited · 9 months ago

      If you look at efficacy on academic tests or factual questions, though, and compare an LLM to a random person rather than to the always-"right" answers we expect from computers and calculators, would LLMs be comparable or better? Surely someone has data on that.

      E: It looks like in certain domains, at least, LLMs beat their human counterparts. https://stanfordmimi.github.io/clin-summ/

    • platypus_plumba@lemmy.world · +6/-1 · edited · 9 months ago

      What is intelligence?

      Even if we don't know with certainty what it is, it's valid to say that some things aren't intelligent. For example, a rock isn't intelligent; I think everyone would agree with that.

      Despite that, LLMs are starting to blur the line, making us wonder whether what matters about intelligence is really the process or the result.

      An LLM will give you much better results in many of the areas that are currently used to evaluate human intelligence.

      For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process of other humans. How can I tell they are intelligent if the only perception I have are their inputs and outputs? Maybe all we care about are the outputs and not the process.

      If there was a LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?

        • platypus_plumba@lemmy.world · +6 · edited · 9 months ago

          Things we know so far:

          • Humans can train LLMs with new data, which means they can acquire knowledge.

          • LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn't dream of even understanding.

          • We know multi-modal is possible, which means these models can acquire skills.

          • We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.

          • We have seen models learn and generate strategies that humans didn’t even conceive. We’ve seen them solve problems that were unsolvable to human intelligence.

          … What’s missing here in that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.

          • Coreidan@lemmy.world · +1/-4 · edited · 9 months ago

            Can an LLM learn to build a house and then actually do it?

            LLMs are proven to be wrong about a lot of things. So I would argue these aren’t “skills” and they aren’t capable of acting on those “skills” effectively.

            At least with human intelligence you can be wrong and quickly realize that you are wrong. LLMs have no clue whether they are right or not.

            There is a big difference between actual skill and just a predictive model based on statistics.

            • platypus_plumba@lemmy.world · +5 · edited · 9 months ago

              Is an octopus intelligent? Can an octopus build an airplane?

              Why do you expect these models to have human skills if they are not humans?

              How can they build a house if they don't even have vision or a physical body? Can a paralyzed human who can only hear and speak build a house? Is that human intelligent?

              This is clearly not human intelligence; it clearly lacks human skills. Does that mean it isn't intelligent and has no skills?

              • Coreidan@lemmy.world · +1/-3 · edited · 9 months ago

                Exactly. They are just “models”. There is nothing intelligent about them.

                Yes, octopuses are very intelligent. They can think their way out of a box without relying on curated data to train them.

                Logic, reasoning, and deduction. LLMs have zero ability to reject data based on their understanding of reality. Big diff.

                • platypus_plumba@lemmy.world · +5/-1 · edited · 9 months ago

                  You keep asking questions like “can a model build a house” but keep ignoring questions like “can an octopus build a house”. Then asking “can a model learn in seconds how to escape from a complex enclosure” and then ignoring “can a newborn human baby do that?”

                  Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?

                  Just because it isn't as intelligent as a human doesn't mean it isn't some type of intelligence.

                  Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.

                  We've been calling things waaaaaay dumber than GPTs "artificial intelligence" for decades, even in academia. Suddenly a group of people decided "artificial intelligence must be equal to human intelligence". Nope.

                  Intelligence doesn't need to be the same type as human intelligence.

                • QuaternionsRock@lemmy.world · +2 · 9 months ago

                  LLMs have zero ability to reject data based on their understanding of reality.

                  Me:

                  I just found a proof that 2+2=5!

                  ChatGPT:

                  The statement “2 + 2 = 5” is a classic example of a mathematical fallacy or incorrect reasoning, often used for humorous or illustrative purposes. In standard arithmetic, of course, 2 + 2 = 4. Proofs claiming to show otherwise rely on some form of invalid manipulation of algebraic properties or deliberate arithmetic mistakes.

                  A common “proof” of 2 + 2 = 5 might go something like this:

                  1. Start with an incorrect assumption or manipulate the algebraic steps in a way that hides a division by zero or another illegal operation.
                  2. Through these manipulations, arrive at the conclusion that 2 + 2 = 5.

                  However, each step of a valid mathematical proof must adhere strictly to the rules and properties of mathematics. Any proof leading to 2 + 2 = 5 will contain an error or a misapplication of those rules.

                  If you have a specific “proof” in mind, I can help identify the error in it.
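
                  (For reference, a standard version of the fallacy the model describes, with the invalid step being a hidden division by zero. This classic derivation is added here for illustration; it is not part of the reply.)

                  \begin{align*}
                  a &= b && \text{assume}\\
                  a^2 &= ab && \text{multiply both sides by } a\\
                  a^2 - b^2 &= ab - b^2 && \text{subtract } b^2\\
                  (a+b)(a-b) &= b(a-b) && \text{factor both sides}\\
                  a + b &= b && \text{divide by } a-b \text{ (invalid: } a-b=0)\\
                  2b &= b && \text{use } a=b\\
                  2 &= 1 && \text{divide by } b
                  \end{align*}

                  Adding 3 to both sides of 1 = 2 then "yields" 4 = 5, i.e. 2 + 2 = 5; every such proof hides an illegal step like the division above, which is exactly the error ChatGPT offers to identify.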

    • shaman1093@lemmy.ml · +4/-2 · 9 months ago

      The person who commented below kinda has a point. While I agree that there's nothing special about LLMs, an argument can be made that consciousness (or maybe more ego?) is itself an emergent mechanism that works to keep itself in predictable patterns to perpetuate survival.

      Point being that being able to predict outcomes is a cornerstone of current intelligence (socially, emotionally and scientifically speaking).

      If you were to say that LLMs are unintelligent because they operate to provide the most likely, and therefore most predictable, outcome, then I'd agree completely.

  • thehatfox@lemmy.world · +27 · 9 months ago

    AI in science fiction has a meltdown and starts a nuclear war or enslaves the human race.

    “AI” in reality has a meltdown and just starts talking gibberish.

    • jj4211@lemmy.world · +4 · 9 months ago

      We all know that robots need beer to function properly. It’s more likely that it hasn’t received enough beer, that’s what really messes up robots.

  • Buffalox@lemmy.world · +3/-1 · edited · 9 months ago

    “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest,”

    Wow, that sounds very much like a Phil Collins tune; just add "Oh Lord" and people will probably say it's deep! But it's a ChatGPT answer to the question "What is a computer?"

  • lettruthout@lemmy.world · +4/-2 · 9 months ago

    I wonder if its LLM got poisoned. Was it Nightshade or Glaze that promised to do that?

    • Lung@lemmy.world · +11 · 9 months ago

      Those are for messing up image generators, and they have already been defeated by de-glazing tools.