I think AI is neat.

  • antidote101@lemmy.world · +41 / −2 · 10 months ago

    I think LLMs are neat, and Teslas are neat, and HHO generators are neat, and aliens are neat…

    …but none of them live up to all of the claims made about them.

  • Starkstruck@lemmy.world · +18 / −1 · 10 months ago

    I feel like our current “AIs” are like the Virtual Intelligences in Mass Effect. They can perform some tasks and hold a conversation, but they aren’t actually “aware”. We’re still far off from a true AI like the Geth or EDI.

    • R0cket_M00se@lemmy.world · +5 · 10 months ago

      I wish we called them VIs. It was a good way of distinguishing their abilities.

      Though honestly I think our AI is more advanced in conversation than a VI in ME.

    • Nom Nom@lemmy.world · +3 · 10 months ago

      This was the first thing that came to my mind as well, and VI is such an apt term too. But since we live in the shittiest timeline, Electronic Arts would probably have taken the Blizzard/Nintendo route and patented the term.

  • pachrist@lemmy.world · +11 / −1 · 10 months ago

    If an LLM regurgitating information in a learned pattern means it isn’t real intelligence, I have really bad news for ~80% of people.

    • Klear@lemmy.world · +4 · 10 months ago

      P-Zombies, all of them. I happen to be the only one to actually exist. What are the odds, right? But it’s true.

  • Adalast@lemmy.world · +15 / −6 · 10 months ago

    Ok, but so do most humans? Few people actually have true understanding of the topics they talk about. They parrot the parroting that they have been told throughout their lives. This only gets worse as you move into more technical topics. Ask someone why it is cold in winter and you will be lucky if they say it is because the days are shorter than in summer. That is the most rudimentary “correct” answer to that question, and it is still an incorrect parroting of something they have been told.

    Ask yourself, what do you actually understand? On how many topics could you be asked “why?” repeatedly and actually be able to answer more than 4 or 5 times? I know I have a few. I also know which ones I am not able to do that with.

    • Daft_ish@lemmy.world · +10 · edited · 10 months ago

      I don’t think the parroting itself is the problem. The problem is that they don’t understand a word outside of how it is organized. They can’t be told to do simple logic because they don’t have even a basic understanding of each word in their vocabulary. They can only reorganize things to varying degrees.

      • fidodo@lemmy.world · +2 · 10 months ago

        It doesn’t need to understand the words to perform logic, because the logic was already performed by humans who encoded their knowledge into words. It’s not reasoning, but the reasoning was already done by humans. It’s not perfect, of course, since it’s still based on probability, but the fact that it can pull up the correct sequence of words to exhibit logic is incredibly powerful. The main hard part of working with LLMs is that they break randomly, so harnessing their power will be a matter of programming in multiple levels of safeguards.
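
        To make that last point concrete, here’s a minimal sketch of what “multiple levels of safeguards” could look like in practice. This is an illustration only: call_llm() is a hypothetical stand-in, not any real API, and the canned reply just keeps the sketch self-contained.

        ```python
        import json

        def call_llm(prompt: str) -> str:
            # Hypothetical stand-in for a real model client; returns a canned
            # reply so the sketch is runnable on its own.
            return '{"answer": "Paris"}'

        def safeguarded_query(prompt: str, max_retries: int = 3) -> dict:
            """Ask for JSON, validate it, and retry when the model breaks randomly."""
            for _ in range(max_retries):
                raw = call_llm(prompt + "\nRespond with a JSON object only.")
                try:
                    data = json.loads(raw)   # safeguard 1: output must parse
                except json.JSONDecodeError:
                    continue                 # malformed output: retry
                if "answer" in data:         # safeguard 2: required field present
                    return data
            raise RuntimeError("output failed validation on every attempt")

        print(safeguarded_query("What is the capital of France?"))
        ```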

    • bruhduh@lemmy.world · +4 · 10 months ago

      Few people truly understand what understanding even means. I had a teacher in college who seriously thought that you should not understand the content of the lessons but simply memorize it to the letter.

      • Adalast@lemmy.world · +3 · 10 months ago

        I am so glad I had one who was the opposite. I discussed practical applications of the subject material with him after class, and at the end of the semester he gave me a B+ even though my scores only earned a C, because I actually grasped the material better than anyone else in the class, even if I could not show it as well on the tests.

        • bruhduh@lemmy.world · +1 · 10 months ago

          I’m glad for you :) Our teacher liked to offer discussion only to shoot us down when we tried to understand something. I was like, duh, that’s what teachers are for, to help us understand; if teachers don’t do that, then it’s the same as watching YouTube lectures.

    • Ramblingman@lemmy.world · +3 · 10 months ago

      This is only one type of intelligence, and LLMs are already better than humans at regurgitating facts. But I think people really underestimate how smart the average human is. We are incredible problem solvers, and AI can’t even match us in something as “simple” as driving a car.

      • Adalast@lemmy.world · +3 · 10 months ago

        Lol @ driving a car being simple. That is one of the more complex sensorimotor tasks that humans do. You have to track the speed of every vehicle in front of you, assess collision probabilities, monitor for non-vehicle obstructions (people, animals, etc.), adjust the accelerator to maintain your own velocity as the terrain changes, stay alert to any functional changes in your vehicle and be ready to adapt to them, and maintain a running inventory of the laws that apply to you at any given time and be sure to follow them. Hell, that is not even an exhaustive list for a sunny day under the best conditions. Driving is fucking complicated. We have all just formed strong, deeply connected pathways in our somatosensory and motor cortices to automate most of the tasks. You might say it is a very well-trained neural network with hundreds to thousands of hours spent refining and perfecting the responses.

        The issue AI has right now is that we are only running one to three sub-AIs to optimize and calculate results. Once that number goes up, they will be capable of a lot more. For instance: one AI for finding similarities, one for categorizing them, one for mapping them into a use-case hierarchy to determine when certain use cases apply, one to analyze structure, one to apply human kineodynamics to the structure, and a final one to analyze how effective the kineodynamic use cases are when performed by a human. A structure like that could be presented with an object and told that humans use it, and the AI brain could piece together possible uses for the tool and describe them back to the presenter with instructions on how to do so.
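
        As a toy illustration of that sub-AI chaining, here’s a rough sketch. Every stage below is a placeholder heuristic standing in for a specialized model, so treat all the names and logic as assumptions, not an implementation:

        ```python
        def find_similarities(obj: str) -> list[str]:
            # sub-AI 1: note what the object resembles (placeholder heuristic)
            return [f"{obj} has a long handle and a weighted head"]

        def categorize(findings: list[str]) -> str:
            # sub-AI 2: map the similarities onto a category (placeholder)
            return "striking tool" if findings else "unknown"

        def propose_uses(category: str) -> list[str]:
            # sub-AI 3: map the category onto human use cases (placeholder)
            return [f"swing it like other {category}s to drive or break things"]

        def pipeline(obj: str) -> list[str]:
            # chain the stages, feeding each output into the next
            return propose_uses(categorize(find_similarities(obj)))

        print(pipeline("unfamiliar object"))
        ```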

    • Gabu@lemmy.ml · +6 / −3 · 10 months ago

      That was never the goal… You might as well say that a bowling ball will never be effectively used to play golf.

  • BaumGeist@lemmy.ml · +6 / −1 · 10 months ago

    I’ve known people in my life who put less mental effort into living their lives than LLMs put into crafting the most convincing lies you’ve ever read.

  • Redacted@lemmy.world · +8 / −4 · 10 months ago

    I fully back your sentiment OP; you understand as much about the world as any LLM out there and don’t let anyone suggest otherwise.

    Signed, a “contrarian”.

  • TechNerdWizard42@lemmy.world · +4 · 10 months ago

    Unfortunately, the majority of people are idiots who just do this in real life, parroting popular ideology without understanding anything more than the proper catchphrase du jour. And there are many employed professionals who are paid to read a script, or to output mundane marketing content, or any other “content”. For that, LLMs are great.

    It’s the elevator operator of technology as applied to creative writers. Instead of “hey intern, write the next article about 25 things these idiots need to buy, and make sure 90% of them are from our sponsors”, it goes to AI. The writer was never going to purchase a few different products in each category, blind-test them, and write a real article. They are just shilling crap they are paid to shill, making it look “organic”, because many humans are too stupid to recognize a giant paid-for ad.

  • inb4_FoundTheVegan@lemmy.world · +5 / −2 · 10 months ago

    As someone who loves Asimov and has read nearly all of his work, I absolutely bloody hate calling LLMs AI. Without a doubt they are neat, but they are nothing in the ballpark of AI, and that’s okay! They weren’t trying to make a synthetic brain; it’s just the cultural narrative I’m most annoyed at.

  • empireOfLove2@lemmy.dbzer0.com · +4 / −2 · edited · 10 months ago

    The reason it’s dangerous is that there are a significant number of jobs and people out there that do exactly that. And they can be replaced…

      • Redacted@lemmy.world · +8 / −1 · edited · 10 months ago

        Whilst everything you linked is great research which demonstrates the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.

        This argument always boils down to one’s definition of the word “understanding”. For me that word implies a degree of consciousness, for others, apparently not.

        To quote GPT-4:

        LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.

        • Even_Adder@lemmy.dbzer0.com · +4 / −4 · edited · 10 months ago

          When people say that the model “understands”, they mean just that, not that it is human, and not that it understands exactly as humans do. Judging its capabilities by how closely it mimics humans is pointless, just like judging a boat by how well it can do the breaststroke. The value lies in its performance and output, not in imitating human cognition.

          • Redacted@lemmy.world · +3 / −3 · 10 months ago

            Understanding is a human concept so attributing it to an algorithm is strange.

            It can be done by taking a very shallow definition of the word but then we’re just entering a debate about semantics.

              • Redacted@lemmy.world · +4 / −1 · edited · 10 months ago

                Yes, sorry, I probably shouldn’t have used the word “human”. It’s a concept that we apply to living things that experience the world.

                Animals certainly understand things but it’s a sliding scale where we use human understanding as the benchmark.

                My point stands though, to attribute it to an algorithm is strange.

  • H4rdStyl3z@lemmy.ml · +1 · 10 months ago

    How do you know for sure your brain is not doing exactly the same thing? Hell, being autistic, many of my social interactions are just me trying to guess what will get me approval, without any real understanding lol.

    Also really fitting that Photon chose this for a placeholder right now: [screenshot not preserved]

  • AwkwardLookMonkeyPuppet@lemmy.world · +2 / −5 · 10 months ago

    I think AI is the single most powerful tool we’ve ever invented; it is already changing the world and will keep doing so. But you’ll get nothing but hate and “iTs Not aCtuaLly AI” replies here on Lemmy.

    • naevaTheRat@lemmy.dbzer0.com · +6 · 10 months ago

      Umm, penicillin? Anaesthetic? The Haber process? The transistor? The microscope? Steel?

      I get it, the models are new and a bit exciting, but GPT won’t make it so you can survive surgery, or make rocks take over the jobs of computers.

      • GeneralVincent@lemmy.world · +2 / −1 · 10 months ago

        Very true and valid. Though, devil’s advocate for a moment: AI is great at discovering new ways to survive surgery and other cool stuff. Of course it uses the existing scientific discoveries to do that, but still. It could be the tool that finds the next big thing on the penicillin, anaesthesia, Haber process, transistor, microscope, steel list, which is pretty cool.

        • naevaTheRat@lemmy.dbzer0.com · +2 / −1 · 10 months ago

          Is it? This seems like a big citation needed moment.

          Have LLMs been used to make big strides? I know some trials are going on where they aid doctors in diagnosis and such, but computer vision algorithms have been doing that for ages (shit, contrast dyes, PCR, and blood analysis also do that, really), and those come with their own risks, and we haven’t seen widespread unknown illnesses being discovered or anything. Is the tech actually doing anything useful at the moment, or is it all still hype?

          We’ve had algorithms help find new drugs and plot out synthetic routes for novel compounds; we can run DFT simulations to help determine whether we should try to make a material. These things have been helpful but not revolutionary, and I’m not sure why LLMs would be. I actually worry they’ll hamper scientific progress by aiding fraud (unreproducible results are already a fucking massive problem), or by lying or omitting something extremely convincingly when used to help with a literature review.

          Why do you think LLMs will revolutionise science?

          • GeneralVincent@lemmy.world · +2 · 10 months ago

            why do you think LLMs will revolutionise science

            Idk, it probably won’t. That wasn’t exactly what I was saying, but I’m also not an expert in any scientific field, so that’s my bad for unintentionally contributing to the hype by implying AI is more capable than it currently is or has the potential to be.

            • naevaTheRat@lemmy.dbzer0.com · +2 · 10 months ago

              Fair enough. I used to be a scientist (a very bad one that never amounted to anything), and my perspective has been that the major barriers to progress are:

              • We’ve already picked all the low-hanging fruit
              • Science education isn’t available to many people, so perspectives are quite limited
              • Power structures are exploitative and ossified, driving away many people
              • Industry has too much influence; there isn’t much appetite to fund blue-sky projects without obvious short-term money-earning applications
              • Patents slow progress
              • Publish-or-perish incentivises excessive volumes of publication, fraud, and splitting discoveries into multiple papers, which increases the burden on researchers to stay current
              • Nobody wants to pay scientists, so bright people end up elsewhere
            • naevaTheRat@lemmy.dbzer0.com · +1 / −1 · 10 months ago

              This seems like splitting hairs. AGI doesn’t exist, so that can’t be what they mean. “AI” applies to everything from pathing algorithms for library robots to computer vision, and none of those seem to apply.

              The context of this post is LLMs and their applications.

              • A_Very_Big_Fan@lemmy.world · +1 · 10 months ago

                The comment you replied to first said “AI”, not “LLMs”. And he even told you himself that he didn’t mean LLMs.

                I’m not saying he’s right, though, because afaik AI hasn’t made any noteworthy progress in medical science. (Although a quick skim through Google suggests there has been.) I’m just saying that’s clearly not what he said.

                • naevaTheRat@lemmy.dbzer0.com · +1 · 10 months ago

                  I thought they were saying they didn’t mean LLMs will aid science, not that LLMs weren’t the topic. It’s ambiguous on reread.

                  AI isn’t well defined, which is what I was highlighting with the mentions of computer vision etc.; that falls under AI, and it isn’t really meaningfully different from other diagnostic tools. If people mean AGI then they should say that, but it hasn’t even been established that it’s likely possible, let alone that we’re close.

                  There are already many other intelligences on the planet, and not many are very useful outside of niches. Even if we make a general intelligence, it’s entirely possible we won’t be able to surpass fish level, let alone human level. And even then it’s not clear that intelligence is the primary barrier in anything, which is what I was trying to point out in my post about what holds science back.

                  There are so many ifs that AGI is a “Venus is cloudy, therefore dinosaurs” discussion: you can project anything you like onto it, but it’s all just fantasy.