• Ultraviolet@lemmy.world · 9 days ago

    Because novelty is all it has. As soon as it stops improving in a way that makes people say “oh that’s neat”, it has to stand on the practical merits of its capabilities, which are, well, not much.

    • theherk@lemmy.world · 9 days ago

      I’m so baffled by this take. “Create a terraform module that implements two S3 buckets with cross-region bidirectional replication. Include standard module files like linting rules and enable precommit.” Could I write that? Yes. But does this provide an outstanding stub to start from? Also yes.
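
      For anyone curious, a minimal sketch of roughly what that stub looks like (the bucket names, regions, and the replication-role variable are placeholders of mine, not the model’s actual output; the IAM policy, linting rules, and pre-commit config are left out):

      ```hcl
      # Sketch only: bucket names, regions, and the role variable are illustrative placeholders.
      terraform {
        required_providers {
          aws = {
            source  = "hashicorp/aws"
            version = ">= 5.0"
          }
        }
      }

      # One provider per region.
      provider "aws" {
        alias  = "east"
        region = "us-east-1"
      }

      provider "aws" {
        alias  = "west"
        region = "us-west-2"
      }

      # ARN of an IAM role S3 can assume for replication (policy not shown).
      variable "replication_role_arn" {
        type = string
      }

      resource "aws_s3_bucket" "east" {
        provider = aws.east
        bucket   = "example-replicated-east"
      }

      resource "aws_s3_bucket" "west" {
        provider = aws.west
        bucket   = "example-replicated-west"
      }

      # Cross-region replication requires versioning on both buckets.
      resource "aws_s3_bucket_versioning" "east" {
        provider = aws.east
        bucket   = aws_s3_bucket.east.id
        versioning_configuration {
          status = "Enabled"
        }
      }

      resource "aws_s3_bucket_versioning" "west" {
        provider = aws.west
        bucket   = aws_s3_bucket.west.id
        versioning_configuration {
          status = "Enabled"
        }
      }

      # One rule per direction makes it bidirectional; the west-to-east rule
      # mirrors this one with the providers and buckets swapped.
      resource "aws_s3_bucket_replication_configuration" "east_to_west" {
        provider   = aws.east
        depends_on = [aws_s3_bucket_versioning.east]

        bucket = aws_s3_bucket.east.id
        role   = var.replication_role_arn

        rule {
          id     = "east-to-west"
          status = "Enabled"

          filter {}

          delete_marker_replication {
            status = "Disabled"
          }

          destination {
            bucket        = aws_s3_bucket.west.arn
            storage_class = "STANDARD"
          }
        }
      }
      ```

      The west-to-east rule is the mirror image, and the replication role still needs a policy granting the usual replication permissions on both buckets. The point is just that the scaffolding arrives in seconds.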

      And beyond programming, it’s having a positive impact on science and medicine too. I mean, anybody who doesn’t see any merit has their head in the sand. That of course must be balanced with not falling for the hype, but the merits are very real.

      • Eccitaze@yiffit.net · 9 days ago

        There’s a pretty big difference between chatGPT and the science/medicine AIs.

        And keep in mind that for LLMs and other chatbots, it’s not that they aren’t useful at all but that they aren’t useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot add-ons in Microsoft 365, and this is while AI companies are still in their “sell below cost and light VC money on fire to survive long enough to gain market share” phase. What happens when the VC money dries up and AI companies have to double their prices (or more) in order to make enough revenue to cover their costs?

        • theherk@lemmy.world · 9 days ago

          Nothing to argue with there. I agree. Many companies will go out of business. Fortunately we’ll still have the llama3s and mistrals lying around that I can run locally. On the other hand, cost justification is a difficult equation with many variables, so maybe in some cases it is, or will be, worth the cost. I’m just saying there is some merit.

        • obbeel@lemmy.eco.br · 8 days ago

          I understand that it makes less sense to spend on model size if it isn’t paying off in performance, but why would so much money be spent on larger LLMs then?

      • lightstream@lemmy.ml · 9 days ago

        The merits are real. I do understand the deep mistrust people have for tech companies, but there’s far too much throwing the baby out with the bathwater.

        As a solo developer, LLMs are a game-changer. They’ve allowed me to make amazing progress on some of my own projects that I’ve been stuck on for ages.

        But it’s not just technical subjects that benefit from LLMs. ChatGPT has been a great travel guide for me. I uploaded a pic of some architecture in Berlin and it went into the history of it; I asked it about some damage to an old church in Spain, which turned out to be from the Spanish Civil War, when revolutionaries had been mowed down by Franco’s firing squads.

        Just today, I was getting help from an LLM for an email to a Portuguese removals company. I sent my message in English with a Portuguese translation, but the guy just replied with a single sentence in broken English:

        “Yes a can , need tho mow m3 you need delivery after e gif the price”

        The first bit is pretty obviously “Yes I can”, but I couldn’t really be sure what he was trying to say with the rest of it. So I asked ChatGPT, which responded:

        It seems he’s saying he can handle the delivery but needs to know the total volume (in cubic meters) of your items before he can provide a price. Here’s how I’d interpret it:

        “Yes, I can [do the delivery]. I need to know the [volume] in m³ for delivery, and then I’ll give you the price.”

        Thanks to LLMs, I’m able to accomplish so many things that would have previously taken multiple internet searches and way more effort.