• BombOmOm@lemmy.world · +83 · 1 year ago

    The difficult part of software development has always been the continuing support. Did the chatbot set up a versioning system, a build system, a backup system, a ticketing system, unit tests, and help docs for users? Did it get conflicting requests from two different customers and intelligently resolve them? Was it given a vague problem description that forced it to get on a call with the customer to hunt down what the customer actually wanted before devising and implementing a solution?

    This is the expensive part of software development. Hiring an outsourced, low-tier programmer for almost nothing has always been possible; that programmer being slightly cheaper doesn’t change the game in any meaningful way.

    • Puzzle_Sluts_4Ever@lemmy.world · +9/-2 · edited · 1 year ago

      While I do agree that management is genuinely important in software dev:

      If you can rewrite the codebase quickly enough, versioning matters a lot less. It’s the idea of “is it faster to just rewrite this function/package than to debug it?” but at a much larger scale. And while I would be concerned about regressions from full rewrites of the code… have you ever used software? Regressions happen near constantly even with proper version control and testing…

      As for testing and documentation: This is actually what AI-enhanced tools are good for today. These are the simple tasks you give to junior staff.

      Conflicting requests and iterating on descriptions: Have you ever futzed around with ChatGPT? That is what it lives off of. Ask a question, then ask a follow-up question, and so forth.

      I am still skeptical of having no humans in the loop. But all of this is very plausible even with today’s technology and training sets.


      Just to add a bit more to that: I don’t think having an AI-operated company is a good idea. Even ignoring the legal aspects of it, there is a lot of value in having a human who can make “irrational” decisions because one customer will pay more in the long run, and so forth.

      But I can definitely see entire departments being a node in a rack. Customers talk to humans (or a different LLM), which then talk to the “Network Stack” node and the “UI/UX” node and so forth.
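
      Something like this toy sketch (every name here is hypothetical, and llm() stands in for whatever chat-completion API you would actually call):

      ```python
      # Toy sketch of "departments as nodes": a front desk routes customer
      # requests to specialist LLM nodes. llm() is a placeholder, not a real API.
      from typing import Callable

      def llm(system_prompt: str, message: str) -> str:
          """Stand-in for a real chat-completion call."""
          return f"[{system_prompt} answers: {message}]"

      # Each "department" is just a different system prompt around the same model.
      DEPARTMENTS: dict[str, Callable[[str], str]] = {
          "network": lambda m: llm("You are the Network Stack node.", m),
          "uiux":    lambda m: llm("You are the UI/UX node.", m),
      }

      def front_desk(customer_message: str) -> str:
          # A cheap routing step decides which node handles the request.
          dept = "uiux" if "screen" in customer_message.lower() else "network"
          return DEPARTMENTS[dept](customer_message)

      print(front_desk("The login screen overlaps on small phones"))
      ```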

    • akrot@lemmy.world · +5/-1 · 1 year ago

      Absolutely true, but many are heading in the direction of implementing those solutions with AIs.

    • doublejay1999@lemmy.world · +3 · 1 year ago

      Which is why plenty of companies merely pay lip service to it, or don’t do it at all and outsource it to ‘communities’.

  • theluddite@lemmy.ml · +66/-1 · 1 year ago

    “I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well”

    For how long will we be forced to endure different versions of the same article?

    The study said 86.66% of the generated software systems were “executed flawlessly.”

    Like I said yesterday, in a post celebrating how ChatGPT can answer medical questions with less than 80% accuracy: that is trash. A company with absolute shit code still has virtually all of it “execute flawlessly.” Whether or not code executes is not the bar by which we judge it.
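
    To make that concrete, here’s a toy example of my own (not from the study) that “executes flawlessly” by their bar and is still garbage:

    ```python
    # Runs without a single error, so it "executes flawlessly."
    # It is also silently wrong.

    def average_rating(ratings: list[int]) -> float:
        total = 0
        for r in ratings:
            total += r
        # Off-by-one bug: divides by one more than the number of ratings.
        return total / (len(ratings) + 1)

    print(average_rating([4, 5, 3]))  # prints 3.0; the real average is 4.0
    ```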

    Even if it were to hit 100%, which it does not, there’s so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders and their conflicting desires, getting to know the human beings who need a problem solved, and so on.

    LLMs are not capable of this kind of meaningful collaboration, despite all this hype.

    • thantik@lemmy.world · +18/-1 · 1 year ago

      AI regularly hallucinates API endpoints that don’t exist, functions that aren’t part of that language, libraries that don’t exist. There’s no fucking way it did any of this bullshit. Like, yeah - it can probably do a mean autocomplete, but this is being pushed so hard because they want to drive wages down even harder. They want know-nothing middle-managers to point to this article and say “I can replace you with AI, get to work!”…that’s the only purpose of this crap.
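
      To illustrate (my own made-up example, not from the article): requests is a real Python library, but it has no fetch_json function, and this is exactly the kind of plausible-looking code these things emit:

      ```python
      # Looks plausible, autocompletes nicely, and crashes immediately:
      # the real requests library has get(), but no fetch_json().
      import requests

      data = requests.fetch_json("https://api.example.com/users")  # AttributeError
      ```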

    • R0cket_M00se@lemmy.world · +3/-3 · 1 year ago

      LLMs are not capable of this kind of meaningful collaboration

      Which is why they’re a tool for professionals to amplify their output, not a replacement for them.

  • Melco@lemmy.world · +47/-2 · edited · 1 year ago

    …and it didn’t work.

    Then a human had to fix it, and that took 3x as long as if a human had written it originally.

    This is the state of “AI” at the moment. It’s a giant waste of time.

    • thorbot@lemmy.world · +2/-2 · 1 year ago

      This also completely glosses over the fact that the AI capable of writing this had huge R&D costs to get to that point, and also has ongoing costs associated with running it. This whole article is a fucking joke, probably written by AI.

  • flamekhan@lemmy.world · +43/-1 · 1 year ago

    “We asked a chatbot to solve a problem that already has a solution, and it did OK.”

    • Melco@lemmy.world · +24 · edited · 1 year ago

      Except it cuts and pastes the broken solutions.

      If there are two languages on the same page it will mix and match the code together creating… garbage.

      • frokie@lemmy.world · +8 · 1 year ago

        It should generate its own acceptance tests and keep asking itself to fix the code until they all pass (something like the loop sketched below).
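
        A minimal sketch of that loop, with the llm_* calls as placeholders for whatever completion API you would actually wire up:

        ```python
        # Sketch of a generate-test-fix loop. The llm_* functions are
        # placeholders; nothing here is a real model API.
        import subprocess

        def llm_generate_tests(spec: str) -> str:
            raise NotImplementedError("ask the model for acceptance tests")

        def llm_generate_code(spec: str) -> str:
            raise NotImplementedError("ask the model for an implementation")

        def llm_fix(code: str, failures: str) -> str:
            raise NotImplementedError("feed failures back, ask for a fix")

        def build(spec: str, max_rounds: int = 5) -> str:
            with open("test_solution.py", "w") as f:
                f.write(llm_generate_tests(spec))  # model writes its own tests
            code = llm_generate_code(spec)
            for _ in range(max_rounds):
                with open("solution.py", "w") as f:
                    f.write(code)
                result = subprocess.run(["pytest", "test_solution.py"],
                                        capture_output=True, text=True)
                if result.returncode == 0:
                    return code                      # all acceptance tests pass
                code = llm_fix(code, result.stdout)  # otherwise iterate
            raise RuntimeError("never converged on passing tests")
        ```

        Of course, if the generated tests are gibberish, the loop just converges on gibberish that passes them.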

    • dustyData@lemmy.world · +2 · 1 year ago

      Please ignore the hundreds of thousands of dollars, and the corresponding electricity, required to run the servers and infrastructure needed to train and use these models, please. Or the master cracks the whip again. Please, just say you’ll invest in our startup, please!

    • mrginger@lemmy.world · +4/-1 · edited · 1 year ago

      This is who will get replaced first, and they don’t want to see it. They think they’re the most important, most valuable part of the company, yet the management part was the one thing the AI got right. It still needed the creative mind of a human programmer to write the code properly, or to think outside the box.

  • scarabic@lemmy.world · +17 · 1 year ago

    A test that doesn’t include a real commercial trial or A/B test with real human customers means nothing. Put their game in the App Store and tell us how it performs. We don’t care that it shat out code that compiled successfully. Did it produce something real and usable or just gibberish that passed 86% of its own internal unit tests, which were also gibberish?

  • Fuck Yankies@lemmy.ml · +3/-4 · edited · 1 year ago

    Future software is going to be written by AI, no matter how much you would like to avoid that.

    My speculation is that we will see AI operating systems at some point, due to the extreme effectiveness of future AI at hacking and otherwise subverting frameworks, services, libraries, and even protocols.

    So mutating protocols will become a thing, whereby AI will change and negotiate protocols on the fly as a war rages between defensive AI and offensive AI. There will be a shared codebase, but a clear distinction of the objective at hand.

    That’s why we need more open source AI solutions and less proprietary solutions, because whoever controls the AI will be controlling the digital world - be it you or some fat cat sitting on a Smaug hill of money.

    EDIT: gawdDAMN there’s a lot of naysayers. I’m not talking Stable Diffusion here, guys. I’m talking about automated attacks and self-developing software, when computing and computer networking reach a point of AI supremacy. This isn’t new speculation. It’s coming fo dat ass, in maybe a generation or two… or more…

    • Melco@lemmy.world · +3/-1 · edited · 1 year ago

      I highly doubt this. At the end of the day, the code needs to work to be useful. That is to say, it needs to do what it is intended to do. AI-generated code never works.

      It is great at producing something that looks like it might work, and it presents its answer with a supreme air of confidence, but that is as far as it gets.

    • BetaDoggo_@lemmy.world · +3/-1 · edited · 1 year ago

      That all sounds pointless. Why would we want to use something built on top of a system that’s constantly changing for no good reason?

      Unless the accuracy can be guaranteed at 100%, this hypothetical will never make sense, because you will ultimately end up with a system that could fail at any time for any number of reasons. Predictive models cannot be used in place of consistent, human-verified and tested code.

      For operating systems, I can maybe see LLMs being used to script custom actions requested by users (with appropriate guard rails), but not much beyond that.
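
      A rough sketch of the guard-rail idea (ask_llm() and the action names are all hypothetical):

      ```python
      # The model only ever picks from a fixed whitelist; its output is
      # matched against known actions, never executed as code.
      ALLOWED_ACTIONS = {
          "mute_notifications": lambda: print("notifications muted"),
          "empty_trash":        lambda: print("trash emptied"),
      }

      def ask_llm(request: str) -> str:
          raise NotImplementedError("map the user's request to an action name")

      def handle(request: str) -> None:
          action = ask_llm(request).strip()
          if action in ALLOWED_ACTIONS:      # guard rail: whitelist only
              ALLOWED_ACTIONS[action]()
          else:
              print(f"refusing unrecognized action: {action!r}")
      ```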

      It’s possible that we will have large software entirely written by machines in the future, but what it will be written with will not in any way resemble any architecture that currently exists.

      • Melco@lemmy.world · +1/-1 · 1 year ago

        Funny you should say that. Currently AI utterly fails at the most trivial shell-scripting tasks. I have a 0% success rate at getting it to write or debug a shell script that actually works. It just spits out nice-looking nonsense over and over.

    • shotgun_crab@lemmy.world · +1 · edited · 1 year ago

      I don’t think so. Having a good architecture is far more important and makes projects actually maintainable. AI can speed up work, but humans need to tweak and review its work to make sure it fits with the exact requirements.