• Fuck Yankies@lemmy.ml · 1 year ago

    Future software is going to be written by AI, no matter how much you would like to avoid that.

    My speculation is that we will see AI operating systems at some point, given how effective future AI will be at hacking and otherwise subverting frameworks, services, libraries, and even protocols.

    So mutating protocols will become a thing, whereby AI will change and negotiate protocols on the fly as a war rages between defensive AI and offensive AI. There will be a shared codebase, but a clear distinction between the objectives at hand.
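
    To make that concrete, here's a toy Python sketch of what "negotiating a protocol on the fly" could look like - purely speculative, with made-up names (derive_layout and the frame fields are invented for illustration): both peers derive a per-session wire format from a shared seed, so the layout mutates every session while both sides still agree on it.

    ```python
    import hashlib
    import os
    import random

    def derive_layout(seed: bytes) -> list[str]:
        # Derive a field ordering from the session seed: both peers compute
        # the same shuffle, so each session's frame layout differs but matches.
        fields = ["version", "length", "payload", "checksum"]
        rng = random.Random(hashlib.sha256(seed).digest())
        rng.shuffle(fields)
        return fields

    def encode(layout: list[str], message: dict) -> bytes:
        return b"|".join(str(message[f]).encode() for f in layout)

    def decode(layout: list[str], frame: bytes) -> dict:
        return dict(zip(layout, (p.decode() for p in frame.split(b"|"))))

    # The "negotiation": both sides share a seed, then independently derive
    # the same mutated layout for this session.
    seed = os.urandom(16)
    frame = encode(derive_layout(seed),
                   {"version": 1, "length": 5, "payload": "hello", "checksum": 42})
    print(decode(derive_layout(seed), frame))  # round-trips despite the shuffle
    ```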

    That’s why we need more open source AI solutions and fewer proprietary ones, because whoever controls the AI will control the digital world - be it you or some fat cat sitting on a Smaug hoard of money.

    EDIT: gawdDAMN there’s a lot of naysayers. I’m not talking Stable Diffusion here, guys. I’m talking about automated attacks and self-developing software, once computing and computer networking reach a point of AI supremacy. This isn’t new speculation. It’s coming fo dat ass, in maybe a generation or two… or more…

    • Melco@lemmy.world · 1 year ago

      I highly doubt this. At the end of the day, the code needs to work to be useful. That is to say, it needs to do what it is intended to do. AI-generated code never works.

      It is great at producing something that looks like it might work, and it presents its answer with a supreme air of confidence, but that is as far as it gets.

    • BetaDoggo_@lemmy.world · 1 year ago

      That all sounds pointless. Why would we want to use something built on top of a system that’s constantly changing for no good reason?

      Unless accuracy can be guaranteed at 100%, this hypothetical will never make sense, because you will ultimately end up with a system that could fail at any time for any number of reasons. Predictive models cannot be used in place of consistent, human-verified and tested code.

      For operating systems I can maybe see LLMs being used to script custom actions requested by users (with appropriate guard rails), but not much beyond that.
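
      As a minimal sketch of that guard-rails idea (everything here is hypothetical - the whitelist and the stub standing in for a real model call): the LLM only proposes an action name, and nothing runs unless it matches a fixed whitelist.

      ```python
      import subprocess

      # The guard rail: a fixed whitelist of the only commands that can ever
      # run, no matter what the model proposes. Both entries are examples.
      ALLOWED_ACTIONS = {
          "list_files": ["ls", "-l"],
          "disk_usage": ["df", "-h"],
      }

      def llm_suggest(request: str) -> str:
          # Stand-in for a real model call: maps a user request to an action name.
          return "disk_usage" if "disk" in request.lower() else "list_files"

      def run_user_action(request: str) -> str:
          action = llm_suggest(request)
          if action not in ALLOWED_ACTIONS:  # refuse anything off the list
              raise ValueError(f"refusing unlisted action: {action!r}")
          result = subprocess.run(ALLOWED_ACTIONS[action],
                                  capture_output=True, text=True, check=True)
          return result.stdout

      print(run_user_action("how full is my disk?"))
      ```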

      It’s possible that we will have large software written entirely by machines in the future, but whatever it is written with will not in any way resemble any architecture that currently exists.

      • Melco@lemmy.world · 1 year ago

        Funny you should say that. Currently, AI utterly fails at the most trivial shell scripting tasks. I have a 0% success rate at getting it to write or debug a shell script that actually works. It just spits out nice-looking nonsense over and over.

    • shotgun_crab@lemmy.world · 1 year ago

      I don’t think so. Having a good architecture is far more important and is what makes projects actually maintainable. AI can speed up work, but humans need to tweak and review its output to make sure it fits the exact requirements.