• prime_number_314159@lemmy.world · 8 months ago

    What we have done is invent massive, automatic, no-holds-barred pattern-recognition machines. LLMs use patterns detected in text to respond to questions. Image recognition is pattern recognition, with some of those patterns given names (like “cat” or “book”). Image generation is a little different, but it basically flips image recognition on its head, editing images to look more like the patterns the model was taught to recognize.
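
    To make that “flip it on its head” idea concrete, here is a minimal activation-maximization sketch in Python. Everything in it is illustrative: it assumes an off-the-shelf torchvision classifier and ImageNet class index 281 (“tabby cat”), and it produces noisy pattern-soup rather than a realistic picture. Real image generators are far more sophisticated, but the core move, editing pixels until the recognizer’s patterns light up, is the same.

    ```python
    # Start from random noise and nudge the pixels until the classifier
    # "sees" a cat. Assumes torch and torchvision are installed; class
    # index 281 is ImageNet's "tabby cat".
    import torch
    from torchvision.models import resnet18, ResNet18_Weights

    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)  # we optimize the image, not the model

    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.05)

    for step in range(200):
        opt.zero_grad()
        loss = -model(img)[0, 281]  # maximize the "tabby cat" logit
        loss.backward()
        opt.step()
    ```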

    This can all do some cool stuff, and there are some very helpful outcomes. It’s also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes, and behaviors from the billion-plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don’t even know to look for.

    This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups and less expensive ones for others? AI can find that pattern.
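
    Here is a small, fully synthetic sketch of how that happens. Nothing in it is real data; every column name and number is invented purely to show that a standard classifier will faithfully learn a bias baked into historical labels.

    ```python
    # Fully synthetic demo: historical records where, at the same symptom
    # severity, women were dismissed more often. A model trained on those
    # labels learns the same habit. All names and numbers are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    severity = rng.normal(size=n)            # actual symptom severity
    is_female = rng.integers(0, 2, size=n)   # protected attribute (0 or 1)

    # Biased labels: dismissal odds rise for women at equal severity.
    p_dismiss = 1 / (1 + np.exp(severity - is_female))
    dismissed = rng.random(n) < p_dismiss

    X = np.column_stack([severity, is_female])
    model = LogisticRegression().fit(X, dismissed)
    print(model.coef_)  # large positive weight on is_female: the bias, learned
    ```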

    This is also true in law (I know there’s supposed to be no systemic bias in our court systems, but AI can find those patterns, too), engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc., etc.

    The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find and which ones it isn’t. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness left behind inside popular models, there are severe constraints on what it should be doing.
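
    And “telling it not to notice” is harder than it sounds. Here is the same synthetic setup as above, but with the protected column deleted before training; again, every variable is invented for illustration. The point is that other columns often act as proxies, so the bias survives the scrub.

    ```python
    # Same synthetic setup, but we "scrub" the protected column before
    # training. A leftover proxy column (a clinic code, a zip code, a
    # note-length field...) carries the signal anyway.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    severity = rng.normal(size=n)
    is_female = rng.integers(0, 2, size=n)
    dismissed = rng.random(n) < 1 / (1 + np.exp(severity - is_female))

    proxy = is_female + rng.normal(scale=0.3, size=n)  # correlated stand-in column
    X_scrubbed = np.column_stack([severity, proxy])    # is_female itself removed

    model = LogisticRegression().fit(X_scrubbed, dismissed)
    print(model.coef_)  # the proxy inherits a positive weight: the bias survives
    ```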

    • afraid_of_zombies@lemmy.world · 8 months ago

      > engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc., etc.

      I haven’t seen that. I’ve sent the same pieces of infrastructure equipment everywhere from poor third-world cities to super-wealthy ones. Not saying it doesn’t exist, but I personally haven’t seen it. You know, a lot of these specs are just recycled, to the extent that I have seen the same page references across countries. I was looking at a spec a few days ago for Massachusetts that was word-for-word identical to one I had seen in a city in British Columbia.

      In terms of biases among P.E.s, what I have seen is a preference for not doing engineering and instead billing more hours: take the same design you inherited in the 80s, make cosmetic changes, and generate more paperwork (oh, this one part can’t be TBD anymore, it has to be this specific brand, and I need catalog cuts/datasheet/certs/3 suppliers/batch and lot numbers/serial numbers/a hardcopy of the CAD of it…). So I imagine an LLM trained on these guys (yes, they are always guys) would know how to make project submittals and deliverables longer and more complex, while feeling the urge to conduct more stakeholder meetings via RFIs.

      Sorry, I don’t mean to be bitter. I have one now demanding I replicate an exact copy of a control system from the early 80s with the same parts, and they did not like it when I told them that the parts are only available on eBay.