The increasing power of the latest artificial intelligence systems is stretching traditional evaluation methods to breaking point, posing a challenge to businesses and public bodies over how best to work with the fast-evolving technology.

Flaws in the evaluation criteria commonly used to gauge performance, accuracy and safety are being exposed as more models come to market, according to people who build, test and invest in AI tools. The traditional tools are easy to manipulate and too narrow for the complexity of the latest models, they said.

The accelerating technology race sparked by the 2022 release of OpenAI’s chatbot ChatGPT and fed by tens of billions of dollars from venture capitalists and big tech companies, such as Microsoft, Google and Amazon, has obliterated many older yardsticks for assessing AI’s progress.

  • eleitl@lemmy.ml · 8 months ago

    There is no reliable risk assessment for truly intelligent, autonomous systems. Let’s stop pretending that it can exist.

  • afraid_of_zombies@lemmy.world · 8 months ago

    I thought it was all a scam. Why can’t people make up their minds? I need to know whether I should panic about some neo-feudal jobless dystopia or about the AI bust that will bring our economy down.

  • unreasonabro@lemmy.world · 8 months ago (edited)

    The first laws we make about anything are always ill-conceived and take thirty years to change, too.

    This society is not worth living in, and we do everything wrong.

  • Fedizen@lemmy.world · 8 months ago

    Nearly all of this money is wasted. The most impressive use for AI is image generation (which is less abstract than speech). Chatbots aren’t useful. Until hallucinations can be removed by some kind of automated response-validation process (which doesn’t currently exist), it’s just nonsense.