The increasing power of the latest artificial intelligence systems is stretching traditional evaluation methods to breaking point, posing a challenge to businesses and public bodies over how best to work with the fast-evolving technology.
Flaws in the evaluation criteria commonly used to gauge performance, accuracy and safety are being exposed as more models come to market, according to people who build, test and invest in AI tools. These traditional benchmarks, they said, are easy to manipulate and too narrow to capture the complexity of the latest models.
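To see why such benchmarks can be both narrow and easy to game, consider a minimal sketch of a multiple-choice scorer of the kind many public leaderboards rely on. Everything here is a hypothetical illustration rather than any specific vendor's test: the question set, the `score` function and the `memorising_model` are invented for the example. A model that has simply memorised the answer key, for instance because the test set leaked into its training data, scores perfectly while demonstrating nothing about real-world capability.

```python
# Minimal sketch of an exact-match, multiple-choice benchmark scorer.
# All names (QUESTIONS, score, memorising_model) are hypothetical illustrations.

from typing import Callable

# A tiny, made-up test set: (question, options, index of the correct option).
QUESTIONS = [
    ("What is 2 + 2?", ["3", "4", "5"], 1),
    ("Capital of France?", ["Paris", "Rome", "Madrid"], 0),
]

def score(answer_fn: Callable[[str, list[str]], int]) -> float:
    """Return the fraction of questions answered with exactly the correct index."""
    correct = sum(
        1
        for question, options, truth in QUESTIONS
        if answer_fn(question, options) == truth
    )
    return correct / len(QUESTIONS)

# A "model" that has memorised the answer key (e.g. via test-set leakage)
# achieves a perfect score without any general capability.
LEAKED_KEY = {question: truth for question, _, truth in QUESTIONS}

def memorising_model(question: str, options: list[str]) -> int:
    return LEAKED_KEY[question]

if __name__ == "__main__":
    print(f"Accuracy: {score(memorising_model):.0%}")  # prints 100%
```

The single accuracy number such a scorer produces cannot distinguish reasoning from memorisation, and it says nothing about safety or behaviour outside the fixed question set, which is the narrowness critics describe.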
The accelerating technology race sparked by the 2022 release of OpenAI’s chatbot ChatGPT, and fed by tens of billions of dollars from venture capitalists and big tech companies such as Microsoft, Google and Amazon, has obliterated many older yardsticks for assessing AI’s progress.