A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making.
We spent decades treating computers like fancy calculators. They are capable of far more than that, and we are still working out how to put that capability to more valuable use.
In that process, there will be times when the responses you get need to be independently verified. As the technology matures, it will become more and more accurate and useful. If we could just skip past the development phase and get to the fully engineered solution, we would… but that’s not really how anything new ever comes into being.
As for the current state of the technology, you can get a ton of useful information out of LLMs right now by asking them for a list of options you wouldn’t have thought of, general outlines of a course of action, places or topics to research to find a correct answer, and so on. However, if you expect the current iteration of the technology to do everything for you, without error and without verifying the output, you are going to have a bad time.
The thing is, intelligence is the capacity to create information that can be separately verified.
For this you need two abilities:
the ability to create information, which I believe is quantum based (and which I call “intuition”), and
the ability to validate, or verify information, which I believe is based on deterministic logic (and which I call “rationalization”).
If you have the first without the second, you end up in a state we call “insanity”; if you have the second without the first, you are merely a computer.
Animals, for example, often have exemplary intuition, but very limited rationalization (which happens mostly empirically, not through deduction), and if they were humans, most would be “bat shit crazy”.
My point is that computers have had the ability to rationalize since day one. But they have never had the ability to generate new data, which is a requirement for intuition. In fact, this is true of random generators too, for the very same reasons. And in the exact same way that we have pseudorandom generators, in my view, LLMs are pseudointuitive. That is, close enough to the real thing to fool most humans, but distinctly different to a formal system.
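The pseudorandom analogy can be made concrete. A seeded generator produces output that looks novel, yet is fully determined by its inputs: no new information is ever created. A minimal sketch using Python’s standard library:

```python
import random

# Two generators seeded identically produce identical "random" sequences.
# The output looks unpredictable, but it is pure deterministic computation.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 99) for _ in range(5)]
seq_b = [b.randint(0, 99) for _ in range(5)]

assert seq_a == seq_b  # same seed, same sequence, every time
```

Anyone holding the seed can reproduce and verify the entire output stream, which is exactly what distinguishes pseudorandomness from true randomness, and, by the analogy above, pseudointuition from intuition.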
As of right now, we have successfully created a technology that creates pseudointuitive data out of seemingly unrelated, real life, actually intuitive data. We still need to find a way to reliably apply rationalization to that data.
And until then, it is vitally important that we do not conflate our premature use of that technology with “the inability of computers to produce accurate results”.
I mostly agree with this distinction. However, if you are on the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, you probably won’t care whether it was the computer or the programmer that made the mistake. In the end the result is the same.
“Computers make mistakes” is just a way of saying that you shouldn’t blindly trust whatever output the computer spits out.
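The distinction is easy to demonstrate. In the hypothetical sketch below, the computer executes its instructions flawlessly and reproducibly; the output is still wrong, because the programmer’s logic is wrong (the function name and the bug are invented for illustration):

```python
def average(values):
    # Programmer error: divides by len(values) + 1 instead of len(values).
    # The computer executes this faithfully; the mistake is in the code.
    return sum(values) / (len(values) + 1)

print(average([10, 20, 30]))  # prints 15.0, not the correct 20.0
```

The machine did not “make a mistake”: it computed exactly what it was told to compute, and it will do so identically on every run. That determinism is also what makes such errors traceable, which is precisely what is lost when the system in question is one nobody can inspect.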
if you are on the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, then you probably won’t care whether it was the computer or the programmer making the mistake
I’m absolutely expecting corporations to get away with the argument that “they cannot be blamed for the outcome of a system that they neither control nor understand, and that is shown to work in X% of cases”. Or at least to spend billions trying to.
And in case you think traceability doesn’t matter anyway, think again.
IMHO it’s crucial we defend the fact that “computers don’t make mistakes”, for two reasons:
Computers are defined by the flawless execution of rational logic. And somehow, I don’t see a “broader” definition working in favor of the public (i.e. less waste, more fault-tolerant systems), but strictly in favor of mega corporations.
If we let public opinion mix up “computers” with the LLMs that run on them, we will get even more restrictive, ultra-broad legislation against the general public. Think “3D printer ownership heavily restricted because some people printed guns with them”, but on an unprecedented scale. All we will have left are smartphones, because we are not their owners.
We spent decades educating people that “computers don’t make mistakes”, and now you want them to accept that they do?
We filled them with shit, that’s what. We don’t even know how that shit works anymore.
Let’s be honest here.