A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making.
The thing is, intelligence is the capacity to create information that can be separately verified.
For this you need two abilities:
the ability to create information, which I believe is quantum-based (and which I call “intuition”), and
the ability to validate, or verify, information, which I believe is based on deterministic logic (and which I call “rationalization”).
If you have the first without the second, you end up in a state we call “insanity”; if you have the second without the first, you are merely a computer.
Animals, for example, often have exemplary intuition but very limited rationalization (which happens mostly empirically, not through deduction); if they were human, most would be “batshit crazy”.
My point is that computers have had the ability to rationalize since day one, but they have never had the ability to generate new data, which is a requirement for intuition. In fact, the same is true of random number generators, for the very same reasons. And just as we have pseudorandom generators, in my view LLMs are pseudointuitive: close enough to the real thing to fool most humans, but distinguishable from it by a formal system.
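To make the pseudorandom analogy concrete, here is a minimal Python sketch (the seed and sequence length are arbitrary choices of mine, not anything from the argument above): a seeded generator produces output that looks random to a person, yet the trivial formal check of replaying the seed exposes its determinism, because no new information was ever created.

```python
import random

# A pseudorandom generator looks random to a human observer,
# but it is fully determined by its seed: no new information
# is created, so replaying the seed replays the exact sequence.
random.seed(42)                                   # arbitrary seed
first_run = [random.randint(0, 9) for _ in range(10)]

random.seed(42)                                   # same seed again
second_run = [random.randint(0, 9) for _ in range(10)]

assert first_run == second_run  # the formal check: pure determinism
print(first_run)                # "random looking" to a human reader
```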
As of right now, we have successfully created a technology that produces pseudointuitive data out of seemingly unrelated, real-life, genuinely intuitive data. We still need to find a way to reliably apply rationalization to that data.
And until then, it is vitally important that we do not conflate our premature use of that technology with “the inability of computers to produce accurate results”.
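One way to picture what reliably applying rationalization might eventually look like is a generate-and-verify loop, where an untrusted “intuitive” source proposes candidates and a deterministic checker accepts only what it can validate. The sketch below is purely illustrative and assumes nothing about any particular model; the `proposals` list simply stands in for generated output.

```python
from typing import Callable, Iterable, Optional

def first_verified(candidates: Iterable[str],
                   verify: Callable[[str], bool]) -> Optional[str]:
    """Return the first candidate passing a deterministic check,
    or None if nothing verifiable was generated."""
    for candidate in candidates:
        if verify(candidate):
            return candidate
    return None

# Stand-in for pseudointuitive output: one wrong claim, one right one.
proposals = ["2 + 2 = 5", "2 + 2 = 4"]

def verify_arithmetic(claim: str) -> bool:
    # Rationalization as deterministic logic: recompute and compare.
    # (eval is acceptable here only because this is a toy sketch.)
    lhs, rhs = claim.split("=")
    return eval(lhs) == int(rhs)

print(first_verified(proposals, verify_arithmetic))  # -> 2 + 2 = 4
```

The hard part, of course, is that for most interesting outputs no such clean verifier exists yet, which is exactly the gap described above.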