Whilst everything you linked is great research which demonstrates the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.
This argument always boils down to one’s definition of the word “understanding”. For me, that word implies a degree of consciousness; for others, apparently not.
To quote GPT-4:
LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.
When people say that the model “understands”, it means just that — not that it is human, and not that it does so exactly as humans do. Judging its capabilities by how closely it mimics humans is pointless, just like judging a boat by how well it can do the breaststroke. The value lies in its performance and output, not in imitating human cognition.
Understanding is a human concept so attributing it to an algorithm is strange.
It can be done by taking a very shallow definition of the word but then we’re just entering a debate about semantics.
Animals understand.
Yes, sorry, I probably shouldn’t have used the word “human”. It’s a concept that we apply to living things that experience the world.
Animals certainly understand things, but it’s a sliding scale where we use human understanding as the benchmark.
My point stands, though: attributing it to an algorithm is strange.
I’m starting to wonder about you though.
Well it was a fun ruse while it lasted.
You don’t have to be a living thing to experience the world.
Yes, you do, unless you have a really reductionist view of the word “experience”.
Besides, that article doesn’t really support your statement; it just shows that a neural network can link words to pictures, which we already know.