I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”
This article, in contrast, is quotes from folks making the next AI generation - saying the same.
repeat after me: LLMs are not AI.
LLMs are one kind of AI. They're just one small piece of the AI in use every day, from chess bots to voice transcription, but they are still AI.
I would replace the word "version" with "aspect". LLMs are merely one part of the puzzle that would be AI. Essentially, what's been constructed is the mouth and the part of the brain that can form words, but without any of the reasoning or intelligence behind what the mouth says.
The same goes for the art AIs. They can paint pictures based on input, but they can't reason about how those pictures should look, which is why it takes so much tweaking to get output that doesn't look like it came out of a Lovecraft novel.
I don’t believe the “I” is an accurate term.
More like “Smart” Word generators.
Of course it changes meaning if you remove the qualifier.
In effect, man-made/fake intelligence.
I think you are confusing AI with AGI.
Not at all. AI is something that uses rules, not statistical guesswork. A simple control loop is already basic AI, but the core mechanism of LLMs is not (the parts before and after token association/prediction are). Don't fall for the marketing bullshit of some dumbass Silicon Valley snake-oil vendors.
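To make the control-loop point concrete, here is a minimal sketch (my own illustration, not from the comment above) of a thermostat-style rule-based controller: fully deterministic rules, no statistical guesswork, yet it's the kind of system classically filed under "AI".

```python
# Hypothetical example: a thermostat control loop with hysteresis.
# Deterministic rules only -- no learned weights, no token prediction.
def thermostat_step(current_temp, target_temp, heater_on, hysteresis=0.5):
    """Return whether the heater should be on after one control step."""
    if current_temp < target_temp - hysteresis:
        return True   # too cold: turn heater on
    if current_temp > target_temp + hysteresis:
        return False  # too warm: turn heater off
    return heater_on  # inside the dead band: keep the current state

# One simulated step: room at 18 C, target 21 C, heater currently off.
print(thermostat_step(18.0, 21.0, heater_on=False))  # -> True
```

The hysteresis band is the only subtlety: without it, the heater would flap on and off whenever the temperature hovered near the target.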