- cross-posted to:
- [email protected]
The term “reasoning model” is as much of a gaslighting marketing term as “hallucination”. When an LLM is “reasoning”, it is just running the model multiple times. As this report implies, using more tokens appears to increase the probability of producing a factually accurate response, but the AI is not reasoning, and the “steps” of its “thinking” are just bullshit approximations.
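
To make the point concrete, here is a minimal sketch of what “running the model multiple times” can amount to in practice, in the style of self-consistency sampling. The `generate` function is a hypothetical stub standing in for a single model call; nothing here is any vendor's actual implementation. Accuracy improves only because repeated sampling averages out single-run errors, not because any step is reasoned about.

```python
import collections
import random

# Hypothetical stand-in for one sampled completion from an LLM.
# A real setup would call an actual model API here.
def generate(prompt: str) -> str:
    return random.choice(["4", "4", "5"])  # noisy single-shot answers

def sample_and_vote(prompt: str, n: int = 16) -> str:
    """Run the same model n times and majority-vote the answers."""
    answers = [generate(prompt) for _ in range(n)]
    return collections.Counter(answers).most_common(1)[0][0]

print(sample_and_vote("What is 2 + 2?"))
```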
AI agent that can do anything you want!
looks inside
state machines and if statements
Okay, I'll bite the bullet: without pressure from shareholders, Apple would not have released Apple Intelligence in that state, and maybe never at all.