• James R Kirk@startrek.website
    4 days ago

    The term “reasoning model” is as much of a gaslighting marketing term as “hallucination”. When an LLM is “reasoning”, it is just running the model additional times to generate more tokens. As this report implies, spending more tokens appears to increase the probability of producing a factually accurate response, but the AI is not “reasoning”, and the “steps” of its “thinking” are just bullshit approximations.
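
    The point that “reasoning” is just spending more inference tokens can be sketched with a toy simulation. Everything here is hypothetical: `toy_generate` is a stand-in for an LLM, and the probability numbers are made up purely to illustrate the claimed token-budget/accuracy relationship, not measured from any real model.

```python
import random

def toy_generate(n_thinking_tokens: int, rng: random.Random) -> bool:
    """Toy stand-in for an LLM call: each extra 'thinking' token
    slightly raises the chance the final answer is correct.
    The 0.5 base rate and 0.05 increment are invented numbers."""
    p_correct = min(0.95, 0.5 + 0.05 * n_thinking_tokens)
    return rng.random() < p_correct

def accuracy(n_thinking_tokens: int, trials: int = 10_000, seed: int = 0) -> float:
    """Estimate answer accuracy for a given 'thinking' token budget."""
    rng = random.Random(seed)
    hits = sum(toy_generate(n_thinking_tokens, rng) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for n in (0, 4, 8):
        print(f"{n} thinking tokens -> accuracy ~{accuracy(n):.2f}")
```

    Under these assumptions the measured accuracy climbs with the token budget, which is all the “reasoning” framing actually demonstrates: a statistical effect of longer generations, not an inspectable chain of thought.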

  • BudgetBandit@sh.itjust.works
    4 days ago

    Okay, I’ll bite the bullet: without pressure from shareholders, Apple would not have released Apple Intelligence in that state, and maybe never at all.