In a vacuum, sure, but it also completely tracks with Sam Altman’s behavior outside of OpenAI.
Employees at previous companies he's run have expressed very similar concerns about Altman acting in dishonest and manipulative ways. At his most high-profile gig before OpenAI, Y Combinator, Paul Graham flew from London to San Francisco to personally (and quietly) fire him because Altman had gone off the rails there too. The guy has a track record of doing exactly the kind of thing Toner is claiming.
What we know publicly strongly suggests Altman is a serial manipulator. I'm inclined to believe Toner because her account fits with what we otherwise know about the man. From what I can tell, the board wasn't wrong; they lost because Altman's core skill is being a power broker, and he went nuclear when they tried to do their job.
There is a real risk that the hype cycle around LLMs will smother other research in the cradle when the bubble pops.
The hyperscalers are pouring tens of billions of dollars into infrastructure every single quarter on the promise of LLMs. If LLMs don't turn into something with tangible ROI, the term "AI" will become every bit as radioactive to investors as it is lucrative right now.
Viable lines of research will become much harder to fund if investors get burned because the business model they're backing never solidifies beyond "trust us, bro."