Their “manifesto”:
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.
If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
Lmao, no it’s not in reach.
More tech bro bullshit just to get fools to invest and make him rich (which will work).
LLMs are a start. I can see them being the machine/human interface to a broad array of specialized applications. If we want to see true AI we’ll need to add efficient complexity, and perhaps, if we don’t want it to be contained exclusively in a Platonic-type realm, we’ll need to give our programs direct access to explore our physical one.
He better watch out for the piledriver
He was careful not to mention AI…
This honestly looks like a grift to get a nice salary for a few years on VC money. These are not random sales goons peddling shit they don’t understand. They don’t even bother to define “superintelligence”, let alone what they mean by “safe superintelligence”.
I find it hard to believe this wasn’t written with malicious intent. But maybe I am too cynical and they are so used to people kissing their asses that they think their shit doesn’t smell. But money definitely plays some role in this; they would be stupid not to cash in while the AI hype is hot.
The number of AI companies is slowly passing the number of cryptocurrencies. What’s gonna be the new flavor of the year?
Just noticed that the cropped image makes it look like he is doing a Nazi salute, and then the first sentence of their “manifesto” is “Superintelligence is within reach.” :)
I feel kind of bad commenting on his physical appearance, but as a guy who balded in the same pattern… fuckin shave your head, dude. Or, since you’re rich as fuck, spend the 10k for a transplant. It looks so bad, and not in a “wow it’s ugly but he can sure pull it off” kind of way. More like an “I never rescheduled the appointment I missed at the barber” kind of way.
Pull an Elon Musk move then.