Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)
Considering that what we’ve decided to call “AI” can’t actually make decisions, that’s a no-brainer.
If that’s what the term “AI” means, then humans are the no-brainers.
Shouldn’t, but there’s absolutely nothing stopping it, and lazy tech companies absolutely will. I mean, we live in a world where Boeing built a plane that couldn’t fly straight and tried to fix it with software. The tech will be abused so long as people are greedy.
So long as people are rewarded for being greedy. Greedy and awful people will always exist, but the issue is in allowing them to control how things are run.
More than just that, they’re shielded from repercussions. The execs who ignored all the safety concerns should be in jail right now for manslaughter. They knew better and gambled with other people’s lives.
They fixed it with software and then charged extra for the software safety feature. It wasn’t until the planes started falling out of the sky that they decided they would gracefully offer it for free.
Has anyone checked on the sister?
OpenAI went from interesting to horrifying so quickly, I just can’t look.
I’m tired of dopey white men making the world so much worse.
OpenAI went from an interesting and insightful company to a horrible and weird one in very little time.
People only thought it was the former before they actually learned anything about the company. They were always this way.
AI shouldn’t make any decisions
I’m sure Zergerberg is also claiming they’re not making any life-or-death decisions. Let’s see you in a couple of years when the military gets involved with your shit. Oh wait, they already did, but I guess they’ll just use AI to improve soldiers’ canteen experience.
Fair enough. I do think AI will become a valuable tool for doctors and others who do make those decisions.
Using AI to inform a decision is different from letting it make the decision.