How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?
…
I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.
I don’t fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.
Worst part is some of them aren't even idiots, just selfish and reckless. They don't care if the company still exists in a year, so long as they can make millions driving it into the ground.
Hacker News was silencing this article outright. That's typically a sign that it's factual enough to strike a nerve with the potential CxO libertarian [slur removed] crowd.
If this is satire, I don't see it. Because I've seen enough of the GenAI crowd openly undermine society/the environment/the culture and be brazen about it; violence is a perfectly normal response.
Fascinating, I am not surprised at all.
Even beyond AI, some of the implicit messaging has got to strike a nerve with that kind of crowd.
I don’t think this is satire either, more like a playful rant (as opposed to a formal critique).
“If something is silenced, then that must mean it is right” is a pretty bad argument. There are genuinely good reasons to ban something. Being unnecessarily aggressive can be one.
I work in AI as a software engineer. Many of my peers have PhDs, and have sunk a lot of research into their field. I know probably more than the average techie, but in the grand scheme of things I know fuck all. Hell, if you were to ask the scientists I work with if they "know AI", they'd probably just say "yeah, a little".
Working in AI has exposed me to so much bullshit, whether it’s job offers for obvious scams that’ll never work, or for “visionaries” that work for consultancies that know as little about AI as the next person, but market themselves as AI experts. One guy had the fucking cheek to send me a message on LinkedIn to say “I see you work in AI, I’m hosting a webinar, maybe you’ll learn something”.
Don't get me wrong, there's a lot of cool stuff out there, and some companies are doing some legitimately cool stuff, but the number of actual use-cases where these tools are more than productivity enhancers is low at best. I fully support this guy's efforts to piledrive people, and will gladly lend him my sword.
Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I'm going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, that you're an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)
This aspect of it isn't getting talked about enough. These companies are presenting these things as fully-formed AI, while completely neglecting the people behind the scenes constantly cleaning it up so it doesn't devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there are actual people working round the clock pruning and curating it.
You know, humans, with actual human intelligence, without which these miraculous “artificial intelligence” tools would not work as they seem to.
If the "AI’ needs a human support team to keep it “intelligent”, it’s less AI and more a really fancy kind of puppet.
I feel like some people in this thread are overlooking the tongue-in-cheek nature of this humour post and taking it weirdly personally.
This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.
It blatantly contradicts itself. I would wager good money that you read the headline and didn’t go much further because you assumed it was agreeing with you. Despite the subject matter, this is objectively horribly written. It lacks a cohesive narrative.
I don't think it's supposed to have a cohesive narrative structure (at least not in the sense of a structured, more formal critique). I read the whole thing and it's more like a longer shitpost with a lot of snark.
I read every single word of it, twice, and I was laughing all the way through. I’m sorry you don’t like it, but it seems strange that you immediately assume that I haven’t read it just because I don’t agree with you.
There is literally not a chance that anyone downvoting this actually read it. It's just a bunch of idiots that read the title, like the idea that LLMs suck, and so they downvoted. This paper is absolute nonsense that doesn't even attempt to make a point. I seriously think it is probably AI-generated and just taking the piss out of idiots that love anything they think is anti-AI, whatever that means.
I hate anti-AI mania as much as the next person, but the post is funny and it does have a point.
Using satire to convey a known truth: some already understand it implicitly, some don't want to acknowledge it, some refuse it outright, but when you think about it, we've always known how true it is. It's tongue-in-cheek, but it's necessary in order to show all these AI-washing fuckheads what a gimmick it is to be making sweeping statements about a chatbot that still can't spell lollipop backwards.
I’m not sure this is satire. A lot of hyperbole, but not satire.
This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.
However, that would be indistinguishable from our current reality, which would make it poor satire.
We need AI because it’s convenient to blame for any problems.
That is some good stuff actually. All the haters can focus on non-existent AI and the rest of us can work on improving society while they are distracted. Perfect scapegoat.
You know what, yes, I love this energy and I want more of it. This is how brave people should talk to management, but it's how everyone should talk to AI hucksters.
Hey, we can always say: how can you check if an AI is working, it doesn’t come to the office? 🤔
After reading that entire post, I wish I had used AI to summarize it.
> I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I'm no longer as confident that I know what's going on.
This pull quote feels like it’s antithetical to their entire argument and makes me feel like all they’re doing is whinging about the fact that people who don’t know what they’re talking about have loud voices. Which has always been true and has little to do with AI.
Yeah, this paper is time wasted. It is hilarious that they think that 3 years is a long time as a data scientist, and that this somehow gives them such wisdom. Then, they can't even accurately extract the data from the chart that they posted in the article. On top of all this, like you pointed out, they can't even keep a clear narrative, and they blatantly contradict themselves on their main point. They want to piledrive people who come to the same conclusion as they do. What a strange take.
This gets a vote from me for "Best of the Internet 2024": brilliant pacing, super based, and with precision bluntness. I'm going to pretend the Monero remark is not even there, that's how good it was.
very interesting read, thank you
Is the AI boom the new Blockchain of scams?
I reckon it's more like the iPod touch. It's applying a new idea in an area that is a mismatch for its potential. Eventually the best use for the emerging tech will become apparent and the current form will fall away.
OK, what's the problem with Allen Iverson this time?