This is an article about a tweet with a screenshot of an LLM prompt and response. This is rock fucking bottom content generation. Look I can do this too:
Headline: ChatGPT criticizes OpenAI
I would love to read an actually serious treatment of this issue and not 4 paragraphs that just say the headline but with more words.
I know that this kind of actually critical perspective isn’t the point of this article, but software always reflects the ideology of the power structure in which it was built. I actually covered something very similar in my most recent post, where I applied Philip Agre’s analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over the STS literature, or just throughout Agre’s work, which really ought to be required reading for anyone in software.
edit to add some recommendations: If you think of yourself as a tech person, and don’t necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own “critical awakening.”
As an AI practitioner already well immersed in the literature, I had incorporated the field’s taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial – except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own – that it is characteristic of AI in general (and, no doubt, other technical fields as well).
I’ve already posted this here, but it’s just perennially relevant: The Anti-Labor Propaganda Masquerading as Science.
“The workplace isn’t for politics” says company that exerts coercive political power to expel its (ex-)workers for disagreeing.
The purpose of a system is what it does
According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances.
The AI is “supposed” to identify targets, but in reality, the system’s purpose is to justify indiscriminate murder.
Vermont has several towns with as few as a thousand people that have fiber internet thanks to municipal cooperatives like ECFiber. Much of the state is a connectivity wasteland, but it’s really cool to see some towns working together to sort it out.
I’m suspicious of this concept of editorial independence. I think it’s a smoke screen that lets companies have their cake and eat it too. As far as I’m concerned, whoever cashes the checks also gets the blame: either ownership means something, in which case the concept exists to obfuscate that, or it doesn’t, in which case why is Nature buying up other journals?
I cannot handle the fucking irony of that article being in Nature, one of the organizations most responsible for fucking it up in the first place. Nature is a peer-reviewed journal that charges people thousands upon thousands of dollars to publish (that’s right, charges, not pays), asks peer reviewers to volunteer their time, and then charges the very institutions that produced the knowledge exorbitant rents to access it. It’s all upside. Because they’re the most prestigious journal (or maybe one of two or three), they can charge rent on that prestige, then leverage it to buy and start other subsidiary journals. Now they have this beast of an academic publishing empire that is a complete fucking mess.
My two cents, but the problem here isn’t that the images are too woke. It’s that the images are a perfect metaphor for corporate DEI initiatives in general. Corporations like Google are literally unjust power structures, and when they do DEI, they update the aesthetics of the corporation such that they can get credit for being inclusive but without addressing the problem itself. Why would they when, in a very real way, they themselves are the problem?
These models are trained on past data and will therefore replicate its injustices. This is a core structural problem. Google is trying to profit off generative AI while not getting blamed for these baked-in problems by updating the aesthetics. The results are predictably fucking stupid.
I’ve posted this here before, but this phenomenon isn’t unique to dating apps, though dating apps are a particularly good example. The problem is that capitalism uses computers backwards.
Maybe this is a hot take, but it’s really unfortunate that only the unhinged conservative lunatics are willing to have this discussion. I actually think that it’d be really healthy in a democracy to come together and exercise some agency over how we allow tech companies to access our children, if at all. But American liberals seem committed to some very broken notions of technocratic progress paired with free speech, while American conservatives are happy to throw all that away in order to have total control over their children, arriving closer to the right place for very dangerous reasons.
When you’re creating something new, production is research. We can’t expect Dr. Frankenstein to be unbiased, but that doesn’t mean he doesn’t have insights worth knowing.
Yes and no. It’s the same word, but it’s a different thing. I do R&D for a living. When you’re doing R&D and you want to communicate your results, you write something like a whitepaper or a report, not a journal article. It’s not a perfect distinction, and there are some real places where there’s bleed-through, but this thing where companies have decided that their employees are just regular scientists publishing their internal research on arXiv is an abuse of that service.
LLMs are pretty new; how many experts even exist outside of the industry?
… a lot, actually? I happen to be married to one. Her lab is at a university, where there are many other people who are also experts.
“… AI systems in the future, since it helps us understand how difficult they might be to deal with,” lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.
The media needs to stop falling for this. This is a “pre-print,” aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do “research” on their own product. It doesn’t matter what the conclusion is, whether it’s very cool and going to save us or very scary and we should all be afraid, so long as it’s attention-grabbing.
If the media wants to report on it, fine, but don’t legitimize it by pretending that it’s “researchers” when it’s the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.
When the writer Ryan Broderick joined Substack in 2020, it felt, he told me, like an “oasis.” The email-newsletter platform gave him a direct line to his readers.
Everyone is going to be so pumped when they learn about websites. The media has reported on Substack this way since it began, and it’s so fucking stupid. It’s a website with an email list as a service. Substack is nothing.
It’s not a solution, but as a mitigation, I’m trying to push the idea of an internet right of way into the public consciousness. Here’s the thesis statement from my write-up:
I propose that if a company wants to grow by allowing open access to its services to the public, then that access should create a legal right of way. Any features that were open to users cannot then be closed off so long as the company remains operational. We need an Internet Rights of Way Act, which enforces digital footpaths. Companies shouldn’t be allowed to create little paths into their sites, only to delete them, forcing guests to pay if they wish to maintain access to the networks that they built, the posts that they wrote, or whatever else it is that they were doing there.
As I explain in the link, rights of way already exist for the physical world, so the idea is easily explained even to the less technically inclined, and it gives us a useful legal framework for how digital rights of way should work.
If those same miles had been driven by typical human drivers in the same cities, we would have expected around 13 injury crashes.
I’m going to set aside my distrust of self-reported safety statistics from tech companies for a sec to say two things:
First, I don’t think that’s the right comparison. You need to compare them to taxis.
Second, we need to know how often Waymo’s employees intervene. From the NYT: Cruise employed 1.5 staff members per car, intervening to assist these not-so-self-driving vehicles every 2.5 to 5 miles, making them actually less autonomous than regular cars.
Most journalists are hopelessly addicted to Twitter. Microblogging is already designed to be addictive, but journalists’ entire careers hinge on how much engagement they get, so those little engagement-rewards hit hard. They’re going to keep writing about the platform until they’re forced to quit it, because it’s the main thing that they use to interact with the world. To them, every Twitter change is fucking earth-shattering.
It’s really crazy how much the people who inform the rest of us about the world have had their own reality warped by the platform.
Of course you’d hate LLMs, they know about you!
Headline: LLM slams known pervert