I don’t understand how those two things are distinct.
🎼It’s gonna be Blue~~~ Ski~~e~s for you~ and I~ 🎶
You understand that an argument about the legitimacy of a law is an argument about whether that law should exist rather than whether it does exist, right?
There’s a difference between saying that someone’s “in the right” (which they’re absolutely not) and saying that someone is legally capable of doing what they want (which is debatable).
There’s no such thing as being too based.
IP law is an illegitimate institution, so no they’re absolutely not.
It’s election day, so they’ve probably been relegated to other projects, since they won’t be able to make a difference anymore.
Are you able to drive? Do you understand the difference between tactile controls and touch screens?
As someone who has sometimes been accused of being an AI cultist, I agree that it’s being pursued far too recklessly, but the people I argue with don’t usually give very good arguments about it. Specifically, I kept getting people who argued from the assumption that AI “aren’t real minds” and tried to draw moral reasons not to use it from that. This fails for two reasons: 1. We cannot know whether AI have internal experiences, and 2. A tool being sapient would have more complicated moral dynamics than the alternative. I don’t know how much this helps you, but if you didn’t know before, you know now.
Edit: y’all’re seriously downvoting me for pointing out that a question is unanswerable when it’s been known to be unanswerable for centuries. Read a fucking philosophy book ffs.
Does the author think LLMs are Artificial General Intelligence? Because they’re definitely not.
AGI is, at minimum, capable of taking input and giving output in any domain that a human can, which no generative neural network is currently capable of. For one thing, generative networks are incapable of reciting facts reliably, which immediately disqualifies them.
Are you talking about epistemics in general or alethiology in particular?
Regardless, the deep philosophical concerns aren’t really germane to the practical issue of just getting people to stop falling for obvious misinformation, or of people being wantonly disingenuous to score points in the most consequential game of numbers-go-up.
it’s just expensive
I’m a mathematician who’s been following this stuff for a decade or more. It’s not just expensive. Generative neural networks cannot reliably evaluate truth values; it will take time to research how to improve AI in this respect. This is a known limitation of the technology. Closely controlling the training data would certainly make the information more accurate, but that won’t stop it from hallucinating.
The real answer is that they shouldn’t be trying to answer questions using an LLM, especially because they had a decent algorithm already.
You’re welcome! I’m always happy to learn someone re-evaluated their position in light of new information that I provided. 🙂
It suggests to me that AI
This is a fallacy. Specifically, I think you’re committing the informal fallacy of confusing necessary and sufficient conditions. That is to say, we know that if we can reliably simulate a human brain, then we can make an artificial sophont (this is true by mere definition). However, we have no idea what the minimum hardware requirements are for a sufficiently optimized program that runs a sapient mind. Note: I am setting aside the definition of sapience, because if you ask 2 different people you’ll get 20 different answers.
We shouldn’t take for granted that it’s possible.
I’m pulling from a couple decades of philosophy and conservative estimates of the upper limits of what’s possible as well as some decently-founded plans on how it’s achievable. Suffice it to say, after immersing myself in these discussions for as long as I have I’m pretty thoroughly convinced that AI is not only possible but likely.
The canonical argument goes something like this: if brains are magic, we cannot say if humanlike AI is possible. If brains are not magic, then we know that natural processes can create sapience. Since natural processes can create sapience, it is extraordinarily unlikely that it will prove impossible to create it artificially.
So with our main premise (AI is possible) cogently established, we need to ask the question: “since it’s possible, will it be done, and if not, why?” There are a great many advantages to AI, and while there are many risks, the barrier to entry for making progress is shockingly low. We are talking about the potential to create an artificial god, with all the wonders and dangers that implies. It’s like a nuclear weapon if you didn’t need to source the uranium; everyone wants to have one, and no one wants their enemy to decide what it gets used for. So everyone has an incentive to build it (it’s really useful), and everyone has a very powerful disincentive against forbidding the research (there’s no way to stop everyone who wants to do it, so the only people who’d comply are the people whose AI would probably be friendly anyway). So what possible scenario would mean strong general AI (let alone the simpler things that’d replace everyone’s jobs) never gets developed? The answers range from total societal collapse to extinction, all of which are worse than a bad transition to full automation.
So either AI steals everyone’s job or something worse happens.
Yes, that’s exactly the scenario we need to avoid. Automated gay space communism would be ideal, but social democracy might do in a pinch. A sufficiently well-designed tax system coupled with a robust welfare system should make the transition survivable, but the danger with making that our goal is allowing the private firms enough political power that they can reverse the changes.
Considering that the average person would likely give answers slower and less accurately (most people know exactly 0 programming languages), being correct almost half of the time in seconds is a pretty impressive performance.
I mean, AI eventually will take our jobs, and with any luck it’ll be a good thing when that happens. Just because ChatGPT v3 (or w/e) isn’t up to the task doesn’t mean v12 won’t be.
🤨
.ml don’t be a fascist challenge.
All forms of bigotry are opinions, so yes; there are specific opinions that I get offended by when I see them.
.ml don’t shill for China challenge:
🤷‍♂️ I only use local generators at this point, so I don’t care.