I don’t know why he’s associated with socialism and at this point I’m too afraid to ask.
We had I think six eggs harvested and fertilized, and of those I think two made it to blastocyst, meaning the cells divided as they should by day five. The four that didn’t develop correctly were discarded. Did we commit 4 murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two blastocysts; one came back normal and the other with all manner of horrible genetic abnormalities. We implanted the healthy one and discarded the abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it and can’t destroy it, so what do? What happens after we die?
I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.
Some are, sure. But others have to do with the weight. The most interesting rationale for returning it is that it’s shit as a productivity tool. If you can’t really use it for work, and there aren’t many games on it, then why keep it? At that point it’s just a TV that only you can watch (since it doesn’t support multiple user profiles).
Putting aside the merits of trying to trademark GPT, which, as the examiner says, is a commonly used term for a specific type of AI (there are other open source “GPT” models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini. Microsoft has Copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, Siri, and Bixby.
Then OpenAI is like: ChatGPT. Rolls right off the tongue, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom “GPTs”, which is not only an unfriendly name but also confusing. If I try to use ChatGPT to help me make a GPT, it gets confused and we end up in a “who’s on first” style standoff. I’ve resorted to just forcing ChatGPT to do a web search for “custom GPT” so I don’t have to explain the concept to it each time.
I think it’s intentionally wordy and the opt-out is “on” by default. I am usually instinctively just trying to hit the “off” button as quickly as possible and hitting save so I can get rid of the window, without actually reading anything. I almost certainly would have accidentally opted in to third party tracking.
I fully admit I might just be dumb though.
Google scanned millions of books and made them available online. Courts ruled that was fair use because the purpose and interface didn’t lend itself to actually reading the books in Google books, but just searching them for information. If that is fair use, then I don’t see how training an LLM (which doesn’t retain the exact copy of the training data at least in the vast majority of cases) isn’t fair use. You aren’t going to get an argument from me.
I think most people who will disagree are reflexively anti AI, and that’s fine. But I just haven’t heard a good argument that AI training isn’t fair use.
Before everyone gets ahead of themselves like in the last thread on this: this is not a Musk company. This is a separate startup based on the same (dumb) idea that was later bought by Richard Branson’s Virgin. Its IP is going to the Dubai company that is its biggest investor, so I’m sure they’ll actually build one, with slave labor and all that.
Richard Branson hates public transit? It’s his company that shut down: Virgin Hyperloop One.
I personally remain neutral on this. The issue you point out is definitely a problem, but Threads is just now testing this, so I think it’s too early to tell. Same with the embrace, extend, extinguish concerns. People should be vigilant of the risks, and prepared, but we’re still mostly in wait-and-see land. On the other hand, Threads could be a boon for the fediverse and help make it the main way social media works in five years’ time. We just don’t know yet.
There are just always a lot of “the sky is falling” takes about Threads that I think are overblown and reactionary
Just to be extra controversial, I’m actually coming around on Meta as a company a bit. They absolutely were evil, and I don’t fully trust them, but I think they’ve been trying to clean up their image and move in a better direction. I think Meta is genuinely interested in ActivityPub, and while their intentions are not pure, and are certainly profit driven, I don’t think they have a master plan to destroy the fediverse. I think they see it as in their long-term interest for more people to be on the fediverse, so they can more easily compete with TikTok, X, and whatever comes next without the problems of platform lock-in and account migration. Also, Meta is probably the biggest player in open source LLM development, so they’ve earned some open source brownie points from me, particularly since I think AI is going to be a big thing and open source development is crucial so we don’t end up in a world where two or three companies control the AGI that everyone else depends on. So my opinion of Meta is evolving past the Cambridge Analytica taste that’s been in my mouth for years.
I look forward to reading everyone’s calm and measured reactions
It’s not even that.
California: “Please tell us if you allow nazis or not. We just want you to be transparent.”
Elon: “California is trying to pressure me into banning nazis! If I disclose I’m cool with nazis, people will be mad and they’ll want me to stop. Also, a lot of hate watch groups say I’m letting nazis run free on X, and I’m suing them for defamation for saying that, but if I have to publicly disclose my pro-nazi content moderation policies I’m going to lose those lawsuits and likely have to pay attorneys fees! Not cool California, not cool at all.”
Me, with interest but no technical knowledge, reading your comment:
“which can be as easy as”
:-)
“running syncthing or resilio sync on your NAS”
:-(
I didn’t understand any of those words
I think that’s why the article mentions the lawsuit. Apart from future collection, it appears X is scanning eyes from photos people post on X and retaining that information.
Higher ed, primary ed, and homework were all subcategories ChatGPT classified sessions into, and together, these make up ~10% of all use cases. That’s not enough to account for the ~29% decline in traffic from April/May to July, and thus, I think we can put a nail in the coffin of Theory B.
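To make the arithmetic concrete (a rough back-of-the-envelope sketch using the approximate ~10% and ~29% figures above, so treat the numbers as illustrative): even if every education-related session vanished for the summer, that alone would account for only about a third of the observed drop.

```python
# Back-of-the-envelope check using the article's figures (both approximate,
# assumed here: ~10% of sessions are education-related, ~29% traffic decline
# from April/May to July).
education_share = 0.10   # higher ed + primary ed + homework, combined
traffic_decline = 0.29   # observed drop in traffic

# Worst case for Theory B: every education session vanishes over the summer.
max_explained = education_share
unexplained = traffic_decline - max_explained

print(f"Education explains at most {max_explained:.0%} of traffic")
print(f"Leaving {unexplained:.0%} of the ~29% decline unexplained")
```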
It’s addressed in the article. First, use started to decline in April, before school was out. Second, only 23 percent of prompts were related to education, which includes both homework-type prompts and personal/professional knowledge seeking. Only about 10 percent were strictly homework. So schoolwork isn’t a huge slice of ChatGPT’s use.
Combine that with schools cracking down on kids using ChatGPT (on in-class assignments, tests, etc.), and I don’t think you’re going to see a major bounce back in traffic when school starts. Maybe a little.
I’m starting to think generative AI might be a bit of a fad. Personally I was very excited about it and used ChatGPT, Bing, and Bard all the time. But over time I realized they just weren’t very good: inaccurate answers, bland writing, just not much help to me, a non-programmer. I still use them, but now it’s maybe once a day or less, not all day like I used to. Generative AI seems more like a tool that is helpful in some limited cases, not the major transformation it felt like early in the year. Who knows, maybe they’ll get better and more useful.
Also, not super related, but I saw a statistic the other day that only about a third of the US has even tried ChatGPT. It feels like a huge thing to us tech-nerdy people, but your average person hasn’t bothered to even try it out.
While I appreciate the focus and mission, kind of, I guess, you’re really going to set up shop in a country literally using AI to identify air strike targets and handing the AI the decision over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes
And Israel is pretty authoritarian, given recent actions against their supreme court and banning journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, oh, and the offices of both have been targeted in Gaza). You really think the right-wing Israeli government isn’t going to co-opt your “safe super-AI” for their own purposes?
Oh, and then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity, as determined by like every NGO and international body that exists.
So Ilya is a shit head is my takeaway.