Digging on Concord was funny for longer than its servers were online.
Luddites weren’t against new technology; they were against the aristocrats using new technology as a tool or excuse to oppress and kill the labor class. The problem was not the new technology, the problem was that people were dying of hunger and being laid off in droves. Destroying the machinery, which they themselves almost always operated in those aristocrats’ factories, was an act of protest, just like a riot or a strike. It was a form of collective bargaining.
You assume most stock investors read beyond the headline; you assume wrong.
Well, you see, that’s the really hard part of LLMs. Getting good results is a direct function of the size of the model: the bigger the model, the more effective it can be at its task. However, there’s something called the compute-efficient frontier (there’s a technical but neatly explained video about it). Basically, for any given size you can’t push a model’s effectiveness past that boundary. The only ways to make a model better are to make it larger (what most megacorps have been doing) or to radically change the algorithms and methods underlying the model. But the latter has been proving extraordinarily hard, mostly because understanding what is going on inside the model requires thinking in rather abstract and esoteric mathematical principles that bend your mind backwards.

You can compress an already trained model to run on smaller hardware, but to train one you still need humongously large datasets and power-hungry processing. This is compounded by the fact that larger and larger models get ever more expensive while providing rapidly diminishing returns. Oh, and we are quickly running out of quality usable data, so shoveling in more data past a certain point actually starts to give worse results, unless you dedicate thousands of hours of human labor to producing, collecting, and cleaning new data. And that’s all before you address data poisoning, where previously LLM-generated data is fed back in to train a model; it’s very hard to keep that from devolving into incoherence after a couple of generations.
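To make the scaling point concrete, here’s a minimal sketch of the compute-efficient-frontier idea, assuming a Chinchilla-style loss fit L(N, D) = E + A/N^α + B/D^β and a fixed compute budget C ≈ 6·N·D. The constants below are roughly the published Hoffmann et al. (2022) fits and are used purely as illustrative assumptions:

```python
# Toy sketch of the compute-efficient frontier, assuming the Chinchilla-style
# loss fit L(N, D) = E + A / N**alpha + B / D**beta and compute C ~ 6 * N * D.
# Constants are approximately the Hoffmann et al. (2022) fits (illustrative
# assumptions, not exact values).
import numpy as np

E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def loss(n_params, n_tokens):
    """Predicted pretraining loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def best_on_budget(compute_flops):
    """Scan model sizes at a fixed compute budget and return the point on the
    frontier (N, D, loss) with the lowest predicted loss."""
    n_grid = np.logspace(7, 13, 2000)        # 10M .. 10T parameters
    d_grid = compute_flops / (6.0 * n_grid)  # tokens the budget allows per size
    losses = loss(n_grid, d_grid)
    i = int(losses.argmin())
    return n_grid[i], d_grid[i], losses[i]

for c in (1e21, 1e23, 1e25):
    n, d, l = best_on_budget(c)
    print(f"C={c:.0e} FLOPs -> N~{n:.2e} params, D~{d:.2e} tokens, loss~{l:.3f}")
```

Each 100× jump in compute buys only a modest further drop in predicted loss, and it gets there by growing both the parameter count and the dataset, which is the diminishing-returns part of the argument above.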
Replacing garbage with sewer water. Not exactly an improvement.
Some Linux installers will refuse to erase the BitLocker partition automatically. Then you have to erase it manually before running the installer.
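If you’re not sure which partition is the BitLocker one, a read-only check like this minimal sketch can help you spot it before you wipe anything (it assumes a util-linux recent enough for JSON output and BitLocker signature probing):

```python
# Minimal sketch: list partitions that lsblk identifies as BitLocker, so you
# know what to erase before running the installer. Assumes a reasonably recent
# util-linux (JSON output via -J, BitLocker detection in libblkid).
import json
import subprocess

out = subprocess.run(
    ["lsblk", "-J", "-o", "NAME,FSTYPE,SIZE"],
    capture_output=True, text=True, check=True,
).stdout

def walk(devices):
    for dev in devices:
        if (dev.get("fstype") or "").lower() == "bitlocker":
            print(f"/dev/{dev['name']} ({dev['size']}) looks BitLocker-encrypted")
        walk(dev.get("children") or [])

walk(json.loads(out)["blockdevices"])
```

The script only reads; the actual wiping is then done with whatever partitioning tool you prefer.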
True that, the tri-core PowerPC is a uniquely challenging mess. But underneath, it’s still the same processor.
My biggest fear is that, so far, Nvidia has a track record of introducing regressions and new bugs with every new driver version. Just a week ago all my Flatpaks stopped working on Wayland, again. It happens with almost every single update. Some games that are native or Platinum-rated randomly stop working, and it takes several updates before they work again. Meanwhile on AMD everything just works all the time, and regressions are fixed in a day, not weeks. It’s just annoying.
If gaming is the top priority, go all AMD and disregard Nvidia. AMD has extraordinary Linux support, and if it runs on the Steam Deck it will run on any all-AMD machine.
Always remember to disable Secure Boot and remove BitLocker before installing Linux on an OEM Windows machine. They make it hell to remove that malware from newer machines.
I know it’s hard to believe, but the GameCube, Wii, and Wii U are the same machine: same architecture and family of processors (IBM’s PowerPC). That’s why the Wii U is just a Wii with a beefier CPU (three Wii cores slapped together) and a newer, more powerful GPU stuck onto the side. That’s why a single emulator can target all three consoles. The Switch 2 will just be a newer version of the Tegra chip.
Have you ever sat in front of a casino’s slot machine? They’re also trash, awful and disgusting. But they’re also engineered with the worst dark-pattern psychology to manipulate any human being who sits at one into playing more, and they’re so addictive that people will burn their money just to keep going. Fun and addictive are independent qualities: a game can be very addictive and really bad at the same time. Unlike slot machines, these games have the added advantage of constantly sitting in your pocket and going with you everywhere.
That’s a poor understanding of the situation. Nothing in the licensing changed. The SDK has always been the proprietary business-to-business secrets-management product. The client integrates with and can use that SDK to provide the paid service to businesses. The client and the server-side password management have always been, and still are, FOSS.
This was apparently an accidental change in the build code (not the client code, just the build scripts) that required including the SDK to build the client, even though the client never has needed and doesn’t need any of that code. It prevented building the client without accepting the SDK license, which it shouldn’t.
This was fixed, and safeguards will be put in place so it doesn’t happen again. Nothing in the licensing scheme changed at all. This is not a catastrophic enshittification event; a dev was just being lazy and forgot to check the dependencies in the build chain before their commit.
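Purely as a hypothetical sketch of what such a safeguard could look like (the package name and manifest layout here are assumptions, not Bitwarden’s actual build setup), a CI step could simply refuse to build the FOSS client if the proprietary SDK shows up in its dependency manifest:

```python
# Hypothetical CI guard: fail the FOSS client build if a proprietary SDK
# package appears in its dependency manifest. The package name below is an
# assumption for illustration only.
import json
import sys

FORBIDDEN = {"@bitwarden/sdk-internal"}  # assumed proprietary package name

with open("package.json", encoding="utf-8") as f:
    manifest = json.load(f)

deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
hits = sorted(FORBIDDEN & deps.keys())

if hits:
    print(f"Proprietary SDK dependency found in FOSS build: {', '.join(hits)}")
    sys.exit(1)
print("No proprietary SDK dependencies in the client manifest.")
```

Something that simple turns “a dev forgot to check the build chain” into a failing pipeline instead of a licensing scare.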
“I don’t believe it…I’m on the cover of the movie!”
They would be cease-and-desisted out of existence. There’s a reason no one on the scene right now discloses their methods and streaming piracy is a closely guarded secret. I’m sure it is perfectly possible, as that is how most piracy occurs nowadays. But it is extremely technical, and doing it wrong most likely risks exposing the person behind it.
I’m aware of this. But no corporation will ever let anyone get even close to releasing a consumer product like TiVo used to be.
That’s the problem. They already wised up, and HDMI, the proprietary standard they forced everyone to switch to for HD and above, has built-in DRM. Most smart TVs have DRM built in as well.
Didn’t they already have a paid tier with ads?
Most likely, as with all AI-as-a-service startups. After a certain mass of users the models can’t keep up, so to reduce response times they pay offshore firms to have real people answer the chats. Unfortunately, doctors willing to answer a chat all day are far less numerous than cheap labor.