Looks paywalled or something, can anyone provide a TL;DR?
It may be no different from using Google as the default search engine in Safari, assuming I get an opt-out. If it’s used for Siri interactions, then it gets extremely tricky to verify that your interactions aren’t being used to inform ads and/or train an LLM. Opting out there is probably much harder than changing your default search engine.
LLMs do not need terabytes of RAM. Heck, you can run quantized 7-billion-parameter models on 16 GB or less (BLOOM, Falcon-7B; Falcon outperforms some models with larger memory footprints, by the way, so there’s room here for optimization). While not quite as good as OpenAI’s offerings, they’re still quite good. There are Android phones with 24 GB of RAM, so it’s quite possible for Apple to release an iPhone Pro with that much and run a model the same way you’d run any large language model on an M1 or M2 Mac. Hell, you could probably fit an inference-only model in less. Performance wouldn’t be blazing, but depending on the task, it could absolutely be sufficient. With Apple’s MLX and Ferret coming online, it’s totally possible that you could, basically today, have a reasonable LLM running on an iPhone 15 Pro. People run OpenHermes 7B, for example, which takes ~4.4 GB to run, without those frameworks. Battery life does take a major hit, but to be honest I’m at a loss for what I need an LLM on my phone for anyways.
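To make that concrete, here’s a minimal sketch of running a 4-bit quantized OpenHermes 7B locally with llama-cpp-python. The GGUF filename and quantization level are my assumptions for illustration, not anything from the article:

```python
# Minimal sketch: run a 4-bit quantized 7B model on a machine with ~8 GB free RAM.
# Assumes llama-cpp-python is installed and you've downloaded a GGUF quant of
# OpenHermes (hypothetical filename below); a Q4 quant is roughly 4-4.5 GB.
from llama_cpp import Llama

llm = Llama(
    model_path="./openhermes-2.5-mistral-7b.Q4_K_M.gguf",  # assumed local file
    n_ctx=2048,  # modest context window keeps the KV cache small
)

out = llm("Explain on-device LLM inference in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```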
Regardless, I want a local LLM or none at all.
This is a really bad look. It will probably be an opt-in feature, and maybe Apple negotiates for a model from Google that they host on-premises and that doesn’t send any data back, but it’s getting very hard for Apple to claim privacy and protection here (and not that they do a particularly good job of that unless you block all their telemetry).
If an LLM is gonna be on a phone, it needs to be local. Local is really hard because the models are huge (even with quantization and other tricks), so this seems incredibly unlikely. Then it’s just “who do you trust to sell your data for ads more, Apple or Google?” To which I say neither, and pray Linux phones take off (yes, yes, I know you can root an Android and de-Google it, but still).
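For a rough sense of “huge”: weight memory is roughly parameter count times bytes per parameter, so even aggressive quantization leaves a 7B model at a few GB before you count the KV cache and activations. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope weight memory for a 7B-parameter model (weights only;
# the KV cache and activations add more on top of these figures).
params = 7e9
for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{name}: ~{params * bytes_per_param / 1024**3:.1f} GB")
# fp16: ~13.0 GB    int8: ~6.5 GB    4-bit: ~3.3 GB
```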
This should actually work against them. It would be more like “See, we’re not interested in competing, we’d rather maintain monopolies and cartel it up!”
I never equated LLMs with intelligence. And indexing the data is not the same as reproducing the webpage or the content on it. To get beyond the small snippet that matched your query when you search, you have to follow a link to the source material. Of course Google doesn’t like this, so they did that stupid AMP thing, which has its own issues (and I disagree with AMP as a general rule as well). So, LLMs can look at the data; I just don’t think they can reproduce that data without attribution (or payment to the original creator). Perplexity.ai is a little better in this regard because it links back to sources and is attempting to be a search-engine-like entity. But OpenAI is not, in almost all cases.
Sora is a generative video model, not exactly a large language model.
But to answer your question: if all LLMs did was redirect you to where the content was hosted, then they would be search engines. But instead they reproduce what someone else was hosting, which may include copyrighted material. So they’re fundamentally different from a simple search engine. They don’t direct you to the source; they reproduce a facsimile of the source material without acknowledging or directing you to it. Sora is similar: it produces video content, but it doesn’t point you to the existing video content it’s reproducing from. And we can argue about how close something needs to be to an existing artwork to count as a reproduction, but I think for AI models we should enforce citation requirements.
This is such a bad look for Apple. Like, we get it, you’re a trillion-dollar dragon sitting on your hoard. But like, dude, innovate instead of sitting there. You shouldn’t be afraid of sideloading. Steam has shown that if your experience is the best, you can do fine. Apple just realizes they wouldn’t have the best App Store experience and would lose revenue. Tough shit.
Eh, I’d personally just get a more ergonomic vertical mouse, wired if you need the absolute lowest latency. They’re much easier to come by, and they’re cheap. I used an Evoluent (I think) wireless mouse for Dota 2 back when I played ranked years ago, and I found it more than capable for climbing the ranks. I also have a Logitech gaming mouse for comparison.
Vertical mice are easy to find on Amazon or anywhere, really. Logitech makes one (it’s okay), and a few other companies do as well.
Alternatively, if you’re just looking for more ergonomics, you could try an ultralight mouse with a good gliding wrist rest.
Also, I’m planning on purchasing some more Hyper PLA to see if I can observe the same buzzing with that filament. I can’t imagine the filament type would cause the buzzing, but strange things do happen. I’ll report back here with any findings, but if anyone else has thoughts on this buzzing in the meantime, I’m very keen to hear them!
Thanks! Looks like they don’t specify any fine amounts, just that a fine is probably coming and could be levied before the leadership change in the EU body that issues the fines.