Yeah true, plus I bought my A770 at pretty much half price during the whole driver-issues period, so I eventually got a 3070-performing card for like $250, which is an insane deal for me, but no way Intel made anything on it after all the R&D and production costs
The main reason Intel can’t compete is that CUDA is both proprietary and the industry standard: if you want to use a CUDA-based library you have to port it yourself, which is pretty inconvenient, and no datacentre is going to go for that
Funnily enough this is actually changing because of the AI boom. Would-be buyers can’t get Nvidia AI cards, so they’re buying AMD and Intel and reworking their stacks as needed. It helps that there are also translation layers available now which convert CUDA and other vendor-specific code to the open standards supported by Intel and AMD
AFAIK the AMD stack is open source, I’d hoped they’d collaborate on that.
I think Intel supports it (or at least has a translation layer), but there’s no motivation for Nvidia to standardise on something open source, as the status quo works pretty well for them