Yotsugi
Raja has a different opinion on that. Intel is not getting into the discrete graphics business to win gaming benchmarks.
I never foresaw Larrabee coming to fruition in full capacity anyway, but I do foresee Intel trying to lock in their customers as much as possible ...
Anyone without an x86 license can go pout elsewhere and design their own x86 cores instead of leeching off of either AMD or Intel ...
Intel are not pursuing dedicated graphics just to lose. They will pull out all the stops they can, such as taking the performance crown by introducing a big die and excluding RT and tensor cores to win in benchmarks. If Apple are going to Apple, then Intel are probably going to Intel as well, since neither of them is compelled to make their platforms compatible with 3rd-party devices ...
I am sure that Raja and Intel want to win the gaming market as well. Nvidia, Intel, and AMD (even with Zen) are proving that designs made server-first can still be competitive and win in the consumer market in different ways, and it should be easier to compete since all will be making the same sacrifices. But until proven otherwise, I refuse to believe that Intel is going to chuck out everything compute just to be the third player in the discrete video card business. Are they going to do tensor cores for RT or something like that? I don't know. But I believe Intel is looking for a long-term solution in the markets where Nvidia, and now AMD with the MI series, are really hitting their stride. Those were the same ones they were targeting with Larrabee and its ilk.
Raja has a different opinion on that. Intel is not getting into the discrete graphics business to win gaming benchmarks; Intel wants to own the whole server infrastructure, up and down the stack. The compute graphics business has quickly outpaced everything they have been trying to do since Larrabee for many-small-core compute. They don't want to be pushed out. That's why they are doing it.
Here's hoping that Intel manages to ship anything smaller than 7nm, if they're planning on winning through sheer transistor count...
They are entering the market in 2020 - so 14nm? Intel 7nm isn’t due till 2021.
Could be 10nm. Intel should have a working 10nm node with high yields by then. If they don't, I don't even know what to say.
I don't think so; a GPU die will be too big for useful yields on 10nm unless a miracle occurs. Intel could use P1273, their SoC process, to get high density and lower power per mm^2, and build a large die for the top of their stack. I expect that Intel is using its 4,500 GFX engineers to design a follow-on 7nm GPU stack pretty quickly (two hardware teams).
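To put rough numbers on the yield concern, here is a quick sketch using the textbook Poisson yield model, Y = exp(-D0 * A). The defect densities and die sizes below are purely illustrative assumptions, not anyone's published figures.

```python
import math

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Textbook Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

# Assumed defect densities: a still-maturing node vs. a mature one.
immature_d0 = 0.5   # defects per cm^2 (assumption)
mature_d0 = 0.1     # defects per cm^2 (assumption)

dies = [("~150 mm^2 mainstream die", 150.0),
        ("~750 mm^2 big GPU die", 750.0)]

for name, area_mm2 in dies:
    print(f"{name}: {poisson_yield(area_mm2, immature_d0):.0%} yield at D0={immature_d0}, "
          f"{poisson_yield(area_mm2, mature_d0):.0%} at D0={mature_d0}")
```

With those assumed numbers the small die comes out around 47% on the immature node while the big die lands near 2%, so the large part gets hammered far harder by the same defect density. That is exactly why a ~750 mm^2 GPU on a node that isn't fully mature is such a gamble.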
Well, I think GFX will need to go dual-die to get high-end GPUs at 3nm. $1.5B for a TU102-size GPU will be unprofitable for every market segment aside from compute/ML, so it's really a problem for NV. If AMD sticks with smaller dies (sub-$750M) then they don't have a problem, aside from increasing prices. The spoiler will be Intel, who will likely 'spend' their way into the market.
True, but what we're >implying here is applying the MCM concept to graphics, which is beyond tricky. We just don't know the actual design costs for something this big at 3nm.
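Even taking the $1.5B at face value, the per-unit math is what squeezes gaming out. A back-of-the-envelope sketch, where the unit volumes (and the $1.5B itself) are purely illustrative assumptions:

```python
# Amortizing an assumed $1.5B design cost over assumed lifetime unit volumes.
# Every figure here is an illustrative assumption, not a reported number.
design_cost = 1.5e9  # assumed one-time design cost for a TU102-class die

segments = {
    "gaming flagship (~$1,200 ASP)": 2_000_000,       # assumed lifetime volume
    "compute/ML accelerator (~$9,000 ASP)": 300_000,  # assumed lifetime volume
}

for segment, units in segments.items():
    per_unit = design_cost / units
    print(f"{segment}: ~${per_unit:,.0f} of design cost baked into every board")
```

Roughly $750 of design cost on a $1,200 card, before a single wafer is paid for, is hard to make work; $5,000 on a $9,000 accelerator is survivable. Reusing smaller dies across segments (the MCM route) is one way to spread that cost, but as noted above, nobody has shown it working for graphics yet.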
It's too bad that they don't have any competent hardware or a compute stack to deploy yet as server solutions. Intel needs to win gaming benchmarks to be able to court the professionals among the general public to invest in their platform, and I mean a platform for development. Who's going to use Intel graphics for deployment when the professionals don't even want to develop on Intel graphics? There are more developers working on GPU compute acceleration for AMD than for Intel, which still doesn't offer a single-source C++ compute API ...
CUDA wasn't even originally designed for servers, since its original release didn't feature multi-node support. It was initially intended for DCC applications from Adobe or distributed computing like Folding@home. ROCm had a pretty similar humble beginning as well, since its predecessor, HSA, only intended GPU compute acceleration for daily productivity applications, and it wasn't meant for machine learning either ...
It's more important to have a development platform than a deployment platform at the beginning. People deploy on x86 devices because they build for x86 as well, and a similar principle applies to CUDA devices. Intel's integrated GPUs won't do them any good because nobody is interested in porting their CUDA-accelerated applications and developing them on integrated graphics. Intel must fix this by offering some compelling gaming benchmarks so the professionals can judge whether it's worth starting to add support for Intel GPU compute acceleration ...
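You can see the lock-in in the first few lines of almost any GPU-accelerated project today. PyTorch is just one example, and the snippet below is only a sketch of the usual pattern, but the GPU path is literally spelled "cuda":

```python
# Typical device-selection boilerplate in a PyTorch project (illustrative).
# The only GPU target a developer can reach for here is CUDA; there is no
# Intel GPU backend to point this at instead.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)   # toy model, purely for illustration
batch = torch.randn(32, 128, device=device)
print(model(batch).shape, "on", device)
```

Until Intel ships discrete hardware developers actually want on their desks, plus a backend to point code like this at, none of it has any reason to change, which is exactly the chicken-and-egg problem described above.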
TSMC 3nm or Intel 3nm?
That also really assumes we'll ever reach 3nm!
AMD had strong growth; Nvidia actually had a decent loss.
Hogwash. NVIDIA's data center revenue is several times bigger than AMD's RTG revenue altogether. NVIDIA reported $679 million for their Q4 data center alone; that's 2/3 of AMD's entire CPU + Graphics division revenue, which came in at $986M.
https://www.anandtech.com/show/13965/nvidia-earnings-report-q4-2019-crypto-pain-but-full-year-gain
https://www.anandtech.com/show/13917/amd-earnings-report-q4-fy-2018
CUDA predated OpenCL by two years. Apple was the original author of OpenCL, not AMD.
No, no it hasn't, and claiming it has doesn't make it so. A percent or two gain in marketshare due to the mining boom doesn't represent "significant" improvement for AMD. I really do hope that AMD gets their act together on the GPU side of things like they have with their CPUs, but they're still likely many many years out from competing with NVIDIA in a way that increases marketshare. I fully expect to see 3000 series NVIDIA cards bring NVIDIA back with a vengeance due to the poor reception/sales of the 2000 series. If AMD hopes to compete they need a completely new architecture and direction on their gaming GPUs.
Foundry (TSMC) 3nm. TSMC will get to 3nm; fewer companies will be able to afford it initially - well, unless Apple balks.
Look at the yoy results, NV was up 50% in datacenter.
AMD are doing great in the professional GPU segment; they've had strong growth throughout the year. Nvidia has been posting decrease after decrease, yet AMD has been posting increase after increase.
They have the 7nm process advantage over Nvidia and they've been developing Navi for 3 years. The RX 400/500 series has been very competitive with Nvidia performance-wise and price-wise; in fact, I'd say the RX 570 and 580 have generally been the better buy for a long time. Vega 56, when it came out, was 100% the better buy over the GTX 1070.
Nvidia's next gen is likely very late Q4 2019 or, most likely, Q1 2020. So AMD has a good 9 months of process node advantage.
I didn't say AMD wrote it. I said AMD was doing work with it. They had partnered with Apple early on, and AMD had been working with several people on a decent API to use; OpenCL was widely known before its first release (including a pre-launch ahead of the 2008 launch). But you have to remember that heterogeneous processing, and using graphics hardware for compute as well as 3D, were the reasons AMD purchased ATI to begin with. Nvidia was working on OpenCL as well, a bit in a Rambus-like setting: they always planned on pushing CUDA and locking everyone else out, but they still wanted to make sure their stuff worked in OpenCL and got to be part of the decision-making process. The problem is the same as with every open standard: CUDA came together as a finished product really quickly because it was all Nvidia, while OpenCL took longer because a committee was working through its features and requirements.
AMD doesn't break down graphics sales separately. They report a combined Computing and Graphics segment, which has been driven up by Ryzen sales.
Nvidia is a money cannon
I believe I understand what you mean, but this is a metaphor I haven't seen before. Money machine?
Maybe.
They have nowhere else to go; if they don't innovate they will grind to a halt. Also, Intel will keep pressing on as well. The pace will likely slow down for both as more exotic materials and geometries extend the timeline for tool, metrology and process development. Smarter, more powerful software and hardware will need to be developed to control costs and keep design timelines from exploding. It's all just engineering, really hard engineering, but still engineering (well, and physics and chemistry). The biggest challenge is cost, which can be dealt with by amortizing over a longer period of time than older process nodes (as we've already begun to see). It's not a matter of if, but only a matter of when.
That's kind of a perverse way to look at it. Intel needs to get into the compute-graphics server space, so their answer is to start from the ground up developing graphics with little to no compute capability?
CUDA not designed for servers? It was an answer to AMD's work in OpenCL and was/is/will forever be an API and hardware tool set to offer compute capability on their hardware. Now, waaaay back in the day when CUDA was first created, graphics cards didn't have enough oomph to be a viable server hardware choice, and the markets they excel in now didn't really exist back then. It was more of a CPU accelerator than a full-blown computing solution. But that's how these things always start. It was always a professional tech meant for compute.
The last part is nonsense. This has nothing to do with integrated graphics. Yes, they brought Raja in for discrete graphics. Yes, they are going to try to be competitive in discrete graphics. But this isn't happening at the cost of compute. Compute is the key to the whole venture.