BenchPress
Senior member
- Nov 8, 2011
"As I just posted, Intel has access to Nvidia's patents as part of that agreement. So are they taking advantage of it? Not so far."

You seem to be under the impression that there's much left to improve on the hardware side. That's not the case. Integrated graphics is limited by bandwidth. Just have a look at how tiny the 16 GPU EUs are on Ivy Bridge. They can easily increase that number and beat AMD in theoretical GFLOPS. No NVIDIA technology required. But the only way Haswell GT3 can have 40 EUs is by adding a big chunk of eDRAM to provide additional bandwidth for frequently used data. Again, I doubt any of NVIDIA's patents were required for that.
There's room for improvement on the driver side, and they've made good progress over the last few years, but there's no need for them to increase their investment in it. They are doing incredibly well with what they have, and that's because the average consumer only plays games casually.
"Again on the question of why Intel doesn't go out and make a GCN clone, it is not so easy. You need hundreds of engineers with the proper experience, a strategic challenge that takes years."

It's not easy, but it would be easy enough for a company like Intel. But again, hardware is not the problem here. Using a GCN clone would not end up making a difference to their bottom line.

"... x86 into everything because they have exclusive rights to it. Intel wanted to push x86 into graphics because that would help extend their virtual monopoly. Ill conceived, yes, but that was a major motivating force for them."
You're right about x86 into everything though. And they haven't abandoned that idea. Larrabee got cancelled, but much of that technology went straight into AVX2. Aside from being twice as wide, LRBni has a lot of overlap with AVX2. Larrabee also evolved into Xeon Phi, where the encoding format changed to MVEX, which is very similar to the VEX encoding used for AVX. That can't be a coincidence.
With AVX-512, the CPU cores' computing density would be equivalent to that of a GPU. And unlike a GPU, you don't need an API and layers of drivers to make use of it. It's highly generic computing power available to any developer. So Intel doesn't need hundreds of highly skilled software engineers in-house to implement drivers for all these different APIs before application developers can do something with their hardware. Application developers themselves can use AVX directly by simply flipping a compiler switch.
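To make that concrete, here's a quick sketch (my own example, not something from Intel): an ordinary C loop that GCC or Clang will auto-vectorize to AVX2 when you build with something like "gcc -O3 -mavx2". No graphics API, no driver stack, just a compiler switch.

[CODE]
/* Plain C; at -O3 -mavx2 the compiler turns the loop into 256-bit AVX2 code. */
#include <stddef.h>
#include <stdio.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    /* Independent iterations: a textbook candidate for 8-wide vectorization. */
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[16], y[16];
    for (int i = 0; i < 16; ++i) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy(y, x, 2.0f, 16);
    printf("y[15] = %f\n", y[15]); /* 2*15 + 1 = 31 */
    return 0;
}
[/CODE]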
Unified computing also benefits from the vibrant software eco-system. Developers can create x86 libraries and frameworks that use AVX2+ and sell or share them with other developers. In contrast, anything developed to run on the GPU is much less likely to be directly interchangeable between projects.
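And those building blocks are easy to package and reuse. Here's a minimal sketch (again my own example, not anyone's actual library) of the kind of routine you could drop into any x86 project: it uses the AVX intrinsics from immintrin.h directly, builds with -mavx, and needs no driver, runtime, or kernel-launch machinery.

[CODE]
/* Reusable AVX routine: sums a float array 8 lanes at a time.
 * For brevity it assumes n is a multiple of 8. Build with -mavx (or -mavx2). */
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

float sum_f32(const float *data, size_t n)
{
    __m256 acc = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 8)          /* 8 floats per 256-bit register */
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(data + i));

    /* Horizontal reduction of the 8 lanes down to a single float. */
    __m128 lo = _mm256_castps256_ps128(acc);
    __m128 hi = _mm256_extractf128_ps(acc, 1);
    __m128 s  = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}

int main(void)
{
    float v[16];
    for (int i = 0; i < 16; ++i) v[i] = 1.0f;
    printf("%f\n", sum_f32(v, 16));            /* 16.000000 */
    return 0;
}
[/CODE]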
