Show us, then: how does this manufacturing prowess come to bear when Intel is falling exponentially further behind in graphics performance with every generation?
Do I really have to explain this?
Oh, wait, you're saying "in graphics performance". Sorry, I thought your graph compared CPU FLOPS vs GPU FLOPS; I misunderstood the graph.
First things first: it's hard to claim Intel is falling behind exponentially when this graph isn't even logarithmic.
Secondly, 2x the FLOPS doesn't translate into a 2x higher frame rate. Your GTX Titan, which shows a nice FLOPS improvement, doesn't show the same increase in actual benchmarks.
Intel claims a 75x gaming improvement since 2006, which outpaces both Moore's Law and your graph.
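As a quick sanity check, here's how that 75x figure compares with plain Moore's-Law-style doubling every two years; note the end year of roughly 2014 is my assumption, not part of Intel's claim:

```python
# Rough sanity check: Moore's-Law-style doubling every two years vs. the 75x claim.
# Assumption (mine, not Intel's): the claim covers roughly 2006 to 2014.
years = 2014 - 2006
moores_law_gain = 2 ** (years / 2)   # doubling every 2 years
print(f"Moore's Law over {years} years: ~{moores_law_gain:.0f}x")
print("Intel's claimed gaming improvement: 75x")
```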
The gap really is closing. Intel's fastest Sandy Bridge IGP, the HD 3000, fared much worse against the dGPU competition of its day than today's IGPs do.
You are participating in the same thread as us, right? The one titled "Linus Torvalds: Discrete GPUs are going away"?
If you don't want to make such a comparison, why even post here? It's the whole point of the thread.
How is that relevant to reality? Or is your whole argument that Intel can compete with an imaginary single-core GPU that doesn't even exist?
I hope my comment above clarified the confusion.
How is Intel going to make anything obsolete if they're exponentially falling further and further behind, despite having a process advantage?
They're not falling behind further, and certainly not exponentially. And if you want to compare FLOPS, I think even a theoretical Gen7 IGP with 72-144 EUs can give you a decent understanding of how Intel will catch up in the coming 1-2 years.
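To put rough numbers on that, here is a back-of-the-envelope peak-FLOPS sketch; the 16 FLOPS per clock per EU and the ~1.15 GHz clock are assumptions based on HD 4000, not anything Intel has announced for a 72-144 EU part:

```python
# Peak FP32 throughput estimate for a hypothetical Gen7-style IGP.
FLOPS_PER_EU_PER_CLOCK = 16   # assumed: 2 issue ports x 4-wide SIMD x FMA (2 ops)
CLOCK_GHZ = 1.15              # assumed: similar to HD 4000's boost clock

for eus in (16, 72, 144):     # HD 4000 today vs. the hypothetical 72 and 144 EU parts
    gflops = eus * FLOPS_PER_EU_PER_CLOCK * CLOCK_GHZ
    print(f"{eus:3d} EUs -> ~{gflops:,.0f} GFLOPS peak")
```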
I already told you. Intel will improve its microarchitecture so that it's no longer far behind, and because its manufacturing lead is expanding, Intel will be able to build much better IGPs than would have been possible without that 2-3 node advantage. Just look at how good or bad GPUs were 2-3 nodes (or 4-6 years) ago.
More theoretical fluff. We heard similar claims for years about Larrabee, and it failed completely.
Why do you have to refer to the situation from years ago? Why is that relevant? Roadmaps change, plans change, targets change, all sorts of things change. If you keep referring to the past, when Intel was much further behind, you will obviously always conclude that Intel can never catch up.
Going by past scaling, in four years' time GPU performance will be up by a factor of about 4x-5x across the board. That shifts Intel's target, because even low-end parts will be that much faster.
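For context, that 4x-5x figure works out to roughly 40-50% compound growth per year, assuming steady scaling; a trivial check:

```python
# Implied annual growth rate behind "4x-5x in four years" (compound).
for target in (4, 5):
    annual = target ** (1 / 4) - 1
    print(f"{target}x over 4 years -> ~{annual * 100:.0f}% per year")
```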
I'm not so sure about that. I already told you that TSMC won't have 10nm until H2'18 or '19, and since Dennard scaling has been dead for a decade, the difference between 28nm and 16FF+ won't be that staggering. Meanwhile, Intel will have 14nm, 10nm, and 7nm with which to catch up before Nvidia even gets a new node.
Again, show us, in practical terms, when Intel will be able to beat today's Titan at graphics processing in a CPU socket made for consumer parts.
A GTX Titan? I think 10nm makes it very likely: 10nm is 5x denser than 28nm, so your massive 550mm² GTX Titan shrinks to about 110mm². Add the CPU and your APU is roughly the size of Ivy Bridge/Haswell. I don't know how high it will be able to clock, but note that Intel will use germanium at 10nm, which could quite dramatically improve power consumption and performance.
Such an APU with a Titan-class IGP would be quite small, so a bigger 14nm IGP (like the ~260mm² GT3 Haswell) might also be able to come close. I don't think a Titan will be low-end in 2016.
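Roughly, the area math behind that argument looks like this; the 5x density figure and the 550mm² die come from the posts above, while the ~60mm² allowance for the CPU side is purely my own guess:

```python
# Die-area sketch for a hypothetical "Titan-class IGP at 10nm" APU.
TITAN_MM2_28NM = 550          # GTX Titan die size cited above
DENSITY_GAIN_28_TO_10 = 5     # claimed 28nm -> 10nm density improvement
CPU_CORES_MM2 = 60            # assumed allowance for 4 CPU cores + uncore

gpu_at_10nm = TITAN_MM2_28NM / DENSITY_GAIN_28_TO_10
apu_total = gpu_at_10nm + CPU_CORES_MM2
print(f"Titan-class GPU at 10nm: ~{gpu_at_10nm:.0f} mm^2")
print(f"Hypothetical APU total:  ~{apu_total:.0f} mm^2 (Ivy Bridge quad-core: ~160 mm^2)")
```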