What's the market share of Linux? In the real world, I mean. People who do work on computers sure don't use Linux.
Linux is massive in servers. Most web servers run Linux, as do most HPC servers.
DDR3 on discrete is kind of irrelevant. The HD 7750 GDDR5 smokes Kaveri: double the PassMark score, double the framerate in games like BioShock Infinite. It will be 2-3 more years before any IGP under $200 outperforms a 7750. There might be some $500 Iris super mega pro that does it next year for $666, but I'm not counting that.
Kaveri was originally meant to support GDDR5M, which would have broken the memory bottleneck and made it faster than a 7750. But Elpida went bankrupt, which scuppered those plans.
Seriously, fast RAM on an APU is far from inconceivable. The PS4 already has GDDR5, and HBM is just around the corner.
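For a sense of scale, here's a quick back-of-envelope on peak memory bandwidth. This is just a sketch with the commonly quoted nominal rates and bus widths, not measured numbers:

```python
# Peak bandwidth (GB/s) = effective transfer rate (GT/s) x bus width (bytes).
# Figures below are nominal/illustrative, not measured.

def peak_bandwidth_gbs(transfer_rate_gts: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return transfer_rate_gts * (bus_width_bits / 8)

configs = {
    "DDR3-2133, dual channel (Kaveri today)": (2.133, 128),
    "GDDR5 @ 4.5 GT/s, 128-bit (HD 7750)": (4.5, 128),
    "HBM gen1 @ 1 GT/s, one 1024-bit stack": (1.0, 1024),
}

for name, (rate, width) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gbs(rate, width):.0f} GB/s")
# -> ~34, ~72 and ~128 GB/s respectively
```

That ~34 GB/s ceiling on dual-channel DDR3 is exactly the bottleneck GDDR5M was supposed to break.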
There will be a GT4, which will have 2.4X as many EUs as Haswell GT3.
Nvidia improved their already cutting-edge architecture by 1.35X per core and 2X per watt, according to their own slides, so Intel could certainly do something similar or better with two iterations.
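Multiplying those claimed factors together gives a rough ceiling for such a part. A minimal sketch, taking Haswell GT3's 40 EUs and the 2.4X and 1.35X figures above at face value:

```python
# Back-of-envelope using the numbers claimed above (not measured data).
haswell_gt3_eus = 40                     # Haswell GT3/GT3e EU count
gt4_eus = round(haswell_gt3_eus * 2.4)   # the claimed 2.4X -> 96 EUs

per_eu_gain = 1.35                       # Maxwell-style per-core improvement
relative_throughput = 2.4 * per_eu_gain  # EU-count gain x per-EU gain

print(f"GT4 EUs: {gt4_eus}")                                      # 96
print(f"Throughput vs Haswell GT3: ~{relative_throughput:.1f}X")  # ~3.2X
```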
Ah, the mythical GT4e dGPU killer. I'll believe that when I see it. Even with 96 EUs (roughly 384 "AMD/Nvidia" shaders) it's probably only going to match or slightly exceed Kaveri's current performance. Even a HD 7750/GDDR5 completely smokes Kaveri. It's also unlikely to come cheap; I'm not paying $300+ (for the CPU alone) for Kaveri-level graphics performance.
Also, what's stopping Nvidia from doing the same 2X per watt with next-generation Pascal? "Big" Maxwell hasn't even launched yet. We can extrapolate from the GM107, but we don't know its performance yet. We also haven't seen what 20nm is good for yet.
dGPU performance isn't stationary; IGPs are chasing a moving target...
No, first up they're chasing the fixed target of being usable for a moderate-spec gaming machine at 1920*1080. Not there yet, of course.
Then you keep iterating, trying to eat away at the low/mid end etc.
(Trying to have an impact before 4k resets things)
Kaveri's iGPU is bandwidth-strangled, but Intel's stuff will have the eDRAM to help out. Like the Xbox One APU!
We have heard these optimistic predictions over and over from both Intel and AMD about the great performance of the next gen of IGPs, but each new generation has been either a marginal improvement or very expensive with limited availability.
I think you are mixing stuff up, so let's get this straight:
Intel released a new architecture and more EUs with Ivy Bridge. It was a good improvement over HD3000 and HD2000. No optimistic predictions.
Intel increased the number of EUs with Haswell to 20 and announced GT3 Iris Pro with 2X as many EUs, so that was obviously a big improvement. However, they decided to limit the number of SKUs and ignore LGA. I personally wasn't expecting anything from Haswell anyway: it would continue to use Gen7, so anything new was a bonus.
This year with Broadwell, you can indeed expect a great improvement in architecture + 4X as many EUs for Atom and 20% more for Core. Again: no optimistic predictions or limited availability.
Next year, Intel will release another architectural revision with Gen9, and the number of EUs may increase by 50%. So the desktop will suddenly go from GT2/20EUs to GT4e/96-144EUs in Q2'15, on top of Gen7.5 -> Gen9.
How do you come up with those numbers? In any case, they're wrong. Iris Pro is already better than any AMD APU.
And GT4 isn't mythical at all. It's a real SKU for LGA. It will blow any current IGP from Intel or AMD away, with a much improved architecture and >2X as many EUs. Think of it as an improvement on the order of something like Fermi/Kepler -> Maxwell, from 2,880 to 6,912 or 10,368 cores.
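For scale, a quick theoretical-GFLOPS comparison. This sketch assumes the usual Gen arithmetic (2 SIMD-4 FMA units per EU = 16 flops/clock) and a guessed 1.0 GHz clock for GT4e:

```python
# Theoretical single-precision GFLOPS. Gen EUs: 2 x SIMD-4 FPUs, each
# FMA-capable -> 16 flops per EU per clock. AMD/Nvidia-style shader
# cores: 2 flops (one FMA) per core per clock.

def gen_gflops(eus: int, clock_ghz: float) -> float:
    return eus * 16 * clock_ghz

def shader_gflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz

print(f"Iris Pro 5200 (40 EU @ 1.3 GHz): ~{gen_gflops(40, 1.3):.0f} GFLOPS")    # ~832
print(f"GT4e (96 EU @ 1.0 GHz, guessed): ~{gen_gflops(96, 1.0):.0f} GFLOPS")    # ~1536
print(f"HD 7750 (512 SP @ 0.8 GHz):      ~{shader_gflops(512, 0.8):.0f} GFLOPS") # ~819
```

Paper flops aren't framerates, of course, but it shows why 96+ EUs would be a different class from today's GT2.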
A moving target isn't a problem. IGPs are moving faster than the moving target. They have steadily been catching up in the last years and that isn't going to change. Instead, it seems that it will accelerate with Intel releasing GT3, GT4, Gen8 and Gen9 in 2015.
Pascal is a 2016 product according to Nvidia's roadmap. The best node that will be available is 20+FF, compared to Intel's 10nm with III-V. Don't forget that Intel is the richest semiconductor company and has the most advanced architecture, so why couldn't they do the same with their graphics architecture?
Actually, you are just reinforcing my point. None of these great "advances" from Broadwell or Skylake can be purchased yet. They are all projections and speculation. Show me an IGP on a sub-$200 CPU that is the equivalent of a GTX 750 Ti, and I will be convinced.
And I am talking about the desktop here, as per my other post. I see a much more competitive position for IGPs in mobile, where you cannot easily add a discrete card and you are much more limited by power and heat constraints. And I suppose you could say SB and IB IGPs were a "great advance" relative to the total-crap IGPs before them, but compared to a discrete card they are very anemic.
Actually, don't the chips in the XB1/PS4 pretty much qualify? OK, not available for sale for putting in our own computer builds, but still a CPU/iGPU.
A shame for current top-end iGPU performance that Broadwell-K has been delayed so long and that Kaveri lost the GDDR5M option.
As for people buying, I imagine they'll be semi-niche for a bit. People specifically after small machines and the like. I'd definitely like one vs my current setup. 4K is tempting in another direction, of course, so goodness knows.
Don't think so.
http://pclab.pl/art56975-4.html
And that is only using system memory against eDRAM; just imagine what will happen when AMD starts using HBM or even TSV-stacked memory.
It must be extremely embarrassing to have a more advanced process, use eDRAM, have a bigger die (+eDRAM), and still lose in performance.
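That result is more striking once you put numbers on each side's bandwidth. A rough sketch with the commonly quoted figures (Intel's ~102 GB/s aggregate for the Crystalwell eDRAM; nominal DDR3 rates):

```python
# Nominal bandwidth on each side of that comparison (illustrative figures).
edram_crystalwell = 102.4       # GB/s aggregate (~50 GB/s each direction)
ddr3_2133_dual = 2 * 2.133 * 8  # GB/s, dual-channel DDR3-2133 -> ~34.1

print(f"Iris Pro eDRAM:        ~{edram_crystalwell:.0f} GB/s")
print(f"Kaveri DDR3-2133 dual: ~{ddr3_2133_dual:.0f} GB/s")
print(f"Ratio: ~{edram_crystalwell / ddr3_2133_dual:.1f}x in Intel's favor")
```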
Don't worry, a certain user has the "counter-argument" that AMD and Nvidia "can't afford" 20nm and will stay on 28nm until 2016 or later.
It is hard to believe iGPs will ever catch up to dGPUs when the former have only sub-100W thermal envelopes shared with CPUs while the latter have up to 500W of juice to stretch their legs.
The rumor is AMD and Nvidia are going to roll back to 40nm to reduce costs...
The idea that people here have is that IGPs will become "good enough" for 99% of the market, like integrated sound cards. That has a decent chance of happening, but the people saying it'll happen in 5 years are crazy.
Five years sounds completely plausible for killing discrete GPUs in mobile devices. Hell... the Iris Pro 5200 in my MacBook Pro is already good enough to handle Civilization V and a few other games without any issue.
On the Windows desktop, I'd give it a decade. For the Linux desktop, we're basically already there... most Linux users aren't gamers, and most of the 3D gaming drivers for Linux suck.