Intel Iris Pro 6200 is something else


SPBHM

Diamond Member
Sep 12, 2012
5,065
418
126
Don't think so, performance is like an R7 250X at low res/settings.

Currently $90-100 gets you an R7 260X or GTX 750

You are right, it's probably a lot slower, and with decent settings the 250X (7770) is probably also significantly better for most games. That GTA V graph is a bad reference for this comparison; it's a CPU-limited game, and the VGAs are paired with a massively slower CPU.

Still, OC results for the IGP could be interesting; the HD 4600 had good potential if you ignored the power usage increase.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Skylake's iGP, and particularly GT4e, should leave Broadwell's iGP behind, though.
 
Last edited:

Lorne

Senior member
Feb 5, 2001
873
1
76
They seriously need proofreading. Besides all the other errors pointed out in the comments, their test hardware section lists an "MSI K9A2 Plat" that was not used in any of those tests, because it's an AM2+ DDR2 MB.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
Except for the part where Pascal will also be using HBM and Arctic Islands will be AMD's second iteration of HBM. Oh, and both Nvidia and AMD will be moving to smaller process nodes. Given AMD and Nvidia have quite a bit more experience in the graphics department than Intel, they don't have anything to worry about.

If Zen is any good then AMD will be back in the game with their APUs and will no doubt be using HBM. Don't forget AMD is partners with Hynix and has exclusivity with HBM; that's why Nvidia can't use it this year.
Yeah, I know it.

Also, nVIDIA is improving their CPUs, which are beasts, and when they move to 14 nm they might eventually bite into the big-core market.

Intel won't allow this, and they are going to chase them hard until both die (along with VIA).
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
When Intel uses HBM, Pascal and Arctic Islands will be DOA. Intel won. Everyone else lost.

Not joking, Intel only needs HBM to get their memory problems solved. They already have a decent GPU.

Iris Pro 6200 is a little less powerful than a GTX 750. That is about 1/4 as powerful as a GTX 980 - let alone the Titan X. This is despite the fact that Broadwell is on Intel's newest node while the Maxwell cards are on the ancient 28nm process.

Intel isn't even close to displacing discrete graphics - HBM or no. They need a lot more raw power to do that. And at any specific lithography, a discrete GPU is going to outperform an integrated GPU, simply because the discrete unit can be much bigger and have a significantly higher TDP.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Iris Pro 6200 is a little less powerful than a GTX 750. That is about 1/4 as powerful as a GTX 980 - let alone the Titan.

Intel isn't even close to displacing discrete graphics - HBM or no. They need a lot more raw power to do that. And at any specific lithography, a discrete GPU is going to outperform an integrated GPU, simply because the discrete unit can be much bigger and have a significantly higher TDP.

Are there any 9th gen Iris Pro benches out yet?
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
http://www.techpowerup.com/202196/as...herboards.html



Pretty much everyone else did the same thing.

Next!

That is a year old, and this is Intel themselves stating the "C" won't be overclocked on anything but a Z board.

"Next!"

EDIT: Actually, I am wrong. That slide on the new review is also a year old. =/

Also, nVIDIA is improving their CPUs, which are beasts

Nvidia took all references to Denver off their website, and the core is bug-ridden, outperformed by pretty much every 3-4 core "small core" CPU, and outdone by pretty much every 2+ core "big core" CPU. It is also an in-order architecture. And products with it fell on their face, just to make it worse. I had hope for it, a lot of hype and interesting ideas, but it was a complete failure.
 
Last edited:

sm625

Diamond Member
May 6, 2011
8,172
137
106
Do any of these reviews have numbers on the eDRAM? Die size, transistor count, etc.?
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
Frankly it's a miracle that Denver got into products so we could see how poorly it performed.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
That is a year old, and this is Intel themselves stating the "C" won't be overclocked on anything but a Z board.

"Next!"

EDIT: Actually, I am wrong. That slide on the new review is also a year old. =/

You never can trust internet slides. They'll bite you every time. :)
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
Frankly it's a miracle that Denver got into products so we could see how poorly it performed.

After the fact, I am surprised that products even used it. But a major warning sign to me was NV not using it themselves; the X1 made that even more obvious.

You never can trust internet slides. They'll bite you every time.

I totally didn't expect them to mix new and old slides in the review. xD
 
Feb 19, 2009
10,457
10
76
Nobody who buys a 4670K will be running it with the iGPU. Iris is a total waste of time in CPUs in such a price bracket.

True, it's still in no man's land: the CPU is too powerful for a media HTPC, the iGPU too weak to play games at 1080p with some IQ.

For an HTPC chip, it's way too expensive.

For a main PC rig, it's so much wasted die space for an iGPU gamers won't use, as they go for a nice mid-range dGPU.

It's a similar situation to all my i5 & i7 CPU builds: their iGPU is a complete waste, and their price makes them belong in a proper gaming build, which always has a nice dGPU.

Maybe next-gen iGPU with HBM will do the trick!
 

cyclohexane

Platinum Member
Feb 12, 2005
2,837
19
81
Someone really needs to figure out how to use integrated and discrete GPUs together at the same time.
 
Feb 19, 2009
10,457
10
76
Seriously just look at this:

[Image: BDW-U die map]

[Image: BDW-H die map]


That doesn't include the eDRAM, which itself is also quite large.

Now, I wish Intel would focus on releasing a CPU in the $300 price range that has NO iGPU and uses the same die space for 8-12 cores. That CPU would fly in so many multi-threaded apps and in newer games that scale beyond 4 physical cores; it would be an awesome enthusiast CPU for the price.

Instead, we are basically paying that much to not use up to 50-66% of the chip (much more counting the eDRAM die) as soon as we game on dGPUs.
 

jamesgalb

Member
Sep 26, 2014
67
0
0
Now, I wish Intel would focus on releasing a CPU in the $300 price range that has NO iGPU and uses the same die space for 8-12 cores. That CPU would fly in so many multi-threaded apps and in newer games that scale beyond 4 physical cores; it would be an awesome enthusiast CPU for the price.

Instead, we are basically paying that much to not use up to 50-66% of the chip (much more counting the eDRAM die) as soon as we game on dGPUs.

When games start coming out in DX12 and Vulkan (which they will, because of the consoles), the iGPs will finally be usable.

It's yet to be seen, but I suspect an iGP would provide more of a boost than anything beyond an 8-thread CPU... Being able to assign things like all the AA to the iGP, so it doesn't impact the dGPU, would provide a much larger performance increase than going from Haswell to Haswell-E, for example...

someone really needs to figure out how to use the integrated with discrete GPUs together at the same time

Seemingly in the next generation of APIs, entire passes will be able to be offloaded to different GPUs (and iGPUs): AA, rendering of textures, bloom, motion blur, or other similar effects... (see the sketch below).

Hypothetically it would allow you to enable advanced settings without taking any hit in FPS, so long as the second (or third) GPU can handle them.
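
For illustration, a minimal sketch of the enumeration half of that idea under DX12's explicit multi-adapter model (assuming the Windows 10 SDK; the function name is mine, and all the actual cross-adapter resource sharing and synchronization, where the real developer effort lives, is omitted):

```cpp
// Minimal DX12 explicit multi-adapter sketch: enumerate every adapter
// (the Intel iGPU and any discrete card both show up) and create a
// D3D12 device on each, so work can later be queued to either one.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        // Each physical GPU gets its own device; a post-processing pass
        // like AA could then be recorded on the iGPU's command queue.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```

Enumerating the devices is the trivial part; the hard part is partitioning the frame and shipping intermediate buffers between adapters without the copy overhead eating the gains.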
 
Last edited:

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Looks like it has its niche, but it's quite a bit to pay for system builders and low-end gamers who could live with the likes of an 860K or i3 and put some real budget into the GPU.

Also, I don't like that none of the tests were standard 1080p + high settings with AA/no AA. Almost seems a bit underhanded, but on the other hand, such GPU loadings could still be in favor of the Iris Pro 6200 with 'dat eDRAM. The performance increase is still staggering, if not shocking, and HBM in this case will not save AMD APUs at Intel's rate of advancement, because Intel is increasing its actual raw GPU capability and leveraging its process advantage at the same time.
 
Last edited:

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
1080p medium settings is all I want/need. My dream of a laptop that can do that is coming true with Skylake, I think. 3-pound gaming laptop, here I come baby!

Anyone got any news about AMD Zen APUs?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Nobody who buys a 4670K will be running it with the iGPU. Iris is a total waste of time in CPUs in such a price bracket.

In order for it to take on low end dGPUs, it needs to be in low end CPUs that cost $50-$100. The problem is, it can't be there because it would no longer be a $50-$100 CPU.

Of course the shipping figures will still count it as "video card market share" even though nobody in their right mind would ever use it. Much the same way I use a Titan with my 4790K, yet it's still a "+1" for Intel's GPU shipments.


Likewise, dGPUs will ramp up performance considerably, including at the low end. In one refresh we've managed to attain Titan performance on a 970, now classed as mid-range, at one third the price. And that's without any die shrinks.

With a die shrink (e.g. Pascal) I expect Titan levels of performance at the low end. Iris / Skylake being competitive with that is a fantasy.

I have a 3770K and a 2600K and use the IGP on both to drive my 2nd monitor. Slightly less VRAM is used up that way, and it also enables QuickSync, which I use pretty regularly.
 
Feb 19, 2009
10,457
10
76
When games start coming out in DX12 and Vulkan (which they will, because of the consoles), the iGPs will finally be usable.

I suspect that's merely a footnote, since it would require excellent developers to handle such close-to-the-metal integration of different GPU architectures, i.e. Intel + AMD or Intel + NV. Chances of devs going above & beyond just to put an iGPU to good use? I guess Intel had better start an IntelWorks program and throw $ at game devs..
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Intel's plan of eating the discrete GPU market feet first is starting to unfold... $260 for a 4c/4t Broadwell with a built-in R7 250/GTX 750-class GPU is a dream come true for prebuilt sellers.

I don't think they are really getting that level of performance, as many are noting the modes / benchmark settings look rigged. In some cases it's clear the games are going CPU limited with the graphics set so low, which naturally gives the Intel part a perceived advantage.

Having said that, if real testing shows them to actually have near GTX 750 / R7 260 performance ...

Well, the money in this market is in the $150-$350 volume sellers. That would encompass the 750 Ti to the GTX 970, and the R7 260X to the R9 290. If Intel has that performance and proliferates it into other chips, it would begin a really fast evisceration of Nvidia's and AMD's dGPU market.

Things like the Titan X, 980 Ti, Fiji, 295X2, and other >$350 cards are tiny-market-share bragging-rights cards, sort of like the Dodge Hellcat vs. a normal Challenger. The only reason the exotic one exists is to sell more of the normal one. If normal Challengers stop selling, you won't see any more Hellcats (or Challengers). Same for these video cards.

This is something I really don't want to see happen. So I hope Intel's iGPU does in fact suck when put through real benchmarks.
 

MrTeal

Diamond Member
Dec 7, 2003
3,911
2,677
136
I suspect that's merely a footnote, since it would require excellent developers to handle such close-to-the-metal integration of different GPU architectures, i.e. Intel + AMD or Intel + NV. Chances of devs going above & beyond just to put an iGPU to good use? I guess Intel had better start an IntelWorks program and throw $ at game devs..

I can't see AMD doing something like this, but nVidia is surely aware that a large percentage of their dGPUs are running on Intel platforms whose processor graphics are just sitting idle. If nVidia ported something like PhysX to OpenCL and allowed it to run on processor graphics as long as the display adapter was an nVidia card, that could be another large selling point in their favor that really costs them nothing other than some support.

I don't think they will given how tight-fisted they are with their IP, but paying an extra $50 to get the Iris Pro version of the top i7 and being able to run physics on the iGPU+eDRAM would be a very interesting value proposition.
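
As a rough sketch of that scenario (purely hypothetical; nothing nVidia has shipped), finding the Intel processor graphics as an OpenCL compute device while the nVidia card stays the display adapter would look something like this:

```cpp
// Hypothetical sketch: locate the Intel iGPU through OpenCL so compute
// work (physics, in the scenario above) can run there while the
// discrete card handles rendering. Not actual PhysX or nVidia code.
#include <CL/cl.h>
#include <cstring>
#include <vector>

cl_device_id FindIntelProcessorGraphics()
{
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms)
    {
        char vendor[256] = {};
        clGetPlatformInfo(platform, CL_PLATFORM_VENDOR,
                          sizeof(vendor), vendor, nullptr);
        if (!strstr(vendor, "Intel"))
            continue;

        // Ask the Intel platform for a GPU device: on a desktop with a
        // discrete card, this is the otherwise-idle processor graphics.
        cl_device_id device = nullptr;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU,
                           1, &device, nullptr) == CL_SUCCESS)
            return device;
    }
    return nullptr; // no Intel iGPU exposed to OpenCL
}
```

The device selection is the easy bit; the value proposition would come from the eDRAM-backed iGPU actually having the bandwidth to run the physics workload without starving.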
 

Blue_Max

Diamond Member
Jul 7, 2011
4,223
153
106
Intel's plan of eating the discrete GPU market feet first is starting to unfold... $260 for a 4c/4t Broadwell with a built-in R7 250/GTX 750-class GPU is a dream come true for prebuilt sellers.

Exactly. As soon as this is in an Intel NUC box, I'm grabbing one!

Here's another perspective... the GTX 750 Ti is equal to the mobile GeForce 860M.

860M.

The same 860M found in $1000 gaming laptops.

Think about that. Impressive, no? :cool:

[EDIT] Okay... The charts in the link show the Iris Pro 6200 closer to the 850M in performance, but that's damned great for Intel IGP!
 
Last edited:
Feb 19, 2009
10,457
10
76
Exactly. As soon as this is in an Intel NUC box, I'm grabbing one!

Here's another perspective... the GTX 750 Ti is equal to the mobile GeForce 860M.

860M.

The same 860M found in $1000 gaming laptops.

Think about that. Impressive, no? :cool:

[EDIT] Okay... The charts in the link show the Iris Pro 6200 closer to the 850M in performance, but that's damned great for Intel IGP!

It makes sense for notebooks, for sure, but not for desktops, as the price and performance profile isn't good. For an HTPC or NUC you don't need that much CPU performance, and for a gaming desktop build it lacks the gaming performance that cheap dGPUs provide.

No man's land due to the price. At least for AMD, the el-cheapo APU has a small niche.