iris pro vs. 8670d

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
I saw those impressive results. The Iris Pro is a decent performer to say the least, but it got me thinking: yes, it outperforms the 8670D in most benches, but by how much? 20-50%, while having 2-3x more bandwidth...

This is theoretical, but what if the 8670D had even 2x its current ddr3-2133 bandwidth [~34GB/s*2 -> 68GB/s]? What would the performance be like?

Secondly, what if AMD, instead of overclocking the CPU, focused on the GPU, with a 1000-1100MHz stock clock and stock ddr3-2400 [19GB/s*2 -> 38GB/s] or even ddr3-2666 [21GB/s*2 -> 42GB/s]? What would the performance be like?
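For reference, those peak figures come straight from the DDR3 arithmetic: transfer rate in MT/s times 8 bytes per 64-bit transfer, times the number of channels. A quick sketch (plain Python, numbers rounded as in the post):

```python
# Theoretical peak DDR3 bandwidth: MT/s * 8 bytes (64-bit bus) * channels
def ddr3_peak_gbs(mt_per_s, channels=2):
    return mt_per_s * 8 * channels / 1000  # GB/s

for speed in (1600, 2133, 2400, 2666):
    print(f"ddr3-{speed}: {ddr3_peak_gbs(speed):.1f} GB/s dual-channel")
# ddr3-2133 comes to ~34.1 GB/s dual-channel, so doubling it gives the ~68 GB/s above
```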

Partially answered my own question.
Refresher:
55309.png

source post

RAM scaling up to ddr3-2400
7689e9d8-4c71-447d-b829-a97c2e36317f.png

source post

amd a10-5800k, ddr3-2400 and 1250MHz gpu clock
3dmark1145CPU1250GPUSingleMonitor_zps2c34b908.jpg

source thread | post

1169MHz GPU and ddr3-2400
b132d267fa5f8852a9f1a8b7ba00ef3fa511738165348e389cc3c1e6b42ed0146g.jpg

source thread | post

1169MHz gpu, ddr3-2400[a10-6800k]
A10-6800K-3DMark11-OC.png

source post[google translate]

It seems that the GPU has no more performance to gain past 1200MHz. Combined with ddr3-2400, it claws its way back at the Iris Pro. So the rest of my question still stands: what about going higher than ddr3-2400, or is 2400 the max stable speed?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
What does it matter?

Also, discussing using $170 RAM on a $150 cpu/gpu combo boggles my mind.

With Intel licensing nVidia tech, it was only a matter of time, given a far greater R&D budget and a process node advantage, before Intel overtook AMD in that area as well.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I saw those impressive results. The Iris Pro is a decent performer to say the least, but it got me thinking: yes, it outperforms the 8670D in most benches, but by how much? 20-50%, while having 2-3x more bandwidth...

This is theoretical, but what if the 8670D had even 2x its current ddr3-2133 bandwidth [~34GB/s*2 -> 68GB/s]? What would the performance be like?

Secondly, what if AMD, instead of overclocking the CPU, focused on the GPU, with a 1000-1100MHz stock clock and stock ddr3-2400 [19GB/s*2 -> 38GB/s] or even ddr3-2666 [21GB/s*2 -> 42GB/s]? What would the performance be like?

Share those amazing benches with us! I want to see this!
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
What does it matter?

Also, discussing using $170 RAM on a $150 cpu/gpu combo boggles my mind.

With Intel licensing nVidia tech, it was only a matter of time, given a far greater R&D budget and a process node advantage, before Intel overtook AMD in that area as well.
Not sure if troll, but I'll bite... the Iris Pro is 400+ dollars, not counting its ddr3-1600 RAM, and

ddr3-2400 is the same price as ddr3-2133 http://www.newegg.com/Product/Produc...atedMark=False

As for ddr3-2666, yes, you are right, it is expensive, but my goal for this post is performance scaling first, then to find out which GPU is better when bandwidth is equal, and only then costs.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
What does it matter?

Also, discussing using $170 RAM on a $150 cpu/gpu combo boggles my mind.

With Intel licensing nVidia tech, it was only a matter of time, given a far greater R&D budget and a process node advantage, before Intel overtook AMD in that area as well.

Why does it boggle the mind when it still costs over $100 less than the intel solution in question? And as cameron pointed out, it's $65 RAM, so get a little bit of perspective before you go making conclusions about how good Intel's solution is.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Why does it boggle the mind when it still costs over $100 less than the intel solution in question? And as cameron pointed out, it's $65 RAM, so get a little bit of perspective before you go making conclusions about how good Intel's solution is.

Well, the Intel solution CPU-wise pretty much kills it (higher-grade i7 mobile quads are almost the same as desktop i5s in singlethread and a bit ahead in multithread) while using just over half the power.

AMD's IGP is good, no doubt about that, but generally its theoretical output is lower than the HD 5200's.

Also note that the AT article is using 2133MHz RAM for Trinity, 2400MHz RAM for the 4770K, and 1600MHz RAM for the mobile platforms. Going from 2133MHz to 2400MHz you aren't going to get enough performance gain (maybe 5%, given diminishing returns) to get close to Iris Pro, which easily outperforms Trinity/Richland.

Anyway, AMD is going down the wrong route with the faster-RAM business. They really need to fix the memory controller on their APUs, which would have a much larger effect: a 26% gain if they get Trinity/Richland's bandwidth to the same level as Vishera's.

memory-bandwidth.png


I also highly doubt that OEMs pay anything like the listed price. The 3770K is listed at $332, but I can buy it at retail for $319 from Amazon. The 4770K is listed for $339, but retail costs $336. And considering retailers generally need at least a 20% margin, there is no way those CPUs cost anything like the listed price.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Well, the Intel solution CPU-wise pretty much kills it (higher-grade i7 mobile quads are almost the same as desktop i5s in singlethread and a bit ahead in multithread) while using just over half the power.

AMD's IGP is good, no doubt about that, but generally its theoretical output is lower than the HD 5200's.

Also note that the AT article is using 2133MHz RAM for Trinity, 2400MHz RAM for the 4770K, and 1600MHz RAM for the mobile platforms. Going from 2133MHz to 2400MHz you aren't going to get enough performance gain (maybe 5%, given diminishing returns) to get close to Iris Pro, which easily outperforms Trinity/Richland.

Anyway, AMD is going down the wrong route with the faster-RAM business. They really need to fix the memory controller on their APUs, which would have a much larger effect: a 26% gain if they get Trinity/Richland's bandwidth to the same level as Vishera's.

memory-bandwidth.png


I also highly doubt that OEMs pay anything like the listed price. The 3770K is listed at $332, but I can buy it at retail for $319 from Amazon. The 4770K is listed for $339, but retail costs $336. And considering retailers generally need at least a 20% margin, there is no way those CPUs cost anything like the listed price.

Good points. I wonder why the Trinity IMC is so bad relative to AMD's FX chips; ddr3-1600 should theoretically be ~25.6GB/s dual-channel, and that chart shows roughly 50% less. What is happening? So even at ddr3-2400, at ~38GB/s theoretical, AMD would only be getting 20-25GB/s...

So they don't even have to add faster memory, just find a way to get closer to the theoretical limits...
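Framing the gap monstercameron describes as controller efficiency (measured throughput over theoretical peak) makes the shortfall concrete. A rough sketch using the ballpark numbers from the post (the 20-25GB/s figure is his estimate, not a measurement here):

```python
# Memory-controller efficiency = measured throughput / theoretical peak
def efficiency_pct(measured_gbs, peak_gbs):
    return measured_gbs / peak_gbs * 100

peak_2400 = 2400 * 8 * 2 / 1000  # ~38.4 GB/s dual-channel DDR3-2400
lo, hi = efficiency_pct(20, peak_2400), efficiency_pct(25, peak_2400)
print(f"{lo:.0f}%-{hi:.0f}% of peak")  # the 20-25 GB/s estimate is ~52-65% of peak
```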
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Good points. I wonder why the Trinity IMC is so bad relative to AMD's FX chips; ddr3-1600 should theoretically be ~25.6GB/s dual-channel, and that chart shows roughly 50% less. What is happening? So even at ddr3-2400, at ~38GB/s theoretical, AMD would only be getting 20-25GB/s...

So they don't even have to add faster memory, just find a way to get closer to the theoretical limits...

I think it's probably to do with the L3 cache in Vishera (which also includes some logic that helps with RAM).
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Iris Pro currently adds $90-100 to the CPU cost and is only sold with i7s, so it only makes sense for low-power-draw, high-end ultrabooks.

For a full-size laptop, a cheaper Intel i5 or i7 without it plus an nvidia 650m would be better, even though that adds 50 watts to load power and -??- to idle, since Optimus will probably never work as well as the single Intel chip.
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
One thing that is impressive is how far ahead Intel is in tessellation.

Also, if you look at this http://images.anandtech.com/graphs/graph6993/55313.png
it's clear that the current AMD IGP is limited in several other ways, so I think a new design like Kaveri is the far more sensible fix; crazy-high GPU and memory clocks are not efficient...

Anyway, the big problem for AMD is that the mobile HD 4600 is probably going to perform the same as, or pretty close to, the mobile Richland IGP...
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
My uneducated opinion is that I'd place my bets on Iris for the long run. Performance is so all over the place that I suspect it may get significant gains from driver updates.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
For a full-size laptop, a cheaper Intel i5 or i7 without it plus an nvidia 650m would be better, even though that adds 50 watts to load power and -??- to idle, since Optimus will probably never work as well as the single Intel chip.

Optimus works very well, and it could easily be that HD 4600 + Optimus idles lower than the larger, more power-hungry HD 5200 + eDRAM (the eDRAM alone, according to the AT article, draws 0.5-1 watt). Optimus rarely kicks in for general use, and you can always set the notebook to use the IGP all the time except for certain programs (games). Considering Optimus vs. IGP gives virtually the same battery life, within margin of error, under most tasks, and given the eDRAM's idle power use, I wouldn't be surprised if battery life is lower with the HD 5200.

And there is no freaking way a 650m is using anywhere close to 50 watts under gaming loads. 30-35 watts tops. The rMBP has an 85-watt adapter for the whole system and runs games just fine (and that's with a 45-watt CPU plus screen, RAM, SSD, etc.).
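That back-of-envelope power budget checks out arithmetically. A tiny sketch using the figures from the post (the 10-watt allowance for screen/RAM/SSD is my assumption, purely illustrative):

```python
# Back-of-envelope rMBP power budget, using the figures from the post
adapter_w = 85   # whole-system power adapter
cpu_tdp_w = 45   # quad-core mobile CPU TDP
other_w   = 10   # rough allowance for screen, RAM, SSD (assumption)

gpu_headroom_w = adapter_w - cpu_tdp_w - other_w
print(f"~{gpu_headroom_w} W of headroom for the 650M")
# ~30 W, consistent with the 30-35 W gaming estimate above
```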

But I agree that 650m + i5/i7 is the way to go, especially if you get programs that can use the IGP + dGPU at the same time.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
My bad, I misread the theoretical TDP of the Mac (90 watts) vs. Iris Pro (47 - 55) as actual power draw. And the article also noted that the 90 watt figure was too high, d'oh!:

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/5
"I listed TDPs with all of the parts I’m comparing here. In the case of the GT 640 I’m adding the TDP of the CPU (84W) and the GPU (65W). TDP is half of the story with Iris Pro, because the CPU, GPU and eDRAM all fit into the same 47W power envelope. With a discrete GPU, like the 650M, you end up with an extra 45W on top of the CPU’s TDP. In reality the host CPU won’t be running at anywhere near its 45W max in that case, so the power savings are likely not as great as you’d expect but they’ll still be present."