Mullins vs Tegra K1, Benchmark scores


jdubs03

Senior member
Oct 1, 2013
A lot of weird benchmarks, numbers, and claims in this thread, but I think what you can depend on with K1 is that it will A) not be what Nvidia claims it is (it will be less) and B) not win any designs.



The three step process for nvidia releasing mobile SoCs is:

1: CLAIM TEGRA IS AWESOME
2: SHOW BENCHMARKS SHOWING TEGRA IS AWESOME
3: DELIVER LACKLUSTER PRODUCT WITH LITTLE RESEMBLANCE TO ANYTHING AWESOME

Troll status response, man. There have already been some benchmark leaks, and it beats the Snapdragon 805 (cherry-picking AnTuTu). Wait for the under-delivering, then criticize.
 

raghu78

Diamond Member
Aug 23, 2012
A lot of weird benchmarks, numbers, and claims in this thread, but I think what you can depend on with K1 is that it will A) not be what Nvidia claims it is (it will be less) and B) not win any designs.

The three step process for nvidia releasing mobile SoCs is:

1: CLAIM TEGRA IS AWESOME
2: SHOW BENCHMARKS SHOWING TEGRA IS AWESOME
3: DELIVER LACKLUSTER PRODUCT WITH LITTLE RESEMBLANCE TO ANYTHING AWESOME

Spot on. Nvidia has a history of overpromising and underdelivering in mobile SoCs. OEMs are well aware, and that's the reason Nvidia can't get key high-volume design wins.
 

raghu78

Diamond Member
Aug 23, 2012
Troll status response, man. There have already been some benchmark leaks, and it beats the Snapdragon 805 (cherry-picking AnTuTu). Wait for the under-delivering, then criticize.

The question is: "Does it beat it at a comparable TDP?" :whistle:
 

jdubs03

Senior member
Oct 1, 2013
The question is: "Does it beat it at a comparable TDP?" :whistle:

Hopefully we'll find out soon. The issue with Tegra 4 was two-fold: it was late to market, which inhibited design wins, and it had no integrated LTE.

If Nvidia can learn from that, then they'll get more wins. The product should be competitive, hopefully at around a 5W TDP or below.
 

Lonyo

Lifer
Aug 10, 2002
Let's put it another way:
Tegra K1 will run Android only (Windows RT is a who-cares product).
AMD's chip will run Windows only (at least for now).

So... WHO CARES!
Most people here wouldn't be choosing between Tegra and AMD, because they'd really be choosing between Android and Windows. So the perf/watt of this generation is irrelevant from an end-user perspective, and comparisons don't achieve anything because the software stack will always be different (at least for now); perf/watt, power consumption, and performance comparisons are all moot.
AMD could have a product 3x more efficient than Tegra K1, but you'd buy a K1 over it if you were buying an Android tablet.
K1 could be 3x more efficient than AMD's chip, but you want an x86 tablet, so it's irrelevant.
 

LogOver

Member
May 29, 2011
Let's put it another way:
Tegra K1 will run Android only (Windows RT is a who-cares product).
AMD's chip will run Windows only (at least for now).

So... WHO CARES!
Most people here wouldn't be choosing between Tegra and AMD, because they'd really be choosing between Android and Windows. So the perf/watt of this generation is irrelevant from an end-user perspective, and comparisons don't achieve anything because the software stack will always be different (at least for now); perf/watt, power consumption, and performance comparisons are all moot.
AMD could have a product 3x more efficient than Tegra K1, but you'd buy a K1 over it if you were buying an Android tablet.
K1 could be 3x more efficient than AMD's chip, but you want an x86 tablet, so it's irrelevant.

Well, let's put it this way.
People who need a Windows tablet will buy Cherry Trail.
People who need an Android tablet will buy Cherry Trail.
People who need a Windows/Android tablet will buy Cherry Trail.
So... WHO CARES!
K1 could be 3x more efficient than AMD's chip, but ... it's irrelevant.
 

Erenhardt

Diamond Member
Dec 1, 2012
Well, let's put it this way.
People who need a Windows tablet will buy Cherry Trail.
People who need an Android tablet will buy Cherry Trail.
People who need a Windows/Android tablet will buy Cherry Trail.
So... WHO CARES!
K1 could be 3x more efficient than AMD's chip, but ... it's irrelevant.

Gather all! Listen to the wise man. Hear his prophecy... and follow his leadership as a lamb follows its shepherd. Raise your wallets!
 

caswow

Senior member
Sep 18, 2013
I hear rumours that there will be tablets with a Titan Z in them; that's why it got delayed.
 

witeken

Diamond Member
Dec 25, 2013
I'd be interested to see how a Titan Z (or a Titan M with 2x15 Kepler.M SMXs) would perform at such low clock speeds with a tablet TDP.
 

Roland00Address

Platinum Member
Dec 17, 2008
I'd be interested to see how a Titan Z (or a Titan M with 2x15 Kepler.M SMXs) would perform at such low clock speeds with a tablet TDP.
It wouldn't work; you couldn't scale a Titan, let alone a Titan Z, down to a tablet TDP.

If I recall correctly, the minimum operating voltage of TSMC's 28nm process is about 0.8 V, and the Titan at idle is already running at 0.875 V. There is a limit on how much you can decrease the voltage of a chip. Furthermore, the GDDR5 memory has its own power draw, which alone would be greater than 5 watts.

For comparison, a normal Titan at idle had a system power consumption of 111 watts (2,688 shaders, 6 GB GDDR5) according to AnandTech, while the same test bed with a GTX 650 (384 shaders, 1 GB GDDR5) had an idle system power consumption of 104 watts. Thus the Titan draws 7 watts more at idle than a GTX 650, which itself has twice the shader count of the Tegra K1.
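
To make that concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are either taken from the figures quoted above (0.875 V idle, ~0.8 V floor, 7 W idle delta) or assumed for illustration (the ~5 W tablet budget mentioned earlier in the thread); this is not a measurement, just the arithmetic behind the argument.

```python
# Back-of-the-envelope check of the "Titan in a tablet" idea, using only the
# numbers quoted in this thread plus the usual dynamic-power rule P ~ C*V^2*f.
# All figures are rough assumptions, not measurements.

TABLET_TDP_W = 5.0    # tablet SoC power budget discussed earlier in the thread
V_IDLE = 0.875        # Titan idle voltage (from the post above)
V_MIN = 0.8           # assumed minimum operating voltage on TSMC 28nm
IDLE_DELTA_W = 7.0    # extra idle power of a Titan vs. a GTX 650 (from the post)

# Best case: undervolt from 0.875 V down to the 0.8 V floor.
# Dynamic power scales with V^2, so this is the most voltage alone can save.
scaling = (V_MIN / V_IDLE) ** 2
idle_delta_after = IDLE_DELTA_W * scaling

print(f"V^2 scaling factor from undervolting: {scaling:.2f}")             # ~0.84
print(f"Idle-power penalty after undervolting: {idle_delta_after:.1f} W")  # ~5.9 W
print(f"Tablet power budget: {TABLET_TDP_W:.1f} W")

# Even the *idle* overhead of the big die already exceeds the whole tablet
# budget before the shaders or the GDDR5 do any work.
```

And that sketch ignores leakage and the GDDR5 entirely, which only makes the picture worse for the big die.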
 

witeken

Diamond Member
Dec 25, 2013
Yeah, I'm not sure what I was thinking when writing that post. I thought you could do something like what Apple did with the MacBook Air and its HD 5000 graphics: more EUs at lower clock speeds improve performance/watt, and thus actual performance, within a 15W SKU. So maybe if you did that on a much larger scale, say 10 times as many EUs, you could get a nice performance/watt boost just by trading die area for power consumption. But there's obviously only so much voltage you can trade for die area, so it doesn't work indefinitely.

Similar to the CPU discussion, on the GPU front Haswell has to operate under more serious thermal limits than with Ivy Bridge. Previously the GPU could take the lion’s share of a 17W TDP with 16 EUs, now it has 15W to share with the PCH as well as the CPU and 2.5x the number of EUs to boot. As both chips are built on the same 22nm (P1270) process, power either has to go up or clocks have to come down. Intel rationally chose the latter. What you get from all of this is a much larger GPU, that can deliver similar performance at much lower frequencies. Lower frequencies require lower voltage, which in turn has a dramatic impact on power consumption.
 

Roland00Address

Platinum Member
Dec 17, 2008
Yeah, I'm not sure what I was thinking when writing that post. I thought you could do something like what Apple did with the MacBook Air and its HD 5000 graphics: more EUs at lower clock speeds improve performance/watt, and thus actual performance, within a 15W SKU. So maybe if you did that on a much larger scale, say 10 times as many EUs, you could get a nice performance/watt boost just by trading die area for power consumption. But there's obviously only so much voltage you can trade for die area, so it doesn't work indefinitely.

The big deal is to get the chip to run at the lowest voltage possible. That is why Intel put more execution units in the HD 5000 for the 15W SKU.

Higher voltage increases power roughly quadratically (dynamic power scales with V²).
More transistors/die area increases power linearly.
Higher frequency increases power linearly.

Thus you want to find the highest frequency you can run at the minimum voltage. After that you can add more die area (you can always turn units off with clock gating), but there is still a maximum amount of performance you can reach within a given TDP.
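
As a minimal sketch of that tradeoff, here is the standard dynamic-power model P ≈ units · C · V² · f applied to two hypothetical GPU configurations with the same throughput; the unit counts, voltages, and clocks are made up purely for illustration.

```python
# Minimal model of the area-vs-voltage tradeoff described above.
# Dynamic power: P ~ units * C * V^2 * f; throughput ~ units * f.
# Unit counts, voltages, and clocks below are hypothetical, for illustration only.

def dynamic_power(units, voltage, freq_ghz, c=1.0):
    """Relative dynamic power: units * C * V^2 * f."""
    return units * c * voltage ** 2 * freq_ghz

def throughput(units, freq_ghz):
    """Relative throughput: units * f."""
    return units * freq_ghz

# Config A: narrow and fast -- fewer EUs pushed to high clocks (needs more voltage).
narrow = dict(units=20, voltage=1.0, freq_ghz=1.2)
# Config B: wide and slow -- twice the EUs at half the clock, near the voltage floor.
wide = dict(units=40, voltage=0.8, freq_ghz=0.6)

for name, cfg in (("narrow/fast", narrow), ("wide/slow", wide)):
    p = dynamic_power(cfg["units"], cfg["voltage"], cfg["freq_ghz"])
    t = throughput(cfg["units"], cfg["freq_ghz"])
    print(f"{name:12s} throughput={t:5.1f}  power={p:6.2f}  perf/W={t / p:.2f}")

# Both configs deliver the same throughput (24 vs 24), but the wide/slow one
# uses about a third less power: the lower clock allows a lower voltage, and
# power scales with V^2 while area only scales linearly -- until the minimum
# voltage is hit, after which adding more area stops helping.
```

Same throughput, roughly a third less power for the wide/slow configuration, which is essentially what the Haswell HD 5000 quote above describes, until the voltage floor ends the trade.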
 