AnandTech Forums > Hardware and Technology > Video Cards and Graphics

Old 12-24-2012, 05:12 PM   #1
Lonyo
Lifer
 
Join Date: Aug 2002
Posts: 21,655

NVidia Tegra 3 power consumption (AT article)

http://www.anandtech.com/show/6529/b...wer-analysis/6

Quote:
NVIDIA's GPU power consumption is more than double the PowerVR SGX 545's here, while its performance advantage isn't anywhere near double. I have heard that Imagination has been building the most power efficient GPUs on the market for quite a while now; this might be the first argument in favor of that hearsay.


It seems like NV might be struggling with power consumption on their GPUs in comparison to Imagination Tech (PowerVR).
Certainly from a platform power standpoint, if it's true, it does indicate some difficulties in competing with other ARM SoCs, although it doesn't seem to result in significant penalties in Android ARM tablets.

It would be interesting to see Android vs WinRT on Tegra 3, to see what sort of loads Tegra 3 ended up having, given that WinRT isn't quite the right fit for it, with things like the additional unusable companion core (RT doesn't support heterogeneous processors).
Intel does have a process advantage though, with the SGX545 on 32nm vs 40nm for the Tegra 3 SoC, and it's unclear what else might be going through the rail being tested (although it does seem to be mostly activated by GPU workloads), so it's not all NV's fault.
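To make the efficiency argument concrete, here is a rough perf-per-watt sketch. The power and score figures below are illustrative placeholders, not the article's measurements:

Code:
# Back-of-envelope perf/W comparison illustrating the quoted point.
# NOTE: illustrative placeholder figures -- substitute the GPU rail power
# and benchmark scores actually measured in the article.

tegra3 = {"power_w": 1.00, "score": 60.0}   # hypothetical
sgx545 = {"power_w": 0.45, "score": 40.0}   # hypothetical

power_ratio = tegra3["power_w"] / sgx545["power_w"]
perf_ratio = tegra3["score"] / sgx545["score"]

print(f"Tegra 3 draws {power_ratio:.1f}x the power for {perf_ratio:.1f}x the performance")
for name, soc in (("Tegra 3", tegra3), ("SGX 545", sgx545)):
    print(f"{name}: {soc['score'] / soc['power_w']:.0f} points per watt")
# If power more than doubles while performance doesn't, perf/W favors the SGX 545.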


Quote:
Across the board Intel manages a huge advantage over NVIDIA's Tegra 3. Again, this shouldn't be a surprise. Intel's 32nm SoC process offers a big advantage over TSMC's 40nm G used for NVIDIA's Cortex A9 cores (the rest of the SoC is built on LP, the whole chip uses TSMC's 40nm LPG), and there are also the architectural advantages that Atom offers over ARM's Cortex A9. As we've mentioned in both our Medfield and Clover Trail reviews: the x86 power myth has been busted. I think it's very telling that Intel didn't show up with an iPad for this comparison, although I will be trying to replicate this setup on my own with an iPad 4 to see if I can't make it happen without breaking too many devices. We've also just now received the first Qualcomm Krait based Windows RT tablets, which will make another interesting comparison point going forward.
http://www.anandtech.com/show/6529/b...wer-analysis/8
__________________
CPU: Q3570K @ 4.1GHz 1.23v // Mobo: Asus P8Z77-V // GFX: Sapphire Tri-X 290 @ 1000/5200 // RAM: Corsair DDR3 @ 1600MHz 9-9-9-24 // SSD: Samsung 830 128GB
Video cards: TNT2, Ti4400, 9800, 7800GT(+7200GS), HD4850(+HD2400), HD6850, HD7950 (Laptops: GF6150, HD3200, GMA500)

Last edited by Lonyo; 12-24-2012 at 05:16 PM.
Old 12-24-2012, 05:45 PM   #2
ShintaiDK
Lifer

Join Date: Apr 2012
Location: Copenhagen
Posts: 11,047

That's what process node advancement gives you. Plus, remember the Tegra 3 is now a year old.
__________________
APUSilicon.com=AMD advocates shill site.
Old 12-24-2012, 08:29 PM   #3
Will Robinson
Golden Member

Join Date: Dec 2009
Posts: 1,402

They musta used dodgy drivers and sacrificed quality to get those numbers right?
__________________
Intel CPUs and AMD GPUs.


Old 12-24-2012, 09:05 PM   #4
Lonyo
Lifer

Join Date: Aug 2002
Posts: 21,655

Quote:
Originally Posted by Will Robinson
They musta used dodgy drivers and sacrificed quality to get those numbers right?
Having used one of the Atoms with PowerVR graphics... the drivers were ridiculously dodgy. BSODs when trying to use DXVA in Windows XP, which was a known problem they never cared to fix.
__________________
CPU: Q3570K @ 4.1GHz 1.23v // Mobo: Asus P8Z77-V // GFX: Sapphire Tri-X 290 @ 1000/5200 // RAM: Corsair DDR3 @ 1600MHz 9-9-9-24 // SSD: Samsung 830 128GB
Video cards: TNT2, Ti4400, 9800, 7800GT(+7200GS), HD4850(+HD2400), HD6850, HD7950 (Laptops: GF6150, HD3200, GMA500)
Old 12-25-2012, 03:45 AM   #5
Lepton87
Golden Member

Join Date: Jul 2009
Location: Poland(EU)
Posts: 1,749

I bet the Tegra 3 GPU is only faster because it stutters! Not such smooth sailing as with the PowerVR SGX 545.
__________________
i5 2600K@4778MHz(47x101.7MHz) 1.45V,Noctua NH-D14, Asus Maximus IV Extreme, 8GB Corsair 1866MHz, Gigabyte GTX Titan SLI, 2x Corsair MX100 256 in Raid 0, 2xSeagate 3TB 7200RPM in RAID 0, Sandforce 2 120GB + 2TB WD Caviar Green, Seagate 1TB 7200RPM, BE Quiet 1200W, dell u2711
Old 12-25-2012, 04:58 AM   #6
ViRGE
Super Moderator
Elite Member

Join Date: Oct 1999
Posts: 30,226

Quote:
Originally Posted by ShintaiDK
That's what process node advancement gives you. Plus, remember the Tegra 3 is now a year old.
Goes to show the ridiculousness of equipping the first Surface tablet with it though. MS should have lined up a 32nm SoC; a Krait would have been better from both a CPU and GPU standpoint.
__________________
ViRGE
Team Anandtech: Assimilating a computer near you!
GameStop - An upscale specialized pawnshop that happens to sell new games on the side
Todd the Wraith: On Fruit Bowls - I hope they prove [to be] as delicious as the farmers who grew them
Old 12-25-2012, 05:36 AM   #7
sontin
Platinum Member

Join Date: Sep 2011
Posts: 2,226

So Anandtech.com compares a 40nm SoC against a 32nm SoC and comes to the conclusion that the 32nm SoC uses less power.

Wow.

Quote:
Originally Posted by ViRGE
Goes to show the ridiculousness of equipping the first Surface tablet with it though. MS should have lined up a 32nm SoC; a Krait would have been better from both a CPU and GPU standpoint.
Dell is selling a Windows RT tablet with a dual-core S4.

Last edited by sontin; 12-25-2012 at 05:39 AM.
Old 12-25-2012, 06:10 AM   #8
mablo
Member

Join Date: Sep 2012
Location: Norway
Posts: 29

Nice to see it put in numbers, but nothing I haven't experienced after owning an HTC One X for a while. I've been very unimpressed by Tegra 3 from the start, and not just because of the power consumption. The only positive thing I can think of is the companion core, but for me any gains in idle power draw are nullified when I use it.
Old 12-25-2012, 06:51 AM   #9
tviceman
Diamond Member

Join Date: Mar 2008
Posts: 4,939

Quote:
Originally Posted by ViRGE
Goes to show the ridiculousness of equipping the first Surface tablet with it though. MS should have lined up a 32nm SoC; a Krait would have been better from both a CPU and GPU standpoint.
Drivers? I thought it was common knowledge MS went with Tegra 3 because the drivers and software were ready. Better to launch with a usable product and mediocre hardware (Apple did this with its Mac lines for years) than ship with amazing hardware that does not work.

Anyway, that article is crap beyond showing Intel now has a viable tablet SoC. It almost reads like an Intel cheerleading advertisement. HEY GUYS, a new 32nm chip is more power efficient than a 40nm one! Compare Intel's fancy new chip to Krait, A6X, or Tegra 4, then let's see how shiny and awesome it is.
Old 12-25-2012, 07:25 AM   #10
notty22
Diamond Member

Join Date: Jan 2010
Location: Beantown
Posts: 3,313

Tom's Hardware is running what seems like the same article. Intel elves delivered Christmas presents and they want their intentions heard! LOL

They also seem 'giddy'.

__________________
i5 4670K@4100mhz, 32GB Kingston 1600,H50
MSI GTX 970 gaming Seasonic SS-760XP2

240gb SSD, Win 8.1
Let's make sure history never forgets... the name... 'Enterprise'. Picard out.
Old 12-25-2012, 07:35 AM   #11
sontin
Platinum Member

Join Date: Sep 2011
Posts: 2,226

NVIDIA's power-saving core is not working with RT right now. That screws up the video playback numbers. Compare the Asus VivoTab to the Transformer Prime: the Prime runs >2h longer with the same display and the same battery.
Old 12-25-2012, 07:44 AM   #12
zebrax2
Senior Member

Join Date: Nov 2007
Posts: 821

Quote:
Originally Posted by sontin
So Anandtech.com compares a 40nm SoC against a 32nm SoC and comes to the conclusion that the 32nm SoC uses less power.

Wow.
It may seem like a strange comparison, but not so much if you consider that Intel is ahead of its competitors in process nodes most of the time.
Old 12-25-2012, 07:48 AM   #13
krumme
Platinum Member

Join Date: Oct 2009
Posts: 2,216

Quote:
Originally Posted by notty22
Tom's Hardware is running what seems like the same article. Intel elves delivered Christmas presents and they want their intentions heard! LOL

They also seem 'giddy'.

Intel can flex all their technical marketing power, but Atom is not on the market at all, and there are reasons for that.

They are way too late to this market - and we are talking at least 5 years too late. Besides, it's about keeping total cost down, and their products are anything but cheap. The new Atoms have no advantage at all. The old ones at least had the advantage of using depreciated equipment. Now Intel intends to compete in this market using state-of-the-art 22nm and 14nm for their future Atoms, while TSMC is flooding the market with dirt-cheap 28nm A15s, being halfway depreciated already. That is bad business.

We have Intel marketing talk wrapped in information about measuring power consumption over a resistor. Its focus is absolutely the wrong one and brings nothing new to the table.

What we really learn from this article is what was not shown to us: that Intel does not have anything remotely competitive with the 32nm hkmg A7V, and that it will probably stay that way with big.LITTLE A7/A15 on 28nm compared to the new Atoms.
Old 12-25-2012, 07:54 AM   #14
sontin
Platinum Member

Join Date: Sep 2011
Posts: 2,226

Quote:
Originally Posted by zebrax2
It may seem like a strange comparison, but not so much if you consider that Intel is ahead of its competitors in process nodes most of the time.
But right now Intel is on the same node and months behind the competition. Samsung and Qualcomm released their 28nm/32nm parts 8 months ago. Tegra 3 is 12 months old, and yet the Transformer Prime uses less power while watching videos.

There is a reason why Intel chose Tegra 3 and Windows RT: 40nm and a disabled power-saving core.
Old 12-25-2012, 09:13 AM   #15
ShintaiDK
Lifer

Join Date: Apr 2012
Location: Copenhagen
Posts: 11,047

Quote:
Originally Posted by ViRGE
Goes to show the ridiculousness of equipping the first Surface tablet with it though. MS should have lined up a 32nm SoC; a Krait would have been better from both a CPU and GPU standpoint.
That's the classic issue when companies enter a stage where "they know better" than the consumers. However, going ARM in the first place was the biggest disaster.

MS already downsized their Surface sales forecast from 4 million to 2 million to now just a few hundred thousand units. I don't think there will be any successors to WinRT and Surface RT. Number 2 didn't get fired for fun. And heads are still rolling.
__________________
APUSilicon.com=AMD advocates shill site.
Old 12-25-2012, 11:10 AM   #16
jpiniero
Golden Member

Join Date: Oct 2010
Posts: 1,574

How is Ballmer still employed?

Quote:
However, going ARM in the first place was the biggest disaster.
Doing ARM was a good idea, but they totally botched it. As is typical of MS in this era. It's not like any of the other W8 tablets - x86 or ARM - are going to sell either. They aren't really competitive with anything Google or Apple offers at this point.
Old 12-25-2012, 11:36 AM   #17
SPBHM
Platinum Member

Join Date: Sep 2012
Posts: 2,484

I think Tegra 3 is not one of the best options in terms of ARM SoCs, but it's impressive how well Atom is doing.
Old 12-25-2012, 11:56 AM   #18
Pottuvoi
Senior Member

Join Date: Apr 2012
Posts: 265

Tile-based renderers do have a nice advantage in data movement, and I believe that is what causes the gap (each tile stays in on-chip memory until it's finished).
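A minimal model of that bandwidth argument, using illustrative display and overdraw numbers rather than measured figures:

Code:
# Rough framebuffer DRAM traffic: immediate-mode vs tile-based rendering.
# All numbers are illustrative assumptions.

width, height = 1280, 800      # hypothetical tablet display
bytes_per_pixel = 4            # RGBA8
overdraw = 3.0                 # average fragments shaded per pixel

framebuffer = width * height * bytes_per_pixel

# Immediate-mode: blending/depth with overdraw can touch external memory
# once or twice per fragment.
imr_traffic = framebuffer * overdraw * 2

# Tile-based: each tile is resolved in on-chip memory, so external DRAM
# sees roughly one final write per pixel (binned geometry ignored here).
tbr_traffic = framebuffer

print(f"immediate-mode: ~{imr_traffic / 1e6:.0f} MB per frame")
print(f"tile-based:     ~{tbr_traffic / 1e6:.0f} MB per frame")
# Off-chip data movement is a large share of GPU power, hence the
# efficiency edge for PowerVR-style tilers.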
Old 12-25-2012, 05:18 PM   #19
Skurge
Diamond Member

Join Date: Aug 2009
Location: Namibia
Posts: 4,940

I think this is more about x86 than Atom vs Tegra. We were all led to believe that x86 is not competitive with ARM when it comes to low power. Although Tegra 3 is about to be replaced, it shows that Atom is faster and uses less power than the A9 at least. I believe it will be on par with the A6 and Krait (Krait being on 28nm), but I think Exynos 5 would still pull ahead. I would like to see Exynos 5 in a more optimised tablet to compare. Samsung is not too far behind Intel in fab process, and they seem to be pulling closer.

The next few years should be exciting in the mobile SoC space. Now if only someone would put out a tablet with USB 3.0 transfer speeds.
__________________
Intel Core i5-4670K |MSI Z97-Gaming 5|32GB DDR3-1600|Gigabyte R9 290 Windforce CF [stock]|Samsung SSD 840 Evo 500GB|Corsair AX860 PSU|Corsair 750D|Windows 8.1 Pro|Samsung U28D590D|Logitech G27 Racing Wheel|Nexus 5 32GB
Old 12-26-2012, 04:19 AM   #20
BenSkywalker
Elite Member

Join Date: Oct 1999
Posts: 8,955

It's a bit sad to see how extremely far Anand has fallen with the integrity of his reviews. If he were to try and have some real honest work done, he would have taken the time to, I don't know, glance at his own previous reviews to see that his testing methodology was shockingly broken.

Let's take a look at the performance numbers; how about we start with Kraken?

Atom- 33K ms
Tegra 3- 49.6K ms

Lower numbers are better, but what if we compare that to a mystery CPU from a prior review done right here on AT-

Mystery SoC- 22.7K

Hmmm, so the mystery SoC is ~50% faster than the Atom, which is ~50% faster than the Tegra 3. What is the mystery SoC? Tegra 3-

http://images.anandtech.com/graphs/graph6425/51299.png
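Working the ratios from the numbers quoted above:

Code:
# Kraken completion times from this post, in ms (lower is better).
atom_win8      = 33_000   # Atom (Clover Trail, Windows 8)
tegra3_winrt   = 49_600   # Tegra 3 (Windows RT)
tegra3_android = 22_700   # the "mystery SoC": Tegra 3 on Android

print(f"Atom vs Tegra 3 on RT:      {tegra3_winrt / atom_win8 - 1:+.0%}")
print(f"Tegra 3 on Android vs Atom: {atom_win8 / tegra3_android - 1:+.0%}")
print(f"Same Tegra 3 silicon, OS/browser gap: {tegra3_winrt / tegra3_android:.1f}x")
# ~+50%, ~+45%, and ~2.2x respectively -- the same chip lands on both sides
# of the Atom depending on the browser, which is the objection being made here.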

Anand seems to have no desire to even pretend to have a hint of credibility when it comes to benching SoCs. It has repeatedly been pointed out to him that browser benches are *NOT* OK to use as general CPU benches. Clearly, something is horribly broken with that particular browser for Kraken (I could also point out that the SunSpider numbers were better under Windows than Android for T3).

Also, for the GPU comparison, where are the bench numbers for the game test they ran? Obviously the power benches have zero merit without performance numbers to back them up; otherwise we could point out that an FX5200 uses a lot less power than a 7970GE, as if that is anything besides stupid.

Of course, we also don't get the explanation as to why he didn't use a 2010 45nm Atom part to compare to Tegra 3, to be on an equal footing with the rigged test Intel decided he was going to run.

Really, all things considered, it's kind of shocking how bad Intel did in this particular comparison: a node advantage, the benefit of a broken browser for ARM from MS, not having to offer any performance numbers for their GPU (a single-core 545 is *very* low end), and avoiding going against the numerous A15 derivatives on the market.

Kind of stupid how bad this works out on a logical basis.
Old 12-26-2012, 08:49 AM   #21
SPBHM
Platinum Member

Join Date: Sep 2012
Posts: 2,484

Quote:
Originally Posted by Skurge
I think this is more about x86 than Atom vs Tegra. We were all led to believe that x86 is not competitive with ARM when it comes to low power. Although Tegra 3 is about to be replaced, it shows that Atom is faster and uses less power than the A9 at least. I believe it will be on par with the A6 and Krait (Krait being on 28nm), but I think Exynos 5 would still pull ahead. I would like to see Exynos 5 in a more optimised tablet to compare. Samsung is not too far behind Intel in fab process, and they seem to be pulling closer.

The next few years should be exciting in the mobile SoC space. Now if only someone would put out a tablet with USB 3.0 transfer speeds.
I think it's only about this Atom vs this Tegra 3; there is clearly something wrong with how much power the GPU is using (nothing to do with ARM CPUs), and there are clearly better ARM SoCs on the market.
Old 12-26-2012, 08:57 AM   #22
Grooveriding
Diamond Member

Join Date: Dec 2008
Location: Toronto, CA
Posts: 6,340

Quote:
Originally Posted by SPBHM
I think it's only about this Atom vs this Tegra 3; there is clearly something wrong with how much power the GPU is using (nothing to do with ARM CPUs), and there are clearly better ARM SoCs on the market.
Agreed. I recently got a Galaxy Note 2, and the Exynos 4412 in there is far and away better than anything out there. I have a Transformer Infinity, and the Note 2 blows it away in performance.

I'm not getting the sour grapes in this thread, though. You compare what is on the market to what is on the market, regardless of process differences. Complaining about it is like complaining about comparing a Bulldozer to an IB because of Intel's process advantage.

Intel is going to stomp on everyone else here once they get into full stride. The only real competition they'll likely have is from Samsung.
__________________
5960X @ 4.5 | X99 Deluxe | 16GB 2600 GSkill DDR4 | 780ti SLI | Evo 500GB Raid 0 | Dell U3011 | EVGA 1300W G2
under custom water
Old 12-26-2012, 02:35 PM   #23
sontin
Platinum Member

Join Date: Sep 2011
Posts: 2,226

Quote:
Originally Posted by SPBHM
I think it's only about this Atom vs this Tegra 3; there is clearly something wrong with how much power the GPU is using (nothing to do with ARM CPUs), and there are clearly better ARM SoCs on the market.
The ULP GeForce in Tegra 3 has 2 Vec4 pixel shader (PS) units and 1 Vec4 vertex shader (VS) unit. With the standard 520MHz clock rate of Tegra 3, the SGX545 in Clover Trail has, in the best case, the same PS performance and twice the VS performance.

In games there is always a mix of both calculations. So that gives Tegra 33% more PS performance (8 MADDs vs 6) and nearly 100% more VS performance (4 MADDs vs 2).
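For reference, the peak-MADD arithmetic behind those percentages (the lane counts are this post's figures; a MADD counts as two FLOPs, and the SGX545 clock here is an assumption):

Code:
# Peak shader throughput from the lane counts quoted above.
# 520 MHz is Tegra 3's stated ULP GeForce clock; 533 MHz for the SGX545
# in Clover Trail is an assumption, not a confirmed spec.

def gflops(madd_lanes, mhz):
    return madd_lanes * 2 * mhz / 1000.0   # 2 FLOPs per MADD per lane

tegra3_ps, tegra3_vs = gflops(8, 520), gflops(4, 520)   # 2x Vec4 PS, 1x Vec4 VS
sgx545_ps, sgx545_vs = gflops(6, 533), gflops(2, 533)   # the post's 6/2 MADD figures

print(f"Tegra 3 ULP GeForce:   {tegra3_ps:.1f} PS + {tegra3_vs:.1f} VS GFLOPS")
print(f"SGX545 (Clover Trail): {sgx545_ps:.1f} PS + {sgx545_vs:.1f} VS GFLOPS")
# Peak figures only: a tile-based deferred renderer can shade fewer fragments
# for the same scene, so peak MADD rates don't map directly to game results.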
Old 12-26-2012, 02:49 PM   #24
s44
Diamond Member

Join Date: Oct 2006
Posts: 8,916

Quote:
Originally Posted by Grooveriding
You compare what is on the market to what is on the market, regardless of process differences.
Are you joking?

T3 was released *a year ago* and is about to hit EOL with T4 appearing at CES.

Clover Trail was released *just now*.

So Intel is only a year behind, despite being a year+ ahead on process! Domination is right around the corner. Or... not. This entire round of Intel PR-sponsored articles is being taken about as seriously as it deserves (not at all).

Last edited by s44; 12-26-2012 at 02:51 PM.
Old 12-26-2012, 03:16 PM   #25
Lonyo
Lifer

Join Date: Aug 2002
Posts: 21,655

Quote:
Originally Posted by BenSkywalker
(full quote of post #20 above, snipped)
1) He used Windows vs Windows, with ostensibly the same browser (IE10) on each platform, therefore making it comparable.
Comparing Android/Mozilla Tegra 3 to Windows/IE x86 would be dumb. If you want consistency, you have it... by doing what Anand did.
2) Comparing the 32nm Atom to the 40nm Tegra 3 makes sense because THAT'S WHAT'S ON THE MARKET. Next up, why doesn't Anand compare a Pentium 4 to the latest AMD CPUs? I mean, AMD is behind on process nodes; let's ignore the fact that new AMD processors are competing with new Intel processors, and instead throw in a random old Intel processor to compare to. Much more valid.
3) Rigged test? The machines were compared to "non rigged" systems in terms of power draw and performance, and found to be equal. Caveats of the measurements were also mentioned.

4) READ THE WHOLE DAMNED ARTICLE. PRETTY MUCH EVERY POINT YOU MAKE IS COVERED BY IT.
__________________
CPU: Q3570K @ 4.1GHz 1.23v // Mobo: Asus P8Z77-V // GFX: Sapphire Tri-X 290 @ 1000/5200 // RAM: Corsair DDR3 @ 1600MHz 9-9-9-24 // SSD: Samsung 830 128GB
Video cards: TNT2, Ti4400, 9800, 7800GT(+7200GS), HD4850(+HD2400), HD6850, HD7950 (Laptops: GF6150, HD3200, GMA500)