
NVIDIA Tegra K1

No. According to the whitepaper, the average power consumed by the Kepler.M GPU with some of today's best Android games is < 2w. This makes sense because Kepler.M easily handles most Android games today. Pushed to peak levels, Kepler.M really cannot consume more than ~4w to fit in a tablet form factor.

5w is the rated TDP for the entire TK1 SoC (same as T4).

The perf. per watt (and hence perf.) of the Kepler.M GPU is well beyond any high end ultra mobile GPU used today.

Ummm, doesn't a quad core A15 processor have a TDP of 5-10w by itself? I've got no clue how Nvidia could fit FOUR A15 cores into a 1w power envelope. It's more likely to be somewhere around 10-15w TDP with quad A15's, that's not bad at all for a tablet. For a phone, they'd have to cut the clocks and perhaps end up at around 7-10?
 
nVidia spent more money than AMD on their GPU tech. Kepler is the first architecture where they really focused on efficiency. Kepler.M is an evolution of that concept. nVidia has started to develop new architectures with mobile in mind; Maxwell will be the first architecture built around this concept.

It is possible to deliver more performance with less power when you focus on this.

I'm just skeptical of something that promises to be 3x+ more efficient than Kabini's GCN.
 
Ummm, doesn't a quad core A15 processor have a TDP of 5-10w by itself? I've got no clue how Nvidia could fit FOUR A15 cores into a 1w power envelope. It's more likely to be somewhere around 10-15w TDP with quad A15's, that's not bad at all for a tablet. For a phone, they'd have to cut the clocks and perhaps end up at around 7-10?

That's not how it works. TDP for an SoC needs to be split between CPU/GPU/mem/etc. Depending on CPU and GPU utilization percentages, power consumption will vary, but on average the total dissipated power can be close to 5w in total with CPU and/or GPU intensive apps for an SoC such as T4 or TK1. Peak power is subject to go a bit higher of course.

The total power consumed in a handheld device is a function of SoC, screen, and anything else that consumes power. IIRC, something like an ipad 4 has a peak (not sustained) power consumption of ~ 12w in total for the whole system. A Tegra 4 high res tablet would be similar in consumption.
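That breakdown is easy to sanity-check with simple arithmetic. The component figures below are illustrative assumptions for the sake of the exercise (only the ~5w SoC TDP and the ~12w iPad 4 system figure come from the thread), not measurements:

```python
# Rough, illustrative peak power budget for a high-res tablet.
# Component values are assumed for illustration, not measured.
budget_w = {
    "soc_cpu_gpu_mem": 5.0,        # TK1-class SoC TDP, per the discussion above
    "display_backlight": 5.0,      # high-res panel at full brightness (assumed)
    "radios_storage_misc": 2.0,    # WiFi, storage, audio, etc. (assumed)
}

total = sum(budget_w.values())
print(f"peak system draw: ~{total:.0f} W")  # lands near the ~12 W iPad 4 figure
```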
 
That's not how it works. TDP for an SoC needs to be split between CPU/GPU/mem/etc. Depending on CPU and GPU utilization, power allocation will vary, but on average the total dissipated power can be close to 5w in total with CPU and/or GPU intensive apps. Peak power is subject to go a bit higher of course.

The total power consumed in a handheld device is a function of both SoC and screen power. IIRC, something like an ipad 4 has a peak (not sustained) power consumption of ~ 12w in total for the whole system.

TDP should be the maximum power, not "peak" per se but the maximum sustained (ie. longer than a few seconds) load. I'm skeptical of the TK1 being able to operate a graphically intense game at anything close to max clockspeed, even at <2W for the GPU, that leaves ~2W for FOUR A15 cores and ~1W for the rest of the SoC. 2W for four A15 cores? That requires some serious throttling/clockspeed reduction.

Sure, it could pull 5w with a reasonable (ie. not intense) load, but at maximum power draw? 5W is going to be really tough to believe without lots of throttling.
 
I'm just skeptical of something that promises to be 3x+ more efficient than Kabini's GCN.

Why? We know that efficiency comes from the architecture on the same node. And we know that nVidia has money to spend on R&D.

For me it's clear that AMD is not even trying to improve their perf/watt because for that they need money. Kabini shows it: Kabini is alright, Temash was DoA.
 
TDP should be the maximum power, not "peak" per se but the maximum sustained (ie. longer than a few seconds) load. I'm skeptical of the TK1 being able to operate a graphically intense game at anything close to max clockspeed, even at <2W for the GPU, that leaves ~2W for FOUR A15 cores and ~1W for the rest of the SoC. 2W for four A15 cores? That requires some serious throttling/clockspeed reduction.

Sure, it could pull 5w with a reasonable (ie. not intense) load, but at maximum power draw? 5W is going to be really tough to believe without lots of throttling.

It is very unlikely that any modern day Android game will be pegging all four A15 CPU cores (let alone pegging two in fact). So in most games, GPU utilization % will be much higher than CPU utilization %. And with the vast majority of Android games, Kepler.M will not come close to being fully utilized either. Last but not least, the R3 variant of Cortex A15 on 28nm HPM used in Tegra K1 has superior power efficiency compared to the Cortex A15 variant in Tegra 4.

Once again, 5w is the TDP for the entire SoC. For CPU-intensive apps, most of the power is allocated to the CPU (and vice-versa with the GPU). Any scenario where both CPU and GPU are pegged at the same time is pretty unrealistic. It would also be pretty unrealistic to expect Kepler.M to have the same clock operating frequencies in a device like Shield compared to a tablet or phone.
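The shared-budget idea described above can be sketched as a toy allocator: reserve a small floor for uncore/memory, then divide the rest between CPU and GPU in proportion to utilization. This is purely illustrative (the function, the 0.5 W floor, and the utilization figures are my assumptions); real DVFS governors are far more involved:

```python
def split_tdp(tdp_w, cpu_util, gpu_util, floor_w=0.5):
    """Toy shared-TDP model: reserve floor_w for uncore/memory, then
    split the remaining budget between CPU and GPU in proportion to
    their utilization. Illustrative only, not a real governor."""
    remaining = tdp_w - floor_w
    total_util = cpu_util + gpu_util
    if total_util == 0:
        return floor_w, 0.0, 0.0
    cpu_w = remaining * cpu_util / total_util
    gpu_w = remaining * gpu_util / total_util
    return floor_w, cpu_w, gpu_w

# A GPU-heavy game: the GPU claims most of an assumed 5 W SoC budget.
floor_w, cpu_w, gpu_w = split_tdp(5.0, cpu_util=0.3, gpu_util=0.9)
print(f"uncore {floor_w:.2f} W, CPU {cpu_w:.3f} W, GPU {gpu_w:.3f} W")
```

With these assumed numbers the GPU ends up with roughly 3.4 W and the CPU with about 1.1 W, which is the kind of split the post above is describing for GPU-bound workloads.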
 
Why? We know that efficiency comes from the architecture on the same node. And we know that nVidia has money to spend on R&D.

For me it's clear that AMD is not even trying to improve their perf/watt because for that they need money. Kabini shows it: Kabini is alright, Temash was DoA.

Even though Tegra has had growing pains, just as Atom has had growing pains, investing many years ago in ultra-mobile technology was the right thing for NVIDIA and Intel to do. AMD was (and is) more financially strapped in comparison, and invested in "semi-custom" APU's instead. It is what it is, and these decisions will translate directly into products and technology we see from these companies in the near future.
 
Even though Tegra has had growing pains, just as Atom has had growing pains, investing many years ago in ultra-mobile technology was the right thing for NVIDIA and Intel to do. AMD was (and is) more financially strapped in comparison, and invested in "semi-custom" APU's instead. It is what it is, and these decisions will translate directly into products and technology we see from these companies in the near future.

Here's hoping. They need to turn around the Tegra line, fast, and actually start making a profit on it.
 
^Tegra K1 is very promising; maybe the comparison with the consoles was meant to spur the Ouyas of the world to rise up and deliver console performance for a relatively low price.
 
^Tegra K1 is very promising; maybe the comparison with the consoles was meant to spur the Ouyas of the world to rise up and deliver console performance for a relatively low price.

I think that seeing the roaring success of the Ouya may put them off rather more than NVidia's performance promises.
 
It is very unlikely that any modern day Android game will be pegging all four A15 CPU cores (let alone pegging two in fact). So in most games, GPU utilization % will be much higher than CPU utilization %. And with the vast majority of Android games, Kepler.M will not come close to being fully utilized either. Last but not least, the R3 variant of Cortex A15 on 28nm HPM used in Tegra K1 has superior power efficiency compared to the Cortex A15 variant in Tegra 4.

Once again, 5w is the TDP for the entire SoC. For CPU-intensive apps, most of the power is allocated to the CPU (and vice-versa with the GPU). Any scenario where both CPU and GPU are pegged at the same time is pretty unrealistic. It would also be pretty unrealistic to expect Kepler.M to have the same clock operating frequencies in a device like Shield compared to a tablet or phone.

So in other words, Intel's SDP is completely valid, because that is what everyone in the mobile space is doing.
 
Maybe not a console, but I'd like to see nVidia do a Denver NUC-like device (at a price that might actually move units)

But what OS would it run? Android isn't made for keyboard and mouse, Windows RT is a complete disaster, and Ubuntu doesn't sell consumer boxes. Only real option is Chrome OS.
 
But what OS would it run? Android isn't made for keyboard and mouse, Windows RT is a complete disaster, and Ubuntu doesn't sell consumer boxes. Only real option is Chrome OS.

The limitations of Chrome OS make it possibly the worst choice. AOSP with a custom UI would be better. Considering Denver K1's purported code morphing, if nVidia could do x86 that way (currently blocked by Intel, I think), SteamOS would be an option.
 
The limitations of Chrome OS make it possibly the worst choice. AOSP with a custom UI would be better. Considering Denver K1's purported code morphing, if nVidia could do x86 that way (currently blocked by Intel, I think), SteamOS would be an option.

Yeah, if it was x86 then it would have a lot more potential- but I think that Intel have got that route too locked down.
 

Go check out AT's bench results for the past 2-3 years in mobile devices. They are a mess (even after giving the benefit of the doubt, e.g. for updated OSes). I won't speculate why that is so.

P.S. What happened to Tegra 4i? Did no "journalists" have the curiosity to ask about it? :biggrin:
 
Short memory-span of collective tech community + lack of ombudsman in this industry = corporations' propaganda dwarfing critical consumer voices at every occasion.

Sad but true.
 