
[Rumor] NVIDIA Maxwell, Denver CPU, Shield 2 and Tegra 5 Announcement 01/06 @ CES

Devs ask for all kinds of silly things, just look at Kickstarter... nice fallacy 😉

If DICE had been able to speed up Battlefield 4 by 45% by using OpenGL, instead of helping to implement a whole new API, they would have done just that.

It's about the featureset, nice fallacy 😉

So OpenGL ES has the same featureset as OpenGL or DirectX? Right.
 
Devs ask for all kinds of silly things, just look at Kickstarter... nice fallacy 😉

If DICE had been able to speed up Battlefield 4 by 45% by using OpenGL, instead of helping to implement a whole new API, they would have done just that.



So OpenGL ES has the same featureset as OpenGL or DirectX? Right.
Yes, but on which OS? Not even 5% of users are on full OpenGL, and iOS doesn't support it. AMD PR, NVIDIA PR, and Intel PR slides are one thing; actual results are another.

You really think developers went to AMD and asked for an API that not even 10% of users can use?
 
If DICE had been able to speed up Battlefield 4 by 45% by using OpenGL, instead of helping to implement a whole new API, they would have done just that.

That was based on an AMD marketing slide. If history has taught us anything, it is this: take the claim from an AMD marketing slide, expect about 50% of it, and that will likely match reality. AMD will, without exception, claim the world in PowerPoints, and those claims never come to fruition. This is why I can't take AMD's marketing seriously. Bulldozer, anyone? I seem to remember BD having some pretty inflated claims in its marketing slides. I would suggest that it isn't wise to take such slides at face value, given AMD's history.

So I would basically take "45%" with a grain of salt. But we'll see. When BF4 Mantle benchmarks hit, that is when I'll pay attention. I won't pay attention to a marketing slide. If it does indeed increase BF4 performance by 45%, that would be great for competition. Must have benchmarks though. Real benchmarks tested by outside reviewers.

We'll see. In any case, why the heck are we talking about this garbage in the Denver / Tegra K1 thread? Mantle in a Tegra K1 thread? Really?
 
If DICE had been able to speed up Battlefield 4 by 45% by using OpenGL, instead of helping to implement a whole new API, they would have done just that.

Dice is against OpenGL.
On the other hand Epic has no problem with OpenGL.


So OpenGL ES has the same featureset as OpenGL or DirectX? Right.
Tegra K1 supports OpenGL 4.4, so it's fully compatible with any other platform out there.
 
Dice is against OpenGL.
On the other hand Epic has no problem with OpenGL.

Any evidence for that? And if so, why are they against it? (But yeah, this is pretty off topic, so I'll drop it.)

Tegra K1 supports OpenGL 4.4, so it's fully compatible with any other platform out there.

I was under the impression that Android only allowed OpenGL ES, and not full OpenGL? Not a game dev though, so I could be wrong.

Back on topic, the Unreal Engine 4 demo is available here: http://youtu.be/GkJ5PupShfk?t=41s The lighting features are impressive, but the framerate and general image quality leave a lot to be desired. I suspect that developers would ditch some of those lighting effects to get the framerate up.
 
So it's basically a GeForce GT 635. (down clocked of course)

No, not at all. The Kepler.M GPU in Tegra K1 will have clock operating frequencies up to about 1GHz (!). The perf. per watt is also extremely high (roughly 2-3x higher than we have seen from prior GPU’s), which is remarkable given a 28nm fabrication process. Many people here didn't believe that Kepler.M could achieve this level of perf. and efficiency on this process node.
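For reference, those throughput claims fall straight out of core count times clock. A quick sanity check in Python (192 CUDA cores is the count quoted later in this thread; the ~950 MHz clock is my assumption within the "up to about 1GHz" range, not a confirmed spec):

```python
# Peak FP32 throughput: cores * 2 FLOPs per clock (fused multiply-add) * clock (GHz).
# 192 cores = one Kepler SMX; the 0.95 GHz clock is an assumption for illustration.
def peak_gflops(cuda_cores, clock_ghz):
    return cuda_cores * 2 * clock_ghz

print(peak_gflops(192, 0.95))  # ~365 GFLOPS
```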

The biggest news was really the inclusion of dual 64-bit Denver CPU cores in a pin-compatible Tegra K1 variant due in 2H 2014. The Denver core's die size will be much, much larger than a Cortex A15's (implying superior IPC), clock operating frequency will be higher still, and there is no battery saver core.

On a side note, I expect Kepler.M's efficiency gains to carry through to Maxwell GPU's too. Investing in Tegra and ultra-mobile will likely prove to be a wise decision by NVIDIA even with some growing pains along the way.
 
NTMBK said:
I was under the impression that Android only allowed OpenGL ES, and not full OpenGL? Not a game dev though, so I could be wrong.

Android doesn't have an artificial limitation like that. OpenGL ES is a subset of OpenGL.

FWIW, the PS3 offered PSGL, an OpenGL ES-based API (though most devs targeted libGCM directly).

Back on topic, the Unreal Engine 4 demo is available here: http://youtu.be/GkJ5PupShfk?t=41s The lighting features are impressive, but the framerate and general image quality leave a lot to be desired. I suspect that developers would ditch some of those lighting effects to get the framerate up.

These UE4 demos were run on prototype ultra mobile devices (at 1080p native screen resolution according to a rep's comments in a YouTube video), and are likely not very well optimized yet with respect to game demo code and GPU driver code.
 
The shocking thing is that even Kaveri offers only about twice the GFLOPS of Tegra K1.

Wrong.

Kaveri has a spec of 856 GFlops.



Regardless,

1. Comparing real world gaming performance between AMD and NV strictly based on Gflops? Total fail. Explained below.

2. Top of the line Kaveri APU will have 512 GCN SPs. The performance should come very close to HD7750. On the other hand, the GPU in K1 only has 192 CUDA cores or nearly half the power of GT640. GK107 features a single graphics processing cluster containing dual SMX shader multiprocessors, or double that of K1's GPU.

3. HD7750 pummels GT640 into the ground, which means Kaveri's GPU should be at least 3.5x faster than Tegra 5.



4. You didn't even talk about the massive difference in CPU power between 4 Steamroller cores at 4.0ghz and Denver CPU cores. Obviously this comparison is unfair since Kaveri has a TDP of 95W but you are the one who went there comparing GFlops on paper/Kaveri vs. K1.

K1 uses only 5W and delivers 1/3 of the power of the Xbox One from the GFLOPs side.

Exact same mistake as before. Comparing GPU performance from different architectures based on Gflops. The GPU in Xbox One has 768 Stream Processors. It is very similar in performance to HD7790. As such, it will trounce the K1 chip by 4x in gaming performance considering HD7790 is at least 2x faster than GT640.

You again didn't even bring up the power of 8 Jaguar cores vs. 2 Denver cores. Sure, it is impressive for a 5W chip but Tegra 5 is far from the power of Xbox 1 and PS4. There is no question that NV has massively improved Tegra 5 from Tegra 4.

NV compares K1's GPU power to Cyclone in A7 but that's "old generation in tech terms" since the 28nm SOC ship has sailed. This year, A8 is expected to be made on 20nm and Tegra 5 will find itself competing against newer chips on lower nodes. If NV introduced Tegra 5 on 28nm before A7 debuted, it would have been uber impressive. Instead, NV seems to have an excellent architecture but they have a process node battle on their hands since very soon K1 will find itself competing against 20nm chips.

Also, not even sure why you conveniently chose to use Xbox 1, which costs $100 more than PS4, yet the GPU in PS4 is nearly 50% faster than in XB1. If we are going to compare modern SOCs to current generation consoles, might as well use PS4 as the benchmark. It will likely take 3-4 years before an SOC has the power of PS4. By that time, PS4 will be near half of its life cycle.

More importantly, even if you had a smartphone that had 100x the power of PS4, the end user experience is so vastly different, the comparison isn't really relevant from a consumer's point of view. My smartphone can produce graphics far superior to NES, SNES and N64 but as a gaming device, compared to those consoles, it's trash.
 
Yes, I know that ES 3.0 is a strict subset of OpenGL, but that doesn't change the fact that OS support is needed to enable full OpenGL.

Think logically about what you are saying. All the major ultra mobile GPU vendors (including Qualcomm and ARM) are moving towards ULP graphics with DX11 and OpenGL 4.x support. Game developers are already familiar with these industry standard APIs. Android is the fastest growing and most pervasive mobile OS. It is just a matter of time before major game devs target Android with OpenGL 4.x (in addition to the other platforms they target today).
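Incidentally, the two API families are easy to tell apart at runtime: the ES spec requires an ES context's GL_VERSION string to start with "OpenGL ES", while desktop OpenGL returns a bare "major.minor" prefix. A minimal sketch (pure string handling, no GL context needed; the sample version strings are illustrative):

```python
def is_opengl_es(gl_version):
    """Per the OpenGL ES spec, GL_VERSION from an ES context begins with
    "OpenGL ES"; desktop OpenGL contexts return a bare "major.minor" prefix."""
    return gl_version.startswith("OpenGL ES")

print(is_opengl_es("OpenGL ES 3.0"))       # an ES context
print(is_opengl_es("4.4.0 NVIDIA 331.38")) # a desktop GL context
```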
 
4. You didn't even talk about the massive difference in CPU power between 4 Steamroller cores at 4.0ghz and Denver CPU cores. Obviously this comparison is unfair since Kaveri has a TDP of 95W but you are the one who went there comparing GFlops on paper/Kaveri vs. K1.

I don't think anyone expected discrete desktop-level graphics performance in a tablet chip. But you stated the key difference yourself: Kaveri is a desktop LGA chip with a 65-95W TDP, versus a 5W TDP chip that will be used in 5 inch and larger phablets/tablets. You will never see Kaveri in the same form factors as Tegra K1. It won't deliver discrete desktop-level performance, but then again, it isn't a desktop LGA CPU. The fact that Tegra K1 is about half the performance of a desktop LGA Kaveri, despite being a tablet SOC, is a sad state of affairs for Kaveri. Will you ever see this 65W TDP Kaveri in something the size of a 7 inch tablet? Absolutely not. This is a desktop LGA chip we're talking about. Good grief.

Going back to Kabini, the 15W TDP variant delivered 180 GFLOPS at 15W. I don't think Kaveri will ever hit a 5W TDP, and even at 15W it would still have worse graphics performance than Tegra K1. The bottom line is that you're comparing apples to pineapples. Kaveri is a desktop LGA CPU; even though its performance per watt is poor, if it didn't exceed the performance of K1, something would be severely wrong. Hello? A desktop LGA CPU versus a 5W TDP tablet SOC.
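Putting the perf-per-watt gap in numbers, using the figures quoted in this thread (Kabini's 180 GFLOPS at 15W is from this post; K1's ~365 GFLOPS at a 5W TDP is NVIDIA's claim, not a measurement):

```python
# GFLOPS per watt = peak throughput / TDP. Both inputs are vendor figures
# quoted in this thread, not independent measurements.
def gflops_per_watt(gflops, tdp_watts):
    return gflops / tdp_watts

kabini = gflops_per_watt(180, 15)  # 12.0 GFLOPS/W
k1 = gflops_per_watt(365, 5)       # 73.0 GFLOPS/W (claimed)
print(kabini, k1)
```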

Wake me up when AMD has an applicable SOC for the tablet market. The A4-5000 was a massive failure which didn't get any meaningful design wins, while Bay Trail basically mopped up the Windows 8.x tablet scene, and Qualcomm won the mobile market due to LTE integration. Tegra 4 didn't do amazingly well, but if these performance stats are true on the K1 -- make no mistake, it will be a heck of a mobile chip. It will far exceed anything on the market right now in terms of graphics performance among all ARM SOCs, including the A7.

IMO, the key reason Qualcomm "won" the mobile market in 2013 was having widespread CDMA compatibility and LTE integration. The performance was fantastic as well, but it was not the only mobile SOC with great performance: Bay Trail beat it on the CPU front, and Tegra 4 beat it in graphics. Then there's the A7, but it isn't meaningful to compare against the A7, since the A7 is iOS-only; no other SOC will ever run iOS, and the A7 will never run Android. Anyway, once the other players have LTE integration with CDMA compatibility, which WILL happen in 2014, Qualcomm won't retain the lead they had in 2013 quite as easily. LTE/CDMA integration was literally the biggest reason why Qualcomm succeeded.
 
@ RussianSensation: ignoring the marketing billboard and spam that was just put up in this Tegra thread, try learning about Kepler.M before jumping in. Tegra K1 will have up to 365-384 GFLOPS GPU throughput. So Kaveri would have slightly more than 2x greater throughput at best in comparison. TK1 has a 5W TDP, so it has far superior perf. per watt compared to Kaveri. In fact, comparing TK1 to Kaveri is insane because TK1 will work well in high end smartphones and thin fanless tablets while Kaveri is meant for much larger devices.
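Taking the figures in this thread at face value (856 GFLOPS for Kaveri, 365-384 GFLOPS for TK1, both vendor claims), the ratio works out as stated:

```python
# Ratio of claimed peak throughput; both numbers are vendor figures
# quoted earlier in this thread, not measured results.
kaveri_gflops = 856
k1_low, k1_high = 365, 384

print(kaveri_gflops / k1_high)  # best case for K1, roughly 2.2x
print(kaveri_gflops / k1_low)   # worst case for K1, roughly 2.3x
```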

And FWIW, your comparison of GT 640 to HD 7750 is completely disingenuous because the former uses DDR3 while the latter uses GDDR5 for far superior memory bandwidth. Kaveri will perform like an HD 7750 with DDR3: way slower than the GDDR5 discrete variant due to dramatically lower bandwidth.
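The bandwidth difference is simple arithmetic: bus width times effective data rate. The clocks below are the commonly listed reference specs for these cards and are my assumptions, not figures from this thread:

```python
# Peak memory bandwidth = (bus width in bytes) * effective transfer rate.
# 128-bit buses on both cards; effective clocks are assumed reference specs.
def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

gt640_ddr3 = bandwidth_gb_s(128, 1782)    # ~28.5 GB/s
hd7750_gddr5 = bandwidth_gb_s(128, 4500)  # ~72 GB/s
print(gt640_ddr3, hd7750_gddr5)
```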
 
Interesting event from Nvidia. I'm reading through it on Anandtech and it's hard to wade through all the marketing BS. It's pretty thick today too, so hopefully Anand can get his hands on some real information and present something. I did like the marketing slide of the UE4, A7 vs. TK1... A7 DNR! ... TK1... INFINITY!!!

I want to see more on the car integration stuff and the gauges.
 
The crop field portion at the end of the live stream was literally the stupidest marketing stunt I've ever seen. I'm not sure if that really happened or what, but, lame.

You know, i'm with you on the marketing. IMO, the best bet is letting the actual products do the talking. I'll be interested in K1 when the real product hits - if the performance matches the claims, it will be a great SOC. To that end I haven't found nvidia's marketing to be disingenuous with respect to performance or Perf/watt in the past (unlike AMD's powerpoint marketing), but of course K1 isn't out yet. We'll see I guess. It will be great if the claims are met.
 
The crop field portion at the end of the live stream was literally the stupidest marketing stunt I've ever seen. I'm not sure if that really happened or what, but, lame.

You know, i'm with you on the marketing. IMO, the best bet is letting the actual products do the talking. I'll be interested in K1 when the real product hits - if the performance matches the claims, it will be a great SOC. To that end I haven't found nvidia's marketing to be disingenuous with respect to performance or Perf/watt in the past (unlike AMD's powerpoint marketing), but of course K1 isn't out yet. We'll see I guess. It will be great if the claims are met.



Nvidia runs a great show; the last AMD one was pretty... disjointed. I trust nobody at these events until Anand actually has hardware in hand. It's understandable though; it's PR to generate hype. It comes with the territory of watching these things.

I'm really relieved no information on Maxwell was revealed other than K1 merging into the desktop roadmap. 28nm Maxwell would have really been a downer for me. :'(

I however am making breakfast and will fully concede I may be missing all sorts of revelations right now. D:
 
So all this excitement is about little hand held Gameboy style things? Like a super Nvidia Shield or netbook or something?
 