[OnLS] NVIDIA Maxwell Steam Machine with up to 16 Denver Cores and 1M Draw Calls

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Meh, it's not the word, but the contents & claims when AMD is mentioned. NV has a much better history of delivering on its claims and you know it....
But hey, at least you got the ball rolling, eh?

That's pretty hilarious. Where have you been on the recent Tegra launches?
The CEO of NV is a known blowhard who is wired to overpromise.


Still, if this rumor were true, that'd be pretty awesome.
 

el etro

Golden Member
Jul 21, 2013
1,584
14
81
Fun read. That said, brace yourselves for some outrageous claims:

To date NVIDIA's upcoming Maxwell GPU architecture has been rumored to be coming with up to 8 NVIDIA custom designed Denver 64-bit ARM CPU cores.
Well, a friendly mole from their cloud gaming division has let me know that they are mulling the option of equipping the highest-end Maxwell GPU with 16 Denver cores.

NVIDIA has managed to design the Denver architecture in such a way that it can be efficiently manufactured on the same die and process as their high-density, high-end GPUs.
Presumably the trick is that Denver actually very closely resembles a GPU architecture, but has a very powerful instruction set translation unit.

As rumored, that translation unit was first developed for NVIDIA's x86 project years ago, after they licensed Transmeta technology.

So just what is the 16-Denver-core Maxwell beast capable of? My source told me one number: 1 million draw calls in DirectX 11 and OpenGL 4.4.
Just for reference, AMD claims that their upcoming low-level API Mantle will be able to issue up to 150,000 draw calls.
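For scale, here is some back-of-the-envelope arithmetic (mine, not from the article): at 60 fps, those per-second figures translate into the following per-frame draw-call budgets.

```python
# Per-frame draw-call budgets at 60 fps, from the per-second figures
# in the rumor (1M for Maxwell) and AMD's Mantle claim (150K).
FPS = 60
rumored_maxwell_calls_per_sec = 1_000_000
claimed_mantle_calls_per_sec = 150_000

maxwell_per_frame = rumored_maxwell_calls_per_sec // FPS
mantle_per_frame = claimed_mantle_calls_per_sec // FPS

print(maxwell_per_frame, mantle_per_frame)  # 16666 2500
```

So the rumored figure would mean roughly 16,666 draws per frame versus Mantle's claimed 2,500, if both numbers are taken at face value.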

Presumably NVIDIA's new hardware beast will be able to obliterate AMD's Mantle API, and with no code changes required from game developers, as it will all be done in hardware.

You may ask yourself: what game developer would need so many draw calls?
One million is simply the maximum number of draw calls that the 16 Denver cores enable, but they can be used for much more.

NVIDIA is working on integrating the Denver CPU cores into their GameWorks code library that game developers can integrate freely into their games.
They are porting the library to OpenGL and SteamOS.


http://www.onlivespot.com/2014/01/nvidia-maxwell-steamos-machine-with-up.html


Much worse than Fudzilla! It reads like a 16-year-old's writing!




----------------------




*GameWorks is made with intentions as self-serving as Mantle's, but the latter can be much more helpful to the PC gaming scene.*




1 million draw calls, huh? Good thing all the Nvidia guys debunked the need for draw calls in the Mantle thread. Let's all take random blogs and "inside sources" as real sources :thumbsup:

They are the same prophets who stated that AMD traded frame-time performance for absolute FPS numbers, or said that Dual Graphics would not be fixed.
Technological Nostradamuses failing over and over again. :D
 
Last edited:

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Maxwell will most likely be the most noticeable leap in GPU tech since G92 ---> full GT200 and 3870 ----> 4870.

My guess is they will have two different approaches with TSMC's 20nm process, which allows for either substantial performance gains or substantial efficiency gains, or a less drastic mixture of the two. Steambox would be a perfect fit.

The narrative may be completely backwards as far as who is "answering" who in regards to how data is processed.
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
Very funny read. 1M calls with what OS? What software overhead? Also, he is comparing raw hardware performance with average API performance running on top of an OS.
IMHO, a pile of random BS.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
Maxwell will most likely be the most noticeable leap in GPU tech since G92 ---> full GT200 and 3870 ----> 4870.

My guess is they will have two different approaches with TSMC's 20nm process, which allows for either substantial performance gains or substantial efficiency gains, or a less drastic mixture of the two. Steambox would be a perfect fit.

The narrative may be completely backwards as far as who is "answering" who in regards to how data is processed.

Biggest leap in GPU tech without even having a die shrink? I would call that somewhat optimistic thinking to say the least.
 

ph2000

Member
May 23, 2012
77
0
61
was wondering what tech patent that nvidia get from transmeta
is it their translation engine ?
wondering if the translation is done on the gpu
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
My guess is they will have two different approaches with TSMC's 20nm process, which allows for either substantial performance gains or substantial efficiency gains, or a less drastic mixture of the two.
They've been doing that since 40nm, so er...solid guess? :p

was wondering what tech patent that nvidia get from transmeta
nV and Transmeta have a decade-old history, now. Transmeta had patents, but their patents were mostly traditional in nature, relating to physical design. nVidia got a license to LongRun in '08, so it's quite possible that Kepler may even be using them. Sony and Toshiba have license to those patents, too.

Code morphing was partly a way around getting patent licenses in the first place. They don't need anything special for the code morphing, nor a simple CPU to use it; at least nothing they don't already have with GPU cross-licensing agreements. They could, however, have ARM-specific hardware in there as well, rather than doing it 100% in software.

is it their translation engine ?
Is what their translation engine?
wondering if the translation is done on the gpu
Code translation is not going to run inside the graphics-processing FUs. That would make no sense, as that process needs to be low latency. However, having them share a cache hierarchy (as opposed to what AMD has, where the CPU and GPU are totally separate until the memory controller), including keeping coherence with each other, would make sense--the CPU part could stand to help the GPU part much more than the other way around.
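The translate-once, cache, re-execute idea behind code morphing can be sketched in a few lines. This is purely illustrative Python: the guest "opcodes", the `translate` function, and the block addresses are all made up, and nothing here reflects Transmeta's or NVIDIA's actual implementation.

```python
# Toy illustration of code morphing: each guest code block is translated
# to host form once, cached by address, and reused on later executions.
translation_cache = {}  # guest block address -> translated host block


def translate(guest_block):
    """Stand-in for an expensive binary-translation pass."""
    return [f"host_{op}" for op in guest_block]


def execute(addr, guest_block):
    # Cache miss: pay the translation cost once. Cache hit: reuse it.
    if addr not in translation_cache:
        translation_cache[addr] = translate(guest_block)
    return translation_cache[addr]


first = execute(0x1000, ["add", "mul"])
second = execute(0x1000, ["add", "mul"])
assert first is second  # second call reused the cached translation
```

The payoff is that hot loops amortize translation cost to near zero, which is why the latency-sensitive part is the cache lookup, not the translation itself.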
 
Last edited:

FalseChristian

Diamond Member
Jan 7, 2002
3,322
0
71
So, with Maxwell and its Denver cores we don't even need a dedicated CPU anymore, then?

Why not just use 64-128 ROPs and 256 texture units running at 1500MHz, with GDDR5 at 10000MHz on a 384-bit bus, giving a bandwidth of 479.9 Gigabytes/sec?

Have it overclock to 2200MHz on the core and 12500MHz on the vRAM.

What I'm trying to say is that NVidia should just go with brute force. Make the GTX 880 Ti 3x as fast as the GTX 780 Ti.

I'd buy that for a dollar.
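The quoted bandwidth figure does follow from the standard formula, bus width in bytes times effective data rate (my arithmetic; the 10000MHz effective GDDR5 rate and 384-bit bus are the numbers from the post above).

```python
# Memory bandwidth = (bus width in bytes) x (effective data rate).
bus_bits = 384
effective_rate_mhz = 10_000  # effective GDDR5 data rate from the post

bandwidth_gb_s = (bus_bits / 8) * effective_rate_mhz * 1e6 / 1e9
print(bandwidth_gb_s)  # 480.0 decimal GB/s; the post's 479.9 suggests a
                       # slightly lower effective clock or rounding
```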
 

Sisyphean

Junior Member
Jan 22, 2014
9
0
0
Isn't solving this software-related problem by just brute-forcing additional hardware bad for the industry? How much are these extra on-board processors going to cost the consumer? I'm all for performance upgrades, but perhaps this is going about it the wrong way...
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
This is good news. It would be really, really [redacted] lovely if [redacted] inventions like DX were reduced to their natural size. To put it another way, DX has been a large portion of Microsoft's lifeblood, and the Xbox One is clearly not very popular (it didn't deliver much good, especially given the unreasonable price), so now there is even more hope that Microsoft will become largely irrelevant just from relying on IP too much (without IP even being reduced).

And then perhaps companies will learn that those needing IP to prosper are not the best.

Profanity isn't allowed in the technical forums; asterisks don't change this.
-- stahlhart
 
Last edited by a moderator: