[OnLS] NVIDIA Maxwell Steam Machine with up to 16 Denver Cores and 1M Draw Calls

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Fun read. That said, brace yourselves for some outrageous claims.

To date, NVIDIA's upcoming Maxwell GPU architecture has been rumored to include up to 8 of NVIDIA's custom-designed Denver 64-bit ARM CPU cores.
Well, a friendly mole from their cloud gaming division has let me know that they are mulling the option of equipping the highest-end Maxwell GPU with 16 Denver cores.

NVIDIA has managed to architect Denver so that it can be manufactured efficiently on the same die, and on the same process, as their high-density, high-end GPUs.
Presumably the trick is that Denver closely resembles a GPU architecture, but has a very powerful instruction set translation unit.

As rumored, that translation unit was first developed for NVIDIA's x86 project years ago, after they licensed Transmeta technology.

So just what is this 16-Denver-core Maxwell beast capable of? My source told me one number: 1 million draw calls in DirectX 11 and OpenGL 4.4.
Just for reference, AMD claims that their upcoming low-level API Mantle will be able to issue up to 150,000 draw calls.

Presumably NVIDIA's new hardware beast will be able to obliterate AMD's Mantle API, and with no code changes required from game developers, as it will all be done in hardware.

You may ask yourself: what game developer would need so many draw calls?
That figure is simply the maximum the 16 Denver cores enable, but the cores can be used for much more.

NVIDIA is working on integrating the Denver CPU cores into their GameWorks code library, which game developers can freely incorporate into their games.
They are porting the library to OpenGL and SteamOS.


http://www.onlivespot.com/2014/01/nvidia-maxwell-steamos-machine-with-up.html
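
For anyone wondering what a draw call even is: it's one submission of geometry to the API. Here's a minimal sketch of a naive per-object render loop (OpenGL-flavored C++; the names and structure are mine, not from the article):

  // Naive render loop: one draw call per object.
  // Assumes a valid GL 4.x context and that each object's VAO and
  // shader state have already been set up elsewhere.
  #include <vector>
  #include <GL/gl.h> // in practice loaded via GLEW/glad

  struct Object {
      GLuint  vao;        // vertex array object for this mesh
      GLsizei indexCount; // number of indices to draw
  };

  void renderFrame(const std::vector<Object>& objects) {
      for (const Object& obj : objects) {
          glBindVertexArray(obj.vao);
          // Each glDrawElements is ONE draw call. Per-call driver/CPU
          // overhead, not the GPU, is what caps today's games at a few
          // thousand of these per frame.
          glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, nullptr);
      }
  }

The claim, then, is that 16 Denver cores could chew through a million of those submissions instead of a few thousand.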
 

Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
Wouldn't it be awesome to have like two monitors running Windows 7/8 for your games and a third monitor running like Android for streaming stuff off the built-in Denver cores? I don't know why I keep wanting that, but hey, why not?
 

Venomous

Golden Member
Oct 18, 1999
1,180
0
76
*YAWN*

What's amazing about this place is that the word AMD mentioned in a thread about upcoming rumors or releases creates an all-out war... yet when it's unrealistic FUD surrounding Nvidia, it's absolutely quiet...

Looking at Nvidia's past ARM processors makes this an absolute joke.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Meh, it's not the word, but the contents & claims when AMD is mentioned. NV has a much better history of delivering on their claims and you know it...
But hey, at least you got the ball rolling, eh?
 

digitaldurandal

Golden Member
Dec 3, 2009
1,828
0
76
@Venomous Really? Are you sure? Because you are only the 4th post and you are already calling BS. I guess the first 2 replies let you down.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
From what I'm seeing this is not even a news/review site, but rather a blog by a random individual that for the most part has been about whatever games onlive.com has added to their service.

Now the two most recent articles have taken a turn toward op-ed pieces spinning positive on nvidia and negative on Microsoft and AMD.

Meh, seems like just a random blog/possible viral shill site for onlive and maybe nvidia now? The claims certainly are asinine and no source is cited. Just the usual pile of garbage a random 'author' decides to excrete onto their internet blog.
 

ph2000

Member
May 23, 2012
77
0
61
Presumably NVIDIA's new hardware beast will be able to obliterate AMD's Mantle API, and with no code changes required from game developers, as it will all be done in hardware.
You may ask yourself: what game developer would need so many draw calls? That figure is simply the maximum the 16 Denver cores enable, but the cores can be used for much more.
NVIDIA is working on integrating the Denver CPU cores into their GameWorks code library, which game developers can freely incorporate into their games. They are porting the library to OpenGL and SteamOS.
a bit of a contradiction

so now NV is adding processors to their GPUs
AMD is changing their GPU architecture to be more CPU-like for HSA
Intel is making GPUs from CPU tech

see a trend here :p
 

tential

Diamond Member
May 13, 2008
7,348
642
121
@Venomous
I don't see how this is similar to AMD at all.
Using Mantle as the most recent AMD example, AMD made claims on marketing slides and has continued to do so for months. Meanwhile, this is, from my understanding, a blog that got its material from an "inside source".
Two different situations. AMD could learn a thing or two from Nvidia when it comes to releasing a product. I do like AMD, though, strictly on hardware, but they're behind Nvidia in every other aspect.

Also, what is this thing? Is it an Android CPU/GPU or is it a GPU for Windows/Linux? I'm a little confused, only skimmed.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
If true, this is further proof that DX is a huge bottleneck for continued high performance gaming. AMD is tackling draw calls with Mantle and Nvidia is tackling draw calls with onboard co-processors.
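
Today the standard workaround is to cut the number of calls yourself, e.g. instancing: draw N copies of a mesh in one call instead of N calls. Rough sketch (my code; treeVao and treeIndexCount are hypothetical handles):

  // Instancing: replaces 10,000 draw calls with one.
  // Assumes a GL 3.3+ context and that per-instance transforms are
  // already in a buffer wired into treeVao.
  glBindVertexArray(treeVao);
  glDrawElementsInstanced(GL_TRIANGLES, treeIndexCount, GL_UNSIGNED_INT,
                          nullptr, 10000 /* instances */);

Mantle, and Denver co-processors if this rumor is real, attack the same overhead from the other side: making each call cheaper rather than making fewer calls.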
 

TreVader

Platinum Member
Oct 28, 2013
2,057
2
0
Anything that eats up more silicon is just going to cut the number of shaders on die. There is zero reason to be excited about this. Look forward to seeing fewer performance gains per die shrink from nvidia.


This is an awful way to counter mantle. Bad nvidia. Bad.
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
1 million draw calls huh? Good thing all the Nvidia guys debunked the need for draw calls in the Mantle thread. Let's all take random blogs and an "inside source" as real sources :thumbsup:
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
At this point, I think the biggest question on the table is how potent the Denver cores are going to be (in general). We've known about the ISA translation for a few weeks now, but it's anyone's guess if it'll be the next Itanium or not. :p

In a way, it is also rather odd to think of a GPU as including a CPU. Doesn't that technically make our GPU a ridiculously beefy SoC?

a bit of a contradiction

Actually, it's not a contradiction. The whole "no change to code" is strictly for the Denver cores handling draw calls, but adding the functionality to GameWorks would be for developers that want to leverage the Denver cores for other things.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Ok, let's just say this is true, and the CPUs in isolation can put out a million draw calls a second...

The game still has to know how to send those draw calls to those co-processors instead of the main CPU, which means it's still having to process those draw calls. That, or it batches up what needs to be done and then sends it to those CPUs.
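
Something like this, I mean (purely hypothetical C++, just to sketch "batch it up and hand it off"; none of these types or functions are a real API):

  // Hypothetical hand-off: the host CPU records cheap draw commands,
  // then ships the whole batch to the on-die ARM cores once per frame.
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct DrawCmd {
      uint32_t mesh;       // which geometry to draw
      uint32_t material;   // which shader/texture state to bind
      uint32_t indexCount; // how many indices
  };

  // Stub standing in for the hardware hand-off; not a real driver call.
  void sendToCoprocessors(const DrawCmd* cmds, std::size_t count);

  std::vector<DrawCmd> frameBatch;

  void queueDraw(uint32_t mesh, uint32_t material, uint32_t indexCount) {
      frameBatch.push_back({mesh, material, indexCount}); // cheap for the game
  }

  void submitFrame() {
      // The expensive per-call validation/translation would run on the
      // 16 Denver cores, but the host still has to build and ship this
      // buffer every frame, which is still real work for the host.
      sendToCoprocessors(frameBatch.data(), frameBatch.size());
      frameBatch.clear();
  }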

Just because there is a claim (if it's even true; this is not a reputable news site by any means) that those CPUs *could* handle that load does not mean there is a way to feed them at that speed.

EDIT: It's possible I misunderstood the first post after re-reading. It could be that the 16 ARM cores are the main CPU and there is no x86 CPU at all. In which case OpenGL is not being used and this is some new API? Or is this some benchmark where, if those cores do nothing else at all, they can do a million? Because I am sure an i7 could handle a LOT if that's the only thing it does.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
If true, this is further proof that DX is a huge bottleneck for continued high performance gaming. AMD is tackling draw calls with Mantle and Nvidia is tackling draw calls with onboard co-processors.

It's obvious from what the game devs are saying that DX is a huge resource hog. It will be interesting to see if we still have people saying games aren't CPU-bottlenecked and draw-call-limited now that both companies are addressing it (assuming there are facts behind this rumor, of course). This should put that argument to rest.

So, what do we prefer: brute-forcing DX, or a newer, more efficient API? Could this help with thread scheduling in DX? Does it address the massive overhead DX has for some things, like AA, for example?

At least this should mean that devs won't have to cripple games draw-call-wise so they'll still run well on nVidia.

Ok, let's just say this is true, and the CPUs in isolation can put out a million draw calls a second...

The game still has to know how to send those draw calls to those co-processors instead of the main CPU, which means it's still having to process those draw calls. That, or it batches up what needs to be done and then sends it to those CPUs.

Just because there is a claim (if it's even true; this is not a reputable news site by any means) that those CPUs *could* handle that load does not mean there is a way to feed them at that speed.

EDIT: It's possible I misunderstood the first post after re-reading. It could be that the 16 ARM cores are the main CPU and there is no x86 CPU at all. In which case OpenGL is not being used and this is some new API? Or is this some benchmark where, if those cores do nothing else at all, they can do a million? Because I am sure an i7 could handle a LOT if that's the only thing it does.
Reading your post, it dawned on me that there is no context to the 1M draw calls. If it's per second, that's not really that fast. Mantle can do ~100K per frame. Help me with my math, but isn't that ~6M draw calls a second @ 60fps? We need to see the context for this "1M draw calls". Or am I missing something?
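
(Math check: 100,000 draw calls/frame × 60 frames/s = 6,000,000 draw calls/s, so yes, ~6M/s. The rumored 1M is only impressive if it's per frame, not per second.)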
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
So according to the article, they haven't decided whether to include a 16-core CPU on their high-end parts, but they already know it will do 1 million draw calls.

Cell phone cores doing 1 million draws when a 4770K can do around 15K in the best-case scenario? I will believe that when it happens.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
So today's games use 3,000-6,000 draw calls, and that brings the strongest Intel CPUs to their knees with DirectX.

But somehow, without any software modifications to DirectX or coding changes to games, a few ARM processors can suddenly do 1 million draw calls?

It sounds almost too good to be true.


I'm looking forward to seeing what the Maxwell card(s) do once they are released: seeing reviews put those ARM processors to use, and seeing if this holds any truth.


The 750 Ti was supposed to be a Maxwell and right around the corner, right? Looking forward to that now.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
From what I'm seeing this is not even a news/review site, but rather a blog by a random individual that for the most part has been about whatever games onlive.com has added to their service.

Now the two most recent articles have taken a turn toward op-ed pieces spinning positive on nvidia and negative on Microsoft and AMD.

Meh, seems like just a random blog/possible viral shill site for onlive and maybe nvidia now? The claims certainly are asinine and no source is cited. Just the usual pile of garbage a random 'author' decides to excrete onto their internet blog.

I agree with you, but would like to add to the last part. It could just be a guy out for page hits. Sensational BS.
There is no way I would take any of that seriously.

Fake, totally fake.
We are so far out from Big Maxwell it isn't even funny. No reason to believe any crazy talk now.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Source is some dude's blog... yeah, I'll pass

Not just some dude's blog. Some nutjob's blog, who thinks Microsoft can give the go-ahead for Mantle as if they have that control over Windows.

When his other articles are ridiculous and without factual basis, there's no reason to think this one has any factual basis.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
1 million draw calls huh? Good thing all the Nvidia guys debunked the need for draw calls in the Mantle thread. Let's all take random blogs and an "inside source" as real sources :thumbsup:

Here is one debunk. It's OpenGL, and it comes from NV itself:

Page 40-
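
If I remember the deck right, the point is collapsing many draws into a single call, e.g. multi-draw indirect from GL 4.3. The sketch is mine, not NV's slide code; indirectBuf and drawCount are hypothetical:

  // One API call issues drawCount separate draws; the per-draw parameters
  // (DrawElementsIndirectCommand structs) already live in a GPU buffer,
  // so the per-call CPU overhead is paid once for the whole batch.
  glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
  glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                              nullptr,    // commands come from the bound buffer
                              drawCount,  // e.g. thousands of draws
                              0);         // 0 = tightly packed commands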
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
One Denver core has roughly the same power consumption and die area as a pair of Cortex A15s, based on the fact that NVidia swapped out 4 A15s for 2 Denvers in the Denver K1. Shoving the equivalent of 32 A15s onto a GPU die would eat up a hell of a lot of die space and power budget, and hence significantly limit the number of GPU cores. I'm really not buying it.