Nvidia’s Drive PX 2 prototype allegedly powered by Maxwell, not Pascal


MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
So now it's trolling to point out how a company announces a new product while using old-product chips in the public announcement? Man, I didn't get the memo on how to do new product launches/business introductions. Guess for the iPhone 7, Broadwell-E and the new E-Class Benz, companies should show off last gen's products and claim them to be next generation so they can see the reaction of "Internet Trolls".

If Intel had launched Skylake in August 2015 with paper slides and then flaunted an i7-4790K as the next gen's product, it would have been SO much more professional, way better than having real demos running in real time, powered by real next-gen chips. Fake mock-ups and PowerPoint slides are the next-gen way now. /sarcasm

Also, amusing to see NV using water cooling on a 250W TDP product when the competitor was mocked for months for "requiring" LC to function.

Good to see NV continuing to push into other industry sectors, though, since the glory days of record-volume graphics card sales are well behind the graphics card industry.

I don't see this as a big deal. If they were announcing a new Pascal GPU and it had a Maxwell chip on there it would be an issue, but what they're announcing here is a new product category. Having a development board with substitute chips on it that provides at least partial functionality until the final silicon is done is hardly new. Granted, I wasn't there, so it would be interesting to get Arachnotronic's take on it, but unless JHH claimed that it was Pascal silicon, I wouldn't call it a fake.

Now, it might depend on context, but pointing out that Drive PX 2 is shown with Maxwell silicon is hardly trolling, either.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
I didn't know we needed that much GPU power for AI self-driving cars.
It runs DriveWorks code??

But seriously, no, we don't at all; at least going by the few cars that can already drive themselves, they don't have massive processing capabilities on board at all.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Now, it might depend on context, but pointing out that Drive PX 2 is shown with Maxwell silicon is hardly trolling, either.

I don't think pointing out the substitute hardware is trolling. That is a fact in this case. What I was saying is that by using the substitute hardware they give the trolls something to jump on instead of them trying to trash the actual product based on pictures only. As I pointed out, in the very first post someone already mentioned the lack of HBM. Considering this is a mobile product, even in final form it will probably have little in common with the layout of the Titan X successor, yet somehow all the engineers on this board will know how it will perform anyway. Eliminate speculation by not giving anything away early.
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
I don't think pointing out the substitute hardware is trolling. That is a fact in this case. What I was saying is that by using the substitute hardware they give the trolls something to jump on instead of them trying to trash the actual product based on pictures only. As I pointed out, in the very first post someone already mentioned the lack of HBM. Considering this is a mobile product, even in final form it will probably have little in common with the layout of the Titan X successor, yet somehow all the engineers on this board will know how it will perform anyway. Eliminate speculation by not giving anything away early.

Well, nVidia already announced some processing numbers, so we do have at least an idea of how it will perform. I'm actually surprised by how modest the numbers are; at 8 TFLOPS, the board JHH held up with two GM204 chips is probably as powerful as the final board will be with Pascal silicon. The final version will presumably use smaller dies and less power, but it might not be any more powerful than the current prototype.
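
(A quick back-of-the-envelope check on that, in Python: the core count and boost clock below are the stock desktop GTX 980 figures, so the actual MXM modules likely clock lower, and the 8 TFLOPS target is Nvidia's own quoted number; treat it as a ballpark, not a measurement.)
Code:
# Rough single-precision peak for the prototype board: two GM204 modules.
# Assumes stock desktop GTX 980 clocks, so this is an upper bound.
GM204_CUDA_CORES = 2048          # shaders per GM204
GM204_BOOST_CLOCK_GHZ = 1.216    # stock GTX 980 boost clock (assumption)
FLOPS_PER_CORE_PER_CLOCK = 2     # one fused multiply-add per core per clock

per_gpu_tflops = GM204_CUDA_CORES * GM204_BOOST_CLOCK_GHZ * FLOPS_PER_CORE_PER_CLOCK / 1000
board_tflops = 2 * per_gpu_tflops
print(f"~{per_gpu_tflops:.1f} TFLOPS per GM204, ~{board_tflops:.1f} TFLOPS for the pair")
print("Nvidia's quoted figure for the final Drive PX 2 board: 8 TFLOPS")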
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I don't actually have a problem with showcasing the Drive PX2 board with Maxwell GPUs instead of Pascal, but just imagine what we would read in these pages if it wasn't an NVIDIA presentation but an AMD one.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
233
106
"Using NVIDIA's DIGITS deep learning platform, in less than four hours we achieved over 96 percent accuracy using Ruhr University Bochum's traffic sign database. While others invested years of development to achieve similar levels of perception with classical computer vision algorithms, we have been able to do it at the speed of light." -- Matthias Rudolph, director of Architecture Driver Assistance Systems at Audi - See more at: http://nvidianews.nvidia.com/news/n...telligence-supercomputer#sthash.mBdrd5oY.dpuf
The way it's meant to be <insert your favorite verb here> :)
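
(For a sense of what that task looks like in code: the sketch below is not DIGITS and is unrelated to Nvidia's or Audi's pipeline; it's just a minimal, illustrative PyTorch classifier for the GTSRB traffic-sign dataset published by Ruhr University Bochum, assuming torchvision 0.12+ for the built-in loader. The hyperparameters are arbitrary.)
Code:
# Illustrative only: a tiny CNN on the GTSRB (Bochum) traffic-sign dataset.
# This is NOT Nvidia's DIGITS workflow, just a sketch of the classification task.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
train_set = datasets.GTSRB(root="data", split="train", transform=tfm, download=True)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = nn.Sequential(                              # GTSRB has 43 sign classes
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 8 * 8, 256), nn.ReLU(), nn.Linear(256, 43),
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")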
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
If I were Nvidia, I wouldn't be using Pascal modules for public mockups either. The product hasn't been officially announced. It makes more sense to have internet trolls mock it as a fake than to have them draw conclusions about the actual product before it is revealed.

They officially announced Pascal at GTC last year.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
They officially announced Pascal at GTC last year.

Why is it necessary on this board to spell out every point in such minute detail that a monkey could understand it? I'm praying you're just arguing for the sake of arguing and not because you didn't understand what I was saying.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
HBM may be flagship only, just like AMD.

GDDR5(X) will rule for a long time in GTX980/970/390/390X and down.
Nope.

HBM 2 for Top tier both brands.
HBM 1 for AMD mid tier and GDDR6 for nVIDIA
GDDR5 for low tier both brands.

GDDR3 is dead, save for an eventual return of VIA S3.
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
Nope.

HBM 2 for Top tier both brands.
HBM 1 for AMD mid tier and GDDR6 for nVIDIA
GDDR5 for low tier both brands.

GDDR3 is dead, save for an eventual return of VIA S3.

I know it's kind of a lost cause, but still...
[Image: xkcd "Wikipedian Protester" comic]
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Nope.

HBM 2 for Top tier both brands.
HBM 1 for AMD mid tier and GDDR6 for nVIDIA
GDDR5 for low tier both brands.

GDDR3 is dead, save for an eventual return of VIA S3.

There is no such thing as GDDR6. And a 4GB limit in midrange or lower is terrible. And why use HBM1 when you can use HBM2?

It seems 4GB will be low, 8GB mid and 16GB flagship. Give or take.
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
There is no such thing as GDDR6. And a 4GB limit in midrange or lower is terrible. And why use HBM1 when you can use HBM2?

It seems 4GB will be low, 8GB mid and 16GB flagship. Give or take.

What do you consider mid and flagship? IE, are you expecting 16GB on Titan Y and 8GB on 1080Ti, or 8GB on 1080, 16GB on 1080Ti and more on the new Titan?
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Nope.

HBM 2 for Top tier both brands.
HBM 1 for AMD mid tier and GDDR6 for nVIDIA
GDDR5 for low tier both brands.

GDDR3 is dead, save for an eventual return of VIA S3.

2048 GCN core, GDDR5 - around 75W GPU - low end.
4096 GCN core, HBM1 - around 130W GPU - mid end.
6144 GCN core, HBM2 - around 200W GPU - high end.

From this lineup they can cut down the parts as much as they like.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
What do you consider mid and flagship? IE, are you expecting 16GB on Titan Y and 8GB on 1080Ti, or 8GB on 1080, 16GB on 1080Ti and more on the new Titan?

If we use current numbers.
4GB 750/950/960 (GDDR5)
8GB 970/980 (GDDR5(X))
16GB 980TI/Titan (HBM2)

Titan with 12GB, 390/390X with 8GB and 960 with 4GB are already there, so to speak.

HBM1 obviously doesn't fit in anywhere with its 4GB limit. And the limit for HBM2 is 16GB.
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
2048 GCN core, GDDR5 - around 75W GPU - low end.
4096 GCN core, HBM1 - around 130W GPU - mid end.
6144 GCN core, HBM2 - around 200W GPU - high end.

From this lineup they can cut down the parts as much as they like.

Do you really think they'll be able to get 2048 GCN shaders into a low end die? That's the same number as full Tonga, which is a 359mm^2 die. Even the rather ROP starved Fiji is 4096 on a 596mm^2 die. If speculation is correct and the small die is smaller than Cape Verde or GK107, packing that many shaders in might be a stretch.
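
(Rough shader-density numbers behind that, in Python; the die sizes are the commonly quoted approximate figures, and the Cape Verde/GK107 sizes are only there as references, not claims about the new parts.)
Code:
# Shaders per mm^2 on 28 nm, using commonly quoted (approximate) die sizes.
chips_28nm = {
    "Tonga (full)": (2048, 359),   # (shaders, die area in mm^2)
    "Fiji":         (4096, 596),
}
for name, (shaders, area) in chips_28nm.items():
    print(f"{name}: {shaders / area:.1f} shaders/mm^2")

# Fitting 2048 shaders into a die smaller than Cape Verde (~123 mm^2) or
# GK107 (~118 mm^2) would need roughly 17 shaders/mm^2, about 3x Tonga's
# 28 nm density, so a straight 2x density gain from 14 nm wouldn't get
# there on its own.
small_die_mm2 = 120
print(f"Density needed for 2048 shaders in {small_die_mm2} mm^2: "
      f"{2048 / small_die_mm2:.1f} shaders/mm^2")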
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
If we use current numbers.
4GB 750/950/960 (GDDR5)
8GB 970/980 (GDDR5(X))
16GB 980TI/Titan (HBM2)

Titan with 12GB, 390/390X with 8GB and 960 with 4GB are already there, so to speak.

HBM1 obviously doesn't fit in anywhere with its 4GB limit. And the limit for HBM2 is 16GB.

Do you have a source on HBM2 being limited to 16GB? Four 8Hi stacks would give a max of 32GB of memory for big Pascal when it comes out.
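
(The capacity math behind those numbers, per the published HBM specs: 2Gb dies for first-gen HBM, 8Gb dies for HBM2, and four stacks per GPU as on Fiji. A small Python sketch:)
Code:
# HBM capacity per stack and per GPU (four stacks), from die density and stack height.
def stack_capacity_gb(dies_per_stack, gbit_per_die):
    return dies_per_stack * gbit_per_die / 8   # gigabits -> gigabytes

configs = [
    ("HBM1 4-Hi (2Gb dies)", stack_capacity_gb(4, 2)),   # 1 GB/stack -> 4 GB (Fiji)
    ("HBM2 4-Hi (8Gb dies)", stack_capacity_gb(4, 8)),   # 4 GB/stack -> 16 GB
    ("HBM2 8-Hi (8Gb dies)", stack_capacity_gb(8, 8)),   # 8 GB/stack -> 32 GB
]
for label, per_stack in configs:
    print(f"{label}: {per_stack:.0f} GB per stack, {4 * per_stack:.0f} GB with four stacks")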
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Hynix haven't been able to create 8Hi stacks yet. This is also why AMD got its headache with Fiji. HBM1 was also supposed to be 8Hi.

If it gets solved, we could see 32GB Pro cards (and maybe Titan). But for now let's just assume the limit is 16GB.

The problem seems to be that 8-Hi stacks need to share buses, unlike 2-Hi and 4-Hi.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Do you really think they'll be able to get 2048 GCN shaders into a low end die? That's the same number as full Tonga, which is a 359mm^2 die. Even the rather ROP starved Fiji is 4096 on a 596mm^2 die. If speculation is correct and the small die is smaller than Cape Verde or GK107, packing that many shaders in might be a stretch.

Well, we do not know the actual die sizes of the next-gen GCN architecture; 14 nm is at least twice as dense as 28 nm, and some say it is a bit denser than that.

So in the end, 360mm^2 / 2 = 180mm^2. But that is the old architecture on the new process; we would need to see whether the new arch also affects die density. In the end I may not be that far off from reality.

P.S. I thought this was the Polaris GPU thread. My mistake.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Absolutely. In the very first post here, someone was already analyzing the design and noted the lack of HBM. Don't reveal anything about the design until it is done and you can reveal benchmarks.

They already did a mock-up with HBM. I think it has GDDR memory on it because that's what it's going to have.

[Image: NVIDIA Pascal GPU chip module]
 

Piroko

Senior member
Jan 10, 2013
905
79
91
Guys, this is the Drive PX2 thread. Also, I'd assume that HBM2 comes with cost savings over HBM1 (fewer lanes to get the same bandwidth, fewer chips to get the same capacity).
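
(A quick sanity check on the "fewer stacks for the same bandwidth" point, assuming the nominal per-pin rates of 1 Gbps for first-gen HBM and 2 Gbps for HBM2, both on a 1024-bit stack interface:)
Code:
# Per-stack bandwidth: 1024-bit interface times the nominal per-pin data rate.
def stack_bandwidth_gbs(pins, gbit_per_s_per_pin):
    return pins * gbit_per_s_per_pin / 8   # Gbit/s -> GB/s

hbm1 = stack_bandwidth_gbs(1024, 1.0)     # 128 GB/s per stack
hbm2 = stack_bandwidth_gbs(1024, 2.0)     # 256 GB/s per stack

target = 512  # GB/s, e.g. Fury X's total memory bandwidth
print(f"Stacks needed for {target} GB/s: HBM1 = {target / hbm1:.0f}, HBM2 = {target / hbm2:.0f}")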
 

Good_fella

Member
Feb 12, 2015
113
0
0
NVIDIA showed off both Tegra and MXM flavors of their Pascal-based module. While the Drive PX 2 didn't feature an actual Pascal GPU, NVIDIA did show that the two discrete Pascal chips which will be housed on the back of the board will be offered in the MXM form factor. Being packed in an MXM-type solution means that this GPU will be housed in a range of desktop and mobility solutions and confirms that NVIDIA will have both HBM2 and GDDR5X GPUs when Pascal hits the market. The GPU shown in the pictures above is presumably based on the Maxwell architecture and looks quite similar to the GeForce GTX 980M graphics chip that is also offered in the MXM package.

http://wccftech.com/nvidia-pascal-gpu-drive-px-2/
 

Krteq

Golden Member
May 22, 2015
1,007
719
136
Hynix haven't been able to create 8Hi stacks yet. This is also why AMD got its headache with Fiji. HBM1 was also supposed to be 8Hi
Nope, HBM(1)s were 4 Hi only according to Hynix materials from November 2014

[Image: SK Hynix HBM DRAM slide]


Hynix and Samsung will provide both 4-Hi and 8-Hi stack HBM2 chips this year.
 

csbin

Senior member
Feb 4, 2013
904
605
136
http://www.extremetech.com/gaming/2...otype-allegedly-powered-by-maxwell-not-pascal

When Nvidia's CEO, Jen-Hsun Huang, took the stage at CES last week he unveiled the company's next-generation self-driving car platform, the Drive PX 2. According to Nvidia, its Drive PX 2 platform packs the same amount of compute power as six Titan X boards, in just two GPUs. During the show, Jen-Hsun displayed the new system, but what he showed from the stage almost certainly wasn't Pascal.

[Image: GPUs on the Drive PX 2 board]


As AnandTech readers noted, the hardware Jen-Hsun showed was nearly identical to the GTX 980 in an MXM configuration. The new Drive PX 2 is shown above; the GTX 980 MXM is shown below. The hardware isn't just similar; the chips appear to be identical. Some readers have also claimed they can read the date code on the die as 1503A1, which would mean the GPUs were produced in the third week of 2015.

[Image: GTX 980 MXM module]


If Nvidia actually used a GTX 980 MXM board for their mockup, it would explain why the Drive PX 2 looks as though it only uses GDDR5. While Nvidia could still be tapping that memory standard for its next-generation driving platform, this kind of specialized automotive system is going to be anything but cheap. We've said before that we expect GDDR5 and HBM to split the upcoming generation, but we expect that split in consumer hardware with relatively low amounts of GPU memory (2-4GB) and small memory buses. The Drive PX 2 platform sports four Denver CPU cores, eight Cortex-A57 CPUs, 8 TFLOPS worth of single-precision floating point, and a total power consumption of 250W. Nvidia has already said that they'll be water-cooling the module in electric vehicles and offering a radiator block for conventional cars. Any way you slice it, this is no tiny embedded product serving as a digital entertainment front-end.
Then again, it is still possible that the compute-heavy workloads the Drive PX 2 will perform don't require HBM. It seems unlikely, but it's possible.


Wood screws 2.0?

These issues with Pascal and the Drive PX 2 echo the Fermi "wood screw" event of 2009. Back then, Jen-Hsun held up a Fermi board that was nothing but a mock-up, proclaimed the chip was in full production, and said it would launch before the end of the year. In reality, NV was having major problems with GF100 and the GPU only launched in late March 2010.


The good news is, we've seen no sign that Nvidia is having the same types of problems that delayed Fermi's launch and hurt the final product. As far as we know, both AMD and Nvidia are on track to launch new architectural revisions in 2016.
What's more perplexing is why Nvidia engages in these kinds of overreaches in the first place. Claiming the first public demo of a 16nm FinFET GPU may be a decent PR win, but claiming it in the context of the automotive market isn't going to ignite the coals of a GPU fanboy's heart. Nvidia's push into self-driving cars and deep learning is a long-term game. Launching a new Shield or gaming GPU might push some fans to upgrade immediately; precious few people are going to have the luxury of planning a new car purchase on the basis of Nvidia's PX 2, no matter how powerful it might be.
The entire point of holding up a product from the stage is to demonstrate to the audience that the hardware actually exists. If it's later shown that the hardware in question wasn't what it was claimed to be, it undercuts the original point. Worse, it invites the audience to question why the company is playing fast and loose with the truth. There's no reason to think Pascal is suffering an unusual delay, but these kinds of antics invite speculation to the contrary.