NVIDIA GeForce 20 Series (Volta) to be released later this year - GV100 announced

The Volta launch will look just like Pascal's: it will lead with the very top-end HPC/data center/deep learning chip, with consumer parts several months later. NVIDIA is doing this because Radeon Instinct (Vega architecture) is AMD's first legitimate attempt at a card in that segment. AMD has already laid the software groundwork with the Boltzmann Initiative, MIOpen, and ROCm; they just needed a card to go with it, and Radeon Instinct is that card. NVIDIA actually has to start competing against someone other than itself, which means that if it has an opportunity to launch early, it will.

Has nothing to do with Radeon Instinct, and everything to do with the fact that the data center business is NVIDIA's fastest growing segment by far.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Has nothing to do with Radeon Instinct, and everything to do with the fact that the data center business is NVIDIA's fastest growing segment by far.

Which they are trying to protect... Several of Vega's features directly benefit HPC/data center/deep learning and don't align directly with gaming. Ignoring that is, well... being ignorant.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
NVIDIA's much higher growth pace in the data center absolutely has something to do with the fact that they have big, fast-moving competitors in that space: Intel with Xeon Phi, AMD with their GPUs, and various FPGA and DSP companies champing at the bit for specialized high-throughput computing.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
TIL: NV's behavior from the last 3 GPU generations is going to change because AMD is finally making a strong move in datacenters.

It's not like consumers got GK104/GM204/GP104 before datacenters got GK110/GM200/GP100. Dang AMD and their market movements! It's your fault NV is milking us!
 

Bouowmx

Golden Member
Nov 13, 2016
1,150
553
146
Right from Anandtech live blog:
[keynote slide]
 

xpea

Senior member
Feb 14, 2014
458
156
116
Tesla V100
815 mm², TSMC 12 nm FFN, 21B transistors
15 TFLOPS FP32, 7.5 TFLOPS FP64
20 MB register file, 16 MB cache, 16 GB of HBM2 at 900 GB/s, NVLink 2 at 300 GB/s

[keynote slide]
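Those peak numbers line up with the announced core count if one assumes roughly a 1455 MHz boost clock and 640 tensor cores, neither of which is quoted in this post; a quick back-of-the-envelope sketch:

Code:
# Rough sanity check of the quoted V100 peak-throughput figures.
# Assumed (not stated in this post): ~1455 MHz boost clock, 640 tensor cores.
cuda_cores = 5120
tensor_cores = 640        # assumed
boost_clock_ghz = 1.455   # assumed

fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000   # 2 FLOPs per FMA
fp64_tflops = fp32_tflops / 2                           # FP64 at half rate on GV100
# Each tensor core does a 4x4x4 matrix FMA per clock: 64 multiplies + 64 adds = 128 FLOPs
tensor_tflops = tensor_cores * 128 * boost_clock_ghz / 1000

print(f"FP32:   ~{fp32_tflops:.1f} TFLOPS")    # ~14.9
print(f"FP64:   ~{fp64_tflops:.1f} TFLOPS")    # ~7.4
print(f"Tensor: ~{tensor_tflops:.1f} TFLOPS")  # ~119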
 

xpea

Senior member
Feb 14, 2014
458
156
116
Volta demo live, simulating the Andromeda galaxy with hundreds of millions of stars!!!
 

xpea

Senior member
Feb 14, 2014
458
156
116
Still MI25 is 36% faster than GV100 in TFLOPS/mm^2.
Wrong. Not in deep learning, the main market for these GPUs.
[keynote slide]

[keynote slide]


Edit: V100 is 120 Tensor TFLOPS for 815 mm² vs 50 TFLOPS on the MI25 for 500 mm².
A toy, I said.
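Taking the figures in that edit at face value (both are as quoted in the post, not independently checked here), the per-area math works out to:

Code:
# Per-area comparison using the figures quoted in this post.
v100_tensor_tflops, v100_mm2 = 120, 815
mi25_tflops, mi25_mm2 = 50, 500   # figure quoted above, taken as-is

v100_density = v100_tensor_tflops / v100_mm2   # ~0.147 TFLOPS/mm^2
mi25_density = mi25_tflops / mi25_mm2          # ~0.100 TFLOPS/mm^2
print(f"V100 advantage per mm^2: {v100_density / mi25_density:.2f}x")   # ~1.47x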
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
17,165
7,543
136
Still MI25 is 36% faster than GV100 in TFLOPS/mm^2.

Yeah, but the Instinct doesn't have DP. Still, for something that big (and on the 12 nm node) I would expect more.

Edit: NVIDIA is still doing separate FP32 and FP64 cores, and now has the tensor cores on top. So I guess that's why it's so big.
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
Wrong. Not in deep learning, the main market for these GPUs.
[keynote slide]

[keynote slide]


Edit: V100 is 120 Tensor TFLOPS for 815 mm² vs 50 TFLOPS on the MI25 for 500 mm².
A toy, I said.
I'm talking about FP32 TFLOPS. The yields on this thing, assuming a 24 mm × 34 mm die (816 mm²), are GF100 levels at worst and early Kepler levels at best.
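For reference, the arithmetic behind that 36% figure, assuming roughly 12.3 FP32 TFLOPS and a ~490 mm² Vega 10 die for the MI25 (neither number appears in this thread):

Code:
# FP32 TFLOPS per mm^2, the metric being argued about here.
# Assumed (not stated in this thread): MI25 at ~12.3 FP32 TFLOPS on a ~490 mm^2 Vega 10 die.
v100_tflops, v100_mm2 = 15.0, 815
mi25_tflops, mi25_mm2 = 12.3, 490

v100_density = v100_tflops / v100_mm2   # ~0.0184 TFLOPS/mm^2
mi25_density = mi25_tflops / mi25_mm2   # ~0.0251 TFLOPS/mm^2
print(f"MI25 advantage: {(mi25_density / v100_density - 1) * 100:.0f}%")   # ~36%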
 

xpea

Senior member
Feb 14, 2014
458
156
116
I'm talking about FP32 TFLOPS. The yields on this thing, assuming a 24 mm × 34 mm die (816 mm²), are GF100 levels at worst and early Kepler levels at best.
I want to say that NV doesn't care about FP32. The biggest market is deep learning FP16 performance, where Volta crushes everything else on the market with its dedicated tensor instruction.
NV invented this market; they know what scientists want. They are not AMD followers. They are again one generation ahead...
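To make the "dedicated instruction" point concrete: a tensor core performs a small matrix multiply-accumulate with FP16 inputs and an FP32 accumulator in a single operation. A minimal NumPy sketch of those numerics (the semantics only, not the hardware):

Code:
import numpy as np

# Semantics of one tensor-core style operation: D = A @ B + C,
# with A and B held in FP16 and the accumulation done in FP32.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C   # accumulate in FP32
print(D)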
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
I want to say that NV doesn't care about FP32. The biggest market is deep learning FP16 performance, where Volta crushes everything else on the market with its dedicated tensor instruction.
NV invented this market; they know what scientists want. They are not AMD followers. They are again one generation ahead...
Of course a GPU will be good at matrix multiplications; it doesn't take an Einstein to figure that out.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
So let's take a guess that GV102:GV100 will be the same as GP102:GP100. What can we expect? And if we assume that the x104 follows the same ratio, what can we expect for a GTX 2080 (hopefully later this year)? What would the CUDA cores and clock rates look like?
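Pure speculation, but the extrapolation the question asks for is easy to write down; the Pascal core counts (GP100 = 3840, GP102 = 3840, GP104 = 2560) and the ~1.73 GHz GP104 boost clock are assumptions brought in from outside this thread:

Code:
# Speculative scaling of GV100 by the Pascal chip ratios.
gv100_cores = 5120
gp100_cores, gp102_cores, gp104_cores = 3840, 3840, 2560   # assumed Pascal counts

gv102_guess = round(gv100_cores * gp102_cores / gp100_cores)   # ~5120
gv104_guess = round(gv100_cores * gp104_cores / gp100_cores)   # ~3413

# If a hypothetical "GTX 2080" clocked like GP104 (~1.73 GHz boost, assumed):
tflops_guess = gv104_guess * 2 * 1.73 / 1000
print(gv102_guess, gv104_guess, f"~{tflops_guess:.1f} TFLOPS FP32")   # ~11.8
# Real SKUs would round to whole SM counts, so treat these as ballpark only.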
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Volta is confirmed to be built on TSMC's 12 nm FFN process. 5120 CUDA cores, 15 TFLOPS FP32, 7.5 TFLOPS FP64, an 815 mm² die, 21 billion transistors. This is a monster GPU. Heck, it takes GPU die size to a new record, about a third larger than GP100. Incredible. Poor AMD. That stupid "Poor Volta" image is going to haunt them.
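For what it's worth, the die-size record claim checks out against GP100's commonly cited ~610 mm² (a figure not given in this thread):

Code:
# GP100's ~610 mm^2 is assumed here, not stated in the thread.
gv100_mm2, gp100_mm2 = 815, 610
print(f"GV100 is ~{(gv100_mm2 / gp100_mm2 - 1) * 100:.0f}% larger than GP100")   # ~34%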