
Titan V announced, $3000 Volta GPU

It is supposed to crush the Titan X by 50%... If not, it's Titan Z all over again, killing HBM2 and the 12 nm process at the same time...

The clocks are even lower than Vega 64's...

Looking only at core counts, it should be ~1.4x at normalized clocks against the Titan X and 1080 Ti, and ~1.3x against the Titan Xp.
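A quick back-of-envelope check of those ratios, assuming the published CUDA core counts (5120 for Titan V, 3584 for Titan X/1080 Ti, 3840 for Titan Xp):

```python
# Core-count scaling at normalized clocks, using published CUDA core counts.
titan_v = 5120
titan_x = 3584   # same shader count as the 1080 Ti
titan_xp = 3840

print(f"vs Titan X / 1080 Ti: {titan_v / titan_x:.2f}x")   # ~1.43x
print(f"vs Titan Xp:          {titan_v / titan_xp:.2f}x")  # ~1.33x
```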
 
Coming as lightning without thunder?

Ok, let's wait and see reviews of this and the new Adrenalin drivers. This card could have been launched one month earlier or one month later. The timing is meant to keep mindshare: secure total performance leadership, secure the brand, until the next-gen sp16 arrives.
Nobody will care if an RX Vega 64 is as fast as a 1080 Ti in February, or whether this is 815 mm² or $3000.

I don't think a single review sample of this will be sent to the likes of AnandTech, PCPer, etc. This isn't a gaming card. It launched now because JHH announced it at the Conference on Neural Information Processing Systems, or NIPS for short. You know... the conference about AI.
 
And the well-oiled Nvidia machine keeps ticking over...

It's not for gaming, but you know a bunch of rich people will sell half a bitcoin and buy two of them, reviewers will review them, and Nvidia stays in the limelight.
 
I don't think a single review sample of this will be sent to the likes of AnandTech, PCPer, etc. This isn't a gaming card. It launched now because JHH announced it at the Conference on Neural Information Processing Systems, or NIPS for short. You know... the conference about AI.
Ok. If that turns out to be the case, surely it's just a coincidence.

But hey, we will get drama nonetheless, whatever happens. That's what counts 🙂
 
Possibly stupid question ahead:

Titan V: DP 6.9 TFLOPS (1/2 rate)
Titan Xp: DP 0.38 TFLOPS (1/32 rate)

Will this prove to be of any value to crypto mining? Folding? Or only AI/tensor focused applications?
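(The quoted DP numbers follow directly from the FP32 rates and each card's DP ratio; a quick sketch, assuming ~13.8 TFLOPS FP32 for Titan V and ~12.15 TFLOPS for Titan Xp:)

```python
# Double-precision throughput = single-precision rate * DP ratio.
titan_v_sp, titan_v_ratio = 13.8, 1 / 2      # Volta: 1/2-rate FP64
titan_xp_sp, titan_xp_ratio = 12.15, 1 / 32  # Pascal consumer: 1/32-rate FP64

print(f"Titan V  DP: {titan_v_sp * titan_v_ratio:.2f} TFLOPS")   # 6.90
print(f"Titan Xp DP: {titan_xp_sp * titan_xp_ratio:.2f} TFLOPS") # 0.38
```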
 
No and no.
TPUs are small-ish FP16 ALUs. The wiring probably costs them more than the actual die area.
But FP64 eats a LOT of die area.

Looking deeper into this, the additional L2 cache is almost nothing in terms of die area, but the TPUs look like they consume close to the same amount of die area as the FP64 cores.

Each TPU has to perform a matrix multiplication on 4x4 FP16 matrices and add the result to another 4x4 matrix (which can be FP32). That works out to 64 FP16 multiplications and 64 FP additions (48 FP16 + 16 FP16 or FP32) per cycle. That cannot be done with a trivial amount of die area.
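The per-cycle operation count for one 4x4 multiply-accumulate (D = A*B + C) can be tallied directly; a sketch:

```python
# Operation count for one 4x4 matrix multiply-accumulate: D = A*B + C.
n = 4
mults = n * n * n           # one multiply per (row, col, k) triple -> 64
dot_adds = n * n * (n - 1)  # summing each 4-element dot product   -> 48
acc_adds = n * n            # adding the accumulator matrix C      -> 16

print(mults, dot_adds + acc_adds)  # 64 64
```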

Given that the number of gates required for multiplication scales with the square of the precision (from what I remember, high-speed logic arrays are used in FPUs), a TPU doing 64 FP16 multiplications should be 4x bigger than an FP64 core: an FP64 multiplier is (64/16)² = 16 FP16 multipliers, so 64/16 = 4. The V100 has 4x as many FP64 cores as TPUs (32 vs. 8 per SM), so the area hit from the TPUs should be close to the area hit from the FP64 cores.
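A sketch of that scaling argument (the quadratic area model is the poster's assumption, not a measured figure):

```python
# Relative multiplier sizes, assuming multiplier area scales with the
# square of operand width: one FP64 multiplier ~ (64/16)^2 FP16 multipliers.
fp64_core_area = (64 / 16) ** 2  # 16 FP16-multiplier equivalents
tpu_mults = 64                   # FP16 multiplications per TPU per cycle

tpu_vs_fp64 = tpu_mults / fp64_core_area
print(tpu_vs_fp64)  # 4.0 -> one TPU ~ 4 FP64 cores

# Per SM: 32 FP64 cores vs 8 TPUs, and 8 TPUs * 4 = 32 FP64-core
# equivalents, so the two blocks should occupy comparable area.
```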

An approximation of the die hit for the TPUs can also be derived by comparing the area per shader of GP100 and GV100, while accounting for the increased density of 12 nm. This yields 103 mm² of the die dedicated to the TPUs on GV100 (815 mm² − 5376 cores / 3840 cores × 610 mm² / 1.2x density ≈ 103 mm²).

The die hit for FP64 cores can be approximated from the die sizes of GP100 vs. GP102: the 1920 FP64 cores on GP100 consume 139 mm² of die (610 mm² − 471 mm² = 139 mm²), or 0.072 mm² per FP64 core. On 12 nm this translates to 0.060 mm² per core, so the hit for FP64 on GV100 should be 154 mm² (2560 cores × 0.060 mm² ≈ 154 mm²).

So it looks to me like the TPUs are not small-ish at all; they take up close to the same amount of area as the FP64 cores.
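The same arithmetic laid out step by step, using the die sizes and core counts quoted above (the 1.2x density gain for 12 nm is the poster's assumption):

```python
# TPU area on GV100: total die minus a density-scaled GP100 extrapolated
# to GV100's shader count.
gv100_die, gp100_die, gp102_die = 815.0, 610.0, 471.0  # mm^2
gv100_cores, gp100_cores = 5376, 3840
density = 1.2  # assumed 16 nm -> 12 nm density gain

tpu_area = gv100_die - (gv100_cores / gp100_cores) * gp100_die / density
print(f"TPU area:  ~{tpu_area:.0f} mm^2")   # ~103

# FP64 area: GP100 vs GP102 die delta spread over GP100's 1920 FP64 cores.
per_fp64 = (gp100_die - gp102_die) / 1920   # ~0.072 mm^2 on 16 nm
fp64_area = 2560 * per_fp64 / density       # GV100 has 2560 FP64 cores
print(f"FP64 area: ~{fp64_area:.0f} mm^2")  # ~154
```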
 
Possibly stupid question ahead:

Titan V: DP 6.9 TFLOPS (1/2 rate)
Titan Xp: DP 0.38 TFLOPS (1/32 rate)

Will this prove to be of any value to crypto mining? Folding? Or only AI/tensor focused applications?

It is a very valid question, but crypto mining is currently about SP FLOP rates, memory latency, and architecture.

Titan V has an incredible DP FLOPS rate that would get used in "generic" stuff like physics simulation. $3k is a very fair price for 7 TFLOPS of DP, as long as the workload fits into 12 GB, and even if <75% of that number is usable due to the cut memory bandwidth.
And it also has those Tensor cores; combined with Nvidia's neural/AI zoo of APIs and frameworks, they have the potential to provide a 10x boost versus what they have now.
 
It is a very valid question, but crypto mining is currently about SP FLOP rates, memory latency, and architecture.
So we're not going to see miners gobbling these up, meaning they might actually reach the hands of people who can really use them (machine learning, etc.). And gamers who just have to have the very best possible at any cost, of course.
 
This has literally nothing to do with gaming. It's a deep learning monster, but barely an improvement over a 1080 Ti for FP32. They want to sell this to deep learning scientists.

Or aspiring plebeians trying to get 800 FPS in CS:GO?
 
Coming as lightning without thunder?

Ok, let's wait and see reviews of this and the new Adrenalin drivers. This card could have been launched one month earlier or one month later. The timing is meant to keep mindshare: secure total performance leadership, secure the brand, until the next-gen sp16 arrives.
Nobody will care if an RX Vega 64 is as fast as a 1080 Ti in February, or whether this is 815 mm² or $3000.
No dude, no. This has nothing to do with some AMD driver release. They announced it at the Neural Information Processing Systems 2017 conference.
 
Since this thing is actually an insane beast for scientific and industrial purposes, $3k is a really pretty reasonable price. When you consider that enterprises won't blink at dropping $2k on a single stick of the highest-density ECC RAM, multiplied across a whole data center, $3k for a workstation beast of a card doesn't seem bad.
 
As expected, Nvidia will continue to creep up prices because there isn't enough competition at the high end. At least this is something to read about.
 
As expected, Nvidia will continue to creep up prices because there isn't enough competition at the high end. At least this is something to read about.
The thing would cost 3 grand even if Vega 64 annihilated GP102 silly.
Simply because AMD is not going to fab anything remotely as huge as V100 any time soon.
Besides, Titan V is priced conservatively given the die size (ffs, the thing is probably barely yielding).
 
The thing would cost 3 grand even if Vega 64 annihilated GP102 silly.
Simply because AMD is not going to fab anything remotely as huge as V100 any time soon.
Besides, Titan V is priced conservatively given the die size (ffs, the thing is probably barely yielding).
Yeah, Nvidia isn't anywhere close to cost-based pricing, and this argument has been invalid for a long time. When GP104 was released at $700 for a ~300 mm² die, I think everyone agreed on that.
 
Yeah, Nvidia isn't anywhere close to cost-based pricing, and this argument has been invalid for a long time.
Gotta maintain those sweet 55-56% margins in your only consistently profitable non-meme, non-bubbling segment, you know.
See, Nvidia fails (usually with hilarious denial) at making anything but GPUs.
Remember Denver?
Yeah that existed.
 
News flash for you (quote me on this later): Adrenalin won't come with any substantial FPS improvements, maybe 2% in some titles.
To confirm my previous statement, here are Ryan Shrout's comments about the Adrenalin driver:

There are people that I saw in the comments on this video and in our post talking about "oh, finally we are gonna get the huge Vega performance increase." Let me just settle everybody down here: that's not going to happen. There's not a 20% increase in GPU performance happening with this. This is much more about features and capabilities, and UI elements, and making everything easier to use.

https://www.youtube.com/watch?v=u8Zk74dfCeE&feature=youtu.be&t=2287
 