
What do you do with 5,280 V100 GPUs?

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
If they're exactly 12" long you can have a mile of GPUs (5,280 × 1 ft = 5,280 ft = 1 mile). Like a beer mile, but dorkier.
 

Krteq

Senior member
May 22, 2015
660 petaFLOPS :astonished:
*That's the (AI) figure; in FP32 it's 80 PFLOPS.

Not so impressed.

Does anyone know how they get that - ehm - "AI" number, and what it represents? It's not the INT8/FP16 peak performance figure, so what is it?
 

Phynaz

Lifer
Mar 13, 2006
Krteq said:
*That's the (AI) figure; in FP32 it's 80 PFLOPS.

Not so impressed.

Does anyone know how they get that - ehm - "AI" number, and what it represents? It's not the INT8/FP16 peak performance figure, so what is it?
Matrix math.

A single Tensor Core performs the equivalent of 64 FMA operations per clock (for 128 FLOPS total), and with 8 such cores per SM, 1024 FLOPS per clock per SM.
https://www.anandtech.com/show/11367/nvidia-volta-unveiled-gv100-gpu-and-tesla-v100-accelerator-announced
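
For what it's worth, the headline figures line up with spec-sheet arithmetic. A minimal back-of-the-envelope sketch in Python (assuming GV100's published 80 SMs, ~1530 MHz boost clock, and 15.7 TFLOPS peak FP32; the per-SM tensor throughput is from the quote above):

SMS_PER_GPU = 80             # streaming multiprocessors in GV100 (assumed spec)
TENSOR_FLOPS_PER_SM = 1024   # 8 Tensor Cores x 128 FLOPS per clock, per the quote above
BOOST_CLOCK_HZ = 1.53e9      # ~1530 MHz boost clock (assumed spec)
N_GPUS = 5280

tensor_per_gpu = SMS_PER_GPU * TENSOR_FLOPS_PER_SM * BOOST_CLOCK_HZ
print(f"per GPU (tensor): {tensor_per_gpu / 1e12:.0f} TFLOPS")          # ~125 TFLOPS
print(f"cluster (AI):     {N_GPUS * tensor_per_gpu / 1e15:.0f} PFLOPS") # ~662, i.e. the 660 PFLOPS headline

FP32_PER_GPU = 15.7e12       # peak FP32 per V100 (assumed spec)
print(f"cluster (FP32):   {N_GPUS * FP32_PER_GPU / 1e15:.0f} PFLOPS")   # ~83, i.e. the ~80 PFLOPS figure

So the "(AI)" number is just the FP16 Tensor Core FMA peak, which is why it lands at roughly 8x the plain FP32 rate.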
 

crisium

Platinum Member
Aug 19, 2001
I'd turn it loose on itself - use them FLOPs with machine learning to design faster, more efficient GPUs at the same process node. GPU singularity.
 

Qwertilot

Golden Member
Nov 28, 2013
crisium said:
I'd turn it loose on itself - use them FLOPs with machine learning to design faster, more efficient GPUs at the same process node. GPU singularity.
Good plan, right up to the point where it works out how to do molecular self-assembly to make them and we've got a homogenising swarm on our hands :)
 
