
HPC Video card design

cbn

Lifer
Mar 27, 2009
Is there any disadvantage to designing HPC cards from the ground up as 300 W TDP dual-GPU models?

I know gamers sometimes shy away from multi-GPU solutions because of "microstutter", but in high-performance computing this is obviously a non-issue.

Or would there be other HPC-specific problems with including an SLI bridge on a single video card? (Bear in mind that I have no IT knowledge when I ask these questions.)
 

cbn

Lifer
Mar 27, 2009
Speaking of TDP, how much of a problem would noise be in the data center?

Four dual-slot 300 W video cards in each 1U case... wouldn't that require a good amount of fan speed?
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
For HPC, there is really only one thing you care about: maximizing the total compute power per 1U. This means you don't necessarily have to go with large, hot, fast chips; you can go with smaller chips if packing enough of them in gets you more compute performance than the large chips would. The tasks generally being run on HPC clusters are embarrassingly parallel, so there's little-to-no penalty to splitting a task across many smaller GPUs rather than running it on one big GPU.
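To make the "embarrassingly parallel" point concrete: in CUDA-style code each GPU in the box is addressed on its own, so a job can be sliced across however many chips are present with no chip-to-chip coordination at all. A minimal sketch; the kernel, sizes, and even split are made up purely for illustration:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: every element is independent, which is exactly what
// makes the job "embarrassingly parallel".
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int N = 1 << 24;              // total elements to process
    int devices = 0;
    cudaGetDeviceCount(&devices);
    if (devices == 0) return 1;

    int chunk = N / devices;            // even split (remainder ignored here)
    for (int dev = 0; dev < devices; ++dev) {
        cudaSetDevice(dev);             // each GPU works on its own slice
        float* d = nullptr;
        cudaMalloc(&d, chunk * sizeof(float));
        scale<<<(chunk + 255) / 256, 256>>>(d, chunk, 2.0f);
        cudaFree(d);                    // implicitly waits for the kernel
    }
    // No slice ever talks to another, so several small GPUs do the same
    // total work as one big one with essentially no splitting penalty.
    printf("split %d elements across %d device(s)\n", N, devices);
    return 0;
}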

The catch right now is that usually only the biggest GPUs have double-precision (FP64) support. NV and ATI usually strip it from lesser parts. So HPC users are really only given one option: buy the HPC-targeted card built on the big GPU, since it's the only thing not restricted in some manner.
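As an aside, this segmentation is visible from software: the CUDA runtime can report how many single-precision ops a device executes per double-precision op. A hedged sketch; the query itself is a standard runtime attribute, but the example ratios in the comments are just assumptions about typical parts:

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        int ratio = 0;   // FP32 ops executed per FP64 op
        cudaDeviceGetAttribute(&ratio,
            cudaDevAttrSingleToDoublePrecisionPerfRatio, dev);
        // An HPC-targeted big chip reports a small ratio (say 2:1);
        // a consumer part with FP64 stripped reports a much larger one.
        printf("%s: FP32:FP64 throughput ratio %d:1\n", prop.name, ratio);
    }
    return 0;
}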
 

cbn

Lifer
Mar 27, 2009
Thanks.

Do you know if these HPC tasks require the same amount of memory bandwidth as 3D tasks? Or are we talking about much smaller memory bandwidth requirements?
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
It depends heavily on the task. Something like a simple key cracker is not bandwidth intensive, while ray tracing would be.
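To put some code behind that distinction, here's a hedged CUDA sketch of the two extremes; both kernels are illustrative stand-ins rather than real workloads:

Code:
#include <cuda_runtime.h>

// Bandwidth-bound: one multiply per element read and written, so the
// memory bus, not the shader cores, sets the speed limit.
__global__ void streamScale(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

// Compute-bound: thousands of arithmetic ops per word of memory
// touched, like the inner loop of a brute-force key cracker.
__global__ void hashSearch(const unsigned* seeds, unsigned* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned h = seeds[i];
    for (int r = 0; r < 10000; ++r)
        h = h * 2654435761u + (unsigned)r;  // pure math, no memory traffic
    out[i] = h;
}

int main() {
    const int N = 1 << 20;
    float *in, *out;
    unsigned *seeds, *hits;
    cudaMalloc(&in, N * sizeof(float));
    cudaMalloc(&out, N * sizeof(float));
    cudaMalloc(&seeds, N * sizeof(unsigned));
    cudaMalloc(&hits, N * sizeof(unsigned));
    streamScale<<<(N + 255) / 256, 256>>>(in, out, N);    // limited by GB/s
    hashSearch<<<(N + 255) / 256, 256>>>(seeds, hits, N); // limited by ALU rate
    cudaDeviceSynchronize();
    cudaFree(in); cudaFree(out); cudaFree(seeds); cudaFree(hits);
    return 0;
}

A card with a narrower memory bus would barely slow the second kernel but would hit the first one hard, which is why the answer depends so much on the task.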