CPU beware: nVidia Tesla Linpack Numbers Analyzed


dug777

Lifer
Oct 13, 2004
24,778
4
0
It is, and surprisingly the problem on the power-consumption side runs into how much a power plant can supply over the wire. I have heard about data centers that are tapped out from a power perspective. The only way to get more performance is to find a faster solution in the same power envelope or to build an on-site power plant. In this case, if a DC had a problem like this, the Nvidia solution would provide four times the performance for the same power envelope.

That's a rather romanticised and slightly strange way of describing it ;)

You can deliver as much power as you like via a 'wire' for all practical intents and purposes; otherwise, how would you get power away from a power plant? ;)

In most cases you'll be limited by your wiring and transformer most immediately, and then maybe your local distribution network and substation. Really big users might even run straight into the transmission network, but that's some pretty serious kit right there...

Now, it may be prohibitively expensive to upgrade the distribution and/or the transmission network to deliver that power, but it's certainly entirely possible and practicable.

EDIT: Out of curiosity, it appears that one of the largest data centres in the world uses about 100MW (and is in an amazingly cool old building!):

Lakeside Technology Center (350 East Cermak)

http://www.datacenterknowledge.com/...ters/worlds-largest-data-center-350-e-cermak/

That's no worries at all for the grid to deliver (remember, you're pulling power onto the grid from facilities rated at multiple thousands of megawatts).

The diesel gensets they mention will be for backup, I presume; burning diesel is scarily expensive compared to good old coal :)
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
A CPU box with dual Xeons and 48GB of memory costs $7,000.
A CPU/GPU box with dual Teslas, dual Xeons, and 48GB of memory costs $11,000.
So the cards cost $1,500 apiece?

That's about 35% more cost for 800% more performance.

What? It's 57% more cost and 82% more power for 820% more performance.
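
Quick back-of-the-envelope on those figures, treating the quoted prices and the ~82% power / ~820% perf numbers above as given (they come from the posts here, not from any official spec sheet):

```cuda
// Plain host-side arithmetic (no device code needed); every number is an
// assumption taken from this thread, not an official Nvidia/Intel figure.
#include <cstdio>

int main() {
    double cpu_cost = 7000.0, gpu_cost = 11000.0;  // dual-Xeon box vs dual-Xeon + dual-Tesla box
    double cpu_perf = 1.0,    gpu_perf = 8.2;      // ~820% of the CPU-only Linpack score
    double cpu_pwr  = 1.0,    gpu_pwr  = 1.82;     // ~82% more power draw

    printf("extra cost : %.0f%%\n", (gpu_cost / cpu_cost - 1.0) * 100.0);           // ~57%
    printf("perf per $ : %.1fx\n", (gpu_perf / gpu_cost) / (cpu_perf / cpu_cost));  // ~5.2x
    printf("perf per W : %.1fx\n", (gpu_perf / gpu_pwr)  / (cpu_perf / cpu_pwr));   // ~4.5x
    return 0;
}
```

So even at 57% more per box, the Tesla node works out to roughly 5x better per dollar and 4-5x better per watt, and the perf-per-watt number is the one that matters for the power-limited data center scenario discussed above.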


This is big. We already know how much faster F@H runs on GPUs; how many of the similar big research projects would stand to gain as much as F@H?
 

lifeblood

Senior member
Oct 17, 2001
999
88
91
How much are Tesla systems going to run? I suspect they will be competitively priced against x86 boxes. The previous champions in this space cost quite a bit more than x86 offerings.

How many x86 boxes will need to be run, maintained, and powered to equal one Tesla box?

So Tesla has impressive Linpack numbers? Great for the research community, (mostly) irrelevant for the rest of us.

Tesla poses no challenge to AMD/Intel outside of the research environment. The x86 is a general purpose CPU. It runs things like Word, Firefox, Telnet clients, Solitaire, etc. It's used for a variety of dissimilar apps. A highly specialized chip like the Tesla does not perform well running general purpose apps like these.

I worked in a University IT department that did research. While we did have some specialized research computer clusters, most of the servers in our data center were used for email, file services, printing, etc. Jobs that x86 is good at and Tesla is useless for. And let's not forget all those PCs on staff and administration desks that ran basic office productivity apps.

What is good for most of us is a computer with a general purpose CPU that also has a specialized chip like the Tesla for the times you need that power (Folding@home, games, etc.). That, of course, is exactly where AMD is going with Fusion for tomorrow, and where Intel is probably heading as well. That is also where we are today with an x86 CPU and an AMD or nVidia GPU running OpenCL or CUDA.
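
For anyone wondering what "running CUDA" on that GPU actually looks like from the programmer's side, this is roughly the shape of it; a minimal vector-add sketch added purely for illustration, not anything from the article:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one element; thousands of threads run in parallel.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side data, prepared by the CPU.
    float* ha = new float[n];
    float* hb = new float[n];
    float* hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Copy to the card, launch the kernel, copy the result back.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[0] = %.1f\n", hc[0]);  // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

The x86 CPU stays in charge of everything else (the OS, I/O, Word, Firefox); the card only gets handed the data-parallel chunk, which is exactly the CPU-plus-accelerator split being described here.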

So nVidia and Tesla generate impressive Linpack scores? Great! I wish them the best and hope we can start finding cures to some of these diseases. But I do not see the crack of doom about to swallow AMD and Intel because of it.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
So Tesla has impressive Linpack numbers? Great for the research community, (mostly) irrelevant for the rest of us.

Tesla poses no challenge to AMD/Intel outside of the research environment. The x86 is a general purpose CPU. It runs things like Word, Firefox, Telnet clients, Solitaire, etc. It's used for a variety of dissimilar apps. A highly specialized chip like the Tesla does not perform well running general purpose apps like these.

I worked in a University IT department that did research. While we did have some specialized research computer clusters, most of the servers in our data center were used for email, file services, printing, etc. Jobs that x86 is good at and Tesla is useless for. And let's not forget all those PCs on staff and administration desks that ran basic office productivity apps.

What is good for most of us is a computer with a general purpose CPU that also has a specialized chip like the Tesla for the times you need that power (Folding@home, games, etc.). That, of course, is exactly where AMD is going with Fusion for tomorrow, and where Intel is probably heading as well. That is also where we are today with an x86 CPU and an AMD or nVidia GPU running OpenCL or CUDA.

So nVidia and Tesla generate impressive Linpack scores? Great! I wish them the best and hope we can start finding cures to some of these diseases. But I do not see the crack of doom about to swallow AMD and Intel because of it.

/facepalm

Are you under some impression I thought Tesla was designed for running Word? lol

It is quite obvious we are discussing the HPC space. Not desktop computing.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
That's a rather romanticised and slightly strange way of describing it ;)

You can deliver as much power as you like via a 'wire' for all practical intents and purposes; otherwise, how would you get power away from a power plant? ;)

In most cases you'll be limited by your wiring and transformer most immediately, and then maybe your local distribution network and substation. Really big users might even run straight into the transmission network, but that's some pretty serious kit right there...

Now, it may be prohibitively expensive to upgrade the distribution and/or the transmission network to deliver that power, but it's certainly entirely possible and practicable.

EDIT: Out of curiosity, it appears that one of the largest data centres in the world uses about 100MW (and is in an amazingly cool old building!):

Lakeside Technology Center (350 East Cermak)

http://www.datacenterknowledge.com/...ters/worlds-largest-data-center-350-e-cermak/

That's no worries at all for the grid to deliver (remember, you're pulling power onto the grid from facilities rated at multiple thousands of megawatts).

The diesel gensets they mention will be for backup, I presume; burning diesel is scarily expensive compared to good old coal :)

Well, I was thinking of an article from a couple of years ago about a Google DC run by IBM. In the article they mentioned the DC was unable to get any more power from the grid. That was the basis for my example. Either way, for the same power consumption, a Tesla doing the same work as an x86 chip could get four times the performance. That is nothing to scoff at.
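
To make the power-capped scenario concrete, here's a toy calculation; every number in it is hypothetical except the ~82% power / ~8.2x Linpack ratios, which come from the figures quoted earlier in the thread:

```cuda
// Host-only sketch: a fixed grid allocation, two node types (all numbers hypothetical).
#include <cstdio>

int main() {
    double budget_w = 1.0e6;                              // say the site is capped at 1 MW of IT load
    double cpu_node_w = 400.0, cpu_node_gflops = 80.0;    // assumed dual-Xeon node
    double gpu_node_w = 728.0, gpu_node_gflops = 656.0;   // ~82% more power, ~8.2x the Linpack perf

    double cpu_nodes = budget_w / cpu_node_w;
    double gpu_nodes = budget_w / gpu_node_w;
    printf("CPU only : %4.0f nodes -> %6.1f TFLOPS\n", cpu_nodes, cpu_nodes * cpu_node_gflops / 1000.0);
    printf("CPU+GPU  : %4.0f nodes -> %6.1f TFLOPS\n", gpu_nodes, gpu_nodes * gpu_node_gflops / 1000.0);
    return 0;
}
```

Fewer boxes fit under the cap, but total throughput still comes out roughly 4-5x higher. That's the point: when the grid feed is fixed, perf per watt is the only knob left.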
 

sandorski

No Lifer
Oct 10, 1999
70,677
6,250
126
Small market, huh? OK. Let's just take one little item out of hundreds of thousands of applications: mapping the human genome. I chose this because I used to work in the IT dept. of one of the biggest laboratories in the world. For mapping the human genome alone (there are literally hundreds of other types of research going on), entire NOCs (network operations centers) were dedicated, with rows and rows of 40U racks filled top to bottom with hundreds of 1U dual-CPU rigs. 25 40U racks in one NOC alone for this one application.
Each populated 40U rack cost upwards of $125,000, and this was years ago.

This is just one app, running in one dept of one lab.
Multiply that by however many labs around the world are collaborating on the same app with their own NOCs and server resources.

I highly doubt...... scratch that. I know you do not realize the sheer magnitude of the HPC market. Cancer research alone has thousands and thousands of departments focusing on different types of cancers and causes of cancer. It's limitless.

In that same dept that was mapping the human genome, don't forget each scientist with his/her own office workstation.

It's MEGA HUGE, even in this teeny tiny segment of the overall HPC usage in the world today.

I know you think this is PR, dude. Call it what you want, but it's the truth. I just gave you first-hand facts on the littlest example from one dept of one lab.

Multiply that by millions for all the other applications in the scientific community. And this is academia alone. We haven't even gotten into corporate usage yet.

It's really mind-blowing, the sheer size of it. It's no wonder it eludes people who can't fathom the size without a little example like this to put things into perspective.

Then there are the numerous other departments:
Alzheimer's research
Cancer research
Autism research

Too many to list, and each with their own number-crunching NOCs.

Still feel the same way now?

Wasted post, I already addressed it.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
/facepalm

Are you under some impression I thought Tesla was designed for running Word? lol

It is quite obvious we are discussing the HPC space. Not desktop computing.

Hey hey hey hey, there are people with massive spreadsheets who would really need that 200 GFLOPS of processing power. :p