[AT] Nvidia releases GK210


Keysplayr

Elite Member
Jan 16, 2003
I sense some work has been done on improving the efficiency of the GK110 chip.

One thing I don't understand is the low clocks: instead of clocking the card this low, why not clock it high and lower the price, so the end user can pick up two of these?
Since I don't understand Tesla cards very well, I can't give a more accurate opinion.

Worth the sacrifice of clock speed to get 4900+ CUDA cores in one PCI-e slot. More CUDA cores, less real estate. Makes a huge difference per node in a supercomputer cluster.
 

el etro

Golden Member
Jul 21, 2013
1,584
14
81
Because, if die size is almost a non-issue (and it is, since the K80 effectively has twice the die area of a single GK210), you can achieve better perf/W by packing twice the number of CUDA cores and clocking them very low.

Also, you are creating a new product, new choices, selling two GPUs per unit, bragging rights, etc.

Oh yeah, and the cost of fabbing two GK110/210 chips is probably small compared to Tesla (or FirePro S series) prices. :)

Worth the sacrifice of clock speed to get 4900+ CUDA cores in one PCI-e slot. More CUDA cores, less real estate. Makes a huge difference per node in a supercomputer cluster.

Yeah, clock speed is still the biggest determining factor in the cards' final power consumption. :)
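
As a rough back-of-the-envelope illustration (round numbers, not actual GK210 clocks or voltages): dynamic power scales roughly with f x V^2, and voltage has to rise with frequency. If one chip at 875 MHz and 1.0 V delivers throughput X at power P, two of the same chips at ~560 MHz and ~0.85 V give about 2 x (560/875) = ~1.3X the throughput while drawing roughly 2 x (560/875) x 0.85^2 = ~0.93P. More performance for slightly less power, at the cost of twice the silicon.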
 

DA CPU WIZARD

Member
Aug 26, 2013
Guys, you're looking at a chart, made by Nvidia's marketing team, explaining why people should buy their new card. Of course it's going to be deceiving. Welcome to the business world.
 
Feb 19, 2009
300W passive cooler? Quite impressive...

Edit - Ah, it relies on external fans in the setup.

Yeah, same with AMD FirePros; it's a giant copper heatsink with a large intake and outlet, meant for HPC fan setups. AMD requires 20 CFM of airflow over the heatsink.

I wonder why NV doesn't launch a GM200-based Tesla instead, though? It taped out a while ago; if things are going alright, they should have Teslas in production.

I mean, they are asking $5,000 for this card, and if GM200 Teslas come soon, with Maxwell pwning Kepler so much, that's a big investment instantly made obsolete.
 

f1sherman

Platinum Member
Apr 5, 2011
Guys, you're looking at a chart, made by Nvidia's marketing team, explaining why people should buy their new card. Of course it's going to be deceiving. Welcome to the business world.

The fact that you can't run essential scientific software like GROMACS on an AMD GPU is deceiving? :eek:

Nvidia should, what... extrapolate FirePro performance from OpenCL benchmarks, and simulate it on CUDA?
 

Galatian

Senior member
Dec 7, 2012
Yeah, same with AMD FirePros; it's a giant copper heatsink with a large intake and outlet, meant for HPC fan setups. AMD requires 20 CFM of airflow over the heatsink.

I wonder why NV doesn't launch a GM200-based Tesla instead, though? It taped out a while ago; if things are going alright, they should have Teslas in production.

I mean, they are asking $5,000 for this card, and if GM200 Teslas come soon, with Maxwell pwning Kepler so much, that's a big investment instantly made obsolete.

Probably because nVidia knows that Maxwell's power efficiency gains mostly come from better load adjustment. I would bet that under GPGPU loads, which probably keep the GPU constantly occupied, those efficiency gains vanish. See the Tom's Hardware review for reference.

Maxwell is a nice GPU for gamers but probably not so much for GPGPU.
 

ShintaiDK

Lifer
Apr 22, 2012
Probably because nVidia knows that Maxwell's power efficiency gains mostly come from better load adjustment. I would bet that under GPGPU loads, which probably keep the GPU constantly occupied, those efficiency gains vanish. See the Tom's Hardware review for reference.

Maxwell is a nice GPU for gamers but probably not so much for GPGPU.

How is GM204 for DP loads again?

That's where it is, nothing else.
 

Excessi0n

Member
Jul 25, 2014
Probably because nVidia knows that Maxwell's power efficiency gains mostly come from better load adjustment. I would bet that under GPGPU loads, which probably keep the GPU constantly occupied, those efficiency gains vanish. See the Tom's Hardware review for reference.

Maxwell is a nice GPU for gamers but probably not so much for GPGPU.

The 980 utterly trashes Kepler cards in compute.

If Maxwell didn't have a better perf/watt than Kepler in compute, then the 980 would turn into a fiery blowtorch that makes reference 290Xs look chilly. Given that there are no reports of the exhaust setting houses aflame (or the incredible power draw making power supplies explode), it's a safe assumption that Maxwell really is an incredible architecture and that the power gating thing is a ludicrous myth. :awe:
 

Galatian

Senior member
Dec 7, 2012
The 980 utterly trashes Kepler cards in compute.

If Maxwell didn't have a better perf/watt than Kepler in compute, then the 980 would turn into a fiery blowtorch that makes reference 290Xs look chilly. Given that there are no reports of the exhaust setting houses aflame (or the incredible power draw making power supplies explode), it's a safe assumption that Maxwell really is an incredible architecture and that the power gating thing is a ludicrous myth. :awe:


First of all, read what I posted: I was talking about power draw during load scenarios.

Second of all: go read some benchmarks. Tom's Hardware is a good one, as they take precise power measurements. The bottom line is: Maxwell has very good power gating. But that is not going to give you much when you are running the GPU at 100% load the entire time, e.g. GPGPU computing. Power draw under stress is a lot higher than what Nvidia states as the "normal" power draw.

So again, my point stands: Maxwell is incredible for games, as the power gating is most welcome in those tasks and reduces the power draw, and therefore heat, by a tremendous amount. But simply not so much if you keep the GPU busy.

If you have something really substantial proving that this is not the case, please do share it. But simply saying "well, Maxwell doesn't go up in flames, hence it must be super efficient" is a very unscientific point to make.
 

Enigmoid

Platinum Member
Sep 27, 2012
First of all, read what I posted: I was talking about power draw during load scenarios.

Second of all: go read some benchmarks. Tom's Hardware is a good one, as they take precise power measurements. The bottom line is: Maxwell has very good power gating. But that is not going to give you much when you are running the GPU at 100% load the entire time, e.g. GPGPU computing. Power draw under stress is a lot higher than what Nvidia states as the "normal" power draw.

So again, my point stands: Maxwell is incredible for games, as the power gating is most welcome in those tasks and reduces the power draw, and therefore heat, by a tremendous amount. But simply not so much if you keep the GPU busy.

If you have something really substantial proving that this is not the case, please do share it. But simply saying "well, Maxwell doesn't go up in flames, hence it must be super efficient" is a very unscientific point to make.

Can we please let this die? So many (I'm not going to say it) people trying to make Maxwell look bad, yet they haven't done any research themselves.

Tom's realized they goofed up. Their 970 measurements were done using a simulated 970 in the first place.

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-13.html

[EDIT] We originally posted Power Consumption Torture (GPGPU) results that showed a simulated GeForce GTX 970 reference card pulling over 240 Watts. This does not represent Nvidia's reference GeForce GTX 970 board because our data point was simulated with a Gigabyte GTX 970 card that has a non-reference ~250 Watt power target, unlike the reference board's ~150 W power target.

We have since pulled that data since it does not represent Nvidia's reference GeForce GTX 970 card. On the other hand, as far as we know there are no actual GeForce GTX 970 reference card designs for sale as each manufacturer has put their own spin on this model. None of the manufacturers we have talked to have released a GeForce GTX 970 card with a ~150 Watt power target as of this time, opting instead to give this product more performance headroom.

This is an issue we are keeping a close eye on, and we will follow up with a detailed investigation in the near future. We are curious to see if a reference-based GeForce GTX 970 will perform in the same league as the cards we have tested with higher power targets, but it would certainly make more sense in an HTPC or for use in smaller form factors. In the meantime, we have removed the 'simulated' GeForce GTX 970 data point from the following charts. [/EDIT]

The 980 reference draws no more power under GPGPU load than it does under gaming. Maxwell's trick is not power gating.

(Chart: Overview - Power Consumption, Torture/GPGPU load)
 

Galatian

Senior member
Dec 7, 2012
Can we please let this die? So many (I'm not going to say it) people trying to make Maxwell look bad, yet they haven't done any research themselves.

Tom's realized they goofed up. Their 970 measurements were done using a simulated 970 in the first place.

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-13.html

The 980 reference draws no more power under GPGPU load than it does under gaming. Maxwell's trick is not power gating.


I stand corrected. Curiously, they didn't change the article on the German site to reflect these points.

I mean, I was aware that they had different results depending on which power target was used, but it looked like GPGPU loads were more or less all the same.

Well, I was wrong: Maxwell is indeed not only a good gaming card, it also packs a big punch in GPGPU.
 

Magic Carpet

Diamond Member
Oct 2, 2011
The reason behind not deploying Maxwell in Tesla accelerators is said to be the lack of FP64 floating-point units and additional double-precision hardware. And the reason behind not including DP FPUs in Maxwell might have to do with the extremely efficient design that NVIDIA was aiming for. This, however, means that NVIDIA's upcoming Maxwell core, the GM200, the flagship core of the series, might just remain a GeForce-only offering, unlike Kepler, which was aimed at the HPC market first with the Titan supercomputer and launched as the GeForce GTX Titan a year after the arrival of the initial Kepler cores.
Source.
 
Feb 19, 2009
Well, that would certainly explain the GK210: something to hold the GPGPU fort until Pascal.

It is strange that NV would devote an entire architecture to gaming. Normally it's been only their mid-range chips that get castrated on FP64/double precision.
 

RussianSensation

Elite Member
Sep 5, 2003
Ya, I read that too. That would pretty much mean NV made Maxwell a pure Quadro/gaming architecture. The answer lies in the number of CUDA cores. Even if the flagship GM200 has 3072, that's hardly an improvement over GK110/210. NV may have decided it wasn't worth millions of dollars to design Maxwell with DP for Tesla. Then again, since Maxwell is so power efficient, one would think even 30-40% improvements could have been realized through higher clock speeds. Kinda shocking if GK210 will be the top Tesla card until the end of 2016. Not sure I am ready to accept that rumour just yet until other sources validate it.
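
For rough context (commonly cited figures, so treat them as approximate): a full GK110 has 2,880 CUDA cores, and the K80's two GK210s together come to 2 x 2,496 = 4,992, so a 3,072-core GM200 would only be about 7% wider than a single GK110 and well short of a dual-GPU K80 on raw core count, before even getting into DP throughput.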
 

jpiniero

Lifer
Oct 1, 2010
I would not be surprised. The 20 nm debacle probably caused them to have to refocus and perhaps cut things, and maybe the DP heavy Big Maxwell was the biggest casualty.
 

railven

Diamond Member
Mar 25, 2010
I would not be surprised. The 20 nm debacle probably caused them to have to refocus and perhaps cut things, and maybe the DP heavy Big Maxwell was the biggest casualty.

Most likely this. There's probably not much more DP they could fit into a Maxwell-based Tesla. What are the odds any datacenter/compute guru would upgrade for a sidegrade? Not at those costs. Double Titan makes more sense.
 

Headfoot

Diamond Member
Feb 28, 2008
Releasing a pure gaming variant lets them get efficiency and workload specificity that would be impossible if they had to design for HPC at the same time. Normally, the blistering pace of GPU development would make it impractical to release two separate designs, given the additional work (and thus release-date slippage) it would take, so it made more sense to do a single scalable design.

Two things changed: 1) the rise of the nVidia smartphone/tablet chip (there's no room for slack in efficiency in mobile), and 2) nodes are going to stick around for longer and longer, so releasing an architecture followed by a major tweak of that architecture allows meaningful performance increases on the same node. The only downside is that they have to pay more for extra masks and architectural work; however, as shown by sales of the 970 and 980, consumers seem more than willing to pay for a new architecture with new features even if it lands at relatively the same top performance level. So they probably don't lose any money versus a single architecture that spans top to bottom.

In short: people bought midrange Kepler as the "top chip," then people bought Titans as the top chip, then the 780 Ti as the top chip. Then they did the same for the 980, even though it is rather similar to the 780 Ti in top performance. They can design a gaming-specific chip, and it doesn't carry the drawbacks it once would have in both time-to-market and profitability.
 