GeForce Titan coming end of February


njdevilsfan87

Platinum Member
Apr 19, 2007
2,349
270
126
Those clocks are impressive if true, but IF it doesn't have voltage control, I doubt we'll see much overclocking headroom.

If it has the same 1.175v wall as GTX 670/680 do now, then 1200mhz isn't out of the question. Hopefully they didn't lower it...
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
If it has the same 1.175v wall as GTX 670/680 do now, then 1200mhz isn't out of the question. Hopefully they didn't lower it...

I think this one is going to be uncorked. It's not gonna have the baby PCB that the 680 has, and at the price we are expecting I would be shocked if the voltage was locked.
 

xeledon20005

Senior member
Feb 5, 2013
300
0
86
I want this card; problem is I just bought a 680 a few months ago and I love it. I don't see how this card would improve any of the games I play or change the way I do anything on my computer right now, but damn, it does look very tempting.
 

Redshirt 24

Member
Jan 30, 2006
165
0
0
Halo products tempt everybody a little. :) This will be overkill for the vast majority, though, so unless you knock over an Apple Store and get three 30" Cinema Displays you can probably safely skip it...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
EDIT: I don't think the card will be clocked that high. I think someone saw the fake arabworld slide I posted earlier in this thread and has been running with it. 1019mhz boost clocks would give Titan 70% more shader power, 45% more ROP power, 20% more setup (geometry), and 50% more memory bandwidth, and would put it dead even with a gtx690 in gaming benchmarks @ 1440p and higher. According to all the leaks so far, Titan should be about 85% of a gtx690 in real world scenarios, putting core clocks around 900mhz boost.
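A quick check of those uplift percentages against a reference GTX 680 (a minimal sketch; the GTX 680 baseline figures of 1536 shaders, 32 ROPs, 4 GPCs, 192.3 GB/s and a 1058 MHz boost clock are assumed here, not taken from this thread):

```python
# Rough check of the claimed Titan-vs-GTX 680 uplifts at a 1019 MHz boost clock.
# The GTX 680 baseline figures below are the usual reference-card specs (assumption).

titan = {"shaders": 2688, "rops": 48, "gpcs": 5, "bw_gbs": 288.0, "boost_mhz": 1019}
gtx680 = {"shaders": 1536, "rops": 32, "gpcs": 4, "bw_gbs": 192.3, "boost_mhz": 1058}

def uplift(units: str) -> float:
    """Percent advantage of Titan over GTX 680, scaling unit counts by boost clock."""
    t = titan[units] * titan["boost_mhz"]
    g = gtx680[units] * gtx680["boost_mhz"]
    return (t / g - 1) * 100

print(f"shader power: +{uplift('shaders'):.0f}%")   # ~+69%
print(f"ROP power:    +{uplift('rops'):.0f}%")      # ~+44%
print(f"setup:        +{uplift('gpcs'):.0f}%")      # ~+20%
print(f"memory BW:    +{(titan['bw_gbs'] / gtx680['bw_gbs'] - 1) * 100:.0f}%")  # ~+50%
```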

Agreed.

If you look at the link posted from DonanimHaber, the English translation reads:

"The performance of texture fill 288 billion / sec (GeForce GTX 690'da 234) we learn that the graphics card 5.4 teraflops of computing power will at the same time."

If you click to go back to original source without Google Translator, we get:

"Doku doldurma performansının 288 milyar/saniye (GeForce GTX 690'da 234) olduğunu öğrendiğimiz ekran kartı aynı zamanda 4.5 TeraFLOP işlem gücü sunacak."

That's an obvious error by Google translator.

2688 CUDA cores clocked at 837mhz gives us roughly 4.5 Tflops of SP.

vs.

2688 CUDA cores clocked at 1019mhz gives us 5.48 Tflops of SP.

Since K20X has 2688 CUDA cores, 235W TDP, 732mhz GPU / 5.2ghz GDDR5 clocks, guess which one of those specs is most likely the unrealistic spec?

I have been saying this for a long time but it gets ignored. Can someone explain to me how you can increase GPU clocks by 39% (1019 / 732), increase GDDR5 from 5.2ghz to 6Ghz (288GB/sec memory bandwidth) and end up with a 250W TDP card?

Increasing GPU clocks from a 915mhz 670 to a 1006mhz 670 bumps up power consumption by 14W. That's on a small 294mm2 die. Moving from a 1344 SP 980mhz GTX670 to a slightly higher clocked 1058mhz, 1536 SP GTX680 bumps up the power consumption 22% (186W vs. 152W), and yet a 2688 SP chip would have 2.07x the shading power of a 152W GTX670 while power consumption only goes up 64% (250W vs. 152W)?
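For reference, the single-precision TFLOPS figures argued about above follow directly from cores x 2 x clock (a minimal sketch; the 2 SP FLOPs per CUDA core per clock factor is the standard assumption, not something stated in the thread):

```python
# Sanity check of the single-precision TFLOPS numbers being argued about.
# Assumes the standard 2 SP FLOPs per CUDA core per clock.

def sp_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

print(f"{sp_tflops(2688, 837):.2f}")   # 4.50 -> matches the 4.5 TeraFLOP in the Turkish original
print(f"{sp_tflops(2688, 1019):.2f}")  # 5.48 -> matches the 5.4 figure in the Google translation
print(f"{sp_tflops(2688, 732):.2f}")   # 3.94 -> K20X at its official 732 MHz clock
```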
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
If you look at the link posted from DonanimHaber, the English translation reads:

"The performance of texture fill 288 billion / sec (GeForce GTX 690'da 234) we learn that the graphics card 5.4 teraflops of computing power will at the same time."

If you click to go back to original source without Google Translator, we get:

"Doku doldurma performansının 288 milyar/saniye (GeForce GTX 690'da 234) olduğunu öğrendiğimiz ekran kartı aynı zamanda 4.5 TeraFLOP işlem gücü sunacak."

That's an obvious error by Google translate.

2688 CUDA cores clocked at 837mhz gives us roughly 4.5 Tflops of SP.

vs.

2688 CUDA cores clocked at 1019mhz gives us 5.48 Tflops of SP.

Since K20X has 2688 CUDA cores, 235W TDP, 732mhz GPU / 5.2ghz GDDR5 clocks, guess which one of those specs is most likely the BS one?

I have been saying this for a long time but it gets ignored. Can someone explain to me how you can increase GPU clocks by 39% (1019 / 732), increase GDDR5 from 5.2ghz to 6Ghz (288GB/sec memory bandwidth) and end up with a 250W TDP card?

Increasing GPU clocks from 915mhz 670 to 1006mhz 670 bumps up power consumption by 14W. That's on a small 294mm2 die.

A 1019mhz 2688 SP chip would have 2.07x the shading power of a 152W GTX670, but power consumption only goes up 64% (250W vs. 152W)?

250w @ idle? :biggrin:
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm a bit surprised it took you so long to come up with THAT screenshot. Yeah, we all know Crysis has nice looking veggies if you're very careful when modding and taking a screenie. I bet my wife's behind when she's 60 will look nice from certain angles. It's still a 6-year-old game with the look and feel of a 6-year-old game. And if Crysis indeed looked like that, we would have remembered it, instead of you passing around shock pics. Nice try though.

I posted that in the context of Titan's ability (or inability) to max out next generation 2013-2015 games. You read that as me discussing C1 vs. C3 because you still seem hurt that people ripped C3's graphics apart in beta. It seems you still haven't gotten over it, since you brought up things like Crysis 1's 6-year-old launch date. Way to drag a completely unrelated topic from another thread into a discussion about the next flagship card, which entices many to upgrade for next-gen games like BF4, Witcher 3, etc. The screenshot I linked is just a showcase of what next gen PC games might look like. Is Titan ready/fast enough to play them? That's all I asked.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
I have been saying this for a long time but it gets ignored. Can someone explain to me how you can increase GPU clocks by 39% (1019 / 732), increase GDDR5 from 5.2ghz to 6Ghz (288GB/sec memory bandwidth) and end up with a 250W TDP card?

:hmm:

:confused:

:awe:

GTX480 - 250 Watt - 700MHz w 15SM -177 GB/s
M2050 - 225 Watt - 515MHz w 14SM -150 GB/s
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Agreed.

If you look at the link posted from DonanimHaber, the English translation reads:

"The performance of texture fill 288 billion / sec (GeForce GTX 690'da 234) we learn that the graphics card 5.4 teraflops of computing power will at the same time."

If you click to go back to original source without Google Translator, we get:

"Doku doldurma performansının 288 milyar/saniye (GeForce GTX 690'da 234) olduğunu öğrendiğimiz ekran kartı aynı zamanda 4.5 TeraFLOP işlem gücü sunacak."

That's an obvious error by Google translator.

2688 CUDA cores clocked at 837mhz gives us roughly 4.5 Tflops of SP.

vs.

2688 CUDA cores clocked at 1019mhz gives us 5.48 Tflops of SP.

Since K20X has 2688 CUDA cores, 235W TDP, 732mhz GPU / 5.2ghz GDDR5 clocks, guess which one of those specs is most likely the unrealistic spec?

I have been saying this for a long time but it gets ignored. Can someone explain to me how you can increase GPU clocks by 39% (1019 / 732), increase GDDR5 from 5.2ghz to 6Ghz (288GB/sec memory bandwidth) and end up with a 250W TDP card?

Increasing GPU clocks from 915mhz 670 to 1006mhz 670 bumps up power consumption by 14W. That's on a small 294mm2 die. Moving from 1344 SP 980mhz GTX670 to a slightly higher clocked 1058mhz 1536 GTX680 bumps up the power consumption 22% (186W vs. 152W), and yet a 2688 chip would have 2.07x the shading power of a 152W GTX670 but power consumption only goes up 64% (250W vs. 152W)?


nvidia always misrepresents TDP numbers anyways. Look at GTX 480, that card ate up way more power than the rated TDP. Titan is going to eat power and run hot as well.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Don't mix Tesla and GeForce numbers. Tesla, unlike GeForce, Radeon, etc., needs to be able to actually run 100% heavy loads without throttling.

We don't exactly lack examples of graphics cards that can't handle those loads.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I posted that in the context of Titan's ability (or inability) to max out next generation 2013-2015 games. You read that as me discussing C1 vs. C3 because you still seem hurt that people ripped C3's graphics apart in beta. It seems you still haven't gotten over it, since you brought up things like Crysis 1's 6-year-old launch date. Way to drag a completely unrelated topic from another thread into a discussion about the next flagship card, which entices many to upgrade for next-gen games like BF4, Witcher 3, etc. The screenshot I linked is just a showcase of what next gen PC games might look like. Is Titan ready/fast enough to play them? That's all I asked.

Oh come on RS you're grasping so much here I can feel the tingles of my neck hairs.

http://www.techpowerup.com/gpudb/b1417/ASUS_GTX_Titan.html

Base Clock: 915 MHz

Boost Clock: 1019.5 MHz

Thanks goes to Demo!


So far that's the best leak IMO, nobody talking, just a GPUz upload.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
nvidia always misrepresents TDP numbers anyways. Look at GTX 480, that card ate up way more power than the rated TDP. Titan is going to eat power and run hot as well.

Just like AMD? Like the HD6970.

It's the key nemesis of graphics cards, and why drivers are used to control consumption. It's simply a better trade-off than running at Tesla speeds for gaming, which is often a relatively light load.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
GTX480 - 250 Watt - 700MHz w 15SM -177 GB/s
M2050 - 225 Watt - 515MHz w 14SM -150 GB/s

Nice try. GTX480 had more functional units than M2050.

http://www.nvidia.com/docs/IO/105880/DS-Tesla-M-Class-Aug11.pdf

It appears it was based on the GTX470, with 448 CUDA cores. GTX480 was a 480 CUDA core part, with more TMUs and ROPs as well. K20X and Titan are rumored to share 2688 CUDA cores and 224 TMUs. Are you suggesting the ROP count has been neutered from 56 to 48 then, or the card has a TDP of 250 but draws more than that in gaming like the 480 did? Most amusing of all that 250W TDP on the 480 was a useless metric since the card drew 270W+ in actual games.

nvidia always misrepresents TDP numbers anyways. Look at GTX 480, that card ate up way more power than the rated TDP. Titan is going to eat power and run hot as well.

Ya, that's why I also provided power consumption #s of 670/680 for reference. At 1019mhz, the card is going to draw 250W easy, unless they dropped some functional units like ROPs in the K20X or were able to bin the best K20X chips, like the binned 680s in the 690 :)

Oh come on RS you're grasping so much here I can feel the tingles of my neck hairs.

Believe what you want to believe. I posted that screenshot in hopes that we get to use next gen flagship GPUs for next gen looking PC games.
 

Hypertag

Member
Oct 12, 2011
148
0
0
:hmm:

:confused:

:awe:

GTX480 - 250 Watt - 700MHz w 15SM -177 GB/s
M2050 - 225 Watt - 515MHz w 14SM -150 GB/s


I pointed out the same thing forever ago. They just ignore the fact that GF100 and GF110 Tesla cards run significantly slower than their consumer counterparts. They just continue to claim that it must be the opposite, just because.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Nice try. GTX480 had more functional units than M2050.

http://www.nvidia.com/docs/IO/105880/DS-Tesla-M-Class-Aug11.pdf

It appears it was based on the GTX470, with 448 CUDA cores. GTX480 was a 480 CUDA core part, with more TMUs and ROPs as well. K20X and Titan are rumored to share 2688 CUDA cores and 224 TMUs. Are you suggesting the ROP count has been neutered from 56 to 48 then?

It's a GTX 480 with an additional SM cut; it has the big bus with the 470's core count.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Just like AMD? Like the HD6970.

It's the key nemesis of graphics cards, and why drivers are used to control consumption. It's simply a better trade-off than running at Tesla speeds for gaming, which is often a relatively light load.

Who was talking about AMD here? ... :rolleyes:
 

Hypertag

Member
Oct 12, 2011
148
0
0
Nice try. GTX480 had more functional units than M2050.

http://www.nvidia.com/docs/IO/105880/DS-Tesla-M-Class-Aug11.pdf

It appears it was based on the GTX470, with 448 CUDA cores. GTX480 was a 480 CUDA core part, with more TMUs and ROPs as well. K20X and Titan are rumored to share 2688 CUDA cores and 224 TMUs. Are you suggesting the ROP count has been neutered from 56 to 48 then, or the card has a TDP of 250 but draws more than that in gaming like the 480 did? Most amusing of all that 250W TDP on the 480 was a useless metric since the card drew 270W+ in actual games.

Here is the GF110 Tesla card http://www.amazon.com/Nvidia-Tesla-M.../dp/B005TJKPWU

Compare it directly to a GTX 580.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Here is the GF110 Tesla card http://www.amazon.com/Nvidia-Tesla-M.../dp/B005TJKPWU

Compare it directly to a GTX 580.

Tesla M2090:
Core clock: 650 MHz (1.3 GHz / 2)
Processor clock: 1.3 GHz
Memory clock: 1.85 GHz
Board power: <= 225 W

vs

GTX 580:
Core clock: 772 MHz
Processor clock: 1544 MHz
Memory clock: 2.02 GHz
Board power: <= 244 W


Edit: Figured I'd do the math...

Tesla to Geforce

+20% core/shader clock speed
+10% memory clock speed
+9% TDP
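
The same deltas can be reproduced from the raw figures quoted above (a minimal sketch; only the numbers from this post are used):

```python
# Tesla M2090 -> GTX 580 increases, from the figures quoted above.

def pct_increase(tesla: float, geforce: float) -> float:
    """Percent increase going from the Tesla figure to the GeForce figure."""
    return (geforce / tesla - 1) * 100

print(f"core/shader clock: +{pct_increase(650, 772):.0f}%")    # ~+19% (rounded to +20% above)
print(f"memory clock:      +{pct_increase(1.85, 2.02):.0f}%")  # ~+9%  (rounded to +10% above)
print(f"board power:       +{pct_increase(225, 244):.0f}%")    # ~+8%  (rounded to +9% above)
```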
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Just like AMD? Like the HD6970. It's the key nemesis of graphics cards, and why drivers are used to control consumption.

Twisting facts, I see. HD6970 had a TDP of 250W and in games it peaked slightly above 200W. I don't recall any instance ever of the HD6970 hitting or going over 250W of power in games. I know, I had one. It used less power than my HD7970 OCed, and my 7970 OC uses less than 250W.

Considering HD6970's power consumption is actually very close to a GTX680 and is under GTX580 (which in games used about 230W), your comment about HD6970's real world power consumption being out of line with its much higher 250W TDP is grossly inaccurate. The only thing HD6970's and GTX480's TDPs shared in common is that both didn't represent real world power consumption of those respective cards. In the case of the Radeon, it used way less, and the 480 used way more than its TDP stated.

You might want to go back to reviews and compare HD6970's power consumption in real world games to the GTX480 before directly comparing those 250W theoretical TDP numbers. Please don't come back with some rebuttal that Crysis 1 didn't load HD6970's GPU high enough.

[image: power consumption chart]
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
You know what a 250W TDP means for NVIDIA, right?
[image: maximum power consumption chart]

GTX 580 - 244W TDP
GTX 480 - 250W TDP
Maximum power consumption looks like TDP * ~1.33 for NV high-end cards. That thing will be power-hungry. Also, have a look at the 7950 (200W TDP) and 7970 (250W TDP); AMD cards' TDPs match their actual power consumption much better.
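
Taking the ~1.33 multiplier at face value (it is the poster's estimate, not an official figure), the implied peak draw for the cards listed would be:

```python
# Implied peak (Furmark-style) draw if the poster's ~1.33x factor holds for
# NVIDIA high-end cards. The factor is an estimate from this thread, not a spec.

NV_PEAK_FACTOR = 1.33

for card, tdp_w in [("GTX 580", 244), ("GTX 480", 250)]:
    print(f"{card}: {tdp_w} W TDP -> ~{tdp_w * NV_PEAK_FACTOR:.1f} W peak")
# GTX 580: 244 W TDP -> ~324.5 W peak
# GTX 480: 250 W TDP -> ~332.5 W peak
```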
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Twisting facts, I see. HD6970 had a TDP of 250W and in games it peaked slightly above 200. I don't recall any instance ever of HD6970 hitting or going over 250W of power in games. I know, I had one. It used less power than my HD7970 OCed.

Considering HD6970's power consumption is actually very close to a GTX680 and is under GTX580 (which in games used about 230W), your comment about HD6970's real world power consumption being out of line with its much higher 250W TDP is grossly inaccurate. The only thing HD6970's and GTX480's TDPs shared in common is that both didn't represent real world power consumption of those respective cards. In the case of the Radeon, it used way less, and the 480 used way more than its TDP stated.

You might want to go back to reviews and compare HD6970's power consumption in real world games to the GTX480 before directly comparing those 250W theoretical TDP numbers.

The HD6970 had to throttle because it couldn't handle the load. Unlike the less fortunate HD4870 cards, for example.

And you still need to learn the key differences between Tesla and GeForce cards.

The post above also shows a 283W load on a HD6970. But this is simply how gaming cards work, due to the shortcuts both manufacturers make. And the difference between, for example, the Tesla and GeForce lines shows how big it is.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's a GTX 480 with an additional SM cut; it has the big bus with the 470's core count.

Ya, and I said the Titan is assumed to be an uncut K20X. My comment still stands. All the Tesla cards linked were either gimped versions of the GTX480, or they were clocked a lot closer to the 580, not 39% apart. Is the Titan going to have some units cut, perhaps ROPs from 56 to 48?

Here is the GF110 Tesla card http://www.amazon.com/Nvidia-Tesla-M.../dp/B005TJKPWU

Compare it directly to a GTX 580.

Ya, what about it?

[image: Tesla M2090 specifications]


GTX580 had 512 CUDA cores with 1.58Tflops of SP
2090 had 512 CUDA cores with 1.33 Tflops of SP

Quick back of the envelope calculation suggests GTX580 was clocked just 19% more. Based on the rumoured specs, K20X's GPU clocks go up from 732mhz to 1019mhz, or 39%. Additionally, GTX580's memory clock was 8% higher. The Titan's memory goes up from 5.2 to 6Ghz or 15%.

The 2090 vs. GTX580 gap and the K20X vs. Titan's projected specs aren't even remotely comparable in terms of GPU processing / memory bandwidth increase relative to their TDP estimates.
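
The same envelope math in a few lines (a minimal sketch; the TFLOPS figures and clocks are the ones quoted in this post, and the 1019 MHz boost clock is the rumour being debated):

```python
# Clock-increase comparison: Tesla M2090 -> GTX 580 versus K20X -> rumoured Titan.

def clock_uplift(base: float, new: float) -> float:
    """Percent increase from the base clock (or TFLOPS figure) to the new one."""
    return (new / base - 1) * 100

# M2090 vs GTX 580, inferred from their SP TFLOPS (same 512-core shader count)
print(f"M2090 -> GTX 580 core clock:  +{clock_uplift(1.33, 1.58):.0f}%")  # ~+19%

# K20X vs rumoured Titan
print(f"K20X -> Titan core clock:     +{clock_uplift(732, 1019):.0f}%")   # ~+39%
print(f"K20X -> Titan memory clock:   +{clock_uplift(5.2, 6.0):.0f}%")    # ~+15%
```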

The HD6970 had to throttle because it couldn't handle the load. Unlike the less fortunate HD4870 cards, for example.

And you still need to learn the key differences between Tesla and GeForce cards.

The post above also shows a 283W load on a HD6970. But this is simply how gaming cards work, due to the shortcuts both manufacturers make. And the difference between, for example, the Tesla and GeForce lines shows how big it is.

Don't confuse synthetic power consumption #s in Furmark with games. That only undermines your entire point, since in the power virus bench the 480 pulled 361W. The HD6970 with PowerTune at +20% never exceeded 250W of power in games and remained stable at 880mhz. I had the card for more than a year. Provide proof from professional reviews where the HD6970 exceeded 250W of power consumption in games and at the same time still downclocked below 880mhz, or you are making things up. In fact, even at 950mhz that card didn't throttle with PowerTune set at +20%. Go ask anyone who owned it.

Throughout its life the HD6970 was known to pull less power in games than the GTX580, which itself drew less than the 480. In most reviews the 6970 even used less power than the GTX570. Don't make things up. The HD5000/6000 series were more efficient than NV's offerings, which is exactly why NV went back to the drawing board with Kepler and improved performance/watt tremendously from the Fermi days.

BF: BC2 real world gaming power consumption of the entire system. GTX580 >>> HD6970 and we know 580 was more efficient than the 480.

[chart: BF:BC2 full-system power consumption, from the HardOCP review linked below]

http://www.hardocp.com/article/2010/12/14/amd_radeon_hd_6970_6950_video_card_review/8#.URqyl6VZUeo

The 6970 even outperformed the GTX570 in that game while using less power.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
lol I love how this went back to Fermi and Furmark...


/golfclap

SEE GUYS LOOK NVIDIA IS LYING LOOK AT THESE FURMARK NUMBERS!
 