GeForce Titan coming end of February


badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
So what are the performance estimates now? I hear some people around the interwebz are saying ~45% faster than a GTX 680.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Someone already linked this nice aggregate roundup from 3dcenter

That chart is full of errors. From HT4U they linked the load power usage for the 7970 at 210W, but the same chart shows 183W load for the 680, yet they use an average of 170W. Then, despite using load power figures from HT4U, they use averages from TPU. Mixing and matching averages and load power figures not only across different AMD/NV GPUs, but also between websites. Wasted effort.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
We're talking about efficiency now.

None of my posts were about efficiency; they were about power usage. Not sure why you assumed I was talking about performance/watt.

And btw, peaks as single maximum values are quite error prone. Average values are always more reliable since more information is used in their calculation, not just one data point that may or may not be a quirk.

You keep missing this: some people use their GPU at 99% load for hours/days/weeks at a time. For those people the peak figure is not a single error-prone value; it is their 95th percentile power draw, if not higher. There is nothing wrong with saying that a GTX680 uses 166W of power on average in games in review ABCD, while an HD7970 uses 163W. However, that average includes many CPU-limited games and cases where the GPU is not fully loaded. A lot of people on this forum look at peak load in games because some run GPU-intensive programs at 99% load, such as distributed computing, etc. Dismissing peak as irrelevant is quite telling, because it means you are assuming this group of PC enthusiasts who use their GPUs for things other than games does not exist. Performance/watt should be looked at for peak values as well for those users.

If most of your usage patterns involve playing CPU-limited games, then sure, look at the average power usage for yourself. You keep claiming that you love using downsampling. That generally means 99% GPU load, i.e. peak values, not averages. In that case the average power usage will approach the peak reported at TPU/HT4U, etc.
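To make the average-vs-peak point concrete, here is a toy example with made-up numbers (not measured data), just to show how the average, the 95th percentile and the peak diverge depending on the usage pattern:

# Hypothetical per-second board power samples (W) for one minute of mixed use:
# mostly CPU-limited scenes, plus a stretch of sustained 99% GPU load.
samples = [120] * 30 + [165] * 20 + [190] * 10

average = sum(samples) / len(samples)
p95 = sorted(samples)[int(0.95 * len(samples)) - 1]
peak = max(samples)

print(f"average: {average:.1f} W")   # ~146.7 W
print(f"95th percentile: {p95} W")   # 190 W
print(f"peak: {peak} W")             # 190 W
# For someone running the GPU at 99% load around the clock, the whole trace
# looks like that last stretch, so their real-world average sits at what the
# reviews report as "peak".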
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
So what are the performance estimates now? I hear some people around the interwebz are saying ~45% faster than a GTX 680.

75% more shaders and 50% more memory bandwidth. I assume the TMUs and ROPs get expanded proportionally along with their respective components.

In raw performance on cores, a GTX680 gives 1536*1006 = 1,545,216.
The same with Titan gives 2688*837 = 2,249,856. Or 45.6% more.

So I would guess between 45.6% and 50%. Let's just say 50% to make it easier.
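A quick sanity check of that arithmetic in Python (just a sketch; the 2688-core / 837MHz Titan figures are the rumored specs, not confirmed):

# Rough shader-throughput comparison: cores * core clock (MHz).
gtx680_throughput = 1536 * 1006   # 1,545,216
titan_throughput = 2688 * 837     # 2,249,856 (rumored specs)
print(f"Titan vs GTX 680: +{(titan_throughput / gtx680_throughput - 1) * 100:.1f}%")  # ~+45.6%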
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
boxleitnerb is right on the money.
Thermal and power peaks are only somewhat relevant when it comes to certain parts of the PCB/electrical circuitry and the PSU.

Average heat dissipation while doing heavy lifting is what defines TDP.
The precise TDP definition probably differs between AMD/Intel/NV, but it always revolves around
"What kind of cooling solution do I need?"

The answer to that question has little to do with absolute peaks.
It is whatever cooler is able to continuously take away an amount of heat equal to the maximum sustained chip power draw,
because essentially all of P = U*I ends up "wasted" as heat.
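A trivial sketch of that sizing logic (all numbers invented purely for illustration):

# Size the cooler for the sustained draw, not millisecond spikes.
sustained_power_w = 250   # hypothetical sustained board power under heavy load
spike_power_w = 325       # hypothetical millisecond-scale transient (~130%)

# Essentially all electrical power (P = U*I) ends up as heat, so the cooler must
# continuously remove roughly the sustained draw; brief spikes are soaked up by
# the heatsink's thermal mass and do not drive the cooler choice.
required_continuous_dissipation_w = sustained_power_w
print(f"cooler must handle ~{required_continuous_dissipation_w} W continuously")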
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think you're wrong here; the idea that Crysis 2 at 1200p w/ max settings isn't riding high at 99% the entire time without vsync would only mean that the limiting factor is the CPU.

I don't think you guys are understanding what I am saying. Even if Crysis 2 showed 98-99% GPU usage, it does NOT mean that 98-99% of that GPU's functional units are all used up. It doesn't mean at all that every single CUDA core is loaded to 99%. There are programs out there that may show 99% GPU usage but use more of its functional units simultaneously. Since we can't cover every single program someone may use, we have to account for these cases, unless you want to ask every single person who wants GPU purchasing advice what programs they will use (distributed computing, rendering, bitcoin mining, code compiling, etc.). That peak value in games will essentially become the average for those types of users, because they will use more of the GPU's functional units; games do not use most of the GPU's resources. Those usage patterns are still real world, unlike Furmark. Not only that, but when you use more of the GPU's resources, the VRMs are also loaded up more, which pushes power usage higher.

If all you do is play videogames and nothing else, by all means look at average power usage only.
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
None of my posts were about efficiency; they were about power usage. Not sure why you assumed I was talking about performance/watt.

Well, fisherman and I were and you chimed in, so this side discussion is actually a bit off topic ;)

You keep missing this: some people use their GPU at 99% load for hours/days/weeks at a time. For those people the peak figure is not a single error-prone value; it is their 95th percentile power draw, if not higher. There is nothing wrong with saying that a GTX680 uses 166W of power on average in games in review ABCD, while an HD7970 uses 163W. However, that average includes many CPU-limited games and cases where the GPU is not fully loaded. A lot of people on this forum look at peak load in games because some run GPU-intensive programs at 99% load, such as distributed computing, etc. Dismissing peak as irrelevant is quite telling, because it means you are assuming this group of PC enthusiasts who use their GPUs for things other than games does not exist. Performance/watt should be looked at for peak values as well for those users.

If most of your usage patterns involve playing CPU-limited games, then sure, look at the average power usage for yourself. You keep claiming that you love using downsampling. That generally means 99% GPU load, i.e. peak values, not averages. In that case the average power usage will approach the peak reported at TPU/HT4U, etc.

As for computing, you're right. But I think most people will game on Titan since you can get more compute power for cheap with a 7970 or 7990.
I always look at things from my perspective first. Sure I love downsampling and SGSSAA, but I also hate tearing, so my fps are locked at 60 anyways, meaning no 99% all the time unless I go below 60.

But I'd be happy to do some power measurements at the wall with different settings once my cards arrive.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Average heat dissipation while doing heavy lifting is what defines TDP.
The precise TDP definition probably differs between AMD/Intel/NV, but it always revolves around
"What kind of cooling solution do I need?"

That's not the definition of TDP, unless the company specifically states that's how they are defining it for their product.

The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate. The TDP is typically not the most power the chip could ever draw, such as by a power virus, but rather the maximum power that it would draw when running "real applications".*

- Distributed Computing (Folding @ Home, Milky Way @ Home)
- Bitcoin mining
- HPC / code compiling / ray-tracing, etc.

All of these real-world applications will max out the GPU more than any game. NV/AMD design the GPU's VRM/heatsink components and generally quote the TDP around the most intensive real-world applications, which are not games. It makes total sense that Furmark and other similar power viruses do not load the GPU realistically, which is why we don't care about TDP/max power usage in their context. However, all those other real-world applications are taken into account when arriving at the GPU's clock speeds, VRM and thermal solution design. Average power consumption in games is meaningless in this context.

If NV only designed the Titan around average power consumption in games, the GPU would have shipped with much higher clock speeds.

* In some cases the TDP has been underestimated for real-world applications, as was the case with the GTX480. That was most likely NV intentionally low-balling the real-world TDP of the 480 to save face. The real TDP of the 480 should have been 280W.

As for computing, you're right. But I think most people will game on Titan since you can get more compute power for cheap with a 7970 or 7990.

I am not telling you guys that average power usage is a wrong figure to use. If all you do is play games, then use that! What I am saying is that the GPU's clock speeds and TDP are dictated by maximum power usage in real-world applications, and those are not just games. NV/AMD account for these apps, which is why we are seeing the Titan ship with 876mhz GPU clocks, not 1019mhz. You could easily have a situation where the average power usage of a 1019mhz Titan in games would be similar to the average power usage of a 925mhz Titan in distributed computing projects, because games do not have the ability to load the GPU's functional units to the same extent. This likely explains why NV had to drop the clocks on the Titan, and why from the very beginning I kept using the GTX670/680's peak power usage to explain my hesitation to believe the 1019mhz clocks in a 250W power envelope.
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Btw this begs the question:

What are real applications for graphics cards that are marketed as gaming cards under the brands "Geforce" or "Radeon"? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?

But this is a slippery slope, I guess; no one can say for sure what AMD and Nvidia are thinking about this. I would assume they want people to buy their professional products if you're doing this type of workload.
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
The guy I deal with in sales at NCIX confirmed $900 MSRP for the card and said they don't have them in their warehouse yet. Same guy who told me the correct price for the 680 a few days early so it is likely accurate. Too bad, $2000 is way too much for what two single GPU cards are worth for my buying habits. Will wait for the price to drop.

I don't think nvidia will ever do another GTX 480 card. That card sucked balls; it was so horrible I can't see them ever making that mistake again. This is a nice-looking card. Sure, it will use more power and run hot, but no way it will be like the 480 dustbuster, and people who buy it are not going to give a crap about thermals; it will probably be very similar to the GTX 580. It's only noise that is annoying, not power consumption, and I doubt this card will be excessively loud unless you crank the fan.

At some point I will get a few and put them under water cooling anyways. Even 50% more than a 680 is still really impressive, it's just the price that isn't.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
100% incorrect.

The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate.


That's what I've said ;)

It is whatever cooler is able to continuously take away an amount of heat equal to the maximum sustained chip power draw

I even went a step ahead (your definition is pretty self-evident :D) and equated the dissipation needed with the power drawn
because essentially all of P = U*I ends up "wasted" as heat.


If sustained is what's troubling you, think about it for a sec:

Does my cooler really give a damn if, for the duration of one millisecond, my chip can draw power equal to 130% of its maximum sustained power?
Not really.

But if you are thinking in seconds (not in milli- and microseconds), then that would qualify as "sustained", and not as "peak".
Why?

Because obviously you are using a bad test application.
And if this app can load the chip with 130% power for a couple of seconds, then sure as hell it can be rewritten to keep the chip 130% loaded for longer periods.

And so you see again - peaks are irrelevant when it comes to TDP.
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
* In some cases the TDP has been underestimated for real-world applications, as was the case with the GTX480. That was most likely NV intentionally low-balling the real-world TDP of the 480 to save face. The real TDP of the 480 should have been 280W.

Or they had wide-ranging sample variance, with some chips being leakier than others.

You don't think I pull 45% overclocks on reference air on a 220W stock card that is already undervalued in the TDP department with a lackluster cooler, do you?

:confused:
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Still no word on whether voltage control is unlocked at all. The presence of boost clocks, and how the other Kepler cards deal with boost and voltage, makes me think it isn't unlocked, which is too bad if that ends up being true. It will still be interesting to see how much the card is "underclocked" to stay within the 250W TDP. If it can hit 1050mhz regularly without voltage adjustments, then manual voltage control isn't needed.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Nvidia has hardware and software that monitor the tdp and temperatures. Go back to the gtx 680 launch reviews. This is why people see/will have higher boost clocks in some games.

I expect most reviews of the GeForce Titan will be done with a gaming focus, comparing it to other gaming cards running games.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/
One revolutionary change that allows GeForce GTX 680 to aim high, is an extremely smart self-tuning logic that fine-tunes clock speeds and voltages, on the fly, with zero user intervention, to yield the best possible combination of performance and efficiency for a given load scenario. The GTX 680 hence reshapes the definition of fixed load clock speed, with dynamic clock speeds. Think of it as a GPU-take on Intel's Turbo Boost technology, which works in conjunction with SpeedStep to produce the best performance-per-Watt for CPUs that feature it.
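Conceptually that boost logic is just a feedback loop around power and temperature targets. A very rough sketch (this is not Nvidia's actual algorithm; the targets and step size below are made up):

POWER_TARGET_W = 170   # assumed boost power target, for illustration only
TEMP_TARGET_C = 80     # assumed temperature target, for illustration only
CLOCK_STEP_MHZ = 13

def adjust_clock(clock_mhz, board_power_w, temp_c):
    """Nudge the clock up when there is power/thermal headroom, down otherwise."""
    if board_power_w < POWER_TARGET_W and temp_c < TEMP_TARGET_C:
        return clock_mhz + CLOCK_STEP_MHZ   # boost: headroom available
    if board_power_w > POWER_TARGET_W or temp_c > TEMP_TARGET_C:
        return clock_mhz - CLOCK_STEP_MHZ   # back off toward the targets
    return clock_mhz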
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Btw this begs the question:

What are real applications for graphics cards that are marketed as gaming cards under the brands "Geforce" or "Radeon"? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?

But this is a slippery slope, I guess; no one can say for sure what AMD and Nvidia are thinking about this. I would assume they want people to buy their professional products if you're doing this type of workload.

It can't be primarily games since HD7000 was already designed for HPC to begin with, which right away means using those chips in more intensive apps than games. NV/AMD both talked about this when the whole issue of HD4870-4890 and GTX200 cards being blown up in Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real world usage patterns. Other real world apps that load the GPU more than games are still considered.

Nvidia has hardware and software that monitor the tdp and temperatures. Go back to the gtx 680 launch reviews. This is why people see/will have higher boost clocks in some games.

The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300mhz. They didn't. A 1058mhz 680 peaks at about 186W in games which leaves almost 40W of extra headroom based on the TDP. NV clearly considered the design around more intensive real world applications than games when setting GPU clock speeds of the 680. The reference design can cope with 225W of power usage but games do not even get there.
 
Last edited:

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Btw this begs the question:

What are real applications for graphics cards that are marketed as gaming cards under the brands "Geforce" or "Radeon"? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?

But this is a slippery slope, I guess; no one can say for sure what AMD and Nvidia are thinking about this. I would assume they want people to buy their professional products if you're doing this type of workload.

Agreed, but I would sure test some programs and see how it fares compared to a Quadro 6000. Its memory bandwidth will give it a good advantage.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
It can't be primarily games since HD7000 was already designed for HPC to begin with, which right away means using those chips in more intensive apps than games. NV/AMD both talked about this when the whole issue of HD4870-4890 and GTX200 cards being blown up in Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real world usage patterns. Other real world apps that load the GPU more than games are still considered.

HD7k SKU != FirePro SKU.
Look at K20X and Titan. Significantly higher clocks for core and memory and almost the same TDP if those 250W are indeed correct. SKUs for different market segments are not comparable regarding TDP.

The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300mhz. They didn't. A 1058mhz 680 peaks at about 186W in games which leaves almost 40W of extra headroom based on the TDP. NV clearly considered the design around more intensive real world applications than games when setting GPU clock speeds of the 680. The reference design can cope with 225W of power usage but games do not even get there.

I've seen values of 170W and 195W for GTX680 TDP, never 225W though. 225W is just what you get when you add up the power connectors.

NVIDIA’s official TDP is 195W, though as with the GTX 500 series they still consider this an average number rather than a true maximum. The second number is the boost target, which is the highest power level that GPU Boost will turbo to; that number is 170W.
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review

Considering that Furmark doesn't go beyond approx. 195W (see the ht4u review) and Furmark represents the heaviest load I know of, I wonder how one can arrive at a 225W TDP. I know of no scenario where the 680 uses more than those 195W. In games the 170W is spot on with 3DCenter's analysis (169W), even if there is a typo here and there.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Too bad, $2000 is way too much for what two single GPU cards are worth for my buying habits. Will wait for the price to drop. Even 50% more than a 680 is still really impressive, it's just the price that isn't.

Agreed.

Notice what I said earlier in this thread about how people overhyped the GTX480/580/680's specs and real-world gaming performance increases? We are seeing history repeat itself for the 4th time in a row.

We went from claims of a 1GHz 2880SP GK110 last fall, to a 1GHz 2688SP part recently, and then ended up with an ~880mhz card. Shading and texture fill-rate increases are less than 50% over the 680 and pixel fill-rate is up less than 25%, which suggests the card will probably be ~50-60% faster than the 680, possibly because Kepler's memory bandwidth bottleneck is being opened up. It's impressive, but nowhere near as impressive considering the price increase NV is asking 1 year after the 680 launched, esp. if it's also voltage locked.

GTX580 -> 680 (+35-40%) -> Titan (+50-60%). More than 2 years later, but with a price increase from $499 to $899.
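For reference, the back-of-the-envelope math behind the sub-50% and sub-25% figures above (a sketch assuming the rumored Titan specs of 2688 cores, 224 TMUs and 48 ROPs at 837MHz on a 384-bit bus at 6008MHz effective; none of that is confirmed yet):

# Scaling vs. a reference GTX 680 (1536 cores, 128 TMUs, 32 ROPs @ 1006 MHz,
# 256-bit @ 6008 MHz effective). Titan figures are rumored, not confirmed.
gtx680 = {"cores": 1536, "tmus": 128, "rops": 32, "clock": 1006, "bus": 256, "mem": 6008}
titan = {"cores": 2688, "tmus": 224, "rops": 48, "clock": 837, "bus": 384, "mem": 6008}

def gain(metric):
    return (metric(titan) / metric(gtx680) - 1) * 100

print(f"shading:    +{gain(lambda g: g['cores'] * g['clock']):.1f}%")  # ~+45.6%
print(f"texturing:  +{gain(lambda g: g['tmus'] * g['clock']):.1f}%")   # ~+45.6%
print(f"pixel fill: +{gain(lambda g: g['rops'] * g['clock']):.1f}%")   # ~+24.8%
print(f"bandwidth:  +{gain(lambda g: g['bus'] * g['mem']):.1f}%")      # ~+50.0%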
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
It can't be primarily games since HD7000 was already designed for HPC to begin with, which right away means using those chips in more intensive apps than games. NV/AMD both talked about this when the whole issue of HD4870-4890 and GTX200 cards being blown up in Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real world usage patterns. Other real world apps that load the GPU more than games are still considered.



The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300mhz. They didn't. A 1058mhz 680 peaks at about 186W in games which leaves almost 40W of extra headroom based on the TDP. NV clearly considered the design around more intensive real world applications than games when setting GPU clock speeds of the 680. The reference design can cope with 225W of power usage but games do not even get there.

There is another thing: wear and tear. Transistors, like everything else, "age", so you can't really build a chip based on "best case scenario" loads.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I've seen values of 170W and 195W for GTX680 TDP, never 225W though. 225W is just what you get when you add up the power connectors.


Sorry fellas, I mixed that up. I remember reading back when 680 launched that after-market 680's had a TDP of 225W. I remember now that the reference 680 had a TDP of 195W. Thanks for the correction. :thumbsup:

There is another thing: wear and tear. Transistors, like everything else, "age", so you can't really build a chip based on "best case scenario" loads.

Good point. I think NV and AMD leave a lot of headroom on the table, which is why we overclockers exploit it. :p

And so you see again - peaks are irrelevant when it comes to TDP.

I think I see where the misunderstanding comes from. I am not talking about "peaks for milliseconds" but about the peak power usage graphs at websites like TPU. I am saying that those peak measurements TPU shows will be "averages", or very close to the average, when running more intensive real-world applications. NV/AMD must take those cases into account when quoting the TDP. Distributed computing, ray tracing, etc. all fall into this category and NV/AMD have to account for that. Otherwise you end up with an HD7970 that uses just 163W of power in games on average but has a TDP of 250W! Average power consumption in games is not what dictates the GPU clocks, the heatsink/VRM design or the TDP quotes on AMD/NV's behalf. If there is a real-world app that uses >200W on a 7970, AMD can't just quote a TDP of 195W because the 7970 only uses 163W in games; that would be misleading. It only goes to show how useless the TDP number is unless both companies define it the same way or report it accurately.
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
[Attached image: photo of the card's PCB]


If you look at the backside of the card near the power connectors, it does not have the small chip that is on the GTX 680, 670 and 660 that regulates the voltage. I would think they would have done the right thing on an enthusiast card and included voltage control. They can't be deaf to the feedback and the disappointment enthusiasts had about how locked down GK104 was, especially with a card they are trying to attach such a high premium to.

The chip is located somewhere else on the 690 though, so who knows...
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Agreed.

Notice what I said earlier in this thread about how people overhyped the GTX480/580/680's specs and real-world gaming performance increases? We are seeing history repeat itself for the 4th time in a row.

Uhhh, I don't remember the GTX 580 being overhyped. In fact, I mostly remember people saying nvidia couldn't release anything faster on 40nm because they were at the limits of power usage. And as far as the 680 was concerned, up until two weeks before the card came out, no one, and I mean NO ONE, thought it would outperform an HD7970. Neither of those cards was overhyped in the performance discussion, at least not by anyone except passers-by.

Anyways, the overhyping goes both ways equally. Sliverforce's prophetic appearance here at vc&g, with numerous claims that the 6970 would be 30% faster than the GTX 480, is still fresh in mind.
 

Smartazz

Diamond Member
Dec 29, 2005
6,128
0
76
Is it too optimistic to think that this card will be $600 in the near future? I'd love to pick one of these up, but I would consider $600 toward the limit of what I would spend on a graphics card.
 