GeForce Titan coming end of February


Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
According to that slide, it would mean that it has more DP GFLOPS than the K20 and about the same as the K20X. This card really has to be extremely low volume, otherwise why would they jeopardize their Tesla business? Anyway, if that's true I'm pleasantly surprised; I thought they would castrate DP performance to 1/16 or even 1/32, just like the rest of the Kepler cards.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Was the 480 king of the perf/W hill? In a positive sense...

Because Titan will be. Fermi was a perf/W bottom feeder, if I remember correctly.

240/170 ≈ 1.41
So ~41% higher power consumption, which should also roughly equal the performance advantage over the 680. I don't think Titan will be more efficient than the 680.
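
A quick way to sanity-check that ratio (a rough sketch; the 170 W GTX 680 and 240 W Titan figures come from the posts above, and the 1:1 performance-to-power scaling is exactly the assumption being debated):

```python
# Rough perf/W comparison from the figures quoted in this thread.
gtx680_power_w = 170   # average gaming power cited for the GTX 680
titan_power_w = 240    # rumoured Titan power budget

power_ratio = titan_power_w / gtx680_power_w
print(f"Power ratio: {power_ratio:.2f}x (~{(power_ratio - 1) * 100:.0f}% higher)")

# If performance only rises in step with power (the assumption above),
# perf/W stays flat relative to the GTX 680:
assumed_perf_ratio = power_ratio
print(f"Relative perf/W: {assumed_perf_ratio / power_ratio:.2f}x")
```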
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
Not that I know of. But by now we know everything, don't we? Configuration, clocks, memory amount, TDP, price (900-1000ish).

Not as far as I'm aware. I've seen clock speeds quoted anywhere from 800 to 1000 MHz, so that's probably not 100% certain. The price is certainly unknown; according to various rumors they may be raising prices to unseen levels. These things are yet to be confirmed as far as I'm aware.

Of course the performance is unknown as well!

The price is the most important question for many of us, as it will determine how much of a rip-off this card is (unless it's sitting in the usual high-end $550-600 range).
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
240/170 ≈ 1.41
So ~41% higher power consumption, which should also roughly equal the performance advantage over the 680. I don't think Titan will be more efficient than the 680.


Oh but it will be, mark my word


It's the same Kepler arch, the same (if not better) 28nm process, and underclocking brings better perf/W, just as overclocking destroys perf/W. There is no way around it.

Didn't W1zzard mention some secret sauce, i.e. something besides more bandwidth and more CUDA cores?
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
Fermi jokes are a little out of order here.
Because almost double the size of GK104 with a 25-30% underclock pretty much guarantees a green-like TDP.
I am thinking something like (2 × 120 W) ± δ = 240 W ± δ.

So if we are talking perf/W, Titan will be one lean gaming chip.
Even more so than the GTX 680 or HD 7870.

I doubt it, but we will see. It is not that simple.
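
For what it's worth, the quoted back-of-envelope estimate looks something like this (a sketch only; the assumption that power scales with unit count and roughly linearly with clock is mine, and the 170 W GK104 figure is the one used earlier in the thread):

```python
# Back-of-envelope GK110 power estimate in the spirit of the quoted post.
# Assumptions (not from any official source): roughly 2x the execution
# resources of GK104, power scaling linearly with unit count and clock.
gk104_power_w = 170    # GTX 680 gaming power figure used earlier in the thread
scale_units = 2.0      # "almost double the size of GK104"

for underclock in (0.70, 0.75):    # a 30% and a 25% underclock
    estimate_w = gk104_power_w * scale_units * underclock
    print(f"clock factor {underclock:.2f}: ~{estimate_w:.0f} W")
# Prints roughly 238 W and 255 W, the same ballpark as (2 x 120 W) +/- d.
```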
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
No it's not, but TBH I should have been kinder to GK110 with a 30% underclock.
More like 2 × (100-110) W ;)
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
Look at Tahiti vs. Pitcairn: not even the 7950 is nearly as efficient as the 7870, despite running at 800 MHz vs. 1 GHz.
It will be the same with Kepler; it has always been this way with mid-range vs. high-end GPUs, gaming-wise.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Oh but it will be, mark my word

It's the same Kepler arch, the same (if not better) 28nm process, and underclocking brings better perf/W, just as overclocking destroys perf/W. There is no way around it.

Didn't W1zzard mention some secret sauce, i.e. something besides more bandwidth and more CUDA cores?


You're going to start a bad precedent with those emoticons!
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
And maybe some of us would rather make our money the honest way, instead of doing something that by all reports contributes to multiple criminal enterprises ;)
Now it definitely sounds like you're upset you're missing out.
There it is:

[Attached images: photos of the GeForce GTX Titan card]
Interesting choice of power connectors. It looks like they're keeping a lot of similarities with the GTX 690's design, but unfortunately not the fan. Like I said earlier, I imagine they learned from their mistakes with the GTX 480, but we'll see.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
According to that slide, it would mean that it has more DP GFLOPS than the K20 and about the same as the K20X. This card really has to be extremely low volume, otherwise why would they jeopardize their Tesla business? Anyway, if that's true I'm pleasantly surprised; I thought they would castrate DP performance to 1/16 or even 1/32, just like the rest of the Kepler cards.

Releasing Titan with anything more than the standard Kepler 1/24 rate goes against everything NV stands for. "No free rides" :p

Then again, it would be humiliating to be destroyed by a $300 7950, even if only in some overlooked chart.
Not to mention what happens if AMD or, I dunno, Square Enix starts pushing DP... just for the kick of it.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
According to that slide, it would mean that it has more DP GFLOPS than the K20 and about the same as the K20X. This card really has to be extremely low volume, otherwise why would they jeopardize their Tesla business? Anyway, if that's true I'm pleasantly surprised; I thought they would castrate DP performance to 1/16 or even 1/32, just like the rest of the Kepler cards.

This doesn't jeopardize the Tesla business, since it is a GeForce card. Quadro/Tesla software and drivers do specific things for workstations and have specific acceleration features for the Adobe suite, 3ds Max, and others. This won't be the case with the GeForce. Consumer cards have a lot of limitations, from a combination of software and hardware, that workstation cards do not, such as 10-bit color output in the Adobe suite (consumer cards only allow 10-bit in D3D, not in the Adobe suite), among other things.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
This doesn't jeopardize the Tesla business, since it is a GeForce card. Quadro/Tesla software and drivers do specific things for workstations and have specific acceleration features for the Adobe suite, 3ds Max, and others. This won't be the case with the GeForce. Consumer cards have a lot of limitations, from a combination of software and hardware, that workstation cards do not, such as 10-bit color output in the Adobe suite (consumer cards only allow 10-bit in D3D, not in the Adobe suite), among other things.

You can install workstation drivers on a GeForce; you just can't undo the cut DP performance by doing so.

All it takes is an ini edit.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
No business is going to try to save a few bucks by doing that; they don't get any support. And if you are doing big jobs, those jobs pay well. And if you talked your boss into buying a gaming card and your work stalled from bugs, who would get the blame?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
And 169 W on average in games, with only one exceeding 170 W by a mere 4 W
(lower section of the table):
http://www.3dcenter.org/artikel/ein...auchs/eine-neubetrachtung-des-grafikkarten-st

Average power consumption is only useful for estimating your electricity costs. Peak load in games is what you need to look at, because that's what the cooler/fan combination and the PSU have to cope with. Based on the links you provided, the 680 peaks at 183 W; TPU has the 680 at 186 W. This is why I kept saying that the peak power usage of the 670/680 is what I was using to arrive at Titan's expected GPU clocks. I still missed the mark, since I said 900-925 MHz or similar and it looks like it will be even lower. The math was all there. I even used the GTX 650 Ti/670/680's die sizes and power usage, but people ignored it. Tviceman and Groover and a couple of other guys saw it this way too.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
It's called "thermal" design power. Thermal power transfer is very slow compared to electrical one, so (naturally) short peaks are irrelevant for cooling. PSU is another matter, but it's very naive to think that any graphics card/PSU combination is running at the limit regarding power delivery. Usually Nvidia recommends a certain kind of power supply, like 500W or so, which is quite oversized because this includes lower quality PSUs also. No PSU in the world of the recommended wattage, not even a crappy one, would fold because a card uses a little more than the average power for a very short time period. Reserve for these naturally occuring spikes are surely included in the PSU recommendation given by Nvidia or AMD and have nothing to do with TDP.

In conclusion:
Peaks are irrelevant. And if we are talking about efficiency, which we are, they are especially irrelevant since efficiency is all about average consumption during a longer benchmark/gaming session, not some microsecond value. I play longer than a couple of microseconds ;)
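
To put the "thermal transfer is slow" argument in concrete terms: a heatsink acts roughly like a first-order low-pass filter on the power draw, so a spike lasting a fraction of a second barely moves the temperature. A toy model (all numbers are invented purely for illustration):

```python
# Toy first-order thermal model of a GPU cooler:
#   C * dT/dt = P(t) - T / R
# where T is the temperature rise above ambient, R the thermal resistance (K/W)
# and C the heat capacity (J/K). All values below are made up for illustration.
R = 0.25     # K/W: 170 W sustained gives ~42.5 K rise above ambient
C = 120.0    # J/K: time constant R*C = 30 s
dt = 0.01    # seconds per simulation step

def peak_rise(power_trace):
    T, peak = 0.0, 0.0
    for P in power_trace:
        T += dt * (P - T / R) / C   # explicit Euler step
        peak = max(peak, T)
    return peak

steps = 60_000                       # 10 minutes of simulated load
steady = [170.0] * steps             # constant 170 W
spiky = list(steady)
for i in range(0, steps, 1_000):     # every 10 s ...
    spiky[i:i + 10] = [186.0] * 10   # ... a 186 W spike lasting 0.1 s

print(f"steady 170 W:      peak rise ~{peak_rise(steady):.1f} K")
print(f"with 186 W spikes: peak rise ~{peak_rise(spiky):.1f} K")
# The brief spikes change the peak temperature by a negligible amount;
# the cooler effectively sees the average power.
```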
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Was there any more info leaked?

- 2688 CUDA cores, memory clocked at 6 GHz, 6 GB GDDR5
- GPU Boost may be 878 MHz, not 876 MHz
- some reviewers say that the boost mode is not working properly, but I am sure this will be fixed with a driver update
- the card should not require more than 250 W of power in games
- will be paper-launched tomorrow, while first reviews should appear a day later
- it should cost $900 (€860)
- there is no backplate
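
For reference, the theoretical throughput those specs imply (a sketch; the 384-bit memory bus is an assumption suggested by the 6 GB of GDDR5, and the FP64 rates are the two possibilities debated earlier in this thread, none of which are confirmed here):

```python
# Theoretical numbers implied by the leaked specs (sketch only).
cuda_cores = 2688
boost_ghz = 0.876        # the lower of the two quoted boost clocks
mem_gbps = 6.0           # effective GDDR5 data rate per pin
bus_bits = 384           # assumption, not stated in the post

fp32_gflops = cuda_cores * 2 * boost_ghz     # 2 FLOPs per core per clock (FMA)
bandwidth_gbs = mem_gbps * bus_bits / 8

print(f"FP32: ~{fp32_gflops:.0f} GFLOPS")               # ~4709 GFLOPS
print(f"Memory bandwidth: ~{bandwidth_gbs:.0f} GB/s")   # ~288 GB/s

# Double precision at the two rates discussed in this thread:
for rate, label in ((1 / 3, "1/3 (K20X-like)"), (1 / 24, "1/24 (GeForce Kepler)")):
    print(f"FP64 at {label}: ~{fp32_gflops * rate:.0f} GFLOPS")
```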

[Attached images: GeForce GTX Titan card photos, including the back of the card]
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Average power consumption is only useful for estimating your electricity costs. Peak load in games is what you need to look at, because that's what the cooler/fan combination and the PSU have to cope with. Based on the links you provided, the 680 peaks at 183 W; TPU has the 680 at 186 W. This is why I kept saying that the peak power usage of the 670/680 is what I was using to arrive at Titan's expected GPU clocks. I still missed the mark, since I said 900-925 MHz or similar and it looks like it will be even lower. The math was all there. I even used the GTX 650 Ti/670/680's die sizes and power usage, but people ignored it. Tviceman and Groover and a couple of other guys saw it this way too.

I think the closer they can get average to peak, the better. I'm pretty sure that was the intent of boost clocks with Kepler; they seem to need to do some work in that area.

If peak is 183 W with a TDP limit of ~190 W, then it's undershooting on performance, and with an average closer to 165 W it's not boosting high enough to increase performance while it still has TDP headroom left.

I like the concept of boost; I don't like the lack of voltage control... Looks like Titan will have boost as well. Hopefully it won't be locked, but that's unlikely.


$900, whew, yeah. Unless there are spin-offs, it looks like I'll be playing right into AMD's logic when I pick up a 7950.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's called "thermal" design power. Thermal power transfer is very slow compared to electrical one, so (naturally) short peaks are irrelevant for cooling. PSU is another matter, but it's very naive to think that any graphics card/PSU combination is running at the limit regarding power delivery.
In conclusion:
Peaks are irrelevant. And if we are talking about efficiency, which we are, they are especially irrelevant since efficiency is all about average consumption during a longer benchmark/gaming session, not some microsecond value. I play longer than a couple of microseconds ;)

I completely disagree. Those peaks will occur in many games, and for longer and longer periods as games get more demanding. The more demanding a game is, the more power a card can use (up to a limit, of course), because the GPU will be loaded at 99% more often (the GPU-limited case). You also ignored that people use their GPUs outside of games, where 99% GPU load for days at a time is common. If you only look at a video card's average power consumption in a game, you aren't seeing what it can actually reach if, say, it's loaded to 99% in a particular section of a game, a very demanding game, or some other program. If someone is running 2-3 GPUs and overclocking too, peak load consumption in games is far more important for assessing what PSU they need and whether it's worth getting a card with an aftermarket open-air cooler. We clearly saw this play out exactly this way with GTX 670/680 cards, which would often peak in games and exceed 70°C as a result. The consequence was that GPU Boost dropped in those cases, which lowered performance.

Average power usage is relevant for working out your electricity costs. What I never agreed with was Furmark/power-virus maximum power usage.

If the GTX 680 peaks at 183-186 W in games like Crysis 2, it will probably do so in many areas of Crysis 3 and Metro: Last Light, and especially in more demanding games slated for the future. Average power consumption of a GPU generally never has the GPU pegged at 99%. It stands to reason that next-generation games will peg it at the 98-99% level a lot more frequently, which implies that the peaks being reported at h4t and TPU will be much closer to a GPU's average usage in more demanding modern video games.

This even goes back to your desire for a 950 MHz card with a 280 W TDP. At 950 MHz-1 GHz, Titan might have used 250 W on average and peaked closer to 280 W, and NV likely didn't want to go that route. At ~876 MHz, it might use 225-230 W on average and peak at 250 W. Peak in games matters if it is repeatable and quantifiable on many occasions, because GPU makers take this into account. There is a big difference in noise levels and coolers between a 230 W GTX 580 and a 270 W GTX 480. That extra 40 W is the difference between a jet-engine gaming experience and a decent one, and that 40 W delta occurs at peak in games. This is why so many 480 owners were not happy with their cards (the early 6-8 month batches of 480s specifically).
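
The clock-versus-power trade-off being described, put into rough numbers (a sketch; the linear scaling with clock is an assumption and understates reality once voltage has to rise, and the 250 W at ~876 MHz baseline is the rumoured figure from this thread):

```python
# Rough power-vs-clock scaling for the rumoured Titan figures.
base_clock_mhz = 876
base_peak_w = 250        # rumoured peak gaming power at ~876 MHz

for clock_mhz in (876, 950, 1000):
    # Linear-with-clock lower bound; real scaling is steeper if voltage rises.
    estimate_w = base_peak_w * clock_mhz / base_clock_mhz
    print(f"{clock_mhz} MHz -> at least ~{estimate_w:.0f} W peak")
# 950 MHz  -> ~271 W
# 1000 MHz -> ~285 W, approaching the 280 W ballpark mentioned above
```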

I think the closer they can get average to peak, the better. I'm pretty sure that was the intent of boost clocks with Kepler; they seem to need to do some work in that area.

Exactly. If NV tested the Titan in a CPU-dependent game like Skyrim, they might have had a lot more headroom for GPU Boost. If they ran the Titan for hours in Crysis 3 and saw that it used 30-40 W more power, they would have needed to back GPU Boost off from 1 GHz, because games such as C3 are going to be far more GPU-limited. Average power consumption also includes areas of a game where the GPU is not fully loaded or is CPU-limited. More importantly, when you are designing a GPU's heatsink, VRMs, and power circuitry, you have to make sure everything can work for days or weeks at a time at 99% GPU load, because people do not just use their GPUs for games. In those cases, the average and peak power usage will be extremely close. Usage scenarios such as distributed computing (Folding@Home, etc.) can load the GPU to 99%. NV has to account for all of this, which is why they care a great deal about those 99%-load cases; that ultimately impacts how high they can clock the GPU.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Average power consumption of a GPU generally never has the GPU pegged at 99%. The problem is that next-generation games will peg it there a lot more, which means the peaks being reported will be much closer to a GPU's average usage in more demanding video games.

I think you're wrong here. The idea that Crysis 2 at 1200p with max settings isn't riding high at 99% the entire time without vsync would only mean that the limiting factor is the CPU.

None of the cards today are at a point where a modern CPU is going to limit them in Crysis 2.


Look at their ARES II review...

Test setup: Intel Core i7-3770K @ 4.6 GHz

[TPU charts: average power, peak power, and Crysis 2 FPS, all at 1920x1200 with max settings]


I don't believe for a second that the 680/7970 aren't pegged through the entire run.

You could make a case for what you're saying with the ARES II, given that its scaling is lackluster, but not with the single cards IMO.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
I completely disagree. Those peaks will occur in many games, and for longer and longer periods as games get more demanding. The more demanding a game is, the more power a card can use (up to a limit, of course), because the GPU will be loaded at 99% more often (the GPU-limited case). If you only look at a video card's average power consumption in a game, you aren't seeing what it can actually reach if, say, it's loaded to 99% in a particular section of a game or a very demanding game. If someone is running 2-3 GPUs and overclocking too, peak load consumption in games is far more important for assessing what PSU they need and whether it's worth getting a card with an aftermarket open-air cooler. We clearly saw this play out exactly this way with GTX 670/680 cards, which would often peak in games and exceed 70°C as a result. The consequence was that GPU Boost dropped in those cases, which lowered performance.

Average power usage is relevant for working out your electricity costs. What I never agreed with was Furmark/power-virus maximum power usage.

If the GTX 680 peaks at 183-186 W in games like Crysis 2, it will probably do so in many areas of Crysis 3 and Metro: Last Light, and especially in more demanding games slated for the future. Average power consumption of a GPU generally never has the GPU pegged at 99%. It stands to reason that next-generation games will peg it at the 98-99% level a lot more frequently, which implies that the peaks being reported at h4t and TPU will be much closer to a GPU's average usage in more demanding modern video games.

We're talking about efficiency now. And if we compare efficiency between two cards as we did or tried to do, and want to take into account future developments, both cards will be affected. Maybe equally, maybe not. If you want to analyze efficiency or anything else for that matter, you always need a reference point, and this reference point, too, will be affected by changing conditions with newer games.

Thus, it is valid to look at average power consumption if we do it for all cards involved in such comparisons at a given point in time. And by the way, peaks as single maximum values are quite error-prone. Averages are more reliable, since more information goes into their calculation rather than just one data point that may or may not be a quirk.
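
The statistical point, that a single maximum is far noisier than an average over many samples, in a small sketch (the power trace is synthetic and purely illustrative):

```python
import random

# Synthetic power trace: a card averaging ~170 W with measurement noise
# and one brief spike. Purely illustrative numbers.
random.seed(0)
trace = [170 + random.gauss(0, 5) for _ in range(10_000)]
trace[1234] += 30    # a single short spike or measurement quirk

average = sum(trace) / len(trace)
peak = max(trace)

print(f"average: {average:.1f} W")   # barely moved by the single outlier
print(f"peak:    {peak:.1f} W")      # dominated by that one sample
```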
 