GeForce Titan coming end of February

Discussion in 'Video Cards and Graphics' started by Rikard, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. badb0y

    badb0y Diamond Member

    Joined:
    Feb 22, 2010
    Messages:
    3,934
    Likes Received:
    0
    So what are the performance estimates now? I hear some people around the interwebz saying ~45% faster than a GTX 680.
     
  2. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,169
    Likes Received:
    115
    That chart is full of errors. From HT4U they linked load power usage for the 7970 at 210W, but then the same chart shows 183W for the 680, yet they use an average of 170W. Then, despite using load power usage from HT4U, they use the average from TPU. Mixing and matching averages and load power figures not only across different AMD/NV GPUs, but also between websites. Wasted effort.
     
  3. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,169
    Likes Received:
    115
    None of my posts were about efficiency; they were about power usage. Not sure why you assumed I was talking about performance/watt.

    You keep missing this: some people run their GPU at 99% load for hours/days/weeks at a time. For those people the peak figure is not a single error-prone outlier; it is roughly their 95th percentile of power draw, if not higher. There is nothing wrong with saying that a GTX680 uses 166W of power on average in games in review ABCD, while an HD7970 uses 163W. However, that average includes many CPU-limited games and cases where the GPU is not fully loaded. A lot of people on this forum look at peak load in games because they run 99% GPU-intensive programs such as distributed computing, etc. Dismissing peak as irrelevant is quite telling, because it means you are assuming this group of PC enthusiasts who use their GPUs for things other than games does not exist. Performance/watt should be looked at for peak values as well for those users.

    If most of your usage involves playing CPU-limited games, then sure, look at the average power usage for yourself. But you keep saying that you love using downsampling. That generally means 99% GPU load, i.e. peak values, not averages. In that case the average power usage will approach the peak reported at TPU/HT4U, etc.
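
    To illustrate the 95th-percentile point, here is a tiny sketch with made-up per-second power samples for a GPU that sits near full load almost all the time; the wattage figures are invented purely for illustration:

    ```python
    # Illustrative only: invented per-second power samples for a GPU at ~99% load
    # most of the time, with a few dips (menus, loading screens, etc.).
    samples_w = [187] * 95 + [120, 110, 95, 90, 80]   # 100 one-second samples, watts

    mean_w = sum(samples_w) / len(samples_w)
    p95_w = sorted(samples_w)[int(0.95 * len(samples_w)) - 1]  # simple 95th percentile

    print(f"average: {mean_w:.0f} W, 95th percentile: {p95_w} W")
    # ~183 W average, 187 W at the 95th percentile -- for this usage pattern the
    # "peak"-style number, not a mixed-game average, is what the cooling and PSU see.
    ```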
     
  4. ShintaiDK

    ShintaiDK Lifer

    Joined:
    Apr 22, 2012
    Messages:
    20,098
    Likes Received:
    15
    75% more shaders and 50% more memory bandwidth. I assume TMUs and ROPs get scaled up proportionally along with their respective blocks.

    In raw performance on cores, a GTX680 gives 1536 x 1006 = 1,545,216.
    The same for Titan gives 2688 x 837 = 2,249,856, or 45.6% more.

    So I would guess between 45.6% and 50%. Let's just say 50% to make it easier.
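
    The same back-of-the-envelope math as a small script, in case anyone wants to plug in other clock guesses (the 837 MHz figure is the rumored base clock, so treat it as an assumption):

    ```python
    # Raw core throughput = CUDA cores x core clock, using the figures quoted above.
    gtx680_cores, gtx680_clock = 1536, 1006  # MHz
    titan_cores, titan_clock = 2688, 837     # rumored base clock, MHz

    gtx680_raw = gtx680_cores * gtx680_clock  # 1,545,216
    titan_raw = titan_cores * titan_clock     # 2,249,856

    print(f"Titan raw core throughput vs GTX 680: +{(titan_raw / gtx680_raw - 1) * 100:.1f}%")  # ~+45.6%
    ```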
     
  5. f1sherman

    f1sherman Platinum Member

    Joined:
    Apr 5, 2011
    Messages:
    2,244
    Likes Received:
    0
    boxleitnerb is right on the money.
    Thermal and power peaks are only somewhat relevant when it comes to certain parts of the PCB/electrical circuitry and the PSU.

    Average heat dissipation while doing heavy lifting is what defines TDP.
    The precise TDP definition probably differs between AMD/Intel/NV, but it always revolves around
    "What kind of cooling solution do I need?"

    The answer to that question has little to do with absolute peaks.
    It is a cooler that is able to continuously remove an amount of heat equal to the maximum sustained chip power draw,
    because essentially all of P = U*I ends up "wasted" as heat.
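
    A minimal sketch of that relationship, treating all sustained electrical power (P = U*I) as heat the cooler has to remove; the voltage/current and cooler figures below are made up for illustration:

    ```python
    # Illustrative only: all sustained electrical power (P = U * I) becomes heat
    # that the cooler must remove continuously. Figures are invented, not measured.
    def required_cooling_watts(voltage_v: float, current_a: float) -> float:
        """Sustained heat output equals sustained electrical power draw."""
        return voltage_v * current_a

    sustained_draw = required_cooling_watts(12.0, 16.0)  # e.g. a 12 V rail at 16 A -> 192 W
    cooler_rating = 250.0                                # hypothetical TDP-style cooler rating, W

    print(f"Sustained draw: {sustained_draw:.0f} W")
    print("Cooler is adequate" if cooler_rating >= sustained_draw else "Cooler is undersized")
    ```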
     
    #1355 f1sherman, Feb 17, 2013
    Last edited: Feb 17, 2013
  6. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,169
    Likes Received:
    115
    I don't think you guys are understanding what I am saying. Even if Crysis 2 shows 98-99% GPU usage, it does NOT mean that 98-99% of the GPU's functional units are being used. It doesn't mean at all that every single CUDA core is loaded to 99%. There are programs out there that may show 99% GPU usage but exercise more of the chip's functional units simultaneously. Since we can't cover every single program someone may use, we have to account for these cases, unless you want to ask every single person who asks for GPU purchasing advice what programs they will run (distributed computing, rendering, bitcoin mining, code compiling, etc.). The peak value in games will essentially become the average for those types of users, because their workloads use more of the GPU's functional units than games do. Those usage patterns are still real world, unlike Furmark. Not only that, but when you use more of the GPU's resources, the VRMs are also loaded up more, which pushes power usage higher.

    If all you do is play videogames and nothing else, by all means look at average power usage only.
     
    #1356 RussianSensation, Feb 17, 2013
    Last edited: Feb 17, 2013
  7. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    Well, f1sherman and I were already discussing this when you chimed in, so this side discussion is actually a bit off topic ;)

    As for computing, you're right. But I think most people will game on Titan, since you can get more compute power for cheap with a 7970 or 7990.
    I always look at things from my perspective first. Sure, I love downsampling and SGSSAA, but I also hate tearing, so my fps are locked at 60 anyway, meaning no 99% load all the time unless I drop below 60.

    But I'd be happy to do some power measurements at the wall with different settings once my cards arrive.
     
    #1357 boxleitnerb, Feb 17, 2013
    Last edited: Feb 17, 2013
  8. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,169
    Likes Received:
    115
    That's not the definition of TDP, unless the company specifically states that's how they are defining it for their product.

    The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate. The TDP is typically not the most power the chip could ever draw, such as by a power virus, but rather the maximum power that it would draw when running "real applications".*

    - Distributed Computing (Folding @ Home, Milky Way @ Home)
    - Bitcoin mining
    - HPC / code compiling / ray-tracing, etc.

    All of these real world applications will max out the GPU more than any game. NV/AMD design the GPU's VRMs and heatsink and generally quote the TDP around the most intensive real world applications, which are not games. It makes total sense that Furmark and other similar power viruses do not load the GPU realistically, which is why we don't care about TDP/max power usage in their context. However, all those other real world applications are taken into account when arriving at the GPU's clock speeds, VRM spec and thermal solution design. Average power consumption in games is meaningless in this context.

    If NV only designed the Titan around average power consumption in games, the GPU would have shipped with much higher clock speeds.

    * In some cases the TDP has been underestimated for real world applications, as was the case with the GTX480. That was most likely NV intentionally low-balling the real world TDP of the 480 to save face. The real TDP of the 480 should have been 280W.

    I am not telling you guys that average power usage is a wrong figure to use. If all you do is play games, then use that! What I am saying is that the GPU's clock speeds and TDP are dictated by maximum power usage in real world applications, and those are not just games. NV/AMD account for these apps, which is why we are seeing the Titan ship with 876mhz GPU clocks, not 1019mhz. You could easily have a situation where the average power usage of a 1019mhz Titan in games is similar to the average power usage of a 925mhz Titan in distributed computing projects, because games do not have the ability to load the GPU's functional units to the same extent. This likely explains why NV had to drop the clocks on the Titan, and why from the very beginning I kept using the GTX670/680's peak power usage to explain my hesitation to believe 1019mhz clocks would fit in a 250W power envelope.
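
    A rough sketch of that clock/power reasoning. This is not NV's methodology, just two stated assumptions: that fully loaded compute power scales roughly linearly with core clock at fixed voltage (voltage bumps would make it worse), and that the 876 MHz clock was picked to just fit the 250W budget:

    ```python
    # Rough illustration only: assume a compute workload that fully loads the chip,
    # power roughly proportional to core clock at fixed voltage, and 876 MHz ~ 250 W.
    TDP_BUDGET_W = 250.0
    FULL_LOAD_AT_876MHZ_W = 250.0  # assumption: the shipping clock just fits the budget

    def estimated_full_load_watts(clock_mhz: float) -> float:
        return FULL_LOAD_AT_876MHZ_W * (clock_mhz / 876.0)

    for clock in (876, 925, 1019):
        watts = estimated_full_load_watts(clock)
        status = "within" if watts <= TDP_BUDGET_W else "over"
        print(f"{clock} MHz -> ~{watts:.0f} W ({status} the {TDP_BUDGET_W:.0f} W budget)")
    # 925 MHz -> ~264 W, 1019 MHz -> ~291 W under this crude model
    ```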
     
    #1358 RussianSensation, Feb 17, 2013
    Last edited: Feb 17, 2013
  9. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    Btw this raises the question:

    What are "real applications" for graphics cards that are marketed as gaming cards under the brands "Geforce" or "Radeon"? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?

    But this is a slippery slope I guess; no one can say for sure what AMD and Nvidia are thinking about this. I would assume they want people to buy their professional products for that type of workload.
     
    #1359 boxleitnerb, Feb 17, 2013
    Last edited: Feb 17, 2013
  10. Grooveriding

    Grooveriding Diamond Member

    Joined:
    Dec 25, 2008
    Messages:
    7,727
    Likes Received:
    3
    The guy I deal with in sales at NCIX confirmed $900 MSRP for the card and said they don't have them in their warehouse yet. Same guy who told me the correct price for the 680 a few days early so it is likely accurate. Too bad, $2000 is way too much for what two single GPU cards are worth for my buying habits. Will wait for the price to drop.

    I don't think nvidia will ever do another GTX 480 card. That card sucked balls; it was so horrible I can't see them ever making that mistake again. This is a nice looking card. Sure, it will use more power and run hot, but there is no way it will be like the 480 dustbuster, and the people who buy it are not going to give a crap about thermals. It will probably be very similar to the GTX 580. It's only noise that is annoying, not power consumption, and I doubt this card will be excessively loud unless you crank the fan.

    At some point I will get a few and put them under water cooling anyways. Even 50% more than a 680 is still really impressive, it's just the price that isn't.
     
  11. f1sherman

    f1sherman Platinum Member

    Joined:
    Apr 5, 2011
    Messages:
    2,244
    Likes Received:
    0


    That's what I've said ;)

    I even went a step ahead (your definition is pretty self-evident :D) and equated the dissipation needed with the power drawn.

    If "sustained" is what's troubling you, think about it for a sec:

    Does my cooler really give a damn that for the duration of one millisecond my chip can draw power equal to 130% of its maximum sustained power?
    Not really.

    But if you are thinking in seconds (not milliseconds or microseconds), then that would qualify as "sustained", not as "peak".
    Why?

    Because then you are obviously using a bad test application.
    And if this app can load the chip to 130% power for a couple of seconds, then it sure as hell can be rewritten to keep the chip 130% loaded for longer periods.

    And so you see again - peaks are irrelevant when it comes to TDP.
     
    #1361 f1sherman, Feb 17, 2013
    Last edited: Feb 17, 2013
  12. BallaTheFeared

    BallaTheFeared Diamond Member

    Joined:
    Nov 15, 2010
    Messages:
    8,128
    Likes Received:
    0
    Or they had wide-ranging sample variance, with some chips being leakier than others.

    You don't think I pull 45% overclocks on a 220W stock card on reference air that is already undervalued in the TDP department with a lackluster cooler, do you?

    :confused:
     
  13. tviceman

    tviceman Diamond Member

    Joined:
    Mar 25, 2008
    Messages:
    6,032
    Likes Received:
    12
    Still no word on whether voltage control is unlocked at all. The presence of boost clocks, and how the other Kepler cards handle boost and voltage, makes me think it isn't unlocked, which is too bad if that turns out to be true. It will still be interesting to see how much the card is "underclocked" to stay within the 250W TDP. If it can hit 1050mhz regularly without voltage adjustments, then manual voltage control isn't needed.
     
  14. notty22

    notty22 Diamond Member

    Joined:
    Jan 1, 2010
    Messages:
    3,376
    Likes Received:
    0
    Nvidia has hardware and software that monitor TDP and temperatures. Go back to the GTX 680 launch reviews. This is why people see/will see higher boost clocks in some games.

    I expect most reviews of the GeForce Titan will be done with a gaming focus, comparing it against other gaming cards running games.

    http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/
     
  15. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,169
    Likes Received:
    115
    It can't be primarily games, since HD7000 was designed for HPC to begin with, which right away means those chips get used in more intensive apps than games. NV/AMD both talked about this when the whole issue of HD4870-4890 and GTX200 cards blowing up in Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real world usage patterns. Other real world apps that load the GPU more than games are still considered.

    The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300mhz. They didn't. A 1058mhz 680 peaks at about 186W in games, which leaves almost 40W of extra headroom against that TDP. NV clearly designed around real world applications more intensive than games when setting the GPU clock speeds of the 680. The reference design can cope with 225W of power usage, but games do not even get there.
     
    #1365 RussianSensation, Feb 17, 2013
    Last edited: Feb 17, 2013
  16. BallaTheFeared

    BallaTheFeared Diamond Member

    Joined:
    Nov 15, 2010
    Messages:
    8,128
    Likes Received:
    0
    :confused:
     
  17. Jaydip

    Jaydip Diamond Member

    Joined:
    Mar 29, 2010
    Messages:
    3,613
    Likes Received:
    3
    Agreed, but I would sure test some programs and see how it fares compared to a Quadro 6000. Its memory bandwidth will give it a good advantage.
     
  18. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    HD7k SKU != FirePro SKU.
    Look at K20X and Titan. Significantly higher clocks for core and memory and almost the same TDP if those 250W are indeed correct. SKUs for different market segments are not comparable regarding TDP.

    I've seen values of 170W and 195W for GTX680 TDP, never 225W though. 225W is just what you get when you add up the power connectors.

    http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review

    Considering that Furmark doesn't go beyond approx. 195W (see the ht4u review) and Furmark represents the heaviest load I know of, I wonder how one arrives at a 225W TDP. I know of no scenario where the 680 uses more than those 195W. In games the 170W figure is spot on with 3DCenter's analysis (169W), even if there is a typo here and there.
     
    #1368 boxleitnerb, Feb 17, 2013
    Last edited: Feb 17, 2013
  19. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,169
    Likes Received:
    115
    Agreed.

    Notice what I said earlier in this thread about how people overhyped the GTX480/580/680's specs and real world gaming performance increases? We are seeing history repeat itself for the 4th time in a row.

    We went from claims of a 1GHz 2880SP GK110 last fall, to a 1GHz 2688SP card recently, and ended up with an ~880mhz card. The shading and texture fill-rate increases are less than 50% over the 680, and pixel fill-rate is up less than 25%, which suggests the card will probably be ~50-60% faster than the 680, possibly because Kepler's memory bandwidth bottleneck is opened up. It's impressive, but nowhere near as impressive considering the price increase NV is asking 1 year after the 680 launched, especially if it's also voltage locked.

    GTX580 -> 680 (+35-40%) -> Titan (+50-60%). More than 2 years later but a price increase from $499 to $899.
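
    A quick sketch of where the "less than 50%" and "less than 25%" figures above come from, using the commonly quoted unit counts (680: 1536 cores / 128 TMUs / 32 ROPs at 1006 MHz; Titan: 2688 cores / 224 TMUs / 48 ROPs at an 837 MHz base clock). Treat the Titan figures as rumored rather than confirmed:

    ```python
    # Back-of-the-envelope throughput gains from unit count x core clock.
    # Titan figures are the rumored ones, not confirmed specs.
    gtx680 = {"cores": 1536, "tmus": 128, "rops": 32, "clock_mhz": 1006}
    titan = {"cores": 2688, "tmus": 224, "rops": 48, "clock_mhz": 837}

    def pct_gain(unit: str) -> float:
        new = titan[unit] * titan["clock_mhz"]
        old = gtx680[unit] * gtx680["clock_mhz"]
        return (new / old - 1) * 100

    print(f"Shading:           +{pct_gain('cores'):.1f}%")  # ~+45.6% (< 50%)
    print(f"Texture fill-rate: +{pct_gain('tmus'):.1f}%")   # ~+45.6% (< 50%)
    print(f"Pixel fill-rate:   +{pct_gain('rops'):.1f}%")   # ~+24.8% (< 25%)
    ```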
     
  20. Jaydip

    Jaydip Diamond Member

    Joined:
    Mar 29, 2010
    Messages:
    3,613
    Likes Received:
    3
    There is another thing: wear and tear. Transistors, like everything else, "age", so you can't really build a chip based on "best case scenario" loads.
     
  21. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,169
    Likes Received:
    115
    Sorry fellas, I mixed that up. I remember reading back when 680 launched that after-market 680's had a TDP of 225W. I remember now that the reference 680 had a TDP of 195W. Thanks for the correction. :thumbsup:

    Good point. I think NV and AMD leave a lot of headroom on the table, which is why we overclockers exploit it. :p

    I think I see where the misunderstanding comes from. I am not talking about "peaks for milliseconds" but the Peak power usage graphs at websites like TPU. I am saying that those Peak measurements TPU shows become the "average", or very close to it, when running more intensive real world applications. NV/AMD must take those cases into account when quoting the TDP. Distributed computing, ray tracing, etc. all fall into this category and NV/AMD have to account for that. Otherwise you end up with an HD7970 that uses just 163W of power in games on average but has a TDP of 250W! Average power consumption in games is not what dictates the GPU clocks, heatsink/VRM design or the TDP figures AMD/NV quote. If there is a real world app that pulls more than 200W on a 7970, AMD can't quote a TDP of 195W just because the 7970 only uses 163W in games; that would be misleading. Which only goes to show how useless the TDP number is unless both companies define it the same way and report it accurately.
     
    #1371 RussianSensation, Feb 17, 2013
    Last edited: Feb 17, 2013
  22. Grooveriding

    Grooveriding Diamond Member

    Joined:
    Dec 25, 2008
    Messages:
    7,727
    Likes Received:
    3
    [image: backside of the card near the power connectors]

    If you look at the backside of the card near the power connectors, it does not have the small chip that is on the GTX 680, 670 and 660 that regulates the voltage. I would think they would have done the right thing on an enthusiast card and included voltage control. They can't be deaf to the feedback and the disappointment enthusiasts had about how locked down GK104 was, especially on a card they are trying to attach such a high premium to.

    The chip is located somewhere else on the 690 though, so who knows..
     
  23. f1sherman

    f1sherman Platinum Member

    Joined:
    Apr 5, 2011
    Messages:
    2,244
    Likes Received:
    0
    #1373 f1sherman, Feb 17, 2013
    Last edited: Feb 17, 2013
  24. tviceman

    tviceman Diamond Member

    Joined:
    Mar 25, 2008
    Messages:
    6,032
    Likes Received:
    12
    Uhhh, I don't remember the GTX580 being overhyped. In fact, I mostly remember people saying nvidia couldn't release anything faster on 40nm because they were at the limits of power usage. And as far as the 680 was concerned, up until two weeks before the card came out, no one and I mean NO ONE thought it would outperform an HD7970. Neither of those cards was overhyped in the performance discussion, at least not by anyone except passers-by.

    Anyway, the overhyping goes both ways equally. Sliverforce's prophetic appearance here at VC&G with numerous 6970 performance claims that it would be 30% faster than the GTX480 is still fresh in mind.
     
  25. Smartazz

    Smartazz Diamond Member

    Joined:
    Dec 29, 2005
    Messages:
    6,128
    Likes Received:
    0
    Is it too optimistic to think that this card will be $600 in the near future? I'd love to pick one of these up, but I would consider $600 toward the limit of what I would spend on a graphics card.
     