AnandTech Forums > Hardware and Technology > Video Cards and Graphics

Old 02-17-2013, 12:27 PM   #1351
ShintaiDK
Lifer
 
ShintaiDK's Avatar
 
Join Date: Apr 2012
Location: Copenhagen
Posts: 11,031
Default

Quote:
Originally Posted by badb0y View Post
So what is the performance estimates now? I hear some people are saying 45%~ faster than a GTX 680 around the interwebz.
67% more shaders and 50% more memory bandwidth. I assume the TMUs and ROPs get expanded equally along with their respective connected components.

In raw performance on cores, a GTX 680 gives 1536 * 1006 = 1,545,216.
The same for Titan gives 2688 * 837 = 2,249,856, or 45.6% more.

So I would guess between 45.6% and 50% faster. Let's just say 50% to make it easier.
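For anyone who wants to sanity-check that, here is a quick back-of-the-envelope sketch (a rough ceiling estimate only; the core counts and clocks are the rumored figures above, and it ignores TMU/ROP/bandwidth limits):

Code:
# Rough shader-throughput comparison using the rumored specs above (Python).
# "Throughput" here is just cores * clock (MHz), so this is only a ceiling
# estimate, not a performance prediction.
gtx680_cores, gtx680_clock = 1536, 1006
titan_cores, titan_clock = 2688, 837    # rumored Titan configuration

gtx680 = gtx680_cores * gtx680_clock    # 1,545,216
titan = titan_cores * titan_clock       # 2,249,856

print(f"Raw core throughput gain: {titan / gtx680 - 1:.1%}")  # ~45.6%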
__________________
Anandtech forums=Xtremesystems forums
ShintaiDK is offline  
Old 02-17-2013, 12:30 PM   #1352
f1sherman
Golden Member
 
Join Date: Apr 2011
Posts: 1,972
Default

boxleitnerb is right on the money.
Thermal and power peaks are only somewhat relevant when it comes to certain parts of the PCB's electrical circuitry and the PSU.

Average heat dissipation while doing heavy lifting is what defines TDP.
The precise TDP definition probably differs between AMD/Intel/NV, but it always revolves around
"What kind of cooling solution do I need?"

The answer to that question has little to do with absolute peaks.
It is a cooler that is able to continuously take away an amount of heat equal to the maximum sustained chip power draw,
because essentially all of P = U*I ends up "wasted" as heat.
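If it helps, here is a minimal sketch of that point, assuming made-up 12V rail currents (placeholders for illustration, not measurements from any card):

Code:
# Essentially all electrical power (P = U * I) a GPU draws ends up as heat
# that the cooler must remove. The rail currents below are hypothetical.
rails = {
    "PCIe slot 12V": (12.0, 5.0),   # (volts, amps) - made-up values
    "6-pin PEG 12V": (12.0, 6.0),
    "8-pin PEG 12V": (12.0, 9.5),
}
sustained_power_w = sum(u * i for u, i in rails.values())
print(f"Sustained draw the cooler must handle: ~{sustained_power_w:.0f} W")
# A millisecond spike to 130% of this barely warms the heatsink; only the
# sustained figure matters when sizing the cooler.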

Last edited by f1sherman; 02-17-2013 at 12:33 PM.
f1sherman is offline  
Old 02-17-2013, 12:32 PM   #1353
RussianSensation
Elite Member
 
RussianSensation's Avatar
 
Join Date: Sep 2003
Location: Dubai, UAE
Posts: 14,565
Default

Quote:
Originally Posted by BallaTheFeared View Post
I think you're wrong here; the idea that Crysis 2 at 1200p with max settings isn't riding high at 99% the entire time without vsync would only mean that the limiting factor is the CPU.
I don't think you guys are understanding what I am saying. Even if Crysis 2 shows 98-99% GPU usage, it does NOT mean that 98-99% of that GPU's functional units are all used up. It doesn't mean at all that every single CUDA core is loaded to 99%. There are programs out there that may also show 99% GPU usage but use more of the GPU's functional units simultaneously. Since we can't cover every single program someone may use, we have to account for these cases unless you want to ask every single person who asks for GPU purchasing advice what programs they will run (distributed computing, rendering, bitcoin mining, code compiling, etc.). The peak value seen in games will essentially become the average for those types of users, because their workloads use more of the GPU's functional units than games do; games do not use most of the GPU's resources. Those usage patterns are still real world, unlike Furmark. Not only that, but when you use more of the GPU's resources, the VRMs are also loaded more heavily, which pushes power usage higher.

If all you do is play videogames and nothing else, by all means look at average power usage only.
__________________
i5 2500k | Asus P8P67 Rev.3 | Sapphire Dual-X HD7970 1150/1800 1.174V CFX | G.Skill Sniper 8GB DDR3-1600 1.5V
SeaSonic Platinum 1000W | OCZ Vertex 3 120GB + HITACHI 7K1000.B 1TB | Windows 7
Westinghouse 37" 1080P | X-Fi Platinum | Logitech Z5300E 5.1

Last edited by RussianSensation; 02-17-2013 at 12:34 PM.
RussianSensation is offline  
Old 02-17-2013, 12:32 PM   #1354
boxleitnerb
Platinum Member
 
Join Date: Oct 2011
Posts: 2,522
Default

Quote:
Originally Posted by RussianSensation View Post
My posts were never about efficiency, but about power usage. Not sure why you assumed I was talking about performance/watt.
Well, f1sherman and I were, and you chimed in, so this side discussion is actually a bit off topic.

Quote:
Originally Posted by RussianSensation View Post
You keep missing this: some people use their GPU at 99% load for hours/days/weeks at a time. For those people the peak rate is not a single error-prone value, but their 95th percentile distribution, if not greater. There is nothing wrong with saying that a GTX 680 uses 166W of power on average in games from review ABCD, while an HD 7970 uses 163W. However, that average included many CPU-limited games and cases where the GPU is not fully loaded. A lot of people on this forum are looking at peak load in games because some run 99% GPU-intensive programs such as distributed computing, etc. You dismissing peak as irrelevant is quite telling because it means you are assuming this group of PC enthusiasts who use their GPUs for things other than games does not exist. Performance/watt should be looked at for peak values as well for those users.

If most of your usage patterns involve playing CPU-limited games, then sure, look at the average power usage for yourself. You keep claiming that you love using downsampling. That generally means 99% GPU load, i.e. peak values, not averages. In that case the average power usage will approach the peak reported at TPU/HT4U, etc.
As for computing, you're right. But I think most people will game on Titan since you can get more compute power for cheap with a 7970 or 7990.
I always look at things from my perspective first. Sure, I love downsampling and SGSSAA, but I also hate tearing, so my fps are locked at 60 anyway, meaning no 99% load all the time unless I drop below 60.

But I'd be happy to do some power measurements at the wall with different settings once my cards arrive.

Last edited by boxleitnerb; 02-17-2013 at 12:35 PM.
boxleitnerb is offline  
Old 02-17-2013, 12:37 PM   #1355
RussianSensation
Elite Member
 
RussianSensation's Avatar
 
Join Date: Sep 2003
Location: Dubai, UAE
Posts: 14,565
Default

Quote:
Originally Posted by f1sherman View Post
Average heat dissipation while doing heavy lifting is what defines TDP.
The precise TDP definition probably differs between AMD/Intel/NV, but it always revolves around
"What kind of cooling solution do I need?"
That's not the definition of TDP, unless the company specifically states that's how they are defining it for their product.

The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate. The TDP is typically not the most power the chip could ever draw, such as by a power virus, but rather the maximum power that it would draw when running "real applications".*

- Distributed Computing (Folding @ Home, Milky Way @ Home)
- Bitcoin mining
- HPC / code compiling / ray-tracing, etc.

All of these real-world applications will max out the GPU more than any game. NV/AMD design the GPU's VRM/heatsink components and generally quote the TDP around the most intensive real-world applications, which are not games. It makes total sense that Furmark and other similar power viruses do not load the GPU realistically, which is why we don't care about TDP/max power usage in their context. However, all those other real-world applications are taken into account when arriving at the GPU's clock speeds, VRM and thermal solution design. Average power consumption in games is meaningless in this context.

If NV only designed the Titan around average power consumption in games, the GPU would have shipped with much higher clock speeds.

* In some cases the TDP has been understated for real-world applications, as was the case with the GTX 480. That was most likely NV intentionally low-balling the real-world TDP of the 480 to save face. The real TDP of the 480 should have been 280W.
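To make that concrete, here is a hedged sketch of the kind of check I am describing: size the TDP to the worst sustained draw among real applications rather than the gaming average, and exclude power viruses. Every wattage below is an invented placeholder, not a measurement:

Code:
# Hypothetical sustained power draw per workload class, in watts (invented).
sustained_draw_w = {
    "average across games (incl. CPU-limited)": 150,
    "GPU-limited game at 99% usage": 186,
    "distributed computing (F@H / MW@H)": 205,
    "bitcoin mining / ray-tracing": 215,
    "power virus (Furmark)": 245,   # excluded: not a "real application"
}
real_apps = {k: w for k, w in sustained_draw_w.items() if "virus" not in k}
tdp_w = max(real_apps.values())  # VRM/heatsink design must cover the worst real app
print(f"Quote the TDP around ~{tdp_w} W, not the 150 W gaming average")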

Quote:
Originally Posted by boxleitnerb View Post
As for computing, you're right. But I think most people will game on Titan since you can get more compute power for cheap with a 7970 or 7990.
I am not telling you guys that average power usage is a wrong figure to use. If all you do is play games, then use that! What I am saying is that the GPU's clock speeds and TDP are dictated by maximum power usage in real-world applications, and those are not just games. NV/AMD account for these apps, which is why we are seeing the Titan ship with 876MHz GPU clocks, not 1019MHz. You could easily have a situation where the average power usage of a 1019MHz Titan in games would be similar to the average power usage of a 925MHz Titan in distributed computing projects, because games do not have the ability to load the GPU's functional units to the same extent. This likely explains why NV had to drop the clocks on the Titan, and why from the very beginning I kept using the GTX 670/680's peak power usage to explain my hesitation to believe the 1019MHz clocks in a 250W power envelope.
__________________
i5 2500k | Asus P8P67 Rev.3 | Sapphire Dual-X HD7970 1150/1800 1.174V CFX | G.Skill Sniper 8GB DDR3-1600 1.5V
SeaSonic Platinum 1000W | OCZ Vertex 3 120GB + HITACHI 7K1000.B 1TB | Windows 7
Westinghouse 37" 1080P | X-Fi Platinum | Logitech Z5300E 5.1

Last edited by RussianSensation; 02-17-2013 at 12:54 PM.
RussianSensation is offline  
Old 02-17-2013, 12:45 PM   #1356
boxleitnerb
Platinum Member
 
Join Date: Oct 2011
Posts: 2,522
Default

Btw this raises the question:

What are real applications for graphics cards that are marketed as gaming cards under the brands "Geforce" or "Radeon"? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?

But this is a slippery slope, I guess; no one can say for sure what AMD and Nvidia are thinking about this. I would assume they want people doing this type of workload to buy their professional products.

Last edited by boxleitnerb; 02-17-2013 at 12:48 PM.
boxleitnerb is offline  
Old 02-17-2013, 12:52 PM   #1357
Grooveriding
Diamond Member
 
Grooveriding's Avatar
 
Join Date: Dec 2008
Location: Toronto, CA
Posts: 6,331
Default

The guy I deal with in sales at NCIX confirmed the $900 MSRP for the card and said they don't have them in their warehouse yet. Same guy who told me the correct price for the 680 a few days early, so it is likely accurate. Too bad, $2,000 is way too much for what two single-GPU cards are worth given my buying habits. Will wait for the price to drop.

I don't think Nvidia will ever do another GTX 480. That card sucked balls; it was so horrible I can't see them ever making that mistake again. This is a nice-looking card. Sure, it will use more power and run hot, but no way will it be like the 480 dustbuster, and the people who buy it are not going to give a crap about thermals; it will probably be very similar to the GTX 580. It's only noise that is annoying, not power consumption, and I doubt this card will be excessively loud unless you crank the fan.

At some point I will get a few and put them under water cooling anyway. Even 50% more than a 680 is still really impressive; it's just the price that isn't.
__________________
5960X @ 4.5 | X99 Deluxe | 16GB 2600 GSkill DDR4 | 780ti SLI | Evo 500GB Raid 0 | Dell U3011 | EVGA 1300W G2
under custom water
Grooveriding is online now  
Old 02-17-2013, 12:54 PM   #1358
f1sherman
Golden Member
 
Join Date: Apr 2011
Posts: 1,972
Default

Quote:
Originally Posted by RussianSensation View Post
100% incorrect.

The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate.
That's what I said.

Quote:
Originally Posted by f1sherman View Post
It is a cooler that is able to continuously take away an amount of heat equal to the maximum sustained chip power draw
I even went a step further (your definition is pretty self-evident) and equated the dissipation needed with the power drawn:
Quote:
because essentially all of P = U*I ends up "wasted" as heat.

If "sustained" is what's troubling you, think about it for a sec:

Does my cooler really give a damn that for the duration of one millisecond my chip can draw power equal to 130% of its maximum sustained power?
Not really.

But if you are thinking in seconds (not in milli- and microseconds), then that would qualify as "sustained", not as "peak".
Why?

Because obviously you are using a bad test application.
And if this app can load the chip to 130% power for a couple of seconds, then it sure as hell can be rewritten to keep the chip loaded at 130% for longer periods.

And so you see again - peaks are irrelevant when it comes to TDP.

Last edited by f1sherman; 02-17-2013 at 01:02 PM.
f1sherman is offline  
Old 02-17-2013, 12:54 PM   #1359
BallaTheFeared
Diamond Member
 
BallaTheFeared's Avatar
 
Join Date: Nov 2010
Posts: 8,128
Default

Quote:
Originally Posted by RussianSensation View Post
* In some cases the TDP has been underestimated in real world applications, such was the case with the GTX480. That was most likely the case of NV intentionally low-balling real world TDP of the 480 to save face. The real TDP of the 480 should have been 280W.
Or they had wide-ranging sample variance, with some chips being leakier than others.

You don't think I pull 45% overclocks, on reference air with a lackluster cooler, on a 220W stock card that is already undervalued in the TDP department, do you?

BallaTheFeared is offline  
Old 02-17-2013, 12:56 PM   #1360
tviceman
Diamond Member
 
Join Date: Mar 2008
Posts: 4,938
Default

Still no word on whether voltage control is unlocked at all. The presence of boost clocks, and how the other Kepler cards handle boost and voltage, makes me think it isn't unlocked, which is too bad if that ends up being true. It will still be interesting to see how much the card is "underclocked" to stay within the 250W TDP. If it can hit 1050MHz regularly without voltage adjustments, then manual voltage control isn't needed.
tviceman is online now  
Old 02-17-2013, 12:57 PM   #1361
notty22
Diamond Member
 
notty22's Avatar
 
Join Date: Jan 2010
Location: Beantown
Posts: 3,312
Default

Nvidia has hardware and software that monitor the TDP and temperatures. Go back to the GTX 680 launch reviews. This is why people see/will see higher boost clocks in some games.

I expect most reviews of the GeForce Titan will be done with a gaming focus, comparing it to other gaming cards running games.

http://www.techpowerup.com/reviews/N...Force_GTX_680/
Quote:
One revolutionary change that allows GeForce GTX 680 to aim high, is an extremely smart self-tuning logic that fine-tunes clock speeds and voltages, on the fly, with zero user intervention, to yield the best possible combination of performance and efficiency for a given load scenario. The GTX 680 hence reshapes the definition of fixed load clock speed, with dynamic clock speeds. Think of it as a GPU-take on Intel's Turbo Boost technology, which works in conjunction with SpeedStep to produce the best performance-per-Watt for CPUs that feature it.
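For what it's worth, a crude conceptual model of that kind of boost logic might look like the sketch below. NVIDIA's actual algorithm is not public, so every number and threshold here is invented (only loosely GTX 680-like); the point is just that the clock is capped by a power target rather than by whatever games happen to draw:

Code:
# Toy model of power-target-limited boost (all numbers are invented).
BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 1006, 1110, 13
POWER_TARGET_W = 170

def next_clock(current_mhz, measured_power_w):
    """Step the clock up while under the power target, back off when over it."""
    if measured_power_w < POWER_TARGET_W and current_mhz < MAX_BOOST_MHZ:
        return current_mhz + STEP_MHZ
    if measured_power_w > POWER_TARGET_W and current_mhz > BASE_MHZ:
        return current_mhz - STEP_MHZ
    return current_mhz

# A light game settles near MAX_BOOST_MHZ; a heavy load that hits the
# power target gets pushed back toward BASE_MHZ.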
__________________
i5 4670K@4100mhz, 32GB Kingston 1600,H50
MSI GTX 970 gaming
Seasonic SS-760XP2
240gb SSD, Win 8.1
Let's make sure history never forgets... the name... 'Enterprise'. Picard out.
notty22 is offline  
Old 02-17-2013, 12:57 PM   #1362
RussianSensation
Elite Member
 
RussianSensation's Avatar
 
Join Date: Sep 2003
Location: Dubai, UAE
Posts: 14,565
Default

Quote:
Originally Posted by boxleitnerb View Post
Btw this raises the question:

What are real applications for graphics cards that are marketed as gaming cards under the brands "Geforce" or "Radeon"? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?

But this is a slippery slope, I guess; no one can say for sure what AMD and Nvidia are thinking about this. I would assume they want people doing this type of workload to buy their professional products.
It can't be primarily games, since the HD 7000 series was already designed for HPC to begin with, which right away means those chips get used in more intensive apps than games. NV/AMD both talked about this when the whole issue of HD 4870-4890 and GTX 200 cards being blown up by Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real-world usage patterns. Other real-world apps that load the GPU more than games are still considered.

Quote:
Originally Posted by notty22 View Post
Nvidia has hardware and software that monitor the tdp and temperatures. Go back to the gtx 680 launch reviews. This is why people see/will have higher boost clocks in some games.
The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300MHz. They didn't. A 1058MHz 680 peaks at about 186W in games, which leaves almost 40W of extra headroom based on that TDP. NV clearly considered the design around more intensive real-world applications than games when setting the 680's clock speeds. The reference design can cope with 225W of power usage, but games do not even get there.
__________________
i5 2500k | Asus P8P67 Rev.3 | Sapphire Dual-X HD7970 1150/1800 1.174V CFX | G.Skill Sniper 8GB DDR3-1600 1.5V
SeaSonic Platinum 1000W | OCZ Vertex 3 120GB + HITACHI 7K1000.B 1TB | Windows 7
Westinghouse 37" 1080P | X-Fi Platinum | Logitech Z5300E 5.1

Last edited by RussianSensation; 02-17-2013 at 01:01 PM.
RussianSensation is offline  
Old 02-17-2013, 01:01 PM   #1363
BallaTheFeared
Diamond Member
 
BallaTheFeared's Avatar
 
Join Date: Nov 2010
Posts: 8,128
Default

Quote:
Originally Posted by RussianSensation View Post
The TDP of the 680 is 225W.
Quote:
NVIDIA's official TDP is 195W
BallaTheFeared is offline  
Old 02-17-2013, 01:02 PM   #1364
Jaydip
Diamond Member
 
Jaydip's Avatar
 
Join Date: Mar 2010
Posts: 3,294
Default

Quote:
Originally Posted by boxleitnerb View Post
Btw this raises the question:

What are real applications for graphics cards that are marketed as gaming cards under the brands "Geforce" or "Radeon"? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?

But this is a slippery slope, I guess; no one can say for sure what AMD and Nvidia are thinking about this. I would assume they want people doing this type of workload to buy their professional products.
Agreed, but I would sure test some programs and see how it fares compared to a Quadro 6000. Its memory bandwidth will give it a good advantage.
__________________
Windows 7 Home Premium 64 bit || i7 4770K @ 4.2 with CM V6-GT || MSI Z87 GD 65||MSI Gaming N780 TF 3GD5/OC GeForce GTX 780|| Corsair Vengeance 16GB 1600 || WD Cavier Black 1TB FAEX X2|| HAF-X || Corsair TX750 V2 ||AL MX 5021E || DELL U2713HM||SideWinder X4||Razer DA
Jaydip is offline  
Old 02-17-2013, 01:04 PM   #1365
boxleitnerb
Platinum Member
 
Join Date: Oct 2011
Posts: 2,522
Default

Quote:
Originally Posted by RussianSensation View Post
It can't be primarily games, since the HD 7000 series was already designed for HPC to begin with, which right away means those chips get used in more intensive apps than games. NV/AMD both talked about this when the whole issue of HD 4870-4890 and GTX 200 cards being blown up by Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real-world usage patterns. Other real-world apps that load the GPU more than games are still considered.
HD 7k SKUs != FirePro SKUs.
Look at the K20X and Titan: significantly higher clocks for core and memory, and almost the same TDP, if those 250W are indeed correct. SKUs for different market segments are not comparable regarding TDP.

Quote:
Originally Posted by RussianSensation View Post
The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300MHz. They didn't. A 1058MHz 680 peaks at about 186W in games, which leaves almost 40W of extra headroom based on that TDP. NV clearly considered the design around more intensive real-world applications than games when setting the 680's clock speeds. The reference design can cope with 225W of power usage, but games do not even get there.
I've seen values of 170W and 195W for GTX680 TDP, never 225W though. 225W is just what you get when you add up the power connectors.

Quote:
NVIDIA’s official TDP is 195W, though as with the GTX 500 series they still consider this is an average number rather than a true maximum. The second number is the boost target, which is the highest power level that GPU Boost will turbo to; that number is 170W.
http://www.anandtech.com/show/5699/n...gtx-680-review

Considering that Furmark doesn't go beyond approx. 195W (see the ht4u review), and Furmark represents the heaviest load I know of, I wonder how one can arrive at a 225W TDP. I know of no scenario where the 680 uses more than those 195W. In games, the 170W figure is spot on with 3DCenter's analysis (169W), even if there is a typo here and there.

Last edited by boxleitnerb; 02-17-2013 at 01:10 PM.
boxleitnerb is offline  
Old 02-17-2013, 01:08 PM   #1366
RussianSensation
Elite Member
 
RussianSensation's Avatar
 
Join Date: Sep 2003
Location: Dubai, UAE
Posts: 14,565
Default

Quote:
Originally Posted by Grooveriding View Post
Too bad, $2,000 is way too much for what two single-GPU cards are worth given my buying habits. Will wait for the price to drop. Even 50% more than a 680 is still really impressive; it's just the price that isn't.
Agreed.

Notice what I said earlier in this thread about how people overhyped the GTX 480/580/680's specs and real-world gaming performance increases? We are seeing history repeat itself for the fourth time in a row.

We went from claims of a 1GHz 2880SP GK110 last fall, to a 1GHz 2688SP part recently, and then ended up with an ~880MHz card. Shading and texture fill-rate increases are less than 50% over the 680, and pixel fill-rate is up less than 25%, which suggests the card will probably be ~50-60% faster than the 680, possibly because Kepler's memory bandwidth bottleneck is opened up. It's impressive, but nowhere near as impressive considering the price increase NV is asking one year after the 680 launched, especially if it's also voltage locked.

GTX 580 -> 680 (+35-40%) -> Titan (+50-60%). More than two years later, but a price increase from $499 to $899.
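Putting those rough figures into numbers (the generational gains and prices are the estimates quoted above, so treat the output accordingly):

Code:
# Cumulative performance vs. price, using the rough estimates above.
gen_gains = [0.375, 0.55]   # midpoints of +35-40% and +50-60%
cumulative = 1.0
for g in gen_gains:
    cumulative *= 1 + g     # ~2.13x over a GTX 580
price_old, price_new = 499, 899
print(f"~{cumulative:.2f}x GTX 580 performance")
print(f"Price up {price_new / price_old - 1:.0%} ({price_old} -> {price_new} USD)")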
__________________
i5 2500k | Asus P8P67 Rev.3 | Sapphire Dual-X HD7970 1150/1800 1.174V CFX | G.Skill Sniper 8GB DDR3-1600 1.5V
SeaSonic Platinum 1000W | OCZ Vertex 3 120GB + HITACHI 7K1000.B 1TB | Windows 7
Westinghouse 37" 1080P | X-Fi Platinum | Logitech Z5300E 5.1
RussianSensation is offline  
Old 02-17-2013, 01:09 PM   #1367
Jaydip
Diamond Member
 
Jaydip's Avatar
 
Join Date: Mar 2010
Posts: 3,294
Default

Quote:
Originally Posted by RussianSensation View Post
It can't be primarily games, since the HD 7000 series was already designed for HPC to begin with, which right away means those chips get used in more intensive apps than games. NV/AMD both talked about this when the whole issue of HD 4870-4890 and GTX 200 cards being blown up by Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real-world usage patterns. Other real-world apps that load the GPU more than games are still considered.



The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300MHz. They didn't. A 1058MHz 680 peaks at about 186W in games, which leaves almost 40W of extra headroom based on that TDP. NV clearly considered the design around more intensive real-world applications than games when setting the 680's clock speeds. The reference design can cope with 225W of power usage, but games do not even get there.
There is another thing: wear and tear. Transistors, like everything else, "age", so you can't really build a chip based on "best case scenario" loads.
__________________
Windows 7 Home Premium 64 bit || i7 4770K @ 4.2 with CM V6-GT || MSI Z87 GD 65||MSI Gaming N780 TF 3GD5/OC GeForce GTX 780|| Corsair Vengeance 16GB 1600 || WD Cavier Black 1TB FAEX X2|| HAF-X || Corsair TX750 V2 ||AL MX 5021E || DELL U2713HM||SideWinder X4||Razer DA
Jaydip is offline  
Old 02-17-2013, 01:11 PM   #1368
RussianSensation
Elite Member
 
RussianSensation's Avatar
 
Join Date: Sep 2003
Location: Dubai, UAE
Posts: 14,565
Default

Quote:
Originally Posted by boxleitnerb View Post
I've seen values of 170W and 195W for GTX680 TDP, never 225W though. 225W is just what you get when you add up the power connectors.
Quote:
Originally Posted by BallaTheFeared View Post
Sorry fellas, I mixed that up. I remember reading back when the 680 launched that aftermarket 680s had a TDP of 225W. I remember now that the reference 680 had a TDP of 195W. Thanks for the correction.

Quote:
Originally Posted by Jaydip View Post
There is another thing wear and tear.Transistor like everything else "ages" so you can't really build a chip based on "best case scenario" loads.
Good point. I think NV and AMD leave a lot of headroom on the table, which is why we overclockers exploit it.

Quote:
Originally Posted by f1sherman View Post
And so you see again - peaks are irrelevant when it comes to TDP.
I think I see where the misunderstanding comes from. I am not talking about "peaks for milliseconds", but the peak power usage graphs at websites like TPU. I am saying that those peak measurements TPU shows will be the "averages", or very close to the averages, when running more intensive real-world applications. NV/AMD must take those cases into account when quoting the TDP. Distributed computing, raytracing, etc. all fall into this category, and NV/AMD have to account for that. Otherwise you end up with an HD 7970 that uses just 163W of power in games on average but has a TDP of 250W! Average power consumption in games is not what dictates the GPU clocks, heatsink/VRM design, or the TDP figures AMD/NV quote. If there is a real-world app that uses > 200W on a 7970, AMD can't just quote a TDP of 195W because the 7970 uses only 163W in games; that would be misleading. It only goes to show how useless the TDP number is unless both companies define it the same way or accurately report it.
__________________
i5 2500k | Asus P8P67 Rev.3 | Sapphire Dual-X HD7970 1150/1800 1.174V CFX | G.Skill Sniper 8GB DDR3-1600 1.5V
SeaSonic Platinum 1000W | OCZ Vertex 3 120GB + HITACHI 7K1000.B 1TB | Windows 7
Westinghouse 37" 1080P | X-Fi Platinum | Logitech Z5300E 5.1

Last edited by RussianSensation; 02-17-2013 at 01:23 PM.
RussianSensation is offline  
Old 02-17-2013, 01:12 PM   #1369
Grooveriding
Diamond Member
 
Grooveriding's Avatar
 
Join Date: Dec 2008
Location: Toronto, CA
Posts: 6,331
Default



If you look at the backside of the card near the power connectors, it does not have the small chip that is on the GTX 680, 670, and 660 that regulates the voltage. I would think they would have done the right thing on an enthusiast card and included voltage control. They can't be deaf to the feedback and the disappointment enthusiasts had about how locked down GK104 was, especially with a card they are trying to attach such a high premium to.

The chip is located somewhere else on the 690 though, so who knows..
__________________
5960X @ 4.5 | X99 Deluxe | 16GB 2600 GSkill DDR4 | 780ti SLI | Evo 500GB Raid 0 | Dell U3011 | EVGA 1300W G2
under custom water
Grooveriding is online now  
Old 02-17-2013, 01:15 PM   #1370
f1sherman
Golden Member
 
Join Date: Apr 2011
Posts: 1,972
Default

GTX 680 TDP is 195W

http://www.geforce.com/hardware/desk...specifications

http://www.geforce.com/Active/en_US/...aper-FINAL.pdf

EDIT: Damn, you guys are too fast for me. It took me some time to copy the 680's white paper address nice and clean, without all of Google's garbage.

Last edited by f1sherman; 02-17-2013 at 01:18 PM.
f1sherman is offline  
Old 02-17-2013, 01:15 PM   #1371
tviceman
Diamond Member
 
Join Date: Mar 2008
Posts: 4,938
Default

Quote:
Originally Posted by RussianSensation View Post
Agreed.

Notice what I said earlier in this thread about how people overhyped the GTX 480/580/680's specs and real-world gaming performance increases? We are seeing history repeat itself for the fourth time in a row.
Uhhh, I don't remember the GTX 580 being overhyped. In fact, I mostly remember people saying Nvidia couldn't release anything faster on 40nm because they were at the limits of power usage. And as far as the 680 was concerned, up until two weeks before the card came out, no one, and I mean NO ONE, thought it would outperform an HD 7970. Neither of those cards was overhyped in the performance discussion, at least not by anyone except passers-by.

Anyway, the overhyping goes both ways equally. Sliverforce's prophetic appearance here at VC&G with numerous claims that the 6970 would be 30% faster than the GTX 480 is still fresh in mind.
tviceman is online now  
Old 02-17-2013, 01:21 PM   #1372
Smartazz
Diamond Member
 
Join Date: Dec 2005
Posts: 6,128
Default

Is it too optimistic to think that this card will be $600 in the near future? I'd love to pick one of these up, but I would consider $600 toward the limit of what I would spend on a graphics card.
__________________
i5 2500K@4.6GHz, 16GB G.SKILL 1600MHz, R9 290x, Seasonic X850, X-Fi Fatal1ty, Samsung 830 and 840 with Antec 1200.
Retina MacBook Pro 15", 2.6GHz, 16GB, 512GB SSD
Achieva Shimian, Das Keyboard, Logitech G400 and Razer Scarab.
Smartazz is offline  
Old 02-17-2013, 01:22 PM   #1373
tviceman
Diamond Member
 
Join Date: Mar 2008
Posts: 4,938
Default

Die size is being estimated at 513mm^2: http://forum.beyond3d.com/showpost.p...&postcount=900

Smaller than GF100 / GF110 and GT200. Noticeably smaller than the rumored 550mm^2.

Here is a discussion I'd like to have: why the heck did they make this die so big? Why didn't Nvidia just go with a 12-SMX die, save the space to improve yields, run at higher clocks within a similar TDP, and essentially get the same performance?

Last edited by tviceman; 02-17-2013 at 01:25 PM.
tviceman is online now  
Old 02-17-2013, 01:27 PM   #1374
Jaydip
Diamond Member
 
Jaydip's Avatar
 
Join Date: Mar 2010
Posts: 3,294
Default

Maybe it will be revealed as a GTX 760.
__________________
Windows 7 Home Premium 64 bit || i7 4770K @ 4.2 with CM V6-GT || MSI Z87 GD 65||MSI Gaming N780 TF 3GD5/OC GeForce GTX 780|| Corsair Vengeance 16GB 1600 || WD Cavier Black 1TB FAEX X2|| HAF-X || Corsair TX750 V2 ||AL MX 5021E || DELL U2713HM||SideWinder X4||Razer DA
Jaydip is offline  
Old 02-17-2013, 01:27 PM   #1375
wand3r3r
Platinum Member
 
wand3r3r's Avatar
 
Join Date: May 2008
Posts: 2,850
Default

Quote:
Originally Posted by RussianSensation View Post
Agreed.
GTX 580 -> 680 (+35-40%) -> Titan (+50-60%). More than two years later, but a price increase from $499 to $899.
If this is the case, which is appearing more and more likely, this card will be a failure in my book. The performance would be nice (great, even), but it's merely a true generational update, which should fall in the usual price range for a successor.

What a joke. Price/performance will be dismal, and it will probably be one of the most overpriced cards to date.

If it's only, say, 30-40% faster than a 7970 GE (if it's 50% faster than the 680) and costs more than twice as much, I don't know how the NVidiots will defend this one.
wand3r3r is online now  