[VR-Zone] NVidia GTX-590 *FINAL* Specs Revealed!


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91

I know that, but I'm talking about the SOI angle, for which ARM ran parallel test vehicles and noted that at the same operating frequency, on the same architecture and process node, there was a 40% reduction in power consumption in addition to a 7% reduction in die size when doing it on SOI versus not SOI.

http://forums.anandtech.com/showthread.php?p=28892393&post28892393

With the same test vehicle, ARM observed that they could alternatively boost clockspeeds 20% while still securing a 30% reduction in power consumption.

If the technology makes sense for AMD to use in their $100 CPU's then it sure seems like it would be applicable to their $100 GPU's.
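Those two operating points can be sketched as simple scalings. The 100 W / 100 mm&#178; / 1000 MHz baseline below is purely hypothetical; only the 40%/7% and 20%/30% deltas come from ARM's reported results:

```python
# ARM's reported SOI-vs-bulk test-vehicle deltas (same architecture, same node)

def soi_same_clock(bulk_power_w, bulk_die_mm2):
    """Same operating frequency: ~40% less power, ~7% smaller die on SOI."""
    return bulk_power_w * 0.60, bulk_die_mm2 * 0.93

def soi_clock_boost(bulk_power_w, bulk_clock_mhz):
    """Alternative point: +20% clockspeed while still ~30% less power."""
    return bulk_power_w * 0.70, bulk_clock_mhz * 1.20

# Hypothetical 100 W, 100 mm^2, 1000 MHz bulk design:
print(soi_same_clock(100.0, 100.0))    # (60.0, 93.0)
print(soi_clock_boost(100.0, 1000.0))  # (70.0, 1200.0)
```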
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I know that, but I'm talking about the SOI angle, for which ARM ran parallel test vehicles and noted that at the same operating frequency, on the same architecture and process node, there was a 40% reduction in power consumption in addition to a 7% reduction in die size when doing it on SOI versus not SOI.

http://forums.anandtech.com/showthread.php?p=28892393&post28892393

With the same test vehicle, ARM observed that they could alternatively boost clockspeeds 20% while still securing a 30% reduction in power consumption.

If the technology makes sense for AMD to use in their $100 CPU's then it sure seems like it would be applicable to their $100 GPU's.

I don't know if GloFo's 28nm process will utilize both SOI and HKMG as 32nm does, because on their site they only talk about HKMG for the 28nm process.

http://www.globalfoundries.com/technology/28nm.aspx

If they do then AMD could really have a manufacturing process advantage.

One more thing: I don’t know for sure, but I would bet that SOI will increase manufacturing cost, since SOI wafers themselves cost more than bulk wafers.
 

GFORCE100

Golden Member
Oct 9, 1999
1,102
0
76
Well, here are the final specs of the NVidia GTX-590 according to VR-Zone:

Source

Cliff's Notes:

I'm disappointed honestly. The clocks are so low! We will have to wait to see the reviews on this card and how it compares to the AMD 6990.

The memory frequency especially seems a little on the low side. I think Nvidia is battling on two fronts:
a) Power consumption and heat; they ideally need 22nm for a card like this.
b) Speed. They don't want it to be as fast as two GTX580s in SLI, because then most people would rather buy a single GTX590, which means fewer cards sold and potentially less profit for Nvidia and AIB partners.

I bet you they're running the GF110s at reduced voltage to curb heat and power consumption, hence the lowish clocks. If the memory speed is true then "ah well", 4GHz will likely artefact - you win some, you lose some.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Texture Fill Rate = (# of TMUs) x (Core Clock)

GTX560 Ti has 64 Texture Units (TMUs) and 822MHz Core Clock = 52608

GTX570 has 60 Texture Units (TMUs) and 732MHz Core Clock = 43920

Raster Fillrate (ROPs) = (# of ROPs) x (Core Clock)

GTX560 Ti has 32 Raster Units (ROPs) and 822MHz Core Clock = 26304

GTX570 has 40 Raster Units (ROPs) and 732MHz Core Clock = 29280
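The arithmetic above can be double-checked in a couple of lines; all unit counts and clocks are the ones quoted in the post:

```python
# Fillrates per the formulas above: unit count x core clock (MHz)

def texture_fillrate(tmus, core_mhz):
    return tmus * core_mhz  # MTexels/s

def raster_fillrate(rops, core_mhz):
    return rops * core_mhz  # MPixels/s

# GTX560 Ti: 64 TMUs, 32 ROPs @ 822MHz
print(texture_fillrate(64, 822), raster_fillrate(32, 822))  # 52608 26304

# GTX570: 60 TMUs, 40 ROPs @ 732MHz
print(texture_fillrate(60, 732), raster_fillrate(40, 732))  # 43920 29280
```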

Sorry, I was talking about the 560 (GF114) vs the 570 (GF110)
But again, you'll notice that the GTX 570 still has a higher texture fillrate, which is the point I made awhile ago. I'm not understanding what your point was here.
From the same link, Blu-ray power. Another factor affecting overall power usage.

What does this have to do with maximal power consumption, which would determine the card's final specs, which is what we were talking about?
I know that, but I'm talking about the SOI angle, for which ARM ran parallel test vehicles and noted that at the same operating frequency, on the same architecture and process node, there was a 40% reduction in power consumption in addition to a 7% reduction in die size when doing it on SOI versus not SOI.

http://forums.anandtech.com/showthread.php?p=28892393&post28892393

With the same test vehicle, ARM observed that they could alternatively boost clockspeeds 20% while still securing a 30% reduction in power consumption.

If the technology makes sense for AMD to use in their $100 CPU's then it sure seems like it would be applicable to their $100 GPU's.
One more thing: I don’t know for sure, but I would bet that SOI will increase manufacturing cost, since SOI wafers themselves cost more than bulk wafers.
Interesting, very interesting. So it would seem this is a cost vs. benefit dilemma: is the increased power savings/added performance worth the increased manufacturing cost? IDC, you make a great point - if they can crank out $100 CPUs on it, why not $100 GPUs? Or better yet, why not at least $300 GPUs, which have a much higher profit margin?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
But again, you'll notice that the GTX 570 still has a higher texture fillrate, which is the point I made awhile ago. I'm not understanding what your point was here.

GTX570 has a LOWER texture fillrate (43920) than the GTX560 Ti (52608), but it has a higher raster fillrate (29280 vs. 26304) and more memory bandwidth.

My point is that Cayman can close the gap with the GTX580 at high res with filters because of its raster units, not because it has more texture units.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
GTX570 has a LOWER texture fillrate (43920) than the GTX560 Ti (52608), but it has a higher raster fillrate (29280 vs. 26304) and more memory bandwidth.

My point is that Cayman can close the gap with the GTX580 at high res with filters because of its raster units, not because it has more texture units.
The TMUs most certainly play a role too, though.
 

MentalIlness

Platinum Member
Nov 22, 2009
2,383
11
76
An opinion here, guys. Not my opinion, but the article's opinion...

http://www.hexus.net/content/item.php?item=29597

NVIDIA GeForce GTX 590 to be faster than AMD Radeon HD 6990? Maybe not


There's no reason for anyone to hit a website if we copy entire articles into forum posts.

Super Moderator BFG10K.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
An opinion here, guys. Not my opinion, but the article's opinion...

http://www.hexus.net/content/item.php?item=29597

NVIDIA GeForce GTX 590 to be faster than AMD Radeon HD 6990? Maybe not

I read this article too. I'm not disagreeing with anything in the article. Just 2 points though... It's pure conjecture/educated guessing based on the released specs. They do state in the article that they are not currently under NDA from nVidia. That tells me that they aren't on the short list to get a 590 to review. Might be a bit of retaliation/sour grapes.

We'll see if this ends up true. Right now though, they really don't know any better than we do.
 

MentalIlness

Platinum Member
Nov 22, 2009
2,383
11
76
Yeah, I couldn't care less either way. :)

But it had to do with a 590 and a user's opinion, so I posted it. After all, aren't all forums online based on user opinion?

I predict the 590 to be within 5% of the 6990; whether that is slower or faster, that's still what I think. :)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I think the 590 is going to struggle. I think the 6990 is purposely designed to leave nVidia without the ability to use more power than it. Stock TDP is 375W, the max for the slot plus 2x 8-pin connectors if conforming at all to PCIe specs. Then in AUSUM mode it's set to the thermal limits of the cooler. Unless nVidia can work some sort of driver magic that limits max power while allowing it to use more power than the 6990 for gaming, or they have a superior cooler (neither of which is out of the realm of possibility), they aren't going to be able to push any more watts through it than AMD does. At the same wattage, AMD is faster than nVidia.

If AMD had stuck purely to the PCIe spec, 300W, then nVidia could have easily made a faster card, like they have done recently, by using more power.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Cayman is not that much more 'efficient' than the 580/570 are either. I think the 6970 and gtx570 are about even in watts pulled. The 580 uses almost 20% more, but is sometimes 15% faster also.
The 'key' imo will be the magic of so called cherry picked cores. We have seen this marketing line before to account for 'magical' results !
 
Feb 19, 2009
10,457
10
76
SLI GTX 580 pulls ~200W more than a 6990. At high res and multi-screen res they are already really close in performance. If you downclock the 590 by 20% to reduce power usage, it's going to be ugly.

So either NV downclocks to fit the 375W "average TDP" or they don't; just wait and see.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
If the 600-620MHz is true, then at default clocks the GTX590 will be close to GTX570 SLI performance, and it could be equal to or faster than the HD6990 (default clocks 830MHz) at 1920x1200 (depending on the game). At 2560x1600 or in a triple-monitor setup the HD6990 could end up faster.

One more thing that can play a big role is drivers.

Things will change when both cards are O/C. If the GTX590 can draw more than the default 375W then it could regain the crown.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Cayman is not that much more 'efficient' than the 580/570 are either. I think the 6970 and gtx570 are about even in watts pulled. The 580 uses almost 20% more, but is sometimes 15% faster also.
The 'key' imo will be the magic of so called cherry picked cores. We have seen this marketing line before to account for 'magical' results !

It is typically more efficient at default speeds.
It typically closes the gap at higher resolutions.
CrossFire with these GPUs typically scales better than SLI.

That means the gap is pretty much at its smallest when comparing default to default, and the 6990 is more efficient at stock speeds. So it all comes down to what NV can manage to do with clocks in their chosen power envelope, but they are starting from a position where they are behind the opposition.
 
Feb 19, 2009
10,457
10
76
Have to really commend AMD for improving CrossFire so much, especially quad scaling. I just had a look at Guru3D's CF review (very late of me) and OMG, that's near-100% scaling with 2x 6990 in BF, Crysis and Metro at 2560 res; other games are hitting a CPU wall.

It really makes the difference in making dual-GPU cards viable, and this is the gen where scaling is that great.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
If the 600-620MHz is true, then at default clocks the GTX590 will be close to GTX570 SLI performance, and it could be equal to or faster than the HD6990 (default clocks 830MHz) at 1920x1200 (depending on the game). At 2560x1600 or in a triple-monitor setup the HD6990 could end up faster.

One more thing that can play a big role is drivers.

Things will change when both cards are O/C. If the GTX590 can draw more than the default 375W then it could regain the crown.


I wonder how many people would buy this level of card for a 1920x1200 monitor though? And two 8-pin connectors and a PCIe slot are only rated for 375 watts; how much beyond spec can they go? I guess I don't see why anyone would buy this card or a 6990 when you can get two 570s/580s or two 6970s. I guess if you only have two PCIe slots and want quadfire or 4-way SLI they make sense, and depending on pricing the 590 could be a good buy. But I think getting two cards is just overall better when looking at these two.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
I'm going to try some benches with two of my cards at 602/3500, then add 6% to my results to account for the extra shaders on the 580. See how it fares against a 6990 at 2560x1600.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I wonder how many people would buy this level of card for a 1920x1200 monitor though? And two 8-pin connectors and a PCIe slot are only rated for 375 watts; how much beyond spec can they go? I guess I don't see why anyone would buy this card or a 6990 when you can get two 570s/580s or two 6970s. I guess if you only have two PCIe slots and want quadfire or 4-way SLI they make sense, and depending on pricing the 590 could be a good buy. But I think getting two cards is just overall better when looking at these two.

I would buy it; I had an HD5970. There are games that need more power than a single card, even at 1920x1200 with filters on.

Cards can draw more than 375W even with 2x 8-pin.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
I would buy it; I had an HD5970. There are games that need more power than a single card, even at 1920x1200 with filters on.

Cards can draw more than 375W even with 2x 8-pin.

Why would you buy a 6990 over dual 6950 or 6970 though, if you are only going for 2 cards?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
And two 8-pin connectors and a PCIe slot are only rated for 375 watts; how much beyond spec can they go?

I thought the 8-pin connectors being "rated" at 150W each was more of a guideline than an actual spec.

Didn't someone here mention in an earlier thread that the connectors are electrically rated to function just fine up to 300W or 500W each?

Provided the PSU is spec'ed to deliver the power, and the video card has been designed to manage the power (and heat), then it's not entirely clear to me what the concern would or should be, other than it being a question of "does my PSU and case support this video card's electrical and cooling needs, and do I want a card that requires as much?"
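For what it's worth, the 375W figure being debated is just the sum of the nominal guideline ratings. A quick sketch; the 75W slot and 150W-per-8-pin numbers are the commonly cited PCIe guideline values, not hard electrical limits:

```python
# Nominal PCIe power budget from the commonly cited guideline ratings
SLOT_W = 75        # PCIe x16 slot
EIGHT_PIN_W = 150  # per 8-pin PEG connector

def board_power_budget(num_eight_pin):
    return SLOT_W + num_eight_pin * EIGHT_PIN_W

print(board_power_budget(2))  # 375 -> the figure discussed in this thread
```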
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Why would you buy a 6990 over dual 6950 or 6970 though, if you are only going for 2 cards?

There are advantages and disadvantages to single-PCB dual-chip cards.

Advantages

1: It's a single PCIe card (no SLI/CF mobo needed)
2: You only need one cooler if you want to upgrade (air/water)
3: Sometimes you can't O/C in SLI or CF, but you can on a dual-chip card
4: Lower power consumption/thermals and, most of the time, lower noise because of the single fan (perhaps the 6990 doesn't count there, but the 5970 had low noise) vs. dual cards in SLI/CF
5: Single card with triple-monitor capability (only applies to the GTX590)

Disadvantages

1: Lower performance than dual cards, but not by much. If O/C they have almost the same performance.
2: It's harder to sell a monster dual-chip card than a cheaper single card

Individuals may have different opinions, but the above should cover most of them.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Cayman is not that much more 'efficient' than the 580/570 are either. I think the 6970 and gtx570 are about even in watts pulled. The 580 uses almost 20% more, but is sometimes 15% faster also.
The 'key' imo will be the magic of so called cherry picked cores. We have seen this marketing line before to account for 'magical' results !

Well, you have to remember that the 6990 is basically twin 6950s, a card that is probably the most efficient of any card over $200. It uses the same amount of power as a GTX 460 but is much faster. It's gonna be hard beating that.

The 6970 uses so much more power not only because it has higher core clocks but also because it has much faster GDDR5 memory that is also volted higher.

Also, 6970s in CF are faster than GTX 570s in SLI, so that battle is essentially lost.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
I'm going to try some benches with two of my cards at 602/3500, then add 6% to my results to account for the extra shaders on the 580. See how it fares against a 6990 at 2560x1600.


So I ran the two benchmarks I could that would mirror Anandtech's benchmark results, Metro 2033 and Crysis Warhead.

I clocked two of my cards to 602/1500 to match the rumored specs. I have then taken the results and added 6% to each to account for the extra shaders on the 580. Per Anandtech's own testing, at the same clock speeds a GTX 580 is 6% faster than a GTX 480. The extra speed is from the additional shaders and the numbers make sense as the 580 has 6% more shaders than a 480. http://www.anandtech.com/show/4012/nvidias-geforce-gtx-580-the-sli-update/3


GTX 590 Theoretical Metro 2033 @ 2560x1600 = 35.33 FPS
Radeon 6990 Metro 2033 @ 2560x1600 = 43 FPS http://www.anandtech.com/show/4209/amds-radeon-hd-6990-the-new-single-card-king/9

GTX 590 Theoretical Crysis Warhead @ 2560x1600 = 37.6 FPS
Radeon 6990 Crysis Warhead @ 2560x1600 = 43.9 FPS http://www.anandtech.com/show/4209/amds-radeon-hd-6990-the-new-single-card-king/7

If the rumored specs of 602/3500 are true, the 590 will fall a little short of a 6990.
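The +6% extrapolation used above can be sketched as follows; the 33.33 FPS baseline is a hypothetical illustrative value, and only the 6% shader-count scaling comes from the linked Anandtech testing:

```python
# Estimate a "theoretical GTX 590" from downclocked GTX 480 SLI results:
# a GTX 580 has ~6% more shaders than a GTX 480 and, per the linked testing,
# is ~6% faster at the same clocks, so scale the measured FPS by 1.06.

def theoretical_gtx590_fps(gtx480_sli_fps, shader_scaling=1.06):
    return gtx480_sli_fps * shader_scaling

# Hypothetical measured baseline:
print(round(theoretical_gtx590_fps(33.33), 2))  # 35.33
```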

Below are pics of my results.
