
If the 580 was shrunk to 28nm, how fast would it be?

vshin (Member)
While perusing all this talk about the new 580, I had a nagging question pop up. Since the foundries are skipping the 32nm process and jumping directly from 40nm to 28nm, would that mean we should expect a much higher step-up in performance in 2011? We're all accustomed to our cards becoming overshadowed by new cards, but the next generation of cards could potentially blow the ones coming out now out of the water by a huge margin. Something like going from a 90nm card to a 40nm one.

So if nVidia did nothing but shrink the current 580 from 40nm to 28nm, how much faster would it be?
 
It would be exactly the same speed while likely consuming less power.

There is no guarantee it would have higher clocks, but it's likely given the improved transistor characteristics and other factors. So perhaps you'd get the same thing at 900MHz, 10% faster at best.
 
Node labels are marketing terms; don't read too much into them. IDC has a few good posts about this, I wish I could find them.

Shrinks alone do not increase performance. Supposedly nV already has 28nm Kepler samples, but the source on that is about as shady as it gets, one step above Charlie and one step below a flaming turd.
 
> Supposedly nV already has 28nm Kepler samples, but the source on that is about as shady as it gets, one step above Charlie and one step below a flaming turd.

Now now, few like Charlie but there is no reason to set him on fire 😛
 
> So if nVidia did nothing but shrink the current 580 from 40nm to 28nm, how much faster would it be?

It's like a full-node jump. 90nm -> 65nm.

Fermi on 28nm could be expected to consume half the power if all else remained the same (clocks, cores, etc) and Nvidia invested all the resources necessary to do a decent litho shrink (no cutting corners).

If they wanted to boost performance at the expense of power consumption, then yeah, they could probably push clocks up 50% or so, depending on how aggressive they get with the power budget and the cooling.

Given how parallelized graphics processing is, it makes more sense from a power-budget standpoint to keep the clocks low and chew up more die space making a wider chip with more cores. (i.e., Kepler)
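To put rough numbers on the shrink options above, here's a back-of-envelope sketch. Purely illustrative: real shrinks never hit ideal scaling, and the node labels are partly marketing anyway.

```python
# Idealized 40nm -> 28nm shrink arithmetic. Illustrative only.

old_node, new_node = 40.0, 28.0

# Area scales with the square of the feature size.
area_ratio = (new_node / old_node) ** 2  # ~0.49x the die area

# Flip it around: the same die area fits roughly twice the transistors.
density_gain = 1 / area_ratio            # ~2.04x

# Dynamic power ~ C * V^2 * f. If capacitance tracks area and you hold
# clocks and voltage constant, power roughly halves -- the
# "same chip at half the power" option.
power_ratio = area_ratio

print(f"area: {area_ratio:.2f}x, density: {density_gain:.2f}x, "
      f"power at same clocks: {power_ratio:.2f}x")
```

So "twice the cores" and "half the power" are two ways of spending the same ideal ~2x density win; in practice, leakage and wire delay eat into both.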
 
> So if nVidia did nothing but shrink the current 580 from 40nm to 28nm, how much faster would it be?

Wait 9~12 months and see 😛
 
> Given how parallelized graphics processing is, it makes more sense from a power-budget standpoint to keep the clocks low and chew up more die space making a wider chip with more cores.


Somewhat off topic, but you may have just answered a question I have wondered about but never thought to ask. That is, modern CPUs are available over 3GHz, and many overclock well over 4GHz. Yet 1GHz is screaming by GPU standards.

Is that more because it makes more sense from a performance standpoint to have more cores at a slower clock speed than fewer cores at a higher clock speed? That is, a GPU with 100 cores at 1GHz should be faster at most graphics workloads than that same architecture with 50 cores at 2GHz?
 
> So if nVidia did nothing but shrink the current 580 from 40nm to 28nm, how much faster would it be?

28nm is potentially twice as nice, since it would give twice the area to work with. 40nm is just a stopgap, which is why I'm semi-cheaping out on it and don't want to buy a high-end 40nm.
 
I guess the corollary to my first post is this:

In the past year, performance has increased by around 20% (480 -> 580, 5870 -> 6970). But if clock speeds go up by as much as 40-50% because of a full-node jump in the fabrication process, then we should see a much bigger leap in performance as soon as 8 months from now.

If that's the case, then spending a lot of money right now on a new card is probably not a good idea since we're on the cusp of a bigger jump than usual in graphics performance.
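For what it's worth, that comparison works out like this. Illustrative arithmetic only: it assumes performance tracks clocks one-for-one, which memory bandwidth and everything else won't allow in practice.

```python
# Compare the recent ~20% refresh gains against a hypothetical 40-50%
# clock bump from a full-node jump. Purely illustrative.
refresh_gain = 1.20                      # e.g. 480 -> 580, 5870 -> 6970
node_gain_lo, node_gain_hi = 1.40, 1.50  # projected clock increase

# How much larger the next jump would be relative to a normal refresh.
relative_lo = node_gain_lo / refresh_gain
relative_hi = node_gain_hi / refresh_gain
print(f"{relative_lo:.2f}x to {relative_hi:.2f}x the usual generational step")
```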
 
Clock speeds going up by 50%? Uhh....

Remember, smaller nodes are more prone to electromigration, meaning they can't handle as much voltage.

Or am I applying CPU issues to the GPU and am off base? I'm sure someone here knows.
 
> Didn't you buy a GTX 460? Now I see a 6850 in your sig.

I was furious at the 6850 price gouging and convinced myself that I wouldn't miss Eyefinity. So I decided to REALLY cheap out and get a 460-768, but the EVGA packaging and/or Amazon shipping damaged the card (defect on the PCB), so I RMA'd it. I ordered an MSI Cyclone GTX460 as a replacement, but it still hadn't shipped 2-3 days later, at which point 6850 prices finally dropped to SEP.

Around that time I realized how much I missed Eyefinity and hotkey-switching resolutions after all (NV needs to get its act together and add hotkey profile switching ASAP; it's unbelievable how much clicking it would have taken if I had a Surround system to switch among display options), so I switched my order to a 6850. If I didn't have triple monitors I would have stayed with the MSI Cyclone order, because I prefer NV all else equal, and the GTX460 and 6850 are otherwise equal. This is why it bugs me that NV still hasn't designed a triple-monitor single-GPU card: it sorta vendor-locks me to AMD unless the price/perf gets too far out of whack. (I already had an Eyefinity cable from my previous setup, so that was a sunk cost.)
 
I was waiting for a 6850 deal to come up, but I saw the Asus 460 768MB with HAWX 2 for $115 and jumped on that. Already sold HAWX 2 for $18.

I really wanted to get an AMD card because my last 3 or 4 cards were Nvidia (no problems, just wanted a change), but at this price it was too good to pass up (this is my 2nd 768MB 460). Probably won't get an AMD card till next gen unless I see a 6850 for $120 or so...
 
> I was waiting for a 6850 deal to come up, but I saw the Asus 460 768MB with HAWX 2 for $115 and jumped on that.

Saw that deal, was sorely tempted but I already had a 6850 in hand and no real need for another card. It'll probably be a while before we see a quality 460-768 priced that low again, and with a game, too!
 
> Is that more because it makes more sense from a performance standpoint to have more cores at a slower clock speed than fewer cores at a higher clock speed?

That's my understanding. The huge amount of parallel work makes it more efficient to pack in a large number of simple cores at a lower clock.
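A toy model shows why wide-and-slow wins. It assumes, as a rough first-order approximation, that voltage has to rise about linearly with frequency, so per-core dynamic power grows roughly with the cube of the clock; the numbers are illustrative, not real silicon.

```python
# Toy model: dynamic power ~ cores * V^2 * f, with V scaling roughly
# linearly with f, so per-core power ~ f^3.

def relative_power(cores: int, freq_ghz: float) -> float:
    # Relative dynamic power under the f^3 approximation.
    return cores * freq_ghz ** 3

# Equal raw throughput (cores * clock) both ways:
wide_slow   = relative_power(100, 1.0)  # 100 cores @ 1 GHz
narrow_fast = relative_power(50, 2.0)   # 50 cores @ 2 GHz

print(wide_slow, narrow_fast)  # 100.0 vs 400.0
```

Under this model, the 50-core chip at twice the clock burns roughly 4x the power for the same nominal throughput, which is why GPUs spend die area on cores rather than chasing CPU-like clocks.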
 