Has there been any word on G90?


Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow

Right, which is why it makes more sense to eliminate the largest and most expensive part of the core where possible and essentially gain "free" performance by simply taking advantage of the clockspeeds available from a die shrink and smaller process. But while we're on the topic of the GTX, you are aware that it's currently so large the entire "core" isn't even on a single die, right?
That would only be feasible if you can achieve a more than 2x clockspeed improvement going from 90nm to 65nm. I doubt that's going to happen.

Dismissing the ability to scale performance with additional cores instead of laser locking or neutering a part while maintaining all of its production costs screams "n00b" to me. You said it yourself, if parallel functionality comes naturally, why would they continue to produce a high-end part only to strip it down to meet market segments when they could simply scale performance (and cost) with the number of cores?
It would be dumb to use multiple separate cores, because then each core would need additional components that are currently shared among the quads: the memory controller, ROPs, texture units, and more. And then you'd need more complex scheduling logic between each core. The whole premise of going multi-core makes no sense for a GPU.

If NV does go this direction, which I'm not necessarily in agreement with either, it's pretty obvious they saw the same things I'm seeing: that the shaders scale extremely well with clockspeed, to the point where more clocks can overcome a lack of transistors. They already have working silicon providing solid evidence of this. G84, with only 64 shaders on an 80nm process, allows them to hit higher clockspeeds. Notch those improvements up a bit further with a 65nm or 55nm process and the OP doesn't seem so outrageous.
It is outrageous, because a G84 would need to work at ~1.2GHz just to equal G80 performance. I can assure you no version of the 65nm G84 will hit 1.2GHz on air.
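Putting rough numbers on that (just a back-of-the-envelope sketch, assuming performance scales with shader count times clock; the 128-shader count and ~575MHz core clock for the G80 GTX are my assumptions from the card's published specs, not figures posted in this thread):

```python
# Back-of-the-envelope: if performance scales roughly with
# (shader count x core clock), how fast would a 64-shader G84
# need to run to match a 128-shader G80 GTX?
# Assumed specs (not stated in this thread): GTX = 128 shaders @ ~575MHz core.

g80_gtx_shaders = 128
g80_gtx_core_mhz = 575
g84_shaders = 64  # rumored shader count discussed above

required_g84_mhz = g80_gtx_shaders * g80_gtx_core_mhz / g84_shaders
print(f"G84 core clock needed for parity: ~{required_g84_mhz:.0f} MHz")
# ~1150 MHz, i.e. the ~1.2GHz / more-than-2x-clock figure above
```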
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
That would only be feasible if you can achieve a more than 2x clockspeed improvement going from 90nm to 65nm. I doubt that's going to happen.
90nm to 65nm, judging from precedents set by the CPU industry, is roughly a 50% reduction in die size. If they don't increase transistor count significantly with G90, much higher clockspeeds aren't out of the question. The OP seems to indicate as much as well. I wouldn't be shocked if G84 hits 900MHz on the core with some effort, at which point it could easily outperform a G80 GTS, which has 32 more shaders. I'm skeptical about a two-fold increase in clockspeeds, but I'm not going to make the mistake of dismissing the possibility.
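As a quick sanity check on that 50% figure, here's the ideal-shrink math. A real shrink rarely scales perfectly, so treat this as a ceiling rather than a prediction:

```python
# Ideal (linear) shrink: die area scales with the square of the
# process feature size. Real-world shrinks fall short of this.

old_node_nm = 90
new_node_nm = 65

area_ratio = (new_node_nm / old_node_nm) ** 2
print(f"Ideal 65nm die area: ~{area_ratio:.0%} of the 90nm die")
# ~52%, i.e. roughly the 50% reduction mentioned above
```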

It would be dumb to use multiple separate cores, because then each core would need additional components that are currently shared among the quads: the memory controller, ROPs, texture units, and more. And then you'd need more complex scheduling logic between each core. The whole premise of going multi-core makes no sense for a GPU.
SLI is a waste of time too, I suppose. I actually do think SLI in its current form is wasted potential, but a multi-core, multi-die setup or even a multi-core, single-die setup would be somewhere in between SLI and a single-core implementation, except that the costs and scalability would reflect the level of performance for both NV and the end-user.

But back to your example: while it's true you'll have some redundancy, you're forgetting the increased die size from trying to pack everything onto one die, which is going to drive up heat, power consumption, and cost while decreasing clock speed and yields. You also discount any benefit each core would gain from its own memory controller, ROPs, texture units, etc. The main overhead would be the scheduler, but if it's handled at the hardware level and not the driver level, it should still be much more efficient than SLI.

In the end it's still a much cheaper solution for NV to pack a few more transistors on each chip than to sell full-blown flagships with a few nuts chopped off.
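To put a toy number on the yield point, here's the standard Poisson yield model; the defect density and die areas below are made-up illustrative values, not actual G80 or G90 figures:

```python
import math

# Poisson yield model: Y = exp(-A * D), where A is die area and
# D is defect density. All numbers here are purely illustrative.

defects_per_cm2 = 0.5   # hypothetical defect density
small_die_cm2 = 2.0     # hypothetical per-core die
big_die_cm2 = 4.0       # hypothetical monolithic die, 2x the area

def poisson_yield(area_cm2, d0):
    return math.exp(-area_cm2 * d0)

print(f"small die yield: {poisson_yield(small_die_cm2, defects_per_cm2):.0%}")
print(f"big die yield:   {poisson_yield(big_die_cm2, defects_per_cm2):.0%}")
# ~37% vs ~14%: doubling the area more than halves the yield,
# which is the cost tradeoff I'm pointing at
```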

It is outrageous, because a G84 would need to work at ~1.2GHz just to equal G80 performance. I can assure you no version of the 65nm G84 will hit 1.2GHz on air.
G84 will only be 80nm, and its stock clocks are guesstimated to be anywhere from 100-200MHz faster than G80 clockspeeds. Depending on how well these OC, we'll get a good idea if there's any truth to this rumor, but if we start seeing 800-900MHz clockspeeds on the G84 on air, it'll start rivaling G80 GTS performance numbers... at less than half the price.
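Using the same rough shaders-times-clock approximation as before: the 64 and 96 shader counts come from this thread, while the GTS's ~500MHz core clock is my assumption from its published specs, so take it as a sketch, not a benchmark:

```python
# Crude throughput proxy: shader count x core clock.
# Assumed (not from this thread): G80 GTS core clock ~500MHz.

g80_gts_shaders, g80_gts_core_mhz = 96, 500
g84_shaders = 64

gts_proxy = g80_gts_shaders * g80_gts_core_mhz

for g84_mhz in (800, 900):
    ratio = (g84_shaders * g84_mhz) / gts_proxy
    print(f"G84 @ {g84_mhz}MHz vs G80 GTS: {ratio:.2f}x")
# 1.07x and 1.20x: on this crude metric a 64-shader G84 at 800-900MHz
# would edge past the GTS, which is the claim above
```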
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
When there is a G90, I wouldn't expect it before the end of the year. Maybe Nov/Dec at the earliest. Honestly, with the way ATI is dropping the ball here, they don't need it.

 

Chaotic42

Lifer
Jun 15, 2001
34,697
1,868
126
Originally posted by: Genx87
When there is a G90, I wouldn't expect it before the end of the year. Maybe Nov/Dec at the earliest. Honestly, with the way ATI is dropping the ball here, they don't need it.

I might wait. The 8800GTX is simply not worth $550 to me.