Has there been any word on G90?


Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Schmeh
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

I need to get me some of that GDDR4 XDR2 to replace my SRAM EDO DDR RDRAM in my pc.

i didn't catch that one :)
 

Bill Brasky

Diamond Member
May 18, 2006
4,324
1
0
Originally posted by: Schmeh
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

I need to get me some of that GDDR4 XDR2 to replace my SRAM EDO DDR RDRAM in my pc.
:laugh:
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
G90, hmm, all the rumours seem to indicate that the G90 is the refresh of the current G80 core. Instead of the jump to 80nm, they may have gone straight to 65nm.

Along the ride could be tweaks in the architecture, 512bit memory bus, more shaders, etc.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

Those numbers look like something made up by a 12 year old kid.

*edit: since we're speculating, how's this?

65nm process
192 scalar unified shaders
700+ mhz core speed
1.4+ ghz shader clocks
Predicted 3dmock06 score 15000+ based on an unknown cpu from the future
Taped out and waiting in the wings for Ati's r600 (as usual... lol)

Why does everyone find those numbers so hard to believe? Sure, 64 shader processors would seem like a step backwards, but not if they went to a smaller process and were getting much higher clock speeds. After all, they need to give themselves some room for improvement. 1.2GHz core with 2GHz actual, not effective RAM speed, could easily produce the 14k-16k 3DMark06 scores with only 64 shaders on a 65nm process.

From a performance standpoint, keeping at least the same number of shaders and increasing the clockspeed would produce the biggest improvement, @100%, but from a business standpoint, cutting the number of shaders would decrease costs and provide future options while still giving a significant increase in performance. Not 100%, but at least 50%. If NV's next GPU is only 64 shaders on a 65nm process, they'll definitely be "holding back", but no one's going to complain if it's a big increase over G80, especially if they know what's planned in the future (128 shaders at the same clock speed, etc.).
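
As a back-of-envelope illustration of that argument (a sketch only: the 2.7 GHz shader clock below is a hypothetical figure roughly double the 8800 GTX's commonly cited 1.35 GHz, and raw units x clock ignores memory bandwidth, ROPs and texturing):

# Back-of-envelope shader throughput: fewer units at a higher clock vs. G80.
# Relative ALU throughput is approximated as units * shader clock (GHz).

def relative_throughput(shader_units, shader_clock_ghz):
    return shader_units * shader_clock_ghz

g80_gtx = relative_throughput(128, 1.35)   # 8800 GTX: 128 SPs @ 1.35 GHz (commonly cited)
half_width = relative_throughput(64, 2.7)  # hypothetical 64-SP part at double the shader clock

print(f"8800 GTX        : {g80_gtx:.1f} (units*GHz)")
print(f"64-SP @ 2.7 GHz : {half_width:.1f} (units*GHz)")
print(f"Ratio           : {half_width / g80_gtx:.2f}x")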
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

Those numbers look like something made up by a 12 year old kid.

*edit: since we're speculating, how's this?

65nm process
192 scalar unified shaders
700+ mhz core speed
1.4+ ghz shader clocks
Predicted 3dmock06 score 15000+ based on an unknown cpu from the future
Taped out and waiting in the wings for Ati's r600 (as usual... lol)

Why does everyone find those numbers so hard to believe? Sure, 64 shader processors would seem like a step backwards, but not if they went to a smaller process and were getting much higher clock speeds. After all, they need to give themselves some room for improvement. 1.2GHz core with 2GHz actual, not effective RAM speed, could easily produce the 14k-16k 3DMark06 scores with only 64 shaders on a 65nm process.

From a performance standpoint, keeping at least the same number of shaders and increasing the clockspeed would produce the biggest improvement, @100%, but from a business standpoint, cutting the number of shaders would decrease costs and provide future options while still giving a significant increase in performance. Not 100%, but at least 50%. If NV's next GPU is only 64 shaders on a 65nm process, they'll definitely be "holding back", but no one's going to complain if it's a big increase over G80, especially if they know what's planned in the future (128 shaders at the same clock speed, etc.).

If NV's next high end GPU is only 64 shaders on a 65nm process then it will get smoked by the competition. Clockspeed can only get you so far; there's a reason why gpus have been getting increasingly "wider" and more complex.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
Originally posted by: chizow
Originally posted by: munky
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

Those numbers look like something made up by a 12 year old kid.

*edit: since we're speculating, how's this?

65nm process
192 scalar unified shaders
700+ mhz core speed
1.4+ ghz shader clocks
Predicted 3dmock06 score 15000+ based on an unknown cpu from the future
Taped out and waiting in the wings for Ati's r600 (as usual... lol)

Why does everyone find those numbers so hard to believe? Sure, 64 shader processors would seem like a step backwards, but not if they went to a smaller process and were getting much higher clock speeds. After all, they need to give themselves some room for improvement. 1.2GHz core with 2GHz actual, not effective RAM speed, could easily produce the 14k-16k 3DMark06 scores with only 64 shaders on a 65nm process.

From a performance standpoint, keeping at least the same number of shaders and increasing the clockspeed would produce the biggest improvement, @100%, but from a business standpoint, cutting the number of shaders would decrease costs and provide future options while still giving a significant increase in performance. Not 100%, but at least 50%. If NV's next GPU is only 64 shaders on a 65nm process, they'll definitely be "holding back", but no one's going to complain if it's a big increase over G80, especially if they know what's planned in the future (128 shaders at the same clock speed, etc.).

If NV's next high end GPU is only 64 shaders on a 65nm process then it will get smoked by the competition. Clockspeed can only get you so far; there's a reason why gpus have been getting increasingly "wider" and more complex.

Not true at all. G80, and pretty much every graphics chip before it, has benefitted directly from an increase in clock speed, with gains at least proportionate to the increase in clock. Only when you can't increase clockspeeds do you need to go back and start re-working the architecture or increasing die size. The P4 lasted how long by simply scaling on clockspeed?

As for getting smoked by the competition... it just needs to outperform an 8800 GTX, which I'm sure it would. The original post gives a hint at which direction NV would be going: a native dual-core design, which would bring it up to 128 shaders at much higher clockspeeds.
 
Jun 14, 2003
10,442
0
0
Originally posted by: Schmeh
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

I need to get me some of that GDDR4 XDR2 to replace my SRAM EDO DDR RDRAM in my pc.

and you'll still lose out to curiously high bandwidth LMNOPRAM


 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
Originally posted by: chizow
Originally posted by: munky
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

Those numbers look like something made up by a 12 year old kid.

*edit: since we're speculating, how's this?

65nm process
192 scalar unified shaders
700+ mhz core speed
1.4+ ghz shader clocks
Predicted 3dmock06 score 15000+ based on an unknown cpu from the future
Taped out and waiting in the wings for Ati's r600 (as usual... lol)

Why does everyone find those numbers so hard to believe? Sure, 64 shader processors would seem like a step backwards, but not if they went to a smaller process and were getting much higher clock speeds. After all, they need to give themselves some room for improvement. 1.2GHz core with 2GHz actual, not effective RAM speed, could easily produce the 14k-16k 3DMark06 scores with only 64 shaders on a 65nm process.

From a performance standpoint, keeping at least the same number of shaders and increasing the clockspeed would produce the biggest improvement, @100%, but from a business standpoint, cutting the number of shaders would decrease costs and provide future options while still giving a significant increase in performance. Not 100%, but at least 50%. If NV's next GPU is only 64 shaders on a 65nm process, they'll definitely be "holding back", but no one's going to complain if it's a big increase over G80, especially if they know what's planned in the future (128 shaders at the same clock speed, etc.).

If NV's next high end GPU is only 64 shaders on a 65nm process then it will get smoked by the competition. Clockspeed can only get you so far; there's a reason why gpus have been getting increasingly "wider" and more complex.

Not true at all. G80, and pretty much every graphics chip before it, has benefitted directly from an increase in clock speed, with gains at least proportionate to the increase in clock. Only when you can't increase clockspeeds do you need to go back and start re-working the architecture or increasing die size. The P4 lasted how long by simply scaling on clockspeed?

As for getting smoked by the competition... it just needs to outperform an 8800 GTX, which I'm sure it would. The original post gives a hint at which direction NV would be going: a native dual-core design, which would bring it up to 128 shaders at much higher clockspeeds.

The benefits in clockspeed were nowhere near the gains from new architecture or more "pipes". The last time Nvidia relied heavily on clockspeed was Nv30, and that was a disaster, even with the same number of pipes as a GF4. Imagine what would have happened if they cut it down to 2 pipes. Every new generation since then relied mostly on a wider architecture, like the GF6, GF7, and GF8. I'm 100% sure the g90 will have at least as many shaders as the g80 if Nvidia has any hope to compete in the high-end video card market.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Originally posted by: chizow
Originally posted by: munky
Originally posted by: chizow
Originally posted by: munky
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

Those numbers look like something made up by a 12 year old kid.

*edit: since we're speculating, how's this?

65nm process
192 scalar unified shaders
700+ mhz core speed
1.4+ ghz shader clocks
Predicted 3dmock06 score 15000+ based on an unknown cpu from the future
Taped out and waiting in the wings for Ati's r600 (as usual... lol)

Why does everyone find those numbers so hard to believe? Sure, 64 shader processors would seem like a step backwards, but not if they went to a smaller process and were getting much higher clock speeds. After all, they need to give themselves some room for improvement. 1.2GHz core with 2GHz actual, not effective RAM speed, could easily produce the 14k-16k 3DMark06 scores with only 64 shaders on a 65nm process.

From a performance standpoint, keeping at least the same number of shaders and increasing the clockspeed would produce the biggest improvement, @100%, but from a business standpoint, cutting the number of shaders would decrease costs and provide future options while still giving a significant increase in performance. Not 100%, but at least 50%. If NV's next GPU is only 64 shaders on a 65nm process, they'll definitely be "holding back", but no one's going to complain if it's a big increase over G80, especially if they know what's planned in the future (128 shaders at the same clock speed, etc.).

If NV's next high end GPU is only 64 shaders on a 65nm process then it will get smoked by the competition. Clockspeed can only get you so far; there's a reason why gpus have been getting increasingly "wider" and more complex.

Not true at all. G80, and pretty much every graphics chip before it, has benefitted directly from an increase in clock speed, with gains at least proportionate to the increase in clock. Only when you can't increase clockspeeds do you need to go back and start re-working the architecture or increasing die size. The P4 lasted how long by simply scaling on clockspeed?

As for getting smoked by the competition... it just needs to outperform an 8800 GTX, which I'm sure it would. The original post gives a hint at which direction NV would be going: a native dual-core design, which would bring it up to 128 shaders at much higher clockspeeds.

This article supports this, I believe.

http://theinquirer.net/default.aspx?article=37791
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Nvidia's New Software Development Kit Supports Shader Model 5.0.
Nvidia Prepares for Shader Model 5.0

Category: Video

by Anton Shilov

[ 03/12/2007 | 03:25 PM ]

Even though not all game developers have adopted shader model (SM) 3.0, introduced three years ago, and some claim that transitioning to DirectX 10's shader model 4.0 right now hardly makes sense, Nvidia's new software development kit (SDK) already features profiles for shader model 5.0, which is believed to be an improved version of SM 4.0.

Nvidia's new SDK 10, which was released just last week, apparently contains macro invocations that define the supported Cg profiles, including Fragment50, Vertex50 and Geometry50, meaning that the current SDK supports architecture micro-code profiles for pixel shaders 5.0, vertex shaders 5.0 and geometry shaders 5.0.

While hardly anybody knows what shader model 5.0 actually is or how much it differs from shader model 4.0, the inclusion of the architecture micro-code inside a compiler indicates that Nvidia foresees the arrival of shader model 5.0-capable hardware soon enough to enable game developers to compile their titles for it. On the other hand, it is logical that Nvidia is working on next-generation graphics technologies and chips, such as G81, G90 and so on.

Nvidia's new set of tools for software developers is primarily designed to take advantage of the company's latest hardware. The Developer Toolkit includes SDK 10, with code samples for the latest graphics processors; PerfKit 5, a set of powerful tools for debugging and profiling GPU applications for Windows Vista and DirectX 10, with shader edit-and-continue, render state modification, customizable graphs and counters, and more; ShaderPerf 2, a suite that provides detailed shader performance information with support for new drivers; FX Composer 2, a development environment for cross-platform shader authoring; and some other tools.

Officials for Nvidia did not comment on the news story.
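
For what it's worth, the SDK claim is the sort of thing anyone with the toolkit installed could check by scanning its headers for the profile tokens the article names; a minimal sketch only, and the install path below is a placeholder, not the actual SDK location:

# Minimal sketch: look for the SM 5.0 profile tokens mentioned above in a
# locally installed SDK/Cg header tree. The root path is an assumption.
from pathlib import Path

SDK_ROOT = Path(r"C:/Program Files/NVIDIA Corporation/SDK 10")  # placeholder path
TOKENS = ("Fragment50", "Vertex50", "Geometry50")

for header in SDK_ROOT.rglob("*.h"):
    try:
        text = header.read_text(errors="ignore")
    except OSError:
        continue
    found = [t for t in TOKENS if t in text]
    if found:
        print(f"{header}: {', '.join(found)}")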
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
I'm going to have to agree with Munky here.

There's no way that Nvidia is going to cut the number of shaders in half.

Clockspeeds mean nothing if it's only pushing half the operations per clock. Theoretically, Nvidia would have to double the clock speed in order to make a 64-shader GPU as fast as a 128-shader GPU. That is of course assuming that all other things are equal and there isn't a huge breakthrough in the way that shader processes are executed.

The dual-core explanation doesn't cut it for me either.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
The benefits in clockspeed were nowhere near the gains from new architecture or more "pipes". The last time Nvidia relied heavily on clockspeed was Nv30, and that was a disaster, even with the same number of pipes as a GF4. Imagine what would have happened if they cut it down to 2 pipes. Every new generation since then relied mostly on a wider architecture, like the GF6, GF7, and GF8. I'm 100% sure the g90 will have at least as many shaders as the g80 if Nvidia has any hope to compete in the high-end video card market.

Sure they are. An 8800 GTS OC'd @25-33% on the core is enough to bring it up to stock GTX levels, which also happens to be the difference in its number of shaders. And that's not to say there wouldn't be improvements to the shader architecture as well. The examples you bring up are instances where changes per core and clockspeed are mutually exclusive given the technological limitations at any given time.

You can either 1) improve the architecture/cram more transistors into a part or 2) keep the architecture the same and try to shrink it, cool it, make it draw less power and have it run a lot faster. Going nuts on one precludes going nuts on the other, so past examples from chip to chip aren't going to provide relevant examples. What we have seen though on die shrinks on the same architecture are faster clocks that scale linearly to improve performance.

Which brings us back to G90: if you were able to double the clock speed with half the shaders, it could very well outperform a GTX in a single-core configuration, and with two, it'd be close to double the performance. Considering the only thing to compete with in the high-end market is the GTX, I'm sure it'd do just fine. ;)
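
For reference, the commonly cited retail specs behind that GTS-vs-GTX comparison, as a quick ratio check (raw units x clock is only a crude proxy for delivered performance, which also depends on memory bandwidth, ROPs and texturing):

# Ratio check for the 8800 GTS vs. 8800 GTX comparison (commonly cited specs).
gts = {"units": 96,  "core_mhz": 500, "shader_mhz": 1188}   # 8800 GTS 640MB
gtx = {"units": 128, "core_mhz": 575, "shader_mhz": 1350}   # 8800 GTX

shader_count_gap = gtx["units"] / gts["units"] - 1
core_clock_gap = gtx["core_mhz"] / gts["core_mhz"] - 1
raw_alu_gap = (gtx["units"] * gtx["shader_mhz"]) / (gts["units"] * gts["shader_mhz"]) - 1

print(f"Shader-count gap : {shader_count_gap:.0%}")   # ~33%
print(f"Core-clock gap   : {core_clock_gap:.0%}")     # ~15%
print(f"Raw ALU gap      : {raw_alu_gap:.0%}")        # ~52%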
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Matt2
I'm going to have to agree with Munky here.

There's no way that Nvidia is going to cut the number of shaders in half.

Clockspeeds mean nothing if it's only pushing half the operations per clock. Theoretically, Nvidia would have to double the clock speed in order to make a 64-shader GPU as fast as a 128-shader GPU. That is of course assuming that all other things are equal and there isn't a huge breakthrough in the way that shader processes are executed.

The dual-core explanation doesn't cut it for me either.

And why wouldn't they? If they could get 2x the clockspeed with 64 shaders, you don't think they would? G80 is expensive for NV, especially at 90nm. If going to a smaller process allows them to 1) increase clockspeeds enough for a much smaller die with half the shaders to perform slightly better than a G80 while 2) opening up the possibility for a dual-core configuration on the same die, they wouldn't jump all over that?
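
A rough die-area sketch of that cost argument (idealized optical shrink with a guessed shader-array fraction; real designs scale worse because I/O, analog and the memory interface shrink poorly, and the ~480 mm^2 G80 figure is just the commonly cited ballpark):

# Idealized die-area scaling for the "smaller, cheaper die" argument.
g80_area_mm2 = 480.0                 # G80 at 90 nm, commonly cited ballpark
shrink = (65 / 90) ** 2              # ideal area scaling from 90 nm to 65 nm

scaled_full = g80_area_mm2 * shrink  # same design shrunk to 65 nm
# Assume (guess) the shader array is ~half the die and gets cut in half.
scaled_half_shaders = scaled_full * (1 - 0.5 * 0.5)

print(f"Ideal 65 nm shrink of G80     : {scaled_full:.0f} mm^2")
print(f"...with half the shader array : {scaled_half_shaders:.0f} mm^2")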
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
The benefits in clockspeed were nowhere near the gains from new architecture or more "pipes". The last time Nvidia relied heavily on clockspeed was Nv30, and that was a disaster, even with the same number of pipes as a GF4. Imagine what would have happened if they cut it down to 2 pipes. Every new generation since then relied mostly on a wider architecture, like the GF6, GF7, and GF8. I'm 100% sure the g90 will have at least as many shaders as the g80 if Nvidia has any hope to compete in the high-end video card market.

Sure they are. An 8800 GTS OC'd @25-33% on the core is enough to bring it up to stock GTX levels, which also happens to be the difference in its number of shaders. And that's not to say there wouldn't be improvements to the shader architecture as well. The examples you bring up are instances where changes per core and clockspeed are mutually exclusive given the technological limitations at any given time.

You can either 1) improve the architecture/cram more transistors into a part or 2) keep the architecture the same and try to shrink it, cool it, make it draw less power and have it run a lot faster. Going nuts on one precludes going nuts on the other, so past examples from chip to chip aren't going to provide relevant examples. What we have seen though on die shrinks on the same architecture are faster clocks that scale linearly to improve performance.

Which brings us back to G90: if you were able to double the clock speed with half the shaders, it could very well outperform a GTX in a single-core configuration, and with two, it'd be close to double the performance. Considering the only thing to compete with in the high-end market is the GTX, I'm sure it'd do just fine. ;)

The very mention of "dual-core" in relation to video cards screams "n00b" to me. Gpus are already multi-core parts, with each quad acting as a separate core. That's why you can have gpus with disabled quads and still retain all the functionality of the same part with more quads. Which brings me to my second point: whoever wrote those specs has no idea what he's talking about. I can't believe I'm even having this debate, because I would think anyone who has seen the 8800gtx would dismiss the idea of halving the number of shaders for a next gen gpu.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: happy medium
Nvidia's New Software Development Kit Supports Shader Model 5.0.
Nvidia Prepares for Shader Model 5.0

Category: Video

by Anton Shilov

[ 03/12/2007 | 03:25 PM ]

Even though not all game developers have adopted shader model (SM) 3.0, introduced three years ago, and some claim that transitioning to DirectX 10's shader model 4.0 right now hardly makes sense, Nvidia's new software development kit (SDK) already features profiles for shader model 5.0, which is believed to be an improved version of SM 4.0.

Nvidia's new SDK 10, which was released just last week, apparently contains macro invocations that define the supported Cg profiles, including Fragment50, Vertex50 and Geometry50, meaning that the current SDK supports architecture micro-code profiles for pixel shaders 5.0, vertex shaders 5.0 and geometry shaders 5.0.

While hardly anybody knows what shader model 5.0 actually is or how much it differs from shader model 4.0, the inclusion of the architecture micro-code inside a compiler indicates that Nvidia foresees the arrival of shader model 5.0-capable hardware soon enough to enable game developers to compile their titles for it. On the other hand, it is logical that Nvidia is working on next-generation graphics technologies and chips, such as G81, G90 and so on.

Nvidia's new set of tools for software developers is primarily designed to take advantage of the company's latest hardware. The Developer Toolkit includes SDK 10, with code samples for the latest graphics processors; PerfKit 5, a set of powerful tools for debugging and profiling GPU applications for Windows Vista and DirectX 10, with shader edit-and-continue, render state modification, customizable graphs and counters, and more; ShaderPerf 2, a suite that provides detailed shader performance information with support for new drivers; FX Composer 2, a development environment for cross-platform shader authoring; and some other tools.

Officials for Nvidia did not comment on the news story.

You do know that it takes at least 3 years to go from paper to product, right?

OMGWTF they are working on G110.
 
shadowofthesun

Dec 21, 2006
169
0
0
The very mention of "dual-core" in relation to video cards screams "n00b" to me. Gpus are already multi-core parts, with each quad acting as a separate core. That's why you can have gpus with disabled quads and still retain all the functionality of the same part with more quads. Which brings me to my second point: whoever wrote those specs has no idea what he's talking about. I can't believe I'm even having this debate, because I would think anyone who has seen the 8800gtx would dismiss the idea of halving the number of shaders for a next gen gpu.

Agreed. I'm not yet willing to spend $800 and buy a 1kW PSU just to power my graphics cards. Cutting the number of shader units is possible if they attempt a different, more complex shader unit (Vec4?) like AMD is doing, but I doubt they would just throw out their progress on g80 and start from scratch (i.e. new drivers, new design, etc.). More of the same makes more sense to me; the current architecture is far from mature.

EDIT: changed drives to drivers ><
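
A toy model of the scalar-vs-Vec4 trade-off mentioned above (idealized and illustrative only: the instruction mix is made up, and real schedulers co-issue and pack work far more cleverly):

# Toy utilization model: a 4-wide vector (Vec4) ALU vs. four scalar ALUs.
# 'ops' lists the number of live components per shader instruction
# (1 = scalar, 3 = vec3, 4 = vec4).

def vec4_cycles(ops):
    return len(ops)  # one issue slot per instruction, regardless of width

def scalar_cycles(ops, scalar_units=4):
    return sum(ops) / scalar_units  # every lane does useful work

ops = [1, 1, 3, 1, 4, 1, 1, 3]  # a scalar-heavy mix (made up)
print(f"Vec4 ALU cycles   : {vec4_cycles(ops)}")
print(f"Scalar ALU cycles : {scalar_cycles(ops):.2f}")
print(f"Vec4 utilization  : {sum(ops) / (4 * vec4_cycles(ops)):.0%}")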
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: shadowofthesun
The very mention of "dual-core" in relation to video cards screams "n00b" to me. Gpus are already multi-core parts, with each quad acting as a separate core. That's why you can have gpus with disabled quads and still retain all the functionality of the same part with more quads. Which brings me to my second point: whoever wrote those specs has no idea what he's talking about. I can't believe I'm even having this debate, because I would think anyone who has seen the 8800gtx would dismiss the idea of halving the number of shaders for a next gen gpu.

Agreed. I'm not yet willing to spend $800 and buy a 1kW PSU just to power my graphics cards. Cutting the number of shader units is possible if they attempt a different, more complex shader unit (Vec4?) like AMD is doing, but I doubt they would just throw out their progress on g80 and start from scratch (i.e. new drivers, new design, etc.). More of the same makes more sense to me; the current architecture is far from mature.

EDIT: changed drives to drivers ><

QFT. AT needs rep points!

 

natto fire

Diamond Member
Jan 4, 2000
7,117
10
76
It does seem to be a more CPU-dependent game. I run 1280x1024 (even tried 1024x768 @ low), and it always starts running like crap when I and an AI or two get larger armies. The video card is a 7900GS @ GT speeds, but I think this Venice 3200 is just not able to hack it. If going from an FX-60 to a C2D @ ~3 GHz doubles the frame rate, then my stock 754 Venice has no chance.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
Originally posted by: chizow
Originally posted by: munky
The benefits in clockspeed were nowhere near the gains from new architecture or more "pipes". The last time Nvidia relied heavily on clockspeed was Nv30, and that was a disaster, even with the same number of pipes as a GF4. Imagine what would have happened if they cut it down to 2 pipes. Every new generation since then relied mostly on a wider architecture, like the GF6, GF7, and GF8. I'm 100% sure the g90 will have at least as many shaders as the g80 if Nvidia has any hope to compete in the high-end video card market.

Sure they are. An 8800 GTS OC'd @25-33% on the core is enough to bring it up to stock GTX levels, which also happens to be the difference in its number of shaders. And that's not to say there wouldn't be improvements to the shader architecture as well. The examples you bring up are instances where changes per core and clockspeed are mutually exclusive given the technological limitations at any given time.

You can either 1) improve the architecture/cram more transistors into a part or 2) keep the architecture the same and try to shrink it, cool it, make it draw less power and have it run a lot faster. Going nuts on one precludes going nuts on the other, so past examples from chip to chip aren't going to provide relevant examples. What we have seen though on die shrinks on the same architecture are faster clocks that scale linearly to improve performance.

Which brings us back to G90: if you were able to double the clock speed with half the shaders, it could very well outperform a GTX in a single-core configuration, and with two, it'd be close to double the performance. Considering the only thing to compete with in the high-end market is the GTX, I'm sure it'd do just fine. ;)

The very mention of "dual-core" in relation to video cards screams "n00b" to me. Gpus are already multi-core parts, with each quad acting as a separate core. That's why you can have gpus with disabled quads and still retain all the functionality of the same part with more quads. Which brings me to my second point: whoever wrote those specs has no idea what he's talking about. I can't believe I'm even having this debate, because I would think anyone who has seen the 8800gtx would dismiss the idea of halving the number of shaders for a next gen gpu.

Right, which is why it makes more sense to eliminate the largest and most expensive part of the core where possible and essentially gain "free" performance by simply taking advantage of the clockspeeds available from a die shrink to a smaller process. But while we're on the topic of the GTX, you are aware that it's currently so large that the entire "core" isn't even on a single die, right?

Dismissing the ability to scale performance with additional cores instead of laser locking or neutering a part while maintaining all of its production costs screams "n00b" to me. You said it yourself, if parallel functionality comes naturally, why would they continue to produce a high-end part only to strip it down to meet market segments when they could simply scale performance (and cost) with the number of cores?

If NV does go this direction, which I'm not necessarily in agreement with either, it's pretty obvious they saw the same things I'm seeing: that the shaders scale extremely well with clockspeed, to the point where more clocks can overcome a lack of transistors. They already have working silicon providing solid evidence of this: G84, with only 64 shaders on an 80nm process, allows them to hit higher clockspeeds. Notch those improvements up a bit further with a 65nm or 55nm process and the OP doesn't seem so outrageous.

 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: chizow
Originally posted by: Matt2
I'm going to have to agree with Munky here.

There's no way that Nvidia is going to cut the number of shaders in half.

Clockspeeds mean nothing if it's only pushing half the operations per clock. Theoretically, Nvidia would have to double the clock speed in order to make a 64-shader GPU as fast as a 128-shader GPU. That is of course assuming that all other things are equal and there isn't a huge breakthrough in the way that shader processes are executed.

The dual-core explanation doesn't cut it for me either.

And why wouldn't they? If they could get 2x the clockspeed with 64 shaders, you don't think they would? G80 is expensive for NV, especially at 90nm. If going to a smaller process allows them to 1) increase clockspeeds enough for a much smaller die with half the shaders to perform slightly better than a G80 while 2) opening up the possibility for a dual-core configuration on the same die, they wouldn't jump all over that?

It starts to kinda make sense with all the SLI 2.0 (8-way SLI) stuff and that blurb about native multi-GPU support. Perhaps Nvidia is working on a scalable architecture, with different price points having different numbers of GPUs on the PCB and different memory bus configurations?
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
Originally posted by: munky
Originally posted by: happy medium

G90 specs (to be named 9800GTX)

* 55nm Architecture
* Native Support for Dual-core Alternative Cards (Dual-Core G90)
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 XDR2 @ over 2Ghz
* Quad SLI Support
* May have a dedicated PPU on Board
* Predicted 3DMark06 Score for a Single Core G90 (based on a 4800+ X2): 14000 - 16000

Release Date: Expected Mid 2007

Those numbers look like something made up by a 12 year old kid.

*edit: since we're speculating, how's this?

65nm process
192 scalar unified shaders
700+ mhz core speed
1.4+ ghz shader clocks
Predicted 3dmock06 score 15000+ based on an unknown cpu from the future
Taped out and waiting in the wings for Ati's r600 (as usual... lol)

lol, that sounds like an ATI R600:
* 65nm Architecture (Rumor)
* Native Support for Dual-core Alternative Cards
* Over 1 Ghz Core Speed
* 64 Unified Shader Units
* GDDR4 @ over 2Ghz
* Quad crossfire Support
* dedicated Video processor for Video encoding/decoding @ full 1080p on Board
* Predicted 3DMark06 12000 - 15000 (rumor)