Damn good G80 article


Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Dual core makes no sense for GPUs. We already organise things into quads, with each quad quite capable of acting as a standalone GPU (how do you think we get things like the 7300?).

Even on CPUs I am far from convinced dual core is necessary. I would have been MUCH happier to see the SIMD (SSE/3DNow!) units doubled or quadrupled in number, alongside the extra x64 registers being made available in 32-bit mode.
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
Originally posted by: munky
The 384-bit memory bus and the 12 memory chip configuration sound somewhat plausible; I've seen them mentioned before. It's also been said before that it's a huge chip, so don't expect stratospheric clock speeds. But 96 pipes, half DX9, half DX10 is just pure speculation, and IMO would be the worst design and waste of resources since NV30, maybe even worse. Not to mention, I don't expect more than 32 PS in total, never mind 96.

lol :) I think G80 will be like NV30... ATI in theory should totally destroy Nvidia at DX10, like they did to Nvidia with the 9700 Pro in the transition from DX8 to DX9.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
I don't buy a lot of this story.

First, the claim that the G80 will be "96 pipes, half DX9, half DX10". That would have to be the most craptacular design the world has ever seen. The only way you could get DX10 support would be to bolt an additional core onto the die, so you'd have one DX9 core and one DX10 core? Nuts. That'd be the most expensive, inefficient way of doing it ever. Besides, the only big real hardware change that DX10 requires is the addition of geometry shaders. Most of the other changes in DX10 are just meant to cut overhead and make DX better suited as a replacement for GDI, and those changes are primarily software issues rather than hardware issues. In light of these facts, I think it'd be relatively easy for a DX10 card to support DX9 without requiring a "96 pipes, half DX9, half DX10" configuration.

Now regarding the 400mm^2 die: 400mm^2 is unlikely in my opinion for economic reasons. The current 7900 GTX is a 196mm^2 die, I believe, and the R580 is 352mm^2. 400mm^2 isn't impossible, but I think it'd be a stretch.
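As a rough illustration of why die area matters economically, here is a back-of-the-envelope dies-per-wafer sketch in C. It assumes a 300mm wafer and the usual first-order approximation, ignoring yield, scribe lines and defects; the die areas are simply the figures quoted above:

#include <math.h>
#include <stdio.h>

/* Rough dies-per-wafer estimate: wafer area / die area, minus an
 * edge-loss term.  Ignores yield, scribe lines and defect density. */
static double dies_per_wafer(double die_area_mm2)
{
    const double pi = 3.14159265358979;
    const double d  = 300.0;                        /* wafer diameter, mm */
    return (pi * d * d / 4.0) / die_area_mm2
         - (pi * d) / sqrt(2.0 * die_area_mm2);
}

int main(void)
{
    printf("7900 GTX, 196 mm^2: ~%.0f candidate dies\n", dies_per_wafer(196.0));
    printf("R580,     352 mm^2: ~%.0f candidate dies\n", dies_per_wafer(352.0));
    printf("rumoured  400 mm^2: ~%.0f candidate dies\n", dies_per_wafer(400.0));
    return 0;
}

Even before yield enters the picture, a 400mm^2 die gets you roughly half as many candidate chips per wafer as a 196mm^2 one.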

 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Originally posted by: Gstanfor
Dual core makes no sense for GPUs. We already organise things into quads, with each quad quite capable of acting as a standalone GPU (how do you think we get things like the 7300?).

Even on CPUs I am far from convinced dual core is necessary. I would have been MUCH happier to see the SIMD (SSE/3DNow!) units doubled or quadrupled in number, alongside the extra x64 registers being made available in 32-bit mode.
You're right about the quads. A modern GPU is already a multi-core processor and has been for a long time. The presence of crossbar memory controllers, multiple memory buses, and GPUs like the 7300 is proof of that. So the G71 is already a six-core processor. However, a dual-chip card can provide some benefits, and of course we already have such a thing in the form of the 7950.

The extra registers would be impossible to use in 32-bit mode. The IA-32 instruction set simply doesn't have enough bits reserved for register names to allow for more registers.
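For anyone curious where the bits run out, here is a minimal C sketch decoding a ModRM byte by hand. The reg and r/m fields are each only 3 bits wide, so IA-32 can name at most 8 general-purpose registers; x86-64 reaches 16 only by spending an extra bit carried in the new REX prefix:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* "add eax, ecx" encodes as 01 C8: opcode 0x01 followed by this
     * ModRM byte.  The layout is mod(2 bits) | reg(3 bits) | r/m(3 bits). */
    uint8_t modrm = 0xC8;

    uint8_t mod = (modrm >> 6) & 0x3;   /* addressing mode (11 = register)   */
    uint8_t reg = (modrm >> 3) & 0x7;   /* source register, 3 bits (ECX = 1) */
    uint8_t rm  =  modrm       & 0x7;   /* destination,     3 bits (EAX = 0) */

    printf("mod=%u reg=%u rm=%u -> only %d registers can be named\n",
           mod, reg, rm, 1 << 3);
    return 0;
}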
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
i read it yesterday and was gonna ignore it . . .

but couldn't resist.

it is BS speculation that is Piled Higher and Deeper to give the impression that there is some 'worth'.

there is none. i could write a better article.
:disgust:

theInq is desperate for more 'traffic'. . . . stupid, really
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
The extra registers would be impossible to use in 32-bit mode. The IA-32 instruction set simply doesn't have enough bits reserved for register names to allow for more registers.
So... add some more register name bits - it isn't rocket science.

x86's two great weaknesses are register pressure and FP performance. I think it's easier and cheaper to tackle the problem the way I suggested than to slap entire extra cores on there.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Originally posted by: Gstanfor
The extra registers would be impossible to use in 32-bit mode. The IA-32 instruction set simply doesn't have enough bits reserved for register names to allow for more registers.
So... add some more register name bits - it isn't rocket science.

x86's two great weaknesses are register pressure and FP performance. I think it's easier and cheaper to tackle the problem the way I suggested than to slap entire extra cores on there.
But if you add more bits, all applications would have to be recompiled, and you might as well just implement 64-bit mode if you need to do that. Oh wait, AMD already did that!

Adding more SSE units is a lot cheaper than adding an entire core, but it's tough to get additional IPC out of them. With C2D, Intel has finally implemented real 128-bit-wide SSE, which is sorta like adding more SSE units. AMD is doing the same thing with K8L.
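To make the 128-bit point concrete, here is a small C sketch with SSE intrinsics (nothing in it is specific to C2D or K8L): one packed instruction works on four floats at a time, and whether the hardware executes that in a single pass or as two 64-bit halves is exactly what "real 128-bit SSE" changes.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    /* One packed-single add operates on four 32-bit floats at once.
     * Older cores split the 128-bit op into two 64-bit halves; a
     * full-width SSE unit retires it in one pass. */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
    __m128 c = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, c);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}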
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
I really don't see the issue myself. PCs survived the 8086 >>> 80286 and 80286 >>> 80386 transitions quite well.

64-bit has been around for a while now and has failed to take off in any significant way. I think most users, for the next couple of years at least, would find a recompiled executable that works with their current OS far more convenient than changing OS, even if that OS change means upgrading to Vista.

128-bit SSE is good and useful, but multiple SSE units are also good. Think physics plus the usual 3D setup calculations happening at once, just for starters.
 

amheck

Golden Member
Oct 14, 2000
1,712
0
76
I think it's pretty bad when the author himself tells you to take it with a grain of salt.
 

nrb

Member
Feb 22, 2006
75
0
0
Originally posted by: Gstanfor
Dual core makes no sense for GPUs. We already organise things into quads, with each quad quite capable of acting as a standalone GPU (how do you think we get things like the 7300?).
Think about the transition from systems where you have two single-core CPUs in separate sockets to systems where you have a single socket holding a dual-core CPU. Now imagine the same process applied to GPUs. Initially we have two GPUs on two entirely separate circuit boards (SLI). Then we move to a "two chips in two sockets on the same board" model (the 7950GX2 - single-card SLI). Why not follow the same path as CPUs did, with two GPUs working in SLI mode inside a single chip package?

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Is everyone posting tongue in cheek, or are people really out there today?

The article is IQ-reducing vomit at best.

If we are to believe any of it is truly viable, then we all must realize that the R600 will not run any game on the market. You know, because non-unified shaders have to be kept super secret special from the unified shaders' magical sensationalism and their fruity flavor :p WTF kind of moron is this guy? On an engineering basis, a mathematical basis, a comp sci basis - how the fvck can he think a 'half DX9, half DX10' GPU could be done EVEN IF EVERYONE WANTED ONE???

There isn't some odd chemical that differentiates DX9 from DX10 - it is still binary - and any unit that will run DX10 is -REQUIRED- to run DX9 code. This isn't any sort of secret operative talk or anything else - it is very clearly stated by MS, not to mention simple logic.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: nrb
Originally posted by: Gstanfor
Dual core makes no sense for GPUs. We already organise things into quads, with each quad quite capable of acting as a standalone GPU (how do you think we get things like the 7300?).
Think about the transition from systems where you have two single-core CPUs in separate sockets to systems where you have a single socket holding a dual-core CPU. Now imagine the same process applied to GPUs. Initially we have two GPUs on two entirely separate circuit boards (SLI). Then we move to a "two chips in two sockets on the same board" model (the 7950GX2 - single-card SLI). Why not follow the same path as CPUs did, with two GPUs working in SLI mode inside a single chip package?

Because for the most part it wouldn't achieve anything extra quads couldn't. SLI is useful because it can, in effect, create more bandwidth. You are fundamentally limited on a single die, though, because you are limited in the number of pins you can use and the number of PCB traces it's feasible to employ, which will cap any gains you might expect to make.