From China with Love [G80, R600 and G81 info]


ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: Creig
Originally posted by: ronnn
Now that Nvidia PR has given us (from many sources) pictures and features, along with some fairly meaningless 3DMark benching. Nicely done launch with lots of viral seeding. If this card lives up to the hype, it should be a monster! If it doesn't, it should sell well anyway, as this has been one fine ad campaign - with several internet personalities used nicely. Not one has broken ranks and released any real game numbers. It's unfortunate that preview websites and forums are losing meaning. :beer:

ronnn, there are ALWAYS leaks when new cards are released. Especially when it's something as significant and as highly anticipated as the G80. I don't think Nvidia needed to "seed" anything.

When all the leaks are virtually the same, with slightly different pictures? Nice little mystery chips, and only 3DMark scores? Very nice official-looking charts with all the exciting features. This is marketing, and yes, all companies do the same. But give credit where it's due; it has been very nicely done. The flow of information has been tightly controlled, with not one leak showing several games with some IQ comparison - maybe because of a lack of fully functioning drivers out there. Personally I think this card will be a monster performer!
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Josh, your copy-and-pasted parallelism logic is interesting, but it does not imply that CPU and GPU operations directly interact with each other without the use of system memory.
Like I said, I don't know exactly whether or not they'll use system memory. I suppose they might incorporate it somewhat like the memory controller on the A64 processors, in which case system memory wouldn't matter but cache would be important.
As long as system memory is being used, the same FSB speed matters and there isn't an improvement in performance.
I may have misread this. Are you saying that increasing the FSB does not improve performance? Because as long as system memory is being used, an increased FSB does matter: it increases the frequency at which the CPU interacts with the memory, thereby improving data fetches and instruction throughput.
Memory is, in comparison to CPU registers, a bottleneck. That is why increased cache oftentimes improves performance.
Of course. Cache in a CPU is much faster than system memory. That's why I suspect DAMIT might incorporate the communication between the CPU/GPU in a similar way to how the memory controller works on an A64.
However, I do not see the improvement in performance that you have implied it will have.
I'd think there would be some performance improvements depending on the way they implement it, but I'm not certain about anything. If they incorporated it using somewhat the same method they used with the A64's memory controller, it would, IMO, give some nice performance increases.
Perhaps you are mistaken?
Perhaps. I'm not claiming to know exactly how ATi/AMD plan to make their CPU/GPU work, and at this point I don't think anyone does. It's in development; we've just recently started to understand the specs of the G80, and it will be launching very soon. DAMIT's CPU/GPU won't be here for an estimated two more years.
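
A rough back-of-the-envelope sketch of the FSB point above: while the CPU reaches system memory over the front-side bus, the theoretical peak transfer rate scales with the FSB clock. The bus width and clock figures below are assumptions for illustration, not taken from any particular platform.

def fsb_bandwidth_gb_s(fsb_clock_mhz, transfers_per_clock=4, bus_width_bits=64):
    # Theoretical peak FSB bandwidth in GB/s for an assumed
    # quad-pumped, 64-bit front-side bus.
    bytes_per_transfer = bus_width_bits / 8
    return fsb_clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer / 1e9

for clock_mhz in (200, 266, 333):  # illustrative base clocks
    print(f"{clock_mhz} MHz FSB -> {fsb_bandwidth_gb_s(clock_mhz):.1f} GB/s peak")
# 200 MHz -> 6.4 GB/s, 266 MHz -> 8.5 GB/s, 333 MHz -> 10.7 GB/s:
# raising the FSB clock raises the ceiling on CPU-to-memory traffic.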
 

Elfear

Diamond Member
May 30, 2004
7,083
599
126
Originally posted by: Fox5

Maybe nvidia will go with Sony and the Playstation Computer Platform will become dominant. I'd say alternative PC markets are much more likely than anyone challenging nvidia (and x86) in the PC market. Look for a build-up of cell phones and other mobile platforms, along with game consoles, to strike at the heart of the PC.

Yay. I can't wait for a Ms. Pac-Man and Centipede revival. :roll:
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: Creig
Crusader was already given a one-week vacation not long ago. I'd say this last comment should get him either a month or (dare we hope) a permanent ban.

edit:
Oh well, two weeks is better than nothing. Thank you, Mods!
Crusader = Owned! :laugh: :)
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
Originally posted by: josh6079
I may have misread this. Are you saying that increasing the FSB does not improve performance? Because as long as system memory is being used, an increased FSB does matter: it increases the frequency at which the CPU interacts with the memory, thereby improving data fetches and instruction throughput.
I'm saying that integrating the GPU onto the CPU won't help with memory speed as you stated earlier.

Of course. Cache in a CPU is much faster than system memory. That's why I suspect DAMIT might incorporate the communication between the CPU/GPU in a similar way to how the memory controller works on an A64.
Communication with the memory controller is very different from a CPU communicating with a GPU directly...
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
I'm saying that integrating the GPU onto the CPU won't help with memory speed as you stated earlier.
Ah, okay. Thanks for clarifying.
Communication with the memory controller is very different from a CPU communicating with a GPU directly...
Perhaps. We truly won't know for another ~2 years.
 

redbox

Golden Member
Nov 12, 2005
1,021
0
0
Originally posted by: beggerking
Originally posted by: josh6079
I may have misread this. Are you saying that increasing the FSB does not improve performance? Because as long as system memory is being used, an increased FSB does matter: it increases the frequency at which the CPU interacts with the memory, thereby improving data fetches and instruction throughput.
I'm saying that integrating the GPU onto the CPU won't help with memory speed as you stated earlier.

Of course. Cache in a CPU is much faster than system memory. That's why I suspect DAMIT might incorporate the communication between the CPU/GPU in a similar way to how the memory controller works on an A64.
Communication with the memory controller is very different from a CPU communicating with a GPU directly...

Did you see my post about ATI/AMD getting a patent for GPU/CPU on-die cache? If they can get the GPU to use on-die cache, then that will be much better than going to system RAM.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Did you see my post about ATI/AMD getting a patent for GPU/CPU on-die cache? If they can get the GPU to use on-die cache, then that will be much better than going to system RAM.
I hadn't until you mentioned it.

Thanks for the info! It looks very promising.

Someone should make a GPU/CPU thread, as this info isn't so much about the G80 or the R600. Not that the rest of the thread is entirely on topic anyway; it would still be better served by having its own title.
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
Originally posted by: redbox

Did you see my post about ATI/AMD getting a patent for GPU/CPU on-die cache? If they can get the GPU to use on-die cache, then that will be much better than going to system RAM.

Sharing cache can be cost-effective, but you would still need system RAM. Cache and memory are complementary in improving performance. I would suspect the GPU is already using some type of cache in front of video memory...? And that sharing die cache is to reduce cost, not increase performance.

The Torrenza article about HyperTransport is interesting, but it's about co-processors (a physics chip?), not GPUs.

The AMD HTX co-processor article is, again, about using specialized co-processors to improve specific tasks by up to 100x. It is about specialized processing power, which is very different from GPUs.
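
A minimal sketch of the "cache and memory are complementary" point above, using the standard average-memory-access-time formula; the hit rates and latencies are assumed, illustrative values, not figures for any real CPU or GPU.

def amat_ns(hit_time_ns, miss_rate, miss_penalty_ns):
    # Average memory access time with a single cache level:
    #   AMAT = hit_time + miss_rate * miss_penalty
    return hit_time_ns + miss_rate * miss_penalty_ns

CACHE_HIT_NS = 2.0        # assumed on-die cache access time
MEMORY_PENALTY_NS = 80.0  # assumed extra cost of going to system RAM

for hit_rate in (0.00, 0.80, 0.95, 0.99):
    avg = amat_ns(CACHE_HIT_NS, 1.0 - hit_rate, MEMORY_PENALTY_NS)
    print(f"cache hit rate {hit_rate:4.0%} -> {avg:5.1f} ns average access")
# The cache absorbs most accesses, but every miss still lands in system
# RAM, which is why sharing an on-die cache would not remove the need for it.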
 

zephyrprime

Diamond Member
Feb 18, 2001
7,510
2
81
You guys are naive if you think these leaks aren't intentional. If any normal person got a G80 and wanted to leak it, they would post very detailed pics, benchmark games, and 3DMark06. Performing this level of analysis is something that any one of us could do. Instead, the leaks we get are very limited in info. We get pics but no real closeups to determine the nature of the die. We get benches, but only of 3DMark and not of any games. These leaks are entirely controlled.
 

redbox

Golden Member
Nov 12, 2005
1,021
0
0
Originally posted by: beggerking

Cache and memory are complementary in improving performance. I would suspect the GPU is already using some type of cache in front of video memory...? And that sharing die cache is to reduce cost, not increase performance.

Aren't memory and cache the same thing? I think you are referring to system memory, but I just want to make sure. Also, I am not sure what you mean by saving cost. Do you mean money costs, or something like energy costs? I would think we would see a good increase in performance by using die cache, as it is much, much faster than system memory; of course, the question is how much cache they are going to let the GPU use.

The Torrenza article about HyperTransport is interesting, but it's about co-processors (a physics chip?), not GPUs.

The AMD HTX co-processor article is, again, about using specialized co-processors to improve specific tasks by up to 100x. It is about specialized processing power, which is very different from GPUs.

I put those articles up to show that AMD has some interest in giving processors of different types direct connections to the CPU. You say that they are aimed at co-processors that deal with specialized processing needs. I don't see how those differ from current GPUs. A GPU is pretty much a very specialized processor that works on special data. Granted, the article was talking about 10-100x in multimedia tasks, not necessarily gaming. I do agree with josh, though, that we should open a new thread; this conversation is getting interesting.
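
For what it's worth, here is a back-of-the-envelope comparison of the kind of direct HyperTransport link those articles describe against a first-generation PCIe x16 slot. The link parameters (a 16-bit HT link at an assumed 1.0 GHz, PCIe 1.x lane rates) are illustrative assumptions.

def ht_link_gb_s(clock_ghz=1.0, width_bits=16):
    # HyperTransport transfers data on both clock edges (DDR),
    # so per-direction bandwidth = clock * 2 * link width in bytes.
    return clock_ghz * 2 * (width_bits / 8)

def pcie1_x16_gb_s(lanes=16):
    # PCIe 1.x: 2.5 GT/s per lane with 8b/10b encoding,
    # roughly 0.25 GB/s per lane per direction.
    return lanes * 0.25

print(f"HT 16-bit @ 1.0 GHz : {ht_link_gb_s():.1f} GB/s per direction")
print(f"PCIe 1.x x16        : {pcie1_x16_gb_s():.1f} GB/s per direction")
# Raw bandwidth is in the same ballpark; the argument for HT-attached
# co-processors was mainly lower latency and closer coupling to the CPU.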
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: zephyrprime
You guys are naive if you think these leaks aren't intentional. If any normal person got a G80 and wanted to leak it, they would post very detailed pics, benchmark games, and 3DMark06. Performing this level of analysis is something that any one of us could do. Instead, the leaks we get are very limited in info. We get pics but no real closeups to determine the nature of the die. We get benches, but only of 3DMark and not of any games. These leaks are entirely controlled.

Viral marketing works! Pretty soon they will have people convinced to be good consumers and buy one, even before any games exist that can use that kind of power. This marketing is all about convincing Nvidia owners they need to upgrade. Hell, they even throw in a virtual girlfriend. So expect some articles on what was wrong with the old NV IQ (and girls) the day after launch day. :beer:
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
The AMD HTX co-processor article is, again, about using specialized co-processors to improve specific tasks by up to 100x.
What do you think the GPU is? It's a specialized processing unit intended for the specific task of graphics rendering, hence the term "GPU" - Graphics Processing Unit.
It is about specialized processing power, which is very different from GPUs.
No, it's not. A GPU is pretty much the definition of "specialized processing power." That is exactly what GPUs are.
One specific area that was addressed was the discussion of a throwback to "co-processors". Early x86 processors lacked muscle when it came to floating point math, which is critical for things like visualization, and more important to us at Hard|OCP: games.
With AMD opening up their HT architecture, quick communication between their processors and a HTX card or socket could allow for media encoder co-processors that can accelerate media encoding by up to 10-100 times, and interestingly, physics acceleration was mentioned. Does this mean that Ageia PhysX technology could get a boost in being able to plug into this architecture? Or perhaps there's a dark horse option waiting in the wings that we could see in the future? Certainly, at the very least, specialized and high performance FPUs could be plugged in to reach an even higher level of immersion in games or special effects used in movies.
It looks promising for improving many things, simply because it looks to increase the efficiency with which the computer's anatomy functions. Even if it were specified for just physics, as speculated above, that would still be an improvement for gaming as well, especially with more and more games emerging with greater demands on physics processing. However, I imagine AMD plans on incorporating several elements that will improve several sections of the computer for enthusiasts. It's going to do more than simply cut manufacturing costs.
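
One way to ground the "10-100x on specific tasks" figures from that article: the overall gain depends on how much of the workload the co-processor actually handles (Amdahl's law). The fractions and speedups below are illustrative assumptions, not benchmarks.

def overall_speedup(offloaded_fraction, coprocessor_speedup):
    # Amdahl's law: the part left on the CPU runs at 1x, while the
    # offloaded part runs coprocessor_speedup times faster.
    remaining = 1.0 - offloaded_fraction
    return 1.0 / (remaining + offloaded_fraction / coprocessor_speedup)

for fraction in (0.2, 0.5, 0.8):
    for speedup in (10, 100):
        print(f"offload {fraction:.0%} of the work at {speedup}x -> "
              f"{overall_speedup(fraction, speedup):.2f}x overall")
# Even a 100x co-processor only helps in proportion to the share of
# work it can take over, e.g. physics in a game engine.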
 

zephyrprime

Diamond Member
Feb 18, 2001
7,510
2
81
Originally posted by: beggerking
Sharing cache can be cost-effective, but you would still need system RAM. Cache and memory are complementary in improving performance. I would suspect the GPU is already using some type of cache in front of video memory...? And that sharing die cache is to reduce cost, not increase performance.
GPUs definitely use caches. They have texture caches and vertex caches and who knows what else. These are well-known facts. You can learn more by going to the developer sections of the Nvidia and ATI sites.
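
To illustrate why those caches pay off, here is a toy direct-mapped cache model run over a simple row-by-row texture walk. It is a simplified sketch under assumed parameters (64 lines of 32 bytes, 4-byte texels), not how any real GPU cache is organized.

class DirectMappedCache:
    def __init__(self, num_lines, line_size):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines
        self.hits = 0
        self.misses = 0

    def access(self, address):
        # Map the address to a cache line and check the stored tag.
        line_addr = address // self.line_size
        index = line_addr % self.num_lines
        if self.tags[index] == line_addr:
            self.hits += 1
        else:
            self.tags[index] = line_addr
            self.misses += 1

cache = DirectMappedCache(num_lines=64, line_size=32)

WIDTH = 64  # walk a 64x64 texture row by row, 4 bytes per texel
for y in range(WIDTH):
    for x in range(WIDTH):
        cache.access((y * WIDTH + x) * 4)

total = cache.hits + cache.misses
print(f"hits: {cache.hits}, misses: {cache.misses}, "
      f"hit rate: {cache.hits / total:.0%}")
# Neighboring texels share cache lines, so about 7 of every 8 fetches
# hit in this toy model; real texture/vertex caches exploit the same locality.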
 

Avalon

Diamond Member
Jul 16, 2001
7,564
142
106
Originally posted by: Ibiza
Since when has asking a genuine question been arguing?

Can you answer?

Is compulsively trolling on internet forums one of the symptoms of your illness? I'm genuinely interested.

I've got a genuine question for you.

Can you be any more annoying?

Can you answer?
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
Originally posted by: josh6079
The AMD HTX co-processor article is, again, about using specialized co-processors to improve specific tasks by up to 100x.

What do you think the GPU is? It's a specialized processing unit intended for the specific task of graphics rendering, hence the term "GPU" - Graphics Processing Unit.

A co-processor can be vaguely defined as a processor that processes a single set of instructions without the need for an additional component. Examples: a math/floating-point co-processor, which is now integrated into all modern CPUs, or perhaps a physics processor?..

A GPU is a processor, not a co-processor.

Google is your friend.

edit: wiki
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
Originally posted by: josh6079

It looks promising for improving many things, simply because it looks to increase the efficiency with which the computer's anatomy functions. Even if it were specified for just physics, as speculated above, that would still be an improvement for gaming as well, especially with more and more games emerging with greater demands on physics processing. However, I imagine AMD plans on incorporating several elements that will improve several sections of the computer for enthusiasts. It's going to do more than simply cut manufacturing costs.

It's possible, but mostly wishful thinking. The only time it happened was back when the i486 was introduced; its integrated math coprocessor was actually faster than a separate coprocessor.

Don't get me wrong... I've been an AMD fan ever since the AMD386 era; I'm just trying to be realistic. It's not impossible, but it is very unlikely.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
A co-processor can be vaguely defined as a processor that processes a single set of instructions without the need for an additional component.
If you go further into it, it can more specifically be defined as a different processing unit as a whole. The link you provided states this. (Thanks for the link, by the way.) An additional component can be a co-processor.
A GPU is a processor, not a co-processor.
That's not what your link said:
The demand for a dedicated graphics co-processor has grown, however, particularly due to an increasing demand for realistic 3D graphics in computer games; this dedicated processor removes a considerable computational load from the primary CPU, and increases performance in graphic-intensive applications. As of 2002, graphics cards with dedicated Graphics Processing Units (GPUs) are commonplace.
The term "co-processor" implies that more than one type of processing unit exists in one system. In this sense, even a dedicated GPU can be seen as a "co-processor," being a different type of processing unit than the central processing unit. They even continue the classification further by discussing other co-processors such as the X-Fi and Ageia.

EDIT:

It's possible, but mostly wishful thinking.
It's not wishful thinking, but logical thinking. AMD wouldn't have taken a 5.4-billion-dollar hit to merge with ATi and tout Fusion just to have some artful integrated graphics solution for their platforms.
Don't get me wrong... I've been an AMD fan ever since the AMD386 era; I'm just trying to be realistic.
As am I. In truth, I don't know exactly what will come of such technology, but I don't see Intel's biggest competitor being interested only in cutting production costs without increasing performance as well.
It's not impossible, but it is very unlikely.
I will say, they have their work cut out for them. No doubt about that.
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
Originally posted by: josh6079

A GPU is a processor, not a co-processor.
That's not what your link said:
The demand for a dedicated graphics co-processor has grown, however, particularly due to an increasing demand for realistic 3D graphics in computer games; this dedicated processor removes a considerable computational load from the primary CPU, and increases performance in graphic-intensive applications. As of 2002, graphics cards with dedicated Graphics Processing Units (GPUs) are commonplace.
The term, "co-processor" implies that there are more than one type of processing units existing in one system. In this sense, even a dedicated GPU can be seen as a "co-processor" being a different type of processing unit than that of the core processing unit. They even continue the classification further into discussing other co-processors such as the X-Fi and Ageia.

Incorrect, according to the definition of a coprocessor vs. the definition of a GPU:
A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU).

GPU
A Graphics Processing Unit or GPU (also occasionally called Visual Processing Unit or VPU) is a dedicated graphics rendering device for a personal computer, workstation, or game console...

A GPU is a dedicated processing unit for graphics, not a supplement to the functions of the CPU. Even if it is on the same die as the CPU, it is still by itself a "dedicated processing unit," not a coprocessor.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
The fact that the GPU is a dedicated processing unit doesn't mean that it can't be a co-processor. It still "supplements the functions of the primary processor (the CPU)".
A GPU is a dedicated processing unit for graphics, not a supplement to the functions of the CPU. Even if it is on the same die as the CPU, it is still by itself a "dedicated processing unit," not a coprocessor.
If you think so you should correct Wikipedia then. I'm just quoting the information you gave me. Click
The demand for a dedicated graphics co-processor has grown...due to an increasing demand for realistic 3D graphics...this dedicated processor removes a considerable computational load from the primary CPU, and increases performance in graphic-intensive applications.
Wiki's your enemy.
 

schneiderguy

Lifer
Jun 26, 2006
10,763
32
91
Originally posted by: ronnn
Originally posted by: zephyrprime
You guys are naive if you think these leaks aren't intentional. If any normal person got a G80 and wanted to leak it, they would post very detailed pics, benchmark games, and 3DMark06. Performing this level of analysis is something that any one of us could do. Instead, the leaks we get are very limited in info. We get pics but no real closeups to determine the nature of the die. We get benches, but only of 3DMark and not of any games. These leaks are entirely controlled.

Viral marketing works! Pretty soon they will have people convinced to be good consumers and buy one, even before any games exist that can use that kind of power. This marketing is all about convincing Nvidia owners they need to upgrade. Hell, they even throw in a virtual girlfriend. So expect some articles on what was wrong with the old NV IQ (and girls) the day after launch day. :beer:

What viral marketing?

Leaks aren't viral marketing :confused:

 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
Originally posted by: josh6079
The fact that the GPU is a dedicated processing unit doesn't mean that it can't be a co-processor. It still "supplements the functions of the primary processor (the CPU)".

Well, the GPU uses different portions of memory than the CPU; they operate independently.

I'm done talking to you. Obviously you are only here to argue, not to learn. Go ahead, keep insisting that CPU/GPU integration will improve performance and keep mixing up coprocessors vs. GPUs. It's your loss.
If you think so you should correct Wikipedia then. I'm just quoting the information you gave me. Click
The demand for a dedicated graphics co-processor has grown...due to an increasing demand for realistic 3D graphics...this dedicated processor removes a considerable computational load from the primary CPU, and increases performance in graphic-intensive applications.
Wiki's your enemy.

Bolded. It's also saying that it is a processor, so which one is it? Who do you believe? Obviously you don't believe me, so I'm done talking to you. No point wasting my time trying to "prove" it to you.