- Jul 30, 2002
- 324
- 0
- 0
I'd like to keep things on topic here. So if you don't fully understand what I am asking, do not answer.
Now it is one thing to look at a processor's specs (its clock speed, its cache sizes, etc.) and then look at some SYSmark tests and others to see which is faster. With the whole AMD vs. Intel thing, clock speed means nothing. (I'm not going any further into this, because this isn't my point; if it were, I wouldn't be posting in the video forum.)
Now there are obviously some very distinct differences between your processor and your video card. First, your processor controls everything and, to put it simply, tells the other components what to do. It controls the bus speeds and determines how fast applications will run. The CPU works with the motherboard, the RAM (DDR, SDR, or Rambus), and the hard drive. The video card, on the other hand, handles the graphics. The CPU gives the instructions; the graphics card renders the scenes and the extras (lighting, filtering, anti-aliasing, shading, textures, etc.). It runs on its own board, with its own independent RAM and its own clock. Two different components with two completely different jobs. The CPU is general-purpose hardware, whereas the video card is dedicated to rendering scenes (and the sound card does sound, the ethernet card does internet, etc.). You may be wondering where I am going with this, but it'll hopefully make sense.
The video card, unlike the processor, has unique capabilities and features that control the graphical side of the whole computer. It has occlusion culling, pixel/vertex shaders, lighting, pixel pipelines, bandwidth, fillrate, etc. But bandwidth and clock speed aren't everything; architecture plays a big role in performance. A good example is the Kyro II. It has a clock speed similar to the MX, but performs near the GTS/Pro level. A big reason is that it uses a form of deferred, tile-based rendering (TBR) rather than the immediate-mode rendering (IMR) the GeForce cards utilize. Then we get into the comparison between the 9000 and 8500. Looking at specs alone, the 9000 should be better. But the 9000 is basically an 8500MX: it's an 8500 with a stripped feature set.
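To illustrate why a tile-based deferred renderer can keep up with a higher-clocked immediate-mode card, here's a rough sketch with hypothetical numbers (the overdraw factor and resolution are assumptions, not any card's measured figures). An IMR may shade hidden fragments that later get overwritten; a TBR resolves visibility within each tile first and shades each visible pixel once.

```python
# Hypothetical comparison: shading work per second for an immediate-mode
# renderer (IMR) vs. a tile-based deferred renderer (TBR) on the same scene.

pixels_per_frame = 1024 * 768   # screen resolution (assumed)
overdraw = 3.0                  # average depth complexity (assumed)
fps = 60

# An IMR shades roughly overdraw x the screen; a TBR shades each pixel once.
imr_shaded = pixels_per_frame * overdraw * fps
tbr_shaded = pixels_per_frame * 1.0 * fps

print(f"IMR fragments/s: {imr_shaded / 1e6:.0f} M")
print(f"TBR fragments/s: {tbr_shaded / 1e6:.0f} M")
```

Under these assumptions the TBR needs roughly 1/overdraw the shading throughput for the same frame, which is one way a Kyro II-class chip can hang with faster-clocked IMR cards.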
You can base CPU performance on clock speed and clock-cycle effectiveness (IPC), but with video cards it is much more complicated. So here is where I ask (and quit talking). Obviously, the feature set has a lot to do with things, but what do all these numbers and features really do? How do these pixel pipelines and texture units actually work? How many passes are made, and how quickly do they act? How many textures are in a game, and how does that relate to how many texture units the video card has? And where does the RAM come in? Does it store information (textures, etc.) for whatever application is running? And what about fillrate and memory bandwidth? Is memory bandwidth how fast information travels? Is fillrate just Mpixels? What do these numbers refer to, and how do they play a part in games?
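For what it's worth, the two headline numbers are usually back-of-the-envelope products of the specs. Here's a sketch with illustrative figures (the clocks, pipeline count, and bus width below are assumptions, not any specific card's datasheet):

```python
# Rule-of-thumb spec math (illustrative numbers, not a real card):
#   fillrate   = core clock x pixel pipelines        -> pixels written per second
#   texel rate = fillrate x texture units per pipe   -> texture samples per second
#   bandwidth  = memory clock x bus width x pumps    -> bytes moved per second

core_clock_hz   = 200e6      # 200 MHz core (assumed)
pixel_pipelines = 4
tmus_per_pipe   = 2          # texture units per pipeline (assumed)

mem_clock_hz    = 166e6      # 166 MHz memory (assumed)
bus_width_bytes = 128 // 8   # 128-bit bus
pumps           = 2          # DDR moves data twice per clock

fillrate   = core_clock_hz * pixel_pipelines         # 800 Mpixels/s
texel_rate = fillrate * tmus_per_pipe                # 1600 Mtexels/s
bandwidth  = mem_clock_hz * bus_width_bytes * pumps  # ~5.3 GB/s

print(f"fillrate:  {fillrate / 1e6:.0f} Mpixels/s")
print(f"texels:    {texel_rate / 1e6:.0f} Mtexels/s")
print(f"bandwidth: {bandwidth / 1e9:.2f} GB/s")
```

So yes, fillrate is in Mpixels (or Mtexels) per second, and memory bandwidth is how many bytes per second can move between the GPU and its RAM; which one actually limits a game depends on resolution, color depth, and how many texture layers each pixel needs.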
Now for technology. Why do no other cards utilize the tile-based rendering method that the Kyro II uses? Why does the Radeon use its hierarchical-Z buffer? And what about vertex/pixel shaders? Obviously, they shade, but how fast, how much, and how do they calculate what they are supposed to shade? And the T in T&L: why is there a whole engine behind this, and why do they group the T and the L together? How does manipulating objects in 3D space (T) relate to lighting (L)? And, like shading, how does the graphics card determine what it has to light up in the scene?
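On why T and L get grouped: lighting needs the vertex and its normal in a common coordinate space, so the transform stage feeds the lighting stage directly. A minimal sketch in plain Python (no GPU API assumed; the matrix and light direction are made-up values):

```python
# Sketch of the T&L idea: transform a vertex/normal, then light it.
# Lighting depends on the transformed normal, which is why the two
# stages are pipelined together in hardware.

def transform(v, m):
    """Apply a 3x3 rotation/scale matrix to a vector (the 'T')."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def diffuse(normal, light_dir):
    """Lambertian diffuse term, clamp(N . L, 0) -- the 'L'."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(dot, 0.0)

# Rotate a surface normal 90 degrees about the Z axis, then light it.
rot_z = [[0, -1, 0],
         [1,  0, 0],
         [0,  0, 1]]
normal = transform((1, 0, 0), rot_z)   # rotated normal: (0, 1, 0)
light  = (0, 1, 0)                     # unit vector toward the light (assumed)

print(diffuse(normal, light))          # fully lit: 1.0
```

If the lighting ran before the transform, the normal and the light direction would be in different spaces and the dot product would be meaningless, which is the practical answer to "how does it know what to light": it computes N·L per vertex after transformation.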
And another thing: why does no one use RGSS (like the Voodoo 4 and 5)? I read the FAQ, and it states, "2X MSAA utilizes a rotated grid like RGSS which results in edge anti aliasing comparable to that of RGSS". Now, that is somewhat like RGSS, but it isn't the real deal. Will any cards use RGSS fully, or will it be forgotten along with the Voodoo cards?
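The rotated-grid part matters more than it sounds. On near-vertical or near-horizontal edges, quality depends on how many distinct x (or y) positions the samples cover within a pixel. A sketch with illustrative sample offsets (these are not any card's actual pattern):

```python
# Illustrative 4-sample patterns within one pixel (unit square offsets):
# an ordered 2x2 grid vs. a rotated grid.
ordered = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]

def distinct_columns(samples):
    """Unique x positions -- gradient steps on a near-vertical edge."""
    return len({x for x, _ in samples})

print(distinct_columns(ordered))   # 2 shades of coverage
print(distinct_columns(rotated))   # 4 shades of coverage
```

With the same four samples, the rotated grid yields twice as many intermediate coverage levels on axis-aligned edges, which is why rotated-grid MSAA can look comparable to RGSS on edges even though, unlike true supersampling, it doesn't supersample texture detail inside polygons.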
Well, that seems to be it. If you can't give me a straight, in-depth answer to this, please... don't. Links are good, but just keep in mind, I don't feel like reading 50 pages of stuff (who would?).