
What to consider when buying a video card?

timswim78

Diamond Member
From what I understand, you should consider the GPU speed, the RAM speed, and the amount of RAM. Is there anything else to consider?

For example, will a 325/800/256MB card perform similarly whether it is a GeForce, a Radeon, or something else?

The reason I am asking is that I feel as though my current video card is the bottleneck in my system when I try to play games.

Also, do I need a motherboard that supports 8X AGP to get good gaming performance?

System:
- XP 2800+
- ASUS A7N8X Deluxe
- Geforce Ti4200 64MB (Stock speeds are 250/500)
- 2 x 512MB Dual Channel PC2700
 
Video cards usually go by model; decide on the model first, since cards of the same model are almost always clocked the same. 8X AGP will show you no performance gain over 4X.

No, not all 325/800/256MB cards will perform the same; again, your first consideration should be the brand/model.
 
Memory bandwidth is very important as well.

For example, a 9800 Pro with 128MB of memory on a 256-bit bus will be much better than a 9800 with 256MB of memory on a 128-bit bus.
 
Originally posted by: Tiamat
Memory bandwidth is very important as well.

For example a 9800Pro with 128MB of memory running at 256-bits will be much better than an 9800 with 256MB of memory running at 128-bits.

Well, a little more generally, what matters is (more or less in this order, from most to least important):

Fillrate
Memory bandwidth
Pixel/Vertex shader performance
Amount of memory

While it's possible to come up with synthetic tests that make shader performance or the amount of available memory the most important factor, this is generally not the case in 'real-life' situations.

Fillrate (in MPixels/sec.) = (number of pipelines) * (core clock speed in MHz). For cards with multiple texturing units per pipeline (like the GeForce 4 Ti and GeForce FX lines, which both have 4 pipelines and 2 texture units per pipe), fillrate while multitexturing (in MTexels/sec.) = (number of pipelines) * (texture units per pipe) * (clock speed in MHz). The higher this number, the more "stuff" the graphics card can draw on the screen in a given amount of time. Cards with high fillrate can run at higher resolutions, or display more objects on the screen, without slowing down.
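To make the arithmetic concrete, here is a quick sketch using the pipeline and TMU counts mentioned above for the GeForce 4 Ti line (4 pipelines, 2 texture units per pipe) at the OP's stock 250MHz core clock:

```python
# Fillrate sketch for a 4-pipeline, 2-TMU card at 250MHz core
# (stock GeForce 4 Ti 4200 figures, per the formulas above).
pipelines = 4
texture_units_per_pipe = 2
core_clock_mhz = 250

pixel_fillrate = pipelines * core_clock_mhz                           # MPixels/sec
texel_fillrate = pipelines * texture_units_per_pipe * core_clock_mhz  # MTexels/sec

print(pixel_fillrate)  # 1000 MPixels/sec
print(texel_fillrate)  # 2000 MTexels/sec
```

A card with twice the pipelines at the same clock would double both numbers, which is why pipeline count usually matters more than a modest clock bump.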

Memory bandwidth (in MB/sec.) = (memory clock speed in MHz) (* 2 if DDR) * (interface width in bits / 8). Memory bandwidth is used to access texture and geometry data on the card's internal memory (although most of it is used for texture operations nowadays, as geometry data is usually relatively small). Higher bandwidth allows you to use higher-detail textures, or to display more textured objects on the screen, without slowing down. Antialiasing (AA) and Anisotropic Filtering (AF) also increase bandwidth requirements considerably, so cards with low bandwidth tend to perform badly with those features turned on (relative to other cards with the same architecture).
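Plugging the OP's Ti 4200 into the bandwidth formula (500MHz effective DDR, i.e. 250MHz actual, on the Ti 4200's 128-bit bus) gives:

```python
# Memory bandwidth sketch using the OP's Ti 4200 stock speeds:
# 250MHz actual clock, DDR (two transfers per clock), 128-bit bus.
mem_clock_mhz = 250
ddr_multiplier = 2
bus_width_bits = 128

bandwidth_mb_per_sec = mem_clock_mhz * ddr_multiplier * (bus_width_bits // 8)
print(bandwidth_mb_per_sec)  # 8000 MB/sec, i.e. about 8 GB/sec
```

Compare that to a 9800 Pro-class card with a 256-bit bus at a similar clock, which lands around twice the bandwidth, and it's clear why AA/AF hurt so much more on narrower-bus cards.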

Every card currently on the market has one pixel shader per pipeline, so pixel shader performance (in shader ops/sec.) is equal to the pixel fillrate: (number of pipelines) * (core clock speed). Vertex shader performance (in vertex shader ops/sec.) = (number of vertex shader units) * (core clock speed). Some shader operations (such as bump/displacement mapping) are also dependent on memory bandwidth, since they have to access the card's onboard memory as well. Note: shader performance cannot be directly compared between cards with different architectures. The GeForce FX cards, for instance, have *terrible* SM2.0 shader performance, despite their high clockspeeds.
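A sketch of the shader-throughput arithmetic, using illustrative figures (8 pipelines, 4 vertex units, 380MHz, loosely resembling a Radeon 9800 Pro-class part; the exact counts are assumptions for the example, not a spec sheet):

```python
# Shader throughput sketch with illustrative figures
# (roughly 9800 Pro-class: 8 pipelines, 4 vertex units, 380MHz core).
pipelines = 8
vertex_shader_units = 4
core_clock_mhz = 380

# One pixel shader per pipeline, so this tracks the pixel fillrate.
pixel_shader_mops = pipelines * core_clock_mhz            # Mops/sec
vertex_shader_mops = vertex_shader_units * core_clock_mhz # Mops/sec

print(pixel_shader_mops)   # 3040 Mops/sec
print(vertex_shader_mops)  # 1520 Mops/sec
```

Keep in mind the caveat above: these raw ops/sec numbers only compare meaningfully within the same architecture.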

Generally, adding more than 128MB of memory to a graphics card doesn't do much (if anything) in terms of performance. The very fastest cards available today (6800GT, 6800Ultra, X800Pro, X800XT) are, IMO, the only ones that see enough of a boost to possibly justify spending extra on a 256MB version (and, in fact, these cards only come in 256MB versions for this very reason). Low-end and midrange cards (like FX5200s and RADEON 9600s) with 256MB of RAM are just a marketing gimmick; they don't run any faster than 128MB versions -- and sometimes they have lower memory clockspeeds, making them slower than the 128MB cards.
 
Originally posted by: Matthias99
*snip*


It took a while to read through this, but it was very useful. I think that I'll feel like a more informed consumer when I buy my card.
 