Does higher performance at lower resolutions "truly" scale upwards with motherboards?

joe4324

Senior member
Jun 25, 2001
What I'm asking is: if motherboard A performs 10% better than motherboard B in Q3 at 640x480, will it perform 10% better than motherboard B at 1600x1200?

If not, then what is the use of focusing motherboard benchmarks on FPS at resolutions no one plays at? For all practical purposes I can't say I really care how a chipset performs at doing things I don't do. I can understand that it's "better" than not, but how much do the low-res benchmarks really apply to the bottom line?

For example,

SiS 735 vs. KT266A

We all know from Anand's benches that the VIA chip performs somewhat faster, but if that gap narrows (or widens) as you raise the res higher and higher, it could drastically change things. I mean, if it were proven that the SiS 735 could come within 1-2% of a KT266A's performance at 1600x1200x32, and I found 735 boards $40 cheaper than 266A boards, I'm probably going to buy the 735.

see what I'm getting at?
 

Rand

Lifer
Oct 11, 1999


<<
For example,

SiS 735 vs. KT266A

We all know from Anand's benches that the VIA chip performs somewhat faster, but if that gap narrows (or widens) as you raise the res higher and higher, it could drastically change things. I mean, if it were proven that the SiS 735 could come within 1-2% of a KT266A's performance at 1600x1200x32, and I found 735 boards $40 cheaper than 266A boards, I'm probably going to buy the 735.
>>



Almost certainly the gap will shrink drastically in most games as you increase the resolution and other bottlenecks come into play. But many people may well have a video card or other hardware in the future that won't be a bottleneck at higher resolutions, in which case the KT266A will be quite a bit faster.

And there are many games and applications that are purely limited by the processor, in which case you would again see a large difference at higher resolutions. In fact, if a game is completely processor limited, it stands to reason that as the stress on the processor increases, the gap would increase as well, and hence there would be an even larger difference between the platforms at higher resolutions.

How much of a performance boost you see at higher res all depends on what the rest of your system hardware is and what you specifically will be doing with the system.

Anand tests at low res because in most cases that removes all bottlenecks except the processor and the cache/memory subsystem, so you are able to see the pure potential performance benefit with nothing else affecting the results.
That does not mean there will always be that much difference between the platforms... the differences can vary drastically depending upon your individual usage.
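
A crude way to picture it (the numbers below are made up purely to illustrate, not measured from any board): treat the frame rate as capped by whichever side is slower at a given resolution, the platform/CPU side or the video card side.

```python
# Toy bottleneck model with illustrative (not measured) numbers.
# The platform/CPU side sets a frame-rate ceiling that barely changes with
# resolution; the video card sets a ceiling that drops as the pixel count rises.

RESOLUTIONS = {"640x480": 640 * 480, "1024x768": 1024 * 768, "1600x1200": 1600 * 1200}

def fps(platform_ceiling, gpu_pixel_budget, pixels_per_frame):
    gpu_ceiling = gpu_pixel_budget / pixels_per_frame  # card-limited frame rate
    return min(platform_ceiling, gpu_ceiling)          # the slower side wins

# Hypothetical: platform A is 10% faster than platform B on the CPU/chipset side,
# and both use the same video card (same pixel budget per second).
platform_a, platform_b = 110.0, 100.0
gpu_budget = 60e6  # pixels the card can push per second (made-up figure)

for name, px in RESOLUTIONS.items():
    a = fps(platform_a, gpu_budget, px)
    b = fps(platform_b, gpu_budget, px)
    print(f"{name}: A {a:.0f} fps, B {b:.0f} fps, gap {100 * (a - b) / b:.1f}%")
```

With those made-up numbers the 10% gap survives at 640x480 and vanishes once the card becomes the limit; plug in a much larger gpu_budget (a faster future card) or a purely processor-limited workload and the full platform gap shows up at every resolution, which is the point above.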
 

gaidin123

Senior member
May 5, 2000
I think I see what you're getting at, but I'd have to disagree about tossing out 640x480 benchmarks.

Basically, when running at 640x480 you are in effect removing the video card as a bottleneck for your framerate. This is done to show the theoretical maximum performance of a game on the test platform. All other bottlenecks aside (and there are a lot of them), if Platform A has a 10% higher framerate in a given game at 640x480 than Platform B, then theoretically Platform A would get a 10% higher score than Platform B at any resolution.

I'd guess that none of us play at 640x480, but once you're playing at 1600x1200 the performance bottleneck is all video card. Almost any current system with similar specs and the same video card (just different motherboard chipsets) could run any game at about the same framerate. Testing different platforms at 1600x1200 with a GeForce 3, for example, would probably yield little difference in framerates (someone please correct me if I'm completely wrong! :)).
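
To put a very rough number on that "all video card" claim, here's a back-of-envelope sketch; the 800 Mpixel/s fill rate and 3x overdraw are round assumed figures, not benchmark results, and real-world numbers are usually lower once 32-bit color memory bandwidth comes into play:

```python
# Back-of-envelope card-side ceiling at 1600x1200 (round, assumed numbers).
fill_rate = 800e6        # pixels/s, roughly a GeForce 3-class spec-sheet figure
overdraw = 3.0           # average times each screen pixel gets drawn (assumption)
pixels = 1600 * 1200     # pixels per frame

ceiling = fill_rate / (pixels * overdraw)
print(f"Card-limited ceiling at 1600x1200: ~{ceiling:.0f} fps")  # ~139 fps
```

Once that card-side ceiling sits at or below what either platform's CPU and chipset can feed it, the two motherboards read essentially the same.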

So if we tossed out the 640x480 benchmarks and/or only included the ones that stress a given platform to the max, we wouldn't be getting "the big picture" of how every aspect of the system affects overall performance in some way. The best way to gauge the performance of a piece of hardware is to test it in both synthetic and real-world ways, so that you can tailor your system and its components to be complementary rather than having one component limit you so much that all your other expensive hardware goes to waste (e.g. using a GeForce 3 in a Celeron system with a 66 MHz FSB).

If the review is about a motherboard, or a chipset (with several motherboards used), then I'd rather see where all the bottlenecks are and how the overall system deals with each bottleneck as it is removed from (or placed on) the system. Then I'd know what hardware I'd need to buy to run a certain game at a certain framerate, and could spend my money on specific hardware to compensate for the platform's deficiencies (e.g. if the platform has poor memory bandwidth scores compared to other platforms, I could spend more money on high-speed RAM and overclock the FSB to compensate, if I wanted that platform for some other reason).

If all you want to know is how fast a given platform can run a certain game using the fastest graphics card out there at the highest resolutions possible, then you should read something like Tom's Hardware's GeForce 2 CPU scaling guide. Unfortunately it's a bit dated now (I hope he does a new one soon), but that's the kind of article you'd probably be more interested in than general platform benchmarks. It shows you exactly what CPU speeds you need in order to max out a GeForce 2/MX/Ultra.

I'm not sure if I made my point or just rambled, but I think that showing a wide range of benchmarks is very important for the incredibly varied tasks that computers are used for. :)

Gaidin