
Slow rig fast card, not so bad?

Wolfdog

Member
An old review at aceshardware got me thinking this morning. They had put midrange cards and a 9700 into an old system to see how much CPU you really need, and whether you would see any performance improvement with a slow system. Now this makes me seriously wonder. Take the new stuff coming out of the gates in the next few months (the NV40, R420) and all their derivatives. The NV40 does the shadow computations in hardware, taking more work off the CPU. Then we have ever faster hardware T&L and shader hardware, which are taking more and more off the CPU. Will having an uber-fast CPU cease to matter? Now I'm not talking about showing off 200+ fps in your favorite game; this is more about whether you can run the new stuff (Doom 3, HL2) and have it be actually playable (60~100 fps) on an older system, such as one running at 1.4 GHz~2.0 GHz. Even video encoding/decoding has been taken off the CPU with the NV40. Are we entering a time where CPU speed takes a back seat to GPU functionality? I thought that this topic might make for a good discussion.
 
I hope system speed takes a back seat. I need to upgrade my vid card (GF2) but I can't afford to do the whole system at once. I suppose it depends on the complexity of the app, because the CPU still has to find the time to tell the GPU what to do.

 
Wolfdog, you make a very good point about questioning the need for the fastest possible CPU to take advantage of that nice video card. A lot of gamers don't realize that the settings they play at have little or no bearing on the CPU. For instance, if you play UT2K4 at 1600x1200 with 4xAA/8xAF, do you think an Athlon 3700+ system with a Radeon 9600XT is going to be faster than a 2100+ Athlon XP with a 9800XT? No way. What about with a 6800 Ultra? Even worse. CPU speed matters in simulation games like F1 racing and flight sims, as well as strategy games like WarCraft 3. For FPS games it still matters, but it gets tricky: at lower resolutions you'll see a significant boost from a faster CPU, but at higher resolutions and quality settings CPU speed means diddly squat.

The argument many people make for a balanced system is solid, but why do they make it? Well, most of us also want to be able to do other work fast enough, so CPU speed matters much more for everyday applications (to me at least), since I play games a lot less than everything else I do. Just to give you an example, my friend had a 733 MHz P3 with a GeForce4 4200 and I had an Athlon 1600+ with a Radeon 8500, and most of the time his computer played the games we played together just as well (it stuttered a bit at higher resolutions in Battlefield 1942 and MOHAA, though). So of course, if your CPU is really slow (i.e., below 1 GHz), then you will not have great gaming. But even taking this into consideration, for shader-intensive games like Far Cry, Halo, Doom 3, and Half-Life 2 a video card matters a LOT more than a CPU. Of course this only goes so far, and you do not want to get something too slow, because AI is becoming very demanding and so on. So I'd say anyone with an Athlon XP 2100+ and up should upgrade the GPU first.
Last point: a faster CPU can make your game a lot more playable by giving you more frames, say from 40 to 80, but in games where the video card matters more (read: all new games) your CPU definitely won't turn an unplayable game playable, while your video card sure will (assuming a reasonable CPU speed, of course). And one more thing: no one is going to buy a 6800U and not play at 1600x1200 with 4xAA/8xAF, and at those settings it's 2x faster than anything out there. Do you think a CPU will even increase your speed by more than 10% when the game is primarily video-card-bound? Personally, dollar for dollar, a GPU upgrade always seems to be better for a gamer in my eyes.
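The CPU-bound vs. GPU-bound trade-off described above can be sketched as a simple bottleneck model. This is a rough illustration only, and the fps numbers are hypothetical, not measurements:

```python
# Rough model: the effective framerate is capped by whichever side
# (CPU or GPU) is slower for a given scene and settings.

def effective_fps(cpu_fps, gpu_fps):
    """The slower of the two limits determines what you actually see."""
    return min(cpu_fps, gpu_fps)

# At low resolution the GPU has headroom, so the CPU is the bottleneck;
# at 1600x1200 with 4xAA/8xAF the GPU limit drops below the CPU limit.
low_res  = effective_fps(cpu_fps=90, gpu_fps=200)   # CPU-bound
high_res = effective_fps(cpu_fps=90, gpu_fps=45)    # GPU-bound

print(low_res, high_res)  # 90 45
```

This is why a faster CPU barely helps at high-quality settings: it only raises a limit the game was not hitting in the first place.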
 
I remember my old PIII with a geforce 4 ti 4200 in it. 600 MHz, 512MB of RAM, and 128 megs of video memory, it made for a nice gaming machine for the 2 weeks I had it 😉
 
CPUs often have a huge impact on minimum framerate. While average framerate in benchmarks doesn't change much, a difference of a few fps in the minimum framerate can mean the difference between choppy and playable.
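That gap between average and minimum framerate is easy to see with a quick calculation over per-frame times. The numbers here are made up purely to illustrate the point:

```python
# Hypothetical frame times in milliseconds for a short run.
# A few slow frames (50 ms, 48 ms) barely move the average,
# but they define the minimum framerate the player actually feels.
frame_times_ms = [16, 17, 16, 18, 50, 16, 17, 48, 16, 17]

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
min_fps = 1000 / max(frame_times_ms)  # the worst frame sets the minimum

print(f"average: {avg_fps:.0f} fps, minimum: {min_fps:.0f} fps")
# average: 43 fps, minimum: 20 fps
```

An average in the 40s looks fine on a benchmark chart, yet the 20 fps dips are exactly the stutter being described above.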
 
I would say that for the most part, depending on the game, you can get good frame rates with a slow CPU but a good video card. Of course there needs to be a bit of power from the CPU, at least. Here is an example.

I was testing out the HL2 beta with my chip clocked at 2.4 GHz and it was running great, 50 fps average. I quit the game, changed something in the video card settings (I believe I turned AA and AF on), and went back in, and noticed it was playing unbelievably slow, like 20 fps average. I was amazed at how slow it was running, so I quit the game. There is a little console screen that tells you your CPU stats, and it said mine was running at 1.1 GHz??? Very odd. So I rebooted, went into the BIOS and fixed the problem, loaded HL2 back up with AA and AF still turned on, and it was playing awesome, 45 fps average again, and also looking great since AA and AF were now on.

Either way, I guess my point is there needs to be enough CPU to feed the video card data, and 1.1 GHz (100 FSB x 11) is NOT enough. I would guess that around 1.6 GHz things pick up and become quite playable, so if you don't have at least that much, I would say upgrading is a good idea for newer games.
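The 1.1 GHz reading above follows directly from FSB times multiplier. A quick sketch of the arithmetic (the restored FSB value is a guess, assuming the 11x multiplier stayed fixed):

```python
def cpu_clock_mhz(fsb_mhz, multiplier):
    """Core clock = front-side bus speed times the CPU multiplier."""
    return fsb_mhz * multiplier

# The BIOS had fallen back to a 100 MHz FSB with an 11x multiplier:
print(cpu_clock_mhz(100, 11))  # 1100 MHz, i.e. the 1.1 GHz reading

# A hypothetical ~218 MHz FSB at the same multiplier would land near
# the 2.4 GHz the chip was originally clocked at:
print(cpu_clock_mhz(218, 11))  # 2398 MHz
```

This kind of silent FSB fallback after a failed overclock or settings change is why the game suddenly became CPU-starved despite the same video card.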
 
I think with more DX9-era games coming out, we'll see video card performance become more and more important. However, this assumes no increase in the complexity of AI and physics. Soon those two will take up most of the processing time in games.

Russian Sensation: When I upgraded from a GF2 to a 9600 Pro on a PIII, Warcraft III ran much smoother when the screen was completely filled with enemies, no slowdown at all. And this was with the GF2 running the game at modest resolutions and 16-bit color. So maybe it's possible that new cards have vastly improved T&L engines compared to old cards? The point is that Warcraft III runs very smoothly on a 1 GHz machine with a GF3 or better card.

UT2k3 and 4, on the other hand, are real CPU hogs. Unreal Engine 2 probably represents the pinnacle of DX7-era engines, and it demands at least 2.0 GHz for a smooth experience.
 
hardware does the shadow computations in hardware, taking more work off the CPU. Then we have ever faster hardware T&L and shader hardware, which are taking more and more off the CPU.
Of the things you mention, I think only the shadow computations would ease processor load over what is currently done with a R300 or NV35. Shader stuff is something that the CPU doesn't do at all. If your GPU doesn't support it, it's just not done (I believe). Faster T&L makes the video card faster hence making the CPU more of a relative bottleneck.

But in general, I agree with your point. It seems like the vid cards will do more and more stuff in the future.
 