"Current" Games and replacing the benches.....

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

skace

Lifer
Jan 23, 2001
Nice thread Ben. I agree with the initial statements that benchmarks are getting old. It is sort of like one of those games teachers play with little kids where no one can lose, only win. Every card being benchmarked on Q3 and UT is a winner, and there are no losers. We need to push the cards and force out the differences.

I saw quake1 mentioned! I run q1 in software mode at 320x240 because I'm hardcore :D Maybe it's just me but Q1 still doesn't 'feel right' in any other resolution than default.

I don't think CS would make for a good benchmark. CS is a mod that is really just pushing the limits of a game engine (Modified Quake1 engine at that!). The mod could be reproduced on Q3 and you would most likely get higher FPS.
 

RoboTECH

Platinum Member
Jun 16, 2000
looks like you got your wish Ben!

I don't get why they don't have Evolva tho. ah well.

Looks like the 5500 is still better than the MX, despite your strong objections to the contrary, even in Serious Sam (well, it's the equal of the MX)

I couldn't care less about any of this anymore, I WANNA SEE MORE OF THE KYRO2!!!!!

and I was inches away from a GTS-Ultra....
 

Rellik

Senior member
Apr 24, 2000
Sorry Ben, didn't read your last post before I PM'd back.
What a day. First the KyroII review causing quite a stir, then it's
only hours away from 3DMark2001. I just hope those CS players aren't
going to be a problem, creating net traffic because of the release of 1.1 later on...

About my Q3HQ/MX statement. True, a DDR can push 60fps. But my point was, and still stands, that one of the reasons the Radeon does not score numbers as high as the GF2 is that the Radeon delivers more quality, and that quality comes at the cost of speed. The benchmarks done in 16-bit in Q3 are not a good way to compare. The 16-bit rendering on the Radeon is, for lack of a better word, ugly as H§ll. I think that testing with settings like 640x480, low detail, 16-bit is ONLY justified when trying to determine where the bottleneck of a system is. When such a low res is used, we assume that the vid card is not the limiting factor since it is nowhere near its fillrate. So what we really see is CPU performance. But at what point does this reverse, so that you get CPU-limited performance BEFORE reaching the card's fillrate? At 1600, most cards are reaching their fillrate in 32-bit. Even more important (and this is what I really care about): who plays at 640? I don't want to offend the die-hard Q3 deathmatch players, but buying a 3000-buck system only to play in 16-bit at 640 res is really something you ought to be shot for. I don't want the stupid FPS debate again (too many people confusing PC with TV and film), but speaking strictly of PC games, I would assume that an average around 80-100fps and a minimum around 40 is more than enough for a shooter. Same goes for racing games.
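That crossover point is easy to sketch with a back-of-the-envelope model. All the numbers below (fillrate, overdraw, CPU ceiling) are purely illustrative, not measurements from any card:

```python
# Rough model: the card caps FPS at fillrate / pixels-drawn-per-frame,
# while the CPU caps FPS at whatever rate it can feed the card.
# Whichever cap is lower is the bottleneck at that resolution.

def fill_limited_fps(fillrate_mpixels, width, height, overdraw=3.0):
    """Max FPS if the card did nothing but fill pixels (overdraw counts
    each screen pixel being written several times per frame)."""
    pixels_per_frame = width * height * overdraw
    return fillrate_mpixels * 1_000_000 / pixels_per_frame

CPU_CEILING = 110  # fps the engine/CPU can feed, independent of resolution
FILLRATE = 400     # effective Mpixels/s in 32-bit; made-up number

for w, h in [(640, 480), (1024, 768), (1600, 1200)]:
    card_cap = fill_limited_fps(FILLRATE, w, h)
    limit = "CPU" if card_cap > CPU_CEILING else "card"
    print(f"{w}x{h}: card cap ~{card_cap:.0f} fps, bottleneck: {limit}")
```

With these made-up numbers the card only becomes the limit at 1600x1200, which is exactly the reversal described above.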

I think the reason the Radeon does bilinear filtering when anisotropic filtering is enabled is because anisotropic filtering is one step above trilinear filtering. And the 16-tap AF of the Radeon sure beats trilinear filtering on the GF... Funny thing is, the first reviews of the GF3 indicated it would feature 8-tap AF. Now it's 64-tap. Hmmm, only
time will tell....
 

BenSkywalker

Diamond Member
Oct 9, 1999
Robo-

"Looks like the 5500 is still better than the MX, despite your strong objections to the contrary, even in Serious Sam (well, it's the equal of the MX)"

From the review-

"First off, the asterisk next to the Voodoo5 5500 is there to show that the Voodoo5 5500 was tested with the 24-bit texture option off. Failure to turn this check box off on the Voodoo5 5500 resulted in extremely poor performance regardless of the resolution."

Let's drop the color depth to 16bit and see what happens between the MX and the V5:) Course, that was only MBTR where the GF2MX performed very poorly.

SS was pretty much a wash between the two, but that is the MX. What exactly happened to the big bad GTS competitor? In 1999 games the V5 was competitive, in 2001 titles it looks like the GTS had a bad day at work and quite a bit too much to drink and the V5 is the red headed stepchild;)

Relik-

I'll have to PM you in a bit, I don't want to skimp on my reply there or in this thread:)

"I think the reason the Radeon does bilinear filtering when anisotropic filtering is enabled is because anisotropic filtering is one step above trilinear filtering. And the 16-tap AF of the Radeon sure beats trilinear filtering on the GF... Funny thing is, the first reviews of the GF3 indicated it would feature 8-tap AF. Now it's 64-tap. Hmmm, only time will tell...."

Anisotropic is a separate feature from bilinear and trilinear. If you rely completely on anisotropic filtering you will have horrible texture aliasing; you still need mip mapping, and then it comes down to point, bilinear or trilinear. When you enable anisotropic filtering on the Radeon there are very clear bands between the different mip levels, even if you request trilinear (which eliminates this when working). Also, the Radeon appears to be skipping a couple of mip levels and using a -LOD bias setting, so there is a very clear "jerk" between extremely sharp textures and very blurry ones. What's more, the way the filters are applied on the Radeon makes for some rather disjointed visuals (GF boards "bow" the filter around you; the Radeon seems to apply it based on the poly's position). There will likely be an article up shortly about this at the Basement ("wumpus" bought an AIW Radeon and is pretty p!ssed off about this issue, much as he was about the S3TC problem on the nVidia boards).

The "16tap/8tap/128tap" issue is one of mixing terms up. You can say that the GeForce/GeForce2 offers anisotropic filtering of one tap or of eight; both are correct. The Radeon offers anisotropic filtering of eight taps, or one hundred and twenty-eight in theory, though apparently as of now only 8-tap and 16-tap are supported. The GeForce3 supports 64-tap adaptive anisotropic; people are confusing themselves with incomplete drivers at the moment:)

As far as speed goes, I agree that the difference between the GTS and Radeon is splitting hairs. I also don't see a big difference in 3D gaming image quality; in fact, overall for 3D I would definitely give the edge to the GeForce boards (the Radeon kinda blows for Viz). The only problem I have seen with the GF boards is the Quake3 TC issue, which you can either quickly adjust to match the TC quality of the Radeon or disable. Color "vibrancy" is fully adjustable on either board, and you can make them both appear like the other in terms of 3D color/saturation/vibrance, either "turning down" the Radeon or "turning up" the GeForce.
 

RoboTECH

Platinum Member
Jun 16, 2000
now Ben, I never argued that the GTS was a better long-term solution than the 5500, and in fact, I always ask "which games you gonna play? If you're into the newest games, grab a GTS"

it was the MX I was always arguing about, especially now that the 5500 is just as cheap (D'OH!!! bankruptcy does that to a fella!)

 

BenSkywalker

Diamond Member
Oct 9, 1999
Robo-

"it was the MX I was always arguing about, especially now since the 5500 is as cheap"

Hmmm, I have AT's latest price guide opened up and the prices are-

$143 32MB GTS

$112 V5 5500 AGP

$89 GF2MX

The V5 and the GTS had price drops since the previous guide, while the GF2MX went up in price. I'd say the V5 pretty much splits the difference between the other two.
 

jpprod

Platinum Member
Nov 18, 1999
If you rely completely on anisotropic filtering you will have horrible texture aliasing, you still need to have mip mapping and then it comes down to point, bilinear or trilinear.

Well, actually, if the anisotropy degree were infinite (:D), as in the filter were always capable of averaging all the texels within a pixel's area, there'd be zero texture aliasing, and in fact texture image quality would be close to what one gets with high-end visualization applications.

But I think everyone interested in anisotropic filtering, or anyone claiming the Radeon is some sort of holy grail of image quality, should head over to this thread at the Beyond3D BBS. Anisotropic filtering only looks good if it's implemented properly, and the Radeon does indeed disable linear mipmaps with it enabled. In the thread it's also pointed out that the Radeon doesn't do perspective-correct mip mapping, making bilinear mipmap borders much more noticeable than, for example, on GeForce or Voodoo cards.
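That limit case can be shown with a toy 1-D texture: point-sampling a high-frequency checker aliases (the value jumps as the view shifts), while averaging every texel under the pixel's footprint, which is what an infinitely anisotropic filter would approach, settles to a stable grey. A pure-Python sketch, nothing like real filtering hardware:

```python
def checker(u):
    """1-D 'texture': alternating black/white texels (0 or 1)."""
    return int(u) % 2

def point_sample(center):
    """One texel read at the pixel center: cheap but alias-prone."""
    return checker(center)

def footprint_average(center, width, taps=256):
    """Average many texels across the pixel's footprint; as taps grows
    this approaches the infinite-anisotropy limit."""
    return sum(checker(center - width / 2 + width * i / taps)
               for i in range(taps)) / taps

# A pixel whose footprint spans 10 texels: the point sample flickers
# between 0 and 1 as the view shifts, the averaged value stays near 0.5.
for shift in (0.0, 0.5, 1.0):
    c = 100.25 + shift
    print(point_sample(c), round(footprint_average(c, 10.0), 2))
```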
 

RoboTECH

Platinum Member
Jun 16, 2000
actually, both the 5500 and the MX went down in price. The 5500 can be had for just over $100, the MX for just under. that's ~the same price, in my book (or at least the same price range)

Considering the advantages with the 5500, it's still a far better buy than an MX.

If I had $150 to spend, I'd probably go for a GTS out of those 3, aside from a few issues...
 

Rellik

Senior member
Apr 24, 2000
Just to get back on track: which games should we use for benchmarking now? I agree we still need to keep Q3A and one old D3D game in there to show how the other scores (those of the new games) relate, but for now, which games do you think are good candidates, and why?

I'll put Giants forward as a D3D benchmark because it has a solid engine with support for new and usable features (Dot3 bump mapping).

Another one would be Colin McRae Rally 2.0 as an indicator for racing games, although the implementation of cubic environment mapping is really buggy and the EMBM doesn't work.

any takers?
 

oldfart

Lifer
Dec 2, 1999
As stated before, Serious Sam is a good one for OpenGL. It's great to see an OpenGL game that didn't come from a QuakeX engine. It has all the new technologies, large outdoor maps, high res textures, lots of action and baddies.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Rellik-

Have to agree with oldfart on using Serious Sam for OpenGL. It is extremely impressive looking; I can't wait to get my hands on the retail build (supposed to be a decent amount quicker, ~20%, as OF;) mentioned earlier).

Giants, unfortunately, doesn't support benchmarking yet. The FPS counter works, but that is it (and even that has been questioned).

MBTR seems good. I hope that either Tribes2 or B&W will have a bench of some sort built in (B&W's graphics engine looks extremely impressive, with almost entirely procedural textures). Does CMR2 have a built-in bench?

Evolva is another good one, displays minimum FPS along with the average.
 

Rellik

Senior member
Apr 24, 2000
SS is a good idea from what I've heard about it. CMR2.0 works partly with FRAPS (it keeps fading in and out of the game), but I hope to get some kind of benchmark working...

B&W is a good idea as it supports two different start settings: D3D with and without T&L. So I hope it has a built-in timedemo or works with FRAPS.

OK, OpenGL: Serious Sam.
D3D: Evolva (or is it OGL?), B&W, CMR2,

any other games?
 

RoboTECH

Platinum Member
Jun 16, 2000
A good benchmark is one that scales both with CPU and video card.

MBTR is about the only racing game I've found that scales with both.
 

oldfart

Lifer
Dec 2, 1999
There is something weird about Giants. It has the smoothest ~ 30 FPS I have ever seen. Doesn't seem accurate.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Rellik-

I would suggest Quake3 and 3DMark2K1(WHOA!, calm down Robo, hear me out will yuh?)

I think that showing people the scores for the "benchmarks" of benchmarking compared to others may help give an understanding of WHY you shouldn't look to one bench.

Look at Anand's KyroII review. If we take the Quake3 and UT benches the KyroII slots in right around the other $150 cards, ho hum...

Then we look at SS and MDK2/MBTR, and the KyroII is kicking @ss/getting its @ss kicked. Showing that the systems are scoring roughly what they should in the "standard benches" may help clarify that there is nothing wrong with bench xxx showing card xxx being extremely fast/slow.

oldfart

I'm fairly sure they are using skeletal animation in Giants, versus the "moving pixels" in games like Quake3/UT; that may explain why it seems so smooth. You are definitely right, 25FPS is very playable in Giants but would induce headaches in most games.
 

RoboTECH

Platinum Member
Jun 16, 2000
Ben, I agree 100% with you.

The more benchmarks the better. You can't declare a card's speed unless you truly have several benchmarks.

We've had this discussion about UT (I think it was you and me, perhaps it was BFG and me). I personally think using UT as a D3D benchmark is stupid. It's an engine, not a D3D indicator (by any stretch). It doesn't scale; it was originally designed around a software renderer, then switched to Glide, then had D3D and OGL added as an afterthought.

Yes, it's been improved in D3D and OGL, but nonetheless, to use UT as an indicator of D3D performance is silly.

I feel that way for ALL games. Q3 and MDK2 are NOT the same. The 5500 was quite good at Q3, but had issues with MDK2 until you used the WickedGL drivers, in which case it picked up to GTS levels (well, almost!)

the Kyro merely shows this to be true again. The more benches the better. Show synthetic benchmarks, show gaming benchmarks, show the SS "theoretical fill rates" section, but show the actual framerates too. The more $hit we show, the better the all-around "worldview" we can get.


for every game, show every damn benchmark possible. If a game has a benchmark, use it. The more the merrier. You can't do too many; more runs can only provide you with more information.

OGL - MDK2, q3, Q3:TA, UT (with and without the compressed textures), SS

D3D - MBTR, 3dmark2001, 3dmark2000, UT (with and without compressed textures), Evolva

I also happen to believe that there is no such thing as a "standard benchmark", unfortunately. "Same settings" unfortunately doesn't equal "same quality"

examples - S3TC in Q3 (still, tho I notice that issue has been reduced)

"high quality" in Q3 (trilinear???)

5500 benchmarked with lodbias = 0 (should be -0.25, tho that gives the 5500 an advantage in visual quality)

what about anisotropic filtering? Should we compare 8-tap to 16-tap? I don't think so.

what about HW T&L settings, in MDK2 for example? Benchmark it both ways. In a game like MDK2, the difference is "negligible", in that there isn't really a big deal one way or the other for visual quality. In a game like Giants or Sacrifice, that obviously isn't the case.
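The lodbias point above can be made concrete: mip selection is basically a log2 of the texture footprint per pixel plus the driver's bias, so a -0.25 bias systematically picks sharper mip levels than bias 0 does. A generic sketch with hypothetical values, not any vendor's actual driver code:

```python
import math

def mip_level(texels_per_pixel, lod_bias=0.0):
    """Standard mip selection: log2 of the texture-to-screen footprint,
    shifted by the driver's LOD bias. A negative bias selects a sharper
    (higher-resolution) mip, trading extra texture aliasing for detail,
    so lodbias -0.25 vs 0 is not an apples-to-apples comparison."""
    return max(0.0, math.log2(max(texels_per_pixel, 1e-6)) + lod_bias)

for footprint in (1.0, 2.0, 4.0):   # texels covered per screen pixel
    print(f"footprint {footprint}: bias 0 -> mip {mip_level(footprint):.2f}, "
          f"bias -0.25 -> mip {mip_level(footprint, -0.25):.2f}")
```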

Benchmarking is a PORTION of a review; it should not BE the review. Some level of thought must go into the review, in addition to learning how to run a timedemo.
 

rogue1979

Diamond Member
Mar 14, 2001
Quake 3 is a weak video card test; just about any half-assed video card can play it with reasonable results. Try Mercedes-Benz Truck Racing or MechWarrior 4 at 1024x768 with FSAA on, and a lot of video cards will grind to a standstill!!!! Unreal seems to tax a video card some, but I found it is more system-memory dependent than the others; shove some extra RAM in your PC and a V3 with a decent CPU will do just fine.
 

RoboTECH

Platinum Member
Jun 16, 2000
Quake 3 is a weak video card test; just about any half-assed video card can play it with reasonable results. Try Mercedes-Benz Truck Racing or MechWarrior 4 at 1024x768 with FSAA on, and a lot of video cards will grind to a standstill!!!!

uh, try Q3 @1024x768 w/ FSAA on, and EVERY video card will grind to a standstill
 

BFG10K

Lifer
Aug 14, 2000
Quake 3 is a weak video card test, just about any half-ass video card can play it with reasonable results

That depends on your settings. Try 1600 x 1200 x 32 and get back to me.

RoboTECH, I never advocated UT as a good video card benchmark. I've always said it would make a good CPU benchmark.
 

RoboTECH

Platinum Member
Jun 16, 2000
BFG, according to the settings that guy posted, there are A LOT of games that are great video card tests.

Why is Q3 still a great video card test?

1) Scales perfectly with resolution, assuming you're not using a P2-333
2) TONS of settings for visual quality adjustment
3) Has a freely downloadable demo version with a built-in benchmark
4) Carmack is an outstanding OGL programmer, he gets the most possible out of hardware, so your score is a pretty darn good indicator of potential OGL speed, vs. UT being an obviously engine-limited and CPU-limited bench

I agree, UT is a great CPU benchmark

in fact, running UT in software mode is part of my "burn-in" process. :)
 

Weyoun

Senior member
Aug 7, 2000
hehe :)

I agree with the more-benchmarks stance, and I would be particularly interested in seeing more on minimum framerates, especially concerning the Kyro. I do notice with my GF2MX that moving outside (CS, the open stair area in Aztec) will halve my framerate, most likely because of overdraw from the number of objects in the vicinity. It isn't a texturing issue, but rather a fillrate/bandwidth-related one; it isn't an acute drop, but steady and consistent. Serious Sam sounds like a very good contender for another benchmark, and by the sounds of it Anand likes our notion. It has good benchmarking features and impressive visuals, but more importantly IMHO, it is actually programmed well and has new features. It really is a shame that the enthusiast community has been relying on Quake3 for over 1.5 years now as the primary benchmarking concern. *eagerly waiting for doom3* :D Then again, perhaps I ought to upgrade my C466 before I even start thinking about playing that :) But on a serious note, Carmack's engines are the best, and the Quake 3 engine is in a league of its own considering that it's now 1.5 years old and still kinda keeping up in visuals.

I really think people should consider what a benchmark really is before they go out praising a card. It's only a test of how the card runs under that given configuration, and although it is an indicative test, it can NOT be used as the be-all and end-all of a video card. In my humble opinion a benchmark should include more than just a speed test; other areas should be considered too. I'm not exactly sure on the implementation of this, but people always seem to be complaining about a particular video card being better as a 'gamer's card' or 'in real life'. I think it would be nicer to be able to rely on a computed screenshot than on someone's biased opinion.

When you really consider 3DMark, it's hardly what I'd call a conclusive benchmark suite. It does have some nice test results, like fillrate tests and internal card comparisons, but as far as I know, minimum frame rate and 3D image quality are not mentioned at all, let alone FSAA settings.

Perhaps a test would be to sample a texture at the different levels the hardware supports, have it dump a frame from the framebuffer, and actually do a bit-by-bit comparison against the same texture sampled by the CPU with 100% clean code: not speed-tweaked, but written for perfect image quality at that particular level. Then a bit comparison could take place and the card could be given a mark for its texture filtering quality. The same could apply for FSAA: worst-case scenarios, best case, lines drawn at 0° (from horizontal), 22.5° and 45°, multisampling and supersampling, and perhaps again a software implementation written to show the maximum-image-quality case. Speed would be of no consideration here, as the card is being given a mark purely based upon its quality. As far as I know, everything a 3D accelerator can do can be done via a software implementation, albeit slowly. A purely software-coded method, however, should yield perfect results, and driver 'tricks' could be found, to see if any quality has been sacrificed for the newfound speed. Also, quirks in drivers and filtering problems, such as those recently found on the Radeon concerning anisotropic filtering, could be clarified more...

Any thoughts? IMHO, as the all-in-one benchmark it sets itself out to be, 3DMark fails miserably. Perhaps one of the more intuitive members around here would be able to indulge us with an image quality comparison and marking program :)
 

BenSkywalker

Diamond Member
Oct 9, 1999
Robo-

"uh, try Q3 @1024x768 w/ FSAA on, and EVERY video card will grind to a standstill"

GeForce3:D

Weyoun-

I want to cover several points you made, but I'm going to take them in random order:)

"When you really consider 3dmark, it's hardly what I'd call a conclusive benchmark suite. It does have some nice test results like fill rate tests and internal card comparing, but as far as I know, minimum frame rate or 3D image quality is not mentioned at all, let alone FSAA settings."

Minimum framerate would be a nice addition, everything else you mention is already in there(have you DLed 3DMark2K1 yet?).

"Perhaps a test would be to sample a texture at the different levels that the hardware supports, have it dump a frame from the frame buffer and actually do a bit by bit comparison to the same texture sampled by the CPU with 100% clean code, not speed tweaked but to get perfect image quality for that particular level. Then a bit comparison could take place and the card could be given a mark regarding it's texture filtering qualities."

What they do is dump an image from the framebuffer and then show one rendered on a reference rasterizer (software), and have you compare them. Why have you compare instead of software? Because of the way software would "look" at the image. The only way for software to compare is to break the image down into bits and compare them to each other. Sounds good, but you run into problems quickly with this approach. What if one board running 32-bit color has the alpha off by one bit for every pixel on screen? For you or me sitting in front of the monitor it would be barely, if at all, noticeable. To a software compare, that board scores a zero. Then you could have another board that misses every other texture. Only having every other poly textured could result in a score of 50%, which is a huge advantage over the board with the alpha channel off. Which is going to look better? The one that scores a zero, by a HUGE margin, but the software has no way of knowing that. Visualization benches do test this quite regularly, but there is only one consumer video card that can score acceptably on them.
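That objection to a naive bit-compare is easy to demonstrate with two synthetic "boards" scored against a reference frame. A toy NumPy sketch; the images and scoring are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64, 4), dtype=np.uint8)  # RGBA frame

# Board A: alpha off by one on (almost) every pixel -- visually invisible.
board_a = reference.copy()
board_a[:, :, 3] = np.clip(board_a[:, :, 3].astype(int) + 1, 0, 255).astype(np.uint8)

# Board B: every other row left untextured (flat grey) -- visually broken.
board_b = reference.copy()
board_b[::2, :, :3] = 128

def exact_match_score(img, ref):
    """Fraction of pixels identical bit-for-bit: the naive software compare."""
    return float(np.mean(np.all(img == ref, axis=-1)))

print(exact_match_score(board_a, reference))  # near 0: "fails" everywhere
print(exact_match_score(board_b, reference))  # ~0.5: "passes" half the frame
```

The bit-compare ranks the visually broken board far above the visually perfect one, which is exactly the failure mode being described.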

"The same could apply for FSAA, worst case scenarios, best case, lines drawn at 0* (from horizontal), 22.5* and 45*, multisampling and supersampling, and perhaps have again a software implementation written to show the maximum image quality case. Speed would be of no consideration here, as it is giving a card a mark purely based upon it's quality."

Even bigger problems here. Current FSAA (OG or RG), from 3dfx, ATi or nVidia, is a hack. What you would consider best case at any particular angle is not going to be the best case at another. What's more, the human eye is more sensitive to aliasing when it happens at certain angles. And then you have the fact that the human eye is also more sensitive to the blurring caused by FSAA at those same angles. Which is better? With MSAA it gets easier, as you only have to worry about edge aliasing in terms of "FSAA" and you can rely on anisotropic/trilinear filtering to deal with texture aliasing. Maximum image quality, using anything out now except the GeForce3 (which does use MSAA), is going to depend greatly on which flaws bug you most: jaggies or texture blurring. If you look at MSAA and image quality, well, 3DMark2K1 does have an image quality test for that:)
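The OG-vs-RG difference comes down to sample placement. The offsets below are a commonly cited 4x rotated-grid layout, used here purely as an illustration, not any vendor's exact pattern:

```python
# 4 subpixel sample positions per pixel (x, y offsets in [0, 1)).
og = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]        # ordered grid
rg = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]  # rotated grid

def distinct_heights(pattern):
    """A near-horizontal edge only gains intermediate grey levels from
    samples at distinct vertical offsets within the pixel."""
    return len({y for _, y in pattern})

print(distinct_heights(og))  # 2 -> coarse two-step gradient on such edges
print(distinct_heights(rg))  # 4 -> smoother four-step gradient
```

Same sample count, but the rotated grid spreads its samples over more distinct heights (and widths), which is why it looks better on the near-horizontal and near-vertical edges the eye is most sensitive to.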

"A purely coded software method however, should yield perfect results, and driver 'tricks' could be found to see if any quality has been sacrificed for the new found speed. Also, quirks in drivers and filtering problems, such as those recently found on the Radeon concerning anisotropic filtering can be clarified more..."

Unfortunately, if people did start posting the comparisons more frequently, you would hear much louder protests than you do now about MO being nVidia-biased. The MS reference rasterizer and nVidia cards are the closest to each other, and have been since the TNT1 days (well, briefly the Matrox G400 fared better than the TNT2, until the launch of the GeForce). ATi uses LOD bias settings that are more aggressive than the reference rasterizer. 3dfx's are less aggressive (though they can be adjusted). 3dfx lacks not only anisotropic but also trilinear for anything current. ATi's anisotropic is kinda screwy at the moment. nVidia's weakest point, their defaulting to 16-bit texture interpolation for DXT1, is not something specified by MS, so they wouldn't lose any points there. Would you consider that fair? Not too many people around here do (we have discussed this before; people's personal preferences seem to outweigh their desire for accuracy). I have argued that accuracy should come first, with subjectivity a distant second, but I'm in the minority around here:)

"In my humble opinion a benchmark should include more than just a speed test, but rather other areas considered. I'm not exactly sure on the implementation of this, but people always seem to be complaining as to a particular video card being better as a 'gamer's card' or 'in real life'."

There is no way to gauge this aspect, unfortunately. Are you FPS and nothing else? nVidia. Hardcore sim fan? 3dfx. Like watching movies? ATi. 2D at 1920x1440? Matrox. For most people a little bit of everything is closer, even if we limit it to strictly gaming. For Falcon4 I would likely pick a V5 over a GF2Ultra. For Giants I would take a GF2MX over a V5. For Quake3 I would take a GF2U over anything else (outside a GF3;)), one case of the most expensive being the overall best, but that doesn't always hold out. Look at the KyroII review: if you care more about SS than any other game, the KyroII looks a whole lot better than if you are a MBTR junkie. A "gamer's card" is a term thought up by people who have found a board that does well in the games they like to play, and that doesn't always line up with the popular consensus. Most of the time this is spoken by a particular company's zealot; how many times on this board have we seen comments about people having to change to different Detonators just to play a game?? I have never had to change a Det for any game, ever. Same goes with ATi and people talking about the Radeon not being able to so much as load a game under Win2K. Company zealotry. Some people do have a serious point to their arguments from their perspective, but I can't see how you could call any one board the true "gamer's card", as to date they all have respective strengths and weaknesses compared to each other. If nothing else, price alone is enough to tilt the field heavily.

"Carmack's engines are the best and the quake 3 engine is in a league of it's own considering that it's now 1.5 years old, and still kinda keeping up with visuals."

Have you by chance seen Giants or Sacrifice yet? In all honesty, after firing up those two I fired up Quake3, and it looked to me like Quake2, only not as good. I would say there is a bigger rift in visuals between the latest games and Quake3 than there is between Quake2 and Quake3. It was real good for its time, but UT with the S3TC textures certainly gives Carmack's last engine a very good run for its money:)
 

skace

Lifer
Jan 23, 2001


<< "Carmack's engines are the best and the quake 3 engine is in a league of it's own considering that it's now 1.5 years old, and still kinda keeping up with visuals."

Have you by chance seen Giants or Sacrifice yet? In all honesty, after firing up those two I fired up Quake3 and it looked to me like Quake2, only not as good.
>>


While I would agree that the newer games are starting to dwarf the original splendor of Quake3 (graphics-wise), the engine itself is still in a league of its own. I have gained a great respect for Id ever since Doom1. Id doesn't make games, they make engines: engines that last for more games than I can list. Quake1? Quake1 slide, TF, Clan Arena, Quake done Quick, Painkeep, CTF, Expert CTF, FvF; the list goes on. I have more memories of Quake1 than I do of every game I ever played on a console :p. It is always amazing to see what is done with Id engines. Sometimes I wish other games had been made with Quake engines. Namely Deus Ex and Undying :p

But I'm sort of off topic here :)
 

BFG10K

Lifer
Aug 14, 2000
Why is Q3 still a great video card test?

RoboTECH, thanks for the points which I already knew. ;)
 

Vrangel

Golden Member
Jan 12, 2000
Guys, I just picked up Serious Sam at the local EB.
Haven't had that much fun in years! It's a blast.

Forget benchmarking and have fun. :cool: