Then tell your coworkers at ATI to work more closely with developers, like NVIDIA does, to make sure games are optimized for the Radeon and R200 line of video cards. You can't just make hardware and expect developers to immediately grasp it; you need to work with them and help them make the most of the hardware. And Unreal Tournament was designed for the Voodoo line of video cards, so even NVIDIA and Kyro II cards suffer from the lack of optimizations for their hardware. However, with my older GeForce 2 GTS 32MB, I never suffered the slowdowns with more than a couple of people on screen that my ATI AIW suffered from.
187: 68.9 (oops, I typed 65.9 at first)
205: 72.2 -- Not a bad overclock for a card with a stock speed of 148 MHz.
209: My poor card couldn't take it anymore...
Note that the graph starts at 167 MHz, but the first two jumps are 6 MHz and 9 MHz respectively. After that it's 4-5 MHz increments all the way up to 209 MHz.
205 vs. 182 MHz = +12.6% core clock
72.2 vs. 67.8 FPS = +6.5% frame rate
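For anyone checking my math, those percentages are just relative gains; a quick sketch:

```python
# Sketch: the scaling numbers above, computed explicitly.
def pct_gain(new, old):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

clock_gain = pct_gain(205, 182)    # core clock, MHz
fps_gain = pct_gain(72.2, 67.8)    # Quake III frame rate

print(f"clock: +{clock_gain:.1f}%")  # +12.6%
print(f"fps:   +{fps_gain:.1f}%")    # +6.5%
```

So the card returns roughly half the FPS gain of the clock gain at this resolution.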
The Radeon continues to scale above 183 MHz. So while it does not scale as well as the GeForce 2 in the graph posted earlier, I don't think 3D WinBench 2000 is a proper test of the Radeon, because its speed does improve according to my Quake III tests. However, the SE should be the last in the Radeon line, since a stock SE is only about 4% faster than a stock retail Radeon VIVO (though much faster than the OEM Radeon).
Note though, I am only using a Celeron 880, so I'm not sure if I'm a bit CPU limited.
EDIT: See below. It seems I am somewhat CPU limited.
Hi, just joined the forum.. and read through all the 50+ posts. ^^
I think it's good that we're actually having this debate in the first place. Just as a while back it was 3dfx vs. Rendition (the V2100/V2200 series still had the best 3D graphics quality to date, until the Radeon showed up), then it was 3dfx vs. nVidia. It's been a long time since anybody even raised the question of which company's product is superior (nVidia, of course, is still superior!).
nVidia has come a long way, but Ati has always been there. They still hold the biggest market share, especially owning over half the OEM market (Q1 industry reports).
People tend to get so driven they forget the fundamentals. As customers, you get what you pay for. Paying over $350 for a video card oughta get you top-notch performance and quality, and in the case of the GF3, it does. However, paying $70 for a Radeon LE should, in a business sense, get you a piece of crap... but it doesn't. It gets you a video card that, once properly set up, gives a GF2 a run for its money.
nVidia took a strong stance a few years back, claiming that there is a point where such a high FPS isn't needed and the focus should be on image quality. Compared to a $70 Radeon LE, the GF2 Ultra and even the GF3 (without DX8 support in today's games) fail utterly in graphics quality. This is exactly like the TNT vs. Voodoo 2 debate, where the V2 could push polygons way faster, but the TNT looked better.
I'm sure that when the GF3 is properly used, its graphics will be outstanding. But then again, if the Radeon's programmable pixel shader were used properly, it would look amazing as well. It's a fact that it won't be used, though. That isn't because game developers think it's inferior; it's because nVidia and Microsoft have a contract to support each other due to the Xbox fiasco. As it stands, feature-wise, nVidia with MS support will always be one step ahead. But not for long: Radeon 2 will be fully DX8 compliant.
As for why overclocking the Radeon won't give a huge performance yield: it only has 2 rendering pipelines. Most current games use only 2 textures per pixel, so the Radeon's 3rd texture unit is wasted, leaving it only half as effective as the GF2. The GF2, with its 4 pipelines, can withstand and deliver a lot more. However, as new games come out that use 3 textures, you will see the GF2/GF3 get crippled: if a game needs 3 textures per pixel, the GF2's 4 pipes (1st pipe renders 2 textures, 2nd pipe renders 1 extra, with 1 remaining texture unit wasted) are only equal to the Radeon's 2 pipes.
Let's face it, graphics will become more and more complex. Why would developers use only 2 textures when they can use more to make things realistic, especially now that there won't be a performance loss?
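The pipe/TMU arithmetic above can be sketched as a toy model (assumption: when a game needs more textures per pixel than a pipe has TMUs, the chip loops back for an extra pass, halving pixel output):

```python
import math

# Toy model of multitexturing throughput. Assumption: textures beyond
# the TMUs on a pipe cost an extra rendering pass over that pixel.
def pixels_per_clock(pipes, tmus_per_pipe, textures_per_pixel):
    passes = math.ceil(textures_per_pixel / tmus_per_pipe)
    return pipes / passes

for tex in (2, 3):
    gf2 = pixels_per_clock(4, 2, tex)      # GeForce 2: 4 pipes x 2 TMUs
    radeon = pixels_per_clock(2, 3, tex)   # Radeon:    2 pipes x 3 TMUs
    print(f"{tex} textures/pixel: GF2 {gf2} px/clk, Radeon {radeon} px/clk")
```

At 2 textures per pixel the GF2 pushes twice the pixels per clock; at 3 textures the two chips come out even, which is exactly the point.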
Radeon 2 will boast (I think, from the latest article I read) 4 pipes with 3 texture units each. That will be huge compared to the GF3; this alone will kick it in the nuts... if what the white paper says is true. It's up to Ati to deliver, and we shall wait.
Btw, I don't dislike nVidia; in fact I think they were great (I used a TNT1, TNT2, GF2)... until they decided to overcharge consumers for their products. Without companies like Ati and STMicro, we'd see the day nVidia retails their mainstream video cards for $1,000, just as they originally planned to sell the GF3 for $500. Bloody ridiculous!! But what's worse is that now they've joined forces with the "other" monopoly, MS. They've got a plan for worldwide domination, I bet ya!
Can't wait for the GF3 Ultra vs. Radeon 2 vs. Kyro 3 debate in a few months!! Hot damn, I wish Rendition were still around... *sob*
It seems I am a bit CPU limited. Running at higher resolutions gives me a difference of more like 9% at 205 vs. 182 MHz, as opposed to the 6% I reported earlier. That suggests to me that the stock SE at 198 will probably be more like 6% faster than the stock retail Radeon at 183 MHz, which is pretty good, since the SE is only clocked 8% higher than the retail standard Radeon.
(However, at the higher resolutions, the overall rates I got were pretty slow, so the margin of error for comparing the numbers is a bit higher.)
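For what it's worth, here's a rough sketch of how I'd estimate the CPU/GPU split from two clock/FPS measurements (assumption: a simple two-term model where frame time is a fixed CPU part plus a GPU part that scales inversely with core clock; the inputs are my lower-resolution numbers from above):

```python
# Rough sketch: split frame time into a fixed CPU part and a GPU part
# that scales inversely with core clock, using two measurements.
def split_frame_time(fps1, clk1, fps2, clk2):
    t1, t2 = 1000 / fps1, 1000 / fps2           # ms per frame
    # t1 = t_cpu + g/clk1 and t2 = t_cpu + g/clk2; solve for both terms
    g = (t1 - t2) / (1 / clk1 - 1 / clk2)
    t_cpu = t1 - g / clk1
    return t_cpu, g / clk1                       # CPU ms, GPU ms at clk1

cpu_ms, gpu_ms = split_frame_time(67.8, 182, 72.2, 205)
print(f"~{cpu_ms:.1f} ms CPU-bound vs ~{gpu_ms:.1f} ms GPU-bound per frame")
```

By this crude model roughly 45% of each frame is CPU time on my Celeron 880, which would explain why the FPS gain lags the clock gain.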
Pidge, you say the Radeon II will be about equal to the GeForce 3, but did you know the Radeon II will have 4 pipelines x 3 texture units = 12, while the GeForce 3 has 4 pipelines x 2 texture units = 8? Brute-force wise, the Radeon II will be faster than the GeForce 3.
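A quick sketch of the peak texel fill rates those unit counts imply (the 200 MHz clocks here are placeholder guesses for illustration, not announced specs):

```python
# Peak texel fill rate = clock x pipes x TMUs per pipe (paper spec only).
# The clock values are illustrative assumptions, not announced specs.
def texel_rate(clock_mhz, pipes, tmus_per_pipe):
    return clock_mhz * pipes * tmus_per_pipe / 1000  # gigatexels/s

print(texel_rate(200, 4, 2))  # GeForce 3:         4 x 2 = 8 units
print(texel_rate(200, 4, 3))  # rumored Radeon II: 4 x 3 = 12 units
```

Same clock, 50% more texel throughput on paper; whether that survives real-world memory bandwidth limits is another question.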
Well, I hope you realize that the VE is one of the slowest Radeons in existence. If you're complaining about performance, it's because you have the VE. If you need the dual head that's fine, but don't expect it to perform like a Radeon DDR VIVO.
And yeah, 3DMark2001 is not a very good test for the real world, for obvious reasons. It has its uses, but most of us don't "play" 3DMark.
For the same price the LE has MUCH better performance than the VE (but no dual head of course).
I don't know where I came off as complaining about performance; I am sorry if I misled you. My replies only come from my views of the 3DMark2001 graphics card test program (please tell me of another I should try). I do not know of any other respectable tests I could use to compare performance. Please post a link to a new test.
I am sorry, Eug, but I cannot accept your response, because I reviewed your system specs and you are an ATI user on both ends (laptop & desktop). Is the LE really better than the VE? I must hear from an unbiased graphics card user!!
Regarding WSP Trooper... Uh, I don't trust a guy who uses such words as "blows monkey goats" in sentences and portrays himself as a state trooper from Washington State. Something ain't right here, folks.
I am sorry, but I don't trust your benchmarks. Your scores show the Radeon gaining about 2 FPS over the first 20 MHz, while over the next 20 MHz it gains 6 FPS? That does not coincide with what others reported, or with my own results.
I am not stating as fact that the Radeon 2 will be equal to the GeForce 3. All I am going by is their track record. There are no facts to back up your statement, so it could be true or it could just be a rumour. Unfortunately, given ATI's track record, you have to wait until they release the card, because what ATI says they will feature and what is actually released are two different things. Let's hope it is a good card, because I want to buy an AIW Radeon 2 for my second system.
I'm not sure if you know this or not, but the VE is a "castrated" Radeon, with a 64-bit bus and only 1 pipeline. Theoretically it is only 1/4 as strong as the standard 2-pipe, 128-bit Radeon. If you have time to post multiple responses, you could have simply gone to any hardware site and looked through a short article about the VE. But then you wanted answers without putting in the effort to research, eh? hehe.
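Back-of-envelope, assuming throughput scales with both pipe count and memory bus width:

```python
# Back-of-envelope: VE strength relative to a standard Radeon.
# Assumption: throughput scales with pixel pipes and memory bus width.
def relative_strength(pipes, bus_bits, ref_pipes=2, ref_bus=128):
    return (pipes / ref_pipes) * (bus_bits / ref_bus)

print(relative_strength(1, 64))  # VE: 0.25, i.e. 1/4 of a full Radeon
```

Half the pipes times half the bus is where the "1/4 as strong" figure comes from.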
The VE is the Radeon's MX: dual-head support, but drastically reduced performance.
If you don't use 2 monitors, I say sell that VE or throw it away, pick up an LE, then flash the BIOS or tweak it to Radeon standards. Before you say I'm biased: I've used a Rendition V2100, PowerVR1, Voodoo 1, Banshee, TNT1, TNT2, and GF2. Best graphics quality out of all of these? The Rendition. Only the Radeon can produce better 3D images than it.