Is nVidia "brute force?"

MustPost

Golden Member
May 30, 2001
1,923
0
0
I don't really get it when people say nVidia is just using a "brute force" processing method to beat their competitors. People say card X is way more efficient.
I also hear that they don't innovate, that the GF3 comes from the GF2, which comes from the GeForce 256. Personally, I don't really buy that these chips are even that much the same. I think it's mostly marketing and building up a brand name.
The GeForce 256 was clocked at 120 MHz at a time when the TNT2 Ultra was running at like 150 or 166 MHz. I don't know how fast the Voodoo3 was, but it was at least 120 MHz.
If the GeForce 256 was running at such a low clock speed and still beat up the competition, and the newer, faster chips have more pixel pipelines and more textures per pipeline, how can the GeForce3 be brute-forcish and inefficient? It is supposedly built from the GeForce 256, which blew away the competition even though it wasn't running at ultra-high clock speeds. Now, with programmable shaders and multiple pipelines, how could it be less efficient? I believe nVidia is in fact only successful because their chips are so efficient (okay, maybe a good part of it is their drivers too), not because they have a much faster core or memory speed.
Doesn't the fact that nVidia chips need so much memory bandwidth say they're more efficient in some ways, not less? The core can be kept busy at its current speed, and the only thing holding it back is that it can't receive or send info fast enough. This is kind of hidden by one of the GeForce's few weaknesses: its Z-buffer handling seems to be inferior to what the competition, ATi and others, have.
Other than the Z-buffer though, nVidia chips seem to be more efficient in almost every way.
 

CrazyHelloDeli

Platinum Member
Jun 24, 2001
2,854
0
0
It's always been trendy to root for the little guy (ATi). Remember when Voodoo was on top and everyone was cheering for the original TNT? Same way everyone cheers for Linux and not Microsoft? Same way everyone cheers for AMD and not Intel? Lite-On vs. Plextor? I could go on and on... it's all feather fluffing.
 

Soccerman

Elite Member
Oct 9, 1999
6,378
0
0
umm, hello?

MHz means nothing... get that through your brain right now.

The GeForce 256 (SDRAM version) WAS faster than the competition, because it finally pushed memory bandwidth to its limit (which is why DDR versions showed up after).

Basically, nVidia was using brute force to squeeze as much performance out of that RAM as possible, because more efficient designs were already out (or very close to being released; the Dreamcast used PowerVR 2, FYI).

So, instead of making a core based on efficient rendering (like PowerVR, or even like ATi's Radeon and the GF3 later on), they simply added bandwidth to compensate.

Simple, yet effective. However, it IS a brute force solution, because it only temporarily avoids the problem of memory bandwidth rather than solving it long term (of course, you'll probably need to address the problem eventually anyway).

It's similar to, say, a car. You want more acceleration, you have two main options: 1) more torque, or 2) less weight.

F = ma! You figure it out! One solution means a gas guzzler (which means more expenses, especially in California); the other means saving gas, though possibly also adding cost through design work or expensive materials.

So basically nVidia didn't reduce the bandwidth requirements of their chip (similar to shaving weight off a car); they threw more bandwidth at it (throw more gas into an engine and you'll get more power, though this is simplified because I don't want to get into details).
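
To put some rough numbers on that (a back-of-the-envelope sketch; the 32-bit color write plus Z read/write per pixel and the 128-bit, 166 MHz SDR bus are my assumptions for illustration, and I'm ignoring texture traffic and overdraw):

# Rough memory traffic a GeForce 256 SDR core could generate vs. what its RAM can deliver.
fillrate_mpixels = 4 * 120                 # 4 pixel pipes x 120 MHz core = 480 Mpixels/s
bytes_per_pixel = 4 + 4 + 4                # 32-bit color write + Z read + Z write (assumed)

traffic_needed = fillrate_mpixels * 1e6 * bytes_per_pixel / 1e9    # ~5.8 GB/s
sdr_available = 166e6 * 16 / 1e9                                   # 128-bit SDR @ 166 MHz, ~2.7 GB/s

print(traffic_needed, sdr_available)       # the core can easily outrun its SDR memory

Even with this crude estimate the core is starved for bandwidth, which is exactly why the DDR version (and later the GTS's faster DDR) paid off.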

The GeForce2 GTS is simply a GeForce 256 core manufactured on a smaller process, so the core could run cooler, faster, and with less power draw (the original GeForce was a monster when it came to power draw).

They also added a second texture unit to each of the four pixel pipes (that's right, the GeForce 1 had four pixel pipes while all competitors ran with two or fewer, which is why it didn't need extreme MHz to get similar or better fillrates). AFAIK, those were the only differences between the cores; there were no feature additions. I wouldn't be surprised if they had to rewire the core for the die shrink, though.
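
The fillrate math is just pipes times clock (times texture units per pipe for texel rate). A quick sketch; note the per-chip pipeline configs below are my assumption for illustration, not something stated in this thread:

# Pixel fillrate scales with pipes x core clock; texel rate also scales with TMUs per pipe.
def fillrate(pipes, tmus_per_pipe, core_mhz):
    mpixels = pipes * core_mhz
    mtexels = pipes * tmus_per_pipe * core_mhz
    return mpixels, mtexels

print(fillrate(4, 1, 120))    # GeForce 256 @ 120 MHz  -> (480, 480)
print(fillrate(2, 1, 150))    # TNT2 Ultra @ 150 MHz   -> (300, 300)
print(fillrate(4, 2, 200))    # GeForce2 GTS @ 200 MHz -> (800, 1600)

That's how a 120 MHz GeForce 256 could out-fill a 150 MHz TNT2 Ultra without any clock speed advantage.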

The GF3, however, is more efficient in its use of available memory bandwidth, so it is able to reach higher effective fillrates.

It's not perfect, but it certainly helps a lot! It's the start of nVidia's move to more efficient architectures (after all, they acquired Gigapixel/3dfx, who were already experimenting with HSR on the Voodoo 5 with varying levels of success).

nVidia, however, probably couldn't realistically have pursued more efficient designs until they could slow down. It takes a lot of specialized research (which is why 3dfx acquired Gigapixel). Release dates mattered more (in the end, that's probably the only reason 3dfx went down: they were caught off guard without much more room to grow with the old Voodoo architecture).
 

Adul

Elite Member
Oct 9, 1999
32,999
44
91
danny.tangtam.com
If you look at the Radeon 64 DDR and the GeForce2 GTS, the Radeon got about the same amount of work done with half the pixel pipelines. Hence it was the more efficient architecture compared to nVidia's.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
nVidia has had the theoretical fillrate crown for a long time, but OTOH they've constantly been ahead of the competition in terms of performance as well.

As for people saying they don't innovate, I don't quite understand that.

They brought T&L to the consumer space, they had the first card where 32-bit color was a viable option (TNT2), and the GF3 introduced advanced programmable shaders. What more do people want?
A card that can teleport them to work while teleporting their mother-in-law to Mars?
 

Mavrick

Senior member
Mar 11, 2001
524
0
0


<< If you look at the Radeon 64 DDR and the GeForce2 GTS, the Radeon got about the same amount of work done with half the pixel pipelines. Hence it was the more efficient architecture compared to nVidia's. >>



Isn't the Radeon 64 DDR more of a competitor for the GF2 MX? ;) No, I'm joking, but there's no way a Radeon 64 DDR could beat a GF2 GTS 32 MB... The Radeon seems a lot less efficient (since it has more and faster memory, more features and theoretical power, but still manages to get beaten...).
 

Soccerman

Elite Member
Oct 9, 1999
6,378
0
0
You're mistaken. The Radeon DDR had a 183 MHz core clock and memory, I think similar to the original GTS (at least in terms of memory clock speed). The raw fillrate power of the Radeon is much less, however. The GTS achieves a theoretical 800 megapixels/second fillrate, and 1.6 gigatexels/second when multitexturing. The Radeon 64 DDR gets 366 megapixels/second, and 1100 megatexels/second when rendering three layers of textures.

So the raw fillrate of the Radeon 64 DDR isn't as high as the GTS's; however, AFAIK, the memory bandwidth was originally about the same.

With that said, you can now buy the Radeon 64 VIVO with a clock of 200 MHz core/mem, which gets you 400 mpixels/second and up to 1200 mtexels/second. Still not up to the GTS in terms of raw fillrate, but certainly more competitive due to the increased memory clock speed. Remember, nVidia somehow managed to pull some extra performance out of the GTS with the Detonator 3 driver release, which kept it above a Radeon with equal memory clocks. Tough to say, but perhaps nVidia was already working on more efficient ways of using the bandwidth.
 

Rahminator

Senior member
Oct 11, 2001
726
0
0


<< nVidia has had the theoretical fillrate crown for a long time, but OTOH they've constantly been ahead of the competition in terms of performance as well.

As for people saying they don't innovate, I don't quite understand that.

They brought T&L to the consumer space, they had the first card where 32-bit color was a viable option (TNT2), and the GF3 introduced advanced programmable shaders. What more do people want?
A card that can teleport them to work while teleporting their mother-in-law to Mars?
>>



Nope, if it made coffee I would be satisfied :).
 

Smbu

Platinum Member
Jul 13, 2000
2,403
0
0
If you compare the latest generation though, it seems like ATI is trying to use brute force with their Radeon 8500.

Compare the GF3 Ti500 (240 MHz core/500 MHz RAM) to the retail ATI Radeon 8500 (275 MHz core/550 MHz RAM). They both use the same 4-pipeline, 2-textures-per-pipeline architecture. Not taking into account the extra features of each card, the Radeon should have a fillrate of 1100 MPixels and 2200 MTexels compared to the Ti500's 960 MPixels and 1920 MTexels (slightly lower than the GF2 Ultra's 1000 MPixels/2000 MTexels). Not to mention the fact that the Radeon 8500 also has the advantage of more memory bandwidth with its higher-clocked 550 MHz RAM. Now tell me again how this card is pretty much always slower (except in 3DMark2001) than the GF3 Ti500?
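
Working those numbers through (a rough sketch; the 4x2 pipeline layout comes from the specs above, while the 128-bit DDR memory bus on both cards is my assumption):

# Paper specs from the quoted clocks: 4 pipes x 2 TMUs on both cards,
# memory bandwidth figured for an assumed 128-bit (16-byte) bus at the effective DDR rate.
def paper_specs(core_mhz, mem_mhz_effective):
    mpixels = 4 * core_mhz
    mtexels = 4 * 2 * core_mhz
    bandwidth_gb_s = mem_mhz_effective * 1e6 * 16 / 1e9
    return mpixels, mtexels, bandwidth_gb_s

print(paper_specs(240, 500))   # GF3 Ti500   -> (960, 1920, 8.0 GB/s)
print(paper_specs(275, 550))   # Radeon 8500 -> (1100, 2200, 8.8 GB/s)

On paper the 8500 wins every column, so whatever gap shows up in games has to come from how efficiently each core and its drivers actually use that bandwidth.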
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
Don't forget that the Radeon 64 MB DDR had 183 MHz memory, plus it had Hyper-Z, which ATi claimed boosted performance by 20%. Yet the GF2 GTS was still beating it, despite having 166 MHz memory and no HSR.
 

Soccerman

Elite Member
Oct 9, 1999
6,378
0
0
well duh!

look at it this way!

These are different cores, with different characteristics. I don't understand why ATi's cores almost always seem to be behind performance-wise, even though their memory bandwidth (the real limiting factor) is typically very competitive with at least one of the GeForce line.

I think it's a testament to the engineers over at nVidia that they produce cores that are more efficient overall than ATi's (how else can you explain the GTS beating the Radeon with the Detonator 3s?). It's also odd that they're able to somehow squeeze out massive speed increases every six months or so; I don't quite understand how they do that on a regular basis.

<< Now tell me again how this card is pretty much always slower (except in 3DMark2001) than the GF3 Ti500? >>

With the Radeon 8500, the efficiency level of the core is finally similar to the GF3's... perhaps still a bit worse, though it's hard to say for sure. One thing I do know is that the Radeon 8500 core is capable of being VERY competitive with the GF3 Ti500 (though still not quite as efficient). The 3DMark2001 scores (using the 7191 drivers) and Quake 3 scores (using the 7206 drivers) are on par with the Ti500's at equal image quality (or better in the 3DMark case, IMHO). Still, the memory clock advantage of the 8500 shows that either the core itself isn't very efficient, or the drivers still aren't up to snuff.

Remember, a lot of HSR and other memory bandwidth saving techniques were implemented in the GF3 as well, so the 8500 doesn't have as big an advantage there (though apparently the HSR abilities of the 8500 are much better than the GF3's) as the original Radeon supposedly had over the GF2.

However, nVidia has never revealed what they've done to allow such performance increases, so we can't tell what's going on in their cards.

Even with the Radeon 8500, ATi hasn't quite matched that efficiency, but it's not much worse than the GF3.

<< plus it had Hyper-Z which ATi claimed boosted performance by 20%. >>

Take a look at the original [l=Anandtech review]http://www.anandtech.com/showdoc.html?i=1281&p=5[/l]. I'd say 20% is pretty close.

<< despite having 166 MHz memory and no HSR. >>

Really, where does nVidia even claim that they don't use HSR with the GF2 (with the Detonator 3 drivers)? AFAIK they don't mention ANY memory bandwidth saving techniques for the GF2, and I don't remember hearing them announced for the GF3 either (though I know they are there). I personally wouldn't be surprised if they implemented something like that to eke out the extra performance (I still can't believe the Radeon 64 VIVO is such an inefficient design; I blame the drivers :) ).
 

AA0

Golden Member
Sep 5, 2001
1,422
0
0
When you look at the hardware for both sets of cards, most of the time they perform close to each other. Many reviews showed the 64 MB GTS and 64 MB Radeon head to head in performance, with the Radeon slightly behind, but only by a fraction.

The real difference is the drivers. nVidia's drivers are geared for the highest possible speed, while ATi's aren't. ATi's default detail settings are set much higher than nVidia's. When you start to mess with the default settings, the cards generally come within testing error of each other.
 

Finality

Platinum Member
Oct 9, 1999
2,665
0
0
To sum up what Soccerman said: essentially, you can have a graphics card with 20 pipelines clocked at only 100 MHz and still have the highest possible fillrate. At the other end of the scale, a graphics card with a single pipeline clocked at a whopping 300 MHz would have nothing on a GeForce-series card, simply because clock speed is not the big factor; the total throughput of the card is.

In terms of efficiency, nVidia does have the lead. That's plain when you look at the theoretical fillrate achievable by a Radeon 8500, yet 99% of the time you see the GeForce3/Ti500 beating it, simply because of a more efficient design and driver set.