GeForce 5 (NV30) in August


Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Excerpt from an article

When 3dfx announced that it was selling itself off to cover its debts and essentially going out of business, it had already received working Rampage silicon from the fab. The very day it announced that it would be divesting itself of all interests, the first Sage (programmable T&L unit) was delivered. The single Rampage/Sage combination looked to compete very well with the GeForce 3, while the cost of each card would have been significantly lower. Much of that savings came from the chip costs themselves: the Rampage rendering core was about 26 million transistors and the Sage core around 18 million, and producing each chip separately is much easier than producing a single 57 million transistor GeForce 3. The dual Rampage, single Sage board would have significantly outperformed the GeForce 3 while costing only slightly more to produce. 3dfx was in a very good position technologically, but financially the company could no longer go on with its plans.
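(A quick back-of-the-envelope illustration of the yield argument above. This is a toy Poisson yield model with a made-up defect rate, purely for illustration; it is not 3dfx's or NVIDIA's actual data.)

# Rough sketch: why two smaller chips can be cheaper to build than one big chip.
# Toy Poisson yield model; the defect rate is invented for illustration only.
import math

DEFECTS_PER_MTRANSISTOR = 0.01  # hypothetical defect rate per million transistors

def die_yield(mtransistors):
    # Fraction of dies expected to come out working under the toy model.
    return math.exp(-DEFECTS_PER_MTRANSISTOR * mtransistors)

rampage  = die_yield(26)   # ~26M transistor Rampage rendering core
sage     = die_yield(18)   # ~18M transistor Sage T&L core
geforce3 = die_yield(57)   # ~57M transistor GeForce 3

print(f"Rampage yield:     {rampage:.0%}")
print(f"Sage yield:        {sage:.0%}")
print(f"GeForce 3 yield:   {geforce3:.0%}")
# Even needing one good Rampage AND one good Sage beats the big die in this toy model:
print(f"Rampage+Sage pair: {rampage * sage:.0%}")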

Nine months before this tragic event, 3dfx bought Gigapixel and its technology. 3dfx instantly integrated the engineering teams and the technology of the two. Things were going very, very well in terms of engineering development, and an exciting new chip was being designed. Fear was the codename, and it was going to be something special. Basically a Rampage rendering core redefined by Gigapixel technology, it would be around 58 million transistors and produce some of the most amazing graphics ever seen. That was the original plan, and tapeout was scheduled for March of 2001. The company didn't make it that far. NVIDIA instead got the technology and all design work done to date. NVIDIA engineers were very excited to be able to see what 3dfx was working on, because it was in many ways superior to what NVIDIA had. Now, nearly 2 years after NVIDIA acquired 3dfx, we are about to see a totally new architecture that is a fusion of the 3dfx and NVIDIA technology.

The idea of a texturing computer is that texturing resources can be dynamically allocated to the pixel pipelines that most need them at that particular time. This helps the overall performance of the core by making it more flexible to the needs of the rendered scene. Add in more advanced texturing techniques, more texture passes per clock, and more internal bandwidth, and you can get the basic idea of what is in store. This will be NVIDIA's first product to feature greater than 32-bit color accuracy. Rampage had 52-bit color (13 bits per RGBA channel vs. 8), and Fear was to have 64-bit color. Internal bandwidth was improved by using the Gigapixel technology, so the performance hit of higher color accuracy was not as significant as it would initially seem. Anti-aliasing would also be improved yet again in terms of quality and performance.
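(Some back-of-the-envelope math on those color depths. The resolution and frame rate below are example numbers picked for illustration, not figures from the article.)

# Framebuffer cost at different color depths; resolution and frame rate are
# example numbers, purely for illustration.
WIDTH, HEIGHT, FPS = 1600, 1200, 75

for name, bits_per_channel in [("32-bit color (8-bit RGBA)", 8),
                               ("52-bit Rampage (13-bit RGBA)", 13),
                               ("64-bit Fear (16-bit RGBA)", 16)]:
    bits_per_pixel = bits_per_channel * 4                  # R, G, B, A
    frame_bytes = WIDTH * HEIGHT * bits_per_pixel / 8
    gb_per_sec = frame_bytes * FPS / 1e9                   # one color-buffer write per frame
    print(f"{name}: {frame_bytes/1e6:.1f} MB/frame, ~{gb_per_sec:.2f} GB/s for color writes alone")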

NVIDIA is staying with a single-chip design, so all functions will reside on one chip. Sage II was to be integrated with Fear, but NVIDIA will probably go with a beefed-up dual vertex shader on this chip. The idea of 3dfx, Gigapixel, and NVIDIA technology wrapped into one core should give the competition night sweats. This combination will also receive a new name, as NVIDIA wants to differentiate it from the GeForce brand.

The codename of the upcoming card is the NVIDIA Eclipse.

I WILL HAVE ONE :)

Oh, and some more game titles that will punish a GeForce 4: PlanetSide, StarCraft 2, EverQuest 2, and, of course, Quake 4 (2003, three months AFTER the Eclipse's release).
 

Jman13

Senior member
Apr 9, 2001
811
0
76
My GF3 Ti200 (@ near Ti500 speeds) runs everything I can think of at 1152x864x32 or better, full detail. This card will probably last me until at least the product cycle after this new card... so I'll buy the NV30 after the next round of cards has been released, so I can get it cheap and still have huge amounts of speed.

Jman
 

etalns

Diamond Member
Dec 20, 2001
6,513
1
0
Gah, I was just planning on buying a new computer, and then I was told by some people that I should wait till the 533 MHz FSB P4s and PC1066 RDRAM come out. Now it looks like I may have to wait until August :'(.
 

gregor7777

Platinum Member
Nov 16, 2001
2,758
0
71
The single largest setback we have suffered from 3dfx no longer making graphics cards is the product naming.

GeForce sucks, Radeon is ok.

Fear is a VERY cool name for a card. :)
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
I doubt it would have been called Fear in production, that was most likely an internal codename.

AMD is one company that would definitely be better off with their codenames: Thunderbird, Spitfire, Clawhammer/Sledgehammer, all are cool names :)
 

Spaceloaf

Junior Member
Apr 30, 2002
18
0
0
I actually view things the opposite of most of you, I think. I would expect the GeForce5 NOT to be significantly faster than the GeForce4. If we think about nVidia's product line:
TNT
TNT2
GeForce
GeForce2
GeForce3
GeForce4

You'll notice that all the "odd" model numbers introduce new technology, while the "even" numbers tend to optimize that technology (higher fill rates, crossbar memory architecture, etc.). If we look at the GeForce3, it performed the same as the GeForce2 when clocked at the same speed. The benefit of owning a GeForce3 was the DX8 pixel and vertex shaders, as well as Quincunx AA. In other words, the difference between the GeForce2 and GeForce3 is graphics quality as opposed to polygon-pushing power.

The GeForce4 is actually an optimization of the GeForce3 design. The result is that the GeForce4 runs faster, even when clocked at the SAME speed as a GeForce3 or GeForce2. The same analogy can be made between the other "pairs" of products (TNT vs. TNT2, GeForce vs. GeForce2).

From this trend, I would expect the GeForce5 to possess all sorts of DX9 abilities, and maybe some new 8x AA algorithm or better anisotropic filtering. But in terms of raw power, I would expect the outcome to be similar to the current GeForce4. Just like the GeForce3, the true power of the chipset will not be revealed until developers start releasing games for it. Remember how sweet Doom3 looks? That's GeForce3 technology, yet games that look that good are still nowhere to be seen.

At any rate, I wouldn't feel bad about purchasing a GeForce4 right now. Most developers have barely even touched all the DX8 possibilities, and rest assured, we are a LONG way off from DX9 games.

It's cool that nVidia pushes the industry with their 6-month development cycle, but the development cycle of most games is over a year. So nVidia is always going to be releasing hardware that won't be truly utilized until several months later.

Spaceloaf
 

MasterHoss

Platinum Member
Apr 25, 2001
2,323
0
0
I agree with you but disagree at the same time.

The GeForce3 offered much better gaming at higher resolutions. The GeForce4 offers benefits ONLY when you use AA. The NV30-based graphics cards should, like the GeForce3, bring new technologies as well as speed improvements.
 

Spaceloaf

Junior Member
Apr 30, 2002
18
0
0


<< The GeForce3 offered much better gaming at higher resolutions. The GeForce4 offers benefits ONLY when you use AA. >>



I'm not sure where you are coming from here. If anything, I would say the opposite is true. The GeForce3 runs the SAME as the GeForce2 when they are clocked at the same speed. The difference is that the GeForce3 made Quincunx AA practical for modern games.

The GeForce4 features nVidia's crossbar memory architecture along with other tweaks that make the card faster, even when running at the SAME clock speed as the GeForce3. This increased speed means increased fill rate, which means better FPS at higher resolutions. It also means that a GeForce4 will run faster than a GeForce3 or GeForce2 in ALL applications, whereas the GeForce3 will only run faster than a GeForce2 in certain situations, like with AA enabled.
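(For anyone wondering what a crossbar actually buys you, here is a toy model; it is my own simplification for illustration, not how nVidia documents the hardware. With one monolithic 128-bit controller, even a small access ties up the full bus for a cycle, while four independent 32-bit channels can service four small accesses in parallel.)

# Toy comparison: one 128-bit memory controller vs. a 4 x 32-bit crossbar.
# My own simplified model, purely for illustration.
accesses = [32, 32, 64, 32, 128, 32, 32, 64]   # hypothetical access sizes in bits

# Monolithic controller: every access occupies the whole 128-bit bus for one cycle.
mono_cycles = len(accesses)
useful_bits = sum(accesses)
moved_bits  = 128 * mono_cycles

# Crossbar: each access only occupies as many independent 32-bit channels as it needs,
# so small accesses can be serviced in parallel.
channel_cycles = sum(a // 32 for a in accesses)   # total 32-bit channel-cycles needed
xbar_cycles = -(-channel_cycles // 4)             # ceiling divide across 4 channels

print(f"Monolithic: {mono_cycles} cycles, {useful_bits / moved_bits:.0%} of the moved bits were useful")
print(f"Crossbar:   {xbar_cycles} cycles for the same set of accesses")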

If anything, I would say that the GeForce3 made AA a reality, but the GeForce4 made it fast at high resolutions.

Spaceloaf
 

Dufusyte

Senior member
Jul 7, 2000
659
0
0
Since Nvidia owns the 3dfx and Voodoo brand names, I want them to name the card the "3dfx Voodoo 6: I'll Be Baack"
 

DeathByDuke

Member
Mar 30, 2002
141
0
0
You have to remember that the GeForce4 has one major feature that is NOT a tweak: the second vertex shader, doubling vertex processing performance and almost tripling polygon rates. That is a MAJOR upgrade, and it is hardware. LMA2 is part hardware, part software, as is 4xS AA. If that vertex shader weren't there, your GeForce4 would be only slightly faster than a GeForce3 at the same speeds... this is what makes the XBOX so powerful (but not as powerful as a GF4-equipped PC! ;) ).
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,393
8,552
126


<< If we look at the GeForce3, it performed the same as the GeForce2 when clocked at the same speed. >>

No it didn't. The GF3 blew it out of the water. Go read Anand's review.
 

Spaceloaf

Junior Member
Apr 30, 2002
18
0
0
Taken from Anandtech's original GeForce3 Review:



<< The only real performance advantages the GeForce3 currently offers exist in three situations: 1) very high resolutions, 2) with AA enabled or 3) in DX8 specific benchmarks. You should honestly not concern yourself with the latter, simply because you buy a video card to play games, not to run 3DMark on, although there will be quite a bit of comparing of 3DMark scores of the GeForce3 regardless of what we think. >>





<< If you must purchase a card today and aren't going to replace it for the next two years, then the GeForce3 is not only your best bet, it's your only choice. But if you can wait, we strongly suggest doing just that. In 3 - 4 months ATI will have their answer to the GeForce3, and in 3 months following that, NVIDIA will have the Fall refresh of the GeForce3 running on a smaller process, at a higher clock speed, offering more performance and features at a lower cost. >>



It was the GeForce4 that made high-speed gaming a reality. Not only can you AA faster, but ALL applications will run faster on a GeForce4 compared with a GeForce3 or GeForce2. The same CANNOT be said for the GeForce3.

Spaceloaf
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
My belief (based only on what is said in the article; it's more of a wait and see) is that:
1. Because they are renaming the product line, we should expect a radically different chip.
2. Because of what 3dfx was working on at the time of the acquisition, we are more likely to see features like multi-sample FSAA, better HSR, 64-bit color, 256MB+ of texture memory, the possibility of dual-chip designs, and the possibility of very high resolutions. As far as speed goes, there isn't much faster you can get that we will actually be able to see; beyond 100fps the human eye can't tell the difference between 100fps and 400fps, because your monitor simply can't display it. So I would imagine that increasing image quality while keeping speed at its current threshold will be NVIDIA's goal with this "eclipse" chip.
3. The only true way to increase image quality now is to increase the resolutions and refresh rates that monitors support; 4096x3072x64 would not need antialiasing at all, since the "jaggies" would be too small for us to see. Until some new technology for high-definition monitors comes about (and is affordable), all they can really do is make FSAA run faster, or go to 6, 8, 10, or 12 samples. Even then you have the blurring problems of multi-sampling and the incredible bandwidth demands of supersampling (see the rough framebuffer math sketched below).
4. NVIDIA has made multiple announcements in conjunction with Intel about a "new interface" they are developing for video cards. This could mean goodbye to AGP, hello something else, and yet another motherboard we will have to buy. Which would definitely suck, but we can't say it hasn't happened before ;-)
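Here is the rough framebuffer math mentioned in point 3, as a back-of-the-envelope sketch. The 60fps figure and the sample counts are just example numbers for illustration.

# Rough bandwidth estimate for a 4096x3072, 64-bit color framebuffer.
# Frame rate and supersampling factors are example numbers only.
WIDTH, HEIGHT   = 4096, 3072
BYTES_PER_PIXEL = 8        # 64-bit color
FPS             = 60

base_gb_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS / 1e9
print(f"4096x3072x64 @ {FPS}fps: ~{base_gb_per_sec:.1f} GB/s just to write the color buffer once")

# Supersampling renders N times the pixels before filtering down:
for samples in (4, 8, 12):
    print(f"{samples}x supersampling: ~{base_gb_per_sec * samples:.0f} GB/s")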
 

dip027

Junior Member
May 2, 2002
1
0
0


<< So maybe that's the time they are going to incorporate some 3DFX technology (besides the PCB size)? >>



lol
i hope so
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
We don't know for sure if it'll be faster,

Yes we do - unless nVidia suddenly (and stupidly) decides to release a slower card.

the GF3 wasn't too much faster than the GF2 Ultra and everyone praised the GF3.

On its release it was 25% faster in fillrate-limited situations, which is significant. The Detonator 4s put it at 50% faster. Also keep in mind that the GF3 has the same speed memory and a slower core speed than the GF2 Ultra.

Whoopie, now I can run all those games that don't already run silky smooth at 1280x1024 on my GF3/P4 2.133GHz computer!

Yeah? Try Undying or JKII and watch some parts crawl even on a GF4 Ti4600 at those settings.

Let's get a list going of games that might punish a GeForce 4:

Doom3 and Unreal2 will almost certainly punish a GF4 and I'm fairly certain that SOF2 will do it with high detail settings as well. As for current games, Undying and JK2 already punish a GF4 Ti4600.

The GeForce4 offers benefits ONLY when you use AA.

Absolute rubbish.

The GeForce3 runs the SAME as the GeForce2 when they are clocked at the same speed.

That is completely false as well.

The GeForce4 features nVidia's crossbar memory architecture along with other tweaks to make the card faster

The GF3 has the same crossbar memory controller that the GF4 has.
 

HaVoC

Platinum Member
Oct 10, 1999
2,223
0
0
If it's true that they are renaming the product line for nv30 then I have to think this chipset is going to be a significant upgrade over the GF4, not just an incrementally faster product.

The rumors of nV30 using 3DFx technology are very exciting. :D Hopefully these rumors are true.
 

Mingon

Diamond Member
Apr 2, 2000
3,012
0
0


<< The GF3 has the same crossbar memory controller that the GF4 has. >>



This should read 'the GF4 has an improved crossbar memory controller compared to the GF3 series'.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
This should read 'the GF4 has an improved crossbar memory controller compared to the GF3 series'

EDIT: you're right Mingon, the actual controller was improved in addition to adding Quad Cache and the new version of z-occlusion culling. My mistake.
 

MadRat

Lifer
Oct 14, 1999
11,973
291
126
<<Call it the new NVidia "He"-Force.>>

GeRampage sounds wimpy. Think "He"Force: Attack of the My Penis Is Bigger Than Yours Syndrome.
 

AnitaPeterson

Diamond Member
Apr 24, 2001
5,994
496
126
Great... another wave of $400-$500 cards. You can buy a whole computer for that cash, and, seriously, it's only a video card - you're not doing "Shrek 2" on it... How many people still play Undying on a TNT2, a Voodoo3, or a Kyro? Granted, it won't be an eye-candy fest, but as long as these components are used only for gaming, I'd say it's overkill. A piece of hardware that needs a (fast) computer to function (no standalone use) costs more than a really good hi-fi receiver, DVD player, or other more necessary household items.

This type of accelerated production cycle and escalating costs will sooner or later lead to a consumer backlash - it's too hard on the budget of the average citizen of the planet.