
NVIDIA GeForce FX 5800 & 5800 Ultra ($399/$499), 500MHz clock

Originally posted by: BDawg
My point was all of these people doubting the NV30 were likely the same ones building up the R300 before any specs were published. If I had to pick a side, I'd say history goes with nVidia.
While I understand your point that history does side with nVidia, you can't just assume that because nVidia says it will be faster, it actually will be. For example, nVidia was hyping up their Detonator 40 drivers not long ago (in the original beta stage), saying that they would increase performance by nearly 30%, when they actually only gave about a 5%-10% increase.

Also, I fixed that quote I had attributed to you. I wholeheartedly apologize for that.
Originally posted by: Taz4158
It took nVidia 6 months extra and a huge core increase before they could even begin to offer a product faster than the R300, and we still haven't seen benches. I knew the 9700 was "all that" because I was beta testing it. Until there are benches and an actual product, it's all speculation, be you a fan of ATI or a fan of nVidia, and I frankly couldn't care less since I'll have an NV30 to go along with my 9700, so I haven't a clue why you got your panties in a knot.
First off, since you apparently were omniscient and knew that the V5 6k wasn't going to work, I can say that I too knew what to expect of a certain video product before it hit the market. All that? I don't think the R300 surprised anyone, and it surely didn't surprise me. ATI had to release a card with performance like that to prove anything to the market. They couldn't release a card that merely matched the performance of the Ti4600; they had to considerably surpass it to make it worthwhile to pay for their product over nVidia's. Otherwise, they would just have to go back to the drawing board and try to redesign it. New cards are always sold at a premium to cover R&D costs, so if your product only slightly beat what was out, you'd have to either offer it at a better price (lowering margins to dangerous levels) or scrap that product in the hopes of improving on its design quickly (lest you go the way of 3dfx). I expect every product launch to show roughly a 20%-30% increase over the current fastest card on the market; that's what's been happening for a few years now. The R300 handily beat the GF4 in Aniso performance, but that too wasn't unexpected. I didn't have to beta test the product to know that.
Originally posted by: Taz4158
I also wouldn't count on the 4X speed increase until you have seen a LOT of benchmarks, a remark like that can quickly come back and haunt you.
In response to both remarks about waiting to see benchmarks, I think that my post (had you read the whole thing) made it clear that I don't accept anything a manufacturer says as fact until I see benches. However, you were basing your original criticisms on the 50% number in nVidia's press release, so I got my 4x number from the same place. I don't think nVidia would say what it does without at least some benchmark to back it up. I know how marketers work, and they won't outright lie (for fear of lawsuits); instead, they'll find some obscure benchmark on some obscure game that proves their point. I'm guessing that the 4x bench will be with Aniso and AA enabled, as it seems like nVidia made a concerted effort to improve its card's performance in those two areas.

I'm not sure, exactly, how quoting a press release will come back to haunt me, but I'll take that chance.
Originally posted by: HendrixFan
I have never heard that you are "supposed to" have your first slot open for airflow. In fact, I don't know of anyone personally who has had such a configuration.
Actually, I have heard this before. Whether or not nVidia or ATI or any other manufacturer actually recommends it, I don't know. I can also tell you that the kind of airflow you would lose wouldn't make your card overheat, but it would make your card run a little hotter than usual, which carries all of the bad side effects of any other warm computer component.

More importantly, I never put a card into the first PCI slot in any system I build. The first PCI slot was notorious for causing problems with the AGP slot only a couple of years ago. Even though most of these problems (mostly IRQ sharing issues) have been fixed, why not just protect yourself from them entirely? With a network card integrated into almost every motherboard now, and decent integrated sound on many others, there is little reason why losing access to one PCI slot would affect anyone. I used to have a DVD decoder card, a sound card, a network card and a video card plugged into my computer. The only other thing I could see anyone using would be a modem (I have broadband). Take away the DVD decoder card (decoding DVDs is a joke to any modern CPU) and the network card (already stated why), and there are only three things anyone should ever need to plug into his or her motherboard. There are more than enough slots on any ATX motherboard to support the important cards, and I don't think nVidia is too concerned with their top-of-the-line card not fitting into micro-ATX or Flex-ATX systems, though it would still fit in a Shuttle system.

So who is left out of the market for these new 2-slot AGP cards? Enthusiasts? Nope. Any enthusiast would be smart enough to get a board with at least 4 PCI slots (almost always 5) and 1 AGP slot, with an integrated 10/100 NIC. So you would have room for your video card, your sound card, your modem (a very small portion of the enthusiast market uses modems, I'd reckon), and maybe even a RAID card. Anyone needing two RAID cards should have been smart enough to get at least 5 PCI slots anyway. Home users? Most won't get this kind of card, but even if they do, they won't need RAID cards and will probably have integrated sound. The only person I could see being let down would be a LAN gamer with a Shuttle box who uses a separate sound card, and most of them (I believe) would be willing to settle for a lesser sound solution in order to get bleeding-edge video card performance.
 
Originally posted by: Auric
Gee, this is an interesting topic... three months from now!

If NVIDIA is claiming such a paltry performance increase over the competition's then-six-month-old part, which has a significantly lower clock on a larger manufacturing process and is cheaper even now than a reasonably projected price based on the MSRP announcements, then they are truly doomed. It seems reasonable to assume that even a 9700 Pro on a smaller process with slightly higher clocks will relegate these FX's to the same place the GF4 is today: overpriced, underpowered parts being squeezed out of the market. The good news is that one way or the other, we should be able to get a really kick-ass card on the cheap next spring.

Nicely stated...
Heh, too bad I have to build a new system in order to use these next-gen cards to their full extent.
My only concern is the lack of benchmarks; that shuts everyone up. I hope they find a better way to get rid of the heat than that huge heat pipe.
 
Soooo many "interesting" comments. Here are a few of my thoughts:

1) Does anyone complaining about the price understand marketing? Do you know what price skimming is? If you don't want to pay the price don't buy it. It is not your right to buy the fastest thing out for cost. If they can sell it for $500 they should. If you don't like it and think they are ripping people off, buy their stock, sell it after they release outrageous earnings, and then buy the card with the profit! That would be sticking it to the man.

2) It is irrelevant to NVidia that this card is late, since they have not lost desktop market share. They are not being eaten alive by the R300 as some would claim.

3) Brute force is relative and irrelevant. Core clock speed or memory bus width aside, fast performance is the bottom line and all that's relevant.

4) ATI will have to at some point make the $$$ investment NVidia did to jump to .13 micron. You have to bite the bullet and do it at some point, NVidia just did it earlier (like Intel, who is now reaping the gains).

5) Whatever happened to Matrox's Parhelia? Did it just drop off the face of the planet or what?

Thomas
 
Originally posted by: UThomas
Soooo many "interesting" comments. Here are a few of my thoughts:


2) It is irrelevant to NVidia that this card is late, since they have not lost desktop market share. They are not being eaten alive by the R300 as some would claim.

Wait until the 4th quarter reports come in and then we'll see how much the 9*** cards have dented nVidia's sales. The fallout could also come many months down the line, when we see IF the delay has hurt their reputation and OEM sales. Time will tell.
 
From the original link:

"Nvidia "would have loved to have had it sooner. But we haven't lost a single point in desktop market share in the last three quarters," he added. Nvidia's market share increased to 58 percent during the third quarter, according to a recent report by Mercury Research"

I will reiterate: ATI will have to bite the bullet and go to .13 at some point. They will have delays as well. NVidia chose to do it earlier. For reference, see AMD/Intel.
 
Originally posted by: UThomas
From the original link:

"Nvidia "would have loved to have had it sooner. But we haven't lost a single point in desktop market share in the last three quarters," he added. Nvidia's market share increased to 58 percent during the third quarter, according to a recent report by Mercury Research"

I will reiterate: ATI will have to bite the bullet and go to .13 at some point. They will have delays as well. NVidia chose to do it earlier. For reference, see AMD/Intel.

Ummmmmmmmm, the 9700 has only been out since the 4th quarter, so the last THREE quarters wouldn't reflect any impact the R300 has had... thought that would be obvious to you.
 
Originally posted by: UThomas
From the original link:

"Nvidia "would have loved to have had it sooner. But we haven't lost a single point in desktop market share in the last three quarters," he added. Nvidia's market share increased to 58 percent during the third quarter, according to a recent report by Mercury Research"

I will reiterate: ATI will have to bite the bullet and go to .13 at some point. They will have delays as well. NVidia chose to do it earlier. For reference, see AMD/Intel.

I'll just point out a couple things that may be taken as a flame but aren't in any way intended as such.

Those numbers were through the 3rd quarter, before the 9700 was out. The 4th quarter numbers will be out kind of soon and I think Nvidia will have lost a little to the 9700.

Nvidia and ATI both use TSMC as a foundry for their chips. Nvidia went through the teething pains of TSMC's .13 process for all the other companies using it. ATI will be able to bring out a .13 chip a lot more easily than Nvidia was able to.
 
You guys are forgetting what is obviously going to happen.

When the 5800 comes out, ATI is going to sink prices across the board, just like when the GF4 came out.
Would you buy a 9700 Pro at retail for 249.99 or 299.99, or the 5800 at retail for 399.99?

Street prices:
9700 Pro - 199.99 ~ 249.99
8500 - 299.99 ~ 349.99

Only time and benchmarks will tell.

I have a feeling the GFfx could be 50% faster than an R300 in everything, except when you turn on AA and AF. My feeling is that when you turn them on, a GFfx will be knocked right back down to 9700 Pro levels. So the price difference may mean everything to you if you always play at the max AA and AF you can. But if you always play with no AA and AF, and all the details turned down, hey, perhaps you might want that GFfx for its sheer speed. But that 256-bit bus of the R300 is going to let it do more with less.
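
Since the bus-width point keeps coming up, here's a rough back-of-the-envelope sketch in Python. The clock figures are the commonly quoted spec-sheet numbers (500MHz DDR-II on a 128-bit bus for the 5800 Ultra, 310MHz DDR on a 256-bit bus for the 9700 Pro), so treat them as approximations rather than gospel:

def bandwidth_gb_per_s(bus_width_bits, effective_clock_mhz):
    # bytes per transfer * transfers per second, expressed in GB/s
    return (bus_width_bits / 8) * (effective_clock_mhz * 1e6) / 1e9

# GeForce FX 5800 Ultra: 128-bit bus, ~1000MHz effective (500MHz DDR-II)
print(bandwidth_gb_per_s(128, 1000))   # ~16 GB/s
# Radeon 9700 Pro: 256-bit bus, ~620MHz effective (310MHz DDR)
print(bandwidth_gb_per_s(256, 620))    # ~19.8 GB/s

In other words, the wider bus lets the R300 push more bytes per clock even at a much lower memory clock, which is exactly the "do more with less" point.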
 
Good call on the timing, but I don't think OEMs change their contracts quarterly, so Q4 is probably in line. Hmmm... and come to think of it, AMD and Intel have their own fabs and don't outsource. So all in all, that point can be stricken from the record. Please refer to my points on marketing though 🙂 And at any rate, NVidia is ahead of the game jumping on the .13 bandwagon so soon. ATI will have to catch up.
 
And I'm having fun with the GF3 Ti200 that I got for $100 a year ago. Just think: there are barely any games taxing this card. Games that will tax the GFFX will be here in TWO YEARS; not even Doom III will take full advantage of the card.
 
Originally posted by: Information21
Nvidia and ATI both use TSMC as a foundry for their chips. Nvidia went through the teething pains of TSMC's .13 process for all the other companies using it. ATI will be able to bring out a .13 chip a lot more easily than Nvidia was able to.

I was just about to say the same thing 🙂. You just can't compare it to Intel's .13 vs AMD's .13, since those were developed completely independently. ATI is likely to use Nvidia's fine-tuning of TSMC's .13 micron process to their advantage :Q. Once TSMC has the .13 process down pat, why not release a higher-clocked version of the 9700? I bet the savings in die size might also make it cheaper to produce. I assume there would also be a nice boost in performance from doing that 🙂.
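
To put a very rough number on the die-size savings, here's a tiny illustrative sketch. It assumes die area scales with the square of the linear feature size, which ignores pads, redesign work, and everything else, so it's only a ballpark:

old_um, new_um = 0.15, 0.13
area_ratio = (new_um / old_um) ** 2   # linear shrink squared
print(round(area_ratio, 2))           # ~0.75, i.e. roughly 25% less silicon per chip

Smaller dies mean more chips per wafer and generally better yields, which is where the "cheaper to produce" part would come from.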

Either way, all this fighting over ultra-high-end gaming solutions is good news to me. I'll be waiting for DX9-compatible cards to be available at ~$100 before I make my move, which should be about the time games start pushing the DX8 cards too hard to play the newest titles. Heck, I just made the jump to DX8 this week 🙂

 
I'm just glad I'm NOT a gamer. At least I'm not tempted to buy these latest and greatest video cards.

Gulp
 
Originally posted by: HendrixFan
I assume all the "non biased" people planning on getting the NV30 because of the 25-50% performance gains are the same ones that already bought the R9700 because of the 25-300% performance gains. I'll also assume that if the R350 is released around the same time and outperforms the NV30, then these "non biased" people will get that as well.

To any "non biased" person this applies to, I will be more than happy to take the 9700 off your hands for a reduced fee. 😀
 
Ok, a couple things:

1) We still don't know how fast the NV30 will be, except that it will be at most 50% faster than the 9700 Pro, and that will likely be in very special circumstances. Arguing about how much faster NV30 will be than the R300 is silly, because we still really don't know.
2) NV30 won't be available for another three months!
3) As already stated, it will be much easier for ATI to transition to .13um than it was for NVIDIA, because NVIDIA already did the work for them.
4) The 9700/Pro should be quite a bit cheaper in three months than it is now.
5) Both ATI and NVIDIA appear to have "refreshes" scheduled for not too far after the NV30 release.
6) We're only talking about video cards here, the world will not end if one card stomps the other.
 
Again, I'd like to point out what Kyle had to say at HardOCP: "Of course, how fast it will run DOOM3 is what is on everyone's mind. From the numbers that NVIDIA showed us compared to a 9700, it looked as if the GeForceFX would be approximately 25% faster in frame rate."

Edit: Source
 
Originally posted by: CrazySaint

6) We're only talking about video cards here, the world will not end if one card stomps the other.
Are you sure about that? I thought ATi producing a faster card than nVidia was one of the prophesied signs of the apocalypse! 😛 hehehe

 
Originally posted by: Ilmater
Originally posted by: Taz4158
It took nVidia 6 months extra and a huge core increase before they could even begin to offer a product faster than the R300, and we still haven't seen benches. I knew the 9700 was "all that" because I was beta testing it. Until there are benches and an actual product, it's all speculation, be you a fan of ATI or a fan of nVidia, and I frankly couldn't care less since I'll have an NV30 to go along with my 9700, so I haven't a clue why you got your panties in a knot.
First off, since you apparently were omniscient and knew that the V5 6k wasn't going to work, I can say that I too knew what to expect of a certain video product before it hit the market.



the V5 6k DID work; samples did get out, etc. BTW, if you were basing the argument that two-chip implementations don't work on the Rage MAXX... the V5 5500 was dual chip and worked fine. All the 6k was, was a 5500 with two extra chips and 32 extra megs for each of them. 🙂

just wanted to add my comment. Pretty bad, huh, going into a thread about the GFFX (wow, what an acronym) to talk about 3dfx, lmao, I need a life.😀



Oh, yeah, people, quit whining.. there are people who want these things for more than just gaming... can anyone say, 128 bit color, real(well, not exactly, but you know) time rendering in Maya and 3dsmax? w00t.
 
Oh, yeah, people, quit whining.. there are people who want these things for more than just gaming... can anyone say, 128 bit color, real(well, not exactly, but you know) time rendering in Maya and 3dsmax? w00t.

I'm too lazy to read the whole thread to see where this "128bit colour" bit came from. If it's true how pointless! 32bit is 17million colours. 64bit is 18,000,000,000,000,000,000. 128bit just AINT necessary! 32bit isn't quite enough, 64 is the next level and is more than anyone could ever POSSIBLY want! 128 is sooooooooooo silly!
 
Originally posted by: sebfrost
Oh, yeah, people, quit whining.. there are people who want these things for more than just gaming... can anyone say, 128 bit color, real(well, not exactly, but you know) time rendering in Maya and 3dsmax? w00t.

I'm too lazy to read the whole thread to see where this "128bit colour" bit came from. If it's true how pointless! 32bit is 17million colours. 64bit is 18,000,000,000,000,000,000. 128bit just AINT necessary! 32bit isn't quite enough, 64 is the next level and is more than anyone could ever POSSIBLY want! 128 is sooooooooooo silly!
I concur. I believe that 128 bit color would be useful if you wanted an image identification/comparison program. The human eye can distinguish only a finite number of colors, and light only comes in discrete quantities anyway. 128 bit is 3.40*10^38, just for your reference. No human can distinguish that many. For gamers, it's marketing. Peace.
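
For anyone who wants to check the arithmetic, the counts just fall out of two raised to the bit depth; a trivial sketch:

for bits in (24, 64, 128):
    print(bits, 2 ** bits)
# 24 -> 16,777,216 (the "17 million" of 32-bit color, since 8 of the 32 bits are alpha)
# 64 -> ~1.8 x 10^19
# 128 -> ~3.4 x 10^38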
 
Originally posted by: JSSheridan
Originally posted by: sebfrost
Oh, yeah, people, quit whining.. there are people who want these things for more than just gaming... can anyone say, 128 bit color, real(well, not exactly, but you know) time rendering in Maya and 3dsmax? w00t.

I'm too lazy to read the whole thread to see where this "128bit colour" bit came from. If it's true how pointless! 32bit is 17million colours. 64bit is 18,000,000,000,000,000,000. 128bit just AINT necessary! 32bit isn't quite enough, 64 is the next level and is more than anyone could ever POSSIBLY want! 128 is sooooooooooo silly!
I concur. I believe that 128 bit color would be useful if you wanted an image identification/comparison program. The human eye can distinguish only a finite number of colors, and light only comes in discrete quantities anyway. 128 bit is 3.40*10^38, just for your reference. No human can distinguish that many. For gamers, it's marketing. Peace.

the whole purpose of 128 bit color is to improve accuracy in color calculations, so that the final 32 bit output will be truer and have fewer errors.

besides, the human eye is analog in nature, not digital. saying we can only see 32 bit color is kind of pointless since that could vary from person to person.
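
Here's a tiny, purely hypothetical illustration of that accuracy point: run the same long chain of blend-style multiplies on one channel value, once with an 8-bit-style integer intermediate (truncated every pass, as a low-precision pipeline might do) and once in floating point, quantizing only at the end:

passes, factor = 60, 0.97
low_precision = 200     # integer intermediate, truncated every pass
full_precision = 200.0  # floating-point intermediate
for _ in range(passes):
    low_precision = int(low_precision * factor)
    full_precision *= factor
print(low_precision, round(full_precision))  # the truncated path drifts well below the correct result

The higher internal precision doesn't buy you more visible colors; it keeps rounding error from piling up before the final 32-bit output.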
 
/me puts on future goggles.

GeForce FX 5800 hits shelves in February, is fastest card ever made etc., etc., etc.

ATi releases the R350, which has a small performance increase over the GFX5800U with a $25 smaller price tag.

🙂🙂🙂🙂🙂🙂🙂🙂
 