Nvidia Fermi versus Radeon 5800 benchmarks out!


Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
I don't have experience with processor design, so I guess I can't post according to you, but if I have your permission I'll make this post... since you're making the rules on who can post in this thread.

If any Nvidia bashing is going on, it's because right now, from the enthusiast gamer's point of view, Nvidia is getting pummeled (the same bashing happened to AMD/ATI back in the Radeon 2900XT days). Regardless of whose architecture is inferior or superior, AMD has a complete line of DX11 parts out, is better at almost every price point, and has the fastest parts, along with some features that some gamers do want that can only be had on AMD hardware (much like PhysX and 3D glasses can only be had on Nvidia hardware).

I don't recall anyone here saying a 4870 would be faster than a GTX280, though I do recall people claiming it might even match it. I also remember OCGuy smugly putting that in his sig when someone claimed the 4870 might even match the GTX280. His sig changed shortly after launch. I think most people were excited about getting 80-90% of the performance of a $650 GTX280 for $299. Again, this is regardless of who has what architecture. If AMD's architecture were so terrible, I doubt Nvidia would have cut the GTX280's price in half.

You can argue one architecture is better than another depending on the metric you use. It's not worth arguing about as it's been beaten to death in the past and depending on how you look at it one can be better than the other.

Well, I hope you don't mind if other people and I continue to post even if we don't have processor design experience. Good luck with your superior card that doesn't yet exist except on paper; the superior card that will likely be quite hard to find for quite a while after launch, because its superior design appears to be very difficult to yield well at this point (according to a few rumors floating around). I don't think most of us have a fanboy issue; most of us talk with our wallets. What would you rather game on, a $355 GeForce GTX285 or a $310 Radeon 5850?

You just got served :D
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
McCartney said:
I want to hear someone actually explain how horrible ATi's shader architecture is. Every driver update optimizes their compiler for new games; the fact of the matter is that ATi video cards are so optimization-dependent it's ridiculous.

Why do you think consoles perform so well relative to the hardware inside them? The software that runs on them is optimized and coded specifically for their hardware. Both NV and ATI depend on optimizations for performance improvements; it just happens that they spend most of their time on popular games. If you want to argue that, out of the box, NV provides better performance than ATI in the 90% of crappy PC games no one plays, then maybe you have a valid point. As it stands, both ATI and NV release drivers and improve performance for their customers regularly.

McCartney said:
I remember people hailing the 4xxx series as the OWNAGE of the GTX 280. Someone show me reviews where the 4870 makes the GTX 280 or 285 pound sand in a game that's been released within the past year.

I don't recall anyone saying the 4xxx series owned the GTX280 performance-wise. But at $299, the 4870 did own the price/performance ratio against the GTX 280, which cost $650 at launch. Twelve months after the GTX280's launch, we had the 4890, which delivered about 90% of the GTX280's performance for $180 while the GTX280 still cost $270...

Stormrise - http://www.xbitlabs.com/articles/video/display/radeon-hd5830_8.html#sect1

I am pretty sure a 4870 will make a "GTX 280 or 285 pound sand" there. That's not the point; there are also games where the GTX280 will be superior to the 4870. The GTX280 was considered superior performance-wise, but its pricing... well, that's a different matter.

McCartney said:
I'm looking at benchmarks for "premature" 5850/5870 drivers and they don't perform as awesome as you guys say they do.

Compared to which cards? Sure, there is no point in getting a 5850 over a GTX275 for Left 4 Dead 2 or Unreal Tournament 3. But what about Battlefield: Bad Company 2? Also, a ton of ATI users don't think the 5850 is fast enough to be worth upgrading to. That's why there is so much pressure on Fermi...

McCartney said:
I will also tell you guys to show me how the ATi architecture is superior to nVidia's, because it isn't.

It's interesting that the GTX470 will probably end up with performance similar to the 5870's. 5870 = 334mm^2 on a 40nm process + 256-bit memory bus. GTX470 = ~500mm^2 on a 40nm process + 320-bit memory bus. There is no question that ATI's current architecture is vastly more efficient for games. Is the 5870 more efficient for Folding@home, SETI@home, or professional 3D design applications? Not even close. But here we are strictly comparing gaming performance.

Don't confuse ATI's stream processors with NV's. ATI's stream processors perform simple instructions, but at this time that approach is more efficient. Think about it: if ATI crammed its shaders into a 500mm^2 chip with a 320-bit memory interface, NV wouldn't stand a chance. ATI wants to make money on its graphics cards, though, so it won't design something that bloated. Fermi, however, was designed to support future technologies (tessellation/geometry) and is heavily focused on the GPGPU side, which partially explains why the GPU is bigger in the first place.
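To put rough numbers on that efficiency gap, here's a quick back-of-envelope sketch. It assumes the rumored outcome above (GTX470 roughly matching the 5870 in games) and treats the ~500mm^2 GF100 figure as an estimate, so take it as illustration, not measurement:

```python
# Rough perf-per-area sketch, assuming GTX470 ~= 5870 in gaming performance
# (the rumored outcome discussed above). Die sizes from the post; the GF100
# figure is an estimate, not an official number.

rel_perf = {"HD 5870": 1.00, "GTX 470": 1.00}   # assumed roughly equal
die_mm2  = {"HD 5870": 334,  "GTX 470": 500}

for card in rel_perf:
    perf_per_mm2 = rel_perf[card] / die_mm2[card]
    print(f"{card}: {perf_per_mm2:.4f} perf/mm^2")
# -> the 5870 delivers roughly 1.5x the gaming performance per mm^2 of silicon
```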
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
ATi's stream processors are set up in clusters of five, which means each cluster, if it's properly fed, can process up to 5 instructions per clock in the best case; in the worst case it will process only one instruction per cluster. But worst-case scenarios are rare because of the type of work done at the rendering level.

You can compare that with the example of the HD 3870 (setting aside anti-aliasing performance): it has 320 stream processors on paper, but in reality they are 64 superscalar processors. In the worst case it would be half as fast as the 8800GTX, and it was never that slow; in the best case it would outperform it, something that happened once in a lifetime. That means the HD 3870 ran like a 96 stream processor design, since it was close to the 9600GSO in terms of performance. The same can be said of the HD 4850 with its 160 superscalar processors, which isn't much faster than the 128 stream processors of the GTS 250.
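In rough numbers, that utilization argument looks like this (a minimal sketch; the average slot occupancy is an assumed figure for illustration):

```python
# Rough sketch: effective throughput of a 5-wide VLIW shader design,
# assuming the compiler fills some average fraction of the 5 slots.
# The occupancy values are illustrative, not measured.

CLUSTERS = 64          # HD 3870: 320 ALUs arranged as 64 five-wide clusters
SLOTS_PER_CLUSTER = 5

def effective_alus(avg_slots_filled):
    """ALUs doing useful work per clock at a given average slot occupancy."""
    return CLUSTERS * avg_slots_filled

print(effective_alus(1.0))  # worst case: 64  (one instruction per cluster)
print(effective_alus(1.5))  # ~96, the 9600GSO-like behavior noted above
print(effective_alus(5.0))  # best case: 320 (all slots filled)
```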

Considering that the HD 4870 had optimization tricks that weren't present on the HD 3870, and that those are now absent from the HD 5870, performance gains at the driver level will only come from maximizing execution engine utilization, not from the kind of code-efficiency optimizations that were possible with the HD 4x00 series. So I doubt a miracle driver will boost HD 5x00 series performance enough to outperform Fermi, if Fermi turns out to be faster.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
McCartney said:
Does anyone have experience with processor design and architecture? I think evolucion8 does, but no one else. Until then, please be quiet.

I am sick and tired of people who own AMD video cards coming in and bashing nVidia products over some sort of marketing strategy or tactics. Get over it.

Now, for the people who don't want to argue intangibles and instead want actual hardware, I'm down to talk. I've read 6 pages of subjective nonsense and focus on features. I want to hear someone actually explain how horrible ATi's shader architecture is. Every driver update optimizes their compiler for new games; the fact of the matter is that ATi video cards are so optimization-dependent it's ridiculous.

People were crying about their 3xxx cards being crap until proper drivers came out. I remember people hailing the 4xxx series as the OWNAGE of the GTX 280. Someone show me reviews where the 4870 makes the GTX 280 or 285 pound sand in a game that's been released within the past year. I'm looking at benchmarks for "premature" 5850/5870 drivers and they don't perform as awesome as you guys say they do.

I am waiting patiently to buy my video cards, but I will tell you one thing: I will not let people who have this agenda against companies affect my decision.

I will also tell you guys to show me how the ATi architecture is superior to nVidia's, because it isn't. It's a big bloody lie: the ATi architecture has 1 TRUE shader per cluster, and this shader is responsible for offloading computation to the other 4 connected in series. You realize this is no different from the processor in our computers handling interrupts and sending instructions to the respective parts of the machine, right?

With that comes overhead; if you don't optimize the decoding of the instructions properly, you're asking for trouble.

Anyways, anyone want to argue the real meat here? Stop cowering behind horrible metrics to determine the superior card. Let's talk about the compiler, the hardware, the architecture; the guts!

You make a post telling people who don't know what they are talking about to be quiet, and then you post that wall of complete and utter nonsense? How ridiculous.
 

McCartney

Senior member
Mar 8, 2007
388
0
76
evolucion8 said:
ATi's stream processors are set up in clusters of five, which means each cluster, if it's properly fed, can process up to 5 instructions per clock in the best case; in the worst case it will process only one instruction per cluster. But worst-case scenarios are rare because of the type of work done at the rendering level.

You can compare that with the example of the HD 3870 (setting aside anti-aliasing performance): it has 320 stream processors on paper, but in reality they are 64 superscalar processors. In the worst case it would be half as fast as the 8800GTX, and it was never that slow; in the best case it would outperform it, something that happened once in a lifetime. That means the HD 3870 ran like a 96 stream processor design, since it was close to the 9600GSO in terms of performance. The same can be said of the HD 4850 with its 160 superscalar processors, which isn't much faster than the 128 stream processors of the GTS 250.

Considering that the HD 4870 had optimization tricks that weren't present on the HD 3870, and that those are now absent from the HD 5870, performance gains at the driver level will only come from maximizing execution engine utilization, not from the kind of code-efficiency optimizations that were possible with the HD 4x00 series. So I doubt a miracle driver will boost HD 5x00 series performance enough to outperform Fermi, if Fermi turns out to be faster.

I agree with you 100 percent, evolucion. I wasn't stating that ATi can somehow catch up if Fermi is indeed faster, but that excuses will be made as a marketing move. And while those tricks may be gone now, the efficiency is a problem. Isn't the goal of hardware design to allow a better abstraction? I think it's quite sad that ATi has to optimize so much more frequently. So what if nVidia's cards win in 80 percent of "useless" games; all I'm saying is that people should open their eyes and realize that the card isn't as general purpose.

I'm not using general purpose to defend stuff like Folding@home or any junk like that, but the future of parallelization favors architectures like nVidia's. You can send jobs to any of the shaders since they're all equally capable, which allows a lot more work to get done. Imagine the overhead you would incur by continually having your superscalar shader delegate instructions to yet another set of worker shaders... It's a bit messy.

People can buy ATi cards all they want, and I hope they do for the future of competition. What I want people to start demanding from ATi is an architecture that challenges the industry and encourages adoption. ATi's setup isn't doing that. nVidia's cards are getting adopted like crazy in our department, because for our very large input data sets we finally have something that can help. Even IF nVidia loses the benchmark war, there is no doubt a clear difference in direction between the two companies: one wants to sell video cards and optimizes for one area, while the other's primary goal is selling video cards AND ushering in parallelization.

The choice is up to you, people. I know price is a big thing, and we will have to see what becomes of the GTX 480.

I remember that at one time, ~10 years ago, there existed a giant; I think its name was Intel. This giant was lazy, forgot its duty to push the industry's boundaries even in the face of bleak competition, fell asleep, and got kicked in the nuts by a Rumplestiltskin-esque midget named AMD.
What happened then? Well, after a few years of brooding it looked like the monster woke up, and that giant has yet to relinquish its duties.

We need manufacturers to push themselves, and I would say nVidia hasn't had to worry about ATi's offerings in quite a while. It seems like their goals are very similar, but nVidia wants the crown to push itself and the industry. I think that's a very noble thing to do.

Every series of video cards they've released since the 6xxx has proposed a new way of doing things. Sure, the shaders are relatively the same, but that doesn't matter. It's the vision.

I hope one day ATi has this vision and markets its product in all areas of computing science, just as nVidia has. I know I will be happy to see that day. Right now I see one company pushing itself and getting flamed for being late on what could be a revolutionary video card.

*shrug*
 

ScorcherDarkly

Senior member
Aug 7, 2009
450
0
0
Your math is waaaaaaaaaaaay wrong. The power draw of any dual-GPU card is nowhere close to 2x the power draw of one of its single-GPU brethren.

The math wasn't meant to be perfect, just a rough demonstration of why it will take a few months before a dual part is possible. And I was right:

http://www.fudzilla.com/content/view/18038/1/

Fudzilla said:
Fermi dual in a few months, at the earliest


It looks like dual Fermi won't be coming anytime soon. A few partners have said that at this time they are not aware of any dual Fermi card, and that if Nvidia decides to make one, it won't be close to March 27th's single GF100 launch.

Dual Fermi can be expected towards the end of Q2 2010 at the earliest; the high thermal dissipation and the complexity of the cards are among the reasons for the postponement. Nvidia would definitely like to have dual cards out, but it might take a little while before it happens.

The good news is that mainstream and entry-level Fermi versions should be coming this summer, slightly delayed from the original plan, but for the dual card it's not certain when it will happen. Nvidia is not telling much to us, or to its partners either.

So for the next few months we guess that the Radeon HD 5970 will remain the fastest card around, and that is not something Nvidia and its fearsome Jensen like. You can be sure that Nvidia is not at all thrilled about this delay, but everything related to Fermi looks quite delayed and definitely doesn't sound like a walk in the park; apparently, though, it's possible.
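To spell out the rough arithmetic I had in mind (a minimal sketch; the TDP and board-overhead figures are assumptions for illustration, not official numbers):

```python
# Back-of-envelope sketch of why a dual-Fermi board is hard to build near launch.
# The single-GPU TDP and the ~50W board overhead are assumed figures.

PCIE_SLOT_W = 75    # power available from the PCIe slot
PCIE_6PIN_W = 75    # one 6-pin connector
PCIE_8PIN_W = 150   # one 8-pin connector
BOARD_CEILING_W = PCIE_SLOT_W + PCIE_6PIN_W + PCIE_8PIN_W  # 300W spec limit

single_gf100_tdp = 250                       # assumed single-GPU board power

naive_dual = 2 * single_gf100_tdp            # 500W: far beyond the 300W ceiling
per_gpu_budget = (BOARD_CEILING_W - 50) / 2  # leave ~50W for memory/VRM/fan

print(naive_dual)      # 500
print(per_gpu_budget)  # 125.0 -> each GPU needs deep clock/voltage cuts
```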
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
McCartney said:
I agree with you 100 percent, evolucion. I wasn't stating that ATi can somehow catch up if Fermi is indeed faster, but that excuses will be made as a marketing move. And while those tricks may be gone now, the efficiency is a problem. Isn't the goal of hardware design to allow a better abstraction? I think it's quite sad that ATi has to optimize so much more frequently. So what if nVidia's cards win in 80 percent of "useless" games; all I'm saying is that people should open their eyes and realize that the card isn't as general purpose.

I'm not using general purpose to defend stuff like Folding@home or any junk like that, but the future of parallelization favors architectures like nVidia's. You can send jobs to any of the shaders since they're all equally capable, which allows a lot more work to get done. Imagine the overhead you would incur by continually having your superscalar shader delegate instructions to yet another set of worker shaders... It's a bit messy.

I hope one day ATi has this vision and markets its product in all areas of computing science, just as nVidia has. I know I will be happy to see that day. Right now I see one company pushing itself and getting flamed for being late on what could be a revolutionary video card.

*shrug*

There's one reason for that.
Survival.

NV needs to go general purpose to survive; ATI doesn't. It's as simple as that.
Saying that NV is doing this or that in non-gaming markets, and that it makes them better, a better company, whatever: it doesn't.
They are doing what they have to do to survive. Who are they competing against with their HPC products? CPU makers. Name a CPU maker? Oh, how about AMD. Who owns ATI? Oh, AMD.

NV has no future in chipsets, and they may lose their future in GPUs (once GPUs move on-die), or at least that market might decline somewhat further depending on how things go. That leaves them with two product areas, and they are focusing on them with Tegra and Fermi: mobile and HPC.

You want to tell me that NV is making a better general purpose card? Go ahead.
You want to tell me NV is making a better general purpose card because they want to be revolutionary? I'll tell you to take a hike.
 

Stoneburner

Diamond Member
May 29, 2003
3,491
0
76
ScorcherDarkly said:
The math wasn't meant to be perfect, just a rough demonstration of why it will take a few months before a dual part is possible. And I was right:

http://www.fudzilla.com/content/view/18038/1/

That article is bad even by Fud/inq/SA standards.

"Nobody has any news on a Dual Fermi card so can safely assume it won't be out for a few months. We can also assume Nvidia wants to make one. Please ignore our misleading headline. "
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I'm starting to feel déjà vu with many of these arguments about Fermi's tessellation/potential future performance vs. the 5870's current performance. It reminds me so much of when the GeForce 256 was first released and 3dfx was screaming that hardware T&L wasn't relevant yet.

How is 3dfx today, anyways?

EDIT: With a little bit of GeForce FX 5800 in there too.
 
Last edited:

ScorcherDarkly

Senior member
Aug 7, 2009
450
0
0
Stoneburner said:
That article is bad even by Fud/inq/SA standards.

"Nobody has any news on a dual Fermi card, so we can safely assume it won't be out for a few months. We can also assume Nvidia wants to make one. Please ignore our misleading headline."

What exactly is misleading about "Dual GeForce 400 to come later"?
 

McCartney

Senior member
Mar 8, 2007
388
0
76
How could you argue otherwise, Lonyo? I'm sure nVidia could refresh and beef up their GT200 architecture and sell it, but what's the point? I'm sure they learned their lesson when they put out the FX5900 Ultra, which was just a ridiculously high-clocked FX5800 (which got pwnt by the Radeon 9700 Pro at the time).

I respect your alternative stance on why they're taking this long, but I do not think it's a design issue as much as it is a yield issue.

We will all see though, and it will be interesting.
 

McCartney

Senior member
Mar 8, 2007
388
0
76
tviceman said:
I'm starting to feel déjà vu with many of these arguments about Fermi's tessellation/potential future performance vs. the 5870's current performance. It reminds me so much of when the GeForce 256 was first released and 3dfx was screaming that hardware T&L wasn't relevant yet.

How is 3dfx today, anyways?

EDIT: With a little bit of GeForce FX 5800 in there too.

QFT, but the FX 5800 didn't have the lasting power of the GeForce 256. I remember that bad boy lasting me two very solid years. What a great card. I had the Guillemot edition, too...
 

McCartney

Senior member
Mar 8, 2007
388
0
76
dguy6789 said:
You make a post telling people who don't know what they are talking about to be quiet, and then you post that wall of complete and utter nonsense? How ridiculous.

What I think is ridiculous is arguing the superiority of a card whose NDA hasn't even lifted yet.

I am also astounded to see you ridicule my claims and my concern about the architecture of these video cards. It's unfair to expect great performance metrics without first understanding how they are achieved.

If we were all told something and took it at face value, the world would be a big heap of doo-doo.
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
McCartney:

Let me get this straight: You want us to praise nVidia (nothing wrong with that in and of itself) despite being late to the party with anything competitive (it can certainly be argued that timing is of minor importance) for having a more general purpose card (why is this even a good thing?) that is "superior" when all indications are that it more or less trades blows with ATi's "outdated" stuff (err...) AND will more likely than not cost a boatload more (wha?). Then when it releases, you want us to parade merrily online to buy up all the Fermi cards even at insane prices because you believe nVidia folks are visionaries? That about sum it up? How's their Kool-Aid tasting these days?

I can understand you respecting them for believing that they are more revolutionary. Whether they are or aren't, the fact that YOU believe they are is certain. I myself have respect for companies that I would believe to be revolutionary as well. But I generally show my appreciation by purchasing said revolutionary products when they are actually worth my money. I wish nVidia the best on this revolutionary path. I once bought them when they were the best value for my money and I will continue to do so when they are again. Until then, they don't get my money. Visionary is neat but it doesn't run my games any faster. General purpose sounds cool but that's what the cpu is for. When I plop down money for a gfx card, I want it to accelerate graphics. I really don't care if it also knows how to make me breakfast.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
This isn't a discussion about the engineering genius behind the designs of the cards. This is a discussion about the end result that matters for us consumers: How expensive it will be, how easy or hard will it be to get one, and how well it will perform. People want to know if they should buy one or buy a Radeon 5000. People want to know if it was worth waiting 6 months.

There are certain performance expectations to be had for the device. There are also expectations of price. These expectations are based on competition and history. Regardless of how brilliant the design might be in theory or how smart the engineers behind it may be, if the card doesn't meet price and performance expectations, it will let down enthusiasts. One could spend all day discussing how amazing and revolutionary NV30, R600, and even Prescott were on paper, but to what end?
 
Last edited:

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
They perform the same to me.

What is more important is their DX10 and DX11 support.

Who's optimized for what engines.

In benchies they are the same, guys, but real-world game performance, who knows, because you don't know if the game is optimized for Nvidia or AMD.

From what I've seen over the years, games are optimized for Nvidia chips.

So we buy an Nvidia chip. Go Gang Green lol
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
TemjinGold said:
McCartney:

Let me get this straight: You want us to praise nVidia (nothing wrong with that in and of itself) despite being late to the party with anything competitive (it can certainly be argued that timing is of minor importance) for having a more general purpose card (why is this even a good thing?) that is "superior" when all indications are that it more or less trades blows with ATi's "outdated" stuff (err...) AND will more likely than not cost a boatload more (wha?). Then when it releases, you want us to parade merrily online to buy up all the Fermi cards even at insane prices because you believe nVidia folks are visionaries? That about sum it up? How's their Kool-Aid tasting these days?

I can understand you respecting them for believing that they are more revolutionary. Whether they are or aren't, the fact that YOU believe they are is certain. I myself have respect for companies that I would believe to be revolutionary as well. But I generally show my appreciation by purchasing said revolutionary products when they are actually worth my money. I wish nVidia the best on this revolutionary path. I once bought them when they were the best value for my money and I will continue to do so when they are again. Until then, they don't get my money. Visionary is neat but it doesn't run my games any faster. General purpose sounds cool but that's what the cpu is for. When I plop down money for a gfx card, I want it to accelerate graphics. I really don't care if it also knows how to make me breakfast.

My thoughts as well, well summarized. I'll add one point: it's not that the consumer couldn't wait for Nvidia's "revolutionary" product, it's the fact that Nvidia said it was supposed to be out 3-6(?) months ago. I don't like being cheated or misinformed like this. My time is precious.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
tviceman said:
I'm starting to feel déjà vu with many of these arguments about Fermi's tessellation/potential future performance vs. the 5870's current performance. It reminds me so much of when the GeForce 256 was first released and 3dfx was screaming that hardware T&L wasn't relevant yet.

How is 3dfx today, anyways?

EDIT: With a little bit of GeForce FX 5800 in there too.

Except the 5xxx cards have tessellation capability as well. And I don't get it: according to the benches shown, at 8xAA the 5850 is very, very close to the GTX470 (19FPS vs. 20FPS) and the 5870 is ahead of the GTX470 at 23FPS.

At 4xAA the GTX470 is the fastest, faster than the 5870 by 2FPS and the 5850 by 7FPS.

So, depending on the situation, the 5850 is within roughly 5% of the GTX470, and the 5870 is a little faster or slower.

This doesn't at all seem like 3dfx lacking hardware T&L. These are two different brands of GPUs that both have the ability to do tessellation. I don't think your comparison is a very good one...
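Putting rough percentages on those 8xAA numbers (a quick sketch using only the FPS figures quoted above):

```python
# Percent gaps from the 8xAA figures quoted above (FPS: 5850=19, GTX470=20, 5870=23).
def gap(a, b):
    """How far ahead a is of b, in percent."""
    return (a - b) / b * 100

print(f"GTX470 vs 5850: +{gap(20, 19):.1f}%")  # ~+5.3%
print(f"5870 vs GTX470: +{gap(23, 20):.1f}%")  # ~+15.0%
```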
 

cheesehead

Lifer
Aug 11, 2000
10,079
0
0
Does anyone know what's up with Linux support for ATi products? I'm a big Linux fan, and from what I've seen, Nvidia has been very good about driver support. Admittedly, I'm running a bit behind the curve (8800GTX FTW!), but as far as I can tell, Nvidia products tend to get drivers long before ATI products do.
 

ZimZum

Golden Member
Aug 2, 2001
1,281
0
76
tviceman said:
I'm starting to feel déjà vu with many of these arguments about Fermi's tessellation/potential future performance vs. the 5870's current performance. It reminds me so much of when the GeForce 256 was first released and 3dfx was screaming that hardware T&L wasn't relevant yet.

How is 3dfx today, anyways?

Not a valid analogy at all, seeing as ATI GPUs have been capable of hardware tessellation for about 10 years, and nVidia still doesn't have a card capable of tessellation that you can currently buy.
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
cheesehead said:
Does anyone know what's up with Linux support for ATi products? I'm a big Linux fan, and from what I've seen, Nvidia has been very good about driver support. Admittedly, I'm running a bit behind the curve (8800GTX FTW!), but as far as I can tell, Nvidia products tend to get drivers long before ATI products do.

Considering the fact that they have had the same product since November 2006, it's a no-brainer that they have the better drivers, isn't it?