5850 just as fast as a 5870?

Page 7 - AnandTech community forums

ugaboga232

Member
Sep 23, 2009
144
0
0
http://www.xtremesystems.org/forums/showthread.php?t=235181&page=7

First post on that page: you can see that they do many trials of core vs. memory. Going from 900 to 1300 gives a 12% bonus. That was the first link I posted.

Also, stop with the minimum fps. And address my issue with the 5970, which is a 5870x2 that, according to you, with double the bandwidth should give the performance it should have (~4x a 4870).

Azn, please show us a benchmark that shows that the 5870 has significant drops to its minimum.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
Quite simply I see your mass as ignorant blind nature. Quick to judge, discredit, blame, hate, follow the leader cause you think he cool, bla bla

Forget there's proof and examples. Forget I was right all along except call out on few people.

We might see things differently but did you ever think for a second that I saw little bit more than what you saw?

That is the most ridiculous, blind, arrogant thing I have read in ages. You argue by trying to pretend to be the moral one, trying to make it out that you are the victim in some way to mask your foolishness. You are pretty good at it, I must say.

I feel sorry for you if you actually believe that tripe. So much more to the world if you allow yourself to see your own bias once in a while, and actually learn. How depressing to live in such a closed, unchanging box.. It's fine though, nothing I say can affect you, because you are always right. There is no need to worry about anything, though it must be nice to be a god.

I don't think there is any point in saying much else to you. It will be a waste of time.

You seem to seek and crave being right.. I'm sorry but to be proven right in this situation you have to follow the rules of facts and science. You cannot ignore evidence that is valid, to substitute your own which is not. Again, your links are meaningless because they change more than one variable, in fact they change the very variables we are contesting matter. If you can't see what is wrong with that then there is no point in going on.

Without logic and rationality your opinion is as worthwhile as wet socks in a Canadian winter. We have said time and time again that your "facts" are not valid because of confirmation bias: they include the very variables you are trying to disprove.

Let me ask you:

If your buddy had a faster car than you, and you wondered why, how would you figure it out? Let's say he says it is your engine that is not powerful enough to match his car, yet you think the body is not aerodynamic enough. Then you go and change the engine and the body, and conclude that the body must have been the problem and the engine was not, despite having changed both. Are you going to tell your friend that he is "quick to judge, discredit, blame, hate," that he is blind? Do you not see the problem with that? I am not trying to attack you, but you use invalid data over and over. You are coming off as a fool.

Regardless of how probable or improbable you consider drivers and so on to be as an issue, you cannot reasonably guarantee they are no issue. You can assume they pose little issue, but you cannot prove they have no bearing without evidence. Yet despite this you continue to use examples that disregard this possibility; you fall into the same bias pattern. We have provided many examples of viable data (data that changes only one variable, such as the memory clocks) to show you that the memory, while still changing performance, is minor. If you cannot understand this, it is a flaw with you and your intellect, not with the other issues.
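
The one-variable-at-a-time point is easy to sketch in code. Here is a toy Python model (all the numbers are made up purely for illustration, not real card data) showing why a confounded comparison can't attribute a gain to either variable:

```python
# Hypothetical illustration: why changing two variables at once proves nothing.
# The toy model below has a card whose real bottleneck is the core.

def fps(core_mhz, mem_mhz):
    # Performance is capped by whichever resource runs out first.
    return min(core_mhz * 0.1, mem_mhz * 0.5)

# Confounded comparison: both core AND memory change between the two cards,
# so the gain cannot be attributed to either one alone.
confounded_gain = fps(850, 1200) / fps(625, 900)

# Controlled comparison: memory alone changes, core held fixed.
# In this toy model the memory bump contributes nothing.
mem_only_gain = fps(850, 1200) / fps(850, 900)

print(confounded_gain, mem_only_gain)
```

The confounded test shows a healthy gain, while the controlled test reveals the memory had nothing to do with it; that is the whole argument about holding variables fixed.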

If you want to have a valid point all you have to do is show us one viable set of data.. Just one example of a 5870 with more memory BW making a huge difference. You can't be comparing apples to pineapples...
 

AzN

Banned
Nov 26, 2001
4,112
2
0
I do not, and have never, said that memory BW could not be the issue. My opinions are actually falsifiable, unlike certain folks' around here.

AZN, I simply believe, and have tried over and over again to explain, that it is very much possible that other issues could be at play. I am not saying you are wrong, just a bit silly with your blind stubborn nature.

I merely think that it is more likely due to drivers, but it could also be hardware, maybe even BW.

As to why you are unscientific: you suffer from a severe confirmation bias. There are possible reasons for this behavior that are not bandwidth, but you ignore them. Thus you are comparing data sets that change these other possible variables because you refuse to see them. If you were to do things properly you would only bring up data that changes one thing at a time, or at least as few things as possible. Comparing a 4870x2 to a 5870 changes several of the possibilities we have brought up. No matter how unlikely you think they are, it is still unscientific to ignore the possibility by changing them all to try and prove a point. All you are accomplishing is showing that it could still be any of the reasons we have already brought up.

If you did this as your career you would have no job unless you learned to accept other possible answers. A good scientist always accepts that they could be wrong. In fact, science is about trying to prove you are wrong more so than proving you are right. If you can't prove yourself wrong, then you have a good theory. You do not have a good theory, and I repeat that it is nonsense, because all of the data you use as proof ignores the scientific method entirely.

If you want to compare things, choose one of drivers, bandwidth, and core architecture. Leave the others the same, and then compare. Proving that it could be memory BW does not prove you right, or us wrong. You do not seem to understand how one goes about dismissing a hypothesis.

For a hypothesis to be valid there has to be a way to disprove it. To disprove something you have to prove it wrong, but if there is no way to do this ("There is an elephant beside me"), then the claim becomes nonsense. Repeatedly using an example that does not prove you wrong is a sorry way to go about things.

The reason I'm not budging on my stance is because I've done extensive testing between bandwidth and fillrate to come to this conclusion. Something you haven't, it seems, as you don't know how bandwidth plays with fillrate.

Never mind that there are plenty of examples in this thread already reinforcing what I've been saying. You are quick to discredit me because you already made up your mind that I'm a hard-headed guy who's wrong.

All video cards respond the same way to fillrate and bandwidth, whatever the architecture currently out on the market. There's a sweet spot, but you can always use more bandwidth until the core is completely saturated. If I show you numbers that reinforce what I've been saying, will you stop antagonizing what I've been saying? I can drop the bandwidth on my GTX260 to 5770 levels and show you the effect on minimum frames.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
[Image: 3dm-color-fill.gif (3DMark color fill benchmark results)]


This is a pretty meaningless graph; it lets you get away with things that you would not be able to do in games.

It's a very crude way of getting at the raw memory bandwidth. It would be like setting up the HD series with MAD operations and claiming they get true teraflop performance.

Here we go again.

If what you've been saying is true about crossfire only improving bandwidth by 50%, that bench would not show the 4870x2 beating the 5870 by 50% in this benchmark.

You do realize 3dmark is based on a game engine. If you are going to discredit me, you might as well discredit TechReport.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Also I would like to say that I have a card that's memory bandwidth limited.

A mobile HD 4650 DDR2, and let me tell you, clocking the memory gives LINEAR performance increases. If you are memory bandwidth limited, it shows BIG TIME.

The HD 4670, clocked the same but with GDDR3 clocked higher, gets that much higher results in 3dmark.

It does not produce little drops here and there.

Your example isn't even what I was proposing, as I never said the 5870 or 5770 was bandwidth starved to the point of linear performance increases. What I did say is that bandwidth plays a big role in minimum frame rates, which also increases avg. frame rates.

Your example is probably an extreme case where the bandwidth is so small relative to the amount of fillrate that a linear increase comes just from clocking the memory.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
http://www.xbitlabs.com/articles/video/display/radeon-hd5850_10.html#sect0

In this review, the 5850 was clocked to the same speed as a 5870 (850/4800) and when the smoke cleared, it was a whopping 2% slower than its older brother (they also test a 1010/4960 OC, but that's beside the point). The 850/4800 clocks were achieved using stock voltage, no less.

The 5870 has 160 more SPs (+11%) and 8 more TMUs (+11%) than the 5850, and is clocked higher at 850MHz core (+17%) and 1200MHz mem (+20%), yet it only manages to be 2% faster at the same clock speeds. This suggests that those extra SPs and TMUs are more dead weight than an actual addition to the 5870's performance.

I would really like to see this review's methodology replicated in another review and tested for repeatability, but for the sake of making inflammatory statements I will jump to conclusions and say that the 5870 is the 2900xtx revisited.

This only further fuels my desire for a 1280 SP 5830. How far from the 5870 would it lie with identical clocks?
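
For what it's worth, a quick sanity check of the percentages above, using the commonly cited spec numbers for these two cards:

```python
# Commonly cited specs: stream processors, TMUs, core MHz, memory MHz.
hd5870 = {"sp": 1600, "tmu": 80, "core": 850, "mem": 1200}
hd5850 = {"sp": 1440, "tmu": 72, "core": 725, "mem": 1000}

for k in hd5870:
    gain = (hd5870[k] - hd5850[k]) / hd5850[k] * 100
    print(f"{k}: +{gain:.0f}%")
# sp: +11%, tmu: +11%, core: +17%, mem: +20%
```

So at matched clocks the only remaining difference is the ~11% extra SPs and TMUs, which is what makes the 2% result so striking.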



Discuss.

http://www.xbitlabs.com/articles/video/display/radeon-hd5850_6.html#sect0

When I look at this review I don't see any discrepancies though. Increasing Vcore results in better FPS.

I think we have to realize that just because the GPU is 10% stronger doesn't mean frame rates will increase 10%. Frame rates are a combination of CPU time and GPU time together.
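
One simplified way to picture that, assuming CPU and GPU work fully overlap so the slower of the two paces each frame (a sketch, not a claim about any specific game):

```python
# Toy frame-time model: a frame is ready only when the slower of the
# CPU and GPU is done with it, so GPU gains stop mattering once the
# CPU becomes the pacing side.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

base = fps(cpu_ms=10.0, gpu_ms=10.5)          # GPU-bound: ~95 FPS
faster = fps(cpu_ms=10.0, gpu_ms=10.5 / 1.1)  # 10% faster GPU -> now CPU-bound

print(base, faster, faster / base)  # ~95.2, 100.0 -> only ~5% more FPS
```

A 10% stronger GPU bought only ~5% more frames here, because halfway through the improvement the CPU took over as the bottleneck.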
 

AzN

Banned
Nov 26, 2001
4,112
2
0
http://www.xtremesystems.org/forums/showthread.php?t=235181&page=7

First post on that page: you can see that they do many trials of core vs. memory. Going from 900 to 1300 gives a 12% bonus. That was the first link I posted.

Also, stop with the minimum fps. And address my issue with the 5970, which is a 5870x2 that, according to you, with double the bandwidth should give the performance it should have (~4x a 4870).

Azn, please show us a benchmark that shows that the 5870 has significant drops to its minimum.

Stop with minimum fps? That's the only thing I've been saying in this thread, and even in the other thread about the 5770. You seem to be trolling. I've got nothing more for you.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
This was the driver release after I decided to invest in my 4850 X2 2GB... if ATI can pull off a similar ~15-20% average increase in games I play (FPS, RTS) by March (Fermi's release), then I will replace my 5870 with a 5970 2GB monster. :)

The HD5970 actually seems like a bargain compared to HD5870.

A person gets two full 1600 sp Cypress cores with that card for only 50% more money.

With Eyefinity though I would hope ATI would release a 4GB (2GBx2) model.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
That is the most ridiculous, blind, arrogant thing I have read in ages. You argue by trying to pretend to be the moral one, trying to make it out that you are the victim in some way to mask your foolishness. You are pretty good at it, I must say.

I feel sorry for you if you actually believe that tripe. So much more to the world if you allow yourself to see your own bias once in a while, and actually learn. How depressing to live in such a closed, unchanging box.. It's fine though, nothing I say can affect you, because you are always right. There is no need to worry about anything, though it must be nice to be a god.

Believe what you like and please stay on topic. I wasn't talking about this entirely on this thread anyway although it was included.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Your example isn't even what I was proposing, as I never said the 5870 or 5770 was bandwidth starved to the point of linear performance increases. What I did say is that bandwidth plays a big role in minimum frame rates, which also increases avg. frame rates.

Your example is probably an extreme case where the bandwidth is so small relative to the amount of fillrate that a linear increase comes just from clocking the memory.

I think BFG10K has tested the HD57xx and found the GPU and memory bandwidth to be "balanced".

I think that means that either increasing GPU power or memory speed would result in linear gains up to a point.

The trouble with the HD5xxx memory controller (AFAIK) is that it has some type of error correction that makes interpreting the true improvement of increasing the memory OC very, very difficult.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
I think BFG10K has tested the HD57xx and found the GPU and memory bandwidth to be "balanced".

I think that means that either increasing GPU power or memory speed would result in linear gains up to a point.

The trouble with the HD5xxx memory controller (AFAIK) is that it has some type of error correction that makes interpreting the true improvement of increasing the memory OC very, very difficult.

I've seen his article.

I agree when you "only" compare avg. frame rates but what about minimum frame rates?

When you compare the card to, say, a GTX 260 or 4890, the minimum frame rates aren't anywhere near theirs. It's fine for 1680x1050 but not more than that, and it barely gets by at 1920.

http://www.xbitlabs.com/articles/video/display/radeon-hd5770-hd5750_8.html#sect0
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I've seen his article.

I agree when you "only" compare avg. frame rates but what about minimum frame rates?

When you compare the card to, say, a GTX 260 or 4890, the minimum frame rates aren't anywhere near theirs. It's fine for 1680x1050 but not more than that, and it barely gets by at 1920.

http://www.xbitlabs.com/articles/video/display/radeon-hd5770-hd5750_8.html#sect0

That is a good point. I was actually expecting AMD to give us more memory bandwidth too.

I wonder what happens when Eyefinity is enabled on an HD58xx or HD5970? Three cheap 1080p monitors actually have about 50% more combined resolution than one 2560x1600 display.
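
The combined-resolution claim checks out, give or take a couple of percent:

```python
# Pixel counts: three 1080p panels vs. one 2560x1600 panel.
three_1080p = 3 * 1920 * 1080   # 6,220,800 pixels
one_2560 = 2560 * 1600          # 4,096,000 pixels

print(three_1080p, one_2560, three_1080p / one_2560)
# -> about 1.52x, i.e. roughly 52% more pixels to fill
```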
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
With no proof that drivers are the culprit I might add.
It’s sometimes slower than a 4850, and sometimes faster than a GTX285, and you want proof the drivers need work? Are you for real?

Non bandwidth hardware limitation like?
Scheduling, caches, interpolators, raster setup, etc. There are a plethora of reasons that are elementary to someone with even the most basic understanding of 3D architecture. But I wouldn’t expect anything different from someone who has claimed in the past that bandwidth is the only thing that can affect fillrate.

When the only thing the 4870x2 has over the 5870 is bandwidth, it's the most likely limitation of the 5870.
Except, oh, I don't know, the fact that it has several different architectural traits, and isn't even multi-GPU. But aside from that, yeah, the two are exactly the same. :rolleyes:

This is a perfect example of bandwidth not saturating 5870 theoretic peaks while 4870x2 has more bandwidth that peaks more theoretical with less theoretical peaks.
It’s a nonsensically theoretical example that is your modus operandi whenever real evidence is presented to you that disproves your fictitious ideas. Synthetic fillrate tests mean nothing because games don’t work like that. We’ve been over this before repeatedly.

To discount real games tested with memory clocks in favor of 3DMark fillrate tests is comical beyond belief, and reveals a sorely lacking understanding of reality.

True that however the hardware changes were mostly artificial between cache and so on that already takes advantage.
This doesn’t appear to be a valid English sentence. When you figure out what you’re actually trying to say, please rephrase your statement so I can actually respond.

Which goes back to the bandwidth limitation. in the above picture.
No it doesn’t, because it has absolutely nothing to do with what I stated. If the scheduler isn’t using the extra shaders then bandwidth means squat to that equation.

These are what, two seven-year-old games? They probably never needed any real optimization, as they were already fast enough for ATI. Aside from changing a few numbers in their drivers from RV770 to Cypress, there really isn't anything different between the RV770 and Cypress architectures besides tessellation improvements, fetch, and so forth.
Please stop pretending like you know what you’re talking about when you don’t.

Of course architecture matters in this case when the card behaves entirely different to games. You claim 5770 beating GTX285 in 1 game. We have these kind of anomalies going from entirely different architecture to the next. This is nothing new.
No, I’m claiming a lot more than that. Again it’s obvious you never read the benchmarks or understood them, you just keep waving around worthless 3DMark tests.

Again. I've explained to Schmide that I don't trust Hardocp's benches. Reason being... They've had their minimum frames all over the place in the past. It seems they use fraps to record their frame rates but don't get rid of hard drive seeks or margin of errors. I don't see the consistency in their benches.
So you accept a single minimum point that could've come from anywhere as accurate, but you dismiss an entire benchmark run because "it's all over the place"?

If an entire benchmark run is "all over the place", where do you think a single minimum will be, hmm? You do realize that the minimum is contained inside that run, right?

This concept has been repeated to you several times by several people, yet you still don’t appear to comprehend it. This is elementary to someone with the most basic understanding of science and statistics.
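
The statistical point is easy to demonstrate in a few lines: the minimum is just one sample drawn from the same run the average is computed from (the frame rates below are synthetic, purely for illustration):

```python
import random

# One synthetic benchmark run: 500 FPS samples around 60 with noise.
random.seed(0)
run = [60 + random.gauss(0, 5) for _ in range(500)]

avg = sum(run) / len(run)
minimum = min(run)

# The minimum is contained inside the run: it is a single sample,
# so if the whole run is noisy, the minimum is the noisiest, most
# extreme point of all.
assert minimum <= avg
print(round(avg, 1), round(minimum, 1))
```

An average pools hundreds of samples and is stable; the minimum is a single worst-case draw, which is exactly why trusting a lone minimum while distrusting the run it came from is backwards.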

8800ultra had more bandwidth than it really needed and testing minimum fps would be a waste of time.
Is it a waste of time because you say so, or because the results back your beliefs?

The fact is, you either accept all of my results or you accept none of them. For you to accept the 8800 Ultra's results and then turn around and discount the 5770's results because they use averages (like the 8800's) highlights your biased agenda.

In the case of the 5770, let me remind you it has much more processing power and 40% more texture fill, and nearly the same pixel fill, yet it has 25% lower bandwidth than the Ultra.
And? This is yet again apples vs oranges. We’re talking about the 5770 being held by drivers. What relevance does the 8800 Ultra have? It’s like the others say: you constantly chop and change all over the place in the hopes that people won’t notice you’re pretending to know what you’re talking about.

Changing a few numbers around from 4xxx to 5xxx is somehow working differently?
This doesn’t even appear to address what I posted. If you don’t want to address what was posted then don’t quote it.

Again, mostly a prefetching difference between Cypress and RV770.
Explain to us what this means in your own words, citing specific and technical examples. Then explain how it’s relevant to anything we’re discussing right now.

BFG, tell me what you think about CF/SLI and how bandwidth is applied, or do you just agree with everyone because someone said so and I didn't? You must have an idea if you don't think the bandwidth doubles.
It isn’t doubled. Claiming it’s doubled is no different to claiming my E6850 is a 6 GHz processor because I have two cores running at 3 GHz.

While AFR works on the frames independently, the bandwidth isn’t shared between cards because the cards don’t share frames. That is to say, each frame only has the bandwidth that a single card can allocate to it.

That and due to inter-GPU dependencies you’ll have duplicate memory reads and writes that aren’t present on a single card, thus reducing bandwidth further.
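
That AFR distinction can be sketched as follows (the 115.2 GB/s figure is the commonly quoted bandwidth of a single HD 4870-class card; this is a sketch of the argument, not a measurement):

```python
# Under alternate-frame rendering (AFR), frames are dealt out round-robin:
# each frame is drawn entirely by one GPU, so the bandwidth available TO
# ANY SINGLE FRAME never exceeds one card's bus, even though the
# aggregate across the pair doubles.

SINGLE_CARD_BW = 115.2  # GB/s, one 4870-class card

def per_frame_bandwidth(n_gpus):
    # A frame only sees the bandwidth of the GPU that draws it.
    return SINGLE_CARD_BW

def aggregate_bandwidth(n_gpus):
    # Marketing-style total across all GPUs.
    return SINGLE_CARD_BW * n_gpus

print(per_frame_bandwidth(2), aggregate_bandwidth(2))
# per-frame bandwidth is unchanged at 115.2; only the aggregate is 230.4
```

This is the E6850 analogy in code: two 3 GHz cores are not a 6 GHz processor, and two 115.2 GB/s buses are not 230.4 GB/s per frame.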

Vid card architectures act similarly with respect to bandwidth and fillrate; you don't need a 5xxx card to get the picture. It's the same reason I was able to tell you that your Ultra was fillrate limited without even owning one.
And yet you couldn’t tell me that the GTX260+ was a balanced part, and that the core is the 5770’s primary limitation. Oh that’s right, you ignore those results because they didn’t fit into your imaginary reality.

I couldn't care less what anyone buys, but when reviewers don't test minimum fps, only test avg frame rates to reach a conclusion, and continue to misinform about bandwidth, that I have a problem with.
Again, the conceptual flaw of a single data point has been explained to you repeatedly. I suggest consulting a basic statistics textbook and reading up further on the issue.

Now imagine minimum frame rates where that % is multiplied by the bandwidth.
Imagine – you do that a lot. Either put up legitimate benchmark runs or retract your claims. You imagining something is not evidence.

I love how you are quick to discredit me about anything long as you have the urge to agree with the mass. This won't be the first time and I hope it won't be the last.
I’ve never cared about public opinion and I was arguing the same thing before those other guys came here.

It’s cute how you continue to spread misinformation and pretend to know what you’re talking about while playing the innocent victim (“multiple accounts, they’re out to get me!”).

I bet you never considered that those accounts are in fact several people like myself that can see the holes in your arguments.
 

ugaboga232

Member
Sep 23, 2009
144
0
0
Notice in the xbitlabs article the 5870 has the same minimum fps as the GTX 275 but 33% more avg fps. What could that possibly mean? The 5870 spends much less time at 26 fps than the 275 does.

Since there seems to be a wealth of information, show me in games and not fillrate benchmarks how overclocking the memory gives a near linear increase.

Thank you, that is all.
 

Meaker10

Senior member
Apr 2, 2002
370
0
0
Here we go again.

If what you've been saying is true about crossfire only improving bandwidth by 50%, that bench would not show the 4870x2 beating the 5870 by 50% in this benchmark.

You do realize 3dmark is based on a game engine. If you are going to discredit me, you might as well discredit TechReport.

I said no such thing about 50%; that was another poster. What I did say is that writes are not doubled, but in this case the test avoids that because of its nature.

It's just a little subtest that's got a very limited scope. It's not even part of the main benchmark, for that reason.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
I've seen his article.

I agree when you "only" compare avg. frame rates but what about minimum frame rates?

When you compare the card to, say, a GTX 260 or 4890, the minimum frame rates aren't anywhere near theirs. It's fine for 1680x1050 but not more than that, and it barely gets by at 1920.

http://www.xbitlabs.com/articles/video/display/radeon-hd5770-hd5750_8.html#sect0

What are you talking about? In that specific link (that specific page), the minimum FPS is usually pretty similar, relatively. The only time it's worse is when the card performs worse overall. As you've even said, Nvidia and ATI cards have different performance levels on different games...it's just a direct correlation. Plus, when the minimum FPS is about the same or higher than the other cards, the avg FPS is usually higher...meaning it spends less time hitting a minimum FPS. That's just basic math.

Since the min FPS doesn't seem to be drastically different and/or outside of expected ranges (relatively), you're basically just shooting yourself in the foot.

Edit: I'd like to come back and do some "computations" on those benchmarks... see, percentage-wise, how the minimum and avg FPS compare (to other cards as well, to see if the results are consistent).

Also, for those of you trying to prove that the memory is not bandwidth limited, I could run some tests on my 5770. I've heard some words mentioned about it being a linear increase in performance if it's bandwidth limited. Just tell me what I need to do for proper results, and I'll do it.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
I said no such thing about 50%; that was another poster. What I did say is that writes are not doubled, but in this case the test avoids that because of its nature.

It's just a little subtest that's got a very limited scope. It's not even part of the main benchmark, for that reason.

I don't think anyone said anything about 50%. I mentioned that it would depend on the read/write balance, and at 50/50 it would be at most 50% more BW... :rolleyes:

That aside, the issue is not how much BW it has; it is how it changes performance, and how it comes into play in a game. We know how much bandwidth the cards have, and we can extrapolate how much the 4870x2 should have, though it would depend on the program. The point is that many believe this amount of BW is still plenty, not that it doesn't have less than card "x".
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
Also, for those of you trying to prove that the memory is not bandwidth limited, I could run some tests on my 5770. I've heard some words mentioned about it being a linear increase in performance if it's bandwidth limited. Just tell me what I need to do for proper results, and I'll do it.

BFG already did those tests and showed that the scaling is quite sublinear. Do them if you want, but expect certain parties to ignore them completely.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,146
9,413
136
http://www.xbitlabs.com/articles/video/display/radeon-hd5850_6.html#sect0

When I look at this review I don't see any discrepancies though. Increasing Vcore results in better FPS.

I think we have to realize that just because the GPU is 10% stronger doesn't mean frame rates will increase 10%. Frame rates are a combination of CPU time and GPU time together.

-I'm not sure what you mean by "increasing Vcore results in better FPS." Providing the core with more voltage does nothing for FPS by itself. I'll interpret this as "increasing core clocks results in better performance," which no one is denying.

Regardless of what "times" the framerates are a product of, if 10% more shader & TMU units result in a 2% gain, why not just leave the extra shaders off and come away with a cooler, smaller, less power-hungry die?
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Oh, I know. But he didn't have minimum FPS...remember?
While this topic was going on I pulled the minimums from my log files for the main benchmarks I ran for my 5770 bottlenecking article.

From the 5 games I have minimums for, all of them show the same trends with their minimums as they do with their averages.

That is to say, when the core affects the average the most, it also affects the minimum the most, and vice versa. This debunks Azn’s stance where he claimed the core can’t affect the minimum more than bandwidth can.

When 5 out of 9 tested games confirm the averages with their minimums, I think it's inductively reasonable to assume all of them will show the same trend, given we haven't seen evidence to the contrary.

I'm not posting them until Azn admits he'll retract his claims when I do so, because I can just see that he'll ignore them and I'll be wasting my time yet again.