With no proof that drivers are the culprit, I might add.
It's sometimes slower than a 4850 and sometimes faster than a GTX285, and you want proof the drivers need work? Are you for real?
A non-bandwidth hardware limitation like what?
Scheduling, caches, interpolators, raster setup, etc. There is a plethora of reasons that are elementary to anyone with even the most basic understanding of 3D architecture. But I wouldn't expect anything different from someone who has claimed in the past that bandwidth is the only thing that can affect fillrate.
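To put some rough numbers on that (assumed, Cypress-class figures chosen purely for illustration, not a measurement of any card), here's a back-of-the-envelope sketch of why a card can miss its theoretical fillrate peak even when bandwidth has headroom to spare:

```python
# Back-of-the-envelope sketch with assumed, Cypress-class numbers
# (illustrative only, not a measurement of any specific card).
rops            = 32        # assumed ROP count
core_clock_mhz  = 850       # assumed core clock in MHz
bandwidth_gbps  = 153.6     # assumed memory bandwidth in GB/s
bytes_per_pixel = 4         # plain 32-bit colour write, no blending or Z

# Two independent ceilings on pixel throughput:
rop_limited_gpix       = rops * core_clock_mhz / 1000.0    # Gpixels/s
bandwidth_limited_gpix = bandwidth_gbps / bytes_per_pixel  # Gpixels/s

print(f"ROP-limited fillrate      : {rop_limited_gpix:.1f} Gpix/s")
print(f"bandwidth-limited fillrate: {bandwidth_limited_gpix:.1f} Gpix/s")

# If a measured fillrate comes in below BOTH ceilings, something other than
# bandwidth (scheduling, raster setup, cache behaviour, ...) is the limiter.
```

With those made-up numbers the bandwidth ceiling sits well above the ROP ceiling, so if a synthetic test still comes in low, bandwidth clearly isn't what's holding the card back.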
When the only thing the 4870 X2 has over the 5870 is bandwidth, it's the most likely limitation of the 5870.
Except, oh, I don't know, the fact that it has several different architectural traits and isn't even multi-GPU. But aside from that, yeah, the two are exactly the same.
This is a perfect example of the 5870 not saturating its theoretical peaks because of bandwidth, while the 4870 X2, with more bandwidth, gets closer to its theoretical peaks despite those peaks being lower.
It's a nonsensically theoretical example, which is your modus operandi whenever real evidence is presented that disproves your fictitious ideas. Synthetic fillrate tests mean nothing because games don't work like that. We've been over this repeatedly.
To discount real games tested at different memory clocks in favor of 3DMark fillrate tests is comical beyond belief, and reveals a sorely lacking understanding of reality.
True that however the hardware changes were mostly artificial between cache and so on that already takes advantage.
This doesn't appear to be a valid English sentence. When you figure out what you're actually trying to say, please rephrase it so I can actually respond.
Which goes back to the bandwidth limitation in the above picture.
No it doesn't, because it has absolutely nothing to do with what I stated. If the scheduler isn't using the extra shaders, then bandwidth means squat to that equation.
These are, what, two seven-year-old games? They probably never needed any real optimization because they were already fast enough for ATI. Aside from changing a few numbers in the drivers going from RV770 to Cypress, there really isn't anything different between the RV770 and Cypress architectures besides the tessellation improvements, fetch changes, and so forth.
Please stop pretending you know what you're talking about when you don't.
Of course architecture matters in this case, when the card behaves entirely differently across games. You claim the 5770 beats the GTX285 in one game. We see these kinds of anomalies when going from one architecture to an entirely different one. This is nothing new.
No, I'm claiming a lot more than that. Again, it's obvious you never read the benchmarks or understood them; you just keep waving around worthless 3DMark tests.
Again, I've explained to Schmide that I don't trust HardOCP's benches. The reason being... they've had their minimum frames all over the place in the past. It seems they use FRAPS to record their frame rates but don't filter out hard-drive seeks or account for margins of error. I don't see any consistency in their benches.
So you accept a single minimum point that could've come from anywhere as accurate, but you dismiss an entire benchmark run because it's all over the place?
If an entire benchmark run is all over the place, where do you think a single minimum will be, hmm? You do realize that a minimum is contained inside that run, right?
This concept has been repeated to you several times by several people, yet you still don't appear to comprehend it. This is elementary to anyone with the most basic understanding of science and statistics.
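Since the point keeps getting missed, here's a minimal sketch with entirely made-up frame data (nothing from HardOCP or anyone else) showing why the single minimum of a noisy run bounces around far more than the average of that same run:

```python
# Made-up frame data, purely to illustrate the statistics: the minimum of a
# noisy run is a single sample and bounces around far more than the average.
import random
import statistics

def simulate_run(n_frames=2000, seed=None):
    """One fake benchmark run: ~60 FPS with rare stalls (e.g. a disk seek)."""
    rng = random.Random(seed)
    fps = []
    for _ in range(n_frames):
        frame = rng.gauss(60.0, 5.0)      # ordinary frame-to-frame noise
        if rng.random() < 0.002:          # rare stall
            frame *= rng.uniform(0.2, 0.5)
        fps.append(max(frame, 1.0))
    return fps

avgs, mins = [], []
for run in range(20):                      # repeat the "benchmark" 20 times
    fps = simulate_run(seed=run)
    avgs.append(statistics.mean(fps))
    mins.append(min(fps))

print(f"average FPS spread: {min(avgs):.1f} to {max(avgs):.1f}")
print(f"minimum FPS spread: {min(mins):.1f} to {max(mins):.1f}")
```

With these fabricated numbers the averages stay in a narrow band across repeats while the minimums swing by tens of FPS, which is exactly why hanging an argument on one minimum point is bad statistics.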
The 8800 Ultra had more bandwidth than it really needed, so testing minimum FPS would be a waste of time.
Is it a waste of time because you say so, or because the results back your beliefs?
The fact is, you either accept all of my results or you accept none of them. For you to accept the 8800 Ultra's results and then turn around and discount the 5770's results because they use averages (just as the 8800's did) highlights your biased agenda.
In the case of the 5770, let me remind you that it has much more processing power, 40% more texture fillrate, and nearly the same pixel fillrate, yet it has 25% lower bandwidth than the Ultra.
And? This is yet again apples versus oranges. We're talking about the 5770 being held back by drivers. What relevance does the 8800 Ultra have? It's like the others say: you constantly chop and change all over the place in the hope that people won't notice you're pretending to know what you're talking about.
Changing a few numbers around from the 4xxx series to the 5xxx series somehow makes it work differently?
This doesn't even appear to address what I posted. If you don't want to address what was posted, then don't quote it.
Again, it's mostly a prefetching difference between Cypress and RV770.
Explain to us what this means in your own words, citing specific technical examples. Then explain how it's relevant to anything we're discussing right now.
BFG, tell me what you think about how bandwidth applies in CF/SLI, or do you just agree with everyone else because someone other than me said it? You must have an idea, if you think the bandwidth doesn't double.
It isn't doubled. Claiming it's doubled is no different from claiming my E6850 is a 6 GHz processor because I have two cores running at 3 GHz.
While AFR works on frames independently, the bandwidth isn't shared between the cards because the cards don't share frames. That is to say, each frame only has the bandwidth that a single card can allocate to it.
On top of that, inter-GPU dependencies mean you'll have duplicate memory reads and writes that aren't present on a single card, reducing effective bandwidth further.
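If it helps, the arithmetic is trivial. These are assumed placeholder figures, not measured specs, but they show the gap between the "total" bandwidth on the box and what any single AFR frame can actually use:

```python
# Assumed placeholder figures, not measured specs, just to show the arithmetic.
single_gpu_bandwidth_gbps = 115.2   # bandwidth of one card on its own
afr_duplication_overhead  = 0.10    # assumed fraction of reads/writes that get
                                    # duplicated across both GPUs under AFR

marketing_total  = 2 * single_gpu_bandwidth_gbps
per_frame_usable = single_gpu_bandwidth_gbps * (1 - afr_duplication_overhead)

print(f"'total' bandwidth across both GPUs : {marketing_total:.1f} GB/s")
print(f"bandwidth any one frame can draw on: {per_frame_usable:.1f} GB/s")
```

Double the cards and the aggregate number on paper doubles, but each frame still lives within one card's bandwidth, minus the duplicated traffic.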
Video card architectures behave similarly where bandwidth and fillrate are concerned; you don't need a 5xxx card to get the picture. It's the same reason I was able to tell you that your Ultra was fillrate-limited without even owning one.
And yet you couldn't tell me that the GTX260+ was a balanced part, or that the core is the 5770's primary limitation. Oh that's right, you ignore those results because they didn't fit into your imaginary reality.
I couldn't care less what anyone buys, but when reviewers don't test minimum FPS, draw conclusions from average frame rates alone, and continue to misinform people about bandwidth, I have a problem with that.
Again, the conceptual flaw of a single data point has been explained to you repeatedly. I suggest consulting a basic statistics textbook and reading up further on the issue.
Now imagine minimum frame rates where that % is multiplied by the bandwidth.
"Imagine": you do that a lot. Either put up legitimate benchmark runs or retract your claims. You imagining something is not evidence.
I love how you are quick to discredit me about anything as long as you have the urge to agree with the masses. This isn't the first time, and I hope it won't be the last.
I've never cared about public opinion, and I was arguing the same thing before those other guys came here.
It's cute how you continue to spread misinformation and pretend to know what you're talking about while playing the innocent victim (multiple accounts, they're out to get me!).
I bet you never considered that those accounts are in fact several people who, like myself, can see the holes in your arguments.