
HD3870 vs 8800GT review @ legionhardware.com

Originally posted by: taltamir


So are you saying the review is biased towards NVIDIA or AMD? I am not really sure I understand you...

Besides, DX9 games play great with my last-gen card... I don't need a 3870 or a GT for that.

Both of these cards totally own any older DX9 game... it's those intense DX10 games (and possible future DX10 games) that are the issue... 60 vs. 80 FPS doesn't matter. 15 vs. 20 DOES.
I picked up HL2: LC because it's an "ATi game".

There haven't been many AA tests with DX10. Crysis and other DX10 games with poor code are another story... but there are DX10 games that at least the 8800 GT could push with AA enabled.
 
This is not looking good for ATi, except for the cooler, which seems to be much quieter under load.

I hope for their sake new drivers can help this card.
 
Originally posted by: BFG10K
This is not looking good for ATi, except for the cooler, which seems to be much quieter under load.

I hope for their sake new drivers can help this card.

Forget AMD... this is not looking good for us, the consumers...
I should have just bought a GT for $240 at 6 AM on the 29th... instead of saying "eh, it will go down to NVIDIA's MSRP of $200 in a week or two."

Right now I think a $300 price tag will stick...
 
Originally posted by: taltamir
WTF is Call of Juarez? I played the demo for 10 minutes and then deleted it; it is not even worth pirating, much less buying a video card for.

Bioshock, World in Conflict, and Company of Heroes are not "a few handpicked titles"... they are the best DX10 games out there right now, if not the best games ever made!

It's more like a DX9 game with a DX10 add-on at the end of its cycle.
 
Originally posted by: Rusin
Originally posted by: Azn


From 80nm to 55nm ATI only increased their clock by 33MHz. I don't see how ATI was going to ramp up the clock speed with a 65nm part. Why doesn't ATI add more texturing units and lower the clock speed some? I really don't understand why ATI is not adding more TMUs. Their engineers are able to whip out 512-bit memory controllers but not more TMUs for the Radeon core? Doesn't make sense. Meanwhile NVIDIA is doubling TMU count every year or so. They are whooping AMD in the high-end market.
Thing is, with R600 and the HD 2900 XT, power consumption set the limit on clock frequency. This time that ain't the case.

You act like you know what ATI can do, but unless you work for ATI you don't know their CASE.

Why don't they clock it to 1000MHz then? Give the 8800GT a run for its money. Who cares about power consumption as long as it's cheap and it performs relative to the 8800GTX or GT.
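For context, the tradeoff being argued about here is simple arithmetic: theoretical texel fillrate is just TMU count times core clock. A minimal sketch in Python, using approximate launch specs (the 1000MHz and 32-TMU parts are hypothetical, made up for illustration):

```python
# Theoretical bilinear texel fillrate = TMUs x core clock.
# Specs are approximate launch figures, assumed for illustration.

def texel_fillrate_gtex(tmus, core_mhz):
    """Peak texel fillrate in GTexels/s."""
    return tmus * core_mhz / 1000.0

print(texel_fillrate_gtex(16, 775))   # HD 3870 as shipped: ~12.4
print(texel_fillrate_gtex(16, 1000))  # hypothetical 1000MHz part: 16.0
print(texel_fillrate_gtex(32, 650))   # hypothetical 32-TMU part at a lower clock: 20.8
print(texel_fillrate_gtex(56, 600))   # 8800GT, for comparison: ~33.6
```

On those numbers, even a 1000MHz clock wouldn't close the texturing gap; doubling the TMUs at a lower clock gets much closer, which is Azn's point.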
 
Originally posted by: Azn
How much? It should be about the same as the 2900XT. No added texture units, no extra ROPs, similar clocks. It's basically a 2900XT with a 256-bit memory controller. That's what the 2900XT should have been in the first place. That 512-bit memory controller was a big waste on the 2900XT. I don't know what ATI's engineers were thinking at the time.

The 3xxx series has DX10.1 functionality added. It may not be of much use in the near future, but it's more than just a shrunk R600. Moreover, we have not seen enough advanced DX10 features in use to tell whether or not the 512-bit bus was overkill. For instance, maybe it would make a big difference under heavy use of the geometry shader, but since no games are using it that way, we can't see if it was beneficial.
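To put rough numbers on the bus question, here's a minimal sketch (Python, with approximate memory specs assumed from public spec sheets): peak bandwidth is the bus width in bytes times the effective data rate.

```python
# Peak memory bandwidth = (bus width / 8) bytes x effective transfer rate.
# Memory specs are approximate launch figures, assumed for illustration.

def bandwidth_gb_s(bus_bits, effective_mt_s):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * effective_mt_s / 1000.0

print(bandwidth_gb_s(512, 1656))  # HD 2900 XT, 512-bit GDDR3: ~106 GB/s
print(bandwidth_gb_s(256, 2250))  # HD 3870, 256-bit GDDR4: ~72 GB/s
```

If the 3870 keeps pace with the 2900XT on roughly two-thirds of the bandwidth, that's at least circumstantial evidence the 512-bit bus was underused in today's games, whatever a geometry-shader-heavy workload might have shown.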
 
Originally posted by: Azn
Maybe on paper the 3870 is a beast, but spec-wise it is NOT. It's the same old 16-by-16 card we've had since the X850XT.

The number of texturing units says nothing about the capabilities of those units. I can tell you for sure that the R6xx texturing units are not the same as those found on previous-gen hardware.
 
The TechSpot and LegionHardware reviews listed below were written by the exact same author. That by itself is not a problem. However, the following things also happened:

1. The author swapped the 8800GTS and 2900XT results, and in certain cases the 8800GT and 8800GTX results. All the "mistakes" are in NVIDIA's favor.
2. The OS listed differs between the two sites (Vista vs. Windows XP).

There is obvious bias going on. We need to get the word out!
----

http://www.techspot.com/review/74-inno3d-geforce8800gt/page3.html

http://www.legionhardware.com/document.php?id=703&p=1

Does anyone notice some similarities in the numbers between the two?


Obviously there's something shady going on. Consider the 2900XT and 8800GTS scores. They are SWITCHED in the TechSpot review such that the 8800GTS performs better than the 2900XT. So which one is right? The 2900XT usually performs better than the 8800GTS in Unreal Engine 3 games, so I would suspect the 2900XT is the higher one for this game.

Look at the other graphs too. This isn't the only page with similarities.
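If anyone wants to check this themselves, here's a throwaway sketch of the comparison I'm doing by eye (Python; the FPS numbers are placeholders, not the actual review figures, so read the real ones off the two sites' graphs):

```python
# Placeholder FPS numbers -- substitute the values read off each review's graphs.
legion   = {"8800GTS": 52, "2900XT": 61, "8800GT": 72}
techspot = {"8800GTS": 61, "2900XT": 52, "8800GT": 72}

# Flag any pair of cards whose scores are mirror images across the two reviews.
cards = list(legion)
for i, a in enumerate(cards):
    for b in cards[i + 1:]:
        if legion[a] == techspot[b] and legion[b] == techspot[a] and legion[a] != legion[b]:
            print(f"{a} and {b} look swapped between the two reviews")
```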

Bioshock:
Problem: 8800GT and 2900XT results are swapped, in NVIDIA's favor. OS is incorrect.
[Image: bioshockbk7.png]

[Image: bioshock2pt2.png]


World in Conflict:
Problem: 8800GT and 8800GTX results are swapped, in 8800GT's favor. OS is incorrect.
[Image: wiclhez8.png]

[Image: wictsjk1.png]


Supreme Commander:
Problem: 2900XT and 8800GTS results are swapped, in NVIDIA's favor. Settings are incorrect (2xAA vs 0xAA). OS is incorrect.

[Image: sclhfy1.png]

[Image: sctsuz7.png]


Everyone, please spread the word!

---

Merged with ongoing discussion.

AnandTech Moderator John

 
DX10.1? It can't even run DX10 at any reasonable FPS... what is DX10.1 capability gonna help? Look at how pretty a slideshow we make...

Not to mention there aren't any DX10.1 games out there, and future-proofing in computers is for suckers. When they release the first DX10.1 game, only then should you consider 10.1 capability when buying a card. (And if you bought a 10.0 card you can always upgrade it... or run at 10.0, as I seriously doubt any games will offer DX10.1 without DX10.0 support.)
 
Originally posted by: taltamir
DX10.1? It can't even run DX10 at any reasonable FPS... what is DX10.1 capability gonna help? Look at how pretty a slideshow we make...

Not to mention there aren't any DX10.1 games out there, and future-proofing in computers is for suckers. When they release the first DX10.1 game, only then should you consider 10.1 capability when buying a card. (And if you bought a 10.0 card you can always upgrade it... or run at 10.0, as I seriously doubt any games will offer DX10.1 without DX10.0 support.)

That's beside the point. The point is it does have newer functionality and is not just a die shrink. Whether or not DX10.1 will be useful, nobody knows, not me and not you.
 
OK, after checking only the FIRST example you gave, I have this to say...

How exactly is it a "mistake" in favor of NVIDIA when two articles by the same author swap the results of two cards so that the AMD card comes out on top in one test and the NVIDIA card in the other? If he were being biased that way, he would make the "mistake" in both articles so that BOTH show NVIDIA as better... instead he swapped them so that his articles contradict each other... seems like an honest mistake to me...
If anything it probably favors AMD, because he swapped the GTS and the 2900XT so that the XT comes out on top in one test... which is obviously a mistake, since the GTS annihilates the XT. Even the 320MB version of the GTS annihilates the XT in everything...
 
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.
 
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?

Originally posted by: thilan29
Originally posted by: taltamir
...which is obviously a mistake, since the GTS annihilates the XT. Even the 320MB version of the GTS annihilates the XT in everything...

http://firingsquad.com/hardwar...e_8800_gt_performance/

You really need to stop making false statements (this isn't the only thread you've done this in). If you're gonna claim something, at least provide proof for it. :roll:

Then I will end up never speaking again... I can always trust there will be someone like you there to correct me when I am wrong...
Anyways, from what I can see in the review you linked, in most games the XT loses by a hair to the 320MB GTS at low res... and wins by a hair over the 640MB GTS at high res (1920x1200)... on the Catalyst 7.10 drivers...
Which, now that I think about it, they did say greatly improved performance, did they not? I guess the XT finally caught up to the GTS...

So I guess I was wrong... the XT does not get demolished by the GTS... every other 2900 card gets demolished by the GTS, but the XT is slightly better than any GTS at very high res (which is what really matters).
 
Originally posted by: taltamir
OK, after checking only the FIRST example you gave, I have this to say...

How exactly is it a "mistake" in favor of NVIDIA when two articles by the same author swap the results of two cards so that the AMD card comes out on top in one test and the NVIDIA card in the other? If he were being biased that way, he would make the "mistake" in both articles so that BOTH show NVIDIA as better... instead he swapped them so that his articles contradict each other... seems like an honest mistake to me...
If anything it probably favors AMD, because he swapped the GTS and the 2900XT so that the XT comes out on top in one test... which is obviously a mistake, since the GTS annihilates the XT. Even the 320MB version of the GTS annihilates the XT in everything...
Did you just say that the 320MB version of the GTS annihilates the XT in everything?????
http://www.anandtech.com/video/showdoc.aspx?i=2988&p=21
Read that whole article. It was posted here on May 14th, right around the 2900XT's release date. Since its release, it has become nearly universally accepted that 2900XT performance has improved vs. the 8800 due to better driver optimizations. I would be happy to go to every other reputable site on the web to find other examples for you.

I'm not saying that the 2900XT is better than the 8800GTS in EVERYTHING, but it is certainly AT LEAST as good as the 8800GTS today when all benchmarks are taken into account. Well, the G80 version of the 8800GTS, anyway. The G92 version, OTOH...


 
Originally posted by: taltamir
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?

AA works, only it's done in the shaders, and people suspect that may be the reason it takes a bigger performance hit from AA. AMD claims this AA method was a design decision, not a defect.
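To make the distinction concrete, here's a conceptual sketch (Python standing in for shader code; this is not actual R600 code, just the box-filter math a resolve pass performs). On cards with fixed-function resolve, this averaging happens in dedicated ROP hardware; doing it in the shader core burns ALU cycles the game could otherwise use, hence the bigger hit with AA enabled.

```python
# Box-filter resolve of one pixel's MSAA subsamples (each an (r, g, b) tuple).
# Conceptual only: the same math a shader-based resolve runs for every pixel.

def resolve_pixel(subsamples):
    n = len(subsamples)
    return tuple(sum(channel) / n for channel in zip(*subsamples))

# 4xAA pixel on a polygon edge: three red subsamples, one black.
print(resolve_pixel([(1.0, 0.0, 0.0)] * 3 + [(0.0, 0.0, 0.0)]))  # (0.75, 0.0, 0.0)
```

The flip side is that a programmable resolve can apply fancier filters than a plain box average, which comes up again further down the thread.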
 
Originally posted by: taltamir
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?
You really need to do some research before spouting gibberish as if it were fact. NVIDIA's overall lineup is much better than AMD's at the moment. That doesn't mean they destroy AMD at every price point, with every feature, or even in every single game. The VR-Zone review of the 3870 actually showed one or two games that had the 3870 above the 8800GT AND GTX in DX10. The GT and GTX are clearly the better-performing cards overall, just not in every single thing.

 
Originally posted by: munky
Originally posted by: taltamir
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?

AA works, only it's done in the shaders, and people suspect that may be the reason it takes a bigger performance hit from AA. AMD claims this AA method was a design decision, not a defect.
I believe that it was a design decision due to a defect in some AMD engineer's logic circuits...

 
Originally posted by: munky
Originally posted by: taltamir
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?

AA works, only it's done in the shaders, and people suspect that may be the reason it takes a bigger performance hit from AA. AMD claims this AA method was a design decision, not a defect.

A mistaken design decision might not be a defect TECHNICALLY... but it is still completely impractical to use anywhere, and it is DEFECTIVE if you use the "doesn't work" definition of defective.

When a company replies to claims of "it doesn't work" with "it was a feature"... well... nothing really, they always do that. But we know better, don't we?

Anyways... I haven't looked at the XT for a while, since I chalked it up as a total failure due to those reviews in the first few months after it came out... However, it seems to have picked up considerably thanks to software tweaking... good job AMD. Too bad it is not enough.
 
Originally posted by: bryanW1995
Originally posted by: munky
Originally posted by: taltamir
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?

AA works, only it's done in the shaders, and people suspect that may be the reason it takes a bigger performance hit from AA. AMD claims this AA method was a design decision, not a defect.
I believe that it was a design decision due to a defect in some AMD engineer's logic circuits...

I believe this decision was made in regard to the DX10.1 requirement of fully programmable AA.
That further explains why the RV670 supports DX10.1 without much apparent change in architecture from the R600, while NVIDIA will likely need to make more significant changes to support DX10.1. Again, this feature may turn out to be too forward-looking for its own good, but I believe it was done for this reason.
 
Originally posted by: munky
Originally posted by: Azn
Maybe on paper the 3870 is a beast, but spec-wise it is NOT. It's the same old 16-by-16 card we've had since the X850XT.

The number of texturing units says nothing about the capabilities of those units. I can tell you for sure that the R6xx texturing units are not the same as those found on previous-gen hardware.

In the world of gaming, texture fillrate, memory bandwidth, and pixel shaders matter the most. The ATI card only lacks one thing, and that's TMUs.
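Pulling the earlier fillrate and bandwidth arithmetic together against Azn's claim, a hedged side-by-side (Python; approximate launch specs, assumed for illustration):

```python
# Approximate launch specs, assumed for illustration: (TMUs, core MHz, bus bits, mem MT/s).
specs = {
    "HD 3870": (16, 775, 256, 2250),
    "8800 GT": (56, 600, 256, 1800),
}

for name, (tmus, core, bus, mem) in specs.items():
    fillrate = tmus * core / 1000     # GTexels/s
    bandwidth = bus / 8 * mem / 1000  # GB/s
    print(f"{name}: {fillrate:.1f} GTexels/s, {bandwidth:.1f} GB/s")

# HD 3870: 12.4 GTexels/s, 72.0 GB/s
# 8800 GT: 33.6 GTexels/s, 57.6 GB/s
```

On these numbers the 3870 actually wins on bandwidth; texturing is the one column where it sits at roughly a third of the 8800GT.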
 
Originally posted by: bryanW1995
Originally posted by: taltamir
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?
You really need to do some research before spouting gibberish as if it were fact. NVIDIA's overall lineup is much better than AMD's at the moment. That doesn't mean they destroy AMD at every price point, with every feature, or even in every single game. The VR-Zone review of the 3870 actually showed one or two games that had the 3870 above the 8800GT AND GTX in DX10. The GT and GTX are clearly the better-performing cards overall, just not in every single thing.

Give the man a cigar. Hooray for intelligence. 😛
 
Originally posted by: munky
Originally posted by: bryanW1995
Originally posted by: munky
Originally posted by: taltamir
Originally posted by: munky
Whether intentional or not, it's an example of poor writing, and it diminishes those sites' credibility. And if anyone thinks the 320MB GTS demolishes the 2900XT, try looking at benches with AA once in a while.

I thought that, like their failed UVD, AA didn't work PERIOD on that card... did they end up fixing it?

AA works, only it's done in the shaders, and people suspect that may be the reason it takes a bigger performance hit from AA. AMD claims this AA method was a design decision, not a defect.
I believe that it was a design decision due to a defect in some AMD engineer's logic circuits...

I believe this decision was made in regard to the DX10.1 requirement of fully programmable AA.
That further explains why the RV670 supports DX10.1 without much apparent change in architecture from the R600, while NVIDIA will likely need to make more significant changes to support DX10.1. Again, this feature may turn out to be too forward-looking for its own good, but I believe it was done for this reason.

This reminds me of the NetBurst architecture: "we will make crappy CPUs now to have early implementations of features that MAYBE will yield great results later... so what if the first-gen 1.5GHz P4 is outperformed by a 1GHz P3, we believe we can eventually take the NetBurst architecture to 10GHz..." Then AMD wiped the floor with them.

Same concept, really... it's great that they went and developed it early in a lab, but it shouldn't have been there hurting the performance of a GPU that wasn't even DX10.1 compliant. Keep it in the lab.
 