Why is HD3870 slower?


ttraveler

Junior Member
Nov 27, 2007
6
0
0
Originally posted by: bharatwaja
Should I buy HD3870 or 8800GT?

If HD3870.... which manufacturer?


That's my question too.

Would anyone like to mention/rank HD3870 brands?


:cool:
 

ttraveler

Junior Member
Nov 27, 2007
6
0
0
Originally posted by: CrystalBay
Visiontek has a lifetime warranty FWIW...


Thanks for the reply CrystalBay.

I personally have never owned a VisionTek product. Have you?

There are some who have reported great difficulty when dealing with VisionTek on warranty issues.

Plus I have heard that VisionTek cards run a bit hotter than they should. I don't know whether this has been fixed in the current batch off the assembly line.

Some cards have fixed-speed fans and some have variable-speed fans controlled automatically by the load/heat sensors. I would rather get one of the auto-speed fan cards, but which brand does that?


:cool:
 

Butterbean

Banned
Oct 12, 2006
918
1
0

CrystalBay

Platinum Member
Apr 2, 2002
2,175
1
0
I think the BIOS on these are the original OEM versions, therefore they have a fixed fan speed. I'm using RivaTuner to set it at a constant 40% to keep the noise down; the default was about 25%. Much above 50% starts to push the decibels...
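The fixed vs. automatic fan speed question above boils down to whether the duty cycle is a constant or a function of the measured GPU temperature. A minimal sketch of what an automatic profile does; the temperature breakpoints here are invented for illustration and are not any vendor's actual curve:

```python
def fan_duty(temp_c, floor=25, ceiling=100):
    """Map GPU temperature (C) to a fan duty cycle (%).

    Piecewise-linear curve: idle speed below 50C, ramping linearly
    to full speed at 90C. Breakpoints are illustrative only.
    """
    if temp_c <= 50:
        return floor
    if temp_c >= 90:
        return ceiling
    # linear ramp between 50C and 90C
    return floor + (temp_c - 50) * (ceiling - floor) / (90 - 50)

if __name__ == "__main__":
    for t in (35, 55, 70, 85, 95):
        print(f"{t}C -> {fan_duty(t):.0f}% duty")
```

A fixed-speed BIOS behaves as if this function always returned the same constant, which is effectively what setting 40% in RivaTuner does.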
 

TanisHalfElven

Diamond Member
Jun 29, 2001
3,512
0
76
Originally posted by: Dadofamunky
Originally posted by: bharatwaja
Today has been the worst day ever for me... Just found out today that the Q9450 has been delayed.
I was planning a system build in Feb 2008... Looks like I'll have to go with a C2Q Q6600 until the Q9450 comes out...
Now I'm in great confusion about the 8800GT vs. HD3870, because with the latter I can go CrossFire on X38, but SLI is never possible on X38. Besides that, only X38 supports DDR3 apart from P35, but neither has SLI support.

Quads are useless unless you're using esoteric 3-D modeling or video encoding apps and using them all the time. The OSs don't support them very well. Vista offers some eye candy but mainstream apps and games don't benefit from more than two cores. I'd recommend getting an E8400 or E8500 instead. That's what I'm doing.

The hardware is so far ahead of the software right now that it's a joke.

What?
Get off gaming and do something else for a change. A faster processor and more cores are always useful. I run a rasterizing program to make ebooks for my Sony Reader; on the highest setting even 200 pages can take 3-4 days (on an Opty 165 at stock). Same goes for video encoding, etc.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
To the OP, it's because of architectural differences. Both are based on the concept of a "unified shader" architecture, which basically means that any one of the total number of shaders can act as a pixel, vertex, or geometry shader depending on how it is scheduled/managed, which in turn depends heavily on how the pixel/vertex/geometry load changes from scene to scene. Back in the 7 series and X1 series, these functions were pretty much "fixed": the cards had a fixed number of pixel shaders (or pipelines), a fixed number of vertex shaders, and so on.

Now, the G92 (the GPU core used in the 8800GT) is a lot different from the RV670 (the GPU core used in the HD3870). For example, just because both support a 256-bit memory interface doesn't mean they are exactly the same; the way each is implemented can differ a lot, which can also show up as a performance/efficiency difference. How is the G92 faster? It's quite difficult to explain, but here are a number of things that could be the reason.

Firstly, there are more texture units on G80- and G92-based cards: 56 TMUs (texture mapping units) in the 8800GT compared to 16 TMUs in the HD3870. So in texture-bound situations, the G92/G80 will be faster, if not a lot faster.
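As a back-of-the-envelope illustration of that texture-unit gap, peak bilinear texel fill rate is roughly TMU count times the clock the TMUs run at. The clocks below are the commonly quoted reference core clocks (600 MHz for the 8800GT, 775 MHz for the HD3870) and should be treated as assumptions, since partner cards vary:

```python
def texel_fill_rate(tmus, clock_mhz):
    """Peak bilinear texel fill rate in gigatexels per second."""
    return tmus * clock_mhz / 1000.0

cards = {
    "8800GT (G92)":   (56, 600),  # 56 TMUs running at the core clock
    "HD3870 (RV670)": (16, 775),  # 16 TMUs running at the core clock
}
for name, (tmus, clk) in cards.items():
    print(f"{name}: {texel_fill_rate(tmus, clk):.1f} GTexel/s")
```

That works out to roughly 33.6 vs. 12.4 GTexel/s, which is why texture-heavy scenes favor the G92 even though the RV670 runs the higher core clock.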

Secondly, the G80/G92 has 128 unified scalar shaders (the GT has 112 enabled). It lacks brute strength (theoretically, compared to the RV670) but instead relies on shader utilization, which is much higher than that of the RV670's 64 vec5 shaders. Simply put, one scalar shader does one thing at a time, while one vec5 shader can do 5 things at the same time IF the code is arranged that way; if it only needs to do one thing, it wastes its potential power. This is the most basic downside, along with the RV670/R600 being a VLIW architecture.
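A rough way to see that scalar vs. vec5 trade-off is to model effective shader throughput as units x lanes x clock. The sketch below uses the stock reference clocks (about 1.5 GHz for the 8800GT shader domain, 775 MHz for the HD3870); the lane-fill figures are invented purely to show the sensitivity:

```python
def effective_ops(units, lanes_filled, clock_ghz):
    """Shader operations per second actually achieved, given how many
    lanes of each unit the scheduler/compiler manages to keep busy."""
    return units * lanes_filled * clock_ghz * 1e9

# 8800GT: 112 scalar units, each one lane wide, shader domain ~1.5 GHz
gt = effective_ops(112, 1.0, 1.5)

# HD3870: 64 vec5 units at 775 MHz; lanes filled per clock depends on the shader code
for fill in (5.0, 3.5, 2.0, 1.0):
    ati = effective_ops(64, fill, 0.775)
    print(f"avg lanes filled {fill:>3}: HD3870 {ati / 1e9:6.1f} Gops/s "
          f"vs 8800GT {gt / 1e9:.1f} Gops/s")
```

With all five lanes packed the RV670 wins on paper; once packing drops below roughly three lanes per clock, the narrower but fully utilized and higher-clocked G92 design pulls ahead.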

Thirdly, with the introduction of separate shader clock domains (as early as G70, I think), nVIDIA has successfully allowed the shader core of its architecture to run at a substantially faster speed than the rest of the chip. This means the entire chip doesn't have to be clocked high, only portions of it. I don't know if this is one of the reasons why, but it sure is an advantage.

Lastly, DEVELOPER RELATIONS! Period. When nVIDIA brought out SM 3.0 (the quality of that implementation back then could be argued, but that's another story), it was exciting to developers because the new features that came along with it were things the devs wanted to use. nVIDIA drives a lot of new technology forward at the right time. A lot of games are tested and optimized on the latest nVIDIA hardware, with both camps helping each other get the best possible result from the game engine and the GPU hardware. Crysis, Lost Planet, and the list goes on. This is one thing I have found lacking in ATi for YEARS.

I also remember people saying the performance of the R600 would get better, but the irony is that DX10 performs better on G92/G80-based cards. However, ATi/AMD products are still very much competitive. The GT is more pricey but is faster across the board than the cheaper HD3870. See my point? You can't go wrong either way.

edit - this thread is way OT.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: bharatwaja
Today has been the worst day ever for me... Just found out today that the Q9450 has been delayed.
I was planning a system build in Feb 2008... Looks like I'll have to go with a C2Q Q6600 until the Q9450 comes out...
Now I'm in great confusion about the 8800GT vs. HD3870, because with the latter I can go CrossFire on X38, but SLI is never possible on X38. Besides that, only X38 supports DDR3 apart from P35, but neither has SLI support.

There are X38 boards available with DDR2 memory support if that is what you need. You can also go for an nForce 780i SLI, which Nvidia just launched, if you want an SLI upgrade path.
 

BlueAcolyte

Platinum Member
Nov 19, 2007
2,793
2
0
The 8800GT is about 15%-20% faster than the HD 3870, but it also costs 15%-20% more (at least in an ideal world), so you really can't go wrong. The HD 3870 does have better cooling, though.
 

MegaWorks

Diamond Member
Jan 26, 2004
3,819
1
0
I ordered the VisionTek HD3870 from NCIX, still waiting for it. :)

Edit: The reason why I ordered the 3870 is because I might install a CrossFire setup. I have an X38 board, so why not! :cool:

To the OP, I believe the biggest factor is that the RV670 uses 64 superscalar unified shader clusters. In other words, each cluster is five units wide, for a total of 320 stream processing units, and game developers need to code their shaders so that ATI's 64 x 5 "VLIW" architecture can actually be filled. Let's just hope that they will in future game releases. ;)
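To make the "developers need to code for it" point concrete, here is a toy scheduler that packs independent scalar operations into 5-wide VLIW bundles, roughly the job the shader compiler has on this architecture. The instruction list and dependencies are invented for illustration only:

```python
# Each op is (name, set of ops whose results it needs).
ops = [
    ("a", set()), ("b", set()), ("c", set()), ("d", set()), ("e", set()),
    ("f", {"a"}), ("g", {"f"}), ("h", {"g"}),  # a dependent chain
]

WIDTH = 5  # lanes per VLIW bundle
bundles, done = [], set()
remaining = list(ops)

while remaining:
    bundle = []
    for name, deps in remaining:
        # an op may issue only if all its inputs came from earlier bundles
        if deps <= done and len(bundle) < WIDTH:
            bundle.append(name)
    remaining = [(n, d) for n, d in remaining if n not in bundle]
    done |= set(bundle)
    bundles.append(bundle)

for i, b in enumerate(bundles, 1):
    print(f"cycle {i}: {b + ['-'] * (WIDTH - len(b))}")
print(f"lane occupancy: {len(ops) / (len(bundles) * WIDTH):.0%}")
```

Independent work fills all five lanes in one cycle; the dependent chain issues one op per cycle and leaves four lanes idle, which is the gap between the 320-stream-processor headline number and what a given shader actually uses.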
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Zstream
Originally posted by: cmdrdredd
Originally posted by: Dadofamunky
Originally posted by: bharatwaja
Today has been the worst day ever for me... Just found out today that the Q9450 has been delayed.
I was planning a system build in Feb 2008... Looks like I'll have to go with a C2Q Q6600 until the Q9450 comes out...
Now I'm in great confusion about the 8800GT vs. HD3870, because with the latter I can go CrossFire on X38, but SLI is never possible on X38. Besides that, only X38 supports DDR3 apart from P35, but neither has SLI support.

Quads are useless unless you're using esoteric 3-D modeling or video encoding apps and using them all the time. The OSs don't support them very well. Vista offers some eye candy but mainstream apps and games don't benefit from more than two cores. I'd recommend getting an E8400 or E8500 instead. That's what I'm doing.

The hardware is so far ahead of the software right now that it's a joke.

Ignore this post; we all know that Crysis and other games will and do use 4 cores.

Crysis uses two cores, this is a fact. If you have four cores, all it means is that the Windows apps are using the other two.

That's the main problem with thread parallelism. Is Windows aware of which cores Crysis is using? I don't think so. What would happen is that most applications optimized only for dual core will take advantage of just the first two cores of the quad chip, which means other dual-core-optimized applications running in the background will also share those same cores, unless you set affinity in the Task Manager. Windows is not very well optimized for quad cores, so it will probably use mostly the first two cores while the other two do less work.
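The Task Manager affinity workaround mentioned above can also be scripted. Below is a minimal sketch using the third-party psutil module (an illustration of the same idea, not something anyone in the thread was using); the process names are hypothetical:

```python
import psutil

def pin_to_cores(process_name, cores):
    """Set the CPU affinity of every running process with the given name."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name.lower():
            proc.cpu_affinity(cores)  # e.g. [0, 1] = first two cores
            print(f"pinned PID {proc.pid} to cores {cores}")

# Hypothetical usage: keep the game on cores 0-1 and push a background
# encode onto cores 2-3 so the two workloads stop sharing the same pair.
pin_to_cores("crysis.exe", [0, 1])
pin_to_cores("encoder.exe", [2, 3])
```

This is the same operation the Task Manager affinity dialog performs by hand; without it, the scheduler is free to keep piling dual-core-optimized applications onto the same two cores.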

 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Cookie Monster
[snip: full post quoted above]

nVidia's SM3.0 implementation was a joke, because it was supposed to take advantage of the dynamic branching capability offered by SM3.0, which was the only real difference and advantage over SM2.0, yet the way the NV40 and G70 process shaders in big batches is just too bad for dynamic branching, hence the performance impact when it's used, while the Radeon X1K gains an incredible performance boost from it. The G92 for some reason has much less texture power than the G80; it seems that in most game scenarios the extra texture units don't offer a big advantage, or they remain idle most of the time.
http://www.gpureview.com/show_...hp?card1=475&card2=544

Radeon HD 3870 Crossfire scales much better than any SLI
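The batch-size point about dynamic branching can be put in rough numbers. A GPU evaluates branches per batch of pixels, and if even one pixel in a batch needs the expensive path, the whole batch pays for it. The batch sizes and costs below are illustrative; the commonly cited figures are batches of a few dozen pixels on the X1K series versus hundreds or thousands on NV40/G70-class parts, though exact numbers vary by source:

```python
def expected_batch_cost(batch_size, p_expensive, cheap=1.0, expensive=4.0):
    """Average per-batch cost when each pixel independently needs the
    expensive branch with probability p_expensive. If any pixel in the
    batch diverges, the whole batch runs the expensive path."""
    p_all_cheap = (1.0 - p_expensive) ** batch_size
    return p_all_cheap * cheap + (1.0 - p_all_cheap) * expensive

p = 0.02  # say 2% of pixels need the expensive path (a shadow edge, etc.)
for batch in (16, 48, 256, 1024):
    print(f"batch of {batch:4d} pixels: avg cost {expected_batch_cost(batch, p):.2f}x")
```

Small batches pay close to the true cost of the scene's pixel mix, while very large batches almost always contain at least one divergent pixel and end up running the expensive path everywhere, which is the penalty the NV40/G70-style designs take when SM3.0 branching is actually used.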
 

imported_Section8

Senior member
Aug 1, 2006
483
0
0
I just installed my VisionTek HD3870 in a Gigabyte GA-M57SLI-S4, AM2 6000+, 2 gigs of Corsair XMS2 PC6400, Enermax 450 PSU. I used RivaTuner to boost the fan speed to 60%, and my first unregistered 3DMark06 score was 10212. This was at stock settings. Every game I've played runs like butter on high settings. If you can get an 8800GT for the same price, get it, but my card was 193 shipped, so I am happy. BTW, my card never gets above 70C under load. It sits at about 35C idle.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
Originally posted by: bharatwaja
The HD3870 has 2.25 GHz GDDR4, a 775+ MHz core clock, a higher shader clock...

Almost all the specs are better than the 8800GT's,

yet the 8800GT still beats the HD3870 in benches... why?

Seems illogical...

Also, would there be any noticeable difference while playing the latest games on the HD3870 as opposed to the 8800GT?

The HD 3870 actually has 64 shaders; each shader can do 5 operations at once (giving you an effective 320 shaders). There was a good article explaining how this really works, but alas, I can't seem to find it. Maybe I will get back to you on that.
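To put one of the quoted paper specs in numbers: both cards have a 256-bit memory bus, so peak bandwidth is just the effective memory clock times 32 bytes per transfer. Using the reference memory clocks (2.25 GHz effective GDDR4 on the HD3870, 1.8 GHz effective GDDR3 on the 8800GT; partner cards differ), the HD3870 really does win this spec on paper, which is why the benchmark results look illogical until the TMU and shader-utilization differences discussed earlier in the thread are taken into account:

```python
def bandwidth_gb_s(effective_clock_mhz, bus_width_bits=256):
    """Peak memory bandwidth in GB/s for a given effective memory clock."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(f"HD3870 (2.25 GHz GDDR4): {bandwidth_gb_s(2250):.1f} GB/s")
print(f"8800GT (1.8 GHz GDDR3):  {bandwidth_gb_s(1800):.1f} GB/s")
```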