Originally posted by: bharatwaja
Should I buy HD3870 or 8800GT?
If HD3870.... which manufacturer?
That's my question too.
Would anyone like to mention/rank HD3870 brands?
Originally posted by: bharatwaja
Should I buy HD3870 or 8800GT?
If HD3870.... which manufacturer?
Originally posted by: CrystalBay
Visiontek has a lifetime warranty FWIW...
Originally posted by: Dadofamunky
Originally posted by: bharatwaja
Today has been the worst day ever for me... Just found out today that the Q9450 has been delayed.
I was planning a system build in Feb 2008... Looks like I'll have to go with a C2Q Q6600 until the Q9450 comes out...
Now I'm in great confusion about the 8800GT vs the HD3870, because with the latter I can go CrossFire on X38, but SLI is never possible on X38. Besides that, only X38 supports DDR3 apart from P35, but neither has SLI support.
Quads are useless unless you're running esoteric 3-D modeling or video-encoding apps, and running them all the time. The OSes don't support them very well. Vista offers some eye candy, but mainstream apps and games don't benefit from more than two cores. I'd recommend getting an E8400 or E8500 instead. That's what I'm doing.
The hardware is so far ahead of the software right now that it's a joke.
Originally posted by: Zstream
Originally posted by: cmdrdredd
Ignore this post; we all know that Crysis and other games will, and do, use 4 cores.
Crysis uses two cores; this is a fact. If you have four cores, all it means is that Windows apps are using the other two.
Originally posted by: Cookie Monster
To the OP, it's because of architectural differences. Both are based on the concept of a "unified shader" architecture, which basically means that any shader out of the total pool can act as a pixel, vertex, or geometry shader depending on how it is scheduled/managed, which in turn depends heavily on how the pixel/vertex/geometry load changes from scene to scene. Back in the 7 series and X1 series, these functions were pretty much "fixed": the cards had a fixed number of pixel shaders (or pipelines), a fixed number of vertex shaders, and so on.
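To make the fixed-vs-unified idea concrete, here's a toy Python sketch. All unit counts and workload numbers here are invented for illustration; they're not real GPU figures.

```python
# Toy model: a frame needs some vertex work and some pixel work.

def frame_time_fixed(vertex_work, pixel_work, vertex_units, pixel_units):
    """Fixed-function split: the frame waits on the slower pool."""
    return max(vertex_work / vertex_units, pixel_work / pixel_units)

def frame_time_unified(vertex_work, pixel_work, total_units):
    """Unified pool: the scheduler splits units to match the workload."""
    return (vertex_work + pixel_work) / total_units

# A pixel-heavy scene: 10 units of vertex work, 90 of pixel work.
print(frame_time_fixed(10, 90, 8, 24))   # 8+24 fixed split -> 3.75 "ticks"
print(frame_time_unified(10, 90, 32))    # same 32 units unified -> 3.125
```

The fixed card idles most of its vertex units on a pixel-heavy scene, while the unified pool adapts, which is the whole point of the architecture.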
Now, the G92 (the GPU core used in the 8800GT) is a lot different from the RV670 (the GPU core used in the HD3870). For example, just because both support a 256-bit memory interface doesn't mean they're exactly the same; the way each is implemented can be a lot different, which also shows up as differences in performance/efficiency. How is the G92 faster? It's quite difficult to explain, but here are a number of things that could be the reason.
Firstly, there are more texture units on G80- and G92-based cards: 56 TMUs (texture mapping units) on the 8800GT compared to 16 TMUs on the HD3870. So in texture-bound situations the G92/G80 will be faster, if not a lot faster.
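As a rough back-of-the-envelope check: the TMU counts are from above, while the ~600 MHz and ~775 MHz reference core clocks, and the one-bilinear-texel-per-TMU-per-clock assumption, are the commonly quoted simplifications, not exact figures.

```python
def texel_fillrate_gtexels(tmus, core_mhz):
    # one bilinear texel per TMU per clock -> gigatexels/second
    return tmus * core_mhz / 1000

print(texel_fillrate_gtexels(56, 600))  # 8800GT: ~33.6 GTexels/s
print(texel_fillrate_gtexels(16, 775))  # HD3870: ~12.4 GTexels/s
```

That's nearly a 3x gap in texturing throughput, which is why texture-bound scenes favor the G92.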
Secondly, the G80/G92 has 128 unified scalar shaders (the GT has 112). It lacks brute strength (theoretically, compared to the RV670), but it relies on a utilization percentage of its shaders that is much higher than that of the RV670, which uses 64 vec5 shaders. Simply put, one scalar shader does one thing at a time; one vec5 shader can do 5 things at the same time IF the code is arranged that way, so if it only needs to do one thing, it wastes its potential power. This is the most basic downside, along with the RV670/R600 being a VLIW architecture.
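Here's a toy sketch of that utilization argument. The ILP distribution is made up to mimic typical shader code; real numbers vary per game and per compiler.

```python
import random

# Toy model (not real hardware behavior): a scalar unit retires 1 op
# per clock at full utilization; a vec5 unit retires up to 5 ops per
# clock, but only as many as the compiler managed to pack together
# (the available instruction-level parallelism, or ILP).

def vec5_ops_per_clock(ilp_samples):
    """Average ops/clock for one vec5 unit given packable ILP per issue."""
    return sum(min(ilp, 5) for ilp in ilp_samples) / len(ilp_samples)

# Hypothetical shader where most issues only pack 1-3 independent ops:
random.seed(0)
ilp = [random.choice([1, 2, 3, 3, 5]) for _ in range(10_000)]
per_unit = vec5_ops_per_clock(ilp)          # about 2.8 ops/clock of a possible 5
print(f"utilization: {per_unit / 5:.0%}")   # roughly 56%
print(f"64 vec5 units:   {64 * per_unit:.0f} effective ops/clock")
print(f"112 scalar units: {112} effective ops/clock")
# Clock speed is ignored here; the shader-domain point below covers that.
```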
Thirdly, with the introduction of shader clock domains (as early as the G70, I think), nVIDIA has successfully allowed the shader core of its architecture to run at a substantially faster speed than the rest of the chip. This means the entire chip doesn't have to be clocked high, just portions of it. I don't know if this is one of the reasons why, but it sure is an advantage.
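Here's what that clock-domain advantage looks like in peak shader numbers. This is a sketch using the commonly quoted reference clocks (~1500 MHz for the 8800GT's shader domain, ~775 MHz for the HD3870) and the usual MADD = 2 FLOPs-per-op counting convention.

```python
def shader_gflops(units, ops_per_unit, clock_mhz, flops_per_op=2):
    # peak = units * lanes per unit * clock * FLOPs per op
    return units * ops_per_unit * clock_mhz * flops_per_op / 1000

print(shader_gflops(112, 1, 1500))  # 8800GT: ~336 GFLOPS from 112 scalar units
print(shader_gflops(64, 5, 775))    # HD3870: ~496 GFLOPS peak from 64 vec5 units
```

On paper the HD3870 wins, but multiply its peak by the ~50-60% utilization from the vec5 example above and the gap closes or even reverses, which lines up with the benchmarks.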
Lastly, DEVELOPER RELATIONS! Period. When nVIDIA brought out SM 3.0 (the implementation of SM 3.0 back then could be argued, but that's another story), it was exciting to developers, because the new features that came with it were things the devs wanted to use. The upshot is that nVIDIA drives a lot of new technology forward at the right time. A lot of games are tested on the latest nVIDIA hardware and optimized there, with both camps helping each other to get the best possible result from the game engine and the GPU hardware: Crysis, Lost Planet, and the list goes on. This is one thing I've found lacking in ATi for YEARS.
I also remember people saying the performance of the R600 would get better, but the irony is that DX10 performs better on G92/G80-based cards. However, ATi/AMD products are still very competitive: the GT is pricier but faster across the board than the cheaper HD3870. See my point? You can't go wrong either way.
edit - this thread is way OT.
Originally posted by: bharatwaja
The HD3870 has 2.25 GHz GDDR4, a 775+ MHz core clock, a higher shader clock...
Almost all its specs are better than the 8800GT's,
yet the 8800GT still beats the HD3870 in benches... why?
Seems illogical...
Also, would there be any noticeable difference while playing the latest games on an HD3870 as opposed to an 8800GT?