So it seems like AMD/ATI totally missed it this time


tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
Originally posted by: terentenet
Originally posted by: tuteja1986

Let's see:
NV40 vs R4xx
Fastest GPU, 1st round: 6800U
Fastest GPU, 2nd round: X800XT PE
Fastest GPU, 3rd round: X850XT PE
High-end bang-for-buck GPU, 1st round: 6800GT
High-end bang-for-buck GPU, 2nd round: X800XL
High-end bang-for-buck GPU, 3rd round: X800GTO2 softmodded to X850XT PE
Midrange bang-for-buck GPU, 1st round: 6600GT
Midrange bang-for-buck GPU, 2nd round: X800GTO
Fastest GPU before 7800GTX: X850XT PE
Fastest GPU at the end of the war: X850XT PE
Best high-end bang-for-buck GPU at the end of the war: X800XL
Best midrange GPU at the end of the war: X800GTO2

G70 vs R5xx
Fastest GPU, 1st round: 7800GTX
Fastest GPU, 2nd round: X1900XTX
High-end bang-for-buck GPU, 1st round: X1800XT 512MB for $300
High-end bang-for-buck GPU, 2nd round: X1900XT 512MB for $300
High-end bang-for-buck GPU, 3rd round: X1950XT 256MB for $250
Midrange bang-for-buck GPU, 1st round: none at $200 in the R5xx or G70 series.
Midrange bang-for-buck GPU, 2nd round: X1950pro 256MB for $200
Fastest GPU before 8800GTX: Nvidia 7950GX2
Fastest GPU at the end of the war: dual X1950pro, which can compete with the 8800GTX
Best high-end bang-for-buck GPU at the end of the war: X1950XT 256MB
Best midrange GPU at the end of the war: X1950pro 256MB

Also, the 8800 wasn't an amazing breakthrough... maybe to the eyes of a newbie.



Riiiiiiiiight. tuteja, you're such a fanboy. What would a breakthrough be? 10 times the performance of the last series? Naaah; just 2 times the performance, unified shaders, DX10, more AA modes, improved IQ.
Quit it and stop posting. You're sounding like a broken record already.
"Fastest GPU at the end of the war: dual X1950pro, which can compete with the 8800GTX" Hear yourself. Comparing two cards with a single card. A single card that runs cooler, consumes less power than a single 1950Pro, is DX10 compatible and has much better IQ. And I have a slight impression that that single card beats the two 1950Pro cards.

/me is happy with one 8800GTX performing better than two 7900GTXs in SLI or 1950Pros in CF.

Nvidia power:
http://www.crazypc.ro/forum/attachment.php?attachmentid=21161&d=1178821594
http://www.crazypc.ro/forum/attachment.php?attachmentid=21162&d=1178821594

Dude, you are crazy all right... I am not a fanboy!! I have owned many Nvidia GPUs!!

List in order from latest:
1. 2x 8800GTX > main computer
2. 2x 7800GTX > main computer
3. 6600GT > LAN machine
4. 6800U > main computer
5. GeForce Ti4600 > main computer
6. GeForce Ti300 > main computer
7. GeForce 2 MX200 > main computer

I am just a more experienced GPU user than you!! That's all :) I have seen it all and spent a lot on new GPUs. I have also learned a few tricks on when to sell GPUs on eBay before the refresh cards come out. Also, a unified shader architecture was never in Nvidia's plans for the original G80; that changed around Q1 2006.

From Q2 to Q4 of 2006 they were downplaying unified architectures because NVIDIA was criticizing ATI for using a unified architecture in the Xbox 360. They said the market wasn't ready for it and it wouldn't be ready for use for many years. Also, the Nvidia driver team is doing such an awesome job with 8800GTX support.

Anyway, I am not a fanboy or a noob... if you can't stand some Nvidia criticism then I suggest you go to a forum like nZone and read only Nvidia-happy posts.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: tuteja1986
Also, the 8800 wasn't an amazing breakthrough... maybe to the eyes of a newbie.

You either don't understand how GPU architectures actually work, or you're just being ignorant. So it isn't that big of a breakthrough when nVIDIA's no. 1 competitor's next-gen architecture, aka R600, competes against a crippled version of the G80 core (the 8800GTS)?

This is coming from a person who claims that dual X1950pro is the fastest GPU at the end of the war. Most review sites claim that the 8800GTX is slightly faster than an X1950XTX CrossFire setup. Logic tells me that dual X1950pro will lose to dual X1950XTX.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: tuteja1986
Dude, you are crazy all right... I am not a fanboy!! I have owned many Nvidia GPUs!!

List in order from latest:
1. 2x 8800GTX > main computer
2. 2x 7800GTX > main computer
3. 6600GT > LAN machine
4. 6800U > main computer
5. GeForce Ti4600 > main computer
6. GeForce Ti300 > main computer
7. GeForce 2 MX200 > main computer

I am just a more experienced GPU user than you!! That's all :) I have seen it all and spent a lot on new GPUs. I have also learned a few tricks on when to sell GPUs on eBay before the refresh cards come out. Also, a unified shader architecture was never in Nvidia's plans for the original G80; that changed around Q1 2006.

From Q2 to Q4 of 2006 they were downplaying unified architectures because NVIDIA was criticizing ATI for using a unified architecture in the Xbox 360. They said the market wasn't ready for it and it wouldn't be ready for use for many years. Also, the Nvidia driver team is doing such an awesome job with 8800GTX support.

Anyway, I am not a fanboy or a noob... if you can't stand some Nvidia criticism then I suggest you go to a forum like nZone and read only Nvidia-happy posts.

You just dug your own grave. This has nothing to do with "not standing nVIDIA criticism" but rather with your illogical posts, which have no substance behind those criticisms.

You realise the project "G80" was led by a combination of engineers such as Erik Lindholm (lead for the shader core), and this started during the NV40 days. The project was kept secret even within the company itself, and only the higher-ranking members knew what G80 was about. That's around four years before the launch of G80.

Now the fact is that by Q1 2006 YOU CAN'T HAVE ANY MAJOR ARCHITECTURAL changes. It's as simple as that. Secondly, by Q1 2006 nVIDIA already had engineering samples of G80, with the real deal (what we have now) being taped out around Q3 for a Q4 launch. These earlier samples were in the hands of devs (Crytek, for example). However, it's assumed that these G80s had a 256-bit memory interface and lower specs than what G80 is now (but they were DX10 capable, with a unified architecture, decoupled TMUs, etc., i.e. the main architectural features stayed the same).

Anyone with half a brain knows marketing and PR talk has no substance behind it most of the time. Why do you think nVIDIA downplayed unified shaders? Think about it. Hint: ATi had theirs (Xenos, and soon a second attempt at it with the upcoming R600, which was clearly confirmed as soon as R600 hit the interweb), but nVIDIA didn't yet.

And what have selling things on eBay and being an experienced GPU user got to do with this? What do you mean by experienced?

:confused:
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
Originally posted by: Cookie Monster
Originally posted by: tuteja1986
Also, the 8800 wasn't an amazing breakthrough... maybe to the eyes of a newbie.

You either don't understand how GPU architectures actually work, or you're just being ignorant. So it isn't that big of a breakthrough when nVIDIA's no. 1 competitor's next-gen architecture, aka R600, competes against a crippled version of the G80 core (the 8800GTS)?

This is coming from a person who claims that dual X1950pro is the fastest GPU at the end of the war. Most review sites claim that the 8800GTX is slightly faster than an X1950XTX CrossFire setup. Logic tells me that dual X1950pro will lose to dual X1950XTX.
http://www.techpowerup.com/reviews/Sapphire/X1950_Pro_Dual/15

Far Cry 2048x1536 4xAA 16xAF
Dual X1950pro: 107.4 FPS
8800GTX: 115 FPS

Prey 2048x1536 4xAA 16xAF
Dual X1950pro: 52.4 FPS
8800GTX: 65.2 FPS

Quake 4 2048x1536 4xAA 16xAF
Dual X1950pro: 32.9 FPS
8800GTX: 45.9 FPS

X3 2048x1536 4xAA 16xAF
Dual X1950pro: 62.7 FPS
8800GTX: 57.2 FPS

Power consumption - Idle
Dual X1950pro: 151 W
8800GTX: 159 W

System power consumption - Average
Dual X1950pro: 270 W
8800GTX: 248 W

System power consumption - Peak
Dual X1950pro: 286 W
8800GTX: 266 W


Also, I can tell you I understand a lot about GPU architectures. I'm not an idiot with numbers.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: tuteja1986
If it wasn't for ATI you wouldn't have seen the outcry for higher-quality AF and new AA modes. Also, the only richer feature the 6800U had was Shader 3.0, which meant crap, since Shader 3.0 was supposed to run more efficiently and faster than Shader 2.0. But if you look at the Far Cry benchmarks, they will tell you how much Shader 3.0 actually mattered.

Even when ATI has the bang-for-buck GPU, it sells nowhere near as well as the inferior GPU from NVIDIA.

Shader Model 3.0 and OpenEXR HDR, not to mention SLI technology, meaning Nvidia had the overall speed crown as well as the feature-set crown. Quite impressive coming from the GeForce FX generation.

In the Far Cry benchmarks Nvidia actually gained more using Shader Model 3.0 than ATI did using Pixel Shader 2.0b. Thanks to that, ATI's resources were spent on a pathway that is only used by one generation of ATI hardware, a complete waste, while Nvidia's work is the default Shader Model 3.0 implementation, allowing X1K users to enjoy far more Shader Model 3.0 games than would have existed as quickly if Nvidia hadn't launched Shader Model 3.0-capable hardware more than a year before the X1800 launched.

Nvidia's and ATI's AA quality are comparable for modes like 2x/4x MSAA, while Nvidia retains an edge when it is able to use SSAA to bring higher image quality. Nvidia was the one that introduced Transparency AA, so they have their own improvements to AA.

ATI was the one who introduced the 45-degree angle-dependent AF stuff, leading Nvidia down the path of performance-oriented AF. This is not impressive to me; the X1K only fixed this back to the norm of 90 degrees, and Nvidia took it a step above that with the near-perfect AF of the 8 series.



 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
http://www.techpowerup.com/reviews/Sapphire/X1950_Pro_Dual/15

Far Cry 2048x1536 4xAA 16xAF
Dual X1950pro: 107.4 FPS
8800GTX: 115 FPS

Prey 2048x1536 4xAA 16xAF
Dual X1950pro: 52.4 FPS
8800GTX: 65.2 FPS

Quake 4 2048x1536 4xAA 16xAF
Dual X1950pro: 32.9 FPS
8800GTX: 45.9 FPS

X3 2048x1536 4xAA 16xAF
Dual X1950pro: 62.7 FPS
8800GTX: 57.2 FPS

Power consumption - Idle
Dual X1950pro: 151 W
8800GTX: 159 W

System power consumption - Average
Dual X1950pro: 270 W
8800GTX: 248 W

System power consumption - Peak
Dual X1950pro: 286 W
8800GTX: 266 W


Also, I can tell you I understand a lot about GPU architectures. I'm not an idiot with numbers.

:confused:

No need for petty personal insults.

And what's your point with the post above? You're saying that X1950XTX CrossFire is slower than a dual X1950pro?

What you did in fact say is that the 8800GTX is THE fastest GPU right now, with 8800GTX SLI being the fastest graphics card configuration in a PC.
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
Originally posted by: Cookie Monster
Originally posted by: tuteja1986
Dude, you are crazy all right... I am not a fanboy!! I have owned many Nvidia GPUs!!

List in order from latest:
1. 2x 8800GTX > main computer
2. 2x 7800GTX > main computer
3. 6600GT > LAN machine
4. 6800U > main computer
5. GeForce Ti4600 > main computer
6. GeForce Ti300 > main computer
7. GeForce 2 MX200 > main computer

I am just a more experienced GPU user than you!! That's all :) I have seen it all and spent a lot on new GPUs. I have also learned a few tricks on when to sell GPUs on eBay before the refresh cards come out. Also, a unified shader architecture was never in Nvidia's plans for the original G80; that changed around Q1 2006.

From Q2 to Q4 of 2006 they were downplaying unified architectures because NVIDIA was criticizing ATI for using a unified architecture in the Xbox 360. They said the market wasn't ready for it and it wouldn't be ready for use for many years. Also, the Nvidia driver team is doing such an awesome job with 8800GTX support.

Anyway, I am not a fanboy or a noob... if you can't stand some Nvidia criticism then I suggest you go to a forum like nZone and read only Nvidia-happy posts.

You just dug your own grave. This has nothing to do with "not standing nVIDIA criticism" but rather with your illogical posts, which have no substance behind those criticisms.

You realise the project "G80" was led by a combination of engineers such as Erik Lindholm (lead for the shader core), and this started during the NV40 days. The project was kept secret even within the company itself, and only the higher-ranking members knew what G80 was about. That's around four years before the launch of G80.

Now the fact is that by Q1 2006 YOU CAN'T HAVE ANY MAJOR ARCHITECTURAL changes. It's as simple as that. Secondly, by Q1 2006 nVIDIA already had engineering samples of G80, with the real deal (what we have now) being taped out around Q3 for a Q4 launch. These earlier samples were in the hands of devs (Crytek, for example). However, it's assumed that these G80s had a 256-bit memory interface and lower specs than what G80 is now (but they were DX10 capable, with a unified architecture, decoupled TMUs, etc., i.e. the main architectural features stayed the same).

Anyone with half a brain knows marketing and PR talk has no substance behind it most of the time. Why do you think nVIDIA downplayed unified shaders? Think about it. Hint: ATi had theirs (Xenos, and soon a second attempt at it with the upcoming R600, which was clearly confirmed as soon as R600 hit the interweb), but nVIDIA didn't yet.

And what have selling things on eBay and being an experienced GPU user got to do with this? What do you mean by experienced?

:confused:


Man, I don't know who Erik Lindholm is... I thought he was the Nvidia plumber? Well, if you knew all the details, Nvidia originally planned to introduce it with G90. It was never a secret that they were working on a unified architecture, but the introduction wasn't originally planned for G80. Also, I know it takes more than four years to R&D a GPU. When you say it's impossible ("YOU CAN'T HAVE ANY MAJOR ARCHITECTURAL changes")... then I suggest you read up on a little history... impossible things are very doable. I am not saying Nvidia's unified architecture wasn't in development... it was, years ago, but it was never planned for the original G80.

I really don't want to get into this fight... If I type something in, it will be taken out of context. If you want to talk to me seriously, we can speak over VoIP.
 

rise

Diamond Member
Dec 13, 2004
9,116
46
91
tuteja, you've been wrong since day one on R600, so I understand it's a bitter pill, but you should take your own advice and not get into fights you keep losing.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Erik Lindholm was responsible for the T&L and texture combiners (the nvidia shading rasteriser) in GeForce 1 & 2 (NV1x). He was also responsible for the pixel and vertex shaders that debuted with the GeForce 3 (NV20). Very few in consumer 3D graphics cast a longer shadow than him.
 

terentenet

Senior member
Nov 8, 2005
387
0
0
tuteja, what do you mean you're experienced? Experienced in what? GPUs? Shut it :) You don't know squat about GPUs. That's what allows you to say the ATI 1950Pro is a breakthrough while the Nvidia 8800GTX isn't. You FUNNEEEY.
Look at your System link, fanboy. Two systems, both ATI. Where's all that Nvidia gear you're claiming? Oooooh, you really don't have any? Too bad. Unless you have maaaany more systems in your beach house in the Bahamas, all with Nvidia video cards.
Not that having two systems with ATi video cards would make anybody a fanboy. It's your statements that make you a fanboy. And IF you really know GPUs, you must acknowledge the merits of both sides. Both ATi and Nvidia have advantages and disadvantages.
What makes you a fanboy is that in your little RED eyes, all merit goes to ATi. Well, if it wasn't for Nvidia, we'd still be playing on a Rage Fury Pro.

1. First came SLI, followed by CF
2. Nvidia was the first to have SM3 & 4
3. Nvidia was the first to have Transparency AA
4. ATI was the first to have 45-degree AF, followed by Nvidia with even better AF in the 8 series
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
Originally posted by: rise
tuteja, you've been wrong since day one on R600, so I understand it's a bitter pill, but you should take your own advice and not get into fights you keep losing.

FINE, yes, I have been wrong on R600 since day one. :)

Even so, I haven't said much about R600... I just don't have the willpower to actively take part in forum discussions like I did when rollo reigned supreme. Also, I am stressed out from my job, which is making me brain-dead. Ask apoppin... he thinks I am taking drugs or something. I know ATI's weaknesses very well... laziness, and stupid management, which I was hoping would have changed after the takeover. They are always late, they suck at marketing, and they have hired a lot of stupid employees who know crap about ATI. I can go on about ATI's flaws, but it may not matter in a year or so if AMD doesn't get out of the financial trouble they are in at the moment.
 

rise

Diamond Member
Dec 13, 2004
9,116
46
91
rollo treated you like you were his bitch when he was here. You stayed far away from him, as I recall ;)
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: tuteja1986
Originally posted by: Cookie Monster
Originally posted by: tuteja1986
Also, the 8800 wasn't an amazing breakthrough... maybe to the eyes of a newbie.

You either don't understand how GPU architectures actually work, or you're just being ignorant. So it isn't that big of a breakthrough when nVIDIA's no. 1 competitor's next-gen architecture, aka R600, competes against a crippled version of the G80 core (the 8800GTS)?

This is coming from a person who claims that dual X1950pro is the fastest GPU at the end of the war. Most review sites claim that the 8800GTX is slightly faster than an X1950XTX CrossFire setup. Logic tells me that dual X1950pro will lose to dual X1950XTX.
http://www.techpowerup.com/reviews/Sapphire/X1950_Pro_Dual/15

Far Cry 2048x1536 4xAA 16xAF
Dual X1950pro: 107.4 FPS
8800GTX: 115 FPS

Prey 2048x1536 4xAA 16xAF
Dual X1950pro: 52.4 FPS
8800GTX: 65.2 FPS

Quake 4 2048x1536 4xAA 16xAF
Dual X1950pro: 32.9 FPS
8800GTX: 45.9 FPS

X3 2048x1536 4xAA 16xAF
Dual X1950pro: 62.7 FPS
8800GTX: 57.2 FPS

Power consumption - Idle
Dual X1950pro: 151 W
8800GTX: 159 W

System power consumption - Average
Dual X1950pro: 270 W
8800GTX: 248 W

System power consumption - Peak
Dual X1950pro: 286 W
8800GTX: 266 W


Also, I can tell you I understand a lot about GPU architectures. I'm not an idiot with numbers.

I was going to tell you not to reply to CookieMonster's post unless you REALLY thought it out well, but I'm too late. Now you look twice as dumb. Sorry 'bout that. :(
 

terentenet

Senior member
Nov 8, 2005
387
0
0
Originally posted by: tuteja1986
FINE, yes, I have been wrong on R600 since day one. :)

Even so, I haven't said much about R600... I just don't have the willpower to actively take part in forum discussions like I did when rollo reigned supreme. Also, I am stressed out from my job, which is making me brain-dead. Ask apoppin... he thinks I am taking drugs or something. I know ATI's weaknesses very well... laziness, and stupid management, which I was hoping would have changed after the takeover. They are always late, they suck at marketing, and they have hired a lot of stupid employees who know crap about ATI. I can go on about ATI's flaws, but it may not matter in a year or so if AMD doesn't get out of the financial trouble they are in at the moment.

AMD is not going down. I for one hope they get R600 out and get good sales with it; make some profit. If AMD goes down, CPU prices will get sky-high, and so will video cards. Without competition, Intel and Nvidia will make no improvements; no better products will come out the door. It would be a shame.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: tuteja1986
Originally posted by: Gomce
And who's to blame them? They wanted to play good capitalists (talking about AMD here), sitting on their good product (the Athlon 64) for far too long with virtually zero innovation, enjoying the rise in sales, profit and share price.

The K7 architecture is what, 5+ years old now?

I've been using AMD since the 166MHz days, then a Duron 700@900, a T-Bird 1400, an A64 3000+, etc. And the performance was good, albeit the accompanying chipsets weren't so stable and one had to make a careful selection of what to buy.

But AMD, imo, abused the underdog card for far too long. The A64 ended up being $400+, and AMD quickly became the new Intel, a period that lasted for almost 18 months!

-

I'm glad Intel went with the price-reduction strategy, first with the Pentium D and then with the awesome-performing C2D. I now have 4 systems, all with Intel processors, which run without any problems with all sorts of RAM (DDR, DDR2 value and more enthusiast-grade alike).

------

Same thing with ATI. They had a great run with the 9700pro!

But what happened afterwards? 9800pro > X800 > X1800 (lol) > X1900

Did we see any innovation or breakthrough? No, the same mistake as AMD!

One is led to conclude that the A64 and 9700 gems were just extremely lucky shots by these two companies.


I'm saddened to think that there will be nothing in the next 3-4 years to challenge Intel/Nvidia.

The 8800 series is an amazing breakthrough; the same goes for C2D. Sadly, without competition NVidia's prices will now remain stagnant, and you can't expect anything more interesting than an 8800GTX in the next 2 years.

Let's see:
NV40 vs R4xx
Fastest GPU, 1st round: 6800U
Fastest GPU, 2nd round: X800XT PE
Fastest GPU, 3rd round: X850XT PE
High-end bang-for-buck GPU, 1st round: 6800GT
High-end bang-for-buck GPU, 2nd round: X800XL
High-end bang-for-buck GPU, 3rd round: X800GTO2 softmodded to X850XT PE
Midrange bang-for-buck GPU, 1st round: 6600GT
Midrange bang-for-buck GPU, 2nd round: X800GTO
Fastest GPU before 7800GTX: X850XT PE
Fastest GPU at the end of the war: X850XT PE
Best high-end bang-for-buck GPU at the end of the war: X800XL
Best midrange GPU at the end of the war: X800GTO2

G70 vs R5xx
Fastest GPU, 1st round: 7800GTX 256 << Fixed
Fastest GPU, 2nd round: X1800XT << Fixed
Fastest GPU, 3rd round: 7800GTX 512 << Fixed
Fastest GPU, 4th round: X1900XTX << Fixed
High-end bang-for-buck GPU, 1st round: X1800XT 512MB for $300
High-end bang-for-buck GPU, 2nd round: X1900XT 512MB for $300
High-end bang-for-buck GPU, 3rd round: X1950XT 256MB for $250 or less << Fixed
High-end bang-for-buck GPU, 4th round: X1950XT 256MB for $250
Midrange bang-for-buck GPU, 1st round: none at $200 in the R5xx or G70 series.
Midrange bang-for-buck GPU, 2nd round: X1950pro 256MB for $200
Fastest GPU before 8800GTX: Nvidia 7950GX2
Fastest GPU at the end of the war: dual X1950pro, which can compete with the 8800GTX
Best high-end bang-for-buck GPU at the end of the war: X1950XT 256MB
Best midrange GPU at the end of the war: X1950pro 256MB

Also, the 8800 wasn't an amazing breakthrough... maybe to the eyes of a newbie.

It's a fast card, but it can rarely double the performance of the 7900GTX or Radeon X1950XTX, except at ultra-mega-super-duper high resolutions with crazy amounts of anti-aliasing and aniso filtering. It's a nice leap in performance, not seen for a while, but the leap from the Ti 4600 to the 9700 PRO was marginally more impressive, as was the leap from the FX 5950 to the 6800 Ultra, or from the Radeon 9800 PRO to the X800XT PE.
 

rise

Diamond Member
Dec 13, 2004
9,116
46
91
AMD is not going down. I for one hope they get R600 out and get good sales with it; make some profit. If AMD goes down, CPU prices will get sky-high, and so will video cards. Without competition, Intel and Nvidia will make no improvements; no better products will come out the door. It would be a shame.
Nothing personal, but I wish people would stop saying that. I doubt there's more than a handful of people who actually want to see AMD go down or to live with the consequences.

There are certainly more than a handful who think that AMD will go down, but few want to see it.

They'll be OK; I'd like to get some shares at ~10.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: coldpower27
Originally posted by: tuteja1986
"If it wasn't for ATI you wouldn't have been seen out cry for higher quaility AF and also new AA modes. Also The only richer feature that 6800U had was a Shader 3.0 which meant crap since shader 3.0 were suppose to run more efficent and faster than Shader 2.0. But if you looked at Farcry benchmark it will tell how much Shader 3.0 actually mattered."

"Even if ATI has bang for buck GPU they don't sell no where as good as inferior GPU from NVIDIA. "

Numbers please??
_______________________________________________________________________________________________________________
Shader Model 3.0 and OpenEXR HDR, not to mention SLI technology, meaning Nvidia had the overall speed crown as well as the feature-set crown. Quite impressive coming from the GeForce FX generation.
______________________________________________________________________________________________________


SLI is an idea originally from 3dfx, and they did not have the overall speed crown against the X800 series; since its debut the 6800 always lagged behind in most DX games, while it won in OpenGL games, which are not that many. Indeed, impressive coming from the FX generation.


_______________________________________________________________________________________________________________
"In the FarCry benchmarks Nvidia actually gained more using Shader Model 3.0 then ATI did using Pixel Shader 2.0b, thanks to ATI resources were implemented on a pathway that is only used by 1 generation of ATI hardware a complete waste, Nvidia's work is the default Shader Model 3.0 implementation allowing X1K users to enjoy far more Shader Model 3.0 games then wouldn't have existed as quickly if Nvidia hadn't launched Shader Model 3.0 capable hardware more then a year ago from when X1800 launched."
_______________________________________________________________________________________________________________



Actually, the Radeon X800XT PE using SM2.0b always outperformed nVidia's SM3.0 path by a significant margin. Even though the SM3.0 path increased the performance of the 6800 Ultra, it was never able to catch up. After all, SM2.0b is just SM3.0 without dynamic branching and vertex texture fetch. Both have 512 native hardware pixel shader instruction slots per component, but the dynamic branching used in SM3.0 allows longer shaders and better performance, WHEN PROPERLY IMPLEMENTED!! nVidia's poor SM3.0 implementation on the 6800 Ultra is what makes that card run so slow in newer games, where the X850XT is still getting playable framerates using high-quality settings at standard resolutions like 1280x1024, and sometimes at higher resolutions. And that "only one generation" of ATI hardware lasted as long as the 6800 series, and is still able to offer performance the 6800 series cannot touch.


_______________________________________________________________________________________________________________
"Nvidia's and ATI's AA quality are comparable for modes like 2x/4x MSAA while Nvidia retains an edge when they are able to use SSAA to bring higher image quality. Nvidia was the one that introduced Transparency AA, so they have their own improvements to AA."
_______________________________________________________________________________________________________________



You need to update yourself. ATi introduced with the X1800 series an anti-aliasing method called Adaptive Anti-Aliasing, which uses a combination of supersampling on alpha textures and multisampling on the rest of the scene; even before that card's debut it was unofficially available to X850 users and below. Even though the supersampling quality on nVidia is a bit better than on any Radeon, ATi's gamma-corrected multisampling is of slightly higher quality than nVidia's.
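Since both posts describe the same mechanism, here is a minimal Python sketch of the idea (my own toy illustration with made-up draw names and a flat 4x sample count, not ATi's or nVIDIA's actual driver logic): alpha-tested draws get shaded once per sample (supersampling) so the alpha test is resolved at every sample position, while everything else is shaded once per pixel (multisampling).

[code]
# Toy model of adaptive/transparency AA: supersample only alpha-tested draws,
# multisample the rest. Names and pixel counts are illustrative only.
from dataclasses import dataclass

@dataclass
class DrawCall:
    name: str
    alpha_tested: bool   # e.g. fences, foliage with punch-through alpha
    pixels: int

def shading_cost(draw: DrawCall, samples: int = 4) -> int:
    """Pixel shader invocations needed for one draw under a 4x AA mode."""
    if draw.alpha_tested:
        # Supersampling: run the shader per sample so the alpha test
        # is evaluated at every sample position.
        return draw.pixels * samples
    # Multisampling: shade once per pixel, replicate to covered samples.
    return draw.pixels

scene = [
    DrawCall("terrain", alpha_tested=False, pixels=500_000),
    DrawCall("fence",   alpha_tested=True,  pixels=20_000),
    DrawCall("foliage", alpha_tested=True,  pixels=80_000),
]

adaptive = sum(shading_cost(d) for d in scene)
full_ssaa = sum(d.pixels for d in scene) * 4
print(f"shader invocations, adaptive AA: {adaptive:,}")
print(f"shader invocations, full SSAA:   {full_ssaa:,}")
[/code]

With these made-up numbers the expensive supersampled path is only paid on the small fraction of the frame that is alpha-tested, which is the whole point of the adaptive scheme.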


_______________________________________________________________________________________________________________
"ATI was the one who introduced the 45 Degree Angle dependent AF stuff leading Nvidia down the path to performance oriented AF, this is not impressive to me, the X1K only fixed this back to the norm of 90 Degree and Nvidia took it a step above that with the near perfect AF of the 8 Series."
_______________________________________________________________________________________________________________



Actually, the GeForce 4 series also used similar AF techniques, and probably even older cards did; the Radeon 8500 had a badly implemented AF which relied only on bilinear filtering. The X1800 series, compared to the 7 series of cards, was an impressive image-quality gain. The only card that used to do that type of angle-independent AF (at lower quality, of course) was the FX 5800, and it was turned off due to its horrible impact on performance. It's been a while.

 

JBT

Lifer
Nov 28, 2001
12,094
1
81
Originally posted by: SickBeast
I never understood the whole socket debacle.

Why AMD needed Socket 939 and 940 made no sense to me; they're pretty much the same thing.

DDR2 wasn't even necessary for the A64, and AMD should have let the dust settle a bit before they jumped the gun.

No kidding. For how many millions they invested in something so meaningless, it seems especially ridiculous in the business sense....
 

AmdInside

Golden Member
Jan 22, 2002
1,355
0
76
Originally posted by: JBT
Originally posted by: SickBeast
I never understood the whole socket debacle.

Why AMD needed Socket 939 and 940 made no sense to me; they're pretty much the same thing.

DDR2 wasn't even necessary for the A64, and AMD should have let the dust settle a bit before they jumped the gun.

No kidding. For how many millions they invested in something so meaningless, it seems especially ridiculous in the business sense....

The AM2 processors are nothing more than higher-clocked Socket 939 processors with DDR2 support. They really should have stayed with DDR for now and waited to switch sockets until Barcelona came out.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: evolucion8
SLI is an idea originally from 3dfx, and they did not have the overall speed crown against the X800 series; since its debut the 6800 always lagged behind in most DX games, while it won in OpenGL games, which are not that many. Indeed, impressive coming from the FX generation.

SLI coming from 3dfx is not relevant to this discussion; all I am saying is that Nvidia had the overall speed crown with this technology. The single-card speed crown wasn't a concern to Nvidia, which used SLI. And yes, they had the overall speed crown over the X800 series with 6800 Ultra SLI; overall means the fastest implementation, period, bar none. CrossFire wasn't even in the picture till far later. In a single-card configuration, people were content with some minor performance deficiencies in exchange for a richer feature set: OpenEXR HDR as well as better OpenGL performance.

Originally posted by: evolucion8
Actually, the Radeon X800XT PE using SM2.0b always outperformed nVidia's SM3.0 path by a significant margin. Even though the SM3.0 path increased the performance of the 6800 Ultra, it was never able to catch up. After all, SM2.0b is just SM3.0 without dynamic branching and vertex texture fetch. Both have 512 native hardware pixel shader instruction slots per component, but the dynamic branching used in SM3.0 allows longer shaders and better performance, WHEN PROPERLY IMPLEMENTED!! nVidia's poor SM3.0 implementation on the 6800 Ultra is what makes that card run so slow in newer games, where the X850XT is still getting playable framerates using high-quality settings at standard resolutions like 1280x1024, and sometimes at higher resolutions. And that "only one generation" of ATI hardware lasted as long as the 6800 series, and is still able to offer performance the 6800 series cannot touch.

I said it gained more; I didn't say it won overall with those gains, learn the difference. Shader Model 3.0 represents a significant transistor investment to make developers' lives much easier, and from a programmability perspective it is superior. Of course some performance was sacrificed, but only due to the fact that Nvidia lacked low-k dielectric materials on 0.13 micron, rather than because of a poorly implemented Shader Model 3.0.

You just forgot SLI again :S The 6800 Ultra SLI offered performance ATi couldn't beat, not until they came up with their own solution, which wasn't even in the picture until basically right before 7800 GTX SLI came onto the scene, which is far too late.

As well, Pixel Shader 2.0b offers 512 hardware pixel instruction slots per component, while Pixel Shader 3.0 offers >=512, which is superior, not to mention that it allows 2^16 executed instructions, 224 constant registers, arbitrary swizzling, gradient instructions, a position register, etc...

It's even an inferior implementation compared to Nvidia's own Shader Model 2.0a. I applaud Nvidia for pushing the programmability envelope at some cost to performance.

Originally posted by: evolucion8
You need to update yourself. ATi introduced with the X1800 series an anti-aliasing method called Adaptive Anti-Aliasing, which uses a combination of supersampling on alpha textures and multisampling on the rest of the scene; even before that card's debut it was unofficially available to X850 users and below. Even though the supersampling quality on nVidia is a bit better than on any Radeon, ATi's gamma-corrected multisampling is of slightly higher quality than nVidia's.

Maybe you need to update your own self. Nvidia introduced Transparency AA first; ATI followed later down the road with Adaptive Anti-Aliasing. I am quite aware of what features ATI introduced. I never said anything about ATi not implementing it themselves; I just said Nvidia brought it forward first with the GeForce 7800 series.

Originally posted by: evolucion8
Actually, the GeForce 4 series also used similar AF techniques, and probably even older cards did; the Radeon 8500 had a badly implemented AF which relied only on bilinear filtering. The X1800 series, compared to the 7 series of cards, was an impressive image-quality gain. The only card that used to do that type of angle-independent AF (at lower quality, of course) was the FX 5800, and it was turned off due to its horrible impact on performance. It's been a while.

No, actually the NV20/NV25/NV28 all had great AF implementations, as did the NV30/NV35/NV38 on the non-optimized settings. Nvidia only started down the path of angle-dependent AF with NV40 because, from what they saw in the R3xx generation, all people cared about was how quick it was and not how great it looked. So that gave Nvidia the green light to keep optimizing, to find out how much lower they could go before people screamed for improvement, which pretty much didn't happen till G7x, when its competitor decided to implement angle-independent AF as a "new" feature.

I don't see the X1K bringing anything new to the AF table; it was merely putting us back where we should be. I applaud ATI for doing this, but it's not anything new. It's like taking something away and giving it back later.
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
No, actually the NV20/NV25/NV28 all had great AF implementations, as did the NV30/NV35/NV38 on the non-optimized settings. Nvidia only started down the path of angle-dependent AF with NV40 because, from what they saw in the R3xx generation, all people cared about was how quick it was and not how great it looked. So that gave Nvidia the green light to keep optimizing, to find out how much lower they could go before people screamed for improvement, which pretty much didn't happen till G7x, when its competitor decided to implement angle-independent AF as a "new" feature.

But do keep in mind it was a wise decision, as neither NV30 nor R300 would have been able to do high-quality AF at playable rates, and even after ATi "introduced it back" with the X1800/X1900 series, Nvidia still relied on the IQ optimizations to come close, because let's face it, the X1900 was by far THE best card.

The question is, would Nvidia have brought the awesome IQ of the 8800 series back if it weren't for ATi? Or would they keep playing "dirty"?

Hopefully, from now on, neither of them will sacrifice IQ in favor of performance anymore.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
SLI coming from 3dfx is not relevant to this discussion; all I am saying is that Nvidia had the overall speed crown with this technology. The single-card speed crown wasn't a concern to Nvidia, which used SLI. And yes, they had the overall speed crown over the X800 series with 6800 Ultra SLI; overall means the fastest implementation, period, bar none. CrossFire wasn't even in the picture till far later. In a single-card configuration, people were content with some minor performance deficiencies in exchange for a richer feature set: OpenEXR HDR as well as better OpenGL performance.
_______________________________________________________________________________________________________________


Minor performance deficiencies? LOL, just do some research and look at how badly the obsolete Radeon X800 series outperforms the more up-to-date 6800 series in modern games like Oblivion and F.E.A.R. It's useless having a fat feature set without the performance to push it; GeForce FX, anyone? Useless. ATi also implemented CrossFire on its X800 series, which wasn't that bad considering its 1600x1200@60Hz limitation. OpenEXR HDR is a nice standard that doesn't belong to nVidia. Older-generation ATi cards were able to do HDR, but not the OpenEXR standard; they did standard HDR based on pure DX9 pixel shaders.

_______________________________________________________________________________________________________________

I said it gained more; I didn't say it won overall with those gains, learn the difference. Shader Model 3.0 represents a significant transistor investment to make developers' lives much easier, and from a programmability perspective it is superior. Of course some performance was sacrificed, but only due to the fact that Nvidia lacked low-k dielectric materials on 0.13 micron, rather than because of a poorly implemented Shader Model 3.0.

You just forgot SLI again :S The 6800 Ultra SLI offered performance ATi couldn't beat, not until they came up with their own solution, which wasn't even in the picture until basically right before 7800 GTX SLI came onto the scene, which is far too late.

As well, Pixel Shader 2.0b offers 512 hardware pixel instruction slots per component, while Pixel Shader 3.0 offers >=512, which is superior, not to mention that it allows 2^16 executed instructions, 224 constant registers, arbitrary swizzling, gradient instructions, a position register, etc...

It's even an inferior implementation compared to Nvidia's own Shader Model 2.0a. I applaud Nvidia for pushing the programmability envelope at some cost to performance.
_______________________________________________________________________________________________________________


What does DX9 game performance have to do with low-k dielectric material?? lol, you look ridiculous making such claims. The low-k dielectric material allowed speed increases with lower capacitance and less crosstalk, and because the X800 had fewer transistors than the 6800 series it could reach higher clock speeds. But in the GPU arena it is not about clock speed, it is about efficiency, and the GeForce 6800 series has the most pathetic SM3.0 implementation to date.

The GeForce 6800 series has only 4 internal registers per pixel, compared to 12 registers since the Radeon 9700 era. nVidia then increased the pixel pipeline depth and complexity to minimize texture-access latency in the pixel shader. The main problem with this implementation is that, in order to work, it has to operate on large chunks of pixels at once, reducing the chance of a performance boost from dynamic branching, because not all the pixels in a chunk will go the same way. Since the Radeon X1K series works with smaller threads, it can distribute the load from the Ultra-Threading Dispatcher to all the pixel shaders in small chunks; that finer granularity improves performance when using dynamic branching, since with smaller chunks it is easier to determine when a branch will occur and which way it will go, which is an SM3.0 requirement. Dynamic branching is something that will NEVER improve performance on the GeForce 7, with its 256-pixel-and-larger thread size, and it is even worse on the GeForce 6 series, with its ~1000-pixel thread size; both are a far cry from the 16/48-pixel shader threads and dedicated branching unit of the X1K series.

Bear in mind also that SM2.0a is indeed more advanced than SM2.0b, but don't forget that it can only process up to 1,024 shader instructions, not 1,536, and SM2.0b has 10 more temporary registers; and since it allows a longer shader instruction count, it doesn't need unlimited texture lookups ;). Go look up and research the GeForce FX architecture and the SM2.0a profile.
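To illustrate the granularity argument above, here is a small Python toy cost model (my own assumption-laden sketch, not measured hardware data): a rarely taken but expensive branch is simulated over different batch sizes, and any batch containing a mix of taken and not-taken pixels is charged for both paths. The batch sizes simply mirror the 16/48, 256 and ~1000 pixel figures quoted in the post; the per-path costs and the 2% branch rate are invented for the example.

[code]
# Toy cost model of dynamic branching granularity (illustrative numbers only):
# if any pixel in a batch takes a branch while others skip it, the whole batch
# executes both paths, so smaller batches waste less work.
import random

def avg_cost(batch_size: int, pixels: int = 48_000,
             taken_fraction: float = 0.02,
             cost_taken: int = 10, cost_skipped: int = 1) -> float:
    """Average per-pixel shader cost for a given branching batch size."""
    rng = random.Random(0)
    total = 0
    for start in range(0, pixels, batch_size):
        n = min(batch_size, pixels - start)
        takes = [rng.random() < taken_fraction for _ in range(n)]
        if all(takes):            # whole batch takes the expensive path
            total += cost_taken * n
        elif not any(takes):      # whole batch skips it entirely
            total += cost_skipped * n
        else:                     # divergent batch pays for both paths
            total += (cost_taken + cost_skipped) * n
    return total / pixels

# Batch sizes mirroring the figures in the post: X1K-like vs G7x vs G6x-like.
for size in (16, 48, 256, 1000):
    print(f"batch size {size:4d}: average per-pixel cost {avg_cost(size):.2f}")
[/code]

With these made-up costs the 16- and 48-pixel batches skip the expensive path most of the time, while the 256- and ~1000-pixel batches almost always diverge and pay for both paths, which is the gist of the granularity point being argued here.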

_______________________________________________________________________________________________________________

No, actually the NV20/NV25/NV28 all had great AF implementations, as did the NV30/NV35/NV38 on the non-optimized settings. Nvidia only started down the path of angle-dependent AF with NV40 because, from what they saw in the R3xx generation, all people cared about was how quick it was and not how great it looked. So that gave Nvidia the green light to keep optimizing, to find out how much lower they could go before people screamed for improvement, which pretty much didn't happen till G7x, when its competitor decided to implement angle-independent AF as a "new" feature.

I don't see the X1K bringing anything new to the AF table; it was merely putting us back where we should be. I applaud ATI for doing this, but it's not anything new. It's like taking something away and giving it back later.
_______________________________________________________________________________________________________________



Also, like the guy above asked: if it weren't for ATi doing HQ AF, would the GeForce 8800 series be doing HQ AF? I guess that's thanks to ATi and the competition; also thanks to ATi, the GeForce 8800 series can do HDR with anti-aliasing!!! Hopefully nVidia will never again sacrifice IQ for performance like it did many times before; ATi did that once with its Radeon 8500 series.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
The question is, would Nvidia have brought the awesome IQ of the 8800 series back if it weren't for ATi? Or would they keep playing "dirty"?

I believe nvidia's IQ would have improved again regardless. You have to remember that G80 started development in 2002, around the time NV30 launched, and NV4x/G7x were created by a separate engineering team (that was the task 3dfx's ex-engineers were given). NV4x & G7x were really only intended to keep up with ATi until G80 could reestablish unconditional leadership.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Gstanfor
The question is, would Nvidia have brought the awesome IQ of the 8800 series back if it weren't for ATi? Or would they keep playing "dirty"?

I believe nvidia's IQ would have improved again regardless. You have to remember that G80 started development in 2002, around the time NV30 launched, and NV4x/G7x were created by a separate engineering team (that was the task 3dfx's ex-engineers were given). NV4x & G7x were really only intended to keep up with ATi until G80 could reestablish unconditional leadership.

Yeah, pretty much like ATi did with their R4xx series of cards, to keep up until the debut of the R5xx series. Both companies, and many more like Intel, have multiple engineering teams.