ATI tries to downplay SLI


jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
Originally posted by: Gamingphreek
wouldn't 4 800x600's equal 3200x2400?

One thing forgotten is that it isn't as easy as we were thinking, since one side is probably going to be easier or harder for a card to render than the other. Like the card rendering the floor, wooo, really stressful, while the one rendering everything else is going to struggle.

-Kevin

OK, as CaiNaM and Pete stated, this way of thinking is wrong. But even with that logic:

800x600=480000 and 480000x4=1920000
3200x2400=7680000

Is this anywhere near?
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: jim1976
Originally posted by: Gamingphreek
wouldn't 4 800x600's equal 3200x2400?

One thing forgotten is that it isn't as easy as we were thinking, since one side is probably going to be easier or harder for a card to render than the other. Like the card rendering the floor, wooo, really stressful, while the one rendering everything else is going to struggle.

-Kevin

OK, as CaiNaM and Pete stated, this way of thinking is wrong. But even with that logic:

800x600=480000 and 480000x4=1920000
3200x2400=7680000

Is this anywhere near?



The thing is like this: if you have four 800x600s, you have to arrange them into a square. Think 4 window panes. The resulting square is only 1600 pixels wide and 1200 pixels tall...so 1600x1200 is like four 800x600s in terms of pixels displayed. Now, take four 1600x1200 squares and arrange them the same way - in a square, window-pane style. This square is 3200x2400. So really 3200x2400 is like sixteen 800x600s in terms of raw pixels displayed - as you go up in resolution, the number of pixels displayed grows with the square of each dimension, not linearly.
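As a quick sanity check on the arithmetic above, here it is in a few lines of Python (nothing card-specific, just the pixel counts from the posts in this thread):

# Pixel counts at each resolution discussed above.
px_800x600   = 800 * 600      # 480,000 pixels
px_1600x1200 = 1600 * 1200    # 1,920,000 pixels
px_3200x2400 = 3200 * 2400    # 7,680,000 pixels

print(px_1600x1200 // px_800x600)  # 4  -> doubling both dimensions quadruples the pixels
print(px_3200x2400 // px_800x600)  # 16 -> 3200x2400 is sixteen 800x600s, not four

In other words, pixel load scales with the square of the linear scale factor: 2x the width and height is 4x the pixels, 4x the width and height is 16x.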
 

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
Originally posted by: Insomniak
*snip - nested quotes, see the posts directly above*

The thing is like this: if you have four 800x600s, you have to arrange them into a square. Think 4 window panes. The resulting square is only 1600 pixels wide and 1200 pixels tall...so 1600x1200 is like four 800x600s in terms of pixels displayed. Now, take four 1600x1200 squares and arrange them the same way - in a square, window-pane style. This square is 3200x2400. So really 3200x2400 is like sixteen 800x600s in terms of raw pixels displayed - as you go up in resolution, the number of pixels displayed grows with the square of each dimension, not linearly.

Are you referring to me? Because we are saying the same thing: 16 x 480,000 = 7,680,000.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Gamingphreek, why do you keep multiplying by four? SLI in its current form is for two cards.
 

klah

Diamond Member
Aug 13, 2002
7,070
1
0
Originally posted by: Gamingphreek
One thing forgotten is that it isn't as easy as we were thinking, since one side is probably going to be easier or harder for a card to render than the other. Like the card rendering the floor, wooo, really stressful, while the one rendering everything else is going to struggle.

-Kevin

http://www.anandtech.com/video/showdoc.aspx?i=2097&p=3

First, software (presumably in the driver) analyses what's going on in the scene currently being rendered and divides for the GPUs. The goal of this (patent-pending) load balancing software is to split the work 50/50 based on the amount of rendering power it will take. It might not be that each card renders 50% of the final image, but it should be that it takes each card the same amount of time to finish rendering its part of the scene (be it larger or smaller than the part the other GPU tackled).
...
Since the work is split on the way from the software to the hardware, everything from geometry and vertex processing to pixel shading and anisotropic filtering is divided between the GPUs. This is a step up from the original SLI, which just split the pixel pushing power of the chips.
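For anyone curious what that kind of load balancing might look like, here is a toy sketch in Python of the general idea described above: split the frame between the two GPUs and move the split line until both take roughly the same time. This is only an illustration of the concept, not NVIDIA's actual (patent-pending) algorithm, and every name in it is made up:

# Dynamic split-frame load balancing, sketched: GPU 0 renders the top slice of the
# frame, GPU 1 the bottom, and the split line drifts toward whichever GPU finished first.
def rebalance(split, top_ms, bottom_ms, step=0.02):
    """split = fraction of scanlines assigned to GPU 0 (the top of the frame)."""
    if top_ms > bottom_ms:      # top GPU was the bottleneck -> give it fewer lines
        split -= step
    elif bottom_ms > top_ms:    # bottom GPU was the bottleneck -> shift lines to the top GPU
        split += step
    return min(max(split, 0.1), 0.9)   # never starve either GPU completely

# Example: the top half of the scene is heavier, so GPU 0's share shrinks frame by frame.
split = 0.5
for top_ms, bottom_ms in [(20.0, 12.0), (18.5, 12.8), (17.0, 13.9), (15.8, 15.1)]:
    split = rebalance(split, top_ms, bottom_ms)
    print(f"GPU 0 renders the top {split:.0%} of the scanlines")

The point, as the article says, is that a 50/50 split of the screen is rarely a 50/50 split of the work, so the driver has to keep adjusting it.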
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Instead of alluding that there may be lots more games that use PS2 and were released before the 6800s, why don't you just tell us, so we can evaluate whether or not it mattered that we had PS2 hardware before nVidia released hardware capable of running it well?
I have told you, repeatedly.

As for your comments about SM 3.0, they're rather ludicrous in the context of the SM 2.0 comments you were making, even back when the number of titles was just a handful. But now one game with a recalled patch supports SM 3.0 and you act like it's the second coming or something. But when ATi tooled nVidia in SM 2.0 (nVidia doesn't actually tool ATi with SM 3.0) it didn't matter because, what was it you said, "I don't care about features as long as the cards are the same speed".

At least I tried to list the games, you just said I was wrong and we should take your word for it.
Trying to make an argument by making up crap is not something to be proud of.
 

JungleMan1

Golden Member
Nov 3, 2002
1,321
0
0
ATI will downplay SLI until they can develop it themselves; then it will be the best thing since sliced bread.

Just like Intel called BS when AMD used model numbers and emphasized that MHz aren't everything...until Intel found themselves in the same boat with the low MHz of the Pentium M, and wanting to add cache to their chips instead of ramping the MHz...then all of a sudden, model numbers and MHz-isn't-everything is Intel's mantra.
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: jim1976

Are you referring to me? Because we are saying the same thing: 16 x 480,000 = 7,680,000.


I wasn't really referring to you; I was just expounding on why the situation isn't the way he said it was. I quoted you to show what my post was in reference to. Sorry if there was some confusion.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Rollo
Jiffy:
Give it a rest, Rollo. The 9700Pro, with or without PS2.0, smoked the previous generation of cards. That's what made it a top seller. In high res and AA/AF situations, it was often 2X faster than the GF4 series. So don't go saying how worthless the 9700 Pro was just because it has taken forever for PS2-supported games to come out.
The 9700Pro smoked 4600s for six whole months before 5800Us came out and offered comparable if not better performance at all playable settings. Then the 5900s came out and evened the playing field again. There was no "Golden Era of 9700Pro Domination" unless you call the 6 months the 5800U was delayed due to TSMC's failure an "era".

Typical Rollo - historical revisionism at its finest.

July 18, 2002: ATI's Radeon 9700 (R300) - Crowning the New King. Anand reviews ATI Radeon 9700 Pro; it smokes the competition. Rollo insists AA and AF aren't that important.

January 27th, 2003: Nvidia GeForce FX 5800: It's Here, but is it Good? Anand's take on it:

So there you have it, NVIDIA's response to ATI's Radeon 9700 Pro - but does anyone else feel unfulfilled by the GeForce FX? A card that is several months late, that is able to outperform the Radeon 9700 Pro by 10% at best but in most cases manages to fall behind by a factor much greater than that. Granted the problems that plagued the launch of the FX weren't all up to NVIDIA's control, after all the decision to go 0.13-micron was made 1 - 2 years ago based on data that was available at the time. ATI took a gamble on producing a 0.15-micron part and NVIDIA did the same on their 0.13-micron NV30, and it looks like ATI guessed right.

^ The 5800U hair dryer is released and is found to be slower than the 6-month-old 9700 Pro in most cases (and by a more significant margin than in the tests that it bests the 9700 Pro in). Later, the pointless 5800nu would be released. Nvidia releases drivers which significantly improve the 5800U over the next year (starting with the 50-series Dets); however, this is definitely in the "too little, too late" category for the infamous 5800U.

March 6th, 2003: ATI's Radeon 9800, 9600 & 9200: Still Fighting Strong

ATI furthers its lead over Nvidia with the 9800 Pro.

More than just a higher clocked Radeon 9700, the Radeon 9800 Pro doesn't cease to impress because of the minor but potent improvements ATI made to the R350 core. Definitely a pleasant surprise, ATI has produced a worthy interim successor to the Radeon 9700 Pro. If you want the best out today, look no further than the Radeon 9800 Pro; it's quiet, faster, occupies a single slot, and will enjoy much wider availability than the GeForce FX.

NVIDIA will not have a chance to respond to the Radeon 9800 Pro for another couple of months, with their NV35 part.

May 12th, 2003: Nvidia's Back with NV35 - GeForceFX 5900 Ultra

Nvidia essentially matches ATI's 9800 Pro card, albeit with partial DX9 support. Fanboys across the internet argue endlessly over which is better.


April 14th, 2004: Nvidia's GeForce 6800 Ultra: The Next Step Forward

Nvidia comes roaring back with the strong 6800 series. To nobody's surprise (and just like ATI), they release their least available card to reviewers: the 6800 Ultra. Over the next couple of months, only the 6800nu and X800 Pro are available in any kind of quantity, with the 6800U being nonexistent and the X800XT virtually impossible to find.

Then, the X800XT goes virtually AWOL and the 6800GT shows up on the scene.

To the surprise of no one, Rollo completely changes his tune, and proclaims "buying for the future" is the way to go, with the 6800 series and SM3.0 being the obvious choice.

------------------------------------------------

Originally posted by: Rollo
Meanwhile today the X800 cards are similar performers and pack similar featuresets to the 6800 cards, the only difference being PS 3.0, which I contest is a good feature for the 6800's, but is nothing as monumental as the jump from the GF4 series to the R300 series.
So if an advance isn't as big of an advance as GF4 to R300, it's not relevant? That makes a lot of sense. :roll:

Spin away, TRollo :p .

But you're already onboard for its licensed games, so that's good to hear. They should be enjoyable to play on your 6800 cards when they come out in 3 years.
I think it's a little easier to license an engine and put different art/AI on it than design it from the ground up, Jiffy. I've got $50 that says we see Doom3-licensed games before three years from now?

It definitely won't be during this generation of cards' lifecycle. There will be something new and shiny to play D3-licensed games faster by the time they come out.
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: BFG10K
Instead of alluding that there may be lots more games that use PS2 and were released before the 6800s, why don't you just tell us, so we can evaluate whether or not it mattered that we had PS2 hardware before nVidia released hardware capable of running it well?
I have told you, repeatedly.

As for your comments about SM 3.0, they're rather ludicrous in the context of the SM 2.0 comments you were making, even back when the number of titles was just a handful. But now one game with a recalled patch supports SM 3.0 and you act like it's the second coming or something. But when ATi tooled nVidia in SM 2.0 (nVidia doesn't actually tool ATi with SM 3.0) it didn't matter because, what was it you said, "I don't care about features as long as the cards are the same speed".

At least I tried to list the games, you just said I was wrong and we should take your word for it.
Trying to make an argument by making up crap is not something to be proud of.



The way I see it is this:

PS2.0 was never really useful and never really will be since most devs are skipping it and rolling forward to SM3.0

SM3.0 isn't really useful now, and by the time it is, both ATi and Nvidia will have parts on the market that support it. What will matter then is who is faster at SM3.0, which only time will tell.

Since the Unreal Engine 3 is the first big name engine using DX9 as the minimum spec, and it's coming in 2006, all these features on these cards will have a minimal impact on visual quality until then. Look at Half-Life 2, for example - the differences between DX9 and DX8.1 are not vast. They are there, but they don't mean much.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
PS2.0 was never really useful and never really will be since most devs are skipping it and rolling forward to SM3.0
How can you justify this? All SM3 cards are SM2, so the potential audience for SM2 titles is automatically larger than that for SM3. So the main advantage of SM3 is for the dev, but they're not making games to be graded on coding elegance. They're making games to be playable for as many people as possible, so SM2 would seem to be more attractive from the perspective of making money (a powerful perspective, I hear ;)).

Once Xbox2 hits, though, I can imagine SM3 will be *much* more attractive from a development standpoint. If your argument is that devs will skip SM2 to combine development for SM3 cards and Xb2, still, why would they leave out three generations of 9700/9500, 9800/9600, and X600/X300 owners? Not to mention Intel's new GPU is SM2 (I don't remember exactly how fast it is, though, so it may not be much better than a Longhorn accelerator).

The differences don't mean much if the lower-quality card is faster, but if they're the same speed, why not get the extra IQ for free? Of course, there are more variables than just speed and IQ (drivers & multimon being the main ones), but I'd imagine that most people who even bother to read a review will stop at the first two variables (and that's assuming the review even takes into account the latter two).

I'm just not sure why you think SM2 isn't useful and most devs will skip to SM3. For devs developing for 2006, sure, SM3 makes sense. But for PC-only games coming out soon, what's the attraction of SM3 before or in place of SM2?
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: jiffylube1024
*snip - Jiffy's full timeline reply to Rollo, quoted above in its entirety*
Good job, Jiffy!

here's a :cookie: just for you. :D

:cookie:
(it's a cookie . . . sorry the milk is spoiled . . .wanna :beer: with it?)

I'd say about 12-18 months for a new game using the Doom III engine, and a GOOD Doom-engine game in about 2 years ;)

The 6800, 6800GT, and X800 Pro will be where the 9800 Pro is today by then. :p

Maybe in a worse position as the r500/nv50 is gonna DOMINATE. ;)

:roll:
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: MegaWorks
it's official, rollo is the biggest nv fanboy in this forum.



Pfft. I bet ATI values me as a customer far more than you Megaworks, because odds are I've bought more of their cards down at the local Best Buy?
Here's my list: VESA Graphics Pro Turbo, 32MB Rage Fury, 16MB Rage Fury, Rage Fury MAXX, Radeon 32DDR, Radeon VIVO, Radeon 8500, Radeon 9700Pro, Radeon 9800 Pro.
So how about Megaworks? Why don't you put the smack down now and show us all why you're so much bigger of an ATI fan than I am, because you've actually bought and used more of their cards than I have?

Or does your list come up a little "short"? I actually went from faster nVidia cards to slower ATI cards on several of those occasions. Yep. I'm a huge nVidia fanboy. Even though I've been buying ATI cards longer than you've likely been gaming, probably bought many more ATI cards than you have, and used their products more years than you likely have.

I'll say it again:
Pffft.

I'll add:
Meh.

You should think about what you're saying before you start flaming away. I'm sorry I didn't think the R300 core was worth buying three times for $400, but it just wasn't interesting enough. When the third quarter high end sales percentages come out, I think you'll see a lot of people agreed with me?
When SLI is launched, you'll see even more. The X800XT PE won't just have a 2 year old feature set then, it will also be slow for the high end market?

;)
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Amoppin':

Just because you don't really have anything to contribute about your experiences with modern video cards*, is that any reason to flame those of us who do?



*because your most "modern" card is a year old



Maybe in a worse position as the r500/nv50 is gonna DOMINATE.
Of course, that won't matter to you, will it, Amoppin'? As the r500 won't cost $200 for years, you won't really have any idea what they're like, will you?

That's the difference between you and me:
If the r500 "dominates" I'll buy one and enjoy it. You'll buy an X800Pro for $200, and post a bunch about how it's "good enough".
Pretty sad.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: Pete
Gamingphreek, why do you keep multiplying by four? SLI in its current form is for two cards.

Where did I multiply by four? I multiplied by 2 on each; however, I was proven wrong, and I understand why.

-Kevin
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
SM is shader model. PS (Pixel Shader) 2.0 is a component of SM2.0; it's part of what makes up SM2.0. A shader model is made up of a mess of features like Pixel and Vertex Shaders.

SM3.0 is the new model out that only Nvidia supports.... currently.

-Kevin
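For reference, here is a rough map of how those DirectX 9 shader model tiers line up, sketched in Python. It is just a restatement of what Kevin says above (a shader model bundles matching pixel and vertex shader specs); the hardware notes reflect the cards being discussed in this thread:

# A "shader model" bundles a matching pixel shader and vertex shader spec
# (plus instruction-count and register limits not shown here).
SHADER_MODELS = {
    "SM 2.0": ("PS 2.0", "VS 2.0"),  # the DX9 baseline, e.g. the R300/Radeon 9700 generation
    "SM 3.0": ("PS 3.0", "VS 3.0"),  # DX9.0c; at this point only NVIDIA's 6800 series supports it
}

for model, (ps, vs) in SHADER_MODELS.items():
    print(f"{model} = {ps} + {vs}")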
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: Pete
PS2.0 was never really useful and never really will be since most devs are skipping it and rolling forward to SM3.0
How can you justify this? All SM3 cards are SM2, so the potential audience for SM2 titles is automatically larger than that for SM3. So the main advantage of SM3 is for the dev, but they're not making games to be graded on coding elegance. They're making games to be playable for as many people as possible, so SM2 would seem to be more attractive from the perspective of making money (a powerful perspective, I hear ;)).

Once Xbox2 hits, though, I can imagine SM3 will be *much* more attractive from a development standpoint. If your argument is that devs will skip SM2 to combine development for SM3 cards and Xb2, still, why would they leave out three generations of 9700/9500, 9800/9600, and X600/X300 owners? Not to mention Intel's new GPU is SM2 (I don't remember exactly how fast it is, though, so it may not be much better than a Longhorn accelerator).

The differences don't mean much if the lower-quality card is faster, but if they're the same speed, why not get the extra IQ for free? Of course, there are more variables than just speed and IQ (drivers & multimon being the main ones), but I'd imagine that most people who even bother to read a review will stop at the first two variables (and that's assuming the review even takes into account the latter two).

I'm just not sure why you think SM2 isn't useful and most devs will skip to SM3. For devs developing for 2006, sure, SM3 makes sense. But for PC-only games coming out soon, what's the attraction of SM3 before or in place of SM2?


Devs certainly aren't leaving SM2.0 out of the loop - but it's not the optimal method for them to choose at this point. SM2.0 can do everything SM3.0 can do; it just takes a bit longer. SM2.0 will of course still be supported, but that doesn't mean that it really served a purpose.

How many games were coded in SM2.0? Very few, and pretty much everything from this point forward is going to be done in the SM3.0 method. There will be SM2.0 alternates, of course, but they won't be the primary, which is why I don't feel it's a noteworthy feature, or something useful to have, any more.

I see SM2.0 a lot like GDDR2 - it was skipped because there was really no reason to use it. By the time the market needed/wanted GDDR2, GDDR3 was available, cheaper, ran cooler, and was more energy efficient. SM3.0 is similar - now that the market needs/wants a good shader model (graphics based on shaders are now possible...), SM3.0 is available and is a more efficient coding method. There's really no reason to use SM2.0 except as a "catch-all" to allow your product to be run on the widest variety of cards.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Rollo
Amoppin':

Just because you don't really have anything to contribute about your experiences with modern video cards*, is that any reason to flame those of us who do?



*because your most "modern" card is a year old



Maybe in a worse position as the r500/nv50 is gonna DOMINATE.
Of course, that won't matter to you, will it, Amoppin'? As the r500 won't cost $200 for years, you won't really have any idea what they're like, will you?

That's the difference between you and me:
If the r500 "dominates" I'll buy one and enjoy it. You'll buy an X800Pro for $200, and post a bunch about how it's "good enough".
Pretty sad.


So now you're making digs at people because they can't afford to purchase the latest video cards as soon as they become available?


REALLY sad.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Originally posted by: Insomniak
*snip - Pete's post, quoted in full above*


Devs certainly aren't leaving SM2.0 out of the loop - but it's not the optimal method for them to choose at this point. SM2.0 can do everything SM3.0 can do; it just takes a bit longer. SM2.0 will of course still be supported, but that doesn't mean that it really served a purpose.

How many games were coded in SM2.0? Very few, and pretty much everything from this point forward is going to be done in the SM3.0 method. There will be SM2.0 alternates, of course, but they won't be the primary, which is why I don't feel it's a noteworthy feature, or something useful to have, any more.

I see SM2.0 a lot like GDDR2 - it was skipped because there was really no reason to use it. By the time the market needed/wanted GDDR2, GDDR3 was available, cheaper, ran cooler, and was more energy efficient. SM3.0 is similar - now that the market needs/wants a good shader model (graphics based on shaders are now possible...), SM3.0 is available and is a more efficient coding method. There's really no reason to use SM2.0 except as a "catch-all" to allow your product to be run on the widest variety of cards.

ahh.. kinda like sli? by the time the market wants/needs it, something else will be available, that's cheaper, runs cooler, and is more energy efficient (and takes up fewer slots)? ;)
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
I'm sorry, but I think SM2.0 will in no way be secondary to SM3.0 when there are many, many more SM2.0 boards out there than SM3.0.

As for this constant refrain of "how many SM2.0 games are there," well, always more than SM3.0. And we're getting close to a magnitude more SM2.0 titles than SM3.0, cheap as that stat is with maybe ten titles on the market. Graphics based on shaders are about as possible with SM2.0 as they are with SM3.0.

I guess there's no need to use C either, but people still do. I've heard the PS2 is hard to program for, but publishers still appear willing to shoot for the biggest audience possible. OK, rather weakly related examples, but I just don't see SM2.0 dying the quick death so many predict.

Kevin,

Where did i multiply by four? I multiplied by 2 on each however i was proven wrong, and i understand why.
Er, 2 x 2 = 4? I can understand making the simple mistake of thinking that 2 x (800x600) = 1600x1200, but that first sentence confused me even more. Are you sure you understand? Because now I'm not sure *I* understand. :)

Rollo,

You should think about what you're saying before you start flaming away.
So spending a lot of money is your excuse for doing the same? (Heck, I'm pretty sure apoppin' usually disagrees with me, which lately tends to mean agreeing with you.) What's the deal with all these childish posts? If I actually had a decent card I'd be gaming instead of constantly stirring things up for no apparent reason.

When the third quarter high end sales percentages come out, I think you'll see a lot of people agreed with me?
We'll see. I'm sure ATi will lose marketshare this generation, and probably the next, too, if the X900 isn't clearly faster than the 6900 at the $400 level, but I don't think they'll see the same loss of high-end share as nV appears to have suffered with the FX series (those numbers are probably skewed b/c of HL2 voucher holders and possibly cafes using Valve-recommended ATi cards, but 46K 9800s vs. 11K 5800s & 5900s is a friggin' huge disparity--and I'm not even counting the 9700s as their numbers are lumped in with the 9500s). You really think we'll see a reversal of that magnitude with the X800 and 6800 series? I'm not so sure, but no doubt the lack of XTPEs and anything to combat the 6800 isn't doing ATi any good.
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: CaiNaM
ahh.. kinda like sli? by the time the market wants/needs it, something else will be available, that's cheaper, runs cooler, and is more energy efficient (and takes up fewer slots)? ;)


Exactly. But SLI will allow even more people to enjoy their games than if it didn't exist, just like SM2.0.


Would you rather we didn't have SM2.0 and SLI as options and those people were left out?
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Pete and Creig:

Perhaps you think it's fun to say "Thank you sir, may I have another?" when people flame you without cause, but I do not.

it's official, rollo is the biggest nv fanboy in this forum
If Megaworks wants to post out of nowhere that I'm a big nV fanboy, when I've never posted anything to him ever, he gets to hear about how I've been a much better ATI customer than he has. Seems fair.

If Apoppin' is going to congratulate Jiffy for his 1,000-word manifesto flame directed at me, I'm going to tell him what I think of fanboys who pimp equipment they don't even bother to buy.

So I guess we've established I'm not going to give Ghandi or Jesus a run for the "Turned the other cheek" award. Oh well.