Please lock this thread. No more useful discussion going on.


kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: BenSkywalker

Kind of like underestimating when ATi said...

that was some moron writing here at anand who presented those absurd claims, not ati.

Originally posted by: BenSkywalker

If you have faith that ATi is going to release the fastest part this upcoming gen, you must assume that nVidia is going to screw up badly.

i am not putting any faith into this, as i said it is anybody's game. nvidia doesn't have to screw up at all to lose this round and i doubt they will screw up again this soon after the nv30, but the nv40 being a new core is not some golden ticket to first place and despite what you want to convince yourself of the r420 is very much a new core of its own. as for pvr, it would be great to have another player in the game but i am not going to hold my breath for that one.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
that was some moron writing here at anand who presented those absurd claims, not ati.

According to ATI, the Radeon 9600 XT should be the first mainstream part to outperform the Radeon 9700 Pro in all situations - not bad for a $199 card.

Anand and/or Derek wrote that, so if you are saying they are liars then you can attribute it to them directly.

and despite what you want to convince yourself of the r420 is very much a new core of its own

According to who exactly? I haven't even seen ATi PR try and claim that.

as for pvr, it would be great to have another player in the game but i am not going to hold my breath for that one.

Why not? It is looking to be a much larger step than the R420 for certain (not that that takes all that much). Based on all the information that is circulating around, if you are as devout an ATi fan as you make yourself out to be I wouldn't be getting your hopes up too high for the R420. ATi should have a much better offering later in the year. Is PowerVR's existence an insult to the fanatics also, or is it just nVidia? I don't really understand your religion that much, not trying to insult your deity in any way.
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
i didn't say liar, i said moron. it was completely idiotic to believe that ati would undercut their own product line by releasing a midrange card that keeps up with their top shelf offerings, even if it is true that someone from ati privately made the claim. more than likely the issue was nothing more than a matter of miscommunication or confusion, which seems to come up more and more often in the articles here at anandtech. regardless, ati never publicly made such a claim; it was nothing more than hearsay.

as for the r420 being a new core, that is what it is. actually it is the very first true 8 pipe chip on a .13µm process. sure it borrows much from its siblings, but then so has the geforce line, and when they get more than a simple clock bump and minor refinement we don't rightly call them revisions.

oh and sorry to disappoint you Ben, but unlike some people i keep my religious convictions to more spiritual matters and don't participate in graphics company idolatry. if pvr takes the lead next round i will most assuredly get one; whatever the best card out next spring is, it is bound to be a worthy upgrade from my 9700pro. however, with the competition as heavy as it is between ati and nvidia currently, i have my doubts that pvr can do any better than putting out a respectable midrange offering.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
it was completely idiotic to believe that ati would undercut their own product line by releasing a midrange card that keeps up with their top shelf offerings

Like the 5900SE is doing now? Like the Ti4200 did last year? Like the GF3 Ti200 did the year before that? Why is it so hard to believe that ATi would do something like that? Look at the current prices for the R9700Pro and R9800 non pro for that matter; ATi is selling parts with nigh 9800XT performance for close to 9600XT prices right now.

even if it is true that someone from ati privately made the claim. more than likely the issue was nothing more than a matter of miscommunication or confusion, which seems to come up more and more often in the articles here at anandtech.

Privately making a claim on performance to a member of the press which is not covered under an NDA - there is absolutely no way that they expect that not to hit the press. As far as confusion goes, we aren't talking about being able to make heads or tails of the different GPU architectures or how different features are implemented, nor how they should be implemented; it is a clear cut black and white statement that any child could understand.

as for the r420 being a new core, that is what it is. actually it is the very first true 8 pipe chip on a .13µm process.

What makes you consider it a new core? Assume that all the rumors you have heard concerning the chip are true; what about the chip makes you assume it is any larger of a step than NV30-NV35 was? Obviously it's a much larger evolution than the 9700Pro-9800XT, but there wasn't anything worth noting that was different between those parts anyway. The fact that it went through a process shrink doesn't make it a new core (although they need to make considerations when making the move). The R400 was scrapped, odds are for very good reason.

oh and sorry to disappoint you Ben, but unlike some people i keep my religious convictions to more spiritual matters and don't participate in graphics company idolatry.

You have apologized for ATi's PR (or denied it, if you prefer) directly in this thread and laughed at the assumption that nVidia could take the lead. You don't exactly display the most open mind in these matters.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
It wouldn't be the first time that MS has shipped features in DX before they were supported in hardware though.

Dunno. But read the MS DirectX9 programming book recently released - they list all the shader targets that are available in DX9, and only software targets are available for PS/VS3.0.
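
To make that concrete, here is roughly what the distinction looks like from the application side (a minimal sketch, assuming the stock D3D9 headers; device creation boilerplate is omitted). On a HAL device none of today's hardware reports the 3.0 shader models - they only show up on the software targets:

    #include <d3d9.h>

    // Returns true only if the device reports hardware PS 3.0 support.
    // As of the current DX9 runtime no shipping HAL device does - the
    // 3.0 targets are only exposed by the software/reference rasterizer.
    bool HasHardwarePS30(IDirect3DDevice9* device)
    {
        D3DCAPS9 caps;
        if (FAILED(device->GetDeviceCaps(&caps)))
            return false;
        // PixelShaderVersion packs major/minor; compare against 3.0.
        return caps.PixelShaderVersion >= D3DPS_VERSION(3, 0);
    }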

Active Camouflage? What exactly is different? Does it look noticeably worse on ATi's parts like the flashlight, or is it something that isn't that noticeable? Have to look for it next time I get a chance to play the game with a R3x0.

I don't know what they have done, however they implemented it on ATI first and this failed to operate with the FX, so they had to go for something different. I'll wager there are quite different paths for FX and R300, and that the FX doesn't run everything the same way (i.e. some elements differ from ATI).

The hardware being FP32 is rather important, and I was talking about the shader units in particular. Obviously shader precision is not the only factor.

No. The construction of the FP32 hardware isn't much of an issue. Other elements in the DX Next generation are going to be a bigger issue.

If you have faith that ATi is going to release the fastest part this upcoming gen, you must assume that nVidia is going to screw up badly. ATi should be able to reclaim the crown this fall when the generational misalignment will fall into their favor and the R500 is going up against the NV45.

I don't know what you are basing this on Ben, but for starters you won't see R500 in the fall (that's DX Next, and there's no point in releasing it without the API) and it appears you may have a slightly skewed impression of how R420 is a refresh of R300.

Basically, R420 uses R300-based technology in a similar fashion to the way that NV30 uses NV20's technology / architecture, or NV20, NV10's, etc. There is a relatively clear architectural development through NVIDIA's parts - this is not the case with ATI, as they have basically started from scratch every architecture. R420 is them adopting an architectural development closer to NVIDIA's, so this is a leap akin to, say, NV20->NV30 technology / architecture wise.

I had the opportunity to sit down and have a fairly lengthy chat with one of ATI's directors recently and I was discussing the fact that in reality R420 wasn't even on the roadmap until March '03 so it would be impossible to make significant differences - to which he said "you have to consider what was already in development at that timeframe". He was basically confirming that R420 will be adopting technology from what was R400 at the time (and what is now being further developed to become R500) as that was initially scheduled to sample/be introduced by July '03. I think what you'll be seeing from R420 is a mixture of technology from the R300 platform and R400 as it was.

Eric Demers (ATI architect) also stated to me the goal was "double the performance every 18 months", so my expectation is that R420 should theoretically be twice as fast as R300 in some areas.

My guess is that given roughly equivalent processes and die sizes their performance will be fairly close.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
The R400 was scrapped, odds are for very good reason.

Absolutely, it was not scrapped. What do you think the XBox2 will be using? ;)

The reason it was moved is because of DX 10's move and the XBox - there was no point in having a part with the capabilities of R400 without the API around. It's best to concentrate on high performance with the features of the major API you'll be targeting.

 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
lol Ben, i didn't apologize for or deny anything; i only pointed out the inconclusive nature of hearsay. also i never laughed at the assumption that nvidia could take the lead, merely pointed out that we don't have enough information to make a determination and arguments one way or the other are nothing more than blind fanboyism. lastly, the 5900se, ti4200, and ti200 which you bring up are all moderately slower than their more expensive brethren, whereas if ati did release a $200 card faster than the discontinued 9700pro it would also have been faster than a high end card currently in their lineup selling for 33% more, the 9800np. unless it is beyond your intellectual capacity to grasp these things, i think it is fair to say that it is you who lack an open mind in these matters.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: TheSnowman
lol Ben, i didn't apologize for or deny anything; i only pointed out the inconclusive nature of hearsay. also i never laughed at the assumption that nvidia could take the lead, merely pointed out that we don't have enough information to make a determination and arguments one way or the other are nothing more than blind fanboyism. lastly, the 5900se, ti4200, and ti200 which you bring up are all moderately slower than their more expensive brethren, whereas if ati did release a $200 card faster than the discontinued 9700pro it would also have been faster than a high end card currently in their lineup selling for 33% more, the 9800np. unless it is beyond your intellectual capacity to grasp these things, i think it is fair to say that it is you who lack an open mind in these matters.

If you capitalized the proper letters this would be much easier to read. It looks like one big long sentence, and it's difficult to understand what you're saying. Especially since it appears that you may have left out a word or two.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But read the MS DirectX9 programming book recently released - they list all the shader targets that are available in DX9 and only software targets are available for PS/VS3.0

Is it from MS Press?

I don't know what they have done, however they implemented it on ATI first and this failed to operate with the FX, so they had to go for something different. I'll wager there are quite different paths for FX and R300, and that the FX doesn't run everything the same way (i.e. some elements differ from ATI).

Have you talked to the guys at GearBox about it, or is this from other sources?

The construction of the FP32 hardware isn't much of an issue. Other elements in the DX Next generation are going to be a bigger issue.

What could they include in DXN that would allow the IHVs to use lower than FP32 for combined shader units?

I don't know what you are basing this on Ben, but for starters you won't see R500 in the fall (that's DX Next, and there's no point in releasing it without the API)

So you are saying there was no reason to release the R300 when they did? There are plenty of reasons to release a part that has the DXN feature set prior to the release of the API, particularly since we are likely looking at over two years from now before DXN hits. The feature set can be exposed under OGL, not to mention, as you have already brought up, there are issues when you have a feature in an API without the hardware to support it. If ATi holds out until 2006 to release their true next gen part, nVidia should be about to release NV55, which could put ATi in the spot that nVidia had to deal with this generation (all initial DXN development will have targeted nV's hardware outside XB2, which will take time to get ported back to the PC).

Basically, R420 uses R300-based technology in a similar fashion to the way that NV30 uses NV20's technology / architecture, or NV20, NV10's, etc. There is a relatively clear architectural development through NVIDIA's parts - this is not the case with ATI, as they have basically started from scratch every architecture. R420 is them adopting an architectural development closer to NVIDIA's, so this is a leap akin to, say, NV20->NV30 technology / architecture wise.

Based on the most optimistic rumors I've heard, the R420 looks a lot more like a NV30-NV35 leap (excluding performance, which sounds closer to NV25-NV26) than a NV20-NV30 style move.

I had the opportunity to sit down and have a fairly lengthy chat with one of ATI's directors recently and I was discussing the fact that in reality R420 wasn't even on the roadmap until March '03 so it would be impossible to make significant differences - to which he said "you have to consider what was already in development at that timeframe".

Which makes it sound even more like a mid-generation evolution than a major revision.

He was basically confirming that R420 will be adopting technology from what was R400 at the time (and what is now being further developed to become R500)

They are continuing development for another two years on the core? And you think it was a viable part to ship as the R400?

Eric Demers (ATI architect) also stated to me the goal was "double the performance every 18 months", so my expectation is that R420 should theoretically be twice as fast as R300 in some areas.

That gives them until what, February, to have a part twice the speed of the R300 out. You don't sound too confident that the board will be close to twice the speed in real world situations; are you doubting their claims?

My guess is that given roughly equivalent processes and die sizes there performances will be fairly close.

NV40-R420-PVR5 or close to their claims?

The reason it was moved is because of DX 10's move and the XBox - there was no point in having a part with the capabilities of R400 without the API around. Its best to concentrate on high performance with the features of the major API you'll be targetting.

Handy that they gave themselves another two years, to do what I wouldn't be able to guess. Any features they expose beyond DX can be exposed under OpenGL, and getting their boards out first would significantly aid them, as it did this past gen with development. We heard a lot of hype about the original Radeon being a DX8 part (not sure where that started) because it exceeded DX7 specs (so did the NV10 in some instances), and the R300 obviously exceeded DX8 specs.

TheSnowman-

i didn't apologize for or deny anything; i only pointed out the inconclusive nature of hearsay.

Explicitly quoting a company representative's claims to a member of the press is hearsay?

also i never laughed at the assumption that nvidia could take the lead, merely pointed out that we don't have enough information to make a determination and arguments one way or the other are nothing more than blind fanboyism.

When someone states that they expect one company to take the lead, but not your company of choice, you call it blind fanboyism.

lastly, the 5900se, ti4200, and ti200 which you bring up are all moderately slower than their more expensive brethren, whereas if ati did release a $200 card faster than the discontinued 9700pro it would also have been faster than a high end card currently in their lineup selling for 33% more, the 9800np.

Wait a second, what is this 5700Ultra I see floating around? Selling for a higher price than the 5900SE too. The GeForce4 MX 460 was selling for a higher price than the Ti4200 also. It is far from unheard of for a company to offer a good deal to the informed buyer. The 9800NP was supposed to be faster than the R9700Pro, not to mention the 'new features' that in theory will get exposed some day (such as the FBuffer).

unless it is beyond your intellectual capacity to grasp these things, i think it is fair to say that it is you who lack an open mind in these matters.

You mean have the intellectual capacity to simply look at prices for the 5700Ultra and 5900SE right now? Or are you talking about having the intellectual capacity to remember the number of times that this has been done in the past? ATi tried to offer a PR line that would convince people that they would offer a killer mid tier part that performed as a high end part for a great price; they just didn't deliver on the performance. Even the R9500Pro bests the R9600XT a good deal of the time. You don't even have to keep an open mind on this one, simply take off the blinders.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Hadn't seen the PS2.0 comments from FM, haven't checked back since they did their last major update. Interesting.
Ben, I was posting about that whole fiasco in the last discussion we had.

The evidence is clear, from the Beyond3D benchmarks which showed massive differences between the builds on nVidia hardware, to the ongoing process where FutureMark is required to single out nVidia drivers because some of them cheat, to the fact that nVidia have basically said that they have absolutely no plans to stop cheating in 3DMark in the future.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
We have had this discussion before BFG. There is no use having the conversation when you go so far as to actually believe everything Gabe Newell said.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
We have had this discussion before BFG.
Yes we have, and you still refuse to believe any and all evidence I post to you. I'm not sure what else you want me to post if I've given you an nVidia employee who's basically confirming the whole issue:

Derek went on to suggest that they may well end up chasing each patch release and re-optimising as Futuremark puts a patch that defeats previous detections!
Link.

There is no use having the conversation when you go so far as to actually believe everything Gabe Newell said.
And what basis do you have to discount everything he says? Gabe Newell is a high profile developer with high levels of DirectX 9 knowledge. He has also publicly released his findings and allowed the public scrutiny of them.

Are you in the same position as him? If not then you can't expect me to simply discount everything he says solely on the grounds of what you say.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0


Is it from MS Press?

Yes

Have you talked to the guys at GearBox about it, or is this from other sources?

Other sources.

What could they include in DXN that would allow the IHVs to use lower then FP32 for combined shader units?

Eh? I'm not saying DX Next won't necessarily be FP32 (although it will be Int16 as well), I'm saying that developing FP32 hardware isn't the significant issue. Anyway, ATI already have FP32 ALUs for the VS as it is.

So you are saying there was no reason to release the R300 when they did?

Errr, no. R300 was designed as a DX9 part and DX9 was going to be released in a reasonable timeframe - the rest of the competition had to follow with DX9 parts for that generation.

There are plenty of reasons to release a part that has the DXN feature set prior to the release of the API, particularly since we are likely looking at over two years from now before DXN hits. The feature set can be exposed under OGL, not to mention, as you have already brought up, there are issues when you have a feature in an API without the hardware to support it. If ATi holds out until 2006 to release their true next gen part, nVidia should be about to release NV55, which could put ATi in the spot that nVidia had to deal with this generation (all initial DXN development will have targeted nV's hardware outside XB2, which will take time to get ported back to the PC).

ATI and NVIDIA will not be on significantly different feature roadmaps (unless something unforeseen happens, such as serious silicon issues) - both will basically be timed to hit a new generation at roughly the point of a new DX generation. DirectX is the API they will primarily follow since that's what the majority of developers use. The competition is such that it's highly unlikely that either party will be willing to waste a significant quantity of die space on features that won't be exposed through the primary API, and as much of the available die as possible will be used to concentrate on performance at the level of compliance of the current DX release. That's almost certainly why R400 was moved in the first place - it was probably significantly beyond PS/VS3.0 capabilities, but that would have come at the detriment of overall performance given the die estimates they had at that time (although still faster than their current generation), so they opted to move that further back and build a part that would have even higher performance but a lesser overall featureset.

Based on the most optimistic rumors I've heard, the R420 looks a lot more like a NV30-NV35 leap (excluding performance, which sounds closer to NV25-NV26) than a NV20-NV30 style move.

So, if it's VS/PS3.0, would you still think that's the case? But as I said, this is a case of ATI adopting an architectural development pattern closer to NVIDIA's.

That gives them until what, February, to have a part twice the speed of the R300 out. You don't sound too confident that the board will be close to twice the speed in real world situations; are you doubting their claims?

I don't know the exact configuration of it yet, so I can't really say in what situations it can or will meet those targets. However, I'm pointing out that this is ATI's intention for this part, which also indicates that it's not a refresh part.

NV40-R420-PVR5 or close to their claims?

NV40 / R420. As much as I'd like to see PowerVR produce a cool high performance part, I have reservations over the time to market, which raises some questions as to its potential performance.

Handy that they gave themselves another two years, to do what I wouldn't be able to guess.

Quite a lot. I'm sure that the specifications for DX Next will move on from their interpretation in the original R400, but most importantly, during this period die space estimates will move on significantly (90nm or possibly 65nm), hence they'll be moving the performance on. Remember that this part will also be significant to Xbox development as well.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
BFG-

The main issue is that you consider any optimization that nVidia does a cheat; there is no ground to have a reasonable discussion on the subject between us.

As for Newell- which of his claims have been backed up? I'm still waiting for you to post links to HL2 benches from the Valve/ATi PR event on the 9800XT.

Dave-

That is the one that I saw at Borders the other day; they were asking $60 for it there. Thanks for the link :)

Other sources.

You were talking about the active camouflage, right? There was a fairly in-depth post on GearBox's board as to exactly what they were doing with AC, as it was the reason that AA wouldn't work with Halo. Based on all of the comments made by the dev team there, the R350 is doing the same thing as the NV20 for that particular effect. He also posted a workaround to enable AA in the game by disabling the feature, IIRC.

Eh? I'm not saying DX Next won't necessarily be FP32 (although it will be Int16 as well), I'm saying that developing FP32 hardware isn't the significant issue. Anyway, ATI already have FP32 ALUs for the VS as it is.

Of course ATi is using FP32 for the VSs; the reason we got into this discussion was I was saying that the main benefit of FP32 will come when the functional units of the shaders can be combined into a more flexible unit. I don't see FP32 being very viable in terms of need inside of real time for a couple of generations of hardware yet (excluding the Fire/Quadro parts).
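
For what it's worth, the precision question shows up right in the shader source. A trivial HLSL fragment (shown as a C++ string literal; the shader is a made-up example, not from any game) illustrates the hint: 'half' lets the hardware drop to FP16 where that is plenty, while plain 'float' requests full precision (FP32 on NV3x, FP24 in R3x0 pixel shaders):

    // Hypothetical shader for illustration - simple color modulation like
    // this is exactly the sort of math where FP16 is plenty.
    const char* kModulateShader =
        "half4 main(half2 uv : TEXCOORD0,              \n"
        "           uniform sampler2D tex) : COLOR     \n"
        "{                                             \n"
        "    half4 c = tex2D(tex, uv);                 \n"
        "    return c * half4(0.5, 0.5, 0.5, 1.0);     \n"
        "}                                             \n";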

Errr, no. R300 was designed as a DX9 part and DX9 was going to be released in a reasonable timeframe - the rest of the competition had to follow with DX9 parts for that generation.

Wasn't it close to six months prior to the release of DX9 that the R9700Pro hit?

ATI and NVIDIA will not be on significantly different feature roadmaps (unless something unforeseen happens, such as serious silicon issues) - both will basically be timed to hit a new generation at roughly the point of a new DX generation.

nVidia is already planning on breaking this mold, not that they ever seemed to have really followed it. With DXN still two years (or more) off, we should be looking at NV55 by the time it hits.

DirectX is the API they will primarily follow since that's what the majority of developers use. The competition is such that it's highly unlikely that either party will be willing to waste a significant quantity of die space on features that won't be exposed through the primary API, and as much of the available die as possible will be used to concentrate on performance at the level of compliance of the current DX release.

If we are to believe everything MS has stated through all of their various branches, ATi should have their DXN part ready some time prior to it hitting. XBox2 should already have had a decent amount of time on the market by the time Longhorn is released; with the current contract, I understand that MS will be farming out the fabrication on a license basis, but I can't see ATi taking too much time shipping the PC counterpart. Also, exposing features in OpenGL well beyond what the current revision of DX supports is something I could easily see nVidia doing just to p!ss in MS's Cheerios with the way things have gone over the last year and a half ;)
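
As a rough sketch of how that surfaces in practice (assuming a GL context is already current; the two extension names are the real NV and ATI ones, everything else here is illustrative):

    #include <GL/gl.h>
    #include <cstdio>
    #include <cstring>

    // Vendor features show up in the GL extension string long before - or
    // regardless of whether - any DX revision exposes them.
    static bool HasExtension(const char* name)
    {
        const char* all = (const char*)glGetString(GL_EXTENSIONS);
        // Naive substring check; good enough for a sketch.
        return all != NULL && std::strstr(all, name) != NULL;
    }

    void ReportFragmentShaderPaths()
    {
        if (HasExtension("GL_NV_fragment_program"))
            std::printf("NV3x fragment programs exposed\n");
        if (HasExtension("GL_ATI_fragment_shader"))
            std::printf("R200-class fragment shaders exposed\n");
    }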

That's almost certainly why R400 was moved in the first place - it was probably significantly beyond PS/VS3.0 capabilities, but that would have come at the detriment of overall performance given the die estimates they had at that time (although still faster than their current generation), so they opted to move that further back and build a part that would have even higher performance but a lesser overall featureset.

Here is the problem I have with that line of thought: they knew a year in advance of their roadmap change what the PS/VS 3.0 standards were going to be. Why wouldn't they adjust their plans when they saw the final specifications instead of wasting a year developing a part that was essentially going to be close to completely redesigned anyway? If their resource management was truly that poor they would quickly find themselves in serious trouble, but we know that isn't the case. We all knew prior to the roadmap change that Longhorn wouldn't be hitting until '05 at the earliest, so why have a part designed for that generation of DX scheduled to be released a year, at least, ahead of it? It doesn't make sense.

So, if it's VS/PS3.0, would you still think that's the case? But as I said, this is a case of ATI adopting an architectural development pattern closer to NVIDIA's.

If all they added was VS/PS 3.0 support and moved to GDDRIII and PCI Express then yes, I would still think of it as a refresh. If all nVidia did was add VS/PS 3.0 support and move to GDDRIII and PCI Express I would say the same.

I don't know the exact configuration of it yet, so I can't really say in what situations it can or will meet those targets. However, I'm pointing out that this is ATI's intention for this part, which also indicates that it's not a refresh part.

Compare the GF2U to the original NV10 that shipped and you actually see double the performance in a great deal of real world situations, not to mention they did expand the functionality of the register combiners between them. For double the performance, the bandwidth numbers we are hearing about could account for a significant improvement in AA performance in many real world situations without them touching the core at all. I don't know how much information you have available to you, nor am I trying to get anything out of you that you may be under NDA about, but based on every optimistic rumor I've seen to date the R420 looks more like a refresh than it does a new generation. If we heard the same type of details about the NV40 I'd be saying the same thing (although it appears that they are doing quite a bit more than ATi, they also have some areas that need to be addressed more so than ATi).

NV40 / R420. As much as I'd like to see PowerVR produce a cool high performance part, I have reservations over the time to market, which raises some questions as to its potential performance.

Don't tell me they are going to be late :| Give K a swift kick in the @ss and tell him to get going ;)

Quite a lot. I'm sure that the specifications for DX Next will move on from their interpretation in the original R400, but most importantly, during this period die space estimates will move on significantly (90nm or possibly 65nm), hence they'll be moving the performance on. Remember that this part will also be significant to Xbox development as well.

A year ago I would have figured that 65nm would be probable, but with the heat issues everyone is having with 90nm I'm not so sure. To me the move to 65nm looks like it could present another .13µm transition gamble; doubtful nVidia will bank on it after the NV30, but ATi may be a bit more constrained if they plan on using that build level for the R500 (due to the XB2).
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Based on all of the comments made by the dev team there, the R350 is doing the same thing as the NV20 for that particular effect.

I think you'll find the implementation is different.

Of course ATi is using FP32 for the VSs; the reason we got into this discussion was I was saying that the main benefit of FP32 will come when the functional units of the shaders can be combined into a more flexible unit.

So, you don't think R400 was completely unviable then? ;)

The main benefit for high precision is with longer shaders.

Wasn't it close to six months prior to the release of DX9 that the R9700Pro hit?

I don't remember the exact timings, but it was released in the 9700's lifetime, before the 9800 IIRC. There were other considerable advantages to R300 over ATI's previous offerings that made it compelling to release at that time.

However, DX9 was late. There were numerous and considerable changes to DX9 prior to its release which pushed back its estimated release - PS/VS2.0 Extended being one and PS/VS3.0 being another.

nVidia is already planning on breaking this mold, not that they ever seemed to have really followed it. With DXN still two years (or more) off, we should be looking at NV55 by the time it hits.

I'll wager that won't happen. Roadmaps are fluid and they usually change dependent on numerous market conditions. For instance, R360 and NV38 are very late additions to the roadmaps; neither company had planned on third gen PS/VS2.0 parts when they started on the DX9 path, but as things have evolved that's what became the sensible option. NV's roadmap a while back also stated that NV40 would be there instead of NV38, but that's not possible, or sensible.

It's all about inflexion points, and PS/VS2.0 was the last major one. PCI-Express is the next for the 3D vendors (PS/VS3.0 is likely not as much of a business issue), and the next after that will be PS/VS4.0 - this will be a very important one as devs will be ramping up with it for XB2.

Also, exposing features in OpenGL well beyond what the current revision of DX supports is something I could easily see nVidia doing just to p!ss in MS's Cheerios with the way things have gone over the last year and a half

And if they don't have any other significant console contracts, annoying MS is the last thing that NV will be doing from here on in. Regardless of what you may think, having the XBox at the moment is giving them considerable leverage as to what they can do - without the XBox that leverage goes, and missing DX specifications somehow will be the last thing they can afford, as this is still by far their biggest revenue stream. I've had numerous NV people quote to me how tied down they are with next generation DX revisions, and I find it no surprise that they are.

Here is the problem I have with that line of thought: they knew a year in advance of their roadmap change what the PS/VS 3.0 standards were going to be. Why wouldn't they adjust their plans when they saw the final specifications instead of wasting a year developing a part that was essentially going to be close to completely redesigned anyway?

R400 was initially scheduled for release in July '03 - development of it had probably been occurring for 12-14 months prior to that, and it had initially been sent to the fab probably at the end of '02. There was considerable uncertainty in exactly what DX9 would end up being and how long it would last. It wasn't known until very late that it would include PS/VS3.0 specifications, and that was the first real indication that DX9 would be lasting longer than usual. This was also the same period that MS were talking seriously about the XBox 2 - although they announced ATI had the contract in August, they were already working with them before then, and ATI had actually been hiring people in anticipation of the XBox at GDC (March '03), about the same point as their roadmap shift.

If all they added was VS/PS 3.0 support and moved to GDDRIII and PCI Express then yes, I would still think of it as a refresh. If all nVidia did was add VS/PS 3.0 support and move to GDDRIII and PCI Express I would say the same.

There are considerable changes over PS/VS2.0 required to hit PS/VS3.0, and the instruction lengths are the least of the issues. These will not be simple refresh products if they hit PS/VS3.0.
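
To give one concrete example of the sort of change involved (an illustrative fragment, not from any real title): PS3.0 brings per-pixel dynamic flow control, so something like the shader below compiles against the ps_3_0 target but has no direct ps_2_0 equivalent, where branches get flattened into predicated arithmetic:

    // Hypothetical example - the 'if' is a true dynamic branch on a
    // per-pixel value, which is a ps_3_0 capability.
    const char* kBranchingShader =
        "float4 main(float2 uv : TEXCOORD0,                \n"
        "            uniform sampler2D tex,                \n"
        "            uniform float threshold) : COLOR      \n"
        "{                                                 \n"
        "    float4 c = tex2D(tex, uv);                    \n"
        "    if (c.a > threshold)                          \n"
        "        c.rgb = 1.0 - c.rgb;                      \n"
        "    return c;                                     \n"
        "}                                                 \n";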

To me the move to 65nm looks like it could present another .13µm transition gamble; doubtful nVidia will bank on it after the NV30, but ATi may be a bit more constrained if they plan on using that build level for the R500 (due to the XB2).

ATI may only be responsible for the RTL source, which means that whoever MS asks to fab may end up actually doing the layout etc. With the model MS have chosen, it's not actually a given that ATI will necessarily be tied to any fabbing plans that MS are rooting for.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Mods, please lock this... this battle of "wits" and endless bickering and measuring of penises needs to stop.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
The main issue is that you consider any optimization that nVidia does a cheat,
That isn't true at all and you know it. I've said many times that I consider their shader reorderer to be a valid optimization. What I do consider cheating is application detection, and Carmack/Futuremark do as well.

I wouldn't have even posted in this thread, but I was puzzled by your response about not recognizing FutureMark's latest response to the issue when I quite clearly demonstrated this, and the performance drops exhibited by nVidia in the 340 build, in the other thread.

You claim nVidia isn't cheating, yet no other vendor experienced any performance loss (besides benchmarking noise) in any of the tests, unlike nVidia, whose scores plummeted. You consider that a valid optimization on nVidia's part? Then tell me, why is it that other vendors can achieve high levels of performance and not fall over when the build number changes?
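
To spell out the mechanic (a conceptual sketch only - nobody outside nVidia has seen the actual driver code, so every name here is hypothetical): if a driver keys hand-tuned shader replacements off a fingerprint of the incoming shader, then a new build that merely reorders its shader instructions misses the lookup and performance snaps back to the generic path - exactly the pattern the 330 vs 340 numbers show:

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    // Hypothetical fingerprint: a simple FNV-1a hash of the shader text.
    static uint64_t Fingerprint(const std::string& shader)
    {
        uint64_t h = 1469598103934665603ull;
        for (unsigned char c : shader) { h ^= c; h *= 1099511628211ull; }
        return h;
    }

    // Table mapping known shader fingerprints to hand-tuned replacements.
    static std::unordered_map<uint64_t, std::string> g_replacements;

    const std::string& SelectShader(const std::string& submitted)
    {
        // A shader from the old build hits the table; the same shader with
        // its instructions shuffled in a new build misses it entirely.
        auto it = g_replacements.find(Fingerprint(submitted));
        return (it != g_replacements.end()) ? it->second : submitted;
    }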

You also claim nVidia's actions are normal, yet in reality they're the exception to the norm. Their application specific cheats are experiencing exactly the sorts of things that I previously illustrated would happen to any vendor who proceeded down that path. And not only that, but you've got an nVidia employee basically confirming the entire thing!

As for Newell- which of his claims have been backed up?
Which have been disproven? Show me where nVidia, Microsoft or otherwise have stepped forward and publicly denounced his findings.

I'm still waiting for you to post links to HL2 benches from the Valve/ATi PR event on the 9800XT.
The other forum has had a major overhaul and the search function is broken.

Mods, please lock this... this battle of "wits" and endless bickering and measuring of penises needs to stop.
No forum rules have been violated and we are discussing video related material. You can't expect threads to be locked just because you don't like them. This is a public open forum.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Dave

I think you'll find the implementation is different.

I'm just going on what the dev team stated. They have a list of the differences between the different shader versions on Bungie's site (IIRC, may be GB's) - AC isn't listed there either.

So, you don't think R400 was completely unviable then? ;)

Obviously ATi did for whatever reason :)

The main benefit for high precision is with longer shaders.

Of course, but we have a long way to go before we see parts that can run shaders of that length at reasonable speeds for real time. With the Quadro parts, one frame every ten seconds is a lot better than one frame every ten minutes. As it stands now, with the speed of the parts that are available, FP16 is plenty the vast majority of the time.

I don't remember the exact timings, but it was released in the 9700's lifetime, before the 9800 IIRC. There were other considerable advantages to R300 over ATI's previous offerings that made it compelling to release at that time.

However, DX9 was late. There were numerous and considerable changes to DX9 prior to its release which pushed back its estimated release - PS/VS2.0 Extended being one and PS/VS3.0 being another.

And due to them releasing very early, they gained a significant advantage in terms of all the early DX9 dev being done on their hardware. If they ship close to nVidia, which is still the dominant player in terms of marketshare, they will be at a decided disadvantage in getting even treatment, let alone preferential treatment, in terms of PC native development (outside of, say, Futuremark and Valve).

I'll wager that won't happen. Roadmaps are fluid and they usually change dependent on numerous market conditions. For instance, R360 and NV38 are very late additions to the roadmaps; neither company had planned on third gen PS/VS2.0 parts when they started on the DX9 path, but as things have evolved that's what became the sensible option. NV's roadmap a while back also stated that NV40 would be there instead of NV38, but that's not possible, or sensible.

I was always under the impression that NV38 was a given. They did it with the NV1x core, and then they did it with the NV2x core. For NV55 by Longhorn: NV40 Q1 '04, NV45 Q4 '04, NV48 Q2 '05, NV50 Q4 '05, NV55 Q2-Q3 '06. I figure they will slip at least one quarter with one of the parts, possibly two by the time NV55 comes around. Figuring that in, and figuring that they will have another x8 part, that would still have them right around NV55 by the time Longhorn hits.

And if they don't have any other significant console contracts, annoying MS is the last thing that NV will be doing from here on in.

The XBox1 went real well for them on a financial basis, as long as we ignore profit completely. It's quite obvious that nVidia is very willing to annoy Microsoft (not that I think that's the brightest move) with how this generation went.

Regardless of what you may think, having the XBox at the moment is giving them considerable leverage as to what they can do - without the XBox that leverage goes, and missing DX specifications somehow will be the last thing they can afford, as this is still by far their biggest revenue stream.

MS has already hurt them with their DX specs for this generation, and nVidia's marketshare surpassed Intel's based on the last numbers I saw (Mercury). nVidia has been doing a lot of things to irritate MS for some time: they offer proper support for Linux, which ATi has refused to do, they started a legal battle over the NV2A's pricing structure, and they have very actively supported and promoted OpenGL native developers. I'm not saying any of this is wise long term, but the projects that relied entirely on MS have not given them the sort of financial return that was hoped for, not even close.

R400 was initially scheduled for release in July '03 - development of it had probably been occurring for 12-14 months prior to that, and it had initially been sent to the fab probably at the end of '02. There was considerable uncertainty in exactly what DX9 would end up being and how long it would last. It wasn't known until very late that it would include PS/VS3.0 specifications, and that was the first real indication that DX9 would be lasting longer than usual.

So now you are saying that the R400 was no good, and that ATi decided to throw it away? By the sounds of it, things are very confused at ATi.

There are considerable changes over PS/VS2.0 required to hit PS/VS3.0, and the instruction lengths are the least of the issues.

Of course; if instruction length were simply the issue and ATi managed to get the FBuffer to go from the PR department into their parts then the R350 would be PS/VS3.0 compliant.

These will not be simple refresh products if they hit PS/VS3.0.

I'm sure you see it that way; you have been rather obsessed with shaders for a long time now. It's been around a year now since DX9 hit and we have two shader heavy games, one of them worth owning. It appears that DX9 is the slowest adopted DirectX revision in a long time, since the Glide days anyway. Clearly, if what you were talking about before with IHVs spending time on what matters is real, then VS/PS3.0 is going to be a checkbox feature and that's about it. If all they do is add VS/PS3.0 it will be the biggest letdown from a hardware 'generation' in a long time.

ATI may only be responsible for the RTL source, which means that whoever MS asks to fab may end up actually doing the layout etc. With the model MS have chosen its not actually a given that ATI will necessarily be tied to any fabbing plans that MS are rooting for.

But it would be utterly foolish for them not to get a part in devs' hands ASAP on the PC side if they want to exploit the advantage that the XBox2 contract can give them.

BFG-

That isn't true at all and you know it. I've said many times that I consider their shader reorderer to be a valid optimization. What I do consider cheating is application detection, and Carmack/Futuremark do as well.

Think about it - Futuremark releases a patch and approves drivers for use with it, saying that they have eliminated all invalid optimizations. Then, some time later, they realise that they don't approve of the PS optimizations that were present the entire time in the drivers. They worked around all the other optimizations but not the one for the pixel shader test, and they didn't notice anything was going on until some time after they said everything was OK? nVidia's driver team is far beyond genius; they must be godlike to use an application detection that relies on differing techniques for each sub segment of a test that not even the developers can catch after they claimed they had. Also pretty interesting that no one can spot the IQ differences.

Which have been disproven?

Every time someone buys an ATi card you get so excited you go out and kick a puppy. No one can disprove this to me, therefore it obviously contains some truth, right?

Show me where nVidia, Microsoft or otherwise have stepped forward and publicly denounced his findings.

There were no 'findings'; he flat out lied. Gabe Newell stated that HL2 would ship September 30th 2003; that was a lie (and he knew that was a lie when he said it). Gabe Newell then stated that they would release a public version of the benchmark used for HL2 on September 30th 2003; that was also a lie (another lie he knew was such when he stated it). Some other lies-

Working closely with NVIDIA (according to Gabe), Valve ended up developing a special codepath for NVIDIA's NV3x architecture that made some tradeoffs in order to improve performance on NVIDIA's FX cards. The tradeoffs, as explained by Gabe, were mainly in using 16-bit precision instead of 32-bit precision for certain floats and defaulting to Pixel Shader 1.4 (DX8.1) shaders instead of newer Pixel Shader 2.0 (DX9) shaders in certain cases.

Cross reference that with the more up to date comments from Valve that Dave posted on B3D a while ago (pay close attention to the details). Then there was the whole driver debacle that Newell insisted upon (not to mention going on about spending 5x as long on nVidia hardware while not even using MS's compiler for the DX9 codepath). What would you have said if Carmack demanded that only the latest official drivers be used to run the DooM3 benches when they hit? Not a lie, but without a doubt dishonest. Newell is a PR boy for ATi, just as BB is for nVidia. Take anything they say and ignore it; it's that simple. The reason why I've stated we should avoid this discussion is that it is best to wait until the game ships. I'll gladly agree that you can harp on me until you get sick of it in any way you want if the results are comparable to Newell's PR event. I'm not in the least worried about it either ;)
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Think about it - Futuremark releases a patch and approves drivers for use with it, saying that they have eliminated all invalid optimizations.
Except that didn't happen. Build 340 was always accompanied by a list of valid drivers that it was tested on.

nVidia's driver team is far beyond genius; they must be godlike to use an application detection that relies on differing techniques for each sub segment of a test that not even the developers can catch after they claimed they had.
Is this another case of the global anti-nVidia conspiracy, and FutureMark are yet another company involved in it?

Did you even read the quote from the nVidia employee that I posted where he confirmed the application detection in 3DMark? Or is he also delusional/an ATI PR monkey/on ATi's payroll/lying/etc etc etc?

Also pretty interesting that no one can spot the IQ differences.
Except a difference in IQ doesn't automatically imply the presence or non-presence of a cheat.

Every time someone buys an ATi card you get so excited you go out and kick a puppy. No one can disprove this to me, therefore it obviously contains some truth, right?
It's a fact that numerous developers have commented on the low shader performance on nVidia cards and how careful instruction scheduling and reduced precision are needed to get better results.

It's a fact that nVidia's drivers detect screen captures and alter the output, even when using FRAPS 2.0 to take the captures.

It's a fact that the NV3x architecture has limited registers and relies on heavy SIMD to get any sort of reasonable performance, and such an architecture will always have problems when running real-world applications.

There were no 'findings',
Yes there were.

he flat out lied.
He did not.

Gabe Newell stated that HL2 would ship September 30th 2003; that was a lie
Major projects will often slip behind their schedule. Also I can't imagine that having the source code leaked to the public would've helped his cause.

(and he knew that was a lie when he said it).
How do you know this?

Cross reference that with the more up to date comments from Valve that Dave posted on B3D a while ago (pay close attention to the details).
What am I looking for here? Or are you bringing up that nonsense again about Valve not using Microsoft's compiler for the irrelevant full precision path?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Except that didn't happen. Build 340 was always accompanied by a list of valid drivers that it was tested on.

And the 52.16s were on the approved list without disclaimer when they released the 340 build; then they changed it. The rest of your post is just you believing anything negative about nVidia, and we have had this discussion numerous times. You make the jump from optimization to cheat, and you believe the screenshot story. Just post some links to some evidence; a lot of people have looked for it, so it should be no problem at all to show piles of evidence of this dishonesty if it exists. As for the ship date of the game, the rendering engine wasn't even feature complete by the ship date; there is no way they didn't know the game was never going to hit that date.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
No forum rules have been violated and we are discussing video related material. You can't expect threads to be locked just because you don't like them. This is a public open forum.
Considering the thread has been totally crapped and the current discussion has absolutely NOTHING to do with the original post...
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
You can't control the direction a thread takes. The thread isn't yours to control; all you can do is throw out a conversation starter and hope enough knowledgeable people find it interesting enough to form a worthwhile discussion. I think Ben's and Dave's discussion is pretty interesting, and BFG and Ben are free to hammer away at each other provided they remain civil (no more kicking puppies! :D).

Anyway, I think your original post was decently discussed. We can't really conclude why ATi and nV are being tight-lipped, as only their NDA'ed employees know the truth, and they're not going to share with us. Beyond the speculation put forth here for the delay in chest-beating, there's not much more to say, is there? So why not let the thread continue without complaining? It's not like most ppl in here actually search for previous threads when they have a question. You know as well as I do that most ppl post first and think later, if at all. ;)