Nvidia first to market with DX11 GPUs?


evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: B3rCik
Originally posted by: evolucion8
Originally posted by: B3rCik
Originally posted by: OCguy
I like how even the ATi sources are bashing that d-bag Charlie. :thumbsup:

Wow, Q3 GT300? That would be awesome....

Hopefully.
But for now, it's only a palmistry ;)

Hmm, never heard of that word before, what does it mean? It's like in a movie where a character got so shocked at some guy's madness that he said, "That's a travesty" (or "trasvesty"? don't know how to spell it). :p

First of all - I'm from Poland, so sorry for any inappropriate word ;)

Second : Palmistry - I mean prediction :)

hehe, don't worry, I'm from Puerto Rico and we have lots of inappropriate words in Spanish hehe.
 

imported_Shaq

Senior member
Sep 24, 2004
731
0
0
It's possible that AMD was ahead of Nvidia and decided to redo the design to make it more competitive based on what GT300 is going to be. I'm sure AMD knows more about what Nvidia is doing than we do.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: Shaq
It's possible that AMD was ahead of Nvidia and decided to redo the design to make it more competitive based on what GT300 is going to be. I'm sure AMD knows more about what Nvidia is doing than we do.

It takes much longer than a few weeks' delay to "re-design" a chip based off of what your competitor may or may not be doing.
 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: OCguy
Originally posted by: Shaq
It's possible that AMD was ahead of Nvidia and decided to redo the design to make it more competitive based on what GT300 is going to be. I'm sure AMD knows more about what Nvidia is doing than we do.

It takes much longer than a few weeks' delay to "re-design" a chip based off of what your competitor may or may not be doing.

Unless they're going for a respin where they can get higher clocks, right?
 
Dec 24, 2008
192
0
0
I don't think the GT300 figures are accurate. If they indeed have 512 cores, then the die size will be huge. Going from 55nm to 40nm is only a change of about 30 percent, so with the figures they provided, the GT300 should have a die size about 1.5 times that of the GT200, and that is quite scary, considering the GT200 is already double the size of the HD 4000 series. That means if the HD 5800's die size remains the same, an HD 5800 X4 (if they do produce it) will have the same total die size as a GTX 380 or whatever NV decides to call it. Still, it just may be tempting enough for an upgrade.
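As a rough back-of-the-envelope check of that 1.5x figure (a sketch only; the ~470 mm^2 GT200b die area and the flat 30% shrink are assumptions, and the 512-core count is still just a rumor):

```python
# Back-of-the-envelope die-size estimate from the post above.
# Assumptions (not official figures): the 55nm GT200b is ~470 mm^2 with 240 cores,
# the rumored GT300 has 512 cores, and the 55nm -> 40nm move buys
# roughly a 30% area reduction per transistor, as the post assumes.
gt200b_die_mm2 = 470.0
gt200_cores    = 240
gt300_cores    = 512
area_shrink    = 0.70   # "only a change of 30 percent"

gt300_die_mm2 = gt200b_die_mm2 * (gt300_cores / gt200_cores) * area_shrink
print(f"Estimated GT300 die: {gt300_die_mm2:.0f} mm^2 "
      f"({gt300_die_mm2 / gt200b_die_mm2:.2f}x GT200b)")
# ~700 mm^2, roughly 1.5x the 55nm GT200b, matching the post's estimate.
```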
 

alcoholbob

Diamond Member
May 24, 2005
6,390
469
126
I think TSMC's 40nm is actually 45nm? 45nm is about 67% of 55nm in area, so 512/240 times 67% would give you a core about 42% bigger. That's pretty scary actually.
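The scaling assumed there is die area going with the square of the linear feature size. A minimal sketch, taking the post's 45nm assumption at face value (real shrinks never scale this cleanly):

```python
# Area per transistor scales roughly with the square of the linear feature size.
old_node_nm = 55.0
new_node_nm = 45.0   # the post's assumption for TSMC's "40nm" process
old_cores   = 240
new_cores   = 512    # rumored GT300 core count

area_ratio   = (new_node_nm / old_node_nm) ** 2        # ~0.67
relative_die = (new_cores / old_cores) * area_ratio    # ~1.43

print(f"Per-transistor area ratio: {area_ratio:.2f}")
print(f"Relative die size vs GT200: {relative_die:.2f}x")  # ~42% bigger
```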
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: OCguy
Originally posted by: Shaq
It's possible that AMD was ahead of Nvidia and decided to redo the design to make it more competitive based on what GT300 is going to be. I'm sure AMD knows more about what Nvidia is doing than we do.

It takes much longer than a few weeks' delay to "re-design" a chip based off of what your competitor may or may not be doing.

>1yr...basically you don't.

If you find out competitive info regarding the chips that are going to debut (dubbed N+1 chips) then typically you factor that info into the currently ongoing initial stages of your chip N+2 development.

The design pipeline is so lengthy, and in a lot of ways so serial, that stopping at stage 5 (in the project roadmap) to redo something from stage 2 based on competitive info then requires redoing stages 3, 4, and 5 all over again.

It's prohibitive. Sometimes small tweaks can be done to attempt to intersect a newly developed market careabout, but it is truly grueling on the development team and resource-intensive. The respin of Barcelona to the B3 stepping to eliminate the TLB bug, for example, really sapped a lot of resources from other planned projects.

Originally posted by: allies
Originally posted by: OCguy
Originally posted by: Shaq
It's possible that AMD was ahead of Nvidia and decided to redo the design to make it more competitive based on what GT300 is going to be. I'm sure AMD knows more about what Nvidia is doing than we do.

It takes much longer than a few weeks' delay to "re-design" a chip based off of what your competitor may or may not be doing.

Unless they're going for a respin where they can get higher clocks, right?

Minimum 2 months from the start of a respin iteration until silicon is out of the fab and into your testers to verify the respin worked. Absolutely minimum of 2 months, that's full-speed crisis mode, drop everything else, all hands on deck, world will end if we don't get this done at absolute breakneck speeds. 8 weeks, and you hope your rushing about didn't introduce new bugs that escaped detection because you gave your verification team so little time to do their job.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
It takes much longer to redesign a chip for higher performance than what some people here seem to think.

It is too late in the game now for ATI (or NV) to redesign their next-gen GPU to get added performance. The work on those GPUs probably started in 2006/2007, and if they want to change stuff now it would delay the release by at least a year, which would be crazy to do.

Tweaking the chip for higher clocks can be done faster but still needs some months, a quarter or more. For example, the 4890 (RV790) clocks really well on the same 55nm process as the 4870, but it took ATI about 3 quarters to tweak the RV770 chip and release the RV790 with higher clocks.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
To be DX10.1 compliant isn't an easy trick in itself.

For nVidia's engineers it would be incredibly simple actually. At most they need to add a few registers. That isn't some made up dream scenario, you can check the exacting DX10.1 hardware requirements for yourself. Now, is it worth the costs associated with it for at most a marginal performance gain in a few titles? Not likely, particularly with current market conditions.

I can't see NV just jumping through all those hoops without screwing the pooch.

Please get very technical and explain what major issues you see nVidia having. No need to generalize in the least, break down exactly what architectural hurdles you see them having using all of your extensive expertise :)
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: BenSkywalker
To be DX10.1 compliant isn't an easy trick in itself.

For nVidia's engineers it would be incredibly simple actually. At most they need to add a few registers. That isn't some made up dream scenario, you can check the exacting DX10.1 hardware requirements for yourself. Now, is it worth the costs associated with it for at most a marginal performance gain in a few titles? Not likely, particularly with current market conditions.

I can't see NV just jumping through all those hoops without screwing the pooch.

The performance improvement that DX10.1 can bring is quite big compared to the small modifications needed to make the chip DX10.1 compliant (like more registers, a data path for shader-based anti-aliasing, etc). Probably nVidia's stream processors are designed in a way that is very hard to modify for DX10.1, and it would require a sizeable investment that would simply make the chip worse than it already is, but for sure it would help in PR and marketing, so they could sell more cards.
 