Dual GT300 card in the works


OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: dguy6789
*snip*

Everything you just said has been debunked in this thread already.

It doesn't matter if you think they are "taped together." In fact, the Rev 2 295s are on a single PCB as well.

nV is already planning the GT300 x2 card well in advance, so your two-week comment is bogus.


"Finesse" and "design" are not worried about by 99% of people who buy graphics. All that matters to most people are benchmarks and price. Some people are willing to pay more for that extra power, some prefer the price V performance like that of the 4890.

By your theory, people should have purchased the Phenom over the Q6600 because the Q6600 was just two E6600s taped together.

As said before, it could run off of magic fairy dust. If it still performs well and is within your budget, you are good to go.


Originally posted by: dguy6789
Nvidia does accomplish what they set out to do (achieve the performance crown), but to suggest that they put the same level of effort and planning into it from the get-go as ATI is absurd.

What does this even mean? How do you know what effort and planning is put into each design? :confused:
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
I said that the GTX 295 is faster, which was Nvidia's goal. You're saying how they get there doesn't matter; I am saying it does. The problem is that being faster at any cost isn't always good. My point is that it's obvious they didn't intend to run the GT200 as a dual GPU from the get-go the way ATI did. In fact, they couldn't run the full GT200 GPU in a dual GPU configuration even after a die shrink; they still had to use a cut-down one. Yes, the GTX 295 is now on a single PCB for the latest revisions, but just how long did that take? I'm not arguing it is better just because it's on a single PCB for cosmetic reasons, but because of the end result: two similarly performing cards with very different price tags. It is reasonable to believe that the 4870X2 sells for much less than the GTX 295 for reasons that probably include design decisions about multi-GPU use made long before either single-GPU version of the card even came out.

In a nutshell, I think early on ATI thought "we're definitely going to be making a dual RV770 card" and Nvidia thought "we'll make a dual GT200 card if need be but we'll get to it whenever we get to it"
 

dflynchimp

Senior member
Apr 11, 2007
468
0
71
Originally posted by: dguy6789
I said that the GTX 295 is faster, which was Nvidia's goal. You're saying how they get there doesn't matter; I am saying it does. The problem is that being faster at any cost isn't always good. My point is that it's obvious they didn't intend to run the GT200 as a dual GPU from the get-go the way ATI did. In fact, they couldn't run the full GT200 GPU in a dual GPU configuration even after a die shrink; they still had to use a cut-down one. Yes, the GTX 295 is now on a single PCB for the latest revisions, but just how long did that take? I'm not arguing it is better just because it's on a single PCB for cosmetic reasons, but because of the end result: two similarly performing cards with very different price tags. It is reasonable to believe that the 4870X2 sells for much less than the GTX 295 for reasons that probably include design decisions about multi-GPU use made long before either single-GPU version of the card even came out.

In a nutshell, I think early on ATI thought "we're definitely going to be making a dual RV770 card" and Nvidia thought "we'll make a dual GT200 card if need be but we'll get to it whenever we get to it"

Time to jump ship my friend. No use defending a point that's clearly been lost.

Intent may have minor consequences on the final product, but the biggest difference is in the optimization. Whether Nvidia's card is elegant or not is a moot point, as the goal is to make a competitive, fast-performing GPU with the resources you have on hand. Seriously, would you look inside your case every day and go, "Damn, 4870X2, you are one elegant, fine-looking card!"

Chances are once you've installed the card and got it running smoothly you'll never steal another glance at it until you have to do hardware maintenance. All that you'll be seeing is guns, rockets, and clockspeeds/thermals if you're into hardware monitoring.

The real benefit that I see, though not a definite one, is that if you start out intending to build something, as opposed to just considering it, you're more likely to put forth more effort in the planning stage. This, however, does not necessarily guarantee the end product's superiority.

On a side note, I've been slightly more impressed with Nvidia's drivers as of late. ATI is pretty decent, but they still have a few kinks to work out on their DX10 hardware, as well as crossfireX issues. This is one area that I think makes or breaks the customer's impression.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: dguy6789
In a nutshell, I think early on ATI thought "we're definitely going to be making a dual RV770 card" and Nvidia thought "we'll make a dual GT200 card if need be but we'll get to it whenever we get to it"

Clearly, this is your opinion, but I beg to differ.

I personally think NV's plan was more like this: "we're going to build a big, fast, single chip; then we're going to shrink it and put two of them together". Given that this is exactly what NVIDIA has been doing since 2006, it's not exactly a stretch to think it might actually have been part of their plan all along.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: nitromullet
Originally posted by: waffleironhead
Even a blind squirrel gets a nut every once in awhile. It's pretty easy to predict a dual card in the future, as they are pretty common now.

Originally posted by: ShadowOfMyself
Well, usually Nvidia puts a large gap between the launch of single gpu and multi gpu, so this probably means they acknowledge AMD is doing quite well with their X2 line, and decided to hurry with their X2 card as well

In the past NVIDIA has only done dual GPU cards after a die shrink of the original launch core. This has been the case with the 7950GX2, 9800GX2, and GTX 295. So, if NVIDIA is coming out of the gate with a dual chip card, it would be a significant change in strategy IMO.

I think after the RV770 launch Nvidia changed strategy. I think their CEO was quoted as saying something to that effect. Normally they took the more conservative approach as far as the process their GPUs are built on around a launch (I'm guessing because it's a known process, so they'll get decent yields in theory and then have a solid refresh, 'tock'-type product for later). This time around they will be launching their parts on 40nm, same as AMD. So my guess is that's a big part of the reason we'll see an X2 card sooner this go-around from Nvidia.
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
Originally posted by: dflynchimp
Time to jump ship my friend. No use defending a point that's clearly been lost.

Intent may have minor consequences on the final product, but the biggest difference is in the optimization. Whether Nvidia's card is elegant or not is a moot point, as the goal is to make a competitive, fast-performing GPU with the resources you have on hand. Seriously, would you look inside your case every day and go, "Damn, 4870X2, you are one elegant, fine-looking card!"

:thumbsup:

As long as it performs and stays relatively cool, I wouldn't care how or when it was made. I think cooling is made more difficult by the sandwich-type card, but they've addressed that with the newer revision. The only thing I don't like about the 295 is the price, since it performs close to the 4870X2, which can be had for much less ($150 difference here in Canada).
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: OCguy
Originally posted by: Shaq
Those cards are going to be insanely fast. I can't wait. lol And yes these cards can dominate Crysis. I don't see any games coming that can even remotely use all that power. We will be set for 2-3 years with one of these.

Unless once and for all this proves that it hasn't been the hardware, it has been the engine.


Anyway, that card is going to be a freaking beast. Something tells me nV is going to surprise people and launch the single GPUs in November. They are being strangely quiet. :p

ARMA 2 is chewing up my 2X280s, so I need more power!

Crysis doesn't seem to be very well multithreaded. It doesn't even seem to make full use of dual cores, which is strange given the level of physics it is capable of.


Hmm, nvidia and ATI die sizes were closer than I thought. Also interesting how much of a dog R600 was, about as large as G80 but was barely an improvement over R580, yet the much tinier RV670 was faster than R600 and RV770 much faster.

Actually kinda like the "of AMD and Intel, which did IMC first?" arguments (answer: Intel did, 386SL w/IMC) the story of who did dual-gpu single-card first between AMD and NVidia is that NVidia did it first:

ATI had dual GPU cards first. They had a dual chip version of the Rage way back when, and had a specialty dual GPU version of the R300 (not mass market) if you want to go with a true GPU.

But it's truly amazing what nvidia and ati can accomplish now compared to what vid card companies did in the late 90's. Remember when the voodoo5 was delayed because 3dfx was having trouble getting two slightly-beyond-voodoo3 chips to work together? Remember when the voodoo5 6000 was cancelled because 3dfx couldn't get a working bridge chip from intel to tie it all together?
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
The last time I saw this kind of hype over GPU gossip was R600. I am the betting type. (Shame.) We won't be seeing the NV 300 for at least 6 months. Just in time for ATI's 28nm.

That's what GlobalFoundries and ATI are working on as we chat.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Fox5
Originally posted by: OCguy
Originally posted by: Shaq
Those cards are going to be insanely fast. I can't wait. lol And yes these cards can dominate Crysis. I don't see any games coming that can even remotely use all that power. We will be set for 2-3 years with one of these.

Unless once and for all this proves that it hasn't been the hardware, it has been the engine.


Anyway, that card is going to be a freaking beast. Something tells me nV is going to surprise people and launch the single GPUs in November. They are being strangely quiet. :p

ARMA 2 is chewing up my 2X280s, so I need more power!

Crysis doesn't seem to be very well multithreaded. It doesn't even seem to make full use of dual cores, which is strange given the level of physics it is capable of.


Hmm, nvidia and ATI die sizes were closer than I thought. Also interesting how much of a dog R600 was, about as large as G80 but was barely an improvement over R580, yet the much tinier RV670 was faster than R600 and RV770 much faster.

Actually kinda like the "of AMD and Intel, which did IMC first?" arguments (answer: Intel did, 386SL w/IMC) the story of who did dual-gpu single-card first between AMD and NVidia is that NVidia did it first:

ATI had dual GPU cards first. They had a dual chip version of the Rage way back when, and had a specialty dual GPU version of the R300 (not mass market) if you want to go with a true GPU.

But it's truly amazing what nvidia and ati can accomplish now compared to what vid card companies did in the late 90's. Remember when the voodoo5 was delayed because 3dfx was having trouble getting two slightly-beyond-voodoo3 chips to work together? Remember when the voodoo5 6000 was cancelled because 3dfx couldn't get a working bridge chip from intel to tie it all together?

I think the Rage Fury Maxx came before any Nvidia dual GPU cards. I could be wrong on that, but I know that card was pretty early. I *think* Nvidia's first dual GPU card was the 7950 GX2.

As far as the more elegant design goes, I do think AMD wins there... the problem is I don't think that matters at all in the real world. :) AMD's Phenom 1 was more elegant than Intel's pre-i7 quads, but generally slower. Though I do agree that AMD's card is more elegant, either way I'll generally take the better bang-for-the-buck card personally.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: SlowSpyder
I think the Rage Fury Maxx came before any Nvidia dual GPU cards. I could be wrong on that, but I know that card was pretty early. I *think* Nvidia's first dual GPU card was the 7950 GX2.

Yeah, NV certainly didn't invent the dual GPU card. The Rage Fury Maxx was much earlier, and even the Volari V8 Duo http://www.driverheaven.net/reviews/Volari/ came before the 7950GX2. I think the 7950GX2 (7900GX2, actually) was the first PCIe dual GPU card to employ the current "SLI (or CF) on a single card" concept. I think the significance (and success) of this is that SLI and CF were already existing and accepted technologies, so driver support and commitment already existed.

Incidentally, XGI had the same strategy with the Volari V8 as ATI is currently employing (dual gpu to compete on the high end), so it's not a new concept. ATI is making the idea work for them though, and that's what counts.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: SlowSpyder
I think the Rage Fury Maxx came before any Nvidia dual GPU cards. I could be wrong on that, but I know that card was pretty early. I *think* Nvidia's first dual GPU card was the 7950 GX2.

I'm not going to nitpick you over any technical differences between AMD's implementation of AFR on the Rage Fury Maxx (compatible only with Win98) any more than I would nitpick Nvidia on their early implementations of SLI on the Voodoo 5 cards that came out around the same time.

My comments were more aimed at modern times (AMD vs. NV, not ATI vs. 3dfx), just to say there is no clear delineation in the data to suggest that (1) AMD's dual-GPU strategy was truly their brainchild, and (2) Nvidia played catch-up the entire time.

FWIW, that is why I was so selective in my wording of the analogy I made with Intel vs. AMD regarding the first IMC... because neither had THE first; that award goes to some DEC or IBM mainframe from the 1960s (irrelevant to modern IMCs, but if we are to dredge the technicalities then it is out there in the footnotes of history). The Rage Fury Maxx was an ATI product, not AMD, just as the Voodoo 5 was a 3dfx product, not Nvidia. We are talking AMD vs. NV in this thread, so I tried to keep my post topical.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: Nemesis 1
The last time I saw this kind of hype over GPU gossip was R600. I am the betting type. (Shame.) We won't be seeing the NV 300 for at least 6 months. Just in time for ATI's 28nm.

That's what GlobalFoundries and ATI are working on as we chat.

Marking this prediction.
 

Forumpanda

Member
Apr 8, 2009
181
0
0
I didn't read that as a definite statement, just a prediction. I guess AT forums don't take kindly to predictions?

TBH most 'news sources' quoted here are less credible than the average forum poster.
I can't believe some of the things I read when I follow the links people post as 'proof'.

They read like a 15-year-old mixed a few buzzwords with some dates and stuck in filler words until it reached minimum article length.
That, or they are just horrid FUD.
 

lopri

Elite Member
Jul 27, 2002
13,328
709
126
Originally posted by: Idontcare

Actually kinda like the "of AMD and Intel, which did IMC first?" arguments (answer: Intel did, 386SL w/IMC) the story of who did dual-gpu single-card first between AMD and NVidia is that NVidia did it first:
?
Talk about a change of subject (to something irrelevant). I don't see anyone here 'arguing' about who did what first. And I'd think such a question would be answered by a 'fact' rather than an argument.

NV pioneered the multi-GPU market and SLI still has huge name value. ATI downplayed it initially, then got hot on its heels as it lost market share and mind share, and had no choice but to follow. Without the necessary R&D, it took years for ATI to come up with an answer, and even then the resulting products were a laughingstock (with a dongle hanging out the back).

Fast forward to today: AMD and NV are more or less on an even footing on the multi-GPU front, and they now use it as a marketing tool to one-up each other and bully others. Thus the tone of the PR speak changes depending on what they've got and what they haven't got...

I think that much is pretty clear. I thought the discussion here was what it would possibly imply if NV decided on/pulled off a simultaneous launch of a single-chip GT300 board and a dual-chip GT300 board. As nitromullet astutely noted, it'd be an unprecedented move by NV if true.

I have my (huge) share of doubt, but since we're speculating here anyway: if NV can and will release a dual-GPU SKU based on GT300 this year, why/how so? Maybe:

- GT300's yields saw a breakthrough (and it's possible to stuff two of them within the thermal/power budget with some modification), or
- NV has more chips in the wings than GT300 for the next gen. It's a possibility (I think), knowing today's CPUs/GPUs are becoming more modular, or
- It will be a paper-ish launch and GT300's performance isn't up to expectations.
- etc..

It's hard even to speculate, though, because I think the probability of this happening is very, very low.

Edit: I didn't realize the topic has already changed before I wrote the above. (but not before I read what I read) Rage Fury.. lol.
 

dflynchimp

Senior member
Apr 11, 2007
468
0
71
Originally posted by: Forumpanda
I didn't read that as a definite statement, just a prediction. I guess AT forums don't take kindly to predictions?

TBH most 'news sources' quoted here are less credible than the average forum poster.
I can't believe some of the things I read when I follow the links people post as 'proof'.

They read like a 15-year-old mixed a few buzzwords with some dates and stuck in filler words until it reached minimum article length.
That, or they are just horrid FUD.

heh, well looks like you finally got us figured out XD. We'll take just about anything here.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Originally posted by: nitromullet
Originally posted by: dguy6789
In a nutshell, I think early on ATI thought "we're definitely going to be making a dual RV770 card" and Nvidia thought "we'll make a dual GT200 card if need be but we'll get to it whenever we get to it"

Clearly, this is your opinion, but I beg to differ.

I personally think NV's plan was more like this: "we're going to build a big, fast, single chip; then we're going to shrink it and put two of them together". Given that this is exactly what NVIDIA has been doing since 2006, it's not exactly a stretch to think it might actually have been part of their plan all along.

People seem to forget that when Nvidia introduced the dual GPU setup, they had the smaller, cooler, slightly lower-performing chip in the 7900 series. This reversal of fortunes, with Nvidia having the large single chip and AMD/ATI the smaller chip, is very recent.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Maybe nvidia is just going to rename a previous gen card and release it as the dual chip version in the next gen.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: lopri
Originally posted by: Idontcare

Actually kinda like the "of AMD and Intel, which did IMC first?" arguments (answer: Intel did, 386SL w/IMC) the story of who did dual-gpu single-card first between AMD and NVidia is that NVidia did it first:
?
Talk about a change of subject (to something irrelevant). I don't see anyone here 'arguing' about who did what first. And I'd think such a question would be answered by a 'fact' rather than an argument.
.
.
.
Edit: I didn't realize the topic has already changed before I wrote the above. (but not before I read what I read) Rage Fury.. lol.

Damn, lopri, did I shit in your Wheaties or something recently? Why the pointed rebuttal of something so trivial? The topic clearly being discussed was whether or not Nvidia had a dual-GPU single-card strategy dating to the same timeframe as AMD's.

I tried to add some facts and info to the discussion (yes, facts; surely you saw my links with dates in the same post you quoted) to highlight that the GX2 pre-dated the HD3870 X2 (not that it mattered who pre-dated whom; the point was the data suggests NV had a dual-GPU strategy in parallel with AMD). The point of my info got misinterpreted, IMO, so I came back to clarify that even if you want to dig all the way back in time to AMD's history with the Rage Fury Maxx, then you've got Nvidia with their Voodoo 5 5500 at nearly the same time as well.

Good grief. And for what it's worth, being a debate team person, whenever I use the word "argument" as a descriptor of my posting style, it is with this in mind:

Debate or debating is a formal method of interactive and representational argument.

Debate is a broader form of argument than logical argument, which only examines consistency from axioms, and factual argument, which only examines what is or isn't the case, or rhetoric, which is a technique of persuasion.

http://en.wikipedia.org/wiki/Debate

I don't use the term "argument" to characterize my position or my posts as being emotionally charged or temperamental in the way your parents' or my parents' arguments would be. I acknowledge that others may see it differently, though.

IMO my representational argument was fact-laden, as was SlowSpyder's rebuttal regarding the Rage Fury Maxx. Why am I the one getting selectively jumped on here? :confused:

Originally posted by: Nemesis 1
The last time I saw this kind of hype over GPU gossip was R600. I am the betting type. (Shame.) We won't be seeing the NV 300 for at least 6 months. Just in time for ATI's 28nm.

That's what GlobalFoundries and ATI are working on as we chat.

You really think ATI is going to start shipping 28nm based GPU product at nearly the same time frame as NV starts shipping 40nm GT300 based product?...and that ATI 28nm product is coming in 6 months?

There are two bets in there. NV300 not showing up for 6 months is always a possibility; some kind of fatal last-minute design flaw could always crop up (Phenom B2) and the ensuing respin delay would certainly push the timeline out an additional 3 months. But you are taking bets on it happening, meaning you are ascribing a probability greater than 50%, which IMO is a bit overly pessimistic.

Is this a gut feeling or is it based on you tracking the milestone evolution and extrapolating from those data to project a delivery timeline?

And the 28nm stuff...seriously?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I got what you meant after your post, IDC... since the AMD/ATi merge you were talking about. :thumbsup:

Anyway, I know I'm looking forward to a new card. My 512MB 4870 was perfect for me on a 22" monitor, but now that I have this 26" (it was on clearance at Best Buy for about half price... I couldn't resist), I'm really looking forward to an upgrade to a card with 1GB or more.

If the rumors are true, or even in the ballpark, I would think a 1600 shader/32 ROP/80 TMU card would be a huge jump over the current parts, assuming clock speeds are where they need to be.

Anyone please feel free to comment on this... I think AMD has a big advantage due to the fact that they have more wiggle room to create a larger chip. Their chips right now are 65% of the size of Nvidia's and give you 90+% of the performance you get from Nvidia. If AMD made a chip that even approached the size of Nvidia's GPUs, it would be easier for AMD to fit in much more 'stuff' than Nvidia, I would think. If AMD made a 500mm² chip, they could probably have a 3200-4000 shader part (just pulling a number out of my ass... not doing the math, as I'm lazy today :p) or something huge-ish, couldn't they? Not sure if they want that strategy or to keep with the smaller chip and CrossFire/dual GPU cards for the higher end, but it's something I thought they could do.
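To make that back-of-envelope scaling concrete, here is a minimal sketch in Python; the ~260 mm²/800-shader baseline and the 2x shrink factor are illustrative assumptions, not official figures:

```python
# Back-of-envelope sketch: how shader count might scale with die area,
# optionally including a process-shrink factor. All numbers below are
# placeholder assumptions for illustration, not official specs.

def scaled_shaders(base_shaders, base_area_mm2, target_area_mm2, shrink_factor=1.0):
    """Assume shader count grows roughly linearly with usable die area.
    shrink_factor ~2.0 approximates a full-node shrink (area per shader halves).
    Ignores fixed overhead like memory controllers, ROPs, and I/O pads."""
    return int(base_shaders * (target_area_mm2 / base_area_mm2) * shrink_factor)

# Assumed baseline: an RV770-class part, roughly 260 mm^2 with 800 shaders.
print(scaled_shaders(800, 260, 500))                       # same process -> ~1538
print(scaled_shaders(800, 260, 500, shrink_factor=2.0))    # with a ~2x shrink -> ~3076
```

So on the same process a 500mm² chip lands well short of 3200-4000 shaders, but once a node shrink is folded in, a guess in that neighborhood isn't unreasonable.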
 

MODEL3

Senior member
Jul 22, 2009
528
0
0
I think Nemesis1 was just playing around!

The most probable scenario for TSMC is to have 28nm at Q4 2010!
Unless they skip 32nm (I doubt),
but TSMC will probably only gain 1 quarter (Q3 2010), and will lose more!

For GlobalFoundries, the most probable scenario (99.9999%) is to have 28nm after TSMC!

He didn't clarify, if he meant a winning one! :laugh:
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: SlowSpyder
I got what you meant after your post, IDC... since the AMD/ATi merge you were talking about. :thumbsup:

Thanks for the sanity check :thumbsup: At any rate, it really wasn't my goal to establish a "who did what first" when it came to dual-GPU stuff; my memory is poor in this area and I acknowledge I'll be bested by the technicalities of history any day of the week. Folks can take me to task over the accuracy of the technicalities, I really don't mind and in fact I appreciate the learning experience, but I don't think we can ignore the GX2 if we are talking about the 3870 X2, is all.

Originally posted by: SlowSpyder
If the rumors are true, or even in the ballpark, I would think a 1600 shader/32 ROP/80 TMU card would be a huge jump over the current parts, assuming clock speeds are where they need to be.

Anyone please feel free to comment on this... I think AMD has a big advantage due to the fact that they have more wiggle room to create a larger chip. Their chips right now are 65% of the size of Nvidia's and give you 90+% of the performance you get from Nvidia. If AMD made a chip that even approached the size of Nvidia's GPUs, it would be easier for AMD to fit in much more 'stuff' than Nvidia, I would think. If AMD made a 500mm² chip, they could probably have a 3200-4000 shader part (just pulling a number out of my ass... not doing the math, as I'm lazy today :p) or something huge-ish, couldn't they? Not sure if they want that strategy or to keep with the smaller chip and CrossFire/dual GPU cards for the higher end, but it's something I thought they could do.

I bolded your clockspeed comment because that is the meat of the "problem" with trying to scale the performance of smallish-chips to that of large(r)-chips...in addition to the functional yield impairment that comes with die-size scaling (a matter of cost and harvesting) there is a parametric yield impairment that comes with die-size scaling as well.

Within-chip process-induced variation, which makes the weakest circuit the rate-limiter on the shmoo plot (GHz vs. Vcc, which in turn means GHz vs. power consumption), together with the simply unavoidable clock-propagation delay across the chip (the physics of the situation), means that for an otherwise identical chip cut in half, the clockspeed of the two halves can always be higher than that of the single monolithic chip (for a normalized "system" shmoo, if you will: same Vcc, power consumption, etc.).

The big chip vs. two small chip paradigm is actually an interesting outcome of Moore's Law for those who have studied his original article in detail. Each basically takes an opposing position of equivalent aggregate cost on the "Number of Components per IC versus Relative Manufacturing Cost/Component" curve. AMD takes a position on the lagging edge of the curve (somewhere near the optimum) whereas NV takes a position definitely farther up the leading edge (where costs are rising due to functional yield and IC design costs).
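The functional-yield half of that trade-off can be illustrated with the classic Poisson defect-yield model; here is a minimal sketch in Python, where the defect density, wafer size, and die areas are assumptions chosen only for illustration:

```python
import math

# Classic Poisson defect-yield model: Y = exp(-A * D0), where A is die area
# and D0 is defect density. The defect density, wafer size, and die areas
# below are illustrative assumptions, not foundry data.

D0 = 0.4 / 100.0                         # assumed defects per mm^2 (0.4 per cm^2)
WAFER_AREA = math.pi * (300 / 2) ** 2    # 300mm wafer, ignoring edge losses

def dies_per_wafer(area_mm2):
    return WAFER_AREA / area_mm2         # crude estimate, no scribe/edge losses

def functional_yield(area_mm2):
    return math.exp(-area_mm2 * D0)

for area in (256, 512):                  # one "small" die vs. one die twice as big
    good = dies_per_wafer(area) * functional_yield(area)
    print(f"{area} mm^2: yield {functional_yield(area):.1%}, ~{good:.0f} good dies/wafer")

# With these assumptions the 256 mm^2 die yields ~36% (~99 good dies per wafer)
# while the 512 mm^2 die yields ~13% (~18 good dies per wafer), so two small
# dies deliver far more good silicon per wafer than one big die of the same
# total area; that is the cost/harvesting point above.
```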

Originally posted by: MODEL3
The most probable scenario for TSMC is to have 28nm at Q4 2010!

TSMC having 28nm in Q4 2010 (even if it did happen) is not the same thing as their customers having 28nm-based product for sale on Newegg in Q4 2010.

Originally posted by: MODEL3
Unless they skip 32nm (I doubt),

They skipped 45nm, who is to say 32nm won't be skipped in favor of transitioning to 28nm as well?

Originally posted by: MODEL3
For GlobalFoundries, the most probable scenario (99.9999%) is to have 28nm after TSMC!

Given that GF had their even-higher performing (parametrics-wise) 45nm process tech out in production nearly 9 months before TSMC fielded their 40nm process tech, and that GF is a member of the bulk-Si development alliance at IBM, I am not sure what basis you are relying upon to conclude that TSMC is six-nines probable for debuting 28nm before GF...I'd put the odds almost exactly the other way around.
 

MODEL3

Senior member
Jul 22, 2009
528
0
0
Originally posted by: Idontcare
Originally posted by: SlowSpyder
I got what you meant after your post, IDC... since the AMD/ATi merge you were talking about. :thumbsup:

Thanks for the sanity check :thumbsup: At any rate, it really wasn't my goal to establish a "who did what first" when it came to dual-GPU stuff; my memory is poor in this area and I acknowledge I'll be bested by the technicalities of history any day of the week. Folks can take me to task over the accuracy of the technicalities, I really don't mind and in fact I appreciate the learning experience, but I don't think we can ignore the GX2 if we are talking about the 3870 X2, is all.

Originally posted by: SlowSpyder
If the rumors are true, or even in the ballpark, I would think a 1600 shader/32 ROP/80 TMU card would be a huge jump over the current parts, assuming clock speeds are where they need to be.

Anyone please feel free to comment on this... I think AMD has a big advantage due to the fact that they have more wiggle room to create a larger chip. Their chips right now are 65% of the size of Nvidia's and give you 90+% of the performance you get from Nvidia. If AMD made a chip that even approached the size of Nvidia's GPUs, it would be easier for AMD to fit in much more 'stuff' than Nvidia, I would think. If AMD made a 500mm² chip, they could probably have a 3200-4000 shader part (just pulling a number out of my ass... not doing the math, as I'm lazy today :p) or something huge-ish, couldn't they? Not sure if they want that strategy or to keep with the smaller chip and CrossFire/dual GPU cards for the higher end, but it's something I thought they could do.

I bolded your clockspeed comment because that is the meat of the "problem" with trying to scale the performance of smallish-chips to that of large(r)-chips...in addition to the functional yield impairment that comes with die-size scaling (a matter of cost and harvesting) there is a parametric yield impairment that comes with die-size scaling as well.

Within-chip process-induced variation, which makes the weakest circuit the rate-limiter on the shmoo plot (GHz vs. Vcc, which in turn means GHz vs. power consumption), together with the simply unavoidable clock-propagation delay across the chip (the physics of the situation), means that for an otherwise identical chip cut in half, the clockspeed of the two halves can always be higher than that of the single monolithic chip (for a normalized "system" shmoo, if you will: same Vcc, power consumption, etc.).

The big chip vs. two small chip paradigm is actually an interesting outcome of Moore's Law for those who have studied his original article in detail. Each basically takes an opposing position of equivalent aggregate cost on the "Number of Components per IC versus Relative Manufacturing Cost/Component" curve. AMD takes a position on the lagging edge of the curve (somewhere near the optimum) whereas NV takes a position definitely farther up the leading edge (where costs are rising due to functional yield and IC design costs).

Originally posted by: MODEL3
The most probable scenario for TSMC is to have 28nm at Q4 2010!

TSMC having 28nm in Q4 2010 (even if it did happen) is not the same thing as their customers having 28nm-based product for sale on Newegg in Q4 2010.

Originally posted by: MODEL3
Unless they skip 32nm (I doubt),

They skipped 45nm, who is to say 32nm won't be skipped in favor of transitioning to 28nm as well?

Originally posted by: MODEL3
For GlobalFoundries, the most probable scenario (99.9999%) is to have 28nm after TSMC!

Given that GF had their even-higher performing (parametrics-wise) 45nm process tech out in production nearly 9 months before TSMC fielded their 40nm process tech, and that GF is a member of the bulk-Si development alliance at IBM, I am not sure what basis you are relying upon to conclude that TSMC is six-nines probable for debuting 28nm before GF...I'd put the odds almost exactly the other way around.

I am just playing, don't mind me today, I feel good!

I meant for their partners to have products in the market (Q4 2010)!

I just have a feeling, I don't know anything about whether they skip it or not!

Well, you put the odds on GF, I put the odds on TSMC!

Even if you are right in your logic about 40/45nm TSMC vs. GF (I don't think the comparison you make is fair), I always liked "the higher the risk, the higher the stakes" better! :laugh:

EDIT*
Although I think the most probable scenario is for TSMC to have the capability to offer its partners products in Q4 2010, I think the recent delay with the PCI-Express 3.0 standard will delay the actual roadmap and launch of those products a little bit!

About my comment about Nemesis1, I think it is pretty clear what I meant, but in any case I will clarify:

1. I like him because he seems to me to be playing around!
2. I like him also because he seems to me not one of the "safe bets, low stakes" guys! (At least about predictions; I mean there is no challenge otherwise, too boring!)

 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Idontcare
Originally posted by: SlowSpyder
I got what you meant after your post, IDC... since the AMD/ATi merge you were talking about. :thumbsup:

Thanks for the sanity check :thumbsup: At any rate, it really wasn't my goal to establish a "who did what first" when it came to dual-GPU stuff; my memory is poor in this area and I acknowledge I'll be bested by the technicalities of history any day of the week. Folks can take me to task over the accuracy of the technicalities, I really don't mind and in fact I appreciate the learning experience, but I don't think we can ignore the GX2 if we are talking about the 3870 X2, is all.

Yeah, no point in going off on a tangent on who did what first. Obviously Nvidia brought SLI to the mainstream in 'recent' times, and Crossfire was simply ugly in its early days (there's that "not an elegant design" thing coming up again...). It's obvious that both have come a long way since the GeForce 6800/Radeon X1800 days.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: SlowSpyder
It's obvious that both have come a long way since the GeForce 6800/Radeon X1800 days.

And isn't it sweet that for about the same amount of coin (inflation adjusted) we get the kind of GPU power we do today out of either company? The best thing we could ask for is Larrabee being neck and neck with these two so everybody gets even more competitive at 28/22nm.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Based on AMD's great success with immersion lithography, I feel they have a leg up. ATI has really never had its own foundry before (GlobalFoundries). AMD can't get to 32nm first, but ATI can, and probably will release a 28nm part in Q2. Bulk silicon will be what's used. I think ATI has more than one leg up on the competition. The NV300 part as specced on the net? This year, not a chance in hell.