Why GT300 isn't going to make 2009

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
http://www.theinquirer.net/inq...dia-gt300-architecture

Really, don't just say it's the Inq, so it sucks. I think some of it actually is well thought out, and although it can't be taken as the truth, it isn't bad speculation, especially if you look past the anti-Nvidia bias.

Discuss:

THERE'S A LOT of fake news going around about the upcoming GPUish chip called the GT300. Let's clear some air on this Larrabee-lite architecture.
First of all, almost everything you have heard about the two upcoming DX11 architectures is wrong. There is a single source making up news, and second rate sites are parroting it left and right. The R870 news is laughably inaccurate, and the GT300 info is quite curious too. Either ATI figured out a way to break the laws of physics with memory speed and Nvidia managed to almost double its transistor density - do the math on purported numbers, they aren't even in the ballpark - or someone is blatantly making up numbers.

That said, let's get on with what we know, and delve into the architectures a bit. The GT300 is going to lose, badly, in the GPU game, and we will go over why and how.

First a little background science and math. There are three fabrication processes out there that ATI and Nvidia use, all from TSMC, 65nm, 55nm and 40nm. They are each a 'half step' from the next, and 65nm to 40nm is a full step. If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 *65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area. We will be using these later.
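The scaling figures above are just the squared ratio of the node labels; here is a minimal sketch of that arithmetic (purely the article's assumption about how nodes scale, which is questioned later in this thread):

```python
# A minimal sketch of the article's own arithmetic: treat the node label as a
# linear dimension and square the ratio to get relative die area for the same
# transistor count. (As pointed out later in the thread, node labels don't
# really scale this literally; this just reproduces the numbers quoted above.)
def area_ratio(old_nm: float, new_nm: float) -> float:
    return (new_nm / old_nm) ** 2

for old, new in [(65, 55), (55, 40), (65, 40)]:
    print(f"{old}nm -> {new}nm: {area_ratio(old, new):.2f} of the original area")
# 65nm -> 55nm: 0.72, 55nm -> 40nm: 0.53, 65nm -> 40nm: 0.38
```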

Second is the time it takes to do things. We will use the best case scenarios, with a hot lot from TSMC taking a mere six weeks, and the time from wafers in to boards out of an AIB being 12 weeks. Top it off with test and debug times of two weeks for first silicon and one week for each subsequent spin. To simplify rough calculations, all months will be assumed to have 4 weeks.

Okay, ATI stated that it will have DX11 GPUs on sale when Windows 7 launches, purportedly October 23, 2009. Since this was done in a financial conference call, SEC rules applying, you can be pretty sure ATI is serious about this. Nvidia on the other hand basically dodged the question, hard, in its conference call the other day.

At least you should know why Nvidia picked the farcical date of October 15 for its partners. Why farcical? Let's go over the numbers once again.

According to sources in Satan Clara, GT300 has not taped out yet, as of last week. It is still set for June, which means best case, June 1st. Add six weeks for first silicon, two more for initial debug, and you are at eight weeks, minimum. That means the go or no-go decision might be made as early as August 1st. If everything goes perfectly, and there is no second spin required, you would have to add 90 days to that, meaning November 1st, before you could see any boards.
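Taken at face value, that schedule is simple date arithmetic; here is a rough sketch of it (the June 1 tape-out, the six-week hot lot, the two-week debug and the 90-day board ramp are all best-case assumptions from above, not confirmed dates):

```python
from datetime import date, timedelta

# Best-case schedule as laid out above; every duration is an assumption.
tape_out      = date(2009, 6, 1)                    # "still set for June", best case June 1st
first_silicon = tape_out + timedelta(weeks=6)       # hot lot through the fab -> 2009-07-13
go_no_go      = first_silicon + timedelta(weeks=2)  # initial debug           -> 2009-07-27
boards        = go_no_go + timedelta(days=90)       # wafers in to boards out -> 2009-10-25

print(first_silicon, go_no_go, boards)
# The article rounds the last two to "August 1st" and "November 1st".
```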

So, if all the stars align, and everything goes perfectly, Nvidia could hit Q4 of 2009. But that won't happen.

Why not? There is a concept called risk when doing chips, and the GT300 is a high risk part. GT300 is the first chip of a new architecture, or so Nvidia claims. It is also going to be the first GDDR5 part, and moreover, it will be Nvidia's first 'big' chip on the 40nm process.

Nvidia's chipmaking of late has been laughably bad. GT200 was slated for November of 2007 and came out in May or so of 2008, two quarters late. We are still waiting for the derivative parts. The shrink, GT206/GT200b, is technically a no-brainer, but instead of arriving in August of 2008, it trickled out in January, 2009. The shrink of that to 40nm, the GT212/GT200c, was flat out canceled; Nvidia couldn't do it.

The next largest 40nm part, the GT214, also failed, and it was redone as the GT215. The next smallest parts, the GT216 and GT218, very small chips, are hugely delayed, perhaps to finally show up in late June. Nvidia can't make a chip that is one-quarter of the purported size of the GT300 on the TSMC 40nm process. That is, make it at all, period - making it profitably is, well, a humorous concept for now.

GT300 is also the first DX11 part from the green team, and it didn't even have DX10.1 parts. Between the new process, larger size, bleeding-edge memory technology, dysfunctional design teams, new feature sets and fab partners trashed at every opportunity, you could hardly imagine ways to have more risk in a new chip design than Nvidia has with the GT300.

If everything goes perfectly and Nvidia puts out a GT300 with zero bugs, or easy-to-fix minor bugs, then it could be out in November. Given that there is only one GPU that we have heard of that hit this milestone, a derivative part, not a new architecture, it is almost assuredly not going to happen. No OEM is going to bet their Windows 7 launch vehicles on Nvidia's track record. They remember the 9400, GT200, and well, everything else.

If there is only one respin, you are into 2010. If there is a second respin, then you might have a hard time hitting Q1 of 2010. Of late, we can't think of any Nvidia product that hasn't had at least two respins, be they simple optical shrinks or big chips.

Conversely, the ATI R870 is a low risk part. ATI has a functional 40nm part on the market with the RV740/HD4770, and has had GDDR5 on cards since last June. Heck, it basically developed GDDR5. The RV740 - again, a part already on the market - is rumored to be notably larger than either the GT216 or 218, and more or less the same size as the GT215 that Nvidia can't seem to make.

DX11 is a much funnier story. The DX10 feature list was quite long when it was first proposed. ATI dutifully worked with Microsoft to get it implemented, and did so with the HD2900. Nvidia stomped around like a petulant child and refused to support most of those features, and Microsoft stupidly capitulated and removed large tracts of DX10 functionality.

This had several effects, the most notable being that the now castrated DX10 was a pretty sad API, barely moving anything forward. It also meant that ATI spent a lot of silicon area implementing things that would never be used. DX10.1 put some of those back, but not the big ones.

DX11 is basically what DX10 was meant to be with a few minor additions. That means ATI has had a mostly DX11 compliant part since the HD2900. The R870/HD5870 effectively will be the fourth generation DX11 GPU from the red team. Remember the tessellator? Been there, done that since 80nm parts.

This is not to say that it will be easy for either side. TSMC has basically come out and said that its 40nm process is horrid, an assertion backed up by everyone that uses it. That said, both the GT300 and R870 are designed for the process, so they are stuck with it. If yields can't be made economically viable, you will be in a situation of older 55nm parts going head to head for all of 2010. Given Nvidia's total lack of cost competitiveness on that node, it would be more a question of them surviving the year.

That brings us to the main point, what is GT300? If you recall Jen-Hsun's mocking jabs about Laughabee, you might find it ironic that GT300 is basically a Larrabee clone. Sadly though, it doesn't have the process tech, software support, or architecture behind it to make it work, but then again, this isn't the first time that Nvidia's grand prognostications have landed on its head.

The basic structure of GT300 is the same as Larrabee. Nvidia is going to use general purpose 'shaders' to do compute tasks, and the things that any sane company would put into dedicated hardware are going to be done in software. Basically DX11 will be shader code on top of a generic CPU-like structure. Just like Larrabee, but from the look of it, Larrabee got the underlying hardware right.

Before you jump up and down, and before all the Nvidiots start drooling, this is a massive problem for Nvidia. The chip was conceived at a time when Nvidia thought GPU compute was actually going to bring it some money, and it was an exit strategy for the company when GPUs went away.

It didn't happen that way, partially because of buggy hardware, partially because of over-promising and under-delivering, and then came the deathblows from Larrabee and Fusion. Nvidia's grand ambitions were stuffed into the dirt, and rightly so.

Nvidia Investor Relations tells people that between five and ten per cent of the GT200 die area is dedicated to GPU compute tasks. The GT300 goes way farther here, but let's be charitable and call it 10 per cent. This puts Nvidia at a 10 per cent areal disadvantage to ATI on the DX11 front, and that is before you talk about anything else. Out of the gate in second place.

On 55nm, the ATI RV790 basically ties the GT200b in performance, but does it in about 60 per cent of the area, and that means less than 60 per cent of the cost. Please note, we are not taking board costs into account, and if you look at yield too, things get very ugly for Nvidia. Suffice it to say that architecturally, GT200 is a dog, a fat, bloated dog.
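The "less than 60 per cent of the cost" point follows from basic die economics: a smaller die yields more gross candidates per wafer and a higher yield on each, so cost per good die falls faster than area. A toy sketch, assuming a 300mm wafer, a made-up defect density and a hypothetical 470mm² big die (illustrative numbers only, not published figures):

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    # Standard back-of-envelope gross-die estimate (ignores scribe lines, reticle limits).
    d = wafer_diameter_mm
    return math.pi * (d / 2) ** 2 / die_area_mm2 - math.pi * d / math.sqrt(2 * die_area_mm2)

def good_dies_per_wafer(die_area_mm2: float, defects_per_mm2: float = 0.003) -> float:
    # Simple Poisson yield model; the defect density here is purely illustrative.
    return gross_dies_per_wafer(die_area_mm2) * math.exp(-defects_per_mm2 * die_area_mm2)

big   = 470.0        # hypothetical "fat" die size, for illustration only
small = 0.6 * big    # a die at ~60% of its area, the ratio cited above

print(good_dies_per_wafer(small) / good_dies_per_wafer(big))
# ~3x the good dies per wafer, i.e. well under 60% of the silicon cost per chip
```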

Rather than go lean and mean for GT300, possibly with a multi-die strategy like ATI, Nvidia is going for bigger and less areally efficient. They are giving up GPU performance to chase a market that doesn't exist, but was a nice fantasy three years ago. Also, remember that part about ATI's DX10 being the vast majority of the current DX11? ATI is not going to have to bloat its die size to get to DX11, but Nvidia will be forced to, one way or another. Step 1) Collect Underpants. Step 2) ??? Step 3) Profit!

On the shrink from 55nm to 40nm, you about double your transistor count, but due to current leakage, doing so will hit a power wall. Let's assume that both sides can double their transistor counts and stay within their power budgets though, that is the best case for Nvidia.

If AMD doubles its transistor count, it could almost double performance. If it does, Nvidia will have to as well. But, because Nvidia has to add in all the DX11 features, or additional shaders to essentially dedicate to them, its chips' areal efficiency will likely go down. Meanwhile, ATI has those features already in place, and it will shrink its chip sizes to a quarter of what they were in the 2900, or half of what they were in the R770.

Nvidia will gain some area back when it goes to GDDR5. Then the open question will be how wide the memory interface will have to be to support a hugely inefficient GPGPU strategy. That code has to be loaded, stored and flushed, taking bandwidth and memory.

In the end, ATI can double performance if it chooses to double shader count, while Nvidia can double shader count but will lose a lot of real-world performance if it does.

In the R870, if you compare the time it takes to render one million triangles tessellated up from 250K against running those same one million triangles through without the tessellator, the tessellated path will only take a bit longer. Tessellation takes no shader time, so other than latency and bandwidth, there is essentially zero cost. If ATI implemented things right, and remember, this is generation four of the technology, things should be almost transparent.

Contrast that with the GT300 approach. There is no dedicated tessellator, and if you use that DX11 feature, it will take large amounts of shader time, used inefficiently as is the case with general-purpose hardware. You will then need the same shaders again to render the triangles. Going from 250K to one million triangles on the GT300 should be notably slower than rendering a straight one million triangles.

The same should hold true for all DX11 features, ATI has dedicated hardware where applicable, Nvidia has general purpose shaders roped into doing things far less efficiently. When you turn on DX11 features, the GT300 will take a performance nosedive, the R870 won't.

Worse yet, when the derivatives come out, the proportion of shaders needed to run DX11 will go up for Nvidia, but the dedicated hardware won't change for ATI. It is currently selling parts on the low end of the market that have all the "almost DX11" features, and is doing so profitably. Nvidia will have a situation on its hands in the low end that will make the DX10 performance of the 8600 and 8400 class parts look like drag racers.

In the end, Nvidia architecturally did just about everything wrong with this part. It is chasing a market that doesn't exist, and skewing its parts away from their core purpose, graphics, to fulfill that pipe dream. Meanwhile, ATI will offer you an x86 hybrid Fusion part if that is what you want to do, and Intel will have Larrabee in the same time frame.

GT300 is basically Larrabee done wrong for the wrong reasons. Amusingly though, it misses both of the attempted targets. R870 should pummel it in DX10/DX11 performance, but if you buy a $400-600 GPU for ripping DVDs to your iPod, Nvidia has a card for you. Maybe. Yield problems notwithstanding.

GT300 will be quarters late, and without a miracle, miss back to school, the Windows 7 launch, and Christmas. It won't come close to R870 in graphics performance, and it will cost much more to make. This is not an architecture that will dig Nvidia out of its hole, but instead will dig it deeper. It made a Laughabee. µ
 

OCGuy

Lifer
Jul 12, 2000
27,227
36
91
Someone posted this in the GT300 thread already, and it was discussed there.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
I missed it. Looked it up though, and it was 'disregarded', because the inq can't be trusted. Just read the article, and if something occurs to you as implausible, unlikely or a flat-out lie, make it known.

The speculation isn't any worse than the crap thebrightsideofnews (charlie) spouts, or hardware-infos, who claim to be working with thebrightsideofnews (go figure).
 

OCGuy

Lifer
Jul 12, 2000
27,227
36
91
It would be like arguing with someone about whether the world is flat or round. There is no point. That garbage is meant for the audience that laps it up.

The more outrageous he is, the more likely people like you are to post it in tech forums.

Yellow journalism at its finest.
 

Blazer7

Golden Member
Jun 26, 2007
1,099
5
81
It's true that the speculation isn't any worse, but when it comes to nV it's hard to take Charlie's articles into consideration. Not that he's always wrong, but he's so biased that it's hard for anyone to take him seriously. I've tried to find another, more credible source that backs his story but couldn't find any.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Really? I think statements like these: "Second is the time it takes to do things. We will use the best case scenarios, with a hot lot from TSMC taking a mere six weeks, and the time from wafers in to boards out of an AIB being 12 weeks. Top it off with test and debug times of two weeks for first silicon and one week for each subsequent spin. To simplify rough calculations, all months will be assumed to have 4 weeks."

Can be 'disproven'. If IDC says "this ain't even bullshit, it's horseshit," then it's score 1 for us, 0 for the inq.

Also, have other websites confirmed this: "Okay, ATI stated that it will have DX11 GPUs on sale when Windows 7 launches, purportedly October 23, 2009. Since this was done in a financial conference call, SEC rules applying, you can be pretty sure ATI is serious about this. Nvidia on the other hand basically dodged the question, hard, in its conference call the other day."
 

Forumpanda

Member
Apr 8, 2009
181
0
0
I love reading these speculation articles; it's always fun going back afterwards to see who got (or guessed) it right.

That being said, this article seems credible, with a somewhat big anti-nVidia spin to it.
In some parts it just uses bad language to describe nVidia, and in others it always assumes the worst-case scenario. Just because nVidia won't disclose when their DX11 hardware comes out doesn't have to mean it is delayed; maybe they just don't want to disclose the information, as it seems no credible information has leaked yet.

That being said, I still think it is a fair bet that (not counting respins) ATI should have a DX11/40nm part out a fair bit earlier than nVidia; they have more experience with the process node, the DX feature set, and the memory, and on top of that they are making a chip with a much smaller die size.

Given the information that the 40nm node will not decrease power usage per xtor much, ATI might just have backed their way into the right approach: if both chips are performance-limited by power usage, then we might very well see two similarly performing chips, but with ATI's being cheaper to produce and with better yields.

Of course this means nVidia will be able to release the gtMonster395XL, claim the performance crown and probably sell quite a few cards at a very high margin.
And thus at the end of the day, win the round in terms of profit.
 

Blazer7

Golden Member
Jun 26, 2007
1,099
5
81
Originally posted by: Forumpanda
That being said, I still think it is a fair bet that (not counting respins) ATI should have a DX11/40nm part out a fair bit earlier than nVidia; they have more experience with the process node, the DX feature set, and the memory, and on top of that they are making a chip with a much smaller die size.

I agree to that.

Given the information that the 40nm node will not decrease power usage per xtor much, ATI might just have backed their way into the right approach: if both chips are performance-limited by power usage, then we might very well see two similarly performing chips, but with ATI's being cheaper to produce and with better yields.

I'm not so sure that ATI's chip can compete with the G300.


@Marc

So far all we have is rumors. The thing is that these rumors speak of A1 silicon not A0. If this is true then nV is way ahead of what any of us expected. Maybe this is why they remain silent. Still, only rumors.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: Blazer7
I'm not so sure that ATI's chip can compete with the G300.

Well, if this article is true (regarding ATI's current DX10 feature list), the HD58xx has a chance of being more efficiently laid out compared to Nvidia's DX11 implementation (since ATI has a head start). This could mean more performance per transistor is possible. If that happens we might see a reversal of fortunes for ATI.



 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: MarcVenice
I missed it. Looked it up though, and it was 'disregarded', because the inq can't be trusted.

Exactly, so why re-post it as an "inflammatory" and possibly "inaccurate" thread?
 

Spike

Diamond Member
Aug 27, 2001
6,770
1
81
If you removed all the anti-Nvidia ranting, the article would probably shrink by half or more. Still, he does have some interesting points; now I just have to wonder if they are true.

I guess no one outside of the companies in question will know for quite some time, all we are left with is speculation... that and all the constructive and informative arguments we have here on the video forums at anandtech ;)
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
This is really like three articles rolled up into one.

First let me point out the obvious:
If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 *65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area.

This is simply an embarrassment. I'm embarrassed for Charlie because he doesn't even know why he should feel embarrassed for publishing this "do the math".

Node labels are simply nothing more than that. They could call it the Apple node, the Pear node, and the Banana node for all the node label means anymore. A "55nm" node has nothing to do with the number "55" or the units "nm". Go to Home Depot and buy yourself a 2x4 and measure it; does it measure 2 inches by 4 inches? No, the label "2x4" is not meant to mean anything mathematical or numeric beyond the basic logic that "a 2x4 is smaller than a 4x4 is smaller than a 4x6" etc.

A "55nm" device is supposed to mean its features are smaller than a "65nm" device. But they are NOT 55/65 = 0.846x smaller. To use a phrase, that kind of math is simply wronger than wrong.

Nevertheless it really doesn't matter that the areal reduction calculations are hopelessly flawed...they are mostly irrelevant in supporting (or not) the main message of his article anyways. That message being no GT300 in 2009.

The second sub-article of the article is all that DX11 speculation...I have no idea how much insight Charlie has into the ISA and architecture of GT300. He really goes to the extreme, though, in trying to paint the bleakest picture he knows how regarding the DX11 "overhead" that is going to hinder the GT300, versus the lengths he goes to in crediting ATI as having 4th-generation DX11 parts...this whole angle of the article just strains credulity. I imagine even the AMD engineers cringe a little when they read that.

The third sub-article relates to the timeline and "consequence of the math" type reasoning. So does it take 6 weeks to get a wafer out of a fab? Yes, it can take that long for a not-so-hot hot lot, but they can come out faster too. Much faster. Depending on how many metal levels (hot lots take 1 day per metal level once they get to the BEOL), it really should take no more than 3-4 weeks for NV to have first silicon coming out of the fab from tape-out.

The second "odd" thing about the timeline is the 12 weeks for AIB to boards in hand. This time is about right but it is entirely parallel to the actual debug process. NV will have packaged chips in their hands and on their debuggers within 72 hrs of the wafer exiting the fab. It will be plugged into a PCB board synthesizer (a tester) and put thru startup tests. Debug time of 2 weeks is about right, although obviously if there are critical flaws they will know before 2 weeks too.

To put it mildly, the timeline that Charlie put together is plausible in that NV could take that much time if they wanted to (you can always run a marathon slower than your best if you want), but it can be done much quicker if NV were so inclined (and motivated).
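To make the contrast concrete, here's a toy comparison of the two schedules from the same hypothetical June 1 tape-out (every duration is just an estimate from this thread, not anything from NV):

```python
from datetime import date, timedelta

tape_out = date(2009, 6, 1)   # hypothetical start date used earlier in the thread

# Charlie's serial reading: 6-week hot lot, 2 weeks of debug, then ~90 days to boards.
charlie_boards = tape_out + timedelta(weeks=6) + timedelta(weeks=2) + timedelta(days=90)

# The aggressive reading above: ~4 weeks to first silicon, debug starting within
# days of wafers out, and the ~12-week AIB board ramp running in parallel with debug.
aggressive_boards = tape_out + timedelta(weeks=4) + timedelta(weeks=12)

print(charlie_boards, aggressive_boards)   # 2009-10-25 vs 2009-09-21
```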

As others are noting, there is just so much wiggle room in the language Charlie uses that he has ensured he's got caveats liberally placed everywhere, so that if he's wrong he can say "well, I said 'assuming' this and that were true, which it turned out they weren't, so of course the timeline was different".

Charlie's timeline is plausible. It could take NV that long to iterate thru their debug and verification loops. The leakage on 40nm could be so bad that NV has no motivation to make it go faster if they are waiting on TSMC to implement leakage fixes. It could be true that DX11 overhead consumes half the xtor budget of the GT300, so it really is nothing more than GT200+DX11. Nothing he says is 100% refutable; it's all plausible.

But is it probable? NV's challenge with GT300 is so complex, and there are so many factors involved, any of which could be critically gating, that Charlie's timeline could end up being absolutely right for absolutely all the wrong reasons.
 

OCGuy

Lifer
Jul 12, 2000
27,227
36
91
Originally posted by: Chaotic42
Now ATI just needs to get their drivers straight and we might be okay.

I don't think GT300 coming out a couple months behind 5870 would really matter much if the performance is what some of the rumors are throwing out there.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: OCguy
Originally posted by: Chaotic42
Now ATI just needs to get their drivers straight and we might be okay.

I don't think GT300 coming out a couple months behind 5870 would really matter much if the performance is what some of the rumors are throwing out there.

I hope regardless when GT300 and 5870 come out that both companies price them reasonably...meaning reasonable for them so they actually make some money and get back on track to being sustainable businesses again.

It's nice for us in the short-term when they lose money as that means we got to buy their product below cost, but in the end that just hurts us down the road as it short-shrifts their R&D teams today resulting in less stellar products for us to buy tomorrow.

Keep the performance/$ going up and up guys, but no repeats of the 4870 price-point situation please. I want to be able to buy a 9870 or GT700 someday too and that ain't going to happen with AMD and NV pricing themselves such that NV loses $200m a quarter and AMD makes $1m on graphics a quarter. Non-sustainable.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
Originally posted by: Idontcare
Originally posted by: OCguy
Originally posted by: Chaotic42
Now ATI just needs to get their drivers straight and we might be okay.

I don't think GT300 coming out a couple months behind 5870 would really matter much if the performance is what some of the rumors are throwing out there.

I hope regardless when GT300 and 5870 come out that both companies price them reasonably...meaning reasonable for them so they actually make some money and get back on track to being sustainable businesses again.

It's nice for us in the short-term when they lose money as that means we got to buy their product below cost, but in the end that just hurts us down the road as it short-shrifts their R&D teams today resulting in less stellar products for us to buy tomorrow.

Keep the performance/$ going up and up guys, but no repeats of the 4870 price-point situation please. I want to be able to buy a 9870 or GT700 someday too and that ain't going to happen with AMD and NV pricing themselves such that NV loses $200m a quarter and AMD makes $1m on graphics a quarter. Non-sustainable.

IDC, I don't suppose you have any insider info, or could give us an educated guess, on the cost to build some of the current cards?

When the 8800GT was launched it was usually around $200 (a little more if I remember right), and I have no doubt Nvidia made a ton of money on them. The 4870 has a similar-sized GPU, uses a 256-bit memory connection, and with both being considered mid-to-lower parts of the high end, I would imagine they use similar-quality components.

Obviously they aren't apples to apples: they are built on different processes, the 4870 uses GDDR5 memory, and I'll assume that their two-slot cooler might cost a bit more. But I doubt the launch price they set wasn't making AMD a lot of cash. I don't know how much profit a 4870 makes now at its average selling price, but I still think that AMD can hold better profit margins in a price war than Nvidia.

But AMD found out that Nvidia will do whatever they have to to keep their market share, so who knows, maybe they'll decide against going for the cheaper-to-produce card for volume. I just hope we aren't seeing $650 cards that don't really justify their price next round.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Idontcare


I hope regardless when GT300 and 5870 come out that both companies price them reasonably...meaning reasonable for them so they actually make some money and get back on track to being sustainable businesses again.

They better because I'm sure there are many people like me who will be just as happy to keep the card they already have.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: Idontcare


Keep the performance/$ going up and up guys, but no repeats of the 4870 price-point situation please. I want to be able to buy a 9870 or GT700 someday too and that ain't going to happen with AMD and NV pricing themselves such that NV loses $200m a quarter and AMD makes $1m on graphics a quarter. Non-sustainable.

Do you think Nvidia having CUDA implemented in its hardware is significantly affecting their performance/$ ratio? (with respect to Games)

If this is true....then I hope someone is able to write some legitimately useful non-gaming software for CUDA.
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Originally posted by: Idontcare
This is really like three articles rolled up into one.

First let me point out the obvious:
If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 *65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area.

This is simply an embarrassment. I'm embarrassed for Charlie because he doesn't even know why he should feel embarrassed for publishing this "do the math".

Node labels are simply nothing more than that. They could call it the Apple node, the Pear node, and the Banana node for all the node label means anymore. A "55nm" node has nothing to do with the number "55" or the units "nm". Go to Home Depot and buy yourself a 2x4 and measure it; does it measure 2 inches by 4 inches? No, the label "2x4" is not meant to mean anything mathematical or numeric beyond the basic logic that "a 2x4 is smaller than a 4x4 is smaller than a 4x6" etc.

Umm, a 2x4 is two inches by four inches, or replace inches with units. All of the die sizes listed above serve as an average number.
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Originally posted by: Just learning
Originally posted by: Idontcare


Keep the performance/$ going up and up guys, but no repeats of the 4870 price-point situation please. I want to be able to buy a 9870 or GT700 someday too and that ain't going to happen with AMD and NV pricing themselves such that NV loses $200m a quarter and AMD makes $1m on graphics a quarter. Non-sustainable.

Do you think Nvidia having CUDA implemented in its hardware is significantly affecting their performance/$ ratio? (with respect to Games)

If this is true....then I hope someone is able to write some legitimately useful non-gaming software for CUDA.

The problem is that, in terms of games, PhysX adoption will remain low until consoles receive a license for the technology, which means less profit or more cost to the user.

Dunno about the cool factor. I am sure they could provide some software so you can make your own PhysX levels. Outside of that it depends on the developers.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Zstream
Originally posted by: Idontcare
This is really like three articles rolled up into one.

First let me point out the obvious:
If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 *65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area.

This is simply an embarrassment. I'm embarrassed for Charlie because he doesn't even know why he should feel embarrassed for publishing this "do the math".

Node labels are simply nothing more than that. They could call it the Apple node, the Pear node, and the Banana node for all the node label means anymore. A "55nm" node has nothing to do with the number "55" or the units "nm". Go to Home Depot and buy yourself a 2x4 and measure it; does it measure 2 inches by 4 inches? No, the label "2x4" is not meant to mean anything mathematical or numeric beyond the basic logic that "a 2x4 is smaller than a 4x4 is smaller than a 4x6" etc.

Umm, a 2x4 is two inches by four inches, or replace inches with units. All of the die sizes listed above serve as an average number.

No, Zstream. A standard 2x4 (in the continental United States at least) is actually 1 1/2 to 1 5/8 inches by 3 1/2 to 3 5/8 inches, depending on the mill. No 2x4 is 2 inches by 4 inches, unless you go back to the late 1800s or early 1900s.

Yes, I was a carpenter. And yes, this is what Idontcare was referring to with his analogy.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Heh, started ripping the post apart from the top; I see IDC took care of the fab end of it better than I could have, so I'll grab the DX side ;)

Nvidia Investor Relations tells people that between five and ten per cent of the GT200 die area is dedicated to GPU compute tasks. The GT300 goes way farther here, but let's be charitable and call it 10 per cent. This puts Nvidia at a 10 per cent areal disadvantage to ATI on the DX11 front

This simply shows Charlie is a flat-out moron, and he didn't leave himself an escape route of any sort on this one. CS is a DX11 requirement. ATi's DX11 part will have more transistors dedicated to GPGPU-style functionality than nV's GT200 or it won't be DX11. This is spelled out explicitly, and actually in easy-to-understand layman's terms, I thought. Perhaps where he got confused was in what level of complexity CS actually requires for DX11 parts? As big a fan as I am of the potential for GPGPU functionality, CS goes way too far this soon in the game IMO, and adds way too much complexity given its potential payoff. It will help DX11 stay viable for a rather extensive period of time (by computing standards of course), but it is too much, too soon for either nVidia or ATi.

In the R870, if you compare the time it takes to render one million triangles tessellated up from 250K against running those same one million triangles through without the tessellator, the tessellated path will only take a bit longer. Tessellation takes no shader time, so other than latency and bandwidth, there is essentially zero cost. If ATI implemented things right, and remember, this is generation four of the technology, things should be almost transparent.

Contrast that with the GT300 approach. There is no dedicated tessellator, and if you use that DX11 feature, it will take large amounts of shader time, used inefficiently as is the case with general-purpose hardware. You will then need the same shaders again to render the triangles. Going from 250K to one million triangles on the GT300 should be notably slower than rendering a straight one million triangles.

ATi has already explicitly stated they cannot use their tessellator for DX11 as it fails to hit the spec. That isn't nV spin doctors; that was just ATi being honest. Second part: he clearly doesn't have a clue on CS, as the way he explains things will need to be done is as if CS weren't going to be part of nV's DX11 part, which, if that were the case, would mean it wasn't a DX11 part to start with.

The same should hold true for all DX11 features, ATI has dedicated hardware where applicable, Nvidia has general purpose shaders roped into doing things far less efficiently.

Rather amusing, this bit: what DX11 features are ATi doing with dedicated hardware? Outside of the tessellator (and even that has caveats), how many DX functions are still allowed to use dedicated hardware? Even if what he were saying were true (it isn't, I'm not giving myself an out ;) ), then in cases where you were not using the tessellator, nV would be at a decisive advantage based on what he is saying. He used this same argument in reverse when ATi went with unified shaders over dedicated, which nV stuck with at the time. I'm not saying one or the other is better, they have trade-offs as they always have, but given the requirements of DX11, having as much unified as possible is the most logical way to go.

Meh, I realized after rereading some of this that I needed to provide at least one link. Anyone who understands anything about software/hardware development should instantly recognize what a HUGE step towards pure GPGPU architectures CS is; cross-thread data sharing and OoO I/O ops, between the two of them and without considering everything else, are a staggering increase in complexity in GPGPU terms compared with what the GT200 has.

Summation: this is very much a Charlie article. Some of the things he claims come from insider knowledge and we can't prove those one way or the other. He makes a big mistake in talking about a bunch of other things that we can very easily shred to bits. Do I by default discount every bit of 'insider information' he has? Absolutely. CS isn't insider info by any means; my link is a publicly available slide show from Siggraph, and the man can't seem to comprehend what it says. Why on Earth would I think he is capable of understanding any insider information that he may have access to?
 

Creig

Diamond Member
Oct 9, 1999
5,171
13
81
Originally posted by: BenSkywalker
ATi has already explicitly stated they can not use their tesselator for DX11 as it fails to hit the spec. That isn't nV spin doctors, that was just ATi being honest.
While the current AMD tessellator found in the 2000, 3000 and 4000 cards may not be fully DX11 compatible, I was under the impression that they will be able to perform the majority of the functions of a full DX11 tessellator. Or is that not true?

Originally posted by: BenSkywalker
Do I by default discount every bit of 'insider information' he has? Absolutely.
While it's true that a lot of his articles are nothing but anti-Nvidia blather, he isn't always wrong. He seems to have hit pretty close to the mark with his insider info regarding the failing laptops with Nvidia GPUs.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
When people start sticking up for Charlie, it's time to start ducking those flying pigs. I know that this has been said a billion times, but even a broken clock is right twice per day.
Creig, how are those 9 Inspiron 630s doing?
 

Creig

Diamond Member
Oct 9, 1999
5,171
13
81
Originally posted by: Keysplayr
When people start sticking up for Charlie, it's time to start ducking those flying pigs. I know that this has been said a billion times, but even a broken clock is right twice per day.
Regardless of whether it was by design or by luck, he was still pretty much correct. It's always best to read the info and decide for yourself if it's plausible rather than to dismiss it out of hand. As we saw from the info released by Nvidia's insurance company, the problem is even more widespread than Nvidia has admitted to.

Originally posted by: Keysplayr
Creig, how are those 9 Inspiron 630s doing?
Actually, I just checked my inventory and it's only eight D630s, two D620s, and about eleventy billion D610s. And so far, the D630s are okay. But I'm going to have them in my inventory for a few years to come. The D610s are entering the end of their service life and will be replaced by the D630s and D620s as we start to order new E series laptops. So I'll have to keep an eye on them for signs of failure for probably at least another 3-4 years. :frown: