
Nvidia reveals Specifications of GT300

Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300 I should be able to buy a 295gtx for less than $100 less than a year from now.

Errr......probably not.
 
Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300 I should be able to buy a 295gtx for less than $100 less than a year from now.

These smaller process innovations really help, don't they?

I mean look at how much it costs today to duplicate the computing performance 110nm used to give us?
 
Originally posted by: BenSkywalker
In spite of the GT200 not being supply limited at any point, which the 4870 was, in spite of nV reporting margins significantly higher than ATi for their GPUs, and in spite of the fact that nV has had no issues moving their prices, you think that is truly the case?

Mainly because the demand for the $299 HD4870 512MB was exceeding that of the GTX260/GTX280 priced at $449/$649 respectively, I doubt supply was the problem here. It's only after several months that price cuts took the GTX series down low enough for consumers to actually compare the two competing products.

Reporting significantly higher margins? I doubt this unless you know the very specific details regarding production costs for these cards. Or are you referring to the financial performance figures, in which case I can safely say GT200 made little to no impact on those profit figures.

nVIDIA has a much deeper pocket than AMD, and this is a fact. Sure, they fumbled with the pricing and whatnot, i.e. had to cut into their margins, but when you have been on a record-breaking quarter-to-quarter run for a long time, I doubt this one fumble hurt them as a whole, although no one likes to see losses.

I wouldn't quite say irrelevant, as nV's design choice did offer superior performance/watt, which is somewhat relevant for consumers. Yields: all the evidence we have would indicate they were rather strong for nV. Chips that failed to yield perfectly were sold as 260s; indicators point to the 192sp 260s being a bit too conservative, and their yields were better than expected, bringing us the 216. People seriously overestimate the costs of a larger die unless it impacts the ability to fill orders, which didn't seem to affect nV this generation at all. nV chose to price their higher end parts in the ~$500 range, which has been the norm for a long time now. The x800xt pe and 6800U were over $600 MSRP, so it isn't like they overpriced relative to the norm; ATi just undercut in an obvious focus on gaining marketshare (which is a VERY valid approach, not knocking them at all).

I agree with some of the points, but the performance/watt figure is somewhat vague. nVIDIA had superior idle power consumption, but when it came to load, the GT200 would suck a lot more power for the performance it gave.

When we talk about yields, it's hard to know what those figures were for a full-fledged GT200 chip initially. GTX280s weren't so hot, especially when the previous 9800GX2s were performing a little faster and priced similarly. But I think they had a lot of chips that failed to reach GTX280 specs but were more capable than the GTX260 specs.

The cost of a 300mm wafer is in the ballpark of $5000~6000, with variance of course depending on the process technology and whatnot. A larger die means fewer chips produced per wafer (and not all of these meet the specs). When these are looked at from a volume production perspective, you want to use as few wafers as possible, and hence most companies try to design chips that perform up to spec while trying to achieve an optimal die size. Coming back to nVIDIA, I guess the 65nm process might have been cheaper than the 55nm process technology, but both of these processes had been available for some time, so the cost difference wouldn't have been so large.
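
To put very rough numbers on that (back-of-the-envelope only; the wafer price, die areas and yield below are assumptions for illustration, not vendor data):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Gross dies per wafer: wafer area / die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

wafer_cost = 5500        # assumed mid-point of the $5000~6000 ballpark above
assumed_yield = 0.6      # purely illustrative, not a real yield figure
for name, area_mm2 in [("~576mm^2 die (GT200-class, 65nm)", 576),
                       ("~256mm^2 die (RV770-class, 55nm)", 256)]:
    gross = dies_per_wafer(area_mm2)
    good = int(gross * assumed_yield)
    print(f"{name}: ~{gross} gross dies, ~{good} good dies, "
          f"~${wafer_cost / good:.0f} per good die")
```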

They beat ATi. The timeline you laid out has a 1Q variation across four years from any of the other launches, not exactly a big difference.

We already know that nV is planning on a 40nm GT2xx part prior to moving to GT3xx, which will also be 40nm. (Some don't think so apparently 😛 hence my response to them.)

I guess we could look at it a different way: if nV had built the GT200 on 55nm from the start as ATi did, then given the thermals we had coming out of TSMC at that time, it would have had to be clocked around 400MHz and would have been destroyed by the 4870. It would have been the NV30 all over again.

Q1 means 4 months. If I specify the exact launch dates, GT200 has been delayed ~7 months in comparison to other launch timeframes. It's a big difference, because time is money and you should know this.

What makes you think that the GT200 on 55nm would have resulted in another NV30 fiasco? I don't know what thermal issues you are referring to (maybe the overheating issues many people faced early in the GT200's life?). I actually think it would have been the opposite, unless they were initially having problems with the optical shrink as well.

Think we've gone too OT though Ben. 😀
 
Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300 I should be able to buy a 295gtx for less than $100 less than a year from now.

Except that you probably wouldn't want a gtx295 in a year. How many people thought buying a 9800gx2 was a good idea when the gtx280 launched?
 
Originally posted by: munky
Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300 I should be able to buy a 295gtx for less than $100 less than a year from now.

Except that you probably wouldn't want a gtx295 in a year. How many people thought buying a 9800gx2 was a good idea when the gtx280 launched?

If this GT300 is real, you are right, because it will stomp the GTX 295 while being a lot cheaper to produce.

I can only imagine the power and efficiency of something with 512 stream processors on 40nm and a true 2GB (possibly), compared to 480 stream processors on 55nm and 1GB of "effective" memory.

 
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it can't all be maxed out; it still looks better than any other game even when it isn't maxed. Everyone just keeps complaining though, so we'll end up with Call of Duty-style games full of invisible walls and scripted events just to make sure you can max out and run at 100fps+.
 
Originally posted by: Beanie46
And no one is mentioning the fact that the author of the article, Theo Valich, has about as much credibility as CNBC does regarding the stock market?

But is it possible that this might be Nvidia's worst possible move, taking such a big risk making such a radical architecture? If it doesn't pan out, they'll be completely screwed...but then again they could rename gt200 to cover it up.

It also seems they are focusing too much on the general purpose part of their GPUs, which is not always beneficial to game performance. And game performance will still be the most important metric by which a GPU is judged, not GPGPU performance.

Are you SNiiPE_DoGG from XS?

Originally posted by: SNiiPE_DoGG
now that I read ^^^ that I think Nvidia's worst possible move is to take a risk making such a radical architecture. If it doesnt work out they will be completely screwed - but then again they could rename gt200 to cover it up....

http://www.xtremesystems.org/f...p=3750158&postcount=73

Erm, nevermind, I guess you are Hellmore?

Originally posted by: Hellmore
Yeah, it seems they are focusing to much on the general purpose part of their GPUs, which is not always beneficial to game performance and game performance will still be the most important metric by which a GPU is judged. Not GPGPU performance.

http://www.xtremesystems.org/f...p=3750165&postcount=74
 
If it's a 50% size reduction, and the GT200 is already iffy-big at 55nm, I think it's possible a 50% speed increase would actually be pretty impressive. It probably won't beat the 295 in benchmarking, but being a single GPU, I think it's as good as being king.
 
Originally posted by: Astrallite
If it's a 50% size reduction, and the GT200 is already iffy-big at 55nm, I think it's possible a 50% speed increase would actually be pretty impressive. It probably won't beat the 295 in benchmarking, but being a single GPU, I think it's as good as being king.

If GT300 is 50% smaller, doesn't that mean it could be 100% faster?

50% smaller means twice as many processors could fit on the die, right?
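
The naive arithmetic behind that intuition looks like this (a sketch of ideal 55nm-to-40nm scaling only; as the reply below points out, a redesigned "core" may be a very different size anyway):

```python
# Ideal area scaling from a 55nm to a 40nm process: each linear dimension
# shrinks by 40/55, so area shrinks by (40/55)^2.
area_scale = (40 / 55) ** 2      # ~0.53, i.e. a ~47% area reduction
density_gain = 1 / area_scale    # ~1.9x more transistors in the same area
print(f"ideal shrink: {area_scale:.2f}x area, {density_gain:.1f}x density")
# In practice SRAM, analog and I/O don't scale ideally, so the real gain is lower.
```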

 
Guys, there is no data on the size of the GT300 die. None. You really can't use the GT200 as a reference if the rumor is true that the GT300 will be a new architecture. For example, a "shader processor" on a GT300 might be significantly smaller or larger than a "shader processor" on a GT200. You can't really use GT200 as a reference to predict what size the GT300 will end up as. Sorry.
 
You can't? No, maybe not just by itself, but looking at the trend Nvidia has been following for the past decade, I'm willing to bet the GT300 will end up equal to or bigger than the GT200 in die size. And shaders, MIMD units, that can do more but take less space, that would be pretty revolutionary. Extrapolating is very common and widely used, and although it's not always spot on, it gives an idea of what's to come. GT300 >= GT200.
 
Originally posted by: SunnyD
Originally posted by: OCguy
Wow...that could be an amazing chip. :Q

Amazingly HUGE and HOT and POWER HUNGRY... yeah. Oh yeah, also amazingly EXPENSIVE too.

You can't get good performance for nothing, you know. Are there any powerful video cards that aren't hot and power hungry these days? It will probably be no hotter than the cards we have available today, maybe a bit more power hungry though. Manufacturers don't seem to have figured out how to deliver more powerful cards that are less power hungry. But isn't that physics for you? More operations performed = more power needed?

At any rate, it will be interesting to see the performance these cards give.
 
So even if the most powerful GT300 SKU is ~3 TFLOPS, it's not really a match for the 5870X2, which should debut around the same time or before, clocking in at 4.6 TFLOPS.

Just like nVidia's GT300 architecture, the actual RV870 chip is manufactured in TSMC's 40nm half-node process, packing more transistors than GT200 chips. Regardless of what ATI says about nVidia and large dies, the fact of the matter is that ATI is making a large die as well - but the company will continue to use the dual-GPU approach to reach high-end performance.

The RV870 chip should feature 1200 cores, divided into 12 SIMD groups with 100 cores each [20 "5D" units], while RV770 was based on 10 SIMD groups with 80 cores each [16 "5D" units, each consisting of one "fat" and four simpler ALUs]. Thus, it is logical to conclude that when it comes to execution cores, not much happened architecturally - ATI's engineers increased the number of registers and handled other demanding architectural tasks in order to comply with Shader Model 5.0 and DirectX 11 Compute Shaders. The core is surrounded by 48 texture memory units, meaning ATI is continuing to increase the ROP:core:TMU ratio. For the first time, ATI is shipping a part with 32 ROP [Rasterizing OPeration] units, meaning the chip is able to output 32 pixels in a single clock.

When it comes to products, ATI plans to launch four parts: the Radeon HD 5850 and 5850X2 in the more affordable pricing bracket, and the HD5870 and HD5870X2 as the high-end parts. While there were no clocks for the Radeon HD 5850/5850X2 parts, alleged clocks for the HD5870 and HD5870X2 reveal that for the first time, an X2 part is clocked higher than a single-GPU part. Whether this was a requirement of the SidePort memory interface, we are not aware at the moment. German site Hardware-Infos placed all of the data in a very convenient table, which we are running here with permission. Their story also contains more data about the upcoming ATI RV870 architecture.
ATI 4870 vs 5870 table...courtesy of Hardware-Info

http://www.brightsideofnews.co.../ATI_5870Specs_550.jpg

These units should result in 2.16 TFLOPS for the HD5870 and 4.56 TFLOPS for the dual-GPU part. Yes, you've read correctly - we are going from a 1TFLOPS chip to 4.6TFLOPS within 13 months. Is it now clear that CPUs are at a standstill when it comes to performance improvements? There is no doubt that ATI pulled another miracle out of their hat with brilliant on-time execution, releasing a 40nm part that will be relatively cheap to manufacture. BUT - the biggest question is, can it beat nVidia's GT300, and by how much?
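
Those peak numbers are just ALU count x 2 FLOPs (one MAD) per clock x clock speed; working backwards from the quoted TFLOPS gives the implied clocks (a quick sketch using only the figures above):

```python
def sp_tflops(alus, clock_ghz, flops_per_clock=2):  # each ATI ALU does one MAD = 2 FLOPs/clock
    return alus * flops_per_clock * clock_ghz / 1000

print(sp_tflops(1200, 0.900))      # ~2.16 TFLOPS -> the quoted HD5870 figure implies ~900 MHz
print(sp_tflops(2 * 1200, 0.950))  # ~4.56 TFLOPS -> the quoted X2 figure implies ~950 MHz per GPU
```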

The GT300 architecture groups processing cores in sets of 32 - up from 24 in the GT200 architecture. But the difference between the two is that GT300 parts ways with the SIMD architecture that dominates GPU design today. GT300 cores rely on MIMD-like functions [Multiple Instruction, Multiple Data] - all the units work in MPMD mode, executing simple and complex shader and computing operations on the go. We're not exactly sure whether we should continue to use the words "shader processor" or "shader core," as these units are now almost on equal terms with the FPUs inside the latest AMD and Intel CPUs.

GT300 itself packs 16 groups of 32 cores - yes, we're talking about 512 cores for the high-end part. This number alone raises the computing power of GT300 by more than 2x when compared to the GT200 core. Before the chip tapes out, there is no way anybody can predict working clocks, but if the clocks remain the same as on GT200, we would have over double the amount of computing power.
If, for instance, nVidia gets a 2 GHz clock for the 512 MIMD cores, we are talking about no less than 3 TFLOPS in single precision. Double precision is highly dependent on how efficient the MIMD-like units will be, but you can count on a 6-15x improvement over GT200.
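
The 3 TFLOPS figure follows from counting 3 FLOPs (MAD + MUL) per core per shader clock, the same way GT200's peak is usually quoted (a sketch; whether the MIMD cores keep that dual-issue is an assumption):

```python
def nv_sp_gflops(cores, shader_clock_ghz, flops_per_clock=3):  # MAD + MUL per core, as counted for GT200
    return cores * flops_per_clock * shader_clock_ghz

print(nv_sp_gflops(240, 1.296))   # GTX 280: ~933 GFLOPS
print(nv_sp_gflops(512, 1.296))   # 512 cores at GT200 clocks: ~1990 GFLOPS, a bit over 2x
print(nv_sp_gflops(512, 2.0))     # 512 cores at a speculative 2 GHz: ~3072 GFLOPS, the "3 TFLOPS" figure
```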

Edit:

After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q
 
Originally posted by: jaredpace
After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q

Considering that ATi needs roughly triple the number of shaders to remain competitive with the nVidia counterpart that has fewer shaders (the GTX 260's 216 shaders vs the HD 4870's 800 shaders), it would mean that if the nVidia GT300 has 512 shaders (which I find unlikely; I believe it will have 480 shader processors), it would still be competitive against the 1200 of the ATi card, which keeps almost the same ratio that the HD 4870 vs GTX 2x0 series currently holds.
 
Originally posted by: MarcVenice
You can't? No, maybe not just by itself, but looking at the trend Nvidia has been following for the past decade, I'm willing to bet the GT300 will end up equal to or bigger than the GT200 in die size. And shaders, MIMD units, that can do more but take less space, that would be pretty revolutionary. Extrapolating is very common and widely used, and although it's not always spot on, it gives an idea of what's to come. GT300 >= GT200.

That's really nice and all, Marc, but sorry, you just can't know. You tell me how you think MIMD will take up more space and I'll concede. Do you know much about the transistor architecture for MIMD cores? I sure don't. I don't know how it differentiates itself from the current shader architecture, do you? Nah. If GT300 were based on the same architecture as GT200, and they were adding an additional 272 shader processors, then I'd say you're right on the money with your extrapolations. But according to rumor, and that's all it is right now, a rumor, the architecture is different, and who knows how much it actually has in common with GT200? Could be a lot, could be a little. Could be an entire rework, memory controller, ROPs and all.

So no, Marc. You can't.

 
Originally posted by: evolucion8
Originally posted by: jaredpace
After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q

Considering that ATi needs roughly triple the number of shaders to remain competitive with the nVidia counterpart that has fewer shaders (the GTX 260's 216 shaders vs the HD 4870's 800 shaders), it would mean that if the nVidia GT300 has 512 shaders (which I find unlikely; I believe it will have 480 shader processors), it would still be competitive against the 1200 of the ATi card, which keeps almost the same ratio that the HD 4870 vs GTX 2x0 series currently holds.

Yes indeed. But only IF the architecture of GT300 were the same as GT200's, and that number would be 1600 on the ATI side, not 1200, if you were to double the shaders on each. Because you'd still have to double the shaders for the ATI card to be able to do twice as much as it does now. And besides, look at the GTS250 (128 shaders) vs. the 4850 (800 shaders). Your ratio kind of melts away there.
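
For what it's worth, here are the ratios being argued about, computed directly (the GT300/RV870 counts are rumored, the rest are shipping parts):

```python
pairs = {
    "GTX 260-216 vs HD 4870": (216, 800),
    "GTS 250 vs HD 4850": (128, 800),
    "GT300 vs RV870 (rumored counts)": (512, 1200),
}
for name, (nv, ati) in pairs.items():
    print(f"{name}: {nv}/{ati} = {nv / ati:.2f}")
# 0.27, 0.16, 0.43 -- the NV:ATI shader ratio is nowhere near constant, so raw
# shader counts across the two architectures don't translate into relative performance.
```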

 
Originally posted by: Keysplayr
Originally posted by: MarcVenice
You can't? No, maybe not just by itself, but looking at the trend Nvidia has been following for the past decade, I'm willing to bet the GT300 will end up equal to or bigger than the GT200 in die size. And shaders, MIMD units, that can do more but take less space, that would be pretty revolutionary. Extrapolating is very common and widely used, and although it's not always spot on, it gives an idea of what's to come. GT300 >= GT200.

That's really nice and all, Marc, but sorry, you just can't know. You tell me how you think MIMD will take up more space and I'll concede. Do you know much about the transistor architecture for MIMD cores? I sure don't. I don't know how it differentiates itself from the current shader architecture, do you? Nah. If GT300 were based on the same architecture as GT200, and they were adding an additional 272 shader processors, then I'd say you're right on the money with your extrapolations. But according to rumor, and that's all it is right now, a rumor, the architecture is different, and who knows how much it actually has in common with GT200? Could be a lot, could be a little. Could be an entire rework, memory controller, ROPs and all.

So no, Marc. You can't.

I think the same thing every time I read a "Larrabee will suck because..." post.

The people who know, can't post. The people who post, don't know.

This is true of R870, GT300, and Larrabee.

I personally prefer the sort of "if...then..." gedanken debates on these matters where all sides concede the point that a priori they are debating over the merits of fictitious products that only exist in (as envisioned and characterized by) the minds of the posters.

Once framed properly, an "If...Then..." speculative conversation can be enjoyable as well as create some reasonable logic surrounding the boundary conditions of what the architectures probably won't be capable of accomplishing.

For instance Larrabee isn't likely to reach 30GHz on 45nm or 32nm process tech, as Benskywalker pointed out, so IF 30GHz is necessary for Larrabee to reach the performance needed to compete with GT300 THEN we can agree to conclude it ain't gonna happen.

Same here, some can safely argue about the ramifications of another 600 mm^2 GT300 die in 40nm process space...they should not be derided for speculating, but at the same time no one should fool themselves or attempt to convince others that the GT300 simply MUST be a 600 mm^2 die to pack the punch some foresee it packing.
 
Originally posted by: jaredpace

After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q

Hahaha, that would be the biggest boost in performance a new generation of cards has ever brought over the last one. It is highly unlikely that GT300 will be 6-15x faster than GT200. 2x faster, most probably; 6-15x, fantasy land. 🙂
 
Mainly because the demand for the $299 HD4870 512MB was exceeding that of the GTX260/GTX280 priced at $449/$649 respectively, I doubt supply was the problem here.

They were supply limited, so supply was clearly the problem. You always adjust for demand in a capitalist market, it's just the way things work. If they were constantly under the demand for their parts, they should have priced them higher. You can't gain marketshare if you can't build enough parts to start with.

Or are you referring to the financial performance figures, in which case I can safely say GT200 made little to no impact on those profit figures.

I am talking about the financials, nV breaks down their margins per segment, and you are absolutely right, the GT2xx cores had almost no impact on those margins. That is the entirety of my point.

I agree with some of the points, but the performance/watt figure is somewhat vague. nVIDIA had superior idle power consumption, but when it came to load, the GT200 would suck a lot more power for the performance it gave.

Not exactly. An OC'd core 216 uses less power under load than a 512MB 4870, despite the latter's process advantage. Performance per watt was rather heavily in nV's favor for most of this generation. I'd say the 4890 improved this significantly, but obviously the process has been refined considerably in the time between launch and now.

When we talk about yields, it's hard to know what those figures were for a full-fledged GT200 chip initially. GTX280s weren't so hot, especially when the previous 9800GX2s were performing a little faster and priced similarly. But I think they had a lot of chips that failed to reach GTX280 specs but were more capable than the GTX260 specs.

And that is the payoff of their design philosophy. The odds of them getting chips they can't sell are significantly smaller due to the monolithic nature and their use of downscaled cores for their mid-tier offerings. On the downside, peak profit potential is reduced, as building mid-tier offerings directly would improve margins on them a decent amount.

The cost of a 300mm wafer is in the ballpark of $5000~6000, with variance of course depending on the process technology and whatnot.

Perhaps we should look at this a bit differently then. Is yielding a 600mm² chip going to be more expensive than yielding two 300mm² chips and doubling the amount of RAM? That is the real comparison when weighing the monolithic approach against AMD's. I'm not saying either is right or wrong; I haven't seen convincing evidence, or anything approaching it really, from either side.
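
One way to put numbers on that question is a simple defect-density yield model (a Poisson-model sketch; the defect density and die areas are illustrative assumptions, not data from either vendor):

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of dies with zero random defects under a simple Poisson model."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

d0 = 0.4                        # assumed defects per cm^2; pick your own number
big_die, small_die = 6.0, 3.0   # ~600mm^2 monolithic vs ~300mm^2 die, in cm^2

y_big = poisson_yield(big_die, d0)
y_small = poisson_yield(small_die, d0)
print(f"600mm^2 die yield ~{y_big:.0%}, 300mm^2 die yield ~{y_small:.0%}")
# The big die yields worse per attempt, but the dual-GPU board needs two good
# dies plus double the RAM, and the monolithic design can salvage partial dies
# as cut-down SKUs -- so neither approach is obviously cheaper without real
# defect-density and BOM numbers.
```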

Q1 means 4 months.

WTF? Errr, I live on Earth. On Earth we have 12 months in a year. A quarter is 1/4. 12/4= 3 😀

If I specify the exact launch dates, GT200 has been delayed ~7 months in comparison to other launch timeframes.

8/11/99- 3/22/01, 3/22/01- 1/27/03, 1/27/03-4/14/04, 4/14/04-6/22/05, 6/22/05-11/08/06, 11/08/06- 6/16/08

19 months, 21 months, 15 months, 14 months, 17 months, 19 months

Those are the GeForce core timeframes. Yes, I looked them all up by exact date. So, what are you saying about being 7 months late? It doesn't make sense to me.

What makes you think that the GT200 on 55nm would have resulted in another NV30 fiasco?

The link I posted above has power and thermals; if the GT2x0 parts had ended up anything like the 4x00 parts, nVidia would have been dead.

Think we've gone too OT though Ben.

This is a speculation thread as it is, and we are certainly discussing design philosophy. I see our discussion as being on target quite honestly 🙂

I think the same thing every time I read a "Larrabee will suck because..." post.

Quite different when you are talking about software emulation versus a DSP.
 
Originally posted by: error8
Originally posted by: jaredpace

After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q

Hahaha, that would be the biggest boost in performance a new generation of cards has ever brought over the last one. It is highly unlikely that GT300 will be 6-15x faster than GT200. 2x faster, most probably; 6-15x, fantasy land. 🙂

It should beat it if it's 2x faster anyway. Still, they wouldn't be making huge changes if they didn't bring some benefit, right? 6x may be a bit high, but more than 2x may not be unrealistic.
 
Originally posted by: WaitingForNehalem
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it can't all be maxed out; it still looks better than any other game even when it isn't maxed. Everyone just keeps complaining though, so we'll end up with Call of Duty-style games full of invisible walls and scripted events just to make sure you can max out and run at 100fps+.

Visuals don't make a game great. The gameplay was boring, with absolutely no cinematic feel like COD4, or non-linear progression like Stalker or Oblivion.
 
Originally posted by: BenSkywalker
WTF? Errr, I live on Earth. On Earth we have 12 months in a year. A quarter is 1/4. 12/4= 3 😀

Er, I'm sorry Ben, but human beings tend to make mistakes once in a while! Guess some of us are just machines 😛

edit - When they say 6x-15x faster, they mean DP performance.
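
A rough sanity check of how a 6-15x DP jump could happen (GT200's DP figures are the known part; the GT300 DP rate below is pure assumption):

```python
# GT200 (GTX 280): 30 dedicated FP64 units (one per SM), FMA = 2 FLOPs,
# 1.296 GHz shader clock -> ~78 GFLOPS double precision.
gt200_dp_gflops = 30 * 2 * 1.296

# Hypothetical GT300: 512 cores at GT200-like clocks, with DP running at some
# fraction of the single-precision issue rate (the fraction is pure assumption).
for dp_rate in (0.25, 0.5, 1.0):
    gt300_dp_gflops = 512 * dp_rate * 2 * 1.296
    print(f"DP at {dp_rate:.0%} of SP rate: ~{gt300_dp_gflops:.0f} GFLOPS, "
          f"~{gt300_dp_gflops / gt200_dp_gflops:.1f}x GT200")
# A quoted 6-15x range would sit between roughly 1/3 and full SP issue rate
# under these assumptions.
```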
 
Originally posted by: munky
Originally posted by: WaitingForNehalem
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it can't all be maxed out; it still looks better than any other game even when it isn't maxed. Everyone just keeps complaining though, so we'll end up with Call of Duty-style games full of invisible walls and scripted events just to make sure you can max out and run at 100fps+.

Visuals don't make a game great. The gameplay was boring, with absolutely no cinematic feel like COD4, or non-linear progression like Stalker or Oblivion.

Are you kidding me? The whole game had a cinematic feel. Linear progression? This is an FPS, not an RPG, as mentioned. BTW, STALKER is a horrible game that is boring and has some of the worst hit detection I've ever seen.
 
Originally posted by: WaitingForNehalem
Originally posted by: munky
Originally posted by: WaitingForNehalem
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it can't all be maxed out; it still looks better than any other game even when it isn't maxed. Everyone just keeps complaining though, so we'll end up with Call of Duty-style games full of invisible walls and scripted events just to make sure you can max out and run at 100fps+.

Visuals don't make a game great. The gameplay was boring, with absolutely no cinematic feel like COD4, or non-linear progression like Stalker or Oblivion.

Are you kidding me? The whole game had a cinematic feel. Linear progression? This is an FPS, not an RPG, as mentioned. BTW, STALKER is a horrible game that is boring and has some of the worst hit detection I've ever seen.

Crysis looked better on low settings than most games do on high, but people couldn't just be happy with a game that looked great and played fine on medium settings, with bonus settings for future technology. It would be exactly the same as if Call of Duty 4's highest setting were made into the medium setting and a super-enhanced setting were added above it. The game wouldn't change in any way at all, but human perception is so flawed that everyone would bitch that CoD had made a game that couldn't be played on current computers, which wouldn't be true at all. In essence, people would get more, but perceive that they received less, and bitch and whine endlessly about it.
 