Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300 I should be able to buy a 295gtx for less than $100 less than a year from now.
Errr......probably not.
Originally posted by: BenSkywalker
In spite of the GT200 never being supply limited at any point, which the 4870 was, in spite of nV reporting margins significantly higher than ATi's for their GPUs, and in spite of the fact that nV has had no issues moving their prices, you think that is truly the case?
I wouldn't quite say irrelevant, as nV's design choice did offer superior performance/watt, which is somewhat relevant for consumers. Yields: all the evidence we have indicates they were rather strong for nV. Chips that failed to yield perfectly were sold as 260s, and indicators point to the 192sp 260 being a bit too conservative; yields were better than expected, bringing us the 216. People seriously overestimate the costs of a larger die unless it impacts the ability to fill orders, which didn't seem to affect nV at all this generation. nV chose to price their higher-end parts in the ~$500 range, which has been the norm for a long time now. The X800 XT PE and 6800 Ultra were over $600 MSRP, so it isn't like nV priced above the norm; ATi just undercut them in an obvious push to gain market share (which is a VERY valid approach, not knocking them at all).
They beat ATi. The timeline you laid out has a one-quarter variation across four years from any of the other launches, not exactly a big difference.
We already know that nV is planning a 40nm GT2xx part prior to moving to GT3xx, which will also be 40nm. (Some don't think so, apparently 😛, hence my response to them.)
I guess we could look at it a different way: if nV had built the GT200 on 55nm from the start as ATi did, then given the thermals coming out of TSMC at the time it would have had to be clocked around 400MHz and would have been destroyed by the 4870. It would have been the NV30 all over again.
Originally posted by: munky
Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300 I should be able to buy a 295gtx for less than $100 less than a year from now.
Except that you probably wouldn't want a gtx295 in a year. How many people thought buying a 9800gx2 was a good idea when the gtx280 launched?
Originally posted by: Beanie46
And no one is mentioning the fact that the author of the article, Theo Valich, has about as much credibility as CNBC does regarding the stock market?
Originally posted by: SNiiPE_DoGG
now that I read ^^^ that, I think Nvidia's worst possible move is to take a risk making such a radical architecture. If it doesn't work out they will be completely screwed - but then again they could rename gt200 to cover it up....
http://www.xtremesystems.org/f...p=3750158&postcount=73
Originally posted by: Hellmore
Yeah, it seems they are focusing too much on the general purpose part of their GPUs, which is not always beneficial to game performance, and game performance will still be the most important metric by which a GPU is judged. Not GPGPU performance.
http://www.xtremesystems.org/f...p=3750165&postcount=74
Originally posted by: Astrallite
If it's a 50% size reduction, and the GT200 is already iffy-big at 55nm, I think it's possible a 50% speed increase would actually be pretty impressive. It probably won't beat the 295 in benchmarks, but being a single GPU, I think it's as good as being king.
Originally posted by: SunnyD
Originally posted by: OCguy
Wow...that could be an amazing chip. :Q
Amazingly HUGE and HOT and POWER HUNGRY... yeah. Oh yeah, also amazingly EXPENSIVE too.
Just like nVidia's GT300 architecture, the actual RV870 chip is manufactured in TSMC's 40nm half-node process, packing more transistors than GT200 chips. Regardless of what ATI says about nVidia and large dies, the fact of the matter is that ATI is making a large die as well - but the company will continue to use the dual-GPU approach to reach high-end performance.
The RV870 chip should feature 1200 cores, divided into 12 SIMD groups of 100 cores each [20 "5D" units per group], while RV770 was based on 10 SIMD groups of 80 cores each [16 "5D" units, each consisting of one "fat" ALU and four simpler ones]. Thus, it is logical to conclude that when it comes to execution cores, not much happened architecturally - ATI's engineers increased the number of registers and tackled other demanding architectural tasks in order to comply with Shader Model 5.0 and DirectX 11 Compute Shaders. The core is surrounded by 48 texture memory units, meaning ATI is continuing to increase the ROP:Core:TMU ratio. For the first time, ATI is shipping a part with 32 ROP [Rasterizing OPeration] units, meaning the chip is able to output 32 pixels in a single clock.
When it comes to products, ATI plans to launch four parts: the Radeon HD 5850 and 5850X2 in the more affordable pricing bracket, and the HD5870 and HD5870X2 as the high-end parts. While there were no clocks for the Radeon HD 5850/5850X2 parts, the alleged clocks for the HD5870 and HD5870X2 reveal that, for the first time, an X2 part is clocked higher than the single-GPU part. Whether this was a requirement of the SidePort memory interface, we do not know at the moment. German site Hardware-Infos placed all of the data in a very convenient table, which we are running here with permission. Their story also contains more data about the upcoming ATI RV870 architecture.
ATI 4870 vs 5870 table...courtesy of Hardware-Infos
http://www.brightsideofnews.co.../ATI_5870Specs_550.jpg
These units should result in 2.16 TFLOPS for the HD5870 and 4.56 TFLOPS for the dual-GPU part. Yes, you've read that correctly - we are going from a 1 TFLOPS chip to a 4.6 TFLOPS part within 13 months. Is it now clear that CPUs are at a standstill when it comes to performance improvements? There is no doubt that ATI pulled another miracle out of their hat with brilliant on-time execution, releasing a 40nm part that will be relatively cheap to manufacture. BUT the biggest question remains - can it beat nVidia's GT300, and by how much?
The GT300 architecture groups processing cores in sets of 32 - up from 24 in the GT200 architecture. But the difference between the two is that GT300 parts ways with the SIMD architecture that dominates GPU design today. GT300's cores rely on MIMD-like functions [Multiple Instruction, Multiple Data] - all the units work in MPMD mode, executing simple and complex shader and computing operations on the fly. We're not exactly sure whether we should continue to use the terms "shader processor" or "shader core", as these units are now almost on equal terms with the FPUs inside the latest AMD and Intel CPUs.
GT300 itself packs 16 groups of 32 cores - yes, we're talking about 512 cores for the high-end part. This number alone raises the computing power of GT300 by more than 2x compared to the GT200 core. Before the chip tapes out, there is no way anybody can predict working clocks, but if the clocks remain the same as on GT200, we would have more than double the computing power.
If, for instance, nVidia gets a 2 GHz clock for the 512 MIMD cores, we are talking about no less than 3 TFLOPS in single precision. Double precision is highly dependent on how efficient the MIMD-like units turn out to be, but you can count on a 6-15x improvement over GT200.
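If anyone wants to sanity-check those numbers, here is some back-of-the-envelope math in Python. The core counts are the rumored ones from the article; the clocks, the flops-per-clock figures (2 per ATI ALU for a MADD, 3 per nVidia shader for MAD+MUL, as counted on G80/GT200) and the FP64 rates are my own assumptions, not confirmed specs.

```python
# Back-of-the-envelope check of the rumored RV870 and GT300 numbers.
# Core counts are rumors; clocks and flops-per-clock figures are assumptions.

def ati_sp_tflops(alus, clock_ghz):
    """Single-precision TFLOPS, assuming each ALU issues one MADD (2 flops) per clock."""
    return alus * 2 * clock_ghz / 1000.0

def nv_sp_gflops(cores, shader_clock_ghz):
    """Single-precision GFLOPS, counting 3 flops/clock per core (MAD + MUL), as quoted for G80/GT200."""
    return cores * 3 * shader_clock_ghz

# --- ATI side: rumored shader layout from the article ---
rv770_alus = 10 * 16 * 5    # 10 SIMD groups x 16 "5D" units x 5 ALUs = 800
rv870_alus = 12 * 20 * 5    # 12 SIMD groups x 20 "5D" units x 5 ALUs = 1200

print(ati_sp_tflops(rv770_alus, 0.750))      # ~1.20 TFLOPS, the actual HD4870
print(ati_sp_tflops(rv870_alus, 0.900))      # ~2.16 TFLOPS, matches the quoted HD5870 figure at a guessed 900 MHz
print(ati_sp_tflops(2 * rv870_alus, 0.950))  # ~4.56 TFLOPS, matches the quoted HD5870X2 figure at a guessed 950 MHz

# --- nVidia side: rumored 512 cores ---
gt200_sp = nv_sp_gflops(240, 1.296)          # GTX 280: ~933 GFLOPS
gt300_sp = nv_sp_gflops(512, 2.0)            # hypothetical 2 GHz shader clock: ~3072 GFLOPS, the article's "3 TFLOPS"

# GT200 double precision ran on 30 dedicated FP64 units (1 per SM, 2 flops/clock)
gt200_dp = 30 * 2 * 1.296                    # ~78 GFLOPS on the GTX 280

print(gt300_sp / gt200_sp)                   # ~3.3x in single precision
print((gt300_sp / 4) / gt200_dp)             # ~10x if GT300 did FP64 at 1/4 its SP rate (pure assumption)
print((gt300_sp / 2) / gt200_dp)             # ~20x at 1/2 rate; the "6-15x" claim sits in this general ballpark
```

Taken at face value, the rumored GT300 would land around 3x GT200 in single precision, with the really big jump coming on the double-precision side.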
Originally posted by: jaredpace
After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q
Originally posted by: MarcVenice
You can't? No, maybe not just by itself, but looking at the trend Nvidia has been following for the past decade, I'm willing to bet the GT300 will end up equal to or bigger than the GT200 in die size. And shaders, MIMD units, that can do more but take less space - that would be pretty revolutionary. Extrapolating is very common and widely used, and although not always spot on, it gives an idea of what's to come. GT300 >= GT200.
Originally posted by: evolucion8
Originally posted by: jaredpace
After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q
Considering that ATi needs roughly triple the shader count to remain competitive with the nVidia counterpart that has fewer shaders - the GTX 260's 216 shaders vs. the HD 4870's 800 - then even if the nVidia GT300 has 512 shaders (which I find unlikely; I believe it will have 480 shader processors), it would still be competitive against the 1200 of the ATi card, which keeps almost the same ratio that the HD 4870 vs. GTX 2x0 series currently holds.
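For reference, here are the shader-count ratios that comparison leans on, computed from the rumored figures (just a quick sketch; the GT300 counts are unconfirmed):

```python
# Shader-count ratios under the rumored figures (GT300 counts unconfirmed).
ratios = {
    "HD 4870 (800) vs GTX 260-216 (216)": 800 / 216,   # ~3.7
    "RV870 (1200) vs GT300 (512)":        1200 / 512,  # ~2.3
    "RV870 (1200) vs GT300 (480)":        1200 / 480,  # 2.5
}
for label, ratio in ratios.items():
    print(f"{label}: {ratio:.2f}x")
```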
Originally posted by: Keysplayr
Originally posted by: MarcVenice
You can't? No, maybe not just by itself, but looking at the trend Nvidia has been following for the past decade, I'm willing to bet the GT300 will end up equal to or bigger than the GT200 in die size. And shaders, MIMD units, that can do more but take less space - that would be pretty revolutionary. Extrapolating is very common and widely used, and although not always spot on, it gives an idea of what's to come. GT300 >= GT200.
That's really nice and all, Marc, but sorry, you just can't know. You tell me how you think MIMD will take up more space and I'll concede. Do you know much about the transistor architecture for MIMD cores? I sure don't. I don't know how it differentiates itself from the current shader architecture, do you? Nah. If GT300 were based on the same architecture as GT200 and they were adding an additional 272 shader processors, then I'd say you're right on the money with your extrapolations. But according to rumor, and that's all it is right now, a rumor, the architecture is different, and who knows how much it actually has in common with GT200? Could be a lot, could be a little. Could be an entire rework - memory controller, ROPs and all.
So no, Marc. You can't.
Mainly because demand for the $299 HD4870 512MB was exceeding that of the GTX260/GTX280 priced at $449/$649 respectively, I doubt supply was the problem here.
Or are you referring to the financial performance figures? In that case I can safely say the GT200 made little to no impact on those profit figures.
I agree with some of the points, but the performance/watt claim is somewhat vague. nVIDIA had superior idle power consumption, but under load the GT200 would suck a lot more power for the performance it gave.
When we talk about yields, it's hard to know what those figures initially were for a full-fledged GT200 chip. GTX280s weren't so hot, especially when the previous 9800GX2s were performing a little faster and priced similarly. But I think they had a lot of chips that failed to reach GTX280 specs yet were more capable than the GTX260 specs.
The cost of a 300mm wafer is in the ballpark of $5,000-6,000, with variances of course depending on the process technology and whatnot (rough per-die math sketched at the end of this post).
Q1 means 4 months.
If I go by the exact launch dates, the GT200 was delayed ~7 months compared to the other launch timeframes.
What makes you think that the GT200 on 55nm would have resulted in another NV30 fiasco?
Think we've gone too OT though Ben.
I think the same thing every time I read a "Larrabee will suck because..." post.
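On the wafer-cost point above, here is a rough per-die sketch. The die areas are the commonly cited ones, while the wafer cost and defect density are placeholder guesses, so the output is illustrative only, not real TSMC data.

```python
import math

# Illustrative cost-per-good-die from wafer price and die area.
# Die sizes are the commonly cited figures; wafer cost and defect density are guesses.

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross die count with a standard edge-loss correction."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost, defects_per_cm2=0.1):
    """Simple Poisson yield model; the defect density is a made-up placeholder."""
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    good_dies = gross_dies_per_wafer(die_area_mm2) * yield_fraction
    return wafer_cost / good_dies

for name, area_mm2 in [("GT200 @ 65nm, ~576 mm^2", 576),
                       ("RV770 @ 55nm, ~256 mm^2", 256)]:
    print(name, "->", round(cost_per_good_die(area_mm2, wafer_cost=5500)), "USD per good die")
```

Whatever the exact inputs, the per-die cost grows faster than linearly with area once yield enters the picture, which is the crux of the big-die argument either way.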
Originally posted by: error8
Originally posted by: jaredpace
After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q
Hahaha, that would be the biggest boost in performance a new generation of cards has ever brought over the previous one. It is highly unlikely that the GT300 will be 6-15x faster than the GT200. 2x faster, most probably; 6-15x is fantasy land. 🙂
Originally posted by: WaitingForNehalem
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it isn't all maxed out, it still looks better than any other game even when it isn't maxed. Everyone just keep complaining, though, and we'll end up with Call of Duty-style games full of invisible walls and scripted events just to make sure you can max them out and run at 100+ fps.
Originally posted by: BenSkywalker
WTF? Errr, I live on Earth. On Earth we have 12 months in a year. A quarter is 1/4. 12/4= 3 😀
Originally posted by: munky
Originally posted by: WaitingForNehalem
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it isn't all maxed out, it still looks better than any other game even when it isn't maxed. Everyone just keep complaining, though, and we'll end up with Call of Duty-style games full of invisible walls and scripted events just to make sure you can max them out and run at 100+ fps.
Visuals don't make a game great. The gameplay was boring, with absolutely no cinematic feel like COD4, or non-linear progression like Stalker or Oblivion.
Originally posted by: WaitingForNehalem
Originally posted by: munky
Originally posted by: WaitingForNehalem
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it isn't all maxed out, it still looks better than any other game even when it isn't maxed. Everyone just keep complaining, though, and we'll end up with Call of Duty-style games full of invisible walls and scripted events just to make sure you can max them out and run at 100+ fps.
Visuals don't make a game great. The gameplay was boring, with absolutely no cinematic feel like COD4, or non-linear progression like Stalker or Oblivion.
Are you kidding me? The whole game had a cinematic feel. Linear progression? This is an FPS, not an RPG, as mentioned. BTW, STALKER is a horrible game that is boring and has some of the worst hit detection I've ever seen.