GT300 Benchmarks


Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: 5150Joker
It doesn't matter how much cash nVidia has, it's not enough to get them an x86 license is it?
They don't need one. AMD has had one and they have been bleeding money for years. No point in fighting Intel when it's killing AMD. They are doing just fine with Tegra now anyways.

The PC graphics future lies with all in one solutions via on die graphics.
Integrated graphics, which as I mentioned Intel already dominates. This will only hurt AMD more.

Any more FUD you need smacked down? I got plenty of facts behind me.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: evolucion8
It's a miracle that Wreckage didn't put out the CUDA and PhysX shit; that's what is missing, and for sure his words will sound identical across all the threads that he derails.

It's his thread. Why would he derail it? :laugh:
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: ronnn
Originally posted by: evolucion8
It's a miracle that Wreckage didn't put out the CUDA and PhysX shit; that's what is missing, and for sure his words will sound identical across all the threads that he derails.

It's his thread. Why would he derail it? :laugh:

BUAHAHAHAHAHAHAAHHAHAHA.... Oh....

That's true.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: BenSkywalker
If you had to purchase a new high end card today, would you buy a GTX 295? Actually, I guess you might, but the vast majority of non-biased individuals would go red atm.

No way would I buy a GTX295 or a 5870, both of them are terrible values no matter how you look at it. Out of all of the new parts and high end offerings the 5850 is the only one that would be remotely attractive.

By the way, I think that you're wrong. Nvidia has shown time and time again that they'll shovel shit down our throats if they think that they can get away with it.

Really? When? The NV3x line is the only time I can think of, and they obviously learned their lesson; ATi absolutely killed them and took over the market that generation. nVidia pushed out the 8800GT under no competitive pressure at all, releasing a part that performed far beyond anything close to its price point with nothing approaching a threat from team red, and it worked to cement them as the dominant parts in terms of marketshare for years.

There are several reasons why it would make little sense for nV to rush out a part that was just 'good enough' to take the performance crown. I would wager that nV wants to be in a position where their 360 series part outperforms the 5870 so they can demand a hefty premium for their 380 series offering. They know that ATi is going to counter with an X2, and they need to be in a position to respond to that. Having their top tier part barely outperform the 5870 puts them in a position where ATi can quickly counter and bury them, handing ATi the halo effect for quite some time.

There are also the margin issues to deal with. If they force out a spin that has worse yield issues than the norm, they put themselves in a position where they need to remain cost competitive at the high end with a huge GPU and poor yields, which is a bad idea. If they create a part they can charge a significant premium for and then use 'bad' chips with disabled sectors as the next step down, they can enormously increase their usable yields (even if the chips aren't usable as 380 parts, they could be as 360s), protecting their margins and making it a profitable situation to be in.

They also have their reputation to protect. While tech enthusiasts may look at things a bit differently, in the wider gaming market nVidia is overwhelmingly viewed as the top tier manufacturer and ATi as the budget choice. That mindshare was built over many years of staying on top of the benchmark charts. Everyone remembers the NV30; it took nVidia years to recover from it, and they aren't going to be foolish enough to surrender that mindshare now over a relatively low availability series of parts that has just launched.

It is looking more and more like amd's small ball strategy is superior to nvidia's "build the fastest no matter what" philosophy.

The only way that statement can be viewed as accurate is if you have seen exactly how the GT3x0 parts are going to perform, so why not share with us? All that has currently been proven is that ATi's strategy allowed them to ship earlier this generation, nothing more. Considering that the 5xxx series is a modified 4xxx (which dates back to the 2xxx parts) while the GT300 series chips are an entirely new core, perhaps that shouldn't be surprising.

nV may very well have totally blown this generation, but I certainly am not willing to state that with anything resembling certainty until I see some real benchmarks comparing the parts. It may well end up that ATi is vastly superior, it could end up that nV is vastly superior, but more than likely it will end up somewhere in the middle. I will wait until I see some numbers to pass judgement.

Every day with plastic mockups and viral YouTube trash made up by upset nVidia fanboys brings us closer to the 6xxx series. The longer the delay, the more pressure on GT300 to perform. A slightly faster GT300 NOW is much better than a 30% faster GT300 in 4 months. Nvidia's partners can't remain loyal to them over the long haul if they don't have something better than G92 to sell for the next 6 months.
 

terentenet

Senior member
Nov 8, 2005
387
0
0
Those benchmark numbers are fake. There are no GT300 cards out there to post benchmark numbers from; perhaps the only GT300s in existence (if there are any) are well kept within Nvidia's HQ. I do believe even Nvidia doesn't have working silicon at this point. If they had it, they would have shown it instead of a mockup card.
GT300 availability this year is compromised; the earliest we will see GT300 is probably Feb-Mar 2010. Perhaps by Christmas this year we will see some real benchmark numbers for GT300, but the card won't be on the shelves.

I also believe that right now Nvidia doesn't want to put out a card to beat the 5870, but rather the soon-to-be-released 5890. X2 cards are a different business, and I am sure Nvidia doesn't intend to fight the ATI X2 card with the GTX380. They will fight the 5890 with the GTX380, and when ATI counters with the X2 card, Nvidia will bring its GTX395 to the table.

I don't understand why so many would be happy if Nvidia or AMD went out of business, or if Nvidia left the x86 market. Competition brought us to where we are today; why would anyone want to end it? As I see it, Nvidia is hurting right now. They indeed don't have an x86 licence and Intel won't grant them one, which is why they are focusing on the ARM market. It's a good move on their part; there is money to be made there. More handheld devices are sold than computers...
There is money to be made in the discrete GPU business, regardless of the current plans to include the GPU on the CPU die. There are many, myself included, who don't want to hear of a CPU+GPU solution. That would limit the upgrade paths too much for my taste. I want a CPU that's good at being a CPU and a GPU that's good at being a GPU. When new and more demanding games appear, I don't want to upgrade the whole computer; I want to be able to choose which specific components to change. Maybe I already have a GPU and buying another one for an SLI/Crossfire setup would help the situation. How do you handle that with a CGPU solution?
Even if CGPUs appear in 2012, it won't be until 2015 or so that they gain enough market share to matter. People are skeptical; remember 64-bit. Even though it was proven to be the way forward and better than 32-bit, it took many years before it was truly adopted, and even now there are many who won't make the switch to 64-bit.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,971
126
Originally posted by: Wreckage

Why don't the specs allow that?
Because the leaked info so far seems to point at a 60%-80% performance gain over a GTX285.

It's an entirely new architecture using MIMD. It could well exceed those benchmarks
It could, but it's unlikely. G80 was a new architecture too but it wasn't three times faster than a 7900 GTX.
 

youthyo2007

Junior Member
Oct 20, 2009
1
0
0
Originally posted by: BFG10K
Originally posted by: Wreckage

Why don't the specs allow that?
Because the leaked info so far seems to point at a 60%-80% performance gain over a GTX285.

It's an entirely new architecture using MIMD. It could well exceed those benchmarks
It could, but it's unlikely. G80 was a new architecture too but it wasn't three times faster than a 7900 GTX.


I totally agree that GT300 is not just a doubled GTX 285 and hence 2x the speed (like RV870 vs RV770), but an efficient redesign of GT200 (solving the problem with SP & SPU thread deadlocks). We might see 3x the speed or more for the GTX380 compared to a GTX 285. The point is not to spend $300 on double the silicon, as in the RV870's case, and make use of only 60% of it, but to buy a GPU that makes use of 90% of its SPs.

If Intel hadn't moved to high-k and a different redesign for Core i7, we would still be on a Pentium architecture from 15 years ago. That is the case with Nvidia's GT300: a more efficient architecture. I prefer to wait for 2 months...




 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
One place where I think the small-die strategy has helped AMD is in pricing flexibility. It has allowed them to effectively re-price previously high-end parts as the market pushes on. I think it also saves on R&D (if you can profitably sell last-gen's high-end parts in the mid-range, there is no need to do a separate midrange design each generation).

Two parts to this. One is that during the last generation AMD made a bet on GDDR5 prices dropping rapidly. They won that bet. It was a risk, and hats off to them for putting it on the line. The reason I bring this up is that if you look at the current pricing issues, the GDDR3 on the GTX parts is the largest price difference between them and the 4xxx parts, not the chip. This gets compounded a bit because nV is using a wider bus, creating a more complex PCB. I'm not saying AMD's chips aren't cheaper, but there isn't nearly the price rift some are making it out to be.

Second part: the G92 is the chip that has hung in for several generations, not the ATi counterparts (everything the red side made when the G92 launched is long dead and buried). Creating a larger chip in several ways makes it easier to scale up and down to account for yield issues. It won't always be ideal on the margin side, but it does give them some flexibility in maximizing less than ideal yields. This isn't saying AMD's approach is wrong, just pointing out that nV's isn't wrong either. Two different approaches, with pros and cons to each.

As an aside, I'm really hoping for a top-to-bottom launch from NV early next year.

Honestly I think there is close to no chance that will happen. With them launching their low end DX10.1 parts I think it is safe to say that they are only going to push mid to high end parts with the GT300 core to start with.

But I think the strategy itself is a sound one, and one I'd like to see NV pursue. A GT360 that starts at $299 and gets pushed down to $175-180 when the next generation hits? Yes, please!

Isn't that precisely what they did with the G92 parts? If they had gone with GDDR5 and a 256-bit bus they could have done that with the GTX260 too; they didn't, because they made the wrong bet. This generation it will be a bit easier to see the impact of die size, as there are fewer variables between the parts.

I give them 1-2 more years tops in the high end before they are gone for good.

First up, Intel needs to clear the lawsuit they are facing to see if they will be allowed to make graphics chips at all. Second, even using Intel's most aggressive projections, it is highly unlikely that the fastest Larrabee available in two years will be competitive with the GTX295. Not the comparable part in that timeframe, but the currently available one. Intel poses no threat to nV or ATi in the high end for several years at the earliest.

nVidia has no viable future in the discrete graphics market. Once 2012 rolls around and AMD + Intel start incorporating their GPUs on die, they will have all-in-one solutions that nVidia won't be able to compete with.

So many ways this is wrong, and it isn't specific to nV by any stretch of the imagination.

First off, come 2012 the next generation of consoles is going to be hitting, meaning that PC games are going to see a massive, instant spike in system requirements. I know PC gamers aren't used to it, but being tied to the consoles changes the normal ebb and flow of things considerably. A GPU that can push a 2011 title at 200FPS may be unplayable in titles hitting in late 2012. To frame this statement: the GS was a souped-up Voodoo1, not even matching the Voodoo1 in everything, and its follow-up was the GF 7900 based part; that is a normal console-style evolution and we will see something comparable in 2012. I state this mainly as an example of why GPU power is going to matter.

Now, if you believe that PC gaming is going to utterly die, then perhaps you will think this won't matter. I don't believe that at all.

So we get to the next segment, GPU comparisons. The i7 has 730 million transistors. The 5750 has 1.04 billion transistors. Both of these chips are in the 90 watt range for TDP (I'm using the lowest i7 numbers). For an on-die CPU/GPU combo chip to be competitive with today's mid range, it would need to be packing ~2 billion transistors and pushing ~180 watt TDP; die size would be ~450mm2, give or take, on Intel's build process. Intel demands a 60% margin to enter a market; that's just how they do business. To reach that level of margin they would need to be priced far beyond the GTX295, even given their exceptional fabbing abilities. Granted, you are getting a CPU and a GPU in that deal, so that certainly must be taken into account.
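
As a rough sanity check on those numbers, here is the back-of-envelope math (a sketch only; the die areas for a Bloomfield-class i7 and a Juniper-class 5750 are ballpark public figures I'm assuming, not official specs):

```python
# Back-of-envelope check of the combined CPU+GPU estimate above.
# Transistor counts and TDPs are the figures quoted in the post;
# the die areas (~263 mm^2 Bloomfield, ~166 mm^2 Juniper) are assumed ballpark values.

cpu = {"transistors_m": 730,  "tdp_w": 90, "die_mm2": 263}   # Core i7, lowest-TDP bin
gpu = {"transistors_m": 1040, "tdp_w": 90, "die_mm2": 166}   # Radeon HD 5750 class

combo_transistors_b = (cpu["transistors_m"] + gpu["transistors_m"]) / 1000  # ~1.8B
combo_tdp_w = cpu["tdp_w"] + gpu["tdp_w"]                                   # ~180W
combo_die_mm2 = cpu["die_mm2"] + gpu["die_mm2"]                             # ~430 mm^2

print(f"~{combo_transistors_b:.1f}B transistors, ~{combo_tdp_w}W TDP, ~{combo_die_mm2} mm^2")
```

Simply adding the two parts together already lands in the same ballpark as the ~2 billion transistor, ~180W, ~450mm2 figures above.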

That brings us to mid-level graphics plus a high-end CPU in terms of die space: certainly reasonable for Intel given the general fabbing guidelines, a bit rougher for AMD, but here we come across our first major issue. While AMD certainly has the graphics expertise to make it happen, they don't have the fab capacity to handle something that complex on a large scale. Intel certainly has the fab capacity for it, but they can't build a remotely decent GPU. Thinking that Intel could be even vaguely competitive with AMD transistor for transistor in the GPU space is obscene; the way things are shaping up, it looks like Larrabee will end up quite a bit larger than the largest AMD GPU while struggling to compete in the $100 segment. Neither company is in a good position to leverage this technology in the midrange segment, for one reason or another. And this is the good news for them.

Then we get to bandwidth. Even in the lower mainstream market, the CPU socket doesn't have remotely close to enough bandwidth to feed both a CPU and a GPU. Of course, Intel could double the bit width to compensate, but that would drive up the cost of the motherboard considerably, along with requiring premium RAM to get the performance you are paying for. Between the added cost of the CPU, the added cost of the mobo, and the added cost of the RAM, in all you are likely looking at somewhere around $200 in additional expenses to compete with a $100 video card.
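
To put some rough numbers on that bandwidth gap (assumed ballpark figures for 2009-era parts, not anything measured):

```python
# Peak theoretical bandwidth: a shared CPU socket vs. a cheap discrete card.
# All figures are assumed, typical values for the period.

def peak_bandwidth_gbs(transfers_mt_per_s, bus_width_bits, channels=1):
    """Peak theoretical DRAM bandwidth in GB/s."""
    return transfers_mt_per_s * (bus_width_bits / 8) * channels / 1000

cpu_socket = peak_bandwidth_gbs(1333, 64, channels=2)  # dual-channel DDR3-1333
budget_gpu = peak_bandwidth_gbs(4600, 128)             # 128-bit GDDR5 @ 4.6 Gbps (HD 5750 class)

print(f"CPU socket:      ~{cpu_socket:.0f} GB/s, shared between CPU and GPU")
print(f"$100-class card: ~{budget_gpu:.0f} GB/s, dedicated to the GPU")
```

That's roughly 21 GB/s shared against 70+ GB/s dedicated, which is why doubling the socket's bus width (and paying for it in board and RAM costs) would still leave an on-die GPU starved compared to a cheap discrete card.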

For anything in the next few years, on CPU die graphics are simply going to be a replacement for integrated chips. There is no chance they are going to even manage to be cost effective in the mainstream segment, let alone the enthusiast space.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: BFG10K
Originally posted by: Wreckage

Why don't the specs allow that?
Because the leaked info so far seems to point at a 60%-80% performance gain over a GTX285.

It's an entirely new architecture using MIMD. It could well exceed those benchmarks
It could, but it's unlikely. G80 was a new architecture too but it wasn't three times faster than a 7900 GTX.

It actually was in some cases, especially newer, more shader-heavy games. Even the X1900XT had a significant advantage over the 7900GTX in more shader-heavy games, and it could be beaten 2:1 by the 8800GTX.

To frame this statement: the GS was a souped-up Voodoo1, not even matching the Voodoo1 in everything, and its follow-up was the GF 7900 based part; that is a normal console-style evolution and we will see something comparable in 2012. I state this mainly as an example of why GPU power is going to matter.

Feature-wise, it was a Voodoo1. But many of its performance characteristics (memory bandwidth and fillrate... err, did it have any other performance characteristics?) were Radeon 9700 Pro level. VU0 was as powerful as the T&L unit of the Naomi 2, which puts it around the level of a 1GHz Athlon. VU1 was fixed function IIRC, but comparable to the GeForce 2's T&L power.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: youthyo2007
Originally posted by: BFG10K
Originally posted by: Wreckage

Why don't the specs allow that?
Because the leaked info so far seems to point at a 60%-80% performance gain over a GTX285.

It's an entirely new architecture using MIMD. It could well exceed those benchmarks
It could, but it's unlikely. G80 was a new architecture too but it wasn't three times faster than a 7900 GTX.


I totally agree that GT300 is not just a doubled GTX 285 and hence 2x the speed (like RV870 vs RV770), but an efficient redesign of GT200 (solving the problem with SP & SPU thread deadlocks). We might see 3x the speed or more for the GTX380 compared to a GTX 285. The point is not to spend $300 on double the silicon, as in the RV870's case, and make use of only 60% of it, but to buy a GPU that makes use of 90% of its SPs.

If Intel hadn't moved to high-k and a different redesign for Core i7, we would still be on a Pentium architecture from 15 years ago. That is the case with Nvidia's GT300: a more efficient architecture. I prefer to wait for 2 months...

2 months in dog years, or do you think that nVidia is going to release a new GPU arch on Dec 20?
 

Creig

Diamond Member
Oct 9, 1999
5,171
13
81
Even the guy who posted the video says it's BS:

Originally posted by: thelasthallow
Stan, you say people post BS results on ATI cards and then you go and do it too? wtf man?


Originally posted by: stanbony
See what happens when the shoe is on the other foot?

It's ok for others to post bullshit, right?

And you ATI boys know who I'm talking about.

Stanbony


It's interesting that Wreckage constantly complains about Charlie, yet is becoming more and more like him every time he posts treasures like this.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: Compddd
Wreckage is the Nvidia Charlie. I'm surprised people still respond to him.

Have you strolled thru Charlie's forum on semi-accurate? It's loaded with hundreds and hundreds of interactive posters.

The web-driven long tail may not have come to fruition in e-tail businesses like amazon.com, but it certainly did come to fruition in social-networking constructs.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,971
126
Originally posted by: stanbony

See what happens when the shoe is on the other foot?

It's ok for others to post bullshit, right?

And you ATI boys know who I'm talking about.

Stanbony
LOL, somehow I doubt Wreckage will be posting in this thread anymore. ;)
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: BFG10K
Originally posted by: stanbony

See what happens when the shoe is on the other foot?

It's ok for others to post bullshit, right?

And you ATI boys know who I'm talking about.

Stanbony
LOL, somehow I doubt Wreckage will be posting in this thread anymore. ;)

Probably got banned again for being such a troll; if not, then he just got pissed off because nobody would buy his pathetic marketing tricks. But OCGuy can do his job just fine, like Compddd said. Back on topic, I hope the GT300 comes sooner so I can get the HD 5870 cheaper; I'm quite satisfied with my card overall, though.