POLL: nVidia's Silence on GT300

akugami

Diamond Member
Feb 14, 2005
5,654
1,848
136
Originally posted by: bryanW1995
funny how amd's old argument against intel's q6600 is now being used against them...

"but, but, q6600 is just two e6600's slapped together with duct tape, glue, and paper clips" - amd fanboi

"it's also about 17 bajillion times faster than your piece of shit monolithic quad core that is later than hell and has to be slowed down 10% via a bios update to even work" - pat gelsinger

"um, but ours looks cooler in a diagram" - amd fanboi

Yep, I did find it ironic. I admit I owned an Athlon X2 back then; it offered good bang for the buck and was a better processor than anything Intel was putting out at the time. I've since switched to an Intel quad core.

Originally posted by: alyarb
you can't be serious; larrabee is the least likely architecture to succeed in D3D and there is absolutely no performance data available to support this or that outcome.

The thing is, Larrabee doesn't have to be great. It just has to be good enough for the great unwashed masses. Remember that Intel sells the great majority of the GPUs in the world because they're packaged with its motherboard chipsets. Intel has a ton of older fabs it can use to roll out these chips, whether for motherboards or GPUs, and can produce them more cheaply than AMD or nVidia can get similar-sized chips made on the same or a similar process.

OEMs integrate a ton of discrete graphics cards that sell below the $100 mark. There are a lot of ifs, but assuming Larrabee is competitive at the low to mid end and launches with relatively few bugs, ATI and nVidia will have plenty to worry about. Intel can release Larrabee as a low-end competitor to these products and essentially wipe out a large market segment for ATI and nVidia, because it can offer package deals with the CPU, motherboard, and GPU as desktop "platforms", similar to how Centrino is a mobile platform. This is a scary prospect for those two companies.

While ATI and nVidia will still have the mid to high end of the video card market, if Intel is serious about Larrabee and stays in for the long term, we might see ATI and nVidia in a world of hurt in three years' time. Granted, Intel would need a lot of good fortune along the way, but it is in the realm of possibility given their large bankroll. They don't need Larrabee to succeed on the first try. It just can't suck miserably while they work on a better second or third iteration.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: kreacher
With their marketing department's past record, being silent means the situation is so bad right now that they can't even think of a way to spin it into anything positive.

Lol, that's just what I was thinking.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Well, companies generally release products at a price they believe will be most profitable (on a sales curve). When they have to sell below this price, it obviously cuts into their profits (sometimes so much that there are none). I wonder how much money NVIDIA was making (or losing) due to the price/performance blow it took from the 4xxx series.

Economics 101:
1. Something is worth as much as people are willing to pay for it
2. Decreasing cost means increasing profits, not decreasing price to consumer.
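A quick back-of-the-envelope sketch of point 2, with completely made-up numbers:

```python
# Toy example: a cost reduction flows to margin unless the seller chooses to cut the price.
def profit(price, unit_cost, units_sold):
    return (price - unit_cost) * units_sold

# Hypothetical card: $199 MSRP, one million units sold either way.
before = profit(price=199, unit_cost=120, units_sold=1_000_000)  # $79M
after = profit(price=199, unit_cost=100, units_sold=1_000_000)   # $99M

print(f"profit before shrink: ${before / 1e6:.0f}M, after: ${after / 1e6:.0f}M")
# The $20 cost saving lands in profit; the price only drops if competition forces it.
```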

As for wondering how much nVidia is "losing"... according to nVidia's and AMD's reports, nVidia makes twice the profit per card, and has for years... in other words, AMD sells closer to cost than nVidia does.
 

bfdd

Lifer
Feb 3, 2007
13,312
1
0
Originally posted by: Idontcare
Originally posted by: Keysplayr
They don't even list the correct memory amount for GTX280 and GTX275. Wonder how many other things they got wrong.

You know what a consultant does, right? A consultant takes your watch off your arm, reads it and tells you the time, and then puts your watch back on your arm.

You know what a market researcher does, right? They take the watch off a consultants arm, read it...well you know the rest of the joke by now. ;) :laugh:


(preventative misinterpretation disclaimer: this was a joke, no consultants or market researchers were harmed or otherwise injured in any way in the creation of this joke, the author of this post was himself at one time an overpaid consultant who enjoys, to this day, the seemingly lost art of self-denigration)

I take offense to this seeing as I work for a market research firm! We can totally read time better than a consultant :p
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Keysplayr
So do the cores. $65.00 for a GTX280 65nm core, and $90.00 for a GTX285 55nm core.
How does that compute? ;)

What makes that so hard to believe? I have to agree that some errors do make it less credible.

Not all GT200 chips are created equal. The chips found on the GTX285 are the cream of the crop. They run at much lower voltages than the chips used on the GTX275, and they are full-fledged cores found only in the GTX285 SKU.

65nm process technology has been out for quite some time, so it makes a lot more sense that TSMC charges considerably less per wafer for 65nm than for 55nm, and even more so compared to 40nm. I'm also assuming the cost of the chips is affected by the number of chips produced per wafer.
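Roughly, here's how that works out. A minimal sketch with made-up wafer prices and yields (the die areas used are the figures usually quoted for the 65nm and 55nm GT200, roughly 576 mm² and 470 mm²):

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation for how many whole dies fit on a round wafer."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, die_area_mm2, yield_fraction):
    return wafer_cost / (gross_dies_per_wafer(die_area_mm2) * yield_fraction)

# Wafer costs and yields here are guesses, purely for illustration.
print(cost_per_good_die(wafer_cost=4000, die_area_mm2=576, yield_fraction=0.6))  # 65nm GT200
print(cost_per_good_die(wafer_cost=5000, die_area_mm2=470, yield_fraction=0.6))  # 55nm GT200b
```

Fewer candidate dies per wafer and lower yields both push the per-die cost up, which is why a big chip on a pricier process gets expensive fast.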
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: Cookie Monster
Originally posted by: Keysplayr
So do the cores. $65.00 for a GTX280 65nm core, and $90.00 for a GTX285 55nm core.
How does that compute? ;)

What makes that so hard to believe?

While TSMC does charge more for 55nm wafers than for 65nm wafers (assuming a lot of other things are held constant between two such wafer contracts), the hard-to-believe part comes in when the speculated cost differential is nearly 50%.

That definitely seems suspect.
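For what it's worth, here's a rough sanity check, assuming the commonly reported die sizes (~576 mm² for the 65nm GT200, ~470 mm² for the 55nm GT200b), equal yields on both nodes, and the standard dies-per-wafer approximation:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation: usable wafer area minus edge losses.
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

dies_65 = gross_dies_per_wafer(576)  # roughly 95 candidate dies per 300mm wafer
dies_55 = gross_dies_per_wafer(470)  # roughly 120 candidate dies per 300mm wafer

# If the per-die costs really were $65 (65nm) and $90 (55nm) at equal yields,
# the implied wafer price ratio would be:
implied_ratio = (90 * dies_55) / (65 * dies_65)
print(f"55nm wafer would have to cost about {implied_ratio:.2f}x the 65nm wafer")  # ~1.75x
```

A wafer premium that steep for a half-node shrink is the kind of gap that makes those per-die numbers look suspect.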
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: taltamir
Originally posted by: Barfo
Originally posted by: taltamir
nVidia released products at a certain MSRP.
AMD LATER released similar products at half the MSRP.
nVidia adjusted prices... yet for some reason some people have decided this means they are incapable of competing or catching up. What gives? When it comes time to buy, nVidia products have so far more often than not been the better deal.

I was under the impression ATI is still the best value under most scenarios, even with Nvidia price cuts.

Only if you look at MSRP, or maybe just at Newegg as if no other etailers exist.
Most etailers don't sell at MSRP; they sell below or above it.

in general they are very competitive atm. 4850 competes with 250 gts, that's a definite win for amd in the value category. however, the 4870/4890 vs 260/275 is pretty even. you can snag a 4870 512mb card for a little bit less but it's a little bit less of a card, too. some of the lower end models like 4830 and 4670 are pretty competitive vs their nvidia counterparts, but 9600 gso spanks them all in price/performance at around $40 AR.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: happy medium
An 8800GTX has the 4850 beat in value because it's 3-4 years old and is almost as fast, so who needed a 4850? The 9800GT is also 3-4 years old, and everybody and their mother had/has an 8800/9800GT. Just because a company can finally catch up to another's technology 2-3 years later and beat their prices doesn't mean they are a better value. It shows they have to lower their prices to gain a little market share because of prior poor, slow, uncompetitive products (2900XT/Pro, 3850, 3870). nVidia charges higher prices because they can and make money doing it. Why? Because their launched products are always faster and become popular. The 4870 1GB and the GTX260 are very competitive price-wise. What's the difference, $5? The 4890 was just released 5 months ago? Everyone already had a GTX280/285 or 4870 1GB, so who really needed it? The speed of the 4890 was released 2 years ago with the GTX280. So after 2 years they made a card cheaper and as fast.
I'm no fanboy, but AMD hasn't exactly been doing well lately. They seem to be followers more than innovators. Don't get me wrong, I hope they make good products so I don't have to spend $400 on a GTX360.

um, wasn't gtx 280 released in june 2008?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: bryanW1995
Originally posted by: taltamir
Originally posted by: Barfo
Originally posted by: taltamir
nVidia released products at a certain MSRP.
AMD LATER released similar products at half the MSRP.
nVidia adjusted prices... yet for some reason some people have decided this means they are incapable of competing or catching up. What gives? When it comes time to buy, nVidia products have so far more often than not been the better deal.

I was under the impression ATI is still the best value under most scenarios, even with Nvidia price cuts.

Only if you look at MSRP, or maybe just at Newegg as if no other etailers exist.
Most etailers don't sell at MSRP; they sell below or above it.

in general they are very competitive atm. 4850 competes with 250 gts, that's a definite win for amd in the value category. however, the 4870/4890 vs 260/275 is pretty even. you can snag a 4870 512mb card for a little bit less but it's a little bit less of a card, too. some of the lower end models like 4830 and 4670 are pretty competitive vs their nvidia counterparts, but 9600 gso spanks them all in price/performance at around $40 AR.

yea, value very much depends on which card is on sale this week
 

DefRef

Diamond Member
Nov 9, 2000
4,041
1
81
There's a very simple reason why nVidia would be quiet over any upcoming "monster" cards: If the customers know that something uber is coming soon, they'll hold off on buying the current-gen products sitting on shelves now. This is what happened to Palm in the wake of the introduction of the Pre at CES. Though it wouldn't come out for five more months, people stopped buying their existing Treos and Centros as they waited for the slick smartphone to arrive.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Idontcare
Originally posted by: Cookie Monster
Originally posted by: Keysplayr
So do the cores. $65.00 for a GTX280 65nm core, and $90.00 for a GTX285 55nm core.
How does that compute? ;)

What makes that so hard to believe?

While TSMC does charge more for 55nm wafers than for 65nm wafers (assuming a lot of other things are held constant between two such wafer contracts), the hard-to-believe part comes in when the speculated cost differential is nearly 50%.

That definitely seems suspect.

I'm not sure how much nVIDIA pays TSMC to manufacture its GPUs, but I'm sure it is nVIDIA itself that sells the chips to its partners at prices it feels are justified by a given chip's spec. These are normally classified as seen in the GTX series, e.g. G200-XXX, where the last three digits denote what kind of chip it is. I'm guessing there is a price disparity between these differently spec'ed G200 GPUs, and in this case the G200-350 (used in the GTX285) is the most expensive of the bunch.

Not only are they full-fledged (nothing is disabled), they run at a lower voltage while being clocked much higher than the original GTX280, all for lower power consumption. I doubt nVIDIA would pass up selling these to its partners at premium prices. Look at Intel: they charge almost double for the i7 870 compared to the i7 860, which is clocked only 133MHz lower.

Another interesting point to note is that even with the 55nm shrink, the GT200 chip is still very large. Not many can be produced per wafer, and since nVIDIA went on to make even more SKUs using 55nm GT200 chips (the 55nm GTX260, GTX260 c216, and GTX275), they could have intentionally priced the G200-350 chips higher (on top of what I just said about them being the cream of the crop) to persuade partners to focus on the lower and mid high-end parts.

Thinking about it, the GTX285 was an oddball. It sat between the HD4870X2 and the HD4870: faster than the latter, quite a bit slower than the former, but priced exactly in the middle of the two.

I'm going too far OT. To get back on topic: before G80 was released, no one knew what it was. nVIDIA was quite silent on the card, if memory serves me right. No one knew it was going to be based on a unified shader architecture. The same could be true for GT300. This could truly be nVIDIA's can of whoop-ass.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
I remember anticipating the new releases after the X19xx and 7xxx series. A rumor was spread that Nvidia would not be going to a unified arch but would continue with the traditional pipeline approach. Everyone expected the successor to G71 to have at least 32 traditional pipes, maybe even doubling them to 48, among other feature enhancements, with a DX10 arch.
And poof! G80. Everyone was like "W....T.....H....?" A pleasant surprise.

Many believe G200 was a test run preceding a truly new architecture. Who knows.
We'll all find out soon enough. Just as always.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: DefRef
There's a very simple reason why nVidia would be quiet over any upcoming "monster" cards: If the customers know that something uber is coming soon, they'll hold off on buying the current-gen products sitting on shelves now. This is what happened to Palm in the wake of the introduction of the Pre at CES. Though it wouldn't come out for five more months, people stopped buying their existing Treos and Centros as they waited for the slick smartphone to arrive.

um, they already know about "something uber" on the way, it's called 5870.

@keys: I thought that gt300 was definitely going to be a new arch. Haven't you heard anything?
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
Keysplayr really comes across as a foolish fanboy with his head in the sand, spewing FUD at every opportunity. The attitude actually echoes Nvidia as a corporation lately. When faced with competitive pressure, they come up with implausible, ridiculous claims that basically insult their target buyers.

And saying R8xx is a dual-core GPU is completely false. Although if it WAS, that would actually be a win for AMD; it would make it easier to bring out a faster refresh part. There is a rumour that the die has disabled sections that can/will be enabled as yields at TSMC improve, but no clear evidence of this.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Well, I selected #2, but now that I think of it, I should have put "other."

After looking at Fudzilla's results.

They might beat ATI performance-wise, but the gap might not be large, and the difference in pricing and die size (even power) will be so much larger that it might not be worth it to most buyers.

Another thing is that maybe Nvidia is still not sure how it will turn out in final silicon. All reports indicate a very ambitious project (and ambitious doesn't always equal success).
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: AnandThenMan
Keysplayr really comes across as a foolish fanboy with his head in the sand, spewing FUD at every opportunity. The attitude actually echoes Nvidia as a corporation lately. When faced with competitive pressure, they come up with implausible, ridiculous claims that basically insult their target buyers.

And saying R8xx is a dual-core GPU is completely false. Although if it WAS, that would actually be a win for AMD; it would make it easier to bring out a faster refresh part. There is a rumour that the die has disabled sections that can/will be enabled as yields at TSMC improve, but no clear evidence of this.

I didn't say R8xx "IS" a dual-core GPU. I said it "LOOKS" like two R740 cores in the same package. It's been pointed out that the schematic pic is not an accurate depiction of the actual layout of the core, so all I can do is wait for an actual R8xx die shot. That's why I "was" holding off on any further comments about it. You confuse FUD with discussion, dude. I think it's BS for you to say MY head is in the sand if you interpret these previous discussions as FUD, when you don't even know yourself what the GPU core will look like in its setup. So you counter perceived FUD with different FUD.

Kewl :thumbsup:
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Originally posted by: BenSkywalker

and the only one that speaks of massive issues for nV is Charlie, who I can't recall being right on a prediction about new hardware... ever? (Maybe he has, but I can't think of it.)

Well I can. Core 2.

Anyways, he didn't say the performance sucked, not necessarily. He said "low yields" and "delayed availability". That's kinda different. Of course performance might have been better without those two factors, but that doesn't mean it's always lower.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Originally posted by: AnandThenMan

And saying R8xx is a dual-core GPU is completely false. Although if it WAS, that would actually be a win for AMD; it would make it easier to bring out a faster refresh part. There is a rumour that the die has disabled sections that can/will be enabled as yields at TSMC improve, but no clear evidence of this.

It pretty much is a dual core actually. It's 2 R7xx reorganized a bit and slapped together into 1 chip. With the needed DX11 tweaks and Eyefinity pathway added.

http://www.chiphell.com/2009/0914/103.html

http://img43.imageshack.us/img...creencapture821200.jpg


Notice the dual-like structure of it? Two of the RV770 SIMD Engines side by side. Not to mention an exact doubling in specs over the last series.

http://www.rage3d.com/reviews/...itecture/index.php?p=3
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Kakkoii
It pretty much is a dual core actually. It's 2 R7xx reorganized a bit and slapped together into 1 chip. With the needed DX11 tweaks and Eyefinity pathway added.

http://www.chiphell.com/2009/0914/103.html

http://img43.imageshack.us/img...creencapture821200.jpg


Notice the dual-like structure of it? Two of the RV770 SIMD Engines side by side. Not to mention an exact doubling in specs over the last series.

http://www.rage3d.com/reviews/...itecture/index.php?p=3

The drawing you show here as proof was done by someone who misread the slides shown earlier. While it may be possible the physical layout will be like AMD's slide (and not like the private sketch, which is different from the official slide anyway), it's all about the logical pathways on the diagram and in the physical chip. Since we still don't know in detail what the chip looks like inside, and the logical distribution on the slide is fully parallel, those claims are false.

While AnandThenMan's statement was a bit blunt and straight to the point, I share his view that right now trying to discredit ATi's new design is grasping at straws.

And I would like to apologize to Keys for my harsh words earlier. They were unnecessary and do not represent my way of discussing things. Had a few bad weeks recently and I get annoyed easily. Again, sorry.
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
No idea. Probably waiting to see what AMD shows. I don't think it will be as fast as people think, due to their concentration on GPGPU stuff.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Qbah
Originally posted by: Kakkoii
It pretty much is a dual core actually. It's 2 R7xx reorganized a bit and slapped together into 1 chip. With the needed DX11 tweaks and Eyefinity pathway added.

http://www.chiphell.com/2009/0914/103.html

http://img43.imageshack.us/img...creencapture821200.jpg


Notice the dual-like structure of it? Two of the RV770 SIMD Engines side by side. Not to mention an exact doubling in specs over the last series.

http://www.rage3d.com/reviews/...itecture/index.php?p=3

The drawing you show here as proof was done by someone who misread the slides shown earlier. While it may be possible the physical layout will be like AMD's slide (and not like the private sketch, which is different from the official slide anyway), it's all about the logical pathways on the diagram and in the physical chip. Since we still don't know in detail what the chip looks like inside, and the logical distribution on the slide is fully parallel, those claims are false.

While AnandThenMan's statement was a bit blunt and straight to the point, I share his view that right now trying to discredit ATi's new design is grasping at straws.

And I would like to apologize to Keys for my harsh words earlier. They were unnecessary and do not represent my way of discussing things. Had a few bad weeks recently and I get annoyed easily. Again, sorry.

We all have our off days/weeks. Thanks. And it would be great if everyone just understood that I am NOT trying to discredit ATi's (AMD's) new design. I didn't say there was anything wrong with it, not whatsoever. I was just making an observation of how it appeared from the schematic pic. Every single feature of R7xx was doubled. Exactly doubled: 800 to 1600 SPs, 40 to 80 TMUs, 16 to 32 ROPs, with the only exception being the 256-bit bus kept across the board. Which tells me that if they DID slap two cores side by side, they are not true R770 cores but more like two R740 cores (128-bit memory interface) with the full 800 SPs, 40 TMUs, and 16 ROPs of the R770. Like a hybrid; something that, if made today, would be called something like an HD4790.
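Just to spell out the doubling, using nothing but the figures above (the 256-bit bus being the one thing that stays put):

```python
# Specs as quoted above: R7xx (RV770) versus the R8xx numbers being discussed.
rv770 = {"stream processors": 800, "TMUs": 40, "ROPs": 16, "bus width (bits)": 256}
r8xx = {"stream processors": 1600, "TMUs": 80, "ROPs": 32, "bus width (bits)": 256}

for spec, old_value in rv770.items():
    ratio = r8xx[spec] / old_value
    print(f"{spec}: {old_value} -> {r8xx[spec]} ({ratio:.0f}x)")
# Everything doubles except the memory bus, which is exactly the observation being made here.
```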
Pointing to the logical pathways doesn't really tell me that there aren't two separate GPUs inside the R8xx core.

Looking at the second photo Kakkoii linked to tells me there are two Command Processors, two Setup Engines, two Interpolators, and two Ultra-Threaded Dispatch Processors. Is this schematic inaccurate? Is there only one of each? Because it does not look like these items are "connected" via logical pathways, as you mentioned earlier. It looks like the only time data actually gets "merged" from the seemingly two cores is when it reaches the high-bandwidth Crossbar/Shader Export.

If this were TRULY a completely monolithic core, I would expect to see only one Command Processor, one Setup Engine, one Interpolator, and one Ultra-Threaded Dispatch Processor across ALL the SIMD blocks and TMUs.

Again, for emphasis: I am not discrediting this design at all. If the performance is there, and it seems like it is, nobody really cares whether it's a single monolithic core or two set up side by side. This is just some interesting discussion while we wait for the big "GT300 vs. R8xx" war that's coming.

And even if I was discrediting the design, nobody should really care one way or another. When it comes down to the wire, nobody cares. They just care how it performs. Hell, look at Core 2 Quad. Do people care that it is essentially 2 Core 2 Duos in one package? I don't think so.

:thumbsup:

 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Keysplayr
Originally posted by: Qbah
The drawing you show here as proof was done by someone who misread the slides shown earlier. While it may be possible the physical layout will be like AMD's slide (and not like the private sketch, which is different from the official slide anyway), it's all about the logical pathways on the diagram and in the physical chip. Since we still don't know in detail what the chip looks like inside, and the logical distribution on the slide is fully parallel, those claims are false.

While AnandThenMan's statement was a bit blunt and straight to the point, I share his view that right now trying to discredit ATi's new design is grasping at straws.

And I would like to apologize to Keys for my harsh words earlier. They were unnecessary and do not represent my way of discussing things. Had a few bad weeks recently and I get annoyed easily. Again, sorry.

We all have our off days/weeks. Thanks. And it would be great if everyone just understood that I am NOT trying to discredit ATi's (AMD's) new design. I didn't say there was anything wrong with it, not whatsoever. I was just making an observation of how it appeared from the schematic pic. Every single feature of R7xx was doubled. Exactly doubled: 800 to 1600 SPs, 40 to 80 TMUs, 16 to 32 ROPs, with the only exception being the 256-bit bus kept across the board. Which tells me that if they DID slap two cores side by side, they are not true R770 cores but more like two R740 cores (128-bit memory interface) with the full 800 SPs, 40 TMUs, and 16 ROPs of the R770. Like a hybrid; something that, if made today, would be called something like an HD4790.
Pointing to the logical pathways doesn't really tell me that there aren't two separate GPUs inside the R8xx core.

Looking at the second photo Kakkoii linked to tells me there are two Command Processors, two Setup Engines, two Interpolators, and two Ultra-Threaded Dispatch Processors. Is this schematic inaccurate? Is there only one of each? Because it does not look like these items are "connected" via logical pathways, as you mentioned earlier. It looks like the only time data actually gets "merged" from the seemingly two cores is when it reaches the high-bandwidth Crossbar/Shader Export.

If this were TRULY a completely monolithic core, I would expect to see only one Command Processor, one Setup Engine, one Interpolator, and one Ultra-Threaded Dispatch Processor across ALL the SIMD blocks and TMUs.

Again, for emphasis: I am not discrediting this design at all. If the performance is there, and it seems like it is, nobody really cares whether it's a single monolithic core or two set up side by side. This is just some interesting discussion while we wait for the big "GT300 vs. R8xx" war that's coming.

And even if I was discrediting the design, nobody should really care one way or another. When it comes down to the wire, nobody cares. They just care how it performs. Hell, look at Core 2 Quad. Do people care that it is essentially 2 Core 2 Duos in one package? I don't think so.

:thumbsup:

The original slide is here. One command processor, one ultra-threaded dispatch processor that goes to all SIMD blocks. No mention of the Setup Engine or Interpolator on the official slide.

Anyway, I can see why one could think like that. And it can go both ways in the end I guess. But based on what's on the slides, it's one monolithic chip. Either way:

Originally posted by: Keysplayr
When it comes down to the wire, nobody cares. They just care how it performs.