NVIDIA to Launch GeForce 8600 Series on April 17th


s44

Diamond Member
Oct 13, 2006
9,427
16
81
Originally posted by: ahmurat
One question to experts:
I am upgrading my HTPC system, and I mainly use it for HD videos, not gaming. I'm looking for a card that would just give me good 2D video performance (720p mostly). Which of the above do you think would satisfy me?
Thanks in advance.
For future-proofing, I'd get one of the HDCP models. $100 list for the 8500GT looks pretty good.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Cookie Monster
Memory interface isn't a big factor in terms of performance. It's the architecture that matters most. You can clearly see this in the 128-bit 7600GT beating the 6800 Ultra across the board. The 128-bit 6600GT outperformed both the 9700 Pro and FX 5950 Ultra while going toe to toe with the 9800 Pro.

This is true; however, with a 256-bit bus you would've seen an enormous increase in performance across the board.

It's their method of gutting performance to make the high-end cards more attractive at their absurdly high price tags.

I just want a $200 card that doesn't "feel like" engineering toilet paper.

I'm *STILL* using the 6800GT I bought at launch because there are no clear alternatives in the $200-or-less market.

Two full generations of cards, and I'm still not likely looking at a 100%+ increase in performance for $200.

If you want to see what I'm talking about, cut the memory clock of your current card in half, whatever it is.

You won't lose 50% of your performance, but it's damn sure noticeable.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
The 128-bit bus simply provides less memory bandwidth than a 256-bit or 384-bit bus. But keep in mind, they have the memory clocked at a whopping **2GHz**, which should more than make up for the decrease in many cases.

And for the last time, the 6600GT did not struggle with SM3 features. SM3 was purely performance improvements and shortcuts in code, as well as allowing for more complex code in the future (max shader length, etc.).

-Kevin
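The bandwidth trade-off being debated here is easy to sanity-check. A minimal sketch in Python, taking the rumored 2GHz effective memory clock from the thread as an assumption rather than a confirmed spec:

```python
# Peak memory bandwidth: (bus width in bytes) x (effective transfer rate).
def bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    return (bus_width_bits / 8) * effective_clock_ghz

# Rumored 8600 series: 128-bit bus, 2GHz effective memory.
print(bandwidth_gbs(128, 2.0))  # 32.0 GB/s
# Identical to a 256-bit bus running 1GHz effective memory.
print(bandwidth_gbs(256, 1.0))  # 32.0 GB/s
```

So the high memory clock can, on paper, fully offset the narrower bus; whether it does in practice depends on the rest of the architecture.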
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I'm more interested in the rest of the specs. Is it going to use GDDR4? Is it going to have 64 shaders? Assuming the source is true to some extent, a 700MHz core clock results in a whopping shader clock of 1550MHz. This card should be pin-to-pin compatible with the 7600 series PCB (à la the 6600 series PCB).

So if they can produce a card that can outperform both the X1950 Pro and 7950 GT, or at least be on par with them, it would be a killer deal. Better IQ, and CSAA allows a higher level of AA than any other midrange card ever has. It's small as well, and at $199~249 it's a sweet deal. DX10 is just icing on the cake.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,007
126
This is all well and good but I seriously hope we won't be waiting until April for new XP drivers. :|
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Gamingphreek
The 128-bit bus simply provides less memory bandwidth than a 256-bit or 384-bit bus. But keep in mind, they have the memory clocked at a whopping **2GHz**, which should more than make up for the decrease in many cases.

And for the last time, the 6600GT did not struggle with SM3 features. SM3 was purely performance improvements and shortcuts in code, as well as allowing for more complex code in the future (max shader length, etc.).

-Kevin

But this makes even less sense to me.

The whole reason to gut the card and go 128-bit is to save money by reducing the number of layers on the PCB.

You then turn around and INCREASE costs by using expensive memory, which doesn't nearly make up for the deficit created by cheaping out on the bus width.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: Acanthus
Originally posted by: Gamingphreek
The 128-bit bus simply provides less memory bandwidth than a 256-bit or 384-bit bus. But keep in mind, they have the memory clocked at a whopping **2GHz**, which should more than make up for the decrease in many cases.

And for the last time, the 6600GT did not struggle with SM3 features. SM3 was purely performance improvements and shortcuts in code, as well as allowing for more complex code in the future (max shader length, etc.).

-Kevin

But this makes even less sense to me.

The whole reason to gut the card and go 128-bit is to save money by reducing the number of layers on the PCB.

You then turn around and INCREASE costs by using expensive memory, which doesn't nearly make up for the deficit created by cheaping out on the bus width.

I'm not sure (this is pure speculation on my part), but the memory may be cheaper than the added logic required to create another 128-bit bus. With the appropriate manufacturing process, it could very well be cheaper this way.

-Kevin
 

xfile

Senior member
Nov 26, 2005
499
0
76
I can't wait any longer. I just ordered an EVGA 8800 GTS 620. DirectX 10 is just a bonus....
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Gamingphreek
Originally posted by: Acanthus
Originally posted by: Gamingphreek
The 128-bit bus simply provides less memory bandwidth than a 256-bit or 384-bit bus. But keep in mind, they have the memory clocked at a whopping **2GHz**, which should more than make up for the decrease in many cases.

And for the last time, the 6600GT did not struggle with SM3 features. SM3 was purely performance improvements and shortcuts in code, as well as allowing for more complex code in the future (max shader length, etc.).

-Kevin

But this makes even less sense to me.

The whole reason to gut the card and go 128-bit is to save money by reducing the number of layers on the PCB.

You then turn around and INCREASE costs by using expensive memory, which doesn't nearly make up for the deficit created by cheaping out on the bus width.

I'm not sure (this is pure speculation on my part), but the memory may be cheaper than the added logic required to create another 128-bit bus. With the appropriate manufacturing process, it could very well be cheaper this way.

-Kevin

I still fail to see it, unless the added die real estate reduces the yields enough to matter.
 

bigsnyder

Golden Member
Nov 4, 2004
1,568
2
81
Originally posted by: ahmurat
Really? Because I can get it in AGP and wouldn't need to upgrade the whole system. But do you actually mean a 6600GT on a Core 2 Duo setup?
Currently I've got an Athlon XP 2800.

The two best AGP cards for HTPC use I can think of are the 7600GS (GT if you want the extra gaming power) and the Radeon X1600 series. Sapphire even makes an HDCP-compliant X1600 card with an HDMI connector, though these have disappeared lately. Of course there are more powerful choices, but I don't consider those, since noise and heat are very important in HTPC applications.

C Snyder


 

Modular

Diamond Member
Jul 1, 2005
5,027
67
91
Originally posted by: SunnyD
I don't rightfully see how a 128-bit card is going to outperform last-gen cards with a 256-bit bus across the board.

Remember the 9800pro vs. 6600gt?

Edit: Read the rest of the thread, I see someone beat me to this point.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76

Originally posted by: gersson
Originally posted by: CU
If they say a card is meant to replace card "x", does that mean it is that fast? If so, I think I would rather get a 7900GS or X1950 Pro now for $150 or so than buy a card of the same speed, but with DX10 that I won't use for a while, for $199-$250.

Think about it... if that were true, we'd NEVER see performance increases. The 6800, 7800, and 8800 would all perform about the same, just with better features?


::buzzer::

The X1950 Pro and 7900 GS have MSRPs of $199, so the 8600 GTS will likely be faster than these cards, as it's likely being introduced at $199, which has always been the midrange price point.

I would say it should at least be on par with the 7950 GT at minimum, though I am not sure if it will outperform the 7900 GTX.


 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: SunnyD
Originally posted by: Acanthus
Originally posted by: gersson
Originally posted by: CU
If they say a card is meant to replace card "x", does that mean it is that fast? If so, I think I would rather get a 7900GS or X1950 Pro now for $150 or so than buy a card of the same speed, but with DX10 that I won't use for a while, for $199-$250.

Think about it... if that were true, we'd NEVER see performance increases. The 6800, 7800, and 8800 would all perform about the same, just with better features?


::buzzer::

You're describing the midrange and low-end markets ;)

128 bit :roll:

Is this 2001?

Finally someone hit it on the nose. Part of the reason the 8800 has been so successful in terms of performance is its ginormous memory bus. I don't rightfully see how a 128-bit card is going to outperform last-gen cards with a 256-bit bus across the board.

Isn't that what they said about the midrange GeForce 7? And the GeForce 7600 GT outperforms the 6800 Ultra nonetheless, while being for the most part equal to the 7800 GS.

Remember that 2GHz GDDR4 on a 128-bit bus will have bandwidth equal to 1GHz GDDR3 on a 256-bit bus, like the one used on the 7800 GT.

And compared to the 7950 GT, it has 71.43% as much bandwidth. So I expect the same thing to occur this generation, with the 8600 GTS at least outperforming the 7950 GT.

Edit: People have already stated my point, but I feel it's important enough to reiterate once again. A lot of people still believe that memory bandwidth has a significant impact on performance, but it's a distant second compared to shader power, which is easily increased through clock speeds.
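The 71.43% figure checks out under the rumored specs: 2GHz effective memory on a 128-bit bus for the 8600 GTS versus the 7950 GT's 1.4GHz effective GDDR3 on a 256-bit bus (the 8600 numbers are still unconfirmed):

```python
# Peak bandwidth = (bus width in bytes) x (effective data rate).
rumored_8600gts = (128 / 8) * 2.0   # 32.0 GB/s
gf7950gt = (256 / 8) * 1.4          # 44.8 GB/s
print(round(rumored_8600gts / gf7950gt * 100, 2))  # 71.43
```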
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Acanthus
Originally posted by: Gamingphreek
Originally posted by: Acanthus
Originally posted by: Gamingphreek
The 128-bit bus simply provides less memory bandwidth than a 256-bit or 384-bit bus. But keep in mind, they have the memory clocked at a whopping **2GHz**, which should more than make up for the decrease in many cases.

And for the last time, the 6600GT did not struggle with SM3 features. SM3 was purely performance improvements and shortcuts in code, as well as allowing for more complex code in the future (max shader length, etc.).

-Kevin

But this makes even less sense to me.

The whole reason to gut the card and go 128-bit is to save money by reducing the number of layers on the PCB.

You then turn around and INCREASE costs by using expensive memory, which doesn't nearly make up for the deficit created by cheaping out on the bus width.

I'm not sure (this is pure speculation on my part), but the memory may be cheaper than the added logic required to create another 128-bit bus. With the appropriate manufacturing process, it could very well be cheaper this way.

-Kevin

I still fail to see it, unless the added die real estate reduces the yields enough to matter.

In general, a more complex PCB for a 256-bit interface is more expensive than using higher-clocked memory. From what I remember, PCB prices remain fairly stable, while the price of memory goes down as production ramps up. So the 8600 GTS's cost will decrease over time as GDDR4 production matures.
 
Dec 21, 2006
169
0
0
Seems like they are trying to compensate for the 128-bit interface with faster memory clock speeds; some of them look pretty massive. Still, I will withhold judgment/speculation on relative performance until I get concrete numbers for the stream processors and what they will be clocked at. Power draw would be nice too, considering I don't have a spare nuclear reactor sitting around to power my next graphics card.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Memory interface isn't a big factor in terms of performance. It's the architecture that matters most. You can clearly see this in the 128-bit 7600GT beating the 6800 Ultra across the board. The 128-bit 6600GT outperformed both the 9700 Pro and FX 5950 Ultra while going toe to toe with the 9800 Pro.

This is true; however, with a 256-bit bus you would've seen an enormous increase in performance across the board.

It's their method of gutting performance to make the high-end cards more attractive at their absurdly high price tags.

I just want a $200 card that doesn't "feel like" engineering toilet paper.

I'm *STILL* using the 6800GT I bought at launch because there are no clear alternatives in the $200-or-less market.

Two full generations of cards, and I'm still not likely looking at a 100%+ increase in performance for $200.

If you want to see what I'm talking about, cut the memory clock of your current card in half, whatever it is.

You won't lose 50% of your performance, but it's damn sure noticeable.

You're comparing apples to oranges: your high-end 6800 GT was $399 MSRP at launch, and the 7600 GT, which is superior, retailed for $199 MSRP, though it arrived 23 months later.

Nevertheless, what matters is the performance of the package, not whether it looks bad on paper. Compare an Athlon 64 X2 3800+ Manchester vs. an Athlon 64 X2 3800+ Windsor: the latter has double the memory bandwidth but is only marginally faster.

 

TanisHalfElven

Diamond Member
Jun 29, 2001
3,512
0
76
Originally posted by: coldpower27
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Memory interface isn't a big factor in terms of performance. It's the architecture that matters most. You can clearly see this in the 128-bit 7600GT beating the 6800 Ultra across the board. The 128-bit 6600GT outperformed both the 9700 Pro and FX 5950 Ultra while going toe to toe with the 9800 Pro.

This is true; however, with a 256-bit bus you would've seen an enormous increase in performance across the board.

It's their method of gutting performance to make the high-end cards more attractive at their absurdly high price tags.

I just want a $200 card that doesn't "feel like" engineering toilet paper.

I'm *STILL* using the 6800GT I bought at launch because there are no clear alternatives in the $200-or-less market.

Two full generations of cards, and I'm still not likely looking at a 100%+ increase in performance for $200.

If you want to see what I'm talking about, cut the memory clock of your current card in half, whatever it is.

You won't lose 50% of your performance, but it's damn sure noticeable.

You're comparing apples to oranges: your high-end 6800 GT was $399 MSRP at launch, and the 7600 GT, which is superior, retailed for $199 MSRP, though it arrived 23 months later.

Nevertheless, what matters is the performance of the package, not whether it looks bad on paper. Compare an Athlon 64 X2 3800+ Manchester vs. an Athlon 64 X2 3800+ Windsor: the latter has double the memory bandwidth but is only marginally faster.
That's because of DDR2's inherent latency issues. And it's a processor; now you're comparing apples to cars.
 

GEOrifle

Senior member
Oct 2, 2005
833
15
81
I'm mostly interested in the 8600GS vs. the X1950 Pro and X1950 XT AGP series.
Will NVIDIA's chip be more promising than ATI's?
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: tanishalfelven
Originally posted by: coldpower27
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Memory interface isn't a big factor in terms of performance. It's the architecture that matters most. You can clearly see this in the 128-bit 7600GT beating the 6800 Ultra across the board. The 128-bit 6600GT outperformed both the 9700 Pro and FX 5950 Ultra while going toe to toe with the 9800 Pro.

This is true; however, with a 256-bit bus you would've seen an enormous increase in performance across the board.

It's their method of gutting performance to make the high-end cards more attractive at their absurdly high price tags.

I just want a $200 card that doesn't "feel like" engineering toilet paper.

I'm *STILL* using the 6800GT I bought at launch because there are no clear alternatives in the $200-or-less market.

Two full generations of cards, and I'm still not likely looking at a 100%+ increase in performance for $200.

If you want to see what I'm talking about, cut the memory clock of your current card in half, whatever it is.

You won't lose 50% of your performance, but it's damn sure noticeable.

You're comparing apples to oranges: your high-end 6800 GT was $399 MSRP at launch, and the 7600 GT, which is superior, retailed for $199 MSRP, though it arrived 23 months later.

Nevertheless, what matters is the performance of the package, not whether it looks bad on paper. Compare an Athlon 64 X2 3800+ Manchester vs. an Athlon 64 X2 3800+ Windsor: the latter has double the memory bandwidth but is only marginally faster.
That's because of DDR2's inherent latency issues. And it's a processor; now you're comparing apples to cars.

The point I am trying to make is that lots of bandwidth doesn't mean substantial performance increases. For GPUs, the primary way to increase performance is adding more functional units, with clock frequency on those units second and memory bandwidth a distant third.

For CPUs, the primary way to increase performance was clock speed, but that has been decreasing in effectiveness, and now we are relying on architecture and additional cores. Memory bandwidth is about fourth in comparison to these.

My original point stands: what matters is the performance of the package, and that it is produced as cheaply as possible. Memory prices can fall as new process nodes are developed and make reaching a certain frequency bin easier; PCB prices don't fall as much, since you're still using more or less the same amount of materials.
 

TanisHalfElven

Diamond Member
Jun 29, 2001
3,512
0
76
Originally posted by: coldpower27
Originally posted by: tanishalfelven
Originally posted by: coldpower27
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Memory interface isn't a big factor in terms of performance. It's the architecture that matters most. You can clearly see this in the 128-bit 7600GT beating the 6800 Ultra across the board. The 128-bit 6600GT outperformed both the 9700 Pro and FX 5950 Ultra while going toe to toe with the 9800 Pro.

This is true; however, with a 256-bit bus you would've seen an enormous increase in performance across the board.

It's their method of gutting performance to make the high-end cards more attractive at their absurdly high price tags.

I just want a $200 card that doesn't "feel like" engineering toilet paper.

I'm *STILL* using the 6800GT I bought at launch because there are no clear alternatives in the $200-or-less market.

Two full generations of cards, and I'm still not likely looking at a 100%+ increase in performance for $200.

If you want to see what I'm talking about, cut the memory clock of your current card in half, whatever it is.

You won't lose 50% of your performance, but it's damn sure noticeable.

You're comparing apples to oranges: your high-end 6800 GT was $399 MSRP at launch, and the 7600 GT, which is superior, retailed for $199 MSRP, though it arrived 23 months later.

Nevertheless, what matters is the performance of the package, not whether it looks bad on paper. Compare an Athlon 64 X2 3800+ Manchester vs. an Athlon 64 X2 3800+ Windsor: the latter has double the memory bandwidth but is only marginally faster.
That's because of DDR2's inherent latency issues. And it's a processor; now you're comparing apples to cars.

The point I am trying to make is that lots of bandwidth doesn't mean substantial performance increases. For GPUs, the primary way to increase performance is adding more functional units, with clock frequency on those units second and memory bandwidth a distant third.

For CPUs, the primary way to increase performance was clock speed, but that has been decreasing in effectiveness, and now we are relying on architecture and additional cores. Memory bandwidth is about fourth in comparison to these.

My original point stands: what matters is the performance of the package, and that it is produced as cheaply as possible. Memory prices can fall as new process nodes are developed and make reaching a certain frequency bin easier; PCB prices don't fall as much, since you're still using more or less the same amount of materials.

Relax, dude. No one is arguing that bandwidth doesn't matter; why do you think the R600 has a 512-bit bus? However, every x600 card for the past two gens has been faster than the previous high end despite a crippled bus.
 

40sTheme

Golden Member
Sep 24, 2006
1,607
0
0
Originally posted by: SunnyD
Originally posted by: Acanthus
Originally posted by: gersson
Originally posted by: CU
If they say a card is meant to replace card "x", does that mean it is that fast? If so, I think I would rather get a 7900GS or X1950 Pro now for $150 or so than buy a card of the same speed, but with DX10 that I won't use for a while, for $199-$250.

Think about it... if that were true, we'd NEVER see performance increases. The 6800, 7800, and 8800 would all perform about the same, just with better features?


::buzzer::

You're describing the midrange and low-end markets ;)

128 bit :roll:

Is this 2001?

Finally someone hit it on the nose. Part of the reason the 8800 has been so successful in terms of performance is its ginormous memory bus. I don't rightfully see how a 128-bit card is going to outperform last-gen cards with a 256-bit bus across the board.

Yeah, I do not like the 128-bit bus idea. Even with those very high clocks, I doubt it would outperform my 512MB, 256-bit 7950GT @ 600/1600 (basically a 7900GTX now, eh? Okay, so it's 50MHz less on the core. Oh well ;)).
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: tanishalfelven
Relax, dude. No one is arguing that bandwidth doesn't matter; why do you think the R600 has a 512-bit bus? However, every x600 card for the past two gens has been faster than the previous high end despite a crippled bus.

Perhaps R600 needs every drop of performance possible to beat the G80 and the upcoming G81 by a decent margin. R600 is also a high-end card, so cost-effectiveness isn't as much of an issue.
 

TanisHalfElven

Diamond Member
Jun 29, 2001
3,512
0
76
Originally posted by: coldpower27
Originally posted by: tanishalfelven
Relax, dude. No one is arguing that bandwidth doesn't matter; why do you think the R600 has a 512-bit bus? However, every x600 card for the past two gens has been faster than the previous high end despite a crippled bus.

Perhaps R600 needs every drop of performance possible to beat the G80 and the upcoming G81 by a decent margin. R600 is also a high-end card, so cost-effectiveness isn't as much of an issue.
I never said it was. I'm just saying that despite crippled buses, the midrange has always been good value. However, higher memory bandwidth truly is very, very important.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: tanishalfelven
Originally posted by: coldpower27
Originally posted by: tanishalfelven
Relax, dude. No one is arguing that bandwidth doesn't matter; why do you think the R600 has a 512-bit bus? However, every x600 card for the past two gens has been faster than the previous high end despite a crippled bus.

Perhaps R600 needs every drop of performance possible to beat the G80 and the upcoming G81 by a decent margin. R600 is also a high end card, so cost effectiveness isn't as much of an issue.
I never said it was. I'm just saying that despite crippled buses, the midrange has always been good value. However, higher memory bandwidth truly is very, very important.

I was speculating because you asked why the R600 has a 512-bit bus. High memory bandwidth has only proven important in the high-end sector; in the midrange, bandwidth is one of the weakest ways to increase performance and can be sacrificed. The point of reduced memory interfaces is to do just that: reduce the cost of the product so that it can be sold for less and be a good value.