
Radeon 2950pro

Originally posted by: cmdrdredd
Originally posted by: swtethan
Originally posted by: cmdrdredd
Originally posted by: SickBeast
So in November Nvidia will have a top-end GPU that is more than twice as fast as ATI's fastest offering...interesting and too bad.

ATI has the R680 as well...which is the enthusiast part. This 2950Pro is for the performance "mid-range" sector.

Why don't you get Nvidia to make some decent drivers and just wait for all products from both manufacturers to be released and shown.

They have good drivers. Why don't you buy an Nvidia card before you speak?

Because I had one running in Vista x64 and their drivers suck. Completely suck...no excuse.

Don't worry it's natural to defend a product you own blindly 😀

Seriously they have huge slowdown issues in a variety of games.

I actually run Vista Ultimate 64 and have no problems, the same fps (maybe 10 fps slower) as on my XP, thanks. You may have had one running in the past, but you cannot speak for them now.
 
Originally posted by: swtethan
Originally posted by: cmdrdredd
Originally posted by: SickBeast
So in November Nvidia will have a top-end GPU that is more than twice as fast as ATI's fastest offering...interesting and too bad.

ATI has the R680 as well...which is the enthusiast part. This 2950Pro is for the performance "mid-range" sector.

Why don't you get Nvidia to make some decent drivers and just wait for all products from both manufacturers to be released and shown.

They have good drivers. Why don't you buy an Nvidia card before you speak?

No they don't.

Yes, they've gotten better.

But it's taken them half a year to have what I'd consider decent drivers, not even good yet.
 
Originally posted by: n7
Originally posted by: swtethan
Originally posted by: cmdrdredd
Originally posted by: SickBeast
So in November Nvidia will have a top-end GPU that is more than twice as fast as ATI's fastest offering...interesting and too bad.

ATI has the R680 as well...which is the enthusiast part. This 2950Pro is for the performance "mid-range" sector.

Why don't you get Nvidia to make some decent drivers and just wait for all products from both manufacturers to be released and shown.

They have good drivers. Why don't you buy an Nvidia card before you speak?

No they don't.

Yes, they've gotten better.

But it's taken them half a year to have what I'd consider decent drivers, not even good yet.

In my experience, they're good. What problems have you had with the most recent ones?
 
Originally posted by: swtethan
Originally posted by: n7
Originally posted by: swtethan
Originally posted by: cmdrdredd
Originally posted by: SickBeast
So in November Nvidia will have a top-end GPU that is more than twice as fast as ATI's fastest offering...interesting and too bad.

ATI has the R680 as well...which is the enthusiast part. This 2950Pro is for the performance "mid-range" sector.

Why don't you get Nvidia to make some decent drivers and just wait for all products from both manufacturers to be released and shown.

They have good drivers. Why don't you buy an Nvidia card before you speak?

No they don't.

Yes, they've gotten better.

But it's taken them half a year to have what I'd consider decent drivers, not even good yet.

In my experience, they're good. What problems have you had with the most recent ones?

The slowdown bug noted on their own forums has not been fixed yet. Basically, it causes the card's memory to be eaten up and FPS to drop to unplayable levels (~10 fps).
 
Way off topic again. For most of us still on XP, drivers aren't an issue. Neither is DX10. Hope a game comes out soon that makes me want to upgrade! Bioshock seems fun (from the demo) and I will get it - but no real compelling reason to upgrade, as the game runs great on my ancient system.

I must admit a budget card that runs equal to the 2900XT and uses less power would be a compelling reason.
 
Originally posted by: ronnn
Way off topic again. For most of us still on XP, drivers aren't an issue. Neither is DX10. Hope a game comes out soon that makes me want to upgrade! Bioshock seems fun (from the demo) and I will get it - but no real compelling reason to upgrade, as the game runs great on my ancient system.

I must admit a budget card that runs equal to the 2900XT and uses less power would be a compelling reason.

The bug presents itself in XP sometimes too.

All I was doing was defending ATI from an obviously biased Nvidia user. I would have an 8800 myself if they had good driver support. They don't, and I decided at this point (now that performance is up on the HD2900 with new drivers) that I should wait and see.
 
Yep, for me also, no need to upgrade - so wait and see it is. My monitor resolution is only 1680 x 1050 and the games I like play OK - so no need until we get real DX10 games. It may be a bit of a wait as most seem to be console ports.
 
Originally posted by: swtethan
Originally posted by: n7
Originally posted by: swtethan
Originally posted by: cmdrdredd
Originally posted by: SickBeast
So in November Nvidia will have a top-end GPU that is more than twice as fast as ATI's fastest offering...interesting and too bad.

ATI has the R680 as well...which is the enthusiast part. This 2950Pro is for the performance "mid-range" sector.

Why don't you get Nvidia to make some decent drivers and just wait for all products from both manufacturers to be released and shown.

They have good drivers, why dont you buy a nvidia card before you speak.

No they don't.

Yes, they've gotten better.

But it's taken them half a year to have what I'd consider decent drivers, not even good yet.

In my experience, they're good.

:thumbsup:

Nvidia drivers have been flawless on my Vista 64 system.
 
Originally posted by: swtethan
I actually run Vista Ultimate 64 and have no problems, the same fps (maybe 10 fps slower) as on my XP, thanks.
10 fps is actually quite a performance hit, especially when it comes to minimum frame rates.

 
Originally posted by: Canterwood
Originally posted by: swtethan
I actually run Vista Ultimate 64 and have no problems, the same fps (maybe 10 fps slower) as on my XP, thanks.
10 fps is actually quite a performance hit, especially when it comes to minimum frame rates.

The card has so much power that I have not noticed a difference.
 
Originally posted by: swtethan


The card has so much power that I have not noticed a difference.


Much as I miss the team sport flaming - this is getting silly. When the 2950 Pro is out, it will be easier to compare to competing cards.
 
Originally posted by: ronnn
Originally posted by: swtethan


The card has so much power that I have not noticed a difference.


Much as I miss the team sport flaming - this is getting silly. When the 2950 Pro is out, it will be easier to compare to competing cards.

Exactly. Plus, if it does match HD2900XT performance for $250, then your $400 GTS 640MB is going to be worth less than $300, meaning prices will fall across the board (maybe not in the enthusiast segment, where the GTX and the R680 should sit).
 
Settle down folks. Of course this thing will be cheaper than snot to manufacture. 55nm, HALF the bus width. TADA!! Same 320 shaders as a 2900XT is a bit discouraging though. But the price point would be excellent. And, if they get the 55nm process right, it may use less power than current R600. 😉 And cooler. Maybe even have a single slot cooler? See folks, this is what the R600 should have been. Not entirely sure that halving the bus width was very wise, but we'll see.

Question: Is the move from 55nm to 45nm an optical shrink? Or is a whole new process required? I'm just thinking a bit ahead.
 
Originally posted by: keysplayr2003
Settle down folks. Of course this thing will be cheaper than snot to manufacture. 55nm, HALF the bus width. TADA!! Same 320 shaders as a 2900XT is a bit discouraging though. But the price point would be excellent. And, if they get the 55nm process right, it may use less power than current R600. 😉 And cooler. Maybe even have a single slot cooler? See folks, this is what the R600 should have been. Not entirely sure that halving the bus width was very wise, but we'll see.

Question: Is the move from 55nm to 45nm an optical shrink? Or is a whole new process required? I'm just thinking a bit ahead.

I wonder about the BUS because the 8800gts is only 384bit right? I mean...it doesn't seem limiting there.
 
8800GTS is 320bit, the GTX is 384bit. 🙂

The memory bus width to me is like system memory: going from 256/512MB of system RAM to 1GB is a big jump and you notice the difference. Going to 2GB there is a noticeable difference, but not as big as 256/512MB to 1GB. Then going above 2GB there really isn't any benefit, and if there is, you won't notice it, but it's nice to have 'more'.
 
Originally posted by: cmdrdredd
Originally posted by: keysplayr2003
Settle down folks. Of course this thing will be cheaper than snot to manufacture. 55nm, HALF the bus width. TADA!! Same 320 shaders as a 2900XT is a bit discouraging though. But the price point would be excellent. And, if they get the 55nm process right, it may use less power than current R600. 😉 And cooler. Maybe even have a single slot cooler? See folks, this is what the R600 should have been. Not entirely sure that halving the bus width was very wise, but we'll see.

Question: Is the move from 55nm to 45nm an optical shrink? Or is a whole new process required? I'm just thinking a bit ahead.

I wonder about the BUS because the 8800gts is only 384bit right? I mean...it doesn't seem limiting there.

Is the G80 anything like the R600? Why compare the bus width of an 8800GTS with the bus width of a 2950pro? Totally different architectures, and one really doesn't have anything to do with the other. Both utilize bandwidth in their own way. So, just for the sake of this thread, put G80s out of mind. Talk about what we have here, which is a new card that will have 1/2 of the bus width of its predecessor. That is drastic. However, it is supposed to have VERY fast GDDR4. 1.4GHz (2.8GHz)?? Fast. But then again, the 2900XT 1GB has faster GDDR4 than the 2900XT 512's GDDR3, and it didn't really help that much. Diminishing returns and such.

IMHO, they are futzing around with anything they can to reduce costs. Obviously they couldn't revamp the R600 core to be more efficient in such a short amount of time, so they will do what they can to lessen costs and get more sales. AMD needs to address CORE issues (or efficiency). Memory speed may not help if the core can't process the data fast enough.
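The bandwidth arithmetic behind that trade-off is easy to sketch. A minimal illustration, assuming the 2950pro's rumored specs from this thread (256-bit bus, 1.4GHz GDDR4, i.e. 2.8 GT/s effective) and the commonly quoted 2900XT 512MB figures (512-bit bus, 825MHz GDDR3, i.e. 1.65 GT/s effective):

```python
def memory_bandwidth_gbps(bus_width_bits: int, effective_rate_gtps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s.

    bus_width_bits: width of the memory bus in bits
    effective_rate_gtps: effective transfer rate in GT/s
        (DDR-family memory transfers twice per clock, so a
        1.4 GHz GDDR4 clock gives 2.8 GT/s)
    """
    bytes_per_transfer = bus_width_bits / 8  # bus width in bytes
    return bytes_per_transfer * effective_rate_gtps

# Rumored 2950pro: 256-bit bus, 2.8 GT/s effective GDDR4
rv670 = memory_bandwidth_gbps(256, 2.8)    # 89.6 GB/s

# HD 2900XT 512MB: 512-bit bus, 1.65 GT/s effective GDDR3
r600 = memory_bandwidth_gbps(512, 1.65)    # 105.6 GB/s

print(f"2950pro (rumored): {rv670:.1f} GB/s")
print(f"2900XT:            {r600:.1f} GB/s")
```

So halving the bus while doubling the effective data rate would keep the new part within roughly 15% of the 2900XT's theoretical peak, which fits the thread's point that R600 was rarely bandwidth-limited in the first place.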
 
Originally posted by: aiya24
8800GTS is 320bit, the GTX is 384bit. 🙂

The memory bus width to me is like system memory: going from 256/512MB of system RAM to 1GB is a big jump and you notice the difference. Going to 2GB there is a noticeable difference, but not as big as 256/512MB to 1GB. Then going above 2GB there really isn't any benefit, and if there is, you won't notice it, but it's nice to have 'more'.

No, we are talking about "width" of the memory bus. Not the "amount" of memory.
 
I'm inclined to believe that the 512-bit memory interface was quite a waste of transistor budget/die space/cost etc. for the R600. It's more of an experiment than anything else (a successful one nonetheless). However, we all know the R600 is not even close to being bandwidth limited, and I'm sure ATi would be better off with a 256-bit memory interface; paired with fast GDDR4 to maintain a reasonable bandwidth figure, it would be much more cost efficient.

This 2950pro sounds quite the card, if priced right. I'm thinking $299ish to $349 could be the MSRP of this card. It might have cut down on the ALU side of things from 320 to maybe 240? (That would make it 2/3 of the R600 in terms of ALU count.) ROPs/TMUs might stay the same.

But I bet people are more interested in the R680 slated to hit Q1 08 according to CJ (the refresh of the R600 high end).
 
Originally posted by: Cookie Monster
I'm inclined to believe that the 512-bit memory interface was quite a waste of transistor budget/die space/cost etc. for the R600. It's more of an experiment than anything else (a successful one nonetheless). However, we all know the R600 is not even close to being bandwidth limited, and I'm sure ATi would be better off with a 256-bit memory interface; paired with fast GDDR4 to maintain a reasonable bandwidth figure, it would be much more cost efficient.

This 2950pro sounds quite the card, if priced right. I'm thinking $299ish to $349 could be the MSRP of this card. It might have cut down on the ALU side of things from 320 to maybe 240? (That would make it 2/3 of the R600 in terms of ALU count.) ROPs/TMUs might stay the same.

But I bet people are more interested in the R680 slated to hit Q1 08 according to CJ (the refresh of the R600 high end).

I know I'm more interested in high end.

Although it all depends on how the current cards handle Crysis and UT3 before the R680 release. If the games run really well on current hardware then I may be more inclined to buy now rather than wait.
 
Originally posted by: keysplayr2003

Is the G80 anything like the R600? Why compare the bus width of an 8800GTS with the bus width of a 2950pro? Totally different architectures, and one really doesn't have anything to do with the other. Both utilize bandwidth in their own way. So, just for the sake of this thread, put G80s out of mind. Talk about what we have here, which is a new card that will have 1/2 of the bus width of its predecessor. That is drastic. However, it is supposed to have VERY fast GDDR4. 1.4GHz (2.8GHz)?? Fast. But then again, the 2900XT 1GB has faster GDDR4 than the 2900XT 512's GDDR3, and it didn't really help that much. Diminishing returns and such.

IMHO, they are futzing around with anything they can to reduce costs. Obviously they couldn't revamp the R600 core to be more efficient in such a short amount of time, so they will do what they can to lessen costs and get more sales. AMD needs to address CORE issues (or efficiency). Memory speed may not help if the core can't process the data fast enough.

Of course they want to reduce costs. It would be very surprising to find a company that doesn't. Past history shows many second-gen cards that used a reduced bus but still managed to keep pace with the previous flagship, so that part is not surprising. The interesting part is that we should be seeing some real DX10 by that time - and we will see if the architecture competes well. The R600 may be more efficient (not counting power use) than you give it credit for.

 
Originally posted by: keysplayr2003
Originally posted by: aiya24
8800GTS is 320bit, the GTX is 384bit. 🙂

the memory bus width to me is like system memory, going from 256/512MB of system ram to 1GB is a big jump and you notice the difference. going to 2GB there is a noticeable difference but not as big as 256/512 to 1GB. then going above 2GB there really isn't any benefit and if there is, you'll won't notice it but its nice to have 'more'.

No, we are talking about "width" of the memory bus. Not the "amount" of memory.

i know you guys were talking about 'width' of the memory bus. i was just using system memory as an example that to a certain point having more doesn't do much. hence i used the word 'like'. sorry if i confused you.
 
Originally posted by: Cookie Monster
I'm inclined to believe that the 512-bit memory interface was quite a waste of transistor budget/die space/cost etc. for the R600. It's more of an experiment than anything else (a successful one nonetheless). However, we all know the R600 is not even close to being bandwidth limited, and I'm sure ATi would be better off with a 256-bit memory interface; paired with fast GDDR4 to maintain a reasonable bandwidth figure, it would be much more cost efficient.

This 2950pro sounds quite the card, if priced right. I'm thinking $299ish to $349 could be the MSRP of this card. It might have cut down on the ALU side of things from 320 to maybe 240? (That would make it 2/3 of the R600 in terms of ALU count.) ROPs/TMUs might stay the same.

But I bet people are more interested in the R680 slated to hit Q1 08 according to CJ (the refresh of the R600 high end).
The article stated that the price would be $199-249.

 
Originally posted by: bryanW1995
Originally posted by: Cookie Monster
I'm inclined to believe that the 512-bit memory interface was quite a waste of transistor budget/die space/cost etc. for the R600. It's more of an experiment than anything else (a successful one nonetheless). However, we all know the R600 is not even close to being bandwidth limited, and I'm sure ATi would be better off with a 256-bit memory interface; paired with fast GDDR4 to maintain a reasonable bandwidth figure, it would be much more cost efficient.

This 2950pro sounds quite the card, if priced right. I'm thinking $299ish to $349 could be the MSRP of this card. It might have cut down on the ALU side of things from 320 to maybe 240? (That would make it 2/3 of the R600 in terms of ALU count.) ROPs/TMUs might stay the same.

But I bet people are more interested in the R680 slated to hit Q1 08 according to CJ (the refresh of the R600 high end).
The article stated that the price would be $199-249.

Were they talking about retail prices or prices the retailers will pay for each card?

I mean, they said the Q6600 was $266 and it's ~$300 at retail.
 
Originally posted by: ronnn


Of course they want to reduce costs. It would be very surprising to find a company that doesn't. Past history shows many second-gen cards that used a reduced bus but still managed to keep pace with the previous flagship, so that part is not surprising. The interesting part is that we should be seeing some real DX10 by that time - and we will see if the architecture competes well. The R600 may be more efficient (not counting power use) than you give it credit for.

I agree with you for the most part. But I don't think a reduced bus was ever done to a new flagship card. I mean, this 2950pro is aimed to replace the 2900XT, correct? It still has 320 shaders kicking, just on a seriously reduced manufacturing process. So the core is the same (architecturally) and pumped up to 850MHz. But doesn't this GPU have a 512-bit internal/external Ring Bus Controller? Are they going to cut this in half as well? It is going to be very interesting to see how this setup pans out for AMD. If this thing performs at least AS well as a current 2900XT at the $250 price point, I know I would be sold. I really don't care about the bus width as long as the performance is there.
It would really be something special if AMD got back into the game here with something fierce.

 