Top to bottom ATI DX11 launch 2009

Sylvanas

Diamond Member
Jan 20, 2004
3,752
0
0
Source

AMD is planning three models of DX11 graphics cards, according to Hardware-Infos, targeting NVIDIA's upcoming G300. Besides RV870, there will be R800 and RV840 for the mid-range market and RV810 for the low-end market. The source indicates that R800 consists of two 40nm RV870 chips.

RV870 will probably be named Radeon HD 5870 according to AMD's naming scheme, RV840 will be called Radeon HD 5850, and R800 will be known as Radeon HD 5870X2.

Powered by 1200x2 stream processors, the Radeon HD 5870X2 is said to feature GDDR5 memory, core/memory clocks of 950/1150MHz, and bandwidth of 286GB/s.
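For what it's worth, the quoted bandwidth is easy to sanity-check: GDDR5 moves 4 bits per pin per clock, so peak bandwidth is memory clock x 4 x bus width. A quick sketch (the 256-bit bus per GPU is my assumption, the article doesn't say):

```python
# Rough GDDR5 bandwidth check for the rumored HD 5870X2 specs.
# Assumptions: 256-bit bus per GPU (not stated in the article);
# GDDR5 transfers 4 bits per pin per clock.

def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    """Peak bandwidth in GB/s: clock * 4 transfers * (bus width / 8) bytes."""
    return mem_clock_mhz * 1e6 * 4 * (bus_width_bits / 8) / 1e9

per_gpu = gddr5_bandwidth_gbs(1150, 256)  # rumored 1150 MHz memory clock
print(round(per_gpu, 1))       # 147.2 GB/s per GPU
print(round(per_gpu * 2, 1))   # 294.4 GB/s combined
```

At the rumored 1150 MHz that works out to about 294 GB/s for two GPUs, a bit above the quoted 286 GB/s, so at least one of the rumored numbers is presumably off.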

NVIDIA G300 is scheduled to launch in late 2009, so AMD is planning to phase out RV870, RV840 and R800 by the end of this year, while no further details on RV810 have been disclosed yet.

That would be a huge bonus if AMD can get low-end, midrange and high-end all out at the same time.

EDIT: Here are some specs posted on a German website.

Translation
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Almost makes goo the wanderer seem like a real person with some knowledge. :laugh:

Besides all that, it would be nice to see some DX11 with Windows 7. I guess the downside will be the console ports with really nice fog.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Why am I not understanding the quote? They are releasing them and phasing them out by the end of the year? Am I that tired?
 

kreacher

Member
May 15, 2007
64
0
0
^ No, apparently the person who originally posted the news on Expreview is the tired one: they copy-pasted the new model numbers instead of the old ones.

Release is still a long way off anyway, so it will be some time before reliable specs appear.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: OCguy
Why am I not understanding the quote. They are releasing them and phasing them out by the end of the year? Am I that tired?

Maybe they are trying to take advantage of 28nm as soon as possible for RV970? That would make sense in this global economy, where it's all about bang for the buck.

Supposedly that node will be ready at TSMC in Q1 2010 (although I find that hard to believe considering the production trouble with 40nm). Correction: this http://www.fudzilla.com/content/view/13833/34/ appears to indicate GlobalFoundries may have 28nm available for ATI.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It sounds like the single RV870 card will have a memory bandwidth of 143 GB/s. Does anybody think this will slightly bottleneck 1200 shaders @ 950 MHz?

I'll bet a 28nm RV970 will have the 512-bit DDR5 though, but then we may be talking 2000+ shaders going even faster.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Can anyone explain to me how memory bandwidth fits into the equation of GPU computational power?

For example, a stock HD4770 has a GPU capable of 0.96 TFLOPS and a memory bandwidth of 54.4 GB/s.

Are there any advantages/disadvantages to having more GPU power than memory bandwidth? Obviously the memory bus takes up some room on the die. How does balancing processing power and memory interface affect how the chip can be employed?
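Not an expert, but one common way to frame it is FLOPs per byte: divide peak compute by peak bandwidth to see how much arithmetic the chip has to do per byte fetched just to keep the shaders busy. A rough sketch using the HD4770 numbers above:

```python
# Back-of-the-envelope FLOPs-per-byte ratio for the stock HD4770
# figures quoted above (0.96 TFLOPS, 54.4 GB/s). A workload needs
# roughly this many arithmetic ops per byte of memory traffic to
# keep the ALUs fed; anything more memory-hungry than that is
# bandwidth-limited instead of compute-limited.

peak_flops = 0.96e12   # 0.96 TFLOPS
peak_bytes = 54.4e9    # 54.4 GB/s

ratio = peak_flops / peak_bytes
print(round(ratio, 1))   # ~17.6 FLOPs per byte
```

Workloads that do less math per byte than that ratio end up waiting on memory, which is why extra bandwidth helps some apps a lot and does nothing for others.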
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
So why are there still rumors about the RV870 coming out when AMD's PR department is saying that they do not exist?

I wonder if AMD is generating bogus rumors to build up hype.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: SickBeast
So why are there still rumors about the RV870 coming out when AMD's PR department is saying that they do not exist?

I wonder if AMD is generating bogus rumors to build up hype.

Disinformation tactics worked pretty well with the 4870/4850. Right up until a couple of days before launch AMD was still claiming 480 SP when it was actually an 800 SP GPU.

Then again, they could be telling the truth.


Who knows.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Anything which says the X2 has higher shader clocks than the regular is always going to look a bit dodgy to me. (German link)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I'm having a hard time believing ATI's next flagship single GPU will only have 1200 shaders, considering the size of the 40nm RV740 with its 640 shaders.

I would think we would be looking at 1600 shaders and 512 bit/DDR5 with maybe the possibility of a smaller 256 bit/DDR5 part also.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: Just learning
I'm having a hard time believing ATI's next flagship single GPU will only have 1200 shaders, considering the size of the 40nm RV740 with its 640 shaders.

I would think we would be looking at 1600 shaders and 512 bit/DDR5 with maybe the possibility of a smaller 256 bit/DDR5 part also.

Last time they leaked they would have 480sp's and showed up to the party with 800. I'd say they'll have no less than 1600 per GPU. They'll need that to compete with a 512 shader GT300. (I think). ;)
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Keysplayr
Originally posted by: Just learning
I'm having a hard time believing ATI's next flagship single GPU will only have 1200 shaders, considering the size of the 40nm RV740 with its 640 shaders.

I would think we would be looking at 1600 shaders and 512 bit/DDR5 with maybe the possibility of a smaller 256 bit/DDR5 part also.

Last time they leaked they would have 480sp's and showed up to the party with 800. I'd say they'll have no less than 1600 per GPU. They'll need that to compete with a 512 shader GT300. (I think). ;)

keys, apoppin, do either of you (or anyone else for that matter) recall who, if anyone, had the 800SP count correct prior to release, despite the prevalence of 480sp rumors?

It would be nice to back-check some sites' rumors and articles on past GPUs so we can just wholly blacklist the discredited ones regarding their rumors on the next-gen chips.

I didn't pay attention back then so I have no working memory to draw from, but you guys do, I suspect, so did anyone get it right?

What about NV rumors on GT200? Was everyone wrong, or was anyone right about that one's specs?
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
http://www.nordichardware.com/news,7575.html

http://www.xbitlabs.com/forum/viewtopic.php?t=14742

http://www.dvhardware.net/article27327.html

http://www.pcreview.co.uk/forums/thread-3496976.php

http://www.pcgameshardware.com...eased-next-month/News/

All of it seems to originate from NordicHardware, and they got their info from AMD. It was a game. It mentions 480sp's for RV770. What they didn't specify was that it was actually an additional 480sp's over the HD3xxx series: 320+480=800.

Quote from Nordic Hardware article: http://www.nordichardware.com/news,7860.html
"When we presented you with a slide saying the RV770 core sports 800 shaders and not 480 like most people believed, it was met with a certain amount of skepticism, which is very understandable as many of the larger news sites have been reporting 480 shaders, although without presenting any proof for it. We know now that the rumor of 480 shader processors was planted by AMD and that RV770 indeed has 800 shader processors. We also know that AMD is aware that the RV770 as a single core is not enough to match the GT200 core from NVIDIA. Not surprisingly, AMD is going to play its trump of price/performance, but that doesn't mean that there's no raw performance to play with."

So I guess it did originate from AMD.

I wouldn't blame the sites too quickly. This misdirection was what was given to the sites initially. They were only reporting what they heard from the "horse's mouth".

As far as who had the number 800 correct right from the start? Who knows.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
that's hilarious that AMD came in with that attitude, considering the 4890 is an outstanding single config.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Ah, so subterfuge was in play then too. Too funny.

I vaguely recall that whenever someone "broke" the story that it was going to have 800sp's the response on the forums was "total horseshit, 320->800 is too much of a jump at one time" blah blah. Kinda like what we see now every time the rumored G300 specs get brought up.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
amd didn't insist on wide memory, but they did insist on moving directly to 55nm.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: Idontcare
Ah, so subterfuge was in play then too. Too funny.

I vaguely recall that whenever someone "broke" the story that it was going to have 800sp's the response on the forums was "total horseshit, 320->800 is too much of a jump at one time" blah blah. Kinda like what we see now every time the rumored G300 specs get brought up.

But then given that AMD managed 800SP in a die smaller than the one used by NV, it's not (in hindsight) quite such a leap.
The main issue people have now is that the G200 die was huge, and they are going to make it even bigger.

320 to 800 seemed like a big jump, but there were still fewer transistors than in the G200, so really it wasn't so massive.
If NV makes their chip in a similar way again (which is actually feasible; they couldn't really redesign it in hindsight of the G200 'problem'), then AMD will also be able to make something with 1600, and it'll probably be smaller again.

The problem with predicting what we will see is twofold: it's mostly rumours, and we start thinking with hindsight based on current chips, when the next gen might not be able to learn lessons from the current generation because it'll potentially be too late in development.

Please tell me if I'm wrong, but AMD's design strategy will be fairly set for this gen, and so will NV's, based on pre-R700/G200 knowledge. It's unlikely, I would think, that they can adjust between small-but-many and large monolithic designs in time, so it wouldn't surprise me to see a comparatively underpowered part from AMD vs NV, but it will again be cheaper.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
AMD will not be going (back) to a 512-bit bus any time soon. 256-bit + GDDR5 has them where they want to be for several reasons.

#1 - While it would be nice, they don't care about having the "crown" at the high end; they care about having the crown in the high-volume mid-range.
#2 - They're committed to multi-chip solutions for their high end, meaning an X2. Making a cost-effective 1024-bit PC board is damn near impossible; just ask NVIDIA.
#3 - They want to keep things "as small as possible" to increase yields and provide better gross margins at a lower price.
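To put rough numbers on that: bandwidth is just bus width times per-pin data rate, and GDDR5 roughly doubles the per-pin rate of GDDR3, so a 256-bit GDDR5 board can land near a 512-bit GDDR3 one. The data rates below are my own ballpark figures for 2008-09 parts, not anything from this thread:

```python
# Illustrative comparison: bandwidth = (bus_width / 8) bytes * per-pin
# data rate in Gbps. The data rates are assumed ballpark figures for
# typical 2008-2009 cards, not specs from this thread.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return (bus_width_bits / 8) * data_rate_gbps

gddr5_256 = bandwidth_gbs(256, 3.6)   # e.g. an HD 4870-class GDDR5 setup
gddr3_512 = bandwidth_gbs(512, 2.2)   # e.g. a GTX 280-class GDDR3 setup

print(gddr5_256)   # 115.2 GB/s from a (cheap) 256-bit board
print(gddr3_512)   # 140.8 GB/s from an (expensive) 512-bit board
```

The narrow board is far cheaper to lay out and the deficit shrinks as GDDR5 clocks ramp, which fits the yields/margins argument in #3.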
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Lonyo
Please tell me if I'm wrong, but AMD's design strategy will be fairly set for this gen, and so will NV's, based on pre-R700/G200 knowledge. It's unlikely, I would think, that they can adjust between small-but-many and large monolithic designs in time, so it wouldn't surprise me to see a comparatively underpowered part from AMD vs NV, but it will again be cheaper.

I don't follow the GPU ISA/architecture side of things close enough to really give you anything more than an uneducated opinion here...but I thought the upcoming GT300 is a completely new design from the ground up for both the ISA and the architecture.

If that is true then really all bets are off based on extrapolating the past/present to the future. P4 to C2D, or P3 to P4 :laugh: We can't possibly know.

But yeah all rumors are that AMD will be doing "more of the same" with their next-round of GPU's. Now are those rumors just more intentional sandbagging by AMD? Honestly, I hope so. I love a good sandbag, really makes for some excitement when the hardware comes out.

Intel has the opposite problem: so much expectation amongst the enthusiast crowd for Larrabee that it's practically predestined to disappoint us. But they have the challenge that if they don't generate hype and interest in advance of the hardware debuting, then no one is going to put resources into programming apps and games now so they are ready for market when Larrabee is released.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: Idontcare
Originally posted by: Lonyo
Please tell me if I'm wrong, but AMD's design strategy will be fairly set for this gen, and so will NV's, based on pre-R700/G200 knowledge. It's unlikely, I would think, that they can adjust between small-but-many and large monolithic designs in time, so it wouldn't surprise me to see a comparatively underpowered part from AMD vs NV, but it will again be cheaper.

I don't follow the GPU ISA/architecture side of things close enough to really give you anything more than an uneducated opinion here...but I thought the upcoming GT300 is a completely new design from the ground up for both the ISA and the architecture.

If that is true then really all bets are off based on extrapolating the past/present to the future. P4 to C2D, or P3 to P4 :laugh: We can't possibly know.

But yeah all rumors are that AMD will be doing "more of the same" with their next-round of GPU's. Now are those rumors just more intentional sandbagging by AMD? Honestly, I hope so. I love a good sandbag, really makes for some excitement when the hardware comes out.

Intel has the opposite problem: so much expectation amongst the enthusiast crowd for Larrabee that it's practically predestined to disappoint us. But they have the challenge that if they don't generate hype and interest in advance of the hardware debuting, then no one is going to put resources into programming apps and games now so they are ready for market when Larrabee is released.

I wasn't talking architecture design, but design philosophy.
The groundwork for the chips would have been laid before they knew what the competition was about (monolithic vs multi-GPU) and should carry over to the next generation G300/R800 chips.
The specifics of the architecture are one thing, and while G300 could well be a completely new architecture, it is also likely that NV is sticking with a large monolithic design rather than moving to a new arch that uses multiple GPUs for the required performance.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
they have a new shader because some changes were naturally involved with going to dx11. but the TPC-centric idea exists, they are just bigger and there's more of them. i'm sure the thread setup process has been revised to accommodate the larger dimensions and new instructions. and the rumor articles each attempt to distinguish this as a MIMD architecture juxtaposing it with G80 and G200's SIMD nature.

has anyone seen this?
http://www.pcgameshardware.com...-Geforce-GTX-380/News/

looks like BS to me. 512-bit GDDR5 on top of 512 shaders? with 2.4 billion transistors, that doesn't add up against G200's 1.4 billion/240 shaders...
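The arithmetic behind that gut feeling, taking the rumored figures at face value:

```python
# Transistors-per-shader sanity check: rumored GT300 figures
# (2.4B transistors, 512 shaders) against the known G200
# (1.4B transistors, 240 shaders).

g200_per_sp  = 1.4e9 / 240   # ~5.8M transistors per shader
gt300_per_sp = 2.4e9 / 512   # ~4.7M transistors per shader

print(round(g200_per_sp / 1e6, 1))    # 5.8
print(round(gt300_per_sp / 1e6, 1))   # 4.7
```

So the rumor implies GT300 spends fewer transistors per shader than G200 did, even while adding a 512-bit GDDR5 interface, which is why it smells off (unless the new architecture counts its "shaders" differently).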
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: Idontcare
Originally posted by: Lonyo
Please tell me if I'm wrong, but AMD's design strategy will be fairly set for this gen, and so will NV's, based on pre-R700/G200 knowledge. It's unlikely, I would think, that they can adjust between small-but-many and large monolithic designs in time, so it wouldn't surprise me to see a comparatively underpowered part from AMD vs NV, but it will again be cheaper.

I don't follow the GPU ISA/architecture side of things close enough to really give you anything more than an uneducated opinion here...but I thought the upcoming GT300 is a completely new design from the ground up for both the ISA and the architecture.

If that is true then really all bets are off based on extrapolating the past/present to the future. P4 to C2D, or P3 to P4 :laugh: We can't possibly know.

But yeah all rumors are that AMD will be doing "more of the same" with their next-round of GPU's. Now are those rumors just more intentional sandbagging by AMD? Honestly, I hope so. I love a good sandbag, really makes for some excitement when the hardware comes out.

Intel has the opposite problem: so much expectation amongst the enthusiast crowd for Larrabee that it's practically predestined to disappoint us. But they have the challenge that if they don't generate hype and interest in advance of the hardware debuting, then no one is going to put resources into programming apps and games now so they are ready for market when Larrabee is released.

What about these ideas of 28nm as soon as Q1 2010? If this is true it could really save the graphics card industry... but then I wonder how long until the next, smaller process (to help with performance vs price).