ATI Preps New Dual-Chip Flagship Graphics Card - Rumours.


3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Would there be a point if 2 months later there will be a faster card out? Not to mention, they would have trouble getting it under 300W. It would be wasted R&D IMO.

Agreed. It's not going to give them the performance crown anyway. Why bother?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
They say they have confirmation......

Quote:
Our further investigation also led us to find two codenames Turks and Caicos as these two cards should replace Radeon HD 5670 and Radeon 5550 and 5400.
Cayman (Southern Islands should be a group codename for Radeon 6000 series) should be the codename for the product that replaces Radeon HD 5800 series, but as far as we know this chip might actually not make it to market this year"

http://www.fudzilla.com/graphics/graphics/atis-next-generation-to-come-in-october

I think mid range first (Oct./Nov.), high end at the end of the year.

Well, they also had confirmation GF100 was being released December 2009...
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
What was the limit of eyesight again? I thought it was 5000 pixels?
It's irrelevant until we get displays many more pixels wide than that. Anyway, it depends on distance. However, the ideal will be to have several more pixels across and down than the eye can resolve at the viewing distance, to completely eliminate detection of aliasing and minor lossy codec artifacts.
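For what it's worth, the "limit of eyesight" falls out of simple geometry, assuming the usual ~1 arcminute (60 pixels per degree) acuity rule of thumb; the screen sizes and distances below are just example values:

Code:
import math

def pixels_needed(screen_width_m, viewing_distance_m, ppd=60):
    # ppd = pixels per degree; 60 corresponds to the common ~1 arcminute
    # visual acuity rule of thumb (a rough guide, not a hard limit).
    angle_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * viewing_distance_m)))
    return angle_deg * ppd

# Example: a ~30" wide desktop monitor (0.64 m) at arm's length (0.7 m)
print(round(pixels_needed(0.64, 0.7)))  # ~2900 pixels across
# Example: a 2 m wide living-room screen viewed from 2.5 m
print(round(pixels_needed(2.0, 2.5)))   # ~2600 pixels across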
22.2 channel sound. Yeah right. A room full of speakers sounds nice but...
Nah, just the normal 5-8. With more channels, your computer or receiver can downmix them for you. Some people might go for tons of speakers, but an HTPC with several speakers hooked up and a software mixer calibrated for the room, properly downmixing 22 channels for the best sound at the sofa... that would be pretty sweet.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
They say they have confirmation......

Quote:
Our further investigation also led us to find two codenames Turks and Caicos as these two cards should replace Radeon HD 5670 and Radeon 5550 and 5400.
Cayman (Southern Islands should be a group codename for Radeon 6000 series) should be the codename for the product that replaces Radeon HD 5800 series, but as far as we know this chip might actually not make it to market this year"

http://www.fudzilla.com/graphics/graphics/atis-next-generation-to-come-in-october

I think mid range first (Oct./Nov.), high end at the end of the year.

WTF is your problem, do you only read what you want?

Here is the part you left out of the quote:
but as far as we know this chip might actually not make it to market this year, but it also might launch together with the other ones in October.

You can think what you want, but Fudzilla doesn't have any confirmation supporting your thoughts.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
"As originally proposed, UHD comes in two levels of resolution: 7680x4320 pixels (33.1MP), and 3840x2160 (8.2MP), which is considerably higher than full-HD resolution (1920x1080 or 2MP) today. In addition, UHD may improve audio dramatically and enable 22.2 multi-channel three-dimensional sound."

What was the limit of eyesight again? I thought it was 5000 pixels? 22.2 channel sound. Yeah right. A room full of speakers sounds nice but...

Yeah, 7680x4320 sounds absolutely ridiculous.

That is 16 times more pixels than today's 1080p.

But that shouldn't be too much for video cards to handle in 8-10 years, assuming semiconductor manufacturers are able to maintain a doubling of xtors every 2 years.

I just wonder how affordable these LCDs will be? Or will stacked OLEDs be really cheap by that time?
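Back-of-the-envelope, the numbers work out like this (treating the strict every-2-years doubling and the 8-10 year horizon as assumptions, not givens):

Code:
uhd_8k  = 7680 * 4320     # 33,177,600 pixels
full_hd = 1920 * 1080     #  2,073,600 pixels
print(uhd_8k / full_hd)   # 16.0 -- exactly 16x the pixels of 1080p

years = 10
print(2 ** (years / 2))   # 32.0 -- transistor budget multiplier if doubling holds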
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Yeah until they get out a new architecture they are going to struggle to keep up with NVIDIA.

Maybe AMD will finally release "sideport" and SFR for Radeon.

If that ends up being the case, dual GPU cards *might* begin to act and feel like single GPU products. With AFR the two GPUs render alternating frames, which is where the extra input lag and the uneven frame pacing (micro-stutter) come from; with SFR both GPUs work on the same frame.

So maybe FPS doesn't really improve that much, but the overall quality of experience (i.e., input lag, reduction or elimination of micro-stutter) significantly improves.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
With double the Shaders (100%) (800 vs 1600) (HD5770 vs HD5870) we have 74% more Power Consumption (108W vs 188W).

If you take an HD5850 (1440 shaders) and OC to 5870 clocks (850MHz), then the performance difference between a 5850 (850MHz) and a 5870 (850MHz) is only 3-4% with 11% more Shaders.

If ATI keeps the same Architecture as Evergreen and adds 20% more Shaders (1920), then the Power Consumption will go up 15% to 188 + 15% = 216W and performance will go up 6-8%.

Die Size will be in the area of ~400mm2.

If they want to make a dual chip Card they better keep the TDP below 200W.

So I believe SI will not be more than 10% faster than 5870 at best; actually I don't think ATI really tried to make it faster in the first place, they're waiting for 28nm. ;)
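For reference, here is that arithmetic spelled out; the 15% power and 6-8% performance deltas are the quoted poster's own assumptions, not measured numbers:

Code:
hd5870_shaders, hd5870_power = 1600, 188   # watts under load, per the quote
si_shaders = int(hd5870_shaders * 1.20)    # +20% shaders -> 1920
si_power   = hd5870_power * 1.15           # assumed +15% power -> ~216 W
si_perf_gain = (6, 8)                      # assumed % gain over the HD 5870

print(si_shaders, round(si_power))         # 1920 216
print("estimated %d-%d%% faster than HD 5870" % si_perf_gain)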

If you think ATI is going to release a 6XXX series lineup and the flagship is only going to be 10% faster than a 5870, you are kidding yourself, big time.

There is more to it than just the increase in die size. They have had a full year now to prep these new cards, refine what they already have in the 5870, and add certain aspects of what they are going to offer in N.I.

I don't see where the surprise or doubt is coming from; it's simple logic. ATI currently has a 6-month lead on Nvidia in generation releases.

If they release their next series at year's end, as they have already gone on record saying, they will continue to maintain this lead. Nvidia has nothing coming but a refresh part that will not outdo their flagship 480.

If/when they release a dual GPU card with 460 cores, which is pretty much a certainty, we'll have the same story as when the 5870 dropped: the 295 was about 5-10% faster than a 5870. Likely the same story with the 6870 vs 495, or whatever they call it.

Of course, the 6970 will eliminate that situation in short order, and the 6870 will retake the single GPU card lead.

I called this one back at the 480 release. Nvidia needs to get their shit together and catch up to ATI's release cycle. Or else their sales will continue to plummet if they keep coming to the table 6 months later with parts on par with ATI.

Otherwise ATI will keep putting Nvidia's sales behind the eight ball for those six-month terms where they are dominating GPU sales with new-tech cards while Nvidia is still trying to push their old stuff out the door.
 

ace55

Member
Jul 27, 2010
28
0
0
Well said Grooveriding. Nvidia really does need to get their act together, else soon enough their only GPU sales will be to 3D and multi-monitor gamers. Nvidia's only selling points atm are the aforementioned features and SLI scaling, allowing great performance/price with 460 SLI and great performance (if money is no object) with 480 SLI. It would be foolish not to expect Southern Islands to surpass Nvidia's current offerings in single-card performance. If SI also manages to fix AMD's problems with Eyefinity and Crossfire and/or adds 3D support... Well, I might just regret buying two 480s.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Yeah, 7680x4320 sounds absolutely ridiculous.

That is 16 times more pixels than today's 1080p.
It's also only 35% more pixels than AMD has already demoed Eyefinity with. Not exactly a huge jump, anymore, is it?

I think AMD wants to be ready and waiting when the display technology finally gets here, and this seems to be the kind of thing they can actually one-up nVidia and Intel at.
But that shouldn't be too much for video cards to handle in 8-10 years, assuming semiconductor manufacturers are able to maintain a doubling of xtors every 2 years.
Why would you need to double the transistors over and over, for it?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It's also only 35% more pixels than AMD has already demoed Eyefinity with. Not exactly a huge jump, anymore, is it?

It is a big jump because eyefinity is spreading that resolution out over three screens.

Why would you need to double the transistors over and over, for it?

7680x4320 is 16 times more pixels than 1080p.

That, of course, would be worse than running sixteen 1080p monitors at the same time.

Then if we used that same resolution in triple screen Eyefinity it would be the equivalent of running forty-eight 1080p screens.
 

golem

Senior member
Oct 6, 2000
838
3
76
SI could be 10%, 50% or more faster; who knows until it comes out. I'm guessing it will be more than 10%, maybe much more, but it could be because they OCed the hell out of it to get that performance, or it could be a marvel of engineering and get the speed boost while running cooler and drawing less power.

But assuming everything will come out great, just like assuming the worst, is also just fooling yourself. Both companies have stumbled in the past with releasing uninspired cards after a great release, just like both companies have released great cards after a stinker.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
SI could be 10%, 50% or more faster; who knows until it comes out. I'm guessing it will be more than 10%, maybe much more, but it could be because they OCed the hell out of it to get that performance, or it could be a marvel of engineering and get the speed boost while running cooler and drawing less power.

But assuming everything will come out great, just like assuming the worst, is also just fooling yourself. Both companies have stumbled in the past with releasing uninspired cards after a great release, just like both companies have released great cards after a stinker.

Yeah, it feels just like yesterday when nVidia released the awesome 8 series. Then again, they did take their foot off the gas with re-badges and AMD caught up.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
It is a big jump because eyefinity is spreading that resolution out over three screens.
That's a limitation of the displays, and the connection standards. Both of which are improving, given time. The standards themselves lag behind all the rest of the hardware, except for the bare panels (big monitors are still made with multiple smaller panels). Eyefinity is using the same chip for three of those displays, and could probably drive all six, right now, were it economical to include the additional transmitter parts in every Radeon 58xx GPU.

7680x4320 is 16 times more pixels than 1080p.
Yes, but you act as though 1080p is a high resolution, as well. It's a video storage resolution, just good enough to be noticeably superior to DVD. It is now in low-end displays. Higher is easy to come by, and will only become cheaper over time. Video cards have been handling higher resolutions than 1080p for at least ten years, now. As one example, nine years ago, Radeon 8500 cards could do 2048x1536@60Hz.

When you can buy a monitor that can do these high resolutions, that will be amazing. The video card supporting them is AMD hyping up something they seem to be really good at, which was also something ATi was good at before they got bought. It's not that it's meaningless, and AMD very well should be hyping it up, but I see it more along the lines of, "it's about time we got moving on this, again," rather than, "wow, that's just ridiculous."

Now I'm getting all nostalgic for a G550 or Voodoo 3 driving a Trinitron CRT...
That, of course, would be worse than running sixteen 1080p monitors at the same time.
Worse? You seem to be making this assumption that the limitation, outside of games, is one of GPU processing power, which needs more transistors over all else. It's not. DVI, DisplayPort, and HDMI: these are the real limitations. DVI was needed, and came out half-baked. HDMI was desired by content companies, and came out half-baked. HDMI and DisplayPort are both being gradually molded into good interfaces for the future, though.

Overall, it's like saying CPUs will need to double their transistors for another 8-10 years, because your network can only move 100MB/s.
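To put rough numbers on that bottleneck (ignoring blanking overhead; the per-link figures are the nominal video data rates of the current standards, so treat them as approximations):

Code:
# Uncompressed 7680x4320 @ 60 Hz, 24-bit colour, ignoring blanking intervals.
bits_per_sec = 7680 * 4320 * 60 * 24
print(bits_per_sec / 1e9)              # ~47.8 Gbit/s

# Nominal video data rates of the current links (Gbit/s):
links = {
    "dual-link DVI":   7.92,
    "HDMI 1.3/1.4":    8.16,
    "DisplayPort 1.1": 8.64,
    "DisplayPort 1.2": 17.28,
}
for name, gbps in links.items():
    print("%s: need ~%.1f links" % (name, bits_per_sec / (gbps * 1e9)))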

Then if we used that same resolution in triple screen Eyefinity it would be the equivalent of running forty-eight 1080p screens.
I will look forward to that.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Yeah, 7680x4320 sounds absolutely ridiculous.

That is 16 times more pixels than today's 1080p.

But that shouldn't be too much for video cards to handle in 8-10 years, assuming semiconductor manufacturers are able to maintain a doubling of xtors every 2 years.

I just wonder how affordable these LCDs will be? Or will stacked OLEDs be really cheap by that time?

We would reach the limit of physical matter in less than 10 years if we managed to keep doubling transistor density every 2 years. We'd be working with individual groups of atoms, such as graphene gates and carbon nanotubes. Can't physically get any smaller. We'll have to start building chips in 3 dimensions and creating new types of chips, new tech.
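A rough shrink-only projection shows why (this assumes every density doubling comes purely from a ~1.4x linear shrink of a 40nm-class process like today's GPUs use, which is a simplification):

Code:
import math

feature_nm = 40.0
silicon_lattice_nm = 0.543   # silicon lattice constant, for scale

for year in (2, 4, 6, 8, 10):
    feature_nm /= math.sqrt(2)          # one density doubling every 2 years
    print("+%d yr: ~%.1f nm (~%.0f lattice constants)"
          % (year, feature_nm, feature_nm / silicon_lattice_nm))
# After ~10 years this lands around 7 nm -- only a dozen or so silicon
# lattice constants across, i.e. you are counting atoms.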
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Crossfire scaling with 4xxx was better than it is with 5xxx (look at the 4870X2); who is to say SI crossfire won't scale better than the current implementation?

Nvidia's time would be better spent optimizing GF100 to go against SI.

You are right, it's because the shaders on the Evergreen architecture have lower IPC compared to the previous generation HD 4x00. They removed some tweaks to be able to fit the chip in a reasonable die size. That's why an HD 5770 running at an 850MHz core can't outperform the HD 4870 1GB, which runs at 750MHz; even with recent drivers it can only match it now.

The same thing happens with the HD 4550 and HD 5450: the latter has a 50MHz core clock advantage and identical bandwidth, and it loses against the HD 4550. The performance jump from the HD 3870 to the HD 4870 was far more impressive than the performance jump from the HD 4870 to the HD 5870. Let's see what S.I. can bring to the table; AFAIK it uses the N.I. uncore (TMUs, ROPs, etc.) and the Evergreen shaders.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Your argument makes no sense at all. If ATI is going to increase die size 18-19%, you think performance will go up only 10%? You didn't account for the increased efficiency of shaders? Or other aspects of the GPU? You are assuming that all things will remain equal and they will simply add extra shaders of the same complexity?

We have already seen NV shrink the size of Fermi into GTX460 while producing superior texture fill-rate to GTX470. Therefore, trying to predict performance based on die size alone is not sufficient unless you know the architectural internals of the GPU.

I believe the S.I. card will be to the gtx 485 as the 4890 was to the gtx 285.
Very close in performance but no cigar.:)
 

gorobei

Diamond Member
Jan 7, 2007
3,957
1,443
136
7680x4320 pixels......
Is just 2560x1440 in a 3x3 eyefinity array. Current eyefinity does 3x1 30" displays, so they appear to be scaling up. Really only applies to commercial advertising at that size.
Guess we can expect an eyefinity9 card or some eyefinity crossfire setup in the next release.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
I believe the S.I. card will be to the gtx 485 as the 4890 was to the gtx 285.
Very close in performance but no cigar.:)

What makes you think there will be a 485? AMD has had a whole year to optimize the Evergreen shaders plus add bits of NI in there too (they would be faster or more efficient, or there would be no point).

Not to mention that more experience on 40nm with that architecture will yield higher clocks.

A 6870 with just under 2000 SPs at 1GHz is gonna be no slouch.

I'm just pulling numbers out of my ass here.
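For a rough sense of scale, peak shader throughput on these VLIW parts is just SPs x 2 ops (multiply-add) x clock; the 2000 SP / 1GHz part below is the made-up number from above, not a real spec:

Code:
def peak_gflops(stream_processors, clock_ghz):
    # Each stream processor can issue one multiply-add (2 FLOPs) per clock.
    return stream_processors * 2 * clock_ghz

print(peak_gflops(1600, 0.850))  # HD 5870: 2720 GFLOPS (its known spec)
print(peak_gflops(2000, 1.000))  # hypothetical part: 4000 GFLOPS, ~47% more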
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
You are right, it's because the shaders on the Evergreen architecture have lower IPC compared to the previous generation HD 4x00. They removed some tweaks to be able to fit the chip in a reasonable die size. That's why an HD 5770 running at an 850MHz core can't outperform the HD 4870 1GB, which runs at 750MHz; even with recent drivers it can only match it now.

The same thing happens with the HD 4550 and HD 5450: the latter has a 50MHz core clock advantage and identical bandwidth, and it loses against the HD 4550. The performance jump from the HD 3870 to the HD 4870 was far more impressive than the performance jump from the HD 4870 to the HD 5870. Let's see what S.I. can bring to the table; AFAIK it uses the N.I. uncore (TMUs, ROPs, etc.) and the Evergreen shaders.

Yes, but I was saying that 4xxx had scaling like this: http://www.anandtech.com/show/2725/6
Evergreen's scaling hasn't come anywhere near that. It might just be the games, the drivers or the architecture, I don't know.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
What makes you think there will be a 485? AMD has had a whole year to optimize the Evergreen shaders plus add bits of NI in there too (they would be faster or more efficient, or there would be no point).

Not to mention that more experience on 40nm with that architecture will yield higher clocks.

A 6870 with just under 2000 SPs at 1GHz is gonna be no slouch.

I'm just pulling numbers out of my ass here.

7900gtx - 7900gtx 512
8800gtx -8800 ultra
gtx 280- gtx 285
gtx 480 - gtx 485

It just fits. :)

A full 512 sp gtx 485 at *900 core* with twin ultra belt driven 240cm super fans should do it? :) :) and a fixed memory controller would help.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
7800gtx - 7900gtx 512
8800gtx -8800 ultra
gtx 280- gtx 285
gtx 480 - gtx 485

It just fits. :)

A full 512 sp gtx 485 at *900 core* with twin ultra belt driven 240cm super fans should do it? :) :) and a fixed memory controller would help.

Fixed that for you.

The 1st 3 cards on that list were just overclocked and they didn't have the heat and power issues of the 480.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The GTX-485? Are you talking about this? :D

http://en.expreview.com/2010/08/09/world-exclusive-review-512sp-geforce-gtx-480/9070.html

Avg. 5.67 percent increase in performance over the 480 part
Uses 200W more power under load
94°C with the same cooler (Accelero Xtreme) that keeps the HD 5970 toxic @ 67°C
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
The GTX-485? Are you talking about this? :D

http://en.expreview.com/2010/08/09/world-exclusive-review-512sp-geforce-gtx-480/9070.html

Avg. 5.67 percent increase in performance over the 480 part
Uses 200W more power under load
94°C with the same cooler (Accelero Xtreme) that keeps the HD 5970 toxic @ 67°C

NO, no, not the old memory-leaking engineering sample that Nvidia is baiting ATI with. :D

I firmly believe Nvidia has a full Fermi coming with higher clocks, more SPs and about the same power usage.

And don't forget about the full GF104. I expect that sucker to be clocked at 850 core stock and to be a little faster than the 5870.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
NO, no, not the old memory-leaking engineering sample that Nvidia is baiting ATI with. :D

I firmly believe Nvidia has a full Fermi coming with higher clocks, more SPs and about the same power usage.

And don't forget about the full GF104. I expect that sucker to be clocked at 850 core stock and to be a little faster than the 5870.

Taking them a while, don't you think? :p
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
NO, no, not the old memory-leaking engineering sample that Nvidia is baiting ATI with. :D

I firmly believe Nvidia has a full Fermi coming with higher clocks, more SPs and about the same power usage.

And don't forget about the full GF104. I expect that sucker to be clocked at 850 core stock and to be a little faster than the 5870.

Got a link? At least I gave a link. Earlier you were asking for links for the 6*** series. :p (I'm pretty sure it was you, anyway. ;))