"AMD’s next-generation family of high-performance graphics cards is expected to ship..."

Page 9 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

AtenRa

Lifer
Feb 2, 2009

  • Node: TSMC 28nm HPL (High-Performance Low Power)
  • Architecture: VLIW4
  • GPU: Thames (die-shrunk Cayman)

I don't believe they are going to use HPL for desktop graphics, not even for the lower-performance chips. HPL is not suited for chips over 40-50W, and I don't even know if they will choose it for their mobile designs.
 
Feb 19, 2009
1. 2048 x 1000 MHz vs. 1536 x 880 MHz = 51.5% greater pixel fill-rate
2. 128 TMUs x 1000 MHz vs. 96 TMUs x 880 MHz = 51.5% greater texture fill-rate
3. 256 GB/sec vs. 176 GB/sec = 45% more bandwidth.

Either these specs are fake, each SP/TMU/ROP is way more powerful in GCN, or HD7970 will get clobbered by Kepler. A card that has 45-51% faster specs on paper on average will at best be 50% faster than HD6970, which will put it only 30% faster than a GTX580 (100% = HD6970, 115% = GTX580, 150% = HD7970).

I am going to say these specs are too conservative, or AMD has another 20% or so efficiency increase hiding in redesigned TMUs and SPs, or the HD6970 was ROP-starved.

I agree with this; 50% faster than the 6970 is asking to be destroyed by NV's 28nm Kepler. For a full node jump, the performance target should be 80%. They are NOT going to go backwards.

Once you have a huge die and a big TDP and have been successful with it, backing down to a smaller die size while your competitor is steaming ahead... just no.
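For anyone checking the math, those deltas fall straight out of the leaked numbers. A quick sketch; the unit counts and clocks below are the rumored figures from this thread, not confirmed specs:

```python
# Spec-sheet deltas: rumored HD 7970 figures vs. the HD 6970.
def pct_gain(new, old):
    """Percentage increase of new over old."""
    return (new / old - 1) * 100

shader = pct_gain(2048 * 1000, 1536 * 880)  # units x MHz
texture = pct_gain(128 * 1000, 96 * 880)    # TMUs x MHz
bandwidth = pct_gain(256, 176)              # GB/s

print(f"shader:    {shader:.1f}%")     # 51.5%
print(f"texture:   {texture:.1f}%")    # 51.5%
print(f"bandwidth: {bandwidth:.1f}%")  # 45.5%

# Relative positioning from the same post (HD 6970 = 100, GTX 580 = 115,
# HD 7970 = 150):
print(f"HD 7970 over GTX 580: {pct_gain(150, 115):.0f}%")  # 30%
```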
 

dennilfloss

Past Lifer 1957-2014 In Memoriam
Oct 21, 1999
dennilfloss.blogspot.com
Wondering if the transition to 28nm will allow the cards to become a reasonable length again for mid towers. My Radeon 4870 was considered big three years ago and I have only about 1/2-3/4" of free space in front of the card (and the cabling goes there). Otherwise, I'd need to dremel-cut the supporting pillars of my Antec 3700BQE HDD cage to accommodate a longer card.

 

3DVagabond

Lifer
Aug 10, 2009
If power draw goes down, then the cards should get physically smaller. The more power a component uses, the more components it needs to safely handle the current and the more metal it needs to shed the heat.
 
Feb 19, 2009
Yeah, but really, they seem to have a fixation on size; their cards are huge.

It goes back to the 5850/70 days: 11.5" for a card that consumed relatively little power. Completely unnecessary. GTX 460s consumed more power and were smaller, yet they also OC like nuts without exploding.

They could have bigger market potential if they made their mid-range cards around 9" max. Small gripe, but every bit helps.
 

blackened23

Diamond Member
Jul 26, 2011
I agree with this; 50% faster than the 6970 is asking to be destroyed by NV's 28nm Kepler. For a full node jump, the performance target should be 80%. They are NOT going to go backwards.

Once you have a huge die and a big TDP and have been successful with it, backing down to a smaller die size while your competitor is steaming ahead... just no.

You're arguing theoretical performance when nobody knows. Secondly, nobody gives a damn about high-end ($400+) discrete graphics anymore in the consumer market. Wait, I take that back: the number of people willing to pay $400+ for a GPU is shrinking daily and is becoming a niche (if it isn't already)... while IGPs are eating away at low-end discrete card sales. This will only get worse as IGP performance improves over the next year. Nvidia and AMD both recognize this; that's why Nvidia is branching out with Tegra. So smaller and better performance-per-watt is what matters more. If the 7xxx series accomplishes this while being 50% faster than a 6970, I'd call that successful. As well, if this does come out in Dec/Jan, it will be months ahead of Kepler anyway.

And oversized cards? Who cares? Honestly, I have a GTX 285 lying around and it's almost the same size as the 5870. Nitpicking over stupid stuff. I'm pretty sure the Cayman die is smaller than Fermi by quite a bit.
 

SirPauly

Diamond Member
Apr 28, 2009
Imho,

There are potential consumers who will pay for higher-end cards and do give a damn, which is why both AMD and nVidia offer products for these consumers.
 

AtenRa

Lifer
Feb 2, 2009
You're arguing theoretical performance when nobody knows. Secondly, nobody gives a damn about high-end ($400+) discrete graphics anymore in the consumer market. Wait, I take that back: the number of people willing to pay $400+ for a GPU is shrinking daily and is becoming a niche (if it isn't already)... while IGPs are eating away at low-end discrete card sales. This will only get worse as IGP performance improves over the next year. Nvidia and AMD both recognize this; that's why Nvidia is branching out with Tegra. So smaller and better performance-per-watt is what matters more. If the 7xxx series accomplishes this while being 50% faster than a 6970, I'd call that successful. As well, if this does come out in Dec/Jan, it will be months ahead of Kepler anyway.

And oversized cards? Who cares? Honestly, I have a GTX 285 lying around and it's almost the same size as the 5870. Nitpicking over stupid stuff. I'm pretty sure the Cayman die is smaller than Fermi by quite a bit.

Any link to support that?
 

blackened23

Diamond Member
Jul 26, 2011
Any link to support that?

http://techreport.com/discussions.x/21904

First, the manufacturers reportedly worry that TSMC will run into yield issues with its 28-nm process, just as the company did two years ago with the first batch of 40-nm products. On top of that, DigiTimes claims graphics card makers have seen their sales shrink lately—both at the low end, where CPUs with integrated graphics are replacing cheap discrete GPUs, and at the high end. (I'm guessing the stagnating hardware requirements for modern games don't help there.)
 

RussianSensation

Elite Member
Sep 5, 2003
You're arguing theoretical performance when nobody knows. Secondly, nobody gives a damn about high-end ($400+) discrete graphics anymore in the consumer market. Wait, I take that back: the number of people willing to pay $400+ for a GPU is shrinking daily and is becoming a niche (if it isn't already)....

Do you have a link to back that up?

1. Silver and I were both discussing the supposedly "accurate" leaked specs for a high-end HD7970 card. No one in this thread said with 100% certainty that Kepler will destroy the HD7970, since we don't know the final specs. However, IF those specs are legitimate and the HD7970 ends up "only" 50% faster than the HD6970, then considering the GTX580 is already 15% faster than an HD6970, of course Kepler will destroy it.

2. Maybe some people are OK with a card 50% faster than the HD6970. However, it's been more than 2 full years since the HD5870, and the HD6970 is barely faster than that. So from a performance perspective, it would be extremely underwhelming if the HD7970 is only 50% faster rather than 75-100%, especially considering GCN is the first ground-up architecture redesign since the HD2900 series.

3. I am pretty sure most of us would take a 250-275W card with 80-100% more performance than the HD6970 over a 190W card with 50% more performance if spending $300+. Power consumption matters more for low- and mid-range cards and the notebook market. For enthusiast cards, as long as the performance is there, people have no problem buying 200-250W cards (as this generation has shown us).

4. The high-end enthusiast market is NOT shrinking; it is actually expected to grow. AMD's graphics chip revenue was up 4% in Q3 2011 from a year ago per their recent earnings announcement. You are probably thinking of the low-end discrete market, which is shrinking: IGPs are taking away market share from <$100 discrete cards. However, Llano and HD3000 have zero effect on the enthusiast-level cards that gamers buy. In fact, when 28nm GPUs are released, I expect an increase in sales for those cards, as many gamers who have an HD5850/5870/6850/6870/GTX470/480/570/580 will want to upgrade.

5. The reason the overall discrete graphics card market "looks to be shrinking" is that most of the market share gains are being eaten by Intel's HD3000 and Llano. Think about it: 80% of CPU purchases are Intel's, which means Intel is automatically adding a GPU to the market share numbers regardless of whether people use it or not. I have a 2500K and don't use its GPU, but it counts as 1. So if I buy an AMD/NV card, it looks like AMD and NV didn't gain any market share, but that's incorrect: they *did* gain share, since I only use the discrete card. Intel's GPU in my rig shouldn't count whatsoever in the market share numbers, but it does....
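The double-counting argument in point 5 is easy to illustrate with made-up numbers (nothing below is real market data, just a toy sketch of the counting problem):

```python
# Hypothetical illustration: every Intel CPU ships with an IGP that gets
# counted as a GPU "sale", even in rigs where only a discrete card is used.
intel_igps = 80       # made-up: CPUs sold, each with an IGP attached
discrete_cards = 30   # made-up: AMD/NV discrete cards sold

total = intel_igps + discrete_cards
print(f"Intel share as counted: {intel_igps / total:.0%}")         # 73%
print(f"Discrete share as counted: {discrete_cards / total:.0%}")  # 27%

# If, say, 20 of those IGPs sit unused next to a discrete card, the
# share of graphics actually *used* looks quite different:
print(f"Discrete usage share: {discrete_cards / (total - 20):.0%}")  # 33%
```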
 

nitromullet

Diamond Member
Jan 7, 2004
Wondering if the transition to 28nm will allow the cards to become a reasonable length again for mid towers. My Radeon 4870 was considered big three years ago and I have only about 1/2-3/4" of free space in front of the card (and the cabling goes there). Otherwise, I'd need to dremel-cut the supporting pillars of my Antec 3700BQE HDD cage to accommodate a longer card.


The 4870 was about average for a high end card when it came out. Video card size is something you should plan for when you buy components.

Some personal guidelines I use: A case always has to be large enough to accommodate the current largest reference gaming card on the market (currently 5970 and 6990). A motherboard has to have at least two open slots between the primary PCI-E slots to allow for proper cooling of multiple dual slot cards. I generally build my systems around the video card(s) because they're usually the biggest, hottest, most power hungry, and expensive single component of any rig.
 

dennilfloss

Past Lifer 1957-2014 In Memoriam
Oct 21, 1999
dennilfloss.blogspot.com
The 4870 was about average for a high end card when it came out. Video card size is something you should plan for when you buy components.

I've had that case (which I love) for almost 6 years. It was chosen for its cooling and quietness, and it first housed an X1900XTX, then a 3870 for about a year, two cards that were a full inch shorter than the 4870, so it has already accommodated a much longer card than when I first put a system in it.

http://www.broadbandreports.com/forum/remark,15931038

Even the 1900XTX was considered a huge card when it came out.
 

-Slacker-

Golden Member
Feb 24, 2010
You can't estimate performance improvements on Tahiti by comparing its specs with Cayman's, because Tahiti's stream processors and Cayman's stream processors will likely be different (Graphics Core Next and all that).

RS said:
1. 2048 x 1000 MHz vs. 1536 x 880 MHz = 51.5% greater pixel fill-rate
2. 128 TMUs x 1000 MHz vs. 96 TMUs x 880 MHz = 51.5% greater texture fill-rate
3. 256 GB/sec vs. 176 GB/sec = 45% more bandwidth.

What about the doubling of the ROP count? Doesn't that usually make a huge difference?
 

videogames101

Diamond Member
Aug 24, 2005
Wondering if the transition to 28nm will allow the cards to become a reasonable length again for mid towers. My Radeon 4870 was considered big three years ago and I have only about 1/2-3/4" of free space in front of the card (and the cabling goes there). Otherwise, I'd need to dremel-cut the supporting pillars of my Antec 3700BQE HDD cage to accommodate a longer card.


I had an HD4870 and just upgraded to a 6870. To my surprise, the 6870 was at least an inch shorter.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
You can't possibly be serious

Why wouldn't I be? Why should we care what this guy has to say when all his comment amounts to is "herp derp XDR2 just isn't gonna happen LOL"? He's given zero evidence for his claims, and no one has proven that there's anything wrong at all with the specs in these leaks.



No, no, you didn't.

Yes, yes I did. If it weren't based on solid facts, Tom's would've simply presented it as a rumor. A LEAK is not a RUMOR. Since we have the leak now, it's your turn to prove with facts that there are things wrong with it, even though everything points to it being factual. Ball's in your court.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
I don't believe they are going to use HPL for desktop graphics, not even for the lower-performance chips. HPL is not suited for chips over 40-50W, and I don't even know if they will choose it for their mobile designs.

You clearly haven't been reading, then. LP (Low Power) and HPM (High-Performance Mobile) are the ones that will be used for mobile parts, while HPL and HP are for desktops.
 

Olikan

Platinum Member
Sep 23, 2011
Yes, yes I did. If it weren't based on solid facts, Tom's would've simply presented it as a rumor. A LEAK is not a RUMOR. Since we have the leak now, it's your turn to prove with facts that there are things wrong with it, even though everything points to it being factual. Ball's in your court.

Man, you should give Charlie more credit. I think he only went wrong once, when the 6xxx series was about to be released....
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
Man, you should give Charlie more credit. I think he only went wrong once, when the 6xxx series was about to be released....

I'll give him credit when he posts evidence for his claims. This is a leak, not a rumor.
 

Rifter

Lifer
Oct 9, 1999
Yes, yes I did. If it weren't based on solid facts, Tom's would've simply presented it as a rumor. A LEAK is not a RUMOR. Since we have the leak now, it's your turn to prove with facts that there are things wrong with it, even though everything points to it being factual. Ball's in your court.

If Tom's was sure, they wouldn't have added this to the end of their statement:

"(if the information leaked is accurate)"

Also, SemiAccurate has now stated those specs are wrong, so who are we to believe?

The answer is no one, until someone trustworthy (like Anand) has one in their hands to test.
 

3DVagabond

Lifer
Aug 10, 2009
1. 2048 x 1000 MHz vs. 1536 x 880 MHz = 51.5% greater pixel fill-rate
2. 128 TMUs x 1000 MHz vs. 96 TMUs x 880 MHz = 51.5% greater texture fill-rate
3. 256 GB/sec vs. 176 GB/sec = 45% more bandwidth.

Either these specs are fake, each SP/TMU/ROP is way more powerful in GCN, or HD7970 will get clobbered by Kepler. A card that has 45-51% faster specs on paper on average will at best be 50% faster than HD6970, which will put it only 30% faster than a GTX580 (100% = HD6970, 115% = GTX580, 150% = HD7970).

I am going to say these specs are too conservative, or AMD has another 20% or so efficiency increase hiding in redesigned TMUs and SPs, or the HD6970 was ROP-starved.

I don't know if these specs are real or not. It doesn't matter, though. We have no idea what the efficiency of GCN, or even of a refreshed VLIW4, will be. Look at the 5870 vs. the 6870: both are VLIW5, but you get similar performance (slightly less in a majority of situations, but more in a few) with far fewer SPUs and TAUs. And they've doubled the ROPs. That could approximately double the performance (in theory ;))!

Techspot: The HD 6870 has been downgraded from 1600 SPUs (Stream Processing Units) and 80 TAUs (Texture Address Units) to 1120 SPUs and 56 TAUs, while there are still 32 ROPs. It'll be interesting to see how this impacts the HD 6870, as it has 30% less SPUs and TAUs than the Radeon HD 5870
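The per-unit efficiency jump implied by that comparison is easy to put in numbers, assuming (as the post above does) that the HD 6870 roughly matches the HD 5870 at similar clocks:

```python
# Implied per-SPU efficiency gain if 1120 SPUs (HD 6870) do roughly the
# same work as 1600 SPUs (HD 5870) -- an assumption, not a measurement.
spus_5870, spus_6870 = 1600, 1120

fewer = (1 - spus_6870 / spus_5870) * 100     # how many fewer units
per_unit = (spus_5870 / spus_6870 - 1) * 100  # extra work per remaining SPU

print(f"{fewer:.0f}% fewer SPUs, ~{per_unit:.0f}% more work per SPU")
# -> 30% fewer SPUs, ~43% more work per SPU
```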
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
If Tom's was sure, they wouldn't have added this to the end of their statement:

"(if the information leaked is accurate)"

Also, SemiAccurate has now stated those specs are wrong, so who are we to believe?

The answer is no one, until someone trustworthy (like Anand) has one in their hands to test.

Wrong. It's just a statement put there so people like you don't go around whining about it not being completely set in stone. In any case, there's a 99% chance those are the specs.

Does SemiAccurate have a new leak? Didn't think so. Just because they disagree with it doesn't mean their disagreement is based on anything with a solid foundation.
 

Rifter

Lifer
Oct 9, 1999
Wrong. It's just a statement put there so people like you don't go around whining about it not being completely set in stone. In any case, there's a 99% chance those are the specs.

Does SemiAccurate have a new leak? Didn't think so. Just because they disagree with it doesn't mean their disagreement is based on anything with a solid foundation.

OK, well, you keep believing the leaks and I'll wait for an official announcement, then. You should probably look up all the "leaks" from the 6xxx and 5xxx launches, though, and see how many of them were complete BS.
 

Rifter

Lifer
Oct 9, 1999
Wrong. It's just a statement put there so people like you don't go around whining about it not being completely set in stone.

Also, I would NEVER take anything AMD says as set in stone, let alone unconfirmed leaks. If the BD launch proved anything, it's that AMD will lie to its customers and has no issue hanging their marketing department out to dry to take the heat for it.