AMD HD7*** series info


james1701

Golden Member
Sep 14, 2007
1,791
34
91
What about drivers? If the high-end cards are the test bed for a new architecture, will that change how the drivers are written? If so, what about the next generation, when all the new cards are produced that way? Will AMD switch gears with its driver team and mostly only put out new stuff for the newer cards? Will that leave everyone out in the cold with a card that is only a year or two old?
 

Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
I believe 28nm could bring up to a 44% reduction in power consumption, but I don't think a direct comparison between the cards is valid due to the different architectures (SIMD vs. VLIW4 vs. GCN).
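For what it's worth, here's a back-of-the-envelope sketch of where a figure like 44% could come from, assuming dynamic power scales as P ∝ C·V²·f, capacitance scales linearly with feature size, and the new node allows a ~10% voltage drop (all of these are simplifying assumptions, not published figures):

```python
# Rough dynamic-power scaling sketch for a 40nm -> 28nm shrink.
# Assumes P ~ C * V^2 * f with C scaling linearly with feature size
# and frequency held constant; the voltages are guesses, not specs.
c_scale = 28 / 40          # capacitance ratio, ~0.70
v_old, v_new = 1.10, 0.99  # hypothetical core voltages (10% drop)

p_ratio = c_scale * (v_new / v_old) ** 2
print(f"power ratio: {p_ratio:.2f}")    # ~0.57
print(f"reduction:   {1 - p_ratio:.0%}")  # ~43%, close to the 44% figure
```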
 
Last edited:

DeathReborn

Platinum Member
Oct 11, 2005
2,786
789
136
Yes, we should take these rumours with a grain of salt, but hey, this is the first "Chinese leak". They usually pop up in the month before the launch date.

I'm sure I saw this info somewhere a few months back. I'm still trying to figure out where, but it is very familiar.

If it's true, then RAMBUS is back in people's PCs? I'm not sure how that's going to go down in some quarters.
 

Saico

Member
Jul 6, 2011
53
0
0
I'm sure I saw this info somewhere a few months back. I'm still trying to figure out where, but it is very familiar.

If it's true, then RAMBUS is back in people's PCs? I'm not sure how that's going to go down in some quarters.

The 6xxx series was planned for 32nm production. The 1920/2048 ALU numbers for a new generation were floating in the air exactly one year ago. But 32nm was cancelled, and AMD was forced to cut down its chips so they would fit 40nm. That is why we have the 1536/1408-ALU 69xx cards, and it is when the new generation was split into the Northern/Southern Islands line-ups.

The new XDR2 memory is expensive, but more future-proof than GDDR5. I don't mind AMD putting it into high-end cards.
 
Feb 19, 2009
10,457
10
76
The leaks have to come around now if they plan on a release within a few weeks. AIBs need to stock up on cards and ship them.

There have been rumors of dual production at GF as well, so it may be true that the 78xx series and below will retain the current architecture, just shrunk to reduce power use/die size. It could be made at GF while TSMC focuses on the high-end 79xx, with its new architecture and the inherent risks that brings.

It's actually a very clever strategy, given how efficient their current-gen stuff is. If a small die can offer ~GTX 580 performance at 120W, that would be a huge win, and NV would most likely be unable to compete in the perf/watt category all over again. We can speculate on 78xx performance since it's a known arch. But who knows how the 79xx will turn out. It's highly likely AMD is going to stick to a 256-bit bus on it, and therefore the die size will be small (think sweet spot), unlike the 69xx, which is a 32nm design on a 40nm process.
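To put a rough number on that "huge win" (244W is the GTX 580's official TDP; a 120W part matching it is, of course, the hypothetical above):

```python
# Perf/watt advantage if a hypothetical 120W part matches a GTX 580.
gtx580_tdp = 244     # official GTX 580 TDP in watts
small_die_tdp = 120  # hypothetical 28nm part from the post above
print(f"perf/watt advantage: {gtx580_tdp / small_die_tdp:.2f}x")  # ~2.03x
```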

The 7990 will be godly (excellent performance without crazy power and noise issues); that is my prediction.

Edit: My only concern is the driver team. Writing drivers for multiple architectures is tough enough, but now they have a whole new arch to optimize for. It took them months after launch to optimize the 69xx series.
 
Last edited:

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Which, to me, sounds freaking amazing coming so soon after the 580.

Huh? It will be at least a year later than the GTX 580, and on a new process node. And the 580 was only a respin with minor performance improvements (though major improvements in power/heat/noise) over the GTX 480. Remember that we saw the cardboard-box launch of the GTX 480 in fall/winter '09, and that probably would have been the actual launch date if TSMC had had their s**t together back then.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
We need to get past this "It's TSMC's fault" idea. AMD used TSMC as well and wasn't delayed and didn't have yield problems. Even JHH said the fault lay with nVidia: two parts of the design team didn't communicate properly. It was less than a year ago that they finally got a fully functioning GPU in sufficient quantities to release it.
 

dpk33

Senior member
Mar 6, 2011
687
0
76
So when can we expect the mid-to-high-end HD 7000 series to be out? And what performance would they match in the HD 6000 series? E.g., would the 7870 match the performance of the 6970, or something like that?
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Why are they just now using XDR2? Hasn't that been available at 8GHz for 5 or 6 years?
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
We need to get past this "It's TSMC's fault" idea. AMD used TSMC as well and wasn't delayed and didn't have yield problems. Even JHH said the fault lay with nVidia: two parts of the design team didn't communicate properly. It was less than a year ago that they finally got a fully functioning GPU in sufficient quantities to release it.
IMHO, you should be quoting this line of thought as well, which is also some kind of excuse. For what? Performance per watt?
But who knows how the 79xx will turn out. It's highly likely AMD is going to stick to a 256-bit bus on it, and therefore the die size will be small (think sweet spot), unlike the 69xx, which is a 32nm design on a 40nm process.

Who forced them to compromise on design?
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Nothing less than 100% faster than the previous dual-GPU card is acceptable, in my opinion.
The 5870 is a bit faster than a 4870X2, and the GTX 480 was a bit faster than a GTX 295.

With a full process node shrink, both Nvidia and AMD should beat their GTX 590 and 6990 cards with a single-GPU card.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Nothing less than 100% faster than the previous dual-GPU card is acceptable, in my opinion.
The 5870 is a bit faster than a 4870X2, and the GTX 480 was a bit faster than a GTX 295.

With a full process node shrink, both Nvidia and AMD should beat their GTX 590 and 6990 cards with a single-GPU card.

I'd generally agree, but performance per watt is quickly becoming a more important factor than flat-out performance, and, especially with Nvidia, adding more and more compute features is definitely high up on the list. Another obstacle AMD and Nvidia face in releasing hardware that is simply faster is that, with so many current users running hardware that is "fast enough" for everything they do, new functional features become a more crucial way to distinguish products from their own previous lineup and from competitors' current lineups. Hence AMD touting triple-display output with one card, and Nvidia more aggressively touting tessellation, 3D Vision, and PhysX.
 

yours truly

Golden Member
Aug 19, 2006
1,026
1
81
AMD Radeon HD 7000 "Southern Islands" (28nm):
  HD 7990  New Zealand  GCN
  HD 7970  Tahiti XT    GCN    1000MHz  32 CUs    2048 ALUs  128 TMUs  64 ROPs  256-bit XDR2   8.0Gbps  256GB/s  2GB  190W  HP
  HD 7950  Tahiti Pro   GCN     900MHz  30 CUs    1920 ALUs  120 TMUs  64 ROPs  256-bit XDR2   7.2Gbps  230GB/s  2GB  150W  HP
  HD 7870  Thames XT    VLIW4   950MHz  24 SIMDs  1536 ALUs   96 TMUs  32 ROPs  256-bit GDDR5  5.8Gbps  186GB/s  2GB  120W  HPL
  HD 7850  Thames Pro   VLIW4   850MHz  22 SIMDs  1408 ALUs   88 TMUs  32 ROPs  256-bit GDDR5  5.2Gbps  166GB/s  2GB   90W  HPL
  HD 7670  Lombok XT    VLIW4   900MHz  12 SIMDs   768 ALUs   48 TMUs  16 ROPs  128-bit GDDR5  5.0Gbps   80GB/s  1GB   60W  HPL
  HD 7570  Lombok Pro   VLIW4   750MHz  12 SIMDs   768 ALUs   48 TMUs  16 ROPs  128-bit GDDR5  4.0Gbps   64GB/s  1GB   50W  HPL

Rumours from http://bbs.expreview.com/thread-46257-1-1.html
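Quick sanity check on that table (my own arithmetic, not part of the leak): the bandwidth column is just bus width times per-pin data rate, so the numbers are at least internally consistent. A minimal sketch:

```python
# Memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
# Specs copied from the rumoured table above.
cards = {
    "HD 7970": (256, 8.0),  # 256-bit XDR2 @ 8.0Gbps
    "HD 7950": (256, 7.2),  # 256-bit XDR2 @ 7.2Gbps
    "HD 7870": (256, 5.8),  # 256-bit GDDR5 @ 5.8Gbps
    "HD 7850": (256, 5.2),  # 256-bit GDDR5 @ 5.2Gbps
    "HD 7670": (128, 5.0),  # 128-bit GDDR5 @ 5.0Gbps
    "HD 7570": (128, 4.0),  # 128-bit GDDR5 @ 4.0Gbps
}
for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits / 8 * gbps:.0f} GB/s")
# Prints 256, 230, 186, 166, 80, 64 GB/s -- matching the table.
```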

Are these supposed to be PCI-E 3.0 cards? I wonder if there would be any loss of performance running them in PCI-E 2 motherboards.

(sorry I don't know a great deal about computers!)
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Are these supposed to be PCI-E 3.0 cards? I wonder if there would be any loss of performance running them in PCI-E 2 motherboards.

(sorry I don't know a great deal about computers!)
There will be zero loss using them in PCI-E 2 boards.
 

yours truly

Golden Member
Aug 19, 2006
1,026
1
81
Ah, OK, thanks. Kind of glad I only ordered a 6870 now.

These new cards look promising, but I hope they're not good for *bitcoining; I couldn't find an HD 6950/70 for love nor money last week.

* I'm assuming that's why they are sometimes hard to find.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Edit: My only concern is the driver team. Writing drivers for multiple architectures is tough enough, but now they have a whole new arch to optimize for. It took them months after launch to optimize the 69xx series.

This comment, along with the active Civ V thread going on, made me wonder out loud: AMD beat Nvidia to the DX11 punch by six months, yet they still haven't implemented one of the most important DX11 features in their drivers: multi-threaded rendering. What is up with that?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Which, to me, sounds freaking amazing coming so soon after the 580.

How long have you followed video cards? A true next generation is usually 75-100% faster. 50% faster than a GTX 580 (which is just a massaged GTX 480), almost 1.5 years after the GTX 480 was released (and it was 6 months late), is really a minimum expectation. I don't see anything amazing in that. We have had minimal (15-25%) performance increases in the 2 years since the HD 5870. It's about time we get a 75-100% performance increase with the HD 7970/Kepler. 50% is actually underwhelming, imo.

So far the mid-range parts look completely underwhelming, to be honest. I realize that the HD 6870 couldn't be faster than the HD 5870 since it was still stuck at 40nm. But I fully expect the 28nm mid-range parts to be faster than the HD 6970 (which was hardly more than a die shrink+ vs. the HD 5870).

Are these supposed to be PCI-E 3.0 cards? I wonder if there would be any loss of performance running them in PCI-E 2 motherboards.

(sorry I don't know a great deal about computers!)

No, PCIe 2.0 x16 isn't even fully utilized by any current video card. There is only a 2-3% performance difference going down to PCIe 2.0 x8, and another 5-6% going down to PCIe 2.0 x4. So even at PCIe 2.0 x4 there is less than a 10% penalty, and I don't see any difference between PCIe 3.0 and PCIe 2.0 (x16).
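For reference, here's the raw link-bandwidth math behind those numbers. The per-lane rates come from the PCIe 2.0/3.0 specs (5 GT/s with 8b/10b encoding vs. 8 GT/s with 128b/130b); the rest is just multiplication:

```python
# Theoretical one-way PCIe link bandwidth per generation and lane count.
per_lane_mb = {
    "2.0": 500.0,               # 5 GT/s * 8/10 encoding = 500 MB/s per lane
    "3.0": 1000.0 * 128 / 130,  # 8 GT/s * 128/130 encoding ~ 985 MB/s per lane
}
for gen, mb in per_lane_mb.items():
    for lanes in (16, 8, 4):
        print(f"PCIe {gen} x{lanes}: {mb * lanes / 1000:.1f} GB/s")
# PCIe 2.0 x16 = 8.0 GB/s; even x4 (2.0 GB/s) only costs a few percent
# in games, so the PCIe 3.0 x16 headroom (~15.8 GB/s) goes mostly unused.
```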
 
Last edited:

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
No, PCIe 2.0 x16 isn't even fully utilized by any current video card. There is only a 2-3% performance difference going down to PCIe 2.0 x8, and another 5-6% going down to PCIe 2.0 x4. So even at PCIe 2.0 x4 there is less than a 10% penalty, and I don't see any difference between PCIe 3.0 and PCIe 2.0 (x16).

I think those few percent all come from the brief periods when a lot of textures need to be loaded at once and the PCIe bus gets saturated, so PCIe 3.0 will probably show a couple more percent of gain in those specific instances, but it's definitely not a big deal.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'd generally agree, but performance per watt is quickly becoming a more important factor than flat out performance and, especially with Nvidia, adding more and more compute features is definitely high up on their list. Another obstacle AMD and Nvidia are facing with releasing hardware that is simply faster is that with so many current users running hardware that is "fast enough" for everything they do, new functional features becomes a more crucial way to distinguish products from their own previous lineup and competitor's current lineup as well. Hence why AMD is touting triple display with 1 card, and Nvidia is more aggressively touting tessellation, 3D vision, and physx.

Excellent post. Console ports aren't helping either. With every rumor pointing to the PS4 and Xbox 720 (or whatever) being delayed into the 2013-2014 period, it looks like even if the next generation of cards is 2x faster, they are still going to be used to run games like Deus Ex that aren't really pushing the graphics envelope.

I welcome new cards, no doubt, but graphics have almost stagnated, and low- and mid-range offerings are stagnating in performance. Looking at the HD 7670-7870 (if those specs are correct), AMD is more concerned with having chips that consume less power, perhaps so that they can adapt them to mobile offerings easily. You can't really blame them, since the discrete mobile GPU market is where most of their growth will be. If those specs are correct, on the desktop it looks like nothing less than the HD 7950 is even worth considering.

I understand the focus on power consumption for laptops, but to me this new focus on desktop GPU power consumption at all costs is what's undermining historically acceptable performance gains. I don't know why so many people suddenly started "caring about the environmental impact" of increased power consumption on the desktop, since desktop discrete GPUs are a luxury in the first place...
 
Last edited:

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
I understand the focus on power consumption for laptops, but to me this new focus on desktop GPU power consumption at all costs is what's undermining historically acceptable performance gains. I don't know why so many people suddenly started "caring about the environmental impact" of increased power consumption on the desktop, since desktop discrete GPUs are a luxury in the first place...

They were bound to run into a power limit. Cards just can't keep using more and more power, as they historically had been doing pre-8800. Most high-end cards have hovered around the wall that the 8800 Ultra hit, the exceptions being Fermi and dual-GPU cards.

And I fail to see why GPUs being a luxury and energy efficiency should be mutually exclusive. They can be both. And for those who don't care about energy efficiency, there are always dual, triple, and quad cards.

Since we have this wall, squeezing as much performance as possible under a power envelope is not really undermining performance gains at all.