AMD 7000 Series Desktop Graphics Parts Delayed to Q2 or Q3 2012?

Status
Not open for further replies.

tincart

Senior member
Apr 15, 2010
630
1
0
I want to upgrade over Christmas, so if AMD or nVidia manage to get the next-gen out before 2012, they will have my money.
 

wirednuts

Diamond Member
Jan 26, 2007
7,121
4
0
Why do people say AMD is great at GPUs and bad at CPUs? AMD just sucks at CPUs, and always has; the only good chips they made were basically Intel clones.

ATI still makes the GPUs. I don't care if they are labeled AMD; that company didn't fire all of ATI's staff, they just changed their theme colors.
 

96Firebird

Diamond Member
Nov 8, 2010
5,738
334
126
Looks like I'll be better off getting another 460 for SLI to hold me over until these 28nm parts arrive...
 
Feb 19, 2009
10,457
10
76
A mid-range 28nm 7870 offering 6970 +10% performance (my estimate, no source) at half the power (so less heat/noise) and a ~$200 price point is a winner. AMD/ATI seems to think they can release it in early December, and they have been saying so publicly.

Just to reiterate, there's no reason to doubt ATI devs when they have made public statements regarding 28nm launch.

If it's anything like their previous launches, the mid-range 7870 goes in first with the high-end 7970 to follow. The only question is the time gap: if it's one month, that's not a big deal at all; if it's a few months, it gives NV time to release their 28nm parts at the same time.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Why do people say AMD is great at GPUs and bad at CPUs? AMD just sucks at CPUs, and always has; the only good chips they made were basically Intel clones.

Not true. The Athlon XP was superior in performance and price to early Pentium 4 chips. Intel needed to bring out the big guns, Hyper-Threading and much higher clock speeds, with the Pentium 4 "C" revision. Even then, AMD still held the edge in games and in performance per clock, as well as in price. When the Athlon 64 was introduced, it was all over for Intel on the desktop side. The Athlon 64 X2 and FX series completely buried the Pentium D. I can't comment on the time before that since I wasn't building computers before the Athlon XP, but with the Athlon XP, Athlon 64 and Athlon 64 X2, that's at least three generations of CPUs that were as good as or better than Intel's offerings.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Looks like GTX580 is going to hold the crown for quite a long time.

An NV director recently purchased $1.2 million of shares in the company. JHH stated that Kepler should have 3-4x the performance/watt vs. Fermi. IIRC, NV publicly stated that they have changed their corporate strategy: release major GPU families every 24 months, not every 15-18 months as in the past. As such, it was not unexpected for the GTX480/580 to remain a top NV GPU for a period of 24 months following the GTX480's April 2010 debut. If Kepler were to launch before April 2012, that would actually be ahead of NV's own schedule. Having said that, since the new generation has been in development for a much longer period of time, we can expect a very nice performance boost compared to the GTX480/580 and HD6970 parts.
 
Feb 19, 2009
10,457
10
76
JHH stated that Kepler should have 3-4x the performance/watt vs. Fermi.

What he says and what will happen are two totally different stories.

A 28nm shrink gives them room to double GTX580 performance IF they keep their huge-die strategy. That would mean 2x the perf/watt if everything scaled perfectly, which we all know never happens. A realistic expectation is 1.6x to 1.8x the perf/watt.
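That estimate can be sketched as a quick back-of-the-envelope calculation. The 2x ideal and the 80-90% efficiency figures below are this post's assumptions, not vendor numbers:

```python
# Rough perf/watt scaling for a full-node shrink (40nm -> 28nm),
# assuming transistor count roughly doubles in the same power budget.
ideal_scaling = 2.0  # 2x the performance at the same power, if everything scaled perfectly

# Real designs lose part of the ideal gain to leakage, interconnect, and
# imperfect architectural scaling; 80-90% efficiency is a guess, not data.
for efficiency in (0.8, 0.9):
    perf_per_watt = ideal_scaling * efficiency
    print(f"{efficiency:.0%} scaling efficiency -> {perf_per_watt:.1f}x perf/watt")
```

With those assumed efficiency factors, the output lands exactly on the 1.6x to 1.8x range quoted above.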
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Intel dominated prior to Athlons.

AMD was a budget CPU maker; then the Athlon made them awesome.

Uhhh...this is probably the worst time ever to bring up AMD CPUs.

Intel put out a turd with the P4... and will probably never be behind again. I don't see them giving up a 3-4 year lead.
 
Feb 19, 2009
10,457
10
76
That's exactly my point: prior to the Athlon, AMD was always a small player releasing budget alternatives to Intel. They are still at it now, so nothing much has changed. In fact, you could say the Athlon-era dominance is an oddity in their history.

I also don't see them outperforming Intel and releasing >$500 CPUs for a long time.
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
I doubt this unless:
1.) Major problems at TSMC
2.) Major design problems with GCN

From what AMD is saying/showing they seem pretty confident that cards will be out this year.
 

utahraptor

Golden Member
Apr 26, 2004
1,069
244
116
I doubt this unless:
1.) Major problems at TSMC
2.) Major design problems with GCN

From what AMD is saying/showing they seem pretty confident that cards will be out this year.

And by this year of course you mean a single $2000 laptop will feature a mobile 7000 series part in late December.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
An NV director recently purchased $1.2 million of shares in the company. JHH stated that Kepler should have 3-4x the performance/watt vs. Fermi.

Unfortunately, I do believe that was in reference to GPGPU (double-precision floating point) and therefore is not directly related to graphics performance (although I'm sure they will still dramatically improve performance per watt).

What he says and what will happen are two totally different stories.

A 28nm shrink gives them room to double GTX580 performance IF they keep their huge-die strategy. That would mean 2x the perf/watt if everything scaled perfectly, which we all know never happens. A realistic expectation is 1.6x to 1.8x the perf/watt.

I like how you always assume the best scenarios for AMD future hardware (6950 having 5970 performance and 50% more tessellation power than gtx480 LOL), and choose to assume moderate to poor improvements for Nvidia. With nearly the exact same die size and on the exact same node process, Nvidia improved the performance per watt of Fermi G1 vs. G2 by 25%. No die shrink, no new architecture, no massive revisions.

Your EXTREME bias when talking about companies you do not prefer oozes from every post you make.
 
Last edited:

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Unfortunately, I do believe that was in reference to GPGPU (double-precision floating point) and therefore is not directly related to graphics performance (although I'm sure they will still dramatically improve performance per watt).



I like how you always assume the best scenarios for AMD future hardware (6950 having 5970 performance and 50% more tessellation power than gtx480 LOL), and choose to assume moderate to poor improvements for Nvidia. With nearly the exact same die size and on the exact same node process, Nvidia improved the performance per watt of Fermi G1 vs. G2 by 25%. No die shrink, no new architecture, no massive revisions.

Your EXTREME bias when talking about companies you do not prefer oozes from every post you make.

And how exactly is that worthy of praise? GF100 was horribly inefficient, and all the revisions did was take it from that to "okay" (GTX 560 Ti) to "still bad, but not horrible" (GTX 580). Only an AMD hater would deny that, as of now, they have a big advantage in performance/watt with their GPUs. They also have a lead in the manufacturing process, and I doubt either of those will disappear.
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
Holy cow... I clicked "reply" to your post, tviceman, and my screen went black, popped back on, and said in the lower right corner, "Nvidia display driver stopped working and is now working again"... I mean, what the heck. THIS JUST HAPPENED...

Anyway, I was going to say "the pot calling the kettle black", tviceman.

I think games like Crysis 2 got AMD and Nvidia thinking, "Why spend hefty amounts of money on new stuff when we can just relax and dish out 40nm parts that can easily cope with the challenging software that's out there?"

AMD could be making another R600 or could be onto something great with GCN; we will have to wait and see. Nvidia should have a beast in a 580 respin/refresh at 28nm.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
With nearly the exact same die size and on the exact same node process, Nvidia improved the performance per watt of Fermi G1 vs. G2 by 25%. No die shrink, no new architecture, no massive revisions.

That's some funky math there o_O Looks like a 7% improvement to me; not sure where you got 25% from?

480 - 421W
[AnandTech total system power chart: 22204.png]

580 - 389W
[AnandTech total system power chart: 33853.png]
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
And how exactly is that worthy of praise? GF100 was horribly inefficient, and all the revisions did was take it from that to "okay" (GTX 560 Ti) to "still bad, but not horrible" (GTX 580). Only an AMD hater would deny that, as of now, they have a big advantage in performance/watt with their GPUs. They also have a lead in the manufacturing process, and I doubt either of those will disappear.

I don't think anyone is saying the improvements from Fermi 1 to Fermi 2 are necessarily praiseworthy. However, it has been touted for the past two years that the Fermi architecture is inefficient, which clearly leaves more room for improvement on the NV side.

AMD also doesn't have an advantage in the manufacturing process. TSMC has been the sole manufacturer of AMD/ATI and NVIDIA high-end graphics chips for as long as I can remember. When TSMC has problems, AMD and NVIDIA both have problems. Of course, the size and complexity of the chip can exacerbate the problem.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
TSMC, maybe? And it hasn't been "awful". Just because some noobs who know maybe 0.01% about semiconductor manufacturing get upset by stories they read on the internet doesn't make it awful.


Yawn... I am sorry about spelling 'aweful' though. :)
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
That's some funky math there o_O Looks like a 7% improvement to me; not sure where you got 25% from?

480 - 421W

Performance per watt. Reread my original post.

http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/6
GTX580 - 38.1 fps
GTX480 - 32.7 fps

GTX580 = 16.5% faster

http://images.anandtech.com/graphs/graph4008/33853.png
GTX580 - 389 watts total system draw
GTX480 - 421 watts total system draw

GTX580 system = 7.6% less power (for complete system)

1.165 / 0.924 ≈ 1.26 (the 16.5% performance gain divided by the 389/421 power ratio), or, as I said, roughly 25% more performance per watt. And again, that is total system power draw. If the GTX580 vs. GTX480 power draw were isolated, the performance per watt in the same example you tried to disprove me with would be significantly higher than 25%. Please tell me if my math is still funky.

EDIT: TechPowerUp shows a 24% improvement - http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/28.html - 1% off from the AnandTech Crysis: Warhead benchmarks you linked.
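The arithmetic above can be checked directly. These are the AnandTech figures quoted in the post; note that perf/watt is the performance ratio divided by the power ratio:

```python
# Perf/watt comparison from the quoted AnandTech Crysis: Warhead numbers.
fps_580, fps_480 = 38.1, 32.7        # frames per second
watts_580, watts_480 = 389.0, 421.0  # total system power draw at the wall

perf_ratio = fps_580 / fps_480        # ~1.165, i.e. 16.5% faster
power_ratio = watts_580 / watts_480   # ~0.924, i.e. 7.6% less power
perf_per_watt_gain = perf_ratio / power_ratio

print(f"perf: +{(perf_ratio - 1):.1%}, power: {(power_ratio - 1):.1%}")
print(f"perf/watt: +{(perf_per_watt_gain - 1):.1%}")  # roughly +26% at the wall
```

Since the wattage figures include the whole system, the GPU-only perf/watt gain would indeed be larger than this.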
 
Last edited:

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
I don't think anyone is saying the improvements from Fermi 1 to Fermi 2 are necessarily praiseworthy. However, it has been touted for the past two years that the Fermi architecture is inefficient, which clearly leaves more room for improvement on the NV side.

AMD also doesn't have an advantage in the manufacturing process. TSMC has been the sole manufacturer of AMD/ATI and NVIDIA high-end graphics chips for as long as I can remember. When TSMC has problems, AMD and NVIDIA both have problems. Of course, the size and complexity of the chip can exacerbate the problem.

Yes, they do. Just because they use the same foundry doesn't mean AMD doesn't have a lead in the manufacturing process. If you work more closely with the foundry and have dies that are much smaller, you'll get much better yields and working samples sooner; hence an advantage in the manufacturing process.

And given the tone he said it in, he was presenting NVIDIA getting better perf/watt as something very noteworthy. It's not; it's like touting AMD getting 50% higher perf/watt from a Bulldozer revision when they're 100% behind Intel on that metric.
 