Techpowerup/Chiphell/3DCenter: AMD 6990 launches March 8th - uses two 6970 Cores


busydude

Diamond Member
Feb 5, 2010
8,793
5
76
That track setting looks cool. I didn't get that deep into Dirt 2's levels; was there any forest racing like that?

Yes, in Malaysia.. and China.

And.. ewww, they are playing that game using a KB.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Maybe he's seeing micro stutter where others are not?

The comment on the page is:
"This is Colin McRae Dirt 3 on AMD's also-upcoming Radeon HD 6990, connected to three Full HD displays in an Eyefinity setup. While the graphics looked nice and crisp, we don't know if this was the full eye candy the game has to offer. The gameplay was, however, very smooth on the Radeon HD 6990, aka Antilles."


 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Besides anything PCI-SIG can or cannot do, what you are doing when you ignore the spec is selling a card that other equipment (mobos, for example) isn't designed to run with.

Unless the card exhausts all of its heat outside of the case, you are also dumping an incredible amount of heat into the case. If it does, then we are looking at one hell of a cooler if it can exhaust all of its heat, keep the card cool, and not sound like a Dustbuster.

Personally, I think it's a hack job to ignore the specs. It just shows what you are not capable of engineering.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
<- educated and experienced engineer

Besides anything PCI-SIG can or cannot do, what you are doing when you ignore the spec is selling a card that other equipment (mobos, for example) isn't designed to run with.

Unless the card exhausts all of its heat outside of the case, you are also dumping an incredible amount of heat into the case. If it does, then we are looking at one hell of a cooler if it can exhaust all of its heat, keep the card cool, and not sound like a Dustbuster.

Personally, I think it's a hack job to ignore the specs. It just shows what you are not capable of engineering.

I agree with this.

Why is it I feel that this thread is getting derailed into an "armchair engineer" thread again? :hmm:
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
<- educated and experienced engineer





Why is it I feel that this thread is getting derailed into an "armchair engineer" thread again? :hmm:

Well, it still strikes me as odd that they would always try to stick under the PCIe limit and then suddenly just say to hell with it. That's assuming these slides are correct.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
Yeah, it's looking more and more stupid. In the not-too-distant future we'll probably be measuring TDP in kilowatts, lol. I mean, 450 W for a single card is stupid even for an overclocked one, and if you want to CFX this bad boy you'll need a 1.5-kilowatt PSU. It's crazy.
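Rough back-of-envelope on that 1.5 kW figure (just a sketch; the 450 W is the rumored board power, everything else is an assumed typical high-end build, and the headroom factor is the usual "keep sustained draw around 75% of the rating" rule of thumb):

```python
# Back-of-envelope PSU sizing for two rumored 450 W cards (all figures are assumptions).
cards = 2
card_tdp_w = 450          # rumored HD 6990 board power
cpu_w = 130               # assumed high-end / overclocked CPU
rest_of_system_w = 100    # assumed drives, fans, board, RAM, conversion losses
headroom = 0.75           # assumed: keep sustained draw at ~75% of the PSU rating

peak_draw_w = cards * card_tdp_w + cpu_w + rest_of_system_w
recommended_psu_w = peak_draw_w / headroom
print(f"Peak draw ~{peak_draw_w} W, so roughly a {recommended_psu_w:.0f} W PSU")
# -> Peak draw ~1130 W, so roughly a 1507 W PSU
```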
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Well, it still strikes me as odd that they would always try to stick under the PCIe limit and then suddenly just say to hell with it. That's assuming these slides are correct.

Not sure of your age, and that is only relevant as I'm not sure whether you were "on the scene" of computers at the time, but the CPU industry went through a similar hesitation when it came to producing CPUs that generated so much heat that they actually required a fan to air-cool them.

Once the industry, and the consumer, got over their reluctance to the idea, the market never looked back and we quickly rushed towards the practical limits of conventional HSF technology.

The same thing happened with GPUs and multi-slot cooler solutions. There was an initial reluctance to "go there"... but once we did, it was fair game.

I see the 300W "boundary" as nothing more. It is an arbitrary value affixed within a spec that served a purpose in its time, but that time has come and gone.

I've no doubt there were practical reasons for the 300W limitation, but because of engineering progress in developing cost-conscious solutions that address those original concerns, I have no doubt the arbitrary 300W limit will be lifted.

When the DDR2 spec called for a max Vdimm of 1.95V, that was with the expectation/assumption that sticks of RAM would never have heat spreaders or active cooling. Then engineering developed heat spreaders and DIMM heatsinks with fans (Dominator series, etc.), and the arbitrary voltage limit of 1.95V no longer had a basis in engineering.

What you guys would claim to be hacks and engineering defeatism is actually the opposite. The spec limits exist because of the lack of engineering solutions to a real problem. When engineers resolve those real problems, it is not a hack, it is opportunity.

There may be other downsides to the solutions which result in you personally electing not to purchase the product, but that is a personal decision and nothing more.

I personally had no problem running my Mushkin Redlines at 2.2V as spec'd by Mushkin but in violation of the JEDEC DDR2 spec. Why? Because my mobo was designed for it, otherwise I wouldn't be able to set the Vdimm that high, and my PSU was designed for it.
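To put a rough number on why the cooling mattered there (a sketch only, assuming DRAM power scales roughly with the square of Vdimm and ignoring frequency and I/O effects):

```python
# Rough DRAM heat scaling with voltage (assumes power ~ Vdimm^2; a simplification).
jedec_vdimm = 1.95    # DDR2 spec ceiling
redline_vdimm = 2.20  # Mushkin's rated voltage for the Redlines

power_ratio = (redline_vdimm / jedec_vdimm) ** 2
print(f"~{(power_ratio - 1) * 100:.0f}% more heat to shed at {redline_vdimm} V")
# -> ~27% more heat to shed at 2.2 V, hence heat spreaders / DIMM fans
```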

And the existence of >300W video cards is not suddenly going to create an unforeseen dynamic inside people's computer cases. People have been tri/quad-SLI'ing and CrossFiring video cards for years, with the combined heat output well in excess of 300W.

This sort of bean-counting of the wattage per PCIe slot is silly and arbitrary. You scale your PSU and cooling solutions accordingly if you want the product; otherwise you don't buy it.
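For anyone keeping score, here is all the bean-counting amounts to (a sketch; the per-source ceilings are the commonly cited PCI-SIG figures, and the 250 W example is just a representative high-end single-GPU TDP):

```python
# Commonly cited PCIe graphics power ceilings, in watts (illustrative, not official spec text).
SLOT = 75        # PCIe x16 slot
SIX_PIN = 75     # 6-pin PEG connector
EIGHT_PIN = 150  # 8-pin PEG connector

def board_limit(six_pins=0, eight_pins=0):
    """In-spec board power for a card with the given auxiliary connectors."""
    return SLOT + six_pins * SIX_PIN + eight_pins * EIGHT_PIN

print(board_limit(six_pins=1, eight_pins=1))  # 300 -> the 300W "boundary" everyone cites
print(board_limit(eight_pins=2))              # 375 -> dual 8-pin, already past that boundary
print(2 * 250)                                # 500 -> two ~250W cards in CF/SLI dump more heat anyway
```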
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
Not sure of your age, and that is only relevant as I'm not sure whether you were "on the scene" of computers at the time, but the CPU industry went through a similar hesitation when it came to producing CPUs that generated so much heat that they actually required a fan to air-cool them.

Once the industry, and the consumer, got over their reluctance to the idea, the market never looked back and we quickly rushed towards the practical limits of conventional HSF technology.

The same thing happened with GPUs and multi-slot cooler solutions. There was an initial reluctance to "go there"... but once we did, it was fair game.

I see the 300W "boundary" as nothing more. It is an arbitrary value affixed within a spec that served a purpose in its time, but that time has come and gone.

I've no doubt there were practical reasons for the 300W limitation, but because of engineering progress in developing cost-conscious solutions that address those original concerns, I have no doubt the arbitrary 300W limit will be lifted.

When the DDR2 spec called for a max Vdimm of 1.95V, that was with the expectation/assumption that sticks of RAM would never have heat spreaders or active cooling. Then engineering developed heat spreaders and DIMM heatsinks with fans (Dominator series, etc.), and the arbitrary voltage limit of 1.95V no longer had a basis in engineering.

What you guys would claim to be hacks and engineering defeatism is actually the opposite. The spec limits exist because of the lack of engineering solutions to a real problem. When engineers resolve those real problems, it is not a hack, it is opportunity.

There may be other downsides to the solutions which result in you personally electing not to purchase the product, but that is a personal decision and nothing more.

I personally had no problem running my Mushkin Redlines at 2.2V as spec'd by Mushkin but in violation of the JEDEC DDR2 spec. Why? Because my mobo was designed for it, otherwise I wouldn't be able to set the Vdimm that high, and my PSU was designed for it.

And the existence of >300W video cards is not suddenly going to create an unforeseen dynamic inside people's computer cases. People have been tri/quad-SLI'ing and CrossFiring video cards for years, with the combined heat output well in excess of 300W.

This sort of bean-counting of the wattage per PCIe slot is silly and arbitrary. You scale your PSU and cooling solutions accordingly if you want the product; otherwise you don't buy it.

Explains a lot. Thanks for the write-up.
 

Red Storm

Lifer
Oct 2, 2005
14,233
234
106
I'd also lay some of the "blame" on the fact that we're still stuck on 40nm. I think the plan was to be on 32nm by now, which should have brought with it more efficiency, but AMD and Nvidia are making do with what they have.
 

RaistlinZ

Diamond Member
Oct 15, 2001
7,629
10
91
Not sure of your age, and that is only relevant as I'm not sure whether you were "on the scene" of computers at the time, but the CPU industry went through a similar hesitation when it came to producing CPUs that generated so much heat that they actually required a fan to air-cool them.

Once the industry, and the consumer, got over their reluctance to the idea, the market never looked back and we quickly rushed towards the practical limits of conventional HSF technology.

The same thing happened with GPUs and multi-slot cooler solutions. There was an initial reluctance to "go there"... but once we did, it was fair game.

I see the 300W "boundary" as nothing more. It is an arbitrary value affixed within a spec that served a purpose in its time, but that time has come and gone.

I've no doubt there were practical reasons for the 300W limitation, but because of engineering progress in developing cost-conscious solutions that address those original concerns, I have no doubt the arbitrary 300W limit will be lifted.

When the DDR2 spec called for a max Vdimm of 1.95V, that was with the expectation/assumption that sticks of RAM would never have heat spreaders or active cooling. Then engineering developed heat spreaders and DIMM heatsinks with fans (Dominator series, etc.), and the arbitrary voltage limit of 1.95V no longer had a basis in engineering.

What you guys would claim to be hacks and engineering defeatism is actually the opposite. The spec limits exist because of the lack of engineering solutions to a real problem. When engineers resolve those real problems, it is not a hack, it is opportunity.

There may be other downsides to the solutions which result in you personally electing not to purchase the product, but that is a personal decision and nothing more.

I personally had no problem running my Mushkin Redlines at 2.2V as spec'd by Mushkin but in violation of the JEDEC DDR2 spec. Why? Because my mobo was designed for it, otherwise I wouldn't be able to set the Vdimm that high, and my PSU was designed for it.

And the existence of >300W video cards is not suddenly going to create an unforeseen dynamic inside people's computer cases. People have been tri/quad-SLI'ing and CrossFiring video cards for years, with the combined heat output well in excess of 300W.

This sort of bean-counting of the wattage per PCIe slot is silly and arbitrary. You scale your PSU and cooling solutions accordingly if you want the product; otherwise you don't buy it.

Well said! Although your insight contradicts your user name. :D
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Yeah, those are pretty good rumors, considering TechSpot thought the 6990 would have 3840 SPs and NordicHardware says the 590 was supposed to be out last month.

I would trust Napoleon long before any of those sites.

Napoleon has been hit or miss lately as well.

Edit: looks like here's an example of a recent "miss":

Or they could go the route they did with the GTX 295: use all the shaders but cut down the memory interface. But anyway, are Chiphell and Napoleon the same source that Silverforce used for all his claims of the 6970 destroying the HD 5970, and for your assertions that the 6970 would be the fastest single GPU, or were those rumors from page-hit-grabbing sites being repeated?

Generally speaking I would still trust Chiphell and Napoleon over sites looking for page hits, but lately both camps have been very good at keeping their info close to the vest. AMD has even deliberately spread false info on at least one occasion, making it very difficult for leakers to do their "leaking" for fear of being cut off permanently.

FWIW, I think that if NV is using 2 x 570 then they'll at least double the RAM to 2 x 2.5GB.
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Take a look at this new Gigabyte mobo introduced yesterday. It's got all sorts of 'custom' power features.
It adds OC-PEG:
OC-PEG
OC-PEG provides two onboard SATA power connectors for more stable PCIe power when using 3-way and 4-way graphics configurations. Each connector can get power from a different phase of the power supply, helping to provide a better, more stable graphics overclock. The independent power inputs for the PCIe slots help to improve even single-graphics-card overclocking. For 4-way CrossFireX™, users must install OC-PEG to avoid over-current in the 24-pin ATX connector. The entire board also features POSCAPs, helping to simplify the insulation process so overclockers can quickly reach subzero readiness.
I wonder if "a different phase" means the 3.3V or 5V rails?
GIGABYTE GA-X58A-OC designed for extreme overclocking features

[Image: GIGABYTE GA-X58A-OC press shot]


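Rough idea of why a board would need that for 4-way (a sketch; the per-slot 12 V share and the per-pin rating are commonly cited figures, not anything from Gigabyte's own documentation):

```python
# Why 4-way setups can overload the 24-pin ATX connector (rough, assumed figures).
SLOT_12V_W = 66       # commonly cited 12 V share of the 75 W PCIe slot budget (~5.5 A)
ATX24_12V_PINS = 2    # the 24-pin ATX connector carries +12 V on two pins
PIN_RATING_A = 6      # commonly cited rating per Mini-Fit Jr pin

slots = 4
total_a = slots * SLOT_12V_W / 12.0
per_pin_a = total_a / ATX24_12V_PINS
print(f"{slots} slots -> ~{total_a:.0f} A on +12 V, ~{per_pin_a:.0f} A per pin vs a ~{PIN_RATING_A} A rating")
# -> ~22 A total, ~11 A per pin; the SATA-fed OC-PEG inputs take that load off the 24-pin
```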
 
May 13, 2009
12,333
612
126
Not sure of your age, and that is only relevant as I'm not sure whether you were "on the scene" of computers at the time, but the CPU industry went through a similar hesitation when it came to producing CPUs that generated so much heat that they actually required a fan to air-cool them.

Once the industry, and the consumer, got over their reluctance to the idea, the market never looked back and we quickly rushed towards the practical limits of conventional HSF technology.

The same thing happened with GPUs and multi-slot cooler solutions. There was an initial reluctance to "go there"... but once we did, it was fair game.

I see the 300W "boundary" as nothing more. It is an arbitrary value affixed within a spec that served a purpose in its time, but that time has come and gone.

I've no doubt there were practical reasons for the 300W limitation, but because of engineering progress in developing cost-conscious solutions that address those original concerns, I have no doubt the arbitrary 300W limit will be lifted.

When the DDR2 spec called for a max Vdimm of 1.95V, that was with the expectation/assumption that sticks of RAM would never have heat spreaders or active cooling. Then engineering developed heat spreaders and DIMM heatsinks with fans (Dominator series, etc.), and the arbitrary voltage limit of 1.95V no longer had a basis in engineering.

What you guys would claim to be hacks and engineering defeatism is actually the opposite. The spec limits exist because of the lack of engineering solutions to a real problem. When engineers resolve those real problems, it is not a hack, it is opportunity.

There may be other downsides to the solutions which result in you personally electing not to purchase the product, but that is a personal decision and nothing more.

I personally had no problem running my Mushkin Redlines at 2.2V as spec'd by Mushkin but in violation of the JEDEC DDR2 spec. Why? Because my mobo was designed for it, otherwise I wouldn't be able to set the Vdimm that high, and my PSU was designed for it.

And the existence of >300W video cards is not suddenly going to create an unforeseen dynamic inside people's computer cases. People have been tri/quad-SLI'ing and CrossFiring video cards for years, with the combined heat output well in excess of 300W.

This sort of bean-counting of the wattage per PCIe slot is silly and arbitrary. You scale your PSU and cooling solutions accordingly if you want the product; otherwise you don't buy it.

I thought it had already pretty much been established that consumers didn't want to go there when it comes to power-hungry cards that produce tons of heat and noise? I had a GTX 470 and a 480, and personally I didn't mind the heat and noise, but Nvidia took a beating sales-wise and PR-wise vs. the cooler-running 5xxx series. I believe the consumers spoke and Nvidia listened.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Haha, oh wow, I'm sorry, but I'm going to want to see a link to this. You're not getting away scot-free with statements like these on my watch :D

How are they going to cram two 250W GPUs into the 375W dual 8-pin limit when the 6990's 200W 6970s barely make the mark (and they still had to downclock it)? How much would the 580s have to be downclocked before they get outperformed by the 6990, and how would they manage to sell the card at any profit if that happens?

I might be wrong, but I don't remember seeing anyone post that the 590 will absolutely, as a matter of fact, not have 1024 cores; in the absence of any kind of confirmation, we just choose to go with our rationale and not with some quarter-page tabloid rumors.

Sure, we could shut up about it if we have nothing good to say, but how is that any worse than hyping up a product that we don't know anything about? This is, after all, a speculative debate; we have to guess one way or another*, and right now the best we have is "it's not going to happen".

* Well, at least we would if this was a GTX 590 thread.

Where were you when the 69x0 cards came out? There were a lot of allegedly reputable sites with "inside info" claiming that the 6970 would obliterate the GTX 480. Not just BSN/Fudzilla/the Inq, but sites that actually use sources for their articles.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Take a look at this new Gigabyte mobo introduced yesterday. It's got all sorts of 'custom' power features.
It adds OC-PEG:
OC-PEG
I wonder if "a different phase" means the 3.3V or 5V rails?
GIGABYTE GA-X58A-OC designed for extreme overclocking features

[Image: GIGABYTE GA-X58A-OC press shot]



Holy mother of god, they've engineered a mobo that can support a CPU chowing down 1200 watts!? Now that is insane :eek:

Definitely going after those LN2 8GHz OC'ers. Or maybe Intel aims to have Haswell be a 10GHz 1kW TDP water-cooled monster :p
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
So you are making a claim and not bothering to back it up with evidence when asked? Bye-bye, credibility :D

No offense, but I don't recall seeing you around here during those discussions. However, they definitely happened. Based on all the prognostications/rumors/etc., I was very disappointed in the 6970. It has ended up being quite the performer, getting very near the GTX 580 at 2560x1600 with max detail/AA levels, but it's nowhere near the "GTX 480 killer" that the rumors rampant during the launch timeframe made it out to be.
 

Red Storm

Lifer
Oct 2, 2005
14,233
234
106
I thought it had already pretty much been established that consumers didn't want to go there when it comes to power-hungry cards that produce tons of heat and noise? I had a GTX 470 and a 480, and personally I didn't mind the heat and noise, but Nvidia took a beating sales-wise and PR-wise vs. the cooler-running 5xxx series. I believe the consumers spoke and Nvidia listened.

That was because they didn't offer enough performance for the amount of heat and power. It doesn't matter how power-hungry your card is; if it provides enough performance to justify that cost, people will like it.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
Holy mother of god, they've engineered a mobo that can support a CPU chowing down 1200 watts!? Now that is insane :eek:

Definitely going after those LN2 8GHz OC'ers. Or maybe Intel aims to have Haswell be a 10GHz 1kW TDP water-cooled monster :p

Like I said, if this trend continues, then we will be seeing a 1 kW Intel Core i10, lol.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
What's the percentage of owners on this forum who actually use 3 monitors for gaming? Has to be less than 5%.

If I was designing a video card these days, it would be for the 1920x1080 120Hz guys.

That's what NV did when they got Fermi "right" with the GTX 460. However, egos are not stoked by midrange performance. You don't see Donald Trump driving a Taurus, do you?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
I thought it had already pretty much been established that consumers didn't want to go there when it comes to power-hungry cards that produce tons of heat and noise? I had a GTX 470 and a 480, and personally I didn't mind the heat and noise, but Nvidia took a beating sales-wise and PR-wise vs. the cooler-running 5xxx series. I believe the consumers spoke and Nvidia listened.

Consumers are fickle, hence the utility of having a marketing department, from a business-expense standpoint.

There once was a time when consumers refused to ride in an abomination called the horseless carriage. Consumers change over time.

It is not clear to me whether consumers eschewed the Nvidia products because of the noise/heat of those products, or simply because they had a choice, and when all other things were equal (price/performance) they opted for the secondary things that mattered.

We can make assumptions and uneducated guesses about it (that is fun to do and why we are here), but we shouldn't fool ourselves into believing that our conclusions stemming from our cause-and-effect postulates have any basis in reality.