
Techpowerup/Chiphell/3DCenter: AMD 6990 launches March 8th - uses two 6970 Cores


tincart

Senior member
Apr 15, 2010
630
0
0
Forgive me (I was sleeping during that part of physics class), but would voltage regulation keep the card from drawing too much power?

I think I remember people who were running overclocked 4870X2s saying they could feel the PCI-E cables softening and heating up while the card was under load. Wouldn't something similar happen with an overclocked 6990?
We'll have to wait and find out. If the rumors are true and both AMD and nV are releasing >300W cards, then all our questions will be answered. We will find out what PCI-SIG will do. We will find out what the power draw does to the power delivery system. We will find out (crucially, I think) what this does to warranty coverage.
 

Creig

Diamond Member
Oct 9, 1999
5,171
13
81
Specifications aren't published just for the sheer fun of creating them. I have no doubt whatsoever that many, many, MANY hundreds of hours of study, debate and design went into each version of PCI-E. PCI-SIG has over 900 member companies signed up. Their Board of Directors has representatives from Intel, Microsoft, IBM, AMD, Nvidia, Sun Micro, HP, Broadcom and Agilent Technologies. The Chairman and President of PCI-SIG is Al Yanes, a Distinguished Engineer at IBM. These guys aren't going to guess about ANYTHING. If they put out a spec, you can be sure that every single item in it was put in for a very specific reason.

As Skurge mentioned, I also remember people talking about overclocked 4870X2 power connectors heating up. That is why each connector has a maximum power rating. If they could safely allow the PCI-E slot to deliver all 300+ watts, don't you think they would have done so rather than adding multiple 6 and 8 pin connections?

It is possible that when PCI-E 2.0 was released, they did not foresee the possibility of a video card drawing over 300w. Hence there was no inclusion of a dual 8-pin power connector configuration. And that could also explain 300w as the highest power consumption configuration in the PCI-E 2.0 specifications.
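The arithmetic behind that 300W ceiling can be sketched out. As a rough illustration (these are the commonly cited PCI-E per-source limits, not text quoted from the spec itself): 75W from the x16 slot, 75W per 6-pin connector, and 150W per 8-pin connector.

```python
# Back-of-the-envelope PCI-E power budget math.
# Assumed per-source limits (commonly cited figures, for illustration only):
LIMITS_W = {
    "x16 slot": 75,
    "6-pin": 75,
    "8-pin": 150,
}

def board_budget(connectors):
    """Total allowed draw for a card: slot power plus each auxiliary connector."""
    return LIMITS_W["x16 slot"] + sum(LIMITS_W[c] for c in connectors)

# 6970-style card with one 6-pin and one 8-pin connector:
print(board_budget(["6-pin", "8-pin"]))   # 300

# A dual 8-pin layout (not part of the PCI-E 2.0 spec):
print(board_budget(["8-pin", "8-pin"]))   # 375
```

So a 6-pin + 8-pin card lands exactly at the 300W ceiling, while a dual 8-pin configuration would imply 375W, which is over the spec's maximum.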

However, there could be other issues at work as well. Thermal load, electrical noise, inlet temperatures, exhaust temps, etc. Perhaps 300w is the maximum allowable to be dissipated out the back of the card to prevent people from getting burned if they happen to touch the backplate. Not being an Electrical Engineer, I can only guess. Which is something I'm sure PCI-SIG doesn't ever do.

Personally, I think that as long as the card does not violate the maximum rating of any of the power delivery components (slot + connector + connector), I don't see that PCI-SIG would have any problem certifying it. But again, there may be other factors at work that we know absolutely nothing about.
 

tincart

Senior member
Apr 15, 2010
630
0
0
Personally, I think that as long as the card does not violate the maximum rating of any of the power delivery components (slot + connector + connector), I don't see that PCI-SIG would have any problem certifying it. But again, there may be other factors at work that we know absolutely nothing about.
It will be interesting to see if these cards lead to a revision of the specifications. Does anyone know what the process is for standard revisions?

I very much doubt that ATI and nV would be putting out flagship products if they thought there would be serious drawbacks to going over the spec. They must have done some serious research on this issue and decided to go ahead.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
200
106
Don't you think there would be some type of voltage regulation? Whether it be on the motherboard or the maximum draw within spec on the graphics card itself? I'm no electrical engineer, but I would think that when designing a mobo and graphics card, that some sort of voltage/amperage regulation would be a good idea.
Yes, it's 75W. At least that's what the mobo designer was told the card would draw. Now, though, it seems that the graphics card designers have put the specs in the "too hard" pile.

Specifications aren't published just for the sheer fun of creating them. I have no doubt whatsoever that many, many, MANY hundreds of hours of study, debate and design went into each version of PCI-E. PCI-SIG has over 900 member companies signed up. Their Board of Directors has representatives from Intel, Microsoft, IBM, AMD, Nvidia, Sun Micro, HP, Broadcom and Agilent Technologies. The Chairman and President of PCI-SIG is Al Yanes, a Distinguished Engineer at IBM. These guys aren't going to guess about ANYTHING. If they put out a spec, you can be sure that every single item in it was put in for a very specific reason.

As Skurge mentioned, I also remember people talking about overclocked 4870X2 power connectors heating up. That is why each connector has a maximum power rating. If they could safely allow the PCI-E slot to deliver all 300+ watts, don't you think they would have done so rather than adding multiple 6 and 8 pin connections?

It is possible that when PCI-E 2.0 was released, they did not foresee the possibility of a video card drawing over 300w. Hence there was no inclusion of a dual 8-pin power connector configuration. And that could also explain 300w as the highest power consumption configuration in the PCI-E 2.0 specifications.

However, there could be other issues at work as well. Thermal load, electrical noise, inlet temperatures, exhaust temps, etc. Perhaps 300w is the maximum allowable to be dissipated out the back of the card to prevent people from getting burned if they happen to touch the backplate. Not being an Electrical Engineer, I can only guess. Which is something I'm sure PCI-SIG doesn't ever do.

Personally, I think that as long as the card does not violate the maximum rating of any of the power delivery components (slot + connector + connector), I don't see that PCI-SIG would have any problem certifying it. But again, there may be other factors at work that we know absolutely nothing about.
Come on now, I've been told on many occasions that 300W is just an arbitrary number. Now you're telling me that the "powers that be" are actually knowledgeable. Who would have thunk it? /sarc

Seriously though, thanks for stating what should be obvious to everyone. The PCI-SIG is made up of professionals from all fields in the industry. They work out specs that are the most beneficial to everyone in the industry. The specs are a compromise, as these things have to be to take everyone into account, but that doesn't mean that the people on one side of the aisle should just ignore the regulations they've all worked out.

It will be interesting to see if these cards lead to a revision of the specifications. Does anyone know what the process is for standard revisions?

I very much doubt that ATI and nV would be putting out flagship products if they thought there would be serious drawbacks to going over the spec. They must have done some serious research on this issue and decided to go ahead.
AMD and nVidia put out products all the time that barely work properly: VRMs that run too hot, minimal power regulation, coolers that barely keep GPU temps under 100C, solder bumps that crack and need to be "reflowed", coolers that sound like the PC is ready for lift-off, and on and on. They are companies that, above all else, are trying to make money. Now, profit's not a dirty word. When companies design products in a competitive marketplace, they do everything they can to keep costs to a minimum. "Good enough" is perfectly acceptable.

Companies do make some products that are better built with better components than others. The Hawk series from MSI, just to name one. Most people don't buy those though. Most people use the "sort by lowest price" function on newegg. Then they push the crap out of their components, never maintain them, and come onto forums like these when it all goes to custard to say how crappy a company's products are because they managed to beat them into submission.
 

bryanW1995

Lifer
May 22, 2007
11,143
32
91
Is there some reason you two are out to discredit me with strawman arguments, rather than actually addressing what I'm saying if you disagree?

We aren't talking about a 301W card. We're seeing numbers like 375W and 450W.

If ROG mobos, or any others, are rated to handle 450W cards in their x16 slots, then they should say so. Without some design regulation, though, how do we know for sure?
No need to get defensive, I was just pointing out that high end motherboards tend to aggressively push the envelope. Also, as mentioned, mobos only put out 75w on the pci-e x16 slot anyway, the rest comes directly from the psu.

How long has this 300W standard been around? Shouldn't they have set a time limit on reexamining/renewing the standard? When 1000+W PSUs are readily available to consumers, it does seem like 400 or even 500+W would make more sense these days as the "standard".
 

bryanW1995

Lifer
May 22, 2007
11,143
32
91
Sometimes regulations are plain stupid. Sometimes regulations are written by people that don't understand what they are talking about. Sometimes regulations exist so the people writing them have a job. Sometimes (lots of times) people have agendas. Sometimes regulations are what they are because they've always been like that.

Don't think everyone running a business is some amoral demon trying to cut on safety and that everyone regulating business is some paragon of virtue trying to keep the consumer safe.

People are people.

So what is going to happen in this case?

Blowing GPUs? Blowing computers?
talk like that could get you expelled from the democratic party...
 

3DVagabond

Lifer
Aug 10, 2009
11,951
200
106
No need to get defensive, I was just pointing out that high end motherboards tend to aggressively push the envelope. Also, as mentioned, mobos only put out 75w on the pci-e x16 slot anyway, the rest comes directly from the psu.

How long has this 300W standard been around? Shouldn't they have set a time limit on reexamining/renewing the standard? When 1000+W PSUs are readily available to consumers, it does seem like 400 or even 500+W would make more sense these days as the "standard".
Not being defensive. Just trying to keep it reasonable. It's all good. :)

The spec says that the card will draw 75W from the PCI-E slot. It doesn't say that the slot will be capped at 75W. That's virtually impossible to do in real-world situations. If you were to put a hard limit of 75W on the board, you'd have cards crashing all over the place. There are always peaks and surges that go way above the normal power draw design limit, and they have to be allowed for. In doing this, the board is allowed to go over what would be a safe continuous level. A card that did this continuously could be a problem. Granted, probably not for ROG boards, EVGA 3- and 4-way SLI boards, the Gigabyte "Hemi Orange" overclocking board, or some others. Most boards aren't built to the same standards as those, though. Nor should they have to be because someone, either in management, marketing, or engineering, decided to go cowboy on the design of a card.
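That peak-versus-continuous distinction can be illustrated with a toy moving-average check; the function and sample numbers below are made up for illustration, not taken from any spec or real telemetry:

```python
# Illustration of transient vs. sustained power draw over a 75W slot limit.
# Brief spikes are expected and tolerated; a sustained average over the
# limit is what would actually stress the board's power delivery.
def sustained_over_limit(samples_w, limit_w=75.0, window=4):
    """Return True if any `window`-sample moving average exceeds limit_w."""
    for i in range(len(samples_w) - window + 1):
        avg = sum(samples_w[i:i + window]) / window
        if avg > limit_w:
            return True
    return False

# A single transient spike to 110W among ~60W samples passes the check...
print(sustained_over_limit([60, 62, 110, 61, 60]))  # False
# ...but a card sitting at ~85W continuously trips it.
print(sustained_over_limit([85, 86, 84, 85, 86]))   # True
```

The point of the sketch is just that a hard instantaneous cap would trip on the harmless spike, while an averaging window only flags the genuinely out-of-spec continuous draw.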
 

bryanW1995

Lifer
May 22, 2007
11,143
32
91
Yes, it's 75W. At least that's what the mobo designer was told the card would draw. Now, though, it seems that the graphics card designers have put the specs in the "too hard" pile.



Come on now, I've been told on many occasions that 300W is just an arbitrary number. Now you're telling me that the "powers that be" are actually knowledgeable. Who would have thunk it? /sarc

Seriously though, thanks for stating what should be obvious to everyone. The PCI-SIG is made up of professionals from all fields in the industry. They work out specs that are the most beneficial to everyone in the industry. The specs are a compromise, as these things have to be to take everyone into account, but that doesn't mean that the people on one side of the aisle should just ignore the regulations they've all worked out.



AMD and nVidia put out products all the time that barely work properly: VRMs that run too hot, minimal power regulation, coolers that barely keep GPU temps under 100C, solder bumps that crack and need to be "reflowed", coolers that sound like the PC is ready for lift-off, and on and on. They are companies that, above all else, are trying to make money. Now, profit's not a dirty word. When companies design products in a competitive marketplace, they do everything they can to keep costs to a minimum. "Good enough" is perfectly acceptable.

Companies do make some products that are better built with better components than others. The Hawk series from MSI, just to name one. Most people don't buy those though. Most people use the "sort by lowest price" function on newegg. Then they push the crap out of their components, never maintain them, and come onto forums like these when it all goes to custard to say how crappy a company's products are because they managed to beat them into submission.
Driving over the speed limit is illegal and inherently less safe in most instances than driving at or under it, yet people do it all the time. Not only that, but in general, those that follow the rules to the letter typically work for people like JHH, who are constantly pushing the envelope. NV and AMD are engaged in a fight to the death; do you think they're worried about uprooting a couple of petunias during the battle? Clearly the ASUS superdoopermonster 5970 pulled a lot more than 300W, yet how many people have burned their fingers or blown up their house from using one? I haven't heard of any, and AMD/nVidia probably haven't either. Now that one of their mutual partners has proven that it can be done successfully on a low-volume, enthusiast part, it's open season on 300W as far as they're concerned. Does that mean that there isn't a higher chance of problems on a card that pulls over 300W? Of course not; there is quite likely to be a higher risk of issues. However, that comes with the territory for Xtreme 1337 gam3rs. If my stupid cat singes his hair from hanging out by my GPU exhaust, then maybe next time he'll hang out some place safer...
 

3DVagabond

Lifer
Aug 10, 2009
11,951
200
106
You just need to be involved in any kind of regulated business to see stuff like that.

It can even be simple stuff, like products being refused because a label says "made in" instead of "fabricado em" (the Portuguese version of "made in"), or because a product's label says what it contains but not what it doesn't contain.

So, no. There is no evidence that your argument applies. You are speaking purely anecdotally.

We'll file this along with "anything's possible" arguments then.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
200
106
Driving over the speed limit is illegal and inherently less safe in most instances than driving at or under it, yet people do it all the time. Not only that, but in general, those that follow the rules to the letter typically work for people like JHH, who are constantly pushing the envelope. NV and AMD are engaged in a fight to the death; do you think they're worried about uprooting a couple of petunias during the battle? Clearly the ASUS superdoopermonster 5970 pulled a lot more than 300W, yet how many people have burned their fingers or blown up their house from using one? I haven't heard of any, and AMD/nVidia probably haven't either. Now that one of their mutual partners has proven that it can be done successfully on a low-volume, enthusiast part, it's open season on 300W as far as they're concerned. Does that mean that there isn't a higher chance of problems on a card that pulls over 300W? Of course not; there is quite likely to be a higher risk of issues. However, that comes with the territory for Xtreme 1337 gam3rs. If my stupid cat singes his hair from hanging out by my GPU exhaust, then maybe next time he'll hang out some place safer...

I'm not sure what point you are trying to make with your "speeding" analogy. Excessive speed is the #1 killer on highways. I think it backs my position more than yours. Maybe you just wanted to beat me to "the car analogy". ;)

As you can see in the chart below, the Ares actually doesn't blow the PCI-E spec apart. There are situations where it goes well over 300W, but so do the 4870X2 and some others.

As you have said, the Ares is a limited production "designer" card, not the standard reference design. This does make a difference. Just as I can see the argument that the regulation 5mph bumpers aren't worth it on a Lamborghini, I do see the purpose for them on 99% of the cars out there. (Got my car analogy in there too.)

 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
1
0
Gibbo over at OCUK has confirmed he's received shipment of the 6990s in his usual cheeky way :)

http://forums.overclockers.co.uk/showthread.php?t=18249896

We may get one of his trademark pyramid shots of all the stock later today.

He is still mum on pricing :'(

My god, it's so big, the warehouse canna take it!!!
When reviewers go "OH EM GEE" it's so big.... you know it's a big card.

Says it'll be over £450.

I know there's VAT and whatnot, but... yeah, it's probably gonna be an expensive card.
(Things always cost more in Europe, though.)
 

MoMeanMugs

Golden Member
Apr 29, 2001
1,662
1
71
Forgive me (I was sleeping during that part of physics class), but would voltage regulation keep the card from drawing too much power?

No, it wouldn't. You'd want some chokes in place.

<- EE that designs electronics for anyone that actually cares
 

tincart

Senior member
Apr 15, 2010
630
0
0
When reviewers go "OH EM GEE" it's so big.... you know it's a big card.

Says it'll be over £450.
OCUK has a picture with the 6990 and a ruler. Looks to be a little over 12 inches, the same length as a 5970. The cooling solution will obviously be triple slot.
 
