What do you think of nVidia locking down voltage?


Does it bother you that nVidia has locked down the voltage on "Kepler" GPUs?

  • I don't care

  • It doesn't bother me at all

  • It bothers me a little

  • It bothers me a lot

  • I will no longer purchase nVidia products because of this

  • I don't overclock



Keromyaou

Member
Sep 14, 2012
49
0
66
In their statement there is this sentence: 'Overvoltaging above our max spec does exactly this. It raises the operating voltage beyond our rated max and can erode the GPU silicon over time.'

I just shortened that part in my own way.
 

Keromyaou

Member
Sep 14, 2012
49
0
66
Sure. But do you believe that company representatives talk honestly about the really negative facts about their products? I don't. If they admit something negative, I take it a bit more seriously than what they actually say. They also said, 'We support overvoltaging up to a limit on our products, but have a maximum reliability spec that is intended to protect the life of the product. We don’t want to see customers disappointed when their card dies in a year or two because the voltage was raised too high.' (http://www.brightsideofnews.com/new...proving-quality-or-strangling-innovation.aspx). So their definition of 'over time' is one to two years. One to two years is 'fast' in my dictionary. I usually stick with the same GPUs for two to three years, so one to two years is too short for me. And if over-voltaged cards can start dying in one to two years, I start to worry about the lifespan of stock-voltaged cards as well.
 
Feb 19, 2009
10,457
10
76
With some users already seeing degradation not long after launch, without even boosting vcore above the "safe" level, that suggests it is "fast".

At launch I recall several reviewers noticed their 680 boosting to ~1.2GHz in games. While it varies, it seems to be a 1.1-1.2GHz boost, higher if the cooler is good. Yet the max typical OC even with good coolers is ~1.3GHz, so there is only about 100MHz of headroom, ~8%. Combine that with NV's confirmation that pushing vcore above the "safe" level will degrade the chip, and that their "safe" level is the absolute max they have tested that will still give the GPU a good lifetime.
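To put rough numbers on that headroom (a quick back-of-the-envelope sketch in Python; the clocks are just the approximate figures quoted above, not measurements):

Code:
# Approximate figures from this post: ~1.2GHz typical boost,
# ~1.3GHz typical max OC with a good cooler.
typical_boost_mhz = 1200
typical_max_oc_mhz = 1300

headroom_mhz = typical_max_oc_mhz - typical_boost_mhz
headroom_pct = 100.0 * headroom_mhz / typical_boost_mhz
print(f"{headroom_mhz} MHz of headroom, ~{headroom_pct:.0f}%")  # 100 MHz, ~8%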

That's indicative that NV is using their turbo boost to really redline Kepler. So no surprise that it will degrade if pushed further with extra volts.

Ultimately it doesn't matter, because NV only guarantees 1.05GHz boost performance, so as long as cards reach that and don't degrade below it, you get what you paid for. It's only bad for enthusiasts who expect a product with OC flexibility when it's being sold for >$500.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
This post deserves a sticky.

I would also emphasize that last sentence about threatening to withhold GPUs from non-complying partners. This is not some benign recommendation.

Is there a reason why Sapphire (AMD-only) seems fine with going over 1.175v on AMD GPUs even though they are built at the same fab (TSMC)? I vaguely remember one AT forumer who said he asked Sapphire what the max reliable voltage was, and Sapphire said "below 1.25v", which implies something very close to 1.25v (say, 1.24v), even if it could technically mean some other threshold. Did AMD build their GPUs with more robust structures? Did AMD not do their homework, so that those running >1.175v on their AMD GPUs will suffer an early electromigration death? Or is there some other reason for the difference between 1.175v (NV) and ~1.25v (AMD)?

Personally I think this is the result of NV pressing what was supposed to be their midrange GPU into service as a high-end GPU. They had to really push that chip's limits, leaving little to no overvoltage breathing room. This is the GPU we're talking about, not other components, so other than tighter voltage regulation, I don't think there is much that the AIBs could do even if they were told to push the chip further. And there are probably negligible gains to be had from tighter voltage regulation. In other words, the GK104 is "redlined" as RS put it.

I don't think it deserves a sticky, except to point out bad form. He's quoting a writer/article and adds his own editorial bias comment mid-way through.

Just found this link posted on another forum:

It appears NV has confirmed with certainty that GK104's voltage was set to the absolute maximum and that the chip can fail if you increase voltage beyond the stock spec:

"We love to see our chips run faster and we understand that our customers want to squeeze as much performance as possible out of their GPUs. However, there is a physical limit to the amount of voltage that can be applied to a GPU before the silicon begins to degrade through electromigration. Essentially, excessive voltages on transistors can over time "evaporate" the metal in a key spot, destroying or degrading the performance of the chip. Unfortunately, since the process happens over time, it's not always immediately obvious when it's happening. Overvoltaging above our max spec does exactly this. It raises the operating voltage beyond our rated max and can erode the GPU silicon over time.

"In contrast, GPU Boost always keeps the voltage below our max spec, even as it is raising and lowering the voltage dynamically. That way you get great performance and a guaranteed lifetime. So our policy is pretty simple: We encourage users to go have fun with our GPUs. They are completely guaranteed and will perform great within the predefined limits. We also recommend that our board partners don’t build in mechanisms that raise voltages beyond our max spec. We set it as high as possible within long term reliability limits.

They're also leaving a bad taste in board partners' mouths: where in previous generations each company has been able to push its own cards to the limit in order to beat the competition, under Nvidia's alleged new rules all GTX 680 boards will be more or less identical in performance and features."


Now we have 100% confirmation that Kepler's GPU voltage was red-lined from the factory at the absolute safest max allowed. That means NV just officially confirmed that anyone increasing it above this level is playing the electromigration lottery with GK104.

There is more to this than meets the eye. As I have hypothesized, NV also did this to prevent slower GPUs from having the ability to overclock beyond the faster offerings (i.e., 670 beating a 680).

"We've been told that the secretive restrictions on board partners go yet further: 'They [Nvidia] also threaten allocation if you make a [GTX 680] card faster than the GTX 690.'"
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Personally I think this is the result of NV pressing what was supposed to be their midrange GPU into service as a high-end GPU. They had to really push that chip's limits, leaving little to no overvoltage breathing room. This is the GPU we're talking about, not other components, so other than tighter voltage regulation, I don't think there is much that the AIBs could do even if they were told to push the chip further. And there are probably negligible gains to be had from tighter voltage regulation. In other words, the GK104 is "redlined" as RS put it.
This makes sense to me for the most part. But what gives me pause is: why can AMD field a GPU with a significantly higher transistor count, and push the voltage higher as well, yet not suffer from the same issue? Clock speeds this generation are also very close. Something doesn't add up.
Indeed, you replaced "over time" with "fast".
What do you consider an acceptable timeframe for a GPU to degrade?
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
First NV took away the ability to use PhysX unless you are all-NV, then custom refresh rates...
Can someone clarify about the custom refresh rates? I prefer Nvidia cards entirely because I don't have to rely on Powerstrip for tweaking refresh rates. Did they recently change something?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I don't think it deserves a sticky, except to point out bad form. He's quoting a writer/article and adds his own editorial bias comment mid-way through.

Care to explain why the part you highlighted in red is biased? First of all, I didn't edit the article: I put the quoted sections in italics and left my own thoughts unformatted, to separate the article from my own comments. I also enclosed the pieces from the article in quotation marks to clearly separate my thoughts from the author's.

Secondly, NV themselves said that stock voltage for Kepler is the safe max and that going above it causes electromigration in their product. I didn't make that up. To me this was new information, since we didn't know with 100% certainty that Kepler's stock voltage was the absolute safe max. If you knew this, I guess you were ahead of the game. :p

How fast is "fast" anyway?

From the same article:

"In contrast, GPU Boost always keeps the voltage below our max spec, even as it is raising and lowering the voltage dynamically. That way you get great performance and a guaranteed lifetime."

I am guessing it would be reasonable to assume that the guaranteed lifetime would be at least the term of the product's warranty; so 2-3 years depending on the vendor.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I am guessing it would be reasonable to assume that the guaranteed lifetime would be at least the term of the product's warranty; so 2-3 years depending on the vendor.

If a card lasted only 2-3 years, I'd be pissed. I think NV learned something from bumpgate. I don't think they take long-term reliability to mean just 2-3 years; probably more like 5+.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
This makes sense to me for the most part. But what gives me pause is: why can AMD field a GPU with a significantly higher transistor count, and push the voltage higher as well, yet not suffer from the same issue? Clock speeds this generation are also very close. Something doesn't add up.

The explanation could lie in the actual composition of leaky vs. non-leaky transistors inside the chip itself and the type of 28nm transistor used for manufacturing of Tahiti XT vs. Kepler.

Last generation on 40nm, the HD6970's stock voltage was 1.175V and it took 1.25V without failure. Max voltage on my 470s was only 1.087V despite those Fermi chips also being manufactured on 40nm. Remember how NV removed "leaky" transistors in the Fermi GTX480 and replaced them with less leaky 40nm transistors in the 580 to lower power consumption, while the node stayed the same at 40nm? Not all 40nm and 28nm transistors are created equal, and not all chips have the same % of leaky vs. non-leaky transistors. Comparing two different 28nm GPUs could be omitting information unknown to us, such as the fact that not all 28nm transistors are equal even if made at the same fab.

You can change the transistor types used on the same node and change a chip's characteristics in terms of clocks and power consumption. This was done on GF110 and explained in detail by Anandtech, where NV went from 2 types of transistors in GF100 to 3 types in GF110:

"Thus the trick to making a good GPU is to use leaky transistors where you must, and use slower transistors elsewhere. This is exactly what NVIDIA did for GF100, where they primarily used 2 types of transistors differentiated in this manner. For GF110, NVIDIA included a 3rd type of transistor, which they describe as having “properties between the two previous ones”. Or in other words, NVIDIA began using a transistor that was leakier than a slow transistor, but not as leaky as the leakiest transistors in GF100. Again we don’t know which types of transistors were used where, but in using all 3 types NVIDIA ultimately was able to lower power consumption without needing to slow any parts of the chip down."

If we just compare two chips at 28nm and make the bold assumption that both AMD and NV use the same types of 28nm transistors in their chips, and the even bolder assumption that the proportions of leaky vs. non-leaky transistors are similar in both, implying a similar level of degradation at similar voltage levels, that's a lot of assumptions right there. We don't even know how many types of 28nm transistors are in Tahiti XT vs. Kepler. Tahiti XT may use 3 types and Kepler 2, or vice versa. If both use 2 types, for example, does one have 75% leaky transistors and 25% non-leaky, while the other has 75% non-leaky and 25% leaky? Etc. Too much unknown information.
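Just to illustrate how much the mix alone could matter, here is a toy calculation (Python; the per-type leakage values and the 75/25 splits are invented for illustration, since the real 28nm figures for Tahiti XT and GK104 are not public):

Code:
# Toy model: relative static leakage of a chip as a weighted mix of
# transistor types. All numbers below are made up for illustration.
LEAKAGE = {"leaky": 3.0, "medium": 1.5, "slow": 1.0}  # relative units

def relative_leakage(mix):
    # mix maps transistor type -> fraction of the die (fractions sum to 1)
    return sum(LEAKAGE[t] * frac for t, frac in mix.items())

chip_a = {"leaky": 0.75, "slow": 0.25}  # hypothetical 75/25 split
chip_b = {"leaky": 0.25, "slow": 0.75}  # hypothetical 25/75 split
print(relative_leakage(chip_a))  # 2.5 -> a much leakier chip
print(relative_leakage(chip_b))  # 1.5 -> same node, very different behavior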

Since after-market Kepler cards boost well over 1200MHz from the factory while the HD7970 GE operates at just 1050-1100MHz, the composition of leaky vs. non-leaky transistors in Kepler and Tahiti XT is likely very different. The HD7850/7870 do not really overclock better than 7950/7970 chips, which makes it difficult to conclude that GK104's smaller size alone explains why it can operate at much higher frequencies. Since Kepler operates at much higher clocks at 1.175V, the transistors in that chip might be picked on purpose to operate at high frequencies with lower voltage, but that could make them more fragile at higher voltages. I am not an expert on transistors by any means, but it seems the chip itself is made up of various types of 28nm transistors with different properties.

Also, certain transistors inside Tahiti XT are allocated to compute functions, the dynamic scheduler, and double precision. I am not sure these transistors share the same properties as the rest of the functional units responsible for graphics. There could be technical reasons, which we don't have access to, why Tahiti XT can work safely at 1.25V but Kepler cannot, despite both being made on the 28nm node.
 
Last edited:

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Further, since Kepler operates at much higher clocks, the transistors in that chip might be picked on purpose to operate at high frequencies with lower voltage, but that could make them more fragile at higher voltages.
This sounds very possible; I wonder if it is indeed the case. The thing is, if I operate a 7970 at, say, 1.25 volts @ 1.2GHz, will the chip degrade? On my card I've been slowly inching up to about that level; only time will tell, I guess.

Nvidia seems to be cutting things really fine here: we are talking about a very small voltage increase being the "tipping point" where it starts to damage the silicon. The real question to me is: will Kepler degrade even at stock volts?

Also, the problem I have with the Fermi comparison is that Nvidia did not lock down their partners the way they are doing with Kepler. Fermi can take quite a voltage bump without dying, although power consumption goes through the roof past a certain point. But the chip doesn't start to degrade, AFAIK; many people have run good voltage/frequency bumps and their cards are working fine.
 
Last edited:

Keromyaou

Member
Sep 14, 2012
49
0
66
This article about CPU degradation (http://www.anandtech.com/show/2468/6) could give some insight into interpreting this GPU issue. In the figure describing CPU lifespan vs. core voltage, they assume the CPU's lifespan under warranty is three years: at the original voltage the CPU will be operable for three years, and then it will start malfunctioning due to silicon degradation. However, if you increase the core voltage to some extent, the CPU will function again for an extended period. If this is how Nvidia calculates the lifespan of their GPUs, Kepler will function for about three years at default voltage, after which some Kepler GPUs will start malfunctioning. In this case, though, it is not possible to raise the voltage to salvage the GPU, since Kepler has no headroom for overvolting (maybe down-clocking might salvage it for a while???). In this sense, Kepler GPUs seem similar to heavily overvolted/overclocked HD7970s in terms of lifespan. This interpretation makes sense, since Nvidia representatives said, 'We don’t want to see customers disappointed when their card dies in a year or two because the voltage was raised too high' (http://www.brightsideofnews.com/new...proving-quality-or-strangling-innovation.aspx). Nvidia also seems very concerned about the warranty period (two to three years, as far as I understand). RussianSensation's estimate seems reasonable.
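As a rough way to picture that AnandTech curve (Python; the 3-year warranty baseline is from the article, but the "lifetime halves per +50mV over stock" rule is an assumption I made up purely for illustration):

Code:
# Illustrative lifespan-vs-voltage model. Only the 3-year warranty
# baseline comes from the article; the halving step is an assumption.
BASE_LIFE_YEARS = 3.0
STOCK_V = 1.175
HALVING_STEP_V = 0.050  # assumed: each +50mV over stock halves expected life

def expected_life_years(vcore):
    overvolt = max(0.0, vcore - STOCK_V)
    return BASE_LIFE_YEARS * 0.5 ** (overvolt / HALVING_STEP_V)

for v in (1.175, 1.225, 1.275):
    print(f"{v:.3f}V -> ~{expected_life_years(v):.1f} years")
# 1.175V -> ~3.0, 1.225V -> ~1.5, 1.275V -> ~0.8 years: roughly the
# "dies in a year or two" range Nvidia warned about.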
 

Pottuvoi

Senior member
Apr 16, 2012
416
2
81
Many people here could discuss the financial and strategic aspects of how these companies stack up, but it would be mostly off-topic. A lot of forum members have said they don't want financials and company-based discussions diluting this sub-forum after we dabbled in that area in the past. What you're implying is that AMD has to cater to enthusiasts because they are more desperate to win/keep customers? That's a valid point. What's your view on Intel offering overclockable K-series chips and charging extra for a warranty that allows a one-time replacement, no matter what, if the CPU fails? Why can't NV at least do that?
It wouldn't be nVidia that needs to give that warranty, but the card manufacturer.
I would imagine the cost of overclockable cards would be quite a bit higher because of it, though.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Max stock operating voltage for Tahiti XT is 1.175V, which was later extended to 1.25V (implying the original 1.175V was a conservative number). Stop arguing the semantics of some chips shipping undervolted. Max stock voltage for a product line is not a variance but a fixed number. The stock voltage of a particular card you buy in a store varies, like VID varies for CPUs. But SB and IVB CPUs still have a max safe stock operating voltage, and it's not the voltage you get when you plug the CPU into your motherboard. If you pick up 5 GPUs and they all have different stock voltages, that has nothing to do with the official max stock voltage allowed for the Tahiti XT chip, which is generally much higher.

AMD is using a variable VID for the chips. The default voltage is different from card to card; 1.175V is only the highest setting.

The 1.25V BIOS can be safely applied to all 7950/7970 cards if you want. Some may be stable, others may not, but it will not kill the chip through electromigration. Thus the chip supports the voltage increase from 1.175V to 1.25V without failure; otherwise AMD would never have allowed reviewers to release it as a downloadable link. AMD does not guarantee that your 7970 chip will work with 100% certainty at 1050MHz with the 7970 GE BIOS, but they do guarantee that flashing the card to 1.25V won't destroy the chip from overvoltage. Otherwise they would never have released a 1.25V BIOS.
AMD doesn't care. They didn't publish the BIOS on their website and they guarantee nothing. That's your imagination.

That's not what AMD said. They shared the BIOS through review websites:

"We fully expect that for the class of gamer that uses a 7970 or 7950, they’re very savvy gamers. They’re guys that build their own systems or upgrade on a fairly regular basis and have the capability to flash a BIOS regularly and probably read the forums to know the BIOSes are available." ~ Source

Oh look, here you go: PCPerspective included the full 7950 B BIOS for gamers who want to use it. "AMD is allowing us to share the FW updater with you."
They sent the BIOS to the reviewer. Go find it on AMD's homepage.

So does overclocking, but AMD still went out of their way to provide this option. NV removed that option completely because they maxed out Kepler's voltage from the factory, and kept quiet about it.
nVidia is using the whole voltage range for the chip, from 1V up to 1.175V.
AMD is only using one step as the default. And AMD does not warranty the use of the higher voltage the way nVidia does.

Kepler's stock voltage is 1.175V as far as I am aware. NV does not allow any voltage adjustment above this level. Therefore, NV warranties 0% increase in voltage above stock levels, outside of whatever bump occurs during dynamic boost (I believe up to 1.212V).
That's your problem: you have no clue. :rolleyes:
Stock voltage is 1V for the base clock. Every boost step uses a higher vcore, up to 1.175V.

I have my facts straight. My card allows voltage up to 1.3V from 1.174V, and that option ships with the software in the box. You are not getting it. AMD never said: look, if you overclock beyond 1.175V, your card is toast. After they released the 1.25V BIOS for 7950/7970 chips, they never said: look, if you overclock beyond 1.25V, your card is toast. Don't turn this into AMD vs. NV to justify NV's actions. :whistle:
1.3V from 1.174V is only 11% more. nVidia is giving their customers 17.5% more voltage over stock. Look, it seems you don't understand Kepler's Boost function. You should stop talking about all this.
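For anyone checking those two percentages, the arithmetic works out (a quick Python sketch using only the voltages cited in this exchange):

Code:
# AMD: software allows 1.3V from a 1.174V default -> ~11% more
# NV:  boost walks vcore from the 1V base up to 1.175V -> 17.5% more
amd_gain = 100.0 * (1.300 / 1.174 - 1)
nv_gain = 100.0 * (1.175 / 1.000 - 1)
print(f"AMD: +{amd_gain:.1f}%")  # ~ +10.7%, i.e. the "11%" above
print(f"NV:  +{nv_gain:.1f}%")   # +17.5%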

I never said a 7950 is nearly as fast as a 7970 out of the box; you brought that up out of nowhere. I specifically stated that voltage control often allowed someone to buy a lower-end SKU and overclock it much higher. The GTX670 is a $400 card, not a $280 card. Can you overclock a 660Ti to $500 GTX680 speeds? No, you cannot. You can with a 7950, and voltage control is a huge reason for it. Go ask 7950 owners. How about GTX460/470 overclocks without voltage control? Much worse.
Sure you said this:
There is more to this than meets the eye. As I have hypothesized, NV [AMD] also did this to prevent slower GPUs from having the ability to overclock beyond the faster offerings (i.e., 670 [7950] beating a 680 [7970]).

So show me a 7950 out of the box which is as fast as a 7970. You said that nVidia is not allowing that, while ignoring that a) there are GTX670 SKUs which perform nearly like a GTX680, and b) they use the same max vcore for the GTX670 cards.
The second part of the quote makes no sense: now we expect that a $250 card should be able to overclock to perform like a $500 card? Why would anybody buy the $500 card if that were possible? o_O
Those times are over, and you could not do it in the last generation either. The GTX480 was 50% faster than the GTX460. Good luck bringing a GTX460 to GTX480 level...

The thread title is "What do you think of nVidia locking down voltage?" I think it's a step back for the consumer and it hurts overclockers. Your view seems to be the opposite, I imagine, since you are arguing against me? I linked at least one real reason NV provided for why voltage control above stock is not allowed on NV cards: the chips can fail, because stock voltage is max voltage for the GTX600 series. It seems you don't like that response from NV, or are upset that I linked it. Again, none of that has anything to do with the thread, but you keep turning it personal instead of focusing on the subject itself.
They are not locking down voltage, they are using it. Last generation there was a cap too. Where was the negative response then? :confused:
Why do you think I can get up to 1300MHz from 980MHz? Because nVidia is overvolting for me. Why can you only get to ~1050MHz without vcore adjustment on a 7970? Think about it for a moment.

Except the Gigabyte GTX670 costs $400. Please continue defending why NV removing voltage control is great for consumers.
Oh god, please. How many times will you repeat this? Is it really so hard to understand that they allow voltage control up to 1.175V? What's so different from AMD? Can you use any number you want? And if not, why are you not complaining about them "removing voltage control"?
 
Last edited:

Zanovar

Diamond Member
Jan 21, 2011
3,446
232
106
AMD is using a variable VID for the chips. The default voltage is different from card to card; 1.175V is only the highest setting.

..............<snip>..........

I think it's all about control, dude. 1.175+ :p
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
It was not even possible with Fermi to go over a certain limit.
I can force 1.175V for every boost step. So there is control.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
The accepted definition of "voltage control" when it comes to overvolting and overclocking is voltage above stock, and 1.175V is the stock voltage for max boost. Your claim that 1V is stock is nebulous at best, because Kepler only uses 1V in an idle state @ 315MHz (IIRC). Only in the lowest idle states will it be at 0.9-1V...
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Stock voltage is 1V for the base clock. Every boost step uses a higher vcore, up to 1.175V.

Again, this is like saying the base voltage for a 3570K CPU is 1.1v. Processors are not rated at idle speed by the manufacturer; notice how nobody says the clock speed of a 3570K is 1600MHz or that a GTX 670 runs at 350MHz. You have zero control over your GPU above 1.175v. This means that if your limit is not temps, not TDP, and not the chip itself, you are stuck, because you cannot apply 1.2v. You don't have the ability to test your GPU to find its limits.
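A small sketch of why "stock" means the rated max-boost state, not idle (Python; the 315MHz idle clock and the 1.175V cap come from this thread, and the 1006/1058MHz clocks are the GTX 680's advertised base/boost, but the intermediate voltage steps are approximations I filled in for illustration):

Code:
# Hypothetical GTX 680 DVFS table. Only the 315MHz idle state and the
# 1.175V max-boost cap are cited in this thread; base/boost voltages
# are illustrative guesses, and real boost bins vary per card.
STATES = [
    ("idle",      315, 1.000),
    ("base",     1006, 1.062),
    ("boost",    1058, 1.112),
    ("max boost",1110, 1.175),  # the rated spec; no user control above this
]
for name, mhz, vcore in STATES:
    print(f"{name:>9}: {mhz:4d} MHz @ {vcore:.3f}V")
# Nobody quotes 315MHz as the card's clock speed; by the same logic,
# 1V is not its stock voltage.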
 
Last edited:

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
...
Oh god, please. How many times will you repeat this? Is it really so hard to understand that they allow voltage control up to 1.175V? What's so different from AMD? Can you use any number you want? And if not, why are you not complaining about them "removing voltage control"?

You're actually trying/bothering to argue that stock voltage isn't 1.175v.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
The explanation could lie in the actual composition of leaky vs. non-leaky transistors inside the chip itself and the type of 28nm transistor used for manufacturing of Tahiti XT vs. Kepler.

Last generation on 40nm, the HD6970's stock voltage was 1.175V and it took 1.25V without failure. Max voltage on my 470s was only 1.087V despite those Fermi chips also being manufactured on 40nm. Remember how NV removed "leaky" transistors in the Fermi GTX480 and replaced them with less leaky 40nm transistors in the 580 to lower power consumption, while the node stayed the same at 40nm? Not all 40nm and 28nm transistors are created equal, and not all chips have the same % of leaky vs. non-leaky transistors. Comparing two different 28nm GPUs could be omitting information unknown to us, such as the fact that not all 28nm transistors are equal even if made at the same fab.

You can change the transistor types used on the same node and change a chip's characteristics in terms of clocks and power consumption. This was done on GF110 and explained in detail by Anandtech, where NV went from 2 types of transistors in GF100 to 3 types in GF110:

..............<snip>..........

I think you're heading in a good direction. Finally some people are really looking at this a little deeper.

There obviously is a real reason nvidia set the cap, and that reason is to protect the chip for the long term. Everyone should agree on this; it's fact as far as I'm concerned.

I have a ton of thoughts on this; I could go on with all sorts of scenarios and dream them up till no end. But let's stick to the facts.

I like the direction you are going with this post, and it's an angle that many are overlooking. But there is much more to it than what you said.

Nvidia and AMD are both on 28nm at TSMC, but they are not even using the same process. Not all 28nm is the same, as you said. And this goes deep: the 28nm ramp-up was known to be slow and limited, and during it several processes for the 28nm node were developed.

http://www.eetimes.com/electronics-...m-high-k-metal-gate-process-into-two-versions

It's been said that AMD decided to drop the HKMG and go with HPL, allowing for a quicker ramp-up, while Nvidia stuck with their original HKMG. The processes are all similar but have defining characteristics that make them different. The real catch is that you can never fully predict exactly what the outcome will be; the only given is that the outcome is not entirely predictable and there are always surprises. They are to be expected. Not all surprises are bad; sometimes things turn out much better than ever expected. It varies wildly. So many aspects aren't known, and the guesswork would blow your mind. Sometimes the results are the exact opposite of your intentions.

It's always a guessing game with lottery results. Even if nvidia and AMD used the exact same process, they have individual designs, from the transistors up to their functions. But on top of that, nvidia and AMD didn't even go with the same process. This really throws everything out the window. There is no way to draw conclusions about one based on how the other functions. They are worlds apart... galaxies apart... universes apart. Not even on the same 28nm process.

RS, I think you were going in a great direction; I just thought I would add in this major consideration. It really shows that the voltage limits of AMD's chips only apply to their chips and can't be used to judge nvidia's current solution. While they are both 28nm, they are nowhere near the same...