
[VC] NVIDIA GeForce GTX 1060 Specifications Leaked, Faster than RX 480


Headfoot

Diamond Member
Feb 28, 2008
4,444
638
126
The Vapor-X is not the same card as the Tri-X that the above poster was referencing.
Read more carefully: he mentioned "Sapphire Tri Fan cards" and then posted about the Vapor-X. I realize that's not the same card, which is why I said "That test measured 37 dB". I used the graph directly out of his link.

So no, I am referring to exactly the same card he is referring to, even though it is not the originally mentioned Tri-X model.
 

MrTeal

Diamond Member
Dec 7, 2003
3,104
775
136
Well, that answers that.


Power input is soldered onto the big holes if the silkscreen is any indication. The connector in the middle looks like it might be for the fan.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,573
126
Well, that answers that.


Power input is soldered onto the big holes if the silkscreen is any indication. The connector in the middle looks like it might be for the fan.
Well, the white fan connector is pretty obvious at the top center of the board.

So that little black 4-pin connector and two solder points at the top rear must go to the 6-pin power connector.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,573
126
Man, I wish we could find clear pics of the front of the board.

Anandtech has a huge high-res of the back, but not the front.
 

96Firebird

Diamond Member
Nov 8, 2010
5,651
268
126
Not that I can post here (it would be considered member callout), but yes I have a number of them.
So we're just supposed to believe you? Why would it be a member callout, you'd just be posting what they themselves have said?
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
638
126
So we're just supposed to believe you? Why would it be a member callout, you'd just be posting what they themselves have said?
Really, dude? You know how these things go. I'm not going to post something that is against the forum rules... I can PM you specifics if you really think I'm making this up... But if you think everyone on the forum is honest, I have a bridge to sell you.
 

BSim500

Golden Member
Jun 5, 2013
1,480
214
106
But the real fact of the matter is that wattage never mattered until nVidia was better at it... (snip wall of text excuses)
Are you for real? Sorry, but this is lame historical revisionism regardless of how often it gets blindly parroted. When AMD was in the lead they absolutely were repeatedly praised for it. Examples:

"Conclusion: Low power consumption, excellent performance per watt" is literally number one on the plus list - April 2011
http://www.techpowerup.com/reviews/AMD/HD_6670/26.html

"It's nice to see the power consumption going down while performance goes up"
"That's some pretty low Power Consumption, even under load"
"Nice power consumption improvements over HD 5670 ... and over HD 4670"

"The Athlon 64 X2 4200+ also consumes less power, at the system level, than the Pentium D 840 - just a little bit at idle (even without Cool'n'Quiet) but over 100W under load. That's a very potent combo, all told." - May 2005
http://techreport.com/review/8295/amd-athlon-64-x2-processors/16

Rinse and repeat hundreds of times across dozens of forums and review sites: for YEARS, when AMD/ATI were in front they were positively commented on vs nVidia's "power hogs". The term "space heater" itself sprang up around the P4 era among AMD users commenting on how the A64 drew as much as 50 watts less. If anything, it's only really been the last 2-3 years that AMD has started slipping behind on efficiency, and the very moment the roles got reversed a new group of "goalpost movers" sprang up on tech forums to "combat bias" by pretending anything AMD is currently weak at "has always been irrelevant", and they simply ended up "anti-fanboy-fanboys" themselves.

And now we have 14nm, and guess what? RX 480 vs second-hand R9 290 comparisons suddenly include "better perf per watt, high efficiency", etc., as a "new" positive selling point for buying the former over the latter, made by the same people who "didn't care" just 3 weeks ago. :sneaky: New RX 480 owners are pumped over how, with a little undervolting, "it draws HALF the power of a 290!". So 150W AMD vs 275W AMD = "that's a great reduction", but 120W nVidia vs 275W AMD = "no one has ever cared about that stuff, you smokescreen fanboy". And this is your "stamping out fanboy bias" methodology? :D

If efficiency is not important to you personally, that's fine. But unless you're brand new to building computers, don't pretend "no one ever mentioned power consumption before Sandy Bridge / Maxwell", because that's one hell of a parallel-universe fantasy that's already been repeatedly debunked by looking at reviews and forum comments on AMD products back when they were ahead and seeing the exact polar opposite of that claim... :thumbsdown:
 

Mikeduffy

Member
Jun 5, 2016
27
18
46
About the 1060 - can this do Multi-Engine, aka asynchronous compute?

Anyone know exactly how the 1060 will fare in DX12?

My feeling is that 6GB will be an issue in DX12 at 1440p on day-one releases, and that it will perform better after driver updates.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
638
126
Are you for real? Sorry but this is lame historical revisionism regardless of how often it gets blindly parroted. When AMD was in the lead they absolutely were repeatedly praised on it. Examples:- (snip examples)
Let me clarify -- I'm talking about the folks trolling about power consumption who obviously switched sides only when nVidia got better at it, which has continued to today. The nVidia folks didn't care about power consumption until nVidia was winning at it. Equally bad are the AMD folks who suddenly stopped caring about power consumption once their team wasn't winning there. I'm not trying to revise any history here. I'm trying to call out that the people whose opinions on power consumption change based on who's better at it are insincere and intentionally spread FUD. The people who always cared about it are few and far between compared to the trolls.

I definitely agree with you that the power consumption trolling was rampant and equally dumb during Fermi, and further back. I'm not talking about CPUs at all, though, so I can't say about that; I hadn't followed CPUs closely until the last couple of years. I'm also not talking about professional reviews, because those actually have the presumption and semblance of objectivity for the most part.

Though the tone of your post doesn't suggest it, we are entirely in agreement here. The forums are loaded with trolls who will make "the most important" metric whichever one their favorite company is winning at the time. They are entitled to their opinion, but giving people who come here for purchasing advice guidance based on fanboy loyalty and emotion is a disservice to anyone joining the PC building and enthusiast community.

The reason this is relevant is that I can see RX 480 vs 1060 becoming the next battleground littered with FUD by insincere, emotional forum warriors. These forums were once a lot less FUD-filled than they are now. There is still a contingent of actually (or mostly) neutral/objective enthusiasts here, but I fear they are getting fed up and crowded out.
 
Last edited:

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Judging by the two missing memory ICs on the PCB, I wonder if GP106 does in fact have a 256-bit bus, and whether we might see a 1060 Ti in the near future?
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
That is a weird decision. It will not have the contact resistance of a connector, but it is more labour-intensive and normally costs more.
It's probably OK given that the FE cards for the 1060 are sold by nVIDIA and nVIDIA only. I read somewhere that the 1060 FE will also be a limited run until the AIBs flood the market with their own cards.
 
Feb 19, 2009
10,457
5
76
About the 1060 - can this do Multi-Engine, aka asynchronous compute?

Anyone know exactly how the 1060 will fare in DX12?

My feeling is the 6GB will be an issue in DX12 at 1440p on day1 releases - perform better after driver updates.
Pascal cannot do DX12/Vulkan Multi-Engine. It lacks a real hardware scheduler that allows this flexibility.

It did improve upon Maxwell by adding fine-grained preemption, which is needed for a good VR experience.

6GB is fine; this VRAM bloat perception needs to stop. Two months ago, people still thought the 980 Ti 6GB was a good deal at over twice the price of this 1060 6GB.
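For anyone unfamiliar with what Multi-Engine is supposed to buy you: the idea is that independent compute work can be slotted into idle gaps in the graphics timeline instead of waiting for the whole frame to finish. Here is a toy model of that effect (all numbers invented, and a huge simplification of what real drivers and hardware do):

```python
def serial_time(busy, gaps, compute):
    """One queue: the graphics frame (busy segments plus idle gaps)
    runs first, then all compute work runs after it."""
    return sum(busy) + sum(gaps) + sum(compute)

def async_time(busy, gaps, compute):
    """Two queues: compute work fills the idle gaps in the graphics
    timeline; only the leftover spills past the end of the frame."""
    frame = sum(busy) + sum(gaps)
    leftover = max(0, sum(compute) - sum(gaps))
    return frame + leftover

# Graphics is busy 4 ms, idles 2 ms (say, waiting on raster), then busy
# 3 ms; 3 ms of independent compute work is queued alongside it.
print(serial_time([4, 3], [2], [2, 1]))  # 12
print(async_time([4, 3], [2], [2, 1]))   # 10
```

Whether real hardware gets anywhere near that ideal overlap is exactly what the Pascal-vs-GCN argument in this thread is about.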
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
How do you define a "real hardware scheduler"?
Something which violates the PCIe spec and promotes your 170W card as a 150W one.
/sarcasm

He has no clue. Like the other ones who are still spreading these lies. Pascal fully supports "Multi engine".
Something nVidia improved from Maxwell.

AMD still lacks FL12_1, Tiled Resources Tier 3 and any VR-improving capabilities.
 
Last edited:
Mar 10, 2006
11,715
2,010
126
Something which violates the PCIe spec and promotes your 170W card as a 150W one.
The reason I ask is that when the Polaris "teaser" materials first showed up, AMD promoted a feature known as a "hardware scheduler." This was a bit strange because GCN has, to my knowledge, what would be considered a "hardware scheduler" and has had one since inception.

Later on, we learned that the "hardware scheduler" AMD was talking about was a special new unit that essentially acts as a souped up Asynchronous Compute Engine, or ACE.

This is why I would like Silverforce11 to clarify his statement around a "real hardware scheduler." Because, strictly speaking, pre-Polaris GPUs didn't have "real hardware schedulers," at least in the context of the "hardware scheduler" block that was introduced with Polaris. But, even the original GCN had hardware schedulers per CU to schedule wavefronts to the vector units.
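To illustrate that last point: the per-CU schedulers do something conceptually like picking a ready wavefront each cycle in round-robin order. A toy sketch of that idea (my own invention; it ignores operand readiness, memory stalls, and everything else real hardware tracks):

```python
def issue_order(wavefronts, cycles):
    """Round-robin wavefront picker: each cycle, issue one instruction
    from the next wavefront that still has work left.
    wavefronts: remaining-instruction count per wavefront."""
    remaining = list(wavefronts)
    order = []
    i = 0
    for _ in range(cycles):
        for _ in range(len(remaining)):
            if remaining[i] > 0:
                remaining[i] -= 1
                order.append(i)
                i = (i + 1) % len(remaining)
                break
            i = (i + 1) % len(remaining)
        else:
            break  # every wavefront has drained
    return order

# Three wavefronts with 2, 1, and 2 instructions left to issue.
print(issue_order([2, 1, 2], 10))  # [0, 1, 2, 0, 2]
```

The point is only that interleaving wavefronts to hide latency is a scheduling job in its own right, separate from the front-end queue scheduling the ACEs (or Polaris's new block) handle.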
 
Feb 19, 2009
10,457
5
76
Something which violates the PCIe spec and promotes your 170W card as a 150W one.
/sarcasm

He has no clue. Like the other ones who are still spreading these lies. Pascal fully supports "Multi engine".
Something nVidia improved from Maxwell.

AMD still lacks FL12_1, Tiled Resources Tier 3 and any VR-improving capabilities.
Coming from YOU, who wrongly argued for months that Maxwell had real Async Compute... it's priceless, man. You keep it up.
 
Feb 19, 2009
10,457
5
76
The reason I ask is that when the Polaris "teaser" materials first showed up, AMD promoted a feature known as a "hardware scheduler." This was a bit strange because GCN has, to my knowledge, what would be considered a "hardware scheduler" and has had one since inception.

Later on, we learned that the "hardware scheduler" AMD was talking about was a special new unit that essentially acts as a souped up Asynchronous Compute Engine, or ACE.

This is why I would like Silverforce11 to clarify his statement around a "real hardware scheduler." Because, strictly speaking, pre-Polaris GPUs didn't have "real hardware schedulers," at least in the context of the "hardware scheduler" block that was introduced with Polaris. But, even the original GCN had hardware schedulers per CU to schedule wavefronts to the vector units.
Already discussed previously:

http://forums.anandtech.com/showthread.php?p=38323925&highlight=hws#post38323925

You can even go back further, because the HWS units are in Fiji too:

http://forums.anandtech.com/showpost.php?p=37669975&postcount=14

All explained in those posts.

As for hardware/software scheduler, read Anandtech's articles on the matter. They've covered it well enough, if you're really curious and want to know.
 
Mar 10, 2006
11,715
2,010
126
Already discussed previously:

http://forums.anandtech.com/showthread.php?p=38323925&highlight=hws#post38323925

You can even go back further, because the HWS units are in Fiji too:

http://forums.anandtech.com/showpost.php?p=37669975&postcount=14

All explained in those posts.

As for hardware/software scheduler, read Anandtech's articles on the matter. They've covered it well enough, if you're really curious and want to know.
Silverforce11, I read AMD's whitepaper on Asynchronous Shading, and I understand that the ACEs are there to perform the scheduling of the tasks from different queues (graphics, compute, and copy) onto the GPU when there are "gaps" in between the execution of tasks from a specific queue.

What I don't understand is why the shift from a hardware-level instruction scheduler to a software scheduler (as what happened from Fermi to Kepler and beyond) necessarily precludes the inclusion of structures similar to ACEs in order to handle the scheduling of tasks from different queues onto the GPU.

In fact, NVIDIA talked about how Pascal has "dynamic load balancing" which allows the GPU to, on the fly, allocate resources from graphics to compute and vice versa, all without a hardware instruction scheduler. This also has to be done in hardware, otherwise NVIDIA would have just implemented it in the driver for Maxwell.

Any insight would be appreciated.
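For what it's worth, the intuition behind dynamic load balancing can be shown with a toy model (invented numbers, nothing like the real SM allocation granularity): with a static partition, whichever side finishes early leaves its units idle, while a work-conserving scheduler hands them over instead.

```python
def static_time(units_gfx, units_cmp, work_gfx, work_cmp):
    """Fixed partition: each side keeps its execution units until
    both workloads finish, so the slower side sets the pace."""
    return max(work_gfx / units_gfx, work_cmp / units_cmp)

def dynamic_time(total_units, work_gfx, work_cmp):
    """Work-conserving: units freed by a finished workload are
    immediately reassigned, so total work spreads over all units."""
    return (work_gfx + work_cmp) / total_units

# 10 units; graphics has 60 units-of-work, compute has 20.
print(static_time(5, 5, 60, 20))  # 12.0
print(dynamic_time(10, 60, 20))   # 8.0
```

Under a 5/5 static split, the compute half sits idle for two-thirds of the frame; with dynamic reallocation the same total work finishes in 8 time units instead of 12.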
 
