Upcoming Maxwell GPU pictured!!!!


tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
In reality no one gives a fudge about power consumption in this segment (enthusiast gaming), only stockholders and shills when it's favorable to their agenda.

Ya, seriously. If there is a single Maxwell card with 780 Ti SLI performance that uses 350-375W, it will fly off the shelves. People buy flagship cards not because of performance/watt but because of flagship performance. If an architecture is more efficient, it is a "free" bonus courtesy of AMD/NV's new design. Performance/watt matters more for sub-75W and sub-150W cards in the desktop and mobile sectors. For flagship desktop, I would love a 400W, 550mm²+ card with double the performance of the 780 Ti; and if they have to use an AIO to cool it, I am fine with that too.

It looks like I am the only one who is sick of 1990s-size computer cases and/or the mini-tornadoes cooling down the 750-watt space heaters we call gaming rigs.

:/ Oh well. This is me getting old. I suck.

EDIT: If higher-end Maxwell parts scale in perf/watt as well as GM107 does over GK107, it should take only ~280 watts (average power consumption) to get 2x GTX 780 Ti performance. If Nvidia were to build a Maxwell card designed to run at 400 watts, it could conceivably be 2.5x the performance of the GTX 780 Ti.
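
A rough back-of-the-envelope version of that math, assuming the GTX 780 Ti averages ~250W in games and GM107 delivers roughly 1.8x the perf/watt of GK107 (both are ballpark guesses on my part, not measured figures):

```python
# Ballpark assumptions, not measured numbers:
gtx_780ti_power_w = 250.0          # assumed average gaming power draw of a GTX 780 Ti
maxwell_perf_per_watt_gain = 1.8   # assumed GM107-over-GK107 perf/watt improvement

# Power needed for 2x GTX 780 Ti performance, if the perf/watt gain carries over
power_for_2x = 2.0 * gtx_780ti_power_w / maxwell_perf_per_watt_gain
print(f"~{power_for_2x:.0f} W for 2x GTX 780 Ti")    # ~278 W, i.e. roughly 280 W

# Performance of a hypothetical 400 W Maxwell card, relative to the 780 Ti
perf_at_400w = (400.0 / gtx_780ti_power_w) * maxwell_perf_per_watt_gain
print(f"~{perf_at_400w:.1f}x GTX 780 Ti at 400 W")   # ~2.9x, so 2.5x is on the conservative side
```

Plug in your own numbers; the conclusion only changes if the perf/watt gain doesn't hold up at bigger die sizes.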
 
Last edited:

Wall Street

Senior member
Mar 28, 2012
691
44
91
I think that the power connector may be done like that to show that the reference PCB supports 8+6-pin power options, either with a side-by-side arrangement or a single double-decker connector. I bet that the OEMs will either install only the double-decker connector or two shorter single-level connectors, depending on what provides the best clearance for their fan arrangement (e.g. squirrel-cage blowers are more likely to use the double-decker, and dual/tri-fan axial heatsinks will use the side-by-side).
 
Feb 19, 2009
10,457
10
76
That die size looks to be around 400mm² compared against the GDDR5 chips; it's not big Maxwell, but rather mid-range Maxwell, ala GTX 680/770.

I think we know it's coming soon; the leaks have Maxwell on 28nm and as a mid-range part.

The same leakers noted AMD's 28nm next-gen is also launching soon. Should be fun.
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
No 20nm, no buy.

My little case doesn't have the luxury of dissipating that much heat.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
No 20nm, no buy.

My little case doesn't have the luxury of dissipating that much heat.

This.

Why is everyone so 'happy' with each new gen providing 10-20% more performance? Paying $600-750 for a new card that is only marginally faster than last gen sucks. We used to get 70-80% of previous-gen SLI or CF performance from the new card, generally with more/faster VRAM as well. Then we started getting 25-30% more from the 'next-gen'. Now we are getting essentially a side-grade from the previous cards with marginal improvements.

Boo.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
It looks like I am the only one who is sick of 1990s-size computer cases and/or the mini-tornadoes cooling down the 750-watt space heaters we call gaming rigs.

:/ Oh well. This is me getting old. I suck.

You can have a 400W GPU in a mATX case. Heck, you can have one of those in a Prodigy if you want.

This isn't about fitting those kinds of cards in cramped cases; you can do that already. This is about people suddenly caring about power consumption in halo products. Psst, most people buying those halo products have enthusiast PSUs and cases to begin with.
 

SlickR12345

Senior member
Jan 9, 2010
542
44
91
www.clubvalenciacf.com
Can't wait. Hopefully Nvidia isn't as terrible in their pricing, realizes they are losing market share and status, and starts pricing GPUs correctly.

I mean, the 750 Ti is a joke of a card; for $150 it's at least 20% slower than the R7 265 from AMD and barely beats out the R7 260X, which sells for $120.

As far as the high end goes, I hope the new 880 isn't $700+, milking everyone out of their money for 15-20% performance gains over the 780 Ti.

Hopefully the 880 will give 25% improved performance for $550.
 

Galatian

Senior member
Dec 7, 2012
372
0
71
assuming that a smaller node = less heat probably isn't the best line of thinking. Look what happened to Haswell


I'm not sure Haswell is a good example though: the problem there is that heat is transferred through TIM to the IHS and then through more paste to the actual cooler. It is highly inefficient, especially considering the gap they included between the die and the IHS with the use of their glue. For example, my GTX 670 runs at 40 degrees Celsius overclocked on my watercooling rig because the GPU die is directly cooled.

On the other hand, there is a point to be made that a smaller feature size means less area for the same heat to be dissipated, so 28nm might have been the sweet spot, but we'll see.

That being said, I'm definitely waiting for both companies to bring out new GPUs, in the hope that they keep each other price-competitive.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
No 20nm, no buy.

My little case doesn't have the luxury of dissipating that much heat.

Fully agree. If they go 28nm now, it's basically a complete long-term disaster for dGPU makers. I assume AMD will do the exact same due to cost. Then we may end up with a 4-year period before 20nm is even an option.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Fully agree. If they go 28nm now, it's basically a complete long-term disaster for dGPU makers. I assume AMD will do the exact same due to cost. Then we may end up with a 4-year period before 20nm is even an option.

So, GM107 and GM108 are a "disaster"? :confused:

20nm allows only a 30% reduction in power consumption. On the other hand, the price for a wafer goes up by as much as 50%...
nVidia has shown that they are able to deliver the same performance jump with another 28nm generation as they would have been able to with 20nm.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
So, GM107 and GM108 are a "disaster"? :confused:

20nm allows only a 30% reduction in power consumption. On the other hand, the price for a wafer goes up by as much as 50%...
nVidia has shown that they are able to deliver the same performance jump with another 28nm generation as they would have been able to with 20nm.

Yes it is a disaster, because it means they can't afford the future. And Maxwell added what, 30% die size?

A 20nm version, even with the same amount of transistors (which would be almost the same cost per chip), would still give better electricals for higher performance, lower power consumption, or both. Obviously nVidia and AMD can't afford that currently. And we are on a fast track for a 4-year (or longer) cycle on the same node for dGPUs.

Then you can champion the new uarch and its improvements as much as you like, while making up one excuse after another for the company.

This is yet another step towards the death of the dGPU.
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Yes it is a disaster, because it means they can't afford the future. And Maxwell added what, 30% die size?

And a 20nm wafer costs nearly 40-50% more than a 28nm one.

A 20nm version, even with the same amount of transistors (which would be almost the same cost per chip), would still give better electricals for higher performance, lower power consumption, or both. Obviously nVidia and AMD can't afford that currently. And we are on a fast track for a 4-year (or longer) cycle on the same node for dGPUs.
What?! A 20nm wafer costs more and has lower yields than a 28nm wafer today. With the same amount of transistors, the price of the chip will be the same in the best case. The reality is that a 20nm chip with the same amount of transistors will cost much more than the "same" chip on 28nm. Not to mention that there is only a 30% reduction in power consumption...

Then you can champion the new uarch and its improvements as much as you like, while making up one excuse after another for the company.

This is yet another step towards the death of the dGPU.
Most improvements come from the architecture and not from a new node. There isn't a single reason to go with a new node when you can achieve the same performance jump with the old one, especially when the price goes up in a big way.

And yet GM108 is beating everything Intel can offer.
Wake me when Intel can produce something that is a) faster and b) available in notebooks under $1000.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
So you are saying nVidia couldn't afford to lower power consumption by 30% or increase performance for the same die cost, albeit with higher IC design cost. And the consumer instead has to suffer the 75-100W of extra heat and cost in the performance desktop, while nVidia becomes even more irrelevant in the mobile space. Not to mention the atrocious die sizes that something like GM200 will have.

Thanks for the clarification.
 
Last edited:

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I wonder if the 6+8-pin is for the GPU and the other 6-pin is for the ARM co-processor, ala the Denver project (hence the weird board layout to accommodate such a thing, if it exists). Or else they could simply have gone with dual 8-pins.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I wonder if the 6+8-pin is for the GPU and the other 6-pin is for the ARM co-processor, ala the Denver project (hence the weird board layout to accommodate such a thing, if it exists). Or else they could simply have gone with dual 8-pins.

There is no such thing for dGPUs. People simply confused it with the Tegra development.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I think that the power connector may be done like that to show that the reference PCB supports 8+6-pin power options, either with a side-by-side arrangement or a single double-decker connector. I bet that the OEMs will either install only the double-decker connector or two shorter single-level connectors, depending on what provides the best clearance for their fan arrangement (e.g. squirrel-cage blowers are more likely to use the double-decker, and dual/tri-fan axial heatsinks will use the side-by-side).

Yup. You may have noticed the little button next to the 6-pin on the right side. The button is there to cut off that connector during power testing. The retail versions will have the stacked pins, 8+6. Or 6+6 like the GTX 680 had. One of them will have to go.

99.99% certain

That die size looks to be around 400mm² compared against the GDDR5 chips; it's not big Maxwell, but rather mid-range Maxwell, ala GTX 680/770.

I think we know it's coming soon; the leaks have Maxwell on 28nm and as a mid-range part.

The same leakers noted AMD's 28nm next-gen is also launching soon. Should be fun.

Yeah, it's most likely GM204, aka our upcoming GTX 880.
Unless it's 20nm GM210, but that's very unlikely because I bet extremely few samples have been made.

And I think Nvidia is waiting for AMD to release Tonga in August before responding with the GTX 880.
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
So you are saying nVidia couldn't afford to lower power consumption by 30% or increase performance for the same die cost, albeit with higher IC design cost. And the consumer instead has to suffer the 75-100W of extra heat and cost in the performance desktop, while nVidia becomes even more irrelevant in the mobile space. Not to mention the atrocious die sizes that something like GM200 will have.

Thanks for the clarification.

...

How does the consumer suffer with GM107 or GM108?
GM107 offers around twice the performance of GK107 without increasing the power consumption. A shrink only offers a 30% reduction.
GM107 is 25% bigger than GK107, while a GK107 shrink would be only 30% smaller because of the limited power reduction. On the other hand, the wafer price goes up by 40-50% and the yield goes down.
Even with a 25% bigger 28nm die, there is no difference between the costs.
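
A quick sketch of that cost comparison, treating the figures in this thread as inputs (GM107's ~148mm² die size, a die roughly 30% smaller on 20nm, and a wafer ~45% more expensive are all assumptions here) and ignoring yield and wafer-edge losses:

```python
import math

# Ballpark assumptions, normalized costs; yield and edge losses are ignored.
WAFER_AREA_MM2 = math.pi * (300.0 / 2) ** 2   # 300 mm wafer

gm107_28nm_area = 148.0               # mm^2, assumed GM107 die size
area_20nm = gm107_28nm_area * 0.7     # "only 30% smaller" on 20nm (assumed)

wafer_cost_28nm = 1.00                # normalized
wafer_cost_20nm = 1.45                # "40-50% more" per wafer (assumed)

def cost_per_die(die_area_mm2, wafer_cost):
    # Crude dies-per-wafer estimate: usable wafer area divided by die area
    return wafer_cost / (WAFER_AREA_MM2 / die_area_mm2)

ratio = cost_per_die(area_20nm, wafer_cost_20nm) / cost_per_die(gm107_28nm_area, wafer_cost_28nm)
print(f"20nm cost per die vs. 28nm: {ratio:.2f}x")   # ~1.0x, essentially a wash
```

In other words, before yields are even considered, the smaller 20nm die buys back roughly what the more expensive wafer costs.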

I think you don't really understand that going further means there is no advantage in price and less and less of a gain in power reduction. With higher prices and a smaller power reduction there is no rush for it when you can achieve the same performance jump with the old process node.
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Sontin, you say that it isn't necessary for Nvidia to go to any node below 28nm, and yet you're saying that Intel can't beat Nvidia?

Unlike Nvidia, Intel's new nodes have both improved power consumption and performance, and a lower price per transistor. If Nvidia does what you say, Intel's going to have an easy time competing with their 14 and 10nm transistors.

Even if 20nm doesn't have a lower cost, chips still benefit from the higher efficiency of the smaller transistors.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Sontin, you say that it isn't necessary for Nvidia to go to any node below 28nm, and yet you're saying that Intel can't beat Nvidia?

nVidia will go to 20nm when the price comes down and yield and volume go up.

Unlike Nvidia, Intel's new nodes have both improved power consumption and performance, and a lower price per transistor. If Nvidia does what you say, Intel's going to have an easy time competing with their 14 and 10nm transistors.

Sure. That must be the reason why Haswell is <10% faster than Ivy Bridge while using nearly the same amount of power.

Even if 20nm doesn't have a lower cost, chips still benefit from the higher efficiency of the smaller transistors.

Like I said: power only goes down by 30%. With 28nm it was 45%.