
How much better should the 20 nm, 800-series be?

Mondozei

Golden Member
This is basically what we know of the new Maxwell series. Courtesy of Videocardz (if the admins don't want it here, I can just do a custom table in Excel and post a picture instead.)

(image: Maxwell specification table, via Videocardz)
The Firestrike scores are as follows:

GTX 750 - 5,250 points
GTX 750 Ti - 5,963 points
GTX 650 Ti - 4,819 points
GTX 660 - 6,727 points


Given these specs, it seems like Maxwell isn't really that much faster than Kepler clock-for-clock, but makes up for it in energy savings, packing in a lot more cores. This, together with 20 nm, should be the way forward.

I don't think there's really any disagreement on this point.

But where I do think there is debate is just how much better the 800-series will be. How will the top-end card compare to a 780 Ti in, say, Firestrike? 30% more? What's a reasonable estimate?
 
Indeed, "clock for clock" is almost as useless a comparison as "best pound-for-pound fighter."
The thing to look for is perf/Watt, and Nvidia is making a big deal of it with Maxwell.

IMHO +30% should be a conservative estimate vs Kepler.
 
Massively better.
They can effectively double the transistor count on 20nm Maxwell compared to 28nm Kepler.

But nobody knows how long it will take before we get a maxed-out Maxwell. As always, Nvidia will take their time and release better cards step by step to get the most profit out of the node.
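To put a rough number on the "double the transistor count" claim: with ideal area scaling, density grows with the square of the feature-size ratio, so 28nm to 20nm gives close to 2x. A quick Python sketch (ideal scaling is an assumption; real processes rarely hit the theoretical number):

```python
# Ideal transistor-density gain from a node shrink: density scales with
# the inverse square of the feature size. This is a simplification --
# real processes rarely achieve the ideal figure.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

print(f"28nm -> 20nm: {ideal_density_gain(28, 20):.2f}x")  # 1.96x, i.e. roughly double
```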
 
Spec numbers don't really mean much when game graphics haven't progressed much as of late.

We've seen specs double or triple over the past few years, while graphics have only gotten slightly better.

Gameplay isn't exactly progressing either, and more and more developers force you to buy crap AFTER you buy a game... no thanks.

It will really take a special game for me to upgrade in the coming years...
 
I'd expect it to be like full Kepler vs. full Fermi. But it doesn't matter, because they will do a bottom-up release schedule. We won't get a real Maxwell until 2016, I bet.
 
Spec numbers don't really mean much when game graphics haven't progressed much as of late.

We've seen specs double or triple over the past few years, while graphics have only gotten slightly better.

The minimal improvements have been due to console-oriented development; developers were targeting the decade-old hardware in the 360 and PS3. Now that those are out, and console gamers have 2010-era mid-range hardware, we'll see another small boost over the next 18-24 months before the XB1/PS4 are fully maxed out; the cycle continues.

Gameplay isn't exactly progressing either, and more and more developers force you to buy crap AFTER you buy a game... no thanks.

I started a thread in PC Gaming about this yesterday; the state of microtransactions in gaming is like a cancer. In mobile gaming especially, it's choking the life out of the platform and poisoning the well.
 
Considering how mature 28nm is, I wouldn't expect massive gains right off. It's hard to say, but just a SWAG: ~30-40% for starters, increasing to 60-70% as 20nm matures.
 
Does anyone actually think we'll see >300 mm² chips as "high-end" this summer?

Unless AMD does a Hawaii with 4000 SPs at a ~420 mm² die size, I bet both companies will release small-die chips just like the 7970 and 680 and claim they're the new top of the line.
 
This is basically what we know of the new Maxwell series. Courtesy of Videocardz
Rumors are saying this:

http://www.techpowerup.com/197620/a...nd-gtx-750-pictured-clock-speeds-surface.html


Performance:

British tech publication UK Gaming Computers got their hands on the two cards, and took a peek under the hood using GPU-Z 0.7.6 (which supports the two). It confirms specifications from the older article, and also reveals clock speeds. The GTX 750 Ti features 1085 MHz core, 1163 MHz GPU Boost, and 5.50 GHz (GDDR5-effective) memory, which churns out 88 GB/s of memory bandwidth. The GTX 750, on the other hand, features the same GPU clock speeds, but slightly slower memory, at 5.10 GHz, at which the memory bandwidth is 81 GB/s. The site also put the two through a quick 3DMark 11 run (performance preset). The GTX 750 Ti scored P5963 points, and the GTX 750 scored P5250 points. Since the two are custom design cards, we're not sure if the clock speeds will stick. For all we know, the two could be factory-overclocked. Impressive performance nonetheless.
We know that it has no 6-pin connector, so ~75W TDP.
We know the chip size is about the same as the 260X's.
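The quoted bandwidth figures check out if you assume the rumored 128-bit memory bus (an assumption; the excerpt above doesn't state the bus width). Bandwidth is just the effective transfer rate times the bus width in bytes:

```python
# GDDR5 bandwidth = effective transfer rate (MT/s) * bus width (in bytes).
def mem_bandwidth_gbps(effective_mts: float, bus_width_bits: int) -> float:
    return effective_mts * 1e6 * (bus_width_bits / 8) / 1e9

print(mem_bandwidth_gbps(5500, 128))  # 88.0 -> the GTX 750 Ti's quoted 88 GB/s
print(mem_bandwidth_gbps(5100, 128))  # 81.6 -> the GTX 750's ~81 GB/s
```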


"The GTX 750 Ti scored P5963 points."
A 260X scores around ~6,300 points.


So basically we're looking at 5-10% slower than a 260X,
but using much less power.

115W TDP vs. 75W TDP.

I think the 260X will have more overclocking room than the 750 Ti,
because the 260X comes with a 6-pin connector.

I think Nvidia pushed the cards to the TDP limit (without crossing 75W) so they could sell them to OEMs.
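Using the numbers in this thread (P5963 at ~75W for the 750 Ti vs. ~6300 at 115W for the 260X), the perf/Watt gap is easy to quantify. A quick sketch, treating the 3DMark score divided by TDP as a crude efficiency metric:

```python
# Crude efficiency metric: 3DMark 11 score per watt of TDP.
def perf_per_watt(score: float, tdp_w: float) -> float:
    return score / tdp_w

gtx_750_ti = perf_per_watt(5963, 75)    # ~79.5 points/W
r7_260x = perf_per_watt(6300, 115)      # ~54.8 points/W
print(f"750 Ti efficiency advantage: {gtx_750_ti / r7_260x:.2f}x")  # ~1.45x
```

So even being ~5% slower, the 750 Ti comes out roughly 45% ahead on this metric.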
 
So the consensus seems to be:
A full-blown 20 nm Maxwell is going to kick ass.
However, when we actually get one depends on AMD.

I'm less sanguine about the notion that Nvidia will release moderately more powerful GPUs bit by bit until much later, simply because by the time we can realistically expect a full-blown Maxwell at the earliest (H1 next year), their top flagship GPUs will already have been on the market for two years.

Someone suggested we'd wait until 2016 to see a full-blown Maxwell. If that were the case, it would essentially mean the GPU market will slowly go the way of the CPU market over the next few years. Three years for a 30-40% improvement (using conservative estimates)?

If we assume a 50% jump, quite substantial after all, that would still be a 16.66% improvement per year when you spread it out. That's almost Intel-esque. Haswell gave a 10-15% IPC improvement, so 16.66% would be slightly above the upper bound of that estimate.
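For what it's worth, the 16.66% figure splits the gain evenly; if you instead treat the yearly gains as compounding, the equivalent annual rate comes out a bit lower:

```python
# A 50% total improvement over 3 years, annualized two ways.
total_gain = 0.50
years = 3

simple = total_gain / years                       # even split: ~16.67%/yr
compound = (1 + total_gain) ** (1 / years) - 1    # compounding: ~14.5%/yr

print(f"simple: {simple:.2%}, compound: {compound:.2%}")
```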

From my POV, it depends on how much improvement Maxwell is ultimately capable of. If it is above 60%, as some have suggested, then they may take a phased approach (unless forced to do otherwise by AMD). However, if it is a more modest ~40% improvement, I think we will see it by H1 next year at the latest. Hawaii was designed for 20 nm, and AMD has a history of going to new nodes first (even if being tied to a lagging GloFo probably won't help them this time).

And there's been a pick-up in node improvements lately from TSMC, which is trying hard to catch up to Intel. They should get to 14 nm late next year or early 2016, considering they're already doing volume production on 20 nm as of now.

Volta is coming out in 2016, too.

The wildcard, in some sense, is mobile. If Tegra continues to run aground like it has the last few generations (while the K1's GPU capabilities are impressive, it's important to remember the GPU is only 1/12th of the entire SoC, and the K1 won't even have an integrated LTE modem... again), that puts more pressure on desktop.

PC gaming is still growing each year and will continue to do so until 2017, according to most analysts. AMD has the console market as a slow but steady revenue provider. If Nvidia's mobile efforts continue to be hobbled, it's far from certain they can afford such a lenient stance towards the PC GPU market.
 
I suspect we won't see a "20nm" Maxwell. The only 20nm process offered by TSMC is "SoC"; the HP process got canned. They would be better off waiting for TSMC's so-called "14nm", which is really 20nm with FinFETs, and the replacement for the MIA 20nm HP process.
 
I suspect we won't see a "20nm" Maxwell. The only 20nm process offered by TSMC is "SoC"; the HP process got canned. They would be better off waiting for TSMC's so-called "14nm", which is really 20nm with FinFETs, and the replacement for the MIA 20nm HP process.

What?
TSMC won't be ready with mass production of 14nm until at least Q1 2015. Surely we won't be stuck with only architectural gains for all of 2014?
 
About the same performance as a 7790. This should be a nice match for Bonaire. Also, if true, it seems its performance per shader is a bit lower.
 
How many CUDA cores do you think will be in the big Maxwell later this year? Double what was in the Titan? And how much memory? 8 GB?
 
20/16nm Maxwell is going to be a game-changing architecture.

Look for clarity in March, when Maxwell will be officially announced.
 
What?
TSMC won't be ready with mass production of 14nm until at least Q1 2015. Surely we won't be stuck with only architectural gains for all of 2014?

TSMC and Apple have already taped out a 16nm chip, and the first wave of 16nm chips will most likely hit the market this year. Obviously not GPUs, though.
 
I suspect we won't see a "20nm" Maxwell. The only 20nm process offered by TSMC is "SoC"; the HP process got canned. They would be better off waiting for TSMC's so-called "14nm", which is really 20nm with FinFETs, and the replacement for the MIA 20nm HP process.

Whoa, seriously? Do you have a source?
This would be a big deal!
 
20/16nm Maxwell is going to be a game-changing architecture.

Look for clarity in March, when Maxwell will be officially announced.
I can already see it: canned benchmarks of a stock-clocked 780 from May 2013 vs. a small-die Maxwell boosting to 1100 MHz or higher, beating it in the reviews.
 
What?
TSMC won't be ready with mass production of 14nm until at least Q1 2015. Surely we won't be stuck with only architectural gains for all of 2014?

Whoa, seriously? Do you have a source?
This would be a big deal!

No secret source, sorry guys. 😉 But the fact that TSMC is only offering a 20nm SoC process is well documented: http://www.eetimes.com/document.asp?doc_id=1261566 TSMC tries to put a positive spin on it, but it seems pretty clear: they couldn't get a planar HP process working, and as such the only 20nm process is one targeted at SoCs, i.e. low power draw.

From that point on, though, the rest is speculation. Basically: why would AMD bring out a massive chip like Hawaii if 20nm GPUs were around the corner? Why would Nvidia go to all the effort of creating Maxwell cards on 28nm? This sort of behaviour doesn't make sense if a big transition is right around the corner.

The 20nm process is really a re-labeled 22nm, so he is incorrect.

The 20nm SoC process is not somehow excluded from desktop applications.

20nm SoC certainly could be used for a desktop part, sure. But if it doesn't have noticeably better electrical characteristics (for a high-performance chip) than 28nm HP, and has higher wafer costs that cancel out any savings from smaller dies, why would they bother with it? Especially when 20nm with FinFETs (aka "14nm") is coming so soon afterwards. Not to mention Apple is supposedly buying up as many TSMC 20nm wafers as they can get their hands on, so supplying a full GPU range could well be tricky to begin with.

It's speculation, sure, and I certainly could be wrong. It's just my thinking at the moment.
 