[ PCGH ] Maxwell GTX 880 specifications leaked


Rvenger

Elite Member
Super Moderator
Video Cards
Apr 6, 2004
6,283
5
81
The 680 "went backwards" as well, but was still faster in the end.



I didn't see where price was mentioned, but I'm sadly expecting prices to start higher than the $500-550 mark we saw with 28nm. Anyway, if this is GK104's successor, then a 256-bit bus sounds right.

The core count doesn't sound unrealistic, either. GK104 had 4x as many cores as GK107, so if this chip is GM204 and it ends up with 5x as many cores as GM107, that is within the ballpark of the difference between the Kepler chips.

Also, as we've seen, GM107 performs quite well even with a castrated 128-bit bus and a paltry 86.4 GB/s of bandwidth.
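A quick sanity check on those figures, as a sketch: the 86.4 GB/s number falls out of the usual bus-width × data-rate formula (the 750 Ti ships with 5.4 Gbps GDDR5), and the core-count scaling uses GK107 = 384, GK104 = 1536, GM107 = 640.

```python
# Memory bandwidth in GB/s from bus width (bits) and effective data rate (Gbps).
def mem_bandwidth(bus_bits, effective_gbps):
    return bus_bits / 8 * effective_gbps

# GM107 / GTX 750 Ti: 128-bit bus with 5.4 Gbps GDDR5
print(mem_bandwidth(128, 5.4))  # -> 86.4

# Core-count scaling: GK104 (1536) is 4x GK107 (384),
# so 5x GM107 (640) would put a hypothetical GM204 at 3200 cores.
print(1536 // 384)  # -> 4
print(5 * 640)      # -> 3200
```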



GTX 680, anyone?



Go look at the specification leaks in the other thread on the R9 390; there is no way Nvidia would put GM104 up against that. I just can't see a 256-bit bus w/ 40 ROPs vs. a 512-bit bus with 96 ROPs. It just doesn't make sense, competitively speaking. Then again, both of these leaks are probably fake anyway.
 
Feb 19, 2009
10,457
10
76
True, but we could get an estimate based on the 750 Ti.

40 ROPs seems like it would be a major bottleneck though.

Not really, since the 750 Ti is purely low-end and castrated on the primary function of the bigger Maxwell: compute.

Honestly, we don't have a good idea, and won't for a long time. These are just fun, time-wasting tech-forum guessing games. ;)

As a mid-range chip, 40 ROPs will allow it to be competitive up to 1600p, so it's not a concern. The only concerning factor is the price; it looks like us gamers are going to be shafted even worse on 20nm.
 

TreVader

Platinum Member
Oct 28, 2013
2,057
2
0
Go look at the specification leaks in the other thread on the R9 390; there is no way Nvidia would put GM104 up against that. I just can't see a 256-bit bus w/ 40 ROPs vs. a 512-bit bus with 96 ROPs. It just doesn't make sense, competitively speaking. Then again, both of these leaks are probably fake anyway.

I understand the 256-bit bus, but it's not like the 780 had too many ROPs. If anything, I would expect them to release GM104 with 64 ROPs, then have GM110 with 96.



The only thing that could make up for the ROP deficit is clockspeed.
 

Braxos

Member
May 24, 2013
126
0
76
Guys, monitor resolutions are going up; rethink this a bit. Could these drive a 4K monitor, or a 1440p one, at decent quality?
 

DiogoDX

Senior member
Oct 11, 2012
757
336
136
The specs make very little sense...

It's listing effective memory at 7400 MT/s when the fastest GDDR5 is rated for 7000.
Maybe a new GDDR5? :hmm:


http://sites.amd.com/us/Documents/TFE2011_006HYN.pdf
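For scale, here's what that rumored 7400 MT/s rate would imply on the leaked 256-bit bus versus the fastest rated GDDR5 — a quick sketch using the usual bus-width × data-rate formula:

```python
# Bandwidth in GB/s from bus width (bits) and effective data rate (Gbps).
def mem_bandwidth(bus_bits, effective_gbps):
    return bus_bits / 8 * effective_gbps

print(mem_bandwidth(256, 7.4))  # rumored spec -> 236.8 GB/s
print(mem_bandwidth(256, 7.0))  # fastest rated GDDR5 -> 224.0 GB/s
```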
 

moonbogg

Lifer
Jan 8, 2011
10,734
3,454
136
I would be shocked if the "news" was actually in our favor. Nvidia's mid-range price train plowed through all of us at super speed. Why would they do something stupid like give us a full chip?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The race of random speculation has started, I see. Tech sites have gotta love all the coming clicks from changed specs over the next 6 months. Keep those ads coming in!
 
Feb 19, 2009
10,457
10
76
Don't forget that leaks of Tahiti specs from a Japanese tech site months before release (something like 6-9 months, as I recall) were spot on. But of course, it's too early to take this seriously outside of fun speculation.
 

dangerman1337

Senior member
Sep 16, 2010
437
74
91
TBH I'm not sure the second-generation Maxwells (GM200, 204, 206) will be on 20nm. I recall someone on Beyond3D saying it'll be on 28nm, be released very late Q3/early Q4, and has already been taped out. Considering the costs of 20nm it won't be cost-effective, and the 20nm that TSMC offers is SoC-only, with no LP, HP, HPM, etc. I think the energy-efficiency gains of Maxwell will make up for the lack of a die shrink, and it would be more worthwhile to have 16nm FF Pascal ready in 2016 instead of a cost-ineffective early-to-mid-2015 20nm Maxwell.
I've got a feeling that AMD will be using HBM for Pirate Islands, similar to AMD adopting GDDR5 for the R700 series.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Reducing the ROPs and memory bus width would kill performance on 4K. Given that this is going to be a major focus of high-end video cards going forward, I don't see why Nvidia would want to do that.
 

dangerman1337

Senior member
Sep 16, 2010
437
74
91
Reducing the ROPs and memory bus width would kill performance on 4K. Given that this is going to be a major focus of high-end video cards going forward, I don't see why Nvidia would want to do that.
I don't think relatively affordable 4K on a single GPU is remotely possible any time soon. Even an OC'd R9 290/GTX 780 Ti has to use high-ish or medium settings in today's titles to get a smooth, stable frame rate. I think Crysis 3 is a good metric for how intensive games in the next few years will be. There are titles coming out such as AC: Unity or Witcher 3, and I would not be surprised if Star Citizen is an intensive title as well (a recent stream had dips even on the developers' machines), that'll strain today's very high-end single GPUs at 2560x1440.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
Reducing the ROPs and memory bus width would kill performance on 4K. Given that this is going to be a major focus of high-end video cards going forward, I don't see why Nvidia would want to do that.

It's really the successor to GK104, so it's an increase from 32 to 40 ROPs, and the 256-bit bus would carry over.

GM110 would most likely be 64+ ROPs.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
The memory bus may perform okay because Maxwell is a beast in terms of bandwidth efficiency.

http://www.notebookcheck.net/Review-Clevo-W650SJ-Schenker-M504-Barebones-Notebook.114329.0.html

Just look at the 850M (1000 MHz DDR3, 32 GB/s) go. It's way faster than the bandwidth-limited 750M (the 750M's GDDR5 SKU is 10-40% faster than its DDR3 SKU).

Comparing it to the 860M (2500 MHz GDDR5):

http://www.notebookcheck.net/Review-Clevo-W370SS-Nexoc-G728II-Barebones-Notebook.114640.0.html

The 860M is a good 30% faster, but the difference is nowhere near as dramatic as expected when you consider that the 860M is hitting 770M levels while the 850M performs identically to the 765M, which has double the bandwidth (the 765M is bandwidth-limited).

Looks like Nvidia wanted to reduce the memory-controller size with Maxwell, which they certainly did.

Not sure about the ROPs.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Reducing the ROPs and memory bus width would kill performance on 4K. Given that this is going to be a major focus of high-end video cards going forward, I don't see why Nvidia would want to do that.

Thing is, the 880 has a GM204 codename, which would suggest it's NV's mid-range Maxwell card. In that case, it should target 1440p/1600p, not 4K. Even their flagship Maxwell on its own won't be fast enough for next-gen PC games at 4K. I tend to use C3 as the basis for next-gen gaming since more or less everything else looks worse graphically.

On Medium quality in C3 at 4K, the 780 Ti gets 66 fps. You think flagship Maxwell will be as fast as dual 780 Tis?


Even if it beats 780 Ti SLI, turn on max settings in C3 and the 295X2 gets less than 30 fps at 4K. No chance for a single Maxwell.


Having said that, compare the 660 vs. the 750 Ti. On paper, the 660 wipes the floor with the 750 Ti, but in practice it's only 20% or so faster. Maxwell's efficiency, especially on 20nm, can't be judged from ROP and bandwidth specs alone. The increase in IPC was 35%!

As I said in another thread, I just don't like the idea of NV repeating the 680 strategy and launching the GTX 880 at $500+, since it's still mid-range Maxwell, even if it beats the 780 Ti.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Thing is, the 880 has a GM204 codename, which would suggest it's NV's mid-range Maxwell card. In that case, it should target 1440p/1600p, not 4K. Even their flagship Maxwell on its own won't be fast enough for next-gen PC games at 4K. I tend to use C3 as the basis for next-gen gaming since more or less everything else looks worse graphically.

On Medium quality in C3 at 4K, the 780 Ti gets 66 fps. You think flagship Maxwell will be as fast as dual 780 Tis?

Even if it beats 780 Ti SLI, turn on max settings in C3 and the 295X2 gets less than 30 fps at 4K. No chance for a single Maxwell.

Having said that, compare the 660 vs. the 750 Ti. On paper, the 660 wipes the floor with the 750 Ti, but in practice it's only 20% or so faster. Maxwell's efficiency, especially on 20nm, can't be judged from ROP and bandwidth specs alone. The increase in IPC was 35%!

As I said in another thread, I just don't like the idea of NV repeating the 680 strategy and launching the GTX 880 at $500+, since it's still mid-range Maxwell, even if it beats the 780 Ti.

Nvidia is likely to repeat the strategy they adopted with GK104. A 3,200 CUDA core Maxwell chip with 48 ROPs and a 384-bit memory bus running at 7 GHz is what I believe is a realistic expectation for a 300-350 mm² chip made on TSMC 20nm. It will have to be clocked conservatively, as TSMC 20nm brings only half-node-like power-efficiency gains. But a year later, a GM200 manufactured on TSMC 16FF or 16FF+ that doubles the GTX 780 Ti is easily possible.

https://markets.jpmorgan.com/research/email/-kjegkq4/GPS-1336259-0

Here is a JP Morgan report which says TSMC 16FF is likely to start volume production in late Q4 2014 / early 2015, with a steep ramp in H2 2015.

TSMC 16FF brings close to a 40% performance improvement at the same leakage, and a 55% power reduction at the same performance, compared to TSMC 28HPM (slide 19).

http://www.eda.org/edps/EDP2013/Papers/4-4 FINAL for Tom Quan.pdf

Yeah, a single GM200 built on TSMC 16FF or 16FF+ could easily play Crysis 3 at Very High settings at 4K and be playable at 35-40 fps. To max out AA you will need to go SLI.

http://hexus.net/tech/reviews/graphics/68381-amd-radeon-r9-295x2/?page=6

http://www.hardwarecanucks.com/foru...md-radeon-r9-295x2-performance-review-10.html

http://www.hardocp.com/article/2014/04/08/amd_radeon_r9_295x2_video_card_review/4#.U0bBL6KfZ8E
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Fudzilla posted yesterday that their contacts within Nvidia revealed that 20nm Maxwell is still on track for 2H 2014. Whether that's true or not, who knows. I'm guessing AMD fans will scream that it isn't possible, while NV fans will remain optimistic. :) As is always the case.

http://fudzilla.com/home/item/34451-nvidia-20nm-maxwell-comes-in-late-2014

I have no idea. I certainly hope we get new stuff sometime this year. My optimism is waning at this point, though; it would be nice to have some new toys. :p

I guess we just won't know until it happens. Neither AMD nor NV would say much about a new architecture until release is imminent, within a month or two; any earlier than that would eat into existing SKU sales, which neither side wants. I mean, if NV came out and stated "Maxwell GTX 880 is being released in October," you can bet GTX 780 Ti sales would drop off a cliff overnight. So I can see why they're mum on the topic.
 
Last edited:

TreVader

Platinum Member
Oct 28, 2013
2,057
2
0
From Wikipedia:
"Nvidia increased the amount of L2 cache on GM107 to 2 MB, up from 256 KB on GK107, reducing the memory bandwidth needed. Accordingly, Nvidia cut the memory bus to 128 bit on GM107 from 192 bit on GK106, further saving power.[6] Nvidia also changed the streaming multiprocessor design from that of Kepler (SMX), naming it SMM. The layout of SMM units is partitioned so that each of the four warp schedulers controls isolated FP32 CUDA cores, load/store units and special function units, unlike Kepler, where the warp schedulers share the resources. Texture units and FP64 CUDA cores are still shared.[6] SMM allows for a finer-grain allocation of resources than SMX, saving power when the workload isn't optimal for shared resources. Nvidia claims a 128 CUDA core SMM has 90% of the performance of a 192 CUDA core SMX"


So it looks like it just doesn't need the memory bandwidth. It's very possible the specs are fake, but if so, whoever made them up did their homework.

edit: 0.9 × 192 = 172.8 effective Kepler cores, and (172.8 − 128) / 128 = 0.35, i.e. a 35% increase in per-core shader power from SMM.
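That arithmetic as a short sketch, taking Nvidia's 90% figure from the Wikipedia excerpt at face value:

```python
# Per-core shader gain implied by "a 128 CUDA core SMM has 90% of the
# performance of a 192 CUDA core SMX".
smx_cores, smm_cores = 192, 128
smm_relative_perf = 0.9  # Nvidia's claim

effective_kepler_cores = smm_relative_perf * smx_cores  # 172.8
per_core_gain = effective_kepler_cores / smm_cores - 1  # ~0.35
print(f"{per_core_gain:.0%}")  # -> 35%
```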
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Thing is 880 has a GM204 codename which would suggest it's NV's mid-range Maxwell card. In that case, it should target 1440/1600p, not 4K.

Whether a chip is a flagship or not depends on what else is out at the time. The Nvidia GTX 680 and AMD HD 7970 were both flagships until they were surpassed by new high-end cards with larger dies. If the 880 replaces the 780 (as the numbering would indicate) then it's the new flagship by default, unless they're planning to release an even bigger chip at the same time.

And expecting a new flagship card on an updated process node to match the previous generation's top SLI/CrossFire pair is actually not unreasonable given past history, especially since this time the node shrink is being paired with a move to a new architecture that has already been demonstrated to be considerably more efficient (as we know from the 750/750 Ti).

The bottom line is that the specs provided for the alleged GTX 880 make no sense. It would be ridiculous to pair that large a die, with that many shaders, with so few ROPs and such a narrow memory bus. It would be the opposite of future-proof; it would hurt performance in an area where the emphasis is clearly going to increase.