
Discussion: 'Lovelace'? Next gen Nvidia gaming architecture speculation


dr1337

Member
May 25, 2020
For the past month or so, the infamous Twitter leaker kopite7kimi has had a nice trickle of next-generation Nvidia leaks, and I haven't seen too many people talking about it.

[attached image: k1.PNG]

And now today, with further extrapolation by 3DCenter.org and VideoCardz, the full AD102 die is pegged at 18,432 FP32 cores as a massive monolithic 5nm part. This news is really early considering Ampere launched only a few months ago, but kopite has an excellent track record and doesn't expect Lovelace for over another year anyway.
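As a sanity check on that figure: assuming Lovelace keeps Ampere's 128 FP32 lanes per SM (an assumption on my part, not something the leak confirms), the rumored core count works out to a clean SM layout:

```python
# Back-of-the-envelope check on the rumored AD102 figure.
# Assumption: Lovelace keeps Ampere's 128 FP32 lanes per SM.
FP32_PER_SM = 128

def sm_count(fp32_cores: int) -> int:
    """SMs implied by a given FP32 core count."""
    return fp32_cores // FP32_PER_SM

ad102_rumored = 18_432   # from the kopite7kimi / 3DCenter extrapolation
ga102_full = 10_752      # full GA102 (84 SMs) for comparison

print(sm_count(ad102_rumored))               # 144 SMs
print(round(ad102_rumored / ga102_full, 2))  # ~1.71x GA102's FP32 lanes
```

So the "nearly double FP32" reading checks out as roughly 1.7x the full GA102, if the per-SM layout really does carry over.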

IMO this is starting to get interesting. Assuming the leaks are true, it raises the question: is there something wrong with Hopper? Or was Hopper just too far out, and Nvidia is refreshing the lineup with "Ampere 2.0" in the meantime? (Too much pressure from AMD?) And holy cow, they're going to nearly double FP32 again?!?! It's hard not to be excited about such a monster graphics card 😆

What do you guys think? Is kopite off their rocker? Is Nvidia really going to give us Big Ampere? I know it's still really early for any legitimate speculation, but it's also rare that we'd get such details so far out, though such a sudden shift in plans is also atypical for Nvidia. Still, it sounds very interesting, and it's gonna be neat to see how this all pans out.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
Not quite the right place for this but:


Looks like we're going to see an Ampere refresh next year. Would be nuts not to have a refresh at this stage, given how few cards have actually made it into the general public's hands, as well as what is shaping up to be an absolutely nutty second-hand market here.

Why start rolling out new architectures when there's 3/4 of a tank left in the current one...
 

jpiniero

Diamond Member
Oct 1, 2010
Not quite the right place for this but:


Looks like we're going to see an Ampere refresh next year. Would be nuts not to have a refresh at this stage, given how few cards have actually made it into the general public's hands, as well as what is shaping up to be an absolutely nutty second-hand market here.

Why start rolling out new architectures when there's 3/4 of a tank left in the current one...
Don't see the point unless it's on SS7. Full GA103 is too close to a 3080. There wouldn't be any room for a 3080 Super.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
Don't see the point unless it's on SS7. Full GA103 is too close to a 3080. There wouldn't be any room for a 3080 Super.
-A lot of speculation that GA103 will be a slimmed-down chip with a much smaller footprint, but with similar or higher performance than a 3080/Ti/GA102 thanks to higher clocks.

Basically allows NV the opportunity to do a few things:

- Produce more dies per wafer and feed the beast a bit.

- Realign the now far too low MSRP on the original Ampere line much higher to bring MSRP more in line with actual market value of the cards.

- Un-**** the RAM situation with the current line up that includes a 10GB top end card.
 

jpiniero

Diamond Member
Oct 1, 2010
-A lot of speculation that GA103 will be a slimmed-down chip with a much smaller footprint, but with similar or higher performance than a 3080/Ti/GA102 thanks to higher clocks.
I don't see how they can get higher clocks unless it's a better node or a pretty significant respin.

- Realign the now far too low MSRP on the original Ampere line much higher to bring MSRP more in line with actual market value of the cards.
They'd get bashed real hard if they tried to sell the full GA103 at $699, even if it's only like 2% slower than the 3080. I suppose they could put 20 GB of GDDR6 but I doubt they will do that.

- Un-**** the RAM situation with the current line up that includes a 10GB top end card.
Don't think they will fix this without 2 GB chips being available for GDDR6X. Not entirely clear that's happened.
 

Ajay

Diamond Member
Jan 8, 2001
Don't think they will fix this without 2 GB chips being available for GDDR6X. Not entirely clear that's happened.
Can't find any public info on 16 Gb modules. Wonder if it would be worth it to go with 16 Gb GDDR6 from Samsung (or elsewhere). Offering 16 GB of VRAM on a card with ~3070 Ti performance would be a worthwhile trade-off for competing with AMD.
 

jpiniero

Diamond Member
Oct 1, 2010
Can't find any public info on 16 Gb modules. Wonder if it would be worth it to go with 16 Gb GDDR6 from Samsung (or elsewhere). Offering 16 GB of VRAM on a card with ~3070 Ti performance would be a worthwhile trade-off for competing with AMD.
They could have done that with the 3070 Ti but chose not to. Full GA103 is rumored to be 320 bit although they could change it.
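To show why the bus width pins down the capacity options: with standard 32-bit-wide GDDR6/6X devices (one chip per channel, no clamshell mode — my assumptions here, matching how the 30-series cards are configured), capacity comes in steps set by the per-chip density:

```python
# Rough VRAM math for a given memory bus.
# Assumption: standard 32-bit-wide GDDR6/6X chips, one per channel,
# no clamshell (double-sided) mounting.
def vram_gb(bus_width_bits: int, chip_gbit: int) -> float:
    chips = bus_width_bits // 32   # one 32-bit chip per channel
    return chips * chip_gbit / 8   # gigabits -> gigabytes

print(vram_gb(320, 8))    # 10.0 GB with today's 8 Gb GDDR6X chips
print(vram_gb(320, 16))   # 20.0 GB once 16 Gb chips exist
print(vram_gb(384, 16))   # 24.0 GB on a 384-bit bus
```

So a 320-bit GA103 is stuck at 10 GB or 20 GB, with nothing in between, which is why the 16 Gb chip availability question matters.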
 

Ajay

Diamond Member
Jan 8, 2001
They could have done that with the 3070 Ti but chose not to. Full GA103 is rumored to be 320 bit although they could change it.
Too bad it isn't 384-bit. Tough sell in some respects against the higher memory capacities that AMD has. With prices high, 'future-proofing' must be on purchasers' minds. Nvidia guessed wrong in some respects, but they are selling everything they make, so it's not hurting them at all.
 

A///

Senior member
Feb 24, 2017
Not really related, but whether you like LTT or find them obnoxious, this clip had some interesting depth to it. R&D won't stop on a product a year or two out from sampling and it may end up with a date overlap at this rate of replenishment.

 

GodisanAtheist

Diamond Member
Nov 16, 2006
Again, a little off topic, but will provide some insight into how NV plans on tackling the MCM problem as more information comes out:

 

Mopetar

Diamond Member
Jan 31, 2011
Are these really any different than old cards like the GTX 590 or the Radeon 5970, where it's just an SLI/Crossfire setup on a single board? I'm supposing that these are going to be chips on the same package as opposed to just the same board, but it doesn't seem as though either company is using the same kind of approach as with Zen; rather, this is only being implemented because not doing so would push the resulting monolithic chip beyond the reticle limit.
 

Hitman928

Diamond Member
Apr 15, 2012
Are these really any different than old cards like the GTX 590 or the Radeon 5970, where it's just an SLI/Crossfire setup on a single board? I'm supposing that these are going to be chips on the same package as opposed to just the same board, but it doesn't seem as though either company is using the same kind of approach as with Zen; rather, this is only being implemented because not doing so would push the resulting monolithic chip beyond the reticle limit.
I think if they just put 2 dice on a substrate with interconnects (similar to 2 GPUs on one board but with higher bandwidth), then the cards won't be viable for gaming without some kind of additional circuitry for load distribution and syncing, and I would imagine it would also have to do so in a low-latency manner. For compute it would work just fine. It's possible that Nvidia's first MCM GPUs are compute-only, but without really any info on Hopper's architecture, it's impossible to say with any certainty.
 

Gideon

Golden Member
Nov 27, 2007
It looks like Lovelace is TSMC 5nm, not Samsung; that's close to 2 node shrinks (definitely 1.5+, as Samsung 5nm is roughly equal to TSMC 7nm in power draw):


Against the rumored chiplet RDNA3, Nvidia is gonna need all the help it can get, but the node shrink alone should allow significantly more SMs. If they can overhaul the architecture decently and add major new features (unlike the 3xxx series), they should be fine. Hard to see them keeping the rasterization crown, though.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
It looks like Lovelace is TSMC 5nm, not Samsung; that's close to 2 node shrinks (definitely 1.5+, as Samsung 5nm is roughly equal to TSMC 7nm in power draw):


Against the rumored chiplet RDNA3, Nvidia is gonna need all the help it can get, but the node shrink alone should allow significantly more SMs. If they can overhaul the architecture decently and add major new features (unlike the 3xxx series), they should be fine. Hard to see them keeping the rasterization crown, though.
- As if supply constraints weren't bad enough. If true, it also says a lot about how NV feels its Samsung experiment went, which is a bit of a bummer if we're back to Samsung = high-volume, low-performance parts.
 

jpiniero

Diamond Member
Oct 1, 2010
It looks like Lovelace is TSMC 5nm, not Samsung; that's close to 2 node shrinks (definitely 1.5+, as Samsung 5nm is roughly equal to TSMC 7nm in power draw):
Definitely, it's like 3x the density. Maybe what Nvidia does is something like 7 cuts, with the LL101 cut being 3x the SM count in case they need it. Don't know if you should really expect much other than a straight shrink and more SMs.
 

CakeMonster

Golden Member
Nov 22, 2012
- As if supply constraints weren't bad enough. If true, it also says a lot about how NV feels its Samsung experiment went, which is a bit of a bummer if we're back to Samsung = high-volume, low-performance parts.
I mean, it made them buttloads of money, and they didn't have to fight for capacity with all the other TSMC 7nm customers. Samsung gets a really bad rep, but I'm not sure NV would have wanted to make this choice differently.

For the next generation, buying into TSMC 5nm after Apple is done with most of the volume (if we assume release in late '22) would make sense for the economy of a large-scale order (like Samsung made sense in '20).
 

eek2121

Golden Member
Aug 2, 2005
I mean, it made them buttloads of money, and they didn't have to fight for capacity with all the other TSMC 7nm customers. Samsung gets a really bad rep, but I'm not sure NV would have wanted to make this choice differently.

For the next generation, buying into TSMC 5nm after Apple is done with most of the volume (if we assume release in late '22) would make sense for the economy of a large-scale order (like Samsung made sense in '20).
NVIDIA has definitely had fewer supply issues than AMD. Hopefully, going forward, supply won't be an issue.
 

Ajay

Diamond Member
Jan 8, 2001
NVIDIA has definitely had fewer supply issues than AMD. Hopefully, going forward, supply won't be an issue.
They will have supply issues with Lovelace, since it will be on N5, with a ton of other products. Maybe they'll still source some lower end GPUs from Samsung.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
I mean, it made them buttloads of money, and they didn't have to fight for capacity with all the other TSMC 7nm customers. Samsung gets a really bad rep, but I'm not sure NV would have wanted to make this choice differently.

For the next generation, buying into TSMC 5nm after Apple is done with most of the volume (if we assume release in late '22) would make sense for the economy of a large-scale order (like Samsung made sense in '20).
- Right. Basically, Samsung is far enough behind tech-wise, and AMD is catching up arch-wise, that NV is going to abandon the fab that let them have tons of volume and make money hand over fist during a pandemic, to jump onto the horrendously crowded N5 node with everyone else following in Apple's wake.

Currently NV isn't really competing with anyone else for fab space at Samsung, and even then we're getting crushed by a supply/demand imbalance (granted, a lot of that is demand-driven). But we're not even going to have that buffer next round. High GPU prices and inventory shortages are here to stay for the long term...
 

CP5670

Diamond Member
Jun 24, 2004
I wonder what people are going to play on this. Are games really going to get that demanding in a year or two? The existing 3000-series cards are already fast enough for pretty much everything except 4K RT without upscaling or top-end VR. They should just make it easier to buy the existing cards.

All the game studios also seem to be moving to multiplayer live service games with microtransactions, where top end graphics is not really the priority to begin with.
 

Saylick

Golden Member
Sep 10, 2012
I wonder what people are going to play on this. Are games really going to get that demanding in a year or two? The existing 3000-series cards are already fast enough for pretty much everything except 4K RT without upscaling or top-end VR. They should just make it easier to buy the existing cards.

All the game studios also seem to be moving to multiplayer live service games with microtransactions, where top end graphics is not really the priority to begin with.
Crysis (Remastered), naturally. ;)
 

Mopetar

Diamond Member
Jan 31, 2011
Just turn down the clocks and/or undervolt it, and there's your efficient card. The problem is that you still pay the same price as for the card pushed to the limits, and unfortunately that's not just MSRP.
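The clocks-and-undervolt point can be roughly quantified with the usual rule of thumb that dynamic power scales with frequency times voltage squared (a simplified CMOS model I'm assuming here; real cards also have static leakage, so treat this as a sketch, not a measurement):

```python
# Rule-of-thumb dynamic power model: P ~ f * V^2.
# Assumption: simplified CMOS scaling, ignoring static leakage.
def relative_power(freq_scale: float, volt_scale: float) -> float:
    """Power relative to stock for scaled frequency and voltage."""
    return freq_scale * volt_scale ** 2

# A modest -10% clock with a -10% undervolt:
print(round(relative_power(0.90, 0.90), 2))  # ~0.73 of stock power
```

Small cuts compound: giving up ~10% performance can plausibly shave roughly a quarter of the power draw, which is why cards shipped "pushed to the limits" look so inefficient.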
 

gdansk

Senior member
Feb 8, 2011
The next generation will be more energy-efficient than Ampere. If games do not need more performance, then cards can simply be dialed back along their efficiency curve.

But because the premise that games do not benefit from more performance is flawed, they won't be. The argument "but who needs that much performance?" has been proven false by history: software always expands to waste or exploit the available hardware.
 
