
Question: 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
How much of a gain is the Samsung 7nm EUV process expected to provide?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 
I also saw this speculation elsewhere. What if there really are two dies: not a co-processor, but another full die on the back for a traditional xx90 type of product?


"Shots of NVIDIA’s RTX 3090 PCB (or perhaps the 3080) have surfaced on the Bilibili forums (now taken down), showing an interesting design. The VRAM dies are on the back-side of the PCB along with a second processor (co-processor?). Furthermore, the NVLink connector is different from the one on the Turing GPUs."
https://www.hardwaretimes.com/nvidi...eportedly-surfaces-double-sided-with-2-chips/
All of that is basically just speculation based on the leaked image, no different from what people have been doing here. There's nothing to say anything is under that CPU.
If it were a traditional xx90 product, they'd lay out the GPUs on the front side like they traditionally do. Cooling and trace routing for that backside GPU would be a nightmare, for no good reason. You'd almost need a 4-slot card with coolers above and below, or a 3-slot card with hybrid or liquid cooling.
 
The FE has the cooler on both sides. Might be four slots.
😀 ...nope, it still takes 2 slots

I also saw this speculation elsewhere. What if there really are two dies: not a co-processor, but another full die on the back for a traditional xx90 type of product?


"Shots of NVIDIA’s RTX 3090 PCB (or perhaps the 3080) have surfaced on the Bilibili forums (now taken down), showing an interesting design. The VRAM dies are on the back-side of the PCB along with a second processor (co-processor?). Furthermore, the NVLink connector is different from the one on the Turing GPUs."
https://www.hardwaretimes.com/nvidi...eportedly-surfaces-double-sided-with-2-chips/
VRAM is on both sides because those are 8 Gbit/1 GB per chip -> 24 GB = 12 GDDR chips on each side. And there is NO second GPU/FPGA on the back side of the PCB, there are just SMDs (and one SMD supercap 😉 )
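To make that arithmetic explicit, here's a minimal sketch (assuming only the figures quoted above: 8 Gbit = 1 GB per die and a 24 GB card):

```python
# Chip count for a rumoured 24 GB card built from 8 Gbit (1 GB) GDDR dies.
total_vram_gb = 24        # rumoured capacity
gb_per_chip = 8 / 8       # 8 Gbit per die -> 1 GB

chips_needed = int(total_vram_gb / gb_per_chip)  # 24 dies in total
per_side = chips_needed // 2                     # split across the PCB: 12 front, 12 back
print(chips_needed, per_side)                    # -> 24 12
```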
 
😀 ...nope, it still takes 2 slots

VRAM is on both sides because those are 8 Gbit/1 GB per chip -> 24 GB = 12 GDDR chips on each side. And there is NO second GPU/FPGA on the back side of the PCB, there are just SMDs (and one SMD supercap 😉 )
Is the 3090 the new Titan, or is there going to be a 48GB Titan?
 
I get the feeling that if 1080Ti people want to stay in a similar price bracket, they may be stuck with the 3060 cards or maybe a lower-model 3070 for about a 15% performance uplift. I'd think that after Turing, Nvidia would want to offer people an upgrade. I can't imagine them leaving so many people with a 1080Ti without a viable upgrade path yet again. Does it make sense for them to do that? I honestly don't know. If the only impressive upgrade comes in at $1000 or higher, then they can't be interested in making a sale to their traditional enthusiast crowd. They would clearly be moving on to a different customer altogether. It just seems odd to me. Even if the 2080Ti had been $700, I would have passed it up. It didn't bring enough performance to the table over what I already had. If I'm stuck with a 3070Ti at $800 that is basically on par with a 2080Ti, then I'll be skipping 2 entire generations in a row, which I don't recall ever doing before. I can't be alone here. Seems very odd.

I'm in the same boat as you. I'd have a really hard time spending $800 on a new GPU for a 35-40% performance improvement. If 50-60% more performance is going to cost me $1,200, I'll just carry on with my 1080Ti.
 
😀 ...nope, it still takes 2 slots

VRAM is on both sides because those are 8 Gbit/1 GB per chip -> 24 GB = 12 GDDR chips on each side. And there is NO second GPU/FPGA on the back side of the PCB, there are just SMDs (and one SMD supercap 😉 )

Ian said the power draw of GDDR6X is 25% more at the higher speed. 24 chips would be a ton of power to the point where there might not be room.
 
Ian said the power draw of GDDR6X is 25% more at the higher speed. 24 chips would be a ton of power to the point where there might not be room.
Interesting. It basically means that the memory subsystem alone draws 60W per 12 memory chips.

If those GPUs have 24 memory chips, that means 120W of power draw from the memory subsystem alone.

😵
 
Ian said the power draw of GDDR6X is 25% more at the higher speed. 24 chips would be a ton of power to the point where there might not be room.
So, check that leaked Colorful PCB picture again. There are 12 GDDR6(X) chips on the back side + another 12 on the front side... or do you think that nV will place memory chips on the back side only with consumer Ampere cards?
 
So, check that leaked Colorful PCB picture again. There are 12 GDDR6(X) chips on the back side + another 12 on the front side... or do you think that nV will place memory chips on the back side only with consumer Ampere cards?

I guess I should say that it's not impossible they do 24 GB, just that something would have to give. Perhaps the 24 GB models would have better binned chips.

It's been a while since nVidia offered a memory choice at the high end, but perhaps this is the case.
 
The 3090 appears to be replacing the Titan, which currently sells for $3000. So you could look at it as a 33% price cut 😛
I can imagine how Mr. Leather Jacket preaches what an awesome development that is. "Titan for $1000 less than before."

That being said, I kinda doubt that it will cost that much. €1399 would be my bet. (But in practice... and custom cards....)
 
Interesting. It basically means that the memory subsystem alone draws 60W per 12 memory chips.

If those GPUs have 24 memory chips, that means 120W of power draw from the memory subsystem alone.

😵
Assuming 300-350W total, is 120W for memory so bad? That's 35-40%. I don't see anything outrageous here.
 
Interesting. It basically means that the memory subsystem alone draws 60W per 12 memory chips.

If those GPUs have 24 memory chips, that means 120W of power draw from the memory subsystem alone.

😵

I wonder what the power difference would be with slower memory but a 512 bit bus.
 
Memory power scaling doesn't work like that; just having twice as much VRAM won't double power usage when you're keeping bandwidth constant. You're still running 19Gbps on a 384-bit bus. It'll use more power than an 11/12GB card, but not twice as much.
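A quick back-of-the-envelope check of that point (a minimal sketch; the 19 Gbps data rate and 384-bit bus are the rumoured figures discussed above):

```python
# Total memory bandwidth is set by per-pin data rate and bus width,
# not by how many gigabytes hang off the bus.
data_rate_gbps = 19     # rumoured GDDR6X per-pin speed
bus_width_bits = 384    # rumoured bus width of the top die

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8  # -> 912.0 GB/s
print(bandwidth_gbs)

# Whether that bus feeds 12 GB (one chip per 32-bit channel) or 24 GB
# (two chips sharing each channel in clamshell), the bandwidth above is
# unchanged -- which is why doubling capacity doesn't double interface power.
```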
 
The 3090 appears to be replacing the Titan, which currently sells for $3000. So you could look at it as a 33% price cut 😛

And if the rumor is true, why would NVidia cut 33% from the Titan's price if the 3090 is replacing it, unless a Titan may come out later on? Is this new segmentation or just a reshuffling of names? If the 3090 matches or considerably exceeds last-gen Titan performance, is this NVidia confirming their fear that AMD have actually come up with something viable?

*Insert UFO sounds*
 
If those GPUs have 24 memory chips, that means 120W of power draw from the memory subsystem alone.
Not necessarily. Those memory chips would be running in GDDR clamshell mode, where each pair of chips has half its memory bus interface disabled, so power would likely not increase as much as raising the chip count from 12 to 24 would otherwise suggest. As it's the PHY that consumes the vast majority of power in a GDDR device, if you disable half of it, almost half of the chip's power dissipation goes away.

You'd raise consumption by whatever the on-chip logic requires to manage chip functions, execute commands sent to it (reading, writing and so on), and keep the DRAM arrays refreshed, which would be pretty manageable.
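As a rough illustration of that argument, here is a toy model (the per-chip wattage figures below are made-up placeholders, not measured values; only the "half the PHY disabled in clamshell" logic comes from the post above):

```python
# Toy model: per-chip power = PHY power (scales with the bus width the chip
# actually drives) + core overhead (command handling, DRAM refresh, etc.).
PHY_W_FULL_CHANNEL = 2.0   # hypothetical watts for a chip driving a full 32-bit channel
CORE_W = 0.5               # hypothetical watts of per-chip logic/refresh overhead

def memory_power(chip_count, clamshell):
    phy_fraction = 0.5 if clamshell else 1.0  # clamshell: half of each chip's PHY is disabled
    return chip_count * (PHY_W_FULL_CHANNEL * phy_fraction + CORE_W)

print(memory_power(12, clamshell=False))  # 30.0 W -- 12 chips, each on a full channel
print(memory_power(24, clamshell=True))   # 36.0 W -- 24 chips, each driving half a channel

# Going from 12 to 24 chips only adds the extra core/refresh overhead;
# total PHY power stays roughly the same because the bus width doesn't change.
```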
 
And if the rumor is true, why would NVidia cut 33% from the Titan's price if the 3090 is replacing it, unless a Titan may come out later on? Is this new segmentation or just a reshuffling of names? If the 3090 matches or considerably exceeds last-gen Titan performance, is this NVidia confirming their fear that AMD have actually come up with something viable?

*Insert UFO sounds*
Pricing will be quite interesting as they're launching first.
 
And if the rumor is true, why would NVidia cut 33% from the Titan's price if the 3090 is replacing it, unless a Titan may come out later on? Is this new segmentation or just a reshuffling of names? If the 3090 matches or considerably exceeds last-gen Titan performance, is this NVidia confirming their fear that AMD have actually come up with something viable?

*Insert UFO sounds*

It will most likely lack some of the DP performance the Titan has. The RTX Titan has sold pretty poorly. With its price being so high, people just go up to a Quadro. And nobody buys a Titan for its gaming performance. So release something in its place, gimp the workstation performance, and sell it as a top-tier gaming card.
 
Pricing will be quite interesting as they're launching first.

Definitely. I made a post last night about old NVidia prices and how much they've changed. I'm both excited and afraid of the launch. If NVidia get forced into a yearly game of launches by AMD, I'm okay with that and would only bother upgrading every 2-3 years if the performance difference is decent enough to warrant the upgrade.
It will most likely lack some of the DP performance the Titan has. The RTX Titan has sold pretty poorly. With its price being so high, people just go up to a Quadro. And nobody buys a Titan for its gaming performance. So release something in its place, gimp the workstation performance, and sell it as a top-tier gaming card.
True. I know two people who buy Titans for gaming, but they have incredible rigs and incredibly deep wallets for their hobbies. I'm really interested in sales figures concerning regular GeForce vs. Quadro vs. Titan in professional fields, since the go-to advice on modeling sites seems to be to save your money and go for a GeForce. It used to be exclusively Quadro back in the day.
 
Interesting. It basically means that the memory subsystem alone draws 60W per 12 memory chips.

If those GPUs have 24 memory chips, that means 120W of power draw from the memory subsystem alone.

😵

Lol, power scales with bandwidth, not with size... and GDDR6X has higher bandwidth efficiency.
Anyway, your calculation is nonsense.
 
Lol, power scales with bandwidth, not with size... and GDDR6X has higher bandwidth efficiency.
Anyway, your calculation is nonsense.
The R9 290X memory subsystem was using 80W of power with 6000 MHz GDDR5 and 16 memory chips.
The RX 480 memory subsystem was using 37W of power with 7000 MHz GDDR5 and 8 memory chips.

Power scales with memory bandwidth, because of the frequency: the higher the frequency, the more power it will use.
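Putting those two data points on a common footing makes the bandwidth argument clearer (a quick sketch; the memory power and data-rate figures are the ones quoted above, while the 512-bit and 256-bit bus widths are the cards' published specs):

```python
# Normalise the two memory-power examples by their total bandwidth.
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

cards = {
    # name: (effective data rate in Gbps, bus width in bits, memory subsystem watts)
    "R9 290X": (6, 512, 80),
    "RX 480": (7, 256, 37),
}

for name, (rate, bus, watts) in cards.items():
    bw = bandwidth_gbs(rate, bus)
    print(f"{name}: {bw:.0f} GB/s, {watts} W, {watts / bw:.2f} W per GB/s")

# R9 290X: 384 GB/s, 80 W, 0.21 W per GB/s
# RX 480:  224 GB/s, 37 W, 0.17 W per GB/s
# The wider, higher-bandwidth configuration draws far more total power even
# though its per-pin data rate is lower.
```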
 