Discussion: Nvidia Blackwell in Q4-2024?


TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,038
136
The 5080 is the full GB203, so at least it has more SMs than the 4080 Super. Also, it's 400 W.
Only 5% more CUDA cores.
Not very optimistic, but GB207 supposedly has only 20 SMs, ~17% fewer than AD107's 24.
Nvidia wouldn't release a new GPU that is slower than its predecessor; it needs to be faster by at least 15%.
So I expect IPC gains, clock-speed gains, or both to make up for the missing SMs, plus extra performance on top of that.
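Quick back-of-envelope on how big that combined gain would have to be (a rough sketch, just assuming performance scales with SM count × per-SM throughput):

[CODE=python]
# Rough estimate: per-SM gain (IPC x clock) that GB207 would need to beat
# AD107 by 15% despite fewer SMs. Assumes perf ~ SMs x per-SM speed.
ad107_sms = 24
gb207_sms = 20  # rumored
target_speedup = 1.15

needed_per_sm_gain = target_speedup * ad107_sms / gb207_sms
print(f"Needed per-SM gain: {needed_per_sm_gain:.2f}x")  # ~1.38x
[/CODE]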

16GB of VRAM? Not really enough, but what did we expect? AMD will also stay at this amount, even if at a lower price.
 

blackangus

Member
Aug 5, 2022
149
204
86
Only 5% more CUDA cores.
Not very optimistic, but GB207 supposedly has only 20 SMs, ~17% fewer than AD107's 24.
Nvidia wouldn't release a new GPU that is slower than its predecessor; it needs to be faster by at least 15%.
So I expect IPC gains, clock-speed gains, or both to make up for the missing SMs, plus extra performance on top of that.

16GB of VRAM? Not really enough, but what did we expect? AMD will also stay at this amount, even if at a lower price.
Well, in some games RT + DLSS on high at 4K uses more than 16GB of VRAM, so yeah, for the 5080, 16GB really isn't desirable.
Some games use even more than 16GB at 1080p (I read the latest SW game was hitting 20GB at 1080p with RT + DLSS).
 

jpiniero

Lifer
Oct 1, 2010
15,227
5,780
136
Only 5% more CUDA cores.
Not very optimistic, but GB207 supposedly has only 20 SMs, ~17% fewer than AD107's 24.
Nvidia wouldn't release a new GPU that is slower than its predecessor; it needs to be faster by at least 15%.

I def wouldn't expect much from any desktop GB207.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
Depends on when monitors get around to fully implementing DP2.1.

It's one of those standards where you can claim DP2.1 without implementing the full bandwidth, as in the new Sony 480Hz OLED with 2.1:


"one DisplayPort 2.1 port"

But apparently it's not full bandwidth:
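A rough sketch of why the link rate matters: DP2.1 defines UHBR10/13.5/20 at 40/54/80 Gb/s raw, and a port can carry the "2.1" label at any of them. Back-of-envelope, ignoring DSC and with a crude blanking estimate:

[CODE=python]
# Back-of-envelope DP2.1 bandwidth check. Ignores DSC; the ~12% blanking
# overhead is a rough guess, so treat results as ballpark only.
UHBR_RAW_GBPS = {"UHBR10": 40.0, "UHBR13.5": 54.0, "UHBR20": 80.0}
ENCODING_EFF = 128 / 132  # 128b/132b line coding

def required_gbps(h, v, hz, bits_per_channel, blanking=1.12):
    return h * v * hz * bits_per_channel * 3 * blanking / 1e9  # RGB

need = required_gbps(2560, 1440, 480, 10)  # e.g. a 1440p 480Hz 10-bit panel
for name, raw in UHBR_RAW_GBPS.items():
    usable = raw * ENCODING_EFF
    verdict = "fits uncompressed" if usable >= need else "needs DSC"
    print(f"{name}: {usable:.1f} Gb/s usable vs {need:.1f} needed -> {verdict}")
[/CODE]

So a "DisplayPort 2.1" port that only implements UHBR10 still has to lean on DSC for a panel like that, which is presumably what's happening here.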

 

poke01

Platinum Member
Mar 8, 2022
2,216
2,811
106
  • Like
Reactions: Tlh97 and Saylick

Mopetar

Diamond Member
Jan 31, 2011
8,118
6,771
136
Good. The faster this bubble drops and fails, the better. It's the next con: first Crypto, then the Metaverse, now AI. What AI demands is unsustainable and inefficient. I hope companies stop investing in AI and Nvidia AI chips.

Be careful what you wish for. JHH might decide to keep margins the same by charging gamers even more.

While I'm being facetious, I wouldn't be at all surprised if it were used as an excuse to raise prices, especially on the high end.
 

Saylick

Diamond Member
Sep 10, 2012
3,537
7,861
136
Be careful what you wish for. JHH might decide to keep margins the same by charging gamers even more.

While I'm being facetious, I wouldn't be at all surprised if it were used as an excuse to raise prices, especially on the high end.
"The price hikes will continue until profit margins improve." - JHH, probably.

But in all seriousness, Nvidia can try to charge gamers more, but gamers are more price sensitive than enterprise customers. It would be a fool's errand.
 

CakeMonster

Golden Member
Nov 22, 2012
1,505
661
136
Good. The faster this bubble drops and fails, the better. It's the next con: first Crypto, then the Metaverse, now AI. What AI demands is unsustainable and inefficient. I hope companies stop investing in AI and Nvidia AI chips.
AI has its uses but it would be great to have a correction big enough to take out the most delusional hype masters before they become trillionaires.
 
  • Like
Reactions: TESKATLIPOKA

VirtualLarry

No Lifer
Aug 25, 2001
56,560
10,176
126
"The price hikes will continue until profit margins improve." - JHH, probably.

But in all seriousness, Nvidia can try to charge gamers more, but gamers are more price sensitive than enterprise customers. It would be a fool's errand.
Still, they warned that consumers could face higher prices or empty shelves for certain products. Some shipping companies have already warned of surcharges of $1,500 to $3,000 per container in the event of a strike, according to the analysts.

If the disruptions were to last, those costs would eventually be passed on to consumers. The last wave of inflation was partially caused by supply-chain disruptions during the pandemic, as the cost of shipping containers rose by some 400% from 2020 to 2022.
 

jdubs03

Senior member
Oct 1, 2013
881
524
136
This is a decent breakdown of the Nvidia SKUs going back to the GTX 700 series.
Here’s a good chart showing the core/memory characteristics over time.
[attached charts: core/memory characteristics by SKU over time]
Bottom line: based on the recent rumors, the RTX 5080 is more like a 5070-class part and should be priced accordingly, going by historical figures.

 
Last edited:

SteinFG

Senior member
Dec 29, 2021
649
777
106
Bottom line: based on the recent rumors, the RTX 5080 is more like a 5070-class part and should be priced accordingly, going by historical figures.
That's not how it works in the real world, though.

I'm thinking, what if we compare by die size? Just remember that Samsung's 8nm is a lot cheaper; by some rumors, as cheap as TSMC 12nm.
780 - 561 mm², 28nm, binned die.
980 - 398 mm², 28nm, full die.
1080 - 314 mm², 16nm, full die.
2080 - 545 mm², 12nm, full die.
3080 - 628 mm², 8nm, binned die.
4080 - 379 mm², 4nm, full die.

The 4080 is about the same size as the 980: bigger than the 1080, smaller than the 2080.
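Die size is only half the cost story since wafer prices differ a lot by node, but as a pure geometry check, here's roughly how many candidate dies each size yields from a 300mm wafer (standard approximation; ignores yield and scribe lines):

[CODE=python]
import math

# Approximate gross dies per 300mm wafer for the die sizes above.
# Standard estimate: pi*r^2/A - pi*d/sqrt(2A); ballpark only.
def dies_per_wafer(area_mm2, wafer_d=300.0):
    r = wafer_d / 2
    return int(math.pi * r**2 / area_mm2
               - math.pi * wafer_d / math.sqrt(2 * area_mm2))

for gpu, area in [("780", 561), ("980", 398), ("1080", 314),
                  ("2080", 545), ("3080", 628), ("4080", 379)]:
    print(f"{gpu}: {area} mm^2 -> ~{dies_per_wafer(area)} dies/wafer")
[/CODE]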
 
Last edited:
  • Like
Reactions: MoogleW

biostud

Lifer
Feb 27, 2003
18,702
5,437
136
Any chance they'll do a 320-bit or 384-bit 5090 with 3GB chips for 30 or 36GB of memory?
 

SteinFG

Senior member
Dec 29, 2021
649
777
106
Any chance they'll do a 320-bit or 384-bit 5090 with 3GB chips for 30 or 36GB of memory?
It's possible, if they get faster memory.
5090 with 28 Gbps @ 512-bit = 1.792 TB/s, 32GB using 2GB packages, 6MB of 8MB L2 enabled per package, 96MB L2 total
5090 with 37.5 Gbps @ 384-bit = 1.800 TB/s, 36GB using 3GB packages, 8MB of 8MB L2 enabled per package, 96MB L2 total

37.5 Gbps doesn't seem achievable until the 6090 though, so I doubt it.
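The arithmetic behind those two configs, for anyone checking (one 32-bit GDDR7 package per 32 bits of bus; the rates and L2 figures are the rumors above, not confirmed specs):

[CODE=python]
# Verify the two rumored 5090 memory configs above. Not official specs.
def memory_config(rate_gbps, bus_bits, density_gb, l2_per_pkg_mb):
    packages = bus_bits // 32                  # 32-bit GDDR7 packages
    bw_tbps = rate_gbps * bus_bits / 8 / 1000  # Gb/s per pin -> TB/s total
    return bw_tbps, packages * density_gb, packages * l2_per_pkg_mb

for rate, bus, density, l2 in [(28.0, 512, 2, 6), (37.5, 384, 3, 8)]:
    bw, vram, total_l2 = memory_config(rate, bus, density, l2)
    print(f"{bus}-bit @ {rate} Gbps: {bw:.3f} TB/s, {vram} GB, {total_l2} MB L2")
[/CODE]

Both land at 96MB of L2 and ~1.8 TB/s; the bus width only changes how you get there.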
 

SteinFG

Senior member
Dec 29, 2021
649
777
106
It is pretty clickbait. Shoulda been pretty obvious that NV bumped the GB202 specs to boost its workstation/AI performance. They will just charge a lot more.

The 5090's probably going to be $2,499 if it is really using the full 512-bit bus.
And then a 5080 at $1,000 would cost, historically, like a 70-class card: ~40% of the flagship price 🤣
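The ratio math, using launch MSRPs as I remember them (the $1,000 and $2,499 figures are this thread's speculation, not announced prices):

[CODE=python]
# x70-tier launch MSRP as a share of that generation's flagship MSRP,
# vs the speculated 5080/5090 prices from this thread (not announced).
PAIRS = [
    ("RTX 3070 / RTX 3090", 499, 1499),
    ("RTX 4070 / RTX 4090", 599, 1599),
    ("RTX 5080 / RTX 5090 (speculated)", 1000, 2499),
]
for name, card, flagship in PAIRS:
    print(f"{name}: {card / flagship:.0%}")  # 33%, 37%, 40%
[/CODE]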
 
  • Like
Reactions: Tlh97 and Ranulf

jdubs03

Senior member
Oct 1, 2013
881
524
136
It is pretty clickbait. Shoulda been pretty obvious that NV bumped the GB202 specs to boost its workstation/AI performance. They will just charge a lot more.

The 5090's probably going to be $2,499 if it is really using the full 512-bit bus.
What is clickbait? The HWUB title? I mean, the data is what it is. Going by the ratios, a 5080 looks more like what a 5070 would be, considering their previous product history. I don't see how that can be considered clickbait. It's data.
The 5080 as rumored has a lower core-count ratio, by far, than any x80-class release in over 10 years. And they're going to charge more for that? Nah, that's not cool.

And $2,499 is just grossly expensive. Look at the 4090's $1,599 launch price. Come on, man: $900 more, a 56% increase? Sounds like you're making excuses for monopolist pricing power. Because if this rumor is true, their behavior is that of a monopolist.

And then a 5080 at $1,000 would cost, historically, like a 70-class card: ~40% of the flagship price 🤣
If this is the case, then it wouldn't be as bad, since the price/performance compared to its predecessor would be nominally better (though still terrible in relative terms). But something tells me they'll charge more because they can.
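To put numbers on the core-count ratio point (shipped specs for the older generations; the 50-series figures are the rumors discussed in this thread):

[CODE=python]
# x80 CUDA core count as a share of the same generation's biggest gaming
# flagship. 50-series numbers are rumored, not confirmed.
GENERATIONS = [
    ("GTX 780 / GTX 780 Ti",   2304,  2880),
    ("GTX 980 / GTX Titan X",  2048,  3072),
    ("GTX 1080 / GTX 1080 Ti", 2560,  3584),
    ("RTX 2080 / RTX 2080 Ti", 2944,  4352),
    ("RTX 3080 / RTX 3090",    8704, 10496),
    ("RTX 4080 / RTX 4090",    9728, 16384),
    ("RTX 5080 / RTX 5090 (rumored)", 10752, 21760),
]
for name, x80, flagship in GENERATIONS:
    print(f"{name}: {x80 / flagship:.0%}")
[/CODE]

Every prior x80 lands in the 59-83% band; the rumored 5080 would be the first one below half.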
 
Last edited:
  • Like
Reactions: Executor_