
NVIDIA Volta Rumor Thread

Oh okay. My issue with it is that it's 27 inches. IMO, 4K should be at least 30.

Sent from my SAMSUNG-SM-G935A using Tapatalk
 
https://www.skhynix.com/eng/pr/pressReleaseView.do?seq=2086&offset=1

"SK Hynix Inc today introduced the world’s fastest 2Znm 8Gb(Gigabit) GDDR6(Graphics DDR6) DRAM. The product operates with an I/O data rate of 16Gbps(Gigabits per second) per pin, which is the industry’s fastest. With a forthcoming high-end graphics card of 384-bit I/Os, this DRAM processes up to 768GB(Gigabytes) of graphics data per second."

More memory bandwidth than a 2 module HBM2 graphics card from any competitor.
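The arithmetic behind those figures is straightforward: peak bandwidth is bus width times per-pin data rate, divided by eight bits per byte. A quick sketch (the HBM2 per-stack figure assumes the 1024-bit, 2.0 Gbps Gen2 spec maximum):

```python
def bus_bandwidth_gbps(bus_width_bits: int, rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * rate_gbps_per_pin / 8

# 384-bit GDDR6 at 16 Gbps per pin, as in the SK Hynix release
gddr6_384 = bus_bandwidth_gbps(384, 16.0)           # 768.0 GB/s

# Two HBM2 stacks, each with a 1024-bit interface at 2.0 Gbps per pin
hbm2_two_stacks = 2 * bus_bandwidth_gbps(1024, 2.0)  # 512.0 GB/s
```

So a 384-bit 16 Gbps GDDR6 card would indeed out-run a two-stack HBM2 card, at least on paper.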
 

Interesting. Volta with GDDR6 this fall might be a surprise, just like the 1080 was last summer with GDDR5X.
 
I'd be more interested in the power savings with GDDR6, because HBM-based tech is clearly superior at the moment in terms of power consumption versus conventional GDDR memory.
 
So we can expect 512 GB/s for the GTX 2080.

The 980 had less bandwidth than the 780 Ti, but of course it was still a bit faster. While some wonder whether Nvidia can pull off another same-node success and make the same gains as Kepler-to-Maxwell, this alone is good news for those hoping for a big performance boost. If it gets 256-bit 16 Gbps GDDR6, this would be the first time the incoming x80 chip has more bandwidth than the previous x80 Ti.
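For reference, the same bus-width × per-pin-rate arithmetic applied to the published specs of the cards mentioned (the 256-bit 16 Gbps "GTX 2080" configuration is, of course, purely hypothetical at this point):

```python
def bw(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits * gbps_per_pin / 8

gtx_780_ti  = bw(384, 7.0)   # 336.0 GB/s (384-bit GDDR5 at 7 Gbps)
gtx_980     = bw(256, 7.0)   # 224.0 GB/s (256-bit GDDR5 at 7 Gbps)
gtx_1080_ti = bw(352, 11.0)  # 484.0 GB/s (352-bit GDDR5X at 11 Gbps)

# Hypothetical next x80 with 256-bit 16 Gbps GDDR6
gtx_2080_hypothetical = bw(256, 16.0)  # 512.0 GB/s
```

That would put the hypothetical successor at 512 GB/s versus the 1080 Ti's 484 GB/s, whereas the 980 at 224 GB/s sat well below the 780 Ti's 336 GB/s.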
 
Maxwell had better color compression compared to Kepler, so that helped a bit... I'm not sure how many more iterations of enhanced compression they can achieve.
 
Early 2018 for the 384-bit Volta, eh? Well then, I think instead of contradicting the rumors of Volta in Q3, this strengthens them -- GV102 isn't going to be the first high-end gaming GPU from NVIDIA, GV104 is. I would expect Volta GV104 to ship with, perhaps, 12Gbps GDDR5X.
 
http://www.anandtech.com/show/11014/asus-demonstrates-rog-swift-pg27uq-4k-144-hz-hdr-dcip3-and-gsync

That one. I'm hoping that the Titan Volta will be suitable (at least in SLI where supported) for 4k gaming at higher framerates, even with a tweak here and there in AAA games.

You are going to need to wait another generation at least, maybe 2 for that. Volta is rumored to be a 50% improvement in perf/watt, which means Titan Volta could in theory be 50% faster than the 1080 Ti/Titan Xp. That's only enough to run current gen games at around 85-90 fps. Assuming nVidia can maintain a 50% increase per every 18 months, it's still going to be 2.5-3 years before there will be a Titan card that can run *today's* games at 144Hz+ maxed out.

Even with some settings turned down, to high settings instead of max, it'll have to be at least Volta's successor Titan before we can hope to hit 144Hz at 4K in games that aren't CPU bound (like RTS/action RPG/MOBA).
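A quick sketch of that compounding math, assuming a ~60 fps baseline for a 1080 Ti-class card in demanding AAA titles at 4K maxed (an assumption for illustration, not a measurement):

```python
fps = 60.0   # assumed baseline: roughly 1080 Ti-class at 4K, maxed settings
gain = 1.5   # rumored 50% performance jump per ~18-month generation

projection = []
for generation in range(1, 4):
    fps *= gain
    projection.append(round(fps, 1))

# projection -> [90.0, 135.0, 202.5]
# gen 1 (Volta): ~90 fps; gen 2 (Volta's successor): ~135 fps, enough for
# 144 Hz only with settings turned down; gen 3: comfortably past 144 fps
```

At 18 months per generation, two generations is ~3 years, which lines up with the 2.5-3 year estimate above.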
 
Why is HBM2 so expensive and hard to produce?

Because it isn't used in that many products. There's something called economies of scale: you can make almost anything cheap to produce, you just have to produce a TON of it so the marginal cost is low, and that sometimes isn't possible if a competitor can make something cheaper that performs similarly (e.g. GDDR5X).
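A toy amortization model makes the point; every number here is hypothetical, purely to illustrate how fixed costs dominate at low volume:

```python
def unit_cost(fixed_cost: float, marginal_cost: float, volume: int) -> float:
    """Average cost per unit: fixed costs spread over volume, plus per-unit cost."""
    return fixed_cost / volume + marginal_cost

# Hypothetical figures: $50M in fixed tooling/R&D, $40 marginal cost per unit
niche_product  = unit_cost(50_000_000, 40, volume=500_000)     # 140.0 per unit
high_volume    = unit_cost(50_000_000, 40, volume=10_000_000)  # 45.0 per unit
```

Same part, same process; the only difference is how many units the fixed costs are spread across.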
 
First Volta SKUs to come in September+ 2017, better perf/watt, equipped with GDDR6

http://carbonite.co.za/showthread.php?t=158997
1. NVIDIA is introducing at least two Volta parts this year; the first is in September, with no info on the second part. Either way, the first part shows improved performance per mm² and performance per clock, which we haven't had for ages (i.e. it is a step forward over Maxwell per clock, which was not the case for Pascal).
3. NVIDIA's new parts will offer GDDR6, and HBM is reserved for Tesla and compute parts for now.
 
Sounds great for enterprise, but it means little for us gamers until NVIDIA trickles it down to consumer SKUs in 2018.
 
They might push it earlier than that - there has to be a real danger that now they've announced GV100 sales of the Pascal stuff will slow down a bit due to it being previous gen technology.

We're still in the first half of the year of course, so a good bit of scope. If they can make a die *this* big at all then they can definitely make the smaller ones in a technical sense.
 

What can they push? Are they going to call Samsung and ask if they can have 30k HBM2 chips about six months early?

Doesn't work that way. NVidia already tried the wooden screw approach.
 
So according to Fudzilla, the next Geforce product line won't be based on Volta, but on "a Pascal influenced design derived shrink down":
http://fudzilla.com/news/graphics/43962-new-geforce-will-be-incremental

That actually makes sense.
Errrrr, why? Of course GV102/4/6 etc. won't keep everything that GV100 has, but there's some big stuff there that'll carry over directly enough (a huge power-efficiency jump, for starters).

I've seen some very sane-seeming arguments on here that the lesser Voltas will have to have at least a few tensor-core bits so people can learn to program them without buying GV100!
 

Dunno, perhaps Nvidia doesn't feel threatened by Vega, hence the tick-tock strategy. Perhaps they'll save a 7nm Volta for 2018/2019 against Navi. Or perhaps it's all bollocks and they will launch consumer Volta as usual.
 
Their threat isn't Vega; it's much more the need to keep their annual upgrade cycles running.

Now that I think about it, the rumor is almost certainly nonsense: there's a Tegra using Volta known to be coming sometime this year, in that self-driving module stuff.

Of course, the first Volta stuff probably won't be a big jump vs the 1080 Ti in raw performance terms if they go middle-chip-first, as with Pascal and Maxwell.
 
It basically means that the consumer Volta GPUs will resemble the same architecture as GP100.

In FP32, the structure of each SM in GV100 is the same as in GP100. There is no need to worry: you will get a boost in performance and efficiency. GP100 and GV100 have the same per-clock throughput from their CUDA cores, which is higher than in consumer Pascal GPUs.
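For context, peak FP32 throughput follows the usual cores × clock × 2 FLOPs formula (one fused multiply-add per core per clock), here applied to the published Tesla P100 and V100 specs:

```python
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPS: cores x clock x 2 ops (FMA) per clock."""
    return cuda_cores * boost_clock_ghz * 2 / 1000

gp100 = fp32_tflops(3584, 1.480)  # ~10.6 TFLOPS (Tesla P100)
gv100 = fp32_tflops(5120, 1.455)  # ~14.9 TFLOPS (Tesla V100)
```

The per-clock gain comes mostly from the extra CUDA cores; clocks are roughly flat between the two parts.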
 