Official Fury X specs


MagickMan

Diamond Member
Aug 11, 2008
7,537
3
76
No it doesn't. If you run out of that 512GB/s, ~200-cycle-latency local RAM, you are forced to go through the 30GB/s, 10,000+ cycle latency PCIe interface.

To make an analogy: starting a game that you have installed takes some 20 seconds from an HDD and a little less from an SSD. If you don't have it installed yet, you need to download it through Steam first, install it, and then start it, which takes anywhere from a couple of minutes to half a day. An SSD will not speed that process up by much.

You can tweak your drivers to make (slightly) more efficient use of that local RAM, sort of like keeping your hard drive free of junk so you can have more games installed. But you can do that on any memory interface equally, just like you can do it equally on a hard drive and an SSD.

Exactly. If you run out, you just run out. I think the Fury X will be great, I really want one, but I'm not buying the 4GB model. I mean, that would be kind of silly if I'm already running a 4GB card and running out of texture memory, no matter how fast the memory on the Fury X is.
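
To put rough numbers on that hierarchy (using the ~512GB/s and ~30GB/s figures quoted above; all approximate, peak theoretical numbers):

    # Back-of-the-envelope: time to move 1 GB of data, using the rough
    # bandwidth figures quoted above. Illustrative only, not measured.
    GB = 1e9

    vram_bw = 512 * GB   # local HBM bandwidth
    pcie_bw = 30 * GB    # PCIe figure quoted above

    payload = 1 * GB
    print(f"from VRAM: {payload / vram_bw * 1e3:.1f} ms")  # ~2.0 ms
    print(f"over PCIe: {payload / pcie_bw * 1e3:.1f} ms")  # ~33.3 ms

That's more than an order of magnitude slower, before the latency gap even enters the picture.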

And I don't "play" 3DMark; in fact, I mostly ignore synthetics, so anyone telling me they can extrapolate real-world performance from them (when we know how easily they can be manipulated) is... well... they've been drinking the Kool-Aid for too long. :whistle:
 

MagickMan

Diamond Member
Aug 11, 2008
7,537
3
76
You don't follow the industry closely, do you? Are you familiar with when AMD released the 290X and set up a press tent across the street from an Nvidia press conference to steal its thunder? No? Well then, you wouldn't understand why the release of details was postponed when Nvidia released the Titan X on the same day! It was payback, and fair in my opinion. A dick move deserves a dick move.

As to pricing, I wasn't saying AMD brought down the 980 Ti's price; I said the reverse. Nvidia was waiting to drop it after AMD released its card, but they couldn't wait much longer once it was clear no details were coming until E3 instead of Computex. They wanted to capture the purchases of people waiting for AMD, and, to be fair, close to Titan X performance at 4K for $650 is not a bad deal. AMD's Fury, over time, will crush its performance! But at that price, Nvidia was sacrificing sales of the Titan X for the sake of forcing AMD into a lower price range, because, by and large, Nvidia can afford it better. If AMD didn't lower the rumored suggested price, it would have had a worse time justifying the cost and would have lost sales.

As for the 4GB assertions, you are spouting idiocy. You are comparing two different standards as if they were equal. Is 4GB of DDR2 = 4GB of DDR3 = 4GB of DDR4 = 4GB of GDDR5? NO! In the same way, it takes 6-8GB of GDDR5 to equal 4GB of Gen1 HBM! In fact, reviewing leaked benchmarks, the Fury X crushes the 980 Ti at 4K! Now, to be fair, at 5K and 8K the Titan X outperforms it. That is where the memory limitation seems to come in and affect performance... Now, as only Apple and a few others sell 5K displays, while 8K displays are not sold to consumers anywhere yet, it will not be an issue until 2017 (and most enthusiasts buying these cards will want to upgrade by then)!

Now, as for the reason for only 4GB instead of 8GB cards: it is a limitation of current interposer tech. Currently, for Gen1, the interposer only allows 1GB of memory per stack. To do more, you must use a dual interposer, which adds significant cost to production. It is not a lack of available memory; it is that the costs don't justify the return for the initial release! As this will only be needed to compete with the Titan X at 5K and 8K gaming, it can wait for the dual-Fiji card this fall without sacrificing sales. So until you understand more about HBM, and how Fury has a 4096-bit memory interface with 512GB/s of bandwidth that crushes the 980 Ti's and Titan X's, along with about 800 more shader cores than the Titan X and 980 Ti, which allow faster processing of work that would normally be stuck waiting on memory (I don't want to go into the details because this is EXTREMELY oversimplified and doesn't accurately describe what is happening; I apologize to those that know better and welcome better descriptions for the community at large), please stop spewing Nvidia marketing BS! :biggrin:

None of this has been proven with real-world tests yet. None of it. So please quit spewing propaganda and hearsay. K? Good, glad we got that out of the way. And you completely ignored my comments about the interface. Hint: we are saying the same thing about it, and about HBM in general, so quit acting like we aren't. :p
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
If they couldn't launch with 8GB of HBM, they would have been better off introducing a new midrange product instead, somewhere between the 260X and 280.

I think a midrange champion would do more for their brand right now.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
4GB on each standard IS NOT EQUAL and 4GB OF HBM HAS THE CAPABILITY OF PUTTING THROUGH AS MUCH DATA AS 6-8GB OF GDDR5 IN THE SAME AMOUNT OF TIME ALLOWING FOR EQUIVALENT PERFORMANCE!

You're *really* oversimplifying. The issue with 4GB is the concern that 4GB is simply not enough. I don't care how fast it is. If, over 10 frames, you need 6GB worth of resources to render, you have to page resources in and out of VRAM. Where do you page them from? At best, system memory. At worst, the SSD/disk.

If you need to access 6GB of data within a short period of time, you're going to have to pull in your 4GB of data, use it, page out some resources to make room for new ones in VRAM, and then pull those new resources over the far slower PCIe bus. I don't care HOW fast your HBM is: if you're forced to look outside your VRAM for the data required to render a frame, the speed of your VRAM means nothing.
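
A rough sketch of what that paging costs, assuming (hypothetically) the 6GB-over-10-frames case above and roughly 16GB/s of one-way PCIe 3.0 x16 bandwidth:

    # Hypothetical worst case: a 6 GB working set on a 4 GB card,
    # so ~2 GB has to be pulled in over PCIe. Figures approximate.
    GB = 1e9
    pcie_bw = 16 * GB        # PCIe 3.0 x16, roughly, one direction
    paged_in = 2 * GB        # resources that missed VRAM

    transfer_ms = paged_in / pcie_bw * 1e3   # ~125 ms
    frame_budget_ms = 1000 / 60              # ~16.7 ms at 60 fps

    print(f"paging: {transfer_ms:.0f} ms vs {frame_budget_ms:.1f} ms frame budget")

That's several whole frames' worth of budget gone, no matter how fast the HBM on the other side of the bus is.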
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
If you guys are too ignorant to realize where the limit is, or that increased speed, lower latency, and more processing can do as much with less capacity as a slower standard with more (up to a point), then you really need to study. You only have data stuck in RAM until it is used. At 4K, all the cards are equal because it's the same amount of data, and it doesn't sit in RAM as long as it does with GDDR5. When it needs more, as I said, at 5K and 8K it fails for your stated reasons. But to act like standards that affect bandwidth, or lower latency between graphics RAM and GPU, make no difference? GTFO!

Edit: this isn't even addressing the other parts at play that improve the process, which bolster my point. Nvidiots!

Buh-bye, troll.
-- stahlhart
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
You didn't notice all his posts are in this thread? Stop responding to the troll.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
If they couldn't launch with 8GB of HBM, they would have been better off introducing a new midrange product instead, somewhere between the 260X and 280.

I think a midrange champion would do more for their brand right now.

Yes, because AMD needs its image changed from "fast and hot" to "their latest and greatest competes with Nvidia's bottom end." /sarc

Imagine that doom n gloom.

Also, AMD did exactly that in previous generations. They battled Nvidia with small dies, where dual-chip cards went up against big-core NV GPUs. That didn't work out, despite superior perf/W, perf/mm², perf/$, etc.

They are changing their game, and rightly so.

HBM is hot tech right now, and I can't imagine any enthusiast not getting it. It is the latest and greatest, and it will shape this whole industry (and more) for the upcoming decade.

I can imagine a few ignorant people skipping it, but a true GPU enthusiast couldn't just stand aside and have none of it.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,750
746
136
To address your comments: yes, the capacity is the same. But not all RAM is created equal. You have to look at the effects of clock speed and bus width to understand the speed, or bandwidth, of the RAM and what effect it has on the frame buffer. Now, when examining the two standards: GDDR5 has a 32-bit bus width and is clocked at up to 1750MHz (7Gbps per pin). HBM has a 1024-bit bus width with a clock speed of 500MHz (1Gbps per pin) on Gen1. What this means is that GDDR5, per chip, can do a max of 28GB/s, while HBM can achieve 100-128GB/s per chip. That is much faster!

You do know that each HBM stack on the Fury cards is actually 4 DRAM dies stacked on top of a logic die, right? So that 1024-bit "per chip" is really 256-bit per die, and 1024-bit per stack.
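
For anyone checking the arithmetic, it works out cleanly either way; a rough sketch (per-pin rates as quoted above, peak theoretical figures only):

    # Peak bandwidth = bus width (bits) x per-pin rate (Gbps) / 8 bits per byte
    def peak_gb_s(bus_bits, gbps_per_pin):
        return bus_bits * gbps_per_pin / 8

    print(peak_gb_s(32, 7))        # one GDDR5 chip: 32-bit @ 7 Gbps   -> 28 GB/s
    print(peak_gb_s(1024, 1))      # one HBM1 stack: 1024-bit @ 1 Gbps -> 128 GB/s
    print(4 * peak_gb_s(1024, 1))  # Fiji with 4 stacks (4096-bit)     -> 512 GB/s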
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Yes, because AMD needs its image changed from "fast and hot" to "their latest and greatest competes with Nvidia's bottom end." /sarc

Imagine that doom n gloom.

Also, AMD did exactly that in previous generations. They battled Nvidia with small dies, where dual-chip cards went up against big-core NV GPUs. That didn't work out, despite superior perf/W, perf/mm², perf/$, etc.

They are changing their game, and rightly so.

HBM is hot tech right now, and I can't imagine any enthusiast not getting it. It is the latest and greatest, and it will shape this whole industry (and more) for the upcoming decade.

I can imagine a few ignorant people skipping it, but a true GPU enthusiast couldn't just stand aside and have none of it.

While I hope I can agree with you in a couple of days when we get reviews, is a 'true enthusiast' one who endorses a product that has not yet proven itself? How can we say it is 'ignorant' to skip something we know very little about?
 

psychok9

Junior Member
Nov 17, 2009
10
0
61
Sadly, even if the VRAM were as fast as the speed of light, once you go over 4GB you will stutter, because PCIe becomes the bottleneck. It's simple: the speed of the VRAM only helps between the VRAM and the GPU.
GPU <-> VIDEO RAM <-> SYS RAM <-> HDD.
I think screen resolution isn't the point; the point is the resolution of the textures, like the Shadow of Mordor Ultra texture pack.
The good news is that the amount of VRAM will only become a problem in the future, maybe two years from now... when we will have HBM2 and Fury 2 cards.
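
On the texture point, a rough sketch of why a single ultra texture pack eats VRAM so quickly (texture sizes are illustrative, not taken from any particular game):

    # Uncompressed RGBA8 texture footprint, including a full mip chain
    # (the mip chain adds roughly one third on top of the base level).
    def texture_mib(width, height, bytes_per_pixel=4):
        base = width * height * bytes_per_pixel
        return base * 4 / 3 / 2**20

    print(f"2K texture: {texture_mib(2048, 2048):.0f} MiB")  # ~21 MiB
    print(f"4K texture: {texture_mib(4096, 4096):.0f} MiB")  # ~85 MiB
    # A few hundred unique 4K textures resident at once runs to tens of GB
    # before compression -- texture detail, not screen resolution,
    # is what fills the frame buffer.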
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
@deathreborn - I realize it. It was right before I went to bed that I wrote that. But thanks for pointing it out.
 

Creig

Diamond Member
Oct 9, 1999
5,171
13
81
(I'm going to preface this by saying that I'm not a brand loyalist; I'm not "Team Green" or "Team Red". Hell, I don't even understand company loyalty within the consumer space, but that's just me. It seems only to help the company and doesn't benefit us, the buying public, at all. But anyway...)

Yes. The performance of the Titan X, relative to its time of release, caught AMD off guard, as did its price. Sure, it's expensive, but not relative to the game performance of the other GPUs available at the time. Then came the 980 Ti, which not only made the Titan X look comparatively overpriced, but also made mincemeat of everything else in the upper end of the market.

You claim that this was so Nvidia could "steal AMD's thunder". Well, that's ignorant. It takes a very long time to develop a GPU for public sale. The Titan X (and the 980 Ti as well) was going to come out anyway; realistically, the only thing Fiji did was affect the Titan X/980 Ti introductory MSRP (which is a great thing, make no mistake).

I hate to do this, but I'm going to have to agree with ajc9988 on this point. The $1,000 Titan X had been on sale for only 2 1/2 months when Nvidia released the $650 980 Ti, effectively cutting Titan X sales off at the knees. Nobody, especially Nvidia, would do that unless they felt threatened by another product.
 

Creig

Diamond Member
Oct 9, 1999
5,171
13
81
While I hope I can agree with you in a couple of days when we get reviews, is a 'true enthusiast' one who endorses a product that has not yet proven itself? How can we say it is 'ignorant' to skip something we know very little about?

Ditto. The 2900XT looked pretty exciting on paper, too, and we all know how that card flopped. People need to take a few deep breaths and just wait for the reviews to come in before we compare the strengths and weaknesses of the Fury X and the 980 Ti.
 

psychok9

Junior Member
Nov 17, 2009
10
0
61
Guys, you miss the point. If you go over 4GB, even if your VRAM runs at light speed, you have a bottleneck at your system RAM.
GPU <-> VRAM <-> SYSRAM <-> HDD

It isn't a screen-resolution problem; it's what happens if you load a lot of very high resolution textures (like the Shadow of Mordor Ultra optional pack). It also depends on how devs program their games.

The good news is that the amount of VRAM will only become a problem in the future, maybe two years from now... when we will have HBM2 and Fury 2 cards.
 

Alatar

Member
Aug 3, 2013
167
1
81
I hate to do this, but I'm going to have to agree with ajc9988 on this point. The $1,000 Titan X had been on sale for only 2 1/2 months when Nvidia released the $650 980 Ti, effectively cutting Titan X sales off at the knees. Nobody, especially Nvidia, would do that unless they felt threatened by another product.

You are aware that Nvidia always repeats the same pattern with their higher-end chips (it doesn't even have to be the highest-end one)?

An expensive, fully enabled chip followed by a slightly cut-down chip with much better price/perf has been happening for years, with pretty much every launch.

Last time, they did the same thing at around the same point in the cycle with the 780, and AMD's GPUs were still months out. Why is this time suddenly different? Everyone who knows anything about GPUs saw the 980 Ti coming, AMD or no AMD.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
I wonder how worthwhile it is to spend extra for a full water block (http://www.ekwb.com/news/606/19/Exi...ks-compatible-with-Radeon-Rx-300-series-GPUs/) when the stock specs, 50°C and 32dBA, are already quite good.

I wonder how the VRM cooling is with the stock cooler? That was the one Achilles' heel on the 295X2, and a full block helped a lot there. Those specs are pretty impressive, though... they would be tough to improve on significantly unless you have a really nice loop to integrate it with. A 'cheaper' custom loop might not be as good...
 
Feb 19, 2009
10,457
10
76
From the pictures of the Fury X cooler, it's got a copper-pipe extension from the GPU waterblock/pump that runs over the VRMs (basically water cooling for both GPU and VRMs). It looks to be an excellent design that makes full use of the limited space.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
You are aware that Nvidia always repeats the same pattern with their higher-end chips (it doesn't even have to be the highest-end one)?

An expensive, fully enabled chip followed by a slightly cut-down chip with much better price/perf has been happening for years, with pretty much every launch.

Last time, they did the same thing at around the same point in the cycle with the 780, and AMD's GPUs were still months out. Why is this time suddenly different? Everyone who knows anything about GPUs saw the 980 Ti coming, AMD or no AMD.
The Titan came out in Feb 2013. The 780 Ti came out in Nov 2013. That's about 9 months.
Why did the 980 Ti come out only a couple of months after the Titan X? The 980 Ti is killing the sales of the Titan X.
What do you think is different? Why didn't NV wait 8-9 months as in the 780 Ti scenario?
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
The Titan came out in Feb 2013. The 780 Ti came out in Nov 2013. That's about 9 months.
Why did the 980 Ti come out only a couple of months after the Titan X? The 980 Ti is killing the sales of the Titan X.
What do you think is different? Why didn't NV wait 8-9 months as in the 780 Ti scenario?

They wanted to cash in on early adopters before the Fury X was released.

I'm positive there is a lot of corporate espionage going on in any tech industry. They likely knew that the Fury X would perform well and didn't want to wait for competition in the GPU space, so they released the 980 Ti early to give it some time on the market alone at a higher price.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
They wanted to cash in on early adopters before the Fury X was released.

I'm positive there is a lot of corporate espionage going on in any tech industry. They likely knew that the Fury X would perform well and didn't want to wait for competition in the GPU space, so they released the 980 Ti early to give it some time on the market alone at a higher price.
Right. Some people don't know how to connect the dots, or they need the illusion that NV is acting normally...
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The Titan came out in Feb 2013. The 780 Ti came out in Nov 2013. That's about 9 months.

That's the wrong comparison. The original Titan was released in 02/2013, and the GTX 780 was released in 05/2013. That's the same three-month delay as we saw between the Titan X and GTX 980 Ti. The GTX 780 was a cut-down version of the Titan chip (GK110), just as the GTX 980 Ti is a cut-down version of the Titan X chip (GM200).

It wouldn't surprise me if we saw a fully-enabled "GTX 980 Ti Black Edition" or something in another 5-6 months.
 
Feb 19, 2009
10,457
10
76
To be accurate with the history, the 780 wasn't as close to the Titan, but back then the Titan had two claims to fame to justify its price: uber DP compute and extra VRAM. The current situation is basically a 1-5% performance gap and no extra DP compute. The 980 Ti obsoletes the Titan X on a level the 780 never reached.

The original Titan was never obsolete, even with the 780 Ti beating it in gaming performance, because it still sold at $1k thanks to its superior compute, making it a "cheap" Quadro. The Titan X only has gaming as its niche. When a $350-cheaper product is within 1-5%...
 