Official Fury X specs

Status
Not open for further replies.

sze5003

Lifer
Aug 18, 2012
14,184
626
126
Currently X = water cooled ($649) vs air cooled ($549). We don't have info to really say beyond that.
So if they end up performing the same what would be the only advantage water cooling? I would hope the water cooled version can overclock more otherwise I'd save myself the $100.
 
Feb 19, 2009
10,457
10
76
So if they end up performing the same what would be the only advantage water cooling? I would hope the water cooled version can overclock more otherwise I'd save myself the $100.

Cooler, quieter, heat out your case. The same reasons why people go with water cooling in general.
 

sam_816

Senior member
Aug 9, 2014
432
0
76
So if they end up performing the same what would be the only advantage water cooling? I would hope the water cooled version can overclock more otherwise I'd save myself the $100.



According to AMD the cooler is designed for 500W (hope I am wording it right). That should keep things under control at a decent overclock as well.
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
I sent them an e-mail asking if it'd work, we'll see what they say. I wouldn't hold out hope though, this would be the first adapter to actually work. Not even the folks on the home theater forums know of such a cable.

Just to update, as suspected, the DP to HDMI 2.0 cable linked earlier will not work. I'm guessing by chipset they mean whatever is contained within the cable or adapter, not a chipset at the ports or devices.

Hi,

As we state on our website http://visionaudiovisual.com/techconnect/cables/hdmi/ at the moment there is no chipset that supports 60 Hz, so I’m afraid it’s only 4k @ 30Hz.

Thanks,
Stuart
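The "no chipset supports 60 Hz" answer comes down to raw link bandwidth. A rough pixel-rate calculation shows why these adapters top out at 4K@30Hz (blanking overhead is ignored here, so real link requirements are somewhat higher than these raw numbers; the 10.2/18 Gbps figures are the total TMDS rates of HDMI 1.4 and 2.0):

```python
# Raw video data rate = width x height x refresh x bits per pixel.
# Blanking intervals are ignored, so these are lower bounds.

def raw_video_gbps(width, height, hz, bits_per_pixel=24):
    return width * height * hz * bits_per_pixel / 1e9

four_k_60 = raw_video_gbps(3840, 2160, 60)   # ~11.94 Gbps raw
four_k_30 = raw_video_gbps(3840, 2160, 30)   # ~5.97 Gbps raw

HDMI_1_4_GBPS = 10.2   # total TMDS rate, HDMI 1.4
HDMI_2_0_GBPS = 18.0   # total TMDS rate, HDMI 2.0

print(four_k_60 > HDMI_1_4_GBPS)  # True: 4K@60 does not fit in an HDMI 1.4 link
print(four_k_30 < HDMI_1_4_GBPS)  # True: 4K@30 does
print(four_k_60 < HDMI_2_0_GBPS)  # True: only an HDMI 2.0 link carries 4K@60
```

So a passive DP-to-HDMI cable built on HDMI 1.4-era silicon physically cannot carry 4K@60, regardless of what the DisplayPort side can supply.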
 

MagickMan

Diamond Member
Aug 11, 2008
7,537
3
76
You're uninformed then. The reason for adding 6+GB is to add more bandwidth while the RAM largely isn't fully utilized. Check the current benchmarks without optimized drivers. It has the same or better performance than the 980Ti and sometimes the Titan X. Once DirectX 12 and OpenCL 2.x are utilized, because they use IP developed by AMD, AMD cards will pull away significantly from Nvidia, even with, as you say, only 4GB of memory. Now maybe the 8GB Fury X variant greatly outperforms; I don't know. But if you truly look at the numbers, the three-month driver optimization curve AMD usually has for greatly improving performance, and the above new APIs, it only makes sense to buy AMD!!! :)

Edit: To speak to your "rush job" comment, AMD was going to release info on the card back in March, until the Titan X was released right before they were going to talk, to steal AMD's thunder. Then they announced E3 for the event in May. There was nothing rushed about it. Further, fear on Nvidia's part caused it to sacrifice Titan X sales because they thought the performance of the Fury X would be great enough to drown both Titan X and 980Ti sales. It forced a $200 reduction from the rumored release price to match the 980Ti, potentially costing millions in margin on the card. It wasn't rushed, and this is Nvidia fanboy rhetoric!!!

(I'm going to preface this by saying that I'm not a brand loyalist, I'm not "Team Green or Red". Hell, I don't even understand company loyalty within the consumer space, but that's just me. It seems to only help the company and doesn't benefit us, the buying public, at all. But anyway...)

Yes. The performance of the Titan X, relative to its time of release, caught AMD off guard, as did its price. Sure, it's expensive, but not when compared to its relative game performance compared to other GPUs that were currently available. Then came the 980Ti, which not only made the Titan X comparatively overpriced, but also made mincemeat of everything else in the upper end of the market.

You claim that this was so nvidia could "steal AMD's thunder". Well, that's ignorant. It takes a very long time to develop a GPU for public sale. The Titan X (and 980Ti, as well) was going to come out anyway, realistically the only thing Fiji did was affect the Titan X/980Ti introductory MSRP (which is a great thing, make no mistake).

Now, unless you're blind, you'll know that many modern games use a LOT of texture memory, and that usage isn't going to go down, either. 4K gamers, the people AMD is targeting with the Fury X, know this. If 4GB is a tight fit right now @4K, and it is with respect to several titles when you turn quality up to Ultra settings, imagine what's going to happen with games coming out over the next 24-48 months. People are now accustomed to keeping a cutting-edge video card for that long and having it still be competitive. 4GB doesn't give them peace of mind, and neither should it.

Why is it only 4GB? As I've read in technical papers, HBM is tricky to produce, as are the interfaces that allow it to communicate with the GPU. Due to economy of scale, and new processes still in development, that difficulty will virtually disappear, but they aren't there yet. So what AMD has for mass production is a 4GB product. Knowing the above, will that change in the next 6 months? Absolutely. If the 8GB part was available in quantity would AMD be releasing it now as well? You bet your ass they would. But, they needed to reply to nvidia sooner rather than later or they would have to concede the high-end gaming market entirely (which they've already taken quite a beating in).

So what we have then is a video card for high res (4K) gaming enthusiasts that isn't entirely suited for them. Of course, the fix is to simply not run extreme textures in certain titles, but that's not what enthusiasts like to do, and furthermore, it negates some of the need for a cutting edge GPU in the first place, so that creates a bit of a contradiction. It's like they're doing half of a product release, and that's why it seems to me that they've rushed this thing.
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
(I'm going to preface this by saying that I'm not a brand loyalist, I'm not "Team Green or Red". Hell, I don't even understand company loyalty within the consumer space, but that's just me. It seems to only help the company and doesn't benefit us, the buying public, at all. But anyway...)

Yes. The performance of the Titan X, relative to its time of release, caught AMD off guard, as did its price. Sure, it's expensive, but not when compared to its relative game performance compared to other GPUs that were currently available. Then came the 980Ti, which not only made the Titan X comparatively overpriced, but also made mincemeat of everything else in the upper end of the market.

You claim that this was so nvidia could "steal AMD's thunder". Well, that's ignorant. It takes a very long time to develop a GPU for public sale. The Titan X (and 980Ti, as well) was going to come out anyway, realistically the only thing Fiji did was affect the Titan X/980Ti introductory MSRP (which is a great thing, make no mistake).

Now, unless you're blind, you'll know that many modern games use a LOT of texture memory, and that usage isn't going to go down, either. 4K gamers, the people AMD is targeting with the Fury X, know this. If 4GB is a tight fit right now @4K, and it is with respect to several titles when you turn quality up to Ultra settings, imagine what's going to happen with games coming out over the next 24-48 months. People are now accustomed to keeping a cutting-edge video card for that long and having it still be competitive. 4GB doesn't give them peace of mind, and neither should it.

Why is it only 4GB? As I've read in technical papers, HBM is tricky to produce, as are the interfaces that allow it to communicate with the GPU. Due to economy of scale, and new processes still in development, that difficulty will virtually disappear, but they aren't there yet. So what AMD has for mass production is a 4GB product. Knowing the above, will that change in the next 6 months? Absolutely. If the 8GB part was available in quantity would AMD be releasing it now as well? You bet your ass they would. But, they needed to reply to nvidia sooner rather than later or they would have to concede the high-end gaming market entirely (which they've already taken quite a beating in).

So what we have then is a video card for high res (4K) gaming enthusiasts that isn't entirely suited for them. Of course, the fix is to simply not run extreme textures in certain titles, but that's not what enthusiasts like to do, and furthermore, it negates some of the need for a cutting edge GPU in the first place, so that creates a bit of a contradiction. It's like they're doing half of a product release, and that's why it seems to me that they've rushed this thing.
You don't follow the industry closely, do you? Are you familiar with when AMD released the 290X and set up a press tent across the street from an Nvidia press conference to steal its thunder? No? Well then, you wouldn't understand why the release of details was postponed when Nvidia released the Titan X on the same day! It was payback, and fair in my opinion. A dick move deserves a dick move.

As to pricing, I wasn't saying AMD brought down the 980Ti's price; I said the reverse. Nvidia was waiting to drop it after AMD released its card, but as they couldn't wait much longer and no details were coming until E3 instead of Computex, they wanted to get the purchases from people waiting for AMD, and, to be fair, near-Titan X performance at 4K for $650 is not a bad deal. AMD's Fury, over time, will crush its performance!!! But at that price, Nvidia was sacrificing sales of the Titan X for the sake of forcing AMD into a lower price range because, by and large, Nvidia can afford it better. If AMD hadn't lowered the rumored suggested price, it would have had a harder time justifying the cost and lost sales.

As for the 4GB assertions, you are spouting idiocy. You are comparing two different standards as if they are equal. Is 4GB of DDR2 = 4GB DDR3 = 4GB DDR4 = 4GB GDDR5? NO!!! In the same way, it takes 6GB-8GB of GDDR5 to equal 4GB of Gen1 HBM!!! In fact, reviewing leaked benchmarks, the Fury X crushes the 980Ti in 5K and 8K!!! Now, to be fair, at 5K and 8K, Titan X outperforms. That is where the memory limitation seems to come in and affect performance... Now, as only Apple and a few others have 5K, while 8K is NOT SOLD TO CONSUMERS ANYWHERE YET, it will not be an issue until 2017 (by which time most enthusiasts buying these cards will want to upgrade anyway)!!!

Now, as for the reasons for only 4GB instead of 8GB cards: it is a limitation of current interposer tech. Currently, for Gen1, interposers only allow a stack of 1GB of memory to rest on it. To do more, you must use a dual interposer, which adds significant cost to production. It is not a lack of memory available; it is that the costs don't justify the return for the initial release!!! As this will only be needed for competing with the Titan X at 5K and 8K gaming, it can wait for the release of a dual Fiji card this fall without sacrificing sales. So until you understand more about HBM, and how Fury has a 4096-bit memory interface with 512GBps of bandwidth that crushes the 980Ti and Titan X, with about 800 more shader cores than either, which allow faster processing of items normally stuck in memory (I don't want to go into the details because this is EXTREMELY oversimplified and doesn't actually accurately describe what is happening; I apologize to those that know better and welcome better descriptions for the community at large), please stop spewing NVIDIA marketing BS!!!:biggrin:
 

Glo.

Diamond Member
Apr 25, 2015
5,730
4,606
136
In fact, reviewing leaked benchmarks, the Fury X crushes the 980Ti in 5K and 8K!!! Now, to be fair, at 5K and 8K, Titan X outperforms. That is where the memory limitation seems to come in and affect performance... Now, as only Apple and a few others have 5K, while 8K is NOT SOLD TO CONSUMERS ANYWHERE YET, it will not be an issue until 2017 (by which time most enthusiasts buying these cards will want to upgrade anyway)!!!

For such statements I would wait till gaming benchmarks are released in those resolutions, not 3dMark ones.
 

Null_Pointer

Junior Member
Jun 21, 2015
1
0
0
Question, will the Fury X be available through aftermarket companies like MSI/Gigabyte etc... ? Or is it being sold as reference card through AMD only?

I'm asking because I'd really love this card with a dual-link DVI port for my monitor, and I know it's not gonna have one. But perhaps aftermarket versions will provide the port? Any info on this?
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
For such statements I would wait till gaming benchmarks are released in those resolutions, not 3dMark ones.

Although I do agree, it is the information available, and 3DMark is NOT memory optimized. I do look forward to live tests in 3 days - 1mo (for those companies that don't get the early exclusives, etc.)! But point taken and received. I just believe it should be relatively in line with those results until better driver support is available and new APIs are in place...

To be honest, I will gladly apologize for any incorrect statements and thank those for correcting me and educating me! Here, I agree, it is a little early for the statement!
 

Glo.

Diamond Member
Apr 25, 2015
5,730
4,606
136
It is not a 100% incorrect statement. However, for a final judgment of how the GPU can handle 5K, we should wait for the gaming benchmarks at that resolution.


Because it can still be worse than the 3DMark result ;).
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
$683 what happened to $649 price ?

According to Gibbo over at OCUK, the initial stock of Fury cards is very limited and will sell out fast. He expects most retailers will charge over MSRP because of the limited stock...

Take this with a grain of salt though, he is also in the market to sell cards and was promoting a 980 Ti deal in the same post.
 

sze5003

Lifer
Aug 18, 2012
14,184
626
126
According to Gibbo over at OCUK, the initial stock of Fury cards is very limited and will sell out fast. He expects most retailers will charge over MSRP because of the limited stock...

Take this with a grain of salt though, he is also in the market to sell cards and was promoting a 980 Ti deal in the same post.
Yeah, that's understandable, but I've got a working card now; if I didn't, I would worry. If I have to wait before ordering one, it's not a problem. No one should pay over retail.
 

OatisCampbell

Senior member
Jun 26, 2013
302
83
101
As for the 4GB assertions, you are spouting idiocy. You are comparing two different standards as if they are equal. Is 4GB of DDR2 = 4GB DDR3 = 4GB DDR4 = 4GB GDDR5? NO!!!!!!!!!!!!!!!!!!!!!!!!!!!! In the same way, it takes 6GB-8GB of GDDR5 to equal 4GB of Gen1 HBM!!! In fact, reviewing leaked benchmarks, the Fury X crushes the 980Ti in 5K and 8K!!!

I hope you are right, because I am an AMD fan.

I don't see how you could be, though. I'm not a game or graphics engineer, but could you please explain to me in layman's terms:

Isn't what HBM has a lot of "bandwidth", a super-wide pipe to the framebuffer for access to textures or AA samples there?

And isn't what the other guy is talking about just straight storage, like normal RAM? It looks to me like he's saying some current games at 4K settings need more than 4GB of VRAM, not faster access to it.

Faster access is always better, but I think he's talking about what we called "texture thrashing" in the old days?
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
So until you understand more about HBM, and how Fury has a 4096-bit memory interface with 512GBps of bandwidth that crushes the 980Ti and Titan X, with about 800 more shader cores than either, which allow faster processing of items normally stuck in memory (I don't want to go into the details because this is EXTREMELY oversimplified and doesn't actually accurately describe what is happening; I apologize to those that know better and welcome better descriptions for the community at large), please stop spewing NVIDIA marketing BS!!!:biggrin:

I'd like to hear these details on how higher bandwidth leads to less capacity needs.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
I hope you are right, because I am an AMD fan.

I don't see how you could be, though. I'm not a game or graphics engineer, but could you please explain to me in layman's terms:

Isn't what HBM has a lot of "bandwidth", a super-wide pipe to the framebuffer for access to textures or AA samples there?

And isn't what the other guy is talking about just straight storage, like normal RAM? It looks to me like he's saying some current games at 4K settings need more than 4GB of VRAM, not faster access to it.

Faster access is always better, but I think he's talking about what we called "texture thrashing" in the old days?

That no current 4K benchmarks have shown texture thrashing on 4GB cards may be what he is saying, in an animated way.

Yes performance is universally garbage at 4K on every setup if you try to run maximum settings in the demanding titles, but on 4GB cards like a 980 or a 290x, you don't see the low single digit minimums you get when you've hit VRAM capacity and the related halting stutter that causes. I think people are just confused because of what they see for reported VRAM usage on cards like a Titan X that have more memory than they need. They think seeing a consumption above 4GB means you need more than that amount. Benchmarks of 4GB cards at 4K show that isn't the case though.

On a sort of related note, there is some gold in this forum back when 680 and 7970 were the big cards and we had a similar situation where one card had 50% more memory than the other. Some good flip flops from both sides of the aisle can be seen in those threads compared to now in terms of understanding the difference between consumption reported and what is actually required. I think it's all a matter of perception really. I don't expect to see any 4K benchmarks showing the Fury X out of VRAM in any game unless a site benches beyond 4K or downsamples from a resolution even higher than 4K.
 

Alatar

Member
Aug 3, 2013
167
1
81
As for the 4GB assertions, you are spouting idiocy. You are comparing two different standards as if they are equal. Is 4GB of DDR2 = 4GB DDR3 = 4GB DDR4 = 4GB GDDR5? NO!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Capacity wise the answer to your question is yes.

In the same way, it takes 6GB-8GB of GDDR5 to equal 4GB of Gen1 HBM!!!

If anything is nonsense it's statements like this.
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
To address your comments: yes, the capacity is the same. But not all RAM is created equal. You have to look at the effects of clock speeds and bus width to understand the speed, or bandwidth, of the RAM and what effect it has on the frame buffer. Now, when examining the two standards, GDDR5 has a 32-bit bus width per chip and is clocked at up to 1750MHz (7Gbps per pin). Gen1 HBM has a 1024-bit bus width per stack with a clock speed of 500MHz (1Gbps per pin). What this means is GDDR5, per chip, can do a max of 28GBps, while an HBM stack can achieve 100-128GBps. That is much faster!
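The per-chip arithmetic above can be sketched in a few lines (the data rates and bus widths are the ones quoted in the post; treat them as illustrative figures rather than verified specs):

```python
# Peak bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8.

def bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

gddr5_chip = bandwidth_gb_per_s(32, 7.0)     # one GDDR5 chip: 32-bit bus, 7 Gbps/pin
hbm_stack  = bandwidth_gb_per_s(1024, 1.0)   # one gen-1 HBM stack: 1024-bit bus, 1 Gbps/pin

# Fury X: four HBM stacks give the 4096-bit aggregate interface quoted above.
fury_x_total = bandwidth_gb_per_s(4096, 1.0)

print(gddr5_chip)    # 28.0 GB/s per chip
print(hbm_stack)     # 128.0 GB/s per stack
print(fury_x_total)  # 512.0 GB/s total
```

This also shows why GDDR5 cards needed so many chips in parallel: matching 512 GB/s out of 28 GB/s chips takes roughly 18 of them.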

So how does this translate to HBM being able to perform the same function using less RAM? Well, we have to look at what the industry has done to this point. As cards were being developed, the processing power required more bandwidth between the RAM and the GPU. So, how can you increase that bandwidth if you are stuck with a 32-bit bus width? You run the chips in parallel and slap more of them on the card!!! That's the solution, but it has the cost of heat. It sped things up while costing less than developing a better solution. Why bandwidth? BECAUSE THAT WAS THE BOTTLENECK!!! After that solution was implemented, coders got lazy because they had so much RAM to play with; there was no need to optimize the frame buffer.

But that still doesn't explain why HBM can perform the same function on less RAM... But it actually does. By increasing the bandwidth by about 52% over the Titan X (336.5GBps vs 512GBps), along with caching improvements, the data is transferred in and out of the frame buffer FASTER than GDDR5!!! So even if filled to capacity, it is functionally as if you had MORE RAM than on the slower GDDR5 standard! Does this mean that it doesn't have limitations? NO! As I mentioned, at 4K the performance is right on par. When 5K and 8K are used, due to larger frame buffer sizes, the speed advantage breaks down. It still, MAJORLY ON SYNTHETIC BENCHMARKS, outperforms the 980Ti, but gets PWND by the Titan X! Why? Because now the larger capacity becomes necessary, as those resolutions require larger frames. This is where the 12GB of GDDR5 is able to buffer better and make the card shine. So, to address @Alatar and @Headfoot: although I greatly simplified the discussion, 4GB on each standard IS NOT EQUAL, and 4GB OF HBM HAS THE CAPABILITY OF PUTTING THROUGH AS MUCH DATA AS 6-8GB OF GDDR5 IN THE SAME AMOUNT OF TIME, ALLOWING FOR EQUIVALENT PERFORMANCE! Even this is an oversimplification of all the factors at play, but I feel I've addressed both of you sufficiently to say stop trolling and go home!
 

Piroko

Senior member
Jan 10, 2013
905
79
91
...Why bandwidth? BECAUSE THAT WAS THE BOTTLENECK!!!
That is true; HBM is the next step in fighting the bandwidth problem.

But that still doesn't explain why HBM can perform the same function on less ram... But it actually does.
No it doesn't. If you run out of that 512 GBps, ~200 cycle latency local ram, you are forced to go through the 30 GBps, 10,000+ cycle latency PCIe interface.

To make an analogy: Starting a game that you have installed takes you some 20 seconds on a HDD and a little bit less on a SSD. If you don't have it installed yet you need to load it through Steam first, install it and then start - which takes you anywhere from a couple of minutes to half a day. And a SSD will not speed that process up by much.
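To put rough numbers on that analogy (the 512 GB/s and ~30 GB/s figures are the ones quoted in these posts; latency is ignored entirely, which only understates the penalty of spilling past VRAM):

```python
# How long it takes to stream a 512 MB working set from local VRAM
# versus refetching it over PCIe once it no longer fits in VRAM.

WORKING_SET_GB = 0.5     # 512 MB of textures, say
VRAM_BW_GBPS   = 512.0   # Fury X local HBM bandwidth (quoted above)
PCIE_BW_GBPS   = 30.0    # rough PCIe figure from the post

t_vram = WORKING_SET_GB / VRAM_BW_GBPS   # seconds if it fits in VRAM
t_pcie = WORKING_SET_GB / PCIE_BW_GBPS   # seconds if it must come over PCIe

print(round(t_vram * 1000, 2))   # ~0.98 ms
print(round(t_pcie * 1000, 2))   # ~16.67 ms
print(round(t_pcie / t_vram, 1)) # ~17x slower, before counting latency
```

Which is the point: no amount of local bandwidth helps once data has to come across the PCIe bus, so bandwidth is not a substitute for capacity.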

You can tweak your drivers to make (slightly) more efficient use of that local ram, sort of like keeping your hard drive free of junk to have more games installed. But you can do that on any memory interface equally, just like you can do that equally on a hard drive and a SSD.
 