[VC]AMD Fiji XT spotted at Zauba


exar333

Diamond Member
Feb 7, 2004
8,518
8
91
CPUs and GPUs aren't comparable on perf/$. AMD can't compete with Intel on perf/anything because their CPU performance is so much lower; there's no parallel to draw. AMD GPUs compete just fine with nVidia on overall performance. Roughly half the time nVidia is faster, half the time AMD is. When was the last time AMD had better CPU performance than Intel?


It's marketing that sways nVidia fans and nVidia is way better than AMD at marketing their products.
In the last year or two AMD has improved their overall position in the market, but not their sales by much. They increased market share in the 2nd quarter (IIRC), and I believe that's why nVidia came in at the prices they did for the 970/980: to compete on perf/$ with AMD. AMD has partnered with Adobe and Apple in the professional range, two very high-quality brands. Neither would partner with an inferior-performing IHV and risk damaging their own brand image. AMD should just take advantage and capture mindshare.



This is something people like to talk about, but there is no evidence that AMD is falling behind due to its R&D budget. nVidia has had an advantage in gaming perf/W since Kepler, but that's all. We'll have to keep an eye on this and see if it continues. It's not unusual for either company to have an advantage in perf/W at different times.



If you think AMD is lower quality you've bought into some FUD. That or you are simply trying to spread it. Sorry.

Please show me how AMD is NOT cutting R&D while Intel and NV are increasing their respective budgets? That's right, you cannot...

http://www.fool.com/investing/general/2014/01/27/advanced-micro-devices-inc-is-still-a-terrible-bus.aspx

Here is an excerpt, if this helps:

'As the chart above shows, Intel and NVIDIA have significantly boosted their R&D spending in the last five years. Meanwhile, AMD has cut its R&D budget by a third. Intel's R&D budget already dwarfed AMD's, and that gap is growing wider every year. Meanwhile, NVIDIA's R&D budget has overtaken that of AMD.'
 
Last edited:

realibrad

Lifer
Oct 18, 2013
12,337
898
126
I don't get you --- instead of desiring more efficiency -- you desire reference water cooling! If a gamer desires water cooling, AIB's and third parties can offer you this choice.

I don't get you. Instead of increases in perf, you want increases in efficiency.

Gaming rigs use a lot of power. It has almost never been an issue before. Now, with the release of NV's new cards, it's all about efficiency. AMD released the 285, which saw big increases in efficiency and a small tick up in perf, yet nobody cares.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,313
3,177
146
Guys, this thread is about the upcoming AMD flagship card(s), not about market share or business strategy. Please get back on topic.

I am curious about more specs on the 390 series, and how exactly HBM is different and what advantages it has.
 

geoxile

Senior member
Sep 23, 2014
327
25
91
There's a few things to note there. Caches have much higher bandwidth than external memory. They also use much less power, generally speaking.

That's true, but reducing latency by half sounds like a big improvement. IIRC latency is a major issue with main memory so such a big improvement sounds good to me.

I also recall AMD talking about two-level memory; perhaps they intend to use HBM as a high-level cache alongside GDDR?
 
Last edited:

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
I don't get you --- instead of desiring more efficiency -- you desire reference water cooling! If a gamer desires water cooling, AIB's and third parties can offer you this choice.

Cooler temps = greater efficiency in the end.

Even the mighty Intel has temperature issues with their highly efficient current offerings under heavy loads.

If NVIDIA launched its next gen with a water cooler would you buy it? Would you condemn it?

Seems like around here AMD is doomed no matter what. There's always those who bash their every move, nitpick every weakness no matter how small it is.

I'd like to see how the 390x performs and am looking forward to reviews.
 
Last edited:

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
It may be wiser for a company to invest or spend more R&D to create efficient architectures than offer reference water cooling.

Warning for off topic/ignoring mod post
 
Last edited by a moderator:

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
It may be wiser for a company to invest or spend more R&D to create efficient architectures than offer reference water cooling.

It's cheaper to just spread FUD.

With billions of dollars in combined research we still don't have air-cooled autos... Makes me wonder why. Guessing water trumps air for cooling.

Only thing wrong with ref design water cooling is AMD did it 1st.

Please don't respond to the off topic post. Lets try to keep this on topic.
 
Last edited by a moderator:

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Good Lord, there were air-cooled engines and cars for over a century.


edit:
I didn't see the mod post 'till now, sorry!
 
Last edited:

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,313
3,177
146
The last few posts have had just about nothing to do with the upcoming AMD flagship cards, and have either been general AMD vs Nvidia gripes/suggestions or have been responses to said posts, neither of which are on topic.

Next off topic post gets an infraction, I really don't want another flame war here. It may be time to close the thread soon and wait for more information on the 390x.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Are there any drawbacks to HBM, or will it essentially be 'all good'? In regards to latency, etc. Looks amazing...
 

SunburstLP

Member
Jun 15, 2014
86
20
81
Barely on-topic wall-of-text incoming, feel free to skip it.

AMD is really in a damned if they do and damned if they don't situation. If they go the route that RS suggests with hardcore, closed source performance optimizations, people will scream that it's dirty and that AMD can't make a GPU good enough to win without cheating.

Conversely, if they don't we'll just keep having the same BS like here with all sorts of foaming-at-the-mouth OT rants about financials, R&D, CPU performance, etc. If there's one behavior that I truly have never understood, it's people's allegiances to brands. I understand shills. They receive compensation for spreading information and enthusiasm. Fair enough. But, why is the human psyche so weak that you'd place yourself in a gladiator in a coliseum full of lions style fight defending your ego and purchase?

--questionably OT bit below--

I've had a life-long soft spot for AMD. My first machine was a 386DX40. At the end of the day, I seek information relevant to my usage and make the decision that best accommodates my situation. I have a 5770 in my machine right now. When I bricked my GTX 280, it was what the local shop had and it fulfilled my requirements. Since I'm primarily a Linux user, I'd rather have an Nvidia card because their drivers provide a way better experience for most of my usage trends. Having said that, AMD is in the middle of a ginormous open-source rebuild of their driver. The kernel portion will be open-source, and the user-land components will go either way. It's a phenomenal idea. I'm happy to wait until I see the data from their efforts before I make a buying decision.

Instead of being happy that NV's engineering provided something remarkable with Maxwell, we poo-poo it. We say that it's software trickery since it's hard to figure out just how much of an improvement Maxwell is. And that's sort of fair, but how many times did a GPU not really make an impact at launch only to show up later as a card that should have been lauded for its forward-thinking design?(I'd put TBDR GPUs here, but I'm hardly knowledgeable enough about it. They seemed mediocre at the time and have found their niche in mobile if I understand correctly.) Or the opposite, really. (I'm looking at you NV 5k series...)

It's imperative to the success of our community that these brand shenanigans go away. The community-at-large loses when things are closed-source. We lose when benchmark drivers come out, we lose when bias and favors are common amongst review sites. We lose when we are so blinded by the colors of our jerseys that we can't discuss the merits of things fairly. I guess what I'm trying to say is that when we look at each other, we should realize that we all enjoy expensive magical devices that make pretty pictures, or do real science, or whatever it is that people use these things for. We can all get what we want if we exert our will. As a community, we can change things. If we don't stop the crap, it'll never happen and companies will keep having their way with us.

edit: typos
 
Last edited:

Greenlepricon

Senior member
Aug 1, 2012
468
0
0
As far as I know, the only real problem with HBM seems to be cost, but I doubt that would be enough for them to price it unreasonably high, and they may even be able to skimp on other parts if the memory is good enough (i.e. a lower-clocked bus), which would save on cost.

If it does manage to come in 20% above Nvidia's flagship at a good price, then I'll be picking one up. I am a bit worried about the power concerns, however. I know people like to say it's not a problem for an enthusiast, but my teeny 500W PSU and habit of overclocking don't always agree. If the power usage doesn't go up much over Hawaii, then I really don't think there's any reason to complain. I'm excited to see what Tonga's improvements bring to the table, and to see whether power efficiency really did go up and we just can't see it on a cut-down chip that didn't make the cut for Macs.

People often forget that better performance per watt doesn't mean a low energy draw. It means that power usage may be 40% higher while performance is 50% higher than another card's. They could take this route against Nvidia rather than cutting watts as low as possible. I would be perfectly happy buying the top card as long as it wasn't breaking my PSU.
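To make that arithmetic concrete, here's a minimal sketch with made-up numbers (the 200 W / 100-unit baseline is purely illustrative): a card can draw noticeably more power and still be the more efficient one.

```python
# Hypothetical cards: B draws 40% more power but delivers 50% more performance.
power_a, perf_a = 200.0, 100.0              # watts, arbitrary performance units
power_b, perf_b = power_a * 1.4, perf_a * 1.5

ppw_a = perf_a / power_a                    # 0.50 perf/W
ppw_b = perf_b / power_b                    # ~0.536 perf/W

# B wins on perf/W despite drawing 280 W vs 200 W.
print(ppw_b > ppw_a, power_b > power_a)     # True True
```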
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,815
1,294
136
Are there any drawbacks to HBM, or will it essentially be 'all good'? In regards to latency, etc. Looks amazing...
There is additional cost in using a Si interposer and 2.5D TSV stacks. Other than that, it is all "excellent."

- HBM is wide
- HBM is low latency
- HBM is low power for high performance

For the Main VRAM;
GDDR5 -> 512-bit / 8 x 64-bit Dual Channel
5 Gbps / 320 GB/s
HBM -> 4096-bit / 4 x 1024-bit Octo-channel or Sedec-pseudo Channel.
1 Gbps(1.25 Gbps) / 512 GB/s(640 GB/s)

Real-world bandwidth efficiency is also higher with HBM due to lower tFAW/tRRD latencies.

The best bet is that DDR4/GDDR5M will be tacked on as support/sub-VRAM.

Xeon Phi(Knights Landing) is 16 GB MCDRAM(HMC) with 384 GB DDR4.
The upcoming high-end AMD GPU with HBM is 4 GB.

So, ideally, if AMD is copying Intel, the FirePro part should have at least 96 GB of DDR4, which is 192-bit DDR4 with 4 Gb (1Gbx4) density DRAM.

DDR4 192-bit => 96 GB @ 2133 MHz or 2400 MHz speed
(SK Hynix exclusive like HBM) GDDR5M 192-bit => 12 GB @ 5000 MHz speed.
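The quoted peak-bandwidth figures fall out of bus width times per-pin data rate. A quick sketch (the helper name is mine, not from any spec):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(512, 5.0))    # GDDR5, 512-bit @ 5 Gbps   -> 320.0 GB/s
print(peak_bandwidth_gbs(4096, 1.0))   # HBM, 4096-bit @ 1 Gbps    -> 512.0 GB/s
print(peak_bandwidth_gbs(4096, 1.25))  # HBM, 4096-bit @ 1.25 Gbps -> 640.0 GB/s
```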
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
There is additional cost in using a Si interposer and 2.5D TSV stacks. Other than that, it is all "excellent."

- HBM is wide
- HBM is low latency
- HBM is low power for high performance

For the Main VRAM;
GDDR5 -> 512-bit / 8 x 64-bit Dual Channel
5 Gbps / 320 GB/s
HBM -> 4096-bit / 4 x 1024-bit Octo-channel or Sedec-pseudo Channel.
1 Gbps(1.25 Gbps) / 512 GB/s(640 GB/s)

Real-world bandwidth efficiency is also higher with HBM due to lower tFAW/tRRD latencies.

The best bet is that DDR4/GDDR5M will be tacked on as support/sub-VRAM.

Xeon Phi(Knights Landing) is 16 GB MCDRAM(HMC) with 384 GB DDR4.
The upcoming high-end AMD GPU with HBM is 4 GB.

So, ideally, if AMD is copying Intel, the FirePro part should have at least 96 GB of DDR4, which is 192-bit DDR4 with 4 Gb (1Gbx4) density DRAM.

DDR4 192-bit => 96 GB @ 2133 MHz or 2400 MHz speed
(SK Hynix exclusive like HBM) GDDR5M 192-bit => 12 GB @ 5000 MHz speed.

Those are some pretty amazing numbers. The bandwidth is definitely needed for higher-resolution displays and AA, that's for sure. This is exciting stuff. :)

Thanks for the info!
 
Sep 27, 2014
92
0
0
Barely on-topic wall-of-text incoming, feel free to skip it.

AMD is really in a damned if they do and damned if they don't situation. If they go the route that RS suggests with hardcore, closed source performance optimizations, people will scream that it's dirty and that AMD can't make a GPU good enough to win without cheating.

Conversely, if they don't we'll just keep having the same BS like here with all sorts of foaming-at-the-mouth OT rants about financials, R&D, CPU performance, etc. If there's one behavior that I truly have never understood, it's people's allegiances to brands. I understand shills. They receive compensation for spreading information and enthusiasm. Fair enough. But, why is the human psyche so weak that you'd place yourself in a gladiator in a coliseum full of lions style fight defending your ego and purchase?

--questionably OT bit below--

I've had a life-long soft spot for AMD. My first machine was a 386DX40. At the end of the day, I seek information relevant to my usage and make the decision that best accommodates my situation. I have a 5770 in my machine right now. When I bricked my GTX 280, it was what the local shop had and it fulfilled my requirements. Since I'm primarily a Linux user, I'd rather have an Nvidia card because their drivers provide a way better experience for most of my usage trends. Having said that, AMD is in the middle of a ginormous open-source rebuild of their driver. The kernel portion will be open-source, and the user-land components will go either way. It's a phenomenal idea. I'm happy to wait until I see the data from their efforts before I make a buying decision.

Instead of being happy that NV's engineering provided something remarkable with Maxwell, we poo-poo it. We say that it's software trickery since it's hard to figure out just how much of an improvement Maxwell is. And that's sort of fair, but how many times did a GPU not really make an impact at launch only to show up later as a card that should have been lauded for its forward-thinking design?(I'd put TBDR GPUs here, but I'm hardly knowledgeable enough about it. They seemed mediocre at the time and have found their niche in mobile if I understand correctly.) Or the opposite, really. (I'm looking at you NV 5k series...)

It's imperative to the success of our community that these brand shenanigans go away. The community-at-large loses when things are closed-source. We lose when benchmark drivers come out, we lose when bias and favors are common amongst review sites. We lose when we are so blinded by the colors of our jerseys that we can't discuss the merits of things fairly. I guess what I'm trying to say is that when we look at each other, we should realize that we all enjoy expensive magical devices that make pretty pictures, or do real science, or whatever it is that people use these things for. We can all get what we want if we exert our will. As a community, we can change things. If we don't stop the crap, it'll never happen and companies will keep having their way with us.

edit: typos

Excellent post. I agree 100%
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Some risks with HBM could be supply issues, a possibly higher chance of failure for a new technology, and the current 4GB VRAM limit. 4GB is cutting it very close for keeping a GPU beyond 2.5-3 years, given how bloated console ports have gotten in 2014. 4GB is a good amount now for 1440/1600p (3GB bare minimum and 2GB worthless), but it's already borderline for 4K. I would much prefer 6-8GB flagships. This VRAM limit for HBM is my biggest gripe.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Some risks with HBM could be supply issues, a possibly higher chance of failure for a new technology, and the current 4GB VRAM limit. 4GB is cutting it very close for keeping a GPU beyond 2.5-3 years, given how bloated console ports have gotten in 2014. 4GB is a good amount now for 1440/1600p (3GB bare minimum and 2GB worthless), but it's already borderline for 4K. I would much prefer 6-8GB flagships. This VRAM limit for HBM is my biggest gripe.

Well put. I think 6GB should be the minimum for VRAM on all 2015 flagship parts. Especially when we see more and more poorly-done console ports that are using over 4GB of RAM for caching.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
If AMD wants to change the mentality of the masses, it needs to be THE BEST outright within that generation and repeat that for a few years at minimum. Then gamers will think twice before buying an NV GPU (because all their buddies will have had much faster AMD cards for years).
I don't even think that will put more than a small dent in Nvidia's market share. What AMD needs to do is become a marketing machine (which basically amounts to brainwashing) to change the perception of their products. Nvidia has the mindshare thing locked down tight: when they're slower, they're smoother; when they use more power, they have better drivers and better dev relations. When AMD is first to market and much faster, then AMD is not fast enough for a new generation, and NV loyalists wait it out.

It simply doesn't make any difference the actual value and merits a product may have, brand loyalty trumps all of that.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,815
1,294
136
This VRAM limit for HBM is my biggest gripe.
That is a non-problem, since it can be offset by using a secondary DDR PHY.

4 GB HBM + 12 GB DDR4/GDDR5M/GDDR5
= 16 GB VRAM

This was not a problem back with Nvidia's TurboCache and ATI's HyperMemory.

32 MB Onboard VRAM + 96 MB Motherboard System RAM = 128 MB Videocard.
128 MB Onboard VRAM + 128 MB Motherboard System RAM = 256 MB Videocard.

4 GB HBM VRAM On-chip
12 GB etc VRAM On-board
+ unified system RAM space.
= VRAM
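The capacity arithmetic above is just a sum across tiers. A tiny sketch using the poster's numbers (the helper is illustrative, not a real API):

```python
def total_vram(on_chip, on_board=0, system=0):
    """Total addressable video memory across tiers (all values in one unit)."""
    return on_chip + on_board + system

print(total_vram(32, system=96))     # MB, TurboCache-era card -> 128
print(total_vram(128, system=128))   # MB -> 256
print(total_vram(4, on_board=12))    # GB, 4 GB HBM + 12 GB DDR4/GDDR5M -> 16
```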
 
Last edited:

el etro

Golden Member
Jul 21, 2013
1,584
14
81
This VRAM limit for HBM is my biggest gripe.

The VRAM madness (pushed by the likes of Shadow of Mordor and The Evil Within) could not come at a worse time...

That is a non-problem, since it can be offset by using a secondary DDR PHY.

Adding off-die VRAM to a GPU that seems to be getting watercooled is not a good idea. They are probably already counting on the HBM power savings to hit the power limit for a card as big as this...
 

SunburstLP

Member
Jun 15, 2014
86
20
81
Ugh, I just got a bit ill thinking about how awful the GF6200TC was and how many systems I had to put them in when I worked in a computer shop.

The idea of mixing on-board GDDR5 with HBM is interesting. Would that need to be accounted for in the design of the memory controller, or would the latency not really be a big deal?
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,815
1,294
136
Adding off-die VRAM to a GPU that seems to be getting watercooled is not a good idea.
The GPU appears to be air-cooled; it's just in the limited-edition colors of the R9 295X2 cooler.

If anything it will be an air cooler with the option of going liquid, like the ASUS Poseidon. To include Asetek in this: http://asetek.com/desktop/gpu-combo-coolers/550qc.aspx
http://asetek.com/desktop/gpu-combo-coolers/740qc.aspx
The idea of mixing on-board GDDR5 with HBM is interesting. Would that need to be accounted for in the design of the memory controller, or would the latency not really be a big deal?
The memory controller and prefetch logic will probably know where to store things.

The HBM PHYs will be closer to the shader engines, while the DDR4/GDDR5/GDDR5M PHY will be closer to PCIe, XDMA, etc.

Frame -> HBM
Texture/Misc -> Sub-PHY.
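Purely to illustrate the routing idea above: a driver or memory controller could steer latency- and bandwidth-critical surfaces to HBM and bulk data to the slower off-chip pool. All names here are hypothetical, not from AMD's actual driver.

```python
# Surfaces that benefit most from the fast on-interposer pool (illustrative set).
FAST_POOL = {"framebuffer", "render_target", "depth_buffer"}

def place_allocation(kind: str) -> str:
    """Route an allocation to HBM or the secondary DDR PHY's DRAM."""
    return "HBM" if kind in FAST_POOL else "sub-PHY DRAM"

print(place_allocation("framebuffer"))  # HBM
print(place_allocation("texture"))      # sub-PHY DRAM
```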
 
Last edited:

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
I don't even think that will put more than a small dent in Nvidia's market share. What AMD needs to do is become a marketing machine (which basically amounts to brainwashing) to change the perception of their products. Nvidia has the mindshare thing locked down tight: when they're slower, they're smoother; when they use more power, they have better drivers and better dev relations. When AMD is first to market and much faster, then AMD is not fast enough for a new generation, and NV loyalists wait it out.

It simply doesn't make any difference the actual value and merits a product may have, brand loyalty trumps all of that.

Well said, though it's rather sad how easy it is to manipulate the general public.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Well said, though it's rather sad how easy it is to manipulate the general public.

That's a rather strong statement.

Realize that a company like NV has products released and products in the pipeline. It should be pretty easy to market something that is easily the best, right? If there ARE deficiencies vs. the competition, you focus on what you can do better; for example, working with devs to eke out every bit of performance possible, or working on drivers. You add value wherever you can to make the overall product offering better. That's just business.

Look at the regular market leaders, Intel and NV. They both followed up pretty terrible products like the P4E and GeForce FX with great innovations like Core and the 6800/7800/8800 soon after. Sometimes it is the failures that drive you to better products. When you DO have a 'winner', you should keep your foot on the gas and work to improve other aspects of your business.

AMD did this pretty well in the 4870/5870 era, when they (1) greatly improved drivers and (2) matched SLI scaling.

Unfortunately, AMD (as a company) has been much weaker over the past 10 years vs. NV, and I think we are starting to see the effects. How long will investors allow AMD to keep bleeding cash? You can see R&D budgets shrinking for AMD and growing for NV; that is not a good sign for AMD in the long term. AMD just doesn't have the die-hard fans that NV has. I think there are definitely areas where they could have done a better job, but they have tried a lot of things and been creative with products and marketing. I'm not sure there is really an answer here....