[BitsAndChips]390X ready for launch - AMD ironing out drivers - Computex launch

Page 56

arandomguy

Senior member
Sep 3, 2013
556
183
116
The Avg and Peak on TPU are with Metro LL. So yet a different set of results.

Yes, I'm aware, but it makes more sense to compare gaming results against gaming results and then perhaps question their validity, as opposed to comparing synthetic (Furmark) results against gaming results and questioning the validity of that.

Also note that TPU measures card-only power, whereas, say, AnandTech measures total system power. This is why even the Furmark results from the two differ so much, as Furmark's load on the rest of the system is comparatively small.
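To illustrate why the two methodologies diverge, here's a quick sketch. All wattage figures below are invented placeholders, not measurements from either site:

# Hypothetical comparison of card-only vs. whole-system ("at the wall") power.
# All numbers are invented placeholders, not measurements from TPU or AnandTech.

PSU_EFFICIENCY = 0.90  # assume a 90%-efficient power supply

def wall_power(gpu_w, rest_of_system_w, psu_efficiency=PSU_EFFICIENCY):
    """AC draw at the wall = DC load of GPU + rest of system, divided by PSU efficiency."""
    return (gpu_w + rest_of_system_w) / psu_efficiency

# Under a game the CPU/memory also work hard; under Furmark they mostly idle.
game_wall = wall_power(gpu_w=250, rest_of_system_w=120)     # ~411 W at the wall
furmark_wall = wall_power(gpu_w=280, rest_of_system_w=60)   # ~378 W at the wall

print(f"Game:    250 W card-only, {game_wall:.0f} W at the wall")
print(f"Furmark: 280 W card-only, {furmark_wall:.0f} W at the wall")
# Card-only ranks Furmark higher, while the system-level number can rank the game higher.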
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
I wonder if this will indeed be largely a re-brand from AMD, with maybe just one flagship with HBM, or possibly just two cards with HBM?
 

therealnickdanger

Senior member
Oct 26, 2005
987
2
0
What I find strange is what they're up to anyway. We have the 900 series out, but the 750 Ti is still in the lineup. That has got to be confusing to the average Joe.

As long as the numbers on the box are "bigger", then Average Joe won't ever question it.

This is purely a guess, but let's look specifically at the GTX 750/Ti. It has a reputation - a presence, if you will - on the market as being a somewhat magical card. Due to its low power demands and high performance, it can be used as a sort of cure-all for just about any OEM system: an inexpensive booster shot to cure your low-FPS woes. That kind of word-of-mouth marketing pays for itself. If NVIDIA were to repackage it as the GTX 840 or 940 and stop selling it as the 750, it would lose the recognition it has earned. So not only would it cost NVIDIA money to rebrand, repackage, and remarket it, it would probably also disrupt the money train coming in from its sales. Lose-lose. Looking at it from this perspective, it makes perfect sense why they allow SKUs to fragment.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Who knows the ways of marketing departments?

Most likely the answer is a bit dull: the 750/Ti slots were free at the time - I'm not sure why :) - and they haven't wanted the hassle/cost of a rebrand since.

It also isn't impossible that they wanted the option of moving the 9xx series to 10xx when the cut-down GM200 arrived (partially to counter any AMD competition, partially because the 9xx naming scheme would be getting rather crowded!).

Then the 750ti could have jumped straight to being a 1050 ;)

Or maybe the 950/ti is being kept free for a fairly early Pascal trial run.
 

Maxima1

Diamond Member
Jan 15, 2013
3,549
761
146
If anything, I would argue the complete opposite of your point. If AMD is extremely tight on resources, they should never have done the $500-700 level 390/390X cards and should instead have focused all their efforts on the $100-400 desktop and mobile dGPU segments - that's literally 80% of the entire GPU market. If AMD spent hundreds of millions of dollars just to make 2 good desktop chips in the form of uber-expensive 390/390X/395X2 cards that only 5% of the market cares about, and everything else for desktop and mobile is old crap, they are going to lose market share every quarter until 14nm/16nm. Also, given how R9 290/290X hardly sell at $240-280, this strategy would be an automatic fail and defeats the purpose of relaunching the R9 300 series. Might as well do nothing, literally save their precious millions for 14nm/16nm, and sell the R9 200 series for another 15 months. I still don't understand the point of taking Pitcairn, Hawaii and Tonga and just re-labeling their boxes, replacing the R9 2xx number with R9 3xx. It sounds just too crazy to believe that AMD spent all its efforts on just 1 chip (and its cut-down R9 390 version due to yields).
Well, if you look at their mobile lineup, they've put up the white flag. Even the M375 has DDR3, which is a joke in 2015.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
All the talk and speculation, and still nothing concrete. I really hope the 390X sucker punches Nvidia, but R9 295X2 performance on the same node sounds too good to be true. The 600 mm2 GM200 - while IMO needing a tweak due to a noticeable efficiency loss vs. GM204 - dropped all the DP, went entirely for graphics performance, and at 600 mm2 still can't match the 295X2 unless it's max OC'd, and even then it's still a little bit slower most of the time. I don't think AMD is going to be able to improve perf/W and perf/mm2 that much on the same node over their existing architecture, let alone over Nvidia's Maxwell. I hope they can match the Titan X at $649 with equal perf/W ($699-749 for an 8GB version), clearly beat the GTX 980 at $499, and match the GTX 980 at $399. That would be fantastic, IMO.

Here's to hoping though.
 
Last edited:

S.H.O.D.A.N.

Senior member
Mar 22, 2014
205
0
41
If anything, I would argue the complete opposite of your point. If AMD is extremely tight on resources, they should never have done the $500-700 level 390/390X cards and should instead have focused all their efforts on the $100-400 desktop and mobile dGPU segments - that's literally 80% of the entire GPU market.

Invest God knows how much money and time into a product that will serve them for about 9-12 months, only to become obsolete once 14nm hits, instead of implementing something that will directly benefit not just their dGPU segment but also their APU offerings for years to come?
 

Maxima1

Diamond Member
Jan 15, 2013
3,549
761
146
Doesn't anyone else find it funny that an "R9" part is using DDR3? Lenovo has the M375 in their Y40 gaming model. I wonder if the Skylake model will end up with it, too....
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
Doesn't anyone else find it funny that an "R9" part is using DDR3? Lenovo has the M375 in their Y40 gaming model. I wonder if the Skylake model will end up with it, too....
edit: nm found it.

So far, it is the only mobile GPU with an official release. I need more to gauge its value.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126

Thanks! I stand corrected. :thumbsup:

Invest God knows how much money and time into a product that will serve them for about 9-12 months, only to become obsolete once 14nm hits, instead of implementing something that will directly benefit not just their dGPU segment but also their APU offerings for years to come?

From what I've read, their Zen APUs that could utilize HBM aren't even out until 2017. Also, I don't see how 28nm GPUs will become obsolete in 9-12 months. Why do you think we are going to see a top-to-bottom 14nm/16nm mobile and desktop dGPU roll-out from AMD/NV in just 9-12 months? We may see an early 14nm/16nm trial SKU, like AMD did with the 40nm HD4770 or NV did with the Maxwell GTX 750/750 Ti. Otherwise, I personally see Maxwell/R9 300 series lasting at least until August-September 2016, which is nearly 1.5 years from now. That's a long time to have spent hundreds of millions of dollars on a $500+ niche segment product.

Also, if AMD just re-badges the entire R9 300 series, that won't accomplish much, because even at very low prices today the R9 270-290X cannot hold ground against Maxwell. So this strategy is an automatic fail: it will tarnish AMD's brand reputation long-term and will not produce tangible market share gains or profits.

Don't forget how successful AMD was with the HD4800-6900 series compared to where they are today, even though the 4870/4890/5870/6970 did not have the performance crown of their generations. AMD did well because they had good products in the $100-400 range. Most of the market doesn't care about $500+ video cards, and if AMD's entire focus were the $500 R9 390 series, that would be a big strategic mistake imo, unless AMD intends to bring the R9 390 down to ~$400 price levels and end up with its own GTX 970, so to speak. I still think AMD will make improvements to other SKUs besides the 390/390X.

All the talk and speculation, and still nothing concrete. I really hope the 390X sucker punches Nvidia, but R9 295X2 performance on the same node sounds too good to be true.

I don't even know who is spinning these R9 390X = 295X2 rumours. It sounds like wild fantasy. From what we have so far, at best the R9 390X is 50-55% faster than the R9 290X (1.05GHz with 4096 shaders/256 TMUs vs. 1GHz with 2816 shaders/176 TMUs). When tested at higher resolutions where CPU bottlenecks are removed, the R9 295X2 is 84% faster than an R9 290X and 24% faster than the Titan X. I don't know how anyone actually believes that the R9 390X will match an R9 295X2 in FPS. What I will say, though, is that price notwithstanding, I would rather own a single-chip card with 80-90% of the performance of the R9 295X2. The market has already voted and is buying the much slower 980 over the 295X2, so from the market's point of view even a $600 R9 295X2 is irrelevant.
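For what it's worth, the 50-55% figure falls straight out of those rumoured specs if you treat peak throughput as shaders x 2 ops (FMA) x clock; real games never scale that perfectly, so this is an upper bound, not a prediction:

# Back-of-the-envelope scaling from the rumoured specs quoted above.
r9_290x = {"shaders": 2816, "clock_ghz": 1.00}
r9_390x = {"shaders": 4096, "clock_ghz": 1.05}  # rumoured, unconfirmed

def peak_tflops(gpu):
    """Peak single-precision throughput: shaders * 2 ops per clock (FMA) * clock."""
    return gpu["shaders"] * 2 * gpu["clock_ghz"] / 1000.0

gain = peak_tflops(r9_390x) / peak_tflops(r9_290x) - 1
print(f"R9 290X: {peak_tflops(r9_290x):.2f} TFLOPS")
print(f"Rumoured 390X: {peak_tflops(r9_390x):.2f} TFLOPS (~{gain:.0%} higher, a theoretical ceiling)")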

I hope they can match the Titan X at $649 with equal perf/W ($699-749 for an 8GB version), clearly beat the GTX 980 at $499, and match the GTX 980 at $399. That would be fantastic, IMO.

I don't think that's going to happen. The Titan X has almost no DP performance, but I think AMD will keep theirs in. If the R9 390X matches the Titan X in performance for $649, I don't think most gamers will care about 50-60W of extra power usage. The more viable comparison will be a max-OC'd after-market 6GB GM200 vs. a max-OC'd after-market R9 390X. I think once those two cards are released, the Titan X will become irrelevant, just like the 780 GHz editions and the R9 290 made the original Titan a pointless product for gaming. NV will not want to admit that the Titan X won't be their fastest offering this generation, because it could affect sales, but I have little doubt in my mind that the MSI Lightning/EVGA Classified GM200 versions will beat the Titan X.
 
Last edited:

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
Technologies like High Bandwidth Memory are unlikely to make a huge impact in the professional market in the short term, since first-generation HBM deployments are limited to 4GB of RAM or less, and most workstation cards offer 12 to 16GB at the high end.

Looks like no 8GB HBM cards for us?
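The 4GB ceiling follows from the first-generation HBM building blocks themselves - 1GB per stack on a 1024-bit interface, with four stacks on the interposer. A quick sanity check (my numbers, taken from the publicly announced HBM1 specs):

# Sanity check of first-generation HBM capacity and bandwidth, based on the
# publicly announced HBM1 figures (1GB per 1024-bit stack, 1 Gbps per pin).
STACKS = 4            # stacks on the interposer
GB_PER_STACK = 1      # 4-high stack of 2Gb DRAM dies
BUS_WIDTH_BITS = 1024 # per stack
GBPS_PER_PIN = 1.0    # 500 MHz, double data rate

capacity_gb = STACKS * GB_PER_STACK
bandwidth_gbs = STACKS * BUS_WIDTH_BITS * GBPS_PER_PIN / 8  # bits -> bytes

print(f"Capacity:  {capacity_gb} GB")          # 4 GB
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")  # 512 GB/s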
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
True :) But they can sanely push out that sort of small die much sooner on 14/16 than they can the bigger ones. Won't need enormous volumes either.

It's not going to happen now, or even soon, though. I think AMD just releasing 28nm again means we're likely stuck on it for another year or so, give or take.

I am curious, though, as to who will be first on the next node, be it 20nm or 16FF/14FF. Typically it's not nVidia. We'll see if that changes.


Tom's Hardware uses an unspecified GPGPU load for their "torture test". TechPowerUp uses FurMark, which is the only reliable way to get maximum power consumption out of a card.

Furmark is not reliable. Both companies software-throttle for it. The IHVs need to have their cards see it as a 2D app and not even shift out of 2D clocks. Kill that program once and for all.
 
Last edited:

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I know a guy that is in a position to know that this guy doesn't know a guy that is in a position to know anything about Fiji...

Some people are like a flag.
 

S.H.O.D.A.N.

Senior member
Mar 22, 2014
205
0
41
A guy knows a guy. And he's apparently the only guy that knows a guy, because no other guy was able to debunk the 8gb rumour for months.

About as legit as your random wccftech rumour.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Gg AMD, you had a good run...

970 SLI sold like hot cakes, with only 3.5GB of VRAM, to everyone from 1080P all the way up to 3440x1440 users. The same goes for 980 SLI. I guess all those NV buyers didn't care about 3.5-4GB of VRAM, but if the R9 390/390X only has 4GB, it's DOOM.

970 SLI vs. 290X
- 60% faster at 1080P
- 70% faster at 1440P

980 SLI vs. 290X
- 72% faster at 1080P
- 89% faster at 1440P

I guess a $500-600 card that's 50-60% faster than an R9 290X would be worthless then because of 4GB of VRAM? What happens if AMD repeats the HD4850-4870 or 5850-5870 strategy? A lot of PC gamers don't have 4K monitors, which means 4GB is plenty.

So I guess we can put the "Dual Link Interposer" fantasy to death as well.

Nothing is official until AMD's launch. But it's interesting how 4GB vs. 8GB has become such a contentious issue for you, considering you paid $550 for a 980 4GB that's only 15% faster than the 290X, yet you didn't have a problem with its 4GB of VRAM given the price. Also, I don't recall you going out of your way to not recommend 970 3.5GB SLI or 980 4GB SLI to any 1080P-4K gamers, despite both of those setups trading blows with and even beating a Titan X in FPS. I guess having 980 SLI performance for $1000 with only 4GB of VRAM is fine as of May 2015, but as of July 2015, 4GB officially becomes an outdated spec? :sneaky:

TechPowerUp uses FurMark, which is the only reliable way to get maximum power consumption out of a card.

No, it isn't. Furmark is not a reliable way to arrive at a GPU's maximum power usage; it is a worthless synthetic GPU power test because no real-world program can load the GPU the way a power virus can. That you continue to deny this and keep using FurMark to represent a GPU's maximum power usage is amazing, given that the entire forum already debated this topic years ago and agreed that Furmark is a waste of time precisely because it acts as a power virus. How you aren't understanding the basic premise - that no real-world program can stress 99.9% of the transistors inside a GPU, but Furmark can - is remarkable. Unless someone designs a real-world application that PC gamers actually use regularly and that can mimic the stress levels of Furmark, Furmark is just a synthetic bench and nothing more, and is as far away from reality as it gets.

Seti@Home, MilkyWay@Home, Folding@Home and scrypt mining are all tasks PC gamers actually run on their GPUs for some benefit. There is no measurable benefit that Furmark provides for today's GPU testing - the score in Furmark, the FPS in Furmark, are all meaningless. It can't be used reliably for OC stability testing either, because NV/AMD have built GPU thermal throttling into their drivers. There is no community of PC users that participates in weekly Furmark competitions to help scientific research, make money from selling online currency, or for gaming purposes. For all intents and purposes, Furmark is an outdated test that some websites still cling to because their editors are stuck in the past, when Furmark actually had some benefits.

I am not even sure you have run Furmark lately. If you do, you will notice that if the GPU's power target is exceeded, the GPU starts thermal throttling on purpose and/or GPU load drops far below 100%. This is because NV/AMD included safety mechanisms in their drivers to prevent GPUs from failing under the unrealistic ASIC/PCB workload imposed by Furmark. It's no wonder so many HD4870/GTX570/590 cards failed under Furmark.
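If you want to see the throttling for yourself, here's a minimal sketch (assuming an NVIDIA card with nvidia-smi on the PATH; AMD cards would need different tooling) that logs power draw and SM clock once a second while the stress test runs:

# Minimal sketch: sample power draw, SM clock, temperature and GPU load once per
# second while a stress test (e.g. Furmark) runs, to watch the driver pull clocks
# back once the power target is exceeded. Assumes nvidia-smi is on the PATH.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=power.draw,clocks.sm,temperature.gpu,utilization.gpu",
         "--format=csv,noheader"]

for _ in range(60):  # sample for one minute
    reading = subprocess.check_output(QUERY, text=True).strip()
    print(time.strftime("%H:%M:%S"), reading)
    time.sleep(1)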

Furmark is basically known for destroying GPUs' VRM systems, and AMD/NV do not consider this program worthwhile whatsoever.
http://www.techpowerup.com/forums/threads/ati-deliberately-retards-catalyst-for-furmark.69799/
 
Last edited:

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
970 SLI sold like hot cakes, with only 3.5GB of VRAM, to everyone from 1080P all the way up to 3440x1440 users. The same goes for 980 SLI. I guess all those NV buyers didn't care about 3.5-4GB of VRAM, but if the R9 390/390X only has 4GB, it's DOOM.

970 SLI vs. 290X
- 60% faster at 1080P
- 70% faster at 1440P

980 SLI vs. 290X
- 72% faster at 1080P
- 89% faster at 1440P

I guess a $500-600 card that's 50-60% faster than an R9 290X would be worthless then because of 4GB of VRAM? What happens if AMD repeats the HD4850-4870 or 5850-5870 strategy? A lot of PC gamers don't have 4K monitors, which means 4GB is plenty.



Nothing is official until AMD's launch. But it's interesting how 4GB vs. 8GB has become such a contentious issue for you, considering you paid $550 for a 980 4GB that's only 15% faster than the 290X, yet you didn't have a problem with its 4GB of VRAM given the price. Also, I don't recall you going out of your way to not recommend 970 3.5GB SLI or 980 4GB SLI to any 1080P-4K gamers, despite both of those setups trading blows with and even beating a Titan X in FPS. I guess having 980 SLI performance for $1000 with only 4GB of VRAM is fine as of May 2015, but as of July 2015, 4GB officially becomes an outdated spec? :sneaky:


It was a bit tongue in cheek. On a serious note, if they can't outperform Nvidia in every aspect, then they will lose more market share. Think about it: Nvidia's marketing alone adds 20% extra performance to their cards, so AMD has to be competitive and overcome that extra marketing fluff on top. Having less memory is a quick way to lose mind share.
 
Mar 10, 2006
11,715
2,012
126
It was a bit tongue in cheek. On a serious note, if they can't outperform Nvidia in every aspect, then they will lose more market share. Think about it: Nvidia's marketing alone adds 20% extra performance to their cards, so AMD has to be competitive and overcome that extra marketing fluff on top. Having less memory is a quick way to lose mind share.

Sadly, this is true. A 6GB GM200 will probably seem more attractive than a Fiji with 4GB to the average Joe, even if the Fiji performs better.
 

maddie

Diamond Member
Jul 18, 2010
5,204
5,614
136
Am I to conclude that the consensus now is 4GB HBM because of what NVGPU posted?
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com

Rest In Pepperonis AMD 390X for those that were hoping for 4K+ or surround. Guess it's 980 Ti/Titan X or bust if what this guy says is true.


So I guess we can put the "Dual Link Interposer" fantasy to death as well.

A lot of fantasies will be going by the wayside as we get closer to the launch of this thing, especially ones like "AMD will take back 35-40%+ share in one quarter". The excuses about 4GB being enough are already flying. Sit back and enjoy the show.
 
Last edited:

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
Am I to conclude that the consensus now is 4GB HBM because of what NVGPU posted?

Yes, Nvidia fans (in this very thread) will latch onto yet another rumor from some random website, and will preach it as fact. It's happening already!


Fanboy accusations (even indirect ones) will not be tolerated, and I will toss you from this thread if this continues.

-Rvenger
 
Last edited by a moderator: