R9 300 cards listed in new driver - R9 370 is a rebrand

Really, guy? If RS mentions the 980 (or any card, really) it is in the context of perf/$; at least his post history shows this. So it isn't some kind of "internet" grudge...

That is strange...
You don't even have to leave this thread; you obviously haven't even looked at the posts here... did you even follow?

Anyway, I am not continuing off topic with this.
My post is for RS; he may just disregard it. But he knows some of the things he has said. I have addressed them many times, just in an attempt to shine a different light on it.

We aren't going to bring up things that were said, but you can read this thread and see what you missed. Of course he raves on and on about performance per dollar, but multiple times now...
You aren't dragging me any further.
I said what I said to RS; it doesn't matter if you don't get it.
 
Let us know how we can get 512-640 GB/sec of memory bandwidth out of GDDR5 this generation. No GPU maker in the world has so far made a 512-bit memory controller work with 8GHz GDDR5. 😉

Are you saying it's impossible?

CES 2015 had some. Everyone knows that FreeSync has a range of 9-240Hz, which means the 40-48Hz limitation on some monitors today is directly related to the panel chosen / manufacturer of that model.

So you can't mention a single one. Super. Don't use the 30Hz figure again, then, until we see actual products.



It's not about your 980, but the fact that for the last 6 months you have continuously implied that the R9 300 series is all rebrands, with some models getting "worthless" HBM. You have continuously implied that AMD can't improve perf/mm2 and perf/watt on the 28nm node without a brand-new architecture. Essentially all your posts were of the view that the 980 will beat anything AMD has in perf/watt until the 14nm node.

Even now you keep making statements about how HBM1 offers nothing worthwhile other than lower power usage. After being so horribly wrong with your predictions in the past, it would be wise, in your shoes, to do extensive research before posting theories about future products.

When R9 390X thrashes a 980 in perf/watt and performance, we'll have to add 0/1 to your growing list of 0/XX predictions you got correct. :colbert:

The performance and performance/watt of my GTX 980 is obviously a dear cause to you, since you have to mention it in almost every single post while focusing on what happens to me and my GTX 980. It's somehow personal for you, while the rest of us don't care a single bit. Everyone but you knows something better will come in the future. But unless AMD's drivers lie, everything in desktop and mobile besides the 380 and 390 desktop series is now a rebrand. And the last two we don't know yet. Yet you have already decided the outcome, because my evil GTX 980, which you somehow have a personal issue with, must be beaten.

Are you going on record that no FreeSync monitor released in the next 2-3 years will have refresh rates below 40Hz because the tech is flawed/there is no incentive?

It's you setting a fixed timeline, not me. But again, you already proved you couldn't show any product below 40Hz. So how long will we have to wait?


So basically not a 1MHz bump in clocks, no new features. I'll write this down.

Modifying clocks or voltage doesn't make it any less of a rebrand. It's amazing that you're starting to defend this.
 
Okay, I know how biased you are against AMD, but really? Really? That's a ridiculously bold claim unless you're saying that the tech is DoA and G-Sync will kill it off prematurely...

From the looks of it, both G-Sync and FreeSync are niche products/failures.

So no, it's not bias. It's realism.

Both FreeSync and G-Sync add considerable cost. Plus both are used mainly for higher-end monitors, especially gaming-oriented ones.

And if you go below the minimum refresh rates, you are way better off with a regular 60Hz monitor.
 
Both FreeSync and G-Sync add considerable cost. Plus both are used mainly for higher-end monitors, especially gaming-oriented ones.

Maybe they both add cost, but G-Sync adds far more cost than Adaptive-Sync. I think Adaptive-Sync will trickle down to cheaper monitors once the producers introduce new lines. That is never going to happen with the G-Sync module.
 
From the looks of it, both G-Sync and FreeSync are niche products/failures.

So no, it's not bias. It's realism.

Both FreeSync and G-Sync add considerable cost. Plus both are used mainly for higher-end monitors, especially gaming-oriented ones.

And if you go below the minimum refresh rates, you are way better off with a regular 60Hz monitor.

Just wrong. Or "biased" and "misleading" might be better terms.

You are calling something a niche product when it will benefit the vast majority of users.
 
So you can't mention a single one. Super. Don't use the 30Hz figure again, then, until we see actual products.

You might want to read more carefully next time, because I never said a 30Hz monitor is currently for sale. The point is that it's already in development and a prototype was showcased at CES 2015, per the video from the show.

I don't even know why I waste time replying to your predictable replies that never have any factual substance. The name of the monitor is in the video in the link I posted. I am not going to do the work for you if you are too lazy/incompetent to check the information in the links provided. You are the most pessimistic person on this forum when it comes to any AMD technology. From day 1, not one positive thing has come out of you in 13,000 posts regarding any existing or future AMD product ever made. Oh, and you managed to accumulate those 13,000 posts since 2012; that must be a record. When you are constantly proven wrong, you just move on to the next topic and continue downplaying the next technology/generation, and never admit how you were wrong about all those previous generations. It's only a matter of time before your predictions about AMD not improving perf/watt on 28nm beyond the 290X and about a 30Hz FreeSync monitor launching are proven wrong, as always.
 
No.

GTX 580 was 15-16% faster for a 43% price premium ($500 vs. $350).
GTX 980 is 15-16% faster for a 67% price premium ($550 vs. $330).

Relative to the 570/580 generation, NV is charging a 56% higher premium (67% / 43%) for each additional 1% of performance when moving from the 2nd-tier card to the 1st-tier card (970 to 980).
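If anyone wants to double-check that arithmetic, here's a quick back-of-the-envelope sketch (the prices and the ~15-16% performance gap are just the figures quoted above, so treat the output as approximate):

```python
# Back-of-the-envelope check of the price-premium argument above.
def premium_pct(tier1_price, tier2_price):
    """Price premium of the 1st-tier card over the 2nd-tier card, in percent."""
    return (tier1_price / tier2_price - 1) * 100

perf_gain_pct = 15.5  # both flagships were ~15-16% faster than the 2nd-tier card

gtx580_premium = premium_pct(500, 350)  # ~43%
gtx980_premium = premium_pct(550, 330)  # ~67%

print(f"GTX 580 premium: {gtx580_premium:.0f}%")
print(f"GTX 980 premium: {gtx980_premium:.0f}%")
# Premium paid per 1% of extra performance, and how much it grew vs. the 570/580 era:
print(f"Premium per 1% perf (580 era): {gtx580_premium / perf_gain_pct:.2f}%")
print(f"Premium per 1% perf (980 era): {gtx980_premium / perf_gain_pct:.2f}%")
print(f"Increase: {gtx980_premium / gtx580_premium - 1:.0%}")  # ~56%
```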

Here is more.

Per Sweclockers today,

In modern games today, GTX980 only beats 780Ti by about 11% at 1440P and 12% at 4K.
In modern games back then, GTX680 beat GTX580 by 42% at 1080P and 45% at 1440P (using 7870 = 580)

Don't like that site? (image)

At 2560x1600, GTX680 is 51% faster than a 580 (NV's last gen flagship), but 980 is only 11% faster than a 780Ti (NV's last gen flagship).

Even if we only look at 680 vs. 580 and 780Ti vs. 980, the 980 is an overpriced mid-range chip that has perf/watt marketing fluff going for it. Put that perf/watt marketing aside and consider the context, and the 980 easily cements itself as the least impressive $500-550 NV GPU ever made, the least impressive generational leap at the $500-550 level from NV ever. That's why GM200 6GB and R9 390/390X are absolutely necessary to correct this stagnation and overpricing in the market. IMO, the reason the 980 sold so well is not because of how good that card really is, but rather because it was the fastest single GPU for a long time, and by default people who build new rigs or upgrade often get the fastest single GPU. Marketing- and sales-wise, the 980 is a wild success as a result of AMD not showing up on time. However, taking into account the 980's price and the time frame since the 290X/780Ti launches, in terms of generational improvement the 980 is an utter disappointment. Never in the history of AMD/NV/ATI has a next-gen card priced at $550 been this unimpressive vs. the cards preceding it.

Once we consider 4K gaming benchmarks over 780Ti/290X, 980 is an embarrassment for a next gen $550 card. If R9 300 series flops, this desktop GPU generation will go down in history as one of the worst of all time, if not the worst. Hopefully GM200 6GB delivers if R9 390 flops.

As a side-note, 960 already cemented its place in history as the worst x60 successor from NV ever.

Only 11% separates an after-market 960 from an after-market 760. At no point in NV's history has a next-gen x60 card ever been only 11% faster than its predecessor.
http://www.computerbase.de/2015-01/nvidia-geforce-gtx-960-im-test/12/
You make some interesting points here. Good read.
 
Is there any confirmation of HBM at all? Otherwise the wildcard could be GDDR5. Besides power savings, HBM doesn't really offer anything GDDR5 can't do. We have to wait for HBM2 for the breakthrough.

The 7990 was ultra-niche due to its 500W power draw. A dual Tonga using 300W would fit the bill for a card that could be much more widely used and accepted.

Are you saying it's impossible?

Since you're the only person I've seen claiming that HBM doesn't offer anything over GDDR5, the burden of proof lies on you. Your statement is absurd; HBM clearly has several significant benefits.
 
Since you're the only person I've seen claiming that HBM doesn't offer anything over GDDR5, the burden of proof lies on you. Your statement is absurd; HBM clearly has several significant benefits.

I never said it didn't offer anything over GDDR5.

But speed-wise, even with 4 stacks (4096-bit), you are on par with GDDR5 (512-bit). That's why the breakthrough will first happen with HBM2.

The main benefit of HBM1 is lower power consumption. The main disadvantage is price.
 
And when will that be?

Isn't that like arguing that 1080p was niche and a failure one year after it came out? How about IPS? Give it time, geez. High cost, limited selection, and the occasional glitches and flaws are part of the early adopter experience.

This technology is too important to gamers to fade away. G/Free as brands might not persist forever, but adaptive refresh rates are here to stay.
 
Isn't that like arguing that 1080p was niche and a failure one year after it came out? How about IPS? Give it time, geez. High cost, limited selection, and the occasional glitches and flaws are part of the early adopter experience.

This technology is too important to gamers to fade away. G/Free as brands might not persist forever, but adaptive refresh rates are here to stay.

Or, like so many other technologies, it may be replaced by something entirely different.

Until there is unity between the three vendors, especially NVIDIA and Intel due to their market share superiority, it's pretty much stillborn. And that's the problem. The technology can be the best thing since sliced bread, but without that unity it goes nowhere.
 
Is there any confirmation of HBM at all? Otherwise the wildcard could be GDDR5. Besides power savings, HBM doesn't really offer anything GDDR5 can't do. We have to wait for HBM2 for the breakthrough.

The 7990 was ultra-niche due to its 500W power draw. A dual Tonga using 300W would fit the bill for a card that could be much more widely used and accepted.

I love how you're trying to say the rumor with plenty of backing (HBM) seems unlikely, but the completely random theory with zero backing and no reasonable logic (dual Tonga) makes sense to you.

Seriously, I think this thread probably needs to be locked if we're getting to the point where people are seriously discussing a baseless theory that popped into someone's head. It won't be a dual GPU. Simple as that.
 
I love how you're trying to say the rumor with plenty of backing (HBM) seems unlikely, but the completely random theory with zero backing and no reasonable logic (dual Tonga) makes sense to you.

Seriously, I think this thread probably needs to be locked if we're getting to the point where people are seriously discussing a baseless theory that popped into someone's head. It won't be a dual GPU. Simple as that.

Well, if you discount all the advantages HBM has, it doesn't have any advantages and it doesn't make sense to pay the extra cost to implement it.

Simple!

But no, in the real world, where memory is part of a graphics card rather than something that gets added and does its own thing, HBM offers significant energy savings on a design that is pretty much TDP-constrained, so by having the memory use less energy, other things can use more energy to deliver more performance.
 
I never said it didn't offer anything over GDDR5.

But speed-wise, even with 4 stacks (4096-bit), you are on par with GDDR5 (512-bit). That's why the breakthrough will first happen with HBM2.

The main benefit of HBM1 is lower power consumption. The main disadvantage is price.

That's where you're mistaken. Time and time again I have to read your biased, half-assed posts.

HBM1 has 8x the bus width at 1/4 the frequency. That's 2x the speed of GDDR5, disregarding the latency improvements.
 
That's where you're mistaken. Time and time again I have to read your biased, half-assed posts.

HBM1 has 8x the bus width at 1/4 the frequency. That's 2x the speed of GDDR5, disregarding the latency improvements.

4096bit at 1Ghz = 512GB/sec.
512bit at 8Ghz = 512GB/sec.
512bit at 7Ghz = 448GB/sec.
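For reference, the arithmetic behind those numbers is just peak bandwidth = bus width x per-pin data rate. A minimal sketch (the 1/7/8 Gbps values are the effective per-pin rates being discussed; the 384-bit line is added for comparison with current 384-bit GDDR5 cards):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(4096, 1.0))  # HBM1, 4 stacks @ 1 Gbps:  512.0 GB/s
print(peak_bandwidth_gbs(512, 8.0))   # 512-bit GDDR5 @ 8 Gbps:   512.0 GB/s
print(peak_bandwidth_gbs(512, 7.0))   # 512-bit GDDR5 @ 7 Gbps:   448.0 GB/s
print(peak_bandwidth_gbs(384, 7.0))   # 384-bit GDDR5 @ 7 Gbps:   336.0 GB/s
```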
 
But actually the competitor uses only a 384-bit bus or lower. Compared to that, it should give a significant improvement in transfer rates.
 
4096bit at 1Ghz = 512GB/sec.
512bit at 8Ghz = 512GB/sec.
512bit at 7Ghz = 448GB/sec.

1. No GPU maker has ever made a memory controller that can run a 512-bit bus at those GDDR5 speeds. On top of that, it may be possible to make a smaller HBM1 controller than a 512-bit GDDR5 one. You ignore this.

2. HBM1 4GB saved 50W of power over GDDR5 at similar bandwidth, which means HBM 1 8GB would save a ton of power over 8GB GDDR5 over a 512-bit bus. This extra power usage headroom can be used to clock the GPU higher or make the die larger. You ignore this.

3. Having 1 generation head start on HBM1 will make it easier for AMD's engineers to implement HBM1/2 in future APU designs. Thus, incorporating HBM and learning all its intricacies earlier is more beneficial for AMD than NV because NV doesn't sell APUs. You ignore this.

4. Combining points #1 and #2, if running GDDR5 over a 512-bit bus were that easy, I am sure AMD's engineers would have waited for HBM 2.0. Since NV only uses a 384-bit bus, they didn't have to contemplate the idea of using 7-8GHz GDDR5 with a more complex 512-bit bus. I would think that an army of professional engineers at AMD is way smarter than any commenter on these boards when it comes to choosing HBM1 vs. GDDR5 above a certain level of bandwidth. You ignore this.

If you truly believe that a dual-Tonga + HBM1 R9 390X is AMD's next-gen $700 flagship, then bet on it: if you are wrong and it turns out to be a large-die single-chip GPU with 3500-4096 SPs, then you won't post on AT forums for 6 months. Let's see how much you really believe in that theory of yours.
 
Having 1 generation head start on HBM1 will make it easier for AMD's engineers to implement HBM1/2 in future APU designs. Thus, incorporating HBM and learning all its intricacies earlier is more beneficial for AMD than NV because NV doesn't sell APUs. You ignore this.

I think this is the single most important reason why AMD is experimenting with HBM on video cards. I don't expect it to make a substantial amount of difference on a discrete-GPU flagship (Titan X manages fine with GDDR5), but it is going to be a massive leap forward on APUs. As things currently stand, anything more than about 384 GCN shaders on an AMD APU is a waste, because they're bottlenecked by the slow speed of DDR3. On the other hand, once it becomes possible to stack 8GB of HBM on the APU (to share between the CPU and GPU portions), then it becomes possible to build a true "console on a chip".
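To put that DDR3 bottleneck in numbers, here's a rough sketch using the same bandwidth arithmetic as above (dual-channel DDR3-2133 is an assumed configuration for a typical high-end APU build, not something quoted in this thread):

```python
# Peak bandwidth in GB/s = (bus width in bits / 8) * data rate in GT/s (Gbps per pin)
def peak_bandwidth_gbs(bus_width_bits, data_rate):
    return bus_width_bits / 8 * data_rate

# Dual-channel DDR3-2133: 2 x 64-bit @ 2.133 GT/s, shared between CPU and GPU
ddr3 = peak_bandwidth_gbs(128, 2.133)       # ~34 GB/s
# A single HBM1 stack: 1024-bit @ 1 Gbps per pin
hbm1_stack = peak_bandwidth_gbs(1024, 1.0)  # 128 GB/s

print(f"Dual-channel DDR3-2133: {ddr3:.1f} GB/s")
print(f"One HBM1 stack:         {hbm1_stack:.1f} GB/s")
```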
 
I think this is the single most important reason why AMD is experimenting with HBM on video cards. I don't expect it to make a substantial amount of difference on a discrete-GPU flagship (Titan X manages fine with GDDR5), but it is going to be a massive leap forward on APUs. As things currently stand, anything more than about 384 GCN shaders on an AMD APU is a waste, because they're bottlenecked by the slow speed of DDR3. On the other hand, once it becomes possible to stack 8GB of HBM on the APU (to share between the CPU and GPU portions), then it becomes possible to build a true "console on a chip".

It is wasted for bandwidth-intensive workloads; compute doesn't seem very affected by adding more compute units.
 
It is wasted for bandwidth-intensive workloads; compute doesn't seem very affected by adding more compute units.

There are other advantages to HBM on APUs/GPUs though. You get more bandwidth per module, which means you don't need as many modules. If you combine that with higher density due to stacking, you are suddenly able to have a lot of VRAM at very high bandwidth in a much smaller space. Secondly, the actual RAM modules themselves are much smaller (37x smaller than DRAM modules, per SK Hynix) and are about the size of an aspirin pill.

So in essence HBM is not just about matching or exceeding GDDR5 at lower power usage. APUs do not have a 10-12 inch PCB to put 16GB of memory on.

[Image: Hynix-HBM-24-e1412091427780.jpg]


2nd generation HBM will have a major improvement in 3D stacking density.


[Image: hynix_2014_021.png]


What's remarkable is how the power consumption vs. the amount of bandwidth extracted from 1st-gen HBM is getting ignored as a major advantage. What this means in practice is that the higher the GDDR5 speed gets, the more complex the memory controller gets (128-bit -> 512-bit), and the more GDDR5 modules you add, the more HBM will pull away in bandwidth/watt efficiency. The effect is cumulative because you need way more GDDR5 chips on the PCB to achieve the same memory bandwidth AND they happen to use way more power. That's a double hit!

[Image: normal_HBMmemory-Wccftech2611-1.jpg]
 
HBM1 4GB saved 50W of power over GDDR5 at similar bandwidth, which means HBM 1 8GB would save a ton of power over 8GB GDDR5 over a 512-bit bus. This extra power usage headroom can be used to clock the GPU higher or make the die larger. You ignore this.

So you essentially claim that over half the power usage on a Titan X is GDDR5?

Hynix themselves say a 42% power saving. And that's against 8GHz GDDR5, while being a little creative.

[Image: Hynix-HBM-9.jpg]


Secondly, the actual RAM modules themselves are much smaller (37x smaller than DRAM modules, per SK Hynix) and are about the size of an aspirin pill.

It seems you are getting carried away again. Let's look at Knights Landing with HMC. The HMC modules are about the same size.

[Image: Intel-Knights-Landing-Processor_Die_1.jpg]


And that's a very big package with a huge die. The actual chips themselves aren't much different from GDDR5; it's mainly the density. Even with no packaging, comparing the bare die only, HBM is a factor of ~4x smaller than a GDDR5 module in its FBGA package.

[Image: SK-HYNIX-HBM-vs-GDDR5.jpg]


In terms of latency, I wouldn't get my hopes up either.

[Image: gtc2015-skhynix-3b-900x565.jpg]
 
There is no doubt that the Titan X is less efficient than the GTX 980.

A lot of people have suggested it is the RAM. Well, you can easily figure out roughly how much power 7GHz GDDR5 uses by doing the math.

The GTX 980 has 4GB vs. the Titan X's 12GB; the loss in performance per watt is due to an extra 8GB of RAM.
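For what it's worth, here is a rough sketch of that math; the watts-per-chip figure is an assumption (a commonly cited ballpark for 7Gbps GDDR5 devices), not a datasheet or measured number:

```python
# Rough estimate of GDDR5 memory power: 4GB card vs. 12GB card.
# ASSUMPTION: ~1.5-2W per 4Gbit (0.5GB) GDDR5 chip at 7 Gbps; 1.75W used as a midpoint.
WATTS_PER_CHIP = 1.75
CHIP_CAPACITY_GB = 0.5

def gddr5_power_watts(total_gb):
    return (total_gb / CHIP_CAPACITY_GB) * WATTS_PER_CHIP

p_4gb = gddr5_power_watts(4)    # 8 chips  -> ~14W
p_12gb = gddr5_power_watts(12)  # 24 chips -> ~42W
print(f"4GB: {p_4gb:.0f}W, 12GB: {p_12gb:.0f}W, delta: {p_12gb - p_4gb:.0f}W")
```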
 