Really, guy? If RS mentions the 980 (or any card, really) it is in the context of perf/$; at least his post history shows this. So it isn't some kind of "internet" grudge...
Let us know how we can get 512-640GB/sec of memory bandwidth out of GDDR5 this generation. No GPU maker in the world has so far made a 512-bit memory controller work with 8GHz GDDR5. 😉
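(Quick math: 512-bit x 8Gbps / 8 bits per byte = 512GB/sec, so hitting 640GB/sec would take 10Gbps GDDR5 or a 640-bit bus, and nobody has shipped either.)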
CES 2015 had some. Everyone knows the FreeSync spec supports a range of 9-240Hz, which means the 40-48Hz floor on some monitors today comes down to the panel chosen by that model's manufacturer, not the technology itself.
It's not about your 980, but the fact that for the last 6 months you have continuously implied that the R9 300 series are all re-brands, with some models getting "worthless" HBM. You have continuously implied that AMD can't improve perf/mm2 and perf/watt on the 28nm node without a brand new architecture. Essentially all your posts were of the view that the 980 would beat anything AMD has in perf/watt until the 14nm node.
Even now you keep making statements about how HBM1 offers nothing worthwhile other than lower power usage. After being so horribly wrong with your predictions in the past, you would be wise to do extensive research before posting theories on future products.
When the R9 390X thrashes a 980 in perf/watt and performance, we'll have to add another 0/1 to your growing list of 0/XX predictions you got correct.
Are you going on record that no FreeSync monitor released in the next 2-3 years will have refresh rates below 40Hz because the tech is flawed / there is no incentive?
So basically not a 1MHz bump in clocks and no new features. I'll write this down.
Okay, I know how biased you are against AMD, but really? Really? That's a ridiculously bold claim unless you're saying that the tech is DoA and G-Sync will kill it off prematurely...
Both FreeSync and G-Sync add considerable cost. Plus, both are used mainly for higher-end monitors, especially gaming-oriented ones.
From the looks of it, both G-Sync and FreeSync are niche products/failures.
So no, it's not bias. It's realism.
And there's the issue of dropping below the minimum refresh rate. You are way better off with a regular 60Hz monitor.
Just wrong. Or "biased" and "misleading" might be better terms.
You are calling something a niche product when it will benefit a vast majority of users.
And when will that be?
So you can't mention a single one. Super. Don't use the 30Hz argument again then until we see actual products.
You make some interesting points here. Good read.

No.
GTX580 was 15-16% faster than the GTX570 for a 43% price premium ($500 vs. $350).
GTX980 is 15-16% faster than the GTX970 for a 67% price premium ($550 vs. $330).
Since the performance gap is the same, NV is now charging 56% more (67%/43%) per additional 1% of performance between its 2nd-tier and 1st-tier cards when moving from a 970 to a 980 than it did in the 570/580 generation.
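For anyone who wants to double-check, the arithmetic is simple; a quick sketch in Python using the launch prices above:

```python
# Price premium of the 1st-tier card over its 2nd-tier sibling,
# using the launch prices quoted above (perf gap is ~15-16% in both cases).
premium_580 = 500 / 350 - 1   # GTX 580 over GTX 570: ~43%
premium_980 = 550 / 330 - 1   # GTX 980 over GTX 970: ~67%
print(f"570 -> 580 premium: {premium_580:.0%}")                 # 43%
print(f"970 -> 980 premium: {premium_980:.0%}")                 # 67%
print(f"Premium inflation:  {premium_980 / premium_580:.2f}x")  # ~1.56x, i.e. 56% more
```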
Here is more.
Per Sweclockers today,
In modern games today, GTX980 only beats 780Ti by about 11% at 1440P and 12% at 4K.
In modern games back then, GTX680 beat GTX580 by 42% at 1080P and 45% at 1440P (using the 7870 as a stand-in for the 580).
Don't like that site? (image)
At 2560x1600, GTX680 is 51% faster than a 580 (NV's last gen flagship), but 980 is only 11% faster than a 780Ti (NV's last gen flagship).
Even if we only look at 680 vs. 580 and 980 vs. 780Ti, the 980 is an overpriced mid-range chip with perf/watt marketing fluff going for it. Put that perf/watt marketing aside and consider the context, and the 980 easily cements itself as the least impressive $500-550 GPU NV has ever made, the weakest generational leap at that price level in the company's history.

That's why GM200 6GB and R9 390/390X are absolutely necessary to correct this stagnation and overpricing. IMO, the 980 sold so well not because of how good the card really is, but because it was the fastest single GPU for a long time, and by default people who build new rigs or upgrade often just get the fastest single GPU. Marketing- and sales-wise, the 980 is a wild success as a result of AMD not showing up on time. However, once you take into account the 980's price and the time elapsed since the 290X/780Ti launches, as a generational improvement the 980 is an utter disappointment. Never in the history of AMD/NV/ATI has a next-gen card priced at $550 been this unimpressive vs. the cards preceding it.
Once we consider 4K gaming benchmarks over the 780Ti/290X, the 980 is an embarrassment for a next-gen $550 card. If the R9 300 series flops, this desktop GPU generation will go down in history as one of the worst of all time, if not the worst. And if the R9 390 flops, hopefully GM200 6GB delivers.
As a side note, the 960 has already cemented its place in history as the worst x60 successor NV has ever released.
Only 11% separates an after-market 960 from an after-market 760. At no point in NV's history has a next-gen x60 card been only 11% faster than its predecessor.
http://www.computerbase.de/2015-01/nvidia-geforce-gtx-960-im-test/12/
Is there any confirmation on HBM at all? Otherwise the wild card could be GDDR5. Besides power savings, HBM doesn't really offer anything GDDR5 can't do. We have to wait until HBM2 for the breakthrough.
The 7990 was ultra-niche due to its 500W power draw. A dual Tonga using 300W would fit the bill for a card that could be much more widely used and accepted.
Are you saying it's impossible?
As the only person I've seen claiming that HBM doesn't offer anything over GDDR5, the burden of proof lies on you. Your statement is absurd; HBM clearly has several significant benefits.
And when will that be?
Isn't that like arguing that 1080p was niche and a failure one year after it came out? How about IPS? Give it time, geez. High cost, limited selection, and the occasional glitches and flaws are part of the early adopter experience.
This technology is too important to gamers to fade away. G-Sync/FreeSync as brands might not persist forever, but adaptive refresh rates are here to stay.
I love how you're trying to say the rumor with plenty of backing (HBM) seems unlikely, but the completely random theory with zero backing and no reasonable logic (dual Tonga) makes sense to you.
Seriously, I think this thread probably needs to be locked if we're getting to the point where people are earnestly discussing a baseless theory that popped into someone's head. It won't be a dual GPU. Simple as that.
I never said it didn't offer anything over GDDR5.
But speed-wise, even with 4 stacks (4096-bit) you are on par with 512-bit GDDR5. That's why the breakthrough will first happen with HBM2.
The main benefit of HBM1 is lower power consumption. The main disadvantage is price.
That's where you're mistaken. Time and time again I have to read your biased half-assed posts.
HBM1 gives you 8x the bus width at a fraction of the per-pin data rate, which works out to roughly 2x the bandwidth of typical shipping GDDR5 setups. And that's disregarding the latency improvements.
HBM1: 4096-bit at 1GHz = 512GB/sec.
GDDR5: 512-bit at 8GHz = 512GB/sec (which no one has shipped).
GDDR5: 512-bit at 7GHz = 448GB/sec.
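Those figures are just bus width times per-pin data rate; a minimal sketch, using the configurations quoted above:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(4096, 1))  # HBM1, 4 stacks: 512.0 GB/s
print(bandwidth_gb_s(512, 8))   # hypothetical 8GHz GDDR5, 512-bit: 512.0 GB/s
print(bandwidth_gb_s(512, 7))   # real-world 7GHz GDDR5, 512-bit: 448.0 GB/s
```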
Having a one-generation head start on HBM1 will make it easier for AMD's engineers to implement HBM1/2 in future APU designs. Thus, incorporating HBM and learning all its intricacies early is more beneficial for AMD than for NV, because NV doesn't sell APUs. You ignore this.
I think this is the single most important reason why AMD is experimenting with HBM on video cards. I don't expect it to make a substantial amount of difference on a discrete-GPU flagship (Titan X manages fine with GDDR5), but it is going to be a massive leap forward on APUs. As things currently stand, anything more than about 384 GCN shaders on an AMD APU is a waste, because they're bottlenecked by the slow speed of DDR3. On the other hand, once it becomes possible to stack 8GB of HBM on the APU (to share between the CPU and GPU portions), then it becomes possible to build a true "console on a chip".
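To put rough numbers on that DDR3 bottleneck, here's a back-of-the-envelope sketch; the dual-channel DDR3-2133 configuration is an assumption for a typical high-end APU build:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte
    return bus_width_bits * data_rate_gbps / 8

ddr3 = bandwidth_gb_s(2 * 64, 2.133)  # dual-channel DDR3-2133, shared by CPU and GPU
hbm = bandwidth_gb_s(1024, 1)         # a single HBM1 stack
print(f"DDR3-2133 dual channel: {ddr3:.1f} GB/s")  # ~34.1 GB/s
print(f"One HBM1 stack:         {hbm:.1f} GB/s")   # 128.0 GB/s, roughly 3.8x more
```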
They're only wasted for bandwidth-intensive workloads; pure compute doesn't seem very affected and still scales when you add more compute units.
HBM1 4GB saved 50W of power over GDDR5 at similar bandwidth, which means 8GB of HBM1 would save a ton of power over 8GB of GDDR5 on a 512-bit bus. This extra power headroom can be used to clock the GPU higher or make the die larger. You ignore this.
Secondly, the actual RAM itself is much smaller: per SK Hynix, an HBM stack takes up 37x less space than equivalent DRAM modules and is about the size of an aspirin pill.