[Sweclockers] Radeon 380X coming late spring, almost 50% improvement over 290X


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Should water cooling be required? Are you saying that, if you want to buy an enthusiast GPU, you have to accept watercooling? It does not work in every computer case, and not everyone is ready to accept it. Yes, quite a few gamers WILL accept it, and it's a neat solution... but it shouldn't be a requirement to enter the enthusiast market. I find that to simply be taking things too far.

1. If a Hybrid WC is actually superior in noise levels and temperatures (which ensures better overclocking) and exhausts heat out of the case, then I have no problem with it being a "requirement" for a reference solution. It sure as hell beats the loud reference 7970/290X coolers or the thermal throttling mess that is the Titan Z.

[Image: Titanz.png]


2. There is no indication that AIBs won't make air-cooled solutions. It's possible we will see triple-slot MSI Lightning, Asus Matrix, etc. versions.

If their hardware is not mature enough to reach that performance level without requiring a custom AIO solution to simply stay within reasonable temp levels, they need to slow down until they can shrink it or reach higher efficiency, or they seriously risk letting Nvidia gain further marketshare.

You seem to think the AIO CLC is actually a negative, but for many it means getting a far superior cooling solution for not much extra cost. The whole point of flagship cards is to push performance to the absolute limit. In 2015 it will be a 300W R9 380X, and in 2016/early 2017 a $350 180W card will be as fast or faster. Anyone who cares about electricity costs and power usage can get a slower 980/370X, or wait for Pascal, etc. What you suggest is for AMD to purposely gimp the 380X to 225-250W because 300W is too much for you.

GM200 and the 380X target the high-end market, where people should have a case and PSU to support such products. I certainly don't remember the AT forum being so vocal about the 780Ti, a card that peaked at 269W of power in reference form, and 286W on, say, the awesome EVGA Classified:

[Image: power_peak.gif]


Also, don't compare AMD's TDP to NV's. They are not rated the same. NV's TDP is more or less underrated marketing BS. The 780Ti and 480 easily exceeded their TDPs.

970 is rated at 145W TDP but reaches 190W+ at load. :sneaky:

[Image: power_peak.gif]


Now if the R9 380X is 30% faster than a 980, and a 980 is about 16% faster than a 970, we get the 380X to be ~51% faster than a 970.

192W x 1.51 = 290W. That sounds reasonable to me, especially since, in the context of overall system power, the 380X should be way faster than a 970 while overall system power usage goes up by less than the performance increase.
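For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch in Python. The 192W figure is the 970's measured peak load from the chart above; the 30%/16% uplifts are the rumored/estimated numbers used in this post, not confirmed specs.

Code:
# Rough sketch of the scaling argument above; inputs are the figures quoted in this post.
gtx970_peak_watts = 192          # measured peak gaming power for a GTX 970 (per the chart above)
r380x_vs_980 = 1.30              # rumored: 380X ~30% faster than a 980
gtx980_vs_970 = 1.16             # ~16% gap between a 980 and a 970

r380x_vs_970 = r380x_vs_980 * gtx980_vs_970
print(f"380X vs 970: {r380x_vs_970:.2f}x")        # ~1.51x

# If perf/watt stayed flat versus the 970, the implied power draw would be:
implied_watts = gtx970_peak_watts * r380x_vs_970
print(f"Implied power at 970-level perf/watt: {implied_watts:.0f} W")   # ~290 W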

But if they only went up 7W but need AIO now, that tells you that the base was too high in the first place, and it seems the base measurements were based on a 95ºC reference if I have found the right sources, which is frankly ridiculous beyond argument IMO.

I am sure AMD could have made an MSI Lightning 290X-style cooler and gotten similar 75C load temps, but still chose a Hybrid WC for a few reasons:

1) It might have cost less to source a 120mm unit from Asetek;
2) You still exhaust the heat out of the case;
3) This solution is better for dual or even triple CF. How would you fit 3x triple-slot MSI Lightnings in one case?

The reason the R9 290X runs at 94-95C is that the reference cooler is inefficient. You already know that after-market R9 290X cards like the MSI Lightning or Sapphire Tri-X / Vapor-X run at 70-75C.

I WANT AMD to make a damn good card, and this is in the right direction if current leaks are accurate. However, if the solution still requires an AIO cooler, we're treading in the wrong direction. No single-GPU card should ever REQUIRE an AIO solution.

I am sure the same was said by hardcore Porsche purists before the 911 abandoned air cooling for water-cooled engines. Now look at where the 991-generation 911 is today compared to 911s from 2-3 generations ago in terms of sports car performance.

Look at how the AIO CLC market for CPUs took off, despite the Noctua NH-D15 and Phanteks PH-TC14PE beating a lot of the popular AIO CLC solutions in noise levels vs. performance. The clear trend over the last 5 years is that CLC on CPUs has taken over the high-end heatsink market for the most part. Why can't we have 300W CLC flagship GPUs, given that CLC allows flagship cards to be built far above the usual 250W TDP mark? I wouldn't even mind if they made 350-400W flagship cards with CLC. Give the market more choices/options. :thumbsup:

Dual-GPU? Sure, why not, that can be the cost of two GPUs on a single PCB sometimes. But I can't think of a reason why ANY flagship *standard market* computer part should require *extreme* cooling measures. Make no mistake: no water cooling system, of any variety, is anything but *extreme* in the consumer market.

I disagree. Kraken G10 GPU bracket and AIO CLC are not extreme cases in the DIY market. Full custom loop, LN2, vapor phase change cooling, those are your top 3-5% of the market.

Frankly, I can't even work with them in my current case, not without being forced to go with an AIO cooler on my CPU too. This is especially true if I want to go SLI/Crossfire (which may be required for multi-monitor or 4K at ultra settings for awhile).

Anyone who is considering $1000-1300 of dual flagship GPUs can surely buy a new $200 case that will last 5+ years. There are always other options: waiting for 14/16nm Pascal, getting a 250W air-cooled GPU, waiting for AIB after-market solutions, or getting 980 SLI/370X CF instead.

You also need to take into account that if the R9 380X and GM200 are really 35% or so faster than a 980 at stock, they won't be too far off 970 SLI. Sure, they'll probably be 15-20% slower, but a lot of people will take a single card that's 80-85% of 970 SLI performance to avoid dealing with CF/SLI profiles.
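Rough sketch of where that 80-85% ballpark comes from. The 35% and 16% uplifts are the rumored figures used above; the ~80% SLI scaling is my own assumption for well-supported games.

Code:
# Quick sanity check on the "80-85% of 970 SLI" claim; SLI scaling is an assumed figure.
gtx980_vs_970 = 1.16             # 980 ~16% faster than a 970
flagship_vs_980 = 1.35           # rumored: GM200 / 380X ~35% faster than a 980
sli_scaling = 0.80               # assumed average scaling for a second 970 in SLI

flagship_vs_970 = flagship_vs_980 * gtx980_vs_970    # ~1.57x a single 970
sli_970_vs_970 = 1 + sli_scaling                     # ~1.80x a single 970

print(f"Single flagship as a share of 970 SLI: {flagship_vs_970 / sli_970_vs_970:.0%}")   # ~87%

With 90% SLI scaling the same math gives roughly 82%, so the 80-85% range above is in the right ballpark.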

I like the idea, but not the requirement. Get the AIB partners to release multiple solutions, a reference AIO cooler and whatever cooler they want, or, heck, two different "reference" coolers, leave it to the consumer to decide which they want. Do it like EVGA: they have some of their cards available in both ACX and the blower style.

I am not sure why you ruled out the possibility of a Vapor-X, MSI Lightning, Asus Matrix, etc. considering cards like 780Ti and 290X had them.

I did. That's where the 70º+ figure came from. And yes, if that is an accurate measurement, the coolers as they are should be able to handle that.

But those are not good temps for those cards to run at, plain and simple. They draw more power at that temp, which means they produce more waste heat.

I don't know better than an AMD, Intel or NV engineer. My laptop's 3635QM has been pushed to 99-100% load nearly 24/7 in distributed computing since February 2013, with max load temps reaching 93-94C and average load temperatures of 87-89C. This chip is rated at 105C by Intel and I expect it not to fail up to that level. Whether this chip runs at 15C or 88C makes no difference to me unless the heat actually impacts the laptop's keyboard, which on mine it doesn't to any degree that matters.

Maximum GPU temperatures
GTX280 = 105C
GTX580 = 97C
GTX680 = 98C
780Ti = 95C
GTX980 = 98C

The extra power usage at hot temperatures is probably a small factor overall, much less important than the impact on the Boost clocks. However, if the GPU or CPU is rated at 95-105C, I have no problem whatsoever running it 24/7 @ 100% load for 2-3 years at 85-90C. I've never had any GPU fail on me due to high temps and I've been running distributed computing for years and years on overclocked and overvolted NV/AMD cards. If the components are well made, they will handle 85-90C loads and not fail. VRMs can do even more.

Let's say a GPU or CPU was made of exotic materials that hypothetically allowed it to run at 200C and function well. I bet most PC gamers would think that running such an ASIC at 125-150C is ludicrous, but in reality that's only psychological. If, from an engineering point of view, a chip is rated to 95C as a perfectly acceptable operating temperature, it's fine. Tonga chips in the iMac Retina often hit 100-103C.

In fact, the R9 M295X doesn't start thermal throttling in the iMac Retina until about 106C.
http://forums.macrumors.com/showpost.php?p=20590035&postcount=554

This idea on our forum among experienced enthusiasts that it's somehow bad to run GPUs at 80C or 90C is just a cautious opinion, far detached from the engineering point of view. High temperatures do matter if the Boost clock is impacted, but if some other chip's Boost isn't impacted until 100C, it doesn't matter.

Also, you made a point about how blowers are often preferred, but while that might be true for mini-ITX, it's not true for mid-size to large cases. Even 980 SLI reference blowers start throttling without a custom fan curve, which raises their noise levels.

"We found that with the default settings on GeForce GTX 980 SLI the lowest clock rate it hit while in-game was 1126MHz. That clock speed is actually below the boost clock of 1216MHz for GTX 980. This is the first time we've seen the real-time in-game clock speed clock throttle below the boost clock in SLI in games. It seems GTX 980 SLI is clock throttling in SLI on reference video cards."
~ HardOCP

I know many on AT forums to this day won't admit the inferiority of blowers for high-end gaming systems, but as far as single flagship GPUs go in scientific benchmarks, blowers get destroyed in noise levels and performance by an after-market cooler like the Windforce 3X on a 250W TDP flagship card:

"In the automatic regulation mode, when the fans accelerated steadily from a silent 1000 RPM to a comfortable 2040 RPM, the peak GPU temperature was 78°C. It is about 20°C better than with the reference cooler and much quieter, too! That’s just an excellent performance for a cooler of the world’s fastest graphics card." ~ Xbitlabs

If a 120mm rad solves temperatures, noise levels and exhausting hot air out of the case all at once, I think it should be the future of reference cooling for flagship 250-300W cards. As I said, for those who want mini-ITX/micro-ATX systems, there will always be mid-range 160-180W cards.

------

To reiterate on GM200 vs. 380X: I think NV will have the edge at 1080p (lower CPU overhead/better CPU multi-threading), cases where 4GB of VRAM is exceeded should be investigated at 4K in SLI/CF, and overclocking headroom will possibly give NV a big edge on cards like the Classified. If GM200 operates at 250W and can overclock 15-20% on stock voltage or with a minor voltage bump, I suspect it will be much harder for the 380X to compete if it's already a 300W chip under water. We know the 295X2 wasn't a stellar overclocker, as AMD pushed it near the max. That's where Maxwell's efficiency could become the trump card, as we've seen the 750Ti and the 960/970/980 are all great overclockers.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
What was the performance increase from 300mm2 GK104 to 550mm2 GK110? 40% for 80% more die area?

GM204 is 400mm2. Obviously they will not increase the die size by 80% (to 720mm2). A 50% increase would put GM200 into 600mm2 territory. Keeping the same scaling (2% die area increase for 1% performance increase), we arrive at 25% more performance than the 980.
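A quick sketch of that die-area scaling estimate, using the approximate die sizes and the ~40% Kepler gain quoted above (not exact official figures):

Code:
# Sketch of the die-area scaling estimate; die sizes and gains are the rough figures from this post.
gk104_mm2, gk110_mm2 = 300.0, 550.0
kepler_perf_gain = 0.40                       # GK110 ~40% faster than GK104, per the post

area_gain = gk110_mm2 / gk104_mm2 - 1         # ~0.83, i.e. ~80% more die area
area_per_perf = area_gain / kepler_perf_gain  # ~2% die area per 1% performance

gm204_mm2, gm200_mm2 = 400.0, 600.0           # GM204 and the rumored ~600mm2 GM200
gm200_area_gain = gm200_mm2 / gm204_mm2 - 1   # 50% more die area
est_perf_gain = gm200_area_gain / area_per_perf
print(f"Estimated GM200 gain over a GTX 980: {est_perf_gain:.0%}")   # ~24%, close to the 25% above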

The 380X is a whole new GPU, with a new memory system and a new GCN core. There is nothing to compare it to. Performance could be all over the place, from something like 15% to something crazy like a 50+% increase.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The 780Ti is 55% faster at 1440p, and 62% at 4K. I have a feeling 1080p will slowly become an irrelevant resolution for testing cards like GM200/380X, unless one only tests the most demanding games (or we get far more demanding 2015-2016 games). Otherwise, a lot of CPU-limited games narrow the delta between a flagship and a mid-range card.
http://www.techpowerup.com/mobile/reviews/MSI/GTX_960_Gaming/29.html

Maxwell has a 35-40% IPC efficiency increase: a 128 CUDA core cluster delivers 90% of the performance of a 192 CUDA core Kepler cluster. That means you don't need to scale GM200 as much as GK110 to keep getting linear scaling. Also, GK110 scaled well and I don't see GM200 changing that.

600mm2 is 50% larger than GM204, so 3072 CUDA cores, 96 ROPs, and 50% more TMUs. I bet that if overclocked to 980 speeds, it will be 50%+ faster. Whether or not NV will launch GM200 at 1217MHz clocks is another matter, but for overclockers this is less relevant as long as the chip overclocks to 1.4-1.5GHz like the 980.
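For reference, a small sketch of the per-core math behind that 35% figure; the 2048-core count is the known GM204 (GTX 980) figure, and 3072 is the rumored GM200 count.

Code:
# Sketch of the Maxwell per-core figures quoted above.
kepler_smx_cores = 192
maxwell_smm_cores = 128
smm_vs_smx = 0.90                 # a Maxwell cluster delivers ~90% of a Kepler cluster's throughput, per the post

per_core_gain = smm_vs_smx * kepler_smx_cores / maxwell_smm_cores
print(f"Maxwell per-core throughput vs Kepler: {per_core_gain:.2f}x")   # ~1.35x, i.e. the 35% figure

gm204_cores, gm200_cores = 2048, 3072    # GTX 980 core count and the rumored GM200 count
print(f"GM200 shader resources vs GM204: {gm200_cores / gm204_cores:.2f}x")   # 1.50x if scaling were linear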
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
In recent games above 1080p the 2GB 770 suffers a lot. Comparing it to the 3GB 780Ti doesn't tell the whole story.
That is why the initial 1080p 780Ti review is a good indication of the performance delta.
[Image: perfrel_1920.gif]
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Initial reviews often have 3 "unintentional" flaws:

1) Newer architectures often gain 5-10% with driver updates. This is not reflected in launch reviews. For that reason, using post-launch reviews once they become available is prudent.

2) They capture a particular snapshot of older, 2-3-year-old games (game engines). This point is critical because it often doesn't reveal the true potential of a next-gen flagship card at 1080p due to major CPU bottlenecks. If the GPU isn't pushed hard enough by those games, it can't show its true potential.

If you look at TPU's 1080p launch reviews of the 5870 vs. 285, 6970/480 vs. 5870, 7970/7970GHz vs. 6970, or 580 vs. 680/7970, it becomes obvious that those reviews couldn't capture the true potential the faster cards revealed in due time.

Since we can't test 2016/2017 games in 2015, one way to 'simulate' next-gen GPU loads is to look at GPU-limited benchmarks (i.e., 1440p-4K, multi-monitor). If we focus on 1080p for cards like GM200/380X/295X2, we will arrive at incorrect conclusions about how fast those cards really are against the 290X/980 in demanding titles.

3) Related to point #2, older games don't reveal bottlenecks in older architectures, such as 5870's tessellation weakness, or 680/770's memory bandwidth bottlenecks. Therefore, if you only look at launch reviews, you aren't seeing the big picture of how older cards manage in later games.

Launch reviews matter at launch since that's all we have. However, 6 months to 1 year out, we should use latest reviews with the latest demanding games and up-to-date drivers.

Using this methodology, today we know the 780Ti is far more than 40% faster than a 680. While GM200/380X should be measurably faster than a 980 at 1080p, their true potential in 2013-2015 games will show at 1440p and above. Unfortunately, way too many PC games made in the last 2-3 years will be CPU-limited at 1080p with such cards.

I can't wait until 4K becomes more affordable and hits the mainstream, and the 1080p standard is abandoned once and for all. With 55" 4K LEDs already dropping to $1000 USD, once $300-400 GPUs get fast enough to handle 4K, I have a feeling the huge pent-up demand from 1080p PC monitor users will result in explosive 4K adoption:
http://www.bhphotovideo.com/bnh/cont...G&Q=&A=details

If I had to buy today, it would be a 27" QNiX 1440p, or more likely 34" 3440x1440 21:9 or 4K. I would not get a 1080p display in 2015. At this point I view 1080p as a budget gaming resolution.

With 34" 3440x1440 dropping to $750, it's remarkable how much value can be had now in the monitor space after a stale decade of $1000-1300 30" 2560x1600 displays:
http://slickdeals.net/f/7638120-lg-...onitor-3440x1440-ips-panel-750-shipped-newegg
 

Haserath

Senior member
Sep 12, 2010
793
1
81
TDP is rated at the average power draw for the card, not the peak.

I think most 970s draw more than, or around the same as, the 980 though.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
TDP is rated at the average power draw for the card, not the peak

First of all, most PC gamers use TDP incorrectly to mean power usage. TDP never stood for the absolute maximum power consumption of an ASIC.

There are plenty of 250W TDP GPUs, such as the 7950/R9 280X/7970/7970GHz/780/780Ti, that all use a different amount of peak power at load. TDP just helps us determine the class of the GPU, not its actual real-world power usage.

"The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by the CPU that the cooling system in a computer is required to dissipate in typical operation. Rather than specifying CPU's real power dissipation, TDP serves as the nominal value for designing CPU cooling systems.[1] The TDP is typically not the largest amount of heat the CPU could ever generate (peak power), such as by running a power virus, but rather the maximum amount of heat that it would generate when running "real applications.""
http://en.wikipedia.org/wiki/Thermal_design_power

^ Notice how TDP refers to the cooling system, not the maximum amount of power the GPU actually can use.

Second of all, TDP is defined differently internally by NV, AMD and Intel, making comparisons of NV vs. AMD vs. Intel TDPs 100% meaningless.

For all intents and purposes, TDP was always a worthless metric for real-world power usage. At best, it's guidance for the cooling system; at worst, it's grossly misleading.

Saying that TDP is related to average power usage also tells us little, since if the GPU peaks at higher loads and you only accounted for average power usage, your PSU might not handle the load.

I long ago stopped caring about the TDP of any PC component. The only thing that matters for the DIY space is real-world power usage in real applications. And what makes TDP even more worthless for those of us who overclock is that it tells us absolutely nothing about how much power we'll draw or what cooling system we need to cope with the overclocks.

[Image: power-3.png]


Based on the TDP of a 4790K at 88W and a 5820K/5960X at 140W, you'd think the latter would use about 60% more power, but you'd be way off. At stock, the difference is nowhere near 50-60%, while in overclocked states it's the opposite, with the power difference at 90%!

As I said, TDP has been a worthless metric for a long time, but just like the blower > after-market cooler myth, the TDP myth persists. Now that we have the tools to accurately measure the real-world power usage of GPUs, TDP is irrelevant for DIY enthusiasts.

--

NV's marketing directly exploits these points:

1) Most gamers do not know that TDP does not equal power consumption.
2) Most gamers do not know that NV and AMD and Intel measure TDP differently.
3) NV wants reviewers and gamers to focus on perf/watt on a per-card basis, rather than on a system-wide basis. But since we aren't engineers, we can't use a GPU on its own; we need RAM, a motherboard, a CPU, etc. While comparing GPU architectures on a perf/watt basis is fun, as a gamer it doesn't accurately tell me how efficient my overall PC rig is versus another one.

I'll give you an example:

A 165W TDP 980 vs. a 300W TDP 380X: if the 380X is only 35% faster, it sounds absolutely horrible.

[Image: Power_03.png]


What if in a game the i7 4770K system with a 380X was 35% faster and the total system power usage was 290W x 1.35 = 392W?

Notice that in this case the i7 4770K+380X system's overall power efficiency is just as good as the i7 4770K+980 system's.
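Here's that system-level perf/watt comparison written out; the 290W system figure and the 35% uplift are just the illustrative numbers from this example, not measurements.

Code:
# Sketch of the whole-system perf/watt comparison above (example figures, not measurements).
system_980_watts = 290.0          # i7-4770K + GTX 980 system under gaming load, per the example
perf_gain = 1.35                  # hypothetical: the 380X system is 35% faster in the same game

system_380x_watts = system_980_watts * perf_gain      # ~391.5 W in this scenario
perf_per_watt_980 = 1.0 / system_980_watts
perf_per_watt_380x = perf_gain / system_380x_watts

print(f"380X system power: {system_380x_watts:.1f} W")
print(f"Perf/W ratio, 380X system vs 980 system: {perf_per_watt_380x / perf_per_watt_980:.2f}")   # 1.00, i.e. identical system efficiency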

NV doesn't want gamers to compare overall gaming-rig efficiency on a perf/watt basis because it destroys their marketing completely. Ask yourself this: would someone compare the efficiency of the GPU in the PS4 to the GPU in the XB1? No, they wouldn't. They would compare the power usage of the entire PS4 console to the entire Xbox One.

Therefore, even if a flagship card uses 275-300W of power, it absolutely does not automatically mean that the overall gaming rig is inefficient. Most gamers' tendency to ignore perf/watt from a total-system point of view does not give us a complete picture, because we cannot use a GPU on its own when gaming. Similarly, you cannot compare the TDP of one Intel CPU to another Intel CPU and extract overall system efficiency.

I wish gamers paid attention to the context of power usage and TDP, but they simply haven't for as long as I've been on this forum. Heck, the blower > after-market open-air cooler myth still won't die 12 years after I joined.

I think most 970s draw more than, or around the same as, the 980 though.

The 980 has power load balancing, which is missing on the 970. The point is that most gamers see a 145W TDP 970 and a 165W TDP 980 against a 300W TDP 380X and freak out at AMD's horribly inefficient and power-hungry card. The reality is completely different. Not only is NV's TDP completely inaccurate in portraying the maximum power usage of their cards in demanding games, but TDP itself fails to take into account the overall PC rig's efficiency in the context of perf/watt of total system power in games. Even if you do measure the real-world power usage of GPU 1 against GPU 2, without comparing total system power usage you still aren't seeing the actual efficiency of your system.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
These TDP arguments never go anywhere.:mad:

I was just pointing out: don't use peak load power as some sort of metric. Even AMD goes over their rated TDP for peak power; it was probably only that high for millisecond(s), which PSUs are supposed to be built to handle, and you should really have enough headroom for them even without that.

The companies also don't re-rate every product's TDP. Intel keeps the same TDP for their E chips, then down through the i7/i5 (the 4790K was special), then the S, i3 and T parts. The actual chip rated for that TDP will be the highest one, like the 5960X.

5960X vs 4790K
TDP: 140W - 88W = 52W
System power: 205W - 150W = 55W
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
wall of facts

You know, I had to take a step back for a moment. Last night I was actually thinking about this before I fell asleep, and remembered some of what you would end up posting in that quote: that flagships have historically been high-heat, large-die, power-guzzling beasts. The heat has truly only been addressed through possibly some architectural changes, but mostly better cooling solutions. Nvidia's phase-change blower cooler is apparently phenomenal for 250W cards.

So, I must humble myself and admit I've been wrong. I got closed-minded and completely forgot that the current "flagship" Maxwell cards are anything but, so the comparison of progress is not realistic. Heck, now that I remember, I had Nvidia GPUs that routinely stayed in the 80s and even into the low 90s if I didn't aggressively push the fan speed (a GTX 285, I think).
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
The 780Ti is 55% faster at 1440p, and 62% at 4K. I have a feeling 1080p will slowly become an irrelevant resolution for testing cards like GM200/380X, unless one only tests the most demanding games (or we get far more demanding 2015-2016 games). Otherwise, a lot of CPU-limited games narrow the delta between a flagship and a mid-range card.
http://www.techpowerup.com/mobile/reviews/MSI/GTX_960_Gaming/29.html

Maxwell has a 35-40% IPC efficiency increase: a 128 CUDA core cluster delivers 90% of the performance of a 192 CUDA core Kepler cluster. That means you don't need to scale GM200 as much as GK110 to keep getting linear scaling. Also, GK110 scaled well and I don't see GM200 changing that.

600mm2 is 50% larger than GM204, so 3072 CUDA cores, 96 ROPs, and 50% more TMUs. I bet that if overclocked to 980 speeds, it will be 50%+ faster. Whether or not NV will launch GM200 at 1217MHz clocks is another matter, but for overclockers this is less relevant as long as the chip overclocks to 1.4-1.5GHz like the 980.

Sorry, but nothing scales perfectly at 1:1. The TPU charts show the 780 Ti is 49% faster than the GTX 770. That's 49% scaling for 87.5% more shaders, 50% more ROPs and 50% more bandwidth. The GTX 980 is not 2x the performance of the GTX 960 even though it has twice the resources in every aspect (shaders, ROPs, bandwidth, GPCs), and that's despite the GTX 960 getting hammered in games like AC Unity and Middle-earth: Shadow of Mordor, which hit the 2GB VRAM limit and tank the GTX 960's performance. With 4GB, the GTX 960 will close that gap even further. Btw, the GTX 980 also has higher boost speeds, so in fact the scaling is never even close to 100%.

You can expect a best case of 65-75%. Taking a scaling factor of 70%, GM200 gains 0.7 * 0.5 = 0.35 over GM204, i.e. a 35% perf improvement at the same clocks. You can bet that GM200 will run at lower clocks. So I would not be surprised if GM200 is 30-35% faster than GM204; that's what the leaks also showed and I think it's realistic. That would put GM200 and AMD's next flagship on par, with the AMD card winning 4K easily.
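A short sketch of that scaling estimate, using the 780 Ti vs 770 data point above and the assumed 70% scaling factor:

Code:
# Sketch of the scaling estimate above, using the quoted Kepler data point.
observed_gain = 0.49              # 780 Ti vs GTX 770, per the TPU charts cited above
resource_increase = 0.875         # ~87.5% more shaders
observed_scaling = observed_gain / resource_increase
print(f"Observed Kepler scaling factor: {observed_scaling:.2f}")    # ~0.56

assumed_scaling = 0.70            # the post's assumed best-case scaling factor for GM200
gm200_extra_resources = 0.50      # GM200 rumored to carry ~50% more units than GM204
est_gain = assumed_scaling * gm200_extra_resources
print(f"Estimated GM200 gain over GM204 at equal clocks: {est_gain:.0%}")   # 35%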
 

xLegenday

Member
Nov 2, 2014
75
0
11
First of all, this news is wrong! The 380 won't be Fiji. It might surprise you...
Second, there will only be one really new GPU design this year. All the rest will be renames with minor tweaks and clock-speed bumps.

The big change will come in 2016 with the new manufacturing process; this year 28nm will still rule the GPU world.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
First of all, this news is wrong! The 380 won't be Fiji. It might surprise you...
Second, there will only be one really new GPU design this year. All the rest will be renames with minor tweaks and clock-speed bumps.

The big change will come in 2016 with the new manufacturing process; this year 28nm will still rule the GPU world.

Are you implying Hawaii will be 380 and Fiji 390?
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
So I would not be surprised if GM200 is 30-35% faster than GM204; that's what the leaks also showed and I think it's realistic. That would put GM200 and AMD's next flagship on par, with the AMD card winning 4K easily.

Why are you assuming an easy 4k win for AMD? Just because of HBM?
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
The 380X is 300W. So the 390X will be a 500W single GPU that'll burn out in 6 months then? Either way, it looks like I'll have to pass. :/ Absolutely pathetic.

I don't hear people complaining about the 295X2 being toast... and it pulls 500W+.

Just because it needs a lot of juice doesn't mean it's gonna explode "way" faster.

Sure it might go boom after 4-5 years instead of 5-7 years...who gives a crap...not like you keep the card long enough for that.

And if you're that scared you might as well just pick a 3rd party vendor that gives a 5 year warranty...those exist.


Why are you assuming an easy 4k win for AMD? Just because of HBM?

"Just"? HBM can be anywhere from 4.5 to 9 times faster than DDR. And if the 300 series already has such a huge performance plus just in 1080P where the bandwidth doesn't matter as much...this gap would only widen in 4K. There's a reason Nvidia also wants to use HBM asap.


P.S. I've always hoped that HBM or something similar would eventually replace DDR as main system memory... a person is allowed to dream, right? :}
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
"Just"? HBM can be anywhere from 4.5 to 9 times faster than DDR. And if the 300 series already has such a huge performance plus just in 1080P where the bandwidth doesn't matter as much...this gap would only widen in 4K. There's a reason Nvidia also wants to use HBM asap.


P.S. I've always hoped that HBM or something similar would eventually replace DDR as main system memory... a person is allowed to dream, right? :}

True, but memory bandwidth isn't the only spec that matters at high resolution. GM200 will have the ROPs (96) and sufficient bandwidth (with Nvidia's Maxwell compression) for good 4K performance. I wouldn't assume that AMD will run away from Nvidia at 4K.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
So are we talking a 380x paired to some AIO solution that you have to attach to your case? If so, AMD will surely sell these by the truckloads and close the gap with NVIDIA in market share. :sneaky:
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
So are we talking a 380x paired to some AIO solution that you have to attach to your case? If so, AMD will surely sell these by the truckloads and close the gap with NVIDIA in market share. :sneaky:

Because no one buying top end video cards has a 120mm fan mount available in their case anywhere? /s

Yeah if the performance is there with low temps and noise due to an AIO they will sell by the truckloads
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,538
136
HBM isn't only about sheer bandwidth but also much lower latency than GDDR5. I suppose the 380X's massive (and improved) array of shader cores/ROPs/etc. will benefit from that lowered latency and be kept fed a lot better than, say, on the 290X. Efficiency and resource usage go up!

The 380X is shaping up to be a monster, but so is GM200, each in its own way. Time can't pass fast enough!


I'll say it again: the last time ATI/AMD implemented a new memory technology in one of their cards (the 4870, with GDDR5), it was a formidable product. The 380X shouldn't be much different... yeah, it's a different time, different competitive conditions and a different situation altogether, but there are reasons to expect something impressive.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
True, but memory bandwidth isn't the only spec that matters at high resolution. GM200 will have the ROPs (96) and sufficient bandwidth (with Nvidia's Maxwell compression) for good 4K performance. I wouldn't assume that AMD will run away from Nvidia at 4K.

Well, we don't know the final and legit specs of the 380X yet anyway. I'm just saying that if the GPU side already has more raw performance... but then also gets an insane VRAM latency AND bandwidth boost... then it WOULD clearly rip apart any current Maxwell card. (Would... as in IF)

That said... compression? Talking about delta comp? Like the kind of compression AMD has already been using since the 285? :p

*cough*
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
Well, we don't know the final and legit specs of the 380X yet anyway. I'm just saying that if the GPU side already has more raw performance... but then also gets an insane VRAM latency AND bandwidth boost... then it WOULD clearly rip apart any current Maxwell card. (Would... as in IF)

That said... compression? Talking about delta comp? Like the kind of compression AMD has already been using since the 285? :p

*cough*

Yeah, I am. Bandwidth only matters relative to how much the given workload needs. If GM200 has enough to feed it at high resolutions, then the R9 380 having 10 times the bandwidth won't matter if it doesn't have the horsepower to use it.

HBM is great and I'm sure it will be a standard at some point; I just don't think that alone is going to mean AMD wins at 4K.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Because no one buying top end video cards has a 120mm fan mount available in their case anywhere? /s

Yeah if the performance is there with low temps and noise due to an AIO they will sell by the truckloads

If by truckloads you mean <1% of desktop PC enthusiasts, then yeah, that will surely help them claw back to 50% parity with NVIDIA. :thumbsup: If the 380X fails to have a good air solution, it will be a market failure for AMD. It may appear sold out at first in stores because it's a new niche product, and I'm sure some will jump on the AT forums and say, "SEE I TOLD YOU IT'S SELLING SOOO MUCH!", and then we'll be greeted with another AMD financial report outlining how much money they are continuing to bleed and how much more market share NVIDIA has taken from them. Believe me, I want AMD to do really well here; the last thing I want is another Intel situation on the GPU side, but these kinds of decisions aren't going to help AMD in the long run.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
I sort of agree. The problem I see is that a CLC solution could cause perception problems without a competitive TDP figure. I can see fanboys mocking it for needing a big cooling solution (to run much cooler and quieter), so AMD really needs to lower that TDP figure. Unfortunately, their load power relative to performance is already looking pretty competitive, so it might be difficult to lower that, but if they can get a competitive TDP they should be fine, because then it's easy to point out that it's just NV's cooling solution that isn't competitive.
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,971
1,276
126
This is going to be an interesting year for GPUs. Can't wait! With DX12 coming as well (and DX11.1 cards can use a lot of DX12's features), we should see some good performance gains.
 

garagisti

Senior member
Aug 7, 2007
592
7
81
So are we talking a 380x paired to some AIO solution that you have to attach to your case? If so, AMD will surely sell these by the truckloads and close the gap with NVIDIA in market share. :sneaky:
It is always nice to read a very positive contribution in any thread. Thank you!