unseenmorbidity
Golden Member
- Nov 27, 2016
Not sure where you said HBC, and too lazy to look it up lol
Anyway, with DDR4 they would have 30-50 GB/s in a laptop, and Vega is supposed to have a larger cache plus other tricks to use less bandwidth. My expectation is that they can get to Polaris 11 levels of gaming performance. Maybe 11 CUs at 1 GHz or so for ~1.4 TFLOPs, with greater utilization in gaming than Polaris.
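The TFLOPs figure above is just peak-rate arithmetic. A rough sketch of the calculation, assuming the standard GCN layout of 64 shader processors per CU, each retiring one fused multiply-add (2 FLOPs) per clock:

```python
# Rough sketch: peak FP32 throughput of a GCN-style GPU.
# Assumes 64 SPs per CU and 2 FLOPs per SP per clock (one FMA).

def peak_tflops(cus: int, clock_ghz: float, sps_per_cu: int = 64) -> float:
    """Theoretical single-precision TFLOPs."""
    return cus * sps_per_cu * 2 * clock_ghz / 1000.0

# 11 CUs at 1 GHz, as speculated above:
print(peak_tflops(11, 1.0))   # -> 1.408
```

Real-world throughput is lower, which is why utilization matters as much as the headline number.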
looncraz said:
That means it makes more sense to use fewer CUs - perhaps only four (256 SPs) and include the high-bandwidth cache (or just a very large L2).
I am totally fine with this test as a datapoint. Where I have a problem is when it's not contrasted with the benefit of more cores.
The question isn't "[faster theoretical draw calls] or [slower theoretical draw calls]?" when you pit a 7700k vs Ryzen 7. The correct question is: "[faster theoretical draw calls] or [more cores / higher minimum frames / multithread future proof]?".
Many reviewers have failed to present this choice to the reader/viewer. And I find that disingenuous.
https://www.computerbase.de/2017-03.../#diagramm-battlefield-1-dx11-multiplayer-fps
Too bad most reviewers don't test more relevant CPU-limited scenarios, such as multiplayer in current games, because it's hard to make those runs consistent.
Personally, I've been curious whether someone could load up something like BF1 multiplayer on two PCs with different CPUs, just follow each other around, and compare numbers while they're next to each other.
What software do you use to record those lows?

I have logged hours of play time in BF4 and BF1 on my i7-2600K @ 4.5 GHz and my stock-clocked RX 480.
I max everything and run 144 Hz FreeSync. I *do* artificially cap my FPS at 120, so I'll need to make some unrestrained runs and turn off Radeon Chill.
I bought a Fury so I could test with it as well.
I will then repeat, with both cards, on both Windows 10 and Windows 7, and report 1% lows, 0.1% lows, averages, and will graph frame times.
I will then do that with my Ryzen 7 1700X at stock clocks... and again overclocked.
LOTS of testing. I best get to it...
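For anyone wondering how 1% and 0.1% lows fall out of a frame-time log, here is a minimal sketch. The function name and the sample data are hypothetical; capture tools (FRAPS, OCAT, etc.) dump per-frame times in milliseconds, and the "X% low" is commonly the average FPS of the slowest X% of frames:

```python
# Sketch: derive 1% / 0.1% low FPS from a list of frame times in ms.
# `percentile_low_fps` is a hypothetical helper, not any tool's real API.

def percentile_low_fps(frame_times_ms, pct):
    """Average FPS over the slowest `pct` percent of frames."""
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, round(len(worst) * pct / 100))      # how many frames to average
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# Mostly 60 fps frames with a few 30 fps hitches:
times = [16.7] * 990 + [33.3] * 10
print(round(percentile_low_fps(times, 1.0), 1))   # 1% low   -> 30.0
print(round(percentile_low_fps(times, 0.1), 1))   # 0.1% low -> 30.0
```

The point of the 0.1% metric is exactly this: a handful of hitches barely moves the average but dominates the lows.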
No. It appears you are not up to speed with the information we have. The 16-CU Polaris 11 design in the Radeon Pro 460 from the MacBook Pro consumes around 18 W under load (GPU only). It is clocked at 907 MHz, and the power limit in the GPU's BIOS is 35 W; the rest is consumed by the memory, four chips on a 128-bit bus. AdoredTV tested how a 14-CU Polaris 11 behaves downclocked to 850 MHz, and it drew around 15-18 W under load, so there is a pattern here.
Raven Ridge APUs are not using Polaris GPUs, but Vega. Vega GPUs have been optimized to work with higher core clocks at the same thermal envelope compared to previous generations of GPUs.
And now we have a Raven Ridge mobile engineering sample: a 4C/8T design clocked at 3.0/3.3 GHz, with one of 12 CUs disabled for 11 CUs total, at a 35 W TDP for the whole APU package. Canard PC reported this engineering sample on Twitter some time ago.
However, it seems AMD is, in fact, going with HBM on APUs - according to Fottemberg, Apple has ordered them: https://semiaccurate.com/forums/showpost.php?p=284628&postcount=7858
That would be a pretty useful way to bring HBM within reach of the mass market.
What software do you use to record those lows?
The RX 460 has 112 GB/s of memory bandwidth; there is no way an 11-CU Vega with 30-50 GB/s of bandwidth will match Polaris 11's performance.
But with 50 GB/s it will have adequate performance (30 fps or more) for 1080p gaming at low/medium IQ settings.
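The gap being argued here is easy to check with theoretical numbers. A rough sketch, assuming dual-channel DDR4-2400 for the APU and the RX 460's 128-bit GDDR5 at 7 GT/s (bandwidth = bus width in bytes × transfer rate):

```python
# Rough sketch: theoretical memory bandwidth in GB/s.
# bus_bits: memory bus width in bits; mtps: mega-transfers per second.

def bandwidth_gbps(bus_bits: int, mtps: float) -> float:
    return bus_bits / 8 * mtps / 1000.0

# Dual-channel DDR4-2400 (2 x 64-bit channels), as an APU would use:
print(bandwidth_gbps(128, 2400))   # -> 38.4
# RX 460: 128-bit bus, GDDR5 at 7000 MT/s:
print(bandwidth_gbps(128, 7000))   # -> 112.0
```

So the ~3x bandwidth deficit quoted above holds up, which is why the larger cache matters so much for an APU.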
That could be true but not the way most think.
It is more likely an Intel CPU paired with an AMD GPU using Intel's silicon-bridge packaging technology.
So my point stands that a silicon interposer and HBM are not a solution, and we won't see them in APUs.
If and when AMD has viable solutions (and Intel's solution is viable, although not ideal), AMD will use them.
Apparently, Joker's "5 GHz" 7700K was actually running at 4 GHz...
https://www.youtube.com/watch?v=VWarC_Nygew

I want to see a response from Joker; innocent until proven guilty, although the case against him is starting to look compelling.
More likely, Apple puts Raven Ridge in its MacBook Pro.
Everything except the price of HBM.

Actually, due to economies of scale, I believe HBM2 is cheaper.
Personally, I think it would be best if AMD put a single HBM1 stack on an interposer for a high-end APU SKU or two.
Even just 512 MB or 1 GB at 128 GB/s is better than nothing. And HBM1 is proven and cheaper.
k?

Personally, I'm not buying into the Ryzen hype myself.
AMD dropped the ball long ago IMHO.
"Ok, so what?"

k what?
Ryzen CPUs in stock right now on Amazon. I think the bigger problem is motherboards really, but if you want a chip, go get one now.

There are lots of 1700 and 1700X CPUs available here in Finland, but motherboards are the problem. There are some pretty crappy B350 boards but no X370. To be honest, this is a rather unusual situation.
Personally, I'm not buying into the Ryzen hype myself.
AMD dropped the ball decades ago IMHO.

If you can't elaborate on that, go into detail, or offer balanced and objective arguments for your opinion beyond "AMD is rubbish, always was", then I suggest you head to the Kaby Lake thread.
Ryzen CPUs in stock right now on Amazon. I think the bigger problem is motherboards really, but if you want a chip, go get one now.
Apparently, Joker's "5 GHz" 7700K was actually running at 4 GHz...
https://www.youtube.com/watch?v=VWarC_Nygew

That's what I said when I tested Tomb Raider on a 6700K @ 4.5 GHz. I was getting much higher fps.