Tup3x
Golden Member
- Dec 31, 2016
It doesn't mean that it's running at that speed though.

CMK16GX4M2B3000C15 is https://www.newegg.com/Product/Product.aspx?Item=N82E16820233852
45ns is latency on Intel CPUs. Having 25ns on top of that? Please.
Yeah, possibly for corner-case workloads, but remember you're getting 50-100% more cores for half the price. I know we're looking at latency here and not value for money, but it has to be taken into account: if you were choosing between a 6900K and an 1800X for those corner-case workloads you speak of, a 10-30% latency penalty at half the price still looks like a bargain. And those are corner-case workloads that I think won't matter for most people.

The stuff I work with is generally unfriendly to any cache structure and consists of a load of random accesses.
So for my workloads it probably does. I just hope AMD was aware of it ahead of time, so that this is properly fixed by now (in Zen 2, which should tape out soon, shouldn't it?). After all, that's about the only glaring flaw with Ryzen right now. The rest are either imperfections or deliberate choices.
Yep, cache should carry it for now.
It says the speed it's running at, at the very bottom. 2133 should be running at CL13 or less.

It doesn't mean that it's running at that speed though.
Why do you assume CL13? Often, if you don't use the XMP profile, the RAM defaults to a really low speed and high latencies.

It says the speed it's running at, at the very bottom. 2133 should be running at CL13 or less.
Where do you get 3000MHz memory speed from if SiSoft is reporting 2133MHz CL16?
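The CL-vs-frequency trade-off in this exchange is easy to put in numbers. A minimal sketch, using the standard DDR timing formula (DDR transfers twice per clock, so one cycle lasts 2000 / transfer-rate ns); the module rating and reported timings are taken from the posts above:

```python
def cas_latency_ns(transfer_rate_mts: float, cas_cycles: int) -> float:
    """True CAS latency in nanoseconds for a DDR module.

    DDR transfers twice per memory clock, so the clock in MHz is
    transfer_rate_mts / 2 and one cycle lasts 2000 / transfer_rate_mts ns.
    """
    return cas_cycles * 2000.0 / transfer_rate_mts

# Rated XMP profile of the CMK16GX4M2B3000C15 kit: DDR4-3000 CL15
print(round(cas_latency_ns(3000, 15), 1))  # 10.0 ns

# What SiSoft reported: DDR4-2133 CL16 (JEDEC fallback, loose timings)
print(round(cas_latency_ns(2133, 16), 1))  # 15.0 ns

# The same 2133 MT/s tightened to CL13, as suggested above
print(round(cas_latency_ns(2133, 13), 1))  # 12.2 ns
```

So a kit rated 3000 CL15 that falls back to 2133 CL16 is genuinely ~50% slower on first-word latency, which is the point being argued.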
I know what I am talking about. AIDA64 reports latency for the whole package: memory and IMC. So do Sandra, Passmark, and literally every single memory latency test you can conduct, because that's just how it is.

You don't know what you are talking about. My Phenom II gets 45ns. Bulldozer/??? gets the same.
My 2133 works at 10-ish timings (screw Intel); it does not mean the memory in this test does.

Because almost all newer 2133 can hit CL13. You adjust it in the BIOS: 13-14-13-41 1T.
And what does that have to do with what I said? Intel and AMD both have similar latency with tuned RAM. Pop in XFR or AMP and the results are about the same.

I know what I am talking about. AIDA64 reports latency for the whole package: memory and IMC. So do Sandra, Passmark, and literally every single memory latency test you can conduct, because that's just how it is.
There is already a 4C/8T APU engineering sample floating around, with 11 CUs in a mobile package, a 35W TDP, and 3.0/3.3 GHz clocks. The 11 CU design is cut down from 12 CUs.

Are you saying we'll have 15W 4c8t 3GHz APUs with 512SPs at 1.2GHz? At full load? Not power throttling? Don't be daft. At 45W that might be possible, but the GPU will throttle no matter what. Remember, the 896SP 460 is 75W alone.
Yes, you're probably right. I personally know for a fact that nobody in IT decision making has ever planned ahead, but rather acted well after market changes happen, and no one in decision making ever followed the technical and practical advancements of their respective market fields. Also, none of them ever wondered if they could make their purchases and departments financially more effective with simple actions, such as buying products with a better price/performance ratio than before, in large quantities.

You should think full well before re-posting some kiddie fanatic's conspiracy fairy tale on here. That post and poster are plain sad.
Anyone who works in the IT field as an Architect/CTO/Director knows full well that story is complete and utter BS, every letter of it. AKA a smear campaign.
These threads have for the past week become full of random trash with little substance. No one in the decision-making IT world even follows Ryzen right now.
In a mass virtualized/cloud-oriented IT world, they will discuss Ryzen only once Naples is out.
Even then it will be very few, as that department is, in every major company, outsourced.
There is no Evil and Good here; it is just business vs. business.
And these practices are NEVER MFG<->Business. It is MFG<->OEM/ODMs.
Boost clock of ~1.17GHz for the 50W TDP part, and the 75W part would be the same as the RX 460. BTW, will RR have Polaris or Vega cores?

What frequency are those 1024SP @ 50/75W? Because I was talking about 512SP at a base frequency of 900-1000MHz on 14nm FinFET (versus 512SP @ 800MHz on 28nm bulk).
Sure. And the GPU itself is far from the whole thermal load of a dGPU. Let's assume that the GPU itself represents 50W (66%) of the 75W TDP. That means a 512SP version at full tilt, no RAM or anything included, is 25W. Alone.

And the full P11 is 1024SP @ 75W TDP; we also have the highly binned P11 for pros with a 50W TDP ~
http://www.anandtech.com/show/10821/now-shipping-amd-radeon-pro-wx-series
I'm not saying that a 15W APU will throttle (or not), but you're basing your assumptions on a highly leaky, locked GPU.
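The back-of-envelope arithmetic in this exchange can be sketched as follows. The two-thirds (the post's "66%") GPU-die share of board power and the linear per-SP scaling are the poster's assumptions, not measured figures:

```python
# TDP split sketch for Polaris 11, following the post above.
P11_TDP_W = 75.0   # full Polaris 11 board power (RX 460 class)
GPU_SHARE = 2 / 3  # assumed fraction of board power burned by the GPU die itself
FULL_SPS = 1024
CUT_SPS = 512      # hypothetical cut-down part discussed in the thread

gpu_only_w = P11_TDP_W * GPU_SHARE            # GPU die alone, ~50 W
cut_gpu_w = gpu_only_w * CUT_SPS / FULL_SPS   # naive linear scaling per SP, ~25 W
print(round(gpu_only_w), round(cut_gpu_w))    # 50 25
```

Which is where the "25W for a 512SP GPU alone" figure comes from: halve the SPs, halve the (assumed) die power, and the RAM, VRM, and I/O budget of a discrete card is excluded entirely.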
Base clock numbers go out the window once the iGPU gets any sort of load, remember that. With Intel as an example, their 15W >2.5GHz-base-clock chips often dip well below 1.5GHz when the iGPU kicks in, and that's with a far smaller GPU. My desktop A8-7600 (65W) power-throttles when the GPU (384SP) is under load, although not by that much (from 3.1GHz down to ~2.4-2.6GHz). APUs are optimized to utilize their thermal window to the fullest extent possible. As such, it would be silly for the CPU not to have a higher base clock when the GPU is idle, as they have cooling to spare. I'd love to see actual clock speeds for a Carrizo chip with the GPU under load. They'd definitely not be 2.7GHz.

I calculated that many weeks ago and maybe I remember wrong, but the 15W Excavator has a 2.7GHz base clock for 4 cores (2 modules) and 1100MHz (max?) frequency for the 512SPs. Given that at low power/Vcore the process allows 65% power saving or +80% clock, I calculated that if the SPs remain at 512, a +100MHz bump would still leave some power for the CPU. A 2.7GHz Zen 4C/8T will draw just a little less than a 2.7GHz XV 2M/4C due to power scaling.
Given the increased power budget on the CPU, we can reach 3GHz.
Or alternatively, still 2.7GHz Zen cores, but with 1024SPs at 900MHz-1GHz max, at the cost of more area...
EDIT: to be clear, on the CPU the clock is the base clock. I don't know/remember the turbo clocks on the BR APU. For the GPU part it's the MAXIMUM clock. The base clock, if I remember correctly, is 800MHz... on 28nm bulk.
No higher than ~1200MHz, at least. And sure, cutting back a few hundred MHz will save you power. But there's also data suggesting that Polaris power draw flattens out below ~900MHz, so there's not much to save. I hope Raven Ridge has Vega-based iGPUs, but I'm not betting on it. And even so, you'd need 15-20W for a 512SP GPU alone for it to get up to speed. Fitting it inside a 15W TDP APU and not expecting throttling is a pipe dream.

What frequency are those 1024SP @ 50/75W? Because I was talking about 512SP at a base frequency of 900-1000MHz on 14nm FinFET (versus 512SP @ 800MHz on 28nm bulk).
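To see why downclocking saves power but the savings flatten out, here is a hedged sketch using the common dynamic-power approximation P ∝ f·V², with voltage scaling roughly with frequency down to a floor. The cubic-style model and the 0.75 voltage floor are illustrative assumptions, not Polaris measurements:

```python
def dynamic_power_scale(f_new_mhz: float, f_old_mhz: float,
                        v_floor_frac: float = 0.75) -> float:
    """Relative dynamic power after downclocking, assuming P ~ f * V^2
    and V scaling with f down to a voltage floor (fraction of original V).
    The floor models why the savings flatten out at low clocks."""
    f_ratio = f_new_mhz / f_old_mhz
    v_ratio = max(f_ratio, v_floor_frac)  # voltage can't drop below the floor
    return f_ratio * v_ratio ** 2

# Dropping a Polaris-like GPU from 1200 MHz to 900 MHz:
print(round(dynamic_power_scale(900, 1200), 2))   # 0.42 x original power

# Going further, to 700 MHz, gains comparatively little once V hits its floor:
print(round(dynamic_power_scale(700, 1200), 2))   # 0.33 x original power
```

Under these assumptions the first 300MHz cut saves almost 60% of the power, while the next 200MHz saves under 10 more points, consistent with the "flattens out below ~900MHz" observation.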
That doesn't answer the question of CPU clocks under GPU load. Again: any GPU load means CPU base clocks go out the window. Any at all. If it's a 35W 4c8t chip with a 3GHz base and 11 CUs (704SP), I'd expect it to allocate anywhere between 50 and 75% of its power to the GPU under heavy loads, which would mean significant CPU throttling. I'd be very impressed if it maintained 2GHz on all cores.

There is already a 4C/8T APU engineering sample floating around, with 11 CUs in a mobile package, a 35W TDP, and 3.0/3.3 GHz clocks. The 11 CU design is cut down from 12 CUs.
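The power-budget guess above translates into a CPU allocation like this. A minimal sketch: the 50-75% GPU share is the post's guess, not a measured figure:

```python
# Sketch of the power-budget argument for a 35 W 4c/8t APU under heavy GPU load.
APU_TDP_W = 35.0

for gpu_share in (0.50, 0.75):  # guessed range of TDP taken by the iGPU
    cpu_budget_w = APU_TDP_W * (1 - gpu_share)
    print(f"GPU {gpu_share:.0%} -> CPU gets {cpu_budget_w:.1f} W for 4 cores")
# GPU 50% -> CPU gets 17.5 W for 4 cores
# GPU 75% -> CPU gets 8.8 W for 4 cores
```

With only 9-18W left for four cores plus uncore, a 3GHz all-core base clock is clearly not sustainable, which is the throttling argument being made.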
Well, this is what I get on IVB with DDR3 in Dragon Age: Inquisition. Looks like you can hit latency points where things blow out.

Which also shouldn't be giving exponential scaling, unless they went and loosened the timings to go along with the lower clock speeds.
They sell Intel CPUs at their highest pre-silicon-lottery limit. Consider 4.2GHz precisely that for Ryzen.

I doubt anybody would sell an OC'd CPU at its highest limits.
And in 2 months, what degradation-free OC headroom will SR have?

In 4 days it shall be known what OC headroom SR has.