SiliconWars (Platinum Member, joined Dec 29, 2012)
On 7-CPU, the A4-5000 wins by approx. 50-60% with 4 threads, and that is at a lower clock speed for the A4 as well. Extrapolate to 8 threads and the conclusion is easy: Jaguar is immensely faster.
On Geekbench: Bobcat wins in every single-threaded test at 1.6GHz, and Jaguar adds a 10-20% IPC boost over Bobcat. Extrapolate to 8 threads and ARM again loses by a large margin.
Desktop, server, game console, fax machine, call it whatever you want, ARM has never made a processor which is at the performance level that an 8C Jaguar is expected to be at.
So, what you're saying is that ARM could make a processor with worse power consumption, to compete against a processor that already has better power consumption. Lol, that's excellent logic. They're definitely going to make their situation BETTER with that.
Well, AMD reckons Jaguar is about 1W per core at 1GHz, rising to 2W per core at 1.4GHz.
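For what it's worth, those two quoted figures are enough for a back-of-the-envelope sketch. The power-law fit and the 1.6GHz extrapolation below are my assumptions, not AMD data:

```python
import math

# AMD's quoted Jaguar figures: ~1 W/core at 1.0 GHz, ~2 W/core at 1.4 GHz.
f1, p1 = 1.0, 1.0   # GHz, W/core (quoted)
f2, p2 = 1.4, 2.0   # GHz, W/core (quoted)

# Fit P = a * f^k through the two quoted points (assumed model).
k = math.log(p2 / p1) / math.log(f2 / f1)   # exponent, roughly 2.06
a = p1 / f1 ** k

def core_power(f_ghz):
    """Estimated per-core power (W) at a given clock, per the fit above."""
    return a * f_ghz ** k

for f in (1.0, 1.4, 1.6):
    print(f"{f:.1f} GHz: {core_power(f):.2f} W/core, "
          f"{8 * core_power(f):.1f} W for 8 cores")
```

By this (assumed) fit, an 8-core Jaguar at the XBox One's 1.6GHz lands in the very rough 20W ballpark for the CPU cores alone.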
Yeah, because the Exynos 5250 is only two cores and A4-5000 is four. No kidding. What, did you think that it had four cores but awful scaling past two threads vs everything else there that doesn't? Maybe try looking at the two thread tests between the two.
Put your glasses on and look at it again. The Cortex-A15 wins in single threaded image compress, image decompress, Lua, primality, sharpen image, blur image and write sequential.
Now, I said Bobcat had slightly better perf/MHz (I really didn't do an average here). Jaguar adds 10-20% to that. Which is why I said that a 2.5GHz Cortex-A15 should be competitive against a 2GHz Jaguar.
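The arithmetic behind that claim is simple IPC-times-clock; the per-clock numbers below are illustrative assumptions (Bobcat normalized to 1.0, A15 slightly behind it per the post above, Jaguar at the midpoint of the stated 10-20% uplift), not measurements:

```python
# Rough sketch of the 2.5 GHz Cortex-A15 vs 2.0 GHz Jaguar comparison.
bobcat_ipc = 1.00                # normalize Bobcat's per-clock throughput
a15_ipc    = 0.95                # assumption: A15 slightly behind Bobcat per MHz
jaguar_ipc = bobcat_ipc * 1.15   # midpoint of Jaguar's claimed 10-20% uplift

a15_perf    = 2.5 * a15_ipc      # 2.5 GHz Cortex-A15
jaguar_perf = 2.0 * jaguar_ipc   # 2.0 GHz Jaguar

print(f"A15: {a15_perf:.2f}, Jaguar: {jaguar_perf:.2f}, "
      f"ratio {a15_perf / jaguar_perf:.2f}")
```

Under those assumptions the two land within a few percent of each other, which is all "competitive" means here.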
Cut the crap about Jaguar having 8 cores. Cortex-A15 would have 8 cores in this scenario. Stop arguing against that.
They make IP that can be configured for a bunch of different purposes, and yes, it can be configured to a similar performance level vs 8C Jaguars, especially the 1.6GHz one in XBox One. All this stuff about what does or doesn't exist in the market is not needed in a technical comparison.
I said that one could make a Cortex-A15 console SoC that'd have competitive performance.
How people confuse this with an argument that this should have been chosen is beyond me. And the reason I said it is because people like you say crap about how you'd be stuck with 4 cores max or how the single threaded performance would be stuck way behind the consoles.
You keep talking power consumption but you don't have the TSMC 28nm HPM or HP (whatever Kabini is made on, almost certainly not LP) Cortex-A15 power-over-leakage optimized numbers to compare with, so there's really no point.
In a perf/W sense I see no big challenge in Cortex-A15 keeping up with that when on a similar process.
Yes, that's what I meant - multiplying the cores multiplies the performance difference. With both @ 8 cores, the lead for Jaguar just gets bigger.
No it'll stay the same everything else being equal.
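The disagreement here is ratio versus absolute gap. A toy illustration (the per-core scores are made up, and ideal linear scaling per core is assumed):

```python
# Toy numbers for the core-scaling dispute: doubling both core counts
# doubles the absolute gap but leaves the relative lead unchanged.
jaguar_per_core = 150   # hypothetical benchmark points per core
a15_per_core    = 100   # hypothetical benchmark points per core

for cores in (4, 8):
    j, a = cores * jaguar_per_core, cores * a15_per_core
    print(f"{cores} cores: ratio {j / a:.2f}x, absolute gap {j - a} points")
```

So whether the "lead gets bigger" depends entirely on whether you mean the difference in points or the ratio.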
Exophase is mostly right, there isn't a lot between the cores. Jaguar's perf/W is being held back by x86.
My bad, why would they label 4 threads on a 2C processor?
You're right, noticing a trend of poor performance on the last benchmarks of each run - possible throttling issues?
Well sure, by using Geekbench as the be-all benchmark between the two. Other benchmarks say that Jaguar is significantly faster than the A15, so which is to be believed?
Making IP with similar performance level =/= making IP that is remotely better than the competition at any aspect. Sure, Intel COULD make Slice beat a Titan, but it'd just take tons of die space, power, and money...
8-core Cortex-A15 @ 2.5GHz is feasible. It'd run within a power budget suitable for consoles. It's performance competitive with Jaguar.

8-core (2x4 cluster) Cortex-A15 @ 2.5GHz is perfectly implementable on 28nm SoCs today, and it would run within a power budget that's suitable for consoles. That would have been performance competitive with Jaguar (with equal quality code of course). Performance was not the problem.
You keep talking power consumption yet you don't have ARM on your high-speed process to compare with, so there's really no point.
See how easy it is to place all of the burden of proof on the other guy?
Now that I look at it it's especially odd that you say that only AMD can deliver the graphics performance, are you really saying nVidia wouldn't have the GPU for this either? Or did you mean of AMD and Intel?

Seriously, ARM is amazing for tablets and phones, but it really ain't anywhere CLOSE to desktop performance. The best Nvidia can do is power a handheld Shield. They don't have the ability to make a medium/high end SoC. Only Intel and AMD have that ability, and only AMD can deliver the graphics performance needed for games.
Blah blah, AMD would be better than Intel if they had their process, 3dfx would beat a Titan, yadda yadda yadda. Last I checked, we were discussing CPU designs here, not processes!
Don't know what you're referring to. Exynos 5250 in the configuration tested has no throttling (it's running on a Linux distro).
Geekbench and 7-CPU. Maybe we'll see something with Phoronix? Do you have a Kabini with Linux? Do you want to run the suite?
Do I have to remind you again of exactly what I was refuting in this thread?
Here, let's just revisit what I said:
8-core Cortex-A15 @ 2.5GHz is feasible. It'd run within a power budget suitable for consoles. It's performance competitive with Jaguar.
Now I happen to think that Cortex-A15 on the same 28HP(M?) process that Kabini is probably on could compete well in perf/W at peak perf (and for consoles the rest of the perf/W curve doesn't really matter). And you won't see it on Samsung's 28nm Exynos Octa (much less 32nm Exynos 5250, which is all we have power numbers for) because it optimizes for the lower part of that perf/W curve. But quite frankly, the numbers seem to show it's in a similar perf/W ballpark with Jaguar even there.
But let's say that I give you the full benefit of the doubt, let's say to get similar performance Cortex-A15 has to use much more power to do so. How much more power? 5W? 10W? Would it completely break a reasonable power budget of the console? Probably not. Now tell me what was wrong with what I said.
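To make the "would it break the budget" question concrete, here's a quick sanity check. Every number in it is my assumption: the ~100W console budget is a rough guess for a 2013 console, and the ~16W CPU share is just 8 cores times AMD's ~2W/core figure:

```python
# Assumed numbers only: how much of a rough console power budget
# would the CPU eat if the A15 needed 5-10 W more than Jaguar?
console_budget_w = 100.0   # guessed total budget for a 2013 console
jaguar_cpu_w     = 16.0    # 8 cores x ~2 W/core, per AMD's figures

for extra in (5.0, 10.0):
    a15_cpu_w = jaguar_cpu_w + extra
    print(f"+{extra:.0f} W -> CPU at {a15_cpu_w:.0f} W, "
          f"{a15_cpu_w / console_budget_w:.0%} of a "
          f"{console_budget_w:.0f} W budget")
```

Even the pessimistic +10W case leaves the CPU at roughly a quarter of such a budget, which is the point being made above.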
Hopefully you won't start talking about perf/mm^2 now...
Actually I think all the burden of proof IS on you because you said that ARM doesn't have the ability to provide a high enough performance CPU for a console, specifically you said:
Now that I look at it it's especially odd that you say that only AMD can deliver the graphics performance, are you really saying nVidia wouldn't have the GPU for this either? Or did you mean of AMD and Intel?
I'm asking you to do a comparison with the same openly available processes, the same that would be used for the consoles regardless of the CPU chosen. This is nothing like Intel's process advantage. But yes, process does have a bearing on performance and peak perf/W.
The Bobcat was dropping off, that's what I was referring to.
Honestly, synthetic benchmarks are usually very dependent on certain quirks of architectures. I'd rather see game benchmarks (this is a game console after all), and 3dMark gives a hint at what that story is like.
8-core A15 at 2.5GHz is hypothetical; assuming they can make one leaves no guarantees about efficiency or performance.
Idle power matters a lot, I leave mine sitting on the dashboard for long periods of time. I want it to sip power all the time, not just during games. But that's just IMO. Kabini appears to deliver solid idle and load performance. Is that the process, or the design of the core? I think it's the design. Otherwise, according to how you think the processes work, one would have to be sacrificed.
Even if ARM can make it happen, they lose to AMD in essentially every aspect, and I don't think they can undercut on price.
I was just stating the obvious. If ARM is able to deliver an excellent mid-range CPU, why are there no ARM laptops? Why are there no ARM desktops? If they could deliver those, I'm sure they would. They haven't even managed a 64-bit design yet.
You can't really just "compare" between processes. There's so many little details that make it much more difficult. For one thing, you can't really make comparable designs at 2 different gate lengths - the actual layout of the design has to physically change. It's not just "scale down 2/3, make the exact same thing". There's more to it than that. Hence comparing across processes is a bit of a crapshoot.
There are some bizarre arguments going on from some of you. Sony and Microsoft can't afford to live in a "what if" and "maybe this" and "if an ARM processor had this core count and at this speed on this process with this non existent GPU" world.
AMD has the best overall package available by far, no one else is even in the same realm.
However, the way things are going at Team Nintendo, the WiiU successor may well end up using Bay Trail or Denver for all the sense that Nintendo's roadmap makes [/insert crazy screwy eyes]
Nah, they're just going to keep asking IBM to shrink the PowerPC 750 until they can fit a dozen of them on a single die!
If Nintendo had brains (I hope they do, but don't expect it), they would give up on the Wii U and do a custom AMD Kaveri/next iteration @ 20nm with more CUs/TMUs/ROPs and the best memory type available in that time frame (Wide I/O???). Target the same watt range (100-150W). That should give them a decent perf advantage at around the same cost, but more importantly they could just piggyback off all the HSA-based work devs have been doing on PS4/X1 for the two years that the Wii U will be flopping around like a stunned mullet waiting for 20nm to be cost effective.
Seriously, given the existence of the "semi-custom" silicon that is the APU which both MS and Sony are using in their consoles, just how much cost could it possibly represent for Nintendo to phone up an AMD sales rep and say "hey, you know that chip you already designed and are already producing for Sony and Microsoft, yeah how about you produce 10-20% more and sell them to us too?"
At this point, with AMD becoming the de facto standard for "me too" features on a console, what excuse does Nintendo have for not at least having the bare minimum of me-too performance capability?
I've never seen a product with an ARM CPU above 2GHz
Best available and best within a commercially relevant timeframe as far as either the Xbox One or the Playstation 4 are concerned.
It is pretty clear that AMD came out on top here, as did MS and Sony along with their customers.
Nvidia would not have felt it necessary to develop Shield if they felt the market was beneath them or irrelevant, but they got squeezed out and they are desperate...desperate enough to fund the development of an experimental new handheld console the likes of Shield.
Nvidia's reaction to AMD's success is just more proof and vindication that AMD is doing something right here, and that Nvidia failed to open a can of whoopass on the appropriate competitor. (so focused on Intel that they forgot to mind their own backyard...whoops indeed)