NTMBK
I wonder how 20nm vs 32nm would go. 32nm probably doesn't have a hard time competing against 20nm.
Which is why Intel's 32nm Atom phones sold absolute gangbusters, sure.
If you have a crappy platform, crappy BOM, crappy pricing strategy, crappy and slow roadmap and crappy architecture, even a 0.1fm process node still wouldn't matter. (So never mind a node that's simply on par in terms of power.)
Do you not remember how bad initial revisions of the A15 were? I wouldn't be surprised at all if their 1.9x claim is true.
I do recall the disappointment about A15; however, I would argue that when we talk about these roadmaps, doubt should be implied, and constantly calling every claim hyperbole or hype (without any evidence) detracts from the discussion. Just a bit.
I see your point, but I feel like performance claims and roadmaps are on entirely different levels. It's a bigger deal to say "we're gonna beat our competitors by x amount" than it is to say "we'll have a new product in this timeframe." Really though, I don't think roadmaps -- unaccompanied by other statements -- are hype at all.
Intel did do the same thing, back when they disclosed Silvermont's architecture. It was probably, at least in part, a response to ARM's claim that Intel was about to get beaten with Silvermont, which was not even close to being the case. Given this, I'm not really surprised that people are wary of ARM's claims, but I personally find them reasonable.
It's that Intel started to make comparisons against ARM chips back in 2008 when they launched Atom. I laughed a lot when I looked at their slides back then :biggrin: If anyone can find them...
Even if the initial revisions were bad, they aren't going to do 1.9x with a single node, and not even a real single node. Even assuming the best numbers available from TSMC for 20nm to 16FF+, they need to clear another 30-40%. I just don't see it. And I certainly don't see it in any practically relevant case. I.e., it's a lot easier to increase thermally constrained performance in multi-threaded cases, but the reality of the software is that multi-threaded performance isn't really at all relevant to the actual end user.
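The gap being argued here can be sketched with rough numbers. The node-gain figure below is an assumption for illustration, not an official TSMC number:

```python
# Rough sanity check of the "1.9x" claim. Generously credit the
# 20nm -> 16FF+ transition with ~40% (an assumed best case), and
# see what the architecture must deliver on top of the node.

claimed_total_gain = 1.9    # claimed overall uplift
assumed_node_gain = 1.40    # assumed best-case process contribution

required_arch_gain = claimed_total_gain / assumed_node_gain
print(f"architecture must add ~{(required_arch_gain - 1) * 100:.0f}%")
# prints ~36%, i.e. inside the "30-40%" gap described above
```

Under that assumption the architecture alone would have to deliver mid-30s-percent gains, which is exactly the part being doubted.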
Did they? I've never seen those.
I seriously doubt any foundry 20nm is "on par" with Intel 32nm in terms of electrical characteristics. I'm sure they're ahead.
That's it, thanks! ARM11 was already old back then. Cortex-A8 was just around the corner, and Cortex-A9 had been announced. Everyone with a little bit of micro-arch knowledge knew Intel had missed their target.

EDIT: Like this one?
*(image: Intel's old Atom marketing slide)*
Gah, that reminds me -- I'm glad OMAP's gone. TI makes some great stuff... but OMAP was not one of those things.
I had an OMAP3 on my Droid 2... not a happy time. I think the biggest issue was the 512MB RAM, though. I'm now using a Snapdragon 800... night and day. I feel like 14/16nm might be a very noticeable bump as well.
They were indeed pretty bad. Very late to market, poor memory controller, etc. OMAP4 slightly improved that, and OMAP5 could have been nice. Anyway I still want a BeagleBoard X15 and TI DSP are lovely monsters :biggrin:
I'm sure they were talking about overall, multithreaded performance, and yes, its relevance is dubious, though not entirely useless. FinFETs are just a really big deal, though, and I imagine they're quite critical to achieving the "sustained performance" they mention.
Here's an earlier slide comparing their A15 to the A57:
*(image: ARM slide comparing Cortex-A15 and Cortex-A57)*
They claim about 50% node-agnostic improvement for the A57. This is with an important caveat -- they're probably factoring in that the A57 is 64-bit capable, and has the benefits and penalties that come along with it. With the 20nm process, it bumps to ~1.9x. That's a ~26% contribution from the process. I imagine 16/14 will be a bit larger of a gain.
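Taking the slide's figures at face value, the decomposition works out like this:

```python
# Decompose the claimed ~1.9x A57 uplift into architecture and
# process factors, using the figures quoted from the slide.

total_gain = 1.9    # claimed overall gain on 20nm
arch_gain = 1.5     # claimed node-agnostic (architectural) gain

process_gain = total_gain / arch_gain
print(f"implied process contribution: ~{(process_gain - 1) * 100:.0f}%")
# 1.9 / 1.5 ≈ 1.27, consistent with the ~26% figure above
```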
I certainly agree with the first part, but disagree with the second part. Try to reproduce any of Intel's comparative benchmarketing results, but don't try too hard -- you'd waste your time.

Don't get me wrong, ARM makes nice designs; it's just that their marketing of them is smarmy at best. Pretty much everyone else in the industry, when quoting performance numbers, will tell you what's being tested and under what conditions.
Nvidia's pretty bad. Intel and AMD have certainly had their moments. Kind of hard to pick just one.
I think we can agree that all marketing slides are misleading/deceptive. But I dont think I have seen anything as misleading as ARM. It is in a league of its own.
But every once in a while an ARM engineer cuts through the marketing and comes to the rescue. No sooner had I posted that than I found this on RWT; I hope that post will clear a few things up, because that marketing slide sounded so deceptive:
http://www.realworldtech.com/forum/?threadid=147766&curpostid=147801
So there, an actual IPC comparison with Cortex-A57 (albeit a big range) that can put the speculation to rest.
And 10-50% better IPC while using less power, under the same process and using the same macros is very impressive.
I wonder how long until Intel decides to exit this market. They've been trying for so many years now, and the mobile division lost a staggering $4.21 BILLION in 2014 alone. Kind of sad though, since Qualcomm really needs some competition.
Or perhaps it's trying not to offer info some patent troll could try to use to attack ARM.
I think they're going to wait until the platform costs they've been steadily dropping reach the price points where they've been getting so many design wins, and then not leave the market, because their investment has been rewarding them with more market share for less money spent.
I've lost hope in ARM's marketing. I'm sad to agree that they really do appear to have sunk beneath even nVidia, let alone Intel. The only saving grace is that their slides are so blatantly ridiculous that I doubt anyone really takes them seriously, unlike nVidia's, whose problems are more subtle and convincing.
This is part of a bigger trend where ARM has been less and less willing to reveal real data about anything, as opposed to BS. With each new CPU core released since the Cortex-A8, the TRM has contained less and less information on performance, for example instruction timings and hazards. The A9 TRM still had timing information but nothing on reordering, while the A5, A7, and A15 (or anything since) have pretty much nothing at all. Uarch details were relegated to less formal presentation slides that were vague and confusing. Cortex-A57 and A53 got almost no mention of uarch details at all (although the A53 articles here helped a little). And the slides talking about performance comparisons said less and less about what they were testing. At this point I'm not even expecting them to reveal anything about the A72 outside of these single magic numbers.
Meanwhile Intel and AMD give optimization guides with real information, although the Bulldozer family one is pretty bad.
This is especially annoying for the in-order cores that ARM still heavily depends on, where you really need to know this to write good code. Oh, we can pretend that compilers are always as good as or better than the best programmers at assembly, no matter how long that continues to not be the case, or maybe that all compiler writers somehow have better access to processor information than the general public in the first place.
I look at something like the Raspberry Pi especially: now there's this Cortex-A7 version, which is a great improvement but still low end. Yet it's a huge platform that could use all the fine-tuned optimization it can get. It would be nice if someone were inclined to fine-tune code for the processor, but they can't really, unless they or someone else wants to reverse engineer all of it themselves.
I'd really like to know what ARM is so afraid of. Do they really think knowing this information will give their competitors any real advantage?
Back around that time Intel was also very big on comparing Javascript benchmark scores, when the browsers had much inferior ARM JITs or were lacking them entirely.
And when Z series started making it into anything even resembling a mobile device they weren't clocked at anywhere near 1.6GHz. I remember that one hybrid Symbian/Windows Moorestown phone that halved the rated clock speed of the part and still drained the battery in under an hour in Windows. But Intel was talking about Atom hitting it big in MIDs (think like Nokia N800) long before Moorestown even.
For better or worse I'm still getting an OMAP5-based Pyra when it comes out, and only a few years later than OMAP5 devices were supposed to. It's really sad that for a device like this, TI is still one of the only realistic options.
I guess that means maybe ARM does have real manuals for some people willing to sign an NDA that also forbids them from suing.
Intel always puts its testing methodology on the slide, so not sure why you're complaining.
