The RISC Advantage


dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Please point out, verbatim, where I posted irrelevant statements. Secondly, please point out what statements I made that were deserving of your condescending statements that explained things that go without saying.

Condescending? I'm supposed to read your mind over the pipes and assess your level of knowledge? How can someone be so ridiculously thin-skinned?

Then you get them as equal as possible, for the purpose of this discussion.

You can't, because ISA and design are intimately connected.

Say you have 100 programs, compiled for a processor utilizing ARMv8, and compare to a processor utilizing x86-64. Assume the target workloads for these processors are identical. Also assume their development budgets were identical. Assume they use the same manufacturing process. The compilers used were created with an equal development investment. Assume all personnel involved in the creation of the software and hardware mentioned are equally competent.

Measure the performance of those 100 programs, and compare between the two processors.

The one that draws more power.

I'd appreciate it if you'd stop nitpicking semantics, and address what I'm actually trying to say. It's really frustrating that I've had to go and spell all this out.

Who's condescending now? Guess I'll call it a day.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Of course, at least in America, that's completely irrelevant. Everyone drives even if they could have walked. Not to mention, in some places it's too hot to ride a bicycle regularly. :) I need my features! Air conditioner! Ass warmer!!

That's kind of the problem with the "x86 tax" argument. Eventually you end up needing features (NEON, performance, security, virtualization) and then everyone is just as big as everyone else. In a world where Apple's core is several times larger than an Atom core, IMHO the "x86 tax" is not relevant.
I didn't make the car analogy. But if one is going to go there, then I'm simply pointing out that you don't send a car to do the job of a bicycle. Put another way, the analogy doesn't really work.

On the "RISC advantage" I don't see it, I more see how the tech is implemented that works extremely well in very low power devices. Let's put it this way, if ARM and X86 are metrically equal, then why doesn't ARM scale up very well? Same reason x86 has issues scaling down, they were originally designed for a different purpose. It takes time and several iterations before each is optimized and suitable when taken out of their "native" environment.
 

III-V

Senior member
Oct 12, 2014
678
1
41
Condescending? I'm supposed to read your mind over the pipes and assess your level of knowledge? How can someone be so ridiculously thin-skinned?
No, but it should have stopped as soon as I said this:
I am not sure why you are stating this -- doesn't my comment already suggest this?
Yet you continued.
You can't, because ISA and design are intimately connected.
You can, if you possess something called an imagination.
The one that draws more power.
What a cop out response. This is not necessarily true, and you know it. Or I hope you would given your level of education, and career.
Who's condescending now? Guess I'll call it a day.
You really haven't left me a choice, have you? You've gone out of your way to completely miss my point. I am not at fault here.
 
Last edited:

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
You can, if you possess something called an imagination.

I deal in facts and reality.

What a cop out response. This is not necessarily true, and you know it. Or I hope you would given your level of education, and career.

Unless one team went with a crazy implementation (e.g. Netburst), it is true.

You really haven't left me a choice, have you? You've gone out of your way to completely miss my point. I am not at fault here.

It's my fault you're condescending. Got it.
 

III-V

Senior member
Oct 12, 2014
678
1
41
I deal in facts and reality.
Of course; you're an engineer, after all.

What I've proposed is something that is well within the boundaries of the laws of the universe, however.
Unless one team went with a crazy implementation (e.g. Netburst), it is true.
I don't think so. The point of removing all of these variables is to elucidate any inefficiency, not just the big outliers (Netburst). You can run through the scenario I've provided however many times you'd like for a given confidence level, and you'd see a clear answer.
It's my fault you're condescending. Got it.
I perceived your initial explanations of architecture to me as condescending. I suggested that I knew what I was talking about and that your explanations were unnecessary, and yet you continued to try to spoon-feed me. Given this, it is indeed your fault.

I have actually enjoyed this discussion with you, but it's just been frustrating being talked down to, asking you to stop, and then having you continue to do so. It's also frustrating that I've gone through so many posts to explain something ("all else equal") and made little to no progress.

Wish things hadn't gone this way. I'd even asked where I went wrong, and didn't get an answer.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
^

I told you where you went wrong: there's no better or worse ISA, because an ISA cannot be assessed in isolation and it's not a limiting factor.

As for the "spoon-feeding" and "condescension", please stop being so defensive. Nobody is here to denigrate you.
 

III-V

Senior member
Oct 12, 2014
678
1
41
I told you where you went wrong: there's no better or worse ISA, because an ISA cannot be assessed in isolation and it's not a limiting factor.
This happened before that. Please don't conveniently "forget" things. I've also demonstrated it can be done in isolation.
As for the "spoon-feeding" and "condescension", please stop being so defensive. Nobody is here to denigrate you.
Any decent human being, when asked to stop being rude, should do so, rather than refusing and doubling down, as you've done.

You are objectively wrong here.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Please point out, verbatim, where I posted irrelevant statements. Secondly, please point out what statements I made that were deserving of your condescending statements that explained things that go without saying.
(...)

I didn't see any condescending tone from him. Relax, dude. Even if something was blindingly obvious to you, there are others here who appreciate some elucidation. Nothing wrong with giving some details on a public forum; remember that you are not talking in private.
 
Last edited:

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,415
404
126
Any decent human being, when asked to stop being rude, should do so, rather than refusing and doubling down, as you've done.
You are objectively wrong here.
If you are this touchy, you'd better shut off your Internet connection.
I didn't get the impression that dmens was being rude/derisive/condescending at all.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
Keeping the nonsense aside, dmens, by any chance are you the Intel engineer that did an AMA on reddit a while ago? If so, I'd like to ask you a few questions about some claims regarding Haswell that were made at the time.
 

III-V

Senior member
Oct 12, 2014
678
1
41
If you are this touchy, you'd better shut off your Internet connection.
I didn't get the impression that dmens was being rude/derisive/condescending at all.
I perceived it that way, though. I was pretty frustrated that he was dismissing my points about decode width as irrelevant, despite their being clearly in response to videogames101's first(?) post in the thread. And I'd stated twice that I didn't need him to explain things to me before I said he was talking down to me, as I already knew the things he was stating... there was no need to preach to the choir.

Perhaps I need to be better about not taking people's attempts to help others learn personally, but in the end, I can't help but feel I was being talked down to, particularly now, after seeing that kind of reaction from him. I think I erred in explicitly stating he was being condescending, seeing how he spiraled out of control after that, but the post where I said so was very tame... especially compared to what he posted in response. I was also pretty frustrated and started talking down to him as well, but he really took things to the next level with the ad hominem.

But I know all of you don't care. This whole thing is all some sort of entertainment feed for you. Perhaps a moderator might care, when (or the way things have been lately -- if) they step in here.
While we're at it, let's assume invisible pink unicorns exist…
Better yet, why don't you play along for the sake of the argument?

My point is very clear: there is such a thing as an ISA that is objectively superior or inferior to another. I've asserted that ARM, particularly its 32-bit variety, is a better ISA than x86, and that RISC-V is better than either, if we were to ignore compatibility as a factor. I suppose I should limit the scope of this claim to modern, commonly used consumer software only.

It seems ridiculous to assert that no ISA can be better than another -- this would mean no ISA has ever improved on another, and that every ISA after the very first one was redundant. We'd have learned nothing in the history of computing about how to better tell our computers to do the things we'd like them to do.

This isn't the case, of course, especially when you consider that software changes over time. What may have been a good ISA 50 years ago would be pretty terrible today -- it wouldn't even run modern software, seeing as we're using 32- and 64-bit addressing. Even if it could, its shortcomings would of course be heavily masked by other factors, as x86's have been, but hopefully you can see what I'm saying. Have communication protocols not improved over time, as an analogy?

dmens made the point that there are tradeoffs. There certainly are tradeoffs between different ISAs -- many of them have been built with different goals in mind, and this just reinforces what I'm saying. Each will be better than the others at various things, and the particular thing to compare would be how they handle today's consumer software, so why not evaluate that? Why not see how two ISAs, both in processors targeting the same markets, compare to each other in benchmarks?

As far as separating the performance/power/whatever of an ISA from its "intimately related" uarch, as dmens put it: you could compare two processors using two different ISAs, with similar microarchitectures, built on the same process. Certainly there have to be at least a few cases where this scenario exists; perhaps the RISC-V vs. ARM comparison I linked an image of earlier would qualify. Even if no such cases exist, just ignore microarchitecture and sample multiple products on the same process, using two different ISAs, until you've got enough data to determine whether or not there is a discernible difference.
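
To make that concrete, here is roughly the kind of reduction I have in mind once you have matched results in hand. The program count and scores below are made-up placeholders, not measurements from any real parts; the geometric mean is just the usual way to summarize per-program ratios so that no single outlier dominates.

```c
/* Rough sketch of the comparison: per-program scores from two otherwise
 * matched processors are reduced to a single figure of merit. All the
 * numbers here are invented placeholders, not real measurements. */
#include <math.h>
#include <stdio.h>

#define NPROGS 5 /* stand-in for the 100 programs in the thought experiment */

int main(void) {
    /* Hypothetical scores (higher = faster), one entry per benchmark. */
    const double chip_a[NPROGS] = {1.00, 0.92, 1.10, 1.31, 0.88}; /* e.g. the ARMv8 part */
    const double chip_b[NPROGS] = {1.00, 1.05, 0.97, 1.20, 0.91}; /* e.g. the x86-64 part */

    double log_sum = 0.0;
    int a_wins = 0;
    for (int i = 0; i < NPROGS; i++) {
        double ratio = chip_a[i] / chip_b[i]; /* > 1 means chip A is faster here */
        log_sum += log(ratio);
        if (ratio > 1.0)
            a_wins++;
    }

    printf("geometric-mean speedup of A over B: %.3f\n", exp(log_sum / NPROGS));
    printf("benchmarks where A wins: %d of %d\n", a_wins, NPROGS);
    return 0;
}
```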

The fact that this is hard, crazy, infeasible, or that one would be better off looking for a pink unicorn is not relevant to my main point, though. Even if it absolutely were impossible to measure the impact of an ISA on a processor's performance, there are damn good reasons to believe that the ISA used does have an impact, and those reasons boil down to the fundamentals of how computers work.
 
Last edited:

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Keeping the nonsense aside, dmens, by any chance are you the Intel engineer that did an AMA on reddit a while ago? If so, I'd like to ask you a few questions about some claims regarding Haswell that were made at the time.

Nope, wasn't me. But feel free to PM me with questions, I can answer them as best I can.
 

Nothingness

Diamond Member
Jul 3, 2013
3,301
2,374
136
I gotta disagree. Silvermont, for example, is a very small core and has a competitive die size, so the difference in transistor count is negligible.
Difference with what?

If you go to the lower end of the CPU range, the x86 tax starts to be a problem. Here I am talking about very small microcontrollers. Even ARM designed a specific reduced variant of its ISA to get smaller (cf. Cortex-M). In that context, Silvermont is a huge core and even Quark can't compete.

Of course this applies to the number of transistors. If you take into account process, the discussion is different, but it's not the subject here.
 

kimmel

Senior member
Mar 28, 2013
248
0
41
It seems ridiculous to assert that no ISA can be better than another -- this would mean no ISA has ever improved on another, and that every ISA after the very first one was redundant. We'd have learned nothing in the history of computing about how to better tell our computers to do the things we'd like them to do.

It's equally ridiculous to assert that ISAs have any value at all without an implementation. You can look at an ISA on paper all you want and convince yourself that it's inherently better for any number of reasons. That means nothing without being able to translate that ISA into competitive products.

In other words, we might as well assert that vim is inherently better than emacs and that C++ is better than garbage-collected languages, when in reality each is both better and worse at the same time. The world is shades of grey, not black and white.
 

III-V

Senior member
Oct 12, 2014
678
1
41
It's equally ridiculous to assert that ISAs have any value at all without an implementation. You can look at an ISA on paper all you want and convince yourself that it's inherently better for any number of reasons. That means nothing without being able to translate that ISA into competitive products.

In other words, we might as well assert that vim is inherently better than emacs and that C++ is better than garbage-collected languages, when in reality each is both better and worse at the same time. The world is shades of grey, not black and white.
And we are capable of determining what shade of grey is darker than another, no?
 

Nothingness

Diamond Member
Jul 3, 2013
3,301
2,374
136
The decode stage is mostly a fixed pipeline; it is very hard for it to be a bottleneck.
It's not a bottleneck, but a complex ISA requiring several stages for decoding will impact branch penalty.

Unless you consider the decode width to be a bottleneck, in which case the entire machine must grow to accommodate decode width growth.
Isn't decode width an issue in itself due to variable-length instructions? Deciding where instructions start in your decode packet will impact latency.
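
A toy illustration of the difference, with a completely invented encoding (real x86 length decoding is far messier): with a fixed-width ISA the boundaries in a fetch packet are known up front, whereas with variable-length instructions each boundary is only known after working out the length of the previous instruction.

```c
/* Toy model of instruction boundary detection. With a fixed-width ISA the
 * start of instruction k is just k * 4; with variable-length encoding you
 * only learn where instruction k+1 starts after decoding the length of
 * instruction k. The encoding and insn_length() here are made up. */
#include <stddef.h>
#include <stdio.h>

/* Pretend length decoder: in this toy encoding the low 2 bits of the
 * first byte give the instruction length (1..4 bytes). */
static size_t insn_length(const unsigned char *p) {
    return (size_t)((p[0] & 0x3) + 1);
}

int main(void) {
    const unsigned char fetch_packet[16] = {
        0x03, 0xAA, 0xBB, 0xCC, 0x00, 0x01, 0x10, 0x02,
        0x20, 0x30, 0x00, 0x03, 0x40, 0x50, 0x60, 0x00
    };

    /* Serial scan: each boundary depends on the previous one. */
    size_t offset = 0;
    int n = 0;
    while (offset < sizeof fetch_packet) {
        size_t len = insn_length(&fetch_packet[offset]);
        printf("insn %d starts at byte %zu, length %zu\n", n, offset, len);
        offset += len;
        n++;
    }

    /* Fixed 4-byte encoding, by contrast: boundaries are known up front. */
    for (int k = 0; k < 4; k++)
        printf("fixed-width insn %d starts at byte %d\n", k, k * 4);
    return 0;
}
```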

That being said, for the higher-end x86 decoding certainly isn't an issue.
 

kimmel

Senior member
Mar 28, 2013
248
0
41
And we are capable of determining what shade of grey is darker than another, no?

Depends on what part of the beast you look at. Could be the light part, could be the dark part; it's certainly not one color. Maybe you prefer dark meat over white meat. :)

You really have to compare implementations and not ISAs for a productive discussion.
 

III-V

Senior member
Oct 12, 2014
678
1
41
That being said, for the higher-end x86 decoding certainly isn't an issue.
Perhaps compared to other issues, but am I wrong in thinking that, given decode width has grown over time (the last change was Conroe, and before that P6), we may see increased width in the next decade or so? Assuming Intel sticks with iterating Core, that is.

I'd actually love to craft a bunch of micro-benchmarks to try and determine the bottlenecks present in CPUs, as a way to predict what changes may be made in the future, but I don't really know how to get started on that.
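
For what it's worth, the sort of thing I picture is along these lines: time a long dependent chain of operations against the same number of operations spread across independent chains, and the ratio gives a crude hint at latency versus available throughput. This is only a sketch; a serious version would pin the generated code down with inline assembly and check it with a disassembler, since the compiler is free to rewrite naive C like this.

```c
/* Crude latency-vs-throughput probe. The iteration count and the mixing
 * function are arbitrary; the only point is that the first loop is one
 * long dependency chain while the second spreads the same work over four
 * independent chains. Compile with optimizations and inspect the output
 * assembly before trusting the numbers. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 200000000ULL

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    volatile uint64_t sink; /* keeps results live so the loops aren't deleted */

    double t0 = seconds();
    uint64_t a = 1;
    for (uint64_t i = 0; i < ITERS; i++)
        a = (a >> 1) + i;                     /* every step depends on the previous one */
    sink = a;

    double t1 = seconds();
    uint64_t b0 = 1, b1 = 2, b2 = 3, b3 = 4;
    for (uint64_t i = 0; i < ITERS; i += 4) { /* same work, four independent chains */
        b0 = (b0 >> 1) + i;
        b1 = (b1 >> 1) + i;
        b2 = (b2 >> 1) + i;
        b3 = (b3 >> 1) + i;
    }
    sink = b0 + b1 + b2 + b3;
    double t2 = seconds();
    (void)sink;

    printf("dependent chain:    %.3f s\n", t1 - t0);
    printf("independent chains: %.3f s\n", t2 - t1);
    return 0;
}
```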
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
It's not a bottleneck, but a complex ISA requiring several stages for decoding will impact branch penalty.

Yep, restart penalty is higher. Second-generation Core introduced a mechanism to restart instruction fetch immediately, without waiting for the flush, to try to alleviate this.

I am not an expert in the x86 decoder but I read somewhere that a deeper decode pipeline allows for fancier (and hopefully more accurate) branch prediction mechanisms too. Not sure about that though.

Isn't decode width an issue in itself due to variable-length instructions? Deciding where instructions start in your decode packet will impact latency.

By width I meant the number of micro-ops that can be produced per cycle by the decoders, not the number of bytes. And yes, some instructions go through a "slow" decoder, which has lower throughput. We optimize for the common cases using "fast" decoders.
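
If it helps to picture it, here's a deliberately crude toy model of that width limit. The decoder counts, the instruction mix, and the uop counts are all invented for illustration; they don't correspond to any real design, and real decoders are smarter about packing complex and simple instructions into the same cycle.

```c
/* Toy front end: three "fast" decoders that each handle one simple
 * instruction per cycle, plus a "complex" path for instructions that
 * crack into several micro-ops. In this simplified model a cycle does
 * either one complex instruction or up to three simple ones. */
#include <stdio.h>

#define N_INSNS 12

int main(void) {
    /* micro-ops each incoming instruction cracks into (1 = simple). */
    const int uops[N_INSNS] = {1, 1, 1, 4, 1, 1, 2, 1, 1, 1, 3, 1};

    int i = 0, cycles = 0, total_uops = 0;
    while (i < N_INSNS) {
        if (uops[i] > 1) {            /* complex instruction takes the cycle */
            total_uops += uops[i];
            i++;
        } else {                      /* pack up to three simple instructions */
            int fast_used = 0;
            while (i < N_INSNS && uops[i] == 1 && fast_used < 3) {
                total_uops += 1;
                i++;
                fast_used++;
            }
        }
        cycles++;
    }
    printf("%d instructions -> %d uops in %d cycles (%.2f uops/cycle)\n",
           N_INSNS, total_uops, cycles, (double)total_uops / cycles);
    return 0;
}
```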
 
Last edited:

III-V

Senior member
Oct 12, 2014
678
1
41
Depends on what part of the beast you look at. Could be the light part, could be the dark part; it's certainly not one color. Maybe you prefer dark meat over white meat. :)

You really have to compare implementations and not ISAs for a productive discussion.
I'd think we'd have enough implementations to sample to determine the impact of an ISA, to at least some level of significance. E.g., take however many iterations of MIPS there have been (or fewer, as desired) and compare after doing the same for ARM (A-series, R-series, or M-series, whichever is most appropriate). These would often share the same process, from the same foundry, and have had similar design goals and target markets. Additionally, to account for process, you would either need to compare one product per ISA on each shared node, doing this for several node/product combinations and comparing the calculated means, or normalize results against a simulated impact of the process used (process parameters are typically pretty easy to come by).

That'd be a lot of work, though, of course. Perhaps it'd not be a bad idea for a dissertation. Anyway, I am just suggesting that it is, in fact, possible to determine a difference to some usable level of accuracy and significance, regardless of how big of a pain in the ass it'd be to do so.
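
And the arithmetic at the end of that isn't complicated. A sketch with placeholder within-node ratios standing in for the matched node/product pairs (the normal approximation here is generous; a real analysis would want a t-distribution and far more samples):

```c
/* Sketch of the "compare the calculated means" step. The ratios are
 * invented placeholders: each one stands for ISA-A performance divided by
 * ISA-B performance for a matched pair of products on a shared node. */
#include <math.h>
#include <stdio.h>

#define N_PAIRS 6

int main(void) {
    const double ratio[N_PAIRS] = {1.04, 0.97, 1.08, 1.02, 0.99, 1.06};

    /* Work in log space so the summary is a geometric mean. */
    double logs[N_PAIRS], mean = 0.0;
    for (int i = 0; i < N_PAIRS; i++) {
        logs[i] = log(ratio[i]);
        mean += logs[i];
    }
    mean /= N_PAIRS;

    double var = 0.0;
    for (int i = 0; i < N_PAIRS; i++)
        var += (logs[i] - mean) * (logs[i] - mean);
    var /= (N_PAIRS - 1);
    double sem = sqrt(var / N_PAIRS);

    /* Rough 95% interval on the geometric-mean ratio. If it straddles 1.0,
     * the sample hasn't shown a discernible difference. */
    printf("geomean ratio: %.3f (roughly %.3f to %.3f)\n",
           exp(mean), exp(mean - 1.96 * sem), exp(mean + 1.96 * sem));
    return 0;
}
```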
 
Last edited:

liahos1

Senior member
Aug 28, 2013
573
45
91
this thread is nearly as esoteric and vitriolic as those on RWT. :D it's great getting to hear these different points of view, though.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
I'd think we'd have enough implementations to sample to determine the impact of an ISA, to at least some level of significance. E.g., take however many iterations of MIPS there have been (or fewer, as desired) and compare after doing the same for ARM (A-series, R-series, or M-series, whichever is most appropriate). These would often share the same process, from the same foundry, and have had similar design goals and target markets. Additionally, to account for process, you would either need to compare one product per ISA on each shared node, doing this for several node/product combinations and comparing the calculated means, or normalize results against a simulated impact of the process used (process parameters are typically pretty easy to come by).

That'd be a lot of work, though, of course. Perhaps it'd not be a bad idea for a dissertation. Anyway, I am just suggesting that it is, in fact, possible to determine a difference to some usable level of accuracy and significance, regardless of how big of a pain in the ass it'd be to do so.

Have at it. Please report back with your results when you are done.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Better yet, why don't you play along for the sake of the argument?

Because the scenario you outlined earlier is just as absurd. Unless you have gobs of money at your disposal, what you're proposing won't be possible, and then you're left discussing theory. And in theory there is no difference between theory and practice. In practice there is.