In reaction to Idontcare:
The fact you don't see such things happening at the top (Intel/Microsoft) might be taken as proof that the problem is not as much of a problem as you perceive it to be. Other than your handful of video conversion software programs there isn't much out there that taxes a modern quad-core computer system. For the vast majority of consumers there are plenty of spare CPU cycles to go around.
Well, if such a thing were implemented, it would not be in an x86 from Intel any time soon.
They need and want support from software vendors.
Microsoft has a big share of the desktop OS market and would be a significant factor.
But unfortunately Microsoft has a history of not being a visionary, but rather copying (or however one likes to call it) the good ideas of other people and companies.
I think a company that is too marketing-driven will not come up with novel ideas, or will milk the cash cow as long as possible before implementing them.
Therefore innovation goes much slower than it could.
If it were to happen, it would happen in the server world first, and there the Linux community is always eager to use every feature for more performance. I do agree with you on that.
To come back to Intel:
Take for example the integrated memory controller (IMC) in AMD's K8 and K10: AMD and many others have had this for years. Now Intel is finally taking the step to an IMC too. Progress is there.
Your posts are now coming full circle and sounding a lot like my original posts on your speculation near the top of the thread:
It is always good to discuss these things. If we just accepted the "scraps" that are thrown to us, the world would not be a fun place. I would rather be an active person than wait around.
In reaction to taltamir:
patents are not a wrong concept, they just completely went out of control and are now applied wrongly and stifle innovation instead of encouraging it (their original goal)
I agree, that is what I wrote, but in different words. :thumbsup:
In reaction to CTho9305:
I agree that some points in the text are a bit exaggerated.
On other points I will take your word for it. :laugh:
I have knowledge of a CPU's inner workings, but not of every detail.
Nice to learn something. :thumbsup:
I read something in the past about those bottlenecks: the functional units inside the core can never be kept busy all the time (even under ideal circumstances, with branch prediction being right all the time).
But doesn't that have to do with the limits of the x86 architecture?
The guy doesn't seem to know as much as he thinks he knows. He talks about complexities in x86 instruction decoding and how decode limitations cost a lot of performance (e.g. on Intel chips, instructions need to occur in certain patterns for maximum decode throughput) but misses the fact that there are other bottlenecks that limit performance more. Even if decoder limitations never caused idle cycles, performance wouldn't go up drastically. For what it's worth, he completely ignores the fact that the AMD CPUs can decode multiple complex instructions in a single cycle and just focuses on Intel's decoder's limitations.
What are the limitations of the AMD architecture, then, that cause this problem?
Or of the Intel CPUs, for that matter?
I am really interested!
I always like to read the articles on these sites:
http://www.realworldtech.com/
and here
http://arstechnica.com/articles/paedia/cpu.ars
and this
http://www.lostcircuits.com/
I find it very interesting.