CPU Technology - Ahead? Behind? Or right on track?


Spawne32

Senior member
Aug 16, 2004
230
0
0
Hopefully Microsoft offers an upgrade path for Windows 7 users. Rumor has it that an upgrade will be priced somewhere around 40 dollars per license, since they are desperate to get people off of XP and Vista/7. Many major companies refuse to upgrade from XP mainly because of how TERRIBLE Windows 8 was.
 

jpiniero

Lifer
Oct 1, 2010
16,851
7,296
136
Many major companies refuse to upgrade from XP mainly because of how TERRIBLE Windows 8 was.

Actually companies have finally upgraded from XP... to Windows 7. That's the main driver of Intel's recent sales surge.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
As a software/hardware guy I can't tell you how I feel when I hear pure HW people say "they'll recompile their code and get a huge speedup". Legacy is here to stay, and your job as a HW guy is to make sure you don't break performance of legacy code. That's frustrating, but that's reality :(

OTOH the inability of software guys to properly use multiple cores is definitely a SW issue (unless you have a buggy HW implementation) and a very frustrating one too...

That being said, I'm very impressed by how processors have evolved in the last 30 years, both from a process and a micro-architecture point of view. That was predicted by Moore's Law, so one shouldn't be surprised, but that doesn't mean it isn't impressive ;)

Yup... that's the reality that I've already accepted. x87, MMX and SSE will live on as well as legacy code written by people that never learned cache thrashing 101 (hah, I wonder if caches even existed when they wrote it). Oh well.
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
At one place I work, we just last month upgraded to Win7. At the other place, we are promised we'll get off XP, Real Soon Now.
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
Yup... that's the reality that I've already accepted. x87, MMX and SSE will live on as well as legacy code written by people that never learned cache thrashing 101 (hah, I wonder if caches even existed when they wrote it). Oh well.

You can't expect 90% of programmers today to have any understanding of how a cache even works, and that will continue to be true. Not to mention differences in cache implementations mean that true optimization for any given cache structure can't even be done for a generic x86 binary because of how many different systems you're targeting.
 

III-V

Senior member
Oct 12, 2014
678
1
41
You wrote that a recompilation would speed things up. How much speedup will you get if no vectorization happens? Do you know that OoOE has basically made code-scheduling-specific optimizations almost useless? So how do you think recompilation will speed things up?


Do you have examples apart from new instructions?
I can't recall anything from recent history. However, I am curious as to why you're excluding new instructions.
But hardware developers also are lazy when they say recompilation should be done to benefit from a new micro-arch.
It's lazy to put a bunch of work into something for the benefit of developers, and then ask them to utilize it?

For the record, the point I was making was that progress is being made on the hardware side of things. I wasn't trying to turn this into a "developers are lazy" conversation.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Before, I only had Comcast in my area. It was great; I had high-speed internet. However, I had constant dips in my connection due to Comcast not having enough throughput for everyone. Then, this magical thing called competition came to my area. Verizon came with a FIOS line.

Now?

I have amazing throughput with Verizon, and Comcast has also upgraded their lines in the area MULTIPLE times with Verizon also upping their speeds to provide a "competitive" rate/speed to the "competition".

I can't believe you'd seriously advocate against competition.....

We do not have anything like competition in the US with internet.

A good audio summary of this aired today:

http://www.sciencefriday.com/#path/...speed-internet-lags-behind-on-price-cost.html
 

Nothingness

Diamond Member
Jul 3, 2013
3,315
2,386
136
I can't recall anything from recent history. However, I am curious as to why you're excluding new instructions.
I was thinking about very specialized instructions such as those for AES. But now that I think of it, x87 to SSE2 might bring significant speedup on some CPUs even when not using vectorization...

It's lazy to put a bunch of work into something for the benefit of developers, and then ask them to utilize it?
I meant degrading the runtime behavior of some instructions or sequences just because it fits a new uarch better; for example, it has happened multiple times in the past that memcpy routines for Intel x86 kept changing due to uarch changes, or when ARM didn't put a pipelined FPU in the Cortex-A8. That's a pain and micro-architects should avoid it.

For the record, the point I was making was that progress is being made on the hardware side of things. I wasn't trying to turn this into a "developers are lazy" conversation.
And I was just trying to show that both SW and HW devs can do better, though most of the HW guys in my team are great and, once educated about SW, just do a better job :biggrin:
 

NTMBK

Lifer
Nov 14, 2011
10,459
5,845
136
Getting software developers to optimize is a nightmare in and of itself... but what you're saying has no relevance to the point I was making. There is only so much you can do for legacy apps aside from ramping up clock speed.

You said that things would be sped up by a recompile, which for 95% of apps isn't true. General-purpose code hasn't been significantly improved by new instructions since SSE replaced x87 for scalar FP arithmetic. Getting benefit from SSE and AVX vector ops generally requires either assembly, intrinsics, or special SIMD annotation (like the Intel compiler's '#pragma simd' vectorization hints). Or alternatively, calling out to an optimized maths library which implements such optimizations internally.

It's a long way from just being a recompile. The last time that was true was with the Pentium III.