Benefits to compiling software?

Sureshot324

Diamond Member
Feb 4, 2003
3,370
0
71
I got into an argument with a coworker about this. He switched to Gentoo and swears that it's much faster than Ubuntu because everything is compiled specifically for his CPU and will take advantage of all of its features, such as new instruction sets and registers. He said the precompiled packages of a distribution like Ubuntu are compiled for the lowest common denominator: they'll pick a baseline CPU, say a Pentium 1, compile for that, and not take advantage of any CPU feature added after it.

I always thought that programs would automatically take advantage of whatever CPU features are available. If you install Windows XP on a Pentium 1, I'm pretty sure any app would still run (provided you have enough RAM), though perhaps slowly, and if you run those same apps on a modern CPU they'll automatically use all the features of that CPU.

The only time in Windows you typically have different compiled binaries for different CPUs is 32-bit vs. 64-bit. Why isn't it the same in Linux?
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
He's right about Gentoo.

When you compile it on your hardware, it's essentially custom made for your hardware. Gentoo is a PITA to install though and takes days to compile. Ask him about that part. ;)

Ubuntu 9.04 is better overall anyway. Why don't you both settle it by running the Phoronix benchmark suite? :beer:
 

little elvis

Senior member
Sep 8, 2005
227
0
0
I ran Gentoo for years and it's been my experience that it's not appreciably faster than any similarly configured Ubuntu, Fedora, etc install.

Since Gentoo is built from the ground up, you will likely end up with a system that has less "bloat" than Ubuntu or Fedora, etc. So any speed difference can likely be attributed to that.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
The 0.0005% speedup he sees in running an application is offset by the 5000% increase in time taken to install said app.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
He's right about the lowest common denominator with regard to 32-bit x86 software. That's why you see most packages saying i386 or similar in them. But for 99% of software out there it makes absolutely zero difference. You really think SSE is going to make logging into Gnome faster? Apps that do benefit are usually smart enough to figure out what's available and use it, like mencoder and ffmpeg. He's either feeling a placebo effect or his Gentoo install is missing something that's in the Ubuntu install that has an effect on performance.
 

xSauronx

Lifer
Jul 14, 2000
19,582
4
81
Originally posted by: little elvis
I ran Gentoo for years and it's been my experience that it's not appreciably faster than any similarly configured Ubuntu, Fedora, etc install.

Since Gentoo is built from the ground up, you will likely end up with a system that has less "bloat" than Ubuntu or Fedora, etc. So any speed difference can likely be attributed to that.

iirc i read about this a while back, but i don't know where. the article said something about how when gentoo first began, it was a bit faster because of some compiling options that most distros didn't use...and that eventually pretty much everyone started to use them, too.

someone else here probably knows more about it than i do. i cared enough to post, not to go back and verify :p
 

Sureshot324

Diamond Member
Feb 4, 2003
3,370
0
71
Originally posted by: Nothinman
He's right about the lowest common denominator with regard to 32-bit x86 software. That's why you see most packages saying i386 or similar in them. But for 99% of software out there it makes absolutely zero difference. You really think SSE is going to make logging into Gnome faster? Apps that do benefit are usually smart enough to figure out what's available and use it, like mencoder and ffmpeg. He's either feeling a placebo effect or his Gentoo install is missing something that's in the Ubuntu install that has an effect on performance.

Good response. I have a couple questions though.

Are packages marked i386 literally compiled for a 386 processor? If so, what has been added to the x86 architecture since then that would need to be enabled in compile flags? Just SSE and 64 bit?

You mentioned mencoder and ffmpeg. Can these programs take advantage of the latest SSE instructions or whatever even if they are i386 packages?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Originally posted by: xSauronx
Originally posted by: little elvis
I ran Gentoo for years and it's been my experience that it's not appreciably faster than any similarly configured Ubuntu, Fedora, etc install.

Since Gentoo is built from the ground up, you will likely end up with a system that has less "bloat" than Ubuntu or Fedora, etc. So any speed difference can likely be attributed to that.

iirc i read about this a while back, but i dont know where. the article said something about how when gentoo first began, it was a bit faster because of some compiling options that most distros didnt use...and that eventually pretty much everyone started to use them, too.

someone else here probably knows more about it than i do. i cared enough to post, not to go back and verify :p

Messing with compiler flags tends to cause more problems than it helps. If the people packaging for Debian or Ubuntu thought -O3 was that much better than -O2, don't you think they'd use it?

There is a slight benefit in that you can essentially pick your dependencies with Gentoo. Your builds only get, e.g., ALSA support if you want it, but with Ubuntu you get it because that's how it's packaged. The benefits are pretty questionable though, since you end up wasting a ton more disk space on compilers, headers, link libraries, etc. that aren't needed on a precompiled distro.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Originally posted by: Sureshot324
Originally posted by: Nothinman
He's right about the lowest common denominator with regard to 32-bit x86 software. That's why you see most packages saying i386 or similar in them. But for 99% of software out there it makes absolutely zero difference. You really think SSE is going to make logging into Gnome faster? Apps that do benefit are usually smart enough to figure out what's available and use it, like mencoder and ffmpeg. He's either feeling a placebo effect or his Gentoo install is missing something that's in the Ubuntu install that has an effect on performance.

Good response. I have a couple questions though.

Are packages marked i386 literally compiled for a 386 processor? If so, what has been added to the x86 architecture since then that would need to be enabled in compile flags? Just SSE and 64 bit?

You mentioned mencoder and ffmpeg. Can these programs take advantage of the latest SSE instructions or whatever even if they are i386 packages?

Usually yes: if it says i386, that was the compile target. Most distros target i586 or i686 these days though. Even Debian dropped support for 386s a while back.

And yes, specially coded packages like ffmpeg will look at the running CPU's flags on startup and use routines, probably written in asm, to take advantage of things like SSE. But that's not something most people care enough to do; most apps just aren't CPU-bound these days.
 

Sureshot324

Diamond Member
Feb 4, 2003
3,370
0
71
Is there anything that's been added since i686 (Pentium Pro) other than 64-bit that would require specific compiler flags?
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
I ran gentoo for a good long while. I liked that I could select exactly what support I wanted in each application. So if an application had hooks to libraries or features I didn't want I could quickly disable those.

Eventually I realized that the features I wanted were basically the features ubuntu compiles for anyway. So now I use that.

Everyone talks about compile times, but I never noticed them being a problem. Of course, I would use distcc for everything that could support it, and at the time I had a top-of-the-line processor. After the initial install, compiles would only take a minute or two for most things. The few that did require a long time, such as Firefox, didn't stop you from using your existing Firefox while the new one compiled.

It was a good learning experience, but still not as good as when I did a linux from scratch. I'm a mac guy now, but when I do have to pick linux I usually pick debian or ubuntu.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Is there anything that's been added since i686 (Pentium Pro) other than 64-bit that would require specific compiler flags?

I really couldn't say; there are lots of different models and steppings of CPUs out there, and I have no idea which flags apply to which ones.
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
Originally posted by: Nothinman
Is there anything that's been added since i686 (Pentium Pro) other than 64-bit that would require specific compiler flags?

I really couldn't say; there are lots of different models and steppings of CPUs out there, and I have no idea which flags apply to which ones.

This might help http://en.gentoo-wiki.com/wiki/Safe_Cflags

I wish gcc 4.2 had been out when I was using gentoo; -march=native would have saved me a ton of time figuring out what flags to use.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
The fact that Gentoo has a wiki page just for "safe cflags" should be sign enough to steer clear. The chances of you knowing better than the gcc and package maintainers in distros like Debian, Red Hat, etc. are pretty slim.
 

dinkumthinkum

Senior member
Jul 3, 2008
203
0
0
Kernel image packages are usually available pre-compiled for specific architectures. This is one area where it may be advantageous to have that. For the most part, you will find little to no difference between pre-compiled packages and locally compiled packages.

Windows applications are no different in these choices, except that Windows users tend to care even less about architecture-specific binaries. They also tend to never compile anything, ever. So the discussion doesn't even come up.

As for what has changed since the Pentium Pro: I know the ACPI specification evolved slightly after that, and I think PPros themselves may implement it differently. I can't say what Windows Vista supports since I'm not privy to its source code, but I wouldn't be surprised if they dropped some legacy support for platforms you could never reasonably run Vista on anyway. There are other little fiddly issues with the 386, 486, and the first Pentiums that may have been worth supporting at one time but probably aren't anymore. The last time I used a 486 with Linux was about 8 years ago.
 

MrColin

Platinum Member
May 21, 2003
2,403
3
81
As far as I can tell, the primary, and perhaps only, advantage to compiling your own is that you get control over what features/options are configured into the final binary. Apache, for example: I read that one solid binary is more secure than one configured to accept modules after install, since a malicious user could attach their own module in the case of a breach.

I haven't seen any evidence to support the claim that low level processor/kernel optimizations offer a significant performance increase over the professionally compiled ones. Theoretically, it seems possible that if you don't need some of the kernel you can get a smaller and possibly faster binary by compiling without the parts you don't need.
 

MrColin

Platinum Member
May 21, 2003
2,403
3
81
Originally posted by: SickBeast
He's right about Gentoo.
...
Ubuntu 9.04 is better overall anyway. Why don't you both settle it by running the Phoronix benchmark suite? :beer:

+1
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Apache, for example: I read that one solid binary is more secure than one configured to accept modules after install, since a malicious user could attach their own module in the case of a breach.

In order to put the module on the machine and make Apache load it, they'd have to be root, so you're already screwed at that point.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
It's like putting one of those fart cannons on your car to make it go faster.
 

postmortemIA

Diamond Member
Jul 11, 2006
7,721
40
91
unless you are going to rewrite the code yourself to make it 'run faster' before you compile, it's not going to help much. for example, if said app is designed to use only a single core, which is pretty common, then where's the advantage of compiling it yourself?
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Originally posted by: postmortemIA
unless you are going to rewrite the code yourself to make it 'run faster' before you compile, it's not going to help much. for example, if said app is designed to use only a single core, which is pretty common, then where's the advantage of compiling it yourself?

The number of cores on the machine being used to compile really has nothing to do with the final output of the compiler, and for general-purpose computing the compiler won't be doing much in terms of making the final output multi-threaded, regardless of what the code is.

What the compiler CAN do is optimize sections of code to varying degrees (loop unrolling, for example), include asm instructions that are only present on your specific microarchitecture, optimize for loading speed, or even optimize for space constraints.

Whether or not any of that really makes a difference... well, that's what this thread's discussion is about :p
 

postmortemIA

Diamond Member
Jul 11, 2006
7,721
40
91
Originally posted by: Crusty
Originally posted by: postmortemIA
unless you are going to rewrite the code yourself to make it 'run faster' before you compile, it's not going to help much. for example, if said app is designed to use only a single core, which is pretty common, then where's the advantage of compiling it yourself?

The number of cores on the machine being used to compile really has nothing to do with the final output of the compiler, and for general-purpose computing the compiler won't be doing much in terms of making the final output multi-threaded, regardless of what the code is.

What the compiler CAN do is optimize sections of code to varying degrees (loop unrolling, for example), include asm instructions that are only present on your specific microarchitecture, optimize for loading speed, or even optimize for space constraints.

Whether or not any of that really makes a difference... well, that's what this thread's discussion is about :p

you didn't understand what I meant. I said the real improvement would be rewriting the code so it's multi-threaded, and that optimizing single-threaded apps is pointless in today's multi-core world.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Originally posted by: postmortemIA
Originally posted by: Crusty
Originally posted by: postmortemIA
unless you are going to rewrite the code yourself to make it 'run faster' before you compile, it's not going to help much. for example, if said app is designed to use only a single core, which is pretty common, then where's the advantage of compiling it yourself?

The number of cores on the machine being used to compile really has nothing to do with the final output of the compiler, and for general-purpose computing the compiler won't be doing much in terms of making the final output multi-threaded, regardless of what the code is.

What the compiler CAN do is optimize sections of code to varying degrees (loop unrolling, for example), include asm instructions that are only present on your specific microarchitecture, optimize for loading speed, or even optimize for space constraints.

Whether or not any of that really makes a difference... well, that's what this thread's discussion is about :p

you didn't understand what I meant. I said the real improvement would be rewriting the code so it's multi-threaded, and that optimizing single-threaded apps is pointless in today's multi-core world.

You asked a question and I provided an answer :p

 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: Sureshot324
I got into an argument with a coworker about this. He switched to Gentoo and swears that it's much faster than Ubuntu because everything is compiled specifically for his CPU and will take advantage of all of its features, such as new instruction sets and registers. He said the precompiled packages of a distribution like Ubuntu are compiled for the lowest common denominator: they'll pick a baseline CPU, say a Pentium 1, compile for that, and not take advantage of any CPU feature added after it.

I always thought that programs would automatically take advantage of whatever CPU features are available. If you install Windows XP on a Pentium 1, I'm pretty sure any app would still run (provided you have enough RAM), though perhaps slowly, and if you run those same apps on a modern CPU they'll automatically use all the features of that CPU.

The only time in Windows you typically have different compiled binaries for different CPUs is 32-bit vs. 64-bit. Why isn't it the same in Linux?

1. Most of the extra instructions don't benefit the type of code used in operating systems. SSE et al. are for vector ops, which mainly come up in scientific or media computing.

2. A 64-bit distro automatically has SSE2 as its baseline, so it will already have the majority of the additional instructions available where they can be used.