
Linux Qs: What is the advantage of compiling yourself?

Originally posted by: Nothinman
Who starts off using OSS anyways? Besides, ...

I don't feel like quoting everything, so I will just make some points.

I do not believe in the "everything should be free and open source" license bull crap. I do not care if I have software like UT2004 and its 'restrictive' license gets on people's nerves. I want my package manager to handle it. Gentoo is not breaking any laws by having their package manager handle it. Why, you ask? How is this possible? I'll tell you. An ebuild is not a binary; it is not the program. It does not distribute anything. If I type emerge ut2004 it will ask for my UT2004 DVD. Then it will install UT2004 for me. It will also download and install all the patches for me from legit mirrors. Some ebuilds actually require you to get the source yourself and put it in /usr/portage/distfiles. For example, Sun's Java. We can't have an ebuild download it from Sun due to licensing issues. So I go to Sun's website, download the tar, stick it in /usr/portage/distfiles, type emerge sun-jre, and be done with it. I can do this with Cedega as well. Any commercial software can have an ebuild. You simply make the user get the data either by inserting the CD, putting a tar in /usr/portage/distfiles, or having the ebuild download the tar directly from the distributor. If you look at the Opera ebuild you will find it actually downloads the Opera installer from http://snapshot.opera.com/unix/, runs it in a sandbox, and then installs it on your system.
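To make the "fetch it yourself" flow concrete, here's a toy shell sketch of the check such an ebuild effectively performs. The distfile name is invented, and a temp directory stands in for /usr/portage/distfiles:

```shell
#!/bin/sh
# Sketch of a fetch-restricted install check (names are hypothetical).
distdir=$(mktemp -d)            # stands in for /usr/portage/distfiles
tarball="sun-jre-demo.tar.gz"   # hypothetical vendor download

have_distfile() {
    # True once the user has dropped the vendor tarball in place.
    [ -f "$1/$2" ]
}

if have_distfile "$distdir" "$tarball"; then
    echo "found, unpacking"
else
    echo "please fetch $tarball from the vendor and put it in $distdir"
fi

# Simulate the user downloading the file themselves; the same emerge
# run would then proceed normally.
touch "$distdir/$tarball"
have_distfile "$distdir" "$tarball" && echo "found, unpacking"
```

The point is that the package manager never redistributes the restricted file; it only verifies the user obtained it.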

This is a strength, not a weakness. To say otherwise just means you're an open source nutrider. There is no reason Debian couldn't do the same thing other than politics, and those political reasons make my system harder to patch and maintain.

I also pointed out my reasons for not installing libraries. It is not a size issue. It is a security one. If I don't have a library and an exploit is found in it, I do not have to worry about it. It's not much, but I think it is worth the time.

Firefox problems: because of the way Gentoo does its installs, Firefox won't have issues until the new one is 100% installed. And while it is compiling, Firefox is 100% usable. You just have to love sandboxes.

And finally, portage is not yet bi-arch; they just supply binary ebuilds that are designed to use /lib32 for the 32-bit apps we can't live without yet. Ubuntu does this too, but not nearly far enough (firefox and mplayer are not enough for most of us). If Ubuntu gets proper bi-arch support (and I doubt it will be anytime soon), that will be a HUGE feature that will force me to give it a try on 64-bit. I know the Gentoo guys are not even looking at making portage bi-arch, because of politics.

Also, in regards to dependency checking: I believe they chose to make it work that way to give you the option of keeping dependencies. I don't know how this is useful, but it must be useful to someone. I personally have emerge wrapped in a script I wrote that does dependency checking for me. This way I never have to run depclean. I have had to run revdep-rebuild on occasion, but only because my script was dumb and removed a needed library (this happened once, when I first wrote it). I also had to run revdep-rebuild once when a library used by gaim 2.0 beta 2 got updated separately and gaim was no longer linked properly or something. A revdep-rebuild noticed the problem and rebuilt gaim.
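For illustration, here is a toy version of the kind of reverse-dependency check such a wrapper might do before letting anything be unmerged. The dependency table and package names are invented; a real wrapper would query portage instead:

```shell
#!/bin/sh
# Toy reverse-dependency check (the table below is made up for illustration;
# a real script would ask portage which packages link against a library).
rdeps_of() {
    case "$1" in
        qt)   echo "kde" ;;
        arts) echo "xmms kde" ;;
        *)    ;;
    esac
}

safe_unmerge() {
    needed_by=$(rdeps_of "$1")
    if [ -n "$needed_by" ]; then
        echo "refusing to unmerge $1: still needed by $needed_by"
        return 1
    fi
    echo "unmerging $1"
}

safe_unmerge qt || true   # refused: kde still needs it
safe_unmerge ccze         # nothing depends on it, so it goes
```

Refusing up front is what saves you from ever needing revdep-rebuild after a careless removal.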

As for packages: the only number I could find was from 2003, and it said around 5000 packages. However, I know a lot of programs I use do not have packages in Ubuntu or Debian. This is mainly because they are 64-bit, I suspect. Also because a lot of them are not under OSS licenses.
 
Originally posted by: Nothinman
If only CUPS was developed under a less restrictive license.

I never really noticed since the GPL is good enough for me, but now that you mention it I'm a little surprised we haven't seen an OpenCUPS from the OpenBSD guys yet.

They've been too busy. Building the pieces that help run the internets. 😛 Not to mention rewriting broken GNU things. 😉

Is sid the old, older, or oldest version?

sid is unstable, it's the equivalent of -CURRENT. And in general, it's not old.

Why do they create silly names for these things instead of just making sense? Obfuscation for the joy of feeling elite?
 
They've been too busy. Building the pieces that help run the internets. Not to mention rewriting broken GNU things.

Writing BSD licensed implementations of OSPF, BGP and NTP makes sense, but CVS sucks, so if you're going to write a new versioning tool why not start over completely and implement something better?

Why do they create silly names for these things instead of just making sense? Obfuscation for the joy of feeling elite?

They're codenames, the releases have real numbers (i.e. sarge was 3.1) and can also be referred to as oldstable, stable, testing and unstable if you like.
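For what it's worth, the codename and the suite alias are interchangeable in apt configuration too; a sketch of a sources.list entry (the mirror URL is the standard Debian one):

```
# /etc/apt/sources.list -- these two lines point at the same suite,
# since sid is permanently the unstable distribution:
deb http://ftp.debian.org/debian sid main
deb http://ftp.debian.org/debian unstable main
```

For testing/stable the alias tracks whichever codename currently holds that role, which is exactly why both naming schemes exist.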
 
Originally posted by: Nothinman
The speed difference is not trivial. You get The Snappy more than distros not made for your CPU.
It's called the placebo effect.
Alright, I'll take you up on that (er, once I'm back to using it, I'm having issues with new E17 and somehow broke KDE 🙂). Is there a Linux desktop benchmarking script/app worth anything, to compare with Sarge and Ubuntu?
 
Originally posted by: Nothinman

So? Debian has things like mysql, ldap, pcre, etc for postfix as separate packages, so you just install the ones you want and you've got the exact same thing in less time.
No fool
Postfix can use hardened, ipv6, ldap, mailwrapper, mbox, mysql, nis, pam, postgres, sasl, selinux, ssl, vda

With prebuilt packages you get everything, or only what the maintainer deems as important.
If I want postfix installed with only pam, vda, and ldap support, it takes one command:
USE="-hardened -ipv6 ldap -mailwrapper -mbox -mysql -nis pam -postgres -sasl -selinux -ssl vda" emerge postfix
If I want it hardened, I take the "-" away from the front of hardened. You can't do that with pre-built binaries unless they have one for every possible combination of USE flags.
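If you don't want to retype that monster every time, Portage can also record per-package flags in a file; a sketch (assuming the package lives at mail-mta/postfix in the tree):

```
# /etc/portage/package.use -- persistent per-package USE flags
mail-mta/postfix ldap pam vda -hardened -ipv6 -mailwrapper -mbox -mysql -nis -postgres -sasl -selinux -ssl
```

After that, a plain emerge postfix picks the flags up automatically.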

Why do things always turn into Debian vs Gentoo? I didn't say Debian was bad, but I prefer portage and its ultimate customizability. If the deb maintainers compile KDE with arts support, you have no choice but to install arts.
 
Are Gentoo & Slackware only LiveCDs? What about Fedora or Debian? Also, can you upgrade from version to version automatically within the OS, or do you have to download and reinstall it?
 
Originally posted by: Nothinman
They've been too busy. Building the pieces that help run the internets. Not to mention rewriting broken GNU things.

Writing BSD licensed implementations of OSPF, BGP and NTP makes sense, but CVS sucks, so if you're going to write a new versioning tool why not start over completely and implement something better?

Because relearning the tools would be a PITA. Because they have over 10 years worth of code already in CVS.

They plan on expanding on it once it's production ready, but first they just wanted a seamless transition from the current steaming pile they're using.

BTW I'm pretty sure they did test out subversion, openpm(?), and a couple of others. None of them could handle the load gracefully.

Why do they create silly names for these things instead of just making sense? Obfuscation for the joy of feeling elite?

They're codenames, the releases have real numbers (i.e. sarge was 3.1) and can also be referred to as oldstable, stable, testing and unstable if you like.

It just makes more sense to me to call it by something recognizable by someone that isn't intimately familiar with Debian.
 
Originally posted by: sebastienmaass
Are Gentoo & Slackware only LiveCDs? What about Fedora or Debian? Also, can you upgrade from version to version automatically within the OS, or do you have to download and reinstall it?

No, those are all full blown distros.
 
Some ebuilds actually require you to get the source yourself and put it in /usr/portage/distfiles. For example, Sun's Java. We can't have an ebuild download it from Sun due to licensing issues. So I go to Sun's website, download the tar, stick it in /usr/portage/distfiles, type emerge sun-jre, and be done with it. I can do this with Cedega as well. Any commercial software can have an ebuild. You simply make the user get the data either by inserting the CD, putting a tar in /usr/portage/distfiles, or having the ebuild download the tar directly from the distributor. If you look at the Opera ebuild you will find it actually downloads the Opera installer from http://snapshot.opera.com/unix/, runs it in a sandbox, and then installs it on your system.

And the same thing can be done with Debian, look at the make-jpkg tool for example.

I also pointed out my reasons for not installing libraries. It is not a size issue. It is a security one. If I don't have a library and an exploit is found in it, I do not have to worry about it. It's not much, but I think it is worth the time.

And yet you are forced to have a compiler and all the development libraries installed on your system, so it's that much easier for an attacker to compile his exploits once he gets in.

Firefox problems: because of the way Gentoo does its installs, Firefox won't have issues until the new one is 100% installed. And while it is compiling, Firefox is 100% usable. You just have to love sandboxes.

Extracting FF from a .deb takes like 5s on my machine so there's virtually no time where the new one isn't installed 100%, that's not the point. Once the new FF is installed, 1% or 100%, the XUL files on disk won't mesh with the ones you have in memory and FF will have problems.

And finally, portage is not yet bi-arch; they just supply binary ebuilds that are designed to use /lib32 for the 32-bit apps we can't live without yet. Ubuntu does this too, but not nearly far enough (firefox and mplayer are not enough for most of us). If Ubuntu gets proper bi-arch support (and I doubt it will be anytime soon), that will be a HUGE feature that will force me to give it a try on 64-bit. I know the Gentoo guys are not even looking at making portage bi-arch, because of politics.

And Debian doesn't do the /lib32 thing because it's a hack, they'd rather take their time and do it right. 32-bit chroots work fine right now even if they waste some space.

I believe they chose to make it work that way to give you the option of keeping dependencies. I don't know how this is useful, but it must be useful to someone.

Then the defaults should be inverted, having the "break my dependency tree without warning" option being the default is just plain stupid.

I personally have emerge wrapped in a script I wrote that does dependency checking for me.

So you actually had to fix your package manager yourself?

With prebuilt packages you get everything, or only what the maintainer deems as important.
If I want postfix installed with only pam, vda, and ldap support, it takes one command:
USE="-hardened -ipv6 ldap -mailwrapper -mbox -mysql -nis pam -postgres -sasl -selinux -ssl vda" emerge postfix
If I want it hardened, I take the "-" away from the front of hardened. You can't do that with pre-built binaries unless they have one for every possible combination of USE flags.

I know what you meant, but 99% of the time it's f'ing worthless to jump through those hoops. And if you're really dead set on compiling things from scratch you can use 'apt-get source' and build it yourself.

Because relearning the tools would be a PITA. Because they have over 10 years worth of code already in CVS.

Once the BitKeeper license was pulled from the kernel, Linus and his people wrote git and switched to it extremely quickly. He even imported the entire bk history into his git tree. It's not quite 10 years because they weren't using bk that long, but it's still a huge number of changelog entries.
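As a toy demo of that pull-based workflow (two throwaway local repositories, no server; all names and paths are made up):

```shell
#!/bin/sh
# Minimal demo of the distributed model: a maintainer tree and a
# contributor clone, with changes flowing via "pull", no central repo.
set -e
work=$(mktemp -d)
cd "$work"

# the maintainer's tree
git init -q upstream
git -C upstream -c user.email=l@example.org -c user.name=maintainer \
    commit -q --allow-empty -m "initial"

# a developer clones it and commits to his own tree
git clone -q upstream contributor
git -C contributor -c user.email=d@example.org -c user.name=dev \
    commit -q --allow-empty -m "feature work"

# "pull from my tree, here's what's changed"
git -C upstream pull -q ../contributor HEAD

count=$(git -C upstream log --oneline | wc -l)
echo "$count"   # both commits are now in the maintainer's tree
```

Nobody needed commit access to anyone else's repository; the maintainer chose what to pull.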

BTW I'm pretty sure they did test out subversion, openpm(?), and a couple of others. None of them could handle the load gracefully.

That's one of the reasons Linus created git, the current tools out there couldn't deal with the history and usage patterns of the Linux kernel devs.

It just makes more sense to me to call it by something recognizable by someone that isn't intimately familiar with Debian.

RedHat users understand rawhide, Debian users understand sid or unstable, BSD users understand -CURRENT, etc. There's no universal term.
 
Originally posted by: Nothinman
Because relearning the tools would be a PITA. Because they have over 10 years worth of code already in CVS.

Once the BitKeeper license was pulled from the kernel, Linus and his people wrote git and switched to it extremely quickly. He even imported the entire bk history into his git tree. It's not quite 10 years because they weren't using bk that long, but it's still a huge number of changelog entries.

Everyone involved with the kernel is using git now? We're talking every developer with commit privs. Plus users who want to download the latest and greatest.

OpenBSD has like 10 years and 4 months worth of source, and I think I read the source is around 300MB these days (not sure if that includes X and the ports tree). I'm not really sure how CVS handles branches/tags, but there is one for every release (NetBSD 1.1 through OpenBSD 3.8), and several smaller branches (UBC) throughout. So that's almost 10.5 years of a LOT of source, and who knows how many years of familiarity with CVS. ~22 CVS mirrors, not to mention branches individual developers have on their own machines, or even mirrors users set up for themselves.

BTW I'm pretty sure they did test out subversion, openpm(?), and a couple of others. None of them could handle the load gracefully.

That's one of the reasons Linus created git, the current tools out there couldn't deal with the history and usage patterns of the Linux kernel devs.

The OpenBSD devs know CVS. They've been using it for a while. They didn't like the security issues, so they decided to write a drop in replacement. Once it gets to that point they have mentioned wanting to add features.
Short interview with one of the gentlemen working on OpenCVS: http://nedbsd.nl/modules/static/page/JorisVinkInterview/

It just makes more sense to me to call it by something recognizable by someone that isn't intimately familiar with Debian.

RedHat users understand rawhide, Debian users understand sid or unstable, BSD users understand -CURRENT, etc. There's no universal term.

Unstable, testing, and stable make more sense than sid. I don't care about universal terms, just something that can give you an idea of what's going on without having to look it up every time.
 
The speed difference is not trivial. You get The Snappy more than distros not made for your CPU.
It's called the placebo effect.
It's incredible how much performance difference people report between Firefox security updates that don't contain any changes that show up in real performance tests.

This is only true on software you do not have installed, I can still use firefox while portage compiles a new version of firefox. Only the initial install takes any time, and on really big packages you could just install bins anyways if you really cared. A lot of my friends do bins of thunderbird and firefox. I do not do any bins. The time is a non issue to me.
Firefox is a good example because if you install a new version while running the old one it'll overwrite some of the XUL files and random parts of FF will stop working. I've even seen the File->Quit command not work because the XUL on disk didn't work with what was already loaded into memory.
WTF? Are they disabling the XUL cache (really really really really really stupid), or linking the eventual binary back to the build directory (somewhat stupid)?

Why do they create silly names for these things instead of just making sense? Obfuscation for the joy of feeling elite?
They're codenames, the releases have real numbers (i.e. sarge was 3.1) and can also be referred to as oldstable, stable, testing and unstable if you like.
sid = Still In Development, doesn't it?

What's stupid about setting your own CFLAGS is that you can't possibly be as familiar with the package as its maintainers - sure, for some apps -O3 might help, but for others it may not, and the people maintaining binary packages have probably figured that out already and picked the best flags.

Any argument that you can remove more unwanted features with gentoo is moronic because you're burning much more space with source code + source deps than you'd waste with a slightly more feature-full binary, and more time in the compile than you'll lose by any minor performance difference over the next decade.
 
"Any argument that you can remove more unwanted features with gentoo is moronic because you're burning much more space with source code + source deps than you'd waste with a slightly more feature-full binary, and more time in the compile than you'll lose by any minor performance difference over the next decade."

But would you argue that not having some features is more secure?
 
Everyone involved with the kernel is using git now? We're talking every developer with commit privs. Plus users who want to download the latest and greatest.

No users have commit privileges because there's no central repository: Linus has his own personal tree on kernel.org and he pulls changes from other people's trees into his own. When someone has a patch they want included, they say "Pull from this git tree, here's what's changed". Some people still use old-style patches in email, but just about all of the main developers have git trees set up now. And you don't need to use git to track the latest and greatest; daily snapshots are put on kernel.org that can be retrieved via ftp, but using git is better because it saves bandwidth and is faster.

OpenBSD has like 10 years and 4 months worth of source, and I think I read the source is around 300MB these days (not sure if that includes X and the ports tree). I'm not really sure how CVS handles branches/tags, but there is one for every release (NetBSD 1.1 through OpenBSD 3.8), and several smaller branches (UBC) throughout. So that's almost 10.5 years of a LOT of source, and who knows how many years of familiarity with CVS. ~22 CVS mirrors, not to mention branches individual developers have on their own machines, or even mirrors users set up for themselves.

Pretty much all of that applies to Linux as well, except of course there's no userland code, but all of them seem to have coped well enough. And just FYI, I looked and a freshly extracted 2.6.14.3 kernel is 244M.

WTF? Are they disabling the XUL cache (really really really really really stupid), or linking the eventual binary back to the build directory (somewhat stupid)?

Couldn't tell you, I don't use FF very often so I never investigated. But what happens is after an update certain things break and I get XUL errors, so I assumed it was because some of the XUL on disk changed.

sid = Still In Development, doesn't it?

Actually it's the name of the kid in Toy Story who's always breaking his toys, but I think "still in development" was mentioned after the fact.

But would you argue that not having some features is more secure?

Any security you get by removing features is undone by giving the attacker a compiler and all of the headers, archives for linking, etc.
 
"Any security you get by removing features is undone by giving the attacker a compiler and all of the headers, archives for linking, etc."

Which are the first things I install on any Linux box I build anyway, because I develop software. But not having the library can decrease the number of possible ways for the attacker to get in.
 
Originally posted by: Nothinman
Everyone involved with the kernel is using git now? We're talking every developer with commit privs. Plus users who want to download the latest and greatest.

No users have commit privileges because there's no central repository: Linus has his own personal tree on kernel.org and he pulls changes from other people's trees into his own. When someone has a patch they want included, they say "Pull from this git tree, here's what's changed". Some people still use old-style patches in email, but just about all of the main developers have git trees set up now. And you don't need to use git to track the latest and greatest; daily snapshots are put on kernel.org that can be retrieved via ftp, but using git is better because it saves bandwidth and is faster.

Why no central repository? 😕

Pretty much all of that applies to Linux as well, except of course there's no userland code, but all of them seem to have coped well enough. And just FYI, I looked and a freshly extracted 2.6.14.3 kernel is 244M.

That's a big kernel. :Q
 
Why no central repository?

Why use one? Distributed development is the in thing now. Everyone's doing it. =) Monotone, svk, bazaar-ng, Darcs, etc.

That's a big kernel.

Yea, I thought the same thing when I saw that but ~100M of it is just drivers. The biggest offenders seem to be network (22M), scsi (17M) and usb (11M).
 
I hate to break it to you, but once someone is actually on the machine it doesn't matter if you have a compiler or not. If they have full control they can install their own compiler.
If you use Linux and have never had to compile anything (be it a driver, the kernel, or an app that is missing from your package management system), then you probably should be using Windows anyway.

Examples:
I don't want the default kernel with every module known to man. I want my own monolithic kernel, custom-tailored with only the things needed to run my hardware. Again, that leaves fewer avenues of attack. You need a compiler to do that.

With the postfix example again: if there is an ssl exploit, and I don't have ssl compiled into postfix, then I don't have to worry about that exploit. Why is that hard to understand?

Not having what you don't need is a small step in the direction of security.
 
I hate to break it to you, but once someone is actually on the machine it doesn't matter if you have a compiler or not. If they have full control they can install their own compiler.

Probably. But, given that most of the attacks are done by script kiddies, if you can make it so their exploits won't compile they'll probably move on to an easier target. Is that a good policy? Not really, but you should do everything in your power to make their life difficult and IMO not giving them a compiler is a much bigger piece of that puzzle than removing LDAP support from postfix.

If you use Linux and have never had to compile anything (be it a driver, the kernel, or an app that is missing from your package management system), then you probably should be using Windows anyway.

Not true, packages are provided for pretty much everything you can think of, including things like the fglrx and nvidia drivers.

I don't want the default kernel with every module known to man. I want my own monolithic kernel custom-tailored with only the things needed to run my hardware. Again that leaves fewer places of attack. You need a compiler to do that.

Pointless. Even if you remove kernel module support an attacker can mess with /dev/kmem and do whatever they want. Fedora has some patches that restrict /dev/kmem usage, but I doubt your custom kernel includes those patches. All you're doing is making life more difficult for yourself, because when you buy a new NIC or plug in a USB hard disk with an odd filesystem on it, you won't have the driver available without compiling it.

With the postfix example again: if there is an ssl exploit, and I don't have ssl compiled into postfix, then I don't have to worry about that exploit. Why is that hard to understand?

And without SSL support everything is out in plain-text anyway so an attacker might as well concentrate on intercepting a username and password instead.

Not having what you don't need is a small step in the direction of security.

Yes, but you're giving too much credit to the wrong things.
 
Originally posted by: Nothinman
So? You timed one small app and that's supposed to be considered the norm?

time aptitude install ccze
real 0m7.567s
user 0m6.090s
sys 0m0.950s

Woohoo, look at that, Debian is ~21x faster than Gentoo!!!!!

I was just saying that you don't have to sit around waiting forever for your programs to install...

Originally posted by: Nothinman
I don't have a Gentoo box to check the man pages for emerge, but if that's the long version of -D, which needs to be added to -u to have it do dependency checking on removal, why should you have to remember to do that? The fact that it's not on by default is braindead.

-D is --deep. I don't believe --depclean has a short version.

--depclean
    Determines all packages installed on the system that have no explicit reason for being there. emerge generates a list of packages which it expects to be installed by checking the system package list and the world file. It then compares that list to the list of packages which are actually installed; the differences are listed as unnecessary packages and then unmerged after a short timeout.



Originally posted by: Nothinman
So? Debian has things like mysql, ldap, pcre, etc for postfix as seperate packages so you just install the ones you want and you've got the exact same thing in less time.

It's not the same idea. In Ubuntu/Debian, when you install xmms it has support for alsa, oss, arts, esd, etc all built in. I.e., if you have arts installed, you can open up xmms and set its output plugin to arts. This is installing xmms with arts support. It doesn't mean that arts is installed alongside xmms, it just means that xmms can use arts if it is available.

In gentoo, you have to set a USE variable to tell portage to build xmms with arts support. Having this option of compiling support in has 2 nice benefits, one gentoo-specific, and one that would be good to have with every distro.

The gentoo-specific benefit is that if you have -arts set as a USE variable, any package that normally comes with arts support is compiled without it. This way, you don't have to install arts to have the source code there so you can use it to build xmms. Basically, it kills off a ton of dependencies if you decide that you don't want to have to be dependent on them.

The more general benefit is (continuing with my specific example... seems funny, but just for argument's sake here...), let's say that there is a security issue with the arts plugin for xmms. Until a patch is sent out, debian users have to uninstall xmms from their systems in order to be secure. Gentoo users can just set -arts and recompile xmms.

Originally posted by: Nothinman
You'd think that Gentoo would be able to add ebuilds faster since they don't have to wait for any autobuilders or QA.

There are plenty of ebuilds in portage. I'm pretty sure there are more in portage than there are in apt. QA? There's stable, ~ and -.

Originally posted by: Nothinman
Talk to the people that released those games under such restrictive licenses. I haven't checked, but I don't think you can legally redistribute them without consent so I'd bet that Gentoo's ebuilds are doing so illegally.

The ebuild is just an install script. They don't break licenses. Stick your game cd in and type emerge <whatever>. It'll install the game for you.

Originally posted by: Nothinman
There are over 17,000 packages in sid, I can't imagine what you couldn't find in there. And Ubuntu does have fewer officially supported packages, but universe and multiverse are pretty much just recompilations of the rest of sid, so you still have access to all of those packages.

Opera (which is free for download btw), texmaker, mplayer, installers for a lot of 3rd party licensed software... a bunch of other stuff. Why is automatix so popular with ubuntu even with universe and multiverse enabled? Gnomebaker... that's just the stuff I remember trying to find and not being able to.
 
I was just saying that you don't have to sit around waiting forever for your programs to install...

But if it's anything more than a small util, you do.

-D is --deep. I don't believe --depclean has a short version.

As I said, I don't have a Gentoo box to verify what you're talking about on, which I'm proud of.

--depclean
    Determines all packages installed on the system that have no explicit reason for being there. emerge generates a list of packages which it expects to be installed by checking the system package list and the world file. It then compares that list to the list of packages which are actually installed; the differences are listed as unnecessary packages and then unmerged after a short timeout.

Which is the opposite of what I'm talking about. If you 'emerge -u qt' (or whatever the QT package is called) it'll remove it without warning and break all of KDE because no real dependency checking is done, that's what I'm talking about.

It's not the same idea. In Ubuntu/Debian, when you install xmms it has support for alsa, oss, arts, esd, etc all built in. I.e., if you have arts installed, you can open up xmms and set its output plugin to arts. This is installing xmms with arts support. It doesn't mean that arts is installed alongside xmms, it just means that xmms can use arts if it is available.

So? If it's a real problem, file a bug and the maintainer will separate the arts package out. AFAICT xmms doesn't depend on anything arts-related, so what do I care? A bigger problem is that xmms is done in GTK and as such pulls in all kinds of GTK 1.2 plus all of its crap.

The gentoo-specific benefit is that if you have -arts set as a USE variable, any package that normally comes with arts support is compiled without it. This way, you don't have to install arts to have the source code there so you can use it to build xmms. Basically, it kills off a ton of dependencies if you decide that you don't want to have to be dependent on them

And if you decide to switch to Gnome later on you have to recompile half your packages, yay.

The more general benefit is (continuing with my specific example... seems funny, but just for argument's sake here...), let's say that there is a security issue with the arts plugin for xmms. Until a patch is sent out, debian users have to uninstall xmms from their systems in order to be secure. Gentoo users can just set -arts and recompile xmms.

And if there is a security problem with 'ls' you'll have to worry about the latest update to coreutils, and there's not much you can do about that. If you're worrying about whether you have ARTS, esd, ALSA, etc support you're worrying too much and need to get out of the house more often. Chances are more likely that ARTS is the least of your problems.

There are plenty of ebuilds in portage. I'm pretty sure there are more in portage than there are in apt. QA? There's stable, ~ and -.

Packages.gentoo.org says 10678, which is still pretty far behind Debian sid, which is almost ready to hit 18,000. And I know a few people who have gone from Gentoo to Debian because the packages are more consistent and of generally better quality.

The ebuild is just an install script. They don't break licenses. Stick your game cd in and type emerge <whatever>. It'll install the game for you.

Sure they can break licenses, and unless you read the license and understand it 100% you can't claim otherwise.

Opera (which is free for download btw),

Irrelevant, downloading it for personal use and redistributing it with a product (i.e. Gentoo) are separate things, and last I checked redistributing it required an agreement with Opera.

Why is automatix so popular with ubuntu even with universe and multiverse enabled?

Because people don't know any better. Up until very recently Automatix forced a lot of package installations that had the potential to break a lot of other things; LordHunter317 berated the author of Automatix into removing the force options but, personally, I still wouldn't trust it.
 
Originally posted by: Nothinman
But if it's anything more than a small util, you do.

Define waiting around. A few minutes isn't a big deal. Even something like thunderbird only takes a few. And if you REALLY need something big quickly, then there are binary versions available. Openoffice just took me 1m45s to install...

Originally posted by: Nothinman
And if you decide to switch to Gnome later on you have to recompile half your packages, yay.

You're not quite getting it. First, arts is a kde thing, but I'm sure you know that. Setting USE="-arts" will prevent the dependencies of arts from needing to be installed just so you can build xmms. However, if you did build with USE="arts" enabled, used kde, and then switched to gnome, you wouldn't have to rebuild xmms. Just keep using it. Use esd if you want, provided that you compiled xmms with esd in the first place. Or alsa. Or oss. The key here is that you don't have to build all the dependencies of a package if you don't want to, provided that you tell portage you're not going to use them. If you did set -gnome -gtk initially, and later want to use gnome and have gnome support for everything, just change your USE flags to "gnome gtk" and do an emerge -e --newuse world before going to bed.
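The flip described above boils down to two steps; a sketch from memory (double-check the flag spellings against `man emerge`):

```
# 1. In /etc/portage/make.conf, change the global flags:
#      USE="-gnome -gtk"  ->  USE="gnome gtk"
# 2. Rebuild so the new flags take effect everywhere:
emerge -e --newuse world   # -e = --emptytree; --newuse rebuilds on USE changes
```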

The ebuild is just an install script. They don't break licenses. Stick your game cd in and type emerge <whatever>. It'll install the game for you.

Sure they can break licenses, and unless you read the license and understand it 100% you can't claim otherwise.

Originally posted by: Nothinman
Irrelevant, downloading it for personal use and redistributing it with a product (i.e. Gentoo) are separate things and last I checked redistributing it required an agreement with Opera.

The opera licence is linked to on the packages.gentoo.org site's description of the package. It forbids selling, renting, leasing, or sublicensing.

Originally posted by: Nothinman
Because people don't know any better, up until very recently Automatix forced a lot of package installations which had the possibility of breaking a lot of other things LordHunter317 berrated the author of Automatix into removing the force options but, personally, I still wouldn't trust it.

Or maybe because the way ubuntu/debian is set up right now doesn't provide for a lot of users' needs? I'm not saying it's bad; they're fine distros. I had ubuntu installed on my own rig and use debian at school. Gentoo works better for me, and distros that are compiled from source have certain advantages. Sure, there are disadvantages too, but that scale will tip differently for different people.

 
Define waiting around. A few minutes isn't a big deal. Even something like thunderbird only takes a few. And if you REALLY need something big quickly, then there are binary versions available. Openoffice just took me 1m45s to install...

Your definition of a few minutes must not be the same as mine; I just built mozilla-thunderbird and it took 63m18.082s here.

You're not quite getting it.

I get it, I just think it's a pointless optimization and the only real benefit is a few K of disk space and maybe some VM space in the process that loads and doesn't use the library.

Sure they can break licenses, and unless you read the license and understand it 100% you can't claim otherwise.

It would be safer to assume that you're not in compliance and don't touch the software if you don't understand the license.

The opera licence is linked to on the packages.gentoo.org site's description of the package. It forbids selling, renting, leasing, or sublicensing.

Their site also says you have to register and agree to the terms in the multiple distribution agreement before you can distribute Opera.

Or maybe because the way ubuntu/debian is set up right now doesn't provide for a lot of users' needs?

Not that I agree, but if that's true then people should be working with Ubuntu to fix the problems instead of writing tools that work against the system to wedge software into it.
 
Their site also says you have to register and agree to the terms in the multiple distribution agreement before you can distribute Opera.

As I understand it, ebuilds pull from the official distributor's site. For example, someone who was working on a SeaMonkey 1.0 ebuild was pulling our official source tarball from our servers, then patching that.
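That workflow, fetching the untouched upstream tarball and then applying the distro's patches on top, is roughly this corner of an ebuild. The URL and patch name below are placeholders, not the real SeaMonkey ebuild:

```bash
# Fragment of a hypothetical source-based ebuild (URL and patch name are placeholders)
SRC_URI="https://example.org/releases/seamonkey-${PV}.tar.bz2"  # upstream's own server

src_prepare() {
    # distro patch applied on top of the pristine upstream source
    epatch "${FILESDIR}/${P}-gentoo-paths.patch"
}
```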
 