
I'm using Fedora Core 4 and I think I want to use something else...

Space *is* a necessity because not everything is a PC with 10+ gig HDs. You know... things like embedded hardware... compact flash and ROM based storage.

And in those cases you'll most likely be cross-compiling and building fs images on a completely separate machine. You wouldn't use a normal distro of any kind because you would want to keep the number of writes down, since CF drives wear out relatively quickly.
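
For instance, on a CF-backed box you would typically mount the flash read-only and push the chatty, frequently-written paths into RAM. A rough /etc/fstab sketch (the device name and tmpfs sizes are placeholder assumptions, not from any specific board):

  # root filesystem on the CF card, read-only and with atime updates off
  /dev/hda1    /         ext2     ro,noatime     0 0
  # keep frequently-written paths in RAM instead of on the flash
  tmpfs        /tmp      tmpfs    size=16m       0 0
  tmpfs        /var/log  tmpfs    size=8m        0 0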

To be quite blunt, if I wanted bloat I would stick with Windows. Ubuntu is nice, so are Mandriva and SUSE, but when it comes to making a system with a purpose other than *basic* computing needs, binary distros are just as guilty of being fatware as Windows, or more so.

They're only guilty of that if you don't know how to use the package manager and want to blame them instead. Saying you compile everything to avoid a few hundred K here or there is stupid and just wastes a lot of your time.

If Slack were precompiled for anything other than i486, I would use it for most server installs.

Yea, because who wants automated patch management and a support contract for an important server?

What you're saying, in effect, is that whatever I have to say is a matter of indifference to you... which is rude, lacks objectivity, and is very condescending.

I'm objective, but in my experience most commercial developers (especially those that do VB, Java and the like) have no idea how the underlying OS that they're coding for works. I do know some that have a good understanding of OS concepts and what all of the tunable knobs in the OS do, but they're a lot rarer IME.
 
Originally posted by: Nothinman
And in those cases you'll most likely be cross-compiling and building fs images on a completely separate machine. You wouldn't use a normal distro of any kind because you would want to keep the number of writes down, since CF drives wear out relatively quickly.

It was an example of space being important... you seem to not care about space.

They're only guilty of that if you don't know how to use the package manager and want to blame them instead. Saying you compile everything to avoid a few hundred K here or there is stupid and just wastes a lot of your time.

If it were a few hundred K, then only CPU optimizations would make a difference. But I didn't say I saved just a few hundred K; you did. So again... IYE... not mine... where I have saved hundreds of MB by not allowing certain things into my install.

IYO it's stupid to save so little space, but you're making an absolute assumption that that is all we/I/he save with Gentoo. That is condescending F.U.D. Maybe if you learned portage's power, as you imply others should do for apt-get, which your whole point revolves around, you might have a clue?
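
To give a concrete idea of what "not allowing certain things into the install" looks like with portage, USE flags turn optional features off globally. A sketch (the exact flag set here is only an example; it depends on the box):

  # /etc/make.conf
  USE="-gnome -kde -X -alsa"    # build packages without these optional features
  # then rebuild anything whose USE flags changed:
  emerge --update --deep --newuse world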

Yea, because who wants automated patch management and a support contract for an important server?

Slackware has a package manager, and there are such things as cron and bash scripts. Maybe you should read the Slack documentation and manpages for common tools under Linux. Once you have, you already have a support contract at the highest level for your important server. You don't have to purchase a Sun server and its Solaris or Linux distro to have a good, reliable, 100% warranty-covered server. You sure don't have to buy into RHES or SuSE Server to get a support contract for your Linux install. Stop adding to F.U.D. because it's getting pretty deep.
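
For example, a bare-bones nightly patch job on Slackware could be as little as the following (a sketch that assumes slackpkg, or a similar tool, is installed and pointed at a mirror; the script name is made up, and in practice you'd want its non-interactive mode, so check the docs):

  #!/bin/sh
  # /etc/cron.daily/patch-check -- hypothetical example
  slackpkg update         # refresh the package lists from the mirror
  slackpkg upgrade-all    # apply any pending updates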

I'm objective, but in my experience most commercial developers (especially those that do VB, Java and the like) have no idea how the underlying OS that they're coding for works. I do know some that have a good understanding of OS concepts and what all of the tunable knobs in the OS do, but they're a lot rarer IME.

Commercial developers for what? Applications for Windows or J2SE/EE? Thanks for making my next point: there are a variety of developers, and your experience is pretty limited if you don't know many that know the tuning knobs of the OS. It's not necessarily a bad thing to buy into someone's API and let them keep the overhead of your code 'just working' on the SDK, but that usually costs you money in the long run if there is a drastic change. .NET and Java are riddled with those problems. Those programmers are riddled with problems. The biggest example I can point out, which others will probably amen, is PeopleSoft. When you have one arm of the company that tunes the hardware and OS and another arm that programs code on that hardware, and neither knows the other's business... then you're asking for multimillion-dollar missteps.
 
Back to the original question...

...I like both Debian and Ubuntu, even for those who like compiling, since the debian/rules file for a package usually contains the right build rules for Debian and will usually build you a nice package that can be used with the other nice packages on your system. This may be significantly more annoying than finding a repository with the package you're looking for when dist-upgrade time comes, but it allows you that extra little bit of control that some people like, and others seem to think that nobody should have. (Huh?)
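
For example, rebuilding a package from source the Debian way looks roughly like this (a sketch; 'somepackage' is a placeholder, and it assumes deb-src lines in sources.list plus build-essential and fakeroot installed):

  apt-get source somepackage             # fetch and unpack the source with its debian/ directory
  apt-get build-dep somepackage          # pull in the build dependencies
  cd somepackage-*/
  dpkg-buildpackage -rfakeroot -us -uc   # use debian/rules to build an unsigned .deb
  dpkg -i ../somepackage_*.deb           # install it like any other package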
 
It was an example of space being important... you seem to not care about space.

Generally no, space isn't an issue in 99% of cases; even my notebook has a 60G disk in it. And in a situation where space is important, I doubt Gentoo would be an option either, unless you had another Gentoo machine to make binary packages on or put portage on a network drive.

If it were a few hundred K, then only CPU optimizations would make a difference. But I didn't say I saved just a few hundred K; you did. So again... IYE... not mine... where I have saved hundreds of MB by not allowing certain things into my install.

My hundred K example was per-package. But on a relatively new machine even 100M is usually less than 1% of the total space.

And CPU optimizations don't even make a difference in most packages. The only ones that really benefit from different CPU optimizations are corner cases like 3D renderers, mpeg encoders, etc. Compiling glibc with higher optimizations might help since everything uses it, but that's a touchy one because if you break glibc you'll be booting some rescue discs.
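
For clarity, the "CPU optimizations" in question are the per-architecture compiler flags Gentoo lets you set globally, along these lines (an illustrative sketch only; the exact -march value depends on your CPU):

  # /etc/make.conf
  CFLAGS="-O2 -march=athlon-xp -pipe"    # applied to every package portage builds
  CXXFLAGS="${CFLAGS}"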

IYO it's stupid to save so little space, but you're making an absolute assumption that that is all we/I/he save with Gentoo. That is condescending F.U.D. Maybe if you learned portage's power, as you imply others should do for apt-get, which your whole point revolves around, you might have a clue?

I tried installing Gentoo once a while back; portage was mind-numbingly slow compared to apt, and after like 9 hours X failed to compile properly, so I gave up rather quickly. In under an hour I had Debian back on the machine with X and everything else that I wanted.

Slackware has a package manager, and there are such things as cron and bash scripts. Maybe you should read the Slack documentation and manpages for common tools under Linux. Once you have, you already have a support contract at the highest level for your important server. You don't have to purchase a Sun server and its Solaris or Linux distro to have a good, reliable, 100% warranty-covered server. You sure don't have to buy into RHES or SuSE Server to get a support contract for your Linux install. Stop adding to F.U.D. because it's getting pretty deep.

I know I can do it; I've automated a lot of jobs where I work. But there's no reason to do it when it's been implemented 100 times over already. And support contracts from known companies are important to corporate people. If you tell them that we have a support contract with RedHat, they'll feel a lot better than if you tell them you have one with Bob's Computer Contracting. I personally wouldn't pay for it, but in most cases it's required for a contract. And a lot of times you don't even have a choice of distribution; you have to run RedHat or SuSE to be able to run Oracle in a supported configuration. If your DBAs called up Oracle and told them that they were having problems on their Slackware or Gentoo server, they would get laughed at.

Thanks for making my next point: there are a variety of developers, and your experience is pretty limited if you don't know many that know the tuning knobs of the OS

I know that there are lots of kinds of developers, but I'm still of the opinion that the majority don't understand anything but their own programs. And in most cases they don't have to, so it's not that big of a deal.

It's not necessarily a bad thing to buy into someone's API and let them keep the overhead of your code 'just working' on the SDK, but that usually costs you money in the long run if there is a drastic change. .NET and Java are riddled with those problems.

And sadly most seem to be willing to take that chance; they'd rather get a POS app out the door to customers than take the time to do it right the first time. They're always running under the belief that they can fix whatever's necessary in a patch later on. The company I work for has one app with components written in like six different languages, because as time went on and new parts were added they used newer technologies, for whatever reason, and the older stuff still hung around because no one has time to rewrite it. And every time I see some "enterprise ready" commercial app it looks like the same thing happened there, so I can't imagine that what I'm seeing is abnormal, at least in the US.

 
Originally posted by: Nothinman
it is a bug, and I've reported it. When I do it myself it works just fine. But why should I have to wait for them to fix a bug when I can do it myself?

I'm not saying you should wait, but if you found the bug and didn't report it, that's a problem. Even if you want to recompile your own kernels for whatever reason, not everyone does and not everyone will even be able to figure out why their cpufreq stuff isn't working.

What is harder: compiling your own kernel (if you know your hardware this is the easiest thing in the whole wide world), or compiling your own kernel and making a .deb package that will meet the dependencies of other packages in Ubuntu?

The process for both is virtually identical, except that just compiling the kernel saves you a few steps. And as I said, there are very few packages that depend on a specific kernel or even the existence of a kernel package. Most packages safely assume you have a kernel, since the box isn't usable without one. The only packages I can find in Debian that specify anything about a kernel are module packages, and they have to because they're not usable without that specific kernel.
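
If you want to check that yourself, apt can list the reverse dependencies of whatever kernel package you have installed, something like (substitute your own kernel package name):

  apt-cache rdepends kernel-image-2.6.8-2-386    # shows the handful of packages that reference it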

To compile a kernel .deb you essentially do everything as normal (i.e. make menuconfig, oldconfig, whatever), then run 'fakeroot make-kpkg -us -uc kernel_image' and then 'fakeroot make-kpkg -us -uc modules_image', and in the parent directory there will be a .deb for the kernel itself and one for each module package you have extracted in /usr/src/modules. The '-us -uc' switches tell make-kpkg not to sign the package, and fakeroot is necessary to get the right permissions in the .deb without doing the actual compilation as root. make-kpkg is capable of much, much more, but that's all you'll need in 99% of cases. Then you can run update-grub or edit the file yourself to add the newly installed kernel.
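
Laid out end to end, the sequence looks roughly like this (a sketch; the source directory and version are placeholders, and depending on the kernel-package version the result may be named kernel-image-* or linux-image-*):

  cd /usr/src/linux                          # your unpacked kernel source tree
  make menuconfig                            # or make oldconfig to reuse an existing .config
  fakeroot make-kpkg -us -uc kernel_image    # builds ../kernel-image-<version>_<rev>_<arch>.deb
  fakeroot make-kpkg -us -uc modules_image   # one .deb per tree under /usr/src/modules
  dpkg -i ../kernel-image-*.deb              # install the new kernel package
  update-grub                                # refresh the boot menu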

One small thing, just an FYI (and you probably already know it)

With Debian's current default mkinitrd you have to enable devfs in the kernel to make it work properly... however with the latest 2.6.13 kernel devfs is completely absent. This is a problem with the 'old-fashioned' way that mkinitrd uses to make initrd images; they're building a replacement.

In order for me to get 2.6.13 I had to install the yaird package and use that to build the initrd image. It requires a 2.6.8 or newer kernel for it to work properly.
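
For reference, the rough shape of what I did was something like the following (a hedged sketch from memory; the -o/--output flag and the kernel-img.conf wrapper path are assumptions, so double-check the yaird documentation):

  apt-get install yaird
  # build the image for the new kernel directly:
  yaird -o /boot/initrd.img-2.6.13 2.6.13
  # or point kernel-package at it, so 'make-kpkg --initrd' uses yaird instead of mkinitrd;
  # assumed wrapper path, verify what the package actually installs:
  #   in /etc/kernel-img.conf:   ramdisk = /usr/sbin/mkinitrd.yaird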
 
With Debian's current default mkinitrd you have to enable devfs in the kernel to make it work properly

No, you don't. I haven't ever had devfs enabled in my kernels and mkinitrd works fine, the only downside is a simple message about not being able to umount filesystems of type devfs on bootup.

however with the latest 2.6.13 kernel devfs is completely absent.

Just the config options were removed; the code is still there because every time GregKH posts a patch to remove it, someone cries.

 
Originally posted by: Nothinman
With Debian's current default mkinitrd you have to enable devfs in the kernel to make it work properly

No, you don't. I haven't ever had devfs enabled in my kernels and mkinitrd works fine, the only downside is a simple message about not being able to umount filesystems of type devfs on bootup.

Yeah, there is. But not always.

In my case it causes problems because I have root on LVM and mkinitrd uses devfs to set that up. yaird uses sysfs to get the information it needs instead.

however with the latest 2.6.13 kernel devfs is completely absent.

Just the config options were removed; the code is still there because every time GregKH posts a patch to remove it, someone cries.

You're right about that. I remember reading that somewhere now. But I don't know how to enable it... (plus I've hated devfs since I first ran into it when I tried out Gentoo long ago.) 🙂

edit:

The future replacement for initrd-tools that will be completely devfs-free will (possibly) be called initramfs-tools. (See Bug#315654; that's for booting an encrypted root, but I think it's close to the same problem I've run into.)
 