Update Linux kernel

jamin123

Member
Apr 27, 2005
36
0
0
I am new to Linux. I have a book and a copy of the OS I got from work. I installed Enterprise Linux 4; it works great and is very easy to use. My kernel is the i686 kernel 2.6.9.5. I want to learn to update the kernel, so I went to download the latest kernel at www.kernel.org, but I only see patches. Can someone point me to the correct kernel I need to download?
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
The distro you are using should supply an updated kernel. Most users really have no need to compile their own kernel, and depending on the distro, compiling your own kernel could lead to bigger issues. Even if you do compile your own kernel, most of the time you want your distro's kernel sources and not vanilla sources (sources from kernel.org), because they carry extra patches for stability and security.

I'm assuming you are using Red Hat Enterprise Linux, which I have never personally used. But from what I've been told, you really want to use Red Hat-supplied kernels or sources (usually obtained via RPM downloads). You might check this out:

http://www.centos.org/docs/4/html/rhel-sag-en-4/ch-kernel.html

 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Well, the 'correct' thing to do would be to use only official updates from Red Hat.

Red Hat ships heavily modified kernels with specific performance and feature changes to suit different purposes. For instance, much of what they do is for proprietary software products like Oracle. So if you make changes to the kernel, or install a vanilla kernel (the official 'Linux' kernel from kernel.org), there is a chance you will introduce bugs into some products that you depend on.

However, if you're using Free/Open Source software it's generally safe, because it's more robust and most programmers test their stuff against vanilla kernels.

If you want to compile your own kernel from source code, go to kernel.org and select one of the download locations indicated on their home page. Personally I prefer to use a command-line FTP client: log in as an anonymous user to ftp.kernel.org, cd into pub and then into the various subdirectories, and download a tarball of the source tree.

The file will be labeled something like linux-&lt;kernelversion&gt;.tar.gz or .tar.bz2 and will be quite a bit larger than the other files in that directory.

After you get the source tree you can keep it up to date with the various patches on the front page, although you don't have to worry about that so much when you just want to compile a kernel for your personal use.

So after you download the tarball, copy it to the /usr/src directory and untar it:

tar zxvf linux-2.6.12.3.tar.gz
or
tar jxvf linux-2.6.12.3.tar.bz2

Then:
cd linux-2.6.12.3
make menuconfig
to launch a menu-driven program that helps you configure your kernel with different features. Then build, install the modules, and install the kernel (in that order, so the install step can find the modules):
make
make modules_install
make install

And that should be it. After that you modify your bootloader's configuration to add a menu entry for the new kernel, and then you reboot.
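For GRUB legacy (which RHEL 4 era systems use), the bootloader step means adding a stanza to /boot/grub/grub.conf. A rough sketch; the partition, root device, and file names below are placeholders you would adapt to your own setup:

```
title Custom Linux (2.6.12.3)
        root (hd0,0)
        kernel /vmlinuz-2.6.12.3 ro root=/dev/hda2
        initrd /initrd-2.6.12.3.img
```

Keep the old, known-good entry in place so you can still boot back into it if the new kernel panics.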

Sometimes you have to make a new initrd image, particularly if the driver for your root filesystem is compiled as a module, but that shouldn't be needed if you're careful.

There are lots of howtos and guides with better and more detailed information; this is just a rough overview. For example:
http://www.digitalhermit.com/linux/Kernel-Build-HOWTO.html
and there are lots of other howtos, just google around.

Also, you may have to install various development tools and development packages in order to compile anything. These should be provided by Red Hat...

Normally, if you're using Red Hat professionally, you depend on pre-compiled packages, and by installing those you can avoid the hassle of configuring and compiling everything.
 

imported_Phil

Diamond Member
Feb 10, 2001
9,837
0
0
Good grief, I come in here searching for a kernel build guide from drag, and before I've even started a search, the information's here :D :thumbsup:

Nice one as always drag ;) :beer: for you!
 

Mesix

Senior member
Apr 20, 2005
275
0
0
Compiling your own kernel is a passage into manhood. I've done it at least two dozen times.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Compiling your own kernel is a passage into manhood. I've done it at least two dozen times.

It's also largely a waste of time these days. Distribution kernels package everything possible as a module (minus NTFS for RH/FC for legal reasons) so the only thing you lose is a little bit of disk space. And chances are good that you'll end up leaving out something important the first 5-10 times you compile your kernel, just because it can take a long time to figure out what all of the options are and what requires them.

If you want to do it, no one's going to stop you. But if you're just doing it because you think you're going to make your system run faster, you're in for a pretty big letdown.
 

Mesix

Senior member
Apr 20, 2005
275
0
0
I don't like bloated kernels. And if I miss something the first time, it will, in the worst case, take two more times, which only take a few seconds after the first big one. It's not like adding stuff to a kernel is a chore.
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
I always compile my own kernels, but I've always used Debian (or now Gentoo, where you don't really have a choice but to do it yourself). But I typically don't recommend users build their own unless they have very specific needs. Otherwise the stock Ubuntu 686/k7 kernel is good enough for most people.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
The kernel isn't bloated unless you compile everything in statically. If you make everything modular (like any reputable distribution will), there's no bloat to worry about and you don't have to recompile the next time you get a new piece of hardware.
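You can see how far a distribution takes the everything-as-a-module approach by counting =y (built-in) versus =m (module) options in its kernel config. A small self-contained sketch using a made-up config fragment (on a real system the file lives at /boot/config-&lt;version&gt;):

```shell
# Count built-in (=y) vs modular (=m) options in a kernel config.
# This writes a tiny sample fragment so the commands run anywhere;
# the CONFIG_* names are just illustrative.
cat > /tmp/sample-config <<'EOF'
CONFIG_EXT3_FS=y
CONFIG_NTFS_FS=m
CONFIG_USB_STORAGE=m
CONFIG_SND=m
EOF
echo "built-in: $(grep -c '=y$' /tmp/sample-config)"
echo "modular:  $(grep -c '=m$' /tmp/sample-config)"
```

On a stock distribution kernel the modular count dwarfs the built-in count, which is exactly the point being made above.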
 

Mesix

Senior member
Apr 20, 2005
275
0
0
You keep saying that like recompiling is a big deal! It takes like 5 seconds (Plus the minute or so to open menuconfig and find and add what you need)!
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
If it only takes 5s to compile a full kernel on your machine, you've got one helluva machine.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
(On Gentoo at least) it only compiles the added portion. So if I have my compiled kernel and I go add Intel Pro/100 support (not as a loadable module), it only takes a few minutes to compile, in my experience. On a faster system a minute might do. Most of my Linux boxes are older discarded P2 500s or less, with the exception of my latest monster, a ProLiant 8000 with four P2 400s... it's cool to see all 4 procs at 100% usage while I'm emerging the system :D
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
Originally posted by: drag
Well, the 'correct' thing to do would be to use only official updates from Red Hat.

[snipped for length]

don't forget the

ln -sf linux-2.6.12 linux

(or is it ln -sf linux linux-2.6.12 ? I always forget and have to look up the order :) )
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Same difference.

Personally I try to avoid compiling my own kernels as much as possible, but lots of times I end up doing it anyways.

For instance, I have a faulty RAM module in my desktop, so instead of spending the money to replace an otherwise good memory module I use the 'badram' patches to block out the small bad section. This necessitates getting a new kernel, plus manually modifying the kernel patch to work with various different kernels.

For performance reasons it's nice sometimes... not because smaller kernels are faster or anything like that, but because some kernel patches are nice for specific purposes.

For instance, I tried out the realtime-preempt kernel patches. These patches effectively turn the general-purpose Linux kernel into an honest-to-goodness realtime kernel, like what is used in numerous embedded-style operating systems such as QNX. This is something relatively new.

It's kinda interesting... Even IRQ requests from disk or any other hardware are completely preemptible. It gives you the power to schedule your hardware! Everything from disk access to video cards is fully preemptible. Every possible aspect of the kernel. It's quite impressive.

It's very useful for anybody interested in making music on the computer, because it gives you the ability to get guaranteed latencies. This stuff is measured in milliseconds; anything under 10 ms is considered undetectable, but you have to take into account the latency of the entire system, from MIDI controller to USB to software sequencer to sound card to your monitors (speakers).

For instance, with Windows you have a sound latency of around 100-300 ms by default, because the software mixer (kmixer) it uses is very slow. Windows XP is especially poor at this sort of thing; for a long time people stuck with Win9x because of its low overhead. But since then a lot of work has been done by third parties, and you have these special 'ASIO' drivers that bypass the Windows API and work directly with some types of sound cards.

This will get you latencies of around 5-10 ms, unless you have some disk access going on, in which case it can jump up to 100-250 ms if you're unlucky and cause pops and such. For instance, if you're trying to record a few different inputs and your virus scanner kicks in, you're screwed.

With OS X you have a MUCH better solution. For a long time it had its own ASIO drivers created by third parties for OS 9 and such, but now it has 'Core Audio', which is very nice: you can get around 3-10 ms latencies by using professional-style sound cards.

The normal Linux 2.6.x kernel started off promising, but there are some issues with disk access and such; generally you can get under 30 ms with any kernel. With 2.6.11 and 2.6.12 they improved the realtime-like aspects, and it's fairly easy to get under 10 ms. However, with the realtime-preempt patches it's possible to have guaranteed latencies of under 1 ms, which was absolutely unheard of before.

Personally I would get reliable operation at around 2 ms on my system (Athlon XP 2400+, 1 GB RAM, M-Audio Audiophile sound card, jackd sound server, etc.), but I'd run jackd (as the realtime process) with latencies of 5 ms to make the system run decently...
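For what it's worth, the buffer latency jackd quotes follows from its period settings: frames per period times the number of periods, divided by the sample rate. A sketch with example numbers (illustrative values, not anyone's actual settings):

```shell
# jackd-style buffer latency: frames * periods / sample_rate
frames=256
periods=2
rate=48000
usec=$(( frames * periods * 1000000 / rate ))   # integer microseconds
echo "${usec} usec"   # 256 * 2 / 48000 s = 10666 usec, roughly 10.7 ms
```

Halving the frames per period halves the latency, at the cost of more frequent wakeups the kernel must service on time, which is exactly where guaranteed scheduling matters.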

You see, the downside is that it makes your computer slow... Not super slow, but interactive performance is very degraded while realtime processes are running. It runs single things just as fast, but granting a single process unlimited and immediate access to the CPU is not something that makes a good desktop. The lower the latency, the worse the performance gets, but it would still give near rock-solid realtime performance at whatever settings I've made for jackd.

If you're curious, check out:
http://people.redhat.com/mingo/realtime-preempt/
for the latest realtime-preempt patches.

These patches are going through rapid and intense development; sometimes they are updated hourly. This sort of thing is very important for realtime embedded developers and is a potentially big market for Linux: being able to get realtime performance but still have the full-fledged capabilities of a PC OS.

A couple of wikis:
http://www.affenbande.org/~tapas/wordpress/?page_id=3
http://tree.celinuxforum.org/CelfPubWiki/RealtimePreemption

Also, you'd need to enable Linux Security Modules (as used with SELinux) and compile realtime-lsm to grant realtime scheduling rights to specific users.


And this is a special project just for Linux audio.
http://www.agnula.org/

They have DeMuDi, which is based on Debian GNU/Linux. I believe they do most of the work of getting all this stuff working, so you don't have to mess around with kernels so much.

Linux audio is very interesting to me. It's unique because it has all these inter-process communication technologies, like jackd and LADSPA plug-ins, so you can route program outputs into each other to generate some nice sounds. No other OS really has a standard way to do this sort of thing; it tends to be proprietary to each company that produces applications. Linux lacks the big professional-level apps that you can get with Windows, but by using different programs and piping sound through each app it's possible to get professional results.

Of course, most of this is getting ported to OS X, so much of it is available for that OS, too.

http://linux-sound.org/
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
don't forget the

ln -sf linux-2.6.12 linux

(or is it ln -sf linux linux-2.6.12 ? I always forget and have to look up the order)

It's 'ln -s source destination', just like cp and mv. But you're not supposed to create the /usr/src/linux symlink; in fact, technically you're not supposed to compile the kernel under /usr/src at all. Userland apps should never look for kernel headers at /usr/src/linux; they should use the headers provided by glibc at /usr/include/linux.
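The argument order is easy to check in a scratch directory (deliberately /tmp here, not /usr/src): the target being pointed at comes first, the link name second.

```shell
# demonstrate 'ln -s <target> <link-name>' argument order
cd /tmp
mkdir -p linux-2.6.12
ln -sf linux-2.6.12 linux   # 'linux' is the new symlink
readlink linux              # prints the target: linux-2.6.12
rm linux && rmdir linux-2.6.12
```

If you get the order backwards, ln tries to create a symlink named linux-2.6.12 and fails because a directory of that name already exists, which is a handy built-in sanity check.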

I explicitly said recompiling, not compiling a full kernel from scratch.

Recompiling implies compiling already-compiled code a second time. But really, what's the big deal with compiling everything as a module up front? Why go through all the trouble of picking out the things you don't want, just to possibly re-enable them later? A single kernel's worth of modules is less than 40 MB; not exactly gonna break the bank on that one, I hope.
 

kamper

Diamond Member
Mar 18, 2003
5,513
0
0
@drag: all that realtime stuff sounds cool, but wouldn't it be much more efficient to design for multi-CPU and pin your realtime process to one CPU and everything else to the other while you're recording? That would basically guarantee instantaneous access to the CPU. Maybe not everyone has multiple CPUs, but if you really need it, it's not that hard to get compared to the cost of overhauling an entire kernel. And multi-core should make it more common anyway...

Or wouldn't it be cheap to make a sound card that can buffer up to, say, 1 second of input, so that any OS hiccups are just smoothed over? Here's my rough calculation for storage:
48 kHz * 24 bit = 1,152,000 bits per second
1,152,000 / 8 / 1024 = ~141 KB (roughly)

Or are my 48 kHz and 24 bit not good enough for anything serious? I'm really just taking a wild guess. Anyway, I'm sure you can easily put that much cheap RAM on a sound card.
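The per-second storage figure can be double-checked with shell arithmetic (one channel, one second):

```shell
# one second of mono audio at 48 kHz, 24 bits per sample
rate=48000
bits=24
bytes=$(( rate * bits / 8 ))
echo "${bytes} bytes"   # 144000 bytes, roughly 141 KB per channel-second
```

So a one-second mono buffer needs on the order of 141 KB per channel, well within cheap-RAM territory for a sound card.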
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Keep in mind that I just play around with music and stuff to entertain myself. It's absolutely horrible, and downsampling everything to 16 kHz might actually improve things... :)

The idea is that you want very low latency because you want the computer to be part of a musical instrument.

You have to interact and respond to the sound coming back out of the computer. If you have a one-second buffer on your sound card, then you get a one-second delay between pressing a button on your keyboard and hearing the sound, and if you're trying to play 'Für Elise' on a MIDI controller it's going to be a huge pain in the rear to accomplish anything.

Using a multi-CPU machine with pinning would help a lot, but it still would not deliver honest-to-goodness realtime performance. You could get a much lower _average_ latency, but what you want is guaranteed latency (and a lot of the time this latency can be quite high, just as long as you're sure what it will be). There are locks and such in the vanilla kernel that would prevent you from being 100% sure of a certain latency 100% of the time.

If you're working with a dozen music guys and, halfway through a 15-minute recording of a difficult song, the recording suddenly pauses for a bit and you get a 'pop', it's going to really piss them off.

Nowadays you can replace most of your hardware with software: MIDI synths, drum machines, samplers, mixing devices, MIDI controllers, etc. can all be replaced by software devices, and in the case of Linux, for absolutely no cost.

However, if you're dealing with multiple sound inputs and outputs that are lagging like 20 guys playing Quake 3 over a dial-up modem, it can make it impossible to do anything very nice.

Also keep in mind that these patches are not just for music. The same guy released patches for 2.4 that were mostly for musicians, but these patches are general-purpose.

If it all works out and stabilizes, it would make Linux useful for critical embedded devices and such. Say you have a computerized crane that holds boiling metal in a big cup: a one-second delay in activity because a hard drive's disk head touched a platter could kill someone. Or maybe a controller for part of a factory production line, or some scientific data acquisition device. Lots of different things that people need reliable software for.

But the point was that this, or maybe the badram patches, is a reason why you would need to recompile a kernel. Or to fix a bug with a patch. Stuff like that.

Sometimes recompiling makes things easier... Like if your distro compiles XFS as a module and you're using it for your root filesystem: if you don't want to deal with an initrd image, compiling it statically into the kernel would make sense. But nowadays a good distro should make it so you don't have to worry about stuff like that.
 

TGS

Golden Member
May 3, 2005
1,849
0
0
With a home box, learning how to recompile will come in handy for kernel patches. I had to load the nvidia AGP patch for my nForce2 board. Granted, servers are on a totally different level, with the appropriate kernels being supplied by the distributor.

For home use, being able to easily update a kernel is practically invaluable. Of course, if you don't care about onboard devices or 3D performance, you probably wouldn't care.

I've done a bit of searching and haven't found a good per-kernel-revision changelog, or a list of patches for devices that can be added later.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
With a home box, learning how to recompile will come in handy with kernel patches.

I guess I'm just lucky that I have older hardware, all of my hardware works without any out-of-tree patches.
 

TGS

Golden Member
May 3, 2005
1,849
0
0
Yeah at the time the nforce2 was the latest and greatest, which in linux speak means unsupported or a patch is needed for everything to work. :(