Why do servers have operating systems?

chrstrbrts

Senior member
Aug 12, 2014
522
3
81
Hello,

OK, my question at first may seem crazy. But, let me explain my reasoning.

First of all, by server I mean a "true" server.

I know that every computer can act in a server capacity, but I'm talking about those computers that you see in huge data centers.

They're just stacks of electronic equipment without a monitor, mouse, keyboard, soundcard, etc.

As far as I know, an OS handles memory management, scheduling the processor among active processes, and coordinating the interrupt requests of peripheral I/O devices.

But, in these "true" servers there are no peripheral devices except for NICs, and the only process running is Apache, IIS, etc.

So, just as my digital microwave doesn't require an OS because it only runs one dedicated program, a true server shouldn't require one either, right?

Where am I wrong?

Thanks.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,574
4,488
75
Got a router? (The networking kind, not the woodworking kind.) It probably has an OS too. And it doesn't even have to serve stuff from a hard drive.

Basically, it's simpler and cheaper to use an existing OS than to take the time to write a dedicated program.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
The OS provides functions that sit between the hardware and the software, or assist the software in some way.

For example, if you have a storage device, then you need some sort of file system on it if you want to store and retrieve data in a file structure. One option is for software developers to develop their own in-house file system; another is to license an existing one, or find a free one. Then what happens if you want to support multiple file systems? What happens if you need to support a remote file system over a network share, or something like an iSCSI hard drive? What if you need your iSCSI access to be redundant, so that if one connection goes down, you can automatically find another route, via another NIC, to another port on the iSCSI server? All this sort of stuff is handled by a modern OS.
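To make that concrete, here's a minimal sketch (POSIX C; the path is made up) of the file API a modern OS exposes. The same three calls work whether the file ends up on a local ext4 or NTFS volume, an NFS share, or an iSCSI LUN; the kernel routes the request to whichever file system driver sits underneath.

Code:
/* Minimal sketch of the OS's file abstraction.
 * The path below is hypothetical; the same calls work
 * regardless of which file system driver handles them. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "hello from the portable file API\n";
    int fd = open("/tmp/demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }
    close(fd);
    return 0;
}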

Sometimes you need software features that are difficult: memory management, multi-tasking, error handling, interrupt handling, etc. These are easy to get wrong, expensive and difficult to program, and hardware-specific.

Having a good-quality OS can make software development easier. Take a car ECU, for example. The software needs to read a bunch of sensors and send a bunch of carefully timed signals to turn injectors on and off, and ignition coils on and off. You don't necessarily need an OS to do that; it's simple enough that you could do it by programming the hardware directly. However, all major OEMs use a specialist OS on the ECU which runs a bunch of programs: typically there will be a boot and supervisory program, a sensor monitoring program, one program for each fuel injector and one or two for each spark plug, a diagnostics program, and maybe additional programs for other functions like valve timing, cruise control, etc. It's a lot easier to debug and maintain this way. Using well-tested code supplied as part of an OS also avoids subtle bugs creeping into difficult parts of the code (e.g. real-time multi-tasking).

By having an OS, you also make porting the code easier: you develop it once, and it can run on different hardware with minimal changes. All the problems of multi-tasking, file management, network management, etc. are dealt with by the OS. And if your program crashes, the OS can catch it and take some sort of action.
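As a tiny illustration of that last point, here's a sketch (POSIX C, assuming Linux-style signals) of a supervisor catching a crashed child process; the kernel tells the parent exactly which signal killed the child, so it could log the failure or restart it:

Code:
/* Sketch: the OS catching a crash on the program's behalf. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        volatile int *p = NULL;
        *p = 42;                /* child crashes with SIGSEGV */
        exit(0);
    }
    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("child killed by signal %d; a supervisor could restart it\n",
               WTERMSIG(status));
    return 0;
}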
 

irishScott

Lifer
Oct 10, 2006
21,562
3
0
So, just as my digital microwave doesn't require an OS because it only runs one dedicated program, a true server shouldn't require one either, right?

Define "OS". Believe it or not your microwave might actually be running one, albeit a simple one, if it has advanced features like programmable profiles and diagnostics. The simpler microwaves you're thinking of get by without one because they're just control panels. They don't really store or manipulate information besides a few numbers and the time, which you can do with relatively simple circuits directly triggered by the buttons.

Serving digital information over a network is a whole other matter. That "one process" like Apache is really an abstraction for likely hundreds or thousands of related processes, including file access permissions, memory management, data retrieval, etc., that all have to happen in the correct order and often simultaneously. Underneath those you need software drivers for each and every hardware component that tell programs how to use the hardware; otherwise said hardware is just a bunch of inert metal. All of the stuff inside "Apache.exe" that allows it to function is provided by the operating system.

A prime example of just how complicated all this is: the basic network stack. Merely establishing a modern network connection requires several layers of abstraction, and different protocols that you've probably heard of are implemented at different layers. (http://en.wikipedia.org/wiki/OSI_model#Description_of_OSI_layers)
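To see how much of that stack the OS hides, here's a rough sketch (POSIX C; port 8080 and the canned reply are made up) of a toy server. The application only touches the top layer; the TCP segments, IP routing, and Ethernet frames underneath are all the kernel's network stack:

Code:
/* Sketch: the app writes bytes; the kernel does layers 1-4. */
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv < 0) { perror("socket"); return 1; }
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 8);
    int conn = accept(srv, NULL, NULL);   /* blocks until a client connects */
    const char *reply = "HTTP/1.0 200 OK\r\n\r\nhello\n";
    write(conn, reply, strlen(reply));
    close(conn);
    close(srv);
    return 0;
}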

To manage that alone requires an operating system of some variety. Other conceptual components of a server (data access, security, bandwidth allocation, etc.) are just as complicated, if not more so. In theory you could hard-code a bunch of extremely simple, interconnected dedicated programs to run each component (although this would be horribly inefficient and lacking in capabilities), but taken together they would still amount to an operating system.

What you call a "true server" appears to be Network Attached Storage, but even that has an operating system. Hell, if you've ever heard of BIOS or UEFI, those are just simple operating systems, and you need one of them just to turn a server on. :)
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
So, just as my digital microwave doesn't require an OS because it only runs one dedicated program, a true server shouldn't require one either, right?

In addition to what the others said, you're operating under the incorrect assumption that there's only one thing running on the server. My IIS box also has .NET, PHP, and MySQL running on it. My File Server also has Plex, FreeFileSync, BitTorrent, and OpenVPN running on it. My Game Server has....

You get the picture. :)
 

Scarpozzi

Lifer
Jun 13, 2000
26,391
1,780
126
An Operating System is just a link between the machine code and the software. All operating systems have a kernel and typically have memory managers that handle large stacks of memory and how it is recycled. The OS also provides a layer that translates the higher level languages back down to the hardware level so you can write more meaningful code without having to send commands in base 2 or hex.

Take a computer class somewhere. You'll have a better understanding of this stuff if you pick up a book and read.
 

Gryz

Golden Member
Aug 28, 2010
1,551
204
106
You are very sloppy in what you wrote. Sorry.

An Operating System is just a link between the machine code and the software.
No. Code is software. No idea what you mean.

A compiler translates programs written in a "higher level programming language" into machine code. This machine code is what is inside a .exe (aka an executable file).

The machine code (inside an executable) is directly executed by the processor in a computer.

And even that is not always true. Some CPUs have "microcode" that is even more elementary than the machine code. To the programmer it looks like the CPU is directly executing the operations in the machine code, but in reality those operations are first translated by the CPU's microcode translator into the real instructions that get executed. But these are details. All you need to know is: machine code is executed by the CPU.

What the OS does is "load" and start each program. The OS starts a process, and the process runs the executable. The OS does not execute anything while a program is running. (See below about system calls.)
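A minimal sketch of that loading step on a POSIX system (/bin/ls is just an example target):

Code:
/* Sketch: the OS loading and starting a program. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* clone the current process */
    if (pid == 0) {
        char *argv[] = {"ls", "-l", NULL};
        char *envp[] = {NULL};
        execve("/bin/ls", argv, envp);   /* kernel loads the new executable */
        perror("execve");                /* only reached if loading failed */
        return 1;
    }
    waitpid(pid, NULL, 0);               /* parent waits for the child */
    return 0;
}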

All operating systems have a kernel and typically have memory managers that handle large stacks of memory and how it is recycled.
Kinda true. The OS hands out large chunks of memory. It manages where that memory actually lives (in the page-file on disk, or in RAM).
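For instance, a Linux program can ask the kernel for a big anonymous chunk like this (the size is arbitrary); whether those pages live in RAM or get pushed to the page file is entirely the kernel's decision:

Code:
/* Sketch: requesting memory from the OS with mmap(). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 64 * 1024 * 1024;   /* 64 MiB */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    memset(buf, 0xAB, len);          /* touching pages makes them resident */
    printf("got %zu bytes at %p\n", len, (void *)buf);
    munmap(buf, len);
    return 0;
}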

The OS also provides a layer that translates the higher level languages back down to the hardware level so you can write more meaningful code without having to send commands in base 2 or hex.
No, the OS absolutely does not do that.
It's compilers that do that. And interpreters. And the assembler. Not the OS.

Another important thing that the OS does is offer an abstraction of the hardware. Each piece of peripheral hardware needs to be programmed in certain ways. Examples are hard disks (and SSDs), monitor, keyboard, network interface, sensors, etc. First of all, you don't want every application programmer to reinvent the wheel, or redo huge amounts of work to deal with those peripherals. Secondly, from a security perspective, you don't want to give any process on a computer full control over all hardware.

Therefore the OS offers this abstraction via "system calls". When a process wants to write something to disk, it uses a system call to tell the OS that it needs bytes written to a file. Or it uses a system call to set up a network connection. These system calls look like library calls to a programmer. The programmer includes those calls in his programs, and the linker makes sure the correct machine code to make the system call is included in the executable. When the executable is running, the CPU will hit each system call and hand over control to the "kernel" to perform the requested action. The kernel contains special code to deal with all the devices, usually in the form of drivers. The kernel does the work for the requested action, and then gives control back to the user program.
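You can even see the thin wrapper for yourself. This Linux-specific sketch skips the libc convenience function and issues the write system call directly; the CPU traps into the kernel, the kernel's driver does the I/O, and control comes back:

Code:
/* Sketch: a system call without the library wrapper. */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello straight from a system call\n";
    /* SYS_write: fd 1 (stdout), buffer, length */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}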

Take a computer class somewhere. You'll have a better understanding of this stuff if you pick up a book and read.
Obviously even after you read a book, it's not easy.
An OS is not a compiler.

I do agree that chrstrbrts should read more about software architecture. The most important keyword to understand modern systems is "abstraction". And he clearly does not understand the concept.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Just to throw in my $0.02.

OSes provide so much stuff that it would be insane to run things without them. Want to run two applications on a server? You'll want an OS to manage that. Want to multithread your application? You'll want the OS to manage that. Want to use the NIC? You'll want the OS to implement and manage that. Want to read something from the hard drive? You'll want the OS to manage that.

Interacting with hardware sucks. Managing threads sucks. Managing memory spaces sucks. The OS manages all of that for you seamlessly.
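For example, here's roughly what "the OS manages your threads" buys you (POSIX C; compile with -pthread). We never touch timer interrupts or context switches; the kernel schedules the workers:

Code:
/* Sketch: kernel-managed threads via pthreads. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("worker %ld scheduled by the kernel\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}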

Even booting up to the application is something that you don't want to worry about yourself. You have a bunch of magic addresses, special interrupts, and secret switches that you must flip to enable features that you want (SSE2, multiple cores, virtual memory). You could literally spend years learning all of the incantations needed just to boot into a specialty app on enterprise-grade server hardware. Now you want to handle DMA requests? PCI-E drivers? RAID controllers? Nope, you don't want to do any of that. On a server, the OS is vital.

Interestingly enough, it is becoming more and more likely that your microwave is running a version of Linux. Why? Because many of the pains that I mentioned above still apply at the micro level (though to a lesser extent), and Linux makes dealing with these things a snap. For example, let's say you want to drive pin 1 high. How do you do that? Traditionally you would have to pull up the microprocessor's data sheet, find the pin name (GPIO7), and then find the corresponding magic memory address it maps to. All while using a special magic compiler dedicated to the architecture you are targeting (though, if you are lucky, the people who made your compiler will be kind enough to give you some headers with the magic memory addresses mapped to constant values). With Linux, you cd into /sys/class/gpio (or something like that, it has been a while since I last did this) and write a 1 to the pin's value file. Voila! You have driven the pin high. And you know what is great about this? You didn't have to use a specialty compiler to do it. You could have used Ruby, Python, Java, C#, whatever has a runtime/interpreter/compiler for the flavor of Linux you are using. No need for a 10-year-old compiler that introduces a whole bunch of bugs you have to watch out for.
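Here's what that GPIO dance looks like as a sketch; pin 7 is hypothetical and board-specific, and newer kernels prefer the /dev/gpiochip0 character-device API, but the point stands: the pin becomes a file, so any language that can write files can drive it:

Code:
/* Sketch: driving a (hypothetical) GPIO pin via sysfs files. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void write_str(const char *path, const char *s, size_t n) {
    int fd = open(path, O_WRONLY);
    if (fd < 0) { perror(path); return; }
    write(fd, s, n);
    close(fd);
}

int main(void) {
    write_str("/sys/class/gpio/export", "7", 1);            /* expose pin 7 */
    write_str("/sys/class/gpio/gpio7/direction", "out", 3); /* make it an output */
    write_str("/sys/class/gpio/gpio7/value", "1", 1);       /* drive it high */
    return 0;
}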

Because memory is dirt cheap nowadays and processors that can run Linux are a dime a dozen, it is simpler, cheaper, and faster to just put an OS on the chip and drive forward.

OSes solve many, MANY problems for developers. General-purpose OSes solve a bunch more. This is why we are steadily moving to a world where everything runs Linux.
 

irishScott

Lifer
Oct 10, 2006
21,562
3
0
With Linux, you cd into /sys/class/gpio (or something like that, it has been a while since I last did this) and write a 1 to the pin's value file. Voila! You have driven the pin high.

Well, to be fair, depending on the system you might need to construct a device tree overlay first, which would require the aforementioned manual modification of registers and compiling/loading the .dtbo file so the system knows how to use the pins. IIRC Beaglebone Blacks and Raspberry Pis require such treatment, as their pins can have multiple configurations. But that's just a nitpick, and even with that step it's a helluva lot simpler than, say, a PIC. *Remembers all-nighters deciphering shitty PIC documentation and tutorials* *shudders*
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Well, to be fair, depending on the system you might need to construct a device tree overlay first, which would require the aforementioned manual modification of registers and compiling/loading the .dtbo file so the system knows how to use the pins. IIRC Beaglebone Blacks and Raspberry Pis require such treatment, as their pins can have multiple configurations. But that's just a nitpick, and even with that step it's a helluva lot simpler than, say, a PIC. *Remembers all-nighters deciphering shitty PIC documentation and tutorials* *shudders*

Good point. The ones I dealt with had pre-compiled distros ready to go and already set up (and admittedly I haven't dealt with embedded systems since college).

But like you said, setup isn't much harder than dealing with the magic headers and documentation. It is a one time thing. After that, you are free.
 

Scarpozzi

Lifer
Jun 13, 2000
26,391
1,780
126
You are very sloppy in what you wrote. Sorry.
Don't apologize... I was generalizing, and you were correct in calling me out and going into such detail. I just thought typing any more than what I typed would be a waste. I refer to most utilities and compilers that are packaged with an OS as part of the OS. So if it's not handled in some piece of software you install, it's part of the OS. I just learned a long time ago that saying less is sometimes more on here... no one is really going to read it anyway. ;)
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
The best way I've heard of describing an operating system is that an operating system's sole job is to present an interface to a process such that the process can execute without any knowledge of the underlying hardware or of what any other process is doing on the machine. If an operating system is doing its job correctly, then a process executes as if it were the only code running on the hardware.

It is purely an arbitrator between processes and hardware. If you only ever need one process (extremely unlikely), then you don't need an operating system.
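That illusion is easy to demonstrate (a POSIX C sketch): after a fork(), parent and child print the same virtual address but see different values, because the OS gives each process its own private address space:

Code:
/* Sketch: same virtual address, different contents per process. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 1;
    pid_t pid = fork();
    if (pid == 0) {
        x = 2;   /* only the child's copy changes */
        printf("child:  &x=%p x=%d\n", (void *)&x, x);
        return 0;
    }
    waitpid(pid, NULL, 0);
    printf("parent: &x=%p x=%d\n", (void *)&x, x);
    return 0;
}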
 

Gryz

Golden Member
Aug 28, 2010
1,551
204
106
You still do want an OS, even if you're gonna run only one process, because you don't want to rewrite functionality that the OS gives you, even if it is only a bit, from scratch. That's why you'll see more and more devices run a full OS (like Linux) under the hood: TVs, DVD players, thermostats, lots of stuff now runs on top of existing OSes. And with the popularity of Raspberry Pis, I expect that within 10 years everything in our houses will run Linux under the hood.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
You still do want an OS, even if you're gonna run only one process, because you don't want to rewrite functionality that the OS gives you, even if it is only a bit, from scratch. That's why you'll see more and more devices run a full OS (like Linux) under the hood: TVs, DVD players, thermostats, lots of stuff now runs on top of existing OSes. And with the popularity of Raspberry Pis, I expect that within 10 years everything in our houses will run Linux under the hood.

You are really conflating arguments here.

The OP asked why servers need an operating system, and the simple fact is that if you need to run more than one process, you need an OS. That does not mean that all use cases require an operating system, nor does it mean that your toaster will be running Linux in 10 years.

In any good system design you have a separation of responsibilities and multiple processes is just one step towards achieving that.

Server operating systems, desktop operating systems, it doesn't matter, their sole purpose is to multiplex processes to the hardware.

Would you take a microcontroller and write/install an OS on it just so you could poll some sensors? No; that only needs one process. But you would use hardware with an OS on it to connect to the microcontroller and process the data using more familiar tools, because that requires more than one process. You might want a web interface, or to broadcast the data over a UDP network.
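And that broadcast bit is trivial once an OS is underneath you. A sketch (POSIX C; the subnet broadcast address, port, and payload are made up):

Code:
/* Sketch: broadcasting a sensor reading as one UDP datagram. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }
    int yes = 1;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, &yes, sizeof yes);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);
    inet_pton(AF_INET, "192.168.1.255", &dst.sin_addr);
    const char *msg = "temp=21.5";
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof dst);
    close(s);
    return 0;
}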
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Would you take a microcontroller and write/install an OS on it just so you could poll some sensors? No; that only needs one process. But you would use hardware with an OS on it to connect to the microcontroller and process the data using more familiar tools, because that requires more than one process. You might want a web interface, or to broadcast the data over a UDP network.

Actually, the answer here is yes. The shift has been towards putting OSes on microcontrollers precisely because an OS makes polling sensors that much easier. Rather than having to reinvent the I2C protocol or the various drivers/handshakes/etc. needed for each sensor, you can often find an already-built driver for a slim Linux OS that will do whatever you want.
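For instance, with the kernel's i2c-dev interface, reading a sensor is a few file operations instead of bit-banging the bus yourself. A sketch, with a made-up bus number, device address, and register:

Code:
/* Sketch: reading one byte from a (hypothetical) I2C sensor. */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {  /* select the sensor's address */
        perror("ioctl");
        return 1;
    }
    unsigned char reg = 0x00;              /* hypothetical register */
    write(fd, &reg, 1);                    /* tell the sensor what to send */
    unsigned char val;
    read(fd, &val, 1);                     /* kernel driver works the bus */
    printf("sensor register 0x00 = 0x%02x\n", val);
    close(fd);
    return 0;
}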

Beyond that, OSes on microcontrollers provide some really nifty tools for debugging issues, tools that simply aren't readily available or as well polished when dealing with single applications on a microcontroller.

The only reasons not to put an OS on a microcontroller have to do with memory and processing-power limitations. However, Linux is so small and memory so cheap that those are quickly becoming non-issues.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
I would say a microcontroller with an attached ARM core is not really a microcontroller anymore.

The prototypes for the Raspberry Pi ran on a 22 MHz ATmega microcontroller and didn't boot Linux.

Throw a 700 MHz ARM core into an SoC-style package and now the Pi boots Linux, but is it really running on a microcontroller?
 

Gryz

Golden Member
Aug 28, 2010
1,551
204
106
You are correct, Crusty, that for the past 30 years people would not put an OS (or any other bloat) on small systems, because of limited memory, limited CPU power, and the related costs. But I think that will change in the next 10 years. Maybe not toasters in 10 years, but certainly in 20. Seen all the hype for "the Internet of Things"? (BTW, I hate that slogan.) Manufacturers are gonna put OSes on everything that has a power cord.
I fully agree with Cogman.

What's the cost of 1 GB of (cheapest) RAM these days? 5 euros? Less, maybe, if you buy in bulk? What's the cost of a small SoC? A Raspberry Pi costs 35 euros. That's a big cost factor, I agree. But that price will go down. Soon you'll have a decent little CPU plus lots of RAM for a few bucks. They will be integrated into everything.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
You are correct, Crusty, that for the past 30 years people would not put an OS (or any other bloat) on small systems, because of limited memory, limited CPU power, and the related costs. But I think that will change in the next 10 years. Maybe not toasters in 10 years, but certainly in 20. Seen all the hype for "the Internet of Things"? (BTW, I hate that slogan.) Manufacturers are gonna put OSes on everything that has a power cord.
I fully agree with Cogman.

What's the cost of 1 GB of (cheapest) RAM these days? 5 euros? Less, maybe, if you buy in bulk? What's the cost of a small SoC? A Raspberry Pi costs 35 euros. That's a big cost factor, I agree. But that price will go down. Soon you'll have a decent little CPU plus lots of RAM for a few bucks. They will be integrated into everything.


I don't think we were really in disagreement on most of what has been said; I was just trying to point out that the current trends have no bearing on why a server needs an operating system.

A server/desktop/laptop/DVR needs an operating system because it needs to multiplex processes, it's that simple.

Everything else is just noise IMO and semantics around what defines a system or a computer.
 

irishScott

Lifer
Oct 10, 2006
21,562
3
0
even your hard drive has a CPU and some sort of elementary kernel/firmware (to handle caching/buffering, writing data, reporting HDD status via SMART, ...)

is it a (primitive) operating system?

http://spritesmods.com/?art=hddhack&page=4

Yep. To not have an operating system on a computer basically means going back to the punchcard days or using similarly simple technology; typically something that only needs to run one physical computation operation at a time.

A useful way to think about it might be in human terms. Originally everything a modern operating system does was done by human beings. Resource allocation was signing up on a piece of paper, runtime queues were literal queues of people waiting in line for the computer, the equivalent of SMART was a technician breaking open your computer to diagnose it, memory access was a stack of film or punch cards on a shelf, etc. As computers got faster, we began to use the extra resources to automate these tasks and voila, the first operating systems.