Best Linux Server uptime?

My 664 day uptime Eee laptop is about to kick the bucket ...

load average: 640.29, 640.15, 641.77

completely unresponsive, login attempts timeout
and simple actions like backspace take 10 minutes to complete 🙁

makes it difficult to diagnose the problem, let alone fix it
ps aux could take a day or two to complete ....
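When ps itself crawls on a thrashing box, /proc can usually still be read cheaply. A hedged sketch (assuming a Linux system with a readable /proc) that grabs the load averages and counts tasks stuck in uninterruptible sleep (state D), the usual culprit behind huge load numbers, without forking per-process tools:

```shell
# Load averages plus running/total task counts, no ps required:
cat /proc/loadavg

# Count tasks in D state by reading /proc/PID/stat directly.
# The state is the first token after the closing ')' of the comm
# field (the sed handles comm names that contain spaces).
dstate=0
for stat in /proc/[0-9]*/stat; do
  state=$(sed 's/.*) //; s/ .*//' "$stat" 2>/dev/null)
  if [ "$state" = "D" ]; then
    dstate=$((dstate + 1))
  fi
done
echo "tasks in D state: $dstate"
```

Lots of D-state tasks usually points at stuck I/O (dead disk, hung NFS mount) rather than CPU starvation.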

ohh well 🙁

UPDATE:
I figured out the problem: a hanging cron job was adding 0.2% resource usage every hour
problem fixed, and the uptime continues 🙂

note this system is no longer connected to the internet, its only purpose is uptime now 🙂
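For anyone hunting a similar leak: a hedged sketch of how you might spot a hung cron job, by listing the surviving children of the cron daemon (whose name varies by distro: crond on RHEL-likes, cron on Debian-likes). A job that should finish in minutes but shows days of elapsed time is the leak.

```shell
# Find the oldest cron daemon process, whichever name it uses:
cronpid=$(pgrep -o -x crond 2>/dev/null || pgrep -o -x cron 2>/dev/null || true)

# Count its direct children still alive:
children=$(ps -o pid= --ppid "${cronpid:-0}" 2>/dev/null | wc -l)
echo "cron children still running: $children"

# List them, longest-running first:
if [ "$children" -gt 0 ]; then
  ps -o pid,etime,pcpu,pmem,args --ppid "$cronpid" --sort=-etime
fi
```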
 
Code:
[root@borg mnt]# uptime
 22:55:24 up 24 days,  3:49,  3 users,  load average: 5.09, 4.78, 4.58
[root@borg mnt]#

Been having a lot of hardware issues on that box lately. There's still an issue where the disk IO starts crapping out and the system crashes. I have a new controller on the way to see if that fixes it.
 
i've been having some stability issues lately; if 11.10 doesn't fix it i'm switching to debian stable. should be healthy for my hipster-cred
 
I place a high priority on ensuring that any machines I manage are as secure as possible, so I normally don't have a very high uptime.

However, the highest uptime I've had on any of my boxes has actually been a Windows server. I was supporting a small business client with a Windows Server 2003 box running Exchange, Active Directory, and Windows File and Print sharing that had an uptime of about 450 days. The only reason it was that high was because the server (which was their main server btw) was a very old Gateway server with dozens of queued updates and practically no free disk space, and I was scared to death that if I rebooted it, it would never come back up again.

I finally had to shut it down to physically move it to another site. It never came back up again :'(
 
I place a priority that the machines I manage are secure as well. The majority of them are not connected to the internet as a result (nor with any path to get to it other than via "sneaker-net").
 
None of your boxes show any real load. Here is one for y'all.

xxxxxx@xxxxxx # uname -a
SunOS xxxxxx 5.8 Generic_117350-36 sun4u sparc SUNW,Sun-Fire-480R
xxxxxx@xxxxxx # uptime
8:57am up 1816 day(s), 13:13, 3 users, load average: 19.48, 21.66, 21.86

Server is on a very secure network, so patching is not as much of a concern.

That's not "real" load. I'll show you real load:

# uptime
2:10pm up 36 day(s), 6:50, 2 users, load average: 157.23, 163.95, 180.13

And that is actually pretty low. I have seen it in the 500-600 range at times. That said, it was shut down in preparation for Hurricane Irene, so it has only been back on since then.
 
That's not "real" load. I'll show you real load:

# uptime
2:10pm up 36 day(s), 6:50, 2 users, load average: 157.23, 163.95, 180.13

And that is actually pretty low. I have seen it in the 500-600 range at times. That said, it was shut down in preparation for Hurricane Irene, so it has only been back on since then.

I smell a forced power cycle lol. I'm surprised you got to that screen at all.
 
That is why I (and several others) always have a terminal open on that system. When things go bad, they go bad fast on that box, and we hear about it very quickly (it is one of the main NFS servers).
 
I've got a load to add to this thread.
# uptime
20:59:08 up 37 days, 11:21, 8 users, load average: 146.93, 122.94, 107.62

This is on a server where I'm used to seeing 1-6...
 
I don't get all this keep your linux uptime as high as possible business. Doing so means you don't care about kernel updates at all, I prefer security over ridiculous uptimes. Of course kernel updates address other issues too, but if you have high uptime already then stability isn't one of them.
 
I don't get all this keep your linux uptime as high as possible business. Doing so means you don't care about kernel updates at all, I prefer security over ridiculous uptimes. Of course kernel updates address other issues too, but if you have high uptime already then stability isn't one of them.

There's more to it than that. Kernel updates usually address security issues that require a local shell to exploit, so if it's a host that no one logs into and has its services appropriately secured, then the chances of the exploit working or being used may be acceptably low. The same could be true for hosts physically separated from other networks or protected by IDS/IPS. The act of changing out the kernel might not be worth the risk of breaking whatever the box does.

Patching isn't a blind process; there needs to be some intelligence involved to determine the risk of the exploit as it relates to that particular server and to weigh that against the risk of potentially breaking whatever it does. Especially when it comes to HA, proprietary, "enterprise" software, because even the smallest change to a system can have ripples of unforeseen consequences or just break it outright.

And if you're valuing a high uptime, wouldn't that normally imply stability, since the system doesn't need to be restarted often?
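One practical middle ground in this debate is at least knowing when a reboot is pending. A hedged sketch (assuming an RPM-based distro where the kernel package is literally named "kernel"; the command degrades gracefully elsewhere) that compares the running kernel against the newest installed one:

```shell
# What we're actually running:
running=$(uname -r)

# Newest installed kernel package, version-sorted; empty if rpm
# isn't available or no "kernel" package exists:
latest=$(rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' 2>/dev/null \
         | sort -V | tail -n 1)

if [ -n "$latest" ] && [ "$running" != "$latest" ]; then
  echo "reboot pending: running $running, newest installed $latest"
else
  echo "running kernel: $running (no newer installed kernel found)"
fi
```

That way a long uptime is a deliberate, known trade-off rather than an accident.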
 
I don't get all this keep your linux uptime as high as possible business. Doing so means you don't care about kernel updates at all, I prefer security over ridiculous uptimes. Of course kernel updates address other issues too, but if you have high uptime already then stability isn't one of them.

Also, in addition to what nothingman said, kernel updates require me to reconfigure some software, which stretches into a rather long downtime. If it ain't broke, don't fix it. No one has shell access but me anyway.
 
I had an ubuntu box hosted at calpop that was running for 3 years before it was pulled earlier this year. We pulled it to do some hardware upgrades. Most of the updates were security updates, so we didn't really need to reboot the machine.
 
That's not "real" load. I'll show you real load:

# uptime
2:10pm up 36 day(s), 6:50, 2 users, load average: 157.23, 163.95, 180.13

And that is actually pretty low. I have seen it in the 500-600 range at times. That said, it was shut down in preparation for Hurricane Irene, so it has only been back on since then.

i'm an Oracle DBA and we exclusively use oracle on RHEL 5-5.5.

our network is secure (disconnected from the internet completely) so we aren't worried about kernel updates and we don't really upgrade the OS or reboot the server unless there is a reason to.

however, sometimes a hardware or site failure causes a local server to reboot.

the Oracle clusterware requires a heartbeat. if the server is too busy, it evicts itself and reboots. this, however, can cause the load of both servers to be placed on a single server.

i have seen an ibm x3650 with 8 cores/16gb ram at 650 load average...

it held bravely for a few minutes, then rebooted. (we are replacing all three servers with more powerful ones).


we do have an old unix server (running hp-ux 11.00 ) that i don't believe has been rebooted in the last 9 years.
(no one in our IT department knows hp-ux...)
 
Currently at 41 days on a WinXP Pentium III laptop at 850 MHz doing Folding at Home. Despite it having 2 batteries (Dell Latitude C600), I still put it on a UPS, just for safety's sake.

I am moving from WinXP to Slitaz linux, and hoping to run it until the computer quits.
 
Currently at 41 days on a WinXP Pentium III laptop at 850 MHz doing Folding at Home. Despite it having 2 batteries (Dell Latitude C600), I still put it on a UPS, just for safety's sake.

I am moving from WinXP to Slitaz linux, and hoping to run it until the computer quits.

I would run Folding at Home if it wasn't for the electricity. I'm not too keen on firing up yet ANOTHER 24x7 box at home; I already have two, costing me an estimated $18 a month as it is.

Those load averages are nuts. As I understand it, a 1.00 load is 100% utilization on a 1-core machine, so 8.00 would be 100% on an 8-core machine, and so on. Load averages in the HUNDREDS, though? How many cores do those machines have, and what are they doing to be hammered so hard? My servers barely hit 0.15 even when all of my friends are on the server. 'Course I'd expect a big difference between recreational servers and business-class production servers, but still, load averages of 150.00+? Just wow.
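That per-core intuition is easy to check on a live box. A hedged sketch (assuming Linux, so nproc and /proc/loadavg are available) that normalizes the 1-minute load by the core count; anything much above 1.00 per core means tasks are queuing for CPU or I/O:

```shell
# Core count and 1-minute load average:
cores=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)

# Shell has no floating point, so let awk do the division:
awk -v l="$load1" -v c="$cores" \
    'BEGIN { printf "1-min load %.2f over %d cores = %.2f per core\n", l, c, l / c }'
```

So a load of 150 on a 16-core box is roughly 9.4 runnable tasks per core; note that on Linux, tasks blocked on disk or NFS I/O (D state) count toward load too, which is how the numbers climb into the hundreds without the CPUs being the bottleneck.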
 
$ uptime
09:40:45 up 699 days, 19:03, 1 user, load average: 0.16, 0.07, 0.01
$

Linux 2.6.9-34.ELsmp #1 SMP Fri Feb 24 16:56:28 EST 2006 x86_64 x86_64 x86_64 GNU/Linux
 
$uptime
2:42pm up 1103 day(s), 1:16, 2 users, load average: 0.30, 0.44, 0.51

That's a preprod Solaris box.

$uptime
14:43:50 up 279 days, 21:48, 4 users, load average: 7.14, 6.51, 6.16

That's our RHEL5 monitoring server

I could probably dig up a few more dev boxen that are even higher...

# uptime
1:51pm up 1043 day(s), 5:07, 4 users, load average: 0.11, 0.07, 0.08

First one I looked at. I am sure we have plenty more that are longer.

Edit there we go 😀:

# uptime
1:52pm up 1589 day(s), 1:28, 1 user, load average: 0.05, 0.03, 0.03

My old Red Hat server was somewhere around 3 years, it only came down because I had to move. After that I went to a Windows based setup and could never get more than 6-9 months without needing a reboot.
 
I had 90 days on my VPS server but the host system went down for maintenance 🙁
I didn't like how they didn't send a message about it beforehand, and when I wrote them asking WTF was going on and why the server was down, they said they would look into the problem. After the server came back, they replied that they didn't see any problem with it.
I asked why the host server's uptime was 5 minutes and why my uptime was lost during the fault.
They said it was planned maintenance. That seems somewhat shady to me, but otherwise I've had no problems with them: http://www.server4you.com/
 
Years ago I was doing a site survey in preparation for an upgrade and I needed to check the status of an existing server on the site. Tried to power up the monitor and it was dead. Borrowed a monitor from a PC and found a NetWare 2.2 server that had been up for over 1400 days.

Several months later, when I was installing the new servers, I migrated all the data and users over. It was almost sad to shut it down, as it was only a couple of weeks from 1500 days of uptime.
 
I was less than 2 months away from 1 year on my laptop...

I was using it <1 hour ago, just went to use it and it had rebooted 🙁

fuck.
 
My OpenBSD firewall was up 430+ days... Then they switched me to a "smart" power meter. My poor UPS wasn't up to the downtime.

Now?

~$ uptime
1:55AM up 44 days, 8:49, 1 user, load averages: 0.27, 0.12, 0.09
~$
 
ehh .... Back when my little home server was on a UPS, it would go about 15 months or so between reboots ... usually to add/replace a hard drive.

The UPS battery died a few years ago, and I never bothered to replace it ... so now I reboot about two or three times per year due to power outages...
 