
One big SoB - Linux Server

GobBluth

Senior member
So I built a Linux (Red Hat Enterprise 5) server today that has 16 cores and 92 GB of memory.

Overkill?

This server runs hundreds of databases for one of the main programs we use here at my work. It has an average of 200 users logged in at any given time running reports, pulling images, etc.

I laughed at the thought of 92 GB on a Linux server, but it is what I was told to do.

Thoughts?
 
If it's for work, has hundreds of databases, and you're not paying for it, then no, it's not overkill.

Why 92 GB? I mean... what combination of DIMMs do you even use to get that? 4x16 GB + 3x8 GB + 1x4 GB?
 
We've just got a new application at work, which runs on a dedicated pair of 32-core, 256 GB servers in order to support 2,000 concurrent users.

Sadly, due to an oversight in the planning stage, the box (for me, at least) sits behind an old 100 Mbit/s firewall device. So performance is, at times, rather disappointing.

As far as hardware costs go, I believe the cost of the hardware was under 1% of the TCO, including consultancy, software licensing and support.
 
Even if you don't _need_ it, given enough uptime and activity linux will use whatever mem you give it for buffers/cache.
You can also use a chunk of that mem for a ram drive if you can come up with an idea for that.
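If you want to see that in action, `/proc/meminfo` shows how much of the installed RAM the kernel is currently holding as buffers and page cache. The tmpfs mount below is only a sketch of the RAM-drive idea; the size and mount point are placeholders, not anything from this thread, and the mount needs root:

```shell
# How much RAM the kernel is using for buffers and page cache
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo

# Sketch of a RAM drive via tmpfs (requires root; size and mount
# point below are placeholder values)
# mkdir -p /mnt/ramdisk
# mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
```

On a busy box with plenty of uptime you'll typically see Cached grow to fill most of the otherwise-free memory.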
 

Without any idea of the workload those databases generate, individually or as a whole, it's impossible to say. But if it really is hundreds of databases, I don't see why 92 GB would be too much, and if they're of any significant size then it may even be too little.
 
16 processor cores really isn't all that much today. You can get that from just 2 E5 class Xeons.

96 GB might be overkill, but it really depends on the size of the databases you're running. If they total up to a TB or more of data, then it's probably not really overkill.
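A rough sanity check is to compare the total on-disk size of the databases against the installed RAM. The path below is just an example (adjust it for whatever DBMS and data directory you actually run):

```shell
# Total on-disk size of the database files (path is an example,
# not necessarily where your databases live)
du -sh /var/lib/mysql 2>/dev/null

# Installed RAM, for comparison
grep '^MemTotal:' /proc/meminfo
```

If the data set is far larger than RAM, the memory is unlikely to sit idle; if the data set fits in a fraction of it, then yes, some of that 92 GB is headroom.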
 