Strategy for a small lab, multiple servers?

What makes the most sense?

  • Build the machines ourselves

  • Buy refurbished equipment

  • Buy new equipment with vendor support

  • Something else (please post your suggestion!)



BillBraskey

Junior Member
Jun 29, 2016
I need some guidance, as I am IT-savvy, but have never worked in IT professionally. I am the manager of a small biotech research laboratory with the extra duty of being the IT guy in certain cases.

Our needs:

1. Ultra-secure, high-availability server capturing environmental telemetry, archiving data, and alerting me via email/SMS/POTS dial-out of any out-of-spec conditions. Essential hardware: native RS-232 & RS-422 interfaces (i.e., no USB converters); 5TB RAID 6 with battery-backed controller & hot-swap drives; dual hot-swap PSUs; true LOM (i.e., manageable even when the machine is powered off); ECC RDIMMs; GPU driving dual displays at 1920x1200 or better.

2. RDS/thin client server supporting 4-5 simultaneous users running Office, Acrobat Pro, Citrix Receiver, & general productivity apps.

3. Servers (2 or 3) with PCI-X bus to run very specialized, legacy data acquisition cards, powerful CPU and AMD-only GPU for rendering complex data, >64GB ECC RAM for the same.
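For what it's worth, the alerting half of need #1 is mostly plumbing once the serial capture works (pySerial or similar for reading the port itself). Here is a minimal sketch of the threshold logic in Python; the channel names, spec limits, and KEY=value wire format are all invented for illustration, and the real instrument protocol will dictate the parser:

```python
# Sketch of the out-of-spec alerting logic for server #1.
# Assumes a hypothetical "KEY=value;KEY=value" telemetry line format;
# the actual instrument protocol over RS-232/RS-422 will differ.

# Spec limits as (low, high) per channel -- example values only.
SPEC = {
    "TEMP_C": (2.0, 8.0),    # e.g. a refrigerated storage unit
    "RH_PCT": (30.0, 60.0),  # relative humidity
}

def parse_line(line):
    """Parse one 'KEY=value;KEY=value' telemetry line into a dict of floats."""
    readings = {}
    for field in line.strip().split(";"):
        if "=" in field:
            key, _, value = field.partition("=")
            readings[key.strip()] = float(value)
    return readings

def out_of_spec(readings, spec=SPEC):
    """Return a list of (channel, value, low, high) tuples for violations."""
    alerts = []
    for key, (low, high) in spec.items():
        if key in readings and not (low <= readings[key] <= high):
            alerts.append((key, readings[key], low, high))
    return alerts

if __name__ == "__main__":
    # A single hypothetical telemetry line with one channel out of spec.
    line = "TEMP_C=9.4;RH_PCT=41.0"
    for key, value, low, high in out_of_spec(parse_line(line)):
        print(f"ALERT: {key}={value} outside [{low}, {high}]")
```

Keeping the parsing and the limit check as pure functions like this makes the out-of-spec logic easy to test without the hardware attached; the email/SMS/dial-out side bolts on afterward.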

Our constraints:

1. Budget is super tight. This is why I am tasked with this job instead of hiring a consultant. We have no in-house IT.

2. The machines will be housed in a 22U full enclosure, which will live with us in our primary workspace. Therefore heat and noise are a concern. No 1U boxes with screaming fans that sound like something built by Pratt & Whitney!

3. Because of the tight budget, we want stuff that is upgradeable, reliable, and standard in its form factors (i.e., not locked into one brand).

4. Server #1 above must be its own physical machine, not doing anything else, and not hosted or run as a VM.

Possibilities I am considering:

1. Virtualize as much as possible, especially those legacy boxes that may be permanently tied to Win XP x64.

2. Build the servers myself (I am quite experienced with this) with quality bits like Intel mainboards/CPUs, LSI RAID cards, etc. This would ensure open form factors and spare us paying IBM/HP/Dell/etc. a ransom for a firmware update.

3. Buy refurbished servers (I'm partial to IBM gear) from a reseller. This would provide more peace of mind regarding system integration than would building them myself.

What are your thoughts? (Please don't belabor the issue of our limited budget; it's not for you or me to fix. Just play along and brainstorm with me.)
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
I've never worked in IT either. That said...

It sounds to me like the computer for need #1 is very important. For that reason I think it should be equipment with vendor support, whether new or refurb.

Need #2 is obviously much lower priority. It can probably be virtualized on whatever works.

Need #3 looks fairly specialized, but you should be able to find some old Sandy Bridge-era servers that would work.

AMD-only GPU is an unusual requirement. I've heard of Nvidia-only, for CUDA, but not AMD-only.
 

BillBraskey

Junior Member
Jun 29, 2016
It sounds to me like the computer for need #1 is very important. For that reason I think it should be equipment with vendor support, whether new or refurb.
It is important, but not in any special way that I can't manage it. It just requires reliability, redundancy, and security. Pretty straightforward to set up.

AMD-only GPU is an unusual requirement. I've heard of Nvidia-only, for CUDA, but not AMD-only.
The specialized legacy equipment involves high-speed motion capture cameras and the PCI-X interface cards require an ATI (silly me, did I say AMD?) GPU for rendering.
 

mxnerd

Diamond Member
Jul 6, 2007
If the budget is super tight and you demand standard form factors, you just have to build the servers yourself; there is no way around it. HP, Dell, and IBM always use proprietary form factors.

But building them yourself always means lower reliability, because you don't have the time and facilities to test each component (CPU, motherboard, PSU, DIMMs, controllers, disks) individually and in combination. HP, Dell, and IBM test their products rigorously before putting them together for sale.
 

IndyColtsFan

Lifer
Sep 22, 2007
It is important, but not in any special way that I can't manage it. It just requires reliability, redundancy, and security. Pretty straightforward to set up.

As someone who has worked in IT for over 20 years in various technical and management roles, I can tell you this: NEVER build a server for a critical application. Always buy from a reputable vendor such as Dell or HP. Setting up true redundancy is not as straightforward as you think, and if a part should fail, those vendors can get you up and running very quickly (even years down the road), as opposed to you trying to find the parts yourself.

If the budget is super tight and you demand standard form factors, you just have to build the servers yourself; there is no way around it. HP, Dell, and IBM always use proprietary form factors.

But building them yourself always means lower reliability, because you don't have the time and facilities to test each component (CPU, motherboard, PSU, DIMMs, controllers, disks) individually and in combination. HP, Dell, and IBM test their products rigorously before putting them together for sale.

This.

Also, keep in mind, if you're 3 years down the road and the system board dies, HP, Dell, and IBM can likely get you another one within a few hours. Building it yourself means you're on your own.
 

monkeydelmagico

Diamond Member
Nov 16, 2011
1. Budget is super tight. This is why I am tasked with this job instead of hiring a consultant. We have no in-house IT.


Since the cheap bastards won't spring for any IT support you will be left holding the bag if/when things go wrong. Get new stuff and get a robust support contract. That way you can beat up on the vendor when the geeks break their equipment. R&D gets super pissed if they have to start data runs over.

Outsource it ASAP.
 

BillBraskey

Junior Member
Jun 29, 2016
Since the cheap bastards won't spring for any IT support you will be left holding the bag if/when things go wrong. Get new stuff and get a robust support contract. That way you can beat up on the vendor when the geeks break their equipment. R&D gets super pissed if they have to start data runs over.

Outsource it ASAP.

I'm already holding the bag. I'm responsible for QA and regulatory compliance, so if the FDA decides to sh-tcan us, it's all on me. And I'm the only geek who will touch the equipment. It will sit 10 feet from me in a locked enclosure.

We are bleeding money on many outsourced service contracts that barely provide meaningful service at all. Unfortunately, we are stuck with some of them because the specialized nature of the equipment requires PM and calibration by the manufacturer only. As for the local third-party contractors we work with, they do such a shi--y job that we are looking to train our own staff to do things so far outside the scope of their normal jobs that it's comical. I am teaching myself BACnet programming so that we can fire the HVAC contractor we pay thousands of dollars each quarter to screw up. Why not find another contractor, you ask? Because the one or two other "qualified" ones consider our business so small that they give us an insanely inflated quote just to make us go away. We are unfortunately located >400 miles from the nearest city with other entities in our industry sector, so there is zero expertise in the mouth-breathing local economy.

We are a 501(c)3 nonprofit research facility, so capital expenses are almost exclusively tied to grants. Buying these machines was not written into the grant [one of many stupid oversights committed before I was hired], but we need them now and can't wait until we can build them into a future grant. Furthermore, the day-to-day budget that keeps the lights on between grant-funded research projects cannot support more service contracts that exist mostly "just in case".

I knew this thread would get overwhelmed by all the armchair CEOs offering their advice on how our company should be run and funded. I sure hope all of you understand how truly helpful and actionable such advice is to me!
 

BillBraskey

Junior Member
Jun 29, 2016
If the budget is super tight and you demand standard form factors, you just have to build the servers yourself; there is no way around it. HP, Dell, and IBM always use proprietary form factors.

But building them yourself always means lower reliability, because you don't have the time and facilities to test each component (CPU, motherboard, PSU, DIMMs, controllers, disks) individually and in combination. HP, Dell, and IBM test their products rigorously before putting them together for sale.

You make a good point about OEM integrated reliability testing. But for that to be worth the cost, I'll need to research what sort of validation guarantees those vendors can provide. If it doesn't save me a bunch of time preparing my own DQ/IQ/OQ/PQ documents, then there isn't much value.

I have learned from experience that proprietary form factors are a trap, whether it's funky riser cards, PSUs that can't be upgraded beyond a certain wattage, limited BIOS settings, RAID backplane issues, or that one oddball controller/bridge chip on the mainboard that nobody ever writes/updates drivers for. The big OEMs cater to the 5-year depreciation schedule, while we expect all of our computers to be used, in some fashion, for 7-10 years. It's a nightmare trying to make a 7-year-old Dell server do any new tricks.
 

mxnerd

Diamond Member
Jul 6, 2007
There are many decommissioned Dell, HP & IBM servers on eBay that are dirt cheap with 24GB or 32GB of RAM, but probably none come with RAID 6.

Looks like you have no choice but to build them yourself. The risk, of course, is that if anything goes wrong, you have to handle it yourself. Better to have spare parts around.

Also, if you plan to use the machines for that long, make sure to keep the whole rack cool at all times. Excessive heat is the biggest killer of any component.
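Since need #1 already calls for true LOM, one cheap way to keep an eye on rack heat is to poll the BMCs over IPMI and flag anything running hot. A rough Python sketch follows; the sample text mimics typical `ipmitool sdr type Temperature` output, but column layout and sensor names vary by BMC, so treat the parser as a starting point only:

```python
# Rough sketch: parse `ipmitool sdr type Temperature` output and flag hot
# sensors. SAMPLE mimics common ipmitool formatting; exact columns and
# sensor names vary by BMC, so check your own output first.

SAMPLE = """\
Inlet Temp       | 30h | ok  |  7.1 | 24 degrees C
CPU1 Temp        | 31h | ok  |  3.1 | 61 degrees C
CPU2 Temp        | 32h | ok  |  3.2 | 78 degrees C
"""

def hot_sensors(sdr_text, limit_c=70):
    """Return (sensor, temp) pairs at or above limit_c degrees C."""
    hot = []
    for line in sdr_text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        # Expect 5 pipe-separated columns ending in "NN degrees C".
        if len(parts) == 5 and parts[4].endswith("degrees C"):
            temp = int(parts[4].split()[0])
            if temp >= limit_c:
                hot.append((parts[0], temp))
    return hot

if __name__ == "__main__":
    for sensor, temp in hot_sensors(SAMPLE):
        print(f"{sensor}: {temp} C")
```

Run something like this from cron against the real `ipmitool` output and feed any hits into the same email/SMS alerting path as the environmental monitoring.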
 

lakedude

Platinum Member
Mar 14, 2009
Setting up true redundancy is not as straightforward as you think it...
Just about impossible for a small scale setup in my experience.

We have several redundant systems and they all fail from time to time.

Perhaps the most impressive system we ever had used 2x of every board and 2x of each hard drive. The system had a modem on top and red LED error lights on every board and hard drive. The idea was that if a part went bad, the system would run on the copy and use the modem to order itself a new part. The system was touted to have "100% uptime". In theory, your only indication of a problem was that a part would show up, and you were supposed to open the case and replace the part with the illuminated red error LED, hot, with no downtime.

In practice the thing was not 100% reliable. We never had a problem with any of the boards but the thing died completely, twice.

One time there was a problem with a clock shared between the two halves of the system, and it knocked the whole system offline. It was one of the few parts the system didn't have two of.

The other time a hard drive went bad and the system knew it as this is one of the faults the system was said to be able to guard against. Unfortunately the system decided that the good drive was the bad drive and that the bad drive was the good drive so it shut down the good drive and tried to run from the bad one! Needless to say that didn't work so well.

Great redundant system, in theory...
 

XavierMace

Diamond Member
Apr 20, 2013
It's getting harder to find servers with even a single PCI-X slot. By the sound of it you need several PCI-X slots? That's going to significantly narrow down your choices.

In addition, can you give more details on these cards? How big are they? Will they fit in half length/half height slots? Are they all bus powered or do they need cables?
 

pcgeek11

Lifer
Jun 12, 2005
I voted "Buy new equipment with vendor support" due to your lack of an IT staff and lack of professional experience on the subject.

I would also consider a lease of equipment with support.
 

monkeydelmagico

Diamond Member
Nov 16, 2011
I knew this thread would get overwhelmed by all the armchair CEOs offering their advice on how our company should be run and funded. I sure hope all of you understand how truly helpful and actionable such advice is to me!

If you can't recognize the actionable item from every single person saying BUY NEW and GET A CONTRACT, I understand completely why you are not the CEO and have been tasked with the impossible.

Good luck Gilligan.
 


Blain

Lifer
Oct 9, 1999
If you can't recognize the actionable item from every single person saying BUY NEW and GET A CONTRACT I understand completely why you are not the CEO and have been tasked with the impossible.
aka... "Buy new equipment with vendor support"

+1 for buying new with vendor support