Why does rack space cost so much?

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
One of the reasons Anand and several other sources give for why 1U and 2U cases are so popular is that rack space is very expensive, forcing people to come up with novel chipset and cooling solutions in order to cram servers into less vertical space.

Why exactly does rack space cost so much? I would have thought that the biggest costs to a co-location centre would be bandwidth and maintenance? In which case, it would make more sense to go with a 4U case so there is less chance of a unit failing. I doubt that real estate is really a big worry since you don't need to be TOO picky about where to host your co-loc. So why are people forced to cram their components into tiny little boxes?
 

ProviaFan

Lifer
Mar 17, 2001
14,993
1
0
I have never been shopping for rack space, but I assume that the rack space cost includes the cost of electricity (bigger equipment usually consumes more) and cooling, and probably other things that I can't think of right now.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
I don't know, myself, but I imagine it would be something like this:

Imagine a room the size of your house dedicated to row upon row of racks.
Imagine the amount of power necessary to drive so many computers. Factor in industrial surge protectors, backup supplies, generators, and the like.
Imagine the cooling requirements for so many computers stacked end on end on each rack, especially during summer. Also remember this will probably double the amount of electricity required.
Now start considering the amount of bandwidth such a facility would require, the maintenance and troubleshooting costs of so much high-end hardware, plus the insurance required just in case something does go wrong and the entire facility fries itself.
Speaking of insurance, let's try to avoid data loss and keep uptime permanent by duplicating everything for redundancy. Hell, let's be really good and not only duplicate everything on site (data and hardware) but also have a matching facility on the other coast.

I dunno about you, but I'm beginning to see why rack space is so expensive. Add to that the drive towards increasing density (blade servers) and reducing power requirements (especially heat). I see why hot-swapping is so prevalent (5 seconds to pull a unit out and plug in a new one; less downtime) and why redundancy keeps increasing (some CPUs, from what I understand, have error checking: if the end result doesn't quite match up, the CPU resets its state and re-processes the instruction. If that still doesn't work, it activates a duplicate execution engine and runs the code through there; if that works, it stops using the first engine and sticks with the second. If that doesn't work either, it sends everything off to be executed elsewhere and waves a red flag.)
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
But the thing is, all the stuff you mentioned, bandwidth, heat, power, data loss, maintenance, troubleshooting etc., it's all INDEPENDENT of rack space.

If you cram a 2x Athlon MP 2000+ with some 15k SCSI hard drives into a case, it doesn't matter if the case is 1U or 4U; it's still going to use the same amount of power, heat, and bandwidth. It's still going to fail just as often (probably more often in a 1U, because of the heat).

The only thing I can think of that increases with rack space is floor space, which I can't imagine being all that expensive.

I can understand why the cost of co-location is so expensive in itself, I just don't understand why it's so dependent on how much rack space you use.
 

GlassGhost

Member
Jan 30, 2003
65
0
0
Originally posted by: Shalmanese
But the thing is, all the stuff you mentioned, bandwidth, heat, power, data loss, maintenance, troubleshooting etc., it's all INDEPENDENT of rack space.

If you cram a 2x Athlon MP 2000+ with some 15k SCSI hard drives into a case, it doesn't matter if the case is 1U or 4U; it's still going to use the same amount of power, heat, and bandwidth. It's still going to fail just as often (probably more often in a 1U, because of the heat).

The only thing I can think of that increases with rack space is floor space, which I can't imagine being all that expensive.

I can understand why the cost of co-location is so expensive in itself, I just don't understand why it's so dependent on how much rack space you use.

True. But rack providers don't take this into account. They don't monitor exactly how much electricity your box consumes (at least the vast majority don't); they just take what an average 1U vs. an average 4U server would consume. Ditto for heat produced.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Say you have a 1U server and a 4U server. They're both fairly big, and yet fairly small, depending on whether you're comparing them to a pencil or a car. You could fit either in your closet, and you could fit neither in the glove box of your car. Now, figure that ABC datacenter has a total of 400 rack spaces available. You can fit 100 4U servers, or 400 1U servers. When you look at it that way, the difference is huge, a fourfold increase in density, so each space definitely has its worth...
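The density point above can be sketched in a few lines of Python, using the 400-rack-space figure from the post (nothing here is a real colo number):

```python
# Density math for a hypothetical facility with 400 rack units (U) of space.
TOTAL_UNITS = 400

def servers_that_fit(case_height_u):
    """Number of servers of a given case height the facility can hold."""
    return TOTAL_UNITS // case_height_u

print(servers_that_fit(1))  # 400 one-U servers
print(servers_that_fit(4))  # 100 four-U servers: one quarter the density
```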
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
Okay, say co-loc ABC has 400 rack spaces and co-loc DEF also has 400 rack spaces. ABC has 100 4U servers and DEF has 400 1U servers. Now, all things being equal, each UNIT is going to be using up bandwidth, so DEF has to have 4 times as many pipes as ABC. Each UNIT has a separate HD, so DEF is going to have 4 times as many dead hard drives. It's going to have 4 times as many crashes and 4 times as much downtime. Yet ABC and DEF are earning the exact same amount of money.

The point I am trying to make is that I don't quite see why computing density should be worth so much while computing quantity is worth relatively little. The only difference between a server farm crammed into one room and the same farm spread out over two rooms is the cost to rent/build another room.
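The ABC/DEF comparison can be put into numbers; the per-U price and per-machine cost below are invented purely for illustration:

```python
# Two hypothetical facilities with the same 400U of rack space, charging by the U.
RACK_UNITS = 400
PRICE_PER_U = 100      # assumed monthly charge per rack unit (illustrative)
COST_PER_SERVER = 60   # assumed monthly per-machine cost (bandwidth, dead
                       # drives, support) -- scales with unit count, not case size

def revenue_and_costs(case_height_u):
    servers = RACK_UNITS // case_height_u
    return RACK_UNITS * PRICE_PER_U, servers * COST_PER_SERVER

print(revenue_and_costs(4))  # ABC, 100 servers: (40000, 6000)
print(revenue_and_costs(1))  # DEF, 400 servers: (40000, 24000)
```

Which is exactly the puzzle being raised: under pure per-U pricing, both facilities earn the same revenue while DEF carries four times the per-machine costs.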
 

bsobel

Moderator Emeritus, Elite Member
Dec 9, 2001
13,346
0
0
The point I am trying to make is that I don't quite see why computing density should be worth so much while computing quantity is worth relatively little. The only difference between a server farm crammed into one room and the same farm spread out over two rooms is the cost to rent/build another room.

You must not realize how horribly expensive it actually is to rent and build out one of those rooms. Fire suppression alone can run a fortune.
Bill


 

Evadman

Administrator Emeritus, Elite Member
Feb 18, 2001
30,990
5
81
You must not realize how horribly expensive it actually is to rent and build out one of those rooms. Fire suppression alone can run a fortune.
Bill

Halon is not cheap by any definition of the word :)

But shalmanese does have a point. I would assume that, like all people, these places are lazy and want an easy way of pricing. A "U" is easy: you measure, and you're done.
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
So how much DOES it cost to build a co-loc room? I never considered that it had to be fireproofed. I always assumed the actual housing of the computers would be relatively cheap.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Why do larger apartments cost more than smaller ones? The more space you take up in a finite area, the more you should have to pay. Period. :)

Necessities for a datacenter:
Floor space - raised floors are a plus; in metropolitan areas floor space is harder to find and more expensive
Air conditioning - enough to keep the room "cool" 24/7 even when filled to the brim
Power - you will need enough circuits to support all of the machines you will be housing; 2+ connections to different sources is a plus so if one goes down the other still works
UPS - none of this 650VA stuff either
Generator - propane, gasoline, whatever, just in case the power goes out for longer than the UPSes can handle; multiple might be a plus
Physical security - locks, locks, locks (the generator should also be secured)
Staff - someone needs to be there pretty much 24/7
Internet connection - preferably 2-3 connections over different media to different ISPs
Infrastructure - switches and routers in failover configurations to handle your connection and customers

That's all I can come up with off the top of my head, but I am pretty sure I am forgetting something...
 

ProviaFan

Lifer
Mar 17, 2001
14,993
1
0
Originally posted by: n0cmonkey
Why do larger apartments cost more than smaller ones? The more space you take up in a finite area, the more you should have to pay. Period. :)

Necessities for a datacenter:
Floor space - raised floors are a plus; in metropolitan areas floor space is harder to find and more expensive
Air conditioning - enough to keep the room "cool" 24/7 even when filled to the brim
Power - you will need enough circuits to support all of the machines you will be housing; 2+ connections to different sources is a plus so if one goes down the other still works
UPS - none of this 650VA stuff either
Generator - propane, gasoline, whatever, just in case the power goes out for longer than the UPSes can handle; multiple might be a plus
Physical security - locks, locks, locks (the generator should also be secured)
Staff - someone needs to be there pretty much 24/7
Internet connection - preferably 2-3 connections over different media to different ISPs
Infrastructure - switches and routers in failover configurations to handle your connection and customers

That's all I can come up with off the top of my head, but I am pretty sure I am forgetting something...
The racks themselves? The wiring? The fiber? ;)
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: jliechty
Originally posted by: n0cmonkey
Why do larger apartments cost more than smaller ones? The more space you take up in a finite area, the more you should have to pay. Period. :)

Necessities for a datacenter:
Floor space - raised floors are a plus; in metropolitan areas floor space is harder to find and more expensive
Air conditioning - enough to keep the room "cool" 24/7 even when filled to the brim
Power - you will need enough circuits to support all of the machines you will be housing; 2+ connections to different sources is a plus so if one goes down the other still works
UPS - none of this 650VA stuff either
Generator - propane, gasoline, whatever, just in case the power goes out for longer than the UPSes can handle; multiple might be a plus
Physical security - locks, locks, locks (the generator should also be secured)
Staff - someone needs to be there pretty much 24/7
Internet connection - preferably 2-3 connections over different media to different ISPs
Infrastructure - switches and routers in failover configurations to handle your connection and customers

That's all I can come up with off the top of my head, but I am pretty sure I am forgetting something...
The racks themselves? The wiring? The fiber? ;)

Some of that has to do with infrastructure (cabling) I guess, but yeah, the "cheap" stuff too ;)
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
Originally posted by: n0cmonkey


Necessities for a datacenter:
Floor space - raised floors are a plus; in metropolitan areas floor space is harder to find and more expensive

Why do data centers need to be in a metropolitan area? Shuffle it somewhere relatively rural but still accessible.

Air conditioning - enough to keep the room "cool" 24/7 even when filled to the brim

This should be independent of case size. It's more dependent on how many units you house.

Power - you will need enough circuits to support all of the machines you will be housing; 2+ connections to different sources is a plus so if one goes down the other still works
UPS - none of this 650VA stuff either
Generator - propane, gasoline, whatever, just in case the power goes out for longer than the UPSes can handle; multiple might be a plus

Again, independent of rack size.

Physical security - locks, locks, locks (the generator should also be secured)

Hmm... maybe one extra lock; all you're getting is a bigger room.

Staff - someone needs to be there pretty much 24/7

So the staff have to walk 50m further to reach the computer? Staff size is based on the number of units, not the size they take up.

Internet connection - preferably 2-3 connections over different media to different ISPs
Infrastructure - switches and routers in failover configurations to handle your connection and customers

This is all independent of rack size; a 4U machine and the same machine squeezed into a 1U case are going to use the same bandwidth.

That's all I can come up with off the top of my head, but I am pretty sure I am forgetting something...

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Just a note, but the post you replied to was explaining some of the things necessary for a data center, in response to:
So how much DOES it cost to build a co-loc room? I never considered that it had to be fireproofed. I always assumed the actual housing of the computers would be relatively cheap.

Originally posted by: Shalmanese
Originally posted by: n0cmonkey


Necessities for a datacenter:
Floor space - raised floors are a plus; in metropolitan areas floor space is harder to find and more expensive

Why do data centers need to be in a metropolitan area? Shuffle it somewhere relatively rural but still accessible.

How far is accessible? The DC metro area spreads out for quite a ways. If I had a machine coloed in DC (I live in NoVA), I would not consider it accessible. It's about a 45-minute trip, without traffic.

Air conditioning - enough to keep the room "cool" 24/7 even when filled to the brim

This should be independent of case size. It's more dependent on how many units you house.

Agreed.

Power - you will need enough circuits to support all of the machines you will be housing; 2+ connections to different sources is a plus so if one goes down the other still works
UPS - none of this 650VA stuff either
Generator - propane, gasoline, whatever, just in case the power goes out for longer than the UPSes can handle; multiple might be a plus

Again, independent of rack size.

Agreed.

Physical security - locks, locks, locks (the generator should also be secured)

Hmm... maybe one extra lock; all you're getting is a bigger room.

Extra locks (on all doors), security systems for doors (biometrics is big), security cameras, etc.

Staff - someone needs to be there pretty much 24/7

So the staff have to walk 50m further to reach the computer? Staff size is based on the number of units, not the size they take up.

Ok.

Internet connection - preferably 2-3 connections over different media to different ISPs
Infrastructure - switches and routers in failover configurations to handle your connection and customers

This is all independent of rack size; a 4U machine and the same machine squeezed into a 1U case are going to use the same bandwidth.

But this is still something that needs to be purchased for a datacenter :)

That's all I can come up with off the top of my head, but I am pretty sure I am forgetting something...

So, as far as what a datacenter needs, my post is very valid. I did forget the waterless fire-suppression stuff, though.

As far as why you pay by the U, my initial response is still the one I am going with. Why does a bigger apartment cost more? Because you are using a larger space inside of a finite area.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Originally posted by: Shalmanese


Air conditioning - enough to keep the room "cool" 24/7 even when filled to the brim

This should be independent of case size. It's more dependent on how many units you house.

No offense, but I don't think you realize that although the case might be bigger, it doesn't always house the same components. A 1U server is good enough to maintain a light load (say, a rarely accessed web site), and it may be duplicated so it takes up 2U of space. However, a 3U or 4U server isn't just all hot air. They house more powerful servers with more processors, memory, and/or disk space. Blade servers are 3U, I believe, and house maybe 6-8 blades (I forget), each with 1-2 processors. Apple's new Xserve add-on is a 3U RAID cabinet simply because there's no way you're going to house that many HDDs plus accompanying components in a 1U case.

Power - you will need enough circuits to support all of the machines you will be housing; 2+ connections to different sources is a plus so if one goes down the other still works
UPS - none of this 650VA stuff either
Generator - propane, gasoline, whatever, just in case the power goes out for longer than the UPSes can handle; multiple might be a plus

Again, independent of rack size.

I'm beginning to wonder what you think is inside these racks. Once again, a 3U or 4U server isn't just a 1U server shifted over to a larger case size. IT people simply aren't dumb enough to waste that much space. If all 1U or 2U units were shifted over to larger cases just for the hell of it, you're talking about doubling or even quadrupling floor space right off the bat. If it can fit in 1U, then by all means use 1U. That's the basic idea behind efficiency.
Oh, yeah, and if you're running a database off 1U, you won't be holding much storage. With 4U, we're talking multiple drives, each one using more power than the processor arbitrating data flow.

Staff - someone needs to be there pretty much 24/7

So the staff have to walk 50m further to reach the computer? Staff size is based on the number of units, not the size they take up.

The reason staff size would be based on the number of units is because EACH UNIT TAKES UP SPACE.
Using your reasoning, I see no reason why prisons need to be staffed with more than three guards in eight-hour shifts.

Internet connection - preferably 2-3 connections over different media to different ISPs
Infrastructure - switches and routers in failover configurations to handle your connection and customers

This is all independent of rack size; a 4U machine and the same machine squeezed into a 1U case are going to use the same bandwidth.

Again, obviously you're right. It'll use about the same amount of bandwidth. However, a 4U machine is probably not going to be as wimpy as a 1U machine, so the point is moot.

 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
Hmm... maybe I'm going at this the wrong way. I always envisioned the purchasing process as something like: Hmm... okay, we need XYZ hardware, now how small a rack can we cram it into?

Instead, you're saying it's more like: Hmm... we can only afford 10U of space, what's the most hardware we can cram into that?

What I was wondering is, if it was closer to my scenario, why they needed to cram anything. Obviously, if the costs were the same, or very close, you would opt for a 4U over a 1U since it gives you better cooling and does away with esoteric stuff like angled RAM slots and 90-degree PCI riser cards. However, since costs are VERY different for a 1U and a 4U case, people are forced to go as small as they can.

I agree that real estate may be a significant cost in built-up, urban areas, but that would mean there would be far less of a difference in, say, Texas compared to NY. AFAIK, this is a universal phenomenon.

Also: Sahakiel, are you saying that we need more guards in a prison if we gave them roomier cells? The number of guards is related to the number of prisoners, not how big the prison is.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Originally posted by: Shalmanese
Hmm... maybe I'm going at this the wrong way. I always envisioned the purchasing process as something like: Hmm... okay, we need XYZ hardware, now how small a rack can we cram it into?

Instead, you're saying it's more like: Hmm... we can only afford 10U of space, what's the most hardware we can cram into that?

Not what I was trying to say. Sorry if I'm unclear.
It is as you say: trying to cram a given hardware requirement into as small a space as possible. It is just more efficient to cram light servers into 1U and seriously heavy-duty units into 12U or more. From what I understand of your logic, it seems it would be feasible to cram the equivalent of a VIA Eden Mini-ITX based web browser into a rather roomy server/workstation chassis. Sure, it doesn't matter if you've got one or two systems like that. However, try dealing with a few hundred.

What I was wondering is, if it was closer to my scenario, why they needed to cram anything. Obviously, if the costs were the same, or very close, you would opt for a 4U over a 1U since it gives you better cooling and does away with esoteric stuff like angled RAM slots and 90-degree PCI riser cards. However, since costs are VERY different for a 1U and a 4U case, people are forced to go as small as they can.

Obviously, yes, you would go 4U over 1U in those situations. However, anybody who manages servers knows you can cram 4 such units into the same space, and will then charge four times as much for that much space.
A mid-sized car can hold four or more adults rather comfortably. So why bother with hatchbacks and other small cars that barely fit 3 adults? Why would we bother with motorcycles, then, which hold at most two people, and some not even that?
If you're thinking "cost, of course," well, then you've got the answer for rack space.

I agree that real estate may be a significant cost in built-up, urban areas, but that would mean there would be far less of a difference in, say, Texas compared to NY. AFAIK, this is a universal phenomenon.

The more remote your location, the higher the costs of keeping it as well connected and accessible as the one in the same high-rise as you.

Also: Sahakiel, are you saying that we need more guards in a prison if we gave them roomier cells? The number of guards is related to the number of prisoners, not how big the prison is.

I was just following your line of logic on that one.
Guards are related not only to the number of prisoners, but also to how big the prison is. One guy isn't going to cover the perimeter all by his lonesome when it measures a couple of miles around; it'd take him an hour or two just to make one circuit. So prisons are made smaller to reduce staff size. Cells are not made bigger, because that's usually a waste of space. If you do find a reason to have bigger cells, it's usually to put a lot of prisoners into the same cell (and these are usually low-risk types; think police department processing).
Prisons are also monitored 24/7, covering pretty much every single inch of prison space, especially areas that are unfrequented (to make sure no one stays there drilling holes unnoticed).

Any of that sound familiar? Replace prisoner with computer and prison with data center.
 

Tweakmeister

Senior member
Jul 12, 2000
646
0
0
Take a look at the two articles linked below if you haven't already.

It's expensive to run these centers. Many of them are near bomb-proof. You've got support technicians, security, etc. The human aspects aside, the technology behind these places is amazing. The redundancy and cost of everything, down to the flooring and fingerprint readers (or eye scanners or whatever). The electricity bill is huge: air conditioning, battery backups/generators (that last days), offsite backups, etc.

Getting down to the cost per (U)nit: picture having a parking space that you charge for. Ideally you can fit 2 motorcycles in one car space. Would you rather charge $35 per motorcycle space, or $50 per car space? Sure, the motorcycles might "tear up" the pavement more (more traffic), but you'd rather see that $70 versus the $50, considering how small the cost of traffic maintenance is.
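The parking analogy in a few lines, using the made-up prices from the post:

```python
# One car-sized parking space, priced two ways (illustrative numbers only).
PRICE_PER_MOTORCYCLE_SPACE = 35   # two motorcycles fit in one car-sized space
PRICE_PER_CAR_SPACE = 50

dense_revenue = 2 * PRICE_PER_MOTORCYCLE_SPACE  # rent it as two motorcycle slots
sparse_revenue = PRICE_PER_CAR_SPACE            # rent it as one car slot
print(dense_revenue, sparse_revenue)  # 70 50
```

Same floor area, 40% more revenue when it's sliced into smaller billable units; that's the per-U pricing logic in miniature.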


http://www.hardocp.com/article.html?art=Mjc5
http://www.hardocp.com/article.html?art=Mjgw
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
One also needs to realize the magnitude. With bigger providers, we're not talking hundreds of servers; it's more like thousands. Size and consumption do matter at those numbers.

My own provider just moved house, here's the report:
http://www.heise.de/ct/aktuell/data/uma-20.02.03-002/

It's in German; the Babelfish translation is kind of readable. 12,000 servers, 8 megawatts of consumption, batteries good for 17 minutes, and four standby diesel generators at 2,400 kW each.

Understand why these people WANT small low power blades?
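A quick sanity check on the figures quoted above (all numbers are taken from that post, not independently verified):

```python
# Figures reported for the facility: 12,000 servers drawing 8 MW total,
# backed by four 2,400 kW standby diesel generators.
servers = 12_000
total_load_w = 8_000_000
generator_capacity_w = 4 * 2_400_000

print(total_load_w // servers)               # ~666 W per server, cooling included
print(generator_capacity_w >= total_load_w)  # True: 9.6 MW of standby covers 8 MW
```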
 

Buddha Bart

Diamond Member
Oct 11, 1999
3,064
0
0
I think one of the key pieces is that it's hard to get good bandwidth outside of major metropolitan areas.

Figure that when you start a colo facility you're hundreds of thousands, if not several million, in the hole to begin with just to get the facility set up, working, and staffed. Then you get to start charging customers, so the more you can cram in, the faster you can recoup your losses and be on your way to profit.

Plus, while land itself may be cheaper when you're farther out, the cost of building the facility will still be pretty similar.

As higher-tier bandwidth spreads out and more datacenters get built, the price of colocating will decline. In fact, it already has, drastically, in the last few years.

bart