Just built a fileserver. I wanted 3TB of total storage in RAID 5. I knew that hard drives are rated using 1 GB = 1,000,000,000 (1000^3) bytes while everything else calculates 1 GB = 1,073,741,824 (1024^3) bytes. This never used to bother me when I used smaller drives like a 40 gig and Windows would recognize it as 37.25. But as hard drives get bigger and bigger, the gap seems to be widening. The 11x 300 gig hard drives I bought to run in RAID 5 (one drive's worth of capacity goes to parity, leaving 10 drives of usable space) should equal:
10X300 = 3000 gig = 3TB
When actually 300 gig only comes out to 279.4 gig:
10x279.4 = 2794 gig = 2.73TB
To actually get Windows to see 3TB I needed to use the 12th drive (which I was going to use as a hot spare):
11x279.4 = 3073.4 gig = 3.00TB
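If anyone wants to double-check the math, here is a quick Python sketch. It assumes each "300 gig" drive is exactly 300,000,000,000 bytes (the manufacturer's decimal convention); real drives may differ slightly.

# Back-of-the-envelope check of the advertised vs. Windows-reported sizes above.
GB_DECIMAL = 1000**3   # how drive manufacturers count a gig
GB_BINARY = 1024**3    # how Windows counts a gig

# Assumption: an advertised 300 gig drive holds exactly 300 * 10^9 bytes.
drive_bytes = 300 * GB_DECIMAL
drive_gib = drive_bytes / GB_BINARY
print(f"One 300 gig drive shows up as {drive_gib:.2f} gig in Windows")  # ~279.40

def raid5_usable_gib(num_drives, bytes_per_drive):
    """RAID 5 keeps (n - 1) drives of data; one drive's worth goes to parity."""
    return (num_drives - 1) * bytes_per_drive / GB_BINARY

for n in (11, 12):
    usable = raid5_usable_gib(n, drive_bytes)
    print(f"{n} drives in RAID 5: {usable:.1f} gig = {usable / 1024:.2f} TB")

Running that prints roughly 2794 gig (2.73 TB) usable for 11 drives and 3073 gig (3.00 TB) for 12, which matches what Windows shows.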
Anyone know why drive manufacturers are the only ones that use the 1 gig = 1,000,000,000 bytes rule? With drive sizes increasing, I really wish they would start marketing their drives with the actual size.
~Sy