
Hitachi Plans 500-Gigabyte Hard Disk, Largest Ever

Originally posted by: PhasmatisNox
Moments later, Bill Gates announces that Longhorn requires 120GB free space to install.
To Microsoft's credit, Windows is the only major OS that still fits on 1 CD. OS X and Red Hat both take up 2+ CDs.
 
Originally posted by: ViRGE
Originally posted by: PhasmatisNox
Moments later, Bill Gates announces that Longhorn requires 120GB free space to install.
To Microsoft's credit, Windows is the only major OS that still fits on 1 CD. OS X and Red Hat both take up 2+ CDs.

Not really. You can get Linux live CDs in an 80 MB download. I think the record for a small Linux distro was 14 MB... (although even that seems large...)

The 2nd CD for Red Hat contains a lot of extra applications and such. Linux on 3 CDs includes OpenOffice, 8 different FTP clients, IRC clients, web browsers, email clients, chat programs, the GIMP (a Photoshop equivalent)...
 
It seems that some headway is being made toward new forms of storage: prototype of a holographic drive

No idea on read/write times (so it might only be a Blu-ray successor, but it does have read/write capabilities), but with data capacities ranging from 200 GB to 1.6 TB it seems interesting. And it's notable that an actual prototype has been made; IBM has been working on this for a long time with nothing to show for it.
 
Originally posted by: Toastedlightly
"half a terabyte, or more than 500 billion bits of data."

Isn't that 500 billion bytes of data?

Yes, but it is technically also more than 500 billion bits. 😉
 
Originally posted by: SportSC4
It seems that some headway is being made toward new forms of storage: prototype of a holographic drive

No idea on read/write times (so it might only be a Blu-ray successor, but it does have read/write capabilities), but with data capacities ranging from 200 GB to 1.6 TB it seems interesting. And it's notable that an actual prototype has been made; IBM has been working on this for a long time with nothing to show for it.

pix? can't find any . . .
 
Originally posted by: Philippine Mango
Originally posted by: BannedTroll
Originally posted by: Philippine Mango
No, I just used that term because when I was trying to explain the drive capacity issue to someone, I used "Japanese byte". Remember, I picked up this term a VERY LONG TIME AGO. I never debated it or got into an argument with anyone about this "term" because I didn't mention it THAT often. If "Japanese byte" is not the term and you want to refer to that "issue", how WOULD you refer to it? Would you give a drawn-out explanation every time, even in a conversation you planned to keep to 30 seconds?

I don't have a word for the capacity difference, but that is what the guy told me and I simply accepted it; whether or not it's true I don't know for sure. I didn't bother to seriously debate it because I never heard evidence DISproving it, like, for example, another term being used for it. In the case of the Fry's associate, I DID have evidence DISproving what he said, which is why I argued with him. If you don't have any evidence disproving something, and you don't yourself have a term for it, then you go along with it. Anyway, I was hesitant to accept the term and asked him, "are you serious?" I'd never heard of it before, and again this was a while ago, but apparently he had a good explanation for where he heard it.
How would I refer to it? Oh, I don't know......

It really pisses me off that hard drive manufacturers choose to measure drives in decimal when the rest of the industry uses binary. Why can't they make a drive that has 500GB in binary?

.......still a lame argument.

Yes, I actually could say that, but it seems like there isn't a word to describe that exact problem.
There's no one word for it (at least, to my knowledge) but you can use several. Try "GiB-GB conversion", for one.


(I've never actually used that term before 😉)

EDIT: After all, the entire problem is that the hard drive manufacturers measure capacity in GB (10^9 bytes), whereas Windows calculates the capacity in GiB (2^30 bytes).
 
Meh, I'm still waiting for the 7200 RPM notebook drive Seagate promised 6 months ago. They haven't mentioned it since; I think they're a bit behind on their product launches.
 
The term is decimal (or metric) vs. binary.


GB = gigabyte = 1,000,000,000 (decimal) or 10^9

GiB = gibibyte = 1,073,741,824 (binary) or 2^30


Thus, 128GiB = 137GB, etc.

EDIT: Somebody correct me if I'm wrong. And congrats to ElFenix on 50k posts!
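The decimal-vs-binary gap described above is easy to sketch in a few lines of Python (the function name here is just for illustration, not anything from the thread):

```python
# Why a "500 GB" drive shows up smaller in Windows: the label counts
# decimal gigabytes (10^9 bytes), while the OS divides the same byte
# count by 2^30 and reports binary gibibytes.

GB = 10**9    # decimal gigabyte (what drive labels use)
GIB = 2**30   # binary gibibyte (what the OS reports as "GB")

def label_gb_to_reported_gib(label_gb: float) -> float:
    """Convert a drive's labeled decimal GB to the GiB the OS displays."""
    return label_gb * GB / GIB

print(round(label_gb_to_reported_gib(500), 1))  # → 465.7
print(round(label_gb_to_reported_gib(137)))     # → 128
```

The second line reproduces the 128GiB = 137GB example from the post above.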
 
Originally posted by: MisterCornell
That could hold all the porn in the world. :Q
Not true. I just purchased a 200GB drive and a 300GB one, and I'm pretty sure I'll fill them up very quickly with just such things. THERE IS NO LIMIT!!!!
 
Originally posted by: SportSC4
when are they going to get *faster*?
Are you kidding? I have a SCSI drive that's a couple of years old, and I was running my operating system on it solely to keep those nice access times. Then I saw the new Maxtor drive with 16MB of cache and I noticed that the transfer rates were MUCH higher on it than on my Ultrastar 15k drive.... and that's WITHOUT NCQ enabled. They're getting plenty fast.
 
Drives do get faster as they get bigger: the more densely the data is packed on the platters, the more data passes under the head per revolution at the standard 7200 RPM.

Access time doesn't always improve with size - there's more 'area' to seek across - but the read and write speeds for large chunks of data are usually faster on bigger drives.
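That density/throughput relationship can be put as a rough back-of-envelope calculation (the track sizes below are made-up illustrative numbers, not specs for any real drive):

```python
# At a fixed spindle speed, peak sequential throughput is roughly one
# full track per revolution, so doubling the bytes per track (denser
# platters) doubles the transfer rate without spinning any faster.

RPM = 7200
REVS_PER_SEC = RPM / 60.0  # 120 revolutions per second

def sustained_mb_per_s(bytes_per_track: int) -> float:
    """Peak sequential throughput: bytes per track times revs per second."""
    return bytes_per_track * REVS_PER_SEC / 1e6

print(sustained_mb_per_s(500_000))    # older, sparser platter → 60.0
print(sustained_mb_per_s(1_000_000))  # denser platter, same RPM → 120.0
```

Access time, by contrast, is set by seek distance and rotational latency, which is why it doesn't get the same free ride from density.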

 