
Maxtor ships 300GB drive

Pariah, improvements in read/write technology barely keep up with drive size, which is why 300GB drives basically only read as fast as 30GB drives, and why the first 300GB drive is 5400RPM (to buy more time to do a read). In terms of percentage, drives are getting slower; they've just managed to conserve raw speed so far.
 
and I'm paranoid about backing up my 55.6 gigs of music (have it on 3 different drives)

Just imagine if I had to backup that... yikes...
 
Those who said that 300GB is enough are right, I think.

the manufacturers should look more into making a quiet and more reliable drive. I mean - why isn't it possible to put some research into making a HDD more shock resistant and with a lower probability of failure? And I'm aware that currently the failure rates are small (considering the number of drives produced and shipped), but perhaps it can be improved upon?
 
Originally posted by: Pariah
True, but twice the areal density means the same amount of data fills half the platter area. So seek times are theoretically halved when the areal density is doubled.

Which theory would this be? Yes, the head will pass over N times as many tracks in the same amount of time, but that's because there are N times as many tracks, so it's a wash.

It's like marking off every half yard on the football field, and then claiming you're running 2x as fast. Sure, you're passing 2x as many lines, but you're still going to get to the end zone at the same time.

According to some tech sites, higher track density = faster seek times. I have not found any studies or interviews with hard drive engineers that prove or disprove this theory. At the moment, it merely looks like one of those ideas everyone thinks is true just because everyone else thinks it's true.
 
Originally posted by: ViRGE
In terms of percentage, drives are getting slower; they've just managed to conserve raw speed so far.

New drives are slower than older drives? How does something "get slower" concurrently with "conserving speed"?? What is the difference between raw speed & cooked speed?

That doesn't make any sense.
 
Originally posted by: grant2
Originally posted by: Pariah
True, but twice the areal density means the same amount of data fills half the platter area. So seek times are theoretically halved when the areal density is doubled.

Which theory would this be? Yes, the head will pass over N times as many tracks in the same amount of time, but that's because there are N times as many tracks, so it's a wash.

It's like marking off every half yard on the football field, and then claiming you're running 2x as fast. Sure, you're passing 2x as many lines, but you're still going to get to the end zone at the same time.

According to some tech sites, higher track density = faster seek times. I have not found any studies or interviews with hard drive engineers that prove or disprove this theory. At the moment, it merely looks like one of those ideas everyone thinks is true just because everyone else thinks it's true.

You're not understanding what I'm saying. A higher areal density platter will decrease seek time for the same amount of data vs. a lower density platter. If you have 40GB of data on a single-platter 40GB drive, it will fill up the entire drive. If you move that data to a single-platter 80GB drive, the same 40GB of data doesn't suddenly transform itself into 80GB to fill the platter. The 40GB will only take up half the platter (I know it won't exactly, due to the way data is stored on the drive, but for simplicity's sake we'll pretend it does), meaning the maximum distance the head will have to travel is halfway across the platter instead of all the way across for the 40GB platter.
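The half-platter argument above can be sketched as a toy model (illustrative numbers only, not real drive geometry, and the helper `occupied_fraction` is made up for this sketch): the same data on a denser platter spans a smaller fraction of the tracks, so the worst-case seek stroke shrinks.

```python
# Toy model of the half-platter argument: same data, denser platter,
# shorter worst-case seek stroke. Assumes uniform density and contiguous
# placement starting at the outer edge, as the post itself does.

def occupied_fraction(data_gb, platter_gb):
    """Fraction of the platter's tracks the data spans (simplified)."""
    return data_gb / platter_gb

data = 40  # GB of actual data, as in the example above

full = occupied_fraction(data, 40)  # 40GB platter: data spans all tracks
half = occupied_fraction(data, 80)  # 80GB platter: data spans half of them

print(full, half)  # 1.0 0.5
```

Under these assumptions the maximum stroke really is halved; the follow-up posts below argue about whether that simplification survives contact with fragmentation and real head mechanics.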

I couldn't figure out what Virge was talking about either, so I just left it alone.
 
Originally posted by: JackBurton
Originally posted by: bill_n_opus
Originally posted by: JackBurton
I have a question, though. What are people doing with these huge-@ss drives?
I need a LOT of room for my MP3s, DivX movies, apps, and ISOs. 😉 ISOs take up a LOT of room, but imagine converting all your CDs to ISO and storing them on a large drive (or multiple large drives). Now whenever you want to install a game or program, you don't have to go around looking for your CD. All you do is mount the ISO using Daemon Tools (or something similar like Alcohol 120% or whatever) and install it. You are MUCH more organized since you can list the ISOs alphabetically, and not to mention it will install a lot faster. 😉 And ISOs are also good for Playstation emulators. 🙂

You can NEVER have too much space or have too fast of a CPU. 😉


Hey JackBurton I was going to install Alcohol in the near future just to check it out. I've used both Daemon Tools and Virtual Clone Drive in the past.

Just wondering if you've used Alcohol 120% before and how they would compare to the other two. Any advantages or disadvantages. Thanks.

"Indeed!"
Alcohol 120% is pretty bad ass. I use Daemon Tools to mount ISOs, but Alcohol 120% can do the same thing. I just use DT because it does the job and I don't like a lot of virtual drives. I just need one. 🙂 However, Alcohol is AWESOME when it comes to "backing up" copy protected CDs. If you want something that does it all ("backup" copy protected CDs, and emulate a CDROM), I'd definitely check out Alcohol. AWESOME program. Well worth the money. 😉 Any disadvantages to using Alcohol 120%? None. 😉


Thanks. The one program that I haven't tried yet, but will someday, is CDMate, which I hear is very good too.
 
Originally posted by: Pariah

You're not understanding what I'm saying. A higher areal density platter will decrease seek time for the same amount of data vs. a lower density platter. If you have 40GB of data on a single-platter 40GB drive, it will fill up the entire drive. If you move that data to a single-platter 80GB drive, the same 40GB of data doesn't suddenly transform itself into 80GB to fill the platter. The 40GB will only take up half the platter (I know it won't exactly, due to the way data is stored on the drive, but for simplicity's sake we'll pretend it does), meaning the maximum distance the head will have to travel is halfway across the platter instead of all the way across for the 40GB platter.

Yes, I will agree that it's possible seek times will decrease in the situation above, if the data is contiguous.

There are a variety of issues that can disrupt that possibility:
- heads still need time for the controller logic to process, and for the head to accelerate/decelerate
- Desired data on a harddrive is quite often NON-contiguous
- Can the head really travel as fast when it must "lock on" to a target that is 50% smaller?
- Who would buy an 80gb harddrive & only use 40gb of it??

I'm sure there are other issues with increased track density vs. seek time that neither of us has thought about, which is why I'd be interested to see some discussion from an expert i.e. a hard drive engineer.
 
Originally posted by: grant2

- Who would buy an 80gb harddrive & only use 40gb of it??

most people, i think. hard to fill up drives and drives when you're on dialup.
 
heads still need time for the controller logic to process, and for the head to accelerate/decelerate

What does that have to do with the head having to travel less distance? At worst, those activities would be equal in both scenarios.

- Desired data on a harddrive is quite often NON-contiguous

Anyone who gives a crap about any of this stuff in the first place knows the benefit of regularly defragging their HDs. This is also one of the major benefits to partitioning. When you create an OS-and-apps partition, you can limit the maximum head travel so long as you aren't accessing something in another partition.

Can the head really travel as fast when it must "lock on" to a target that is 50% smaller?

Yes, that's what R&D is for. Why do 2nd and 3rd generation 15K drives have faster seek and access times than 1st generation despite having much higher areal density and the same latency? Because technology improves across the board.

- Who would buy an 80gb harddrive & only use 40gb of it??

That was a random example for easy demonstration. Now you're just nitpicking. Filling HDs past 75% will increasingly hurt performance.
 
Originally posted by: grant2
Originally posted by: ViRGE
In terms of percentage, drives are getting slower; they've just managed to conserve raw speed so far.

New drives are slower than older drives? How does something "get slower" concurrently with "conserving speed"?? What is the difference between raw speed & cooked speed?

That doesn't make any sense.
Just as a theoretical example, let's say that it used to take 2 hours to write enough data to fill up a 60GB drive. Now, with a 120GB drive, it takes 4 hours to fill up the whole 120GB. You're still only writing 30GB/hour, but since your drive is twice as large, you've only managed to fill half as much of the drive per hour (25% vs. 50%), so your drive has the same raw speed, but half the speed on a percentage basis. Engineers keep having to work on just "keeping up" with the smaller and smaller sectors, so drives aren't really increasing in speed; if this keeps up, we're going to have to go to on-drive RAID 1 or some other method, as drives are not growing in speed to meet demand (there are 15K RPM SCSI drives, but large SCSI is an oxymoron).
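The percentage arithmetic in that example can be spelled out directly (using only the numbers from the post: a sustained write rate of 30GB/hour while capacity doubles from 60GB to 120GB):

```python
# The "same raw speed, half the percentage speed" arithmetic: the rate
# in GB/hour is constant, but the fraction of the drive filled per hour
# halves when capacity doubles.

RATE = 30  # GB per hour, assumed the same for both drives

for capacity in (60, 120):
    hours_to_fill = capacity / RATE
    pct_per_hour = 100 / hours_to_fill
    print(f"{capacity}GB drive: {hours_to_fill:.0f}h to fill, "
          f"{pct_per_hour:.0f}% of capacity per hour")
```

This prints 2h/50% for the 60GB drive and 4h/25% for the 120GB drive, which is exactly the "conserved raw speed, halved percentage speed" point being made.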
 
Pariah, if you can come up with any EVIDENCE that increased track density = lower seek times, then I'm all ears.
 
Originally posted by: ViRGE
Originally posted by: grant2
Originally posted by: ViRGE
In terms of percentage, drives are getting slower; they've just managed to conserve raw speed so far.

New drives are slower than older drives? How does something "get slower" concurrently with "conserving speed"?? What is the difference between raw speed & cooked speed?

That doesn't make any sense.
Just as a theoretical example, let's say that it used to take 2 hours to write enough data to fill up a 60GB drive. Now, with a 120GB drive, it takes 4 hours to fill up the whole 120GB. You're still only writing 30GB/hour, but since your drive is twice as large, you've only managed to fill half as much of the drive per hour (25% vs. 50%), so your drive has the same raw speed, but half the speed on a percentage basis. Engineers keep having to work on just "keeping up" with the smaller and smaller sectors, so drives aren't really increasing in speed; if this keeps up, we're going to have to go to on-drive RAID 1 or some other method, as drives are not growing in speed to meet demand (there are 15K RPM SCSI drives, but large SCSI is an oxymoron).


???

Maybe if the drive were physically twice the size, or spun at half the speed.

But the drive with double the density spins at the same rate, so logically it would read at twice the speed.
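That "twice the speed" figure rests on an assumption worth making explicit. A back-of-envelope sketch (assumed geometry, not manufacturer data): if areal density doubles equally along the track and across the tracks, only the along-track share raises the transfer rate at a fixed RPM.

```python
import math

# If a 2x areal-density gain splits evenly between the two directions,
# bits per track -- and thus sequential transfer rate at the same RPM --
# grow by sqrt(2) ~ 1.41x, while the remaining sqrt(2) goes into more
# tracks, which doesn't speed up reading any single track. The full 2x
# only happens if all the gain lands along the track.

areal_gain = 2.0
linear_gain = math.sqrt(areal_gain)     # bits along each track
track_gain = areal_gain / linear_gain   # number of tracks

print(f"transfer-rate gain at the same RPM: ~{linear_gain:.2f}x")
```

So both posters can be partly right: the denser drive does read faster at the same RPM, just not necessarily twice as fast.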
 