Simulating HD cache with Ram?

isaacmacdonald

Platinum Member
Jun 7, 2002
2,820
0
0
Is there a way to allocate a specific amount of RAM to act as a write cache for a specific HD? In this case, a P2P app writes 10 different streams at, let's say, 100 KB/s in parallel. With traditional Windows management, the result after an hour is a very heavily fragmented drive. Increasing the cluster size seems to have some effect, but I'm wondering if there isn't a way to address the issue more directly with otherwise superfluous RAM. I assume Windows isn't structured to handle this kind of predictable writing. Wouldn't it be substantially more efficient for the HD to write these files in 30MB chunks rather than cluster-size chunks?
 

isaacmacdonald

Platinum Member
Jun 7, 2002
2,820
0
0
You don't think the info could be held in RAM for 5 minutes without being corrupted? I haven't had any problems with the more conventional RAM disk setup, so I assume it wouldn't be any riskier than that.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
The OS caches the file in 256K chunks by default. There are calls you can make to MSVCRT to increase the buffering to 1MB before the OS even sees what is written, assuming the P2P programs are using stdio, which they probably are considering the bugs with files larger than 2GB.
The problem is Kazaa (and probably most other P2P software) explicitly tells it to flush the file after every packet received.
Also, it's writing more than just the packet it got each time. The information about which clients the file came from, and the metadata each person entered about the file, is stored at the end of the file and rewritten after every single packet.
They could easily get around the problem by making the file its final size when it is first created and overwriting the garbage in the middle later.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
It can be held in RAM forever, but what happens when something crashes? You lose 30MB instead of a few KB.
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: isaacmacdonald
Is there a way to allocate a specific amount of RAM to act as a write cache for a specific HD? In this case, a P2P app writes 10 different streams at, let's say, 100 KB/s in parallel. With traditional Windows management, the result after an hour is a very heavily fragmented drive. Increasing the cluster size seems to have some effect, but I'm wondering if there isn't a way to address the issue more directly with otherwise superfluous RAM. I assume Windows isn't structured to handle this kind of predictable writing. Wouldn't it be substantially more efficient for the HD to write these files in 30MB chunks rather than cluster-size chunks?

Unfortunately you're a little late with the idea ;)

Design and implementation of a Log-Structured File System

* not speaking for Intel Corp. *
 

isaacmacdonald

Platinum Member
Jun 7, 2002
2,820
0
0
Originally posted by: Sohcan
Originally posted by: isaacmacdonald
Is there a way to allocate a specific amount of RAM to act as a write cache for a specific HD? In this case, a P2P app writes 10 different streams at, let's say, 100 KB/s in parallel. With traditional Windows management, the result after an hour is a very heavily fragmented drive. Increasing the cluster size seems to have some effect, but I'm wondering if there isn't a way to address the issue more directly with otherwise superfluous RAM. I assume Windows isn't structured to handle this kind of predictable writing. Wouldn't it be substantially more efficient for the HD to write these files in 30MB chunks rather than cluster-size chunks?

Unfortunately you're a little late with the idea ;)

Design and implementation of a Log-Structured File System

* not speaking for Intel Corp. *

That was cool :). So was anything ever done with this? This seems like such a patently obvious idea, with huge potential returns. These days there should be tons of applications that could take advantage of this kind of thing. <scratches head>

As for that other post, even my XP box has decent uptime (averaging two weeks or so) as long as I have it hooked up to a UPS - the 32MB at risk, versus the 5k cost of sudden failure, is negligible.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
If you have access to the source code for the programs, you can easily increase the packet size. If you don't, about all you can do is set your system to "optimize for system cache" if you're running Windows 2K or XP.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
How are you gonna set the packet size it RECEIVES?

The real solution since it knows the file size before the download starts is to create the file at its final size initially and fill in the data as it comes in.

Or if you could get ext2fs drivers for Win2K..... it intentionally spreads out file start locations to prevent fragmentation rather than always using the lowest-numbered available block.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Oh wait, I just realized you were talking about P2P apps. Everything the other people have already said is true. P2P apps seem to flush immediately, and the OS worries about the insecurity of leaving a lot of stuff in the write cache. So you're basically screwed. I've had WinMX fragment a file into 9000 pieces!
 

Codewiz

Diamond Member
Jan 23, 2002
5,758
0
76
BitTorrent is a P2P application. The nice thing about it is that before it ever starts downloading, it creates the files at their full size and fills in the data as it gets it. That way fragmentation isn't a problem.
 

Howard

Lifer
Oct 14, 1999
47,982
10
81
Originally posted by: Codewiz
BitTorrent is a P2P application. The nice thing about it is that before it ever starts downloading, it creates the files at their full size and fills in the data as it gets it. That way fragmentation isn't a problem.
Yup.

BTW, I would like to know more about the ext2fs drivers...