How do I speed up DFS replication?

FreshPrince

Diamond Member
Dec 6, 2001
I have 2 file servers running DFS, but the data is replicating sooo slowly! I was out for an hour at lunch and it had only replicated 7GB of data :(
 

spyordie007

Diamond Member
May 28, 2001
There aren't really any settings for the DFS v1 replication engine that will alter the speed.

7GB/hr works out to roughly 2MB/s (about 16Mbit/s) of traffic. What kind of network link do you have between these 2 servers? Are they performing any other roles where one of them might be busy doing something else?
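The back-of-the-envelope math is easy to check (a minimal sketch; the 7GB/hr figure is from the post above, and 1 GB is taken as 1024 MB):

```python
# Convert the observed replication rate (7 GB/hr) into MB/s and Mbit/s.
gb_per_hour = 7
mb_per_second = gb_per_hour * 1024 / 3600   # 1 GB = 1024 MB, 3600 s per hour
mbit_per_second = mb_per_second * 8         # 1 byte = 8 bits

print(f"{mb_per_second:.1f} MB/s")          # about 2.0 MB/s
print(f"{mbit_per_second:.1f} Mbit/s")      # about 15.9 Mbit/s
```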

Generally I wouldn't be overly concerned with the speeds you are seeing. I haven't ever really seen DFS replication saturate a LAN link.
 

FreshPrince

Diamond Member
Dec 6, 2001
We have isolated the master AD DC, the 2nd DC, and the 2 member storage servers on a separate gigabit switch. It just seems so slow; it should be much, much faster than that, right?

Something is not right... can you help me troubleshoot? I don't want to corrupt the data though...

Thanks,

-FP
 

stash

Diamond Member
Jun 22, 2000
It sounds fine to me. I would just leave it alone. You only need to do this initial replication once.
 

FreshPrince

Diamond Member
Dec 6, 2001
I'm worried about the speed afterwards...

This DFS is the store for our web server pool...

If it's not fast enough to replicate the data... something is up :(

edit: do you think the problem is that I moved the 130GB folder into the replication folder instead of copying it in?

Where should I copy it from/to?

From a DFS client accessing the DFS share, or from the primary data server into the DFS folder?

Thanks,

-FP
 

mikecel79

Platinum Member
Jan 15, 2002
Like everyone else suggested, it's working just fine. One thing to look at is how much free space is on each machine vs. how much data is being replicated. Your free space should be at least equal to the amount of data you are trying to replicate, so that DFS can build it in the staging area.
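That free-space rule of thumb can be checked with a quick script (a minimal sketch; the path and the 130GB figure are just examples from this thread, not real settings):

```python
import shutil

def enough_staging_space(volume_path, data_bytes):
    """Return True if free space on the volume is at least the size of the
    data set, so the replication engine has room to stage copies of it."""
    free = shutil.disk_usage(volume_path).free
    return free >= data_bytes

# Hypothetical example: a 130 GB data set on the D: volume.
# print(enough_staging_space("D:\\", 130 * 1024**3))
```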

You say this is your web server pool. I'm not sure what that means, but I'm assuming this is where the data for your web servers is stored and you have multiple servers feeding off it? If so, why in the world would you not do ANY testing before putting this in place? If you had tested this, you probably would have seen that DFS is not meant for fast replication of hundreds of GB of data in minutes. Not trying to put you down, but a little testing ahead of time would have really helped you out here.
 

stash

Diamond Member
Jun 22, 2000
mikecel79 is right. FRS isn't designed for fast replication, nor is it designed to replicate highly dynamic data. DFS is meant to provide fault tolerance and to abstract file shares for users, so that everything appears to be under one share when it could really be spread across multiple servers.

FRS compresses data, so the compression processing at both ends slows down the process. Replication is also inherently slow because of the multiple places a file is copied before it ends up at its final location: a changed file is copied to the outbound staging directory, goes across the wire into the inbound staging directory, then to the preinstall directory of the share, and finally it is renamed into its final location in the share.
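The staged copy path described above can be sketched as a toy pipeline (the directory names here are illustrative only, not the real FRS paths, and this models only the hops, not compression or the wire transfer):

```python
import shutil
from pathlib import Path

def staged_replicate(src_file: Path, share: Path) -> Path:
    """Toy model of the FRS hops: outbound staging -> inbound staging
    -> preinstall -> final location in the replicated share."""
    outbound = share / "staging-outbound"
    inbound = share / "staging-inbound"
    preinstall = share / "preinstall"
    for d in (outbound, inbound, preinstall):
        d.mkdir(parents=True, exist_ok=True)

    hop1 = shutil.copy2(src_file, outbound)  # copy to outbound staging
    hop2 = shutil.copy2(hop1, inbound)       # stand-in for "across the wire"
    hop3 = shutil.copy2(hop2, preinstall)    # preinstall directory
    final = share / src_file.name
    return Path(shutil.move(hop3, final))    # rename into its final place
```

Each file is written several times before it lands in the share, which is part of why replication feels slow.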

FRSv1 also does not do attribute-only or partial replication of changed data. If you have a 2GB file and you rename it, the entire 2GB is replicated again. Any data your web servers access should be fairly static, which is why everything will be fine after this initial replication. If the data changes a lot, DFS and FRS are not what you want.
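At the roughly 2MB/s rate observed earlier in the thread, the cost of that whole-file behavior is easy to estimate (a rough figure, assuming the link sustains that rate):

```python
# FRSv1 re-sends the whole file on any change, even a simple rename.
file_mb = 2 * 1024    # a 2 GB file, in MB
rate_mb_s = 2.0       # observed replication rate from earlier in the thread
seconds = file_mb / rate_mb_s
print(f"{seconds / 60:.0f} minutes to re-replicate")  # about 17 minutes
```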
 

FreshPrince

Diamond Member
Dec 6, 2001
Is there a way to direct all requests to a particular root target and have the data replicate only from one root target to the other, and not vice versa?

That would make the data more "static", and would still allow the web servers to access the data if the 1st root target fails.

Thanks,

-FP