http://www.httrack.com/ is a possibility, although I think it may qualify as a "web crawler".
If you have a listing of the files in that directory, then you can use something like wget.
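For example, if the listing is just a plain text file of URLs (one per line), wget can read it directly; urls.txt here is a made-up name:

    # Download every URL listed, one per line, in urls.txt.
    wget -i urls.txt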
I've done it a couple of times for things like freely available online books.
I would take the page's HTML, use awk and a bit of text editing to filter the hrefs out of it to get my listing, and then use a bash script with wget to download all the files, roughly like the sketch below.
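This is from memory and the URL, file extension, and filenames are made up, so adjust to taste:

    #!/usr/bin/env bash
    # Base URL of the directory page -- made-up example, replace with the real one.
    BASE_URL="http://example.com/books/"

    # Grab the index page's HTML and pull out whatever follows each href=".
    wget -qO- "$BASE_URL" \
      | awk -F'href="' '{ for (i = 2; i <= NF; i++) { split($i, a, "\""); print a[1] } }' \
      | grep -i '\.pdf$' > listing.txt    # keep only .pdf links (just an example filter)

    # Fetch each file; this assumes the hrefs are relative to BASE_URL.
    while read -r f; do
        wget "${BASE_URL}${f}"
    done < listing.txt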
Although I suppose that since you're using Windows, it would be a bit more difficult unless you're familiar with DOS-style batch scripts.
There are versions of the programs mentioned above that will work on both Linux and Windows (and other OSes).
That's what I can think of off the top of my head; I don't know if it will be any help.