So I have this Copy class written in Java

HumblePie

Lifer
Oct 30, 2000
14,665
440
126
I was trying to figure out a way to copy whole directories, with files and subdirectories, using Java, and what I came up with is this Copy class. Currently, the company I program for does its copying by writing out a batch file built from operating system commands. The problem is that that method isn't very operating-system independent, which is one of the reasons we're supposed to be using Java in the first place. I don't want to have to check System.getProperty("os.name") to figure out how I'm going to code.

So anyhow, I wrote this nifty little recursive Copy class to do what I want instead. However, the lead "coder" (and I use that term loosely, since he's an EE and not a CS major) says my way would hog too many system resources since I use file streams. He says reading files 2048 bytes at a time would cause memory problems. I've done some testing with this so far, and I've moved a few hundred megs in a couple of seconds with no problems and no corruption. Anyhow, here's what I wrote.



import java.io.*;

public class Copy
{
    /** Utility class; not meant to be instantiated. */
    private Copy()
    {
    }

    /** Copies a file or a whole directory tree from source to destination. */
    public static void copy(File destination, File source)
    {
        try
        {
            if (source.isDirectory())
            {
                if (!destination.isDirectory())
                {
                    throw new IOException("Destination '" + destination.getName() + "' is not a directory.");
                }
                copyDirectory(destination, source);
            }
            else
            {
                // If the destination is a directory, copy into it under the source's name.
                if (destination.isDirectory())
                {
                    destination = new File(destination, source.getName());
                }
                copyFile(destination, source);
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    /** Recursively copies a directory's contents, files and subdirectories alike. */
    protected static void copyDirectory(File destination, File source)
    {
        try
        {
            File[] list = source.listFiles();
            for (int j = 0; j < list.length; j++)
            {
                File dest = new File(destination, list[j].getName());
                if (list[j].isDirectory())
                {
                    dest.mkdir();
                    copyDirectory(dest, list[j]);
                }
                else
                {
                    copyFile(dest, list[j]);
                }
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    /** Copies a single file, streaming it through a 2048-byte buffer. */
    protected static void copyFile(File destination, File source)
    {
        FileInputStream inStream = null;
        FileOutputStream outStream = null;
        try
        {
            inStream = new FileInputStream(source);
            outStream = new FileOutputStream(destination);

            int len;
            byte[] buf = new byte[2048];

            while ((len = inStream.read(buf)) != -1)
            {
                outStream.write(buf, 0, len);
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        finally
        {
            // Always close the streams so file handles aren't leaked.
            try { if (inStream != null) inStream.close(); } catch (IOException ignored) {}
            try { if (outStream != null) outStream.close(); } catch (IOException ignored) {}
        }
    }

    /** Recreates the directory structure only, without copying any files. */
    protected static void copyDirectoriesOnly(File destination, File source)
    {
        try
        {
            File[] list = source.listFiles();
            for (int j = 0; j < list.length; j++)
            {
                File dest = new File(destination, list[j].getName());
                if (list[j].isDirectory())
                {
                    dest.mkdir();
                    copyDirectoriesOnly(dest, list[j]);
                }
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
} // End of Copy class

//(\___/)
//{=*.*=)
//(')__(')
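
To call it you'd just do something like this (the paths are made up for illustration, and keep in mind the destination directory has to exist already when copying a directory):

import java.io.File;

public class CopyTest
{
    public static void main(String[] args)
    {
        // Hypothetical paths; note the destination comes first in the signature.
        Copy.copy(new File("backup"), new File("work"));
    }
}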



Is there another way in Java I could accomplish this part of my task, copying the files with Java itself instead of resorting to OS-dependent commands?
 

statik213

Golden Member
Oct 31, 2004
1,654
0
0
How big are these files? It's usually more performance-friendly to do longer sustained reads and writes than to read and write small amounts very often.
You could try buffering 64k at a time (or maybe more?) instead of 2k.
Finding the right buffer size would probably have to be done by trial and error. Don't assume a Java-based copy method will be as fast as an OS copy command, but it shouldn't be significantly slower either.
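
For example, the only real change would be in copyFile; something like this (the class name and the 64k size here are just placeholders for the idea):

import java.io.*;

public class BigBufferCopy
{
    // Same loop as the original copyFile, just with a bigger buffer.
    // 64k is an arbitrary starting point; the "right" size is trial and error.
    public static void copyFile(File destination, File source) throws IOException
    {
        FileInputStream inStream = new FileInputStream(source);
        FileOutputStream outStream = new FileOutputStream(destination);
        try
        {
            byte[] buf = new byte[64 * 1024];
            int len;
            while ((len = inStream.read(buf)) != -1)
            {
                outStream.write(buf, 0, len);
            }
        }
        finally
        {
            inStream.close();
            outStream.close();
        }
    }
}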
 

kamper

Diamond Member
Mar 18, 2003
5,513
0
0
Java has buffered streams already in the api ;) I'd just hook up a PipedReader/Writer pair to a pair of those and let 'er rip. Just read the javadocs to make sure you avoid deadlock issues.

Aside from that, perl is cross platform enough and more appropriate for file system maintenance, no?
 

statik213

Golden Member
Oct 31, 2004
1,654
0
0
Originally posted by: kamper
Java has buffered streams already in the api ;) I'd just hook up a PipedReader/Writer pair to a pair of those and let 'er rip. Just read the javadocs to make sure you avoid deadlock issues.

Aside from that, perl is cross platform enough and more appropriate for file system maintenance, no?


Isn't he doing *exactly* what a BufferedReader/Writer would be doing (minus the thread-safe stuff, if any)?
 

HumblePie

Lifer
Oct 30, 2000
14,665
440
126
yep... doing what the buffered reader/writer does in 2K sizes. I stuck with small chunks because some files are really that small and some are on par with a few megs. I think the biggest is like 20 megs but I can't be certain.
 

statik213

Golden Member
Oct 31, 2004
1,654
0
0
Originally posted by: HumblePie
yep... doing what the buffered reader/writer does in 2K sizes. I stuck with small chunks because some files are really that small and some are on par with a few megs. I think the biggest is like 20 megs but I can't be certain.


It still wouldn't hurt to make the buffer bigger, right? 2k is pretty small, especially if you've got 20 meg files in there. You shouldn't have to change anything else 'cos of the way you've got the while loop set up.
 

kamper

Diamond Member
Mar 18, 2003
5,513
0
0
Originally posted by: statik213
Originally posted by: kamper
Java has buffered streams already in the api ;) I'd just hook up a PipedReader/Writer pair to a pair of those and let 'er rip. Just read the javadocs to make sure you avoid deadlock issues.

Aside from that, perl is cross platform enough and more appropriate for file system maintenance, no?


Isn't he doing *exactly* what a BufferedReader/Writer would be doing (minus the thread-safe stuff, if any)?
Oh, probably. My point was, why worry about buffering if something else has already done it for you? The PipedReader/Writer would be optional. One could move a byte at a time between the buffered streams and not worry about I/O inefficiencies, just the method call overhead, which would be negligible for such small files.
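
Something like this is what I mean; rough sketch only, the class name and buffer size are just made up:

import java.io.*;

public class BufferedByteCopy
{
    public static void copyFile(File destination, File source) throws IOException
    {
        // The buffered wrappers do the chunked disk I/O internally, so the
        // loop below only pays per-byte method call overhead.
        BufferedInputStream in = new BufferedInputStream(new FileInputStream(source), 64 * 1024);
        BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(destination), 64 * 1024);
        try
        {
            int b;
            while ((b = in.read()) != -1)
            {
                out.write(b);
            }
            out.flush();
        }
        finally
        {
            in.close();
            out.close();
        }
    }
}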