How to use the XZ library to compress/decompress a file in Android - Java

https://tukaani.org/xz/java.html This site provides an XZ library for compressing/decompressing files. I would like to give it a shot, but I am lost.
Does anyone have experience with this, or know of a tutorial? Thanks.

I have used this library recently, and working code is on my GitHub under "XZ compression algorithm". You can use that class in your Android project. The main method below gives an idea of how to use it.
public static void main(String[] args) {
    String input = "Some string blah blah blah";
    System.out.println("XZ or LZMA2 library.....");

    // If you are using a file instead of plain text, convert it to
    // bytes here and pass those to the `compress` method.
    byte[] xzCompressed = XZ_LZMA2.compress(input);
    System.out.println("Input length: " + input.length());
    System.out.println("XZ Compressed Length: " + xzCompressed.length);

    String xzDecompressed = XZ_LZMA2.decompress(xzCompressed);
    System.out.println("XZ Decompressed : " + xzDecompressed);

    // If you are decompressing a compressed file instead of plain text,
    // return bytes from the `decompress` method and write them to a
    // FileOutputStream to get the file back.
}
Note: The XZ compression algorithm needs a lot of memory to run, so it is not recommended for mobile platforms like Android; it can easily throw an OutOfMemoryError. Android ships the zlib-based Deflater and Inflater classes, which work well on the Android platform.
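For comparison, here is a minimal sketch of zlib compression with java.util.zip.Deflater and Inflater. The helper names and buffer sizes are my own choices for illustration, not part of any library:

import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibExample {
    // Compress a byte array with zlib (Deflater).
    static byte[] deflate(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(input.length);
        byte[] buffer = new byte[8192];
        while (!deflater.finished()) {
            int n = deflater.deflate(buffer);
            out.write(buffer, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }

    // Decompress a zlib-compressed byte array (Inflater).
    static byte[] inflate(byte[] compressed) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream(compressed.length * 2);
        byte[] buffer = new byte[8192];
        while (!inflater.finished()) {
            int n = inflater.inflate(buffer);
            if (n == 0 && inflater.needsInput()) {
                break; // truncated or incomplete input; stop instead of spinning
            }
            out.write(buffer, 0, n);
        }
        inflater.end();
        return out.toByteArray();
    }
}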

You can use the XZ-Java static library from Android AOSP, or the 'org.tukaani:xz:1.8' dependency, to compress files in the XZ file format. Here is working code to compress a file to XZ. Create an AsyncTask if you need to compress multiple files, and use the Java code below for the compression itself.
Android.mk Changes for building in AOSP:
LOCAL_STATIC_JAVA_LIBRARIES := \
xz-java
OR
Gradle File Changes for building in Android Studio:
implementation 'org.tukaani:xz:1.8'
Java Code:
public void CompressFile(String inputFile, String outputFile) {
    XZOutputStream xzOStream = null;
    try {
        LZMA2Options opts = new LZMA2Options();
        opts.setPreset(7); // presets 0-9; higher means better compression but more memory
        FileOutputStream foStream = new FileOutputStream(outputFile);
        xzOStream = new XZOutputStream(foStream, opts);
        FileInputStream fiStream = new FileInputStream(inputFile);
        Scanner sc = null;
        try {
            // Note: reading line by line with Scanner only suits text files;
            // for binary input, copy raw bytes from the FileInputStream instead.
            sc = new Scanner(fiStream);
            while (sc.hasNextLine()) {
                String line = sc.nextLine() + "\n";
                byte[] bytes = line.getBytes();
                xzOStream.write(bytes, 0, bytes.length);
            }
            Utils.removeFile(inputFile);
        } finally {
            if (fiStream != null) {
                fiStream.close();
            }
            if (sc != null) {
                sc.close();
            }
            if (xzOStream != null) {
                xzOStream.close();
            }
            if (foStream != null) {
                foStream.close();
            }
        }
    } catch (Exception e) {
        Log.e(TAG, "CompressFileXZ() Exception: " + e.toString());
    }
}
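The answer above only covers compression. For completeness, here is a sketch of the reverse direction with XZInputStream from the same org.tukaani:xz library; the method name and buffer size are my own choices, not from the original answer:

public void DecompressFile(String inputFile, String outputFile) {
    try (FileInputStream fiStream = new FileInputStream(inputFile);
         XZInputStream xzIStream = new XZInputStream(fiStream);
         FileOutputStream foStream = new FileOutputStream(outputFile)) {
        byte[] buffer = new byte[8192];
        int n;
        // Read decompressed bytes until end of stream and write them out.
        while ((n = xzIStream.read(buffer)) != -1) {
            foStream.write(buffer, 0, n);
        }
    } catch (Exception e) {
        Log.e(TAG, "DecompressFile() Exception: " + e.toString());
    }
}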

My Android app loads a data file from its assets directory on startup, and I already know the decompressed size, so all I needed to write was:
byte[] data = new byte[ /* decompressed size here */ ];
new org.tukaani.xz.XZInputStream(context.getAssets().open("file.xz")).read(data);
and then git clone https://git.tukaani.org/xz-java.git and cp -r xz-java/src/org into my app's src directory (and ensure all .java files are mentioned on my javac command line—I still use old-school command-line scripts to compile my apps; I haven't set them up for Android Studio or Gradle).
However, the resulting app took six seconds to decompress 3 MB of compressed data on a 2013 Sony Xperia Z Ultra (Android 4.4), and changing the compression level of xz did not noticeably affect that 6-second startup. Yes, it took only 1 second on a 2018 Samsung S9 running Android 10, but it's the older phones that need more compression, as they're the ones with less space available, so adding an unacceptable startup delay to the app on older devices seems self-defeating, especially when the alternative, java.util.zip.Inflater, is near-instantaneous:
byte[] data = new byte[ /* compressed size here */ ];
context.getAssets().open("file.z").read(data);
java.util.zip.Inflater i = new java.util.zip.Inflater();
i.setInput(data);
byte[] decompressed = new byte[ /* decompressed size here */ ];
i.inflate(decompressed);
i.end();
data = decompressed; /* allow the earlier byte[] to be gc'd */
and for that fast startup you pay only a 20% increase in the APK size over the one with the xz file (I compressed using zopfli to get a tiny bit smaller than gzip -9 although it's still bigger than xz -0).
Tukaani's code doesn't currently seem to make an equivalent of setInput available. Tukaani's XZDecDemo.java contains the comment "Since XZInputStream does some buffering internally anyway, BufferedInputStream doesn't seem to be needed here to improve performance", but for completeness I tried it anyway:
byte[] data = new byte[ /* decompressed size here */ ];
new org.tukaani.xz.XZInputStream(
    new java.io.BufferedInputStream(
        context.getAssets().open("file.xz"),
        /* compressed size here */ )).read(data);
however this made no noticeable difference to the 6-second delay (so it seems the comment is correct: the performance is just as bad either way).
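One caution on the snippets above (my addition, not from the original poster): a single InputStream.read(byte[]) call is not guaranteed to fill the whole array. A more robust variant loops until the array is full, for example via DataInputStream.readFully:

byte[] data = new byte[ /* decompressed size here */ ];
try (java.io.DataInputStream in = new java.io.DataInputStream(
        new org.tukaani.xz.XZInputStream(context.getAssets().open("file.xz")))) {
    // readFully keeps reading until the array is full, or throws EOFException.
    in.readFully(data);
}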

Related

How to download a large file from Google Cloud Storage using Java with checksum control

I want to download large files from Google Cloud Storage using the Google-provided Java library com.google.cloud.storage. I have working code, but I still have one question and one major concern:
My major concern is: when is the file content actually downloaded? During (referring to the code below) storage.get(blobId), during blob.reader(), or during reader.read(bytes)? This becomes very important when it comes to handling an invalid checksum: what do I need to do to actually trigger fetching the file over the network again?
The simpler question is: is there built-in functionality in the Google library to do an MD5 (or CRC32C) check on the received file? Maybe I don't need to implement it on my own.
Here is my method trying to download big files from Google Cloud Storage:
private static final int MAX_NUMBER_OF_TRIES = 3;

public Path downloadFile(String storageFileName, String bucketName) throws IOException {
    // In my real code, this is a field populated in the constructor.
    Storage storage = Objects.requireNonNull(StorageOptions.getDefaultInstance().getService());
    BlobId blobId = BlobId.of(bucketName, storageFileName);
    Path outputFile = Paths.get(storageFileName.replaceAll("/", "-"));
    int retryCounter = 1;
    Blob blob;
    boolean checksumOk;
    MessageDigest messageDigest;
    try {
        messageDigest = MessageDigest.getInstance("MD5");
    } catch (NoSuchAlgorithmException ex) {
        throw new RuntimeException(ex);
    }
    do {
        LOGGER.debug("Start download file {} from bucket {} to Content Store (try {})", storageFileName, bucketName, retryCounter);
        blob = storage.get(blobId);
        if (null == blob) {
            throw new CloudStorageCommunicationException("Failed to download file after " + retryCounter + " tries.");
        }
        if (Files.exists(outputFile)) {
            Files.delete(outputFile);
        }
        try (ReadChannel reader = blob.reader();
             FileChannel channel = new FileOutputStream(outputFile.toFile(), true).getChannel()) {
            ByteBuffer bytes = ByteBuffer.allocate(128 * 1024);
            int bytesRead = reader.read(bytes);
            while (bytesRead > 0) {
                bytes.flip();
                messageDigest.update(bytes.array(), 0, bytesRead);
                channel.write(bytes);
                bytes.clear();
                bytesRead = reader.read(bytes);
            }
        }
        String checksum = Base64.encodeBase64String(messageDigest.digest());
        checksumOk = checksum.equals(blob.getMd5());
        if (!checksumOk) {
            Files.delete(outputFile);
            messageDigest.reset();
        }
    } while (++retryCounter <= MAX_NUMBER_OF_TRIES && !checksumOk);
    if (!checksumOk) {
        throw new CloudStorageCommunicationException("Failed to download file after " + MAX_NUMBER_OF_TRIES + " tries.");
    }
    return outputFile;
}
The google-cloud-java storage library does not validate checksums on its own when reading data beyond normal HTTPS/TCP correctness checking. If it compared the MD5 of the received data to the known MD5, it would need to download the entire file before it could return any results from read(), which for very large files would be infeasible.
What you're doing is a good idea if you need the additional protection of comparing MD5s. If this is a one-off task, you could use the gsutil command-line tool, which does this same sort of additional check.
As the JavaDoc of ReadChannel says:
Implementations of this class may buffer data internally to reduce remote calls.
So the implementation you get from blob.reader() could cache the whole file, some bytes, or nothing, and just fetch byte by byte when you call read(). You will never know, and you shouldn't care.
Since only read() throws an IOException and the other methods you used do not, I'd say that only calling read() actually downloads anything. You can also see this in the sources of the library.
By the way, despite the example in the library's JavaDoc, you should check for >= 0, not > 0: 0 just means nothing was read, not that the end of the stream has been reached. End of stream is signaled by returning -1.
For retrying after a failed checksum check, get a new reader from the blob. If anything caches the downloaded data, it is the reader itself, so getting a new reader from the blob ensures the file is re-downloaded from the remote end.
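To make that concrete, here is a minimal sketch of the download loop with the >= 0 check applied; the surrounding reader, channel, and messageDigest variables are assumed to be the same as in the question's code:

ByteBuffer bytes = ByteBuffer.allocate(128 * 1024);
int bytesRead;
// -1 signals end of stream; 0 only means nothing was read on this call.
while ((bytesRead = reader.read(bytes)) >= 0) {
    if (bytesRead > 0) {
        bytes.flip();
        messageDigest.update(bytes.array(), 0, bytesRead);
        channel.write(bytes);
        bytes.clear();
    }
}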

File from InputStream

Yes, this question has been asked a million times, and I believe I've looked at them all. The answers are very "sometimesy", slow, or not what I need.
On one project, I use the following code to turn the InputStream received from a GET request into a PDF. This works PERFECTLY, every time, on my physical device and my emulator (Genymotion 2.1.1, emulator API 18, 4.3). Note that some things are edited out, and the PDFs are generally small, less than 1 MB.
public abstract class MyPDFFile extends File implements ApiModel {
    public MyPDFFile(InputStream inputStream) {
        super(context.getExternalFilesDir(
                Environment.DIRECTORY_DOWNLOADS), "my_pdf.pdf");
        if (externalStorageIsWritable()) {
            try {
                BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
                FileOutputStream fileInputStream = new FileOutputStream(this);
                BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(fileInputStream));
                int readLine;
                char[] cbuf = new char[1];
                do {
                    readLine = bufferedReader.read(cbuf);
                    bufferedWriter.write(cbuf);
                } while (readLine != -1);
                bufferedWriter.close();
            }
            catch (IOException e) {
                // Didn't work
            }
        }
        else {
            // Can't write
        }
    }
I figured that on this new project I could use the same code to download an APK from the internet to the device. Nope, definitely not the case. I eventually tried this for InputStream to File:
FileOutputStream fileOutputStream = new FileOutputStream(file);
byte[] buffer = new byte[1];
while ((read(buffer)) > 0) {
    fileOutputStream.write(buffer);
}
fileOutputStream.close();
close();
That works on my emulator, and works fine. I moved to testing on my device... not so much, which is weird, because the working PDF code works on both my emulator and my device. I've tried adjusting the size of my buffer to various multiples of 512, which results in the downloaded file ranging from EXTREMELY small (a few KB) to EXTREMELY large (about double the APK size, which is about 5.6 MB).
Also, another weird thing: I can NEVER get it to successfully save outside of the constructor. When I do the saving there, the InputStream is fine, my file gets created, yadayada, and when I use successful code, I just rename the file afterwards since all I have access to in the constructor is the InputStream. If I decide "No, I want to name it when I have the proper things" and simply save the InputStream to my object, it NEVER works properly. Can never get above 4KB for the downloaded file. I've tried extends InputStream and extends BufferedInputStream to no avail.
I can post more code if needed. All I would have access to is the InputStream from my GET request; I'm using the browep Android HTTP library and that's all I can get without trying to mess with the library itself (or overriding methods in it).
The problem is that you're reading the file byte by byte. That can take a ton of time. Instead, read the file in bigger chunks, like 4 or 8 KB:
int file_chunk_size = 1024 * 4; // 4 KB, written like this to easily change it to 8
byte[] buffer = new byte[file_chunk_size];
int bytesRead = 0;
while ((bytesRead = read(buffer)) > 0) {
    fileOutputStream.write(buffer, 0, bytesRead);
}
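As an aside (my addition, not part of the original answer), if your minimum API level allows java.nio.file (API 26+ on Android, or any desktop Java 7+), the whole loop can be replaced by a single library call. Here, in and file are placeholders for the InputStream you received and the destination File:

// Copies the InputStream to the target file, streaming in chunks internally.
// Throws IOException, so call it from a method that declares or handles it.
java.nio.file.Files.copy(
        in,
        file.toPath(),
        java.nio.file.StandardCopyOption.REPLACE_EXISTING);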

Out of memory when encoding file to base64

Using Base64 from Apache commons
public byte[] encode(File file) throws FileNotFoundException, IOException {
    byte[] encoded;
    try (FileInputStream fin = new FileInputStream(file)) {
        byte fileContent[] = new byte[(int) file.length()];
        fin.read(fileContent);
        encoded = Base64.encodeBase64(fileContent);
    }
    return encoded;
}
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:342)
at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:657)
at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:622)
at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:604)
I'm making a small app for a mobile device.
You cannot just load the whole file into memory, like here:
byte fileContent[] = new byte[(int) file.length()];
fin.read(fileContent);
Instead, load the file chunk by chunk and encode it in parts. Base64 is a simple encoding: it is enough to load 3 bytes and encode them at a time (this produces 4 bytes after encoding). For performance reasons, load a multiple of 3 bytes, e.g. 3000 bytes; that should be just fine. Also consider buffering the input file.
An example:
byte fileContent[] = new byte[3000];
int bytesRead;
try (FileInputStream fin = new FileInputStream(file)) {
    while ((bytesRead = fin.read(fileContent)) >= 0) {
        // Encode only the bytes actually read; the last chunk may be shorter
        // than 3000 bytes (java.util.Arrays.copyOf trims it to size).
        Base64.encodeBase64(Arrays.copyOf(fileContent, bytesRead));
    }
}
Note that you cannot simply append the results of Base64.encodeBase64() to one encoded byte array. Actually, it is not loading the file but encoding it to Base64 that causes the out-of-memory problem. This is understandable, because the Base64 version is bigger (and you already have the whole file occupying a lot of memory).
Consider changing your method to:
public void encode(File file, OutputStream base64OutputStream)
and sending Base64-encoded data directly to the base64OutputStream rather than returning it.
UPDATE: Thanks to @StephenC I developed a much easier version:
public void encode(File file, OutputStream base64OutputStream) throws IOException {
    InputStream is = new FileInputStream(file);
    OutputStream out = new Base64OutputStream(base64OutputStream);
    IOUtils.copy(is, out);
    is.close();
    out.close();
}
It uses Base64OutputStream that translates input to Base64 on-the-fly and IOUtils class from Apache Commons IO.
Note: you must close the FileInputStream and Base64OutputStream explicitly so that the trailing = padding is written if required; buffering is handled by IOUtils.copy().
Either the file is too big, or your heap is too small, or you've got a memory leak.
If this only happens with really big files, put something into your code to check the file size and reject files that are unreasonably big.
If this happens with small files, increase your heap size by using the -Xmx command line option when you launch the JVM. (If this is in a web container or some other framework, check the documentation on how to do it.)
If the problem recurs, especially with small files, the chances are that you've got a memory leak.
The other point that should be made is that your current approach entails holding two complete copies of the file in memory. You should be able to reduce the memory usage, though you'll typically need a stream-based Base64 encoder to do this. (It depends on which flavor of the base64 encoding you are using ...)
This page describes a stream-based Base64 encoder/decoder library, and includes links to some alternatives.
Well, do not do it for the whole file at once.
Base64 works on 3 bytes at a time, so you can read your file in batches of "multiple of 3" bytes, encode them and repeat until you finish the file:
// The base64 encoding - an acceptable estimate of the encoded size
StringBuilder sb = new StringBuilder((int) (file.length() / 3 * 4));
FileInputStream fin = null;
try {
    fin = new FileInputStream("some.file");
    // Max size of buffer (a multiple of 3, so chunks encode independently)
    int bSize = 3 * 512;
    // Buffer
    byte[] buf = new byte[bSize];
    // Actual number of bytes read into the buffer
    int len = 0;
    while ((len = fin.read(buf)) != -1) {
        // Encode only the bytes actually read (the last chunk may be shorter).
        byte[] encoded = Base64.encodeBase64(Arrays.copyOf(buf, len));
        // Although you might want to write the encoded bytes to another
        // stream, otherwise you'll run into the same problem again.
        sb.append(new String(encoded));
    }
} catch (IOException e) {
    // handle or log the error
} finally {
    if (null != fin) {
        try { fin.close(); } catch (IOException ignored) {}
    }
}
String base64EncodedFile = sb.toString();
You are not reading the whole file, just the first few kb. The read method returns how many bytes were actually read. You should call read in a loop until it returns -1 to be sure that you have read everything.
The file is too big for both it and its base64 encoding to fit in memory. Either
process the file in smaller pieces or
increase the memory available to the JVM with the -Xmx switch, e.g.
java -Xmx1024M YourProgram
This is the best code I have found to upload a larger image:
bitmap=Bitmap.createScaledBitmap(bitmap, 100, 100, true);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream); //compress to which format you want.
byte [] byte_arr = stream.toByteArray();
String image_str = Base64.encodeBytes(byte_arr);
Well, it looks like your file is too large to keep the multiple copies necessary for an in-memory Base64 encoding in the available heap memory at the same time. Given that this is for a mobile device, it's probably not possible to increase the heap, so you have two options:
Make the file smaller (much smaller), or
do it in a stream-based way, reading from an InputStream one small part of the file at a time, encoding it, and writing it to an OutputStream, without ever keeping the entire file in memory.
In the manifest, inside the application tag, add the following:
android:largeHeap="true"
It worked for me.
Java 8 added Base64 methods, so Apache Commons is no longer needed to encode large files.
public static void encodeFileToBase64(String inputFile, String outputFile) {
    try (OutputStream out = Base64.getEncoder().wrap(new FileOutputStream(outputFile))) {
        Files.copy(Paths.get(inputFile), out);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}
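For completeness, here is a sketch of the reverse direction using the same java.util.Base64 streaming API; the method name is my own, not from the original answer:

public static void decodeBase64ToFile(String inputFile, String outputFile) {
    // wrap() returns an InputStream that decodes Base64 on the fly.
    try (InputStream in = Base64.getDecoder().wrap(new FileInputStream(inputFile))) {
        Files.copy(in, Paths.get(outputFile), StandardCopyOption.REPLACE_EXISTING);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}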

Java Apache FileUtils readFileToString and writeStringToFile problems

I need to read a file (actually a .pdf) in Java into a String and then write it back out to a file. Between those steps I'll apply some patches to the given string, but this is not important in this case.
I've developed the following JUnit test case:
String f1String = FileUtils.readFileToString(f1);
File temp = File.createTempFile("deleteme", "deleteme");
FileUtils.writeStringToFile(temp, f1String);
assertTrue(FileUtils.contentEquals(f1, temp));
This test converts a file to a string and writes it back. However, the test fails.
I think it may be because of the encodings, but FileUtils does not provide much detailed info about this.
Anyone can help?
Thanks!
Added for further understanding:
Why do I need this?
I have very large PDFs on one machine that are replicated on another one. The first machine is in charge of creating those PDFs. Due to the low connectivity of the second machine and the large size of the PDFs, I don't want to sync the whole PDFs, only the changes made.
To create patches and apply them, I'm using the Google library DiffMatchPatch. This library creates patches between two strings, so I need to load a PDF into a string, apply a generated patch, and write it back to a file.
A PDF is not a text file. Decoding (into Java characters) and re-encoding of binary files that are not encoded text is asymmetrical. For example, if the input bytestream is invalid for the current encoding, you can be assured that it won't re-encode correctly. In short - don't do that. Use readFileToByteArray and writeByteArrayToFile instead.
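A minimal sketch of the byte-based round trip suggested above, using the FileUtils methods named in the answer (the variable names follow the question's test):

// Read the PDF as raw bytes, not as characters.
byte[] bytes = FileUtils.readFileToByteArray(f1);
File temp = File.createTempFile("deleteme", "deleteme");
FileUtils.writeByteArrayToFile(temp, bytes);
// The copy is now byte-for-byte identical to the original.
assertTrue(FileUtils.contentEquals(f1, temp));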
Just a few thoughts:
There might actually be some BOM (byte order mark) bytes in one of the files that either get stripped when reading or added during writing. Is there a difference in the file size (if it is the BOM, the difference should be 2 or 3 bytes)?
The line breaks might not match, depending on which system the files are created on, i.e. one might have CR LF while the other only has LF or CR (1 byte difference per line break).
According to the JavaDoc, both methods should use the default encoding of the JVM, which should be the same for both operations. However, try testing with an explicitly set encoding (the JVM's default encoding can be queried using System.getProperty("file.encoding")).
Ed Staub's answer points out why my solution is not working, and he suggested using bytes instead of Strings. In my case I need a String, so the final working solution I've found is the following:
@Test
public void testFileRWAsArray() throws IOException {
    String f1String = "";
    byte[] bytes = FileUtils.readFileToByteArray(f1);
    // The byte-to-char cast here and the char-to-byte cast below are exact
    // inverses, so the round trip is byte-for-byte lossless (negative bytes
    // map to chars in the 0xFF80-0xFFFF range).
    // Note: concatenating in a loop like this is O(n^2); a StringBuilder
    // would be much faster for large files.
    for (byte b : bytes) {
        f1String = f1String + ((char) b);
    }
    File temp = File.createTempFile("deleteme", "deleteme");
    byte[] newBytes = new byte[f1String.length()];
    for (int i = 0; i < f1String.length(); ++i) {
        char c = f1String.charAt(i);
        newBytes[i] = (byte) c;
    }
    FileUtils.writeByteArrayToFile(temp, newBytes);
    assertTrue(FileUtils.contentEquals(f1, temp));
}
By using a cast between byte and char, I get symmetry in the conversion.
Thank you all!
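For what it's worth (my addition, not from the original poster), a similar byte-preserving symmetry can be obtained by letting commons-io do the mapping with an explicit single-byte charset, assuming the patching step never introduces characters outside ISO-8859-1:

// ISO-8859-1 maps every byte value 0-255 to the char with the same value,
// so reading and writing with that charset is a lossless byte round trip.
String f1String = FileUtils.readFileToString(f1, "ISO-8859-1");
File temp = File.createTempFile("deleteme", "deleteme");
FileUtils.writeStringToFile(temp, f1String, "ISO-8859-1");
assertTrue(FileUtils.contentEquals(f1, temp));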
Try this code...
public static String fetchBase64binaryEncodedString(String path) {
    File inboundDoc = new File(path);
    byte[] pdfData;
    try {
        pdfData = FileUtils.readFileToByteArray(inboundDoc);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
    byte[] encodedPdfData = Base64.encodeBase64(pdfData);
    String attachment = new String(encodedPdfData);
    return attachment;
}
// How to decode it
public void testConversionPDFtoBase64() throws IOException {
    String path = "C:/Documents and Settings/kantab/Desktop/GTR_SDR/MSDOC.pdf";
    File origFile = new File(path);
    String encodedString = CreditOneMLParserUtil.fetchBase64binaryEncodedString(path);
    // Now decode it
    byte[] decodeData = Base64.decodeBase64(encodedString.getBytes());
    String decodedString = new String(decodeData);
    // Or actually give the path to the PDF file.
    File decodedfile = File.createTempFile("DECODED", ".pdf");
    FileUtils.writeByteArrayToFile(decodedfile, decodeData);
    Assert.assertTrue(FileUtils.contentEquals(origFile, decodedfile));
    // Frame frame = new Frame("PDF Viewer");
    // frame.setLayout(new BorderLayout());
}

How to copy a large file in Windows XP?

I have a large file in Windows XP; it's 38 GB (a VM image).
I cannot seem to copy it.
Dragging on the desktop - gives error of "Insufficient system resources exist to complete the requested service"
Using Java - FileChannel.transferTo(0, fileSize, dest) fails for all files > 2GB
Using Java - FileChannel.transferTo() in chunks of 100Mb fails after ~18Gb
java.io.IOException: Insufficient system resources exist to complete the requested service
at sun.nio.ch.FileDispatcher.write0(Native Method)
at sun.nio.ch.FileDispatcher.write(FileDispatcher.java:44)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
at sun.nio.ch.IOUtil.write(IOUtil.java:28)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:198)
at sun.nio.ch.FileChannelImpl.transferToTrustedChannel(FileChannelImpl.java:439)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:510)
I mean, the computer has 3 GB of RAM; a 100 MB buffer should be enough!?!?
Apparently the DOS commands "copy" and "xcopy" also fail.
(edit) I've tried COPY & XCOPY - these fail with the same error. XCOPY seems to take a really really long time about it too.
I've heard of Robocopy, but it doesn't copy single files?
I'm really feeling that Windows is for the lose right now. Surely Microsoft has heard of files larger than a few GB?
Thanks!
In Java, don't try to copy the whole file in a single operation. The transferTo() method works on chunks of a file; it wasn't intended as a high-level file-copy method. Invoke transferTo() in a loop, and assume that count bytes of data will be held in RAM (i.e., lower that parameter to something that fits comfortably in RAM).
FileChannel src = ...
FileChannel dst = ...
final long CHUNK = 16 * 1024 * 1024; /* 16 MB */
for (long pos = 0; pos < fileSize; ) {
    pos += src.transferTo(pos, CHUNK, dst);
}
The comment in the transferTo() JavaDoc about it being "more efficient than a simple loop" refers to the fact that channel-to-channel communication can be optimized more than channel-to-user-space-to-channel. It doesn't mean that all looping can be avoided.
I am a VMware ESX user. I have 30 production VMs, with the largest being 232 GB. I back up my VM instances onto an internal SATA drive and then copy these off once a week to an external eSATA drive. I use TeraCopy (free); it runs on average at 45 MB/s on an XP machine with 3 GB of RAM.
Hope that helps
Sailen
Well, I've not managed to find a way that works.
None of the packaged tools in Windows will copy the file. Drag and drop, COPY, XCOPY, Java: all fail to copy the file.
The reason I wanted to copy the file was for a backup before doing an OS upgrade.
In the end I booted into Knoppix and copied it.
Take a look at this hotfix; it's worth a try, as everything I have seen points to it as a cure for your issue.
EDIT: You can also try XCOPY /Z, as pointed out here.
There may be a hardware issue as well.
I suspect you don't have much time, but you could try a simpler stream-based solution. Don't set overly large buffers (8-16 MB should be enough):
public static void copy(InputStream input, OutputStream output) throws IOException {
    byte[] buffer = new byte[1024 * 1024 * 8]; // 8 MB
    int n = 0;
    while (-1 != (n = input.read(buffer))) {
        output.write(buffer, 0, n);
    }
}

public static void main(String args[]) {
    if (args.length != 2) {
        System.err.println("wrong argument count");
        System.exit(1);
    }
    FileInputStream in = null;
    FileOutputStream out = null;
    try {
        in = new FileInputStream(new File(args[0]));
        out = new FileOutputStream(new File(args[1]));
        copy(in, out);
    } catch (Exception e) {
        e.printStackTrace();
    }
    if (in != null) { try { in.close(); } catch (Exception e) {} }
    if (out != null) { try { out.close(); } catch (Exception e) {} }
}
Are you sure the filesystem is actually able to cope with such big files (FAT32 cannot, for example)? Take a look at this link for details: http://www.ntfs.com/ntfs_vs_fat.htm
Is the system 32-bit or 64-bit? On 32-bit you may have problems copying files larger than 2-4 GB.
Also, you said that rsync sucks for you. I've had a very nice experience with it, copying between 2 hard drives at near-native speed. I had lots of small files, though; you seem to have one big blob instead.
You may also try splitting the big blob into smaller blobs, as sketched below. :)
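A minimal sketch of that splitting idea in Java (my addition; the part-file naming, chunk size handling, and reassembly hint are illustrative assumptions, and it needs Java 7+ for try-with-resources):

import java.io.*;

public class FileSplitter {
    // Split `source` into numbered parts of at most `chunkSize` bytes each.
    // The parts can be re-joined later with `copy /b` on Windows or `cat` on Linux.
    public static void split(File source, long chunkSize) throws IOException {
        byte[] buffer = new byte[1024 * 1024]; // 1 MB copy buffer
        try (FileInputStream in = new FileInputStream(source)) {
            long remaining = source.length();
            int part = 0;
            while (remaining > 0) {
                long partBytes = Math.min(chunkSize, remaining);
                File partFile = new File(source.getPath() + ".part" + part++);
                try (FileOutputStream out = new FileOutputStream(partFile)) {
                    long written = 0;
                    while (written < partBytes) {
                        int toRead = (int) Math.min(buffer.length, partBytes - written);
                        int n = in.read(buffer, 0, toRead);
                        if (n == -1) {
                            throw new EOFException("File shrank while splitting");
                        }
                        out.write(buffer, 0, n);
                        written += n;
                    }
                }
                remaining -= partBytes;
            }
        }
    }
}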
final long CHUNK = 16 * 1024 * 1024; /* 16 MB */
for (long pos = 0; pos < fileSize; ) {
    pos += src.transferTo(pos, CHUNK, dst);
}
This does work! Just make sure your src and dst are FileChannel objects (input and output, respectively).
Another possible answer is Files.copy (Java NIO.2), e.g.:
Path sourcePath = Paths.get("big-file.dat");
Path destinationPath = Paths.get("big-file-copy.dat");
try {
    Files.copy(sourcePath, destinationPath,
            StandardCopyOption.REPLACE_EXISTING);
} catch (IOException e) {
    // something went wrong
    e.printStackTrace();
}
