Slow operations in parallel - java

I need help with running parallel operations. The goal of the code is to extract a large number of small files from the same tar into different folders in a very short time.
This is the code:
public void decompress(File archive, File destination) throws RuntimeException {
    try (InputStream in = new FileInputStream(archive);
         BufferedInputStream buff = new BufferedInputStream(in);
         TarArchiveInputStream is = (TarArchiveInputStream) new ArchiveStreamFactory().createArchiveInputStream("tar", buff)
    ) {
        TarArchiveEntry entry;
        while ((entry = is.getNextTarEntry()) != null) {
            File file = new File(destination, entry.getName());
            file.getParentFile().mkdirs();
            Files.write(file.toPath(), is.readAllBytes());
        }
    } catch (IOException | ArchiveException e) {
        e.printStackTrace();
    }
}
When I execute this operation once, it takes ~900ms.
But when I do something like this to execute the same operation multiple times in parallel, it takes 20000ms:
ExecutorService EXECUTOR_SERVICE = Executors.newFixedThreadPool(20);
File archive = ...;
for (int i = 0; i < 5; i++) {
    File directory = new File("Dir_" + i);
    EXECUTOR_SERVICE.submit(() -> decompress(archive, directory));
}
or
File archive = ...;
for (int i = 0; i < 5; i++) {
    File directory = new File("Dir_" + i);
    new Thread(() -> decompress(archive, directory)).start();
}

One suspicion is that the directories contain many files, so File.mkdirs does needlessly many checks.
The BufferedInputStream constructor can take a custom buffer size. In my experience that never helped much, but it might with your disk. With parallelism it could also help to reduce disk head movement.
You probably already tried Files.copy, but still, it might have better memory behavior than readAllBytes.
So the version becomes (eschewing File in favor of Path):
public void decompress(File archive, File destination) throws RuntimeException {
    final int bufferSize = 1024 * 128;
    Path destinationPath = destination.toPath();
    try (InputStream in = new FileInputStream(archive);
         BufferedInputStream buff = new BufferedInputStream(in, bufferSize);
         TarArchiveInputStream is = (TarArchiveInputStream)
                 new ArchiveStreamFactory().createArchiveInputStream("tar", buff)
    ) {
        Path oldFileParent = destinationPath;
        Files.createDirectories(oldFileParent);
        TarArchiveEntry entry;
        while ((entry = is.getNextTarEntry()) != null) {
            Path file = destinationPath.resolve(entry.getName());
            Path fileParent = file.getParent();
            if (!fileParent.equals(oldFileParent)) {
                oldFileParent = fileParent;
                Files.createDirectories(oldFileParent);
            }
            Files.copy(is, file);
            //Files.write(file, is.readAllBytes());
        }
    } catch (IOException | ArchiveException e) {
        e.printStackTrace();
    }
}
Declaring RuntimeException while catching the IOException/ArchiveException and not rethrowing it (for example as new IllegalStateException(e)) is a matter of taste.
Now to adding parallelism: disk output is probably the bottleneck. Writing two files to the same disk in parallel means skipping back and forth on the disk. Small files might just about be fine.
A better approach seems to be to read the next file in one thread and write it in another.
Two threads might theoretically perform better than many threads causing heightened disk traffic. readAllBytes might then be appropriate, so the writing thread does not use is.
Since the tar entry also keeps the file size, that would allow checking whether readAllBytes is efficient enough for large files.
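A minimal sketch of that two-thread idea, assuming the same Commons Compress API used above: one thread reads entries into memory and hands them over a bounded queue to a single writer thread. The queue size, helper class and method names are illustrative, not part of any existing API.
import java.io.*;
import java.nio.file.*;
import java.util.concurrent.*;
import org.apache.commons.compress.archivers.tar.*;

static final class PendingFile {
    final Path target;
    final byte[] content;
    PendingFile(Path target, byte[] content) { this.target = target; this.content = content; }
}

void decompressPipelined(File archive, Path destination) throws Exception {
    BlockingQueue<PendingFile> queue = new ArrayBlockingQueue<>(64);
    PendingFile poison = new PendingFile(null, null);              // end-of-stream marker

    Thread writer = new Thread(() -> {
        try {
            for (PendingFile p; (p = queue.take()) != poison; ) {
                Files.createDirectories(p.target.getParent());
                Files.write(p.target, p.content);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    });
    writer.start();

    try (TarArchiveInputStream tar = new TarArchiveInputStream(
            new BufferedInputStream(new FileInputStream(archive)))) {
        TarArchiveEntry entry;
        while ((entry = tar.getNextTarEntry()) != null) {
            if (entry.isDirectory()) continue;
            // readAllBytes() stops at the end of the current entry, as in the code above
            queue.put(new PendingFile(destination.resolve(entry.getName()), tar.readAllBytes()));
        }
    }
    queue.put(poison);                                             // tell the writer to stop
    writer.join();
}
Whether this actually beats the single-threaded version depends on the disk; it mainly decouples decompression from writing.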
Logging was mentioned in this question. It is known that it can consume much time, and with parallelism it becomes even more critical. But you seem to be aware of that, and you wrote that you use your own logger. For a library, System.Logger is actually best: it is a façade that uses whatever logger the application provides. This would have prevented the logger vulnerability hidden in library dependencies of the past year.
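For illustration, a minimal System.Logger usage sketch (Java 9+); the logger name and message are made up:
import java.lang.System.Logger;
import java.lang.System.Logger.Level;

class TarExtractor {
    // Resolved to whatever logging backend the application provides via a LoggerFinder.
    private static final Logger LOG = System.getLogger("tar.extractor");

    void logEntry(String entryName, long millis) {
        LOG.log(Level.INFO, "Extracted {0} in {1} ms", entryName, millis);
    }
}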

Ignoring the fact that you are not decompressing the file in parallel here (you are running multiple threads decompressing the same file concurrently, essentially overwriting the result), there may be several reasons for this performance hit. I/O is one, so it depends on the underlying implementation. Also, what is the Logger you are using there? While other parts of your code don't seem to be shared among multiple threads, the static call to Logger is something that is shared.
Also note: java.nio uses FileChannels which provide synchronous I/O, so depending on how you create the channels, you may get into similar situations (though I don't believe this applies here).

Related

Read/Write Bytes to and From a File Using Only Java.IO

How can we write a byte array to a file (and read it back from that file) in Java?
Yes, we all know there are already lots of questions like that, but they get very messy and subjective due to the fact that there are so many ways to accomplish this task.
So let's reduce the scope of the question:
Domain:
Android / Java
What we want:
Fast (as possible)
Bug-free (in a rigidly meticulous way)
What we are not doing:
Third-party libraries
Any libraries that require Android API later than 23 (Marshmallow)
(So, that rules out Apache Commons, Google Guava, Java.nio, and leaves us with good ol' Java.io)
What we need:
Byte array is always exactly the same (content and size) after going through the write-then-read process
Write method only requires two arguments: File file, and byte[] data
Read method returns a byte[] and only requires one argument: File file
In my particular case, these methods are private (not a library) and are NOT responsible for the following, (but if you want to create a more universal solution that applies to a wider audience, go for it):
Thread-safety (file will not be accessed by more than one process at once)
File being null
File pointing to non-existent location
Lack of permissions at the file location
Byte array being too large
Byte array being null
Dealing with any "index," "length," or "append" arguments/capabilities
So... we're sort of in search of the definitive bullet-proof code that people in the future can assume is safe to use because your answer has lots of up-votes and there are no comments that say, "That might crash if..."
This is what I have so far:
Write Bytes To File:
private void writeBytesToFile(final File file, final byte[] data) {
    try {
        FileOutputStream fos = new FileOutputStream(file);
        fos.write(data);
        fos.close();
    } catch (Exception e) {
        Log.i("XXX", "BUG: " + e);
    }
}
Read Bytes From File:
private byte[] readBytesFromFile(final File file) {
    RandomAccessFile raf;
    byte[] bytesToReturn = new byte[(int) file.length()];
    try {
        raf = new RandomAccessFile(file, "r");
        raf.readFully(bytesToReturn);
    } catch (Exception e) {
        Log.i("XXX", "BUG: " + e);
    }
    return bytesToReturn;
}
From what I've read, the possible Exceptions are:
FileNotFoundException : Am I correct that this should not happen as long as the file path being supplied was derived using Android's own internal tools and/or if the app was tested properly?
IOException : I don't really know what could cause this... but I'm assuming that there's no way around it if it does.
So with that in mind... can these methods be improved or replaced, and if so, with what?
It looks like these are going to be core utility/library methods which must run on Android API 23 or later.
Concerning library methods, I find it best to make no assumptions on how applications will use these methods. In some cases the applications may want to receive checked IOExceptions (because data from a file must exist for the application to work), in other cases the applications may not even care if data is not available (because data from a file is only cache that is also available from a primary source).
When it comes to I/O operations, there is never a guarantee that operations will succeed (e.g. user dropping phone in the toilet). The library should reflect that and give the application a choice on how to handle errors.
To optimize I/O performance always assume the "happy path" and catch errors to figure out what went wrong. This is counterintuitive to normal programming but essential in dealing with storage I/O. For example, just checking if a file exists before reading from a file can make your application twice as slow - all these kinds of I/O actions add up fast and slow your application down. Just assume the file exists and if you get an error, only then check if the file exists.
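A tiny sketch of that happy-path advice, using the readFile(File) helper defined further below; the file name and fallback are illustrative:
// Don't pre-check with file.exists(); just try, and only diagnose after a failure.
File f = new File("cache/entry.bin");
byte[] data;
try {
    data = readFile(f);                       // happy path: one I/O operation
} catch (FileNotFoundException e) {
    // Only now pay for the extra checks, to produce a useful message.
    Log.i("XXX", "read failed: exists=" + f.exists() + " canRead=" + f.canRead());
    data = new byte[0];
} catch (IOException e) {
    Log.i("XXX", "read failed: " + e);
    data = new byte[0];
}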
So given those ideas, the main functions could look like:
public static void writeFile(File f, byte[] data) throws FileNotFoundException, IOException {
    try (FileOutputStream out = new FileOutputStream(f)) {
        out.write(data);
    }
}

public static int readFile(File f, byte[] data) throws FileNotFoundException, IOException {
    try (FileInputStream in = new FileInputStream(f)) {
        return in.read(data);
    }
}
Notes about the implementation:
The methods can also throw runtime exceptions like NullPointerException - these methods are never going to be "bug free".
I do not think buffering is needed/wanted in the methods above since only one native call is done
(see also here).
The application now also has the option to read only the beginning of a file.
To make it easier for an application to read a file, an additional method can be added. But note that it is up to the library to detect any errors and report them to the application since the application itself can no longer detect those errors.
public static byte[] readFile(File f) throws FileNotFoundException, IOException {
    int fsize = verifyFileSize(f);
    byte[] data = new byte[fsize];
    int read = readFile(f, data);
    verifyAllDataRead(f, data, read);
    return data;
}

private static int verifyFileSize(File f) throws IOException {
    long fsize = f.length();
    if (fsize > Integer.MAX_VALUE) {
        throw new IOException("File size (" + fsize + " bytes) for " + f.getName() + " too large.");
    }
    return (int) fsize;
}

public static void verifyAllDataRead(File f, byte[] data, int read) throws IOException {
    if (read != data.length) {
        throw new IOException("Expected to read " + data.length
                + " bytes from file " + f.getName() + " but got only " + read + " bytes from file.");
    }
}
This implementation adds another hidden point of failure: an OutOfMemoryError at the point where the new data array is created.
To accommodate applications further, additional methods can be added to help with different scenarios. For example, let's say the application really does not want to deal with checked exceptions:
public static void writeFileData(File f, byte[] data) {
    try {
        writeFile(f, data);
    } catch (Exception e) {
        fileExceptionToRuntime(e);
    }
}

public static byte[] readFileData(File f) {
    try {
        return readFile(f);
    } catch (Exception e) {
        fileExceptionToRuntime(e);
    }
    return null;
}

public static int readFileData(File f, byte[] data) {
    try {
        return readFile(f, data);
    } catch (Exception e) {
        fileExceptionToRuntime(e);
    }
    return -1;
}

private static void fileExceptionToRuntime(Exception e) {
    if (e instanceof RuntimeException) { // e.g. NullPointerException
        throw (RuntimeException) e;
    }
    RuntimeException re = new RuntimeException(e.toString());
    re.setStackTrace(e.getStackTrace());
    throw re;
}
The method fileExceptionToRuntime is a minimal implementation, but it shows the idea here.
The library could also help an application to troubleshoot when an error does occur. For example, a method canReadFile(File f) could check if a file exists and is readable and is not too large. The application could call such a function after a file-read fails and check for common reasons why a file cannot be read. The same can be done for writing to a file.
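A sketch of what such a helper might look like; this is not an existing API, and the checks are only illustrative:
/** Hypothetical troubleshooting helper: call it after a read fails to check common causes. */
public static boolean canReadFile(File f) {
    if (f == null || !f.exists() || !f.isFile()) {
        return false;                               // missing, or not a regular file
    }
    if (!f.canRead()) {
        return false;                               // no read permission
    }
    return f.length() <= Integer.MAX_VALUE;         // otherwise too large for a single byte array
}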
Although you can't use third party libraries, you can still read their code and learn from their experience. In Google Guava for example, you usually read a file into bytes like this:
FileInputStream reader = new FileInputStream("test.txt");
byte[] result = ByteStreams.toByteArray(reader);
The core implementation of this is toByteArrayInternal. Before calling this, you should check:
A non-null file is passed (NullPointerException)
The file exists (FileNotFoundException)
After that, it is reduced to handling an InputStream, and this is where IOExceptions come from. When reading streams, a lot of things out of the control of your application can go wrong (bad sectors and other hardware issues, malfunctioning drivers, OS access rights) and manifest themselves as an IOException.
I am copying here the implementation:
private static final int BUFFER_SIZE = 8192;

/** Max array length on JVM. */
private static final int MAX_ARRAY_LEN = Integer.MAX_VALUE - 8;

private static byte[] toByteArrayInternal(InputStream in, Queue<byte[]> bufs, int totalLen)
        throws IOException {
    // Starting with an 8k buffer, double the size of each successive buffer. Buffers are retained
    // in a deque so that there's no copying between buffers while reading and so all of the bytes
    // in each new allocated buffer are available for reading from the stream.
    for (int bufSize = BUFFER_SIZE;
            totalLen < MAX_ARRAY_LEN;
            bufSize = IntMath.saturatedMultiply(bufSize, 2)) {
        byte[] buf = new byte[Math.min(bufSize, MAX_ARRAY_LEN - totalLen)];
        bufs.add(buf);
        int off = 0;
        while (off < buf.length) {
            // always OK to fill buf; its size plus the rest of bufs is never more than MAX_ARRAY_LEN
            int r = in.read(buf, off, buf.length - off);
            if (r == -1) {
                return combineBuffers(bufs, totalLen);
            }
            off += r;
            totalLen += r;
        }
    }
    // read MAX_ARRAY_LEN bytes without seeing end of stream
    if (in.read() == -1) {
        // oh, there's the end of the stream
        return combineBuffers(bufs, MAX_ARRAY_LEN);
    } else {
        throw new OutOfMemoryError("input is too large to fit in a byte array");
    }
}
As you can see, most of the logic has to do with reading the file in chunks. This is to handle situations where you don't know the size of the InputStream before you start reading. In your case, you only need to read files and you should be able to know the length beforehand, so this complexity could be avoided.
The other check is for OutOfMemoryError. In standard Java the limit is very large, but on Android the available heap will be much smaller. Before trying to read the file, you should check that there is enough memory available.
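As a sketch, such a pre-allocation check could be combined with the fixed-length read shown earlier; the half-of-remaining-heap threshold is an arbitrary illustration, not a rule:
// Rough check before allocating a large byte[]; threshold is illustrative.
public static void verifyEnoughMemory(long fsize) throws IOException {
    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();
    long available = rt.maxMemory() - used;        // approximate remaining heap
    if (fsize > available / 2) {
        throw new IOException("File of " + fsize + " bytes is too large for the remaining heap (" + available + " bytes).");
    }
}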

Copied DocumentFile has different size and hash to original

I'm attempting to copy / duplicate a DocumentFile in an Android application, but upon inspecting the created duplicate, it does not appear to be exactly the same as the original (which is causing a problem, because I need to do an MD5 check on both files the next time a copy is called, so as to avoid overwriting the same files).
The process is as follows:
User selects a file from an ACTION_OPEN_DOCUMENT_TREE
Source file's type is obtained
New DocumentFile in target location is initialised
Contents of the first file are duplicated into the second file
The initial stages are done with the following code:
// Get the source file's type
String sourceFileType = MimeTypeMap.getSingleton().getExtensionFromMimeType(contextRef.getContentResolver().getType(file.getUri()));
// Create the new (empty) file
DocumentFile newFile = targetLocation.createFile(sourceFileType, file.getName());
// Copy the file
CopyBufferedFile(new BufferedInputStream(contextRef.getContentResolver().openInputStream(file.getUri())), new BufferedOutputStream(contextRef.getContentResolver().openOutputStream(newFile.getUri())));
The main copy process is done using the following snippet:
void CopyBufferedFile(BufferedInputStream bufferedInputStream, BufferedOutputStream bufferedOutputStream)
{
    // Duplicate the contents of the temporary local File to the DocumentFile
    try
    {
        byte[] buf = new byte[1024];
        bufferedInputStream.read(buf);
        do
        {
            bufferedOutputStream.write(buf);
        }
        while (bufferedInputStream.read(buf) != -1);
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        try
        {
            if (bufferedInputStream != null) bufferedInputStream.close();
            if (bufferedOutputStream != null) bufferedOutputStream.close();
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
The problem that I'm facing is that although the file copies successfully and is usable (it's a picture of a cat, and it's still a picture of a cat in the destination), it is slightly different.
The file size has changed from 2261840 to 2262016 (+176)
The MD5 hash has changed completely
Is there something wrong with my copying code that is causing the file to change slightly?
Thanks in advance.
Your copying code is incorrect. It is assuming (incorrectly) that each call to read will either return buffer.length bytes or return -1.
What you should do is capture the number of bytes read in a variable each time, and then write exactly that number of bytes. Your code for closing the streams is verbose and (in theory1) buggy as well.
Here is a rewrite that addresses both of those issues, and some others as well.
void copyBufferedFile(BufferedInputStream bufferedInputStream,
                      BufferedOutputStream bufferedOutputStream)
        throws IOException
{
    try (BufferedInputStream in = bufferedInputStream;
         BufferedOutputStream out = bufferedOutputStream)
    {
        byte[] buf = new byte[1024];
        int nosRead;
        while ((nosRead = in.read(buf)) != -1) // read this carefully ...
        {
            out.write(buf, 0, nosRead);
        }
    }
}
As you can see, I have gotten rid of the bogus "catch and squash exception" handlers, and fixed the resource leak using Java 7+ try with resources.
There are still a couple of issues:
It is better for the copy function to take file name strings (or File or Path objects) as parameters and be responsible for opening the streams.
Given that you are doing block reads and writes, there is little value in using buffered streams. (Indeed, it might conceivably be making the I/O slower.) It would be better to use plain streams and make the buffer the same size as the default buffer size used by the Buffered* classes .... or larger.
If you are really concerned about performance, try using transferFrom as described here:
https://www.journaldev.com/861/java-copy-file
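A sketch combining both suggestions - take Path parameters, open the channels inside the method, and let transferFrom do the copying; the loop guards against partial transfers:
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.*;

static void copyFile(Path source, Path target) throws IOException {
    try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ);
         FileChannel out = FileChannel.open(target, StandardOpenOption.CREATE,
                                                    StandardOpenOption.WRITE,
                                                    StandardOpenOption.TRUNCATE_EXISTING)) {
        long size = in.size();
        long pos = 0;
        while (pos < size) {
            // transferFrom may copy fewer bytes than requested, so loop until done
            pos += out.transferFrom(in, pos, size - pos);
        }
    }
}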
1 - In theory, if the bufferedInputStream.close() throws an exception, the bufferedOutputStream.close() call will be skipped. In practice, it is unlikely that closing an input stream will throw an exception. But either way, the try-with-resources approach deals with this correctly, and far more concisely.

Java, why reading from MappedByteBuffer is slower than reading from BufferedReader

I tried to read lines from a file which may be large.
To get better performance, I tried to use a mapped file. But when I compare the performance, I find that the mapped-file way is even a little slower than reading with a BufferedReader.
public long chunkMappedFile(String filePath, int trunkSize) throws IOException {
    long begin = System.currentTimeMillis();
    logger.info("Processing imei file, mapped file [{}], trunk size = {} ", filePath, trunkSize);
    // Create file object
    File file = new File(filePath);
    // Get file channel in read-only mode
    FileChannel fileChannel = new RandomAccessFile(file, "r").getChannel();
    long positionStart = 0;
    StringBuilder line = new StringBuilder();
    long lineCnt = 0;
    while (positionStart < fileChannel.size()) {
        long mapSize = positionStart + trunkSize < fileChannel.size() ? trunkSize : fileChannel.size() - positionStart;
        MappedByteBuffer buffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, positionStart, mapSize); // mapped read
        for (int i = 0; i < buffer.limit(); i++) {
            char c = (char) buffer.get();
            //System.out.print(c); //Print the content of file
            if ('\n' != c) {
                line.append(c);
            } else { // line ends
                processor.processLine(line.toString());
                if (++lineCnt % 100000 == 0) {
                    try {
                        logger.info("mappedfile processed {} lines already, sleep 1ms", lineCnt);
                        Thread.sleep(1);
                    } catch (InterruptedException e) {}
                }
                line = new StringBuilder();
            }
        }
        closeDirectBuffer(buffer);
        positionStart = positionStart + buffer.limit();
    }
    long end = System.currentTimeMillis();
    logger.info("chunkMappedFile {} , trunkSize: {}, cost : {} ", filePath, trunkSize, end - begin);
    return lineCnt;
}

public long normalFileRead(String filePath) throws IOException {
    long begin = System.currentTimeMillis();
    logger.info("Processing imei file, Normal read file [{}] ", filePath);
    long lineCnt = 0;
    try (BufferedReader br = new BufferedReader(new FileReader(filePath))) {
        String line;
        while ((line = br.readLine()) != null) {
            processor.processLine(line.toString());
            if (++lineCnt % 100000 == 0) {
                try {
                    logger.info("file processed {} lines already, sleep 1ms", lineCnt);
                    Thread.sleep(1);
                } catch (InterruptedException e) {}
            }
        }
    }
    long end = System.currentTimeMillis();
    logger.info("normalFileRead {} , cost : {} ", filePath, end - begin);
    return lineCnt;
}
Test result on Linux, reading a file whose size is 537MB:
MappedBuffer way:
2017-09-28 14:33:19.277 [main] INFO com.oppo.push.ts.dispatcher.imei2device.ImeiTransformerOfflineImpl - process imei file ends:/push/file/imei2device-local/20170928/imei2device-13 , lines :12758858 , cost :14804 , lines per seconds: 861852.0670089165
BufferedReader way:
2017-09-28 14:27:03.374 [main] INFO com.oppo.push.ts.dispatcher.imei2device.ImeiTransformerOfflineImpl - process imei file ends:/push/file/imei2device-local/20170928/imei2device-13 , lines :12758858 , cost :13001 , lines per seconds: 981375.1249903854
That is the thing: file IO isn't straightforward and easy.
You have to keep in mind that your operating system has a huge impact on what exactly is going to happen. In that sense: there are no solid rules that would work for all JVM implementations on all platforms.
When you really have to worry about the last bit of performance, doing in-depth profiling on your target platform is the primary solution.
Beyond that, you are getting that "performance" aspect wrong. Meaning: memory mapped IO doesn't magically increase the performance of reading a single file within an application once. Its major advantages go along this path:
mmap is great if you have multiple processes accessing data in a read only fashion from the same file, which is common in the kind of server systems I write. mmap allows all those processes to share the same physical memory pages, saving a lot of memory.
( quoted from this answer on using the C mmap() system call )
In other words: your example is about reading a file's contents. In the end, the OS still has to turn to the drive to read all bytes from there. Meaning: it reads disk content and puts it in memory. When you do that the first time ... it really doesn't matter that you do some "special" things on top of that. To the contrary - as you do "special" things the memory-mapped approach might even be slower - because of the overhead compared to an "ordinary" read.
And coming back to my first point: even when you have 5 processes reading the same file, the memory-mapped approach isn't necessarily faster. As Linux might figure: I already read that file into memory, and it didn't change - so even without explicit "memory mapping" the Linux kernel might cache the information.
The memory mapping doesn't really give any advantage, since even though you're bulk loading a file into memory, you're still processing it one byte at a time. You might see a performance increase if you processed the buffer in suitably sized byte[] chunks. Even then the BufferedReader version may perform better or at least almost the same.
The nature of your task is to process a file sequentially. BufferedReader already does this very well and the code is simple, so if I had to choose I'd go with the simplest option.
Also note that your buffer code doesn't work except for single byte encodings. As soon as you get multiple bytes per character, it will fail magnificently.
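A sketch of what chunked processing could look like, reusing the buffer and processor variables from the question's chunkMappedFile method; the chunk size is arbitrary, and like the original this only handles single-byte encodings:
byte[] chunk = new byte[64 * 1024];
StringBuilder line = new StringBuilder();
while (buffer.hasRemaining()) {
    int n = Math.min(chunk.length, buffer.remaining());
    buffer.get(chunk, 0, n);                       // one bulk copy instead of n single get() calls
    for (int i = 0; i < n; i++) {
        char c = (char) chunk[i];
        if (c == '\n') {
            processor.processLine(line.toString());
            line.setLength(0);
        } else {
            line.append(c);
        }
    }
}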
GhostCat is correct. And in addition to your OS choice, there are other things that can affect performance:
Mapping a file will place greater demand on physical memory. If physical memory is "tight" that could cause paging activity, and a performance hit.
The OS could use a different read-ahead strategy if you read a file using read syscalls versus mapping it into memory. Read-ahead (into the buffer cache) can make file reading a lot faster.
The default buffer size for BufferedReader and the OS memory page size are likely to be different. This may result in the size of disk read requests being different. (Larger reads often give greater I/O throughput, at least up to a certain point; see the one-line example after this list.)
There could also be "artefacts" caused by the way that you benchmark. For example:
The first time you read a file, a copy of some or all of the file will land in the buffer cache (in memory)
The second time you read the same file, parts of it may still be in memory, and the apparent read time will be shorter.
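As a side note on the buffer-size point above: giving the BufferedReader a larger explicit buffer is a one-line change (the 1 MB size is just an example):
BufferedReader br = new BufferedReader(new FileReader(filePath), 1024 * 1024); // 1 MB buffer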

Liferay Concurrent FileEntry Upload

Problem Statement :
In Liferay I have to import a zip file into some folder in the Liferay CMS. So far I had implemented serial unzipping of the zip file, creating its folders and then its files. The problem here is that the whole process takes a lot of time, so I had to use a parallel approach for creating folders and creating files.
My Solution :
I have used a java.util.concurrent.ExecutorService to create an Executors.newFixedThreadPool(NTHREDS), where NTHREDS is the number of threads to run in parallel (say 5).
I read all the folder paths from the zip and placed the list of zip entries (files) against the folder path as a key in a HashMap.
I traversed all keys in the map and created the folders serially.
Then I traversed the list of zip entries (files) from the map and passed them to a thread worker, one file per worker; these workers are then sent to the ExecutorService to execute.
So far I didn't find any significant reduction in the time of the whole process. Am I moving in the correct direction? Does Liferay support concurrent file addition? What am I doing wrong?
I will be very thankful for any help in this regard.
below is my code
imports
...
...
public class TestImportZip {
private static final int NTHREDS = 5;
ExecutorService executor = null;
...
...
....
Map<String,Folder> folders = new HashMap<String,Folder>();
File zipsFile = null;
public TestImportZip(............,File zipFile, .){
.
.
this.zipsFile = zipFile;
this.executor = Executors.newFixedThreadPool(NTHREDS);
}
// From here the process starts
public void importZip() {
Map<String,List<ZipEntry>> foldersMap = new HashMap<String, List<ZipEntry>>();
try (ZipFile zipFile = new ZipFile(zipsFile)) {
zipFile.stream().forEach(entry -> {
String entryName = entry.getName();
if(entryName.contains("/")) {
String key = entryName.substring(0, entryName.lastIndexOf("/"));
List<ZipEntry> zipEntries = foldersMap.get(key);
if(zipEntries == null){
zipEntries = new ArrayList<>();
}
zipEntries.add(entry);
foldersMap.put(key,zipEntries);
}
});
createFolders(foldersMap.keySet());
createFiles(foldersMap);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private void createFolders(Set<String> folderPathSets) {
// create folder and put the folder in map
.
.
.
folders.put(folderPath,folder);
}
private void createFiles(Map<String, List<ZipEntry>> foldersMap) {
.
.
.
//Traverse all the files from all the list in map and send them to worker
createFileWorker(folderPath,zipEntry);
}
private void createFileWorker(String folderPath,ZipEntry zipEntry) {
CreateEntriesWorker cfw = new CreateEntriesWorker(folderPath, zipEntry);
executor.execute(cfw);
}
class CreateEntriesWorker implements Runnable{
Folder folder = null;
ZipEntry entryToCreate = null;
public CreateEntriesWorker(String folderPath, ZipEntry zipEntry){
this.entryToCreate = zipEntry;
// get folder from already created folder map
this.folder = folders.get(folderPath);
}
public void run() {
if(this.folder != null) {
long startTime = System.currentTimeMillis();
try (ZipFile zipFile = new ZipFile(zipsFile)) {
InputStream inputStream = zipFile.getInputStream(entryToCreate);
try{
String name = entryToCreate.getName();
// created file entry here
}catch(Exception e){
}finally{
if(inputStream != null)
inputStream.close();
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
}
Your simplified code does not contain any Liferay reference that I recognize. The description you provide gives a hint that you're trying to optimize some code, but you don't get any better performance out of your attempt. This typically is a sign that you're trying to optimize the wrong aspect of the problem (or that it's already quite optimized).
You'll need to determine the actual bottleneck of your operation in order to know if it's feasible to optimize. There's a common saying that "premature optimization is the root of all evil". What does it mean?
I'll completely make up numbers here - don't quote me on them: They're freely invented for illustration purposes. Let's say, that your operation of adding the contents of a Zip file to Liferay's repository is distributed to the following percentages of operational resources:
4% zip file decoding/decompressing
6% file I/O for zip operations and temporary files
10% database operation for storing the files
60% for extracting text-only from word, pdf, excel and other files stored within the zip file in order to index the document in the full-text index
20% overhead of the full-text indexing library for putting together the index.
Suppose you're optimizing the zip file decoding/decompressing - what overall improvement can you expect? At best, something on the order of that 4% share.
While my numbers are made up: if your optimizations do not have any result, I'd recommend reverting them, measuring where you need to optimize, and going after that place (or accepting it and upgrading your hardware if that place is out of reach).
Run those numbers for CPU, I/O, memory and other potential bottlenecks. Identify your actual bottleneck #1, fix it, measure again. You'll see that bottleneck #2 has gotten a promotion. Rinse, repeat until you're happy.

How to copy a large file in Windows XP?

I have a large file in Windows XP - it's 38GB (a VM image).
I cannot seem to copy it.
Dragging on the desktop - gives error of "Insufficient system resources exist to complete the requested service"
Using Java - FileChannel.transferTo(0, fileSize, dest) fails for all files > 2GB
Using Java - FileChannel.transferTo() in chunks of 100Mb fails after ~18Gb
java.io.IOException: Insufficient system resources exist to complete the requested service
at sun.nio.ch.FileDispatcher.write0(Native Method)
at sun.nio.ch.FileDispatcher.write(FileDispatcher.java:44)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
at sun.nio.ch.IOUtil.write(IOUtil.java:28)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:198)
at sun.nio.ch.FileChannelImpl.transferToTrustedChannel(FileChannelImpl.java:439)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:510)
I mean - the computer has 3GB of RAM. A 100MB buffer should be enough!?!?
Apparently the DOS commands "copy" and "xcopy" also fail.
(edit) I've tried COPY & XCOPY - these fail with the same error. XCOPY seems to take a really really long time about it too.
I've heard of Robocopy, but it doesn't copy single files?
I'm really feeling that Windows is for the lose right now. Surely Microsoft has heard of files larger than a few GB?
Thanks!
In Java, don't try to copy the whole file in a single operation. The transferTo() method works on chunks of a file; it wasn't intended as a high-level file-copy method. Invoke transferTo() in a loop, and assume that count bytes of data will be in RAM (i.e., lower that parameter so it fits comfortably in RAM).
FileChannel src = ...
FileChannel dst = ...
final long CHUNK = 16 * 1024 * 1024; /* 16 MB */
for (long pos = 0; pos < fileSize; ) {
    pos += src.transferTo(pos, CHUNK, dst);
}
The comment in the transferTo() JavaDoc about it being "more efficient than a simple loop" refers to the fact that channel-to-channel communication can be optimized more than channel-to-user-space-to-channel. It doesn't mean that all looping can be avoided.
I am a VMware ESX user; I have 30 production VMs, with the largest being 232GB. I back up my VM instances onto an internal SATA drive and then copy these off once a week to an external eSATA drive. I use TeraCopy (free); it runs on average at 45MB/s on an XP machine with 3GB.
Hope that helps,
Sailen
Well - I've not managed to find a way that works.
None of the packaged tools in Windows will copy the file. Drag and drop, COPY, XCOPY, Java - all fail to copy the file.
The reason I wanted to copy the file was for a backup before doing an OS upgrade.
In the end I booted into Knoppix and copied it.
Take a look at this Hotfix, worth a try as everything I have seen points to this as being a cure for your issue.
EDIT: You can also try XCOPY /Z as pointed out here.
There may be a hardware issue as well..
I suspect you don't have much time; however, you may try a dumber stream solution and not set large buffers (8-16MB should be enough):
public static void copy(InputStream input, OutputStream output) throws IOException {
    byte[] buffer = new byte[1024 * 1024 * 8]; // 8MB
    int n = 0;
    while (-1 != (n = input.read(buffer))) {
        output.write(buffer, 0, n);
    }
}

public static void main(String args[]) {
    if (args.length != 2) {
        System.err.println("wrong argument count");
        System.exit(1);
    }
    FileInputStream in = null;
    FileOutputStream out = null;
    try {
        in = new FileInputStream(new File(args[0]));
        out = new FileOutputStream(new File(args[1]));
        copy(in, out);
    } catch (Exception e) {
        e.printStackTrace();
    }
    if (in != null) { try { in.close(); } catch (Exception e) {} }
    if (out != null) { try { out.close(); } catch (Exception e) {} }
}
Are you sure the filesystem is actually able to cope with such big files (FAT32 cannot, for example)? Take a look at this link for details: http://www.ntfs.com/ntfs_vs_fat.htm
Is the system 32-bit or 64-bit? On 32-bit you may have problems copying files larger than 2-4GB.
Also, you said that rsync sucks for you. I've had a very nice experience with it, copying between two hard drives at near-native speed. I've had lots of small files; you seem to have one big blob instead.
You may also try splitting the big blob into smaller blobs :)
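If you want to try that, here is a rough sketch that writes the big file out in fixed-size parts; the part size, buffer size and naming scheme are all just examples:
import java.io.*;

static void splitFile(File source, File targetDir, long partSize) throws IOException {
    byte[] buffer = new byte[8 * 1024 * 1024];                    // 8 MB copy buffer
    try (FileInputStream in = new FileInputStream(source)) {
        int part = 0;
        int n = in.read(buffer);
        while (n != -1) {
            File partFile = new File(targetDir, source.getName() + ".part" + part++);
            long written = 0;
            try (FileOutputStream out = new FileOutputStream(partFile)) {
                // each part may exceed partSize by at most one buffer
                while (n != -1 && written < partSize) {
                    out.write(buffer, 0, n);
                    written += n;
                    n = in.read(buffer);
                }
            }
        }
    }
}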
final long CHUNK = 16 * 1024 * 1024; /* 16 MB */
for (long pos = 0; pos < fileSize; ) {
    pos += src.transferTo(pos, CHUNK, dst);
}
This does work! Just make sure your src and dst are FileChannel objects (input and output, respectively).
Another possible answer is Files.copy (java NIO 2), e.g.:
Path sourcePath = Paths.get("big-file.dat");
Path destinationPath = Paths.get("big-file-copy.dat");
try {
    Files.copy(sourcePath, destinationPath, StandardCopyOption.REPLACE_EXISTING);
} catch (IOException e) {
    // something else went wrong
    e.printStackTrace();
}
