Mysterious EOF exception while reading a file with Java IO functions - java

I got the following exception while trying to seek within a file.
>
Error while seeking to 38128 in myFile, File length: 85742
java.io.EOFException
at java.io.RandomAccessFile.readInt(RandomAccessFile.java:725)
at java.io.RandomAccessFile.readLong(RandomAccessFile.java:758)
>
But as you can see, I am trying to seek to 38128, whereas the file length is 85742, and it still reported an EOFException. How is that possible? Another process periodically appends content to the file using a DataOutputStream and then closes its file handle. My process seeks to various locations and reads from them. One more thing: I got this exception only once; I tried to reproduce it, but it never happened again. The file is on a local disk only, not a filer.
Thanks
D. L. Kumar

I would be very careful when trying to do random access on a file that is concurrently being written to from another process. It might lead to all kinds of strange synchronisation problems, as you are experiencing right now.
Do you determine the length of the file from the same process as the one doing the seek()? Has the other (modifying) process done a flush()?

The process writing the data may have been told to write it, but the data could still be sitting in a buffer. Be sure to call flush() on the output stream before attempting to read the data.
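For example, if the writer wraps its DataOutputStream in a BufferedOutputStream, it is the flush() that makes recently written bytes visible to a reader seeking into the file. A minimal sketch of the writer/reader pair (the file name is illustrative):

```java
import java.io.*;

public class FlushExample {
    public static void main(String[] args) throws IOException {
        File f = new File("myFile.dat");
        // Writer side: without flush(), recently written bytes may still
        // sit in the in-memory buffer, invisible to other readers.
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f, true)))) {
            out.writeLong(12345L);
            out.flush(); // push buffered bytes to the OS before readers seek
        }
        // Reader side: seek to the record we just appended and read it back.
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            raf.seek(f.length() - 8);
            System.out.println(raf.readLong());
        }
        f.delete();
    }
}
```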

Related

How file manipulations perform during power outage

Linux machine, Java standalone application
I have the following situation: a file write (which creates the destination file and writes some content to it) immediately followed by a file move.
I also have a power outage problem: power to the computer is instantly cut during these operations.
As a result, the file was created and moved, but its content is empty.
The question is: what under the hood can cause this exact outcome? Given the timing, perhaps the hard drive loses power before the processor and RAM, but in that case how can the file be created and subsequently moved while the write that precedes the move does not succeed?
I tried catching and logging the exception and debug information, but the power outage disables the logging (I/O) as well.
try {
    FileUtils.writeStringToFile(file, JsonUtils.toJson(object));
} finally {
    if (file.exists()) {
        FileUtils.moveFileToDirectory(file, new File(path), true);
    }
}
Linux file systems don't necessarily write things to disk immediately, or in exactly the order that you wrote them. That includes both file content and file / directory metadata.
So if you get a power failure at the wrong time, you may find that the file data and metadata are inconsistent.
Normally this doesn't matter. (If the power fails and you don't have a UPS, the applications go away without getting a chance to finish what they were doing.)
However, if it does matter, you can force the file to "sync" to disk before you move it:
FileOutputStream fos = ...
// write to file
fos.getFD().sync();
fos.close();
// now move it
You need to read the javadoc for sync() carefully to understand what the method actually does.
You also need to read the javadoc for the method you are using to move the file regarding atomicity.
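A minimal sketch of the sync-then-move pattern, using java.nio's Files.move with ATOMIC_MOVE (the file names are illustrative; ATOMIC_MOVE requires source and destination to be on the same filesystem, and even sync() does not guard against every failure mode on every platform):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class DurableWrite {
    public static void main(String[] args) throws IOException {
        Path tmp = Paths.get("data.json.tmp");
        Path dest = Paths.get("data.json");
        try (FileOutputStream fos = new FileOutputStream(tmp.toFile())) {
            fos.write("{\"ok\":true}".getBytes(StandardCharsets.UTF_8));
            // Force the data to disk before the rename becomes visible.
            fos.getFD().sync();
        }
        // ATOMIC_MOVE makes the rename all-or-nothing: readers see either
        // the old state or the fully written file, never a partial one.
        Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
        System.out.println(new String(Files.readAllBytes(dest), StandardCharsets.UTF_8));
        Files.delete(dest);
    }
}
```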

Checking null position using RandomAccessFile

I need to check if a position in a random access file has not been written to. The problem with this is, when the position actually hasn't been written to, I get (as anticipated) an EOFException. I have been reading RandomAccessFile documentation to try to solve this problem, and tried researching online.
Things I've tried:
Using a try-catch block and catching the EOFException every time (using try-catch as a conditional statement). It works, but it is poor practice and very inefficient, since in my case the position is past EOF the majority of the time.
Using a BufferedReader to loop through and check the position. I ran into many problems and decided there must be a better way.
I don't want to do any copying one file over to another or any other work around. I know there has to be a direct way of doing this, I just can't seem to find the correct solution.
Are you trying to write a "tailer"?
What you need to do is have one thread that reads only the data that is actually there, using FileChannel.size() to check for more data. This data is passed to a piped stream, which allows a second thread to read continuously (e.g. using a BufferedReader) and block when more data is needed.
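If all you need is "has this position been written yet?", a simpler variant of the same size check is to compare the position against the channel size before reading, so the EOF case becomes an ordinary branch rather than an exception. A sketch (readIntIfPresent is an illustrative helper, not a library method):

```java
import java.io.*;

public class SafeRead {
    // Returns the int at pos, or null if the file does not yet
    // contain 4 readable bytes at that position.
    static Integer readIntIfPresent(RandomAccessFile raf, long pos) throws IOException {
        if (pos < 0 || pos + 4 > raf.getChannel().size()) {
            return null; // position not written yet: no exception needed
        }
        raf.seek(pos);
        return raf.readInt();
    }

    public static void main(String[] args) throws IOException {
        File f = new File("positions.bin");
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.writeInt(7);
            System.out.println(readIntIfPresent(raf, 0));   // written
            System.out.println(readIntIfPresent(raf, 100)); // not written
        }
        f.delete();
    }
}
```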

java program file re-processing upon temporary IO Exception

I am processing a large number of files, say 1000, with a Java program. Processing each file takes a significant amount of time. The problem is: while processing a file, due to some unknown problem (maybe antivirus, maybe something else), the input file becomes inaccessible to the Java program, so I get "Access is denied" and ultimately a java.io.FileNotFoundException.
One possible solution is to process the file again whenever I get the exception, but re-calling the function with the file name is difficult because the function is recursive, processing directories and files recursively. Kindly suggest alternative approaches.
Move the catch() inside the body of the recursive method.
void readFilesRecursively(File dirOrFile) {
    boolean successfulRead = false;
    while (!successfulRead) {
        try {
            // .......... read ..........
            successfulRead = true;
        } catch (java.io.FileNotFoundException ex) {
            // transient failure (e.g. antivirus lock): retry;
            // consider capping retries to avoid spinning forever
        }
    }
}
Keep a list of files whose processing fails, adding a file to the list whenever you get the exception.
Once the recursive call ends, check whether the list has any entries; if so, process them.
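The failed-list approach could be sketched like this (the class and helper names are illustrative, and the "late" file stands in for a transiently inaccessible one whose problem clears before the retry pass):

```java
import java.io.*;
import java.util.*;

public class RetryProcessor {
    static final List<File> failed = new ArrayList<>();

    static void process(File f) {
        try {
            // ... real per-file work would go here ...
            if (!f.canRead()) throw new FileNotFoundException(f.getName());
        } catch (FileNotFoundException e) {
            failed.add(f); // remember it instead of aborting the recursion
        }
    }

    static void walk(File dirOrFile) {
        if (dirOrFile.isDirectory()) {
            File[] children = dirOrFile.listFiles();
            if (children != null) for (File c : children) walk(c);
        } else {
            process(dirOrFile);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File("work");
        dir.mkdir();
        File late = new File(dir, "late.txt"); // does not exist yet
        walk(dir);
        process(late); // simulates the transient access failure
        System.out.println("failed after first pass: " + failed.size());
        late.createNewFile(); // the transient problem clears
        List<File> retry = new ArrayList<>(failed);
        failed.clear();
        for (File f : retry) process(f); // second pass over failures only
        System.out.println("failed after retry: " + failed.size());
        late.delete();
        dir.delete();
    }
}
```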

IO errors using memory mapped files in Java

I'm using memory mapped files in some Java code to quickly write to a 2G file. I'm mapping the entire file into memory. The issue I have with my solution is that if the file I'm writing to mysteriously disappears or the disk has some type of error, those errors aren't getting bubbled up to the Java code.
In fact, from the Java code, it looks as though my write completed successfully. Here's the unit test I created to simulate this type of failure:
File twoGigFile = new File("big.bin");
RandomAccessFile raf = new RandomAccessFile(twoGigFile, "rw");
raf.setLength(Integer.MAX_VALUE);
raf.seek(30000); // Totally arbitrary
raf.writeInt(42);
raf.writeInt(42);
MappedByteBuffer buf = raf.getChannel().map(MapMode.READ_WRITE, 0, Integer.MAX_VALUE);
buf.force();
buf.position(1000000); // Totally arbitrary
buf.putInt(0);
assertTrue(twoGigFile.delete());
buf.putInt(0);
raf.close();
This code runs without any errors at all. This is quite an issue for me. I can't seem to find anything out there that speaks about this type of issue. Does anyone know how to get memory mapped files to correctly throw exceptions? Or if there is another way to ensure that the data is actually written to the file?
I'm trying to avoid using a RandomAccessFile because it is much slower than a memory mapped file. However, there might not be any other option.
You can't. To quote the JavaDoc:
All or part of a mapped byte buffer may become inaccessible at any time [...] An attempt to access an inaccessible region of a mapped byte buffer will not change the buffer's content and will cause an unspecified exception to be thrown either at the time of the access or at some later time.
And here's why: when you use a mapped buffer, you are changing memory, not the file. The fact that the memory happens to be backed by the file is irrelevant until the OS attempts to write the buffered blocks to disk, which is something that is managed entirely by the OS (ie, your application will not know it's happening).
If you expect the file to disappear underneath you, then you'll have to use an alternate mechanism to see if this happens. One possibility is to occasionally touch the file using a RandomAccessFile, and catch the error that it will throw. Depending on your OS, even this may not be sufficient: on Linux, for example, a file exists for those programs that have an open handle to it, even if it has been deleted externally.
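The "touch via RandomAccessFile" check might look like the sketch below. It deliberately re-opens the file by path, since on Linux the already-open handle keeps working after an external delete; the expected output assumes POSIX delete semantics (on Windows the delete itself would typically fail while the file is open):

```java
import java.io.*;

public class FileWatch {
    // Re-opens the file by path; fails if it has been deleted externally.
    static boolean stillPresent(File f) {
        try (RandomAccessFile probe = new RandomAccessFile(f, "r")) {
            probe.readByte(); // actually touch the data, not just the handle
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = new File("watched.bin");
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.writeInt(42);
            System.out.println(stillPresent(f));
            f.delete();
            // Our open handle may still work, but a fresh open by path fails.
            System.out.println(stillPresent(f));
        }
    }
}
```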

Separating multiple images from stdin in Java

I want to write a program in Java with support for unix pipeline. The problem is that my input files are images and I need in some way to separate them from one another.
I thought this would not be a problem, because I can read the InputStream using ImageIO.read() without resetting the position. But it isn't that simple: ImageIO.read() closes the stream every time an image is read, so I can't read more than one file from stdin. Do you have a solution for this?
The API for read() mentions, "This method does not close the provided InputStream after the read operation has completed; it is the responsibility of the caller to close the stream, if desired." You might also check the result for null and verify that a suitable ImageReader is available.
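If you control the sending side, one way to make the separation unambiguous is to frame each image with a 4-byte length prefix and decode each frame from its own buffer; this sidesteps any stream-position question entirely. A sketch (the framing protocol and the class/method names are my own, not part of ImageIO):

```java
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;

public class FramedImages {
    // Writer side: each image is sent as a 4-byte length followed by its bytes.
    static void writeFramed(BufferedImage img, DataOutputStream out) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ImageIO.write(img, "png", buf);
        out.writeInt(buf.size());
        buf.writeTo(out);
    }

    // Reader side: returns null at a clean end-of-stream.
    static BufferedImage readFramed(DataInputStream in) throws IOException {
        int len;
        try {
            len = in.readInt();
        } catch (EOFException e) {
            return null;
        }
        byte[] data = new byte[len];
        in.readFully(data);
        // Decoding from a per-image buffer sidesteps any stream-position
        // ambiguity left behind by ImageIO.read().
        return ImageIO.read(new ByteArrayInputStream(data));
    }

    public static void main(String[] args) throws IOException {
        // A byte array stands in for stdin/stdout of a unix pipeline.
        ByteArrayOutputStream pipe = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(pipe);
        writeFramed(new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB), out);
        writeFramed(new BufferedImage(3, 3, BufferedImage.TYPE_INT_RGB), out);

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(pipe.toByteArray()));
        int count = 0;
        for (BufferedImage img; (img = readFramed(in)) != null; ) count++;
        System.out.println(count + " images");
    }
}
```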
