How to lock document files? [duplicate] - java

I have a Java process that opens a file using a FileReader. How can I prevent another (Java) process from opening this file, or at least notify that second process that the file is already opened? Does this automatically make the second process get an exception if the file is open (which solves my problem) or do I have to explicitly open it in the first process with some sort of flag or argument?
To clarify:
I have a Java app that lists a folder and opens each file in the listing for processing. It processes the files one after the other; processing each file consists of reading it and doing some calculations based on the contents, and it takes about 2 minutes. I also have another Java app that does the same thing but instead writes to the files. What I want is to be able to run these apps at the same time, so the scenario goes like this: ReadApp lists the folder and finds files A, B, C. It opens file A and starts reading. WriteApp lists the folder and finds files A, B, C. It opens file A, sees that it is open (by an exception or whatever way) and goes to file B. ReadApp finishes file A and continues to B. It sees that it is open and continues to C. It is crucial that WriteApp doesn't write while ReadApp is reading the same file, or vice versa. They are different processes.

FileChannel.lock is probably what you want.
try (
    FileInputStream in = new FileInputStream(file);
    // A read-only channel cannot take an exclusive lock, so request a shared one;
    // a writer asking for an exclusive lock will still be blocked.
    java.nio.channels.FileLock lock = in.getChannel().lock(0L, Long.MAX_VALUE, true);
    Reader reader = new InputStreamReader(in, charset)
) {
    ...
}
(Disclaimer: Code not compiled and certainly not tested.)
Note the section entitled "platform dependencies" in the API doc for FileLock.

Don't use only the classes in the java.io package; instead, use the java.nio.channels package. The latter has a FileLock class. You can apply a lock to a FileChannel.
try {
    // Get a file channel for the file
    File file = new File("filename");
    FileChannel channel = new RandomAccessFile(file, "rw").getChannel();

    // Use channel.lock() to block until the lock can be acquired,
    // or channel.tryLock() to return immediately.
    FileLock lock = null;
    try {
        // tryLock() returns null (or throws an exception) if the file is already locked.
        lock = channel.tryLock();
    } catch (OverlappingFileLockException e) {
        // The file is already locked in this thread or virtual machine.
    }

    if (lock != null) {
        // ... work with the file while holding the lock ...

        // Release the lock.
        lock.release();
    }

    // Close the file
    channel.close();
} catch (IOException e) {
    e.printStackTrace();
}

If you can use Java NIO (JDK 1.4 or greater), then I think you're looking for java.nio.channels.FileChannel.lock()
FileChannel.lock()
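For example, a rough (untested) sketch using NIO.2's FileChannel.open from Java 7; the file name here is just a placeholder:
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LockingReader {
    public static void main(String[] args) throws IOException {
        // Open for read and write so an exclusive lock is allowed.
        try (FileChannel channel = FileChannel.open(Paths.get("A.txt"),
                StandardOpenOption.READ, StandardOpenOption.WRITE);
             FileLock lock = channel.lock()) { // blocks until the lock is granted
            // read or write through the channel while holding the lock
        } // lock and channel are released/closed here
    }
}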

Use java.nio.channels.FileLock in conjunction with java.nio.channels.FileChannel.

This may not be what you are looking for, but in the interest of coming at a problem from another angle....
Are the two Java processes that might want to access the file part of the same application? Perhaps you can just filter all access to the file through a single, synchronized method (or, even better, using JSR-166)? That way, you can control access to the file, and perhaps even queue access requests.
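If both apps can indeed run inside one JVM, a minimal sketch of that idea with java.util.concurrent (the class and method names are made up, not from the question):
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedFileAccess {
    // Many readers may hold the read lock at once; the write lock is exclusive.
    private static final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public static byte[] readFile(File file) throws IOException {
        rwLock.readLock().lock();
        try {
            return Files.readAllBytes(file.toPath());
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public static void writeFile(File file, byte[] content) throws IOException {
        rwLock.writeLock().lock();
        try {
            Files.write(file.toPath(), content);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
Note this only helps when both the reader and the writer go through the same lock object in the same JVM; it does nothing against a separate process.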

Use a RandomAccessFile, get its channel, then call lock(). The channel provided by input or output streams does not have sufficient privileges to lock properly. Be sure to call release() in the finally block (closing the file doesn't necessarily release the lock).

Below is a sample snippet that locks a file until the JVM has finished processing it.
public static void main(String[] args) {
    File file = new File(FILE_FULL_PATH_NAME);
    RandomAccessFile in = null;
    try {
        in = new RandomAccessFile(file, "rw");
        FileLock lock = in.getChannel().lock();
        try {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            lock.release();
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (in != null) {
            try {
                in.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Use this on Unix if you are transferring the file using WinSCP or FTP:
public static void isFileReady(File entry) throws Exception {
    long realFileSize = entry.length();
    long currentFileSize = 0;
    do {
        try (FileInputStream fis = new FileInputStream(entry)) {
            currentFileSize = 0;
            byte[] b = new byte[1024];
            while (fis.available() > 0) {
                int nResult = fis.read(b);
                if (nResult == -1) {
                    break;
                }
                currentFileSize += nResult;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("currentFileSize=" + currentFileSize + ", realFileSize=" + realFileSize);
    } while (currentFileSize != realFileSize);
}

Related

Java: how to synchronize file modification by threads

Only one instance of my Java application can run at a time. It runs on Linux. I need to ensure that one thread doesn't modify the file while the other thread is using it.
I don't know which file locking or synchronization method to use. I have never done file locking in Java and I don't have much Java or programming experience.
I looked into java NIO and I read that "File locks are held on behalf of the entire Java virtual machine. They are not suitable for controlling access to a file by multiple threads within the same virtual machine." Right away I knew that I needed expert help because this is production code and I have almost no idea what I'm doing (and I have to get it done today).
Here's a brief outline of my code to upload some stuff (archive files) to a server. It gets the list of files to upload from a file (call it "listFile") -- and listFile can be modified while this method is reading from it. I minimize the chances of that by copying listFile to a temp file and using that temp file thereafter. But I think I need to lock the file during this copy process (or something like that).
package myPackage;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import com.example.my.FileHelper;
import com.example.my.Logger;
public class BatchUploader implements Runnable {
    // Fields used by the outline below; run() and upload(File) are omitted here.
    private volatile boolean stopRunning;
    private volatile boolean isUploading;
    private String fileToUploadName;
    private int filesUploadedCount;

    private int processUploads() {
        File copyOfListFile = null;
        try {
            copyOfListFile = new File("/path/to/temp/workfile");
            File origFile = new File("/path/to/listFile"); // "listFile" - the file that contains a list of files to upload
            DataWriter.copyFile(origFile, copyOfListFile); // see code below
        } catch (IOException ex) {
            Logger.log(ex);
        }
        try {
            BufferedReader input = new BufferedReader(new FileReader(copyOfListFile));
            try {
                while (!stopRunning && (fileToUploadName = input.readLine()) != null) {
                    upload(new File(fileToUploadName));
                }
            } finally {
                input.close();
                isUploading = false;
            }
        } catch (IOException ex) {
            Logger.log(ex);
        }
        return filesUploadedCount;
    }
}
Here is the code, used by the class above, that modifies the list of files to be uploaded:
public class DataWriter {
public void modifyListOfFilesToUpload(String uploadedFilename) {
StringBuilder content = new StringBuilder();
try {
File listOfFiles = new File("/path/to/listFile"); //file that contains a list of files to upload
if (!listOfFiles.exists()) {
//some code
}
BufferedReader input = new BufferedReader(new FileReader(listOfFiles));
try {
String line = "";
while ((line = input.readLine()) != null) {
if (!line.isEmpty() && line.endsWith(FILE_EXTENSION)) {
if (!line.contains(uploadedFilename)) {
content.append(String.format("%1$s%n", line));
} else {
//some code
}
} else {
//some code
}
}
} finally {
input.close();
}
this.write("/path/to/", "listFile", content.toString(), false, false, false);
} catch (IOException ex) {
Logger.debug("Error reading/writing uploads logfile: " + ex.getMessage());
}
}
public static void copyFile(File in, File out) throws IOException {
FileChannel inChannel = new FileInputStream(in).getChannel();
FileChannel outChannel = new FileOutputStream(out).getChannel();
try {
inChannel.transferTo(0, inChannel.size(), outChannel);
} catch (IOException e) {
throw e;
} finally {
if (inChannel != null) {
inChannel.close();
}
if (outChannel != null) {
outChannel.close();
}
}
}
private void write(String path, String fileName, String data, boolean append, boolean addNewLine, boolean doLog) {
try {
File file = FileHelper.getFile(fileName, path);
BufferedWriter bw = new BufferedWriter(new FileWriter(file, append));
bw.write(data);
if (addNewLine) {
bw.newLine();
}
bw.flush();
bw.close();
if (doLog) {
Logger.debug(String.format("Wrote %1$s%2$s", path, fileName));
}
} catch (java.lang.Exception ex) {
Logger.log(ex);
}
}
}
May I suggest a slightly different approach. As far as I recall, on Linux the file rename (mv) operation is atomic on local disks, so there is no chance for one process to see a half-written file.
Let XXX be a sequence number with three (or more) digits. You could let your DataWriter append to a file called listFile-XXX.prepare and write a fixed number N of filenames into it. When N names are written, close the file and rename it (atomic, see above) to listFile-XXX. With the next filename, start writing to listFile-YYY where YYY=XXX+1.
Your BatchUploader may at any time check whether it finds files matching the pattern listFile-XXX, open them, read them, upload the named files, then close and delete them. There is no chance for the threads to mess up each other's files.
Implementation hints:
Make sure BatchUploader uses a polling mechanism that waits one or more seconds if it does not find a file ready for upload (to avoid busy waiting).
You may want to make sure to sort the listFile-XXX files according to XXX so the uploading is kept in sequence.
Of course you could vary the protocol for when listFile-XXX.prepare is closed. If DataWriter has nothing to do for a longer time, you don't want files that are ready for upload to hang around just because there are not yet N names in the batch.
Benefits: no locking (which would be a pain to get right), no copying, and an easy overview of the work queue and its state in the file system.
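A rough sketch of that handoff (directory layout, file names and batch size are illustrative, not taken from the question):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class BatchHandoff {
    // Writer side: fill listFile-XXX.prepare, then publish it with an atomic rename.
    static void publishBatch(Path dir, int xxx, List<String> names) throws IOException {
        Path prepare = dir.resolve(String.format("listFile-%03d.prepare", xxx));
        Files.write(prepare, names); // N file names, one per line
        Files.move(prepare, dir.resolve(String.format("listFile-%03d", xxx)),
                StandardCopyOption.ATOMIC_MOVE); // the uploader never sees a half-written file
    }

    // Uploader side: poll for published batches, oldest first; read, upload, then delete each one.
    static List<Path> findReadyBatches(Path dir) throws IOException {
        try (Stream<Path> files = Files.list(dir)) {
            return files.filter(p -> p.getFileName().toString().matches("listFile-\\d+"))
                        .sorted()
                        .collect(Collectors.toList());
        }
    }
}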
Here is a slightly different suggestion. Assuming your file names don't have '\n' characters in them (it's a big assumption on Linux, I know, but you can have your writer check for that), why not read only complete lines and ignore the incomplete ones? By incomplete lines, I mean lines that end at EOF rather than with \n.
Edit: see more suggestions in comments below.
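A small sketch of the complete-lines-only idea (it assumes the list file fits comfortably in memory; the method name is mine):
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class CompleteLines {
    static List<String> completeLines(File listFile) throws IOException {
        String text = new String(Files.readAllBytes(listFile.toPath()), StandardCharsets.UTF_8);
        int lastNewline = text.lastIndexOf('\n');
        if (lastNewline < 0) {
            return Collections.emptyList(); // no complete line yet
        }
        // Anything after the last '\n' may still be in the middle of being written; ignore it.
        return Arrays.asList(text.substring(0, lastNewline).split("\n"));
    }
}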

Java Temporary File Multithreaded Application

I'm looking for a foolproof way to generate a temporary file that will always end up with a unique name, on a per-JVM basis. Basically, I want to be sure in a multithreaded application that if two or more threads attempt to create a temporary file at the exact same moment in time, they will both end up with a unique temporary file and no exceptions will be thrown.
This is the method I have currently:
public File createTempFile(InputStream inputStream) throws FileUtilsException {
    File tempFile = null;
    OutputStream outputStream = null;
    try {
        tempFile = File.createTempFile("app", ".tmp");
        tempFile.deleteOnExit();
        outputStream = new FileOutputStream(tempFile);
        IOUtils.copy(inputStream, outputStream);
    } catch (IOException e) {
        logger.debug("Unable to create temp file", e);
        throw new FileUtilsException(e);
    } finally {
        try { if (outputStream != null) outputStream.close(); } catch (Exception e) {}
        try { if (inputStream != null) inputStream.close(); } catch (Exception e) {}
    }
    return tempFile;
}
Is this perfectly safe for what my goal is? I reviewed the documentation at the below URL but I'm not sure.
See java.io.File#createTempFile
The answer posted at the below URL answers my question. The method I posted is safe in a multithreaded single JVM process environment. To make it safe in a multithreaded multi-JVM process environment (e.g. a clustered web app) you can use Chris Cooper's idea which involves passing a unique value in the prefix argument for the File.createTempFile method within each JVM process.
Is createTempFile thread-safe?
Just use the thread name and current time in millis to name the file.
You can supply a different prefix or suffix to the temporary files for this exact reason.
Assign a unique ID to each process starting up, and use that unique id as the prefix or suffix, multiple threads in the same VM will not clash, and now VMs will not clash either.
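For instance (the prefix scheme here is just one possible choice):
// Build a prefix that is unique per JVM so two processes cannot collide on the same name.
// RuntimeMXBean.getName() is typically "pid@hostname"; on Java 9+ you could use
// ProcessHandle.current().pid() instead.
String prefix = "app-" + java.lang.management.ManagementFactory.getRuntimeMXBean().getName() + "-";
File tempFile = File.createTempFile(prefix, ".tmp");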

Check if a file is locked in Java

My Java program wants to read a file which can be locked by another program writing into it. I need to check if the file is locked and if so wait until it is free. How do I achieve this?
The Java program is running on a Windows 2000 server.
Should work in Windows:
File file = new File("file.txt");
boolean fileIsNotLocked = file.renameTo(file);
Under Windows with Sun's JVM, the FileLocks should work properly, although the JavaDocs leave the reliability rather vague (system dependent).
Nevertheless, if you only have to recognize in your Java program that some other program is locking the file, you don't have to struggle with FileLocks; you can simply try to write to the file, which will fail if it is locked. You'd better try this on your actual system, but I see the following behaviour:
File f = new File("some-locked-file.txt");
System.out.println(f.canWrite()); // -> true
new FileOutputStream(f); // -> throws a FileNotFoundException
This is rather odd, but if you don't weigh platform independence too heavily and your system shows the same behaviour, you can put this together in a utility function.
With current Java versions, there is unfortunately no way to be informed about file state changes, so if you need to wait until the file can be written, you have to try every now and then to check if the other process has released its lock. I'm not sure, but with Java 7, it might be possible to use the new WatchService to be informed about such changes.
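A sketch of that WatchService idea (Java 7+; the directory and file name are placeholders):
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

class WaitForChange {
    // Blocks until something in dir is modified, then reports whether it was the file we care about.
    static boolean fileWasTouched(Path dir, String fileName) throws IOException, InterruptedException {
        try (WatchService watcher = dir.getFileSystem().newWatchService()) {
            dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
            WatchKey key = watcher.take(); // blocks until a change happens in dir
            for (WatchEvent<?> event : key.pollEvents()) {
                if (fileName.equals(event.context().toString())) {
                    return true; // worth retrying the write now
                }
            }
            key.reset();
            return false;
        }
    }
}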
Use a FileLock in all the Java applications using that file and have them run within the same JVM. Otherwise, this can't be done reliably.
If multiple processes (which can be a mix of Java and non-Java) might be using the file, use a FileLock. A key to using file locks successfully is to remember that they are only "advisory". The lock is guaranteed to be visible if you check for it, but it won't stop you from doing things to the file if you forget. All processes that access the file should be designed to use the locking protocol.
You can try to get an exclusive lock on the file. As long as the exclusive lock cannot be obtained, another program has a lock (exclusive or shared) on the file.
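One way to express that check (the helper name is mine, and the usual platform-dependence caveats for FileLock apply):
// Returns true if an exclusive lock could be taken (and immediately released) right now.
static boolean canLockExclusively(java.io.File file) {
    try (java.io.RandomAccessFile raf = new java.io.RandomAccessFile(file, "rw");
         java.nio.channels.FileLock lock = raf.getChannel().tryLock()) {
        return lock != null; // null means another process already holds a lock
    } catch (java.nio.channels.OverlappingFileLockException e) {
        return false; // locked elsewhere in this JVM
    } catch (java.io.IOException e) {
        return false; // e.g. the OS refused access
    }
}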
The best way is to use FileLock, but in my case (JDK 1.6) the following worked:
public static boolean isFileUnlocked(File file) {
    try {
        FileInputStream in = new FileInputStream(file);
        if (in != null) in.close();
        return true;
    } catch (FileNotFoundException e) {
        return false;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return true;
}
I tried a combination of answers (@vlad's) on Windows with access to a Linux SMB share, and it worked for me. The first part was enough for locks like Excel's, but not for some editors. I added the second part (rename) to test both situations.
public static boolean testLockFile(File p_fi) {
boolean bLocked = false;
try (RandomAccessFile fis = new RandomAccessFile(p_fi, "rw")) {
FileLock lck = fis.getChannel().lock();
lck.release();
} catch (Exception ex) {
bLocked = true;
}
if (bLocked)
return bLocked;
// try further with rename
String parent = p_fi.getParent();
String rnd = UUID.randomUUID().toString();
File newName = new File(parent + "/" + rnd);
if (p_fi.renameTo(newName)) {
newName.renameTo(p_fi);
} else
bLocked = true;
return bLocked;
}
For Windows, you can also use:
new RandomAccessFile(file, "rw")
If the file is exclusively locked (by MS Word, for example), there will be an exception:
java.io.FileNotFoundException: <fileName> (The process cannot access the file because it is being used by another process).
This way you do not need to open/close streams just for the check.
Note - if the file is not exclusively locked (say opened in Notepad++) there will be no exception.
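A small helper along those lines (the method name is mine; as noted, the behaviour is Windows-specific):
// Returns true if opening the file in "rw" mode fails although the file exists,
// i.e. another process holds an exclusive lock on it (observed on Windows).
static boolean isExclusivelyLocked(java.io.File file) {
    try {
        new java.io.RandomAccessFile(file, "rw").close();
        return false;
    } catch (java.io.FileNotFoundException e) {
        return file.exists();
    } catch (java.io.IOException e) {
        return false;
    }
}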
Improving on Amjad Abdul-Ghani's answer: I found that no error was produced until attempting to read from the file.
public static boolean isFilelocked(File file) {
try {
try (FileInputStream in = new FileInputStream(file)) {
in.read();
return false;
}
} catch (FileNotFoundException e) {
return file.exists();
} catch (IOException ioe) {
return true;
}
}
Tested on Windows only:
You can check whether the file is locked as follows (an enhancement of venergiac's answer).
If file.exists() is true but opening the file throws a FileNotFoundException, the file is locked;
you will notice the message "The process cannot access the file because it is being used by another process".
public static boolean isFilelocked(File file) {
    try {
        FileInputStream in = new FileInputStream(file);
        in.close();
        return false;
    } catch (FileNotFoundException e) {
        if (file.exists()) {
            return true;
        }
        return false;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return false;
}

file.delete() returns false even though file.exists(), file.canRead(), file.canWrite(), file.canExecute() all return true

I'm trying to delete a file, after writing something in it, with FileOutputStream. This is the code I use for writing:
private void writeContent(File file, String fileContent) {
FileOutputStream to;
try {
to = new FileOutputStream(file);
to.write(fileContent.getBytes());
to.flush();
to.close();
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
As you can see, I flush and close the stream, but when I try to delete, file.delete() returns false.
I checked before deletion to see if the file exists, and file.exists(), file.canRead(), file.canWrite(), file.canExecute() all return true. Just after calling these methods I try file.delete() and it returns false.
Is there anything I've done wrong?
Another bug in Java. I seldom find them; this is only my second in my 10-year career. This is my solution, as others have mentioned. I had never used System.gc(), but here, in my case, it is absolutely crucial. Weird? YES!
finally
{
try
{
in.close();
in = null;
out.flush();
out.close();
out = null;
System.gc();
}
catch (IOException e)
{
logger.error(e.getMessage());
e.printStackTrace();
}
}
The trick that worked was pretty odd. The thing is, when I previously read the content of the file I used a BufferedReader, and after reading I closed the buffer.
Meanwhile I switched, and now I'm reading the content using a FileInputStream. Also, after finishing reading I close the stream. And now it's working.
The problem is I don't have an explanation for this.
I didn't know BufferedReader and FileOutputStream to be incompatible.
I tried this simple thing and it seems to be working.
file.setWritable(true);
file.delete();
It works for me.
If this does not work, try running your Java application with sudo on Linux, or as administrator on Windows, just to make sure Java has the rights to change the file properties.
Before trying to delete/rename any file, you must ensure that all the readers or writers (for ex: BufferedReader/InputStreamReader/BufferedWriter) are properly closed.
When you try to read/write your data from/to a file, the file is held by the process and not released until the program execution completes. If you want to perform the delete/rename operations before the program ends, then you must use the close() method that comes with the java.io.* classes.
As Jon Skeet commented, you should close your file in the finally {...} block, to ensure that it's always closed. And, instead of swallowing the exceptions with the e.printStackTrace, simply don't catch and add the exception to the method signature. If you can't for any reason, at least do this:
catch(IOException ex) {
throw new RuntimeException("Error processing file XYZ", ex);
}
Now, question number #2:
What if you do this:
...
to.close();
System.out.println("Please delete the file and press <enter> afterwards!");
System.in.read();
...
Would you be able to delete the file?
Also, files are flushed when they're closed. I use IOUtils.closeQuietly(...), so I use the flush method to ensure that the contents of the file are there before I try to close it (IOUtils.closeQuietly doesn't throw exceptions). Something like this:
...
try {
...
to.flush();
} catch(IOException ex) {
throw new CannotProcessFileException("whatever", ex);
} finally {
IOUtils.closeQuietly(to);
}
So I know that the contents of the file are in there. Since what usually matters to me is that the contents get written, not whether the file could be closed, it really doesn't matter to me if the file was closed or not. In your case, as it does matter, I would recommend closing the file yourself and treating any exceptions accordingly.
There is no reason you should not be able to delete this file. I would look to see who has a hold on it. On Unix/Linux, you can use the lsof utility to check which process has a lock on the file; on Windows, you can use Process Explorer.
For lsof, it's as simple as:
lsof /path/and/name/of/the/file
For Process Explorer, you can use the Find menu and enter the file name; it will show you the handle, which points you to the process locking the file.
Here is some code that does what I think you need to do:
FileOutputStream to;
try {
String file = "/tmp/will_delete.txt";
to = new FileOutputStream(file );
to.write(new String("blah blah").getBytes());
to.flush();
to.close();
File f = new File(file);
System.out.print(f.delete());
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
It works fine on OS X. I haven't tested it on Windows, but I suspect it should work there too. I will also admit to seeing some unexpected behavior on Windows w.r.t. file handling.
If you are working in the Eclipse IDE, it could mean that you didn't close the file in a previous launch of the application. When I got the same error message while trying to delete a file, that was the reason. It seems the Eclipse IDE doesn't close all files after an application terminates.
Hopefully this will help. I came across a similar problem where I couldn't delete my file after my Java code made a copy of its content to another folder. After extensive googling, I explicitly declared every file-operation-related variable, called the close() method of each file-operation object, and set them to null. Then there is a method called System.gc(), which will clear up the file I/O mapping (I'm not sure; I'm just repeating what the web sites say).
Here is my example code:
public void start() {
File f = new File(this.archivePath + "\\" + this.currentFile.getName());
this.Copy(this.currentFile, f);
if(!this.currentFile.canWrite()){
System.out.println("Write protected file " +
this.currentFile.getAbsolutePath());
return;
}
boolean ok = this.currentFile.delete();
if(ok == false){
System.out.println("Failed to remove " + this.currentFile.getAbsolutePath());
return;
}
}
private void Copy(File source, File dest) {
FileInputStream fin;
FileOutputStream fout;
FileChannel cin = null, cout = null;
try {
fin = new FileInputStream(source);
cin = fin.getChannel();
fout = new FileOutputStream(dest);
cout = fout.getChannel();
long size = cin.size();
MappedByteBuffer buf = cin.map(FileChannel.MapMode.READ_ONLY, 0, size);
cout.write(buf);
buf.clear();
buf = null;
cin.close();
cin = null;
fin.close();
fin = null;
cout.close();
cout = null;
fout.close();
fout = null;
System.gc();
} catch (Exception e){
this.message = e.getMessage();
e.printStackTrace();
}
}
The answer is that when you load the file, you need to call the close() method somewhere in your code. That worked for me.
There was a problem once in Ruby where files on Windows needed an "fsync" to actually be able to turn around and re-read the file after writing it and closing it. Maybe this is a similar manifestation (and if so, I think a Windows bug, really).
None of the solutions listed here worked in my situation. My solution was to use a while loop, attempting to delete the file, with a 5 second (configurable) limit for safety.
File f = new File("/path/to/file");
int limit = 20; // Only try for 5 seconds, for safety
while (!f.delete() && limit > 0) {
    synchronized (this) {
        try {
            this.wait(250); // Wait for 250 milliseconds
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    limit--;
}
Using the above loop worked without having to do any manual garbage collecting or setting the stream to null, etc.
The problem could be that the file is still seen as open and locked by another program; or maybe it was opened by a component of your own program, in which case you have to call that component's dispose() method to solve the problem.
e.g. JFrame frame;
....
frame.dispose();
You have to close all of the streams, or use a try-with-resources block:
static public String head(File file) throws FileNotFoundException, UnsupportedEncodingException, IOException
{
final String readLine;
try (FileInputStream fis = new FileInputStream(file);
InputStreamReader isr = new InputStreamReader(fis, "UTF-8");
LineNumberReader lnr = new LineNumberReader(isr))
{
readLine = lnr.readLine();
}
return readLine;
}
If file.delete() returns false then, in most cases, your BufferedReader handle has not been closed. Just close it; that seems to work for me.
I had the same problem on Windows. I used to read the file in Scala line by line with
Source.fromFile(path).getLines()
Now I read it as a whole with
import org.apache.commons.io.FileUtils._
// encoding is null for platform default
val content=readFileToString(new File(path),null.asInstanceOf[String])
which closes the file properly after reading and now
new File(path).delete
works.
For Eclipse/NetBeans:
Restart your IDE and run your code again; this is the only trick that worked for me after an hour-long struggle.
Here is my code:
File file = new File("file-path");
if(file.exists()){
if(file.delete()){
System.out.println("Delete");
}
else{
System.out.println("not delete");
}
}
Output:
Delete
Another corner case where this can happen: if you read/write a JAR file through a URL and later try to delete the same file within the same JVM session.
File f = new File("/tmp/foo.jar");
URL j = f.toURI().toURL();
URL u = new URL("jar:" + j + "!/META-INF/MANIFEST.MF");
URLConnection c = u.openConnection();
// open a Jar entry in auto-closing manner
try (InputStream i = c.getInputStream()) {
// just read some stuff; for demonstration purposes only
byte[] first16 = new byte[16];
i.read(first16);
System.out.println(new String(first16));
}
// ...
// i is now closed, so we should be good to delete the jar; but...
System.out.println(f.delete()); // says false!
The reason is that Java's internal JAR file handling logic tends to cache JarFile instances:
// inner class of `JarURLConnection` that wraps the actual stream returned by `getInputStream()`
class JarURLInputStream extends FilterInputStream {
JarURLInputStream(InputStream var2) {
super(var2);
}
public void close() throws IOException {
try {
super.close();
} finally {
// if `getUseCaches()` is set, `jarFile` won't get closed!
if (!JarURLConnection.this.getUseCaches()) {
JarURLConnection.this.jarFile.close();
}
}
}
}
And each JarFile (rather, the underlying ZipFile structure) would hold a handle to the file, right from the time of construction up until close() is invoked:
public ZipFile(File file, int mode, Charset charset) throws IOException {
// ...
jzfile = open(name, mode, file.lastModified(), usemmap);
// ...
}
// ...
private static native long open(String name, int mode, long lastModified,
boolean usemmap) throws IOException;
There's a good explanation on this NetBeans issue.
Apparently there are two ways to "fix" this:
You can disable the JAR file caching - for the current URLConnection, or for all future URLConnections (globally) in the current JVM session:
URL u = new URL("jar:" + j + "!/META-INF/MANIFEST.MF");
URLConnection c = u.openConnection();
// for only c
c.setUseCaches(false);
// globally; for some reason this method is not static,
// so we still need to access it through a URLConnection instance :(
c.setDefaultUseCaches(false);
[HACK WARNING!] You can manually purge the JarFile from the cache when you are done with it. The cache manager sun.net.www.protocol.jar.JarFileFactory is package-private, but some reflection magic can get the job done for you:
class JarBridge {
static void closeJar(URL url) throws Exception {
// JarFileFactory jarFactory = JarFileFactory.getInstance();
Class<?> jarFactoryClazz = Class.forName("sun.net.www.protocol.jar.JarFileFactory");
Method getInstance = jarFactoryClazz.getMethod("getInstance");
getInstance.setAccessible(true);
Object jarFactory = getInstance.invoke(jarFactoryClazz);
// JarFile jarFile = jarFactory.get(url);
Method get = jarFactoryClazz.getMethod("get", URL.class);
get.setAccessible(true);
Object jarFile = get.invoke(jarFactory, url);
// jarFactory.close(jarFile);
Method close = jarFactoryClazz.getMethod("close", JarFile.class);
close.setAccessible(true);
//noinspection JavaReflectionInvocation
close.invoke(jarFactory, jarFile);
// jarFile.close();
((JarFile) jarFile).close();
}
}
// and in your code:
// i is now closed, so we should be good to delete the jar
JarBridge.closeJar(j);
System.out.println(f.delete()); // says true, phew.
Please note: all this is based on the Java 8 codebase (1.8.0_144); it may not work with other/later versions.

How can I lock a file using java (if possible)

