Writing to $HOME from a jar file - java

I'm trying to write to a file located in my $HOME directory. The code to write to that file has been packaged into a jar file. When I run the unit tests to package the jar file, everything works as expected - namely the file is populated and can be read from again.
When I try to run this code from another application where the jar file is contained in the lib directory, it fails. The file is created, but it is never written to. When the app goes to read the file, parsing fails because the file is empty.
Here is the code that writes to the file:
  logger.warn("TestNet wallet does not exist, creating one now in the directory: " + walletPath)
  testNetFileName.createNewFile()
  logger.warn("Wallet file name: " + testNetFileName.getAbsolutePath)
  logger.warn("Can write: " + testNetFileName.canWrite())
  logger.warn("Can read: " + testNetFileName.canRead)
  val w = Wallet.fromWatchingKey(TestNet3Params.get(), testNetSeed)
  w.autosaveToFile(testNetFileName, savingInterval, TimeUnit.MILLISECONDS, null)
  w
}
Here is the relevant log from the above method:
2015-12-30 15:11:46,416 - [WARN] - from class com.suredbits.core.wallet.ColdStorageWallet$ in play-akka.actor.default-dispatcher-9
TestNet wallet exists, reading in the one from disk
2015-12-30 15:11:46,416 - [WARN] - from class com.suredbits.core.wallet.ColdStorageWallet$ in play-akka.actor.default-dispatcher-9
Wallet file name: /home/chris/testnet-cold-storage.wallet
then it bombs.
Here is the definition of autosaveToFile:
public WalletFiles autosaveToFile(File f, long delayTime, TimeUnit timeUnit,
                                  @Nullable WalletFiles.Listener eventListener) {
    lock.lock();
    try {
        checkState(vFileManager == null, "Already auto saving this wallet.");
        WalletFiles manager = new WalletFiles(this, f, delayTime, timeUnit);
        if (eventListener != null)
            manager.setListener(eventListener);
        vFileManager = manager;
        return manager;
    } finally {
        lock.unlock();
    }
}
and the definition of the WalletFiles constructor:
https://github.com/bitcoinj/bitcoinj/blob/master/core/src/main/java/org/bitcoinj/wallet/WalletFiles.java#L68
public WalletFiles(final Wallet wallet, File file, long delay, TimeUnit delayTimeUnit) {
    // An executor that starts up threads when needed and shuts them down later.
    this.executor = new ScheduledThreadPoolExecutor(1, new ContextPropagatingThreadFactory("Wallet autosave thread", Thread.MIN_PRIORITY));
    this.executor.setKeepAliveTime(5, TimeUnit.SECONDS);
    this.executor.allowCoreThreadTimeOut(true);
    this.executor.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
    this.wallet = checkNotNull(wallet);
    // File must only be accessed from the auto-save executor from now on, to avoid simultaneous access.
    this.file = checkNotNull(file);
    this.savePending = new AtomicBoolean();
    this.delay = delay;
    this.delayTimeUnit = checkNotNull(delayTimeUnit);
    this.saver = new Callable<Void>() {
        @Override public Void call() throws Exception {
            // Runs in an auto save thread.
            if (!savePending.getAndSet(false)) {
                // Some other scheduled request already beat us to it.
                return null;
            }
            log.info("Background saving wallet, last seen block is {}/{}", wallet.getLastBlockSeenHeight(), wallet.getLastBlockSeenHash());
            saveNowInternal();
            return null;
        }
    };
}
I'm guessing it is some sort of permissions issue but I cannot seem to figure this out.
EDIT: This is all being run on the exact same Ubuntu 14.04 machine - no added complexity of different operating systems.

You cannot generally depend on the existence or writability of $HOME. There are really only two portable ways to identify (i.e. provide a path to) an external file.
Provide an explicit path using a property set on the invocation command line or provided in the environment, or
Provide the path in a configuration properties file whose location is itself provided as a property on the command line or in the environment.
The problem with using $HOME is that you cannot know what user ID the application is running under. The user may not even have a home directory, and even if one exists, it may or may not be writable. In your specific case, your process may have the ability to create a file (write access on the directory itself) while write access to the file itself is restricted by the umask, ACLs (on Windows), or SELinux (on Linux).
Put another way, the installer/user of the library must explicitly provide a known writable path for your application to use.
Yet another way to think about it is that you are writing library code that may be used in completely unknown environments. You cannot assume ANYTHING about the external environment except what is in the explicit contract between you and the user. You can declare in your interface specification that $HOME must be writable, but that may be highly inconvenient for some users whose environment doesn't have $HOME writable.
A much better and portable solution is to say
specify -Dcom.xyz.workdir=[path] on the command line to indicate the work path to be used
or
The xyz library will look for its work directory in the path specified by the XYZ_WORK environment variable
Ideally, you do BOTH of these to give the user some flexibility.
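As a minimal sketch of that lookup order (the property and environment variable names here are the illustrative ones from above, not from any real library), the resolution might look like:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class WorkDirResolver {
    /**
     * Resolve the work directory: a -D system property wins,
     * then an environment variable; fail loudly if neither is set.
     */
    public static Path resolveWorkDir() {
        String dir = System.getProperty("com.xyz.workdir"); // -Dcom.xyz.workdir=/path
        if (dir == null) {
            dir = System.getenv("XYZ_WORK");                // export XYZ_WORK=/path
        }
        if (dir == null) {
            throw new IllegalStateException(
                "No work directory configured: set -Dcom.xyz.workdir or XYZ_WORK");
        }
        return Paths.get(dir);
    }

    public static void main(String[] args) {
        System.setProperty("com.xyz.workdir", "/tmp/xyz");
        System.out.println(resolveWorkDir()); // prints /tmp/xyz
    }
}
```

Failing fast when neither source is set keeps the "explicit contract" honest: the caller must provide a writable path, and the library never guesses.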

savePending is always false. At the beginning of call you check that it is false and return null, so the actual save code is never executed. I am guessing you meant to check whether it was true there, and also to set it to true, not false. You then also need to reset it back to false at the end.
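A minimal sketch of the intended flag handshake (class and method names are illustrative; this is not the actual bitcoinj source): a writer marks a save as pending, and the scheduled task only performs the save when the flag was set, clearing it atomically so repeated runs coalesce into one save.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class AutosaveFlagDemo {
    private final AtomicBoolean savePending = new AtomicBoolean(false);
    private int saves = 0;

    // Called whenever the wallet changes: mark a save as pending.
    void saveLater() {
        savePending.set(true);
    }

    // Runs on the autosave thread: save only if a request is pending.
    void saverTask() {
        if (!savePending.getAndSet(false)) {
            return; // nothing pending; an earlier run already saved
        }
        saves++; // stand-in for the real saveNowInternal()
    }

    int getSaves() { return saves; }

    public static void main(String[] args) {
        AutosaveFlagDemo d = new AutosaveFlagDemo();
        d.saveLater();
        d.saveLater();
        d.saverTask(); // one save for both requests
        d.saverTask(); // no-op, flag already cleared
        System.out.println(d.getSaves()); // prints 1
    }
}
```

If nothing ever calls the saveLater() side, the getAndSet(false) check always sees false and the save body is dead code, which matches the symptom of an empty file.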
Now, why this works in your unit test is a different story. The test must be executing different code.

How to test methods that download big files

Usually I have these methods responsible for downloading large file/s to a local directory.
It annoys me because I don't really know how to test that properly?
Should I run a test case that downloads these files to a temp directory using a @Rule in JUnit? Or maybe ask it to download files to the same local directory used in production?
Importantly, such a method takes a long time to download a 1GB+ file, so is it bad that a test case would take a long time?
What would be an ideal test case for this?
public File downloadFile(File dir, URL url) {
    // url points to a file of 1GB or more
    // method takes a long time
    return file; // it's located in dir
}
My belief here is that you are approaching the problem in the wrong way altogether.
You want to test "if that method works". But this method is highly dependent on "side effects" which are, in this case, failures which can occur at any of these steps:
connection failure: the given URL cannot be accessed (for whatever reason);
network failure: connection is cut while the transfer was in progress;
file system failure: the specified file cannot be created/opened in write mode; or the write fails while you are downloading contents.
In short: it is impossible to test whether the method "works". And what is more, due to the amount of possible failures mentioned above, it means that your method should at least throw an exception so that the caller of the method can deal with it.
Finally, this is 2016; assuming you use Java 7 or later, here is how you can rewrite your method:
public Path downloadFile(final Path dir, final URL url)
    throws IOException
{
    final String filename = /* decide about the filename here */;
    final Path ret = dir.resolve(filename);
    try (
        final InputStream in = url.openStream();
    ) {
        Files.copy(in, ret);
    }
    return ret;
}
Now, in your tests, just mock the behavior of that method; make it fail, make it return a valid path, make the URL fail on .openStream()... You can simulate the behavior of that method whichever way you want so that you can test how the callers of this method behave.
But such a method is, in itself, so dependent on "side effects" that it cannot be tested reliably.
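The mocking does not even need a library: hide the download behind a small interface and hand callers a fake. A sketch (the Downloader interface and file names are illustrative, not from the original code):

```java
import java.io.IOException;
import java.net.URL;
import java.nio.file.Path;
import java.nio.file.Paths;

// Extract downloadFile behind an interface like this so the callers
// under test can be exercised against fakes instead of the network.
interface Downloader {
    Path downloadFile(Path dir, URL url) throws IOException;
}

public class DownloaderFakes {
    public static void main(String[] args) throws Exception {
        // happy path: pretend the download succeeded
        Downloader succeeding = (dir, url) -> dir.resolve("big.bin");
        // failure path: pretend the connection dropped mid-transfer
        Downloader failing = (dir, url) -> {
            throw new IOException("simulated connection failure");
        };

        Path dir = Paths.get("/tmp");
        URL url = new URL("http://example.com/big.bin");

        System.out.println(succeeding.downloadFile(dir, url)); // /tmp/big.bin
        try {
            failing.downloadFile(dir, url);
        } catch (IOException expected) {
            System.out.println("caller saw: " + expected.getMessage());
        }
    }
}
```

Each fake simulates one of the failure modes listed above, so the caller's error handling gets exercised in milliseconds rather than waiting on a 1GB transfer.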

Detecting whether an application was launched from a read-only file system on OS X

I want to know whether the user launched our Java-based application from a read-only file system such as a .dmg, so that functions like auto-update can show meaningful information instead of aborting with an error. I first thought checking the .app's path would be sufficient (when launched from a .dmg it is something like /Volumes/MyApp 1.2.3/MyApp.app), but this won't work, because the user might have installed the application on a different partition. What other things may I check?
You can use -[NSURL getResourceValue:forKey:error:] with the key NSURLVolumeIsReadOnlyKey. You would apply this to the app bundle URL as returned by [[NSBundle mainBundle] bundleURL]. So:
NSBundle* bundle = [NSBundle mainBundle];
NSURL* bundleURL = bundle.bundleURL;
NSNumber* readOnly;
NSError* error;
if ([bundleURL getResourceValue:&readOnly forKey:NSURLVolumeIsReadOnlyKey error:&error])
{
    BOOL isReadOnly = [readOnly boolValue];
    // act on isReadOnly value
}
else
{
    // Handle error
}
If OS X is POSIX-compliant, you can determine whether a filesystem is mounted read-only with statvfs() or fstatvfs(): in the returned struct statvfs, the f_flag field should have the ST_RDONLY bit set for a read-only filesystem.
As pointed out in the comments, check whether this information is correctly provided by the OS.
JNA and this question may be useful for Java.
A few more ideas which may be useful here: access(), open(), utime().
The OS X-specific statfs() may be used too, but this function is not portable (Linux and *BSD have slightly different statfs() functions).
You can also check directly from Java whether a certain path points to something within a read-only directory by querying the FileStore associated with your path:
File classpathRoot = new File(MyClass.class.getClassLoader().getResource("").getPath());
/* getPath() actually returns a String instead of a Path object,
* so we need to take this little detour */
Path yourAppPath = classpathRoot.toPath();
boolean isReadOnly = Files.getFileStore(yourAppPath).isReadOnly();

As of Java 7 update 45, one can no longer lookup manifest information without triggering a warning?

Unfortunately, the workaround given by Oracle and others here (Java applet manifest - Allow all Caller-Allowable-Codebase) for getting around the 7 update 45 problem does NOT work if your app needs to access its loaded signed jar manifests. In my case, our app does this so as to log the relevant manifest info.
With my web start app, everything worked fine and dandy with the "Trusted-Library" attribute that needed to be added for 7u21. With 7u45, removing the "Trusted-Library" attribute and adding in all the additional attributes talked about in other workarounds will NOT work -- I will get the same warning that you would get if you were running 7u21 without the Trusted-Library attribute (stating the application contains both signed and unsigned code):
I've tried just about every manifest/JNLP permutation possible -- nothing cuts the mustard.
What I've found is that, basically, when we load one of our applets' jar manifests (not the JRE jars), and call hasMoreElements, additional security checks occur which trigger the warning:
public List<String> getManifests() {
    ...
    Enumeration<URL> urls = getClass().getClassLoader().getResources("META-INF/MANIFEST.MF");
    while (urls.hasMoreElements()) {
        .... a bunch of loop stuff

        // at the end of the loop...
        System.out.println("Checkpoint SGMLOOP8.");
        System.out.println("Breaking....");
        //break; <<<<<<---- if the next jar is one of our signed jars, the next line will trigger the warning. If instead we choose to break, the app works perfectly with no warning.
        System.out.println("urls.hasMoreElements(): " + (urls.hasMoreElements() ? "true" : "false")); <<<<<<-------- will evaluate to false if user clicks Block on the warning, otherwise will evaluate to true when our signed jars are next
        System.out.println("Checkpoint SGMLOOP9.");
    }
    ...
}
This is what prints out in the Java console at maximum tracing:
Checkpoint SGMLOOP8.
Breaking.... <<<<---- console output pauses here until user answers warning
security: resource name "META-INF/MANIFEST.MF" in http://<path_to_jar> : java.lang.SecurityException: trusted loader attempted to load sandboxed resource from http://<path_to_jar>
(message repeats for all our signed jars)
urls.hasMoreElements(): false <<<<---- false because user clicked blocked, otherwise true when user clicks don't block
Checkpoint SGMLOOP9.
It took me FOREVER to figure this out, because when a signed manifest passes security checks earlier in the startup process and is then complained about when accessed later, I don't naturally think it's complaining about the manifest itself, but rather about the resources referenced by the manifest. Go figure!
Looking into the Java source code, I can see why the warning could possibly happen (hasMoreElements leads to more security checks):
// getResources is called in my code above
java.lang.ClassLoader
public Enumeration<URL> getResources(String name) throws IOException {
    Enumeration[] tmp = new Enumeration[2];
    if (parent != null) {
        tmp[0] = parent.getResources(name);
    } else {
        tmp[0] = getBootstrapResources(name);
    }
    tmp[1] = findResources(name); <<<<------ This returns a new Enumeration<URL> object which has its own "hasMoreElements()" method overridden – see below code
    return new CompoundEnumeration<>(tmp);
}
java.net.URLClassLoader
public Enumeration<URL> findResources(final String name)
    throws IOException
{
    final Enumeration<URL> e = ucp.findResources(name, true);
    return new Enumeration<URL>() {
        private URL url = null;

        private boolean next() {
            if (url != null) {
                return true;
            }
            do {
                URL u = AccessController.doPrivileged( <<-- Security context could block this
                    new PrivilegedAction<URL>() {
                        public URL run() {
                            if (!e.hasMoreElements())
                                return null;
                            return e.nextElement();
                        }
                    }, acc);
                if (u == null)
                    break;
                url = ucp.checkURL(u); <<-- Security checks done in here
            } while (url == null);
            return url != null;
        }

        public URL nextElement() {
            if (!next()) {
                throw new NoSuchElementException();
            }
            URL u = url;
            url = null;
            return u;
        }

        public boolean hasMoreElements() {
            return next();
        }
    };
}
Yes, the manifests are properly signed! Yes, the manifests do have the appropriate attributes! In fact, this is proven by the fact that the jars load just fine and execute, as long as we don't try to directly access their manifests! Just to assuage your fears though, here are the relevant manifest attributes (I've tried MANY additions/subtractions of the below attributes):
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.0
Created-By: 24.45-b08 (Oracle Corporation)
Application-Name: AppName
Codebase: *
Permissions: all-permissions
Application-Library-Allowable-Codebase: *
Caller-Allowable-Codebase: *
Trusted-Only: false
Class-Path: jar1.jar jar2.jar jar3.jar
Specification-Title: AppName
Specification-Version: 1.0
Specification-Vendor: CompanyName
Implementation-Title: AppName
Implementation-Version: 1.0
Implementation-Vendor: CompanyName
The question is: Should the warning be happening when we try to access the manifests? As it stands, we either have to choose to force users see the warning every time, or we have to remove our logging of our signed jar manifest info. Seems like a bad choice, especially since this manifest info is very useful for debugging issues as it is really the only way to verify an end-user is running the correct version of the app (short of direct physical inspection on-site). This is especially true of our applet since the jars are allowed to be cached on client systems (along with the corresponding JavaScript to access the applet), meaning they could very easily be running the wrong jars after upgrades/downgrades, etc. Not having this info in our logs could lead to large headaches in the future.
Any ideas? This is especially frustrating since Oracle intends to fix the Trusted-Library issue anyway, so putting all this investigative work into this may be doing nothing more than wasting my weekend. Grrr....
EDIT: One observation I had was that first jar that ran into the security exception actually had a dependency on a different jar in my app. I thought, "maybe the dependent jar's manifest should be read in first?" So I forced the load-order so that non-dependent jars would be loaded first. End result? I could see the non-dependent jar now threw the security exception first... and there is still a warning.
I'm getting this problem with applets, and found that I get the warning on hasMoreElements on my own applet .jar files (not the Java system .jars that are returned the first few times through the loop), and only when applet .jar file caching is enabled in the Java Control Panel.
I can't get all my customers to disable .jar file caching, but those that do are happy for the moment, as the mixed code warning does not appear for them. It's not an answer, it's a workaround at best.
Wanted to update this question to say that as of Java 7 Update 55, this issue has been rendered moot since it is possible again to put in both "Trusted-Library" and "Caller-Allowable-Codebase" manifest attributes simultaneously. With both of these attributes in the manifest, the warning will not be triggered.

Any sure fire way to check file existence on Linux NFS? [duplicate]

This question already has answers here:
Alternative to File.exists() in Java
(6 answers)
Closed 2 years ago.
I am working on a Java program that requires to check the existence of files.
Well, simple enough: the code makes calls to File.exists() to check file existence. The problem I have is that it reports false positives; that is, the file does not actually exist but the exists() method returns true. No exception was captured (at least no exception like "Stale NFS handle"). The program even managed to read the file through an InputStream, getting 0 bytes as expected, and yet no exception. The target directory is a Linux NFS mount, and I am 100% sure that the file being looked for never exists.
I know there are known bugs (a kind of API limitation) for java.io.File.exists(). So I then added another workaround: checking file existence using the Linux command ls. Instead of calling File.exists(), the Java code now runs a Linux command to ls the target file. If the exit code is 0, the file exists; otherwise, it does not.
The number of times the issue is hit seems to be reduced by this trick, but it still pops up. Again, no error was captured anywhere (stdout this time). That means the problem is so serious that even a native Linux command won't fix it 100% of the time.
So there are couple of questions around:
I believe Java's well-known issue with File.exists() is about reporting false negatives, where a file was reported to not exist but in fact does exist. Since the API does not throw IOException for File.exists(), it chooses to swallow the exception in case calls to the OS's underlying native functions fail, e.g. on an NFS timeout. But this does not explain the false-positive case I am having, given that the file never exists. Any thoughts on this one?
My understanding on Linux ls exit code is, 0 means okay, equivalent to file exists. Is this understanding wrong? The man page of ls is not so clear on explaining the meaning of exit code: Exit status is 0 if OK, 1 if minor problems, 2 if serious trouble.
All right, back to the subject. Any surefire way to check file existence with Java on Linux, before we see JDK7 with NIO2 officially released?
Here is a JUnit test that shows the problem and some Java Code that actually tries to read the file.
The problem happens e.g. when using Samba on OS X Mavericks. A possible reason is explained by the statement in:
http://appleinsider.com/articles/13/06/11/apple-shifts-from-afp-file-sharing-to-smb2-in-os-x-109-mavericks
It aggressively caches file and folder properties and uses opportunistic locking to enable better caching of data.
Please find below a checkExists method that will actually attempt to read a few bytes, forcing a true file access to avoid the caching misbehaviour ...
JUnit test:
/**
 * test file exists function on Network drive replace the testfile name and ssh computer
 * with your actual environment
 * @throws Exception
 */
@Test
public void testFileExistsOnNetworkDrive() throws Exception {
    String testFileName = "/Volumes/bitplan/tmp/testFileExists.txt";
    File testFile = new File(testFileName);
    testFile.delete();
    for (int i = 0; i < 10; i++) {
        Thread.sleep(50);
        System.out.println("" + i + ":" + OCRJob.checkExists(testFile));
        switch (i) {
            case 3:
                // FileUtils.writeStringToFile(testFile, "here we go");
                Runtime.getRuntime().exec("/usr/bin/ssh phobos /usr/bin/touch " + testFileName);
                break;
        }
    }
}
checkExists source code:
/**
 * check if the given file exists
 * @param f
 * @return true if file exists
 */
public static boolean checkExists(File f) {
    // try-with-resources ensures the stream is closed even if read() throws
    try (InputStream is = new FileInputStream(f)) {
        byte[] buffer = new byte[4];
        if (is.read(buffer) != buffer.length) {
            // do something
        }
        return true;
    } catch (java.io.IOException fnfe) {
    }
    return false;
}
JDK7 was released a few months ago. There are exists and notExists methods in the Files class but they return a boolean rather than throwing an exception. If you really want an exception then use FileSystems.getDefault().provider().checkAccess(path) and it will throw an exception if the file does not exist.
If you need to be robust, try to read the file - and fail gracefully if the file is not there (or there is a permission or other problem). This applies to any other language than Java as well.
The only safe way to tell whether the file exists and you can read from it is to actually read data from the file, regardless of the file system (local or remote). The reason is a race condition which can occur right after you get success from checkAccess(path): you check, then open the file, and find it suddenly does not exist. Some other thread (or another remote client) may have removed it, or may have acquired an exclusive lock. So don't bother checking access; rather, try to read the file. Spending time running ls just widens the race-condition window.

JUnit tests fail when creating new Files

We have several JUnit tests that rely on creating new files and reading them. However there are issues with the files not being created properly. But this fault comes and goes.
This is the code:
@Test
public void test_3() throws Exception {
    // Deletes files in tmp test dir
    File tempDir = new File(TEST_ROOT, "tmp.dir");
    if (tempDir.exists()) {
        for (File f : tempDir.listFiles()) {
            f.delete();
        }
    } else {
        tempDir.mkdir();
    }

    File file_1 = new File(tempDir, "file1");
    FileWriter out_1 = new FileWriter(file_1);
    out_1.append("# File 1");
    out_1.close();

    File file_2 = new File(tempDir, "file2");
    FileWriter out_2 = new FileWriter(file_2);
    out_2.append("# File 2");
    out_2.close();

    File file_3 = new File(tempDir, "fileXXX");
    FileWriter out_3 = new FileWriter(file_3);
    out_3.append("# File 3");
    out_3.close();
....
The failure is that the second file object, file_2, sometimes never gets created. Then, when we try to write to it, a FileNotFoundException is thrown.
If we run only this testcase, everything works fine.
If we run this testfile with some ~40 testcases, it can both fail and work depending on the current lunar cycle.
If we run the entire testsuite, consisting of some 10*40 testcases, it always fails.
We have tried
adding sleeps (5 sec) after new File: nothing
adding a while loop until file_2.exists() is true, but the loop never stopped
catching SecurityException, IOException and even Throwable when we do the new File(..), but we caught nothing.
At one point we got all files to be created, but file_2 was created before file_1 and a test that checked creation time failed.
We've also tried adding file_1.createNewFile() and it always returns true.
So what is going on? How can we make tests that depend on actual files and always be sure they exist?
This has been tested on both Java 1.5 and 1.6, and on both Windows 7 and Linux. The only difference observed is that sometimes a similar test case before fails, and sometimes file_1 isn't created instead.
Update
We tried a new variation:
File file_2 = new File(tempDir, "file2");
while (!file_2.canRead()) {
    Thread.sleep(500);
    try {
        file_2.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This results in a lot of exceptions of the type:
java.io.IOException: Access is denied
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:883)
... but eventually it works, the file is created.
Are there multiple instances of your program running at once?
Check for any extra instances of javaw.exe running. If multiple programs have handles to the same file at once, things can get very wonky very quickly.
Do you have antivirus software or anything else running that could be getting in the way of file creation/deletion, by handle?
Don't hardcode your file names, use random names. It's the only way to abstract yourself from the various external situations that can occur (multiple access to the same file, permissions, file system error, locking problems, etc...).
One thing for sure: using sleep() or retrying is guaranteed to cause weird errors at some point in the future, avoid doing that.
I did some googling; this Lucene bug and this board question seem to indicate that there could be an issue with file locking and other processes using the file.
Since we are running this on ClearCase it seems plausible that ClearCase does some indexing or something similar when the files are being created. Adding loops that repeat until the file is readable solved the issue, so we are going with that. Very ugly solution though.
Try File#createTempFile; this at least guarantees that there are no other files by the same name that would still hold a lock.
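A minimal sketch of that approach (the prefix and content are illustrative): every call generates a unique name, so parallel tests, or leftovers from an earlier run, can never collide on a hardcoded filename.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class TempFileDemo {
    public static void main(String[] args) throws IOException {
        // Unique name per call: no collisions with other tests or stale files.
        File file = File.createTempFile("test-", ".tmp");
        file.deleteOnExit(); // clean up when the JVM terminates

        try (FileWriter out = new FileWriter(file)) {
            out.append("# File 1");
        }
        System.out.println(file.exists()); // the freshly created file is there
    }
}
```

The two-argument overload File.createTempFile(prefix, suffix) uses the system temp directory; a third File argument pins the file to a directory of your choosing if the test must control the location.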
