I just read the article Programming by Coincidence. At the end of the page there are exercises: a few code fragments that are cases of "programming by coincidence". But I can't figure out the error in this piece:
This code comes from a general-purpose Java tracing suite. The function writes a string to a log file. It passes its unit test, but fails when one of the Web developers uses it. What coincidence does it rely on?
public static void debug(String s) throws IOException {
    FileWriter fw = new FileWriter("debug.log", true);
    fw.write(s);
    fw.flush();
    fw.close();
}
What is wrong with this?
This code relies on the application's working directory containing (or allowing it to create) a writable debug.log file. Most likely the web developer's application runs with a different or non-writable working directory, so the file cannot be opened and the method fails when they try to use it.
A unit test of this code will work because the original developer had the right file in the right place (and with the right permissions). This is the coincidence that allowed the unit test to succeed.
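One way to remove the coincidence (my own sketch, not from the book; the debug.log.path property is made up) is to make the log location explicit and fail with a clear message when it is not usable:

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public final class DebugLog {

    // Hypothetical property; the exercise hard-codes "debug.log" in the working directory.
    private static final File LOG_FILE =
            new File(System.getProperty("debug.log.path", "debug.log"));

    public static void debug(String s) throws IOException {
        File dir = LOG_FILE.getAbsoluteFile().getParentFile();
        if (dir != null && !dir.canWrite()) {
            // Fail loudly instead of silently depending on the current working directory.
            throw new IOException("Log directory is not writable: " + dir);
        }
        FileWriter fw = new FileWriter(LOG_FILE, true);
        try {
            fw.write(s);
            fw.flush();
        } finally {
            fw.close();
        }
    }
}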
Interesting tidbit. Ideally, resources should be pulled from the classpath. However, there is no end to human stupidity. What would happen if the file were present in the test environment's classpath (say, Eclipse) but missing in production deployments?
Related
I have an integration test (ClientIT) which uses the logging output from a test helper class (ClientBasic) to determine whether the test passes or fails. I have redirected System.out/System.err inside the ClientBasic class to provide a link back to ClientIT using an OutputStream, as follows:
System.setOut(new PrintStream(ClientBasic.out));
System.setErr(new PrintStream(ClientBasic.err));
where,
static OutputStream out;
static OutputStream err;
In ClientIT I call ClientBasic as follows:
ClientBasic.process(clientArgs.getArguments(), out, err);
This works fine, exactly as I wanted, except that when I run it using the Maven Failsafe plugin as part of the package/verify goals I get the message:
[WARNING] Corrupted STDOUT by directly writing to native stream in
forked JVM 1. See FAQ web page and the dump file
/path-to-project/target/failsafe-reports/2019-07-17T13-33-44_769-jvmRun1.dumpstream
which has the effect of really corrupting the display output of any logging info when ClientBasic runs as part of my integration test, whether run locally from the command line or in Jenkins (using mvn ...). Strangely, it still works fine when I run it from inside the IDE - maybe the IDE is not calling the Failsafe plugin directly?
Anyway, the above effect is documented on the Maven site as Corrupted STDOUT.
So to try to get around this problem, what I would like to do is simply hook onto the System.out/err streams and create a copy - i.e. a bit like a splitter - rather than redirecting the System.out stream.
Does anyone know if this is possible and, if so, how to do it?
As the warning does not directly blame setOut/setErr, it should be possible to make one's own splitter extending PrintStream.
public class Splitter extends PrintStream {
    private final PrintStream sysPS;

    public Splitter(PrintStream sysPS, PrintStream myOut) { ... }

    public PrintStream getSysPS() { return sysPS; }

    // ... override the write/print methods (code generation in the IDE helps)
    //     so that every call is forwarded to both sysPS and myOut.
}

PrintStream myOut = ...;
Splitter outSplitter = new Splitter(System.out, myOut);
Splitter errSplitter = new Splitter(System.err, myOut);
System.setOut(outSplitter);
System.setErr(errSplitter);
...
// At the end, restore the original streams:
System.setOut(outSplitter.getSysPS());
System.setErr(errSplitter.getSysPS());
IDE code generation and a couple of smart regex replaces will give you a correct class.
Alternatively, such a splitter probably already exists in some logging library.
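For what it's worth, a minimal runnable sketch of this idea could look like the following (the class name TeePrintStream and the usage are only illustrative, not an existing library class):

import java.io.PrintStream;

// A PrintStream that forwards everything it receives to two other PrintStreams.
public class TeePrintStream extends PrintStream {

    private final PrintStream second;

    public TeePrintStream(PrintStream first, PrintStream second) {
        super(first, true); // 'first' receives everything via the superclass
        this.second = second;
    }

    @Override
    public void write(int b) {
        super.write(b);
        second.write(b);
    }

    @Override
    public void write(byte[] buf, int off, int len) {
        super.write(buf, off, len);
        second.write(buf, off, len);
    }

    @Override
    public void flush() {
        super.flush();
        second.flush();
    }
}

Usage would then be along these lines:

PrintStream original = System.out;
System.setOut(new TeePrintStream(original, new PrintStream(ClientBasic.out)));
// ... run the test ...
System.setOut(original); // restore the original stream at the end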
I have the following program, which adds a method to its own source file when run. But I have to refresh the file every time using the F5 button or the refresh option in Eclipse.
Is there a way I could code the refresh into the program itself so that it refreshes after the modification? The project I am working on is a Java application, not an Eclipse plug-in, so as far as I know the refreshLocal() method can't be used.
public class Demo {
    public static void main(String[] args) throws IOException, CoreException {
        File file = new File("/home/kishan/workspace/Roast/src/Demo.java");
        if (file.exists()) {
            JavaClassSource javaClass = Roaster.parse(JavaClassSource.class, file);
            javaClass.addMethod().setPublic().setStatic(true)
                    .setName("newMethod").setReturnTypeVoid()
                    .setBody("System.out.println(\"newMethod created\");")
                    .addParameter("String[]", "stringArray");
            FileWriter writer = new FileWriter(file);
            writer.write(javaClass.toString());
            writer.flush();
            writer.close();
        }
    }
}
I have tried using the refreshLocal() method defined in the Eclipse JDT, but since my project is a plain Java application, the ResourcesPlugin.getWorkspace() method does not work, giving me a "workspace closed" error. Any suggestion is appreciated.
You see, Eclipse runs your Java class within its own dedicated JVM, so there is no direct programmatic way of enforcing a refresh within Eclipse.
You could check this older question; maybe that leads to a reasonable workaround.
On the other hand, you might step back and ask yourself why exactly you want to achieve this. Your workflow simply doesn't make much sense at first glance; as in: when generating code that way, shouldn't the generated code go into its own specific place?
If you intend to "generate" code frequently and then continue to use it in Eclipse, well, that somehow smells like a strange idea.
Eclipse has "Refresh using native hooks or polling" which might might help.
You can find it under Window > Prefrences > General > Workspace.
See On Eclipse, what does "Preferences -> General -> Workspace -> Refresh using native hooks or polling" do?
Edit 2: After receiving a response from Mathworks support I've answered the question myself. In brief, there is an options class, MWComponentOptions, that is passed to the exported class when it is instantiated. This can, among other things, specify unique print streams for error output and regular output (i.e. from disp()-like functions). Thanks for all the responses nonetheless :)
====================================================================
Just a quick question - is there any way to prevent MATLAB code from outputting to the Java console with disp (and similar) functions once compiled? What is useful debugging information in MATLAB quickly becomes annoying extra text in the Java logs.
The compilation tool I'm using is MATLAB Compiler (which I think is not the same as MATLAB Builder JA, but I might be wrong). I can't find any good documentation on the mcc command, so I am not sure if there are any options for this.
Of course if this is impossible and a direct consequence of the compiler converting all MATLAB code to its Java equivalent then that's completely understandable.
Thanks in advance
Edit This will also be useful to handle error reporting on the Java side alone - currently all MATLAB errors are sent to the console regardless of whether they are caught or not.
The isdeployed function returns true if run in a deployed application (with e.g. MATLAB Compiler or Builder JA) and false when running in live MATLAB.
You can surround your disp statements with an if isdeployed block.
I heard back from a request to Mathworks support, and they provided the following solution:
When creating whatever class has been exported, you can specify an MWComponentOptions object. This is poorly documented in R2012b, but for what I wanted the following example would work:
MWComponentOptions options = new MWComponentOptions();
PrintStream o = new PrintStream(new File("MATLAB log.log"));
options.setPrintStream(o); // send all standard disp() output to a log file
// the following ignores all error output (this will be caught by Java exception handling anyway)
options.setErrorStream((java.io.PrintStream)null);
// instantiate and use the exported class
myClass obj = new myClass(options);
obj.myMatlabFunction();
// etc...
Update
In case anyone does want to suppress all output: casting null to java.io.PrintStream ended up causing a NullPointerException in deployment. A better way to suppress all output is to create a dummy print stream, something like:
PrintStream dummy = new PrintStream(new OutputStream() {
    public void close() {}
    public void flush() {}
    public void write(byte[] b) {}
    public void write(byte[] b, int off, int len) {}
    public void write(int b) {}
});
Then use
options.setErrorStream(dummy);
Hope this helps :)
Another possible hack if you have a stand-alone application and don't want to bother with classes at all:
Use evalc in a wrapper function and pass your original function to mcc when you compile:
function my_wrap()
evalc('my_orig_func(''input_var'')');
end
And compile like
mcc -m my_wrap my_orig_func <...>
Well, it is obviously yet another hack.
I am working on a Java program that needs to check the existence of files.
Well, simple enough, the code makes calls to File.exists() to check file existence. And the problem I have is that it reports false positives. That means the file does not actually exist, but the exists() method returns true. No exception was captured (at least no exception like "Stale NFS handle"). The program even managed to read the file through an InputStream, getting 0 bytes as expected, and yet no exception. The target directory is a Linux NFS mount. And I am 100% sure that the file being looked for never exists.
I know there are known bugs (kind of an API limitation) for java.io.File.exists(). So I then added another workaround: checking file existence using the Linux command ls. Instead of making a call to File.exists(), the Java code now runs a Linux command to ls the target file. If the exit code is 0, the file exists. Otherwise, it does not.
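A simplified sketch of such an ls-based check (not the exact code in question, names are illustrative) could look like:

import java.io.IOException;

public class LsCheck {
    // Returns true if "ls <path>" exits with status 0, i.e. the file is listable.
    static boolean existsViaLs(String path) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("ls", path).start();
        return p.waitFor() == 0; // output is tiny, so the pipe buffer will not block ls
    }

    public static void main(String[] args) throws Exception {
        System.out.println(existsViaLs("/mnt/nfs/share/somefile.txt")); // placeholder path
    }
}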
The number of times the issue is hit seems to be reduced with the introduction of this trick, but it still pops up. Again, no error was captured anywhere (stdout this time). That means the problem is so serious that even a native Linux command won't fix it 100% of the time.
So there are a couple of questions here:
I believe Java's well-known issue with File.exists() is about reporting false negatives, where a file is reported not to exist but in fact does exist. Since the API does not throw IOException for File.exists(), it chooses to swallow the exception when calls to the OS's underlying native functions fail, e.g. on an NFS timeout. But that does not explain the false positive case I am having, given that the file never exists. Any thoughts on this one?
My understanding of the Linux ls exit code is that 0 means okay, equivalent to "the file exists". Is this understanding wrong? The man page of ls is not very clear on the meaning of the exit code: Exit status is 0 if OK, 1 if minor problems, 2 if serious trouble.
All right, back to the subject: is there any surefire way to check file existence with Java on Linux, before JDK7 with NIO2 is officially released?
Here is a JUnit test that shows the problem and some Java code that actually tries to read the file.
The problem happens e.g. when using Samba on OSX Mavericks. A possible reason is explained by the statement in:
http://appleinsider.com/articles/13/06/11/apple-shifts-from-afp-file-sharing-to-smb2-in-os-x-109-mavericks
It aggressively caches file and folder properties and uses opportunistic locking to enable better caching of data.
Please find below a checkExists method that will actually attempt to read a few bytes, forcing a true file access to avoid the caching misbehaviour ...
JUnit test:
/**
 * Test the file-exists function on a network drive. Replace the test file name and the
 * ssh computer with your actual environment.
 * @throws Exception
 */
@Test
public void testFileExistsOnNetworkDrive() throws Exception {
    String testFileName = "/Volumes/bitplan/tmp/testFileExists.txt";
    File testFile = new File(testFileName);
    testFile.delete();
    for (int i = 0; i < 10; i++) {
        Thread.sleep(50);
        System.out.println("" + i + ":" + OCRJob.checkExists(testFile));
        switch (i) {
        case 3:
            // FileUtils.writeStringToFile(testFile, "here we go");
            Runtime.getRuntime().exec("/usr/bin/ssh phobos /usr/bin/touch " + testFileName);
            break;
        }
    }
}
checkExists source code:
/**
 * Check whether the given file exists by actually reading a few bytes from it.
 * @param f the file to check
 * @return true if the file exists and could be read
 */
public static boolean checkExists(File f) {
    try {
        byte[] buffer = new byte[4];
        InputStream is = new FileInputStream(f);
        try {
            if (is.read(buffer) != buffer.length) {
                // the file exists but is shorter than the buffer - do something if needed
            }
        } finally {
            is.close();
        }
        return true;
    } catch (java.io.IOException ioe) {
        // FileNotFoundException and other I/O problems are treated as "does not exist"
    }
    return false;
}
JDK7 was released a few months ago. There are exists and notExists methods in the Files class but they return a boolean rather than throwing an exception. If you really want an exception then use FileSystems.getDefault().provider().checkAccess(path) and it will throw an exception if the file does not exist.
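A minimal sketch of both styles (the path is just a placeholder):

import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ExistsCheck {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("/mnt/nfs/share/somefile.txt"); // placeholder path

        // Boolean-style checks: no exception, just true/false.
        System.out.println("exists:    " + Files.exists(path));
        System.out.println("notExists: " + Files.notExists(path));

        // Exception-style check: throws NoSuchFileException (an IOException)
        // if the file is missing, AccessDeniedException on permission problems, etc.
        FileSystems.getDefault().provider().checkAccess(path);
    }
}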
If you need to be robust, try to read the file, and fail gracefully if the file is not there (or there is a permission or other problem). This applies to languages other than Java as well.
The only safe way to tell whether the file exists and you can read from it is to actually read data from the file, regardless of the file system - local or remote. The reason is a race condition which can occur right after you get success from checkAccess(path): you check, then open the file, and find it suddenly does not exist. Some other thread (or another remote client) may have removed it, or may have acquired an exclusive lock. So don't bother checking access; rather, try to read the file. Spending time running ls just makes the race condition window easier to hit.
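To make that concrete, here is a minimal sketch of the read-instead-of-check approach (the path and the error handling are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadInsteadOfCheck {
    public static void main(String[] args) {
        Path path = Paths.get("/mnt/nfs/share/somefile.txt"); // placeholder path
        try (InputStream in = Files.newInputStream(path)) {
            // The file existed (at least at open time) and is readable: use it.
            byte[] buffer = new byte[4096];
            int n = in.read(buffer);
            System.out.println("read " + n + " bytes");
        } catch (IOException e) {
            // A missing file, a permission problem, or a stale NFS handle all end up here.
            System.err.println("could not read " + path + ": " + e);
        }
    }
}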
We have several JUnit tests that rely on creating new files and reading them. However, there are issues with the files not being created properly, and this fault comes and goes.
This is the code:
@Test
public void test_3() throws Exception {
    // Deletes files in tmp test dir
    File tempDir = new File(TEST_ROOT, "tmp.dir");
    if (tempDir.exists()) {
        for (File f : tempDir.listFiles()) {
            f.delete();
        }
    } else {
        tempDir.mkdir();
    }

    File file_1 = new File(tempDir, "file1");
    FileWriter out_1 = new FileWriter(file_1);
    out_1.append("# File 1");
    out_1.close();

    File file_2 = new File(tempDir, "file2");
    FileWriter out_2 = new FileWriter(file_2);
    out_2.append("# File 2");
    out_2.close();

    File file_3 = new File(tempDir, "fileXXX");
    FileWriter out_3 = new FileWriter(file_3);
    out_3.append("# File 3");
    out_3.close();
    ....
The failure is that the second file, file_2, sometimes never gets created. When we then try to write to it, a FileNotFoundException is thrown.
If we run only this test case, everything works fine.
If we run this test file with some ~40 test cases, it can both fail and work, depending on the current lunar cycle.
If we run the entire test suite, consisting of some 10*40 test cases, it always fails.
We have tried
adding sleeps (5 sec) after new File(...), with no effect
adding a while loop that waits until file_2.exists() is true, but the loop never stopped
catching SecurityException, IOException and even Throwable when we do the new File(...), but we caught nothing.
At one point we got all files to be created, but file_2 was created before file_1 and a test that checked creation time failed.
We've also tried adding file_1.createNewFile() and it always returns true.
So what is going on? How can we make tests that depend on actual files and always be sure they exist?
This has been tested with both Java 1.5 and 1.6, and on both Windows 7 and Linux. The only difference we can observe is that sometimes a similar test case before it fails, and sometimes it is file_1 that isn't created instead.
Update
We tried a new variation:
File file_2 = new File(tempDir, "file2");
while (!file_2.canRead()) {
    Thread.sleep(500);
    try {
        file_2.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This results in a lot of exceptions of the type:
java.io.IOException: Access is denied
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:883)
... but eventually it works, the file is created.
Are there multiple instances of your program running at once?
Check for any extra instances of javaw.exe running. If multiple programs have handles to the same file at once, things can get very wonky very quickly.
Do you have antivirus software or anything else running that could be getting in the way of file creation/deletion by holding handles to the files?
Don't hardcode your file names; use random names. It's the only way to abstract yourself from the various external situations that can occur (multiple access to the same file, permissions, file system errors, locking problems, etc.).
One thing is for sure: using sleep() or retrying is guaranteed to cause weird errors at some point in the future, so avoid doing that.
I did some googling, and this Lucene bug and this board question seem to indicate that there could be an issue with file locking and other processes using the file.
Since we are running this on ClearCase, it seems plausible that ClearCase does some indexing or something similar when the files are being created. Adding loops that repeat until the file is readable solved the issue, so we are going with that. Very ugly solution though.
Try File#createTempFile; this at least guarantees that there are no other files by the same name that would still hold a lock.
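A minimal sketch of that approach, reusing the write pattern from the question (the prefix/suffix and directory are illustrative):

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class TempFileExample {
    public static void main(String[] args) throws IOException {
        File tempDir = new File(System.getProperty("java.io.tmpdir")); // or your own test dir

        // createTempFile generates a unique name, so parallel tests or lingering
        // handles on an old file with the same name cannot interfere.
        File file_1 = File.createTempFile("test-", ".txt", tempDir);
        file_1.deleteOnExit();

        FileWriter out_1 = new FileWriter(file_1);
        try {
            out_1.append("# File 1");
        } finally {
            out_1.close();
        }
        System.out.println("wrote " + file_1.getAbsolutePath());
    }
}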