Execution of program stops and no errors are generated - java

I have this code:
// "status" is a JLabel field
status.setText(" Download data ");
url = new URL( baseURL );
FileUtils.copyURLToFile( url, tempFile );
status.setText(" Download done... ");
When the code is run, the "status" label gets updated with: Download data
The file downloads, and I can see it when I open "tempFile" (tempFile.txt) in Notepad.
The next status update, i.e. "Download done", never gets executed, and neither does any of the code that follows...
The e.printStackTrace() in my catch block also prints nothing...
What is going on here?
P.S. The downloaded file is just a plain text file with about 2000 lines of text in it...
And I can see all of it in the temporary file I created, tempFile (tempFile.txt).
I also commented out all the code that follows the second update, but still nothing.
Currently I'm downloading the file from localhost/data.txt

See copyURLToFile(URL, File):
Warning: this method does not set a connection or read timeout and thus might block forever. Use copyURLToFile(URL, File, int, int) with reasonable timeouts to prevent this.
There could be several explanations for why the download doesn't complete. Perhaps there is a step in the protocol that triggers at the end of the transfer that requires the use of a port your system is currently blocking.
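If the transfer really is hanging, the overload with timeouts at least turns a silent block into a visible exception. A minimal sketch (the 10-second connect and 30-second read timeouts are arbitrary examples):
status.setText(" Download data ");
URL url = new URL(baseURL);
// copyURLToFile(URL, File, int, int) is the org.apache.commons.io.FileUtils overload with timeouts
FileUtils.copyURLToFile(url, tempFile, 10000, 30000);
status.setText(" Download done... ");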

Related

How to capture NullPointerException, FTP connection issues via a log/output file in a Talend job

My Talend job is working perfectly now, but I would like to add some basic quality validation checks to capture errors right away without any hassle. I am going to schedule the job to run hourly, and it may fail due to disk space, FTP connection failures, or errors while parsing the file name (I split the name and load it into the DB), so I might run into a java IOException, NullPointerException, or something like that. How do I capture the error from the console into a log file in a folder, from the Talend job that keeps running in the background?
This is the code I am using to write the console output to a file. However, I cannot capture the erroneous cases; please advise how to do that:
java.io.File file = new java.io.File("C:/Users/hsivakumar/Desktop/tests/output.txt");
java.io.PrintStream ps = new java.io.PrintStream(new java.io.FileOutputStream(file));
//java.io.File outputFileErr = new java.io.File("C:/Users/hsivakumar/Desktop/tests/outputerr.txt");
System.setOut(ps);
System.setErr(outputFileErr);
I am getting an error when using setErr
You don't need to write code in Talend to catch and log exceptions. Just drop a tLogCatcher and link its output to a file component.
The tLogCatcher will catch the Exception and populate several fields.
The tFileOutputDelimited will create your log file.
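As an aside, the setErr error in the question's snippet happens because System.setErr expects a PrintStream, not a File. A minimal sketch of the pure-Java redirection, reusing the question's example paths:
java.io.File outFile = new java.io.File("C:/Users/hsivakumar/Desktop/tests/output.txt");
java.io.File errFile = new java.io.File("C:/Users/hsivakumar/Desktop/tests/outputerr.txt");
// Both streams must be wrapped in a PrintStream before being handed to System.
System.setOut(new java.io.PrintStream(new java.io.FileOutputStream(outFile)));
System.setErr(new java.io.PrintStream(new java.io.FileOutputStream(errFile)));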

How to prevent file wipe if an error occurs while writing to it?

This is an issue I have had in many applications.
I want to replace the information inside a file, which currently holds an outdated version.
In this instance, I am updating the file that records playlists after adding a song to a playlist. (For reference, I am creating an app for android.)
The problem is if I run this code:
FileOutputStream output = new FileOutputStream(file);
output.write(data.getBytes());
output.close();
And if an IOException occurs while trying to write to the file, the data is lost (since creating an instance of FileOutputStream empties the file). Is there a better method to do this, so if an IOException occurs, the old data remains intact? Or does this error only occur when the file is read-only, so I just need to check for that?
My only "work around" is to inform the user of the error, and give said user the correct data, which the user has to manually update. While this might work for a developer, there is a lot of issues that could occur if this happens. Additionally, in this case, the user doesn't have permission to edit the file themselves, so the "work around" doesn't work at all.
Sorry if someone else has asked this. I couldn't find a result when searching.
Thanks in advance!
One way you could ensure that you do not wipe the file is by creating a new file with a different name first. If writing that file succeeds, you could delete the old file and rename the new one.
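A minimal sketch of that write-then-rename approach using java.nio.file (names are illustrative; note that java.nio.file needs Android API 26+):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

static void safeWrite(Path file, String data) throws IOException {
    // Write to a sibling temp file first; the original stays intact if this write fails.
    Path temp = file.resolveSibling(file.getFileName() + ".tmp");
    Files.write(temp, data.getBytes(StandardCharsets.UTF_8));
    // Only replace the original once the new content is completely on disk.
    Files.move(temp, file, StandardCopyOption.REPLACE_EXISTING);
}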
There is the possibility that renaming fails. To be completely safe from that, your files could be named according to the time at which they are created. For instance, if your file is named save.dat, you could add the time at which the file was saved (from System.currentTimeMillis()) to the end of the file's name. Then, no matter what happens later (including failure to delete the old file or rename the new one), you can recover the most recent successful save. I have included a sample implementation below which represents the time as a 16-digit zero-padded hexadecimal number appended to the file extension. A file named save.dat will be instead saved as save.dat00000171ed431353 or something similar.
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

// name includes the file extension (i.e. "save.dat").
static File fileToSave(File directory, String name) {
    return new File(directory, name + String.format("%016x", System.currentTimeMillis()));
}

// Return the most recent save. Return the entire sorted array instead if you need older
// versions for which deletion failed, e.g. to purge any unnecessary older versions.
static File fileToLoad(File directory, String name) {
    File[] files = directory.listFiles((dir, n) -> n.startsWith(name));
    Arrays.sort(files, Comparator.comparingLong(
            (File file) -> Long.parseLong(file.getName().substring(name.length()), 16)).reversed());
    return files[0];
}

"java.io.FileNotFoundException: No files matched spec" althought file is successfully written to

Edit -- the error seems to come not from the write block but from the output block, which is even stranger. Modified to reflect my investigations.
Edit2 -- solved - the issue is due to an improperly closed writer, for some reason only triggered in the DataflowRunner but not in the DirectRunner. Will add an answer later today when I find the time. If anyone has an insight on why the writer is closed in the DirectRunner but not in the DataflowRunner, I am very interested.
Consider the following Dataflow 2.5.0 Java code:
BlobId blobTranscriptId = BlobId.of(tempBucket, fileName);
BlobInfo blobTranscriptInfo = BlobInfo.newBuilder(blobTranscriptId).build();
try (WriteChannel writer = storageClient.writer(blobTranscriptInfo)) {
    LOG.info("Writing file");
    writer.write(ByteBuffer.wrap(currentString.toString().getBytes(UTF_8)));
    processContext.output("gs://" + tempBucket + "/" + fileName);
    LOG.info("Wrote " + fileName);
} catch (Exception e) {
    LOG.warn("Error caught while writing content : " + ExceptionUtils.getStackTrace(e));
}
When run locally (in a DirectPipeline), this code works fine and without errors.
When run in Dataflow (in a DataflowRunner) however, we notice a strange behavior:
the file is created on the requested bucket with the requested content and filename
a UserCodeException: java.io.FileNotFoundException: No files matched spec is caught on the processContext.output line.
Searching Google for gcp "No files matched spec" doesn't return a single result. Looking at the source in org.apache.beam.sdk.io.FileSystems.java (the error is declared at line 173) is not a lot more helpful.
Following the execution with a debugger shows that with a DirectRunner, the code never calls FileIO.MatchAll, the source of the error. With the DataflowRunner however, the error is somehow triggered. There is no reason why the output string should be interpreted as a filepath, since the stack trace indicates that the error happens in this stage, which is declared as outputting a PCollection<String>.
Why is a FileNotFoundException thrown even though the file has clearly been created with the right content?
Some additional information that could help:
the filename is generated through a UUID (UUID.randomUUID()), which means it contains '-' characters and is fairly long. This should, however, not be an issue given that 1) it works in the DirectRunner and 2) the files are actually created
the following stage is a TextIO.readAll()
stack trace (slightly modified for privacy): https://pastebin.com/wumha4ZZ
Additional investigation:
Changing the output to a fixed string pointing at an existing file, processContext.output("gs://" + tempBucket + "/" + alreadyExistingFileName);, does not trigger the error. I then suspected it might (somehow) be due to a delay between the write operation and the time when the bucket acknowledges the file.
Adding a Thread.sleep(15000) between the write and the output does NOT fix the issue. It seems that delay is not the issue here.
Looking at the stack trace in more detail reveals that the error happens in FileIO, itself called by the TextIO stage following this one.
What happens is that in my code above I do not close the writer (writer.close()) before outputting the string to TextIO, but only after, through the try (WriteChannel writer) { ... } block. Since buckets do not register files until their writers have been closed, the TextIO stage can't find the files and throws a FileNotFoundException. This in turn exits the try block and triggers writer.close(), which is why the file still appears on the bucket in the end.
For some reason I do not know, this does not happen when running with the local DirectRunner.
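A sketch of the corrected ordering, with the writer closed before the element is emitted (same names as the snippet above):
BlobId blobTranscriptId = BlobId.of(tempBucket, fileName);
BlobInfo blobTranscriptInfo = BlobInfo.newBuilder(blobTranscriptId).build();
try (WriteChannel writer = storageClient.writer(blobTranscriptInfo)) {
    writer.write(ByteBuffer.wrap(currentString.toString().getBytes(UTF_8)));
} catch (Exception e) {
    LOG.warn("Error caught while writing content : " + ExceptionUtils.getStackTrace(e));
}
// The try-with-resources block has closed the channel here, so GCS has registered the object
// and the downstream TextIO.readAll() can find it.
processContext.output("gs://" + tempBucket + "/" + fileName);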

Java file IO and "access denied" errors

I have been tearing my hair out over this, so I am looking for some help.
I have a loop of code that performs the following:
// imports omitted
public void afterPropertiesSet() throws Exception {
    // building of URL list omitted
    // urlMap is a HashMap<String, String> created and populated just prior
    for (Object urlVar : urlMap.keySet()) {
        String myURLvar = urlMap.get(urlVar.toString());
        System.out.println("URL is " + myURLvar);
        BufferedImage imageVar = ImageIO.read(new URL(myURLvar)); // URL confirmed to be valid even for executions that fail
        String fileName2Save = "filepath"; // a valid file path
        System.out.println("Target path is " + fileName2Save);
        File file2Save = new File(fileName2Save);
        file2Save.setWritable(true); // set these just to be sure
        file2Save.setReadable(true);
        try {
            ImageIO.write(imageVar, "png", file2Save); // error thrown here
        } catch (Exception e) {
            System.out.println("R: " + file2Save.canRead() + " W: " + file2Save.canWrite()
                    + " E: " + file2Save.canExecute() + " Exists: " + file2Save.exists()
                    + " is a file: " + file2Save.isFile());
            System.out.println("parent Directory perms"); // same as above except on parent directory of destination
        } // end try
    } // end for
}
This all runs on Windows 7 with JDK 1.6.26, NetBeans, and Tomcat 7.0.14. The target directory is actually inside my NetBeans project directory, in a folder for a normal web app (outside WEB-INF) where I would normally expect to have permission to write files.
When the error occurs I get one of two results for the file: a) all false, or b) all true. The parent directory permissions never change: all true except for isFile.
The error thrown (a java.io error with "access denied") does not occur every time... in fact, 60% of the time the loop runs it throws no error. The remaining 40% of the time I get the error on one of the 60+ files it writes, infrequently the same one. The order of the URLs it starts from changes every time, so the order in which the files are written is variable. The files have short, concise names like "1.png". The images are small, less than 8 KB.
In order to make sure the permissions are correct I have :
Given "full control" to EVERYONE from the net beans project directory down
Run the JDK,JRE and Netbeans as Administrator
Disabled UAC
Yet the error persists. Google searches for this seem to run the gamut and often read like voodoo. Clearly I (and Java and NetBeans etc.) should have permission to write a file to the directory.
Anyone have any insight? This is all (code and the web server hosting the URL) on a closed system, so I can't cut and paste code or a stack trace.
Update: I confirmed the image URL is valid by doing a println and toString prior to each read. I then confirmed that a) the web server hosting the target URL returned the image with an HTTP 200 code and b) the URL returned the image when tested in a web browser. In testing I also put an if () in after the read to confirm that the value was not NULL or empty. I also put in tests for NULL on all the other values. They are always as expected, even for a failure. The error always occurs inside the try block. The destination directory is the same every execution. Prior to every execution the directory is empty.
Update 2: Here is one of the stack traces (in this case the perms for file2Save are R: true, W: true, E: true, isFile: true, exists: true):
java.io.FileNotFoundException <fullFilepathhere> (Access is denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at javax.imageio.stream.FileImageOutputStream.<init>(FileImageOutputStream.java:53)
at com.sun.imageio.spi.FileImageOutputStreamSpi.createOutputStreamInstance(FileImageOutputStreamSpi.java:37)
at javax.imageio.ImageIO.createImageOutputStream(ImageIO.java:393)
at javax.imageio.ImageIO.write(ImageIO.java:1514)
at myPackage.myClass.afterPropertiesSet(thisClassexample.java:204)// 204 is the line number of the ImageIO write
This may not answer your problem since there can be many other possibilities given your limited information.
One common possibility for not being able to write a file in a web application is the file-locking issue on Windows, which occurs if the following four conditions are met simultaneously:
the target file exists under web root, e.g. WEB-INF folder and
the target file is served by the default servlet and
the target file has been requested at least once by client and
you are running under Windows
If you are trying to replace such a file that meets all of the four conditions, you will not be able to because some servlet containers such as tomcat and jetty will buffer the static contents and lock the files so you are unable to replace or change them.
If your web application has exactly this problem, you should not use the default servlet to serve the file contents. The default servlet is designed to serve static content which you do not want to change, e.g. CSS files, JavaScript files, background images, etc.
There is a trick to solve the file locking issue on Windows for jetty by disabling the NIO http://docs.codehaus.org/display/JETTY/Files+locked+on+Windows
The trick is useful for the development process, e.g. you want to edit a CSS file and see the change without restarting your web application, but it is not recommended for production mode. If your web application relies on this trick in production, then you should seriously consider redesigning your code.
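A hedged sketch of that idea: serve the generated image through a tiny servlet of your own from a directory outside the web root, so the container's default servlet never maps or locks it (class name, path, and mapping are illustrative; you would still need to register it in web.xml or via @WebServlet):
public class GeneratedImageServlet extends javax.servlet.http.HttpServlet {
    @Override
    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse resp) throws java.io.IOException {
        // Hypothetical location outside the web app folder, so the container never caches or locks it.
        java.io.File image = new java.io.File("C:/appdata/images/1.png");
        resp.setContentType("image/png");
        resp.setContentLength((int) image.length());
        java.io.InputStream in = new java.io.FileInputStream(image);
        try {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                resp.getOutputStream().write(buffer, 0, read);
            }
        } finally {
            in.close();
        }
    }
}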
I cannot tell you what's going on or why... I have a feeling that it's something dependent on the way ImageIO tries to save the image. What you could do is save the BufferedImage by leveraging a ByteArrayOutputStream as described below:
BufferedImage bufferedImage = ImageIO.read(new File("sample_image.gif"));
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write( bufferedImage, "gif", baos );
baos.flush(); //Is this necessary??
byte[] resultImageAsRawBytes = baos.toByteArray();
baos.close(); //Not sure how important this is...
OutputStream out = new FileOutputStream("myImageFile.gif");
out.write(resultImageAsRawBytes);
out.close();
I'm not really familiar with the ByteArrayOutputStream, but I guess its reset() function could be handy when dealing with saving multiple files. You could also try using its writeTo(OutputStream out) if you prefer. Documentation here.
Let me know how it goes...
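A short sketch of the reuse hinted at above, using reset() between images and writeTo(OutputStream) to skip the intermediate byte array (file names are made up):
ByteArrayOutputStream baos = new ByteArrayOutputStream();
for (int i = 1; i <= 3; i++) {
    BufferedImage img = ImageIO.read(new File("input" + i + ".gif"));
    baos.reset(); // clear the buffer before encoding the next image
    ImageIO.write(img, "gif", baos);
    OutputStream out = new FileOutputStream("output" + i + ".gif");
    baos.writeTo(out); // copy the buffered bytes straight to the file
    out.close();
}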

Java: Efficient way to scan a folder for a particular file

I am contacting an external services with my Java app.
The flow is as follows: I generate an XML file and put it in a folder, then the service processes the file and returns another file with the same name, with an extension of .out.
Right now, after I put the file in the folder, I loop until I get that file back so I can read the result.
Here is the code:
fileName += ".out";
File f = new File(fileName);
do
{
f = new File(fileName);
} while (!f.exists());
response = readResponse(fileName); // got the response now read it
My question comes here: am I doing it the right way? Is there a better/more efficient way to wait for the file?
Some info: I run my app on WinXP, it usually takes the external service less than a second to respond with a file, and I send around 200 requests per day to this service. The path to the folder with the result file is always the same.
All suggestions are welcome.
Thank you for your time.
There's no reason to recreate the File object. It just represents the file location, whether the file exists or not. Also you probably don't want a loop without at least a short delay, otherwise it'll just max out a processor until the file exists. You probably want something like this instead:
File file = new File(filename);
while (!file.exists()) {
    Thread.sleep(100);
}
Edit: Ingo makes a great point in the comments. The file might not be completely there just because it exists. One way to guarantee that it's ready is have the first process create a second file after the first is completely written. Then have the Java program detect that second file, delete it and then safely read the first one.
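A sketch of that marker-file idea (the ".done" suffix is an assumption; the external service would have to write the marker only after the .out file is complete):
File marker = new File(fileName + ".done");  // hypothetical marker written last by the service
while (!marker.exists()) {
    Thread.sleep(100);
}
marker.delete();                             // clean up the marker
response = readResponse(fileName);           // the .out file is now known to be complete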
