I have a simple Java app that is trying to copy a file across the WAN (from Ireland to NY).
I recently modified it to use FileUtils because the native Java file copy was too slow; I researched and found that FileUtils is faster because it uses NIO. The file copy now works great, but occasionally I need to copy very large files (> 200 MB) and the copy fails with the error:
java.io.IOException: Failed to copy full contents from...
I know the error means that the destination file size is not the same as the source, so initially I figured it was a network problem. The process retries the copy every couple of hours but is never successful. However, when I copy the file manually through Windows Explorer it works fine. This would seem to rule out the network... but I'm not really sure.
I have searched but could not find any posts with the exact same issue. Any help would be greatly appreciated.
Thanks!
Addition:
I am using this FileUtils method:
public static void copyFile(java.io.File srcFile, java.io.File destFile) throws java.io.IOException
So I found the issue to be on the destination folder. There is a polling process that is supposed to pick up the file after it gets copied. However, the file was getting moved before the copy had completed. This probably wouldn't happen on a Windows drive because the file would be locked (I tested locally and I could not delete the file while it was being copied). However, the destination folder is a mounted Celerra share. The Unix process under the hood is what grabs the file... I guess it doesn't care that some Windows process is still writing to it.
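A common way to defuse this kind of race, regardless of the share's locking behaviour, is to copy to a temporary name in the destination folder and rename it into place only once the copy has finished, so the poller never sees a partially written file. A minimal sketch with plain NIO (the .part suffix and class name are just conventions I chose):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeCopy {
    // Copy src into destDir under a temporary name, then rename it into
    // place. The rename is fast (and atomic on most filesystems), so a
    // poller watching destDir never picks up a half-written file.
    public static Path copyThenRename(Path src, Path destDir) throws IOException {
        Path tmp = destDir.resolve(src.getFileName() + ".part");
        Path dest = destDir.resolve(src.getFileName());
        Files.copy(src, tmp, StandardCopyOption.REPLACE_EXISTING);
        try {
            return Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Some network mounts can't do atomic moves; fall back to a plain rename.
            return Files.move(tmp, dest, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```

If the poller can be changed too, an alternative is to teach it to skip files ending in .part.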
Thanks for your time medPhys-pl!
Related
I have a directory M:\SOURCE from which I have listed and moved its contents until it is empty.
After that, I want to go ahead and delete it. I have tried (yes, I also made sure it was empty):
sourceFile being "M:\SOURCE"
sourceFile.delete();
Files.delete(sourceFile.toPath());
FileUtils.deleteQuietly(sourceFile);
FileUtils.deleteDirectory(sourceFile);
FileUtils.forceDelete(sourceFile);
There are no exceptions being thrown by any of the other methods, and .delete() returns true.
HOWEVER, the directory still exists, and when trying to access the folder I get the following message from Windows:
When running Process Explorer I can see that Java is using that resource. (This only happens when I try to delete the source, and bear in mind that deleting the source directory is the last thing my program does.)
And just to make me freak out even more, once I stop my Java virtual machine, THEN the folder magically disappears. So Java did get the instruction right; it's just not willing to delete the folder until it terminates.
Running System.gc() before deleting the directory also didn't help, and my working directory is not the one I'm trying to delete
You can get this problem when using NIO Files calls that list or return a Stream of directory contents: if the stream is never closed, it keeps an open handle on the directory, which blocks the delete.
Using try-with-resources on any Stream of Path returned by the Files NIO methods helps prevent this issue:
try (Stream<Path> stream = Files.list(directory)) {
    // do any work on the contents - move / delete
}
// delete the directory only after the stream above has been closed
Files.delete(directory);
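For completeness, here is a small self-contained sketch of that pattern: collect the listing, let try-with-resources release the directory handle, and only then delete (the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DeleteDirDemo {
    // Empties and deletes a directory, making sure the directory stream
    // is closed before any delete is attempted.
    public static void emptyAndDelete(Path dir) throws IOException {
        List<Path> contents;
        try (Stream<Path> stream = Files.list(dir)) {
            contents = stream.collect(Collectors.toList());
        } // the stream (and its handle on dir) is released here
        for (Path p : contents) {
            Files.delete(p); // move or delete each entry as needed
        }
        Files.delete(dir);   // succeeds because no stream is still open on it
    }
}
```

On Windows the symptom in the question (the folder only disappearing when the JVM exits) is exactly what an unclosed directory stream looks like from the outside.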
I made a small Java program for academic purposes, its main focus is to read some .txt files and present the information to the user. These files are present in the resources folder, under the src folder.
The program runs as intended when launched from Eclipse.
Using the Launch4j app I was able to successfully create an exe which runs fine and does what's intended, up until I try to read the .txt files I have in the resources folder, which it appears unable to reach.
I'm guessing that when I launch the exe, the runtime path changes to where the exe was created, so I created the program in a desktop folder and specified this path in the program, but that didn't solve the situation.
As an alternative, I moved the .txt files out of the program and once again created the exe in a desktop folder with said .txt files, linked the program to this path and once again it didn't work.
The command used to get the .txt files is:
Files.readAllLines(Paths.get(doc)).get(line)
And doc is simply the path to the intended .txt file.
It's worth noting that I have no previous experience in Java and throughout the development of the program I tried my best to use commands I'd fully understand and to keep it as simple as possible. I hope the solution can be along these lines! I'm very confident this must be a rookie mistake, but I can't seem to find the solution to this specific problem anywhere.
The paths to files in Eclipse are different than the paths to files in an .exe or JAR file.
I will let this other user explain it because I am lazy :p
Rather than trying to address the resource as a File just ask the
ClassLoader to return an InputStream for the resource instead via
getResourceAsStream:
try (InputStream in = getClass().getResourceAsStream("/file.txt");
     BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
    // read from reader here
}
As long as the file.txt resource is available on the classpath then
this approach will work the same way regardless of whether the
file.txt resource is in a classes/ directory or inside a jar.
The "URI is not hierarchical" error occurs because the URI for a resource
within a jar file is going to look something like this:
file:/example.jar!/file.txt. You cannot read the entries within a jar
(a zip file) as if it were a plain old File.
This is explained well by the answers to:
How do I read a resource file from a Java jar file?
Java Jar file: use resource errors: URI is not hierarchical
The original post is here, all credit to its author.
Fixing your URL should let you read from that file when you are using the .exe.
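To mirror the Files.readAllLines call from the question, a small helper along these lines could read a classpath resource into a list of lines (the class name and the UTF-8 charset are my assumptions):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ResourceReader {
    // Reads all lines of a resource on the classpath. Works the same
    // whether the app runs from Eclipse, from a jar, or from a
    // Launch4j-wrapped exe, because it never touches the filesystem.
    public static List<String> readAllLines(String resource) throws IOException {
        InputStream in = ResourceReader.class.getResourceAsStream(resource);
        if (in == null) {
            throw new IOException("Resource not found on classpath: " + resource);
        }
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        return lines;
    }
}
```

So instead of Files.readAllLines(Paths.get(doc)).get(line) you would call something like ResourceReader.readAllLines("/myfile.txt").get(line), where the leading slash makes the resource name absolute on the classpath.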
EDITED FOR CORRECTION. Thanks @VGR (see comments) for correcting my mistake.
I have many XML files on HDFS which I extracted from sequence files using a Java program.
Initially, the files were few, so I copied the extracted XML files onto my local machine and then ran the Unix zip command to zip the XMLs into a single .zip file.
The number of XML files has now increased and I can't copy them to local any more because I will run out of space.
My need is to zip all of those XML files (on HDFS) into a single zipped file (on HDFS) without needing to copy them to local.
I couldn't find any lead to start from. Can anyone provide me a starting point or any code (even Java MR) they have so that I can go further? I can see this could be done using MapReduce, but I have never programmed in it, which is why I'm trying other ways.
Thanks in advance.
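One non-MapReduce approach is to stream each file straight into a single ZipOutputStream, so nothing ever lands on the local disk. The sketch below uses plain java.io/java.nio streams so it runs anywhere; on HDFS you would obtain the equivalent streams from Hadoop's FileSystem API (fs.open for each input, fs.create for the zip output). The class and method names are mine:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class StreamZipper {
    // Streams each input file into a single zip, one entry per file,
    // copying through a small buffer so only one chunk is in memory
    // at a time. rawOut can be any OutputStream, including one that
    // writes directly to HDFS.
    public static void zipAll(List<Path> inputs, OutputStream rawOut) throws IOException {
        try (ZipOutputStream zip = new ZipOutputStream(rawOut)) {
            byte[] buf = new byte[8192];
            for (Path p : inputs) {
                zip.putNextEntry(new ZipEntry(p.getFileName().toString()));
                try (InputStream in = Files.newInputStream(p)) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        zip.write(buf, 0, n);
                    }
                }
                zip.closeEntry();
            }
        }
    }
}
```

Because both ends are streams, the memory footprint stays constant no matter how many XML files there are or how big they get.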
I am developing an application where I have a startup video.
I want this video to be embedded in my executable jar or in a separate zip which can be password protected.
I have tried to use the following code but it's not working:
audioPlayer.prepareMedia("zip:///C:/Users/User/Documents/NetBeansProjects/HanumanChalisa/res.zip!/res/startup.mp4");
Please help me with this.
EDIT :
This is the error I am getting:
[mov,mp4,m4a,3gp,3g2,mj2 # 000000000053e320] moov atom not found
[000000000d245358] avcodec demux error: Could not open C:\Users\User\Documents\NetBeansProjects\HanumanChalisa\res.zip!\res\startup.mp4: Specified event object handle is invalid
[000000000d245358] ps demux error: cannot peek
[000000000d2390c8] main input error: no suitable demux module for 'zip/:///C:/Users/User/Documents/NetBeansProjects/HanumanChalisa/res.zip!/res/startup.mp4'
[000000000d2390c8] main input error: VLC can't recognize the input's format
[000000000d2390c8] main input error: The format of 'zip:///C:/Users/User/Documents/NetBeansProjects/HanumanChalisa/res.zip!/res/startup.mp4' cannot be detected. Have a look at the log for details.
This works for me (when I adjust the path obviously), so I don't think there's anything inherently wrong with the code that you've posted. A couple of things to note:
Make sure the path is correct, that includes the case being the same (especially for the part inside the zip file)
Try opening the path just with VLC - if that doesn't work then you know the problem isn't related to your code
Try a zip file with different compression settings - perhaps VLC is struggling to read the format of that particular zip file (which it seems to allude to in its logs, though I'm always wary of following those too closely).
If the worst comes to the worst, you could always extract the file you need to a temporary file and then pass that location to vlcj. Not an ideal workaround, but one that would at least still let you play the file.
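For reference, that extract-to-a-temporary-file fallback is only a few lines with java.util.zip (the class name and .mp4 suffix are placeholders):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ZipExtractor {
    // Copies one entry of a zip file to a temporary file and returns its
    // path, which can then be handed to vlcj as a plain media path.
    public static Path extractToTemp(Path zipPath, String entryName) throws IOException {
        try (ZipFile zip = new ZipFile(zipPath.toFile())) {
            ZipEntry entry = zip.getEntry(entryName);
            if (entry == null) {
                throw new IOException(entryName + " not found in " + zipPath);
            }
            Path tmp = Files.createTempFile("media-", ".mp4");
            tmp.toFile().deleteOnExit(); // clean up when the JVM exits
            try (InputStream in = zip.getInputStream(entry)) {
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            }
            return tmp;
        }
    }
}
```

Note that java.util.zip cannot open password-protected archives, so if you go the password route you would need a third-party zip library for the extraction step.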
Disclaimer: This isn't my repo, I'm trying to help a developer access theirs.
When checking out code (windows server 2003, tortoiseCVS 1.12.5), CVS displays many errors:
cvs udpate: cannot open temp file _new_r_cl_elementBeanInternalHome_12345b.class for writing
Eventually failing and aborting on the error:
cvs [update aborted]: cannot make directory path/path/path/PATH/Path/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/FOO/com/ams/BAR/entityBean/websphere_deploy/DB2UDBOS123_V0_1 no such file or directory.
There's nothing handy about this on Google or Stack Overflow so far.
We do have a web browser on the cvs server and I can see the paths match and there are files there.
Anyone have any ideas?
In my case I wasn't able to check out to drive D: in Windows but was able to check out to drive C:.
I believe that the problem is with the disk drive or filesystem.
The standard Windows API has a 260-character limit for file paths. If the whole path to a file exceeds that limit, you won't be able to save that file on the system.
Try to check out the repository as close to the root of the disk as possible. If the file paths in your repo exceed the limit, then try to check out only a fragment of your repository's tree.
If you use the NTFS file system and the Win32 API, paths can be up to roughly 32,000 characters long. You could change your CVS client to another implementation; for example, the NetBeans plugin for CVS is able to handle long paths, but you probably won't be able to work with those files in other tools anyway.