I'm trying to write a method that, starting from a given directory, lists every file (including those in every subdirectory). I'm using Files.find for this. The problem is that whenever it hits a file I can't access it stops, but I want to continue the search and add the remaining files to the list.
This is my code
public static List<String> search(String dir) {
    List<String> listFiles = new ArrayList<>();
    try {
        Files.find(Paths.get(dir), Integer.MAX_VALUE, (filePath, fileAttr) -> fileAttr.isRegularFile())
             .forEach(file -> listFiles.add(file.toAbsolutePath().toString()));
    } catch (UncheckedIOException ue) {
        System.out.println("Can't access that directory");
    } catch (IOException e) {
        e.printStackTrace();
    }
    return listFiles;
}
How can I change it?
You're looking for the FileVisitor interface from Java 7's NIO.2 package. This interface offers several methods to test directories for accessibility etc. before entering them, as well as built-in error handling and a return value that controls the behaviour of the traversal.
Your specific problem would require creating some kind of list (e.g. outside of the FileVisitor) which you can then fill from inside the visit methods using Collection::add.
Sadly, Java's Stream API is unable to handle checked exceptions on its own, so any attempt to solve your problem with streams would require a lot of unnecessary work, considering that NIO offers the more verbose, but far superior, FileVisitor solution.
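A minimal sketch of that approach (the method signature mirrors yours, the rest is mine): walk the tree with Files.walkFileTree and a SimpleFileVisitor, collect regular files in visitFile, and override visitFileFailed so inaccessible entries are skipped instead of aborting the walk.
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

public static List<String> search(String dir) throws IOException {
    List<String> listFiles = new ArrayList<>();
    Files.walkFileTree(Paths.get(dir), new SimpleFileVisitor<Path>() {
        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
            if (attrs.isRegularFile()) {
                listFiles.add(file.toAbsolutePath().toString());
            }
            return FileVisitResult.CONTINUE;
        }

        @Override
        public FileVisitResult visitFileFailed(Path file, IOException exc) {
            // Inaccessible file or directory: skip it and keep walking.
            return FileVisitResult.CONTINUE;
        }
    });
    return listFiles;
}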
Is there a way in Java to copy one file into another in an asynchronous way? Something similar to Stream.CopyToAsync in C# is what I'm trying to find.
What I'm trying to achieve is to download a series of ~40 files from the Internet, and this is the best I've come up with for each file:
CompletableFuture.allOf(myFiles.stream()
        .map(file -> CompletableFuture.runAsync(() -> syncDownloadFile(file)))
        .toArray(CompletableFuture[]::new))
    .thenRun(() -> doSomethingAfterAllDownloadsAreComplete());
Where syncDownloadFile is:
private void syncDownloadFile(MyFile file) {
try (InputStream is = file.mySourceUrl.openStream()) {
long actualSize = Files.copy(is, file.myDestinationNIOPath);
// size validation here
} catch (IOException e) {
throw new RuntimeException(e);
}
}
But that means I have some blocking calls inside of the task executors, and I'd like to avoid that so I don't block too many executors at once.
I'm not sure if the C# method internally does the same (I mean, something has to be downloading that file right?).
Is there a better way to accomplish this?
AsynchronousFileChannel (AFC for short) is the right way to manage files in Java with non-blocking IO. Unfortunately it does not provide a promise-based API (what .NET calls a Task) such as .NET's CopyToAsync(Stream).
The alternative RxIo library is built on top of the AFC and provides the AsyncFiles asynchronous API with different calling idioms: callback-based, CompletableFuture-based (equivalent to .NET's Task), and reactive streams.
For instance, copying from one file to another asynchronously can be done as follows:
Path in = Paths.get("input.txt");
Path out = Paths.get("output.txt");
AsyncFiles
.readAllBytes(in)
.thenCompose(bytes -> AsyncFiles.writeBytes(out, bytes))
.thenAccept(index -> /* invoked on completion */)
Note that continuations are invoked by a thread from the background AsynchronousChannelGroup.
Thus you may solve your problem using a non-blocking HTTP client with a CompletableFuture-based API, chained with the AsyncFiles use. For instance, AHC is a valid choice. See usage here: https://github.com/AsyncHttpClient/async-http-client#using-continuations
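A rough sketch of that combination, assuming AHC 2.x and the RxIo AsyncFiles API shown above (the downloadTo helper and its return type are my assumptions, not part of either library's docs):
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Response;

// AsyncFiles comes from the RxIo library referenced above.
static CompletableFuture<Integer> downloadTo(AsyncHttpClient client, String url, Path destination) {
    return client.prepareGet(url)
            .execute()                                    // non-blocking HTTP request
            .toCompletableFuture()
            .thenApply(Response::getResponseBodyAsBytes)  // whole body buffered in memory
            .thenCompose(bytes -> AsyncFiles.writeBytes(destination, bytes));
}
No thread blocks while the download or the file write is in flight; the continuations run on the client's and channel group's background threads.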
I'm still working on the project I already needed a bit of help with:
JavaFX - TableView doesn't update items
Now I want to understand how this whole Serialization process in Java works, because unfortunately, I don't really get it now.
Before I go on: first of all, I'm a student, not a professional. Second, I'm neither familiar with using DBs, nor XML or JSON, so I'd just like to find a solution to my approach; no matter how inelegant it might be in the end, it just needs to work. So please don't feel offended if I reject any advice about using other techniques.
So here's what I want:
Saving three different class objects to separate files BUT maintaining backward compatibility for each of them. The objects are Settings, Statistics and a "database" object containing all words added to its list. In the future I may add more statistics or settings, which means adding new variables, mostly of type IntegerProperty or DoubleProperty.
Now the question is: is it possible to load files saved by an old version and, during the process, just initialize the variables not found in the old version with null, but keep the rest as it was saved?
All I know is that the first thing to do so is not to alter the serialVersionUID.
Another thing would be saving the whole Model object (which contains the three objects mentioned before), so I only have to implement stuff for one class instead of three. But how would that work concerning backward compatibility? I mean, the class itself would not change, but its attributes would, in their own class structures.
Finally, what approach should I go for? And most of all, how do I do this while maintaining backward compatibility at the same time? I do best with some concrete examples rather than plain theory.
Here are two example methods, if it's of any help. I already have methods for each class to write and read an object.
public static void saveModel(Model model, String destination) throws IOException
{
    FileOutputStream fileOutput = null;
    ObjectOutputStream objectOutput = null;
    try
    {
        fileOutput = new FileOutputStream(destination);
        objectOutput = new ObjectOutputStream(fileOutput);
        objectOutput.writeObject(model);
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        if (objectOutput != null)
            try { objectOutput.close(); } catch (IOException e) {}
        if (fileOutput != null)
            try { fileOutput.close(); } catch (IOException e) {}
    }
}
public static Settings readSettings(String destination) throws IOException, FileNotFoundException
{
    Settings s = null;
    FileInputStream fileInput = null;
    ObjectInputStream objectInput = null;
    try
    {
        fileInput = new FileInputStream(destination);
        objectInput = new ObjectInputStream(fileInput);
        Object obj = objectInput.readObject();
        if (obj instanceof Settings)
        {
            s = (Settings) obj;
        }
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    catch (ClassNotFoundException e)
    {
        e.printStackTrace();
    }
    finally
    {
        if (objectInput != null) try { objectInput.close(); } catch (IOException e) {}
        if (fileInput != null) try { fileInput.close(); } catch (IOException e) {}
    }
    return s;
}
Tell me if you need more of my current code.
Thank you in advance!
... you must be this tall
The best advice for serialisation is to avoid it for application persistence, especially if backwards compatibility is a desired property of your application.
Answers
Is it possible to load old version saved files and then during the process just initiate new variables not found in the old version with just null but keep the rest as it has been saved?
Yes. Deserialising objects saved using previous versions of the class into a new version of this class will work only if:
fully qualified name of the class has not changed (same name and package)
previous and current class have exactly the same serialVersionUID; if one of the versions is missing it, it will be calculated as a 'hash' of all fields and methods and upon a mismatch deserialisation will fail.
inheritance hierarchy has not changed for that class (the same ancestors)
no fields have been removed in the new version of the class
no fields have become static
no fields have become transient
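A small illustration of the "new fields get defaults" behaviour (the field names here are made up, not from your project): keep the serialVersionUID fixed and add the new field; old files then deserialise with that field left at its default (null for references).
import java.io.Serializable;

// Version 1 of the class, already written to disk somewhere:
// public class Settings implements Serializable {
//     private static final long serialVersionUID = 1L;
//     private String language;
// }

// Version 2: same serialVersionUID, one extra field.
public class Settings implements Serializable {
    private static final long serialVersionUID = 1L; // unchanged on purpose
    private String language;                          // still read from old files
    private Integer fontSize;                         // absent in old files: deserialises as null
}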
I just have to implement stuff for one class instead of three. But how would that work then concerning backward compatibility?
Yes. Providing that all classes of all fields of Model and Model class itself adhere to the rules above.
Finally, what approach should I go for? And most of all, how do I do this while maintaining backward compatibility at the same time?
Yes, as long as you can guarantee all of the above rules forever, you will be backwards compatible.
I am sure you can appreciate that "forever", or even the next year, can be very hard to guarantee, especially in software.
This is why people do application persistence using data exchange formats that are more robust than the binary representation of serialised Java objects.
Raw data for the table could be saved using anything from a CSV file to JSON docs, stored as files or as documents in a NoSQL database.
For settings / config, have a look at Java's Properties class, which can store and load properties to and from *.properties or *.xml files; separately, have a look at YAML.
Finally, for backwards compatibility, have a look at FlatBuffers.
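For instance, a minimal Properties round trip could look like the sketch below (the file name and keys are made up):
import java.io.*;
import java.util.Properties;

static void roundTrip() throws IOException {
    Properties props = new Properties();
    props.setProperty("window.width", "800");
    props.setProperty("language", "en");

    // Store: old readers simply ignore keys they do not know about.
    try (OutputStream out = new FileOutputStream("settings.properties")) {
        props.store(out, "application settings");
    }

    // Load: missing keys fall back to the supplied default.
    Properties loaded = new Properties();
    try (InputStream in = new FileInputStream("settings.properties")) {
        loaded.load(in);
    }
    int width = Integer.parseInt(loaded.getProperty("window.width", "640"));
    System.out.println("window.width = " + width);
}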
The field of application persistence is very rich and ripe, so happy exploring.
I want to check if a Windows Workstation is logged on or off. I've found a solution in C#:
public class CheckForWorkstationLocking : IDisposable
{
private SessionSwitchEventHandler sseh;
void SysEventsCheck(object sender, SessionSwitchEventArgs e)
{
switch (e.Reason)
{
case SessionSwitchReason.SessionLock: Console.WriteLine("Lock Encountered"); break;
case SessionSwitchReason.SessionUnlock: Console.WriteLine("UnLock Encountered"); break;
}
}
public void Run()
{
sseh = new SessionSwitchEventHandler(SysEventsCheck);
SystemEvents.SessionSwitch += sseh;
}
#region IDisposable Members
public void Dispose()
{
SystemEvents.SessionSwitch -= sseh;
}
#endregion
}
but at the end I'm going to need this boolean in my Java Program.
I already tried the following:
I started both programs; C# writes into a file, and every few seconds I check from Java whether the data has changed (needless to say, this solution is slow and insufficient).
Another solution would be :
Java starts the C# .exe which waits until Java connects to it through sockets and they share the data over the open connection.
Is there a better way to solve this with less effort than with this socket interface solution?
You don't have to go to any complicated lengths to get this done. It can be quite simple.
Save the boolean into a file in C#, then have a file watcher watching the directory in Java. If there is a change, Java can read the file and find the value of the boolean. Such a solution is not expensive and does not eat up CPU cycles, unlike a solution with a while loop that polls the file repeatedly.
The beginnings of the Java code can be as simple as
import static java.nio.file.StandardWatchEventKinds.*;

Path dir = ...;
try {
    WatchService watcher = FileSystems.getDefault().newWatchService();
    WatchKey key = dir.register(watcher,
                                ENTRY_CREATE,
                                ENTRY_DELETE,
                                ENTRY_MODIFY);
} catch (IOException x) {
    System.err.println(x);
}
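From there, a hedged sketch of the watch loop (the file name status.txt and its one-line true/false format are assumptions about what the C# side would write, not something it already does):
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import static java.nio.file.StandardWatchEventKinds.ENTRY_MODIFY;

static void watchStatus(WatchService watcher, Path dir) throws IOException, InterruptedException {
    while (true) {
        WatchKey key = watcher.take();               // blocks until something changes, no busy polling
        for (WatchEvent<?> event : key.pollEvents()) {
            if (event.kind() == ENTRY_MODIFY
                    && event.context().toString().equals("status.txt")) {
                List<String> lines = Files.readAllLines(dir.resolve("status.txt"));
                boolean locked = !lines.isEmpty() && Boolean.parseBoolean(lines.get(0));
                System.out.println("Workstation locked: " + locked);
            }
        }
        key.reset();                                 // re-arm the key for further events
    }
}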
There are lots of possible solutions to this issue. My personal preference would be to use a message queue to post messages between the applications. (http://zeromq.org/ is light and would be my recommendation)
The advantage of this approach is that the two applications are decoupled, and it's not relying on the filesystem, which is notoriously prone to errors.
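If you went that way, the Java side could be as small as this sketch (assuming the JeroMQ binding, an arbitrary port, and a made-up message format; the C# side would publish through a binding such as NetMQ):
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

static void listenForSessionEvents() {
    try (ZContext ctx = new ZContext()) {
        ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
        sub.connect("tcp://localhost:5556");       // port is an arbitrary choice
        sub.subscribe("".getBytes(ZMQ.CHARSET));    // receive every published message

        while (!Thread.currentThread().isInterrupted()) {
            String message = sub.recvStr();         // e.g. "locked" / "unlocked"
            System.out.println("Session state: " + message);
        }
    }
}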
To call a function that is written in C# (or any .NET library function) from Java, you can use JNI.
However, all JNI will do is get you to C/C++. You will need to write a simple managed C++ object that can forward requests from the unmanaged side to the .NET library.
Example Here
So here is my problem:
for a project I had to create a custom linked list whereby I had to add nodes to it and save/load it to and from the disk using serialisation
here are some things about my system before I define the problem
I have a generic 'customer file' which acts as the node data
this is stored in a node object which acts as an element of the list
there is a customer file class where the information such as name and email address are stored as well as the various get and set methods for each - these work fine
there is a node class with get and set data and next methods for each whereby the next item is a node object and acts as the next item in the list
there is a singly linked class with add, remove, modify, sort, search etc... methods - IT IS A CUSTOM MADE CLASS AND SO DOES NOT IMPLEMENT ANY JAVA PREMADE LISTS.
a lot of testing has been on all classes separately and used together - these methods are foolproof - they work
there is a main class which is used for the main interface between the user and the system - it is a CLI system (command line)
it has a save list to file function and load list from file function (in the main class) whereby each function uses serialization or deserialization to save/load the list from the disk
all classes implement the serializable interface
there is a 'MAIN' method in the main class whereby a while loop operates which allows the user to modify the list in some way (eg add a record, remove a record etc...)
the list is loaded outside the loop so it is not cleared each time the loop iterates (a common problem amongst colleagues)
I have a password for the system, whereby identical methods are used to save a string to another file location, and that has worked for weeks - the password is saved at that location and can be accessed, changed and removed at will
these load/save methods have the appropriate try/catch methods to catch any exceptions
The problem is that each time I load up my programming environment and want to look at the list, I find that the list on file is 'empty' and contains no records from when I last added/removed stuff.
I add records and modify the list - easy peasy, as the other classes work - and once these are added, I call the print function which simply displays all items in the list, and it is fine.
However, the minute I close the environment they are lost, and when I reopen the environment to look at the list again, it is empty!
Upon looking in the folder where these classes are saved, I have noticed each time I run the program that 'shells' are created and remain there until the program is closed/finished; however the 'listData.ser' file, which should hold the linked list, does not have any data.
Likewise the password file contains the password, which was saved fine - so I am a little confused as to why my code does not work.
Here is my save list method:
private static void saveListToFile(SinglyLinkedList lst) {
try {
ObjectOutputStream os = new ObjectOutputStream(new FileOutputStream("ListData.ser"));
os.writeObject(lst);
os.flush();
os.close();
}
catch (FileNotFoundException e) {
e.printStackTrace();
}
catch (IOException e) {
e.printStackTrace();
}
}
Likewise the load list method is similar but uses object input stream and file input stream.
Any suggestions?
P.S. My main while loop is over 400 lines of code long and therefore not feasible to post.
Update 1.
Deserialization code in load list method:
private static SinglyLinkedList loadListFromFile() {
SinglyLinkedList lst = null;
try {
ObjectInputStream is = new ObjectInputStream(new FileInputStream("ListData.ser"));
lst = (SinglyLinkedList) is.readObject();
is.close();
}
catch(FileNotFoundException e) {
e.printStackTrace();
}
catch(IOException e) {
e.printStackTrace();
}
catch(ClassNotFoundException e) {
e.printStackTrace();
}
return lst;
}
I don't think the singly linked list class itself is the problem (response to comment), and it is not feasible to copy as it is also over 300 lines of code (lots of methods).
Have you tried calling close() on the FileOutputStream when you are done writing the file/object?
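For what it's worth, a try-with-resources version of the save method makes it impossible to forget the close; this is just a sketch of the method from the question, not a behavioural change:
private static void saveListToFile(SinglyLinkedList lst) {
    // try-with-resources flushes and closes the stream even if writeObject throws
    try (ObjectOutputStream os = new ObjectOutputStream(new FileOutputStream("ListData.ser"))) {
        os.writeObject(lst);
    } catch (IOException e) {
        e.printStackTrace();
    }
}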
I have solved it; instead of posting my code I tried to do it myself. It turns out there were a few static members in the list class (static state is not serialised) - these were changed to non-static, and now the list saves as expected each time.
Thanks for the help
In my experience, and after repeated tests and deep web research, I've found that the major Java libraries (whether "Apache Commons", Google Commons or Jcifs) don't anticipate the case of a "cyclic copy" of a file onto a destination that is mapped differently (i.e. has a different root path, in terms of the newer java.nio Path class) but that, at the end of the mapping chain, resolves to the origin file itself.
That is a data-loss situation, because neither the OutputStream approach nor NIO's getChannel approach guards against this case: the origin file and the destination file are in reality "the same file", and the result of these methods is that the file is lost, or more precisely truncated to 0 length.
How can one avoid this without dropping down to a lower filesystem level, or surrendering to a "safer" Runtime.exec that delegates the work to the underlying OS?
Should I lock the destination file (the above methods don't allow this), perhaps with the aid of the older RandomAccessFile class?
You can test this with any of the cited libraries and a common "CopyFile(File origin, File dest)" method after having done one of the following:
1) the origin folder of the file c:\tmp\test.txt mapped to the x: virtual drive via cmd's [SUBST x: c:\tmp], then trying to copy onto x:\test.txt
2) a similar case where the local folder c:\tmp has been shared via the Windows share mechanism and the destination is represented as a UNC path ending with the same file name
3) other similar network situations ...
I think there must be a better solution, but my experience with Java is fairly limited, so I'm asking all of you. Thanks in advance if you're interested in this "real world" discussion.
Your question is interesting, never thought about that. Look at this question: Determine Symbolic Links. You should detect the cycle before copying.
Perhaps you can approach this problem slightly differently and try to detect that the source and destination files are the same by comparing the files' metadata (name, size, date, etc.), and perhaps even calculating a hash of the files' content as well. This would of course slow processing down.
If you have enough permissions, you could also write a 'marker' file with a random name at the destination and try to read it at the source to detect that they point to the same place. Or check whether the file already exists at the destination before copying.
I agree that it is an unusual situation, but you will agree that files are a critical base of every IT system. I disagree that manipulating files in Java is unusual: in my case I have to attach image files of products through a FileChooser and copy them in an ordered way to a repository ... but real world users (call them customers who buy your product) may fall into such situations, and if it happens, one cannot blame the devil of bad luck if your product does something "less" than expected.
It is good practice to learn from experience and to try to avoid what one of Murphy's Laws says, more or less: "if something CAN go wrong, it WILL go wrong sooner or later."
Perhaps that is one of the reasons the Java team at Sun and Oracle enhanced the old java.io package into the newer java.nio. I'm analyzing the new java.nio.file.Files class, which had escaped my attention, and I believe I've found the solution I wanted and expected. See you later.
Thanks to the pointers from other experienced members of the community, and thanks also to a young member of my team, Tindaro, who helped me with the research, I've found the real solution in JDK 1.7: it is reliable, fast, simple, and will almost definitively draw a pitying veil over the older java.io solutions. Although the web is still full of examples of copying files in Java using In/Out streams, I warmly suggest everyone use the simple method java.nio.file.Files.copy(Path origin, Path destination), with optional parameters for replacing the destination, migrating file metadata attributes, and even attempting a transactional move of files (if permitted by the underlying OS).
That's a really good job, and one we've waited for so long!
You can easily convert code from copy(File file1, File file2) by appending ".toPath()" to the File instances (e.g. file1.toPath(), file2.toPath()).
Note also that the boolean method isSameFile(file1.toPath(), file2.toPath()) is already used inside the above copy method, but is easily usable in every case you want.
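A hedged sketch of that combination (the paths and the safeCopy name are examples, not library API): guard the copy with Files.isSameFile so a cyclically mapped destination is detected before anything is truncated.
import java.io.IOException;
import java.nio.file.*;

static void safeCopy(Path origin, Path destination) throws IOException {
    // isSameFile resolves mappings like SUBST drives and UNC shares to the real file
    if (Files.exists(destination) && Files.isSameFile(origin, destination)) {
        return; // copying a file onto itself would lose data, so skip it
    }
    Files.copy(origin, destination, StandardCopyOption.REPLACE_EXISTING,
               StandardCopyOption.COPY_ATTRIBUTES);
}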
For cases where you can't upgrade to 1.7, using the community libraries from Apache or Google is still suggested, but for reliability, permit me to suggest the temporary workaround I found earlier:
public static boolean isTheSameFile(File f1, File f2) {
    // minimum prerequisites!
    if (f1.length() != f2.length()) return false;
    if (!f1.exists() || !f2.exists()) { return false; }
    if (f1.isDirectory() || f2.isDirectory()) { return false; }
    //if (f1.getCanonicalFile().equals(f2.getCanonicalFile())); // don't rely on this! it can still fail
    //new FileInputStream(f2).getChannel().lock(); // exception: locking is only allowed on a writable channel
    RandomAccessFile rf1 = null, rf2 = null; // the only practicable solution I found ... better than parsing entire files
    try {
        rf1 = new RandomAccessFile(f1, "r");
        rf2 = new RandomAccessFile(f2, "rw");
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        return false;
    }
    try {
        rf2.getChannel().lock();
    } catch (IOException e) {
        try { rf1.close(); } catch (IOException e1) {}
        try { rf2.close(); } catch (IOException e1) {}
        return false;
    }
    try {
        rf1.getChannel().read(ByteBuffer.allocate(1)); // reads only 1 byte
    } catch (IOException e) {
        // if and only if they are the same file, the OS throws an IOException ("file already in use")
        try { rf1.close(); } catch (IOException e1) {}
        try { rf2.close(); } catch (IOException e1) {}
        return true;
    }
    // close the still-open resources ...
    if (rf1.getChannel().isOpen())
        try { rf1.getChannel().close(); } catch (IOException e) {}
    try {
        rf2.close();
    } catch (IOException e) {
        return false;
    }
    // done, the files differ
    return false;
}