I started with the requirement of reading and writing files in a directory on a remote Ubuntu machine.
First, I wrote a Java program that could read and write files in a shared folder on a remote Windows machine, i.e. on a LAN. Here, something like this works on my (local) Windows machine:
File inputFile = new File(
        "\\\\172.17.89.76\\EBook PDF"); /* the location is just for the idea; backslashes are doubled to escape them in a Java string literal */
Now when I consider a remote Ubuntu machine, I obviously cannot do something like this, as the machine is not on the LAN (and I'm not sure it could be done even if it were!). Hence, I tried the following approaches:
Using JSch, establishing trust between the two machines (local - remote Linux, remote Linux - remote Linux) and writing files using SFTP. (done)
Running sockets on the two machines - one sender, one receiver (both Java). (done)
Attempting to achieve I/O like the code snippet above for Windows (LAN) machines. (not achieved)
While doing all this, I had many queries, read many posts, etc., and I felt that I was missing something fundamental:
Some sort of trust-building utility (between the two machines) will be required to achieve I/O. But ultimately, I want to write code like the snippet given, irrespective of the machines, the network, etc.
The JSch solution and the others suggested (usage of HTTP, FTP, etc. over a URL) ultimately rely on some service that is running on the remote machine. In other words, it is NOT Java I/O that is being used to access the remote file system - this doesn't appeal to me, as I'm relying on services rather than using good old I/O.
Samba and SSHFS also popped onto the scene, only to add to my confusion. But I don't see them as solutions to my objective!
To reiterate, I want to write code using Java I/O (either plain I/O or NIO, both are fine) which can simply read and write remote files without using services over protocols like FTP, HTTP, etc. or a socket sender-receiver model. Is my expectation valid?
If not, why not, and what is the best I can do to read/write remote files using Java?
If yes, how do I achieve it?
P.S.: Please comment in case I need to elaborate to pose my question more accurately!
To answer your question - No, your expectation isn't valid.
Retrieving files from a remote server is inherently reliant on the services running on that server. To retrieve a file from a remote server, the remote server needs to be expecting your request for a file.
The cases you listed in your question (using JSch and SFTP, using sender and receiver Java sockets), which you have already achieved, are essentially the same as this:
File inputFile = new File(
        "\\\\172.17.89.76\\EBook PDF");
The only difference is that Java is using the native OS's built-in support for reading from a Windows-style share. The remote Windows machine has a sharing service running on it (just like Samba on Linux, or a Java socket program) waiting for your request.
From the Java API docs on File (http://docs.oracle.com/javase/6/docs/api/java/io/File.html)
The canonical pathname of a file that resides on some other machine and is accessed via a remote-filesystem protocol such as SMB or NFS ...
So essentially "Good old Java I/O" is more or less just a wrapper over some common protocols.
To answer the second part of your question (what is the best I can do to read/write remote files using Java?), that depends on what remote system you are accessing and, more importantly, what services are running on it.
In the case of your target remote machine being an Ubuntu machine, I would say the best alternative would be to use JSch. If your target machine can be either a Windows machine or a Linux machine, I would probably go for running Java sockets on the two machines (obviously dependent on whether you can install your app on the remote machine).
Generally speaking, go with the common lowest denominator between your target systems (in terms of file sharing protocols).
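For the Ubuntu case, a minimal JSch/SFTP sketch along those lines might look like the following (host, credentials and paths are placeholders, not taken from the question):

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SftpCopy {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "remote.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // fine for a quick test, not for production
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        sftp.put("local.pdf", "/home/user/ebooks/local.pdf");  // upload
        sftp.get("/home/user/ebooks/other.pdf", "other.pdf");  // download
        sftp.disconnect();
        session.disconnect();
    }
}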
If you want to access a filesystem on a remote computer, then that computer has to make its filesystem available via a service. Such a service is typically a background job which handles incoming requests and returns responses, e.g. for authentication, authorization, reading and writing. The specification of the request/response pattern is called a protocol. Well-known examples are SMB on Windows (implemented by Samba on Unix/Linux) and NFS on UNIX/Linux. To access such a remote service, you mount the remote filesystem at the level of the operating system and make it available locally, as a drive letter on Windows or as a mount point on UNIX.
Then you can access the remote file system from your Java program like any local file system.
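For instance, once the remote export is mounted by the OS (the mount point below is hypothetical), plain java.nio.file calls look exactly like local I/O:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MountedShareDemo {
    public static void main(String[] args) throws Exception {
        // /mnt/remote is assumed to be an NFS or SMB mount configured at the OS level
        Path file = Paths.get("/mnt/remote/reports/today.txt");
        Files.write(file, "hello from Java".getBytes(StandardCharsets.UTF_8));
        String content = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        System.out.println(content);
    }
}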
Of course it is also possible to write your own file service provider (with your own protocol layer) and run it on the remote machine. Sockets (TCP/IP) can be used as the transport layer for such an endeavour. Another good option would be HTTP, e.g. with a RESTful service or something based on WebDAV.
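A bare-bones sketch of such a hand-rolled service over TCP sockets could look like this (no authentication or error handling; the host, port and paths are invented for illustration):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class TinyFileService {
    // Runs on the remote machine: waits for a path and streams that file back.
    public static void server() throws Exception {
        try (ServerSocket server = new ServerSocket(9090);
             Socket client = server.accept()) {
            DataInputStream in = new DataInputStream(client.getInputStream());
            String requestedPath = in.readUTF();
            Files.copy(Paths.get(requestedPath), client.getOutputStream());
        }
    }

    // Runs locally: asks for a file and saves the reply.
    public static void client() throws Exception {
        try (Socket socket = new Socket("remote.example.com", 9090)) {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            out.writeUTF("/home/user/data/report.txt");
            out.flush();
            Files.copy(socket.getInputStream(), Paths.get("report.txt"),
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }
}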
We used SSHFS. You can add this line to /etc/fstab:
sshfs#user@remoteAddress:remoteDir /mnt/ssh fuse defaults 0 0
and then run mount /mnt/ssh.
I think RMI might be the solution. You could set up an RMI server on the machine you want to connect to, and use your machine as the client.
I would give the client a path to the file; this would be sent to the server, and the server could then read in the file as bytes and send the file back to the client.
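A minimal sketch of that RMI idea (the interface name, registry port, host and paths are made up for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Shared remote interface (must be on both classpaths)
interface FileService extends Remote {
    byte[] readFile(String path) throws IOException;
}

// Server side: reads the file as bytes and returns them to the caller
class FileServiceImpl extends UnicastRemoteObject implements FileService {
    FileServiceImpl() throws RemoteException { }
    public byte[] readFile(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }
}

public class FileRmiDemo {
    // Run on the machine that owns the files
    public static void startServer() throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("fileService", new FileServiceImpl());
    }

    // Run on the client machine
    public static byte[] fetchFromClient() throws Exception {
        Registry registry = LocateRegistry.getRegistry("remote.example.com", 1099);
        FileService service = (FileService) registry.lookup("fileService");
        return service.readFile("/home/user/data/report.txt");
    }
}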
Okay, so I'm unsure how mounts over a network work with file locks.
This is the scenario:
There are two JVMs, each running on a machine of its own (both Linux).
Then there is a file share, on a third machine (Windows).
Both of the machines running a JVM have mounted the same Windows file share using CIFS/Samba.
If JVM-1 takes a lock on a file in its "local network mount" (or however one phrases it), using the FileLocker from Spring Integration for example, will JVM-2 recognise that lock?
Or will the lock only be taken on the file locally on the Linux machine, even though it is mounted from a network share and is bound somehow to a file on the Windows machine?
The NIOFileLocker essentially works properly only on Windows. It doesn't matter how you mount that remote Windows dir, you still work from Linux. Moreover, you said it yourself: you deal with the files via the SMB protocol - there is no local filesystem involved on which NIOFileLocker would have an effect.
See the Spring Integration SMB support: https://docs.spring.io/spring-integration/docs/current/reference/html/smb.html#smb and consider using an SmbPersistentAcceptOnceFileListFilter backed by some shared persistent DB: https://docs.spring.io/spring-integration/docs/current/reference/html/meta-data-store.html#metadata-store. The filter will look into the store to check whether a file has already been processed by some other instance. This is essentially the distributed file locking you are looking for.
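A rough sketch of wiring such a filter against a shared JDBC metadata store might look like this (the DataSource, bean names and key prefix are assumptions, not taken from the docs linked above):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.metadata.JdbcMetadataStore;
import org.springframework.integration.metadata.ConcurrentMetadataStore;
import org.springframework.integration.smb.filters.SmbPersistentAcceptOnceFileListFilter;

@Configuration
public class SharedFileFilterConfig {

    // Both application instances point at the same database,
    // so the "already processed" state is shared between them.
    @Bean
    public ConcurrentMetadataStore metadataStore(DataSource dataSource) {
        return new JdbcMetadataStore(dataSource);
    }

    @Bean
    public SmbPersistentAcceptOnceFileListFilter smbFilter(ConcurrentMetadataStore store) {
        // The prefix namespaces the entries this filter writes into the store.
        // The filter would then be set on the SMB inbound channel adapter / remote file synchronizer.
        return new SmbPersistentAcceptOnceFileListFilter(store, "smb-");
    }
}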
I have an NFS server on which I need to host files and read them. The approach I found for reading and writing files on an NFS server is
using an NFS client, like here.
My question is: when we can write content to an NFS server with a normal Java read/write program, why was an NFS client introduced? Is there any service specific to NFS which these clients provide, and why is it different from the normal file creation process?
When you're using the normal Java API to access an NFS folder, all communication is actually handled by your OS. So you can just use the normal File API, and Java doesn't know whether it's accessing a local file or a remote one. But in cases where your OS doesn't support NFS (e.g. if your Java app is running in an environment with limited resources, or NFS mounting is disabled at the OS level), or you are developing an application that needs lower-level details about the NFS resource (e.g. when you're developing a framework or a middleware), you may need to communicate directly with the server that is exposing files/folders, via a library like nfs-client-java.
I have two servers, one that runs my program written in Java (Server A) and one that stores a graph (Server B) which must be continuously accessed by Server A. To access Server B you must SSH in with a username and password, using Server B's IP address.
As of now I have failed to find a method to continuously access a directory on a different server, and I am wondering if anyone knows a way to do this (or, if it is not possible, whether there is a workaround).
I have looked into SSH libraries, but they all seem to give you access to the directory for only a brief amount of time. I need continuous access because I write to and read from the graph on Server B all the time.
I basically want to make a proxy directory on Server A that actually refers/links to the directory on Server B:
graphDb = new EmbeddedGraphDatabase("/192.168.1.**/media/graphDB");
Any help would be great.
Probably unrelated option:
If the client and server are Linux machines, you can use rsync to synchronize files between them. That way you have a copy of the files on Server A. The rsync command could be executed from the Java program or periodically from a cronjob on Server A (see the sketch after these options).
You could write your own client/server service, so that the server side provides you with the means to send data over the network. It tends to be a lot of work though.
You could write yourself a "heartbeat" service on the client that tests the SSH connection and re-establishes it if it closes.
You could "test" the SSH connection before writing to / reading from the connection.
You could do as AlperAkture suggests (and mount the directory as a remote drive)
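For the rsync option above, a minimal way to invoke it from Java could be a ProcessBuilder call like this (host, directories and flags are placeholders; it assumes rsync and key-based SSH are already set up between the machines):

import java.io.IOException;

public class RsyncPull {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "rsync", "-az",
                "user@serverB:/media/graphDB/", // remote source directory
                "/local/graphDB/");             // local copy on Server A
        pb.inheritIO();                          // show rsync output in this process's console
        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            throw new IOException("rsync failed with exit code " + exitCode);
        }
    }
}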
The question is also related to Linux, but the solution is needed in Java. So I have a data directory
/somedir/data
on a Linux server
servername
I can SSH to the server and do anything I want, but only from the deployment machine (due to the public/private keys in place). But there's a Java process that should read files from that directory. How can I make it read those files? I was trying to use File("//servername/somedir/data") with no success. Any help would be appreciated.
You must share the file using one of the network file services.
For example:
NFS (check with showmount -e);
Samba (check with smbclient -L);
AFS;
HTTP/FTP (check first whether there is an HTTP/FTP server on the host).
You can also access the files using SSH (you say that you have an SSH connection to the host, which means that SSH is accessible anyway).
If you want to connect to the SSH server from a Java program,
you can use (for example) JSch for that.
An example of JSch usage is here.
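For completeness, a small JSch sketch that uses the existing key-based SSH access to read one file from that directory might look like this (the key path, user and file name are made up):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class RemoteCat {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/home/deploy/.ssh/id_rsa");        // the existing private key
        Session session = jsch.getSession("deploy", "servername", 22);
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();

        ChannelExec exec = (ChannelExec) session.openChannel("exec");
        exec.setCommand("cat /somedir/data/some-file.txt");   // read one file from the data dir
        InputStream in = exec.getInputStream();
        exec.connect();

        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        exec.disconnect();
        session.disconnect();

        System.out.println(buffer.toString("UTF-8"));
    }
}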
I want to establish a connection to my UNIX file system from a Java program, so that I can do some file I/O operations; normally I connect using PuTTY.
How can I do the same from a Java program?
I have the host name, username, password and port number.
Help appreciated :)
You need several things:
A server that takes commands (create directory, list directory, write data to a file, read data from a file) over the network. This server should listen on port1, bound to localhost on the remote machine.
You need to configure PuTTY to forward port2 on your local computer to port1 on the server.
A local client which allows you to connect to port2 on your local computer. PuTTY will tunnel any data sent to port2 to port1 on the remote server, and vice versa.
Or you get WinSCP, which uses the SSH protocol (just like PuTTY) and maybe already does what you want.
There's a pure Java implementation of SSH/SCP available: http://www.cleondris.ch/opensource/ssh2/
You can use its SCPClient or SFTPv3Client classes to work on the remote file system.
Documentation is available at http://www.cleondris.ch/opensource/ssh2/javadoc.
If you want to do it from Java, you can use Apache Commons VFS. It provides a common approach to dealing with files on all of the supported file systems. SFTP is one of the supported types, which is most likely what you need if you have been connecting with PuTTY.
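A small Commons VFS sketch for the SFTP case might look like this (the URI, credentials and path are placeholders; the SFTP provider additionally requires JSch on the classpath):

import java.io.InputStream;

import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;

public class VfsSftpDemo {
    public static void main(String[] args) throws Exception {
        FileSystemManager manager = VFS.getManager();
        // The same resolveFile() call works for local files, FTP, SFTP, HTTP, ...
        FileObject remote = manager.resolveFile(
                "sftp://user:secret@host.example.com/home/user/notes.txt");
        try (InputStream in = remote.getContent().getInputStream()) {
            int b;
            while ((b = in.read()) != -1) {
                System.out.write(b);
            }
            System.out.flush();
        }
        remote.close();
    }
}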
You need an SSH client. There are various pure Java SSH clients; Google "java ssh client" and try any of them. I used JSch (http://www.jcraft.com/jsch/) and it worked fine for me.