Why is an NFS client introduced? - java

I have an NFS server on which I need to host files and read them. The approach I found to read and write files on an NFS server is
to use an NFS client, like here.
My question is: when we can write content to an NFS server with a normal Java read/write program, why was an NFS client introduced? Is there a service specific to NFS that these clients provide, and how is it different from the normal file creation process?

When you use the normal Java API to access an NFS folder, all communication is actually handled by your OS, so you can just use the normal File API: Java doesn't know whether it's accessing a local file or a remote one. But in cases where your OS doesn't support NFS (e.g. your Java app runs in an environment with limited resources, or NFS mounting is disabled at the OS level), or where you are developing an application that needs lower-level access to the NFS resource (e.g. a framework or middleware), you may need to communicate directly with the server that exposes the files/folders, via a library like nfs-client-java.
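For illustration, here is a minimal sketch of the "normal" path, assuming the export is already mounted by the OS at a hypothetical mount point /mnt/nfs. Plain java.nio calls are all you need, and nothing in the code is NFS-specific:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;

public class NfsViaOsMount {
    public static void main(String[] args) throws Exception {
        // /mnt/nfs is assumed to be an NFS export mounted by the OS;
        // to Java it is indistinguishable from a local directory.
        Path file = Paths.get("/mnt/nfs/notes.txt");
        Files.write(file, Collections.singletonList("hello over NFS"));
        System.out.println(Files.readAllLines(file));
    }
}

A library like nfs-client-java instead speaks the NFS protocol itself over the network, so it works even where no kernel mount exists; the trade-off is that your application code has to manage the connection and credentials.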

Related

Mounting Server Folder - Accessing files through application server

I have 4 Linux machines: applications run on 3 of them, and 1 is a Linux file server where I have stored files.
Application server 1 is at 192.168.1.101:8080
Application server 2 is at 192.168.1.102:8080
Application server 3 is at 192.168.1.103:8080
The file server has IP 195.168.1.108 and contains an images directory where the images are stored.
The file server's images folder has been mounted on application servers 1, 2 and 3 as /images.
Assume a common domain name is configured, www.imageprocessing.com, internally routed with load balancing to any of the application servers.
Now, will I be able to access the above images as www.imageprocessing.com/images/1.jpeg? I want to access the images via a direct URL.
Please advise.
Thanks
What makes you think that's even a good idea? The security implications would be tremendous if this were so simply possible.
You're going to need some sort of process running on the web servers that retrieves the images from the other machine and serves them to the client. That means you also need security set up to allow that process access to that machine (possibly through sftp or scp, or through a web service running on the machine hosting the images).
There is possibly a very crude way: mapping the remote directory on the image-hosting server through a symlink into the static-content directory of the web servers, but that's a major security risk.
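To make the "process on the web servers" idea concrete, here is a minimal sketch of a servlet that streams a requested image from the mounted /images directory (the URL mapping and mount path come from the question; everything else is a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Serves www.imageprocessing.com/images/1.jpeg from the mounted /images folder.
@WebServlet("/images/*")
public class ImageServlet extends HttpServlet {
    private static final Path ROOT = Paths.get("/images");

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String rel = req.getPathInfo();
        if (rel == null || rel.length() <= 1) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        // Normalize to block path traversal such as /images/../etc/passwd.
        Path file = ROOT.resolve(rel.substring(1)).normalize();
        if (!file.startsWith(ROOT) || !Files.isReadable(file)) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType(getServletContext().getMimeType(file.toString()));
        Files.copy(file, resp.getOutputStream());
    }
}

Note the normalize/startsWith check: serving a mounted directory without it is exactly the kind of security risk mentioned above.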

How does a FileLock work over the network? Will it only lock a local file in the mount, or will the lock "propagate" to the file on another machine?

Okay, so I'm unsure how mounts over a network work with file locks.
This is the scenario:
There are two JVMs, each running on a machine of its own (both Linux).
Then there is a file share on a third machine (Windows).
Both machines running a JVM have mounted the same Windows file share using CIFS/Samba.
If JVM-1 takes a lock on a file, using the FileLocker from Spring Integration for example, in its "local network mount" (or however to phrase it), will JVM-2 recognise that lock?
Or will the lock only be taken on that file local to the Linux machine, even though it is network share mounted and is bound somehow to a file on the Windows machine?
The NIOFileLocker essentially works properly only on Windows. It doesn't matter how you mount that remote Windows dir; you still work from Linux. Moreover, you said it yourself: you deal with files via the SMB protocol, so there is no local filesystem on which the NIOFileLocker would have an effect.
See the Spring Integration SMB support: https://docs.spring.io/spring-integration/docs/current/reference/html/smb.html#smb and consider using an SmbPersistentAcceptOnceFileListFilter based on some shared persistent DB: https://docs.spring.io/spring-integration/docs/current/reference/html/meta-data-store.html#metadata-store. The filter checks the store to see whether a file has already been processed by some other instance. This is essentially the distributed file locking you are looking for.
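A rough sketch of that filter-based approach, assuming spring-integration-smb and spring-integration-jdbc are on the classpath (the DataSource and the key prefix are placeholders):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.metadata.JdbcMetadataStore;
import org.springframework.integration.metadata.ConcurrentMetadataStore;
import org.springframework.integration.smb.filters.SmbPersistentAcceptOnceFileListFilter;

@Configuration
public class SmbFilterConfig {

    // Both JVMs point at the same database, so a file accepted by one
    // instance is seen as already processed by the other.
    @Bean
    public ConcurrentMetadataStore metadataStore(DataSource dataSource) {
        return new JdbcMetadataStore(dataSource);
    }

    // Plug this filter into the SMB inbound adapter's filter attribute.
    @Bean
    public SmbPersistentAcceptOnceFileListFilter smbFilter(ConcurrentMetadataStore store) {
        return new SmbPersistentAcceptOnceFileListFilter(store, "smb-");
    }
}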

Using apache sshd and commons vfs on windows

I'm currently working on a file agent that will be deployed on both Linux and Windows machines to perform unified file transfer. It generally consists of an SSHD server and a VFS manager. Usually one agent uses the VFS manager to connect to the SFTP server on another agent and manipulate files on it.
The obstacle I just encountered is that the Windows file system differs from Linux's in that it typically has multiple roots (drives). Although the root path of SSH can be configured using the FileSystemFactory, it can't be changed at runtime, which makes it impossible to access other drives after server bootstrap.
When using VFS to connect to the SFTP subsystem of another agent, as expected it can only resolve files in the drive where its root path resides. But WinSCP doesn't seem restricted by this and can change both the current directory and the drive when connected.
I wonder whether it is possible to construct a virtual FileObject corresponding to the / of a Linux file system and access the different drives as if they were folders under that root. Or is there another way to acquire FileObjects on other drives?
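For context, this is roughly how the fixed root gets configured at bootstrap with Apache MINA SSHD; a minimal sketch (package names follow recent MINA SSHD releases and may differ in older ones; authentication is an accept-all stub for brevity):

import java.nio.file.Paths;
import java.util.Collections;
import org.apache.sshd.common.file.virtualfs.VirtualFileSystemFactory;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.sftp.server.SftpSubsystemFactory;

public class AgentSshd {
    public static void main(String[] args) throws Exception {
        SshServer sshd = SshServer.setUpDefaultServer();
        sshd.setPort(2222);
        sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
        sshd.setPasswordAuthenticator((user, pass, session) -> true); // demo only
        sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
        // The SFTP root is fixed to a single drive here and cannot be
        // switched at runtime, which is exactly the limitation described.
        sshd.setFileSystemFactory(new VirtualFileSystemFactory(Paths.get("C:/")));
        sshd.start();
        Thread.sleep(Long.MAX_VALUE);
    }
}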

Writing files to windows machine from Broker (Unix)

I have a requirement to create/append a file on a Windows machine from WebSphere Message Broker Toolkit v7.0 (Unix). The Unix user does not have permission to access the Windows machine. I wanted to write Java code that can create or append to a file with other credentials that do have access to the Windows machine (not FTP; it's a shared drive on the same network but in a different group).
I found some solutions, which the client doesn't want to use, whatever the constraints:
Creating an NFS mount point and writing to that mount point location.
Using the SAMBA framework.
Can anyone suggest something other than this ?
Thanks in advance.
Run a WebSphere MQ Managed File Transfer agent on the Windows host. The broker can then simply send files to the agent, which will write them to the local filesystem on the Windows host. This capability is built into modern versions of MQ.

Basics - reading/writing remote files using Java

I started with the requirement of reading and writing files from/in a directory on a remote Ubuntu machine.
First, I wrote a Java program that could read and write files from a shared folder on a remote Windows machine, i.e. on a LAN. Here, something like this works on my (local) Windows machine:
File inputFile = new File("\\\\172.17.89.76\\EBook PDF"); // UNC path; the backslashes must be escaped in a Java string literal
Now when I consider a remote Ubuntu machine, obviously I cannot do something like this, as the machine is not on the LAN (I'm not sure whether that can be done even if it is on the LAN!). Hence, I tried the following approaches:
Using JSch, establishing trust between the two machines (local - remote Linux, remote Linux - remote Linux) and writing files using SFTP. (done)
Running sockets on the two machines - one sender, one receiver (both Java). (done)
Attempting to achieve I/O like the code snippet for Windows (LAN) machines. (not achieved)
While doing all this, I had many queries, read many posts, etc., and I felt that I was missing something fundamental:
Some sort of trust-building utility (between the two machines) will be required to achieve I/O. But ultimately, I want to write code like the snippet given, irrespective of the machines, network, etc.
The JSch solution and the others suggested (usage of HTTP, FTP, etc. over URL) ultimately use services that are running on the remote machine. In other words, it is NOT Java I/O that is accessing the remote file system - this doesn't appeal to me, as I'm relying on services rather than using good old I/O.
Samba and SSHFS also popped onto the scene, only to add to my confusion. But I don't see them as solutions to my objective!
To reiterate, I want to write code using Java I/O (either plain I/O or NIO, both are fine) which can simply read and write remote files without using services over protocols like FTP or HTTP, or a socket sender-receiver model. Is my expectation valid?
If not, why not, and what is the best I can do to read/write remote files using Java?
If yes, how do I achieve it?
P.S : Please comment in case I need to elaborate to pose my question accurately !
To answer your question - no, your expectation isn't valid.
Retrieving files from a remote server is inherently reliant on the services running on that server. To retrieve a file from a remote server, the remote server needs to be expecting your request for a file.
The cases you listed in your question (using JSch and SFTP, using sender and receiver Java sockets), which you have already achieved, are essentially the same as this:
File inputFile = new File("\\\\172.17.89.76\\EBook PDF");
The only difference is that Java is using the native OS's built-in support for reading from a Windows-style share. The remote Windows machine has a sharing service running on it (just like Samba on Linux, or a Java socket program) waiting for your request.
From the Java API docs on File (http://docs.oracle.com/javase/6/docs/api/java/io/File.html)
The canonical pathname of a file that resides on some other machine and is accessed via a remote-filesystem protocol such as SMB or NFS ...
So essentially "Good old Java I/O" is more or less just a wrapper over some common protocols.
To answer the second part of your question (what is the best I can do to read/write remote files using Java?), that depends on what remote system you are accessing and, more importantly, what services are running on it.
In the case of your target remote machine being an Ubuntu machine, I would say the best alternative is JSch. If your target machine can be either a Windows machine or a Linux machine, I would probably go for running Java sockets on the two machines (obviously dependent on whether you can install your app on the remote machine).
Generally speaking, go with the lowest common denominator between your target systems (in terms of file sharing protocols).
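Since JSch is the recommendation for the Ubuntu case, here is a minimal sketch of reading a remote file over SFTP (host, credentials and paths are placeholders; strict host-key checking is disabled only to keep the demo short):

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SftpReadExample {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "remote.host.example", 22);
        session.setPassword("secret");
        // Demo only: accept unknown host keys. Use a known_hosts file in real code.
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        sftp.get("/home/user/EBook PDF/book.pdf", "book.pdf"); // remote -> local
        sftp.disconnect();
        session.disconnect();
    }
}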
If you want to access a filesystem on a remote computer, then this computer has to make its filesystem available via a service. Such a service is typically a background job which handles incoming requests and returns responses, e.g. for authentication, authorization, reading and writing. The specification of the request/response pattern is called a protocol. Well-known protocols are SMB (or Samba) on Windows and NFS on Unix/Linux. To access such a remote service, you mount the remote filesystem at the operating-system level and make it available locally as a drive on Windows or as a mount point on Unix.
Then you can access the remote file system from your Java program like any local file system.
Of course, it is also possible to write your own file service provider (with your own protocol layer) and run it on the remote machine. Sockets (TCP/IP) can be used as the transport layer for such an endeavour. Another good transport would be HTTP, e.g. with a RESTful service or something based on WebDAV.
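To make the "own file service provider over sockets" idea concrete, a bare-bones sketch (one file per connection, no protocol negotiation or authentication; the port and path are made up):

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Paths;

// Server side: waits for one connection and streams a fixed file back.
// A client simply connects and reads the bytes from Socket#getInputStream().
public class TinyFileServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000);
             Socket client = server.accept();
             OutputStream out = client.getOutputStream()) {
            // A real provider would first read a request naming the file.
            Files.copy(Paths.get("/srv/files/data.bin"), out);
        }
    }
}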
We used SSHFS. You can add this line to /etc/fstab:
sshfs#user@remoteAddress:remoteDir /mnt/ssh fuse defaults 0 0
and then mount /mnt/ssh
I think RMI might be a solution: you could set up an RMI server on the machine you want to connect to, and use your machine as the client.
I would give the client a path to the file; this is sent to the server, which then reads the file in as bytes and sends them back to the client.
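A minimal sketch of that RMI approach (the service name and port are arbitrary; there is no access control, so treat it as a demo only):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface shared between client and server.
interface FileService extends Remote {
    byte[] readFile(String path) throws RemoteException, IOException;
}

// Server side: reads the requested file and returns its bytes.
public class FileServiceServer implements FileService {
    @Override
    public byte[] readFile(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }

    public static void main(String[] args) throws Exception {
        FileService stub = (FileService) UnicastRemoteObject.exportObject(new FileServiceServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("fileService", stub);
        System.out.println("FileService bound");
    }
}

The client looks the stub up with LocateRegistry.getRegistry(host) and registry.lookup("fileService"), then calls readFile with the remote path.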
