Android NFS Client - Java

I have found a good library for implementing an Android NFS client, 'nfs-client-java'. I'm creating an Nfs3 client, and I can access files, create new files, and so on, on the server. But the problem is that I can't mount the whole shared directory from the server. On a Linux NFS client, I can specify the mount point with
mount -t nfs -o nolock,rw,vers=3 192.168.1.10:/media/user/ /mnt/media_rw/remote
where /mnt/media_rw/remote is where the shared directory will be mounted.
My question is: how can I achieve the same result in an Android app?

In Android app development, there's no mounting at the Linux VFS layer, so you wouldn't be able to achieve exactly the same result.
The closest thing I'm aware of is the documents provider system, https://developer.android.com/reference/android/provider/DocumentsProvider. From the documentation:
A document provider offers read and write access to durable files, such as files stored on a local disk, or files in a cloud storage service.
You'd implement methods such as openFile in your NFS documents provider by, for example, downloading a copy of the file through the library you found, opening it, and returning a ParcelFileDescriptor for it.
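A minimal sketch of what that could look like, staging the file in the app's cache first. The class name and the downloadToCache() helper are hypothetical placeholders for whatever nfs-client-java calls you end up wiring in; note that in DocumentsProvider the method to override is openDocument (ContentProvider's openFile delegates to it), and the query methods are stubbed out here:

import android.database.Cursor;
import android.database.MatrixCursor;
import android.os.CancellationSignal;
import android.os.ParcelFileDescriptor;
import android.provider.DocumentsContract.Document;
import android.provider.DocumentsContract.Root;
import android.provider.DocumentsProvider;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

public class NfsDocumentsProvider extends DocumentsProvider {

    @Override
    public boolean onCreate() {
        return true;
    }

    // Required overrides, stubbed: a real provider would advertise the NFS
    // export as a root here and list remote directories as child documents.
    @Override
    public Cursor queryRoots(String[] projection) throws FileNotFoundException {
        return new MatrixCursor(new String[] { Root.COLUMN_ROOT_ID, Root.COLUMN_DOCUMENT_ID });
    }

    @Override
    public Cursor queryDocument(String documentId, String[] projection)
            throws FileNotFoundException {
        return new MatrixCursor(new String[] { Document.COLUMN_DOCUMENT_ID });
    }

    @Override
    public Cursor queryChildDocuments(String parentDocumentId, String[] projection,
            String sortOrder) throws FileNotFoundException {
        return new MatrixCursor(new String[] { Document.COLUMN_DOCUMENT_ID });
    }

    @Override
    public ParcelFileDescriptor openDocument(String documentId, String mode,
            CancellationSignal signal) throws FileNotFoundException {
        // Stage a local copy of the remote file in the app's cache directory.
        File cached = new File(getContext().getCacheDir(), documentId.replace('/', '_'));
        try {
            downloadToCache(documentId, cached); // hypothetical NFS copy
        } catch (IOException e) {
            throw new FileNotFoundException("NFS read failed for " + documentId);
        }
        // Hand the client a descriptor for the local copy.
        return ParcelFileDescriptor.open(cached, ParcelFileDescriptor.parseMode(mode));
    }

    // Hypothetical helper: copy the remote file to a local one using whatever
    // read API nfs-client-java exposes (e.g. reading from an Nfs3File).
    private void downloadToCache(String documentId, File target) throws IOException {
        throw new IOException("wire up nfs-client-java here");
    }
}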

Related

Is there a way to upload a file to EFS from a local system using Java?

I am new to AWS EFS and am trying to understand how EFS file upload works.
Is there a way to upload files to EFS from a local machine programmatically using Java?
EFS is only accessible from within a VPC; you can't access it directly from outside AWS. So you would have to set up a VPN connection between your home network and your VPC, and then mount the EFS filesystem on your local computer.
AWS EFS is a managed NFS service. Copying files from a local (on-premise) machine requires mounting it through a VPN connection or AWS Direct Connect; AWS provides a guide for this setup.
Once this is done, you can access it just like any other mounted file system, from Java or otherwise.
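For illustration, once the export is mounted locally it really is plain Java file I/O; the /mnt/efs mount point and file paths below are example values, not anything EFS-specific:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class EfsUpload {
    public static void main(String[] args) throws IOException {
        // Assumes the EFS filesystem is already mounted locally, e.g. at /mnt/efs.
        Path source = Paths.get("/home/user/report.csv");
        Path target = Paths.get("/mnt/efs/uploads/report.csv");
        Files.createDirectories(target.getParent());
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        System.out.println("Copied to " + target);
    }
}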

Where to keep the file path when hosting the Java web app?

I have developed a website, and one of its operations reads and writes data to text files stored on my local machine, such as D://test.txt or C://file.txt. Now I am going to host the website on an external server, i.e., make it available over the internet, and I wonder where to keep the files involved in these read and write operations. At present I get a file-not-found exception when I use my local machine's paths. For your information, I am using the GlassFish server.
You will want to create a system property on GlassFish that represents the file path and name, then upload the file to a location of your choosing on the server where your web application is deployed.
Depending upon your needs, you may find it easier to deploy the file with your application. Make sure the file is on the classpath, and you can then load it in any number of ways.
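A rough sketch of both options; the app.data.dir property name is an example you would configure yourself (for instance with asadmin create-system-properties), not a GlassFish built-in:

import java.io.File;
import java.io.IOException;
import java.io.InputStream;

public class DataFileLocator {
    // Example property name; set it on the server, e.g.:
    // asadmin create-system-properties app.data.dir=/var/myapp/data
    private static final String DATA_DIR_PROPERTY = "app.data.dir";

    // Option 1: an external, writable location resolved from a system property.
    public static File externalDataFile(String name) {
        String dir = System.getProperty(DATA_DIR_PROPERTY);
        if (dir == null) {
            throw new IllegalStateException(DATA_DIR_PROPERTY + " is not set");
        }
        return new File(dir, name);
    }

    // Option 2: a file bundled with the application, read from the
    // classpath (read-only).
    public static InputStream bundledDataFile(String name) throws IOException {
        InputStream in = DataFileLocator.class.getResourceAsStream("/" + name);
        if (in == null) {
            throw new IOException(name + " not found on classpath");
        }
        return in;
    }
}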

File transfer to a remote machine in Amazon EC2

After creating an instance in the Amazon cloud using the web service API from Java, I need to transfer an executable file or WAR file programmatically from my local machine to the newly created instance, and then execute it. I found that there is something called CreateBucket in the EC2 API, that I can upload a file using it, and that I can pass a reference to a remote computer in Amazon using PutObjectRequest. Is this possible? If this is the wrong approach, please suggest the correct way to transfer a file from my local machine to Amazon EC2.
The basic suggestion is that you shouldn't transfer the file(s) with CreateBucket, which is actually an S3 API. Using scp may be a better solution.
Amazon S3, which you are trying to use with CreateBucket, is a data storage service mainly for flexible, public (with authentication) file sharing. You can use REST or SOAP APIs to access the data, but you cannot really read/write it from EC2 instances as if it were on a local hard disk.
To access the file system on EC2 instances, the approach really depends on the operating system (on EC2). If it's running Linux, scp is a mature choice: you can use Java to invoke scp directly if you are using Linux locally, or pscp if you are using Windows. If the EC2 instance is running Windows, one choice is to host an SSH/SFTP environment with FreeSSHD and then proceed as for Linux; another option is to use a shared folder and a regular file copy.
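As a small sketch of invoking scp from Java: the key path, WAR file, and host below are placeholders, and it assumes an OpenSSH scp binary is installed and on the local PATH:

import java.io.IOException;

public class ScpUpload {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder values: substitute your key pair, artifact, and instance.
        String key = "/home/user/.ssh/my-ec2-key.pem";
        String localFile = "target/myapp.war";
        String remote = "ec2-user@ec2-203-0-113-1.compute-1.amazonaws.com:/home/ec2-user/";

        // Shell out to the local scp binary, inheriting stdout/stderr so
        // progress and errors are visible.
        Process scp = new ProcessBuilder("scp", "-i", key, localFile, remote)
                .inheritIO()
                .start();
        int exit = scp.waitFor();
        if (exit != 0) {
            throw new IOException("scp failed with exit code " + exit);
        }
    }
}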

FTP upload of a directory tree in Java

Is there any library that supports uploading a directory tree to a remote server?
You can always use org.apache.commons.net.ftp.FTPClient and recursively upload all the files in your directory, as sketched below.
The Apache Commons Virtual File System (VFS) project can also do this, whilst abstracting away the details of dealing directly with FTP connections.
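To illustrate the commons-net approach, here is a minimal recursive upload sketch; the host, credentials, and directory paths are placeholders:

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FtpTreeUpload {

    // Recursively mirror a local directory onto the FTP server.
    static void uploadTree(FTPClient ftp, File localDir, String remoteDir)
            throws IOException {
        ftp.makeDirectory(remoteDir); // may fail harmlessly if it already exists
        File[] children = localDir.listFiles();
        if (children == null) return;
        for (File child : children) {
            String remotePath = remoteDir + "/" + child.getName();
            if (child.isDirectory()) {
                uploadTree(ftp, child, remotePath);
            } else {
                try (InputStream in = new FileInputStream(child)) {
                    if (!ftp.storeFile(remotePath, in)) {
                        throw new IOException("Failed to upload " + remotePath);
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");     // placeholder host
        ftp.login("user", "password");      // placeholder credentials
        ftp.enterLocalPassiveMode();
        ftp.setFileType(FTP.BINARY_FILE_TYPE);
        try {
            uploadTree(ftp, new File("/home/user/site"), "/public_html");
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }
}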

Zipping a directory to a remote location in Java

I'm trying to create a zip file from a directory in Java and then place that file on a remote server (a network share). I'm currently getting an error because the Java File object cannot reference a remote location, and I'm not really sure where to go from there.
Is there a way to zip a directory in Java to a remote location without using the File class?
Create the ZIP file locally and use either commons-net FTP or SFTP to move it across to the remote location, assuming that by "remote location" you mean an FTP server, or possibly a blade on your network.
If you are using the renameTo method on java.io.File, note that it doesn't work on some operating systems (e.g. Solaris) when the locations are on different shares. You would instead have to copy the file data manually from one location to the other, which is pretty simple using standard Java I/O.
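Putting both answers together, a short sketch that builds the ZIP with java.util.zip and then copies it with java.nio.file instead of renameTo; all paths, including the mounted-share target, are example values:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipAndShip {

    // Zip the contents of sourceDir into zipFile using java.util.zip.
    static void zipDirectory(Path sourceDir, Path zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile));
             Stream<Path> walk = Files.walk(sourceDir)) {
            List<Path> files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
            for (Path p : files) {
                // Store entries with paths relative to the directory being zipped.
                zos.putNextEntry(new ZipEntry(
                        sourceDir.relativize(p).toString().replace('\\', '/')));
                Files.copy(p, zos);
                zos.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path zip = Paths.get("/tmp/archive.zip");
        zipDirectory(Paths.get("/home/user/data"), zip);

        // Copy the result manually instead of File.renameTo(), which can fail
        // when source and target live on different filesystems or shares.
        // /mnt/share is an example mount point for the network share.
        Files.copy(zip, Paths.get("/mnt/share/archive.zip"),
                StandardCopyOption.REPLACE_EXISTING);
    }
}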
