I have a separate service that manages files and S3 authentication. It produces presigned URLs, which I can use in other services to upload (and download) files.
I would like to take advantage of the multipart upload SDK. Currently the 'uploadToUrl' method seems to spend most of its time in getResponseCode, so it's difficult to provide user feedback. Also, multipart upload was much faster in my testing.
Ideally, I'd like to be able to create some AWSCredentials using a presigned URL instead of a secret key / access key for temporary use. Is that just a pipe dream?
// S3 service
public URL getUrl(String bucketName, String objectKey, Date expiration, AmazonS3 s3Client, HttpMethod method, String contentType) {
    GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, objectKey);
    generatePresignedUrlRequest.setMethod(method);
    generatePresignedUrlRequest.setExpiration(expiration);
    generatePresignedUrlRequest.setContentType(contentType);
    URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
    System.out.println(String.format("Generated presigned URL: %n%s", url));
    return url;
}
// Upload service
@Override
public void uploadToUrl(URL url, File file) {
    try (InputStream inputStream = new FileInputStream(file)) {
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        try (OutputStream out = connection.getOutputStream()) {
            byte[] buf = new byte[1024];
            int count;
            long total = 0;
            long fileSize = file.length();
            while ((count = inputStream.read(buf)) != -1) {
                if (Thread.interrupted()) {
                    throw new InterruptedException();
                }
                out.write(buf, 0, count);
                total += count;
                int pctComplete = (int) (total * 100 / fileSize);
                System.out.print(String.format("\rPCT Complete: %d", pctComplete));
            }
        }
        System.out.println();
        System.out.println("Finishing...");
        int responseCode = connection.getResponseCode();
        if (responseCode == 200) {
            System.out.println("Successfully uploaded.");
        }
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
}
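On the getResponseCode stall specifically: by default HttpURLConnection buffers the entire request body in memory and only sends it when getResponseCode() (or getInputStream()) is called, so the write loop finishes almost instantly and all the time is spent at the end. Calling connection.setFixedLengthStreamingMode(file.length()) before getOutputStream() makes each write go to the socket, so a progress callback reflects real upload progress. A minimal sketch of such a copy helper (class and method names are mine):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.function.IntConsumer;

public class ProgressCopy {
    // Copies in -> out, reporting whole-percent progress as bytes are written.
    // With connection.setFixedLengthStreamingMode(totalSize) set before
    // getOutputStream(), each write goes to the socket instead of an internal
    // buffer, so the reported percentage tracks actual network progress.
    public static long copy(InputStream in, OutputStream out, long totalSize,
                            IntConsumer onPercent) throws IOException {
        byte[] buf = new byte[8192];
        long copied = 0;
        int lastPct = -1;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            copied += n;
            int pct = (int) (copied * 100 / totalSize);
            if (pct != lastPct) { // avoid spamming the callback
                onPercent.accept(pct);
                lastPct = pct;
            }
        }
        return copied;
    }
}
```

In uploadToUrl you would call connection.setFixedLengthStreamingMode(file.length()); right after setRequestMethod("PUT"), then pass the connection's output stream to this helper.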
A few years later, but digging around in the AWS Java SDK reveals that adding the following to GeneratePresignedUrlRequest works pretty well:
AmazonS3Client amazonS3Client = /* ... */;
GeneratePresignedUrlRequest request = /* ... */;
// the following are required to trigger the multipart upload API
request.addRequestParameter("uploadId", uploadIdentifier);
request.addRequestParameter("partNumber", Integer.toString(partNumber));
// the following may be optional but are recommended to validate data integrity during upload
request.putCustomRequestHeader(Headers.CONTENT_MD5, md5Hash);
request.putCustomRequestHeader(Headers.CONTENT_LENGTH, Long.toString(contentLength));
URL presignedURL = amazonS3Client.generatePresignedUrl(request);
(I haven't dug deeply enough to determine whether CONTENT_MD5 or CONTENT_LENGTH are required.)
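One note on the Content-MD5 value, in case you do send it: S3 expects the base64 encoding of the raw 16-byte MD5 digest of the part, not the hex string. A small helper, assuming the part bytes are in memory (class and method names are mine):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class PartDigest {
    // Returns the value S3 expects in the Content-MD5 header:
    // base64 of the raw 16-byte MD5 digest (not the hex representation).
    public static String contentMd5(byte[] partData) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(partData);
        return Base64.getEncoder().encodeToString(digest);
    }
}
```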
With PHP, you can:
$command = $this->s3client->getCommand('CreateMultipartUpload', array(
    'Bucket' => $this->rootBucket,
    'Key' => $objectName
));
$signedUrl = $command->createPresignedUrl('+5 minutes');
But so far I have found no way to achieve this in Java.
For a single PUT (or GET) operation, one can use generatePresignedUrl, but I don't know how to apply this to multipart uploads the way the PHP getCommand('CreateMultipartUpload'/'UploadPart'/'CompleteMultipartUpload') methods do.
For now I am exploring returning temporary credentials returned by my trusted code instead of a signed url.
http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempSessionTokenJava.html
I have a requirement where I need to download files from third party APIs.
I have written an API in my application which calls third-party APIs and downloads files. I am able to download the files and unzip them successfully. These files get downloaded onto the Tomcat server where the code is deployed when I hit my API.
But I would like to have those files downloaded onto the system from which I am executing my API. Suppose I deploy that code onto a test environment server and execute my API using a curl command from my local system; the files should then get downloaded onto my local system. Is there any way I can achieve this in Java?
public class SnapshotFilesServiceImplCopy {
public static final ILogger LOGGER = FWLogFactory.getLogger();
private RestTemplate mxRestTemplate = new RestTemplate();
public void listSnapShotFiles(String diId, String snapshotGuid) {
LOGGER.debug("Entry - SnapshotFilesServiceImpl: FI=" + diId + " snapshotGuid=" + snapshotGuid);
ResponseEntity responseEntity = null;
HttpEntity entity = new HttpEntity(CommonUtil.getReportingAPIHeaders());
String resourceURL = "files_url";
try {
responseEntity = mxRestTemplate.exchange(resourceURL, HttpMethod.GET, entity, String.class);
} catch (RestClientException re) {
if (re instanceof HttpStatusCodeException) {
//To be handled
}
}
String data = (String) responseEntity.getBody();
try {
Object obj = new JSONParser().parse(data);
JSONObject jsonObject = (JSONObject) obj;
JSONArray jsonArray = (JSONArray) jsonObject.get("accounts");
for (int i = 0; i < jsonArray.size(); i++) {
String accountFileURL = (String) jsonArray.get(i);
downloadAccountsData(diId, accountFileURL);
}
} catch (ParseException e) {
e.printStackTrace();
}
}
public void downloadAccountsData(String diId, String accountsURL) {
LOGGER.debug("Entry - SnapshotFilesServiceImpl: FI=" + diId + " snapshotGuid=" + accountsURL);
ResponseEntity<Resource> responseEntity = null;
HttpHeaders headers = new HttpHeaders();
headers.set("Accept", "application/vnd.mx.logs.v1+json");
headers.set("API-KEY", "key");
HttpEntity entity = new HttpEntity(CommonUtil.getReportingAPIHeaders());
String resourceURL = accountsURL;
try {
responseEntity = mxRestTemplate.exchange(resourceURL, HttpMethod.GET, entity, Resource.class);
} catch (RestClientException re) {
if (re instanceof HttpStatusCodeException) {
//To be handled
}
}
Date date = new Date();
String fileName = RenumberingConstants.SNAPSHOT_FILE_ACCOUNTS + date.getTime();
try (FileOutputStream fileOutputStream = new FileOutputStream(fileName + ".gz")) {
    byte[] bytes = IOUtils.toByteArray(responseEntity.getBody().getInputStream());
    fileOutputStream.write(bytes);
} catch (Exception e) {
    e.printStackTrace();
}
try {
FileInputStream fis = new FileInputStream(fileName + ".gz");
GZIPInputStream gis = new GZIPInputStream(fis);
FileOutputStream fos = new FileOutputStream(fileName + ".avro");
byte[] buffer = new byte[1024];
int len;
while ((len = gis.read(buffer)) != -1) {
fos.write(buffer, 0, len);
}
//close resources
fos.close();
gis.close();
} catch (IOException e) {
e.printStackTrace();
}
File file = new File(fileName + ".gz");
boolean isDeleted = file.delete();
if (isDeleted)
System.out.println("File has been deleted successfully.." + fileName + ".gz");
else
System.out.println("Could not delete the file.." + fileName + ".gz");
}
}
You can store the file received from the third-party API in a file (new File()) object, and then save that file object at the desired location. I would need to see the code snippet that downloads the file from the third-party API to answer precisely.
What you are doing is saving the file on the server (a Java program in Tomcat cannot access the client machine) rather than returning it to the client that calls your API. You need to open another output stream and return the file data to the client machine using that stream. You can refer to this tutorial on how to download a file using streams.
Suppose I deploy that code into the QA environment and execute my API using a curl command from my local system; the files should then get downloaded onto my local system. Is there any way I can achieve this in Java?
It is only achievable in Java (or any language) if the file or files are returned as the HTTP response to the request that you made using curl.
Or ... I suppose ... if you set up an HTTP server on your local system and the QA system "delivered" the files (in effect, a reverse upload!) by making an HTTP request (API call) to that HTTP server.
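The key point in the answers above is that the client only receives the file if it is the HTTP response body. A minimal sketch with the JDK's built-in com.sun.net.httpserver (the endpoint path and filename are mine), which `curl -o out.bin http://host:port/download` would save locally:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class FileResponseServer {
    // Serves the given bytes as a download; curl -o saves them on the client side.
    public static HttpServer serve(int port, byte[] fileBytes) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/download", exchange -> {
            // Content-Disposition hints to browsers that this is an attachment.
            exchange.getResponseHeaders().add("Content-Disposition",
                    "attachment; filename=\"data.bin\"");
            exchange.sendResponseHeaders(200, fileBytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(fileBytes);
            }
        });
        server.start();
        return server;
    }
}
```

In a servlet or Spring controller the equivalent is writing the file bytes to the response's output stream instead of to a file on the server.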
I have a microservice which needs to take a remote file, and upload it to a S3 bucket. The remote file is presented as a download link, and requires basic authentication.
Using the latest AWS 2.0 SDK I'm trying to stream the file so it doesn't have to all download to the server first. The files can be anywhere from 90kb -> 100GB+.
Using Spring Boot 2.0, I can't find if there is support in the new WebClient to handle this, so I'm trying to hack something together such as:
public Mono<String> upload(@PathVariable String projectId, @RequestBody String downloadLink) {
try {
String authString = properties.getSilverstripe().getUsername() + ":" + properties.getSilverstripe().getToken();
// Download link needs to be cleaned a little;
URL url = new URL(downloadLink.replaceAll("\"", ""));
URLConnection urlConnection = url.openConnection();
// Add basic authentication to the stream
urlConnection.setRequestProperty("Authorization", "Basic " + new String(Base64.encodeBase64(authString.getBytes())));
InputStream inputStream = urlConnection.getInputStream();
// Attempt to create a input stream and upload to the bucket
BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
Stream<String> stream = reader.lines();
stream.forEach(part -> {
    S3AsyncClient client = S3AsyncClient.create();
    client.putObject(
            PutObjectRequest.builder()
                    .bucket(BUCKET)
                    .key(projectId + "/" + LocalDate.now() + ".sspak")
                    .build(),
            AsyncRequestProvider.fromString(part)
    );
});
return Mono.just(downloadLink);
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
I was hoping there would be a pretty standard library/pattern for doing this but I can't find much online.
Any help at all is appreciated.
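For reference, S3's multipart upload API is the standard way to do this without buffering the whole object: create the upload, send parts of at least 5 MB each (the last part may be smaller), then complete the upload. The part-buffering step can be sketched independently of the SDK (class and method names are mine); each filled buffer would back one UploadPartRequest:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartChunker {
    // Splits a stream into part-sized byte[] chunks; every chunk except the
    // last has exactly partSize bytes (S3 requires >= 5 MB for non-final parts).
    public static List<byte[]> chunk(InputStream in, int partSize) throws IOException {
        List<byte[]> parts = new ArrayList<>();
        byte[] buf = new byte[partSize];
        int filled = 0;
        int n;
        while ((n = in.read(buf, filled, partSize - filled)) != -1) {
            filled += n;
            if (filled == partSize) {      // buffer full: emit one part
                parts.add(Arrays.copyOf(buf, filled));
                filled = 0;
            }
        }
        if (filled > 0) {                  // trailing partial part
            parts.add(Arrays.copyOf(buf, filled));
        }
        return parts;
    }
}
```

In a real upload you would not accumulate all parts in a list (that defeats the purpose for 100 GB files); you would upload each buffer as soon as it fills, reusing it afterwards.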
Uploading a file to Amazon S3 using a presigned URL which has a Signature, Expiration, and AccessKey: using the following code I'm able to upload the file with plain Java, but the same code on Android gives me a 403 error. The presigned URL is generated using the Amazon SDK.
I have read http://developer.android.com/reference/java/net/HttpURLConnection.html
and http://android-developers.blogspot.com/2011/09/androids-http-clients.html but am not able to figure out which header parameter I should use. I guess Android is setting headers on the request which the server rejects.
HttpURLConnection connection=(HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
out.write("This text uploaded as object.");
out.close();
int responseCode = connection.getResponseCode();
Exception: 403; Signature don't match :-o
Has anyone come across this issue?
Or more details into which Header-parameters are added behind the scenes from android library?
Please set your content type like this:
connection.setRequestProperty("Content-Type", " ");
because HttpsURLConnection automatically generates the content type as:
"Content-Type: application/x-www-form-urlencoded"
This causes a signature mismatch.
So after some trial/error/searching I found the issue:
While creating a presigned URL it is important to specify the Content-Type (based on your data), or else you will keep getting 403 Signature does not match. The content type you specify when presigning must also be set on the HttpURLConnection object.
string s3url = s3Client.GetPreSignedURL(new GetPreSignedUrlRequest()
.WithBucketName(bucketName)
.WithKey(keyName)
.WithContentType("application/octet-stream") // IMPORTANT
.WithVerb(HttpVerb.PUT)
.WithExpires(<YOUR EXPIRATION TIME>));
Inside your connection, add this to the code in question after setRequestMethod("PUT"):
connection.setFixedLengthStreamingMode(<YOUR DATA LENGTH>);
connection.setRequestProperty("Content-Type","application/octet-stream");
Check this bug; the latest SDK should let you set the Content-Type.
The way to generate the URL:
private static URL generateURL(String objectKey, String ACCESS_KEY, String SECRET_KEY, String BUCKET_NAME) {
    AmazonS3 s3Client = new AmazonS3Client(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY));
    URL url = null;
    try {
        GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(BUCKET_NAME, objectKey);
        request.setMethod(com.amazonaws.HttpMethod.PUT);
        request.setExpiration(new Date(System.currentTimeMillis() + (60 * 60 * 1000)));
        // Very important! It won't work without adding this!
        // And request.addRequestParameter("Content-Type", "application/octet-stream") won't work either.
        request.setContentType("application/octet-stream");
        url = s3Client.generatePresignedUrl(request);
    } catch (AmazonServiceException ase) {
        ase.printStackTrace();
    } catch (AmazonClientException ace) {
        ace.printStackTrace();
    }
    return url;
}
The way to upload the file:
public int upload(byte[] fileBytes, URL url) throws IOException {
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("PUT");
    // Very important! It won't work without this: must match the content type used when presigning.
    connection.setRequestProperty("Content-Type", "application/octet-stream");
    OutputStream output = connection.getOutputStream();
    InputStream input = new ByteArrayInputStream(fileBytes);
    byte[] buffer = new byte[4096];
    int length;
    while ((length = input.read(buffer)) > 0) {
        output.write(buffer, 0, length);
    }
    output.flush();
    output.close();
    return connection.getResponseCode();
}
Are you using the AWS SDK for Android? That has the correct APIs and I suspect it will be easier to upload to S3 using that, and more secure for that matter. I believe they have tutorials, code samples, and demos there as well.
There is also an API for S3 and other classes you may need. A PutObjectRequest is what will help you upload the file from the phone client and can be useful in your case.
I had this same problem. Here's the reason why I faced it:
A pre-signed URL gives you access to the object identified in the URL,
provided that the creator of the pre-signed URL has permissions to
access that object. That is, if you receive a pre-signed URL to upload
an object, you can upload the object only if the creator of the
pre-signed URL has the necessary permissions to upload that object.
http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
Once I gave my Lambda function PutObject permissions on that S3 bucket, it worked!
After almost 2 workdays of Googling and trying several different possibilities I found throughout the web, I'm asking this question here, hoping that I might finally get an answer.
First of all, here's what I want to do:
I'm developing a client and a server application with the purpose of exchanging a lot of large files between multiple clients on a single server. The client is developed in pure Java (JDK 1.6), while the web application is done in Grails (2.0.0).
As the purpose of the client is to allow users to exchange a lot of large files (usually about 2GB each), I have to implement it in a way, so that the uploads are resumable, i.e. the users are able to stop and resume uploads at any time.
Here's what I did so far:
I actually managed to do what I wanted to do and stream large files to the server while still being able to pause and resume uploads using raw sockets. I would send a regular request to the server (using Apache's HttpClient library) to get the server to send me a port that was free for me to use, then open a ServerSocket on the server and connect to that particular socket from the client.
Here's the problem with that:
Actually, there are at least two problems with that:
I open those ports myself, so I have to manage open and used ports myself. This is quite error-prone.
I actually circumvent Grails' ability to manage a huge amount of (concurrent) connections.
Finally, here's what I'm supposed to do now and the problem:
As the problems I mentioned above are unacceptable, I am now supposed to use Java's URLConnection/HttpURLConnection classes, while still sticking to Grails.
Connecting to the server and sending simple requests is no problem at all, everything worked fine. The problems started when I tried to use the streams (the connection's OutputStream in the client and the request's InputStream in the server). Opening the client's OutputStream and writing data to it is as easy as it gets. But reading from the request's InputStream seems impossible to me, as that stream is always empty, as it seems.
Example Code
Here's an example of the server side (Groovy controller):
def test() {
InputStream inStream = request.inputStream
if(inStream != null) {
int read = 0;
byte[] buffer = new byte[4096];
long total = 0;
println "Start reading"
while((read = inStream.read(buffer)) != -1) {
    total += read
    println "Read " + read + " bytes from input stream buffer" //<-- this is NEVER called
}
println "Reading finished"
println "Read a total of " + total + " bytes" // <-- 'total' will always be 0 (zero)
} else {
println "Input Stream is null" // <-- This is NEVER called
}
}
This is what I did on the client side (Java class):
public void connect() throws IOException {
    final URL url = new URL("myserveraddress");
    final byte[] message = "someMessage".getBytes(); // Any byte[] - will be a file one day
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("GET"); // other methods - same result
    // Write message
    DataOutputStream out = new DataOutputStream(connection.getOutputStream());
    out.write(message);
    out.flush();
    out.close();
    // Actually connect
    connection.connect(); // is this placed correctly?
// Get response
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
String line = null;
while((line = in.readLine()) != null) {
System.out.println(line); // Prints the whole server response as expected
}
in.close();
}
As I mentioned, the problem is that request.inputStream always yields an empty InputStream, so I am never able to read anything from it (of course). But as that is exactly what I'm trying to do (so I can stream the file to be uploaded to the server, read from the InputStream and save it to a file), this is rather disappointing.
I tried different HTTP methods, different data payloads, and also rearranged the code over and over again, but did not seem to be able to solve the problem.
What I hope to find
I hope to find a solution to my problem, of course. Anything is highly appreciated: hints, code snippets, library suggestions and so on. Maybe I'm even having it all wrong and need to go in a totally different direction.
So, how can I implement resumable file uploads for rather large (binary) files from a Java client to a Grails web application without manually opening ports on the server side?
The HTTP GET method has special headers for range retrieval: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35 They are used by most downloaders to do resumable downloads from a server.
As I understand it, there is no standard practice for using these headers in POST/PUT requests, but it's up to you, right? You can write a pretty standard Grails controller that accepts a standard HTTP upload with a header like Range: bytes=500-999, and the controller should write those 500 uploaded bytes from the client into the file, starting at position 500.
In that case you don't need to open any sockets, invent your own protocols, etc.
P.S. 500 bytes is just an example; you would probably use much bigger parts.
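The server side of that scheme only needs to seek to the announced offset before writing. A sketch, assuming the offset arrives in a Range-style header as the answer describes (class and method names are mine; using Range on uploads is a private convention, not standard HTTP):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class ResumableWriter {
    // Writes one uploaded chunk at the offset announced by the client,
    // e.g. in a header like "Range: bytes=500-999".
    public static void writeChunk(RandomAccessFile target, long offset, byte[] chunk)
            throws IOException {
        target.seek(offset);
        target.write(chunk);
    }

    // Parses the start offset out of a value like "bytes=500-999".
    public static long startOffset(String rangeHeader) {
        String spec = rangeHeader.replace("bytes=", "");
        return Long.parseLong(spec.substring(0, spec.indexOf('-')));
    }
}
```

Because chunks are written at absolute offsets, they can arrive in any order, which is what makes pause and resume possible.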
Client-side Java program:
public class NonFormFileUploader {
static final String UPLOAD_URL= "http://localhost:8080/v2/mobileApp/fileUploadForEOL";
static final int BUFFER_SIZE = 4096;
public static void main(String[] args) throws IOException {
// takes file path from first program's argument
String filePath = "G:/study/GettingStartedwithGrailsFinalInfoQ.pdf";
File uploadFile = new File(filePath);
System.out.println("File to upload: " + filePath);
// creates a HTTP connection
URL url = new URL(UPLOAD_URL);
HttpURLConnection httpConn = (HttpURLConnection) url.openConnection();
httpConn.setDoOutput(true);
httpConn.setRequestMethod("POST");
// sets file name as a HTTP header
httpConn.setRequestProperty("fileName", uploadFile.getName());
// opens output stream of the HTTP connection for writing data
OutputStream outputStream = httpConn.getOutputStream();
// Opens input stream of the file for reading data
FileInputStream inputStream = new FileInputStream(uploadFile);
byte[] buffer = new byte[BUFFER_SIZE];
int bytesRead = -1;
while ((bytesRead = inputStream.read(buffer)) != -1) {
System.out.println("bytesRead:"+bytesRead);
outputStream.write(buffer, 0, bytesRead);
outputStream.flush();
}
System.out.println("Data was written.");
outputStream.flush();
outputStream.close();
inputStream.close();
int responseCode = httpConn.getResponseCode();
if (responseCode == HttpURLConnection.HTTP_OK) {
// reads server's response
BufferedReader reader = new BufferedReader(new InputStreamReader(
httpConn.getInputStream()));
String response = reader.readLine();
System.out.println("Server's response: " + response);
} else {
System.out.println("Server returned non-OK code: " + responseCode);
}
}
}
Server-side Grails program:
Inside the controller:
def fileUploadForEOL(){
def result
try{
result = mobileAppService.fileUploadForEOL(request);
}catch(Exception e){
log.error "Exception in fileUploadForEOL service",e
}
render result as JSON
}
Inside the Service Class:
def fileUploadForEOL(request){
def status = false;
int code = 500
def map = [:]
try{
String fileName = request.getHeader("fileName");
File saveFile = new File(SAVE_DIR + fileName);
System.out.println("===== Begin headers =====");
Enumeration<String> names = request.getHeaderNames();
while (names.hasMoreElements()) {
String headerName = names.nextElement();
System.out.println(headerName + " = " + request.getHeader(headerName));
}
System.out.println("===== End headers =====\n");
// opens input stream of the request for reading data
InputStream inputStream = request.getInputStream();
// opens an output stream for writing file
FileOutputStream outputStream = new FileOutputStream(saveFile);
byte[] buffer = new byte[BUFFER_SIZE];
int bytesRead;
long count = 0
while((bytesRead = inputStream.read(buffer)) != -1) {
    outputStream.write(buffer, 0, bytesRead);
    count += bytesRead
}
println "count:"+count
System.out.println("Data received.");
outputStream.close();
inputStream.close();
System.out.println("File written to: " + saveFile.getAbsolutePath());
code = 200
}catch(Exception e){
mLogger.log(java.util.logging.Level.SEVERE,"Exception in fileUploadForEOL",e);
}finally{
map <<["code":code]
}
return map
}
I have tried the above code and it worked for me, but only for files of 3 to 4 MB; for smaller files some bytes go missing or never arrive, even though the Content-Length request header is correct. Not sure why that happens.
I am looking for an easy way to get files that are situated on a remote server. For this I created a local FTP server on my Windows XP machine, and now I am trying to give my test applet the following address:
try
{
uri = new URI("ftp://localhost/myTest/test.mid");
File midiFile = new File(uri);
}
catch (Exception ex)
{
}
and of course I receive the following error:
URI scheme is not "file"
I've been trying some other ways to get the file, they don't seem to work. How should I do it? (I am also keen to perform an HTTP request)
You can't do this out of the box with ftp.
If your file is on http, you could do something similar to:
URL url = new URL("http://q.com/test.mid");
InputStream is = url.openStream();
// Read from is
If you want to use a library for doing FTP, you should check out Apache Commons Net
Reading a binary file through HTTP and saving it into a local file (taken from here):
URL u = new URL("http://www.java2s.com/binary.dat");
URLConnection uc = u.openConnection();
String contentType = uc.getContentType();
int contentLength = uc.getContentLength();
if (contentType.startsWith("text/") || contentLength == -1) {
throw new IOException("This is not a binary file.");
}
InputStream raw = uc.getInputStream();
InputStream in = new BufferedInputStream(raw);
byte[] data = new byte[contentLength];
int bytesRead = 0;
int offset = 0;
while (offset < contentLength) {
bytesRead = in.read(data, offset, data.length - offset);
if (bytesRead == -1)
break;
offset += bytesRead;
}
in.close();
if (offset != contentLength) {
throw new IOException("Only read " + offset + " bytes; Expected " + contentLength + " bytes");
}
String filename = u.getFile().substring(u.getFile().lastIndexOf('/') + 1);
FileOutputStream out = new FileOutputStream(filename);
out.write(data);
out.flush();
out.close();
You are almost there. You need to use URL instead of URI. Java comes with a default URL handler for FTP. For example, you can read the remote file into a byte array like this:
try {
URL url = new URL("ftp://localhost/myTest/test.mid");
InputStream is = url.openStream();
ByteArrayOutputStream os = new ByteArrayOutputStream();
byte[] buf = new byte[4096];
int n;
while ((n = is.read(buf)) >= 0)
os.write(buf, 0, n);
os.close();
is.close();
byte[] data = os.toByteArray();
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
However, FTP may not be the best protocol to use in an applet. Besides the security restrictions, you will have to deal with connectivity issues, since FTP requires multiple ports. Use HTTP if at all possible, as suggested by others.
I find this very useful: https://docs.oracle.com/javase/tutorial/networking/urls/readingURL.html
import java.net.*;
import java.io.*;
public class URLReader {
public static void main(String[] args) throws Exception {
URL oracle = new URL("http://www.oracle.com/");
BufferedReader in = new BufferedReader(
new InputStreamReader(oracle.openStream()));
String inputLine;
while ((inputLine = in.readLine()) != null)
System.out.println(inputLine);
in.close();
}
}
This worked for me, while trying to bring the file from a remote machine onto my machine.
NOTE - These are the parameters passed to the function mentioned in the code below:
String domain = "xyz.company.com";
String userName = "GDD";
String password = "fjsdfks";
(here you have to give the IP address of the remote system, then the path of the text file (testFileUpload.txt) on the remote machine; C$ means the C drive of the remote system. The IP address starts with \\, but in order to escape the two backslashes we write \\\\)
String remoteFilePathTransfer = "\\\\13.3.2.33\\c$\\FileUploadVerify\\testFileUpload.txt";
(this is the path on the local machine to which the file will be transferred; it will create a new text file, testFileUploadTransferred.txt, with the contents of the remote file testFileUpload.txt from the remote system)
String fileTransferDestinationTransfer = "D:/FileUploadVerification/TransferredFromRemote/testFileUploadTransferred.txt";
import java.io.File;
import java.io.IOException;
import org.apache.commons.vfs.FileObject;
import org.apache.commons.vfs.FileSystemException;
import org.apache.commons.vfs.FileSystemManager;
import org.apache.commons.vfs.FileSystemOptions;
import org.apache.commons.vfs.Selectors;
import org.apache.commons.vfs.UserAuthenticator;
import org.apache.commons.vfs.VFS;
import org.apache.commons.vfs.auth.StaticUserAuthenticator;
import org.apache.commons.vfs.impl.DefaultFileSystemConfigBuilder;
public class FileTransferUtility {
public void transferFileFromRemote(String domain, String userName, String password, String remoteFileLocation,
String fileDestinationLocation) {
File f = new File(fileDestinationLocation);
FileObject destn;
try {
FileSystemManager fm = VFS.getManager();
destn = VFS.getManager().resolveFile(f.getAbsolutePath());
if(!f.exists())
{
System.out.println("File : "+fileDestinationLocation +" does not exist. transferring file from : "+ remoteFileLocation+" to: "+fileDestinationLocation);
}
else
System.out.println("File : "+fileDestinationLocation +" exists. Transferring(override) file from : "+ remoteFileLocation+" to: "+fileDestinationLocation);
UserAuthenticator auth = new StaticUserAuthenticator(domain, userName, password);
FileSystemOptions opts = new FileSystemOptions();
DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator(opts, auth);
FileObject fo = VFS.getManager().resolveFile(remoteFileLocation, opts);
System.out.println(fo.exists());
destn.copyFrom(fo, Selectors.SELECT_SELF);
destn.close();
if(f.exists())
{
System.out.println("File transfer from : "+ remoteFileLocation+" to: "+fileDestinationLocation+" is successful");
}
}
catch (FileSystemException e) {
e.printStackTrace();
}
}
}
I have coded a Java Remote File client/server objects to access a remote filesystem as if it was local. It works without any authentication (which was the point at that time) but it could be modified to use SSLSocket instead of standard sockets for authentication.
It is very raw access: no username/password, no "home"/chroot directory.
Everything is kept as simple as possible:
Server setup
JRFServer srv = JRFServer.get(new InetSocketAddress(2205));
srv.start();
Client setup
JRFClient cli = new JRFClient(new InetSocketAddress("jrfserver-hostname", 2205));
You have access to remote File, InputStream, and OutputStream objects through the client. It extends java.io.File for seamless use in APIs that use File to access metadata (i.e. length(), lastModified(), ...).
It also uses optional compression for file chunk transfer and programmable MTU, with optimized whole-file retrieval. A CLI is built-in with an FTP-like syntax for end-users.
org.apache.commons.io.FileUtils.copyURLToFile(new URL(REMOTE_URL), new File(FILE_NAME), CONNECT_TIMEOUT, READ_TIMEOUT);
Since you are on Windows, you can set up a network share and access it that way.