Download files to the system from which an API is executed - Java

I have a requirement where I need to download files from third-party APIs.
I have written an API in my application that calls the third-party APIs and downloads the files. I am able to download the files and unzip them successfully. When I hit my API, the files are downloaded onto the Tomcat server where the code is deployed.
But I would like those files to be downloaded onto the system from which I am executing my API. For example, if I deploy the code to a test environment server and call my API with a curl command from my local system, the files should end up on my local system. Is there any way I can achieve this in Java?
public class SnapshotFilesServiceImplCopy {

    public static final ILogger LOGGER = FWLogFactory.getLogger();
    private RestTemplate mxRestTemplate = new RestTemplate();

    public void listSnapShotFiles(String diId, String snapshotGuid) {
        LOGGER.debug("Entry - SnapshotFilesServiceImpl: FI=" + diId + " snapshotGuid=" + snapshotGuid);
        ResponseEntity responseEntity = null;
        HttpEntity entity = new HttpEntity(CommonUtil.getReportingAPIHeaders());
        String resourceURL = "files_url";
        try {
            responseEntity = mxRestTemplate.exchange(resourceURL, HttpMethod.GET, entity, String.class);
        } catch (RestClientException re) {
            if (re instanceof HttpStatusCodeException) {
                // To be handled
            }
        }
        String data = (String) responseEntity.getBody();
        try {
            Object obj = new JSONParser().parse(data);
            JSONObject jsonObject = (JSONObject) obj;
            JSONArray jsonArray = (JSONArray) jsonObject.get("accounts");
            for (int i = 0; i < jsonArray.size(); i++) {
                String accountFileURL = (String) jsonArray.get(i);
                downloadAccountsData(diId, accountFileURL);
            }
        } catch (ParseException e) {
            e.printStackTrace();
        }
    }

    public void downloadAccountsData(String diId, String accountsURL) {
        LOGGER.debug("Entry - SnapshotFilesServiceImpl: FI=" + diId + " snapshotGuid=" + accountsURL);
        ResponseEntity<Resource> responseEntity = null;
        HttpHeaders headers = new HttpHeaders();
        headers.set("Accept", "application/vnd.mx.logs.v1+json");
        headers.set("API-KEY", "key");
        HttpEntity entity = new HttpEntity(CommonUtil.getReportingAPIHeaders());
        String resourceURL = accountsURL;
        try {
            responseEntity = mxRestTemplate.exchange(resourceURL, HttpMethod.GET, entity, Resource.class);
        } catch (RestClientException re) {
            if (re instanceof HttpStatusCodeException) {
                // To be handled
            }
        }
        Date date = new Date();
        String fileName = RenumberingConstants.SNAPSHOT_FILE_ACCOUNTS + date.getTime();
        // Write the gzipped response body to disk
        try (FileOutputStream fileOutputStream = new FileOutputStream(fileName + ".gz")) {
            byte[] bytes = IOUtils.toByteArray(responseEntity.getBody().getInputStream());
            fileOutputStream.write(bytes);
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Unzip the .gz file into the .avro file
        try (FileInputStream fis = new FileInputStream(fileName + ".gz");
             GZIPInputStream gis = new GZIPInputStream(fis);
             FileOutputStream fos = new FileOutputStream(fileName + ".avro")) {
            byte[] buffer = new byte[1024];
            int len;
            while ((len = gis.read(buffer)) != -1) {
                fos.write(buffer, 0, len);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        File file = new File(fileName + ".gz");
        boolean isDeleted = file.delete();
        if (isDeleted)
            System.out.println("File has been deleted successfully.." + fileName + ".gz");
        else
            System.out.println("Could not delete the file.." + fileName + ".gz");
    }
}

You can store the file received from the third-party API in a File object (new File()), and then save that file object at the desired location. I would need to see the code snippet that downloads the file from the third-party API to answer precisely.
What you are doing is saving the file on the server (a Java program running in Tomcat cannot access the client machine) and not returning it to the client that is calling your API. You need to open another output stream and return the file data to the client machine through that stream, as in the sketch below. You can refer to a tutorial on how to download a file using streams.
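A minimal sketch of that idea with Spring MVC, reusing the question's setup; the endpoint path and the downloadAccountsToBytes helper are illustrative, not part of the original code:
import java.io.IOException;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SnapshotFilesController {

    @GetMapping("/snapshots/{snapshotGuid}/accounts")
    public ResponseEntity<byte[]> downloadAccounts(@PathVariable String snapshotGuid) throws IOException {
        // Hypothetical helper: performs the third-party call shown in the question
        // and returns the unzipped file content as a byte array.
        byte[] fileContent = downloadAccountsToBytes(snapshotGuid);

        return ResponseEntity.ok()
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"accounts.avro\"")
                .contentType(MediaType.APPLICATION_OCTET_STREAM)
                .body(fileContent);
    }

    private byte[] downloadAccountsToBytes(String snapshotGuid) throws IOException {
        // ... reuse the RestTemplate/GZIP logic from the question here ...
        throw new UnsupportedOperationException("wire in the existing download logic");
    }
}
Calling something like curl -OJ http://test-server:8080/snapshots/<guid>/accounts from the local machine would then save the file locally, because the bytes come back in the HTTP response instead of landing on the server's filesystem.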

Suppose I deploy that code into a QA environment and execute my API using a curl command from my local system; then the files should get downloaded onto my local system. Is there any way I can achieve this in Java?
It is only achievable in Java (or any language) if the file or files are returned as the HTTP response to the request that you made using curl.
Or ... I suppose ... if you set up an HTTP server on your local system and the QA system "delivered" the files (in effect, a reverse upload!) by making an HTTP request (API call) to that HTTP server.

Related

Downloading file with SharePoint API: file damaged

I'm developing a Java library for basic operations on SharePoint using the Graph API.
I make a call to this endpoint using SoapUI:
https://graph.microsoft.com/v1.0/drives/{drive-id}/items/{item-id}/content
And I obtain a raw response:
%PDF-1.6
%âãÏÓ
1751 0 obj
<</Filter/FlateDecode/First 98/Length 322/N 11/Type/ObjStm>>stream
hޜԽJ1†á[ÉL’ó“–m,md±ÁElTü)¼{3“wXYDØ©¾3!ç<)&I^kˆ!ymÁ¤gë¥ÍE ...
endstream
endobj
startxref
2993893
%%EOF
It looks like I'm retrieving an input stream.
In the HttpRequest class I try to build a response object that returns the InputStream. My property fileInputStream is an InputStream:
SharePointDownloadResponseModel returnValue = new SharePointDownloadResponseModel();
InputStream inputStream = new ByteArrayInputStream(response.toString().getBytes(StandardCharsets.UTF_8));
returnValue.setFileInputStream(inputStream);
return returnValue;
Now, in my manager class, I try to save the input stream to the hard drive. I handle two cases. In the first case, I have a fileName and a folder in which to store the file. My request object:
if (request.getDownloadFolder() != null && request.getFileName() != null) {
    InputStream initialStream = returnValue.getFileInputStream();
    FileOutputStream fos = new FileOutputStream(request.getDownloadFolder() + "/" + request.getFileName());
    BufferedOutputStream bos = new BufferedOutputStream(fos);
    // Read bytes from URL to the local file
    byte[] buffer = new byte[4096];
    int bytesRead = 0;
    System.out.println("Downloading " + request.getFileName());
    while ((bytesRead = initialStream.read(buffer)) != -1) {
        bos.write(buffer, 0, bytesRead);
    }
    bos.flush();
    // Close destination stream
    bos.close();
    // Close URL stream
    initialStream.close();
}
The document is created where it should be, but the file is damaged and can't be opened. I wonder what the issue is at this stage.
I finally solved my issue. Here is a basic method that shows my implementation :
public class DownloadFile {

    public static void main(String[] args) throws IOException {
        String url = "https://graph.microsoft.com/v1.0/drives/{driveId}/items/{itemId}/content";
        SharePointCredentialRequest sharePointCredentialRequest = new SharePointCredentialRequest(Constants.TENANT_CLIENT_ID,
                Constants.TENANT_CLIENT_SECRET, Constants.TENANT_AUTHORITY);
        String token = Utils.getToken(sharePointCredentialRequest);
        CloseableHttpClient client = HttpClients.createDefault();
        HttpGet httpGet = new HttpGet(url);
        httpGet.setHeader("Authorization", "Bearer " + token);
        try (CloseableHttpResponse response = client.execute(httpGet)) {
            HttpEntity entity = response.getEntity();
            if (entity != null) {
                System.out.println(response.getAllHeaders().length);
                System.out.println(entity.getContentEncoding());
                System.out.println(entity.getContentLength());
                System.out.println(entity.getContentType().getElements().toString());
                try {
                    // do something useful with the stream
                    InputStream inputStream = IOUtils.toBufferedInputStream(response.getEntity().getContent());
                    File targetFile = new File("C:\\myFolder\\kant.pdf");
                    FileUtils.copyInputStreamToFile(inputStream, targetFile);
                } catch (IOException | UnsupportedOperationException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
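The key difference from the first attempt is that the response body is consumed as a raw byte stream (response.getEntity().getContent()) instead of being round-tripped through response.toString(), which corrupts binary PDF content. If the original HttpRequest wrapper is kept, the equivalent change would presumably look something like this (assuming response is the Apache HttpClient response object used above):
// Hypothetical fix to the original wrapper: hand the raw entity stream to the
// model instead of re-encoding the whole response as a String.
SharePointDownloadResponseModel returnValue = new SharePointDownloadResponseModel();
returnValue.setFileInputStream(response.getEntity().getContent());
return returnValue;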

Http response: Cannot convert JSON into stream

I have an API in Java to upload a zip file to a Delphi server, and I am calling it as follows:
DSRESTConnection conn = new DSRESTConnection();
conn.setHost("example.com");
conn.setPort(8080);
TServerMethods1 proxy = new TServerMethods1(conn);
try {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    BufferedOutputStream bos = new BufferedOutputStream(baos);
    ZipOutputStream zos = new ZipOutputStream(bos);
    zos.putNextEntry(new ZipEntry("test.json"));
    byte[] bytes = inputJson.getBytes();
    zos.write(bytes, 0, bytes.length);
    zos.close();
    bos.close();
    baos.close();
    TStream outZip = new TStream(baos.toByteArray());
    zipResponse = proxy.UserZipUpLoad("username", "password", 5, outZip, outZip.asByteArray().length);
} catch (DBXException | IOException e) {
    e.printStackTrace();
}
and here is the API:
public UserZipUpLoadReturns UserZipUpLoad(String user, String pwd, int ZipType, TStream strmUpLoad, long iSize) throws DBXException {
    DSRESTCommand cmd = getConnection().CreateCommand();
    cmd.setRequestType(DSHTTPRequestType.POST);
    cmd.setText("TServerMethods1.UserZipUpLoad");
    cmd.prepare(get_TServerMethods1_UserZipUpLoad_Metadata());
    cmd.getParameter(0).getValue().SetAsString(user);
    cmd.getParameter(1).getValue().SetAsString(pwd);
    cmd.getParameter(2).getValue().SetAsInt32(ZipType);
    cmd.getParameter(3).getValue().SetAsStream(strmUpLoad);
    cmd.getParameter(4).getValue().SetAsInt64(iSize);
    getConnection().execute(cmd);
    UserZipUpLoadReturns ret = new UserZipUpLoadReturns();
    ret.ReturnCode = cmd.getParameter(5).getValue().GetAsInt32();
    ret.ReturnString = cmd.getParameter(6).getValue().GetAsString();
    ret.returnValue = cmd.getParameter(7).getValue().GetAsInt32();
    return ret;
}
To create the body for the request, _parameters is built from the params of the API call that cannot be passed in the URL, such as a byte array or blob:
boolean CanAddParamsToUrl = true;
_parameters = new TJSONArray();
for (DSRESTParameter parameter : ParametersToSend)
    if (CanAddParamsToUrl && isURLParameter(parameter))
        URL += encodeURIComponent(parameter) + '/';
    else // add the json representation in the body
    {
        CanAddParamsToUrl = false;
        parameter.getValue().appendTo(_parameters);
    }
and using the _parameters, body is built:
TJSONObject body = new TJSONObject();
body.addPairs("_parameters", _parameters);
p.setEntity(new StringEntity(body.toString(), "utf-8"));
I don't have access to the server side and don't know what happens there. When I send a JSON object or any other string, the server returns OK, but as soon as I zip the JSON and send it, the server returns error 500 saying "Cannot convert input JSON into a stream". I think the JSON it is referring to is the request body, not the JSON string inside the file.
From the last part of the code I can see why a plain string would work, but I don't know how to use the current code to send a zip file as required.
Is there any way to use this code, or should I change it? If so, how?
Does anybody know if this is a bug?

S3 Java- Multipart upload using a presigned URL?

I have a separate service that manages files and S3 authentication. It produces presigned URLs, which I am able to use in other services to upload (and download) files.
I would like to take advantage of the multipart upload SDK; currently the 'uploadToUrl' method spends most of its time in getResponseCode, so it's difficult to provide user feedback. Also, multipart upload seems much faster in my testing.
Ideally, I'd like to be able to create AWSCredentials from a presigned URL instead of a secret key / access key for temporary use. Is that just a pipe dream?
//s3 service
public URL getUrl(String bucketName, String objectKey, Date expiration, AmazonS3 s3Client, HttpMethod method, String contentType) {
    GeneratePresignedUrlRequest generatePresignedUrlRequest;
    generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, objectKey);
    generatePresignedUrlRequest.setMethod(method);
    generatePresignedUrlRequest.setExpiration(expiration);
    generatePresignedUrlRequest.setContentType(contentType);
    URL s = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
    System.out.println(String.format("Generated Presigned URL: %n %S", s.toString()));
    return s;
}
//Upload service
@Override
public void uploadToUrl(URL url, File file) {
    HttpURLConnection connection;
    try {
        InputStream inputStream = new FileInputStream(file);
        connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        OutputStream out = connection.getOutputStream();
        byte[] buf = new byte[1024];
        int count;
        int total = 0;
        long fileSize = file.length();
        while ((count = inputStream.read(buf)) != -1) {
            if (Thread.interrupted()) {
                throw new InterruptedException();
            }
            out.write(buf, 0, count);
            total += count;
            int pctComplete = (int) ((double) total / fileSize * 100);
            System.out.print("\r");
            System.out.print(String.format("PCT Complete: %d", pctComplete));
        }
        System.out.println();
        out.close();
        inputStream.close();
        System.out.println("Finishing...");
        int responseCode = connection.getResponseCode();
        if (responseCode == 200) {
            System.out.printf("Successfully uploaded.");
        }
    } catch (IOException e) {
        e.printStackTrace();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
A few years later, but digging around in the AWS Java SDK reveals that adding the following to GeneratePresignedUrlRequest works pretty well:
AmazonS3Client amazonS3Client = /* ... */;
GeneratePresignedUrlRequest request = /* ... */;
// the following are required to trigger the multipart upload API
request.addRequestParameter("uploadId", uploadIdentifier);
request.addRequestParameter("partNumber", Integer.toString(partNumber));
// the following may be optional but are recommended to validate data integrity during upload
request.putCustomRequestHeader(Headers.CONTENT_MD5, md5Hash);
request.putCustomRequestHeader(Headers.CONTENT_LENGTH, Long.toString(contentLength));
URL presignedURL = amazonS3Client.generatePresignedUrl(request);
(I haven't dug deeply enough to determine whether CONTENT_MD5 or CONTENT_LENGTH are required.)
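For context, here is a rough, hedged sketch of how those presigned part URLs could fit into the overall flow with the v1 Java SDK. The bucket and key are placeholders, and the split of work (a trusted service runs the initiate/complete steps while clients PUT the individual parts to their presigned URLs) is an assumption about the setup, not something the SDK dictates:
import java.util.List;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.PartETag;

public class PresignedMultipartSketch {

    // 1) Trusted service starts the multipart upload and keeps the uploadId.
    static String initiate(AmazonS3Client s3, String bucket, String key) {
        return s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
    }

    // 2) Trusted service presigns one PUT URL per part (as in the answer above).
    static java.net.URL presignPart(AmazonS3Client s3, String bucket, String key,
                                    String uploadId, int partNumber, java.util.Date expiration) {
        GeneratePresignedUrlRequest req = new GeneratePresignedUrlRequest(bucket, key)
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiration);
        req.addRequestParameter("uploadId", uploadId);
        req.addRequestParameter("partNumber", Integer.toString(partNumber));
        return s3.generatePresignedUrl(req);
    }

    // 3) Clients PUT each part to its URL and report back the ETag response header;
    //    the trusted service then completes the upload with the collected part ETags.
    static void complete(AmazonS3Client s3, String bucket, String key,
                         String uploadId, List<PartETag> partETags) {
        s3.completeMultipartUpload(new CompleteMultipartUploadRequest(bucket, key, uploadId, partETags));
    }
}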
With PHP, you can do:
$command = $this->s3client->getCommand ('CreateMultipartUpload', array (
'Bucket' => $this->rootBucket,
'Key' => $objectName
));
$signedUrl = $command->createPresignedUrl ('+5 minutes');
But so far I have found no way to achieve this in Java.
For a single PUT (or GET) operation one can use generatePresignedUrl, but I don't know how to apply this to a multipart upload the way the PHP getCommand('CreateMultipartUpload'/'UploadPart'/'CompleteMultipartUpload') methods do.
For now I am exploring returning temporary credentials from my trusted code instead of a signed URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempSessionTokenJava.html

Get list of files from server and Call download function on that list to download content

I have a web server that stores files at http://user.mysite.com/content.
Now all I want to achieve in my Android application is to download every file that a user has uploaded to this server. I have created a function in Android that downloads a file and stores it on the SD card, which looks like this:
public void doDownload() {
    try {
        int count;
        URL url = new URL("http://user.mysite.com/content");
        URLConnection connection = url.openConnection();
        connection.connect();
        int lengthOfFile = connection.getContentLength();
        long total = 0;
        InputStream input = new BufferedInputStream(url.openStream());
        OutputStream output = new FileOutputStream(f);
        byte data[] = new byte[1024];
        while ((count = input.read(data)) != -1) {
            total += count;
            publishProgress((int) (total / 1024), lengthOfFile / 1024);
            output.write(data, 0, count);
        }
        output.flush();
        output.close();
        input.close();
    } catch (Exception e) {
        Log.e("Download Error: ", e.toString());
    }
}
How can I retrieve the list of files on the server (the URL and name of each file) and download each one of them in the app using a loop?
To get the list of files I have something like this:
public List clientServerFileList() {
    URL url;
    List serverDir = null;
    try {
        url = new URL("http://user.mysite.com/content/");
        ApacheURLLister lister = new ApacheURLLister();
        serverDir = lister.listAll(url);
    } catch (Exception e) {
        e.printStackTrace();
        Log.e("ERROR ON GETTING FILE", "Error is " + e);
    }
    System.out.println(serverDir);
    return serverDir;
}
My server is: Apache/2.2.24 (Unix) mod_ssl/2.2.24 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at user.mysite.com Port 80
Send a POST or GET request to your server. When your server receives this request, it should respond with JSON or XML describing the files.
Parse the JSON or XML the server returns to get each file's name and URL; you can then download every file in the list.
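Alternatively, if the Apache directory-listing approach from the question works, a download loop could look roughly like the sketch below. It assumes doDownload is changed to take the remote URL and a local target File as parameters, and the SD-card target directory is only an example:
// Sketch: iterate the listing from clientServerFileList() and download each entry.
// ApacheURLLister.listAll() returns URL objects, so each element is cast to URL.
List<?> serverFiles = clientServerFileList();
for (Object entry : serverFiles) {
    URL fileUrl = (URL) entry;
    String path = fileUrl.getPath();
    String name = path.substring(path.lastIndexOf('/') + 1);
    File target = new File(Environment.getExternalStorageDirectory(), name);
    doDownload(fileUrl, target);   // hypothetical variant of doDownload(URL, File)
}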

Reading a remote file using Java

I am looking for an easy way to get files that are situated on a remote server. For this I created a local FTP server on my Windows XP machine, and now I am trying to give my test applet the following address:
try
{
    uri = new URI("ftp://localhost/myTest/test.mid");
    File midiFile = new File(uri);
}
catch (Exception ex)
{
}
and of course I receive the following error:
URI scheme is not "file"
I've been trying some other ways to get the file, but they don't seem to work. How should I do it? (I am also keen to perform an HTTP request.)
You can't do this out of the box with FTP.
If your file is on HTTP, you could do something similar to:
URL url = new URL("http://q.com/test.mid");
InputStream is = url.openStream();
// Read from is
If you want to use a library for doing FTP, you should check out Apache Commons Net
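For the FTP case, a minimal Commons Net sketch might look like this; the host, credentials and paths are placeholders matching the question's example:
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpDownloadSketch {
    public static void main(String[] args) throws IOException {
        FTPClient ftp = new FTPClient();
        try (OutputStream out = new FileOutputStream("test.mid")) {
            ftp.connect("localhost");
            ftp.login("anonymous", "");          // placeholder credentials
            ftp.enterLocalPassiveMode();         // helps behind firewalls/NAT
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            ftp.retrieveFile("/myTest/test.mid", out);
            ftp.logout();
        } finally {
            if (ftp.isConnected()) {
                ftp.disconnect();
            }
        }
    }
}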
Reading a binary file over HTTP and saving it to a local file (taken from here):
URL u = new URL("http://www.java2s.com/binary.dat");
URLConnection uc = u.openConnection();
String contentType = uc.getContentType();
int contentLength = uc.getContentLength();
if (contentType.startsWith("text/") || contentLength == -1) {
    throw new IOException("This is not a binary file.");
}
InputStream raw = uc.getInputStream();
InputStream in = new BufferedInputStream(raw);
byte[] data = new byte[contentLength];
int bytesRead = 0;
int offset = 0;
while (offset < contentLength) {
    bytesRead = in.read(data, offset, data.length - offset);
    if (bytesRead == -1)
        break;
    offset += bytesRead;
}
in.close();
if (offset != contentLength) {
    throw new IOException("Only read " + offset + " bytes; Expected " + contentLength + " bytes");
}
String filename = u.getFile().substring(u.getFile().lastIndexOf('/') + 1);
FileOutputStream out = new FileOutputStream(filename);
out.write(data);
out.flush();
out.close();
You are almost there. You need to use URL instead of URI. Java comes with a default URL handler for FTP. For example, you can read the remote file into a byte array like this:
try {
    URL url = new URL("ftp://localhost/myTest/test.mid");
    InputStream is = url.openStream();
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    int n;
    while ((n = is.read(buf)) >= 0)
        os.write(buf, 0, n);
    os.close();
    is.close();
    byte[] data = os.toByteArray();
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
However, FTP may not be the best protocol to use in an applet. Besides the security restrictions, you will have to deal with connectivity issues, since FTP requires multiple ports. Use HTTP if at all possible, as suggested by others.
I find this very useful: https://docs.oracle.com/javase/tutorial/networking/urls/readingURL.html
import java.net.*;
import java.io.*;

public class URLReader {
    public static void main(String[] args) throws Exception {
        URL oracle = new URL("http://www.oracle.com/");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(oracle.openStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null)
            System.out.println(inputLine);
        in.close();
    }
}
This worked for me while trying to bring a file from a remote machine onto my machine.
NOTE - These are the parameters passed to the function in the code below:
String domain = "xyz.company.com";
String userName = "GDD";
String password = "fjsdfks";
(Here you have to give the IP address of the remote system, then the path of the text file (testFileUpload.txt) on the remote machine; C$ means the C drive of the remote system. The path starts with \\, but in order to escape the two backslashes we write \\\\.)
String remoteFilePathTransfer = "\\\\13.3.2.33\\c$\\FileUploadVerify\\testFileUpload.txt";
(This is the path on the local machine to which the file will be transferred; it creates a new text file, testFileUploadTransferred.txt, with the contents of the remote file testFileUpload.txt from the remote system.)
String fileTransferDestinationTransfer = "D:/FileUploadVerification/TransferredFromRemote/testFileUploadTransferred.txt";
import java.io.File;
import java.io.IOException;

import org.apache.commons.vfs.FileObject;
import org.apache.commons.vfs.FileSystemException;
import org.apache.commons.vfs.FileSystemManager;
import org.apache.commons.vfs.FileSystemOptions;
import org.apache.commons.vfs.Selectors;
import org.apache.commons.vfs.UserAuthenticator;
import org.apache.commons.vfs.VFS;
import org.apache.commons.vfs.auth.StaticUserAuthenticator;
import org.apache.commons.vfs.impl.DefaultFileSystemConfigBuilder;

public class FileTransferUtility {

    public void transferFileFromRemote(String domain, String userName, String password, String remoteFileLocation,
            String fileDestinationLocation) {
        File f = new File(fileDestinationLocation);
        FileObject destn;
        try {
            FileSystemManager fm = VFS.getManager();
            destn = VFS.getManager().resolveFile(f.getAbsolutePath());
            if (!f.exists()) {
                System.out.println("File : " + fileDestinationLocation + " does not exist. transferring file from : "
                        + remoteFileLocation + " to: " + fileDestinationLocation);
            } else
                System.out.println("File : " + fileDestinationLocation + " exists. Transferring(override) file from : "
                        + remoteFileLocation + " to: " + fileDestinationLocation);
            UserAuthenticator auth = new StaticUserAuthenticator(domain, userName, password);
            FileSystemOptions opts = new FileSystemOptions();
            DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator(opts, auth);
            FileObject fo = VFS.getManager().resolveFile(remoteFileLocation, opts);
            System.out.println(fo.exists());
            destn.copyFrom(fo, Selectors.SELECT_SELF);
            destn.close();
            if (f.exists()) {
                System.out.println("File transfer from : " + remoteFileLocation + " to: " + fileDestinationLocation
                        + " is successful");
            }
        } catch (FileSystemException e) {
            e.printStackTrace();
        }
    }
}
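For completeness, a call using the parameters listed above would look like:
FileTransferUtility util = new FileTransferUtility();
util.transferFileFromRemote(domain, userName, password,
        remoteFilePathTransfer, fileTransferDestinationTransfer);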
I have written Java Remote File (JRF) client/server objects to access a remote filesystem as if it were local. It works without any authentication (which was the point at the time), but it could be modified to use SSLSocket instead of standard sockets for authentication.
It is very raw access: no username/password, no "home"/chroot directory.
Everything is kept as simple as possible:
Server setup
JRFServer srv = JRFServer.get(new InetSocketAddress(2205));
srv.start();
Client setup
JRFClient cli = new JRFClient(new InetSocketAddress("jrfserver-hostname", 2205));
You have access to remote File, InputStream and OutputStream objects through the client. It extends java.io.File for seamless use in APIs that take a File to access its metadata (i.e. length(), lastModified(), ...).
It also uses optional compression for file-chunk transfer and a programmable MTU, with optimized whole-file retrieval. A CLI with an FTP-like syntax is built in for end users.
Apache Commons IO can also copy a remote URL straight to a local file in one call:
org.apache.commons.io.FileUtils.copyURLToFile(new URL(REMOTE_URL), new File(FILE_NAME), CONNECT_TIMEOUT, READ_TIMEOUT);
Since you are on Windows, you can set up a network share and access it that way.
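A minimal sketch of that approach with java.nio, assuming the share is reachable via a UNC path; the host and paths are placeholders:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ShareCopySketch {
    public static void main(String[] args) throws IOException {
        // UNC path to the remote share; Windows handles the credentials/mapping.
        Path remote = Paths.get("\\\\remote-host\\share\\myTest\\test.mid");
        Path local = Paths.get("C:\\temp\\test.mid");
        Files.copy(remote, local, StandardCopyOption.REPLACE_EXISTING);
    }
}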
