Upload a file to s3 bucket with REST API - java

Doing this with the S3 SDK is simple, but I want to go with the S3 REST API (I have read about some advantages of this approach).
I have gone through the S3 API documentation and find it difficult to code against. I am totally new to this kind of coding, which involves request parameters, request headers, response headers, authorization, error codes, ACLs, etc. The documentation provides sample examples, but I could not figure out how to use those examples in my own code.
Can anyone help me with where to start and end so that I can code all CRUD operations on S3 using the API? An example of uploading an image file would help me understand better.

I am putting some basic code snippets below that you can easily integrate into your code.
Getting the S3 client:
private AmazonS3 getS3Client() {
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withCredentials(credentials)
            .withAccelerateModeEnabled(true).withRegion(Regions.US_EAST_1).build();
    return s3Client;
}
Uploading file:
public void processOutput(FileServerDTO fileRequest) {
    try {
        AmazonS3 s3Client = getS3Client();
        s3Client.putObject(fileRequest.getBucketName(), fileRequest.getKey(), fileRequest.getFileContent(), null);
    } catch (Exception e) {
        logger.error("Exception while uploading file" + e.getMessage());
        throw e;
    }
}
Downloading File:
public byte[] downloadFile(FileServerDTO fileRequest) {
    AmazonS3 s3Client = getS3Client();
    S3Object s3object = s3Client.getObject(new GetObjectRequest(fileRequest.getBucketName(), fileRequest.getKey()));
    S3ObjectInputStream inputStream = s3object.getObjectContent();
    try {
        return FileCopyUtils.copyToByteArray(inputStream);
    } catch (Exception e) {
        logger.error("Exception while downloading file" + e.getMessage());
    }
    return null;
}
FileServerDTO is a simple DTO holding the basic file attributes (bucket name, key, file content). You can easily use these util methods in your service.
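For completeness, here is a rough sketch of how these util methods might be called. The FileService class name, the FileServerDTO setters, and the sample values are assumptions for illustration only, not part of the original answer.

import java.io.ByteArrayInputStream;

public class FileServiceExample {
    public static void main(String[] args) {
        // All names and values below are illustrative assumptions.
        FileServerDTO request = new FileServerDTO();
        request.setBucketName("my-bucket");                              // assumed setter
        request.setKey("images/photo.jpg");                              // assumed setter
        request.setFileContent(new ByteArrayInputStream(new byte[0]));   // assumed setter; real image bytes go here

        FileService service = new FileService();              // the class holding the util methods above
        service.processOutput(request);                        // upload
        byte[] downloaded = service.downloadFile(request);     // download
        System.out.println("Downloaded " + (downloaded == null ? 0 : downloaded.length) + " bytes");
    }
}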

If you go through the S3 service documentation you will get a better understanding of how S3 works. Here are some examples of how to create, upload, and delete files on an S3 server using the S3 services:
1) how to use Amazon’s S3 storage with the Java API
2) S3 Docs
There is a brief explanation there of how it works.
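Since the original question asks about the S3 REST API rather than the SDK, here is a minimal, hedged sketch of what an upload over plain HTTP could look like using a presigned URL. The URL below is a placeholder: a real one has to be generated with Signature Version 4 signing (for example via the SDK's GeneratePresignedUrlRequest) before this request will be accepted.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RestPutExample {
    public static void main(String[] args) throws Exception {
        // Placeholder presigned URL - in practice it must carry a valid SigV4 signature.
        URL url = new URL("https://my-bucket.s3.amazonaws.com/images/photo.jpg?X-Amz-Algorithm=...");

        byte[] body = Files.readAllBytes(Paths.get("photo.jpg"));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "image/jpeg");
        conn.setFixedLengthStreamingMode(body.length);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }

        // 200 means the object was stored; other codes carry an S3 error code in the response body.
        System.out.println("S3 responded with HTTP " + conn.getResponseCode());
    }
}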

Related

Can static content on spring-boot-web application be dynamic (refreshed)?

I am still searching around this subject, but I cannot find a simple solution, and I am not sure one exists.
Part 1
I have a service in my application that generates an Excel doc from dynamic DB data.
public static void notiSubscribersToExcel(List<NotificationsSubscriber> data) {
    // generating the file dynamically from the DB's data; wb is the Workbook built from `data`
    String prefix = "./src/main/resources/static";
    String directoryName = prefix + "/documents/";
    String fileName = directoryName + "subscribers_list.xlsx";
    File directory = new File(directoryName);
    if (!directory.exists()) {
        directory.mkdir();
        // If you require it to make the entire directory path including parents,
        // use directory.mkdirs(); here instead.
    }
    try (OutputStream fileOut = new FileOutputStream(fileName)) {
        wb.write(fileOut);
        fileOut.close();
        wb.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Part 2
I want to access it from the browser, so when I request it, it will get downloaded.
I know that for static content, all I need to do is request the file from the browser, like this:
http://localhost:8080/documents/myfile.xlsx
Once I am able to do that, all I need is to create a link to this URL from my client app.
The problem -
Currently, if I request the file as above, it only downloads the file that was there at compile time; files generated after the app is running are not available.
It seems that the content is (as the name says) "static" and cannot be changed after startup.
So my question is:
Is there a way to define a folder in the app structure that is dynamic? I just want to access the newly generated file.
BTW, I found this answer and others that use configuration methods or web services, but I don't want all of that, and I have tried some of them, but the result is the same.
FYI, I don't bundle my client app with the server app; I run them from different hosts.
The problem is to download a file with dynamic content from a Spring app.
This can be solved with Spring Boot. Here is the solution, as shown in this illustration: when I click Download report, my app generates a dynamic Excel report and it is downloaded to the browser:
From JS, make a GET request to a Spring controller:
function DownloadReport(e){
    //Post the values to the controller
    window.location = "../report";
}
Here is the Spring Controller GET Method with /report:
@RequestMapping(value = ["/report"], method = [RequestMethod.GET])
@ResponseBody
fun report(request: HttpServletRequest, response: HttpServletResponse) {
    // Call exportExcel to generate an EXCEL doc with data using jxl.Workbook
    val excelData = excel.exportExcel(myList)
    try {
        // Download the report.
        val reportName = "ExcelReport.xls"
        response.contentType = "application/vnd.ms-excel"
        response.setHeader("Content-disposition", "attachment; filename=$reportName")
        org.apache.commons.io.IOUtils.copy(excelData, response.outputStream)
        response.flushBuffer()
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
This code is implemented in Kotlin - but you can implement it as easily in Java too.
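For reference, a rough Java equivalent of the Kotlin controller above could look like the following. ReportController and buildExcelReport are illustrative names, and the Excel-generation step itself is left as a placeholder rather than taken from the original answer.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import javax.servlet.http.HttpServletResponse;

@Controller
public class ReportController {

    @RequestMapping(value = "/report", method = RequestMethod.GET)
    @ResponseBody
    public void report(HttpServletResponse response) {
        try {
            // Placeholder: build the report bytes with your own Excel-export helper
            // (the Kotlin answer calls excel.exportExcel(myList) for this step).
            byte[] reportBytes = buildExcelReport();

            response.setContentType("application/vnd.ms-excel");
            response.setHeader("Content-disposition", "attachment; filename=ExcelReport.xls");
            response.getOutputStream().write(reportBytes);
            response.flushBuffer();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Hypothetical helper standing in for the real Excel-generation code.
    private byte[] buildExcelReport() {
        return new byte[0];
    }
}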

Unable to clone a remote repository(On Premise) to a S3 Bucket using aws lambda function

I am trying to connect my on-premise Git repository (remote URL), which I can reach from my local machine using credentials, to AWS CodeBuild or CodePipeline.
To achieve this, I got the following statement from AWS support:
Currently CodeBuild only supports BitBucket repositories hosted on bitbucket.org , BitBucket Server (on-premise BitBucket) is not supported at this time. There is an existing feature request regarding supporting BitBucket Server in CodeBuild to which I've added your interest. I am unable to share an ETA as to when this feature may be released however you can keep an eye on the following sites for updates regarding this feature:
-- AWS CodeBuild Release History: https://docs.aws.amazon.com/codebuild/latest/userguide/history.html
-- What's New: http://aws.amazon.com/new/
-- AWS Blogs: https://aws.amazon.com/blogs/aws/
There is a possible workaround by making a custom bridge between the Git repository and AWS, which can be done by creating a Lambda function that will clone the repository into a S3 bucket which can then be used as source in CodeBuild, you can read more about this here [1].
Hope that helps, if you have any questions please let me know, I'll be happy to help.
[1] https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
So I have implemented a Lambda function like this:
public String handleRequest(S3Event event, Context context) {
    final String REMOTE_URL = "https://username@stash.some.com/scm/something/dpautomation.git";
    CredentialsProvider cp = new UsernamePasswordCredentialsProvider("username", "password");
    try (Git result = Git.cloneRepository().setURI(REMOTE_URL).setDirectory(cretaeS3()).setCredentialsProvider(cp)
            .call()) {
        System.out.println("Having repository: " + result.getRepository().getDirectory());
    } catch (InvalidRemoteException e) {
        e.printStackTrace();
    } catch (TransportException e) {
        e.printStackTrace();
    } catch (GitAPIException e) {
        e.printStackTrace();
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return "sucess";
}

public File cretaeS3() throws IOException {
    String bucketName = "selenium-lambda-java";
    String destinationKey = "screenshots_on_failure/testdata1";
    String filename = "\"+temp";
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion("us-east-1")
            .withCredentials(new ProfileCredentialsProvider()).build();
    File src = new File(filename);
    s3Client.putObject(new PutObjectRequest(bucketName, destinationKey, src));
    return src;
}
In the above code, since I have to copy the remote repo's files directly to an S3 bucket, I used jgit.api.Git. When cloning the remote repo, I need to pass a path as a File to the setDirectory parameter, while ideally it should be a bucket path, so I tried the above code. I am getting the error below:
{
    "errorMessage": "Unable to calculate MD5 hash: \"+temp (No such file or directory)",
    "errorType": "com.amazonaws.SdkClientException",
    "stackTrace": [
        "com.amazonaws.services.s3.AmazonS3Client.getInputStream(AmazonS3Client.java:1852)",
        "com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1770)",
        "com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1749)",
        "com.amazonaws.lambda.demo.LambdaFunctionHandler.cretaeS3(LambdaFunctionHandler.java:79)",
        "com.amazonaws.lambda.demo.LambdaFunctionHandler.handleRequest(LambdaFunctionHandler.java:52)",
        "com.amazonaws.lambda.demo.LambdaFunctionHandler.handleRequest(LambdaFunctionHandler.java:1)"
    ],
    "cause": {
        "errorMessage": "\"+temp (No such file or directory)",
        "errorType": "java.io.FileNotFoundException",
        "stackTrace": [
            "java.io.FileInputStream.open0(Native Method)",
            "java.io.FileInputStream.open(FileInputStream.java:195)",
            "java.io.FileInputStream.<init>(FileInputStream.java:138)",
            "com.amazonaws.util.Md5Utils.computeMD5Hash(Md5Utils.java:97)",
            "com.amazonaws.util.Md5Utils.md5AsBase64(Md5Utils.java:104)",
            "com.amazonaws.services.s3.AmazonS3Client.getInputStream(AmazonS3Client.java:1848)",
            "com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1770)",
            "com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1749)",
            "com.amazonaws.lambda.demo.LambdaFunctionHandler.cretaeS3(LambdaFunctionHandler.java:79)",
            "com.amazonaws.lambda.demo.LambdaFunctionHandler.handleRequest(LambdaFunctionHandler.java:52)",
            "com.amazonaws.lambda.demo.LambdaFunctionHandler.handleRequest(LambdaFunctionHandler.java:1)"
        ]
    }
}
I have been stuck on this for the last 5 days; can somebody please help me with it?
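For what it's worth, here is a hedged sketch of one possible approach: JGit's setDirectory needs a real local directory (it cannot point at a bucket), and in Lambda the only writable location is /tmp, so the repository can be cloned there first and its working tree then uploaded to S3 file by file. The repo URL, credentials, bucket, and key prefix below are placeholders, not values from the original question.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CloneToS3Sketch {
    public static void main(String[] args) throws Exception {
        // Placeholder values - replace with your own repo URL, credentials, bucket, and prefix.
        String remoteUrl = "https://stash.example.com/scm/project/repo.git";
        String bucketName = "my-target-bucket";
        String keyPrefix = "repo-snapshot/";

        // 1) Clone into a temp directory under /tmp, the only writable path inside Lambda.
        Path workDir = Files.createTempDirectory("repo-clone");
        try (Git git = Git.cloneRepository()
                .setURI(remoteUrl)
                .setDirectory(workDir.toFile())
                .setCredentialsProvider(new UsernamePasswordCredentialsProvider("username", "password"))
                .call()) {

            // 2) Walk the working tree (skipping .git) and upload each regular file to S3.
            AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion("us-east-1").build();
            try (Stream<Path> paths = Files.walk(workDir)) {
                paths.filter(Files::isRegularFile)
                     .filter(p -> !p.toString().contains(File.separator + ".git" + File.separator))
                     .forEach(p -> s3.putObject(bucketName,
                             keyPrefix + workDir.relativize(p).toString().replace(File.separator, "/"),
                             p.toFile()));
            }
        }
    }
}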

Uploading file to Amazon S3 bucket : upload works fine but some large files are empty

public void upload(List<CommonsMultipartFile> fileUpload) {
    for (CommonsMultipartFile file : fileUpload) {
        try {
            String contentType = file.getContentType();
            String newName = generateFileKey(file);
            AmazonS3UploadRequest uploadRequest = new AmazonS3UploadRequest.Builder()
                    .bucket(bucket)
                    .contentType(contentType)
                    .newResourceName(newName)
                    .resourceStream(file.getInputStream(), Long.valueOf(file.getBytes().length))
                    .build();
            uploadToS3(uploadRequest);
        } catch (Exception e) {
            LOG.error("Error while uploading files to s3: ", e);
            throw new ServiceRuntimeException("Error writing file to s3 bucket");
        }
    }
}
public String uploadToS3(AmazonS3UploadRequest uploadRequest) {
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setContentLength(uploadRequest.getResourceLength());
    objectMetadata.setContentType(uploadRequest.getContentType());
    Upload upload = transferManager.upload(uploadRequest.getBucket(), uploadRequest.getNewResourceName(), uploadRequest.getResourceStream(), objectMetadata);
}
I am uploading PDF files to an Amazon S3 bucket. All files are uploaded successfully,
but large files (15-page PDFs, etc.) are empty.
The Amazon TransferManager is injected through Spring, and the Amazon credentials are injected from a .properties file into the TransferManager.
Note: .png/.jpeg files are also being uploaded as empty.
Hmm, I am kind of confused about what's happening. I need some input.
Thanks in advance.
Try the FileUpload class from Apache Commons through its Streaming API. The code snippet on that page will work fine.
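For reference, a rough sketch of what that streaming approach looks like; the class and method names follow the Commons FileUpload javadoc, but treat the snippet as an assumption rather than the exact code from that page. The parsed stream can then be handed to the S3 upload code.

import org.apache.commons.fileupload.FileItemIterator;
import org.apache.commons.fileupload.FileItemStream;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

import javax.servlet.http.HttpServletRequest;
import java.io.InputStream;

public class StreamingUploadSketch {
    public void handle(HttpServletRequest request) throws Exception {
        ServletFileUpload upload = new ServletFileUpload();
        FileItemIterator iter = upload.getItemIterator(request);
        while (iter.hasNext()) {
            FileItemStream item = iter.next();
            if (!item.isFormField()) {
                try (InputStream stream = item.openStream()) {
                    // Hand the stream to your S3 upload code (e.g. transferManager.upload(...)).
                }
            }
        }
    }
}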
Try this code to upload your PDF to Amazon S3. I assume that you have an AWS app ID and secret key.
AWSCredentials credentials = new BasicAWSCredentials(appId, appSecret);
AmazonS3 s3Client = new AmazonS3Client(credentials);
String bucketPath = "YOUR_S3_BUCKET";

public void upload(List<CommonsMultipartFile> fileUpload) {
    for (CommonsMultipartFile file : fileUpload) {
        try {
            String contentType = file.getContentType();
            String pdfName = generateFileKey(file);
            InputStream is = file.getInputStream();
            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(is.available());
            s3Client.putObject(new PutObjectRequest(bucketPath, pdfName, is, meta).withCannedAcl(CannedAccessControlList.PublicRead));
        } catch (Exception e) {
            LOG.error("Error while uploading files to s3: ", e);
            throw new ServiceRuntimeException("Error writing file to s3 bucket");
        }
    }
}
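One more thing worth checking, offered as a guess rather than a confirmed fix: TransferManager.upload returns immediately and performs the transfer in the background, so if the multipart request and its input stream are recycled before the transfer finishes, larger files can end up empty. Blocking until the transfer completes would look roughly like this (BlockingUploadSketch is an illustrative name, not code from the original post):

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.InputStream;

public class BlockingUploadSketch {
    private final TransferManager transferManager;

    public BlockingUploadSketch(TransferManager transferManager) {
        this.transferManager = transferManager;
    }

    public void uploadAndWait(String bucket, String key, InputStream stream, ObjectMetadata metadata) {
        Upload upload = transferManager.upload(bucket, key, stream, metadata);
        try {
            // Block until the (possibly multipart) transfer has finished,
            // so the input stream is not recycled mid-upload.
            upload.waitForCompletion();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("Upload interrupted", e);
        }
    }
}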

Play 2.3.x: Non-blocking image upload to Amazon S3

I am wondering what the correct way is, with Play 2.3.x (Java), to upload images to Amazon S3 in a non-blocking way.
Right now I am wrapping the amazonS3.putObject method inside a promise. However, I fear that I am basically just blocking another thread with this logic. My code looks like the following:
return Promise.promise(
    new Function0<Boolean>() {
        public Boolean apply() {
            if (S3Plugin.amazonS3 != null) {
                try {
                    PutObjectRequest putObjectRequest = new PutObjectRequest(
                            S3Plugin.s3Bucket, name + "." + format, file.getFile());
                    ObjectMetadata metadata = putObjectRequest.getMetadata();
                    if (metadata == null) {
                        metadata = new ObjectMetadata();
                    }
                    metadata.setContentType(file.getContentType());
                    putObjectRequest.setMetadata(metadata);
                    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
                    S3Plugin.amazonS3.putObject(putObjectRequest);
                    return true;
                } catch (AmazonServiceException e) {
                    // error uploading image to s3
                    Logger.error("AmazonServiceException: " + e.toString());
                } catch (AmazonClientException e) {
                    // error uploading image to s3
                    Logger.error("AmazonClientException: " + e.toString());
                }
            }
            return false;
        }
    }
);
What is the best way to make the upload process non-blocking?
The Amazon library also provides the TransferManager class for asynchronous uploads, but I am not sure how to utilize that in a non-blocking way either...
SOLUTION:
After spending quite a while figuring out how to use Promise/Future in Java, I came up with the following solution, thanks to Will Sargent:
import akka.dispatch.Futures;

final scala.concurrent.Promise<Boolean> promise = Futures.promise();

... create AmazonS3 upload object ...

upload.addProgressListener(new ProgressListener() {
    @Override
    public void progressChanged(ProgressEvent progressEvent) {
        if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
            promise.success(true);
        } else if (progressEvent.getEventCode() == ProgressEvent.FAILED_EVENT_CODE) {
            promise.success(false);
        }
    }
});

return Promise.wrap(promise.future());
Important to note is that I have to use the Scala promise and not the Play Framework promise. The return value, however, is a play.libs.F.Promise.
You can do the upload process and return a Future by creating a Promise, returning the promise's future, and only writing to the promise in the TransferManager's progress listener:
Examples of promises / futures: http://docs.scala-lang.org/overviews/core/futures.html
Return the Scala future from Promise: http://www.playframework.com/documentation/2.3.x/api/java/play/libs/F.Promise.html#wrapped()
TransferManager docs: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
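For completeness, the "create AmazonS3 upload object" step that the solution leaves out might look roughly like this with TransferManager; the client, credentials, bucket, key, and class name below are placeholders and not taken from the original post.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class UploadObjectSketch {
    public static Upload startUpload(File imageFile) {
        // Placeholder client and names - swap in S3Plugin.amazonS3 / S3Plugin.s3Bucket as in the question.
        AmazonS3 s3Client = new AmazonS3Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        TransferManager transferManager = new TransferManager(s3Client);

        PutObjectRequest request =
                new PutObjectRequest("my-bucket", "images/photo.jpg", imageFile)
                        .withCannedAcl(CannedAccessControlList.PublicRead);

        // Returns immediately; attach the ProgressListener from the solution above
        // to this Upload and complete the scala Promise from inside the listener.
        return transferManager.upload(request);
    }
}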

How to mark a certain S3 file as Make Public

How do I mark a certain S3 file as "Make Public" via the web services API?
Use the method setCannedAcl(CannedAccessControlList.PublicRead) to change the access control rights. Read the Javadoc for details here.
Sample Code:
BasicAWSCredentials basicAWSCredentials = new BasicAWSCredentials(ACCESS_KEY,SECRET_KEY);
AmazonS3 s3 = new AmazonS3Client(basicAWSCredentials);
PutObjectRequest putObj = new PutObjectRequest(bucketName, objectKey, fileToUpload);
// making the object Public
putObj.setCannedAcl(CannedAccessControlList.PublicRead);
s3.putObject(putObj);
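If the object already exists in the bucket, a variant (using the same client and placeholder names as above) is to update its ACL in place instead of re-uploading:

// Make an already-uploaded object public without re-uploading it.
s3.setObjectAcl(bucketName, objectKey, CannedAccessControlList.PublicRead);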
