Play 2.3.x: Non-blocking image upload to Amazon S3 (Java)

I am wondering what the correct way is to upload images to Amazon S3 in a non-blocking way with Play 2.3.x (Java).
Right now I am wrapping the amazonS3.putObject method inside a promise. However, I fear that I am basically just blocking another thread with this logic. My code looks like the following:
return Promise.promise(
    new Function0<Boolean>() {
        public Boolean apply() {
            if (S3Plugin.amazonS3 != null) {
                try {
                    PutObjectRequest putObjectRequest = new PutObjectRequest(
                            S3Plugin.s3Bucket, name + "." + format, file.getFile());
                    ObjectMetadata metadata = putObjectRequest.getMetadata();
                    if (metadata == null) {
                        metadata = new ObjectMetadata();
                    }
                    metadata.setContentType(file.getContentType());
                    putObjectRequest.setMetadata(metadata);
                    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
                    S3Plugin.amazonS3.putObject(putObjectRequest);
                    return true;
                } catch (AmazonServiceException e) {
                    // error uploading image to S3
                    Logger.error("AmazonServiceException: " + e.toString());
                } catch (AmazonClientException e) {
                    // error uploading image to S3
                    Logger.error("AmazonClientException: " + e.toString());
                }
            }
            return false;
        }
    }
);
What is the best way to make the upload process non-blocking?
The Amazon library also provides the TransferManager class for asynchronous uploads, but I am not sure how to utilize it in a non-blocking way either...
SOLUTION:
After spending quite a while figuring out how to utilize the Promise/Future in Java, I came up with the following solution, thanks to Will Sargent:
import akka.dispatch.Futures;

final scala.concurrent.Promise<Boolean> promise = Futures.promise();
... create AmazonS3 upload object ...
upload.addProgressListener(new ProgressListener() {
    @Override
    public void progressChanged(ProgressEvent progressEvent) {
        if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
            promise.success(true);
        } else if (progressEvent.getEventCode() == ProgressEvent.FAILED_EVENT_CODE) {
            promise.success(false);
        }
    }
});
return Promise.wrap(promise.future());
It is important to note that I have to use the Scala promise and not the Play Framework promise. The return value, however, is a play.libs.F.Promise.

You can do the upload process and return a Future by creating a Promise, returning the promise's future, and only writing to the promise in the TransferManager's progress listener:
Examples of promises / futures: http://docs.scala-lang.org/overviews/core/futures.html
Return the Scala future from Promise: http://www.playframework.com/documentation/2.3.x/api/java/play/libs/F.Promise.html#wrapped()
TransferManager docs: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
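For completeness, here is a minimal sketch of how these pieces might fit together end to end. It is an assumption-laden illustration, not the original answer's code: the method name, the injected TransferManager, and the bucket/key parameters are all illustrative, while the progress-listener logic mirrors the solution above.
import akka.dispatch.Futures;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public static play.libs.F.Promise<Boolean> uploadNonBlocking(
        TransferManager transferManager, String bucket, String key, File file) {
    final scala.concurrent.Promise<Boolean> promise = Futures.promise();
    // TransferManager performs the upload on its own internal thread pool,
    // so no Play thread is blocked while the bytes are transferred
    Upload upload = transferManager.upload(new PutObjectRequest(bucket, key, file));
    upload.addProgressListener(new ProgressListener() {
        @Override
        public void progressChanged(ProgressEvent progressEvent) {
            // Complete the Scala promise exactly once, on a terminal event
            if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
                promise.success(true);
            } else if (progressEvent.getEventCode() == ProgressEvent.FAILED_EVENT_CODE) {
                promise.success(false);
            }
        }
    });
    // Expose the Scala future as a Play promise for the controller
    return play.libs.F.Promise.wrap(promise.future());
}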

Related

Vert.x - GraphQL Subscriptions with DataInputStreams

I have 3rd party code that I connect to via DataInputStream. The 3rd party code continually spits out information as it generates it. When something of interest comes across, I want to pass it along to GraphQL Subscription(s).
I'm not sure how to wire the 3rd party code to the server-side GraphQL subscription code given this scenario. Any suggestions would be appreciated.
Some conceptual code is below:
public void liveStream(DataInputStream in) {
    // Sit and constantly watch the input stream and report when messages come in
    while (true) {
        SomeMessage message = readFromInputStream(in);
        System.out.println("Received Message Type: " + message.getType());
        // Convert SomeMessage into the appropriate class based on its type
        // (note: compare strings with equals(), not ==)
        if (message.getType().equals("foo")) {
            Foo foo = convertMessageToFoo(message);
        } else if (message.getType().equals("bar")) {
            Bar bar = convertMessageToBar(message);
        } else if (howeverManyMoreOfThese) {
            // Keep converting to different objects
        }
    }
}
// The client code will eventually trigger this method when
// the GraphQL Subscription query is sent over
VertxDataFetcher<Publisher<SomeClassTBD>> myTestDataFetcher() {
    return new VertxDataFetcher<>((env, future) -> {
        try {
            future.complete(myTest());
        } catch (Exception e) {
            future.fail(e);
        }
    });
}
OK, I wrapped my liveStream code in an ObservableOnSubscribe using an ExecutorService and I'm getting back all the data. I guess I can now either pass it straight through to the front end, or create separate publishers to deal with specific object types and have GraphQL subscriptions point to their respective publishers.
ExecutorService executor = Executors.newSingleThreadExecutor();
ObservableOnSubscribe<SomeClassTBD> handler = emitter ->
    executor.submit(() -> {
        try {
            // liveStream code here; call emitter.onNext(...) for each message
            emitter.onComplete();
        } catch (Exception e) {
            emitter.onError(e);
        } finally {
            // Cleanup here
        }
    });
Observable<SomeClassTBD> observable = Observable.create(handler);
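A sketch of the publisher wiring mentioned above, under the assumption that RxJava 2 and graphql-java are in use: a GraphQL subscription DataFetcher must return a Reactive Streams Publisher, and Flowable already implements that interface. The fetcher name is illustrative.
import graphql.schema.DataFetcher;
import io.reactivex.BackpressureStrategy;
import io.reactivex.Observable;
import org.reactivestreams.Publisher;

DataFetcher<Publisher<SomeClassTBD>> liveStreamFetcher(Observable<SomeClassTBD> observable) {
    // Buffer events so a slow GraphQL client does not lose messages;
    // Flowable implements org.reactivestreams.Publisher directly
    Publisher<SomeClassTBD> publisher = observable.toFlowable(BackpressureStrategy.BUFFER);
    return environment -> publisher;
}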

Verify if deleteObject has actually deleted the object in the AWS S3 Java SDK

I have the following method, which deletes a file from an AWS S3 bucket. However:
- there is no exception thrown if the file doesn't exist
- there is no success code or flag to see if the file has been deleted successfully
Is there any workaround to deal with this situation?
@Override
public void deleteFile(String fileName) {
    try {
        this.client.deleteObject(builder ->
                builder
                        .bucket(this.bucketName)
                        .key(fileName)
                        .build());
    } catch (S3Exception ex) {
        ex.printStackTrace();
    }
}
If your request succeeded, then your object is deleted. Note that, due to eventual consistency, the object is not guaranteed to disappear immediately. You need to check the HTTP status code of the response.
// The question uses the AWS SDK for Java v2, where deleteObject returns a
// DeleteObjectResponse whose HTTP status is exposed via sdkHttpResponse()
try {
    DeleteObjectResponse response = this.client.deleteObject(builder ->
            builder.bucket(this.bucketName).key(fileName).build());
    int statusCode = response.sdkHttpResponse().statusCode();
    if (statusCode >= 200 && statusCode < 300) {
        // Success (S3 returns 204 No Content for a successful delete)
    }
} catch (S3Exception ex) {
    // Delete failed; handle specific error codes below
    String errorCode = ex.awsErrorDetails().errorCode();
    if ("AllAccessDisabled".equals(errorCode)) {
        // Do something
    }
    if ("NoSuchKey".equals(errorCode)) {
        // Do something
    }
}
Also, there is an API available to check if an object exists in S3: doesObjectExist
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#doesObjectExist-java.lang.String-java.lang.String-
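As a hedged sketch of both existence checks (the client and field names are assumptions): doesObjectExist is a v1 SDK method, while the v2 SDK used in the question has no such helper, so the usual equivalent there is a HEAD request.
// AWS SDK for Java v1: a one-liner
boolean exists = amazonS3Client.doesObjectExist(bucketName, fileName);

// AWS SDK for Java v2: HEAD the object and treat NoSuchKey as absent
public boolean fileExists(String fileName) {
    try {
        this.client.headObject(builder ->
                builder.bucket(this.bucketName).key(fileName).build());
        return true;
    } catch (software.amazon.awssdk.services.s3.model.NoSuchKeyException e) {
        return false;
    }
}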

Upload a file to s3 bucket with REST API

Doing this with the S3 SDK is simple, but I want to go with the S3 REST API (I read about some advantages to this).
I have gone through the S3 API documentation and find it difficult to code against. I am totally new to this type of coding, which involves request parameters, request headers, response headers, authorization, error codes, ACLs, etc. The documentation provides sample examples, but I could not figure out how to use them in code.
Can anyone help me with where to start and end, so that I can code all CRUD operations on S3 using the API? An example of uploading an image file would help my understanding.
I am putting some basic code snippets below that you can easily integrate into your code.
Getting the S3 client:
private AmazonS3 getS3Client() {
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withCredentials(credentials)
            .withAccelerateModeEnabled(true).withRegion(Regions.US_EAST_1).build();
    return s3Client;
}
Uploading a file:
public void processOutput(FileServerDTO fileRequest) {
    try {
        AmazonS3 s3Client = getS3Client();
        s3Client.putObject(fileRequest.getBucketName(), fileRequest.getKey(), fileRequest.getFileContent(), null);
    } catch (Exception e) {
        logger.error("Exception while uploading file" + e.getMessage());
        throw e;
    }
}
Downloading a file:
public byte[] downloadFile(FileServerDTO fileRequest) {
    AmazonS3 s3Client = getS3Client();
    S3Object s3object = s3Client.getObject(new GetObjectRequest(fileRequest.getBucketName(), fileRequest.getKey()));
    S3ObjectInputStream inputStream = s3object.getObjectContent();
    try {
        return FileCopyUtils.copyToByteArray(inputStream);
    } catch (Exception e) {
        logger.error("Exception while downloading file" + e.getMessage());
    }
    return null;
}
FileServerDTO contains basic attributes related to file info.
You can easily use these util methods in your service.
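Since the question specifically asks about the REST API: a common middle ground, not mentioned in the answer above, is to generate a presigned URL with the SDK and then PUT the bytes over plain HTTP, which exercises the S3 REST API without hand-rolling SigV4 request signing. A minimal sketch, assuming the presigned URL has already been generated elsewhere:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public static int uploadViaRest(String presignedUrl, String filePath, String contentType) throws Exception {
    byte[] body = Files.readAllBytes(Paths.get(filePath));
    HttpURLConnection connection = (HttpURLConnection) new URL(presignedUrl).openConnection();
    connection.setRequestMethod("PUT");
    connection.setDoOutput(true);
    connection.setRequestProperty("Content-Type", contentType);
    try (OutputStream out = connection.getOutputStream()) {
        out.write(body); // the raw object bytes form the PUT body
    }
    // S3 returns 200 OK for a successful PUT
    return connection.getResponseCode();
}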
If you go through the S3 documentation you will get a better understanding of how the S3 services work. Here are some examples of how to create, upload, and delete files on an S3 server using the S3 services:
1) how to use Amazon’s S3 storage with the Java API
2) S3 Docs
They include a brief explanation of how it works.

How to abort large file upload with OkHttp?

I am using OkHttp 3.1.2.
I've created a file upload similar to the original recipe, which is found here: https://github.com/square/okhttp/blob/master/samples/guide/src/main/java/okhttp3/recipes/PostMultipart.java
I can't find an example of how to abort an upload of a large file upon user request. I don't mean how to get the user request, but how to tell OkHttp to stop sending data.
So far the only solution I can imagine is to use a custom RequestBody, add an abort() method, and override the writeTo() method like this:
public void abort() {
    aborted = true;
}

@Override
public void writeTo(BufferedSink sink) throws IOException {
    Source source = null;
    try {
        source = Okio.source(mFile);
        long transferred = 0;
        long read;
        while (!aborted && (read = source.read(sink.buffer(), SEGMENT_SIZE)) != -1) {
            transferred += read;
            sink.flush();
            mListener.transferredSoFar(transferred);
        }
    } finally {
        Util.closeQuietly(source);
    }
}
Is there any other way?
It turns out it is quite easy: just hold a reference to the Call object and cancel it when needed, like this:
private Call mCall;

private void executeRequest(Request request) {
    mCall = mOkHttpClient.newCall(request);
    try {
        Response response = mCall.execute();
        ...
    } catch (IOException e) {
        if (!mCall.isCanceled()) {
            mLogger.error("Error uploading file: {}", e);
            uploadFailed(); // notify whoever is needed
        }
    }
}

public void abortUpload() {
    if (mCall != null) {
        mCall.cancel();
    }
}
Please note that when you cancel the Call while uploading, an IOException will be thrown, so you have to check in the catch block whether the call was cancelled (as shown above); otherwise you will get a false positive for an error.
I think the same approach can be used for aborting download of large files.
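As a sketch beyond the original answer: the same cancellation pattern works with OkHttp's asynchronous API, where enqueue() keeps the caller thread free and a cancel surfaces in onFailure. The field names mirror the answer above.
mCall = mOkHttpClient.newCall(request);
mCall.enqueue(new Callback() {
    @Override
    public void onFailure(Call call, IOException e) {
        // A cancelled call also lands here, so filter out user aborts
        if (!call.isCanceled()) {
            uploadFailed();
        }
    }

    @Override
    public void onResponse(Call call, Response response) throws IOException {
        response.close(); // always release the response body
    }
});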

AWS S3 file search using Java

We are using a Java class to download a file from an AWS S3 bucket with the following code:
inputStream = AWSFileUtil.getInputStream(
        AWSConnectionUtil.getS3Object(null),
        "cdn.generalsentiment.com", filePath);
AWSFileUtil is a class which checks the credentials and gets the InputStream from the S3 bucket using the getInputStream method. The filePath is the file inside the cdn.generalsentiment.com bucket.
We want to write a method which checks whether a particular file exists in the AWS S3 bucket and returns a boolean or some other value.
Please suggest a solution for this.
public static boolean isValidFile(AmazonS3 s3,
                                  String bucketName,
                                  String path) throws AmazonClientException {
    try {
        ObjectMetadata objectMetadata =
                s3.getObjectMetadata("cdn.generalsentiment.com", path);
    } catch (NotFoundException nfe) {
        nfe.printStackTrace();
    }
    return true;
}
If the file exists it returns true; otherwise it throws NotFoundException, which I want to catch so that isValidFile returns false.
Any other alternative for the method body or return type would be great.
The updated one:
public static boolean doesFileExist(AmazonS3 s3,
                                    String bucketName,
                                    String path) throws AmazonClientException,
                                                        AmazonServiceException {
    boolean isValidFile = true;
    try {
        ObjectMetadata objectMetadata =
                s3.getObjectMetadata("cdn.generalsentiment.com", path);
    } catch (NotFoundException nfe) {
        isValidFile = false;
    } catch (Exception exception) {
        exception.printStackTrace();
        isValidFile = false;
    }
    return isValidFile;
}
Daan's answer using GET Bucket (List Objects) (via the respective wrapper from the AWS SDK for Java, see below) is the most efficient approach to get the desired information for many objects at once (+1); you'll need to post-process the response accordingly, of course.
This is done most easily via one of the respective methods of the class AmazonS3Client, e.g. listObjects(String bucketName):
AmazonS3 s3 = new AmazonS3Client(); // provide credentials, if need be
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
        .withBucketName("cdn.generalsentiment.com"));
for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
    System.out.println(objectSummary.getKey());
}
Alternative
If you are only interested in a single object (file) at a time, using HEAD Object will be much more efficient, insofar as you can deduce existence straight from the respective HTTP response code (see Error Responses for details), i.e. 404 Not Found for a response of NoSuchKey - The specified key does not exist.
Again, this is done most easily via Class AmazonS3Client, namely getObjectMetadata(String bucketName, String key), e.g.:
public static boolean isValidFile(AmazonS3 s3,
                                  String bucketName,
                                  String path) throws AmazonClientException, AmazonServiceException {
    boolean isValidFile = true;
    try {
        ObjectMetadata objectMetadata = s3.getObjectMetadata(bucketName, path);
    } catch (AmazonS3Exception s3e) {
        if (s3e.getStatusCode() == 404) {
            // i.e. 404: NoSuchKey - The specified key does not exist
            isValidFile = false;
        } else {
            throw s3e; // rethrow all S3 exceptions other than 404
        }
    }
    return isValidFile;
}
Use the GET Bucket S3 API:
http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTBucketGET.html
and specify the full file name as a prefix.
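A sketch of that prefix approach with the v1 SDK, reusing the bucket name and filePath variable from the question; withMaxKeys merely limits the response size:
ObjectListing listing = s3.listObjects(new ListObjectsRequest()
        .withBucketName("cdn.generalsentiment.com")
        .withPrefix(filePath)
        .withMaxKeys(1));
// The key exists only if the first match equals the full file name
boolean exists = !listing.getObjectSummaries().isEmpty()
        && listing.getObjectSummaries().get(0).getKey().equals(filePath);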
This is a simple way to find an existing folder in a bucket (the answers above are also correct). If the folder name you pass ends with '/', it returns true.
Note: mybucket/userProfileModule/abc.pdf is my folder structure.
boolean result = s3client.doesObjectExist("mybucket", "userProfileModule/");
System.out.println(result);
