I am currently using S3 with the Java API to get objects and their content. I've created a CloudFront distribution using the AWS console and set my S3 bucket (with my objects) as the origin. But I didn't notice any improvement in download performance, and I noticed in the console window that the URL still refers to S3:
INFO: Sending Request: GET https://mybucket.s3.amazonaws.com/picture.jpg Headers: (Range: bytes=5001-1049479, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
whereas according to the Getting Started guide for CloudFront, the URL should be:
http://(domain name)/picture.jpg
where (domain name) is specific to the CloudFront distribution. So the Java API is still getting the file directly from S3 and not through CloudFront.
Is there any way to download files via CloudFront using the Java API for S3? If not, what's the best approach to get objects via CloudFront in my Java program? I'm still kinda new to this stuff; any help greatly appreciated!
The Java API for S3 cannot be used to interact with CloudFront.
If you want to download content through your CloudFront distribution, you have to write your own HTTP code (which should be simple). You can also just open http://(CloudFront domain name)/picture.jpg in a browser and check the download speed first.
URL url = new URL(your_cloudfront_url); // e.g. http://d111111abcdef8.cloudfront.net/picture.jpg
InputStream in = url.openStream();      // the bytes are now served from the CloudFront edge cache
But you should know that it can take 24 hours or more for changes in S3 to become visible through CloudFront, since edge locations cache objects (the default TTL is 24 hours).
If you cannot open the stream, the fallback is the S3 client's getObject(bucketName, key) method.
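Putting the two together, here is a minimal sketch; the CloudFront domain, bucket name, and key are illustrative placeholders, not values from the question:

import java.io.InputStream;
import java.net.URL;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;

public class CloudFrontDownload {
    public static void main(String[] args) throws Exception {
        // Fetch through the CloudFront edge (the domain is a placeholder)
        URL url = new URL("http://d111111abcdef8.cloudfront.net/picture.jpg");
        try (InputStream in = url.openStream()) {
            System.out.println("first byte via CloudFront: " + in.read());
        } catch (Exception e) {
            // Fall back to fetching directly from S3 with the SDK
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            S3Object object = s3.getObject("mybucket", "picture.jpg");
            try (InputStream in = object.getObjectContent()) {
                System.out.println("first byte via S3: " + in.read());
            }
        }
    }
}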
I want to create a signed request for external users so they can upload a file to my S3 bucket, with the following limitations:
URL expiration time
size range limit
content type
I did some research and found out that there are two main ways to achieve this:
Presigned URL (PutObject API): https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
POST object (a POST policy with the limitations specified): https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
Unfortunately, neither of these two ways supports all the limitations I need. To be precise, a presigned URL doesn't let you limit the file size range (only an exact number of bytes to be uploaded), while POST object doesn't support multipart upload (as far as I understood).
So the question is: how can I enforce all these limitations and still be able to upload the file in parallel (multipart)?
I'm using Java and have already checked MinIO, but didn't find anything. Maybe there is some API that isn't covered by the AWS SDK or MinIO implementations that allows this?
I use Java AWS S3 SDK to presign my requests. I have the following code:
var request = new GeneratePresignedUrlRequest(bucketName, filename)
.withMethod(method)
.withExpiration(expiration());
// do something with request
return s3Client.generatePresignedUrl(request);
What do I need to write in place of the comment to add custom conditions like content-length-range?
For browser-based POST uploads to S3, the AWS Java SDK doesn't provide a way to generate pre-signed URLs with conditions. There's an open feature request to add this to the v2 SDK. Note that the PHP, Node.js, and Python SDKs all provide this feature.
For regular HTTP PUT pre-signed URLs, you can't apply content-length restrictions. You can place conditions using a custom policy, but that only supports:
DateLessThan
DateGreaterThan
IpAddress
If you need to deal with objects outside of a given size range, then you could potentially handle that in AWS Lambda after the object has been uploaded, as sketched below.
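A hedged sketch of that idea as an S3-triggered Lambda, assuming SDK v1 and the aws-lambda-java-events library; the class name and the 10 MiB limit are illustrative, not from the answer:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class SizeGuardHandler implements RequestHandler<S3Event, Void> {
    private static final long MAX_BYTES = 10L * 1024 * 1024; // example limit: 10 MiB
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Void handleRequest(S3Event event, Context context) {
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey(); // note: keys arrive URL-encoded
            long size = record.getS3().getObject().getSizeAsLong();
            if (size > MAX_BYTES) {
                s3.deleteObject(bucket, key); // the object violated the size policy
            }
        });
        return null;
    }
}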
I'm inheriting a codebase that makes use of the Java AWS SDK to generate presigned S3 URLs for both Putting and Getting Objects. The code looks something like this:
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket, filename);
request.setMethod(HttpMethod.PUT);
request.setExpiration(new DateTime().plusMinutes(30).toDate());
request.setContentType("image/jpeg");
String url = awss3.generatePresignedUrl(request);
This existing codebase has always worked and is still very close to working. However, one business requirement that changed is that we now need to encrypt the contents of the S3 bucket. So, naturally, I set the default encryption on the bucket to AWS-KMS (since it seemed like the most modern) and chose the default "aws/s3" key that had been created for my account.
However, now when an end user tries to actually utilize the URLs I generate in their browser, this is the error message that appears:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidArgument</Code>
<Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>null</ArgumentValue>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
My question ultimately is: how do I get this working again? As I see it, there are two paths I could take: 1) downgrade the bucket encryption from AWS-KMS to AES-256 and hope that it all works, or 2) change my client code to support KMS, which I'm guessing would involve downloading the KMS key through the AWS SDK and using it to sign the requests, and possibly also adding an Authorization header and other headers.
Option 1 seems like less work but also less ideal, because who knows if a less secure form of encryption will always be supported. And Option 2 seems like the better choice conceptually, but also raises some concerns because it does seem like a lot more work and I'm worried about having to include extra headers. The code I've shown above reflects the equivalent of a PutObject request (proxied through the generated URL), but there are also equivalents of GetObject requests to download the images, which are possibly rendered directly in the browser. It would be a lot harder to write frontend code there to use different headers just to render an image. (I wonder if query parameters can be substituted for headers?)
Anyway, what would I need to change in my Java code to get this working with AWS KMS? Do I need to use the AWS SDK to "download" the KMS key first, as I suspected? And should I go about it that way, or would AES-256 really be the better option?
Signature Version 4 signing has been the default for several years. Unless you are overriding the signer in your AWS SDK configuration, you are already using Version 4. You can force it explicitly with the following code:
AmazonS3Client s3 = new AmazonS3Client(new ClientConfiguration().withSignerOverride("AWSS3V4SignerType"));
Most likely, the real issue is that you need to specify server-side encryption when you create the presigned URL:
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(
        myBucket, myKey, HttpMethod.PUT)
        .withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm()); // signs x-amz-server-side-encryption: aws:kms
request.setExpiration(new DateTime().plusMinutes(30).toDate());
request.setContentType("image/jpeg");
URL puturl = s3.generatePresignedUrl(request);
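One thing to watch, based on how SigV4 treats signed headers: the SSE header becomes part of the signature, so the client making the PUT has to send it (along with the same Content-Type) or the request should fail signature validation. A minimal sketch with HttpURLConnection, where picture.jpg is a placeholder for the file being uploaded:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.nio.file.Files;
import java.nio.file.Paths;

HttpURLConnection conn = (HttpURLConnection) puturl.openConnection();
conn.setDoOutput(true);
conn.setRequestMethod("PUT");
conn.setRequestProperty("Content-Type", "image/jpeg");              // must match what was signed
conn.setRequestProperty("x-amz-server-side-encryption", "aws:kms"); // must match the signed SSE algorithm
try (OutputStream out = conn.getOutputStream()) {
    out.write(Files.readAllBytes(Paths.get("picture.jpg")));        // placeholder file
}
System.out.println(conn.getResponseCode()); // expect 200 on success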
I used the following code to copy a file from one bucket to another bucket:
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
s3client.copyObject(sourceBucketName, sourceKey,
destinationBucketName, destinationKey);
but I always get
"com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: B6466D562B6988E2)"
as the response. What may be the reason for that?
There can be a lot of possibilities for such an error (a non-existent bucket, permission issues, a custom policy applied on the source or target bucket, etc.). I recommend setting up the AWS S3 CLI on your machine and trying different s3 commands to make sure you actually have the right set of permissions for the operation. This will let you iterate fast and debug the issue quickly. I am not against writing Java code here to do the same, but the CLI will definitely save you time.
Also look at this SO link to see if it helps you fix your problem.
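If you'd rather stay in Java, a hedged sketch along the same lines is to exercise each permission copyObject needs separately; the probe key name here is made up:

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
try {
    // copyObject needs s3:GetObject on the source...
    s3client.getObjectMetadata(sourceBucketName, sourceKey);
    System.out.println("source is readable");
} catch (AmazonS3Exception e) {
    System.out.println("cannot read source: " + e.getErrorCode());
}
try {
    // ...and s3:PutObject on the destination ("permission-probe.txt" is a made-up key)
    s3client.putObject(destinationBucketName, "permission-probe.txt", "probe");
    s3client.deleteObject(destinationBucketName, "permission-probe.txt");
    System.out.println("destination is writable");
} catch (AmazonS3Exception e) {
    System.out.println("cannot write destination: " + e.getErrorCode());
}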
Now I can copy the file using the Java AWS SDK. The error was due to the absence of metadata while copying the file, so we must set the metadata on the copy request using copyObjRequest.setNewObjectMetadata(objectMetadata);
See http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html for the details.
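A hedged sketch of the full copy, assuming SDK v1; the choice of AES-256 for the SSE algorithm is an example prompted by the linked SSE docs, not something stated in the original post:

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

// Start from the source object's metadata so nothing is lost in the copy
ObjectMetadata objectMetadata = s3client.getObjectMetadata(sourceBucketName, sourceKey);
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION); // request SSE on the new object

CopyObjectRequest copyObjRequest = new CopyObjectRequest(
        sourceBucketName, sourceKey, destinationBucketName, destinationKey);
copyObjRequest.setNewObjectMetadata(objectMetadata);
s3client.copyObject(copyObjRequest);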
This might seem strange at first, considering the easiest way is to work with the SDK, but that's not an option for me. The reason being that I'm actually building a third-party API and allowing the upload of files to my bucket.
The first consumer of my API is an Android application, and I'd like some idea as to the best way to make this possible.
I can't give 3rd party developers my AWS credentials.
I've authorised this on my website with Cross-Origin Resource Sharing and signed requests. Is there a similar way to do this on Android?
Ideally I'd like the flow to be:
3rd party app sends me the file info, key, etc.
my API service signs the request and sends it back.
The app then uses the request to upload the file.
Is this possible on Android?
I've read up on STS and creating temporary credentials, but that's still not nailing down permissions to a per-request level like the signed request method allows me to do.
You can use the browser-based form upload (POST) feature offered by AWS for S3. Even though the use case is described as a browser upload, it can easily be used by any HTTP client:
The third-party client (Android, etc.) calls your server to create an upload policy.
Your server creates and signs the policy (you may use the SDK to help with that). The policy can restrict the upload to a particular key in S3.
The Android client then submits the multipart form data using the S3 key and the signed policy.
Here is an example with an HTML form (you can easily do the same from any HTTP client in an Android app):
http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html
This approach only requires you to share your IAM access key ID, not the secret key.
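For reference, a minimal sketch of what the server-side policy signing looks like with the legacy SigV2 scheme; the bucket name, expiration, and size limit are placeholders, and the current SigV4 scheme uses a more involved AWS4-HMAC-SHA256 signing chain described in the linked docs:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Policy document with example conditions ("my-bucket" and the limits are placeholders)
String policyJson = "{\"expiration\": \"2030-01-01T00:00:00Z\","
        + " \"conditions\": ["
        + "   {\"bucket\": \"my-bucket\"},"
        + "   [\"starts-with\", \"$key\", \"uploads/\"],"
        + "   [\"content-length-range\", 0, 10485760]"   // example size limit: 10 MiB
        + " ]}";
String policy = Base64.getEncoder()
        .encodeToString(policyJson.getBytes(StandardCharsets.UTF_8));
Mac hmac = Mac.getInstance("HmacSHA1");
hmac.init(new SecretKeySpec(awsSecretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
String signature = Base64.getEncoder()
        .encodeToString(hmac.doFinal(policy.getBytes(StandardCharsets.UTF_8)));
// The client submits "policy" and "signature" as form fields,
// together with AWSAccessKeyId, key, and the file itself.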
Another alternative is to abstract S3 away entirely and let your server manage the upload: