I used the following code to copy a file from one bucket to another:
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
s3client.copyObject(sourceBucketName, sourceKey, destinationBucketName, destinationKey);
but I always get
"com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: B6466D562B6988E2)"
as the response. What may be the reason for that?
There can be many reasons for such an error (a non-existent bucket, permission issues, a custom policy applied to the source or target bucket, etc.). I recommend setting up the AWS S3 CLI on your machine and trying different s3 commands to make sure that you actually have the right set of permissions for the operation. This will allow you to iterate fast and debug the issue quickly. I am not against writing Java code here to do the same, but the CLI will definitely save you time.
Also look at this SO link to see if it helps you fix your problem.
Now I can copy the file using the Java AWS SDK. The failure was due to the absence of metadata while copying the file, so we must add the metadata with copyObjRequest.setNewObjectMetadata(objectMetadata);
See http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html for the details.
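A minimal sketch of what that can look like with SDK v1; the SSE-S3 setting is just one example of metadata to attach, and the bucket/key variables are assumed from the question:

// Sketch: copy with explicit new object metadata (names are placeholders).
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

CopyObjectRequest copyObjRequest = new CopyObjectRequest(
        sourceBucketName, sourceKey, destinationBucketName, destinationKey);
copyObjRequest.setNewObjectMetadata(objectMetadata);
s3client.copyObject(copyObjRequest);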
Related
I have to generate a presigned URL to upload an object to a bucket using AWS SDK 2. As long as I do this with a bucket that has Object Lock disabled, it works fine, but when I do it with a bucket that has Object Lock enabled, it throws an error saying I should send the MD5 hash. My question is: at creation time, when I don't yet know the file, how am I supposed to generate its MD5 and create the presigned URL with it? Can anybody help me understand this and how it can be implemented?
I used the simple code given on the official website to generate the presigned URL, nothing new.
For SDK Version 2, there is an open feature request: https://github.com/aws/aws-sdk-java-v2/issues/2155
For SDK Version 1, you could use Md5Utils.md5AsBase64(file) to determine the MD5 of the file. See this closed bug: https://github.com/aws/aws-sdk-java/issues/2634
Md5Utils also exists in Version 2:
https://github.com/aws/aws-sdk-java-v2/blob/master/utils/src/main/java/software/amazon/awssdk/utils/Md5Utils.java
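A sketch of the SDK v1 route, assuming the file is already available when the URL is generated; the class name, bucket, key, and file name are placeholders:

import java.io.File;
import java.io.IOException;
import java.net.URL;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.util.Md5Utils;

public class PresignWithContentMd5 {
    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();

        // Base64-encoded MD5 of the file to be uploaded.
        String md5 = Md5Utils.md5AsBase64(new File("report.csv"));

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-bucket", "my-key", HttpMethod.PUT);
        request.setContentMd5(md5); // the Content-MD5 header becomes part of the signature

        URL url = s3client.generatePresignedUrl(request);
        // Whoever uploads through this URL must send the identical Content-MD5 header.
        System.out.println(url);
    }
}

If the file genuinely isn't known at URL-creation time, the MD5 has to come from whoever produces the content, since Content-MD5 must match the uploaded bytes.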
Note
If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header. For more information, see Put Object in the Amazon Simple Storage Service API Reference.
It seems that the Content-MD5 header requirement applies only if you have a retention period set.
So if you do have a retention period, then yes, you must provide the Content-MD5 header.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
Note
To successfully complete the PutObject request, you must have the s3:PutObject permission in your IAM permissions.
To successfully change the object's ACL with your PutObject request, you must have the s3:PutObjectAcl permission in your IAM permissions.
To successfully set the tag-set with your PutObject request, you must have the s3:PutObjectTagging permission in your IAM permissions.
The Content-MD5 header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information about Amazon S3 Object Lock, see Amazon S3 Object Lock Overview in the Amazon S3 User Guide.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
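If you are uploading directly through SDK v1 rather than via a presigned URL, one hedged sketch of supplying the header through the object metadata (file, bucket, and key names are placeholders):

File file = new File("report.csv");
ObjectMetadata metadata = new ObjectMetadata();
// Explicitly supply Content-MD5, as required when Object Lock retention applies.
metadata.setContentMD5(Md5Utils.md5AsBase64(file));
s3client.putObject(new PutObjectRequest("my-bucket", "my-key", file).withMetadata(metadata));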
I am trying to read objects from a specific S3 bucket using the Java SDK, but I get an Access Denied error when trying to read the objects.
I've generated temporary credentials (access key and access ID) using the AWS CLI, and the credentials and config file were generated in the .aws folder; I've set the same access key and access ID in my system environment variables. The code is able to connect to the S3 endpoint and check whether the bucket is present, but it is not able to perform any read operations.
The required S3 read policies are attached to the IAM role via which we generate credentials from the CLI and use them in the Java code.
The bucket policy also looks fine, as block public access is off.
I am able to list the bucket's objects from the AWS CLI, but fail to do so from the Java SDK. Please help with this.
Please find the error from the Java SDK below:
Exception occurred while reading object from S3 bucket: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
To generate security credentials for a specific profile, I ran the samlapi command written for our application, after which I configured the generated credentials (access key ID, secret access key, and security token) in environment variables.
The IAM role has the AmazonS3ReadAccess policy attached, and as far as the bucket policy goes, no action is denied and public access is not blocked.
To list the bucket's objects I used:
aws s3 ls s3://bucket-name --profile profile-name
I used the following Java code to build the S3 client and list the objects in the bucket:
// EnvironmentVariableCredentialsProvider reads AWS_ACCESS_KEY_ID,
// AWS_SECRET_ACCESS_KEY and (for temporary credentials) AWS_SESSION_TOKEN.
s3client = AmazonS3ClientBuilder.standard()
        .withCredentials(new EnvironmentVariableCredentialsProvider())
        .build();
if (s3client.doesBucketExistV2(bucketName)) {
    ListObjectsV2Result objectList = s3client.listObjectsV2(bucketName);
    List<S3ObjectSummary> s3ObjSummaryList = objectList.getObjectSummaries();
}
I am trying to download a CSV file from S3 using the access key and secret provided in environment variables. Below are my findings based on debugging.
One of my folder names starts with /, which is causing this issue; other folders and files work fine.
The AWS console does not allow you to create a folder whose name starts with /. However, a Cost usage report can have / in the report path prefix, which creates such a folder inside the S3 bucket.
I am able to download the file using the CLI by appending one more / before the folder name. I also checked the Java SDK code, which does the same thing, but it did not work.
I am able to list the files, but when I try to get the S3 object I get a SignatureDoesNotMatch error.
I tried every solution I could find, but nothing worked with the AWS Java SDK, although it does work with the AWS CLI.
Can someone provide any pointers or references? I tried a few solutions given on the internet but nothing worked for me.
I am getting the error below:
com.amazonaws.services.s3.model.AmazonS3Exception:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
(Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: A2C6F2F49F230E18; S3 Extended Request ID: xxxxxxxx), S3 Extended Request ID: yyyyyyyyyyyy
Note: I am using a Spring Boot app with Java SDK version 1.11.510.
After spending two days debugging, I found out that the problem was with the Spring Boot version.
I was using Spring Boot version 2.1.3.RELEASE; after switching to Spring Boot version 2.1.0.RELEASE it worked like a charm.
Note: This issue occurred only for the few S3 folders whose names start with '/'.
FYI:
https://github.com/aws/aws-sdk-java/issues/1919#issuecomment-471451804
Finding from AWS: the AWS console does not allow the creation of a folder whose name starts with '/', but a Cost usage report can contain '/' in the path prefix.
We have some Scala code running in Elastic Beanstalk (using Tomcat) that accesses S3 using the Java AWS SDK. It was working perfectly for months. Then, a few days ago, we started seeing some strange errors. It can read and write to S3 about a third of the time. The other two thirds of the time, it gets an access denied error when reading from S3.
The exceptions look like this: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 6CAC5AB616FC6F23)
All S3 operations use the same bucket. The IAM role has full access to S3 (allowed to do any operation using any bucket).
We contacted Amazon support and they can't help us unless we provide a host ID and request ID that they can research. But the exception only has a request ID.
I'm looking for one of two things: either a solution to the access denied errors, or a way to get a host ID we can give to Amazon support. I already tried calling s3Client.getCachedResponseMetadata(getObjectRequest), but it always returns null after the getObject call fails.
I was able to get the Host ID by calling AmazonS3Exception.getErrorResponseXml(). We're still working with Amazon to determine the root cause.
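For reference, a minimal sketch of pulling the IDs out of the exception (s3Client and getObjectRequest are assumed from the question's setup):

try {
    s3Client.getObject(getObjectRequest);
} catch (AmazonS3Exception e) {
    // The raw error XML contains the <HostId> element that AWS support asks for.
    System.err.println("Request ID:          " + e.getRequestId());
    System.err.println("Extended request ID: " + e.getExtendedRequestId());
    System.err.println("Error XML:           " + e.getErrorResponseXml());
}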
I'm writing an Amazon S3 client that might potentially access buckets in different regions. Our IT department is fairly strict about outgoing HTTP, and I want to use path-style access for this client to avoid having to make firewall changes for each new bucket.
My client uses the Java SDK v1.4.4.2. As a test, I created a bucket in Singapore, then took a working S3 unit test that lists objects and changed it to use path-style access:
AmazonS3 client = new AmazonS3Client(environ);
client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
When I run the unit test with this version of the client, all S3 accesses fail with the error that I have to set the right endpoint.
My question is, do I have to add the logic to look up the bucket's region and set that for the client? Or can the SDK be set to do that on its own? It seems the SDK should be able to do this automatically, since the function to look up a bucket's location is in there.
As a side issue, are there any particular performance issues with using path-style access? I presume it's just an extra round trip to query the bucket's location if I don't already know it.
If you need the client to access objects in different regions, you probably want to use the option
AmazonS3ClientBuilder.withForceGlobalBucketAccessEnabled(true)
to build your client; see the S3 client builder documentation.
This will ensure successful requests even if the client's default region is not the same as the region of the targeted bucket/object.
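A minimal sketch of building such a client with SDK v1 (the region is a placeholder):

AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)            // client's default region (placeholder)
        .withForceGlobalBucketAccessEnabled(true) // transparently follow buckets in other regions
        .build();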
Also, if you need to get the exact endpoint of the bucket "mybucketname", you can use (see the headBucketResult reference page):
s3client.headBucket(new HeadBucketRequest("mybucketname")).getBucketRegion()
As stated in the documentation, "The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket." In other words, with path-style access you have to tell the SDK which region the bucket is in; it doesn't try to determine it on its own.
Performance-wise, there should not be a difference.