We have some Scala code running in Elastic Beanstalk (using Tomcat) that accesses S3 using the Java AWS SDK. It was working perfectly for months. Then, a few days ago, we started seeing some strange errors. It can read and write to S3 about a third of the time. The other two thirds of the time, it gets an access denied error when reading from S3.
The exceptions look like this: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 6CAC5AB616FC6F23)
All S3 operations use the same bucket. The IAM role has full access to S3 (allowed to do any operation using any bucket).
We contacted Amazon support and they can't help us unless we provide a host ID and request ID that they can research. But the exception only has a request ID.
I'm looking for one of two things: either a solution to the access denied errors, or a way to get a host ID we can give to Amazon support. I already tried calling s3Client.getCachedResponseMetadata(getObjectRequest), but it always returns null after the getObject call fails.
I was able to get the Host ID by calling AmazonS3Exception.getErrorResponseXml(). We're still working with Amazon to determine the root cause.
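For anyone hitting the same wall, here is a minimal sketch of pulling both IDs out of the exception; getExtendedRequestId() returns S3's extended request ID, which is the host ID AWS support asks for, and getErrorResponseXml() returns the raw error XML, which also contains a HostId element:
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.S3Object;

try {
    S3Object obj = s3Client.getObject(getObjectRequest); // s3Client and getObjectRequest as in the question
    // ... read the object ...
} catch (AmazonS3Exception e) {
    System.err.println("Request ID: " + e.getRequestId());
    System.err.println("Host ID: " + e.getExtendedRequestId()); // the host ID for AWS support
    System.err.println(e.getErrorResponseXml());                // raw error XML, includes <HostId>
    throw e;
}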
I am working on a simple use case: transferring a file from S3 to an Azure Blob Storage container using the Azure SDK in a Java-based AWS Lambda. Before getting to the file transfer, I wanted to test the connectivity itself from my Lambda, so I decided to first try listing the blobs in a container.
I am using a "Shared Access Signature" (SAS) token to authenticate access to the Azure Blob Storage container. I faced a lot of challenges establishing the connection locally, but in the end I was able to connect successfully and list all the blobs in a given container.
Now, when I merge the same code into my Lambda and run it, it gives me an authorization error, shown below.
Lambda Exception Trace
Since I am new to Azure, can someone help me understand whether some authentication or network configuration is missing for this connection, or whether I am fundamentally missing something?
Code that works locally in the Eclipse IDE
This appears to be an authentication failure. One possibility is that the SAS (Shared Access Signature) token you are using to connect is missing one or more of the permissions the BlobContainerClient needs for a particular action. The permissions are: Read, Write, Delete, List, Add, Create, Process, Immutable storage, and Permanent delete. A SAS token is also scoped to specific service types: blob, file, queue, table. Finally, when the SAS token is created, it can be restricted by an expiration date, a set of allowed IP addresses, an allowed protocol, and a choice of signing key. Perhaps one of these conditions prevents the same code from behaving the same way when it is executed from two different locations?
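As a sanity check, here is a minimal sketch of listing blobs with a SAS token (the account, container, and environment variable names are placeholders); for listing to work, the token must include at least the List permission for the blob service, and its expiry/IP/protocol restrictions must not exclude the Lambda environment:
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobContainerClientBuilder;

BlobContainerClient container = new BlobContainerClientBuilder()
        .endpoint("https://myaccount.blob.core.windows.net") // placeholder account
        .containerName("mycontainer")                        // placeholder container
        .sasToken(System.getenv("AZURE_SAS_TOKEN"))          // placeholder env variable
        .buildClient();
container.listBlobs().forEach(blob -> System.out.println(blob.getName()));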
I am trying to read objects from a specific S3 bucket using the Java SDK, but I am getting an Access Denied error when performing the read.
I generated temporary credentials (access key ID and secret access key) using the AWS CLI; the credentials and config files were generated in the .aws folder, and I set the same values in my system environment variables. The code is able to connect to the S3 endpoint and check whether the bucket exists, but it cannot perform any read operations.
The required S3 read policies are attached to the IAM role through which we generate the credentials from the CLI; we then use those credentials in the Java code.
The bucket policy also looks fine, and Block Public Access is off.
I am able to list the bucket's objects from the AWS CLI, but I fail to do so from the Java SDK. Please help with this.
Here is the error from the Java SDK:
Exception occurred while reading object from S3 bucket: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
To generate security credentials for a specific profile, I ran the samlapi command written for our application, after which I configured the generated credentials in environment variables: access key ID, secret access key, and security token.
The IAM role has the AmazonS3ReadAccess policy attached; the bucket policy denies no actions, and public access is not blocked.
To list the bucket objects I used:
aws s3 ls s3://bucket-name --profile profile-name
I used the following Java code to create the S3 client and list objects from the S3 bucket:
// EnvironmentVariableCredentialsProvider reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY,
// and, for temporary credentials like these, it also needs AWS_SESSION_TOKEN to be set.
s3client = AmazonS3ClientBuilder.standard()
        .withCredentials(new EnvironmentVariableCredentialsProvider())
        .build();
if (s3client.doesBucketExistV2(bucketName)) {
    ListObjectsV2Result objectList = s3client.listObjectsV2(bucketName);
    List<S3ObjectSummary> s3ObjSummaryList = objectList.getObjectSummaries();
}
I am trying to download a CSV file from S3 using the access key and secret provided in environment variables. Below are my findings based on debugging.
One of my folder names starts with /, which is causing this issue; the other folders and files work fine.
The AWS console does not allow you to create a folder whose name starts with /. However, a Cost and Usage Report can have / in the report path prefix, which creates such a folder inside the S3 bucket.
I am able to download the file using the CLI by appending one more / before the folder name. I also checked the Java SDK code, which does the same thing, but it did not work.
I am able to list the files, but when I try to get the S3Object I get a SignatureDoesNotMatch error.
I tried every solution I found on the internet, but nothing worked with the AWS Java SDK, even though the same operation works with the AWS CLI. Can someone provide a pointer or reference?
I am getting the error below:
com.amazonaws.services.s3.model.AmazonS3Exception:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
(Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: A2C6F2F49F230E18; S3 Extended Request ID: xxxxxxxx), S3 Extended Request ID: yyyyyyyyyyyy
Note: I am using a Spring Boot app with Java SDK version 1.11.510.
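For reference, the failing call is essentially the following sketch (the bucket and prefix names here are hypothetical; the real key begins with /):
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;

ListObjectsV2Result listing = s3client.listObjectsV2("my-report-bucket", "/cur-prefix");
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    // The listing returns keys with the leading '/', but fetching one fails:
    S3Object obj = s3client.getObject("my-report-bucket", summary.getKey()); // SignatureDoesNotMatch here
}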
After spending two days debugging, I found out that the problem was the Spring Boot version.
I was using Spring Boot version 2.1.3.RELEASE; after downgrading to Spring Boot version 2.1.0.RELEASE, it worked like a charm.
Note: This issue occurred only for the few S3 folders whose names start with '/'.
FYI:
https://github.com/aws/aws-sdk-java/issues/1919#issuecomment-471451804
Finding from AWS
The AWS console does not allow the creation of a folder whose name starts with '/'; however, a Cost and Usage Report can contain '/' in its path prefix.
I used the following code to copy a file from one bucket to another:
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
s3client.copyObject(sourceBucketName, sourceKey,
destinationBucketName, destinationKey);
but I always get
"com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: B6466D562B6988E2)" as the response. What may be the reason for that?
There can be many possible causes for such an error (a non-existent bucket, permission issues, a custom policy applied to the source or target bucket, etc.). I recommend setting up the AWS CLI on your machine and trying different s3 commands to make sure you actually have the right set of permissions for the operation. This will let you iterate fast and debug the issue quickly. I am not against writing Java code to do the same, but the CLI will definitely save you time.
Also look at this SO link to see if it helps you fix your problem.
I can now copy the file using the Java AWS SDK. The error was due to the absence of metadata while copying the file, so we must set the metadata on the copy request using copyObjRequest.setNewObjectMetadata(objectMetadata);
See http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html for the details.
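A minimal sketch of that fix, following the linked SSE docs (it reuses the variable names from the question and assumes the buckets require SSE-S3 server-side encryption):
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION); // assumption: SSE-S3 is required
CopyObjectRequest copyObjRequest = new CopyObjectRequest(
        sourceBucketName, sourceKey, destinationBucketName, destinationKey);
copyObjRequest.setNewObjectMetadata(objectMetadata);
s3client.copyObject(copyObjRequest);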
For a few days now I have been receiving this exception when I try to push files to my S3 bucket. Earlier everything seemed to work, and I am sure there have been no code changes on my side.
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden
(Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden;
Request ID: XXXXXXXXXXXX),
S3 Extended Request ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1077)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3699)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:999)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:977)
....
....
I came across many similar questions about com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden. Following those:
I have installed NTP on my server to rule out any time-related issues.
I have also set the endpoint URL on the "AmazonS3Client" object in the code, which I thought might solve the problem.
Is there anything else I can try to solve this issue?
I am using aws-java-sdk:1.9.10 to push files to the S3 bucket.
Most likely your instance has not been launched with an IAM instance profile role that has access to S3.
All access to AWS services must be signed with an access key and secret. When you do this from your local machine, the DefaultAWSCredentialsProviderChain uses the access key and secret defined in your .aws/credentials file.
When you launch an EC2 instance in AWS, it also needs to sign its requests to services like S3. However, it does this by retrieving its credentials from an internal metadata service.
So what you do is create an IAM instance profile that your instance will assume when it starts up. This instance profile, like the IAM policies attached to users for example, defines what the instance has access to.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
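As a quick check, here is a minimal sketch that explicitly uses the instance profile credentials (the constructor shown exists in aws-java-sdk 1.9.10); it will only succeed when the instance was launched with an instance profile whose role grants the needed S3 permissions:
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

// Pull credentials from the EC2 instance metadata service instead of the default chain.
AmazonS3 s3 = new AmazonS3Client(new InstanceProfileCredentialsProvider());
s3.getObjectMetadata("my-bucket", "my-key"); // hypothetical names; fails with 403 if the role lacks access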
In my case the bucket name was different on the Android side (the bucket name was 'a' in S3, and I had entered 'ab' on the Android side); by correcting the bucket name, I solved the issue.