Method Not Allowed in S3 - Java

I am trying to create a bucket on Ceph with the S3 library and get the 405 exception below. Any pointers to resolve this issue?
com.amazonaws.services.s3.model.AmazonS3Exception: null (Service:
Amazon S3; Status Code: 405; Error Code: MethodNotAllowed; Request ID:
tx00000000000000000000a-005d37c963-1009-
Code:
BasicAWSCredentials credentials = new BasicAWSCredentials("", "");
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);
AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
conn.setEndpoint("localhost:8080");
Bucket bucket = conn.createBucket("my-new-bucket");

Try adding the code below:
conn.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
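Put together with the question's code, the whole flow would look roughly like this (a sketch; the credentials, endpoint, and bucket name are the question's placeholders):

BasicAWSCredentials credentials = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");

ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP); // plain HTTP against the local RGW endpoint

AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
conn.setEndpoint("localhost:8080");

// Force path-style requests (http://host/bucket/key). By default the SDK uses
// virtual-hosted addressing (http://bucket.host/key), which a Ceph RGW that is
// not configured for DNS-style bucket names can answer with 405 MethodNotAllowed.
conn.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));

Bucket bucket = conn.createBucket("my-new-bucket");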

I got stuck for ages on MethodNotAllowed trying to create a Ceph bucket.
First, I'd note that you should be able to use the s3cmd command-line tool to create a bucket with the same user (or you should see the same MethodNotAllowed response), which tells you whether the problem is in your Java code.
For me the answer turned out to be this: you're not allowed to name your bucket "documents"! (I'm not sure what other reserved words there are.)

Related

AWS S3 access denied with Java

I'm trying to upload files to Amazon S3, but it returns this error:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 26B9B4844CEA580C), S3 Extended Request ID: S0Ds3cvubCSZTAd1ESUG5rMRifGLHRAHBviyUHzDVI5W8FQxtRDMBNSrhVme6K86UryWxqORv30=
I have already configured IAM and the bucket on AWS, and I'm using the correct access/secret keys (the ones AWS gave me). What's the problem? Thank you!
S3Client Code:
s3client = AmazonS3ClientBuilder
        .standard()
        .withRegion(Regions.CA_CENTRAL_1) // The first region to try your request against
        .withForceGlobalBucketAccessEnabled(true) // If a bucket is in a different region, try again in the correct region
        .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials(accessKey, secretKey)))
        .build()
Upload file code:
private fun uploadFileTos3bucket(fileName: String, file: File) {
    s3client.putObject(PutObjectRequest(bucketName, fileName, file)
            .withCannedAcl(CannedAccessControlList.PublicRead))
}
IAM user and S3 permissions: (screenshots omitted)
I had the same problem. I made it work by disabling the first option:
Block public access to buckets and objects granted through new access control lists (ACLs)
If you want to get more information about the default Canned ACL, here is a link
If you don't want to disable any of these settings, you can use pre-signed URLs instead.
You can see examples here
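For instance, a minimal Java (SDK v1) sketch of generating a pre-signed PUT URL; bucketName, objectKey, and the 15-minute expiry are illustrative, and s3client is the client built above:

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

// The signed URL lets a caller upload (or download, with HttpMethod.GET) the
// object for a limited time without making the object itself public.
GeneratePresignedUrlRequest presignRequest =
        new GeneratePresignedUrlRequest(bucketName, objectKey)
                .withMethod(HttpMethod.PUT)
                .withExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000));
URL presignedUrl = s3client.generatePresignedUrl(presignRequest);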

Not able to put object in S3 bucket when it does not have public access

I'm trying to write an object to an S3 bucket in my AWS account, but it fails with the error below.
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 34399CEF4B28B50D; S3 Extended Request ID:
I tried making the bucket public with full access, and then I was able to write to it.
Code I have written to write the object to the S3 bucket:
......
private final AmazonS3 amazonS3Client;
........
final PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, s3Key, stream,
metadata);
amazonS3Client.putObject(putObjectRequest);
final URL url = amazonS3Client.getUrl(bucketName, s3Key);
I am building my S3 client as:
@Configuration
public class AWSConfig {

    @Bean
    public AmazonS3 amazonS3Client() {
        String awsRegion = System.getenv("AWS_REGION");
        if (StringUtils.isBlank(awsRegion)) {
            awsRegion = Regions.getCurrentRegion().getName();
        }
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .withRegion(awsRegion)
                .build();
    }
}
Please suggest if I am missing anything and how I can fix the error mentioned above.
You are missing access keys (an Access Key ID and a Secret Access Key).
Right now it only works if you set the bucket to public, because you are not providing any access keys.
Access keys can best be compared to API keys; they provide a secure way of accessing private data in AWS.
You will need:
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESSKEY, SECRETKEY)));
See the official documentation on how to generate/obtain them.
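A minimal sketch of the bean above with static keys wired in (ACCESSKEY and SECRETKEY are placeholders; in production prefer an environment- or role-based provider over hard-coded keys):

@Bean
public AmazonS3 amazonS3Client() {
    // Placeholder keys; load them from secure configuration, never from source code.
    BasicAWSCredentials keys = new BasicAWSCredentials(ACCESSKEY, SECRETKEY);
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(keys))
            .withRegion(Regions.US_EAST_1) // pick the bucket's region
            .build();
}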
Make sure the option below is turned off:
S3 --> mybucket --> Permissions --> Block all public access --> off
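The same toggle can also be applied from code if you prefer that over the console (a sketch, assuming a reasonably recent AWS SDK for Java v1 that includes the public-access-block API, and the amazonS3Client/bucketName from the question):

import com.amazonaws.services.s3.model.PublicAccessBlockConfiguration;
import com.amazonaws.services.s3.model.SetPublicAccessBlockRequest;

// Clears the bucket-level "Block all public access" switches. Only do this if
// you actually intend objects or ACLs in this bucket to be publicly readable.
amazonS3Client.setPublicAccessBlock(new SetPublicAccessBlockRequest()
        .withBucketName(bucketName)
        .withPublicAccessBlockConfiguration(new PublicAccessBlockConfiguration()
                .withBlockPublicAcls(false)
                .withIgnorePublicAcls(false)
                .withBlockPublicPolicy(false)
                .withRestrictPublicBuckets(false)));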
I have found the solution for this. The issue was that the Java service from which I was calling the put-object request did not have access to the S3 bucket. To resolve this, I added permission for the instance where my service was running to access the S3 bucket, which fixed the problem.

Amazon S3 Copy between two buckets with different Authentication

I have two buckets, each with a Private ACL.
I have an authenticated link to the source:
String source = "https://bucket-name.s3.region.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=...&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=...Signature=..."
and have been trying to use the Java SDK CopyObjectRequest to copy it into another bucket using:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AWSCredentialsProvider provider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder
        .standard()
        .withCredentials(provider)
        .build();
AmazonS3URI sourceURI = new AmazonS3URI(source);
CopyObjectRequest request = new CopyObjectRequest(sourceURI.getBucket(), sourceURI.getKey(), destinationBucket, destinationKey);
s3Client.copyObject(request);
However, I get AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied) because the AWS credentials I've set the SDK up with do not have access to the source file.
Is there a way I can provide an authenticated source URL instead of just the bucket and key?
This isn't supported. The PUT+Copy service API, which is used by s3Client.copyObject(), uses an internal S3 mechanism to copy the object, and the source object is passed as /bucket/key -- not as a full URL. There is no API functionality that can be used for fetching from a URL, S3 or otherwise.
With PUT+Copy, the user making the request to S3...
must have READ access to the source object and WRITE access to the destination bucket
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
The only alternative is download followed by upload.
Doing this from EC2... or a Lambda function running in the source region would be the most cost-effective, but if the object is larger than the Lambda temp space, you'll have to write hooks and handlers to read from the stream and juggle the chunks into a multipart upload... not impossible, but requires some mental gyrations in order to understand what you're actually trying to persuade your code to do.
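A minimal sketch of that download-then-upload fallback; the helper name and parameters are mine, and for objects too large to stream in one call you would switch to TransferManager or a manual multipart upload:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Streams the pre-signed source URL straight into a putObject on the destination
// bucket. Only destination credentials are needed; the source is authorized by
// the signed query string in the URL.
static void copyViaPresignedUrl(AmazonS3 s3Client, String presignedSourceUrl,
                                String destinationBucket, String destinationKey) throws IOException {
    HttpURLConnection connection =
            (HttpURLConnection) new URL(presignedSourceUrl).openConnection();
    try (InputStream body = connection.getInputStream()) {
        ObjectMetadata metadata = new ObjectMetadata();
        long length = connection.getContentLengthLong();
        if (length >= 0) {
            metadata.setContentLength(length); // lets the SDK stream instead of buffering the body
        }
        s3Client.putObject(destinationBucket, destinationKey, body, metadata);
    } finally {
        connection.disconnect();
    }
}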

How to change an Amazon S3 file URL from bucket.s3.amazonaws.com/key to s3.amazonaws.com/bucket/key in Java?

Working with an Amazon S3 bucket: after uploading a file to the bucket, we can get the uploaded file's URL using the code below:
String fileDownloadUrl = amazonS3Client.getUrl(bucketName, fileName).toString();
As a result it gives a URL like bucket.s3.amazonaws.com/key, but I want s3.amazonaws.com/bucket/key. Can anyone help me solve this in Java?
By default pathStyleAccess is false, so your uploaded file URL will be bucket.s3.amazonaws.com/key; when you explicitly set the client option pathStyleAccess to true, it will generate a URL like s3.amazonaws.com/bucket/key. Please find the code snippet below:
S3ClientOptions clientOptions = new S3ClientOptions();
clientOptions.setPathStyleAccess(true);
Then set these clientOptions on the AmazonS3Client.
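For example (assuming amazonS3Client is the existing client and bucketName/fileName are the same variables as in the question):

amazonS3Client.setS3ClientOptions(clientOptions);
// getUrl now produces the path-style form, e.g. https://s3.amazonaws.com/bucketName/fileName
String fileDownloadUrl = amazonS3Client.getUrl(bucketName, fileName).toString();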
Another Solution:
Create AmazonS3Client object using AmazonS3ClientBuilder with enablePathStyleAccess().
AmazonS3 client = AmazonS3ClientBuilder.standard()
        .enablePathStyleAccess()
        .withRegion(regionName)
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .build();

Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: xxxxxxxxxxxxx)

I am trying to access my S3 bucket from an application deployed on Tomcat running on EC2.
I can see lots of posts related to this, but it looks like most of them complain about not having proper access. I have proper access to all buckets: I am able to upload files without any issues from other applications, such as the Jenkins S3 plugin. I am clueless as to why this should happen only for a Java web application deployed on Tomcat. I have confirmed the following.
The ec2 instance was created with an IAM role.
The IAM role has write access to the bucket. The puppet scripts are able to write to the bucket.
I tried other applications to check the IAM role, and it works fine without any issues.
As per my understanding, if I do not specify any credentials while creating the S3 client (AmazonS3Client), it will use the IAM role authentication by default.
This is a sample function I wrote to test the permission:
public boolean checkWritePermission(String bucketName) {
    AmazonS3Client amazonS3Client = new AmazonS3Client();
    LOG.info("Checking bucket write permission.....");
    boolean hasWritePermissions = false;
    final ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(0);
    // Create empty content
    final InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
    // Create a PutObjectRequest with test object
    final PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName,
            "TestDummy.txt", emptyContent, metadata);
    try {
        if (amazonS3Client.putObject(putObjectRequest) != null) {
            LOG.info("Permissions validated!");
            // User has write permissions, TestPassed.
            hasWritePermissions = true;
        }
    }
    catch (AmazonClientException s3Ex) {
        LOG.warn("Write permissions not available!", s3Ex.getMessage());
        LOG.error("Write permissions not available!", s3Ex);
    }
    return hasWritePermissions;
}
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: xxxxxxxxxxxxxx).
Not sure if you have solved this issue yet; however, if you are using custom KMS keys on your bucket and the file you are trying to reach is encrypted with that custom key, then this error will also be thrown.
This issue is sometimes hidden by the fact that you can still list objects inside your S3 bucket. Make sure your IAM policy includes the KMS permissions needed to decrypt.
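For reference, this is roughly how a custom KMS key enters the picture on upload (the key ARN is a placeholder, and putObjectRequest/amazonS3Client are the objects from the test method above); the calling role then also needs kms:GenerateDataKey to write such objects and kms:Decrypt to read them back:

import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

// Upload encrypted with a customer-managed KMS key (placeholder ARN). Reading
// the object later requires kms:Decrypt on this key in addition to s3:GetObject.
putObjectRequest.withSSEAwsKeyManagementParams(
        new SSEAwsKeyManagementParams("arn:aws:kms:us-east-1:111122223333:key/your-key-id"));
amazonS3Client.putObject(putObjectRequest);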
