I'm trying to use the AWS SDK for Java (it's not the first time, and it's always worked before), but I'm getting this error:
com.amazonaws.services.s3.model.AmazonS3Exception: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to
In my pom I have this Maven dependency:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.12.402</version>
</dependency>
and this is my code to instantiate the S3 client:
@Bean
public AmazonS3 amazonS3() {
    AWSCredentials cred = new BasicAWSCredentials(accesskey, secretkey);
    AWSCredentialsProvider credProvider = new AWSStaticCredentialsProvider(cred);
    return AmazonS3Client.builder()
            .withRegion(Regions.ME_CENTRAL_1)
            //.withCredentials(credProvider)
            .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
            .build();
}
As you can see, I tried two different AWSCredentialsProviders, but I always get the same error.
For buckets created in regions other than us-east-1 (N. Virginia), a location constraint is required. See https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html for more details:
LocationConstraint -> (string)
Specifies the Region where the bucket will be created. If you don't specify a Region, the bucket is created in the US East (N. Virginia) Region (us-east-1).
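In the v1 Java SDK you can make the constraint explicit by passing a region to CreateBucketRequest. A minimal sketch (the bucket name is a placeholder), assuming the client is built for the same region the bucket should live in:

```java
// Build the client for the target region; the bucket's location
// constraint and the client's region should agree (here, me-central-1).
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.ME_CENTRAL_1)
        .build();

CreateBucketRequest request = new CreateBucketRequest(
        "my-example-bucket",             // placeholder bucket name
        Regions.ME_CENTRAL_1.getName()); // explicit location constraint
s3.createBucket(request);
```

This is a sketch against the v1 SDK; it requires credentials that are allowed to create buckets in that region.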
I am using an AWS Educate account and I would like to use S3 in my Java Spring Boot application.
I try to create a bucket using the following code, where in place of the access key and secret key I use those listed under Account Details on the Vocareum page:
@Repository
public class S3Repository {

    private final AmazonS3 s3Client;

    public S3Repository() {
        AWSCredentials credentials = new BasicAWSCredentials(
                "<AWS accesskey>",
                "<AWS secretkey>"
        );
        this.s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion(Regions.US_EAST_1)
                .build();
    }

    public void createBucket(String name) {
        s3Client.createBucket(name);
    }
}
When I invoke createBucket(String name), I get this exception:
com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records.
I tried creating a new user in IAM, but it does not create an access key and secret key due to Educate account limitations. Every time I sign in to the AWS Educate account it generates new keys, and I'm using the current ones. Configuration with a YAML file and autowiring s3Client gives the same result.
Is there any additional configuration that I need to include?
I would like to avoid creating a new regular account if there is another solution.
As you may already know, AWS Educate accounts have limitations across many services.
An AWS Educate account's credentials (access key, secret key, session token) change every 2-3 hours, but you can still use them to implement what you want.
All you need to do is add the session token along with the access key and secret key. Here is some sample code:
BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
        "<your aws_access_key_id>",
        "<your aws_secret_access_key>",
        "<your aws_session_token>");

final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
        .withRegion(Regions.US_EAST_1)
        .build();

// do whatever you want with s3
...
Please note that this program will only work for a few hours, while the credentials are valid. Once the session expires, you will get the error again. This is a limitation of AWS Educate accounts.
Source: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html
I'm trying to write an object to an S3 bucket in my AWS account, but it fails with the error below:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 34399CEF4B28B50D; S3 Extended Request ID:
I tried making the bucket public with full access, and then I was able to write to it.
The code I have written to write the object to the S3 bucket:
......
private final AmazonS3 amazonS3Client;
........

final PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, s3Key, stream, metadata);
amazonS3Client.putObject(putObjectRequest);
final URL url = amazonS3Client.getUrl(bucketName, s3Key);
I am building my S3 client as :
@Configuration
public class AWSConfig {

    @Bean
    public AmazonS3 amazonS3Client() {
        String awsRegion = System.getenv("AWS_REGION");
        if (StringUtils.isBlank(awsRegion)) {
            awsRegion = Regions.getCurrentRegion().getName();
        }
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .withRegion(awsRegion)
                .build();
    }
}
Please suggest what I might be missing and how I can fix the error mentioned above.
You are missing access keys (an Access Key ID and a Secret Access Key).
Right now it only works when you set the bucket to public, because you are not providing any access keys.
Access keys can best be compared to API keys, which provide a secure way of accessing private data in AWS.
You will need:
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESSKEY, SECRETKEY)));
See the official documentation on how to generate/obtain them.
Make sure the option below is set to "off":
S3 --> mybucket --> Permissions --> Block all public access --> off
I have found the solution for this. The issue was that the Java service from which I was calling the put-object request did not have access to the S3 bucket. To resolve it, I granted the instance where my service was running permission to access the S3 bucket, which solved the problem.
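For reference, that fix can be sketched as follows: run the service on an EC2 instance whose IAM role allows s3:PutObject on the bucket, and let the client pick the role credentials up from instance metadata. This is a v1 SDK sketch; the role and bucket policy themselves are configured in IAM, not in code, and the region shown is an assumption:

```java
// Uses the EC2 instance profile (IAM role) instead of embedded access keys;
// credentials are fetched and refreshed from instance metadata automatically.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(InstanceProfileCredentialsProvider.getInstance())
        .withRegion(Regions.US_EAST_1) // assumption: the bucket's region
        .build();
```

DefaultAWSCredentialsProviderChain, as used in the question, already falls back to the instance profile, so granting the role access is usually enough without code changes.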
I have two buckets, each with a Private ACL.
I have an authenticated link to the source:
String source = "https://bucket-name.s3.region.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=...&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=...Signature=..."
and I have been trying to use the Java SDK's CopyObjectRequest to copy it into another bucket:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AWSCredentialsProvider provider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder
        .standard()
        .withCredentials(provider)
        .build();

AmazonS3URI sourceURI = new AmazonS3URI(source);
CopyObjectRequest request = new CopyObjectRequest(sourceURI.getBucket(), sourceURI.getKey(), destinationBucket, destinationKey);
s3Client.copyObject(request);
However, I get AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied) because the AWS credentials I've set the SDK up with do not have access to the source file.
Is there a way I can provide an authenticated source URL instead of just the bucket and key?
This isn't supported. The PUT+Copy service API, which is used by s3Client.copyObject(), uses an internal S3 mechanism to copy the object, and the source object is passed as /bucket/key, not as a full URL. There is no API functionality for fetching from a URL, S3 or otherwise.
With PUT+Copy, the user making the request to S3...
must have READ access to the source object and WRITE access to the destination bucket
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
The only alternative is download followed by upload.
Doing this from EC2... or a Lambda function running in the source region would be the most cost-effective, but if the object is larger than the Lambda temp space, you'll have to write hooks and handlers to read from the stream and juggle the chunks into a multipart upload... not impossible, but requires some mental gyrations in order to understand what you're actually trying to persuade your code to do.
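A rough sketch of that download-then-upload fallback, assuming the presigned source URL is still valid and that s3Client, destinationBucket, and destinationKey are the ones from the question:

```java
// Stream the object from the presigned URL...
URL presigned = new URL(source);
URLConnection conn = presigned.openConnection();

ObjectMetadata metadata = new ObjectMetadata();
// Forward the content length so the SDK does not buffer the whole stream in memory.
metadata.setContentLength(conn.getContentLengthLong());

try (InputStream in = conn.getInputStream()) {
    // ...and re-upload it with credentials that have WRITE access to the destination.
    s3Client.putObject(new PutObjectRequest(destinationBucket, destinationKey, in, metadata));
}
```

For objects too large for a single PUT, TransferManager or a multipart upload would replace the single putObject call.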
The AWS Glacier API gives me an error about not finding the region, even when I specify it explicitly:
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");
AmazonGlacier glacierClient = AmazonGlacierClientBuilder.standard()
        .withEndpointConfiguration(endpointConfig)
        .withCredentials(credentials)
        .build();

ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
        .withGlacierClient(glacierClient)
        .build();
UploadResult result = xferMgr.upload("Data_Full", "my archive " + (new Date()), new File("C:\\myBigFile"));
I get this stack trace:
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
    at com.amazonaws.services.sqs.AmazonSQSClientBuilder.defaultClient(AmazonSQSClientBuilder.java:44)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.resolveSQSClient(ArchiveTransferManagerBuilder.java:129)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.getParams(ArchiveTransferManagerBuilder.java:135)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.build(ArchiveTransferManagerBuilder.java:143)
Note that when I use the API to list vaults, it works:
AmazonGlacierClientBuilder clientbuilder = AmazonGlacierClientBuilder.standard();
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");
clientbuilder.withEndpointConfiguration(endpointConfig);
ProfilesConfigFile cf = new ProfilesConfigFile();
AWSCredentialsProvider credentials = new ProfileCredentialsProvider(cf, "My AWS Profile Name");
clientbuilder.setCredentials(credentials);
AmazonGlacier glacierClient = CustomAmazonGlacierClientBuilder.buildCustomAmazonGlacierClient();
ListVaultsRequest request = new ListVaultsRequest();
ListVaultsResult result = glacierClient.listVaults(request);
I recently downloaded the AWS / Glacier libraries as an Eclipse plugin. It shows the .jar version as aws-java-sdk-opensdk-1.11.130.jar.
Does anyone have any insight into what I could put in the code to satisfy the region requirement? I'd rather do it programmatically.
I solved this by adding the AWS_REGION environment variable, e.g. us-east-2. When using Eclipse, you can add it under Run --> Run Configurations.
I also updated the Eclipse and AWS Eclipse plugins using the Eclipse Help --> Check for Updates feature.
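If you would rather stay in code than rely on the run configuration, setting the aws.region system property before the builders run should satisfy the same region provider chain that reads AWS_REGION (an assumption worth verifying against your SDK version; the stack trace shows it is the internal SQS client created by ArchiveTransferManagerBuilder that fails to resolve a region):

```java
// Must be set before any client without an explicit region is built,
// so the SDK's region provider chain can pick it up.
System.setProperty("aws.region", "us-east-2");

ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
        .withGlacierClient(glacierClient)
        .build();
```

This only affects clients that do not already have an explicit region or endpoint, such as the Glacier client above.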
I am using the Java SDK from AWS to create a Polly client, like this:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("<IAM access key>", "<IAM secret key>");
AmazonPollyClient apClient = (AmazonPollyClient) AmazonPollyClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();
SynthesizeSpeechRequest tssRequest = new SynthesizeSpeechRequest();
tssRequest.setText(<text>);
tssRequest.setVoiceId(<voiceid>);
tssRequest.setOutputFormat(OutputFormat.Mp3);
SynthesizeSpeechResult tssResult = apClient.synthesizeSpeech(tssRequest);
When I run this code, I get the following error message:
Exception in thread "main" com.amazonaws.SdkClientException: Unable to load region information from any provider in the chain
    at com.amazonaws.regions.AwsRegionProviderChain.getRegion(AwsRegionProviderChain.java:56)
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:319)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:295)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:38)
    at com.eoffice.aws.speech.Polly.main(Polly.java:42)
I checked the credentials using the IAM Policy Simulator; this works fine, the permissions are OK.
The method to set the region in the client builder is not visible for AmazonPollyClientBuilder, so I have no (Java SDK) way to specify the region.
Update:
When I just query the DefaultAwsRegionProviderChain, I get the same error message:
DefaultAwsRegionProviderChain defaultAwsRegionProviderChain = new DefaultAwsRegionProviderChain();
System.out.println(defaultAwsRegionProviderChain.getRegion());
Update 2:
When I create a config file in the .aws folder with the following content:
[default]
region = eu-west-1
It works, but I need a way to set this without relying on the file system.
Providing a system environment variable named AWS_REGION did the trick.
See the screenshot for the configuration in IBM Bluemix.
I think you can set the region like this:
AmazonPollyClient apClient = (AmazonPollyClient) AmazonPollyClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withRegion("<aws-region>")
        .build();