Using AWS Educate with Spring - java

I am using an AWS Educate account and I would like to use S3 in my Java Spring Boot application.
I am trying to create a bucket using the following code, where in place of the access key and secret key I use those listed in Account Details on the Vocareum page:
@Repository
public class S3Repository {

    private final AmazonS3 s3Client;

    public S3Repository() {
        AWSCredentials credentials = new BasicAWSCredentials(
                "<AWS accesskey>",
                "<AWS secretkey>"
        );
        this.s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion(Regions.US_EAST_1)
                .build();
    }

    public void createBucket(String name) {
        s3Client.createBucket(name);
    }
}
When I invoke createBucket(String name) I get this exception:
com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records.
I tried creating a new user in IAM, but it cannot create an access key and secret key due to Educate account limitations. Every time I sign in to the AWS Educate account it generates new keys, and I am using the current ones. Configuring the credentials in a YAML file and autowiring s3Client gives the same result.
Is there any additional configuration that I need to include?
I would like to avoid creating a new regular account if there is another solution.

As you may already know, AWS Educate accounts have limitations across many services.
An AWS Educate account's credentials (access key, secret key, session token) change every 2-3 hours, but you can still use them to implement what you want.
All you need to do is pass a session token along with the access key and secret key. Here is sample code:
BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
        "<type your aws_access_key_id>",
        "<type your aws_secret_access_key>",
        "<type your aws_session_token>");

final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
        .withRegion(Regions.US_EAST_1)
        .build();
// do whatever you want with s3
...
Please note that this will only work for a few hours, while the credentials are valid. Once the session expires, you will get the error again; this is a limitation of the AWS Educate account.
src: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html
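Because the Educate credentials rotate every few hours, one optional hedge against hard-coding them is to read them from the standard environment variables instead of source code. A minimal sketch, assuming AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN are exported before the app starts (the class name is illustrative):

import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class EducateS3ClientFactory {

    // Builds an S3 client from AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and
    // AWS_SESSION_TOKEN in the environment, so the rotated Educate keys never
    // have to be pasted into source code and recompiled.
    public static AmazonS3 build() {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new EnvironmentVariableCredentialsProvider())
                .withRegion(Regions.US_EAST_1) // assumption: Educate labs typically run in us-east-1
                .build();
    }
}

When the Vocareum page shows new keys, only the environment variables need to be updated.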

Related

Setting a secret on Azure Key Vault using Managed Identities

I have an application using Java Spring Boot, and I have already granted my Managed Identity access to the Key Vault.
When I try to set a new secret using the Java code, I get the error below:
"message":"Failed to set secret - secret-name \nStatus code 401,
"{"error":{"code":"Unauthorized","message":"AKV10032:
Invalid issuer. Expected one of
public void setAzureTokens() {
    try {
        SecretClient secretClient = new SecretClientBuilder()
                .vaultUrl(keyVaultUri)
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
        secretClient.setSecret(new KeyVaultSecret(key, value));
    } catch (Exception e) {
        LOG.error("Error during token update", e);
    }
}
Do I need to set any information about Tenant, clientId, or my Managed Identity on Application.properties?
This is a cross-tenant issue, as @Alex said: it arises for users who have access to multiple Azure AD tenants, when the library accessing the Key Vault endpoint cannot decide which credentials to authenticate you with.
The solution is to tell DefaultAzureCredential which tenant to use, which can be done with the options you pass via DefaultAzureCredentialOptions (C# snippet):
var o = new DefaultAzureCredentialOptions();
o.VisualStudioTenantId = preConfig["AzureTenantId"];
configurationBuilder.AddAzureKeyVault(new Uri(preConfig["KeyVaultName"]), new DefaultAzureCredential(o));
Reference
(or)
Alternatively, you can set AZURE_TENANT_ID as an environment variable, as described in Azure Key Vault Secret client library for Java | Microsoft Docs.
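Since the question itself is in Java, a rough equivalent with the Java azure-identity library is to pin the tenant on the credential builder. A minimal sketch, assuming the azure-identity and azure-security-keyvault-secrets dependencies and that keyVaultUri/tenantId come from your own configuration (the class name is illustrative):

import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class KeyVaultClientFactory {

    public static SecretClient buildClient(String keyVaultUri, String tenantId) {
        // Pin the Azure AD tenant so DefaultAzureCredential does not pick a
        // token from the wrong tenancy (the AKV10032 "Invalid issuer" case).
        DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
                .tenantId(tenantId)
                .build();

        return new SecretClientBuilder()
                .vaultUrl(keyVaultUri)
                .credential(credential)
                .buildClient();
    }
}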

Not able to put object in S3 bucket when it does not have public access

I'm trying to write an object to an S3 bucket in my AWS account, but it fails with the error below.
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 34399CEF4B28B50D; S3 Extended Request ID:
I tried making the bucket public with full access, and then I'm able to write to it.
The code I have written to write the object to the S3 bucket:
......
private final AmazonS3 amazonS3Client;
........
final PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, s3Key, stream, metadata);
amazonS3Client.putObject(putObjectRequest);
final URL url = amazonS3Client.getUrl(bucketName, s3Key);
I am building my S3 client as:
@Configuration
public class AWSConfig {

    @Bean
    public AmazonS3 amazonS3Client() {
        String awsRegion = System.getenv("AWS_REGION");
        if (StringUtils.isBlank(awsRegion)) {
            awsRegion = Regions.getCurrentRegion().getName();
        }
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .withRegion(awsRegion)
                .build();
    }
}
Please suggest what I am missing and how I can fix the error mentioned above.
You are missing access keys (Access Key ID and Secret Access Key).
Right now it only works when you set the bucket to public, because you are not providing any access keys.
Access keys can best be compared to API keys; they provide a secure way of accessing private data in AWS.
You will need:
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESSKEY, SECRETKEY)));
See the official documentation on how to generate/obtain them.
Make sure the option below is set to "off":
S3 -> mybucket -> Permissions -> Block all public access -> off
I have found the solution for this. The issue was that the Java service from which I was calling the put-object request did not have access to the S3 bucket. To resolve this, I added a permission for the instance where my service was running to access the S3 bucket, which resolved the problem.
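For completeness, a minimal sketch of how the client can rely on the role attached to the instance once that permission is in place; DefaultAWSCredentialsProviderChain, as used in the question, already falls back to this provider, and the region here is an assumption:

import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class InstanceRoleS3Client {

    // Uses the temporary credentials of the IAM role attached to the EC2
    // instance profile, so no access keys appear in code or configuration.
    public static AmazonS3 build() {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(InstanceProfileCredentialsProvider.getInstance())
                .withRegion(Regions.US_EAST_1) // assumption: replace with your bucket's region
                .build();
    }
}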

Amazon S3 Copy between two buckets with different Authentication

I have two buckets, each with a Private ACL.
I have an authenticated link to the source:
String source = "https://bucket-name.s3.region.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=...&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=...Signature=..."
and have been trying to use the Java SDK CopyObjectRequest to copy it into another bucket using:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AWSCredentialsProvider provider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder
        .standard()
        .withCredentials(provider)
        .build();

AmazonS3URI sourceURI = new AmazonS3URI(URI.create(source));
CopyObjectRequest request = new CopyObjectRequest(sourceURI.getBucket(), sourceURI.getKey(), destinationBucket, destinationKey);
s3Client.copyObject(request);
However, I get AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied) because the AWS credentials I've set the SDK up with do not have access to the source file.
Is there a way I can provide an authenticated source URL instead of just the bucket and key?
This isn't supported. The PUT+Copy service API, which is used by s3Client.copyObject(), uses an internal S3 mechanism to copy the object, and the source object is passed as /bucket/key -- not as a full URL. There is no API functionality that can be used for fetching from a URL, S3 or otherwise.
With PUT+Copy, the user making the request to S3...
must have READ access to the source object and WRITE access to the destination bucket
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
The only alternative is download followed by upload.
Doing this from EC2... or a Lambda function running in the source region would be the most cost-effective, but if the object is larger than the Lambda temp space, you'll have to write hooks and handlers to read from the stream and juggle the chunks into a multipart upload... not impossible, but requires some mental gyrations in order to understand what you're actually trying to persuade your code to do.
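A minimal sketch of that download-then-upload fallback, assuming the pre-signed source URL is still valid, the destination client is authorized to write, and the object is small enough for a single PUT (a multipart upload would be needed for very large objects); the class and method names are illustrative:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PresignedUrlCopy {

    public static void copy(String presignedSourceUrl, AmazonS3 destinationClient,
                            String destinationBucket, String destinationKey) throws IOException {
        // Fetch the source object over HTTPS via the pre-signed URL (no source credentials needed)
        HttpURLConnection connection =
                (HttpURLConnection) new URL(presignedSourceUrl).openConnection();
        long contentLength = connection.getContentLengthLong();

        try (InputStream in = connection.getInputStream()) {
            // Provide the length so the SDK can stream instead of buffering everything in memory
            ObjectMetadata metadata = new ObjectMetadata();
            if (contentLength > 0) {
                metadata.setContentLength(contentLength);
            }
            destinationClient.putObject(
                    new PutObjectRequest(destinationBucket, destinationKey, in, metadata));
        } finally {
            connection.disconnect();
        }
    }
}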

The AWS Access Key Id you provided does not exist in our records

I had written a library in PHP using the AWS SDK to communicate with the ECS server. Now I am migrating my code to Java. In Java I am using the same key and secret that I used in PHP.
In php I used the following method:
$s3 = Aws\S3\S3Client::factory(array(
    'base_url' => $this->base_url,
    'command.params' => array('PathStyle' => true),
    'key' => $this->key,
    'secret' => $this->secret
));
In java I am using following method
BasicAWSCredentials(String accessKey, String secretKey);
I am getting the following exception:
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 3A3000708C8D7883), S3 Extended Request ID: U2rG9KLBBQrAqa6M2rZj65uhaHhOpZpY2VK1rXzqoGrKVd4R/JdR8aeih/skG4fIrecokE4FY3w=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1401)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:945)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:723)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:475)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:437)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:386)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3996)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3933)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:851)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:857)
at com.justdial.DocUpload.DocUpload.main(DocUpload.java:22)
Do I need a new key and secret for Java, or can the previous ones be used?
My questions are:
1 - Is there any prerequisite that we need to follow before using AWS's BasicAWSCredentials() method?
2 - Do we need to create IAM roles?
3 - If we do, how will it know which IP to hit? In PHP I was specifying the base URL in the Aws\S3\S3Client::factory method.
The following code worked for me.
String accesskey = "objuser1";
String secret = "xxxxxxxxxxxxxxxx";

ClientConfiguration config = new ClientConfiguration();
config.setProtocol(Protocol.HTTP);

AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials(accesskey, secret), config);

S3ClientOptions options = new S3ClientOptions();
options.setPathStyleAccess(true);
s3.setS3ClientOptions(options);
s3.setEndpoint("1.2.3.4:9020"); // ECS IP address

System.out.println("Listing buckets");
for (Bucket bucket : s3.listBuckets()) {
    System.out.println(" - " + bucket.getName());
}
System.out.println();
AWS access keys and secret access keys are independent of the SDK you use (it doesn't matter whether it's PHP, Java, Ruby, etc.); they are about granting and getting access to the services you run/use on AWS.
I think the root cause of the error is that you have not set the AWS region, even though you have provided the access key and secret access key. Please check this guide on setting AWS credentials for Java.
To answer your questions:
1 - As far as I know, BasicAWSCredentials() itself is the prerequisite step before you do anything with Amazon services; its only prerequisite is having the authentication keys.
2 - No, it's not a mandatory step; it's an alternative. Please check the definition of an IAM role from this link.
3 - You can use the endpoint to do the same job in Java (see the builder sketch after this answer). Check the endpoint details from this link.
Please take a look at the examples from here.
Using the same method I'm able to successfully use BasicAWSCredentials(). Also cross-check whether the access key and secret access key have the necessary permissions to access the Amazon services you need; check the IAM user and group and ensure they have the required AWS permissions.
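A hedged sketch of the same path-style/custom-endpoint setup using the client builder instead of the deprecated AmazonS3Client constructor; the signing region string is only a placeholder required by the builder when a non-AWS endpoint such as ECS is used, and the class name is illustrative:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class EcsS3ClientFactory {

    public static AmazonS3 build(String accessKey, String secretKey) {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(accessKey, secretKey)))
                // ECS appliance endpoint; "us-east-1" is just a signing-region placeholder
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "http://1.2.3.4:9020", "us-east-1"))
                .withPathStyleAccessEnabled(true)
                .build();
    }
}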

Amazon Cognito developer authenticated identity with Java SDK

I am trying to authenticate a Java app to AWS services using a developer-authenticated Cognito identity. This is very straightforward in the AWS mobile SDKs (documentation), but I can't seem to find the equivalent classes in the Java SDK.
The main issue I am having is that the Java SDK classes (such as WebIdentityFederationSessionCredentialsProvider) require the client code to know the ARN of the role being assumed. With the mobile SDK, the role configured for the federated identity is used. That's what I'd prefer to do, but it seems the Java SDK doesn't have the supporting classes for that.
The last comment from Jeff led me to the answer. Thanks Jeff!
String cognitoIdentityId = "your user's identity id";
String openIdToken = "open id token for the user created on backend";

Map<String, String> logins = new HashMap<>();
logins.put("cognito-identity.amazonaws.com", openIdToken);

GetCredentialsForIdentityRequest getCredentialsRequest =
        new GetCredentialsForIdentityRequest()
                .withIdentityId(cognitoIdentityId)
                .withLogins(logins);

AmazonCognitoIdentityClient cognitoIdentityClient = new AmazonCognitoIdentityClient();
GetCredentialsForIdentityResult getCredentialsResult = cognitoIdentityClient.getCredentialsForIdentity(getCredentialsRequest);
Credentials credentials = getCredentialsResult.getCredentials();

AWSSessionCredentials sessionCredentials = new BasicSessionCredentials(
        credentials.getAccessKeyId(),
        credentials.getSecretKey(),
        credentials.getSessionToken()
);

AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);
...
If that's the route you want to go, you can find this role in the IAM console, named Cognito_(Auth|Unauth)_DefaultRole. These are what Cognito generated and linked to your pool, and you can get the ARN from there.
This blog post may be of some assistance. All of the APIs the SDK uses to communicate with Cognito to get credentials are exposed in the Java SDK; you just need to use your own back end to get the token itself. Once you have it, you can set the logins the same way you normally would with another provider, and it'll all work.
