I have to download data from an S3 bucket. The data is encrypted, and I have the KMS key to decrypt it. The code is running in an EC2 instance, and the EC2 instance has an IAM role to read from S3.
I have seen sample code in this link, but I am not able to read the contents. I am getting the following exception:
Exception in thread "main" com.amazonaws.SdkClientException: Unable to load credentials into profile [default]: AWS Access Key ID is not specified.
at com.amazonaws.auth.profile.internal.ProfileStaticCredentialsProvider.fromStaticCredentials(ProfileStaticCredentialsProvider.java:55)
at com.amazonaws.auth.profile.internal.ProfileStaticCredentialsProvider.<init>(ProfileStaticCredentialsProvider.java:40)
at com.amazonaws.auth.profile.ProfilesConfigFile.fromProfile(ProfilesConfigFile.java:207)
at com.amazonaws.auth.profile.ProfilesConfigFile.getCredentials(ProfilesConfigFile.java:160)
Can somebody suggest where I am going wrong, or give some guidelines on how to read encrypted data from S3 buckets without credentials?
I was able to find a solution by providing InstanceProfileCredentialsProvider. Below is the code.
String kms_key = Constants.KMS_key;
String inputString = null;
// The KMS key is used for client-side decryption of the object.
KMSEncryptionMaterialsProvider materialProvider = new KMSEncryptionMaterialsProvider(kms_key);
// InstanceProfileCredentialsProvider picks up the EC2 instance's IAM role,
// so no access key or secret needs to be configured on the box.
AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(InstanceProfileCredentialsProvider.getInstance(),
        materialProvider);
S3Object downloadedObject = client.getObject(bucketName, filePath);
if (null != downloadedObject) {
    inputString = convertToString(downloadedObject.getObjectContent());
}
I'm trying to read a text file from an AWS S3 object store (and then send it via HTTP to a client). I have an AWS CLI command which copies the file locally, but how can I do that via the SDK? I want to read the contents as a string and avoid saving to a file and then reading it back.
In the CLI, I create a profile with keys (one time only):
aws configure --profile cloudian
which then prompts for values like AWS Access Key ID [None]: and so on. And then I need to run this command to retrieve the file:
aws --profile=cloudian --endpoint-url=https://s3-abc.abcstore.abc.net s3 cp s3://abc-store/STORE1/abc2/ABC/test_08.txt test.txt
For reading an S3 object using the SDK:
String s3Key = "your/object/key"; // may be URL-encoded
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(region).build();
s3Key = URLDecoder.decode(s3Key, "UTF-8"); // decode the key variable, not the literal "s3Key"
String s3BucketName = "Your Bucket Name";
S3Object object = s3Client.getObject(new GetObjectRequest(s3BucketName, s3Key));
S3ObjectInputStream inputStream = object.getObjectContent();
You can get the content with the above code.
And I didn't get the second part of your question; do you want to send this data somewhere?
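One more note, hedged since I can't test against your store: your CLI command passes --endpoint-url for a Cloudian endpoint, but AmazonS3ClientBuilder.standard().withRegion(...) will target AWS itself. You likely need to configure the endpoint explicitly, something like the sketch below (the signing region string is an assumption; use whatever your store expects):

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Point the SDK at the endpoint from the CLI example and reuse the
// credentials stored under the "cloudian" profile by aws configure.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider("cloudian"))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "https://s3-abc.abcstore.abc.net", "us-east-1"))
        .withPathStyleAccessEnabled(true) // often required by non-AWS stores
        .build();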
I am trying to create a bucket with Ceph and the S3 library, and I get the 405 exception below. Any pointers to resolve this issue?
com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 405; Error Code: MethodNotAllowed; Request ID: tx00000000000000000000a-005d37c963-1009-
Code:
BasicAWSCredentials credentials = new BasicAWSCredentials("", "");
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);
AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
conn.setEndpoint("localhost:8080");
Bucket bucket = conn.createBucket("my-new-bucket");
Try adding the code below:
conn.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
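For what it's worth, the reason path-style access helps: with virtual-hosted-style addressing the SDK puts the bucket name into the hostname (my-new-bucket.localhost), which a typical Ceph setup without wildcard DNS can't resolve or route; path-style access keeps the bucket in the path (localhost:8080/my-new-bucket) instead.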
I got stuck for ages on MethodNotAllowed trying to create a Ceph bucket.
Firstly, I'd note that you should be able to use the s3cmd command-line tool to create a bucket with the same user (or see the same MethodNotAllowed response), to verify whether it's a problem with your Java code.
For me the answer turned out to be this: you're not allowed to name your bucket "documents"! (Not sure what other reserved words there are.)
I have two buckets, each with a Private ACL.
I have an authenticated link to the source:
String source = "https://bucket-name.s3.region.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=...&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=...Signature=..."
and have been trying to use the Java SDK CopyObjectRequest to copy it into another bucket using:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AWSCredentialsProvider provider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder
        .standard()
        .withCredentials(provider)
        .build();
AmazonS3URI sourceURI = new AmazonS3URI(source);
CopyObjectRequest request = new CopyObjectRequest(sourceURI.getBucket(), sourceURI.getKey(), destinationBucket, destinationKey);
s3Client.copyObject(request);
However I get AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied), because the AWS credentials I've set the SDK up with do not have access to the source file.
Is there a way I can provide an authenticated source URL instead of just the bucket and key?
This isn't supported. The PUT+Copy service API, which is used by s3Client.copyObject(), uses an internal S3 mechanism to copy the object, and the source object is passed as /bucket/key, not as a full URL. There is no API functionality that can be used for fetching from a URL, S3 or otherwise.
With PUT+Copy, the user making the request to S3...
must have READ access to the source object and WRITE access to the destination bucket
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
The only alternative is a download followed by an upload.
Doing this from EC2, or from a Lambda function running in the source region, would be the most cost-effective, but if the object is larger than the Lambda temp space you'll have to write hooks and handlers to read from the stream and juggle the chunks into a multipart upload. Not impossible, but it requires some mental gyrations to understand what you're actually trying to persuade your code to do.
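For the simple case, here's a rough sketch of that download-then-upload fallback, fetching through the presigned URL. Variable names follow the question; s3Client is assumed to be configured with credentials that can write to the destination:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

// Fetch the source through its presigned URL (no source-bucket permissions
// needed), then stream it into the destination with our own credentials.
URL url = new URL(source);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
try (InputStream in = conn.getInputStream()) {
    ObjectMetadata meta = new ObjectMetadata();
    long length = conn.getContentLengthLong();
    if (length >= 0) {
        meta.setContentLength(length); // lets the SDK stream instead of buffering in memory
    }
    s3Client.putObject(new PutObjectRequest(destinationBucket, destinationKey, in, meta));
}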
I had written a library in PHP using the AWS SDK to communicate with the ECS server. Now I am migrating my code to Java. In Java I am using the same key and secret which I used in PHP.
In PHP I used the following method:
$s3 = Aws\S3\S3Client::factory(array(
    'base_url' => $this->base_url,
    'command.params' => array('PathStyle' => true),
    'key' => $this->key,
    'secret' => $this->secret
));
In Java I am using the following method:
BasicAWSCredentials(String accessKey, String secretKey);
I am getting the following exception:
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 3A3000708C8D7883), S3 Extended Request ID: U2rG9KLBBQrAqa6M2rZj65uhaHhOpZpY2VK1rXzqoGrKVd4R/JdR8aeih/skG4fIrecokE4FY3w=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1401)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:945)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:723)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:475)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:437)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:386)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3996)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3933)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:851)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:857)
at com.justdial.DocUpload.DocUpload.main(DocUpload.java:22)
Do I need a new key and secret for Java, or can the previous ones be used?
My questions are:
1 - Is there any prerequisite that we need to follow before using AWS's BasicAWSCredentials() method?
2 - Do we need to create IAM roles?
3 - If we do, then how will it know which IP to hit? In PHP's Aws\S3\S3Client::factory method I was specifying the base URL.
The following code worked for me.
String accesskey = "objuser1";
String secret = "xxxxxxxxxxxxxxxx";
ClientConfiguration config = new ClientConfiguration();
config.setProtocol(Protocol.HTTP);
AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials(accesskey, secret), config);
S3ClientOptions options = new S3ClientOptions();
options.setPathStyleAccess(true);
s3.setS3ClientOptions(options);
s3.setEndpoint("1.2.3.4:9020"); //ECS IP Address
System.out.println("Listing buckets");
for (Bucket bucket : s3.listBuckets()) {
    System.out.println(" - " + bucket.getName());
}
System.out.println();
AWS access keys and secret access keys are independent of the SDK you use (it doesn't matter whether you use PHP, Java, Ruby, etc.); they're about granting and getting access to the services you run/use on AWS.
I think the root cause of the error is that you haven't set the AWS region, even though you've given the access key and secret access key. Please check this to set AWS credentials for Java.
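For reference, the credentials file the Java SDK reads by default (~/.aws/credentials on Linux/macOS, %USERPROFILE%\.aws\credentials on Windows) looks like this, with placeholder values:

[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY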
To answer your questions.
As far as I know, BasicAWSCredentials() itself acts as a prerequisite step before you do anything with Amazon services. The only prerequisite for BasicAWSCredentials() is having the authentication keys.
No, it's not a mandatory step; it's an alternative. Please check the definition of an IAM role from this link.
You can use the endpoint to do the same job in Java. Check the endpoint details from this link.
Please take a look at the examples from here.
Using the same method, I'm able to successfully use BasicAWSCredentials(). Also cross-check whether the access key and secret access key have the necessary permissions to access the Amazon services you need. Check the IAM user and group, and ensure they have the necessary AWS permissions.
I am trying to access my S3 bucket using an application deployed on Tomcat running on EC2.
I could see lots of posts related to this, but it looks like most of them complain about not having proper access. I have proper access to all buckets, and I am able to upload files from other applications, like the Jenkins S3 plugin, without any issues. I am clueless why this should happen only for a Java web application deployed on Tomcat. I have confirmed the things below.
The EC2 instance was created with an IAM role.
The IAM role has write access to the bucket. The Puppet scripts are able to write to the bucket.
Tried with another application to check the IAM role, and it is working fine without any issues.
As per my understanding, if I do not specify any credentials while creating the S3 client (AmazonS3Client), it will take the IAM role authentication as the default.
This is a sample function which I wrote to test the permission.
public boolean checkWritePermission(String bucketName) {
    AmazonS3Client amazonS3Client = new AmazonS3Client();
    LOG.info("Checking bucket write permission.....");
    boolean hasWritePermissions = false;
    final ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(0);
    // Create empty content
    final InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
    // Create a PutObjectRequest with a test object
    final PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName,
            "TestDummy.txt", emptyContent, metadata);
    try {
        if (amazonS3Client.putObject(putObjectRequest) != null) {
            LOG.info("Permissions validated!");
            // User has write permissions, test passed.
            hasWritePermissions = true;
        }
    }
    catch (AmazonClientException s3Ex) {
        // Pass the exception itself so the full stack trace is logged.
        LOG.error("Write permissions not available!", s3Ex);
    }
    return hasWritePermissions;
}
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: xxxxxxxxxxxxxx).
Not sure if you have solved this issue yet; however, if you are using custom KMS keys on your bucket, and the file you are trying to reach is encrypted with the custom key, then this error will also be thrown.
This issue is sometimes hidden by the fact that you can still list objects inside your S3 bucket. Make sure your IAM policy includes the KMS permissions to decrypt.
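For illustration, a policy statement along these lines grants the KMS side (the key ARN and account ID are placeholders; kms:Decrypt covers reads of SSE-KMS objects, and kms:GenerateDataKey is needed for writes):

{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
    ],
    "Resource": "arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555"
}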