Is it safe to cache AmazonS3 client for later use? - java

I am writing a Java program to upload files to AWS S3, and I have succeeded in creating the S3 client using the following code:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("aaa", "bbb");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(Regions.fromName("ccc"))
.withCredentials(new AWSStaticCredentialsProvider(awsCreds)).build();
As it takes quite a few seconds each time to set up the S3 client, I am wondering whether it is possible to cache the client for repeated use.
Also, if I cache the client for, say, a year, will it still be valid to connect to AWS?

Your client will work as long as the credentials are valid. It will keep working for a year if the credentials are not changed or revoked in the meantime.
When you create a client, the credentials are not converted or exchanged for anything at build time; they are simply referenced later whenever the client performs an actual operation.
Consequently, your client will no longer work once you rotate or revoke the credentials it was built with.
So yes, if you initialize the client once you can keep using it for a year. That said, security best practice is not to keep the same fixed credentials for such a long period of time.
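For example, a minimal sketch of keeping one shared client instance for the lifetime of the application (the S3ClientHolder name is just illustrative; the credentials and region are the placeholders from the question):

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public final class S3ClientHolder {

    // Built once when the class is loaded and reused afterwards.
    private static final AmazonS3 S3_CLIENT = AmazonS3ClientBuilder.standard()
            .withRegion(Regions.fromName("ccc"))
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials("aaa", "bbb")))
            .build();

    private S3ClientHolder() {
    }

    // AmazonS3 clients are thread-safe, so the same instance can be shared freely.
    public static AmazonS3 get() {
        return S3_CLIENT;
    }
}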
More about credentials:
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/credentials.html
Hope it helps.

Related

S3 path configuration and SQS Extended Client Library

I want to save all messages that go into a particular SQS queue in an already created S3 bucket.
But I want to save those messages in certain directories for easier searching by date and time.
S3Client has software.amazon.awssdk.services.s3.model.PutObjectRequest,
where I can set the bucket, the path (key) under which the object is saved, and some metadata:
PutObjectRequest objectRequest =
PutObjectRequest.builder()
.bucket(bucketName)
.key(s3Path)
.metadata(keyAndMetadata.getMetadata())
.build();
After that, s3Client.putObject(objectRequest, body) does the job.
Now I want to configure S3 in a similar way using ExtendedClientConfiguration, but I can only see very simple input parameters:
ExtendedClientConfiguration extendedClientConfiguration =
new ExtendedClientConfiguration()
.withPayloadSupportEnabled(s3Client, bucketName, false)
.withAlwaysThroughS3(true);
And after that we create the extended SQS client, with no way to configure S3 more extensively:
AmazonSQSExtendedClient amazonSQSExtendedClient = new AmazonSQSExtendedClient(sqsClient, extendedClientConfiguration);
I know that I could probably save all messages that go to SQS into S3 separately, but I would rather configure all of that at the client level. Does someone have any ideas?
I found out that there is no way to configure the S3 path at the client level. The S3 backup was not designed for that purpose, though, and archiving messages to S3 should probably be handled differently. Letting the library delete the files from S3 as the corresponding messages disappear from SQS is the best way to use it.
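If you do end up archiving the messages yourself, a minimal sketch with the AWS SDK for Java v2 could look like the following (MessageArchiver, the archive bucket name, and the date-based key layout are hypothetical choices, not part of the extended client library):

import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class MessageArchiver {

    // Keys look like 2024/05/17/13-45-02/<messageId>, which keeps them searchable by date and time.
    private static final DateTimeFormatter KEY_PREFIX =
            DateTimeFormatter.ofPattern("yyyy/MM/dd/HH-mm-ss");

    private final S3Client s3Client;
    private final String archiveBucket; // hypothetical bucket dedicated to message archiving

    public MessageArchiver(S3Client s3Client, String archiveBucket) {
        this.s3Client = s3Client;
        this.archiveBucket = archiveBucket;
    }

    // Store one message body under a date/time-based key, independently of the SQS extended client.
    public void archive(String messageId, String messageBody) {
        String key = KEY_PREFIX.format(LocalDateTime.now()) + "/" + messageId;
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(archiveBucket)
                .key(key)
                .build();
        s3Client.putObject(request, RequestBody.fromString(messageBody, StandardCharsets.UTF_8));
    }
}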

Amazon S3 Copy between two buckets with different Authentication

I have two buckets, each with a Private ACL.
I have an authenticated link to the source:
String source = "https://bucket-name.s3.region.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=...&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=...Signature=..."
and have been trying to use the Java SDK CopyObjectRequest to copy it into another bucket using:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AWSCredentialsProvider provider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder
    .standard()
    .withCredentials(provider)
    .build();
AmazonS3URI sourceURI = new AmazonS3URI(new URI(source));
CopyObjectRequest request = new CopyObjectRequest(sourceURI.getBucket(), sourceURI.getKey(), destinationBucket, destinationKey);
s3Client.copyObject(request);
However, I get AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied), because the AWS credentials I have set the SDK up with do not have access to the source file.
Is there a way I can provide an authenticated source URL instead of just the bucket and key?
This isn't supported. The PUT+Copy service API, which is used by s3Client.copyObject(), uses an internal S3 mechanism to copy the object, and the source object is passed as /bucket/key -- not as a full URL. There is no API functionality that can be used for fetching from a URL, S3 or otherwise.
With PUT+Copy, the user making the request to S3...
must have READ access to the source object and WRITE access to the destination bucket
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
The only alternative is download followed by upload.
Doing this from EC2... or a Lambda function running in the source region would be the most cost-effective, but if the object is larger than the Lambda temp space, you'll have to write hooks and handlers to read from the stream and juggle the chunks into a multipart upload... not impossible, but requires some mental gyrations in order to understand what you're actually trying to persuade your code to do.
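A minimal sketch of the download-then-upload alternative with the AWS SDK for Java v1 (CrossAccountCopy is a hypothetical helper; it assumes two separate clients, one built with credentials that can read the source object and one that can write to the destination, and it stages the object in a temp file rather than streaming):

import java.io.File;
import java.nio.file.Files;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;

public class CrossAccountCopy {

    // sourceClient must have READ access to the source object,
    // destinationClient must have WRITE access to the destination bucket.
    public static void copy(AmazonS3 sourceClient, AmazonS3 destinationClient,
                            String sourceBucket, String sourceKey,
                            String destinationBucket, String destinationKey) throws Exception {
        File tmp = Files.createTempFile("s3-copy-", ".tmp").toFile();
        try {
            // Download with the credentials that can see the source...
            sourceClient.getObject(new GetObjectRequest(sourceBucket, sourceKey), tmp);
            // ...then upload with the credentials that can write to the destination.
            destinationClient.putObject(destinationBucket, destinationKey, tmp);
        } finally {
            tmp.delete();
        }
    }
}

For objects too large for local temp space you would have to stream the download into a multipart upload instead, as described above.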

Windows SSPI to Java GSSAPI interoperability to achieve SSO on EJB calls

I have a Java client running on a Windows machine that calls a remote EJB
on JBoss EAP/WildFly running on a Linux machine.
I use Kerberos to achieve SSO. The Java client verifies the user against the Windows domain
and passes their identity within the EJB call to the JBoss server.
I started with JAAS and the built-in com.sun.security.auth.module.Krb5LoginModule.
It works correctly except for one thing: the user has to type their username and password
again, so it is not real SSO.
The problem is that Windows prohibits exporting the Kerberos session key from its LSA credential cache.
This can be fixed by setting a specific registry key on each client machine, but that is not acceptable for the customer.
Therefore I am trying to find an alternative solution.
I learned that Windows provides SSPI, which should be interoperable with the GSSAPI used by Java. I use the Waffle library to access SSPI from Java on the client. On the server I keep using JAAS, because it runs on Linux and I cannot use Waffle there.
I also learned that I don't need to implement a LoginModule; rather, I need a SASL client.
So I had a look at how com.sun.security.sasl.gsskerb.GssKrb5Client works and I am trying to reimplement it using Waffle.
The first step seems to work correctly: I obtain the SSPI security context from Waffle,
then get the initial token and send it to the server.
The server accepts the token and responds with its own token.
And now the problem comes. In the original SASL client the 'unwrap' operation is
used to extract data from the server token, and the 'wrap' operation is used to create
the reply token to be sent to the server.
According to the Microsoft documentation, the GSSAPI wrap/unwrap operations should correspond to the SSPI EncryptMessage/DecryptMessage operations. These two methods are not available in Waffle, but they are available in the NetAccountClient library.
However, I am not able to use them correctly. If I use a single SECBUFFER_STREAM, then DecryptMessage
succeeds, but the data part of the token is not extracted and I don't know how to determine
the offset where it begins.
If I use SECBUFFER_STREAM and SECBUFFER_DATA as suggested by the Microsoft docs, then I get an error:
com.sun.jna.platform.win32.Win32Exception: The message or signature supplied for verification has been altered
I also tried other combinations of SECBUFFER types as suggested elsewhere, but without success.
Any idea what I am doing wrong?
The source code of the unwrap method:
public byte[] unwrap(byte[] wrapper) throws LoginException {
    // The whole token received from the server goes into a single SECBUFFER_STREAM...
    Sspi.SecBuffer.ByReference inBuffer = new Sspi.SecBuffer.ByReference(Secur32Ext.SECBUFFER_STREAM, wrapper);
    // ...and an empty SECBUFFER_DATA is supposed to receive the decrypted payload.
    Sspi.SecBuffer.ByReference buffer = new Sspi.SecBuffer.ByReference();
    buffer.BufferType = Sspi.SECBUFFER_DATA;
    Secur32Ext.SecBufferDesc2 buffers = new Secur32Ext.SecBufferDesc2(inBuffer, buffer);
    NativeLongByReference pfQOP = new NativeLongByReference();
    int responseCode = Secur32Ext.INSTANCE.DecryptMessage(secCtx.getHandle(), buffers, new NativeLong(1), pfQOP);
    if (responseCode != W32Errors.SEC_E_OK) {
        throw handleError(responseCode);
    }
    byte[] data = buffer.getBytes();
    return data;
}

How to instantiate GoogleIdTokenVerifier properly / what does .setAudience() do?

My Guidelines
I followed this Google documentation about verifying Google account tokens on the server side, but I am kind of confused.
My Problem
GoogleIdTokenVerifier googleIdTokenVerifier = new GoogleIdTokenVerifier.Builder(new NetHttpTransport(), new JacksonFactory())
.setAudience(Collections.singletonList(CLIENT_ID))
.build();
In this piece of code I figured out that the transport and jsonFactory arguments can be filled with new NetHttpTransport() and new JacksonFactory(). The documentation also describes how to get the audience string, but I couldn't figure out what it is for. I couldn't test it, so my question is whether I can use the verifier without .setAudience(), or whether I need it and what it is actually for.
In .setAudience() you have to pass all of your client IDs. You can get the ID for your client from the Credentials page. It's explained here.
Thanks to #StevenSoneff.
If you didn't get the basic concept
For every client you want your server to accept, you need to create a project in the `Developer Console`. Clients are differentiated by their `SHA-1` fingerprint. You can, for example, have a debug project (it will take your debug fingerprint) and a release one. To make both work, you have to add both `ID`s to your server's `GoogleIdTokenVerifier`'s `.setAudience()`.
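For completeness, a minimal sketch of actually checking an incoming token with that verifier (idTokenString stands for the raw token string sent by the client; verify() throws GeneralSecurityException and IOException, so call it from code that handles them):

GoogleIdToken idToken = googleIdTokenVerifier.verify(idTokenString);
if (idToken != null) {
    GoogleIdToken.Payload payload = idToken.getPayload();
    String userId = payload.getSubject();  // stable Google user ID
    String email = payload.getEmail();
    // ... use the verified identity ...
} else {
    // invalid token, or a token issued for a client ID not listed in .setAudience()
}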
In my case I'm using Firebase to get the ID token on Android or iOS. If you are doing the same, you should follow these instructions to verify it on your backend server:
Verify ID tokens using a third-party JWT library
For me, Google OAuth Client is the third-party library, so it's easy to use.
But it's a little bit different from this document:
Verify the Google ID token on your server side
The CLIENT_ID is your Firebase project ID.
The issuer has to be set to https://securetoken.google.com/<projectId>.
You need to use GooglePublicKeysManager and call setPublicCertsEncodedUrl to set it to https://www.googleapis.com/robot/v1/metadata/x509/securetoken@system.gserviceaccount.com
GooglePublicKeysManager manager = new GooglePublicKeysManager.Builder(HTTP_TRANSPORT, JSON_FACTORY)
.setPublicCertsEncodedUrl(PUBLIC_KEY_URL)
.build();
GoogleIdTokenVerifier verifier = new GoogleIdTokenVerifier.Builder(manager)
.setAudience(Collections.singletonList(FIREBASE_PROJECT_ID))
.setIssuer(ISSUER)
.build();
If you have multiple issuers, then you have to create a GoogleIdTokenVerifier for each one.

S3 presigned URL file upload failing with secured/https URL

I have two buckets, one private and one public. The private bucket holds files with CannedAccessControlList.Private and the public one holds files with CannedAccessControlList.PublicRead. Apart from that, they are the same.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(AWS_ACCESS_KEY,AWS_SECRET_KEY));
generatePresignedUrlRequest = new GeneratePresignedUrlRequest(AWS_BUCKET_PRIVATE_NAME, path,HttpMethod.PUT);
generatePresignedUrlRequest.setExpiration(expiration);
generatePresignedUrlRequest.putCustomRequestHeader("x-amz-acl", CannedAccessControlList.Private.toString());
generatePresignedUrlRequest.putCustomRequestHeader("content-type", fileType);
url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
I am able to upload files to S3 in the scenarios below. All generated URLs are https by default.
1. Private bucket: upload works over https.
2. Public bucket: upload fails over https; after replacing https with http it worked.
The problem is why the public bucket upload fails over https. I can't use http on the production system, as it has SSL installed.
There are two things I have learned.
S3 has two different URL styles: path style and virtual host style. (You have to be careful when your bucket name looks like a hostname.)
Virtual host style
https://xyz.com.s3.amazonaws.com/myObjectKey
Path style
https://s3.amazonaws.com/xyz.com/myObjectKey
An AJAX upload fails in the first case over https: the SSL certificate is only valid for s3.amazonaws.com (one subdomain level), so when the bucket name looks like a hostname the SSL check fails and the browser blocks the upload call.
The solution for this in Java:
s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
I am still not able to figure out how the S3 client decides which region to use when forming the URL; sometimes it picks the proper "s3-ap-southeast-1.amazonaws.com" and sometimes it picks "s3.amazonaws.com".
In the latter case the upload fails again, this time with CORS errors: if the presigned URL points at s3.amazonaws.com, then even with CORS enabled on your bucket the response won't include "Access-Control-Allow-Origin". So make sure you set the proper region endpoint, as below.
s3client.setEndpoint("s3-ap-southeast-1.amazonaws.com"); // or whatever region your bucket is in
Reference: http://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html
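Putting those two settings together, a rough sketch with the AWS SDK for Java v1 (AWS_BUCKET_PUBLIC_NAME, path, expiration, fileType, and the ap-southeast-1 region are assumptions for the example):

AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY));
// Point the client at the bucket's regional endpoint so the presigned URL host matches for CORS...
s3client.setEndpoint("s3-ap-southeast-1.amazonaws.com");
// ...and force path-style access so the SSL certificate matches even for dotted bucket names.
s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));

GeneratePresignedUrlRequest request =
        new GeneratePresignedUrlRequest(AWS_BUCKET_PUBLIC_NAME, path, HttpMethod.PUT);
request.setExpiration(expiration);
request.putCustomRequestHeader("x-amz-acl", CannedAccessControlList.PublicRead.toString());
request.putCustomRequestHeader("content-type", fileType);
URL url = s3client.generatePresignedUrl(request);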
I resolved this issue by creating a folder inside my bucket and generating the pre-signed URL for "my-bucket/folder" instead of "my-bucket".
