AWS S3 Java Embedded Mock for Integration Tests

After searching the internet for a good embedded Java AWS S3 mock, S3Ninja and S3Proxy appear to be the most popular solutions. However, there doesn't seem to be an easy way to fire either of them up programmatically. After giving up on S3Ninja, I tried S3Proxy, but it's not quite working.
Maven Dependencies
<dependency>
    <groupId>org.gaul</groupId>
    <artifactId>s3proxy</artifactId>
    <version>${s3proxy.version}</version>
    <scope>test</scope>
</dependency>
Code
String endpoint = "http://127.0.0.1:8085";
URI uri = URI.create(endpoint);
Properties properties = new Properties();
properties.setProperty("s3proxy.authorization", "none");
properties.setProperty("s3proxy.endpoint", endpoint);
properties.setProperty("jclouds.provider", "filesystem");
properties.setProperty("jclouds.filesystem.basedir", "/tmp/s3proxy");
ContextBuilder builder = ContextBuilder
        .newBuilder("filesystem")
        .credentials("x", "x")
        .modules(ImmutableList.<Module>of(new SLF4JLoggingModule()))
        .overrides(properties);
BlobStoreContext context = builder.build(BlobStoreContext.class);
BlobStore blobStore = context.getBlobStore();
S3Proxy s3Proxy = S3Proxy.builder()
        .awsAuthentication("x", "x")
        .endpoint(uri)
        .keyStore("", "")
        .blobStore(blobStore)
        .build();
s3Proxy.start();
BasicAWSCredentials awsCredentials = new BasicAWSCredentials("x", "x");
AmazonS3Client client = new AmazonS3Client(awsCredentials, new ClientConfiguration());
client.setEndpoint(endpoint);
// Should Throw AWS Client Exception as Bucket / Key does not exist!
GetObjectRequest objectRequest = new GetObjectRequest("bucket", "key");
S3Object object = client.getObject(objectRequest);
s3Proxy.stop();
Exception
java.lang.NoSuchMethodError: com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.<init>(Lcom/google/gson/internal/ConstructorConstructor;Lcom/google/gson/FieldNamingStrategy;Lcom/google/gson/internal/Excluder;)V
at org.jclouds.json.internal.DeserializationConstructorAndReflectiveTypeAdapterFactory.<init>(DeserializationConstructorAndReflectiveTypeAdapterFactory.java:116)
at org.jclouds.json.config.GsonModule.provideGson(GsonModule.java:129)
...
at org.jclouds.providers.config.BindProviderMetadataContextAndCredentials.backend(BindProviderMetadataContextAndCredentials.java:84)
...
at org.jclouds.ContextBuilder.build(ContextBuilder.java:581)
Any help is truly appreciated. I'm sure this is a common requirement for many Java integration tests that interact with AWS S3.

Just to comment: the reason is that your project is using a conflicting version of Gson. S3Proxy's dependency requires Gson 2.5.
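As a sketch of one fix, assuming Maven dependency mediation is picking the newer Gson: pin Gson 2.5 (the version named above) explicitly so it wins on the classpath:
<!-- Pin Gson to the version S3Proxy expects -->
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.5</version>
</dependency>
Running mvn dependency:tree will show which Gson version is actually being resolved.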

Maybe give ladon-S3-server a chance.
Take a look at my GitHub reference.
The core is based on a servlet and has very few dependencies.

Related

ArchiveTransferManagerBuilder Unable to find a region via the region provider chain

The AWS Glacier API gives me an error about not finding the region even when I specify it explicitly:
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");
AmazonGlacier glacierClient = AmazonGlacierClientBuilder.standard()
.withEndpointConfiguration(endpointConfig)
.withCredentials(credentials)
.build();
ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
.withGlacierClient(glacierClient)
.build();
UploadResult result = xferMgr.upload("Data_Full", "my archive " + (new Date()), new File("C:\\myBigFile"));
I get this stack trace:
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
    at com.amazonaws.services.sqs.AmazonSQSClientBuilder.defaultClient(AmazonSQSClientBuilder.java:44)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.resolveSQSClient(ArchiveTransferManagerBuilder.java:129)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.getParams(ArchiveTransferManagerBuilder.java:135)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.build(ArchiveTransferManagerBuilder.java:143)
Note that I use the API to list vaults and it works:
AmazonGlacierClientBuilder clientbuilder = AmazonGlacierClientBuilder.standard();
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");
clientbuilder.withEndpointConfiguration(endpointConfig);
ProfilesConfigFile cf = new ProfilesConfigFile();
AWSCredentialsProvider credentials = new ProfileCredentialsProvider(cf, "My AWS Profile Name");
clientbuilder.setCredentials(credentials);
AmazonGlacier glacierClient = CustomAmazonGlacierClientBuilder.buildCustomAmazonGlacierClient();
ListVaultsRequest request = new ListVaultsRequest();
ListVaultsResult result = glacierClient.listVaults(request);
I recently downloaded the AWS / Glacier libraries as an Eclipse plugin. It shows the jar version as aws-java-sdk-opensdk-1.11.130.jar.
Does anyone have any insight as to what I could put in the code to satisfy the region requirement? I'd rather do it programmatically.
I solved this by adding the AWS_REGION environment variable, e.g. us-east-2. When using Eclipse, you can add it via Run --> Run Configurations.
I also updated Eclipse and the AWS Eclipse plugins using the Eclipse Help --> Check for Updates feature.
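If you prefer to keep everything programmatic, one option (a sketch, assuming ArchiveTransferManagerBuilder exposes withSqsClient/withSnsClient setters, which the resolveSQSClient frame in the stack trace suggests) is to hand the builder region-configured SQS and SNS clients so it never consults the region provider chain:
// Build the auxiliary clients with an explicit region instead of
// letting ArchiveTransferManagerBuilder create default ones
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withRegion("us-east-2")
        .withCredentials(credentials)
        .build();
AmazonSNS sns = AmazonSNSClientBuilder.standard()
        .withRegion("us-east-2")
        .withCredentials(credentials)
        .build();
ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
        .withGlacierClient(glacierClient)
        .withSqsClient(sqs)
        .withSnsClient(sns)
        .build();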

Restarting app server using AWS API

I need to restart my AWS app server. For this I tried to use the AWS API and have done the following:
1) Used the AWS Java SDK Maven dependency:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-elasticbeanstalk</artifactId>
    <version>1.11.86</version>
</dependency>
2) Used the below code segment:
AWSElasticBeanstalk client = new AWSElasticBeanstalkClient();
RestartAppServerRequest request = new RestartAppServerRequest()
.withEnvironmentId("<myEnvId>")
.withEnvironmentName("<myEnvName>");
RestartAppServerResult response = client.restartAppServer(request);
I get the below error:
com.amazonaws.services.elasticbeanstalk.model.AWSElasticBeanstalkException: No Environment found for EnvironmentId = ''. (Service: AWSElasticBeanstalk; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 4d025449-ed00-11e6-8405-4d5eb8e5ecd9)
The <myEnvId> and <myEnvName> are correct as they are taken from the AWS dashboard.
I also tried including the aws.accessKeyId and aws.secretKey to java system properties. Still I get the same error.
Is there something I am missing or doing wrong? Please advise.
Thanks,
Clyde
It sounds like you need to configure the region. For example, to configure the region to us-west-2 you would use the following code:
AWSElasticBeanstalk client = new AWSElasticBeanstalkClient();
client.configureRegion(Regions.US_WEST_2);
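If your SDK version ships the client builders (the 1.11.x line does), a sketch of the equivalent using AWSElasticBeanstalkClientBuilder, which resolves region and credentials in one place:
// Builder-style client construction with an explicit region
AWSElasticBeanstalk client = AWSElasticBeanstalkClientBuilder.standard()
        .withRegion(Regions.US_WEST_2)
        .build();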
Thanks to all who posted. I managed to solve the issue. The code segment used is as follows:
AWSElasticBeanstalk client = new AWSElasticBeanstalkClient();
client.setEndpoint(<set your endpoint>);
RestartAppServerRequest request = new RestartAppServerRequest()
.withEnvironmentId(<set your env id>)
.withEnvironmentName(<set your env name>);
RestartAppServerResult response = client.restartAppServer(request);
This worked fine.

Jclouds Rackspace Cloud Files ServiceNet

Is it possible to use ServiceNet when using the Cloud Files API in Java? Currently I'm using it as follows:
ContextBuilder cb = ContextBuilder.newBuilder(config.getProvider())
.credentials(config.getUserName(), config.getApiKey()).modules(modules);
CloudFilesApi cfa = cb.buildApi(CloudFilesApi.class);
I'm asking this because I used to use the Python client, which has a boolean parameter to choose whether to use the public network or ServiceNet:
cf = pyrax.connect_to_cloudfiles(region=CDN_REGION, public=CDN_USEPUBLIC)
You need to make sure to add the InternalUrlModule to the list of modules. This in turn will make jclouds use the applicable ServiceNet endpoints when connecting to the service:
Iterable<Module> modules = ImmutableSet.<Module>of(new SLF4JLoggingModule(),
        new InternalUrlModule());
ContextBuilder builder = ContextBuilder.newBuilder(PROVIDER)
        .modules(modules)
        .credentials(username, apiKey);
blobStore = builder.buildView(RegionScopedBlobStoreContext.class).getBlobStore(REGION);
cloudFiles = blobStore.getContext().unwrapApi(CloudFilesApi.class);

Android: how do I authenticate to my google account and how can I get my GMail Tasks?

I am developing an Android app to connect to my Google Tasks and show them in a ListView.
I tried to follow step-by-step tutorials such as https://developers.google.com/google-apps/tasks/oauth-and-tasks-on-android but none of them works.
I tried to download the google-api-services-tasks-v1-1.1.0-beta.jar and all the jars indicated in that tutorial, and after importing all the necessary libraries it just didn't work; when I try to get my tasks after connecting I just get nulls.
I found out that I could use OAuth 2.0 for authentication and to access the Tasks API, to get my client ID etc., so I created an account on the Google APIs Console and created my OAuth client ID.
After that I tried to authenticate with this code:
HttpTransport transport = new NetHttpTransport();
JacksonFactory jsonFactory = new JacksonFactory();
String clientId = "myID";
String clientSecret = "mySecret";
String redirectUrl = "https://localhost/oauth2callback";
Iterable<String> scope = Collections.singleton("https://www.googleapis.com/auth/tasks");
String authorizationUrl = new GoogleAuthorizationCodeRequestUrl(clientId, redirectUrl, scope)
        .build();
// "Code" is a placeholder; it should be the authorization code returned to the redirect URL
String code = "Code";
GoogleTokenResponse response = new GoogleAuthorizationCodeTokenRequest(transport, jsonFactory,
        clientId, clientSecret, code, redirectUrl).execute();
GoogleAccessProtectedResource accessProtectedResource = new GoogleAccessProtectedResource(
        response.getAccessToken(), transport, jsonFactory, clientId, clientSecret,
        response.getRefreshToken());
Tasks service = new Tasks(transport, jsonFactory, accessProtectedResource);
service.accessKey = "MyKey";
service.setApplicationName("GTasks");
I don't get any error, but after creating this service I tried to get my task lists and nothing happened; I didn't get any result.
When I tried to log the contents of the list of task lists I just got an empty list "{}".
I suspect this could be because of the old version of the libraries I found, but even when I tried to use the latest versions it didn't work and I got the same results.
I'm really confused.
Every tutorial I found recommends a different version of the libraries and a different strategy. I really don't know which one I should follow.
The Tasks API is REALLY confusing.
I, like you, was trying without success to roll my own authentication. I've found it way easier to modify the sample provided by google-api-java-client. This sample simply lists the tasks from the "default" task list.
You'll need to download and install the GDT plugin for Eclipse, and then use it to install the Google APIs you want to use.
Once you have the sample working, you can do more with it by looking here for the different functions to use. For example, to request all the tasks in the "Default" task list use this line:
client.tasks().list("@default").setFields("items/title").execute().getItems();
To update a task:
client.tasks().update("@default", task.getId(), task).execute();
And so on.
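For instance, a minimal sketch that fetches the default list and logs each task title (it assumes the client object created by the Google sample and the v1 Tasks model classes):
// List tasks from the default task list and log their titles
List<Task> items = client.tasks().list("@default").execute().getItems();
if (items != null) {
    for (Task task : items) {
        Log.d("GTasks", task.getTitle());
    }
}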

Store Blob in Heroku (or similar cloud services)

I want to deploy an app on Heroku to try their new Play! Framework support. From what I've read on the site (I must confess I haven't tried it yet), they don't provide a persistent file system. This means that (probably) Blob fields used in Play to store files won't work properly.
Could somebody:
Confirm if you can use the Play Blob in Heroku?
Provide the "best" alternative to store files in Heroku? Is it better to store them in the database (they use PostgreSQL) or somewhere else?
I put an example of how to do this with Amazon S3 on github:
https://github.com/jamesward/plays3upload
Basically you just need to send the file to S3 and save the key in the entity:
AWSCredentials awsCredentials = new BasicAWSCredentials(System.getenv("AWS_ACCESS_KEY"), System.getenv("AWS_SECRET_KEY"));
AmazonS3 s3Client = new AmazonS3Client(awsCredentials);
s3Client.createBucket(BUCKET_NAME);
String s3Key = UUID.randomUUID().toString();
s3Client.putObject(BUCKET_NAME, s3Key, attachment);
Document doc = new Document(comment, s3Key, attachment.getName());
doc.save();
listUploads();
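For completeness, a sketch of reading the attachment back later using the stored key (doc.s3Key stands in for however your entity exposes the key; in Play 1.x it would typically be a public field):
// Fetch the stored object and stream its content
S3Object object = s3Client.getObject(BUCKET_NAME, doc.s3Key);
InputStream in = object.getObjectContent();
try {
    // copy "in" to the HTTP response, a temp file, etc.
} finally {
    in.close();
}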
