Is it possible to use ServiceNet when using the Cloud Files API in Java? Currently I'm using it as follows:
ContextBuilder cb = ContextBuilder.newBuilder(config.getProvider())
.credentials(config.getUserName(), config.getApiKey()).modules(modules);
CloudFilesApi cfa = cb.buildApi(CloudFilesApi.class);
I'm asking because I used to use the Python client, which has a boolean parameter for choosing between the public network and ServiceNet:
cf = pyrax.connect_to_cloudfiles(region=CDN_REGION, public=CDN_USEPUBLIC)
Iterable<Module> modules = ImmutableSet.<Module> of(new SLF4JLoggingModule(),
new InternalUrlModule());
ContextBuilder builder = ContextBuilder.newBuilder(PROVIDER)
.modules(modules)
.credentials(username, apiKey);
blobStore = builder.buildView(RegionScopedBlobStoreContext.class).getBlobStore(REGION);
cloudFiles = blobStore.getContext().unwrapApi(CloudFilesApi.class);
Make sure to add the InternalUrlModule to the list of modules, as in the snippet above. This makes jclouds use the applicable ServiceNet endpoints when connecting to the service.
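Once the context is built with that module, the rest of your BlobStore/CloudFilesApi calls are unchanged; they simply hit the internal endpoints. A small usage sketch (the container and object names are made up for illustration):
import org.jclouds.blobstore.domain.Blob;

// 'blobStore' is the region-scoped BlobStore built above with the InternalUrlModule registered
String container = "my-container";                    // hypothetical container name
blobStore.createContainerInLocation(null, container);

Blob blob = blobStore.blobBuilder("hello.txt")        // hypothetical object name
        .payload("uploaded over ServiceNet")
        .build();
blobStore.putBlob(container, blob);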
I am trying to use the Google Phishing Protection API over gRPC. Everything seems straightforward looking here, but comparing with here you can see that with REST you can send a request without authenticating as such; rather, you can pass an API key as a query param.
I tested the REST option and it works for me, but with the gRPC option I get failures while trying to authenticate, which I do not want to have to do.
The equivalent of the REST key query parameter in gRPC is the x-goog-api-key metadata. The API to add that metadata key will vary by language.
When using Java with the googleapis client (which you should be using), you can use:
PhishingProtectionServiceV1Beta1Client.create(
PhishingProtectionServiceV1Beta1Settings.newBuilder()
.setCredentialsProvider(new NoCredentialsProvider())
.setHeaderProvider(PhishingProtectionServiceV1Beta1Settings.defaultApiClientHeaderProviderBuilder()
.setApiClientHeaderKey(yourApiKey)
.build())
.build());
In "plain" grpc it would look more like:
import io.grpc.Metadata;
import io.grpc.stub.MetadataUtils;

private static final Metadata.Key<String> API_KEY =
    Metadata.Key.of("x-goog-api-key", Metadata.ASCII_STRING_MARSHALLER);

// attach the API key as gRPC metadata on every call made through the stub
Metadata apiKeyMetadata = new Metadata();
apiKeyMetadata.put(API_KEY, yourApiKey);
stub = stub.withInterceptors(MetadataUtils.newAttachHeadersInterceptor(apiKeyMetadata));
Speaking of gRPC, it is understandable that you are required to authenticate; it is necessary for quota enforcement.
The AWS Glacier API gives me an error about not being able to find the region, even though I specify it explicitly:
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");
AmazonGlacier glacierClient = AmazonGlacierClientBuilder.standard()
.withEndpointConfiguration(endpointConfig)
.withCredentials(credentials)
.build();
ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
.withGlacierClient(glacierClient)
.build();
UploadResult result = xferMgr.upload("Data_Full", "my archive " + (new Date()), new File("C:\\myBigFile"));
I get this stack trace:
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
    at com.amazonaws.services.sqs.AmazonSQSClientBuilder.defaultClient(AmazonSQSClientBuilder.java:44)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.resolveSQSClient(ArchiveTransferManagerBuilder.java:129)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.getParams(ArchiveTransferManagerBuilder.java:135)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.build(ArchiveTransferManagerBuilder.java:143)
Note that I use the API to list vaults and it works:
AmazonGlacierClientBuilder clientbuilder = AmazonGlacierClientBuilder.standard();
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");
clientbuilder.withEndpointConfiguration(endpointConfig);
ProfilesConfigFile cf = new ProfilesConfigFile();
AWSCredentialsProvider credentials = new ProfileCredentialsProvider(cf, "My AWS Profile Name");
clientbuilder.setCredentials(credentials);
AmazonGlacier glacierClient = CustomAmazonGlacierClientBuilder.buildCustomAmazonGlacierClient();
ListVaultsRequest request = new ListVaultsRequest();
ListVaultsResult result = glacierClient.listVaults(request);
I recently downloaded the AWS/Glacier libraries as an Eclipse plugin. It shows the .jar version as aws-java-sdk-opensdk-1.11.130.jar.
Does anyone have any insight into what I could put in the code to satisfy the region requirement? I'd rather do it programmatically.
I solved this by adding the AWS_REGION environment variable (e.g. us-east-2). When using Eclipse, you can add it under Run --> Run Configurations.
I also updated Eclipse and the AWS Eclipse plugins using the Eclipse Help --> Check for Updates feature.
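If you'd rather do it programmatically, note from the stack trace that it is the SQS client which ArchiveTransferManagerBuilder creates internally (via AmazonSQSClientBuilder.defaultClient()) that cannot resolve a region, not the Glacier client. A possible sketch, assuming your SDK version's ArchiveTransferManagerBuilder exposes withSqsClient/withSnsClient, is to hand it SQS and SNS clients that carry an explicit region so nothing falls back to the region provider chain:
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

// Build the SQS/SNS clients yourself with an explicit region instead of letting
// ArchiveTransferManagerBuilder fall back to AmazonSQSClientBuilder.defaultClient().
AmazonSQS sqsClient = AmazonSQSClientBuilder.standard()
        .withRegion("us-east-2")
        .withCredentials(credentials)      // same credentials provider as in the question
        .build();
AmazonSNS snsClient = AmazonSNSClientBuilder.standard()
        .withRegion("us-east-2")
        .withCredentials(credentials)
        .build();

ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
        .withGlacierClient(glacierClient)  // the endpoint-configured Glacier client from the question
        .withSqsClient(sqsClient)
        .withSnsClient(snsClient)
        .build();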
I am trying to authenticate a java app to AWS services using a developer-authenticated Cognito identity. This is very straightforward in the AWS mobile SDKs (documentation), but I can't seem to find the equivalent classes in the Java SDK.
The main issue I am having is that the Java SDK classes (such as WebIdentityFederationSessionCredentialsProvider) require the client code to know the ARN of the role being assumed. With the mobile SDK, it uses the role configured for the federated identity. That's what I'd prefer to do, but it seems the Java SDK doesn't have the supporting classes for that.
The last comment from Jeff led me to the answer. Thanks Jeff!
String cognitoIdentityId = "your user's identity id";
String openIdToken = "open id token for the user created on backend";
Map<String,String> logins = new HashMap<>();
logins.put("cognito-identity.amazonaws.com", openIdToken);
GetCredentialsForIdentityRequest getCredentialsRequest =
new GetCredentialsForIdentityRequest()
.withIdentityId(cognitoIdentityId)
.withLogins(logins);
AmazonCognitoIdentityClient cognitoIdentityClient = new AmazonCognitoIdentityClient();
GetCredentialsForIdentityResult getCredentialsResult = cognitoIdentityClient.getCredentialsForIdentity(getCredentialsRequest);
Credentials credentials = getCredentialsResult.getCredentials();
AWSSessionCredentials sessionCredentials = new BasicSessionCredentials(
credentials.getAccessKeyId(),
credentials.getSecretKey(),
credentials.getSessionToken()
);
AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);
...
If that's the route you want to go, you can find these roles in the IAM console, named Cognito_(Auth|Unauth)_DefaultRole. They are the roles Cognito generated and linked to your pool, and you can get the ARN from there.
This blog post may be of some assistance. All of the APIs the SDK uses to communicate with Cognito to get credentials are exposed in the Java SDK; you just need to use your own back end to get the token itself. Once you have it, you can set the logins the same way you normally would with another provider and it will all work.
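For completeness, here is a rough sketch of the back-end side that mints the OpenID token for a developer-authenticated user; the identity pool ID, developer provider name, and user identifier below are placeholders:
import java.util.Collections;
import com.amazonaws.services.cognitoidentity.AmazonCognitoIdentityClient;
import com.amazonaws.services.cognitoidentity.model.GetOpenIdTokenForDeveloperIdentityRequest;
import com.amazonaws.services.cognitoidentity.model.GetOpenIdTokenForDeveloperIdentityResult;

// Back end: exchange your own user identifier for a Cognito identity id + OpenID token.
AmazonCognitoIdentityClient cognitoClient = new AmazonCognitoIdentityClient(); // uses the server's AWS credentials

GetOpenIdTokenForDeveloperIdentityRequest tokenRequest =
        new GetOpenIdTokenForDeveloperIdentityRequest()
                .withIdentityPoolId("us-east-1:xxxx-xxxx")          // placeholder identity pool id
                .withLogins(Collections.singletonMap(
                        "login.mycompany.myapp",                    // placeholder developer provider name
                        "user-id-in-your-own-system"))              // placeholder user identifier
                .withTokenDuration(900L);

GetOpenIdTokenForDeveloperIdentityResult tokenResult =
        cognitoClient.getOpenIdTokenForDeveloperIdentity(tokenRequest);

String cognitoIdentityId = tokenResult.getIdentityId();
String openIdToken = tokenResult.getToken();
// Hand these two values to the client, which then runs the GetCredentialsForIdentity flow shown above.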
After searching the internet for a good embedded Java AWS S3 mock, S3Ninja and S3Proxy seemed to be the most popular solutions.
However, there doesn't seem to be an easy way to fire these up programmatically. After giving up on S3Ninja, I tried to do it with S3Proxy, but it's not quite working.
Maven Dependencies
<dependency>
<groupId>org.gaul</groupId>
<artifactId>s3proxy</artifactId>
<version>${s3proxy.version}</version>
<scope>test</scope>
</dependency>
Code
String endpoint = "http://127.0.0.1:8085";
URI uri = URI.create(endpoint);
Properties properties = new Properties();
properties.setProperty("s3proxy.authorization", "none");
properties.setProperty("s3proxy.endpoint", endpoint);
properties.setProperty("jclouds.provider", "filesystem");
properties.setProperty("jclouds.filesystem.basedir", "/tmp/s3proxy");
ContextBuilder builder = ContextBuilder
.newBuilder("filesystem")
.credentials("x", "x")
.modules(ImmutableList.<Module>of(new SLF4JLoggingModule()))
.overrides(properties);
BlobStoreContext context = builder.build(BlobStoreContext.class);
BlobStore blobStore = context.getBlobStore();
S3Proxy s3Proxy = S3Proxy.builder().awsAuthentication("x", "x").endpoint(uri).keyStore("", "").blobStore(blobStore).build();
s3Proxy.start();
BasicAWSCredentials awsCredentials = new BasicAWSCredentials("x", "x");
AmazonS3Client client = new AmazonS3Client(awsCredentials, new ClientConfiguration());
client.setEndpoint(endpoint);
// Should Throw AWS Client Exception as Bucket / Key does not exist!
GetObjectRequest objectRequest = new GetObjectRequest("bucket", "key");
S3Object object = client.getObject(objectRequest);
s3Proxy.stop();
Exception
java.lang.NoSuchMethodError: com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.<init>(Lcom/google/gson/internal/ConstructorConstructor;Lcom/google/gson/FieldNamingStrategy;Lcom/google/gson/internal/Excluder;)V
at org.jclouds.json.internal.DeserializationConstructorAndReflectiveTypeAdapterFactory.<init>(DeserializationConstructorAndReflectiveTypeAdapterFactory.java:116)
at org.jclouds.json.config.GsonModule.provideGson(GsonModule.java:129)
...
at org.jclouds.providers.config.BindProviderMetadataContextAndCredentials.backend(BindProviderMetadataContextAndCredentials.java:84)
...
at org.jclouds.ContextBuilder.build(ContextBuilder.java:581)
Any help is truly appreciated. I'm sure this is a big requirement for many Java integration tests that interact with AWS S3.
Just a comment on the cause: your project is pulling in a conflicting version of Gson. S3Proxy's dependencies require Gson 2.5.
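One way to resolve it, assuming Maven's dependency mediation is what pulls in the other Gson version, is to pin Gson to 2.5 in your POM, for example:
<dependencyManagement>
  <dependencies>
    <!-- force the Gson version S3Proxy's jclouds dependency expects -->
    <dependency>
      <groupId>com.google.code.gson</groupId>
      <artifactId>gson</artifactId>
      <version>2.5</version>
    </dependency>
  </dependencies>
</dependencyManagement>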
Maybe give ladon-S3-server a chance.
Take a look at my GitHub reference.
The core is based on a servlet and has very few dependencies.
I have a working application for managing HDFS using WebHDFS.
I need to be able to do this on a Kerberos secured cluster.
The problem is that there is no library or extension to negotiate the ticket for my app; I only have a basic HTTP client.
Would it be possible to create a Java service which would handle the ticket exchange and, once it gets the service ticket, just pass it to the app for use in an HTTP request?
In other words, my app would ask the Java service to negotiate the tickets, the service would return the service ticket back to my app as a string (or in raw form), and the app would just attach it to the HTTP request?
EDIT: Is there a similarly elegant solution, like the one @SamsonScharfrichter described, for HttpFS? (To my knowledge, it does not support delegation tokens.)
EDIT2: Hi guys, I am still completely lost. I'm trying to figure out the hadoop-auth client without any luck. Could you please help me out again? I have already spent hours reading up on it.
The examples say to do this:
* // establishing an initial connection
*
* URL url = new URL("http://foo:8080/bar");
* AuthenticatedURL.Token token = new AuthenticatedURL.Token();
* AuthenticatedURL aUrl = new AuthenticatedURL();
* HttpURLConnection conn = new AuthenticatedURL(url, token).openConnection();
* ....
* // use the 'conn' instance
* ....
I'm lost already here. What initial connection do I need? How can
new AuthenticatedURL(url, token).openConnection();
take two parameters? There is no constructor for such a case (I'm getting an error because of this). Shouldn't a principal be specified somewhere? It is probably not going to be this easy.
URL url = new URL("http://<host>:14000/webhdfs/v1/?op=liststatus");
AuthenticatedURL.Token token = new AuthenticatedURL.Token();
HttpURLConnection conn = new AuthenticatedURL(url, token).openConnection(url, token);
Using Java code plus the Hadoop Java API to open a Kerberized session, get the Delegation Token for the session, and pass that Token to the other app -- as suggested by @tellisnz -- has a drawback: the Java API requires quite a lot of dependencies (i.e. a lot of JARs, plus Hadoop native libraries). If you run your app on Windows, in particular, it will be a tough ride.
Another option is to use Java code plus WebHDFS to run a single SPNEGOed query and GET the Delegation Token, then pass it to the other app -- that option requires absolutely no Hadoop library on your server. A barebones version would be something like this:
URL urlGetToken = new URL("http://<host>:<port>/webhdfs/v1/?op=GETDELEGATIONTOKEN");
HttpURLConnection cnxGetToken = (HttpURLConnection) urlGetToken.openConnection();
BufferedReader httpMessage = new BufferedReader(new InputStreamReader(cnxGetToken.getInputStream()), 1024);

// pull the "urlString" value out of the JSON response
Pattern regexHasToken = Pattern.compile("urlString[\": ]+(.[^\" ]+)");
String httpMessageLine;
while ((httpMessageLine = httpMessage.readLine()) != null) {
    Matcher regexToken = regexHasToken.matcher(httpMessageLine);
    if (regexToken.find()) {
        System.out.println("Use that template: http://<Host>:<Port>/webhdfs/v1%AbsPath%?delegation="
                + regexToken.group(1) + "&op=...");
    }
}
httpMessage.close();
That's what I use to access HDFS from a Windows PowerShell script (or even an Excel macro). Caveat: with Windows you have to create your Kerberos TGT on the fly, by passing to the JVM a JAAS config pointing to the appropriate keytab file. But that caveat also applies to the Java API, anyway.
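Once you have the delegation token, the lightweight app needs nothing but a plain HTTP client for its follow-up requests. Here is a sketch built on the template printed above; the host, path, and token value are placeholders:
// assumes 'delegationToken' holds the urlString value extracted by the loop above
String delegationToken = "...";                                   // placeholder
URL urlListStatus = new URL("http://<host>:<port>/webhdfs/v1/tmp"
        + "?op=LISTSTATUS&delegation=" + delegationToken);        // no SPNEGO needed at this point

HttpURLConnection cnxList = (HttpURLConnection) urlListStatus.openConnection();
BufferedReader listing = new BufferedReader(new InputStreamReader(cnxList.getInputStream()));
String line;
while ((line = listing.readLine()) != null) {
    System.out.println(line);                                     // raw JSON from WebHDFS
}
listing.close();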
You could take a look at the hadoop-auth client and create a service which does the first connection; then you might be able to grab the 'Authorization' and 'X-Hadoop-Delegation-Token' headers and cookie from it and add them to your basic client's requests.
First, you'll need to have either used kinit to authenticate your user before running the application, or you'll have to do a JAAS login for your user; this tutorial provides a pretty good overview of how to do that.
Then, to do the login to WebHDFS/HttpFS, we'll need to do something like:
URL url = new URL("http://youhost:8080/your-kerberised-resource");
AuthenticatedURL.Token token = new AuthenticatedURL.Token();
HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
String authorizationTokenString = conn.getRequestProperty("Authorization");
String delegationToken = conn.getRequestProperty("X-Hadoop-Delegation-Token");
...
// do what you have to to get your basic client connection
...
myBasicClientConnection.setRequestProperty("Authorization", authorizationTokenString);
myBasicClientConnection.setRequestProperty("Cookie", "hadoop.auth=" + token.toString());
myBasicClientConnection.setRequestProperty("X-Hadoop-Delegation-Token", delegationToken);