Exception in thread "main" com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
The error message is self-explanatory: it says that no region was set when the AWS service client was built. Since your question does not say which AWS service you are trying to connect to, I have shown a sample that builds an Amazon DynamoDB client.
When building the AmazonDynamoDB client instance, use the code sample below.
String amazonAWSAccessKey = "yourAmazonAWSAccessKey";
String amazonAWSSecretKey = "yourAmazonAWSSecretKey";
String amazonDynamoDBEndpoint = "AmazonDynamoDBEndpoint";
String amazonAWSRegion = "amazonAWSRegion"; // e.g. us-east-1 or us-west-1

AWSStaticCredentialsProvider awsCredentialsProvider = new AWSStaticCredentialsProvider(
        new BasicAWSCredentials(amazonAWSAccessKey, amazonAWSSecretKey));

AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClientBuilder.standard()
        .withCredentials(awsCredentialsProvider)
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration(amazonDynamoDBEndpoint, amazonAWSRegion))
        .build();
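If you do not need a custom endpoint, a region-only variant is simpler (a sketch; us-east-1 is just an example value, and note the builder accepts either a region or an endpoint configuration, not both):

AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClientBuilder.standard()
        .withCredentials(awsCredentialsProvider)
        .withRegion("us-east-1") // replace with your region
        .build();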
If you want to connect to the AWS service through Eclipse, the configuration setup is documented at Set up AWS Toolkit.
How can I send messages to a topic using Azure managed identity in Java?
Right now I'm using the connectionString to send messages to the topic.
ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .sender()
        .topicName(topicName)
        .buildClient();
In the Azure SDK for Java, I could only find this example, which is for a Service Bus queue:
ServiceBusSenderAsyncClient sender = new ServiceBusClientBuilder()
        .credential("<<fully-qualified-namespace>>", credential)
        .sender()
        .queueName("<<queue-name>>")
        .buildAsyncClient();
Your second snippet is mostly correct; you're missing the step of creating the credential that you're passing to the builder. That is discussed in the Authorizing with DefaultAzureCredential section of the overview and looks like:
TokenCredential credential = new DefaultAzureCredentialBuilder()
        .build();

ServiceBusReceiverAsyncClient receiver = new ServiceBusClientBuilder()
        .credential("<<fully-qualified-namespace>>", credential)
        .receiver()
        .queueName("<<queue-name>>")
        .buildAsyncClient();
Service Bus can use any of the azure-identity credentials for authorization. DefaultAzureCredentialBuilder is demonstrated only because it builds a chained credential that allows for success in a variety of scenarios. More information can be found in the azure-identity overview.
If you'd prefer to restrict authorization to only a managed identity, you can do so by using ManagedIdentityCredentialBuilder rather than the default credential. An example of creating one can be found here. It can then be passed to Service Bus in the same manner as the default credential, as in the sketch below.
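Putting it together for your topic scenario, a minimal sketch (assuming a user-assigned identity; for a system-assigned identity, drop the clientId call):

TokenCredential credential = new ManagedIdentityCredentialBuilder()
        .clientId("<<client-id>>") // only needed for a user-assigned identity
        .build();

ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
        .credential("<<fully-qualified-namespace>>", credential)
        .sender()
        .topicName("<<topic-name>>")
        .buildClient();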
I am trying to invoke an authenticated HTTP-based Cloud Function from another Cloud Function. Let's call them CF1 and CF2 respectively, for the sake of brevity; thus I wish to invoke CF1 from CF2.
Following the example given by the Google documentation, Authenticating for Invocation, I created a new service account for CF2 and granted it the roles/cloudfunctions.admin role on CF1. I downloaded a service account key for local testing with the Functions Framework, setting it as the Application Default Credentials (ADC); thus CF2 on my local machine connects to CF1 on GCP, authenticating as CF2's service account via ADC.
I have deployed CF1 on Cloud Functions successfully, and was testing whether CF2 on my local machine could reach CF1 when I was surprised to receive an HTTP 401.
For reference, here is the code in question, which is almost identical to the samples provided by the Google Documentation:
String serviceUrl = "<cf1-url>";

GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
if (!(credentials instanceof IdTokenProvider)) {
    throw new IllegalArgumentException("Credentials are not an instance of IdTokenProvider.");
}

IdTokenCredentials tokenCredential = IdTokenCredentials.newBuilder()
        .setIdTokenProvider((IdTokenProvider) credentials)
        .setTargetAudience(serviceUrl)
        .build();

GenericUrl genericUrl = new GenericUrl(serviceUrl);
HttpCredentialsAdapter adapter = new HttpCredentialsAdapter(tokenCredential);
HttpTransport transport = new NetHttpTransport();
com.google.api.client.http.HttpRequest request = transport.createRequestFactory(adapter).buildGetRequest(genericUrl);
com.google.api.client.http.HttpResponse response = request.execute();
I tried referring to:
Google Cloud Platform - cloud functions API - 401 Unauthorized
Cloud Function Permissions (403 when invoking from another cloud function)
Google Cloud Function Authorization Failing
but I was not able to find a solution to my problem from those questions.
Further testing revealed that the identity token generated via the client SDK (tokenCredential.getIdToken().getTokenValue()) differs from the one produced by the gcloud CLI command gcloud auth print-identity-token. I could use the gcloud-generated identity token to invoke CF1 directly (e.g. via Postman/cURL, authenticated as CF2's service account), but not the identity token printed by the client SDK. This was a surprise, as I am using CF2's service account key as the ADC and have also authorized it for gcloud access via gcloud auth activate-service-account.
It seems to me that there is no issue with the permissions of the service accounts and cloud functions, as I can directly invoke CF1; thus it would appear to be an issue with the code. However, I am unable to determine the cause of the 401 error.
The target audience, your serviceUrl, must be the raw URL, i.e. the one provided by the Cloud Functions service.
If you add your parameters (query or path) to it, it won't work.
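In other words, keep the audience and the request URL separate. A minimal sketch (the function URL shown is a hypothetical placeholder):

// Audience: the bare function URL, exactly as the Cloud Functions console reports it.
String audience = "https://<region>-<project>.cloudfunctions.net/cf1";
// Request URL: query or path parameters belong here, not in the audience.
String requestUrl = audience + "?foo=bar";

IdTokenCredentials tokenCredential = IdTokenCredentials.newBuilder()
        .setIdTokenProvider((IdTokenProvider) credentials)
        .setTargetAudience(audience) // no query or path parameters here
        .build();
GenericUrl genericUrl = new GenericUrl(requestUrl);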
I have set up my Elastic Cloud service through Google Cloud and have set up an Elasticsearch instance.
I can upload my data to Elasticsearch and query it just fine. However, when I try to connect to the Elasticsearch instance through my Java client, I keep getting java.io.IOException and java.net.UnknownHostException exceptions.
24-Jun-2020 18:55:52.657 SEVERE [http-nio-8181-exec-8] org.apache.catalina.core.StandardWrapperValve.invoke Servlet.service() for servlet [dispatcher] in context with path [] threw exception
java.io.IOException: <Elastic Search endpoint>
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:828)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:248)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1611)
Caused by: java.net.UnknownHostException: <Elastic Search Endpoint>
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager$InternalAddressResolver.resolveRemoteAddress(PoolingNHttpClientConnectionManager.java:664)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager$InternalAddressResolver.resolveRemoteAddress(PoolingNHttpClientConnectionManager.java:635)
at org.apache.http.nio.pool.AbstractNIOConnPool.processPendingRequest(AbstractNIOConnPool.java:474)
at org.apache.http.nio.pool.AbstractNIOConnPool.lease(AbstractNIOConnPool.java:280)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.requestConnection(PoolingNHttpClientConnectionManager.java:295)
at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.requestConnection(AbstractClientExchangeHandler.java:377)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.start(DefaultClientExchangeHandlerImpl.java:129)
at org.apache.http.impl.nio.client.InternalHttpAsyncClient.execute(InternalHttpAsyncClient.java:141)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:244)
... 49 more
And my Java code:
String ELASTIC_SEARCH_USER_NAME = "elastic";
String ELASTIC_SEARCH_PASSWORD = "<Password>";
String ELASTIC_SEARCH_ENDPOINT_URL = "https://92d5f385db294fb4b7ff335201d0a854.asia-northeast1.gcp.cloud.es.io";
Integer ELASTIC_SEARCH_PORT = 9243;

final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
        new UsernamePasswordCredentials(ELASTIC_SEARCH_USER_NAME, ELASTIC_SEARCH_PASSWORD));

RestClientBuilder builder = RestClient.builder(new HttpHost(ELASTIC_SEARCH_ENDPOINT_URL, ELASTIC_SEARCH_PORT, "https"))
        .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
            @Override
            public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
            }
        });
RestHighLevelClient highLevelClient = new RestHighLevelClient(builder);
Strangely, I have tried pinging my endpoint URL from the command line, but cmd is unable to ping it.
Is there something I need to set up in my Elastic Stack Console for my Java Client to request queries?
Thank you!
Are you sure that your Elasticsearch is running on port 9243? This is on GCP, but for AWS-managed ES there is no need to give the port number; the URL alone is sufficient. Change the line below so that no explicit port is passed (in HttpHost, -1 means the scheme's default port) and see if it works, as it does in AWS ES where we don't have to mention the port:
RestClientBuilder builder = RestClient.builder(new HttpHost(ELASTIC_SEARCH_ENDPOINT_URL, -1, "https"))
I asked the folks over at the elastic.co forums, and they suggested that I drop the "https://" portion from ELASTIC_SEARCH_ENDPOINT_URL, changing
String ELASTIC_SEARCH_ENDPOINT_URL = "https://92d5f385db294fb4b7ff335201d0a854.asia-northeast1.gcp.cloud.es.io";
to
String ELASTIC_SEARCH_ENDPOINT_URL = "92d5f385db294fb4b7ff335201d0a854.asia-northeast1.gcp.cloud.es.io";
and keeping the rest of the code the same, it worked! This makes sense: HttpHost expects a bare hostname (the scheme is passed separately as the third argument), so embedding "https://" in the host string breaks DNS resolution, hence the UnknownHostException.
The forum post I made if anyone wants to take a look
The AWS Glacier API gives me an error about not finding the region, even when I specify it explicitly:
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");

AmazonGlacier glacierClient = AmazonGlacierClientBuilder.standard()
        .withEndpointConfiguration(endpointConfig)
        .withCredentials(credentials)
        .build();

ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
        .withGlacierClient(glacierClient)
        .build();

UploadResult result = xferMgr.upload("Data_Full", "my archive " + (new Date()), new File("C:\\myBigFile"));
I get this stack trace:
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
    at com.amazonaws.services.sqs.AmazonSQSClientBuilder.defaultClient(AmazonSQSClientBuilder.java:44)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.resolveSQSClient(ArchiveTransferManagerBuilder.java:129)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.getParams(ArchiveTransferManagerBuilder.java:135)
    at com.amazonaws.services.glacier.transfer.ArchiveTransferManagerBuilder.build(ArchiveTransferManagerBuilder.java:143)
Note that I use the API to list vaults and it works:
AmazonGlacierClientBuilder clientbuilder = AmazonGlacierClientBuilder.standard();
EndpointConfiguration endpointConfig = new EndpointConfiguration("https://glacier.us-east-2.amazonaws.com/", "us-east-2");
clientbuilder.withEndpointConfiguration(endpointConfig);
ProfilesConfigFile cf = new ProfilesConfigFile();
AWSCredentialsProvider credentials = new ProfileCredentialsProvider(cf, "My AWS Profile Name");
clientbuilder.setCredentials(credentials);
AmazonGlacier glacierClient = CustomAmazonGlacierClientBuilder.buildCustomAmazonGlacierClient();
ListVaultsRequest request = new ListVaultsRequest();
ListVaultsResult result = glacierClient.listVaults(request);
I recently downloaded the AWS / Glacier libraries as an Eclipse plugin. It shows the .jar version as aws-java-sdk-opensdk-1.11.130.jar.
Does anyone have any insight as to what I could put in the code to satisfy the region requirement? I'd rather do it programmatically.
I solved this by adding the AWS_REGION environment variable (e.g. us-east-2). When using Eclipse, you can add this via Run --> Run Configurations.
I also updated Eclipse and the AWS Eclipse plugins using the Help --> Check for Updates feature.
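If you'd rather keep it in code: the stack trace shows the region lookup fails while ArchiveTransferManagerBuilder builds a default SQS client (resolveSQSClient), which ignores the Glacier client's endpoint configuration. A sketch that supplies explicitly configured SQS and SNS clients through the builder's withSqsClient/withSnsClient setters (assuming your SDK version exposes them) should avoid the default region lookup entirely:

AmazonSQS sqsClient = AmazonSQSClientBuilder.standard()
        .withRegion("us-east-2")
        .withCredentials(credentials)
        .build();
AmazonSNS snsClient = AmazonSNSClientBuilder.standard()
        .withRegion("us-east-2")
        .withCredentials(credentials)
        .build();

ArchiveTransferManager xferMgr = new ArchiveTransferManagerBuilder()
        .withGlacierClient(glacierClient)
        .withSqsClient(sqsClient)
        .withSnsClient(snsClient)
        .build();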
I am using the Java SDK from AWS to create a Polly client, like this:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("<IAM access key>", "<IAM secret key>");
AmazonPollyClient apClient = (AmazonPollyClient) AmazonPollyClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();
SynthesizeSpeechRequest tssRequest = new SynthesizeSpeechRequest();
tssRequest.setText(<text>);
tssRequest.setVoiceId(<voiceid>);
tssRequest.setOutputFormat(OutputFormat.Mp3);
SynthesizeSpeechResult tssResult = apClient.synthesizeSpeech(tssRequest);
When I run this code, I get the following error message:
Exception in thread "main" com.amazonaws.SdkClientException: Unable to load region information from any provider in the chain
    at com.amazonaws.regions.AwsRegionProviderChain.getRegion(AwsRegionProviderChain.java:56)
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:319)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:295)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:38)
    at com.eoffice.aws.speech.Polly.main(Polly.java:42)
I checked the credentials using the IAM Policy Simulator; that works fine, and the permissions are OK.
The method to set the region on the client builder is not visible for AmazonPollyClientBuilder, so I have no (Java SDK) way to specify the region.
Update:
When I just ask the DefaultAwsRegionProviderChain, I get the same error message:
DefaultAwsRegionProviderChain defaultAwsRegionProviderChain = new DefaultAwsRegionProviderChain();
System.out.println(defaultAwsRegionProviderChain.getRegion());
Update 2:
When I create a config file in the .aws folder with the following content:
[default]
region = eu-west-1
It works, but I need a way to set this without relying on the file system.
Providing a system environment variable named AWS_REGION did the trick.
See the screenshot for the configuration in IBM Bluemix.
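If environment variables are not an option either, newer 1.11.x releases of the SDK also consult the aws.region system property in the default region provider chain; whether your version does is an assumption worth verifying. A sketch:

// Assumption: this SDK version's region chain includes AwsSystemPropertyRegionProvider.
System.setProperty("aws.region", "eu-west-1");
AmazonPollyClient apClient = (AmazonPollyClient) AmazonPollyClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();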
I think you can set the region like this:
AmazonPollyClient apClient = (AmazonPollyClient) AmazonPollyClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withRegion("<aws-region>")
        .build();
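If withRegion really isn't visible in your SDK version (as the question notes), an endpoint configuration should also satisfy the region requirement, mirroring the DynamoDB and Glacier examples above. A sketch (eu-west-1 and its standard Polly endpoint are example values):

AmazonPollyClient apClient = (AmazonPollyClient) AmazonPollyClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "https://polly.eu-west-1.amazonaws.com", "eu-west-1"))
        .build();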