AWS Athena ClientExecutionTimeoutException - java

I am trying to do a POC for AWS Athena using Java. I am using the sample code given in https://docs.aws.amazon.com/athena/latest/ug/code-samples.html
BasicAWSCredentials awsCredentials = new BasicAWSCredentials("accesskey","secretkey");
private final AmazonAthenaClientBuilder builder = AmazonAthenaClientBuilder.standard()
.withRegion(Regions.US_EAST_1)
.withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
.withClientConfiguration(new ClientConfiguration().withClientExecutionTimeout(5000));
public AmazonAthena createClient()
{
return builder.build();
}
=============================================================
public class StartQueryExample
{
public static void main(String[] args) throws InterruptedException
{
AthenaClientFactory factory = new AthenaClientFactory();
AmazonAthena client = factory.createClient();
String queryExecutionId = submitAthenaQuery(client);
}
private static String submitAthenaQuery(AmazonAthena client)
{
QueryExecutionContext queryExecutionContext = new QueryExecutionContext().withDatabase("DB_Name");
ResultConfiguration resultConfiguration = new ResultConfiguration()
.withOutputLocation("s3://bucket_name/results");
StartQueryExecutionRequest startQueryExecutionRequest = new StartQueryExecutionRequest()
.withQueryString("select * from tablename")
.withQueryExecutionContext(queryExecutionContext)
.withResultConfiguration(resultConfiguration);
StartQueryExecutionResult startQueryExecutionResult = client.startQueryExecution(startQueryExecutionRequest);
return startQueryExecutionResult.getQueryExecutionId();
}
}
The sample table has only 6 rows and 2 columns.
I tried running the code using Boto3 and it works perfectly fine.
But when running from Java I get ClientExecutionTimeoutException:
Exception in thread "main" **com.amazonaws.http.timers.client.ClientExecutionTimeoutException: Client execution did not complete before the specified timeout configuration.**
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleAbortedException(AmazonHttpClient.java:813)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:703)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.athena.AmazonAthenaClient.doInvoke(AmazonAthenaClient.java:813)
at com.amazonaws.services.athena.AmazonAthenaClient.invoke(AmazonAthenaClient.java:789)
at com.amazonaws.services.athena.AmazonAthenaClient.executeStartQueryExecution(AmazonAthenaClient.java:694)
at com.amazonaws.services.athena.AmazonAthenaClient.startQueryExecution(AmazonAthenaClient.java:669)
at com.capitalone.aws.athena.StartQueryExample.submitAthenaQuery(StartQueryExample.java:60)
at com.capitalone.aws.athena.StartQueryExample.main(StartQueryExample.java:32)
I have tried running it from Eclipse, and I also tried building a JAR and running it on an EC2 instance using an Athena IAM role.
Any help would be appreciated.
Thanks

I fixed the above issue by changing these lines:
.withClientConfiguration(buildClientConfig().withClientExecutionTimeout(5000));
private ClientConfiguration buildClientConfig() {
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setProxyHost("host");
clientConfiguration.setProxyPort(port);
clientConfiguration.setProxyUsername("");
clientConfiguration.setProxyPassword("");
clientConfiguration.setPreemptiveBasicProxyAuth(false);
clientConfiguration.setConnectionTimeout(2000);
clientConfiguration.setRequestTimeout(2000);
return clientConfiguration;
}
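For completeness: startQueryExecution only submits the query, so once the client goes through the proxy successfully you still need to wait for the execution to finish before reading results. Below is a minimal polling sketch against the same SDK v1 client (the one-second poll interval is just an assumption, tune it as needed):
private static void waitForQueryToComplete(AmazonAthena client, String queryExecutionId) throws InterruptedException {
    GetQueryExecutionRequest request = new GetQueryExecutionRequest().withQueryExecutionId(queryExecutionId);
    while (true) {
        GetQueryExecutionResult result = client.getQueryExecution(request);
        String state = result.getQueryExecution().getStatus().getState();
        if (QueryExecutionState.SUCCEEDED.toString().equals(state)) {
            return; // results can now be read via GetQueryResults or from the S3 output location
        }
        if (QueryExecutionState.FAILED.toString().equals(state) || QueryExecutionState.CANCELLED.toString().equals(state)) {
            throw new RuntimeException("Query " + queryExecutionId + " ended in state " + state);
        }
        Thread.sleep(1000); // simple fixed poll interval
    }
}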

Related

Is it possible to use the DeviceCode authentication Flow with Azure Java SDK?

I successfully generate an IAuthenticationResult using the Azure msal4j library - I am presented with a device code, and when that code is typed into a browser, it shows the correct scopes / permissions,
and now I'd like to take this authentication result and pass it into the Azure-SDK authentication similar to:
val result = DeviceCodeFlow.acquireTokenDeviceCode()
val a: Azure = Azure.configure()
.withLogLevel(LogLevel.BODY_AND_HEADERS)
.authenticate(AzureCliCredentials.create(result))
.withDefaultSubscription()
Does anyone know where to look / or any samples which do this?
If you want to use the msal4j library to get an access token and then use that token to manage Azure resources with the Azure management SDK, please refer to the following code:
public class App {
public static void main(String[] args) throws Exception {
String subscriptionId = ""; // the subscription id
String domain="";// Azure AD tenant domain
DeviceCodeTokenCredentials tokencred = new DeviceCodeTokenCredentials(AzureEnvironment.AZURE,domain);
Azure azure =Azure.configure()
.withLogLevel(LogLevel.BASIC)
.authenticate(tokencred)
.withSubscription(subscriptionId);
for(AppServicePlan plan : azure.appServices().appServicePlans().list()) {
System.out.println(plan.name());
}
}
}
// define a class to extend AzureTokenCredentials
class DeviceCodeTokenCredentials extends AzureTokenCredentials{
public DeviceCodeTokenCredentials(AzureEnvironment environment, String domain) {
super(environment, domain);
}
@Override
public String getToken(String resource) throws IOException {
// use msal4j to get access token
String clientId="d8aa570a-68b3-4283-adbe-a1ad3c1dfd8d";// your Azure AD application (client) ID
String AUTHORITY = "https://login.microsoftonline.com/common/";
Set<String> SCOPE = Collections.singleton("https://management.azure.com/user_impersonation");
PublicClientApplication pca = PublicClientApplication.builder(clientId)
.authority(AUTHORITY)
.build();
Consumer<DeviceCode> deviceCodeConsumer = (DeviceCode deviceCode) ->
System.out.println(deviceCode.message());
DeviceCodeFlowParameters parameters =
DeviceCodeFlowParameters
.builder(SCOPE, deviceCodeConsumer)
.build();
IAuthenticationResult result = pca.acquireToken(parameters).join();
return result.accessToken();
}
}
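One caveat with the sketch above: the management SDK calls getToken for every request, so as written you will be prompted for a new device code each time. A hedged refinement is to cache the result and only rerun the flow when the token is near expiry (the acquireByDeviceCode helper below is hypothetical shorthand for the msal4j flow shown above):
// inside DeviceCodeTokenCredentials
private IAuthenticationResult cached; // last token returned by the device code flow

@Override
public String getToken(String resource) throws IOException {
    // reuse the cached token while it is still valid for at least another minute
    if (cached == null || cached.expiresOnDate().before(new Date(System.currentTimeMillis() + 60_000))) {
        cached = acquireByDeviceCode(); // hypothetical helper wrapping the msal4j code above
    }
    return cached.accessToken();
}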

upwork-api return 503 ioexception

I created an app for getting info from upwork.com. I use the Java lib and Upwork OAuth 1.0. The problem is that local requests to the API work fine, but when I deploy to Google Cloud my code does not work, and I get ({"error":{"code":"503","message":"Exception: IOException"}}).
I create an UpworkAuthClient to return an OAuthClient, which is then used for requests in JobClient.
run() {
UpworkAuthClient upworkClient = new UpworkAuthClient();
upworkClient.setTokenWithSecret("USER TOKEN", "USER SECRET");
OAuthClient client = upworkClient.getOAuthClient();
//set query
JobQuery jobQuery = new JobQuery();
jobQuery.setQuery("query");
List<JobQuery> jobQueries = new ArrayList<>();
jobQueries.add(jobQuery);
// Get request of job
JobClient jobClient = new JobClient(client, jobQuery);
List<Job> result = jobClient.getJob();
}
public class UpworkAuthClient {
public static final String CONSUMERKEY = "UPWORK KEY";
public static final String CONSUMERSECRET = "UPWORK SECRET";
public static final String OAUTH_CALLBACK = "https://my-app.com/main";
OAuthClient client ;
public UpworkAuthClient() {
Properties keys = new Properties();
keys.setProperty("consumerKey", CONSUMERKEY);
keys.setProperty("consumerSecret", CONSUMERSECRET);
Config config = new Config(keys);
client = new OAuthClient(config);
}
public void setTokenWithSecret (String token, String secret){
client.setTokenWithSecret(token, secret);
}
public OAuthClient getOAuthClient() {
return client;
}
public String getAuthorizationUrl() {
return this.client.getAuthorizationUrl(OAUTH_CALLBACK);
}
}
public class JobClient {
private JobQuery jobQuery;
private Search jobs;
public JobClient(OAuthClient oAuthClient, JobQuery jobQuery) {
jobs = new Search(oAuthClient);
this.jobQuery = jobQuery;
}
public List<Job> getJob() throws JSONException {
JSONObject job = jobs.find(jobQuery.getQueryParam());
List<Job> jobList = parseResponse(job); // parseResponse maps the JSON response to Job objects (not shown)
return jobList;
}
}
The local dev server works fine and I get results on my local machine, but not in the Cloud.
I would be glad to hear any ideas, thanks!
{"error":{"code":"503","message":"Exception: IOException"}}
doesn't seem like a response returned by the Upwork API. Could you please provide the full response, including the returned headers? Then we can take a more precise look into it.

Using AWS Java's SDKs, how can I terminate the CloudFormation stack of the current instance?

Using online documentation I came up with the following code to terminate the current EC2 instance:
public class Ec2Utility {
static private final String LOCAL_META_DATA_ENDPOINT = "http://169.254.169.254/latest/meta-data/";
static private final String LOCAL_INSTANCE_ID_SERVICE = "instance-id";
static public void terminateMe() throws Exception {
TerminateInstancesRequest terminateRequest = new TerminateInstancesRequest().withInstanceIds(getInstanceId());
AmazonEC2 ec2 = new AmazonEC2Client();
ec2.terminateInstances(terminateRequest);
}
static public String getInstanceId() throws Exception {
//SimpleRestClient, is an internal wrapper on http client.
SimpleRestClient client = new SimpleRestClient(LOCAL_META_DATA_ENDPOINT);
HttpResponse response = client.makeRequest(METHOD.GET, LOCAL_INSTANCE_ID_SERVICE);
return IOUtils.toString(response.getEntity().getContent(), "UTF-8");
}
}
My issue is that my EC2 instance is under an AutoScalingGroup, which is under a CloudFormation stack; that is because of my organisation's deployment standards, even though this single EC2 instance is all there is for this feature.
So I want to terminate the entire CloudFormation stack from the Java SDK. Keep in mind that I don't have the CloudFormation stack name in advance, just as I didn't have the EC2 instance ID, so I will have to get it from the code using API calls.
How can I do that, if it can be done at all?
You should be able to use the deleteStack method from the CloudFormation SDK:
DeleteStackRequest request = new DeleteStackRequest();
request.setStackName(<stack_name_to_be_deleted>);
AmazonCloudFormationClient client = new AmazonCloudFormationClient (<credentials>);
client.deleteStack(request);
If you don't have the stack name, you should be able to retrieve it from the tags of your instance:
DescribeInstancesRequest request =new DescribeInstancesRequest();
request.setInstanceIds(instancesList);
DescribeInstancesResult disresult = ec2.describeInstances(request);
List <Reservation> list = disresult.getReservations();
for (Reservation res : list) {
List<Instance> instancelist = res.getInstances();
for (Instance instance : instancelist) {
List<Tag> tags = instance.getTags();
for (Tag tag : tags) {
if (tag.getKey().equals("aws:cloudformation:stack-name")) {
tag.getValue(); // name of the stack
}
}
}
}
In the end I achieved the desired behaviour using the following util functions I wrote:
/**
* Delete the CloudFormationStack with the given name.
*
* @param stackName
* @throws Exception
*/
static public void deleteCloudFormationStack(String stackName) throws Exception {
AmazonCloudFormationClient client = new AmazonCloudFormationClient();
DeleteStackRequest deleteStackRequest = new DeleteStackRequest().withStackName(stackName);
client.deleteStack(deleteStackRequest);
}
static private final String TAG_KEY_STACK_NAME = "aws:cloudformation:stack-name";
static public String getCloudFormationStackName() throws Exception {
AmazonEC2 ec2 = new AmazonEC2Client();
String instanceId = getInstanceId();
List<Tag> tags = getEc2Tags(ec2, instanceId);
for (Tag t : tags) {
if (t.getKey().equalsIgnoreCase(TAG_KEY_STACK_NAME)) {
return t.getValue();
}
}
throw new Exception("Couldn't find stack name for instanceId:" + instanceId);
}
static private List<Tag> getEc2Tags(AmazonEC2 ec2, String instanceId) throws Exception {
DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
DescribeInstancesResult describeInstances = ec2.describeInstances(describeInstancesRequest);
List<Reservation> reservations = describeInstances.getReservations();
if (reservations.isEmpty()) {
throw new Exception("DescribeInstances didn't returned reservation for instanceId:" + instanceId);
}
List<Instance> instances = reservations.get(0).getInstances();
if (instances.isEmpty()) {
throw new Exception("DescribeInstances didn't returned instance for instanceId:" + instanceId);
}
return instances.get(0).getTags();
}
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
// Example of usage from the code:
deleteCloudFormationStack(getCloudFormationStackName());
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

ElasticSearch in-memory for testing

I would like to write some integration tests with Elasticsearch. For testing, I would like to run an in-memory ES.
I found some information in the documentation, but without an example of how to write that kind of test: Elasticsearch Reference [1.6] » Testing » Java Testing Framework » Integration tests
I also found the following article, but it's out of date: Easy JUnit testing with Elastic Search
I'm looking for an example of how to start and run ES in-memory and access it over the REST API.
Based on the second link you provided, I created this abstract test class:
@RunWith(SpringJUnit4ClassRunner.class)
public abstract class AbstractElasticsearchTest {
private static final String HTTP_PORT = "9205";
private static final String HTTP_TRANSPORT_PORT = "9305";
private static final String ES_WORKING_DIR = "target/es";
private static Node node;
@BeforeClass
public static void startElasticsearch() throws Exception {
removeOldDataDir(ES_WORKING_DIR + "/monkeys.elasticsearch");
Settings settings = Settings.builder()
.put("path.home", ES_WORKING_DIR)
.put("path.conf", ES_WORKING_DIR)
.put("path.data", ES_WORKING_DIR)
.put("path.work", ES_WORKING_DIR)
.put("path.logs", ES_WORKING_DIR)
.put("http.port", HTTP_PORT)
.put("transport.tcp.port", HTTP_TRANSPORT_PORT)
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("discovery.zen.ping.multicast.enabled", "false")
.build();
node = nodeBuilder().settings(settings).clusterName("monkeys.elasticsearch").client(false).node();
node.start();
}
@AfterClass
public static void stopElasticsearch() {
node.close();
}
private static void removeOldDataDir(String datadir) throws Exception {
File dataDir = new File(datadir);
if (dataDir.exists()) {
FileSystemUtils.deleteRecursively(dataDir);
}
}
}
In the production code, I configured an Elasticsearch client as follows. The integration test extends the abstract class defined above and configures the property elasticsearch.port as 9305 and elasticsearch.host as localhost.
@Configuration
public class ElasticsearchConfiguration {
@Bean(destroyMethod = "close")
public Client elasticsearchClient(@Value("${elasticsearch.clusterName}") String clusterName,
@Value("${elasticsearch.host}") String elasticsearchClusterHost,
@Value("${elasticsearch.port}") Integer elasticsearchClusterPort) throws UnknownHostException {
Settings settings = Settings.settingsBuilder().put("cluster.name", clusterName).build();
InetSocketTransportAddress transportAddress = new InetSocketTransportAddress(InetAddress.getByName(elasticsearchClusterHost), elasticsearchClusterPort);
return TransportClient.builder().settings(settings).build().addTransportAddress(transportAddress);
}
}
That's it. The integration test will run the production code which is configured to connect to the node started in the AbstractElasticsearchTest.startElasticsearch().
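For illustration, a minimal sketch of such an extending test (the property values mirror the setup above; the class name and the health-check assertion are just assumptions):
@ContextConfiguration(classes = ElasticsearchConfiguration.class)
@TestPropertySource(properties = {
        "elasticsearch.clusterName=monkeys.elasticsearch",
        "elasticsearch.host=localhost",
        "elasticsearch.port=9305" })
public class ExampleElasticsearchIT extends AbstractElasticsearchTest {

    @Autowired
    private Client client; // the TransportClient bean from ElasticsearchConfiguration

    @Test
    public void clusterIsReachable() {
        // asks the embedded node for its health; fails if the client cannot connect
        assertNotNull(client.admin().cluster().prepareHealth().get().getStatus());
    }
}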
In case you want to use the elasticsearch REST api, use port 9205. E.g. with Apache HttpComponents:
HttpClient httpClient = HttpClients.createDefault();
HttpPut httpPut = new HttpPut("http://localhost:9205/_template/" + templateName);
httpPut.setEntity(new FileEntity(new File("template.json")));
httpClient.execute(httpPut);
Here is my implementation:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.UUID;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
/**
*
* @author Raghu Nair
*/
public final class ElasticSearchInMemory {
private static Client client = null;
private static File tempDir = null;
private static Node elasticSearchNode = null;
public static Client getClient() {
return client;
}
public static void setUp() throws Exception {
tempDir = File.createTempFile("elasticsearch-temp", Long.toString(System.nanoTime()));
tempDir.delete();
tempDir.mkdir();
System.out.println("writing to: " + tempDir);
String clusterName = UUID.randomUUID().toString();
elasticSearchNode = NodeBuilder
.nodeBuilder()
.local(false)
.clusterName(clusterName)
.settings(
ImmutableSettings.settingsBuilder()
.put("script.disable_dynamic", "false")
.put("gateway.type", "local")
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("path.data", new File(tempDir, "data").getAbsolutePath())
.put("path.logs", new File(tempDir, "logs").getAbsolutePath())
.put("path.work", new File(tempDir, "work").getAbsolutePath())
).node();
elasticSearchNode.start();
client = elasticSearchNode.client();
}
public static void tearDown() throws Exception {
if (client != null) {
client.close();
}
if (elasticSearchNode != null) {
elasticSearchNode.stop();
elasticSearchNode.close();
}
if (tempDir != null) {
removeDirectory(tempDir);
}
}
public static void removeDirectory(File dir) throws IOException {
if (dir.isDirectory()) {
File[] files = dir.listFiles();
if (files != null && files.length > 0) {
for (File aFile : files) {
removeDirectory(aFile);
}
}
}
Files.delete(dir.toPath());
}
}
You can start ES locally with:
Settings settings = Settings.settingsBuilder()
.put("path.home", ".")
.build();
NodeBuilder.nodeBuilder().settings(settings).node();
Once ES has started, access it over REST like:
http://localhost:9200/_cat/health?v
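If you want to sanity-check that endpoint from Java, a plain-JDK sketch (no extra dependencies assumed):
URL url = new URL("http://localhost:9200/_cat/health?v");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
    // prints the one-line cluster health table returned by _cat/health
    reader.lines().forEach(System.out::println);
}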
As of 2016, embedded Elasticsearch is no longer supported.
As per a response from one of the developers in 2017, you can use the following approaches:
Use the Gradle tools Elasticsearch already has. You can read some information about this here: https://github.com/elastic/elasticsearch/issues/21119
Use the Maven plugin: https://github.com/alexcojocaru/elasticsearch-maven-plugin
Use Ant scripts like http://david.pilato.fr/blog/2016/10/18/elasticsearch-real-integration-tests-updated-for-ga
Use Docker via Testcontainers: https://www.testcontainers.org/modules/elasticsearch (see the sketch after this list)
Use Docker from Maven: https://github.com/dadoonet/fscrawler/blob/e15dddf72b1ed094dad279d492e4e0314f73683f/pom.xml#L241-L289
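As a hedged sketch of the Testcontainers route (the image tag, the port check, and the class name are assumptions; it needs the org.testcontainers:elasticsearch dependency and a running Docker daemon):
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;
import org.testcontainers.elasticsearch.ElasticsearchContainer;
import static org.junit.Assert.assertEquals;

public class ElasticsearchContainerTest {

    @Test
    public void startsAndAnswersRest() throws Exception {
        // disposable Elasticsearch started in Docker for the duration of the test
        try (ElasticsearchContainer container =
                new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.17.9")) {
            container.start();
            // getHttpHostAddress() returns host:mappedPort for the container's port 9200
            URL url = new URL("http://" + container.getHttpHostAddress() + "/_cat/health");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            assertEquals(200, conn.getResponseCode());
        }
    }
}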

Running Apache DS embedded in my application

I'm trying to run an embedded ApacheDS in my application. After reading http://directory.apache.org/apacheds/1.5/41-embedding-apacheds-into-an-application.html I built this:
public void startDirectoryService() throws Exception {
service = new DefaultDirectoryService();
service.getChangeLog().setEnabled( false );
Partition apachePartition = addPartition("apache", "dc=apache,dc=org");
addIndex(apachePartition, "objectClass", "ou", "uid");
service.startup();
// Inject the apache root entry if it does not already exist
try
{
service.getAdminSession().lookup( apachePartition.getSuffixDn() );
}
catch ( LdapNameNotFoundException lnnfe )
{
LdapDN dnApache = new LdapDN( "dc=Apache,dc=Org" );
ServerEntry entryApache = service.newEntry( dnApache );
entryApache.add( "objectClass", "top", "domain", "extensibleObject" );
entryApache.add( "dc", "Apache" );
service.getAdminSession().add( entryApache );
}
}
But I can't connect to the server after running it. What is the default port? Or am I missing something?
Here is the solution:
service = new DefaultDirectoryService();
service.getChangeLog().setEnabled( false );
Partition apachePartition = addPartition("apache", "dc=apache,dc=org");
LdapServer ldapService = new LdapServer();
ldapService.setTransports(new TcpTransport(389));
ldapService.setDirectoryService(service);
service.startup();
ldapService.start();
Here is an abbreviated version of how we use it:
File workingDirectory = ...;
Partition partition = new JdbmPartition();
partition.setId(...);
partition.setSuffix(...);
DirectoryService directoryService = new DefaultDirectoryService();
directoryService.setWorkingDirectory(workingDirectory);
directoryService.addPartition(partition);
LdapService ldapService = new LdapService();
ldapService.setSocketAcceptor(new SocketAcceptor(null));
ldapService.setIpPort(...);
ldapService.setDirectoryService(directoryService);
directoryService.startup();
ldapService.start();
I wasn't able to get it running with cringe's, Kevin's, or Jörg Pfünder's version. I constantly received NPEs from within my JUnit test. I debugged that and compiled all of them into a working solution:
public class DirContextSourceAnonAuthTest {
private static DirectoryService directoryService;
private static LdapServer ldapServer;
@BeforeClass
public static void startApacheDs() throws Exception {
String buildDirectory = System.getProperty("buildDirectory");
File workingDirectory = new File(buildDirectory, "apacheds-work");
workingDirectory.mkdir();
directoryService = new DefaultDirectoryService();
directoryService.setWorkingDirectory(workingDirectory);
SchemaPartition schemaPartition = directoryService.getSchemaService()
.getSchemaPartition();
LdifPartition ldifPartition = new LdifPartition();
String workingDirectoryPath = directoryService.getWorkingDirectory()
.getPath();
ldifPartition.setWorkingDirectory(workingDirectoryPath + "/schema");
File schemaRepository = new File(workingDirectory, "schema");
SchemaLdifExtractor extractor = new DefaultSchemaLdifExtractor(
workingDirectory);
extractor.extractOrCopy(true);
schemaPartition.setWrappedPartition(ldifPartition);
SchemaLoader loader = new LdifSchemaLoader(schemaRepository);
SchemaManager schemaManager = new DefaultSchemaManager(loader);
directoryService.setSchemaManager(schemaManager);
schemaManager.loadAllEnabled();
schemaPartition.setSchemaManager(schemaManager);
List<Throwable> errors = schemaManager.getErrors();
if (!errors.isEmpty())
throw new Exception("Schema load failed : " + errors);
JdbmPartition systemPartition = new JdbmPartition();
systemPartition.setId("system");
systemPartition.setPartitionDir(new File(directoryService
.getWorkingDirectory(), "system"));
systemPartition.setSuffix(ServerDNConstants.SYSTEM_DN);
systemPartition.setSchemaManager(schemaManager);
directoryService.setSystemPartition(systemPartition);
directoryService.setShutdownHookEnabled(false);
directoryService.getChangeLog().setEnabled(false);
ldapServer = new LdapServer();
ldapServer.setTransports(new TcpTransport(11389));
ldapServer.setDirectoryService(directoryService);
directoryService.startup();
ldapServer.start();
}
@AfterClass
public static void stopApacheDs() throws Exception {
ldapServer.stop();
directoryService.shutdown();
directoryService.getWorkingDirectory().delete();
}
@Test
public void anonAuth() throws NamingException {
DirContextSource.Builder builder = new DirContextSource.Builder(
"ldap://localhost:11389");
DirContextSource contextSource = builder.build();
DirContext context = contextSource.getDirContext();
assertNotNull(context.getNameInNamespace());
context.close();
}
}
The 2.x sample is located at the following link:
http://svn.apache.org/repos/asf/directory/sandbox/kayyagari/embedded-sample-trunk/src/main/java/org/apache/directory/seserver/EmbeddedADSVerTrunk.java
The default port for LDAP is 389.
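If you want to verify the port from plain Java, here is a minimal JNDI sketch; the host, port, and the stock ApacheDS admin account (uid=admin,ou=system / secret) are assumptions about a default setup:
Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389"); // use 10389/11389 if you bound the embedded server there
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system"); // stock ApacheDS admin DN
env.put(Context.SECURITY_CREDENTIALS, "secret");
DirContext ctx = new InitialDirContext(env);
System.out.println("Connected as " + ctx.getNameInNamespace());
ctx.close();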
Since ApacheDS 1.5.7 you will get a NullPointerException. Please use the tutorial at
http://svn.apache.org/repos/asf/directory/documentation/samples/trunk/embedded-sample
This project helped me:
Embedded sample project
I use this dependency in pom.xml:
<dependency>
<groupId>org.apache.directory.server</groupId>
<artifactId>apacheds-server-integ</artifactId>
<version>1.5.7</version>
<scope>test</scope>
</dependency>
Further, in 2.0.* the working directory and other paths are no longer defined in DirectoryService, but rather in the separate class InstanceLayout, which you need to instantiate and then set:
InstanceLayout il = new InstanceLayout(BASE_PATH);
directoryService.setInstanceLayout(il);
