DynamoDB requestHandlers exception - Java

I have hit a cryptic exception when running DynamoDB inserts in the cloud. Any help or clues as to how to debug such an error?
Background
The code I am running:
successfully inserts data into DynamoDB when run from my local machine, but
fails abruptly with an authentication error when run in the cloud in a MapReduce job over EMR, and
uses a URL endpoint for authentication.
I simply create credentials like so:
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new BasicAWSCredentials(
        "XXXX",
        "XXXXXXXXXXX"));
client.setEndpoint("https://dynamodb.eu-west-1.amazonaws.com");
The exception I'm getting is below:
Exception in thread "main" java.lang.NoSuchFieldError: requestHandlers
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.init(AWSSecurityTokenServiceClient.java:214)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.<init>(AWSSecurityTokenServiceClient.java:160)
at com.amazonaws.auth.STSSessionCredentialsProvider.<init>(STSSessionCredentialsProvider.java:73)
at com.amazonaws.auth.SessionCredentialsProviderFactory.getSessionCredentialsProvider(SessionCredentialsProviderFactory.java:96)
at com.amazonaws.services.dynamodb.AmazonDynamoDBClient.setEndpoint(AmazonDynamoDBClient.java:857)
at com.amazonaws.services.dynamodb.AmazonDynamoDBClient.init(AmazonDynamoDBClient.java:262)
at com.amazonaws.services.dynamodb.AmazonDynamoDBClient.<init>(AmazonDynamoDBClient.java:181)
at com.amazonaws.services.dynamodb.AmazonDynamoDBClient.<init>(AmazonDynamoDBClient.java:142)

The "real" answer here, is that, dynamodb clients which don't match up with the latest or current versions can exhibit odd reflection / class loading error when we attempt to use them in a modern environment.
AWS jars exist on the class path of older EMR AMI instances can conflict with proper (latest) AWS jars used by hadoop job which invokes a non-EMR service (i.e. such as dynamodb, in our case).
On my older AMI instance, I simply issued:
mv $HOME/lib/aws-java-sdk-1.1.1.jar $HOME/lib/aws-java-sdk-1.1.1.jar.old
to resolve the issue on a single-node cluster.
The root cause of this error was that I was using an older Ruby elastic-mapreduce client, which led to the creation of older AMI versions in my EMR cloud, which had obsolete aws-sdk jars on the classpath.
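If you need to confirm which jar is actually winning on the classpath, a small diagnostic like the sketch below (assuming you can run a class inside the same job environment) prints where the conflicting class was loaded from:

// Diagnostic sketch: print which jar the STS client class came from. If it
// points at an old aws-java-sdk jar, that jar is shadowing the one you bundled.
public class WhichJar {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName(
                "com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient");
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}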

Related

Failed to get driver instance for jdbcUrl=jdbc:postgresql:// on AWS EC2 instance

I have a WAR file deployed on a Tomcat server running on an AWS EC2 instance. Whenever I try to restart Tomcat I get this exception:
Unable to build Hibernate SessionFactory; nested exception is java.lang.RuntimeException: Failed to get driver instance for jdbcUrl=jdbc:postgresql://xyz.com:5432/db
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1796)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
Earlier I thought it might be happening because of a wrong URL, but I ran the application locally with the same application.properties file and it worked; I was able to connect to the DB server, which is itself running on another AWS EC2 instance. So I don't think it's happening because of a wrong URL.
Is it because one of my AWS EC2 instances is not able to connect to the DB server instance? How can I resolve this?
EDIT: I have tried adding inbound rules for both AWS EC2 instances, but the issue persists.
Regards
Oops, I should have known this. The issue here is that the PostgreSQL JDBC driver was missing, which is exactly what the error kept saying.
I just added the PostgreSQL driver jar to PATH_TO_TOMCAT_DIR/lib, and it worked.
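If you want to verify the driver is visible before redeploying, a one-off check like this sketch (assuming the standard driver class name org.postgresql.Driver) fails fast when the jar is missing:

// Sketch: confirm the PostgreSQL JDBC driver is on the classpath.
public class DriverCheck {
    public static void main(String[] args) {
        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("PostgreSQL driver found");
        } catch (ClassNotFoundException e) {
            System.err.println("Driver missing: add the postgresql jar to Tomcat's lib/");
        }
    }
}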
Anyway, posting this so that it might help others in the future.

"No subject alternative names present" error when connecting to private ip over https

I have a Java application running as an Azure App Service. We would like this app to be able to connect to an Apache server running on a VM which is in the same VNet that the Java application is integrated with. The app can communicate fine with this Apache server over its public domain. However, when changing to the private IP (e.g. https://<private ip>/path) I get the following error:
[INFO] org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://<my private ip>/path": No subject alternative names present; nested exception is javax.net.ssl.SSLHandshakeException: No subject alternative names present
I've looked at this myself and I know this issue is due to Java refusing the connection because the private IP is not among the names listed in the SSL certificate.
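To confirm the diagnosis, a small sketch like the one below (connecting to the public domain, which already works; the URL is a hypothetical placeholder) prints the subject alternative names the certificate actually carries:

import java.net.URL;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;

// Sketch: list the Subject Alternative Names on the server certificate.
public class PrintSans {
    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://my-public-domain.example/path").openConnection();
        conn.connect();
        for (Certificate c : conn.getServerCertificates()) {
            System.out.println(((X509Certificate) c).getSubjectAlternativeNames());
        }
    }
}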
Any suggestions on how to work around this without changing the certificate or making any changes to the Java code? (For work reasons I am unable to modify the code of the Java app itself.)
I've tried adding the property -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true, as suggested here, to the startup command for the Java application, as seen below:
-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
The error is still occurring. A colleague has also suggested using the hosts file, but I don't think this is possible for Azure Web Apps.
Hope this is clear. Thanks

Cassandra Talend job run via Java throwing errors

I have a 3-node Apache Cassandra cluster where we do data-loading operations via a Java prepared statement. While running the job we face the following error:
INSERT INTO "abc" () VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
is not prepared on /xx.xx.xx.xx:9042, preparing before retrying executing.
Seeing this message a few times is fine, but seeing it a lot may be source of performance problems. 
This query is used in Java code that calls the Talend jars, and the data-loading job is taking a lot of time to complete.
The above error message shows for all 3 Cassandra nodes in the cluster. Below is the environment setup:
Apache Cassandra version - 3.8.0
Talend Version - 6.4
Apache Cassandra driver - cassandra-driver-core-3.0.0-shaded.jar
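For reference, the usual way to avoid the driver re-preparing on every execution is to prepare each distinct query once and reuse the PreparedStatement. Below is a sketch against the DataStax 3.x driver; the keyspace "ks", the column names, and the loop are assumptions, since the real column list is elided above:

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Sketch: prepare once, then bind and execute many times.
public class LoadAbc {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("xx.xx.xx.xx").build();
             Session session = cluster.connect("ks")) {
            PreparedStatement ps =
                    session.prepare("INSERT INTO abc (id, payload) VALUES (?, ?)");
            for (int i = 0; i < 1000; i++) {
                BoundStatement bound = ps.bind(i, "row-" + i);
                session.execute(bound);
            }
        }
    }
}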

Correct way to establish a Redis connection with Lettuce in Java 8

I am trying to build a Denodo Java stored procedure that communicates with Redis via Lettuce.
I am using the Denodo 4e Eclipse extension and Oxygen, as recommended by Denodo.
I am clearly missing something because all of the documentation indicates that both
int port = 6379;
String host = "127.0.0.1";
RedisURI uri = RedisURI.Builder.redis(host, port).withDatabase(1).build();
RedisClient client = RedisClient.create(uri);
and
RedisClient client = RedisClient.create("redis://localhost:6379");
are throwing errors that are obscured by the debugging method. All I know is that in the first case the builder fails and in the second the client fails.
When I invoke redis-cli I see that Redis is running at 127.0.0.1:6379 and I am able to get the test keys I have set.
user@system:~$ redis-cli
127.0.0.1:6379> get datum1
"datum2"
I am using a default redis.conf and running eclipse, denodo, and redis on the same machine.
Bind in redis.conf is 127.0.0.1 ::1
timeout is disabled (0)
I don't normally develop in Java, so I'm hoping I am clearly doing something wrong rather than having to actually do this in a non-Denodo project and sort out proper builds and debugging.
So, a few rookie mistakes here for anyone new to Java or Denodo.
The Java mistake was using catch (Exception e), which apparently doesn't catch everything. Moving to catch (Throwable t) allowed me to get a useful stack trace, though I understand this is not recommended outside of debugging, as catching Throwable will also catch underlying JVM errors and things you have no business dealing with in code.
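As an illustration, the debug-only pattern looks like the sketch below (assuming Lettuce 5.x, where RedisClient lives in io.lettuce.core):

import io.lettuce.core.RedisClient;

// Debug-only sketch: Throwable also covers Errors such as NoClassDefFoundError,
// which a plain catch (Exception e) silently misses.
public class DebugCatch {
    public static void main(String[] args) {
        try {
            RedisClient client = RedisClient.create("redis://localhost:6379");
            client.shutdown();
        } catch (Throwable t) {   // for debugging only; don't ship this
            t.printStackTrace();  // surfaces e.g. java.lang.ClassNotFoundException
        }
    }
}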
The underlying issue was a java.lang.ClassNotFoundException for a dependency.
The Denodo mistake was that Java stored procedures in Denodo either need to have their dependency jars imported or should use an uber/fat jar.
I used the Maven Assembly Plugin to build with Maven instead of using the denodo4e deploy tool, then copied the jar to a procs folder under the Denodo home and browsed to it when creating a new stored procedure in the VDP Admin.

How to connect to the k8s API server from within a pod using the k8s Java client

Context
I have a java application built as a docker image.
The image is deployed in a k8s cluster.
In the java application, I want to connect to the api server and save something in Secrets.
How can I do that with k8s java client?
Current Attempts
The k8s official document says:
From within a pod the recommended ways to connect to API are:
run kubectl proxy in a sidecar container in the pod, or as a background process within the container. This proxies the Kubernetes API to the localhost interface of the pod, so that other processes in any container of the pod can access it.
use the Go client library, and create a client using the rest.InClusterConfig() and kubernetes.NewForConfig() functions. They handle locating and authenticating to the apiserver.
But I can't find similar functions or similar examples in the Java client.
With the assumption that your Pod has a serviceAccount automounted (which is the default unless you have specified otherwise), the ClientBuilder.cluster() method reads the API URL from the environment, reads the cluster CA from the well-known location, and similarly the ServiceAccount token from that same location.
Then, while not exactly "create a Secret", this PatchExample performs a mutation operation that one could generalize into "create or update a Secret".
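To make that concrete, here is a sketch using the official kubernetes client-java from inside a pod; the namespace, secret name, and the trailing optional parameters of createNamespacedSecret are assumptions (the exact parameter list varies between client versions):

import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1ObjectMeta;
import io.kubernetes.client.openapi.models.V1Secret;
import io.kubernetes.client.util.ClientBuilder;

// Sketch: in-cluster config via the mounted service account, then create a Secret.
public class CreateSecret {
    public static void main(String[] args) throws Exception {
        ApiClient client = ClientBuilder.cluster().build(); // reads env + token/CA files
        CoreV1Api api = new CoreV1Api(client);
        V1Secret secret = new V1Secret()
                .metadata(new V1ObjectMeta().name("my-secret").namespace("default"))
                .putStringDataItem("key", "value");
        // Trailing nulls are the optional pretty/dryRun/fieldManager parameters
        // in the client version assumed here; check your version's signature.
        api.createNamespacedSecret("default", secret, null, null, null);
    }
}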
