Unable to connect to a Kerberos-secured Phoenix datasource - java

I want to test pulling data from Apache HBase with a Java application. The application will use SQL-like queries via JDBC to Apache Phoenix.
I've set up my Hadoop "cluster" on one machine using Ambari and the HortonWorks HDP 2.5 platform. I've also Kerberized the environment using Ambari's wizard, where my KDC is a separate machine running Windows Active Directory.
Ambari shows no errors, and I am able to use sqlline.py to successfully make SQL-like calls to HBase through Phoenix. I set up some example tables this way (cf. HortonWorks Phoenix & ODBC tutorial, although I had to kinit etc. first).
However, I am having problems creating a JDBC datasource to be used by the Java application. In my case, I am planning to host the webapp on WildFly 10.1 and I am developing with Eclipse JEE with the JBoss Tools plugin.
These are the steps I used to create the datasource:
Datasource Explorer > Database Connections > New...
Connection Profile: Generic JDBC
URL: jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure:HTTP/hbase.eaa.local@EAA.LOCAL:jboss.server.temp.dir/spnego.service.keytab
Username: hbase (I'm unsure what to put here)
Driver: I've created a new driver of the type "Generic JDBC Driver", and I had to add JAR files for all of the dependencies of phoenix-core-[version].jar. The Driver Class is org.apache.phoenix.jdbc.PhoenixDriver.
I got the connection string from an existing post in the HortonWorks community, which is why it includes the Kerberos principal and keytab used for the connection.
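For reference, this is roughly the standalone equivalent of the datasource that I would expect to work, which might help isolate the problem from Eclipse (a sketch only; the keytab path is a placeholder for wherever the file actually lives):

import java.sql.Connection;
import java.sql.DriverManager;

public class PhoenixKerberosTest {
    public static void main(String[] args) throws Exception {
        // Same quorum, znode, principal and keytab as in the datasource URL above
        String url = "jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure"
                + ":HTTP/hbase.eaa.local@EAA.LOCAL"
                + ":/path/to/spnego.service.keytab"; // placeholder path
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        try (Connection conn = DriverManager.getConnection(url)) {
            // If this prints, the Kerberos login and the ZooKeeper/HBase handshake worked
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductName());
        }
    }
}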
When I try to test the datasource connection, it churns for about 5 minutes before spitting out an error message (after something like 35 attempts). The client returns Java exceptions saying the sockets are in a "closing state", and the ZooKeeper logs are less helpful:
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560217 with negotiated timeout 40000 for client /192.168.40.3:52674
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43860
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:43860
INFO [Thread-1448:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43860 (no session established for client)
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /192.168.40.41:43922
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560218 with negotiated timeout 40000 for client /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=hbase/hdfs.eaa.local@EAA.LOCAL; authorizationID=hbase/hdfs.eaa.local@EAA.LOCAL.
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@134] - Setting authorizedID: hbase
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@964] - adding SASL authorization for authorizationID: hbase
INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43922 which had sessionid 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:44008
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:44008
INFO [Thread-1449:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:44008 (no session established for client)
NB. 192.168.40.3 is the VPN server, which my host machine is using to tunnel into the environment with the Hadoop cluster. 192.168.40.41 is the machine running the cluster, hdfs.eaa.local.
There are plenty of accepted socket connections which are then immediately closed. Occasionally the client authenticates successfully (so I'm confident in my Kerberos settings) but then there is a session termination immediately afterward.
I've also tried to deploy the datasource directly in WildFly with jboss-cli and modifications to standalone.xml and module.xml. But I get lots of problems with missing dependencies that I'm not sure how to resolve without creating a new module for each JAR that phoenix-core-[version].jar requires (and there are a lot). I followed this guide.
What can I do to fix the issue or diagnose further? I've been pulling my hair out for a couple of days now.

You need to add hbase-site.xml and core-site.xml to your classpath.
See How to connect to a Kerberos-secured Apache Phoenix data source with WildFly? for more information.
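As a quick sanity check (a minimal sketch; run it from the same deployment so it uses the same classloader as the datasource), confirm that the files are actually visible as classpath resources:

public class ClasspathCheck {
    public static void main(String[] args) {
        // If either of these prints null, the Phoenix driver cannot see your
        // cluster configuration and falls back to insecure defaults, which the
        // Kerberized ZooKeeper/HBase will keep rejecting.
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        System.out.println("hbase-site.xml -> " + cl.getResource("hbase-site.xml"));
        System.out.println("core-site.xml  -> " + cl.getResource("core-site.xml"));
    }
}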

Related

Accessing Neo4j/GrapheneDB (Dev free plan) on Heroku from Micronaut Java app fails: Connection to database terminated

Currently I'm struggling with Neo4j/GrapheneDB (Dev free plan) on the Heroku platform.
Launching my app locally via "heroku local" works fine; it connects (Neo4j Java Driver 4) to a Neo4j 3.5.18 (running from the Docker image "neo4j:3.5").
My app is built using the Micronaut framework, with its Neo4j support. Launching my app on the Heroku platform succeeds; I'm using the Gradle Heroku plugin for this task.
But accessing the database with business operations (and health checks) fails with an exception like this:
INFO Driver - Direct driver instance 1523082263 created for server address hobby-[...]ldel.dbs.graphenedb.com:24787
WARN RetryLogic - Transaction failed and will be retried in 1032ms
org.neo4j.driver.exceptions.ServiceUnavailableException: Connection to the database terminated. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0.
at org.neo4j.driver.internal.util.Futures.blockingGet(Futures.java:143)
at org.neo4j.driver.internal.InternalSession.beginTransaction(InternalSession.java:163)
at org.neo4j.driver.internal.InternalSession.lambda$transaction$4(InternalSession.java:147)
at org.neo4j.driver.internal.retry.ExponentialBackoffRetryLogic.retry(ExponentialBackoffRetryLogic.java:101)
at org.neo4j.driver.internal.InternalSession.transaction(InternalSession.java:146)
at org.neo4j.driver.internal.InternalSession.readTransaction(InternalSession.java:112)
at org.neo4j.driver.internal.InternalSession.readTransaction(InternalSession.java:106)
at PersonController.logInfoOf(PersonController.java:57)
at PersonController.<init>(PersonController.java:50)
at $PersonControllerDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1814)
[...]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
Suppressed: org.neo4j.driver.internal.util.ErrorUtil$InternalExceptionCause: null
at org.neo4j.driver.internal.util.ErrorUtil.newConnectionTerminatedError(ErrorUtil.java:52)
at org.neo4j.driver.internal.async.connection.HandshakeHandler.channelInactive(HandshakeHandler.java:81)
[...]
at org.neo4j.driver.internal.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 common frames omitted
I'm sure the login credentials from the OS environment variables GRAPHENEDB_BOLT_URL, GRAPHENEDB_BOLT_USER, and GRAPHENEDB_BOLT_PASSWORD are injected into the app correctly; I've verified it with some debug log statements:
State changed from starting to up
INFO io.micronaut.runtime.Micronaut - Startup completed in 2360ms. Server Running: http://localhost:7382
INFO Application - Neo4j Bolt URIs: [bolt://hobby-[...]ldel.dbs.graphenedb.com:24787]
INFO Application - Neo4j Bolt encrypted? false
INFO Application - Neo4j Bolt trust strategy: TRUST_SYSTEM_CA_SIGNED_CERTIFICATES
INFO Application - Changed trust strategy to: TRUST_ALL_CERTIFICATES
INFO Application - Env.: GRAPHENEDB_BOLT_URL='bolt://hobby-[...]ldel.dbs.graphenedb.com:24787'
INFO Application - Env.: GRAPHENEDB_BOLT_USER='app1[...]hdai'
INFO Application - Env.: GRAPHENEDB_BOLT_PASSWORD of length 31
I've also tried restarting the GrapheneDB instance via the Heroku plugin website, but with the same negative result.
What's going wrong here? Are there any ways to further nail down the root cause?
Thanks
Christian
I had a closer look at this, and it seems that you need driver encryption turned on for the GrapheneDB instances. This can be configured in application.yml as below:
neo4j:
  encryption: true
For reference, here is a sample project: https://github.com/aldrinm/micronaut-neo4j-heroku
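If you construct the driver yourself rather than through Micronaut's configuration, the equivalent setting with Java Driver 4 would look roughly like this (a sketch reading the same environment variables as above):

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Config;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;

public class EncryptedDriverFactory {
    static Driver create() {
        // GrapheneDB Bolt endpoints expect TLS, so turn encryption on explicitly
        Config config = Config.builder()
                .withEncryption()
                .build();
        return GraphDatabase.driver(
                System.getenv("GRAPHENEDB_BOLT_URL"),
                AuthTokens.basic(System.getenv("GRAPHENEDB_BOLT_USER"),
                                 System.getenv("GRAPHENEDB_BOLT_PASSWORD")),
                config);
    }
}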

RabbitMQ is running still getting connection refused Connect Exception

I have installed RabbitMQ 3.8.3 on Windows 10, and I can see it is running as a Windows service.
When I try to access http://localhost:15672/ it is unreachable.
I have enabled the RabbitMQ management plugin from the sbin directory:
rabbitmq-plugins enable rabbitmq_management
But http://localhost:15672/ is still unreachable.
I am getting the following error in the Java service:
org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
I have also run the command to see if anything is running on port 5672:
Command : netstat -ano | find "5672"
Response : TCP 0.0.0.0:25672 0.0.0.0:0 LISTENING 2900
How do I fix this?
The management UI can be accessed using a Web browser at http://{node-hostname}:15672/
The management plugin is included in the RabbitMQ distribution. Like any other plugin, it must be enabled before it can be used.
Please refer to the URL below:
https://www.rabbitmq.com/management.html
Stop RabbitMQ.
Create the RABBITMQ_BASE environment variable and set its value to the full path of the RabbitMQ server (e.g. C:\Program Files\RabbitMQ Server).
Restart RabbitMQ and make sure the management plugin is enabled.
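To take the Spring layer out of the picture, you can also test connectivity with the plain RabbitMQ Java client; a minimal sketch against a default localhost broker:

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RabbitConnectivityCheck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setPort(5672); // AMQP port; 25672 is the inter-node/CLI port, 15672 the management UI
        try (Connection connection = factory.newConnection()) {
            System.out.println("Connected: " + connection.isOpen());
        }
    }
}

Note that in your netstat output only 25672 (the inter-node/CLI port) is listening, not 5672 itself, which suggests the AMQP listener never started; the RABBITMQ_BASE fix above should address that.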

Error NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042

I recently started learning Cassandra and am going through online tutorials for Cassandra with the DataStax Java driver. I am simply trying to connect to a localhost node on my laptop.
Setup details:
OS - Windows 7
Cassandra version - 2.1-SNAPSHOT
DataStax Java driver version - 3.1.0
I am able to connect to the local node using the cqlsh and cassandra-cli clients.
I can also see the default keyspaces system and system_traces.
Below is the cassandra server log
INFO 12:12:51 Node localhost/127.0.0.1 state jump to normal
INFO 12:12:51 Startup completed! Now serving reads.
INFO 12:12:51 Starting listening for CQL clients on /0.0.0.0:9042...
INFO 12:12:51 Binding thrift service to /0.0.0.0:9160
INFO 12:12:51 Using TFramedTransport with a max frame size of 15728640 bytes.
INFO 12:12:51 Using synchronous/threadpool thrift server on 0.0.0.0 : 9160
INFO 12:12:51 Listening for thrift clients...
I am trying the simple code below:
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Metadata metadata = cluster.getMetadata();
This code throws the exception below (part of the trace):
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily schema_usertypes))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
I have been through all the previously asked questions. Most of the answers suggest changing the configuration in the cassandra.yaml file.
My cassandra.yaml configuration is:
start_native_transport: true
native_transport_port: 9042
listen_address: localhost
rpc_address: 0.0.0.0
rpc_port: 9160
Most of the answers suggest using the machine's actual IP address for rpc_address, which I tried, but it did not work.
Here are the questions I have been through:
Question One, Question two, Question three, Topic, Connection requirement, Question four.
This page lists the compatibility of DataStax Java drivers with Cassandra versions, so I changed the driver version to 2.1.1 (as I am using Cassandra 2.1), but it did not work.
Please suggest what could be wrong.
The error with schema_usertypes looks like the driver is trying to query a table that isn't there, maybe related to this Jira.
You say you are running a 2.1-SNAPSHOT of Cassandra? Try Cassandra 2.1.15. Something seems off on your Cassandra node; the driver is able to talk to your cluster, since it tries to look up data inside schema_usertypes.
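Once the node is on a release build, a minimal connection check with an explicit port (sketched here against driver 2.1.x to match the cluster version) should succeed:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CassandraConnectivityCheck {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042) // native transport port from cassandra.yaml
                .build();
             Session session = cluster.connect()) {
            // If this prints, the native transport and the schema lookups are both fine
            System.out.println("Connected to cluster: "
                    + cluster.getMetadata().getClusterName());
        }
    }
}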

Amazon ElastiCache Autodiscovery - Client is not initialized

I am trying to test Amazon's new Memcached client with AutoDiscovery. I have one memcached node which I am able to connect to using XMemcached 1.3.5 as well as a standard SpyMemcached library.
I am following the instructions here: http://docs.amazonwebservices.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.html
The code is almost identical to the example and is:
String configEndpoint = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com";
Integer clusterPort = 11211;
MemcachedClient client = new MemcachedClient(new InetSocketAddress(configEndpoint, clusterPort));
client.set("theKey", 3600, "This is the data value");
I see the following in the logs when I create the connection. The error happens when I try to set a value:
2013-01-04 22:05:30.445 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=/<ip>:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Starting configuration poller.
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Endpoint to use for configuration access in this poll NodeEndPoint - HostName:<our-server>.rgcl8z.cfg.use1.cache.amazonaws.com IpAddress:<ip> Port:11211
2013-01-04 22:05:32.950 WARN net.spy.memcached.MemcachedClient: Configuration endpoint timed out for config call. Leaving the initialization work to configuration poller.
Exception in thread "main" java.lang.IllegalStateException: Client is not initialized
at net.spy.memcached.MemcachedClient.checkState(MemcachedClient.java:1623)
at net.spy.memcached.MemcachedClient.enqueueOperation(MemcachedClient.java:1617)
at net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:474)
at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:905)
at com.thinknear.venice.initializers.VeniceAssets.main(VeniceAssets.java:227)
I've tried this both locally and on an EC2 instance (I can connect to the nodes using other libraries).
I've tried using both the 1.4.5 and 1.4.14 Memcached engines.
I relaxed the security group constraints as well, just in case.
Any thoughts on why the config endpoint would be timing out?
Client is not initialized:
You cannot connect directly to an Amazon ElastiCache node from your local machine; it is only accessible from an EC2 machine. If you want to check, telnet from your local machine and it will not connect (I suffered from the same problem). Telnet from your EC2 machine does connect, so run your code on the EC2 machine and it will work.
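A quick way to confirm this from code is a plain socket connect, the programmatic equivalent of telnet (a sketch; the endpoint is your configuration endpoint, and it should fail locally but succeed on the EC2 instance):

import java.net.InetSocketAddress;
import java.net.Socket;

public class ElastiCacheReachabilityCheck {
    public static void main(String[] args) throws Exception {
        String endpoint = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com"; // placeholder
        try (Socket socket = new Socket()) {
            // Connect with a timeout so an unreachable node fails fast instead of hanging
            socket.connect(new InetSocketAddress(endpoint, 11211), 5000);
            System.out.println("Reachable");
        }
    }
}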
Telnet to the Memcached server to check connectivity. In my case my server was not listed, so I was not able to make a connection; the problem was solved by adding my server to the Memcached server list.

Access Hive Metadata using JDBC

I am trying to access Hive metadata using JDBC. I have added all the required jar files to the classpath. I was following the tutorial from here: https://cwiki.apache.org/confluence/display/Hive/HiveClient#HiveClient-JDBC
After adding all the jar files, I wrote a sample program to connect to Hive using Java. When I debug the code, as soon as it hits the line below,
Connection con = DriverManager.getConnection(
        "jdbc:hive://lvsaishdc3in0001.lvs.host.com:10000/default", "", "");
I always get the exception below. I'm not sure why it is happening. Can anyone help me fix this problem?
java.sql.SQLException: Could not establish connection to lvsaishdc3in0001.lvs.host.com:10000/default: java.net.ConnectException: Connection timed out: connect
I started my Hive server by logging in via PuTTY and passing the hostname.
$ bash
bash-3.00$ cd /usr/local/bin
bash-3.00$ hive --service hiveserver
Starting Hive Thrift Server
12/07/03 08:07:11 INFO service.HiveServer: Starting hive server on port 10000
Hive server is pretty unsecured by itself; it doesn't even require a username/password for authentication. From my experience, the most typical configuration is to start the Hive server on the same box as the application, make sure that this box is connected to the Hadoop cluster, and secure this box. This way your connection string is jdbc:hive://localhost:10000, and security becomes not your headache but a headache for your networking folks.
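With that setup, the connection code stays the same apart from the host; a minimal sketch, assuming the HiveServer1 driver class that matches the jdbc:hive:// URL scheme:

import java.sql.Connection;
import java.sql.DriverManager;

public class HiveLocalConnect {
    public static void main(String[] args) throws Exception {
        // HiveServer1 driver; HiveServer2 would instead use
        // org.apache.hive.jdbc.HiveDriver with a jdbc:hive2:// URL
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}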
