Amazon ElastiCache Auto Discovery - Client is not initialized - java

I am trying to test Amazon's new Memcached client with Auto Discovery. I have one Memcached node, which I am able to connect to using XMemcached 1.3.5 as well as the standard spymemcached library.
I am following the instructions here: http://docs.amazonwebservices.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.html
The code is almost identical to the example and is:
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

// Configuration endpoint of the ElastiCache cluster (name elided)
String configEndpoint = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com";
Integer clusterPort = 11211;

MemcachedClient client = new MemcachedClient(new InetSocketAddress(configEndpoint, clusterPort));
client.set("theKey", 3600, "This is the data value");
I see the following in the logs when I create the connection. The error happens when I try to set a value:
2013-01-04 22:05:30.445 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=/<ip>:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Starting configuration poller.
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Endpoint to use for configuration access in this poll NodeEndPoint - HostName:<our-server>.rgcl8z.cfg.use1.cache.amazonaws.com IpAddress:<ip> Port:11211
2013-01-04 22:05:32.950 WARN net.spy.memcached.MemcachedClient: Configuration endpoint timed out for config call. Leaving the initialization work to configuration poller.
Exception in thread "main" java.lang.IllegalStateException: Client is not initialized
at net.spy.memcached.MemcachedClient.checkState(MemcachedClient.java:1623)
at net.spy.memcached.MemcachedClient.enqueueOperation(MemcachedClient.java:1617)
at net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:474)
at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:905)
at com.thinknear.venice.initializers.VeniceAssets.main(VeniceAssets.java:227)
I've tried this both locally and on an EC2 instance (I can connect to the nodes using other libraries).
I've tried using both the 1.4.5 and 1.4.14 Memcached engines.
I relaxed the security group constraints as well, just in case.
Any thoughts on why the config endpoint would be timing out?

Client is not initialized:
You cannot connect to an Amazon ElastiCache node directly from your local machine; it is only reachable from an EC2 machine. If you want to check, telnet to the node from your local machine: it will not connect. I suffered from the same problem. Telnet does work from an EC2 machine, so run your code on EC2 and it will work.
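Before involving the Memcached client at all, a plain TCP probe makes it easy to see from where the configuration endpoint is reachable. A minimal sketch (the hostname keeps the question's placeholder):
import java.net.InetSocketAddress;
import java.net.Socket;

public class EndpointProbe {
    public static void main(String[] args) {
        String host = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com";
        try (Socket socket = new Socket()) {
            // Fail fast instead of waiting for the OS-level timeout
            socket.connect(new InetSocketAddress(host, 11211), 5000);
            System.out.println("TCP connection succeeded");
        } catch (Exception e) {
            System.out.println("TCP connection failed: " + e);
        }
    }
}
From a local machine this should fail to connect; from an EC2 instance allowed by the security group it should succeed.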

Telnet to the Memcached server to check connectivity. In my case the server was not listed, so the connection could not be made; the problem was solved by adding my server to the Memcached server list.

Related

Jedis Issue - "Failed to connect to any host resolved for DNS name."

Whenever I try to connect to my Redis server from my Java application using Jedis, I get JedisConnectionException: Failed to connect to any host resolved for DNS name. The Java application runs on the same machine as the Redis server.
When I check the Redis server's status using systemctl, it is online and running without problems. I also connected to the Redis client from the command line on the Linux machine it runs on, authenticated, and ran PING, which returned PONG, to make sure Redis was up and running.
Redis configuration
I have bind and requirepass uncommented in redis.conf, and they look like the following (not my entire config, of course):
bind 127.0.0.1
requirepass mypassword
port 6379
This is the code I am using:
private void setupRedis(RedisCredentials credentials) {
    final GenericObjectPoolConfig<Jedis> poolConfig = new JedisPoolConfig();
    poolConfig.setMaxIdle(0);

    Jedis jedis;
    try (JedisPool pool = new JedisPool(poolConfig, credentials.getIp(), credentials.getPort())) {
        jedis = pool.getResource();
    }
    jedis.auth(credentials.getPassword());
    jedis.connect();
    log.info("Redis connection was established.");
}
I am a bit new to working with Redis, therefore I wasn't sure how much information to include in my post. Any and all help is very much appreciated!
Tried
I tried the code provided above multiple times. I have also tried restarting the Redis server and running the code again, with no success.
Expected to happen
For the application to log "Redis connection was established." and to receive no errors in the process.
Resulted
The console logs redis.clients.jedis.exceptions.JedisConnectionException: Failed to connect to any host resolved for DNS name, and the application therefore obviously does not manage to establish a connection to Redis.
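Note that in the snippet above the try-with-resources closes the pool before the borrowed Jedis instance is used, and auth() is called after getResource(). For reference, a minimal sketch of the usual pattern, assuming the server from redis.conf above (host 127.0.0.1, port 6379, and the placeholder password):
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisSmokeTest {
    public static void main(String[] args) {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        // Passing the password to the pool replaces the manual auth()/connect() calls.
        // In a real application the pool stays open for the app's lifetime;
        // only the borrowed Jedis resource is closed per operation.
        try (JedisPool pool = new JedisPool(poolConfig, "127.0.0.1", 6379, 2000, "mypassword");
             Jedis jedis = pool.getResource()) {
            System.out.println(jedis.ping()); // expect PONG
        }
    }
}
Connecting to 127.0.0.1 also sidesteps the DNS resolution in the exception message, since the server is bound to the loopback address.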

Connection refused when trying to connect to ActiveMQ Artemis deployed on Openshift

We have an OpenShift project (project1) in which we set up an AMQ Artemis broker using the image amq-broker-7-tech-preview/amq-broker-71-openshift. As this is the basic image, we don't have any configuration such as SSL or TLS. For the setup we used this example: https://github.com/jboss-container-images/jboss-amq-7-broker-openshift-image/blob/amq71-dev/templates/amq-broker-71-basic.yaml
After the deployment of the image on Openshift we have the following:
broker-amq-amqp (5672/TCP 5672) No route
broker-amq-jolokia (8161/TCP 8161) https://broker-amq-jolokia-project1.192.168.99.105.nip.io
broker-amq-mqtt (1883/TCP 1883) No route
broker-amq-stomp (61613/TCP 61613) No route
broker-amq-tcp (61616/TCP 61616) No route
From another OpenShift service we try, in Java, to connect to the broker, but we receive the following error:
[org.apache.activemq.transport.failover.FailoverTransport] (ActiveMQ Task-1) Failed to connect to [tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true] after: 230 attempt(s) with Connection refused (Connection refused), continuing to retry.
The Java code:
user = "example";
password = "example";
String address = "queue/example";
InitialContext context = new InitialContext();
queue = (Queue) context.lookup(address);
ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
try (Connection connection = cf.createConnection(user, password);) {
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
The JNDI Properties file
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=failover:(tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true)?randomize=false
queue.queue/example=example/strings
It looks as if you're trying to connect to the broker using an OpenShift route, when there is no route defined for the relevant service. You (or the installer) defined a route for Jolokia, but there's no route for the broker.
You won't get a helpful error message here, because any hostname that ends with the right domain will get connected to the OpenShift router. However, the router won't know how to process the connection without a valid route, and will probably just return some sort of meaningless error packet to the JMS client.
If you're trying to connect to the broker from another application in the same OpenShift namespace as the broker, you don't need to connect via the router -- just use the service name (presumably broker-amq-tcp) and service port explicitly in your JMS set-up (see the sketch below).
If you're connecting to the broker from another application in a different OpenShift namespace in the same cluster, you might be able to configure the networking subsystem to allow direct connections to the service across namespaces. This is, unfortunately, a little fiddly to set up after OpenShift is installed.
If you're connecting to the broker from outside an OpenShift namespace, and you can't use services directly, you'll have to connect via a route, and you must use an encrypted connection. That's not necessarily for security -- the router will read the SNI information from the SSL header to work out how to route the request.
So you'll need to create a service for the broker's SSL port, create a route for that service, export server certificates from the broker, import those certificates into your client, and configure the client to use an SSL connection URI via the router. Clearly, using the service directly is easier, if you can ;)
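For the first, same-namespace case, a minimal sketch with the ActiveMQ 5.x client the question already uses could look as follows; the service name broker-amq-tcp and port 61616 are taken from the service list in the question, and the credentials are the question's placeholders:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ServiceNameConnect {
    public static void main(String[] args) throws Exception {
        // The service DNS name resolves inside the namespace; no route or SNI is involved.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://broker-amq-tcp:61616");
        try (Connection connection = cf.createConnection("example", "example")) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            System.out.println("Connected: " + session);
        }
    }
}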
All these set-up steps are described in Red Hat's AMQ7-on-OpenShift documentation:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/deploying_amq_broker_on_openshift/index
although I can't deny that there's an awful lot of information to wade through in that document.
An addition and clarification to the answer by Kevin Boone (which is very much correct).
For the AMQ broker running inside the pod named "broker-amq-tcp" to be reachable from other pods in the same cluster:
start the broker inside the container on address 0.0.0.0. This is critical: localhost (loopback; 127.0.0.1) will prevent any connections from outside the pod from reaching the broker;
create a service (e.g. "broker-amq-tcp-service") for broker-amq-tcp that maps a service port, e.g. 62626 (or any other), to the container's 61616;
connect from other pods using tcp://broker-amq-tcp-service:62626 (see the properties sketch below).
The 0.0.0.0 part cost me a few days of debugging :)
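Applied to the JNDI properties from the question, the provider URL would then point at the service instead of a route. A sketch, assuming the example service name and port from the steps above:
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=tcp://broker-amq-tcp-service:62626
queue.queue/example=example/strings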

Access ActiveMQ with reverse proxy enabled

The goal is to publish messages to ActiveMQ through Java code from inside a secured company network.
I have configured ActiveMQ on an AWS EC2 machine (console access: IPAddress:8161). I can also publish messages using the AWS IP address and port 61616 (IPAddress:61616) through Java code.
But now I need to publish messages from inside a company network. It is secured and can't reach the AWS IP address directly.
So we created reverse proxies:
IPAddress:8161 to activemq-ui.testdemo.com
IPAddress:61616 to activemq-api.testdemo.com
Now I can access the ActiveMQ console from our company network using activemq-ui.testdemo.com, but I can't access activemq-api.testdemo.com through Java code.
I am getting the error below:
SEVERE: Error Message: javax.jms.JMSException: Could not connect to broker URL: tcp://activemq-api.demo.com. Reason:
java.lang.IllegalArgumentException: port out of range:-1
The error looks like it's expecting a port number in the URL, but I'm not sure what to pass here.
Can anyone help me with how to access the ActiveMQ API from inside the corporate network?
You need to provide the port that the client should attempt to connect to on the connection URI, as the error is telling you; something like:
tcp://activemq-api.demo.com:80
The client does not attempt to guess or deduce which port you want it to use, so that field is mandatory.
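A minimal sketch of what that looks like in code, assuming the reverse proxy forwards raw TCP (OpenWire) on port 80 for the hostname from the question; if the proxy only understands HTTP, the TCP connection will still fail and a TCP passthrough would be needed instead:
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ProxyConnectTest {
    public static void main(String[] args) throws Exception {
        // The port is mandatory on the connection URI
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://activemq-api.testdemo.com:80");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            System.out.println("Connected through the proxy");
        } finally {
            connection.close();
        }
    }
}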

Azure and Apache Mina

I am not sure whether this question is Mina-related or more Azure-related, but it has to do with networking. I have also added the Netty tag since Mina and Netty share many networking principles.
I hope to get advice on where to dig.
I have used a certain Mina application for quite a long time on a local network; now I am trying to migrate it into the cloud. I deploy Linux virtual machines in Azure (each has a public IP, but does this really matter?). They connect (using Mina) to a machine outside Azure that also has its own public IP. The usual thing:
SocketConnector connector = new NioSocketConnector(numberOfConnectors);
ConnectFuture connectFuture =
        connector.connect(new InetSocketAddress(remoteHost, remotePort));
connectFuture.awaitUninterruptibly(connectTimeout);
That machine outside Azure also runs Mina; let's call it the server machine. It accepts connections like this:
NioSocketAcceptor acceptor = new NioSocketAcceptor(acceptor_threads);
org.apache.mina.core.buffer.IoBuffer.setUseDirectBuffer(false);
acceptor.getSessionConfig().setTcpNoDelay(true);
acceptor.setReuseAddress(true);
acceptor.getSessionConfig().setSendBufferSize(buffer_size);
acceptor.getSessionConfig().setMinReadBufferSize(64000);
acceptor.getSessionConfig().setReceiveBufferSize(buffer_size);
acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, iddle_time);
acceptor.getFilterChain().addLast("codec",
        new ProtocolCodecFilter(CodecFactory.getInstance()));
acceptor.setDefaultLocalAddress(new InetSocketAddress(port));
When Azure applications connect to the server machine, the server saves the IoSession session to asynchronously push messages back in the future, like this:
session.write(message);
This worked well inside a local network (without Azure), but in the current deployment the server sends the message
2017-01-17/15:45:19.823/GMT-00:00 [nioEventLoopGroup-3-3] [...] DEBUG Sending message to /13.94.143.139:41790
and the Azure machine does not receive anything. Moreover, after a while the following exception arises on the server machine:
2017-01-17/16:01:11.419/GMT-00:00 [NioProcessor-4] [...] ERROR Exception in IOHandler: Connection timed out
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.mina.transport.socket.nio.NioProcessor.read(NioProcessor.java:280)
at org.apache.mina.transport.socket.nio.NioProcessor.read(NioProcessor.java:44)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:695)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:668)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:657)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.access$600(AbstractPollingIoProcessor.java:68)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:1141)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2017-01-17/16:01:11.424/GMT-00:00 [NioProcessor-3] [...] DEBUG sessionClosed
I use Mina version 2.0.4 (yes, it is old, but it has worked on the local network for several years now).
I set up the Azure network with the Java Azure SDK 1.0.0-beta3:
Network.DefinitionStages.WithCreate creatableNetwork = azure.networks()
        .define(networkName)
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .withAddressSpace("10.0.0.0/20");
And create the virtual machines as:
VirtualMachine.DefinitionStages.WithCreate creatableVirtualMachine =
        azure.virtualMachines()
        .define(String.format(...))
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .withNewPrimaryNetwork(creatableNetwork)
        .withPrimaryPrivateIpAddressStatic(inetAddress.getHostAddress())
        .withNewPrimaryPublicIpAddress(String.format("chr-vm-%04d", i))
        .withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
        .withRootUserName(linuxUserName)
        .withPassword(linuxUserPassword)
        .withSize(VirtualMachineSizeTypes.STANDARD_D2_V2)
        .withNewStorageAccount(creatableStorageAccount);
I wonder what might prevent messages from traveling from the server to the Azure client machines. Azure network configuration? Mina configuration? (The first messages, from the client machines to the server machine, do arrive after they connect.)
I hope the above information contains a clue.
I have solved my problem thanks to Peter Pan - MSFT pointing out NSGs (Network Security Groups).
An NSG controls inbound/outbound rules, much like the Windows Firewall. You create an NSG, add rules to it, and assign the NSG to a particular entity.
There are at least two options for assigning an NSG:
to a network subnet
to a network interface
There is a tutorial 1 and a Java code sample 2. In my case, a separate network interface is created for each VM (since each VM has a public IP), so I assigned one NSG to a single subnet.
First, create the NSG:
NetworkSecurityGroup NSG = azure.networkSecurityGroups()
        .define(networkSecurityGroup)
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .defineRule("Inbound")
            .allowInbound()
            .fromAnyAddress()
            .fromAnyPort()
            .toAnyAddress()
            .toAnyPort()
            .withAnyProtocol()
            .withDescription("Incoming messages")
            .withPriority(100)
            .attach()
        .create();
Then modify the code to explicitly define a subnet and assign the NSG to it (subnet1 is created automatically without an NSG if none is defined explicitly):
Network.DefinitionStages.WithCreate creatableNetwork = azure.networks()
        .define(networkName)
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .withAddressSpace("10.0.0.0/20")
        .defineSubnet(subnetName)
            .withAddressPrefix("10.0.0.0/20")
            .withExistingNetworkSecurityGroup(NSG)
            .attach();
So, the rest of the code remains the same as posted in the question above.
Helpful links:
Azure Portal Tutorial
Java Azure SDK NSG Example

Unable to connect to a Kerberos-secured Phoenix datasource

I want to test pulling data from Apache HBase with a Java application. The application will use SQL-like queries via a JDBC to Apache Phoenix.
I've set up my Hadoop "cluster" on one machine using Ambari and the HortonWorks HDP 2.5 platform. I've also Kerberized the environment using Ambari's wizard; my KDC is a separate machine running Windows Active Directory.
Ambari shows no errors, and I am able to use sqlline.py to successfully make SQL-like calls to HBase through Phoenix. I set up some example tables this way (cf. HortonWorks Phoenix & ODBC tutorial, although I had to kinit etc. first).
However, I am having problems creating a JDBC datasource to be used by the Java application. In my case, I am planning to host the webapp on WildFly 10.1 and I am developing with Eclipse JEE with the JBoss Tools plugin.
These are the steps I used to create the datasource:
Datasource Explorer > Database Connections > New...
Connection Profile: Generic JDBC
URL: jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure:HTTP/hbase.eaa.local@EAA.LOCAL:jboss.server.temp.dir/spnego.service.keytab
Username: hbase (I'm unsure what to put here)
Driver: I've created a new driver of the type "Generic JDBC Driver", and I had to add JAR files for all of the dependencies of phoenix-core-[version].jar. The driver class is org.apache.phoenix.jdbc.PhoenixDriver.
I got the connection string from an existing post in the HortonWorks community, which is why it includes the Kerberos principal and keytab used for the connection.
When I try to test the datasource connection, it churns for about five minutes before spitting out an error message (after something like 35 attempts). The client throws Java exceptions saying the sockets are in a "closing state", and the ZooKeeper logs are less helpful:
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560217 with negotiated timeout 40000 for client /192.168.40.3:52674
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43860
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:43860
INFO [Thread-1448:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43860 (no session established for client)
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /192.168.40.41:43922
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560218 with negotiated timeout 40000 for client /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=hbase/hdfs.eaa.local@EAA.LOCAL; authorizationID=hbase/hdfs.eaa.local@EAA.LOCAL.
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@134] - Setting authorizedID: hbase
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@964] - adding SASL authorization for authorizationID: hbase
INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43922 which had sessionid 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:44008
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:44008
INFO [Thread-1449:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:44008 (no session established for client)
NB: 192.168.40.3 is the VPN server, which my host machine uses to tunnel into the environment with the Hadoop cluster. 192.168.40.41 is the machine running the cluster, hdfs.eaa.local.
There are plenty of accepted socket connections which are then immediately closed. Occasionally the client authenticates successfully (so I'm confident in my Kerberos settings) but then there is a session termination immediately afterward.
I've also tried to deploy the datasource directly in WildFly with jboss-cli and modifications to standalone.xml and module.xml. But I run into many missing-dependency problems that I'm not sure how to resolve without creating a new module for each JAR required by phoenix-core-[version].jar (and there are a lot). I followed this guide.
What can I do to fix the issue or diagnose further? I've been pulling my hair out for a couple of days now.
You need to add hbase-site.xml and core-site.xml to your classpath.
See How to connect to a Kerberos-secured Apache Phoenix data source with WildFly? for more information.
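With those files on the classpath, a minimal standalone sketch to verify connectivity could look like the following; the URL reuses the question's quorum, znode, and principal, the keytab path is a hypothetical placeholder, and SYSTEM.CATALOG is queried only because it always exists:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // Quorum : port : root znode : principal : keytab, colon-separated;
        // hbase-site.xml and core-site.xml must be on the classpath.
        String url = "jdbc:phoenix:hdfs.eaa.local:2181:/hbase-secure"
                + ":HTTP/hbase.eaa.local@EAA.LOCAL:/path/to/spnego.service.keytab";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}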
