Error when connecting to DAX in AWS - Java

We're having trouble connecting to DAX from a Java application in our test environment. The DAX cluster and its configuration are defined in the CloudFormation template for our test environment.
These are the errors in the trace:
[ERROR] DaxClient-39: caught exception during cluster refresh:
java.io.IOException: failed to configure cluster endpoints from hosts
Suppressed: com.amazon.dax.client.exceptions.DaxServiceException: [X.X.XX.XX]
Connection requires authentication (Service: null; Status Code: -1;
Error Code: null; Request ID: null)
We use the same template in our dev environment and are able to connect to DAX there from EC2 instances in that environment.
We have verified connectivity to the cluster using:
nc -z v-dax-test.3fxxxx.clustercfg.dax.usw2.cache.amazonaws.com 8111
and can run
aws dax describe-clusters --region us-west-2
on the EC2 instance that is trying to connect to DAX, and we get back results that seem sane.
The instance runs a Java application that uses the AWS Java SDK and the DAX client library.
We've verified that the DAX security group allows incoming connections on port 8111 from the security group the EC2 instance is in.
The DAX subnet group includes the subnets the EC2 instance is in.
Can anyone tell me what this error means, and how to resolve it?
Thank you!

This can occur if you haven't specified the region when instantiating the DAX ClientConfig and are accessing a DAX cluster in a region other than us-east-1 (the client's default region). To specify the region, try:
ClientConfig daxConfig = new ClientConfig()
        .withEndpoints(daxEndpoint)
        .withRegion("us-west-2");
AmazonDaxClient client = new ClusterDaxClient(daxConfig);
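For context, here is a slightly fuller sketch under the same assumptions as above (the com.amazon.dax.client library from the question, a cluster in us-west-2); once configured, the DAX client is used like a regular DynamoDB client, so the table name and key in the read below are placeholders for illustration only:

import com.amazon.dax.client.dynamodbv2.ClientConfig;
import com.amazon.dax.client.dynamodbv2.ClusterDaxClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;
import java.util.Collections;

// daxEndpoint is the cluster configuration endpoint with port 8111, e.g.
// "v-dax-test.3fxxxx.clustercfg.dax.usw2.cache.amazonaws.com:8111"
ClientConfig daxConfig = new ClientConfig()
        .withEndpoints(daxEndpoint)
        .withRegion("us-west-2"); // must match the region the cluster runs in
ClusterDaxClient dax = new ClusterDaxClient(daxConfig);

// "MyTable" and the "id" key are hypothetical, for illustration only
GetItemRequest request = new GetItemRequest()
        .withTableName("MyTable")
        .withKey(Collections.singletonMap("id", new AttributeValue("123")));
GetItemResult result = dax.getItem(request);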

Related

Jedis Issue - "Failed to connect to any host resolved for DNS name."

Whenever I try to connect to my Redis server from my Java application using Jedis, I get JedisConnectionException: Failed to connect to any host resolved for DNS name. The Java application runs on the same machine as the Redis server.
When I check the Redis server's status using systemctl, it's online and running without problems. I also connected with redis-cli in a terminal on the Linux machine Redis runs on, authenticated, and ran PING, which returned PONG, to make sure Redis was up and running.
Redis configuration
I have bind and requirepass uncommented in redis.conf, and it looks like the following (not my entire config, of course):
bind 127.0.0.1
requirepass mypassword
port 6379
This is the code I am using:
private void setupRedis(RedisCredentials credentials) {
    final GenericObjectPoolConfig<Jedis> poolConfig = new JedisPoolConfig();
    poolConfig.setMaxIdle(0);

    Jedis jedis;
    try (JedisPool pool = new JedisPool(poolConfig, credentials.getIp(), credentials.getPort())) {
        jedis = pool.getResource();
    }
    jedis.auth(credentials.getPassword());
    jedis.connect();
    log.info("Redis connection was established.");
}
I am a bit new to working with Redis, so I wasn't sure how much information to include in my post. Any and all help is very much appreciated!
Tried
I tried the code provided above multiple times. I have also tried restarting the Redis server and running the code again, without success.
Expected to happen
For the application to log "Redis connection was established." and to receive no errors in the process.
Resulted
The console logs redis.clients.jedis.exceptions.JedisConnectionException: Failed to connect to any host resolved for DNS name, and the application therefore does not manage to establish a connection to Redis.
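For comparison, a minimal sketch of the usual pooled Jedis pattern, assuming Redis listens on 127.0.0.1:6379 with the requirepass value from the config above; the pool stays open for the lifetime of the application and each connection is borrowed per operation:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// The password is passed to the pool so every borrowed connection is authenticated.
JedisPoolConfig poolConfig = new JedisPoolConfig();
JedisPool pool = new JedisPool(poolConfig, "127.0.0.1", 6379, 2000, "mypassword");

try (Jedis jedis = pool.getResource()) { // borrowed connection, returned on close()
    System.out.println(jedis.ping());    // prints PONG if the connection and auth work
}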

Vault server on OpenShift - connection refused

I want to use a Vault server to store secrets and deploy it on OpenShift.
I wrote this Dockerfile, built the image, pushed it to the OpenShift registry, and created a deployment from the image stream:
FROM vault:1.5.0
ADD *.hcl /etc/config.hcl
ENTRYPOINT ["vault", "server", "-config=/etc/config.hcl"]
Here is the config:
storage "file" {
path = "/vault/data"
}
listener "tcp" {
address="127.0.0.1:8200"
tls_disable=1
}
disable_mlock = true
api_addr = "http://127.0.0.1:8200"
I created a route to port 8200. When I use the Vault CLI from inside the vault-server pod it works fine; I can log in, configure, etc. When I use the OpenShift CLI on my local computer to forward port 8200 to my local port 8200, I can also access the API.
The problem is that I cannot access the API from anywhere outside the pod. The route gives me a 503 response, and when trying via http://vault-server.namespace.svc:8200 I get connection refused (using Spring RestTemplate).
How can I configure Vault to also accept external traffic?
Your listener block means you are only listening for connections from localhost. Change the address field to 0.0.0.0:8200 to listen on all interfaces:
listener "tcp" {
address="0.0.0.0:8200"
}
And please don't forget to enable TLS as soon as you've got connectivity working.
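Once the listener binds to all interfaces, a quick way to check reachability from outside the pod is a plain HTTP call; here is a minimal sketch with Spring's RestTemplate (mentioned in the question), assuming the service DNS name from the question and TLS still disabled:

import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

// /v1/sys/health is Vault's unauthenticated health endpoint; an initialized,
// unsealed Vault returns 200 (RestTemplate throws on non-2xx by default).
RestTemplate restTemplate = new RestTemplate();
ResponseEntity<String> response = restTemplate.getForEntity(
        "http://vault-server.namespace.svc:8200/v1/sys/health", String.class);
System.out.println(response.getStatusCode() + " " + response.getBody());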

Access ActiveMQ with reverse proxy enabled

The goal is to publish/send messages to ActiveMQ through Java code from inside a secured company network.
I have configured ActiveMQ on an AWS EC2 machine (console access: IPAddress:8161). I can also publish messages through Java code using the AWS IP address and port 61616 (IPAddress:61616).
But now I need to publish messages from inside a company network. It is secured and cannot reach the AWS IP address directly.
So we created reverse proxies for:
IPAddress:8161 to activemq-ui.testdemo.com
IPAddress:61616 to activemq-api.testdemo.com
Now I can access the ActiveMQ console from our company network using activemq-ui.testdemo.com, but I can't access activemq-api.testdemo.com through Java code.
I'm getting the error below:
SEVERE: Error Message: javax.jms.JMSException: Could not connect to broker URL: tcp://activemq-api.demo.com. Reason:
java.lang.IllegalArgumentException: port out of range:-1
The error looks like it expects a port number in the URL, but I'm not sure what to pass here.
Can anyone help me with how to access the ActiveMQ API from inside the corporate network?
As the error is telling you, you need to provide the port that the client should connect to on the connection URI, something like:
tcp://activemq-api.demo.com:80
The client does not attempt to guess or deduce which port you want it to use, so that field is mandatory.
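For illustration, a minimal JMS producer that supplies the port explicitly, assuming the reverse proxy at activemq-api.demo.com forwards raw TCP (OpenWire) traffic on port 80 to the broker's 61616; the queue name is a placeholder:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

void sendTestMessage() throws JMSException {
    // Host and port come from the answer above; adjust them to whatever
    // port the proxy actually exposes for the broker transport.
    ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://activemq-api.demo.com:80");
    Connection connection = factory.createConnection();
    try {
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // "TEST.QUEUE" is a placeholder destination for illustration
        MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
        producer.send(session.createTextMessage("hello"));
    } finally {
        connection.close();
    }
}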

Azure and Apache Mina

I am not sure whether this question is Mina-related or more Azure-related, but it has to do with networking. I have also added the Netty tag since Mina and Netty share many networking principles.
I hope to get advice on where to dig.
I have used a certain Mina application for quite a long time on a local network; now I am trying to migrate it into the cloud. I deploy Linux virtual machines in Azure (each has a public IP, but does this really matter?).
They connect (using Mina) to a machine outside Azure that also has its own public IP. The usual thing:
SocketConnector connector = new NioSocketConnector(numberOfConnectors);
ConnectFuture connectFuture =
        connector.connect(new InetSocketAddress(remoteHost, remotePort));
connectFuture.awaitUninterruptibly(connectTimeout);
That machine outside Azure also runs Mina; let's call it the server machine. It accepts connections like this:
NioSocketAcceptor acceptor = new NioSocketAcceptor(acceptor_threads);
org.apache.mina.core.buffer.IoBuffer.setUseDirectBuffer(false);
acceptor.getSessionConfig().setTcpNoDelay(true);
acceptor.setReuseAddress(true);
acceptor.getSessionConfig().setSendBufferSize(buffer_size);
acceptor.getSessionConfig().setMinReadBufferSize(64000);
acceptor.getSessionConfig().setReceiveBufferSize(buffer_size);
acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, iddle_time);
acceptor.getFilterChain().addLast("codec",
        new ProtocolCodecFilter(CodecFactory.getInstance()));
acceptor.setDefaultLocalAddress(new InetSocketAddress(port));
When Azure applications connect to the server machine, the server saves the
IoSession session
to asynchronously push messages back in the future, like this:
session.write(message);
This worked well inside a local network (without Azure), but in the current deployment the server sends a message:
2017-01-17/15:45:19.823/GMT-00:00 [nioEventLoopGroup-3-3] [...] DEBUG Sending message to /13.94.143.139:41790
and the Azure machine does not receive anything. Moreover, after a while the following exception arises on the server machine:
2017-01-17/16:01:11.419/GMT-00:00 [NioProcessor-4] [...] ERROR
Exception in IOHandlerConnection timed out
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.mina.transport.socket.nio.NioProcessor.read(NioProcessor.java:280)
at org.apache.mina.transport.socket.nio.NioProcessor.read(NioProcessor.java:44)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:695)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:668)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:657)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.access$600(AbstractPollingIoProcessor.java:68)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:1141)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2017-01-17/16:01:11.424/GMT-00:00 [NioProcessor-3] [...] DEBUG sessionClosed
I use Mina version 2.0.4 (yes, it is old, but it has worked on the local network for several years now).
I set up the Azure network with the Java Azure SDK 1.0.0-beta3:
Network.DefinitionStages.WithCreate creatableNetwork = azure.networks()
        .define(networkName)
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .withAddressSpace("10.0.0.0/20");
And create virtual machines as
VirtualMachine.DefinitionStages.WithCreate creatableVirtualMachine = azure.virtualMachines()
        .define(String.format(...))
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .withNewPrimaryNetwork(creatableNetwork)
        .withPrimaryPrivateIpAddressStatic(inetAddress.getHostAddress())
        .withNewPrimaryPublicIpAddress(String.format("chr-vm-%04d", i))
        .withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
        .withRootUserName(linuxUserName)
        .withPassword(linuxUserPassword)
        .withSize(VirtualMachineSizeTypes.STANDARD_D2_V2)
        .withNewStorageAccount(creatableStorageAccount);
I wonder what might prevent messages from traveling from the server to the Azure client machines. Azure network configuration? Mina configuration? (The first messages from the client machines to the server machine do arrive after they connect.)
I hope the information above contains a clue.
I have solved my problem thanks to Peter Pan - MSFT pointing out NSGs (Network Security Groups).
An NSG controls inbound/outbound rules like a Windows firewall. You create an NSG, add rules to it, and assign the NSG to a particular entity. There are at least two options for assigning an NSG:
to a network subnet
to a network interface
There is a tutorial and a Java code sample (see the helpful links below). In my case, a separate network interface is created for each VM (since each VM has a public IP), so I assigned one NSG to a single subnet.
First, create the NSG:
NetworkSecurityGroup NSG = azure.networkSecurityGroups()
        .define(networkSecurityGroup)
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .defineRule("Inbound")
            .allowInbound()
            .fromAnyAddress()
            .fromAnyPort()
            .toAnyAddress()
            .toAnyPort()
            .withAnyProtocol()
            .withDescription("Incoming messages")
            .withPriority(100)
            .attach()
        .create();
Then modify the code to explicitly define a subnet and assign the NSG to it (subnet1 is created automatically without an NSG if none is defined explicitly):
Network.DefinitionStages.WithCreate creatableNetwork = azure.networks()
        .define(networkName)
        .withRegion(region)
        .withExistingResourceGroup(resourceGroup)
        .withAddressSpace("10.0.0.0/20")
        .defineSubnet(subnetName)
            .withAddressPrefix("10.0.0.0/20")
            .withExistingNetworkSecurityGroup(NSG)
            .attach();
So, the rest of the code remains the same as posted in the question above.
Helpful links:
Azure Portal Tutorial
Java Azure SDK NSG Example

Amazon ElastiCache Auto Discovery - Client is not initialized

I am trying to test Amazon's new memcached client with Auto Discovery. I have one memcached node which I am able to connect to using XMemcached 1.3.5 as well as the standard spymemcached library.
I am following the instructions here: http://docs.amazonwebservices.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.html
The code is almost identical to the example and is:
String configEndpoint = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com";
Integer clusterPort = 11211;
MemcachedClient client = new MemcachedClient(new InetSocketAddress(configEndpoint, clusterPort));
client.set("theKey", 3600, "This is the data value");
I see the following in the logs when I create the connection. The error happens when I try to set a value:
2013-01-04 22:05:30.445 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=/<ip>:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Starting configuration poller.
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Endpoint to use for configuration access in this poll NodeEndPoint - HostName:<our-server>.rgcl8z.cfg.use1.cache.amazonaws.com IpAddress:<ip> Port:11211
2013-01-04 22:05:32.950 WARN net.spy.memcached.MemcachedClient: Configuration endpoint timed out for config call. Leaving the initialization work to configuration poller.
Exception in thread "main" java.lang.IllegalStateException: Client is not initialized
at net.spy.memcached.MemcachedClient.checkState(MemcachedClient.java:1623)
at net.spy.memcached.MemcachedClient.enqueueOperation(MemcachedClient.java:1617)
at net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:474)
at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:905)
at com.thinknear.venice.initializers.VeniceAssets.main(VeniceAssets.java:227)
I've tried this both locally and on an EC2 instance (I can connect to the nodes using other libraries).
I've tried using both the 1.4.5 and 1.4.14 memcached engines.
I relaxed the security group constraints as well, just in case.
Any thoughts on why the config endpoint would be timing out?
Client is not initialized:
You cannot connect to an Amazon ElastiCache node directly from your local machine; you can only access it from an EC2 instance. If you want to check, telnet to it from your local machine and it will not connect (I suffered from the same problem), while telnet from your EC2 instance will work. So run your code on an EC2 instance and it will work.
Also telnet to the memcached server to check connectivity. In my case it was not listed, so the connection could not be made; the problem was solved by listing my server with memcached.
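As a quick way to reproduce the telnet check from Java on the EC2 instance, here is a minimal sketch using a plain socket against the configuration endpoint from the question:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Equivalent of "telnet <config endpoint> 11211": if this times out or is refused,
// the memcached client will not be able to initialize from the endpoint either.
String configEndpoint = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com";
int clusterPort = 11211;
try (Socket socket = new Socket()) {
    socket.connect(new InetSocketAddress(configEndpoint, clusterPort), 5000);
    System.out.println("Connected to " + configEndpoint + ":" + clusterPort);
} catch (IOException e) {
    System.out.println("Could not connect: " + e.getMessage());
}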
