Redis Cluster on K8s - problem with external connection - java

My problem is connecting to the Redis cluster from outside the cloud (using Redisson as the client).
I have debugged it: when the Redis client connects to the cluster, it reads the node addresses from nodes.conf and then tries to connect to those addresses. The connection is refused because these are ClusterIPs instead of the NodePorts defined in the config.
Redis is deployed using the strimzi operator (default config, 3 masters and 3 slaves).
Is there an easy way to tell Redis to use the NodePorts instead of the ClusterIPs?
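If the client is Redisson, one possible workaround is its NAT mapper, which rewrites the addresses the client discovers from the cluster before connecting to them. The sketch below is untested against this exact setup, and all IPs and ports in it are made-up placeholders; it assumes Redisson's `ClusterServersConfig.setNatMapper` hook, so check it against the Redisson version you use.

```java
import java.util.Map;
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.misc.RedisURI;

public class NodePortMapper {
    // ClusterIP:port -> NodeIP:NodePort; all addresses here are made-up examples
    static final Map<String, String> NAT = Map.of(
            "10.96.0.11:6379", "192.168.49.2:30001",
            "10.96.0.12:6379", "192.168.49.2:30002");

    // Pure helper: map an internal host:port to its externally reachable one,
    // falling back to the original when there is no mapping
    static String remap(String hostPort) {
        return NAT.getOrDefault(hostPort, hostPort);
    }

    public static void main(String[] args) {
        Config config = new Config();
        config.useClusterServers()
              .addNodeAddress("redis://192.168.49.2:30001") // one reachable NodePort as seed
              .setNatMapper(uri -> {
                  // rewrite every address the client learns from nodes.conf
                  String mapped = remap(uri.getHost() + ":" + uri.getPort());
                  int sep = mapped.lastIndexOf(':');
                  return new RedisURI("redis://" + mapped.substring(0, sep)
                          + ":" + mapped.substring(sep + 1));
              });
        RedissonClient redisson = Redisson.create(config);
        redisson.shutdown();
    }
}
```

With this, the ClusterIPs from nodes.conf never reach the TCP layer; each one is swapped for the corresponding NodePort address first.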

Related

In Spring - How to connect to AWS RDS via a EC2 bastion host?

I have an AWS RDS MySQL instance running on a private subnet.
I have another EC2 instance running on a public subnet, which functions as a bastion host for the MySQL instance.
They are both in the same VPC.
I can connect to the instance from MySQL Workbench by configuring a Standard TCP/IP over SSH connection: I provide the EC2 public IPv4 DNS as the SSH hostname, the SSH key file, and the MySQL hostname and credentials.
I can't quite figure out how to connect to this database from a local Spring Boot application. What should the JDBC URL be, and how do I get the application to route through the bastion host? Any explanations would be helpful.
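One approach is to replicate what Workbench does: open an SSH tunnel through the bastion and point the JDBC URL at the local end of it. The sketch below uses the JSch library for the tunnel; the key path, usernames, hostnames, and database name are all placeholders you would replace with your own, and `StrictHostKeyChecking=no` is only acceptable for a quick test.

```java
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.sql.Connection;
import java.sql.DriverManager;

public class BastionTunnel {
    // Builds the JDBC URL that points at the local end of the tunnel
    static String tunneledJdbcUrl(int localPort, String database) {
        return "jdbc:mysql://127.0.0.1:" + localPort + "/" + database;
    }

    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/path/to/bastion-key.pem");              // SSH key file
        Session session = jsch.getSession("ec2-user",
                "ec2-x-y.compute.amazonaws.com", 22);              // bastion public DNS
        session.setConfig("StrictHostKeyChecking", "no");          // quick test only
        session.connect();

        // Forward local port 3307 through the bastion to the RDS endpoint
        int localPort = session.setPortForwardingL(
                3307, "mydb.xxxx.us-east-1.rds.amazonaws.com", 3306);

        try (Connection conn = DriverManager.getConnection(
                tunneledJdbcUrl(localPort, "mydb"), "dbuser", "dbpass")) {
            System.out.println("Connected: " + conn.isValid(2));
        } finally {
            session.disconnect();
        }
    }
}
```

For Spring Boot specifically, you can also keep the application unchanged and open the tunnel outside it (`ssh -i key.pem -L 3307:<rds-endpoint>:3306 ec2-user@<bastion>`), then set `spring.datasource.url` to the same `jdbc:mysql://127.0.0.1:3307/...` URL.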

Kafka: events published from the host machine are not consumed by the application running in Docker

I am writing end-to-end tests for an application. I start an instance of an application, a Kafka instance, and a Zookeeper (all Dockerized) and then I interact with the application API to test its functionality. I need to test an event consumer's functionality in this application. I publish events from my tests and the application is expected to handle them.
Problem: If I run the application locally (not in Docker) and run tests that would produce events, the consumer in the application code handles events correctly. In this case, the consumer and the test have bootstrapServers set to localhost:9092. But if the application is run as a Dockerized instance it doesn't see the events. In this case bootstrapServers are set to kafka:9092 in the application and localhost:9092 in the test where kafka is a Docker container name. The kafka container exposes its 9092 port to the host so that the same instance of Kafka can be accessed from inside a Docker container and from the host (running my tests).
The only difference in the code is localhost vs kafka set as bootstrap servers. In both scenarios consumers and producers start successfully; events are published without errors. It is just that in one case the consumer doesn't receive events.
Question: How to make Dockerized consumers see events posted from the host machine?
Note: I have a properly configured Docker network which includes the application instance, Zookeeper, and Kafka. They all "see" each other. The corresponding ports of kafka and zookeeper are exposed to the host.
Kafka ports: 0.0.0.0:9092->9092/tcp. Zookeeper ports: 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp.
I am using wurstmeister/kafka and wurstmeister/zookeeper Docker images (I cannot replace them).
Any ideas/thoughts are appreciated. How would you debug it?
UPDATE: The issue was with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS env variables that were set to different ports for INSIDE and OUTSIDE communications. The solution was to use a correct port in the application code when it is run inside a Docker container.
These kinds of issues are usually related to the way Kafka handles the broker's address.
When you start a Kafka broker, it binds to 0.0.0.0:9092 and registers itself in Zookeeper with the address <hostname>:9092. When a client connects, Zookeeper is contacted to fetch the address of the specific broker.
This means that when you start a Kafka container you have a situation like the following:
container name: kafka
network name: kafkanet
hostname: kafka
registration on zookeeper: kafka:9092
Now if you connect a client to your Kafka from a container inside the kafkanet network, the address you get back from Zookeeper is kafka:9092 which is resolvable through the kafkanet network.
However if you connect to Kafka from outside docker (i.e. using the localhost:9092 endpoint mapped by docker), you still get back the kafka:9092 address which is not resolvable.
To address this issue you can specify advertised.host.name and advertised.port in the broker configuration so that the address is resolvable by all clients (see the documentation).
What is usually done is to set advertised.host.name as <container-name>.<network> (in your case something like kafka.kafkanet) so that any container connected to the network is able to correctly resolve the IP of the Kafka broker.
In your case however you have a mixed network configuration, as some components live inside docker (hence able to resolve the kafkanet network) while others live outside it. If it were a production system my suggestion would be to set the advertised.host.name to the DNS/IP of the host machine and always rely on docker port mapping to reach the Kafka broker.
From my understanding, however, you only need this setup to test things out, so the easiest thing is to "trick" the system living outside Docker. Using the naming specified above, this simply means adding the line 127.0.0.1 kafka.kafkanet to your /etc/hosts (or the Windows equivalent).
This way when your client living outside docker connects to Kafka the following should happen:
client -> Kafka via localhost:9092
Kafka queries Zookeeper and returns the host kafka.kafkanet
client resolves kafka.kafkanet to 127.0.0.1
client -> Kafka via 127.0.0.1:9092
EDIT
As pointed out in a comment, newer Kafka versions use the concepts of listeners and advertised.listeners, which are used in place of host.name and advertised.host.name (the latter are deprecated and only used when the former are not specified). The general idea is the same, however:
host.name: specifies the host to which the Kafka broker should bind itself (works in conjunction with port)
listeners: specifies all the endpoints to which the Kafka broker should bind (for instance PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9091)
advertised.host.name: specifies how the broker is advertised to clients (i.e. which address clients should use to connect to it)
advertised.listeners: specifies all the advertised endpoints (for instance PLAINTEXT://kafka.example.com:9092,SSL://kafka.example.com:9091)
In both cases, for clients to communicate successfully with Kafka, they need to be able to resolve and connect to the advertised hostname and port.
In both cases, if not specified, these values are automatically derived by the broker from the hostname of the machine it is running on.
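The setup described in the UPDATE above (separate INSIDE and OUTSIDE listeners) can be sketched as a docker-compose fragment. This is an illustrative config, not the asker's actual file: the service names, network, and port choices are assumptions, and the exact environment variable names are the ones the wurstmeister images translate into broker properties.

```yaml
# Sketch: one listener for containers on the Docker network, one for the host.
# Clients inside Docker use kafka:9093; clients on the host use localhost:9092.
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

With this layout, the Dockerized application sets bootstrapServers to kafka:9093 and the host-side tests use localhost:9092, and each side gets back an advertised address it can actually resolve.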
You kept referencing 8092. Was that intentional? Kafka runs on 9092. Easiest test is to download the same version of Kafka and manually run its kafka-console-consumer and kafka-console-producer scripts to see if you can pub-sub from your host machine.
Did you try "host.docker.internal" in the Dockerized application?
You could create a Docker network for your containers; then the containers will be able to resolve each other's hostnames and communicate.
Note: this works with docker-compose as well as with standalone containers.

What is a proper way to connect to AWS Elasticache (Redis cluster) from Java?

I'm new to AWS ElastiCache Redis, and I got the endpoint below.
I'm confused about whether to use Jedis or Redisson, because both provide a single-connection and a cluster-connection class.
Like in Jedis, for a single connection we can use:
Jedis conn = new Jedis("endpoint_address");
And for cluster connection we use:
Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
jedisClusterNodes.add(new HostAndPort("redis_cluster_ip", 7379));
JedisCluster jc = new JedisCluster(jedisClusterNodes);
These options also come up when I want to use Redisson. I'm not trying to compare the two libraries; my question is: WHICH ONE is the right way of connecting to an AWS ElastiCache Redis cluster, when you only have one endpoint and still want to use the AWS auto scaling feature?
The expected answer is: use SINGLE or CLUSTER mode.
Thanks :)
It depends on how you have the Redis cluster configured, i.e. whether or not cluster mode is enabled.
You can find this in the console:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Endpoints.html
For Redis (cluster mode disabled) clusters, use the Primary Endpoint for all write operations. Use the individual Node Endpoints for read operations (in the API/CLI these are referred to as Read Endpoints).
For Redis (cluster mode enabled) clusters, use the cluster's Configuration Endpoint for all operations. You must use a client that supports Redis Cluster (Redis 3.2). You can still read from individual node endpoints (in the API/CLI these are referred to as Read Endpoints).
Or with the AWS CLI
aws elasticache describe-cache-clusters \
--cache-cluster-id mycluster \
--show-cache-node-info
http://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-clusters.html
ConfigurationEndpoint -> (structure) Represents a Memcached cluster endpoint which, if Automatic Discovery is enabled on the cluster, can be used by an application to connect to any node in the cluster. The configuration endpoint will always have .cfg in it. Example: mem-3.9dvc4r.cfg.usw2.cache.amazonaws.com:11211
You should use the Replicated servers configuration in Redisson for AWS ElastiCache Redis and similar hosted services. The usage is described in the documentation.
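For illustration, a minimal sketch of the replicated setup in Redisson; the endpoint string is a placeholder, and the bucket name is invented. Redisson discovers the replica nodes from the primary endpoint itself, which is what lets it follow ElastiCache failovers and scaling behind the single endpoint.

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class ElastiCacheReplicated {
    // Helper to build a Redisson node address from an ElastiCache endpoint
    static String nodeAddress(String endpoint, int port) {
        return "redis://" + endpoint + ":" + port;
    }

    public static void main(String[] args) {
        Config config = new Config();
        // Point Redisson at the single primary endpoint; replicas are discovered
        config.useReplicatedServers()
              .addNodeAddress(nodeAddress(
                      "mycluster.xxxxxx.ng.0001.use1.cache.amazonaws.com", 6379));

        RedissonClient redisson = Redisson.create(config);
        System.out.println(redisson.getBucket("greeting").get());
        redisson.shutdown();
    }
}
```

This is the middle ground the question is asking about: neither the single-connection class nor full cluster mode, but a mode built for one endpoint fronting a replicated group.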

Cannot connect Locally to ElasticCache cluster on aws using Jedis Lib

We are trying to access ElastiCache (Redis) on AWS using a Java client that runs locally with the Jedis lib. We were able to access Redis locally using redis-cli, following the steps here.
The problem is that when we try to connect to AWS Redis using the Jedis lib, the NAT public addresses are translated back to the Redis private IPs when the slots are calculated (initializeSlotsCache). We couldn't find a way to disable this. Is there any workaround?
Here's how we connect using Jedis:
JedisConnectionFactory factory = new JedisConnectionFactory(
        new RedisClusterConfiguration(this.clusterProperties.getNodes()));
factory.setUsePool(true);
factory.setPoolConfig(this.jedisPoolConfig());
factory.afterPropertiesSet();
return factory;
We are using the mapped NAT IPs for each node, but the Jedis lib caches the private IPs, so we get the following exception:
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Any suggestions would be great! We are running out of options. Thank you in advance.
You cannot connect to AWS-hosted Redis from outside the AWS environment.
We faced a similar issue, and our workaround was to install a local Redis instance for dev and unit testing.
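A local Redis aside, newer Jedis versions (4.x) expose a hook that addresses the exact symptom in the question: a HostAndPortMapper that rewrites the node addresses discovered during slot initialization. The sketch below is an assumption-laden illustration, not a tested fix: the IPs are invented, and the `hostAndPortMapper` builder method should be verified against your Jedis version.

```java
import java.util.Map;
import java.util.Set;
import redis.clients.jedis.DefaultJedisClientConfig;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class NatAwareCluster {
    // private IP -> public NAT address; all values here are made-up examples
    static final Map<String, String> NAT = Map.of(
            "10.0.1.11", "54.12.34.56",
            "10.0.1.12", "54.12.34.57");

    // Pure helper: translate a discovered private host to its NAT address
    static String remapHost(String host) {
        return NAT.getOrDefault(host, host);
    }

    public static void main(String[] args) {
        JedisCluster cluster = new JedisCluster(
                Set.of(new HostAndPort("54.12.34.56", 6379)),   // public seed node
                DefaultJedisClientConfig.builder()
                        // rewrite every node address Jedis learns from the cluster
                        .hostAndPortMapper(hap ->
                                new HostAndPort(remapHost(hap.getHost()), hap.getPort()))
                        .build());
        System.out.println(cluster.get("key"));
    }
}
```

If your Jedis version predates this hook, the local-Redis workaround above (or an SSH tunnel per node) remains the practical option.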

Java web application cannot connect to AWS RDS DB Instance

I created a DB instance (MySQL) with the Publicly Accessible option. In the DB security group, I opened the MySQL port to the EC2 instance's security group (web server). The EC2 security group allows the SSH and web server ports. I can connect to the DB instance from the EC2 instance. I deployed the web app on the web server, but it can't connect to the RDS instance.
I am getting the exception in my local web server:
org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: Server connection failure during transaction. Due to underlying exception: 'java.net.UnknownHostException: "kbawstry2.cdhtaamn5ynq.us-east-1.rds.amazonaws.com"'.
(Screenshots: EC2 security group and DB instance security group rules.)
Using MySQL Workbench, I can connect to the RDS DB instance with the TCP/IP over SSH tunnelling option. But how can I connect to the database from Java?
Thanks in advance.
Finally I solved the issue. The problem was the way I specified the connection string in the application. I corrected it, tested the application on localhost, and then deployed the update to the Tomcat server on the EC2 instance. The other change was in the DB security group: I set the source IP to the EC2 instance's private IP address, since both the RDS and EC2 instances are in the same VPC.
Now it is working fine.
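For reference, a minimal sketch of plain JDBC against the RDS endpoint from the exception above; the database name and credentials are placeholders. On the EC2 instance inside the VPC the endpoint is reachable directly, while from a local machine you would first open a tunnel (mirroring the Workbench setup) and connect to the local port instead.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RdsConnect {
    // Builds a MySQL JDBC URL for a given host, port, and database
    static String jdbcUrl(String host, int port, String db) {
        return "jdbc:mysql://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        // From outside the VPC, open a tunnel first, e.g.
        //   ssh -i key.pem -L 3307:kbawstry2.cdhtaamn5ynq.us-east-1.rds.amazonaws.com:3306 ec2-user@<ec2-public-dns>
        // and then use jdbcUrl("127.0.0.1", 3307, "mydb") instead.
        String url = jdbcUrl("kbawstry2.cdhtaamn5ynq.us-east-1.rds.amazonaws.com",
                3306, "mydb");   // "mydb" and the credentials below are placeholders
        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpass");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println(rs.getInt(1));
        }
    }
}
```

The UnknownHostException in the question is consistent with a malformed connection string (for instance, stray quotes around the hostname), which matches the accepted fix.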
