Akka Clustering not working with Docker container in host mode - java

We are trying to use application-level clustering with Akka Cluster for our distributed application, which runs within Docker containers across multiple nodes. We plan to run the Docker containers with "host" mode networking.
When the dockerized application comes up for the first time, Akka clustering does not seem to work and we do not see any gossip messages being exchanged between the cluster nodes. This gets resolved only when we remove the file "/var/lib/docker/network/files/local-kv.db" and restart the Docker service. This is not an acceptable solution for a production deployment, so we are trying to do an RCA and provide a proper solution.
Any help here would be really appreciated.
Tried removing the file "/var/lib/docker/network/files/local-kv.db" and restarting the Docker service; that worked, but this workaround is unacceptable in a production deployment.
Tried using the bridge network mode for the dockerized container; that helps, but our requirement is to run the container in "host" mode.
application.conf has the following settings for the host and port currently.
hostname = "" port = 2551 bind-hostname = "0.0.0.0" bind-port = 2551
No gossip messages are exchanged between the Akka cluster nodes, whereas we do see those messages after applying the workaround mentioned above.
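For reference, a minimal sketch of what those settings look like when supplied programmatically to the ActorSystem, assuming classic Netty remoting (akka.remote.netty.tcp); with Artery the equivalent keys live under akka.remote.artery.canonical and akka.remote.artery.bind. The host IP and system name below are hypothetical placeholders.

import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ClusterBoot {
    public static void main(String[] args) {
        // 10.0.0.12 is a made-up host IP used only for illustration.
        Config overrides = ConfigFactory.parseString(
                "akka.remote.netty.tcp.hostname = \"10.0.0.12\"\n"
              + "akka.remote.netty.tcp.port = 2551\n"
              + "akka.remote.netty.tcp.bind-hostname = \"0.0.0.0\"\n"
              + "akka.remote.netty.tcp.bind-port = 2551\n");

        // Overrides take precedence over application.conf loaded from the classpath.
        ActorSystem system = ActorSystem.create("ClusterSystem",
                overrides.withFallback(ConfigFactory.load()));
    }
}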

Related

JMX Monitoring for Kubernetes scaling Statefulset

I'm trying to get JMX monitoring from jconsole for an application that is running inside a Kubernetes pod.
Currently, I'm following this method:
I expose a port, let's say 5000, in the YAML
I create a nodePort service that binds that pod port to the worker nodes port
I add the following 4 java properties:
JMX_REMOTE_AUTHENTICATE
JMX_REMOTE_PORT
JMX_REMOTE_RMI_PORT
JMX_REMOTE_SSL
Then I can go into a monitoring tool like jvisualvm, create a connection to the public IP of the worker node hosting that pod at port 5000, and monitor that pod, which works great.
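For completeness, the same jvisualvm/jconsole connection can also be made programmatically, which is handy if the per-pod setup ends up being scripted. A minimal sketch; the worker-node IP and port are hypothetical placeholders:

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical worker-node public IP and NodePort; replace with your own values.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://203.0.113.10:5000/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            System.out.println("Connected, registered MBeans: " + mbeans.getMBeanCount());
        }
    }
}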
Issue:
Now let's say my application scales up and a new pod comes up on another worker node. I can manually repeat all the above steps to monitor that pod.
But that isn't ideal. Ideally, I'd like every pod to be monitored automatically as it comes online. I can add the JMX properties in my statefulset YAML, but do I need a nodePort service for every single pod that comes online, each binding to a different port? If so, surely there must be a way to do this through a script or a built-in function?
If anyone has experience with this, any pointers would be very helpful.

Kafka: events published from the host machine are not consumed by the application running in Docker

I am writing end-to-end tests for an application. I start an instance of an application, a Kafka instance, and a Zookeeper (all Dockerized) and then I interact with the application API to test its functionality. I need to test an event consumer's functionality in this application. I publish events from my tests and the application is expected to handle them.
Problem: If I run the application locally (not in Docker) and run tests that would produce events, the consumer in the application code handles events correctly. In this case, the consumer and the test have bootstrapServers set to localhost:9092. But if the application is run as a Dockerized instance it doesn't see the events. In this case bootstrapServers are set to kafka:9092 in the application and localhost:9092 in the test where kafka is a Docker container name. The kafka container exposes its 9092 port to the host so that the same instance of Kafka can be accessed from inside a Docker container and from the host (running my tests).
The only difference in the code is localhost vs kafka set as bootstrap servers. In both scenarios consumers and producers start successfully; events are published without errors. It is just that in one case the consumer doesn't receive events.
Question: How to make Dockerized consumers see events posted from the host machine?
Note: I have a properly configured Docker network which includes the application instance, Zookeeper, and Kafka. They all "see" each other. The corresponding ports of kafka and zookeeper are exposed to the host.
Kafka ports: 0.0.0.0:9092->9092/tcp. Zookeeper ports: 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp.
I am using wurstmeister/kafka and wurstmeister/zookeeper Docker images (I cannot replace them).
Any ideas/thoughts are appreciated. How would you debug it?
UPDATE: The issue was with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS env variables that were set to different ports for INSIDE and OUTSIDE communications. The solution was to use a correct port in the application code when it is run inside a Docker container.
These kinds of issues are usually related to the way Kafka handles the broker's address.
When you start a Kafka broker, it binds itself to 0.0.0.0:9092 and registers itself on Zookeeper with the address <hostname>:9092. When you connect with a client, Zookeeper will be contacted to fetch the address of that specific broker.
This means that when you start a Kafka container you have a situation like the following:
container name: kafka
network name: kafkanet
hostname: kafka
registration on zookeeper: kafka:9092
Now if you connect a client to your Kafka from a container inside the kafkanet network, the address you get back from Zookeeper is kafka:9092 which is resolvable through the kafkanet network.
However if you connect to Kafka from outside docker (i.e. using the localhost:9092 endpoint mapped by docker), you still get back the kafka:9092 address which is not resolvable.
In order to address this issue you can specify the advertised.host.name and advertised.port in the broker configuration in such a way that the address is resolvable by all the clients (see the documentation).
What is usually done is to set advertised.host.name as <container-name>.<network> (in your case something like kafka.kafkanet) so that any container connected to the network is able to correctly resolve the IP of the Kafka broker.
In your case however you have a mixed network configuration, as some components live inside docker (hence able to resolve the kafkanet network) while others live outside it. If it were a production system my suggestion would be to set the advertised.host.name to the DNS/IP of the host machine and always rely on docker port mapping to reach the Kafka broker.
From my understanding however you only need this setup to test things out, so the easiest thing would be to "trick" the system living outside docker. Using the naming specified above, this simply means adding the line 127.0.0.1 kafka.kafkanet to your /etc/hosts (or the Windows equivalent).
This way when your client living outside docker connects to Kafka the following should happen:
client -> Kafka via localhost:9092
kafka queries Zookeeper and returns the host kafka.kafkanet
client resolves kafka.kafkanet to 127.0.0.1
client -> Kafka via 127.0.0.1:9092
EDIT
As pointed out in a comment, newer Kafka versions now use the concept of listeners and advertised.listeners, which are used in place of host.name and advertised.host.name (which are deprecated and only used in case the above ones are not specified). The general idea is the same however:
host.name: specifies the host to which the Kafka broker should bind itself (works in conjunction with port)
listeners: specifies all the endpoints to which the Kafka broker should bind (for instance PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9091)
advertised.host.name: specifies how the broker is advertised to clients (i.e. which address clients should use to connect to it)
advertised.listeners: specifies all the advertised endpoints (for instance PLAINTEXT://kafka.example.com:9092,SSL://kafka.example.com:9091)
In both cases, for clients to be able to successfully communicate with Kafka, they need to be able to resolve and connect to the advertised hostname and port.
In both cases, if not specified, these values are automatically derived by the broker using the hostname of the machine the broker is running on.
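A quick way to see which address is actually reachable is to point a plain Java consumer at each listener in turn. A minimal sketch, assuming a recent kafka-clients library on the classpath; the topic name and group id are made up:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ListenerCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // From the host use the OUTSIDE listener (localhost:9092);
        // from inside the docker network use the INSIDE listener (kafka:9092).
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "listener-check");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}

If the consumer hangs on poll or keeps reconnecting against an unresolvable advertised address, the advertised listener is the thing to fix.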
You kept referencing 8092. Was that intentional? Kafka runs on 9092. Easiest test is to download the same version of Kafka and manually run its kafka-console-consumer and kafka-console-producer scripts to see if you can pub-sub from your host machine.
Did you try "host.docker.internal" in the dockerized application?
You could create a Docker network for your containers; then the containers will be able to resolve each other's hostnames and communicate.
Note: this works with docker-compose as well as with standalone containers.

How to connect multiple Java applications to same Ignite cluster?

I have three Java applications that will connect to the same Ignite node (running on a particular VM) to access the same cache store.
Is there a step-by-step procedure on how to run a node outside a Java application (from the command prompt, maybe) and connect my Java apps to it?
Your Java applications should serve as client nodes in your cluster. More information about client/server mode can be found in the documentation. Server node(s) can be started from the command line; it's described here. Information about running with a custom configuration can be found there as well. You need to set up discovery in order to make the entire thing work, and it should be done on every node (including client nodes). I'd recommend using the static IP finder in the configuration.
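For illustration, a minimal sketch of one of the Java applications joining the cluster as a client node with the static IP finder; the server address, port range, and cache name are placeholders, not values from the question:

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IgniteClientApp {
    public static void main(String[] args) {
        // Static IP finder pointing at the VM that runs the server node (hypothetical address).
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("192.0.2.10:47500..47509"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // this JVM joins as a client node
        cfg.setDiscoverySpi(discovery);

        try (Ignite ignite = Ignition.start(cfg)) {
            IgniteCache<String, String> cache = ignite.getOrCreateCache("sharedCache"); // hypothetical cache name
            cache.put("key", "value");
            System.out.println(cache.get("key"));
        }
    }
}

The same discovery addresses would typically go into the XML configuration used to start the server node from the command line, so that clients and servers find each other.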

How to configure Java client connecting to AWS EMR spark cluster

I'm trying to write a simple Spark application, and when I run it locally it works by setting the master as
.master("local[2]")
But after configuring a Spark cluster on AWS (EMR), I can't connect to the master URL:
.master("spark://<master url>:7077")
Is this the way to do it? Am I missing something here?
The cluster is up and running, and when I tried adding my application as a step JAR so it would run directly on the cluster, it worked. But I want to be able to run it from a remote machine.
I would appreciate some help here,
Thanks
To run from a remote machine, you will need to open the appropriate ports in the Security Group assigned to your EMR master node. You will need to add at least 7077.
If by "remote" you mean one that isn't in your AWS environment, you will also need to setup a way to route traffic to it from the outside.
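For reference, a minimal sketch of what the remote connection attempt looks like once 7077 is reachable; the master hostname below is a placeholder, not your actual EMR master DNS:

import org.apache.spark.sql.SparkSession;

public class RemoteSubmitSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("remote-submit-sketch")
                // Placeholder master address; substitute the public DNS of your EMR master node.
                .master("spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077")
                .getOrCreate();

        System.out.println("Connected, Spark version: " + spark.version());
        spark.stop();
    }
}

Note that a spark:// URL assumes a standalone master is actually listening on 7077; EMR clusters often run Spark on YARN instead, in which case remote submission is set up differently.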

Kubernetes service in java does not resolve restarted service/replicationcontroller

I have a Kubernetes cluster where one service (a Java application) connects to another service to write data (Elasticsearch).
When Elasticsearch (service & replication controller) is restarted/redeployed, the Java application loses its connection, which can only be recovered by restarting the Java application (rc). This is not the desired behaviour and should be solved.
Using curl from the Kubernetes pod of the application to query Elasticsearch works fine after the restart, so it is probably something Java is doing.
It does work when only the replication controller for Elasticsearch is touched, leaving the service as it is. But why does curl work in that case? In any case, this should not be the solution.
Using the same configuration in a local Docker setup without Kubernetes also does not lead to problems.
Promising solutions that did not work:
Setting networkaddress.cache.ttl or networkaddress.cache.negative.ttl to zero (or other small positive values)
Hacking /etc/nsswitch.conf as described in https://stackoverflow.com/a/32550032/363281
I'm using kubernetes 1.1.3, OpenJDK 8u66, service Dockerfile is derived from java:8
Try java.security.Security.setProperty("networkaddress.cache.ttl", "60");
This means sixty seconds, and you should adapt it to your needs.
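A minimal sketch of setting this (plus the negative-cache TTL) early in main, before the Elasticsearch client is created; the values are only examples:

import java.security.Security;

public class DnsCacheSettings {
    public static void main(String[] args) {
        // Must run before the first name lookup so the JVM-wide DNS cache picks these values up.
        Security.setProperty("networkaddress.cache.ttl", "60");
        Security.setProperty("networkaddress.cache.negative.ttl", "10");

        // ... create the Elasticsearch client afterwards (client setup omitted).
    }
}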
One solution is not to restart your Service: a Service resolves the Pods by IPs and watches the Pods by selectors, so you don't need to restart the Service when you restart your Pods.
Now what is likely happening is that your application resolves the Service at start-up and then caches the IP. When you restart the Service, it likely gets a new IP, which messes up your application's behavior. You need to check how you can reset this cache or initiate some sort of restart of that app when the pods/services are changed.
If you don't restart the Service, the IP won't change, but it will still proxy to the Pods that are restarted.
