I have a REST service running in a Docker container on port 5000, which produces messages to a Kafka topic running outside the Docker container.
I've configured my producer client with the following property:
bootstrap.servers=localhost:9093
And I've started my container with the following command:
docker run -d -p 127.0.0.1:5000:5000 <container id>
I've also set the following configuration to advertise the Kafka host and port:
advertised.host.name=localhost
advertised.port=9093
Despite all this configuration, when I try to produce to a Kafka topic I get the following error:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Can someone please point out where the actual problem is?
In real life, advertised.host.name should never be localhost.
In your case, you run your Docker container in bridge networking mode, so it won't be able to reach the broker via localhost: inside the container, localhost points to the container itself, not to the host machine.
To make it work you should set the advertised.host.name and bootstrap.servers to the IP address returned by ifconfig docker0 (might be not docker0 in your case but you get the point).
Alternatively, you could run your container with --net=host, but I think you'd be better off properly configuring the advertised host name.
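For illustration, assuming the docker0 bridge has the common default address 172.17.0.1 (check yours with ifconfig docker0; it may differ), the settings would look roughly like this:

```properties
# Broker configuration (server.properties):
# advertise the host-side bridge address so containers can reach it
advertised.host.name=172.17.0.1
advertised.port=9093

# Producer client inside the container:
bootstrap.servers=172.17.0.1:9093
```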
Related
I am writing end-to-end tests for an application. I start an instance of an application, a Kafka instance, and a Zookeeper (all Dockerized) and then I interact with the application API to test its functionality. I need to test an event consumer's functionality in this application. I publish events from my tests and the application is expected to handle them.
Problem: If I run the application locally (not in Docker) and run tests that would produce events, the consumer in the application code handles events correctly. In this case, the consumer and the test have bootstrapServers set to localhost:9092. But if the application is run as a Dockerized instance it doesn't see the events. In this case bootstrapServers are set to kafka:9092 in the application and localhost:9092 in the test where kafka is a Docker container name. The kafka container exposes its 9092 port to the host so that the same instance of Kafka can be accessed from inside a Docker container and from the host (running my tests).
The only difference in the code is localhost vs kafka set as bootstrap servers. In both scenarios consumers and producers start successfully; events are published without errors. It is just that in one case the consumer doesn't receive events.
Question: How to make Dockerized consumers see events posted from the host machine?
Note: I have a properly configured Docker network which includes the application instance, Zookeeper, and Kafka. They all "see" each other. The corresponding ports of kafka and zookeeper are exposed to the host.
Kafka ports: 0.0.0.0:9092->9092/tcp. Zookeeper ports: 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp.
I am using wurstmeister/kafka and wurstmeister/zookeeper Docker images (I cannot replace them).
Any ideas/thoughts are appreciated. How would you debug it?
UPDATE: The issue was with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS env variables that were set to different ports for INSIDE and OUTSIDE communications. The solution was to use a correct port in the application code when it is run inside a Docker container.
These kinds of issues are usually related to the way Kafka handles the broker's address.
When you start a Kafka broker, it binds itself to 0.0.0.0:9092 and registers itself in Zookeeper with the address <hostname>:9092. When you connect with a client, Zookeeper is contacted to fetch the address of the specific broker.
This means that when you start a Kafka container you have a situation like the following:
container name: kafka
network name: kafkanet
hostname: kafka
registration on zookeeper: kafka:9092
Now if you connect a client to your Kafka from a container inside the kafkanet network, the address you get back from Zookeeper is kafka:9092 which is resolvable through the kafkanet network.
However if you connect to Kafka from outside docker (i.e. using the localhost:9092 endpoint mapped by docker), you still get back the kafka:9092 address which is not resolvable.
In order to address this issue you can specify advertised.host.name and advertised.port in the broker configuration so that the address is resolvable by all the clients (see the documentation).
What is usually done is to set advertised.host.name as <container-name>.<network> (in your case something like kafka.kafkanet) so that any container connected to the network is able to correctly resolve the IP of the Kafka broker.
In your case however you have a mixed network configuration, as some components live inside docker (hence able to resolve the kafkanet network) while others live outside it. If it were a production system my suggestion would be to set the advertised.host.name to the DNS/IP of the host machine and always rely on docker port mapping to reach the Kafka broker.
From my understanding however you only need this setup to test things out, so the easiest thing would be to "trick" the system living outside docker. Using the naming specified above, this means simply to add to your /etc/hosts (or windows equivalent) the line 127.0.0.1 kafka.kafkanet.
This way when your client living outside docker connects to Kafka the following should happen:
client -> Kafka via localhost:9092
kafka queries Zookeeper and return the host kafka.kafkanet
client resolves kafka.kafkanet to 127.0.0.1
client -> Kafka via 127.0.0.1:9092
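Putting the pieces together, the broker side of this sketch would look like the following (using the illustrative kafka/kafkanet names from above):

```properties
# Broker configuration (server.properties):
# advertise the container name qualified by the compose network,
# resolvable by other containers on kafkanet and, via the /etc/hosts
# trick above, by clients on the host as well
advertised.host.name=kafka.kafkanet
advertised.port=9092
```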
EDIT
As pointed out in a comment, newer Kafka versions use the concepts of listeners and advertised.listeners, which are used in place of host.name and advertised.host.name (the latter are deprecated and only used in case the former are not specified). The general idea is the same, however:
host.name: specifies the host to which the Kafka broker should bind itself (works in conjunction with port)
listeners: specifies all the endpoints to which the Kafka broker should bind (for instance PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9091)
advertised.host.name: specifies how the broker is advertised to clients (i.e. which address clients should use to connect to it)
advertised.listeners: specifies all the advertised endpoints (for instance PLAINTEXT://kafka.example.com:9092,SSL://kafka.example.com:9091)
In both cases, for clients to be able to successfully communicate with Kafka, they need to be able to resolve and connect to the advertised hostname and port.
In both cases, if not specified, they are automatically derived by the broker from the hostname of the machine the broker is running on.
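As a sketch of the listener-based configuration for a mixed setup like this one (the listener names INSIDE/OUTSIDE and the port 9094 are illustrative choices, not from the original question):

```properties
# server.properties: separate listeners for container-internal and host traffic
listeners=INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9094
advertised.listeners=INSIDE://kafka:9092,OUTSIDE://localhost:9094
listener.security.protocol.map=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
inter.broker.listener.name=INSIDE
```

Containers on the Docker network connect to kafka:9092, while clients on the host connect to localhost:9094 (with 9094 published via port mapping); each kind of client is handed back an address it can actually resolve.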
You kept referencing 8092. Was that intentional? Kafka runs on 9092. Easiest test is to download the same version of Kafka and manually run its kafka-console-consumer and kafka-console-producer scripts to see if you can pub-sub from your host machine.
Did you try "host.docker.internal" in the dockerized application?
You could create a Docker network for your containers; containers on the same network can then resolve each other's hostnames and communicate.
Note: this works with docker-compose as well as with standalone containers.
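A minimal docker-compose sketch of this idea (the service names app and kafka and the application image are illustrative); services in the same compose file share a default network and can reach each other by service name:

```yaml
version: "3"
services:
  kafka:
    image: wurstmeister/kafka
  app:
    image: my-app                       # hypothetical application image
    environment:
      BOOTSTRAP_SERVERS: kafka:9092     # resolvable via the compose network
```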
I need multiple instances of the same application; for that I am using
server.port=0 to run the application on a random port.
My question is: how can I map the randomly generated port in docker-compose.yml to create multiple instances?
I am using Spring Boot at the back end. I am unable to find any solution.
Any help much appreciated.
Each Docker container runs a single process in an isolated network namespace, so this isn't necessary. Pick a fixed port. For HTTP services, common port numbers include 80, 3000, 8000, and 8080, depending on permissions and the language runtime (80 requires elevated privileges, 3000 is Node's default, and so on). The exact port number doesn't matter.
You access the port from outside Docker space using a published port. If you're running multiple containers, there is the potential for conflict if multiple services use the same host port, which is probably what you're trying to avoid. In the docker run -p option or the Docker Compose ports: setting, it's possible to list only the port running inside the container, and Docker will choose a host port for you.
version: "3"
services:
  web:
    image: ...
    ports:
      - "8000"                          # no explicit host port
    command: ... -Dserver.port=8000     # fixed container port
docker-compose port web 8000 will tell you what the host (public) port number is. For communication between containers in the same docker-compose.yml file, you can use the service name and the (fixed, known) internal port, http://web:8000.
We are trying to use application-level clustering, via Akka Cluster, for our distributed application, which runs in Docker containers across multiple nodes. We plan to run the Docker containers in "host" mode networking.
When the dockerized application comes up for the first time, the Akka clustering does not seem to work: we do not see any gossip messages being exchanged between the cluster nodes. This gets resolved only when we remove the file "/var/lib/docker/network/files/local-kv.db" and restart the Docker service. This is not an acceptable solution for the production deployment, so we are trying to do an RCA and provide a proper solution.
Any help here would be really appreciated.
Tried removing the file "/var/lib/docker/network/files/local-kv.db" and restarting the Docker service; that worked, but this workaround is unacceptable in the production deployment.
Tried using the bridge network mode for the dockerized container. That helps, but our current requirement requires us to run the container in "host" mode.
application.conf has the following settings for the host and port currently.
hostname = ""
port = 2551
bind-hostname = "0.0.0.0"
bind-port = 2551
No gossip messages are exchanged between the Akka cluster nodes, whereas we do see those messages after applying the mentioned workaround.
I have some middleware running in a docker container.
When I run this middleware on my host machine everything works fine.
When I run it in the Docker container with all the necessary ports exposed and published:
Dockerfile:
EXPOSE 5672 15672 1337 1338 5556 3000
Docker-compose.yml
ports:
- "5672:5672"
- "15672:15672"
- "1337:1337"
- "1338:1338"
- "5556:5556"
- "3000:3000"
It's weird because I have RabbitMQ and Mule in that image. Rabbit works well because I can access the management console and my Mule app publishes to it.
I have a flow that, with a Quartz component, publishes a keep-alive to RabbitMQ every 30 ms, and that works well.
But I have another flow which receives information on a UDP inbound endpoint and publishes it to a RabbitMQ queue. The inbound endpoint doesn't receive anything; it listens on 0.0.0.0 port 1338, and I am binding 1338:1338.
So if I receive packets on localhost:1338 on my host machine, the inbound endpoint should receive them, no?
Also, in another flow I have a Java client socket which gives me connection refused.
The strange thing is that none of this happens when I run it on my host machine, and in Docker I have the ports exposed and published.
Thanks everyone
You need to indicate to Docker that it is the UDP protocol.
FROM:
- "1338:1338"
TO:
- "1338:1338/udp"
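In the docker-compose.yml above, only the UDP endpoint needs the suffix; the TCP mappings stay as they are. A sketch of the corrected ports section:

```yaml
ports:
  - "5672:5672"        # TCP is the default and needs no suffix
  - "1338:1338/udp"    # UDP must be declared explicitly
```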
We have Kafka 0.8.2.1 running in a single Docker container. This is one of several containers set up to work together with docker-compose, which also exposes Kafka's port 9092 to the host machine.
The advertised.host.name of the Kafka server is set to kafka, and all the other containers can talk to it fine using this name.
The problem is that java test programs cannot send messages to the kafka server from the host machine.
Using kafka.javaapi.producer.Producer errors
Using org.apache.kafka.clients.producer.KafkaProducer hangs for ages then errors
In both cases if I add an entry to the /etc/hosts file on the host machine it works fine. But this is not ideal, I would like anyone to be able to run these tests without messing with their hosts file.
So is there a way, in a Java Kafka producer, to override the hostname/IP specified in the metadata? We only have one instance of Kafka, so there is no issue in getting the "right one".
On the FAQ page it implies this can be done:
In another rare case where the binding host/port is different from the host/port for client connection, you can set advertised.host.name and advertised.port for client connection
But there are no details how...
Or, failing that, a more general solution.
Is there a way to set a hosts entry in the Java runtime environment, without messing with the system /etc/hosts file?
Thanks
It is possible to set up a dockerised broker so that it can talk to consumers and producers both in containers and on the host machine.
Set advertised.host.name in the dockerised broker to its docker-compose network IP address, and bootstrap.servers for any non-dockerised producers or consumers to localhost.
To get the compose network IP you can either specify a static IP in the compose file or set your container to run a shell command before starting kafka to get it. The command I use is ip -4 addr show scope global dev $(route -n | grep 'UG[ \t]' | awk '{print $8}' | head -1) | grep inet | awk '{print $2}' | cut -d / -f 1, but there are probably nicer ways.
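As a sketch of the static-IP variant (the network name kafkanet, the image, and the subnet/address below are illustrative assumptions, not taken from the original setup):

```yaml
version: "3"
services:
  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: "172.28.0.10"   # the compose-network IP
      KAFKA_ADVERTISED_PORT: "9092"
    ports:
      - "9092:9092"
    networks:
      kafkanet:
        ipv4_address: 172.28.0.10
networks:
  kafkanet:
    ipam:
      config:
        - subnet: 172.28.0.0/16
```

Non-dockerised clients bootstrap via localhost:9092 and then follow the advertised 172.28.0.10 address, which on Linux is routable from the host into the compose network.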