For mutual TLS, "ssl.client.auth" should be set to "required". So we try to apply the dynamic update using the command below:
sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104 --entity-type brokers --entity-name 117373 --alter --add-config listener.name.app.ssl.client.auth=required
Completed updating config for broker 117373.
sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104 --entity-type brokers --entity-name 117373 --describe
Dynamic configs for broker 117373 are:
listener.name.app.ssl.client.auth=required sensitive=false synonyms={DYNAMIC_BROKER_CONFIG:listener.name.app.ssl.client.auth=required, STATIC_BROKER_CONFIG:ssl.client.auth=none, DEFAULT_CONFIG:ssl.client.auth=none}
The dynamic command executes successfully, but in the captured tcpdump (pcap) no "Certificate Request" is sent from the server.
However, if we alter the config manually and restart Kafka, we can see the "Certificate Request" from the server in tcpdump.
Please help with resolving why the dynamic update of "ssl.client.auth=required" does not take effect.
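For reference, one hedged way to check from the client side whether the broker is actually requesting a client certificate (broker-host and port 9093 are placeholders for wherever the "app" listener is exposed over TLS):

# placeholder host/port; with mTLS active, a CertificateRequest handshake message should appear
openssl s_client -connect broker-host:9093 -msg </dev/null 2>&1 | grep -i 'CertificateRequest'

If the dynamic update had taken effect, the grep should match; if it only matches after a broker restart, that reproduces the behavior described above.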
Related
I set up a single-node Kafka Docker container on my local machine as described in the Confluent documentation (steps 2-3).
In addition, I also exposed Zookeeper's port 2181 and Kafka's port 9092 so that I can connect to them from a client running on the local machine:
$ docker run -d \
-p 2181:2181 \
--net=confluent \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=2181 \
confluentinc/cp-zookeeper:4.1.0
$ docker run -d \
--net=confluent \
--name=kafka \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
Problem: When I try to connect to Kafka from the host machine, the connection fails because it can't resolve address: kafka:9092.
Here is my Java code:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("client.id", "KafkaExampleProducer");
props.put("key.serializer", LongSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<Long, String> producer = new KafkaProducer<>(props);
ProducerRecord<Long, String> record = new ProducerRecord<>("foo", 1L, "Test 1");
producer.send(record).get();
producer.flush();
The exception:
java.io.IOException: Can't resolve address: kafka:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.common.network.Selector.connect(Selector.java:214) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:176) [kafka-clients-2.0.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_144]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_144]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.0.jar:na]
... 7 common frames omitted
Question: How do I connect to Kafka running in Docker? My code runs on the host machine, not in Docker.
Note: I know that I could theoretically play around with DNS setup and /etc/hosts, but that is a workaround - it shouldn't be like that.
There is also a similar question here; however, it is based on the ches/kafka image, while I use a confluentinc-based image, which is not the same.
Disclaimer
tl;dr - A simple port forward from the container to the host will not work, and no hosts files should be modified. What exact IP/hostname + port do you want to connect to? Make sure that value is set as advertised.listeners on the broker. Make sure that address and the servers listed as part of bootstrap.servers are actually resolvable (ping an IP/hostname, use netcat to check ports...)
To verify the ports are mapped correctly on the host, ensure that docker ps shows the kafka container is mapped from 0.0.0.0:<host_port> -> <advertised_listener_port>/tcp. The ports must match if trying to run a client from outside the Docker network.
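For example, a quick hedged sanity check from the client machine (kafka-host and 29092 are placeholders for whatever you actually advertise):

ping kafka-host                                  # placeholder hostname: replace with your advertised host
nc -vz kafka-host 29092                          # placeholder port; assumes netcat is installed
docker ps --format '{{.Names}}: {{.Ports}}'      # confirm the 0.0.0.0:<host_port>-> mapping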
The answer below uses confluentinc Docker images to address the question that was asked, not wurstmeister/kafka. More specifically, the latter images are not well maintained despite being one of the most popular Kafka Docker images.
The following sections try to aggregate all the details needed to use another image. For other commonly used Kafka images, it's all the same Apache Kafka running in a container; you're just dependent on how it is configured, and on which variables make it so.
wurstmeister/kafka
Refer to their README section on listener configuration. Also read their Connectivity wiki.
bitnami/kafka
If you want a small container, try these. The images are much smaller than the Confluent ones and are much better maintained than wurstmeister's. Refer to their README for listener configuration.
debezium/kafka
Docs on it are mentioned here.
Note: advertised host and port settings are deprecated. Advertised listeners covers both. Similar to the Confluent containers, Debezium can use KAFKA_ prefixed broker settings to update its properties.
Others
spotify/kafka is deprecated and outdated.
fast-data-dev or lensesio/box are great for an all-in-one solution, but are bloated if you only want Kafka
Your own Dockerfile - why? Is something incomplete with these others? Start with a pull request instead of starting from scratch.
For supplemental reading, a fully-functional docker-compose, and network diagrams, see this blog by @rmoff
Answer
The Confluent quickstart (Docker) document assumes all produce and consume requests will be within the Docker network.
You could fix the problem of connecting to kafka:9092 by running your Kafka client code within its own container, since that uses the Docker network bridge; otherwise you'll need to add some more environment variables to expose the container externally while still having it work within the Docker network.
First, add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT that maps the listener name to a Kafka protocol:
Key: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
Value: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
Then set up two advertised listeners on different ports. (kafka here refers to the Docker container name; it might also be named broker, so double-check your service + hostnames.)
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
Notice that the protocols here match the left-side values of the protocol mapping setting above.
When running the container, add -p 29092:29092 for the host port mapping of the advertised PLAINTEXT_HOST listener.
tl;dr
(with the above settings)
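Putting it together, a minimal single-broker sketch, assuming the same container names, network, and image version as the question:

docker run -d \
  --net=confluent \
  --name=kafka \
  -p 29092:29092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:4.1.0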
If something still doesn't work, KAFKA_LISTENERS can be set to include <PROTOCOL>://0.0.0.0:<PORT>, where both options match the advertised setting and the Docker-forwarded port.
Client on same machine, not in a container
Advertising localhost and the associated port will let you connect outside of the container, as you'd expect.
In other words, when running any Kafka Client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers and localhost:2181 for Zookeeper (requires Docker port forwarding)
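For instance, a hedged check assuming the Confluent CLI tools are installed on the host and a topic named test exists:

kafka-console-producer --broker-list localhost:29092 --topic test                        # assumes a local CLI install
kafka-console-consumer --bootstrap-server localhost:29092 --topic test --from-beginning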
Client on another machine
If trying to connect from an external server, you'll need to advertise the external hostname/IP (e.g. 192.168.x.y) of the host, in addition to or in place of localhost.
Simply advertising localhost with a port forward will not work because the Kafka protocol will still continue to advertise the listeners you've configured.
This setup requires Docker port forwarding and router port forwarding (and firewall / security group changes) if not in the same local network; for example, your container is running in the cloud and you want to interact with it from your local machine.
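For example, reusing the Key/Value style from above (192.168.x.y is still a placeholder; substitute the host's real address):

Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://192.168.x.y:29092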
Client (or another broker) in a container, on the same host
This is the least error-prone configuration; you can use DNS service names directly.
When running an app in the Docker network, use kafka:9092 (see advertised PLAINTEXT listener config above) for bootstrap servers and zookeeper:2181 for Zookeeper, just like any other Docker service communication (doesn't require any port forwarding)
If you use separate docker run commands, or Compose files, you need to define a shared network manually (see the sketch below)
See the example Compose file for the full Confluent stack, or a more minimal one for a single broker.
If using multiple brokers, then they need to use unique hostnames + advertised listeners. See example
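A minimal sketch of that manual shared network (the network name confluent is taken from the question):

docker network create confluent    # one-time setup; each docker run then joins it with --net=confluent

In Compose, declare the equivalent network in each file and mark it external so both stacks join it.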
Related question
Connect to Kafka on host from Docker (ksqlDB)
Appendix
For anyone interested in Kubernetes deployments:
Accessing Kafka
Operators (recommended): https://operatorhub.io/?keyword=Kafka
Helm Artifact Hub: https://artifacthub.io/packages/search?ts_query_web=kafka&sort=stars&page=1
When you first connect to a Kafka node, it gives you back the addresses of all the Kafka nodes plus the URL to connect to, and your application will then try to connect to every broker directly.
The issue is always: what URL will Kafka give you? That's why KAFKA_ADVERTISED_LISTENERS exists, which Kafka uses to tell the world how it can be accessed.
Now for your use case, there are multiple small things to think about:
Let's say you set plaintext://kafka:9092
This is OK if you have an application in your docker-compose that uses Kafka. This application will get from Kafka the URL with kafka, which is resolvable through the Docker network.
If you try to connect from your main system, or from another container that is not in the same Docker network, this will fail, as the kafka name cannot be resolved.
==> To fix this, you need a specific DNS server, like a service-discovery one, but that is big trouble for small stuff. Alternatively, you manually map the kafka name to the container IP in each /etc/hosts.
If you set plaintext://localhost:9092
This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka).
This will fail if you test from an application in a container (same Docker network or not), since localhost there is the container itself, not the Kafka one.
==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network between both containers (same IP).
Last option: set an IP in the name: plaintext://x.y.z.a:9092 (the Kafka advertised URL cannot be 0.0.0.0, as stated in the docs: https://kafka.apache.org/documentation/#brokerconfigs_advertised.listeners)
This will be OK for everybody... but how can you get the x.y.z.a address?
The only way is to hardcode this IP when you launch the container: docker run .... --net confluent --ip 10.x.y.z .... Note that you need to adapt the IP to a valid IP in the confluent subnet; a fuller sketch follows below.
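A hypothetical expansion of that command; the subnet and IP here are made-up values you must adapt to your environment:

docker network create --subnet 172.18.0.0/16 confluent    # only if the network does not already exist; --ip needs a user-defined subnet
docker run -d --net confluent --ip 172.18.0.10 --name kafka \
  -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.18.0.10:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka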
First, Zookeeper:
docker container run --name zookeeper -p 2181:2181 zookeeper
Then, Kafka:
docker container run --name kafka -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.8.128:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip_address_of_your_computer_but_not_localhost!!!:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka
And in the Kafka consumer and producer config:
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
I run my project with these settings. Good luck.
The simplest way to solve this is to add a custom hostname to your broker using the -h option:
docker run -d \
--net=confluent \
--name=kafka \
-h broker-1 \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://broker-1:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
and edit your /etc/hosts
127.0.0.1 broker-1
and use:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
This allows me to access localhost:9092 in Kafka applications on my M1 Mac
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
plus port forwarding:
ports:
  - "9092:9092"
Finally, again for my setup, I have to set the listeners key this way:
Key: KAFKA_LISTENERS
Value: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
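For reference, a hedged docker run equivalent of this Compose fragment (the network, container name, and image are assumptions carried over from earlier in the thread; adjust to your setup):

docker run -d --net=confluent --name=kafka \
  -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka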
I have a 3-node Kafka cluster in a Windows environment. I recently added security to this existing cluster with the SASL_SSL mechanism.
Here are my server.properties security configurations on each node:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2
ssl.endpoint.identification.algorithm=
ssl.truststore.location=kafka-truststore.jks
ssl.truststore.password=******
ssl.keystore.location=kafka.keystore.jks
ssl.keystore.password=******
ssl.key.password=******
Everything is working fine. I am able to store and retrieve messages, and Kafka Streams applications are properly connected. But since yesterday I have been getting continuous logs on all three nodes like this:
INFO [SocketServer brokerId=2] Failed authentication with host.docker.internal/ip (SSL handshake failed) (org.apache.kafka.common.network.Selector)
As the log says, the broker with id 2 is refusing the SSL handshake from the other brokers, i.e. 1 and 3.
I have verified the jks certificates and they are all valid.
Does anyone know the reason for these logs?
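For anyone debugging similar handshake failures, two hedged checks (broker2-host:9093 is a placeholder for the inter-broker listener):

openssl s_client -connect broker2-host:9093 </dev/null    # placeholder host/port; prints the certificate chain and any handshake failure reason
keytool -list -v -keystore kafka.keystore.jks             # inspect validity dates and SAN entries in the keystore from the config above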
I have two servers as VirtualBox guests, each running Ubuntu. I can SSH from my main machine to both, and between the two, so they are all on the NAT network.
I ran Kafka on one server as described here:
https://kafka.apache.org/quickstart
So I brought up a single-node Zookeeper, Kafka then started, and I added the test topic.
(All on MachineA, 10.75.1.247.)
I am trying to list the topics on that node from another machine:
bin/kafka-topics.sh --list --bootstrap-server 10.75.1.247:9092
from MachineB (10.75.1.2)
Doing that causes the error over and over:
[2019-09-16 23:57:07,864] WARN [AdminClient clientId=adminclient-1] Error connecting to node ubuntukafka:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: ubuntukafka
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:943)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:288)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:925)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1140)
at java.base/java.lang.Thread.run(Thread.java:834)
It does resolve the name (it says ubuntukafka instead of ubuntukafkanode) but fails.
What am I missing? Am I using Kafka wrong? I thought I could have a central Kafka server that all my other servers with data can produce to, and that many other consumers can then read the information from.
Ultimately, what I wanted to test was whether I could send messages to my Kafka server:
bin/kafka-console-producer.sh --broker-list 10.75.1.247:9092 --topic test
And then later use Python to produce messages to the server:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='10.75.1.247:9092')
for _ in range(100):
    try:
        producer.send('test', b'some_message_bytes')
    except Exception:
        print('doh')
producer.flush()  # ensure buffered messages are actually sent before the script exits
Generally, it seems your hostnames aren't resolvable. Does ping ubuntukafka work? If not, then you'll need to adjust what Kafka returns via advertised.listeners to be the external IP rather than the hostname:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.75.1.247:9092
Here, 10.75.1.247 is the network address to be resolved by the external machines (e.g. make sure you can ping that address, too).
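After restarting the broker with those settings, a quick way to verify from MachineB (assuming netcat is installed):

ping -c 3 10.75.1.247
nc -vz 10.75.1.247 9092
bin/kafka-topics.sh --list --bootstrap-server 10.75.1.247:9092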
Only changing listeners=PLAINTEXT://localhost:9092 worked for me; there was no need to change the advertised.listeners property in the server config.
You can add the line below to /etc/hosts:
127.0.0.1 ${your/hostname}
I have an unsecured Kafka instance with 2 brokers. Everything was running fine until I decided to configure ACLs for topics; after the ACL configuration, my consumers stopped polling data from Kafka and I keep getting the warning "Error while fetching metadata with correlation id". My broker properties look like this:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
And my client configuration looks like this:
bootstrap.servers=localhost:9092
topic.name=topic-name
group.id=topic-group
I've used the command below to configure the ACL:
bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* Read --allow-host localhost --consumer --topic topic-name --group topic-group
With all of the above configuration in place, when I start the consumer it no longer receives messages. Can someone point out where I'm going wrong? Thanks in advance.
We are using ACLs successfully, but not with the PLAINTEXT protocol.
IMHO you should use the SSL protocol and, instead of localhost, use the real machine name.
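As a first debugging step, it may also help to list the ACLs that were actually stored; a sketch reusing the paths from the question:

bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --list --topic topic-name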