Configuring ACL for kafka topic - java

I have an unsecured Kafka instance with 2 brokers. Everything was running fine until I decided to configure ACLs for topics. After the ACL configuration my consumers stopped polling data from Kafka and I keep getting the warning Error while fetching metadata with correlation id. My broker properties look like below:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
And my client configuration looks like below:
bootstrap.servers=localhost:9092
topic.name=topic-name
group.id=topic-group
I've used the below command to configure the ACL:
bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* Read --allow-host localhost --consumer --topic topic-name --group topic-group
With all of the above configuration in place, when I start the consumer it no longer receives messages. Can someone point out where I'm going wrong? Thanks in advance.

We are using ACLs successfully, but not with the PLAINTEXT protocol.
IMHO you should use the SSL protocol and, instead of localhost, use the real machine name.
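For what it's worth, a consumer configured for SSL would look roughly like the sketch below. It is only an illustration: the host name, port and keystore/truststore paths are placeholders, not values from the question.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SslConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Real machine name and the broker's SSL listener port (placeholders).
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-host.example.com:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Use SSL instead of PLAINTEXT so the authorizer can identify the client.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/client.keystore.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "changeit");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic-name"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}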

Related

RabbitMQ - Spring boot consumer does not reconnect after rabbit node failover

I have a multi-node RabbitMQ broker with a 3-node setup. I ran failover tests, one scenario of which was restarting 2 of the 3 Rabbit nodes. After restarting one node, the Spring Boot application's connection was gone and it wasn't reconnecting. All I got is this error message from the Spring Boot application:
java.io.IOException
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to exchange 'EXAMPLE_NAME' in vhost 'EXAMPLE_VHOST' refused for user 'EXAMPLE_USER', class-id=40, method-id=10)
I am using the Apache Camel RabbitMQ consumer to connect to the broker.
I tried using the Spring property below, but it didn't solve anything:
camel.component.rabbitmq.automatic-recovery-enabled=true
After the nodes are restarted and up again there is something like this in the RabbitMQ UI:
Does anyone have a solution, whether with other properties or something else?
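For reference, the Camel option above corresponds, as far as I know, to the automatic-recovery flag on the underlying RabbitMQ Java client. Set directly on a ConnectionFactory it would look roughly like the sketch below; the host is a placeholder, the user and vhost are the redacted names from the error message, and this alone does not explain the ACCESS_REFUSED error.

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RecoveringConnectionSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbit-node-1");          // placeholder host
        factory.setUsername("EXAMPLE_USER");
        factory.setPassword("secret");             // placeholder
        factory.setVirtualHost("EXAMPLE_VHOST");
        // Reconnect automatically after a node failure and re-declare the topology.
        factory.setAutomaticRecoveryEnabled(true);
        factory.setTopologyRecoveryEnabled(true);
        factory.setNetworkRecoveryInterval(5000);  // retry every 5 seconds

        try (Connection connection = factory.newConnection()) {
            System.out.println("Connected: " + connection.isOpen());
        }
    }
}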

Dynamic update of SSL property of Kafka is not working

For mutual TLS, "ssl.client.auth" should be set to "required". So we tried to do the dynamic update using the below command:
sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104 --entity-type brokers --entity-name 117373 --alter --add-config listener.name.app.ssl.client.auth=required
Completed updating config for broker 117373.
sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104 --entity-type brokers --entity-name 117373 --describe
Dynamic configs for broker 117373 are:
listener.name.app.ssl.client.auth=required sensitive=false synonyms={DYNAMIC_BROKER_CONFIG:listener.name.app.ssl.client.auth=required, STATIC_BROKER_CONFIG:ssl.client.auth=none, DEFAULT_CONFIG:ssl.client.auth=none}
The dynamic command execution succeeds, but in the captured tcpdump (pcap) no "Certificate Request" is sent from the server.
But if we alter the property manually and restart Kafka, we can see the "Certificate Request" from the server in the tcpdump.
Please help in resolving the dynamic update of "ssl.client.auth=required".
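Not an answer to why the dynamically applied value is ignored, but for completeness the same alter/describe cycle can also be driven from Java with the AdminClient (incrementalAlterConfigs needs a 2.3+ client). A sketch, assuming the broker id and bootstrap address from the question:

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicSslClientAuthSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:28104");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "117373");

            // Equivalent of --alter --add-config listener.name.app.ssl.client.auth=required
            AlterConfigOp setClientAuth = new AlterConfigOp(
                    new ConfigEntry("listener.name.app.ssl.client.auth", "required"),
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Collections.singletonMap(broker, Collections.singletonList(setClientAuth));
            admin.incrementalAlterConfigs(updates).all().get();

            // Equivalent of --describe: print the effective value for this listener.
            Map<ConfigResource, Config> configs =
                    admin.describeConfigs(Collections.singleton(broker)).all().get();
            System.out.println(configs.get(broker).get("listener.name.app.ssl.client.auth"));
        }
    }
}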

Kafka Error connecting to node ubuntukafka:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient) java.net.UnknownHostException:

I have two servers as VirtualBox guests, each running Ubuntu. I can SSH from my main machine to both, and between the two, so they are all on the NAT network.
On one server I ran Kafka as described here:
https://kafka.apache.org/quickstart
So I brought up a single-node ZooKeeper.
Kafka then started. I added the test topic.
(All on MachineA, 10.75.1.247.)
I am trying to list the topics on that node from another machine:
bin/kafka-topics.sh --list --bootstrap-server 10.75.1.247:9092
from MachineB (10.75.1.2).
Doing that causes the error over and over:
[2019-09-16 23:57:07,864] WARN [AdminClient clientId=adminclient-1] Error connecting to node ubuntukafka:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: ubuntukafka
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:943)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:288)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:925)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1140)
at java.base/java.lang.Thread.run(Thread.java:834)
It does resolve the name (it says ubuntukafka instead of ubuntukafkanode) but then fails.
What am I missing? Am I using Kafka wrong? I thought I could have one Kafka server that all my other servers with data can produce information to, and that many other consumers can then read the information from.
Ultimately, what I wanted to test was whether I could send messages to my Kafka server:
bin/kafka-console-producer.sh --broker-list 10.75.1.247:9092 --topic test
And later use Python to produce messages to the server:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='10.75.1.247:9092')
for _ in range(100):
    try:
        producer.send('test', b'some_message_bytes')
    except:
        print('doh')
Generally, it seems your hostnames aren't resolvable. Does ping ubuntukafka work? If not, then you'll need to adjust what Kafka returns via advertised.listeners to be the external IP rather than the hostname:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.75.1.247:9092
Where 10.75.1.247 is the network address to be resolved by the external machines (make sure you can ping that address, too).
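Once advertised.listeners points at the routable IP, a quick connectivity check from MachineB could look like this Java equivalent of the Python snippet above (a sketch; the address and topic are the ones from the question):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ConnectivityCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.75.1.247:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<>("test", "some_message_" + i));
            }
            producer.flush(); // make sure the batch is actually sent before the JVM exits
        }
    }
}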
Only changing listeners=PLAINTEXT://localhost:9092 worked for me; no need to change the advertised.listeners property in the server config.
You can add the line below to the file /etc/hosts:
127.0.0.1 ${your/hostname}

Kafka ACL - LEADER_NOT_AVAILABLE

I have an issue producing messages to a Kafka topic (named secure.topic) secured with ACL.
My Groovy-based producer throws this error:
Error while fetching metadata with correlation id 9 : {secure.topic=LEADER_NOT_AVAILABLE}
Some notes about the configuration:
1 Kafka server, version 2.11_1.0.0 (both server and Java client libs)
topic ACL is set to All (also tested with --producer) and the user is the full name specified in the certificate
client auth enabled using self generated certificates
Additional server config:
security.inter.broker.protocol = SSL
ssl.client.auth = required
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
If I remove the authorizer.class.name property, then my client can produce messages (so, no problem with SSL and certificates).
Also, the kafka-authorizer.log produces the following message:
[2018-01-25 11:57:02,779] INFO Principal = User:CN= User,OU=XXX,O=XXX,L=XXX,ST=Unknown,C=X is Denied Operation = ClusterAction from host = 127.0.0.1 on resource = Cluster:kafka-cluster (kafka.authorizer.logger)
Any idea what can cause the LEADER_NOT_AVAILABLE error when enabling ACL?
From the authorizer logs, it looks like the Authorizer denied ClusterAction on the Cluster resource.
If you check your topic status (for example using kafka-topics.sh), I'd expect to see it without a leader (-1).
When you enable authorization, it is applied to all Kafka API messages reaching your cluster, including inter-broker messages like StopReplica, LeaderAndIsr, ControlledShutdown, etc. So it looks like you only added ACLs for your client but forgot to add the ACLs required for the brokers to function.
So you need to at least add an ACL granting ClusterAction on the Cluster resource for your broker's principals. IIRC that's the only required ACL for inter-broker messages.
Following that, your cluster should be able to correctly elect a leader for the partition enabling your client to produce.
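For example, using the broker principal exactly as it appears in the kafka-authorizer.log line above (redacted here), the missing ACL could also be added programmatically. The sketch below uses the 2.x AdminClient API; on the 1.0.0 client from the question, the equivalent kafka-acls.sh command against ZooKeeper is the simpler route, and the bootstrap address is an assumption:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class BrokerClusterActionAcl {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed SSL listener address; in this setup the admin client itself
        // would also need the usual SSL keystore/truststore properties.
        props.put("bootstrap.servers", "localhost:9093");

        try (AdminClient admin = AdminClient.create(props)) {
            // Allow the broker principal (as logged by kafka-authorizer.log) to perform
            // ClusterAction on the cluster resource, which inter-broker requests need.
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.CLUSTER, "kafka-cluster", PatternType.LITERAL),
                    new AccessControlEntry(
                            "User:CN= User,OU=XXX,O=XXX,L=XXX,ST=Unknown,C=X",
                            "*", AclOperation.CLUSTER_ACTION, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}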

Kafka ACL authorization issue using Java code

I want to grant access to a Kafka topic through a Java application, the same way we do through kafka-acls.sh. I just want to run the below command through the Java API:
kafka-acls.sh --add --allow-principals User:ctadmin --operation ALL --topic test --authorizer-properties zookeeper.connect=localhost:2181
I use these Java instructions to do it (topicName has test as its value):
String[] cmdPArm = {"--add", "--allow-principals", "User:ctadmin", "--operation", "ALL","--topic", topicName ,"--authorizer-properties", "zookeeper.connect=localhost:2181"};
AclCommand.main(cmdPArm);
The command works without any issue. The ACL is set, but I have a small issue with how this command works. When I try to get the current permissions for my topic, instead of this output:
Current ACLs for resource `Topic:test`:
User:ctadmin has Allow permission for operations: All from hosts: localhost
I have this:
Current ACLs for resource `Topic:test`:
user:ctadmin has Allow permission for operations: All from hosts: localhost
You can see the difference between user:ctadmin and User:ctadmin, which means permissions are not granted correctly for my user, and I'm not authorized to consume from this topic.
How can I fix this?
Issue solved. It was probably due to some old data in the cache. I did a fresh Kafka install/config on another host and things worked just fine.
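As an aside, rather than calling AclCommand.main from application code, a 2.x Java client can read back (and create) ACLs through the AdminClient API, which makes it easy to verify that the stored principal really starts with User:. A rough sketch, assuming the broker is reachable on localhost:9092:

import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;

public class DescribeTopicAcls {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Fetch every ACL bound to the "test" topic and print the principals.
            AclBindingFilter filter = new AclBindingFilter(
                    new ResourcePatternFilter(ResourceType.TOPIC, "test", PatternType.LITERAL),
                    AccessControlEntryFilter.ANY);
            Collection<AclBinding> acls = admin.describeAcls(filter).values().get();
            // Each principal printed here should start with "User:" (capital U).
            acls.forEach(acl -> System.out.println(acl.entry().principal()));
        }
    }
}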
