I want to grant access to a Kafka topic through a Java application, the same way we do with kafka-acls.sh. I just want to run the command below through the Java API.
kafka-acls.sh --add --allow-principals User:ctadmin --operation ALL --topic test --authorizer-properties zookeeper.connect=localhost:2181
I use these Java instructions to do it (topicName has test as its value):
String[] cmdParams = {"--add", "--allow-principals", "User:ctadmin", "--operation", "ALL", "--topic", topicName, "--authorizer-properties", "zookeeper.connect=localhost:2181"};
AclCommand.main(cmdParams);
The command runs without any issue and the ACL is set, but there is a subtle problem with how it behaves. When I list the current permissions for my topic, instead of this output:
Current ACLs for resource `Topic:test`:
User:ctadmin has Allow permission for operations: All from hosts: localhost
I have this:
Current ACLs for resource `Topic:test`:
user:ctadmin has Allow permission for operations: All from hosts: localhost
Note the difference between user:ctadmin and User:ctadmin: the lowercase principal does not grant permissions correctly for my user, so I'm not authorized to consume from this topic.
How can I fix this?
Issue solved. It was probably due to some old data in the cache. I did a fresh Kafka install/config on another host and things worked just fine.
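For reference, on Kafka clients 2.0 and later the same grant can be made natively through the AdminClient API instead of invoking AclCommand.main. A minimal sketch, assuming a broker reachable at localhost:9092 (adjust the bootstrap address, topic, and principal to your setup):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantTopicAcl {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Allow User:ctadmin to perform all operations on topic "test" from any host
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "test", PatternType.LITERAL),
                    new AccessControlEntry("User:ctadmin", "*", AclOperation.ALL, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}

This goes through the broker's configured authorizer, so the result is the same as the shell command, without shelling out or depending on ZooKeeper connectivity from the application.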
I tried accessing the RabbitMQ management page at localhost:5672 and the connection is refused. I have reinstalled RabbitMQ via Homebrew and am still running into the same problem. I ran rabbitmq-server after the reinstallation and got this output:
  ##  ##      RabbitMQ 3.8.1
  ##  ##
  ##########  Copyright (c) 2007-2019 Pivotal Software, Inc.
  ######  ##
  ##########  Licensed under the MPL 1.1. Website: https://rabbitmq.com

  Doc guides: https://rabbitmq.com/documentation.html
  Support:    https://rabbitmq.com/contact.html
  Tutorials:  https://rabbitmq.com/getstarted.html
  Monitoring: https://rabbitmq.com/monitoring.html

  Logs: /usr/local/var/log/rabbitmq/rabbit@localhost.log
        /usr/local/var/log/rabbitmq/rabbit@localhost_upgrade.log

  Config file(s): (none)

  Starting broker... completed with 6 plugins.
I'm not sure why I can't access the management page via the default port. I had a few applications using RabbitMQ running and none of them work now. What is the best way to completely uninstall RabbitMQ from a Mac so that I can do a clean install?
I think you have to enable the management plugin, as stated in the RabbitMQ documentation:
The management plugin is included in the RabbitMQ distribution. Like any other plugin, it must be enabled before it can be used.
Just go to your RabbitMQ installation directory (example path: /usr/save/rabbitmq_server-x.x.x/sbin) and run the following command:
rabbitmq-plugins enable rabbitmq_management
After this, if the management page is still not accessible, try stopping and restarting the RabbitMQ server.
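To verify, something along these lines should show the plugin enabled and the management UI answering on its own port, 15672 (the AMQP port 5672 will never serve the web page); paths assume a default Homebrew install:

rabbitmq-plugins list | grep management
curl -i http://localhost:15672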
Reference links:
RabbitMQ documentation on the management plugin
RabbitMQ networking ports information
To answer these:
Not sure why I can't access the management page via the default port.
Still can't access localhost:15672 after starting the RabbitMQ server
If the rabbitmqctl status / rabbitmq-diagnostics status command shows a listener like this:
Interface: [::], port: 15672, protocol: http, purpose: HTTP API
then RabbitMQ might be set up correctly.
Probable cause: HTTP redirection
The issue might rather be with the URL being visited. Chrome could be set to redirect HTTP to HTTPS. If this is so, and you don't have an HTTPS listener set up, you'd see an ERR_SSL_PROTOCOL_ERROR.
To get around this, you can disable redirection on Chrome only for localhost. By doing so, http://localhost:15672 will no longer be redirected to https://localhost:15672 and the management web client will therefore be visible.
How to disable HTTP redirection for a domain in Chrome
Visit chrome://net-internals/#hsts
Delete domain security policies for the domain (in this case simply enter localhost)
Click the Delete button
Running on RHEL 7.5 with Java 8. Kerberos 5 release 1.15.1.
We are seeing strange behaviour with this set-up, which has been present in all versions since 2.11.10.
Note: I can't post direct logs or config, as my company blocks this.
Steps to reproduce
1) Configure Gerrit to use Kerberos
gerrit.config
[container]
javaHome = <path to JRE>
javaOptions = -Djava.security.auth.login.config=<path to jaas.conf>
[auth]
type = LDAP
[ldap]
authentication = GSSAPI
server = ldap://<AD Realm>
<.. other AD related stuff..>
jaas.conf
KerberosLogin {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  doNotPrompt=true
  renewTGT=true;
};
which is taken directly from the documentation.
2) kinit the keytab to create a ticket in the cache.
3) Try to log in. It fails with "Server not found in Kerberos database (7)".
It will also fail if you change the jaas.conf to try and use the keytab directly.
You can access LDAP directly using the username/password, but due to company restrictions we can't have an unencrypted password at rest on a device, so this is not a viable long-term solution.
We have taken packet captures of the traffic to the AD Realm and we see the same behaviour whether we use the keytab or the cache.
1) For the kinit we see one request to AD with the SPN field set to the SPN from the keytab. This, of course, works fine.
2) For any request from Gerrit we see TWO requests to AD: the first has the correct SPN from the cache/keytab; the second tries to send an SPN of "ldap/" no matter what value of SPN is set. This second request is what causes the error, as that SPN is not recognised by AD. Note, we have tried keytabs with various SPNs (HTTP/device, host/device, HTTP/device# etc.). The same thing happens every time.
It may well be that something very simple is wrong in our config, but we have been banging our heads against this for weeks now.
The second request most likely shows up because you specified an LDAP server ldap://<AD Realm> in Gerrit's configuration. HTTP GSSAPI authentication may very well have succeeded at this point, but now the application needs to authenticate itself against the LDAP server before it can retrieve information about the user. That happens independently from the HTTP authentication itself.
It's normal that the SPN is not recognized because Active Directory generally doesn't use <AD Realm> to pick a domain controller – instead the individual server names have to be specified, e.g. ldap://dc01.ad.example.com. (Real AD clients choose a server automatically via DNS SRV records, but plain LDAP clients often don't support that.)
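For illustration, the [ldap] section would then point at a concrete domain controller rather than the realm (dc01.ad.example.com is a placeholder hostname):

[ldap]
  authentication = GSSAPI
  server = ldap://dc01.ad.example.com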
Note also that a keytab is essentially an unencrypted password at rest.
I have an issue producing messages to a Kafka topic (named secure.topic) secured with ACL.
My Groovy-based producer throws this error:
Error while fetching metadata with correlation id 9 : {secure.topic=LEADER_NOT_AVAILABLE}
Some notes about the configuration:
1 Kafka server, version kafka_2.11-1.0.0 (both server and Java client libs)
topic ACL is set to All (also tested with --producer) and the user is the full name specified in the certificate
client auth enabled using self-generated certificates
Additional server config:
security.inter.broker.protocol = SSL
ssl.client.auth = required
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
If I remove the authorizer.class.name property, then my client can produce messages (so, no problem with SSL and certificates).
Also, the kafka-authorizer.log produces the following message:
[2018-01-25 11:57:02,779] INFO Principal = User:CN= User,OU=XXX,O=XXX,L=XXX,ST=Unknown,C=X is Denied Operation = ClusterAction from host = 127.0.0.1 on resource = Cluster:kafka-cluster (kafka.authorizer.logger)
Any idea what can cause the LEADER_NOT_AVAILABLE error when enabling ACL?
From the authorizer logs, it looks like the Authorizer denied ClusterAction on the Cluster resource.
If you check your topic status (for example using kafka-topics.sh), I'd expect to see it without a leader (-1).
When you enable authorization, it applies to all Kafka API messages reaching your cluster, including inter-broker messages like StopReplica, LeaderAndIsr, ControlledShutdown, etc. So it looks like you added ACLs for your client but forgot the ACLs required for the brokers to function.
So you need to at least add an ACL granting ClusterAction on the Cluster resource for your broker's principals. IIRC that's the only required ACL for inter-broker messages.
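For example, a sketch of the command (substitute your brokers' actual certificate DN for the placeholder principal):

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal "User:CN=broker,OU=XXX,O=XXX,L=XXX,ST=Unknown,C=X" --operation ClusterAction --cluster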
Following that, your cluster should be able to correctly elect a leader for the partition, enabling your client to produce.
I have a problem communicating with a Kafka cluster secured with SASL using the console scripts. The listener is SASL_PLAINTEXT and the mechanism is PLAIN.
What I did:
I tried listing some data using one of the Kafka scripts:
bin/kafka-consumer-groups.sh --bootstrap-server (address) --list
However I get
WARN Bootstrap broker (address) disconnected (org.apache.kafka.clients.NetworkClient)
and the command fails, which is understandable because the cluster is secured with SASL.
So I looked into how to add the client username/password to that command.
First, I tried to run the kafka-console-consumer script, using --command-config to pass the necessary file. I quickly discovered that I can't pass a JAAS file directly and needed to use a .properties file, so I did.
My properties file (keep in mind that brackets indicate "censored" data; I can't put the real values here):
bootstrap.servers=(address)
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
sasl.jaas.config=(path)/consumer_jaas.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
group.id=(group)
My jaas file:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username=(username)
password=(password);
};
This JAAS file works in my standard Java applications.
However, when I'm trying to run either kafka-consumer-groups script or kafka-console-consumer, I get this error:
Exception in thread "main" org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: Login module not specified in JAAS config
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:94)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:93)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:51)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:84)
at kafka.admin.AdminClient$.create(AdminClient.scala:229)
at kafka.admin.AdminClient$.create(AdminClient.scala:223)
at kafka.admin.AdminClient$.create(AdminClient.scala:221)
at kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.createAdminClient(ConsumerGroupCommand.scala:454)
at kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.<init>(ConsumerGroupCommand.scala:389)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:65)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
Caused by: java.lang.IllegalArgumentException: Login module not specified in JAAS config
at org.apache.kafka.common.security.JaasConfig.<init>(JaasConfig.java:68)
at org.apache.kafka.common.security.JaasUtils.jaasConfig(JaasUtils.java:59)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:85)
This JAAS file is a direct copy of the file I'm using in a Java app that communicates with Kafka, and there it works; here, using the console tools, it just doesn't. I tried searching for a solution but couldn't find anything useful.
Can anyone help me with this?
There are two ways to provide the JAAS configuration to Kafka clients.
Via the client property: sasl.jaas.config. In that case you set it to the actual JAAS configuration entry. For example, your configuration file becomes:
bootstrap.servers=(address)
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="(username)" password="(password)";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
group.id=(group)
As you've already figured out, you can use --command-config to pass a properties file to kafka-consumer-groups.sh.
Via the Java property: java.security.auth.login.config. In this case, you set it to the path of your JAAS file. Also if you set it in KAFKA_OPTS, kafka-consumer-groups.sh will pick it up automatically.
export KAFKA_OPTS="-Djava.security.auth.login.config=(path)/consumer_jaas.conf"
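With either option in place, the original command should work. A sketch of the second approach, assuming the properties file (minus the sasl.jaas.config line) is saved as client.properties:

export KAFKA_OPTS="-Djava.security.auth.login.config=(path)/consumer_jaas.conf"
bin/kafka-consumer-groups.sh --bootstrap-server (address) --command-config client.properties --list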
I have an unsecured Kafka instance with 2 brokers. Everything was running fine until I decided to configure ACLs for topics; after the ACL configuration my consumers stopped polling data from Kafka and I keep getting the warning Error while fetching metadata with correlation id. My broker properties look like this:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
And my client configuration looks like this:
bootstrap.servers=localhost:9092
topic.name=topic-name
group.id=topic-group
I used the command below to configure the ACL:
bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* Read --allow-host localhost --consumer --topic topic-name --group topic-group
With all of the above configuration in place, when I start the consumer it no longer receives messages. Can someone point out where I'm going wrong? Thanks in advance.
We are using ACLs successfully, but not with the PLAINTEXT protocol.
IMHO you should use the SSL protocol and, instead of localhost, use the real machine name.
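One more thing worth checking, whichever protocol you use: Kafka compares the ACL's --allow-host value against the client's IP address, not its hostname, so an ACL created with --allow-host localhost never matches a client connecting from 127.0.0.1. A sketch of the ACL with an IP instead (the stray Read is dropped because --consumer already grants the required operations):

bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host 127.0.0.1 --consumer --topic topic-name --group topic-group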