Add user to Kafka authentication without cluster restart - Java

We want to add authentication to our Kafka cluster using SASL_SSL.
Since we expect to add users frequently, we are looking for a way to do this without having to perform a rolling restart.
server.properties
listeners=PLAINTEXT://localhost:9092,SASL_SSL://localhost:9093
advertised.listeners=PLAINTEXT://localhost:9092,SASL_SSL://localhost:9093
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASL_SSL:SASL_SSL
# SASL_SSL: Listener with TLS-based encryption and SASL-based authentication.
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256
inter.broker.listener.name=PLAINTEXT
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
sasl.mechanism.controller.protocol=SCRAM-SHA-256
ssl.keystore.location=certs\\server.keystore.jks
ssl.keystore.password=******
ssl.truststore.location=certs\\server.truststore.jks
ssl.truststore.password=******
ssl.key.password=******
super.users=User:admin
zookeeper.set.acl=true
allow.everyone.if.no.acl.found=true
zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
requireClientAuthScheme=sasl
sasl.client=true
sasl.clientconfig=Client
producer.properties & consumer.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="alice" password="alice-secret";
ssl.keystore.location=certs\\client.keystore.jks
ssl.keystore.password=******
ssl.key.password=******
ssl.truststore.location=certs\\client.truststore.jks
ssl.truststore.password=******
ZooKeeper and Kafka start up fine.
Then adding a user with the command below throws an exception:
sh kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password='testUSer-secret']' --entity-type users --entity-name testUSer
Exception:
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for config/users/testUser
Is it possible to add new users to the SASL JAAS configuration without restarting the Kafka cluster?
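The NoAuth error itself looks like the kafka-configs command not authenticating to ZooKeeper (which zookeeper.set.acl=true requires), which is a separate issue from restarting. As for adding users at runtime: on brokers running Kafka 2.7 or later there is an Admin API for SCRAM credentials (KIP-554), so users can be added over the SASL_SSL listener without the client ever touching ZooKeeper and without any restart. A minimal sketch, assuming such a broker version, the listener, truststore, and keystore from the question, and that the admin super user's password is admin-secret (an assumed credential):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

public class AddScramUser {
    public static void main(String[] args) throws Exception {
        // Connect over the SASL_SSL listener as an existing admin (super) user.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // assumed admin credentials
        props.put("ssl.truststore.location", "certs\\client.truststore.jks");
        props.put("ssl.truststore.password", "******");
        props.put("ssl.keystore.location", "certs\\client.keystore.jks"); // ssl.client.auth=required
        props.put("ssl.keystore.password", "******");
        props.put("ssl.key.password", "******");

        try (Admin admin = Admin.create(props)) {
            // Upsert a SCRAM-SHA-256 credential for the new user; brokers pick it up without a restart.
            ScramCredentialInfo info = new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 4096);
            admin.alterUserScramCredentials(Collections.singletonList(
                    new UserScramCredentialUpsertion("testUser", info, "testUser-secret")))
                 .all().get();
        }
    }
}

On older brokers the kafka-configs.sh --zookeeper route remains the way to do this, and with zookeeper.set.acl=true the tool then needs to authenticate to ZooKeeper itself (typically by pointing it at a JAAS file with a Client section).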

Related

Spring Boot Redisson not able to read clusterServersConfig

Here is the application.yml I am using for my Spring WebFlux project:
redis:
  redisson:
    config: |
      clusterServersConfig:
        idleConnectionTimeout: 10000
        connectTimeout: ${REDISSON_CONNECT_TIMEOUT:20000}
        timeout: ${REDISSON_TIMEOUT:3000}
        retryAttempts: ${REDISSON_RETRY_ATTEMPTS:3}
        retryInterval: ${REDISSON_RETRY_INTERVAL:1500}
        subscriptionConnectionPoolSize: ${REDISSON_SUBSCRIPTION_POOL_SIZE:50}
        slaveConnectionMinimumIdleSize: ${REDISSON_SLAVE_MIN_IDLE_SIZE:24}
        slaveConnectionPoolSize: ${REDISSON_SLAVE_POOL_SIZE:48}
        masterConnectionMinimumIdleSize: ${REDISSON_MASTER_MIN_IDLE_SIZE:24}
        masterConnectionPoolSize: ${REDISSON_MASTER_POOL_SIZE:48}
        nodeAddresses:
          - "rediss://${APPS_REDIS:-}:${APPS_REDIS_PORT:6379}"
        password: ${APPS_REDIS_SECRET:-}
      threads: ${REDISSON_THREADS:16}
      nettyThreads: ${REDISSON_NETTY_THREADS:96}
But whenever I start the project on my laptop, this error comes up:
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'clusterServersConfig': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
I am not sure why it is saying clusterServersConfig is an unrecognized token; it is mentioned in the official docs, and there is an example of it there.
At first I thought it might be because I am running Redis locally on my M1 Mac, so Redis clusters aren't created by default. I even tried enabling cluster mode in redis.conf and running a Redis cluster with 3 nodes using redis-cli, but this still happens. I have tried almost everything I could think of or find on the net. Any help appreciated :)
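One way to narrow this down is to feed the same YAML to Redisson directly, outside of Spring, so a parse failure clearly points at the config text rather than at the property wiring. A minimal sketch, assuming the redisson dependency is on the classpath and that redisson.yaml is a hypothetical local file containing just the clusterServersConfig block from the config: | string above:

import java.io.File;
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonYamlCheck {
    public static void main(String[] args) throws Exception {
        // Parse the config as YAML; a failure here means the YAML itself is the problem,
        // not the spring property handling.
        Config config = Config.fromYAML(new File("redisson.yaml")); // hypothetical path
        RedissonClient client = Redisson.create(config);
        System.out.println("Redisson started, shutting down again");
        client.shutdown();
    }
}

If that parses cleanly, the problem is more likely in how the config string reaches Redisson from the Spring property (for example the YAML indentation getting lost) rather than in clusterServersConfig itself.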

How can I add a user that can connect to ActiveMQ Artemis in WildFly via Quarkus JMS?

I have a WildFly application with an embedded ActiveMQ Artemis, and I can't seem to figure out how to debug it. It seems to connect fine, since I'm getting no errors in the log, but I'm unable to read any queues. There is no error, but nothing is happening.
I'm running the Quarkus JMS client 1.03, which uses org.apache.activemq:artemis-jms-client 2.19. The server is running WildFly 26.1 with the embedded ActiveMQ Artemis server 2.19.1, so the client and server should be compatible. However, I'm not sure if I've configured it correctly. I'm using standalone-full.xml.
For testing I've added a user test1:
bash-4.4$ ./add-user.sh
What type of user do you wish to add?
a) Management User (mgmt-users.properties)
b) Application User (application-users.properties)
(a): b
Enter the details of the new user to add.
Using realm 'ApplicationRealm' as discovered from the existing property files.
Username : test1
Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.
- The password should be different from the username
- The password should not be one of the following restricted values {root, admin, administrator}
- The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
Password :
WFLYDM0098: The password should be different from the username
Are you sure you want to use the password entered yes/no? yes
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]: guest
About to add user 'test1' for realm 'ApplicationRealm'
Is this correct yes/no? yes
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server Jakarta Enterprise Beans calls.
yes/no? yes
To represent the user add the following to the server-identities definition <secret value="dGVzdDE=" />
This is the log from creating that user. In my standalone-full.xml I have the guest role for ActiveMQ Artemis and the prices queue set up:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:13.1">
    <server name="default">
        ...
        <security-setting name="#">
            <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
        </security-setting>
        ...
        <address-setting name="jms.queue.prices" expiry-address="jms.queue.ExpiryQueue" redelivery-delay="1000" max-delivery-attempts="0"/>
        ...
        <jms-queue name="prices" entries="java:/jms/queue/prices" durable="true"/>
        ...
    </server>
</subsystem>
My Quarkus client tries to connect with the following:
quarkus.artemis.url=tcp://localhost:8080
quarkus.artemis.username=test1
quarkus.artemis.password=test1
It has a consumer:
@Override
public void run() {
    try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
        JMSConsumer consumer = context.createConsumer(context.createQueue("prices"));
        while (true) {
            Message message = consumer.receive();
            if (message == null) {
                // receive returns `null` if the JMSConsumer is closed
                return;
            }
            lastPrice = message.getBody(String.class);
        }
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
}
and a producer:
@Override
public void run() {
    try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
        context.createProducer().send(context.createQueue("prices"), Integer.toString(random.nextInt(100)));
    }
}
It seems to connect just fine. I can see this in the Quarkus log:
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Trying reconnection attempt 0/1
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Trying to connect with connectorFactory=org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory#6096907d and currentConnectorConfig: TransportConfiguration(name=null, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=localhost
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.rem.imp.net.NettyConnector] (pool-10-thread-1) Connector NettyConnector [host=localhost, port=8080, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
2022-12-08 14:29:59,596 DEBUG [org.apa.act.art.cor.client] (pool-10-thread-1) AMQ211002: Started EPOLL Netty Connector version 4.1.82.Final to localhost:8080
2022-12-08 14:29:59,597 DEBUG [org.apa.act.art.cor.rem.imp.net.NettyConnector] (pool-10-thread-1) Remote destination: localhost/127.0.0.1:8080
2022-12-08 14:29:59,598 DEBUG [org.apa.act.art.cor.rem.imp.net.NettyConnector] (Thread-1 (ActiveMQ-client-netty-threads)) Added ActiveMQClientChannelHandler to Channel with id = 6339db8b
2022-12-08 14:29:59,598 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Connected with the currentConnectorConfig=TransportConfiguration(name=null, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=localhost
2022-12-08 14:29:59,599 DEBUG [org.apa.act.art.cor.cli.imp.ClientSessionFactoryImpl] (pool-10-thread-1) Reconnection successful
So it seems to be working fine, but I can't seem to send or consume from the queue.
Answers to Justin's questions in the comments:
Have you tried adding ?httpUpgradeEnabled=true
Yes, no change:
quarkus.artemis.url=tcp://localhost:8080?httpUpgradeEnabled=true
Can you check the consumer-count
Actually, there does not even seem to be an active session. I tried to close the connection for my test user, but it showed me a dialog saying "no connection for that user". So maybe it's not even connecting as I thought.
Why do you suspect the problem is with the user
I thought it might have something to do with some permission error. I was not sure if I was supposed to add the user to the "guest" group or not.
But then I have this in my config:
<security-setting name="#">
    <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
EDIT 2: Actually, I now see in the log that Quarkus seems to be missing the permission for creating a consumer, so I guess the "guest" role is not the correct one.
I assume that was corrected.
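One way to take Quarkus out of the picture is to build the Artemis connection factory directly with the credentials from the question and try to consume; a missing permission should then surface as an exception rather than as silence. A rough sketch, using the URL, user, and queue name from the question (httpUpgradeEnabled=true because port 8080 is WildFly's HTTP port):

import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Message;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ArtemisCheck {
    public static void main(String[] args) throws Exception {
        // Port 8080 is WildFly's HTTP port, so the Netty connector has to upgrade the connection.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:8080?httpUpgradeEnabled=true", "test1", "test1");
        try (JMSContext context = cf.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
            JMSConsumer consumer = context.createConsumer(context.createQueue("prices"));
            Message message = consumer.receive(5000); // wait up to 5 seconds
            System.out.println(message == null ? "connected, but no message" : message.getBody(String.class));
        }
    }
}

If the guest role really is missing a permission, as suspected in EDIT 2, this should fail quickly with a security exception that names the missing permission rather than doing nothing.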

Apache Kafka Cluster start fails with NoNodeException

I'm trying to start a Spark Streaming session which consumes from a Kafka topic, and I'm using ZooKeeper for config management. However, when I try to start it, the following exception is thrown:
18/03/26 09:25:49 INFO ZookeeperConnection: Checking Kafka topic core-data-tickets does exists ...
18/03/26 09:25:49 INFO Broker: Kafka topic core-data-tickets exists
18/03/26 09:25:49 INFO Broker: Processing topic : core-data-tickets
18/03/26 09:25:49 WARN ZookeeperConnection: Resetting Topic Offset
org.I0Itec.zkclient.exception.ZkNoNodeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/clt/offsets/core-data-tickets/4
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
at kafka.utils.ZkUtils$.readData(ZkUtils.scala:443)
at kafka.utils.ZkUtils.readData(ZkUtils.scala)
at net.core.data.connection.ZookeeperConnection.readTopicPartitionOffset(ZookeeperConnection.java:145)
I have already created the relevant Kafka topic.
Any insights on this would be highly appreciated.
I'm using the following code to run the Spark job:
spark-submit --class net.core.data.compute.Broker --executor-memory 512M --total-executor-cores 2 --driver-java-options "-Dproperties.path=/ebs/tmp/continuous-loading-tool/continuous-loading-tool/src/main/resources/dev.properties" --conf spark.ui.port=4045 /ebs/tmp/dev/data/continuous-loading-tool/target/continuous-loading-tool-1.0-SNAPSHOT.jar
I guess that this error has to do with offsets retention. By default, offsets are stored for only 1440 minutes (i.e. 24 hours). Therefore, if the group has not committed offsets within a day, Kafka won't have information about it.
A possible workaround is to set the value of offsets.retention.minutes accordingly.
offsets.retention.minutes: Offsets older than this retention period will be discarded.
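For illustration, the setting lives in the broker's server.properties; the value below is only an example (7 days), and since it is a static broker property a broker restart is generally needed for a change to take effect:

# Keep committed consumer offsets for 7 days (example value)
offsets.retention.minutes=10080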

IBM MQ - Java api - PCFMessageAgent - connection fails

I modified the MQIVP sample in MQ to use my own server connection channel, local.server.con, and it works fine. But when I try connecting to the same channel with PCFMessageAgent, the connection fails with errors in the MQ log. What is the relation between my channel and SYSTEM.DEFAULT.MODEL.QUEUE, which is the object named in the error?
C:\Program Files\IBM\WebSphere MQ\Tools\wmqjava\samples>java -Djava.library.path="C:\Program Files\IBM\WebSphere MQ\java\lib" MQIVPMod
Websphere MQ for Java Installation Verification Program
5724-B4 (C) Copyright IBM Corp. 2002, 2014. All Rights Reserved.
================================================================
Please enter the IP address of the MQ server :10.40.1.16
Please enter the port to connect to : (1414)1415
Please enter the server connection channel name :local.server.con
Please enter the user name (or RETURN for none) :test
Please enter the password for the user :test123
Please enter the queue manager name :local
Success: Connected to queue manager.
Success: Opened SYSTEM.DEFAULT.LOCAL.QUEUE
Success: Put a message to SYSTEM.DEFAULT.LOCAL.QUEUE
Success: Got a message from SYSTEM.DEFAULT.LOCAL.QUEUE
Success: Closed SYSTEM.DEFAULT.LOCAL.QUEUE
Success: Disconnected from queue manager
Tests complete -
SUCCESS: This MQ Transport is functioning correctly.
Press Enter to continue ...
My PCFMessageAgent code and error:
new PCFMessageAgent(host, Integer.parseInt(port), channelName); // connect
com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2035'.
at com.ibm.mq.MQDestination.open(MQDestination.java:323)
at com.ibm.mq.MQQueue.<init>(MQQueue.java:236)
at com.ibm.mq.MQQueueManager.accessQueue(MQQueueManager.java:2674)
at com.ibm.mq.pcf.PCFAgent.open(PCFAgent.java:448)
at com.ibm.mq.pcf.PCFAgent.open(PCFAgent.java:394)
at com.ibm.mq.pcf.PCFAgent.connect(PCFAgent.java:287)
at com.ibm.mq.pcf.PCFAgent.<init>(PCFAgent.java:190)
at com.ibm.mq.pcf.PCFMessageAgent.<init>(PCFMessageAgent.java:157)
at test.wmq.PCFTest.main(PCFTest.java:49)
And the MQ log:
5/2/2017 14:01:31 - Process(6048.60) User(MUSR_MQADMIN) Program(amqzlaa0.exe)
Host(BLR_SWG_N09505) Installation(Installation1)
VRMF(8.0.0.4) QMgr(local)
AMQ8077: Entity 'test#blr_swg_n09505' has insufficient authority to access
object 'SYSTEM.DEFAULT.MODEL.QUEUE'.
EXPLANATION:
The specified entity is not authorized to access the required object. The
following requested permissions are unauthorized: get
ACTION:
Ensure that the correct level of authority has been set for this entity against
the required object, or ensure that the entity is a member of a privileged
group.
----- amqzfubn.c : 518 --------------------------------------------------------
5/2/2017 14:01:32 - Process(8004.41) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(BLR_SWG_N09505) Installation(Installation1)
VRMF(8.0.0.4) QMgr(local)
AMQ9208: Error on receive from host BLR_SWG_N09505 (10.40.1.16).
EXPLANATION:
An error occurred receiving data from BLR_SWG_N09505 (10.40.1.16) over TCP/IP.
This may be due to a communications failure.
ACTION:
The return code from the TCP/IP recv() call was 10054 (X'2746'). Record these
values and tell the systems administrator.
----- amqccita.c : 4076 -------------------------------------------------------
5/2/2017 14:01:32 - Process(8004.41) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(BLR_SWG_N09505) Installation(Installation1)
VRMF(8.0.0.4) QMgr(local)
AMQ9999: Channel 'local.server.con' to host '10.40.1.16' ended abnormally.
EXPLANATION:
The channel program running under process ID 8004(7988) for channel
'local.server.con' ended abnormally. The host name is '10.40.1.16'; in some
cases the host name cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 930 --------------------------------------------------------
You need to go and read up on MQ permissions (i.e. authorizations). It is best to grant permissions on the group rather than the principal (UserId).
setmqaut -m {QM_NAME} -n SYSTEM.ADMIN.COMMAND.QUEUE -t queue -g {GROUP} +put +inq +dsp
setmqaut -m {QM_NAME} -n SYSTEM.DEFAULT.MODEL.QUEUE -t queue -g {GROUP} +get +inq +dsp
There is no relation between channels and model queues.
But I think PCFMessageAgent is trying to create a dynamic queue to use as a ReplyToQ for receiving responses, and it creates that dynamic queue by opening SYSTEM.DEFAULT.MODEL.QUEUE.
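To make that flow concrete, here is a rough sketch of a PCF call once those authorities are in place, using the host, port, and channel from the question and the same com.ibm.mq.pcf classes as the stack trace. The agent puts the command on SYSTEM.ADMIN.COMMAND.QUEUE and reads the reply from a temporary dynamic queue created from SYSTEM.DEFAULT.MODEL.QUEUE, which is why that queue needs +get:

import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;
import com.ibm.mq.pcf.PCFMessage;
import com.ibm.mq.pcf.PCFMessageAgent;

public class PcfInquireQmgr {
    public static void main(String[] args) throws Exception {
        // Client connection: host, port, and server-connection channel from the question.
        PCFMessageAgent agent = new PCFMessageAgent("10.40.1.16", 1415, "local.server.con");
        try {
            // Inquire on the queue manager; the reply arrives on a temporary dynamic
            // queue built from SYSTEM.DEFAULT.MODEL.QUEUE, hence the +get authority.
            PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q_MGR);
            PCFMessage[] responses = agent.send(request);
            System.out.println("Queue manager name: "
                    + responses[0].getStringParameterValue(CMQC.MQCA_Q_MGR_NAME).trim());
        } finally {
            agent.disconnect();
        }
    }
}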

The group coordinator is not available - Kafka

When I write to a topic in Kafka, there is an error: Offset commit failed:
2016-10-29 14:52:56.387 INFO [nioEventLoopGroup-3-1][org.apache.kafka.common.utils.AppInfoParser$AppInfo:82] - Kafka version : 0.9.0.1
2016-10-29 14:52:56.387 INFO [nioEventLoopGroup-3-1][org.apache.kafka.common.utils.AppInfoParser$AppInfo:83] - Kafka commitId : 23c69d62a0cabf06
2016-10-29 14:52:56.409 ERROR [nioEventLoopGroup-3-1][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$DefaultOffsetCommitCallback:489] - Offset commit failed.
org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The group coordinator is not available.
2016-10-29 14:52:56.519 WARN [kafka-producer-network-thread | producer-1][org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater:582] - Error while fetching metadata with correlation id 0 : {0085000=LEADER_NOT_AVAILABLE}
2016-10-29 14:52:56.612 WARN [pool-6-thread-1][org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater:582] - Error while fetching metadata with correlation id 1 : {0085000=LEADER_NOT_AVAILABLE}
Creating a new topic from the command line works fine:
./kafka-topics.sh --zookeeper localhost:2181 --create --topic test1 --partitions 1 --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
This is the producer code using Java:
public void create() {
    Properties props = new Properties();
    props.clear();
    String producerServer = PropertyReadHelper.properties.getProperty("kafka.producer.bootstrap.servers");
    String zookeeperConnect = PropertyReadHelper.properties.getProperty("kafka.producer.zookeeper.connect");
    String metaBrokerList = PropertyReadHelper.properties.getProperty("kafka.metadata.broker.list");
    props.put("bootstrap.servers", producerServer);
    props.put("zookeeper.connect", zookeeperConnect); // declare the ZooKeeper connection
    props.put("metadata.broker.list", metaBrokerList); // declare the Kafka broker list
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("batch.size", 1000);
    props.put("linger.ms", 10000);
    props.put("buffer.memory", 10000);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    producer = new KafkaProducer<String, String>(props);
}
Where am I going wrong?
I faced a similar issue. The problem is that when you start your Kafka broker there is a property associated with it, KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR. If you are working with a single-node cluster, make sure you set this property to '1', as its default value is 3. This change resolved my problem. (You can check the value in the Kafka properties file.)
Note: I was using the Confluent Kafka base image version 4.0.0 (confluentinc/cp-kafka:4.0.0).
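For a broker configured through server.properties rather than Docker environment variables, the equivalent setting would look roughly like this (the Confluent image maps KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR to this property):

# Single-node cluster: the internal __consumer_offsets topic can only have one replica
offsets.topic.replication.factor=1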
Looking at your logs, the problem is that the cluster probably has no connection to the node which is the only known replica of the given topic in ZooKeeper.
You can check it using the following command:
kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1
or using kafkacat:
kafkacat -L -b localhost:9092
Example result:
Metadata for all topics (from broker 1003: localhost:9092/1003):
1 brokers:
broker 1003 at localhost:9092
1 topics:
topic "topic1" with 1 partitions:
partition 0, leader -1, replicas: 1001, isrs: , Broker: Leader not available
If you have a single-node cluster then the broker id (1001) should match the leader of the topic1 partition.
But as you can see, the only known replica of topic1 was 1001, which is not available now, so there is no way to recreate the topic on a different node.
The source of the problem can be automatic generation of the broker id (if you don't have broker.id specified or it is set to -1).
Then, on restarting the broker (the same single broker), you probably receive a broker id different from the previous one and different from the one recorded in ZooKeeper (this is the reason why partition deletion can help, but it is not a production solution).
The solution may be setting the broker.id value in the node config to a fixed value; according to the documentation this should be done in production environments:
broker.id=1
If everything is alright you should receive something like this:
Metadata for all topics (from broker 1: localhost:9092/1001):
1 brokers:
broker 1 at localhost:9092
1 topics:
topic "topic1" with 1 partitions:
partition 0, leader 1, replicas: 1, isrs: 1
Kafka Documentation:
https://kafka.apache.org/documentation/#prodconfig
You have to keep your Kafka replicas and the replication factor in your code the same.
For me, I keep 3 replicas and a replication factor of 3.
The solution for me was that I had to make sure KAFKA_ADVERTISED_HOST_NAME was the correct IP address of the server.
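For a broker configured by hand rather than through Docker environment variables, that corresponds roughly to the advertised listener in server.properties (the host below is a placeholder for whatever address clients can actually reach):

# Must be an address clients can resolve and connect to
advertised.listeners=PLAINTEXT://your.broker.host:9092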
We had the same issue: replicas and replication factor were both 3, and the partition count was 1. I increased the partition count to 10 and it started working.
We faced the same issue in production too. The code had been working fine for a long time when we suddenly got this exception.
We determined there was no issue in the code, so we asked the deployment team to restart ZooKeeper. Restarting it solved the issue.
