Spring application cannot connect to a Kafka broker - java

I am trying to build my first application using Kafka as a messaging system, but I have some problems with running it. I am using Docker images of zookeeper and Kafka from wurstmeister with this docker-compose.yml:
version: '3.8'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    restart: unless-stopped
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "toAll:1:1"
    restart: unless-stopped
docker-compose up gives me the following output:
kafka_1 | [2022-06-13 22:08:10,071] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,073] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,074] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,076] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,089] INFO Log directory /kafka/kafka-logs-44d30bad178c not found, creating it. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,121] INFO Loading logs from log dirs ArraySeq(/kafka/kafka-logs-44d30bad178c) (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,123] INFO Attempting recovery for all logs in /kafka/kafka-logs-44d30bad178c since no clean shutdown file was found (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,128] INFO Loaded 0 logs in 7ms. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,128] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,130] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,429] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka_1 | [2022-06-13 22:08:10,433] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1 | [2022-06-13 22:08:10,465] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,489] INFO [broker-1049-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka_1 | [2022-06-13 22:08:10,504] INFO [ExpirationReaper-1049-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,505] INFO [ExpirationReaper-1049-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,506] INFO [ExpirationReaper-1049-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,506] INFO [ExpirationReaper-1049-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,518] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_1 | [2022-06-13 22:08:10,552] INFO Creating /brokers/ids/1049 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_1 | [2022-06-13 22:08:10,567] INFO Stat of the created znode at /brokers/ids/1049 is: 1191,1191,1655158090560,1655158090560,1,0,0,72057715199770624,202,0,1191
kafka_1 | (kafka.zk.KafkaZkClient)
kafka_1 | [2022-06-13 22:08:10,567] INFO Registered broker 1049 at path /brokers/ids/1049 with addresses: PLAINTEXT://127.0.0.1:9092, czxid (broker epoch): 1191 (kafka.zk.KafkaZkClient)
kafka_1 | [2022-06-13 22:08:10,623] INFO [ExpirationReaper-1049-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,627] INFO [ExpirationReaper-1049-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,628] INFO [ExpirationReaper-1049-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,640] INFO [GroupCoordinator 1049]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2022-06-13 22:08:10,644] INFO [GroupCoordinator 1049]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2022-06-13 22:08:10,664] INFO [ProducerId Manager 1049]: Acquired new producerId block (brokerId:1049,blockStartProducerId:25000,blockEndProducerId:25999) by writing to Zk with path version 26 (kafka.coordinator.transaction.ProducerIdManager)
kafka_1 | [2022-06-13 22:08:10,665] INFO [TransactionCoordinator id=1049] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2022-06-13 22:08:10,668] INFO [TransactionCoordinator id=1049] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2022-06-13 22:08:10,668] INFO [Transaction Marker Channel Manager 1049]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1 | [2022-06-13 22:08:10,688] INFO [ExpirationReaper-1049-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,704] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_1 | [2022-06-13 22:08:10,722] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Starting socket server acceptors and processors (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,735] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,735] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Started socket server acceptors and processors (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,739] INFO Kafka version: 2.8.1 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2022-06-13 22:08:10,739] INFO Kafka commitId: 839b886f9b732b15 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2022-06-13 22:08:10,740] INFO Kafka startTimeMs: 1655158090735 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2022-06-13 22:08:10,744] INFO [KafkaServer id=1049] started (kafka.server.KafkaServer)
zookeeper_1 | 2022-06-13 22:08:10,770 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#596] - Got user-level KeeperException when processing sessionid:0x100001c35cf0000 type:multi cxid:0x3b zxid:0x4ab txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1 | [2022-06-13 22:08:10,798] INFO [broker-1049-to-controller-send-thread]: Recorded new controller, from now on will use broker 127.0.0.1:9092 (id: 1049 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka_1 | creating topics: toAll:1:1
zookeeper_1 | 2022-06-13 22:08:19,653 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#215] - Accepted socket connection from /172.20.0.3:60284
zookeeper_1 | 2022-06-13 22:08:19,655 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /172.20.0.3:60284
zookeeper_1 | 2022-06-13 22:08:19,659 [myid:] - INFO [SyncThread:0:ZooKeeperServer#694] - Established session 0x100001c35cf0001 with negotiated timeout 30000 for client /172.20.0.3:60284
zookeeper_1 | 2022-06-13 22:08:19,803 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#487] - Processed session termination for sessionid: 0x100001c35cf0001
zookeeper_1 | 2022-06-13 22:08:19,808 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /172.20.0.3:60284 which had sessionid 0x100001c35cf0001
So, since the log says from now on will use broker 127.0.0.1:9092 and creating topics: toAll:1:1, I am guessing the broker should be ready for my application, which is:
@SpringBootApplication
@Slf4j
public class KafkaTestApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaTestApplication.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("toAll").build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            String mess = "Test message";
            template.send("toAll", mess);
            log.info(String.format("Message sent: %s", mess));
        };
    }
}
But when I run it, I get this output:
2022-06-14 00:25:25.148 INFO 1108 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:9092]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2022-06-14 00:25:25.254 INFO 1108 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.1.1
2022-06-14 00:25:25.255 INFO 1108 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 97671528ba54a138
2022-06-14 00:25:25.255 INFO 1108 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1655159125253
2022-06-14 00:25:27.299 INFO 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -1 disconnected.
2022-06-14 00:25:27.301 WARN 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2022-06-14 00:25:29.454 INFO 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -1 disconnected.
2022-06-14 00:25:29.454 WARN 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2022-06-14 00:25:31.724 INFO 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -1 disconnected.
2022-06-14 00:25:31.724 WARN 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
and the last two lines repeat for a while until an exception is finally thrown.
I have been fighting this issue for two days, trying other setups for Kafka, such as KAFKA_LISTENERS, KAFKA_ADVERTISED_LISTENERS, and so on. Some of them stop Kafka from running properly, some change the IPs, but none of them change my application's output.
Do you have any ideas what could be causing this problem and how it might be fixed?
Thanks a lot in advance.

Look at the output of docker ps.
If you don't see 0.0.0.0:9092->9092/tcp, then you have not forwarded a host port to the container's KAFKA_ADVERTISED_PORT.
Declaring the port this way in Compose makes Docker pick a random host port to map to port 9092 inside the container, so localhost:9092 is not an open port on your host:
ports:
  - "9092"
Related: Connect to Kafka running in Docker
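As a minimal sketch of the fix (assuming the wurstmeister image and that clients on the host connect via localhost), the kafka service could pin the host port and advertise the address host-side clients actually dial:

```yaml
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"   # explicit host:container mapping, not just "9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    # Bind on all interfaces inside the container, but advertise
    # the address that clients on the Docker host will resolve
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
    KAFKA_CREATE_TOPICS: "toAll:1:1"
```

With this mapping, docker ps should show 0.0.0.0:9092->9092/tcp and the Spring app's default bootstrap.servers of localhost:9092 can reach the broker.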

Related

Topic is not assigned a partition

So I am running my Kafka consumer. I am just curious why my topic no longer has a partition assigned to it. I believe that when the logs show an assignment, it means I am able to listen to that topic. But now the logs show that there are no partitions assigned for my topic. Am I still able to successfully listen to the topic or not?
Here are the logs:
Successfully logged in.
2021-10-01 02:43:14.668 INFO 35404 --- [_user#TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user#TEST.COM]: TGT refresh thread started.
2021-10-01 02:43:14.668 INFO 35404 --- [_user#TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user#TEST.COM]: TGT valid starting at: Fri Oct 01 02:43:14 CST 2021
2021-10-01 02:43:14.668 INFO 35404 --- [_user#TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user#TEST.COM]: TGT expires: Fri Oct 01 12:43:14 CST 2021
2021-10-01 02:43:14.668 INFO 35404 --- [_user#TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user#TEST.COM]: TGT refresh sleeping until: Fri Oct 01 11:06:34 CST 2021
2021-10-01 02:43:14.689 INFO 35404 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.3.1
2021-10-01 02:43:14.689 INFO 35404 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 18a913733fb71c01
2021-10-01 02:43:14.690 INFO 35404 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1633027394689
2021-10-01 02:43:14.692 INFO 35404 --- [ main] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Subscribed to topic(s): MY_TOPIC
2021-10-01 02:43:14.695 INFO 35404 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2021-10-01 02:43:14.706 INFO 35404 --- [ main] s.i.k.i.KafkaMessageDrivenChannelAdapter : started org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter#6ad2c882
2021-10-01 02:43:14.712 INFO 35404 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
2021-10-01 02:43:14.751 INFO 35404 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
2021-10-01 02:43:14.843 INFO 35404 --- [ main] s.d.s.w.s.ApiListingReferenceScanner : Scanning for api listing references
2021-10-01 02:43:15.646 INFO 35404 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8090 (http) with context path ''
2021-10-01 02:43:15.648 INFO 35404 --- [ main] .s.c.n.e.s.EurekaAutoServiceRegistration : Updating port to 8090
2021-10-01 02:43:17.051 WARN 35404 --- [container-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Connection to node -3 (broker.port3.com/10.235.27.73:6667) could not be established. Broker may not be available.
2021-10-01 02:43:17.388 INFO 35404 --- [ main] com.ap.Application : Started Application in 62.57 seconds (JVM running for 143.114)
2021-10-01 02:43:17.816 INFO 35404 --- [container-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Cluster ID: qa_IFa70SgeMIT5JIcDhHA
2021-10-01 02:43:18.213 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Discovered group coordinator broker.port1.com:6667 (id: 2147482644 rack: null)
2021-10-01 02:43:18.219 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Revoking previously assigned partitions []
2021-10-01 02:43:18.220 INFO 35404 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : my-group-service-dev: partitions revoked: []
2021-10-01 02:43:18.220 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] (Re-)joining group
2021-10-01 02:43:18.621 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] (Re-)joining group
2021-10-01 02:43:19.816 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Successfully joined group with generation 68
2021-10-01 02:43:19.819 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Setting newly assigned partitions:
2021-10-01 02:43:19.821 INFO 35404 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : my-group-service-dev: partitions assigned: []

Unable to consume messages Kafka Spring Boot Docker Compose

My code works fine when running the services separately from the IDE. But when using docker-compose, the producer produces messages correctly and I can also consume them from the Kafka container's CLI, yet my Spring Boot microservice responsible for consuming the messages is not consuming anything.
There are no errors shown in the consumer container (name: process); it logs only the following...
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2021-04-07 14:58:07.123 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.6.0
2021-04-07 14:58:07.126 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 62abe01bee039651
2021-04-07 14:58:07.127 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1617807487120
2021-04-07 14:58:07.138 INFO 1 --- [ main] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-testgroup-1, groupId=testgroup] Subscribed to topic(s): test
2021-04-07 14:58:07.146 INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2021-04-07 14:58:07.241 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8082 (http) with context path ''
2021-04-07 14:58:07.354 INFO 1 --- [ main] c.h.p.p.ProcessFcmDataServiceApplication : Started ProcessFcmDataServiceApplication in 9.916 seconds (JVM running for 11.719)
2021-04-07 14:58:08.036 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-testgroup-1, groupId=testgroup] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-04-07 14:58:08.037 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-testgroup-1, groupId=testgroup] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
2021-04-07 14:58:08.165 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-testgroup-1, groupId=testgroup] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-04-07 14:58:08.165 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-testgroup-1, groupId=testgroup] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
2021-04-07 14:58:08.316 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-testgroup-1, groupId=testgroup] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-04-07 14:58:08.317 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-testgroup-1, groupId=testgroup] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
Here is my docker-compose.yml
version: "3"
services:
  register:
    container_name: register
    build: register-FCM-token-service/
    networks:
      - push-notification
    ports:
      - "8001:8001"
  send:
    container_name: send
    build: send-FCM-notification-service/
    networks:
      - push-notification
    depends_on:
      - process
    links:
      - kafka:kafka
    environment:
      kafka.boot.server: kafka:9092
    ports:
      - "8083:8083"
  process:
    container_name: process
    build: process-FCM-data-service/
    networks:
      - push-notification
    depends_on:
      - recieve
    links:
      - kafka:kafka
    environment:
      kafka.boot.server: kafka:9092
    ports:
      - "8082:8082"
  recieve:
    container_name: recieve
    build: recieve-push-request-service/
    depends_on:
      - kafka
    links:
      - kafka:kafka
    ports:
      - "8080:8080"
    environment:
      kafka.boot.server: kafka:9092
    networks:
      - push-notification
  zookeeper:
    container_name: zookeeper
    image: wurstmeister/zookeeper
    restart: always
    networks:
      - push-notification
  kafka:
    container_name: kafka
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    networks:
      - push-notification
    restart: always
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      ALLOW_PLAINTEXT_LISTENER: "yes"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_AUTO_OFFSET_RESET: 'latest'
networks:
  push-notification:
The "process" container (the consumer) is not working.
When trying from the container CLI, the Kafka console consumer works fine inside the kafka container; here is the output...
/ # kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
{"source":"salesforce","userid":"1235aa","messageTitle":"test 1 title","messageBody":"test 1 body"}
Here is my Kafka Config Java file for the process microservice (the consumer)...
@Configuration
@EnableKafka
public class KafkaConfig {

    @Value("${kafka.boot.server}")
    private String kafkaServer;

    @Value("${kafka.consumer.group.id}")
    private String kafkaGroupId;

    public ProducerFactory<String, PushMessageModel> getProducerFactory() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServer);
        configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class); // use StringSerializer.class instead for a simple string value
        return new DefaultKafkaProducerFactory<>(configs);
    }

    @Bean
    public KafkaTemplate<String, PushMessageModel> getKafkaTemplate() {
        return new KafkaTemplate<>(getProducerFactory());
    }

    @Bean
    public ConsumerFactory<String, FCMMessageContainer> getConsumerFactory() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServer);
        configs.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaGroupId);
        configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(configs, new StringDeserializer(), new JsonDeserializer<>(FCMMessageContainer.class));
    }

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, FCMMessageContainer>> getKafkaListener() {
        ConcurrentKafkaListenerContainerFactory<String, FCMMessageContainer> listener = new ConcurrentKafkaListenerContainerFactory<>();
        listener.setConsumerFactory(getConsumerFactory());
        listener.setErrorHandler(new KafkaErrHandler());
        return listener;
    }
}
How can I debug this issue? How can I resolve it? There is clearly no issue with the code itself, since it works from the IDE, so what is the problem?

Kafka is giving: "The group member needs to have a valid member id before actually entering a consumer group"

I am using Kafka to consume messages in Java. I want to test by starting the same app multiple times on my local box. When I start the first instance, I am able to consume messages from the topic. When I start a second one I get:
Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
and don't get any messages from the topic. If I try to start more instances I get the same issue.
The configuration I am using for Kafka is
spring:
  kafka:
    bootstrap-servers: kafka:9092
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      properties:
        spring.deserializer.key.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.use.type.headers: false
    listener:
      missing-topics-fatal: false
I have two topics
@Configuration
public class KafkaTopics {

    @Bean("alertsTopic")
    public NewTopic alertsTopic() {
        return TopicBuilder.name("XXX.alerts")
                .compact()
                .build();
    }

    @Bean("serversTopic")
    public NewTopic serversTopic() {
        return TopicBuilder.name("XXX.servers")
                .compact()
                .build();
    }
}
And two listeners in different class files.
@KafkaListener(topics = SERVERS_KAFKA_TOPIC, id = "#{T(java.util.UUID).randomUUID().toString()}",
    properties = {
        "spring.json.key.default.type=java.lang.String",
        "spring.json.value.default.type=com.devhaus.learningjungle.db.kafka.ServerInfo"
    })
public void registerServer(
    @Payload(required = false) ServerInfo serverInfo
)

@KafkaListener(topics = ALERTS_KAFKA_TOPIC,
    id = "#{T(java.util.UUID).randomUUID().toString()}",
    properties = {
        "spring.json.key.default.type=com.devhaus.learningjungle.db.kafka.AlertOnKafkaKey",
        "spring.json.value.default.type=com.devhaus.learningjungle.db.kafka.AlertOnKafka"
    })
public void processAlert(
    @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) AlertOnKafkaKey key,
    @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partitionId,
    @Header(KafkaHeaders.OFFSET) long offset,
    @Payload(required = false) AlertOnKafka alert)
From my analysis, this is normal behaviour; you can change the log levels to exclude it.
The reason for this is that if the server detects that the client can support member.id, it sends that error back to the client. This is noted in KIP-394.
The client then reconnects to the server with a generated member ID.
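As a sketch of "change the log levels to exclude it" in a Spring Boot application.yml (the logger name is an assumption inferred from the log lines in the question, which come from AbstractCoordinator; verify it against your own output):

```yaml
logging:
  level:
    # AbstractCoordinator logs "Join group failed with ... MemberIdRequiredException" at INFO
    org.apache.kafka.clients.consumer.internals.AbstractCoordinator: WARN
```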
If the answer from @Archimedes Trajano doesn't work for you (as in my case), this happens when Kafka can't pick up the consumer group id.
If you have a single consumer group, you can specify it in the properties file like this:
spring:
  kafka:
    bootstrap-servers: kafka:9092
    consumer:
      group-id: insert-your-consumer-group-id
      # ... rest of your properties ...
or, if you have multiple consumers, you can specify the groupId for each one:
@KafkaListener(topics = "topic-1", groupId = "group-1")
public void registerServer(@Payload(required = false) ServerInfo serverInfo)

@KafkaListener(topics = "topic-2", groupId = "group-2")
public void processAlert(@Payload(required = false) AlertOnKafka alert)
docs: https://docs.spring.io/spring-kafka/reference/html/#annotation-properties
I encountered the same issue, which also prevented my consumer from subscribing to certain topics.
I figured that the member.id might be similar to the client-id (using Camel here), which might be in some way related to the consumer group.
What fixed it for me was switching to a fixed, non-changing client-id for my consuming service, leaving the consumer group as is.
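As an illustration only (the property below assumes Spring Kafka rather than the Camel setup described above, and the id value is hypothetical), a stable client id can be configured like this:

```yaml
spring:
  kafka:
    consumer:
      client-id: my-service-consumer   # hypothetical fixed id; use one stable name per service
```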
Hmm, in my case the consumers rejoin the group with a generated ID after this message. Here is my test and the result, FYI:
@Test
void testSyncSend() throws ExecutionException, InterruptedException {
    int id = (int) (System.currentTimeMillis() / 1000);
    SendResult result = producer.syncSend(id);
    logger.info("[testSyncSend] id:{}, result:{}", id, result);
    new CountDownLatch(1).await();
}
2021-04-20 14:32:20.980 INFO 6672 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.1
2021-04-20 14:32:20.981 INFO 6672 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 0efa8fb0f4c73d92
2021-04-20 14:32:20.981 INFO 6672 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1618900340980
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-1, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-3-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-4, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-2-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-3, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-1-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-2, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.317 INFO 6672 --- [ main] c.z.s.cbaConnector.KafkaProducerTest : [testSyncSend] id:1618900340, result:SendResult [producerRecord=ProducerRecord(topic=test1, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = __TypeId__, value = [99, 111, 109, 46, 122, 104, 105, 108, 105, 46, 115, 109, 115, 109, 111, 100, 117, 108, 101, 46, 101, 110, 116, 105, 116, 121, 46, 77, 101, 115, 115, 97, 103, 101])], isReadOnly = true), key=null, value=Message(id=1618900340), timestamp=null), recordMetadata=test1-1#0]
2021-04-20 14:32:23.770 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Finished assignment for group at generation 2: {consumer-listener1-4-9a383250-e84d-413a-b012-85405abdcf7f=Assignment(partitions=[test1-8, test1-9]), consumer-listener1-2-3d26d9ef-b973-4d19-a930-5ba77d938680=Assignment(partitions=[test1-3, test1-4, test1-5]), consumer-listener1-1-10b6895e-264e-45bd-ba90-c71ea12b21e5=Assignment(partitions=[test1-0, test1-1, test1-2]), consumer-listener1-3-54ce965a-87cd-4e28-b0e9-b0f2c9f69423=Assignment(partitions=[test1-6, test1-7])}
2021-04-20 14:32:23.775 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.775 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.775 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.776 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Adding newly assigned partitions: test1-5, test1-4, test1-3
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Adding newly assigned partitions: test1-6, test1-7
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Adding newly assigned partitions: test1-0, test1-2, test1-1
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Adding newly assigned partitions: test1-9, test1-8
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Found no committed offset for partition test1-8
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Found no committed offset for partition test1-6
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Found no committed offset for partition test1-2
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Found no committed offset for partition test1-1
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Setting offset for partition test1-0 to the committed offset FetchPosition{offset=14, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Setting offset for partition test1-5 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Setting offset for partition test1-9 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Setting offset for partition test1-7 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Setting offset for partition test1-4 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Setting offset for partition test1-3 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.792 INFO 6672 --- [listener1-1-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-5, test1-4, test1-3]
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-4, groupId=listener1] Resetting offset for partition test1-8 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-3, groupId=listener1] Resetting offset for partition test1-6 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-1, groupId=listener1] Resetting offset for partition test1-2 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-1, groupId=listener1] Resetting offset for partition test1-1 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-3-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-9, test1-8]
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-2-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-6, test1-7]
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-0-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-0, test1-2, test1-1]
2021-04-20 14:32:23.817 INFO 6672 --- [listener1-0-C-1] c.z.s.cbaConnector.KafkaConsumer : [KafakaConsumer][consume] thread:21 received message:{"id":1618900340}

Spring Boot + Mongo listener does not allow Tomcat to start

I have a Spring Boot application that is deployed on multiple nodes.
I want to write a Mongo change-stream listener inside the application so that I can listen to DB events and bust my in-memory cache.
The problem is that with the listener written as below, Tomcat never starts at all; only the event listener runs:
import java.util.Arrays;

import javax.annotation.PostConstruct;

import org.bson.Document;
import org.springframework.stereotype.Component;

import com.mongodb.Block;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import com.mongodb.client.model.changestream.FullDocument;

@Component
public class MongoListener {

    @PostConstruct
    public void listenProduct() {
        MongoClient mongoClient = new MongoClient(new MongoClientURI("mongodb://localhost:27017"));
        MongoDatabase database = mongoClient.getDatabase("test");
        MongoCollection<Document> collection = database.getCollection("products");
        Block<ChangeStreamDocument<Document>> printBlock = new Block<ChangeStreamDocument<Document>>() {
            @Override
            public void apply(final ChangeStreamDocument<Document> changeStreamDocument) {
                System.out.println(" primary key is : " + changeStreamDocument.getDocumentKey()
                        + " operation name:" + changeStreamDocument.getOperationType()
                        + " change document:" + changeStreamDocument);
            }
        };
        // forEach blocks this thread for as long as the change stream stays open
        collection
                .watch(Arrays.asList(Aggregates
                        .match(Filters.in("operationType", Arrays.asList("update", "delete")))))
                .fullDocument(FullDocument.UPDATE_LOOKUP).forEach(printBlock);
    }
}
If I comment out the @Component annotation, Tomcat starts up properly.
How do I ensure Tomcat starts as well as my listener?
Below are my Tomcat logs. Notice there are no errors; startup just doesn't move forward after the listener starts:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.13.RELEASE)
[2019-04-22T20:35:28:660Z] INFO --- [main] com.MyProject : Starting MyProject on abclocal with PID 87098 (/Users/abclocal/MyProject/target/classes started by abc in /Users/abclocal/MyProject)
[2019-04-22T20:35:28:662Z] DEBUG --- [main] com.MyProject : Running with Spring Boot v1.5.13.RELEASE, Spring v4.3.17.RELEASE
[2019-04-22T20:35:28:663Z] INFO --- [main] com.MyProject : The following profiles are active: dev
[2019-04-22T20:35:28:711Z] INFO --- [main] o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#70b0b186: startup date [Mon Apr 22 20:35:28 IST 2019]; root of context hierarchy
[2019-04-22T20:35:29:518Z] INFO --- [main] o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cache.annotation.ProxyCachingConfiguration' of type [org.springframework.cache.annotation.ProxyCachingConfiguration$$EnhancerBySpringCGLIB$$2024e1d6] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[2019-04-22T20:35:29:951Z] INFO --- [main] o.s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
[2019-04-22T20:35:29:976Z] INFO --- [main] o.a.catalina.core.StandardService : Starting service [Tomcat]
[2019-04-22T20:35:29:977Z] INFO --- [main] o.a.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.31
[2019-04-22T20:35:30:073Z] INFO --- [localhost-startStop-1] o.a.c.c.C.[.[localhost].[/v1] : Initializing Spring embedded WebApplicationContext
[2019-04-22T20:35:30:073Z] INFO --- [localhost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1365 ms
[2019-04-22T20:35:30:230Z] INFO --- [localhost-startStop-1] o.s.b.w.s.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
[2019-04-22T20:35:30:234Z] INFO --- [localhost-startStop-1] o.s.b.w.s.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
[2019-04-22T20:35:30:235Z] INFO --- [localhost-startStop-1] o.s.b.w.s.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
[2019-04-22T20:35:30:235Z] INFO --- [localhost-startStop-1] o.s.b.w.s.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
[2019-04-22T20:35:30:235Z] INFO --- [localhost-startStop-1] o.s.b.w.s.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
[2019-04-22T20:35:30:771Z] INFO --- [main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[127.0.0.1:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
[2019-04-22T20:35:30:892Z] INFO --- [cluster-ClusterId{value='5cbdd83a72f7790491c14c9c', description='null'}-127.0.0.1:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:26}] to 127.0.0.1:27017
[2019-04-22T20:35:30:897Z] INFO --- [cluster-ClusterId{value='5cbdd83a72f7790491c14c9c', description='null'}-127.0.0.1:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=127.0.0.1:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 6]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=3905086, setName='rs0', canonicalAddress=localhost:27017, hosts=[localhost:27017], passives=[], arbiters=[], primary='localhost:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000003, setVersion=1, lastWriteDate=Mon Apr 22 20:35:24 IST 2019, lastUpdateTimeNanos=45738116405786}
[2019-04-22T20:35:31:262Z] INFO --- [main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
[2019-04-22T20:35:31:268Z] INFO --- [cluster-ClusterId{value='5cbdd83b72f7790491c14c9d', description='null'}-localhost:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:2, serverValue:27}] to localhost:27017
[2019-04-22T20:35:31:270Z] INFO --- [cluster-ClusterId{value='5cbdd83b72f7790491c14c9d', description='null'}-localhost:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 6]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1316621, setName='rs0', canonicalAddress=localhost:27017, hosts=[localhost:27017], passives=[], arbiters=[], primary='localhost:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000003, setVersion=1, lastWriteDate=Mon Apr 22 20:35:24 IST 2019, lastUpdateTimeNanos=45738489844647}
[2019-04-22T20:35:31:324Z] INFO --- [main] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3, serverValue:28}] to localhost:27017
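The hang above comes from `forEach` on the change-stream cursor: it never returns while the stream is open, so the `@PostConstruct` call blocks the thread Spring is using to finish starting Tomcat. A minimal plain-Java sketch of the fix (hypothetical `BackgroundListener` class, no Spring or Mongo dependencies) that moves such a blocking loop onto its own thread so startup can continue:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundListener {
    // Single dedicated daemon thread for the (potentially endless) watch loop.
    private final ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "change-stream-listener");
        t.setDaemon(true); // don't prevent JVM shutdown
        return t;
    });
    final CountDownLatch started = new CountDownLatch(1);

    // In the real class this is where @PostConstruct would go; instead of
    // calling collection.watch(...).forEach(...) inline, submit it.
    public void listen(Runnable blockingWatchLoop) {
        executor.submit(() -> {
            started.countDown();
            blockingWatchLoop.run(); // may block forever; that's fine on this thread
        });
        // listen() returns immediately, so container startup continues
    }

    public static void main(String[] args) throws Exception {
        BackgroundListener l = new BackgroundListener();
        l.listen(() -> {
            try { Thread.sleep(Long.MAX_VALUE); } catch (InterruptedException ignored) {}
        });
        // the startup thread reaches this point even though the loop never ends
        boolean ok = l.started.await(5, TimeUnit.SECONDS);
        System.out.println("listener running in background: " + ok);
    }
}
```

In the Spring case the same idea applies: keep the `@Component`, but have its `@PostConstruct` only submit the watch loop to an executor (or start it on an `ApplicationReadyEvent`) rather than run it inline.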

Docker: Trying to run a Java Vert.x app inside a container

I'm trying to run a Vert.x Java app inside a Docker container. The app connects to a ZooKeeper instance running on another host; connectivity to ZooKeeper has been tested from both the host and the container. The app runs fine when I run it directly on the host. However, when I run the jar file inside the container it throws the following error:
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.9.RELEASE)
2018-01-15 11:05:15.126 INFO 7 --- [ main] c.b.vertxdemo.VertxdemoApplication : Starting VertxdemoApplication v0.0.1-SNAPSHOT on de43fb40ccba with PID 7 (/tradingengine/vertxdemo-0.0.1-SNAPSHOT.jar started by root in /tradingengine)
2018-01-15 11:05:15.131 INFO 7 --- [ main] c.b.vertxdemo.VertxdemoApplication : No active profile set, falling back to default profiles: default
2018-01-15 11:05:15.223 INFO 7 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#4b9af9a9: startup date [Mon Jan 15 11:05:15 GMT 2018]; root of context hierarchy
Vertx Options PORT - 0
Vertx Options PUBLICPORT - -1
2018-01-15 11:05:15.992 INFO 7 --- [worker-thread-0] i.v.s.c.z.ZookeeperClusterManager : Zookeeper hosts set to 10.1.0.199:2181
2018-01-15 11:05:16.131 INFO 7 --- [worker-thread-0] o.a.c.f.imps.CuratorFrameworkImpl : Starting
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:host.name=de43fb40ccba
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:java.version=1.8.0_151
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:java.vendor=Oracle Corporation
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:java.class.path=vertxdemo-0.0.1-SNAPSHOT.jar
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:java.io.tmpdir=/tmp
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:java.compiler=<NA>
2018-01-15 11:05:16.151 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:os.name=Linux
2018-01-15 11:05:16.152 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:os.arch=amd64
2018-01-15 11:05:16.152 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:os.version=3.10.0-693.5.2.el7.x86_64
2018-01-15 11:05:16.152 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:user.name=root
2018-01-15 11:05:16.152 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:user.home=/root
2018-01-15 11:05:16.152 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Client environment:user.dir=/tradingengine
2018-01-15 11:05:16.153 INFO 7 --- [worker-thread-0] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=10.1.0.199:2181 sessionTimeout=20000 watcher=org.apache.curator.ConnectionState#34191123
2018-01-15 11:05:16.184 INFO 7 --- [0.1.0.199:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server 10.1.0.199/10.1.0.199:2181. Will not attempt to authenticate using SASL (unknown error)
2018-01-15 11:05:16.201 INFO 7 --- [0.1.0.199:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to 10.1.0.199/10.1.0.199:2181, initiating session
2018-01-15 11:05:16.220 INFO 7 --- [0.1.0.199:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server 10.1.0.199/10.1.0.199:2181, sessionid = 0x160e3de5a400082, negotiated timeout = 20000
2018-01-15 11:05:16.230 INFO 7 --- [d-0-EventThread] o.a.c.f.state.ConnectionStateManager : State change: CONNECTED
2018-01-15 11:05:16.353 INFO 7 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-01-15 11:05:16.371 INFO 7 --- [ main] c.b.vertxdemo.VertxdemoApplication : Started VertxdemoApplication in 1.622 seconds (JVM running for 2.123)
2018-01-15 11:05:17.020 ERROR 7 --- [ntloop-thread-0] io.vertx.core.impl.VertxImpl : Failed to start event bus
java.net.BindException: Address not available
at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_151]
at sun.nio.ch.Net.bind(Net.java:433) ~[na:1.8.0_151]
at sun.nio.ch.Net.bind(Net.java:425) ~[na:1.8.0_151]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[na:1.8.0_151]
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:128) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1283) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:989) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:365) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-common-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) ~[netty-common-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) ~[netty-transport-4.1.15.Final.jar!/:4.1.15.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[netty-common-4.1.15.Final.jar!/:4.1.15.Final]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_151]
Vertx Failed
Here is my code that I am trying to run:
package com.myapp.vertxdemo;

import java.util.UUID;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.json.JsonObject;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.zookeeper.ZookeeperClusterManager;

@SpringBootApplication
@ComponentScan(basePackages = "com.myapp")
@EnableAutoConfiguration
public class VertxdemoApplication implements CommandLineRunner {

    @Value("${zookeeper.host}")
    String zookeeperHost;

    @Value("${zookeeper.cluster.host}")
    String zookeeperClusterHost;

    @Value("${zookeeper.cluster.port}")
    int zookeeperClusterPort;

    @Autowired
    DemoVerticle demovertical;

    public static void main(String[] args) {
        SpringApplication.run(VertxdemoApplication.class, args);
    }

    @Override
    public void run(String... arg0) throws Exception {
    }

    @PostConstruct
    private void Deploy() {
        JsonObject zkConfig = new JsonObject();
        zkConfig.put("zookeeperHosts", zookeeperHost);
        zkConfig.put("rootPath", "io.vertxdemo1");
        zkConfig.put("retry", new JsonObject().put("initialSleepTime", 3000).put("maxTimes", 3));
        ClusterManager mgr = new ZookeeperClusterManager(zkConfig);
        VertxOptions options = new VertxOptions()
                .setClustered(true)
                .setClusterHost(zookeeperClusterHost)
                //.setClusterPort(zookeeperClusterPort)
                .setClusterManager(mgr);
        System.out.println("Vertx Options PORT - " + options.getClusterPort());
        System.out.println("Vertx Options PUBLICPORT - " + options.getClusterPublicPort());
        String guid = UUID.randomUUID().toString();
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                vertx.deployVerticle(demovertical);
                vertx.setPeriodic(5000, id -> vertx.eventBus().publish("PRICE_FEED", guid));
                System.out.println("Vertx Deployed");
            } else {
                System.out.println("Vertx Failed");
            }
        });
    }
}
Run the Docker container with host networking:
docker run --network host
The problem is that the interface (and the address) you are trying to bind to is not available inside the container (inside the container it is usually a 172.17.x.x address).
When you specify host-type networking (read here for more info: https://docs.docker.com/network/#scope-of-this-topic), the container can use the IP of the host.
Your exception indicates that binding fails; I'm pretty sure the values you set for your ZooKeeper config are either not set or invalid.
Debug the values of ${zookeeper.host}, ${zookeeper.cluster.host} and ${zookeeper.cluster.port}, and if they're not set, configure them properly.
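One way to surface that early (a sketch with hypothetical names, not Spring-specific): resolve the required properties yourself at startup and fail loudly when one is missing, instead of letting the event-bus bind fail later. The `10.1.0.199:2181` value below is taken from the logs above.

```java
import java.util.Map;

public class RequiredConfig {
    // Returns the value for key, or throws with a clear message if absent/blank.
    static String require(Map<String, String> props, String key) {
        String v = props.get(key);
        if (v == null || v.isBlank()) {
            throw new IllegalStateException("Missing required property: " + key);
        }
        return v;
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("zookeeper.host", "10.1.0.199:2181");
        System.out.println(require(props, "zookeeper.host"));
        try {
            require(props, "zookeeper.cluster.host"); // not set -> fail fast
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Printing (or asserting on) each value before building `VertxOptions` makes it obvious whether `setClusterHost` is being handed an address that actually exists inside the container.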
