Error connecting to Kafka running in docker container - java

I have configured the following Kafka properties for my Spring Boot based library, which is bundled inside the lib directory of an EAR deployed to WildFly. I am able to start the Spring components successfully by loading the property file from the classpath (WEB-INF/classes):
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.gson.GsonAutoConfiguration,org.springframework.boot.autoconfigure.jms.artemis.ArtemisAutoConfiguration,\
org.springframework.boot.autoconfigure.data.web.SpringDataWebAutoConfiguration
spring.kafka.admin.client-id=iris-admin-local
spring.kafka.producer.client-id=iris-producer-local
spring.kafka.producer.retries=3
spring.kafka.producer.properties.max.block.ms=2000
spring.kafka.producer.bootstrap-servers=127.0.0.1:19092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
foo.app.kafka.executor.core-pool-size=10
foo.app.kafka.executor.max-pool-size=500
foo.app.kafka.executor.queue-capacity=1000
I run Zookeeper and Kafka via Docker Compose, and the containers are mapped to host ports 12181 and 19092 respectively. The publish fails with the error:
19:37:42,914 ERROR [org.springframework.kafka.support.LoggingProducerListener] (swiftalker-3) Exception thrown when sending a message with key='543507' and payload='com.foo.app.kanban.defect.entity.KanbanDefect#84b13' to topic alm_swift-alm:: org.apache.kafka.common.errors.TimeoutException: Topic alm_swift-alm not present in metadata after 2000 ms.
19:37:43,124 WARN [org.apache.kafka.clients.NetworkClient] (kafka-producer-network-thread | iris-producer-local-1) [Producer clientId=iris-producer-local-1] Error connecting to node 6be446692a1f:9092 (id: 1001 rack: null): java.net.UnknownHostException: 6be446692a1f
at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
at org.apache.kafka.clients.NetworkClient.access$600(NetworkClient.java:73)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1128)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1016)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:547)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:324)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at java.lang.Thread.run(Thread.java:748)
Now this is after having provided the spring.kafka.producer.bootstrap-servers=127.0.0.1:19092 property. What's interesting, though, is this:
CONTAINER ID NAMES PORTS CREATED STATUS
2133c81ed51d mongo 0.0.0.0:23556->27017/tcp, 0.0.0.0:23557->27018/tcp, 0.0.0.0:23558->27019/tcp 29 minutes ago Up 29 minutes
f18b86d8739e kafka-ui 0.0.0.0:18080->8080/tcp 29 minutes ago Up 29 minutes
6be446692a1f kafka 0.0.0.0:19092->9092/tcp 29 minutes ago Up 29 minutes
873304e1e6a0 zookeeper 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp 29 minutes ago Up 29 minutes
the WildFly server error logs show that the app is actually connecting to the Docker container via its container ID, i.e.
6be446692a1f kafka 0.0.0.0:19092->9092/tcp
from the docker ps -a output and
Error connecting to node 6be446692a1f:9092 (id: 1001 rack: null): java.net.UnknownHostException: 6be446692a1f
I'm confused as to how the Spring Boot code, despite the config property pointing at the server over localhost and the mapped port 19092, manages to find the Docker container by its ID and default port and then tries to connect to it. How do I fix this?
Update: here is the Docker Compose file:
version: '3'
networks:
  app-tier:
    driver: bridge
services:
  zookeeper:
    image: 'docker.io/bitnami/zookeeper:3-debian-10'
    container_name: 'zookeeper'
    networks:
      - app-tier
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'docker.io/bitnami/kafka:2-debian-10'
    container_name: 'kafka'
    ports:
      - 19092:9092
    networks:
      - app-tier
    volumes:
      - 'kafka_data:/bitnami'
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  database:
    image: 'mongo'
    container_name: 'mongo'
    environment:
      - MONGO_INITDB_DATABASE='swiftalk_db'
    networks:
      - app-tier
    ports:
      - 23556-23558:27017-27019
    depends_on:
      - kafka
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 18080:8080
    networks:
      - app-tier
    volumes:
      - 'mongo_data:/data/db'
    depends_on:
      - kafka
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
  mongo_data:
    driver: local

You've not shared your Docker Compose so I can't give you the specific fix to make, but in essence you need to configure your advertised listeners correctly. The advertised listener is the address that the broker returns to the client, telling the client where to reach the broker for all subsequent connections.
Details: https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
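For the Bitnami image in the question's Compose file, a sketch of correctly configured listeners is below. The INTERNAL/EXTERNAL split and the exact KAFKA_CFG_* variable names are assumptions based on the Bitnami image's convention of mapping KAFKA_CFG_* environment variables to broker settings; verify them against the image's documentation:

```yaml
kafka:
  image: 'docker.io/bitnami/kafka:2-debian-10'
  ports:
    - 19092:19092   # publish the EXTERNAL listener port to the host
  environment:
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
    # INTERNAL is used by other containers on the Compose network;
    # EXTERNAL is what the broker advertises to clients on the host
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_CFG_LISTENERS=INTERNAL://:9092,EXTERNAL://:19092
    - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://127.0.0.1:19092
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
```

With something like this, the broker no longer advertises its container hostname (6be446692a1f) in the metadata it returns, so the producer on the host keeps talking to 127.0.0.1:19092.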

Related

Prometheus doesn't see actuator endpoints in Docker. Get "http://<container-name>:8080/actuator/prometheus": dial tcp ip:8080: connection refused

I have a problem with my Prometheus container: it can't connect to /actuator/prometheus on the other services. I get this error:
Get "http://notification-service:8080/actuator/prometheus": dial tcp 172.24.0.13:8080: connect: connection refused
or
server returned HTTP status 404
(screenshot from Prometheus omitted)
I tried sending a curl request (with the container IP); at first I really do get a 404 error, but after a few minutes I get a normal response.
The Compose yml file, with one example service:
services:
  ticket-service:
    container_name: ticket-service
    image: <docker_hub_nick>/ticket-service:latest
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgres-ticket:5431/ticket_service
    depends_on:
      - postgres-ticket
      - broker
      - zipkin
      - discovery-server
      - api-gateway
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: always
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    depends_on:
      - bet-service
      - ticket-service
      - chat-service
      - notification-service
      - wallet-service
The ticket service .properties file:
spring.application.name=ticket-service
management.health.circuitbreakers.enabled=true
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always
prometheus.yml:
global:
  scrape_interval: 1m
  evaluation_interval: 30s
  scrape_timeout: 1m
  external_labels:
    monitor: 'easywin-monitor'
scrape_configs:
  - job_name: 'ticket_service'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['ticket-service:8080']
        labels:
          application: 'Ticket Service'
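The 404-then-success behaviour described above suggests the Spring Boot services simply aren't finished starting when Prometheus first scrapes them; with a scrape_interval of 1m it can then take another minute before the target turns healthy. One way to express the startup ordering in Compose is a healthcheck plus a long-form depends_on condition (a sketch; the health endpoint, timings, and the assumption that curl exists in the image are mine, and conditional depends_on requires a Compose version that supports it):

```yaml
services:
  ticket-service:
    # ... as in the question ...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 10s
      timeout: 3s
      retries: 12
  prometheus:
    image: prom/prometheus:latest
    depends_on:
      ticket-service:
        condition: service_healthy   # wait until the service answers before starting Prometheus
```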

unable to start Apache NiFi UI

I have created a container for Apache NiFi using a docker-compose file. When I run the docker-compose up command I get the following error when the nifi container is run:
2022-10-19 14:59:34,234 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start the web server... shutting down.
nifi_container_persistent | java.io.IOException: Function not implemented
What exactly does this java.io.IOException mean? Where can I make the required change so that I can fix this error?
Here is the docker compose file for nifi with custom bridge network "my_network":
nifi:
  hostname: mynifi
  container_name: nifi_container_persistent
  image: 'apache/nifi:1.16.1' # latest image as of 2021-11-09.
  restart: on-failure
  ports:
    - '8091:8080'
  environment:
    - NIFI_WEB_HTTP_PORT=8080
    - NIFI_CLUSTER_IS_NODE=true
    - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
    - NIFI_ZK_CONNECT_STRING=myzookeeper:2181
    - NIFI_ELECTION_MAX_WAIT=30 sec
    - NIFI_SENSITIVE_PROPS_KEY='12345678901234567890A'
  healthcheck:
    test: "${DOCKER_HEALTHCHECK_TEST:-curl localhost:8091/nifi/}"
    interval: "60s"
    timeout: "3s"
    start_period: "5s"
    retries: 5
  volumes:
    - ./nifi/database_repository:/opt/nifi/nifi-current/database_repository
    - ./nifi/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
    - ./nifi/content_repository:/opt/nifi/nifi-current/content_repository
    - ./nifi/provenance_repository:/opt/nifi/nifi-current/provenance_repository
    - ./nifi/state:/opt/nifi/nifi-current/state
    - ./nifi/logs:/opt/nifi/nifi-current/logs
    # uncomment the next line after copying the /conf directory from the container to your local directory to persist NiFi flows
    #- ./nifi/conf:/opt/nifi/nifi-current/conf
  networks:
    - my_network
please help

Cannot run or connect to PostgreSQL container on Docker

On my Windows 10 machine I have a Java app, and I create PostgreSQL images on Docker using the following configuration:
docker-compose.yml:
version: '2.0'
services:
  postgresql:
    image: postgres:11
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=******
      - POSTGRES_DB=demo_test
And I use the following command to compose images:
cd postgresql
docker-compose up -d
Although the pgAdmin container is working on Docker, the postgres container is generally in a restarting state and only sometimes seems to be running for a second. When I look at that container's log, I see the following errors:
2021-03-16 09:00:18.526 UTC [82] FATAL: data directory "/data/postgres" has wrong ownership
2021-03-16 09:00:18.526 UTC [82] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/data/postgres"
running bootstrap script ... The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
I have tried to apply several workaround suggestions e.g. PostgreSQL with docker ownership issue, but none of them is working. So, how can I fix this problem?
Update: Here is the latest version of my docker-compose.yml file:
version: '2.0'
services:
  postgresql:
    image: postgres:11
    container_name: "my-pg"
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=******
      - POSTGRES_DB=demo_test
    volumes:
      - psql:/var/lib/postgresql/data
volumes:
  psql:
As I already stated in my comment I'd suggest using a named volume.
Here's my docker-compose.yml for Postgres 12:
version: "3"
services:
  postgres:
    image: "postgres:12"
    container_name: "my-pg"
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
      POSTGRES_DB: "mydb"
    volumes:
      - psql:/var/lib/postgresql/data
volumes:
  psql:
Then I created the psql volume via docker volume create psql (so just a volume without any actual path mapping).

Cannot produce kafka message on kubernetes

I'm getting error on kafka: [2020-05-04 12:46:59,477] ERROR [KafkaApi-1001] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
And error when I'm trying to produce message: 2020-05-04 12:47:45.221 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {activate-user=LEADER_NOT_AVAILABLE}
Using docker-compose everything works fine, but I'm trying to move it to k8s as well. I started that process with the kompose convert tool and modified the output.
Here is a fragment of the docker-compose:
zookeeper:
  container_name: zookeeper
  image: wurstmeister/zookeeper:latest
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
  ports:
    - "2181:2181"
mail-sender-kafka:
  container_name: mail-sender-kafka
  image: wurstmeister/kafka:2.12-2.2.1
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_HOST_NAME: mail-sender-kafka
    KAFKA_CREATE_TOPICS: "activate-user:1:1"
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
account-service:
  image: szastarek/food-delivery-account-service:${TAG}
  container_name: account-service
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
    - account-service-db
mail-sender:
  image: szastarek/food-delivery-mail-sender:${TAG}
  container_name: mail-sender
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
After converting it to k8s I've got zookeeper-deployment, zookeeper-service, mail-sender-deployment, mail-sender-kafka-deployment, mail-sender-kafka-service.
I've also tried to add some env variables; for now it looks like this:
spec:
  containers:
    - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: mail-sender-kafka
        - name: KAFKA_ADVERTISED_PORT
          value: '9092'
        - name: ADVERTISED_LISTENERS
          value: PLAINTEXT://mail-sender-kafka:9092
        - name: KAFKA_CREATE_TOPICS
          value: activate-user:1:1
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
I've found one thing that probably is connected with the problem.
When I run ping mail-sender-kafka inside the Docker container, I can reach myself. But when I connect to the Kubernetes mail-sender-kafka pod, I cannot ping myself.
After updating the pod's hosts file it works. There was an entry like:
172.18.0.24 mail-sender-kafka-xxxxxxx
And I changed it to
172.18.0.24 mail-sender-kafka
Any tips about how should I fix it?
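The failed self-ping points at DNS: a bare pod name is not resolvable in Kubernetes, so the broker cannot resolve its own advertised hostname mail-sender-kafka (kompose typically generates a Service only where one is declared, and editing hosts files only papers over the missing name). A sketch of the missing Service is below; the app: mail-sender-kafka selector is an assumption and must match the Deployment's actual pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mail-sender-kafka
spec:
  selector:
    app: mail-sender-kafka   # must match the labels on the Kafka pods
  ports:
    - name: kafka
      port: 9092
      targetPort: 9092
```

Once such a Service exists, mail-sender-kafka resolves both from the producer pods and from inside the broker pod itself, without touching any hosts files.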

How to make Spring boot with Redis Sentinel work with Docker

I am trying to set up a Spring boot application with Redis Sentinel 3.2.11 using docker. However I am getting
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /172.27.0.2:6379
My docker compose configuration
version: '3.1'
services:
  master:
    image: redis:3
    container_name: redis-master
    hostname: host_dev
    networks:
      - docker_dev
  slave:
    image: redis:3
    command: redis-server --slaveof redis-master 6379
    hostname: host_dev
    links:
      - master:redis-master
    container_name: redis-slave
    networks:
      - docker_dev
  sentinel:
    build: sentinel
    environment:
      - SENTINEL_DOWN_AFTER=5000
      - SENTINEL_FAILOVER=5000
      - MASTER_NAME=mymaster
    hostname: host_dev
    image: sentinel:3
    links:
      - master:redis-master
      - slave
    container_name: sentinel
    ports:
      - "26379:26379"
    networks:
      - docker_dev
networks:
  docker_dev:
Dockerfile:
FROM redis:3
EXPOSE 26379
ADD sentinel.conf /etc/redis/sentinel.conf
RUN chown redis:redis /etc/redis/sentinel.conf
ENV SENTINEL_QUORUM 2
ENV SENTINEL_DOWN_AFTER 30000
ENV SENTINEL_FAILOVER 180000
COPY sentinel-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/sentinel-entrypoint.sh
ENTRYPOINT ["sentinel-entrypoint.sh"]
Spring configuration in application.properties:
redis.cluster.name=mymaster
redis.sentinel.nodes=localhost:26379
redis.timeout=2000
Issue:
The Spring Boot app (run from outside the docker-machine) is able to connect to the Sentinel node. The Sentinel node provides the master information with IP 172.27.0.2, i.e. the internal Docker network IP. The Spring Boot app then tries to connect to redis-master at 172.27.0.2 and fails, as that IP is not visible outside the Docker machine.
Possible fix:
How can I make the Sentinel node advertise the master IP as localhost instead of the internal docker-machine network IP?
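One commonly used workaround (a sketch, not a verified fix for this exact setup, and Linux-only) is to run the Redis containers with host networking, so the addresses Sentinel reports for the master are the same addresses the Spring Boot app on the host can reach:

```yaml
# docker-compose sketch: host networking removes the address translation,
# so Sentinel reports the master at an IP the host application can reach.
# Port mappings are dropped because host networking ignores them.
services:
  master:
    image: redis:3
    network_mode: host
  sentinel:
    image: sentinel:3
    network_mode: host
```

The alternative is to run the Spring Boot app inside the same Docker network (e.g. as another Compose service on docker_dev), so the 172.27.0.x addresses Sentinel hands out are directly reachable.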
