Kafka AdminClientConfig ignoring provided configuration - java

I have a microservice-based Java (Spring Boot) application where I'm integrating Kafka for event-driven internal service communication. The services run in a docker-compose setup, all on the same bridged network. I've added cp-kafka to that docker-compose file, on the same network.
My problem is that once I start docker-compose, neither the producer nor the consumer connects to the broker. The AdminClientConfig uses localhost:9092 rather than the kafka:9092 I've defined as the advertised listener in the broker configuration.
This is the output I get at the producer:
2023-02-14 13:09:17.563 INFO [article-service,,] 1 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-14T13:09:17.563482000Z bootstrap.servers = [localhost:9092]
2023-02-14T13:09:17.563518700Z client.dns.lookup = use_all_dns_ips
2023-02-14T13:09:17.563524100Z client.id =
2023-02-14T13:09:17.563528200Z connections.max.idle.ms = 300000
...
The consumer would briefly connect using the ConsumerConfig I've provided:
2023-02-14 13:10:13.358 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-saveArticle-1
client.rack =
connections.max.idle.ms = 540000
...
However, right after that it retries, this time using the AdminClientConfig instead:
2023-02-14 13:10:42.365 INFO 1 --- [ scheduling-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:9092]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
Relevant parts of docker-compose.yml
...
networks:
  backend:
    name: backend
    driver: bridge
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: dev.infra.zookeeper
    networks:
      - backend
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    container_name: kafka
    networks:
      - backend
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      AUTO_CREATE_TOPICS_ENABLE: 'false'
...
Producer application.yml
spring:
  kafka:
    producer:
      bootstrap-servers: kafka:9092
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
topic:
  name: saveArticle
Consumer application.yml
spring:
  kafka:
    consumer:
      bootstrap-servers: kafka:9092
      auto-offset-reset: earliest
      group-id: stock
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
topic:
  name: saveArticle
Kafka dependencies I'm using:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.0.2</version>
    <type>pom</type>
</dependency>
Any clue where it's getting localhost:9092 from, and why it's ignoring the kafka:9092 host I've explicitly specified in the broker config? How can I resolve this?

You only need one YAML file per application.
The error occurs because you're only setting bootstrap-servers on the producer and consumer clients individually, not the shared spring.kafka.bootstrap-servers property. The admin client therefore falls back to the spring-kafka default (localhost:9092); it has nothing to do with what the broker advertises.
You can add a spring.kafka.admin section, but it's better not to duplicate unnecessary config.
https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.kafka
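For example, a minimal application.yml sketch that sets the bootstrap servers once for all clients (producer, consumer, and the admin client), reusing the serializer/deserializer settings from your existing files:
spring:
  kafka:
    # one bootstrap-servers value shared by producer, consumer and admin clients
    bootstrap-servers: kafka:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: stock
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer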
However, you will need to advertise localhost:9092 if you're trying to run this code on your host machine; otherwise you'll end up with UnknownHostException: kafka.
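If you want both to work (services on the Docker network and code running directly on your host), a common pattern is to advertise two listeners. A rough sketch of the broker environment in docker-compose (the extra host-facing port 29092 is an assumption you can adjust):
kafka:
  ports:
    - "9092:9092"
    - "29092:29092"   # assumed host-facing port
  environment:
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_LISTENERS: PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
    # containers connect with kafka:9092, code on the host with localhost:29092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092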

Related

Spring boot kafka error: app fail to start

The error is found when running the app.
kafka:
  consumer:
    bootstrap-servers: localhost:9092
    group-id: group_id
    auto-offset-reset: earliest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
  producer:
    bootstrap-servers: localhost:9092
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.apache.kafka.common.serialization.StringSerializer

K8s Spring Application cannot connect to Mysql DB

I wanted to try and deploy my Spring Boot application on Kubernetes. I set up a test environment with microk8s (dns, storage, ingress enabled) which consists of a pod running the application itself and a pod with the MySQL database. Each pod has its own service and runs in the same default namespace. The YAML files can be seen below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: test-app
          image: myImage
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
          env:
            - name: SPRING_APPLICATION_JSON
              valueFrom:
                configMapKeyRef:
                  name: spring-config
                  key: app-config.json
Application Service:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
Mysql Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-server
  labels:
    # app: mysql
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
        - name: mysql-persistent-volume-storage
          persistentVolumeClaim:
            claimName: mysql-pvc-claim
      containers:
        - name: mysql
          image: mysql
          volumeMounts:
            - name: mysql-persistent-volume-storage
              mountPath: /var/lib/mysql
              subPath: mysql-server
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: pass_root
            - name: MYSQL_USER
              value: user
            - name: MYSQL_PASSWORD
              value: pass
            - name: MYSQL_DATABASE
              value: test
          ports:
            - containerPort: 3306
Mysql Service:
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
For some reason my application can't use the database. It throws this error:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:121) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-4.0.3.jar!/:na]
.......
The application.yml:
db:
  datasource:
    url: jdbc:mysql://ip/test
    user: user
    password: pass
---
spring:
  datasource:
    url: ${db.datasource.url}
    username: ${db.datasource.user}
    password: ${db.datasource.password}
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    database-platform: org.hibernate.dialect.MySQLDialect
  mvc:
    view:
      suffix: .html
  thymeleaf:
    cache: false
allowPublicKeyRetrieval: true
hibernate:
  show_sql: true
logging:
  level:
    org:
      hibernate:
        SQL: debug
I tried accessing the service from another pod in the namespace that runs mysql, since it has the mysql client pre-installed, and from the host. Both had access to the database. I also tried pinging the service from the pod running the application. It found the service without any problem.
Then I tried using NodePort instead of ClusterIP. Nothing changed.
I made sure the credentials are correct.
Finally, I tried removing and adding the port in the application.yml.
I am completely stuck and I have no idea what's the problem. Any help would be appreciated.
I spot two problems in your configuration: the username/password in your MySQL deployment does not match the values in your application.yaml.
The other one is that you use a property that Spring Boot does not pick up by default, and I assume you have no special logic to handle that: "spring.data.url" should be "spring.datasource.url".
Note that it should be "spring.datasource.username", not "user", as well.
Referring to your 'db' section in the YAML: in general I would not recommend having a separate section for the database credentials and referencing it through placeholders.
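For reference, a minimal sketch of the relevant application.yml section, assuming the in-cluster service name db-service defined above and the credentials from the MySQL deployment (adjust to your actual values):
spring:
  datasource:
    # db-service is the Kubernetes Service fronting the MySQL pod; 3306 is its service port
    url: jdbc:mysql://db-service:3306/test
    username: user
    password: pass
    driver-class-name: com.mysql.cj.jdbc.Driver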

EurekaServer com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server on Docker

Eureka server not working on Docker compose
Here is the docker-compose for the Eureka server and config server
version: '3'
services:
  fetebird-eurekaservice:
    container_name: FeteBird-EurekaService
    build:
      context: ../../Eureka-Service-Registry/
      dockerfile: Dockerfile
    image: fetebird/eurekaservice
    ports:
      - "8761:8761"
    networks:
      - spring-cloud-network
    volumes:
      - ./fetebird-eurekaservice/data:/data
    logging:
      driver: json-file
  fetebird-configserver:
    container_name: FeteBird-ConfigServer
    build:
      context: ../../FeteBird-ConfigServer
      dockerfile: Dockerfile
    image: fetebird/configserver
    ports:
      - "8085:8085"
    networks:
      - spring-cloud-network
    volumes:
      - ./fetebird-configserver/data:/data
    logging:
      driver: json-file
networks:
  spring-cloud-network:
    driver: bridge
I tried with the expose option as well, but no luck.
Eureka server docker file
FROM openjdk:14
WORKDIR /fetebird-eurekaservice/service
ADD build/libs/fete-bird-eureka-service-registry-0.0.1-SNAPSHOT.jar fete-bird-eureka-service-registry-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "fete-bird-eureka-service-registry-0.0.1-SNAPSHOT.jar"]
Config server Client Docker file
FROM openjdk:14
WORKDIR /fetebird-eurekaservice/service
ADD build/libs/fete-bird-configuration-server-0.0.1-SNAPSHOT.jar fete-bird-configuration-server-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "fete-bird-configuration-server-0.0.1-SNAPSHOT.jar"]
Config Server
@SpringBootApplication
@EnableEurekaServer
public class FeteBirdEurekaServiceRegistryApplication {
    public static void main(String[] args) {
        SpringApplication.run(FeteBirdEurekaServiceRegistryApplication.class, args);
    }
}
Configuration of Eureka server
application.yml
server:
  port: 8761
eureka:
  client:
    register-with-eureka: false
    fetch-registry: false
spring:
  profiles:
    active: dev
bootstrap.yml
spring:
  application:
    name: CONFIG-SERVER
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          search-locations: classpath:/config
Config Server
server:
  port: 8085
Discovery Server Access
eureka:
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://localhost:8761/eureka/
  instance:
    hostname: localhost
Errors
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
FeteBird-ConfigServer | at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:112) ~[eureka-client-1.9.21.jar!/:1.9.21]
FeteBird-ConfigServer | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56) ~[eureka-client-1.9.21.jar!/:1.9.21]
FeteBird-ConfigServer | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59) ~[eureka-client-1.9.21.jar!/:1.9.21]
FeteBird-ConfigServer | at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77) ~[eureka-client-1.9.21.jar!/:1.9.21]
FeteBird-ConfigServer | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56) ~[eureka-client-1.9.21.jar!/:1.9.21]
FeteBird-ConfigServer | at com.netflix.discovery.DiscoveryClient.register(DiscoveryClient.java:857) ~[eureka-client-1.9.21.jar!/:1.9.21]
FeteBird-ConfigServer | at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:121) ~[eureka-client-1.9.21.jar!/:1.9.21]
FeteBird-ConfigServer | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
FeteBird-ConfigServer | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
FeteBird-ConfigServer | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na]
FeteBird-ConfigServer | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[na:na]
FeteBird-ConfigServer | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[na:na]
FeteBird-ConfigServer | at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
FeteBird-ConfigServer |
FeteBird-ConfigServer | 2020-07-20 14:30:19.268 WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
The problem is this:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:1111/eureka
It is pointing to localhost, but Eureka is no longer running on localhost; inside Docker, localhost refers to the individual container. The containers are linked together, so you can just change this to:
eureka:
  client:
    serviceUrl:
      defaultZone: http://fetebird-eurekaservice:8761/eureka/
  instance:
    hostname: fetebird-eurekaservice
Each Dockerfile:
ENTRYPOINT ["java", "-jar", "fete-bird-configuration-server-0.0.1-SNAPSHOT.jar"]
Docker compose file (Add links and depends_on)
fetebird-configserver:
  container_name: FeteBird-ConfigServer
  build:
    context: ../../FeteBird-ConfigServer
    dockerfile: Dockerfile
  image: fetebird/configserver
  ports:
    - "8085:8085"
  links:
    - fetebird-eurekaservice
  depends_on:
    - fetebird-eurekaservice
  networks:
    - spring-cloud-network
  volumes:
    - ./fetebird-configserver/data:/data
  logging:
    driver: json-file
Reference - https://github.com/spring-cloud/spring-cloud-netflix/issues/2442
As pointed out by OP's answer: Eureka is no longer running on localhost, localhost in this case is the individual containers.
I'm here to introduce two additional solutions:
Solution 1. Using docker environment
Dev environment (in application.properties):
eureka.client.service-url.defaultZone = http://localhost:8761/eureka
Docker environment (override defaultZone settings in docker-compose.yml file):
myservice:
  container_name: myservice
  environment:
    eureka.client.service-url.defaultZone: http://<your-eureka-service-name>:8761/eureka
This will override application.properties or application.yml values.
If you'd like to override a bunch of settings in the Docker environment, such as the JDBC connection, you can use SPRING_APPLICATION_JSON (Spring reads it because it is embedded in an environment variable or system property):
myservice:
  container_name: myservice
  environment:
    SPRING_APPLICATION_JSON: '{
      "spring.datasource.url": "jdbc:mysql://<your-mysql-host>:3306/mydb",
      "spring.datasource.username": "username",
      "spring.datasource.password": "password",
      "spring.jpa.hibernate.ddl-auto": "update",
      "eureka.client.service-url.defaultZone": "http://<your-eureka-service-name>:8761/eureka"
    }'
Solution 2. Setting up different profiles:
Step one, prepare two versions of application.properties (or application.yml, if you prefer):
application-default.properties for dev environment:
eureka.client.service-url.defaultZone = http://localhost:8761/eureka
application-docker.properties for docker environment:
eureka.client.service-url.defaultZone = http://<your-eureka-service-name>:8761/eureka
Step two (docker-compose file):
myservice:
  container_name: myservice
  environment:
    SPRING_PROFILES_ACTIVE: docker
By setting SPRING_PROFILES_ACTIVE to docker, Spring will load application-docker.properties as the active profile.
References:
Spring Profiles
Spring Environment-Specific Properties File
How to load the SPRING_PROFILES_ACTIVE value in the client service dynamically?

How to collect the Prometheus metrics in a java program?

I have searched many articles but not found the solution I expected. I am new to Prometheus. I need to collect Prometheus metrics in my Java program, so I need help reading or getting those metrics from my Java program.
Any help or suggestion would be helpful.
Thanks
If you are using Spring Boot 2.1.5.RELEASE, then add the actuator and micrometer-prometheus dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
Add config to enable access to the /actuator/prometheus endpoint:
management:
  endpoints:
    web:
      exposure:
        include: '*'
Then try requesting http://domain:port/actuator/prometheus.
EDIT
For Kubernetes, I'm using a Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myAppName
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myAppName
  template:
    metadata:
      labels:
        app: myAppName
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8091"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      containers:
        - name: myAppName
          image: images.com/app-service:master
          imagePullPolicy: Always
          ports:
            - containerPort: 8091
          env:
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SPRING_PROFILES_ACTIVE
              value: "prod"
            - name: CONFIG_SERVER_ADDRESS
              value: "http://config-server:8888"
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /actuator/health
              port: 8091
              scheme: HTTP
            initialDelaySeconds: 45
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /actuator/health
              port: 8091
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
      nodeSelector:
        servicetype: mvp-cluster
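Note that the prometheus.io/* annotations above only take effect if the Prometheus server is configured to discover pods and honour them. A typical scrape_configs sketch for that (standard Kubernetes pod service discovery, not something specific to this deployment):
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # use prometheus.io/path (e.g. /actuator/prometheus) as the metrics path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # use prometheus.io/port (e.g. 8091) as the scrape port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__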
There are a bunch of ways you can do it. Please refer to https://github.com/prometheus/client_java

spring-cloud-stream-binder-kafka configuration for Confluent Cloud Schema Registry Unauthorized error

I'm having trouble configuring a connection to Confluent Cloud when using spring-cloud-stream-binder-kafka. Perhaps somebody can see what is wrong?
When I use the example from https://www.confluent.io/blog/schema-registry-avro-in-spring-boot-application-tutorial/
Then it works fine and I can see messages on Confluent Cloud.
However, when I add the same connection details using the spring-cloud-stream-binder-kafka config, it returns an Unauthorized error.
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"MySchema","namespace":"org.test","fields":[{"name":"value","type":"double"}]}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unauthorized; error code: 401
My configuration below gives the above error. I'm not sure what is going wrong.
cloud:
  stream:
    default:
      producer:
        useNativeEncoding: true
    kafka:
      binder:
        brokers: myinstance.us-east1.gcp.confluent.cloud:9092
        producer-properties:
          key.serializer: org.apache.kafka.common.serialization.StringSerializer
          value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
          schema.registry.url: https://myinstance.us-central1.gcp.confluent.cloud
          basic.auth.credentials.source: USER_INFO
          schema.registry.basic.auth.user.info: mySchemaKey:mySchemaSecret
        configuration:
          ssl.endpoint.identification.algorithm: https
          sasl.mechanism: PLAIN
          request.timeout.ms: 20000
          retry.backoff.ms: 500
          sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="myKey" password="MySecret";
          security.protocol: SASL_SSL
    bindings:
      normals-out:
        destination: normals
        contentType: application/*+avro
Example from Confluent that is working fine:
kafka:
  bootstrap-servers:
    - myinstance.us-east1.gcp.confluent.cloud:9092
  properties:
    ssl.endpoint.identification.algorithm: https
    sasl.mechanism: PLAIN
    request.timeout.ms: 20000
    retry.backoff.ms: 500
    sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="myKey" password="MySecret";
    security.protocol: SASL_SSL
    schema.registry.url: https://myinstance.us-central1.gcp.confluent.cloud
    basic.auth.credentials.source: USER_INFO
    schema.registry.basic.auth.user.info: mySchemaKey:mySchemaSecret
  producer:
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
  template:
    default-topic:
logging:
  level:
    root: info
My issue was only that I was missing a dependency in my pom.
I should delete my question, but I leave it here as a reference that the configuration does actually work as it is above.
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-client</artifactId>
    <version>5.3.0</version>
</dependency>
