Spring Boot Kafka error: app fails to start - java

The error occurs when running the app with the following configuration:
kafka:
  consumer:
    bootstrap-servers: localhost:9092
    group-id: group_id
    auto-offset-reset: earliest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
  producer:
    bootstrap-servers: localhost:9092
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.apache.kafka.common.serialization.StringSerializer

Related

Kafka AdminClientConfig ignoring provided configuration

I have a microservice-based Java (Spring Boot) application where I'm integrating Kafka for event-driven internal service communication. The services run inside a docker-compose stack, all under the same bridged network. I've added cp-kafka to that docker-compose stack, again under the same network.
My problem is that once I start the docker-compose stack, neither the producer nor the consumer connects to the broker. What happens is that the AdminClient uses localhost:9092 rather than the kafka:9092 I've defined as the advertised listener in the broker configuration.
This is the output I get at the producer:
2023-02-14 13:09:17.563 INFO [article-service,,] 1 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:9092]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
...
The consumer would briefly connect using the ConsumerConfig I've provided:
2023-02-14 13:10:13.358 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-saveArticle-1
client.rack =
connections.max.idle.ms = 540000
...
However, right after that it retries, this time using the AdminClientConfig instead:
2023-02-14 13:10:42.365 INFO 1 --- [ scheduling-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:9092]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
Relevant parts of docker-compose.yml:
...
networks:
  backend:
    name: backend
    driver: bridge
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: dev.infra.zookeeper
    networks:
      - backend
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    container_name: kafka
    networks:
      - backend
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      AUTO_CREATE_TOPICS_ENABLE: 'false'
...
Producer application.yml:
spring:
  kafka:
    producer:
      bootstrap-servers: kafka:9092
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
topic:
  name: saveArticle
Consumer application.yml:
spring:
  kafka:
    consumer:
      bootstrap-servers: kafka:9092
      auto-offset-reset: earliest
      group-id: stock
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
topic:
  name: saveArticle
Kafka dependencies I'm using:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.0.2</version>
    <type>pom</type>
</dependency>
Any clue as to where it's getting localhost:9092 from, and why it's ignoring the kafka:9092 host I've explicitly specified in the broker config? How can I resolve this?
You only need one YAML file per application.
The error happens because you're setting the bootstrap servers only on the producer and consumer clients individually, not via spring.kafka.bootstrap-servers=kafka:9092. The AdminClient therefore falls back to the spring-kafka default (localhost:9092); this has nothing to do with what the broker advertises.
You can add a spring.kafka.admin section, but it's better not to duplicate config unnecessarily:
https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.kafka
However, you will need to advertise localhost:9092 if you're trying to run this code on your host machine; otherwise, you'll end up with UnknownHostException: kafka.
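As a minimal sketch (property names per the Spring Boot reference linked above; the values are taken from your posted files), moving the bootstrap servers up one level covers the producer, consumer, and admin clients at once:
spring:
  kafka:
    bootstrap-servers: kafka:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: stock
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
If you also need host access, a common approach (a sketch only; the PLAINTEXT_HOST listener name and port 29092 are illustrative choices, not from the question) is to advertise a second listener on the broker:
    ports:
      - "29092:29092"
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
Containers then connect via kafka:9092 while processes on the host use localhost:29092.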

When I use spring-cloud-stream to send rabbitmq messages, I cannot specify the RoutingKey sent

I use version 3.1.3. With the following configuration, 'output-out-0.producer.bindingRoutingKey' does not take effect. When I send a message, the routing key is command_exchange_open instead of ORDER_PUSH:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>
spring:
  rabbitmq:
    addresses: amqp://sycx:sycx#192.168.1.204
  cloud:
    stream:
      rabbit:
        bindings:
          input-in-0:
            consumer:
              bindingRoutingKey: ORDER_PUSH
              exchangeType: direct
              queueNameGroupOnly: true
          output-out-0:
            producer:
              bindingRoutingKey: ORDER_PUSH
              queueNameGroupOnly: true
              bindQueue: false
      bindings:
        input-in-0:
          destination: command_exchange_open
          group: ORDER_END
        output-out-0:
          destination: command_exchange_open
          group: ORDER_END
      function:
        definition: input;output
The producer property should be routing-key-expression: '''ORDER_PUSH''', not bindingRoutingKey. Your posted section:
output-out-0:
  producer:
    bindingRoutingKey: ORDER_PUSH
    queueNameGroupOnly: true
    bindQueue: false
These properties do not apply to producers (unless you have required-groups set).
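A minimal sketch of the corrected producer binding, assuming everything else in the posted configuration stays the same:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          output-out-0:
            producer:
              routing-key-expression: '''ORDER_PUSH'''
The triple quotes make ORDER_PUSH a literal string in the SpEL expression rather than a reference to a message header or payload field.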

K8s Spring Application cannot connect to Mysql DB

I wanted to try deploying my Spring Boot application on Kubernetes. I set up a test environment with microk8s (dns, storage, ingress enabled), which consists of a pod running the application itself and a pod with the MySQL database. Each pod has its own service and runs in the same default namespace. The YAML files can be seen below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: test-app
          image: myImage
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
          env:
            - name: SPRING_APPLICATION_JSON
              valueFrom:
                configMapKeyRef:
                  name: spring-config
                  key: app-config.json
Application Service:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
Mysql Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-server
  labels:
    # app: mysql
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
        - name: mysql-persistent-volume-storage
          persistentVolumeClaim:
            claimName: mysql-pvc-claim
      containers:
        - name: mysql
          image: mysql
          volumeMounts:
            - name: mysql-persistent-volume-storage
              mountPath: /var/lib/mysql
              subPath: mysql-server
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: pass_root
            - name: MYSQL_USER
              value: user
            - name: MYSQL_PASSWORD
              value: pass
            - name: MYSQL_DATABASE
              value: test
          ports:
            - containerPort: 3306
Mysql Service:
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
For some reason my application can't use the database. It throws this error:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[mysql-connector-java-8.0.25.jar!/:8.0.25]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:121) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-4.0.3.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-4.0.3.jar!/:na]
.......
The application.yml:
db:
  datasource:
    url: jdbc:mysql://ip/test
    user: user
    password: pass
---
spring:
  datasource:
    url: ${db.datasource.url}
    username: ${db.datasource.user}
    password: ${db.datasource.password}
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    database-platform: org.hibernate.dialect.MySQLDialect
  mvc:
    view:
      suffix: .html
  thymeleaf:
    cache: false
allowPublicKeyRetrieval: true
hibernate:
  show_sql: true
logging:
  level:
    org:
      hibernate:
        SQL: debug
I tried accessing the service from another pod in the namespace running mysql, since it has the mysql client pre-installed, and from the host. Both had access to the database. I also tried pinging the service from the pod running the application; it found the service without any problem.
Then I tried using NodePort instead of ClusterIP. Nothing changed.
I made sure the credentials are correct.
Finally, I tried removing and adding the port in the application.yml.
I am completely stuck and I have no idea what the problem is. Any help would be appreciated.
I spot two problems in your configuration. The first is that the username/password in your mysql deployment does not match the values in your application.yaml.
The other is that you use a property that Spring Boot does not use by default, and I assume you have no special logic to handle it: "spring.data.url" should be "spring.datasource.url".
Note that it should be "spring.datasource.username", not "user", as well.
Referring to the 'db' section in your YAML: in general I would not recommend keeping a separate section for the database credentials and referencing it through variables.
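A minimal sketch of the corrected datasource section, assuming the rest of the setup stays as posted; the host name db-service comes from the MySQL Service defined above and replaces the "ip" placeholder in the URL:
spring:
  datasource:
    url: jdbc:mysql://db-service:3306/test
    username: user
    password: pass
    driver-class-name: com.mysql.cj.jdbc.Driver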

spring-cloud-stream-binder-kafka configuration for Confluent Cloud Schema Registry Unauthorized error

I'm having trouble configuring a connection to Confluent when using spring-cloud-stream-binder-kafka. Perhaps somebody can see what is wrong?
When I use the example from https://www.confluent.io/blog/schema-registry-avro-in-spring-boot-application-tutorial/
it works fine and I can see messages on Confluent Cloud.
However, when I add the same connection details using the spring-cloud-stream-binder-kafka config, it returns an Unauthorized error.
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"MySchema","namespace":"org.test","fields":[{"name":"value","type":"double"}]}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unauthorized; error code: 401
My configuration below gives the above error; I'm not sure what is going wrong.
cloud:
  stream:
    default:
      producer:
        useNativeEncoding: true
    kafka:
      binder:
        brokers: myinstance.us-east1.gcp.confluent.cloud:9092
        producer-properties:
          key.serializer: org.apache.kafka.common.serialization.StringSerializer
          value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
          schema.registry.url: https://myinstance.us-central1.gcp.confluent.cloud
          basic.auth.credentials.source: USER_INFO
          schema.registry.basic.auth.user.info: mySchemaKey:mySchemaSecret
        configuration:
          ssl.endpoint.identification.algorithm: https
          sasl.mechanism: PLAIN
          request.timeout.ms: 20000
          retry.backoff.ms: 500
          sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="myKey" password="MySecret";
          security.protocol: SASL_SSL
    bindings:
      normals-out:
        destination: normals
        contentType: application/*+avro
Example from Confluent that is working fine:
kafka:
  bootstrap-servers:
    - myinstance.us-east1.gcp.confluent.cloud:9092
  properties:
    ssl.endpoint.identification.algorithm: https
    sasl.mechanism: PLAIN
    request.timeout.ms: 20000
    retry.backoff.ms: 500
    sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="myKey" password="MySecret";
    security.protocol: SASL_SSL
    schema.registry.url: https://myinstance.us-central1.gcp.confluent.cloud
    basic.auth.credentials.source: USER_INFO
    schema.registry.basic.auth.user.info: mySchemaKey:mySchemaSecret
  producer:
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
  template:
    default-topic:
logging:
  level:
    root: info
My issue was only that I was missing a dependency in my pom:
I should delete my question, but I leave it here as a reference that the configuration above does actually work as posted.
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-client</artifactId>
    <version>5.3.0</version>
</dependency>

Zuul proxy not routing

While working on Spring microservices, I am not able to route an API through the Zuul proxy.
This is my code:
eureka:
application.yml
spring:
  application:
    name: api
  cloud:
    config:
      enabled: true
server:
  port: ${PORT:8761}
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
  instance:
    hostname: localhost
zuul:
application.yml
spring:
  application:
    name: proxy-server
server:
  port: 8079
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka
    fetchRegistry: true
zuul:
  ignored-services: '*'
  prefix: /api
  routes:
    account:
      path: /account/**
      serviceId: account
      stripPrefix: false
  host:
    socket-timeout-millis: 30000
ribbon:
  eureka:
    enabled: true
account
application.yml
ribbon:
  eureka:
    enabled: true
eureka:
  instance:
    preferIpAddress: true
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8761/eureka}
    instance:
      preferIpAddress: true
dependency:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Finchley.SR1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
Now the URL localhost:8080/user/ works fine, but localhost:8080/api/account/user/ throws a 404.
I'm not sure what I'm doing wrong here; any insight will be helpful. Please let me know if you need other details.
What you need to do is:
Generate a Spring Boot project with Zuul as a dependency.
Annotate the main class with @EnableZuulProxy.
Configure routes and endpoints in the application.properties file.
Build and start the service.
I suggest implementing the above steps first and making them work (a minimal sketch of the annotated main class follows below), then customizing that project as per your needs. The article below explains these steps in more detail:
How to build an API-Gateway with Netflix Zuul + Spring Boot
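For illustration, a minimal sketch of the annotated main class, assuming spring-cloud-starter-netflix-zuul is on the classpath (the class name GatewayApplication is just a placeholder):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Entry point of the gateway service; @EnableZuulProxy enables
// Zuul's proxy filters and the route configuration from application.yml.
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}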
I think you forgot to add the name of the account service in the account service's application.yml:
spring:
  application:
    name: account
