I have a problem with a common task and I can't find any solutions or help (maybe there are some properties I need to pass for this to work?).
I use the local server 1.3.0.M2 and created a simple stream:
dataflow:>stream create --name test --definition ":bosstds > log" --deploy
In the log I got this:
2017-09-28 12:31:00.644 INFO 5156 --- [ -C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group test with generation 1
2017-09-28 12:31:00.646 INFO 5156 --- [ -C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [bosstds-0] for group test
2017-09-28 12:31:00.671 INFO 5156 --- [ -C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$3 : partitions assigned:[bosstds-0]
2017-09-28 12:37:08.898 ERROR 5156 --- [ -L-1] o.s.c.s.b.k.KafkaMessageChannelBinder : Could not convert message: 74657374
java.lang.StringIndexOutOfBoundsException: String index out of range: 103
at java.lang.String.checkBounds(String.java:385) ~[na:1.8.0_144]
at java.lang.String.<init>(String.java:425) ~[na:1.8.0_144]
at org.springframework.cloud.stream.binder.EmbeddedHeaderUtils.oldExtractHeaders(EmbeddedHeaderUtils.java:154) ~[spring-cloud-stream-1.3.0.M2.jar!/:1.3.0.M2]
at org.springframework.cloud.stream.binder.EmbeddedHeaderUtils.extractHeaders(EmbeddedHeaderUtils.java:115) ~[spring-cloud-stream-1.3.0.M2.jar!/:1.3.0.M2]
The message is produced with kafka-console-producer.sh --broker-list localhost:9092 --topic bosstds, simply sending the line "test".
Any suggestions?
SCS embeds headers on Kafka; in order to get this working, set the header mode to raw. You need to do that when interfacing with external apps that don't use SCS.
Thanks for the help. I fixed this with:
--spring.cloud.stream.bindings.input.content-type=text/plain
--spring.cloud.stream.bindings.input.consumer.headerMode=raw
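For reference, the same properties can presumably also be passed to the log app inline in the stream definition instead of as startup arguments (assuming the usual SCDF app-property syntax; not verified on 1.3.0.M2):
dataflow:>stream create --name test --definition ":bosstds > log --spring.cloud.stream.bindings.input.content-type=text/plain --spring.cloud.stream.bindings.input.consumer.headerMode=raw" --deploy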
We are experiencing this issue in the dev environment, which was previously working well. In the local environment the application runs without starting an idempotent producer, and that is the expected behavior here.
Issue: sometimes an idempotent producer starts automatically when the Spring Boot application is built and deployed, and the consumer then fails to consume the messages produced by the actual producer.
Adding a snippet of relevant log info:
2022-07-05 15:17:54.449 WARN 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Destination resolver returned non-existent partition consumer-topic.DLT-1, KafkaProducer will determine partition to use for this topic
2022-07-05 15:17:54.853 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [kafka server urls]
buffer.memory = 33554432
.
.
.
2022-07-05 15:17:55.047 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Instantiated an idempotent producer.
2022-07-05 15:17:55.347 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.0.1
2022-07-05 15:17:55.348 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId:
2022-07-05 15:17:57.162 INFO 7 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: XFGlM9HVScGD-PafRlFH7g
2022-07-05 15:17:57.169 INFO 7 --- [ad | producer-1] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-1] ProducerId set to 6013 with epoch 0
2022-07-05 15:18:56.681 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='byte[63]' to topic consumer-topic.DLT:
org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
2022-07-05 15:18:56.748 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Dead-letter publication to consumer-topic.DLTfailed for: consumer-topic-1#28
org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:660) ~[spring-kafka-2.8.5.jar!/:2.8.5]
.
.
.
2022-07-05 15:18:56.751 ERROR 7 --- [ntainer#0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (consumer-topic-1#28) failed
2022-07-05 15:18:56.758 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-consumer-topic-group-test-1, groupId=consumer-topic-group-test] Seeking to offset 28 for partition c-1
2022-07-05 15:18:56.761 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : Error handler threw an exception
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is java.lang.IllegalStateException: No type information in headers and no default type provided
As we can see in the logs, the application started an idempotent producer automatically, and after that it started throwing errors.
Context: We have two microservices; one publishes the messages and contains the producer config, while the second only consumes the messages and does not contain any producer config.
The YML configuration for the producer application:
kafka:
  bootstrap-servers: "kafka urls"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  producer:
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    properties:
      spring.json.trusted.packages: "*"
    acks: 1
The YML configuration for the consumer application:
kafka:
  bootstrap-servers: "kafka URL, kafka url2"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  consumer:
    enable-auto-commit: true
    auto-offset-reset: latest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    properties:
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
      spring.json.trusted.packages: "*"
consumer-topic:
  topic: consumer-topic
  group-abc:
    group-id: consumer-topic-group-abc
The Kafka bean for the default error handler:
@Bean
public CommonErrorHandler errorHandler(KafkaOperations<Object, Object> kafkaOperations) {
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(kafkaOperations));
}
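As a side note, here is a minimal sketch (not our actual code; the class name, replication factor and topic bean are assumptions) of how this bean could be extended so the dead-letter publication above stops failing: the consumer-topic.DLT topic is declared explicitly, since the TimeoutException shows it does not exist, and the destination resolver returns a negative partition so the KafkaProducer chooses one instead of the non-existent consumer-topic.DLT-1:
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.TopicPartition;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;

@Configuration
public class DeadLetterConfig {

    // Declare the DLT so the recoverer does not wait 60s for metadata of a
    // topic that was never created (1 partition / 1 replica is an assumption;
    // relies on Boot's auto-configured KafkaAdmin to create it).
    @Bean
    public NewTopic consumerTopicDlt() {
        return TopicBuilder.name("consumer-topic.DLT").partitions(1).replicas(1).build();
    }

    @Bean
    public CommonErrorHandler errorHandler(KafkaOperations<Object, Object> kafkaOperations) {
        // A negative partition tells the recoverer to let the producer pick
        // the partition instead of resolving <topic>.DLT-<original partition>.
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
                kafkaOperations,
                (record, ex) -> new TopicPartition(record.topic() + ".DLT", -1));
        return new DefaultErrorHandler(recoverer);
    }
}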
We know a temporary fix: if we delete the group id and recreate it, the application works successfully. But after some deployments the issue comes back, and we don't know the root cause.
Please guide.
I have a single-step Spring Batch application. The job is as follows:
@Bean
public Job databaseCursorJob(@Qualifier("databaseCursorStep") Step exampleJobStep,
        JobBuilderFactory jobBuilderFactory) {
    return jobBuilderFactory.get("databaseCursorJob")
            .incrementer(new RunIdIncrementer())
            .flow(exampleJobStep)
            .end()
            .build();
}
I start the job from a Spring Boot application. This afternoon, I attempted to add a second step to the job, essentially as follows:
@Bean
public Job databaseCursorJob(@Qualifier("databaseCursorStep") Step exampleJobStep,
        JobBuilderFactory jobBuilderFactory) {
    return jobBuilderFactory.get("databaseCursorJob")
            .incrementer(new RunIdIncrementer())
            .flow(exampleJobStep).next(partitionStep())
            .end()
            .build();
}
In other words, I just added the ".next(partitionStep())". However, ever since I did this, the job finishes without executing any step (see shell output below). In fact, even after removing the second step and going back to the original job, it refuses to execute the step. Before attempting to add the second step, I never once encountered this problem. I have gone so far as restarting my VM and it still skips the step. I am rather dead in the water until I resolve this. Grateful for any insights, thanks.
2020-09-01 14:49:00.260 INFO 6913 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8087 (http) with context path ''
2020-09-01 14:49:00.263 INFO 6913 --- [ main] f.p.r.Application : Started Application in 7.752 seconds (JVM running for 9.092)
2020-09-01 14:49:00.268 INFO 6913 --- [ main] o.s.b.a.b.JobLauncherCommandLineRunner : Running default command line with: []
2020-09-01 14:49:00.579 INFO 6913 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=databaseCursorJob]] launched with the following parameters: [{}]
2020-09-01 14:49:00.698 INFO 6913 --- [ main] o.s.batch.core.job.SimpleStepHandler : Step already complete or not restartable, so no action to execute: StepExecution: id=120, version=4, name=databaseCursorStep, status=COMPLETED, exitStatus=COMPLETED, readCount=1, filterCount=0, writeCount=1 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=2, rollbackCount=0, exitDescription=
2020-09-01 14:49:00.730 INFO 6913 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=databaseCursorJob]] completed with the following parameters: [{}] and the following status: [COMPLETED]
My issue was that my job had no way to recover if there was an error or if it got stuck in an unknown state. The step was not "already complete"; it never completed. Its status was still "STARTED" and its exit code "UNKNOWN", because it never exited. Anyway, my job repository is not in memory but captured in a local DB, which is why it never resolved itself even after restarting the VM (shame on me for not remembering this). So I was able to fix it by wiping out the job instance history, but that was a band-aid; I still have to fix my code to prevent it from happening again.
I also learned I could diagnose this by examining the job repository in the database (it's all there).
I really resolved this thanks to Mr. Hassine, who responded above several times and pointed me in the right direction. The solution to prevent this in the future is indeed addressed in the link he provided in his first response: Spring Batch error (A Job Instance Already Exists) and RunIdIncrementer generates only once
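For anyone else diagnosing the same thing, here is a minimal sketch (illustrative class and method names) of inspecting the job repository programmatically with Spring Batch's JobExplorer instead of querying the tables by hand; a stuck execution shows up with status STARTED and exit code UNKNOWN:
import java.util.List;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.explore.JobExplorer;

public class JobRepositoryInspector {

    private final JobExplorer jobExplorer;

    public JobRepositoryInspector(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    // Print the most recent executions of databaseCursorJob and their statuses.
    public void printRecentExecutions() {
        List<JobInstance> instances = jobExplorer.getJobInstances("databaseCursorJob", 0, 5);
        for (JobInstance instance : instances) {
            for (JobExecution execution : jobExplorer.getJobExecutions(instance)) {
                System.out.printf("instance=%d execution=%d status=%s exitCode=%s%n",
                        instance.getInstanceId(), execution.getId(),
                        execution.getStatus(), execution.getExitStatus().getExitCode());
            }
        }
    }
}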
I am using Zipkin for distributed tracing. The problem is that when I test Zipkin by sending 10 requests at a time to a service from another service in a loop and then check the UI, I only got the logs for 2 of them, i.e. the first and the last; I haven't received logs for the remaining requests. Can you help me figure out what the problem is? Trace ids and span ids are generated for all the requests, but I am unable to see the logs for them. Logs that are received:
2020-03-04 17:38:57.379 INFO [,7c8075c14691f988,43521ecc69b84d84,true] 10576 --- [nio-8081-exec-7] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0121,
2020-03-04 17:38:57.438 INFO [,7552e8c3d87d013a,89769451aafec094,false] 10576 --- [nio-8081-exec-8] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0122,
2020-03-04 17:38:57.519 INFO [,79f38c25211dfab8,49ea12575eab0bcf,false] 10576 --- [nio-8081-exec-2] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0123,
2020-03-04 17:38:57.626 INFO [,294da34664fac032,ad98ed1fbce485df,false] 10576 --- [io-8081-exec-10] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0124,
2020-03-04 17:38:57.879 INFO [,8763a2ca3d6dfc44,9871d046cd7eacf1,false] 10576 --- [nio-8081-exec-1] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0125,
2020-03-04 17:38:57.923 INFO [,be1e3a490e114e92,2435ee34d215459c,false] 10576 --- [nio-8081-exec-6] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0126,
2020-03-04 17:38:57.980 INFO [,21855ca20670de31,6213a3fdc0a23189,false] 10576 --- [nio-8081-exec-3] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0127,
2020-03-04 17:38:58.043 INFO [,4d9795e7d2dbf50c,21f83b3384381833,false] 10576 --- [nio-8081-exec-4] c.i.f.service.ProducerServiceImpl : Receive
Log format is: [application name, traceId, spanId, export]
So the last true/false value is actually the export flag, which means:
Export – this boolean indicates whether or not this log was exported to an aggregator like Zipkin. Zipkin is beyond the scope of this article but plays an important role in analyzing logs created by Sleuth.
As the export values are false, Zipkin is not receiving those spans. That is expected behavior.
Root cause: some of the logs are exported and some are not because of the sampler rate; not all of the logs are meant to be sent. If you want all of them to be sampled, try adding this property:
spring.sleuth.sampler.probability=1.0
Reference: sleuth-not-sending-trace-information-to-zipkin
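For completeness, the same thing can be expressed as a Sampler bean instead of the property (a sketch assuming Spring Cloud Sleuth 2.x with the Brave bridge on the classpath):
import brave.sampler.Sampler;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SamplerConfig {

    // Sample every trace so all requests are reported to Zipkin;
    // lower the rate in production to limit reporting overhead.
    @Bean
    public Sampler defaultSampler() {
        return Sampler.ALWAYS_SAMPLE;
    }
}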
I am using Spring Boot and want to read data from Kafka in batches. My application.yml looks like this:
spring:
  kafka:
    bootstrap-servers:
      - localhost:9092
    properties:
      schema.registry.url: http://localhost:8081
    consumer:
      auto-offset-reset: earliest
      max-poll-records: 50000
      enable-auto-commit: true
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
      group-id: "batch"
      properties:
        fetch.min.bytes: 1000000
        fetch.max.wait.ms: 20000
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
    listener:
      type: batch
My listener:
@KafkaListener(id = "bar2", topics = "TestTopic")
public void listen(List<ConsumerRecord<String, GenericRecord>> records) {
    log.info("start of batch receive. Size::{}", records.size());
}
In the log I see:
2019-10-04 11:08:19.693 INFO 2123 --- [ bar2-0-C-1] kafka.batch.demo.DemoApplication : start of batch receive. Size::33279
2019-10-04 11:08:19.746 INFO 2123 --- [ bar2-0-C-1] kafka.batch.demo.DemoApplication : start of batch receive. Size::33353
2019-10-04 11:08:19.784 INFO 2123 --- [ bar2-0-C-1] kafka.batch.demo.DemoApplication : start of batch receive. Size::33400
2019-10-04 11:08:19.821 INFO 2123 --- [ bar2-0-C-1] kafka.batch.demo.DemoApplication : start of batch receive. Size::33556
2019-10-04 11:08:39.859 INFO 2123 --- [ bar2-0-C-1] kafka.batch.demo.DemoApplication : start of batch receive. Size::16412
I set the required settings, fetch.min.bytes and fetch.max.wait.ms, but they have no effect: in the log I see that a batch never exceeds about 33 thousand records, whatever the settings. I have racked my brain and I can't understand why this is happening.
max.poll.records is simply a maximum.
There are other properties that influence how many records you get:
fetch.min.bytes - The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.
fetch.max.wait.ms- The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
See the documentation.
There is no way to exactly control the minimum number of records (unless they are all identical in length).
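For reference, here is a sketch of the same fetch tuning expressed with the ConsumerConfig constants; max.partition.fetch.bytes is included as an assumption worth checking, because the per-partition fetch size (1 MB by default) can also cap how much data a single poll returns. The values are illustrative, not a recommendation:
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class BatchFetchProperties {

    // Extra consumer properties that influence batch size.
    public static Map<String, Object> fetchTuning() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50000);               // upper bound only
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1_000_000);            // wait for ~1 MB
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 20_000);             // or 20 s, whichever comes first
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 10_000_000); // per-partition cap
        return props;
    }
}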
Here is my part of the application properties:
spring.cloud.stream.rabbit.bindings.studentInput.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.studentInput.consumer.delayed-exchange=true
But it appears in the RabbitMQ admin page that it does not have x-delayed-type: direct in the Args feature of my queue. I am referring to this Spring Cloud Stream documentation: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/
What am I doing wrong? Thanks in advance :D
I just tested it and it worked fine.
Did you enable the plugin? If not, you should see this in the log...
2018-07-09 08:52:04.173 ERROR 156 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error; protocol method: #method(reply-code=503, reply-text=COMMAND_INVALID - unknown exchange type 'x-delayed-message', class-id=40, method-id=10)
See the plugin documentation.
Another possibility is the exchange already existed. Exchange configuration is immutable; you will see a message like this...
2018-07-09 09:04:43.202 ERROR 3309 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'type' for exchange 'so51244078' in vhost '/': received ''x-delayed-message'' but current is 'direct', class-id=40, method-id=10)
In this case you have to delete the exchange first.
By the way, you will need a routing key too; by default the queue will be bound with the topic exchange wildcard #.
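For example, the binding routing key for the studentInput consumer can presumably be set alongside the other binder properties (property name taken from the RabbitMQ binder documentation; the key value student.create is just a placeholder):
spring.cloud.stream.rabbit.bindings.studentInput.consumer.binding-routing-key=student.create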