Spring Integration Kafka adapter not producing messages

I have been struggling with this for days now.
I am using the SI adapter for Kafka under a Spring Boot container.
I have configured ZooKeeper and Kafka on my machine.
I also created a console producer and consumer and tested that everything works fine (I manage to produce messages via the console and have the console consumer consume them).
I then tried to produce messages via the Spring Integration Kafka outbound adapter, but the console consumer won't consume the message.
SI/Spring XD XML:
<int:publish-subscribe-channel id="inputToKafka"/>

<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
                                    kafka-producer-context-ref="kafkaProducerContext"
                                    auto-startup="true"
                                    order="1"
                                    channel="inputToKafka">
</int-kafka:outbound-channel-adapter>

<int-kafka:producer-context id="kafkaProducerContext">
    <int-kafka:producer-configurations>
        <int-kafka:producer-configuration broker-list="localhost:9092"
                                          async="true"
                                          topic="zerg.hydra"
                                          compression-codec="default"/>
    </int-kafka:producer-configurations>
</int-kafka:producer-context>

<task:executor id="taskExecutor" pool-size="5" keep-alive="120" queue-capacity="500"/>
</beans>
Java:
@Named
public class KafkaProducer {

    @Autowired
    @Qualifier("inputToKafka")
    MessageChannel inputToKafka;

    public void sendMessageToKafka(String message) {
        inputToKafka.send(
                MessageBuilder.withPayload(message)
                        .setHeader("messageKey", "3")
                        .setHeader("topic", "zerg.hydra")
                        .build());
    }
}
This is how I run the Kafka console consumer:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic zerg.hydra --from-beginning
logs:
Testing started at 12:49 PM ...
12:49:54 PM: Executing external tasks 'cleanTest test'...
:cleanTest
:compileJava
:processResources UP-TO-DATE
:classes
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test
12:50:07,165 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
12:50:07,166 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
12:50:07,166 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/idan/dev/Projects/CalcMicroService/build/resources/test/logback.xml]
12:50:07,167 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs multiple times on the classpath.
12:50:07,167 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [file:/Users/idan/dev/Projects/CalcMicroService/build/resources/test/logback.xml]
12:50:07,167 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [file:/Users/idan/dev/Projects/CalcMicroService/build/resources/main/logback.xml]
12:50:07,247 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
12:50:07,250 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
12:50:07,259 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [stdout]
12:50:07,281 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
12:50:07,327 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [reactor] to INFO
12:50:07,327 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.projectreactor] to INFO
12:50:07,328 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework] to WARN
12:50:07,328 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework.integration] to DEBUG
12:50:07,328 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
12:50:07,328 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [stdout] to Logger[ROOT]
12:50:07,331 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
12:50:07,332 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#3de433b4 - Registering current configuration as safe fallback point
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.1.9.RELEASE)
12:50:09.530 [Test worker] INFO o.s.i.config.IntegrationRegistrar - No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
12:50:09.544 [Test worker] DEBUG o.s.i.config.IntegrationRegistrar - SpEL function '#xpath' isn't registered: there is no spring-integration-xml.jar on the classpath.
12:50:10.717 [Test worker] INFO o.s.i.c.DefaultConfiguringBeanFactoryPostProcessor - No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
12:50:10.719 [Test worker] INFO o.s.i.c.DefaultConfiguringBeanFactoryPostProcessor - No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
12:50:10.973 DEBUG [Test worker][org.jboss.logging] Logging Provider: org.jboss.logging.Log4jLoggerProvider
12:50:10.974 INFO [Test worker][org.hibernate.validator.internal.util.Version] HV000001: Hibernate Validator 5.0.3.Final
12:50:10.991 DEBUG [Test worker][org.hibernate.validator.internal.engine.resolver.DefaultTraversableResolver] Cannot find javax.persistence.Persistence on classpath. Assuming non JPA 2 environment. All properties will per default be traversable.
12:50:11.006 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom MessageInterpolator of type org.springframework.validation.beanvalidation.LocaleContextMessageInterpolator
12:50:11.011 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ConstraintValidatorFactory of type org.springframework.validation.beanvalidation.SpringConstraintValidatorFactory
12:50:11.018 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ParameterNameProvider of type com.sun.proxy.$Proxy46
12:50:11.024 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] Trying to load META-INF/validation.xml for XML based Validator configuration.
12:50:11.032 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] No META-INF/validation.xml found. Using annotation based configuration only.
12:50:12.089 [Test worker] INFO o.a.catalina.core.StandardService - Starting service Tomcat
12:50:12.091 [Test worker] INFO o.a.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/7.0.56
12:50:14.162 [localhost-startStop-1] INFO o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
12:50:16.567 [Test worker] INFO o.s.i.k.support.ProducerFactoryBean - Using producer properties => {metadata.broker.list=localhost:9092, compression.codec=0, producer.type=async}
12:50:17.036 INFO [Test worker][kafka.utils.VerifiableProperties] Verifying properties
12:50:17.096 INFO [Test worker][kafka.utils.VerifiableProperties] Property compression.codec is overridden to 0
12:50:17.096 INFO [Test worker][kafka.utils.VerifiableProperties] Property metadata.broker.list is overridden to localhost:9092
12:50:17.096 INFO [Test worker][kafka.utils.VerifiableProperties] Property producer.type is overridden to async
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.resolver.DefaultTraversableResolver] Cannot find javax.persistence.Persistence on classpath. Assuming non JPA 2 environment. All properties will per default be traversable.
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom MessageInterpolator of type org.springframework.validation.beanvalidation.LocaleContextMessageInterpolator
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ConstraintValidatorFactory of type org.springframework.validation.beanvalidation.SpringConstraintValidatorFactory
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ParameterNameProvider of type com.sun.proxy.$Proxy46
12:50:17.592 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] Trying to load META-INF/validation.xml for XML based Validator configuration.
12:50:17.592 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] No META-INF/validation.xml found. Using annotation based configuration only.
12:50:18.967 [Test worker] DEBUG o.s.i.c.GlobalChannelInterceptorProcessor - No global channel interceptors.
12:50:18.978 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {mongo:outbound-channel-adapter:mongoAdapter.adapter} as a subscriber to the 'mongoAdapter' channel
12:50:18.979 [Test worker] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.mongoAdapter' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started mongoAdapter.adapter
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {mongo:outbound-channel-adapter:adapterWithConverter.adapter} as a subscriber to the 'adapterWithConverter' channel
12:50:18.979 [Test worker] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.adapterWithConverter' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started adapterWithConverter.adapter
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {message-handler:kafkaOutboundChannelAdapter} as a subscriber to the 'inputToKafka' channel
12:50:18.979 [Test worker] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.inputToKafka' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started kafkaOutboundChannelAdapter
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
12:50:18.979 [Test worker] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.errorChannel' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started _org.springframework.integration.errorLogger
12:50:19.026 [Test worker] INFO o.a.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8091"]
12:50:19.040 [Test worker] INFO o.a.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8091"]
12:50:19.047 [Test worker] INFO o.a.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
12:50:19.338 [Test worker] DEBUG o.s.i.c.PublishSubscribeChannel - preSend on channel 'inputToKafka', message: [Payload String content=Hello Kafka From SI][Headers={messageKey=3, topic=zerg.hydra, id=be46fb68-c762-f16e-6ccb-6841ef3fe868, timestamp=1416999019338}]
12:50:19.338 [Test worker] DEBUG o.s.i.k.o.KafkaProducerMessageHandler - org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler#0 received message: [Payload String content=Hello Kafka From SI][Headers={messageKey=3, topic=zerg.hydra, id=be46fb68-c762-f16e-6ccb-6841ef3fe868, timestamp=1416999019338}]
12:50:19.362 [Test worker] DEBUG o.s.i.c.PublishSubscribeChannel - postSend (sent=true) on channel 'inputToKafka', message: [Payload String content=Hello Kafka From SI][Headers={messageKey=3, topic=zerg.hydra, id=be46fb68-c762-f16e-6ccb-6841ef3fe868, timestamp=1416999019338}]
Empty test suite.
12:50:19.402 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {mongo:outbound-channel-adapter:mongoAdapter.adapter} as a subscriber to the 'mongoAdapter' channel
12:50:19.417 [Thread-4] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.mongoAdapter' has 0 subscriber(s).
12:50:19.419 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped mongoAdapter.adapter
12:50:19.420 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {mongo:outbound-channel-adapter:adapterWithConverter.adapter} as a subscriber to the 'adapterWithConverter' channel
12:50:19.422 [Thread-4] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.adapterWithConverter' has 0 subscriber(s).
12:50:19.422 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped adapterWithConverter.adapter
12:50:19.423 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {message-handler:kafkaOutboundChannelAdapter} as a subscriber to the 'inputToKafka' channel
12:50:19.424 [Thread-4] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.inputToKafka' has 0 subscriber(s).
12:50:19.424 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped kafkaOutboundChannelAdapter
12:50:19.425 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
12:50:19.426 [Thread-4] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.errorChannel' has 0 subscriber(s).
12:50:19.426 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped _org.springframework.integration.errorLogger
BUILD SUCCESSFUL
Total time: 25.469 secs
12:50:20 PM: External tasks execution finished 'cleanTest test'.
In the same app I tried the official Kafka client producer, and it worked just fine:
@Named
public class KafkaProducerJava {

    ProducerConfig config = null;
    Producer<String, String> producer;
    Properties props = null;

    @PostConstruct
    public void init() {
        props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // props.put("partitioner.class", "example.producer.SimplePartitioner");
        props.put("request.required.acks", "1");
    }

    public void sendMsgToKafka(String msg) {
        config = new ProducerConfig(props);
        producer = new Producer<String, String>(config);
        KeyedMessage<String, String> data = new KeyedMessage<String, String>("test", "", msg);
        producer.send(data);
        producer.close();
    }
}
Any idea why the message never reached my consumer via the Spring Integration Kafka adapter?

I just ran it in XD and it worked fine for me...
$ bin/xd-shell
_____ __ _______
/ ___| (-) \ \ / / _ \
\ `--. _ __ _ __ _ _ __ __ _ \ V /| | | |
`--. \ '_ \| '__| | '_ \ / _` | / ^ \| | | |
/\__/ / |_) | | | | | | | (_| | / / \ \ |/ /
\____/| .__/|_| |_|_| |_|\__, | \/ \/___/
| | __/ |
|_| |___/
eXtreme Data
1.1.0.BUILD-SNAPSHOT | Admin Server Target: http://localhost:9393
Welcome to the Spring XD shell. For assistance hit TAB or type "help".
xd:>stream create --name foo --definition "time | kafka --topic=test" --deploy
Created and deployed new stream 'foo'
xd:>stream destroy foo
Destroyed stream 'foo'
xd:>
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
2014-11-26 10:03:09
2014-11-26 10:03:10
2014-11-26 10:03:11
2014-11-26 10:03:12
2014-11-26 10:03:13
I also wrote this test case...
public class OutboundTests {

    @Test
    public void test() throws Exception {
        KafkaProducerContext<String, String> kafkaProducerContext = new KafkaProducerContext<String, String>();
        ProducerMetadata<String, String> producerMetadata = new ProducerMetadata<String, String>("test");
        producerMetadata.setValueClassType(String.class);
        producerMetadata.setKeyClassType(String.class);
        Encoder<String> encoder = new StringEncoder<String>();
        producerMetadata.setValueEncoder(encoder);
        producerMetadata.setKeyEncoder(encoder);
        ProducerFactoryBean<String, String> producer = new ProducerFactoryBean<String, String>(producerMetadata, "localhost:9092");
        ProducerConfiguration<String, String> config = new ProducerConfiguration<String, String>(producerMetadata, producer.getObject());
        kafkaProducerContext.setProducerConfigurations(Collections.singletonMap("test", config));
        KafkaProducerMessageHandler<String, String> handler = new KafkaProducerMessageHandler<String, String>(kafkaProducerContext);
        handler.handleMessage(MessageBuilder.withPayload("foo")
                .setHeader("messagekey", "3")
                .setHeader("topic", "test")
                .build());
    }
}
...to simulate what you are doing and that worked too. Are you seeing any logged exceptions? I see you are not setting the key/value types and encoders.
EDIT:
As discussed in the comments, the issue is that you are using an async producer. In your test example that "works", you are closing the producer, which flushes the queue. By default, the queue won't be flushed for 5 seconds and your test case is not waiting long enough.
I updated my test to include an XML-configured version, reduced queue.buffering.max.ms to 500ms, and added code to the test to wait a couple of seconds before terminating.
See the new commit for details.
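For illustration, a minimal sketch of that kind of fix, using the same old-style producer classes (kafka.javaapi.producer.Producer etc.) as the question's working example; the 500ms value and the extra wait are the changes described above, the rest of the test setup is assumed:
Properties props = new Properties();
props.put("metadata.broker.list", "localhost:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("producer.type", "async");
// flush the async queue every 500ms instead of the 5s default
props.put("queue.buffering.max.ms", "500");
Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
producer.send(new KeyedMessage<String, String>("test", "3", "foo"));
// give the async sender time to flush before the test terminates
Thread.sleep(2000);
producer.close();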

I was facing the same issue since this morning and finally found the solution. Prefix the topic header name with kafka_, as given below:
.setHeader("kafka_topic", "zerg.hydra")
This is an issue with the API: internally, while looking up the key from the message header, kafka_ is prefixed.
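Applied to the producer from the question, the send becomes (a sketch; kafka_messageKey is assumed to follow the same prefix rule as kafka_topic):
inputToKafka.send(
        MessageBuilder.withPayload(message)
                .setHeader("kafka_messageKey", "3")
                .setHeader("kafka_topic", "zerg.hydra")
                .build());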

Kafka Spring boot automatically starting idempotent producer in the consumer application

We are experiencing this issue in the dev environment, where things previously worked well. In the local environment the application runs without starting an idempotent producer, and that is the expected behavior here.
Issue: sometimes an idempotent producer starts automatically when the Spring Boot application comes up, and hence the consumer fails to consume the message produced by the actual producer.
Adding a snippet of relevant log info:
2022-07-05 15:17:54.449 WARN 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Destination resolver returned non-existent partition consumer-topic.DLT-1, KafkaProducer will determine partition to use for this topic
2022-07-05 15:17:54.853 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [kafka server urls]
buffer.memory = 33554432
.
.
.
2022-07-05 15:17:55.047 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Instantiated an idempotent producer.
2022-07-05 15:17:55.347 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.0.1
2022-07-05 15:17:55.348 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId:
2022-07-05 15:17:57.162 INFO 7 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: XFGlM9HVScGD-PafRlFH7g
2022-07-05 15:17:57.169 INFO 7 --- [ad | producer-1] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-1] ProducerId set to 6013 with epoch 0
2022-07-05 15:18:56.681 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='byte[63]' to topic consumer-topic.DLT:
org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
2022-07-05 15:18:56.748 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Dead-letter publication to consumer-topic.DLTfailed for: consumer-topic-1#28
org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:660) ~[spring-kafka-2.8.5.jar!/:2.8.5]
.
.
.
2022-07-05 15:18:56.751 ERROR 7 --- [ntainer#0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (consumer-topic-1#28) failed
2022-07-05 15:18:56.758 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-consumer-topic-group-test-1, groupId=consumer-topic-group-test] Seeking to offset 28 for partition c-1
2022-07-05 15:18:56.761 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : Error handler threw an exception
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is java.lang.IllegalStateException: No type information in headers and no default type provided
As we can see in the logs, the application started an idempotent producer automatically, and after it started it began throwing errors.
Context: we have two microservices; one publishes the messages and contains the producer config, the second only consumes messages and does not contain any producer config.
The yml configuration for the producer application:
kafka:
  bootstrap-servers: "kafka urls"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  producer:
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    properties:
      spring.json.trusted.packages: "*"
    acks: 1
The yml configuration for the consumer application:
kafka:
  bootstrap-servers: "kafka URL, kafka url2"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  consumer:
    enable-auto-commit: true
    auto-offset-reset: latest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    properties:
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
      spring.json.trusted.packages: "*"
consumer-topic:
  topic: consumer-topic
  group-abc:
    group-id: consumer-topic-group-abc
The Kafka bean for the default error handler:
@Bean
public CommonErrorHandler errorHandler(KafkaOperations<Object, Object> kafkaOperations) {
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(kafkaOperations));
}
We know a temporary fix: if we delete the group id and recreate it, the application works successfully. But after some deployments the issue comes back, and we don't know the root cause.
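(For reference, we delete the group with the standard Kafka CLI, roughly as below; the group id is the one from the logs above and the broker address is a placeholder:)
bin/kafka-consumer-groups.sh --bootstrap-server <kafka-url> --delete --group consumer-topic-group-test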
Please guide.

How to end a Spring @Scheduled task gracefully?

I'm trying to get a Spring Boot service to end gracefully. It has a method with an @Scheduled annotation. The service uses Spring Data for the DB and spring-cloud-stream for RabbitMQ. It's vital that the DB and RabbitMQ remain accessible until the scheduled method ends. There's an autoscaler in place which frequently starts/stops service instances; crashing while stopping is not an option.
From this post Spring - Scheduled Task - Graceful Shutdown I take that it should be enough to add
@Bean
TaskSchedulerCustomizer taskSchedulerCustomizer() {
    return taskScheduler -> {
        taskScheduler.setWaitForTasksToCompleteOnShutdown(true);
        taskScheduler.setAwaitTerminationSeconds(30);
    };
}
and Spring should wait to shut down the application until the scheduled method has finished or 30s have expired.
When the service is stopped while the scheduled method is executing, I can see the following in the log:
spring-cloud-stream closes the connections without waiting for the method to finish.
spring-data also closes the DB connection right away.
The method is not stopped and tries to finish, but fails since it cannot access the DB anymore.
Any ideas how to get the scheduled method, including DB connection and RabbitMQ access, to finish gracefully?
This is my application class:
@SpringBootApplication(scanBasePackages = {
        "xx.yyy.infop.dao",
        "xx.yyy.infop.compress"})
@EntityScan("ch.sbb.infop.common.entity")
@EnableJpaRepositories({"xx.yyy.infop.dao", "xx.yyy.infop.compress.repository"})
@EnableBinding(CompressSink.class)
@EnableScheduling
public class ApplicationCompress {

    @Value("${max.commpress.timout.seconds:300}")
    private int maxCompressTimeoutSeconds;

    public static void main(String[] args) {
        SpringApplication.run(ApplicationCompress.class, args);
    }

    @Bean
    TaskSchedulerCustomizer taskSchedulerCustomizer() {
        return taskScheduler -> {
            taskScheduler.setWaitForTasksToCompleteOnShutdown(true);
            taskScheduler.setAwaitTerminationSeconds(maxCompressTimeoutSeconds);
        };
    }
}
And this is the bean:
@Component
@Profile("!integration-test")
public class CommandReader {

    private static final Logger LOGGER = LoggerFactory.getLogger(CommandReader.class);

    private final CompressSink compressSink;
    private final CommandExecutor commandExecutor;

    CommandReader(CompressSink compressSink, CommandExecutor commandExecutor) {
        this.compressSink = compressSink;
        this.commandExecutor = commandExecutor;
    }

    @PreDestroy
    private void preDestory() {
        LOGGER.info("preDestory");
    }

    @Scheduled(fixedDelay = 1000)
    public void poll() {
        LOGGER.debug("Start polling.");
        ParameterizedTypeReference<CompressCommand> parameterizedTypeReference = new ParameterizedTypeReference<>() {
        };
        if (!compressSink.inputSync().poll(this::execute, parameterizedTypeReference)) {
            compressSink.inputAsync().poll(this::execute, parameterizedTypeReference);
        }
        LOGGER.debug("Finished polling.");
    }

    private void execute(Message<?> message) {
        CompressCommand compressCommand = (CompressCommand) message.getPayload();
        // uses spring-data to write to DB
        CompressResponse compressResponse = commandExecutor.execute(compressCommand);
        // writes the response to the response queue
        compressSink.outputResponse().send(MessageBuilder.withPayload(compressResponse).build());
    }
}
And here some lines from the log (see https://pastebin.com/raw/PmmqhH1P for the full log):
2020-05-15 11:59:35,640 [restartedMain] - INFO org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler.initialize - traceid= - Initializing ExecutorService 'taskScheduler'
2020-05-15 11:59:44,976 [restartedMain] - INFO org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.initialize - traceid= - Initializing ExecutorService 'applicationTaskExecutor'
Disconnected from the target VM, address: '127.0.0.1:52748', transport: 'socket'
2020-05-15 12:00:01,228 [SpringContextShutdownHook] - INFO org.springframework.cloud.stream.binder.BinderErrorChannel.adjustCounterIfNecessary - traceid= - Channel 'application-1.kompressSync.komprimierungSyncProcessingGroup.errors' has 1 subscriber(s).
2020-05-15 12:00:01,229 [SpringContextShutdownHook] - INFO org.springframework.cloud.stream.binder.BinderErrorChannel.adjustCounterIfNecessary - traceid= - Channel 'application-1.kompressSync.komprimierungSyncProcessingGroup.errors' has 0 subscriber(s).
2020-05-15 12:00:01,232 [SpringContextShutdownHook] - INFO org.springframework.cloud.stream.binder.BinderErrorChannel.adjustCounterIfNecessary - traceid= - Channel 'application-1.kompressAsync.komprimierungAsyncProcessingGroup.errors' has 1 subscriber(s).
2020-05-15 12:00:01,232 [SpringContextShutdownHook] - INFO org.springframework.cloud.stream.binder.BinderErrorChannel.adjustCounterIfNecessary - traceid= - Channel 'application-1.kompressAsync.komprimierungAsyncProcessingGroup.errors' has 0 subscriber(s).
2020-05-15 12:00:01,237 [SpringContextShutdownHook] - INFO org.springframework.integration.endpoint.EventDrivenConsumer.logComponentSubscriptionEvent - traceid= - Removing {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2020-05-15 12:00:01,237 [SpringContextShutdownHook] - INFO org.springframework.integration.channel.PublishSubscribeChannel.adjustCounterIfNecessary - traceid= - Channel 'application-1.errorChannel' has 0 subscriber(s).
2020-05-15 12:00:01,237 [SpringContextShutdownHook] - INFO org.springframework.integration.endpoint.EventDrivenConsumer.stop - traceid= - stopped bean '_org.springframework.integration.errorLogger'
2020-05-15 12:00:01,244 [SpringContextShutdownHook] - INFO org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.shutdown - traceid= - Shutting down ExecutorService 'applicationTaskExecutor'
2020-05-15 12:00:01,245 [SpringContextShutdownHook] - INFO yy.xxx.infop.compress.CommandReader.preDestory - traceid= - preDestory
2020-05-15 12:00:01,251 [SpringContextShutdownHook] - INFO org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.destroy - traceid= - Closing JPA EntityManagerFactory for persistence unit 'default'
2020-05-15 12:00:01,256 [SpringContextShutdownHook] - INFO org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler.shutdown - traceid= - Shutting down ExecutorService 'taskScheduler'
2020-05-15 12:00:01,256 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.doInTransactionWithoutResult - traceid=d22b696edc90e123 - 4
2020-05-15 12:00:02,257 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.doInTransactionWithoutResult - traceid=d22b696edc90e123 - 5
2020-05-15 12:00:03,258 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.doInTransactionWithoutResult - traceid=d22b696edc90e123 - 6
2020-05-15 12:00:04,260 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.doInTransactionWithoutResult - traceid=d22b696edc90e123 - 7
2020-05-15 12:00:05,260 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.doInTransactionWithoutResult - traceid=d22b696edc90e123 - 8
2020-05-15 12:00:06,261 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.doInTransactionWithoutResult - traceid=d22b696edc90e123 - 9
2020-05-15 12:00:07,262 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.doInTransactionWithoutResult - traceid=d22b696edc90e123 - end
2020-05-15 12:00:07,263 [scheduling-1] - INFO yy.xxx.infop.compress.condense.VmLaufVerdichter.verdichte - traceid=d22b696edc90e123 - VarianteTyp=G, vmId=482392382, vnNr=8416
2020-05-15 12:00:07,326 [scheduling-1] -ERROR yy.xxx.infop.compress.CommandExecutor.execute - traceid=d22b696edc90e123 -
org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'inMemoryDatabaseShutdownExecutor': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:208) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
2020-05-15 12:00:08,332 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.execute - traceid=d22b696edc90e123 - Compress started. compressCommand=yy.xxx.infop.compress.client.CompressCommand#247ec0d[hostName=K57176,jobId=b1211ee8-4a54-47f2-a58b-92b3560bbddd,cmdId=1,userId=goofy2,commandTyp=verdichtet G, T und komprimiert G, T,vmId=482392382,started=1589536752609]
2020-05-15 12:00:08,337 [scheduling-1] -ERROR yy.xxx.infop.compress.CommandExecutor.execute - traceid=d22b696edc90e123 -
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.IllegalStateException: EntityManagerFactory is closed
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:448) ~[spring-orm-5.2.4.RELEASE.jar:5.2.4.RELEASE]
2020-05-15 12:00:10,339 [scheduling-1] - INFO yy.xxx.infop.compress.CommandExecutor.execute - traceid=d22b696edc90e123 - Compress started. compressCommand=yy.xxx.infop.compress.client.CompressCommand#247ec0d[hostName=K57176,jobId=b1211ee8-4a54-47f2-a58b-92b3560bbddd,cmdId=1,userId=goofy2,commandTyp=verdichtet G, T und komprimiert G, T,vmId=482392382,started=1589536752609]
2020-05-15 12:00:10,343 [scheduling-1] -ERROR yy.xxx.infop.compress.CommandExecutor.execute - traceid=d22b696edc90e123 -
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.IllegalStateException: EntityManagerFactory is closed
2020-05-15 12:00:10,351 [scheduling-1] -DEBUG yy.xxx.infop.compress.CommandReader.poll - traceid=d22b696edc90e123 - Finished polling.
2020-05-15 12:00:10,372 [SpringContextShutdownHook] - INFO org.springframework.integration.monitor.IntegrationMBeanExporter.destroy - traceid= - Summary on shutdown: bean 'response'
2020-05-15 12:00:10,372 [SpringContextShutdownHook] - INFO org.springframework.integration.monitor.IntegrationMBeanExporter.destroy - traceid= - Summary on shutdown: nullChannel
2020-05-15 12:00:10,373 [SpringContextShutdownHook] - INFO org.springframework.integration.monitor.IntegrationMBeanExporter.destroy - traceid= - Summary on shutdown: bean 'errorChannel'
2020-05-15 12:00:10,373 [SpringContextShutdownHook] - INFO org.springframework.integration.monitor.IntegrationMBeanExporter.destroy - traceid= - Summary on shutdown: bean '_org.springframework.integration.errorLogger.handler' for component '_org.springframework.integration.errorLogger'
2020-05-15 12:00:10,374 [SpringContextShutdownHook] - INFO com.zaxxer.hikari.HikariDataSource.close - traceid= - HikariPool-1 - Shutdown initiated...
2020-05-15 12:00:10,405 [SpringContextShutdownHook] - INFO com.zaxxer.hikari.HikariDataSource.close - traceid= - HikariPool-1 - Shutdown completed.
Process finished with exit code 130
I've tested this configuration which should do the same as your TaskSchedulerCustomizer:
spring.task.scheduling.shutdown.await-termination=true
spring.task.scheduling.shutdown.await-termination-period=30s
Spring waits 30 seconds with all services available before shutting down anything if there are active tasks. If there are no active tasks, the shutdown is immediate.
It is worth mentioning that what brought me to this question was the graceful shutdown of @Async methods, which is configured in a very similar way:
spring.task.execution.shutdown.await-termination=true
spring.task.execution.shutdown.await-termination-period=1s
or in code:
@Bean
public TaskExecutorCustomizer taskExecutorCustomizer() {
    // Applies to @Async tasks, not @Scheduled as in the question
    return (customizer) -> {
        customizer.setWaitForTasksToCompleteOnShutdown(true);
        customizer.setAwaitTerminationSeconds(10);
    };
}
Back to your case, my guess is that the TaskSchedulerCustomizer is not actually executed or is overridden by something else after it executes.
For the first option, validate by adding logging statements or setting a breakpoint in taskSchedulerCustomizer().
For the second option, I suggest setting a breakpoint in TaskSchedulerBuilder::configure() to see what happens. Once the debugger breaks in that method, add a data breakpoint on the ExecutorConfigurationSupport::awaitTerminationMillis property of the taskScheduler to see if that property is modified elsewhere.
You can see the final termination period used in the shutdown process in the method ExecutorConfigurationSupport::awaitTerminationIfNecessary.
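For the first option, a minimal sketch of the logging check (the log statement is the only addition; any logger visible at startup will do):
@Bean
TaskSchedulerCustomizer taskSchedulerCustomizer() {
    return taskScheduler -> {
        // if this message never shows up in the startup log, the customizer never ran
        LoggerFactory.getLogger(getClass()).info("Customizing taskScheduler for graceful shutdown");
        taskScheduler.setWaitForTasksToCompleteOnShutdown(true);
        taskScheduler.setAwaitTerminationSeconds(30);
    };
}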

How to run a Citrus test from a main application

I'm trying to use Citrus to build a configurable mockup that I'd like to run from the command line, passing different parameters to the test each time I run it.
I tried to use this as a reference to get started: https://github.com/citrusframework/citrus/issues/325, but could not get it to work in my case.
I've got a test running with one test case inside, and I can run it like this:
mvn clean verify -Dit.test=myTest#myTestCase
But when trying to run the following app:
package com.grge.citrus;

import com.consol.citrus.dsl.design.DefaultTestDesigner;
import com.consol.citrus.Citrus;
import com.consol.citrus.context.TestContext;
import org.springframework.context.ConfigurableApplicationContext;
import com.grge.citrus.*;
import com.consol.citrus.dsl.testng.TestNGCitrusTestDesigner;

public class App {

    public static void main(String[] args) {
        String suiteName = "mySuite";
        Citrus citrus = Citrus.newInstance();
        // my test with one test case inside
        TestNGCitrusTestDesigner myTest = new MyTest();
        try {
            citrus.beforeSuite(suiteName);
            // this works, it comes from the sample
            citrus.run(new SampleJavaDslTest().getTestCase());
            // fails with a NullPointerException
            citrus.run(myTest.getTestCase());
            citrus.afterSuite(suiteName);
        } finally {
            ((ConfigurableApplicationContext) citrus.getApplicationContext()).close();
        }
    }

    private static class SampleJavaDslTest extends DefaultTestDesigner {
        public SampleJavaDslTest() {
            super();
            echo("Hello from Java DSL!");
        }
    }
}
It fails with the following error:
22:08:21,691 DEBUG citrus.Citrus| Loading Citrus application properties
22:08:21,696 DEBUG citrus.Citrus| Setting application property citrus.spring.java.config=com.grge.citrus.VCenterActorConfigSSL
22:08:22,025 DEBUG BeanDefinitionReader| Loaded 0 bean definitions from location pattern [classpath*:citrus-context.xml]
22:08:23,059 DEBUG server.HttpServer| Starting server: vCenterServer ...
22:08:23,217 DEBUG CachingServletFilter| Initializing filter 'request-caching-filter'
22:08:23,220 DEBUG CachingServletFilter| Filter 'request-caching-filter' configured successfully
22:08:23,220 DEBUG et.GzipServletFilter| Initializing filter 'gzip-filter'
22:08:23,220 DEBUG et.GzipServletFilter| Filter 'gzip-filter' configured successfully
22:08:23,222 DEBUG rusDispatcherServlet| Initializing servlet 'vCenterServer-servlet'
22:08:23,241 INFO rusDispatcherServlet| FrameworkServlet 'vCenterServer-servlet': initialization started
22:08:23,250 DEBUG rusDispatcherServlet| Servlet with name 'vCenterServer-servlet' will try to create custom WebApplicationContext context of class 'org.springframework.web.context.support.XmlWebApplicationContext', using parent context [null]
22:08:23,865 DEBUG rusDispatcherServlet| Unable to locate MultipartResolver with name 'multipartResolver': no multipart request handling provided
22:08:23,869 DEBUG rusDispatcherServlet| Unable to locate LocaleResolver with name 'localeResolver': using default [org.springframework.web.servlet.i18n.AcceptHeaderLocaleResolver#16fb356]
22:08:23,873 DEBUG rusDispatcherServlet| Unable to locate ThemeResolver with name 'themeResolver': using default [org.springframework.web.servlet.theme.FixedThemeResolver#1095f122]
22:08:23,894 DEBUG rusDispatcherServlet| No HandlerExceptionResolvers found in servlet 'vCenterServer-servlet': using default
22:08:23,896 DEBUG rusDispatcherServlet| Unable to locate RequestToViewNameTranslator with name 'viewNameTranslator': using default [org.springframework.web.servlet.view.DefaultRequestToViewNameTranslator#733c423e]
22:08:23,910 DEBUG rusDispatcherServlet| No ViewResolvers found in servlet 'vCenterServer-servlet': using default
22:08:23,915 DEBUG rusDispatcherServlet| Unable to locate FlashMapManager with name 'flashMapManager': using default [org.springframework.web.servlet.support.SessionFlashMapManager#681aad3b]
22:08:23,922 DEBUG rusDispatcherServlet| Published WebApplicationContext of servlet 'vCenterServer-servlet' as ServletContext attribute with name [org.springframework.web.servlet.FrameworkServlet.CONTEXT.vCenterServer-servlet]
22:08:23,922 INFO rusDispatcherServlet| FrameworkServlet 'vCenterServer-servlet': initialization completed in 678 ms
22:08:23,923 DEBUG rusDispatcherServlet| Servlet 'vCenterServer-servlet' configured successfully
22:08:54,266 INFO server.HttpServer| Started server: vCenterServer
22:08:54,362 INFO port.LoggingReporter|
22:08:54,362 INFO port.LoggingReporter| ------------------------------------------------------------------------
22:08:54,362 INFO port.LoggingReporter| .__ __
22:08:54,362 INFO port.LoggingReporter| ____ |__|/ |________ __ __ ______
22:08:54,362 INFO port.LoggingReporter| _/ ___\| \ __\_ __ \ | \/ ___/
22:08:54,362 INFO port.LoggingReporter| \ \___| || | | | \/ | /\___ \
22:08:54,362 INFO port.LoggingReporter| \___ >__||__| |__| |____//____ >
22:08:54,362 INFO port.LoggingReporter| \/ \/
22:08:54,362 INFO port.LoggingReporter|
22:08:54,363 INFO port.LoggingReporter| C I T R U S T E S T S 2.7.8
22:08:54,363 INFO port.LoggingReporter|
22:08:54,363 INFO port.LoggingReporter| ------------------------------------------------------------------------
22:08:54,363 DEBUG port.LoggingReporter| BEFORE TEST SUITE
22:08:54,363 INFO port.LoggingReporter|
22:08:54,363 INFO port.LoggingReporter|
22:08:54,363 INFO port.LoggingReporter| BEFORE TEST SUITE: SUCCESS
22:08:54,363 INFO port.LoggingReporter| ------------------------------------------------------------------------
22:08:54,363 INFO port.LoggingReporter|
22:08:54,370 DEBUG t.TestContextFactory| Created new test context - using global variables: '{}'
22:08:54,370 INFO port.LoggingReporter|
22:08:54,370 INFO port.LoggingReporter| ------------------------------------------------------------------------
22:08:54,370 DEBUG port.LoggingReporter| STARTING TEST SampleJavaDslTest <com.grge.citrus>
22:08:54,370 INFO port.LoggingReporter|
22:08:54,370 DEBUG citrus.TestCase| Initializing test case
22:08:54,372 DEBUG context.TestContext| Setting variable: citrus.test.name with value: 'SampleJavaDslTest'
22:08:54,372 DEBUG context.TestContext| Setting variable: citrus.test.package with value: 'com.grge.citrus'
22:08:54,372 DEBUG citrus.TestCase| Test variables:
22:08:54,372 DEBUG citrus.TestCase| citrus.test.name = SampleJavaDslTest
22:08:54,372 DEBUG citrus.TestCase| citrus.test.package = com.grge.citrus
22:08:54,373 INFO actions.EchoAction| Hello from Java DSL!
22:08:54,493 INFO port.LoggingReporter|
22:08:54,493 INFO port.LoggingReporter| TEST SUCCESS SampleJavaDslTest (com.grge.citrus)
22:08:54,493 INFO port.LoggingReporter| ------------------------------------------------------------------------
22:08:54,493 INFO port.LoggingReporter|
Exception in thread "main" java.lang.NullPointerException
at com.consol.citrus.dsl.testng.TestNGCitrusTestDesigner.getTestCase(TestNGCitrusTestDesigner.java:113)
at com.grge.citrus.App.main(App.java:25)
Effectively, using the debugger I can see that when entering the try section, myTest has:
applicationContext
testDesigner
and many other attributes set to null, so testDesigner.getTestCase() raises an exception.
Thx
Since Citrus 2.7 the framework provides a main CLI class that does exactly what you want:
https://github.com/citrusframework/citrus/blob/10d3954fd6c051d32e9cd974f685322c6454c779/modules/citrus-core/src/main/java/com/consol/citrus/main/CitrusApp.java#L38
Citrus application option usage:
-h or --help = Displays cli option usage
-d or --duration = Maximum time in milliseconds the server should be up and running - server will terminate automatically when time exceeds
-c or --config = Custom Spring configuration class
-s or --skipTests = Skip test execution
-p or --package = Test package to execute
-D or --properties = Default system properties to set
-e or --exit = Force system exit when finished
-t or --test = Test class/method to execute
-j or --jar = External test jar to load tests from
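With those options, something like the following should run your test from the command line (classpath layout and class names are placeholders taken from this question, not verified):
java -cp target/test-classes:target/classes:<dependency jars> com.consol.citrus.main.CitrusApp \
    --config com.grge.citrus.VCenterActorConfigSSL \
    --test com.grge.citrus.MyTest \
    --exit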

Spring Batch JobInstance to run more than once

I am new to Spring Batch and, trust me, I have read a lot about it these days to try to become familiar with its concepts. I am a bit confused about how JobInstance, RunIdIncrementer and JobParameters work, and I would like to understand some aspects:
When you run a Job and the JobInstance name is already in the BATCH_JOB_INSTANCE table, the Job is not launched. So what is the best way to generate a new name for my JobInstance?
Is it a good practice to always generate a new name when I want to launch my job?
A job is supposed to be scheduled to run many times. What is the best practice to create a batch Job to be scheduled to run many times without generating a new name?
Is RunIdIncrementer() supposed to create an id to generate a new job name?
Edit: see the code below.
@Bean
public Job batchExecution() {
    return jobs
            .get("BatchJob")
            .incrementer(new JobIdIncrementer())
            .start(downloadFile())
            .next(archiveFile())
            .next(readFile())
            .build();
}
The JobIdIncrementer:
public class JobIdIncrementer implements JobParametersIncrementer {

    private static String RUN_ID_KEY = "run.id";

    private String key;

    public JobIdIncrementer() {
        this.key = RUN_ID_KEY;
    }

    public void setKey(String key) {
        this.key = key;
    }

    @Override
    public JobParameters getNext(JobParameters parameters) {
        JobParameters params = parameters == null ? new JobParameters() : parameters;
        long id = new Date().getTime();
        return (new JobParametersBuilder(params)).addLong(this.key, Long.valueOf(id)).toJobParameters();
    }
}
When I start the batch the first time, I get this log (it works fine):
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.0.RELEASE)
2018-08-08 15:36:03 - Starting Application on MA18-012.local with PID 39543
2018-08-08 15:36:05 - HikariPool-1 - Starting...
2018-08-08 15:36:05 - HikariPool-1 - Start completed.
2018-08-08 15:36:06 - HHH000204: Processing PersistenceUnitInfo [
name: default
...]
2018-08-08 15:36:06 - HHH000412: Hibernate Core {5.2.14.Final}
2018-08-08 15:36:06 - HHH000206: hibernate.properties not found
2018-08-08 15:36:06 - HHH80000001: hibernate-spatial integration enabled : true
2018-08-08 15:36:06 - HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
2018-08-08 15:36:06 - HHH000400: Using dialect: org.hibernate.spatial.dialect.mysql.MySQL5SpatialDialect
2018-08-08 15:36:06 - HHH000400: Using dialect: org.hibernate.spatial.dialect.mysql.MySQLSpatialDialect
2018-08-08 15:36:09 - Started Application in 6.294 seconds (JVM running for 6.812)
2018-08-08 15:36:09 - Loading the file name
2018-08-08 15:36:23 - Downloading the file
2018-08-08 15:36:24 - Archiving the file
2018-08-08 15:36:24 - Unzipping the file
2018-08-08 15:36:24 - Removing the file
2018-08-08 15:36:51 - Reading the file
2018-08-08 15:36:52 - HHH000397: Using ASTQueryTranslatorFactory
2018-08-08 15:36:54 - HikariPool-1 - Shutdown initiated...
2018-08-08 15:36:54 - HikariPool-1 - Shutdown completed.
The second time I start the batch, I get this one (no error, but it starts and shuts down immediately):
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.0.RELEASE)
2018-08-08 15:38:28 - Starting Application on MA18-012.local with PID 39638
2018-08-08 15:38:28 - No active profile set, falling back to default profiles: default
2018-08-08 15:38:30 - HikariPool-1 - Starting...
2018-08-08 15:38:30 - HikariPool-1 - Start completed.
2018-08-08 15:38:30 - HHH000204: Processing PersistenceUnitInfo [
name: default
...]
2018-08-08 15:38:30 - HHH000412: Hibernate Core {5.2.14.Final}
2018-08-08 15:38:30 - HHH000206: hibernate.properties not found
2018-08-08 15:38:30 - HHH80000001: hibernate-spatial integration enabled : true
2018-08-08 15:38:30 - HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
2018-08-08 15:38:31 - HHH000400: Using dialect: org.hibernate.spatial.dialect.mysql.MySQL5SpatialDialect
2018-08-08 15:38:31 - HHH000400: Using dialect: org.hibernate.spatial.dialect.mysql.MySQLSpatialDialect
2018-08-08 15:38:33 - Started Application in 6.376 seconds (JVM running for 6.873)
2018-08-08 15:38:34 - HikariPool-1 - Shutdown initiated...
2018-08-08 15:38:34 - HikariPool-1 - Shutdown completed.
When you run a Job and the JobInstance name is already in the BATCH_JOB_INSTANCE table, the Job is not launched. So, what is the best way to generate a new name for my JobInstance?
You don't need to change the job name each time. Your job can have multiple job instances, each one identified by a key, which is the hash of the (identifying) job parameters used to run your job.
Is it a good practice to always generate a new name when I want to launch my job?
No, you don't need to generate a new name (it is the same job so the name should not change). What you need to do is to create a new job instance each time by specifying different job parameters.
A job is supposed to be scheduled to run many times. What is the best practice to create a batch Job to be scheduled to run many times without generating a new name?
Depending on the frequency of your job, you can add the date as a job parameter. For example, if your job is scheduled to run daily, the current date is a good parameter. Check this section of the documentation: https://docs.spring.io/spring-batch/4.0.x/reference/html/domain.html#job; it provides an example.
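For example, a daily launch along those lines might look like this (a sketch; jobLauncher and the batchExecution job reference are assumed to be injected from your configuration):
JobParameters parameters = new JobParametersBuilder()
        .addDate("runDate", new Date())   // identifying parameter: one new job instance per day
        .toJobParameters();
jobLauncher.run(batchExecution, parameters);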
Is RunIdIncrementer() supposed to create an id to generate a new job name?
The RunIdIncrementer increments the run.id job parameter, so you get a new instance of JobParameters, which will result in a new job instance. But the job name will still be the same. Here is how it works: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-core/src/main/java/org/springframework/batch/core/launch/support/RunIdIncrementer.java#L44
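Paraphrasing the linked source, the stock incrementer does roughly this, i.e. it increments the previous run.id instead of taking the current time like your JobIdIncrementer does:
@Override
public JobParameters getNext(JobParameters parameters) {
    JobParameters params = (parameters == null) ? new JobParameters() : parameters;
    long id = params.getLong("run.id", 0L) + 1;   // previous run.id + 1
    return new JobParametersBuilder(params).addLong("run.id", id).toJobParameters();
}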

kafka-log4j-appender 0.9 does not work

I added a log4j Kafka appender in my log4j.properties, but it did not work as I expected.
Before I posted this question, I checked my log4j.properties against this similar question on Stack Overflow, which is about 0.8. However, I had no luck.
Here is my log4j.properties:
log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=my-topic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
When I started my app, I could see that the Kafka producer was started:
[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
But the appender did not work and threw an exception:
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
I also checked my Kafka + ZooKeeper env; it is correct in my log4j.properties. Now I have no idea about it and hope someone could give me a hand. The following is the whole output:
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
[main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = [])
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-per-request
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
...
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Thanks
Finally, I fixed it. Here is my new log4j.properties:
log4j.rootLogger=DEBUG, Console
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false
log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=logTopic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
log4j.logger.io.vertx=WARN
log4j.logger.io.netty=WARN
I also created an example to show how to use this appender on github.
The changes I made:
remove the Kafka appender from the rootLogger. In my old log4j.properties:
log4j.rootLogger=DEBUG, Console, Kafka
add a log category for those packages whose log output will go to Kafka:
log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false
I think the reason is: with the old rootLogger, the log output from Kafka itself also went to Kafka, which caused the timeout.
I had a problem like this with both log4j and logback.
When the appender level was INFO everything worked fine, but when I changed the level to DEBUG I started getting this error after some time:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The problem was that KafkaProducer itself had trace and debug logs and was trying to append these logs to Kafka too, so it was stuck in a loop.
Changing the org.apache.kafka package log level to INFO, or changing its appender to write logs to a file or stdout, solved the problem.
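For example, in log4j.properties that is a one-liner (logback has an equivalent logger element):
log4j.logger.org.apache.kafka=INFO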
