Spring Cloud Stream Application Exits Before Supplier Finishes

I have a "task" application that is short lived and produces messages to Kafka based on statuses from a database. I'm using spring cloud stream to produce the messages using the below format of my application. I followed this format from the Spring Cloud Stream documentation to send arbitrary data to the output binding.
private EmitterProcessor<Message<GenericRecord>> processor;

@Override
public void run(ApplicationArguments arg0) {
    // ... create Message<GenericRecord> producerRecord
    this.processor.onNext(producerRecord);
}

@Bean
public Supplier<Flux<Message<GenericRecord>>> supplier() {
    return () -> this.processor;
}

public static void main(String[] args) {
    ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args);
    ctx.close();
}
The application runs, creates the records, calls onNext(), and then exits. When I then look to see whether any messages have been published, there are none on the topic. If I add a Thread.sleep(10000) after each message is produced, the messages do end up on the topic.
After looking at the documentation for Reactor I didn't see a clear way to accomplish this. Is there a way to wait for the EmitterProcessor to finish publishing the messages before the Spring application exits?

Do you have a specific reason to use the EmitterProcessor? I think this use case can be solved by using StreamBridge. For example:
@Autowired
StreamBridge streamBridge;

@Override
public void run(ApplicationArguments arg0) {
    // ... create Message<GenericRecord> producerRecord
    this.streamBridge.send("process-out-0", producerRecord);
}
Then provide this configuration: spring.cloud.stream.source: process
You can find more details on StreamBridge in the ref docs.
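Putting the pieces together, here is a minimal sketch of the short-lived task using StreamBridge. The class name, payload, and the binding name process-out-0 are assumptions based on the snippets above, not code from the original question:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.messaging.support.MessageBuilder;

@SpringBootApplication
public class TaskApplication implements ApplicationRunner {

    @Autowired
    private StreamBridge streamBridge;

    @Override
    public void run(ApplicationArguments args) {
        // StreamBridge.send(..) hands each record to the binder as it is called,
        // so no Flux is left undrained when the application shuts down
        this.streamBridge.send("process-out-0",
                MessageBuilder.withPayload("some-status").build());
    }

    public static void main(String[] args) {
        // closing the context should shut the binder down cleanly,
        // flushing the Kafka producer before the JVM exits
        SpringApplication.run(TaskApplication.class, args).close();
    }
}

With spring.cloud.stream.source: process configured, Spring Cloud Stream creates the process-out-0 output binding that StreamBridge sends to.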

Related

Strategies to implement a callback mechanism / notification when all the asynchronous Spring Integration flows/threads have completed

I have a Spring Integration flow that gets triggered once every day; it pulls all parties from the database and sends each party to an executorChannel.
The next flow pulls data for each party and then processes them in parallel by sending them to a different executor channel.
The challenge I'm facing is: how do I know when this entire process ends? Any ideas on how to achieve this?
Here's my pseudocode of the executor channels and integration flows.
@Bean
public IntegrationFlow fileListener() {
    return IntegrationFlows.from(Files.inboundAdapter(new File("pathtofile")))
            .channel("mychannel")
            .get();
}

@Bean
public IntegrationFlow flowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("mychannel")
            .handle("serviceHandlerOne", "handle")
            .nullChannel();
}

@Bean
public IntegrationFlow parallelFlowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .nullChannel();
}

@Bean
public IntegrationFlow parallelFlowTwo() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelTwo")
            .handle("parallelServiceHandlerTwo", "handle")
            .nullChannel();
}

@Bean
public MessageChannel executorChannelOne() {
    return new ExecutorChannel(Executors.newFixedThreadPool(10));
}

@Bean
public MessageChannel executorChannelTwo() {
    return new ExecutorChannel(Executors.newFixedThreadPool(10));
}
@Component
@Scope("prototype")
public class ServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelOne;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("parties");
        rowDatas.stream().forEach(data -> {
            Message<?> rowMessage = MessageBuilder.withPayload(data).build();
            executorChannelOne.send(rowMessage);
        });
        return message;
    }
}

@Component
@Scope("prototype")
public class ParallelServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelTwo;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("party");
        rowDatas.stream().forEach(data -> {
            Message<?> rowMessage = MessageBuilder.withPayload(data).build();
            executorChannelTwo.send(rowMessage);
        });
        return message;
    }
}
First of all, there is no reason to make your services @Scope("prototype"): I don't see any state held in your services, so they are stateless and can simply be singletons. Second: since you end your flows with nullChannel(), there is no point in returning anything from your service methods. Just make them void and the flow is going to end there naturally.
Another observation: you use executorChannelOne.send(message) directly in the code of your service method. The same would be achieved if you just returned that new message from your service method and had that executorChannelOne as the next .channel() in your flow definition after handle("parallelServiceHandlerOne", "handle").
Since it looks like you do that in a loop, you might consider adding a .split() in between: the handler returns your List<?> rowDatas and the splitter takes care of iterating over that data, sending each item to that executorChannelOne.
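As an illustration, here is a hedged sketch of flowOne rewritten that way; the bean and channel names come from the snippets above, and it assumes serviceHandlerOne.handle(..) now returns the List<?> instead of looping:

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Bean
public IntegrationFlow flowOne() {
    return IntegrationFlows.from("mychannel")
            // the handler now simply returns List<?> rowDatas
            .handle("serviceHandlerOne", "handle")
            // the splitter emits one message per list item and adds
            // sequence headers that an aggregator can correlate on later
            .split()
            .channel("executorChannelOne")
            .get();
}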
Now about your original question.
There is really no easy way to say that your executors are not busy any more. They might not be busy at the moment you ask simply because the message for a task has not reached an executor channel yet.
Typically we recommend using some async synchronizer for your data. The aggregator is a good way to correlate several in-flight messages: it collects a group and does not emit a reply until that group is complete.
The splitter I mentioned above adds sequence detail headers by default, so a subsequent aggregator can track a message group easily.
Since you have layers in your flow, it looks like you would need several aggregators: two for your executor channels after splitting, and one top-level for the file. The two lower ones would reply to the top-level one for the final, per-file grouping.
You may also think about making those parties and party calls in parallel using a PublishSubscribeChannel, which can also be configured with applySequence=true. That info will then be used by the top-level aggregator for the per-file grouping.
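To make the aggregator idea concrete, here is a hedged sketch of parallelFlowOne ending in an aggregator instead of a nullChannel(); the afterPartiesChannel name is purely illustrative:

@Bean
public IntegrationFlow parallelFlowOne() {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            // correlates replies via the sequence headers added by the splitter;
            // the group is released only when every split item has been processed
            .aggregate()
            // the aggregated result feeds the top-level, per-file aggregator
            .channel("afterPartiesChannel")
            .get();
}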
See more in docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#splitter
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator

Implementing EventProcessingConfigurer.registerErrorHandler to properly handle @EventHandler errors

I am trying to add an ErrorHandler via the EventProcessingConfigurer.registerErrorHandler() method, and while it shows up in the configuration, the class itself is never called.
I am currently using Axon 4.1.1 (without Axon Server) and Spring Boot 2.1.6.RELEASE.
I have based my code off github/AxonFramework, but it isn't acting the same.
Config:
@Autowired
public void configure(final EventProcessingConfigurer config) {
    TestErrorHandler testErrorHandler = new TestErrorHandler();
    config.registerErrorHandler("SolrProjection", configuration -> testErrorHandler);
}
ErrorHandler:
public class TestErrorHandler implements ErrorHandler, ListenerInvocationErrorHandler {

    @Override
    public void handleError(final ErrorContext errorContext) throws Exception {
        System.out.println("TestErrorHandler.handleError()");
    }

    @Override
    public void onError(final Exception exception, final EventMessage<?> event, final EventMessageHandler eventHandler) {
        System.out.println("TestErrorHandler.onError()");
    }
}
Projection:
@Configuration
@RequiredArgsConstructor
@ProcessingGroup("SolrProjection")
public class SolrProjection {

    @EventHandler
    public void onEvent(final TestEvent event,
                        @SequenceNumber Long sequenceNumber,
                        @Timestamp final Instant requestTimestamp,
                        @MessageIdentifier final String messageIdentifier,
                        final MetaData metaData) {
        if (true) {
            throw new IllegalStateException();
        }
    }
}
Even though I am directly throwing an exception, I never see the two System.out's in the console, while log statements in the @EventHandler are being called properly.
The ErrorHandler is tasked with dealing with different exceptions than the ones you expect.
When it comes to handling events, Axon Framework distinguishes two layers:
The internal EventProcessor layer
The Event Handling Components written by framework users
Exceptions thrown within the EventProcessor are dealt with by the ErrorHandler you've configured.
For customizing the process of handling exceptions thrown from your own Event Handlers, you will have to configure the ListenerInvocationErrorHandler.
To configure a general/default ListenerInvocationErrorHandler, you can use the following method in your first snippet:
EventProcessingConfigurer#registerDefaultListenerInvocationErrorHandler(
Function<Configuration, ListenerInvocationErrorHandler>
)
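Applied to the configuration snippet from the question, a hedged sketch of both registration options might look like this (it reuses the TestErrorHandler from above, which already implements ListenerInvocationErrorHandler):

import org.axonframework.config.EventProcessingConfigurer;
import org.springframework.beans.factory.annotation.Autowired;

@Autowired
public void configure(final EventProcessingConfigurer config) {
    TestErrorHandler testErrorHandler = new TestErrorHandler();
    // handles exceptions thrown by the @EventHandler methods of this processing group
    config.registerListenerInvocationErrorHandler("SolrProjection",
            configuration -> testErrorHandler);
    // or, as the default for every processing group
    config.registerDefaultListenerInvocationErrorHandler(configuration -> testErrorHandler);
}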
You can also check out Axon's Reference Guide for more info on this.
Hope this helps you out, @sherring!

How to access the active JMS Connection/Session that @JmsListener uses in Spring Boot

I'm trying to recover messages that are sent back to the ActiveMQ queue because the destination is unreachable. I'm avoiding the redelivery policy as it doesn't fit my requirements; I need to recover these messages at an exact time in my application with session.recover().
I'm currently using a close-to-default JMS configuration for Spring Boot that enables the use of the @JmsListener annotation. However, I cannot find a way to get hold of the active JMS session this annotation uses.
Just add a Session parameter to the listener...
@SpringBootApplication
public class So55038881Application {

    public static void main(String[] args) {
        SpringApplication.run(So55038881Application.class, args);
    }

    @JmsListener(destination = "so55038881")
    public void listen(String in, Session session) {
        System.out.println(in + ":" + session);
    }

    @Bean
    public ApplicationRunner runner(JmsTemplate template) {
        return args -> template.convertAndSend("so55038881", "foo");
    }
}
and
foo:Cached JMS Session: ActiveMQSession {id=ID:host.local-52659-1551967879238-4:1:1,started=true} java.lang.Object#5bad3a2d
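Building on that, a hedged sketch of the session.recover() use case from the question; process(..) is a hypothetical business method, and this assumes the container's session acknowledge mode allows recovery (e.g. CLIENT_ACKNOWLEDGE):

import javax.jms.JMSException;
import javax.jms.Session;
import org.springframework.jms.annotation.JmsListener;

@JmsListener(destination = "so55038881")
public void listen(String in, Session session) throws JMSException {
    try {
        process(in); // hypothetical business logic that may fail
    } catch (Exception e) {
        // stops and restarts message delivery on this session,
        // redelivering everything that has not yet been acknowledged
        session.recover();
    }
}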

How does spring.kafka.consumer.auto-offset-reset work in spring-kafka

KafkaProperties Javadoc:
/**
* What to do when there is no initial offset in Kafka or if the current offset
* does not exist any more on the server.
*/
private String autoOffsetReset;
I have a hello world application which contains this application.properties:
spring.kafka.consumer.group-id=foo
spring.kafka.consumer.auto-offset-reset=latest
In this case the @KafkaListener method is invoked for all entries, but the expected result was that the @KafkaListener method would be invoked only for the latest 3 messages I sent. I tried the other option:
spring.kafka.consumer.auto-offset-reset=earliest
But the behaviour is the same.
Can you explain this?
P.S.
code sample:
@SpringBootApplication
public class Application implements CommandLineRunner {

    public static Logger logger = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args).close();
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("spring_kafka_topic", "foo1");
        this.template.send("spring_kafka_topic", "foo2");
        this.template.send("spring_kafka_topic", "foo3");
        latch.await(60, TimeUnit.SECONDS);
        logger.info("All received");
    }

    @KafkaListener(topics = "spring_kafka_topic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        logger.info(cr.toString());
        latch.countDown();
    }
}
Update:
The behaviour doesn't depend on spring.kafka.consumer.auto-offset-reset (set to earliest); it only depends on spring.kafka.consumer.enable-auto-commit:
if I set spring.kafka.consumer.enable-auto-commit=false, I see all records;
if I set spring.kafka.consumer.enable-auto-commit=true, I see only the 3 last records.
Please clarify the meaning of the spring.kafka.consumer.auto-offset-reset property.
The KafkaProperties in Spring Boot does this:
public Map<String, Object> buildProperties() {
    Map<String, Object> properties = new HashMap<String, Object>();
    if (this.autoCommitInterval != null) {
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,
                this.autoCommitInterval);
    }
    if (this.autoOffsetReset != null) {
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
                this.autoOffsetReset);
    }
    // ...
This buildProperties() is used from buildConsumerProperties() which, in turn, is used in:
@Bean
@ConditionalOnMissingBean(ConsumerFactory.class)
public ConsumerFactory<?, ?> kafkaConsumerFactory() {
    return new DefaultKafkaConsumerFactory<Object, Object>(
            this.properties.buildConsumerProperties());
}
So, if you use your own ConsumerFactory bean definition, be sure to reuse those KafkaProperties: https://docs.spring.io/spring-boot/docs/1.5.7.RELEASE/reference/htmlsingle/#boot-features-kafka-extra-props
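For example, a hedged sketch of a custom ConsumerFactory that keeps Boot's settings; the MAX_POLL_RECORDS_CONFIG override is purely illustrative:

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Bean
public ConsumerFactory<Object, Object> kafkaConsumerFactory(KafkaProperties properties) {
    // start from Boot's properties so auto-offset-reset and friends are honoured
    Map<String, Object> consumerProps = properties.buildConsumerProperties();
    // then layer any custom overrides on top
    consumerProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
    return new DefaultKafkaConsumerFactory<>(consumerProps);
}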
UPDATE
OK. I see what's going on.
Try to add this property:
spring.kafka.consumer.enable-auto-commit=false
This way we won't have async auto-commits based on some commit interval.
The logic in our application is based on the fact that we exit right after latch.await(60, TimeUnit.SECONDS): when we get the 3 expected records, we exit. At that point the async auto-commit from the consumer might not have happened yet, so the next time you run the application the consumer polls data from the uncommitted offset.
When we turn off auto-commit, we get AckMode.BATCH, which is performed synchronously, and we are reliably able to see the really latest records in the topic for this foo consumer group.

Apache Camel 2.16 enrich - No consumers available on endpoint in JUnit

I upgraded to Camel 2.16 and one of my route unit tests started failing.
Here is my route definition:
public class Route extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from(start).enrich("second");

        from("direct:second")
            .log(LoggingLevel.DEBUG, "foo", "Route [direct:second] started.");
    }
}
Here is my test:
@RunWith(MockitoJUnitRunner.class)
public class RouteTest extends CamelTestSupport {

    private Route builder;

    @Produce(uri = "direct:start")
    protected ProducerTemplate template;

    @Before
    public void config() {
        BasicConfigurator.configure();
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        builder = new Route();
        return builder;
    }

    @Override
    protected CamelContext createCamelContext() throws Exception {
        SimpleRegistry registry = new SimpleRegistry();
        return new DefaultCamelContext(registry);
    }

    @Test
    public void testPrimeRouteForSubscriptionId() {
        Exchange exchange = ExchangeBuilder.anExchange(new DefaultCamelContext()).build();
        exchange.getIn().setBody(new String("test"));
        template.send(exchange);
    }
}
The error I'm getting when I run the test is:
org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: Endpoint[direct://second]. Exchange[][Message: test]
Worthy of note is the following line in the Camel 2.16 release notes:
http://camel.apache.org/camel-2160-release.html
The resourceUri and resourceRef attributes on <enrich> and <pollEnrich> have been removed, as they now support dynamic uris computed from an Expression.
Thanks in advance for any help.
Swap the order so that the direct route is started before the enrich.
http://camel.apache.org/configuring-route-startup-ordering-and-autostartup.html
Or use seda instead of direct in your unit test: http://camel.apache.org/seda
Or use ?block=true in the direct uri to tell Camel to block and wait for a consumer to be started and ready before it sends a message to it: http://camel.apache.org/direct
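For instance, a hedged sketch of the third option applied to the route from the question (using an explicit direct:start in place of the undefined start field):

import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;

public class Route extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // block=true makes the producer wait for a consumer to be
        // started on direct:second before sending the exchange
        from("direct:start").enrich("direct:second?block=true");

        from("direct:second")
            .log(LoggingLevel.DEBUG, "foo", "Route [direct:second] started.");
    }
}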
This is a somewhat old issue, but since I pulled most of my hair out last night trying to figure out why it was OK to use to("direct:myEndpoint") but not enrich("direct:myEndpoint"), I'll post the answer anyway - maybe it'll save somebody else from getting bald spots ;-)
It turns out to be a test issue. In the case of direct endpoints, enrich checks whether there is a running route in the context before passing the Exchange to it, but it does so by looking at the CamelContext held by the Exchange it is currently handling. Since you passed your ProducerTemplate an Exchange that was created with a new DefaultCamelContext(), it has no "direct:second" route available.
Luckily there are a couple of simple solutions. Either create the Exchange using the CamelContext from CamelTestSupport, or use the ProducerTemplate sendBody(...) method instead:
@Test
public void testWithSendBody() {
    template.sendBody(new String("test"));
}

@Test
public void testPrimeRouteForSubscriptionId() {
    Exchange exchange = ExchangeBuilder.anExchange(context()).build();
    exchange.getIn().setBody(new String("test"));
    template.send(exchange);
}
The Blueprint test kept throwing the "No consumers available" exception for me as well.
My scenario was that I have an OSGi service which exposes a method that can be called from any other OSGi service. The exposed service method makes a call to a direct endpoint:
@EndpointInject(uri = "direct-vm:toRestCall")
ProducerTemplate toRestCall;

public void svcMethod(Exchange xch) {
    xch.setOut(toRestCall.send("seda:toDirectCall", exchange -> {
        try {
            exchange.getIn().setBody("abc");
        } catch (Exception ex) {
            ex.getMessage();
        }
    }).getIn());
}
And when I tested the direct endpoint that it calls, the Blueprint advice with JUnit kept throwing the following exception:
org.apache.camel.component.direct.DirectConsumerNotAvailableException:
No consumers available on endpoint: Endpoint. Exchange[Message: {..........
