I have a @Repository reading from a real-time data source. I make the data available using Flux.create(sink -> sink.next(...)).
A @Service does the following:
@Autowired
private MyRepository myRepository;

@PostConstruct
public void startUp() {
    ConnectableFlux<Object> cf = myRepository.flux.publish();
    cf.subscribe(System.out::println);
    cf.connect();
}
That works and prints the data, but I never get "Netty started" in the logs and @Controllers do not respond. If I omit cf.connect(), Netty starts, so I assume that cf.connect() is blocking Netty's startup.
Ideally, I want the subscription to start automatically. Is calling connect() in @PostConstruct too early? Should I listen for a "Netty started" event and then connect(), or is my subscription just plain wrong?
Edit: If connect() is run on a daemon thread, Netty does start and the subscription works.
Putting @EnableAsync on the Spring Boot main application class and @Async on the method above also seemed to work.
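For reference, a minimal sketch of the event-listener variant asked about above. ApplicationReadyEvent stands in for a literal "Netty started" event: it fires after the embedded server is up, so a connect() that blocks its calling thread no longer stalls container startup (class name is an assumption):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Service;
import reactor.core.publisher.ConnectableFlux;

@Service
public class StartupSubscriber {

    @Autowired
    private MyRepository myRepository;

    // Fires once the web server (Netty) has started, unlike @PostConstruct,
    // which runs during bean initialization and can block startup.
    @EventListener(ApplicationReadyEvent.class)
    public void connectOnStartup() {
        ConnectableFlux<Object> cf = myRepository.flux.publish();
        cf.subscribe(System.out::println);
        cf.connect();
    }
}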
Edit:
I found a better solution in the GitHub issue "Connectable Flux blocks on toIterator.forEach, while Flux does not" (#1549). My code now looks like this:
Flux<MyClass> flux = Flux.create(
        sink -> {
            while (condition) {
                sink.next(nextValueFromDataSource);
            }
            sink.complete();
        })
        .publish()
        .autoConnect(1);
Also, because of autoConnect(1), there is no need for @Async.
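With the shared flux in place, a controller can simply return it; a minimal sketch (the endpoint and the getFlux() accessor are hypothetical):

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class MyStreamController {

    private final MyRepository myRepository;

    public MyStreamController(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    // All HTTP clients share the single upstream subscription created by autoConnect(1).
    @GetMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<MyClass> stream() {
        return myRepository.getFlux(); // hypothetical accessor for the shared flux
    }
}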
I have a spring-webflux app which must consume messages from RabbitMQ. In previous apps, when NOT using spring-webflux, I've been able to:
Configure a retry policy when declaring the queue
Set up a rabbit listener using the @RabbitListener annotation
Trigger a retry by throwing an exception in the handler function
In spring-webflux I'm not able to throw an exception; I just have a Mono that terminates with an error. How do I trigger a retry?
My code currently looks something like this:
@Component
@RequiredArgsConstructor
public class VehicleUpdateListener {

    private final VehicleService service;
    private final OperationFactory operationFactory;

    @RabbitListener(queues = VEHICLE_UPDATE_QUEUE)
    void handleVehicleUpdated(Message message) {
        Mono.just(message)
            .map(operationFactory::generateOperationFromMessage)
            .flatMap(service::handleOperation) // want to retry if downstream app is down
            .subscribe();
    }
}
EDIT
I have now worked out that it is possible. If client code returns an error Mono (e.g. Mono.error(new Exception())), this will trigger retries. Likewise, I can conditionally trigger retries by mapping to an error Mono. For example, if I want to trigger a retry when a product from a message does not exist, I can do the following:
repository.findByProductId(product.getProductId())
        .hasElement()
        .filter(exists -> !exists)
        .flatMap(missing -> Mono.error(new Exception("my exception")))
        .then(...) // carry on if it does exist
Using reactor with a non-reactive listener container has many challenges.
You must use MANUAL acks and ack/nack the delivery after the reactive flow completes.
You must use reactor's retry mechanisms.
Consider looking at the https://github.com/reactor/reactor-rabbitmq project instead of Spring AMQP. At some time in the future we hope to build reactive @RabbitListeners, but they are not there yet.
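A minimal sketch combining those two points, assuming Spring AMQP 2.2+ (for the ackMode attribute) and Reactor 3.4+ (for the retry API); the other names are taken from the question:

import java.io.IOException;
import java.time.Duration;

import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.messaging.handler.annotation.Header;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

@RabbitListener(queues = VEHICLE_UPDATE_QUEUE, ackMode = "MANUAL")
void handleVehicleUpdated(Message message, Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long tag) {
    Mono.just(message)
        .map(operationFactory::generateOperationFromMessage)
        .flatMap(service::handleOperation)
        // reactor's retry mechanism, not the container's retry policy
        .retryWhen(Retry.backoff(3, Duration.ofSeconds(2)))
        .subscribe(
            v -> { },                  // values, if any
            err -> nack(channel, tag), // retries exhausted
            () -> ack(channel, tag));  // completed successfully
}

private void ack(Channel channel, long tag) {
    try {
        channel.basicAck(tag, false); // ack only after the reactive flow completes
    } catch (IOException e) {
        // log; the broker redelivers unacked messages on channel loss
    }
}

private void nack(Channel channel, long tag) {
    try {
        channel.basicNack(tag, false, false); // dead-letter (or requeue with true)
    } catch (IOException e) {
        // log
    }
}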
I would like to write an integration test for the whole Kafka flow.
In my production code I have:
@KafkaListener(topics = "myTopic")
public void listen(@Payload String payload) {
    log.debug("processing payload: '{}'", payload);
    // business logic here
}
In my test code I use a KafkaProducer<String, String> to send messages to the topic.
I would like a hook indicating that the @KafkaListener was invoked.
I could insert a delay into the test, but that is bad practice and I want to avoid it.
Is there a better way to wait for the @KafkaListener to finish processing?
If your listener invokes some service, you can inject a mock for that service and verify it was called.
Alternatively, you can wrap your listener in the test case and add a CountDownLatch.
See this answer for an example.
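A minimal sketch of the latch approach (test-scoped; the listener body and topic name follow the question, the rest is assumed):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

@Component
public class LatchedListener {

    public final CountDownLatch latch = new CountDownLatch(1);

    @KafkaListener(topics = "myTopic")
    public void listen(@Payload String payload) {
        // business logic here
        latch.countDown(); // signal the test that the record was processed
    }
}

// In the test, instead of sleeping:
// producer.send(new ProducerRecord<>("myTopic", "key", "value")).get();
// assertTrue(latchedListener.latch.await(10, TimeUnit.SECONDS));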
I'm building a REST API application with Spring Boot 2.1.6. I want to use JMS messaging in my app with the ActiveMQ package (org.apache.activemq). I have a MyEventController class which receives all kinds of events through HTTP requests. I then want to send the information about the events as JMS messages to a topic, so that the message consumer can update the database with the information from the events.
The reason I want to use JMS here is to avoid holding up the Spring thread that handles the HTTP request, and to have the consumer do the potentially time-consuming database updates on a separate thread. However, I'm wondering whether JmsTemplate always stays on one thread, because if a new thread is opened for each HTTP request, the solution is not scalable.
This is my code for the producer:
@RestController
public class MyEventController {

    @Autowired
    private DBHandler db;

    @Autowired
    private JmsTemplate jmsTemplate;

    @RequestMapping(method = GET, path = trackingEventPath)
    public ResponseEntity<Object> handleTrackingEvent(
            @RequestParam(name = Routes.pubId) String pubId,
            @RequestParam(name = Routes.event) String event) {
        jmsTemplate.convertAndSend("topic1", "info@example.com");
        return new ResponseEntity<>(null, new HttpHeaders(), HttpStatus.OK);
    }
}
And this is the consumer:
@Component
public class JSMListener {

    @JmsListener(destination = "topic1", containerFactory = "topicListenerFactory")
    public void receiveTopicMessage(String event) {
        // do something...
    }
}
JmsTemplate has no concept of background threads or asynchronous sending. It's a class designed to simplify the use of javax.jms.Session and to integrate it with the usual Spring concepts, e.g. declarative transaction management with @Transactional.
In your example, convertAndSend() executes as part of the request-processing thread. The method blocks until the JMS broker acknowledges that the message was added to the destination, or throws an exception if there was a problem (e.g. the queue was full).
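The time-consuming database work then happens on the listener container's own threads, not the request thread. For completeness, a minimal sketch (assuming Spring Boot's JMS auto-configuration) of the topicListenerFactory the consumer references:

import javax.jms.ConnectionFactory;

import org.springframework.boot.autoconfigure.jms.DefaultJmsListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Bean
public DefaultJmsListenerContainerFactory topicListenerFactory(
        ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);  // "topic1" is a topic, not a queue
    factory.setConcurrency("1-5");  // consumer threads, independent of the HTTP thread pool
    return factory;
}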
I've recently started playing with Apache Camel, and one of the things I've been having issues with is properly performing shutdown logic on selected routes. Since the shutdown logic varies between routes, Camel's RoutePolicy made the most sense. Here's an example of what I'm trying to do.
public class ProcessingRouteBuilder extends RouteBuilder {

    private ProducerTemplate prodTemplate;

    public ProcessingRouteBuilder(ProducerTemplate aProdTemplate) {
        prodTemplate = aProdTemplate;
    }

    @Override
    public void configure() {
        from("direct://processing")
            .routePolicy(new RoutePolicySupport() {
                @Override
                public void onStop(Route route) {
                    super.onStop(route);
                    prodTemplate.sendBody("direct://shutdownRoute", "msg");
                }
            })
            .process(ex -> { /* do stuff */ });

        from("direct://shutdownRoute")
            .log("Running shutdown A route body - ${body}");
    }
}
The shutdown is done as described in http://camel.apache.org/how-can-i-stop-a-route-from-a-route.html. The ProducerTemplate comes from the primary CamelContext (I read that it is good practice to create one ProducerTemplate per context).
Running this gives me a DirectConsumerNotAvailableException. I've also tried seda and vm endpoints (I don't plan to interact with multiple contexts, but I gave it a shot anyway); neither throws an exception, but the shutdown routes are never hit. Some questions I have:
Am I using the ProducerTemplate wrong? It doesn't look like it's creating an exchange.
Can I even use the ProducerTemplate once the shutdown hook has been initiated? I'm not sure how Camel performs the shutdown, but it would make sense for it to refuse new messages, and the shutdown route may not even be available at the time of sending.
One thing to note, which I'm not handling here, is ensuring that the shutdown route runs only after the processing route has finished processing all messages in its queue. I'm not entirely sure whether onStop() is called only after there are no more in-flight messages, and if not, how to enforce that.
I figure another approach is to use when/choice at the beginning of each route and send some sort of shutdown notifier or message, but this seems clunkier.
Thanks guys!
To programmatically shut down a route you can also use the Control Bus EIP.
However, the "stop" logic is not clear: you want to send a message to the shutdown route when the processing route stops, but if the stop happens because the Camel context is shutting down, the shutdownRoute may already have been stopped itself.
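A minimal sketch of the Control Bus approach (the route id is hypothetical):

// Give the route an explicit id so the control bus can address it.
from("direct://processing").routeId("processingRoute")
    .process(ex -> { /* do stuff */ });

// From anywhere with a ProducerTemplate; async=true avoids stopping
// the route from one of its own threads.
prodTemplate.sendBody("controlbus:route?routeId=processingRoute&action=stop&async=true", null);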
I'm trying to create a Spring Cloud Stream Source bean inside a Spring Boot application that simply sends the result of a method to a stream (the underlying Kafka topic is bound to the stream).
Most of the stream samples I've seen use the @InboundChannelAdapter annotation to send data to the stream using a poller. But I don't want to use a poller. I've tried setting the poller to an empty array, but the other problem is that @InboundChannelAdapter does not allow any method parameters.
The overall concept of what I am trying to do is: read from an inbound stream, do some async processing, then post the result to an outbound stream. So using a processor doesn't seem to be an option either. I am using @StreamListener with a Sink channel to read the inbound stream, and that works.
Here is some code I've been trying, but it doesn't work at all. I was hoping it would be this simple, because my Sink was, but maybe it isn't. I'm looking for an example of a source that isn't a processor (i.e. doesn't require listening on an inbound channel) and doesn't use @InboundChannelAdapter, or for some design tips on how to accomplish this differently. Thanks!
@EnableBinding(Source.class)
public class JobForwarder {

    @ServiceActivator(outputChannel = Source.OUTPUT)
    @SendTo(Source.OUTPUT)
    public String forwardJob(String message) {
        log.info(String.format("Forwarding a job message [%s] to queue [%s]", message, Source.OUTPUT));
        return message;
    }
}
Your original requirement can be achieved through the steps below.
Create your custom bound interface (you can use the default @EnableBinding(Source.class) as well):
public interface CustomSource {

    String OUTPUT = "customoutput";

    @Output(CustomSource.OUTPUT)
    MessageChannel output();
}
Inject your bound channel:

@Component
@EnableBinding(CustomSource.class)
public class CustomOutputEventSource {

    @Autowired
    private CustomSource customSource;

    public void sendMessage(String message) {
        customSource.output().send(MessageBuilder.withPayload(message).build());
    }
}
Test it:

@RunWith(SpringRunner.class)
@SpringBootTest
public class CustomOutputEventSourceTest {

    @Autowired
    CustomOutputEventSource output;

    @Test
    public void sendMessage() {
        output.sendMessage("Test message from JUnit test");
    }
}
So if you don't want to use a Poller, what causes the forwardJob() method to be called?
You can't just call the method and expect the result to go to the output channel.
With your current configuration, you need an inputChannel on the service activator containing your inbound message (and something to send a message to that channel). It doesn't have to be bound to a transport; it can be a simple MessageChannel @Bean.
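A minimal sketch of that wiring (the channel bean name is hypothetical):

@Bean
public MessageChannel jobInputChannel() {
    return new DirectChannel(); // plain in-memory channel, not bound to any transport
}

// Messages sent to jobInputChannel are forwarded to the bound output.
@ServiceActivator(inputChannel = "jobInputChannel", outputChannel = Source.OUTPUT)
public String forwardJob(String message) {
    return message;
}

// Anywhere in the application:
// jobInputChannel.send(MessageBuilder.withPayload("my job").build());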
Or, you could use @Publisher to publish the result of the method invocation (as well as having it returned to the caller) - docs here.
@Publisher(channel = Source.OUTPUT)
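A minimal sketch of that variant (assuming @EnablePublisher, or an equivalent PublisherAnnotationBeanPostProcessor bean, is present to activate the annotation):

@EnableBinding(Source.class)
public class JobForwarder {

    // The return value is sent back to the caller AND published to Source.OUTPUT.
    @Publisher(channel = Source.OUTPUT)
    public String forwardJob(String message) {
        return message;
    }
}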
Thanks for the input. It took me a while to get back to this problem. I did try reading the documentation for @Publisher. It looked to be exactly what I needed, but I just couldn't get the proper beans initialized to wire it up properly.
To answer your question: the forwardJob() method is called after some async processing of the input.
Eventually I just implemented this using the spring-kafka library directly, which was much more explicit and easier to get going. I think we are going to stick with Kafka as the only channel binding, so we'll stay with that library.
However, we did eventually get the spring-cloud-stream library working quite simply. Here is the code for a single source without a poller:
@Component
@EnableBinding(Source.class)
public class JobForwarder {

    private final Source source;

    @Autowired
    public JobForwarder(Source source) {
        this.source = source;
    }

    public void forwardScheduledJob(String message) {
        log.info(String.format("Forwarding a job message [%s] to queue [%s]", message, Source.OUTPUT));
        source.output().send(MessageBuilder.withPayload(message).build());
    }
}
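And a hypothetical caller tying it back to the inbound @StreamListener (the async step and its names are assumptions):

@EnableBinding(Sink.class)
public class JobReceiver {

    @Autowired
    private JobForwarder jobForwarder;

    @StreamListener(Sink.INPUT)
    public void onJob(String message) {
        // processAsync is a hypothetical CompletableFuture-returning step.
        processAsync(message).thenAccept(jobForwarder::forwardScheduledJob);
    }
}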