spring-cloud-contract-verifier: clean up messages in Camel Kafka topics (Java)

During a contract test I run the main FLOW, which produces two events into different Kafka topics (TOPIC_1 and TOPIC_2). I have two separate tests that verify these events (TEST_1 for TOPIC_1 and TEST_2 for TOPIC_2). Both TEST_1 and TEST_2 run the same FLOW, so TEST_1 has the side effect of also producing an event into TOPIC_2, and TEST_2 into TOPIC_1. Consider the case where I run TEST_1 and then TEST_2: during TEST_2 there will be two events in TOPIC_2, one produced by TEST_1 and one produced by TEST_2. TEST_2 will then fail, because during verification it expects to receive only the message it produced itself and nothing else.
That is why I need to skip all old messages in all topics before each test. How can this be done using org.springframework.cloud.contract.verifier.messaging.internal.ContractVerifierMessaging?

I found a solution where I poll all messages from each endpoint in the CamelContext:
@Autowired
private org.apache.camel.CamelContext camelContext;

@org.junit.Before
public void cleanUpCamelEndpoints() {
    // drain every endpoint so exchanges left over from previous tests are discarded
    for (Endpoint endpoint : camelContext.getEndpoints()) {
        try {
            PollingConsumer pollingConsumer = endpoint.createPollingConsumer();
            Exchange exchangeToSkip;
            while ((exchangeToSkip = pollingConsumer.receiveNoWait()) != null) {
                log.debug("Skipped side effect exchange: {}", exchangeToSkip);
            }
        } catch (Exception exception) {
            log.debug("Exception while receiving exchange to skip: " + exception);
        }
    }
}
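If you would rather drain the stale messages through ContractVerifierMessaging itself, as the question asks, something along these lines might work. This is only a sketch: the topic names are placeholders, and it assumes the receive(destination, timeout, timeUnit) overload exists in your version and returns null when nothing arrives within the timeout, which you should verify against the spring-cloud-contract version you use.

@Autowired
private org.springframework.cloud.contract.verifier.messaging.internal.ContractVerifierMessaging contractVerifierMessaging;

@org.junit.Before
public void skipStaleContractMessages() {
    // placeholder topic names - list every destination your contracts read from
    for (String topic : java.util.Arrays.asList("TOPIC_1", "TOPIC_2")) {
        // assumption: the short poll returns null once the topic is empty
        while (contractVerifierMessaging.receive(topic, 100, java.util.concurrent.TimeUnit.MILLISECONDS) != null) {
            // discard the stale message left over from a previous test
        }
    }
}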

Related

Spring transaction synchronisation not working (TransactionalEventListener)

I am aware this question has been asked in slightly different forms on this site, but following the advice given in those posts took me nowhere. I have already spent close to two days on this and I am out of ideas.
We have a Spring Boot microservice which does nothing more than listen for a message arriving on an IBM MQ queue, do a little bit of transformation, and forward it to a Kafka topic. We want this to be transactional so that no message is lost (critical to our business). We also want to be able to react to transaction commit and rollback events for monitoring and support purposes.
I just followed a few "how to" guides on the internet, and I can easily achieve the transactional behaviour declaratively with the @Transactional annotation, like below:
@Transactional(transactionManager = "chainedTransactionManager", rollbackFor = Throwable.class)
@JmsListener(destination = "DEV.QUEUE.1", containerFactory = "mqListenerContainerFactory", concurrency = "10")
public void receiveMessage(@Headers Map<String, Object> jmsHeaders, String message) {
    // Some work here, including forwarding to the Kafka topic:
    // ...
    // ...
    // Then publish an event which is supposed to be acted on:
    applicationEventPublisher.publishEvent(new MqConsumedEvent("JMS Correlation ID", "Message Payload"));
    // The exception below is left in to create a rollback scenario;
    // comment it out to have the processing complete
    throw new RuntimeException("No good Pal!");
}
As expected, when playing a message with the exception in place, the processing spins forever because the transaction manager rolls back again and again. This is good for us.
Now we expect the MqConsumedEvent published inside our listener method to be intercepted by the onRollback method below:
@Component
@Slf4j
public class MqConsumedEventListener {

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT, classes = MqConsumedEvent.class)
    public void onCommit(MqConsumedEvent event) {
        log.info("MQ message with correlation id {} committed to Kafka", event.getCorrelationId());
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK, classes = MqConsumedEvent.class)
    public void onRollback(MqConsumedEvent event) {
        log.info("Failed to commit MQ message with correlation id {} to Kafka", event.getCorrelationId());
    }
}
This is not happening. Similarly, commenting out the exception in the listener makes our MQ message get forwarded to Kafka, but the onCommit method is not executed either.
From further research and Spring debugging, I believe this is because Spring thinks there is no active transaction when the event is published, so the event is simply ignored. Evaluating TransactionSynchronizationManager.isActualTransactionActive() and printing it in the logs shows false, which is hard to explain, because as I said the transaction rolls back as expected when an exception is thrown on purpose.
Thank you in advance for your inputs.
UPDATE:
The breakpoints I set brought me to the execution of this method of the ApplicationListenerMethodTransactionalAdapter class:
@Override
public void onApplicationEvent(ApplicationEvent event) {
    if (TransactionSynchronizationManager.isSynchronizationActive() &&
            TransactionSynchronizationManager.isActualTransactionActive()) {
        TransactionSynchronization transactionSynchronization = createTransactionSynchronization(event);
        TransactionSynchronizationManager.registerSynchronization(transactionSynchronization);
    }
    else if (this.annotation.fallbackExecution()) {
        if (this.annotation.phase() == TransactionPhase.AFTER_ROLLBACK && logger.isWarnEnabled()) {
            logger.warn("Processing " + event + " as a fallback execution on AFTER_ROLLBACK phase");
        }
        processEvent(event);
    }
    else {
        // No transactional event execution at all
        if (logger.isDebugEnabled()) {
            logger.debug("No transaction is active - skipping " + event);
        }
    }
}
For reasons I do not understand, the first if condition is false. And since fallbackExecution is false as well (I haven't set it to true in my @TransactionalEventListener usage), execution ends up in the else branch and the event is simply skipped.
I had the same problem. In my case it turned out that I had defined an ApplicationEventMulticaster in my project:
@Bean
public ApplicationEventMulticaster applicationEventMulticaster() {
    var eventMulticaster = new SimpleApplicationEventMulticaster();
    eventMulticaster.setTaskExecutor(new SimpleAsyncTaskExecutor());
    return eventMulticaster;
}
That makes the ApplicationListenerMethodTransactionalAdapter execute on a different thread (not the one where the event was published). That is why TransactionSynchronizationManager.isActualTransactionActive() ends up false and the event does not get processed.
Removing the definition of the ApplicationEventMulticaster fixed it for me.
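If you still want the listener callbacks to run on a separate thread, a common alternative (shown here only as a sketch) is to leave the multicaster synchronous, so the transactional adapter registers its synchronization on the publishing thread, and mark the listener method @Async instead:

@Configuration
@EnableAsync
public class AsyncConfig {
    // no custom ApplicationEventMulticaster here: the event is multicast on the publishing thread,
    // so TransactionSynchronizationManager still sees the active transaction
}

@Component
@Slf4j
public class MqConsumedEventListener {

    @Async // invoked on a task-executor thread, but only after the transaction has committed
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT, classes = MqConsumedEvent.class)
    public void onCommit(MqConsumedEvent event) {
        log.info("MQ message with correlation id {} committed to Kafka", event.getCorrelationId());
    }
}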

RSocket Channel with Spring Boot - Clients miss their own first message

Suppose I have a simple RSocket and Spring Boot Server. The server broadcasts all incoming client messages to all connected clients (including the sender). Client and server look like this:
Server:
public RSocketController() {
    this.processor = DirectProcessor.<String>create().serialize();
    this.sink = this.processor.sink();
}

@MessageMapping("channel")
Flux<String> channel(final Flux<String> messages) {
    this.registerProducer(messages);
    // breakpoint here
    return processor
            .doOnSubscribe(subscription -> logger.info("sub"))
            .doOnNext(message -> logger.info("[Sent] " + message));
}

private Disposable registerProducer(Flux<String> flux) {
    return flux
            .doOnNext(message -> logger.info("[Received] " + message))
            .map(String::toUpperCase)
            // .delayElements(Duration.ofSeconds(1))
            .subscribe(this.sink::next);
}
Client:
#ShellMethod("Connect to the server")
public void connect(String name) {
this.name = name;
this.rsocketRequester = rsocketRequesterBuilder
.rsocketStrategies(rsocketStrategies)
.connectTcp("localhost", 7000)
.block();
}
#ShellMethod("Establish a channel")
public void channel() {
this.rsocketRequester
.route("channel")
.data(this.fluxProcessor.doOnNext(message -> logger.info("[Sent] {}", message)))
.retrieveFlux(String.class)
.subscribe(message -> logger.info("[Received] {}", message));
}
#ShellMethod("Send a lower case message")
public void send(String message) {
this.fluxSink.next(message.toLowerCase());
}
The problem is: the first message a client sends is processed by the server, but does not reach the sender again. All subsequent messages are delivered without any problems. All other clients already connected will receive all messages.
What I noticed so far while debugging
when I call channel() in the client, retrieveFlux() and subscribe() are called. But on the server the breakpoint is not triggered in the corresponding method.
Only when the client sends the first message with send() is the breakpoint triggered on the server.
Using the .delayElements() on the server seems to "solve" the problem.
What am I doing wrong here?
And why does it take the first send() to trigger the server's breakpoint?
Thanks in advance!
A DirectProcessor does not have a buffer. If it does not have a subscriber, the message is dropped.
(Citing from its Javadoc: If there are no Subscribers, upstream items are dropped)
I think that when RSocketController.registerProducer() calls flux.[...].subscribe() it immediately starts processing the incoming messages from flux and passing them to the sink of the processor, but subscription to the processor has not happened yet. Thus the messages are dropped.
I guess that the subscription to the processor is done by the framework after returning from the RSocketController.channel(...) method. -- You should be able to set a breakpoint in your processor.doOnSubscribe(..) callback to see where it actually happens.
Thus maybe moving the registerProducer() call into a processor.doOnSubscribe() callback will solve your issue, like this:
#MessageMapping("channel")
Flux<String> channel(final Flux<String> messages) {
return processor
.doOnSubscribe(subscription -> this.registerProducer(messages))
.doOnSubscribe(subscription -> logger.info("sub"))
.doOnNext(message -> logger.info("[Sent] " + message));
}
But personally I would prefer to replace the DirectProcessor with UnicastProcessor.create().onBackpressureBuffer().publish(), so that broadcasting to multiple subscribers becomes a separate operation, there can be a buffer between the sink and the subscribers, and late subscribers and backpressure can be handled in a better way.
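A rough sketch of that variant (the field names are my own, and note that these processors are deprecated in Reactor 3.4+ in favour of Sinks, so treat this as an outline rather than a drop-in replacement):

private final UnicastProcessor<String> processor = UnicastProcessor.create();
private final FluxSink<String> sink = processor.sink();
// buffer between the sink and the subscribers, then share a single upstream among all clients
private final Flux<String> shared = processor.onBackpressureBuffer().publish().autoConnect();

@MessageMapping("channel")
Flux<String> channel(final Flux<String> messages) {
    return shared
            .doOnSubscribe(subscription -> this.registerProducer(messages))
            .doOnNext(message -> logger.info("[Sent] " + message));
}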

Kafka SpringBoot StreamListener - how to consume multiple topics in order?

I have multiple StreamListener-annotated methods consuming from different topics. Some of these topics need to be read from the "earliest" offset to populate an in-memory map (something like a state machine); only afterwards should the other topics be consumed, since they may contain commands that must be executed against the "latest" state machine.
Current code looks something like:
@Component
@AllArgsConstructor
@EnableBinding({InputChannel.class, OutputChannel.class})
@Slf4j
public class KafkaListener {

    @StreamListener(target = InputChannel.EVENTS)
    public void event(Event event) {
        // do something with the event
    }

    @StreamListener(target = InputChannel.COMMANDS)
    public void command(Command command) {
        // do something with the command only after all events have been processed
    }
}
I tried to add some horrible code that reads the Kafka topic offset metadata from the incoming event messages and then uses a semaphore to block the command handler until the events have reached a certain percentage of the total offset. It kind of works, but it makes me sad, and it will be awful to maintain once we have 20 or so topics that all depend on one another.
Does SpringBoot / Spring Streams have any built-in mechanism to do this, or is there some common pattern that people use that I'm not aware of?
TL;DR: How do I process all messages from topic A before consuming any from topic B, without doing something dirty like sticking a Thread.sleep(60000) in the consumer for topic B?
See the kafka consumer binding property resetOffsets
resetOffsets
Whether to reset offsets on the consumer to the value provided by startOffset. Must be false if a KafkaRebalanceListener is provided; see Using a KafkaRebalanceListener.
Default: false.
startOffset
The starting offset for new groups. Allowed values: earliest and latest. If the consumer group is set explicitly for the consumer 'binding' (through spring.cloud.stream.bindings..group), 'startOffset' is set to earliest. Otherwise, it is set to latest for the anonymous consumer group. Also see resetOffsets (earlier in this list).
Default: null (equivalent to earliest).
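For example, if the events binding should always re-read its topic from the beginning (the binding name events is an assumption, meant to match InputChannel.EVENTS), the Kafka-specific consumer properties would look roughly like this:
spring.cloud.stream.bindings.events.group=my-group
spring.cloud.stream.kafka.bindings.events.consumer.resetOffsets=true
spring.cloud.stream.kafka.bindings.events.consumer.startOffset=earliest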
You can also add a KafkaBindingRebalanceListener and perform seeks on the consumer.
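A sketch of such a listener, seeking to the beginning of the events binding only on the initial assignment (the binding name and the initial-only check are assumptions):

@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            // re-read the whole topic on the first assignment only
            if (initial && "events".equals(bindingName)) {
                consumer.seekToBeginning(partitions);
            }
        }
    };
}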
EDIT
You can also set autoStartup to false on the second listener, and start the binding when you are ready. Here's an example:
@SpringBootApplication
@EnableBinding(Sink.class)
public class Gitter55Application {

    public static void main(String[] args) {
        SpringApplication.run(Gitter55Application.class, args);
    }

    @Bean
    public ConsumerEndpointCustomizer<KafkaMessageDrivenChannelAdapter<?, ?>> customizer() {
        return (endpoint, dest, group) -> {
            endpoint.setOnPartitionsAssignedSeekCallback((assignments, callback) -> {
                assignments.keySet().forEach(tp -> callback.seekToBeginning(tp.topic(), tp.partition()));
            });
        };
    }

    @StreamListener(Sink.INPUT)
    public void listen(String value, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key) {
        System.out.println(new String(key) + ":" + value);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template,
            BindingsEndpoint bindings) {

        return args -> {
            while (true) {
                template.send("gitter55", "foo".getBytes(), "bar".getBytes());

                System.out.println("Hit enter to start");
                System.in.read();
                bindings.changeState("input", State.STARTED);
            }
        };
    }
}
spring.cloud.stream.bindings.input.group=gitter55
spring.cloud.stream.bindings.input.destination=gitter55
spring.cloud.stream.bindings.input.content-type=text/plain
spring.cloud.stream.bindings.input.consumer.auto-startup=false

Spring 5 reactive websockets: Clients not receiving same data from hot stream

I have this in my WebSocketHandler implementation:
@Override
public Mono<Void> handle(WebSocketSession session) {
    return session.send(
            session.receive()
                    .flatMap(webSocketMessage -> {
                        int id = Integer.parseInt(webSocketMessage.getPayloadAsText());
                        Flux<EfficiencyData> flux = service.subscribeToEfficiencyData(id);
                        var publisher = flux
                                .<String>handle((o, sink) -> {
                                    try {
                                        sink.next(objectMapper.writeValueAsString(o));
                                    } catch (JsonProcessingException e) {
                                        e.printStackTrace();
                                    }
                                })
                                .map(session::textMessage);
                        return publisher;
                    })
    );
}
The Flux<EfficiencyData> is currently generated for testing in the service as follows:
public Flux<EfficiencyData> subscribeToEfficiencyData(long weavingLoomId) {
    return Flux.interval(Duration.ofSeconds(1))
            .map(aLong -> {
                longAdder.increment();
                return new EfficiencyData(new MachineSpeed(
                        RotationSpeed.ofRpm(longAdder.intValue()),
                        RotationSpeed.ofRpm(0),
                        RotationSpeed.ofRpm(400)));
            }).publish().autoConnect();
}
I am using publish().autoConnect() to make it a hot stream. I created a unit test that starts 2 threads that do this on the returned Flux:
flux.log().handle((s, sink) -> {
    LOGGER.info("{}", s.getMachineSpeed().getCurrent());
}).subscribe();
In this case, I see both threads printing out the same value every second.
However, when I open 2 browser tabs, I don't see the same values on both web pages. The more websocket clients connect, the further apart the values are (each value from the original Flux seems to be sent to only one client, instead of being sent to all of them).
Managed to fix this thanks to Brian Clozel on Twitter.
The issue is that for each connecting websocket client, I call the service.subscribeToEfficiencyData(id) method, which returns a new Flux every time it is called. So of course those independent Fluxes are not shared between the different websocket clients.
To fix the issue, I create the Flux instance in the constructor or a @PostConstruct method of my service, so subscribeToEfficiencyData returns the same Flux instance every time.
Note that .publish().autoConnect() on the Flux remains important, because without it websocket clients will again see different values!
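A minimal sketch of that change (the service class name is assumed; the test data generation is the same as above):

@Service
public class EfficiencyDataService {

    private final LongAdder longAdder = new LongAdder();
    private final Flux<EfficiencyData> sharedFlux;

    public EfficiencyDataService() {
        // build the hot stream once, so every caller shares the same sequence
        this.sharedFlux = Flux.interval(Duration.ofSeconds(1))
                .map(aLong -> {
                    longAdder.increment();
                    return new EfficiencyData(new MachineSpeed(
                            RotationSpeed.ofRpm(longAdder.intValue()),
                            RotationSpeed.ofRpm(0),
                            RotationSpeed.ofRpm(400)));
                })
                .publish()
                .autoConnect();
    }

    public Flux<EfficiencyData> subscribeToEfficiencyData(long weavingLoomId) {
        // same instance for every websocket client
        return sharedFlux;
    }
}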

Camel: PollEnrich generating a lot of Timed Waiting threads

I have this camel route
from("file:{{PATH_INPUT}}?charset=iso-8859-1&delete=true")
.process(new ProcessorName())
.pollEnrich().simple("${property.URI_FILE}", String.class).aggregationStrategy(new Estrategia()).timeout(10000).aggregateOnException(true)
.choice()
.when(simple("${property.result} == 'OK'"))
.to(URI_OUTPUT)
.endChoice();
This route takes a file from PATH_INPUT, compares it with the file URI_FILE (I generate the URI_FILE property in ProcessorName()), and if the URI_FILE body contains a specific piece of data, the result is "OK" and the message is sent to URI_OUTPUT (ActiveMQ).
This works fine, but later I noticed that it generates a lot of waiting threads, one per exchange.
I don't know why this is happening. I have tried with a ConsumerTemplate and the results are the same.
Yes, this is expected if you generate a unique URI per endpoint you poll. I assume you generate a dynamic fileName which you specify in that URI, and that you see a thread per endpoint?
I have logged a ticket to make this easier in the future:
https://issues.apache.org/jira/browse/CAMEL-11250
If you just want to set the message body to a specific file name, then the fastest and easiest way is to use setBody with a java.io.File type:
.setBody(simple("${property.URI_FILE}", java.io.File.class))
I ran into the same trouble and faced a memory leak. As a workaround, I implemented my own org.apache.camel.spi.PollingConsumerPollStrategy, which captures the Consumer when it is begun (by pollEnrich) and hands it to a bean that holds all of these consumers in a Map.
Then I added a timer route whose only job is to trigger a purge action on the Map, checking whether a given time limit has been exceeded for each consumer. If so, it stops the Consumer (thereby interrupting its related thread) and removes it from the Map.
Like this:
from("direct://foo")
.to("an endpoint that returns the file name")
.pollEnrich()
.simple("file://{{app.runtime.draft.path}}"
+ "?fileName=${body}"
+ "&recursive=true"
+ "&delete=true"
+ "&pollStrategy=#myFilePollingStrategy" // my poll strategy
+ "&maxMessagesPerPoll=1")
.timeout(6 * 1000L)
.end()
.to("direct://a")
.to("direct://b")
.to("direct://c")
.end();
from("timer://file-consumer-purge?period=5s")
.bean(fileConsumerController, "purge")
.end();
#Component
public class FileConsumerController {
private Map<Consumer, Long> mapConsumers = new ConcurrentHashMap<>();
private static final long LIMIT = 25 * 1000L; // 25 seconds
public void hold(Consumer consumer) {
mapConsumers.put(consumer, System.currentTimeMillis());
}
public void purge() {
mapConsumers.forEach((consumer, startTime) -> {
if (System.currentTimeMillis() - startTime > LIMIT) {
try {
consumer.stop();
} catch (Exception e) {
e.printStackTrace();
} finally {
mapConsumers.remove(consumer);
}
}
});
}
}
#Component
public class MyFilePollingStrategy extends DefaultPollingConsumerPollStrategy {
#Autowired
FileConsumerController fileConsumerController;
#Override
public boolean begin(Consumer consumer, Endpoint endpoint) {
fileConsumerController.hold(consumer);
return super.begin(consumer, endpoint);
}
}
Notes:
I monitored the behavior through JConsole.
I've only overridden the begin() method and haven't tested how this behaves in unexpected / error scenarios.
Hope this helps for now, and may the component be improved. :)
