Intercepting Spring Cloud Stream Messages from Consumer only - java

I am currently using Spring Cloud Stream with the Kafka binder and a GlobalChannelInterceptor to perform message logging for my Spring Boot microservices.
I have:
a producer to publish messages to a SubscribableChannel
a consumer to listen to the stream (using the @StreamListener annotation)
Throughout the process, when a message is published to the stream by the producer and received by the consumer, I observe that the preSend method is triggered twice:
Once at the producer side - when the message is published to the stream
Once at the consumer side - when the message is received from the stream
However, for my logging purposes, I only need to intercept and log the message at the consumer side.
Is there any way to intercept the SCS message ONLY on one side (e.g. the consumer side)?
I would appreciate any thoughts on this matter. Thank you!
Ref:
GlobalChannelInterceptor documentation - https://docs.spring.io/spring-integration/api/org/springframework/integration/config/GlobalChannelInterceptor.html
EDIT
Producer
public void sendToPushStream(PushStreamMessage message) {
    try {
        boolean results = streamChannel.pushStream()
                .send(MessageBuilder.withPayload(new ObjectMapper().writeValueAsString(message)).build());
        log.info("Push stream message {} sent to {}.", results ? "successfully" : "not", StreamChannel.PUSH_STREAM);
    } catch (JsonProcessingException ex) {
        log.error("Unable to parse push stream message.", ex);
    }
}
Producer's streamChannel
public interface StreamChannel {
    String PUSH_STREAM = "PushStream";

    @Output(StreamChannel.PUSH_STREAM)
    SubscribableChannel pushStream();
}
Consumer
@StreamListener(StreamChannel.PUSH_STREAM)
public void handle(Message<PushStreamMessage> message) {
    log.info("Incoming stream message from {}, {}", streamChannel.pushStream(), message);
}
Consumer's streamChannel
public interface StreamChannel {
    String PUSH_STREAM = "PushStream";

    @Input(StreamChannel.PUSH_STREAM)
    SubscribableChannel pushStream();
}
Interceptor (Common Library)
public class GlobalStreamInterceptor extends ChannelInterceptorAdapter {

    @Override
    public Message<?> preSend(Message<?> msg, MessageChannel mc) {
        log.info("presend " + msg);
        return msg;
    }

    @Override
    public void postSend(Message<?> msg, MessageChannel mc, boolean sent) {
        log.info("postSend " + msg);
    }
}

Right, why not follow the GlobalChannelInterceptor options and apply its patterns attribute:
An array of simple patterns against which channel names will be matched.
So, you may have something like this:
@GlobalChannelInterceptor(patterns = Processor.INPUT)
Or use the custom name of the input channel in your SCSt app.
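A minimal sketch of that approach, assuming the interceptor is registered as a bean only in the consumer application and the binding is the PushStream channel from the question (both sides use that channel name, so if this config lived in the shared library it would match in the producer app as well):
@Configuration
public class ConsumerInterceptorConfig {

    // Register the interceptor only for channels whose name matches "PushStream",
    // i.e. the consumer's input binding; other channels are not intercepted.
    @Bean
    @GlobalChannelInterceptor(patterns = StreamChannel.PUSH_STREAM)
    public ChannelInterceptor pushStreamInterceptor() {
        return new GlobalStreamInterceptor();
    }
}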

Related

Remove "ActiveMQ.Advisory.Producer.x" prefix

Problem:
Somehow the producer is sending events to "ActiveMQ.Advisory.Producer.Queue.Queue" instead of "Queue".
In the ActiveMQ admin console, the Topics section shows the producer queue (not sure why it lists the queue with 0 consumers and 38 messages enqueued).
In the Queues section, the consumer queue shows 1 consumer but 0 messages enqueued.
Attaching Producer, Consumer and Config code.
Producer
public void sendMessage(WorkflowRun message) {
    var queue = "Queue";
    try {
        log.info("Attempting to send message to queue: " + queue);
        jmsTemplate.convertAndSend(queue, message);
    } catch (Exception e) {
        log.error("Received exception during send message: ", e);
    }
}
Listener
@JmsListener(destination = "Queue")
public void messageListener(SystemMessage systemMessage) {
    LOGGER.info("Message received! {}", systemMessage);
}
Config
@Value("${spring.active-mq.broker-url}")
private String brokerUrl;

@Bean
public ConnectionFactory connectionFactory() throws JMSException {
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    activeMQConnectionFactory.setWatchTopicAdvisories(false);
    activeMQConnectionFactory.createQueueConnection(ActiveMQConnectionFactory.DEFAULT_USER,
            ActiveMQConnectionFactory.DEFAULT_PASSWORD);
    return activeMQConnectionFactory;
}
When your producer starts, the ActiveMQ broker produces an 'Advisory Message' and sends it to that topic. The count indicates how many producers have been created for queue://Queue -- in this case, 38 producers have been created.
Since no messages are actually being produced, it appears that in your Spring wiring the connection, session and producer objects are being created, but the messages are not being sent.
Additionally, if queue://ActiveMQ.Advisory.. is showing up, you probably have a bug in some other part of the app (or a monitoring tool?) that should be configured to consume from topic://ActiveMQ.Advisory.. instead of queue://.
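For comparison, here is a minimal sketch of producer-side JmsTemplate wiring with the destination type pinned to queues; this bean method is an assumption and may differ from the question's actual configuration:
@Bean
public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    // false (the default) resolves the destination name "Queue" to queue://Queue
    // rather than topic://Queue
    template.setPubSubDomain(false);
    return template;
}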

Google PubSub resent messages aren't being processed

I've used the subscriber example from the Google documentation for Google Pub/Sub; the only modification I've made is commenting out the acknowledgement of the messages.
The subscriber no longer adds messages to the queue, even though messages should be resent according to the interval set in the Google Cloud console.
Why is this happening, or am I missing something?
public class SubscriberExample {

    // use the default project id
    private static final String PROJECT_ID = ServiceOptions.getDefaultProjectId();
    private static final BlockingQueue<PubsubMessage> messages = new LinkedBlockingDeque<>();

    static class MessageReceiverExample implements MessageReceiver {

        @Override
        public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
            messages.offer(message);
            // consumer.ack();
        }
    }

    /** Receive messages over a subscription. */
    public static void main(String[] args) throws Exception {
        // set subscriber id, eg. my-sub
        String subscriptionId = args[0];
        ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(
                PROJECT_ID, subscriptionId);
        Subscriber subscriber = null;
        try {
            // create a subscriber bound to the asynchronous message receiver
            subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample()).build();
            subscriber.startAsync().awaitRunning();
            // Continue to listen to messages
            while (true) {
                PubsubMessage message = messages.take();
                System.out.println("Message Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
            }
        } finally {
            if (subscriber != null) {
                subscriber.stopAsync();
            }
        }
    }
}
When you do not acknowledge a message, the Java client library calls modifyAckDeadline on the message until maxAckExtensionPeriod passes. By default, this value is one hour. Therefore, if you don't ack/nack the message or change this value, it is likely the message will not be redelivered for an hour. If you want to change the max ack extension period, set it on the builder:
subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample())
        .setMaxAckExtensionPeriod(Duration.ofSeconds(60))
        .build();
It is also worth noting that when you don't ack or nack messages, flow control may prevent the delivery of more messages. By default, the Java client library allows up to 1,000 messages to be outstanding, i.e., waiting for an ack or nack or for the max ack extension period to pass.
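For completeness, a sketch of raising that flow-control limit on the same builder; the 2,000-message cap is an arbitrary example value:
FlowControlSettings flowControlSettings = FlowControlSettings.newBuilder()
        // allow up to 2,000 unacknowledged messages to be outstanding at once
        .setMaxOutstandingElementCount(2000L)
        .build();

subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample())
        .setMaxAckExtensionPeriod(Duration.ofSeconds(60))
        .setFlowControlSettings(flowControlSettings)
        .build();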

Validating multiple messages on the same JMS endpoint in Citrus Framework

I'm sending a single message that produces multiple messages, two of which arrive on the same JMS endpoint.
runner.send(sendMessageBuilder -> sendMessageBuilder.endpoint(inputMessage.getEndpoint())
        .messageType(MessageType.XML)
        .payload(inputMessage.getPayload())
        .header(JMSOUTPUTCORRELATIONID, correlationId));

for (OutputMessage outputMessage : inputMessage.getOutputMessages()) {
    runner.receive(receiveMessageBuilder -> receiveMessageBuilder.endpoint(outputMessage.getEndpoint())
            .schemaValidation(false)
            .payload(outputMessage.getPayload())
            .header(JMSOUTPUTCORRELATIONID, correlationId));
}
When validating two messages on the same endpoint, I'm having trouble finding a way to match them to their respective expected outputs.
I was wondering if Citrus has a built-in way to do this, or if I could build in a condition that checks the other expected outputs if the first one fails.
I've added a custom validator.
List<OutputMessage> outputMessages = inputMessage.getOutputMessages();
while (outputMessages.size() > 0) {
    OutputMessage outputMessage = outputMessages.get(0);
    runner.receive(receiveMessageBuilder -> receiveMessageBuilder.endpoint(outputMessage.getEndpoint())
            .schemaValidation(true)
            .validator(new MultipleOutputMessageValidator(outputMessages))
            .header(JMSOUTPUTCORRELATIONID, correlationId));
}
The validator is provided with the list of expected outputs that have not yet been validated. It then tries to validate each of the expected outputs in the list against the received message and, if the validation is successful, removes that expected output from the list.
public class MultipleOutputMessageValidator extends DomXmlMessageValidator {

    private static Logger log = LoggerFactory.getLogger(MultipleOutputMessageValidator.class);

    private List<OutputMessage> controlMessages;

    public MultipleOutputMessageValidator(List<OutputMessage> controlMessages) {
        this.controlMessages = controlMessages;
    }

    @Override
    public void validateMessagePayload(Message receivedMessage, Message controlMessage, XmlMessageValidationContext validationContext, TestContext context) throws ValidationException {
        Boolean isValidated = false;
        for (OutputMessage message : this.controlMessages) {
            try {
                super.validateMessagePayload(receivedMessage, message, validationContext, context);
                isValidated = true;
                controlMessages.remove(message);
                break;
            } catch (ValidationException e) {
                // Do nothing for now
            }
        }
        if (!isValidated) {
            throw new ValidationException("None of the messages validated");
        }
    }
}
You should use JMS message selectors so you can "pick" one of the messages from that queue based on a technical identifier. This selector can be a JMS message header for instance (in your case the header JMSOUTPUTCORRELATIONID). This way you make sure to receive the message that you want to validate first.
Example usage:
receive(action -> action.endpoint(someEndpoint)
        .selector("correlationId='Cx1x123456789' AND operation='getOrders'"));
Citrus message selector support is described here
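Applied to the receive loop from the question, a sketch might look like the following; it assumes each expected output carries its own distinguishing header value (the messageType header and the getType() accessor are hypothetical) so that each receive action selects exactly one of the two messages:
for (OutputMessage outputMessage : inputMessage.getOutputMessages()) {
    runner.receive(receiveMessageBuilder -> receiveMessageBuilder.endpoint(outputMessage.getEndpoint())
            // JMSOUTPUTCORRELATIONID holds the header name used when sending;
            // messageType/getType() are hypothetical and stand in for whatever
            // actually distinguishes the two expected outputs
            .selector(JMSOUTPUTCORRELATIONID + "='" + correlationId
                    + "' AND messageType='" + outputMessage.getType() + "'")
            .schemaValidation(false)
            .payload(outputMessage.getPayload())
            .header(JMSOUTPUTCORRELATIONID, correlationId));
}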

How can we make a producer in Spring AMQP/RabbitMQ wait after sending all messages and release it upon receiving all replies?

I am queuing all messages to a RabbitMQ queue and processing them on a remote server. Below are my producer and reply handler, in the same class.
public class AmqpAsynchRpcItemWriter<T> implements ItemWriter<T>, MessageListener {

    protected String exchange;
    protected String routingKey;
    protected String queue;
    protected String replyQueue;
    protected RabbitTemplate template;

    // Reply handler
    @Override
    public void onMessage(Message message) {
        try {
            String corrId = new String(message.getMessageProperties()
                    .getCorrelationId(), "UTF-8");
            System.out.println("received " + corrId + " from " + this.replyQueue);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Producer
    @Override
    public void write(List<? extends T> items) throws Exception {
        for (T item : items) {
            System.out.println(item);
            System.out.println("Queuing " + item + " to " + this.queue);
            Message message = MessageBuilder
                    .withBody(item.toString().getBytes())
                    .setContentType(MessageProperties.CONTENT_TYPE_TEXT_PLAIN)
                    .setReplyTo(this.replyQueue)
                    .setCorrelationId(item.toString().getBytes()).build();
            template.send(this.exchange, this.routingKey, message);
            System.out.println("Queued " + item + " to " + this.queue);
        }
        // It should wait here until we get all replies in onMessage. How can we do this?
    }
}
I am sending all the messages in the write method and getting the replies in onMessage. This works properly, but write doesn't wait for the replies; it returns to the caller and the Spring Batch step is marked completed.
I want the process to wait after sending all the messages until we get all the replies in onMessage. How can we do this?
You can use any number of synchronization techniques; for example, have the listener put the replies in a LinkedBlockingQueue and have the sender take (or poll with a timeout) from that queue until all the replies are received.
Or, don't use a listener at all and simply use the same RabbitTemplate to receive() from the reply queue until all the replies are received.
However, receive() returns null if the queue is empty, so you'll have to sleep between receives to avoid spinning the CPU.
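A minimal sketch of the first technique, adapted to the writer in the question (the 30-second per-reply timeout is an arbitrary example, and the reply payload is used only as a signal that one reply has arrived):
private final BlockingQueue<String> replies = new LinkedBlockingQueue<>();

// Reply handler: hand each reply over to the thread blocked in write()
@Override
public void onMessage(Message message) {
    replies.offer(new String(message.getBody(), StandardCharsets.UTF_8));
}

@Override
public void write(List<? extends T> items) throws Exception {
    for (T item : items) {
        // ... build and send the message exactly as in the original write() ...
    }
    // Block until one reply per sent item has arrived, or fail on timeout
    for (int i = 0; i < items.size(); i++) {
        String reply = replies.poll(30, TimeUnit.SECONDS);
        if (reply == null) {
            throw new IllegalStateException(
                    "Timed out waiting for reply " + (i + 1) + " of " + items.size());
        }
    }
}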

Rabbit prefetch

I use Spring AMQP with RabbitMQ. I want to get one message without prefetch.
I configured it with:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(rabbitConnectionFactory());
container.setQueueNames(
        ProjectConfigs.getInstance().get_RABBIT_TASK_QUEUE()
);
container.setMessageListener(taskListener());
container.setConcurrentConsumers(1);
container.setPrefetchCount(1);
container.setTxSize(1);
return container;
How do I disable prefetch and get only one message?
Prefetch simply controls how many messages the broker allows to be outstanding at the consumer at a time. When set to 1, the broker sends 1 message, waits for the ack, then sends the next.
It defaults to 1. Setting it to 0 means the broker will send unlimited messages to the consumer, regardless of acks.
If you only want one message and then stop, you shouldn't use a container; you can use one of the RabbitTemplate.receive() methods instead.
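A minimal sketch of that approach, assuming a RabbitTemplate bean is available and reusing the queue name from the container configuration above:
String queueName = ProjectConfigs.getInstance().get_RABBIT_TASK_QUEUE();

// Pull exactly one message; returns null immediately if the queue is empty
Message message = rabbitTemplate.receive(queueName);
if (message != null) {
    LOGGER.info(new String(message.getBody(), StandardCharsets.UTF_8));
}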
I tried to do it with Spring AMQP:
@Bean
public MessageListener taskListener() {
    return new MessageListener() {
        public void onMessage(Message message) {
            try {
                LOGGER.info(new String(message.getBody(), "UTF-8"));
                Converter converter = new Converter();
                converter.startConvert(new String(message.getBody(), "UTF-8"));
            } catch (Exception e) {
                LOGGER.error(getStackTrace(e));
            }
        }
    };
}
