Spring Cloud AWS multiple SQS listeners - Java

There are two SQS listeners in my project. I want one of them to keep the current settings and the other to use a different setting. The only value I want to change is maxNumberOfMessages.
What is the most practical way to do this? I want to set a different maxNumberOfMessages value for one of the listeners.
This is my config:
@Bean
public AWSCredentialsProvider awsCredentialsProvider(@Value("${cloud.aws.profile}") String profile,
                                                     @Value("${cloud.aws.region.static}") String region,
                                                     @Value("${cloud.aws.roleArn}") String role,
                                                     @Value("${cloud.aws.user}") String user) {
    ...
    return new AWSStaticCredentialsProvider(sessionCredentials);
}

@Bean
@Primary
@Qualifier("amazonSQSAsync")
public AmazonSQSAsync amazonSQSAsync(@Value("${cloud.aws.region.static}") String region, AWSCredentialsProvider awsCredentialsProvider) {
    return AmazonSQSAsyncClientBuilder.standard()
            .withCredentials(awsCredentialsProvider)
            .withRegion(region)
            .build();
}

@Bean
@Primary
public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
    SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
    factory.setAmazonSqs(amazonSqs);
    factory.setMaxNumberOfMessages(1);
    factory.setWaitTimeOut(10);
    factory.setQueueMessageHandler(new SqsQueueMessageHandler());
    return factory;
}
This is the listener:
@SqsListener(value = "${messaging.queue.blabla.source}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void listen(Message message, Acknowledgment acknowledgment, @Header("MessageId") String messageId) {
    log.info("Message Received");
    try {
        ....
        acknowledgment.acknowledge().get();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } catch (ExecutionException e) {
        e.printStackTrace();
    } catch (Exception ex) {
        throw new RuntimeException(ex.getMessage());
    }
}

The following hack worked for me (provided each listener listens to a different queue):
@Bean
public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
    return new SimpleMessageListenerContainerFactory() {
        @Override
        public SimpleMessageListenerContainer createSimpleMessageListenerContainer() {
            SimpleMessageListenerContainer simpleMessageListenerContainer = new SimpleMessageListenerContainer() {
                @Override
                protected void startQueue(String queueName, QueueAttributes queueAttributes) {
                    // A place to configure queue-based maxNumberOfMessages
                    try {
                        if (queueName.endsWith(".fifo")) {
                            FieldUtils.writeField(queueAttributes, "maxNumberOfMessages", 1, true);
                        }
                    } catch (IllegalAccessException e) {
                        throw new RuntimeException(e);
                    }
                    super.startQueue(queueName, queueAttributes);
                }
            };
            simpleMessageListenerContainer.setAmazonSqs(amazonSqs);
            return simpleMessageListenerContainer;
        }
    };
}

I found the solution and shared it in an example repo on GitHub:
github link
If I add the @EnableAsync annotation on the listener class and the @Async annotation on the handler method, my problem is solved :)
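A minimal sketch of what that combination looks like (the class name and queue property below are placeholders, not taken from the linked repo):

@Component
@EnableAsync // the author reports putting this on the listener class; it is more commonly placed on a @Configuration class
public class SecondQueueListener {

    @Async // hands each received message off to Spring's async executor instead of blocking the SQS polling thread
    @SqsListener(value = "${messaging.queue.other.source}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    public void listen(Message message, Acknowledgment acknowledgment) {
        // process the message, then acknowledge, as in the original listener
    }
}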

Unfortunately, the solution from Sushant didn't compile for me in Kotlin (because QueueAttributes is a protected static class), but I used it to write the following:
@Bean
fun simpleMessageListenerContainerFactory(sqs: AmazonSQSAsync): SimpleMessageListenerContainerFactory =
    object : SimpleMessageListenerContainerFactory() {
        override fun createSimpleMessageListenerContainer(): SimpleMessageListenerContainer {
            val container = object : SimpleMessageListenerContainer() {
                override fun afterPropertiesSet() {
                    super.afterPropertiesSet()
                    registeredQueues.forEach { (queue, attributes) ->
                        if (queue.contains(QUEUE_NAME)) {
                            FieldUtils.writeField(
                                attributes,
                                "maxNumberOfMessages",
                                NEW_MAX_NUMBER_OF_MESSAGES,
                                true
                            )
                        }
                    }
                }
            }
            container.setWaitTimeOut(waitTimeOut)
            container.setMaxNumberOfMessages(maxNumberOfMessages)
            container.setAmazonSqs(sqs)
            return container
        }
    }

Related

Guaranteed message delivery asynchronously

I wrote code that guarantees the delivery of messages and their processing, but it works in a single thread.
How can I refactor the code so that it works in parallel threads or asynchronously? In this case, messages must be guaranteed to be delivered even if the application crashes; they would then be delivered after the application restarts, or by other running instances of the application.
Producer:
@Async("threadPoolTaskExecutor")
@EventListener(condition = "#event.queue")
public void start(GenericSpringEvent<RenderQueueObject> event) {
    RenderQueueObject renderQueueObject = event.getWhat();
    send(RENDER_NAME, renderQueueObject);
}

private void send(String routingKey, Object queue) {
    try {
        log.info("SEND message");
        rabbitTemplate.convertAndSend(routingKey, objectMapper.writeValueAsString(queue));
    } catch (JsonProcessingException e) {
        log.warn("Can't send event!", e);
    }
}
Consumer:
@Slf4j
@RequiredArgsConstructor
@Service
public class RenderRabbitEventListener extends RabbitEventListener {

    private final ApplicationEventPublisher eventPublisher;

    @RabbitListener(bindings = @QueueBinding(value = @Queue(Queues.RENDER_NAME),
            exchange = @Exchange(value = Exchanges.EXC_RENDER_NAME, type = "topic"),
            key = "render.#")
    )
    public void onMessage(Message message, Channel channel) {
        String routingKey = parseRoutingKey(message);
        log.debug(String.format("Event %s", routingKey));
        RenderQueueObject queueObject = parseRender(message, RenderQueueObject.class);
        handleMessage(queueObject);
    }

    public void handleMessage(RenderQueueObject render) {
        GenericSpringEvent<RenderQueueObject> springEvent = new GenericSpringEvent<>(render);
        springEvent.setRender(true);
        eventPublisher.publishEvent(springEvent);
    }
}

public class Exchanges {
    public static final String EXC_RENDER_NAME = "render.exchange.topic";
    public static final TopicExchange EXC_RENDER = new TopicExchange(EXC_RENDER_NAME, true, false);
}

public class Queues {
    public static final String RENDER_NAME = "render.queue.topic";
    public static final Queue RENDER = new Queue(RENDER_NAME);
}
And so my message is processed. If I add @Async, the processing runs in parallel, but if the application crashes, the messages will not be sent again on the next start.
@EventListener(condition = "#event.render")
public void startRender(GenericSpringEvent<RenderQueueObject> event) {
    RenderQueueObject render = event.getWhat();
    storageService.updateDocument(
            render.getGuid(),
            new Document("$set", new Document("dateStartRendering", new Date()).append("status", State.rendering.toString()))
    );
    Future<RenderWorkObject> submit = taskExecutor.submit(new RenderExecutor(render));
    try {
        completeResult(submit);
    } catch (IOException | ExecutionException | InterruptedException e) {
        log.info("Error when complete results after invoke executors");
    }
}

private void completeResult(Future<RenderWorkObject> renderFuture) throws IOException, ExecutionException, InterruptedException {
    RenderWorkObject renderWorkObject = renderFuture.get();
    State currentState = renderWorkObject.getState();
    if (Stream.of(result, error, cancel).anyMatch(isEqual(currentState))) {
        storageService.updateDocument(renderWorkObject.getGuidJob(), new Document("$set", toUpdate));
    }
}
I tried to customize the configuration to fit my needs, but it didn't work:
@Bean
Queue queue() {
    return Queues.RENDER;
}

@Bean
TopicExchange exchange() {
    return Exchanges.EXC_RENDER;
}

@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(Queues.RENDER_NAME);
}

@Bean
public RabbitTemplate rabbitTemplate(@Qualifier("defaultConnectionFactory") ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    return template;
}

@Bean
public SimpleMessageListenerContainer container(@Qualifier("defaultConnectionFactory") ConnectionFactory connectionFactory, RabbitEventListener listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(Queues.RENDER_NAME);
    container.setQueues(Queues.RENDER);
    container.setExposeListenerChannel(true);
    container.setMaxConcurrentConsumers(20);
    container.setConcurrentConsumers(10);
    container.setPrefetchCount(1000);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}

@Bean
public ConnectionFactory defaultConnectionFactory() {
    CachingConnectionFactory cf = new CachingConnectionFactory();
    cf.setAddresses("127.0.0.1:5672");
    cf.setUsername("guest");
    cf.setPassword("guest");
    cf.setVirtualHost("/");
    cf.setPublisherConfirms(true);
    cf.setPublisherReturns(true);
    cf.setChannelCacheSize(25);
    ExecutorService es = Executors.newFixedThreadPool(20);
    cf.setExecutor(es);
    return cf;
}
I would be grateful for any ideas.
I think I found a solution. I changed the RenderRabbitEventListener so that it re-sends the message to the queue when it receives a redelivered message from Rabbit (i.e., after a crash). Thanks to this, my consumer always works in parallel. This works both in the event of a failure of all nodes and in the event of a failure of a single node.
Here are the changes I made:
@RabbitListener(bindings = @QueueBinding(value = @Queue(Queues.RENDER_NAME),
        exchange = @Exchange(value = Exchanges.EXC_RENDER_NAME, type = "topic"),
        key = "render.#")
)
public void onMessage(Message message, Channel channel,
                      @Header(AmqpHeaders.DELIVERY_TAG) long tag
) {
    RenderQueueObject queueObject = parseRender(message, RenderQueueObject.class);
    if (message.getMessageProperties().isRedelivered()) {
        log.info("Message Redelivered, try also");
        try {
            channel.basicAck(tag, false);
            MessageConverter messageConverter = rabbitTemplate.getMessageConverter();
            String valueAsString = parseBody(message);
            Message copyMessage = messageConverter.toMessage(valueAsString, new MessageProperties());
            rabbitTemplate.convertAndSend(
                    message.getMessageProperties().getReceivedRoutingKey(),
                    copyMessage);
            return;
        } catch (IOException e) {
            log.info("basicAck exception");
        }
    }
    log.info("message not redelivered");
    String routingKey = parseRoutingKey(message);
    log.debug(String.format("Event %s", routingKey));
    handleMessage(queueObject);
}

How to implement a simple echo socket service in Spring Integration DSL

Please, could you help with the implementation of a simple, echo-style Heartbeat TCP socket service in Spring Integration DSL? More precisely, how do I plug the Adapter/Handler/Gateway into the IntegrationFlows on the client and server side? Practical examples are hard to come by for Spring Integration DSL and TCP/IP client/server communication.
I think I nailed most of the code; it's just the bit about plugging everything together in the IntegrationFlow.
There is a sample echo service in the SI examples, but it's written in the "old" XML configuration and I really struggle to transform it into configuration by code.
My Heartbeat service is a simple server waiting for the client to ask "status", responding with "OK".
No @ServiceActivator, no @MessageGateways, no proxying; everything explicit and verbose; driven by a plain JDK scheduled executor on the client side; server and client in separate configs and projects.
HeartbeatClientConfig
@Configuration
@EnableIntegration
public class HeartbeatClientConfig {

    @Bean
    public MessageChannel outboudChannel() {
        return new DirectChannel();
    }

    @Bean
    public PollableChannel inboundChannel() {
        return new QueueChannel();
    }

    @Bean
    public TcpNetClientConnectionFactory connectionFactory() {
        TcpNetClientConnectionFactory connectionFactory = new TcpNetClientConnectionFactory("localhost", 7777);
        return connectionFactory;
    }

    @Bean
    public TcpReceivingChannelAdapter heartbeatReceivingMessageAdapter(
            TcpNetClientConnectionFactory connectionFactory,
            MessageChannel inboundChannel) {
        TcpReceivingChannelAdapter heartbeatReceivingMessageAdapter = new TcpReceivingChannelAdapter();
        heartbeatReceivingMessageAdapter.setConnectionFactory(connectionFactory);
        heartbeatReceivingMessageAdapter.setOutputChannel(inboundChannel); // ???
        heartbeatReceivingMessageAdapter.setClientMode(true);
        return heartbeatReceivingMessageAdapter;
    }

    @Bean
    public TcpSendingMessageHandler heartbeatSendingMessageHandler(
            TcpNetClientConnectionFactory connectionFactory) {
        TcpSendingMessageHandler heartbeatSendingMessageHandler = new TcpSendingMessageHandler();
        heartbeatSendingMessageHandler.setConnectionFactory(connectionFactory);
        return heartbeatSendingMessageHandler;
    }

    @Bean
    public IntegrationFlow heartbeatClientFlow(
            TcpNetClientConnectionFactory connectionFactory,
            TcpReceivingChannelAdapter heartbeatReceivingMessageAdapter,
            TcpSendingMessageHandler heartbeatSendingMessageHandler,
            MessageChannel outboudChannel) {
        return IntegrationFlows
                .from(outboudChannel) // ??????
                .// adapter ???????????
                .// gateway ???????????
                .// handler ???????????
                .get();
    }

    @Bean
    public HeartbeatClient heartbeatClient(
            MessageChannel outboudChannel,
            PollableChannel inboundChannel) {
        return new HeartbeatClient(outboudChannel, inboundChannel);
    }
}
HeartbeatClient
public class HeartbeatClient {
    private final MessageChannel outboudChannel;
    private final PollableChannel inboundChannel;
    private final Logger log = LogManager.getLogger(HeartbeatClient.class);

    public HeartbeatClient(MessageChannel outboudChannel, PollableChannel inboundChannel) {
        this.inboundChannel = inboundChannel;
        this.outboudChannel = outboudChannel;
    }

    @EventListener
    public void initializaAfterContextIsReady(ContextRefreshedEvent event) {
        log.info("Starting Heartbeat client...");
        start();
    }

    public void start() {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            while (true) {
                try {
                    log.info("Sending Heartbeat");
                    outboudChannel.send(new GenericMessage<String>("status"));
                    Message<?> message = inboundChannel.receive(1000);
                    if (message == null) {
                        log.error("Heartbeat timeouted");
                    } else {
                        String messageStr = new String((byte[]) message.getPayload());
                        if (messageStr.equals("OK")) {
                            log.info("Heartbeat OK response received");
                        } else {
                            log.error("Unexpected message content from server: " + messageStr);
                        }
                    }
                } catch (Exception e) {
                    log.error(e);
                }
            }
        }, 0, 10000, TimeUnit.SECONDS);
    }
}
HeartbeatServerConfig
@Configuration
@EnableIntegration
public class HeartbeatServerConfig {

    @Bean
    public MessageChannel outboudChannel() {
        return new DirectChannel();
    }

    @Bean
    public PollableChannel inboundChannel() {
        return new QueueChannel();
    }

    @Bean
    public TcpNetServerConnectionFactory connectionFactory() {
        TcpNetServerConnectionFactory connectionFactory = new TcpNetServerConnectionFactory(7777);
        return connectionFactory;
    }

    @Bean
    public TcpReceivingChannelAdapter heartbeatReceivingMessageAdapter(
            TcpNetServerConnectionFactory connectionFactory,
            MessageChannel outboudChannel) {
        TcpReceivingChannelAdapter heartbeatReceivingMessageAdapter = new TcpReceivingChannelAdapter();
        heartbeatReceivingMessageAdapter.setConnectionFactory(connectionFactory);
        heartbeatReceivingMessageAdapter.setOutputChannel(outboudChannel);
        return heartbeatReceivingMessageAdapter;
    }

    @Bean
    public TcpSendingMessageHandler heartbeatSendingMessageHandler(
            TcpNetServerConnectionFactory connectionFactory) {
        TcpSendingMessageHandler heartbeatSendingMessageHandler = new TcpSendingMessageHandler();
        heartbeatSendingMessageHandler.setConnectionFactory(connectionFactory);
        return heartbeatSendingMessageHandler;
    }

    @Bean
    public IntegrationFlow heartbeatServerFlow(
            TcpReceivingChannelAdapter heartbeatReceivingMessageAdapter,
            TcpSendingMessageHandler heartbeatSendingMessageHandler,
            MessageChannel outboudChannel) {
        return IntegrationFlows
                .from(heartbeatReceivingMessageAdapter) // ???????????????
                .handle(heartbeatSendingMessageHandler) // ???????????????
                .get();
    }

    @Bean
    public HeartbeatServer heartbeatServer(
            PollableChannel inboundChannel,
            MessageChannel outboudChannel) {
        return new HeartbeatServer(inboundChannel, outboudChannel);
    }
}
HeartbeatServer
public class HeartbeatServer {
    private final PollableChannel inboundChannel;
    private final MessageChannel outboudChannel;
    private final Logger log = LogManager.getLogger(HeartbeatServer.class);

    public HeartbeatServer(PollableChannel inboundChannel, MessageChannel outboudChannel) {
        this.inboundChannel = inboundChannel;
        this.outboudChannel = outboudChannel;
    }

    @EventListener
    public void initializaAfterContextIsReady(ContextRefreshedEvent event) {
        log.info("Starting Heartbeat");
        start();
    }

    public void start() {
        Executors.newSingleThreadExecutor().execute(() -> {
            while (true) {
                try {
                    Message<?> message = inboundChannel.receive(1000);
                    if (message == null) {
                        log.error("Heartbeat timeouted");
                    } else {
                        String messageStr = new String((byte[]) message.getPayload());
                        if (messageStr.equals("status")) {
                            log.info("Heartbeat received");
                            outboudChannel.send(new GenericMessage<>("OK"));
                        } else {
                            log.error("Unexpected message content from client: " + messageStr);
                        }
                    }
                } catch (Exception e) {
                    log.error(e);
                }
            }
        });
    }
}
Bonus question
Why can a channel be set on TcpReceivingChannelAdapter (an inbound adapter) but not on TcpSendingMessageHandler (an outbound adapter)?
UPDATE
Here is the full project source code if anyone is interested in git cloning it:
https://bitbucket.org/espinosa/spring-integration-tcp-demo
I will try to put all suggested solutions there.
It's much simpler with the DSL...
@SpringBootApplication
@EnableScheduling
public class So55154418Application {

    public static void main(String[] args) {
        SpringApplication.run(So55154418Application.class, args);
    }

    @Bean
    public IntegrationFlow server() {
        return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(1234)))
                .transform(Transformers.objectToString())
                .log()
                .handle((p, h) -> "OK")
                .get();
    }

    @Bean
    public IntegrationFlow client() {
        return IntegrationFlows.from(Gate.class)
                .handle(Tcp.outboundGateway(Tcp.netClient("localhost", 1234)))
                .transform(Transformers.objectToString())
                .handle((p, h) -> {
                    System.out.println("Received:" + p);
                    return null;
                })
                .get();
    }

    @Bean
    @DependsOn("client")
    public Runner runner(Gate gateway) {
        return new Runner(gateway);
    }

    public static class Runner {

        private final Gate gateway;

        public Runner(Gate gateway) {
            this.gateway = gateway;
        }

        @Scheduled(fixedDelay = 5000)
        public void run() {
            this.gateway.send("foo");
        }
    }

    public interface Gate {

        void send(String out);
    }
}
Or, get the reply from the Gate method...
@Bean
public IntegrationFlow client() {
    return IntegrationFlows.from(Gate.class)
            .handle(Tcp.outboundGateway(Tcp.netClient("localhost", 1234)))
            .transform(Transformers.objectToString())
            .get();
}

@Bean
@DependsOn("client")
public Runner runner(Gate gateway) {
    return new Runner(gateway);
}

public static class Runner {

    private final Gate gateway;

    public Runner(Gate gateway) {
        this.gateway = gateway;
    }

    @Scheduled(fixedDelay = 5000)
    public void run() {
        String reply = this.gateway.sendAndReceive("foo"); // null for timeout
        System.out.println("Received:" + reply);
    }
}

public interface Gate {

    @Gateway(replyTimeout = 5000)
    String sendAndReceive(String out);
}
Bonus:
Consuming endpoints actually consist of two beans: a consumer and a message handler. The channel goes on the consumer. See here.
EDIT
An alternative, for a single bean for the client...
@Bean
public IntegrationFlow client() {
    return IntegrationFlows.from(() -> "foo",
                    e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
            .handle(Tcp.outboundGateway(Tcp.netClient("localhost", 1234)))
            .transform(Transformers.objectToString())
            .handle((p, h) -> {
                System.out.println("Received:" + p);
                return null;
            })
            .get();
}
For anyone interested, here is one of the working solutions I made with help from Gary Russell. All credits to Gary Russell. Full project source code here.
Highlights:
IntegrationFlows: use only inbound and outbound Gateways.
No Adapters or Channels needed; no ServiceActivators or Message Gate proxies.
No need for a ScheduledExecutor or Executors; the client and server code got significantly simpler.
IntegrationFlows directly calls methods on the client class and server class; I like this type of explicit connection.
Split the client class into two parts, two methods: a request-producing part and a response-processing part; this way it can be better chained into flows.
Explicitly define clientConnectionFactory/serverConnectionFactory. This way more things can be explicitly configured later.
HeartbeatClientConfig
@Bean
public IntegrationFlow heartbeatClientFlow(
        TcpNetClientConnectionFactory clientConnectionFactory,
        HeartbeatClient heartbeatClient) {
    return IntegrationFlows.from(heartbeatClient::send, e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
            .handle(Tcp.outboundGateway(clientConnectionFactory))
            .handle(heartbeatClient::receive)
            .get();
}
HeartbeatClient
public class HeartbeatClient {
    private final Logger log = LogManager.getLogger(HeartbeatClient.class);

    public GenericMessage<String> send() {
        log.info("Sending Heartbeat");
        return new GenericMessage<String>("status");
    }

    public Object receive(byte[] payload, MessageHeaders messageHeaders) { // LATER: use transformer() to receive String here
        String messageStr = new String(payload);
        if (messageStr.equals("OK")) {
            log.info("Heartbeat OK response received");
        } else {
            log.error("Unexpected message content from server: " + messageStr);
        }
        return null;
    }
}
HeartbeatServerConfig
@Bean
public IntegrationFlow heartbeatServerFlow(
        TcpNetServerConnectionFactory serverConnectionFactory,
        HeartbeatServer heartbeatServer) {
    return IntegrationFlows
            .from(Tcp.inboundGateway(serverConnectionFactory))
            .handle(heartbeatServer::processRequest)
            .get();
}
HeartbeatServer
public class HeartbeatServer {
    private final Logger log = LogManager.getLogger(HeartbeatServer.class);

    public Message<String> processRequest(byte[] payload, MessageHeaders messageHeaders) {
        String messageStr = new String(payload);
        if (messageStr.equals("status")) {
            log.info("Heartbeat received");
            return new GenericMessage<>("OK");
        } else {
            log.error("Unexpected message content from client: " + messageStr);
            return null;
        }
    }
}

How to catch errors in spring-integration socket server?

I have the following working socket server configuration and would like to add a handler for any exception that occurs, e.g. inside the Deserializer while reading a message.
Therefore I added a @ServiceActivator(inputChannel = "errorChannel"), but the method is never invoked. Why?
@MessageEndpoint
public class SocketEndpoint {

    @ServiceActivator(inputChannel = "mainChannel")
    public String handleMessage(String message) {
        return "normal response";
    }

    @ServiceActivator(inputChannel = "errorChannel")
    public String handleError(MessagingException message) {
        //TODO this is never invoked!
        return "some error";
    }
}

@Bean
public TcpInboundGateway mainGateway(
        @Qualifier("tcpFactory") TcpConnectionFactoryFactoryBean factory,
        @Qualifier("mainChannel") MessageChannel mainChannel,
        @Qualifier("errorChannel") MessageChannel errorChannel) throws Exception {
    TcpInboundGateway g = new TcpInboundGateway();
    g.setConnectionFactory(factory.getObject());
    g.setRequestChannel(mainChannel);
    g.setErrorChannel(errorChannel);
    return g;
}

@Bean
public TcpConnectionFactoryFactoryBean fact() {
    TcpConnectionFactoryFactoryBean f = new TcpConnectionFactoryFactoryBean();
    f.setType("server");
    //....
    f.setDeserializer(new MyDeserializer());
    return f;
}
class MyDeserializer implements Deserializer<String> {

    @Override
    public String deserialize(InputStream inputStream) throws IOException {
        throw new RuntimeException("catch me in error-channel");
    }
}
throw new RuntimeException("catch me in error-channel");
It can't go to the error channel since there's no message yet (messages sent to error channels are messages that fail downstream processing).
The standard deserializers (that extend AbstractByteArraySerializer) publish a TcpDeserializationExceptionEvent when deserialization fails. See the ByteArrayCrLfSerializer for an example:
https://github.com/spring-projects/spring-integration/blob/master/spring-integration-ip/src/main/java/org/springframework/integration/ip/tcp/serializer/ByteArrayCrLfSerializer.java#L78
public int fillToCrLf(InputStream inputStream, byte[] buffer) throws IOException {
    int n = 0;
    int bite;
    if (logger.isDebugEnabled()) {
        logger.debug("Available to read: " + inputStream.available());
    }
    try {
        ...
    }
    catch (SoftEndOfStreamException e) {
        throw e;
    }
    catch (IOException e) {
        publishEvent(e, buffer, n);
        throw e;
    }
    catch (RuntimeException e) {
        publishEvent(e, buffer, n);
        throw e;
    }
}
See the documentation. The Deserializer needs to be a bean so that it gets an event publisher.
You can then listen for the event(s) with an ApplicationListener<TcpDeserializationExceptionEvent> or an @EventListener method.
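For example, a minimal sketch of such an @EventListener bean (the class and method names here are arbitrary):

@Component
public class TcpDeserializationErrorListener {

    @EventListener
    public void onDeserializationFailure(TcpDeserializationExceptionEvent event) {
        // the event carries the exception and the buffer contents read so far
        Throwable cause = event.getCause();
        // log or alert here; the original exception is still thrown by the deserializer
    }
}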

Listen to multiple IBM MQ queues using a single Spring Boot application

I need to listen to multiple queues (which exist in the same queue manager). I have working Spring Boot application code for listening to a single queue, but is there any way I can connect to multiple queues from a single Spring Boot application?
Also, is there any way to switch a listener from one queue to another at runtime?
I have the code to read from a single queue, which is as follows:
public class ConnectionConfiguration {
    private static final Logger logger = LogManager.getLogger(ConnectionConfiguration.class);

    @Value("${LDR_LOCAL_MQ_READ_FACTORYNAME}")
    String connectionFactory;
    @Value("${LDR_LOCAL_MQ_QUEUENAME}")
    String localQueue;
    @Value("${jmsConcurrency}")
    String concurrency;
    @Value("${servers.mq.host}")
    private String host;
    @Value("${servers.mq.port}")
    private Integer port;
    @Value("${servers.mq.queue-manager}")
    private String queueManager;
    @Value("${servers.mq.channel}")
    private String channel;
    @Value("${servers.mq.queue}")
    private String queue;
    @Value("${servers.mq.timeout}")
    private long timeout;

    @Bean
    public ConnectionFactory connectionFactory() {
        JndiObjectFactoryBean jndiObjectFactoryBean = new JndiObjectFactoryBean();
        jndiObjectFactoryBean.setResourceRef(true);
        jndiObjectFactoryBean.setJndiName(connectionFactory);
        try {
            jndiObjectFactoryBean.afterPropertiesSet();
        } catch (IllegalArgumentException e) {
            logger.error(e.getMessage(), e);
            e.printStackTrace();
        } catch (NamingException e) {
            logger.error(e.getMessage(), e);
            e.printStackTrace();
        }
        return (ConnectionFactory) jndiObjectFactoryBean.getObject();
    }

    @Bean
    public MQQueueConnectionFactory mqQueueConnectionFactory() {
        MQQueueConnectionFactory mqQueueConnectionFactory = new MQQueueConnectionFactory();
        try {
            mqQueueConnectionFactory.setHostName(host);
            mqQueueConnectionFactory.setQueueManager(queueManager);
            mqQueueConnectionFactory.setPort(port);
            mqQueueConnectionFactory.setChannel(channel);
            mqQueueConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            mqQueueConnectionFactory.setCCSID(1208);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return mqQueueConnectionFactory;
    }

    @Bean
    public SimpleMessageListenerContainer queueContainer(MQQueueConnectionFactory mqQueueConnectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(mqQueueConnectionFactory);
        container.setDestinationName(queue);
        container.setMessageListener(getListenerWrapper());
        container.start();
        return container;
    }

    @Bean
    public MQListener getListenerWrapper() {
        return new MQListener();
    }

    @Bean
    public JmsTemplate getJmsTemplate() {
        try {
            return new JmsTemplate(mqQueueConnectionFactory());
        } catch (Exception exp) {
            logger.error(exp.getMessage(), exp);
        }
        return null;
    }
}
Just add a listener container bean for each queue.
To change the queue, call stop(), then shutdown(), change the destination, then initialize(), then start().
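A rough sketch of both suggestions, reusing the beans from the question (the second queue field is made up for illustration):

@Bean
public SimpleMessageListenerContainer secondQueueContainer(MQQueueConnectionFactory mqQueueConnectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(mqQueueConnectionFactory);
    container.setDestinationName(secondQueue); // e.g. injected via @Value("${servers.mq.queue2}")
    container.setMessageListener(getListenerWrapper());
    return container;
}

// switching an existing container to another queue at runtime
public void switchQueue(SimpleMessageListenerContainer container, String newQueue) {
    container.stop();
    container.shutdown();
    container.setDestinationName(newQueue);
    container.initialize();
    container.start();
}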

Spring Boot and BlockingQueue listener

I have implemented JMS with Spring Boot, and I am using @JmsListener to listen to the topic:
@Component
public class AMQListner {
    BlockingQueue<MessageBO> queue = new ArrayBlockingQueue<>(1024);

    @JmsListener(destination = "${spring.activemq.topic}")
    public void Consume(TextMessage message) {
        try {
            String json = message.getText();
            MessageBO bo = ObjectMapperConfig.getInstance().readValue(json, MessageBO.class);
            queue.add(bo);
        } catch (JMSException e) {
            e.printStackTrace();
        } catch (JsonParseException e) {
            e.printStackTrace();
        } catch (JsonMappingException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Now I want a listener that listens to that blocking queue and processes values as they arrive. Can we achieve this using annotations in Spring Boot?
First of all, the proper way is to create a handler bean instead of keeping the message queue as a member of the receiver class.
public interface MessageHandler extends Consumer<MessageBO> {
    public default void handle(MessageBO msg) { accept(msg); }
}

@Component
public class AMQListener {

    @Resource(name = "multiplexer")
    MessageHandler handler;

    @JmsListener(destination = "${spring.activemq.topic}")
    public void Consume(TextMessage message) {
        try {
            String json = message.getText();
            MessageBO bo = ObjectMapperConfig.getInstance().readValue(json, MessageBO.class);
            handler.handle(bo);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Then, you would have the queue in the handler bean
@Component("multiplexer")
public class MessageMultiplexer implements MessageHandler {

    @Autowired
    MessageHandler actualConsumer;

    ExecutorService executor = Executors.newFixedThreadPool(4);

    public void accept(MessageBO msg) {
        executor.submit(() -> actualConsumer.handle(msg));
    }
}
The Executor is pretty much the queue in this case.
Caveat: you do not get your 1024 limit this way. You can get it by using the ThreadPoolExecutor constructor and passing it a bounded queue.
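For example, a bounded executor along those lines (pool and queue sizes are illustrative):

ExecutorService executor = new ThreadPoolExecutor(
        4, 4,                                       // core and maximum pool size
        0L, TimeUnit.MILLISECONDS,                  // keep-alive for idle threads above the core size
        new ArrayBlockingQueue<>(1024),             // bounded work queue, mirroring the original 1024 limit
        new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure: run on the caller when the queue is full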
