I'm creating a queue in WildFly 9. This queue will receive about 100 or more messages per second, so I'm trying to find the best way to send those messages to the queue in order to get the best performance (no timeouts or delays). Below is what I have so far; I've tested it and it works. To be honest, I don't know if I should use a QueueSession or not. I just need to send the messages, and the MDB will process them.
@Singleton
@Startup
public class JMSUtil {

    @Resource(name = "ConnectionFactory")
    private QueueConnectionFactory objQueueFactory;

    @Resource(name = "jms/queue")
    private Queue objQueue;

    private JMSContext context;

    @PostConstruct
    public void init() {
        context = objQueueFactory.createContext();
    }

    @Lock(LockType.READ)
    public void sendEvent(String trace) {
        context.createProducer().send(objQueue, trace);
    }
}
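One thing I'm not sure about: the JMSContext created in @PostConstruct is never closed. Presumably a @PreDestroy method inside JMSUtil along these lines should release it when the singleton shuts down (a sketch, not something I've load-tested):

// Sketch: release the shared JMSContext when the singleton is destroyed.
@PreDestroy
public void cleanup() {
    if (context != null) {
        context.close();
    }
}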
We're using RabbitMQ for communication between some of our services. Sometimes a lot of messages are queued at once, and we want to be able to see that there are still unhandled messages, i.e. that the service handling the messages is busy.
I've been looking around for a programmatic way to check whether a queue has messages and found this:
channel.queueDeclarePassive(queueName).getMessageCount()
The problem is: I don't have a channel object. Our RabbitMQ setup was created a couple of years ago and usually looks like this:
@Configuration
@EnableRabbit
public class RabbitMqConfig {

    public static final String RENDER_HTML_QUEUE = "render.html";

    private String rabbitUri;
    private int connectionTimeout;
    private String exchangeName;
    private int concurrentConsumers;

    public RabbitMqConfig(
            @Value("${rabbitmq.uri}") String rabbitUri,
            @Value("${rabbitmq.exchange.name}") String exchangeName,
            @Value("${rabbitmq.connection.timeout}") int timeout,
            @Value("${rabbitmq.concurrent-consumers:1}") int concurrentConsumers) {
        this.exchangeName = exchangeName;
        this.rabbitUri = rabbitUri;
        this.connectionTimeout = timeout;
        this.concurrentConsumers = concurrentConsumers;
    }

    @Bean
    DirectExchange directExchangeBean() {
        return new DirectExchange(this.exchangeName, true, false);
    }

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(RENDER_HTML_QUEUE);
        container.setConcurrentConsumers(concurrentConsumers);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(RenderItemMessageConsumer receiver) {
        return new MessageListenerAdapter(receiver, "reciveMessageFromRenderQueue");
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory;
        try {
            connectionFactory = new CachingConnectionFactory(new URI(this.rabbitUri));
            connectionFactory.setConnectionTimeout(this.connectionTimeout);
        } catch (URISyntaxException e) {
            throw new ApiException(e, BaseErrorCode.UNKOWN_ERROR, e.getMessage());
        }
        return connectionFactory;
    }

    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(connectionFactory());
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        return new RabbitTemplate(connectionFactory());
    }

    @Bean
    public Queue renderRenderQueue() {
        return new Queue(RENDER_HTML_QUEUE, true);
    }

    @Bean
    Binding rendererRenderBinding() {
        return BindingBuilder.bind(renderRenderQueue()).to(directExchangeBean()).with(RENDER_HTML_QUEUE);
    }
}
Messages are then consumed like this:
@Component
public class RenderItemMessageConsumer {

    @RabbitListener(queues = RENDER_HTML_QUEUE)
    public void reciveMessageFromRenderQueue(String message) {
        //...
    }
}
The exchangeName is shared across services. So, generally, I need a way to get at the channel that is presumably created for the queue and connection, to see how many messages are inside. Ideally I want to access that information from the other service, the one that produces the messages consumed by the rendering service.
Or am I doing something wrong? Do I have to explicitly create a channel and connect the queue to it? I'm not even sure which channels are created under the hood; as I mentioned, I set this up some years ago and didn't dig deeper once everything was running fine.
Can I maybe somehow use the amqpAdmin to get all channels?
It turns out I can answer my own question after just a little more experimenting:
@Autowired
private ConnectionFactory connectionFactory;

public Boolean isBusy(String queue) throws IOException, TimeoutException {
    Connection connection = connectionFactory.createConnection();
    Channel channel = connection.createChannel(false);
    try {
        // queueDeclarePassive fails if the queue doesn't exist and
        // otherwise returns its current message count.
        return channel.queueDeclarePassive(queue).getMessageCount() > 0;
    } finally {
        channel.close(); // with a CachingConnectionFactory this returns the channel to the cache
        connection.close();
    }
}
Since all my services have a similar setup exposing the connectionFactory as a bean, and they all connect to the shared RabbitMQ server using the same exchange name, I can use any service to do the above. I can put that snippet behind a REST resource, and my management UI can then request information about any queue whose name I know, to decide whether to post another batch of messages to it.
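A variant I haven't needed myself, but which should work with the amqpAdmin bean defined above (assuming a reasonably recent Spring AMQP, where AmqpAdmin exposes getQueueProperties), avoids touching channels directly:

@Autowired
private AmqpAdmin amqpAdmin;

public boolean isBusy(String queue) {
    // getQueueProperties returns null when the queue doesn't exist.
    Properties props = amqpAdmin.getQueueProperties(queue);
    if (props == null) {
        return false;
    }
    Number count = (Number) props.get(RabbitAdmin.QUEUE_MESSAGE_COUNT);
    return count != null && count.intValue() > 0;
}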
I'm sending messages to IBM MQ with a correlationId (unique for each message). Then I want to read this concrete message with its specific correlationId from the output queue, and I want it to be non-blocking so I can use it in a Java WebFlux controller.
I'm wondering if there is a way to do this without a lot of pain. Options like jmsTemplate.receiveSelected(...) are blocking, while creating a bean implementing the MessageListener interface doesn't provide a way to select messages by a dynamic selector (i.e. the correlationId is unique for each message).
You could use a Spring MessageListener to retrieve all messages and connect it to the controller via Mono.create(...) and your own event listener, which triggers the result Mono.
// Consumes a message and triggers the result Mono
public interface MyEventListener extends Consumer<MyOutputMessage> {}
A class to route incoming messages to the correct MyEventListener:
public class MyMessageProcessor {

    // You could use an in-memory cache here if you need TTL etc.
    private static final ConcurrentHashMap<String, MyEventListener> REGISTRY
            = new ConcurrentHashMap<>();

    public void register(String correlationId, MyEventListener listener) {
        MyEventListener oldListener = REGISTRY.putIfAbsent(correlationId, listener);
        if (oldListener != null)
            throw new IllegalStateException("Correlation ID collision!");
    }

    public void unregister(String correlationId) {
        REGISTRY.remove(correlationId);
    }

    public void accept(String correlationId, MyOutputMessage myOutputMessage) {
        Optional.ofNullable(REGISTRY.get(correlationId))
                .ifPresent(listener -> listener.accept(myOutputMessage));
    }
}
WebFlux controller:
private final MyMessageProcessor messageProcessor;
....

@PostMapping("/process")
Mono<MyOutputMessage> process(Mono<MyInputMessage> inputMessage) {
    String correlationId = ...; // generate correlationId
    // then send the message asynchronously
    return Mono.<MyOutputMessage>create(sink ->
            // create and save a MyEventListener which calls MonoSink.success
            messageProcessor.register(correlationId, sink::success))
            // define a timeout if you don't want to wait forever
            .timeout(...)
            // clean up the MyEventListener after success, error or cancel
            .doFinally(ignored -> messageProcessor.unregister(correlationId));
}
And in the onMessage method of your JMS MessageListener implementation, you call:
messageProcessor.accept(correlationId, myOutputMessage);
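For completeness, a rough sketch of what that listener could look like (the class name and the body handling are illustrative, not prescribed; assumes javax.jms.* from JMS 2.0):

public class MyReplyListener implements MessageListener {

    private final MyMessageProcessor messageProcessor;

    public MyReplyListener(MyMessageProcessor messageProcessor) {
        this.messageProcessor = messageProcessor;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String correlationId = message.getJMSCorrelationID();
            // getBody deserializes the payload; adapt to your actual message type
            MyOutputMessage body = message.getBody(MyOutputMessage.class);
            messageProcessor.accept(correlationId, body);
        } catch (JMSException e) {
            // log and drop: no registered listener will be triggered for this message
        }
    }
}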
You can find a similar example for Flux in the Reactor 3 reference guide.
Environment
Spring Boot: 1.5.13.RELEASE
Cloud: Edgware.SR3
Cloud AWS: 1.2.2.RELEASE
Java 8
OSX 10.13.4
Problem
I am trying to write an integration test for SQS.
I have a locally running localstack Docker container with SQS on TCP/4576.
In my test code I define an SQS client with the endpoint set to the local port 4576 and can successfully connect, create a queue, send a message, and delete the queue. I can also use the SQS client to receive messages and pick up the message that I sent.
My problem is that if I remove the code that manually receives the message, in order to let another component pick it up instead, nothing seems to happen. I have a Spring component annotated as follows:
Listener
@Component
public class MyListener {

    @SqsListener(value = "my_queue", deletionPolicy = ON_SUCCESS)
    public void receive(final MyMsg msg) {
        System.out.println("GOT THE MESSAGE: " + msg.toString());
    }
}
Test
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.profiles.active=test")
public class MyTest {

    @Autowired
    private AmazonSQSAsync amazonSQS;

    @Autowired
    private SimpleMessageListenerContainer container;

    private String queueUrl;

    @Before
    public void setUp() {
        queueUrl = amazonSQS.createQueue("my_queue").getQueueUrl();
    }

    @After
    public void tearDown() {
        amazonSQS.deleteQueue(queueUrl);
    }

    @Test
    public void name() throws InterruptedException {
        amazonSQS.sendMessage(new SendMessageRequest(queueUrl, "hello"));
        System.out.println("isRunning:" + container.isRunning());
        System.out.println("isActive:" + container.isActive());
        System.out.println("isRunningOnQueue:" + container.isRunning("my_queue"));
        Thread.sleep(30_000);
        System.out.println("GOT MESSAGE: " + amazonSQS.receiveMessage(queueUrl).getMessages().size());
    }

    @TestConfiguration
    @EnableSqs
    public static class SQSConfiguration {

        @Primary
        @Bean(destroyMethod = "shutdown")
        public AmazonSQSAsync amazonSQS() {
            final AwsClientBuilder.EndpointConfiguration endpoint =
                    new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:4576", "eu-west-1");
            return new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
                    .standard()
                    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("key", "secret")))
                    .withEndpointConfiguration(endpoint)
                    .build());
        }
    }
}
In the test logs I see:
o.s.c.a.m.listener.QueueMessageHandler : 1 message handler methods found on class MyListener: {public void MyListener.receive(MyMsg)=org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation@1cd4082a}
o.s.c.a.m.listener.QueueMessageHandler : Mapped "org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation@1cd4082a" onto public void MyListener.receive(MyMsg)
Followed by:
isRunning:true
isActive:true
isRunningOnQueue:false
GOT MESSAGE: 1
This demonstrates that during the 30-second pause after sending the message, the container didn't pick it up; when I manually poll at the end, the message is still on the queue and I can consume it.
My question is: why isn't the listener being invoked, and why does the isRunningOnQueue:false line suggest that the container isn't auto-started for that queue?
Note that I also tried defining my own SimpleMessageListenerContainer bean with autoStartup explicitly set to true (the default anyway) and observed no change in behaviour. I thought that the org.springframework.cloud.aws.messaging.config.annotation.SqsConfiguration#simpleMessageListenerContainer that is set up by @EnableSqs ought to configure an auto-started SimpleMessageListenerContainer that polls for my messages.
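For reference, a sketch of the equivalent override via the factory (my reconstruction; in Spring Cloud AWS 1.2.x the SimpleMessageListenerContainerFactory is the usual customisation point):

@Bean
public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
    SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
    factory.setAmazonSqs(amazonSqs);
    factory.setAutoStartup(true); // true is already the default
    return factory;
}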
I have also set
logging.level.org.apache.http=DEBUG
logging.level.org.springframework.cloud=DEBUG
in my test properties and can see the HTTP calls that create the queue, send a message, delete it, etc., but no HTTP calls to receive (apart from my manual one at the end of the test).
I figured this out after some tinkering.
Even if the simple message listener container factory is set not to auto-start, it seems to do its initialisation anyway, which involves determining whether the queue exists.
In this case the queue is created in my test's setup method, but sadly that runs after the Spring context is set up, which means an exception occurs.
I fixed this by simply moving the queue creation into the bean creation of the SQS client (which happens before the message listener container is created), i.e.:
@Bean(destroyMethod = "shutdown")
public AmazonSQSAsync amazonSQS() {
    final AwsClientBuilder.EndpointConfiguration endpoint =
            new AwsClientBuilder.EndpointConfiguration("http://localhost:4576", "eu-west-1");
    final AmazonSQSBufferedAsyncClient client = new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("dummyKey", "dummySecret")))
            .withEndpointConfiguration(endpoint)
            .build());
    // Create the queue before the listener container initialises and looks it up.
    client.createQueue("test-queue");
    return client;
}
I am building a message-driven service in Spring which will run in a cluster and needs to pull messages from a RabbitMQ queue in a round-robin manner. The current implementation pulls messages off the queue on a first-come, first-served basis, leading to some servers getting backed up while others sit idle.
The current QueueConsumerConfiguration.java looks like this:
@Configuration
public class QueueConsumerConfiguration extends RabbitMqConfiguration {

    private Logger LOG = LoggerFactory.getLogger(QueueConsumerConfiguration.class);

    private static final int DEFAULT_CONSUMERS = 2;

    @Value("${eventservice.inbound}")
    protected String inboudEventQueue;

    @Value("${eventservice.consumers}")
    protected int queueConsumers;

    @Autowired
    private EventHandler eventtHandler;

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.inboudEventQueue);
        template.setQueue(this.inboudEventQueue);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public Queue inboudEventQueue() {
        return new Queue(this.inboudEventQueue);
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.setQueueNames(this.inboudEventQueue);
        container.setMessageListener(messageListenerAdapter());
        if (this.queueConsumers > 0) {
            LOG.info("Starting queue consumers:" + this.queueConsumers);
            container.setMaxConcurrentConsumers(this.queueConsumers);
            container.setConcurrentConsumers(this.queueConsumers);
        } else {
            LOG.info("Starting default queue consumers:" + DEFAULT_CONSUMERS);
            container.setMaxConcurrentConsumers(DEFAULT_CONSUMERS);
            container.setConcurrentConsumers(DEFAULT_CONSUMERS);
        }
        return container;
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(this.eventtHandler, jsonMessageConverter());
    }
}
Is it a case of just adding
container.setChannelTransacted(true);
to the configuration?
RabbitMQ treats all consumers the same: it knows no difference between multiple consumers in one container and one consumer in each of multiple containers (e.g. on different hosts). Each is simply a consumer from Rabbit's perspective.
If you want more control over server affinity, you need to use multiple queues, with each container listening to its own queue.
You then control the distribution on the producer side, e.g. using a topic or direct exchange and specific routing keys to route messages to a specific queue.
This tightly binds the producer to the consumers (it has to know how many there are).
Or you could have your producer use routing keys rk.0, rk.1, ..., rk.29 (repeatedly, resetting to 0 when 30 is reached), as in the sketch below.
Then you can bind the consumer queues with multiple bindings: consumer 1 gets rk.0 to rk.9, consumer 2 gets rk.10 to rk.19, and so on.
If you later decide to change the number of consumers, just rework the bindings appropriately to redistribute the work.
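A rough producer-side sketch of that rotating scheme (the exchange name, method, and counter are illustrative, not prescribed):

// Illustrative only: rotate through rk.0 .. rk.29 so that each consumer
// queue, bound to its own slice of keys, receives a fixed share of messages.
private final AtomicInteger counter = new AtomicInteger();

public void publish(RabbitTemplate template, Object event) {
    int slot = counter.getAndUpdate(n -> (n + 1) % 30); // wraps to 0 after rk.29
    template.convertAndSend("events.exchange", "rk." + slot, event);
}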
The container will scale up to maxConcurrentConsumers on demand but, practically, scaling down only occurs when the entire container is idle for some time.
I'm rather new to programming in the Java EE environment, so this question will probably sound amateurish, but here goes:
I'm writing a simple JMS application for demonstration purposes. One of the features that has to be implemented is the ability to get messages from a topic after setting a message selector dynamically, meaning the user has to be able to set certain attributes that determine whether or not they receive a message. The messages are sent from a different application running on the same local server as the application receiving them.
So I'm using injected JMSContext components on both the sender side and the receiver side to handle the messaging itself.
Here is the code for sending:
@Inject
@JMSConnectionFactory("jms/myConnectionFactory")
JMSContext context;

@Resource(lookup = "jms/myTopic")
private Topic topic;

//some more code

public void produceTopicForCreate(Object obj) {
    ObjectMessage message = context.createObjectMessage(obj);
    try {
        //setting properties
    } catch (JMSException ex) {
        //logging
    }
    context.createProducer().send(topic, message);
}
And on the receiver side
#Inject
#JMSConnectionFactory("jms/myConnectionFactory")
private JMSContext context;
#Resource(lookup = "jms/myTopic")
private Topic topic
private JMSConsumer consumer;
private List<MyClass> listOfMessages;
//more code
public void subscribe(String selector) {
this.consumer = this.context.createDurableConsumer(topic, "durableClient", selector, false);
}
public void receiveMessage() {
try {
this.listOfMessages.add(this.consumer.receiveBody(MyClass.class));
} catch (Exception e) {
//logging
}
}
So, as you can see, I have created a durable consumer to consume messages from the topic. Now, whenever I invoke the receiveMessage method after a message has been sent to the topic, I get an exception stating that the "Producer is closed". I looked all over the net but found no indication as to what the problem is.
If anyone here could help in any way, I would greatly appreciate it! Thanks in advance!
Some important details:
the bean doing the sending is @RequestScoped in app A
the bean doing the receiving is a Singleton that implements
the environment is GlassFish 4.1/NetBeans 8.1