I have a Spring Boot project, deployed on two servers behind nginx. One method in the project will:
1. set some key-values in Redis
2. insert something into the DB
After step 1, I want to do step 2 asynchronously.
One solution is to make doDB() a Spring Boot @Async method:
class A {
    public void ***() {
        doRedis(); // 1. set some key-values in Redis
        doDB();    // 2. insert something into the DB
    }
}

class B {
    @Async
    public void doDB() { ... }
}
Another solution is to send a message to an MQ:

class A {
    public void ***() {
        doRedis();     // 1. set some key-values in Redis
        sendMessage(); // 2. send a message to the MQ
    }
}

class B {
    onMessage() {
        doDB();
    }
}
If classes A and B are both in the same Spring Boot project, which is simply deployed on two servers, I think using @Async is enough; there is no need to use MQ to make it asynchronous, because it makes no difference whether server one or server two runs B.doDB(). If class B is in another project, then using MQ is good, because it decouples project one (doing the Redis work) from project two (doing the DB work).
Is that right? Thanks!
Basically, you are right: if everything runs in the same application on the same server, there is no need for MQ, because @Async already has a queue behind it. But there are some key points you should decide on, even within a single application:
If you care about ordering, a message queue is more meaningful. You can use @Async in this case too, but you have to configure the thread pool to use only one thread to process the async tasks (see the sketch below).
If you care about losing messages when something bad happens before a message is processed, you should use an MQ that persists messages to disk (or somewhere else), so the remaining messages can be processed later.
If your application gets a lot of requests and you did not carefully size the async thread pool, you could get queue overflows or other problems with machine resources.
Choose within the capabilities of your application and do not over-engineer from day one; start from what you have and what you have already tested.
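For the ordering and capacity points above, here is a minimal sketch of what a deliberately constrained @Async executor could look like (the bean name, queue size and rejection policy are illustrative assumptions, not a recommendation):

import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    @Bean(name = "dbExecutor") // reference it with @Async("dbExecutor")
    public ThreadPoolTaskExecutor dbExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(1);    // a single thread keeps the DB inserts in order
        executor.setMaxPoolSize(1);
        executor.setQueueCapacity(500); // bound the queue so a burst cannot exhaust memory
        // when the queue is full, run the task on the caller's thread instead of dropping it
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.setThreadNamePrefix("db-async-");
        return executor;
    }
}

Note that the queued tasks still live only in memory, so this does not address the message-loss point; that is exactly where a persistent MQ helps.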
We have a large multi-service Java Spring app that declares about 100 exchanges and queues in RabbitMQ on startup. Some are declared explicitly via beans, but most of them are declared implicitly via @RabbitListener annotations.
@Component
@RabbitListener(
        bindings = @QueueBinding(key = {"example.routingkey"},
                exchange = @Exchange(value = "example.exchange", type = ExchangeTypes.TOPIC),
                value = @Queue(name = "example_queue", autoDelete = "true", exclusive = "true")))
public class ExampleListener {

    @RabbitHandler
    public void handleRequest(final ExampleRequest request) {
        System.out.println("got request!");
    }
}
There are quite a lot of these listeners across the whole application.
The services of the application sometimes talk to each other via RabbitMQ, so take as an example a publisher that publishes a message to the example exchange that the above ExampleListener is bound to.
If that publish happens too early in the application lifecycle (but AFTER all the Spring lifecycle events are through, so after ApplicationReadyEvent and ContextStartedEvent), the binding of the example queue to the example exchange has not yet happened, and the very first publish-and-reply chain will fail. In other words, the above ExampleListener would not print "got request".
We "fixed" this problem by simply waiting 3 seconds before sending any RabbitMQ messages, to give it time to declare all queues, exchanges and bindings, but this seems like a very suboptimal solution.
Does anyone have advice on how to fix this problem? It is quite hard to recreate, as I would guess it only occurs with a large number of queues/exchanges/bindings that RabbitMQ cannot create fast enough. Forcing Spring to synchronize this creation process and wait for a confirmation from RabbitMQ would probably fix it, but as far as I can see there is no built-in way to do this.
Are you using multiple connection factories?
Or are you setting usePublisherConnection on the RabbitTemplate? (which is recommended, especially for a complex application like yours).
Normally, a single connection is used and all users of it will block until the admin has declared all the elements (it is run as a connection listener).
If the template is using a different connection factory, it will not block because a different connection is used.
If that is the case, and you are using the CachingConnectionFactory, you can call createConnection().close() on the consumer connection factory during initialization, before sending any messages. That call will block until all the declarations are done.
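For illustration, a minimal sketch of that warm-up call, assuming the consumers use a CachingConnectionFactory exposed as a bean (the component and method names here are made up):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class AmqpDeclarationsWarmup {

    private final CachingConnectionFactory consumerConnectionFactory;

    public AmqpDeclarationsWarmup(CachingConnectionFactory consumerConnectionFactory) {
        this.consumerConnectionFactory = consumerConnectionFactory;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void waitForDeclarations() {
        // Opening the (cached) consumer connection runs the RabbitAdmin
        // connection listener, which declares all exchanges, queues and
        // bindings; this call blocks until those declarations are done.
        consumerConnectionFactory.createConnection().close();
    }
}

Only after this call returns would you start publishing.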
I have this method implemented in a Spring Boot application:
@Scheduled(fixedDelay = 5000)
public void pullMessage() {
    MessageDTO message = null;
    try {
        message = rabbitTemplate.receiveAndConvert(properties.getQueueName(), new ParameterizedTypeReference<MessageDTO>() {});
        // more code here...
    } catch (Exception e) {
        // ...
    }
}
Every 5 seconds I pull a message from RabbitMQ and do some processing with it. The application is running on Kubernetes, and right now I have to duplicate the pod. In this scenario, could the two pods pull the same message?
If the queue is the same for all the instances, then no: only one consumer takes any given message from a queue. That's the fundamental purpose of the queue pattern.
See AMQP docs for publish-subscribe patterns: https://www.rabbitmq.com/tutorials/tutorial-three-java.html
No, only a single instance will process the message at any one time; the whole purpose of having multiple consumers is to avoid any downtime for the application!
Refer to the official RabbitMQ documentation for more clarification:
https://www.rabbitmq.com/tutorials/tutorial-one-java.html
I've got a project where we are going to have hundreds (potentially thousands) of queues in RabbitMQ, and each of these queues will need to be consumed by a pool of consumers.
In RabbitMQ (using spring-amqp), you have the @RabbitListener annotation, which allows me to statically assign the queues this particular consumer (or consumers) will handle.
My question is: with RabbitMQ and Spring, is there a clean way for me to grab a section of queues (let's say queues that start with a-c) and then also listen for any queues that are created while the consumer is running?
Example (at start):
ant-queue
apple-queue
cat-queue
While the consumer is running:
Add bat-queue
Here is the (very simple) code I currently have:
@Component
public class MessageConsumer {

    public MessageConsumer() {
        // ideally grab a section of queues here, initialize a parameter and give it to the @RabbitListener annotation
    }

    @RabbitListener(queues = {"ant-queue", "apple-queue", "cat-queue"})
    public void processQueues(String messageAsJson) {
        // < how do I update the queues declared in the @RabbitListener above? >
    }
}
Edit:
I should add: I've gone through the Spring AMQP documentation I found online, and I haven't found anything beyond declaring the queues statically (either hardcoded or via properties).
Inject (@Autowired or otherwise) the RabbitListenerEndpointRegistry.
Get a reference to the listener container (use the id attribute on the annotation to give it a known id): registry.getListenerContainer(id).
Cast the container to an AbstractMessageListenerContainer and call addQueues() or addQueueNames().
Note that it is more efficient to use a DirectMessageListenerContainer when adding queues dynamically; with a SimpleMessageListenerContainer the consumer(s) are stopped and restarted. With the direct container, each queue gets its own consumer(s).
See Choosing a container.
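A minimal sketch of those steps, assuming the @RabbitListener above is given id = "letterQueues" (the class and method names here are illustrative):

import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class QueueSubscriptionManager {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    // call this when a new queue (e.g. "bat-queue") appears at runtime
    public void listenTo(String queueName) {
        AbstractMessageListenerContainer container =
                (AbstractMessageListenerContainer) registry.getListenerContainer("letterQueues");
        container.addQueueNames(queueName);
    }
}

The listener itself would then be declared as @RabbitListener(id = "letterQueues", queues = {"ant-queue", "apple-queue", "cat-queue"}).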
I have a simple (that's what I think) Spring Boot application. There are 4 layers:
Rest Controller
Application Service (called by the Rest Controller)
Domain Service (called by Application Service. It connects to the database - repository layer)
Adapter Service (called by Application Service for outbound calls via Hystrix)
Now the problem is that it can only handle a maximum of 15 parallel calls. If any additional REST API request arrives while these calls are being processed, it makes it to the Application Service layer and then waits. Only once one of those 15 parallel calls returns does the new request proceed to call the Domain Service layer and return.
I tried multiple things:
Increasing spare threads for the server in the application.properties file:
server.tomcat.min-spare-threads=1000
server.tomcat.max-connections=1000
server.tomcat.max-threads=1000
Once I do this, I see the number of http-nio-* threads increase to 1000, but the hanging issue is not fixed.
I found this snippet online to customize the Tomcat container, but it didn't help either:
@Bean
public WebServerFactoryCustomizer<TomcatServletWebServerFactory> containerCustomizer() {
    return new WebServerFactoryCustomizer<TomcatServletWebServerFactory>() {
        @Override
        public void customize(TomcatServletWebServerFactory factory) {
            factory.addConnectorCustomizers(new TomcatConnectorCustomizer() {
                @Override
                public void customize(Connector connector) {
                    Arrays.stream(connector.getProtocolHandler().findUpgradeProtocols())
                            .filter(upgradeProtocol -> upgradeProtocol instanceof Http2Protocol)
                            .map(upgradeProtocol -> (Http2Protocol) upgradeProtocol)
                            .forEach(http2Protocol -> {
                                http2Protocol.setMaxConcurrentStreamExecution(1000);
                            });
                }
            });
        }
    };
}
I tried configuring the thread pool via code:
@Bean(name = "taskExecutor")
public TaskExecutor threadPoolTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(200);
    executor.setMaxPoolSize(300);
    executor.setQueueCapacity(300);
    executor.setThreadNamePrefix("anniversary");
    executor.initialize();
    System.out.println("******* name " + executor.getThreadNamePrefix());
    System.out.println("********** core pool size " + executor.getCorePoolSize());
    return executor;
}
But none of this helps, and I believe the issue is not with the number of threads but elsewhere, since the request is not able to go from one service to another. There are hundreds of http-nio-* threads in the waiting state, and when a new request comes in, it's assigned its own thread; I can see that in debug mode.
Any pointers, help, or tips are much appreciated. What resource is required for service-to-service invocation in Spring Boot?
I believe your observation is right: it's most likely not Tomcat that is the bottleneck here. From what you write, I would rather look at the domain service. Is the domain service doing some communication with the database, or talking to something else over the network (for example over HTTP)?
If you do talk to a database in there, check Spring's datasource configuration. There is going to be a database connection pool with a limited maximum number of concurrent connections to the database. Once these connections are all in use, threads that want to talk to the DB will be blocked until one of the connections becomes free again.
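As an illustration only: with Spring Boot 2.x the default pool is HikariCP, whose default maximum is 10 connections; it is sized via properties like the following (the value 50 is just an example, not a recommendation):

spring.datasource.hikari.maximum-pool-size=50
spring.datasource.hikari.connection-timeout=30000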
Similar connection pools are in place for many other things that talk over the network (e.g. the Apache HTTP client also has a connection pool that can be configured).
That's where I would look next.
Cheers,
Matthias
I have an EJB that sends a message to a JMS queue and waits for the reply. I want to test the EJB, and it's easy to use OpenEJB to write a JUnit test for it. But the problem is that this EJB will wait for the JMS response before continuing.
Although I can send a message from my JUnit code, the EJB call is still in progress, so I cannot run that code until the EJB has completed.
A second solution is to set up an MDB to listen and reply to the JMS message from the EJB, but the problem is that the MDB must be in src/main/java and cannot be in src/test/java. The issue is that this is just test code and I should not package it for the production environment. (I use Maven.)
Or should I use mock objects?
You're on the right track. There are a few ways to handle this. Here are a couple of tips for unit testing with OpenEJB and Maven.
Test beans
You can write all sorts of EJBs and other testing utilities and have them deployed. All you need is a ejb-jar.xml for the test code like so:
src/main/resources/ejb-jar.xml (the normal one)
src/test/resources/ejb-jar.xml (the testing beans)
As usual the ejb-jar.xml file only needs to contain <ejb-jar/> and nothing more. Its existence simply tells OpenEJB to inspect that part of the classpath and scan it for beans. Scanning the entire classpath is very slow, so this is just convention to speed that up.
TestCase injection
With the above src/test/resources/ejb-jar.xml you could very easily add that test-only MDB and have it set up to process the request in a way that the TestCase needs. But the src/test/resources/ejb-jar.xml also opens up some other interesting functionality.
You could have the TestCase itself do it by declaring references to whatever JMS resources you need and have them injected.
import java.util.Properties;

import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

import junit.framework.TestCase;

import org.apache.openejb.api.LocalClient;

@LocalClient
public class ChatBeanTest extends TestCase {

    @Resource
    private ConnectionFactory connectionFactory;

    @Resource(name = "QuestionBean")
    private Queue questionQueue;

    @Resource(name = "AnswerQueue")
    private Queue answerQueue;

    @EJB
    private MyBean myBean;

    @Override
    protected void setUp() throws Exception {
        Properties p = new Properties();
        p.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.openejb.client.LocalInitialContextFactory");
        InitialContext initialContext = new InitialContext(p);
        initialContext.bind("inject", this); // here's the magic!
    }
}
Now you're just one thread away from being able to respond to the JMS message in the test case itself. You can launch a little Runnable that will read a single message, send the response you want, then exit.
Maybe something like:
public void test() throws Exception {
    final Thread thread = new Thread() {
        @Override
        public void run() {
            try {
                final Connection connection = connectionFactory.createConnection();
                connection.start();
                final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                final MessageConsumer incoming = session.createConsumer(questionQueue);
                final String text = ((TextMessage) incoming.receive(1000)).getText();
                final MessageProducer outgoing = session.createProducer(answerQueue);
                outgoing.send(session.createTextMessage("Hello World!"));
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    };
    thread.setDaemon(true);
    thread.start();

    myBean.doThatThing();

    // asserts here...
}
Alternate Descriptors
If you did want to use the MDB solution and only wanted to enable it for just the one test and not all tests, you could define it in a special src/test/resources/mockmdb.ejb-jar.xml file and enable it in the specific test case(s) where it is needed.
See this doc for more information on how to enable that descriptor and the various options of alternate descriptors.
I think you should use mocks for this. If you're sending messages to a real JMS server, listening for them, replying to them, etc., then you're doing something other than a unit test. I'm not going to get into the argument about what that should be called, but I think it's pretty well universally accepted that a unit test shouldn't be talking to live databases, message queues, etc.
If I've understood your question correctly, it's a bad design to have an EJB send a JMS message and then await a response; in fact it contradicts the whole idea of EJB.
You send a JMS message and then forget about it. You have an MDB to receive the message. If the EJB depends on a response, JMS is not the way to go; rather, use another EJB.
To test the sending, mock the JMS classes and test the MDB separately.
EJBs are designed for synchronous tasks, JMS for asynchronous tasks. If you have to do asynchronous communication with an external system, I suggest you design your system around that and build proper asynchronous flows. An EJB that sits and waits for a JMS reply is at best an ugly hack and will not add anything good to your system design.
Thanks to David for his answer; it's what I wanted. I know a unit test should not depend on external resources like a JMS server, but with Maven + OpenEJB I can still keep the test code in a closed environment. This helps with automated tests that have external resource dependencies, especially for old programs that are not easy to refactor.
And if you see the following error message from initialContext.bind("inject", this):
Ensure that class was annotated with @org.apache.openejb.api.LocalClient and was successfully discovered and deployed.
One reference is http://openejb.apache.org/3.0/local-client-injection.html, but adding "openejb.tempclassloader.skip=annotations" doesn't work for me. Please check the doc "OpenEJB Local Client Injection Fails". There is already a patch for it; I think it will be fixed in OpenEJB 3.1.5 or 4.0.
Also, I've found it is best practice to break the logic in your MDB out into a different class. This keeps your business logic out of the MDB itself and allows you to expose it in more than one way (MDB, EJB, web service, POJO, etc.). It also allows you to test your business logic more easily, without needing to test the protocol (JMS in this case).
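For instance, a rough sketch of that separation (the OrderService / OrderMdb names are made up for illustration; the two classes would live in separate files):

import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.ejb.Stateless;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// OrderService.java - plain business logic, testable without any JMS plumbing
@Stateless
public class OrderService {
    public void process(String payload) {
        // ... business rules here ...
    }
}

// OrderMdb.java - thin MDB that only unwraps the message and delegates
@MessageDriven
public class OrderMdb implements MessageListener {

    @EJB
    private OrderService orderService;

    @Override
    public void onMessage(Message message) {
        try {
            orderService.process(((TextMessage) message).getText());
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}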
As for testing JMS, mocking may be the better choice. Or, if you really need to test the protocol "in container", look at using something like the JBoss Microcontainer (I believe you can get this packaged with some of the JBoss projects, like Seam). Then you can fire up a mini-container for testing things like EJB and JMS.
But overall, it is best to avoid needing a container unless absolutely necessary. That's why separating your business logic from your implementation logic (even if you don't use mocks) is a good practice.