How can we achieve asynchronous communication in Java RMI using a blocking priority queue?
In my application, I have a lead server and two slave servers. The lead server calls methods on the slave servers and passes objects over RMI. I am able to do this synchronously, but now I want to do it asynchronously. How can I achieve this using a blocking priority queue (e.g. PriorityBlockingQueue)?
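One way to picture the asynchronous variant: the lead server drops work items into a PriorityBlockingQueue and returns immediately, while a worker thread drains the queue, performs the (still synchronous) RMI calls, and delivers results through a callback. Below is a minimal sketch of that idea; SlaveService, RmiJob, and the String payloads are hypothetical stand-ins for your own remote interface and data.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.function.Consumer;

// Hypothetical remote interface; substitute your own slave-side API.
interface SlaveService extends Remote {
    String process(String payload) throws RemoteException;
}

// A unit of work with a priority and a callback for the eventual result.
class RmiJob implements Comparable<RmiJob> {
    final int priority;
    final String payload;
    final Consumer<String> onResult;

    RmiJob(int priority, String payload, Consumer<String> onResult) {
        this.priority = priority;
        this.payload = payload;
        this.onResult = onResult;
    }

    public int compareTo(RmiJob other) {
        return Integer.compare(other.priority, this.priority); // highest priority first
    }
}

class AsyncRmiDispatcher {
    private final PriorityBlockingQueue<RmiJob> queue = new PriorityBlockingQueue<>();

    AsyncRmiDispatcher(SlaveService slave) {
        // One worker drains the queue, so callers never wait on the RMI round trip.
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    RmiJob job = queue.take();                       // blocks until work arrives
                    job.onResult.accept(slave.process(job.payload)); // the RMI call itself
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } catch (RemoteException e) {
                    e.printStackTrace();                             // retry/report as appropriate
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Returns immediately; the result arrives later via the callback.
    void submit(int priority, String payload, Consumer<String> onResult) {
        queue.put(new RmiJob(priority, payload, onResult));
    }
}

With this in place, submit(...) returns immediately and higher-priority jobs jump the queue; for true server-side asynchrony you would apply the same queue-plus-worker pattern on the slave side.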
I am writing a service which serves stateless incoming requests. The requests are all mathematical calculations, which do not take long to execute (max 2 ms).
I use Tibco EMS to communicate between client and server. A client library is provided which wraps the client-side logic (e.g. converting data into an EMS message) and sends the request to a request queue. The server side processes the request and sends the response to a separate queue. This works fine.
The server side is multi-threaded. A new thread is created when a new incoming request is received. Requests are therefore handled concurrently.
The server side uses one single EMS connection to the EMS server. However, because an EMS Session is not thread safe, if I want to be able to write the response to the EMS queue in each thread, I have to create one session per thread using the connection factory. This degraded performance.
The time spent on traffic is around 3-4 ms; i.e., the time between sending a request and receiving the response is around 5-6 ms (3-4 ms for transport and marshalling/unmarshalling, 2 ms for calculation).
Is there any solution which allows me to send to an EMS queue concurrently without creating too many JMS objects?
Are there any other important rules I need to follow to further optimize the service? Some basic optimization guidelines are already followed:
Use CachedConnectionPool
Send JMS messages as NON_PERSISTENT
Use one EMS connection for all requests.
Thank you very much.
The behavior you are experiencing is not specific to EMS. The behavior is dictated by the JMS specification itself. Here is an extract from section 2.8 of the JMS Specification:
There are two reasons for restricting concurrent access to Sessions. First, Sessions are the JMS entity that supports transactions. It is very difficult to implement transactions that are multithreaded. Second, Sessions support asynchronous message consumption. It is important that JMS not require that client code used for asynchronous message consumption be capable of handling multiple, concurrent messages. In addition, if a Session has been set up with multiple, asynchronous consumers, it is important that the client is not forced to handle the case where these separate consumers are concurrently executing. These restrictions make JMS easier to use for typical clients. More sophisticated clients can get the concurrency they desire by using multiple sessions.
If you want to avoid the creation (and destruction) of that many objects, you might want to pre-create a pool of threads, and allocate a session to each thread up front.
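A minimal sketch of that idea, assuming the plain javax.jms API (class and queue names are illustrative): the Connection is shared, and each pooled thread lazily creates and keeps exactly one Session of its own.

import javax.jms.*;

class PerThreadSessionSender {

    // Holds one Session plus its producer, owned by exactly one thread.
    private static final class SessionHolder {
        final Session session;
        final MessageProducer producer;
        SessionHolder(Session session, MessageProducer producer) {
            this.session = session;
            this.producer = producer;
        }
    }

    private final Connection connection;   // Connections are thread safe and can be shared
    private final String queueName;
    private final ThreadLocal<SessionHolder> holders = ThreadLocal.withInitial(this::newHolder);

    PerThreadSessionSender(Connection connection, String queueName) {
        this.connection = connection;
        this.queueName = queueName;
    }

    private SessionHolder newHolder() {
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue(queueName));
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            return new SessionHolder(session, producer);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    void send(String text) throws JMSException {
        SessionHolder h = holders.get();    // this thread's own session, created at most once
        h.producer.send(h.session.createTextMessage(text));
    }
}

Driven from a fixed pool (Executors.newFixedThreadPool(n)), this ends up creating exactly n sessions and reusing them for the pool's lifetime, instead of one session per request.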
We're planning on moving a lot of our single-threaded, synchronous batch jobs to a more distributed architecture with workers. The thought is to have a master process read records off the database and send them to a queue, then have multiple workers read off the queue and process the records in parallel.
Is there any well-known Java pattern for a simple CLI/batch job that constantly runs to poll/listen for messages on queues? I'd like to use that for all the workers. Or is there a better way to do this? Should the listener/worker be deployed in an app container, or can it be just a standalone program?
Thanks
Edit: also to note, I'm not looking to use JavaEE/JMS, but rather hosted solutions like SQS, a hosted RabbitMQ, or IronMQ.
If you're using a JavaEE application server (and if not, you should), you don't have to program that logic by hand since the application server does it for you.
You then implement and deploy a message driven bean that listens to a queue and processes the message received. The application server will manage a connection pool to listen to queue messages and create a thread with an instance of your message driven bean which will receive the message and be able to process it.
The messages will be processed concurrently since the application server will have a connection pool and a thread pool available to listen to the queue.
All JavaEE-featured application servers like IBM WebSphere or JBoss have configuration available in their admin consoles to create message queue listeners depending on the message queue implementation, and to bind these message queue listeners to your message driven bean.
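For illustration, a minimal message driven bean might look like the sketch below; the destination name jms/WorkQueue and the text-message payload are assumptions to be replaced by your own configuration.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// The destination name is a placeholder; bind it to your real queue
// in the application server's admin console.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms/WorkQueue")
})
public class RecordWorkerBean implements MessageListener {

    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String record = ((TextMessage) message).getText();
                process(record);                // your batch-record processing goes here
            }
        } catch (Exception e) {
            throw new RuntimeException(e);      // triggers redelivery per container policy
        }
    }

    private void process(String record) {
        // placeholder for the actual work
    }
}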
I don't know a lot about this, and maybe I'm not really answering your question, but I tried something a few months ago that might interest you for dealing with message queues.
You can have a look at this: http://www.rabbitmq.com/getstarted.html
It seems the Work Queues pattern could fit your requirements.
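For a flavor of what such a standalone worker looks like, here is a sketch along the lines of that Work Queues tutorial, assuming a recent (5.x) RabbitMQ Java client; the host and queue name are placeholders.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class Worker {
    private static final String TASK_QUEUE = "task_queue"; // name from the tutorial

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // point at your broker
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.queueDeclare(TASK_QUEUE, true, false, false, null); // durable queue
        channel.basicQos(1); // hand each worker one unacked message at a time

        DeliverCallback callback = (consumerTag, delivery) -> {
            String record = new String(delivery.getBody(), "UTF-8");
            process(record);                                // your record processing
            // ack only after successful processing; an unacked message is
            // redelivered if this worker dies before acking
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume(TASK_QUEUE, false, callback, consumerTag -> {});
        // main can simply return; the client's consumer threads keep the JVM alive
    }

    private static void process(String record) { /* ... */ }
}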
I have implemented a server in Java, upon receiving data from some client it simply forwards the data to all other clients (including the sender). I'm happy with my OO-design, I wrap all sockets in classes that provide 'callbacks'. These are called when some data are ready (or when the socket closes) -- using this design I could easily implement a simple TLV protocol to atomically send packets: the callback is not called until a full packet is received.
Now, I use the blocking I/O calls of the java.io package on the socket streams (and make them appear 'asynchronous' through those callbacks). So I use threads inside my socket wrapper classes: when a socket is opened, that function returns a Runnable implementation that, when run, will make the blocking calls on the InputStream, buffer data and eventually call the callback.
=> In a client application, I simply launch this Runnable in a Thread instance, because it's just one thread.
=> In my server, I submit all Runnable implementations I get upon creating new sockets (i.e. when accepting new clients) to a ThreadPoolExecutor. (FYI: the callbacks of the sockets simply put the received packets in a BlockingQueue. A single, separate (non-pooled) "dispatcher" Thread instance constantly takes the packets from this queue and writes them to all sockets currently connected to the server.)
QUESTION: This all works great, however I'm unsure about my use of the ThreadPoolExecutor, because the Runnable instances submitted are almost always blocking. Will the ThreadPoolExecutor react to this, or will the pooled threads simply block? Because if all pooled threads are blocking while executing their Runnable and then a new Runnable is submitted, what happens? Suspend the new Runnable? That's not good, because then the newly connected client will have zero responsiveness until some older client disconnects. If by contrast the thread pool chooses to spawn a new thread to handle the Runnable, then I actually get a thread-per-client scenario.
I want the thread pool to 'preempt' the blocking threads and use them to handle other sockets, like an operating system that suspends I/O bound processes and doesn't schedule them again until their I/O is complete. Is that at all possible, or will I have to rewrite everything using nio in order to do this? (if nio is required, could you point out where I should start reading?)
Thanks in advance!
About the ThreadPoolExecutor: it depends. An Executors.newCachedThreadPool() will just create new threads for new Runnables. See also this question and the accepted answer. But you will end up with a thread-per-client scenario.
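A quick way to see this behavior (an illustrative demo, not production code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        // Submit 100 tasks that block "forever": a cached pool does not queue them
        // behind each other, it spawns roughly one thread per blocked task.
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(Long.MAX_VALUE); // stands in for a blocking socket read
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        Thread.sleep(1000); // give the pool a moment to start them all
        System.out.println("Active threads: " + Thread.activeCount()); // roughly 100 + main
        pool.shutdownNow();
    }
}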
Nio prevents the thread-per-client scenario (if there are many clients sending relatively small messages with pauses in between; see also (the summary of) this article). I advise against trying to build your own nio clone.
Implementing nio from the ground up is not easy; a tutorial can be found here. It might be easier to use a nio server like Netty.
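To give a feel for what "from the ground up" involves, here is a minimal single-threaded selector loop (a sketch only; the port and buffer size are arbitrary, and real code needs per-connection buffers to feed your TLV framing):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioServerSkeleton {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                  // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer); // never blocks: the channel is readable
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        // feed 'buffer' into your TLV framing / callback layer here
                    }
                }
            }
        }
    }
}

One thread services all connections; the blocking happens only in select(), which is exactly the "suspend until I/O is ready" behavior asked about above.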
Another alternative is to use a technology designed to handle many clients that send and receive small messages. It takes some time to learn and set up, but I managed to get a Tomcat WebSockets server talking with a Jetty WebSocket client pretty quickly. A rewrite to use this technology could be less work.
RabbitMQ RPC
I decided to use RabbitMQ RPC as described here.
My Setup
Incoming web requests (on Tomcat) will dispatch RPC requests over RabbitMQ to different services and assemble the results. I use one reply queue with one custom consumer that listens to all RPC responses and collects them with their correlation id in a simple hash map. Nothing fancy there.
This works great in a simple integration test on controller level.
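For reference, a minimal sketch of that correlation-id dispatch, assuming the RabbitMQ Java client; the CompletableFuture bookkeeping and class names are illustrative, not the exact code:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class RpcClient {
    private final Channel channel;
    private final String replyQueue;
    private final Map<String, CompletableFuture<byte[]>> pending = new ConcurrentHashMap<>();

    RpcClient(Channel channel) throws IOException {
        this.channel = channel;
        this.replyQueue = channel.queueDeclare().getQueue(); // server-named reply queue
        // One consumer for all responses, matched back by correlation id.
        channel.basicConsume(replyQueue, true, new DefaultConsumer(channel) {
            public void handleDelivery(String tag, Envelope env,
                                       AMQP.BasicProperties props, byte[] body) {
                CompletableFuture<byte[]> f = pending.remove(props.getCorrelationId());
                if (f != null) f.complete(body);
            }
        });
    }

    CompletableFuture<byte[]> call(String serviceQueue, byte[] request) throws IOException {
        String corrId = UUID.randomUUID().toString();
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(corrId, future);
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .correlationId(corrId)
                .replyTo(replyQueue)
                .build();
        channel.basicPublish("", serviceQueue, props, request);
        return future;
    }
}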
Problem
When I try to do this in a web project deployed on Tomcat, Tomcat refuses to shut down. jstack and some debugging showed me that a thread is spawned to listen for the RPC response and is blocking Tomcat from shutting down gracefully. I guess this is because the thread is created at application level instead of request level and is not managed by Tomcat. When I set breakpoints in Servlet.destroy() or ServletContextListener.contextDestroyed(ServletContextEvent sce), they are not reached, so I see no way to manually clean things up.
Alternative
As an alternative, I could use a new reply queue (and a simple QueueingConsumer) for each web request. I've tested this; it works and Tomcat shuts down as it should. But I'm wondering if this is the way to go. Can a RabbitMQ cluster deal with thousands (or even millions) of short-lived queues/consumers? I can imagine the queues aren't that big, but still: constantly broadcasting to all cluster nodes, the total memory footprint...
Question
So in short: is it wise to create a queue for each incoming web request, or how should I set up RabbitMQ with one queue and consumer so Tomcat can shut down gracefully?
I found a solution for my problem:
The Java client creates its own threads. There is the possibility to supply your own ExecutorService when creating a new connection. Doing so in the ServletContextListener.contextInitialized() method, one can keep track of the ExecutorService and shut it down manually in the ServletContextListener.contextDestroyed() method:
executorService.shutdown();
executorService.awaitTermination(20, TimeUnit.SECONDS);
I used Executors.newCachedThreadPool(), as the threads have many short executions, and they get cleaned up after being idle for more than 60 s.
This is the link to the RabbitMQ Google group thread (thanks to Michael Klishin for pointing me in the right direction).
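Putting it together, such a listener might look like the sketch below (the broker host is a placeholder; the 20-second timeout mirrors the snippet above):

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class RabbitLifecycleListener implements ServletContextListener {
    private ExecutorService executorService;
    private Connection connection;

    public void contextInitialized(ServletContextEvent sce) {
        try {
            executorService = Executors.newCachedThreadPool();
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                    // your broker host
            // The client runs its consumer threads on our executor
            connection = factory.newConnection(executorService);
            sce.getServletContext().setAttribute("rabbitConnection", connection);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void contextDestroyed(ServletContextEvent sce) {
        try {
            connection.close();                              // stop deliveries first
        } catch (Exception ignored) {}
        executorService.shutdown();
        try {
            executorService.awaitTermination(20, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}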
How do you implement a thread that handles client requests on the server using UDP? I have read somewhere that you can use a ThreadPoolExecutor; is using this method OK? Because there aren't many articles on the web that give examples of multithreaded UDP applications.
So my question is: should I use a ThreadPoolExecutor?
Does someone have an example of how to implement a multithreaded UDP server/client application?
This is simple to do using TCP, so I have used TCP multithreading; I just wanted to grasp how UDP works this way.
The thing is, Executors are not at all an issue here. It does not matter whether you use a ThreadPoolExecutor or manual threads to do it. ThreadPoolExecutor, or any other ExecutorService for that matter, is just a service to manage threads and the work given to them. It has got nothing to do with what your Runnables or Callables are.
In your program you will only be giving Runnables or Callables to an executor. The ExecutorService doesn't care what is inside the Runnable, because it's its job to execute them. So the way of using an ExecutorService doesn't change between a TCP server and a UDP server as far as the ThreadPoolExecutor is concerned. Just modify the Runnable to be sent and all done :)
The main trick with UDP is that it is unreliable. You have to implement your own detection/handling of lost packets.
Once you have requests, you can use an Executors.newXxxxxx() pool just the same as with TCP.
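A sketch of that receive-loop-plus-pool shape (the port, pool size, and the echo handler are placeholders):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketAddress;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UdpServer {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9876);   // placeholder port
        ExecutorService pool = Executors.newFixedThreadPool(8);

        // One receiver loop; the pooled workers handle packets concurrently.
        while (true) {
            byte[] buf = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                         // blocks until a datagram arrives

            // Copy the payload before submitting, so each task owns its own data.
            byte[] data = Arrays.copyOf(packet.getData(), packet.getLength());
            SocketAddress client = packet.getSocketAddress();

            pool.submit(() -> {
                try {
                    byte[] reply = process(data);           // your request handling
                    socket.send(new DatagramPacket(reply, reply.length, client));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
    }

    private static byte[] process(byte[] request) {
        return request;                                     // echo; replace with real logic
    }
}

Note that UDP has no per-client connection, so unlike TCP there is no socket-per-client: one DatagramSocket receives everything, and the reply is addressed back to the sender of each packet.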