Blocking vs non-blocking main thread in Tomcat - Java

Normally in Tomcat, a main thread runs and, when a request comes in, assigns the responsibility of servicing that request to a thread from the thread pool.
Does it matter whether that main thread is blocking or non-blocking in terms of scalability?

Non-blocking IO has the following advantages (see the sketch after this list):
Highly scalable: you no longer require one thread per client, so the server can effectively support far more concurrent clients.
High keep-alive: blocking IO ties up a thread for the whole keep-alive period while waiting for the next request; non-blocking IO is a notification model, so it can support long keep-alive times cheaply.
Better performance under high load: because blocking IO has one thread per connection, it requires n threads for n connections. As n increases, performance degrades because of the extra thread context switching.
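Here is a minimal sketch of that idea in plain Java NIO, assuming an echo-style server (the port and buffer size are illustrative): a single thread and a single Selector service every connection, so accepting a new client registers a channel instead of spawning a thread.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SingleThreadEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();  // block until at least one channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    // New client: register it with the same selector -- no new thread.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {  // peer closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);  // echo back (may be partial; fine for a sketch)
                    }
                }
            }
            selector.selectedKeys().clear();  // we must clear processed keys ourselves
        }
    }
}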

When an incoming request is processed in Tomcat, the connection is assigned to a thread from its thread pool.
What matters here is to finish that thread's work as fast as possible. You typically run blocking IO calls on this thread: file IO, database access, and so on.
You need to size this thread pool appropriately for your expected traffic.
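As an illustration of that sizing knob, here is a hedged sketch using the embedded Tomcat API (org.apache.catalina.startup.Tomcat, as in Tomcat 9); the maxThreads value of 200 is illustrative, not a recommendation.

import org.apache.catalina.startup.Tomcat;

public class SizedServer {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);
        // Each blocking request occupies one of these threads for its full
        // duration, so size the pool for the expected concurrency.
        tomcat.getConnector().setProperty("maxThreads", "200");
        tomcat.start();
        tomcat.getServer().await();
    }
}

In a standalone install, the same maxThreads property is set on the Connector (or a shared Executor) in server.xml.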
Essentially, when using the Java EE servlet spec you are forced into handling your requests in a one-thread-per-incoming-connection manner.
There are a few non-blocking frameworks out there. Check out http://www.playframework.org/ and Jetty (is Jetty non-blocking by default?).

Related

Java: Does making async calls increase number of threads?

So let's say I have a web app and for every request we spawn a new thread. Hundreds of requests come in, somewhere in the web server code we make synchronous calls to several services, we block and wait. This approach bloats the number of threads that we have as the sync calls create a bottleneck.
Supposedly, if we switch these calls to async requests we get rid of the bottleneck as the threads can continue and the callbacks will handle whatever needs to happen.
As far as I understand, in Java, in order to make an async call we spawn a new thread that makes the network call and contains the callback (I won't be implementing this; I'm assuming that's how some of the Java HTTP libraries work).
So my question is: how does this solve the problem of many threads? Async requests end up creating more threads (one for each request) that then go to sleep until something is returned; doesn't this create many sleeping threads?
The problem I am trying to solve is that at some point, when there's too many threads, the JVM explodes.
Specifically in web service / servlet environments:
In the simplest configuration, common web servers (Jetty, Tomcat) are configured with a fixed number of threads, or range of number of threads. If more requests arrive than there are threads, then those requests will pile up in the kernel connection queue. A thread accepts a connection and does all the work. When the response is sent, the thread is available for another connection. Adding your own thread pool or executor service won't help that.
In more complex configurations, the web container accepts connections on one pool of threads, and then dispatches the work on another, with a queue in between. Then, instead of blocking clients on connect, or having them fail to connect, they just wait.
In async Servlet processing, such as the JAX-RS @Suspended AsyncResponse object, you get to control the details of this yourself. The servlet calls you with a data structure that includes the connection. Your code can put that object into some queue (possibly just the queue built into an ExecutorService) and return. That frees the web server thread to accept another connection. Your threads, probably from an ExecutorService, work through the queue, processing requests and sending responses.
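As a sketch of that pattern, assuming a JAX-RS 2.0 resource; the path and the doSlowWork() helper are illustrative placeholders:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/report")
public class ReportResource {
    // Bounded pool; the queue built into it is where pending requests wait.
    private static final ExecutorService workers = Executors.newFixedThreadPool(10);

    @GET
    public void get(@Suspended final AsyncResponse response) {
        // Hand the suspended connection to our own pool and return at once,
        // freeing the web server thread to accept another connection.
        workers.submit(() -> response.resume(doSlowWork()));
    }

    private String doSlowWork() { return "done"; }  // placeholder for real work
}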
What you never do is create an unbounded number of threads.
Asynchronous means the request is processed by another thread. It doesn't have to be a dedicated thread, let alone a new thread.
For instance, consider JAX-RS asynchronous client callbacks:
target().path("http://example.com/resource/")
        .request()
        .async()
        .get(new InvocationCallback<String>() {
            @Override
            public void completed(String dataFromBackendServer) {
                respondWith(dataFromBackendServer);   // deliver the backend's reply
            }

            @Override
            public void failed(Throwable throwable) {
                respondWithError(throwable);          // propagate the failure instead
            }
        });
Here, the InvocationCallback is executed in a thread provided by the JAX-RS implementation, which waits for a response to any of the pending backend requests and then processes it using the appropriate InvocationCallback. Because a single thread can wait on any number of pending backend requests, fewer threads are needed.
That said, synchronous processing is often easier to implement, and while it does not scale quite as well as asynchronous processing, it scales sufficiently for many applications. That is, unless you have thousands of concurrent requests, the plain old synchronous processing model will do.
There is no such problem as "too many threads": servers always have a thread pool. They assign a thread from the pool to each request; if no thread is available, the server just makes the request's socket wait in the ServerSocket's queue.
The problem that async request processing in Servlet 3 is trying to solve is poor resource utilization caused by blocked request-processing threads.
So if there are long-running requests that just wait for IO, they are put on hold until the response arrives from the IO channel, and the thread is reassigned to another request waiting in the socket queue.
This gives us better resource utilization (CPU mainly) and higher throughput, as more requests (the short-lived ones) are served per second.
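A minimal sketch of that hand-off using the Servlet 3.0 API; the servlet name, URL, and waitForIo() helper are illustrative:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();  // detach from the request thread
        ctx.start(() -> {                     // runs later, on a container-managed thread
            try {
                ctx.getResponse().getWriter().print(waitForIo());
            } catch (IOException e) {
                // ignored in this sketch
            } finally {
                ctx.complete();               // commit the response, recycle the connection
            }
        });
    }

    private String waitForIo() { return "result"; }  // stands in for the slow IO
}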

How asynchronous servlet processing improves performance

I read the following at http://docs.oracle.com/javaee/7/tutorial/doc/servlets012.htm:
Java EE provides asynchronous processing support for servlets and filters. If a servlet or a filter reaches a potentially blocking operation when processing a request, it can assign the operation to an asynchronous execution context and return the thread associated with the request immediately to the container without generating a response. The blocking operation completes in the asynchronous execution context in a different thread, which can generate a response or dispatch the request to another servlet.
I am wondering where the
different thread
comes from. Assuming the container has 10 threads and 5 of them are processing requests, do we have to use the other 5 to process the long-running business logic? Where does the performance improvement come from? The total number of threads available is limited, right?
Thanks.
Read the Servlet 3.0 final spec, Section 2.3.3.3 (Asynchronous processing), where this is explained in detail.
It causes the container to dispatch a thread, possibly from a managed thread pool, to run the specified Runnable. AsyncContext is the standard way, defined in the Servlet 3.0 specification, to handle HTTP requests asynchronously.
Basically, the HTTP request is no longer tied to an HTTP thread, allowing us to handle it later, possibly using fewer threads. It turns out the specification provides, out of the box, an API for running the asynchronous work in a different thread pool.
Read more about Executors.newFixedThreadPool(), which creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be actively processing tasks. If additional tasks are submitted while all threads are active, they wait in the queue until a thread is available.
Please have a look at ExecutorService to read more about it, along with sample code; a combined sketch follows.
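Putting the two together, a hedged sketch of an async servlet that dispatches its work to a fixed pool of its own; the pool size, URL, and class name are illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/async", asyncSupported = true)
public class PooledAsyncServlet extends HttpServlet {
    // The "different thread" of the question comes from here: our own
    // pool, not the container's 10 HTTP threads.
    private static final ExecutorService pool = Executors.newFixedThreadPool(5);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        pool.submit(() -> {                   // the HTTP thread is already free here
            try {
                ctx.getResponse().getWriter().print("done");
            } catch (Exception e) {
                // ignored in this sketch
            } finally {
                ctx.complete();
            }
        });
    }
}

The improvement comes not from having more threads overall, but from returning the HTTP threads to the container while the slow work waits elsewhere.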

Netty client uses only one thread

I'm implementing a binary protocol on top of TCP/IP and using Netty to achieve this. My problem is that performance is rather poor (600 msg/s). I'm connecting as a client to a server with one connection only. When I investigated the running instance with JTop I saw that Netty was using one worker thread very heavily while the other 5 worker threads were doing nothing (0% usage). I was digging on the web and all I found is mention of ExecutionHandler. But why should I use this if those 6 worker threads should be enough? Or am I misunderstanding how Netty uses these threads?
My Netty init code:
this.channelFactory = new NioClientSocketChannelFactory(
        this.executors,                                // boss executor
        DaemonExecutors.newCachedDaemonThreadPool(),   // worker executor
        1, 6);                                         // 1 boss thread, 6 worker threads
this.clientBootstrap = new ClientBootstrap(channelFactory);
this.channelGroupHandler = new ClientChannelGroupHandler(this.channels);
this.clientBootstrap.getPipeline().addLast("ChannelGroupHandler", this.channelGroupHandler);
Thanks for any hints
Matous
NIO, or rather the non-blocking flavor of NIO ("New" IO), allows you to use a single thread for multiple connections, since the thread doesn't block (hence the name) on read/write operations. Blocking IO requires a thread for each connection, as blocking on one connection would prevent you from handling traffic on the others.
This allows you to communicate more efficiently, since, for one thing, you no longer have the per-thread overhead.
A decent tutorial is available here (the original Oracle tutorial seems to have vanished from the face of the Google).
The reason you only see one worker thread being used is that you are making only a single connection to the server. Had you made multiple connections, more worker threads would have been used, as the sketch below shows.
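For illustration, a hedged sketch using the same Netty 3 API as the question; the host, port, and connection count are placeholders, and pipeline handlers are omitted for brevity:

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

public class MultiConnectionClient {
    public static void main(String[] args) {
        ClientBootstrap bootstrap = new ClientBootstrap(
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),  // boss
                        Executors.newCachedThreadPool(),  // workers
                        1, 6));
        // Each channel is pinned to one worker for its lifetime, so several
        // connections are needed before all 6 workers see any traffic.
        for (int i = 0; i < 6; i++) {
            bootstrap.connect(new InetSocketAddress("example.com", 9000));
        }
    }
}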
If each connection's work is suited to parallelization, you can implement a handler that uses threads internally, but Netty won't do that for you.
As for the NIO/OIO distinction, it's true that the idea of NIO is to have one thread handle the events for multiple connections. However, this doesn't mean one thread handles all the work. The "single thread" only dispatches work to other (i.e. worker) threads.
Here is an excerpt from the Netty documentation:
How threads work
There are two types of threads in a NioServerSocketChannelFactory: one is the boss thread and the other is the worker thread.
Boss threads
Each bound ServerSocketChannel has its own boss thread. For example, if you opened two server ports such as 80 and 443, you will have two boss threads. A boss thread accepts incoming connections until the port is unbound. Once a connection is accepted successfully, the boss thread passes the accepted Channel to one of the worker threads that the NioServerSocketChannelFactory manages.
Worker threads
One NioServerSocketChannelFactory can have one or more worker threads. A worker thread performs non-blocking reads and writes for one or more Channels in a non-blocking mode.

Do slow connections affect Netty performance?

CODE-1
new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),   // boss executor
        Executors.newCachedThreadPool(),   // worker executor
        WORKER_SIZE)
CODE-2
OrderedMemoryAwareThreadPoolExecutor executor =
        new OrderedMemoryAwareThreadPoolExecutor(48, 0, 0, 1, TimeUnit.SECONDS);
pipeline.addLast("executor", new ExecutionHandler(executor));
If the IO worker thread pool size (default is 2 * the CPU count) can be set from CODE-1, what is the purpose of adding an executor (a thread pool) to the pipeline in CODE-2?
IO operations are done from worker threads. Does that mean a client with a slow connection or bad network keeps an IO worker thread busy until the data is completely sent? If so, would increasing WORKER_SIZE help me prevent latencies?
Slow connections do not normally affect Netty's NIO threads (but see the update note below).
Some points about Netty's internal server threads:
By default there is only one boss thread per server port; it accepts connections and hands each one over to a worker thread.
To be precise, WORKER_SIZE is the maximum number of NioWorker runnables a server can have. For example, if the server has only one connection, there will be one worker thread. Once there are more active connections than WORKER_SIZE, new connections are assigned to the existing workers in a round-robin fashion.
If the IO worker thread pool size (default is 2 * the CPU count) can be set from CODE-1, what is the purpose of adding an executor (a thread pool) to the pipeline in CODE-2?
If your upstream tasks are blocking, you should execute them in a separate thread pool using an ExecutionHandler. Otherwise NIO reads and writes will not happen on time, adding latency. I think adding an ExecutionHandler will do more to reduce latency than setting a big WORKER_SIZE.
IO operations are done from worker threads. Does that mean a client with a slow connection or bad network keeps an IO worker thread busy until the data is completely sent? If so, would increasing WORKER_SIZE help me prevent latencies?
Generally speaking, increasing WORKER_SIZE beyond CPU count * 2 does not help, because NIO is non-blocking and, if I am not mistaken, CPU-intensive. For CPU-intensive tasks, CPU * 2 threads is the usual choice.
Update:
A NioWorker runs a loop with selector.select(500ms) to receive OP_READ events. selector.select with a timeout is a blocking call, so if most of the connections are slow, performance may suffer. You can reduce the timeout in org.jboss.netty.channel.socket.nio.SelectorUtil.java and test.
The thread pools you are adding in CODE-1 are for the boss threads and worker threads. The boss threads accept connections and pass them on to worker threads to handle.
The executor you add in CODE-2 is for handling the messages read by the worker threads.
Slow connections will not affect performance, since you are using a non-blocking architecture (NIO), which is set up in Netty not to block (though it could if it wanted to).

Netty threads being blocked

I have 3 ThreadPoolExecutors in my system.
One for Netty's master process, another for Netty's worker process, and the last one for ad-hoc processing (sending requests to a mail server).
ExecutorService bossExecutors = Executors.newFixedThreadPool(1,
        new ServerThreadFactory("netty-boss"));
ExecutorService workerExecutors = Executors.newFixedThreadPool(10,
        new ServerThreadFactory("netty-worker"));
ChannelFactory factory = new NioServerSocketChannelFactory(
        bossExecutors,                                  // accepts connections
        workerExecutors,                                // non-blocking IO
        Runtime.getRuntime().availableProcessors());    // worker count
ExecutorService mailExecutor = Executors.newFixedThreadPool(40);
This works perfectly fine until mailExecutor starts making requests to the mail server. Until that batch of requests through mailExecutor (generally 5000+ requests to the mail server) is completed, the Netty threads are blocked.
I don't understand why the Netty threads seem to be blocked during that time, since I have allocated separate thread pools. During that time, Netty can't process even a single request.
Any idea why it's happening or what I'm doing wrong?
Can you provide a thread dump?
jstack <pid>
Also, you should never use a fixed thread pool for the worker/boss thread pool. Use a cached one; this way you can be sure you never run into starvation. You should specify the worker count with the third argument of the constructor, as in the sketch below.
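For example, a drop-in adjustment of the snippet above (a sketch, keeping the question's variable names):

ChannelFactory factory = new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),                // boss: grows on demand
        Executors.newCachedThreadPool(),                // worker: grows on demand
        Runtime.getRuntime().availableProcessors());    // worker count, fixed here
ExecutorService mailExecutor = Executors.newFixedThreadPool(40);  // app work stays off IO threads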
It sounds like a scheduling issue. You have 40 threads under heavy load versus the availableProcessors() number of threads handling the Netty work (what is your availableProcessors() count at the time you create your factory?).
So it could just be that the Netty threads are too few and are being starved, since they rarely get picked for execution compared to the 40 threads handling the mail work.
It may also be that, for some reason, your worker threads are blocked waiting on the mail threads to finish, perhaps due to some shared object being synchronized on (is there some queue or list of mail to be sent that the Netty threads need to write to, and which the mail threads keep locked while they send?).
