Monitoring and killing blocked threads in Java

In a servlet-based app I'm currently writing we have separate thread classes for readers and writers. Data is transmitted from a writer to multiple readers by using LinkedBlockingQueue<byte[]>, so a reader safely blocks if there is no new data to get from the writer. The problem is that if the remote clients served by these reader threads terminate the connection, Tomcat won't report a broken pipe unless the writer sends in new data and attempts to transmit this new chunk to the remote clients. In other words, the following attack can be performed against our service:
Start a streaming write request and don't write any data to it at all.
Keep creating and dropping read connections. Since the writer doesn't produce any data, reading threads attached to it remain blocked and consume memory and other resources.
Observe the server run out of RAM quickly.
Should I create a single maintenance thread that would monitor the sockets belonging to blocked reader threads and call interrupt() on those that appear to have lost the connection to their respective clients? Are there any major flaws in the architecture described above? Thank you.

Sounds to me that the vulnerability lies in the fact that your readers wait forever, regardless of the state of the incoming connection (which, of course, you can't know about).
Thus a straightforward way to address this, if appropriate, would be to use the poll method on BlockingQueue rather than take. Calling poll allows you to specify a timeout, after which the reader will return null if no data has been added to the queue.
In this way the readers won't stay blocked forever, and should relatively quickly fall back into the normal processing loop, allowing their resources to be freed as appropriate.
(This isn't a panacea, of course; while the timeout is still running, the readers will consume resources. But ultimately a server with finite resources will have some vulnerability to a DDOS attack, and this at least reduces its impact to a customisably small window, instead of leaving your server permanently crippled.)
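To make this concrete, here is a minimal sketch of a reader loop built around poll with a timeout; the 30-second timeout and the connection-check and send helpers are illustrative placeholders, not part of the original design:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    class ReaderLoop {
        // Hypothetical helpers: stand-ins for the servlet's own connection check
        // and response-writing code.
        void run(BlockingQueue<byte[]> queue) throws InterruptedException {
            while (clientStillConnected()) {
                // Wait at most 30 seconds for the writer to produce data.
                byte[] chunk = queue.poll(30, TimeUnit.SECONDS);
                if (chunk == null) {
                    continue; // timed out: re-check the connection instead of blocking forever
                }
                sendToClient(chunk);
            }
        }

        boolean clientStillConnected() { return true; }   // placeholder
        void sendToClient(byte[] chunk) { /* write chunk to the servlet response */ }
    }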

The approach I have taken in the past is to have blocking connections and only have reader threads. When you want to write to multiple connections, you do that in the current thread. If you are concerned about a write blocking forever, you can have a single monitoring thread which closes blocked connections.
You can still have resources tied up in unused sockets, but I would have another thread which finds unused sockets and closes them.
This leaves you with one thread per connection, plus a couple of monitoring threads.
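A rough sketch of what such a monitoring thread could look like; the registration map, the idle limit and the scheduling interval are assumptions for illustration only:

    import java.io.IOException;
    import java.net.Socket;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    class ConnectionMonitor {
        // Connection threads register their sockets here and refresh the timestamp on activity.
        private final Map<Socket, Long> lastActivity = new ConcurrentHashMap<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        void register(Socket socket) { lastActivity.put(socket, System.currentTimeMillis()); }
        void touch(Socket socket)    { lastActivity.put(socket, System.currentTimeMillis()); }

        void start(long idleLimitMillis) {
            scheduler.scheduleAtFixedRate(() -> {
                long now = System.currentTimeMillis();
                lastActivity.forEach((socket, last) -> {
                    if (now - last > idleLimitMillis) {
                        try {
                            socket.close();  // unblocks any thread stuck reading or writing this socket
                        } catch (IOException ignored) {
                        }
                        lastActivity.remove(socket);
                    }
                });
            }, idleLimitMillis, idleLimitMillis, TimeUnit.MILLISECONDS);
        }
    }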

Related

Scenario in multi-threaded socket communication

I have a socket client application. During application startup the socket is created (a connection is established with the server) and two threads are started which run in parallel.
Thread-1: continuously reads from the socket using the read method (blocks until data is received).
Thread-2: continuously writes data.
While writing to the socket, if thread-2 receives an IOException, it discards the existing socket, creates a new socket and restarts communication. Since thread-2 discards the socket, thread-1 gets a NullPointerException.
Do we have any strategy to handle this?
Thread 2 needs to shut down the socket for input before closing it. That will cause thread 1 to receive an end of stream, which should cause it to close the socket and exit. Then thread 2 can create another socket and start another read thread.
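Sketched in code, assuming the reader owns the final close() as described above (method names are illustrative):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    class SocketHandover {

        // Thread-2 side: on a write failure, signal end of stream to the reader
        // before replacing the socket.
        void onWriteFailure(Socket oldSocket) throws IOException {
            oldSocket.shutdownInput(); // thread-1's blocking read() now sees end of stream
            // ... create the new socket and start a new reader thread for it ...
        }

        // Thread-1 side: treat end of stream as the signal to close the socket and exit.
        void readLoop(Socket socket) throws IOException {
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                // ... process n bytes ...
            }
            socket.close(); // reader performs the final close and exits
        }
    }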
You are beginning to encounter the problems associated with the proactor style of system design. Solving this problem requires some communication between the two threads. Choosing what that communication should be is where it gets messy. It has to be something that stops thread1 from trying to read the socket. I'm not so good with Java, but in C this means using a signal.
I suggest you avoid signals, even if there is an equivalent in Java.
One better option is to have thread1 blocked on a call to select() (or whatever the Java equivalent is), waiting on the socket and on a pipe. Thread2 writes to the pipe when it wants to close the socket, thread1 returns from select(), writes a response to thread2 down the pipe, and calls select() again but only on the pipe. Thread2 reads that response, closes the socket, opens a new one, sends something else down the pipe to wake up thread1 again, which can now go back to select() but this time on the pipe and the new socket. This achieves an execution rendezvous between thread1 and thread2; thread2 can close the old socket and open a new one because it knows (via the pipe communication) when thread1 is not using the socket.
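In Java, the closest analogue of the pipe trick is a Selector: thread1 blocks in select(), and thread2 can kick it out of that wait with Selector.wakeup(). A rough, simplified sketch of that rendezvous, using an Exchanger in place of the two pipe messages; all class and method names here are illustrative, not from the question:

    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.Exchanger;

    class SocketSwap {
        private final Selector selector;
        private final Exchanger<Void> rendezvous = new Exchanger<>();
        private volatile boolean swapRequested = false;
        private volatile SocketChannel channel;

        SocketSwap(Selector selector, SocketChannel initial) {
            this.selector = selector;
            this.channel = initial;
        }

        // Thread-1: the read loop. Selector.wakeup() plays the role of the pipe write.
        void readLoop() throws Exception {
            channel.configureBlocking(false);
            SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
            while (true) {
                selector.select();                    // returns on readable data or on wakeup()
                if (swapRequested) {
                    key.cancel();
                    rendezvous.exchange(null);        // "I am not using the socket"
                    rendezvous.exchange(null);        // wait for "swap finished"
                    channel.configureBlocking(false);
                    key = channel.register(selector, SelectionKey.OP_READ);
                    continue;
                }
                // ... read from the ready channel(s) here ...
                selector.selectedKeys().clear();
            }
        }

        // Thread-2: called after a write failure; swaps in a fresh connection.
        void swapSocket(SocketChannel replacement) throws Exception {
            swapRequested = true;
            selector.wakeup();                        // kick thread-1 out of select()
            rendezvous.exchange(null);                // wait until thread-1 has parked
            channel.close();
            channel = replacement;
            swapRequested = false;
            rendezvous.exchange(null);                // let thread-1 resume with the new socket
        }
    }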
This is somewhat messy. And also becoming more like the reactor design pattern. In which case one may as well simply have just one thread that uses select() to choose whether to read the socket as part of whatever loop it is executing. This single thread would be reading data when it is available, not doing a blocking read in the hope that data arrives. If something goes wrong with a socket write and the socket needs to be replaced, it simply does so; there's no other thread to sync with. Assuming your socket is connected to a remote server on a network (rather than a service on the same machine), the speed of the Ethernet will still be the dominant bottleneck; reactor style systems are no slower.
In general, dealing with network failures is far easier with the reactor systems style, because you don't have threads committed to carrying out actions that other threads know to be inappropriate. Unfortunately, most programming environments are proactor, eg Windows, Boost ASIO, RabbitMQ, etc. Proactor systems are fine until something goes wrong, after which it is often necessary to throw the whole process away because it can easily become insanely complicated for the programmer to sort out all the borked callbacks and async IOs.
One option is to use ZeroMQ if you can. This requires you to be using ZeroMQ everywhere (server too), but it makes it far easier to deal with network problems. It is a reactor, not a proactor.

How will a network connection survive thread switching?

I have a general question. If the CPU has one core and I run multiple threads on it, each thread being used for a GET request, how will the network connection survive the thread switching?
What happens if one thread starts receiving the response from the server and suddenly a thread switch happens? Considering that HTTP uses TCP communication, how would things end up?
Thanks.
TL;DR The connection will survive unless the thread gets control back too late, after the server has already terminated it by timeout.
To understand why it works this way, consider how data gets from a wire (or air) to an application.
The network interface collects data from the medium (wire) into an internal hardware buffer, and when a chunk of data is complete it raises a so-called hardware interrupt (which is just a low-level event). The OS handles the interrupt using the network interface's driver, and that chunk of data ends up in a buffer in the computer's main memory. That buffer is controlled by the OS. When the application reads data from the connection, it actually reads data from that buffer.
When a thread switch happens, the content of main memory is never lost. So when the thread gets control back, it just proceeds with its task from the point where it was suspended.
If the thread gets back to work after the server has already closed the connection by timeout, an IOException is thrown by the method that tries to read data from the connection.
This explanation is oversimplified and may be even wrong in details but should give an overall impression about how the things work.

Number of threads for NioEventLoopGroup with persistent connections

I would like to use Java Netty to create a TCP server for a large number of persistent connections from clients. In other words, imagine that there are 1000 client devices out there, and all of them create and maintain a persistent connection to the TCP server. There will be a reasonable amount of traffic (mostly lines of text) that goes back and forth across each of these persistent connections. How can I determine the best number of threads to use in the boss and worker groups for NioEventLoopGroup?
My understanding is that when the connection is created, Netty creates a SimpleChannelInboundHandler<String> object to handle the connection. When the connection is created then the handler channelActive method is called, and every time it gets a new message from the client, the method messageReceived gets called (or channelRead0 method in Netty 4.0.24).
Is my understanding correct?
What happens if I have long-running code to run in messageReceived - do I need to launch this code in yet another thread (java.util.Thread)?
What happens if my messageReceived method blocks on something or takes a long time to complete? Does that bring Netty to a grinding halt?
Basically I need to write a TCP socket server that can serve a large number of persistent connections as quickly as possible.
Is there any guidance available on number of threads for NioEventLoopGroup and on how to use any threads inside the handler?
Any help would be greatly appreciated.
How can I determine the best number of threads to use in the boss and worker groups for NioEventLoopGroup?
About the boss threads: if you are saying that you need persistent connections, there is no sense in using a lot of boss threads, because boss threads are only responsible for accepting new connections. So I would use only one boss thread.
The number of worker threads should depend on the number of your processor cores.
Don't forget to add -XmsYYYYM and -XmxYYYYM as your VM arguments, because without them you can face a case where your JVM is not using all cores.
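For illustration, a minimal Netty 4 ServerBootstrap along those lines; the port and the empty initializer are placeholders:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class PersistentConnectionServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // one thread is enough to accept connections
            EventLoopGroup workerGroup = new NioEventLoopGroup();  // default size is derived from available processors
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(bossGroup, workerGroup)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         // add your codecs and SimpleChannelInboundHandler<String> here
                     }
                 });
                ChannelFuture f = b.bind(8080).sync();
                f.channel().closeFuture().sync();
            } finally {
                bossGroup.shutdownGracefully();
                workerGroup.shutdownGracefully();
            }
        }
    }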
What happens if I have long running code to run in messageReceived - do I need to launch this code in yet another thread (java.util.Thread)?
Do you really need to do it? You should probably think about doing your logic another way; if not, then you should probably consider OIO with a new thread for each connection.
What happens if my messageReceived method blocks on something or takes a long time to complete?
You should avoid using thread blocking actions in your handlers.
Does that bring Netty to a grinding halt?
Yep, it does.
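If some blocking work is unavoidable, Netty 4 also lets you attach a handler to a separate EventExecutorGroup so that it runs off the event loop; this isn't mentioned in the answer above, but it is the usual escape hatch. A sketch, with the pool size and handler purely illustrative (it assumes String codecs are already in the pipeline):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.util.concurrent.DefaultEventExecutorGroup;
    import io.netty.util.concurrent.EventExecutorGroup;

    public class OffloadingInitializer extends ChannelInitializer<SocketChannel> {
        // A separate pool so slow handlers don't stall the NIO event loop threads.
        private static final EventExecutorGroup BLOCKING_GROUP = new DefaultEventExecutorGroup(16);

        @Override
        protected void initChannel(SocketChannel ch) {
            // Handlers added with an EventExecutorGroup run on that group, not on the event loop.
            ch.pipeline().addLast(BLOCKING_GROUP, "slowHandler",
                new SimpleChannelInboundHandler<String>() {
                    @Override
                    protected void channelRead0(ChannelHandlerContext ctx, String msg) throws Exception {
                        Thread.sleep(1000);            // stand-in for slow or blocking work
                        ctx.writeAndFlush("done\n");
                    }
                });
        }
    }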

Socket vs SocketChannel

I am trying to understand SocketChannels, and NIO in general. I know how to work with regular sockets and how to make a simple thread-per-client server (using the regular blocking sockets).
So my questions:
What is a SocketChannel?
What extra do I get when working with a SocketChannel instead of a Socket?
What is the relationship between a channel and a buffer?
What is a selector?
The first sentence in the documentation is "A selectable channel for stream-oriented connecting sockets." What does that mean?
I have also read this documentation, but somehow I am not getting it...
A Socket is a blocking input/output device. It makes the Thread that is using it block on reads and potentially also block on writes if the underlying buffer is full. Therefore, you have to create a bunch of different threads if your server has a bunch of open Sockets.
A SocketChannel is a non-blocking way to read from sockets, so that you can have one thread communicate with a bunch of open connections at once. This works by adding a bunch of SocketChannels to a Selector, then looping on the selector's select() method, which can notify you if sockets have been accepted, received data, or closed. This allows you to communicate with multiple clients in one thread and not have the overhead of multiple threads and synchronization.
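A bare-bones version of that single-threaded select() loop might look like this (port and buffer size are arbitrary):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SelectorServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();                               // blocks until some channel is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {                    // new client connection
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {               // data (or EOF) on an existing connection
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int n = client.read(buffer);
                        if (n == -1) {
                            key.cancel();
                            client.close();                      // remote side closed the connection
                        } else {
                            buffer.flip();
                            // ... handle n bytes in 'buffer' ...
                        }
                    }
                }
            }
        }
    }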
Buffers are another feature of NIO that allows you to access the underlying data from reads and writes to avoid the overhead of copying data into new arrays.
By now NIO is so old that few remember what Java was like before 1.4, which is what you need to know in order to understand the "why" of NIO.
In a nutshell, up to Java 1.3, all I/O was of the blocking type. And worse, there was no analog of the select() system call to multiplex I/O. As a result, a server implemented in Java had no choice but to employ a "one-thread-per-connection" service strategy.
The basic point of NIO, introduced in Java 1.4, was to make the functionality of traditional UNIX-style multiplexed non-blocking I/O available in Java. If you understand how to program with select() or poll() to detect I/O readiness on a set of file descriptors (sockets, usually), then you will find the services you need for that in NIO: you will use SocketChannels for non-blocking I/O endpoints, and Selectors for fdsets or pollfd arrays. Servers with threadpools, or with threads handling more than one connection each, now become possible. That's the "extra".
A Buffer is the kind of byte array you need for non-blocking socket I/O, especially on the output/write side. If only part of a buffer can be written immediately, with blocking I/O your thread will simply block until the entirety can be written. With non-blocking I/O, your thread gets a return value of how much was written, leaving it up to you to handle the left-over for the next round. A Buffer takes care of such mechanical details by explicitly implementing a producer/consumer pattern for filling and draining, it being understood that your threads and the kernel will not be in sync.
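To make the partial-write point concrete, this is roughly what the caller has to deal with on a non-blocking SocketChannel; with a blocking Socket a single write() call would cover all of it (a sketch, not a complete write path):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.SocketChannel;

    class PartialWrite {
        // Try to write the whole buffer; if the kernel's socket buffer fills up, write() returns
        // early and the ByteBuffer remembers (via its position) how much is left over.
        static void writeAsMuchAsPossible(SocketChannel channel, ByteBuffer buf, SelectionKey key)
                throws IOException {
            channel.write(buf);              // may write only part of the buffer, or nothing
            if (buf.hasRemaining()) {
                // Can't finish now: ask the Selector to tell us when the socket is writable again,
                // and keep 'buf' around so the next attempt resumes where this one stopped.
                key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
            } else {
                key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
            }
        }
    }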
Even though you are using SocketChannels, it's necessary to employ a thread pool to process the channels.
Think about the scenario where you use only one thread, responsible both for polling select() and for processing the SocketChannels selected from the Selector: if one channel takes 1 second to process and there are 10 channels in the queue, you have to wait 10 seconds before the next poll, which is intolerable. So there should be a thread pool for channel processing.
In this sense, I don't see a tremendous difference from the thread-per-client blocking sockets pattern. The major difference is that in the NIO pattern the task is smaller; it's more like thread-per-task, and tasks could be read, write, business processing, etc. For more detail, you can take a look at Netty's implementation of NioServerSocketChannelFactory, which uses one boss thread to accept connections and dispatches tasks to a pool of worker threads for processing.
If you really fancy a single thread, the bottom line is that you should at least have pooled I/O threads, because I/O operations are often orders of magnitude slower than instruction-processing cycles; you would not want the one precious thread to be blocked by I/O. This is exactly what NodeJS does: one thread accepts connections, and all I/O is asynchronous and processed in parallel by a back-end pool of I/O threads.
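As a rough illustration of that split, a selector thread might hand completed reads to a worker pool like this (the process method is a stand-in for your business logic):

    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class SelectorWithWorkers {
        // The selector thread only does the cheap part: detect readiness and read bytes.
        private final ExecutorService workers = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        void onReadable(SelectionKey key) throws Exception {
            SocketChannel channel = (SocketChannel) key.channel();
            ByteBuffer buf = ByteBuffer.allocate(4096);
            int n = channel.read(buf);
            if (n > 0) {
                buf.flip();
                // The expensive business processing runs on the worker pool,
                // so the selector thread can go straight back to select().
                workers.submit(() -> process(channel, buf));
            }
        }

        void process(SocketChannel channel, ByteBuffer request) {
            // ... parse the request and write a response (stand-in for real logic) ...
        }
    }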
Is the old-style thread-per-client approach dead?
I don't think so. NIO programming is complex, and multiple threads are not inherently evil. Keep in mind that modern operating systems and CPUs get better and better at multitasking, so the overhead of multithreading becomes smaller over time.

Making sense of Java's non blocking I/O

Let's say I'm running a server, set the client SocketChannels that I accept to non-blocking, and read them through a thread pool's threads. But what does that buy me? I still need to read the full client request before processing it, which means I need to make multiple read calls.
I've also come across articles saying that threads should block naturally so that other threads get a chance to run. However, this won't happen in the aforementioned case, as these threads will not block.
So how would non blocking IO be efficient? How to make sense of this all? Some multi-core CPU angle to it perhaps? But how?
EDIT: found a pretty good link that explains it programmatically:
http://rox-xmlrpc.sourceforge.net/niotut/
The problem with blocking IO starts when you want to scale your server program. You'd have to hold a blocking thread per request, and many, many requests introduce many, many threads. This can give a hard time to a server application that serves thousands or more of concurrent, IO-involving requests.
Using NIO's non-blocking IO, this request-to-thread coupling is redundant. You can use any thread to complete the IO operation of any request. This lets you use the great pooling pattern for your IO-handling threads and significantly decrease the thread creation and management overhead. On the other hand, you'd have to work harder to sustain data consistency, but that is the price of scalability.
Unless you want to use busy waiting (which sounds unlikely), if you want to go non-blocking you usually use a small number of threads (maybe only one) and a Selector.
If you are going to use blocking IO, that is when you dedicate one or two threads per connection.
