thread per request architecture implementation - java

I'm trying to build a multithreaded TCP server that can talk to multiple clients and stream data to them concurrently. I am using Java with the standard I/O and threading libraries. I believe my implementation should follow a 'thread per request' model. Any idea where I can get started, or a tutorial you can point me to?

A thread-per-request model is quite simple to write, as multithreaded code goes. Basically what you need is:
- A thread pool
- A server socket
- A main thread that dispatches worker threads
Set up the server socket to listen for requests and have the main thread run a loop. The loop waits for a request to come in, then takes a thread from the pool to service it.
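A minimal sketch of that loop, with an arbitrary port and pool size and a hypothetical handleRequest method:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPerRequestServer {
        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newFixedThreadPool(50); // the thread pool
            try (ServerSocket server = new ServerSocket(8080)) {     // the server socket
                while (true) {                                       // the dispatch loop
                    Socket client = server.accept();   // blocks until a request comes in
                    pool.submit(() -> handleRequest(client)); // a pooled worker services it
                }
            }
        }

        // Hypothetical per-request handler: read, respond, close.
        private static void handleRequest(Socket client) {
            try (Socket c = client) {
                // ... read the request from c.getInputStream(),
                //     write a response to c.getOutputStream() ...
            } catch (IOException e) {
                // log and drop the connection
            }
        }
    }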

If you want to write an efficient server, use NIO. One thread per client is the old way, and it is memory/CPU intensive.
This tutorial is a good place to start:
http://rox-xmlrpc.sourceforge.net/niotut/
Once you understand NIO and implement your server that way, you will be glad you did. In the past I converted a one-thread-per-client model to NIO and the performance gains were tremendous.
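For orientation, the core selector loop of such a server looks roughly like this (a minimal echo sketch of my own, not the tutorial's code); one thread multiplexes all connections:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class NioEchoServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(1024);
            while (true) {
                selector.select(); // blocks until at least one channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        // New connection: make it non-blocking and register for reads.
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int n = client.read(buffer);
                        if (n == -1) {
                            client.close(); // peer closed the connection
                        } else {
                            buffer.flip();
                            client.write(buffer); // echo back (may be a partial write)
                        }
                    }
                }
            }
        }
    }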

Related

How are blocking calls like database access handled internally by a Jetty Non Blocking IO servlet?

I have read a lot of material to try and clearly understand the gains a Jetty Non Blocking Web Application Server can or can't offer.
So far what I understand (in part by referring to this: How do Jetty and other containers leverage NIO while sticking to the Servlet specification?) is that with a non-blocking I/O model, a web server like Jetty runs a single thread (or one per CPU core) - the selector thread - that detects which connections are ready for some I/O. Ready connections are then dispatched to an internal thread pool that processes the request.
I can see how such an architecture could allow you to serve many more connections with far fewer resources. However, what I am not clear about is this:
If I wrote a servlet that ran a long-running database operation using a standard JDBC driver performing blocking I/O, wouldn't the handler thread dispatched from the pool to handle this request block?
And if requests came in faster than database requests are fulfilled, wouldn't the handler thread pool be exhausted at some point?
So with an application such as this, is there any benefit to running it on a non-blocking Jetty web server? Is the non-blocking benefit only truly accrued if the servlet itself uses another layer of non-blocking access to the database? Or is there something I am missing?
Please do explain if there's some magic through which Jetty will pay less of a price for the blocking database operations than say, a blocking web server.
P.S.: For contrast, I read about Node.js here - How the single threaded non blocking IO model works in Node.js - which seems to suggest that Node uses libuv underneath and applies other techniques to translate all blocking operations in code (such as database access and sleep()) into event callbacks, ensuring the event loop and the internal thread pool never get blocked in a blocking callback. While it's still a little gobbledygook to me, assuming that's true for Node, can Jetty promise the same? And can it do so for servlets that are not written in a non-blocking way?
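To make the concern concrete, here is a minimal sketch of the kind of servlet being described (the class name and query are placeholders; assume the DataSource is configured elsewhere). The thread Jetty dispatches from its pool sits blocked inside executeQuery() exactly as a thread in a fully blocking server would:

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    // Hypothetical servlet: the pooled handler thread blocks on the JDBC call.
    public class SlowReportServlet extends HttpServlet {
        private DataSource dataSource; // assume this is injected/configured elsewhere

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try (Connection conn = dataSource.getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) { // a long-running query in practice
                resp.getWriter().println(rs.next() ? rs.getInt(1) : 0);
            } catch (SQLException e) {
                throw new ServletException(e);
            }
        }
    }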

Java Inter-thread communication in socket programming issues

I started working on Java socket programming. I have already made the following apps:
1. Client sends a message to the server and the server responds on receiving it
2. Client and server chat, point-to-point
Now I want to develop an application in which, whenever a request arrives at the server, the server spawns a thread for it. The problem is that I am unable to tell the threads apart. If one thread is sending a message, how can the server identify which client the message came from and which client it has to be forwarded to? How can two clients (actually their threads) communicate?
I searched around and found the synchronized keyword, and I know its use; I also know about wait(), notify() and notifyAll(), but I am still unable to set up communication between the threads.
Please tell me if I am doing something wrong, or if I need to learn some concepts before jumping into this.
TIA
You cannot pass data to threads, you can only pass data to a data structure they are reading.
I suggest you use this simple pattern using two threads per connection and an output queue. These threads are created when the connection is accepted and run until the connection is closed. You can use a thread pool if you have short lived connections.
Each connection has a reader thread which reads from the socket using blocking I/O until the connection is closed. This reader thread also processes the work on the client's behalf.
Another thread for each connection takes messages from a BlockingQueue and writes them to the socket.
When a user connects, they pass a unique token to say who they are, and this is stored in a ConcurrentMap<String, BlockingQueue> where the BlockingQueue is the output queue for that connection.
This way, whenever a connection sends a message destined for a particular user, you add it to the queue associated with that user.
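A minimal sketch of this pattern, with my own class and protocol choices (the first line a client sends is its token; subsequent lines are "<recipient> <message>"):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Two threads per connection: a reader that routes incoming messages,
    // and a writer that drains the connection's output queue onto the socket.
    public class ChatServer {
        // Output queue per connected user, keyed by the token sent on connect.
        private final ConcurrentMap<String, BlockingQueue<String>> outQueues =
                new ConcurrentHashMap<>();

        public void serve(int port) throws IOException {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    Socket socket = server.accept();
                    new Thread(() -> handle(socket)).start(); // reader thread
                }
            }
        }

        private void handle(Socket socket) {
            String user = null;
            try (Socket s = socket;
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {

                user = in.readLine(); // first line: the user's unique token
                if (user == null) return;
                BlockingQueue<String> queue = new LinkedBlockingQueue<>();
                outQueues.put(user, queue);

                // Writer thread for this connection: drains the output queue.
                Thread writer = new Thread(() -> {
                    try {
                        while (true) out.println(queue.take());
                    } catch (InterruptedException e) { /* connection closing */ }
                });
                writer.start();

                // Reader loop: route "<recipient> <message>" lines to the
                // recipient's output queue.
                String line;
                while ((line = in.readLine()) != null) {
                    String[] parts = line.split(" ", 2);
                    BlockingQueue<String> target = outQueues.get(parts[0]);
                    if (target != null && parts.length == 2) {
                        target.offer(user + ": " + parts[1]);
                    }
                }
                writer.interrupt();
                writer.join();
            } catch (IOException | InterruptedException e) {
                // connection dropped
            } finally {
                if (user != null) outQueues.remove(user);
            }
        }
    }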
You can reduce this model to use fewer threads, e.g. with Selectors you only need one thread, but that is much more complex.

Socket best practices in Java

When writing any kind of server in Java (be it a web server, a RESTful web app or a microservice), you end up using sockets for two-way communication between client and server.
Using the common Socket and ServerSocket classes is trivial, but since sockets are blocking, you end up creating a thread for each request. With this threaded system your server will work perfectly, but it won't scale very well.
The alternative is using channels, by means of SocketChannel, ServerSocketChannel and Selector, which is clearly not as trivial as plain sockets.
My question is: which of these two systems is used in production-ready code? I'm talking about medium to big projects like Tomcat, Jetty, Sparkjava and the like.
I suppose they all use the channel approach, right?
To make a web server really scalable, you'll have to implement it with non-blocking I/O - which means that you should make it in such a way that threads will never get blocked waiting for I/O operations to complete.
Threads are relatively expensive objects. For example, for each thread memory needs to be allocated for its call stack. By default this is in the order of one or a few MB. Which means that if you create 1000 threads, just the call stacks for all those threads will already cost you ~ 1 GB memory.
In a naïve server application, you might create a thread for each accepted connection (each client). This won't scale very well if you have many concurrent users.
I don't know the implementation details of servers like Tomcat and Jetty, but they are most likely implemented using non-blocking I/O.
Some info about non-blocking I/O in Tomcat: Understanding the Tomcat NIO Connector
One of the most well-known non-blocking I/O libraries in Java is Netty.
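For a taste of that style, a minimal Netty 4 echo server looks roughly like this (my own sketch, not Netty's official example); a small number of event-loop threads serves many connections:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class NettyEchoServer {
        // Echoes whatever bytes arrive straight back to the sender.
        static class EchoHandler extends ChannelInboundHandlerAdapter {
            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                ctx.writeAndFlush(msg);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
            EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O events
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(boss, workers)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         ch.pipeline().addLast(new EchoHandler());
                     }
                 });
                b.bind(8080).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }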

Java thread pooled server using blocking I/O

I have implemented a server in Java, upon receiving data from some client it simply forwards the data to all other clients (including the sender). I'm happy with my OO-design, I wrap all sockets in classes that provide 'callbacks'. These are called when some data are ready (or when the socket closes) -- using this design I could easily implement a simple TLV protocol to atomically send packets: the callback is not called until a full packet is received.
Now, I use the blocking I/O calls of the java.io package on the socket streams (and make them appear 'asynchronous' through those callbacks). So I use threads inside my socket wrapper classes: when a socket is opened, that function returns a Runnable implementation that, when run, does the blocking calls on the InputStream, buffers data and eventually calls the callback.
=> In a client application, I simply launch this Runnable in a Thread instance, because it's just one thread.
=> In my server, I submit all the Runnable implementations I get upon creating new sockets (i.e. when accepting new clients) to a ThreadPoolExecutor. (FYI: the callbacks of the sockets simply put the received packets in a BlockingQueue. A single, separate (non-pooled) "dispatcher" Thread instance constantly takes packets from this queue and writes them to all sockets currently connected to the server.)
QUESTION: This all works great; however, I'm unsure about my use of the ThreadPoolExecutor, because the Runnable instances submitted are almost always blocking. Will the ThreadPoolExecutor react to this, or will the pooled threads simply block? Because if all pooled threads are blocking while executing their Runnables and a new Runnable is then submitted, what happens? Suspend the new Runnable? That's not good, because then the newly connected client will have zero responsiveness until some older client disconnects. If by contrast the thread pool chooses to spawn a new thread to handle the Runnable, then I actually get a thread-per-client scenario.
I want the thread pool to 'preempt' the blocking threads and use them to handle other sockets, like an operating system that suspends I/O-bound processes and doesn't schedule them again until their I/O is complete. Is that at all possible, or will I have to rewrite everything using NIO to do this? (If NIO is required, could you point out where I should start reading?)
Thanks in advance!
About the ThreadPoolExecutor: it depends. An Executors.newCachedThreadPool() will just create new threads for new Runnables. See also this question and the accepted answer. But you will end up with a thread-per-client scenario.
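A tiny sketch of the difference this makes when the submitted tasks block (pool sizes and the sleep are arbitrary stand-ins for blocking socket reads):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PoolBehaviour {
        public static void main(String[] args) throws InterruptedException {
            // Cached pool: every blocking task gets its own thread (thread-per-client).
            ExecutorService cached = Executors.newCachedThreadPool();
            // Fixed pool: the 11th blocking task waits in the queue, so an 11th
            // client would get zero responsiveness until a thread frees up.
            ExecutorService fixed = Executors.newFixedThreadPool(10);

            Runnable blockingTask = () -> {
                try {
                    TimeUnit.MINUTES.sleep(1); // stands in for a blocking read
                } catch (InterruptedException ignored) { }
            };
            for (int i = 0; i < 20; i++) {
                cached.submit(blockingTask); // 20 threads created
                fixed.submit(blockingTask);  // 10 running, 10 queued
            }
            TimeUnit.SECONDS.sleep(1); // give the pools time to start threads
            System.out.println("Active threads: " + Thread.activeCount());
            cached.shutdownNow();
            fixed.shutdownNow();
        }
    }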
NIO prevents the thread-per-client scenario (if there are many clients sending relatively small messages with pauses in between; see also (the summary of) this article). I advise against trying to build your own NIO clone.
Implementing NIO from the ground up is not easy; a tutorial can be found here. It might be easier to use an NIO server like Netty.
Another alternative is to use a technology designed to handle many clients that send and receive small messages: WebSockets. It takes some time to learn and set up, but I managed to get a Tomcat WebSocket server talking with a Jetty WebSocket client pretty quickly. A rewrite to use this technology could be less work.

Java - UDP Multithreaded Server

How do you implement a thread that handles client requests on the server using UDP? I have read somewhere that you can use a ThreadPoolExecutor; is using this method OK? There aren't many articles on the web that give examples of multithreaded UDP applications.
So my question is: should I use a ThreadPoolExecutor?
Does someone have an example of how to implement a multithreaded UDP server/client application?
This is simple to do using TCP, so I have done TCP multithreading; I just wanted to grasp how UDP works this way.
The thing is, the executor is not the issue here at all. It does not matter whether you use a ThreadPoolExecutor or manage the threads manually. A ThreadPoolExecutor, or any other ExecutorService for that matter, is just a service to manage threads and distribute work accordingly; it has nothing to do with what your Runnables or Callables are.
In your program you only hand Runnables or Callables to an executor. The ExecutorService doesn't care what is inside the Runnable, because its job is to execute them. So the way you use an ExecutorService doesn't change between a TCP server and a UDP server as far as the ThreadPoolExecutor is concerned. Just modify the Runnable you submit and you're done :)
The main trick with UDP is that it is unreliable. You have to implement your own detection/handling of lost packets.
Once you have requests, you can use an Executors.newXxxxxx() pool just the same as with TCP.
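A minimal sketch of a pooled UDP server (port, pool size and buffer size are arbitrary): the main thread blocks in receive() and hands each datagram to a pooled worker:

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.SocketException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class UdpServer {
        public static void main(String[] args) throws SocketException {
            DatagramSocket socket = new DatagramSocket(9876);
            ExecutorService pool = Executors.newFixedThreadPool(8);

            while (true) {
                byte[] buf = new byte[1024]; // fresh buffer per packet
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(packet); // blocks until a datagram arrives
                } catch (IOException e) {
                    break; // socket closed
                }
                pool.submit(() -> {
                    String msg = new String(packet.getData(), 0, packet.getLength());
                    System.out.println("From " + packet.getSocketAddress() + ": " + msg);
                    // To reply: build a DatagramPacket addressed to
                    // packet.getSocketAddress() and call socket.send(...)
                });
            }
            pool.shutdown();
        }
    }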
