We need advice for a server software implementation with Java NIO

I'm trying to calculate the load on a server I have to build.
I need to create a server which will have one million users registered in an SQL database. Each user will connect approximately 3-4 times a week. Each time, a user will upload and download 1-30 MB of data, which will take maybe 1-2 minutes.
When an upload is complete, the data will be deleted within minutes.
(Update: removed text that contained an error in the calculations.)
I know how to create and query an SQL database, but what should I consider in this situation?

What you want is Netty. It's a framework built on NIO that provides an event-driven model instead of the classic thread-per-request model.
It doesn't use a thread per request; instead it puts the requests in a queue. With this tool you can handle up to 250,000 requests per second.

I am using Netty for a similar scenario, and it just works!
Here is a starting point for using Netty:
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class TCPListener {
    private static ServerBootstrap bootstrap;

    public static void run() {
        bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() throws Exception {
                MyHandler handler = new MyHandler();
                ChannelPipeline pipeline = Channels.pipeline();
                pipeline.addLast("handler", handler);
                return pipeline;
            }
        });
        bootstrap.bind(new InetSocketAddress(9999)); // port number is 9999
    }

    public static void main(String[] args) throws Exception {
        run();
    }
}
and the MyHandler class:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class MyHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        String remoteAddress = e.getRemoteAddress().toString();
        ChannelBuffer buffer = (ChannelBuffer) e.getMessage();
        // Now the buffer contains the byte stream from the client.

        byte[] output = new byte[0]; // suppose output is a filled byte array
        ChannelBuffer writebuffer = ChannelBuffers.buffer(output.length);
        for (int i = 0; i < output.length; i++) {
            writebuffer.writeByte(output[i]);
        }
        e.getChannel().write(writebuffer);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        // Close the connection when an exception is raised.
        e.getChannel().close();
    }
}

At first I was thinking this many users would require a non-blocking solution, but my calculations show that I don't, [am I] right?
On modern operating systems and hardware, thread-per-connection is faster than non-blocking I/O, at least unless the number of connections reaches truly extreme levels. However, for writing the data to disk, NIO (channels and buffers) may help, because it can use DMA and avoid copy operations.
But overall, I also think network bandwidth and storage are your main concerns in this application.
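To illustrate the point about channels avoiding copies, here is a minimal sketch (the port, file name and chunk size are illustrative, not from the question) that streams an incoming upload from a socket straight into a file with FileChannel.transferFrom, so the bytes need not pass through an application-level byte[] where the platform supports it:

import java.io.FileOutputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class UploadReceiver {
    public static void main(String[] args) throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(9999)); // illustrative port
            try (SocketChannel client = server.accept();
                 FileChannel file = new FileOutputStream("upload.tmp").getChannel()) {
                long position = 0;
                long transferred;
                // transferFrom lets the JVM/kernel move data from the socket to the file
                // without copying it through a Java heap buffer where possible.
                while ((transferred = file.transferFrom(client, position, 1 << 20)) > 0) {
                    position += transferred;
                }
            }
        }
    }
}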

The important thing to remember is that most users do not access a system evenly in every hour of every day of the week. Your system needs to perform correctly during the busiest hour of the week.
Say that in the busiest hour of the week, 1/50 of all weekly uploads are made. If each of those uploads is 30 MB, that is a total of 1.8 TB in that hour. This means you need enough Internet upload bandwidth to support it: 1.8 TB/hour * 8 bits/byte / 60 min/hour / 60 sec/min = 4 Gbit/s.
If, for example, you only have a 1 Gbit/s connection, this will limit access to your server.
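As a quick sanity check of the arithmetic above (the 3 uploads per user per week and the 1/50 peak-hour share are the assumptions stated above):

public class BandwidthEstimate {
    public static void main(String[] args) {
        long uploadsPerWeek = 1_000_000L * 3;          // 1M users, ~3 uploads each per week
        long peakHourUploads = uploadsPerWeek / 50;    // 1/50 of all uploads in the busiest hour
        double peakHourBytes = peakHourUploads * 30e6; // worst case 30 MB each
        double bitsPerSecond = peakHourBytes * 8 / 3600; // spread over one hour
        System.out.printf("Peak hour: %.1f TB, needs %.1f Gbit/s%n",
                peakHourBytes / 1e12, bitsPerSecond / 1e9); // prints 1.8 TB, 4.0 Gbit/s
    }
}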
The other thing to consider is your retention time for these uploads. If each upload is 15 MB on average, you will be getting 157 TB per week, or 8.2 PB (8,200 TB) per year. You may need a significant amount of storage to retain this.
Once you have spent a significant amount of money on Internet connectivity and disk, the cost of buying a couple of servers is minor. You could use Apache MINA, but a single server with a 10 Gbit/s connection can support 1 GB/s easily using any software you care to choose.
A single PC/server/laptop can handle 1,000 I/O threads, so 300-600 is not a lot.
The problem will not be in the software but in the network/hardware you choose.

Related

Android - multithread TCP connection

I've been searching for an answer to my problem, but none of the solutions so far have helped me solve it. I'm working on an app that communicates with another device that works as a server. The app sends queries to the server and receives appropriate responses to dynamically create fragments.
In the first implementation the app sent the query and then waited to receive the answer in a single thread. But that solution wasn't satisfactory, since the app did not receive any feedback from the server. The server admin said he was receiving the queries; however, he hinted that the device was sending the answer back so fast that the app probably wasn't listening yet by the time the answer arrived.
So what I am trying to achieve is to create separate threads: one for listening and one for sending the query. The listening one would start before anything is sent to the server, to ensure the app does not miss the server's response.
Implementing this so far hasn't been successful. I've tried writing and running separate Runnable classes and AsyncTasks, but the listener never received an answer, and at some points one of the threads didn't even execute. Here is the code for the AsyncTask listener:
@Override
protected String doInBackground(String... params) {
    int bufferLength = 28;
    String masterIP = "192.168.1.100";
    try {
        Log.i("TCPQuery", "Listening for ReActor answers ...");
        Socket tcpSocket = new Socket();
        SocketAddress socketAddress = new InetSocketAddress(masterIP, 50001);
        try {
            tcpSocket.connect(socketAddress);
            Log.i("TCPQuery", "Is socket connected: " + tcpSocket.isConnected());
        } catch (IOException e) {
            e.printStackTrace();
        }
        while (true) {
            Log.i("TCPQuery", "Listening ...");
            try {
                Log.i("TCPQuery", "Waiting for ReActor response ...");
                byte[] buffer = new byte[bufferLength];
                // Note: read() may return fewer than bufferLength bytes.
                tcpSocket.getInputStream().read(buffer);
                Log.i("TCPQuery", "Received message " + Arrays.toString(buffer) + " from ReActor.");
            } catch (Exception e) {
                e.printStackTrace();
                Log.e("TCPQuery", "An error occurred receiving the message.");
            }
        }
    } catch (Exception e) {
        Log.e("TCP", "Error", e);
    }
    return "";
}
And this is how the tasks are called:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
    listener.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
    sender.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
} else {
    listener.execute();
    sender.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
}
How exactly would you approach this problem? If this code is not sufficient I would be glad to post more.
This is because Android's AsyncTasks actually run on only one background thread by default, no matter how many you create. So if you really want two threads running at the same time, I suggest you use the standard java.util.concurrent tools rather than AsyncTask. As explained in the documentation:
AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most.) If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent package such as Executor, ThreadPoolExecutor and FutureTask.
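As a hedged sketch of that advice (the class and method names here are made up, not from the question's code), the listener and the sender can be two plain Runnables submitted to an ExecutorService from java.util.concurrent, which guarantees two genuinely separate threads:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class QueryRunner {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    public void startQuery() {
        // Start the listener first so the server's fast reply cannot be missed.
        pool.execute(new Runnable() {
            @Override
            public void run() {
                listen(); // open the socket and block in read() here, as in doInBackground()
            }
        });
        // Send the query from the second thread.
        pool.execute(new Runnable() {
            @Override
            public void run() {
                sendQuery("STATUS?"); // hypothetical query
            }
        });
    }

    private void listen() { /* receive and parse the response */ }

    private void sendQuery(String query) { /* write the query to the server */ }
}

Keep in mind that on Android any UI updates from these background threads still have to be posted back to the main thread (for example via a Handler).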
Note that this is a TCP connection, so you don't need to worry about data loss: it is a point-to-point connection and it does not signal end of stream (-1) while the connection is open. What you do have to take care of is the read logic, because you cannot assume that a single read() returns the complete message. read() on a socket is a blocking call: it blocks until at least one byte is available, and it may return fewer bytes than your buffer size, depending on what has arrived over the network on your device. So you have two options:
1) Make your buffer size dynamic. First check the available input stream size with is.available() and size your buffer from that. If the available size is zero, sleep for a while and check again to see whether more data has arrived.
2) Set a read timeout on your socket's input stream (see the sketch below). This really works: each read returns whatever is available, and if no data arrives within the timeout period it throws a timeout exception.
Try changing your code accordingly.
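A rough sketch of option 2, combining a socket read timeout with a loop that keeps reading until the expected number of bytes has arrived; the 28-byte message length comes from the question, while the host, port and timeout value are only assumptions:

import java.io.DataInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimedReader {
    public static byte[] readResponse(String host, int port) throws IOException {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress(host, port));
        socket.setSoTimeout(5000); // option 2: any single read blocks at most 5 s (assumed value)
        try {
            byte[] buffer = new byte[28];        // fixed message size from the question
            DataInputStream in = new DataInputStream(socket.getInputStream());
            in.readFully(buffer);                // keeps reading until all 28 bytes have arrived
            return buffer;
        } catch (SocketTimeoutException e) {
            throw new IOException("No complete response within the timeout", e);
        } finally {
            socket.close();
        }
    }
}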

Java threaded socket connection timeouts

I have to make simultaneous TCP socket connections to multiple machines every x seconds, in order to get something like a status-update packet.
I use a Callable thread class, which creates a future task that connects to each machine, sends a query packet, and receives a reply, which is returned to the main thread that created all the Callable objects.
My socket connection class is:
public class ClientConnect implements Callable<String> {
    Connection con = null;
    Statement st = null;
    ResultSet rs = null;
    String hostipp, hostnamee;

    ClientConnect(String hostname, String hostip) {
        hostnamee = hostname;
        hostipp = hostip;
    }

    @Override
    public String call() throws Exception {
        return GetData();
    }

    private String GetData() {
        Socket so = new Socket();
        SocketAddress sa = null;
        PrintWriter out = null;
        BufferedReader in = null;
        try {
            sa = new InetSocketAddress(InetAddress.getByName(hostipp), 2223);
        } catch (UnknownHostException e1) {
            e1.printStackTrace();
        }
        try {
            so.connect(sa, 10000);
            out = new PrintWriter(so.getOutputStream(), true);
            out.println("\1IDC_UPDATE\1");
            in = new BufferedReader(new InputStreamReader(so.getInputStream()));
            String[] response = in.readLine().split("\1");
            out.close(); in.close(); so.close(); so = null;
            try {
                Integer.parseInt(response[2]);
            } catch (NumberFormatException e) {
                System.out.println("Number format exception");
                return hostnamee + "|-1";
            }
            return hostnamee + "|" + response[2];
        } catch (IOException e) {
            try {
                if (out != null) out.close();
                if (in != null) in.close();
                so.close(); so = null;
                return hostnamee + "|-1";
            } catch (IOException e1) {
                // TODO Auto-generated catch block
                return hostnamee + "|-1";
            }
        }
    }
}
And this is the way I create a pool of threads in my main class:
private void StartThreadPool() {
    ExecutorService pool = Executors.newFixedThreadPool(30);
    List<Future<String>> list = new ArrayList<Future<String>>();
    for (Map.Entry<String, String> entry : pc_nameip.entrySet()) {
        Callable<String> worker = new ClientConnect(entry.getKey(), entry.getValue());
        Future<String> submit = pool.submit(worker);
        list.add(submit);
    }
    for (Future<String> future : list) {
        try {
            String threadresult;
            threadresult = future.get();
            //........ PROCESS DATA HERE!..........//
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
The pc_nameip map contains (hostname, hostip) values, and for every entry I create a ClientConnect object.
My problem is that when my list of machines contains, let's say, 10 PCs (most of which are not alive), I get a lot of timeout exceptions (on the alive PCs) even though my timeout limit is set to 10 seconds.
If I force the list to contain a single working PC, I have no problem.
The timeouts are pretty random, no clue what's causing them.
All machines are on a local network; the remote servers are also written by me (in C/C++) and have been working in another setup for more than 2 years without any problems.
Am I missing something, or could it be an OS network restriction problem?
I am testing this code on Windows XP SP3. Thanks in advance!
UPDATE:
After creating two new server machines, and keeping one that was getting a lot of timeouts, I have the following results:
For 100 thread runs over 20 minutes:
NEW_SERVER1: 99 successful connections / 1 timeout
NEW_SERVER2: 94 successful connections / 6 timeouts
OLD_SERVER: 57 successful connections / 43 timeouts
Other info:
- I experienced a JRE crash (EXCEPTION_ACCESS_VIOLATION (0xc0000005)) once and had to restart the application.
- I noticed that while the app was running my network connection was struggling as I was browsing the internet. I have no idea if this is expected, but I don't think having at most 15 threads is that much.
So, first of all, my old server had some kind of problem. No idea what it was, since my new servers were created from the same OS image.
Secondly, although the timeout percentage has dropped dramatically, I still think it is uncommon to get even one timeout in a small LAN like ours. But this could be a problem in the server application.
Finally, my point of view is that, apart from the old server's problem (I still cannot believe I lost so much time on that!), there must be either a server app bug or a JDK-related bug (since I experienced that JRE crash).
p.s. I use Eclipse as my IDE and my JRE is the latest.
If any of the above ring any bells to you, please comment.
Thank you.
-----EDIT-----
Could it be that PrintWriter and/or BufferedReader are not actually thread-safe?!
----NEW EDIT 09 Sep 2013----
After re-reading all the comments, and thanks to @Gray and his comment:
When you run multiple servers does the first couple work and the rest of them timeout? Might be interesting to put a small sleep in your fork loop (like 10 or 100ms) to see if it works that way.
I rearranged the list of hosts/IPs and got some really strange results.
It seems that if an alive host is placed at the top of the list, and is thus the first to start a socket connection, it has no problem connecting and receiving packets, without any delay or timeout.
On the contrary, if an alive host is placed at the bottom of the list, with several dead hosts before it, it takes too long to connect, and with my previous timeout of 10 seconds it failed to connect. But after changing the timeout to 60 seconds (thanks to @EJP) I realised that no timeouts were actually occurring!
It just takes too long to connect (more than 20 seconds on some occasions).
Something is blocking new socket connections, and it isn't that the hosts or the network are too busy to respond.
I have some debug data here, if you would like to take a look:
http://pastebin.com/2m8jDwKL
You could simply check for availability before you connect the socket. There is an answer that provides a somewhat hackish workaround: https://stackoverflow.com/a/10145643/1809463
Process p1 = java.lang.Runtime.getRuntime().exec("ping -c 1 " + ip);
int returnVal = p1.waitFor();
boolean reachable = (returnVal==0);
by jayunit100
ping itself exists on both Unix and Windows, though note that the count flag differs (-c 1 on Unix, -n 1 on Windows), so the command above is Unix-specific.
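An alternative that avoids spawning a process is java.net.InetAddress.isReachable; just be aware that it only uses ICMP when the JVM has sufficient privileges and otherwise falls back to a TCP probe on port 7, so results can differ from a real ping. A minimal sketch (the 2-second timeout is an assumed value):

import java.io.IOException;
import java.net.InetAddress;

public class ReachabilityCheck {
    static boolean isHostUp(String ip) {
        try {
            // ICMP only with sufficient privileges; otherwise a TCP echo probe on port 7.
            return InetAddress.getByName(ip).isReachable(2000); // 2-second timeout (assumed)
        } catch (IOException e) {
            return false;
        }
    }
}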
My problem is that when my list of machines contains, let's say, 10 PCs (most of which are not alive), I get a lot of timeout exceptions (on the alive PCs) even though my timeout limit is set to 10 seconds.
So as I understand the problem, if you have (for example) 10 PCs in your map and 1 is alive while the other 9 are not online, all 10 connections time out. If you put just the 1 alive PC in the map, it works fine.
This points to some sort of concurrency problem, but I can't see it. I would have thought that there was some sort of shared data that was not being locked or something. I see your test code is using Statement and ResultSet. Maybe there is a database connection that is being shared without locking or something? Can you try just returning the result string and printing it out?
Less likely is some sort of network or firewall configuration, but the idea that one failed connection would cause another to fail is just strange. Maybe try running your program on one of the servers or from another computer?
If I try your test code, it seems to work fine. Here's the source code for my test class. It has no problems contacting a combination of online and offline hosts.
Lastly, some quick comments about your code:
You should close the streams, readers, and sockets in a finally block (or with try-with-resources; see the sketch after these comments). Check my test class for a better pattern there.
You should return a small Result class instead of passing back a String that has to be parsed.
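For the first point, a hedged sketch of what the cleanup could look like with try-with-resources (Java 7+), reusing the hostipp/hostnamee fields from the question's ClientConnect; this is not the answerer's test class, just one way to ensure everything is closed even when readLine() or parsing throws:

private String getData() {
    try (Socket so = new Socket()) {
        so.connect(new InetSocketAddress(InetAddress.getByName(hostipp), 2223), 10000);
        try (PrintWriter out = new PrintWriter(so.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(so.getInputStream()))) {
            out.println("\1IDC_UPDATE\1");
            String[] response = in.readLine().split("\1");
            Integer.parseInt(response[2]); // validate that the field is numeric, as in the original
            return hostnamee + "|" + response[2];
        }
    } catch (IOException | NumberFormatException | ArrayIndexOutOfBoundsException e) {
        return hostnamee + "|-1"; // everything is already closed by try-with-resources here
    }
}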
Hope this helps.
After a lot of reading and experimentation, I will have to answer my own question (if I am allowed to, of course).
Java just can't handle many concurrent socket connections without adding a big performance overhead. At least not on a Core2Duo / 4 GB RAM / Windows XP machine.
Creating multiple concurrent socket connections to remote hosts (using, of course, the code I posted) creates some kind of resource bottleneck, or blocking situation, which I am still not aware of.
If you try to connect to 20 hosts simultaneously, and many of them are disconnected, then you cannot guarantee a "fast" connection to the alive ones.
You will get connected, but it could be after 20-25 seconds. Meaning that you'd have to set the socket timeout to something like 60 seconds (not acceptable for my application).
If an alive host is lucky enough to start its connection attempt first (keeping in mind that concurrency is not absolute; the for loop still has some sequentiality), then it will probably get connected very fast and get a response.
If it is unlucky, the socket.connect() method will block for some time, depending on how many hosts before it eventually time out.
After adding a small sleep (100 ms) between the pool.submit(worker) calls, I realised that it makes some difference: I get to connect faster to the "unlucky" hosts. But still, if the list of dead hosts grows, the results are almost the same.
If I edit my host list and place a previously "unlucky" host at the top (before the dead hosts), all problems disappear...
So, for some reason the socket.connect() method creates a form of bottleneck when many of the hosts to connect to are not alive. Whether it is a JVM problem, an OS limitation or bad coding on my side, I have no clue...
I will try a different coding approach and hopefully tomorrow I will post some feedback.
p.s. This answer made me think of my problem :
https://stackoverflow.com/a/4351360/2025271

Insert data into Cassandra

I have data (network packets) to be inserted into a Cassandra database.
Unfortunately, my application needs about 1 minute to insert 10,000 packets!
I'm hoping someone can help me apply Java multithreading to speed up the insertion. Here is my code:
PcapPacketHandler<String> jpacketHandler;
jpacketHandler = new PcapPacketHandler<String>() {
    GestionPacketDAO g1;
    int row = 0;

    public void nextPacket(PcapPacket packet, String user) {
        row++;
        String s = packet.toHexdump();
        try {
            g1 = new GestionPacketDAO();
            g1.Insert(s, row); // Insert is the function which inserts data into the database
        } catch (InvalidRequestException exg) {
            Logger.getLogger(AccueilInsertion.class.getName()).log(Level.SEVERE, null, exg);
        } catch (TException exg) {
            Logger.getLogger(AccueilInsertion.class.getName()).log(Level.SEVERE, null, exg);
        }
    }
};
A common pattern is:
Use a ThreadPoolExecutor with maybe 10 threads.
Use a client library that does connection pooling (e.g. Astyanax or the DataStax CQL3 java driver). Ensure there are at least as many connections as threads.
Back the ThreadPoolExecutor with a queue of fixed size (e.g. ArrayBlockingQueue).
The producer, in your case the nextPacket function, calls ThreadPoolExecutor.execute, which adds a Runnable to the queue. You need to handle the case where the queue is full by catching RejectedExecutionException: you can sleep and block reading your packets, drop the packet, or some alternative (see the sketch below).
An alternative is to have multiple threads running your packet handler, if that is possible. Each one can have its own Cassandra connection and write directly. That will be more efficient if you can do it.
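A rough sketch of the pattern above, wired to the GestionPacketDAO from the question (whose exact API I'm assuming from the snippet); the pool and queue sizes are illustrative, and the "sleep and retry" branch is just one of the options mentioned:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PacketInserter {
    // 10 worker threads backed by a bounded queue of 1,000 pending inserts
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            10, 10, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(1000));

    public void submit(final String hexDump, final int row) {
        Runnable task = new Runnable() {
            public void run() {
                try {
                    new GestionPacketDAO().Insert(hexDump, row); // API assumed from the question
                } catch (Exception e) {
                    e.printStackTrace(); // or log, as in the question's handlers
                }
            }
        };
        while (true) {
            try {
                pool.execute(task);
                return;
            } catch (RejectedExecutionException full) {
                // Queue is full: block the producer briefly instead of dropping the packet.
                try {
                    Thread.sleep(10);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}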

BindException / too many open files while using HttpClient under load

I have got 1,000 dedicated Java threads, where each thread polls a corresponding URL every second.
public class Poller {
    public static Node poll(Node node) {
        GetMethod method = null;
        try {
            HttpClient client = new HttpClient(new SimpleHttpConnectionManager(true));
            ......
        } catch (IOException ex) {
            ex.printStackTrace();
        } finally {
            method.releaseConnection();
        }
    }
}
The threads are run every second:
for (int i = 0; i < 1000; i++) {
    MyThread thread = threads.get(i); // threads is a static field
    if (thread.isAlive()) {
        // If the previous thread is still running, let it run.
    } else {
        thread.start();
    }
}
The problem is, if I run the job every second I get random exceptions like these:
java.net.BindException: Address already in use
INFO httpclient.HttpMethodDirector: I/O exception (java.net.BindException) caught when processing request: Address already in use
INFO httpclient.HttpMethodDirector: Retrying request
But if I run the job every 2 seconds or more, everything runs fine.
I even tried shutting down the instance of SimpleHttpConnectionManager using shutDown(), with no effect.
If I do netstat, I see thousands of TCP connections in the TIME_WAIT state, which means they have been closed and are clearing up.
So, to limit the number of connections, I tried using a single instance of HttpClient, like this:
public class MyHttpClientFactory {
    private static MyHttpClientFactory instance = new MyHttpClientFactory();
    private MultiThreadedHttpConnectionManager connectionManager;
    private HttpClient client;

    private MyHttpClientFactory() {
        init();
    }

    public static MyHttpClientFactory getInstance() {
        return instance;
    }

    public void init() {
        connectionManager = new MultiThreadedHttpConnectionManager();
        HttpConnectionManagerParams managerParams = new HttpConnectionManagerParams();
        managerParams.setMaxTotalConnections(1000);
        connectionManager.setParams(managerParams);
        client = new HttpClient(connectionManager);
    }

    public HttpClient getHttpClient() {
        if (client != null) {
            return client;
        } else {
            init();
            return client;
        }
    }
}
However, after running for exactly 2 hours, it starts throwing 'too many open files' and eventually cannot do anything at all.
ERROR java.net.SocketException: Too many open files
INFO httpclient.HttpMethodDirector: I/O exception (java.net.SocketException) caught when processing request: Too many open files
INFO httpclient.HttpMethodDirector: Retrying request
I should be able to increase the number of open files allowed and make it work, but I would just be prolonging the evil. Is there any best practice for using HttpClient in a situation like the above?
Btw, I am still on HttpClient 3.1.
This happened to us a few months back. First, double check to make sure you really are calling releaseConnection() every time. But even then, the OS doesn't actually reclaim the TCP connections all at once. The solution is to use the Apache HTTP Client's MultiThreadedHttpConnectionManager. This pools and reuses the connections.
See http://hc.apache.org/httpclient-3.x/performance.html for more performance tips.
Update: Whoops, I didn't read the lower code sample. If you're doing releaseConnection() and using MultiThreadedHttpConnectionManager, consider whether your OS limit on open files per process is set high enough. We had that problem too, and needed to extend the limit a bit.
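For what it's worth, a hedged sketch of the shared-client approach described above for HttpClient 3.x, with the connection always released in a finally block (the URL handling and the parameter values are illustrative, not a definitive configuration):

import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.params.HttpConnectionManagerParams;

public class PollerWithSharedClient {
    private static final HttpClient CLIENT;
    static {
        MultiThreadedHttpConnectionManager manager = new MultiThreadedHttpConnectionManager();
        HttpConnectionManagerParams params = new HttpConnectionManagerParams();
        params.setMaxTotalConnections(1000);          // roughly one connection per polling thread
        params.setDefaultMaxConnectionsPerHost(1000); // in case many URLs share the same host
        manager.setParams(params);
        CLIENT = new HttpClient(manager);             // one shared client, pooled connections
    }

    public static String poll(String url) {
        GetMethod method = new GetMethod(url);
        try {
            CLIENT.executeMethod(method);
            return method.getResponseBodyAsString();
        } catch (IOException ex) {
            ex.printStackTrace();
            return null;
        } finally {
            method.releaseConnection(); // always return the connection to the pool
        }
    }
}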
There is nothing wrong with the first error. You have just depleted the available ephemeral ports. Each TCP connection can stay in the TIME_WAIT state for 2 minutes. You generate 2,000 per second, so sooner or later the socket can't find any unused local port and you will get that error. TIME_WAIT is designed exactly for this purpose: without it, your system might hijack a previous connection.
The second error means you have too many sockets open. On some systems, there is a limit of 1K open files. Maybe you just hit that limit due to lingering sockets and other open files. On Linux, you can change this limit using
ulimit -n 2048
But that is limited by a system-wide maximum value.
As sudo or root, edit the /etc/security/limits.conf file. At the end of the file, just above "# End of File", enter the following values:
* soft nofile 65535
* hard nofile 65535
This raises the per-process limit on open files to 65,535.

Threading in javax.websockets / Tyrus

I'm writing a Java app that sends and receives messages from a websocket server. When the app receives a message it might take some time to process it. Therefore I'm trying to use multiple threads to receive messages. To my understanding Grizzly has selector threads as well as worker threads. By default there is 1 selector thread and 2 worker threads; in the following example I'm trying to increase those to 5 and 10 respectively.
In the example below I'm pausing the thread that calls the onMessage method for 10 seconds to simulate processing of the incoming information. The information comes in every second, therefore 10 threads should be able to handle the amount of traffic.
When I profile the run, only 1 selector thread and 2 worker threads are running. Furthermore, messages are only received at a 10-second interval, indicating that only 1 thread is handling the traffic, which I find very odd. During profiling, one worker thread, e.g. Grizzly(1), receives the first message sent. Then 10 seconds later Grizzly(2) receives the second message; after that Grizzly(2) keeps receiving the messages, and Grizzly(1) does not perform any actions.
Can someone please explain this odd behaviour, and how to change it so that, e.g., 10 threads are constantly waiting in line for a message?
Main:
public static void main(String[] args) {
    WebsocketTextClient client = new WebsocketTextClient();
    client.connect();
    for (int i = 0; i < 60; i++) {
        client.send("Test message " + i);
        try {
            Thread.sleep(1000);
        } catch (Exception e) {
            System.out.println("Error sleeping!");
        }
    }
}
WebsocketTextClient.java:
import java.net.URI;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;
import javax.websocket.Endpoint;
import javax.websocket.MessageHandler;
import org.glassfish.tyrus.client.ClientManager;
import org.glassfish.tyrus.client.ThreadPoolConfig;
import org.glassfish.tyrus.container.grizzly.client.GrizzlyClientProperties;

public class WebsocketTextClient {
    private ClientManager client;
    private ClientEndpointConfig clientConfig;
    WebsocketTextClientEndpoint endpoint;

    public WebsocketTextClient() {
        client = ClientManager.createClient();
        client.getProperties().put(GrizzlyClientProperties.SELECTOR_THREAD_POOL_CONFIG,
                ThreadPoolConfig.defaultConfig().setMaxPoolSize(5));
        client.getProperties().put(GrizzlyClientProperties.WORKER_THREAD_POOL_CONFIG,
                ThreadPoolConfig.defaultConfig().setMaxPoolSize(10));
    }

    public boolean connect() {
        try {
            clientConfig = ClientEndpointConfig.Builder.create().build();
            endpoint = new WebsocketTextClientEndpoint();
            client.connectToServer(endpoint, clientConfig, new URI("wss://echo.websocket.org"));
        } catch (Exception e) {
            return false;
        }
        return true;
    }

    public boolean disconnect() {
        return false;
    }

    public boolean send(String message) {
        endpoint.session.getAsyncRemote().sendText(message);
        return true;
    }

    private class WebsocketTextClientEndpoint extends Endpoint {
        Session session;

        @Override
        public void onOpen(Session session, EndpointConfig config) {
            System.out.println("Connection opened");
            this.session = session;
            session.addMessageHandler(new WebsocketTextClientMessageHandler());
        }
    }

    private class WebsocketTextClientMessageHandler implements MessageHandler.Whole<String> {
        @Override
        public void onMessage(String message) {
            System.out.println("Message received from " + Thread.currentThread().getName() + " " + message);
            try {
                Thread.sleep(10000);
            } catch (Exception e) {
                System.out.println("Error sleeping!");
            }
            System.out.println("Resuming");
        }
    }
}
What you appear to be asking is for WebSockets to be able to receive multiple messages sent by the same client connection, to process those messages in separate threads, and to send the responses when they are ready - which means, potentially out of order. This scenario can only happen if the client is multi-threaded.
To deal with multiple threads on the same WebSocket session would generally require the ability for WebSockets to multiplex the data going to and from the client. This is not currently a feature of WebSockets, but could certainly be built on top of it. However, multiplexing those client and server threads on a single channel introduces a fair bit of complexity, because you need to stop all the client and server threads from inadvertently overwriting or starving one another.
The Java spec for MessageHandler is perhaps a little ambiguous about the threading model;
https://docs.oracle.com/javaee/7/api/javax/websocket/MessageHandler.html says:
Each web socket session uses no more than one thread at a time to call its MessageHandlers.
But the important term here is "socket session". If your client is sending multiple messages within the same WebSocket session, the server side handler will execute within a single thread. This doesn't mean you can't do lots of interesting stuff within the thread, particularly if you're using Input/OutputStreams (or Writers) on both ends. It does mean that communication with the client is mediated by just one thread. If you want to multiplex the communication, you'd have to write something on top of the socket to do so; that would include developing your own threading model for dispatching the requests.
An easier solution would be to create a new Session for each client request. Each client request starts a session (ie, TCP connection), sends the data, and waits for the result. This gives you multiple MessageHandler threads - one per session, per the spec.
This is the most straightforward way to get multi-threading on the server side; any other approach will tend to need a multiplexing mechanism - which, depending on your use case, is perhaps not worth the effort, and certainly carries some complexity and risk.
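For illustration, a rough sketch of the session-per-request idea using the javax.websocket client API; the endpoint, URI handling and the 30-second wait are assumptions, not taken from the question:

import java.net.URI;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;
import org.glassfish.tyrus.client.ClientManager;

public class OneShotQuery {
    // Opens a new Session per request; each Session gets its own MessageHandler thread.
    public static String query(URI serverUri, String request) throws Exception {
        final CountDownLatch replyReceived = new CountDownLatch(1);
        final AtomicReference<String> reply = new AtomicReference<String>();
        ClientManager client = ClientManager.createClient();
        Session session = client.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig config) {
                session.addMessageHandler(new MessageHandler.Whole<String>() {
                    @Override
                    public void onMessage(String message) {
                        reply.set(message);
                        replyReceived.countDown();
                    }
                });
            }
        }, ClientEndpointConfig.Builder.create().build(), serverUri);
        try {
            session.getBasicRemote().sendText(request);
            replyReceived.await(30, TimeUnit.SECONDS); // assumed timeout
        } finally {
            session.close();
        }
        return reply.get();
    }
}

Calling query() from several of your own worker threads then gives you one session, and therefore one message-handling thread, per in-flight request.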
If you are concerned about the number of sessions (TCP/HTTP connections) between client/s and server/s, you could consider creating a pool of Sessions on the client side, and use each client Session one at a time, returning the session to the pool whenever the client is done with it.
Finally, perhaps not directly relevant: I found that when I used Payara Micro to serve the WebSocket endpoint, I needed to set this:
<resources>
    ...
    <managed-executor-service maximum-pool-size="200" core-pool-size="10" long-running-tasks="true" keep-alive-seconds="300" hung-after-seconds="300" task-queue-capacity="20000" jndi-name="concurrent/__defaultManagedExecutorService" object-type="system-all"></managed-executor-service>
</resources>
The default ManagedExecutorService only provides a single thread. This appears to be the case in Glassfish as well. This had me running around for hours thinking that I didn't understand the threading model, when it was just the pool size that was confusing me.
