Linking two Threads in a Client-Server Socket program - Java

I create threads of class A, and each one sends a serialized object to a Server using an ObjectOutputStream.
The Server creates a new Thread B for each socket connection (whenever a new A client connects).
B will call a synchronized method on a shared resource Mutex, which causes it (B) to wait() until some internal condition in the Mutex is true.
In this case, how can A know that B is currently waiting?
Hope this description is clear.
Class Arrangement:
A1--------->B1-------->|       |
A2--------->B2-------->| Mutex |
A3--------->B3-------->|       |
EDIT:
It's a must to have wait(), notify(), or notifyAll(), since this is for an academic project where concurrency is tested.

Normally A would read on the socket, which would "block" (i.e. not return, hang) until some data was sent back by B. It doesn't need to be written to deal with the waiting status of B. It just reads, and that inherently involves waiting for something to read.
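For example, a minimal sketch of that blocking call/response on A's side might look like this (the method name, host, port, and stream setup are illustrative, not taken from your code):

Object callAndWait(String host, int port, Serializable request)
        throws IOException, ClassNotFoundException {
    try (Socket socket = new Socket(host, port);
         ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
         ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
        out.writeObject(request);   // send the serialized request to B
        out.flush();
        return in.readObject();     // blocks here until B writes a reply back
    }
}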
Update
So you want A's user interface to stay responsive. By far the best way to do that is to take advantage of the user interface library's event queue system. All GUI frameworks have a central event loop that dispatches events to handlers (button click, mouse move, timer, etc.). There is usually a way for a background thread to post something to that event queue so that it will be executed on the main UI thread. The details will depend on the framework you're using.
For example, in Swing, a background thread can do this:
SwingUtilities.invokeAndWait(someRunnableObject);
So suppose you define this interface:
public interface ServerReplyHandler {
    void handleReply(Object reply);
}
Then make a nice API for your GUI code to use when it wants to submit a request to the server:
public class Communications {
    public static void callServer(Object inputs, ServerReplyHandler handler);
}
So your client code can call the server like this:
showWaitMessage();
Communications.callServer(myInputs, new ServerReplyHandler() {
    public void handleReply(Object myOutputs) {
        hideWaitMessage();
        // do something with myOutputs...
    }
});
To implement the above API, you'd have a thread-safe queue of request objects, which store the inputs object and the handler for each request. And a background thread which just does nothing but pull requests from the queue, send the serialised inputs to the server, read back the reply and deserialise it, and then do this:
final ServerReplyHandler currentHandler = ...
final Object currentReply = ...
SwingUtilities.invokeAndWait(new Runnable() {
    public void run() {
        currentHandler.handleReply(currentReply);
    }
});
So as soon as the background thread has read back the reply, it passes it back into the main UI thread via a callback.
This is exactly how browsers do asynchronous communication from JS code. If you're familiar with jQuery, the above Communications.callServer method is the same pattern as:
showWaitMessage();
$.get('http://...', function(reply) {
    hideWaitMessage();
    // do something with 'reply'
});
The only difference in this case is that you are writing the whole communication stack by hand.
Update 2
You asked:
You mean I can pass "new ObjectOutputStream().writeObject(obj)" as "myInputs" in Communications.callServer?
If all information is passed as serialised objects, you can build the serialisation into callServer. The calling code just passes some object that supports serialisation. The implementation of callServer would serialise that object into a byte[] and post that to the work queue. The background thread would pop it from the queue and send the bytes to the server.
Note that this avoids serialising the object on the background thread. The advantage of this is that all background thread activity is separated from the UI code. The UI code can be completely unaware that you're using threads for communication.
Re: wait and notify, etc. You don't need to write your own code to use those. Use one of the standard implementations of the BlockingQueue interface. In this case you could use LinkedBlockingQueue with the default constructor so it can accept an unlimited number of items. That means that submitting to the queue will always happen without blocking. So:
private static class Request {
    public byte[] send;
    public ServerReplyHandler handler;
}

private static final BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>();

public static void callServer(Object inputs, ServerReplyHandler handler) {
    try {
        ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
        new ObjectOutputStream(byteStream).writeObject(inputs);
        Request r = new Request();
        r.send = byteStream.toByteArray();
        r.handler = handler;
        requestQueue.put(r);
    } catch (IOException | InterruptedException e) {
        throw new RuntimeException(e);
    }
}
Meanwhile the background worker thread is doing this:
for (;;) {
    Request r = requestQueue.take();
    if (r == shutdown) {
        break;
    }
    // connect to server, send r.send bytes to it
    // read back the response as a byte array:
    final byte[] response = ...
    final ServerReplyHandler currentHandler = r.handler;
    SwingUtilities.invokeAndWait(new Runnable() {
        public void run() {
            try {
                currentHandler.handleReply(
                    new ObjectInputStream(
                        new ByteArrayInputStream(response)
                    ).readObject());
            } catch (IOException | ClassNotFoundException e) {
                e.printStackTrace();
            }
        }
    });
}
The shutdown variable is just:
private static Request shutdown = new Request();
i.e. it's a dummy request used as a special signal. This allows you to have another public static method to allow the UI to ask the background thread to quit (would presumably clear the queue before putting shutdown on it).
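That quit method could look roughly like this (the method name is mine, not part of the code above):

public static void shutdownWorker() {
    requestQueue.clear();           // drop anything still pending
    try {
        requestQueue.put(shutdown); // wake the worker so its loop sees the sentinel and exits
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}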
Note the essentials of the pattern: UI objects are never accessed on the background thread. They are only manipulated from the UI thread. There is a clear separation of ownership. Data is passed between threads as byte arrays.
You could start multiple workers if you wanted to support more than one request happening simultaneously.
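For example (sketch only; workerLoop() is a made-up name standing for the for (;;) loop above, and with several workers you would need to put one shutdown marker per worker when quitting):

int workerCount = 3; // illustrative
for (int i = 0; i < workerCount; i++) {
    Thread worker = new Thread(() -> workerLoop()); // each worker drains the same requestQueue
    worker.setDaemon(true);
    worker.start();
}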

Related

How to pass a message from TimerTask to main thread?

I have a main client which keeps background timers for each peer. These timers run in a background thread, and in 30s (the timeout period) are scheduled to perform the task of marking the respective peer as offline. The block of code to do this is:
public void startTimer() {
    timer = new Timer();
    timer.schedule(new TimerTask() {
        public void run() {
            status = false;
            System.out.println("Setting " + address.toString() + " status to offline");
            // need to send failure message somehow
            thread.sendMessage();
        }
    }, 5*1000);
}
Then, in the main program, I need some way to detect when the above timer task has been run, so that the main client can then send a failure message to all other peers, something like:
while (true)
    if (msgFromThreadReceived)
        notifyPeers();
How would I be able to accomplish this with TimerTask? As I understand, the timer is running in a separate thread, and I want to somehow pass a message to the main thread to notify the main thread that the task has been run.
I would have the class that handles the timers for the peers take a concurrent queue and place a message in the queue when the peer goes offline. Then the "main" thread can poll the queue(s) in an event-driven way, receiving and processing the messages.
Please note that this "main" thread MUST NOT be the event dispatch thread of a GUI framework. If there is something that needs to be updated in the GUI when the main thread receives the message, it can invoke another piece of code on the event dispatch thread upon reception of the message.
Two good choices for the queue would be ConcurrentLinkedQueue if the queue should be unbounded (the timer threads can put any number of messages in the queue before the main thread picks them up), or LinkedBlockingQueue if there should be a limit on the size of the queue, and if it gets too large, the timer threads have to wait before they can put another message on it (this is called backpressure, and can be important in distributed, concurrent systems, but may not be relevant in your case).
The idea here is to implement a version of the Actor Model (q.v.), in which nothing is shared between threads (actors), and any data that needs to be sent (which should be immutable) is passed between them. Each actor has an inbox in which it can receive messages and it acts upon them. Only, your timer threads probably don't need inboxes, if they take all their data as parameters to the constructor and don't need to receive any messages from the main thread after they're started.
public record PeerDownMessage(String peerName, int errorCode) {
}

public class PeerWatcher {
    private final Peer peer;
    private final BlockingQueue<PeerDownMessage> queue;

    public PeerWatcher(Peer peer, BlockingQueue<PeerDownMessage> queue) {
        this.peer = Objects.requireNonNull(peer);
        this.queue = Objects.requireNonNull(queue);
    }

    public void startTimer() throws InterruptedException {
        // . . .
        // time to send failure message
        queue.put(new PeerDownMessage(peer.getName(), error));
        // . . .
    }
}

public class Main {
    public void eventLoop(List<Peer> peers) throws InterruptedException {
        LinkedBlockingQueue<PeerDownMessage> inbox =
            new LinkedBlockingQueue<>();
        for (Peer peer : peers) {
            PeerWatcher watcher = new PeerWatcher(peer, inbox);
            watcher.startTimer();
        }
        while (true) {
            PeerDownMessage message = inbox.take();
            SwingUtilities.invokeLater(() -> {
                // suppose there is a map of labels for each peer
                JLabel label = labels.get(message.peerName());
                label.setText(message.peerName() +
                    " failed with error " + message.errorCode());
            });
        }
    }
}
Notice that to update the GUI, we cause that action to be performed on yet another thread, the Swing Event Dispatch Thread, which must be different from our main thread.
There are big, complex frameworks you can use to implement the actor model, but the heart of it is this: nothing is shared between threads, so you never need to synchronize or make anything volatile; anything an actor needs, it either receives as a parameter to its constructor or via its inbox (in this example, only the main thread has an inbox, since the worker threads don't need to receive anything once they are started); and it is best to make everything immutable. I used a record instead of a class for the message, but you could use a regular class. Just make the fields final, set them in the constructor, and guarantee they can't be null, as in the PeerWatcher class.
I said the main thread can poll the "queue(s)," implying there could be more than one, but in this case they all send the same type of message, and they identify which peer the message is for in the message body. So I just gave every watcher a reference to the same inbox for the main thread. That's probably best. An actor should just have one inbox; if it needs to do multiple things, it should probably be multiple actors (that's the Erlang way, and that's where I've taken the inspiration for this from).
But if you really needed to have multiple queues, main could poll them like so:
while (true) {
    for (LinkedBlockingQueue<PeerDownMessage> queue : queues) {
        if (queue.peek() != null) {
            PeerDownMessage message = queue.take();
            handleMessageHowever(message);
        }
    }
}
But that's a lot of extra stuff you don't need. Stick to one inbox queue per actor, and then polling the inbox for messages to process is simple.
I initially wrote this to use ConcurrentLinkedQueue, but I used put and take, which are methods of BlockingQueue, so I changed it to use LinkedBlockingQueue. If you prefer ConcurrentLinkedQueue, you can use add and poll instead, but on further consideration I would really recommend BlockingQueue for the simplicity of its take() method: it lets you block while waiting for the next available item instead of busy waiting.
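To make that difference concrete, here is a small sketch; the queue types are assumed from the example above, and the sleep is only an illustrative backoff:

PeerDownMessage nextBlocking(BlockingQueue<PeerDownMessage> inbox)
        throws InterruptedException {
    return inbox.take();                      // parks the thread until an item arrives
}

PeerDownMessage nextSpinning(ConcurrentLinkedQueue<PeerDownMessage> plainQueue)
        throws InterruptedException {
    PeerDownMessage m;
    while ((m = plainQueue.poll()) == null) { // poll() returns null when the queue is empty
        Thread.sleep(10);                     // crude backoff; still a form of busy waiting
    }
    return m;
}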

Do while loop behaving unexpectedly, for some inexplicable reason

I've been all over the internet and the Java docs regarding this one; I can't seem to figure out what it is about do while loops I'm not understanding. Here's the background: I have some message handler code that takes some JSON formatted data from a REST endpoint, parses it into a runnable task, then adds this task to a linked blocking queue for processing by the worker thread. Meanwhile, on the worker thread, I have this do while loop to process the message tasks:
do {
    PublicTask currentTask = pubMsgQ.poll();
    currentTask.run();
} while(pubMsgQ.size() > 0);
pubMsgQ is a LinkedBlockingQueue<PublicTask> (PublicTask implements the Runnable interface). I can't see any problems with this loop (obviously, or else I wouldn't be here), but this is how it behaves during execution: Upon entering the do block, pubMsgQ is polled and returns the runnable task as expected. The task is then run successfully with expected results, but then we get to the while statement. Now, according to the Java docs, poll() should return and remove the head of the queue, so I should expect that pubMsgQ.size() will return 0, right? Wrong I guess, because somehow the while statement passes and the program enters the do block again; of course this time pubMsgQ.poll() returns null (as I would have expected it should) and the program crashes with NullPointerException. What? Please explain like I'm five...
EDIT:
I decided to leave my original post as is above; because I think I actually explain the undesired behavior of that specific piece of the code quite succinctly (the loop is being executed twice while I'm fairly certain there is no way the loop should be executing twice). However, I realize that probably doesn't give enough context for that loop's existence and purpose in the first place, so here is the complete breakdown for what I am actually trying to accomplish with this code as I am sure there is a better way to implement this altogether anyways.
What this loop is actually a part of is a message handler class which implements the MessageHandler interface belonging to my Client Endpoint class [correction from my previous post; I had said the messages coming in were JSON formatted strings from a REST endpoint. This is technically not true: they are JSON formatted strings being received through a web socket connection. Note that while I am using the Spring framework, this is not a STOMP client; I am only using the built-in javax WebSocketContainer as this is more lightweight and easier for me to implement]. When a new message comes in onMessage() is called, which passes the JSON string to the MessageHandler; so here is the code for the entire MessageHandler class:
public class MessageHandler implements com.innotech.gofish.AutoBrokerClient.MessageHandler {
    private LinkedBlockingQueue<PublicTask> pubMsgQ = new LinkedBlockingQueue<PublicTask>();
    private LinkedBlockingQueue<AuthenticatedTask> authMsgQ = new LinkedBlockingQueue<AuthenticatedTask>();
    private MessageLooper workerThread;
    private CyclicBarrier latch = new CyclicBarrier(2);
    private boolean running = false;
    private final boolean authenticated;

    public MessageHandler(boolean authenticated) {
        this.authenticated = authenticated;
    }

    @Override
    public void handleMessage(String msg) {
        try {
            //Create new Task and submit it to the message queue:
            if(authenticated) {
                AuthenticatedTask msgTsk = new AuthenticatedTask(msg);
                authMsgQ.put(msgTsk);
            } else {
                PublicTask msgTsk = new PublicTask(msg);
                pubMsgQ.put(msgTsk);
            }
            //Check status of worker thread:
            if(!running) {
                workerThread = new MessageLooper();
                running = true;
                workerThread.start();
            } else if(running && !workerThread.active) {
                latch.await();
                latch.reset();
            }
        } catch(InterruptedException | BrokenBarrierException e) {
            e.printStackTrace();
        }
    }

    private class MessageLooper extends Thread {
        boolean active = false;

        public MessageLooper() {
        }

        @Override
        public synchronized void run() {
            while(running) {
                active = true;
                if(authenticated) {
                    do {
                        AuthenticatedTask currentTask = authMsgQ.poll();
                        currentTask.run();
                        if(GoFishApplication.halt) {
                            GoFishApplication.reset();
                        }
                    } while(authMsgQ.size() > 0);
                } else {
                    do {
                        PublicTask currentTask = pubMsgQ.poll();
                        currentTask.run();
                    } while(pubMsgQ.size() > 0);
                }
                try {
                    active = false;
                    latch.await();
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
You can probably see where I'm going with this... what this jerry-rigged code is trying to do is act as a facsimile for the Looper class provided by the Android Development Kit. The actual desired behavior is: as messages are received, the handleMessage() method adds the messages to the queue for processing, and the messages are processed on the worker thread separately as long as there are messages to process. If there are no more messages to process, the worker thread waits until it is notified by the handler that more messages have been received, at which point it resumes processing those messages until the queue is once again empty. Rinse and repeat until the user stops the program.
Of course, the closest thing the JDK provides to this is the ThreadPoolExecutor (which I know is probably the actual proper way to implement this), but for the life of me I couldn't figure out how to use it for this exact case. Finally, as a quick aside so I can be sure to explain everything fully, the reason why there are two queues (and a public and an authenticated handler) is that there are two web socket connections. One is an authenticated channel for sending/receiving private messages; the other is un-authenticated and used only to send/receive public messages. There should be no interference, however, given that the authenticated status is final and set at construction, and each Client Endpoint is passed its own Handler, which is instantiated at the time of server connection.
You appear to have a number of concurrency / threading bugs in your code.
Assumptions:
It looks like there could be multiple MessageHandler objects, each with its own pair of queues and (supposedly) at most one MessageLooper thread. It also looks as if a given MessageHandler could be used by multiple request worker threads.
If that is the case, then one problem is that MessageHandler is not thread-safe. Specifically, the handleMessage is accessing and updating fields of the MessageHandler instance without doing any synchronization.
Some of the fields are initialized during object creation and then never changed. They are probably OK. (But you should declare them as final to be sure!) But some of the variables are supposed to change during operation, and they must be handled correctly.
One section that rings particular alarm bells is this:
if (!running) {
    workerThread = new MessageLooper();
    running = true;
    workerThread.start();
} else if (running && !workerThread.active) {
    latch.await();
    latch.reset();
}
Since this is not synchronized, and the variables are not volatile:
There are race conditions if two threads call this code simultaneously; e.g. between testing running and assigning true to it.
If one thread sets running to true, there are no guarantees that a second thread will see the new value.
The net result is that you could potentially get two or more MessageLooper threads for a given set of queues. That breaks your assumptions in the MessageLooper code.
Looking at the MessageLooper code, I see that you have declared the run method as synchronized. Unfortunately, that doesn't help. The problem is that the run method will be synchronizing on this ... which is the specific instance of MessageLooper. And it will acquire the lock once and release it once. In short, the synchronized is wrong.
(For Java synchronized methods and synchronized blocks to work properly, 1) the threads involved need to synchronize on the same object (i.e. the same primitive lock), and 2) all read and write operations on the state guarded by the lock need to be done while holding the lock. This applies to use of Lock objects as well.)
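A minimal sketch of what correct guarding looks like (the names are illustrative, not a drop-in fix for your class):

class LooperState {
    private final Object lock = new Object();
    private boolean running = false;          // guarded by lock

    boolean startIfNotRunning() {
        synchronized (lock) {                 // every access uses the same lock object
            if (running) {
                return false;
            }
            running = true;                   // write while holding the lock
            return true;
        }
    }

    boolean isRunning() {
        synchronized (lock) {                 // reads hold the same lock too
            return running;
        }
    }
}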
So ...
There is no synchronization between a MessageLooper thread and any other threads that are adding to or removing from the queues.
There are no guarantees that the MessageLooper thread will notice changes to the running flag.
As I previously noted, you could have two or more MessageLooper polling the same pair of queues.
In short, there are lots of possible explanations for strange behavior in the code in the Question. This includes the specific problem you noticed with the queue size.
Writing correct multi-threaded code is difficult. This is why you should be using an ExecutorService rather than attempting to roll your own code.
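For this particular handler, a single-threaded executor would replace most of the hand-rolled queue/barrier machinery; a rough sketch only (it ignores the GoFishApplication.halt handling):

private final ExecutorService worker = Executors.newSingleThreadExecutor();

@Override
public void handleMessage(String msg) {
    // Tasks run one at a time, in submission order, on the executor's single thread.
    if (authenticated) {
        worker.submit(new AuthenticatedTask(msg));
    } else {
        worker.submit(new PublicTask(msg));
    }
}

You would call worker.shutdown() when the client disconnects.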
But if you do need to roll your own concurrency code, I recommend buying and reading "Java Concurrency in Practice" by Brian Goetz et al. It is still the only good textbook on this topic ...

Is there a non-Thread alternative to run objects concurrently or to run never ending loop without blocking the main thread?

My goal is to run multiple objects concurrently without creating new Thread due to scalability issues. One of the usage would be running a keep-alive Socket connection.
while (true) {
    final Socket socket = serverSocket.accept();
    final Thread thread = new Thread(new SessionHandler(socket));
    thread.start();
    // this will become a problem when there are 1000 threads.
    // I am looking for an alternative to mimic the start() of Thread without creating a new Thread for each SessionHandler object.
}
For brevity, I will use a Printer analogy.
What I've tried:
Use CompletableFuture; after checking, it uses ForkJoinPool, which is a thread pool.
What I think would work:
Actor model. Honestly, the concept is new to me today and I am still figuring out how to run an Object method without blocking the main thread.
main/java/SlowPrinter.java
public class SlowPrinter {
    private static final Logger logger = LoggerFactory.getLogger(SlowPrinter.class);

    void print(String message) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException ignored) {
        }
        logger.debug(message);
    }
}
main/java/NeverEndingPrinter.java
public class NeverEndingPrinter implements Runnable {
    private final SlowPrinter printer;

    public NeverEndingPrinter(SlowPrinter printer) {
        this.printer = printer;
    }

    @Override
    public void run() {
        while (true) {
            printer.print(Thread.currentThread().getName());
        }
    }
}
test/java/NeverEndingPrinterTest.java
@Test
void withThread() {
    SlowPrinter slowPrinter = new SlowPrinter();
    NeverEndingPrinter neverEndingPrinter = new NeverEndingPrinter(slowPrinter);
    Thread thread1 = new Thread(neverEndingPrinter);
    Thread thread2 = new Thread(neverEndingPrinter);
    thread1.start();
    thread2.start();
    try {
        Thread.sleep(1000);
    } catch (InterruptedException ignored) {
    }
}
Currently, creating a new Thread is the only solution I know of. However, this becomes an issue when there are 1000s of threads.
The solution that many developers in the past have come up with is the ThreadPool. It avoids the overhead of creating many threads by reusing the same limited set of threads.
It however requires that you split up your work in small parts and you have to link the small parts step by step to execute a flow of work that you would otherwise do in a single method on a separate thread. So that's what has resulted in the CompletableFuture.
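As a rough illustration of that chaining (readRequest, process, and reply are placeholder steps, not real API):

ExecutorService pool = Executors.newFixedThreadPool(4);

CompletableFuture
        .supplyAsync(() -> readRequest(), pool)             // small step 1
        .thenApplyAsync(request -> process(request), pool)  // small step 2
        .thenAcceptAsync(result -> reply(result), pool);    // small step 3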
The Actor model is a more fancy modelling technique to assign the separate steps in a flow, but they will again be executed on a limited number of threads, usually just 1 or 2 per actor.
For a very nice theoretical explanation of what problems are solved this way, see https://en.wikipedia.org/wiki/Staged_event-driven_architecture
If I look back at your original question, your problem is that you want to receive keep-alive messages from multiple sources, and don't want to use a separate thread for each source.
If you use blocking IO like while (socket.getInputStream().read() != -1) {}, you will always need a thread per connection, because that implementation will sleep the thread while waiting for data, so the thread cannot do anything else in the meantime.
Instead, you really should look into NIO. You would only need 1 selector and 1 thread where you continuously check the selector for incoming messages from any source (without blocking the thread), and use something like a HashMap to keep track of which source is still sending messages.
See also Java socket server without using threads
The NIO API is very low-level, BTW, so using a framework like Netty might be easier to get started.
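To make the selector idea above concrete, here is a minimal single-threaded sketch (the port, buffer size, and the keep-alive bookkeeping comment are illustrative):

void runSelectorLoop() throws IOException {
    Selector selector = Selector.open();
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress(9000));
    server.configureBlocking(false);
    server.register(selector, SelectionKey.OP_ACCEPT);

    ByteBuffer buffer = ByteBuffer.allocate(1024);
    while (true) {
        selector.select();                               // blocks until something is ready
        Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
        while (keys.hasNext()) {
            SelectionKey key = keys.next();
            keys.remove();
            if (key.isAcceptable()) {
                SocketChannel client = server.accept();  // new connection
                client.configureBlocking(false);
                client.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                SocketChannel client = (SocketChannel) key.channel();
                buffer.clear();
                if (client.read(buffer) == -1) {         // peer went away
                    key.cancel();
                    client.close();
                } else {
                    // e.g. update a Map<SocketChannel, Long> of "last seen" timestamps here
                }
            }
        }
    }
}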
You're looking for a ScheduledExecutorService.
Create an initial ScheduledExecutorService with a fixed appropriate number of threads, e.g. Executors.newScheduledThreadPool(5) for 5 threads, and then you can schedule a recurring task with e.g. service.scheduleAtFixedRate(task, initialDelay, delayPeriod, timeUnit).
Of course, this will use threads internally, but it doesn't have the problem of thousands of threads that you're concerned about.
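Putting those two calls together (the task body, delay, and period are illustrative):

ScheduledExecutorService service = Executors.newScheduledThreadPool(5);

Runnable keepAlive = () -> System.out.println("keep-alive tick");
service.scheduleAtFixedRate(keepAlive, 0, 30, TimeUnit.SECONDS); // start now, repeat every 30s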

Java - can two threads on client side use the same input stream from server?

I'm working on a Java client/server application with a pretty specific set of rules as to how I have to develop it. The server creates a ClientHandler instance that has input and output streams to the client socket, and any input and output between them is triggered by events in the client GUI.
I have now added in functionality server-side that will send out periodic updates to all connected clients (done by storing each created PrintWriter object from the ClientHandlers in an ArrayList<PrintWriter>). I need an equivalent mechanism client-side to process these messages, and have been told this needs to happen in a second client-side thread whose run() method uses a do...while(true) loop until the client disconnects.
This all makes sense to me so far, what I am struggling with is the fact that the two threads will have to share the one input stream, and essentially 'ignore' any messages that aren't of the type that they handle. In my head, it should look something like this:
Assuming that every message from server sends a boolean of value true on a message-to-all, and one of value false on a message to an individual client...
Existing Client Thread
//method called from actionPerformed(ActionEvent e)
//handles server response to bid request
public void receiveResponse()
{
    //thread should only process to-specific-client messages
    if (networkInput.nextBoolean() == false)
    {
        //process server response...
    }
}
Second Client-side Thread
//should handle all messages sent to all clients
public void run()
{
    do {
        if (networkInput.nextBoolean() == true)
        {
            //process broadcasted message...
        }
    } while (true);
}
As they need to use the same input stream, I would obviously be adding some synchronized, wait/notify calls, but generally, is what I'm looking to do here possible? Or will the two threads trying to read in from the same input stream interfere with each other too much?
Please let me know what you think!
Thanks,
Mark
You can do it, though it will be complicated to test and get right. How much is "too much" depends on you. A simpler solution is to have a reader thread pass messages to the two worker threads.
ExecutorService thread1 = Executors.newSingleThreadExecutor();
ExecutorService thread2 = Executors.newSingleThreadExecutor();

while (running) {
    Message message = input.readMessage();
    if (message.isTypeOne())
        thread1.submit(() -> process(message));
    else if (message.isTypeTwo())
        thread2.submit(() -> process(message));
    else {
        // do something else.
    }
}
thread1.shutdown();
thread2.shutdown();

Async NIO: Same client sending multiple messages to Server

Regarding Java NIO2.
Suppose we have the following to listen to client requests...
asyncServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Object>() {
    @Override
    public void completed(final AsynchronousSocketChannel asyncSocketChannel, Object attachment) {
        // Put the execution of the completion handler on another thread so that
        // we don't block another channel being accepted.
        executer.submit(new Runnable() {
            public void run() {
                handle(asyncSocketChannel);
            }
        });
        // call another.
        asyncServerSocketChannel.accept(null, this);
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        // TODO Auto-generated method stub
    }
});
This code will accept a client connection, process it, and then accept another.
To communicate with the server, the client opens up an AsyncSocketChannel and fires the message.
The CompletionHandler's completed() method is then invoked.
However, this means that if the client wants to send another message on the same AsyncSocket instance, it can't.
It has to create another AsyncSocket instance, which I believe means another TCP connection, which is a performance hit.
Any ideas how to get around this?
Or to put the question another way, any ideas how to make the same asyncSocketChannel receive multiple CompletionHandler completed() events?
edit:
My handling code is like this...
public void handle(AsynchronousSocketChannel asyncSocketChannel) {
    ByteBuffer readBuffer = ByteBuffer.allocate(100);
    try {
        // read a message from the client, timeout after 10 seconds
        Future<Integer> futureReadResult = asyncSocketChannel.read(readBuffer);
        futureReadResult.get(10, TimeUnit.SECONDS);
        String receivedMessage = new String(readBuffer.array());
        // some logic based on the message here...
        // after the logic is a return message to client
        ByteBuffer returnMessage = ByteBuffer.wrap((RESPONSE_FINISHED_REQUEST + " " + client
                + ", " + RESPONSE_COUNTER_EQUALS + value).getBytes());
        Future<Integer> futureWriteResult = asyncSocketChannel.write(returnMessage);
        futureWriteResult.get(10, TimeUnit.SECONDS);
    } ...
So that's it: my server reads a message from the async channel and returns an answer.
The client blocks until it gets the answer. But this is ok. I don't care if the client blocks.
When this is finished, the client tries to send another message on the same async channel and it doesn't work.
There are 2 phases of connection and 2 different kinds of completion handlers.
The first phase is to handle a connection request; this is what you have programmed (BTW, as Jonas said, there is no need to use another executor). The second phase (which can be repeated multiple times) is to issue an I/O request and to handle request completion. For this, you have to supply a memory buffer holding the data to read or write, and you did not show any code for this. When you do the second phase, you'll see that there is no such problem as you wrote: "if the client wants to send another message on the same AsyncSocket instance it can't".
One problem with NIO2 is that, on the one hand, the programmer has to avoid multiple async operations of the same kind (accept, read, or write) on the same channel (or else an error occurs), and on the other hand, the programmer has to avoid blocking waits in handlers. This problem is solved in the df4j-nio2 subproject of the df4j actor framework, where both AsyncServerSocketChannel and AsyncSocketChannel are represented as actors. (df4j is developed by me.)
First, you should not use an executor like you have in the completed-method. The completed-method is already handled in a new worker thread.
In your completed-method for .accept(...), you should call asyncSocketChannel.read(...) to read the data. The client can just send another message on the same socket. This message will be handled with a new call to the completed-method, perhaps by another worker thread on your server.
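A rough sketch of that read-and-rearm pattern (the buffer size, decoding, and error handling are illustrative):

void startReading(final AsynchronousSocketChannel channel) {
    final ByteBuffer buffer = ByteBuffer.allocate(100);
    channel.read(buffer, null, new CompletionHandler<Integer, Object>() {
        @Override
        public void completed(Integer bytesRead, Object attachment) {
            if (bytesRead == -1) {                  // client closed the connection
                return;
            }
            buffer.flip();
            String message = StandardCharsets.UTF_8.decode(buffer).toString();
            // ... handle the message, possibly write a response ...
            buffer.clear();
            channel.read(buffer, null, this);       // re-arm: wait for the next message on the SAME channel
        }

        @Override
        public void failed(Throwable exc, Object attachment) {
            // log the error and/or close the channel
        }
    });
}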
