Async NIO: Same client sending multiple messages to Server - java

Regarding Java NIO2.
Suppose we have the following to listen to client requests...
asyncServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Object>() {
    @Override
    public void completed(final AsynchronousSocketChannel asyncSocketChannel, Object attachment) {
        // Run the completion handler on another thread so that
        // we don't block another channel from being accepted.
        executor.submit(new Runnable() {
            public void run() {
                handle(asyncSocketChannel);
            }
        });
        // Accept the next connection.
        asyncServerSocketChannel.accept(null, this);
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        // TODO Auto-generated method stub
    }
});
This code will accept a client connection, process it, and then accept another.
To communicate with the server, the client opens an AsyncSocketChannel and fires the message.
The CompletionHandler's completed() method is then invoked.
However, this means that if the client wants to send another message on the same AsyncSocket instance, it can't.
It has to create another AsyncSocket instance, which I believe means another TCP connection, which is a performance hit.
Any ideas how to get around this?
Or, to put the question another way, any ideas how to make the same asyncSocketChannel receive multiple CompletionHandler completed() events?
edit:
My handling code is like this...
public void handle(AsynchronousSocketChannel asyncSocketChannel) {
    ByteBuffer readBuffer = ByteBuffer.allocate(100);
    try {
        // read a message from the client, timeout after 10 seconds
        Future<Integer> futureReadResult = asyncSocketChannel.read(readBuffer);
        futureReadResult.get(10, TimeUnit.SECONDS);
        String receivedMessage = new String(readBuffer.array());
        // some logic based on the message here...
        // after the logic is a return message to client
        ByteBuffer returnMessage = ByteBuffer.wrap((RESPONSE_FINISHED_REQUEST + " " + client
                + ", " + RESPONSE_COUNTER_EQUALS + value).getBytes());
        Future<Integer> futureWriteResult = asyncSocketChannel.write(returnMessage);
        futureWriteResult.get(10, TimeUnit.SECONDS);
    } ...
So that's it: my server reads a message from the async channel and returns an answer.
The client blocks until it gets the answer, but this is OK; I don't care if the client blocks.
When this is finished, the client tries to send another message on the same async channel and it doesn't work.

There are 2 phases of connection and 2 different kinds of completion handlers.
The first phase is to handle a connection request; this is what you have programmed (BTW, as Jonas said, there is no need to use another executor). The second phase (which can be repeated multiple times) is to issue an I/O request and to handle its completion. For this, you have to supply a memory buffer holding the data to read or write, and you did not show any code for that. When you implement the second phase, you'll see that there is no such problem as you wrote: "if the client wants to send another message on the same AsyncSocket instance it can't".
One problem with NIO2 is that, on the one hand, the programmer has to avoid multiple simultaneous async operations of the same kind (accept, read, or write) on the same channel (or an error occurs), and on the other hand, the programmer has to avoid blocking waits in handlers. This problem is solved in the df4j-nio2 subproject of the df4j actor framework, where both AsyncServerSocketChannel and AsyncSocketChannel are represented as actors. (df4j is developed by me.)

First, you should not use an executor like you have in the completed() method. The completed() method is already invoked on a worker thread.
In your completed() method for accept(...), you should call asyncSocketChannel.read(...) to read the data. The client can just send another message on the same socket. This message will be handled by a new call to the completed() method, perhaps by another worker thread on your server.
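To make that concrete, here is a minimal, self-contained sketch of a read loop that re-arms itself on the same AsynchronousSocketChannel, so one connection delivers as many completed() events as the client sends messages. The class name, buffer size, port choice, and the ad-hoc client at the bottom are my own illustration, not code from the question:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class MultiReadDemo {

    // Re-arm a read on the SAME channel after each completed read.
    static void startRead(final AsynchronousSocketChannel channel, final CountDownLatch received) {
        final ByteBuffer buffer = ByteBuffer.allocate(100);
        channel.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer buf) {
                if (bytesRead < 0) return; // client closed the connection
                buf.flip();
                System.out.println("server got: " + StandardCharsets.UTF_8.decode(buf));
                received.countDown();
                startRead(channel, received); // issue the next read on the same channel
            }
            @Override
            public void failed(Throwable exc, ByteBuffer buf) { }
        });
    }

    // Returns true if the server saw two separate messages on one connection.
    static boolean runDemo() throws Exception {
        final CountDownLatch received = new CountDownLatch(2);
        final AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel channel, Void att) {
                startRead(channel, received); // begin the read loop for this client
                server.accept(null, this);    // and accept the next client
            }
            @Override
            public void failed(Throwable exc, Void att) { }
        });

        // One client connection sending two messages over the same channel.
        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(new InetSocketAddress("127.0.0.1", port)).get();
        client.write(ByteBuffer.wrap("first".getBytes(StandardCharsets.UTF_8))).get();
        Thread.sleep(200); // crude gap so the two writes arrive as two reads
        client.write(ByteBuffer.wrap("second".getBytes(StandardCharsets.UTF_8))).get();

        boolean ok = received.await(5, TimeUnit.SECONDS);
        client.close();
        server.close();
        return ok;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("both messages received on one channel: " + runDemo());
    }
}
```

Note that TCP does not preserve message boundaries; a real protocol would need framing (the 200 ms sleep here only makes two separate reads very likely).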


RMI Service run similar to sockets

So if I have a socket server, I can accept each socket and pass it to an executor:
while (true) {
    Socket conn = socketServ.accept();
    Runnable task = new Runnable() {
        @Override
        public void run() {
            try {
                server.executor(conn);
            } catch (IOException e) {
            }
        }
    };
    exec1.execute(task);
}
Doing this allows my server to run on many threads and does not block on a single thread. Because I also have a reference to that socket, called "conn", I can successfully return messages as well.
Now I have an RMI interface, which basically lets me call methods back and forth.
for example if I had this method:
public MusicServerResponseImpl CreatePlayerlist(String Name, UserObjectImpl uo) throws RemoteException {
    MusicServerResponseImpl res = new MusicServerResponseImpl();
    return res;
}
Which returns a serializable object. My concern is that when this method gets called, I think it is going to be called on the main thread of the server, and thus will block that thread and hurt parallelism.
What I think is the solution is to have every single RMI method also create a task for an executor, to speed up the execution of everything. The issue I am seeing, however, is that unlike the socket case, where I have an object to send information back through, I am unsure how I would return a response from the RMI method without somehow having to block the thread.
Does that make sense? Basically I am asking how I can execute RMI methods in parallel while still being able to return results!
Thanks for the help!
Does that make sense?
No. Concurrent calls are natively supported.
See this documentation page and look for the property named maxConnectionThreads.
You could also have tested your assumptions by, for example, printing the current thread name in your server code, and trying to execute concurrent calls and see what happens.
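As a sketch of that thread-name experiment (the Echo interface and class names here are hypothetical, invented just for the demo): exporting a remote object in-process is enough, because calls through the stub still travel over loopback TCP and run on RMI worker threads, not the caller's thread.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class RmiThreadDemo {
    // Hypothetical remote interface, just for this experiment.
    interface Echo extends Remote {
        String threadName() throws RemoteException;
    }

    static class EchoImpl implements Echo {
        public String threadName() {
            return Thread.currentThread().getName();
        }
    }

    // Export a remote object and compare thread names across a stub call.
    static boolean runsOnDifferentThread() throws Exception {
        EchoImpl impl = new EchoImpl();
        Echo stub = (Echo) UnicastRemoteObject.exportObject(impl, 0); // anonymous port
        try {
            String caller = Thread.currentThread().getName();
            String server = stub.threadName(); // marshalled over TCP, even in-process
            System.out.println("caller thread: " + caller);
            System.out.println("server thread: " + server);
            return !server.equals(caller);
        } finally {
            UnicastRemoteObject.unexportObject(impl, true); // let the JVM exit
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("handled on a different thread: " + runsOnDifferentThread());
    }
}
```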

Java - can two threads on client side use the same input stream from server?

I'm working on a Java client/server application with a pretty specific set of rules as to how I have to develop it. The server creates a ClientHandler instance that has input and output streams to the client socket, and any input and output between them is triggered by events in the client GUI.
I have now added in functionality server-side that will send out periodic updates to all connected clients (done by storing each created PrintWriter object from the ClientHandlers in an ArrayList<PrintWriter>). I need an equivalent mechanism client-side to process these messages, and have been told this needs to happen in a second client-side thread whose run() method uses a do...while(true) loop until the client disconnects.
This all makes sense to me so far, what I am struggling with is the fact that the two threads will have to share the one input stream, and essentially 'ignore' any messages that aren't of the type that they handle. In my head, it should look something like this:
Assuming that every message from server sends a boolean of value true on a message-to-all, and one of value false on a message to an individual client...
Existing Client Thread
//method called from actionPerformed(ActionEvent e)
//handles server response to bid request
public void receiveResponse()
{
    //thread should only process to-specific-client messages
    if (networkInput.nextBoolean() == false)
    {
        //process server response...
    }
}
Second Client-side Thread
//should handle all messages sent to all clients
run()
{
    do {
        if (networkInput.nextBoolean() == true)
        {
            //process broadcasted message...
        }
    } while (true);
}
As they need to use the same input stream, I would obviously be adding some synchronized, wait/notify calls, but generally, is what I'm looking to do here possible? Or will the two threads trying to read in from the same input stream interfere with each other too much?
Please let me know what you think!
Thanks,
Mark
You can do it, though it will be complicated to test and get right. How much is "too much" depends on you. A simpler solution is to have a reader thread pass messages to the two worker threads.
ExecutorService thread1 = Executors.newSingleThreadExecutor();
ExecutorService thread2 = Executors.newSingleThreadExecutor();
while (running) {
    Message message = input.readMessage();
    if (message.isTypeOne())
        thread1.submit(() -> process(message));
    else if (message.isTypeTwo())
        thread2.submit(() -> process(message));
    else {
        // do something else.
    }
}
thread1.shutdown();
thread2.shutdown();
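Here is a self-contained toy version of that reader/dispatcher pattern, with the socket replaced by a fixed list of messages. The Message record and worker names are illustrative only; the point is that one reader loop owns the stream and the two workers only ever see already-read messages:

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DispatchDemo {
    // Stand-in for the messages a single reader thread would pull off the socket.
    record Message(boolean broadcast, String text) {}

    // One reader loop dispatches each message to the worker that owns its type;
    // the workers never touch the input stream themselves.
    static Queue<String> dispatch(List<Message> incoming) throws InterruptedException {
        ExecutorService broadcastWorker = Executors.newSingleThreadExecutor();
        ExecutorService responseWorker = Executors.newSingleThreadExecutor();
        Queue<String> log = new ConcurrentLinkedQueue<>();

        for (Message m : incoming) {
            if (m.broadcast())
                broadcastWorker.submit(() -> log.add("broadcast: " + m.text()));
            else
                responseWorker.submit(() -> log.add("response: " + m.text()));
        }
        broadcastWorker.shutdown();
        responseWorker.shutdown();
        broadcastWorker.awaitTermination(5, TimeUnit.SECONDS);
        responseWorker.awaitTermination(5, TimeUnit.SECONDS);
        return log;
    }

    public static void main(String[] args) throws Exception {
        Queue<String> log = dispatch(List.of(
                new Message(true, "price update"),
                new Message(false, "your bid was accepted")));
        log.forEach(System.out::println);
    }
}
```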

Websockets and Threaded server-side Endpoint

Kind of a "noob" problem here.
I have a small application to write (like a simple game). There is a server side and a client side. It has to use websockets as the means of communication. The server has a server class (with main() that starts the server) as well as a server endpoint class. However, the game is not turn based but real-time based, so the server has to do certain computations every "tick" because of the dynamic field.
I assume that threads would suit well in this case, but I don't know how to combine threads with this kind of server.
As I can see, the only thing that can receive/send messages is the endpoint. If I make it implement Runnable and pause every half second, it won't accept messages during that pause. If I define a different class for that purpose, I have no idea how to start it inside an endpoint and make a way for them to communicate.
Does anyone have any suggestions/info/links/anything that may help?
Thank you in advance.
The server endpoint will continuously receive data from the client side. All you have to do is process that data in some other thread. You can define a different class for that purpose (a thread). This thread class will have two different queues.
In queue - to receive data from the endpoint
Out queue - to send data to the endpoint
(You can use ConcurrentLinkedQueue for that. More help -> How to use ConcurrentLinkedQueue?)
Start this processing thread inside the endpoint. When the endpoint receives data, put it into the in queue. Continuously listen to the out queue and send that data back to the client side.
Endpoint code
@OnMessage
public void onMessage(String message, Session peer) throws IOException {
    processingThread t = new processingThread(peer);
    t.inQueue.add(message);
    t.start();
    String s;
    // listen to the out queue
    while (true) {
        while ((s = t.outQueue.poll()) != null) {
            peer.getBasicRemote().sendText(s);
        }
    }
}
processingThread Code
public class processingThread extends Thread {
    public ConcurrentLinkedQueue<String> inQueue = new ConcurrentLinkedQueue<String>();
    public ConcurrentLinkedQueue<String> outQueue = new ConcurrentLinkedQueue<String>();
    private final Session peer;

    public processingThread(Session peer) {
        this.peer = peer;
    }

    public void run() {
        // listen to the in queue and process
        // after processing, put the result into the out queue
    }
}
Hope this will help :)
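As a rough, runnable illustration of the two-queue idea (the uppercase transform stands in for real game logic, and the busy-poll loop is kept deliberately simple):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueWorkerDemo {
    // Processing thread with in/out queues, as described above.
    static class ProcessingThread extends Thread {
        final ConcurrentLinkedQueue<String> inQueue = new ConcurrentLinkedQueue<>();
        final ConcurrentLinkedQueue<String> outQueue = new ConcurrentLinkedQueue<>();
        volatile boolean running = true;

        public void run() {
            while (running) {
                String msg = inQueue.poll();
                if (msg != null) {
                    outQueue.add(msg.toUpperCase()); // "process" the message
                }
            }
        }
    }

    // Simulates one endpoint round trip: enqueue a message, wait for the reply.
    static String roundTrip(String msg) throws InterruptedException {
        ProcessingThread worker = new ProcessingThread();
        worker.start();
        worker.inQueue.add(msg);            // what onMessage() would do
        String reply;
        while ((reply = worker.outQueue.poll()) == null) {
            Thread.sleep(1);                // the endpoint would sendText(reply) here
        }
        worker.running = false;
        worker.join();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("reply: " + roundTrip("hello"));
    }
}
```

In a real endpoint you would avoid the busy-wait (for example by using a BlockingQueue, or by having the processing thread push replies itself), but the queue handoff is the same.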

Play Framework SSE Closing Chunked Response

I'm trying to implement Server-Side Events server in Play Framework 1.2.5
How can I know if the client called EventSource.close() (or closed its browser window, for example)? This is a simplified piece of server code I'm using:
public class SSE extends Controller {
    public static void updater() {
        response.contentType = "text/event-stream";
        response.encoding = "UTF-8";
        response.status = 200;
        response.chunked = true;
        while (true) {
            Promise<String> promise = Producer.getNextMessage();
            String msg = await(promise);
            response.writeChunk("data: " + msg + "\n\n");
        }
    }
}
Producer should deal with queuing and Promise objects and produce the output, but I need to know when to stop it (i.e. stop filling its queue). I would expect some exception to be thrown by response.writeChunk() if the output stream is closed, but none is thrown.
There's a similar example that does not deal with SSE, but only chunks instead, at http://www.playframework.com/documentation/1.2.5/asynchronous#HTTPresponsestreaming
Since play.mvc.Controller doesn't let me know if the output stream is closed during execution, I solved the problem through the Producer itself:
In Producer.getNextMessage(), the current time is remembered.
In Producer.putMessage(String), the time since the last 'get' is checked. If it's greater than some threshold, we can consider the SSE channel closed.
There's also the class play.libs.F.EventStream, which can be useful within the Producer.
Plus, Producer might not be the right name here, since it's more of a dispatching queue...
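A bare sketch of that timestamp trick might look like this. The threshold value and the isProbablyClosed() name are mine, and a real Producer for Play 1.x would hand back a Promise and manage a queue rather than returning a String:

```java
public class Producer {
    // Threshold is my choice; tune it to how quickly the controller loop polls.
    static final long THRESHOLD_MS = 30_000;

    private long lastGetMillis = System.currentTimeMillis();

    // The controller's await loop calls this; remember when it last asked.
    public synchronized String getNextMessage() {
        lastGetMillis = System.currentTimeMillis();
        // ... in the real Producer: return (a Promise for) the next queued message
        return null;
    }

    // Checked from putMessage(String): if the controller has not asked for a
    // message in a while, assume EventSource.close() happened and stop queuing.
    public synchronized boolean isProbablyClosed() {
        return System.currentTimeMillis() - lastGetMillis > THRESHOLD_MS;
    }
}
```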

Netty client multiple requests

First, I'll explain the situation and the logic that I'm trying to implement:
I have multiple threads; each puts the result of its work, an object called Result, into a queue QueueToSend.
My NettyClient runs in a thread and takes a Result from QueueToSend every 1 millisecond; it should connect to the server and send a message created from that Result.
I also need these connections to be asynchronous. So the NettyHandler needs to know the Result in order to send the right message, process the right result, and then send the response.
So I initialize the NettyClient bootstrap
bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));
and set up the pipeline once when the app starts.
Then, every millisecond I take a Result object from QueueToSend and connect to the server:
ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
ResultConcurrentHashMap.put(future.getChannel().getId(), result);
I decided to use a static ConcurrentHashMap to save every Result object taken from QueueToSend, associated with its channel.
The first problem occurs in NettyHandler's channelConnected method, when I try to take the Result object associated with the channel from ResultConcurrentHashMap.
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel channel = ctx.getPipeline().getChannel();
    Result result = ResultConcurrentHashMap.get(channel.getId());
}
But sometimes result is null (1 time in 50), even though it should be in ResultConcurrentHashMap. I think this happens because the channelConnected event fires before NettyClient runs this code:
ResultConcurrentHashMap.put(future.getChannel().getId(), result);
Maybe it would not happen if I ran NettyServer and NettyClient remotely rather than both on localhost, since it would take more time to establish the connection. But I need a solution for this issue.
Another issue is that I am sending messages every 1 millisecond asynchronously, and I suppose the messages may get mixed up so the server cannot read them properly. If I run them one by one it is OK:
future.getChannel().getCloseFuture().awaitUninterruptibly();
But I need asynchronous sending, and to process the right results associated with each channel and send responses.
What should I implement?
ChannelFutures are executed asynchronously before the events get fired. For example channel connect future will be completed before firing the channel connected event.
So you have to register a channel future listener after calling bootstrap.connect() and write your code in the listener to initialize the HashMap, then it will be visible to the handler.
ChannelFuture channelFuture = bootstrap.connect(remoteAddress, localAddress);
channelFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        // note: result must be (effectively) final to be captured here
        resultConcurrentHashMap.put(future.getChannel().getId(), result);
    }
});
