How do downstream events work in JBoss's Netty?

I just started playing around with Netty while implementing my own server. It took me a while to get the hang of it, but now I'm able to accept clients by writing my own message handler, and inside messageReceived I can read from the buffer and run some business logic on the data received.
However, the question now is: how do I write data to connected clients? I saw the sample code where you write to the channel when a new message arrives, like this:
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    Channel ch = e.getChannel();
    ch.write(e.getMessage());
}
But what if you don't want to write the data back at that point? What if the client stays connected on the socket and waits until some event occurs on the server? In that case, how will my server find the right socket to write to? Am I supposed to keep a reference to the Channel object? Is that the convention?
I looked further into the code and saw a method called writeRequested. Is that related? Who calls that? And is it needed?

As long as you have a reference to the Channel (or ChannelHandlerContext), you can call Channel.write() (or Channels.write()) from anywhere, on any thread.
writeRequested() is called when you trigger the writeRequested event by calling Channel.write() or calling ChannelHandlerContext.sendDownstream(MessageEvent).
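A minimal sketch of that convention, assuming the Netty 3.x (org.jboss.netty) API used in the question; the client-id map and the pushTo() helper are illustrative, not part of Netty:

import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.*;
import java.nio.charset.Charset;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PushHandler extends SimpleChannelUpstreamHandler {
    // clientId -> Channel; how you key the map is up to your application
    private static final Map<Integer, Channel> clients = new ConcurrentHashMap<Integer, Channel>();

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        clients.put(e.getChannel().getId(), e.getChannel());
    }

    @Override
    public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        clients.remove(e.getChannel().getId());
    }

    // Call this from any other thread when the server-side event occurs
    public static void pushTo(Integer clientId, String text) {
        Channel ch = clients.get(clientId);
        if (ch != null && ch.isWritable()) {
            ch.write(ChannelBuffers.copiedBuffer(text, Charset.forName("UTF-8")));
        }
    }
}

The write is asynchronous; if you need to know when it completed, attach a listener to the ChannelFuture that write() returns.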

Related

Netty - writeAndFlush and message ordering

I'm trying to implement a distributed actor model that uses Netty as the communication layer - the NIO version with TCP connections.
Let's say we have 2 nodes (machines), each with a Netty server instance that passes incoming messages to actors on that node.
I would like to keep message ordering per pair of remote actors, so my solution was to use the asynchronous writeAndFlush method to send messages to the remote node and actor - when another message needs to be sent to the same actor before the first one has been delivered, I add it to a buffer and, in the callback of the first writeAndFlush, process the next one from the buffer. It looks like this:
channel.writeAndFlush(message).addListener(new MessageListener(mailboxOfSelector));
the callback method is:
@Override
public void operationComplete(ChannelFuture future) throws Exception {
    Queue<RemoteMessage> unsentToMailbox = unsentMessages.get(mailboxOfSelector);
    if (!unsentToMailbox.isEmpty()) {
        RemoteMessage message = unsentToMailbox.poll();
        channel.writeAndFlush(message).addListener(this);
    }
}
So if A and B are two server instances connected by a Channel and we send from A -> B, my question is: what does the isSuccess flag really mean, and when does the callback actually fire?
Does it fire when the last handler on A has finished, or when the message has actually been delivered to the first handler on B?
In Netty 5 (alpha2), after flushing the data to the socket channel, Netty calls back the operationComplete method. This does not mean the data has reached the client; it means the data has been handed to the TCP protocol stack. You can see this in the source code of io.netty.channel.ChannelOutboundBuffer: it calls promise.trySuccess() from remove() (or the overload that takes a failure cause), which triggers the operationComplete() callback.
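For illustration, a minimal listener sketch (against the io.netty API the question uses) that only distinguishes "handed to the local TCP send buffer" from "write failed"; any end-to-end delivery guarantee beyond that needs an application-level acknowledgement:

channel.writeAndFlush(message).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            // bytes were accepted by the local socket; the remote side may not have read them yet
        } else {
            future.cause().printStackTrace(); // e.g. ClosedChannelException if the peer went away
        }
    }
});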

Using netty with 3rd party blocking API

I am using a 3rd party blocking API. I am going to be using this API as follows:
while (true) {
    blockingAPI();
    sendResultSomewhere();
}
blockingAPI() polls a server for a specific property until it gets a response.
To make things asynchronous to some extent, I could run this API call in a separate thread and have a callback implemented in Java to handle the response. I was wondering whether I can use the Netty framework in this scenario, and how I could do this. The examples I have seen involve a server that listens for and communicates with a client, and I am not sure how my use case fits in.
If Netty cannot be used, would my best bet be to spawn a new thread and implement a callback in Java?
Not sure what you really want to do:
- Spawn a new thread internally: you could use a LocalChannel with Netty to get intra-JVM communication and therefore something like what you want, without any network involved (only within the JVM). blockingAPI() would be executed on the ServerLocalChannel side, and the result written out once the client gets back a response through the same LocalChannel.
- Spawn in response to a request from outside (the network): Netty can of course be used there too, perhaps still keeping the LocalChannel logic to separate the network side from the computation.
Note that I would recommend executing the blocking task asynchronously through the LocalChannel, so that the "send result somewhere" is done without blocking Netty's network I/O thread.
Network Handler side:
localChannel = creationWithinNetworkHandler(networkChannelCtx);
localChannel.writeAndFlush(something);
while the LocalChannel handler on the server side could be:
void channelRead0(ChannelHandlerContext ctx, SomeData someData) {
    blockingAPI();
    ctx.channel().writeAndFlush(answer).addListener(ChannelFutureListener.CLOSE);
}
and the LocalChannel handler on the client side could be:
void channelRead0(ChannelHandlerContext ctx, Answer answer) {
    // using the ctx from the network channel side
    networkCtx.writeAndFlush(answer);
}
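For the plain "spawn a thread plus callback" route the question also asks about, here is a minimal sketch without Netty; blockingAPI() and sendResultSomewhere() are the placeholders from the question, and the single-thread executor keeps the blocking call off any I/O thread:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingPoller {
    // dedicated thread so the blocking call never ties up other threads
    private final ExecutorService blockingPool = Executors.newSingleThreadExecutor();

    public void pollLoop() {
        CompletableFuture
            .supplyAsync(this::blockingAPI, blockingPool)   // runs on the dedicated thread
            .thenAccept(this::sendResultSomewhere)          // callback invoked with the result
            .thenRun(this::pollLoop);                       // schedule the next poll
    }

    private String blockingAPI() { /* 3rd-party blocking call goes here */ return "result"; }

    private void sendResultSomewhere(String result) { /* e.g. write to a Netty channel */ }
}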

Handle multiple clients with one server

I'm trying to connect to multiple clients using sockets in Java. Everything seems to work, but the problem is that the server only listens to the first client. If there are multiple clients, the server can send messages to all of them, but it only hears the messages that come from the first client. I've tried everything I can think of (I've been at this problem since yesterday), so I'm pretty sure the fault is in the class "ClientListener".
Explanation:
There is a List of clients (connections used to exchange Strings). In the GUI there is a list where I can choose which client I'd like to communicate with. If I change the client, the variable currentClient (an int) switches to another number.
networkClients is an ArrayList where all the different connections are stored.
The first connected client is exactly the same as the other clients; there is nothing special about it. It is addressed when the variable currentClient is set to 0 (the default). The variable switching works. Like I said, all the clients give me a response if I send them an order, but only networkClients.get(0) is heard by the server (ClientListener).
class ClientListener implements Runnable {
    String request;

    @Override
    public void run() {
        try {
            while (networkClients.size() < 1) {
                Thread.sleep(1000);
            }
            // ***I'm pretty sure the problem is in this line
            while ((request = networkClients.get(currentClient).getCommunicationReader().readLine()) != null) {
            // ***
                myFileList.add(new MyFile(request));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I hope someone can help me. I tried many things, but nothing worked.
EDIT: Like I wrote in the code example, is it possible that the while loop isn't able to pick up a change of currentClient (which is changed by another thread)? I tested/simulated something similar in a test class, and the result was that a while loop can of course see updated state (meaning that if a variable used in the condition of a while loop changes, it is re-checked on every iteration).
You should take a look at multithreading.
Your server program should be made out of:
- The main thread
- A thread that handles new connections (upon accepting a new connection, start a new thread and pass the connection on to it)
- A thread for each connected client, listening to each client separately
Take a look at some examples like: (1) (2)
I found the solution:
The thread sits in the method I showed in the starting post (in the code snippet) and waits indefinitely for a new response from the client.
So changing the index into the list networkClients won't do anything, because nothing happens there until the currently watched client sends a new order (which lets the thread continue).
So you need to implement a separate listener for each client, as sketched below.
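A minimal sketch of that per-client listener: start one thread per accepted connection so that no client's readLine() can starve the others. ClientConnection, getCommunicationReader() and MyFile mirror the names used in the question and are assumptions here:

void startListenerFor(final ClientConnection client) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                String request;
                while ((request = client.getCommunicationReader().readLine()) != null) {
                    myFileList.add(new MyFile(request)); // same handling as before
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }).start();
}
// call startListenerFor(newClient) right after accepting the connection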

Async NIO: Same client sending multiple messages to Server

Regarding Java NIO2.
Suppose we have the following to listen to client requests...
asyncServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Object>() {
    @Override
    public void completed(final AsynchronousSocketChannel asyncSocketChannel, Object attachment) {
        // Put the execution of the completion handler on another thread so that
        // we don't block another channel being accepted.
        executer.submit(new Runnable() {
            public void run() {
                handle(asyncSocketChannel);
            }
        });
        // Accept another connection.
        asyncServerSocketChannel.accept(null, this);
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        // TODO Auto-generated method stub
    }
});
This code will accept a client connection, process it, and then accept another.
To communicate with the server, the client opens an AsynchronousSocketChannel and fires the message.
The completion handler's completed() method is then invoked.
However, this means that if the client wants to send another message on the same AsynchronousSocketChannel instance, it can't.
It has to create another AsynchronousSocketChannel instance - which I believe means another TCP connection - which is a performance hit.
Any ideas how to get around this?
Or to put the question another way, any ideas how to make the same asyncSocketChannel receive multiple CompletionHandler completed() events?
edit:
My handling code is like this...
public void handle(AsynchronousSocketChannel asyncSocketChannel) {
    ByteBuffer readBuffer = ByteBuffer.allocate(100);
    try {
        // read a message from the client, timeout after 10 seconds
        Future<Integer> futureReadResult = asyncSocketChannel.read(readBuffer);
        futureReadResult.get(10, TimeUnit.SECONDS);
        String receivedMessage = new String(readBuffer.array());
        // some logic based on the message here...
        // after the logic is a return message to client
        ByteBuffer returnMessage = ByteBuffer.wrap((RESPONSE_FINISHED_REQUEST + " " + client
                + ", " + RESPONSE_COUNTER_EQUALS + value).getBytes());
        Future<Integer> futureWriteResult = asyncSocketChannel.write(returnMessage);
        futureWriteResult.get(10, TimeUnit.SECONDS);
    } ...
So that's it: my server reads a message from the async channel and returns an answer.
The client blocks until it gets the answer, but that's OK; I don't care if the client blocks.
When this is finished, the client tries to send another message on the same async channel and it doesn't work.
There are 2 phases of a connection and 2 different kinds of completion handlers.
The first phase is handling a connection request; this is what you have programmed (BTW, as Jonas said, there is no need to use another executor). The second phase (which can be repeated multiple times) is issuing an I/O request and handling its completion. For this, you have to supply a memory buffer holding the data to read or write, and you did not show any code for that. Once you implement the second phase, you'll see that the problem you describe ("if the client wants to send another message on the same AsyncSocket instance it can't") does not exist.
One problem with NIO2 is that, on the one hand, the programmer has to avoid issuing multiple async operations of the same kind (accept, read, or write) on the same channel (or an error occurs), and on the other hand, the programmer has to avoid blocking waits in handlers. This problem is solved in the df4j-nio2 subproject of the df4j actor framework, where both AsyncServerSocketChannel and AsyncSocketChannel are represented as actors. (df4j is developed by me.)
First, you should not use an executor like you do in the completed method; the completed method is already invoked on a worker thread.
In your completed method for accept(...), you should call asyncSocketChannel.read(...) to read the data. The client can then just send another message on the same socket; that message will be handled by a new call to the read handler's completed method, perhaps on another worker thread of your server.
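A minimal sketch of such a read loop, replacing the Future-based body of handle(asyncSocketChannel); re-issuing the read from completed() is what makes the same channel deliver multiple completion events:

final ByteBuffer readBuffer = ByteBuffer.allocate(100);
asyncSocketChannel.read(readBuffer, null, new CompletionHandler<Integer, Object>() {
    @Override
    public void completed(Integer bytesRead, Object attachment) {
        if (bytesRead < 0) {          // the client closed the connection
            return;
        }
        readBuffer.flip();
        String receivedMessage = new String(readBuffer.array(), 0, readBuffer.limit());
        // ... handle the message, optionally asyncSocketChannel.write(...) a response ...
        readBuffer.clear();
        asyncSocketChannel.read(readBuffer, null, this); // wait for the next message
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        exc.printStackTrace();
    }
});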

Design (classes, methods, interfaces) of real-time applications (server/client)

IĀ“ve been looking for a good book or article about this topic but didnt find much. I didnt find a good example - piece of code - for a specific scenario. Like clients/server conversation.
In my applicationĀ“s protocol they have to send/recieve messages. Like:
Server want to send a file to a client
Client can accpet or no,
if he accepts, server will send bytes over the same connection/socket.
The rest of my application all uses blocking methods, server has a method
Heres what I did:
Server method:
public synchronized void sendFile(File file)
{
    // send a message asking if I can send a file
    // block on read, waiting for the client's response
    // if the client answers yes, start sending the bytes
    // else return
}
Client methods:
public void receiveCommand()
{
    // read/listen for a command from the socket
    // if it is a send-file command, call handleSendFileCommand();
    // after handleSendFileCommand() returns, listen for another command
}

public void handleSendFileCommand()
{
    // get the file the server wants to send
    // check if the client already has the file
    // if it already has it, send a command saying so and return
    // else send a command saying the server may send the file
    // create a FileInputStream, receive the bytes and then return
}
I am 100% sure this is wrong, because there is no way the server and clients can talk bidirectionally like this. I mean, when the server wants to send a command to a client, they have to follow an ordered sequence of commands until that conversation is finished; only then can they send/receive another sequence of commands. That's why I made all methods that send requests synchronized.
It didn't take me long to realize I need to study design patterns for this kind of application...
I read about the Chain of Responsibility design pattern, but I don't get how I can use it, or another suitable design pattern, in this situation.
I hope someone can help me, ideally with a code example.
Thanks in advance
The synchronized keyword in Java means something completely different - it marks a method or a code block as a critical section that only a single thread can execute at a time. You don't need it here.
Then, a TCP connection is bi-directional at the byte-stream level. The synchronization between the server and a client is driven by the messages exchanged. Think of a client (pretty much the same applies to the server) as a state machine: some types of messages are acceptable in the current state, some are not, and some switch the node into a different state.
Since you are looking into design patterns, the State pattern is very applicable here.
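A minimal sketch of the client side as such a state machine; the states and command names are illustrative, not from any library:

enum ClientState { IDLE, RECEIVING_FILE }

class ClientProtocol {
    private ClientState state = ClientState.IDLE;

    void onCommand(String command) {
        switch (state) {
            case IDLE:
                if (command.startsWith("SEND_FILE")) {
                    // decide whether we already have the file and answer ACCEPT or REJECT;
                    // only on ACCEPT do we move to the next state
                    state = ClientState.RECEIVING_FILE;
                }
                break;
            case RECEIVING_FILE:
                // consume the file bytes; when the transfer is complete:
                state = ClientState.IDLE;
                break;
        }
    }
}

Each state only accepts the commands that are legal in it, which is exactly the ordering constraint the synchronized methods were trying to enforce.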
