MINA WriteFuture returns written=true despite failing - Java

This is the code
WriteFuture writeFuture = session.write(message);
writeFuture.addListener(this);
writeFuture.awaitUninterruptibly();
sentMessage = writeFuture.isWritten();
Before sending a message, I disconnect the server from the network (pull the cable) so that the message cannot possibly be sent. However, sentMessage is true anyway. In Wireshark's output you can see three TCP retransmissions (and obviously no ACKs). After a few more messages (not the same message as the first) it realizes the link is down and returns false.
I thought isWritten() told you whether the packet was successfully sent, but apparently this is not so. How do I know if the packet has arrived? I tried MINA versions 2.0.7 and 2.0.4.

Write success is declared as soon as the message is pushed to the kernel's socket buffer.
This is how sockets work: from the socket API you can't know when the TCP data is actually delivered or ACKed.
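If you need to know that the message actually arrived, the usual approach is an application-level acknowledgement from the peer. The sketch below only illustrates that idea and is not MINA functionality: the AckTrackingHandler class, the sendAndConfirm helper and the "ACK:&lt;id&gt;" reply format are all hypothetical and assume the server is changed to send such an acknowledgement back.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

public class AckTrackingHandler extends IoHandlerAdapter {
    private final ConcurrentHashMap<String, CountDownLatch> pending = new ConcurrentHashMap<>();

    /** Send a message and wait (up to a timeout) for the server's application-level ACK. */
    public boolean sendAndConfirm(IoSession session, String id, Object message, long timeoutMs)
            throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        pending.put(id, latch);
        session.write(message); // isWritten() would only mean "handed to the kernel"
        return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    @Override
    public void messageReceived(IoSession session, Object message) {
        String text = message.toString();
        if (text.startsWith("ACK:")) {
            CountDownLatch latch = pending.remove(text.substring(4));
            if (latch != null) {
                latch.countDown(); // delivery confirmed by the peer itself
            }
        }
    }
}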

Indefinite stall of TCP packet reception

Deployment environment:
I have created a TCP server in Java running on Windows 10. My TCP client program is written in VC++ and runs on Windows 7 (I don't have any control over this part of the code; it is a black box to me).
My TCP server code is like this:
Socket s = ss.accept();
s.setReceiveBufferSize(2000);
s.setSendBufferSize(2000);
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
Following is the TCP connection handler snippet:
InputStream incomingPacketBuffer = this.clientSocket.getInputStream();
OutputStream outgoingPacketBuffer = this.clientSocket.getOutputStream();
int bufferLen = 0;
byte inBuffer[] = new byte[this.clientSocket.getReceiveBufferSize()];
byte outBuffer[] = new byte[this.clientSocket.getSendBufferSize()];
while (this.clientSocket.isConnected())
{
    bufferLen = incomingPacketBuffer.read(inBuffer);
    if (bufferLen > 0)
    {
        outBuffer = (byte[]) this.packetHandlerModule.invoke(this.packetHandler, Arrays.copyOf(inBuffer, bufferLen));
    }
    if (outBuffer != null)
    {
        if (this.clientSocket.isConnected())
        {
            outgoingPacketBuffer.write(outBuffer);
            outgoingPacketBuffer.flush();
        }
    }
}
this.clientSocket.close();
The communication is packet based and the protocol/parsing is handled by packetHandler.
Two more variants I've tried:
I have tried to close the socket as and when a reply is sent back to the client. That is, after receiving one packet of data, I reply to the client and close the connection.
I used inputStream.available before using the read method.
The problem I face:
Most of the time the TCP server replies to incoming packets within a second. If the server receives a packet after some idle time, it doesn't reply to that packet. Sometimes the reply is not transmitted even when active communication is going on. Secondly, the isConnected method returns true even after the client has closed the connection.
Debugging attempts:
I used Tera Term to send packets and checked it; the behavior is the same. As long as I send packets one after another, I don't have an issue. Once one packet doesn't get a reply, every packet sent after that gets no reply from the server either.
When I press Ctrl+C in the server console, all the packets sent from Tera Term are processed by the TCP server and replies are sent back. After this the server works properly for a while.
I checked the packet flow with Wireshark. When the replies are sent back normally, each reply is sent along with the ACK of the client request (SYN, SYN+ACK, ACK, PSH, PSH+ACK, FIN, FIN+ACK, ACK). When the reply gets stalled (may not be the right term; it is stuck in inputStream.available or inputStream.read), only an ACK packet is sent by the server (SYN, SYN+ACK, ACK, PSH, ACK).
I checked many forums and other threads on Stack Exchange and learned about Nagle's algorithm, that the application must take care of packetization in TCP, and that two 10-byte writes may arrive as 8+12 or 15+5 bytes or in any such manner. The server code takes care of packetization, and setKeepAlive is set to true (there is no problem when a packet is sent from the server).
Problem in short: "At times, the TCP read call is blocked for a long duration even when there are incoming packets. When Ctrl+C is pressed, they get processed."
PS: I just started posting questions on Stack Exchange, so kindly let me know if there are any issues with the way I've formulated the question.
PPS: Sorry for such a long post.
UPDATE
The comment from EJB helped me identify the peer disconnect.
I made another setup with Ubuntu 16.04 as the server operating system. It has been 3 days: the Windows system had the issue occasionally, while Ubuntu 16.04 never stalled.
Some things to consider:
The TCP buffer sizes are usually at least 8 KB, and I don't think you can shrink them to 2000 bytes; even if you can, I don't think it's a good idea.
The size of the byte[] doesn't really matter much above about 2 KB, so you may as well pick a value.
You don't need to create the buffer more than once.
So in short I would try:
Socket s = ss.accept();
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s, this.packetHandler);
and
try {
    InputStream in = this.clientSocket.getInputStream();
    OutputStream out = this.clientSocket.getOutputStream();
    int bufferLen = 0;
    byte[] buffer = new byte[2048];
    while ((bufferLen = in.read(buffer)) > 0) {
        out.write(buffer, 0, bufferLen); // not buffered so no need to flush
    }
} finally {
    this.clientSocket.close();
}
At times, the TCP read call is blocked for a long duration even when there are incoming packets.
I would write a test Java client to check that this is not due to behaviour in Java.
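A bare-bones test client along these lines would do; the host, port and payload below are placeholders for your own setup, and the point is only to exercise the server with plain java.net sockets instead of the VC++ client.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TestClient {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port/payload; replace with your server's values.
        try (Socket socket = new Socket("127.0.0.1", 5000)) {
            socket.setTcpNoDelay(true);
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            out.write("test-packet".getBytes(StandardCharsets.US_ASCII));
            out.flush();

            byte[] reply = new byte[2048];
            int n = in.read(reply); // blocks until the server replies or closes
            System.out.println("got " + n + " bytes: "
                    + new String(reply, 0, Math.max(n, 0), StandardCharsets.US_ASCII));
        }
    }
}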

JMS: how to know a subscriber is not alive anymore

I have a distributed system application that uses JBoss as an application server. I have a client application that serves as a simulation engine. When a client starts, it sends a registration message (a JMS message) to the server, and a field is set in the database. When the server is up, it sends a message (on a topic) to all clients to check that they are alive. Clients that are alive read the message and send a response to the server (on a queue) saying that they are alive.
If the user closes the client normally, the client sends a message to the server saying it will unregister, and the server unregisters it. This is done on the database side.
If the user closes the client abnormally (kills it), the client cannot send the unregistration message to the server, so the server does not know that this client is not alive anymore. This causes inconsistency in my application, so I need a way to detect that a client subscribed to a topic is not subscribed anymore.
The server sends a message to the topic to check that clients are alive:
@Schedule(hour = "*", minute = "*", second = "30", persistent = false)
public void sendNodeStatusRequest() {
    Message msg = MessageFactory.createStatusRequestMessage();
    publishNodeMessage(msg);
}
After a while, the server shows the following log entries. Could I catch this warning from Java?
07:17:00,698 WARN [org.hornetq.core.protocol.core.impl.RemotingConnectionImpl] Connection failure
has been detected: Did not receive ping from /127.0.0.1:61888. It is likely
the client has exited or crashed without closing its connection, or the
network between the server and client has failed. The connection will now be closed. [code=3]
07:17:00,698 WARN [org.hornetq.core.server.impl.ServerSessionImpl] Client
connection failed, clearing up resources for session 4e4e9dc6-153e-11e7-
80fa-742b62812c29
To me the whole point of a messaging system is decoupled communication. The sender (the server in your case) sends its messages to the topic without actually knowing who will get them. Clients come and go, and they should be able to read a message for as long as it (still) resides in the topic.
Now from your question I understand that the server keeps track of all the connected clients by receiving the response messages back on a dedicated queue.
So I'm asking myself whether there is something wrong with the design here.
Let me propose a slightly different implementation.
The server should not be aware of any particular client; at most (because your system seems to work this way) it should know that clients A, B and C are alive right now, only because those clients passed that knowledge to the server.
Why not just have the clients send a "keep-alive" message, say every minute (or less, depending on your needs), to the server queue without any prior message from the server?
The message can include some client identifier, and probably a timestamp if one is not already added by the infrastructure.
The server will just consume these messages and keep an in-memory list of available clients along with the last time each of them sent something.
If some client disconnects "gracefully", it can send a special message to the server like "I'm client A, consider me disconnected". Otherwise (abnormal termination, network outage, whatever) it just won't send anything; the server will have a periodic process that checks whether there are stale clients on the list, and if it finds them it knows that something went wrong. A small sketch of that bookkeeping follows below.
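A minimal sketch of that server-side bookkeeping, assuming the keep-alive and disconnect messages have already been parsed into a client id (the class and method names here are illustrative, not part of any JMS API):

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClientRegistry {
    private final Map<String, Instant> lastSeen = new ConcurrentHashMap<>();
    private final Duration staleAfter = Duration.ofMinutes(3); // tolerate a few missed keep-alives

    /** Called whenever a keep-alive message arrives from a client. */
    public void onKeepAlive(String clientId) {
        lastSeen.put(clientId, Instant.now());
    }

    /** Called when a client announces a graceful disconnect. */
    public void onDisconnect(String clientId) {
        lastSeen.remove(clientId);
    }

    /** Run periodically (e.g. from a scheduled method) to find crashed clients. */
    public void sweepStaleClients() {
        Instant cutoff = Instant.now().minus(staleAfter);
        lastSeen.entrySet().removeIf(entry -> {
            if (entry.getValue().isBefore(cutoff)) {
                System.out.println("Client " + entry.getKey() + " considered dead");
                // unregister it in the database here
                return true;
            }
            return false;
        });
    }
}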
If you still want to stick with the JMS way of doing this, you can try to send the message synchronously, meaning the producer will wait until it hears back from the consumer. More information here: http://docs.oracle.com/javaee/6/tutorial/doc/bncfa.html
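For the synchronous request/reply pattern in general, the standard JMS helper is javax.jms.QueueRequestor (TopicRequestor exists for topics), which sends a message and blocks until a reply arrives on a temporary destination. A rough sketch, assuming you already have a connection factory and a status queue configured; the JNDI names are placeholders:

import javax.jms.*;
import javax.naming.InitialContext;

public class StatusRequester {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Placeholder JNDI names; use whatever your JBoss configuration defines.
        QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue statusQueue = (Queue) ctx.lookup("jms/StatusQueue");

        QueueConnection connection = factory.createQueueConnection();
        connection.start();
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        QueueRequestor requestor = new QueueRequestor(session, statusQueue);
        TextMessage request = session.createTextMessage("are-you-alive?");
        Message reply = requestor.request(request); // blocks until a consumer answers
        System.out.println("Reply: " + reply);

        connection.close();
    }
}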

Java TCP packets via HTTP proxy

I am sending TCP packets of just a few bytes each (one line of text or so). I am sending them to a remote server via an HTTP proxy; however, for some reason, when the connection to the proxy is slow or interrupted, only a fragment of the packet arrives at the server rather than the entire packet, and this causes exceptions on the server side. How is that possible? Is there any way, on the client side, to prevent sending a fragment of the packet instead of the entire packet?
Example: I am trying to send this packet:
packetHead: id (1-99)
integer: 1
short: 0
byte: 4
In my case it sometimes happens that only the packetHead and the integer arrive at the server, and the rest of the packet is lost somewhere when the connection to the proxy is bad.
I have no access to modify server source code so I need to fix it on the client side.
Thanks for any tips.
Please show how you send your data. Every time I had a similar problem, it was my fault for not flushing the stream. In particular, if the stream is compressed you need to call finish/close on the GZIPOutputStream or similar object to actually send out everything.
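As an illustration of the flushing point, here is a minimal sketch of how the client side might send such a record; the field values and the host/port are placeholders, and the essential part is flushing the buffered stream (and calling finish()/close() if a GZIPOutputStream were involved) so nothing is left sitting in a client-side buffer:

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class PacketSender {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; in practice the connection would go through the HTTP proxy.
        try (Socket socket = new Socket("proxy.example.com", 8080);
             DataOutputStream out = new DataOutputStream(
                     new BufferedOutputStream(socket.getOutputStream()))) {
            out.writeByte(42);   // packetHead: id (1-99)
            out.writeInt(1);     // integer
            out.writeShort(0);   // short
            out.writeByte(4);    // byte
            out.flush();         // push the whole record out of the buffered stream at once
        }
    }
}

Note that even with proper flushing, TCP is a byte stream with no message boundaries, so a receiver generally has to be prepared to read until it has accumulated a complete record.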

Heart-beating in STOMP client

The design of my current STOMP client process is as follows:
1. Open a STOMP connection (send a CONNECT frame)
2. Subscribe to a feed (send a SUBSCRIBE frame)
3. Loop to continually receive the feed:
while (true) {
    connection.begin("txt1");
    StompFrame message = connection.receive();
    System.out.println("message get header" + message.toString());
    LOG.info(message.getBody());
    connection.ack(message, "txt1");
    connection.commit("txt1");
}
My problem with this process is that I get
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)...
and I think the cause of this is mostly that the feed I am subscribed to delivers data more slowly at certain times (I normally get this error when the weekend comes, on holidays, or in the evenings).
I have been reading up on this here and I think it would help with my problem. However, I'm not so sure how to incorporate it into the current layout of my STOMP client. Would I have to send a CONNECT header within step 3?
I am currently using ActiveMQ to create my STOMP client, if that helps.
In the STOMP spec we have:
Regarding the heart-beats themselves, any new data received over the
network connection is an indication that the remote end is alive. In a
given direction, if heart-beats are expected every <n> milliseconds:
the sender MUST send new data over the network connection at least every <n> milliseconds
if the sender has no real STOMP frame to send, it MUST send a single newline byte (0x0A)
if, inside a time window of at least <n> milliseconds, the receiver did not receive any new data, it CAN consider the connection as dead
because of timing inaccuracies, the receiver SHOULD be tolerant and take into account an error margin
Would that mean my client would need to send a newline byte every n seconds?
The STOMP server you are connected to has timed out your connection due to inactivity.
Provided the server supports STOMP version 1.1 or newer, the easiest solution for your client is to include a heart-beat instruction in the header of your CONNECT frame, such as "0,10000". This tells the server that you cannot send heart-beats, but that you want it to send one every 10 seconds. This way you don't need to implement them yourself, and the server will keep the connection active by sending them to you.
Of course the server will have its own requirements of the client. In your comment it responds to your request with "1000,0". This indicates that it will send a heart-beat every 1000 milliseconds, and it expects you to send one every 0 milliseconds, 0 indicating none at all. So your job will be minimal.
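For reference, a CONNECT frame carrying that header would look roughly like this (the host value is a placeholder, and the terminating NUL byte is shown as ^@):

CONNECT
accept-version:1.1
host:broker.example.com
heart-beat:0,10000

^@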

UDP socket and multiple replies

I'm a learner, so please be patient and clear. I am writing an echo client with Java sockets (DatagramSocket).
After the client sends a message to the echo server, the server deliberately sends 1-10 copies of the message back to simulate message duplication in UDP.
However, my code can only receive the first of those messages sent back, never the full number sent by the server. My receive code is like this:
socket.receive(receivePacket);
How would I put my client in a state where you can enter a string to echo, say "Hi", have it sent to the server, and then receive all the replies? I am assuming that they all make it back to the client (I am testing this on my local machine, so there will be no loss).
Call socket.receive again to receive additional packets. Set a timeout to wait a reasonable amount of time before deciding the server has sent all its packets.
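A minimal sketch of that receive loop, assuming the echo server runs on localhost; the port and the 500 ms timeout are arbitrary placeholders for "no more replies expected":

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class EchoClient {
    public static void main(String[] args) throws Exception {
        InetAddress server = InetAddress.getLoopbackAddress();
        int port = 9999; // placeholder: use whatever port your echo server listens on

        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] data = "Hi".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(data, data.length, server, port));

            socket.setSoTimeout(500); // stop waiting 500 ms after the last reply
            byte[] buf = new byte[1024];
            while (true) {
                DatagramPacket reply = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(reply); // keep calling receive to collect every duplicate
                } catch (SocketTimeoutException e) {
                    break; // no reply within the timeout: assume the server is done
                }
                System.out.println(new String(reply.getData(), 0, reply.getLength(), StandardCharsets.UTF_8));
            }
        }
    }
}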
