Write timeout while writing bytes to a WebSocket in Apache Tomcat - Java

I have a Java WebSocket web application. The WebSocket endpoint interacts with mobile clients. In one of the use cases, the application needs to write 10 MB or more of bytes to the WebSocket output stream. The following code writes to the output stream:
// blocking (basic) send of the whole payload as one binary message
if (webSocSession.isOpen()) {
    webSocSession.getBasicRemote().sendBinary(byteBuffer);
    byteBuffer.clear();
}
At times I get the following exception when writing to the WebSocket:
IOException writing to web-socket
java.io.IOException: java.net.SocketTimeoutException: Write timeout
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:324)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:259)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendBytes(WsRemoteEndpointImplBase.java:131)
at org.apache.tomcat.websocket.WsRemoteEndpointBasic.sendBinary(WsRemoteEndpointBasic.java:43)
at test.web.websocket.LIMSEndpoint$SocketWorker.writeToWebSocket(LIMSEndpoint.java:1188)
at test.web.websocket.LIMSEndpoint$SocketWorker.run(LIMSEndpoint.java:1127)
Caused by: java.net.SocketTimeoutException: Write timeout
at org.apache.tomcat.util.net.SocketWrapperBase.vectoredOperation(SocketWrapperBase.java:1458)
at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:1376)
at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:1347)
at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.doWrite(WsRemoteEndpointImplServer.java:93)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.writeMessagePart(WsRemoteEndpointImplBase.java:509)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:311)
I tried setting the following send-timeout property to 0 (infinite write timeout), but it does not help. The following Apache Tomcat doc was my reference: https://tomcat.apache.org/tomcat-8.5-doc/web-socket-howto.html
org.apache.tomcat.websocket.BLOCKING_SEND_TIMEOUT
The WebSocket session max idle timeout has also been set to 0 (never time out):
Session.setMaxIdleTimeout(0)
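For reference, this is roughly how both settings from the question could be applied to a session (a sketch; per the Tomcat how-to linked above, the blocking send timeout is a per-session user property whose value must be a Long, and the class/method names here are illustrative):

import javax.websocket.Session;

public final class WebSocketTimeoutConfig {

    /** Apply the timeouts described in the question to a freshly opened session. */
    static void configure(Session session) {
        // Per the Tomcat WebSocket how-to, the blocking send timeout is a
        // per-session user property whose value must be a Long (milliseconds).
        session.getUserProperties().put(
                "org.apache.tomcat.websocket.BLOCKING_SEND_TIMEOUT",
                0L); // the question uses 0, intending an infinite write timeout

        // 0 means the session never times out due to inactivity.
        session.setMaxIdleTimeout(0);
    }
}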
Any help in this regard would be greatly appreciated.

I would suspect that the client's receive buffer fills up and it cuts the connection. I know this happens the other way around, when a client sends large data to an Apache Tomcat server and the server isn't configured with an enlarged binary buffer. See: org.apache.tomcat.websocket.binaryBufferSize
Perhaps you can get around the problem by sending the data in smaller chunks; sendBinary has a partial-message variant (see the sketch after the link below).
https://tomcat.apache.org/tomcat-10.0-doc/websocketapi/index.html?jakarta/websocket/Session.html
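A minimal sketch of chunked sending using the partial-message API RemoteEndpoint.Basic.sendBinary(ByteBuffer, boolean isLast); it uses the javax.websocket package from the question (jakarta.websocket on Tomcat 10), and the 8 KB chunk size plus the class and method names are illustrative assumptions rather than values from the question:

import java.io.IOException;
import java.nio.ByteBuffer;
import javax.websocket.Session;

public final class ChunkedBinarySender {

    private static final int CHUNK_SIZE = 8 * 1024; // illustrative chunk size

    /** Send a large payload as a sequence of partial binary messages. */
    static void sendInChunks(Session session, byte[] payload) throws IOException {
        for (int offset = 0; offset < payload.length; offset += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, payload.length - offset);
            boolean isLast = offset + len >= payload.length;
            ByteBuffer chunk = ByteBuffer.wrap(payload, offset, len);
            // Blocking partial send; the frame is flagged as final only on the last chunk.
            session.getBasicRemote().sendBinary(chunk, isLast);
        }
    }
}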

Related

Java TCP packets via HTTP proxy

I am sending TCP packets, just a few bytes each (one line of text or so). I send them to a remote server via an HTTP proxy; however, for some reason, when the connection to the proxy is slow or interrupted, only a fragment of the packet arrives at the server instead of the whole packet, and this causes exceptions on the server side. How is that possible? Is there any way on the client side to prevent sending a fragment of the packet instead of the entire packet?
Example: I am trying to send this packet:
packetHead: id (1-99)
integer: 1
short: 0
byte: 4
In my case it sometimes happens that only the packetHead and the integer arrive at the server, and the rest of the packet is lost somewhere when the connection to the proxy is bad.
I have no access to the server source code, so I need to fix this on the client side.
Thanks for any tips.
Please show how you send your data. Every time I had a similar problem, it was my fault for not flushing the stream. In particular, if the stream is compressed you need to call close()/finish() on the GZIPOutputStream (or similar object) to actually push everything out.
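For illustration, a minimal sketch of a client write that flushes properly; it assumes the packet fields from the question are written with a DataOutputStream, and the optional GZIP wrapping is only there to show where finish() would be needed:

import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

public final class PacketWriter {

    /** Write one packet and flush it so every buffered byte is handed to the OS. */
    static void writePacket(OutputStream socketOut, boolean compressed,
                            byte packetHead, int intValue, short shortValue, byte byteValue)
            throws IOException {
        OutputStream target = compressed ? new GZIPOutputStream(socketOut) : socketOut;
        DataOutputStream out = new DataOutputStream(target);

        out.writeByte(packetHead);  // packetHead: id (1-99)
        out.writeInt(intValue);     // integer
        out.writeShort(shortValue); // short
        out.writeByte(byteValue);   // byte

        out.flush();
        if (compressed) {
            // finish() emits the GZIP trailer without closing the underlying socket stream
            ((GZIPOutputStream) target).finish();
        }
    }
}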

Stream over HTTP SSL is not flushed

I have a web application running behind nginx. Some pages are accessible via HTTP, some others via HTTPS. I have some "pages" which are really streams, as the application does not close the connection and feeds data as it arrives. The feed looks like this:
TIME1 MESSAGE1
TIME2 MESSAGE2
...
TIMEn MESSAGEn
After each line I write "\n" and then call flush(). Over HTTP it works correctly and my client can listen for new data. However, over HTTPS the client does not receive any data until the connection is closed.
ServletOutputStream stream = applicationModel.getOutputStream();
OutputStreamWriter streamWriter = new OutputStreamWriter(stream);
BufferedWriter writer = new BufferedWriter(streamWriter);
while (true) {
    wait();                   // block until a new message is available
    writer.write(newMessage); // write "TIME MESSAGE\n"
    writer.flush();           // flush the application-side buffers
}
Unless the application is tightly integrated with the web server, a flush on the writer will only flush the buffers inside your application, so that the data gets sent to the web server. Inside the web server there are more buffers, which are necessary to optimize the traffic by sending larger TCP packets and thus decrease the per-byte overhead. If you use SSL, there is yet another layer to watch, because your data will be encapsulated into an SSL frame, which again adds overhead, so it is better not to have only a few bytes of payload inside each frame. Finally, there is buffering in the OS kernel, which might defer sending a small TCP packet for some time in the hope that more data will follow.
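From the application's side, the most you can usually do is also push the data through the servlet container's own response buffer; a minimal sketch, assuming an HttpServletResponse is available (the class and method names here are illustrative, and buffering done by nginx or the SSL layer is outside the application's control):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;

public final class FeedPusher {

    /** Write one feed line and ask the servlet container to flush its buffer too. */
    static void pushLine(HttpServletResponse response, String line) throws IOException {
        PrintWriter writer = response.getWriter();
        writer.write(line);
        writer.write('\n');
        writer.flush();          // flush the writer's own buffer
        response.flushBuffer();  // flush the container's response buffer to the client
    }
}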
Please be aware that your wish to control the buffers goes against a fundamental design concept of HTTP. HTTP is based on the idea that the client sends a request to the server and the server returns a response, ideally with a known content length up front. The original design has no notion of a response that evolves slowly and where the browser updates the display as new data arrives. The proper way to get updates would instead be to let the client send another request and then return a new response. Another way would be to use WebSockets.

Heart-beating in STOMP client

The design of my current stomp client process is as follows:
Open stomp connection (sending CONNECT frame)
Subscribe to a feed (send a SUBSCRIBE frame)
Loop to continually receive the feed:
while (true) {
    connection.begin("txt1");                  // start a transaction
    StompFrame message = connection.receive(); // blocks until a frame arrives (subject to the socket read timeout)
    System.out.println("message get header" + message.toString());
    LOG.info(message.getBody());
    connection.ack(message, "txt1");           // acknowledge within the transaction
    connection.commit("txt1");
}
My problem with this process is that I get
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)...
and I think the cause is that the feed I am subscribed to delivers information more slowly at certain times (I normally get this error on weekends, holidays, or evenings).
I have been reading up on this here and I think it would help with my problem. However, I'm not sure how to incorporate it into the current layout of my STOMP client. Would I have to send a CONNECT header within step 3?
I am currently using ActiveMQ to create my STOMP client, if that helps.
In the STOMP spec we have:
Regarding the heart-beats themselves, any new data received over the network connection is an indication that the remote end is alive. In a given direction, if heart-beats are expected every <n> milliseconds:
the sender MUST send new data over the network connection at least every <n> milliseconds
if the sender has no real STOMP frame to send, it MUST send a single newline byte (0x0A)
if, inside a time window of at least <n> milliseconds, the receiver did not receive any new data, it CAN consider the connection as dead
because of timing inaccuracies, the receiver SHOULD be tolerant and take into account an error margin
Would that mean my client would need to send a newline byte every n seconds?
The STOMP server you are connected to has timed out your connection due to inactivity.
Provided the server supports STOMP version 1.1 or newer, the easiest solution for your client is to include a heart-beat instruction in the header of your CONNECT frame, such as "0,10000". This tells the server that you cannot send heart-beats, but you want it to send one every 10 seconds. This way you don't need to implement them yourself, and the server will keep the connection active by sending them to you.
Of course the server will have its own requirements of the client. In your comment it responds to your request with "1000,0". This indicates that it will send a heart-beat every 1000 milliseconds, and it expects you to send one every 0 milliseconds, 0 indicating none at all. So your job will be minimal.
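With ActiveMQ's StompConnection, supplying the heart-beat header on CONNECT could look roughly like this (a sketch; the connect(HashMap) overload and the exact header values are assumptions to check against your ActiveMQ version):

import java.util.HashMap;
import org.apache.activemq.transport.stomp.StompConnection;

public final class HeartbeatingStompClient {

    /** Open a STOMP connection asking the server to send heart-beats every 10 seconds. */
    static StompConnection connect(String host, int port) throws Exception {
        StompConnection connection = new StompConnection();
        connection.open(host, port);

        HashMap<String, String> headers = new HashMap<>();
        headers.put("accept-version", "1.1");
        headers.put("host", host);
        // "0,10000": we will not send heart-beats, but we expect one from the server
        // at least every 10000 ms, which keeps the connection from idling out.
        headers.put("heart-beat", "0,10000");

        connection.connect(headers);
        return connection;
    }
}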

Apache Mina - Multiple small writes to client

I built a TCP server based on Apache MINA 2.0.4 and have some problems writing back to the client.
We have some TCP clients that can handle only one message at a time, with a buffer size of 256 bytes max. When I send two or more messages (each < 256 bytes) to the client, they arrive in one or two big blocks that the client can't handle, instead of as separate messages.
I tried sessionConfig.setTcpNoDelay(true/false) with no success, as well as sessionConfig.setSendBufferSize(256).
In the message response encoder I also tried to flush the output:
int capacity = 256;
IoBuffer buffer = IoBuffer.allocate(capacity, false);
buffer.setAutoExpand(false);
buffer.setAutoShrink(true);
buffer.putShort(type);
buffer.putShort(length);
buffer.put(gmtpMsg.getMessage().getBytes());
buffer.flip();
out.write(buffer);
out.flush();
And in the thread responsible for sending the messages, I tried to wait for each message to be written:
for (Entry<Long, OutgoingMessage> outgoingMsg : outgoingMsgs.entrySet()) {
WriteFuture future = session.write(outgoingMsg.getValue());
future.awaitUninterruptibly();
}
All of this fails miserably, and the only thing that works is a ridiculous 500 ms sleep between session writes, which is hardly acceptable.
Does anyone see what I am doing wrong?
After reading a bit more about the TCP protocol, and especially https://stackoverflow.com/a/6614586/1280034, it is clear that the problem is on the client side, which does not handle the packets correctly.
Since we can't rebuild the clients, my only solution is to delay each outgoing message by approximately 500 ms. To do so I created an extra queue in charge of writing to the clients, so the server can continue its normal job.
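A minimal sketch of such a delayed-write queue (the 500 ms spacing comes from the answer above; the class and method names and the single-threaded scheduler are illustrative assumptions, not the actual implementation):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.mina.core.session.IoSession;

public final class DelayedSessionWriter {

    private static final long SPACING_MS = 500; // spacing between writes, per the answer

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicLong nextSlotMs = new AtomicLong(0);

    /** Queue a message so that consecutive writes to the session are at least 500 ms apart. */
    public void enqueue(IoSession session, Object message) {
        long now = System.currentTimeMillis();
        long slot = nextSlotMs.updateAndGet(prev -> Math.max(prev + SPACING_MS, now));
        scheduler.schedule(() -> { session.write(message); }, slot - now, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}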

java.net.SocketTimeoutException: Read timed out

I have an application with a client-server architecture. The client uses Java Web Start with Java Swing/AWT, and the server uses an HTTP server/servlet with Tomcat.
The communication is based on object serialization: the client creates an ObjectOutputStream, serializes the parameters into a byte array and sends it to the server, where an ObjectInputStream reads and deserializes them.
The application communicates correctly up to a certain level of concurrency, at which point "SocketException read timeout" errors start to appear. The error happens when the server invokes ObjectInputStream.readObject() in my servlet's doPost method.
Tomcat then becomes slow, and the errors keep degrading the server's response time until it finally crashes; I have to restart the server, and afterwards everything works again.
Has anyone been through this problem?
Client Code
URLConnection conn = url.openConnection();
conn.setDoOutput(true);                              // POST with a request body
OutputStream os = conn.getOutputStream();
ObjectOutputStream oss = new ObjectOutputStream(os);
oss.writeUTF("protocol header sample");
oss.writeObject(_parameters);                        // serialize the request parameters
oss.flush();
oss.close();
Server Code
ObjectInputStream input = new ObjectInputStream(_request.getInputStream());
String method = input.readUTF();
parameters = input.readObject();
input.readObject() is where the error is
You haven't given us much information to go on, especially about the client side. But my suspicion is that the client side is:
failing to set the Content-Length header (or setting it to the wrong value),
failing to flush the output stream, and/or
not closing the output side of the socket.
Mysterious.
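For illustration, a client-side sketch that avoids the three pitfalls above: it sets an explicit Content-Length, flushes and closes the request body, and reads the response so the exchange completes. The URL and _parameters come from the question; everything else (class name, buffering the body in memory first) is an assumption:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public final class SerializedRequestClient {

    /** Send a serialized request and read the response status so the exchange completes. */
    static int send(URL url, Object parameters) throws IOException {
        // Serialize to memory first so the exact Content-Length is known up front.
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        ObjectOutputStream oss = new ObjectOutputStream(body);
        oss.writeUTF("protocol header sample");
        oss.writeObject(parameters);
        oss.close();

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setFixedLengthStreamingMode(body.size()); // explicit Content-Length
        OutputStream os = conn.getOutputStream();
        body.writeTo(os);
        os.flush();
        os.close();                                    // finish the request body

        return conn.getResponseCode();                 // forces the response to be read
    }
}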
Based on your updated question, it looks like none of the above. Here are a couple of other possibilities:
For some reason the client side is either locking up entirely during serialization or taking a VERY LONG TIME.
There is a proxy between the client and server that is causing problems.
You are experiencing load-related network problems, or network hardware problems.
Another possible explanation is that you have a memory leak, and that the slowdown is caused by the GC taking more and more time as you run out of memory. This will show up in the GC logs if you have them enabled.
I think that during high concurrency the socket timeout configured in Tomcat expires and the connection is closed. The next read Tomcat attempts on that connection then exceeds the socket timeout specified on the server.
If you want to avoid this problem, you have to increase the timeout on the server side, which is what is expiring in your case; but that is not really advisable.
By the way, you did not give enough information. Did you increase the number of connection threads in Tomcat? If you did, this would surely happen.
