Issue with SocketTimeoutException and Jetty - java

I'm using a Jetty-based servlet to do RPC, and I'm having an issue where a request that takes a long time throws the following exception on the server:
2012-02-11 21:07:07,673 [btpool0-4] DEBUG org.mortbay.log - EXCEPTION
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(Unknown Source)
at org.mortbay.io.ByteArrayBuffer.readFrom(ByteArrayBuffer.java:168)
at org.mortbay.io.bio.StreamEndPoint.fill(StreamEndPoint.java:99)
at org.mortbay.jetty.bio.SocketConnector$Connection.fill(SocketConnector.java:190)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:277)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:203)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:357)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:217)
at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:475)
2012-02-11 21:07:07,674 [btpool0-4] DEBUG org.mortbay.log - EOF
I tried setting the Connection: Keep-Alive HTTP request header, but that had no effect, and from what I can gather, HTTP/1.1 (which I'm pretty sure I'm using) uses persistent connections by default.
So I think there are two ways I can try to address this:
1) Figure out how to prevent the timeout exception from being thrown at all.
2) Have the client issue the initial request without waiting for a response, and then ping with separate requests to check when the server is done.
Update (2/12/2012): I set the maxIdleTime as Tim suggested and that did extend the time before the timeout occurred, but then I started getting a new exception:
2012-02-11 23:24:01,187 [btpool0-1] DEBUG org.mortbay.log - EXCEPTION
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at org.mortbay.io.nio.ChannelEndPoint.fill(ChannelEndPoint.java:129)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:277)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:203)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:357)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:329)
at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:475)
So something outside of Jetty was killing the connection; my best guess is a firewall. What I ended up doing was having the server process the request with multiple threads: the original thread responds to the HTTP request immediately, and a second thread is kicked off to perform the long-running action. The client then polls with separate HTTP requests to check when the action on the server has completed.
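For illustration, a rough sketch of that workaround as a plain servlet; all class, field, and method names here are hypothetical, not taken from the original code:
// Hypothetical sketch: respond immediately, do the slow work on another thread,
// and let the client poll for completion with a job id.
import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LongRunningRpcServlet extends HttpServlet {
    private final ExecutorService workers = Executors.newCachedThreadPool();
    private final Map<String, Boolean> jobDone = new ConcurrentHashMap<String, Boolean>();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Start the long-running work on a separate thread and return immediately.
        final String jobId = UUID.randomUUID().toString();
        jobDone.put(jobId, Boolean.FALSE);
        workers.submit(new Runnable() {
            public void run() {
                performLongRunningAction();        // the slow RPC work
                jobDone.put(jobId, Boolean.TRUE);  // mark completion for polling
            }
        });
        resp.getWriter().write(jobId);             // client uses this id to poll
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Polling endpoint: the client asks whether the job identified by "jobId" is done.
        String jobId = req.getParameter("jobId");
        Boolean done = jobDone.get(jobId);
        resp.getWriter().write(Boolean.TRUE.equals(done) ? "DONE" : "PENDING");
    }

    private void performLongRunningAction() {
        // ... the action that used to exceed the socket timeout ...
    }
}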

This is a socket timeout, so nothing you do at the HTTP level can fix it, which is why your keep-alive change had no effect.
Try setting the maxIdleTime on the SocketConnector
See here: http://docs.codehaus.org/display/JETTY/Configuring+Connectors (archive link)
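For reference, a minimal sketch of setting it programmatically, assuming the Jetty 6 (org.mortbay) API that appears in the stack trace; the port and timeout values are only examples:
// Sketch: raise the connector's idle timeout so slow requests don't hit the socket read timeout.
import org.mortbay.jetty.Server;
import org.mortbay.jetty.bio.SocketConnector;

Server server = new Server();
SocketConnector connector = new SocketConnector();
connector.setPort(8080);
connector.setMaxIdleTime(300000);   // idle timeout in milliseconds
server.addConnector(connector);
The same maxIdleTime property can also be set on the connector in jetty.xml, as described on the linked page.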

Related

Webflux, reactor connection reset by peer unhandled

So basically I am simulating a 'Connection reset by peer' locally using Toxiproxy or Wiremock (same behaviour for both of them) and I get the exception below.
The tricky part is that it will bypass any defined exception handlers and there's no way to revert the flow if there's a failure.
I tried configuring the WebClient as mentioned here: https://github.com/reactor/reactor-netty/issues/388 by either removing the connection pool or simply setting the HttpClient keepAlive property to false.
I defined a global exception handling mechanism by extending ErrorWebExceptionHandler, but the error never goes through it.
Do you have any suggestions on how to manage this one? Or a proper configuration I could try for either TcpClient / HttpClient or WebClient? What am I missing?
There must be a way to handle this properly.
[26/04/22 20:02:28] lvl=ERROR [ioEventLoopGroup-4-3] r.n.r.PooledConnectionProvider - [id: 0xaf559a88, L:0.0.0.0/0.0.0.0:56755 ! R:localhost/127.0.0.1:8989] Pooled connection observed an error
java.io.IOException: Connection reset by peer
at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:233)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223)
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:247)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1140)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:632)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:549)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
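For context, the configuration attempt described above presumably looked something like this (a sketch assuming the reactor-netty HttpClient and Spring WebClient APIs; adapt to your actual setup):
// Sketch of the two variants mentioned: an unpooled connection, and keep-alive disabled.
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

HttpClient httpClient = HttpClient.newConnection()   // no connection pool
        .keepAlive(false);                           // the other variant tried: keep-alive off

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();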

How to make the distinction between different exceptions in Netty's onChannelFailure callback

I am writing an HTTP client with Netty 4.1.12.Final and I have unit tests simulating the crash of the HTTP server in order to be able to handle it.
I noticed that, when it happens, the exceptionCaught callback method of my inbound handler is called with:
java.io.IOException: Une connexion existante a dû être fermée par l’hôte distant
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
The English equivalent of the exception message is most likely:
java.io.IOException: An existing connection was forcibly closed by the remote host
Since this callback method is also called when an exception is thrown from my channelRead0 method of my inbound handler, I am asking a few questions:
1) Should I always consider an IOException "received" in the exceptionCaught callback as an indication that there is no point in continuing to use the channel?
2) Since channelRead0 is declared to throw Exception, should I catch all IOException inside it in order to be sure that, when "receiving" an IOException in the exceptionCaught callback, it is related to the Channel?
3) Is there a way to know whether an exception "received" in the exceptionCaught callback is related to I/O operations or to handler operations?
Thank you for any hint!
1) If we are talking about a TCP connection, then yes, every IOException will result in the connection being closed automatically by Netty, as there is no way to recover.
2) I don't think I understand the question completely, as every exception passed to the exceptionCaught(...) method is related to the channel, which can be obtained via ctx.channel().
3) No, there is no general way. That said, if it is a TCP connection, the exception is an IOException, and it was triggered by the actual transport, Netty will close the connection.
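Putting 1) and 3) together, a minimal sketch of an exceptionCaught implementation along those lines (the handler class itself is hypothetical):
// Hypothetical handler: treat transport-level IOExceptions as fatal for the channel.
import java.io.IOException;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class HttpClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        if (cause instanceof IOException) {
            // Transport failure (e.g. connection reset): the channel is no longer usable.
            ctx.close();
        } else {
            // Most likely thrown by one of our own handlers; decide per exception type.
            cause.printStackTrace();
            ctx.close();
        }
    }
}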

Terminate XMPPTCPConnection.connect attempt

I've been trying to figure out how to properly terminate an XMPPTCPConnection.connect() attempt. I've been running into an issue where a user inputs incorrect information and is then forced to wait out the 60-second timeout. I've tried the following, but I'm assuming they fail because the connection hasn't been established yet:
connection.disconnect();
connection.instantShutdown();
What is the proper way to terminate a connection attempt so that you don't have to wait for the timeout error?
WARNING: shutdownDone was not marked as successful by the writer thread
org.jivesoftware.smack.SmackException$NoResponseException: No response received within reply timeout. Timeout was 60000ms (~60s). Used filter: No filter used or filter was 'null'.
at org.jivesoftware.smack.SmackException$NoResponseException.newWith(SmackException.java:106)
at org.jivesoftware.smack.SmackException$NoResponseException.newWith(SmackException.java:85)
at org.jivesoftware.smack.SynchronizationPoint.checkForResponse(SynchronizationPoint.java:253)
at org.jivesoftware.smack.SynchronizationPoint.checkIfSuccessOrWait(SynchronizationPoint.java:146)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketWriter.shutdown(XMPPTCPConnection.java:1269)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.shutdown(XMPPTCPConnection.java:494)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.instantShutdown(XMPPTCPConnection.java:483)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.notifyConnectionError(XMPPTCPConnection.java:863)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.access$2700(XMPPTCPConnection.java:139)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketWriter.writePackets(XMPPTCPConnection.java:1419)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketWriter.access$2800(XMPPTCPConnection.java:1170)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketWriter$1.run(XMPPTCPConnection.java:1218)
at java.lang.Thread.run(Unknown Source)
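No full answer is given here, but one way to at least shorten the wait is to lower the reply timeout before calling connect(). This is only a sketch assuming Smack 4.1-era method names (later versions rename setPacketReplyTimeout to setReplyTimeout), and it makes the attempt fail faster rather than actually aborting the socket:
// Hypothetical sketch: shorten the reply timeout so a bad host or credentials
// fails after a few seconds instead of the default 60 s.
XMPPTCPConnection connection = new XMPPTCPConnection(config);   // config built as usual
connection.setPacketReplyTimeout(5000);   // milliseconds; Smack 4.2+ calls this setReplyTimeout(long)
connection.connect();
connection.login();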

VoltDB Timeout for createConnection

Below is a simple code snippet that shows how to connect to a VoltDB server.
ClientConfig clientConfig = new ClientConfig();
Client client = ClientFactory.createClient(clientConfig);
String server = "192.168.43.32";
client.createConnection(server);
Based on my experiments, if the server is down or simply not reachable at the network layer, it takes about 75 seconds to get a response:
SEVERE: Failed to connect to 192.168.43.32, in 75,359 ms
java.net.ConnectException: Operation timed out
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:458)
at sun.nio.ch.Net.connect(Net.java:450)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:154)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:142)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:134)
at org.voltdb.client.Distributer.createConnectionWithHashedCredentials(Distributer.java:878)
at org.voltdb.client.ClientImpl.createConnectionWithHashedCredentials(ClientImpl.java:189)
at org.voltdb.client.ClientImpl.createConnection(ClientImpl.java:682)
at src.java.tutorial.voltdb.integration.ConnectionTest.main(ConnectionTest.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Is there any way to set the timeout so the application does not have to wait so long? A successful connection normally takes just tens of milliseconds, so if a connection cannot be established within 1000 milliseconds, something is almost certainly wrong.
I have tried the setting below:
clientConfig.setConnectionResponseTimeout(1000);
In this case, it has no effect at all, so I guess it is not meant for this purpose.
Normally when the database is down and your client tries to connect it will get an immediate Connection refused exception, for example:
Exception in thread "main" java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:364)
at sun.nio.ch.Net.connect(Net.java:356)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:623)
at java.nio.channels.SocketChannel.open(SocketChannel.java:184)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:165)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:153)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:145)
at org.voltdb.client.Distributer.createConnectionWithHashedCredentials(Distributer.java:890)
at org.voltdb.client.ClientImpl.createConnectionWithHashedCredentials(ClientImpl.java:191)
at org.voltdb.client.ClientImpl.createConnection(ClientImpl.java:684)
at benchmark.Benchmark.<init>(Benchmark.java:17)
at benchmark.Benchmark.main(Benchmark.java:78)
In general, a "java.net.ConnectException: Connection timed out" can occur if there is a firewall that prevents the client from receiving any sort of response, or there could be other causes. The first thing to check might be if you have any firewall or network settings that would prevent access to port 21212 (the default VoltDB database connection port).
The ClientConfig setConnectionResponseTimeout() setting is used to cause a live connection to be closed if it hasn't received a response from a procedure call or a ping for the given number of milliseconds, but it is not used for creating a new connection.
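As a generic workaround (not part of the VoltDB client API, and not taken from the original answer), you could bound the connection attempt yourself, for example with a Future; this is only a sketch:
// Hypothetical sketch: give up on createConnection() after 1 s using a plain
// java.util.concurrent timeout, since the client library itself waits much longer.
import java.util.concurrent.*;
import org.voltdb.client.Client;
import org.voltdb.client.ClientConfig;
import org.voltdb.client.ClientFactory;

ClientConfig clientConfig = new ClientConfig();
final Client client = ClientFactory.createClient(clientConfig);
final String server = "192.168.43.32";

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Void> attempt = executor.submit(new Callable<Void>() {
    public Void call() throws Exception {
        client.createConnection(server);   // may block for ~75 s if the host is unreachable
        return null;
    }
});
try {
    attempt.get(1, TimeUnit.SECONDS);      // treat anything slower than 1 s as a failure
} catch (TimeoutException e) {
    attempt.cancel(true);
    // handle the unreachable server here
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (ExecutionException e) {
    // connection failed for another reason (e.g. connection refused)
} finally {
    executor.shutdown();
}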

Java Console Corba Client Application (Windows) and Ctrl+C throws CORBA.COMM_FAILURE Exception

I have a Java console application (Windows) that I want to shut down gracefully when the user hits Ctrl+C. That's why I use addShutdownHook to call interrupt() on my worker thread.
The problem is that when I am in the middle of a CORBA operation (in my worker thread) and press Ctrl+C, I get a CORBA.COMM_FAILURE exception. Do you know why that is?
This is the stack trace:
org.omg.CORBA.COMM_FAILURE: vmcid: SUN minor code: 203 completed: No
at com.sun.corba.se.impl.logging.ORBUtilSystemException.writeErrorSend(Unknown Source)
at com.sun.corba.se.impl.logging.ORBUtilSystemException.writeErrorSend(Unknown Source)
at com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.writeLock(Unknown Source)
at com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.sendCancelRequestWithLock(Unknown Source)
at com.sun.corba.se.impl.protocol.CorbaMessageMediatorImpl.sendCancelRequestIfFinalFragmentNotSent(Unknown Source)
at com.sun.corba.se.impl.protocol.CorbaClientRequestDispatcherImpl.endRequest(Unknown Source)
at com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.releaseReply(Unknown Source)
at org.omg.CORBA.portable.ObjectImpl._releaseReply(Unknown Source)
at INVS._PKICMSStub.UpdateDataInput(_PKICMSStub.java:223)
at com.infonotary.invssignature.SignerThread.createDetachedCMS(SignerThread.java:289)
at com.infonotary.invssignature.SignerThread.createDetachedSignature(SignerThread.java:321)
at com.infonotary.invssignature.SignerThread.signFile(SignerThread.java:378)
at com.infonotary.invssignature.SignerThread.signDirectory(SignerThread.java:451)
at com.infonotary.invssignature.SignerThread.run(SignerThread.java:506)
Edit: On the server side the error is:
#1 0x4054c489 in omni::giopStream::CommFailure::_raise (minor=1096024068,
   status=CORBA::COMPLETED_NO, retry=false, filename=0x405a2259 "giopStream.cc",
   linenumber=878, message=0x405a242c "Error in network receive (start of message)",
   strand=0x8c879b8) at giopStream.cc:581
#2 0x4054cc8c in omni::giopStream::errorOnReceive (this=0x8c533ec, rc=-1,
   filename=0x405a2259 "giopStream.cc", lineno=878, buf=0x8cba610,
Edit: I hit Ctrl+C on the client console window.
Edit: Maybe the CORBA code uses blocking sleep operations (when sending and receiving), and when I interrupt the worker thread it throws this exception as a result?
If you are in the middle of a CORBA call and you shut down the client, I would expect the server to receive a COMM_FAILURE.
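For illustration, a hedged sketch of the setup described in the question, with the worker treating a COMM_FAILURE raised during shutdown as expected; all names here are hypothetical, not from the original code:
// Hypothetical sketch: the shutdown hook interrupts the worker, and the worker
// catches COMM_FAILURE so a Ctrl+C mid-call still ends gracefully.
import org.omg.CORBA.COMM_FAILURE;

public class ClientMain {
    public static void main(String[] args) throws InterruptedException {
        final Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    runCorbaWork();                 // the long-running CORBA calls
                } catch (COMM_FAILURE e) {
                    // Expected if the process is being torn down mid-call; exit quietly.
                }
            }
        });
        worker.start();

        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                worker.interrupt();                 // ask the worker to stop
                try {
                    worker.join(5000);              // give it a moment to finish cleanly
                } catch (InterruptedException ignored) {
                }
            }
        }));

        worker.join();
    }

    private static void runCorbaWork() {
        // ... invoke the CORBA stubs here ...
    }
}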
