I've written a multi-threaded UDP Proxy in Java using DatagramChannels.
It works fine until the following exception appears:
java.net.SocketException: Network dropped connection on reset: no further information
at sun.nio.ch.DatagramChannelImpl.receive0(Native Method)
at sun.nio.ch.DatagramChannelImpl.receiveIntoNativeBuffer(Unknown Source)
at sun.nio.ch.DatagramChannelImpl.receive(Unknown Source)
at sun.nio.ch.DatagramChannelImpl.receive(Unknown Source)
at com.fabio.rotumaster.proxy.ProxyMain.handlePacket(ProxyMain.java:189)
at com.fabio.rotumaster.proxy.ProxyMain.run(ProxyMain.java:169)
at java.lang.Thread.run(Unknown Source)
In ProxyMain.java, line 189 is only the call to the receive method:
SocketAddress sender = this.clientChannel.receive(buffer);
The error appears randomly from time to time, sometimes only once and sometimes five times in a row.
Does anyone have an idea?
This is Winsock error 10052: WSAENETRESET:
Network dropped connection on reset.
The connection has been broken due to keep-alive activity detecting a failure while the operation was in progress. It can also be returned by setsockopt if an attempt is made to set SO_KEEPALIVE on a connection that has already failed.
How you can possibly get that on a UDP socket appears to be a mystery, but MSDN also says under recvfrom():
For a datagram socket, this error indicates that the time to live has expired.
And @David Schwartz says:
This is one of many errors that UDP implementations tend to uselessly report to applications. You pretty much have to ignore them all.
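In practice that means catching these exceptions around the receive call so they don't kill the proxy thread. A minimal sketch of such a loop, assuming a blocking DatagramChannel (the class, field, and method names are illustrative, not from the original ProxyMain):
import java.io.IOException;
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class UdpReceiveLoop {
    // Spurious WSAENETRESET/WSAECONNRESET-style errors are swallowed so the loop keeps running.
    static void receiveLoop(DatagramChannel clientChannel, ByteBuffer buffer) throws IOException {
        while (true) {
            try {
                buffer.clear();
                SocketAddress sender = clientChannel.receive(buffer);
                buffer.flip();
                // forward the datagram to its destination here ...
            } catch (SocketException e) {
                // Windows surfaces ICMP errors from earlier sends as errors on the UDP socket;
                // there is nothing useful to do with them, so ignore and keep receiving.
            }
        }
    }
}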
I know there are a ton of these posts, but this is a little different. We are using vended code for part of our data processing system, and part of the system sends emails to clients if certain events take place on data insertion or deletion. Recently we have started getting "address already in use" exceptions. We checked the repository history, and nothing has changed in our code in the last 6 months for this system.

We have already tried the typical solutions for this issue, including increasing the number of connections allowed to the port, with little success. We had a meeting with the vendor, and I asked if anything had changed in their code, and whether they would assure us that all connections in their code are explicitly closed. They indicated that they are explicitly closing all sockets. However, they didn't show us the code, so there is no way for us to know if this is true other than taking their word for it.

So, the only thing I can think of to do is continue to increase the number of connections to the port until we stop getting bind exceptions. What is the industry standard for the maximum number of connections to port 25, and is there one? Also, if anyone has any other suggestions I would greatly appreciate it. Thanks so much in advance, Robert
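For reference, "explicitly closing" a JavaMail connection would look roughly like the sketch below. This is purely illustrative code, not the vendor's; the class and method names are made up, but the try/finally around Transport.close() is the part that matters, since an unclosed Transport keeps its client-side socket (and ephemeral port) tied up.
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;

// Hypothetical sender showing what "explicitly closing all sockets" means in JavaMail terms:
// the Transport is always closed, even if sendMessage() throws.
public class MailSender {
    public static void send(Session session, Message msg) throws Exception {
        Transport transport = session.getTransport("smtp");
        try {
            transport.connect();
            transport.sendMessage(msg, msg.getAllRecipients());
        } finally {
            transport.close(); // frees the client-side socket so the ephemeral port can be reused
        }
    }
}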
20210505112127.716 ERROR m.fiserv.ppx.business.notification.EmailNotifier : MessagingException from notify
javax.mail.MessagingException: Could not connect to SMTP host: SERVER.URL.COM, port: 25;
nested exception is:
java.net.BindException: Address already in use: connect
at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1545)
at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:453)
Caused by:
java.net.BindException: Address already in use: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:90)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:380)
20210505131529.950 ERROR erv.ppx.web.controller.AuditReportViewController : Error while generating HTML
net.sf.jasperreports.engine.JRException: Error writing to OutputStream writer : CorpAdminAuditReport
at net.sf.jasperreports.engine.export.JRHtmlExporter.exportReport(JRHtmlExporter.java:496)
at com.fiserv.ppx.web.controller.AuditReportViewController.generateReport(AuditReportViewController.java:184)
Caused by:
com.ibm.wsspi.webcontainer.ClosedConnectionException: OutputStream encountered error during write
at com.ibm.ws.webcontainer.channel.WCCByteBufferOutputStream.write(WCCByteBufferOutputStream.java:188)
at com.ibm.ws.webcontainer.srt.SRTOutputStream.write(SRTOutputStream.java:97)
20210505140706.240 ERROR com.fiserv.ppx.business.db.DBConnectionUtil : Exception in getting for AppServer connection from DataSource.
com.ibm.websphere.ce.cm.ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180,005 seconds.
at com.ibm.ws.rsadapter.AdapterUtil.toSQLException(AdapterUtil.java:1680)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:661)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:611)
Caused by:
com.ibm.websphere.ce.j2c.ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180,005 seconds.
at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1781)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:3834)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:3082)
20210505140731.341 ERROR com.fiserv.ppx.business.db.DBConnectionUtil : Exception in getting for AppServer connection from DataSource.
com.ibm.websphere.ce.cm.ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180,010 seconds.
at com.ibm.ws.rsadapter.AdapterUtil.toSQLException(AdapterUtil.java:1680)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:661)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:611)
Caused by:
com.ibm.websphere.ce.j2c.ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180,010 seconds.
at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1781)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:3904)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:3082)
20210505140731.341 ERROR com.fiserv.ppx.sso.controller.SSOController : SSO Configuration error
java.lang.NullPointerException
at com.fiserv.ppx.business.db.PPXDbTransactionManager.<init>(PPXDbTransactionManager.java:60)
at com.fiserv.ppx.sso.impl.SSOLoginAuthenticator.authenticateSSOUser(SSOLoginAuthenticator.java:157)
I am writing an HTTP client with Netty 4.1.12.Final and I have unit tests simulating the crash of the HTTP server in order to be able to handle it.
I noticed that, when it happens, the exceptionCaught callback method of my inbound handler is called with:
java.io.IOException: Une connexion existante a dû être fermée par l’hôte distant
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
The English equivalent of the exception message is most likely:
java.io.IOException: An existing connection was forcibly closed by the remote host
Since this callback method is also called when an exception is thrown from the channelRead0 method of my inbound handler, I have a few questions:
1) Should I always consider an IOException "received" in the exceptionCaught callback as an indication that there is no point in continuing to use the channel?
2) Since channelRead0 is declared to throw Exception, should I catch all IOExceptions inside it in order to be sure that, when "receiving" an IOException in the exceptionCaught callback, it is related to the Channel?
3) Is there a way to know whether an exception "received" in the exceptionCaught callback is related to I/O operations or to handler operations?
Thank you for any hint!
1) If we are talking about a TCP connection, then yes, every IOException will result in the connection being closed automatically by Netty, as there is no way to recover.
2) I think I don't understand the question completely, as each exception passed through the exceptionCaught(...) method is related to the channel, which can be obtained via ctx.channel().
3) No, there is no way in general. That said, if it's a TCP connection, it's an IOException, and it's triggered by the actual transport, we will close the connection.
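Put together, a client handler can treat any IOException reported to exceptionCaught as fatal for the channel and simply close it. A small sketch of that, assuming an HTTP codec in the pipeline (the handler name and comments are illustrative, not from the original client):
import java.io.IOException;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpResponse;

public class HttpClientHandler extends SimpleChannelInboundHandler<FullHttpResponse> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse response) {
        // process the response here ...
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        if (cause instanceof IOException) {
            // I/O error from the transport: the connection is unusable, just close it
            ctx.close();
        } else {
            // exception thrown from our own handler code (e.g. channelRead0)
            cause.printStackTrace();
            ctx.close();
        }
    }
}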
Hi guys!
I have some protobuf objects to send to a server.
The server uses Netty.
When I use Netty for the client with Netty's ProtobufDecoder, everything works fine.
But when I try to send a protobuf object through a vanilla Java Socket, I get a "broken pipe" exception. The Socket object stays connected, though, and I can send another object and get the exception again. =(
Here is the Netty-based client's pipeline setup:
ChannelPipeline pipeline = channel.pipeline();
// LengthFieldBasedFrameDecoder(maxFrameLength, lengthFieldOffset, lengthFieldLength, lengthAdjustment, initialBytesToStrip):
// each frame starts with a 4-byte length field, which is stripped before the protobuf payload is decoded
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(134217728, 0, 4, 0, 4));
pipeline.addLast("protobufDecoder", new ProtobufDecoder(mSocketListener.getInMessage()));
I don't know where to specify maxFrameLength and the other params.
Here is the error log:
Exception in thread "main" java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:141)
at SocketClientMain.main(SocketClientMain.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
What's going wrong?
Maybe I'm doing something wrong?
Or do I need to ask the server owner for some info?
java.net.SocketException: Broken pipe is caused by trying to write to a connection that the other side has already closed. It was not you who closed the connection; otherwise a different exception would have been thrown. You cannot recover the connection; you have to open a new one.
If you keep getting the exception, it simply means the application protocol is poorly defined or poorly implemented.
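For what it's worth, the Netty pipeline in the question decodes frames that start with a 4-byte length prefix (that is what the LengthFieldBasedFrameDecoder(134217728, 0, 4, 0, 4) arguments mean), so a plain-Socket client presumably has to write the same framing. A rough sketch under that assumption (the class name is made up; check the actual protocol with the server owner):
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import com.google.protobuf.MessageLite;

public class PlainSocketSender {
    // Writes one message using the framing LengthFieldBasedFrameDecoder expects:
    // a 4-byte big-endian length field followed by the serialized protobuf bytes.
    public static void send(Socket socket, MessageLite message) throws IOException {
        byte[] payload = message.toByteArray();
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeInt(payload.length); // length prefix, not counting itself
        out.write(payload);
        out.flush();
    }
}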
If I set a socket's SoTimeout and read from it, then when the read exceeds the timeout limit I get a "SocketTimeoutException: Read timed out".
Here is the stack trace in my case:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:277)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:527)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:462)
But here I encountered an "IOException: Connection timed out", and I don't know how it happened.
Stack trace:
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
at sun.nio.ch.IOUtil.read(IOUtil.java:171)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:245)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:277)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readInt(DataInputStream.java:370)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:527)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:462)
Can someone tell me what the difference is between these two exceptions? Thanks.
A connection timeout means you attempted to connect to the remote IP/port pair and failed to do so: it did not answer at all. Another possible error at that stage would be connection refused, in which case the pair is reachable but rejected your connection attempt. Both of these errors appear during the initial setup of a socket. Note that these errors only occur with TCP, since a TCP connection requires the establishment of a session.
When you have a socket read timeout, it means you are connected, but failed to read data in time. Timeouts on sockets are configurable. You may also get a connection reset error, which means you did connect successfully, but the other end decided that after all you're not worth it :p
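A small sketch of where each timeout lives and which exception it produces (host, port, and timeout values are only examples):
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        try {
            // Connect timeout: how long to wait for the TCP handshake to complete.
            // Failing here throws "SocketTimeoutException: connect timed out".
            socket.connect(new InetSocketAddress("example.com", 8080), 3000);

            // Read timeout: how long a blocking read may wait for data once connected.
            // Failing here throws "SocketTimeoutException: Read timed out".
            socket.setSoTimeout(5000);
            InputStream in = socket.getInputStream();
            System.out.println("first byte: " + in.read());
        } catch (SocketTimeoutException e) {
            System.out.println("timed out: " + e.getMessage());
        } finally {
            socket.close();
        }
    }
}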
Simple answer:
In one case (Connection timed out) your application cannot connect to the server in a timely manner. In the other case (Read timed out) the connection can be established but during read the connection times out.
'Connection timed out' after the connect phase means that something has gone seriously wrong with the connection and it must be closed. 'Read timeout' just means that no data arrived within the specified receive timeout period: it isn't fatal.
I'm using a Jetty based servlet to do RPC and I'm having an issue where a request that takes a long time throws the following exception on the server:
2012-02-11 21:07:07,673 [btpool0-4] DEBUG org.mortbay.log - EXCEPTION
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(Unknown Source)
at org.mortbay.io.ByteArrayBuffer.readFrom(ByteArrayBuffer.java:168)
at org.mortbay.io.bio.StreamEndPoint.fill(StreamEndPoint.java:99)
at org.mortbay.jetty.bio.SocketConnector$Connection.fill(SocketConnector.java:190)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:277)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:203)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:357)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:217)
at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:475)
2012-02-11 21:07:07,674 [btpool0-4] DEBUG org.mortbay.log - EOF
I tried setting the Connection: Keep-Alive HTTP request header, but that had no effect, and from what I can gather HTTP 1.1 (which I'm pretty sure I'm using) is persistent by default.
So I think there are two ways I can try to address this:
1. Figure out how to prevent the timeout exception from being thrown at all.
2. Have the client issue the initial request without waiting for a response, and then ping with separate requests to check when the server is done.
Update (2/12/2012): I set the maxIdleTime as Tim suggested and that did extend the time before the timeout occurred, but then I started getting a new exception:
2012-02-11 23:24:01,187 [btpool0-1] DEBUG org.mortbay.log - EXCEPTION
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at org.mortbay.io.nio.ChannelEndPoint.fill(ChannelEndPoint.java:129)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:277)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:203)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:357)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:329)
at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:475)
So something outside of Jetty was killing the connection; I suspect most likely a firewall. What I ended up doing was making the server process the request with multiple threads: the original thread would immediately respond to the HTTP request, and a second thread would be kicked off to perform the action that was taking a long time. The client would then poll with HTTP requests to check when the action on the server was complete.
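A rough outline of that workaround, assuming a plain servlet rather than the actual RPC layer (all class, parameter, and status names here are illustrative):
import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// "Submit then poll" pattern: the POST returns a job id immediately, the slow work runs on a
// background thread, and the client polls GET ?job=<id> until the status is DONE.
public class LongJobServlet extends HttpServlet {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final Map<String, String> jobStatus = new ConcurrentHashMap<>();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String jobId = UUID.randomUUID().toString();
        jobStatus.put(jobId, "RUNNING");
        executor.submit(() -> {
            doTheSlowWork();               // the part that used to exceed the idle timeout
            jobStatus.put(jobId, "DONE");
        });
        resp.getWriter().write(jobId);     // respond right away, well within any timeout
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String jobId = req.getParameter("job");
        String status = (jobId == null) ? "UNKNOWN" : jobStatus.getOrDefault(jobId, "UNKNOWN");
        resp.getWriter().write(status);
    }

    private void doTheSlowWork() {
        // placeholder for the long-running action
        try { Thread.sleep(60000); } catch (InterruptedException ignored) { }
    }
}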
This is a socket timeout, so nothing you do at the HTTP level can fix it, hence your keep-alive not achieving anything.
Try setting the maxIdleTime on the SocketConnector
See here: http://docs.codehaus.org/display/JETTY/Configuring+Connectors ( archive link )
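For the old mortbay (Jetty 6) API in the stack traces above, that would look roughly like this in code; the port and the 5-minute value are only examples:
import org.mortbay.jetty.Server;
import org.mortbay.jetty.bio.SocketConnector;

public class JettyServerMain {
    public static void main(String[] args) throws Exception {
        Server server = new Server();
        SocketConnector connector = new SocketConnector();
        connector.setPort(8080);
        // maxIdleTime (ms): how long a connection may sit idle, e.g. while a long request
        // is still being processed, before Jetty times out the read.
        connector.setMaxIdleTime(300000); // 5 minutes
        server.addConnector(connector);
        server.start();
        server.join();
    }
}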