Hi guys!
I have some protobuf objects to send to a server. The server uses Netty.
When I use a Netty-based client with Netty's ProtobufDecoder, everything works fine.
But when I try to send a protobuf object through a vanilla Java Socket, I get a "Broken pipe" exception. The Socket object stays connected, though, and I can send another object, only to get the exception again. =(
Here is the Netty-based client's pipeline setup:
ChannelPipeline pipeline = channel.pipeline();
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(134217728, 0, 4, 0, 4));
pipeline.addLast("protobufDecoder", new ProtobufDecoder(mSocketListener.getInMessage()));
I don't know where to specify maxFrameLength and the other parameters.
Here is the error log:
Exception in thread "main" java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:141)
at SocketClientMain.main(SocketClientMain.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
What's going wrong?
Maybe my hands are broken?
Or do I need to ask the server owner for some info?
java.net.SocketException: Broken pipe is caused by writing to a connection that the other side has already closed. It was not you who closed the connection; otherwise a different exception would have been thrown. You cannot recover the connection; you have to open a new one.
And if you keep getting the exception, it simply means the application protocol is poorly defined or poorly implemented.
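One common cause of exactly this behavior is framing: the client pipeline above ends with a LengthFieldBasedFrameDecoder, so if the server's pipeline mirrors it, the server presumably expects every message to be preceded by a 4-byte length field, while a vanilla Socket client writing only the bare serialized bytes violates that protocol and the server may drop the connection. Here is a minimal sketch of a length-prefixed write, assuming the server expects the same 4-byte big-endian prefix as the pipeline above (class and method names are illustrative):
import java.io.DataOutputStream;
import java.net.Socket;
import com.google.protobuf.Message;

public final class LengthPrefixedWriter {
    // Writes one protobuf message framed the way
    // LengthFieldBasedFrameDecoder(max, 0, 4, 0, 4) expects it.
    public static void send(Socket socket, Message message) throws Exception {
        byte[] payload = message.toByteArray();
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeInt(payload.length); // 4-byte big-endian length field
        out.write(payload);           // serialized protobuf body
        out.flush();
    }
}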
Related
I am writing an HTTP client with Netty 4.1.12.Final and I have unit tests simulating the crash of the HTTP server in order to be able to handle it.
I noticed that, when it happens, the exceptionCaught callback method of my inbound handler is called with:
java.io.IOException: Une connexion existante a dû être fermée par l’hôte distant
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
The English equivalent of the exception message is most likely:
java.io.IOException: An existing connection was forcibly closed by the remote host
Since this callback method is also called when an exception is thrown from the channelRead0 method of my inbound handler, I have a few questions:
Should I always treat an IOException "received" in the exceptionCaught callback as an indication that there is no point in continuing to use the channel?
Since channelRead0 is declared to throw Exception, should I catch all IOExceptions inside it, so that any IOException "received" in the exceptionCaught callback is guaranteed to be related to the Channel?
Is there a way to know whether an exception "received" in the exceptionCaught callback relates to I/O operations or to handler operations?
Thank you for any hint!
1) If we are talking about a TCP connection, then yes, every IOException will result in the connection being closed automatically by Netty, as there is no way to recover.
2) I think I don't understand the question completely, as each exception passed through the exceptionCaught(...) method is related to the channel, which can be obtained via ctx.channel().
3) No, there is no way in general. That said, if it's a TCP connection and it's an IOException triggered by the actual transport, we will close the connection.
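To illustrate point 1, here is a minimal sketch of an inbound handler whose exceptionCaught treats any IOException as fatal for the channel; the handler name and logging are illustrative, not from the original post:
import java.io.IOException;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class HttpClientHandler extends SimpleChannelInboundHandler<Object> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
        // handle the server response here
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        if (cause instanceof IOException) {
            // Transport-level failure: the connection is unusable, and Netty
            // will close it anyway; close explicitly for clarity.
            ctx.close();
        } else {
            // Handler-level failure: decide per application protocol
            // whether the channel can still be used.
            cause.printStackTrace();
        }
    }
}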
I've written a multi-threaded UDP Proxy in Java using DatagramChannels.
It works fine until the following exception appears:
java.net.SocketException: Network dropped connection on reset: no further information
at sun.nio.ch.DatagramChannelImpl.receive0(Native Method)
at sun.nio.ch.DatagramChannelImpl.receiveIntoNativeBuffer(Unknown Source)
at sun.nio.ch.DatagramChannelImpl.receive(Unknown Source)
at sun.nio.ch.DatagramChannelImpl.receive(Unknown Source)
at com.fabio.rotumaster.proxy.ProxyMain.handlePacket(ProxyMain.java:189)
at com.fabio.rotumaster.proxy.ProxyMain.run(ProxyMain.java:169)
at java.lang.Thread.run(Unknown Source)
In ProxyMain.java, line 189 contains only the call to receive:
SocketAddress sender = this.clientChannel.receive(buffer);
The error appears randomly from time to time, sometimes only once and sometimes five times in a row.
Does anyone have an idea?
This is Winsock error 10052: WSAENETRESET:
Network dropped connection on reset.
The connection has been broken due to keep-alive activity detecting a failure while the operation was in progress. It can also be returned by setsockopt if an attempt is made to set SO_KEEPALIVE on a connection that has already failed.
How you can possibly get that on a UDP socket appears to be a mystery, but MSDN also says under recvfrom():
For a datagram socket, this error indicates that the time to live has expired.
And @David Schwartz says:
This is one of many errors that UDP implementations tend to uselessly report to applications. You pretty much have to ignore them all.
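Following that advice, the receive loop can simply catch and ignore these spurious exceptions and keep reading. A minimal sketch (the clientChannel field and handlePacket call mirror the question; the rest is illustrative):
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class ReceiveLoop {
    public static void run(DatagramChannel clientChannel) throws Exception {
        ByteBuffer buffer = ByteBuffer.allocate(65535);
        while (true) {
            buffer.clear();
            try {
                SocketAddress sender = clientChannel.receive(buffer);
                if (sender != null) {
                    buffer.flip();
                    // handlePacket(sender, buffer); // application logic
                }
            } catch (SocketException e) {
                // WSAENETRESET/WSAECONNRESET noise on Windows UDP sockets:
                // the socket is still usable, so just keep receiving.
            }
        }
    }
}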
I'm new to Redis. I started the server following this tutorial, and it works. Then I wrote some Java code to connect to Redis, and that was fine too, like this:
Jedis jedis = new Jedis("localhost");
System.out.println("Connected to server successfully");
//store data in redis list
jedis.lpush("tutorial-list", "Redis");
jedis.lpush("tutorial-list", "Mongodb");
jedis.lpush("tutorial-list", "Mysql");
But when I push to Redis from multiple threads, it throws a "Read timed out" exception:
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:201)
at redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
at redis.clients.jedis.Protocol.process(Protocol.java:141)
at redis.clients.jedis.Protocol.read(Protocol.java:205)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:297)
at redis.clients.jedis.Connection.getBinaryMultiBulkReply(Connection.java:233)
at redis.clients.jedis.Jedis.keys(Jedis.java:185)
at org.v11.redis_mongo_task.UpdateApp.jobDetail(UpdateApp.java:23)
at org.v11.redis_mongo_task.UpdateApp.main(UpdateApp.java:42)
... 5 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.net.SocketInputStream.read(SocketInputStream.java:107)
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:195)
... 13 more
What is happening with Redis? Why does it work in a single thread?
According to this answer, a single Jedis instance is not thread-safe. You will have to use JedisPool for multithreading. You can read here how to use it, and here how to set the max connections and what happens when all of those connections are occupied.
I'm posting links since two of them are SO answers and they should get the credit, and one is from the official GitHub repo, so if anything gets updated it should be reflected there too.
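A minimal sketch of the JedisPool approach, assuming a Jedis version recent enough that Jedis implements Closeable; the pool size and thread count are illustrative values:
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PooledPushExample {
    public static void main(String[] args) throws InterruptedException {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(16); // illustrative cap on simultaneous connections
        JedisPool pool = new JedisPool(config, "localhost");

        Runnable task = () -> {
            // Each thread borrows its own Jedis instance from the pool.
            try (Jedis jedis = pool.getResource()) {
                jedis.lpush("tutorial-list", Thread.currentThread().getName());
            }
        };

        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(task);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        pool.close();
    }
}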
Below is a simple code snippet that shows how to connect to a VoltDB server.
ClientConfig clientConfig = new ClientConfig();
Client client = ClientFactory.createClient(clientConfig);
String server = "192.168.43.32";
client.createConnection(server);
Based on my experiments, if the server is down or simply unreachable at the network layer, it takes about 75 seconds for the connection attempt to fail.
SEVERE: Failed to connect to 192.168.43.32, in 75,359 ms
java.net.ConnectException: Operation timed out
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:458)
at sun.nio.ch.Net.connect(Net.java:450)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:154)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:142)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:134)
at org.voltdb.client.Distributer.createConnectionWithHashedCredentials(Distributer.java:878)
at org.voltdb.client.ClientImpl.createConnectionWithHashedCredentials(ClientImpl.java:189)
at org.voltdb.client.ClientImpl.createConnection(ClientImpl.java:682)
at src.java.tutorial.voltdb.integration.ConnectionTest.main(ConnectionTest.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Is there any way to set the timeout, so the application does not have to wait so long? A successful connection normally takes just tens of milliseconds, so if a connection cannot be established within 1000 milliseconds, something is definitely wrong already.
I have tried the setting below:
clientConfig.setConnectionResponseTimeout(1000);
It has no effect at all in this case, so I guess it is not meant for this purpose.
Normally, when the database is down and your client tries to connect, it will get an immediate "Connection refused" exception, for example:
Exception in thread "main" java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:364)
at sun.nio.ch.Net.connect(Net.java:356)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:623)
at java.nio.channels.SocketChannel.open(SocketChannel.java:184)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:165)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:153)
at org.voltdb.client.ConnectionUtil.getAuthenticatedConnection(ConnectionUtil.java:145)
at org.voltdb.client.Distributer.createConnectionWithHashedCredentials(Distributer.java:890)
at org.voltdb.client.ClientImpl.createConnectionWithHashedCredentials(ClientImpl.java:191)
at org.voltdb.client.ClientImpl.createConnection(ClientImpl.java:684)
at benchmark.Benchmark.<init>(Benchmark.java:17)
at benchmark.Benchmark.main(Benchmark.java:78)
In general, a "java.net.ConnectException: Connection timed out" can occur if there is a firewall that prevents the client from receiving any sort of response, or there could be other causes. The first thing to check might be if you have any firewall or network settings that would prevent access to port 21212 (the default VoltDB database connection port).
The ClientConfig setConnectionResponseTimeout() setting is used to cause a live connection to be closed if it hasn't received a response from a procedure call or a ping for the given number of milliseconds, but it is not used for creating a new connection.
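As a client-side workaround (this is a plain-socket probe, not a VoltDB API), you could test TCP reachability with a short timeout before calling createConnection, so the application fails fast instead of waiting out the OS-level connect timeout:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class ReachabilityProbe {
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket probe = new Socket()) {
            probe.connect(new InetSocketAddress(host, port), timeoutMs);
            return true; // something is listening on host:port
        } catch (IOException e) {
            return false; // refused, timed out, or unreachable
        }
    }
}
Usage, with 21212 being the default VoltDB port mentioned above:
if (ReachabilityProbe.isReachable("192.168.43.32", 21212, 1000)) {
    client.createConnection(server);
}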
We are using Jersey Server-Sent Events (SSE) to allow remote components of our application to listen to events raised by our Jersey/Tomcat server. This works great.
However, it is crucial that our server have an accurate list of currently-connected listeners (our remote components). To this end, our server sends a tiny message to each caller (via eventOutput.write) once every five seconds. If our remote component is shut down while SSE-connected, or if the remote computer is powered off while SSE-connected, our server's eventOutput.write throws the ClientAbortException/SocketException exception shown below. That's perfect: we catch the exception, mark that caller as no longer connected, and move on.
Now, for the problem. As I mentioned, eventOutput.write throws an exception in cases where our remote component software is not running, or where the computer it runs on has been powered down. However, there are two cases where calling eventOutput.write to a no-longer-connected computer does NOT throw an exception: 1) if the Ethernet cable of the remote computer is simply pulled while the caller is SSE-connected, and 2) if the network adapter in the remote computer is turned off (i.e., by an administrative action) while the caller is SSE-connected. In these two cases, we can call eventOutput.write to the remote computer every five seconds for hours and no exception is thrown. This makes it impossible to detect that the remote computer is no longer connected.
I see that EventOutput (and ChunkedOutput) has very few methods and properties, but I wonder if there is any way to configure or use it that will cause an exception to be thrown when writing to a remote computer that has been disconnected by having its Ethernet cable pulled or network adapter turned off.
And here is the (good/useful) exception we get in cases where eventOutput.write DOES throw the exception we want:
org.apache.catalina.connector.ClientAbortException: null
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:371) ~[catalina.jar:7.0.53]
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:333) ~[catalina.jar:7.0.53]
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:101) ~[catalina.jar:7.0.53]
at org.glassfish.jersey.servlet.internal.ResponseWriter$NonCloseableOutputStreamWrapper.flush(ResponseWriter.java:303) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.message.internal.CommittingOutputStream.flush(CommittingOutputStream.java:292) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.server.ChunkedOutput$1.call(ChunkedOutput.java:240) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.server.ChunkedOutput$1.call(ChunkedOutput.java:190) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.internal.Errors.process(Errors.java:315) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.internal.Errors.process(Errors.java:242) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:347) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.server.ChunkedOutput.flushQueue(ChunkedOutput.java:190) ~[jaxrs-ri-2.13.jar:2.13.]
at org.glassfish.jersey.server.ChunkedOutput.write(ChunkedOutput.java:180) ~[jaxrs-ri-2.13.jar:2.13.]
at com.appserver.webservice.AgentSsePollingManager$ConnectionChecker.run(AgentSsePollingManager.java:174) ~[AgentSsePollingManager$ConnectionChecker.class:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_71]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[na:1.7.0_71]
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) ~[na:1.7.0_71]
at java.net.SocketOutputStream.write(SocketOutputStream.java:159) ~[na:1.7.0_71]
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215) ~[tomcat-coyote.jar:7.0.53]
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480) ~[tomcat-coyote.jar:7.0.53]
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:119) ~[tomcat-coyote.jar:7.0.53]
at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:799) ~[tomcat-coyote.jar:7.0.53]
at org.apache.coyote.Response.action(Response.java:174) ~[tomcat-coyote.jar:7.0.53]
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:366) ~[catalina.jar:7.0.53]
... 19 common frames omitted
I do not think it will be possible to account for all the possible failures by adding code around SSE sockets, even if Jersey exposed all the information accessible from the socket interface. The only viable solution is proper two-way communication. In the case of an SSE chunked output stream, a pulled cable does not cause any interruption, because nothing tells the sender that the remote host is now unreachable (until the OS closes the socket).
Your first step is right: implement heartbeats every N seconds. Then all you need is for the client to report back with another tiny HTTP call every so often to say it is still listening. Whether it acknowledges every 5 seconds or every minute is up to you; it depends on how quickly you need to detect a problem.
You can do it in the same Jersey resource by implementing a #POST method (in RESTful terms it reads "create a new ack that you received the events"), as sketched below.
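A minimal sketch of such an acknowledgment resource, with illustrative class and path names; the reaper that marks listeners without a recent ack as disconnected is left out:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/listeners")
public class ListenerAckResource {

    // listenerId -> timestamp of the last acknowledgment
    private static final Map<String, Long> LAST_ACK = new ConcurrentHashMap<>();

    @POST
    @Path("{id}/ack")
    public void ack(@PathParam("id") String id) {
        LAST_ACK.put(id, System.currentTimeMillis());
    }

    // True if the listener has acked within the given window.
    public static boolean isAlive(String id, long windowMillis) {
        Long last = LAST_ACK.get(id);
        return last != null && System.currentTimeMillis() - last < windowMillis;
    }
}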
Note: browsers are good at re-establishing SSE connections on their own after network interruptions, so there is no need to fiddle with that.