Netty connection timeout on other PC - java

I wrote a server and a client in Java using Netty. When I run the client on my PC it works just fine, but when I try to run the client on another PC it throws:
java.net.ConnectException: connection timed out
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processConnectTimeout(NioClientSocketPipelineSink.java:391)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:289)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
17-Sep-2012 10:58:55 AM org.jboss.netty.channel.SimpleChannelHandler
What is the reason for this?

Check the connection parameters.
Is the server visible from the other client? (Try pinging the server from the client.)
Are there any firewalls in between? Try switching them off.
Check the connection string. Make sure you aren't connecting to localhost.
Check the server configuration. Does it listen on the proper network interface? (A minimal bind sketch follows below.)
If you check everything and it all seems OK, post the network connection code here.
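A sketch of the bind step, assuming Netty 3.x (matching the stack trace above) and a placeholder port; the key point is binding to the wildcard address rather than localhost:

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),    // boss threads
                Executors.newCachedThreadPool()));  // worker threads
// bootstrap.setPipelineFactory(...) goes here
// new InetSocketAddress(port) binds to 0.0.0.0 (all interfaces);
// new InetSocketAddress("127.0.0.1", port) would accept only local clients.
bootstrap.bind(new InetSocketAddress(8080));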
Happy coding :)

Related

Looking for exact cause and resolution of : java.io.IOException: An existing connection was forcibly closed by the remote host

The exception is thrown using Apache FTPClient in LOCAL_PASSIVE_MODE. The process is running from a PC at a remote site over a satellite connection which I know to be less than stable. The same code works flawlessly over a stable connection to the same server, but I'm not sure whether the difference comes from the stability of the connection or from the speed (i.e., server-configured timeouts).
My questions are as follows :
Is it necessarily the case that the connection was closed by the remote host (as stated in the exception), or is it possible that an interrupted internet connection will generate the same exception?
If it is true that an interrupted internet connection will cause this exception, how can I parameterize the FTPClient, or the underlying Socket, to retry and resume the connection?
How can I test whether the connection was closed by the remote server or interrupted?
The FTPClient is configured for the following (a sketch of the equivalent calls follows the list):
Connection timeout : 10 minutes
Data timeout : 10 minutes
Keep alive : enabled
Keep alive signals sent every 10 seconds
Keep alive reply from server timeout : 10 minutes
FTP buffer size : 1024 x 1024
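For reference, these settings roughly map to the following Apache Commons Net calls (a sketch; the values are taken from the list above):

import org.apache.commons.net.ftp.FTPClient;

FTPClient ftp = new FTPClient();
ftp.setConnectTimeout(10 * 60 * 1000);               // connection timeout, ms
ftp.setDataTimeout(10 * 60 * 1000);                  // data timeout, ms
ftp.setControlKeepAliveTimeout(10);                  // keep-alive signal every 10 s (also enables keep-alive)
ftp.setControlKeepAliveReplyTimeout(10 * 60 * 1000); // keep-alive reply timeout, ms
ftp.setBufferSize(1024 * 1024);                      // FTP buffer size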
I am waiting to receive the server configuration file and the server logs.
The stack trace is as follows :
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at org.apache.mina.transport.socket.nio.NioProcessor.read(NioProcessor.java:317)
at org.apache.mina.transport.socket.nio.NioProcessor.read(NioProcessor.java:45)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:683)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:659)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:648)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.access$600(AbstractPollingIoProcessor.java:68)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:1120)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Is it necessarily the case that the connection was closed by the remote host (as stated in the exception), or is it possible that an interrupted internet connection will generate the same exception?
No. The local host received an RST from the peer. If the internet connection was interrupted, this would cause the local TCP stack to abort the connection, eventually, with a different message, such as 'software caused connection abort' or 'the connection was aborted by the software in your local host', whatever the exact wording is on your system. If it says 'by the remote host', it means it.
If it is true that an interrupted internet connection will cause this exception
It isn't.
how can I parameterize the FTPClient, or the underlying Socket to retry and resume the connection?
I can't answer for the FTPClient, but a Socket that has had this exception is dead and must be closed. It cannot retry anything.
It would be more to the point to examine why the peer aborted the connection. For example, you may be violating an upload-size limit.
How can I test if the connection was closed by the remote server or interrupted?
Via the error message, unfortunately. They could have mapped the various errno values that can arise in sockets to different IOException or indeed SocketException subclasses, but they only did so in a few cases, such as ConnectException, SocketTimeoutException, ...
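Since a socket that has thrown this exception is dead, any retry has to happen a level up, by discarding the connection and building a new one. A rough sketch with Apache Commons Net; the host, credentials, retry count, and offset tracking are placeholders:

import java.io.IOException;
import org.apache.commons.net.ftp.FTPClient;

FTPClient ftp = new FTPClient();
long bytesAlreadyTransferred = 0L;                  // track this during the transfer
for (int attempt = 0; attempt < 3; attempt++) {     // retry count is arbitrary
    try {
        ftp.connect("ftp.example.com");             // placeholder host
        ftp.login("user", "password");              // placeholder credentials
        ftp.enterLocalPassiveMode();
        ftp.setRestartOffset(bytesAlreadyTransferred); // resume a partial transfer
        // ... perform the transfer, updating bytesAlreadyTransferred ...
        break;                                      // success
    } catch (IOException e) {
        try { ftp.disconnect(); } catch (IOException ignored) { }
        // the old socket is dead; loop around and reconnect from scratch
    }
}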

javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake while load testing with JMeter

I am doing a load test with JMeter of my web application, which is hosted on Tomcat 7 with SSL configured.
My load test works fine for 100, 200, 300, and 500 HTTPS GET requests; the URL is below:
https://testapi.myapp.com/myapptor/tempo/getinfo?id=4E92D41E&groupid=test
But when I try to put more load on it, say 600 or more requests, I get the error below as the response for some of the GET requests, while others return the correct response data:
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:436)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.jmeter.protocol.http.sampler.MeasuringConnectionManager$MeasuredConnection.open(MeasuringConnectionManager.java:107)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:643)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:517)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:331)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:74)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1146)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1135)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:434)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:261)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(Unknown Source)
... 20 more
Can you please advise me why I am getting the above error, and who the 'Remote host' mentioned in the error is?
My Tomcat server has maxThreads set to 300 and acceptCount set to 100.
That's exactly the kind of thing that starts happening when the load is too great for your system. 'Remote host' means the Tomcat instance.
Either accept that 500 connections is the most your system will handle, or start investigating scaling up.
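If you do scale up, the relevant knobs live on the HTTPS connector in Tomcat's server.xml. A sketch with illustrative values, not a recommendation (keystore attributes omitted):

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           maxThreads="600" acceptCount="200" />

With maxThreads="300" and acceptCount="100", roughly 400 connections can be in service or queued at once, which lines up with failures starting around 600 concurrent requests.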
This might be a bug in Tomcat that has recently been fixed. Please have a look at this bug report and see if it matches your issue.

Java Socket times out when I try connecting to server with external IP

I'm trying to make a simple Pong game with multiplayer support, but when I try to connect to the server application with my external IP it fails.
What I've Tried
localhost, 127.0.0.1, and 192.168.0.10 all work.
When I check whether my port is open with an external tool, it always returns true right away if I have the server running.
Turning off the firewall or adding exceptions hasn't helped.
Pinging my external IP returns an instant response.
Code for sockets creation in Java and exception
Server Socket Creation
serverSocket = new ServerSocket(7777);
for (;;) {
    socket = serverSocket.accept();
}
Client Socket Creation
socket = new Socket(IP, 7777);
Exception thrown by client
java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at Client.<init>(Client.java:93)
at Client.main(Client.java:153)
From what you've told us, the setup looks like this:
You have port forwarded port 7777 from external IP to internal IP B
Your server listens on internal IP B and 127.0.0.1
Your client successfully connects to internal IP B:7777 and 127.0.0.1:7777
Your client does not connect if you point it to external IP:7777
This setup just does not work with most home routers/NAT gateways: they will not port forward a TCP connection destined for the external IP that comes from the internal network itself - they'll only port forward connections that actually come from the outside (the internet).
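There is no code-level fix for missing NAT hairpinning, but for testing from inside the LAN a fallback is possible; a sketch, where EXTERNAL_IP and INTERNAL_IP are placeholders:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Try the external (port-forwarded) address first with a short timeout, then
// fall back to the LAN address when the router won't hairpin the connection.
Socket socket = new Socket();
try {
    socket.connect(new InetSocketAddress(EXTERNAL_IP, 7777), 3000); // 3 s timeout
} catch (IOException e) {
    socket = new Socket();
    socket.connect(new InetSocketAddress(INTERNAL_IP, 7777), 3000);
}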
Have you tried accepting the client connection yet? Via the accept method on serverSocket:
boolean isStopped = false;
while (!isStopped) {
    Socket clientSocket = serverSocket.accept();
    // ... handle clientSocket ...
}
A connection timeout means your client doesn't get a response to its request. Possible problems are:
a) The IP/domain or port is incorrect
b) The IP/domain or port (i.e service) is down
c) The IP/domain is taking longer than your default timeout to respond
d) You have a firewall that is blocking requests or responses on whatever port you are using
e) You have a firewall that is blocking requests to that particular host
f) Your internet access is down
Note that firewalls and port or IP blocking may also be imposed by your ISP. A quick probe to distinguish some of these cases is sketched below.
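A minimal reachability probe (host and port are placeholders); a refused connection, a silent timeout, and a successful connect each surface differently:

import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

try (Socket probe = new Socket()) {
    probe.connect(new InetSocketAddress("example.com", 7777), 5000); // 5 s timeout
    System.out.println("reachable - something is listening");
} catch (SocketTimeoutException e) {
    System.out.println("timed out - host unreachable or traffic silently dropped");
} catch (ConnectException e) {
    System.out.println("refused - host reachable but nothing listening on the port");
} catch (IOException e) {
    System.out.println("other I/O error: " + e);
}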

What could cause socket ConnectException: Connection timed out?

We have a Webstart client that communicates to the server by sending serialized objects over HTTPS using java.net.HttpsURLConnection.
Everything works perfectly fine on my local machine and on test servers located in our office, but I'm experiencing a very, very strange issue which is only occurring on our production and staging servers (and sporadically at that). The main difference I know of between those servers and the ones in our office is that they are located elsewhere and client-server communication with them is considerably slower, but it worked fine for a long time in production prior to this as well.
Anyway, here's what's happening:
The client, after setting options such as read timeout and properties such as Content-Type on the HttpURLConnection, calls getOutputStream() on it to get the stream to write to.
At this point, from what I can tell, the client hangs for some period of time.
The client then throws the following exception:
java.net.ConnectException: Connection timed out: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(Unknown Source)
at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(Unknown Source)
at com.sun.net.ssl.internal.ssl.BaseSSLSocketImpl.connect(Unknown Source)
at sun.net.NetworkClient.doConnect(Unknown Source)
at sun.net.www.http.HttpClient.openServer(Unknown Source)
at sun.net.www.http.HttpClient.openServer(Unknown Source)
at sun.net.www.protocol.https.HttpsClient.<init>(Unknown Source)
at sun.net.www.protocol.https.HttpsClient.New(Unknown Source)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(Unknown Source)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(Unknown Source)
Note that this is not a SocketTimeoutException, which the connect() method on HttpURLConnection says it throws if the timeout expires before a connection can be established. Also, when this happens I am able to call conn.getResponseCode() and I get a response code of 200.
On the server side, an EOFException is thrown in ObjectInputStream's constructor, which tries to read the serialization header but fails because the client never gets the OutputStream to write to.
In case it helps, here are the calls being made on the HttpsURLConnection prior to the call to getOutputStream() (edited to show only the calls being made rather than the whole structure of the code doing this):
HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
conn.setUseCaches(false);
conn.setReadTimeout(30000);
conn.setRequestProperty("Cookie", cookie);
conn.setDoOutput(true);
conn.setRequestProperty("Content-Type", "application/x-java-serialized-object");
conn.getOutputStream();
The thing is, I have no idea how any of this could be happening, especially given that it only happens occasionally (no clear pattern of activity that I can tell) and even then only when there's (relatively) high latency between the client and the server.
Given what I've been able to find so far about java.net.ConnectException: Connect timed out, I wondered if it weren't some network or firewall issue on the network our servers are running on... but that doesn't make much sense to me given that the request is clearly getting through to the servlet. Also, other apps running on the same network have not reported similar issues.
Does anyone have any idea what the cause of this could be, or even what I should investigate?
We have come across this in a similar case to yours, usually at high load and not easy to reproduce in testing. We have not fixed it yet, but these are the steps we went through:
If it were a firewall issue, we would expect a Connection Refused or a SocketTimeoutException instead.
1) Are you able to track these requests in the access log on the server - do they show an HTTP status 200 or 404 or something else? In our case, the server (IIS in this case) logs showed the client closed the connection and not the server. So that was a mystery.
Update: If the client always gets a 200, then the server has actually sent back some response but I suspect the response byte-size (if this is recorded in the access logs) will show a different value from that of the normal response size for that request.
If it shows the same size of response, then you have a (may be not plausible) condition that the server actually responded correctly but the client did not get the response back because the connection terminated somewhere in between.
2) The network admin teams looked at the TCP/IP traffic to determine which end (or intermediate router) was terminating the HTTP / TCP conversation. Once we understood which end was terminating the connection, the next step was to look at why. Someone knowledgeable enough could run snoop.
3) Is there a max number of requests configured/restricted on the server - and is that throttling your connections?
4) Are there any intermediate load balancers at which requests could be dropped?
Update: One more thing we wanted to do, but did not complete, is to create a static route between client and server to reduce the number of hops in between and ensure no network-related connection drops. See http://en.wikipedia.org/wiki/Static_routing
5) Another suggestion is setting the ConnectTimeout too, to see if things work with a higher value (see the sketch after this list).
Update: You might want to try conn.getErrorStream()
Returns the error stream if the connection failed but the server sent useful data nonetheless. If the connection was not connected, or if the server did not have an error while connecting, or if the server had an error but no error data was sent, this method will return null.
6) Could also try taking a set of thread dumps on the server 5 seconds apart, to see if any thread shows these incoming requests on the server.
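A sketch combining suggestion 5) with the getErrorStream() hint above; url is the request URL from the question's snippet, payload is a placeholder byte array, and the timeout values are illustrative:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.net.ssl.HttpsURLConnection;

HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
conn.setConnectTimeout(30000); // fail with SocketTimeoutException instead of
conn.setReadTimeout(30000);    // waiting for the OS-level connect timeout
conn.setDoOutput(true);
try (OutputStream out = conn.getOutputStream()) {
    out.write(payload);        // payload is a placeholder byte[]
} catch (IOException e) {
    InputStream err = conn.getErrorStream(); // may be null, per the javadoc above
    // inspect/log err here to see what, if anything, the server sent back
}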
Update: As of today we have learnt to live with this problem, because we totalled the failure rate at 200-300 out of 400,000 requests per day, which is about 0.075%.
We also experience sporadic timeouts when using it on our servers. We are able to fix it with two things:
Use a specific Content-Length via setFixedLengthStreamingMode (brought the error rate down from ~150 to 10; a sketch follows below)
Retry if a timeout occurs (error rate from 10 to 0; after at most one retry everything went through)
pseudo code:
// set timeouts to 6 s (since a retry is in place)
try {
    // open the connection here, write, etc.
} catch (java.io.InterruptedIOException e) {
    // read or connect timed out: try again
}
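And a sketch of the first fix, assuming the request body is already serialized; serialize(obj) is a hypothetical helper producing the bytes:

byte[] payload = serialize(obj);                  // hypothetical serialization helper
conn.setFixedLengthStreamingMode(payload.length); // must be set before connecting
conn.getOutputStream().write(payload);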
Another theory why this is happening could be the following:
In the documentation of the HttpURLConnection/HttpsURLConnection one can read the following:
Each HttpURLConnection instance is used to make a single request but the underlying network connection to the HTTP server may be transparently shared by other instances.
So calling close() alone would be OK, but also calling disconnect() would terminate the socket for the other users of the transparently shared connection, and they would then run into a SocketTimeoutException once the timeout period is reached.
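A sketch of the distinction, where conn is an HttpURLConnection as in the earlier snippets:

import java.io.InputStream;

// Draining and closing the stream returns the connection to the keep-alive
// cache for transparent reuse; disconnect() would tear down the socket that
// other HttpURLConnection instances may be sharing.
InputStream in = conn.getInputStream();
byte[] buf = new byte[8192];
while (in.read(buf) != -1) {
    // consume the entire response body
}
in.close();           // the underlying connection can now be reused
// conn.disconnect(); // would close the shared socket for everyone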

java RMI connection to server

I have a very simple RMI client / server application. I don't use the "rmiregistry" application, though; I use this to create the server:
server = new RemoteServer();
registry = LocateRegistry.createRegistry(PORT);
registry.bind("RemoteServer", server);
The client part is:
registry = LocateRegistry.getRegistry(IPADDRESS, PORT);
remote = (IRemoteServer) registry.lookup("RemoteServer");
Here is the fascinating problem: The application works perfectly when both server and client are running in my (private) local network. As soon as I put the server on a public server, the application hangs for a minute, then gives me the following error:
java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
java.rmi.ConnectException: Connection refused to host: 192.168.x.y; nested exception is:
java.net.ConnectException: Connection timed out: connect
at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
at sun.rmi.transport.Transport$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source ... (the rest is truncated)
The key, I think, is that the client (running on my private network) cannot connect back to itself (my address is 192.168.x.y, where x.y are some other numbers, but the real error message shows my IP address listed there).
If I kill the RMI server on the public internet, then I instantly get a "connection refused to host: a.b.c.d" message, so I know that something at the server end is at least working.
Any suggestions?
EDIT: Just to make this a little clearer: 192.168.x.y is the client address, a.b.c.d is the server address. The stack trace shows the client cannot connect to the client.
Try running the server with this parameter for the virtual machine:
-Djava.rmi.server.hostname=hostname_of_the_server
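Equivalently, the property can be set programmatically, provided it happens before the remote object is exported; here a.b.c.d stands for the server's public address from the question:

// The value of java.rmi.server.hostname is baked into the stub at export
// time, so set it before createRegistry()/bind().
System.setProperty("java.rmi.server.hostname", "a.b.c.d");
Registry registry = LocateRegistry.createRegistry(PORT);
registry.bind("RemoteServer", server);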
192.168.* contains private IP addresses (as does 10.*). These will not be routed on the open internet. You need to make sure that you are using a public IP address for the server, and that any firewalls (including NAT) are configured for access. You can do a quick check with telnet on the required port.
I would guess that the server tries to open a socket back to the client and that this fails, and that the exception is a bit unclear in its wording.
RMI is not very NAT-friendly, IIRC.
