Java TCP/IP Server Closing Connections Improperly

I've created an MMO for Android and use a Java server with TCP/IP sockets. Everything generally works fine, but after about a day of clients logging on and off, my network becomes extremely laggy -- even when no clients are connected. NETSTAT shows no lingering connections, but something is obviously terribly wrong.
If I do a full reboot, everything is magically fine again, but that isn't a tenable long-term solution. This is what my disconnect method looks like (on both ends):
public final void disconnect()
{
    Alive = false;
    Log.write("Disconnecting " + _socket.getRemoteSocketAddress());
    try
    {
        _socket.shutdownInput();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _socket.shutdownOutput();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _input.close();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _output.close();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _socket.close();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
}
_input and _output are a BufferedInputStream and a BufferedOutputStream spawned from the socket. According to the documentation, calling shutdownInput() and shutdownOutput() shouldn't be necessary, but I'm throwing everything I possibly can at this.
I instantiate the sockets with default settings -- I'm not touching soLinger, keepAlive, noDelay or anything like that. I do not have any timeouts set on send/receive. I've tried using Wireshark, but it reveals nothing unusual, just like NETSTAT.
I'm pretty desperate for answers on this. I've put a lot of effort into this project and am frustrated with what appears to be a serious hidden flaw in Java's default TCP implementation.

Get rid of shutdownInput() and shutdownOutput(), and all the closes except the close of the BufferedOutputStream, plus a subsequent close on the socket itself in a finally block as belt and braces. You are shutting down and closing everything else before the output stream, which prevents it from flushing. Closing the output stream flushes it and closes the socket. That's all you need.
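A trimmed-down disconnect along those lines might look like this (a sketch reusing the fields from the code above, not a drop-in tested replacement):

public final void disconnect()
{
    Alive = false;
    Log.write("Disconnecting " + _socket.getRemoteSocketAddress());
    try
    {
        // Closing the buffered output stream flushes any pending
        // bytes, then closes the socket's output and the socket.
        _output.close();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    finally
    {
        try
        {
            // Belt & braces: ensure the socket is closed even if
            // closing the stream threw.
            _socket.close();
        }
        catch (final Exception e)
        {
            Log.write(e);
        }
    }
}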

OP here, unable to comment on original post.
Restarting the server process does not appear to resolve the issue. The network remains very "laggy" even several minutes after shutting down the server entirely.
By "laggy" I mean the connection becomes extremely slow with both up and down traffic. Trying to load websites, or upload to my FTP, is painfully slow like I'm on a 14.4k modem (I'm on a 15mbs fiber). Internet Speed Tests don't even work when it is in this state -- I get an error about not finding the file, when the websites eventually load up.
All of this instantly clears up after a reboot, and only after a reboot.
I modified my disconnect method as EJP suggested, but the problem persists.
Server runs on a Windows 7 installation with the latest version of Java / the Java SDK. The server has 16 GB of RAM, although it's possible I'm not allocating it properly for the JVM to use fully. No stray threads or processes appear to be present. I'll see what JVISUALVM says. – jysend
Nothing unusual in JVISUALVM -- 10 MB heap, 50% CPU use, 3160 objects (expected), 27 live threads out of 437 started. The server has been running for about 18 hours; loading CNN's front page takes about a minute, and the normal speed test I use (first hit when googling Speed Test) won't even load. NETSTAT shows no lingering connections. Ran a fully up-to-date antivirus scan. The server has run 24/7 in the past without any issues -- it is only since I started running this Java server on it that this started to happen.

Related

How to prevent too many open files from CLOSE_WAIT connections

My program is fetching some images from a MinIO server via their Java SDK.
The issue is that even after inputStream.close() the connections remain open from the Java code. I can see it with lsof -p <PID>.
After a while they disappear, but sometimes not fast enough, and the Java server throws some too many open files errors.
Is there something like a garbage collector that removes the connections from the operating system?
How can I prevent these too many open files errors?
Just in case, here is the code:
public static byte[] getImageByImageBinaryId(String imagId) throws IOException {
    InputStream object = null;
    try {
        object = getMinioClientClient().getObject(ServerProperties.MINIO_BUCKET_NAME, imagId);
        return IOUtils.toByteArray(object);
    } catch (Exception e) {
        log.error(e);
    } finally {
        IOUtils.closeQuietly(object);
    }
    return null;
}
Internally minio-java uses OkHttp to make HTTP calls. OkHttp, like many HTTP clients, internally uses a connection pool to speed up repeated calls to the same location. If you need connections not to persist, you can pass your own OkHttp client, with your own pooling config, to one of the available constructors, but I do not recommend it.
Minio should probably expose a close method to clean up these resources, but their expected use case probably involves clients that live for the whole life of your application.
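If you do decide to supply your own client, a sketch might look like this (assuming a minio-java version with the builder API and OkHttp 3 on the classpath; the endpoint and credentials are placeholders, so verify against the SDK version you actually use):

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;
import io.minio.MinioClient;

public class MinioClientFactory {
    public static MinioClient create() {
        // Pool that keeps at most 5 idle connections and evicts them
        // after 30 seconds instead of OkHttp's default 5 minutes.
        OkHttpClient httpClient = new OkHttpClient.Builder()
                .connectionPool(new ConnectionPool(5, 30, TimeUnit.SECONDS))
                .build();

        return MinioClient.builder()
                .endpoint("https://minio.example.com")  // placeholder endpoint
                .credentials("accessKey", "secretKey")  // placeholder credentials
                .httpClient(httpClient)
                .build();
    }
}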

Tomcat leaves TIME_WAIT connections

We are using an Apache server as the frontend and a Tomcat server for the backend. The frontend client is a Java Swing application. The protocol is Hessian.
Sometimes we get a lot of small requests. When doing a "netstat -a" there are a lot of TIME_WAIT connections, which block the server from opening new connections. Only the connections to the Tomcat seem to stay; the connections to the Apache seem to be closed.
We are using a rewrite rule to forward the requests to the Tomcat:
RewriteEngine On
RewriteCond %{REQUEST_URI} .*\.servlet.*$
RewriteRule ^/(.*)$ http://localhost:8080/$1 [P]
Any ideas?
UPDATE:
Thanks for your advice,
but it still doesn't work. Every stream is closed and there are still these TIME_WAITs:
if (conn != null) {
    try {
        IOUtils.closeQuietly(conn.getInputStream());
    } catch (IOException e) {
        // do nothing
    }
    try {
        IOUtils.closeQuietly(conn.getOutputStream());
    } catch (Exception ex) {
        // do nothing
    }
    try {
        IOUtils.closeQuietly(((HttpURLConnection) conn).getErrorStream());
    } catch (Exception ex) {
        // do nothing
    }
}
if (conn instanceof HttpURLConnection) {
    ((HttpURLConnection) conn).disconnect();
}
Most likely you don't close the Input/Output Stream in the Swing application when making the request.
From here:
If the result is an InputStream, it is very important that the
InputStream.close() be put in a finally block, because Hessian will
not close the underlying HTTP stream until all the data is read and
the input stream is closed.
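Applied to the Swing client, that pattern is just the following (a sketch; service and download() stand in for whatever Hessian proxy and method you actually call):

InputStream in = null;
try {
    in = service.download(id);  // hypothetical Hessian call returning an InputStream
    // ... read the stream fully ...
} finally {
    if (in != null) {
        in.close();  // lets Hessian release the underlying HTTP connection
    }
}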
TIME_WAIT is the normal state of a recently closed connection; it will remain there for a kernel-defined timeout. You did everything you could from the application's side.
If connections flicker too fast and you experience a shortage of available ports, you could tweak your system's recycling of them. Huge amount of TIME_WAIT connections gives an overview for Linux as well as basic theory; Tuning Windows for TCP/IP performance lists the corresponding Windows parameters with a much briefer explanation.
It's not Tomcat, it's your operating system.
The OS leaves connections in TIME_WAIT in order to avoid interference caused by too-quick port reuse. Imagine packets from the old connection arriving late while a new connection is open on the same port.

Could a delay in PING cause a file descriptor leak?

I have a Java application running on a JBoss server.
Whenever a ping delay (i.e. a network issue) happens, the number of file descriptors grows tremendously and never comes back down; the only remedy is restarting the JVM.
If a PING from the server to the client isn't arriving in time, say because it's taking too long due to network slowness, could that be the cause of a file descriptor leak?
It depends on how the pinging is implemented by your application ....
However, it is plausible that timeouts on a Socket or DatagramSocket could lead to an exception that (if not handled properly) could lead to a file descriptor leak.
Here's an example of the wrong, and then right way to do this kind of thing:
// Wrong way
Socket sock = new Socket(...);
sock.connect();
// do stuff including a read that might time out
sock.close();
The problem is that the time-out exception is liable to cause the close() call to be skipped as control jumps to the handler for the exception ... further up the stack.
// Right way (Java 6 and earlier)
Socket sock = new Socket(...);
try {
    sock.connect();
    // do stuff including a read that might time out
} finally {
    sock.close();
}

// Right way (Java 7 and later)
try (Socket sock = new Socket(...)) {
    sock.connect();
    // do stuff including a read that might time out
}
In the latter case, there is an implicit handler that closes sock automatically.
Obviously, you will need to adapt this pattern to what your real code is doing.

Java threaded socket connection timeouts

I have to make simultaneous TCP socket connections every x seconds to multiple machines, in order to get something like a status-update packet.
I use a Callable thread class, which creates a future task that connects to each machine, sends a query packet, and receives a reply which is returned to the main thread that creates all the callable objects.
My socket connection class is :
public class ClientConnect implements Callable<String> {
    Connection con = null;
    Statement st = null;
    ResultSet rs = null;
    String hostipp, hostnamee;

    ClientConnect(String hostname, String hostip) {
        hostnamee = hostname;
        hostipp = hostip;
    }

    @Override
    public String call() throws Exception {
        return GetData();
    }

    private String GetData() {
        Socket so = new Socket();
        SocketAddress sa = null;
        PrintWriter out = null;
        BufferedReader in = null;
        try {
            sa = new InetSocketAddress(InetAddress.getByName(hostipp), 2223);
        } catch (UnknownHostException e1) {
            e1.printStackTrace();
        }
        try {
            so.connect(sa, 10000);
            out = new PrintWriter(so.getOutputStream(), true);
            out.println("\1IDC_UPDATE\1");
            in = new BufferedReader(new InputStreamReader(so.getInputStream()));
            String[] response = in.readLine().split("\1");
            out.close();
            in.close();
            so.close();
            so = null;
            try {
                Integer.parseInt(response[2]);
            } catch (NumberFormatException e) {
                System.out.println("Number format exception");
                return hostnamee + "|-1";
            }
            return hostnamee + "|" + response[2];
        } catch (IOException e) {
            try {
                if (out != null) out.close();
                if (in != null) in.close();
                so.close();
                so = null;
                return hostnamee + "|-1";
            } catch (IOException e1) {
                // TODO Auto-generated catch block
                return hostnamee + "|-1";
            }
        }
    }
}
And this is the way I create a pool of threads in my main class:
private void StartThreadPool()
{
    ExecutorService pool = Executors.newFixedThreadPool(30);
    List<Future<String>> list = new ArrayList<Future<String>>();
    for (Map.Entry<String, String> entry : pc_nameip.entrySet())
    {
        Callable<String> worker = new ClientConnect(entry.getKey(), entry.getValue());
        Future<String> submit = pool.submit(worker);
        list.add(submit);
    }
    for (Future<String> future : list) {
        try {
            String threadresult;
            threadresult = future.get();
            //........ PROCESS DATA HERE!..........//
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
The pc_nameip map contains (hostname, hostip) values, and for every entry I create a ClientConnect thread object.
My problem is that when my list of machines contains, let's say, 10 PCs (most of which are not alive), I get a lot of timeout exceptions (on the alive PCs) even though my timeout limit is set to 10 seconds.
If I force the list to contain a single working PC, I have no problem.
The timeouts are pretty random; no clue what's causing them.
All machines are on a local network; the remote servers were written by me as well (in C/C++) and have been working in another setup for more than 2 years without any problems.
Am I missing something, or could it be an OS network restriction problem?
I am testing this code on Windows XP SP3. Thanks in advance!
UPDATE:
After creating two new server machines, and keeping one that was getting a lot of timeouts, I have the following results:
For 100 thread runs over 20 minutes:
NEW_SERVER1: 99 successful connections / 1 timeout
NEW_SERVER2: 94 successful connections / 6 timeouts
OLD_SERVER: 57 successful connections / 43 timeouts
Other info:
- I experienced a JRE crash (EXCEPTION_ACCESS_VIOLATION (0xc0000005)) once and had to restart the application.
- I noticed that while the app was running, my network connection was struggling as I was browsing the internet. I have no idea if this is expected, but I think having at most 15 threads is not that much.
So, first of all, my old server had some kind of problem. No idea what that was, since my new servers were created from the same OS image.
Secondly, although the timeout percentage has dropped dramatically, I still think it is uncommon to get even one timeout in a small LAN like ours. But this could be a problem in the server application.
Finally, my point of view is that, apart from the old server's problem (I still cannot believe I lost so much time with that!), there must be either a server app bug, or a JDK-related bug (since I experienced that JRE crash).
p.s. I use Eclipse as IDE and my JRE is the latest.
If any of the above ring any bells to you, please comment.
Thank you.
-----EDIT-----
Could it be that PrintWriter and/or BufferedReader are not actually thread safe????!!!?
----NEW EDIT 09 Sep 2013----
After re-reading all the comments, and thanks to @Gray and his comment:
When you run multiple servers does the first couple work and the rest of them timeout? Might be interesting to put a small sleep in your fork loop (like 10 or 100ms) to see if it works that way.
I rearranged the list of hosts/IPs and got some really strange results.
It seems that if an alive host is placed at the top of the list, thus being first to start a socket connection, it has no problem connecting and receiving packets, without any delay or timeout.
On the contrary, if an alive host is placed at the bottom of the list, with several dead hosts before it, it just takes too long to connect, and with my previous timeout of 10 secs it failed to connect. But after changing the timeout to 60 seconds (thanks to @EJP) I realised that no timeouts are occurring!
It just takes too long to connect (more than 20 seconds on some occasions).
Something is blocking new socket connections, and it isn't that the hosts or the network are too busy to respond.
I have some debug data here, if you would like to take a look :
http://pastebin.com/2m8jDwKL
You could simply check for availability before you connect to the socket. There is an answer that provides some kind of hackish workaround: https://stackoverflow.com/a/10145643/1809463
Process p1 = java.lang.Runtime.getRuntime().exec("ping -c 1 " + ip);
int returnVal = p1.waitFor();
boolean reachable = (returnVal==0);
by jayunit100
It should work on Unix and Windows, since ping is a common program (though note the count flag is -c on Unix and -n on Windows).
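A pure-Java alternative that avoids spawning a process is InetAddress.isReachable(), for example (a sketch; it uses ICMP only when the JVM has the required privileges, and otherwise falls back to a TCP probe on port 7):

import java.io.IOException;
import java.net.InetAddress;

static boolean isAlive(String ip) {
    try {
        // Waits at most 2 seconds for a reply.
        return InetAddress.getByName(ip).isReachable(2000);
    } catch (IOException e) {
        return false;
    }
}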
My problem is that when my list of machines contains lets say 10 pcs (which most of them are not alive), i get a lot of timeout exceptions (in alive pcs) even though my timeout limit is set to 10 seconds.
So as I understand the problem, if you have (for example) 10 PCs in your map and 1 is alive and the other 9 are not online, all 10 connections time out. If you just put the 1 alive PC in the map, it shows up as fine.
This points to some sort of concurrency problem but I can't see it. I would have thought that there was some sort of shared data that was not being locked or something. I see your test code is using Statement and ResultSet. Maybe there is a database connection that is being shared without locking or something? Can you try just returning the result string and printing it out?
Less likely is some sort of network or firewall configuration but the idea that one failed connection would cause another to fail is just strange. Maybe try running your program on one of the servers or from another computer?
If I try your test code, it seems to work fine. Here's the source code for my test class. It has no problems contacting a combination of online and offline hosts.
Lastly some quick comments about your code:
You should close the streams, readers, and sockets in a finally block. Check my test class for a better pattern there.
You should return a small Result class instead of passing back a String that has to be parsed.
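To illustrate the first point, GetData could be restructured with try-with-resources (Java 7+) so every path closes the socket and streams (a sketch, not the test class linked above):

private String GetData() {
    try (Socket so = new Socket()) {
        so.connect(new InetSocketAddress(InetAddress.getByName(hostipp), 2223), 10000);
        try (PrintWriter out = new PrintWriter(so.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(so.getInputStream()))) {
            out.println("\1IDC_UPDATE\1");
            String[] response = in.readLine().split("\1");
            Integer.parseInt(response[2]);  // sanity check; failure drops to the catch below
            return hostnamee + "|" + response[2];
        }
    } catch (Exception e) {
        // Connect failures, timeouts, and malformed replies all end up here.
        return hostnamee + "|-1";
    }
}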
Hope this helps.
After a lot of reading and experimentation I will have to answer my own question (if I am allowed to, of course).
Java just can't handle concurrent multiple socket connections without adding a big performance overhead. At least on a Core2Duo / 4 GB RAM / Windows XP machine.
Creating multiple concurrent socket connections to remote hosts (using, of course, the code I posted) creates some kind of resource bottleneck, or blocking situation, which I am still not aware of.
If you try to connect to 20 hosts simultaneously, and a lot of them are disconnected, then you cannot guarantee a "fast" connection to the alive ones.
You will get connected, but it could be after 20-25 seconds. Meaning that you'd have to set the socket timeout to something like 60 seconds (not acceptable for my application).
If an alive host is lucky enough to start its connection attempt first (bearing in mind that concurrency is not absolute; the for loop is still sequential), then it will probably get connected very fast and get a response.
If it is unlucky, the socket.connect() method will block for some time, depending on how many of the hosts before it will eventually time out.
After adding a small sleep between the pool.submit(worker) calls (100 ms), as shown below, I realised that it makes some difference. I get to connect faster to the "unlucky" hosts. But if the list of dead hosts grows, the results are almost the same.
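For reference, the stagger is nothing more than a sleep in the submit loop (sketch):

for (Map.Entry<String, String> entry : pc_nameip.entrySet()) {
    list.add(pool.submit(new ClientConnect(entry.getKey(), entry.getValue())));
    try {
        Thread.sleep(100);  // stagger connection attempts by 100 ms
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}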
If I edit my host list and place a previously "unlucky" host at the top (before the dead hosts), all problems disappear...
So, for some reason the socket.connect() method creates a form of bottleneck when many of the hosts to connect to are not alive. Be it a JVM problem, an OS limitation or bad coding on my side, I have no clue...
I will try a different coding approach and hopefully tomorrow I will post some feedback.
p.s. This answer made me think of my problem :
https://stackoverflow.com/a/4351360/2025271

Winsock "connect" hangs. Visual studio reports possible deadlock

I have this code. (Used it in another old project of mine, where it worked wonderfully.)
SOCKET Connect(char * host, int port){
    struct sockaddr_in sin = {0};
    struct hostent * entry = 0;
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if(s == INVALID_SOCKET){
        return INVALID_SOCKET;
    }
    entry = gethostbyname(host);
    if(entry == 0){
        closesocket(s);
        return INVALID_SOCKET;
    }
    sin.sin_addr = *((LPIN_ADDR)*entry->h_addr_list);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    // The process becomes deadlocked after this line
    if( connect(s,(const LPSOCKADDR)&sin,sizeof(SOCKADDR)) == SOCKET_ERROR){
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;
}
I started this morning working on a Delphi project using TTcpClient and Indy's TIdTcpClient wrappers, and I noticed the process did not make any connections; it just hung after calling connect. I then switched to C/C++ and tried this code, which does the same thing. After it hangs, there's no way to kill it (unless it's being debugged, in which case I had to exit the debugger). Task Manager and Process Explorer didn't do shit.
There are no threads or loops or anything else that might cause it to hang, just this code and another function that writes to the socket after it connects.
When debugging with Visual Studio, after some time there's a message (below).
Even Wireshark doesn't show anything at all. Restarted my computer and still the same problem.
So has anyone ever had this problem before?
Used compilers
Visual Studio 2010
Pelles-C
Delphi 7
OS : Windows 7 64 bit, Ultimate
Winsock Version: 2.2
Update:
So I thought I would get away and switched to Java, only to hit the same problem after a couple of tries. What the hell is wrong here? The Java version takes around 2 minutes to connect, even on localhost. This simple code takes ~2 minutes, during which java.exe can't be killed either.
long startTime = System.currentTimeMillis(), endTime;
Socket clientSock = new Socket("localhost",80); // running Apache on localhost
endTime = System.currentTimeMillis();
Log("Connection time " + (endTime - startTime) + " ms");
clientSock.close();
run:
Connection time 125088 ms
As for Java, I did some searches and this problem was a bug in version 1 of the JDK, but the change log showed it had been patched. But then again, this happens in the underlying Winsock library. WHY? This program connects instantly and it also uses Winsock: http://flatassembler.net/examples/quetannon.zip
So now I have to re-write 976 lines of Java in assembly just because of this? Help me out here, people.
Since you are encountering the same problem in multiple wrappers that all ultimately delegate to Winsock, it's safe to assume that this is an OS issue, not a coding issue. Something on your system has hosed your Winsock installation, or the OS is having networking problems in general, especially since a simple OS reboot did not clear the issue. Try using Windows' command-line netsh tool to reset both the TCP and Winsock subsystems, use the command-line ipconfig tool to flush the DNS cache, reboot, and see if the problem continues.
On the coding side, you should implement a timeout on the connect() to avoid further deadlocks. There are two ways to do that:
Put the socket into non-blocking mode and then call select() if connect() returns a WSAEWOULDBLOCK error. If select() times out, close the socket.
Leave the socket in blocking mode and use a separate thread to manage the timeout. Call connect() in the thread, or run your timeout logic in the thread, it does not really matter, but if the timeout elapses while connect() is still running then you can close the socket, aborting connect(). This is the approach that TIdTCPClient uses.
OK. For the Java part at least, I solved it by using the following code, based on the answer here: Java Socket creation takes more time.
So basically the default timeout value is (possibly) huge. What I did was set a 3-second timeout; once the timeout exception is thrown, the next call works instantly.
private static final int CONNECT_TIMEOUT = 3000; // 3 seconds

private static Socket AttemptConnection(String host, int port) {
    Socket temp;
    try {
        temp = new Socket();
        temp.connect(new InetSocketAddress(host, port), CONNECT_TIMEOUT);
        return temp;
    } catch (Exception ex) {
        temp = null;
        lastException = ex.getMessage();
        return temp;
    }
}
And somewhere in your code (at least in my app)
while ((clientSock = AttemptConnection("localhost", 80)) == null) {
    Log("Attempting connection. Last exception: " + lastException);
    try { Thread.sleep(2500); } catch (Exception ex) {} /* This is necessary in my application */
}
So looking at this, I think the fix for all the socket implementations (Java, Delphi, etc.) is to set a small timeout value and then connect again.
EDIT:
The root of the problem was found: I have a HIPS program (COMODO Firewall) running on my laptop. If COMODO's cmdagent.exe is active, it shows me an alert for an outgoing connection, which I can accept/deny. If not, it silently denies the connection, so something becomes deadlocked at a low level. I was worried my PC was effed up.
