As most of us know, a WebSocket keeps the connection between server and client open in order to achieve server push, unlike server pull, where the connection does not remain open. My question is: how many TCP connections can be open at one time? What is the limitation of server push compared to server pull in this regard?
The default maximum number of WebSocket connections allowed in Firefox is 200.
Source: https://developer.mozilla.org/en/docs/WebSockets#Gecko_7.0
The comment at line #48 of http://src.chromium.org/viewvc/chrome/trunk/src/net/socket/client_socket_pool_manager.cc?r1=128044&r2=128043&pathrev=128044 suggests the limits are dynamic, with a minimum of 6 for the normal socket pool and 30 for the WebSocket pool.
More Info: https://groups.google.com/a/chromium.org/forum/#!topic/chromium-reviews/4sHNK-Eotn0
Related
I am running a multi-threaded Java network application on my server, and when I increase the thread count to more than 2000, I start seeing "Connection refused" exceptions when connecting to an external server. This is on the client side; I do not get errors when I run 1000 threads against different servers.
Is there any way to increase this limit?
No doubt about it: each server has a limit on incoming connections; occupying all of them is what DoS attacks are based on.
Using a thread per connection in Java is a bad idea when you are handling 2000 connections. You might want to look at Java NIO instead.
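A minimal sketch of the selector-based alternative, where one thread services many non-blocking sockets; the endpoint, connection count, and buffer size are placeholders, not a drop-in replacement for your application:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class NioClientSketch {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            for (int i = 0; i < 2000; i++) {
                SocketChannel ch = SocketChannel.open();
                ch.configureBlocking(false);
                ch.connect(new InetSocketAddress("example.com", 80)); // placeholder endpoint
                ch.register(selector, SelectionKey.OP_CONNECT);
            }
            ByteBuffer buf = ByteBuffer.allocate(8192);
            while (true) {
                selector.select();                        // block until some channel is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    SocketChannel ch = (SocketChannel) key.channel();
                    if (key.isConnectable() && ch.finishConnect()) {
                        key.interestOps(SelectionKey.OP_READ);  // connection established
                    } else if (key.isReadable()) {
                        buf.clear();
                        if (ch.read(buf) == -1) {
                            key.cancel();
                            ch.close();                   // peer closed the connection
                        }
                        // ... hand buf off to application logic ...
                    }
                }
            }
        }
    }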
I have a Java client that connects to MQ with 10 connections. These remain open for as long as the Java client runs. For each thread we create a message, create a session, send the message and close the session. We are using the Spring CachingConnectionFactory and have a sessionCacheSize of 100. We have been told by our MQ engineering team that our queue manager has a maximum of 500 connections and that we are exceeding this. The qm.ini file has:
maxChannels=500
maxActiveChannels=256
maxHandles=256
What I have observed in MQ Explorer is that the open output count on the queue remains static at 10; however, if we load balance across 2 queues it is 10 on each, even though we still only have 10 connections. So what I'd like to know is: what do JMS connections and sessions equate to in MQ terminology?
I thought that a connection equates to an active channel and a session to a handle, so it would be the handles we are exceeding, since the number of sessions we open (and close) runs into the hundreds or thousands, whereas we only have 10 connections. Arguing against this, though, the snippet below from IBM's Technote "Explanation of connection pool and session pool settings for JMS connection factories" suggests that MaxChannels should be greater than the maximum number of sessions; we never know that number, as it depends on the load (unless it should simply be greater than the sessionCacheSize?).
Each session represents a TCP/IP connection to the queue manager. With the settings mentioned here, there can be a maximum of 100 TCP/IP connections. If you are using WebSphere MQ, it is important to tune the queue manager's MaxChannels setting, located in the qm.ini file, to a value greater than the sum of the maximum possible number of sessions from each JMS connection factory that connects to the queue manager.
Any assistance on how best to configure MQ would be appreciated.
Assuming that your maximum number of conversations is set to 1 on the MQ channel (the default is 10 in MQ v7 and v7.5), a JMS Connection will result in one TCP connection (MQ channel instance) and each JMS Session will result in another TCP connection (MQ channel instance).
From your update it sounds like you have 10 JMS Connections configured and the sessionCacheSize in Spring set to 100, so 10 x 100 means up to 1000 potential MQ channel instances being created. The open output count shows how many 'active' JMS Sessions are attempting to send a message, not necessarily how many have been cached.
The conversation sharing on the MQ channel might help you here as this defines how many logical connections can be shared over one TCP connection (MQ channel instance). So the default of 10 conversations means you can have 10 JMS Sessions created that operate over just one TCP connection.
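To make the arithmetic concrete, here is a rough sketch of wiring Spring's CachingConnectionFactory around an MQ connection factory; the host, queue manager, channel name and cache size are illustrative assumptions, not recommendations for your environment:

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import org.springframework.jms.connection.CachingConnectionFactory;

    public class MqConnectionFactoryConfig {
        public static CachingConnectionFactory create() throws Exception {
            MQConnectionFactory mqcf = new MQConnectionFactory();
            mqcf.setHostName("mqhost.example.com");        // placeholder
            mqcf.setPort(1414);
            mqcf.setQueueManager("QM1");                   // placeholder
            mqcf.setChannel("APP.SVRCONN");                // placeholder
            mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            CachingConnectionFactory ccf = new CachingConnectionFactory(mqcf);
            // With 10 such connections in use, a cache of 100 sessions each can mean
            // up to 10 x 100 = 1000 channel instances at SHARECNV=1, so size the
            // cache with the queue manager's MaxChannels in mind.
            ccf.setSessionCacheSize(25);
            return ccf;
        }
    }

With conversation sharing left at the channel default of 10, those cached sessions would need roughly a tenth as many TCP connections, since up to 10 conversations can share one channel instance.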
I currently have a JVM-based network client that performs an HTTP long poll (aka Comet) request using the standard java.net.HttpURLConnection. I have the timeout set very high for the connection (1 hour). For most users it works fine, but some users do not receive the data sent from the server and eventually time out after 1 hour.
My theory is that a (NAT) router is timing out and discarding their connections because they are idle too long before the server sends any data.
My questions then are:
Can I enable TCP keep-alive for the connections used by java.net.HttpURLConnection? I could not find a way to do this.
Is there a different API (than HttpURLConnection) I should be using instead?
Other solutions?
java.net.HttpURLConnection handles the Keep-Alive header transparently; it can be controlled, and it is on by default. But your problem is not with Keep-Alive, which is a higher-level flag indicating that the server should not close the connection after handling the first request but rather wait for the next one.
In your case, something at a lower level of the OSI stack is probably interrupting the connection. Because keeping an open but idle TCP connection for such a long period of time is never a good choice (the FTP protocol, with its two open connections, one for commands and one for data, has the same problem), I would rather implement some sort of disconnect/retry fail-safe procedure on the client side.
In fact, a safe limit would probably be just a few minutes, not hours. Simply disconnect from the HTTP server proactively every 60 seconds or 5 minutes; that should do the trick.
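A minimal sketch of that fail-safe, assuming a plain HttpURLConnection client and a placeholder URL: cap each poll with a read timeout and reconnect as soon as it expires:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    public class LongPollClientSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://example.com/poll");    // placeholder endpoint
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setConnectTimeout(10000);                // 10 s to establish the connection
                conn.setReadTimeout(5 * 60 * 1000);           // give up after 5 minutes of silence
                try {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()));
                    String line;
                    while ((line = in.readLine()) != null) {
                        // ... hand the pushed data to the application ...
                    }
                } catch (SocketTimeoutException idle) {
                    // No data within the window: fall through and reconnect immediately.
                } finally {
                    conn.disconnect();
                }
            }
        }
    }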
There does not appear to be a way to turn on TCP keep-alive for HttpURLConnection.
Apache HttpComponents will be an option when version 4.2 comes out with TCP keep-alive support.
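For reference, once HttpComponents 4.2 is available, something along these lines should turn on SO_KEEPALIVE for the client's sockets (a sketch against the 4.2-era parameter API; the URL is a placeholder and the exact wiring may differ in your setup):

    import org.apache.http.HttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.params.HttpConnectionParams;

    public class KeepAliveClientSketch {
        public static void main(String[] args) throws Exception {
            DefaultHttpClient client = new DefaultHttpClient();
            // Ask the OS to send TCP keep-alive probes on this client's sockets;
            // the probe interval itself is configured at the OS level.
            HttpConnectionParams.setSoKeepalive(client.getParams(), true);
            HttpResponse response = client.execute(new HttpGet("https://example.com/poll"));
            // ... consume response.getEntity() as usual ...
        }
    }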
I am using Jersey 1.4, the ApacheHttpClient, and the Apache MultiThreadedHttpConnectionManager class to manage connections. For the HttpConnectionManager, I set staleCheckingEnabled to true, maxConnectionsPerHost to 1000 and maxTotalConnections to 1000. Everything else is default. We are running in Tomcat and making connections out to multiple external hosts using the Jersey client.
I have noticed that after a short period of time I begin to see sockets in the CLOSE_WAIT state that are associated with the Tomcat process. Some monitoring with tcpdump shows that the external hosts appear to close the connection after some time, but it never gets closed on our end. Usually there is some data in the socket read queue, often 24 bytes. The connections use HTTPS and the data seems to be encrypted, so I'm not sure what it is.
I have checked to be sure that the ClientRequest objects that get created are closed. The sockets in CLOSE_WAIT do seem to get recycled and we're not running out of any resources, at least at this time. I'm not sure what's happening on the external servers.
My question is, is this normal and should I be concerned?
Thanks,
John
This is likely to be a device such as the firewall or the remote server timing out the TCP session. You can analyze packet captures of HTTPS using Wireshark as described on their SSL page:
http://wiki.wireshark.org/SSL
The staleCheckingEnabled flag only issues the check when you go to actually use the connection so you aren't using network resources (TCP sessions) when they aren't needed.
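If you want to be more proactive about reaping those sockets, one option is to periodically close idle connections on the manager. A sketch against Commons HttpClient 3.x (which Jersey's ApacheHttpClient wraps); the idle window and sleep interval are assumptions:

    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;

    public class ConnectionReaperSketch {
        public static void main(String[] args) {
            final MultiThreadedHttpConnectionManager manager =
                    new MultiThreadedHttpConnectionManager();
            manager.getParams().setStaleCheckingEnabled(true);
            manager.getParams().setDefaultMaxConnectionsPerHost(1000);
            manager.getParams().setMaxTotalConnections(1000);

            HttpClient client = new HttpClient(manager);

            // Simple reaper thread: close connections that have been idle for 30+ seconds,
            // so sockets the remote side already closed don't linger in CLOSE_WAIT.
            Thread reaper = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        manager.closeIdleConnections(30000L);
                        try {
                            Thread.sleep(5000L);
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            });
            reaper.setDaemon(true);
            reaper.start();
            // ... use client for requests as before ...
        }
    }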
I have a problem with MantaRay JMS: I use a static world map because autodiscovery does not work in our network. If more than 10 peers are offline, I get error 4226.
The problem is that Microsoft set a limit of 10 half-open connections in Windows XP SP2. MantaRay tries to contact every peer and starts a lot of connections. The first 10 connections are fine, but when the 11th starts, our software must wait for another connection attempt to time out, and any other program trying to access the network on the same PC times out as well.
The strange thing is that on some PCs the connection times out after 1-2 seconds and the problem has almost no consequences, while on others we have to wait 10 or 20 seconds. According to Microsoft, there is no way to configure the default TCP connect timeout directly, and there are other factors (network switches, routers, VPN...) that can influence it.
I looked at the MantaRay source code and tried to find a way to set the TCP connect timeout, but MantaRay uses SocketChannels instead of "regular" sockets, and SocketChannel.connect() takes no timeout argument. Am I missing something?
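On the connect() point: the usual workaround, if you can wrap or patch the call, is to start the connect in non-blocking mode and bound the wait with a Selector. A sketch, with the timeout value left to the caller:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class TimedConnect {
        public static SocketChannel connect(InetSocketAddress addr, long timeoutMillis)
                throws IOException {
            SocketChannel ch = SocketChannel.open();
            Selector selector = Selector.open();
            boolean ok = false;
            try {
                ch.configureBlocking(false);
                ch.connect(addr);
                ch.register(selector, SelectionKey.OP_CONNECT);
                // Wait at most timeoutMillis for the connect to complete.
                if (selector.select(timeoutMillis) == 0 || !ch.finishConnect()) {
                    throw new IOException("connect timed out: " + addr);
                }
                ok = true;
                // Still in non-blocking mode; the caller can switch back with
                // configureBlocking(true) once the channel is deregistered.
                return ch;
            } finally {
                selector.close();
                if (!ok) {
                    ch.close();
                }
            }
        }
    }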
You could also patch the TCP/IP connection limit of Windows XP, if you don't mind using such things. There are several sites offering patches; just search Google for "change winxp tcp connection limit" and you'll find most of them. But use those tools at your own risk; working around the limit in your own code would be a better approach.
Problem solved.
I replaced the whole of MantaRay with a much simpler JMS provider that I wrote myself: I send a first test message over UDP, and a peer is only allowed to open a TCP connection after this first message has been received.
This taught me to be careful when using open-source (GPL) software.
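For anyone curious, the gist of that probe-then-connect idea looks roughly like the sketch below (client side only; the port numbers, payload, and timeout are assumptions, and the peer must answer the UDP probe):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class ProbeThenConnect {
        public static Socket connectIfAlive(String host, int udpPort, int tcpPort)
                throws Exception {
            DatagramSocket probe = new DatagramSocket();
            try {
                probe.setSoTimeout(2000);                        // short, configurable timeout
                byte[] ping = "ping".getBytes("US-ASCII");
                probe.send(new DatagramPacket(ping, ping.length,
                        new InetSocketAddress(host, udpPort)));
                byte[] buf = new byte[16];
                probe.receive(new DatagramPacket(buf, buf.length)); // wait for the peer's reply
            } catch (SocketTimeoutException offline) {
                return null;                                     // peer did not answer: skip TCP
            } finally {
                probe.close();
            }
            return new Socket(host, tcpPort);                    // peer is alive: connect normally
        }
    }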