Problem running into java.net.BindException: cannot assign requested address - Java

I am currently testing a server with an automated test client that simulates a large number of users. Both the server and the client are written in Java. The client opens a TCP/IP connection for every user. Both run on Ubuntu Linux: the client on 11.04 and the server on 10.04.
The testing went well up to 27,000 concurrently open connections. After that I decided to jump to 36,000 (the server's and client's resources weren't nearly used up at 27,000, so I decided to make a slightly bigger jump). When I tried running the test for 36k, I got the following exception on the client side:
java.net.BindException: cannot assign requested address
As far as I know, at 36k I should still have free ports, since not much else is running on either machine and TCP limits the port number to 2^16 = 65536. And since this is Linux, I also set the number of open files for the user to 100k with ulimit -n 100000.
But I am still getting the same exception.
I'm wondering what else could cause this exception, or does Linux limit the number of outgoing connections in some other way?
Thanks in advance,
Danijel

By default, Linux picks dynamically assigned ports from the range 32768..61000. The others are available for static assignment, if you bind to a specific port number. The range can be changed if you want more of the ports to be available for dynamic assignment, but just be careful that you do not include ports that are used for specific services that you need (e.g. 6000 for X11). Also you should not allow ports < 1024 to be dynamically assigned since they are privileged. To check or change the range:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
# echo "16384 65535" > /proc/sys/net/ipv4/ip_local_port_range

Related

MySQL JDBC communication exception

I have a Java application which initially reads 300,000 rows (3 lakh) from my MySQL database. Then it calls an API using an ExecutorService with a newFixedThreadPool of size 20.
After getting the response from the API, it inserts the responses into my DB. It works fine for roughly the first 2,000 rows. After that I get an error like the following:
SQLError-com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The
driver was unable to create a connection due to an inability to
establish the client portion of a socket.
This is usually caused by a limit on the number of sockets imposed by
the operating system. This limit is usually configurable.
For Unix-based platforms, see the manual page for the 'ulimit'
command. Kernel or system reconfiguration may also be required.
For Windows-based platforms, see Microsoft Knowledge Base Article
196271 (Q196271).
Could anyone help me fix this issue?
I was able to fix this problem by increasing the number of sockets that can be opened in Windows:
From the Windows Start menu, run the regedit.exe application
Under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters key, create a new DWORD value named MaxUserPort with a decimal value of 65,000 (the default is 5,000)
After closing the regedit application, restart your computer.
(See also Increasing the number of Windows sockets or ports, Maximum Socket Connections)
A note of caution: When an application is using more than 5,000 socket connections, that may be an indication that system resources are not being used in a sustainable way. It would be good to investigate the root cause for why so many sockets are being opened simultaneously.

How can I work around WinXP using ports 1025-5000 as ephemeral?

If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However, in Windows Server 2003 and earlier (including XP), Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation.
I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end.
Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.
Update2: I accepted a solution on ServerFault for how to do this via the Windows registry. I'd still like a way to do it via API, but I guess I'm satisfied for now.
It's not as elegant as using OS support for ephemeral ports, but the docs show that you should be able to specify a port for your socket to bind to. Specify a port at the base of the range you want; if it is already in use, an exception will be thrown, in which case increment the port and try again. Given that Windows isn't using the port range that you want, there shouldn't be many collisions.
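A minimal sketch of that bind-and-retry loop (the range bounds and the remote endpoint are placeholders for whatever your deployment needs; nothing here is Windows-specific):

import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RangeBoundClient {
    // Try to bind the client socket to a local port in [lowPort, highPort],
    // then connect; on a collision, move on to the next port.
    static Socket connectFromRange(String host, int remotePort,
                                   int lowPort, int highPort) throws IOException {
        for (int port = lowPort; port <= highPort; port++) {
            Socket socket = new Socket(); // unconnected and unbound
            try {
                socket.bind(new InetSocketAddress(port));
                socket.connect(new InetSocketAddress(host, remotePort));
                return socket;
            } catch (BindException e) {
                socket.close(); // local port taken, try the next one
            }
        }
        throw new IOException("No free local port in " + lowPort + "-" + highPort);
    }
}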

How do I open 20000 clients in Java without increasing file limit?

Whenever I open a socket channel and the connection is accepted, one file descriptor is created internally, so I can create a maximum of 1024 clients on Linux.
But I want to create more clients without increasing the file descriptor limit in Linux
(ulimit -n 20000)
So how can I create more sockets in Java?
If your session is limited to 1024 file descriptors you can't use more than that from a single JVM.
But since the ulimit is a per-process limitation, you could probably get around it by starting more JVMs (i.e. to get 2048 connections start two JVMs each using 1024).
If you are using UDP, can you multiplex on a single local socket yourself? You'll be able to separate incoming packets by their source address and port, as in the sketch below.
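A minimal sketch of that idea (the local port 9999 is a placeholder); one socket, and therefore one fd, serves every peer:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketAddress;
import java.util.HashMap;
import java.util.Map;

public class UdpDemux {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9999); // single fd for all peers
        Map<SocketAddress, Integer> packetsPerPeer = new HashMap<SocketAddress, Integer>();
        byte[] buf = new byte[1500];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            // Demultiplex purely on the sender's address and port.
            SocketAddress peer = packet.getSocketAddress();
            Integer count = packetsPerPeer.get(peer);
            packetsPerPeer.put(peer, count == null ? 1 : count + 1);
        }
    }
}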
If it's TCP you're out of luck, and the TIME_WAIT period after closing each socket will make things worse.
Why can't you increase the ulimit? It seems like an artificial limitation. There is no way (AFAIK) from Java code to reset the ulimit - it needs to be set before the process starts, in a startup script or something similar.
The JBoss startup scripts perform a 'ulimit -n $MAX_FD' before they start JBoss...
Len
The RLIMIT_NOFILE limit is enforced by the operating system and limits the highest fd a process can create. One fd is used for every file, pipe, and socket that is opened.
There are hard and soft limits. Any process (like your shell or JVM) is permitted to change the soft value, but only a privileged process (like a shell run by the root user) can raise the hard value.
a) If you are not permitted to change the limit on the machine, find someone who is.
b) If for some reason you can't be bothered to type ulimit, I guess you can call the underlying system call using JNA; see man setrlimit(2). (Runtime.exec() won't do, as ulimit is a shell built-in command.)
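A rough sketch of what that could look like - an assumption on my part, not tested code: it presumes JNA 5.x and 64-bit Linux, where RLIMIT_NOFILE is 7 and rlim_t is a 64-bit value:

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Structure;

public class RaiseNofile {
    @Structure.FieldOrder({"rlim_cur", "rlim_max"})
    public static class Rlimit extends Structure {
        public long rlim_cur; // soft limit
        public long rlim_max; // hard limit
    }

    public interface CLib extends Library {
        CLib INSTANCE = Native.load("c", CLib.class);
        int getrlimit(int resource, Rlimit rlim);
        int setrlimit(int resource, Rlimit rlim);
    }

    private static final int RLIMIT_NOFILE = 7; // value on Linux; differs elsewhere

    public static void main(String[] args) {
        Rlimit limit = new Rlimit();
        CLib.INSTANCE.getrlimit(RLIMIT_NOFILE, limit);
        limit.rlim_cur = limit.rlim_max; // raise soft limit up to the hard limit
        CLib.INSTANCE.setrlimit(RLIMIT_NOFILE, limit);
    }
}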
See also Working With Ulimit
We recently upped our ulimit because our Java process was throwing lots of "Too many open files" exceptions.
It is now 65536 and we have not had any issues.
If you really are looking at coping with a huge number of connections, then the best way to do it scalably would be to implement a lightweight dataserver process that has no responsibility other than accepting and forwarding data to a parent process.
That way, as each dataserver gets saturated, you simply spawn a new instance to give yourself another 1024 connections. You could even have them run on separate machines if needed.

Too Many Files Open error in Java NIO

Hi, I have created a server and client program using Java NIO.
My server and client are on different computers; the server runs Linux and the client runs Windows. When I create 1024 sockets on the client, the client machine supports it, but on the server I get a too many files open error.
So how can I open 15,000 sockets on the server without any error?
Or is there any other way to connect with 15,000 clients at the same time?
Thanks
Bapi
Ok, questioning why he needs 15K sockets is a separate discussion.
The answer is that you are hitting the user's file descriptor limit.
Log in as the user you will use for the listener and do $ ulimit -n to see the current limit.
Most likely 1024.
As root, edit the file /etc/security/limits.conf and set:
{username} soft nofile 65536
{username} hard nofile 65536
65536 is just a suggestion; you will need to figure out the right value from your app.
Log off, log in again, and recheck with ulimit -n to see that it worked.
You are probably going to need more than 15K fds for all that. Monitor your app with lsof.
Like this:
$lsof -p {pid} <- lists all file descriptors
$lsof -p {pid} | wc -l <- count them
By the way, you might also hit the system-wide fd limit, so you need to check it:
$cat /proc/sys/fs/file-max
To increase that one, add this line to /etc/sysctl.conf:
#Maximum number of open FDs
fs.file-max = 65535
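As a complement to lsof, on a HotSpot JVM on Unix you can also watch fd usage from inside the process itself via the com.sun.management extension:

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdMonitor {
    public static void main(String[] args) {
        // Cast succeeds on HotSpot/OpenJDK on Unix-like systems only.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount()
                + " / max: " + os.getMaxFileDescriptorCount());
    }
}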
Why do you need to have 15000 sockets on one machine? Anyway, look at ulimit -n
If you're going to have 15,000 clients talking to your server (and possibly 200,000 in the future according to your comments) then I suspect you're going to have scalability problems servicing those clients once they're connected (if they connect).
I think you may need to step back and look at how you can architect your application and/or deployment to successfully achieve these sorts of numbers.

Why would my java program send multicast packets with a TTL of 1?

I have a Java client program that uses mDNS with service discovery to find its associated server. After much testing on a single network with Windows, Fedora 10, and Ubuntu 8.10, we delivered a test build to a customer. They report that the client and server never connect. They sent us a Wireshark capture that shows the mDNS packets have a TTL of 1 even though our code sets it to 32. When we test locally, the TTL is 32, just as we set it. The customer is using Red Hat Enterprise Linux 5.
I saw Java Multicast Time To Live is always 0, but it leaves me curious as to why that asker sees a TTL of 0 while mine is 1.
Did you check out the answer to Java Multicast Time To Live is always 0? This may fix your problem as well. The answer there references the answerer's blog entry.
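For reference, a minimal sketch of sending a multicast packet with an explicit TTL (the group and port are the standard mDNS values, used here only as placeholders; the payload is empty):

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastTtlCheck {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("224.0.0.251"); // mDNS group
        MulticastSocket socket = new MulticastSocket();
        socket.setTimeToLive(32); // set on the sending socket, before send()
        System.out.println("TTL reported by socket: " + socket.getTimeToLive());
        byte[] payload = new byte[0]; // placeholder payload
        socket.send(new DatagramPacket(payload, payload.length, group, 5353));
    }
}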
