Too Many Files Open error in Java NIO

Hi, I have created a server and client program using Java NIO.
My server and client are on different computers: the server runs Linux and the client runs Windows. When I create 1024 sockets on the client, the client machine handles them, but on the server I get a "too many open files" error.
How can I open 15000 sockets on the server without any error?
Or is there any other way to connect with 15000 clients at the same time?
Thanks
Bapi

Ok, questioning why he needs 15K sockets is a separate discussion.
The answer is that you are hitting the user's file descriptor limit.
Log in with the user that will run the listener and run $ ulimit -n to see the current limit.
Most likely 1024.
As root, edit the file /etc/security/limits.conf
and set ->
{username} soft nofile 65536
{username} hard nofile 65536
65536 is just a suggestion; you will need to work out the right value for your app.
Log out, log in again, and recheck with ulimit -n to confirm it worked.
You are probably going to need more than 15K fds for all that. Monitor your app with lsof.
Like this:
$lsof -p {pid} <- lists all file descriptors
$lsof -p {pid} | wc -l <- count them
By the way, you might also hit the system wide fd limit, so you need to check it:
$cat /proc/sys/fs/file-max
To increase that one, add this line to the /etc/sysctl.conf
#Maximum number of open FDs
fs.file-max = 65535
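If you want to watch the limit from inside the JVM as well as with lsof, HotSpot on Unix exposes the process's descriptor counts through com.sun.management.UnixOperatingSystemMXBean. A minimal sketch (the class name FdWatch is mine; on non-Unix JVMs the instanceof check simply fails):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdWatch {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // Only the Unix HotSpot implementation exposes fd counts.
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
            System.out.println("max fds:  " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("fd counts not available on this platform");
        }
    }
}
```

Logging these two numbers periodically lets the server warn itself before it hits the "too many open files" wall.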

Why do you need to have 15000 sockets on one machine? Anyway, look at ulimit -n

If you're going to have 15,000 clients talking to your server (and possibly 200,000 in the future according to your comments) then I suspect you're going to have scalability problems servicing those clients once they're connected (if they connect).
I think you may need to step back and look at how you can architect your application and/or deployment to successfully achieve these sort of numbers.

Related

Linux - "Too many open files" with pipe, how to debug

I have a Java program which throws a 'Too many open files' error after running for about 3 minutes. Increasing the open file limit doesn't work, because the program still exhausts the limit, just more slowly. So there is something wrong with my program and I need to find out what.
Here is what I did (10970 is the pid):
Checked the open files of the Java process using cat /proc/10970/fd and found that most of them are pipes
Used lsof -p 10970 | grep FIFO to list all pipes and found about 450 pipes
Pipes look like below
java 10970 service 1w FIFO 0,8 0t0 5890 pipe
java 10970 service 2w FIFO 0,8 0t0 5890 pipe
java 10970 service 169r FIFO 0,8 0t0 2450696 pipe
java 10970 service 201r FIFO 0,8 0t0 2450708 pipe
But I don't know how to continue. "0,8" in the output above is a device number. How can I find the device with these numbers?
Update
The program is a TCP server that receives socket connections from clients and processes messages. I have two environments. In the production environment it works fine, but the test environment has recently developed this issue. In production I don't see so many pipes. The code and infrastructure of the two environments are the same, both managed by Chef.
But I don't know how to continue.
What you need to do is to identify the place or places in your Java code where you are opening these pipes ... and make sure that they are always closed when you are done with them.
The best way to ensure that the pipes are closed is to explicitly close them when you are done with them. For example (using input streams instead of sockets ...):
InputStream is = new FileInputStream("somefile.txt");
try {
    // Use the file
} finally {
    is.close();
}
In Java 7 or later, you can write that more succinctly as:
try (InputStream is = new FileInputStream("somefile.txt")) {
    // Use the file
}
In the latter, the InputStream object is automatically closed when the try completes ... in an implicit finally block.
0,8 in the output above means device numbers. How can I find devices with these numbers?
That is probably irrelevant to solving the problem. Focus on why the file descriptors are not being closed; knowing what the device numbers mean doesn't help.
In Production environment I don't see so many pipes.
That's probably a red herring too. It could be that the GC runs more frequently in production, closing the orphaned file descriptors before they become a problem.
(But forcing the GC to run is not a solution. You should not rely on the GC to close file descriptors. It is inefficient and unreliable.)
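One frequent source of leaked FIFOs in a Java process (an assumption about this particular program, but worth checking) is spawning subprocesses and never draining or closing their streams: every Process carries up to three pipes for stdin, stdout, and stderr. A hedged sketch of doing it cleanly, using echo as a stand-in command:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunAndClose {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("echo", "hello");
        pb.redirectErrorStream(true);   // merge stderr into stdout: one pipe fewer to drain
        Process p = pb.start();
        // try-with-resources guarantees the stdout pipe is closed even on exceptions
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
        p.getOutputStream().close();    // close the child's stdin pipe as well
        p.waitFor();
    }
}
```

If the leaking code follows this pattern everywhere, the FIFO count in lsof should stay flat instead of growing with every subprocess.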

mysql jdbc communication exception

I have a Java application which initially reads 3 lakh (300,000) rows of data from my MySQL database. It then calls an API using an ExecutorService with a newFixedThreadPool of size 20.
After getting the response from the API, it inserts the responses into my DB. It works fine for roughly the first 2000 rows; after that I get an error like the following.
SQLError-com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The
driver was unable to create a connection due to an inability to
establish the client portion of a socket.
This is usually caused by a limit on the number of sockets imposed by
the operating system. This limit is usually configurable.
For Unix-based platforms, see the manual page for the 'ulimit'
command. Kernel or system reconfiguration may also be required.
For Windows-based platforms, see Microsoft Knowledge Base Article
196271 (Q196271).
Anyone could help me to fix this issue?
I was able to fix this problem by increasing the number of sockets that can be opened in Windows:
From the Windows Start menu, run the regedit.exe application
Under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters key, create a new DWORD value named MaxUserPort with a decimal value of 65,000 (the default is 5,000)
After closing regedit, restart your computer.
(See also Increasing the number of Windows sockets or ports, Maximum Socket Connections)
A note of caution: When an application is using more than 5,000 socket connections, that may be an indication that system resources are not being used in a sustainable way. It would be good to investigate the root cause for why so many sockets are being opened simultaneously.
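Beyond raising MaxUserPort, the more sustainable fix is usually to reuse a small, fixed set of connections instead of opening one per task, i.e. a connection pool. The sketch below uses a stand-in Conn class in place of a real JDBC Connection, so it runs without a database; a production app would use a pooling library such as HikariCP or DBCP (my suggestion, not part of the original question):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReusePool {
    // Stand-in for a JDBC Connection; counts how many were ever opened.
    static class Conn {
        static final AtomicInteger OPENED = new AtomicInteger();
        Conn() { OPENED.incrementAndGet(); }
    }

    public static void main(String[] args) throws InterruptedException {
        int poolSize = 20;  // matches the fixed thread pool size
        BlockingQueue<Conn> pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++) pool.put(new Conn());

        ExecutorService exec = Executors.newFixedThreadPool(poolSize);
        for (int task = 0; task < 10_000; task++) {
            exec.submit(() -> {
                try {
                    Conn c = pool.take();   // borrow instead of opening a new socket
                    // ... call the API and insert the response using c ...
                    pool.put(c);            // return the connection for reuse
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        exec.shutdown();
        exec.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("connections opened: " + Conn.OPENED.get()); // 20, not 10000
    }
}
```

With 10,000 tasks the process opens only 20 connections total, so the OS never accumulates thousands of half-closed sockets.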

Apache multiple requests with jmeter

I am using JMeter to test multiple requests to my web application.
I set the number of threads in JMeter to 50.
My process is as follows:
Login page.
Login with userID and password.
Show menu page.
Click search page.
Go to search page.
Click search button.
Click the searched result link to go to update page.
Update data and click update button.
Show the updated result page.
Go back to search page.
Log out button click.
In the above process, I used a Loop Controller for steps 5 to 10 with a loop count of 5.
In that situation, if I use more than 25 threads to run the JMeter test, an "Address already in use" socket binding exception occurs.
I would like to know how to solve this problem.
Looks like your client ran out of ephemeral ports, or there's some problem with your client environment.
Are you using Windows?
You can possibly do at least the following:
Windows: look into this article for a solution for a Windows system as the JMeter host.
Use a Linux system instead as the host to run your JMeter load scenarios.
You may also find this article useful for your testing activities (I've seen JBoss in the tags).
UPDATE:
Once more from linked article above:
When an HTTP request is made, an ephemeral port is allocated for the
TCP / IP connection. The ephemeral port range is 32768 – 61000. After
the client closes the connection, the connection is placed in the
TIME-WAIT state for 60 seconds.
If JMeter (HttpClient) is sending thousands of HTTP requests per
second and creating new TCP / IP connections, the system will run out
of available ephemeral ports for allocation.
. . .
Otherwise, the following messages may appear in the JMeter JTL files:
Non HTTP response code: java.net.BindException
Non HTTP response message: Address already in use
The solution is to enable fast recycling TIME_WAIT sockets.
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
Other options include TCP_FIN_TIMEOUT to reduce how long a connection
is placed in the TIME_WAIT state and TCP_TW_REUSE to allow the system
to reuse connections placed in the TIME_WAIT state.
On the server's side:
This enables fast recycling of TIME_WAIT sockets
/sbin/sysctl -w net.ipv4.tcp_tw_recycle=1
This allows reusing sockets in TIME_WAIT state for new connections - a safer alternative to tcp_tw_recycle
/sbin/sysctl -w net.ipv4.tcp_tw_reuse=1
The tcp_tw_reuse setting is particularly useful in environments where numerous short connections are open and left in TIME_WAIT state, such as web-servers. Reusing the sockets can be very effective in reducing server load.
Maximum number of timewait sockets held by system simultaneously
/sbin/sysctl -w net.ipv4.tcp_max_tw_buckets=30000
or the same but in another way - add the lines below to the /etc/sysctl.conf file so that the change survives reboot:
net.ipv4.tcp_max_tw_buckets = 30000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
Also, on the server's side, look at the result of ulimit -n.
The default limit of 1024 max open files can explain the appearance of BindExceptions at around 1000 connections.
You can also monitor the number of connections between the server and JMeter during the test run using e.g.
netstat -an | grep SERVER_PORT | wc -l
to see whether you are hitting a connection limit.

Problem running into java.net.bindexception cannot assign requested address

I am currently testing a server with an automatic test client that simulates a large number of users. Both the server and the client are written in Java. The client opens a TCP/IP connection for every user. Both run on Ubuntu Linux: the client on 11.04 and the server on 10.04.
The testing went well up to 27000 concurrently open connections. After that I decided to jump to 36000 (the server's and client's resources weren't heavily used at 27000, so I decided to make a slightly bigger jump). When I tried running the test for 36k, I got the following exception on the client side:
java.net.BindException: cannot assign requested address
As far as I know, at 36k I should still have free ports, since not much else is running on either machine and TCP limits the port number to 2^16 = 65536. Since it is Linux, I also set the number of open files for the user to 100k with ulimit -n 100000.
But I am still getting the same exception.
I'm wondering what else could cause this exception, or does Linux limit the number of outgoing connections in some other way?
Thanks in advance,
Danijel
By default, Linux picks dynamically assigned ports from the range 32768..61000. The others are available for static assignment, if you bind to a specific port number. The range can be changed if you want more of the ports to be available for dynamic assignment, but just be careful that you do not include ports that are used for specific services that you need (e.g. 6000 for X11). Also you should not allow ports < 1024 to be dynamically assigned since they are privileged. To check or change the range:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
# echo "16384 65535" > /proc/sys/net/ipv4/ip_local_port_range

How do I open 20000 clients in Java without increasing file limit?

Whenever I open a socket channel, if the client accepts, one file descriptor is created internally, so I can create a maximum of 1024 clients in Linux.
But I want to create more clients without increasing the file descriptor limit in Linux
(ulimit -n 20000).
So how can I create more sockets in Java?
If your session is limited to 1024 file descriptors, you can't use more than that from a single JVM.
But since the ulimit is a per-process limitation, you could probably get around it by starting more JVMs (e.g. to get 2048 connections, start two JVMs each using 1024).
If you are using UDP, can you multiplex on a single local socket yourself? You'll be able to separate incoming packets by their source address and port.
If it's TCP you're out of luck, and the TIME_WAIT period after closing each socket will make things worse.
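The UDP multiplexing idea above can be sketched with NIO's DatagramChannel: one bound socket (a single fd) serves every peer, and receive() reports which client each datagram came from. Port 9999 and the echo behaviour are placeholders of mine:

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class UdpMux {
    public static void main(String[] args) throws Exception {
        // One local socket (one fd) serves every client.
        try (DatagramChannel ch = DatagramChannel.open()) {
            ch.bind(new InetSocketAddress(9999));
            ByteBuffer buf = ByteBuffer.allocate(1500);
            while (true) {
                buf.clear();
                // receive() tells us which peer sent this datagram
                SocketAddress peer = ch.receive(buf);
                buf.flip();
                ch.send(buf, peer);   // echo back to whichever client sent it
            }
        }
    }
}
```

Per-client state would be kept in a Map keyed by the peer's SocketAddress, instead of in per-connection sockets.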
Why can't you increase the ulimit? It seems like an artificial limitation. There is no way from Java code (AFAIK) to reset the ulimit; it needs to be set before the process starts, in a startup script or something similar.
The JBoss startup scripts perform a 'ulimit -n $MAX_FD' before they start JBoss ...
Len
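In the same spirit as the JBoss scripts mentioned above, a minimal launcher sketch (the MAX_FD value and the commented-out launch line are placeholders):

```shell
#!/bin/sh
# Set the per-process fd limit before the JVM starts; it cannot be changed
# from Java code, and the soft limit cannot exceed the hard limit (ulimit -Hn).
MAX_FD=1024                       # placeholder; size this to your connection count
ulimit -n "$MAX_FD"
echo "fd limit is now $(ulimit -n)"
# exec java -jar server.jar       # placeholder launch line
```

Because ulimit is a shell built-in, it affects the current shell and is inherited by the exec'd JVM, which is why it has to run in the wrapper script rather than from the Java process itself.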
The limit RLIMIT_NOFILE is enforced by the operating system and limits the highest fd a process can create. One fd is used for every file, pipe, and socket that is opened.
There are hard and soft limits. Any process (like your shell or JVM) is permitted to change the soft value, but only a privileged process (like a shell run by the root user) can change the hard value.
a) If you are not permitted to change the limit on the machine, find someone who is.
b) If you for some reason can't be bothered to type ulimit, I guess you can call the underlying system call using JNA: man setrlimit(2). (.exec() won't do, as ulimit is a shell built-in command.)
See also Working With Ulimit
We recently upped our ulimit because our java process was throwing lots of "Too many files open" exceptions.
It is now 65536 and we have not had any issues.
If you really are looking at coping with a huge number of connections, then the best way to do it scalably would be to implement a lightweight dataserver process that has no responsibility other than accepting data and forwarding it to a parent process.
That way, as each dataserver gets saturated, you simply spawn a new instance to give yourself another 1024 connections. You could even have them run on separate machines if needed.
