Apache multiple requests with jmeter - java

I am using JMeter to test multiple requests to my web application.
I set the Number of Threads in JMeter to 50.
My process is as follows:
1. Login page.
2. Login with userID and password.
3. Show menu page.
4. Click search page.
5. Go to search page.
6. Click search button.
7. Click the searched result link to go to update page.
8. Update data and click update button.
9. Show the updated result page.
10. Go back to search page.
11. Log out button click.
In the above process, I used a Loop Controller around steps 5 to 10, with a loop count of 5.
In that situation, if I use more than 25 threads to run the JMeter test, an "Address already in use" socket BindException occurs.
I would like to know how to solve that problem.

Looks like your client ran out of ephemeral ports, or there is some problem with your client environment.
Are you using Windows?
You can do at least the following:
Windows: look into this article for a solution when a Windows system is the host for JMeter.
Or use a Linux system instead as the host to run your JMeter load scenarios.
You may also find this article useful for your testing activities (I've seen JBoss in the tags).
UPDATE:
Once more from linked article above:
When an HTTP request is made, an ephemeral port is allocated for the
TCP / IP connection. The ephemeral port range is 32768 – 61000. After
the client closes the connection, the connection is placed in the
TIME-WAIT state for 60 seconds.
If JMeter (HttpClient) is sending thousands of HTTP requests per
second and creating new TCP / IP connections, the system will run out
of available ephemeral ports for allocation.
. . .
Otherwise, the following messages may appear in the JMeter JTL files:
Non HTTP response code: java.net.BindException
Non HTTP response message: Address already in use
The solution is to enable fast recycling TIME_WAIT sockets.
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
Other options include TCP_FIN_TIMEOUT to reduce how long a connection
is placed in the TIME_WAIT state and TCP_TW_REUSE to allow the system
to reuse connections placed in the TIME_WAIT state.
On the server's side:
This enables fast recycling of TIME_WAIT sockets (use with care: tcp_tw_recycle is known to break connections from clients behind NAT, which is why tcp_tw_reuse below is called the safer alternative)
/sbin/sysctl -w net.ipv4.tcp_tw_recycle=1
This allows reusing sockets in TIME_WAIT state for new connections - a safer alternative to tcp_tw_recycle
/sbin/sysctl -w net.ipv4.tcp_tw_reuse=1
The tcp_tw_reuse setting is particularly useful in environments where numerous short connections are open and left in TIME_WAIT state, such as web-servers. Reusing the sockets can be very effective in reducing server load.
Maximum number of timewait sockets held by system simultaneously
/sbin/sysctl -w net.ipv4.tcp_max_tw_buckets=30000
Or, equivalently, add the lines below to /etc/sysctl.conf so that the changes survive a reboot:
net.ipv4.tcp_max_tw_buckets = 30000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
Also, on the server's side, look at the result of ulimit -n.
The default limit on the number of open files is 1024, which can explain BindExceptions appearing at around 1000 connections.
You can also monitor the number of connections between the server and JMeter during the test run, e.g.
netstat -an | grep SERVER_PORT | wc -l
to determine whether you are hitting a connection limit.
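To see from the Java side what the quoted article describes, the short sketch below (assuming some server is listening on localhost:8080 - adjust host and port to your own setup) opens a few short-lived connections and prints the ephemeral local port the OS assigns to each one; every connection that is not reused consumes one port from the 32768-61000 range and then occupies it in TIME_WAIT after close:

import java.net.InetSocketAddress;
import java.net.Socket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws Exception {
        // Adjust host/port to a server you actually have running.
        InetSocketAddress target = new InetSocketAddress("localhost", 8080);
        for (int i = 0; i < 5; i++) {
            Socket socket = new Socket();
            socket.connect(target);
            // Each connection gets a fresh ephemeral local port from the OS.
            System.out.println("Connection " + i + " uses local port " + socket.getLocalPort());
            socket.close(); // the port now sits in TIME_WAIT for a while
        }
        // Run "netstat -an | grep TIME_WAIT" right after this to see the lingering sockets.
    }
}

This is also why enabling keep-alive on the JMeter HTTP samplers, so that connections are reused across loop iterations, takes a lot of pressure off the ephemeral port range.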

Related

No connection available in pool, netstat shows non-zero values in Recv-Q

I have an old Java application running on Java 1.6 on Tomcat 6. Due to the way it is set up in the environment, it is hard to do any internal diagnostics - basically I can't touch it, so it is a black box.
The app is crashing due to a lack of free connections. The limits are set high (max 255 parallel connections), but even when the number of open connections is around 60, it still crashes.
netstat shows that there is a lot of data in the Recv-Q (just an example):
tcp 1464 0 localhost:7076 remote-host1:3120 ESTABLISHED
tcp 2512 0 localhost:7611 remote-host2:3120 ESTABLISHED
tcp 6184 0 localhost:4825 remote-host3:3120 ESTABLISHED
I couldn't find any useful hints about the case (similar issue is here: https://serverfault.com/questions/672730/no-connection-available-in-pool-netstat-recvq-shows-high-number).
The questions:
1) Why is the application not reading all of the received data?
2) Since the data is not being read, another connection is opened to the DB. Am I right?
Any ideas will be appreciated.
1) What does the application do with the data it reads? Maybe it can't write to disk, it's waiting on some other condition, there's a thread lock, etc.
2) New connections are opened because the existing ones are still in use, regardless of the Recv-Q.
Regarding the number of connections, you should count half-closed connections too; these TCP states mean the connection is still active:
ESTABLISHED
FIN_WAIT_1
FIN_WAIT_2
TIME_WAIT
On Linux:
netstat -ant | grep -E 'ESTABLISHED|FIN_WAIT_1|FIN_WAIT_2|TIME_WAIT' | sort -k 6,6
To further troubleshoot it's suggested to get thread and/or heap dumps and analyze them.
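If you can run anything inside that JVM at all (the black-box constraint may rule this out, in which case jstack <pid> from the outside gives the same information), a programmatic thread dump via the standard ThreadMXBean is a minimal way to see which thread should be draining those sockets and what it is blocked on - a rough sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Dump all threads, including locked monitors and synchronizers,
        // to see who should be reading the sockets and what it is waiting on.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.println(info);
        }
    }
}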

Huge latency on https websockets (wss)

I'm working on an MMO browser game called mope.io (the game is live at https://mope.io) - we recently added HTTPS support, but have noticed a huge amount of latency over WSS websockets. On many of our game servers with HTTPS (the issue seems to hit some servers at random), there is a delay of several seconds on wss that wasn't seen before on ws.
Quick info: our game server sends 10 update packets per second, giving info on what changed in-game.
We use cloudflare for our site (setting Full:Strict), through our own wildcard certificate (*.mope.io). All of our game servers have matching DNS records which fall under this certificate (so that the websockets can work over https- we connect to eg. wss://server1.mope.io:7020 instead of ws://1.2.3.4:7020).
The game servers are written in Java, using the following library: https://github.com/TooTallNate/Java-WebSocket
Any ideas on reasons that websockets could perform so terribly slow under wss/tls? This even happens when I'm the only one connected to the server. Any help/guidance is greatly appreciated :)
Extra info: I've noticed an 11-second time-to-first-byte on the first HTTPS request when connecting to the site, before Cloudflare cached it - what could cause this?
My friend had a similar problem with his Node.js WSS game server and suspected it was a DoS attack. He managed to reproduce the "freezing" on his server with just a single computer by rapidly opening WSS connections. I am not sure whether this affects only secure WebSocket servers or both insecure and secure ones.
The problem occurred with both the ws and uws libraries.
With the websockets/ws library, the lag happened even when connections were rejected in verifyClient(), which prevents them from ever reaching the application's own connection-handling logic (otherwise that logic would have been a reasonable explanation for the lag), so I wonder whether the bottleneck is somewhere in Node's underlying socket handling of secure connections.
Our solution to prevent the freezing was to set up iptables with a rate limit. Replace 1234 with your server port.
sudo iptables -I INPUT -p tcp --dport 1234 -m state --state NEW -m recent --set
sudo iptables -I INPUT -p tcp --dport 1234 -m state --state NEW -m recent --update --seconds 1 --hitcount 2 -j DROP
That will allow 2 new connections per second per IP address. Also save the iptables rules if necessary, since they reset on system restart.
https://serverfault.com/questions/296838/traffic-filtering-for-websockets
https://debian-administration.org/article/187/Using_iptables_to_rate-limit_incoming_connections
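If you would rather enforce the limit inside the game server itself, instead of or in addition to iptables, a similar per-IP throttle can be sketched with the TooTallNate Java-WebSocket library mentioned in the question. Treat this as a rough sketch only: the one-new-connection-per-second-per-IP policy and the 1013 ("try again later") close code are my own choices, and under a real flood the firewall rule above is still the cheaper place to drop traffic.

import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

public class RateLimitedGameServer extends WebSocketServer {
    // Time of the last accepted connection per client IP.
    private final Map<String, Long> lastAccept = new ConcurrentHashMap<String, Long>();

    public RateLimitedGameServer(int port) {
        super(new InetSocketAddress(port));
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) {
        String ip = conn.getRemoteSocketAddress().getAddress().getHostAddress();
        long now = System.currentTimeMillis();
        Long previous = lastAccept.put(ip, now);
        if (previous != null && now - previous < 1000) {
            // A second new connection within one second from the same IP: drop it.
            conn.close(1013, "rate limited");
            return;
        }
        // ... normal player setup goes here ...
    }

    @Override public void onClose(WebSocket conn, int code, String reason, boolean remote) { }
    @Override public void onMessage(WebSocket conn, String message) { }
    @Override public void onError(WebSocket conn, Exception ex) { }

    // Required by newer versions of the library; harmless on older ones.
    public void onStart() { }
}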

Problem running into java.net.BindException: cannot assign requested address

I am currently testing a server with an automated test client that simulates a large number of users. Both the server and the client are written in Java. The client opens a TCP/IP connection for every user. Both the server and the client run on Ubuntu Linux; the client runs 11.04 and the server 10.04.
The testing went fine up to 27,000 concurrently open connections, after which I decided to jump to 36,000 (the server's and client's resources weren't really used up at 27,000, so I decided to make a slightly bigger jump). When I tried running the test for 36k, I got the following exception on the client side:
java.net.BindException: cannot assign requested address
As far as I know, at 36k I should still have free ports, since not much else is running on either machine and TCP limits the port number to 2^16 = 65536. Since this is Linux, I also set the number of open files for the user to 100k with ulimit -n 100000.
But I am still getting the same exception.
I'm wondering what else could be a possible cause for this exception - does Linux limit the number of outgoing connections in some other way?
Thanks in advance,
Danijel
By default, Linux picks dynamically assigned ports from the range 32768..61000, which is only about 28,000 ports - fewer than the 36,000 connections your client is trying to open, which is why it fails well before the theoretical 2^16. The other ports are available for static assignment if you bind to a specific port number. The range can be changed if you want more ports to be available for dynamic assignment, but be careful not to include ports that are used for specific services you need (e.g. 6000 for X11). You should also not allow ports below 1024 to be dynamically assigned, since they are privileged. To check or change the range:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
# echo "16384 65535" > /proc/sys/net/ipv4/ip_local_port_range

Sockets in CLOSE_WAIT from Jersey Client

I am using Jersey 1.4, the ApacheHttpClient, and the Apache MultiThreadedHttpConnectionManager class to manage connections. For the HttpConnectionManager, I set staleCheckingEnabled to true, maxConnectionsPerHost to 1000 and maxTotalConnections to 1000. Everything else is default. We are running in Tomcat and making connections out to multiple external hosts using the Jersey client.
I have noticed that after a short period of time I will begin to see sockets in a CLOSE_WAIT state that are associated with the Tomcat process. Some monitoring with tcpdump shows that the external hosts appear to be closing the connection after some time, but it's not getting closed on our end. Usually there is some data in the socket read queue, often 24 bytes. The connections use HTTPS and the data seems to be encrypted, so I'm not sure what it is.
I have checked to be sure that the ClientRequest objects that get created are closed. The sockets in CLOSE_WAIT do seem to get recycled and we're not running out of any resources, at least at this time. I'm not sure what's happening on the external servers.
My question is, is this normal and should I be concerned?
Thanks,
John
This is likely to be a device such as the firewall or the remote server timing out the TCP session. You can analyze packet captures of HTTPS using Wireshark as described on their SSL page:
http://wiki.wireshark.org/SSL
The staleCheckingEnabled flag only performs the check when you actually go to use a connection, so you are not consuming network resources (TCP sessions) when they are not needed.
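If the CLOSE_WAIT sockets do become a problem, one common mitigation with the commons-httpclient MultiThreadedHttpConnectionManager from the question is to evict idle and already-closed connections on a schedule, so half-closed sockets are released promptly. A rough sketch - the 30-second idle timeout and reaper interval are arbitrary choices, and passing the manager into the Jersey ApacheHttpClient configuration is left out here:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;

public class ConnectionReaper {
    public static void main(String[] args) {
        final MultiThreadedHttpConnectionManager manager = new MultiThreadedHttpConnectionManager();
        manager.getParams().setDefaultMaxConnectionsPerHost(1000);
        manager.getParams().setMaxTotalConnections(1000);
        manager.getParams().setStaleCheckingEnabled(true);

        // Periodically close connections that have been idle for 30 seconds and
        // remove connections the remote side has already closed (CLOSE_WAIT).
        ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();
        reaper.scheduleAtFixedRate(new Runnable() {
            public void run() {
                manager.closeIdleConnections(30000L);
                manager.deleteClosedConnections();
            }
        }, 30, 30, TimeUnit.SECONDS);

        // ... build the Jersey ApacheHttpClient around "manager" here ...
    }
}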

Too Many Files Open error in java NIO

Hi, I have created a server and client program using Java NIO.
My server and client are on different computers; the server runs Linux and the client runs Windows. When I create 1024 sockets on the client, the client machine handles them fine, but on the server I get a "too many open files" error.
So how can I open 15000 sockets on the server without any error?
Or is there any other way to connect 15000 clients at the same time?
Thanks
Bapi
Ok, questioning why he needs 15K sockets is a separate discussion.
The answer is that you are hitting the user's file descriptor limit.
Log in as the user that will run the listener and run ulimit -n to see the current limit.
Most likely it is 1024.
As root, edit the file /etc/security/limits.conf
and set:
{username} soft nofile 65536
{username} hard nofile 65536
65536 is just a suggestion; you will need to work out the right value for your app.
Log off, log in again and recheck with ulimit -n to confirm it worked.
You are probably going to need more than 15K fds for all that. Monitor your app with lsof.
Like this:
$lsof -p {pid} <- lists all file descriptors
$lsof -p {pid} | wc -l <- count them
By the way, you might also hit the system wide fd limit, so you need to check it:
$cat /proc/sys/fs/file-max
To increase that one, add this line to /etc/sysctl.conf:
#Maximum number of open FDs
fs.file-max = 65535
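Raising the file-descriptor limits is the main fix, but at 15K+ connections it also matters that the server does not dedicate a blocking thread to every socket. Since the question already uses Java NIO, here is a minimal single-threaded selector loop as a sketch of the shape such a server usually takes - the port number and buffer size are arbitrary, and real code needs per-connection state and error handling:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(9090));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                              // wait until something is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept(); // one file descriptor per client
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();                     // client disconnected, release the fd
                    }
                    // ... handle "read" bytes from the buffer here ...
                }
            }
        }
    }
}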
Why do you need to have 15000 sockets on one machine? Anyway, look at ulimit -n
If you're going to have 15,000 clients talking to your server (and possibly 200,000 in the future according to your comments) then I suspect you're going to have scalability problems servicing those clients once they're connected (if they connect).
I think you may need to step back and look at how you can architect your application and/or deployment to successfully achieve these sorts of numbers.
