How to get maxConnections of an application running on Apache Tomcat - java

I implemented an HttpSessionListener which counts the active sessions of our JSF web app. That works fine.
Is there a way to find out how many connections are allowed? Our JSF web app is running on several Tomcat instances with different settings.
What I need is a Java method that returns the maximum number of allowed connections in the environment, so that I can show a warning if the connections are about to run out.
Any ideas?
Thanks a lot

It feels like you should rather configure this in your monitoring environment, e.g. by monitoring JMX.
The maximum number of connections (something completely different from the number of sessions) can be configured in server.xml - check maxConnections:
The maximum number of connections that the server will accept and
process at any given time. When this number has been reached, the
server will accept, but not process, one further connection. This
additional connection will be blocked until the number of connections being
processed falls below maxConnections at which point the server will
start accepting and processing new connections again. Note that once
the limit has been reached, the operating system may still accept
connections based on the acceptCount setting. The default value is
8192.
For NIO/NIO2 only, setting the value to -1 will disable the
maxConnections feature and connections will not be counted.
What you see here is that it does not only depend on Tomcat, but also on your operating system. And, of course, on your application: it might fold long before that number is reached.
I consider a system that only monitors the number of connections, and not the other resources, to be poorly tuned: if you find out that your configured connection maximum is saturated but the system could technically "survive" twice as many, who configured it? And if your page load time becomes unacceptable at half of the configured connections, who cares how many connections are still available?
And if a reverse proxy is involved, it could queue some additional connections as well...
To come back to your original question: JMX might be a solution (find the relevant bean). Or, if you want to go homegrown, a servlet filter can keep track of the number of currently handled requests.
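If you want to go the filter route, here is a minimal sketch (the class name and the warning threshold are made up for illustration; in a real setup you would derive the threshold from the connector's configured maxConnections, for example by reading it once via JMX):

import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class InFlightRequestFilter implements Filter {

    // Number of requests currently being processed by this web application.
    private static final AtomicInteger IN_FLIGHT = new AtomicInteger();

    // Hypothetical threshold; ideally derived from the connector's maxConnections.
    private static final int WARN_THRESHOLD = 1000;

    public static int currentInFlight() {
        return IN_FLIGHT.get();
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        int active = IN_FLIGHT.incrementAndGet();
        if (active > WARN_THRESHOLD) {
            // Replace with your own logging / warning mechanism.
            System.err.println("Warning: " + active + " requests currently in flight");
        }
        try {
            chain.doFilter(request, response);
        } finally {
            IN_FLIGHT.decrementAndGet();
        }
    }

    @Override
    public void destroy() {
    }
}

Map the filter to /* in web.xml (or annotate it with @WebFilter("/*")) so every request passes through it. Note that this only counts requests inside your application; connections held open by keep-alive but currently idle are not visible to a filter.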

Related

Jetty maxes out number of Connectors

We start the Jetty server in Java and have a pretty straightforward process for that: the API gets called, the server starts. There is no magic happening while starting Jetty; it's a static process that shouldn't depend on anything. However, we experience issues with some startups, where Jetty tries to open (presumably endlessly many) connectors and maxes out the ThreadPool's size. Successful runs have exactly one ServerConnector, as I would expect with a 4-core CPU and no ServerConnector count set.
We also tried to work around the problem by explicitly setting the number of ServerConnectors to one, but Jetty would still ramp up the count and fail to start because not enough threads were available.
Even more curiously, another API user (a different application) never had this issue once. This has all been tested on the same machine, same OS, often even on the same day.
We use Jetty 9.4.38. This is what the exception says:
could not subscribe connector
java.lang.IllegalStateException: Insufficient configured threads: required=200 < max=200 for QueuedThreadPool[qtp370296980]#16124894{STARTED,8<=144<=200,i=0,r=-1,q=0}[ReservedThreadExecutor#3825f21{s=0/6,p=0}]
at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:320)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
The ThreadPoolBudget being reported indicates that you have some combination of technologies and/or components that require a thread load of 200 threads, but your max threads is exactly 200.
The output from INFO logging of the named logger org.eclipse.jetty.util.thread.ThreadPoolBudget will report the lease / budget of the various components you have.
Output will have a format indicating ...
INFO: <component> requires <num> threads from <thread-pool>
That will tell you what components are requesting that amount of threads.
The <component> will include the hashcode / object identity of the component. If you see the same object instance repeated (e.g. Foo#1234 and Foo#789 would be different object instances, but Bar#575 and Bar#575 would be the same object instance), then you have a bad initialization of your Server (likely not using the LifeCycle properly).
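For reference, a minimal embedded Jetty 9.4 setup along those lines might look like the following (pool sizes and port are example values, not taken from the question); the thread pool budget is sized explicitly and exactly one ServerConnector instance is created and added once:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class SingleConnectorServer {
    public static void main(String[] args) throws Exception {
        // Size the pool so that all components' leases fit within it.
        QueuedThreadPool threadPool = new QueuedThreadPool(300, 10); // max, min
        Server server = new Server(threadPool);

        // Create exactly one connector and add this instance exactly once.
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);

        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response) {
                response.setStatus(HttpServletResponse.SC_OK);
                baseRequest.setHandled(true);
            }
        });

        server.start();
        server.join();
    }
}

If connectors are created in a loop or the same Server is started repeatedly, each ServerConnector leases additional threads from the budget, which is the pattern the exception above points at.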

Failed to connect to Tomcat server on ec2 instance

UPDATE:
My goal is to learn what factors could overwhelm my little Tomcat server, and, when some exception happens, what I could do to resolve or remediate it without moving my server to a better machine. This is not a real app in a production environment but just my own experiment (besides changes on the server side, I may also do something on the client side).
Both my client and server are very simple: the server only checks the URL format and sends a 201 code if it is correct. Each request sent by my client only includes a simple JSON body. There is no database involved. The two machines (t2.micro) only run the client and the server respectively.
My client uses OkHttpClient. To avoid timeout exceptions, I already set a timeout of 1,000,000 milliseconds via setConnectTimeout, setReadTimeout, and setWriteTimeout. I also went to $CATALINA/conf/server.xml on my server and set connectionTimeout="-1" (infinite).
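For illustration, with the OkHttp 2.x API that exposes those setters, the described client configuration would look roughly like this (the exact OkHttp version is not stated in the question):

import java.util.concurrent.TimeUnit;
import com.squareup.okhttp.OkHttpClient;

public class ClientSetup {
    public static void main(String[] args) {
        OkHttpClient client = new OkHttpClient();
        // Very generous timeouts as described above; these are the questioner's
        // values, not a recommendation.
        client.setConnectTimeout(1_000_000, TimeUnit.MILLISECONDS);
        client.setReadTimeout(1_000_000, TimeUnit.MILLISECONDS);
        client.setWriteTimeout(1_000_000, TimeUnit.MILLISECONDS);
    }
}

(In OkHttp 3.x the same settings are applied through OkHttpClient.Builder instead of setters.)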
ORIGINAL POST:
I'm trying to stress my server by having a client launch 3000+ threads that send HTTP requests to it. Client and server reside on different EC2 instances.
Initially I encountered some timeout issues, but after I set the connect, read and write timeouts to a bigger value, that exception was resolved. However, with the same setup I'm now getting a java.net.ConnectException: Failed to connect to my_host_ip:8080 exception, and I do not know its root cause. I'm new to multithreading and distributed systems; can anyone please give me some insight into this exception?
Below are some screenshots from my EC2 instances:
1. Client:
2. Server:
Having gone through a similar exercise in the past, I can say that there is no definitive answer to the problem of scaling.
Here are some general troubleshooting steps that may lead to more specific information. I would suggest running tests in which you tweak a few parameters at a time and measure the changes in CPU, logs, etc.
Please provide the value you have set for the timeout. Increasing the timeout could cause your server (or client) to run out of threads quickly (because each thread can be busy for longer). Question the need for increasing the timeout: is there any processing that slows your server down?
Check the application logs, JVM usage and memory usage on the client and the server. There will be some hints there.
Your client seems to hit 99%+ CPU and then come down. This implies that there could be a problem on the client side in that it maxes out during the test. You might want to resize your client to be able to do more.
Look at open file handles. The number should be sufficiently high.
Tomcat has a limit on the number of threads available to handle load. You can check this in server.xml and, if required, change it to handle more. However, the CPU doesn't actually max out on the server side, so this is unlikely to be the problem.
If you use a database, check its performance as well as the JDBC connection settings; there is thread and timeout configuration at the JDBC level too.
Is response compression set up on Tomcat? It will give much better throughput on the server, especially if the data sent back by each request is more than a few KB.
--------Update----------
Based on the update to the question, a few more thoughts.
Since the application is fairly simple, the path in terms of stressing the server should be to start low and increase the load in increments whilst monitoring various things (CPU, memory, JVM usage, file handle count, network I/O); a rough driver for this is sketched below.
The increments of load should be spread over several runs.
Start with something as low as 100 parallel threads.
Record as much information as you can after each run and if the server holds up well, increase load.
Suggested increments 100, 200, 500, 1000, 1500, 2000, 2500, 3000.
At some level you will see that the server can no longer take it. That would be your breaking point.
As you increase the load and monitor, you will likely discover patterns that suggest tuning specific parameters. Each tuning attempt should then be tested against the same level of multithreading. The improvement, if any, will be obvious from the monitoring.
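A minimal sketch of such an incremental load driver, assuming a plain HTTP request against an example URL (the target, levels and timeouts below are illustrative, not taken from the question):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class IncrementalLoadTest {

    private static final String TARGET = "http://my_host_ip:8080/"; // example target
    private static final int[] LEVELS = {100, 200, 500, 1000, 1500, 2000, 2500, 3000};

    public static void main(String[] args) throws Exception {
        for (int level : LEVELS) {
            runLevel(level);
        }
    }

    private static void runLevel(int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        AtomicLong totalMillis = new AtomicLong();
        AtomicLong failures = new AtomicLong();

        for (int i = 0; i < threads; i++) {
            pool.execute(() -> {
                long start = System.currentTimeMillis();
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(TARGET).openConnection();
                    conn.setConnectTimeout(10_000);
                    conn.setReadTimeout(10_000);
                    conn.getResponseCode();
                    try (InputStream in = conn.getInputStream()) {
                        while (in.read() != -1) { /* drain the response */ }
                    }
                } catch (IOException e) {
                    failures.incrementAndGet();
                } finally {
                    totalMillis.addAndGet(System.currentTimeMillis() - start);
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("level=%d avgMillis=%d failures=%d%n",
                threads, totalMillis.get() / threads, failures.get());
    }
}

Recording the printed numbers together with the server-side metrics after each level makes the breaking point, and what breaks first, much easier to spot.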

High latency with Tomcat first request

We have an application which is using embedded Tomcat version 7.0.32. I am observing a peculiar situation with respect to latency.
I am doing some load tests on the application. What I have observed is that the very first request to Tomcat takes quite a long time, about 300+ ms, while subsequent requests take about 10-15 ms.
I am using a BIO connector. I know that persistent connections are used since I am using HTTP 1.1, which supports them by default. So ideally only one TCP connection is created and all requests are pushed over the same connection until the keep-alive timeout elapses.
I get that creating a TCP connection has some cost involved, but the difference is just too large.
Any idea what could be causing this huge difference in latency between the first and subsequent requests, and can we do anything to reduce or eliminate it?
Thanks,
Vikram
If you are using JSPs, they are compiled on the first request.
If you are connecting to databases, the connection pool might still be empty before the first request.
Generally speaking, if you have singletons which are initialized lazily, the first request has to wait.
On top of this, the JIT plays its role: after the first requests, the JIT might have applied some optimizations.
If it is a load test (or performance test), I would just ignore the first requests/runs, because this is still the "warm up" phase.
Update
You might find the information regarding a micro benchmark interesting.
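One simple way to do this is to fire a batch of throwaway requests before the measured part of the test starts; a minimal sketch (the URL and request count are arbitrary examples):

import java.net.HttpURLConnection;
import java.net.URL;

public class WarmUp {
    public static void main(String[] args) throws Exception {
        String target = "http://localhost:8080/myapp/"; // example URL
        // These requests trigger JSP compilation, lazy initialization and some
        // JIT optimization before any latency is recorded.
        for (int i = 0; i < 50; i++) {
            HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
            conn.getResponseCode();
            conn.disconnect();
        }
        // ... start the measured load test here ...
    }
}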

Glassfish thread pool issues

We're using Glassfish 3.0.1 and experiencing very long response times, in the order of 5 minutes for 25% of our POST/PUT requests; by the time the response comes back, the front-facing load balancer has timed out.
My theory is that the requests are queuing up and waiting for an available thread.
The reason I think this is that the access logs reveal that the requests take only a few seconds to complete; however, the time at which the requests are executed is five minutes later than I'd expect.
Does anyone have any advice for debugging what is going on with the thread pools, or what the optimum settings should be for them?
Is it required to do a thread dump periodically, or will a one-off dump be sufficient?
At first glance, this seems to have very little to do with the threadpools themselves. Without knowing much about the rest of your network setup, here are some things I would check:
Is there a dead/nonresponsive node in the load balancer pool? This can cause all requests to be tried against this node until they fail due to timeout before being redirected to the other node.
Is there some issue with initial connections between the load balancer and the Glassfish server? This can be slow or incorrect DNS lookups (though the server should cache results), a missing proxy, or some other network-related problem.
Have you checked that the clocks are synchronized between the machines? This could cause the logs to get out of sync. 5min is a pretty strange timeout period.
If all these come up empty, you may simply have an impedance mismatch between the load balancer and the web server and you may need to add webservers to handle the load. The load balancer should be able to give you plenty of stats on the traffic coming in and how it's stacking up.
Usually you get this behaviour if you have not configured enough worker threads in your server. Default values range from 15 to 100 threads in common web servers. However, if your application blocks the server's worker threads (e.g. by waiting for queries), the defaults are frequently way too low.
You can increase the number of workers up to 1000 without problems (make sure you are running 64-bit). Also check the number of worker threads (sometimes referred to as 'max concurrent/open requests') of any in-between server (e.g. a proxy, or an Apache forwarding via mod_proxy).
Another common pitfall is your software sending requests to itself (e.g. trying to reroute or forward a request) while blocking an incoming request.
Taking thread dumps is the best way to debug what is going on with the thread pools. Take 3-4 thread dumps one after another, with a 1-2 second gap between each dump.
From a thread dump, you can identify the worker threads by their names. Compare the multiple dumps to find long-running threads.
You may use the TDA tool (http://java.net/projects/tda/downloads/download/tda-bin-2.2.zip) for analyzing thread dumps.
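If running jstack against the server process is awkward, a small in-JVM helper based on the JDK's ThreadMXBean can produce comparable dumps; a minimal sketch (it only sees the threads of the JVM it runs in, so it would have to be triggered from inside the server):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class PeriodicThreadDump {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Take 4 dumps, 2 seconds apart, as suggested above.
        for (int dump = 1; dump <= 4; dump++) {
            System.out.println("=== thread dump " + dump + " ===");
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                System.out.println(info.getThreadName() + " - " + info.getThreadState());
            }
            Thread.sleep(2000);
        }
    }
}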

Tomcat not recovering from excess traffic

When my Tomcat (6.0.20) maxThreads limit is reached, I get the expected error:
Maximum number of threads (XXX) created for connector with address null and port 80
Requests then start hanging in the queue and eventually time out. So far, so good.
The problem is that when the load goes down, the server does not recover and is forever paralysed, instead of coming back to life.
Any hints?
Consider switching to NIO; then you don't need to worry about the technical requirement of one thread per connection. Without NIO, the limit is about 5K threads (5K HTTP connections), and then it blows up like that. With NIO, Java can manage multiple connections with a single thread, so the limit is much higher; the boundary is practically the available heap memory, and with about 2 GB you can go up to 20K connections.
Configuring Tomcat to use NIO is as simple as changing the protocol attribute of the <Connector> element in /conf/server.xml to "org.apache.coyote.http11.Http11NioProtocol".
I think this may be a bug in Tomcat; according to the issue
https://issues.apache.org/bugzilla/show_bug.cgi?id=48843
it should be fixed in Tomcat 6.0.27 and 5.5.30.
