Failed HTTP requests on Tomcat - Java

My web app is running on 64-bit Java 6.0.23, Tomcat 6.0.29 (with Apache Portable Runtime 1.4.2), on Linux (CentOS). Tomcat's JAVA_OPTS includes -Xincgc, which is supposed to help prevent long garbage collections.
The app is under heavy load and has intermittent failures, and I'd like to troubleshoot it.
Here is the symptom: Very intermittently, an HTTP client will send an HTTP request to the web app and get an empty response back.
The app doesn't use a database, so it's definitely not a problem with JDBC connections. So I figure the problem is one of: memory (perhaps long garbage collections), running out of threads, or running out of file descriptors.
I used JavaMelody to view the number of threads in use, and it seems that maxThreads is set high enough that we are not running out of threads. Similarly, we have the number of available file descriptors set to a very high number.
The app does use a lot of memory. Does it seem like memory is probably the culprit here, or is there something else that I might be overlooking?
I guess my main confusion, though, is why garbage collections would cause HTTP requests to fail. Intuitively, I would guess that a long garbage collection might cause an HTTP request to take a long time to run, but I would not guess that a long garbage collection would cause an HTTP request to fail.
Additional info in response to Jon Skeet's comments...
The client is definitely not timing out. The empty response happens fairly quickly. When it fails, there is no data and no HTTP headers.

I very much doubt that garbage collection is responsible for the issue.
You really really need to find out exactly what this "empty response" consists of:
Does the server just chop the connection?
Does the client perhaps time out?
Does the server give a valid HTTP response but with no data?
Each of these could suggest very different ways of finding out what's going on. Determining the failure mode should be your primary concern, IMO. Until you know that, it's complete guesswork.
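One low-tech way to distinguish these cases is to bypass the HTTP client library entirely and read the raw bytes off a socket. A minimal sketch (host, port, and path are placeholders):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class RawHttpProbe {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("your-server-host", 8080); // placeholder host/port
        try {
            OutputStream out = socket.getOutputStream();
            out.write(("GET /your/path HTTP/1.1\r\n"
                    + "Host: your-server-host\r\n"
                    + "Connection: close\r\n\r\n").getBytes("US-ASCII"));
            out.flush();
            // Read until EOF. Zero bytes before EOF means the server closed
            // the connection without writing any HTTP response at all.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
                total += n;
            }
            System.out.flush();
            System.err.println("--- connection closed after " + total + " bytes ---");
        } finally {
            socket.close();
        }
    }
}

If the probe prints nothing before the connection closes, the server (or something in front of it) dropped the connection without writing a response; a status line and headers with no body points to a different failure mode.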

Failed to connect to Tomcat server on ec2 instance

UPDATE:
My goal is to learn what factors could overwhelm my little Tomcat server and, when some exception happens, what I can do to resolve or remediate it without switching my server to a better machine. This is not a real app in a production environment, just my own experiment. (Besides changes on the server side, I may also do something on my client side.)
Both my client and server are very simple: the server only checks the URL format and sends a 201 code if it is correct. Each request sent from my client only includes a simple JSON body. There is no database involved. The two machines (t2.micro) run only the client and the server, respectively.
My client uses OkHttpClient. To avoid timeout exceptions, I already set a timeout of 1,000,000 ms via setConnectTimeout, setReadTimeout, and setWriteTimeout. I also went to $CATALINA/conf/server.xml on my server and set connectionTimeout="-1" (infinite).
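For reference, the client-side configuration amounts to something like this (a sketch against the OkHttp 2.x API that those setter names imply):

import com.squareup.okhttp.OkHttpClient;
import java.util.concurrent.TimeUnit;

public class ClientSetup {
    public static OkHttpClient newClient() {
        OkHttpClient client = new OkHttpClient();
        // Effectively disable client-side timeouts for the experiment.
        client.setConnectTimeout(1000000, TimeUnit.MILLISECONDS);
        client.setReadTimeout(1000000, TimeUnit.MILLISECONDS);
        client.setWriteTimeout(1000000, TimeUnit.MILLISECONDS);
        return client;
    }
}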
ORIGINAL POST:
I'm trying to stress-test my server by having a client launch 3000+ threads sending HTTP requests to it. My client and server reside on different EC2 instances.
Initially, I encountered some timeout issues, but after I set the connection, read, and write timeouts to larger values, those exceptions were resolved. However, with the same configuration, I'm now getting a java.net.ConnectException: Failed to connect to my_host_ip:8080 exception, and I do not know its root cause. I'm new to multithreading and distributed systems; can anyone give me some insight into this exception?
Below are monitoring screenshots from my EC2 instances (images not reproduced here):
1. Client: [screenshot]
2. Server: [screenshot]
Having gone through a similar exercise in the past, I can say that there is no definitive answer to the problem of scaling.
Here are some general troubleshooting steps that may lead to more specific information. I would suggest running tests that tweak a few parameters each time, and measuring the changes in CPU, logs, etc.
Please share what value you have used for the timeout. Increasing the timeout could cause your server (or client) to run out of threads quickly (because each thread can stay busy for longer). Question the need for increasing the timeout: is there any processing that slows your server down?
Check application logs, JVM usage, and memory usage on the client and server. There will be some hints there.
Your client seems to be hitting 99%+ CPU and then coming down. This implies there could be a problem on the client side, in that it maxes out during the test. You might want to resize your client to be able to do more.
Look at open file handles. The number should be sufficiently high.
Tomcat has a limit on the thread count it uses to handle load. You can check this in server.xml and, if required, change it to handle more. That said, the CPU doesn't actually max out on the server side, so this is unlikely to be the problem.
If you use a database, check its performance. Also check the JDBC connection settings; there is thread and timeout configuration at the JDBC level as well.
Is response compression set up on Tomcat? It will give much better throughput on the server, especially if the data sent back by each request is more than a few KB (see the Connector sketch below).
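For illustration, both the thread limit and compression mentioned above live on the Connector element in $CATALINA/conf/server.xml. A sketch with example values (not recommendations):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="400"
           acceptCount="200"
           compression="on"
           compressionMinSize="2048"
           compressableMimeType="application/json,text/plain" />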
--------Update----------
Based on the update to the question, a few more thoughts.
Since the application is fairly simple, the approach to stressing the server should be to start low and increase load in increments while monitoring various things (CPU, memory, JVM usage, file handle count, network I/O).
The increments of load should be spread over several runs.
Start with something as low as 100 parallel threads.
Record as much information as you can after each run and if the server holds up well, increase load.
Suggested increments 100, 200, 500, 1000, 1500, 2000, 2500, 3000.
At some level you will see that the server can no longer take it. That would be your breaking point.
As you increase load and monitor, you will likely discover patterns that suggest tuning specific parameters. Each tuning attempt should then be tested against the same level of multithreading; any improvement will be obvious from the monitoring. A minimal harness for this is sketched below.
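A minimal ramp-up harness along these lines (a sketch; the URL is a placeholder, and only failure counts are recorded):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class RampUpTest {
    public static void main(String[] args) throws Exception {
        int[] levels = {100, 200, 500, 1000, 1500, 2000, 2500, 3000};
        for (int threads : levels) {
            final AtomicInteger failures = new AtomicInteger();
            final CountDownLatch done = new CountDownLatch(threads);
            for (int i = 0; i < threads; i++) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            // Placeholder URL: point at the server under test.
                            HttpURLConnection c = (HttpURLConnection)
                                    new URL("http://my_host_ip:8080/").openConnection();
                            c.getResponseCode();
                            c.disconnect();
                        } catch (Exception e) {
                            failures.incrementAndGet();
                        } finally {
                            done.countDown();
                        }
                    }
                }).start();
            }
            done.await();
            System.out.println(threads + " threads -> " + failures.get() + " failures");
            // Record CPU, memory, JVM and file handle stats here before the next level.
        }
    }
}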

High latency with Tomcat first request

We have an application which is using embedded Tomcat version 7.0.32. I am observing a peculiar situation with respect to latency.
I am running some load tests against the application; what I have observed is that the very first request to Tomcat takes quite a long time, around 300+ ms. Subsequent requests take about 10-15 ms.
I am using the BIO connector. I know that persistent connections are used, since I am using HTTP/1.1, which supports them by default. So ideally only one TCP connection is created and all requests are pushed over the same connection, until the keep-alive timeout elapses.
I understand that creating a TCP connection has some cost, but the difference is just too large.
Any idea what could be causing this huge difference in latency between the first and subsequent requests, and can we do anything to reduce or eliminate it?
Thanks,
Vikram
If you are using JSPs, they are compiled on the first request.
If you are connecting to databases, the connection pool might still be empty.
Generally speaking, if you have singletons which are initialized lazily, the first request has to wait.
On top of this, the JIT plays its role: after the first request, the JIT might have applied some optimizations.
If it is a load test (or performance test), I would just ignore the first requests/runs, because this is still the "warm-up" phase.
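In benchmark code, that just means discarding the first iterations. A sketch (sendRequest() is a hypothetical helper that issues one HTTP request):

// Ignore the warm-up phase: JSP compilation, pool initialization and
// JIT optimization all happen during the first requests.
int warmup = 50;
int measured = 500;
for (int i = 0; i < warmup; i++) {
    sendRequest(); // results deliberately discarded
}
long totalNanos = 0;
for (int i = 0; i < measured; i++) {
    long t0 = System.nanoTime();
    sendRequest();
    totalNanos += System.nanoTime() - t0;
}
System.out.println("avg ms: " + (totalNanos / measured) / 1000000.0);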
Update
You might find the information regarding micro benchmarks interesting.

Java - first http request to specific host significantly slower

I am writing a benchmarking tool to run against a web application. The problem that I am facing is that the first request to the server always takes significantly longer than subsequent requests.
I have experienced this problem with the Apache HTTP client 3.x, 4.x and the Google HTTP client. The Apache HTTP client 4.x shows the biggest difference (the first request takes about seven times longer than subsequent ones; for Google and 3.x it is about 3 times longer).
My tool has to be able to benchmark simultaneous requests using threads. I cannot use one instance of e.g. HttpClient and call it from all the threads, since this throws a concurrency exception. Therefore, I have to use an individual instance in each thread, which only executes a single request. This changes the overall results dramatically.
I do not understand this behavior. I do not think it is due to a caching mechanism on the server, because (a) the webapp under consideration does not employ any caching (to my knowledge) and (b) the effect is also visible when first requesting www.hostxy.com and afterwards www.hostxy.com/my/webapp.
I use System.nanoTime() immediately before and after calling client.execute(get) or get.execute(), respectively.
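For reference, the measurement is essentially the following (a sketch against the Apache HttpClient 4.3+ API, using the host named above):

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class TimedGet {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        HttpGet get = new HttpGet("http://www.hostxy.com/my/webapp");
        long start = System.nanoTime();
        HttpResponse response = client.execute(get);
        long elapsed = System.nanoTime() - start;
        EntityUtils.consume(response.getEntity()); // release the connection
        System.out.println("request took " + elapsed / 1000000 + " ms");
        client.close();
    }
}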
Does anyone have an idea where this behavior stems from? Do these httpclients themselves do any caching? I would be very grateful for any hints.
Read this for connection pooling: http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html
Your first connection probably takes the longest because it is a keep-alive connection (Connection: keep-alive); subsequent requests can reuse that connection once it has been established. This is just a guess, though.
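On that note, a single HttpClient instance can in fact be shared across threads when it is backed by a pooling connection manager, which also preserves the keep-alive reuse described above. A sketch against the HttpClient 4.3+ API:

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class SharedClient {
    public static CloseableHttpClient create() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(100);          // total connections across all routes
        cm.setDefaultMaxPerRoute(20); // connections per target host
        // One shared, thread-safe client; worker threads reuse pooled connections.
        return HttpClients.custom().setConnectionManager(cm).build();
    }
}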
Are you hitting a JSP for the first time after server start? If the server flushes its work directory on each start, then on the first hit the JSPs are compiled, and that takes a long time.
Also done on the first transaction: if the transaction uses a CA cert trust store, it will be loaded and cached.
You should also have a look at caching: http://hc.apache.org/httpcomponents-client-ga/tutorial/html/caching.html
If your problem is that "first http request to specific host significantly slower", maybe the cause of this symptom is on the server, while you are concerned about the client.
If the "specific host" that you are calling is an Google App Engine application (or any other Cloud Plattform), it is normal that the first call to that application make you wait a little more. That is because Google put it under a dormant state upon inactivity.
After a recent call (that usually takes longer to respond), the subsequent calls have faster responses, because the server instances are all awake.
Take a look at this:
Google App Engine Application Extremely slow
I hope it helps you, good luck!

How to diagnose leaked http connections (org.apache.http.impl.conn.tsccm.ConnPoolByRoute)

I have a multithreaded Java program that runs on Amazon's EC2. It queries and fetches data items from a vendor via HttpPost and HttpGet, using an org.apache.http.impl.client.DefaultHttpClient. Concurrently, it pushes the retrieved data items into S3 using AWS's Java SDK.
After a few days of running, I get the symptoms that normally come with http connection leaks:
org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute.getEntryBlocking(ConnPoolByRoute.java:417)
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute$1.getPoolEntry(ConnPoolByRoute.java:300)
at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager$1.getConnection(ThreadSafeClientConnManager.java:224)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:391)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
Since both AWS and my requests to the data vendor use HTTP connections, I am not quite sure where exactly I forget to call HttpEntity.consume() or S3ObjectInputStream.close() (unless it is something else entirely...).
So here is my question: are there ways to monitor org.apache.http.impl.conn.tsccm.ConnPoolByRoute so that I can at least detect when I am starting to leak connections (entities not properly consumed, HTTP streams not closed)? I have a feeling it happens only under certain conditions, e.g. when certain exceptions are thrown, bypassing the logic in my code that consumes HttpEntities, closes streams, etc. Any idea on how to diagnose what eventually causes all my HTTP connections to fail with that ConnectionPoolTimeoutException would be most welcome. I don't feel like waiting 4+ days between attempts to fix the root cause of the problem.
If you're using the PoolingClientConnectionManager, note that there are the methods getTotalStats() and getStats(final HttpRoute route), which will give you a PoolStats object with the data you're looking to monitor.
Just fetch the ConnectionManager from your httpclient:
PoolingClientConnectionManager poolManager = (PoolingClientConnectionManager) httpClient.getConnectionManager();
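A small watchdog built on top of that can log the pool state periodically; a slow leak then shows up as leased climbing toward max while available stays near zero (a sketch; the one-minute interval is arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.apache.http.pool.PoolStats;

final PoolingClientConnectionManager poolManager =
        (PoolingClientConnectionManager) httpClient.getConnectionManager();
Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(new Runnable() {
    public void run() {
        PoolStats stats = poolManager.getTotalStats();
        System.out.println("leased=" + stats.getLeased()
                + " pending=" + stats.getPending()
                + " available=" + stats.getAvailable()
                + " max=" + stats.getMax());
    }
}, 1, 1, TimeUnit.MINUTES);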
If you can access the org.apache.http.impl.conn.tsccm.ConnPoolByRoute, then set its connTTL to a low enough value so that its WaitingThreadAborter will eventually terminate a connection; it will show a nice stack trace there. The other option is to use CGLIB or some other bytecode manipulation framework to create a proxy class wrapping org.apache.http.impl.conn.tsccm.ConnPoolByRoute. Depending on your environment it might not be that easy to set up, but it's a rather valuable tool for debugging issues like yours. (And yes, if you happen to use Spring or just plain aspects, the setup will be super easy.)
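Whatever the diagnosis turns up, the usual defensive pattern is to consume the entity in a finally block, so an exception in the processing code cannot bypass the cleanup (a sketch; httpClient and url stand for whatever your code already uses):

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.util.EntityUtils;

HttpResponse response = httpClient.execute(new HttpGet(url));
try {
    // ... read and process response.getEntity() ...
} finally {
    // Runs even if processing throws, so the connection is always
    // returned to ConnPoolByRoute instead of leaking.
    EntityUtils.consume(response.getEntity());
}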

What are some common SocketExceptions and what is causing them?

I've been caught catching SocketExceptions belonging to subspecies such as Broken pipe and Connection reset. The question is what to do with the slippery bastards once they're caught.
Which ones may I happily ignore and which need further attention? I'm looking for a list of different SocketExceptions and their causes.
In terms of Java web development, a Broken pipe or a Connection reset basically means that the other side has closed the connection. This can, among other things, be caused by the client pressing Esc while the request is still running, or navigating away via a link/bookmark/address bar while the request is still running. You see this particular error often during long-running requests such as large file downloads and unnecessarily large/slow business tasks (which is not good for the impatient user; about 3 seconds is really the max). In rare cases it can also be caused by a hardware/network problem, such as a network outage on either the server or the client side.
This exception can be thrown when flush() or close() is invoked on the output stream of the response. As the server side, you cannot do anything about it. You cannot recover from it, as you cannot (re)connect to the client due to security restrictions in HTTP. In most cases you shouldn't even try to, because this is often the client's own decision. Just ignore it, or log it purely for statistics.
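"Just ignore it or log it" typically looks like this in a servlet (a sketch; the logger is hypothetical):

import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServletResponse;

void stream(HttpServletResponse response, byte[] data) {
    try {
        OutputStream out = response.getOutputStream();
        out.write(data);
        out.flush(); // Broken pipe / connection reset typically surfaces here
    } catch (IOException e) {
        // The client went away; nothing to recover. Log for statistics only.
        logger.debug("Client aborted the response: " + e.getMessage());
    }
}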
Another common cause is the TCP/IP stack configuration of the operating system. I haven't tried this on Linux yet, but one platform I've worked on is Sun's Solaris 9/10. The basic idea is that Solaris has a tunable TCP/IP stack which you can adjust while running your web applications.
So there are a few parameters that you should be aware of:
tcp_conn_req_max_q0 - queue of incomplete handshakes
tcp_conn_req_max_q - queue of completed handshakes
tcp_keepalive_interval - keepalive probing interval
tcp_time_wait_interval - how long a closed connection's state is kept around (the TIME_WAIT duration)
All of the above parameters affect how much load the system can take (from a TCP/IP perspective) and, on the flip side, affect the occurrence of certain types of SocketExceptions, such as the ones BalusC pointed out above.
This is obviously quite convoluted, but the point I'm trying to make is that the OS you host your apps on, more often than not, offers you mitigation strategies.
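On Solaris, those tunables are inspected and changed with ndd; example invocations (values are illustrative, not recommendations):

# Inspect current values
ndd -get /dev/tcp tcp_conn_req_max_q0
ndd -get /dev/tcp tcp_time_wait_interval
# Example changes (run as root)
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_time_wait_interval 60000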
