I have a REST (Jersey) web server using Tomcat 7 + Hibernate + Spring + Ehcache (as local cache).
The server randomly stops responding. I haven't been able to reproduce the hang, so it is hard to tell exactly when it happens. Once the server hangs, if I send a request, it doesn't even reach the server (I don't see any request coming in in the application log file).
I understand this is a very generic question, but where do I need to look to find out more?
After quite some time googling, I found out that I need to look at the catalina.out log file and at a heap dump for possible deadlocks, JDBC connection problems, etc.
Where/how do I get a heap dump? And where do I see any logs for JDBC connections?
I am using Spring + Hibernate with a transaction manager to manage transactions. Is there any particular configuration I need to specify on the data source?
It's very hard to give definitive advice on such a generic question.
Before going for a heap dump, I would start with a thread dump using the jstack tool found in a JDK install.
This could give you an idea of what your Tomcat is doing (or not doing) when it stops responding.
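If running jstack on the box isn't convenient, the same information is available programmatically through the JMX ThreadMXBean. This is a minimal sketch (the class name is illustrative, not from the question); it prints a stack trace per live thread and checks for deadlocked threads, much like jstack's own deadlock report:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;

public class ThreadDumper {
    // Print a stack trace for every live thread, similar to jstack output.
    public static void dumpThreads() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            System.out.print(info.toString());
        }
        // Also check for threads deadlocked on monitors or ownable synchronizers.
        long[] deadlocked = bean.findDeadlockedThreads();
        if (deadlocked != null) {
            System.out.println("Deadlocked thread ids: " + Arrays.toString(deadlocked));
        } else {
            System.out.println("No deadlock detected.");
        }
    }

    public static void main(String[] args) {
        dumpThreads();
    }
}
```

You could expose this through a servlet or a scheduled task so a dump is captured automatically the next time the server stops responding.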
I'm new to Java and Tomcat. I'm developing a website in Java using Spring MVC. It's deployed to a Linux server running Tomcat 8. Everything works fine when I deploy, and it connects to the database great. The issue is that the site seems to go idle very quickly. I haven't been able to time it exactly, but it seems like it only takes about a minute of inactivity for the entire site to go idle. Then the next request is extremely slow, loading in all my classes. I'm losing my sessions as well.
Is this a common occurrence? Does it sound like I'm doing something wrong in java? Tomcat? Both?
EDIT: In light of StuPointerException's comment, I've updated my database connection management. I'm now using Apache DBCP. I will update if this resolves the problem; I want to give my QA tester ample time to hit my site some more.
It's difficult to answer your question directly without more information about your server setup.
For what it's worth though, every time I see this kind of behaviour it's down to a misconfigured database connection pool; there can be significant overhead in creating new database connections.
If you don't use a connection pool, or you allow connections in the pool to die (due to missing validation queries/checks), then you will start to see performance problems over time due to connection timeouts.
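As an illustration only (bean name, driver, URL, and credentials below are placeholders, not taken from the question), a Spring bean definition for a Commons DBCP pool with a validation query might look like:

```xml
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
    <property name="username" value="dbuser"/>
    <property name="password" value="secret"/>
    <!-- Run this query on each borrow so stale connections are discarded -->
    <property name="validationQuery" value="SELECT 1"/>
    <property name="testOnBorrow" value="true"/>
    <!-- Periodically evict connections that have sat idle past the DB's timeout -->
    <property name="timeBetweenEvictionRunsMillis" value="60000"/>
    <property name="minEvictableIdleTimeMillis" value="300000"/>
</bean>
```

The key part is validationQuery plus testOnBorrow: without them, the pool will happily hand out connections the database has already closed.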
This is my first time asking a question on Stack Overflow. I recently configured an Ubuntu 16.04 virtual private server to host a web application. I run nginx in front of a Tomcat server that reads and writes to a MySQL database. The application runs fine except that Tomcat restarts itself once in a while, which results in a 500 error stemming from a "broken pipe" whenever anyone tries to log in (i.e. make a connection to the database).
I will post an image of the 500 next time it happens. I went into my VPS and looked at my Tomcat restart message. This is what I see: Tomcat status message.
I also did a little diving into the Tomcat logs and this is a log file that corresponds with that restart time: Tomcat log file
I did some research to try to solve this myself, but with no success. I believe that exit=143 means the process was terminated by another program or by the system itself. I have also moved mysql-connector-java.jar: I read that it should be located in the Tomcat/lib directory and not in the web application's WEB-INF. Perhaps I need to configure other settings.
Any help or direction would be much appreciated. I've fought this issue for a week, having learned much but accomplished little.
Thanks
Look at the timeline. It starts at 19:49:23.766 in the Tomcat log with this message:
A valid shutdown command was received via the shutdown port. Stopping the Server instance.
Exit code 143 is simply a result of that shutdown and doesn't indicate anything by itself.
The question you need answered is: Who sent that shutdown command, and why?
On a side note: the earlier messages indicate that Tomcat lost its connection to the database, and that you didn't configure a validation query. You should always configure one, since database connections in the connection pool will go stale, and that needs to be detected.
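For a Tomcat-managed pool, the validation query goes on the JNDI Resource in context.xml; a sketch (the resource name, URL, and credentials are placeholders, not from your setup):

```xml
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/mydb"
          username="dbuser" password="secret"
          maxActive="20" maxIdle="10"
          validationQuery="SELECT 1"
          testOnBorrow="true"/>
```

With testOnBorrow enabled, every connection is checked against the validation query before the webapp gets it, so a MySQL-side idle timeout no longer surfaces as a "broken pipe" in your application.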
Theory: Do you have some monitoring service running that tests your application being up? Does that monitoring detect a timed-out database connection, classify that as a hung webapp and auto-restart Tomcat?
While I don't think I can see to the core of the problem with your overall setup given the small excerpt of your log files, one thing strikes the eye. In the Tomcat log, there is the line
A valid shutdown command was received via the shutdown port. Stopping the server instance.
This explains why the server was restarted. Someone (some external process, a malicious attacker, a script, or whatever; it could be anything, depending on the setup of your server) sent a shutdown command to Tomcat's shutdown port (8005 by default), which made Tomcat shut down.
Refer to OWASP's recommendations for securing a Tomcat server instance for fixing this possible security hole.
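One common hardening step (assuming you start and stop Tomcat via its scripts or the service manager, not the shutdown port) is to disable the port entirely in conf/server.xml:

```xml
<!-- port="-1" disables the shutdown port entirely. If you must keep it,
     at least replace the well-known "SHUTDOWN" string with a secret value. -->
<Server port="-1" shutdown="SHUTDOWN">
  ...
</Server>
```

With the port disabled, nothing on the network can stop the instance this way, which would rule out this whole class of mystery restarts.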
Regarding the ostensible Hibernate problems, I don't get enough information from your logs to make a useful statement. But you can leave the MySQL jar in Tomcat/lib, since it is not the root cause of your problem.
We have deployed a Spring MVC application on GlassFish Server Open Source Edition 3.1.2.2. The server log is on warning level, and after deployment I observed that lots of server.log files are generated; 95-97% of the entries are:
[#|2015-10-15T20:19:20.995+0530|WARNING|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=13;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-80(7).|#]
While searching Google I came across an issue posted on JIRA with a patch attached to it. I haven't tried that patch yet, but I wanted to know the reason behind this WARNING. Some doubts are on my mind:
Is this warning safe to ignore?
Why is the GlassFish service interrupting threads? What is actually happening inside GlassFish?
How can I keep this warning from being generated? And what will happen if I ignore it (what will be the impact)?
1) If your CPU usage is high, it is not safe to ignore, as it could cause the death of your server.
2) Most probably you see this warning because a servlet/webapp processes a request for longer than 15 minutes (the default).
3) If the above is acceptable to you, you will need to raise the request timeout (or disable it). On the other hand, that is not safe if long processing times are not something you actually expect.
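As far as I recall, the 15-minute default corresponds to the request-timeout-seconds attribute on the listener's http element in domain.xml (the listener name below is an assumption; check your own configuration):

```xml
<!-- In domain.xml: -1 disables the request timeout; 900 (15 min) is the default. -->
<protocol name="http-listener-1">
  <http default-virtual-server="server" request-timeout-seconds="-1"/>
</protocol>
```

Disabling the timeout stops the GRIZZLY0023 interruptions, but it also removes the safety net that reclaims genuinely stuck worker threads, so only do this if very long requests are expected.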
Try the patch, or check your web app. If you provide more information about the servlet/webapp that is causing this issue, it will be easier to answer.
My app's client accesses my Tomcat server. Sometimes it works well, but sometimes it times out, especially when two people quickly refresh the page to hit the server. What might be the problem?
I'm sure my database doesn't hang, because I also have a management system on the same Tomcat and they use the same database. That system works well even when my app can't access the server.
First, check the configuration of the machine running your Tomcat server, such as RAM capacity and network speed, because it seems you are using the same machine for the database as well.
Sometimes a bad/slow network connection on the client side will also cause this kind of timeout error, so set a timeout on the HTTP call in your client code, e.g. conn.setConnectTimeout(60000).
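Assuming the client uses HttpURLConnection (the question doesn't say which HTTP client it is), it exposes separate connect and read timeouts; a minimal sketch with a placeholder URL:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutExample {
    // Open a connection with explicit timeouts instead of the default
    // (a 0 default means "wait forever" on many platforms).
    public static HttpURLConnection open(String target) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
        conn.setConnectTimeout(60000); // fail if the TCP connect takes over 60 s
        conn.setReadTimeout(60000);    // fail if the server stalls mid-response for 60 s
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = open("http://example.com/api");
        System.out.println(conn.getConnectTimeout()); // 60000
    }
}
```

Note that openConnection() does not touch the network yet, so the timeouts are in place before the first byte is sent.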
I am currently load testing my web application (Spring + Hibernate based) on a standalone Tomcat server (v7.0.27) on a Windows Server 2008 machine. I need to know how Tomcat behaves as bulk requests come in, e.g.:
300 requests received: current heap size, whether the server is hung up or unable to process requests, sizes and numbers of objects, and so forth.
Is there a way to see this already? (The info from the manager app is insufficient; "current threads active" and "memory occupied" are not what I need.)
P.S. The maxThreads property of the Connector element is 350.
Update: another issue I faced while load testing is that in some cases Tomcat hangs up when I send 300 requests.
Any help would be highly and greatly appreciated.
You can use JConsole, which ships with the JDK:
http://docs.oracle.com/javase/6/docs/technotes/guides/management/jconsole.html
If the server hangs, there might be a deadlock.
You can try attaching JProfiler; the monitoring section will show you the current locking situation and any deadlock.
Disclaimer: My company develops JProfiler.