Before redeploying the application war, I checked the xd.lck file in one of the environment paths:
Private property of Exodus: 20578#localhost
jetbrains.exodus.io.LockingManager.lock(LockingManager.kt:89)
I'm testing on both Nginx Unit and Payara Server to eliminate the possibility that this is an isolated case with Unit.
And htop shows process 20578:
20578 root 20 0 2868M 748M 7152 S 0.7 75.8 14:05.75 /usr/lib/jvm/zulu-8-amd64/bin/java -cp /
After redeployment finished successfully, accessing the web application throws:
java.lang.Thread.run(Thread.java:748)
at jetbrains.exodus.log.Log.tryLock(Log.kt:799)
at jetbrains.exodus.log.Log.<init>(Log.kt:120)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:142)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:121)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:10
Checking the same xd.lck file shows the same content, which means the lock is not immediately released, contrary to what is described here.
My assumption, for this specific case with Payara Server (which is based on GlassFish), is that the server does not kill the previous process even after redeployment has completed. Perhaps this is for "zero-downtime" redeployment; I'm not sure, and Payara experts can correct me here.
Checking with htop shows that process 20578 is still running even after the redeployment.
Since most application servers behave this way, what would be the best solution and/or workaround with Xodus so we don't need to manually delete the lock file of each environment (if it can even be deleted) every time we redeploy?
The solution is for the Java application to find the process locking the file and send it a kill -15 (SIGTERM), for example, so that the old Java process can handle the signal gracefully and close its environments:
// Get all PersistentEntityStores and close them along with their environments
entityStoreMap.forEach((dir, entityStore) -> {
    entityStore.getEnvironment().close();
    entityStore.close();
});
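One way to run this cleanup automatically (a sketch, not the only option) is a ServletContextListener, so the environments are closed whenever the webapp is undeployed or redeployed even though the server JVM keeps running; the same loop can equally be registered as a JVM shutdown hook so a kill -15 is handled gracefully. The listener class name and the entityStoreMap registry below are hypothetical, standing in for however the application tracks its open stores:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import jetbrains.exodus.entitystore.PersistentEntityStore;

// Sketch: release the xd.lck files on undeploy/redeploy by closing every open store.
@WebListener
public class XodusShutdownListener implements ServletContextListener {

    // Hypothetical registry of open stores, keyed by environment directory.
    static final Map<String, PersistentEntityStore> entityStoreMap = new ConcurrentHashMap<>();

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // stores are registered here (or elsewhere) as they are opened
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        entityStoreMap.forEach((dir, entityStore) -> {
            entityStore.getEnvironment().close();
            entityStore.close();
        });
    }
}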
I've got a Java8 application running on RHEL 6.10. This application registers a shutdown handler via the usual method:
Thread shutdownThread = new Thread(() -> {
    Logger.info("Got shutdown signal");
    // Do cleanup
});
Runtime.getRuntime().addShutdownHook(shutdownThread);
This application is kicked off by a Jenkins build (with the BUILD_ID env var set to dontkillme). The application initializes successfully, but then after ~30 seconds the shutdown hook is called and the application terminates. I'm trying to figure out who is shutting me down and why. I've monitored top and it doesn't appear that memory is an issue while it's running, so I don't think the OOM killer is the culprit. I've also looked at /var/log/dmesg and /var/log/messages and don't see anything relevant there either. I don't think Jenkins is killing me, both because I set BUILD_ID and because the application dies while the "parent" Jenkins job is still running.
What other methods/tools can I use to see what's happening? Note that my environment is very locked down, so it would be difficult to download and run something from the internet; hopefully there's something in a standard RHEL6 install I could use.
The answer turned out to be unique to the application in question. The application uses the zookeeper-3.5.5 library to connect to a Zookeeper instance. This client library has a runtime dependency on the zookeeper-jute jar, and when that jar was not present in the executing directory, this issue presented itself.
You're probably wondering why the application silently shut itself down and didn't throw a ClassNotFoundException which would have helped me debug this. Great question! I have no idea.
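For anyone else chasing a similar silent shutdown with only a stock JDK at hand, one possible trick (a sketch, not specific to this application) is to install a SecurityManager whose checkExit logs a stack trace whenever something calls System.exit(); it won't catch external signals, but it quickly rules the application itself in or out:

// Sketch: log who calls System.exit(); uses only the standard JDK.
// Note: this does not fire for external signals (SIGTERM/SIGKILL), only in-process exits.
System.setSecurityManager(new SecurityManager() {
    @Override
    public void checkExit(int status) {
        new Throwable("System.exit(" + status + ") called from:").printStackTrace();
        // the exit still proceeds after logging
    }

    @Override
    public void checkPermission(java.security.Permission perm) {
        // allow everything else so the application keeps running normally
    }
});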
This is my first time asking a question on Stack Overflow. I recently configured an Ubuntu 16.04 virtual private server to host a web application. I run nginx in front of a Tomcat server that reads from and writes to a MySQL database. The application runs fine except that Tomcat restarts itself once in a while, which results in a 500 error stemming from a "broken pipe" whenever anyone tries to log in (i.e. make a connection to the database).
I will post an image of the 500 the next time it happens. I went into my VPS and looked at my Tomcat restart message. This is what I see: Tomcat status message.
I also dug into the Tomcat logs, and this is the log file that corresponds to that restart time: Tomcat log file.
I did some research to try to solve this myself, but with no success. I believe that exit=143 means the process was terminated by another program or by the system itself. I have also moved mysql-connector-java.jar around; I read that it should be located in the Tomcat/lib directory and not in the web application's WEB-INF. Perhaps I need to configure other settings.
Any help or direction would be much appreciated. I've fought this issue for a week, having learned much but accomplished little.
Thanks
Look at the timeline. It starts at 19:49:23.766 in the Tomcat log with this message:
A valid shutdown command was received via the shutdown port. Stopping the Server instance.
Exit code 143 is a result of that shutdown and doesn't indicate anything by itself.
The question you need answered is: who sent that shutdown command, and why?
On a side note: the earlier messages indicate that Tomcat lost its connection to the database, and that you didn't configure a validation query. You should always configure one, since database connections in the connection pool will go stale, and that needs to be detected.
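For reference, a minimal sketch of such a configuration in the webapp's META-INF/context.xml, assuming the default Tomcat connection pool (the resource name, URL and credentials are placeholders):

<Context>
  <!-- Sketch: pool with a validation query so stale MySQL connections are detected -->
  <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydb"
            username="dbuser" password="dbpass"
            validationQuery="SELECT 1"
            testOnBorrow="true"/>
</Context>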
Theory: Do you have some monitoring service running that tests your application being up? Does that monitoring detect a timed-out database connection, classify that as a hung webapp and auto-restart Tomcat?
While I don't think I can see to the core of your problem given the small excerpt of your log files, one thing catches the eye. In the Tomcat log, there is the line
A valid shutdown command was received via the shutdown port. Stopping the server instance.
This explains why the server was restarted. Someone or something (an external process, a malicious attacker, a script, or whatever; it could be anything depending on the setup of your server) sent a shutdown command to Tomcat's shutdown port (8005 by default), which made Tomcat shut down.
Refer to OWASP's recommendations for securing a Tomcat server instance to fix this possible security hole.
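As one concrete example of those recommendations (a sketch; adjust to your own conf/server.xml), you can disable the shutdown port entirely, at the cost of having to stop Tomcat by stopping the process rather than via bin/shutdown.sh:

<!-- Sketch: port="-1" disables the TCP shutdown port -->
<Server port="-1" shutdown="SHUTDOWN">
  <!-- Service definitions unchanged -->
</Server>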
Regarding the ostensible Hibernate problems you have, I don't get enough information from your logs to make a useful statement. But you can leave the MySQL jar in Tomcat/lib, since this is not the root cause of your problem.
We have deployed a Spring MVC application on GlassFish Server Open Source Edition 3.1.2.2. The server log is at WARNING level, and after deployment I observed that lots of server.log files are generated; almost 95-97% of the log entries are:
[#|2015-10-15T20:19:20.995+0530|WARNING|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=13;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-80(7).|#]
While searching Google I came across an issue posted on JIRA with a patch attached to it. I haven't tried that patch yet, but I wanted to know the reason behind this WARNING. Some questions on my mind:
1) Is this warning safe to ignore?
2) Why is the GlassFish service interrupting threads? What is actually happening in the GlassFish service?
3) How can I prevent this warning from being generated, and what will happen if I ignore it (what will be the impact)?
1) If your CPU usage is high, it is not safe to ignore, as it could cause the death of your server.
2) Most probably you see this problem because a servlet/webapp processes a request for longer than 15 minutes (the default).
3) If that is acceptable for you, you will need to change the request timeout (or disable it). On the other hand, that is not safe if long processing times are not something you actually expect.
Try the patch or check your web app. If you provide more information about the servlet/webapp that is causing this issue, it will be easier to answer.
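If long-running requests really are expected, the request timeout can be raised or disabled on the HTTP listener. A hedged sketch with asadmin for GlassFish 3.x (the listener name http-listener-1 and the exact dotted path are assumptions; confirm them with asadmin get on your own domain first):

asadmin get server-config.network-config.protocols.protocol.http-listener-1.http.request-timeout-seconds
asadmin set server-config.network-config.protocols.protocol.http-listener-1.http.request-timeout-seconds=-1

The default is 900 seconds (15 minutes), which matches the behaviour described above; -1 is commonly used to disable the timeout.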
My situation is like this:
Every time before uploading the war file to the webapps folder, I stop Tomcat by calling sh shutdown.sh. It used to take about 30 seconds for a total shutdown, but now it doesn't work well anymore.
Actually, it does do some work, because when I access the application from a web page it throws a 503 error (Under Maintenance). But when I check with ps aux | grep tomcat, the Tomcat process is still there, and it stays there for around 5-10 minutes.
I understand that it may need extra time to complete all its tasks, but 5-10 minutes before it stops completely is far too slow. I don't understand why this happens, but there must be a reason. Maybe it has something to do with the code, or with the new deployment script we started using recently. I have almost no clue where to check.
This is important to our team because we use "auto-deployment": a script auto-packages the war file, uploads it, and deploys it at a specific time. If we start a new Tomcat instance before the old one has shut down successfully, it hangs there forever, and cleaning up with kill -9 is daunting.
Has anyone experienced this issue? Any clue would be appreciated.
Hoàng Long -
Thank you for the update.
1) The fact that you see your Quartz jobs running, and the error message, are both significant:
SEVERE: The web application [/project] appears to have started a
thread named [Resource Destroyer in BasicResourcePool.close()] but has
failed to stop it. This is very likely to create a memory leak.
2) One suggestion is configuration:
http://forum.springsource.org/showthread.php?17833-Spring-Quartz-Tomcat-no-shutdown
I had the same problem. I fixed it by adding
destroy-method="destroy" to the SchedulerFactoryBean definition.
This way spring closes down the scheduler when the application is
stopped.
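A sketch of that bean definition, assuming an XML-configured SchedulerFactoryBean (the bean id is a placeholder):

<!-- Sketch: let Spring shut the Quartz scheduler down when the context is destroyed -->
<bean id="scheduler"
      class="org.springframework.scheduling.quartz.SchedulerFactoryBean"
      destroy-method="destroy">
    <!-- optional: wait for running jobs to finish before shutting down -->
    <property name="waitForJobsToCompleteOnShutdown" value="true"/>
</bean>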
3) Another suggestion is to add a shutdown listener:
http://forums.terracotta.org/forums/posts/list/15/4341.page
Using a context listener and introducing a timeout on shutdown solves
the issue for me. I just wait a second after shutting down:
public void contextDestroyed(ServletContextEvent sce) {
    try {
        factory.getScheduler().shutdown();
        Thread.sleep(1000);
    } catch (Exception e) {
        // nothing useful to do while the context is going down
    }
}
If this is something that mystically started to happen within the last few days, perhaps you're running into the Linux leap second bug? For more information, see
https://serverfault.com/questions/403732/anyone-else-experiencing-high-rates-of-linux-server-crashes-during-a-leap-second
https://access.redhat.com/knowledge/articles/15145
http://pedroalves-bi.blogspot.fi/2012/07/java-leap-second-bug-how-to-fix-your.html
I'm using Apache and Tomcat on a Windows server, and since this morning Tomcat stops working without any logs. It doesn't hang; it just shuts down.
There's no log in Tomcat, the CPU/memory are fine, and there is no System.exit in my code.
Anybody ever had this problem?
It happens at random, after 5-10 minutes. The application responds normally and sometimes, boom, it stops working.
UPDATE: Still no clue. The admin team will install the webapp on another box...
My script to start Tomcat had tail -f catalina.out as its last line.
Sometimes I didn't kill this script; the shell then timed out and killed the script along with all its child processes, including Tomcat.
This sounds like the JVM is crashing. Have you looked for a JVM crash log? It typically has a name like hs_err_pid*.log and is created in the JVM's working directory.
If you find a file like this and upload it, then we can probably help more.
Some questions:
Have you recently changed the version of Java you are using?
What is the exact version of Tomcat you are using?
Are you using Tomcat Native (the Apache Portable Runtime)?
Faced this issue recently.
Scenario: Tomcat starts successfully but automatically gets shut down after an hour, sometimes after a day, and there is nothing in the Tomcat logs.
Issue: the actual issue was high memory usage and no free swap memory.
How I found the solution
If Tomcat doesn't show any logs, there must be something in the system logs, so I checked /var/log/messages; but since permission was denied for me, I tried /var/log/dmesg and got this:
"Out of memory: Kill process 14606 (java) score 106 or sacrifice child".
In the output I noticed free swap memory was 0 K, and I ran top to confirm it. So somehow there was high memory usage, which caused the OS to kill my Tomcat process.
After spending hours, I finally found the reason.
ps -ef | grep tomcat showed several Tomcat processes running for the same application. It seems earlier Tomcat shutdowns had not completed successfully and, for some reason, the processes were not killed afterwards, which caused the high memory usage.
So I killed all the running Tomcat processes with kill, and the swap memory was freed.
Started Tomcat again, and it worked fine. :)
Recently I had this problem; if somebody faces the same issue in the future, I hope this will help.
Scenario: Tomcat shuts down without any logs or errors
Root cause of my problem: a synchronized method accessed from a task using TimerTask
I had a singleton class with a synchronized method accessed from various threads, triggered by a timer or by user actions.
Sometimes this method takes up to a few minutes to complete. When the TimerTask had been waiting on this method for some time (I guess the timer timed out, the thread was killed, or something was happening in the background), the moment the lock on the method was released, Tomcat got killed.
So I removed the synchronized keyword, removed the singleton, and made some code changes for thread safety. Then the problem was gone.
How I found out: I had a log statement in the first line of the synchronized method, and every time Tomcat shut down I found this message in the last few lines of the log.
Regards,
Phanindra Kasturi
Things to look for when debugging an issue like this:
Look at the logs directory ($TOMCAT_ROOT/logs) to make sure none of the log files have any stack traces.
Look at the Tomcat startup script to check the location of the log files and see whether the logs are being written to another directory.
Another reason could be that some other user/process is issuing a kill -9, which kills Tomcat without giving it any chance to log errors.
Another possibility is that some process started this morning on the box and is binding to a port that your server requires (see the check below).
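A quick way to check that last case (a sketch; 8080 and 8005 are Tomcat's defaults and may differ in your setup):

# Sketch: see which process, if any, already holds Tomcat's default ports
sudo netstat -tlnp | grep -E ':8080|:8005'
# or, if netstat is not available:
sudo lsof -i :8080 -i :8005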
Are your servlets or one of their dependencies allowed to call System.exit()? (Not sure how locked down Tomcat VMs are in that sense.)
I've had developers thinking it's OK to use exit(666); on detecting a non-invertible matrix (which isn't good, but sure as heck not fatal). Arrgh. Perhaps you have a similar culprit in your system?
I noticed CATALINA_OPTS in my environment was set to a low JVM heap size. Hence the crash, with no log trace from Tomcat. The server automatically shut down in less than 2 hours.
Check CATALINA_OPTS or JAVA_OPTS; these might contain JVM settings. Either increase them or comment them out, and increase the swap memory.
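For example, a sketch of raising the heap in $CATALINA_HOME/bin/setenv.sh (the sizes are placeholders; tune them to the memory actually available on the box):

# Sketch: give the Tomcat JVM a larger heap
export CATALINA_OPTS="-Xms512m -Xmx1024m"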
“The service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.”
I went through this problem and tried many ways to get out of it; finally I got the solution, as follows.
1) Open Run from the Start button.
2) Enter services.msc and click OK; you will see all the services on your computer.
3) Select your service, right-click it, and select Properties.
4) Go to the Log On tab, select Local System account, and click OK.
This will work.
Sometimes this happens if some other program is running on the same port, for example Skype. Shut down that program before you start Tomcat.
Try cleaning your Eclipse projects: you may have added another server that used port 8080, so when you run the Tomcat server externally (which by default also uses port 8080), Tomcat automatically shuts down. After cleaning the project, copy the new war file and paste it into bin, and it works fine.
Conclusion: when the server tries to use a port that has already been acquired, you will see this type of issue.