I have a Grails 2.2.4 application packaged as a WAR and deployed to my Tomcat 7 server on Ubuntu 12.04 LTS 64-bit with 8 GB RAM.
My setenv.sh file contains the following:
CATALINA_OPTS="
-server
-Xms1G
-Xmx2G
-XX:MaxPermSize=512m";
I used htop to check the number of running processes and found more than 20 Java processes on my system.
Each of these processes looks like this:
PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
20 0 6028M 1290M 11140 S 0.0 16.2 0:01.21 /usr/lib/jvm/java-7-oracle/bin/java -Djava.util.logging.config.file
When I start Tomcat with ./bin/startup.sh my application starts without errors. When I access the application with different browsers, I end up with more than 20 Java processes running. The only other Java process I have running is Elasticsearch.
Why is Tomcat starting so many processes for my application?
Do I have to limit them? If so, how?
What you are probably seeing is threads, not processes. According to man htop, you can hide user threads interactively with the H key.
For the record, Tomcat creates a number of worker threads for processing incoming HTTP requests. If you (really) need to control the number of worker threads, there are Tomcat configuration options for doing that, such as the maxThreads attribute on the Connector in server.xml.
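A quick way to confirm from a shell that these are threads of a single process (the grep pattern here is only an example; adjust it to match your Tomcat command line):

ps -ef | grep -c '[t]omcat'     # number of OS processes - should be 1
ps -eLf | grep -c '[t]omcat'    # one line per thread (LWP) - typically dozens

The [t]omcat bracket trick simply stops grep from matching itself.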
Related
I have deployed a Java application on a Tomcat server, and the Tomcat server is configured as a Windows service on one of my VMs.
Our VMs are Windows servers with 64 GB RAM and 8-core 2.4 GHz Intel Xeon processors.
Below are the software details and JVM args configured.
JDK 1.7.0_67
Tomcat 7.0.90
JVM args for Tomcat:
-Xms2g -Xmx40g -XX:PermSize=1g -XX:MaxPermSize=2g
But I am still getting this issue; could anyone please help?
You can enable JMX (a technology for monitoring Java applications) by adding the -Dcom.sun.management... JVM options to the startup script, then connecting to your application via JConsole with the JTop plugin, which shows the top CPU-consuming threads. See: https://arnhem.luminis.eu/top-threads-plugin-for-jconsole/
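As a sketch, the options usually look something like the following (the port number is arbitrary, and disabling authentication/SSL is only acceptable on a trusted network; for remote connections you may also need -Djava.rmi.server.hostname=<host>):

CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

Then point JConsole at host:9010 and open the JTop tab provided by the plugin.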
I am trying to deploy several Java (Spring Boot) apps in Docker containers on one host, setting memory limits (--memory=30m --memory-swap=50m) for each.
However, when I check usage with docker container stats, I see that each container is using more than 400 MB of the host's RAM. Because of this I cannot start all the containers I need, as the kernel kills some of them (OOM).
What do I need to do to ensure that the containers' memory is controlled using the docker memory options?
My host is a DigitalOcean CentOS 7 machine. Thanks.
The main reason for this issue is that the JRE is not aware that it is running inside a container.
Let the JVM detect how much memory is available in the Docker container:
https://blog.csanchez.org/2017/05/31/running-a-jvm-in-a-container-without-getting-killed
JAVA_OPTS="-server -Djava.net.preferIPv4Stack=true -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1"
Make sure you have JDK 8u131 or a later version.
With JDK 9 and later, the JVM can correctly detect the memory available inside the container.
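As a rough sketch of how the container limit and the JVM flags fit together (myapp:latest and the port mapping are placeholders, and this assumes the image's entrypoint passes JAVA_OPTS through to the java command):

docker run -d --memory=512m --memory-swap=512m -p 8080:8080 \
  -e JAVA_OPTS="-server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1" \
  myapp:latest

Also note that --memory=30m is far below what a Spring Boot application plus JVM overhead needs; budget at least a few hundred MB per container.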
Additional reference: Docker run -m doesn't set the limit (the JVM sizes Xms and Xmx from the entire host machine's memory)
and
https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
I have two WebLogic domains, each with one managed server. The problem is that every 3 or 4 hours (sometimes less) the four processes are suddenly killed, and in the domain console I found this:
./startWebLogic.sh: line 175: 53875 Killed ${JAVA_HOME}/bin/java ${JAVA_VM} ${MEM_ARGS} -Dweblogic.Name=${SERVER_NAME} -Djava.security.policy=${WL_HOME}/server/lib/weblogic.policy ${JAVA_OPTIONS} ${PROXY_SETTINGS} ${SERVER_CLASS}
There is no shortage of free memory on the server.
Two possible explanations for this message are the Linux OOM killer and the WebLogic node manager.
You should be able to find evidence for the first in /var/log/messages (grep -i -n 'killed process' /var/log/messages). If so, add up all the Xmx parameters of the running Java processes, add 35%, and see whether that total exceeds the total amount of memory in the machine. If it does, tune the Xmx parameters downwards.
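As a rough sketch for the adding-up step, something like this lists the -Xmx setting of every running Java process (convert g to m and sum them by hand, or extend the pipeline if you prefer):

ps -eo args | grep java | grep -o -- '-Xmx[0-9]*[mMgG]'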
The easier way to test for the second is to kill the node manager process, keep it down, and see whether the problem persists (kill -9 `ps -ef | grep odeManager | awk '{print $2}'`). If the problem does not recur, check in the WebLogic admin console how the "Panic action" and "Failure action" are configured for each server and set them to "No Action". In that case, also check the node manager and server logs to figure out why the node manager killed your managed server processes.
I have a weird problem; I have tried everything and could not solve it.
I have an instance of WildFly 8.2 running a Java EE application that controls a call center. The application uses roughly 2 to 8 GB of memory depending on how many people are working; it controls the telephony and provides a web interface for configuration, reports, and other things.
Randomly, WildFly gets killed and I see the following message in the console:
*** JBossAS process XXXX received kill signal ***
Then I need to start it again.
I read that this is probably the Linux OOM killer killing my process, so I set /proc/wildfly_pid/oom_adj to -17, which according to the documentation makes the OOM killer ignore the process. But it does not seem to work and WildFly keeps getting killed. I set up a cron job to reconfigure oom_adj every minute, and I checked that it was set correctly, but nothing helps.
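For reference, a minimal sketch of what such a cron job could run (the pgrep pattern is only a guess at how to find the WildFly PID; on kernels where oom_adj is deprecated, oom_score_adj with -1000 is the equivalent knob):

pid=$(pgrep -f 'jboss.home.dir' | head -n 1)
echo -17 > /proc/$pid/oom_adj           # legacy interface
echo -1000 > /proc/$pid/oom_score_adj   # newer interface, -1000 = never kill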
I was monitoring the application and memory usage was around 3 GB when it got killed; it runs for some hours but then randomly gets killed.
I don't know what to do. I'm using Debian 7.8 on a server that belongs to my client, with 16 GB of memory, and WildFly 8.2 in standalone mode with the following Java opts:
-server -Xms256m -Xmx8192m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true
Any help would be greatly appreciated.
Here is a link to the dmesg output: dmesg
*** JBossAS process XXXX received kill signal ***
This message corresponds to a Java heap dump not being created on OutOfMemoryError.
This can be resolved by increasing the memory limits for your task/application.
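As a sketch, you can confirm whether the Linux OOM killer was involved by checking the kernel log around the time of the kill (the grep patterns are only a guess at the usual wording of the messages):

dmesg | grep -i -E 'killed process|out of memory'
grep -i oom /var/log/syslog    # Debian also keeps kernel messages here

It can also help to add -XX:+HeapDumpOnOutOfMemoryError to the Java opts: a real OutOfMemoryError will then leave a heap dump behind, which distinguishes it from an external kill.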
I'm planning to take a heap dump with the JDK 1.5 jmap tool on a production WebLogic (10) instance.
There are 3 EARs (perhaps more, I don't really know since I don't have access) deployed on this WebLogic instance.
Someone told me "weblogic creates a JVM for each EAR"
Can someone confirm this?
With jmap I need the JVM PID as a parameter to do the heap dump...
Since I have 3 EARs, I guess I have 3 PIDs, so how do I know which PID corresponds to which EAR's JVM?
Nope - each WebLogic server (like any Java process) runs in its own JVM with its own PID, so all your EARs will appear in the same heap dump.
If you have multiple WebLogic server instances running on the same machine, each will have a separate PID and a separate process.
As @josek says, you'll have one JVM per WebLogic server, so if all your EARs are under the same WebLogic server you'll only have one PID to dump. But you may still have multiple servers - maybe an admin server and a managed server, maybe other unrelated instances - so if you just do something like ps -ef | grep java (I'm assuming this is on Unix?) you could see a lot of PIDs, even if you filter by your WebLogic's JDK_HOME.
One way to identify which PID belongs to a particular server is to go to the <domains>/servers/<your server>/tmp directory and run fuser -f <your server>.lok there. This will list the PIDs of all the processes related to that server, one of which will be the JVM java process. (There may be others for JDBC etc.) One way to find just the java process (and I'm sure someone will point out another, better way!) is something like:
cd <domains>/servers/<your server>/tmp
ps -p "`fuser -f <your server>.lok 2>/dev/null`" | grep java
If each EAR is in its own server, I guess you'll have to look at config.xml to see which you need.
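Once you have the right PID, the dump itself is a single command. A minimal sketch (the output path is just an example, and option names vary slightly between JDK versions, so check jmap -h on your 1.5 installation):

jmap -dump:format=b,file=/tmp/heap.bin <pid>    # JDK 6+ syntax
jmap -heap:format=b <pid>                       # older 1.5-style syntax, writes heap.bin to the current directory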