How do you troubleshoot and optimize CPU usage in a Spring Boot application? Are the allocated resources sufficient for an application with a user base of around 300k? The application isn't heavy at all: it just calls third-party APIs, does the necessary checks, and returns the response.
How do you identify the exact code that is using more resources than it normally should? I read somewhere that one way is to track the thread IDs of the Java process from the top command, take a thread dump, and look up the corresponding hexadecimal thread ID in the dump to find the code using the most CPU. This wasn't easily achievable, as some of the suggested commands didn't work. I would appreciate any help or suggestions.
Thanks in advance.
htop output during high CPU usage
htop output when it's normal
Collecting thread stacks is no different for a Spring Boot app. Before a Boot app is containerized, it is still a jar. If you suspect that it's your application that is actually contributing to the high CPU, then run your jar, attach a profiler to it, and trace the code contributing to the high CPU under load. If you cannot do that, take a thread dump of the running jar/Java process and use any free or open-source tool to analyze the trace. The second approach applies to containerized applications as well.
Follow these steps to take a thread dump of a Java/Boot app running inside a Docker container (jstack needs the PID of the Java process, which is often 1 inside a container):
docker exec -it <containerName> jstack <pid> > someFile.txt
Take multiple snapshots for better visibility and comparison.
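For completeness, the top-to-thread-dump workflow mentioned in the question usually looks like this on Linux (<pid> and <tid> are placeholders for the Java process ID and the hot thread ID):
top -H -p <pid>                    # show per-thread CPU; note the TID of the busiest thread
printf '%x\n' <tid>                # convert that TID to hexadecimal
jstack <pid> > dump.txt            # take a thread dump of the process
grep -A 20 'nid=0x<hex>' dump.txt  # find the thread whose nid matches the hex value
The stack frames under the matching nid entry point at the code burning CPU.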
If you have not added the JMX options to the JVM command line, do that to begin with:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=10000
-Dcom.sun.management.jmxremote.rmi.port=10000
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
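As an illustration, for a Spring Boot jar those flags might be passed on the command line like this (app.jar is a placeholder name):
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=10000 \
     -Dcom.sun.management.jmxremote.rmi.port=10000 \
     -Dcom.sun.management.jmxremote.local.only=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar app.jar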
Then on your local machine, start "jmc" from your JDK bin folder and connect to your Spring Boot server.
You will then be able to see all the threads and enable both CPU load and thread lock monitoring on all active threads.
Be aware, though, that the above opens up the JVM to unauthenticated access, so keep the port safe.
Next, if your JVM misbehaves, send it a "kill -3" (SIGQUIT), which tells the JVM to print a full thread dump to its stdout. For memory problems, take a heap dump instead; that can then be read via the Eclipse MAT plugin in order to analyze the JVM's inner doings.
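For the heap dump side, one common way (a sketch; the output path is an arbitrary choice) is jmap, which writes an hprof file that Eclipse MAT can open:
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>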
Another way is to install Jolokia into your server for other ways to retrieve the same info.
Related
I have written a program to print numbers from 1 to 200 using 2 threads.
Now I want to monitor this program using JConsole.
Basically I want to learn how to use JConsole for monitoring an application.
I searched Google but couldn't find anything useful.
How can I achieve this?
When I start jconsole.exe from the bin folder, it asks for a hostname and port number.
In my case, I guess there is none.
Can somebody guide me?
You need to enable JMX by adding the following JVM arguments:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=8484
-Dcom.sun.management.jmxremote.ssl=false
These parameters will allow any JMX monitoring tool to access and monitor your application.
I also suggest you use VisualVM; it's a more powerful tool.
Some features of VisualVM:
Provides CPU profiling.
Provides all info about threads.
Provides the JVM heap and memory stats.
Provides info about GC activity.
Let's say you have a class Test under package p1 where you have the code to print numbers from 1 to 200 using 2 threads (which you want to monitor).
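For illustration, a minimal version of such a class might look like this (the sleep is only there so the program stays alive long enough to attach JConsole):
package p1;

public class Test {
    public static void main(String[] args) {
        // Two threads counting 1..200; named so they are easy to spot in JConsole.
        Runnable counter = () -> {
            for (int i = 1; i <= 200; i++) {
                System.out.println(Thread.currentThread().getName() + ": " + i);
                try {
                    Thread.sleep(500); // slow down so there is time to watch the threads
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        };
        new Thread(counter, "counter-1").start();
        new Thread(counter, "counter-2").start();
    }
}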
So to use jconsole for monitoring your application, you would need to compile and execute your code first and while your code is executing...
Start -> Run -> jconsole.exe and press Enter.
Select the application which you want to monitor and then click Connect.
Alternatively, you can use VisualVM for this purpose as well.
JConsole finds all the running Java applications at the time of JConsole start-up, so only the port and host of applications that are already running will be displayed in the list. So first start the application, then start JConsole.
I am running a durable Java program on a remote Ubuntu server, where I have root user rights. After some time, the usage on some CPU cores goes up to 100%. The logs show nothing suspicious and the application still works, but with reduced throughput.
How can I debug the JVM so that I can find out the cause of this, while it's still running?
One option is VisualVM, which is included in the JDK starting with Java 1.6. I have found it useful in some situations in the past.
You may connect to local applications or remote applications.
To connect to a remote app, run jstatd on your remote server, and then run VisualVM locally and enter your server's IP address. You should be provided with a list of running Java applications including the one you wish to explore. If you have any trouble listing your application, good documentation is available at the VisualVM website.
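One gotcha, in case it helps: on older JDKs, jstatd will not start without a security policy. A minimal all-permissions policy (the file name jstatd.policy is an arbitrary choice) looks like this:
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
and jstatd is then started with:
jstatd -J-Djava.security.policy=jstatd.policy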
Connect to the process with jvisualvm
This tool will allow you to connect to the running process and view all of the threads and their state. This could show you which thread is the culprit merely by looking at which one is awake all the time. You can do a thread dump to see the stack trace for each thread and see what each thread is doing.
It's a very powerful tool for just this kind of debugging. It is distributed with the JDK only, so you will need more than just the JVM runtime installed to have access. Be sure you install the same version of the JDK that the JVM is running.
You will need to have your X display forwarded for this to work.
If you want to see the stack traces on Linux, just issue kill -SIGQUIT <java-program-pid>; the JVM prints a thread dump to its stdout. That is one way to see where the code is executing.
I have a Tomcat running as a Windows Service, and those are known not to work well with jstack. jconsole is working well, on the other hand, and I can see stacks of individual threads (I'm connecting to "localhost:port" to access it).
How can I use jconsole or a similar tool to dump all the thread stacks into a file? (similar to jstack)
You can use the ThreadMXBean management interface.
This FullThreadDump class demonstrates the capability to get a full thread dump and also detect deadlock remotely using JMX.
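For the local, in-process case, a minimal sketch of that idea (the file name threads.txt is arbitrary) could be:
import java.io.PrintWriter;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class LocalThreadDump {
    public static void main(String[] args) throws Exception {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        try (PrintWriter out = new PrintWriter("threads.txt")) {
            // dumpAllThreads(true, true) also collects lock and synchronizer info
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                out.println("\"" + info.getThreadName() + "\" state=" + info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace()) {
                    out.println("    at " + frame);
                }
                out.println();
            }
        }
    }
}
The FullThreadDump example does the same thing over a remote JMX connection.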
Nowadays you can use the jvisualvm tool to connect to your remote JVM through JMX and create a thread dump. Don't know if this was available at the time the question was asked.
Here's another code sample that will write a stack dump to a file:
http://pastebin.com/zwcKC0hz
We use this over JMX to give us an approximation of the stack dump you get when you make a JMX request or if the process detects high, unexpected load.
It would be helpful to take a flight recording to get a deeper view of the JVM's behavior, especially focusing on the hot methods.
Usually, a recording of half an hour is enough. To trigger a recording, you must be logged in to the machine and issue the following commands:
If using Java HotSpot 1.8.x:
$JAVA_HOME/bin/jcmd <pid> VM.unlock_commercial_features
$JAVA_HOME/bin/jcmd <pid> JFR.start duration=1800s settings=profile filename=/tmp/recording.jfr
If using Java HotSpot 1.7.x:
Edit your $HOME/conf/wrapper.conf file by adding the following parameters on JVM startup:
wrapper.java.additional.<n>=-XX:+UnlockCommercialFeatures
wrapper.java.additional.<n>=-XX:+FlightRecorder
(replace <n> with the next free positional number)
Then have your instances restarted. Once done, issue the following command:
$JAVA_HOME/bin/jcmd <pid> JFR.start duration=1800s settings=profile filename=/tmp/recording.jfr
The flight recording will produce a file at /tmp/recording.jfr upon termination.
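While the recording runs, you can check its status with the standard jcmd command (again, <pid> is the Java process ID):
$JAVA_HOME/bin/jcmd <pid> JFR.check
When it finishes, open /tmp/recording.jfr in Java Mission Control (jmc) and look at the hot methods view.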
I'm having problems with jetty crashing intermittently, I'm using Jetty 6.1.24.
I'm running a neo4j Spring MVC webapp; Jetty will stay running for approx 1 hour and then I have to restart it. It is running on a small Amazon EC2 instance, Debian with 1.7GB of RAM.
I start Jetty using java -Xmx900m -server -jar start.jar
I am connecting to the server using putty, when Jetty crashes the putty session disconnects, I cannot see what error caused it to crash.
I would like to be able to see if it is an error generated by Spring, I'm not sure how to log the output from the spring app with Jetty. Or if it is Jetty or a memory issue, what would be the best way to monitor Jetty? I cannot recreate this on my local machine running windows. What do you think would be the best way to approach this? Thanks
This isn't really a programmer question; perhaps it'll be moved over to ServerFault.
You didn't specifically state which operating system you're using, but I'm hazarding a guess at some Linux distribution. You have two options for figuring out what's wrong:
Start your session in screen. Screen will live for as long as the actual machine is powered on, until you reboot the operating system (or you exit screen).
You start screen like this:
screen
and you get a new prompt where you can start your program (cd foo, jetty, etc). When you're happy and you need to go somewhere, you can detach the screen by hitting CTRL+A and then CTRL+D. You'll drop back to the place you were before you invoked screen.
To get back to seeing the screen, type screen -R, which means resume an existing screen. You should see Jetty again.
The nice thing is that if you lose connection (or you close putty by accident or whatever) then you can use screen -list to get a list of running screens, and then forcibly detach them -D and reattach them to the current putty -R, no harm done!
Use nohup. Nohup more or less detaches the process you're running from the console, so none of its output comes to the terminal. You start your program in the normal fashion, but you add the word nohup to your command.
For example:
nohup ls -l &
After ls -l is complete, your output is stored in nohup.out.
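Applied to the Jetty start command from the question, that could look like this (the log file name is an arbitrary choice); redirecting stderr as well captures any crash output:
nohup java -Xmx900m -server -jar start.jar > jetty.log 2>&1 &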
When you say crash, do you mean the JVM segfaults and disappears? If that's the case, I'd check and make sure you aren't exhausting the machine's available memory. Java on Linux will crash when system memory gets so low that the JVM cannot allocate up to its maximum memory. For example, you've set the max JVM memory to 500MB, of which it's using 250MB at the moment, but the Linux OS only has 128MB available. This produces unstable results and the JVM will segfault.
On windows the JVM is more well behaved in this scenario and throws OutOfMemoryError when the system is running low on memory.
Validate how much system memory is available around the time of your crashes.
Verify if other processes on your box are eating up a lot of memory. Turn off anything that could be competing with the JVM.
Run jconsole and connect it to your JVM. That will tell you how memory is being used in your JVM process and give you a history to look back through when it does crash.
Eliminate any native code you might be loading into the JVM when doing this type of testing.
I believe Jetty has some native code to do high volume request processing. Make sure that's not being used. You want to isolate the crashes to Java and NOT some strange native lib. If you take out the native stuff and find it works then you have your answer as to what's causing it. If it continues to crash then it very well could be what I'm describing.
You can force the JVM to allocate all its memory at startup with -Xms900m; that can make sure the JVM doesn't fight with other processes for memory. Once it has the full Xmx amount allocated, it won't crash. It's not a solution, but you can easily test it this way.
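Using the start command from the question, that test run would look something like:
java -Xms900m -Xmx900m -server -jar start.jar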
When you start java, redirect both outputs (stdout and stderr) to a file:
Using Bash:
java -Xmx900m -server -jar start.jar > stdout.txt 2> stderr.txt
After the crash, inspect those files.
If the crash is due to a signal (like SEGV, a segmentation fault), there should be a file dumped by the JVM at the location where you started java. For the Sun VM (HotSpot), it's something like hs_err_pid12121.log (here 12121 is the process ID).
PuTTY disconnecting STRONGLY hints that the server is running out of memory and is shutting down processes left and right. It is probably your Jetty instance growing too big.
The easiest thing to do now is to add 1-2 GB more swap space and try again. Also note that you can use jvisualvm to attach to the Jetty instance to get runtime information directly.
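For reference, one quick way to add a 2GB swap file on Debian (the path and size are arbitrary choices):
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile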
I have a production web application (Struts, iBatis, Hibernate) that runs in Tomcat that would hang while serving requests after 6 - 7 days of running but would run fine again after doing a thread dump.
I have a hard time figuring out why that is the case.
I was just wondering whether anyone else has ever encountered something similar.
Maybe this will help you find the cause of your problem.
I have enabled JMX on Tomcat
(set these optional VM arguments when starting Tomcat):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=30188 (whatever port you want jmx to run on for tc)
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
I then wrote a little app that monitors memory usage (via JMX) and notifies me if the memory usage goes over, say, 80%.
That way I know as soon as something starts to go wrong. Then I get a histogram of in-memory objects (see http://java.sun.com/javase/6/docs/technotes/tools/share/jmap.html for how to get that).
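For reference, the histogram command looks like this (<pid> is the Tomcat process ID; the :live option triggers a full GC first):
jmap -histo:live <pid> > histo.txt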
In the end it turned out that one of my ejbQL queries was causing a huge amount of memory to be used.
Hope it helps in some way.
First of all, try to reproduce this in a test environment. You can use JMeter to stress the app. You can start Tomcat with the -verbose:gc and -XX:+PrintGCDetails flags, which will give you more insight into what is happening while the GC runs. Then, when the site is not responding, you can get a thread dump, and if this unblocks the site, have a look at the GC details for more info.
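A typical way to wire those flags into Tomcat (the log path is an arbitrary choice; the date-stamp and log-file flags are standard HotSpot options of that era) is via CATALINA_OPTS:
export CATALINA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc.log"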