I have to run a couple of Java services on my machine to obtain a certain dev environment (and get my non-Java-related work done):
java -Xmx400m -jar foo-app/target/foo-app-SNAPSHOT.jar
java -Xmx250m -jar bar-app/target/bar-app-SNAPSHOT.jar
...
To avoid running out of memory, I need to limit the memory usage. The default (512m, afaik) is too high for my machine, so I lowered the limits somewhat (on a wild-guess basis). Except for one, where I learned the hard way (crashes, even freezes, and thankfully some .pid error files left behind in the project folder...) that I'd better settle a little higher:
java -Xmx800m -jar doo-app/target/doo-app-SNAPSHOT.jar
Question: is there a way to track the memory usage of a certain app over time?
By some Java command-line parameter, or even with ps -ae, htop or similar? (i.e. without fiddling in the source itself, remapping garbage collectors, etc.)
I see plenty of numbers, but I have no idea which belong to which running Java project, or what would roughly indicate a proper peak memory consumption (in a -Xmx___m sense).
I work under Ubuntu-MATE 16.04, x64.
The best way to analyze memory consumption is a profiler. Your JDK ships with the jvisualvm profiler, which is absolutely sufficient for this task. A (lengthy) tutorial can be found here: https://engineering.talkdesk.com/ninjas-guide-to-getting-started-with-visualvm-f8bff061f7e7
Other approaches are basically shotgun-style: reduce the -Xmx, then generate load in the system and see if it runs out of memory. If you do NOT have a straight control flow, you have no way to predict the memory used.
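If you want to keep an eye on the heap from the command line while doing that, the JDK also ships jps and jstat; a rough sketch (assuming a Java 8-era JDK, as on Ubuntu 16.04):

jps -l                 # list running JVMs with their pids and main jar/class
jstat -gc <pid> 5000   # print heap/GC counters (in KB) every 5 seconds for that pid
# alternatively, let the JVM write a GC log you can inspect later:
java -Xmx400m -verbose:gc -XX:+PrintGCDetails -Xloggc:foo-gc.log -jar foo-app/target/foo-app-SNAPSHOT.jar

The peak of the used-heap columns over a full work session gives a reasonable starting point for an -Xmx value (plus some headroom).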
How do I find the top 10 (or top few) CPU-intensive thread stack traces created by a Java process on Linux? I would also like to know how much time is spent in each, if possible.
This is simple and easy. And it worked! We need more tools like this in Java.
https://github.com/patric-r/jvmtop
You get the required information by using one command: jvmtop.sh <pid>
A standard Linux tool like top will just show you the processes which are consuming the most CPU, but it will not be able to tell you which threads inside a single Java process are taking most of the CPU.
You need a profiling tool like YourKit to determine which threads in a Java process are using most of the CPU, and you can enable tracing in YourKit to get the invocation count of each method as well.
Please refer to https://www.yourkit.com/docs/java/help/cpu_intro.jsp for how to get started with CPU profiling using YourKit.
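If you only need a quick look without a full profiler, one common command-line technique (a sketch; <pid> and <tid> are placeholders) is to combine top's per-thread view with a jstack dump:

top -H -p <pid>                            # per-thread view; note the ids of the hottest threads
printf '%x\n' <tid>                        # convert the decimal thread id to hex
jstack <pid> | grep -A 20 'nid=0x<hex>'    # find that thread's stack trace in the dump

jstack prints each thread's native id as nid=0x..., so the hex value from the second step identifies the stack trace of the busy thread.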
I need to monitor the performance of a Java process and take reports automatically. The reports should contain data on memory utilization, thread usage, process usage, etc. But I'm unsure how to accomplish this. Any suggestions?
I need to monitor the performance of a Java process and take reports automatically.
You need to determine which measures are important to the users of the application, such as latency and throughput. These are often impacted even if everything looks fine system-wise. For example, an 8-CPU system which is only 6% busy over 5 minutes might sound fine, except it could be that there is one request every 5 minutes which takes more than 2 minutes.
The reports should contain data on memory utilization, thread usage,
A key feature of threads is that they share objects by default. This means the thread-local memory usage is almost always trivial and not worth measuring in general.
process usage etc.
This can be useful for capacity planning over a long period of time, but not useful for finding application-specific problems (see above).
But I'm unsure how to accomplish this. Any suggestions?
Work out what metrics will help you find problems which impact the users of the application.
You may use the JMX API for this purpose if you want to get the data programmatically. Here is the Oracle tutorial on this topic.
If you just want to monitor the process, there are tools like VisualVM.
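If you go the programmatic route, a minimal sketch using the standard java.lang.management MXBeans (the same data JConsole/VisualVM display) could look like this; run it inside the monitored JVM, or adapt it to connect through a remote JMX connector. The one-minute interval and the printed fields are only examples:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

public class SimpleMonitor {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            // one report line per minute; redirect stdout to a file for a simple report
            // (getSystemLoadAverage() may return -1 on some platforms)
            System.out.printf("heapUsed=%dMB heapMax=%dMB threads=%d loadAvg=%.2f%n",
                    heap.getUsed() / (1024 * 1024),
                    heap.getMax() / (1024 * 1024),
                    threads.getThreadCount(),
                    os.getSystemLoadAverage());
            Thread.sleep(60000);
        }
    }
}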
VisualVM is a nice tool to monitor memory utilization and other things.
We have a Java ERP-type application. Communication between server and client is via RMI. In peak hours there can be up to 250 users logged in, and about 20 of them are working at the same time. This means that about 20 threads are live at any given time in peak hours.
The server can run for hours without any problems, but all of a sudden response times get higher and higher. Response times can be in minutes.
We are running on Windows 2008 R2 with Sun's JDK 1.6.0_16. We have been using perfmon and Process Explorer to see what is going on. The only thing that we find odd is that when the server starts to slow down, the number of handles the java.exe process has open is around 3500. I'm not saying that this is the actual problem.
I'm just curious if there are some guidelines I should follow to be able to pinpoint the problem. What tools should I use? ....
Can you access the log configuration of this application?
If you can, you should change the log level to "DEBUG". Tracing the DEBUG logs of a request could give you useful information about the contention point.
If you can't, profiling tools can help you:
VisualVM (Free, and good product)
Eclipse TPTP (Free, but more complicated than VisualVM)
JProbe (not free but very powerful. It is my favorite Java profiler, but it is expensive)
If the application has been developed with JMX control points, you can plug in a JMX viewer to get information (see the example after this list)
If you want to stress the application to trigger the problem (to verify whether it is a load problem), you can use stress tools like JMeter.
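As mentioned in the JMX point above, remote JMX access is usually enabled with startup flags along these lines (a sketch; the port is arbitrary, and authentication/SSL are disabled here only for a quick test), after which jconsole or VisualVM can connect to host:9010:

java -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jar your-server.jar

(your-server.jar stands in for however the ERP server is actually launched.)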
Sounds like the garbage collector cannot keep up and starts "stop-the-world" collections for some reason.
Attach jvisualvm (bundled with the JDK) when the server starts and have a look at the collected data when the performance drops.
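If attaching a GUI to that server is not convenient, GC logging gives the same hint: add flags like these (valid on JDK 1.6) to the server's startup command and check whether long pauses coincide with the slow periods:

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log ... (rest of the normal startup command)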
The problem you're describing is quite typical, but general as well. Causes can range from memory leaks and resource contention to bad GC policies and heap/PermGen-space sizing. To pin down the exact problems in your application, you need to profile it (I am aware of tools like YourKit and JProfiler). If you profile your application wisely, only a few application cycles will reveal the problems; otherwise profiling isn't very easy in itself.
In a similar situation, I coded a simple profiling helper myself. Basically I used a ThreadLocal that has a "StopWatch" (based on a LinkedHashMap) in it, and I then insert code like this at various points of the application: watch.time("OperationX");
Then, after the thread finishes a task, I call watch.logTime(); and the class writes a log line that looks like this: [DEBUG] StopWatch time:Stuff=0, AnotherEvent=102, OperationX=150
After this I wrote a simple parser that generates CSV from this log (per code path). The best thing you can do is to create a histogram (easily done using Excel). Averages, medians and even modes can fool you; I highly recommend creating a histogram.
Together with the histogram, you can create line graphs using the average/median/mode (whichever represents the data best; you can determine this from the histogram).
This way, you can be 100% sure exactly which operation is taking time. If you can't determine the culprit, binary search is your friend (make the events finer-grained).
Might sound really primitive, but it works. Also, if you make a library out of it, you can use it in any project. It's also nice because you can easily turn it on in production as well.
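For illustration, a minimal sketch of what such a ThreadLocal stopwatch could look like (class and method names follow the snippets above; this is not the original code):

import java.util.LinkedHashMap;
import java.util.Map;

public final class StopWatch {

    // one StopWatch per thread, created lazily
    private static final ThreadLocal<StopWatch> CURRENT = new ThreadLocal<StopWatch>() {
        @Override
        protected StopWatch initialValue() {
            return new StopWatch();
        }
    };

    private final Map<String, Long> times = new LinkedHashMap<String, Long>();
    private long last = System.currentTimeMillis();

    public static StopWatch get() {
        return CURRENT.get();
    }

    // record the time elapsed since the previous mark under the given label
    public void time(String label) {
        long now = System.currentTimeMillis();
        times.put(label, now - last);
        last = now;
    }

    // write one log line for the finished task and reset for the next one
    public void logTime() {
        StringBuilder sb = new StringBuilder("[DEBUG] StopWatch time:");
        boolean first = true;
        for (Map.Entry<String, Long> e : times.entrySet()) {
            if (!first) {
                sb.append(", ");
            }
            sb.append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        System.out.println(sb.toString());
        times.clear();
        last = System.currentTimeMillis();
    }
}

Callers would then do StopWatch.get().time("OperationX") at interesting points and StopWatch.get().logTime() when the task finishes, which produces the kind of log line shown above.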
Aside from the GC that others have mentioned, try taking thread dumps every 5-10 seconds for about 30 seconds during your slowdown. It could be that DB calls, a web service, or some other dependency becomes slow. If you take a look at the thread dumps you will be able to see threads which don't appear to move, and you can narrow down the culprit that way.
From the GC standpoint, do you monitor your CPU usage during these times? If the GC is running frequently you will see a jump in your overall CPU usage.
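A simple way to capture those dumps with the JDK tools (a sketch; repeat a handful of times 5-10 seconds apart during the slowdown, with <pid> being the java.exe process id):

jstack <pid> > dump_1.txt
jstack <pid> > dump_2.txt

Then compare the dumps; threads sitting in the same stack frame across dumps (e.g. blocked in a socket read to the database) are the ones to investigate.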
If only this were a Solaris box, prstat would be your friend.
For acute issues like this a quick jstack <pid> should quickly point out the problem area. Probably no need to get all fancy on it.
If I had to guess, I'd say HotSpot jumped in and tightly optimised some badly written code. NetBeans grinds to a halt where it uses a WeakHashMap with newly created objects to cache file data. When optimised, the entries can be removed from the map straight after being added. Obviously, if the cache is being relied upon, much file activity follows. You probably won't see the drive light up, because it'll all be cached by the OS.
I am trying to use VisualVM to profile a Java (Sun JDK 1.6) standalone application. I have a scripted performance test environment, where I can run my application and get it to report some metrics I care about.
Is there some way to get JVM to collect some CPU profiling snapshot which I can later analyze with VisualVM?
I am looking for something similar to -XX:+HeapDumpOnOutOfMemoryError flag which writes a heap dump to disk just before an OutOfMemoryError is thrown.
There is the HPROF agent built into the JVM (http://java.sun.com/developer/technicalArticles/Programming/HPROF.html) which allows you to capture basic profiling information, but it's dog slow and produces massive files.
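For reference, a typical HPROF invocation for CPU sampling looks roughly like this (options as described in the linked article; the interval, depth and file name are just example values, and yourapp.jar stands in for your application):

java -agentlib:hprof=cpu=samples,interval=10,depth=10,file=cpu.hprof.txt -jar yourapp.jar

The resulting text file is written when the JVM exits.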
VisualVM AFAIK does not yet have these abilities, but YourKit can do what you want, both through its agent and programmatically.
YourKit via the agent command line (-agentlib:yjpagent=onexit=snapshot):
http://www.yourkit.com/docs/80/help/additional_agent_options.jsp
Programmatically
http://www.yourkit.com/docs/80/api/index.html
As an aside, I would suggest that you be careful with measuring CPU alongside performance testing, as it will definitely skew your results. Have you considered looking at something like https://japex.dev.java.net/ around your core code?