I've got an app running on a grid of uniform Java processes (potentially on different physical machines). I'd like to collect CPU usage statistics from a single run of this app. I've gone over profiling tools looking for an option to collect this data automatically, but failed to find one in NetBeans, TPTP, jvisualvm, YourKit, etc.
Maybe I'm looking at this the wrong way?
What I was thinking is:
run the processes on the grid with some special setup that allows them to dump profiling info
run my app as usual - it will push tasks to the grid, the processes will execute the tasks and publish profiling info
use some tool to collect and analyze the profiling results
but I can't find anything even remotely similar to this.
Any thoughts, experience, suggestions?
Thank you!
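One possible way to implement the "dump profiling info" part of the plan above, in case nothing off the shelf fits: have each grid process periodically append its own CPU and heap numbers to a local file, and merge the per-node files after the run. A minimal sketch, assuming a HotSpot JVM (the com.sun.management cast and getProcessCpuLoad need JDK 7+) and Java 8 syntax; the class name, file name, interval and CSV format are all arbitrary:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical helper: each grid process starts this at boot, and the
    // resulting per-node CSV files are collected and merged after the run.
    public class StatsDumper {
        public static void start(String outFile) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean(); // HotSpot-specific cast
            ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
            ses.scheduleAtFixedRate(() -> {
                try (FileWriter w = new FileWriter(outFile, true)) {   // append one CSV row
                    w.write(System.currentTimeMillis() + ","
                            + os.getProcessCpuLoad() + ","
                            + mem.getHeapMemoryUsage().getUsed() + "\n");
                } catch (IOException ignored) {
                }
            }, 0, 5, TimeUnit.SECONDS);
        }
    }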
If you have allowed remote JMX access and you are using Sun JDK 1.6, then try jvisualvm. It has the option of a remote JMX connection, though I haven't used it for CPU profiling in a distributed environment.
Note: For CPU profiling your application should be running on SUN JDK 1.6 or above.
Have a look at these links:
JVisualVM
JVisualVM - Working with Remote Applications
Get heap dump from a remote application in Java using JVisualVM
Unable to profile JBoss 5 using jvisualvm
http://www.taranfx.com/java-visualvm
I have used CA Introscope for this type of monitoring. It uses instrumentation to collect metrics over time. As an example, it can be configured to give you a view of all nodes and their performance over time. From that node view, you can drill down to the method level to help you figure out where your bottlenecks are.
Yes, it will provide CPU utilization.
It's a commercial $$$ tool, but it's a great tool for collecting, monitoring and interrogating performance data.
If you look at something like Zabbix (though there are tons of other monitoring tools), it allows gathering data via JMX from a Java app. If you enable JMX in your app and allow it to be queried externally (via TCP/IP), you will have access to a lot of the HotSpot internals (free memory, etc.) as well as thread stacks, and you can then have these values graphed as well. It does need configuration, but I don't think what you're looking for can be done with a one-line script.
Just to add that the profiling information on each node usually contains timestamps.
To match these timestamps, all machines should have exactly the same time (a delta of at most 10 ms):
cluster nodes should synchronize with a single network time server (NTP).
You can use some JMX library, e.g. jmxterm, and wrap it in some code to connect to multiple hosts and poll them for changes. If you are a bit familiar with Python, look at my simple script here for some inspiration: http://rostislav-matl.blogspot.com/2011/02/monitoring-tomcat-with-jmxterm.html
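If you'd rather stay in Java instead of wrapping jmxterm, the same kind of polling can be done directly with the standard JMX remote API. A rough sketch, assuming the target JVMs were started with the com.sun.management.jmxremote.* options so their ports are reachable; the host names, port and polled attribute are just placeholders:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxPoller {
        public static void main(String[] args) throws Exception {
            // Example endpoints; each remote JVM must expose JMX, e.g. with
            // -Dcom.sun.management.jmxremote.port=9999
            // -Dcom.sun.management.jmxremote.authenticate=false
            // -Dcom.sun.management.jmxremote.ssl=false
            String[] hosts = {"node1:9999", "node2:9999"};
            ObjectName osBean = new ObjectName("java.lang:type=OperatingSystem");
            for (String host : hosts) {
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://" + host + "/jmxrmi");
                JMXConnector connector = JMXConnectorFactory.connect(url);
                try {
                    MBeanServerConnection conn = connector.getMBeanServerConnection();
                    // SystemLoadAverage is a standard attribute of the OperatingSystem MXBean
                    Object load = conn.getAttribute(osBean, "SystemLoadAverage");
                    System.out.println(host + " load average: " + load);
                } finally {
                    connector.close();
                }
            }
        }
    }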
http://www.hyperic.com/products/open-source-systems-monitoring
I never tried other tools mentioned in other answers. I was more than satisfied with hyperic.
It exposes a web services API as well, which you can use to write your own analysis tools.
If you know the critical paths you want to analyse, I would suggest timestamping your process in key places and combining the logs yourself. This is likely to be a useful addition to your profiling, can be used in production, and may be even more useful as a result (it is for my project).
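A minimal sketch of that kind of timestamping, assuming nothing beyond the standard JDK (the class name, phase label and CSV-ish output format are arbitrary); with NTP-synced clocks, the lines from different nodes can simply be merged and sorted afterwards:

    import java.net.InetAddress;

    public class PhaseTimer {
        // Hypothetical helper: prints "host,phase,startMillis,durationMillis" lines
        // that can later be merged and sorted across nodes.
        public static void timed(String phase, Runnable work) throws Exception {
            String host = InetAddress.getLocalHost().getHostName();
            long start = System.currentTimeMillis();
            work.run();
            long duration = System.currentTimeMillis() - start;
            System.out.println(host + "," + phase + "," + start + "," + duration);
        }

        public static void main(String[] args) throws Exception {
            timed("example-phase", () -> {
                // the critical path you want to measure goes here
            });
        }
    }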
I have used YourKit to monitor a number of processes at once. It can show you what is happening in each in real time and collect the results when all is finished.
I don't know if it provides a combined view of what is happening.
I was looking for something similar and found Hyperic
The claim is that the tool can monitor most common applications and systems, gather all the information, and present it in a convenient fashion.
To be honest, this is still on my todo list, so I can't say whether it will do the job or not. Anyway, it seems impressive.
Related
There was a problem with the high response time of my Spring application.
My colleagues advised me to use VisualVM while simultaneously running the load with JMeter. I want to check which method takes the most time.
However, in VisualVM I get an uninformative answer - there are no methods of my application.
Can you tell me if I'm going the right way, and how to display information on the methods inside the application?
I think you need to go for a more comprehensive tool like JProfiler or YourKit, or an APM tool if you have an appropriate license (there are free and open-source options like Apache SkyWalking as well).
Theoretically it's possible to use JVisualVM; however, it's better to go for the "snapshot" mode. Check out the Profiling With VisualVM article series for instructions.
You can also get some JMX metrics using JMeter's PerfMon Plugin
My question is related to finding performance issues in a Java-based web project with a MySQL backend.
I want to check JVM performance; what is the best way to get JMX values?
How do I check thread performance, e.g. how many threads are running in one transaction and where the bottlenecks are, in particular:
resource-starved threads and blocked threads, if any,
object/class load and unload times and sizes.
I am using Packetbeat and Logstash with Kibana to collect logs and for monitoring, but didn't find this information there.
Is there any shortcut or simple way to check these performance issues?
If my question is vague please let me know I will try to add more details.
Many Thanks
You can use Java VisualVM, packaged along with your JDK, to monitor threads, GC, etc. and analyze JVM performance. If you want more fine-grained details, go with some JVM profiling tool like YourKit.
You could also use JProfiler. JProfiler works both as a stand-alone application and as a plug-in for the Eclipse software development environment.
But JProfiler is a commercial, licensed tool.
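For the specific points in the question (running/blocked threads, deadlocks, class loading), the JDK's platform MXBeans already expose most of it, and these are the same beans VisualVM, JConsole and JMX-based collectors read. A rough sketch run inside the JVM being inspected (the same beans can also be fetched remotely over JMX); note that per-class sizes are not exposed by these beans, only counts:

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class JvmQuickStats {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            System.out.println("Live threads: " + threads.getThreadCount());

            // Threads deadlocked on monitors or ownable synchronizers, if any
            long[] deadlocked = threads.findDeadlockedThreads();
            if (deadlocked != null) {
                for (ThreadInfo info : threads.getThreadInfo(deadlocked)) {
                    System.out.println("Deadlocked: " + info.getThreadName()
                            + " blocked on " + info.getLockName());
                }
            }

            ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
            System.out.println("Loaded classes: " + classes.getLoadedClassCount()
                    + ", unloaded: " + classes.getUnloadedClassCount());
        }
    }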
I have to display jvm memory usage data on a page. I need to find the jvm memory stats such as free memory and max memory.
Java runtime functions give data for only one JVM. How do I find this for a JVM cluster consisting of 4 JVMs?
If possible it could be a Unix command or some Java function.
Since the JVM doesn't support clustering out of the box (assuming you are referring to the standard Oracle distribution),
you will have to develop an aggregation of the JVM memory stats from the different JVMs.
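A rough sketch of such an aggregation, assuming all four JVMs expose remote JMX (the host:port values below are placeholders): connect to each node, read its MemoryMXBean through a proxy, and sum the heap figures.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ClusterMemory {
        public static void main(String[] args) throws Exception {
            String[] nodes = {"node1:9999", "node2:9999", "node3:9999", "node4:9999"};
            long used = 0, max = 0;
            for (String node : nodes) {
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://" + node + "/jmxrmi");
                JMXConnector connector = JMXConnectorFactory.connect(url);
                try {
                    MBeanServerConnection conn = connector.getMBeanServerConnection();
                    // Typed proxy for the remote JVM's java.lang:type=Memory bean
                    MemoryMXBean mem = ManagementFactory.newPlatformMXBeanProxy(
                            conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
                    used += mem.getHeapMemoryUsage().getUsed();
                    max += mem.getHeapMemoryUsage().getMax();   // may be -1 if -Xmx is not set
                } finally {
                    connector.close();
                }
            }
            System.out.println("Cluster heap: " + used + " used of " + max + " max");
        }
    }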
There is no such thing as a "JVM cluster", since JVMs can't really be clustered; that is, there is no clustering capability in the JVM itself.
Programs (themselves running on a jvm) can be clustered using a third party tool or library (or by writing the relevant code yourself, which I would advise against).
This means that, since there is no core-jdk support for clustering, there is also no java function call that can give certain values for the cluster. The software/tool/library you are using to cluster your program might be able to give this information but you'd have to look that up in the documentation.
For the same reason, there's also no unix call. *nix OSes know nothing about your java cluster, they just know that there are processes running on them that use the CPU and memory and probably do some I/O. They have no idea about any clustering and therefore can not help you with your question.
So, to find what you are looking for:
If it's a true scaling cluster, ie the workload gets automatically divided over the different jvms in the cluster, you'd have to take a look at the documentation for the clustering software (tool/library) you use to find out if they can give you that information.
If you use a third party application (such as Zabbix) to monitor different JVMs you might construct a screen or view which can show you the data for multiple JVMs in one screen. Again, you'd have to look this up in the documentation for that tool.
At present I have a set of benchmark tests for recording the speed at which a Java application connects, submits and returns data from varying RDBMSs housed on varying server platforms. The application uses a simple algorithm for recording the time taken for each test. The application itself is a simple Java interface for the user to specify the tests; this seemed easier than hard-coding each test or using an IDE to perform each test (bear in mind that with the combination of RDBMS, server OS and client OS there are in the region of several hundred individual tests). I would like to further my findings by introducing the CPU usage and memory usage during these tests on the client side, where the application resides. I could hard-code the algorithm for doing so in my application (my preference) or use third-party software for monitoring this (bear in mind it would need to be suitable for cross-platform use: Windows 7, Solaris and Ubuntu).
So my question is: how could I record the usage of CPU and memory during a test, either by hard-coding it in my Java application or by using third-party software? If you believe a third-party tool would be the solution, could you please mention the actual product and how this is possible?
Thank you to all who take the time to answer.
Check VisualVM. It has a lot of features.
I used VisualVM and it helped me a lot in finding memory leaks.
Here is a video that shows the most important VisualVM features.
There are plenty of commercial products for this. JProbe is my favorite these days, but I'm also using YourKit. In the free arena, Eclipse has TPTP -- the "Test and Performance Tools Platform" -- but it seems to be a rare person who can actually get the darn thing to work. It never works for me.
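For the hard-coded option mentioned in the question, a minimal cross-platform sketch using only the JDK's management beans (thread CPU time measurement must be supported and enabled on the JVM, which it normally is on HotSpot for Windows, Solaris and Linux):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.ThreadMXBean;

    public class TestResourceProbe {
        // Records CPU time consumed by the current (test) thread plus heap usage delta.
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

            long cpuBefore = threads.getCurrentThreadCpuTime();   // nanoseconds
            long heapBefore = memory.getHeapMemoryUsage().getUsed();

            // ... run one of the several hundred tests here ...

            long cpuNanos = threads.getCurrentThreadCpuTime() - cpuBefore;
            long heapDelta = memory.getHeapMemoryUsage().getUsed() - heapBefore;
            System.out.println("cpuMillis=" + cpuNanos / 1_000_000
                    + " heapDeltaBytes=" + heapDelta);
        }
    }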
I am calling a vendor's Java API, and on some servers it appears that the JVM goes into a low priority polling loop after logging into the API (CPU at 100% usage). The same app on other servers does not exhibit this behavior. This happens on WebSphere and Tomcat. The environment is tricky to set up so it is difficult to try to do something like profiling within Eclipse.
Is there a way to profile (or some other method of inspecting) an existing Java app running in Tomcat to find out what methods are being executed while it's in this spinwait kind of state? The app is only executing one method when it gets in this state (vendor's method). Vendor can't replicate the behavior (of course).
Update:
Using JConsole I was able to determine who was running and what they were doing. It then took me a few hours to figure out why it was doing it. The problem ended up being that the vendor's API jar that was being used did not exactly match the database configuration it was using. It was defaulting to having tracing and performance monitoring enabled on the servers that had the slight mismatch in configuration. I used a different jar and all is well.
So thanks, Joshua, for your answer. JConsole was extremely easy to setup and use to monitor an existing application.
@Cringe - I did some experimenting with some of the options you suggested. I had some problems getting JProfiler set up; it looks good (but pricey). Going forward I went ahead and added the Eclipse Profiler plugin, and I'll be looking over the different open-source profilers to compare functionality.
If you are using Java 5 or later, you can connect to your application using jconsole to view all running threads. jstack also will do a stack dump. I think this should still work even inside a container like Tomcat.
Both of these tools are included with JDK5 and later (I assume the process needs to be at least Java 5, though I could be wrong)
Update:
It's also worth noting that starting with JDK 1.6 update 7 there is now a bundled profiler called VisualVM which can be launched with 'jvisualvm'. It looks like it is a java.net project, so additional info may be available at that page. I haven't used this yet but it looks useful for more serious analysis.
Hope that helps
Facing the same problem, I used the YourKit profiler. Its loader doesn't activate unless you actually connect to it (though it does open a port to listen for connections). The profiler itself has a nice "get amount of time spent in each method" view while working in its less obtrusive mode.
Another way is to detect CPU load (via JNI, so you'd need an external library for this) in a "watchdog" thread with the highest priority, and start logging all threads when the CPU has been high enough for long enough. You might find this article enlightening.
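On JDK 7+ HotSpot you can get the process CPU load without JNI, via the com.sun.management extension of OperatingSystemMXBean, so a watchdog like the one described can be sketched roughly as follows (the 90% threshold and 30-second window are arbitrary):

    import java.lang.management.ManagementFactory;
    import java.util.Map;

    public class CpuWatchdog extends Thread {
        public static void main(String[] args) {
            CpuWatchdog w = new CpuWatchdog();
            w.setDaemon(true);
            w.setPriority(Thread.MAX_PRIORITY);
            w.start();
            // ... rest of the application keeps running ...
        }

        @Override
        public void run() {
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();
            int busySeconds = 0;
            while (true) {
                if (os.getProcessCpuLoad() > 0.9) {   // CPU "high enough"
                    busySeconds++;
                } else {
                    busySeconds = 0;
                }
                if (busySeconds >= 30) {              // "for long enough": dump all stacks
                    for (Map.Entry<Thread, StackTraceElement[]> e
                            : Thread.getAllStackTraces().entrySet()) {
                        System.err.println(e.getKey());
                        for (StackTraceElement frame : e.getValue()) {
                            System.err.println("    at " + frame);
                        }
                    }
                    busySeconds = 0;
                }
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ie) {
                    return;
                }
            }
        }
    }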
If it's for professional purpose and you have some money to spend, try to get your hands on JProfiler. If you just want to get some insights, try out the Eclipse Profiler Plugin. I used it several times, but I don't know the current state.
A new(?) project from the eclipse project itself is available too: http://www.eclipse.org/tptp/ (See this article). Never used it, so I can't tell if it is worth the effort.
There's also a very good list of open source profilers available at http://www.manageability.org/blog/stuff/open-source-profilers-for-java
If JConsole can't be used you can
press CTRL+BREAK under Windows
send kill -3 <process id> under Linux
to get a full Thread Dump. This doesn't affect performance and can always be run in production.
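If you want the same information from inside the process (for example, to log it automatically when something looks wrong), ThreadMXBean can produce a comparable dump. A small sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDumper {
        public static void main(String[] args) {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            // true, true = include locked monitors and locked synchronizers
            for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
                // Note: ThreadInfo.toString() truncates very deep stack traces
                System.out.print(info);
            }
        }
    }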
JRockit Mission Control Latency Analyzer.
The Latency Analyzer that comes with JRockit shows you what the JVM is "doing" when it's not doing anything. In the latest version you can see latencies for:
Java wait/blocked/sleep/parked.
File I/O
Network I/O
Memory allocation
GC pauses
JVM latencies, e.g. code generation and class loading
Thread suspension
The tool will give you the stack trace when the latency occurred. You can view the latency data in many different ways (aggregated traces, as a histogram, in a thread graph etc.). The tool also allows you to see transitions between threads, for instance when one thread notifies another.
(Screenshot: latency graph from the Latency Analyzer - http://blogs.oracle.com/hirt/WindowsLiveWriter/The.0LatencyAnalyserMigratedfromtheoldBE_7246/latency_graph_2.png)
The overhead is negligible and unlike many other tools it can be used in a production environment.
This blog post gives you a brief introduction and the program can be downloaded here.
It's free to use for development!
Use a profiler. Yes they cost money, and using them can occasionally be a bit awkward, but they do provide you with a great deal more real evidence rather than guesswork.
Human beings are universally bad at guessing where performance bottlenecks are. It just seems to be something our brains aren't built to do very well. It may seem obvious, and you may have great ideas about what the problem is, but the real world often turns out to be doing something different. Optimising the wrong part of the code means, at best, lots of work for minimal benefit. More often it makes things slower, and sometimes it breaks things entirely. So before you make any changes for the sake of optimisation, you should always have real evidence from a profiler or another accurate tool.
As mentioned, JProfiler and YourKit are both fairly good and not prohibitively expensive. Last time I looked, they both had free demos too.
For completeness' sake: even though my company more or less standardizes on Eclipse, we use NetBeans (6 and up) with its included, free profiler on a daily basis. It works better than the Eclipse TPTP plugin (last checked 3 months ago), and for us it removes any need for a commercial profiler such as JProfiler, which is excellent but fast becoming unnecessary.
VisualVM should be the NetBeans profiler as a standalone tool. I tried TPTP for Eclipse, but VisualVM seems like a much nicer option!