We have an application running on WebLogic 8.1.3, using the bundled 1.4.2 JDK, and it is leaking memory at a moderate but steady rate.
I've done some reading about how to track down memory leaks, but most of it seems to assume JDK 5 or higher. Are there any tools available for earlier versions?
Beyond that, we have very little to go on: the leak only seems to occur in the full production environment, never in the test environments.
We have two machines running WebLogic, clustered for load balancing.
The leak occurs on only one of the clustered servers at a time (?!), never on both.
The leak sometimes, but not always, switches from one server to the other when WebLogic is restarted.
So I figure there must be some object, created at server startup on one (but not both) of the servers, that is behind the leak. Does this seem a reasonable place to start looking?
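To try to confirm which node is leaking, we're planning to enable verbose GC logging on both servers and compare heap occupancy after full collections. A minimal sketch of the start-script change, assuming the bundled 1.4.2 JVM accepts these flags (the log path is hypothetical):

    # Hypothetical addition to the WebLogic start script (Sun JDK 1.4.2 flags)
    JAVA_OPTIONS="$JAVA_OPTIONS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/weblogic/gc.log"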
JProfiler still supports profiling Java 1.4 in its current version (7.0).
You could have a look at this screencast on how to search for memory leaks with JProfiler.
Disclaimer: My company develops JProfiler.
Have you tried running jvisualvm and looking at the memory used (via a heap dump)?
http://download.oracle.com/javase/6/docs/technotes/tools/share/jvisualvm.html
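As a sketch of taking a dump for it to load, assuming a JDK 6 jmap is available on the server (this won't attach to a 1.4 process; the path and pid are placeholders):

    # Dump the heap of the running JVM to a binary .hprof file (JDK 6 jmap)
    jmap -dump:format=b,file=/tmp/heap.hprof <pid>
    # Then open /tmp/heap.hprof in jvisualvm via File > Load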
I am running a web-based Java application on JBoss and OFBiz. I suspect a memory leak, so I did some memory profiling of the JVM on which the application, JBoss, and OFBiz are all running. I suspect garbage collection is not working as expected for the application.
I used VisualVM, JConsole, YourKit, etc. to do the memory profiling. I could see how much heap memory is used, how many classes are loaded, how many threads are created, and so on. But I need to know how much memory is used by the application alone, how much by JBoss, and how much by OFBiz, respectively. I want to find out who is using how much memory and what the usage pattern is. That will help me identify where the memory leak is happening and where tuning is needed.
With the memory profilers I have run so far, I was unable to differentiate the usage of each application separately. Can you tell me which tool can help me with that?
There is no way to do this with Java, since the Java runtime has no clear notion of "this is application A and that is application B".
When you run several applications in one JVM, you're really running just one: JBoss. JBoss then has a very complex classloader setup, but the app you're profiling is actually JBoss itself.
To do what you want, you have to apply filters, but this only works when the memory leak is in a class that isn't shared between applications (so when com.pany.app.a.Foo leaks, you can do this).
If you can't use filters, you have to look harder at the numbers to figure out what's going on. That means you'll probably have to let the app server run out of memory, create a heap dump, and then look for what took most of the memory and work from there.
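For the dump-on-OOM route, a minimal sketch of the flags involved (the dump path is hypothetical):

    # Have the JVM write a heap dump automatically when an OutOfMemoryError is thrown
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps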
The only other alternative is to install a second server, deploy just one app there, and watch that.
You can install Docker and create containers, which let you run processes in isolation. This allows you to run multiple containers from the same base image without having to install the JDK multiple times. The advantage is separation of concerns: every application can be deployed in a separate container. You can then profile any specific application running on the JVM, because each namespace gives the application a completely isolated view of the operating environment, including process trees, network, user IDs, and mounted file systems.
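As a rough sketch of the idea, one application per container (the image tag and file names are hypothetical):

    # Hypothetical Dockerfile: one JVM application per container
    FROM openjdk:8-jre
    COPY app-a.jar /opt/app-a.jar
    # Profiling flags can be added here without affecting the other applications
    CMD ["java", "-jar", "/opt/app-a.jar"]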
Here are a few resources for Docker:
Deploying Java applications with Docker
JVM plus Docker: Better together
Docker
Please let me know if you have any questions!
Another good tool for finding Java memory leaks is Plumbr. You can try it out for free; it will find the cause of the java.lang.OutOfMemoryError and even show you the exact location of the problem, along with solution guidelines.
I explored various Java memory profilers and found that YourKit gave me the closest result. In the YourKit dashboard you get links to the individual classes that are running, so if you are familiar with the codebase, you will know which class belongs to which app. Click on any class and you will see the CPU and memory usage related to it. If you notice any issues, YourKit can also help you trace back to the particular line of code in your Java source files!
If you add YourKit to Eclipse, clicking on the object name in the 'issue area' will highlight the code line in the particular source file that is the source of the problem.
Pretty cool!!
I'm writing a Java servlet that I'm planning to deploy on Amazon AWS using Elastic Beanstalk. My tests show that things run well on a Small EC2 instance with the stock Tomcat AMI that Beanstalk uses.
I'm trying to figure out how to properly allocate Java heap space for this configuration. A Small instance has 1.7 GB of memory, so I'm thinking that a 1024 MB heap will work well. I realize that memory will be needed for other things, even though the only "real" purpose of this instance is to run Tomcat. I also know that there's a point with large heaps where the standard Sun/Oracle JVM stops performing well.
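For concreteness, the kind of settings I'm considering (the values are my guesses, not measured recommendations):

    # Candidate Tomcat JVM options for a Small (1.7 GB) instance
    JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=128m"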
Is this a reasonable way to allocate the memory? Should I be using more or less? What tools can I use to help determine the optimal configuration?
1024 MB seems OK, maybe a bit much.
I'm not exactly sure what your servlet is doing, but to give you an idea, I have an ecommerce application with about 1000 daily users running on 2 small ec2 instances. Tomcat load is distributed via mod_jk.
I did not tune the JVM at all and kept the default settings. I'm also using terracotta distributed object caching, and that process seems to be consuming most of the memory.
Are you deployed on a Linux-based OS or Windows? IMO, Linux does a better job of managing available memory.
As for tools, I would suggest deploying your application as-is to a small EC2 instance and using a tool like JMeter to stress test it. During the stress test, you can have the top utility open (assuming your app is on Linux and top is installed).
Try to break your application by seeing how much load it can handle. That's the beauty of EC2: you can set up a test environment in minutes and discard it right after.
Check New Relic for Java to determine your heap usage pattern.
I have recently downloaded a trial version of YourKit and after playing with it for a while it looks great, but I am concerned about running it in the production environment. Previous profilers I have used have put unacceptable overhead on the servers.
Does anyone know the typical overhead of the YourKit software? Or has anyone had any problems running YourKit in a production environment?
I am running YourKit for Java. The server I am profiling runs Red Hat Linux with JBoss 4.
For everybody wondering: we initially had some really strange performance issues that we couldn't pin down for quite a while.
So we installed YourKit on our production servers (Tomcat) and disabled the telemetry (disableexceptiontelemetry, disablestacktelemetry) as advised.
Then we started tuning the garbage collection, but this did not seem to make any difference.
Still, from time to time one of the servers would randomly start to show really bad performance. Sometimes it recovered by itself; more often a restart was the only solution.
After a lot of debugging and log-reading, we found some very long periods of reference processing for JNI weak references in the GC log.
The YourKit probes seemed to interfere with that somehow.
After disabling the probes as well (builtinprobes=none), everything went back to normal.
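For reference, the agent line we ended up with looked roughly like this (the library path is hypothetical; the options are the ones named above):

    -agentpath:/opt/yourkit/libyjpagent.so=disablestacktelemetry,disableexceptiontelemetry,builtinprobes=none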
The GC tuning had already solved our initial performance problems, but leaving the YourKit probes active created a new issue that was so similar we couldn't tell the two apart.
See here for more details:
Java G1 GC Processing Reference objects works slow
I have used YourKit on JBoss and Mule servers in production. What I observed is that when the load increased, it threw out-of-memory errors. So we stopped using it in production and now use it only for local testing.
We use JConsole on the production server to monitor server resources like CPU, memory, and threads.
It is really helpful.
I have used YourKit in production, but on a Tomcat server. It works pretty well, and we didn't notice any major performance overhead.
We had many instances of the Tomcat server running behind a load balancer, so we put YourKit on one of the servers, and things worked out pretty well.
I'm new to Java profiling, and I'd like to ask about it.
Does it make sense to profile an application on the server, where I only have the console?
Is there any console profiler that makes sense to use?
Or should I profile the application only on localhost?
VisualVM is able to connect to a remote host, and profiling works the same way as it does locally. It has been part of the JDK since JDK 6 update 7.
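A minimal sketch of the server-side setup via jstatd (the policy file name is arbitrary):

    # jstatd.policy: grant tools.jar the permissions jstatd needs
    grant codebase "file:${java.home}/../lib/tools.jar" {
        permission java.security.AllPermission;
    };

    # Start the monitoring daemon on the server, then add the host in VisualVM
    jstatd -J-Djava.security.policy=jstatd.policy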
When you do profiling you should generally try to reproduce the production environment as closely as possible. Differences in hardware (# of cores, memory, etc) and software (OS, JVM version) can make your profiling results as unique as the runtime environment.
For example, what looks like a CPU bottleneck worth optimizing on your local machine might disappear entirely or turn into a disk bottleneck on your production server depending on the differences in the CPU.
All modern profilers will allow you to attach to a remotely running JVM, so you don't need to worry about only having console access.
What profiler you decide to use will depend on your needs and preferences. Certain profilers will show you "hotspots" where your code is spending the majority of the time and these are often good candidates for optimization.
I prefer to use JProfiler for its extensive features and good performance. I previously used YourKit but switched to JProfiler for its memory and thread profiling features.
"Does it make sense to profile an application on the server, where I only have the console?"
Fortunately, that does not matter, since Java has always (well, for a long time, anyway) supported remote profiling, i.e. the Profiler can run on a different machine than the JVM being profiled and get its data via the network.
All Java profilers that I've ever seen support this, including visualvm, which comes with recent JDKs (in the bin directory).
There's a "quick and dirty" but effective way to find performance problems in Java.
This is a language-agnostic explanation of why it works.
Note that the JDK comes with a built-in profiler, HPROF. HPROF is a bit simplistic, but it will find many problems. It is activated simply by invoking the JVM with the parameter -agentlib:hprof; it then runs for as long as your JVM is running, collecting data until the JVM terminates and dumping it into a file on the server, which you can analyze.
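A minimal sketch of an invocation (the option values are common defaults; app.jar is a placeholder):

    # Sample CPU usage and record allocation sites; results go to java.hprof.txt on exit
    java -agentlib:hprof=cpu=samples,heap=sites,depth=10 -jar app.jar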
See e.g. http://java.sun.com/developer/technicalArticles/Programming/HPROF.html for a good introduction. A nice graphical analyzer for HPROF's results is PerfAnal: http://java.sun.com/developer/technicalArticles/Programming/perfanal/
Has anyone used the new Java 1.6 JDK tool, VisualVM, to profile a production application and how does the application perform while being profiled?
The documentation says that it is designed for both production and development use, but based on previous experience with other profiling tools, I am hesitant.
While I haven't personally used VisualVM, I saw this blog post just today that might have some useful information for you. The author talks about profiling a production app using it.
I tried it on a dev box and found that when I turned off profiling, it would shut Tomcat down unexpectedly. I'd be very cautious about rolling this out to production; can you simulate load in a staging environment instead? It's not as good as the real thing, but it probably won't get you fired if it goes wrong...
I've used VisualVM before to profile something running locally. A big win was that I just start it up, and it can connect to the running JVM. It's easier to use than other profiling tools I've used before and didn't seem to have as much overhead.
I think it does sampling. The overhead on a CPU intensive application didn't seem significant. I didn't measure anything (I was interested in how my app performed, not how the tool performed), but it definitely didn't have the factor of 10 slowdown I'm used to seeing from profiling.
For just monitoring your application, running VisualVM remotely should not slow it down much. As long as the system is not on the edge of collapsing, I haven't seen any problems. It basically just reads out information from the coarse-grained built-in instrumentation of the JVM. If you start profiling, however, you'll have the same issues as with other profilers, essentially because they all work in almost the same way, often relying on the same support in the JVM.
Many people have problems running VisualVM remotely due to firewall issues, but you can even run VisualVM remotely over ssh, with some system properties set.
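A sketch of the system properties typically involved (the port is arbitrary, and disabling authentication and SSL like this is only reasonable inside an ssh tunnel):

    # Expose JMX so VisualVM/JConsole can connect remotely
    -Dcom.sun.management.jmxremote.port=9010
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false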
It is possible to remote connect to your server from a different computer using VisualVM. You just need to right click on the "Remote" node and say "Add Remote Host."
This would at least eliminate the VisualVM overhead (if there is any) from impacting performance while it is running.
This may not eliminate all performance concerns, especially in Production environments, but it will help a little.
I've used the NetBeans profiler, which uses the same underpinnings as VisualVM.
I was working with an older version of WebLogic, which meant using the 1.5 JVM, so I couldn't do a dynamic attach. The application I was profiling had several thousand classes, and my workstation was pretty much unusable while the profiler instrumented them all. Once instrumentation was complete, the system was sluggish but not completely unusable. The amount of slowdown really depends on what you need to capture: the basic CPU metrics are pretty lightweight, while profiling memory allocation slows things down a lot.
I would not use it on a production system. Aside from the potential for slowdown, I eventually ran out of PermGen space, because the profiler re-instruments and reloads classes when you change settings. (This may be fixed in the 1.6 agent; I don't know.)
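If you do experiment with it anyway, giving PermGen more headroom may buy some time (the value is a guess, not a recommendation):

    # Enlarge PermGen to absorb the profiler's class re-instrumentation
    -XX:MaxPermSize=256m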
I've been using VisualVM a lot, since before it was included in the JDK. It has a negligible impact on the performance of the system. I've never noticed it cause a performance problem, but then again, our Java server had enough headroom at the time to support a little extra load. If your server is running so close to its limit that it can't handle VisualVM, then I'd say it's more likely that you need to buy another server. Any production server should have some memory headroom; otherwise what you have is a disaster just waiting to happen.
I have used VVM (VavaVoom?) quite extensively, and it works like a charm in light mode, i.e. no profiling, just getting the basic data from the VM. But once you start profiling and there are many classes, there is considerable slowdown. I wouldn't profile in a production environment even on a 128-core board with 2 TB of memory, purely because the reloading and redefining of classes is tricky. The server classloaders are another matter; they vary from one server implementation to another, and interfering with them in production is not a very good idea.