I'm very new to Java. I'd like to help one of our teams that is working on an internal bank application. I'm one of the server support staff with access to the Windows and Linux servers. Anyway, the Java app is running on Linux. What tools would be appropriate for capturing memory leaks and pinpointing which code block is actually causing the leak? I found a few tools like jstat and jmap. Are these good? Are they safe to run in production? And how would you debug an issue where CPU goes to 100%?
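For reference, the kind of invocations I've seen suggested look like this (the PID is just a placeholder; I'd run them as the same user the app runs under):

jps -l                                                     # list Java processes with their main class or jar
jstat -gcutil <pid> 5000                                   # GC / heap-occupancy stats every 5 seconds
jmap -histo <pid>                                          # per-class object counts and sizes
jmap -dump:live,format=b,file=/tmp/app-heap.hprof <pid>    # full heap dump for offline analysis
top -H -p <pid>                                            # per-thread CPU usage, for the 100% CPU case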
Related
I ran nohup java -jar sample.jar & on my Linux server as the backend service of a web application, but the service is automatically killed after a few hours and I have to re-run the command.
I want to know what the reason is and how to resolve the issue.
On Linux, one possibility is that it is the OOM killer. This watches for processes that appear to be triggering excessive paging activity, and sends them a SIGKILL signal. The OOM killer's purpose is to prevent the paging system from thrashing ... which can bring the entire system to its knees.
If it is the OOM killer killing your Java app, you should see a log message in /var/log/messages or /var/log/syslog.
The solution may be as simple as reducing the Java app's heap size so that it all fits in the available physical memory. Alternatively, if your Java app's heap usage is growing all of the time, you may need to look for a possible memory leak in your app. (Search for articles on finding Java memory leaks ...)
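For example, a quick way to check (assuming standard log locations; the 512m heap cap is just an example, size it to your machine) would be something like:

grep -i "killed process" /var/log/syslog /var/log/messages   # OOM-killer entries, if any
dmesg | grep -i oom                                          # the same information from the kernel ring buffer
nohup java -Xmx512m -jar sample.jar &                        # restart with an explicit, smaller heap cap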
I run Spring Boot via the same command and only a handful of times have I seen this behavior. It turned out that the JVM ran out of memory and it crashed. When I changed the heap size settings it was rock solid again.
I'm using Google App Engine v.1.9.1, Java edition, running inside Eclipse Kepler SR2. I've got JDK 1.7. My logging.properties is sending the logs to the java.util.logging.ConsoleHandler only.
My development server (running in Eclipse) receives a lot of data from another server and just dumps it into a database. This generates a lot of logs. I get OutOfMemoryError after only a few hours.
I've run JProfiler and I figured out the object being kept around is com.google.apphosting.api.logservice.LogServicePb$LogLine. Somehow this isn't being discarded, ever, keeping millions of instances in memory.
Sure I can reduce the amount of data logged but that will only delay the problem.
I've looked everywhere to figure out how to flush out the log lines but I can't find any setting for this. The only option available is for Python not Java.
Any idea what's causing this and how to fix it?
As @Martin Berends said, the development server inside Eclipse is only for developing. It seems that log statements are kept in memory in that environment. Once I moved my app to a test server, the memory usage seems to be flat.
So the bottom line is: when running in a development environment, reduce the amount of logging and restart the server once in a while to avoid OutOfMemoryErrors. Secondly, do your testing on a real test server.
Let's say I have a very large Java application that's deployed on Tomcat. Over the course of a few weeks, the server will run out of memory, application performance is degraded, and the server needs a restart.
Obviously the application has some memory leaks that need to be fixed.
My question is: if the application were deployed to a different server, would there be any change in memory utilization?
Certainly the services offered by the application server might vary in their memory utilization, and if the server includes its own unique VM -- i.e., if you're using J9 or JRockit with one server and Oracle's JVM with another -- there are bound to be differences. One relevant area that does matter is class loading: some app servers handle redeployment more cleanly than others, and warm-starting the application after a configuration change can result in serious memory leaks due to class loading problems on some server/VM combinations.
But none of these are really going to help you with an application that leaks. It's the program using the memory, not the server, so changing the server isn't going to affect much of anything.
There will probably be a slight difference in memory utilisation, but only in as much as the footprint differs between servlet containers. There is also a slight chance that you've encountered a memory leak with the container - but this is doubtful.
The most likely issue is that your application has a memory leak - in any case, the cause is more important than a quick fix - what would you do if the 'new' container just happens to last an extra week etc? Moving the problem rarely solves it...
You need to start analysing the application's heap memory to locate the source of the problem. If your application is crashing with an OOME, you can add this to the JVM arguments so that a heap dump is written when the error occurs (note the plus sign; a leading minus would disable the option):
-XX:+HeapDumpOnOutOfMemoryError
If the performance is just degrading until you restart the container manually, you should get into the routine of triggering periodic heap dumps. A timeline of dumps is often the most help, as you can see which object stores just grow over time.
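If the JDK tools are installed on the server, a periodic dump can be taken with something along these lines (the PID and output path are placeholders; note that jmap pauses the JVM while the dump is written, so pick a quiet moment):

jmap -dump:format=b,file=/tmp/heap-$(date +%Y%m%d-%H%M).hprof <tomcat-pid>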
To analyse the dumps, you'll need a heap analysis tool:
JHat or IBM Heap Analyser or whatever your preference :)
Also see this question:
Recommendations for a heap analysis tool for Java?
Update:
And this may help (for obvious reasons):
How do I analyze a .hprof file?
We have a curious problem with our Java processes dying.
The application doesn't print a stack trace or write anything to the logs; the process just randomly dies. It's a heavily used application, but the problem only appears about once a month.
We're currently looking into using Process Monitor but any other suggestions would be welcome.
Edit:
It's a distributed Java application, running on Weblogic with an in-house web framework (Yes, this is a terrible idea, but it's been running for eight years), connecting to Oracle.
- Out of Memory? Our logs would catch java.lang.OutOfMemoryError, according to Brian Agnew.
- Write crashes to a log? I don't think Java ever gets the chance; the death is happening at a process level, rather than Java exiting.
Can you wrap it in some shell script that captures the log files (stdout/stderr) and the exit code (which should give some indication as to how it died)? On JVM exit you can also capture machine-level stats using WMI.
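A minimal sketch of such a wrapper on a Unix-like host (the log paths are made up, and the startup command should be whatever actually launches your server; on Windows you'd do the equivalent in a batch file):

#!/bin/sh
# start the JVM, capturing stdout/stderr, then record how it exited
java -jar yourapp.jar > /var/log/yourapp/out.log 2> /var/log/yourapp/err.log
echo "`date`: JVM exited with code $?" >> /var/log/yourapp/exit.log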
If the VM itself is crashing, it'll leave behind an hs_err_pid... file that contains stack traces and machine-level debug info. You can then use that to diagnose the VM issue. See this blog entry for further information.
If the problem is related to the app's behaviour, it may be worth looking at JConsole, although from your description of the issue, this sounds much more like a low level VM issue.
(I assume you're on the latest VM for your Java version number etc.)
You can use a Linux Nagios server to monitor the health of your Windows machines and services. Have a look at: nagios-monitoring-windows.
If you have such problems with your Java app, you should test it and debug it. Applications shouldn't die without a trace. Look for log files. Is the app from a vendor, or is it written in-house? Try turning up the Log4J/logger debug level. Monitor your system with Cacti or the like to narrow down the possible causes of such a crash, and talk to the software vendor.
Is enough memory available? Maybe the app runs out of memory. Is it a standalone Java process or a Java process inside a Tomcat/JBoss server?
Have you written down the crash times in a log? Do the crashes happen at irregular intervals, or do they occur at roughly regular intervals?
VisualVM is a new tool which makes monitoring Java applications easier:
https://visualvm.dev.java.net/description.html
"VisualVM is a tool that provides detailed information about Java applications while they are running. It provides an intuitive graphical user interface that allows you to easily see information about multiple Java applications."
I've been running Tomcat 5.5 with Java 1.4 for a while now with a huge webapp. Most of the time it runs fine, but sometimes it will just hang, with no exception generated and no apparent way of getting it to run again other than restarting Tomcat. The Tomcat instance is allowed a gigabyte of memory on the heap, but rarely exceeds 300 MB. Has anyone else run into this issue, and is there a solution for it?
For clarification: I determined how much memory it is using via Task Manager and via Eclipse (I've also tried running it outside of Eclipse, but I get the same problem eventually, though it takes a little longer). With Eclipse, I look at the memory allocated via its little (optional) memory pane and the amount allocated to javaw.exe via the Task Manager. I use the Sysdeo(?) Tomcat plugin for Eclipse.
For any JVM process, force a thread dump. On Windows, this can be done with CTRL-BREAK, I believe, in the console window.
In *nix, it is almost always "kill -3 jvm-pid".
This may show if you have threads waiting on db connection pool/thread pool, etc.
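On Linux/Unix that might look like this (the PID is a placeholder; the dump goes to the JVM's stdout, which on a default Tomcat install ends up in catalina.out):

ps -ef | grep java    # find the JVM's process id
kill -3 <pid>         # request a thread dump; the process keeps running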
Another thing to check is how many connections are currently open to the JVM -- either use netstat or a Sysinternals utility such as tcpconn/tcpview (google it).
Also, try running with the verbose:gc JVM flag. For Sun's JVM, run like "java -verbose:gc". This will show your garbage collections. If it is collecting a lot (full collections, especially) then you probably have a memory leak. Full collections are costly, especially on large heaps like that.
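When Tomcat is started from its own scripts, the flag can be added via the standard CATALINA_OPTS variable, for example (if it's launched from Eclipse instead, add the same flag to the launch configuration's VM arguments):

export CATALINA_OPTS="$CATALINA_OPTS -verbose:gc"
# the GC lines then appear on the JVM's stdout (catalina.out on a default install); watch for frequent full collections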
How are you determining that only 300mb are being used?
It sounds like you're hitting a deadlock.
If you can reproduce it in a dev environment then try attaching a debugger once it's happened. Take a look at your threads and see if you have any deadlocks.
If you can't get a debugger to attach you should be able to generate a thread dump, as Dustin pointed out.
Try increasing the logging sensitivity for the Tomcat application server.
http://tomcat.apache.org/tomcat-5.5-doc/logging.html
You can increase the sensitivity to FINEST or ALL for most of them for a few days and see if that helps you catch anything.
I agree with creating multiple thread dumps and viewing them through this: Thread Dump Analyzer