Hi, we are getting an out-of-memory exception for one of our processes, which is running in a Unix environment. How do we identify the bug? (We observed that there is very little chance of memory leaks in our Java process.) So what else do we need to analyse to find the root cause?
I would suggest using a profiler like YourKit (homepage) so that you can easily find what is allocating so much memory.
In any case you should check which settings are specified for your JVM to understand if you need more heap memory for your program. You can set it by specifying -X params:
java -Xmx2g -Xms512m
would start the JVM with a maximum heap of 2 GB and an initial heap size of 512 MB.
If there are no memory leaks then the application simply needs more memory. Are you running out of heap memory, perm memory, or native memory? For heap memory and perm memory you can increase the allocation using the -Xmx or -XX:PermSize arguments respectively.
But first try using a profiler to verify that your application is really not leaking any memory.
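If you just want to confirm which heap limits the JVM actually ended up with, you can also check from inside the process. A minimal sketch using only the standard java.lang.Runtime API (no profiler needed):

```java
// Minimal sketch: print the heap limits the JVM actually ended up with,
// so you can confirm whether your -Xmx/-Xms settings were applied.
public class HeapSettings {
    static long mb(long bytes) {
        return bytes / (1024 * 1024);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap  (MB): " + mb(rt.maxMemory()));   // roughly -Xmx
        System.out.println("committed (MB): " + mb(rt.totalMemory())); // current heap size
        System.out.println("free      (MB): " + mb(rt.freeMemory()));  // free within committed
    }
}
```

Run it with and without -Xmx to see what default your environment picks.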
Related
I have a Java program that has been running for days, it processes incoming messages and forward them out.
A problem I noticed today is that the heap size I printed via Runtime.totalMemory() shows only ~200 MB, but the RES column in the top command shows it is occupying 1.2 GB of RAM.
The program is not using direct byte buffer.
How can I find out why the JVM is taking this much extra RAM?
Some other info:
I am using openjdk-1.8.0
I did not set any JVM options to limit the heap size, the startup command is simply: java -jar my.jar
I tried heap dump using jcmd, the dump file size is only about 15M.
I tried pmap, but there was too much info printed and I don't know which parts of it are useful.
The Java Native Memory Tracking tool is very helpful in situations like this. You enable it by starting the JVM with the flag -XX:NativeMemoryTracking=summary.
Then when your process is running you can get the stats by executing the following command:
jcmd [pid] VM.native_memory
This will produce a detailed output listing e.g. the heap size and metaspace size, as well as memory allocated outside the heap.
You can also use this tool to create a baseline to monitor allocations over time.
As you will be able to see using this tool, the JVM by default reserves about 1 GB for the metaspace, even though just a fraction of it may be used. This may account for the RSS usage you are seeing.
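For a quick in-process view of the non-heap areas NMT reports (Metaspace, Compressed Class Space, code cache), the standard MemoryPoolMXBean API can be used without any extra JVM flags; a sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

// Sketch: list the JVM's non-heap memory pools with their used and
// committed sizes. NMT is more complete (it also covers threads, GC,
// internal allocations), but this needs no restart and no flags.
public class NonHeapPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.printf("%-30s used=%d KB committed=%d KB%n",
                        pool.getName(),
                        pool.getUsage().getUsed() / 1024,
                        pool.getUsage().getCommitted() / 1024);
            }
        }
    }
}
```

Comparing "committed" here with the NMT summary helps you see which region is inflating RSS.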
One thing to check: if your heap is not taking much memory, use a profiler to see how much non-heap memory is in use. If that amount is high, and it does not come down even after a GC cycle, then you should probably be looking for a (non-heap) memory leak.
If non-heap memory usage is low and everything looks fine in the profiling tools, then I would guess it is the JVM holding on to memory rather than releasing it.
So you had better check, using a profiling tool, whether GC has run at all, or force a GC and watch whether memory comes down, keeps expanding, or what else is happening.
JVM memory and heap memory behave differently, and the JVM may decide that it should expand the heap after a GC cycle based on these parameters:
-XX:MinHeapFreeRatio=
-XX:MaxHeapFreeRatio=
The basic idea is that after a GC cycle the JVM measures free and used memory and expands or shrinks the heap based on the values of the above flags. By default they are set to 40 and 70, which you may be interested in tuning. This is especially critical in containerized environments.
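To observe this expand/shrink behaviour directly, you can compare used versus committed heap around a GC request. A small sketch (note that System.gc() is only a hint, and whether "committed" shrinks depends on the collector and the two flags above):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Sketch: snapshot heap usage before and after requesting a GC.
// "committed" is what the OS sees (RSS-relevant); "used" is live + garbage.
public class HeapElasticity {
    public static void main(String[] args) {
        MemoryUsage before = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("before GC: used=" + before.getUsed()
                + " committed=" + before.getCommitted());

        System.gc(); // a request, not a guarantee

        MemoryUsage after = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("after GC:  used=" + after.getUsed()
                + " committed=" + after.getCommitted());
        // Whether 'committed' shrinks is governed by the collector and by
        // -XX:MinHeapFreeRatio / -XX:MaxHeapFreeRatio.
    }
}
```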
You can use VisualVM to monitor what is happening inside your JVM. You can also use JConsole for a first overview; it comes with the JDK itself. If your JDK is on your PATH, start it from a terminal with jconsole, then select your application and start monitoring.
Getting java.lang.OutOfMemoryError: Java heap space on JBoss 7
The entry in jboss configuration is
set "JAVA_OPTS=-Xms1G -Xmx2G -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=2096M"
The Error was
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332) [rt.jar:1.8.0_231]
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) [rt.jar:1.8.0_231]
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) [rt.jar:1.8.0_231]
at java.lang.StringBuffer.append(StringBuffer.java:270) [rt.jar:1.8.0_231]
at java.io.StringWriter.write(StringWriter.java:112) [rt.jar:1.8.0_231]
at java.io.PrintWriter.write(PrintWriter.java:456) [rt.jar:1.8.0_231]
at java.io.PrintWriter.write(PrintWriter.java:473) [rt.jar:1.8.0_231]
at java.io.PrintWriter.print(PrintWriter.java:603) [rt.jar:1.8.0_231]
at java.io.PrintWriter.println(PrintWriter.java:756) [rt.jar:1.8.0_231]
at java.lang.Throwable$WrappedPrintWriter.println(Throwable.java:765) [rt.jar:1.8.0_231]
at java.lang.Throwable.printEnclosedStackTrace(Throwable.java:698) [rt.jar:1.8.0_231]
at java.lang.Throwable.printEnclosedStackTrace(Throwable.java:710) [rt.jar:1.8.0_231]
You are encountering OutOfMemoryError: Java heap space; in this case you don't have to increase metaspace. I suggest increasing the heap allocation (-Xms3G -Xmx3G). Make sure Xms and Xmx have the same value. If you still encounter the same issue, add the -XX:+HeapDumpOnOutOfMemoryError option, which generates a heap dump when the OOM error occurs. You can analyze this heap dump with tools like Eclipse MAT to check which objects consume the most memory and whether there is a memory leak.
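For illustration, the kind of pattern Eclipse MAT typically surfaces in its dominator tree is an ever-growing static collection. This is a hypothetical example, not code from the question:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical leak pattern: a static collection that is only ever appended
// to. In MAT's dominator-tree view, CACHE would show up as the single object
// retaining most of the heap.
public class LeakExample {
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // added per request, never removed
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) handleRequest();
        System.out.println("retained entries: " + CACHE.size());
    }
}
```

With enough traffic this eventually exhausts any heap size, which is why a dump analysis beats simply raising -Xmx.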
Depending on the application deployed on JBoss, even 2 GB of heap may not be enough.
Potential problems:
Xmx configuration is not applied (configuration is made in a wrong file)
The application just requires more heap
There is a memory leak in the application
If you run JBoss on Windows, set the following values in the JAVA_OPTS variable in the standalone.conf.bat file: -Xmx2G -XX:MaxMetaspaceSize=1G.
(The standalone.conf file is ignored on Windows; it applies on *nix systems only.)
Verify that these values are applied by connecting to the JVM using JConsole (that is a part of JDK) or JVisualVM.
Using these tools you can monitor heap usage and see if more heap is required.
If the heap size is big enough (e.g. 4+ GB) and heap still constantly grows while garbage collection (GC) doesn't free space, probably there is a memory leak.
For analysis add to the JAVA_OPTS the following flags: -Xloggc:gc.log -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError.
With these flags JVM will log GC activity into the gc.log file where you can explicitly see how much space is freed after the GC.
If, according to the log, subsequent GC executions don't free any space, there is probably a memory leak and you should analyze a heap dump, created manually with JVisualVM or created by the JVM itself on OutOfMemoryError. Heap dumps can be analyzed using JVisualVM or Eclipse Memory Analyzer (MAT).
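If you cannot restart the server with GC logging flags, collection counts and times can also be polled in-process via the standard GarbageCollectorMXBean API; a sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch, complementary to GC logs: report per-collector counts and total
// time. If counts climb steadily while used heap never drops, that points
// toward a leak rather than a too-small heap.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: collections=%d, time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Polling this periodically (e.g. once a minute) gives a rough GC trend without any log files.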
If you use IntelliJ IDEA, open the run configuration and add what you need to the VM options field (the original answer illustrated this with a screenshot).
java.lang.OutOfMemoryError usually means that you're doing something wrong: either a configuration issue (the specified heap size is insufficient for the application), holding onto objects too long (which prevents them from being garbage collected), or trying to process too much data at a time.
Possible Solutions:
1) Try raising "JAVA_OPTS=-Xms1G -Xmx1G -XX:MaxPermSize=256M" to the maximum your server allows and restart the server.
2) Check your code for any memory leak. Use some heap dump reader to check the same. (There are multiple plugins available for IDE's like Eclipse, IntelliJ, etc)
3) Check your code (90% of the time the issue is in the code): check whether you are loading excess data from a database or another source into heap memory and whether it is really required. If you are calling multiple web services and doing multiple read-only DB operations, cross-check the DB queries (whether joins are used with the right WHERE clauses) and the amount of data returned by the queries and web services.
4) If the issue is due to some recent code change, then try to analyze the same.
5) Also, check if cache and session elements are cleared once used.
6) To make sure the issue is not due to JBoss, you can run the same code on some other server (Tomcat, WebSphere, etc.) for testing purposes.
7) Check Java documentation for more understanding on Out of memory error: Documentation Link
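For point 5 above, one common fix is to replace an unbounded cache with a size-bounded LRU cache, so entries are evicted instead of accumulating. A sketch using a plain LinkedHashMap (real applications may prefer a dedicated cache library):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a size-bounded LRU cache. When the map exceeds maxEntries, the
// least-recently-accessed entry is evicted, keeping heap usage bounded.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

With a bound in place, the cache can no longer be the object that eventually fills the heap, which makes dump analysis of the remaining candidates much simpler.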
Make sure you have provided enough space in your standalone.conf file inside bin directory
JAVA_OPTS="-Xms512m -Xmx1024m -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=1024m -Djava.net.preferIPv4Stack=true"
You may have to increase MaxMetaspaceSize up to 1024m and MetaspaceSize to 256m; hopefully that will work.
Also try to monitor your process: run jvisualvm (included in the JDK) and a UI will open; from there you will be able to get a lot of info.
Hope this helps.
Usually, this error is thrown when there is insufficient space to allocate an object in the Java heap: the garbage collector cannot make space available to accommodate a new object, and the heap cannot be expanded further.
Please do the below possible steps for fixing java.lang.OutOfMemoryError.
Step 1: Check JBoss configuration settings. The setting depends on your system configuration, java version and JBoss version. Check configuration here.
Step 2: Check Heap dump settings here.
Step 3: Java code/ Library problem.
Analyze your memory here, and with the JBoss memory profiler.
Other causes for this OutOfMemoryError message are more complex and are caused by a programming error.
I am trying to solve a memory issue I am having with my tomcat servers and I have some questions about memory usage.
When I check my process's memory usage with top, I see it is using 1 GB of physical memory; after creating a core dump using gdb, the core file size is 2.5 GB; and when analyzing the HPROF file created by jmap, it states that only 240 MB is used.
So if top shows 1 GB, why does the hprof file show only 240 MB? Where did the other 760 MB go?
Have you tried running jmap with the -dump:format=b,file=<file> option (optionally with the live sub-option)? With -dump:live the JVM runs a GC before taking the dump, so the dump only contains reachable objects and can be much smaller than the in-memory heap.
Also, JVM memory is not just heap memory. It also contains the code cache, thread stacks, native method areas, and direct memory; even threads are not free. You can read more about it here. Just make sure to check whether all of these add up to the total you are seeing.
I would suggest using VisualVM or YourKit and comparing the memory. Also, which GC are you using? Some GCs don't shrink the heap after it has grown, but if a GC was triggered during the heap dump it might have freed some memory (try G1GC).
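One concrete non-heap consumer worth checking is direct (off-heap) buffer memory: it never appears in an hprof heap dump but does count toward RES. The standard BufferPoolMXBean tracks it; a sketch:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

// Sketch: allocate a direct buffer and report the JVM's buffer pools.
// The "direct" pool counts off-heap NIO buffers; "mapped" counts
// memory-mapped files. Neither shows up in a heap dump's object sizes.
public class DirectMemory {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(8 * 1024 * 1024); // 8 MB off-heap

        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s pool: count=%d, used=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed());
        }
    }
}
```

If the "direct" or "mapped" pools are large, that alone can explain a big gap between hprof size and RES.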
Looking at the Heap and Non-Heap memory usage and total memory consumption on Heroku I see unexpected results.
Heap memory usage around 175MB
Non-Heap memory usage around 125MB
Total memory usage: 525MB
As the total memory usage is above 512MB I get R14 errors and the memory utilisation > 100% (102.8%).
Is this expected?
EDIT
I'm not using a custom Procfile and the Heroku dashboard displays following command to be used for starting the dyno: web java -Dserver.port=$PORT $JAVA_OPTS -jar build/libs/*.jar
It is using JDK 11 without any specific GC settings.
EDIT 2
Added total memory usage graph.
Note: for the whole graph the app is "idle"; nobody is using it except for a minimal number of requests from my site.
With no specific memory setting (using Heroku defaults) 525MB is used.
Changing memory setting to -Xmx256m memory usage drops to 400 MB right away.
Changing it to -Xmx272m the memory goes up to 500MB.
The 175 MB of used heap only reflects the portion of the committed heap that the JVM is currently using. If you're using the defaults, the committed heap is probably around 300 MB (unfortunately this is not shown in the Heroku dashboard).
The additional memory will come from the following:
Non-heap (metaspace, code cache, some other things)
Threads (each thread has its own stack and consumes memory)
JNI (native memory not managed by the JVM, see this blog)
External processes (if the JVM launches any other JVM or non-JVM process it will be reflected in total dyno memory)
I recommend the following:
Lower your heap size (256MB may be enough)
Lower some other defaults (-XX:ReservedCodeCacheSize=50m -XX:MaxMetaspaceSize=80m)
Reduce the size of any thread pools (Tomcat?)
Finally, there is a case where lots of temporary direct memory maps remain associated with the process even though the JVM is done with them (i.e. the OS doesn't reclaim them and swaps them to disk). If you aren't seeing any adverse performance effects, this may be your case. This happens when there is lots of IO (like parsing tons of XML or JSON files, or generating PDFs).
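To gauge how much of the dyno's memory thread stacks alone may reserve, you can count live threads in-process. The per-thread stack size below is an assumed figure for illustration (the real value depends on -Xss and the platform default):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Sketch: estimate stack memory reservation from the live thread count.
// The 1 MB-per-thread figure is an assumption (typical 64-bit default),
// not a measured value; tune it to your actual -Xss setting.
public class ThreadStackEstimate {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int count = threads.getThreadCount();
        long assumedStackKb = 1024; // assume -Xss1m

        System.out.println("live threads: " + count);
        System.out.println("approx stack reservation: "
                + (count * assumedStackKb / 1024) + " MB");
    }
}
```

On a 512 MB dyno, 100 pooled threads at the default stack size could by themselves reserve a fifth of the quota, which is why shrinking pools and -Xss both help.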
In my Tomcat application I eventually get "Out of memory" and "Cannot allocate memory" errors. I suppose it has nothing to do with the heap, as it completely fills up the system memory and I am hardly able to run bash commands.
How is this problem connected to the heap? How can I correctly set the heap size so that the application has enough memory and does not consume too much of the system's resources?
The strange thing is that the top command keeps saying Tomcat consumes only 20% of memory, and that there is still free memory, even once the problem happens.
Thanks.
EDIT:
Follow-up:
BufferedImage leaks - are there any alternatives?
Problems with running bash scripts may indicate I/O issues, and this might be the case if your JVM is doing Full GCs all the time (which is the case, if your heap is almost-full).
The first thing to do, is to increase the heap with -Xmx. This may solve the problem, or - if you have a memory leak, it won't, and you will eventually get OutOfMemoryError again.
In this case, you need to analyze memory dumps. See my answer in this thread for some instructions.
Also, it might be useful to enable Garbage Collection Logs (using -Xloggc:/path/to/log.file -XX:+PrintGCDetails) and then analyzing them with GCViewer or HPJmeter.
You can set JVM Heap size by specifying the options
-Xmx1024m //for 1024 MB
Refer to this for setting the option for Tomcat.
If you have 4 GB of RAM then you can allocate 3 GB to the heap:
-Xmx3g
you can also change the available perm gen size by using the following commands:
-XX:PermSize=128m
-XX:MaxPermSize=256m