My problem is that I have WebLogic 12c, and when I try to run an analysis on data from my site, the data is so big that it gives me a PermGen space error. My question is what I can do to fix it, because it only happens when I analyze big data that takes more than 8060576 of total memory, which is all the RAM my server has, and I don't know what to do. If you can give me a tutorial or video I will be very grateful, because I have searched for over a day on this problem and I am still without an answer.
Thank you very much,
Vlad.
A bit of insight into how things work when you run a Java-based application. Your application is compiled into a set of classes, and those classes are run by the JVM, which stands for Java Virtual Machine.
Java Virtual Machine memory is composed of regions: the stack region, where local variables and method frames are stored, and the heap space region, where everything else is stored. The Java heap space is in turn structured into different regions called generations, and where an object is stored depends on how long the object has lived.
Now you have an idea of where the PermGen space (Permanent Generation space) comes from.
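To see these regions for yourself, here is a small sketch of my own (the class name is just an illustration) using the standard java.lang.management API; run on a pre-Java-8 HotSpot JVM it lists a pool named something like "PS Perm Gen" alongside the young and old heap generations:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Example launch on a pre-Java-8 JVM:
        //   java -XX:PermSize=128m -XX:MaxPermSize=256m MemoryPools
        // Each pool corresponds to a region such as Eden, Survivor,
        // the old generation or the permanent generation.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s type=%-8s used=%,d max=%,d%n",
                    pool.getName(), pool.getType(),
                    pool.getUsage().getUsed(), pool.getUsage().getMax());
        }
    }
}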
In your case, you need to increase the size of the PermGen space in order to allow your application to process huge amounts of data. There are plenty of answers on Stack Overflow that can teach you how to do that, listed below:
Dealing with "java.lang.OutOfMemoryError: PermGen space" error
Weblogic increase memory
facing perm gen space error in weblogic
Oracle Help Center - Tuning the Java Virtual Machine
WebLogic PermGenSpace Settings
More on PermGen and what causes it here
There are many more external links that can help you set up the PermGen space memory.
Before I conclude, I would like to mention another great tool that you can use to find out how your WebLogic instance behaves. Java VisualVM allows you to monitor how your application uses the Java heap space, processor, network, and other resources. Once installed, Java VisualVM detects running Java-based applications (local ones; you can also set it up to monitor remote servers using RMI) and shows you details of the VM, the amount of memory allocated for the Java heap space, the amount the application actually uses, the maximum memory of the machine, and so on.
I had issues with an image management app when analysing image files 6 GB in size. The problem I found was basically that the image management app was not garbage collecting properly, and Java VisualVM helped me spot the issue.
So I recommend: download and install Java VisualVM, run your application, and then see how your memory and other resources are used. If the Java heap space is indeed running out of memory, you can increase its size using the links above, then re-run the application and monitor its performance with Java VisualVM.
I am running Tomcat 6.0.32 on RHEL 5.4 with JDK 1.6.0_23. I am running more than 15 applications, all of them small. My RAM is 8 GB and swap is 12 GB. I have set the heap size from 512 MB to 4 GB.
The issue is that after a few hours or days of running, Tomcat stops providing service even though the process is still up and running. When I look at the catalina.out log file, it shows a memory leak problem.
Now, my concern is that I need to present a solution to this issue, or at least identify the application that is causing the memory leaks.
Could anyone explain how I can discover which application is causing the memory leak issue?
One option is to use heap dumps (see How to get a thread and heap dump of a Java process on Windows that's not running in a console) and analyze the heap dump later on.
Another option is to analyse the process directly using tools like jmap, VisualVM, and similar.
You may use a combination of the jmap/jhat tools (both are unsupported as of Java 8) to gather a heap dump (using jmap) and identify the top objects in the heap (using jhat). Try to correlate these objects with the applications and identify the rogue one.
I'm debugging a fairly large project I've been working on (but did not originally create) and I've noticed that sometimes it crashes with an OutOfMemoryError. The code is loading a lot of data from files so this isn't entirely surprising in general.
However, what confuses me is that I'm using VisualVM 1.3.4 to profile the program, and it behaves inconsistently. Most times I've run it, the heap gradually expands up to about 2 GB (the computer has 16 GB of RAM; it's for academic research) with the used heap spiking higher and higher underneath it. Around 2 GB, it will crash. The program isn't processing more information as time goes on, though, so it shouldn't grow the heap to 2 GB in just a few minutes.
Sometimes, though, I get a sudden crash after about 30 seconds, with a heap size of 250MB and only about 100MB in use. How am I getting a java.lang.OutOfMemoryError: Java heap space if my heap isn't full?
Edit: I'm using Eclipse to run the program, and I have the VisualVM plugin so it gets launched automatically. Also, I'm using Java 7.
Start the application with the VM argument -XX:+HeapDumpOnOutOfMemoryError.
Analyse the Heap Dump and find out what is causing the issue.
Eclipse MAT is an excellent tool for finding out such issues.
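If you want to see the whole workflow end to end, a throwaway program like this sketch of my own (the class name and the /tmp path are just placeholders), started with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp, will leave a java_pid<pid>.hprof file behind when it dies, which you can then open in Eclipse MAT:

import java.util.ArrayList;
import java.util.List;

public class OomDemo {
    public static void main(String[] args) {
        // Keep strong references so the GC can never reclaim anything;
        // eventually the heap fills up, the JVM throws OutOfMemoryError
        // and (with the flag above) writes a heap dump before dying.
        List<byte[]> retained = new ArrayList<byte[]>();
        while (true) {
            retained.add(new byte[1024 * 1024]); // 1 MB per iteration
        }
    }
}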
You need to set up the JVM's minimum and maximum heap memory:
set JAVA_OPTS="-Xms128m -Xmx256m"
Something like that, but with bigger values, e.g. 2G or 4G.
Later edit: As you all know, you can't force the JVM to run the garbage collector (even though you can ask it to), but there are some ways of convincing it to get rid of some items by nulling their references. Another thing to watch out for is database objects that might be lazily initialised. The error can also appear when you try to create a single object that exceeds the maximum heap size.
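That last point may be exactly what happens in the 250 MB case: a single allocation request larger than the heap can ever supply fails immediately, even though the heap is nearly empty. A contrived sketch of my own to illustrate:

public class BigAllocation {
    public static void main(String[] args) {
        // Run with a small heap, e.g.: java -Xmx256m BigAllocation
        // The heap is almost empty, yet this single ~4 GB request can never
        // be satisfied, so the JVM throws
        // java.lang.OutOfMemoryError: Java heap space straight away.
        long[] huge = new long[512 * 1024 * 1024];
        System.out.println(huge.length);
    }
}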
Another possibility is that some developer programmatically threw the OutOfMemoryError in some method for some misguided reason. When you reach that part of the code, that's what you get (search the project for it).
There are at least two reasons for the application to crash with an OutOfMemoryError.
Your Java heap is simply too small for the amount of data it needs to process. In that case you can either increase it as Matei suggested, or analyze a heap dump as Ajay suggested.
Your application leaks memory, which means that it leaves some unneeded data in memory after processing it. In that case increasing the heap will not help in the long run, and your options are either heap dump analysis (again) or a specialised memory leak detection tool such as Plumbr.
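To illustrate the second case with a contrived example of my own (the class and method names are made up): data that stays strongly reachable after the request that needed it has finished is a leak, and no -Xmx value will help indefinitely:

import java.util.HashMap;
import java.util.Map;

public class LeakyService {
    // Entries are added but never removed, so they stay reachable forever.
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    public static byte[] process(String requestId) {
        byte[] payload = new byte[512 * 1024]; // pretend this is parsed data
        CACHE.put(requestId, payload);         // leak: nothing ever evicts it
        return payload;
    }

    public static void main(String[] args) {
        for (long i = 0; ; i++) {
            process("request-" + i);           // used heap only ever grows
        }
    }
}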
Turned out the crash was caused by using the OpenJDK JRE rather than Oracle's JRE. I don't know exactly what the bug is in OpenJDK that makes it crash like this, but changing to Oracle's JRE ultimately solved the problem.
(I was using OpenJDK because I'm on a Linux computer that someone was using for open-source work before me. When I mentioned the crash to him he had the idea that that might be the cause. He was right.)
Do you have a 32-bit operating system without large memory support (PAE on Windows, hugemem kernel on Linux, etc.)? If so, you may be hitting the 2 GB per-process memory limit on 32-bit systems.
As a workaround, try setting the JVM parameter -Xss192k to allocate 192 KB of stack space per thread, and the parameter -Xmx1024m to use no more than 1 GB of heap.
Let's say I have a very large Java application that's deployed on Tomcat. Over the course of a few weeks, the server will run out of memory, application performance is degraded, and the server needs a restart.
Obviously the application has some memory leaks that need to be fixed.
My question is: if the application were deployed to a different server, would there be any change in memory utilization?
Certainly the services offered by the application server might vary in their memory utilization, and if the servers ship with different VMs -- i.e., if you're using J9 or JRockit with one server and Oracle's JVM with another -- there are bound to be differences. One relevant area that does matter is class loading: some app servers behave better than others when applications are redeployed or reconfigured. Warm-starting the application after a configuration change can result in serious memory leaks due to class loading problems on some server/VM combinations.
But none of these are really going to help you with an application that leaks. It's the program using the memory, not the server, so changing the server isn't going to affect much of anything.
There will probably be a slight difference in memory utilisation, but only in as much as the footprint differs between servlet containers. There is also a slight chance that you've encountered a memory leak with the container - but this is doubtful.
The most likely issue is that your application has a memory leak - in any case, the cause is more important than a quick fix - what would you do if the 'new' container just happens to last an extra week etc? Moving the problem rarely solves it...
You need to start analysing the application's heap memory to locate the source of the problem. If your application is crashing with an OOME, you can add this to the JVM arguments:
-XX:+HeapDumpOnOutOfMemoryError
If the performance is just degrading until you restart the container manually, you should get into the routine of triggering periodic heap dumps. A timeline of dumps is often the most helpful, as you can see which object stores just keep growing over time.
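If you cannot easily script jmap on the box, one way to build that timeline is to have the JVM dump its own heap on a schedule. The sketch below is my own example and assumes Java 7+ on a HotSpot/OpenJDK JVM and a writable /tmp directory; it uses the HotSpot-specific com.sun.management.HotSpotDiagnosticMXBean:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicHeapDumper {
    public static void main(String[] args) throws Exception {
        final HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    String file = "/tmp/heap-" + System.currentTimeMillis() + ".hprof";
                    // 'true' = dump only live objects (forces a GC first),
                    // which keeps the files smaller and easier to compare.
                    diagnostics.dumpHeap(file, true);
                    System.out.println("Wrote " + file);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, 0, 1, TimeUnit.HOURS);
        Thread.currentThread().join(); // keep this demo process alive
    }
}

In a real webapp you would start the same logic from a background thread or a context listener instead of a main method; the point is simply to end up with a dated series of .hprof files to compare.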
To analyse the dumps, you'll need a heap analysis tool:
JHat or IBM Heap Analyser or whatever your preference :)
Also see this question:
Recommendations for a heap analysis tool for Java?
Update:
And this may help (for obvious reasons):
How do I analyze a .hprof file?
Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS CentOS 5.x (64bit)
VPS server with 384 MB of memory
JAVA_OPTS: tried many combinations, including the following:
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I have created a blank Grails application, i.e. simply by running the command grails create-app, and then packaged it as a WAR.
I am running Tomcat on a VPS Server
When I simply start the Tomcat server, with no apps deployed, the free memory is about 236 MB and the used memory is about 156 MB.
When I deploy my "blank" application, the memory consumption spikes to 360 MB, and finally the Tomcat instance is killed as soon as it takes up all the free memory.
As you have seen, my app is as light as it can be.
Not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed down to this scenario which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine
The memory consumption of the Java process shot from 32 MB to 107 MB, but it did not crash and it remained within acceptable limits.
So the hunt for an answer continues... I wonder if something is wrong with my Linux box. Not sure what, though.
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple-blank app should work on my configuration.
It is a false economy to try to run a long-running Java-based application in the minimum possible memory. The garbage collector, and hence the application, will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out from a memory profiler, or by examining the GC logging.
Then set the Java heap size to be (say) twice the measured working set size or more. (This is the point I was trying to make above.)
Actually, there is another possible cause for your problems. Even though you are telling Java to use heaps of a given size, it may be that it is unable to do this. When the JVM requests memory from the OS, there are a couple of situations where the OS will refuse.
If the machine (real or virtual) that you are running the OS on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap, and permgen numbers together. There is the memory used by the executable and DLLs, memory used for I/O buffers, and possibly other things.
384 MB is pretty small. I'm running a small Grails app in a 512 MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200 MB. I'm running a 32-bit Linux and a 32-bit JDK, though; there's no sense wasting all that memory on 64-bit pointers if you don't have access to much memory anyway.
Can you try deploying a Tomcat monitoring webapp, e.g. psiprobe, and see where the memory is being used?
Having analyzed a light-load web application running in Tomcat using the JMX console, it turns out that the "PS Old Gen" space is growing slowly but constantly. It starts at 200 MB and grows by around 80 MB/hour.
CPU is not an issue; it runs at 0-1% on average. But somewhere it is leaking memory, so it will become unstable some days after deployment.
How do I find out what objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes the output of Java's heap profiler, hprof, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching a profiled Tomcat directly from the IDE (for localhost), or remote profiling using a provided JAR and some run configuration on the server.
I used it in a project for a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap if the additional "consumed" space disappears when a GC of the old generation happens.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC you use, and afterwards the used memory should go down a lot. Only if it does not go down after a GC should you be interested in what is still holding onto those objects.
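One quick way to check this from inside the application is to compare the used heap before and after asking for a collection. The sketch below is my own example; note that System.gc() is only a request, although HotSpot normally honours it. If the 'after' figure keeps climbing from one sample to the next, live data is accumulating; if it stays flat, the growth you saw in JMX was just uncollected garbage:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class LiveSetCheck {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            long before = memory.getHeapMemoryUsage().getUsed();
            System.gc(); // request (not force) a full collection
            long after = memory.getHeapMemoryUsage().getUsed();
            System.out.printf("used before GC: %,d bytes, after GC: %,d bytes%n",
                    before, after);
            Thread.sleep(60000); // sample once a minute
        }
    }
}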
JRockit Mission Control can analyze memory leaks while connected to the JVM, with no need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and then you can explore the references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it here for free