I currently have the following problem:
I have created a JavaFX application which runs on Fedora. When I start it, it first runs the cache builder, which loads 40 MB into memory. Then it just shows a screen, and a background thread keeps the cache up to date.
The problem is that the application closes after about 1.5 hours of user inactivity. I first suspected a memory issue, so I did some research.
If I do not use the application, the CPU and memory usage is as follows:
As you can see, garbage collection is triggered by the application and memory is freed. It stays at a level of 40 MB, which is perfect.
So if memory is not the issue, what else could it be?
I cannot find a JVM error log anywhere, so it doesn't look like the JVM is crashing. I am using Java 8u25.
If you need any more information, please let me know.
Any help is greatly appreciated!
Related
I'm debugging my app; it should run for several hours at a time when deployed.
I left the app running and found that it crashed after 4-5 hours with an OutOfMemoryError.
I'm on a Mac, OSX 10.8.2.
What I'm seeing in Activity Monitor is that the process has a stable real memory size (around 350 MB), but its virtual memory size is slowly increasing.
Is this normal? Could this be the origin of my problem?
Thanks as always for your support
I'm going to answer my own question to help anyone with the same issue...
After a lot of debugging and breaking my app apart into little chunks, it looks like the memory leak is created by a PGraphics object, but ONLY when its render mode is set to P3D.
I don't know why, and the issue itself isn't solved, but by finding the problem I could code a workaround.
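Roughly, the workaround amounts to something like the following Processing sketch (illustrative only, not my actual code): create the P3D buffer once in setup() and reuse it every frame, instead of creating new PGraphics objects repeatedly.

    // Illustrative sketch: allocate the P3D off-screen buffer once and reuse it.
    PGraphics pg;

    void setup() {
      size(400, 400, P3D);
      pg = createGraphics(400, 400, P3D);  // created once, never re-created
    }

    void draw() {
      pg.beginDraw();
      pg.background(0);
      pg.box(100);                         // stand-in for the real 3D drawing
      pg.endDraw();
      image(pg, 0, 0);                     // blit the reused buffer to the screen
    }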
Good bet that your application is accumulating data without ever releasing it. If you're using anything dynamically allocated, like HashMaps or ArrayLists, those are prime suspects. Depending on how big your code is, you may have to start cutting your codebase down and monitoring memory usage over 10-minute spans to find the point at which memory no longer accumulates.
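For illustration, this is the kind of pattern to look for (the class and field names here are made up, not taken from the app in question): an unbounded, long-lived collection versus one that evicts old entries.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class CacheSuspect {
        // Prime suspect: a long-lived map that is only ever added to,
        // so the heap grows until the app dies.
        private static final Map<Long, byte[]> history = new LinkedHashMap<Long, byte[]>();

        public static void record(long id, byte[] data) {
            history.put(id, data);          // never removed -> leak
        }

        // A bounded alternative: evict the eldest entry past a size limit.
        private static final int MAX_ENTRIES = 1000;
        private static final Map<Long, byte[]> bounded =
            new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };
    }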
We have a Java EE application (JSP/servlet, JDBC) running on an Apache Tomcat server. The response time degrades over time, and it degrades faster when the application is worked on continuously.
The response time is back to normal after a restart of the server.
I connected JConsole to the server and am attaching a screenshot of the heap memory usage, which goes up during intensive work; the garbage collector kicks in periodically and memory usage comes back down.
However, when testing towards the end, the response time does not go down even after kicking off the garbage collector manually.
I also checked the database connections and they seem to be closing properly.
Any help is appreciated.
Attach with jvisualvm, which ships with the JDK. It allows you to profile Tomcat and find where the time goes.
My guess right now is the database connections. Either they go stale or the pool runs dry.
How much slower are the response times? Have you done any profiling or logging to help identify which parts of your app are slower? It might be useful to set up a simple servlet and see if it also slows down the way the rest does. That would tell you whether it is Tomcat or something in your app that is slowing down.
Did you fine-tune your Tomcat memory settings? Perhaps you need to increase the perm gen size a bit.
e.g. -XX:MaxPermSize=512M
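If you go that route, the usual place for these settings is a bin/setenv.sh (or the JAVA_OPTS of your startup script); the values below are only placeholders for your own sizing:

    # bin/setenv.sh -- illustrative values, tune to your workload
    export CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m -XX:MaxPermSize=512m"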
You can know for sure if you can get a heap dump and load it into a tool like MemoryAnalyzer.
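If the JVM is still responsive, one way to capture such a dump is with the jmap tool that ships with the JDK (the PID and file name below are placeholders):

    jps -l                                          # find the Tomcat process id
    jmap -dump:live,format=b,file=heap.hprof <pid>  # write a binary heap dump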
I just can't figure out why I get this error. It doesn't always appear, but once it does, my application refuses to accept connections: it can't create new socket threads, nor the other threads my Java application needs (for some of them I use a ThreadPool).
top and htop show me that roughly 900 MB of 2048 MB is used,
and there is also enough heap memory, with about 200 MB free.
cat /proc/sys/kernel/threads-max outputs:
1196032
Also, everything worked fine a few days ago. It's a multiplayer online game, and we had over 200 users online (~500 threads in total). But now, even with 80 users online (~200 threads), my application breaks with this OutOfMemoryError after 10 minutes or a few hours. When that happens I restart the application, and again it only works for a short period of time.
I am also curious whether the JVM might act strangely on a VPS, since other VPSes on the same physical machine also run a JVM. Is that even possible?
Is there some sort of limit set by the provider that is not visible to me?
Or is this some sort of attack on the server?
I should also mention that around the time this error occurs, munin sometimes fails to log data for about 10 minutes. Looking at the graph images, there is just white space, as if munin were not working at all. And again, htop tells me there is about 1 GB of memory free at that time.
It might also be the case that I somehow introduced a bug in my application and started getting this error after an update. But even so, where do I begin debugging?
Try increasing the stack size (-Xss).
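The flag goes on the java command line; the value and the jar name below are only examples (note that whatever size you choose is reserved per thread, which matters with several hundred threads):

    java -Xss2m -jar game-server.jar    # game-server.jar is a placeholder for your launch command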
You seem to host your app on some remote VPS server. Are you sure the server, not your development box, has sufficient RAM? People very often confuse their own machine with the remote machine.
Because if Bash is running out of memory too, it is obviously a system memory issue, not an app memory issue. Post the results of free -m and ulimit -a on the remote machine to get more data.
If you suspect your provider of running some trojanized htop, free, and ulimit, you can test the real available memory with a simple C program that mallocs 70-80% of your available RAM and writes random bytes into it, in no more than 10 lines of ANSI C. You can compile it statically on your box to avoid any crooked libc and then transfer it with scp. That being said, I have heard rumors of VPS providers giving less than promised, but I have never encountered it.
Well, moving from a VPS to a dedicated server solved my problem.
Additionally, I found this:
https://serverfault.com/questions/168080/java-vm-problem-in-openvz
This might be exactly the case, because the VPS I had really did have a very low value for "privvmpages". It seems there really is some weird JVM behaviour on a VPS.
As I already wrote in the comments, at times even other programs (ls, top, htop, less) were not able to start, although enough memory was available/free.
And... the provider really did make some changes to their system.
And thank you everyone for the very fast replies and for helping me solve this mystery.
You should try the JRockit VM; it works perfectly on my OpenVZ VPS and consumes much less memory than the Sun/Oracle JVM.
Hi
I am debugging a Java application that fails when certain operations are invoked after the VM's memory has been swapped to disk. Since I have to wait about an hour for Windows to swap, I was wondering if there is a way to force Windows to swap.
You can create another application that allocates and accesses a large amount of memory. Assuming that you don't have enough memory for both to run, Windows will be forced to swap the inactive app to make room for the active app.
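A rough sketch of such a memory hog in Java (the chunk size, allocation pace, and the -Xmx you launch it with are assumptions to tune for your machine):

    import java.util.ArrayList;
    import java.util.List;

    // Run with a heap larger than free physical RAM, e.g. java -Xmx6g MemoryHog,
    // so the OS is forced to page other processes out to disk. It will eventually
    // die with its own OutOfMemoryError once -Xmx is exhausted, which is fine here.
    public class MemoryHog {
        public static void main(String[] args) throws InterruptedException {
            List<byte[]> chunks = new ArrayList<byte[]>();
            final int chunkSize = 64 * 1024 * 1024;      // 64 MB per allocation
            while (true) {
                byte[] chunk = new byte[chunkSize];
                for (int i = 0; i < chunk.length; i += 4096) {
                    chunk[i] = 1;                        // touch every page so it is committed
                }
                chunks.add(chunk);
                Thread.sleep(100);                       // allocate gradually
            }
        }
    }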
But before you do that, you might find help if you describe the exact problem that you're having with your app (including stack traces and sample code). The likelihood of swapping causing any problems other than delays is infinitesimally low.
We've been debugging this JBoss server problem for quite a while. After about 10 hours of work, the server goes into 100% CPU panic attacks and just stalls. During this time you cannot run any new programs, so you can't even run kill -QUIT to get a stack trace. These high 100% SYS CPU loads last 10-20 seconds and repeat every few minutes.
We have been working on this for quite a while. We suspect it has something to do with the GC, but cannot confirm it with a smaller program. We are running on i386 32-bit, RHEL5 and Java 1.5.0_10, using -client and the ParNew GC.
Here's what we have tried so far:
We limited the CPU affinity so we can actually use the server when the high load hits. With strace we see an endless loop of SIGSEGV followed by the signal return.
We tried to reproduce this with a Java program. It's true that SYS CPU% climbs high with WeakHashMap or when accessing null pointers. The problem was that fillInStackTrace took a lot of user CPU%, which is why we never reached 100% SYS CPU.
We know that after 10 hours of stress, GC goes crazy and full GC sometimes takes 5 seconds. So we assume it has something to do with memory.
jstack during that period showed all threads as blocked. pstack during that time occasionally showed a MarkSweep stack trace, so we can't be sure about this either. Sending SIGQUIT yielded nothing: Java dumped the stack trace AFTER the SYS% load period was over.
We're now trying to reproduce this problem with a small fragment of code so we can ask Sun.
If you know what's causing it, please let us know. We're open to ideas and we are clueless, any idea is welcome :)
Thanks for your time.
Thanks to everybody for helping out.
Eventually we upgraded (only half of the Java servers) to JDK 1.6 and the problem disappeared. Just don't use 1.5.0_10 :)
We managed to reproduce these problems just by accessing null pointers (which boosts SYS instead of USR CPU and stalls the entire Linux machine).
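For reference, the reproduction was essentially nothing more than a tight loop of the following shape (a from-memory sketch, not the exact test case we prepared for Sun):

    // Hammers NullPointerException creation in a loop. On the affected
    // 1.5.0_10 machines this drove SYS CPU up; elsewhere it mostly burns
    // user CPU filling in stack traces.
    public class NpeLoop {
        static Object target = null;

        public static void main(String[] args) {
            while (true) {
                try {
                    target.hashCode();                   // always dereferences null
                } catch (NullPointerException expected) {
                    // swallow and repeat
                }
            }
        }
    }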
Again, thanks to everyone.
If you're certain that GC is the problem (and it does sound like it based on your description), then adding the -XX:+HeapDumpOnOutOfMemoryError flag to your JBoss settings might help (in JBOSS_HOME/bin/run.conf).
You can read more about this flag here. It was originally added in Java 6, but was later back-ported to Java 1.5.0_07.
Basically, you will get a "dump file" if an OutOfMemoryError occurs, which you can then open in various profiling tools. We've had good luck with the Eclipse Memory Analyzer.
This won't give you any "free" answers, but if you truly have a memory leak, then this will help you find it.
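In run.conf that would look something like this (the dump path is just an example):

    # JBOSS_HOME/bin/run.conf -- append to the existing JAVA_OPTS line
    JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/jboss-heap.hprof"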
Have you tried profiling the application? There are some good profilers that can run on production servers. They should tell you whether GC is running into trouble, and with which objects.
I had a similar issue with JBoss (JBoss 4, Linux 2.6) last year. I think in the end it did turn out to be related to an application bug, but it was definitely very hard to figure out. I would keep trying to send a 'kill -3' to the process, to get some kind of stack trace and figure out what is blocking. Maybe add logging statements to see if you can figure out what is setting it off. You can use 'lsof' to figure out what files it has open; this will tell you if there is a leak of some resource other than memory.
Also, why are you running JBoss with -client instead of -server? (Not that I think it will help in this case, just a general question).
You could try adding the command-line option -verbose:gc, which prints GC activity and heap sizes to stdout. Pipe stdout to a file and see if the high CPU times line up with a major GC.
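For JBoss that could look roughly like this (the extra flags and file names are illustrative):

    # add to JAVA_OPTS in run.conf, then redirect the server's stdout
    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
    ./run.sh > gc-and-stdout.log 2>&1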
I remember having similar issues with JBoss on Windows. Periodically the CPU would go to 100%, and the memory usage reported by Windows would suddenly drop to something like 2.5 MB, far smaller than JBoss could possibly run in, and then build itself back up after a few seconds, as if the entire server had come down and restarted itself. I eventually tracked my issue down to a prepared statement cache in Apache Commons that never expired.
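For anyone hitting the same thing: if the cache in question is Commons DBCP's prepared statement pool, it can be capped roughly like this (the setter names are from DBCP's BasicDataSource; the connection URL and the limit are placeholders):

    import org.apache.commons.dbcp.BasicDataSource;

    public class DataSourceConfig {
        public static BasicDataSource create() {
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:mysql://localhost/mydb");   // placeholder connection details
            ds.setPoolPreparedStatements(true);         // enable statement pooling...
            ds.setMaxOpenPreparedStatements(100);       // ...but bound the cache (illustrative limit)
            return ds;
        }
    }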
If it does seem to be a memory issue, then you can start taking periodic heap dumps and comparing them, or use something like the JProbe memory profiler to track everything.