I've got a Java program that gets quite RAM-hungry over time (for as yet undetermined reasons). So, to prevent it going crazy with the page file, I run it with the parameter -Xmx1536m, so that Java will just crash once the program has taken up one and a half gigs of RAM. Better that than having my whole computer lock up.
The trouble is, it's not working. When I check Task Manager, sometimes Java is hogging 2.8GB of RAM, or higher! The program should be crashing well before it gets to that point!
Why isn't it?
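For reference, here is a minimal check (class name is mine) of what -Xmx actually caps, namely the Java heap, as opposed to the whole-process figure that Task Manager reports, which also includes permgen/metaspace, thread stacks, the JIT code cache, and native/direct buffers:

```java
public class HeapLimitCheck {
    public static void main(String[] args) {
        // maxMemory() reflects the -Xmx cap; Task Manager's number is the
        // whole process and can legitimately be much larger than this.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        long usedHeapMb = (Runtime.getRuntime().totalMemory()
                - Runtime.getRuntime().freeMemory()) / (1024 * 1024);
        System.out.println("Heap cap (-Xmx): " + maxHeapMb + " MB");
        System.out.println("Heap in use:     " + usedHeapMb + " MB");
    }
}
```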
TL;DR - A user has an error (ORA-01438: value larger than specified precision allows for this column). I can't recreate it locally because when my machine runs the multithreaded app, only one of ten threads runs at a time, in sequence. Furthermore, running it often results in my machine running out of heap even with 8GB allocated to the heap, and then I happen to hit a NullPointerException instead of the user's issue.
I'm attempting to debug a multithreaded legacy Java app (JDK 1.6) written years ago by people who are no longer around. It attempts to insert some data into an Oracle DB. The app usually runs on a WebLogic 11g server and takes about 5 minutes to finish running the calculations. However, when debugging locally, the threads don't work concurrently; they take turns on my local machine. This makes the running time go from the aforementioned 5 minutes to ~1 hour, and it still manages to run out of heap (I gave it 8GB) or throw a NullPointerException if I'm lucky, but that isn't the business user's error. I've thought about cutting it down to use only one thread since it's taking turns anyway, but after a week of touching this, the business impact is becoming real and I can't just keep hitting it with a hammer.
This may be a long shot given I haven't provided any of the code, but does anyone have experience with a similar issue? Specifically, why are the threads taking turns?
EDIT: the user's error is a constraint violation, so I think the app is modifying the input data and doing something like adding extra precision.
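To illustrate what I mean by extra precision (a hypothetical sketch; the class, method, and the NUMBER(5,2) column shape are invented, not taken from the app): Oracle raises ORA-01438 when a value's integer digits exceed the column's precision minus its scale, so a pre-insert check could look like this:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class PrecisionCheck {
    // Returns true if v, rounded to the column's scale, fits in a
    // NUMBER(precision, scale) column: its integer digits must not
    // exceed precision - scale.
    public static boolean fitsNumber(BigDecimal v, int precision, int scale) {
        BigDecimal rounded = v.setScale(scale, RoundingMode.HALF_UP);
        return rounded.precision() - rounded.scale() <= precision - scale;
    }

    public static void main(String[] args) {
        // NUMBER(5,2) holds values up to 999.99
        System.out.println(fitsNumber(new BigDecimal("123.456"), 5, 2)); // true
        System.out.println(fitsNumber(new BigDecimal("1234.5"), 5, 2));  // false -> ORA-01438
    }
}
```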
The problem: the application's 10 threads work in sequence rather than concurrently, and the code potentially contains a memory leak, so the app crashes before it reaches the code path that produces the constraint-violation exception the business user is encountering.
Edit 2: Could the threads trading off, rather than running concurrently, be preventing garbage collection from keeping up on my local machine? Even so, that still doesn't explain why I receive a different error than the business user when I'm lucky enough not to run out of heap.
You may well be correct in your instinct that the "threads" are working against you and that your predecessor simply left you with an unworkable design which he could never manage to fix.
"The eventual recipient," in all cases, "is the [Oracle ...] database." No matter how the application presents requests to it, the only thing that matters is the requests the database receives. Evidently the threads are colliding with one another, and it is therefore probable that there's no reason for having multiple threads at all.
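A hypothetical sketch of the pattern I suspect (all names invented; a single shared JDBC Connection guarded by one lock is the classic culprit): when every worker synchronizes on the same object, ten threads take turns instead of running concurrently.

```java
public class SerializedWorkers {
    // Stand-in for one shared resource, e.g. a single JDBC Connection.
    private static final Object SHARED_CONNECTION = new Object();
    static int completed = 0;

    static void doWork() {
        synchronized (SHARED_CONNECTION) { // only one thread gets in at a time
            completed++;                   // stand-in for the Oracle insert
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[10];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(new Runnable() { // JDK 1.6 style, no lambdas
                public void run() { doWork(); }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println("completed: " + completed);
    }
}
```

If the real code holds such a lock for the whole calculation, the fix is either per-thread connections (e.g. a pool) or dropping to one thread, as the asker suggested.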
I currently have the following problem:
I have created a Java (FX) Application which runs on Fedora. When I start it, it first runs the cache builder. This loads 40 MB into the memory. Then it just shows a screen and in a background thread it keeps the cache up to date.
The problem is that the application closes after 1.5 h of user inactivity. I first thought of a memory issue, so I did some research.
If I do not use the application for anything else, the CPU and memory usage looks as follows (monitoring chart not included): garbage collection is called by the application and the memory is freed. It stays at a level of 40 MB, which is perfect.
So if memory is not the issue, what else could it be?
I cannot find a JVM error log (hs_err file) anywhere, so it doesn't look like the JVM is crashing. I am using Java 8u25.
If you need any more information, please let me know.
Any help is greatly appreciated!
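One thing I plan to try (a sketch; the class name is mine): since a normal exit, via System.exit() or the last non-daemon thread finishing, produces no hs_err log, a shutdown hook plus a default uncaught-exception handler can at least record which kind of exit is happening:

```java
public class ExitDiagnostics {
    public static void install() {
        // Runs on any orderly shutdown (System.exit, last non-daemon thread
        // ending, SIGTERM) -- but not on a hard JVM crash.
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                System.err.println("JVM shutting down at " + new java.util.Date());
            }
        }));
        // Logs exceptions that silently kill a thread.
        Thread.setDefaultUncaughtExceptionHandler(
                new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                System.err.println("Uncaught exception in thread " + t.getName());
                e.printStackTrace();
            }
        });
    }
}
```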
I have an application that runs fine the first time it is run. But if I leave the application, force stop it in Manage Applications, and come back in, it crashes with an OutOfMemoryError. Why would it work the first time and run out of memory on subsequent runs?
There is really no way to know with the information you provide. However, it is entirely possible: most likely a process that runs when the app loads is leaking memory massively.
Hi
I am debugging a Java application that fails when certain operations are invoked after VM memory is swapped to disk. Since I have to wait about an hour for Windows to swap, I was wondering if there is a way of forcing Windows into swapping.
You can create another application that allocates and accesses a large amount of memory. Assuming that you don't have enough memory for both to run, Windows will be forced to swap the inactive app to make room for the active app.
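A sketch of such an allocator (class name and sizes invented): give it a target bigger than your free physical RAM, run it with a matching -Xmx, and the OS has to page other processes out to satisfy it. Touching each page matters, since untouched allocations may never be committed.

```java
import java.util.ArrayList;
import java.util.List;

public class MemoryHog {
    static final int CHUNK_BYTES = 64 * 1024 * 1024; // 64 MB per chunk

    static List<byte[]> allocate(long totalMb) {
        List<byte[]> chunks = new ArrayList<byte[]>();
        long allocated = 0;
        while (allocated < totalMb * 1024L * 1024L) {
            byte[] chunk = new byte[CHUNK_BYTES];
            for (int i = 0; i < chunk.length; i += 4096) {
                chunk[i] = 1; // touch every page so it is really committed
            }
            chunks.add(chunk);
            allocated += CHUNK_BYTES;
        }
        return chunks;
    }

    public static void main(String[] args) throws InterruptedException {
        // e.g. java -Xmx13g MemoryHog 12288  (12 GB) on an 8 GB machine
        List<byte[]> hog = allocate(Long.parseLong(args[0]));
        System.out.println("Holding " + hog.size() + " chunks; sleeping...");
        Thread.sleep(Long.MAX_VALUE); // keep the memory resident
    }
}
```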
But before you do that, you might find help if you describe the exact problem that you're having with your app (including stack traces and sample code). The likelihood of swapping causing any problems other than delays is infinitesimally low.
An application that we run has recently started sporadically crashing with a message about "java.lang.OutOfMemoryError: requested 8589934608 bytes for Chunk::new. Out of swap space?".
I've looked around on the net, and everywhere the suggestions are limited to:
revert to a previous version of Java
fiddle with the memory settings
use client instead of server mode
Reverting to a previous version implies that the new Java has a bug, but I haven't seen any indication of that. Memory isn't an issue at all; the server has 32GB available, Xmx is set to 20GB and Xms to 10GB. I can't see the JVM running out of the remaining 12GB (less the amount given to the handful of other processes on the machine). And we're stuck with server mode due to the nature of the application and environment.
When I look at the memory and CPU usage for the application, I see constant memory usage for the whole day, but then suddenly, right before it dies, CPU usage goes up to 100% and memory usage goes from X, to X + 2GB, to X + 4GB, to (sometimes) X + 8GB, to JVM death. It would appear that there is some cycle of repeated array resizing going on during JIT compilation.
I've now seen the error occur with the above 8GB request and also 16GB requests. All times, the method being compiled when this happens is the same. It is a simple method that has non-nested loops, no recursion, and uses methods on objects that return static member fields or instance member fields directly with little computation.
So I have 2 questions:
Does anybody have any suggestions?
Can I test out whether there is a problem compiling this specific method on a test environment, without running the whole application, invoking the JIT compiler directly? Or should I start up the application and tell it to compile methods after a much smaller call count (like 2) to force it to compile the method almost instantly instead of at a random point in the day?
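On the second question: HotSpot has standard flags for this (a sketch of a launch command; the jar and main class are placeholders). -XX:CompileThreshold lowers the invocation count before a method is compiled, and -XX:+PrintCompilation logs each method as it is compiled, so the suspect method should compile within seconds of startup instead of at a random point in the day:

```
java -server -XX:CompileThreshold=2 -XX:+PrintCompilation -cp app.jar com.example.Main
```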
@StephenC:
The JVM is 1.6.0_20 (previously 1.6.0_0), running on Solaris. I know it's the compilation that is causing the problem, for a couple of reasons:
ps in the seconds leading up to it shows that a java thread with id corresponding to the compiler thread (from jstack) is taking up 100% of the CPU time
jstack shows the issue is in JavaThread "CompilerThread1" daemon [_thread_in_native, id=34, ...]
The method mentioned in jstack is always the same one, and it is one we wrote. If you look at sample jstack output you will know what I mean, but for obvious reasons I can't provide code samples or filenames. I will say that it is a very simple method: essentially a handful of null checks, 2 for loops that do equality checks and possibly assign values, and some simple method calls afterwards. All in all, maybe 40 lines of code.
This issue has happened 2 times in 2 weeks, though the application runs every day and is restarted daily. In addition, the application wasn't under heavy load either time.
You can exclude a particular method from being JIT'ed by creating a file called .hotspot_compiler and putting it in your application's working directory. Simply add an entry to the file in the following format:
exclude com/amir/SomeClass someMethod
And the console output from the compiler will look like:
### Excluding compile: com.amir.SomeClass::someMethod
For more information, read this. If you're not sure what your application's working directory is, use the
-XX:CompileCommandFile=/my/excludefile/location/.hotspot_compiler
in your Java start script or command line.
Alternatively, if you're not sure it's the JIT compiler's fault and want to see if you can reproduce the problem without any JIT'ing, run your Java process with -Xint (interpreted mode only).
Okay, I did a quick search and found a thread on the Sun Java forums that discusses this. Hope it helps.
Here's another entry on Oracle's forum describing a similar sporadic crash. There is one answer where the poster solved the problem by reconfiguring the GC's survivor ratio.