I have an application that runs fine the first time it is run. But if I leave the application, force stop it in Manage Applications, and come back in, it crashes with OutOfMemoryError. Why would it work the first time and run out of memory on subsequent launches?
There is really no way to know for certain with the information you have provided. However, it is entirely possible: most likely some code that runs when the app loads is leaking memory heavily.
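For illustration only (not necessarily the asker's bug), one classic Android leak pattern is stashing an Activity Context in a static field; the Activity and its entire view hierarchy can then never be garbage-collected, and repeated launches keep piling retained objects onto the heap:

```java
import android.content.Context;

public class LeakyHolder {
    // A static field lives as long as the process itself. Storing an
    // Activity (or anything that holds one, like a View or a listener)
    // here keeps the whole Activity from ever being garbage-collected.
    private static Context sLeakedContext;

    public static void init(Context context) {
        // BUG: if 'context' is an Activity, it is leaked.
        sLeakedContext = context;

        // Safer alternative: keep only the application context, which is
        // a process-wide singleton anyway and cannot be leaked further.
        // sLeakedContext = context.getApplicationContext();
    }
}
```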
I've got a Java program that gets quite RAM-hungry over time (for as yet undetermined reasons). So, to prevent it going crazy with the page file, I run it with the parameter -Xmx1536m, so that Java will just crash once the program has taken up one and a half gigabytes of RAM. Better that than having my whole computer lock up.
The trouble is, it's not working. When I check Task Manager, sometimes Java is hogging 2.8GB of RAM, or higher! The program should be crashing well before it gets to that point!
Why isn't it?
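One relevant fact here: -Xmx caps only the Java heap, while Task Manager shows the whole process footprint (heap plus permgen/metaspace, thread stacks, JIT code cache, direct buffers, and other native allocations), so the process can legitimately grow past 1.5 GB without the heap ever hitting its cap. A minimal sketch that prints what the JVM itself sees:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reflects the -Xmx setting (approximately).
        System.out.println("Heap cap (-Xmx): "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");

        // The MemoryMXBean separates heap from non-heap (metaspace, code
        // cache). Neither figure includes native memory or thread stacks,
        // which is why the OS-level process size can far exceed -Xmx.
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mx.getHeapMemoryUsage();
        MemoryUsage nonHeap = mx.getNonHeapMemoryUsage();
        System.out.println("Heap used: " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("Non-heap used: " + nonHeap.getUsed() / (1024 * 1024) + " MB");
    }
}
```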
TL;DR - A user has an error (ORA-01438: value larger than specified precision allows for this column). I can't recreate it locally because when my machine runs the multithreaded app, only one of ten threads runs at a time, in sequence. Furthermore, running it often exhausts the heap even with 8GB allocated, and when it doesn't, I hit a NullPointerException instead of the user's issue.
I'm attempting to debug a multithreaded legacy Java app (JDK 1.6) written years ago by people who are no longer around. It attempts to insert some data into an Oracle DB. The app usually runs on a WebLogic 11g server and takes about 5 minutes to finish its calculations. However, when debugging locally, the threads don't work concurrently; they take turns on my machine. This pushes the running time from the aforementioned 5 minutes to around an hour, and it still manages to run out of heap (I gave it 8GB), or throws a NullPointerException if I'm lucky, but that isn't the business user's error. I've thought about cutting it down to use only one thread since the threads are taking turns anyway, but after a week on this, the business impact is becoming real and I can't just keep hitting it with a hammer.
This may be a long shot given that I haven't provided any of the code, but does anyone have experience with a similar issue? Specifically, why are the threads taking turns?
EDIT: The user's error is a constraint violation, so I think the app is modifying the input data, doing something like adding extra precision.
The problem: The application's 10 threads work in sequence rather than concurrently, and the code potentially contains a memory leak, so the app crashes before it reaches the constraint-violation exception that the business user is encountering.
Edit 2: Could the threads trading off, rather than running concurrently, somehow be preventing garbage collection from keeping up on my local machine? Even so, that still doesn't explain why I get a different error than the business user when I'm lucky enough not to run out of heap.
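On the ORA-01438 itself, a hedged aside with a hypothetical schema (a NUMBER(5,2) column called AMOUNT and illustrative connection details): the error fires when a value's integer digits exceed what the column's precision allows (excess fractional digits are merely rounded), so any code path that multiplies or rescales the input before the INSERT is a suspect:

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PrecisionDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection and table; AMOUNT is NUMBER(5,2),
        // i.e. at most 3 digits before the decimal point.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger");
             PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO payments (amount) VALUES (?)")) {

            // Fits: 999.99 has only 3 integer digits.
            ps.setBigDecimal(1, new BigDecimal("999.99"));
            ps.executeUpdate();

            // ORA-01438: 9999.90 has 4 integer digits, exceeding NUMBER(5,2).
            // A bug that rescales the user's input upstream would produce
            // exactly this failure.
            ps.setBigDecimal(1, new BigDecimal("999.99").multiply(BigDecimal.TEN));
            ps.executeUpdate();
        }
    }
}
```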
You may well be correct in your instincts which tell you that the "threads" are working against you and that your predecessor simply left you with an unworkable design which he could never manage to fix.
"The eventual recipient," in all cases, "is the [Oracle ...] database." No matter what the application does in presenting requests to it, the only thing that matters is the requests that it receives. Obviously the clients are colliding with themselves, and it is therefore probable that there's no reason for having multiple threads at all.
I have a Java .jar file that I launch on an AWS instance in detached mode, so it keeps running when I exit the SSH session.
The app does some network stuff and is expected to run for days until it finishes its task.
I have added logging all over the app, including at the end of the main method. I also wrapped everything in a global try/catch and added logging to the catch block.
Still, after some days I SSH in and see that the app has just stopped running. No exceptions, and the main method did not complete, because the log statement at the end never fired. It seems the process was simply killed mid-work. Sometimes it runs for 5 hours, sometimes for 3-4 days.
I have no idea what could be causing this. I expect the Java process to run until it finishes or crashes. Am I missing something?
Update: it is an AWS t2.micro, I think (the free-tier one), running Ubuntu 18.04.3 LTS.
You need to monitor the server and the application. The first thing to look at is your instance's CloudWatch statistics for any CPU or memory spikes. If you find one, you will know what you need to fix to keep your application running on a micro instance. For further reading:
Monitoring Your Instances Using CloudWatch
Alternatively, you can collect and dump the Java process's statistics regularly while the application is running. This can give insight into heap, stack, and CPU usage. Check this SO post for further details:
How do I monitor the computer's CPU, memory, and disk usage in Java?
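A minimal sketch of that second suggestion, logging heap statistics from inside the app itself (class and thread names here are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MemoryStatsLogger {
    public static void start() {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor(r -> {
                    Thread t = new Thread(r, "memory-stats-logger");
                    t.setDaemon(true); // don't keep the JVM alive just for logging
                    return t;
                });
        scheduler.scheduleAtFixedRate(() -> {
            Runtime rt = Runtime.getRuntime();
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long maxMb = rt.maxMemory() / (1024 * 1024);
            // On a t2.micro (~1 GB RAM), heap usage climbing steadily toward
            // the cap would fit the silent-death symptom described above.
            System.out.println("heap used: " + usedMb + " MB / " + maxMb + " MB");
        }, 0, 1, TimeUnit.MINUTES);
    }
}
```

If the process is in fact being terminated by the kernel's OOM killer, the evidence would be in the system log (e.g. dmesg or /var/log/syslog) rather than in the application's own logs, since the JVM gets no chance to run any catch blocks.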
I have a very strange issue: a Java web app (Spring Boot 1.5) that runs inside a Docker container.
At some point the app starts consuming CPU quite heavily.
So I was thinking that the app itself has a bug of some sort.
BUT
If I remove the app from the load balancer, so that it no longer accepts any connections, it continues to consume a lot of CPU even though it is not being accessed at all.
I continue to see a lot of GC log entries from the app in the log file.
It seems that the JVM keeps running GC on the young generation every 300 ms, even when the app should be completely idle (and it is idle, as there is nothing in the log file)!
The app itself is just a website using Spring Boot. Nothing really special there (no scheduled tasks or anything of the sort).
Any idea what might be going on here? Could it be Docker-related?
Thanks in advance
OK, it turns out this had nothing to do with Docker. It was a bug in the app, creating many (unnecessary) short-lived objects that needed GC.
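For illustration, a contrived sketch (not the actual bug) of how a loop creating short-lived objects keeps the young-generation collector busy even with no traffic:

```java
import java.util.ArrayList;
import java.util.List;

public class AllocationChurn {
    public static void main(String[] args) throws InterruptedException {
        // Run with GC logging to watch the minor collections, e.g.:
        //   java -verbose:gc AllocationChurn
        while (true) {
            // Allocate a burst of objects that die immediately; the young
            // generation fills up and is collected over and over.
            List<byte[]> garbage = new ArrayList<>();
            for (int i = 0; i < 1_000; i++) {
                garbage.add(new byte[10_000]);
            }
            garbage.clear();
            Thread.sleep(10);
        }
    }
}
```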
Here's how my application works:
The launcher activity starts a foreground service that monitors clipboard changes and fires up the launcher activity every time a specific kind of string is copied. I'm new to Java programming; I've tried to use all the best practices in the application (using worker threads and keeping the UI thread from hiccupping), and so far everything is butter smooth. The problem is RAM consumption: on a fresh start of the app (after the service is started), Android's running-processes list reports 24M of memory consumption. Here's where the erroneous behavior lies:
- The Memory Monitor in Android Studio reports something else
- So does the adb shell dumpsys meminfo mypackage command
Screenshots of both have been attached
These behaviors are incomprehensible to me. 50M is a lot of RAM. Also, each time the launcher activity is launched by the service, the app consumes around 1M more memory than it was already using. Can anyone help me debug this?
Thanks
The problem is likely a result of how Android handles Services and Activities running in the same application process: as long as a (started) Service is running in the process, the "memory priority" of the whole process is elevated above that of processes that are only running (background) Activities. However, since Activities are never recycled by Android even under memory pressure (contrary to some statements in the official docs), this effectively keeps your Activity alive much longer than necessary. This is essentially a shortcoming of Android's process model.
If your memory usage drops to a few megabytes after you force-kill your application process (and Android subsequently relaunches your Service), or if the memory usage is different depending on whether you leave your activity by pressing the home or back button, this confirms that you are facing this problem.
If you really depend on your Service continuously running in the background and want to minimize memory usage, you could try to move it to its own process (where memory-intensive UI resources like Views in Activities would never be loaded).
Of course, this also increases overhead; you might be better off by just keeping your implementation the way it is. Android will still kill your process under memory pressure, and will later relaunch your Service (but not your Activities), which will minimize your memory usage without any intervention.
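If you want to verify the reported numbers from inside the app, here is a sketch using standard Android APIs (where and how often you call it is up to you) that logs your own process's PSS, the same figure dumpsys meminfo reports:

```java
import android.app.ActivityManager;
import android.content.Context;
import android.os.Debug;
import android.os.Process;
import android.util.Log;

public final class MemoryCheck {
    private MemoryCheck() {}

    /** Logs this process's PSS in kilobytes, comparable to dumpsys meminfo. */
    public static void logOwnPss(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        Debug.MemoryInfo[] info =
                am.getProcessMemoryInfo(new int[] { Process.myPid() });
        // getTotalPss() returns the proportional set size in kB: private
        // memory plus this process's share of memory mapped into several
        // processes at once.
        Log.d("MemoryCheck", "PSS: " + info[0].getTotalPss() + " kB");
    }
}
```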
Save the heap dump as an HPROF file and convert it to a format that a Java profiler can read. Then you will be able to see what is using so much RAM.
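A sketch of how that can look on Android (the file path is illustrative): dump the heap programmatically with android.os.Debug, then convert it with the SDK's hprof-conv tool so a standard Java profiler such as Eclipse MAT can open it:

```java
import android.os.Debug;
import java.io.IOException;

public final class HeapDumper {
    private HeapDumper() {}

    /** Writes an Android-format HPROF snapshot of the current heap. */
    public static void dump() throws IOException {
        // Path is illustrative; it must be writable by the app.
        Debug.dumpHprofData("/sdcard/myapp-heap.hprof");
        // Then, on the development machine:
        //   adb pull /sdcard/myapp-heap.hprof
        //   hprof-conv myapp-heap.hprof myapp-heap-converted.hprof
        // The converted file opens in standard Java profilers (e.g. MAT).
    }
}
```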