The main Activity in my Android application uses a fair amount of memory, which means that on a less powerful phone it is susceptible to being killed off when it is not in the foreground. Normally this is fine, but it also happens when I am still inside my application but have a different activity at the top of the stack (such as a preference activity).
Obviously it's a problem if my application is killed while the user is still running it. Is there any way to disable the OS's ability to kill off the application for low memory problems?
Thanks.
No, there's no way. Your options are:
Read about the Activity lifecycle and Activity and Task Design, and implement them correctly and efficiently (a minimal state-saving sketch follows this list).
Use a Service.
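Since the kill itself cannot be prevented, the practical approach is to let your Activity survive being recreated. A minimal sketch, assuming a hypothetical piece of transient state called draftText:

```java
import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {

    // Hypothetical transient state that would otherwise be lost when the process is killed.
    private String draftText;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            // The process was killed and the Activity recreated: restore what we saved.
            draftText = savedInstanceState.getString("draft_text");
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        // Called before the Activity can be destroyed; persist anything you need back.
        outState.putString("draft_text", draftText);
    }
}
```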
Sadly, it can't be done. The Linux kernel will kill your application if it threatens the system's ability to function, and your application cannot prevent this. If it could, I'm sure you can see the security implications.
Sorry.
Related
I have an android application that performs image analysis, which is managed with an IntentService - the process takes a couple of seconds each time and works accurately and quickly.
But when the process is repeated in the app around 50 times (as illustrated), it begins to get very slow, to the point where the app and device become unusable. When the device is restarted and the app opened again, it runs as usual.
Inspecting with Android Studio I can see that each time I run the analysis that the memory allocation for the app goes up and up every time by around 1MB. So it is clearly running out of memory when it crashes.
When finishing the analysis and moving to the result screen, I set these flags to try to clean up background activities:
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK);
which has had minimal effect, and I understand the IntentService manages its own shutdown. So I am not sure what else I can do to reduce the memory allocation, or at least stop it growing.
Further details:
The application uses a camera implementation based on Google's Camera2 API
The analysis is done with a C++ library through the IntentService
It seems you are not handling resources (variables, image files, etc.) properly, and this is creating memory leaks in your application.
You can find advice on handling memory leaks in this blog post written by Johan, or see this SO question:
Avoid memory leaks on Android
If the memory leaks are being generated in the C++ library, you can easily find the leaking resource by running it in debug mode.
After the result activity you should call the garbage collector, as suggested by Grisgram, and close any unused resources.
It would be good if you could add the stack trace to the question.
Try using LeakCanary (https://github.com/square/leakcanary) to find out what is causing the leak, and use a WeakReference (https://developer.android.com/reference/java/lang/ref/WeakReference.html) so that the object can be garbage collected when necessary. It may also be that your device simply does not have enough memory to hold 50 high-resolution images at the same time. If you are keeping them in memory, try lowering their resolution, and make sure you are recycling bitmaps (https://developer.android.com/topic/performance/graphics/manage-memory.html)
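As a rough sketch of the WeakReference and bitmap-recycling suggestions (the holder class and field names are just illustrative):

```java
import android.graphics.Bitmap;
import java.lang.ref.WeakReference;

class FrameHolder {

    // Hold the large bitmap weakly so the garbage collector may reclaim it under pressure.
    private WeakReference<Bitmap> lastFrameRef;

    void setLastFrame(Bitmap newFrame) {
        Bitmap previous = (lastFrameRef != null) ? lastFrameRef.get() : null;
        if (previous != null && previous != newFrame && !previous.isRecycled()) {
            previous.recycle(); // release the native pixel memory we no longer need
        }
        lastFrameRef = new WeakReference<>(newFrame);
    }

    Bitmap getLastFrame() {
        return (lastFrameRef != null) ? lastFrameRef.get() : null; // may be null if collected
    }
}
```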
I would also consider using a ThreadPoolExecutor instead of an IntentService; executors are much more configurable: https://developer.android.com/reference/java/util/concurrent/ThreadPoolExecutor.html
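A minimal sketch of what an executor-based replacement could look like; the analyzeFrame() call is a hypothetical stand-in for the call into your C++ library:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class AnalysisExecutor {

    // A single core thread keeps analyses from overlapping; tune the sizes for your workload.
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, 2, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

    void submit(final byte[] frame) {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                analyzeFrame(frame); // placeholder for the native analysis call
            }
        });
    }

    void shutdown() {
        executor.shutdown(); // stop accepting new work when the screen goes away
    }

    private void analyzeFrame(byte[] frame) {
        // Hypothetical stand-in for the C++ analysis.
    }
}
```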
I wanted to add something to Ali786's answer.
IntentServices are not really the best choice for work that repeats itself: each time you call the service, the new job goes into a queue.
IntentServices work like HandlerThreads. They have their own MessageQueue, so after you start the service with an Intent, the new job waits for the previous one to finish.
Normal services, which run on the UI thread, run their work in parallel.
I am not sure whether you do anything after sending the analysis result to your activity, but if you do, the IntentService will not die and the next job will have to wait. IntentServices are also not the best choice for communicating with your UI thread; an AsyncTask might be better in your case. If you give us some more information (code), maybe we can give you a more accurate answer.
Hope this helps!
One possible cause: if your IntentService is not actually being destroyed after its job completes, it will keep holding on to memory.
Check the running-services list in Settings to see whether your app's service is still running.
I have a JavaFX application, when the user closes the window, I want to destroy all of the JavaFX related resources and only have a tray icon, where the user can then reopen the application.
I have many background threads running, which should stay running when the GUI is closed. I have tried using Platform.exit(), but it has no impact on the RAM usage of the program.
What is the best way to accomplish this? My goal is to reduce the impact on the system from my program as much as possible when the application is closed, but still running all of the background threads.
One option is to run the application as a separate process, launching the process when you want to create the application and exiting the process when the application is no longer needed (so completing a full application lifecycle). That way you will be absolutely sure that the application is not consuming any resources when it is not being used, because it won't be running.
How you would accomplish the launching and any communication between your tray service and the application would be up to you. You can research various mechanisms and, if you decide to go this route, ask some new follow up questions on accomplishing certain aspects of the task.
Some example routes you could look at are ProcessBuilder, which is admittedly a pretty finicky and horrible API, or the new Process API updates that will be available with Java 9. If you wish to ensure that at most a single instance of the application process is ever used, there are solutions for that. If you need to send a signal to the running application process, you could use something like RMI, or run a basic HTTP REST server in your application and send messages to it.
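As a rough sketch of the ProcessBuilder route (the main class name and classpath here are assumptions, not values from your project):

```java
import java.io.File;
import java.io.IOException;

public class AppLauncher {

    // Starts the JavaFX application in its own JVM, so exiting it returns all memory to the OS.
    public static Process launchApp() throws IOException {
        String javaBin = System.getProperty("java.home")
                + File.separator + "bin" + File.separator + "java";
        ProcessBuilder builder = new ProcessBuilder(
                javaBin,
                "-cp", System.getProperty("java.class.path"),
                "com.example.MyJavaFxApp"); // hypothetical main class of the GUI
        builder.inheritIO(); // forward the child's stdout/stderr for easier debugging
        return builder.start();
    }
}
```

The tray process can keep the returned Process around and call destroy() on it, or simply let the child exit on its own when the user closes the window.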
As an aside, years ago there was some ongoing work on building multi-process JVMs, but the idea never saw wide uptake for Java. Most modern browsers, such as Chrome and Firefox, do use multi-process architectures; the linked articles give some insight into this architecture, some of its potential implications, and why it is used for those applications.
Before going such a route, I would advise you to ensure that such an approach is truly necessary for your application (as pointed out by user npace in comments).
Here's how my application works:
The Launcher activity starts a service in the foreground which monitors clipboard changes and fires up the launcher activity every time a specific kind of string is copied. I'm new to Java programming; I've tried to follow best practices in the application (using worker threads and keeping the UI thread from hiccupping) and so far everything is butter smooth. The problem is RAM consumption: on a fresh start of the app (after the service is started), the app reports 24M memory consumption in Android's running-processes list. Here's where the puzzling behavior lies:
- The Memory Monitor in Android Studio reports something else
- So does the adb shell dumpsys meminfo mypackage command
Screenshots of both have been attached
I can't make sense of these numbers, and 50M is a lot of RAM. Also, whenever the Launcher activity is launched by the Service, the app consumes around 1M more memory than it was already using. Can anyone help me debug this?
Thanks
The problem is likely a result of how Android handles Services and Activities running in the same application process: as long as a (started) Service is running in the process, the "memory priority" of the whole process is elevated above other processes that are only running (background) Activities.
However, since Activities are never recycled by Android even under memory pressure (contrary to some statements in the official docs), this effectively keeps your Activity alive much longer than necessary. This is essentially a shortcoming of Android's process model.
If your memory usage drops to a few megabytes after you force-kill your application process (and Android subsequently relaunches your Service), or if the memory usage is different depending on whether you leave your activity by pressing the home or back button, this confirms that you are facing this problem.
If you really depend on your Service continuously running in the background and want to minimize memory usage, you could try to move it to its own process (where memory-intensive UI resources like Views in Activities would never be loaded).
Of course, this also increases overhead; you might be better off by just keeping your implementation the way it is. Android will still kill your process under memory pressure, and will later relaunch your Service (but not your Activities), which will minimize your memory usage without any intervention.
Save the heap dump as an HPROF file and convert it (for example with the hprof-conv tool from the Android SDK) into a format that a standard Java profiler can read.
Then you will be able to see what is using so much RAM.
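If it helps, the dump can also be captured programmatically; a small sketch (the class and output path are just examples):

```java
import android.content.Context;
import android.os.Debug;
import java.io.File;
import java.io.IOException;

class HeapDumper {

    // Writes an HPROF snapshot that can be converted (hprof-conv) and opened in a profiler.
    static void dumpHeap(Context context) {
        File out = new File(context.getExternalFilesDir(null), "clipboard-app.hprof");
        try {
            Debug.dumpHprofData(out.getAbsolutePath());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```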
I need to test serialization/deserialization of my application in the following cases:
the app was in the background for a long time (idle mode) and was killed by the GC;
the app was in the background and was killed by the GC because of a lack of resources (memory/CPU).
On some devices this can be simulated by launching one or two games.
But on quad-core devices with 1 GB of memory it is very hard, requires 4-10 heavy games, and takes a lot of time.
I tried to implement a demo that emulates load on system resources:
create bitmap arrays
create object arrays
launch a lot of services
launch a lot of activities
But no result: the application under test still works (even on old devices), and my demo itself crashes with an OutOfMemoryException.
How can I simulate high load in a demo application?
Thanks!
Well, the "GC" is actually abused "Out Of Memory Killer" and that kills the applications as if by signal 9. In rooted device you should be able to invoke kill(1) command from shell or kill(2) function from native library (I am not sure whether it's bound to Java) and kill your application whenever you want.
The system normally calls onStop in the Activity when it's going to background and than kills the application without further warning and without chance to react. So if you leave the application and kill it, it's appropriate simulation of it being OOM-killed.
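As a rough, root-free approximation for debug builds only, you could kill your own process shortly after the Activity is stopped; the Activity name and delay here are arbitrary:

```java
import android.app.Activity;
import android.os.Handler;
import android.os.Process;

public class KillableActivity extends Activity {

    @Override
    protected void onStop() {
        super.onStop();
        // Debug-only: once we are in the background, kill the process with no further
        // callbacks, much like the low-memory killer would.
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                Process.killProcess(Process.myPid());
            }
        }, 2000);
    }
}
```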
Install any memory cleaner from the Play Store; I use one called easymemorycleaner.
Once you have cleaned the memory, any variables held only in memory will be gone. Data you have persisted (for example via Parcelable in the saved state) will survive.
I want to reduce the CPU usage/ROM usage/RAM usage - generally, all system resources that my app uses - who doesn't? :)
For this reason I want to split the preferences window from the rest of the application and let the preferences window run as an independent program.
The preferences program should write to a properties file (not a problem at all) and send an "update signal" to the main program, that is, call the update method (which I wrote) found in the Main class.
How can I call the update method in the Main program from the preferences program?
To put it another way, is there a way to build a preferences window that takes system resources only while the window is shown?
Is this approach of separating programs and letting them talk to each other (somehow) the right way to speed up my programs?
What you're describing sounds like Premature Optimisation. If you're writing something other than a toy application, it's important to be confident that your optimisations are actually addressing a real problem. Is your program running slowly? If so, have you run it through a profiler or otherwise identified where the poor performance is happening?
If you have identified that what you want to do will address your performance issue, I suggest you look at running the components concurrently in different threads, not different processes. Then your components can avoid blocking each other, you will be able to take advantage of multi-core processors and you do not take on the complexity and performance overhead of inter-process communication over network sockets and the like.
You can communicate back and forth using sockets. Here's a tutorial on how to do something similar.
Unfortunately, I don't think this is going to help you minimize CPU usage, RAM, etc. If anything, it might increase CPU and RAM usage, because you need to run two JVMs instead of one. Unless you have some incredibly complicated preferences window, it is not likely taking so many resources that you need to worry about it. By adding the network communication, you are just adding more complexity without adding any benefit.
Edit:
If you have read the book Filthy Rich Clients, one of its main points is that rich effects do not need to be resource intensive. Most of the book is devoted to showing how to add cool effects to an app without taking a lot of resources, and throughout it the authors are careful to time everything to show what takes a long time and what doesn't. This is crucial when making your app less resource hungry. Write your app, see what feels slow, add timing code to those particular items, speed up those particular parts of the code, and check with your timing code that they actually got faster. Rinse and repeat. Otherwise you may be optimizing code that makes no difference, and without timing it you won't even know whether your "optimized" code is really faster.
Others have mentioned loading the properties window in a separate thread. It's important to remember that Swing has a single thread, called the EDT, that does all of the painting of pixels to the screen. Any code that causes pixels on the screen to change should be called from the EDT, and thus should not run on a separate thread. So if you have something that may take a while to run (perhaps a web service call or some expensive computation), launch a separate thread off the EDT, and when it finishes, run code on the EDT to do the UI update. Utilities such as SwingWorker make this easier. Setting a dialog visible should not happen on a separate thread, but it may make sense to build the underlying data structures in one if they are time-consuming to build.
Using SwingWorker is one of many valuable ideas in Filthy Rich Clients for making UIs feel more responsive. Using the ideas in this book I have taken some fairly resource-intensive UIs and made them use hardly any resources at all.
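A minimal SwingWorker sketch along those lines, assuming a hypothetical prefs.properties file and a label to update:

```java
import java.io.FileInputStream;
import java.util.Properties;
import javax.swing.JLabel;
import javax.swing.SwingWorker;

class PreferencesLoader extends SwingWorker<Properties, Void> {

    private final JLabel statusLabel; // component to update once loading finishes

    PreferencesLoader(JLabel statusLabel) {
        this.statusLabel = statusLabel;
    }

    @Override
    protected Properties doInBackground() throws Exception {
        // Runs off the EDT: slow I/O and parsing belong here.
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("prefs.properties")) {
            props.load(in);
        }
        return props;
    }

    @Override
    protected void done() {
        // Runs back on the EDT: safe to touch Swing components.
        try {
            statusLabel.setText("Loaded " + get().size() + " preferences");
        } catch (Exception e) {
            statusLabel.setText("Failed to load preferences");
        }
    }
}
```

Kick it off from the EDT with new PreferencesLoader(label).execute();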
You could create a ServerSocket in the main window and have the preferences app connect to it with a regular Socket; the protocol could be extremely simple. But... I think you should really look at the second approach: building a preferences window that takes system resources only while it is shown.
To do that, don't build the window and its resources until the user performs the Preferences action; then save your file (or pass the content to the main app) and dispose of all the preference window's resources by making every reference to them unreachable. The garbage collector will handle the rest.
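A small sketch of that create-on-demand, dispose-afterwards pattern (the dialog contents are placeholders):

```java
import java.awt.Frame;
import javax.swing.JDialog;
import javax.swing.JLabel;

class PreferencesAction {

    // Build the dialog only when the user asks for it, and release it right after.
    static void showPreferences(Frame owner) {
        JDialog dialog = new JDialog(owner, "Preferences", true);
        dialog.add(new JLabel("...preference controls go here...")); // placeholder content
        dialog.pack();
        dialog.setLocationRelativeTo(owner);
        dialog.setVisible(true); // modal: blocks until the user closes the dialog

        // Save to the properties file here, then free the window's native resources.
        dialog.dispose();
    }
}
```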
Maybe you could use some sort of directory watcher like this or maybe implement some sort of semaphore.
Honestly, I think you should be able to solve the problem with a menu item that the user can access. Once the user saves the preferences, they are written to a file, and the application loads the values from that file whenever it needs them.
If your system is operating slowly or hanging, you might consider using threads, or increasing the number of threads.
Actually, as others have explained, you can use sockets for inter-process communication.
However, that won't reduce your overall CPU/RAM usage at all (it might even slightly worsen your resource usage).
For your case, you can launch the Preferences window in a different thread rather than a different process.
A thread is lighter for the OS to handle and adds none of the complexity of inter-process communication.
Nobody seems to have mentioned D-Bus, which is available to developers on a Linux system. I guess that's no good if you're trying to make a Windows/cross-platform application, but D-Bus is a ready-made application-communication platform. It helps address issues such as:
Someone else might already be using the port you're trying to use. There's no way for your client application (the "Preferences" window, I guess) to know whether the thing listening on that port is your main application or just something else that happens to be there, so you'll have to do some sort of handshake and implement a conflict-resolution mechanism.
It's not going to be obvious to either the future you, or anyone who comes to maintain your app why you're on the port you are. This might not seem important, but communicating on Socket 5574 just doesn't seem as neat to me as communicating on channel org.yourorganisation.someapp .
Firewalls (as I think someone's already said) can be a little over-zealous
Also, it's worth getting your hand in on DBUS - it's useful for communicating with a whole bunch of other applications such as the little popup notification thing you'll find in recent Ubuntu distributions, or certain instant messaging clients, etc.
You can read up on what I'm talking about (and maybe correct me on some of the things I've said) here: http://www.freedesktop.org/wiki/Software/dbus . It looks like they're working on making it happen on Windows too, which is nice.