Android: Can a memory leak happen on the same thread? - java

I am new to handling memory leak situations, but one thing I have noticed is that all the examples of memory leaks involve the activity context being referenced from a different thread.
So I need to know whether a memory leak can also happen when an object reference is held on the same thread, for example when the activity reference is stored somewhere in other classes.
Thanks in advance!

A memory leak is a situation in which objects that are no longer used are still present in the heap, but the garbage collector is unable to remove them from memory, so they are retained unnecessarily.
Memory leaks can happen on the same thread as well, for example when a method stores data in a static variable that it does not need in subsequent calls.
E.g.: in the code below we store the generated numbers in a static list even though we never need those numbers again in later calls.
import java.util.ArrayList;
import java.util.List;

public class MemoryLeak {
    public static List<Double> list = new ArrayList<>();

    public void doSomething() {
        for (int i = 0; i < 10000000; i++) {
            list.add(Math.random()); // every generated value stays reachable through the static list
        }
        System.out.println("Debug Point 2");
    }

    public static void main(String[] args) {
        System.out.println("Debug Point 1");
        new MemoryLeak().doSomething();
        System.out.println("Debug Point 3");
        // The numbers are never used again, but the GC cannot reclaim them
        // because the static list keeps them reachable.
    }
}
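To connect this back to the original question about activities: the same kind of leak happens entirely on the main thread whenever an Activity reference ends up in a field that outlives the Activity. A minimal sketch of my own (the class and field names are hypothetical, not from the answer above):

import android.app.Activity;
import android.os.Bundle;

public class LeakyActivity extends Activity {
    // A static field lives as long as the class is loaded, so this reference keeps the
    // destroyed Activity, and everything it points to, from being garbage collected.
    private static Activity leakedInstance;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        leakedInstance = this; // same thread, no background work, still a leak after rotation or finish()
    }
}

No second thread is involved here; the leak comes purely from the reference outliving the Activity.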

Related

Does a while(true) loop always cause an out-of-memory error?

I always thought that a while (true) {...any code...} loop would always result in an out-of-memory error.
But as I go deeper into Java it seems it might not be like that.
I'm not able to confirm it, but if we have a while(true) loop that only does calculations, we should not expect an out-of-memory error, only very poor CPU performance, right?
On the other hand, if we keep requiring more memory, an out-of-memory error is expected.
I have the 3 cases below:
1. Calculations only (I think no memory is being allocated under the hood).
2. An ever-growing ArrayList, which looks like an obvious out-of-memory error.
3. Always instantiating a new ArrayList with the new keyword. I don't know if it causes an out-of-memory error, because of the garbage collector.
I'm not testing on my PC because I only have one; I hope someone has the knowledge.
Code
import java.util.*;

public class HelloLeak {
    // Calculations only. Memory leak?
    public static void outofmemo1() {
        long i = 0;
        while (true) {
            i = i * i;
        }
    }

    // Adding an infinite number of objects, memory leak confirmed.
    public static void outofmemo2() {
        int i = 0;
        List<Integer> l = new ArrayList<>();
        while (true) {
            l.add(i);
        }
    }

    // Creating an infinite number of ArrayList objects: will the garbage collector
    // clear the unused objects, or will we get a memory leak?
    public static void outofmemo3() {
        List<Integer> l = new ArrayList<>();
        while (true) {
            l = new ArrayList<>();
        }
    }

    public static void main(String[] args) {
        outofmemo1();
        //outofmemo2();
        //outofmemo3();
    }
}
1. Will do absolutely nothing except run in an endless loop.
2. Will crash with an OutOfMemoryError, because you always add a new element to the list until the heap is full.
3. Will behave like 1., but memory usage may spike up to, say, 2 GB; then the GC will run, see that the old lists are unreachable, and remove them. After that it will spike again, and so on (a way to watch this is sketched below).
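A rough way to watch that spike-and-collect behaviour, sketched by me (not part of the original answer; the exact numbers depend on the heap size and the collector in use):

import java.util.ArrayList;
import java.util.List;

public class GcSawtooth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        List<Integer> l = new ArrayList<>();
        long iterations = 0;
        while (true) {
            l = new ArrayList<>(); // the previous list becomes unreachable and can be collected
            l.add(1);
            if (++iterations % 10_000_000 == 0) {
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("used heap: " + usedMb + " MB"); // rises, drops after a GC, rises again
            }
        }
    }
}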

SoftReference is not getting cleared by Java GC

I was trying to understand SoftReferences in Java, which as far as I understand ensure that softly referenced objects are cleared from memory before a StackOverflowError is thrown.
import java.lang.ref.SoftReference;

public class Temp {
    public static void main(String[] args) {
        Temp temp2 = new Temp();
        SoftReference<Temp> sr = new SoftReference<Temp>(temp2);
        temp2 = null;
        Temp temp = new Temp();
        temp.infinite(sr);
    }

    public void infinite(SoftReference<Temp> sr) {
        try {
            infinite(sr);
        } catch (StackOverflowError ex) {
            System.out.println(sr.get());
            System.out.println(sr.isEnqueued());
        }
    }
}
However, the outcome of the above was:
test.Temp#7852e922
false
Can someone explain to me why the object was not cleared by the GC? How can I make it work?
It looks like you may have some confusion between StackOverflowError and OutOfMemoryError; they are different errors. A StackOverflowError happens when there is no space left in the call stack, while an OutOfMemoryError occurs when the JVM is unable to allocate heap space for a new object. Your code leads to a StackOverflowError, which means the stack is full, not the heap. I believe there was still enough heap space for your softly referenced object, which is why the GC did not clear it.
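One way to actually see the soft reference being cleared, as a minimal sketch of my own (not the poster's code): put pressure on the heap instead of the call stack. Soft references are guaranteed to be cleared before the JVM throws OutOfMemoryError, so sr.get() should return null here:

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftReferenceDemo {
    public static void main(String[] args) {
        SoftReference<byte[]> sr = new SoftReference<>(new byte[1024]);
        List<byte[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[10 * 1024 * 1024]); // exhaust the heap
            }
        } catch (OutOfMemoryError e) {
            hog.clear();                  // free some space so printing cannot fail
            System.out.println(sr.get()); // expected: null, the referent was cleared before the OOM
        }
    }
}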

Can Java 8 lambdas cause memory leaks?

I found this code in a blog and wanted to understand why it would cause a memory leak, if it has the potential to cause one at all.
class Test {
    public static void main(String[] args) {
        Runnable runnable = new EnterpriseBean()
                .runnable();
        runnable.run(); // Breakpoint here
    }
}

@ImportantDeclaration
@NoMoreXML({
    @CoolNewValidationStuff("Annotations"),
    @CoolNewValidationStuff("Rock")
})
class EnterpriseBean {
    Object[] enterpriseStateObject =
            new Object[100_000_000];

    Runnable runnable() {
        return () -> {
            System.out.println("Hello from: " + this);
        };
    }
}
The provided code does not have a memory leak, and the blog entry from which it is drawn does not say otherwise. What it says is that the object returned by EnterpriseBean.runnable() has much (much) larger state than you might naively expect, and that that state cannot be garbage collected before the Runnable itself is.
However, there is nothing in that code that would prevent the Runnable from eventually being collected, and at that time all the extra state will be eligible for collection, too.
So no, the code is not an example of a memory leak, and does not suggest a means to produce one.
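For contrast, here is a hypothetical variant of my own (LeanBean is not from the blog): if the lambda does not reference this or any instance field, it captures only the small local value it needs, so the returned Runnable does not keep the huge array reachable once the bean itself is gone:

class LeanBean {
    Object[] enterpriseStateObject = new Object[100_000_000];

    Runnable runnable() {
        String name = this.toString(); // capture a small String instead of the whole bean
        return () -> System.out.println("Hello from: " + name);
    }
}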

I was expecting OutOfMemoryError but I get StackOverflowError in Java

package com.atul;

public class StackOverFlow {
    public StackOverFlow() {
        callStackOverFlow();
    }

    public void callStackOverFlow() {
        StackOverFlow st = new StackOverFlow();
    }

    public static void main(String[] args) {
        StackOverFlow st2 = new StackOverFlow();
    }
}
In the above program I was trying to get an OutOfMemoryError, but I get a StackOverflowError instead. As far as I know, all objects are created on the heap. Here we are recursing through the constructor, yet I still get the StackOverflowError.
Why?
You run out of stack (which has a maximum depth around 10,000 for simple cases) long before you run out of heap memory. This is because every thread has its own stack so it must be a lot smaller than the shared heap.
If you want to run out of memory, you need to use up the heap faster.
public class OutOfMemoryMain {
    byte[] bytes = new byte[100 * 1024 * 1024];
    // The field initializer constructs another instance, which constructs another one, and so on.
    // Every instance in the chain stays reachable from the stack, so each recursion pins another
    // 100 MB in the heap, and the JVM runs out of heap long before it runs out of stack.
    OutOfMemoryMain main = new OutOfMemoryMain();

    public static void main(String... args) {
        new OutOfMemoryMain();
    }
}
The stack size in the JVM is limited per thread and configurable via -Xss.
If you want to generate an OOM, I would suggest looping infinitely, instantiating a new object on each iteration, and storing it in a collection (otherwise the garbage collector will destroy each instance).
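A minimal sketch of that suggestion (my own code; run it with a small heap such as -Xmx64m to see it fail quickly):

import java.util.ArrayList;
import java.util.List;

public class OomLoop {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>(); // keeping every object reachable defeats the GC
        while (true) {
            retained.add(new byte[1024 * 1024]);   // eventually throws OutOfMemoryError
        }
    }
}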
Before the heap fills up with objects and the program aborts with an out-of-memory error, you run out of stack space, which stores the method calls, and hence you get a StackOverflowError.
An OutOfMemoryError would come only once your objects had filled up the heap space.

Reduced performance when using multithreading in Java

I am new to multithreading, and I have to write a program that uses multiple threads to increase its efficiency. On my first attempt, what I wrote produced just the opposite result. Here is what I have written:
class ThreadImpl implements Callable<ArrayList<Integer>> {
    // Bloom filter instance for one of the tables
    BloomFilter<Integer> bloomFilterInstance = null;
    // Data member for complete data access.
    ArrayList<ArrayList<UserBean>> data = null;
    // Stores the result of the testing
    ArrayList<Integer> result = null;
    int tableNo;

    public ThreadImpl(BloomFilter<Integer> bloomFilterInstance,
                      ArrayList<ArrayList<UserBean>> data, int tableNo) {
        this.bloomFilterInstance = bloomFilterInstance;
        this.data = data;
        result = new ArrayList<Integer>(this.data.size());
        this.tableNo = tableNo;
    }

    public ArrayList<Integer> call() {
        int[] tempResult = new int[this.data.size()];
        for (int i = 0; i < data.size(); ++i) {
            tempResult[i] = 0;
        }
        ArrayList<UserBean> chkDataSet = null;
        for (int i = 0; i < this.data.size(); ++i) {
            if (i == tableNo) {
                // do nothing;
            } else {
                chkDataSet = new ArrayList<UserBean>(data.get(i));
                for (UserBean toChk : chkDataSet) {
                    if (bloomFilterInstance.contains(toChk.getUserId())) {
                        ++tempResult[i];
                    }
                }
            }
            this.result.add(new Integer(tempResult[i]));
        }
        return result;
    }
}
In the above class there are two data members, data and bloomFilterInstance, and the references are passed in from the main program. So there is actually only one instance of each, and all the threads access them simultaneously.
The class that launches the threads is (a few irrelevant details have been left out, so you can assume all variables etc. are declared):
class MultithreadedVrsion {
    public static void main(String[] args) {
        if (args.length > 1) {
            ExecutorService es = Executors.newFixedThreadPool(noOfTables);
            List<Callable<ArrayList<Integer>>> threadedBloom =
                    new ArrayList<Callable<ArrayList<Integer>>>(noOfTables);
            for (int i = 0; i < noOfTables; ++i) {
                threadedBloom.add(new ThreadImpl(eval.bloomFilter.get(i),
                        eval.data, i));
            }
            try {
                List<Future<ArrayList<Integer>>> answers = es.invokeAll(threadedBloom);
                long endTime = System.currentTimeMillis();
                System.out.println("using more than one thread for bloom filters: "
                        + (endTime - startTime) + " milliseconds");
                System.out.println("**Printing the results**");
                for (Future<ArrayList<Integer>> element : answers) {
                    ArrayList<Integer> arrInt = element.get();
                    for (Integer i : arrInt) {
                        System.out.print(i.intValue());
                        System.out.print("\t");
                    }
                    System.out.println("");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
I did the profiling with JProfiler; here (http://tinypic.com/r/wh1v8p/6) is a snapshot of the CPU threads, where red shows blocked, green runnable, and yellow waiting. The problem is that the threads are running one at a time, and I do not know why.
Note: I know that this is not thread-safe, but I know I will only be doing read operations for now and just want to analyse the raw performance gain that can be achieved; later I will implement a better version.
Can anyone please tell me what I have missed?
One possibility is that the cost of creating threads is swamping any possible performance gains from doing the computations in parallel. We can't really tell if this is a real possibility because you haven't included the relevant code in the question.
Another possibility is that you only have one processor / core available. Threads only run when there is a processor to run them. So your expectation of a linear speedup with the number of threads can only be achieved (in theory) if there is a free processor for each thread.
Finally, there could be memory contention due to the threads all attempting to access a shared array. If you had proper synchronization, that would potentially add further contention. (Note: I haven't tried to understand the algorithm to figure out if contention is likely in your example.)
My initial advice would be to profile your code, and see if that offers any insights.
And take a look at the way you are measuring performance to make sure that you aren't just seeing some benchmarking artefact; e.g. JVM warmup effects.
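Regarding the second point, a trivial check of my own for how many cores the JVM can actually see:

public class CoreCount {
    public static void main(String[] args) {
        // May be lower than the physical core count, e.g. inside a container with CPU limits.
        System.out.println("processors available to the JVM: "
                + Runtime.getRuntime().availableProcessors());
    }
}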
That process looks CPU-bound (no I/O, database calls, network calls, etc.). I can think of two explanations:
How many CPUs does your machine have? How many is Java allowed to use? If the threads are competing for the same CPU, you've added coordination work and placed more demand on the same resource.
How long does the whole method take to run? For very short runs, the additional work of context switching between threads can overpower the actual work. The way to deal with this is to make the job longer. Also, run it many times in a loop and don't count the first few iterations (they are like a warm-up and aren't representative).
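A minimal timing harness along those lines, sketched by me (work() is a stand-in for the real task, not code from the question):

public class TimingHarness {
    // Stand-in for the code you actually want to measure.
    static long work() {
        long s = 0;
        for (int i = 0; i < 1_000_000; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        long sink = 0;
        for (int i = 0; i < 20; i++) {
            sink += work(); // warm-up iterations, not measured
        }
        long start = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            sink += work(); // measured iterations
        }
        double avgMs = (System.nanoTime() - start) / 100 / 1_000_000.0;
        System.out.println("avg ms per call: " + avgMs + " (sink=" + sink + ")");
    }
}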
Several possibilities come to mind:
There is some synchronization going on inside bloomFilterInstance's implementation (which is not given).
There is a lot of memory allocation going on, e.g. what appears to be an unnecessary copy of an ArrayList when chkDataSet is created, and the use of new Integer instead of Integer.valueOf. You may be running into overhead costs for memory allocation (see the sketch after this list).
You may be CPU-bound (if bloomFilterInstance#contains is expensive) and threads are simply blocking for CPU instead of executing.
A profiler may help reveal the actual problem.
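For the allocation point, a rough sketch of how the loop in call() could look with the defensive copy and the explicit boxing removed (this reuses the fields of the ThreadImpl class above, so it is an excerpt rather than a standalone program, and it assumes the threads really only read data):

public ArrayList<Integer> call() {
    int[] tempResult = new int[this.data.size()]; // int arrays are already zero-initialised
    for (int i = 0; i < this.data.size(); ++i) {
        if (i != tableNo) {
            for (UserBean toChk : data.get(i)) { // iterate the shared list directly, no copy
                if (bloomFilterInstance.contains(toChk.getUserId())) {
                    ++tempResult[i];
                }
            }
        }
        result.add(Integer.valueOf(tempResult[i])); // or rely on autoboxing: result.add(tempResult[i])
    }
    return result;
}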
