I was wondering what happens when you try to catch a StackOverflowError and came up with the following method:
class RandomNumberGenerator {
    static int cnt = 0;

    public static void main(String[] args) {
        try {
            main(args);
        } catch (StackOverflowError ignore) {
            System.out.println(cnt++);
        }
    }
}
Now my question:
Why does this method print '4'?
I thought maybe it was because System.out.println() needs 3 segments on the call stack, but I don't know where the number 3 comes from. When you look at the source code (and bytecode) of System.out.println(), it normally would lead to far more method invocations than 3 (so 3 segments on the call stack would not be sufficient). If it's because of optimizations the Hotspot VM applies (method inlining), I wonder if the result would be different on another VM.
Edit:
As the output seems to be highly JVM specific, I get the result 4 using
Java(TM) SE Runtime Environment (build 1.6.0_41-b02)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
Explanation why I think this question is different from Understanding the Java stack:
My question is not about why there is a cnt > 0 (obviously because System.out.println() requires stack size and throws another StackOverflowError before something gets printed), but why it has the particular value of 4, respectively 0,3,8,55 or something else on other systems.
I think the others have done a good job at explaining why cnt > 0, but there's not enough details regarding why cnt = 4, and why cnt varies so widely among different settings. I will attempt to fill that void here.
Let
X be the total stack size
M be the stack space used when we enter main the first time
R be the stack space increase each time we enter into main
P be the stack space necessary to run System.out.println
When we first get into main, the space left over is X - M. Each recursive call takes up R more memory. So for 1 recursive call (1 more than the original), the memory use is M + R. Suppose that StackOverflowError is thrown after C successful recursive calls, that is, M + C * R <= X and M + (C + 1) * R > X. At the time of the first StackOverflowError, there is X - M - C * R memory left.
To be able to run System.out.println, we need P amount of space left on the stack. If it so happens that X - M - C * R >= P, then 0 will be printed. If P requires more space, then we remove frames from the stack, gaining R memory at the cost of cnt++.
When println is finally able to run, X - M - (C - cnt) * R >= P. So if P is large for a particular system, then cnt will be large.
Let's look at this with some examples.
Example 1: Suppose
X = 100
M = 1
R = 2
P = 1
Then C = floor((X-M)/R) = 49, and cnt = ceiling((P - (X - M - C*R))/R) = 0.
Example 2: Suppose that
X = 100
M = 1
R = 5
P = 12
Then C = 19, and cnt = 2.
Example 3: Suppose that
X = 101
M = 1
R = 5
P = 12
Then C = 20, and cnt = 3.
Example 4: Suppose that
X = 101
M = 2
R = 5
P = 12
Then C = 19, and cnt = 2.
Thus, we see that both the system (M, R, and P) and the stack size (X) affect cnt.
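The formulas above can be sketched directly in code. The variables X, M, R and P match the definitions above; this is only a model of the analysis, not a measurement of a real JVM:

```java
public class StackModel {
    // C = number of successful recursive calls before the overflow (floor division)
    static long calls(long x, long m, long r) {
        return (x - m) / r;
    }

    // cnt = frames that must unwind before P units are free (ceiling division)
    static long cnt(long x, long m, long r, long p) {
        long leftover = x - m - calls(x, m, r) * r;
        long deficit = p - leftover;
        return deficit <= 0 ? 0 : (deficit + r - 1) / r;
    }

    public static void main(String[] args) {
        // Reproduces Examples 1 through 4 below
        System.out.println(calls(100, 1, 2) + " " + cnt(100, 1, 2, 1));   // 49 0
        System.out.println(calls(100, 1, 5) + " " + cnt(100, 1, 5, 12));  // 19 2
        System.out.println(calls(101, 1, 5) + " " + cnt(101, 1, 5, 12));  // 20 3
        System.out.println(calls(101, 2, 5) + " " + cnt(101, 2, 5, 12));  // 19 2
    }
}
```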
As a side note, the amount of space the catch block itself requires to start does not matter: as long as there is not enough space for catch, cnt will not increase, so it has no visible effect.
EDIT
I take back what I said about catch. It does play a role. Suppose it requires T amount of space to start. cnt starts to increment when the leftover space is greater than T, and println runs when the leftover space is greater than T + P. This adds an extra step to the calculations and further muddies up the already muddy analysis.
EDIT
I finally found time to run some experiments to back up my theory. Unfortunately, the theory doesn't seem to match up with the experiments. What actually happens is very different.
Experiment setup:
Ubuntu 12.04 server with default java and default-jdk. -Xss starting at 70,000 bytes, increasing in 1-byte increments up to 460,000.
The results are available at: https://www.google.com/fusiontables/DataSource?docid=1xkJhd4s8biLghe6gZbcfUs3vT5MpS_OnscjWDbM
I've created another version where every repeated data point is removed. In other words, only points that are different from the previous are shown. This makes it easier to see anomalies. https://www.google.com/fusiontables/DataSource?docid=1XG_SRzrrNasepwZoNHqEAKuZlHiAm9vbEdwfsUA
This is a victim of a bad recursive call. Since you are wondering why the value of cnt varies: it is because the stack size depends on the platform. Java SE 6 on Windows has a default stack size of 320k in the 32-bit VM and 1024k in the 64-bit VM. You can read more here.
You can run with different stack sizes and you will see different values of cnt before the stack overflows:
java -Xss1024k RandomNumberGenerator
You don't see the value of cnt printed multiple times, even though it is sometimes greater than 1, because the print statement itself is also throwing the error; you can verify this by debugging through Eclipse or another IDE.
You can change the code to the following to debug per-statement execution if you'd prefer:
static int cnt = 0;

public static void main(String[] args) {
    try {
        main(args);
    } catch (Throwable ignore) {
        cnt++;
        try {
            System.out.println(cnt);
        } catch (Throwable t) {
        }
    }
}
UPDATE:
As this is getting a lot more attention, let's have another example to make things clearer:
static int cnt = 0;

public static void overflow() {
    try {
        overflow();
    } catch (Throwable t) {
        cnt++;
    }
}

public static void main(String[] args) {
    overflow();
    System.out.println(cnt);
}
We created another method named overflow to do the bad recursion and removed the println statement from the catch block so it doesn't start throwing another set of errors while trying to print. This works as expected. You can try putting a System.out.println(cnt); statement after cnt++ above and compiling. Then run it multiple times. Depending on your platform, you may get different values of cnt.
This is why we generally do not catch errors: the resulting behavior is mysterious and platform-dependent.
The behavior depends on the stack size (which can be set manually using -Xss). The stack size is architecture-specific. From the JDK 7 source code:
// Default stack size on Windows is determined by the executable (java.exe
// has a default value of 320K/1MB [32bit/64bit]). Depending on Windows version, changing
// ThreadStackSize to non-zero may have significant impact on memory usage.
// See comments in os_windows.cpp.
So when the StackOverflowError is thrown, the error is caught in the catch block. Here println() needs another stack frame, so the error is thrown again. This gets repeated.
How many times does it repeat? Well, it depends on when the JVM decides the stack is no longer overflowing, and that depends on the stack size of each function call (difficult to find) and on -Xss. As mentioned above, the default total size and the size of each function call (which depends on the memory page size, etc.) are platform-specific. Hence the different behavior.
Running java with -Xss4M gives me 41. Hence the correlation.
I think the number displayed is the number of times the System.out.println call throws the StackOverflowError.
It probably depends on the implementation of println and the number of nested calls made inside it.
As an illustration:
The call to main() triggers the StackOverflowError at call depth i.
The call to main at depth i-1 catches the error and calls println, which triggers a second StackOverflowError. cnt is incremented to 1.
The call to main at depth i-2 now catches the error and calls println. Inside println a method is called, triggering a third error. cnt is incremented to 2.
This continues until println can make all the calls it needs and finally displays the value of cnt.
This is therefore dependent on the actual implementation of println.
For JDK 7, either it detects the cyclic call and throws the error earlier, or it keeps some stack resources in reserve and throws the error before reaching the hard limit to leave room for remediation logic, or the println implementation makes no further calls, or the ++ operation is done after the println call and is thus bypassed by the error.
main recurses on itself until it overflows the stack at recursion depth R.
The catch block at recursion depth R-1 is run.
The catch block at recursion depth R-1 evaluates cnt++.
The catch block at depth R-1 calls println, placing cnt's old value on the stack. println will internally call other methods and uses local variables and things. All these processes require stack space.
Because the stack was already grazing the limit, and calling/executing println requires stack space, a new stack overflow is triggered at depth R-1 instead of depth R.
Steps 2-5 happen again, but at recursion depth R-2.
Steps 2-5 happen again, but at recursion depth R-3.
Steps 2-5 happen again, but at recursion depth R-4.
Steps 2-4 happen again, but at recursion depth R-5.
It so happens that there is enough stack space now for println to complete (note that this is an implementation detail, it may vary).
cnt was post-incremented at depths R-1, R-2, R-3, R-4, and finally at R-5. The fifth post-increment returned four, which is what was printed.
With main completed successfully at depth R-5, the whole stack unwinds without more catch blocks being run and the program completes.
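The unwinding sequence above can be simulated with an explicit budget. All numbers here are invented solely so the simulation reproduces cnt == 4; real frame and println sizes are JVM- and platform-specific:

```java
public class OverflowSim {
    static final int FRAME = 1;    // hypothetical stack units reclaimed per unwound main() frame
    static final int PRINTLN = 5;  // hypothetical stack units println needs to complete

    static int simulate() {
        int free = 0;              // the stack is full when the first error is thrown
        int cnt = 0;
        while (true) {
            free += FRAME;         // unwind to the catch block one level up
            int printed = cnt++;   // cnt++ is evaluated before println is attempted
            if (free >= PRINTLN)
                return printed;    // println finally fits: this value gets printed
            // otherwise println itself overflows and we unwind one more level
        }
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints 4 under these assumptions
    }
}
```

Five catch blocks run (depths R-1 through R-5), so cnt is post-incremented five times and the fifth post-increment returns four.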
After digging around for a while, I can't say that I have found the answer, but I think I'm quite close now.
First, we need to know when a StackOverflowError will be thrown. The stack for a Java thread stores frames, which contain all the data needed to invoke a method and resume execution. According to the Java Language Specification for Java 6, when invoking a method,
If there is not sufficient memory available to create such an activation frame, a StackOverflowError is thrown.
Second, we should clarify what "there is not sufficient memory available to create such an activation frame" means. According to the Java Virtual Machine Specification for Java 6,
frames may be heap allocated.
So, when a frame is created, there should be enough heap space to create the stack frame and enough stack space to store the new reference which points to the new stack frame, if frames are heap-allocated.
Now let's go back to the question. From the above, we know that each invocation of a given method may cost the same amount of stack space. Invoking System.out.println (may) need 5 levels of method invocation, so 5 frames need to be created. Then, when the StackOverflowError is thrown, it has to go back 5 times to get enough stack space to store those 5 frames' references. Hence 4 is printed. Why not 5? Because you use cnt++; change it to ++cnt, and you will get 5.
And you will notice that when the stack size reaches a high level, you will sometimes get 50. That is because the amount of available heap space then needs to be taken into consideration: when the stack size is too large, the heap space may run out before the stack does. And (maybe) the actual size of the stack frames of System.out.println is about 51 times that of main, so it goes back 51 times and prints 50.
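The cnt++ versus ++cnt difference mentioned above is just post- versus pre-increment; a minimal sketch (the value 4 is assumed from the discussion, not measured):

```java
public class Increment {
    public static void main(String[] args) {
        int cnt = 4;                // suppose four failed attempts already incremented cnt
        System.out.println(cnt++);  // post-increment: the old value, 4, is passed to println
        cnt = 4;
        System.out.println(++cnt);  // pre-increment: the new value, 5, is passed instead
    }
}
```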
This is not exactly an answer to the question, but I wanted to add something to the original question that I came across, and how I understood the problem:
In the original problem the exception is caught where it was first possible:
For example, with JDK 1.7 it is caught at the first place of occurrence.
But in earlier versions of the JDK it looks like the exception is not caught at the first place of occurrence, hence 4, 50, etc.
Now if you remove the try catch block as following
public static void main( String[] args ){
    System.out.println(cnt++);
    main(args);
}
Then you will see all the values of cnt and the thrown exceptions (on JDK 1.7).
I used NetBeans to see the output, as the cmd will not show all of the output and the exceptions thrown.
This question already has answers here:
Why is the max recursion depth I can reach non-deterministic?
(4 answers)
Closed 5 years ago.
A simple class for demonstration purposes:
public class Main {
    private static int counter = 0;

    public static void main(String[] args) {
        try {
            f();
        } catch (StackOverflowError e) {
            System.out.println(counter);
        }
    }

    private static void f() {
        counter++;
        f();
    }
}
I executed the above program 5 times, the results are:
22025
22117
15234
21993
21430
Why are the results different each time?
I tried setting the max stack size (for example -Xss256k). The results were then a bit more consistent but again not equal each time.
Java version:
java version "1.8.0_72"
Java(TM) SE Runtime Environment (build 1.8.0_72-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)
EDIT
When JIT is disabled (-Djava.compiler=NONE) I always get the same number (11907).
This makes sense as JIT optimizations are probably affecting the size of stack frames and the work done by JIT definitely has to vary between the executions.
Nevertheless, I think it would be beneficial if this theory is confirmed with references to some documentation about the topic and/or concrete examples of work done by JIT in this specific example that leads to frame size changes.
The observed variance is caused by background JIT compilation.
This is how the process looks:
Method f() starts execution in interpreter.
After a number of invocations (around 250) the method is scheduled for compilation.
The compiler thread works in parallel to the application thread. Meanwhile the method continues execution in interpreter.
As soon as the compiler thread finishes compilation, the method entry point is replaced, so the next call to f() will invoke the compiled version of the method.
There is basically a race between the application thread and the JIT compiler thread. The interpreter may perform a different number of calls before the compiled version of the method is ready. At the end there is a mix of interpreted and compiled frames.
No wonder the compiled frame layout differs from the interpreted one. Compiled frames are usually smaller; they don't need to store all the execution context on the stack (method reference, constant pool reference, profiler data, all arguments, expression variables, etc.)
Furthermore, there are even more race possibilities with Tiered Compilation (the default since JDK 8). There can be a combination of three types of frames: interpreter, C1 and C2 (see below).
Let's have some fun experiments to support the theory.
Pure interpreted mode. No JIT compilation.
No races => stable results.
$ java -Xint Main
11895
11895
11895
Disable background compilation. JIT is ON, but is synchronized with the application thread.
No races again, but the number of calls is now higher due to compiled frames.
$ java -XX:-BackgroundCompilation Main
23462
23462
23462
Compile everything with C1 before execution. Unlike the previous case, there will be no interpreted frames on the stack, so the number will be a bit higher.
$ java -Xcomp -XX:TieredStopAtLevel=1 Main
23720
23720
23720
Now compile everything with C2 before execution. This will produce the most optimized code with the smallest frame. The number of calls will be the highest.
$ java -Xcomp -XX:-TieredCompilation Main
59300
59300
59300
Since the default stack size is 1M, this should mean the frame now is only 16 bytes long. Is it?
$ java -Xcomp -XX:-TieredCompilation -XX:CompileCommand=print,Main.f Main
0x00000000025ab460: mov %eax,-0x6000(%rsp) ; StackOverflow check
0x00000000025ab467: push %rbp ; frame link
0x00000000025ab468: sub $0x10,%rsp
0x00000000025ab46c: movabs $0xd7726ef0,%r10 ; r10 = Main.class
0x00000000025ab476: addl $0x2,0x68(%r10) ; Main.counter += 2
0x00000000025ab47b: callq 0x00000000023c6620 ; invokestatic f()
0x00000000025ab480: add $0x10,%rsp
0x00000000025ab484: pop %rbp ; pop frame
0x00000000025ab485: test %eax,-0x23bb48b(%rip) ; safepoint poll
0x00000000025ab48b: retq
In fact, the frame here is 32 bytes, but JIT has inlined one level of recursion.
Finally, let's look at the mixed stack trace. In order to get it, we'll crash JVM on StackOverflowError (option available in debug builds).
$ java -XX:AbortVMOnException=java.lang.StackOverflowError Main
The crash dump hs_err_pid.log contains the detailed stack trace where we can find interpreted frames at the bottom, C1 frames in the middle and lastly C2 frames on the top.
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
J 164 C2 Main.f()V (12 bytes) # 0x00007f21251a5958 [0x00007f21251a5900+0x0000000000000058]
J 164 C2 Main.f()V (12 bytes) # 0x00007f21251a5920 [0x00007f21251a5900+0x0000000000000020]
// ... repeated 19787 times ...
J 164 C2 Main.f()V (12 bytes) # 0x00007f21251a5920 [0x00007f21251a5900+0x0000000000000020]
J 163 C1 Main.f()V (12 bytes) # 0x00007f211dca50ec [0x00007f211dca5040+0x00000000000000ac]
J 163 C1 Main.f()V (12 bytes) # 0x00007f211dca50ec [0x00007f211dca5040+0x00000000000000ac]
// ... repeated 1866 times ...
J 163 C1 Main.f()V (12 bytes) # 0x00007f211dca50ec [0x00007f211dca5040+0x00000000000000ac]
j Main.f()V+8
j Main.f()V+8
// ... repeated 1839 times ...
j Main.f()V+8
j Main.main([Ljava/lang/String;)V+0
v ~StubRoutines::call_stub
First of all, the following has not been researched. I have not "deep dived" the OpenJDK source code to validate any of the following, and I don't have access to any inside knowledge.
I tried to validate your results by running your test on my machine:
$ java -version
openjdk version "1.8.0_71"
OpenJDK Runtime Environment (build 1.8.0_71-b15)
OpenJDK 64-Bit Server VM (build 25.71-b15, mixed mode)
I get the "count" varying over a range of ~250. (Not as much as you are seeing)
First some background. A thread stack in a typical Java implementation is a contiguous region of memory that is allocated before the thread is started, and that is never grown or moved. A stack overflow happens when the JVM tries to create a stack frame to make a method call, and the frame goes beyond the limits of the memory region. The test could be done by testing the SP explicitly, but my understanding is that it is normally implemented using a clever trick with the memory page settings.
When a stack region is allocated, the JVM makes a syscall to tell the OS to mark a "red zone" page at the end of the stack region read-only or non-accessible. When a thread makes a call that overflows the stack, it accesses memory in the "red zone" which triggers a memory fault. The OS tells the JVM via a "signal", and the JVM's signal handler maps it to a StackOverflowError that is "thrown" on the thread's stack.
So here are a couple of possible explanations for the variability:
The granularity of hardware-based memory protection is the page boundary. So if the thread stack has been allocated using malloc, the start of the region is not going to be page aligned. Therefore the distance from the start of the stack frame to the first word of the "red zone" (which >is< page aligned) is going to be variable.
The "main" stack is potentially special, because that region may be used while the JVM is bootstrapping. That might lead to some "stuff" being left on the stack from before main was called. (This is not convincing ... and I'm not convinced.)
Having said this, the "large" variability that you are seeing is baffling. Page sizes are too small to explain a difference of ~7000 in the counts.
UPDATE
When JIT is disabled (-Djava.compiler=NONE) I always get the same number (11907).
Interesting. Among other things, that could cause stack limit checking to be done differently.
This makes sense as JIT optimizations are probably affecting the size of stack frames and the work done by JIT definitely has to vary between the executions.
Plausible. The size of the stackframe could well be different after the f() method has been JIT compiled. Assuming f() was JIT compiled at some point you stack will have a mixture of "old" and "new" frames. If the JIT compilation occurred at different points, then the ratio will be different ... and hence the count will be different when you hit the limit.
Nevertheless, I think it would be beneficial if this theory is confirmed with references to some documentation about the topic and/or concrete examples of work done by JIT in this specific example that leads to frame size changes.
Little chance of that, I'm afraid ... unless you are prepared to PAY someone to do a few days research for you.
1) No such (public) reference documentation exists, AFAIK. At least, I've never been able to find a definitive source for this kind of thing ... apart from deep diving the source code.
2) Looking at the JIT compiled code tells you nothing of how the bytecode interpreter handled things before the code was JIT compiled. So you won't be able to see if the frame size has changed.
The exact functioning of the Java stack is undocumented, but it depends entirely on the memory allocated to that thread.
Just try using the Thread constructor that takes a stackSize argument and see if the count becomes constant. I have not tried it, so please share the results.
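The Thread constructor referred to above takes an explicit stackSize argument (which its Javadoc says the JVM is free to treat as a suggestion or ignore). A quick sketch of the suggested experiment; the 256 KB figure is an arbitrary choice:

```java
public class FixedStackDepth {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();  // recurses until the thread's stack overflows
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable r = () -> {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println(depth);  // maximum depth reached on this stack
            }
        };
        // Request a 256 KB stack; the JVM may treat this only as a hint
        Thread t = new Thread(null, r, "small-stack", 256 * 1024);
        t.start();
        t.join();
    }
}
```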
I have a small system sampler project that I have made in Java.
I want to find a way to get the method execution time (self-time) of methods from all threads, similar to VisualVM. However, I simply don't want to use any instrumentation.
So I have two main questions, a broad question and something slightly more specific for my case:
Broad question: Is there a way to calculate self-time of a method using solely Java + JMX? If yes, how accurate is your implementation?
More specific to my problem question: In my project, I can get the CPU time spent per method by sampling all thread stack traces, getting the delta CPU time between the samples and applying that to the top frame of the stack (in my data structure).
Could I infer a basic execution from this data and the length between samples?
Here is a simplified version of my code:
private static final ThreadMXBean MX = ManagementFactory.getThreadMXBean();
private long lastCpuTime;
private Map<Long, ThreadTimerData> threadCache = new HashMap<Long, ThreadTimerData>();
public void sample()
{
    final ThreadInfo[] threadInfos = MX.getThreadInfo( MX.getAllThreadIds(), Integer.MAX_VALUE );
    for( ThreadInfo threadInfo : threadInfos )
    {
        final long threadId = threadInfo.getThreadId();
        ThreadTimerData data = threadCache.get( threadId ); // Just assume we already have this in our Map.
        final StackTraceElement[] trace = threadInfo.getStackTrace();
        if( trace == null || trace.length == 0 )
        {
            continue;
        }
        final long cpuTime = MX.getThreadCpuTime( threadId );
        data.update( trace[0].getClassName() + "." + trace[0].getMethodName(), cpuTime - lastCpuTime ); // Another map, holding the name string against the delta.
        lastCpuTime = cpuTime; // cpuTime is only in scope inside the loop, so the update belongs here
    }
}
The sample method is called at a 200 ms interval (within its own thread); this can be changed.
I believe I found the solution for those curious:
Essentially, if an element (a method on a stack frame) is on top of the stack, it is technically executing. We need to measure how long this element has been there, so we sample the stack trace at an interval. This also answers the second part of my first question: the accuracy depends on the interval. A shorter interval means a more meaningful self-time, as there is less chance that another element appeared and disappeared on the stack between any two samples.
But I digress a bit. We get the time difference between the two sample times (ideally with nanosecond precision), and this gives us how long the method has been executing. We aggregate this over several samples until the method stops executing (the element leaves the stack trace) or calls something else (a new element is pushed onto the stack).
Once the stack has changed, we repeat the process. Additionally, everything on the stack is using CPU time. The tricky part of all of this is creating the most efficient data structure to store, retrieve and update the methods from the stack.
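A minimal sketch of the aggregation step described above. The class and method names here are invented for illustration; a real sampler would also track the full stack, not just the top frame:

```java
import java.util.HashMap;
import java.util.Map;

// Aggregates sampled self-time per method: each sampling interval is
// credited to whatever method was on top of the stack at sample time.
public class SelfTimeTable {
    private final Map<String, Long> selfTimeNanos = new HashMap<>();

    // Credit the interval since the last sample to the top-of-stack method.
    public void record(StackTraceElement[] trace, long intervalNanos) {
        if (trace == null || trace.length == 0) return;
        String top = trace[0].getClassName() + "." + trace[0].getMethodName();
        selfTimeNanos.merge(top, intervalNanos, Long::sum);
    }

    public long selfTime(String method) {
        return selfTimeNanos.getOrDefault(method, 0L);
    }
}
```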
I have the most curious index problem I could imagine. I have the following innocent-looking code:
int lastIndex = givenOrder.size() - 1;
if (lastIndex >= 0 && givenOrder.get(lastIndex).equals(otherOrder)) {
    givenOrder.remove(lastIndex);
}
Looks like a proper pre-check to me. (The list here is declared as List, so there is no direct access to the last element, but that is immaterial for the question anyway.) I get the following stack trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:604) ~[na:1.7.0_17]
at java.util.ArrayList.remove(ArrayList.java:445) ~[na:1.7.0_17]
at my.code.Here:48) ~[Here.class:na]
At runtime, it’s a simple ArrayList. Now, index 0 should be quite inside the bounds!
Edit
Many people have suggested that introducing synchronization would solve the problem. I do not doubt it. But the core of my (admittedly unexpressed) question is different: How is that behaviour even possible?
Let me elaborate. We have 5 steps here:
1) I check the size and compute lastIndex (size is 1 here)
2) I even access that last element
3) I request removal
4) ArrayList checks the bounds, finding them inadequate
5) ArrayList constructs the exception message and throws
Strictly speaking, granularity could be even finer. Now, 50,000 times it works as expected, no concurrency issues. (Frankly, I haven’t even found any other place where that list could be modified, but the code is too large to rule that out.)
Then, one time it breaks. That’s normal for concurrency issues. However, it breaks in an entirely unexpected way. Somewhere after step 2 and before step 4, the list is emptied. I would expect an exception saying IndexOutOfBoundsException: Index: 0, Size: 0, which is bad enough. But I never saw an exception like this in the last months!
Instead, I see IndexOutOfBoundsException: Index: 0, Size: 1, which means that after step 4 but before step 5 the list gains one element. While this is possible, it seems about as unlikely as the phenomenon above. Yet it happens every time the error occurs! As a mathematician, I say that this is just very improbable. But my common sense tells me that there is another issue.
Moreover, looking at the code in ArrayList, you see very short functions there that are run hundreds of times, and no volatile variable anywhere. That means I would very much expect the HotSpot compiler to have inlined the function calls, making the critical section much smaller, and to have elided the double access to the size variable, making the observed behaviour impossible. Clearly, this isn't happening.
So, my question is why this can happen at all and why it happens in this weird way. Suggesting synchronization is not an answer to the question (it may be a solution to the problem, but that is a different matter).
So I checked the source code of the ArrayList implementation of rangeCheck, the method that throws the exception, and this is what I found:
private void rangeCheck(int paramInt) // given index
{
    if (paramInt < this.size) // compare param with list size
        return;
    throw new IndexOutOfBoundsException(outOfBoundsMsg(paramInt)); // here we have the exception
}
and relevant outOfBoundsMsg method
private String outOfBoundsMsg(int paramInt)
{
    return "Index: " + paramInt + ", Size: " + this.size; // OOOoo we are reading size again!
}
So, as you can probably see, the size of the list (this.size) is accessed twice. First it is read to check the condition; the condition is not fulfilled, so the message is built for the exception. While creating the message for the exception, only paramInt is retained between the calls; the size of the list is read a second time. And here we have our culprit.
In reality, you would expect the message Index: 0, Size: 0, but the size value used for the check is not stored locally (a micro-optimization), so between the two reads of this.size the list was changed.
That is why the message is misleading.
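The two-reads race can be reproduced deterministically with a stand-in that mimics rangeCheck. This is a simulation, not the real ArrayList: the concurrent modification is faked inline where another thread would interleave:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simulates ArrayList.rangeCheck reading `size` twice: once for the bounds
// check and once while building the exception message.
public class TwoReads {
    final AtomicInteger size = new AtomicInteger(0);

    void rangeCheck(int index) {
        if (index < size.get())         // first read: size == 0, check fails
            return;
        size.incrementAndGet();         // stands in for another thread adding an element here
        throw new IndexOutOfBoundsException(
            "Index: " + index + ", Size: " + size.get()); // second read: size == 1
    }

    public static void main(String[] args) {
        try {
            new TwoReads().rangeCheck(0);
        } catch (IndexOutOfBoundsException e) {
            System.out.println(e.getMessage()); // Index: 0, Size: 1
        }
    }
}
```

The message claims Size: 1 even though the bounds check failed against a size of 0, which is exactly the misleading behaviour described above.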
Conclusion:
Such a situation is possible in a highly concurrent environment and can be very hard to reproduce. To solve the problem, use a synchronized version of ArrayList (as @JordiCastillia suggested). This solution can have an impact on performance, as every operation (add/remove and probably get) will be synchronized. Another solution would be to put your code into a synchronized block, but this will only synchronize your calls in this piece of code, and the problem can still occur in the future, as different parts of the system can still access the whole object asynchronously.
This is most likely a concurrency issue.
The size gets somehow modified before/after you tried to access the index.
Use Collections.synchronizedList().
Tested in a simple main it works:
List<String> givenOrder = new ArrayList<>();
String otherOrder = "null";
givenOrder.add(otherOrder);

int lastIndex = givenOrder.size() - 1;
if (lastIndex >= 0 && givenOrder.get(lastIndex).equals(otherOrder)) {
    System.out.println("remove");
    givenOrder.remove(lastIndex);
}
Is your code running in a thread-safe process? Your List is probably modified by some other thread or process.
I'm writing a function that will call itself up to about 5000 times. Of course, I get a StackOverflowError. Is there any way I can rewrite this code in a fairly simple way?:
void checkBlocks(Block b, int amm) {
    // Stuff that might issue a return call
    Block blockDown = (Block) b.getRelative(BlockFace.DOWN);
    if (condition)
        checkBlocks(blockDown, amm);
    Block blockUp = (Block) b.getRelative(BlockFace.UP);
    if (condition)
        checkBlocks(blockUp, amm);
    // Same code 4 more times for each side
}
By the way, what is the limitation of how deep we may call the functions?
Use an explicit stack of objects and a loop, rather than the call stack and recursion:
void checkBlocks(Block b, int amm) {
    Stack<Block> blocks = new Stack<Block>();
    blocks.push(b);
    while (!blocks.isEmpty()) {
        b = blocks.pop();
        Block blockDown = (Block) b.getRelative(BlockFace.DOWN);
        if (condition)
            blocks.push(blockDown);
        Block blockUp = (Block) b.getRelative(BlockFace.UP);
        if (condition)
            blocks.push(blockUp);
    }
}
The default thread stack size in Java is typically 512 KB (it is platform-specific). If you exceed it, the program will terminate with a StackOverflowError.
you can increase the stack size by passing a JVM argument :
-Xss1024k
Now the stack size is 1024 KB. You may give a higher value based on your environment.
I don't think we can change this programmatically.
You can increase the stack size by using -Xss4m.
You may put your "Block"s into a queue/stack and iterate as long as Blocks are available.
It's obvious that you get a StackOverflowError with such a branching factor in your recursion. In some other languages this could be addressed by tail-call optimization, but I suppose your problem needs to be solved another way.
Ideally, you perform some check on Block. Maybe you can obtain list of all blocks and check each of them iteratively?
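The iterative idea above can be sketched as a generic traversal with a visited set, which also guards against revisiting the same block along several paths. Plain integer node ids stand in for the question's Block type so the sketch is self-contained:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Iterative traversal with an explicit stack and a visited set; node ids
// stand in for Block references purely for illustration.
public class IterativeWalk {
    static int visitAll(Map<Integer, List<Integer>> neighbors, int start) {
        Deque<Integer> pending = new ArrayDeque<>();
        Set<Integer> visited = new HashSet<>();
        pending.push(start);
        while (!pending.isEmpty()) {
            int node = pending.pop();
            if (!visited.add(node)) continue;  // already handled: skip
            for (int next : neighbors.getOrDefault(node, List.of()))
                pending.push(next);            // defer instead of recursing
        }
        return visited.size();                 // number of blocks checked
    }
}
```

Without the visited set, a cycle (down then back up to the same block) would loop forever, which is the iterative analogue of unbounded recursion.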
In most cases a stack overflow means the recursion is written incorrectly; used properly, you shouldn't get this error.
Your method has no return type/value.
How do you ensure your initial Block b is valid?
If you are using recursion, ask yourself the following questions:
what is my recursion anchor (when do I stop recursing)
what is my recursion step (how do I reduce the number of calculations each call)
Example:
n! = n * (n-1)!
My recursion anchor is n == 2 (the result is 2), so I can calculate all results beginning from this anchor.
My recursion step is n-1 (so with each step I get closer to the solution, and thus to my recursion anchor).
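The factorial example spelled out in code, with the anchor and the step marked:

```java
public class Factorial {
    static long fact(long n) {
        if (n <= 2)                 // recursion anchor: stop here (fact(2) == 2)
            return n;
        return n * fact(n - 1);     // recursion step: reduce n by one each call
    }

    public static void main(String[] args) {
        System.out.println(fact(5)); // 120
    }
}
```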
I have encountered a somewhat baffling problem with the simple task of filling an Array dynamically in Java. The following is a snapshot from where the problem originates:
entries = new Object[ (n = _entries.length + 1) ];
for (i = 0; i < n; i++) {
    entry = ( i == (n - 1) ) ? addition : _entries[i];
    entries[i] = entry;
    //...
}
Where _entries is a source Array (field of the class); entries is initialized as an Array of Objects
Object[] entries = null ;
and addition is the Object to be added (passed as an Argument to the method this code is in).
The code passes the compiler but results in a memory-leak when called. I was able to narrow down the cause to the line where the code attempts to fill the new Array
entries[i] = entry ;
however, I cannot think of any reason why this would cause a memory-leak. I'm guessing the root of the issue must be either an extremely stupid fault on my part or an extremely arcane problem with Java. :-)
If you need more background let me know.
Edit:
Tomcat's log tells me:
A web application appears to have started a thread named ... but has failed to stop it.
This is very likely to create a memory leak.
Other than that obviously the page loading the class does not finish loading or loads very slowly.
Edit:
The problem might be somewhere else (at a more expected location) after all. Apparently Tomcat wasn't loading the class files all the time when I tried to pin down the faulty code, and this misled me a bit. I now suspect an infinite for-each loop, caused by a defective Iterator implementation further up the call stack, to be at fault.
In any case, thanks for your input! Always much appreciated!
I will use a Collection (probably a Vector) instead of an Array as a work-around; still, I'd like to know what the problem here is.
TIA,
FK82
So, about your Tomcat log message:
A web application appears to have started a thread named ... but has failed to stop it. This is very likely to create a memory leak.
This says that your servlet or something similar started a new thread, and this thread is still running when your servlet finished its operation. It doesn't relate at all to your example code (if this code isn't the one starting the thread).
Superfluous threads, even more when each HTTP-request starts a new one (which does not finish soon) can create a memory leak, since each thread needs quite some space for its stack, and also may inhibit garbage-collection by referencing objects who are not needed anymore. Make sure that your thread is really needed, and think about using a threadpool instead (preferably container-managed, if this is possible).
I cannot see a memory leak, but your code is more complicated than it needs to be. How about this:
newLength = $entries.length + 1;
entries = new Object[ newLength ];
for (i = 0; i < newLength - 1; i++) {
    entries[i] = $entries[i];
    //...
}
entries[ newLength - 1 ] = addition;
No need to check whether you are at the last entry every iteration, and you could use an array copy method as Alison suggested.
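The array-copy approach alluded to above could look like this with Arrays.copyOf; the method name append is invented for the sketch:

```java
import java.util.Arrays;

public class Append {
    // Returns a new array one element longer, with `addition` at the end.
    static Object[] append(Object[] entries, Object addition) {
        Object[] grown = Arrays.copyOf(entries, entries.length + 1);
        grown[grown.length - 1] = addition;
        return grown;
    }
}
```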
Think of this post as a comment. I just posted it as an answer because I don't know how code is formatted in comments...
It is working for me;
please find the sample code below and adapt it accordingly.
class test {
    public static void main(String[] args) {
        String[] str = new String[]{"1", "2", "3", "4", "5", "6"};
        int n = 0;
        Object[] entries = new Object[ (n = 5 + 1) ];
        for (int i = 0; i < n; i++) {
            Object entry = ( i == (n - 1) ) ? new Object() : str[i];
            entries[i] = entry;
        }
        System.out.println(entries[3]);
    }
}
Perhaps by memory leak you mean an OutOfMemoryError? You sometimes get that in Java if you do not have the minimum heap size set high enough (along with a well-defined max heap size) at startup. If there is not enough heap created at startup, you can sometimes use it up faster than the JVM has time to allocate more memory to the heap or to garbage-collect. Unfortunately, there is no "right answer" here; you just have to play with different settings to get the right result (this is known as "tuning the JVM"). In other words, it is more of an art than a science.
And in case you didn't know, you pass the arguments to the JVM on the command line when starting your program; -Xms250m -Xmx1024m is an example. The first sets the initial (minimum) heap to 250 megabytes; the second sets the max heap size to one gigabyte.
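You can confirm what the JVM actually received for these settings at runtime via the standard Runtime API (values are reported in bytes):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");   // roughly the -Xmx value
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB"); // currently committed heap
        System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");  // unused portion of total
    }
}
```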
Just another thought to go by as I too am puzzled by how you could trace a memory leak to one line of code.