Java Recursive Stack Size [duplicate]

This question already has answers here:
What is the default stack size, can it grow, how does it work with garbage collection?
(2 answers)
Closed yesterday.
Whenever I try to solve problems through recursion in Java, I get a StackOverflowError. I have checked my implementation and it is correct. And when the same code is written in C++ with the same logic, it runs perfectly fine.
Is there a specific recursion stack size in Java and C++?
If yes, how does that work and what is the limit?

The stack size here is the stack size used by the Java VM, so it is an implementation detail. How much stack space is available can often be controlled. For the standard Java VM that's the -Xss setting, where the -X prefix indicates a non-standard option specific to that VM (i.e. it may be retained across versions, but no assurance is given). See the Java documentation for more details; as you can see, it is still present in Java 19.
Note that this will increase the stack size for each thread, so use it with some care. You can e.g. try to increase it to 4 MB by running java -Xss4M .... Applications such as Tomcat usually allow you to define these kinds of settings through a configuration file or something similar.
I'm not a big fan of deeply recursive functions myself, and prefer other approaches (looping, lambdas, etc.) to achieve the same result. Stack operations are expensive, and the risk of running out of stack space remains very real, since it depends on the input rather than on the application code itself.
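If you only need the deeper stack for one computation, Java also lets you request a stack size for a single thread via the four-argument Thread constructor, so you don't have to raise -Xss for the whole JVM. A minimal sketch, assuming a simple recursive depth counter; the stackSize argument is only a hint that the VM may round or ignore:

public class DeepRecursion {
    static int depth(int n) {
        return n == 0 ? 0 : 1 + depth(n - 1);
    }

    public static void main(String[] args) throws InterruptedException {
        // Request roughly 128 MB of stack for this one thread only,
        // instead of raising -Xss for every thread in the JVM.
        Thread t = new Thread(null, () -> System.out.println(depth(1_000_000)),
                "deep-recursion", 128L * 1024 * 1024);
        t.start();
        t.join();
    }
}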

Related

Java Mission Control code profiler empty

I'm having a problem using Java Mission Control when the application being profiled sets the -XX:MaxJavaStackTraceDepth JVM option to -1.
To reproduce:
Fire up a Java application: java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:MaxJavaStackTraceDepth=-1
Make a Flight Recording for that app using default settings
View the "Code" section in the generated recording. It'll be empty, like this: http://imgur.com/if27cUu
System: Ubuntu 14.04/amd64. Java 1.8.0_72.
Any suggestions to why this happens? I'd like to keep my stacktraces unlimited (due to some rare stack overflow exceptions which are very hard to find unless you have the beginning of the stack trace).
The -1 value comes from here: http://stas-blogspot.blogspot.se/2011/07/most-complete-list-of-xx-options-for.html#MaxJavaStackTraceDepth
Edit:
The original question wrongly put the blame on the java.endorsed.dirs system property. I had a bunch of properties set and must have gotten confused during the process of elimination.
I've been able to reproduce the problem with -XX:MaxJavaStackTraceDepth=-1, and found at least one related bug, https://bugs.openjdk.java.net/browse/JDK-7179701, which is a low-priority bug currently targeted for JDK 10. My advice would be to use -XX:FlightRecorderOptions=stackdepth=2048 instead. I can't say I've done much experimenting with this option either, but at least it's designed to work with JFR.
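Assuming the same flags as in the reproduction steps above, the full invocation might look like this (MyApp is a placeholder for your main class; tune the stackdepth value to your needs):

java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:FlightRecorderOptions=stackdepth=2048 MyApp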

want to look at memory used by one Java object in eclipse

I have a Java project written in Eclipse (RAD, actually); it uses a significant amount of memory by virtue of using iText. I am looking at using a different way of generating my iText document that is supposed to use less memory, and I want to know how much less memory it uses.
I know what object will be the root for the largest portion of the memory; it would be fine for my purposes if I could set a breakpoint and then do something that would tell me the deep-copy memory used starting with that object (i.e., the memory used by it and all of its direct and indirect references).
I've been looking at memory monitors and heap dump analyzers and so forth for an hour now, and am pretty confused. All of them appear to be pointed at answering a different problem, or at least pointed to such a general class of problems that, even if I could get them installed and working, it is not clear whether they would answer MY question.
Can someone point me to a reasonably simple way to answer this limited question? I figure if I run the code once the current way, find out the memory used by this object and maybe one or two others, then run it again and look at the same values, I'll know how much good the new technique is doing.
rc
http://www.eclipse.org/mat/
works great for me... tutorials are included
You can fire up JVisualVM, which ships with the Oracle JDK and is also available independently.
Monitor your process, and at some point, you can do a Heap Dump.
In the heap dump tab, you can go to the OQL console and select your object(s).
When viewing the instance of an object, you can request to compute the retained size. That will give you the total size of your object.
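For the OQL step, a query as simple as the following lists all instances of a class so you can pick one and compute its retained size (com.example.MyDocumentRoot is a made-up stand-in for whatever class roots your object graph):

select d from com.example.MyDocumentRoot d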

How do I debug Segfaults occurring in the JVM when it runs my code?

My Java application has started to crash regularly with a SIGSEGV and a dump of stack data and a load of information in a text file.
I have debugged C programs in gdb and I have debugged Java code from my IDE. I'm not sure how to approach C-like crashes in a running Java program.
I'm assuming I'm not looking at a JVM bug here. Other Java programs run just fine, and the JVM from Sun is probably more stable than my code. However, I have no idea how I could even cause segfaults with Java code. There definitely is enough memory available, and when I last checked in the profiler, heap usage was around 50% with occasional spikes around 80%. Are there any startup parameters I could investigate? What is a good checklist when approaching a bug like this?
Though I'm not so far able to reliably reproduce the event, it does not seem to occur entirely at random either, so testing is not completely impossible.
ETA: Some of the gory details
(I'm looking for a general approach, since the actual problem might be very specific. Still, there's some info I already collected and that may be of some value.)
A while ago, I had similar-looking trouble after upgrading my CI server (see here for more details), but that fix (setting -XX:MaxPermSize) did not help this time.
Further investigation revealed that in the crash log files the thread marked as "current thread" is never one of mine, but either one called "VMThread" or one called "GCTaskThread". If it's the latter, it is additionally marked with the comment "(exited)"; if it's the former, the GCTaskThread is not in the list. This makes me suppose that the problem might occur around the end of a GC operation.
I'm assuming I'm not looking at a JVM bug here. Other Java programs run just fine, and the JVM from Sun is probably more stable than my code.
I don't think you should make that assumption. Without using JNI, you should not be able to write Java code that causes a SIGSEGV (although we know it happens). My point is, when it happens, it is either a bug in the JVM (not unheard of) or a bug in some JNI code. If you don't have any JNI in your own code, that doesn't mean that you aren't using some library that is, so look for that. When I have seen this kind of problem before, it was in an image manipulation library. If the culprit isn't in your own JNI code, you probably won't be able to 'fix' the bug, but you may still be able to work around it.
First, you should get an alternate JVM on the same platform and try to reproduce it. You can try one of these alternatives.
If you cannot reproduce it, it likely is a JVM bug. From that, you can either mandate a particular JVM or search the bug database, using what you know about how to reproduce it, and maybe get suggested workarounds. (Even if you can reproduce it, many JVM implementations are just tweaks on Oracle's Hotspot implementation, so it might still be a JVM bug.)
If you can reproduce it with an alternative JVM, the fault might be that you have some JNI bug. Look at what libraries you are using and what native calls they might be making. Sometimes there are alternative "pure Java" configurations or jar files for the same library or alternative libraries that do almost the same thing.
Good luck!
The following will almost certainly be useless unless you have native code. However, here goes.
Start the Java program in the Java debugger, with a breakpoint well before the possible SIGSEGV.
Use the ps command to obtain the process ID of java.
gdb /usr/lib/jvm/sun-java6/bin/java processid
Make sure that the gdb 'handle' command is set to stop on SIGSEGV.
Continue in the Java debugger from the breakpoint.
Wait for the explosion.
Use gdb to investigate.
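As a rough sketch of such a gdb session (the path and PID are placeholders; note the HotSpot JVM also uses SIGSEGV internally for its own purposes, so you may hit benign stops along the way):

$ ps ax | grep java                 # find the JVM's process ID
$ gdb /usr/lib/jvm/sun-java6/bin/java 12345
(gdb) handle SIGSEGV stop print     # stop when the segfault arrives
(gdb) continue
... wait for the crash ...
(gdb) backtrace                     # inspect the native stack
(gdb) info threads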
If you've really managed to make the JVM take a sigsegv without any native code of your own, you are very unlikely to make any sense of what you will see next, and the best you can do is push a test case onto a bug report.
I found a good list at http://www.oracle.com/technetwork/java/javase/crashes-137240.html. As I'm getting the crashes during GC, I'll try switching between garbage collectors.
I tried switching between the serial and the parallel GC (the latter being the default on a 64-bit Linux server), this only changed the error message accordingly.
Reducing the max heap size from 16 GB to 10 GB after a fresh analysis in the profiler (which showed heap usage flattening out at 8 GB) did lead to a significantly lower "virtual memory" footprint (16 GB instead of 60 GB), but I don't even know what that means, and the Internet says it doesn't matter.
Currently, the JVM is running in client mode (using the -client startup option, thus overriding the default of -server). So far there has been no crash, but the performance impact seems rather large.
If you have a core file, you could try running jstack on it, which would give you something a little more comprehensible - see http://download.oracle.com/javase/6/docs/technotes/tools/share/jstack.html - although if it's a bug in the GC thread, it may not be all that helpful.
Try to check whether a crash in native (C) code caused the Java crash. Use valgrind to find invalid memory accesses, and also cross-check the stack size.

How to Use posix_spawn() in Java

I've inherited a legacy application that uses ProcessBuilder.start() to execute a script on a Solaris 10 server.
Unfortunately, this script call fails due to a memory issue, as documented here
Oracle's recommendation is to use posix_spawn() since, under the covers, ProcessBuilder.start() is using fork/exec.
I have been unable to find any examples of using posix_spawn() in Java (e.g., how to call "myScript.sh"), or even which packages are required.
Could you please, point me to a simple example on how to use posix_spawn() in Java?
Recent versions of Java 7 and 8 support posix_spawn internally.
Command-line option:
-Djdk.lang.Process.launchMechanism=POSIX_SPAWN
or enable it at runtime:
System.setProperty("jdk.lang.Process.launchMechanism", "POSIX_SPAWN");
I'm a little confused as to which Java version/OS combinations have this enabled by default, but I'm sure you could test and find out pretty quickly whether setting this option makes a difference.
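Putting the runtime variant together, a minimal sketch (the script path is a placeholder; the property should be set before the first process is launched, since the JDK reads it when its process machinery is first initialized):

import java.io.IOException;

public class SpawnDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Must run before any Process has been started in this JVM.
        System.setProperty("jdk.lang.Process.launchMechanism", "POSIX_SPAWN");
        Process p = new ProcessBuilder("/path/to/myScript.sh").start();
        System.out.println("myScript.sh exited with " + p.waitFor());
    }
}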
For reference, to go back to the old fork method simply use
-Djdk.lang.Process.launchMechanism=fork
To prove whether this option is respected in your JVM version use
-Djdk.lang.Process.launchMechanism=dummy
and you will get an error next time you exec. This way you know the JVM is receiving this option.
An alternative, which does not require JNI, is to create a separate "process spawner" application. I would probably have this application expose an RMI interface, and create a wrapper object that is a drop-in replacement for ProcessBuilder.
You might also want to consider having this "spawner" application be the thing that starts your legacy application.
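A minimal sketch of what the spawner's remote interface could look like (the names are invented for illustration, not an existing API):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;

// Hypothetical remote interface exposed by the small spawner process;
// the memory-heavy JVM calls run() over RMI instead of forking itself.
public interface ProcessSpawner extends Remote {
    // Launches the command on the spawner side and returns its exit code.
    int run(List<String> command) throws RemoteException;
}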
You will need to familiarize yourself with JNI first. Learn how to call out into a native routine from Java code. Once you do, you can look at this example and see if it helps with your issue. Of particular interest to you is:
if ((RC = posix_spawn(&pid, spawnedArgs[0], NULL, NULL, spawnedArgs, NULL)) != 0) {
    printf("Error while executing posix_spawn(), return code from posix_spawn() = %d", RC);
}
A much simpler solution would be to keep your code unchanged and simply add more virtual memory to your server.
i.e.:
mkfile 2g /somewhere/swap-1
swap -a /somewhere/swap-1
Edit: To clarify, since the link present in the question is now broken:
The question is about a system running out of virtual memory due to the JVM being forked. E.g., assuming the JVM uses 2 GB of virtual memory, an extra 2 GB is required for the fork to succeed on Solaris. There is no paging involved here, just memory reservation. Unlike the Linux kernel, which by default overcommits memory, Solaris makes sure allocated memory is backed by either RAM or swap. As there is not enough swap available, fork fails. Enlarging the swap allows the fork to succeed without any performance impact; just after the fork, the exec "unreserves" these 2 GB and reverts to a situation identical to the posix_spawn one.
See also this page for an explanation about memory allocation under Solaris and other OSes.

How to free up memory?

We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every 3 weeks, after which a server restart is needed to fix it.
Upon analysis of the heap dumps we find the problem to be objects used in JSPs.
Can JSP objects be the real cause of app server memory issues? How do we free up JSP objects (objects which are being instantiated using useBean or other tags)?
We have a clustered WebSphere app server with 2 nodes and an IHS.
EDIT: The findings above are based on the heap dump and native stderr log analysis given below, using the IBM Support Assistant.
Native stderr log analysis (chart): http://saregos.com/wp-content/uploads/2010/03/chart.jpg
Heap dump analysis: [image no longer available]
Heap dump analysis showing the immediate dominators (2 levels up of the Hashtable entry in the image above): [image no longer available]
The last image shows that the immediate dominators are in fact objects being used in JSPs.
EDIT2: More info available at http://saregos.com/?p=43
I'd first attach a profiling tool that can tell you what these "objects" are that are taking up all the memory.
Eclipse has TPTP, or there is JProfiler or JProbe. Any of these should show the object heap creeping up and allow you to inspect it to see what is on the heap.
Then search the code base to find who is creating these.
Maybe you have a cache or tree/map object with elements in it, and you have only implemented the equals() method on these objects when you also need to implement hashCode(). This would then result in the map/cache/tree getting bigger and bigger till it falls over.
This is only a guess though.
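To illustrate the guess, here is a minimal, made-up demonstration of how equals() without hashCode() makes a HashMap grow without bound:

import java.util.HashMap;
import java.util.Map;

public class MissingHashCodeDemo {
    static final class Key {
        private final String id;
        Key(String id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id.equals(id);
        }
        // hashCode() is NOT overridden, so two equal Keys usually
        // land in different hash buckets.
    }

    public static void main(String[] args) {
        Map<Key, String> cache = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            cache.put(new Key("same"), "value"); // never replaces the old entry
        }
        System.out.println(cache.size()); // typically ~100,000 entries, not 1
    }
}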
JProfiler would be my first call
JavaWorld has an example screenshot of what is in memory, and a screenshot of the object heap building up and being cleaned up (hence the sawtooth edge). [screenshots from javaworld.com no longer available]
UPDATE:
Ok, I'd look at...
http://www-01.ibm.com/support/docview.wss?uid=swg1PK38940
Heap usage increases over time, which leads to an OutOfMemory condition. Analysis of a heap dump shows that the following objects are taking up an increasing amount of space:
40,543,128 [304] 47 class com/ibm/wsspi/rasdiag/DiagnosticConfigHome
40,539,056 [56] 2 java/util/Hashtable 0xa8089170
40,539,000 [2,064] 511 array of java/util/Hashtable$Entry
6,300,888 [40] 3 java/util/Hashtable$HashtableCacheHashEntry
Triggering the garbage collection manually doesn't solve your problem - it won't free resources that are still in use.
You should use a profiling tool (like JProfiler) to find your leaks. You probably use code that stores references in lists or maps that are never released at runtime - probably static references.
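The classic shape of such a leak is a static collection that only ever grows; a made-up illustration:

import java.util.ArrayList;
import java.util.List;

public class RequestLog {
    // Static reference: everything added here stays strongly reachable
    // for the lifetime of the JVM unless something removes it.
    private static final List<String> HISTORY = new ArrayList<>();

    public static void record(String requestInfo) {
        HISTORY.add(requestInfo); // grows on every request, never cleared
    }
}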
If you run under the Sun Java 6 JVM, strongly consider using the jvisualvm program in the JDK to get an initial overview of what actually goes on inside the program. The snapshot comparison is really good for narrowing down which objects sneak in.
If the Sun Java 6 JVM is not an option, investigate which profiling tools you have available. Trial versions can get you really far.
It can be something as simple as gigantic character arrays underlying a substring you are collecting in a list, e.g. for housekeeping.
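For context: on Java 6-era JVMs (before Java 7u6), String.substring() shared the parent string's backing char[], so keeping a tiny substring could pin a huge string in memory; copying it breaks the link. A sketch, with makeHugeString() standing in for wherever the big string comes from:

import java.util.ArrayList;
import java.util.List;

public class SubstringPinning {
    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        String huge = makeHugeString();
        // Before Java 7u6, this 10-char substring kept a reference
        // to huge's entire backing char[]:
        keys.add(huge.substring(0, 10));
        // Copying trims the retained array down to 10 characters:
        keys.add(new String(huge.substring(0, 10)));
    }

    private static String makeHugeString() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1_000_000; i++) sb.append('x');
        return sb.toString();
    }
}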
I suggest reading Effective Java, chapter 2. Following it, together with a profiler, will help you identify the places where your application produces memory leaks.
Freeing up memory isn't the way to solve excessive memory consumption. Excessive memory consumption may be a result of two things:
not properly written code - the solution is to write it properly, so that it does not consume more than is needed; Effective Java will help here.
the application simply needs that much memory - then you should increase the VM memory using -Xmx, -Xms, -XX:MaxHeapSize, ...
There is no specific way to free up objects allocated in JSPs, at least as far as I know. Rather than investigating such options, I'd focus on finding the actual problem in your application code and fixing it.
Some hints that might help:
Check the scope of your beans. Aren't you, e.g., storing something user- or request-specific in "application" scope by mistake?
Check the web session timeout settings in your web application and your app server settings.
You mentioned the heap consumption grows gradually. If that is indeed so, try to see by how much the heap size grows with various user scenarios: grab a heap dump, run a test, let the session data time out, grab another dump, and compare the two. That might give you some idea where the objects on the heap come from.
Check your beans for any obvious memory leaks, for sure :)
EDIT: Checking for unreleased static resources that Daniel mentions is another worthwhile thing :)
As I understand it, those top-level memory eaters are the cache storage and the objects stored in it. You should probably make sure that your cache frees objects when it takes up too much memory. You may want to use weak references if you only need to cache live objects.
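A minimal sketch of that idea using java.util.WeakHashMap, whose keys are held by weak references, so an entry becomes collectable once nothing else strongly references its key:

import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    public static void main(String[] args) {
        Map<Object, byte[]> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, new byte[1024 * 1024]); // cached payload
        System.out.println("before: " + cache.size()); // 1
        key = null;  // drop the only strong reference to the key
        System.gc(); // only a hint, but usually clears the weak entry
        System.out.println("after: " + cache.size()); // typically 0
    }
}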
