I am trying to connect R to Teradata to pull data directly into R for analysis. However, I am getting the following error:
Error in .jcall(rp, "I", "fetch", stride, block) :
java.lang.OutOfMemoryError: Java heap space
I have tried to increase the JVM's maximum heap size through my R options:
options(java.parameters = "-Xmx8g")
I have also tried to initialize the Java parameters with the rJava function .jinit, as .jinit(parameters = "-Xmx8g"), but it still fails.
The calculated size of the data should be approximately 3 GB (actually less than 3 GB).
You need to make sure you're allocating the additional memory before loading rJava or any other packages. Wipe the environment first (via rm(list = ls())), restart R/RStudio if you must, and set the option at the beginning of your script.
options(java.parameters = "-Xmx8000m")
See for example https://support.snowflake.net/s/article/solution-using-r-the-following-error-is-returned-javalangoutofmemoryerror-gc-overhead-limit-exceeded
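If you want to confirm that the setting actually reached the JVM (it only takes effect if it is applied before the first Java-using package starts the JVM), you can read the effective heap ceiling back from the Java side; rJava can reach the same value via .jcall on java.lang.Runtime. A minimal, illustrative plain-Java version of the check (the class name is made up for the example):

// Prints the maximum heap the running JVM will ever use (the effective -Xmx).
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}

If the reported value is far below what you asked for, the option was applied too late and the JVM was already running.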
I had this problem in a non-reproducible manner: -Xmx8g partly solved it, but I still ran into problems randomly.
I have now found an option that uses a different garbage collector:
options(java.parameters = c("-XX:+UseConcMarkSweepGC", "-Xmx8192m"))
library(xlsx)
at the beginning of the script, before any other package is loaded, since other packages can load Java components themselves and the options have to be set before any Java is loaded.
So far, the problem has not occurred again.
Only occasionally, in a long session, can it still happen, but in that case a session restart normally solves the problem.
Running the following two lines of code (before any packages are loaded) worked for me on a Mac:
options(java.parameters = c("-XX:+UseConcMarkSweepGC", "-Xmx8192m"))
gc()
This essentially combines two proposals previously posted here. Importantly, running only the first line (as suggested by drmariod) did not solve the problem in my case. However, when I additionally executed gc() just after the first line (as suggested by user2961057), the problem was solved.
Should it still not work, restart your R session, then try options(java.parameters = "-Xmx8g") instead (before any packages are loaded) and execute gc() directly afterwards. Alternatively, try to further increase the heap from "-Xmx8g" to e.g. "-Xmx16g" (provided that you have at least that much RAM).
EDIT: Further solutions: While I had to use rJava for model estimations in R (explaining y from a large number of X variables), I kept receiving the above 'OutOfMemory' errors even when I scaled up to "-Xmx60000m" (the machine I am using has 64 GB of RAM). The problem was that some model specifications were simply too big (and would have required even more RAM). One solution that may help in this case is scaling the size of the problem down (e.g. by reducing the number of X variables in the model) or, if possible, splitting the problem into independent pieces, estimating each separately, and putting the pieces back together.
I added garbage collection and that solved the issue for me. I am connecting to Oracle databases using RJDBC.
Simply add gc().
Related
I have a homework assignment with a memory limit of 64 MB, and my code uses 68 MB.
I have already set my arrays to null and run Runtime.getRuntime().gc();
Is there anything else I can do, such as deleting everything from memory that I don't need? How?
I have 3 int arrays, 3 float arrays, and 2 double arrays, all of the same given size N.
a = null;
size = null;
prize = null;
required = null;
b= null;
potr= null;
result = null;
Runtime.getRuntime().gc();
First, check whether your program really won't run with 64 MB, using the command-line option -Xmx64m (exact syntax might depend on your JRE version). You only have a problem if you get an OutOfMemoryError under that setting.
If Java runs without a command-line limit, it tends to grab more memory than strictly necessary. So I guess that if you see 68 MB in the Windows Task Manager or equivalent, everything may well be OK.
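If you want a number that is more reliable than the Task Manager (which counts the whole process, not just the heap), you can ask the JVM itself. A small illustrative sketch, with made-up class and method names:

// Rough view of current heap usage from inside the JVM.
public class MemoryReport {
    static void printHeapUsage(String label) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println(label + ": " + usedMb + " MB used of " + maxMb + " MB max");
    }

    public static void main(String[] args) {
        printHeapUsage("at start");
        int[] filler = new int[10000000]; // roughly 40 MB, just to see the number move
        printHeapUsage("after allocating " + filler.length + " ints");
    }
}

Calling such a helper before and after your big allocations shows whether the arrays or something else dominates the 64 MB.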
If you really run into an OutOfMemoryError, you can try to profile your application. At our company, we're using the commercial JProfiler, a tool that can give you detailed information on memory usage, but there is a learning curve to understanding the profiling results.
An alternative is to post the complete code here or at https://codereview.stackexchange.com/, so we can give more specific help.
Presumably it is the code itself that is taking up the bulk of the memory. Make use of fewer obscure library classes, or get hold of JDK 1.0.
Which external libraries does your code refer to? The JVM loads each class your code uses into memory, so removing unnecessary imports and library dependencies will reduce your memory footprint. You could also use an earlier JVM version with fewer features, if it satisfies your requirements. Hope that helps.
I am trying to create large RDF/HDT files, which in turn means reading large files into memory, etc. Now, that is not really an issue since the server has 516 GB of memory, around 510 GB of which are free.
I am using the rdfhdt library to create the files, which works just fine. However, for one specific file, I keep getting an OutOfMemoryError, with no real reason as to why. Here is the stack trace:
Exception in thread "main" java.lang.OutOfMemoryError
at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at org.rdfhdt.hdt.util.string.ByteStringUtil.append(ByteStringUtil.java:238)
at org.rdfhdt.hdt.dictionary.impl.section.PFCDictionarySection.load(PFCDictionarySection.java:123)
at org.rdfhdt.hdt.dictionary.impl.section.PFCDictionarySection.load(PFCDictionarySection.java:87)
at org.rdfhdt.hdt.dictionary.impl.FourSectionDictionary.load(FourSectionDictionary.java:83)
at org.rdfhdt.hdt.hdt.impl.HDTImpl.loadFromModifiableHDT(HDTImpl.java:441)
at org.rdfhdt.hdt.hdt.writer.TripleWriterHDT.close(TripleWriterHDT.java:96)
at dk.aau.cs.qweb.Main.makePredicateStores(Main.java:137)
at dk.aau.cs.qweb.Main.main(Main.java:69)
I am running the JAR file with the flag -Xmx200G. The strange thing is, when looking in top, it shows VIRT to be 213G (as expected). However, every time RES climbs to just about 94 GB, it crashes with the error above, which I think is strange since it should have more than 100 GB left to use. I looked in this question, as the problem seems to be similar to mine, although on a different scale. However, using -verbose:gc and -XX:+PrintGCDetails doesn't seem to give me any indication as to what is wrong, and there is about 500 GB of swap space available as well.
Perhaps the strangest thing, however, is that the specific file I have issues with is not even the largest one. For scale, it has about 83M triples to write, while other files with up to 200M triples have not been an issue. I am using Java version 1.8.0_66 and Ubuntu 14.04.3 LTS.
So my question is: can anyone explain what I am doing wrong? It seems very strange to me that larger files cause no issue, but this one does. Please let me know if you need any other information.
Due to Java's maximum array length, a ByteArrayOutputStream cannot hold more than 2 GB of data. This is true regardless of your current amount of RAM or memory limits. Here's the code you're hitting:
private static int hugeCapacity(int minCapacity) {
    if (minCapacity < 0) // overflow
        throw new OutOfMemoryError();
    return (minCapacity > MAX_ARRAY_SIZE) ? Integer.MAX_VALUE : MAX_ARRAY_SIZE;
}
You'll have to rewrite your code to not try to keep that much data in a single array.
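A hedged sketch of that idea using plain java.io (this is not the rdfhdt API; the file name and loop are placeholders): write the data to a file-backed stream as it is produced instead of accumulating it in a single in-memory byte array, which is what runs into the ~2 GB limit.

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class StreamInsteadOfBuffer {
    public static void main(String[] args) throws IOException {
        // A ByteArrayOutputStream must fit everything into one byte[], which
        // caps out near 2 GB; a file-backed stream has no such limit.
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("section.tmp"))) {
            for (int i = 0; i < 1000000; i++) {
                out.write(("triple " + i + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}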
I have been struggling with a nasty problem for many days now. In my Java Swing application, I use two JTextPanes extended with syntax highlighting for XML text, as described in this example, with a few small changes:
XML Syntax Highlighting in JTextPanes
These two JTextPanes are placed in two JScrollPanes inside a JSplitPane, which sits directly in the content pane of a JFrame. The first text pane is editable (like a simple XML request editor); the second displays XML responses from my server backend.
Everything works as expected as long as I don't try to put "many lines" into those XmlTextPanes. Doing so results in a pretty fast increase in memory usage (going from under 100 MB to 1,000 MB after just a few lines are inserted into one or both of the text panes).
The strange thing is that even resetting the text panes and/or removing them (or disposing the frame that holds the components) does not change the memory used at all! Forcing a garbage collection doesn't change anything either. Something must still be holding references to the allocated objects...
In order to see what exactly is consuming all that memory, I analyzed the application with the Eclipse Memory Analyzer (MAT), resulting in this:
This clearly shows that the CachedPainter is holding on to a lot of memory...
Asking Google, it seems that I am not the only one having memory issues with the CachedPainter, but I was unable to find a reason for it, and, even more importantly, a solution.
After messing around with this for many, many hours, I found out that the problem does not occur when I set my application to use
UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
instead of
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
With the cross-platform Look and Feel, I was able to put thousands of lines of XML content into my text panes without memory usage going over 200 MB.
With the same code but the system Look and Feel (Windows 7), I reach over 2,000 MB of memory usage after ~200 lines.
I can reproduce this behavior compiling against both JDK 7 and JDK 8 :(.
What is causing this and how can I fix it?
Edit:
Some additional information:
On further research, it seems that some look and feels have problems with Direct3D buffers. The int[] in the MAT screenshot could be some kind of rendering buffer too, pointing in the same direction...
My application already sets this in order to prevent some rendering performance issues (frame resizing for example!):
System.setProperty("sun.java2d.noddraw", Boolean.TRUE.toString());
I could try to add this flag to the start parameters, too:
-Dsun.java2d.d3d=false
Or do you think that this won't help?
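For reference, this is roughly how I would set those flags programmatically instead of via start parameters (just a sketch; it has to run before any AWT/Swing class is loaded, and I have not verified that it cures the CachedPainter growth):

public class Launcher {
    public static void main(String[] args) {
        // Must be set before any AWT/Swing class is initialised, otherwise the
        // rendering pipeline has already been chosen and these have no effect.
        System.setProperty("sun.java2d.noddraw", "true");
        System.setProperty("sun.java2d.d3d", "false");

        javax.swing.SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                // ... build and show the JFrame with the two XML panes here ...
            }
        });
    }
}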
Don't worry. It's not a memory leak.
The ImageCache is based on SoftReferences. From the sources:
public class ImageCache {
private int maxCount;
private final LinkedList<SoftReference<ImageCache.Entry>> entries;
....
From the Javadoc:
All soft references to softly-reachable objects are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError.
So when you do not have enough memory, the cache is cleared to free up memory.
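A small stand-alone sketch (not the Swing cache itself, just the same mechanism) showing that a softly referenced object is dropped under memory pressure instead of triggering an OutOfMemoryError; run it with a small heap, e.g. -Xmx128m, to see the effect quickly:

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftReferenceDemo {
    public static void main(String[] args) {
        // A 50 MB buffer that is only reachable through a SoftReference.
        SoftReference<byte[]> cached = new SoftReference<byte[]>(new byte[50 * 1024 * 1024]);

        // Build up strongly referenced data until memory gets tight.
        List<byte[]> pressure = new ArrayList<byte[]>();
        try {
            while (cached.get() != null) {
                pressure.add(new byte[10 * 1024 * 1024]);
            }
            System.out.println("Soft reference was cleared before the heap ran out.");
        } catch (OutOfMemoryError e) {
            System.out.println("Heap exhausted by the strongly referenced data itself.");
        }
    }
}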
I've been vectorizing some MATLAB code I'd previously written, and during this process MATLAB started crashing due to segmentation faults. I narrowed the problem down to a single type of computation: assigning to multiple struct properties.
For example, even self-assignment of this form eventually causes a seg fault when executed several thousand times:
[my_class_instance.my_struct_vector.my_property] = my_class_instance.my_struct_vector.my_property;
I initially assumed this must be a memory leak of some sort, so I tried printing out Java's free memory after every iteration, but it remained fairly constant.
So yeah, I'm completely at a loss now as to why this breaks :-/
UPDATE: the following change fixes the seg faulting:
temp = [my_class_instance.my_struct_vector];
[temp.my_property] = temp.my_property;
[my_class_instance.my_struct_vector] = temp;
The question now is why this would fix anything. Something about repeatedly accessing a handle class rather than a struct array, perhaps?
UPDATE 2: THE PLOT THICKENS
I've finally replicated the problem and the workaround using a dummy program simple enough to post here:
A simple class:
classdef test_class
properties
test_prop
end
end
And a program that makes a bunch of vector assignments using the class and always crashes:
test_instance = test_class();
test_instance.test_prop = struct('test_field',{1 1});
for i=1:10000
[test_instance.test_prop.test_field] = test_instance.test_prop.test_field;
end
UPDATE 3: THE PLOT THINS
Turns out I found a bug. According to MATLAB tech support, repeated vector assignment of class properties simply won't work in R2011a (and presumably in earlier versions). He told me it works fine in R2012a, and then mentioned the same workaround I discovered: use a temporary variable.
So yeah...
I'm pretty sure this question ends with that support ticket, but if any daring individuals want to take a shot at WHY this bug exists at all, I'd definitely still be interested in such an answer. (Learning is fun!)
By far the most likely cause is that the operation is internally using self-modifying code. The problem with this is that modern processors have CPU caches, so if you change code in memory, but the code has already been committed to a cache, it will generate a seg fault.
The reason it appears random is that it depends on whether the modified code is in the cache at the time of modification, among other factors.
To avoid this, the programmer has to make sure the code flushes the cache before performing a self-modification.
I am running a program that I've written in Java in Eclipse. The program has a very deep level of recursion for very large inputs. For smaller inputs the program runs fine however when large inputs are given, I get the following error:
Exception in thread "main" java.lang.StackOverflowError
Can this be solved by increasing the Java stack size and if so, how do I do this in Eclipse?
Update:
@Jon Skeet
The code is traversing a parse tree recursively in order to build up a data structure. So, for example, the code will do some work using a node in the parse tree and call itself on the node's two children, combining their results to give the overall result for the tree.
The total depth of the recursion depends on the size of the parse tree, but the code seems to fail (without a larger stack) when the number of recursive calls gets into the thousands.
Also, I'm pretty sure the code isn't failing because of a bug, as it works for small inputs.
Open the Run Configuration for your application (Run / Run Configurations..., then look for your application's entry under 'Java Application').
The Arguments tab has a 'VM arguments' text box; enter -Xss1m there (or a bigger value for the maximum stack size). The default value is 512 kB (Sun JDK 1.5; I don't know whether it varies between vendors and versions).
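To get a feeling for what a given -Xss buys you, a throwaway probe like the following (the class name is made up for the example) prints roughly how many frames fit before the StackOverflowError; run it once with the default and once with -Xss1m to compare:

public class StackDepthProbe {
    private static int depth = 0;

    private static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError after " + depth + " calls");
        }
    }
}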
It may be curable by increasing the stack size - but a better solution would be to work out how to avoid recursing so much. A recursive solution can always be converted to an iterative solution - which will make your code scale to larger inputs much more cleanly. Otherwise you'll really be guessing at how much stack to provide, which may not even be obvious from the input.
Are you absolutely sure it's failing due to the size of the input rather than a bug in the code, by the way? Just how deep is this recursion?
EDIT: Okay, having seen the update, I would personally try to rewrite it to avoid using recursion. Generally, having a Stack<T> of "things still to do" is a good starting point for removing recursion; see the sketch below.
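To make that concrete, here is a rough sketch (the Node type and the "work" are placeholders for whatever your parse tree actually uses) of a pre-order walk driven by an explicit stack, so the depth is limited by heap space rather than by -Xss:

import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeTraversal {
    // Placeholder standing in for the real parse-tree node type.
    static class Node {
        final String label;
        final Node left, right;
        Node(String label, Node left, Node right) {
            this.label = label;
            this.left = left;
            this.right = right;
        }
    }

    // Visits every node without recursion: the Deque holds the "things still to do".
    static void visitAll(Node root) {
        Deque<Node> todo = new ArrayDeque<Node>();
        if (root != null) {
            todo.push(root);
        }
        while (!todo.isEmpty()) {
            Node current = todo.pop();
            System.out.println(current.label); // "do some work" on the node
            if (current.right != null) todo.push(current.right);
            if (current.left != null) todo.push(current.left);
        }
    }

    public static void main(String[] args) {
        Node tree = new Node("root",
                new Node("left", null, null),
                new Node("right", null, null));
        visitAll(tree);
    }
}

Combining the children's results (as in your update) needs a post-order variant, but the same explicit-stack pattern applies.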
Add the flag -Xss1024k to the VM arguments.
You can also specify the stack size in megabytes, for example -Xss1m.
I also had the same problem while parsing schema definition files (XSD) using the XSOM library.
I was able to increase the stack memory up to 208 MB, at which point it showed a heap_out_of_memory_error, for which I was only able to increase the heap up to 320 MB.
The final configuration was -Xmx320m -Xss208m, but then again it ran for some time and failed.
My function recursively prints the entire tree of the schema definition; amazingly, the output file exceeded 820 MB for a 4 MB definition file (the AIXM library), which in turn uses a 50 MB schema definition library (ISO GML).
With that I am convinced I have to avoid recursion and switch to iteration with some other way of representing the output, but I am having a little trouble converting all that recursion to iteration.
When the -Xss argument doesn't do the job, try deleting the temporary files from:
c:\Users\{user}\AppData\Local\Temp\.
This did the trick for me.
You need to have a launch configuration inside Eclipse in order to adjust the JVM parameters.
After running your program with either F11 or Ctrl-F11, open the launch configurations in Run -> Run Configurations... and open your program under "Java Applications". Select the Arguments pane, where you will find "VM arguments".
This is where -Xss1024k goes.
If you want the launch configuration to be a file in your workspace (so you can right-click and run it), select the Common pane, check the Save as -> Shared file checkbox, and browse to the location where you want the launch file. I usually keep them in a separate folder, as we check them into CVS.
Look at Morris in-order tree traversal, which uses constant space and runs in O(n) (up to 3 times longer than a normal recursive traversal, but you save hugely on space). If the nodes are modifiable, then you could save the calculated result of each sub-tree as you backtrack to its root (by writing directly to the node).
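For reference, a minimal sketch of Morris in-order traversal in Java (the Node type is hypothetical; the algorithm temporarily rewires right pointers and restores them, so it assumes the tree may be briefly modified during the walk):

public class MorrisInorder {
    static class Node {
        final int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    // In-order traversal in O(n) time and O(1) extra space: each in-order
    // predecessor's right pointer is temporarily threaded back to the current
    // node and later restored, so no stack or recursion is needed.
    static void morrisInorder(Node root) {
        Node current = root;
        while (current != null) {
            if (current.left == null) {
                System.out.println(current.value);   // visit
                current = current.right;
            } else {
                Node pred = current.left;            // find the in-order predecessor
                while (pred.right != null && pred.right != current) {
                    pred = pred.right;
                }
                if (pred.right == null) {
                    pred.right = current;            // create the temporary thread
                    current = current.left;
                } else {
                    pred.right = null;               // remove the thread
                    System.out.println(current.value); // visit
                    current = current.right;
                }
            }
        }
    }

    public static void main(String[] args) {
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        morrisInorder(root);                          // prints 1, 2, 3
    }
}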
When using a JBoss server, double-click on the server:
Go to "Open Launch Configuration"
Then change the min and max memory sizes (like 1G, 1m).