Reindexing Solr: java.lang.OutOfMemoryError: Java heap space

I indexed a directory containing 16k files (PDFs, DOCs, etc.) and everything worked great. However, when I tried to reindex my collection, I got a "java.lang.OutOfMemoryError: Java heap space" error for every line that Solr tried to index. I already looked into the issue online, and I tried to change my indexing command from
java -Dc=collection -Drecursive -Dauto -jar example/exampledocs/post.jar c:/folder
to
java -Dc=collection -Xms1024m -Xmx1024m -Drecursive -Dauto -jar example/exampledocs/post.jar c:/folder
but I got the same errors (I don't know if it was right of me to add those options, though). I've attached an image of my collection storage information. How can I fix this error?

According to this:
https://sitecore.stackexchange.com/questions/8849/java-lang-outofmemoryerror-during-solr-index-rebuilding
start Solr with the following argument to increase its memory:
solr start -m 4096m
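Note that the -Xms/-Xmx options added to the post.jar command only size the heap of the client JVM doing the posting, not the Solr server that does the indexing. To make a larger server heap permanent instead of passing -m on every start, a minimal sketch (assuming a standard Solr install where the bin/solr script reads solr.in.sh on Linux or solr.in.cmd on Windows; 4g is an example size) is to set:
SOLR_JAVA_MEM="-Xms4g -Xmx4g"
in that include file and restart Solr.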

Related

Nutch hadoop map reduce java heap space outOfMemory

I am running a Nutch 1.16, Hadoop 2.8.3, Solr 8.5.1 crawler setup that runs fine up to a few million indexed pages. Then I run into Java heap space issues during the MapReduce job, and I just cannot seem to find the correct way to increase that heap space. I have tried:
1. Passing -D mapreduce.map.memory.mb=24608 -D mapreduce.map.java.opts=-Xmx24096m when starting the Nutch crawl.
2. Editing NUTCH_HOME/bin/crawl commonOptions mapred.child.java.opts to -Xmx16000m
3. Setting HADOOP_HOME/etc/hadoop/mapred-site.xml mapred.child.java.opts to -Xmx160000m -XX:+UseConcMarkSweepGC
4. Copying said mapred-site.xml into my nutch/conf folder
None of that seems to change anything. I run into the same heap space error at the same point in the crawling process. I have tried reducing the fetcher threads from 25 back to 12 and switching off parsing while fetching. Nothing changed and I am out of ideas. I have 64GB of RAM, so that's really not an issue. Please help ;)
EDIT: fixed filename to mapred-site.xml
1./2. Passing -D ...
The heap space also needs to be set for the reduce task, using "mapreduce.reduce.memory.mb" and "mapreduce.reduce.java.opts". Note that the script bin/crawl was recently improved in this regard; see NUTCH-2501 and the recent bin/crawl script.
3./4. Setting/copying hadoop-site.xml
Shouldn't this be set in "mapred-site.xml"?
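Following the first point, a minimal sketch of the crawl invocation with the reduce-side properties added (the property names are standard Hadoop ones; the sizes are illustrative and the rest of the invocation is abbreviated):
bin/crawl ... -D mapreduce.map.memory.mb=8192 -D mapreduce.map.java.opts=-Xmx7g -D mapreduce.reduce.memory.mb=8192 -D mapreduce.reduce.java.opts=-Xmx7g
Keeping each -Xmx roughly 10-20% below the matching memory.mb container size leaves headroom for off-heap overhead.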

JMeter Dashboard Generation Java Heap Space

I've been able to create dashboards for small amounts of log data (3MB) with JMeter. However, when trying to create dashboards with large amounts of data (35MB), JMeter will throw a java.lang.OutOfMemoryError: Java heap space.
So far I've tried to create an environment variable called JVM_ARGS=-Xms1024m -Xmx10240m, but I still do not have enough space.
Is there anything else I can try to create these dashboards? Or is there a way to reduce the number of entries that get written to the log file?
Thank you!
There are 2 possibilities:
Option 1: your JVM options are not taken into account. Show the first lines or all content of jmeter.log.
Option 2: you have added some dynamic parameter to your HTTP requests that has created a lot of differently named SampleResults.
Edit 8 October 2018:
Root cause was Option 2.
Make sure you've really created the environment variable and it has the anticipated value; double-check this by running the following command in the terminal window you will be launching JMeter from:
echo %JVM_ARGS% for Windows
echo $JVM_ARGS for Linux/Unix/macOS
You should see your increased JVM heap settings (a sketch of setting the variable follows at the end of this answer)
Make sure to launch JMeter via the wrapper script: jmeter.bat for Windows or jmeter.sh for other operating systems
Make sure to use a 64-bit version of the JRE, as a 32-bit one will not be able to allocate more than a 3G heap
Make sure you can execute the java command with your 10G heap:
java -Xms1024m -Xmx10240m -version
You should see your Java version
Try running the ApacheJMeter.jar executable directly:
java -Xms1024m -Xmx10240m -jar ApacheJMeter.jar -g result.jtl -o destination_folder
If nothing helps, be aware that you can generate tables/charts using the JMeterPluginsCMD Command Line Tool (it is not a part of the standard JMeter installation; it can be installed using the JMeter Plugins Manager)
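For the first check above, a minimal sketch of setting the variable in the shell you launch JMeter from (the sizes mirror the ones in the question):
set JVM_ARGS=-Xms1024m -Xmx10240m        :: Windows (cmd)
export JVM_ARGS="-Xms1024m -Xmx10240m"   # Linux/Unix/macOS
Then run jmeter.bat or jmeter.sh from that same shell so the wrapper script picks the variable up.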

Increase Heap Space Available for JVM: OutOfMemoryError: Requested array size exceeds VM limit, Ubuntu 64Bit, Neo4j 2.0

My specs:
-Ubuntu 64-bit
-Neo4j 2.0
-32 GB of RAM
-AMD FX-8350 Eight-Core Processor
The problem:
I'm making a request to my Neo4j server with the following query:
MATCH (being:my_label_2) RETURN being
And it gives me this error:
OutOfMemoryError
Requested array size exceeds VM limit
StackTrace:
java.lang.StringCoding$StringEncoder.encode(StringCoding.java:300)
java.lang.StringCoding.encode(StringCoding.java:344)
java.lang.String.getBytes(String.java:916)
org.neo4j.server.rest.repr.OutputFormat.toBytes(OutputFormat.java:194)
org.neo4j.server.rest.repr.OutputFormat.formatRepresentation(OutputFormat.java:147)
org.neo4j.server.rest.repr.OutputFormat.response(OutputFormat.java:130)
org.neo4j.server.rest.repr.OutputFormat.ok(OutputFormat.java:67)
org.neo4j.server.rest.web.CypherService.cypher(CypherService.java:101)
java.lang.reflect.Method.invoke(Method.java:606)
org.neo4j.server.rest.transactional.TransactionalRequestDispatcher.dispatch(TransactionalRequestDispatcher.java:139)
org.neo4j.server.rest.security.SecurityFilter.doFilter(SecurityFilter.java:112)
This works fine with "my_label_1", which returns around 30k results.
What I believe is the problem:
I don't have enough memory allocated to my JVM
Attempts made to fix/things I've found online:
I read what the manual says to do
And what the Ubuntu Forums say to do
So I've tried going to my neo4j folder (with cd as usual) and running it with the arguments this way:
sudo bin/neo4j start -Xmx4096M
However, that didn't work. When Neo4j starts it does warn me that I might not have enough space with:
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
Question
I know I'm definitely using the arguments wrong; I honestly don't have much experience with JVM configuration. How should I approach this? Am I missing something?
You should put JVM settings into the conf/neo4j-wrapper.conf file. It should look like this:
user@pc:> head -n 7 neo4j-enterprise-2.0.0/conf/neo4j-wrapper.conf
wrapper.java.additional=-Dorg.neo4j.server.properties=conf/neo4j-server.properties
wrapper.java.additional=-Djava.util.logging.config.file=conf/logging.properties
wrapper.java.additional=-Dlog4j.configuration=file:conf/log4j.properties
# Java Additional Parameters
wrapper.java.additional=-XX:+UseConcMarkSweepGC
wrapper.java.additional=-XX:+CMSClassUnloadingEnabled
Note that you can configure different aspects of Neo4j via different files, so it's better to read the description at the top of every file in that conf/ directory to get familiar with what can be done and how exactly.
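For the heap itself, a minimal sketch of the lines to add (wrapper.java.initmemory and wrapper.java.maxmemory are the heap keys the Neo4j 2.0 wrapper reads; values are in megabytes, and 4096 mirrors the -Xmx4096M attempted in the question):
# conf/neo4j-wrapper.conf
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
Restart Neo4j afterwards for the change to take effect.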

Eclipse Memory Analyser always shows "An internal error occurred"?

java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid2584.hprof ...
Heap dump file created [106948719 bytes in 4.213 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2760)
at java.util.Arrays.copyOf(Arrays.java:2734)
at java.util.ArrayList.ensureCapacity(ArrayList.java:167)
at java.util.ArrayList.add(ArrayList.java:351)
at Main.main(Main.java:15)
But when I open the heap dump java_pid2584.hprof via Eclipse Memory Analyser, there is always this message:
An internal error occurred during:
"Parsing heap dump from **\java_pid6564.hprof". Java heap space
The problem is that Eclipse Memory Analyser does not have enough heap space to open the Heap dump file.
You can solve the problem as follows:
open the MemoryAnalyzer.ini file
change the default -Xmx1024m to a larger size (a sketch follows below)
Note that on OS X, to increase the memory allocated to MAT, you need to right-click mat.app and show the package contents. The MemoryAnalyzer.ini file is under /Contents/Eclipse.
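A sketch of the relevant tail of MemoryAnalyzer.ini after such a change (everything after -vmargs is passed straight to the JVM; 4g is just an example size):
-vmargs
-Xmx4g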
Solution for the same issue with the Memory Analyzer plugin in Eclipse on Mac OS X El Capitan.
I was facing the same issue, but with the Eclipse plugin, and I did not have any Memory Analyzer app in the Applications folder. The solution which worked for me was:
Right-click on the Eclipse icon and select Show Package Contents.
Go to Contents > Eclipse
Open Eclipse.ini
Change value -Xmx1024m to -Xmx2048m
Restart Eclipse
On OS X 10.11 (El Capitan), modifying MemoryAnalyzer.app/Contents/MacOS/MemoryAnalyzer.ini does not work! This is because it's looking for the MemoryAnalyzer.ini in a different place.
On my computer, it was looking for:
MemoryAnalyzer.app/Contents/Eclipse/MemoryAnalyzer.ini but the real .ini file was:
MemoryAnalyzer.app/Contents/MacOS/MemoryAnalyzer.ini.
In order for your changes to take effect, copy the existing .ini file into the new location.
To find where MemoryAnalyzer is looking for the ini file, you can run:
sudo su
cd ...MemoryAnalyzer.app/Contents/MacOS/
dtruss ./MemoryAnalyzer 2>&1 | grep ini
If Memory Analyser is used from Eclipse, then edit your eclipse.ini file to increase the vm argument to -Xmx1024m or higher. This worked for me.
http://wiki.eclipse.org/index.php/MemoryAnalyzer/FAQ#Out_of_Memory_Error_while_Running_the_Memory_Analyzer
As suggested by others, it's a simple two-step process:
open the MemoryAnalyzer.ini file from your MAT installation directory
change the default -Xmx1024m to a larger size, e.g. if you have to analyze a 4GB heap dump you can replace -Xmx1024m with -Xmx5g or -Xmx6g
For more details refer to:
https://better-coding.com/solved-eclipse-mat-java-heap-space-error/
In my experience, set -Xms and -Xmx in MemoryAnalyzer.ini as high as your hardware allows. G1GC is faster, -XX:-UseGCOverheadLimit is needed because GC usage can be high and time-consuming, and -XX:+UseStringDeduplication may be the key to consuming less memory:
-vmargs
-Xms8g
-Xmx8g
-XX:-UseGCOverheadLimit
-XX:+UseG1GC
-XX:+UseStringDeduplication
If you are using a Mac, try running the executable inside the mat.app 'folder' with the -data option, by which you can specify a writable path:
cd mat.app/Contents/MacOS
./MemoryAnalyzer -data <writable_path>
I tried all the solutions here as well but was still getting the same error. The reason: Eclipse was trying to open the .hprof file as a text file, due to a wrong or unknown file type / editor association.
Solution: Right-click on the file, select Open With, then select Others, and select Eclipse Memory Analyzer.
This worked with a 700MB dump, and with a 2G dump on an Eclipse heap of about 600M.
An internal error has occurred. Java heap space
Answer: Go to your project workspace,
open the .settings folder,
and delete all files in the .settings folder.
After that you can compile,
and there is no more heap space error.
Enjoy :)
You may reduce your application's memory limit, and then take a dump again. Eclipse Memory Analyser loads the dump file into memory; I suspect that your Eclipse has less memory than the limit of the application.
You can also do the opposite and increase the memory limit for Eclipse, but if your application runs on a server, it will be hard to match its memory size.

Increase heap size in java for weka

I'm trying to increase the heap size in Java for Weka, which keeps crashing. I used the suggested line:
> java -Xmx500m -classpath
but I get the following error:
-classpath requires class path specification
I'm not sure what this means. Any suggestions?
What I found was that the actual issue was in the file 'RunWeka.ini' in '\Program Files (x86)\Weka-3-6'. I opened it with Notepad, and in the middle of the file there is a line 'maxheap=512m'.
I changed the line to read 'maxheap=2000m', saved the file, reloaded Weka, and this fixed my problems.
I'm not sure if this is the correct way to do it or not, but it worked for me.
Run this command in your terminal:
java -Xmx1024m -jar weka.jar
Omit the -classpath option. Use just -Xmx500m option.
So, instead of just:
java weka.core.Instances data/soybean.arff
you do:
java -Xmx500m weka.core.Instances data/soybean.arff
If you run Weka via some script (RunWeka.bat, for example), then you need to modify that script (with some text editor like Notepad).
If you're using Weka 3.8.1 on Windows, you can save yourself a lot of trouble by editing the javaOpts parameter. The maxheap parameter isn't used anymore, so you can set javaOpts like this in the RunWeka.ini file:
javaOpts= -Xmx1040m
Where 1040m is the amount of memory you want to allocate.
Mind that the file is case sensitive.
There are a lot of ways to set this up, but this is the fastest way to get Weka running on a Windows environment at this version.
Edit: If you want Weka to use more than 1GB on Windows, you need to have the JDK installed. A regular JRE won't do it.
The official Weka answer (for all operating systems and Weka versions) can be found on http://weka.wikispaces.com/OutOfMemoryException.
In case you are using a recent Weka version on Windows, the answer is:
Modify the maxheap parameter in the RunWeka.ini file.
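For example, the relevant RunWeka.ini line would then read (2048m being an illustrative value):
maxheap=2048m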
On Ubuntu I had the same problem,
but I solved it by increasing the amount of memory available to the Java Virtual Machine.
Run this: weka -m 1024m
You need to specify a classpath after -classpath; similar to the PATH env variable, it specifies the path where Java can find the classes.
The -Xmx500m setting looks fine, except that I would suggest using 512m.
For Mac OS, you have to edit a configuration file in order to increase the heap size of the Weka UI application.
I am repeating what I wrote in: Is there a workaround to solve "Java heap space" memory error when the max heap value has been already specified?
Quit out of Weka if it is running.
cd into /Applications/weka-XXX.app/Contents, or wherever your weka executable was installed. There will be a file called Info.plist there. It is an XML text file. I suggest you save a copy of it to another location, as you'll need to edit it in the next step.
Open the Info.plist (XML) file in your favorite text editor and look for a block that says "VMOptions". There should be a value that says "-Xmx256M" or something similar that specifies the maximum heap size. You should change that value to something bigger, such as "-Xmx1024M" (a sketch of the entry follows after these steps).
Start Weka.
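A sketch of the kind of entry those steps are looking for (the exact surrounding plist structure varies between Weka versions, so treat this as illustrative rather than exact):
<key>VMOptions</key>
<string>-Xmx1024M</string>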
I am running Weka 3.6 in Windows. This is what I did:
Go to the Weka installation directory and you will find a RunWeka.bat file. Open this file in a text editor and add the -Xmx argument to the java command line.
For instance, this sets 4GB of memory:
%_java% -Xmx4096m -classpath . RunWeka -i .\RunWeka.ini -w .\weka.jar -c %_cmd% "%2"
The official Weka answer is right. But crucially, first get rid of all JVM files and install the relevant 32- or 64-bit Java version. Not using the relevant version causes many problems, including the impossibility of increasing the heap further than 1024m (by changing the ini file).
Weka 3.9.2 also does not have the maxheap option anymore. RunWeka.ini has the javaOpts option, so you may change the line below to your required memory allocation:
javaOpts=%JAVA_OPTS% ----> javaOpts= -Xmx1024m
Here 1024m is the customised amount of memory you want to allocate.
The best way is to use this command:
java -Xmx1024m [weka classifier] -t [training file path]
The answers above are too old (the last one is from a year ago).
I had the same issue with my Weka (version 3.8.1) on Windows 10.
I had a problem updating the heap size; the way I fixed it was by adding an environment variable (under Control Panel) as follows:
JAVA_OPTS = -Xms30000m -Xmx30000m
Tip: Just ensure that RunWeka.ini is using this environment variable.
In the above example I give WEKA 30GB. It works.
Hope it will be helpful for some people.
You should also check whether the default thread stack size of 20MB is enough. Increase the value to 50MB in the file /Applications/weka-3-8-1-oracle-jvm.app/Contents/Info.plist (on Mac), like below:
<string>-Xss50M</string>
If you are using the Weka Workbench CLI or Knowledge Explorer, you need to make the change as below.
As the documentation suggests, the runtime parameter should be -Xmx[size_required]m, where [size_required] is the memory size you intend to allow in order to avoid the memory exception.
Open RunWeka.ini
Define maxheap=[size_required]G
In my case I kept maxheap=4G. One can also set maxheap=4096m and add -Xmx#maxheap# to all the run options in the # setups sections (prefixed with "cmd_"), next to the java commands,
like below:
cmd_default=javaw -Xmx#maxheap# ...............
cmd_console=cmd.exe /K start cmd.exe ..................
cmd_explorer=java -Xmx#maxheap# .................
cmd_knowledgeFlow=java -Xmx#maxheap#....................
maxheap=4G
Verify the change by restarting Weka and checking Help >> SystemInfo.
If you run Weka from the command line but not through java (i.e. typing weka into the command line), then instead of typing:
weka
specify the memory flag:
weka -m 1024m
This will specify 1024 megabytes.
If you're running Weka via weka.sh, you can run it directly with the memory option.
For example:
sh weka.sh -memory 10g
This will increase the heap size to 10GB (tested using Weka 3.8.4 on Ubuntu 18.04).
