JVM Settings for elasticsearch and fscrawler

I am using elasticsearch and fscrawler to search about 7TB of data. The process starts well but then just stalls after some time. It must be running out of memory, so I am trying to increase my heap as described at https://fscrawler.readthedocs.io/en/latest/admin/jvm-settings.html, but I keep getting the error "invalid maximum heap size".
Is that the right way of setting up the heap? What am I missing?

I think you are using the 32-bit version of Java. A 32-bit JVM cannot address a large heap (typically well under 4GB), which is why a big -Xmx is rejected as an invalid maximum heap size. If that's the case, you need to install the 64-bit JVM and make sure to update your JAVA_HOME to reflect the new version.
More detailed info can be found here.
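To confirm which architecture your JVM actually runs, a minimal check (a sketch; sun.arch.data.model is a HotSpot-specific property, os.arch is the portable fallback):

public class JvmBitness {
    public static void main(String[] args) {
        // prints "32" or "64" on Sun/Oracle JVMs
        System.out.println(System.getProperty("sun.arch.data.model"));
        // e.g. "x86" (32-bit) or "amd64"/"x86_64" (64-bit)
        System.out.println(System.getProperty("os.arch"));
    }
}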

Related

Why can't I increase my heap size?

I'm using JDK 1.8.0_25 and I'm trying to work with a big database; it weighs about 2GB.
I run the program through Eclipse.
I use a 64-bit Java version on 64-bit Windows 7.
I've got 8GB of RAM.
Every time I try connecting to it, I get Java heap errors, so I tried increasing my heap size, but I didn't manage to:
VisualVM says my max is still 2GB.
What I did was: Control Panel > Programs > Java > Java > View.
I've added the -Xmx6g parameter to my JDK (and I'm sure it's the right JDK), but still nothing works.
Any other suggestions on how to increase my heap size?
EDIT:
Here is the failing code, just to show you that it's not the code itself that is failing.
// requires java.sql.* imports; conn is declared elsewhere as java.sql.Connection
try {
    Class.forName("net.ucanaccess.jdbc.UcanaccessDriver");
    conn = DriverManager
            .getConnection("jdbc:ucanaccess://D:/Work/currentFolder/reallyBigDB.mdb");
} catch (ClassNotFoundException | SQLException e) {
    e.printStackTrace();
}
The conn = DriverManager.getConnection(...) line is the one that fails.
From the UcanAccess home page (first hit on Google):
When dealing with large databases and using the default memory settings (i.e., with driver property memory=true), it is recommended that users allocate sufficient memory to the JVM using the -Xms and -Xmx options. Otherwise, set the driver property memory=false:
Connection conn = DriverManager.getConnection("jdbc:ucanaccess://c:/pippo.mdb;memory=false");
Now, obviously, you have another problem: the heap size. That's an Eclipse issue. You could compile it in Eclipse and run it from the command line, passing the memory parameters there. That's at least a surefire way to make sure the parameters are applied.
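For example (a sketch; the classpath and main class are placeholders, assuming the UcanAccess jar and its dependencies sit in lib/):
java -Xms512m -Xmx6g -cp "bin;lib/*" com.example.MyApp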
Nevertheless, unless you absolutely have to load a huge chunk of data into memory, you usually don't want to. Luckily Ucanaccess has that parameter.
To increase the heap size of the JVM when using Eclipse:
Window -> Preferences -> Java -> Installed JREs
Then select the JRE you are using, click Edit, and enter the arguments for the JVM under Default VM arguments.
PS: As already mentioned in the comment section, you should not load the entire DB into memory, so it may be a better idea to review your code instead of increasing the heap.
Two other parameters, alternatives to memory=false, that may be useful are:
skipIndexes=true (it avoids the memory taken up by indexes that aren't integrity constraints)
lobScale=1 if the DB size is due to BLOB/OLE data. In this specific case both the load time and the memory footprint will be dramatically reduced.
They were both introduced with version 2.0.9.4.
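Putting these together, a sketch of a connection string combining the memory-saving properties above (property names as discussed; adjust the path and values to your own case):

Connection conn = DriverManager.getConnection(
        "jdbc:ucanaccess://D:/Work/currentFolder/reallyBigDB.mdb;"
        + "memory=false;skipIndexes=true;lobScale=1");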

Cannot get Mule 3.4.0 to utilize more than 4GB of memory

I'm currently working on a project which requires a large amount of memory, and I cannot get Mule 3.4.0 to utilize over 4GB of RAM (running on RHEL 6.2). I am using the Java HotSpot 64-bit server JVM 1.7.0_45-b18 and the community version of Mule.
I have been editing the wrapper.conf file and I have tried numerous settings to no avail.
I see there is a bug listed in the Mule JIRA: https://www.mulesoft.org/jira/browse/MULE-7018 which is closed against 3.4.0, but as incomplete.
My latest attempts have been to explicitly force it to take 8GB of heap space right away, the following being the most recent:
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=8192
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=8192
I have tried setting the initmemory and maxmemory parameters to zero, in accordance with this old post about the wrapper: http://java-service-wrapper.996253.n3.nabble.com/4096MB-heap-limit-td1483.html - however, this causes Mule to not start properly.
I have also explicitly tried to pass additional JVM parameters through the wrapper:
wrapper.java.additional.6=-Xmx8192
wrapper.java.additional.7=-Xms8192
When doing this, I can see both memory settings being sent to the JVM (i.e. -Xmx8192 -Xms8192 first on the process line, followed by -Xms4096m -Xmx4096m). However, my top command yields no more than 4.2GB of resident memory being taken by the JVM process. I realize that the top RES column is not a 100% definitive way to determine JVM memory usage, but I am under the impression that if I'm trying to allocate 8GB out of the box, it should definitely exceed 4GB. The machine has 60GB of physical memory.
Has anyone discovered a way to get more than 4GB of heap space for Mule 3.4.0?
I believe that Anton's answer will work just fine; however, I couldn't clearly figure out when the Java Service Wrapper's license changed to GPL, and I didn't want to risk any negative implications of that change for my use case.
I found a way to make this work using the version of the Java Service Wrapper already included in Mule 3.4.0, thus avoiding any additional license implications from the JSW's change to GPL.
If you modify wrapper.conf to explicitly set the min and max memory settings to 0:
wrapper.java.initmemory=0
wrapper.java.maxmemory=0
Then you can pass the memory parameters directly via additional properties, also in wrapper.conf:
wrapper.java.additional.6=-Xmx8192m
wrapper.java.additional.7=-Xms4096m
With the initmemory and maxmemory explicitly set to zero, the wrapper will no longer pass its own memory parameters to the JVM first, and thus allow you to specify your own. Note that this does not work for maxPermGen - for some reason the JSW still specifies its own value for that.
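To verify that the settings actually reached the JVM (top's RES column won't show the heap ceiling), you can log the maximum heap from any code running inside it; a one-line sketch:

System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");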
As the JIRA ticket comments say, you can download a later version of the Tanuki wrapper (and replace the wrapper files included in the Mule standalone under lib/boot) to overcome this restriction. So: download Tanuki 3.3.0, remove all files with "wrapper" in their name under the Mule standalone's lib/boot and its subfolders, and replace them with the files from the lib and bin folders of the Tanuki wrapper. The Tanuki executable (bin/wrapper) goes under lib/boot/exec, and the .so and .jar files go under lib/boot. Mule should start normally, but unfortunately I cannot test the 4GB+ setting as I don't have a suitable machine available just now.
I tried to increase CompressedClassSpaceSize for a Mule instance running as a Windows service, to no avail. The value would not get picked up, until I finally figured out why.
I made the change inside the wrapper.conf file by adding this line:
wrapper.java.additional.4=-XX:CompressedClassSpaceSize=2G (by the way, 1G is the default)
but failed to check the wrapper-additional.conf file, where the .4 key was already taken, overriding my setting. I moved my change and renumbered the key to the next free number, and voila:
wrapper.java.additional.6=-XX:CompressedClassSpaceSize=2G
Some good commands to use:
c:> tasklist | findstr java
...take note of the PID...
c:> jmap -heap <PID>
P.S. If you ever want to set this value when starting Mule standalone at the command line (as opposed to as a service), it is:
c:> mule -start -M-XX:CompressedClassSpaceSize=2G
Happy coding!

Getting JVM error after SOAP UI installation

I am trying to install the SoapUI tool. After the installation, when executed, I am getting this error:
The JVM could not be started. The maximum heap size (-Xmx) might be
too large or an antivirus or firewall tool could block the execution
When installed on a different machine, it works fine.
Any suggestions?
This problem occurs because SoapUI tries to obtain the specified amount of memory as a single block, which is rarely available.
So the solution is to navigate to the soapUI-x.x.x.vmoptions file, which can be found in
C: -> Program Files -> eviware -> soapUI-x.x.x -> bin
Edit this file and lower the -Xms value; the default is 1200m, so make it 512m, and if that does not work, change it to something lower still.
PS: x.x.x is the version of SoapUI; in my case it's 4.0.0.
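For reference, a sketch of the relevant line in soapUI-4.0.0.vmoptions after the edit (per the advice above; leave the rest of the file as shipped):
-Xms512m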
-Xms means initial heap size.
-Xmx means maximum heap size.
So you can set values as per your requirement.
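As a quick illustration of the two flags, a minimal sketch you can run with, say, java -Xms512m -Xmx1024m HeapSizes:

public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() starts near the -Xms value and may grow up to maxMemory() (-Xmx)
        System.out.println("current heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("max heap:     " + rt.maxMemory() / (1024 * 1024) + " MB");
    }
}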
This error often occurs if you try to set too much memory on a 32-bit OS such as Windows. E.g. if you use -Xmx1600m or more on 32-bit Windows, you will get this error.
Which OS and version of Java do you have on the machine which fails?
What I did with mine is kill all application processes that use Java, for example Mozilla Firefox. You can kill the process from Windows Task Manager. After that, rerun your SoapUI.
There is quite a simple fix to this SoapUI issue...
Ankit and Peter have mentioned it here... To help you (and others) with this, I have written a step-by-step tutorial for the fix, along with screenshots. I hope this helps you!
You can check it here - http://quicksoftwaretesting.com/soapui-jvm-heap-size-xmx-error/
Neither of these solutions worked for me. What did work was starting SoapUI via the soapui.bat file in the aforementioned \bin directory.
This file sets the required Java environment settings.
Since I use Java a lot, I cannot set these as general environment variables, as that would impact my SQL Developer from Oracle and other Java goodies.
Make sure you downloaded the appropriate version (32/64 bit) for your OS.

How to increase Java memory (algorithm runs on Ubuntu but not on Mac; same machine configuration)

Recently I've been getting the notorious error message OutOfMemoryError. I have a 64-bit Mac with 16GB RAM and 2x2.6GHz quad-core CPUs. Getting this error message simply doesn't make sense to me, because the same algorithm that I'm running (the one causing this error message) runs smoothly on another machine (Ubuntu, 16GB RAM).
System.out.println(java.lang.Runtime.getRuntime().maxMemory());
When I run the above code on my Mac I get: 129,957,888 (without the commas, of course :-))
And when running this code on the Ubuntu machine I get: 1,856,700,416
Can anyone tell me how I can increase my max memory in order to run my algorithm? Thanks!
I tried setting the default VM arguments -Xms512m -Xmx4g in my Eclipse, but nothing changed.
-Xmx and -Xms are the correct arguments to the java command to change heap size, but Eclipse has to be configured differently.
You're going to have to elaborate. Are you running a test in Eclipse, or outside of Eclipse? Just passing the "-Xmx" parameter to Eclipse won't do what you want, even if you do it in the correct way to change the max memory value for Eclipse itself (that requires prefixing it with "-vmargs"). If you want to change the max memory value for the forked JVM that's running your algorithm, you have to change the parameters in the run configuration.
It looks like matt b has this one covered. I just wanted to mention that you might not want to set the max heap size as high as 4GB unless your program will really need that much. From what I understand the JVM allocates all of that memory for itself when it starts, and then uses it to run code as needed. Making it allocate that much memory might cause performance problems with other applications you're running. Instead, try stepping it up in increments of 128MB, or more if your code takes a long time to fail. Alternatively, maybe you can use a memory profiler to see how much space you actually use? I have no idea if such a thing exists.
This is probably not a problem on your setup, but for mere mortals like me, blithely allocating that much memory could be problematic.
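If you do want to inspect actual usage without an external profiler (such tools do exist; VisualVM, mentioned in an earlier question here, is one), the standard java.lang.management API can report it from inside the process; a minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapUsage {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("used:      " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("committed: " + heap.getCommitted() / (1024 * 1024) + " MB");
        System.out.println("max:       " + heap.getMax() / (1024 * 1024) + " MB");
    }
}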

What can I do if a Java VM crashes repeatedly?

What is the best practice to solve a Java VM crash if the following conditions are true:
No native code of our own or from third parties; 100% pure Java.
The same program runs on many other systems without any problems.
PS: By VM crash I mean that the VM writes a dump file like hs_err_pid1234.log and terminates.
Read the hs_err_pid1234.log file (or whatever the error log file name is). There are usually clues in there. The next step depends on what you discover in the log.
Yes, it could be a bug in the specific version of the JVM implementation you are using, but I have also seen problems caused by memory fragmentation in the operating system. Windows, for example, is prone to pinning DLLs at inappropriate locations, and as a result fails to allocate a contiguous block of memory when the JVM asks for one. Other out-of-memory problems can also manifest themselves through crash dumps of this type.
Update or replace your JVM. If you currently have the newest version, then try an older one; if you don't have the latest version, try updating to it. Maybe it's a known issue in your particular version?
Assuming the JVM version across machines is the same:
Figure out what is different about the machine where the JVM is crashing. Same OS and OS version? We had problems with JVMs crashing on a particular version of Red Hat, for example. And we also found some older Red Hat versions unable to cope with extra memory properly, resulting in running out of swap space. (Our solution was to upgrade Red Hat.)
Also, is the program doing exactly the same thing across machines? Is it accessing a shared filesystem? Is the file system mounted similarly on your machines (SMB/NFS etc)? Something must be different.
The log file should give you some idea of where the crash occurred (malloc for example).
Take a look at the stacktraces in the dump file, as it should tell you what was going on when the crash occurred.
As well as digging into the hs_err dump file, I'd also submit it to Sun or whoever made your JVM (I believe there are instructions on how to do so at the top of the file). It can't hurt.
32-bit? 64-bit? Amount of RAM in the client machine? Processor? OS? See if there is any connection between the systems; a common factor may lead to a clue. If all else fails, consider using different major/minor versions of the JVM. Also, if the problem JUST started, can you get back (via version control) to a point where the program didn't crash? Look through the hs_err log; you may get an idea of what caused the crash. It could be the version of some other client library the JVM uses. Lastly, run the program under debug/profile and maybe you'll see some symptoms before the crash (assuming you can duplicate it).
