I am trying to install the SoapUI tool. After the installation, when it is executed, I am getting this error:
The JVM could not be started. The maximum heap size (-Xmx) might be
too large or an antivirus or firewall tool could block the execution
When installed to a different machine, it works fine.
Any suggestions?
This problem occurs because SoapUI tries to allocate the specified amount of memory as a single contiguous block, which is rarely available.
The solution to this problem is to edit the soapUI-x.x.x.vmoptions file, which can be found in
C:\Program Files\eviware\soapUI-x.x.x\bin
Edit this file and lower the -Xmx value; the default is 1200m, so make it 512m, and if that does not work, change it to an even lower value.
PS: x.x.x is the version of SoapUI; in my case it is 4.0.0.
-Xms sets the initial heap size.
-Xmx sets the maximum heap size.
So you can set the values as per your requirements.
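For reference, here is a hedged sketch of what the heap lines in soapUI-4.0.0.vmoptions might look like after the change (the -Xms value is an assumption; your file may contain other options, which should be left alone):
-Xms128m
-Xmx512m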
This error often occurs if you try to set too large a heap on a 32-bit OS. A 32-bit Windows process gets only about 2 GB of usable address space, and the JVM needs a contiguous block for the heap, so e.g. -Xmx1600m or more on 32-bit Windows will trigger this error.
Which OS and version of Java do you have on the machine that fails?
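You can check both from a command prompt; the exact output wording varies by vendor and version, but a 64-bit JVM identifies itself with a "64-Bit" marker, for example:
c:> java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)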
What I did with mine was kill all application processes that use Java, for example Mozilla Firefox. You can kill the process from the Windows Task Manager. After that, rerun SoapUI.
There is quite a simple fix to this SoapUI issue...
Ankit and Peter have mentioned it above... to help you (and others) with this, I have written a step-by-step tutorial for the fix, with screenshots. I hope this helps...!
You can check it here - http://quicksoftwaretesting.com/soapui-jvm-heap-size-xmx-error/
Neither of these solutions worked for me. What did work was starting the soapui.bat file in the aforementioned \bin directory.
This file sets the required Java environment settings.
Since I use Java a lot, I cannot set these as global environment variables, because that would impact my Oracle SQL Developer and other Java tools.
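For example (using the install path from the earlier answer; adjust for your version):
c:> cd "C:\Program Files\eviware\soapUI-4.0.0\bin"
c:> soapui.bat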
Make sure you downloaded the appropriate version (32-bit/64-bit) for your OS.
I am using elasticsearch and fscrawler to search about 7 TB of data. The process starts well but then just stalls after some time. It must be running out of memory, so I am trying to increase my heap using https://fscrawler.readthedocs.io/en/latest/admin/jvm-settings.html but I keep getting the error "invalid maximum heap size".
Is that the right way of setting up the heap? What am I missing?
I think you are using the 32-bit version of Java. If that's the case, you need to install the 64-bit JVM and make sure to update your JAVA_HOME to reflect the new version.
More detailed info can be found here.
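For example, on Linux you could confirm the JVM bitness and then, per the linked fscrawler docs, pass heap settings through the FS_JAVA_OPTS environment variable (the JDK path, heap values, and job name below are placeholders):
java -version                                # a 64-bit JVM prints "64-Bit Server VM"
export JAVA_HOME=/usr/lib/jvm/jdk-64bit      # placeholder path to a 64-bit JDK
FS_JAVA_OPTS="-Xmx4g -Xms4g" bin/fscrawler my_job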
I'm experiencing this weird problem that the JVM hangs forever very frequently.
I first observed the problem when my Java IDEs frequently hang the entire system GUI. IntelliJ IDEA hangs on indexing almost every single time upon start. Sometimes it proceeds to resolving dependency but always hangs in the end. When this happens, I can type in gnome-terminal, but the commands can't seem to be executed. I can't launch new applications with Alt-F2 or anything alike.
I had to switch to a text console and "killall -9 java" to kill the IDEA process and get control back. "kill -3 java" won't work. The log file contains nothing related, the thread dump is empty. Once the IDE hung, jstack cannot be attached to the process. "jstack -l pid" also hangs. "jstack -F pid" can't attach to the process. Visualvm hangs as well.
The CPU usage by the Java process is 0% and there is no I/O going on.
I've observed the same behavior when using Eclipse. Sometimes it hangs on startup, sometimes upon saving, and sometimes upon running a Java application.
Maven / sbt builds executed within text-only ttys cause the same kind of hang, so I guess it's not a window manager / desktop environment / display driver problem.
I highly suspect it's a file system or I/O issue but I have no clue how to debug that. I've tried fsck with no luck, and my system works perfectly fine when not running java programs.
Things I've ruled out:
- Permission issues: running IntelliJ with sudo doesn't help; it hangs 100% of the time.
- Display driver: I've tried both the Nvidia proprietary driver and nouveau, the open source one. Doesn't help.
- Window manager / desktop environment: I use Cinnamon, but I've tried running IntelliJ under Unity. Doesn't help.
- Java version: I've tried both Oracle Java 7 and Oracle Java 8. I'll probably try OpenJDK, but I doubt it would help.
- IntelliJ version: I've tried IntelliJ 13 through 14.1. All exhibited the same behavior.
- Limited memory: I have 16G RAM with 16G swap space, so memory should not be a limiting factor.
Kernel log doesn't look suspicious. I can't get any kind of log remotely indicating what went wrong.
Any ideas?
UPDATE (2015/04/29): The problem seems to have fixed itself after I accidentally kicked the power cable and cold restarted the computer... Still a mystery but IntelliJ is usable as of now.
Some things to check:
- The Java IDEs run best with a lot of RAM. I usually ask for at least 8G of memory for my dev workstation.
- Make sure you have a stable version of everything; look for known working versions/configurations on Ubuntu.
- You have to manually allocate memory in IntelliJ IDEA versions < 14 (see the sketch after this list). For example: How to increase IDE memory limit in IntelliJ IDEA on Mac?
- Besides system logs, run tools like top and see what's happening in terms of CPU and RAM when running the IDE.
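A hedged sketch of that manual allocation via the idea64.vmoptions file (idea.vmoptions for the 32-bit launcher; the values are examples, not recommendations):
-Xms512m
-Xmx2048m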
I had similar problems a while ago, but with Eclipse. The problem was that there was no swap space at all ;) - obviously that should not be a problem with 16 GB of RAM.
Could you post the JVM arguments for IntelliJ? I also have an idea: create another IntelliJ installation (e.g. go back to version 14) and see if it shows the same problem (and compare the JVM settings between the two).
Edit
OK, so try:
- Use a different JRE/JDK. If the problem disappears, it will tell us more.
- You are on Linux, which makes it easy to monitor several things. You said that there is no CPU utilization or heavy I/O - but how do you know that? It may be informative to gather some statistics - e.g. jstat for the JVM itself, or, since you suspect an I/O problem, for system information try:
iostat -hm -p sda 1
This prints I/O statistics for sda (if you have different discs, change the device parameter) at 1-second intervals (this can also be changed). Start it with the system and dump the output to a file - maybe some kind of 'disaster' happens just before the JVM hangs. Note: iostat is sometimes not available out of the box (on my Linux Mint it is not); install the sysstat package and the command will be available.
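For the JVM side, a hedged jstat example (<pid> is a placeholder for the IntelliJ process ID; 1000 is the sampling interval in milliseconds):
jstat -gcutil <pid> 1000
This prints GC counts and heap-occupancy percentages once a second, so you can see whether the heap fills up right before the hang.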
It seems to have been fixed after a cold restart caused by an accidental power loss. Weirdest problem I have ever seen.
I'm currently working on a project which requires a large amount of memory, and I cannot get Mule 3.4.0 to utilize over 4 GB of RAM (running on RHEL 6.2). I am using the Java HotSpot 64-bit server JVM 1.7.0_45-b18 and the community version of Mule.
I have been editing the wrapper.conf file and I have tried numerous settings to no avail.
I see there is a bug listed in the Mule JIRA: https://www.mulesoft.org/jira/browse/MULE-7018 which is closed against 3.4.0, but as incomplete.
My latest attempts have been to explicitly force it to take 8 GB of heap space right away, the following being the latest attempt:
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=8192
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=8192
I have tried setting the initmemory and maxmemory parameters to zero, in accordance with this old post about the wrapper: http://java-service-wrapper.996253.n3.nabble.com/4096MB-heap-limit-td1483.html - however, this causes Mule to not start properly.
I have also explicitly tried to pass additional JVM parameters through the wrapper:
wrapper.java.additional.6=-Xmx8192
wrapper.java.additional.7=-Xms8192
When doing this, I can see both memory settings being sent to the JVM (i.e. -Xmx8192 -Xms8192 first on the process line, followed by -Xms4096m -Xmx4096m). However, my top command shows no more than 4.2 GB of resident memory being taken by the JVM process. I realize that top's RES column is not a 100% definitive way to determine JVM memory usage, but I am under the impression that if I'm trying to allocate 8 GB out of the box, it should definitely exceed 4 GB. The machine has 60 GB of physical memory.
Has anyone discovered a way to get more than 4GB of heap space for Mule 3.4.0?
I believe that Anton's answer will work just fine; however, I couldn't clearly figure out when the Java Service Wrapper's license changed to GPL, and I didn't want to risk any negative implications of that change for my use case.
I found a way to make this work using the current version of the Java Service Wrapper included in Mule 3.4.0, and thus not have any additional license implications with the JSW's change to GPL.
If you modify wrapper.conf to explicitly set the min and max memory settings to 0:
wrapper.java.initmemory=0
wrapper.java.maxmemory=0
Then you can pass the memory parameters directly via additional properties, also in wrapper.conf:
wrapper.java.additional.6=-Xmx8192m
wrapper.java.additional.7=-Xms4096m
With initmemory and maxmemory explicitly set to zero, the wrapper will no longer pass its own memory parameters to the JVM first, and thus allows you to specify your own. Note that this does not work for maxPermGen - for some reason the JSW still specifies its own value for that.
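A quick, hedged way to verify which flags actually reached the JVM (<pid> is a placeholder):
ps -ef | grep java        # the command line should now show only your own -Xms/-Xmx values
jmap -heap <pid>          # prints the heap configuration the JVM is actually using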
As the Jira ticket comments say, you can download a later version of the Tanuki wrapper (and replace the wrapper files included in the Mule standalone under lib/boot) to overcome this restriction. So, download Tanuki 3.3.0, remove all files with "wrapper" in their name under the Mule standalone's lib/boot and its subfolders, and replace them with the files in the lib and bin folders of the Tanuki wrapper. The Tanuki executable (bin/wrapper) goes under lib/boot/exec, and the .so and .jar files go under lib/boot. Mule should start normally, but unfortunately I cannot test the 4G+ setting as I don't have a suitable machine available just now.
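A hedged sketch of that file shuffle (the wrapper file names follow the standard Tanuki layout; verify them against your actual download, and the /path/to/ prefix is a placeholder):
cd $MULE_HOME
rm lib/boot/exec/wrapper* lib/boot/*wrapper*
cp /path/to/wrapper-3.3.0/bin/wrapper lib/boot/exec/
cp /path/to/wrapper-3.3.0/lib/libwrapper.so lib/boot/
cp /path/to/wrapper-3.3.0/lib/wrapper.jar lib/boot/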
I tried to increase CompressedClassSpaceSize for a Mule instance running as a Windows service, to no avail. The value would not get picked up, until I finally figured out why.
I made the change inside the wrapper.conf file by adding this line:
wrapper.java.additional.4=-XX:CompressedClassSpaceSize=2G (by the way, 1G is the default)
but failed to check the wrapper-additional.conf file, where the .4 key was already taken, overriding my setting. I moved my change there, changed the key to the next consecutive number, and voila:
wrapper.java.additional.6=-XX:CompressedClassSpaceSize=2G
Some good commands to use:
c:> tasklist | findstr java
...take a note of the PID...
c:> jmap -heap <pid>
P.S. If you ever want to set this value when starting Mule standalone at the command line (as opposed to as a service), it would be:
c:> mule -start -M-XX:CompressedClassSpaceSize=2G
Happy coding!
Recently I've been getting the notorious error message OutOfMemoryError. I have a 64-bit Mac with 16 GB RAM and a 2x2.6 GHz quad core. Getting this error message simply doesn't make sense to me, because the same algorithm that I'm running (and that causes this error message) runs smoothly on another machine (Ubuntu, 16 GB RAM).
System.out.println(java.lang.Runtime.getRuntime().maxMemory());
When I run the above code on my Mac I get: 129,957,888 (without the commas, of course :-))
And when running this code on the ubuntu machine I get: 1,856,700,416
Can anyone tell me how I can increase my max memory in order to run my algorithm? Thanks!
I tried setting the default VM arguments in Eclipse to -Xms512m -Xmx4g, but nothing changed.
-Xmx and -Xms are the correct arguments to the java command to change heap size, but Eclipse has to be configured differently.
You're going to have to elaborate. Are you running a test in Eclipse, or outside of Eclipse? Just passing the "-Xmx" parameter to Eclipse won't do what you want, even if you do it in the correct way to actually change the max mem value for Eclipse (that requires prefixing it with "-vmargs"). If you want to change the max mem value for the forked JVM that's running your algorithm, you have to change the parameters in the run configuration.
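To make the distinction concrete (a hedged sketch; menu paths vary slightly by Eclipse version):
# eclipse.ini - raises the heap of Eclipse itself, not of your program
-vmargs
-Xmx1024m
# Run > Run Configurations... > your launch > Arguments > VM arguments
# - these go to the forked JVM that actually runs your algorithm
-Xms512m -Xmx4g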
It looks like matt b has this one covered. I just wanted to mention that you might not want to set the max heap size as high as 4GB unless your program will really need that much. From what I understand the JVM allocates all of that memory for itself when it starts, and then uses it to run code as needed. Making it allocate that much memory might cause performance problems with other applications you're running. Instead, try stepping it up in increments of 128MB, or more if your code takes a long time to fail. Alternatively, maybe you can use a memory profiler to see how much space you actually use? I have no idea if such a thing exists.
This is probably not a problem on your setup, but for mere mortals like me, blithely allocating that much memory could be problematic.
What is the best practice to solve a Java VM crash if the following conditions are true:
No native code of our own or from third parties; 100% pure Java.
The same program runs on many other systems without any problems.
PS: By VM crash I mean that the VM writes a dump file like hs_err_pid1234.log and terminates.
Read the hs_err_pid1234.log file (or whatever the error log file name is). There are usually clues in there. The next step depends on what you discover in the log.
Yes, it could be a bug in the specific version of the JVM implementation you are using, but I have also seen problems caused by memory fragmentation in the operating system. Windows, for example, is prone to pinning DLLs at inappropriate locations and, as a result, failing to allocate a contiguous block of memory when the JVM asks for one. Other out-of-memory problems can also manifest themselves through crash dumps of this type.
Update or replace your JVM. If you currently have the newest version, then try an older one; if you don't have the latest version, try updating to it. Maybe it's a known issue in your particular version?
Assuming the JVM version across machines is the same:
Figure out what is different about the machine where the JVM is crashing. Same OS and OS version? We have had problems with JVMs crashing on a particular version of Red Hat, for example. And we have also found some older Red Hat versions unable to cope with extra memory properly, resulting in running out of swap space. (Our solution was to upgrade Red Hat.)
Also, is the program doing exactly the same thing across machines? Is it accessing a shared filesystem? Is the file system mounted similarly on your machines (SMB/NFS etc)? Something must be different.
The log file should give you some idea of where the crash occurred (malloc for example).
Take a look at the stacktraces in the dump file, as it should tell you what was going on when the crash occurred.
As well as digging into the hs_err dump file, I'd also submit it to Sun or whoever made your JVM (I believe there are instructions on how to do so at the top of the file). It can't hurt.
32-bit? 64-bit? Amount of RAM in the client machine? Processor? OS? See if there is any connection between the systems; a connection may lead to a clue. If all else fails, consider using different major/minor versions of the JVM. Also, if the problem JUST started, can you get back to a point in time (via version control) where the program didn't crash? Look through the hs_err log; you may get an idea of what caused the crash. It could be a version of some other client library the JVM uses. Lastly, run the program in debug/profile mode and maybe you'll see some symptoms before the crash (assuming you can duplicate it).
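For example, to watch the process before a crash you could start the JVM with remote debugging enabled and attach from your IDE (a sketch; the port is arbitrary and yourapp.jar is a placeholder):
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar yourapp.jar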