How do I stop .mdmp files from being created - java

I have an instance of Solr, hosted with Tomcat, that recently started creating minidump files. There are no errors in any of the logs, and Solr continues to work without a hitch.
The files are approximately 14 GB each and are filling up the hard drive. Is there a way to turn this off while we investigate the issue?

Generally speaking, when the JVM crashes, the contents of the hs_err error log file (controlled by -XX:ErrorFile) are often enough to point to what the trouble may be.
To prevent the Oracle HotSpot JVM from generating Windows minidumps (.mdmp files), the JVM option to use on the command line is: -XX:-CreateMinidumpOnCrash
The option has existed since 2011 but was very difficult to find: How to disable minidump (mdmp) files generation with Java Hotspot JVM on Windows
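For example, assuming Tomcat is started through its standard scripts (the CATALINA_OPTS variable and the path below are illustrative, so adjust them for your setup), the two options can be combined like this:
set CATALINA_OPTS=-XX:-CreateMinidumpOnCrash -XX:ErrorFile=C:\solr\logs\hs_err_pid.log
This keeps the small hs_err text log for diagnosis while suppressing the multi-gigabyte .mdmp files.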

This article has decent information on both Linux and Windows JVM dump files. I have yet to test it myself on my current version of Java 7.
From that site:
Disabling Text dump Files
If you suspect problems with the creation of text dump files you can turn off the text dump file by using the option: -XXnoJrDump.
Disabling the Binary Crash Files
You can turn off the binary crash file by using the option: -XXdumpSize:none.
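Note that -XXnoJrDump and -XXdumpSize (with no colon after -XX) are JRockit options, so they apply to the Oracle JRockit JVM rather than HotSpot. A sketch of a JRockit launch (the jar name is hypothetical):
java -XXnoJrDump -XXdumpSize:none -jar myapp.jar
On HotSpot, use -XX:-CreateMinidumpOnCrash as described above instead.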

Are you using Java 7? If so, revert to Java 5 or 6. Lucene/Solr and Java 7 don't go well together, and that could be what creates the dump files. Otherwise, if everything is working, just disable the dumping of files.

I never found a way to disable the Java minidumps on Windows. The strange part here is that everything on the server worked correctly, aside from the hard drive filling up with minidumps.
We eventually re-installed everything, the same versions of Solr/Java/Tomcat, onto a Linux machine and didn't have the problem anymore. I would imagine that re-installing everything onto a Windows machine would also have fixed the problem. This was a strange one.

Related

GC overhead limit exceeded when reading large xls file

When I run my project in the NetBeans IDE (compiling and testing it), it works fine. It lets me read an .xls file of 25,000 rows, extract all the information, and save it into the database.
The problem appears when I generate the installer and deliver it. When I install my application and run it, I get this error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at jxl.read.biff.File.read(File.java:217)
at jxl.read.biff.Record.getData(Record.java:117)
at jxl.read.biff.CellValue.<init>(CellValue.java:94)
at jxl.read.biff.LabelSSTRecord.<init>(LabelSSTRecord.java:53)
at jxl.read.biff.SheetReader.read(SheetReader.java:412)
at jxl.read.biff.SheetImpl.readSheet(SheetImpl.java:716)
at jxl.read.biff.WorkbookParser.getSheet(WorkbookParser.java:257)
at com.insy2s.importer.SemapExcelImporter.launchImport(SemapExcelImporter.java:82)
I even used the POI libraries, but I got the same scenario.
UPDATE:
In the messages.log file of my application, I found these strange values (I had changed them in netbeans.conf):
Input arguments:
-Xms24m
-Xmx64m
-XX:MaxPermSize=256m
-Dnetbeans.user.dir=C:\Program Files\insy2s_semap_app
-Djdk.home=C:\Program Files\Java\jdk1.8.0_05
-Dnetbeans.home=C:\Program Files\insy2s_semap_app\platform
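Those values mean the packaged application runs with a 64 MB maximum heap, which is small for parsing a 25,000-row spreadsheet entirely in memory (jxl keeps the whole workbook in memory). A quick way to confirm which limits the installed application actually runs with is a minimal sketch like this, using only standard APIs; with -Xmx64m in effect, the first line prints roughly 64:
import java.lang.management.ManagementFactory;

public class HeapCheck {
    public static void main(String[] args) {
        // Effective maximum heap in MB
        System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
        // The arguments the JVM was actually started with
        System.out.println("JVM args: " + ManagementFactory.getRuntimeMXBean().getInputArguments());
    }
}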
OK, I got the answer... Let's begin at the beginning.
It is true that the libraries for handling Microsoft documents need a lot of resources, but not so much as to cause the application to fail, as I thought at first. In fact, the problem revealed a weakness and a shortcoming elsewhere.
Because I am working with NetBeans 8.0.2, the new property file
app.conf
should be taken into consideration. It has everything needed to configure our applications.
But it is not possible to edit it directly, so to increase the maximum permitted memory we have to change the values in
harness/etc/app.conf
in the NetBeans installation directory. For more details, look here.
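For illustration, the relevant line in harness/etc/app.conf typically looks something like this (the branding token and memory values below are placeholders; the -J prefix passes the option through to the JVM):
default_options="--branding ${branding.token} -J-Xms128m -J-Xmx512m -J-XX:MaxPermSize=256m"
Rebuilding the installer after raising -J-Xmx should then give the packaged application a large enough heap.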

<Outside any known module>, VTune Amplifier Error

Currently I'm using the VTune analyzer on a Linux system to profile Java code.
I generated a report by attaching it to the running process.
However, in the top-down tree, I usually see [Outside any known module], which takes up a certain amount of time.
When I click it, I can't see anything.
The strange thing is that sometimes it can generate a proper top-down report.
When VTune can generate a proper report, the trace file is usually about 500 MB.
On the other hand, when it can't, the trace file is only about 5 MB.
There are plenty of opinions that this is because of "code on the fly".
So I tried these steps after turning off the JIT option in the JDK.
Of course, I ran it as root.
But it doesn't work well.
My Ubuntu version is 14.04.1 LTS.
Please help me!
Any probable ideas would be helpful.
Thanks
When you start profiling, do you see a message like "Cannot profile the managed part of the target process. There is no Java* Attach API available. Only native part of the target process will be profiled."?
If yes, it means you are using a standalone JRE (one that is not part of a JDK). The JRE package does not include the Java Attach API needed to attach to and profile Java code. Could you please try a JDK?
Thanks,
Denis
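A quick way to check whether attach works against your process (a hedged suggestion, assuming a JDK's bin directory is on the PATH) is to try the JDK's own tools:
jps -l
jcmd <pid> VM.version
jps lists the running Java processes with their PIDs, and jcmd attaches to the given PID over the same Attach API; if jcmd cannot attach, VTune will not be able to profile the managed part either.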

Where are the JVM core dump files located on Windows 7?

I have a C++ application that loads the JVM for use of JNI. It has been working for years. Recently, the JNI initialization function JNI_CreateJavaVM() started to fail, calling the JVM abort callback function and crashing the application.
It is possible that some information regarding the crash may be available in Java core dump files, if indeed these are being written. Therefore, I would like to find these files and study them.
However, I have never worked with Java core dump files before. I do not know where they are located or what they are named.
I am running on a Windows 7 64-bit system, connecting to JRE 1.6 32-bit.
I would appreciate it if someone could tell me where the Java core dump files might be located.
I am not sure, but I think they are dumped to the working directory by default. You can check it with:
String workingDir = System.getProperty("user.dir");
Different JVMs might have options to specify a core dump directory.
Edit: There is a similar question and good answer here: Is it possible to debug core dumps when using Java JNI?
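If you want the crash files in a predictable place, HotSpot also accepts options to control this. A sketch with illustrative paths (for an embedded JVM, these would be passed as JavaVMOption strings to JNI_CreateJavaVM):
-XX:ErrorFile=C:\dumps\hs_err_pid%p.log
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\dumps
The first controls where the text crash log is written (%p expands to the process id); the second pair writes a heap dump on OutOfMemoryError.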

Eclipse Helios x86 issues on Windows 7 x64, even on clean system

I have a problem with Eclipse that has persisted for some time. When I moved to Windows 7 x64 on my notebook, Eclipse started to freeze, for example when using Content Assist (Code Helper) or any other feature. I use quite a bunch of plugins, so I tried deleting them all and checking a clean IDE, but this didn't help. I downloaded a fresh Eclipse Helios for Windows x64; that didn't help either. I even formatted the disk, reinstalled Windows, and installed only the JDK and Eclipse, but it still happens. What can I do?
Edit:
Memory: I did not change the memory settings at first, and the IDE froze; changing the memory to 512, 1024, or 2048 MB (via VM parameters, as in the eclipse.ini sketch below) did not stop the freezing.
Anti-Virus: I am using ESET Smart Security, but with or without it, Eclipse keeps freezing.
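For reference, the usual place to set those VM parameters is the eclipse.ini file next to eclipse.exe (the values below are illustrative; everything after -vmargs is passed to the JVM):
-vmargs
-Xms256m
-Xmx1024m
-XX:MaxPermSize=256m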
After much frustration, I disabled AVG and it worked fine.
Several leads.
Check whether the freeze is linked to heavy CPU or disk usage. Unlikely.
If not, then this is probably a network issue. Disable the firewall for a while and try again. Eclipse now reports your plugin usage at the beginning of a session, and it might be busy looking for a connection.
Close all editors from the previous session. In the past, Eclipse tried to access XML DTDs over the network instead of using the local catalog, and that would fail if you were offline, of course.
Finally, let me tell you that if this is for running Eclipse, you've selected the worst OS. OSX and Linux are much better options. I used to do the same. But for the last two years, I've run Windows only inside VirtualBox when I couldn't avoid it (TOAD, Macromedia Fireworks), and I wish I had migrated sooner.
The crucial points are how much memory you have for Eclipse, and whether you have any anti-virus software installed that needs to pre-parse all the class files Eclipse wants to look in.
Does it settle after some usage?

What can I do if a Java VM crashes repeatedly?

What is the best practice for solving a Java VM crash if the following conditions are true:
No native code of our own or from third parties; 100% pure Java.
The same program runs on many other systems without any problems.
PS: By VM crash I mean that the VM writes a dump file like hs_err_pid1234.log and terminates.
Read the hs_err_pid1234.log file (or whatever the error log file name is). There are usually clues in there. The next step depends on what you discover in the log.
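If you first need to find the crash logs, they land in the JVM's working directory by default (unless -XX:ErrorFile points elsewhere, which you should verify for your setup). A minimal sketch to list them:
import java.io.File;
import java.io.FilenameFilter;

public class FindCrashLogs {
    public static void main(String[] args) {
        // hs_err files are written to the working directory by default
        File dir = new File(System.getProperty("user.dir"));
        File[] logs = dir.listFiles(new FilenameFilter() {
            public boolean accept(File d, String name) {
                return name.startsWith("hs_err_pid") && name.endsWith(".log");
            }
        });
        if (logs != null) {
            for (File log : logs) {
                System.out.println(log.getAbsolutePath() + " (" + log.length() + " bytes)");
            }
        }
    }
}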
Yes, it could be a bug in the specific version of the JVM implementation you are using, but I have also seen problems caused by memory fragmentation in the operating system. Windows, for example, is prone to pinning DLLs at inappropriate locations and, as a result, failing to allocate a contiguous block of memory when the JVM asks for it. Other out-of-memory problems can also manifest themselves through crash dumps of this type.
Update or replace your JVM. If you currently have the newest version, try an older one; if you don't have the latest version, try updating to it. Maybe it's a known issue in your particular version?
Assuming the JVM version across machines is the same:
Figure out what is different about the machine where the JVM is crashing. Same OS and OS version? We have had problems with JVMs crashing on a particular version of Red Hat, for example. And we have also found some older Red Hat versions unable to cope with extra memory properly, resulting in running out of swap space. (Our solution was to upgrade Red Hat.)
Also, is the program doing exactly the same thing across machines? Is it accessing a shared filesystem? Is the file system mounted similarly on your machines (SMB/NFS etc)? Something must be different.
The log file should give you some idea of where the crash occurred (malloc for example).
Take a look at the stacktraces in the dump file, as it should tell you what was going on when the crash occurred.
As well as digging into the hs_err dump file, I'd also submit it to Sun or whoever made your JVM (I believe there are instructions on how to do so at the top of the file). It can't hurt.
32-bit or 64-bit? Amount of RAM in the client machine? Processor? OS? See if there is any connection between the systems; a connection may lead to a clue. If all else fails, consider using different major/minor versions of the JVM. Also, if the problem has only just started, can you get back (via version control) to a point where the program didn't crash? Look through the hs_err log; you may get an idea of what caused the crash. It could be the version of some other client library the JVM uses. Lastly, run the program under a debugger/profiler and maybe you'll see some symptoms before the crash (assuming you can reproduce it).
