When I run my project in the NetBeans IDE (compiling and testing it), it works fine: it lets me read an xls file of 25,000 rows, extract all the information above, and save it into the database.
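For context, the import is essentially the usual jxl read loop; here is a minimal sketch of it (the file name and loop body are placeholders, not the actual importer):
import java.io.File;
import jxl.Sheet;
import jxl.Workbook;

public class XlsImport {
    public static void main(String[] args) throws Exception {
        // jxl loads the whole workbook into memory eagerly,
        // which is why a 25,000-row sheet can exhaust a small heap.
        Workbook workbook = Workbook.getWorkbook(new File("data.xls"));
        try {
            Sheet sheet = workbook.getSheet(0);
            for (int row = 0; row < sheet.getRows(); row++) {
                for (int col = 0; col < sheet.getColumns(); col++) {
                    String value = sheet.getCell(col, row).getContents();
                    // ... save value to the database ...
                }
            }
        } finally {
            workbook.close();
        }
    }
}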
The problem appears when I generate the installer and deliver it. When I install my application and run it, I get this error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at jxl.read.biff.File.read(File.java:217)
at jxl.read.biff.Record.getData(Record.java:117)
at jxl.read.biff.CellValue.<init>(CellValue.java:94)
at jxl.read.biff.LabelSSTRecord.<init>(LabelSSTRecord.java:53)
at jxl.read.biff.SheetReader.read(SheetReader.java:412)
at jxl.read.biff.SheetImpl.readSheet(SheetImpl.java:716)
at jxl.read.biff.WorkbookParser.getSheet(WorkbookParser.java:257)
at com.insy2s.importer.SemapExcelImporter.launchImport(SemapExcelImporter.java:82)
I even tried the POI libraries, but I got the same result.
UPDATE:
In the messages.log file of my application, I found these strange values (even though I had changed them in netbeans.conf):
Input arguments:
-Xms24m
-Xmx64m
-XX:MaxPermSize=256m
-Dnetbeans.user.dir=C:\Program Files\insy2s_semap_app
-Djdk.home=C:\Program Files\Java\jdk1.8.0_05
-Dnetbeans.home=C:\Program Files\insy2s_semap_app\platform
OK, I got the answer... Let's start from the beginning.
It is true that the libraries for handling Microsoft documents need a lot of resources, but not so many as to make the application fail, as I thought at the beginning. In fact, this problem revealed a weakness and a gap in my setup.
When working with NetBeans 8.0.2, the new
app.conf
file should be taken into consideration: it holds everything needed to configure our applications.
But it cannot be edited directly, so to increase the maximum permitted memory we have to change the values in
harness/etc/app.conf
in the NetBeans installation directory. For more details look here.
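For reference, the JVM flags live on the default_options line of harness/etc/app.conf; raising the heap limit there looks roughly like this (the defaults vary by NetBeans version, and the values below are only examples):
# harness/etc/app.conf in the NetBeans installation directory
default_options="--branding ${branding.token} -J-Xms64m -J-Xmx1024m"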
Inside NetBeans 8.0.2:
Steps: New File > Hibernate > Hibernate Reverse Engineering
Retrieval of the tables and views begins, gets to 98%, and then hangs. It freezes on the same view or the very next one.
I've tried on multiple machines - same result.
Is there a limit on the size of the input data in the wizard? Or maybe a problem with the database itself?
This is a VisualVM snapshot.
Thanks
One guess: try increasing the heap size of NetBeans. Edit the netbeans.conf file and increase the -Xmx and -Xms parameters. It is a pure guess, but who knows :)
Second suggestion :) Use VisualVM, YourKit, or whatever profiler you have, attach it to NetBeans, and find out where exactly it is blocking, and on which operation. Then we/you will know more about the nature of the error and how to resolve it :D
I would recommend cleaning the cache directory (details); after that, read this topic, and you should probably open a bug on the NetBeans Bugzilla.
I am experiencing something strange while using Oracle SQL Developer 4.0.1.14.
When I connect to a particular DB and run a simple select * from table1; I get the result set. (This happens regardless of the number of records in the table, which is few; the table does, however, contain over 170 fields.)
If I try and run it a second time I get a java heap space error.
If I try and run it again it starts throwing Protocol violation errors, with a different numbered protocol error each successive run.
I have never experienced this problem with other Oracle DBs, even when connecting through the same installation of SQL Developer.
The only way for me to be able to query that table again is to reconnect to the db. Other users of this same db do not experience this problem. Has anyone ever experienced this issue?
You can edit sqldeveloper.conf and change the size of the heap space by adding the following line:
AddVMOption -Xmx4096M
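The file usually lives under sqldeveloper/bin in the installation directory (the exact path may vary by version), so the edit looks like this:
# sqldeveloper/bin/sqldeveloper.conf
AddVMOption -Xmx4096M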
I know it is late, but maybe this could help someone else.
In Windows Explorer, enter %appdata%
That will bring you to:
C:\Users\username\AppData\Roaming\
Find your SQL Developer folder; in my case it was:
sqldeveloper
Find a file named: product.conf
Almost at the end of the file, change the line:
AddVMOption -Xmx800m
into :
AddVMOption -Xmx2048m
In my case I increased the heap size in "sqldeveloper.conf" to 3072M --> 3 GB, but this didn't fix the issue.
AddVMOption -Xms1024M
AddVMOption -Xmx3072M
I found that there was another worksheet with a different structure in the xlsx file I was trying to import. After deleting that extra, unneeded worksheet, I could import successfully. I was using a Mac.
Final note: converting the Excel file to CSV and importing the latter was much faster than importing the Excel file for the same data.
The application I am working on suddenly crashed with
java.io.IOException: ... Too many open files
As I understand the issue, it means that files are opened but not closed.
The stack trace, of course, appears after the fact and can only help you understand what happened before the error occurred.
What would be an intelligent way to search your code base for this issue, which only seems to occur when the app is under a high-stress load?
Use lsof -p pid to check what is causing the leak of file references.
Use ulimit -n to see the limit on open file references for a single process.
Check all the I/O resources in your project: are they released promptly? Note that files, processes, and sockets (including HTTP connections) are all I/O resources; see the sketch after this list.
Sometimes too many threads will cause this problem too.
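To illustrate the third point, here is a minimal sketch of the usual fix (the file name is a placeholder): try-with-resources releases the descriptor on every exit path, while the naive variant loses one whenever an exception skips close().
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ResourceExample {
    // Leaky: if readLine() throws, close() is never reached
    // and the file descriptor stays open.
    static String firstLineLeaky() throws IOException {
        BufferedReader reader = Files.newBufferedReader(Paths.get("data.txt"));
        String line = reader.readLine();
        reader.close();
        return line;
    }

    // Safe: try-with-resources closes the reader even on exceptions.
    static String firstLineSafe() throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("data.txt"))) {
            return reader.readLine();
        }
    }
}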
I think the best way is to use a tool specifically designed for the purpose, such as this one:
This little Java agent is a tool that keeps track of where/when/who opened files in your JVM. You can have the agent trace these operations to find out about the access pattern or handle leaks, and dump the list of currently open files and where/when/who opened them.
In addition, upon "too many open files" exception, this agent will dump the list, allowing you to find out where a large number of file descriptors are in use.
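Attaching such an agent uses the standard -javaagent mechanism; assuming the agent jar is called file-leak-detector.jar (the jar name and its options depend on the tool's version, so check its documentation), the launch would look like:
java -javaagent:file-leak-detector.jar -jar yourapp.jar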
I seem to remember YourKit also having some facilities around this, but can't find any specific information at the moment.
What OS? If it's linux/mac, there is information under /proc that should help. On Windows, use the Process Explorer.
As far as searching the code base, perhaps look for code that catches or throws IOException - I think I/O methods that already catch/throw it have a high likelihood of needing a close() call.
Have you tried attaching to the running process using jvisualvm (in the JDK bin directory since Java 6 update 7)? You can open the running process and take a heap dump (which, if you have an older JDK, you will need to analyze using Eclipse, IntelliJ, NetBeans, et al.).
In JDK 7 the heap dump button is under the "Monitor" tab. It creates a heap dump tab; in its "Classes" sub-tab you can check whether any classes that open files exist in high quantity. Another very useful feature is heap dump comparison: take a reference heap dump, let your app run a bit, then take another and compare the two (the link to compare is on the "[heapdump]" tab you get when you take one). There is also a JVM flag for taking a heap dump on crash or an OOM exception (-XX:+HeapDumpOnOutOfMemoryError); you can go down that route if comparing heap dumps does not point to an obvious culprit class. The "Instances" sub-tab in the heap dump diff will also show you what was allocated between the two dumps, which may help as well.
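For reference, the crash/OOM heap-dump flag mentioned above is enabled like this (the dump path and jar name are placeholders):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps -jar yourapp.jar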
jvisualvm is an awesome tool that does not get enough mentions.
I have an instance of Solr, hosted with Tomcat, that recently started creating minidump files. There are no errors in any of the logs, and Solr continues to work without a hitch.
The files are approximately 14 GB and are filling up the hard drive. Is there a way to turn this off while we investigate the issue?
Generally speaking, when the JVM crashes, the content of the hs_err error log file (controlled by -XX:ErrorFile) is often enough to point to what the trouble may be.
To prevent the Oracle HotSpot JVM from generating Windows minidumps (.mdmp files), the JVM option to use on the command line is: -XX:-CreateMinidumpOnCrash
It has existed since 2011 but was very difficult to find: How to disable minidump (mdmp) files generation with Java Hotspot JVM on Windows
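For example (the jar name is a placeholder):
java -XX:-CreateMinidumpOnCrash -jar yourapp.jar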
This article has decent information on both Linux and Windows JVM dump files. I have yet to test it myself on my current version of Java 7...
From that site:
Disabling Text dump Files
If you suspect problems with the creation of text dump files you can turn off the text dump file by using the option: -XXnoJrDump.
Disabling the Binary Crash Files
You can turn off the binary crash file by using the option: -XXdumpSize:none. (Note that this option and -XXnoJrDump above are JRockit options; they will not work on HotSpot.)
Are you using Java 7? In that case, revert to Java 5 or 6. Lucene/Solr and Java 7 don't go well together, and it could be this that creates the dump files. Otherwise, if everything is working, just disable the dumping of files.
I never found a way to disable the Java minidumps on windows. The strange part here is that everything on the server worked correctly, besides the hard drive filling up with minidumps.
We eventually re-installed everything, same version of Solr/Java/Tomcat onto a linux machine and didn't have the problem any more. I would imagine that re-installing everything onto a windows machine would have also fixed the problem. This was a strange one.
What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:
-agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt
This generates output in the hprof ("JAVA PROFILE 1.0.1") format.
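For context, the complete launch command with that agent looks like this (the application jar is a placeholder):
java -agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt -jar myapp.jar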
I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason the files that are generated using the current version of the Sun JVM fail to load in the current version of HPjmeter:
java.lang.NullPointerException
at com.hp.jmeter.f.jb.a(Unknown Source)
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
Exception in thread "HPeprofDataFileReaderThread" java.lang.AssertionError: null pointer exception from loader
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
(Why would they obfuscate the bytecode for a free product?!)
Two questions arise from this:
Does anyone know the cause of this HPjmeter error? (EDIT: Yes--see below)
What other tools exist to read hprof files? And why are there none from Sun (are there)?
I know the Eclipse TPTP and other tools can monitor JVMTI data on the fly, but I need a solution that can process the generated hprof files after the fact, since the deployed machine has only a JRE (not a JDK) installed.
EDIT: A very helpful HPjmeter developer replied to my question on an HP ITRC forum, indicating that heap=dump needs to be included in the -agentlib options temporarily, until a bug in HPjmeter is fixed. This information makes HPjmeter viable again, but I will still leave the question open to see if anyone knows of any other tools.
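In other words, the workaround is to prepend heap=dump to the original agent options:
-agentlib:hprof=heap=dump,cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt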
EDIT: As of version 4.0.00 of HPjmeter (available 05/2009) this bug has been fixed.
YourKit Java Profiler is able to read hprof snapshots (I am not sure if only for memory profiling or for CPU as well). It is not free, but it is by far the best Java profiler I have ever used. It presents the results in a clear, intuitive way and performs well on large data sets. The documentation is also pretty good.
For viewing and analyzing the output of hprof=samples or hprof=cpu I have used PerfAnal with good results. The GUI is a bit spartan, but very useful.
PerfAnal is a free download (GPL, originally an example project in the book Java Programming on Linux).
See this article:
http://www.oracle.com/technetwork/articles/javase/perfanal-137231.html
for more information and the download.
Normally you can just run
java -jar PerfAnal.jar hprof.java.txt
You may need to fiddle with -Xmx for large hprof files.
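For example (the heap size here is just a guess; size it to your file):
java -Xmx2048m -jar PerfAnal.jar hprof.java.txt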
I am not 100% sure it'll work (it sounds like it will), and I am not sure it'll show it in the format you want... but have you thought about VisualVM?
I believe it'll open up the resulting file.
I have been using the Eclipse Memory Analyzer successfully for analyzing various performance problems. First of all, install the tool in Eclipse as described on the project web page.
After that, you can create a dump file, knowing the pid of the JVM to be analyzed:
jmap -dump:format=b,file=<filename>.hprof <jvm_pid>
Then just import the .hprof file into Eclipse. It has some automatic reports that try to point out possible problems (for me, they usually do not work).
Edit:
Answering the comment: you are right, it is more of a leak finder for Java. For performance problems, I have played with JRat on small projects. It shows the time consumed per method, the number of times a method is called, the hierarchy of calls, etc. The only problem is that, as far as I know, it does not support .hprof files. To use it, you need to execute your program adding a VM argument:
-javaagent:<path>/shiftone-jrat.jar
This will generate a directory with the profile captured by the tool. Then, execute
java -jar shiftone-jrat.jar
and open the trace. Even though it is a simple tool, I think it could be useful.