I have two classes (Agent and Penalty), along with hasPoint (on Penalty) and hasWeight (on Agent) data properties. When I write this SWRL rule, I get an error:
Agent(?a) ^ hasWeight(?a,?x) ^ Penalty(?p) ^ hasPoint(?p,?y) ^ swrlb:add(?z,?x,?y) -> hasWeight(?a,?z)
After adding this rule, my reasoner no longer works. But if I write it like this, it works:
Agent(?a) ^ hasWeight(?a,100) ^ Penalty(?p) ^ hasPoint(?p,200) ^ swrlb:add(?z,100,200) -> hasWeight(?a,?z)
But I have already created instances, so I don't want to enter the values manually; I want them added automatically. How can I solve this?
ERROR 15:36:25 An error occurred during reasoning: GC overhead limit exceeded.
java.lang.OutOfMemoryError: GC overhead limit exceeded
WARN 15:36:43 Protege terminated reasoner.
ERROR 15:36:43 Internal reasoner error: {}
java.lang.OutOfMemoryError: GC overhead limit exceeded
ERROR 15:37:28 Uncaught Exception in thread 'AWT-EventQueue-0'
java.lang.OutOfMemoryError: GC overhead limit exceeded
ERROR 15:37:32 Uncaught Exception in thread 'AWT-EventQueue-0'
java.lang.OutOfMemoryError: GC overhead limit exceeded
Assigning more memory could be a solution.
Steps:
Open the Protege launcher script (run.bat on Windows, run.sh on Linux/macOS) in your text editor of choice.
Try increasing the value associated with -Xmx (this sets the maximum heap memory that will be allocated; -Xms sets the initial allocation).
For more details, check the "Start Protege from the command line" section of https://protegewiki.stanford.edu/wiki/Setting_Heap_Size
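For example, after the edit the java invocation inside the script might look something like this (just a sketch; the exact flags and jar name vary by Protege version):
java -Xms500M -Xmx4G -jar protege.jar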
Hope this helps!
I'm using Lucene v4.10.4. I have a pretty big index; it could be over a few GBs. So I get an OutOfMemoryError when initializing the IndexSearcher:
try (Directory dir = FSDirectory.open(new File(indexPath))) {
    // OutOfMemoryError is thrown here
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
}
How can I tell Lucene's DirectoryReader not to load more than 256 MB into memory at once?
Log
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:68)
at org.apache.lucene.util.fst.FST.<init>(FST.java:386)
at org.apache.lucene.util.fst.FST.<init>(FST.java:321)
at org.apache.lucene.codecs.blocktree.FieldReader.<init>(FieldReader.java:85)
at org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:192)
at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:108)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:62)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:923)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:53)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:67)
First, you should check the current heap size of your JVM:
java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
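If you'd rather check from inside Java, this minimal sketch prints the maximum heap the running JVM will attempt to use:
public class MaxHeap {
    public static void main(String[] args) {
        // maxMemory() returns the heap ceiling in bytes
        System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}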
If this number is not reasonable for your use case, you should increase it by running your program with the -Xmx option of the java command. A sample command assigning 8 GB of heap memory would look like:
java -Xmx8g -jar your_jar_file
Hope this helps.
After one of my import scripts had completed importing all data, I tried restarting it to grab any updated data. The first thing it does is grab the most recently updated record:
db.select().from(newClass).order('updatedAt desc').limit(1).one()
However, that caused the following error from my Node script:
Possibly unhandled OrientDB.RequestError: Java heap space
at Operation.parseError (/Users/gsquare567/node_modules/oriento/lib/transport/binary/protocol/operation.js:779:13)
at Operation.consume (/Users/gsquare567/node_modules/oriento/lib/transport/binary/protocol/operation.js:369:35)
at Connection.process (/Users/gsquare567/node_modules/oriento/lib/transport/binary/connection.js:324:17)
at Connection.handleSocketData (/Users/gsquare567/node_modules/oriento/lib/transport/binary/connection.js:250:17)
at Socket.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:748:14)
at Socket.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:410:10)
at emitReadable (_stream_readable.js:406:5)
at readableAddChunk (_stream_readable.js:168:9)
And I received the following server output:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to java_pid1694.hprof ...
Heap dump file created [2055557443 bytes in 37.799 secs]
Error on fetching record during browsing. The record has been skipped
Error on retrieving record #11:1023466 (cluster: user)
-> com.orientechnologies.orient.core.db.raw.ODatabaseRaw.read(ODatabaseRaw.java:252)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.executeReadRecord(ODatabaseRecordAbstract.java:1017)
-> com.orientechnologies.orient.core.tx.OTransactionNoTx.loadRecord(OTransactionNoTx.java:65)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordTx.load(ODatabaseRecordTx.java:264)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordTx.load(ODatabaseRecordTx.java:40)
-> com.orientechnologies.orient.core.iterator.OIdentifiableIterator.readCurrentRecord(OIdentifiableIterator.java:285)
-> com.orientechnologies.orient.core.iterator.ORecordIteratorClusters.hasNext(ORecordIteratorClusters.java:139)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLSelect.fetchFromTarget(OCommandExecutorSQLSelect.java:913)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLSelect.executeSearch(OCommandExecutorSQLSelect.java:397)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLSelect.execute(OCommandExecutorSQLSelect.java:358)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLDelegate.execute(OCommandExecutorSQLDelegate.java:60)
-> com.orientechnologies.orient.core.storage.OStorageEmbedded.executeCommand(OStorageEmbedded.java:94)
-> com.orientechnologies.orient.core.storage.OStorageEmbedded.command(OStorageEmbedded.java:83)
-> com.orientechnologies.orient.core.command.OCommandRequestTextAbstract.execute(OCommandRequestTextAbstract.java:59)
-> com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.command(ONetworkProtocolBinary.java:1181)
-> com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.executeRequest(ONetworkProtocolBinary.java:340)
-> com.orientechnologies.orient.server.network.protocol.binary.OBinaryNetworkProtocolAbstract.execute(OBinaryNetworkProtocolAbstract.java:169)
-> com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:45)
GC overhead limit exceeded
My other import script, running concurrently, also stopped due to the following:
"error":{"name":"OrientDB.RequestError","message":"Java heap space","data":{},"previous":[],"id":1,"type":"java.lang.OutOfMemoryError","hasMore":0}
After trying to run the original script again, I got the following output in my server:
Exception in thread "OrientDB WAL Flush Task (pumpup)" Error on client connection
Java heap spacejava.lang.OutOfMemoryError: Java heap space
Node script output:
Possibly unhandled OrientDB.RequestError: Java heap space
at Operation.parseError (/Users/gsquare567/node_modules/oriento/lib/transport/binary/protocol/operation.js:779:13)
at Operation.consume (/Users/gsquare567/node_modules/oriento/lib/transport/binary/protocol/operation.js:369:35)
at Connection.process (/Users/gsquare567/node_modules/oriento/lib/transport/binary/connection.js:324:17)
at Connection.handleSocketData (/Users/gsquare567/node_modules/oriento/lib/transport/binary/connection.js:250:17)
at Socket.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:748:14)
at Socket.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:410:10)
at emitReadable (_stream_readable.js:406:5)
at readableAddChunk (_stream_readable.js:168:9)
EDIT
After increasing the memory limit to 2GB, I was able to insert 5M records (instead of the previous 2M), but I am still hitting this error:
GC overhead limit exceeded
-> com.orientechnologies.orient.core.db.raw.ODatabaseRaw.read(ODatabaseRaw.java:252)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.executeReadRecord(ODatabaseRecordAbstract.java:1017)
-> com.orientechnologies.orient.core.tx.OTransactionNoTx.loadRecord(OTransactionNoTx.java:65)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordTx.load(ODatabaseRecordTx.java:264)
-> com.orientechnologies.orient.core.db.record.ODatabaseRecordTx.load(ODatabaseRecordTx.java:40)
-> com.orientechnologies.orient.core.iterator.OIdentifiableIterator.readCurrentRecord(OIdentifiableIterator.java:285)
-> com.orientechnologies.orient.core.iterator.ORecordIteratorClusters.hasNext(ORecordIteratorClusters.java:139)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLSelect.fetchFromTarget(OCommandExecutorSQLSelect.java:913)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLSelect.executeSearch(OCommandExecutorSQLSelect.java:397)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLSelect.execute(OCommandExecutorSQLSelect.java:358)
-> com.orientechnologies.orient.core.sql.OCommandExecutorSQLDelegate.execute(OCommandExecutorSQLDelegate.java:60)
-> com.orientechnologies.orient.core.storage.OStorageEmbedded.executeCommand(OStorageEmbedded.java:94)
-> com.orientechnologies.orient.core.storage.OStorageEmbedded.command(OStorageEmbedded.java:83)
-> com.orientechnologies.orient.core.command.OCommandRequestTextAbstract.execute(OCommandRequestTextAbstract.java:59)
-> com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.command(ONetworkProtocolBinary.java:1181)
-> com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.executeRequest(ONetworkProtocolBinary.java:340)
-> com.orientechnologies.orient.server.network.protocol.binary.OBinaryNetworkProtocolAbstract.execute(OBinaryNetworkProtocolAbstract.java:169)
-> com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:45)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Timer-0"
The issue here is that you are trying to sort all 5M records.
That operation has to load the whole dataset into memory to sort it (we actually have a plan to optimize it to avoid OOM in such cases, but it is not implemented yet).
So even if you specify limit 1, the whole set of records is loaded, and the query will be slow and consume a lot of memory.
To optimize that query, build an index over the updatedAt field.
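In OrientDB SQL that would be something along these lines (a sketch; I'm assuming the class is user, as in the error above, and a non-unique index):
CREATE INDEX user.updatedAt NOTUNIQUE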
I'm trying to import LinkedMDB (6.1M triples) into my local instance of jena-fuseki at startup:
/path/to/fuseki-server --file=/path/to/linkedmdb.nt /ds
and that runs for a minute, then dies with the following error:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.hp.hpl.jena.graph.Node$3.construct(Node.java:318)
at com.hp.hpl.jena.graph.Node.create(Node.java:344)
at com.hp.hpl.jena.graph.NodeFactory.createURI(NodeFactory.java:48)
at org.apache.jena.riot.system.RiotLib.createIRIorBNode(RiotLib.java:80)
at org.apache.jena.riot.system.ParserProfileBase.createURI(ParserProfileBase.java:107)
at org.apache.jena.riot.system.ParserProfileBase.create(ParserProfileBase.java:156)
at org.apache.jena.riot.lang.LangNTriples.tokenAsNode(LangNTriples.java:97)
at org.apache.jena.riot.lang.LangNTriples.parseOne(LangNTriples.java:90)
at org.apache.jena.riot.lang.LangNTriples.runParser(LangNTriples.java:54)
at org.apache.jena.riot.lang.LangBase.parse(LangBase.java:42)
at org.apache.jena.riot.RDFParserRegistry$ReaderRIOTFactoryImpl$1.read(RDFParserRegistry.java:142)
at org.apache.jena.riot.RDFDataMgr.process(RDFDataMgr.java:818)
at org.apache.jena.riot.RDFDataMgr.parse(RDFDataMgr.java:679)
at org.apache.jena.riot.RDFDataMgr.read(RDFDataMgr.java:211)
at org.apache.jena.riot.RDFDataMgr.read(RDFDataMgr.java:104)
at org.apache.jena.fuseki.FusekiCmd.processModulesAndArgs(FusekiCmd.java:251)
at arq.cmdline.CmdArgModule.process(CmdArgModule.java:51)
at arq.cmdline.CmdMain.mainMethod(CmdMain.java:100)
at arq.cmdline.CmdMain.mainRun(CmdMain.java:63)
at arq.cmdline.CmdMain.mainRun(CmdMain.java:50)
at org.apache.jena.fuseki.FusekiCmd.main(FusekiCmd.java:141)
Is there a way that I can bump up the memory limit or import the data in a less intensive way?
For comparison's sake, when I used a 1-million-triple source file, it imported in less than 10 seconds.
Increase the heap memory: java -Xmx2048M -jar fuseki-sys.jar ......
Open the fuseki-server script with an editor; you'll find the line JVM_ARGS=${JVM_ARGS:--Xmx1200M}. Modify it to JVM_ARGS=${JVM_ARGS:--Xmx2048M}.
Set JVM_ARGS when using the fuseki-server script.
Also note that --file=... reads the file into memory; this dataset may be too big to handle that way. If so, load the data into TDB and use a TDB database with Fuseki.
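A sketch of that route, assuming the Jena TDB command-line tools are available (paths are illustrative):
tdbloader --loc=/path/to/tdb-db /path/to/linkedmdb.nt
/path/to/fuseki-server --loc=/path/to/tdb-db /ds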
I want to collect a heap dump on JVM crash, so I wrote a simple piece of code:
import java.util.HashMap;
import java.util.Map;

public class Test {
    private String name;

    public Test(String name) {
        this.name = name;
    }

    public void execute() {
        // Keep inserting entries until the heap is exhausted
        Map<String, String> randomData = new HashMap<String, String>();
        for (int i = 0; i < 1000000000; i++) {
            randomData.put("Key:" + i, "Value:" + i);
        }
    }

    public void addData() {
    }

    public static void main(String[] args) {
        String myName = "Aniket";
        Test tStart = new Test(myName);
        tStart.execute();
    }
}
and I am running it as follows
[aniket#localhost Desktop]$ java -cp . -Xms2m -Xmx2m Test
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at Test.execute(Test.java:15)
at Test.main(Test.java:25)
I got the OutOfMemoryError I wanted, but there is no heap dump in the working directory (like the hs_err_pidXXXX.log file I expected). What am I missing? How do I get a heap dump?
Update:
I tried -XX:ErrorFile=. with still no luck. If the above is not the way to get a heap dump (by crashing the JVM), how can I crash my JVM to get those logs?
You are confusing an exception or error being thrown with a JVM crash.
A JVM crash occurs due to an internal error in the JVM; you cannot trigger this by writing a normal Java program (or at least you should not, unless you find a bug).
What you are doing is triggering an Error, which means the program continues to run until all the non-daemon threads exit.
The simplest tool to examine the heap is VisualVM, which comes with the JDK. If you want to trigger a heap dump on an OutOfMemoryError, you can use -XX:+HeapDumpOnOutOfMemoryError.
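For example, re-running the Test program above with that flag (by default the dump lands in the working directory as java_pid<pid>.hprof; -XX:HeapDumpPath can redirect it):
java -cp . -Xms2m -Xmx2m -XX:+HeapDumpOnOutOfMemoryError Test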
Use jmap:
jmap [options] pid
where pid is the process id of the application.
When you see the below:
Exception in thread "main" java.lang.OutOfMemoryError
it means your error or exception was handled by the exception handler. This is not a crash.
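As a quick illustration of jmap against a live process (the PID here is hypothetical; get the real one from jps):
jmap -histo 1234
This prints a histogram of loaded classes by instance count and size, which is often enough to spot what is filling the heap.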
Eclipse has an awesome Heap Analyzer
Also, you can use jps to get the PID, and then jmap for the heap itself.
In case you want to crash the JVM, your best bet would be native code.
Find the process id of the Java process you want to take the heap dump for:
ps -ef | grep java
Once you have the PID from the above command, run the command below to generate the heap dump:
jmap -dump:format=b,file=<fileName> <java PID>
You can pass the following JVM arguments to your application:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<file-path>
These arguments automatically trigger a heap dump at the specified file path when your application experiences an OutOfMemoryError. There are 7 different options for taking heap dumps from your application:
jmap
-XX:+HeapDumpOnOutOfMemoryError
jcmd
JVisualVM
JMX
Programmatic Approach
Administrative consoles
Details about each option can be found in this article. Once you have captured a heap dump, you can use tools like the Eclipse Memory Analyzer tool or HeapHero to analyze it.
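As an illustration of the programmatic approach, here is a minimal sketch using the HotSpot-specific HotSpotDiagnosticMXBean (the wrapper class and method names are my own; the MXBean API itself is real but HotSpot-only):
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    // Writes a heap dump to filePath; liveOnly=true dumps only live (reachable) objects
    public static void dumpHeap(String filePath, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(filePath, liveOnly);
    }
}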
Does Java 6 generate a thread dump in addition to a heap dump (java_pid14941.hprof)?
This is what happened to one of my applications:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to java_pid14941.hprof ...
I did find java_pid14941.hprof in the working directory, but didn't find any file containing a thread dump. I need to know what all the threads were doing when I got this OutOfMemoryError.
Is there any configuration option that will generate a thread dump in addition to the heap dump on an out of memory error?
If you're in a Linux/Unix environment you can do this:
-XX:OnOutOfMemoryError="kill -3 %p"
This way you don't have to have your application generate periodic thread dumps, and you'll get a snapshot when it actually chokes.
With %p you don't need to pass the PID; the JVM will automatically substitute the correct process id, as mentioned here.
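Wired into a launch command it might look like this (the jar name is just a placeholder):
java -XX:OnOutOfMemoryError="kill -3 %p" -jar myapp.jar
kill -3 sends SIGQUIT, which makes a HotSpot JVM print a full thread dump to stdout.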
How to generate thread dump java on out of memory error?
Your question can be simplified into:
how to generate a thread dump
and:
how to catch an out of memory error (don't pay attention to the naysayers here; they're missing the bigger picture, see my comment)
So it's actually quite easy, you could do it like this:
install a default uncaught exception handler
upon catching an uncaught exception, check if you have an OutOfMemoryError
if you have an OutOfMemoryError, generate yourself a full thread dump and either ask the user to send it to you by email or offer to send it automatically
Bonus: it works fine on 1.5 too :)
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    public void uncaughtException(final Thread t, final Throwable e) {
        // check whether e is (or wraps) an OutOfMemoryError,
        // then collect and report a thread dump here
    }
});
You may want to look into this:
e.getMessage();
and this:
Thread.getAllStackTraces();
I'm doing this all the time in an app that is shipped on hundreds of different 1.5 and 1.6 JVM (on different OSes).
It's possible to trigger a thread dump when OnOutOfMemoryError fires by using jstack, e.g.:
jstack -F pid > /var/tmp/<identifier>.dump
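One way to wire this up with the OnOutOfMemoryError hook mentioned above (the output path is just an illustration):
-XX:OnOutOfMemoryError="jstack -F %p > /var/tmp/oom_threads.dump"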
I don't think there is anything in Java that would provide you with on-exit thread dumps. I tackle this when necessary by having a cronjob that does a periodic kill -3 pid. Yes, it clutters the logs a bit, but the footprint is still negligible.
And if you are suffering from OOM, it might be beneficial to see how the situation evolved thread-wise.
Based on the accepted answer, I created a utility class. You can define it as a Spring bean and you're all set with extended logging.
import java.util.Iterator;
import java.util.Map;

import javax.annotation.PostConstruct;

import org.apache.commons.lang3.exception.ExceptionUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UncaughtExceptionLogger {

    private final static Logger logger = LoggerFactory.getLogger(UncaughtExceptionLogger.class);

    @PostConstruct
    private void init() {
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(final Thread t, final Throwable e) {
                String msg = ExceptionUtils.getRootCauseMessage(e);
                logger.error(String.format("Uncaught exception handler captured exception '%s'", msg), e);
                // Special case: the thread-limit OOM is only detectable via its message
                if (msg.contains("unable to create new native thread")) {
                    String dump = captureThreadDump();
                    logger.error(String.format(
                            "OutOfMemoryError has been captured for threads limit. Thread dump: \n %s", dump), e);
                }
                if (ExceptionUtils.getRootCause(e) instanceof OutOfMemoryError) {
                    String dump = captureThreadDump();
                    logger.error(String.format("OutOfMemoryError has been captured. Thread dump: \n %s", dump), e);
                }
            }
        });
    }

    public static String captureThreadDump() {
        /**
         * http://stackoverflow.com/questions/2787976/how-to-generate-thread-dump-java-on-out-of-memory-error
         * http://henryranch.net/software/capturing-a-thread-dump-in-java/
         */
        Map<Thread, StackTraceElement[]> allThreads = Thread.getAllStackTraces();
        Iterator<Thread> iterator = allThreads.keySet().iterator();
        StringBuffer stringBuffer = new StringBuffer();
        while (iterator.hasNext()) {
            Thread key = iterator.next();
            StackTraceElement[] trace = allThreads.get(key);
            stringBuffer.append(key + "\r\n");
            for (int i = 0; i < trace.length; i++) {
                stringBuffer.append(" " + trace[i] + "\r\n");
            }
            stringBuffer.append("");
        }
        return stringBuffer.toString();
    }
}
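If you use Java-based Spring configuration, registering it could be as simple as this sketch (the surrounding @Configuration class is assumed):
@Bean
public UncaughtExceptionLogger uncaughtExceptionLogger() {
    return new UncaughtExceptionLogger();
}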
-XX:OnOutOfMemoryError="kill -3 %p"
This JVM argument for taking a thread dump sadly doesn't work: the child process cannot send SIGQUIT to the parent.
Oracle has -XX:+CrashOnOutOfMemoryError, but this is only available from Java 8.
Assuming that you have a JDK (not just a JRE) installed on your target server, you can rely on the jcmd command to generate your thread dump; this way it will work whatever the OS of your server is.
The JVM option defining the command to execute on OutOfMemoryError is then:
-XX:OnOutOfMemoryError="jcmd %p Thread.print"
Where %p designates the current process id
Moreover, since we may want the thread dump written to a file, it is possible to capture it in the JVM log file by adding the JVM options -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput -XX:LogFile=jvm.log. This way the thread dump will be available in the file jvm.log, located in the working directory of the JVM, along with other information intended for diagnosing the JVM.
The full list of JVM options to add is then:
-XX:OnOutOfMemoryError="jcmd %p Thread.print" -XX:+UnlockDiagnosticVMOptions
-XX:+LogVMOutput -XX:LogFile=jvm.log
Be aware that the file is always created even when no OOME happens, so on a server, if you want to avoid having the previous JVM log file replaced at the next startup, you should consider adding the process id to the name of the log file, for example jvm_pid%p.log, but don't forget to remove the files regularly.
If the JVM option -XX:LogFile is not set, by default the name of the file is of type hotspot_pid%p.log where %p designates the current process id.
In practice, the only JVM option that is really needed is -XX:+HeapDumpOnOutOfMemoryError, as it generates an hprof file when an OutOfMemoryError occurs, and that file already contains a thread dump.
See below for how to get a thread dump from an hprof file using VisualVM: