I am getting an UnexpectedRollbackException. Here is the complete stack trace:
org.springframework.transaction.UnexpectedRollbackException: JTA transaction unexpectedly rolled back (maybe due to a timeout); nested exception is javax.transaction.RollbackException
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1031)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:732)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:701)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:321)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:116)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:635)
at com.cmates.userIcon.service.IconUpdaterServiceImpl$$EnhancerByCGLIB$$78838aa7.persist(<generated>)
at com.cmates.userIcon.service.ScheduledIconUpdaterServiceImpl.doScheduledTask(ScheduledIconUpdaterServiceImpl.java:125)
at com.cmates.profile.services.IconSyncSingletonImpl.process(IconSyncSingletonImpl.java:121)
at com.cmates.profile.services.IconSyncJob.executeInternal(IconSyncJob.java:25)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:86)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
Caused by: javax.transaction.RollbackException
at org.objectweb.jotm.TransactionImpl.commit(TransactionImpl.java:245)
at org.objectweb.jotm.Current.commit(Current.java:488)
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1028)
... 13 more
[18 Apr 2011 00:54:00,590] ERROR ErrorLogger - Job (DEFAULT.iconSyncJob threw an exception.
This exception suddenly started showing up in my logs. I didn't make any changes to my code.
I guess this might be due to a timeout?
I see a mention of a Quartz job in the stack trace. It looks like something has set up a periodic job, and the AOP transaction management has picked up on it.
It looks like your transaction lasts longer than the transaction timeout limit. You should increase the transaction timeout limit.
But I am not sure about that; you should post more of the stack trace so the reason for the rollback can be understood.
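If Spring's declarative transaction management is driving the commit here (the trace shows TransactionInterceptor), one way to raise the limit for just the long-running method is the timeout attribute of @Transactional. A minimal sketch, assuming the service from the trace is annotation-driven; the method signature and the 300-second value are purely illustrative:

import org.springframework.transaction.annotation.Transactional;

// Mirrors the service in the trace; the method body and signature are hypothetical.
public class IconUpdaterServiceImpl {

    // timeout is in seconds; 300 is only an example value
    @Transactional(timeout = 300)
    public void persist() {
        // ... long-running persistence work ...
    }
}

Alternatively, since JtaTransactionManager extends AbstractPlatformTransactionManager, you can set its defaultTimeout property if you would rather raise the limit globally.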
I've got a problem with a Spring web application that periodically runs into an error fetching a connection from my connection pool. Eventually in the logs I see entries like:
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only way I've found to recover once it hits this point is to restart Tomcat.
I think the most likely explanation is that I have some code somewhere that is not properly cleaning up its connection: not returning it to Hikari, leaving something open so Spring can't clean it up, etc.
To troubleshoot, I've set my Hikari config leakDetectionThreshold to 5000ms and enabled logging. After that, I see log entries like:
2018-04-24 19:53:56 WARN ProxyLeakTask:87 - Connection leak detection triggered for org.postgresql.jdbc.PgConnection@664ec666, stack trace follows
java.lang.Exception: Apparent connection leak detected
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:99)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:129)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:47)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:146)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:172)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:148)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1940)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1909)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1887)
at org.hibernate.loader.Loader.doQuery(Loader.java:932)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:349)
at org.hibernate.loader.Loader.doList(Loader.java:2615)
at org.hibernate.loader.Loader.doList(Loader.java:2598)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2430)
at org.hibernate.loader.Loader.list(Loader.java:2425)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:335)
at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:2129)
at org.hibernate.internal.AbstractSharedSessionContract.list(AbstractSharedSessionContract.java:981)
at org.hibernate.query.internal.NativeQueryImpl.doList(NativeQueryImpl.java:147)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1398)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1444)
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.orm.jpa.SharedEntityManagerCreator$DeferredQueryInvocationHandler.invoke(SharedEntityManagerCreator.java:379)
at com.sun.proxy.$Proxy163.getSingleResult(Unknown Source)
at com.mycompany.web.jpa.util.DBHelper.getPagedMappedDbResults(DBHelper.java:76)
at com.mycompany.web.jpa.repository.TaskRepositoryImpl.findTaskDetailsByStepIdAndIdIn(TaskRepositoryImpl.java:245)
......
So it is detecting a possible leak. It could be a false positive, I suppose? But this is also the only class in my app that does database access outside of the standard service/repository pattern often used in Spring apps, so it seems like a likely culprit, and it's my best lead at the moment.
Anyway, the last piece of non library code I see in the trace (ie stuff I wrote, so most likely to be the cause of the leak!) is my DBHelper::getPagedMappedDbResults method, relevant bit included here:
Query q = entityManager.createNativeQuery(countQueryText);
setQueryParameters(q, parameters);
long numActualResults = 0;
try {
    numActualResults = ((Number) q.getSingleResult()).longValue(); // line 76
} catch (Exception e) {
    System.out.println("just in case: " + e);
}
So basically I create a Query object from my EntityManager instance, set some parameters, and run it to get some results.
Is there something I need to be doing with a Query object when I'm done with it? q.cleanup()? I don't see anything like this from reading the docs, but am I not doing good housekeeping on this resource?
The entityManager itself is created from an @Autowired annotation. My understanding is that if I didn't "new" it to instantiate it and instead let the Spring framework autowire it, then Spring will do whatever cleanup is necessary. Is that right? Or do I need to be doing some cleanup after I use the entityManager?
Version details:
Tomcat 8 / Java 8
Spring 5.0.0.RELEASE
Spring Data Kay-RELEASE
Hibernate 5.2.3.Final
Hikari 2.4.5
Any advice or suggestions would be greatly appreciated, thanks!
What is the query? Is it heavy? Maybe you have a deadlock here? The connection management looks fine: you do not acquire the connection explicitly, so there is no need to release it. But the query might be long-running, so Hibernate is not able to complete it and release the connection.
Also, check the number of open connections on the DB side, and do some analysis there as well.
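For that DB-side check, since the trace shows the Postgres driver, something like the following works against the standard pg_stat_activity view (a sketch only; the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionMonitor {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point this at the same database.
        try (Connection c = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", "user", "pass");
             Statement s = c.createStatement();
             // Group server-side sessions by state (active, idle, idle in transaction, ...)
             ResultSet rs = s.executeQuery(
                     "select state, count(*) from pg_stat_activity group by state")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getLong(2));
            }
        }
    }
}

A steadily growing count of "idle in transaction" sessions while the pool starves would support the leak theory.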
I wrote a MapReduce job to scan an HBase table over a certain time range to count certain elements we need for analysis.
The mappers in the MR job keep failing, but I don't know why. It seems like each time I run the job, a different number of mappers fail. The YARN log (see below) from Cloudera Manager isn't helpful in pinpointing the problem, although someone said I might be running out of memory.
It seems to retry multiple times, but each time it fails. What do I need to do to stop it failing, or how can I log things to help me determine what is happening?
Below is a log from YARN for one of the mappers that failed.
Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions: Thu Jun 15 16:26:57 PDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:207)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:403)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:236)
at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:147)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:216)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Call to hslave35118.ams9.mysecretdomain.com/10.216.35.118:60020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:291)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
... 13 more
So it looks like, in my case, I needed to extend the timeout settings. In my Java program I had to add the following lines to make the exception go away:
conf.set("hbase.rpc.timeout","90000");
conf.set("hbase.client.scanner.timeout.period","90000");
The answer was found on this link on Cloudera's site
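For reference, a sketch of where those two settings fit in a typical HBase scan-counting driver. Everything here other than the two conf.set lines and the table name from the question is illustrative; the mapper just bumps a job counter:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ScanCountJob {

    // Minimal mapper: counts rows via a job counter; real analysis logic would go here.
    public static class RowCountMapper
            extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context ctx)
                throws IOException, InterruptedException {
            ctx.getCounter("analysis", "rows").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The fix from above: raise both timeouts before the Job is built.
        conf.set("hbase.rpc.timeout", "90000");
        conf.set("hbase.client.scanner.timeout.period", "90000");

        Scan scan = new Scan();
        scan.setCaching(500);        // rows per scanner.next() RPC; tune down if rows
                                     // are large so each RPC stays under the timeout
        scan.setCacheBlocks(false);  // recommended for MR scans

        Job job = Job.getInstance(conf, "hbase-scan-count");
        job.setJarByClass(ScanCountJob.class);
        TableMapReduceUtil.initTableMapperJob(
                "dbi_based_data", scan, RowCountMapper.class,
                NullWritable.class, NullWritable.class, job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}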
The following error occurs when an exception is thrown by myJDBCTemplate.queryForList(), before which setQueryTimeout(1) is set. I have a database with 1.2 million rows, and I expect a timeout exception to be thrown or printed when the statement is executed. So, basically, the timeout occurs, but the exception does not mention that.
I am using Spring Framework version 4.1.3.RELEASE in my pom.xml.
INFO: org.springframework.beans.factory.xml.XMLBeanDefinitionReader - Loading XML bean definition for class path resource [org/springframework/jdbc/support/sql-error-code.xml]
org.springframework.jdbc.UncategorizedSQLException: StatementCallback; uncategorized SQLException for SQL [select * from myTable where userCategory='1']; SQL state [70100]; error code [1317]; Query execution was interrupted; nested exception is java.sql.SQLException: Query execution was interrupted
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:416)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:471)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:481)
……..
Caused by: java.sql.SQLException: Query execution was interrupted.
From the answer found at "Query execution was interrupted, error #1317", the interruption occurs because of a timeout, which I think is the likely cause here.
Also, the exception states it is caused by java.sql.SQLException, but there are no exact details about why it occurred, so I am not sure whether it is because of the timeout or something else.
The error is clear in your stack trace:
error code [1317]; Query execution was interrupted
This means your query is being interrupted by an execution time limit; the error occurs when the query takes an unexpectedly long time to execute.
The error can be avoided by fetching the data in batches, executing the query repeatedly over bounded data ranges, as sketched below.
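A sketch of that batching idea with JdbcTemplate. The table and column come from the query in the question, while the page size and the LIMIT/OFFSET paging (MySQL syntax, which error code 1317 suggests) are illustrative:

import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;

public class BatchedFetcher {

    private final JdbcTemplate jdbcTemplate;

    public BatchedFetcher(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Fetch the rows page by page so no single statement outlives the timeout.
    public void fetchAll(int pageSize) {
        int offset = 0;
        while (true) {
            List<Map<String, Object>> page = jdbcTemplate.queryForList(
                    "select * from myTable where userCategory = '1' limit ? offset ?",
                    pageSize, offset);
            if (page.isEmpty()) {
                break; // no more rows
            }
            // ... process this batch ...
            offset += page.size();
        }
    }
}

Each statement then scans only a page, so a per-statement timeout like setQueryTimeout(1) applies to one small query rather than to the whole 1.2-million-row result.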
I've used EJB to implement the Command pattern. The EJB is a command service that executes business logic. I know that in J2EE, EJB manages transactions and also the transaction timeout.
<subsystem xmlns="urn:jboss:domain:transactions:1.1">
<core-environment>
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
<coordinator-environment default-timeout="600"/>
</subsystem>
As the configuration above shows, transactions are managed and allowed a maximum of 600 seconds to complete. Sometimes my app takes longer than 600 seconds to write to the database; right after that I try to send a message to a queue, and I get this error.
21:34:50,085 ERROR [org.hornetq.ra.HornetQRASessionFactoryImpl] (Thread-102) Could not create session: javax.resource.ResourceException: IJ000460: Error checking for a transaction
at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.getManagedConnection(TxConnectionManagerImpl.java:362)
at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:464)
at org.hornetq.ra.HornetQRASessionFactoryImpl.allocateConnection(HornetQRASessionFactoryImpl.java:837)
at org.hornetq.ra.HornetQRASessionFactoryImpl.createQueueSession(HornetQRASessionFactoryImpl.java:237)
at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24]
Caused by: javax.resource.ResourceException: IJ000459: Transaction is not active: tx=TransactionImple < ac, BasicAction: 0:ffff0a01071e:2dde2ba2:5514d7c5:d1 status: ActionStatus.ABORTED >
at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.getManagedConnection(TxConnectionManagerImpl.java:352)
... 63 more
21:34:50,117 ERROR [stderr] (Thread-102) javax.jms.JMSException: Could not create a session: IJ000460: Error checking for a transaction
21:34:50,118 ERROR [stderr] (Thread-102) at org.hornetq.ra.HornetQRASessionFactoryImpl.allocateConnection(HornetQRASessionFactoryImpl.java:881)
21:34:50,119 ERROR [stderr] (Thread-102) at org.hornetq.ra.HornetQRASessionFactoryImpl.createQueueSession(HornetQRASessionFactoryImpl.java:237)
I can resolve it by increasing the transaction timeout value, but that's not a good solution. Can anyone suggest another way to do this?
As a first step, try to divide the workload into smaller chunks, each relying on container-managed transactions, as in the sketch that follows.
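A sketch of that chunking approach with container-managed transactions. The bean and method names are hypothetical, and the worker lives in a separate bean so the container honors the REQUIRES_NEW attribute:

// ChunkedCommandBean.java - driver bean (names are hypothetical)
import java.util.List;

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class ChunkedCommandBean {

    @EJB
    private ChunkWorker worker;

    // Keep the driver loop outside any transaction so the loop itself
    // can never hit the 600-second coordinator limit.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void executeAll(List<List<String>> chunks) {
        for (List<String> chunk : chunks) {
            worker.processChunk(chunk); // each call runs in its own short transaction
        }
    }
}

// ChunkWorker.java - separate bean so REQUIRES_NEW takes effect on each call
@Stateless
public class ChunkWorker {

    // Each chunk commits independently, well inside the coordinator timeout.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void processChunk(List<String> chunk) {
        // ... persist this chunk ...
    }
}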
If that is simply not feasible, then you can consider using bean-managed transactions, as in the second sketch below.
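A minimal bean-managed sketch; the 900-second value and the business logic are placeholders. Note that setTransactionTimeout must be called before begin():

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class LongRunningCommandBean {

    @Resource
    private UserTransaction utx;

    public void execute() throws Exception {
        // Override the container default (600s in the subsystem config) for this work only.
        utx.setTransactionTimeout(900); // seconds; placeholder value
        utx.begin();
        try {
            // ... long-running database work, then the JMS send ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}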
When shutting down the Neo4j DB I receive this error:
[org.neo4j]: Exception when stopping org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource@45673f68 java.nio.DirectByteBuffer[pos=16 lim=1048576 cap=1048576], 1586985
java.lang.IllegalArgumentException: java.nio.DirectByteBuffer[pos=16 lim=1048576 cap=1048576], 1586985
at org.neo4j.test.impl.EphemeralFileSystemAbstraction$DynamicByteBuffer.put(EphemeralFileSystemAbstraction.java:966)
at org.neo4j.test.impl.EphemeralFileSystemAbstraction$EphemeralFileData.write(EphemeralFileSystemAbstraction.java:680)
at org.neo4j.test.impl.EphemeralFileSystemAbstraction$EphemeralFileChannel.write(EphemeralFileSystemAbstraction.java:488)
at org.neo4j.kernel.impl.nioneo.store.StoreFileChannel.write(StoreFileChannel.java:160)
at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore$1.perform(CommonAbstractStore.java:579)
at org.neo4j.kernel.impl.util.FileUtils.windowsSafeIOOperation(FileUtils.java:367)
at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.close(CommonAbstractStore.java:572)
at org.neo4j.kernel.impl.nioneo.store.NeoStore.closeStorage(NeoStore.java:289)
at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.close(CommonAbstractStore.java:552)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource.stop(NeoStoreXaDataSource.java:507)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.impl.transaction.XaDataSourceManager.stop(XaDataSourceManager.java:185)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
at org.neo4j.kernel.InternalAbstractGraphDatabase.shutdown(InternalAbstractGraphDatabase.java:822)
at org.neo4j.test.ImpermanentGraphDatabase.shutdown(ImpermanentGraphDatabase.java:170)
at org.springframework.data.neo4j.support.DelegatingGraphDatabase.shutdown(DelegatingGraphDatabase.java:270)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.springframework.beans.factory.support.DisposableBeanAdapter.invokeCustomDestroyMethod(DisposableBeanAdapter.java:350)
at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:273)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:565)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:541)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:870)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:510)
at org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:908)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:884)
at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:804)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.position(Buffer.java:244)
at org.neo4j.test.impl.EphemeralFileSystemAbstraction$DynamicByteBuffer.put(EphemeralFileSystemAbstraction.java:962)
... 31 more
2014-11-20 18:31:03.775+0000 ERROR [org.neo4j]: Exception when stopping org.neo4j.kernel.impl.transaction.XaDataSourceManager@4fd91628 Component 'org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource@45673f68' failed to stop. Please see attached cause exception.
org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource@45673f68' failed to stop. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:532)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.impl.transaction.XaDataSourceManager.stop(XaDataSourceManager.java:185)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
at org.neo4j.kernel.InternalAbstractGraphDatabase.shutdown(InternalAbstractGraphDatabase.java:822)
I assume it is caused by a timeout, as it only appears when instantiating a large number of nodes (> 100K). However, I cannot find a way to set any timeout using the setConfig/GraphDatabaseSettings APIs (unfortunately we cannot use a property file):
public GraphDatabaseService graphDatabaseService() {
    GraphDatabaseService graphDb = new GraphDatabaseFactory()
            .newEmbeddedDatabaseBuilder("db/my.db")
            .setConfig(GraphDatabaseSettings.nodestore_mapped_memory_size, "10M")
            .newGraphDatabase();
    return graphDb;
}
Do you know what the root cause of the issue is and how to circumvent it?
Thanks
F.