I'm using Spring Neo4j (version 3.3.4) and trying to use its transaction support, configured as described in the Spring Neo4j documentation. Everything else works fine, but rollback does not.
For example, in one transaction I tried to add one new node and one node that already exists. A RuntimeException was raised, so the transaction should have rolled back, but the first node was created and remains in the database. The log nevertheless shows that a rollback happened. Has anyone had the same problem, or does anyone know how to fix it? Thanks in advance.
The log is as follows:
[2018-08-14 17:48:20.845][http-nio-8888-exec-4] DEBUG o.s.d.n.t.Neo4jTransactionManager - Rolling back Neo4j transaction [org.neo4j.ogm.drivers.bolt.transaction.BoltTransaction#421cb02f] on Session [org.neo4j.ogm.session.Neo4jSession#2f8c5762]
[2018-08-14 17:48:20.845][http-nio-8888-exec-4] DEBUG o.n.o.d.b.t.BoltTransaction - Rolling back native transaction: org.neo4j.driver.internal.ExplicitTransaction#5e10b6b3
[2018-08-14 17:48:20.846][http-nio-8888-exec-4] DEBUG o.neo4j.ogm.transaction.Transaction - Thread 54: Rollback transaction extent: 0
[2018-08-14 17:48:20.846][http-nio-8888-exec-4] DEBUG o.neo4j.ogm.transaction.Transaction - Thread 54: Rolled back
[2018-08-14 17:48:20.846][http-nio-8888-exec-4] DEBUG o.neo4j.ogm.transaction.Transaction - Thread 54: Close transaction extent: 0
[2018-08-14 17:48:20.846][http-nio-8888-exec-4] DEBUG o.neo4j.ogm.transaction.Transaction - Thread 54: Closing transaction
[2018-08-14 17:48:20.846][http-nio-8888-exec-4] DEBUG o.s.d.n.t.Neo4jTransactionManager - Closing Neo4j Session [org.neo4j.ogm.session.Neo4jSession#2f8c5762] after transaction
[2018-08-14 17:48:20.846][http-nio-8888-exec-4] DEBUG o.s.d.n.t.Neo4jTransactionManager - Resuming suspended transaction after completion of inner transaction
[2018-08-14 17:48:20.848][http-nio-8888-exec-4] ERROR c.s.k.c.s.i.GraphOperationServiceImpl - Error in committing directives: Node(19354) already exists with label `Product` and property `prodName` = 'test1'
org.neo4j.driver.v1.exceptions.ClientException: Node(19354) already exists with label `Product` and property `prodName` = 'test1'
[2018-08-14 17:48:20.849][http-nio-8888-exec-4] DEBUG o.s.j.d.DataSourceTransactionManager - Initiating transaction rollback
[2018-08-14 17:48:20.849][http-nio-8888-exec-4] DEBUG o.s.j.d.DataSourceTransactionManager - Rolling back JDBC transaction on Connection [HikariProxyConnection#1121924585 wrapping com.mysql.jdbc.JDBC4Connection#41813449]
[2018-08-14 17:48:20.850][http-nio-8888-exec-4] DEBUG o.s.j.d.DataSourceTransactionManager - Releasing JDBC Connection [HikariProxyConnection#1121924585 wrapping com.mysql.jdbc.JDBC4Connection#41813449] after transaction
[2018-08-14 17:48:20.850][http-nio-8888-exec-4] DEBUG o.s.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
This is solved; I'm posting the answer in case someone else has the same problem.
The problem is that the rollback is performed as a JDBC transaction on a MySQL connection instead of through the Neo4j transaction manager, as the following log line indicates. It happens silently: rolling back the Neo4j transaction reports no error.
[2018-08-14 17:48:20.849][http-nio-8888-exec-4] DEBUG o.s.j.d.DataSourceTransactionManager - Rolling back JDBC transaction on Connection [HikariProxyConnection#1121924585 wrapping com.mysql.jdbc.JDBC4Connection#41813449]
The solution is to give the Neo4jTransactionManager bean an explicit name instead of the default name "transactionManager", so that it cannot be confused with the default MySQL transaction manager, and then to name that manager in the transactional annotation, e.g. @Transactional("neo4jTransactionManager").
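A minimal sketch of that configuration, assuming a typical Spring Data Neo4j (OGM) plus JDBC setup; everything except the "neo4jTransactionManager" bean name is illustrative:

import javax.sql.DataSource;
import org.neo4j.ogm.session.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.transaction.Neo4jTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class TransactionConfig {

    // Keeps the default bean name "transactionManager", so plain
    // @Transactional methods still use the MySQL data source.
    @Bean
    public DataSourceTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    // The method name becomes the bean name, giving the Neo4j manager
    // an explicit, non-default identity.
    @Bean
    public Neo4jTransactionManager neo4jTransactionManager(SessionFactory sessionFactory) {
        return new Neo4jTransactionManager(sessionFactory);
    }
}

Graph-writing service methods then select it explicitly with @Transactional("neo4jTransactionManager"), while JDBC methods keep using the default manager.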
I'm working with a FileMaker 16 datasource through the official JDBC driver in Spring Boot 2 with Hibernate 5.3 and Hikari 2.7.
The FileMaker server's performance is poor; a single SQL query can take up to a minute on big tables. This sometimes results in connection leaking: the connection pool fills up with active connections that are never released.
The question is how to force active connections that have been hanging in the pool for, say, two minutes to close, moving them back to idle and making them available for use again.
As an example, I'm accessing the FileMaker datasource through a RestController using the findAll method in org.springframework.data.repository.PagingAndSortingRepository:
@RestController
public class PatientController {

    @Autowired
    private PatientRepository repository;

    @GetMapping("/patients")
    public Page<Patient> find(Pageable pageable) {
        return repository.findAll(pageable);
    }
}
Calling /patients a few times in a row causes the connections to leak; here's what Hikari reports:
2018-09-20 13:49:00.939 DEBUG 1 --- [l-1 housekeeper]
com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats
(total=10, active=10, idle=0, waiting=2)
It also throws exceptions like this:
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128) ~[HikariCP-2.7.9.jar!/:na]
What I need is this: if repository.findAll takes more than N seconds, the connection must be killed and the controller method must throw an exception. How can I achieve that?
Here's my Hikari config:
allowPoolSuspension.............false
autoCommit......................true
catalog.........................none
connectionInitSql...............none
connectionTestQuery............."SELECT COUNT(*) FROM Clinics"
connectionTimeout...............30000
dataSource......................none
dataSourceClassName.............none
dataSourceJNDI..................none
dataSourceProperties............{password=<masked>}
driverClassName................."com.filemaker.jdbc.Driver"
healthCheckProperties...........{}
healthCheckRegistry.............none
idleTimeout.....................600000
initializationFailFast..........true
initializationFailTimeout.......1
isolateInternalQueries..........false
jdbc4ConnectionTest.............false
jdbcUrl.........................jdbc:filemaker://***:2399/ec_data
leakDetectionThreshold..........90000
maxLifetime.....................1800000
maximumPoolSize.................10
metricRegistry..................none
metricsTrackerFactory...........none
minimumIdle.....................10
password........................<masked>
poolName........................"HikariPool-1"
readOnly........................false
registerMbeans..................false
scheduledExecutor...............none
scheduledExecutorService........internal
schema..........................none
threadFactory...................internal
transactionIsolation............default
username........................"CHC"
validationTimeout...............5000
HikariCP focuses on connection pool management only, i.e. on managing the connections it has created.
connectionTimeout - how long HikariCP will wait to hand out a connection (basically, to obtain a JDBC connection)
spring.datasource.hikari.connectionTimeout=30000
maxLifetime - how long a connection will live in the pool before being closed
spring.datasource.hikari.maxLifetime=1800000
idleTimeout - how long an unused connection lives in the pool
spring.datasource.hikari.idleTimeout=30000
Use javax.persistence.query.timeout to cancel the request if it takes longer than the defined timeout.
javax.persistence.query.timeout (Long, milliseconds)
The javax.persistence.query.timeout hint defines how long a query is allowed to run before it gets canceled. Hibernate doesn't handle this timeout itself but passes it on to the JDBC driver via the JDBC Statement.setQueryTimeout method.
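With Spring Data JPA the hint can be attached to the repository method from the question; a sketch, assuming a driver that honors the hint (the Long ID type is an assumption):

import javax.persistence.QueryHint;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.QueryHints;
import org.springframework.data.repository.PagingAndSortingRepository;

public interface PatientRepository extends PagingAndSortingRepository<Patient, Long> {

    // Ask the JPA provider to cancel the query after 30 s; Hibernate passes
    // this to the JDBC driver as Statement.setQueryTimeout(30).
    @Override
    @QueryHints(@QueryHint(name = "javax.persistence.query.timeout", value = "30000"))
    Page<Patient> findAll(Pageable pageable);
}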
The FileMaker JDBC driver ignores the javax.persistence.query.timeout parameter, even though the timeout value does reach the driver's implementation of the java.sql.Statement.setQueryTimeout setter. So I resolved the problem by extending the class com.filemaker.jdbc.Driver and overriding the connect method so that it adds the sockettimeout parameter to the connection properties. With this parameter in place, the FileMaker JDBC driver interrupts the connection if no data has come from the socket for the timeout period.
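A sketch of that workaround; the "sockettimeout" property name comes from the answer above, but its exact unit and handling are assumptions to verify against the driver documentation:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

public class SocketTimeoutDriver extends com.filemaker.jdbc.Driver {

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        Properties withTimeout = new Properties();
        if (info != null) {
            withTimeout.putAll(info);
        }
        // Assumed to abort the connection when the socket is silent this long.
        withTimeout.setProperty("sockettimeout", "60000");
        return super.connect(url, withTimeout);
    }
}

Point Hikari's driverClassName at the subclass so pooled connections are created through it.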
I've also filed an issue with filemaker: https://community.filemaker.com/message/798471
I am facing a big concern in JBoss 6.1.0. It is a multi-threaded application using stateless EJBs with BMT and a Sybase DB, on JDK 1.7 update 76. A user transaction is started and the queries run, but the associated thread only tries to commit after ONE HOUR. I do not know what happened to the executing thread; it was suspended for sure, but not from the code.
Can anyone give a pointer as to why the thread was suspended for more than an hour? Obviously, after an hour, the resuming thread's COMMIT or ROLLBACK will fail, and it has failed, because the default transaction timeout is 300 seconds (the JBoss 6 default).
2017-01-09 10:01:49,389 DEBUG [TestDAO] [EventId: ] [pool-63-thread-6] SQL SELECT QUERY
2017-01-09 10:01:49,391 DEBUG [TestDAO] [EventId: ] [pool-63-thread-6] ['dao.rowsProcessed']: 1 rows processed
2017-01-09 10:01:49,389 DEBUG [TestDAO] [EventId: ] [pool-63-thread-6] SQL UPDATE QUERY
2017-01-09 10:01:49,391 DEBUG [TestDAO] [EventId: ] [pool-63-thread-6] ['dao.rowsUpdated']: 1 row updated
2017-01-09 11:05:48,213 DEBUG [DAOUtils] [EventId: ] [pool-63-thread-6] commitTx
2017-01-09 11:05:48,214 ERROR [DAOUtils] [EventId: ] [pool-63-thread-6] commitTx() ARJUNA-16063 The transaction is not active!
2017-01-09 11:05:48,215 DEBUG [DAOUtils] [EventId: ] [pool-63-thread-6] rollbackTx
2017-01-09 11:05:48,215 ERROR [DAOUtils] [EventId: ] [pool-63-thread-6] rollbackTx() java.lang.IllegalStateException - BaseTransaction.rollback - ARJUNA-16074 no transaction!
It seems you have a long-running transaction which is timing out.
"The transaction is not active!" errors are caused by a transaction timeout. When a transaction times out, the transaction manager rolls it back asynchronously; when a component later tries to access the transaction again (e.g. to commit it or roll it back), it is no longer allowed to, per the JTA spec.
The default transaction timeout is defined by the "default-timeout" attribute of the "transactions" subsystem in the application server configuration.
The default is 300 seconds / 5 minutes.
You may modify the value to increase the default transaction timeout.
You may set the value to 0 to disable the transaction reaper/ transaction timeout.
The application server VM must be restarted for the default-timeout change to be applied.
<subsystem xmlns="urn:jboss:domain:transactions:1.4">
<coordinator-environment default-timeout="300"/> <!-- HERE -->
</subsystem>
It looks to me like processing the message takes longer than 5 minutes, so its transaction times out.
I would recommend increasing the transaction timeout to a higher figure to avoid this situation; it would be even better to refactor the application code to reduce the time a transaction takes to complete. It may well be that the application logic is otherwise handling the scenario correctly.
As I mentioned in the JBoss forum, this is not an issue with the transaction timeout.
There is no point in extending the transaction timeout, as that blocks all the other applications while the transaction holds its locks in the database.
The threads executing the transaction are frozen. Any hints on why such a thread is blocked from committing would be of great help.
I am in the process of optimizing an algorithm, and I noticed that Hibernate creates and releases update statements repetitively instead of reusing them. These are all from the same query.
15:57:31,589 TRACE [.JdbcCoordinatorImpl]:371 - Registering statement [sql : 'update ...
15:57:31,591 TRACE [.JdbcCoordinatorImpl]:412 - Releasing statement [sql : 'update ...
15:57:31,592 TRACE [.JdbcCoordinatorImpl]:525 - Closing prepared statement [sql : 'update ...
15:57:31,592 TRACE [.JdbcCoordinatorImpl]:278 - Starting after statement execution processing [ON_CLOSE]
15:57:31,594 TRACE [.JdbcCoordinatorImpl]:371 - Registering statement [sql : 'update ...
15:57:31,595 TRACE [.JdbcCoordinatorImpl]:412 - Releasing statement [sql : 'update ...
15:57:31,596 TRACE [.JdbcCoordinatorImpl]:525 - Closing prepared statement [sql : 'update ...
15:57:31,596 TRACE [.JdbcCoordinatorImpl]:278 - Starting after statement execution processing [ON_CLOSE]
15:57:31,597 TRACE [.JdbcCoordinatorImpl]:371 - Registering statement [sql : 'update ...
15:57:31,599 TRACE [.JdbcCoordinatorImpl]:412 - Releasing statement [sql : 'update ...
15:57:31,600 TRACE [.JdbcCoordinatorImpl]:525 - Closing prepared statement [sql : 'update ...
15:57:31,601 TRACE [.JdbcCoordinatorImpl]:278 - Starting after statement execution processing [ON_CLOSE]
The algorithm's main method has a @Scope and a @Transactional annotation. The expected behavior is that, if anything goes wrong, the algorithm's updates are rolled back.
Beneath it, the algorithm uses a @Service which has a different @Scope and is also @Transactional. The service is the one using Hibernate to update the database, via session.update(entity). The documentation says that, by default, a nested transaction joins the existing transaction if one exists.
Is that affirmation above correct?
Can the scope change create problems?
How can I have Hibernate reuse the statement during the transaction?
Thanks for your attention
Your understanding is correct. Scope is not related to how transactions are propagated; Spring wraps your beans with transaction-controlling proxies regardless of scope, as the sketch below illustrates.
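A hypothetical two-service sketch of the default REQUIRED propagation (all names are illustrative):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class AlgorithmRunner {

    private final UpdateService updateService;

    AlgorithmRunner(UpdateService updateService) {
        this.updateService = updateService;
    }

    @Transactional
    public void run() {
        // This call opens the transaction; the inner service joins it.
        updateService.applyUpdates();
    }
}

@Service
class UpdateService {

    // REQUIRED by default: joins the caller's transaction instead of
    // opening a new one, so a failure rolls back both methods' updates.
    @Transactional
    public void applyUpdates() {
        // session.update(entity) calls would happen here
    }
}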
There is no way to reuse statements explicitly when using Hibernate. Even when writing JDBC code by hand it is not a recommended approach, because of the entanglement it forces on the code. The common answer is to use a prepared statement cache on the JDBC connection pool. For example, with an Apache DBCP pool you can use poolPreparedStatements and maxOpenPreparedStatements to control that. Pools bundled with application servers have similar settings.
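For instance, a sketch with DBCP 2 (the JDBC URL and credentials are placeholders):

import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfig {

    public static DataSource pooledDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/mydb");
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setPoolPreparedStatements(true);   // keep PreparedStatements open for reuse
        ds.setMaxOpenPreparedStatements(100); // cap the statement cache
        return ds;
    }
}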
I have a Java Spring application running in the jetty-maven-plugin. When I call a MyBatis insert statement, the statement is automatically committed. However, when I call an update, the statement is not committed. Per the MyBatis documentation (http://www.mybatis.org/spring/transactions.html):
You cannot call SqlSession.commit(), SqlSession.rollback() or SqlSession.close() over a Spring managed SqlSession.
How do I configure my application to auto commit on a myBatis update statement?
I enabled logging. Here is what the log states on updates:
2012-12-12 17:20:31,669 DEBUG [org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
2012-12-12 17:20:31,669 DEBUG [org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#19e86f9] was not registered for synchronization because synchronization is not active
2012-12-12 17:20:31,669 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
2012-12-12 17:20:31,669 DEBUG [org.springframework.jdbc.datasource.DriverManagerDataSource] - Creating new JDBC DriverManager Connection to [jdbc:jtds:sqlserver://test/test]
2012-12-12 17:20:31,684 DEBUG [org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3#af7eaf] will not be managed by Spring
2012-12-12 17:20:31,684 DEBUG [com.persistence.MyMapper.updateMyItem] - ooo Using Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3#af7eaf]
2012-12-12 17:20:31,684 DEBUG [com.persistence.MyMapper.updateMyItem] - ==> Preparing: update myTable set date=? where id=?
2012-12-12 17:20:31,700 DEBUG [com.persistence.MyMapper.updateMyItem] - ==> Parameters: 2012-11-26 00:00:00.0(Timestamp), 0(Integer)
2012-12-12 17:20:31,700 DEBUG [org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#19e86f9]
2012-12-12 17:20:31,700 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
On insert, the log is:
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#22da8f] was not registered for synchronization because synchronization is not active
2012-12-12 16:35:53,932 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
2012-12-12 16:35:53,932 DEBUG [org.springframework.jdbc.datasource.DriverManagerDataSource] - Creating new JDBC DriverManager Connection to [jdbc:jtds:sqlserver://test/test]
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3#3af3cb] will not be managed by Spring
2012-12-12 16:35:53,932 DEBUG [com..persistence.MyMapper.insertMyItem] - ooo Using Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3#3af3cb]
2012-12-12 16:35:53,932 DEBUG [com.persistence.MyMapper.insertMyItem] - ==> Preparing: insert into myTable (id,date) values (?, ?)
2012-12-12 16:35:53,932 DEBUG [com.persistence.MyMapper.insertMyItem] - ==> Parameters: 5(Integer), 2012-11-26 00:00:00.0(Timestamp)
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#22da8f]
2012-12-12 16:35:53,932 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
The insert and update log statements seem to indicate the same basic steps.
After a bit more research, I found that it was a client issue: the client was always passing 0 for the id in the update statement, while the records have ids > 0. Along the way I configured Spring transaction management, and it was at that point that I observed the same behavior and realized it must be something other than a server-side configuration issue. Sorry about not catching that prior to posting.
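For reference, a sketch of the Spring transaction setup mentioned above, assuming mybatis-spring; once a DataSourceTransactionManager manages the same DataSource as the SqlSessionFactory, the SqlSession becomes Spring-managed and @Transactional methods commit or roll back as a unit:

import javax.sql.DataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class MyBatisTxConfig {

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factory = new SqlSessionFactoryBean();
        factory.setDataSource(dataSource);
        return factory.getObject();
    }

    // Managing the same DataSource makes MyBatis connections Spring-managed.
    @Bean
    public DataSourceTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}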
I am new to Java/Spring/Hibernate and really fell in love with Java after several years of .NET programming.
I am now working on a web app using Spring (MVC, declarative transactions) and Hibernate (3.6, with ehCache 2.5 as the cache provider). I've got some read-only and some read-write entities that I would like to cache using Hibernate's second-level cache and query cache.
Everything was fine while I cached only the read-only entities. Then I added a read-write entity and ran performance tests with JMeter, and for read-write entities I am facing a non-repeatable read issue when several concurrent threads read from and write to the entity's table.
Thread 3 gets lookup values:
16:34:45,304 DEBUG [http-bio-8080-exec-3] cache.StandardQueryCache: (StandardQueryCache.java:136) - cached query results were not up to date
16:34:45,304 DEBUG [http-bio-8080-exec-3] hibernate.SQL:(SQLStatementLogger.java:111) - select virtualdev0_.virtual_device_class_id as virtual1_45_, virtualdev0_.virtual_device_class as virtual2_45_, virtualdev0_.sitebox_id as sitebox3_45_, virtualdev0_.timestamp as timestamp45_ from virtual_device_class virtualdev0_ where virtualdev0_.sitebox_id=?
It finds out that the cache is not up to date, so it loads the entities, adds them to the second-level cache, materializes them and returns; this process runs continuously from here up to 16:34:45,826.
Meanwhile Thread 9 deletes one of entities and updates second level cache + timestamp:
16:34:45,799 DEBUG [http-bio-8080-exec-9] hibernate.SQL:(SQLStatementLogger.java:111) - delete from virtual_device_class where virtual_device_class_id=?
16:34:45,814 DEBUG [http-bio-8080-exec-9] cache.UpdateTimestampsCache:(UpdateTimestampsCache.java:95) - Invalidating space [virtual_device_class], timestamp: 5466792287494145
Thread 3 continues its housekeeping and finally adds the query result to the query cache (notice that its timestamp is higher than the timestamp of Thread 9's delete):
16:34:45,826 DEBUG [http-bio-8080-exec-3] cache.StandardQueryCache:(StandardQueryCache.java:96) - caching query results in region: org.hibernate.cache.StandardQueryCache; timestamp=5466792287543296
Thus, at this point the deleted ID is in the query cache, and the query cache is considered up to date:
16:34:45,852 DEBUG [http-bio-8080-exec-9] cache.UpdateTimestampsCache:(UpdateTimestampsCache.java:122) - [virtual_device_class] last update timestamp: 5466792287494145, result set timestamp: 5466792287543296
So when the lookups are requested again, Hibernate looks in the query cache and then starts materializing entities from the second-level cache.
16:34:45,852 DEBUG [http-bio-8080-exec-9] cache.StandardQueryCache:(StandardQueryCache.java:140) - returning cached query results
But the deleted item won't be there, so a query to the db is made:
16:34:45,863 DEBUG [http-bio-8080-exec-9] loader.Loader:(Loader.java:2022) - loading entity: [com.test.models.VirtualDeviceClass#0b2f363f-fbb9-4d17-8f86-af86ebb5100c]
16:34:45,873 DEBUG [http-bio-8080-exec-9] hibernate.SQL:(SQLStatementLogger.java:111) - select virtualdev0_.virtual_device_class_id as virtual1_45_0_, virtualdev0_.virtual_device_class as virtual2_45_0_, virtualdev0_.sitebox_id as sitebox3_45_0_, virtualdev0
As I am using the load method, it throws an exception if the entity is not found in the db.
Although in my case entities are rarely updated, this can still happen, and that's worrying me. I've got a few ideas for how to overcome this issue (a sketch of option (b) follows the list):
a) set the transaction isolation level in the DB to Repeatable Read (however, I do not think it will help, because the add-to-cache logic takes place after the data has been read from the db)
b) manually force the standard query cache to evict on entity delete/update
c) do not use the query cache at all (try to route most db queries through the second-level cache instead)
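A sketch of option (b), assuming Hibernate 3.6's Cache API (service and method names are illustrative):

import org.hibernate.SessionFactory;

public class VirtualDeviceClassService {

    private final SessionFactory sessionFactory;

    public VirtualDeviceClassService(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void delete(Object entity) {
        sessionFactory.getCurrentSession().delete(entity);
        // Drop all cached query result sets so no stale region can hand
        // back the id of the row that was just deleted.
        sessionFactory.getCache().evictQueryRegions();
    }
}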
Has anybody faced this issue before?
I have migrated to Hibernate 4 and it works fine now.
This issue might have been related to a synchronized block being removed from the method SessionFactory.getQueryCache(String regionName).
link to hibernate issue