Commit hook in JOOQ - java

I've been using JOOQ in backend web services for a while now. In many of these services, after persisting data to the database (or better said, after successfully committing data), we usually want to write some messages to Kafka about the persisted records so that other services know of these events.
What I'm essentially looking for is: Is there a way for me to register a post-commit hook or callback with JOOQ's DSLContext object, so I can run some code when a transaction successfully commits?
I'm aware of the ExecuteListener and ExecuteListenerProvider interfaces, but as far as I can tell the void end(ExecuteContext ctx) method (which is supposedly for end of lifecycle uses) is not called when committing the transaction. It is called after every query though.
Here's an example:
public static void main(String[] args) throws Throwable {
    Class.forName("org.postgresql.Driver");
    Connection connection = DriverManager.getConnection("<url>", "<user>", "<pass>");
    connection.setAutoCommit(false);

    DSLContext context = DSL.using(connection, SQLDialect.POSTGRES_9_5);
    context.transaction(conf -> {
        conf.set(new DefaultExecuteListenerProvider(new DefaultExecuteListener() {
            @Override
            public void end(ExecuteContext ctx) {
                System.out.println("End method triggered.");
            }
        }));

        DSLContext innerContext = DSL.using(conf);
        System.out.println("Pre insert.");
        innerContext.insertInto(...).execute();
        System.out.println("Post insert.");
    });

    connection.close();
}
Which always seems to print:
Pre insert.
End method triggered.
Post insert.
This makes me believe end() is not intended as a commit hook.
Is there perhaps a JOOQ guru that can tell me if there is support for commit hooks in JOOQ? And if so, point me in the right direction?

The ExecuteListener SPI is listening to the lifecycle of a single query execution, i.e. of this:
innerContext.insertInto(...).execute();
This isn't what you're looking for. Instead, you should implement your own TransactionProvider (possibly delegating to jOOQ's DefaultTransactionProvider). You can then implement any logic you want prior to the actual commit logic.
Note that jOOQ 3.9 will also provide a new TransactionListener SPI (see #5378) to facilitate this.
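For example, a delegating provider can run a callback only after the delegate's commit has returned without throwing. The following is only a sketch: the class name, the Runnable hook and its wiring are illustrative and not part of jOOQ; only the three TransactionProvider methods and DefaultTransactionProvider come from jOOQ's API.
import org.jooq.TransactionContext;
import org.jooq.TransactionProvider;

// Hypothetical sketch: run a callback after a successful commit by
// delegating to another TransactionProvider (e.g. DefaultTransactionProvider).
public class PostCommitHookTransactionProvider implements TransactionProvider {

    private final TransactionProvider delegate;
    private final Runnable postCommitHook; // e.g. publish Kafka messages

    public PostCommitHookTransactionProvider(TransactionProvider delegate, Runnable postCommitHook) {
        this.delegate = delegate;
        this.postCommitHook = postCommitHook;
    }

    @Override
    public void begin(TransactionContext ctx) {
        delegate.begin(ctx);
    }

    @Override
    public void commit(TransactionContext ctx) {
        delegate.commit(ctx);
        // Only reached if the delegate's commit did not throw,
        // i.e. the transaction was committed successfully.
        postCommitHook.run();
    }

    @Override
    public void rollback(TransactionContext ctx) {
        delegate.rollback(ctx);
    }
}
You would then register it on your Configuration, e.g. something like configuration.set(new PostCommitHookTransactionProvider(new DefaultTransactionProvider(connectionProvider), () -> publishKafkaEvents())), where publishKafkaEvents() stands in for whatever post-commit work you need. Be aware that commit() is also invoked for nested transactions (DefaultTransactionProvider implements those with savepoints), so you may need extra bookkeeping if the hook should only fire once per outermost commit.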

Related

Vertx add handler to Spring data API

In Vert.x, if we want to execute a JDBC operation without blocking the event loop, we use the following code:
client.getConnection(res -> {
    if (res.succeeded()) {
        SQLConnection connection = res.result();
        connection.query("SELECT * FROM some_table", res2 -> {
            if (res2.succeeded()) {
                ResultSet rs = res2.result();
                // Do something with results
            }
        });
    } else {
        // Failed to get connection - deal with it
    }
});
Here we add a handler that will execute when our operation is done.
Now I want to use the Spring Data API in the same way as above.
Currently I use it as follows:
@Override
public void start() throws Exception {
    final EventBus eventBus = this.vertx.eventBus();
    eventBus.<String>consumer(Addresses.BEGIN_MATCH.asString(), handler -> {
        this.vertx.executeBlocking(future -> {
            final String body = handler.body();
            final JsonObject resJO = this.json.asJson(body);
            final int matchId = Integer.parseInt(resJO.getString("matchid"));
            this.matchService.beginMatch(matchId); // this service calls a method of a CrudRepository
            log.info("Match [{}] is started", matchId);
            future.complete();
        }, result -> {});
    });
}
Here I used executeBlocking, but it uses a thread from the worker pool. Is there any alternative way to wrap blocking code?
To answer the question: the need for executeBlocking goes away if:
You run multiple instances of your verticle in separate processes (using systemd or Docker or anything that lets you run independent Java processes safely with recovery), listening on the same event bus channel in cluster mode (with Hazelcast, for example).
You run multiple instances of your verticle as worker verticles, as suggested by tsegismont in a comment on this answer.
Also, it's not related to the question and it's really a personal opinion, but I'll give it anyway: I think it's a bad idea to use Spring dependencies inside a Vert.x application. Spring is relevant for servlet-based applications that use at least Spring Core, i.e. in an ecosystem totally based on Spring. Otherwise you'll pull a lot of unused, heavy dependencies into your jar files.
For almost every Spring module there are smaller, lighter, independent libraries with the same purpose. For example, for IoC you have Guice, HK2, Weld...
Personally, if I needed to use a SQL-based database, I'd take inspiration from Spring's JdbcTemplate and RowMapper model without using any Spring dependencies. It's pretty simple to reproduce with a simple interface like this:
import java.io.Serializable;
import java.sql.ResultSet;
import java.sql.SQLException;

public interface RowMapper<T extends Serializable> {
    T map(ResultSet rs) throws SQLException;
}
And another interface, DatabaseProcessor, with a method like this:
<T extends Serializable> List<T> execute(String query, List<QueryParam> params, RowMapper<T> rowMapper) throws SQLException;
And a class QueryParam with the value, the order and the name of your query parameters (to avoid SQL injection vulnerabilities).
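A possible shape for those two types, following the description above (the names and fields here are illustrative only, not an existing library):
import java.io.Serializable;
import java.sql.SQLException;
import java.util.List;

// Illustrative sketch of the QueryParam described above
class QueryParam {
    private final String name;   // parameter name, useful for logging/debugging
    private final int order;     // 1-based position in the prepared statement
    private final Object value;  // value bound via PreparedStatement, never concatenated into SQL

    QueryParam(String name, int order, Object value) {
        this.name = name;
        this.order = order;
        this.value = value;
    }

    String getName()  { return name; }
    int getOrder()    { return order; }
    Object getValue() { return value; }
}

// Illustrative sketch of the DatabaseProcessor described above
interface DatabaseProcessor {
    <T extends Serializable> List<T> execute(String query,
                                             List<QueryParam> params,
                                             RowMapper<T> rowMapper) throws SQLException;
}
An implementation would bind each QueryParam by its order using PreparedStatement.setObject, execute the query, and apply the RowMapper to each row of the ResultSet.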

How to invoke additional SQL before each query?

I have a database that uses the plv8 engine and has stored procedures written in CoffeeScript.
When I use jDBI, in order to call those procedures, after I open a connection I have to run:
SET plv8.start_proc = 'plv8_init';
Can I do a similar thing when using jOOQ with a javax.sql.DataSource?
One option is to use an ExecuteListener. You can hook into the query execution lifecycle by implementing the executeStart() method:
new DefaultExecuteListener() {
    @Override
    public void executeStart(ExecuteContext ctx) {
        DSL.using(ctx.connection()).execute("SET plv8.start_proc = 'plv8_init'");
    }
}
Now, supply the above ExecuteListener to your Configuration, and you're done.
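As a rough sketch of that wiring, assuming you build your own Configuration around the DataSource (the method name and the dialect here are placeholders for your setup):
import javax.sql.DataSource;
import org.jooq.DSLContext;
import org.jooq.ExecuteContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DataSourceConnectionProvider;
import org.jooq.impl.DefaultConfiguration;
import org.jooq.impl.DefaultExecuteListener;
import org.jooq.impl.DefaultExecuteListenerProvider;

// Sketch: attach the listener to a Configuration built around your DataSource
public static DSLContext createContext(DataSource dataSource) {
    DefaultConfiguration configuration = new DefaultConfiguration();
    configuration.set(new DataSourceConnectionProvider(dataSource));
    configuration.set(SQLDialect.POSTGRES); // adjust to your dialect
    configuration.set(new DefaultExecuteListenerProvider(new DefaultExecuteListener() {
        @Override
        public void executeStart(ExecuteContext ctx) {
            // Runs on the same connection, just before every statement execution
            DSL.using(ctx.connection()).execute("SET plv8.start_proc = 'plv8_init'");
        }
    }));
    return DSL.using(configuration);
}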
See also the manual:
http://www.jooq.org/doc/latest/manual/sql-execution/execute-listeners

Shared Transaction between different OracleDB Connections

After several days spent investigating this issue, I decided to submit this question because there is apparently no sense in what is happening.
The Case
My computer is configured with a local Oracle Express database.
I have a Java project with several JUnit tests that extend a parent class (I know it is not a "best practice") which opens an OJDBC connection (using a static Hikari connection pool of 10 connections) in the @Before method and rolls it back in the @After.
public class BaseLocalRollbackableConnectorTest {
    private static Logger logger = LoggerFactory.getLogger(BaseLocalRollbackableConnectorTest.class);
    protected Connection connection;

    @Before
    public void setup() throws SQLException {
        logger.debug("Getting connection and setting autocommit to FALSE");
        connection = StaticConnectionPool.getPooledConnection();
    }

    @After
    public void teardown() throws SQLException {
        logger.debug("Rollback connection");
        connection.rollback();
        logger.debug("Close connection");
        connection.close();
    }
}
StaticConnectionPool
public class StaticConnectionPool {
    private static HikariDataSource ds;
    private static final Logger log = LoggerFactory.getLogger(StaticConnectionPool.class);

    public static Connection getPooledConnection() throws SQLException {
        if (ds == null) {
            log.debug("Initializing ConnectionPool");
            HikariConfig config = new HikariConfig();
            config.setMaximumPoolSize(10);
            config.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
            config.addDataSourceProperty("url", "jdbc:oracle:thin:@localhost:1521:XE");
            config.addDataSourceProperty("user", "MyUser");
            config.addDataSourceProperty("password", "MyPsw");
            config.setAutoCommit(false);
            ds = new HikariDataSource(config);
        }
        return ds.getConnection();
    }
}
This project has hundreds of tests (not run in parallel) that use this connection (on localhost) to execute queries (insert/update and select) using Sql2o, but transactions and closure of the connection are managed only externally (by the test class above).
The database is completely empty, to have ACID tests.
So the expected result is to insert something into the DB, make the assertions and then roll back, so that the next test will not find any data added by the previous test, in order to maintain isolation.
The Problem
Running all tests together (sequentially), 90% of the time they work properly. The other 10% of the time, one or two tests fail randomly, because there is dirty data in the database (a duplicated unique value, for example) left by previous tests. Looking at the logs, the rollbacks of the previous tests were done properly (in fact, if I check the database afterwards, it is empty).
If I execute these tests on a server with higher performance but the same JDK and the same Oracle XE, this failure ratio increases to 50%.
This is very strange and I have no idea why: the connections are different between tests and rollback is called each time. The JDBC isolation level is READ COMMITTED, so even reusing the same connection should not create any problem.
So my questions are:
Why does this happen? Do you have any idea? Is the JDBC rollback synchronous, as I believe, or are there cases where it can return before it is fully completed?
These are my main DB params:
processes 100
sessions 172
transactions 189
I ran into the same problem 2-3 years ago (and spent a lot of time getting it straight). The problem is that @Before and @After are not always really sequential. (You can check this by starting the process in debug mode and placing some breakpoints in the annotated methods.)
Edit: I was not clear enough, as Tonio pointed out. The order of @Before and @After is guaranteed in terms of running before and after the test. The problem in my case was that sometimes the @Before and the @After of different tests got mixed up.
Expected:
@Before -> test1() -> @After -> @Before -> test2() -> @After
But sometimes I experienced the following order:
@Before -> test1() -> @Before -> @After -> test2() -> @After
I am not sure whether it is a bug or not. At the time I dug into the depths of it and it seemed like some kind of (processor?) scheduling-related magic.
In our case, the solution was to run the tests on a single thread and to call the init and cleanup processes manually... Something like this:
public class BaseLocalRollbackableConnectorTest {
    private static Logger logger = LoggerFactory.getLogger(BaseLocalRollbackableConnectorTest.class);
    protected Connection connection;

    public void setup() throws SQLException {
        logger.debug("Getting connection and setting autocommit to FALSE");
        connection = StaticConnectionPool.getPooledConnection();
    }

    public void teardown() throws SQLException {
        logger.debug("Rollback connection");
        connection.rollback();
        logger.debug("Close connection");
        connection.close();
    }

    @Test
    public void test() throws Exception {
        try {
            setup();
            // test
        } catch (Exception e) { // making sure that the teardown will run even if the test is failing
            teardown();
            throw e;
        }
        teardown();
    }
}
I have not tested it, but a much more elegant solution could be to synchronize the @Before and @After methods on the same object. Please update me if you have the chance to give it a try. :)
I hope it will solve your problem too.
If your problem just needs to be "solved" (i.e. not "best practice"), regardless of performance, so that the tests simply complete in order, try setting:
config.setMaximumPoolSize(1);
You might need to set a higher timeout, since the tests waiting in the queue for their turn might time out. I usually don't suggest solutions like this, but your setup is suboptimal and will lead to race conditions and data loss. However, good luck with the tests.
Try configuring auditing of all statements in Oracle, then find sessions that are alive simultaneously. I think the problem is in the tests. JDBC rollback is synchronous. Commit can be configured as COMMIT NOWAIT, but I don't think you do anything special like that in your tests.
Also pay attention to parallel DML. On one table, within the same transaction, you can't do parallel DML plus any other DML without a commit in between, or you get ORA-12838.
Do you have autonomous transactions? Business logic in the tests can manually roll them back, and during the tests an autonomous transaction behaves like another session: it doesn't see uncommitted changes from the parent session.
Not sure if this will fix it, but you could try:
public class BaseLocalRollbackableConnectorTest {
    private static Logger logger = LoggerFactory.getLogger(BaseLocalRollbackableConnectorTest.class);
    protected Connection connection;
    private Savepoint savepoint;

    @Before
    public void setup() throws SQLException {
        logger.debug("Getting connection and setting autocommit to FALSE");
        connection = StaticConnectionPool.getPooledConnection();
        savepoint = connection.setSavepoint();
    }

    @After
    public void teardown() throws SQLException {
        logger.debug("Rollback connection");
        connection.rollback(savepoint);
        logger.debug("Close connection");
        connection.close();
        while (!connection.isClosed()) {
            try { Thread.sleep(500); } catch (InterruptedException ie) {}
        }
    }
}
There are really two 'fixes' here: first, loop after the close to be sure the connection IS closed before it is returned to the pool; second, create a savepoint before the test and restore it afterwards.
Like all the other answers have pointed out, it's hard to say what goes wrong with the provided information. Furthermore, even if you manage to find the current issue by auditing, it doesn't mean that your tests are free from data errors.
But here's an alternative: because you already have a blank database schema, you can export it to a SQL file. Then before each test:
Drop the schema
Re-create the schema again
Feed the sample data (if needed)
This would save lots of debugging time and make sure the database is in a pristine state every time you run the tests. All of this can be done in a script.
Note: Oracle Enterprise has the Flashback feature to support this kind of operation. Also, if you can manage to use Hibernate and the like, there are in-memory databases (like HSQLDB) that you can use to both increase testing speed and maintain coherence in your data set.
EDIT: It seems implausible, but just in case: connection.rollback() only takes effect if you don't call commit() before it.
After all the confirmation from your answers that I am not mad, and that rollbacks and transactions behave as expected in unit tests, I deeply checked all the queries and all the possible causes, and fortunately (yes, fortunately... even if I'm a bit ashamed of it, it puts my mind at ease) everything works as expected (transactions, @Before, @After, etc.).
There are some queries that get their results from some complex views (buried deep in the DAO layer) in order to identify single-row information.
One of these views is based on the MAX of a TIMESTAMP, in order to identify the latest occurrence of a particular event (in real life, the events come months apart).
While preparing the database for the unit tests, these events are added sequentially by each test.
In some cases, when these insert queries within the same transaction are particularly fast, several events related to the same object are added in the same millisecond (the TIMESTAMP is set manually using a Joda DateTime), and the MAX of the date returns two or more rows.
This explains why it happened more frequently on more performant computers/servers than on slower ones.
This view is used in several tests, and depending on the test the error is different and random (a NULL value added as primary key, a duplicated primary key, etc.).
For example, this bug is evident in the following INSERT ... SELECT query:
INSERT INTO TABLE1 (ID, COL1, COL2, COL3)
SELECT :myId, t.VAL1, t.VAL2, t.VAL3
FROM MyView v
JOIN Table2 t ON t.ID = v.ID
WHERE ........
The parameter myId is added afterwards as an Sql2o parameter.
MyView is:
SELECT ID, MAX(MDATE) FROM TABLEV WHERE.... GROUP BY ...
When the view returns at least 2 rows due to the same max date, the insert fails because the ID is fixed (generated by a sequence at the beginning, but stored using the parameter afterwards). This causes the PK constraint violation.
This is only one case, but it drove me (and my colleagues) crazy because of the random behaviour...
Adding a sleep of 1 millisecond between those event inserts fixes it. We are now working on a different solution, even though this case (a user interacting twice within the same millisecond) cannot happen in the production system.
But the important thing is that, as usual, no magic is happening!
Now you can insult me :)
One thing you can do: increase the number of connections in the maximum pool size, and roll back the operation in the same place where you committed it, instead of doing it in the @After method.
Hope it will work.

Clear the in memory database after every testcase

I am using HSQLDB for testing some of the data access layer in Java. I have around 100 test cases. I create an in-memory database and then insert some values into the tables so that my test cases can load them, but the problem is that for every test case I need to clear the in-memory database, only the values, not the tables.
Is this possible? One option is to manually delete the rows from the tables, but is there something else I can use?
Thanks
If you use DbUnit in unit tests, you can specify that DbUnit should perform a clean-and-insert operation before every test, to ensure that the contents of the database are in a valid state. This can be done in a manner similar to the one below:
@Before
public void setUp() throws Exception
{
    logger.info("Performing the setup of test {}", testName.getMethodName());
    IDatabaseConnection connection = null;
    try
    {
        connection = getConnection();
        IDataSet dataSet = getDataSet();
        // The following line cleans up all DbUnit-recognized tables and inserts test data before every test.
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }
    finally
    {
        // Closes the connection as the persistence layer gets its connection from elsewhere
        connection.close();
    }
}
Note that it is always recommended to perform any setup activities in a @Before setup method, rather than in an @After teardown method. The latter indicates that you are creating new database objects in a method being tested, which IMHO does not lend itself easily to testable behavior. Besides, if you are cleaning up after a test to ensure that a second test runs correctly, then any such cleanup is actually part of the setup of the second test, and not a teardown of the first.
The alternative to using DbUnit is to start a new transaction in your @Before setup method and to roll it back in the @After teardown method. How to do this depends on how your data access layer is written.
If your data access layer accepts Connection objects, then your setup routine should create them and turn off auto-commit. There is also the assumption that your data access layer will not invoke Connection.commit. Given that, you can roll back the transaction using Connection.rollback() in your teardown method.
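A minimal sketch of that Connection-based variant (the field names are illustrative; it assumes the data access layer uses the Connection you hand it and never commits):
// Sketch: wrap each test in a JDBC transaction and roll it back afterwards
private Connection connection;

@Before
public void setUp() throws Exception
{
    connection = dataSource.getConnection(); // however your tests obtain connections
    connection.setAutoCommit(false);         // start an implicit transaction
}

@After
public void tearDown() throws Exception
{
    if (connection != null)
    {
        connection.rollback();               // discard everything the test wrote
        connection.close();
    }
}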
With respect to transaction control, the below snippet demonstrates how one would do it using JPA for instance:
@Before
public void setUp() throws Exception
{
    logger.info("Performing the setup of test {}", testName.getMethodName());
    em = emf.createEntityManager();
    // Starts the transaction before every test
    em.getTransaction().begin();
}

@After
public void tearDown() throws Exception
{
    logger.info("Performing the teardown of test {}", testName.getMethodName());
    if (em != null)
    {
        // Rolls back the transaction after every test
        em.getTransaction().rollback();
        em.close();
    }
}
Similar approaches would have to be undertaken for other ORM frameworks or even your custom persistence layer, if you have written one.
Could you use HSQLDB transactions?
Before every test, start a new transaction:
START TRANSACTION;
After every test, roll it back:
ROLLBACK;
This would also allow you to have some permanent data.
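If you drive this from JDBC in your test fixture, a sketch could look like the following (it assumes the test class already holds a Connection to the in-memory database):
// Sketch: issue the transaction statements around each test
@Before
public void beginTransaction() throws SQLException {
    try (Statement stmt = connection.createStatement()) {
        stmt.execute("START TRANSACTION");
    }
}

@After
public void rollBackTransaction() throws SQLException {
    try (Statement stmt = connection.createStatement()) {
        stmt.execute("ROLLBACK");
    }
}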
Depending on your test framework, it is possible to execute a delete call after each test. In JUnit the annotation is @After, and a method with this annotation will be run after each @Test method.
You can use a TRUNCATE query to clear the tables, or this link may be helpful to you:
http://wiki.apache.org/db-derby/InMemoryBackEndPrimer

Obtaining a Hibernate transaction within a Spring class

I am working on a program that uses Spring and obtains Hibernate transactions transparently using a TransactionInterceptor. This makes it very convenient to say "when this method is invoked from some other class, wrap it in a transaction if it's not already in one."
However, I have a class that needs to attempt a write and must find out immediately whether or not it has succeeded. While I want two methods anyway, I was hoping there was a way to keep them in the same class without needing to explicitly create a transaction procedurally. In effect, I'd like something like this:
public void methodOne() {
    // ..do some stuff
    try {
        transactionalMethod(); // won't do what I want
    } catch (OptimisticLockingFailure e) {
        // ..recover
    }
}

@Transactional
public void transactionalMethod() {
    // ...do some stuff to database
}
Unfortunately, as I understand it, this wouldn't work because I'd just be directly calling transactionalMethod. Is there a way to ask Spring to call a local method for me and wrap it in a transaction if needed, or does it have to be in another class that I wire to this one?
Define an interface that the class implements and that exposes transactionalMethod(); use dependency injection to give the class a reference to its own (proxied) implementation of that interface; in your bean factory, allow Spring to put an around advice on that interface implementation. That should work for your needs.
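A sketch of that idea (the class and interface names are made up; depending on your Spring version you may need @Lazy or setter injection to avoid a circular reference when injecting the bean into itself):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// In its own file: the interface the transactional proxy is created for
interface TransactionalWriter {
    void transactionalMethod();
}

@Service
class WriterService implements TransactionalWriter {

    // Spring injects the proxied bean here, so calls go through the transaction advice
    @Autowired
    private TransactionalWriter self;

    public void methodOne() {
        // ..do some stuff
        try {
            self.transactionalMethod(); // proxied call, so @Transactional applies
        } catch (OptimisticLockingFailureException e) {
            // ..recover
        }
    }

    @Override
    @Transactional
    public void transactionalMethod() {
        // ...do some stuff to database
    }
}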
If you want transactionalMethod to be part of its own transaction, and not simply join the transaction that is already active, you have to set the propagation to REQUIRES_NEW, like so:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void transactionalMethod() {
    // ...do some stuff to database
}
You should also check that your transaction manager supports this propagation. It means that transactionalMethod runs completely separately from the transaction it was called from, and it will commit or roll back completely separately as well.
