How to invoke additional SQL before each query? - java

I have a database that uses the plv8 engine and has stored procedures written in CoffeeScript.
When I use jDBI, in order to call those procedures, after opening a connection I have to run:
SET plv8.start_proc = 'plv8_init';
Can I do a similar thing when using jOOQ with a javax.sql.DataSource?

One option is to use an ExecuteListener. You can hook into the query execution lifecycle by implementing the executeStart() method:
new DefaultExecuteListener() {
    @Override
    public void executeStart(ExecuteContext ctx) {
        DSL.using(ctx.connection()).execute("SET plv8.start_proc = 'plv8_init'");
    }
}
Now, supply the above ExecuteListener to your Configuration, and you're done.
See also the manual:
http://www.jooq.org/doc/latest/manual/sql-execution/execute-listeners
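To wire the listener in, something along these lines should work (a sketch, assuming a jOOQ 3.x DefaultConfiguration; the dataSource and class names are placeholders):

```java
import javax.sql.DataSource;

import org.jooq.DSLContext;
import org.jooq.ExecuteContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DefaultConfiguration;
import org.jooq.impl.DefaultExecuteListener;
import org.jooq.impl.DefaultExecuteListenerProvider;

public class Plv8Setup {

    // Runs the plv8 bootstrap statement at the start of every jOOQ query execution.
    static class Plv8Listener extends DefaultExecuteListener {
        @Override
        public void executeStart(ExecuteContext ctx) {
            DSL.using(ctx.connection()).execute("SET plv8.start_proc = 'plv8_init'");
        }
    }

    // dataSource is assumed to be configured elsewhere (e.g. a connection pool).
    public static DSLContext createContext(DataSource dataSource) {
        return DSL.using(new DefaultConfiguration()
                .set(dataSource)
                .set(SQLDialect.POSTGRES)
                .set(new DefaultExecuteListenerProvider(new Plv8Listener())));
    }
}
```

Every query executed through the returned DSLContext then runs the SET statement on its connection first.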

Related

JUnit test for a method that contains SQL queries

I have an old Java project (no frameworks/build tools used) that has a class full of SQL methods and corresponding Bean-classes. The SQL methods mostly use SELECT, INSERT and UPDATE queries like this:
public static void sqlUpdateAge(Connection dbConnection, int age, int id) {
    PreparedStatement s = null;
    ResultSet r = null;
    String sql = "UPDATE person SET age = ? WHERE id = ?";
    try {
        s = dbConnection.prepareStatement(sql);
        s.setInt(1, age);
        s.setInt(2, id);
        s.addBatch();
        s.executeBatch();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (r != null)
                r.close();
            if (s != null)
                s.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
What is the best practice in unit testing when it comes to SQL queries?
The easiest way I can think of would be to use my development database: just call sqlUpdateAge() in the test class, query the database for a result set, and assert that the updated age is in the result set. However, this would fill up the development database with unnecessary data, and I would like to avoid that.
Is the solution to create a so-called in-memory database or somehow rollback the changes I made?
If I need an in-memory database:
Where and how would I create it? Straight to the test class, or perhaps to a config file?
How do I pass it to the updateAge() method?
I would suggest seeing if you can start by automating the build, either by introducing a build tool such as Maven or Gradle or, if that's not possible, by scripting the build. In any case, your goal should be to reach a point where it's easy to trigger a build, together with the tests, whenever code changes.
If you can't produce a consistent build on every change with the guarantee that all unit tests have been run, then there's really no value in writing unit tests in the first place: your tests will eventually fail due to code modifications, and you won't notice unless all your tests are run automatically.
Once you have that, you might have some hints to how you would like to run unit or integration tests.
As you can't benefit from the testing support that many application frameworks provide, you're basically on your own for setting up database testing. In that case, I don't think an in-memory database is really the best option, because:
It's a different database technology than what you normally use, and as the code indicates, you are not using an ORM that takes care of SQL dialect differences for you. You might therefore find yourself unable to accurately test your code because of those dialect differences.
You will need to do all the setup of the in-memory DB yourself, which is of course possible, but it's still a piece of code you need to maintain and that can also fail.
The two alternatives I can think of are:
Use Docker to start your actual database technology every time you run the tests. (That's also something you have to script yourself, but it will most likely be a very simple, short command.)
Have a test database running in your test environment. Every time before you run the tests, ensure the database is reset to its original state (the easiest way is to drop the existing schema and restore the original one). In this case, you need to ensure that you don't run multiple builds in parallel against the same test database.
These suggestions apply only if you have experience on the shell and/or have support from someone in ops. If not, setting up H2 might be easier and more straightforward.
Things would have been easy with a Spring Boot project. In your case, you have many strategies:
Configure an H2 database. You can initialize your database, creating the schema and inserting data, in a setUp method annotated with @BeforeEach.
You can use a dedicated framework like DbUnit.
You will also have to initialize your dbConnection in a setUp method in your unit test.
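A minimal sketch of that setup, assuming H2 and JUnit 5 on the test classpath (the table layout is inferred from the question's UPDATE statement, and the nested PersonSql is a stand-in for the legacy DAO class, which the question doesn't name):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SqlUpdateAgeTest {

    // Stand-in for the legacy class from the question; replace with yours.
    static class PersonSql {
        static void sqlUpdateAge(Connection c, int age, int id) throws Exception {
            try (PreparedStatement s = c.prepareStatement("UPDATE person SET age = ? WHERE id = ?")) {
                s.setInt(1, age);
                s.setInt(2, id);
                s.executeUpdate();
            }
        }
    }

    private Connection dbConnection;

    @BeforeEach
    void setUp() throws Exception {
        // Fresh in-memory schema before every test.
        dbConnection = DriverManager.getConnection("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1");
        try (Statement s = dbConnection.createStatement()) {
            s.execute("CREATE TABLE person (id INT PRIMARY KEY, age INT)");
            s.execute("INSERT INTO person (id, age) VALUES (1, 30)");
        }
    }

    @AfterEach
    void tearDown() throws Exception {
        // Drop everything so the next test starts from a clean state.
        try (Statement s = dbConnection.createStatement()) {
            s.execute("DROP ALL OBJECTS");
        }
        dbConnection.close();
    }

    @Test
    void updatesAge() throws Exception {
        PersonSql.sqlUpdateAge(dbConnection, 42, 1);
        try (Statement s = dbConnection.createStatement();
             ResultSet r = s.executeQuery("SELECT age FROM person WHERE id = 1")) {
            r.next();
            assertEquals(42, r.getInt(1));
        }
    }
}
```

The connection is passed to the method under test exactly as in production code; only the JDBC URL differs.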

How to set consistency level on per-query basis in Java Spring Data Cassandra Reactive Library

I am using Spring Data Cassandra reactive library in Java and using ReactiveCqlTemplate to query ScyllaDB.
At the application level, we can set it by specifying the following property in the application.properties file:
spring.data.cassandra.consistency-level=quorum
My question is: how do I specify the consistency level on a per-query basis using ReactiveCqlTemplate? A code sample is below:
@Autowired
private ReactiveCqlTemplate template;

@Override
public Mono<Template> findTemplate(Channel channel, String id) {
    String query = "select * from task where id = ? and channel = ?";
    return template.queryForObject(query, new RowMapper<Template>() {
        public Template mapRow(Row row, int rowNum) {
            Template template = new Template();
            template.setContent(row.getString("content"));
            template.setMessageType(row.getString("message_type"));
            template.setSource(row.getString("source"));
            template.setSubject(row.getString("subject"));
            template.setTemplateKey(new TemplateKey(row.getString("id"),
                    Channel.fromString(row.getString("channel"))));
            return template;
        }
    }, UUID.fromString(id), channel.toString());
}
There is an option to set the ConsistencyLevel on the template variable above, but that would apply to the whole object. I would like to specify it for each query I send to Scylla DB.
Have a look at:
ReactiveCqlTemplate.setConsistencyLevel(:ConsistencyLevel)
And...
ReactiveCqlTemplate.setSerialConsistencyLevel(:ConsistencyLevel)
NOTE: This will apply to all Statements executed with this template.
If you need finer-grained control per Statement, then I suggest:
ReactiveCqlTemplate.execute(:ReactivePreparedStatementCreator)
Or...
ReactiveCqlTemplate.execute(:ReactivePreparedStatementCreator, ReactivePreparedStatementCallback<T>)
The creator allows you to create the Statement however you like. And, the callback gives you access to the DataStax Statement API where you can configure additional settings as needed.
See Statement and StatementBuilder.
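As a rough sketch of that creator/callback approach (assuming Spring Data Cassandra 3.x with the DataStax 4.x driver; verify the exact signatures against your versions, and note that the mapping back to Template is omitted):

```java
import java.util.UUID;

import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
import com.datastax.oss.driver.api.core.cql.Row;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import org.springframework.data.cassandra.ReactiveResultSet;
import org.springframework.data.cassandra.core.cql.ReactiveCqlTemplate;
import reactor.core.publisher.Flux;

public class QuorumQuery {

    // Overrides consistency for this one execution only; the template's
    // default consistency level is left untouched.
    static Flux<Row> findTaskRows(ReactiveCqlTemplate template, String id, String channel) {
        String cql = "select * from task where id = ? and channel = ?";
        return template.execute(
                // Creator: prepare the statement ourselves...
                session -> session.prepare(SimpleStatement.newInstance(cql)),
                // Callback: ...then bind it and set QUORUM on the bound statement.
                (session, ps) -> session.execute(
                                ps.bind(UUID.fromString(id), channel)
                                        .setConsistencyLevel(DefaultConsistencyLevel.QUORUM))
                        .flatMapMany(ReactiveResultSet::rows));
    }
}
```

The key point is that in driver 4.x, consistency is a property of the Statement itself, so setting it on the bound statement affects only that execution.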

Vertx add handler to Spring data API

In Vert.x, if we want to execute a JDBC operation without blocking the event loop, we use the following code:
client.getConnection(res -> {
    if (res.succeeded()) {
        SQLConnection connection = res.result();
        connection.query("SELECT * FROM some_table", res2 -> {
            if (res2.succeeded()) {
                ResultSet rs = res2.result();
                // Do something with results
            }
        });
    } else {
        // Failed to get connection - deal with it
    }
});
Here we add a handler that will execute when our operation is done.
Now I want to use the Spring Data API, but can it be used in the same way as above?
Currently I use it as follows:
@Override
public void start() throws Exception {
    final EventBus eventBus = this.vertx.eventBus();
    eventBus.<String>consumer(Addresses.BEGIN_MATCH.asString(), handler -> {
        this.vertx.executeBlocking(promise -> {
            final String body = handler.body();
            final JsonObject resJO = this.json.asJson(body);
            final int matchId = Integer.parseInt(resJO.getString("matchid"));
            this.matchService.beginMatch(matchId); // this service calls a CrudRepository method
            log.info("Match [{}] is started", matchId);
            promise.complete();
        }, result -> {});
    });
}
Here I used executeBlocking, but it uses a thread from the worker pool. Is there any alternative for wrapping blocking code?
To answer the question: the need for executeBlocking goes away if:
You run multiple instances of your verticle in separate processes (using systemd or Docker or anything else that lets you run independent Java processes safely with recovery) and listen on the same event bus channel in cluster mode (with Hazelcast, for example).
You run multiple instances of your verticle as worker verticles, as suggested by tsegismont in a comment on this answer.
Also, it's not related to the question and it's really a personal opinion, but I'll give it anyway: I think it's a bad idea to use Spring dependencies inside a Vert.x application. Spring is relevant for Servlet-based applications using at least Spring Core; that is, it's relevant in an ecosystem totally based on Spring. Otherwise you'll pull a lot of unused, heavy dependencies into your jar files.
For almost every Spring module there are smaller, lighter, independent libs with the same purpose. For example, for IoC you have Guice, HK2, Weld...
Personally, if I needed to use a SQL-based database, I'd take inspiration from Spring's JdbcTemplate and RowMapper model without using any Spring dependencies. It's pretty simple to reproduce with an interface like this:
import java.io.Serializable;
import java.sql.ResultSet;
import java.sql.SQLException;

public interface RowMapper<T extends Serializable> {
    T map(ResultSet rs) throws SQLException;
}
And another interface DatabaseProcessor with a method like this:
<T extends Serializable> List<T> execute(String query, List<QueryParam> params, RowMapper<T> rowMapper) throws SQLException;
And a class QueryParam with the value, the order and the name of your query parameters (to avoid SQL injection vulnerabilities).
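A minimal sketch of those pieces (all names are illustrative, not from an actual library; the RowMapper interface from above is repeated so the example is self-contained):

```java
import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

interface RowMapper<T extends Serializable> {
    T map(ResultSet rs) throws SQLException;
}

// One positional query parameter; binding through PreparedStatement is what
// avoids the SQL injection vulnerability.
class QueryParam {
    private final int order;   // 1-based position in the prepared statement
    private final String name; // descriptive name, useful for logging
    private final Object value;

    QueryParam(int order, String name, Object value) {
        this.order = order;
        this.name = name;
        this.value = value;
    }

    int getOrder() { return order; }
    String getName() { return name; }
    Object getValue() { return value; }
}

// A JDBC-backed DatabaseProcessor implementation.
class JdbcDatabaseProcessor {
    private final Connection connection;

    JdbcDatabaseProcessor(Connection connection) {
        this.connection = connection;
    }

    <T extends Serializable> List<T> execute(String query, List<QueryParam> params,
                                             RowMapper<T> rowMapper) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(query)) {
            for (QueryParam p : params) {
                ps.setObject(p.getOrder(), p.getValue());
            }
            try (ResultSet rs = ps.executeQuery()) {
                List<T> results = new ArrayList<>();
                while (rs.next()) {
                    results.add(rowMapper.map(rs));
                }
                return results;
            }
        }
    }
}
```

Inside a verticle, a call to JdbcDatabaseProcessor.execute would still be wrapped in executeBlocking (or run on a worker verticle), since JDBC itself remains blocking.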

Commit hook in JOOQ

I've been using JOOQ in backend web services for a while now. In many of these services, after persisting data to the database (or better said, after successfully committing data), we usually want to write some messages to Kafka about the persisted records so that other services know of these events.
What I'm essentially looking for is: Is there a way for me to register a post-commit hook or callback with JOOQ's DSLContext object, so I can run some code when a transaction successfully commits?
I'm aware of the ExecuteListener and ExecuteListenerProvider interfaces, but as far as I can tell the void end(ExecuteContext ctx) method (which is supposedly for end of lifecycle uses) is not called when committing the transaction. It is called after every query though.
Here's an example:
public static void main(String[] args) throws Throwable {
    Class.forName("org.postgresql.Driver");
    Connection connection = DriverManager.getConnection("<url>", "<user>", "<pass>");
    connection.setAutoCommit(false);
    DSLContext context = DSL.using(connection, SQLDialect.POSTGRES_9_5);
    context.transaction(conf -> {
        conf.set(new DefaultExecuteListenerProvider(new DefaultExecuteListener() {
            @Override
            public void end(ExecuteContext ctx) {
                System.out.println("End method triggered.");
            }
        }));
        DSLContext innerContext = DSL.using(conf);
        System.out.println("Pre insert.");
        innerContext.insertInto(...).execute();
        System.out.println("Post insert.");
    });
    connection.close();
}
Which always seems to print:
Pre insert.
End method triggered.
Post insert.
Making me believe this is not intended for commit hooks.
Is there perhaps a JOOQ guru that can tell me if there is support for commit hooks in JOOQ? And if so, point me in the right direction?
The ExecuteListener SPI is listening to the lifecycle of a single query execution, i.e. of this:
innerContext.insertInto(...).execute();
This isn't what you're looking for. Instead, you should implement your own TransactionProvider (possibly delegating to jOOQ's DefaultTransactionProvider). You can then implement any logic you want prior to the actual commit logic.
Note that jOOQ 3.9 will also provide a new TransactionListener SPI (see #5378) to facilitate this.
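For jOOQ 3.9+, a TransactionListener-based sketch might look like this (names should be verified against your jOOQ version; publishToKafka() is a placeholder for your own logic):

```java
import org.jooq.TransactionContext;
import org.jooq.impl.DefaultTransactionListener;

// Fires only after the commit has completed successfully.
class AfterCommitListener extends DefaultTransactionListener {
    @Override
    public void commitEnd(TransactionContext ctx) {
        publishToKafka(); // placeholder: write your Kafka messages here
    }

    private void publishToKafka() {
        // ... send messages about the persisted records ...
    }
}
```

Register it on your Configuration, e.g. via `new DefaultTransactionListenerProvider(new AfterCommitListener())`, and run your inserts through that Configuration's transaction() API rather than committing the JDBC connection manually, so that jOOQ manages the commit and can invoke the listener.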

Database cleanup after Junit tests

I have to test some Thrift services using Junit. When I run my tests as a Thrift client, the services modify the server database. I am unable to find a good solution which can clean up the database after each test is run.
Cleanup is important especially because the IDs need to be unique, and they are currently read from an XML file. Right now I have to manually change the IDs after running the tests so that the next set of tests can run without throwing primary key violations in the database. If I can clean up the database after each test run, the problem is completely resolved; otherwise I will have to consider other solutions, such as generating random IDs and using them wherever IDs are required.
Edit: I would like to emphasize that I am testing a service, which is writing to database, I don't have direct access to the database. But since, the service is ours, I can modify the service to provide any cleanup method if required.
If you are using Spring, everything you need is the @DirtiesContext annotation on your test class.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/test-context.xml")
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public class MyServiceTest {
    ....
}
Unless you are testing specific database actions (verifying that you can query or update the database, for example), your JUnit tests shouldn't be writing to a real database. Instead you should mock the database classes. This way you don't actually have to connect to and modify the database, and therefore no cleanup is needed.
You can mock your classes a couple of different ways. You can use a library such as JMock which will do all the execution and validation work for you. My personal favorite way to do this is with Dependency Injection. This way I can create mock classes that implement my repository interfaces (you are using interfaces for your data access layer right? ;-)) and I implement only the needed methods with known actions/return values.
// Example repository interface.
public interface StudentRepository
{
    public List<Student> getAllStudents();
}

// Example mock database class.
public class MockStudentRepository implements StudentRepository
{
    // This method creates fake but known data.
    public List<Student> getAllStudents()
    {
        List<Student> studentList = new ArrayList<Student>();
        studentList.add(new Student(...));
        studentList.add(new Student(...));
        studentList.add(new Student(...));
        return studentList;
    }
}

// Example method to test.
public int computeAverageAge(StudentRepository aRepository)
{
    List<Student> students = aRepository.getAllStudents();
    int totalAge = 0;
    for (Student student : students)
    {
        totalAge += student.getAge();
    }
    return totalAge / students.size();
}

// Example test method.
public void testComputeAverageAge()
{
    int expectedAverage = 25; // What the expected average of your result set is
    int actualAverage = computeAverageAge(new MockStudentRepository());
    assertEquals(expectedAverage, actualAverage);
}
How about using something like DBUnit?
Spring's unit testing framework has extensive capabilities for dealing with JDBC. The general approach is that the unit tests runs in a transaction, and (outside of your test) the transaction is rolled back once the test is complete.
This has the advantage of being able to use your database and its schema, but without making any direct changes to the data. Of course, if you actually perform a commit inside your test, then all bets are off!
For more reading, look at Spring's documentation on integration testing with JDBC.
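A sketch of what such a test can look like with JUnit 4 and the Spring TestContext framework ("/test-context.xml", the person table, and the class names are placeholders for your own setup):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/test-context.xml")
@Transactional // each test runs in a transaction that is rolled back afterwards
public class PersonDaoTransactionalTest {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Test
    public void insertIsVisibleInsideTheTransaction() {
        jdbcTemplate.update("INSERT INTO person (id, age) VALUES (?, ?)", 99, 20);
        Integer age = jdbcTemplate.queryForObject(
                "SELECT age FROM person WHERE id = ?", Integer.class, 99);
        // The assertion sees the uncommitted row; the automatic rollback after
        // the test leaves the database unchanged.
        assertEquals(Integer.valueOf(20), age);
    }
}
```

Note that this only works when the code under test participates in the test-managed transaction; anything that commits on its own (as the caveat above says) escapes the rollback.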
When writing JUnit tests, you can override two specific methods: setUp() and tearDown(). In setUp(), you can set up everything that's necessary to test your code, so you don't have to repeat the setup in each specific test case. tearDown() is called after each test case runs.
If possible, you could set it up so you can open your database in the setUp() method and then have it clear everything from the tests and close it in the tearDown() method. This is how we have done all testing when we have a database.
Here's an example:
@Override
protected void setUp() throws Exception {
    super.setUp();
    db = new WolfToursDbAdapter(mContext);
    db.open();
    // Set up other required state and data
}

@Override
protected void tearDown() throws Exception {
    super.tearDown();
    db.dropTables();
    db.close();
    db = null;
}
//Methods to run all the tests
Assuming you have access to the database: Another option is to create a backup of the database just before the tests and restore from that backup after the tests. This can be automated.
If you are using Spring + JUnit 4.x, then you don't need to insert anything into the DB.
Look at
AbstractTransactionalJUnit4SpringContextTests class.
Also check out the Spring documentation for JUnit support.
It's a bit draconian, but I usually aim to wipe out the database (or just the tables I'm interested in) before every test method execution. This doesn't tend to work as I move into more integration-type tests of course.
In cases where I have no control over the database, say I want to verify the correct number of rows were created after a given call, the test will count the number of rows before and after the tested call and make sure the difference is correct. In other words, take the existing data into account, then see how the tested code changed things, without assuming anything about that data. It can be a bit of work to set up, but it lets me test against a more "live" system.
In your case, are the specific IDs important? Could you generate the IDs on the fly, perhaps randomly, verify they're not already in use, then proceed?
I agree with Brainimus if you're trying to test against data you have pulled from a database. If you're looking to test modifications made to the database, another solution would be to mock the database itself. There are multiple implementations of in-memory databases that you can use to create a temporary database (for instance during JUnit's setUp()) and then remove the entire database from memory (during tearDown()). As long as you're not using a vendor-specific SQL dialect, this is a good way to test modifying a database without touching your real production one.
Some good Java databases that offer in memory support are Apache Derby, Java DB (but it is really Oracle's flavor of Apache Derby again), HyperSQL (better known as HSQLDB) and H2 Database Engine. I have personally used HSQLDB to create in-memory mock databases for testing and it worked great, but I'm sure the others would offer similar results.
