I'm writing unit tests for some data access code. The key pieces in the setup are:
jOOQ-generated artifacts for CRUD operations
Liquibase to handle schema evolution
Given as much, I'm trying to set up the tests as follows:
1. Create a java.sql.Connection to initialize an H2 database with the appropriately named schema. (It's worth noting that the connection is created with the following URL: jdbc:h2:mem:[schema-name];MODE=MySQL;DB_CLOSE_DELAY=-1.)
2. Using the aforementioned connection, invoke Liquibase to run through a change log that creates all the objects in the database schema.
3. Using the same connection, create an org.jooq.DSLContext with which the data access components can be tested.
An abstract class encapsulates these three steps in a @Before-annotated method, and test classes extend it to leverage the initialized org.jooq.DSLContext instance. Something like this:
import java.sql.Connection
import java.sql.DriverManager
import org.jooq.DSLContext
import org.jooq.SQLDialect
import org.jooq.impl.DSL

abstract class DbTestBase {

    protected lateinit var dslContext: DSLContext
    private lateinit var connection: Connection

    open fun setUp() {
        connection = DriverManager.getConnection("jdbc:h2:mem:foo;MODE=MySQL;DB_CLOSE_DELAY=-1")
        // invoke Liquibase with this connection instance...
        dslContext = DSL.using(connection, SQLDialect.H2)
    }

    open fun tearDown() {
        dslContext.close()
        connection.close()
    }
}
import org.junit.After
import org.junit.Before
import org.junit.Test

class MyTest : DbTestBase() {

    private lateinit var repository: Repository

    @Before override fun setUp() {
        super.setUp()
        repository = Repository(dslContext)
    }

    @After override fun tearDown() {
        super.tearDown()
    }

    @Test fun something() {
        repository.add(Bar())
    }
}
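For completeness, the Liquibase step elided above looks roughly like this (a sketch; the change log path is illustrative):

import liquibase.Liquibase
import liquibase.database.DatabaseFactory
import liquibase.database.jvm.JdbcConnection
import liquibase.resource.ClassLoaderResourceAccessor

// wrap the existing JDBC connection in a Liquibase Database
val database = DatabaseFactory.getInstance()
    .findCorrectDatabaseImplementation(JdbcConnection(connection))
// run the change log against it; "" means no contexts
Liquibase("db/changelog-master.xml", ClassLoaderResourceAccessor(), database).update("")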
This results in the following exception:
Caused by: org.h2.jdbc.JdbcSQLException: Schema "foo" not found; SQL statement:
insert into `foo`.`bar` (`id`) values (?) [90079-196]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.command.Parser.getSchema(Parser.java:688)
at org.h2.command.Parser.getSchema(Parser.java:694)
at org.h2.command.Parser.readTableOrView(Parser.java:5535)
at org.h2.command.Parser.readTableOrView(Parser.java:5529)
at org.h2.command.Parser.parseInsert(Parser.java:1062)
at org.h2.command.Parser.parsePrepared(Parser.java:417)
at org.h2.command.Parser.parse(Parser.java:321)
at org.h2.command.Parser.parse(Parser.java:293)
at org.h2.command.Parser.prepareCommand(Parser.java:258)
at org.h2.engine.Session.prepareLocal(Session.java:578)
at org.h2.engine.Session.prepareCommand(Session.java:519)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1204)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:73)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:288)
at org.jooq.impl.ProviderEnabledConnection.prepareStatement(ProviderEnabledConnection.java:106)
at org.jooq.impl.SettingsEnabledConnection.prepareStatement(SettingsEnabledConnection.java:70)
at org.jooq.impl.AbstractQuery.prepare(AbstractQuery.java:410)
at org.jooq.impl.AbstractDMLQuery.prepare(AbstractDMLQuery.java:342)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:316)
... 25 more
I can see the Liquibase logging at the point where the schema is regenerated. And I've since changed the H2 URL to create a file-based database, which I was able to inspect and verify that the schema does indeed exist.
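The file-based variant amounted to nothing more than swapping the URL in setUp(), roughly (the path here is illustrative):

// file-based H2 database that can be opened and inspected after the test run
connection = DriverManager.getConnection("jdbc:h2:file:./build/foo;MODE=MySQL;DB_CLOSE_DELAY=-1")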
I'd appreciate any help spotting anything wrong with this approach.
After much trial and error I was able to work through my issues by resolving two problems:
Lukas's comment differentiating "database" (or "catalog") vs. "schema" pushed me in the right direction. Though the terms seem to be used interchangeably in MySQL (my production database), they are not in H2. Seems rather obvious in hindsight, but the remedy was to issue a JDBC call to manually create the schema and then set it as the connection default just before invoking Liquibase to reconstruct the schema, a la:
...
connection = DriverManager.getConnection(...)
connection.createStatement().executeUpdate("create schema $schemaName")
connection.schema = schemaName.toUpperCase()
// invoke Liquibase with this connection
...
Despite opening a case-insensitive connection to H2, the toUpperCase() invocation still proved necessary; presumably H2 stores unquoted identifiers in upper case and setSchema matches against the stored name exactly.
jOOQ quotes the names of all schema objects in its generated SQL. As I've come to understand, though, quoting enforces case-sensitivity in H2, so the quotes in the generated queries were causing a slew of errors about objects that could not be found. The remedy was to supply a different RenderNameStyle to the query generator so that it omits the quotes, a la:
...
val settings = Settings().withRenderNameStyle(RenderNameStyle.AS_IS)
val dslContext = DSL.using(connection, SQLDialect.H2, settings)
...
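Putting the two fixes together, setUp() in the base class ends up looking something like this (a sketch; the schema name is the illustrative "foo" from above and the Liquibase step is still elided):

import org.jooq.SQLDialect
import org.jooq.conf.RenderNameStyle
import org.jooq.conf.Settings
import org.jooq.impl.DSL

open fun setUp() {
    connection = DriverManager.getConnection("jdbc:h2:mem:foo;MODE=MySQL;DB_CLOSE_DELAY=-1")
    // fix 1: create the schema manually and make it the connection default
    connection.createStatement().executeUpdate("create schema if not exists foo")
    connection.schema = "FOO"
    // invoke Liquibase with this connection instance...
    // fix 2: render names without quotes so H2 resolves them case-insensitively
    val settings = Settings().withRenderNameStyle(RenderNameStyle.AS_IS)
    dslContext = DSL.using(connection, SQLDialect.H2, settings)
}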
Hope this can be of help to someone else down the line.
Related
I am using the below code to insert data into DB2 tables as part of a Grails cron job. The job ran successfully, but I don't see the data inserted in the database tables, and I don't see any errors in the application log either. When I reran the same job some time later, I did see the data in the database tables. I am not sure why I see this behaviour.
Code Snippet :
def conn = new Sql(dataSource)
// for loop running from 1 to 11813
conn.execute("SQL Query")
A few things:
1. I have not explicitly called conn.close(), since when using Sql with a DataSource we don't have to.
2. The method in which I am using conn.execute is transactional.
3. The method also contains a Hibernate object save, like below:
if (!Object.save(flush:true, failOnError:true)) {
//throw exception
}
Can you please suggest? Thanks!
Try putting this code into a Grails service and adding the @Transactional annotation above the class name.
I am trying to unit test a DAO class using Mockito. I have written some unit tests before, but not for a DAO class backed by a database (in this case JDBC and MySQL).
I decided to start with this simple method, but I do not know the good practices here and I do not know how to start.
I do not know if this is important in this case, but the project is using the Spring Framework.
public class UserProfilesDao extends JdbcDaoSupport {

    @Autowired
    private MessageSourceAccessor msa;

    public long getUserId(long userId, int serviceId) {
        String sql = msa.getMessage("sql.select.service_user_id");
        Object[] params = new Object[] { userId, serviceId };
        int[] types = new int[] { Types.INTEGER, Types.INTEGER };
        return getJdbcTemplate().queryForLong(sql, params, types);
    }
}
If you really want to test the DAO, create an in-memory database, fill it with the expected values, execute the query within the DAO, and check that the result is correct for the previously inserted values.
Mocking the Connection, ResultSet, and PreparedStatement is too heavy, and the results are not as expected, because you are not accessing a real db.
Note: to use this approach your in-memory database should have the same dialect as your physical database, so don't use functions or syntax specific to the final database; try to follow the SQL standard.
If you use an in-memory database you are "mocking" the whole database, so the resulting test is not a real unit test, but it is not an integration test either. Use a tool like DbUnit to easily configure and fill your database if you like this approach.
Consider that mocking the database classes (PreparedStatement, Statement, ResultSet, Connection) is a long process, and you have no guarantee it works as expected, because you are not validating your SQL against a real SQL engine.
You can also take a look at an article by Lasse Koskela on unit testing DAOs.
To test the DAO you need to:
1. Empty the database (not necessary for an in-memory db)
2. Fill the database with example data (automatic with DbUnit, done in the @BeforeClass or @Before method)
3. Run the test (with JUnit)
If you like to formally separate real unit tests from integration tests, you can move the DAO tests to a separate directory and run them only when needed, alongside the integration tests.
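Using H2 (introduced below) as the in-memory database, the shape of such a test might be something like this sketch; the table, rows, and query here are illustrative, and a real test would point the DAO's JdbcTemplate at the same DataSource:

import java.sql.DriverManager
import org.junit.After
import org.junit.Assert.assertEquals
import org.junit.Before
import org.junit.Test
import org.springframework.jdbc.core.JdbcTemplate
import org.springframework.jdbc.datasource.SingleConnectionDataSource

class UserIdLookupTest {

    private lateinit var dataSource: SingleConnectionDataSource

    @Before
    fun setUp() {
        // fresh in-memory database, created and populated for each test
        val connection = DriverManager.getConnection("jdbc:h2:mem:daotest")
        dataSource = SingleConnectionDataSource(connection, true)
        val jdbc = JdbcTemplate(dataSource)
        jdbc.execute("create table service_user (user_id int, service_id int, id bigint)")
        jdbc.update("insert into service_user values (42, 7, 1001)")
    }

    @Test
    fun returnsIdForKnownUserAndService() {
        // the same query the DAO would otherwise load from its message source
        val id = JdbcTemplate(dataSource).queryForObject(
            "select id from service_user where user_id = ? and service_id = ?",
            Long::class.javaObjectType, 42, 7)
        assertEquals(1001L, id)
    }

    @After
    fun tearDown() = dataSource.destroy()
}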
A possible in-memory database that has different compatibility modes is H2, which can emulate the following databases:
IBM DB2
Apache Derby
HSQLDB
MS SQL Server
MySQL
Oracle
PostgreSQL
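The compatibility mode is selected through the JDBC URL; a minimal sketch:

import java.sql.DriverManager

// MODE picks the emulated dialect, e.g. MySQL, Oracle, PostgreSQL, ...
val connection = DriverManager.getConnection("jdbc:h2:mem:test;MODE=MySQL")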
Imagine you have four MySQL database schemas across two environments:
foo (the prod db),
bar (the in-progress restructuring of the foo db),
foo_beta (the test db),
and bar_beta (the test db for new structures).
Further, imagine you have a Spring Boot app with Hibernate annotations on the entities, like so:
@Table(name="customer", schema="bar")
public class Customer { ... }

@Table(name="customer", schema="foo")
public class LegacyCustomer { ... }
When developing locally it's no problem: you mimic the production database table names in your local environment. But then you try to demo functionality before it goes live and want to upload it to the server. You start another instance of the app on another port and realize this copy needs to point to "foo_beta" and "bar_beta", not "foo" and "bar"! What to do?
Were you using only one schema in your app, you could have left off the schema altogether and specified hibernate.default_schema, but you're using two, so that's out.
Spring EL, e.g. @Table(name="customer", schema="${myApp.schemaName}"), isn't an option either (the request was even met with some snooty "no-one needs this" comments). So if dynamically defining schemas is absurd, what does one do? Other than, you know, not getting into this ridiculous scenario in the first place.
I have fixed this kind of problem by adding support for my own schema annotation to Hibernate. It is not very hard to implement by extending LocalSessionFactoryBean (or AnnotationSessionFactoryBean for Hibernate 3). The annotation looks like this:
@Target(TYPE)
@Retention(RUNTIME)
public @interface Schema {
    String alias() default "";
    String group() default "";
}
Example of usage:

@Entity
@Table
@Schema(alias = "em", group = "ref")
public class SomePersistent {
}
And a schema name for every combination of alias and group is specified in the Spring configuration.
You can try with interceptors:
public class CustomInterceptor extends EmptyInterceptor {
    @Override
    public String onPrepareStatement(String sql) {
        String preparedStatement = super.onPrepareStatement(sql);
        preparedStatement = preparedStatement.replaceAll("schema", "Schema1");
        return preparedStatement;
    }
}
Add this interceptor to the session object like so:

Session session = sessionFactory.withOptions().interceptor(new CustomInterceptor()).openSession();
So whenever onPrepareStatement is executed, this block of code is called and every occurrence of "schema" in the generated SQL is replaced with "Schema1".
You can override the settings you declare in the annotations using an orm.xml file. Configure Maven, or whatever you use to generate your deployable build artifacts, to include that override file in the test environment's build.
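For example, a test-build orm.xml might remap the schemas from the question like so (a sketch; the class names are illustrative, and values given in the XML override the corresponding annotation values):

<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm" version="2.1">
    <!-- overrides the schema declared in each @Table annotation -->
    <entity class="com.example.Customer">
        <table name="customer" schema="bar_beta"/>
    </entity>
    <entity class="com.example.LegacyCustomer">
        <table name="customer" schema="foo_beta"/>
    </entity>
</entity-mappings>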
Consider a situation where each client's data is stored in its own database/catalog and all such databases are hosted in a single RDBMS (client-data). Master data (e.g. the list of clients) is kept in another RDBMS (master-data). How can we dynamically access a particular database in the client-data RDBMS by means of JdbcTemplate?
Defining a DataSource for each database in the client-data RDBMS and then dynamically selecting one, as suggested here, is not an option for us, since the databases are created and destroyed dynamically.
I basically need something like JDBC's Connection.setCatalog(String catalog), but I have not found anything like that in Spring's JdbcTemplate.
Maybe you could wrap the DataSource with DelegatingDataSource to call setCatalog() in getConnection(), and use the wrapped DataSource when creating the JdbcTemplate:

class MyDelegatingDS extends DelegatingDataSource {

    private final String catalogName;

    public MyDelegatingDS(final String catalogName, final DataSource dataSource) {
        super(dataSource);
        this.catalogName = catalogName;
    }

    @Override
    public Connection getConnection() throws SQLException {
        final Connection cnx = super.getConnection();
        cnx.setCatalog(this.catalogName);
        return cnx;
    }

    // maybe also override the other getConnection(username, password) variant
}

// then use it like: new JdbcTemplate(new MyDelegatingDS("catalogName", dataSource));
You can access the Connection from JdbcTemplate:
jdbcTemplate.getDataSource().getConnection().setCatalog(catalogName);
You'll only have to make sure the database driver supports this functionality.
jdbcTemplate.getDataSource().getConnection().setSchema(schemaName)
was what I needed for switching schemas using Postgres. Props to @m3th0dman for putting me on the right track. I'm only adding this in case others find this answer while searching for schema switching, as I was.
I have an issue testing a Hibernate application which queries multiple catalogs/schemas.
The production database is Sybase, and in addition to entities mapped to the default catalog/schema there are two entities mapped as below. There are therefore three catalogs in total.
@Table(catalog = "corp_ref_db", schema = "dbo", name = "WORKFORCE_V2")
public class EmployeeRecord implements Serializable {
}

@Table(catalog = "reference", schema = "dbo", name = "cntry")
public class Country implements Serializable {
}
This all works in the application without any issues. However, when unit testing, my usual strategy is to use HSQL with Hibernate's DDL flag set to auto and have DbUnit populate the tables.
This all works fine when the tables are all in the same schema.
However, since adding these additional tables, testing is broken: the DDL will not run, as HSQL only supports one catalog. The generated statement

create table corp_ref_db.dbo.WORKFORCE_V2

fails with:

user lacks privilege or object not found: CORP_REF_DB
If there were only two catalogs, then I think it would maybe be possible to get round this by changing the default catalog and schema in the HSQL database to the one explicitly defined.
Is there any other in-memory database for which this might work, or is there any strategy for getting the tests to run in HSQL?
I had thought of providing an orm.xml file which specified the default catalog and schema (overriding any annotations and having all the defined tables created in the default catalog/schema); however, these overrides do not seem to be observed when the DDL is executed, i.e. I get the same error as above.
Essentially, I would like to run my existing tests and either somehow have the tables created as they are defined in the mappings, or somehow override the catalog/schema definitions at the entity level.
I cannot think of any way to achieve either outcome. Any ideas?
I believe H2 supports catalogs. I haven't used them myself, but there's a CATALOGS table in its INFORMATION_SCHEMA.
I managed to achieve something like this in H2 via the IGNORE_CATALOGS property, using version 1.4.200.
However, the URL example from their docs did not seem to work for me, so I added a statement to my schema.xml instead:

SET IGNORE_CATALOGS = true;