H2 PostgreSQL mode does not seem to be working for me - Java

My application accesses a Postgres database, and I have many predefined queries (rank, partition, complex joins, etc.) that I run against it. Now I want to unit test the behaviour of these queries with small test data.
So I started with H2/JUnit, but found that most of my Postgres queries (rank, partition, complex CASE WHEN updates, etc.) did not work. So I thought of using the H2 PostgreSQL compatibility mode - will all Postgres queries work on H2?
I followed the H2 documentation, which says:
To use the PostgreSQL mode, use the database URL jdbc:h2:~/test;MODE=PostgreSQL or the SQL statement SET MODE PostgreSQL.
I enabled the mode using SET MODE PostgreSQL and tried to run one of the queries, which involves rank() and works in Postgres, but it did not work in H2. It gives me the following exception:
Function "RANK" not found; in SQL statement
I am new to H2 and database testing. I am using the H2 JDBC driver to fire Postgres queries, thinking that the H2 PostgreSQL compatibility mode will allow me to do so.
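For reference, a hedged reconstruction of the setup described, as a minimal JDBC sketch (the table and query are illustrative, not the asker's actual code):

    import java.sql.*;

    public class H2PgModeRepro {
        public static void main(String[] args) throws Exception {
            // Open an in-memory H2 database in PostgreSQL compatibility mode
            Connection conn = DriverManager.getConnection(
                    "jdbc:h2:mem:test;MODE=PostgreSQL", "sa", "");
            Statement st = conn.createStatement();
            st.execute("CREATE TABLE results (id INT, score INT)");
            // Valid in PostgreSQL; on the H2 version described above this
            // fails with: Function "RANK" not found
            st.executeQuery(
                    "SELECT id, RANK() OVER (ORDER BY score DESC) FROM results");
        }
    }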

So I thought of using H2 PostgreSQL compatibility mode, thinking all Postgres queries will work on H2 - please correct me if I am wrong
I'm afraid that's not true.
H2 tries to emulate PostgreSQL syntax and supports a few of its features and extensions. It'll never be a full match for PostgreSQL's behaviour, and it doesn't support all of its features.
The only options you have are:
Use PostgreSQL in testing; or
Stop using features not supported by H2
I suggest using Pg for testing. It is relatively simple to write a test harness that initdb's a postgres instance and launches it for testing, then tears it down afterwards.
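A minimal sketch of such a harness, assuming initdb and pg_ctl are on the PATH (the port and data-directory handling are illustrative):

    import java.nio.file.*;

    public class ThrowawayPostgres {
        public static void main(String[] args) throws Exception {
            Path dataDir = Files.createTempDirectory("pg-test");
            // Create a fresh cluster and start it on a spare port ...
            run("initdb", "-D", dataDir.toString(), "-A", "trust");
            run("pg_ctl", "-D", dataDir.toString(), "-o", "-p 5433", "-w", "start");
            try {
                // ... run tests against jdbc:postgresql://localhost:5433/postgres ...
            } finally {
                // ... then stop the server (and delete dataDir if desired)
                run("pg_ctl", "-D", dataDir.toString(), "-m", "fast", "stop");
            }
        }

        private static void run(String... cmd) throws Exception {
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }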
Update based on comments:
There's no hard line between "unit" and "integration" tests. In this case, H2 is an external component too. Purist unit tests would have a dummy responder to queries as part of the test harness. Testing against H2 is just as much an "integration" test as testing against PostgreSQL. The fact that it's in-process and in-memory is a convenience, but not functionally significant.
If you want to unit test you should write another database target for your app to go alongside your "PostgreSQL", "SybaseIQ", etc targets. Call it, say, "MockDatabase". This should just return the expected results from queries. It doesn't really run the queries, it only exists to test the behaviour of the rest of the code.
Personally, I think that's a giant waste of time, but that's what a unit testing purist would do to avoid introducing external dependencies into the test harness.
If you insist on having unit (as opposed to integration) tests for your DB components but can't/won't write a mock interface, you must instead find a way to use an existing one. H2 would be a reasonable candidate for this - but you'll have to write a new backend with a new set of queries that work for H2; you can't just re-use your PostgreSQL backend. As we've already established, H2 doesn't support all the features you need to use with PostgreSQL, so you'll have to find different ways to do the same things with H2. One option would be to create a simple H2 database with "expected" results and simple queries that return those results, completely ignoring the real application's schema. The only real downside here is that it can be a major pain to maintain ... but that's unit testing.
Personally, I'd just test with PostgreSQL. Unless I'm testing individual classes or modules that stand alone as narrow-interfaced well-defined units, I don't care whether someone calls it a "unit" or "integration" test. I'll unit test, say, data validation classes. For database interface code purist unit testing makes very little sense and I'll just do integration tests.
While having an in-process in-memory database is convenient for that, it isn't required. You can write your test harness so that the setup code initdbs a new PostgreSQL and launches it; then the teardown code kills the postmaster and deletes the datadir. I wrote more about this in this answer.
See also:
Running PostgreSQL in memory only
As for:
If all queries with expected end datasets work fine in Postgres, I can assume they will work fine in all other DBs
If I understand what you're saying correctly then yes, that's the case - if the rest of your code works with a dataset from PostgreSQL, it should generally work the same with a dataset containing the same data from another database. So long as it's using simple data types and not database-specific features, of course.

Related

Can you run hibernate without a database set up?

I am currently working on an application that uses Hibernate as its ORM; however, there is currently no database set up on my machine and I want to start running some tests without one. I figure that since Hibernate is object/code based, there must be a way to simulate the DB functionality.
If there isn't a way to do it through Hibernate, how can this be achieved in the general case (simulation of a database)? Obviously, it won't need to handle large amounts of data, just test functionality.
Just use an embedded DB like Derby
Maybe you could also try using a JDBC-ODBC bridge and connecting to an Excel or Access file, on Windows.
Hibernate is an object-relational mapping tool (ORM). You can't use it without objects and a relational database. Excluding either one makes no sense.
There are plenty of open source, free relational databases to choose from:
MySQL
PostgreSQL
Hypersonic
Derby
MariaDB
SQLite
You're only limited by your ability to download and install one.
Other options are in-memory databases like H2 or HSQLDB
I assume you have hidden all the ORM calls behind a clean interface.
If you did, you could simply write another implementation of that interface backed by a Map that caches your objects.
You could then utilize this in your test environment.
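A minimal sketch of such a Map-backed implementation, assuming a hypothetical UserDao interface hides the ORM calls (User is a stand-in entity with an id accessor):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical entity, kept trivial for the sketch
    class User {
        private final long id;
        User(long id) { this.id = id; }
        long getId() { return id; }
    }

    // Hypothetical interface the real ORM-backed implementation also implements
    interface UserDao {
        void save(User user);
        User findById(long id);
    }

    // Test-only implementation: caches objects in a Map instead of hitting a database
    class InMemoryUserDao implements UserDao {
        private final Map<Long, User> store = new HashMap<Long, User>();

        public void save(User user) {
            store.put(user.getId(), user);
        }

        public User findById(long id) {
            return store.get(id);
        }
    }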
But I have to agree with @duffymo: you should just go through the 'first pain' and set up a proper working environment.
I'm using H2. One of its major advantages is the use of dialects that simulate the behaviour of the more common DBs. For example - I'm using PostgreSQL, and I define the dialect for Hibernate to be PostgreSQL. I'm using it for my integration tests - in each test I create the data that fits my scenario for this test, which is then erased pretty easily. No need to roll back transactions or anything.

Best practice for testing Hibernate mappings

I am wondering what people have found their best practice to be for testing Hibernate mappings and queries?
This cannot be done with unit testing, so my experience has been to write integration tests that solely test the DAO layer downwards. This way I can fully test each insert / update / delete / read query without testing the full end-to-end solution.
Whenever the integration test suite is run, it will:
Drop and re-create the database.
Run an import SQL script that contains a subset of data.
Run each test in a transactional context that rolls back the transaction. Therefore it can be run multiple times, as an independent test or as part of a suite, and the same result is returned as the database is always in a known state. (A sketch of this pattern follows below.)
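A hedged sketch of that transactional rollback setup using Spring's test support (the context file, DAO, and imported data are illustrative):

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.transaction.annotation.Transactional;
    import static org.junit.Assert.assertNotNull;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:test-context.xml") // hypothetical test context
    @Transactional // every test method runs in a transaction that is rolled back
    public class CustomerDaoIntegrationTest {

        @Autowired
        private CustomerDao customerDao; // hypothetical DAO under test

        @Test
        public void findsCustomerFromImportScript() {
            // The import SQL script run at setup provides the known state
            assertNotNull(customerDao.findByName("Alice"));
        }
    }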
I never test against a different "in memory" database, as there is always an equivalent development database to test against.
I have never had the need to use DBUnit.
Never use DbUnit for this. It's way too much overhead for this level of testing.
Especially if you're using Spring in your app, check out the Spring Test Framework to help manage your data-access tests, particularly the transaction management features.
An "equivalent development database" is fine, but an in-memory H2 database will blow away anything else for speed. That's important because, while the unit/integration status of these tests may be contested, they're tests you want to run a lot, so they need to be as fast as possible.
So my DAO tests look like this:
Spring manages the SessionFactory and TransactionManager.
Spring handles all transactions for test methods.
Hibernate creates the current schema in an in-memory H2 database.
Test all the save, load, delete, and find methods, doing field-for-field comparison on before and after objects. (E.g. create object foo1, save it, load it as foo2, verify foo1 and foo2 contain identical values.)
Very lightweight and useful for quick feedback.
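A hedged sketch of one such save/load test, using Spring's transactional base class (Widget, WidgetDao, and the context file are hypothetical):

    import org.hibernate.SessionFactory;
    import org.junit.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;
    import static org.junit.Assert.assertEquals;

    @ContextConfiguration("classpath:dao-test-context.xml") // hypothetical context
    public class WidgetDaoTest extends AbstractTransactionalJUnit4SpringContextTests {

        @Autowired private SessionFactory sessionFactory;
        @Autowired private WidgetDao widgetDao; // hypothetical DAO under test

        @Test
        public void saveThenLoadReturnsIdenticalValues() {
            Widget before = new Widget("gizmo", 42); // hypothetical entity
            widgetDao.save(before);
            sessionFactory.getCurrentSession().flush();
            sessionFactory.getCurrentSession().clear(); // force a real reload below
            Widget after = widgetDao.load(before.getId());
            assertEquals(before.getName(), after.getName());
            assertEquals(before.getCount(), after.getCount());
        }
    }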
If you don't depend on proprietary RDBMS features (triggers, stored procedures, etc.) then you can easily and fully test your DAOs using JUnit and an in-memory database like HSQLDB. You'll need some rudimentary hibernate.cfg.xml emulation via a class (to initialize Hibernate with HSQLDB and load the hbm.xml files you want) and then pass the provided data source to your DAOs.
Works well and provides real value to the development lifecycle.
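A hedged sketch of that programmatic setup using the classic Hibernate 3 Configuration API (the mapping resource and property values are illustrative):

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class TestHibernateBootstrap {
        public static SessionFactory build() {
            // Configure Hibernate against in-memory HSQLDB, replacing hibernate.cfg.xml
            Configuration cfg = new Configuration()
                    .setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbcDriver")
                    .setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:testdb")
                    .setProperty("hibernate.connection.username", "sa")
                    .setProperty("hibernate.connection.password", "")
                    .setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect")
                    .setProperty("hibernate.hbm2ddl.auto", "create-drop")
                    .addResource("com/example/Foo.hbm.xml"); // hypothetical mapping file
            return cfg.buildSessionFactory();
        }
    }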
The way I do it is pretty similar to your own, with the exception of actually using in-memory databases, like HSQLDB. It's faster and more portable than having a real database configured (one that runs in a standalone server). It's true that for certain more advanced features HSQLDB won't work, as it simply does not support them, but I've noticed that I hardly run into those when just integration testing my data access layer. However, if this is the case, I like to use the "jar" version of MySQL, which allows me to start a fully functional MySQL server from Java and shut it down when I'm done. This is not very practical, as the jar file is quite big:
http://dev.mysql.com/doc/refman/5.0/en/connector-mxj-configuration-java-object.html
but it's still useful in some instances.

Create an in-memory database structure from an Oracle instance

I have an application where many "unit" tests use a real connection to an Oracle database during their execution.
As you can imagine, these tests take too much time to execute, as they need to initialize some Spring contexts and communicate with the Oracle instance. In addition to that, we have to manage complex mechanisms, such as transactions, in order to avoid database modifications after the test execution (even if we use useful classes from Spring like AbstractAnnotationAwareTransactionalTests).
So my idea is to progressively replace this Oracle test instance with an in-memory database. I will use HSQLDB, or maybe better, H2.
My question is to know what is the best approach to do that. My main concern is related to the construction of the in-memory database structure and insertion of reference data.
Of course, I can extract the database structure from Oracle using some tool like SQL Developer or TOAD, and then modify these scripts to adapt them to the HSQLDB or H2 dialect. But I don't think that's the best approach.
In fact, I already did that on another project using HSQLDB, but I wrote all the table creation scripts manually. Fortunately, I had only a few tables to create. My main problem during this step was to "translate" the Oracle scripts used to create tables into the HSQLDB dialect.
For example, a table created in Oracle using the following sql command:
CREATE TABLE FOOBAR (
    SOME_ID NUMBER,
    SOME_DATE DATE, -- Add primary key constraint
    SOME_STATUS NUMBER,
    SOME_FLAG NUMBER(1) DEFAULT 0 NOT NULL);
needed to be "translated" for hsqldb to:
CREATE TABLE FOOBAR (
    SOME_ID NUMERIC,
    SOME_DATE TIMESTAMP, -- Add primary key constraint
    SOME_STATUS NUMERIC,
    SOME_FLAG INTEGER DEFAULT 0 NOT NULL);
In my current project, there are too many tables to do that manually...
So my questions:
What advice can you give me to achieve that?
Do H2 or HSQLDB provide any tools to generate their scripts from an Oracle connection?
Technical information
Java 1.6, Spring 2.5, Oracle 10g, Maven 2
Edit
Some information regarding my unit tests:
In the application where I used hsqldb, I had the following tests:
- Some "basic" unit tests, which have nothing to do with DB.
- For DAO testing, I used hsqldb to execute database manipulations, such as CRUD.
- Then, on the service layer, I used Mockito to mock my DAO objects, in order to focus on the service test and not the whole application (i.e. service + DAO + DB).
In my current application, we have the worst scenario: The DAO layer tests need an Oracle connection to be run. The services layer does not use (yet) any mock objects to simulate the DAO. So services tests also need an Oracle connection.
I am aware that mocks and an in-memory database are two separate points, and I will address them as soon as possible. However, my first step is to replace the Oracle connection with an in-memory database, and then I will use my Mockito knowledge to enhance the tests.
Note that I also want to separate unit tests from integration tests. The latter will need access to the Oracle database to execute "real" tests, but my main concern (and this is the purpose of this question) is that almost none of my unit tests run in isolation today.
Use an in-memory / Java database for testing. This will ensure the tests are closer to the real world than if you try to 'abstract away' the database in your test. Probably such tests are also easier to write and maintain. On the other hand, what you probably do want to 'abstract away' in your tests is the UI, because UI testing is usually hard to automate.
The Oracle syntax you posted works well with the H2 database (I just tested it), so it seems H2 supports the Oracle syntax better than HSQLDB. Disclaimer: I'm one of the authors of H2. If something doesn't work, please post it on the H2 mailing list.
You should anyway have the DDL statements for the database in your version control system. You can use those scripts for testing as well. Possibly you also need to support multiple schema versions - in that case you could write version update scripts (alter table...). With a Java database you can test those as well.
By the way, you don't necessarily need to use the in-memory mode when using H2 or HSQLDB. Both databases are fast even if you persist the data. And they are easy to install (just a jar file) and need much less memory than Oracle.
Latest HSQLDB 2.0.1 supports ORACLE syntax for DUAL, ROWNUM, NEXTVAL and CURRVAL via a syntax compatibility flag, sql.syntax_ora=true. In the same manner, concatenation of a string with a NULL string and restrictions on NULL in UNIQUE constraints are handled with other flags. Most ORACLE functions such as TO_CHAR, TO_DATE, NVL etc. are already built in.
At the moment, to use simple ORACLE types such as NUMBER, you can use a type definition:
CREATE TYPE NUMBER AS NUMERIC
The next snapshot will allow NUMBER(N) and other aspects of ORACLE type compatibility when the flag is set.
Download from http://hsqldb.org/support/
[Update:] The snapshot issued on Oct 4 translates most Oracle specific types to ANSI SQL types. HSQLDB 2.0 also supports the ANSI SQL INTERVAL type and date / timestamp arithmetic the same way as Oracle.
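For example, the compatibility flag can be set directly on the connection URL (a hedged sketch; the query is illustrative):

    import java.sql.*;

    public class HsqldbOracleMode {
        public static void main(String[] args) throws Exception {
            // In-memory HSQLDB with the Oracle syntax compatibility flag enabled
            Connection conn = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:test;sql.syntax_ora=true", "SA", "");
            Statement st = conn.createStatement();
            // Oracle-style query using SYSDATE and DUAL, accepted with the flag set
            ResultSet rs = st.executeQuery("SELECT SYSDATE FROM DUAL");
            while (rs.next()) {
                System.out.println(rs.getTimestamp(1));
            }
        }
    }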
What are your unit tests for?
If they test the proper working of DDL and stored procedures, then you should write the tests "closer" to Oracle: either without Java code, or without Spring and other nice web interfaces at all, focusing on the DB.
If you want to test the application logic implemented in Java and Spring then you may use mock objects/database connection to make your tests independent of the database.
If you want to test the system working as a whole (which goes against the modular development and testing principle), then you may virtualize your database and test on that instance, without the risk of making some nasty irreversible modifications.
As long as your tests clean up after themselves (as you already seem to know how to set up), there's nothing wrong with running tests against a real database instance. In fact it's the approach I usually prefer, because you'll be testing something as close to production as possible.
The incompatibilities seem small, but really end up biting back not so long afterwards. In a good case, you may get away with some nasty sql translation / extensive mockery. In bad cases, parts of the system will be just impossible to test, which I think is an unacceptable risk for business-critical systems.

Simple and reliable in memory database for fast java integration tests with support for JPA

My integration tests would run much faster if I used an in-memory database instead of PostgreSQL. I use JPA (Hibernate) and I need an in-memory database that would be easy to switch to with JPA, easy to set up, and reliable. It needs to support JPA and Hibernate (or vice versa, if you will) rather extensively, since I have no desire to adapt my data access code for tests.
What database is the best choice given requirements above?
For integration testing, I now use H2 (from the original author of HSQLDB), which I prefer over HSQLDB. It is faster (and I want my tests to be as fast as possible), it has some nice features like the compatibility mode, and the dev team is very responsive (while HSQLDB remained dormant for years until very recently).
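A hedged sketch of switching an existing persistence unit to in-memory H2 for tests (the unit name "test-pu" and the property values are illustrative; javax.persistence.jdbc.* are the standard JPA 2.0 property names):

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class TestEntityManagerFactory {
        public static EntityManagerFactory create() {
            Map<String, String> props = new HashMap<String, String>();
            props.put("javax.persistence.jdbc.driver", "org.h2.Driver");
            // MODE=PostgreSQL enables the compatibility mode mentioned above
            props.put("javax.persistence.jdbc.url",
                    "jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1");
            props.put("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
            props.put("hibernate.hbm2ddl.auto", "create-drop");
            // "test-pu" is a hypothetical persistence unit from persistence.xml
            return Persistence.createEntityManagerFactory("test-pu", props);
        }
    }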
I've been using HSQLDB in-memory for integration testing JPA/Hibernate persistence in Java. Starts pretty quickly, doesn't require any special setup.
The only issue I've seen so far with using HSQLDB with Hibernate was to do with batch size needing to be set to 0, but that might just have been related to an old version. I'll have a dig and see if I can find details of that problem.
Derby supports an in-memory mode these days, it is no longer marked experimental.
I use Derby. For one thing, it is about three fewer lines of code per unit test, since there is no need for a shutdown after the test. However, you need to use a JPA implementation that can drop and create tables, such as EclipseLink.
Derby can also initialize a new in-memory database from a file, so you can have a reference database and revert to it at any time.
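A hedged example of the in-memory URLs involved (the database name is illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class DerbyInMemory {
        public static void main(String[] args) throws Exception {
            // Create and use a throwaway in-memory Derby database
            Connection conn = DriverManager.getConnection(
                    "jdbc:derby:memory:testdb;create=true");
            conn.createStatement().execute("CREATE TABLE T (ID INT)");
            conn.close();
            // Dropping the database signals success via an SQLException
            try {
                DriverManager.getConnection("jdbc:derby:memory:testdb;drop=true");
            } catch (SQLException expected) {
                // Derby reports a successful drop this way
            }
        }
    }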
For unit testing, though, I prefer to create my objects in my unit test's @Before logic. I find it easier, especially with JPA, as it gives me the flexibility to refactor without having to worry about the underlying database structure; other tools such as DBUnit rely on a practically static structure, and refactoring implies changing the DBUnit XMLs manually rather than relying on Eclipse's refactoring capabilities.

JDBC SQL Server Database Migrations

I'm working on a Java web application (Adobe Flex front-end, JPA/Hibernate/BlazeDS/Spring MVC backend) and will soon reach the point where I can no longer wipe the database and regenerate it.
What's the best approach for handling changes to the DB schema? The production and test databases are SQL Server 2005, devs use MySQL, and unit tests run against an HSQLDB in-memory database. I'm fine with having dev machines continue to wipe and reload the DB from sample data using Hibernate to regenerate the tables. However, for a production deploy the DBA would like to have a DDL script that he can manually execute.
So, my ideal solution would be one where I can write Rails-style migrations, execute them against the test servers, and after verifying that they work be able to write out SQL Server DDL that the DBA can execute on the production servers (and which has already been validated to work against the test servers).
What's a good tool for this? Should I be writing the DDL manually (and just let dev machines use Hibernate to regenerate the DB)? Can I use a tool like migrate4j (which seems to have limited support for SQL Server, if at all)?
I'm also looking to integrate DB manipulation scripts into this process (for example, converting a "Name" field into a "First Name", "Last Name" field via a JDBC script that splits all the existing strings).
Any suggestions would be much appreciated!
What's the best approach for handling changes to the DB schema?
Idempotent change scripts with a version table (and a tool to apply all the change scripts with a number greater than the version currently stored in the version table). Also check the mentioned post Bulletproof Sql Change Scripts Using INFORMATION_SCHEMA Views.
To implement this, you could roll out your own solution or use existing tools like DbUpdater (mentioned in the comments of change scripts), LiquiBase, or dbdeploy. The latter has my preference.
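A minimal sketch of the version-table idea itself (table and column names are illustrative; the tools above do much more):

    import java.sql.*;
    import java.util.Map;
    import java.util.SortedMap;

    public class SchemaUpdater {
        // Applies every change script whose number is greater than the
        // version currently recorded in the SCHEMA_VERSION table.
        public static void update(Connection conn, SortedMap<Integer, String> scripts)
                throws SQLException {
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery(
                    "SELECT COALESCE(MAX(VERSION), 0) FROM SCHEMA_VERSION");
            rs.next();
            int current = rs.getInt(1);
            for (Map.Entry<Integer, String> e : scripts.tailMap(current + 1).entrySet()) {
                st.execute(e.getValue()); // the change script itself
                st.executeUpdate("INSERT INTO SCHEMA_VERSION (VERSION) VALUES ("
                        + e.getKey() + ")");
            }
            st.close();
        }
    }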
I depend on Hibernate to create whatever it needs on the production server. There's no risk of losing data because it never removes anything: it only adds what is missing.
On the current project, we have established a convention by which any feature that requires a change in the database (schema or data) is required to provide its own DDL/DML snippets, meaning that all we need to do is to aggregate the snippets into a single script and execute it to get production up to date. None of this works on a very large scale (the order of snippets becomes critical, not everyone follows the convention, etc.), but in a small team and an iterative process it works just fine.
