I'm working on integration tests and I'm trying to load SQL scripts to set up the database before a test.
I don't want to use DbUnit since I want to be able to do more than just inserting data and I'm also looking for better performance.
I've tried using JdbcTestUtils from Spring, but it fails when I want to execute a statement like this one:
SELECT setval('member_id_seq', 100000);
So I'm looking for a better solution.
I'm sure there's a library/framework that would allow me to execute any SQL script in Java, but I can't find it. Any suggestions?
p.s. - I'm using PostgreSQL, Spring, JPA/Hibernate
p.p.s. - I know I could also create a wrapper around the PostgreSQL psql command, but it requires having PostgreSQL installed on the continuous integration server and I was hoping I could avoid that.
Related
I want to test a handwritten DAO that uses the SQLite JDBC driver. My plan was to keep the schema and data insertion in version control as .sql files and execute them before the test to get a populated database that I can use for testing.
Searching for a solution to execute a whole SQL script using JDBC turned up a bunch of Stack Overflow threads saying that it is not possible and providing parsing scripts that split the SQL script into separate SQL statements (SQLScriptRunner).
Those posts were mostly 3+ years old, so I am wondering whether there is still no "easy" way to execute SQL scripts using the JDBC API.
I am asking because SQLite gives me the option to clone a database from an existing one, which I would prefer over a big script-executor implementation (the executor would probably be bigger than all my data access code combined).
So, is there an easy, out-of-the-box approach to executing SQL scripts using JDBC, or is it still only possible with some parsing script?
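As far as I can tell there is still nothing built into the JDBC API itself; the usual approach remains a small parsing step before executing each statement individually. A minimal sketch of such a splitter (the class name is hypothetical, and it deliberately ignores semicolons inside string literals and block comments):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal SQL script splitter: skips blank lines and line comments,
// and splits on a trailing ';'. A sketch only -- it does not handle
// semicolons inside string literals or /* ... */ comments.
public class SqlScriptSplitter {
    public static List<String> split(String script) {
        List<String> statements = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String line : script.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("--")) {
                continue; // skip blank lines and line comments
            }
            current.append(line).append('\n');
            if (trimmed.endsWith(";")) {
                String stmt = current.toString().trim();
                statements.add(stmt.substring(0, stmt.length() - 1));
                current.setLength(0);
            }
        }
        return statements;
    }
}
```

Each returned statement can then be passed to a plain `Statement.execute(...)` in a loop.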
I guess the Spring Framework's ScriptUtils might do the trick for you.
Please check
http://docs.spring.io/autorepo/docs/spring/4.0.9.RELEASE/javadoc-api/org/springframework/jdbc/datasource/init/ScriptUtils.html
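A sketch of how ScriptUtils might be used (this assumes spring-jdbc on the classpath and a test-data.sql file of your own on the test classpath; the helper class and file name are hypothetical):

```java
import java.sql.Connection;

import javax.sql.DataSource;

import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.ScriptUtils;

public class ScriptLoader {
    // Executes every statement in the given classpath script against the
    // data source. ScriptUtils takes care of comments and statement splitting.
    public static void runScript(DataSource dataSource, String path) throws Exception {
        try (Connection connection = dataSource.getConnection()) {
            ScriptUtils.executeSqlScript(connection, new ClassPathResource(path));
        }
    }
}
```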
My plan was to keep the schema and data insertion in version control
as .sql files and execute them before the test to get a populated
database that i can use for testing.
For this purpose there is a library called DbUnit: http://dbunit.sourceforge.net/
I personally find it a little bit tricky to use without a proper wrapper.
Some of those wrappers:
Spring Test DbUnit https://springtestdbunit.github.io/spring-test-dbunit/
Unitils DbUnit http://www.unitils.org/tutorial-database.html
While not an "out of the box" JDBC solution, I would be inclined to use SqlTool. It is especially friendly with HSQLDB, but it can be used with (almost?) any JDBC driver. (The documentation shows examples on how to connect with PostgreSQL, SQL Server, Oracle, Derby, etc.) It's available via Maven, and the JAR file is only ~150KB. For an example of how to run an SQL script with it, see my other answer here.
Regarding the proposed Spring "ScriptUtils" solution in light of the comment in your question ...
I would prefer [some other solution] over a big script-executor implementation (the executor would probably be bigger than all my data access code combined).
... note that the dependencies for spring-jdbc include spring-core, spring-beans, spring-tx, and commons-logging-1.2 for a total of ~2.5MB. That is over 15 times larger than the SqlTool JAR file, and it doesn't even include any additional DbUnit "wrappers" that may or may not be required to make ScriptUtils easier to use.
My application accesses a Postgres database, and I have many predefined queries (RANK, PARTITION, complex joins, etc.) that I fire against Postgres. Now I want to unit test these queries' behaviour with small test data.
So I started with H2/JUnit. I found out that most of my Postgres queries, like RANK, PARTITION, and complex CASE WHEN updates, did not work on plain H2. So I thought of using the H2 PostgreSQL compatibility mode - will all Postgres queries work on H2?
I followed H2 documentation saying:
To use the PostgreSQL mode, use the database URL jdbc:h2:~/test;MODE=PostgreSQL or the SQL statement SET MODE PostgreSQL.
I enabled the mode using SET MODE PostgreSQL and tried to fire one of the queries, which involves rank() and works in Postgres, but it did not work in H2. It gives me the following exception:
Function "RANK" not found; in SQL statement
I am new to H2 and database testing. I am using the H2 JDBC driver to fire Postgres queries, thinking the H2 PostgreSQL compatibility mode will allow me to do so.
So I thought of using the H2 PostgreSQL compatibility mode, thinking all Postgres queries would work on H2 - please correct me if I am wrong
I'm afraid that's not true.
H2 tries to emulate PostgreSQL syntax and supports a few of its features and extensions. It will never be a full match for PostgreSQL's behaviour, and it doesn't support all its features.
The only options you have are:
Use PostgreSQL in testing; or
Stop using features not supported by H2
I suggest using Pg for testing. It is relatively simple to write a test harness that initdb's a postgres instance and launches it for testing then tears it down after.
Update based on comments:
There's no hard line between "unit" and "integration" tests. In this case, H2 is an external component too. Purist unit tests would have a dummy responder to queries as part of the test harness. Testing against H2 is just as much an "integration" test as testing against PostgreSQL. The fact that it's in-process and in-memory is a convenience, but not functionally significant.
If you want to unit test, you should write another database target for your app to go alongside your "PostgreSQL", "SybaseIQ", etc. targets. Call it, say, "MockDatabase". This should just return the expected results from queries. It doesn't really run the queries, it only exists to test the behaviour of the rest of the code.
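A minimal sketch of that idea (all names here are hypothetical): the application codes against an interface, and the test build supplies a stub that returns canned results instead of running any SQL:

```java
import java.util.Arrays;
import java.util.List;

// The application depends only on this interface; the real implementation
// would run SQL against PostgreSQL, SybaseIQ, etc.
interface MemberQueries {
    List<String> topMemberNames(int limit);
}

// The "MockDatabase" target: returns the expected results without running
// any queries, so the rest of the code can be tested in isolation.
class MockMemberQueries implements MemberQueries {
    @Override
    public List<String> topMemberNames(int limit) {
        return Arrays.asList("alice", "bob").subList(0, Math.min(limit, 2));
    }
}
```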
Personally, I think that's a giant waste of time, but that's what a unit testing purist would do to avoid introducing external dependencies into the test harness.
If you insist on having unit (as opposed to integration) tests for your DB components but can't/won't write a mock interface, you must instead find a way to use an existing one. H2 would be a reasonable candidate for this - but you'll have to write a new backend with a new set of queries that work for H2, you can't just re-use your PostgreSQL backend. As we've already established, H2 doesn't support all the features you need to use with PostgreSQL so you'll have to find different ways to do the same things with H2. One option would be to create a simple H2 database with "expected" results and simple queries that return those results, completely ignoring the real application's schema. The only real downside here is that it can be a major pain to maintain ... but that's unit testing.
Personally, I'd just test with PostgreSQL. Unless I'm testing individual classes or modules that stand alone as narrow-interfaced well-defined units, I don't care whether someone calls it a "unit" or "integration" test. I'll unit test, say, data validation classes. For database interface code purist unit testing makes very little sense and I'll just do integration tests.
While having an in-process in-memory database is convenient for that, it isn't required. You can write your test harness so that the setup code initdbs a new PostgreSQL and launches it; then the teardown code kills the postmaster and deletes the datadir. I wrote more about this in this answer.
See also:
Running PostgreSQL in memory only
As for:
If all queries with expected end datasets work fine in Postgres, I can assume it will work fine in all other DBs
If I understand what you're saying correctly then yes, that's the case - if the rest of your code works with a dataset from PostgreSQL, it should generally work the same with a dataset containing the same data from another database. So long as it's using simple data types and not database-specific features, of course.
I am currently working on an application that uses hibernate as its ORM; however, there is currently no database set up on my machine and I was wanting to start running some tests without one. I figure since hibernate is object/code based that there must be a way to simulate the DB functionality.
If there isn't a way to do it through Hibernate, how can this be achieved in the general case (simulation of a database)? Obviously, it won't need to handle large amounts of data, just testing functionality.
Just use an embedded DB like Derby
Maybe you could also try to use an ODBC-JDBC bridge and connect to an Excel or Access file, on Windows.
Hibernate is an object-relational mapping tool (ORM). You can't use it without objects and a relational database. Excluding either one makes no sense.
There are plenty of open source, free relational databases to choose from:
MySQL
PostgreSQL
Hypersonic
Derby
MariaDB
SQLite
You're only limited by your ability to download and install one.
Other options are in-memory databases like H2 or HSQLDB.
I assume you have hidden all the ORM calls behind a clean interface.
If you did, you could simply write another implementation of that interface backed by a Map that caches your objects.
You could then utilize this in your test environment.
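A sketch of such a Map-backed implementation (the interface and names are hypothetical; the real implementation of the same interface would delegate to the ORM):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical persistence interface that hides all ORM calls.
interface MemberStore {
    void save(long id, String name);
    Optional<String> findName(long id);
}

// Test-environment implementation: a plain map instead of Hibernate.
class InMemoryMemberStore implements MemberStore {
    private final Map<Long, String> members = new ConcurrentHashMap<>();

    @Override
    public void save(long id, String name) {
        members.put(id, name);
    }

    @Override
    public Optional<String> findName(long id) {
        return Optional.ofNullable(members.get(id));
    }
}
```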
But I have to agree with @duffymo: you should just go through the 'first pain' and set up a proper working environment.
I'm using H2. One of its major advantages is the use of dialects that simulate the behaviour of the more common DBs. For example - I'm using PostgreSQL and I define the dialect for Hibernate to be PostgreSQL. I'm using it for my integration tests - in each test I create the data that fits my scenario for this test, which is then erased pretty easily. No need to roll back transactions or anything.
I'm looking for a simple way to test Hibernate HQL criteria queries. I've tried using IntelliJ's Hibernate Console support, but I've run into problems.
Is there a standalone tool that provides a simple way to test HQL queries? A simple console program that creates the session factory and executes a query passed as an argument would suffice.
You can use the H2 (JDBC in-memory) database, and Jetty for your container, to create a container and context that will execute Hibernate queries in the unit-test phase of your build (or from JUnit).
Using a console would be my first choice for just playing with HQL (e.g. Hibernate Tools for Eclipse). But if that doesn't work, I would just use JUnit. My team uses that strategy to test the HQL queries that we use in production code, and occasionally to help write queries in the first place.
The test setup involves setting up an in-memory database (we use HSQLDB, but there are others). Insert data either with Hibernate or with raw SQL. Then configure a Hibernate SessionFactory to connect to it, and run your HQL.
We also use this to test other kinds of Hibernate settings or behaviors, and it has the side benefit of being a full test suite of Hibernate for our purposes, so that we can upgrade with confidence that nothing we need has changed in an unexpected way.
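The setup described above might look roughly like this (a sketch only: it assumes Hibernate and HSQLDB on the classpath, and the Member entity is a hypothetical stand-in for your own mapped classes):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Hypothetical minimal entity standing in for the real mapped classes.
@Entity
class Member {
    @Id
    Long id;
    String name;
}

public class HqlTestSupport {
    // Builds a SessionFactory against an in-memory HSQLDB so HQL can be
    // exercised from JUnit without a real database server.
    public static SessionFactory inMemoryFactory() {
        return new Configuration()
                .setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbc.JDBCDriver")
                .setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:testdb")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect")
                .setProperty("hibernate.hbm2ddl.auto", "create-drop")
                .addAnnotatedClass(Member.class)
                .buildSessionFactory();
    }
}
```

A test would open a session from this factory, insert its scenario data, and run the HQL under test.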
I doubt it's possible to have such a tool as a separate app: you'd need to specify a Hibernate configuration (which might be in a Spring context), and it would have to find the classes and their HBM files. Moreover, you might use different UserTypes or the like from other JARs, so I wouldn't even search for one.
For me the best tool is a unit testing framework plus the debugging facilities of your IDE - you can just stop at a point where you have a session created and do whatever you want in that mode. In IntelliJ, for instance, aside from the usual expressions, you can evaluate code fragments while debugging, which might help you with the Criteria API.
The last time I had to do this kind of work, I used DbUnit to test my data access layer, in which I was using JPA 2.0 with Hibernate.
You could try a JUnit approach.
I'm working on a Java web application (Adobe Flex front-end, JPA/Hibernate/BlazeDS/Spring MVC backend) and will soon reach the point where I can no longer wipe the database and regenerate it.
What's the best approach for handling changes to the DB schema? The production and test databases are SQL Server 2005, dev's use MySQL, and unit tests run against an HSQLDB in-memory database. I'm fine with having dev machines continue to wipe and reload the DB from sample data using Hibernate to regenerate the tables. However, for a production deploy the DBA would like to have a DDL script that he can manually execute.
So, my ideal solution would be one where I can write Rails-style migrations, execute them against the test servers, and after verifying that they work be able to write out SQL Server DDL that the DBA can execute on the production servers (and which has already been validated to work against the test servers).
What's a good tool for this? Should I be writing the DDL manually (and just let dev machines use Hibernate to regenerate the DB)? Can I use a tool like migrate4j (which seems to have limited support for SQL Server, if at all)?
I'm also looking to integrate DB manipulation scripts into this process (for example, converting a "Name" field into a "First Name", "Last Name" field via a JDBC script that splits all the existing strings).
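For example, the Name split itself is plain string logic that can be written and tested independently of the JDBC script that applies it (a sketch with a hypothetical class name; real data has edge cases like middle names and single-word names):

```java
// Splits a full name into {firstName, lastName} on the first space.
// Single-word names get an empty last name; everything after the first
// space is treated as the last name.
public class NameSplitter {
    public static String[] split(String fullName) {
        String trimmed = fullName.trim();
        int space = trimmed.indexOf(' ');
        if (space < 0) {
            return new String[] { trimmed, "" };
        }
        return new String[] { trimmed.substring(0, space),
                              trimmed.substring(space + 1).trim() };
    }
}
```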
Any suggestions would be much appreciated!
What's the best approach for handling changes to the DB schema?
Idempotent change scripts with a version table (and a tool to apply all the change scripts with a number greater than the version currently stored in the version table). Also check the mentioned post Bulletproof Sql Change Scripts Using INFORMATION_SCHEMA Views.
To implement this, you could roll your own solution or use existing tools like DbUpdater (mentioned in the comments of the change scripts post), LiquiBase, or dbdeploy. The latter has my preference.
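The version-table mechanism can be sketched as pure logic (names are hypothetical): given the version currently stored in the database, apply every change script with a higher number, in order, then record the new version:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class MigrationPlanner {
    // Returns the change scripts that still need to run, ordered by
    // version number (TreeMap guarantees ascending key order).
    public static List<String> pending(int currentVersion,
                                       Map<Integer, String> scriptsByVersion) {
        return new TreeMap<>(scriptsByVersion).entrySet().stream()
                .filter(e -> e.getKey() > currentVersion)
                .map(Map.Entry::getValue)
                .collect(Collectors.toList());
    }
}
```

Tools like dbdeploy and LiquiBase implement this bookkeeping (plus script execution and changelog tracking) for you.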
I depend on Hibernate to create whatever it needs on the production server. There's no risk of losing data because it never removes anything: it only adds what is missing.
On the current project, we have established a convention by which any feature that requires a change in the database (schema or data) must provide its own DDL/DML snippets, meaning that all we need to do is aggregate the snippets into a single script and execute it to bring production up to date. None of this works at very large scale (the order of snippets becomes critical, not everyone follows the convention, etc.), but in a small team with an iterative process it works just fine.