I want to test a handwritten DAO that uses the SQLite JDBC driver. My plan was to keep the schema and data insertion in version control as .sql files and execute them before each test to get a populated database that I can use for testing.
Searching for a way to execute a whole SQL script using JDBC turned up a bunch of Stack Overflow threads saying that it is not possible and providing parsing scripts that split the SQL script into separate statements (SQLScriptRunner).
These posts were mostly 3+ years old, so I am wondering whether there is still no "easy" way to execute SQL scripts using the JDBC API.
I am asking because SQLite gives me the option to clone a database from an existing one, which I would prefer over using a big script-executor implementation (the executor would probably be bigger than all my data access code combined).
So, is there an easy, out-of-the-box way to execute SQL scripts using JDBC, or is it still only possible with some parsing script?
I guess Spring Framework's ScriptUtils might do the trick for you.
Please check
http://docs.spring.io/autorepo/docs/spring/4.0.9.RELEASE/javadoc-api/org/springframework/jdbc/datasource/init/ScriptUtils.html
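For illustration, a minimal sketch of that approach, assuming schema.sql and data.sql sit on the test classpath; the SQLite file name is made up:

import java.sql.Connection;
import java.sql.DriverManager;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.ScriptUtils;

public class TestDatabaseSetup {
    public static void main(String[] args) throws Exception {
        // Plain JDBC connection to the SQLite test database
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db")) {
            // Splits each script into individual statements and executes them
            ScriptUtils.executeSqlScript(conn, new ClassPathResource("schema.sql"));
            ScriptUtils.executeSqlScript(conn, new ClassPathResource("data.sql"));
        }
    }
}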
My plan was to keep the schema and data insertion in version control
as .sql files and execute them before the test to get a populated
database that I can use for testing.
For this purpose there is a library called DbUnit: http://dbunit.sourceforge.net/
I personally find it a little tricky to use without a proper wrapper.
Some of those wrappers:
Spring Test DbUnit https://springtestdbunit.github.io/spring-test-dbunit/
Unitils DbUnit http://www.unitils.org/tutorial-database.html
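With Spring Test DbUnit, for example, the setup boils down to an annotation. A sketch; the dataset file, test class, and Spring context location are placeholders:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
        DbUnitTestExecutionListener.class })
public class MemberDaoTest {

    @Test
    @DatabaseSetup("memberDataset.xml") // flat-XML dataset loaded before the test runs
    public void findsExistingMember() {
        // exercise the DAO against the pre-populated database here
    }
}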
While not an "out of the box" JDBC solution, I would be inclined to use SqlTool. It is especially friendly with HSQLDB, but it can be used with (almost?) any JDBC driver. (The documentation shows examples on how to connect with PostgreSQL, SQL Server, Oracle, Derby, etc.) It's available via Maven, and the JAR file is only ~150KB. For an example of how to run an SQL script with it, see my other answer here.
Regarding the proposed Spring "ScriptUtils" solution in light of the comment in your question ...
I would prefer [some other solution] over using a big script-executor implementation (the executor would probably be bigger than all my data access code combined).
... note that the dependencies for spring-jdbc include spring-core, spring-beans, spring-tx, and commons-logging-1.2 for a total of ~2.5MB. That is over 15 times larger than the SqlTool JAR file, and it doesn't even include any additional DbUnit "wrappers" that may or may not be required to make ScriptUtils easier to use.
Related
I'm working on a project with an Oracle database where we have decided to enable Edition-Based Redefinition. We're also using jooq-codegen to create Java objects based on the objects we have created in the database.
I have read through the documentation for jooq-codegen, but I'm having trouble finding a way to make jOOQ work with Oracle editions. Normally I would use an alter session set edition=<some edition> statement to connect to the correct edition, but I can't find a way to do this with jooq-codegen.
Is there any way to run init queries with jooq-codegen, or maybe even a way to specify an edition with jooq-codegen? I'm hoping there is something I have overlooked, as I can't find this in the documentation.
I don't think it should matter, but I'm using Maven and this will be run in Jenkins.
That's an interesting case where it might be beneficial to be able to run additional SQL statements after a JDBC connection has been initialised by the code generator. Probably worth a feature request you could report here.
As a workaround, you can always:
Extend the OracleDatabase from the jOOQ-meta module and override the create0() method, which provides an initialised DSLContext for all of your code generation meta queries (see the sketch after this list).
Use a programmatic code generation configuration and initialise the JDBC connection yourself before passing it to the code generator
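A rough sketch of the first option. The subclass and edition names are made up, and the OracleDatabase package is org.jooq.util.oracle in older jOOQ releases (org.jooq.meta.oracle in newer ones):

import org.jooq.DSLContext;
import org.jooq.util.oracle.OracleDatabase;

public class EditionedOracleDatabase extends OracleDatabase {

    @Override
    protected DSLContext create0() {
        DSLContext ctx = super.create0();
        // Hypothetical edition name; runs before any code generation meta queries
        ctx.execute("alter session set edition = MY_EDITION");
        return ctx;
    }
}

You would then point the <database><name> element of the code generator configuration at this class instead of the stock OracleDatabase.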
Related
I'm working on integration tests and I'm trying to load SQL scripts to set up the database before a test.
I don't want to use DbUnit, since I want to be able to do more than just insert data, and I'm also looking for better performance.
I've tried using JdbcTestUtils from Spring, but it fails when I want to execute a statement like this one:
SELECT setval('member_id_seq', 100000);
So I'm looking for a better solution.
I'm sure there's a library/framework that would allow me to execute any SQL script in Java, but I can't find it. Any suggestions?
p.s. - I'm using PostgreSQL, Spring, JPA/Hibernate
p.p.s. - I know I could also create a wrapper around the PostgreSQL psql command, but that requires having PostgreSQL installed on the continuous integration server, and I was hoping to avoid that.
Related
I am currently working on an application that uses Hibernate as its ORM; however, there is currently no database set up on my machine and I want to start running some tests without one. I figure that since Hibernate is object/code based, there must be a way to simulate the DB functionality.
If there isn't a way to do it through Hibernate, how can this be achieved in the general case (simulation of a database)? Obviously, it won't need to handle large amounts of data, just enough to test functionality.
Just use an embedded DB like Derby
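A minimal sketch of pointing Hibernate at an in-memory Derby instance for tests; the database name is arbitrary, and you would still add your own mapped classes:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class EmbeddedDbTestSetup {
    public static SessionFactory buildTestSessionFactory() {
        return new Configuration()
                // Embedded Derby driver; the in-memory database is created on the fly
                .setProperty("hibernate.connection.driver_class",
                        "org.apache.derby.jdbc.EmbeddedDriver")
                .setProperty("hibernate.connection.url",
                        "jdbc:derby:memory:testdb;create=true")
                .setProperty("hibernate.dialect",
                        "org.hibernate.dialect.DerbyDialect")
                // Let Hibernate create the schema before the tests and drop it after
                .setProperty("hibernate.hbm2ddl.auto", "create-drop")
                .buildSessionFactory();
    }
}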
Maybe you could also try to use an ODBC-JDBC bridge and connect to an Excel or Access file on Windows.
Hibernate is an object-relational mapping tool (ORM). You can't use it without objects and a relational database. Excluding either one makes no sense.
There are plenty of open source, free relational databases to choose from:
MySQL
PostgreSQL
Hypersonic
Derby
MariaDB
SQLite
You're only limited by your ability to download and install one.
Other options include using an in-memory database like H2 or HSQLDB.
I assume you have hidden all the ORM calls behind a clean interface.
If you did, you could simply write another implementation of that interface backed by a Map that caches your objects.
You could then utilize this in your test environment.
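A bare-bones sketch of the idea, with a hypothetical MemberDao interface and Member class standing in for your own:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal domain object for the sake of the example
class Member {
    private final long id;
    Member(long id) { this.id = id; }
    long getId() { return id; }
}

// The clean interface the rest of the application codes against
interface MemberDao {
    void save(Member member);
    Member findById(long id);
}

// Test-only implementation: a Map plays the role of the database
class InMemoryMemberDao implements MemberDao {
    private final Map<Long, Member> store = new ConcurrentHashMap<>();

    @Override
    public void save(Member member) {
        store.put(member.getId(), member);
    }

    @Override
    public Member findById(long id) {
        return store.get(id);
    }
}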
But I have to agree with @duffymo: you should just go through the 'first pain' and set up a proper working environment.
I'm using H2. One of its major advantages is its compatibility modes, which simulate the behaviour of the more common DBs. For example, I'm using PostgreSQL, so I define the Hibernate dialect to be PostgreSQL. I use it for my integration tests: in each test I create the data that fits the scenario for that test, and it is erased pretty easily afterwards. No need to roll back transactions or anything.
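For example (a sketch; the database and table names are made up), the compatibility mode is just a parameter on the JDBC URL:

import java.sql.Connection;
import java.sql.DriverManager;

public class H2PostgresModeExample {
    public static void main(String[] args) throws Exception {
        // In-memory H2 database emulating PostgreSQL behaviour
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:testdb;MODE=PostgreSQL", "sa", "")) {
            conn.createStatement().execute(
                    "CREATE TABLE member (id SERIAL PRIMARY KEY, name VARCHAR(50))");
            // ... run the test scenario; the database vanishes with the connection
        }
    }
}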
Related
I have an application where many "unit" tests use a real connection to an Oracle database during their execution.
As you can imagine, these tests take too long to execute, as they need to initialize some Spring contexts and communicate with the Oracle instance. In addition, we have to manage complex mechanisms, such as transactions, in order to avoid database modifications persisting after the test execution (even though we use useful classes from Spring like AbstractAnnotationAwareTransactionalTests).
So my idea is to progressively replace this Oracle test instance with an in-memory database. I will use HSQLDB or, maybe better, H2.
My question is: what is the best approach to do that? My main concern is the construction of the in-memory database structure and the insertion of reference data.
Of course, I can extract the database structure from Oracle using tools like SQL Developer or TOAD, and then modify these scripts to adapt them to the HSQLDB or H2 dialect. But I don't think that's the best approach.
In fact, I already did that on another project using HSQLDB, but I wrote all the table creation scripts manually. Fortunately, I had only a few tables to create. My main problem during this step was "translating" the Oracle scripts used to create tables into the HSQLDB dialect.
For example, a table created in Oracle using the following SQL command:
CREATE TABLE FOOBAR (
    SOME_ID NUMBER,
    SOME_DATE DATE, -- Add primary key constraint
    SOME_STATUS NUMBER,
    SOME_FLAG NUMBER(1) DEFAULT 0 NOT NULL);
needed to be "translated" for HSQLDB to:
CREATE TABLE FOOBAR (
    SOME_ID NUMERIC,
    SOME_DATE TIMESTAMP PRIMARY KEY,
    SOME_STATUS NUMERIC,
    SOME_FLAG INTEGER DEFAULT 0 NOT NULL);
In my current project, there are too many tables to do that manually...
So my questions:
What advice can you give me to achieve that?
Do H2 or HSQLDB provide tools to generate their scripts from an Oracle connection?
Technical information
Java 1.6, Spring 2.5, Oracle 10g, Maven 2
Edit
Some information regarding my unit tests:
In the application where I used HSQLDB, I had the following tests:
- Some "basic" unit tests, which have nothing to do with the DB.
- For DAO testing, I used HSQLDB to execute database manipulations, such as CRUD operations.
- Then, on the service layer, I used Mockito to mock my DAO objects, in order to focus on the service under test and not the whole stack (i.e. service + DAO + DB).
In my current application, we have the worst-case scenario: the DAO layer tests need an Oracle connection to run. The service layer does not (yet) use any mock objects to simulate the DAO, so the service tests also need an Oracle connection.
I am aware that mocks and an in-memory database are two separate points, and I will address them as soon as possible. However, my first step is to replace the Oracle connection with an in-memory database, and then I will use my Mockito knowledge to enhance the tests.
Note that I also want to separate unit tests from integration tests. The latter will need access to the Oracle database to execute "real" tests, but my main concern (and the purpose of this question) is that almost none of my unit tests run in isolation today.
Use an in-memory / Java database for testing. This will keep the tests closer to the real world than trying to 'abstract away' the database in your tests, and such tests are probably also easier to write and maintain. On the other hand, what you probably do want to 'abstract away' in your tests is the UI, because UI testing is usually hard to automate.
The Oracle syntax you posted works well with the H2 database (I just tested it), so it seems H2 supports the Oracle syntax better than HSQLDB. Disclaimer: I'm one of the authors of H2. If something doesn't work, please post it on the H2 mailing list.
You should have the DDL statements for the database in your version control system anyway, and you can use those scripts for testing as well. Possibly you also need to support multiple schema versions; in that case you could write version update scripts (alter table...). With a Java database you can test those as well.
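With H2, for instance, applying such a script in a test is essentially a one-liner via its RunScript tool (a sketch; the script path and database name are placeholders):

import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.h2.tools.RunScript;

public class SchemaScriptTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:testdb", "sa", "")) {
            // Executes every statement in the versioned DDL script
            RunScript.execute(conn, new FileReader("src/test/resources/schema.sql"));
        }
    }
}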
By the way, you don't necessarily need to use the in-memory mode when using H2 or HSQLDB. Both databases are fast even if you persist the data. And they are easy to install (just a jar file) and need much less memory than Oracle.
The latest HSQLDB 2.0.1 supports Oracle syntax for DUAL, ROWNUM, NEXTVAL and CURRVAL via a syntax compatibility flag, sql.syntax_ora=true. In the same manner, concatenation of a string with a NULL string, and restrictions on NULL in UNIQUE constraints, are handled with other flags. Most Oracle functions, such as TO_CHAR, TO_DATE and NVL, are already built in.
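The flag can be set directly on the connection URL, for example (the database name is arbitrary):

jdbc:hsqldb:mem:testdb;sql.syntax_ora=true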
At the moment, to use simple Oracle types such as NUMBER, you can use a type definition:
CREATE TYPE NUMBER AS NUMERIC
The next snapshot will allow NUMBER(N) and other aspects of Oracle type compatibility when the flag is set.
Download from http://hsqldb.org/support/
[Update:] The snapshot issued on Oct 4 translates most Oracle-specific types to ANSI SQL types. HSQLDB 2.0 also supports the ANSI SQL INTERVAL type and date/timestamp arithmetic in the same way as Oracle.
What are your unit tests for?
If they test the proper working of DDL and stored procedures, then you should write the tests "closer" to Oracle: either without Java code, or at least without Spring and other nice web interfaces, focusing on the DB.
If you want to test the application logic implemented in Java and Spring, then you may use mock objects/database connections to make your tests independent of the database.
If you want to test the system as a whole (which goes against the modular development and testing principle), then you may virtualize your database and test on that instance, without the risk of some nasty irreversible modifications.
As long as your tests clean up after themselves (as you already seem to have set up), there's nothing wrong with running tests against a real database instance. In fact, it's the approach I usually prefer, because you'll be testing something as close to production as possible.
The incompatibilities seem small, but they tend to bite back not long afterwards. In a good case, you may get away with some nasty SQL translation / extensive mockery. In bad cases, parts of the system will be impossible to test, which I think is an unacceptable risk for business-critical systems.
Related
I'm working on a Java web application (Adobe Flex front-end, JPA/Hibernate/BlazeDS/Spring MVC backend) and will soon reach the point where I can no longer wipe the database and regenerate it.
What's the best approach for handling changes to the DB schema? The production and test databases are SQL Server 2005, devs use MySQL, and unit tests run against an HSQLDB in-memory database. I'm fine with having dev machines continue to wipe and reload the DB from sample data, using Hibernate to regenerate the tables. However, for a production deploy the DBA would like a DDL script that he can execute manually.
So, my ideal solution would be one where I can write Rails-style migrations, execute them against the test servers, and, after verifying that they work, write out SQL Server DDL that the DBA can execute on the production servers (and which has already been validated to work against the test servers).
What's a good tool for this? Should I write the DDL manually (and just let dev machines use Hibernate to regenerate the DB)? Can I use a tool like migrate4j (which seems to have limited support for SQL Server, if any)?
I'm also looking to integrate DB manipulation scripts into this process (for example, converting a "Name" field into "First Name" and "Last Name" fields via a JDBC script that splits all the existing strings).
Any suggestions would be much appreciated!
What's the best approach for handling changes to the DB schema?
Idempotent change scripts with a version table (and a tool that applies all the change scripts with a number greater than the version currently stored in the version table). Also check the previously mentioned post Bulletproof Sql Change Scripts Using INFORMATION_SCHEMA Views.
To implement this, you could roll your own solution or use existing tools like DbUpdater (mentioned in the comments on change scripts), LiquiBase, or dbdeploy. The latter has my preference.
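For a feel of what these tools do under the hood, here is a stripped-down sketch of the version-table mechanism; the schema_version table, the script naming scheme, and the executeScript helper are all hypothetical:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SchemaUpdater {

    // Applies every change script numbered above the version stored in the database
    public void update(Connection conn, int latestScript) throws Exception {
        int current;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT MAX(version) FROM schema_version")) {
            rs.next();
            current = rs.getInt(1); // 0 if the table is empty
        }
        for (int v = current + 1; v <= latestScript; v++) {
            executeScript(conn, String.format("db/%04d.sql", v)); // e.g. db/0042.sql
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("INSERT INTO schema_version (version) VALUES (" + v + ")");
            }
        }
    }

    // Hypothetical helper: read the file and execute its statements one by one
    private void executeScript(Connection conn, String path) throws Exception {
        throw new UnsupportedOperationException("omitted for brevity");
    }
}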
I depend on Hibernate to create whatever it needs on the production server. There's no risk of losing data, because it never removes anything: it only adds what is missing.
On the current project, we have established a convention by which any feature that requires a change in the database (schema or data) must provide its own DDL/DML snippets, meaning that all we need to do is aggregate the snippets into a single script and execute it to bring production up to date. None of this works on a very large scale (the order of the snippets becomes critical, not everyone follows the convention, etc.), but in a small team with an iterative process it works just fine.