I am working on a Spring Web Flow project using MySQL as the database.
I have some JUnit test cases that Maven runs during the build, and those use a test database, not the dev database. We have a different database for running the project in dev than for the builds.
I have some test data that I need set up in dev before running my project. I was handling it with JUnit as part of Maven's package/test phase, but the problem is that those tests use a different XML configuration and a different database. How can I get the project to run some deletes in dev before it starts? Does anyone know of a MySQL plugin for Maven that will run a script beforehand?
I'm not entirely sure what you're asking, but I think you want to have certain data available in a DB when you run tests and that data is different in different environments.
Given that you're already using Maven, I would suggest you set up build profiles for each environment, which will allow you to specify different DB connections. Indeed, if you use an ORM such as Hibernate, you can even have the DB schema and data dropped and recreated on each run in your test/dev environments.
Furthermore, in environments where you want the data to be specific to your test run and the data does not need to persist beyond the scope of your tests, use an in-memory DB such as HSQLDB, which can be seeded with data as required and wiped at the end. This will make your development environment scale better to larger numbers of developers and remove the need for physical DB resources.
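For example, seeding an in-memory HSQLDB over plain JDBC might look something like this (a minimal sketch; the URL, table, and data are made-up examples):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InMemoryDbDemo {
        public static void main(String[] args) throws Exception {
            // "mem:" keeps everything in memory; the DB vanishes when the
            // JVM exits, so every test run starts from a clean slate.
            Connection conn = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:testdb", "sa", "");
            try (Statement st = conn.createStatement()) {
                // schema and seed data are hypothetical examples
                st.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
                st.execute("INSERT INTO customer VALUES (1, 'Alice')");
            }
            conn.close();
        }
    }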
I think you're asking how to set up a database before running a JUnit test. If so, I suggest you look at DBUnit and (as suggested in another answer) an in-memory database.
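For instance, a DbUnit-backed test fixture could look roughly like this (a sketch only; the driver, connection details, and dataset.xml are placeholders):

    import org.dbunit.IDatabaseTester;
    import org.dbunit.JdbcDatabaseTester;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;

    public class CustomerDaoTest {
        private IDatabaseTester tester;

        @org.junit.Before
        public void setUp() throws Exception {
            // point DbUnit at the test database (connection details are examples)
            tester = new JdbcDatabaseTester("org.hsqldb.jdbcDriver",
                    "jdbc:hsqldb:mem:testdb", "sa", "");
            // dataset.xml holds the rows the tests expect, e.g.
            // <dataset><customer id="1" name="Alice"/></dataset>
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(getClass().getResourceAsStream("/dataset.xml"));
            tester.setDataSet(dataSet);
            tester.onSetup();   // default setup operation is CLEAN_INSERT
        }

        @org.junit.After
        public void tearDown() throws Exception {
            tester.onTearDown();
        }
    }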
I am using Play 2.4 with the Flyway-Play module.
Is there a way to clean and recreate the database between tests using this plugin? Some of my unit tests have database effects that are not trivial to reverse, and it would be nice to be able to start afresh after those tests.
The documentation for the flyway-play module states:
In Test mode, migration is done automatically.
However, this seems to happen only once, before the whole test run. It would be nice to have programmatic control for preparing and cleaning up between individual tests.
Not sure how Flyway is integrated into the Play framework (no experience with that), but it sounds like what you are looking for is the Flyway clean command:
Drops all objects in the configured schemas.
Flyway documentation: clean
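Assuming you can get hold of a Flyway instance in your test setup, something like the following should give you a fresh schema per test. This uses the pre-5.x setter-style API (newer versions use Flyway.configure(...) instead), and the connection details are placeholders:

    import org.flywaydb.core.Flyway;

    public class DatabaseReset {
        // Call this from a @Before method (or Play's test helpers) to get
        // a fresh schema for each test.
        public static void reset() {
            Flyway flyway = new Flyway();                 // pre-5.x style API
            flyway.setDataSource("jdbc:h2:mem:play", "sa", "");
            flyway.clean();    // drop all objects in the configured schemas
            flyway.migrate();  // re-apply every migration from scratch
        }
    }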
I have written a REST webservice using JAX-RS and I currently prepare my test database using DbUnit. However, if I would now deploy may application, this would not fit my needs anymore. Thus, I am looking for a maven plugin that lets me handle the preparation and update of the production database. So I need something that creates my tables and inserts default data when I deploy my service for the first time and updates the tables when I deploy new releases, when the service is running.
There are a couple of useful frameworks for this use case.
Have a look at:
Flyway
and Liquibase
Both can handle your use case quite neatly. The main difference is how migrations are defined: Flyway uses SQL- and Java-based migrations, while Liquibase uses XML.
My personal preference lies with Flyway, as I find it more natural.
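For illustration, a Java-based Flyway migration might look like this (Flyway 4.x-style JdbcMigration interface; Flyway 5+ uses BaseJavaMigration instead, and the table here is a made-up example):

    import java.sql.Connection;
    import java.sql.Statement;
    import org.flywaydb.core.api.migration.jdbc.JdbcMigration;

    // Flyway derives the version and description from the class name.
    public class V2__Add_default_data implements JdbcMigration {
        @Override
        public void migrate(Connection connection) throws Exception {
            try (Statement st = connection.createStatement()) {
                // the table and row are hypothetical examples
                st.execute("INSERT INTO app_user (name) VALUES ('admin')");
            }
        }
    }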
How should I unit test methods whose intent is to query the database and return data? In other situations I can just mock the objects, but in this case, where I want to test whether they return the correct data, how can I check that in isolation from the DB? Should I use some kind of special DB? But then how would I configure that new DB to behave like the real one, with all the same columns, etc.?
Thanks.
Update: Thanks to everyone; your responses led me down the right path. I finally used Derby. I just added a new persistence.xml for it. No other significant changes were needed, and it seems to be working now.
One approach I've used with great success is to use:
Maven to build your project
Liquibase (or Flyway) to manage your database schema, and versioning it
H2 as an in-memory database that is started along with your tests.
There's a fair bit to learn there if you haven't used any of the above, but in my experience it was well worth it. This worked really well with a Spring application; with other setups your mileage may vary.
Maven should start an in-memory instance of the H2 database before running any tests. In a Spring application, you can just specify your datasource with an H2 JDBC URL and it'll start automagically.
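As a sketch, a Java-config variant of that datasource might look like this (the URL and credentials are placeholders; MODE=MySQL is optional and just one of several compatibility modes H2 offers):

    import javax.sql.DataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jdbc.datasource.SimpleDriverDataSource;

    @Configuration
    public class TestDataSourceConfig {
        @Bean
        public DataSource dataSource() {
            // DB_CLOSE_DELAY=-1 keeps the in-memory DB alive between connections;
            // MODE=MySQL smooths over some SQL dialect differences.
            return new SimpleDriverDataSource(new org.h2.Driver(),
                    "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;MODE=MySQL", "sa", "");
        }
    }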
You can use Liquibase to run a set of XML scripts to set up your database schema, and then a separate file to populate them with test data (either by specifying different files when running Liquibase, or by using the context attribute of each changeSet). This can be done with Maven, or in Spring using a specific Liquibase bean.
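The Liquibase bean could look roughly like this (a sketch; the changelog path and context name are placeholders):

    import javax.sql.DataSource;
    import liquibase.integration.spring.SpringLiquibase;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class LiquibaseConfig {
        @Bean
        public SpringLiquibase liquibase(DataSource dataSource) {
            SpringLiquibase liquibase = new SpringLiquibase();
            liquibase.setDataSource(dataSource);
            // changelog path is a placeholder; point it at your own master file
            liquibase.setChangeLog("classpath:db/changelog-master.xml");
            // only run changeSets whose context attribute matches "test"
            liquibase.setContexts("test");
            return liquibase;
        }
    }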
From there you can test your application exactly as if it was a normal app. No need for mocking, and you get much more useful tests as a result. You may need to change your schema or otherwise work around SQL differences between H2 and your native RDBMS.
As an aside, I'm greatly in favour of these sorts of tests. In my experience, mocking everything doesn't really gain you any interesting insights, and should be a last resort for when intra-build integration tests aren't possible. There are many who disagree with me, though!
The question is what behavior do you need to unit test? If you mocked out the database then you've tested all the important logic. Your database adapter will either work or not work, which you can verify in integration/acceptance tests against a real database.
You can use DBUnit. It takes your current schema and you can easily mock your data.
At work we are trying to simplify an application that was coded with an overkill use of Spring Remoting. This is how it works today:
(Controllers) Spring MVC -> Spring Remoting -> Hibernate
Everything is deployed in a single machine, Spring Remoting is not needed (never will be needed) and adds complexity to code maintenance. We want it out.
How can we ensure everything works after our changes? Today we have 0% code coverage! We thought about creating integration tests for our controllers so that when we remove Spring Remoting they behave exactly the same. We thought about using a mix of the Spring Test framework and DBUnit to bring Oracle up to a known state on every test cycle.
Has anyone tried a similar solution? What are the drawbacks? Would you suggest any better alternative?
It always depends on the effort/benefit ratio you are willing to accept. You can get almost 100% code coverage if you are really diligent and thorough. But that might be overkill too, especially when it comes to maintaining those tests. Your idea is good, though. I've done this a couple of times before with medium-sized applications. This is what you should do:
Be sure that you have a well-known test data set in the database at the beginning of every test in the test suite (you mentioned that yourself)
Since you're using Hibernate, you might also try using HSQLDB as a substitute for Oracle; that way, your tests will run a lot faster (see the sketch at the end of this answer)
Create lots of independent little test cases covering most of your most valued functionality. You can always allow yourself some minor bugs in remote and unimportant corners of the application.
Make sure those test cases all run (and pass) before the refactoring.
Make sure you have a reference system that will not be touched by the refactoring, so that you are able to add new test cases in case you think of something only later.
Start refactoring, and while refactoring run all relevant tests that could be broken by the current refactoring step. Run the complete test suite once a night using a tool such as Jenkins.
That should work. If your application is a web application, then I can only recommend Selenium. It has a nice Jenkins integration, and you can create hundreds of test cases by just clicking through your application in the browser (those clicks are recorded and a Java/Groovy/other-language test script is generated).
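For the HSQLDB substitution mentioned above, the test-only Hibernate settings might look roughly like this (a sketch; the URL and property values are examples you'd adapt to your project):

    import java.util.Properties;

    public class TestHibernateProperties {
        // Properties you might override in a test-only Hibernate configuration
        // to swap Oracle for an in-memory HSQLDB.
        public static Properties hsqldbOverrides() {
            Properties p = new Properties();
            p.setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbcDriver");
            p.setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:apptest");
            p.setProperty("hibernate.connection.username", "sa");
            p.setProperty("hibernate.connection.password", "");
            p.setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
            // recreate the schema for every test run
            p.setProperty("hibernate.hbm2ddl.auto", "create-drop");
            return p;
        }
    }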
In our Spring MVC / Hibernate (using v3.4) web app we use an Oracle database for integration testing.
To ensure that our database is in a known state each time the test suites are run, we set the following property in our test suite's persistence.xml:
<property name="hibernate.hbm2ddl.auto" value="create"/>
This ensures that the DB schema is created each time our tests are run, based on the Hibernate annotations in our classes. To populate the database with a known data set, we added a file named import.sql to our classpath with the appropriate SQL inserts. If you have the above property set, Hibernate will run the statements in import.sql against your database after creating the schema.
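If you bootstrap JPA programmatically in your tests, the same effect can be sketched like this (the persistence unit name testPU is a placeholder):

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class TestPersistenceBootstrap {
        public static EntityManagerFactory create() {
            Map<String, String> props = new HashMap<>();
            // recreate the schema from the entity annotations on startup
            props.put("hibernate.hbm2ddl.auto", "create");
            // With hbm2ddl.auto=create, Hibernate also executes an import.sql
            // found on the classpath root after building the schema.
            return Persistence.createEntityManagerFactory("testPU", props);
        }
    }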
I'm working on a Java web application (Adobe Flex front-end, JPA/Hibernate/BlazeDS/Spring MVC backend) and will soon reach the point where I can no longer wipe the database and regenerate it.
What's the best approach for handling changes to the DB schema? The production and test databases are SQL Server 2005, devs use MySQL, and unit tests run against an HSQLDB in-memory database. I'm fine with having dev machines continue to wipe and reload the DB from sample data using Hibernate to regenerate the tables. However, for a production deploy the DBA would like to have a DDL script that he can execute manually.
So, my ideal solution would be one where I can write Rails-style migrations, execute them against the test servers, and after verifying that they work, be able to write out SQL Server DDL that the DBA can execute on the production servers (and which has already been validated to work against the test servers).
What's a good tool for this? Should I be writing the DDL manually (and just let dev machines use Hibernate to regenerate the DB)? Can I use a tool like migrate4j (which seems to have limited support for SQL Server, if at all)?
I'm also looking to integrate DB manipulation scripts into this process (for example, converting a "Name" field into "First Name" and "Last Name" fields via a JDBC script that splits all the existing strings).
Any suggestions would be much appreciated!
What's the best approach for handling changes to the DB schema?
Idempotent change scripts with a version table (and a tool to apply all the change scripts with a number greater than the version currently stored in the version table). Also check the mentioned post Bulletproof Sql Change Scripts Using INFORMATION_SCHEMA Views.
To implement this, you could roll your own solution or use existing tools like DbUpdater (mentioned in the comments), LiquiBase, or dbdeploy. The latter has my preference.
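The core idea is small enough to sketch (a toy version only; the schema_version table, the sql/ directory, the filename convention, and the one-statement-per-file assumption are all made up for illustration):

    import java.nio.file.*;
    import java.sql.*;
    import java.util.*;

    public class ChangeScriptRunner {
        // Applies every script under sql/ named like "42_description.sql" whose
        // number is greater than the version stored in the version table.
        public static void upgrade(Connection conn) throws Exception {
            int current;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT version FROM schema_version")) {
                current = rs.next() ? rs.getInt(1) : 0;
            }
            List<Path> scripts = new ArrayList<>();
            try (DirectoryStream<Path> dir =
                     Files.newDirectoryStream(Paths.get("sql"), "*.sql")) {
                for (Path p : dir) scripts.add(p);
            }
            scripts.sort(Comparator.comparingInt(ChangeScriptRunner::number));
            for (Path script : scripts) {
                int n = number(script);
                if (n <= current) continue;       // already applied
                try (Statement st = conn.createStatement()) {
                    // assumes one statement per file and a pre-seeded version row
                    st.execute(new String(Files.readAllBytes(script)));
                    st.executeUpdate("UPDATE schema_version SET version = " + n);
                }
            }
        }

        private static int number(Path p) {
            return Integer.parseInt(p.getFileName().toString().split("_")[0]);
        }
    }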
I depend on Hibernate to create whatever it needs on the production server. There's no risk of losing data because it never removes anything: it only adds what is missing.
On the current project, we have established a convention by which any feature that requires a change in the database (schema or data) must provide its own DDL/DML snippets, meaning that all we need to do is aggregate the snippets into a single script and execute it to bring production up to date. None of this works at a very large scale (the order of the snippets becomes critical, not everyone follows the convention, etc.), but in a small team with an iterative process it works just fine.