Clean up data after test - Java

In the process of testing, we all create test (dummy) data at different environment levels (dev, QA, cluster, and sometimes staging). When you have a small number of tests, everything is fine; but when you have a large number of tests executed in parallel, as in my case, the tests interfere with each other because they use the same data.
So my goal here is to isolate every test and make each one independent of the others. I'm planning to create unique data for each test, and every test will work with/manipulate only its own data.
The question is: how do I clean up all of the data created during test execution, so that tests can create the same data each time they run? Has anyone else been through such a case and found a good solution?
My testing framework is built on Java, using Cucumber with Serenity BDD and REST Assured (for testing the Web UI and API).
P.S. There's a solution I currently have in mind: keep track (something like session variables) of every object I create, whether through the Web UI or the API, and in the final step of every test (in an @After hook) use the API to get the IDs of every object I've created, then send DELETE REST requests for those items, which removes them from the database too.
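To make that concrete, here is a minimal sketch of the bookkeeping idea, assuming a Cucumber @After hook and a hypothetical `/api/items/{id}` DELETE endpoint (adjust both to your framework version and API):

```java
// A sketch of the cleanup idea from the question: record the ID of every
// object a scenario creates, then delete them all in an @After hook.
// The "/api/items/{id}" endpoint is hypothetical; adjust to your own API.
// (The @After package differs between older and newer Cucumber versions.)
import static org.hamcrest.Matchers.anyOf;
import static org.hamcrest.Matchers.is;

import io.cucumber.java.After;
import io.restassured.RestAssured;

import java.util.ArrayDeque;
import java.util.Deque;

public class CleanupHooks {

    // ThreadLocal so parallel scenarios don't see each other's IDs.
    private static final ThreadLocal<Deque<String>> CREATED =
            ThreadLocal.withInitial(ArrayDeque::new);

    /** Call this from any step (UI or API) that creates an object. */
    public static void track(String id) {
        CREATED.get().push(id);
    }

    @After
    public void cleanUpCreatedData() {
        Deque<String> ids = CREATED.get();
        // Delete in reverse creation order in case objects depend on each other.
        while (!ids.isEmpty()) {
            RestAssured.given()
                    .delete("/api/items/{id}", ids.pop())
                    .then()
                    .statusCode(anyOf(is(200), is(204), is(404)));
        }
    }
}
```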

Related

How to set up initial test data in a database for a Java project? (possibly not using raw SQL statements)

I'm creating a new automated test project in Java. It will mainly contain Selenium tests (Selenide) + API tests (REST Assured).
I would like to integrate those tests with the application DB.
By integrate I mean:
I can set up the initial application state in every test's "Given" context.
For background, I previously made a C# test project using https://bitbucket.org/mk_meros/databag/wiki/Home, which allowed me to make classes that each represent a different table. I could then use them in Given step implementations to insert data directly into the DB and create initial test data. Each test had ONE common initial app state plus possibly its own "Given exists" custom data; after the test ended, the data was cleared from the DB, and the next test created its own initial setup.
Can I find something similar to use in a Java project?
The limitation of databag is that it only integrates with Entity Framework.
You should have a look at DbUnit, which should be an answer to your problem.
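For illustration, a DbUnit setup/teardown in a JUnit test might look roughly like this (the H2 in-memory URL, credentials, and dataset.xml file are placeholders for your own environment):

```java
// A rough sketch of DbUnit usage: load a known dataset before each test and
// clean it up afterwards, so every test starts from the same state.
import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class OrderRepositoryIT {

    private IDatabaseTester databaseTester;

    @Before
    public void setUp() throws Exception {
        // Placeholder connection details; point this at your test DB.
        databaseTester = new JdbcDatabaseTester(
                "org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/dataset.xml"));
        databaseTester.setDataSet(dataSet);
        // CLEAN_INSERT wipes the listed tables and inserts the known rows.
        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.onSetup();
    }

    @After
    public void tearDown() throws Exception {
        // DELETE_ALL removes the test rows so the next test starts clean.
        databaseTester.setTearDownOperation(DatabaseOperation.DELETE_ALL);
        databaseTester.onTearDown();
    }

    @Test
    public void findsSeededOrder() throws Exception {
        // ... exercise the code under test against the seeded data ...
    }
}
```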

Execute TestNG tests sequentially with different parameters?

I am writing a series of automated tests for my e-commerce company, specifically checkout tests. I need to write the exact same set of tests using a (fake) Visa, Discover, AmEx, and MasterCard. I would love to be able to write a set of tests in one or more classes and then, during the same test run, repeat the tests with slightly different inputs (i.e., the credit card numbers). Is there any way to do that? I am already running these tests in parallel using <parameters> in testng.xml, but I want these checkout tests to run sequentially as part of the entire test run for a particular browser, while the test runs on different browsers run in parallel (which I have already accomplished).
Read up on the @DataProvider annotation and how to use it in the TestNG documentation. It's what makes TestNG special. The data provider method will send as many rows of data to a test method as you want.
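A minimal sketch of that idea for the checkout case follows; the card numbers are the standard public test numbers, and the checkout steps themselves are left as a placeholder:

```java
// A TestNG data provider feeding the same checkout test four card numbers.
// Rows are consumed sequentially by default (parallel only if you opt in).
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class CheckoutTest {

    @DataProvider(name = "creditCards")
    public Object[][] creditCards() {
        return new Object[][] {
                { "Visa",       "4111111111111111" },
                { "Discover",   "6011111111111117" },
                { "AmEx",       "378282246310005"  },
                { "MasterCard", "5555555555554444" },
        };
    }

    // TestNG calls this once per row above.
    @Test(dataProvider = "creditCards")
    public void checkoutWithCard(String cardType, String cardNumber) {
        // ... drive the checkout flow with cardNumber and assert success ...
    }
}
```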

How to manage test data fixtures for acceptance testing in large projects?

Suppose we have a large, complex system with a large amount of data and complicated business logic.
How do we manage test data (an Oracle DB) so that acceptance tests (Selenium etc.) are fast, reliable, and start from a known state?
Because of the scale and complexity, tests should:
run quite fast (1. fast revert to a known DB state before each test/suite; 2. definitely no creating test data through the UI before each suite)
be based on data created through the UI (no direct INSERTs into the database: risky duplication of business logic)
have several versions/snapshots of the DB state (a stable group of users with related data, to avoid conflicts between assertions and new data created by ongoing automation development)
What you're describing is called a sandbox DB. For every new deploy you'll have to provision/populate this DB with the data you need, and after the tests are done, drop it.
have several versions/snapshots of DB state
This is what the Fresh Fixture and Prebuilt Fixture patterns will help you with. You could also look at the Fixture Teardown patterns.
Here you can find some considerations when dealing with such big-data sandbox strategies, like scheduling, a master data repository, and monitoring.
To successfully manage all that, a CI server has to be put to work. Since you've tagged Java, good options are:
Jenkins and Database plugin
Bamboo
Hudson
What I understand from your question is that you want to run your test cases with predefined data rather than populating it from the database directly.
Create database dumps for each version and store them
Create a job (e.g., a Jenkins or Hudson job on your CI server) which loads the test database with the required dumps. This should be triggered automatically after a successful deployment to the test server.
Create a module/function for creating temporary test data (a sketch follows this list)
Run your test cases (ideally a successful result of the job in step 2 should trigger this)
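For step 3, the module could be as simple as a factory that stamps every record with a unique run marker, so parallel runs never collide and teardown can key on the marker. A sketch, with a made-up User entity:

```java
// A sketch of a temporary-test-data helper: every record gets a unique
// suffix so parallel runs never collide, and everything created in a run
// can be found and deleted by its run marker. The User type is invented.
import java.util.UUID;

public final class TestDataFactory {

    // One marker per test run; also usable in teardown to find leftovers.
    public static final String RUN_ID =
            "test-" + UUID.randomUUID().toString().substring(0, 8);

    private TestDataFactory() { }

    public static User uniqueUser() {
        String suffix = UUID.randomUUID().toString().substring(0, 8);
        return new User(
                "user-" + RUN_ID + "-" + suffix,
                "user-" + suffix + "@example.test");
    }

    /** Minimal placeholder for whatever entity your tests create. */
    public static final class User {
        public final String name;
        public final String email;

        public User(String name, String email) {
            this.name = name;
            this.email = email;
        }
    }
}
```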

Pentaho kettle: how to set up tests for transformations/jobs?

I've been using Pentaho Kettle for quite a while, and previously the transformations and jobs I've made (using Spoon) have been quite simple: load from DB, rename, etc., input stuff into another DB. But now I've been doing transformations that do more complex calculations, which I would now like to test somehow.
So what I would like to do is:
Setup some test data
Run the transformation
Verify result data
One option would probably be to make a Kettle test job that would test the transformation. But as my transformations relate to a Java project, I would prefer to run the tests from JUnit. So I've considered making a JUnit test that would (roughly as in the sketch after this list):
Set up test data (using DbUnit)
Run the transformation (using kitchen.sh from the command line)
Verify result data (using DbUnit)
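Put together, such a JUnit test might look roughly like this; all paths, connection details, table names, and the expected row count are placeholders:

```java
// A rough sketch of the JUnit approach described above: seed with DbUnit,
// shell out to kitchen.sh as the question suggests, then assert on the
// result table.
import static org.junit.Assert.assertEquals;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Test;

public class KettleTransformationIT {

    @Test
    public void transformationProducesExpectedRows() throws Exception {
        // Placeholder connection details for the test database.
        IDatabaseTester db = new JdbcDatabaseTester(
                "oracle.jdbc.OracleDriver",
                "jdbc:oracle:thin:@//host:1521/testdb", "test", "test");

        // 1. Seed known input data.
        db.setDataSet(new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/input-data.xml")));
        db.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        db.onSetup();

        // 2. Run the Kettle job from the command line.
        Process kitchen = new ProcessBuilder(
                "/opt/pentaho/kitchen.sh", "-file=/jobs/my-job.kjb")
                .inheritIO()
                .start();
        assertEquals("kitchen.sh should exit cleanly", 0, kitchen.waitFor());

        // 3. Verify the output.
        ITable actual = db.getConnection()
                .createQueryTable("result", "SELECT * FROM RESULT_TABLE");
        assertEquals(42, actual.getRowCount()); // expected count is made up
    }
}
```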
This approach would, however, require test database(s), which are not always available (Oracle etc.: expensive/legacy DBs). What I would prefer is to be able to mock or pass some stub test data to my input steps somehow.
Any other ideas on how to test Pentaho kettle transformations?
There is a JIRA somewhere on jira.pentaho.com (I don't have it to hand) that requests exactly this, but alas it is not yet implemented.
So you do have the right solution in mind. I'd also add Jenkins and an Ant script to tie it all together. I've done a similar thing with report testing: I actually had a Pentaho job load the data, then execute the report, then compare the output with known output and report pass/failure.
If you separate out your Kettle jobs into two phases:
load data to stream
process and update data
You can use a "Copy rows to result" step at the end of your load-data-to-stream transform, and a "Get rows from result" step at the start of your processing transform.
If you do this, then you can use any means to load the data (a Kettle transform, DbUnit called from an Ant script) and can mock up any database tables you want.
I use this for testing some ETL scripts I've written and it works just fine.
You can use the Data Validator step. Of course it's not a full unit test suite, but I think it can sometimes be useful for checking data integrity in a quick way.
You can run several validations at once.
For a more "serious" test, I'd recommend @codek's answer and executing your Kettle jobs under Jenkins.

Best way to reset database to known state between FlexUnit4 integration tests?

Background:
I have a Flex web app that communicates with a Java back-end via BlazeDS. The Flex client is composed of a flex-client module, which holds the views and presentation models, and a separate flex-service module, which holds the models (value objects) and service objects.
I am in the process of writing asynchronous integration tests for the flex-service module's RemoteObjects using FlexUnit4. In some of the tests, I modify the test data and query it back to see if everything works (a technique shown here: http://saturnboy.com/2010/02/async-testing-with-flexunit4)
Question:
How do I go about resetting the database to a known state before each FlexUnit4 test method (or test method chain)? In my Java server integration tests, I did this via a combination of DbUnit and Spring Test's transactions, which roll back after each test method. But these FlexUnit integration tests span multiple requests and thus multiple transactions.
Short of implementing an integration-testing service API on the backend, how can this be accomplished? Surely others have run into this as well? Similar questions have been asked before (Rollback database after integration (Selenium) tests), but with no satisfactory answers.
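For reference, the server-side rollback the question mentions typically looks like this with Spring Test and JUnit 4: with @Transactional on the test class, each test method runs in a transaction that Spring rolls back afterwards (the context file and table name below are placeholders):

```java
// A sketch of Spring Test's automatic rollback: no explicit cleanup needed,
// because the framework rolls back the test transaction after each method.
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/test-context.xml") // placeholder test context
@Transactional // every test method is rolled back automatically
public class RollbackExampleTest {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Test
    public void insertIsRolledBackAfterThisMethod() {
        jdbcTemplate.update(
                "INSERT INTO orders (id, label) VALUES (?, ?)", -1, "temp");
        // ... assertions against the modified state ...
        // No cleanup: Spring Test rolls the transaction back.
    }
}
```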
There are several options:
If you use sequences for primary keys: after the database has been loaded with the test data, delete the sequence generator and replace it with one that starts at -1 and counts down. After the test, you can delete all objects with a primary key < 0 (a sketch of this cleanup follows this list). This breaks for tests which modify existing data.
A similar approach is to create a special test user or, if you have created-timestamp columns, to require that all initial data be dated before some point in the past. That needs additional indexes, though.
Use a database on the server which can be quickly wiped (H2, for example). Add a service API which you can call from the client to reset the DB.
Add undo to your web app. That's quite an effort but a very cool feature.
Use a database which allows you to move back in time with a single command, like Lotus Notes.
Don't use a database at all. Instead, write a proxy server which will respond to the correct input with the correct output. Add some code to your real server to write the exchanged data to a file, and create your tests from that.
Or write test cases which run against the real server and which create these files. That will allow you to track which files change when you modify code on the server or client.
On the server, write tests which make sure that it will do the correct DB modifications.
Similar to "no database at all", hide all code which accesses the DB in a DB layer and use interfaces to access it. This allows you to write a mock-up layer which behaves like the real database but which saves the data in memory. Sounds simple but is usually a whole lot of work.
Depending on the size of your test database, you could automate clean backups/restores, which gives you the exact same environment on each test run.
I've used that approach in one of my current projects (on a different platform), and we also test data schema change scripts with the same approach.
I'm dehydrated (my favorite excuse for shortcomings), so sorry if this answer is too close to the "integration testing service API on the backend" response that you didn't want.
The team that set up FlexUnit 'ages ago' made choices and created solutions based on our architecture, some of which only apply to our infrastructure. Things to consider:
1) All of our backend methods return the same remotely-mapped class. 2) Almost all of our methods take an abstracted flag telling the method whether or not to run a "begin transaction" at the beginning of the method and a "commit transaction" at the end (not sure how your DB chunk works).
The latter probably isn't the most object-oriented solution, but here's what an asynchronous unit-test call does: every unit test calls the same method wrapper, and we pass in the method name/package locale, plus the [...]args. A beginTransaction is done. The method is called, passing false to the method for FE unit tests (to skip the beginTransaction and commitTransaction lines); everything runs, and the main 'response' class is generated and returned to the unit test method. A DB rollback is run and the response is returned to the unit test.
All of our unit tests are based on rolling back transactions. I couldn't tell you what issues they hit when setting that jive up, but that's my general understanding of how schtuff works.
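A loose sketch of what such a wrapper could look like in Java (all names here are invented; the original setup is Flex/BlazeDS-specific):

```java
// Sketch of the wrapper described above: begin a transaction, invoke the
// target method reflectively with its own transaction handling disabled,
// capture the response, then roll back so no data is left behind.
import java.lang.reflect.Method;
import java.sql.Connection;

public class RollbackInvoker {

    public static Object invokeAndRollback(Connection conn, Object service,
                                           String methodName, Object... args)
            throws Exception {
        conn.setAutoCommit(false); // beginTransaction
        try {
            // Leading 'false' tells the method to skip its own
            // begin/commit, mirroring the flag described in the answer.
            Object[] allArgs = new Object[args.length + 1];
            allArgs[0] = Boolean.FALSE;
            System.arraycopy(args, 0, allArgs, 1, args.length);

            for (Method m : service.getClass().getMethods()) {
                if (m.getName().equals(methodName)
                        && m.getParameterCount() == allArgs.length) {
                    return m.invoke(service, allArgs);
                }
            }
            throw new NoSuchMethodException(methodName);
        } finally {
            conn.rollback(); // the unit test never leaves data behind
        }
    }
}
```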
Hope that helps. Understandable if it doesn't.
Best of luck,
--jeremy
