Java: running tests at compilation time or at application execution

First of all, I am not good at testing. I have an OSGi CRUD application, and I want to write tests that automatically check the business logic. I see two options here:
run tests at compilation time, i.e. during a certain Maven phase
make the tests a separate bundle and run them after starting the application, for example by clicking somewhere in the main menu.
Which one is the right choice? Or are both possible?
The reasons why I am asking this question are the following:
I've never seen option 2, but I've seen a lot of option 1.
Option 2 is the better one for me, because the business logic includes working with a database, an index system and a memory cache, and I've got no idea how to check that at compilation time.

Technically, option 1 is not compile-time testing: Maven runs the tests after compiling your code and before installing/deploying the bundle.
Option 1 is for unit testing. In detail: before installing or deploying any bundle, you need to make sure each and every unit of your code works as expected.
Option 2 is for functional testing. Testing begins by invoking the main gateway or main functionality, which in turn invokes multiple modules internally. Depending on the input, some units may execute and some may not; the primary focus of this kind of testing is to cover the different scenarios of the functionality.
A good developer should do both. Hope this helps!
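To make option 1 concrete, here is a minimal sketch of a unit test that Maven's Surefire plugin would pick up during the test phase; the class name and the business logic it checks are invented for illustration:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical JUnit 4 test; Surefire runs classes matching *Test during
// the "test" phase, so a failure stops the build before install/deploy.
public class PriceCalculatorTest {

    // Stand-in for a piece of business logic; in a real project this
    // would live in src/main/java.
    static double gross(double net, double vatRate) {
        return net * (1.0 + vatRate);
    }

    @Test
    public void grossPriceIncludesVat() {
        assertEquals(120.0, gross(100.0, 0.20), 0.001);
    }
}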

Related

What's the best approach for modular apps UI test scripts?

I'd like to know if any of you have experience with automated UI testing of modular apps. The whole app is like all typical CRM-related apps: based on the individual client's needs, you put together some of the available, predefined modules in order to provide all the necessary functionality.
If there were a "static" app built of all these modules put together, then we could test it quite easily, just going through all the defined test classes, because we would know the behaviour of, and interactions between, all these modules.
But if we need to test the app's behaviour while putting some random subset of its modules together, in order to check whether they work well, we need some other approach.
Is there a solution, a recommended architecture pattern, or anything else that can help me perform such automated tests (using e.g. Selenium WebDriver)? Or are these kinds of tests even possible to perform with the WebDriver library?
I'd be grateful if you'd share any of your thoughts and experience in this area.
I am working in that area and had a similar situation; here's what I learned from it:
Avoid creating UI tests if you can. UI tests are intended to test the look of your application, and that's it. Business logic (like "when I change that setting, the displayed data should change") should be tested in unit tests, which are much easier to implement. Interaction between the modules should be covered as much as possible by integration tests.
If you still have functionality left over that needs to be tested, create a config file that contains the information about which customer has which modules enabled. In your test, read that config, and if a test is not supposed to run, abort it.
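A minimal sketch of that idea with JUnit 4's Assume (the properties file name and module key are invented): a failed assumption marks the test as skipped rather than failed.

import static org.junit.Assume.assumeTrue;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import org.junit.Before;
import org.junit.Test;

public class ReportingModuleUiTest {

    private final Properties enabledModules = new Properties();

    @Before
    public void skipUnlessModuleEnabled() throws IOException {
        // Hypothetical config file listing which modules this customer has.
        try (FileInputStream in = new FileInputStream("customer-modules.properties")) {
            enabledModules.load(in);
        }
        // If the module is disabled, JUnit reports the test as skipped, not failed.
        assumeTrue(Boolean.parseBoolean(enabledModules.getProperty("reporting", "false")));
    }

    @Test
    public void reportPageOpens() {
        // ... drive the UI with WebDriver here ...
    }
}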
In case future readers look for a known solution to this case: we can define separate test suites for each of the app's modules and then check each suite against a certain condition. If a suite doesn't meet the condition, we simply skip it. E.g. we can get the app's bundles.json file, which will most likely contain all the information concerning the app's modules, and then process this file to find the modules which are unavailable in the currently deployed app.
See this as a nice reference on how to achieve it: Introducing conditional test running in TestNG
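Sketched with TestNG, the bundles.json approach could look like this; the JSON check is reduced to a naive string search purely for illustration, and the module name is made up:

import java.nio.file.Files;
import java.nio.file.Paths;

import org.testng.SkipException;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class CrmModuleSuite {

    @BeforeClass
    public void skipIfModuleMissing() throws Exception {
        // Naive check against the deployed app's bundles.json; a real
        // implementation would parse the JSON properly.
        String bundles = new String(Files.readAllBytes(Paths.get("bundles.json")));
        if (!bundles.contains("\"crm-module\"")) {
            // SkipException makes TestNG mark the whole class as skipped.
            throw new SkipException("crm-module is not deployed, skipping suite");
        }
    }

    @Test
    public void crmDashboardLoads() {
        // ... module-specific UI checks ...
    }
}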

Hot reload with tests in the Play Framework - how?

So I'm currently in a project where we are using the Java Play Framework 2.3.7 with Activator.
One of the things I like about the Play Framework is the hot-reloading feature: I can modify Java files, save, and the changes are compiled and refreshed at runtime.
How do I get that functionality for testing? I want to be able to run a single test with this hot-reloading feature, so that when I save, the tests for the given file (specified by test-only) are re-run automatically.
There is no such solution; however, you have two choices:
Use IntelliJ: to re-run the previous test(s) in IntelliJ, you press Shift + F10.
Write a watcher: write a file/directory watcher such as the one in this question/answer here, and then, as soon as there are changes, the program re-runs the test command, such as sbt clean compile test or activator compile test (a minimal sketch follows this list).
A little advice on auto-running tests: I don't know how complicated your application is, but as soon as you have a couple of injections here and there plus some concurrency, you do not want to run the tests on every single keystroke.
A little advice on test-driven development: your approach should be the other way around! You write a test, which fails because there is no implementation; then you leave it alone. You go and write the implementation, then rerun the test to pass it or to get feedback. Again, you need your CPU/memory power to focus on one thing; you don't want to brute-force your implementation. Hope this makes sense!
A little advice on your Play version: Play 2.6 is much better than Play 2.3; you should slowly but surely update your application, at least for the sake of security.
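To make the watcher option concrete, here is a bare-bones sketch using java.nio's WatchService; the watched directory and the test command are assumptions to adapt to your project, and it only watches a single directory, not subtrees:

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class TestWatcher {

    public static void main(String[] args) throws Exception {
        Path src = Paths.get("app/controllers"); // assumed source directory
        WatchService watcher = FileSystems.getDefault().newWatchService();
        src.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take(); // blocks until something changes
            key.pollEvents();              // drain events; we only care that something changed
            // Re-run the tests; "activator" could equally be "sbt".
            new ProcessBuilder("activator", "test")
                    .inheritIO()
                    .start()
                    .waitFor();
            key.reset();
        }
    }
}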
OK, so I found what I was looking for.
For anybody in need of this particular feature in this particular version of Play (I'm not sure about other versions), what you need to do is really simple: run activator and put the ~ prefix before test. For example:
$ activator
[my-cool-project] $ ~test
That will re-run your tests when you make a change. If you want to do this for a particular test, then you do the same but with test-only:
$ activator
[my-cool-project] $ ~test-only MyCoolTest
Hope it helps anyone looking for the same thing.

How to build only once using Maven for CI

I'm currently reading Continuous Delivery, and in the book the author says that it is crucial to build the binaries only once and then use the same binaries for every deployment. What I'm having trouble understanding is how this can be done in practice. For example, in order to run the mocked unit tests, would there be a special build? What I'm referring to is the scope tag in Maven.
If you look at the Maven life cycle, you'll see that you have only one compile task. Your tests are compiled and executed right after the source compilation. With mocked unit tests it is the same: two separate compilations for two objectives.
I think the author of your book refers to a problem that may appear when you deploy automatically to several environments: it creates more environments to debug. It is essential to have only one final binary for all the environments. If you have several binaries split across your environments, you can be assured that you will forget what the differences between two of them are, or which argument you gave to one and not the other. For continuous delivery, it has to be the same everywhere.
Let's come back to Maven. Maven has a lot of possibilities during its life cycle. Sometimes you'll have to run several builds to complete everything (code coverage, for example). This may be useful in your continuous integration process and can be done through different build types (every hour for unit tests, every day for code coverage, quality analysis and integration tests).
But in the end, when you enter continuous delivery, you'll build one final binary, one unique binary copied across your environments.
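One practical consequence of the build-once rule is that environment differences must live outside the binary. A small sketch of that idea (the property and variable names are invented):

// The same artifact is promoted unchanged from test to staging to
// production; an external setting tells it which database to use.
public class DataSourceConfig {

    public static String jdbcUrl() {
        // Resolution order: -Dapp.jdbc.url system property, then the
        // APP_JDBC_URL environment variable, then a local-dev default.
        String fromProperty = System.getProperty("app.jdbc.url");
        if (fromProperty != null) {
            return fromProperty;
        }
        String fromEnv = System.getenv("APP_JDBC_URL");
        return fromEnv != null ? fromEnv : "jdbc:h2:mem:dev";
    }
}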

Automating complete testing of Java EE web application

I have a doubt. Say I have a web application which is big and relies on Java/Java EE (JSP/Servlets).
Every time before a drop, we test each and every functionality in the GUI to make sure everything is working properly. Previously this was easy, but now that the number of modules has increased exponentially, manually testing each and every GUI with the required functionality is no longer a feasible option.
I am on the lookout for tools in which I can write my entire set of test cases, say about 1000, then just run it once before the drop and have it list all the test cases that have failed.
The tool preferably must be free to download and use.
I don't know whether using Arquillian or JUnit will help in this regard, but automating testing before the drop is really needed.
Please guide.
Use JUnit together with a mock framework, e.g. Mockito, to test units (service methods) - a sketch follows this list
Use Arquillian to test at the integration level (how different services and modules work together)
Use a database testing tool (e.g. DbUnit) to test your database / persistence layer
Use Selenium to test your frontend
Test as much as possible.
Use Jenkins and Sonar to track your build process and the quality of your tests and code
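As an illustration of the first bullet, a minimal JUnit + Mockito sketch; the service and DAO are invented, and the DAO is mocked so the test never touches a real database:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UserServiceTest {

    // Invented collaborators, just for illustration.
    interface UserDao {
        String findNameById(long id);
    }

    static class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        String greet(long id) { return "Hello, " + dao.findNameById(id); }
    }

    @Test
    public void greetsUserByName() {
        UserDao dao = mock(UserDao.class);
        when(dao.findNameById(42L)).thenReturn("Alice");

        assertEquals("Hello, Alice", new UserService(dao).greet(42L));
    }
}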
You should always test your application at different levels; there is not just one solution.
Use unit testing to test small pieces of your application and to make refactoring as easy as possible.
Use integration tests to check that your modules still work together as expected.
Use GUI testing to check whether your customers can work with your software.
If it's relevant, think about performance testing (e.g. JMeter).
Definitely Selenium. Couple it with Maven, because you will probably need to package your project specifically for testing purposes. Moreover, Maven allows you to launch a container during the integration-test phase and close it automatically at the end. You can also configure this as a nightly build on Jenkins / Hudson, so you will be quickly notified of any regression.
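A minimal WebDriver sketch of such a GUI check; the URL, the expected title and the driver choice are placeholders:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginPageIT {

    @Test
    public void loginPageHasExpectedTitle() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Placeholder URL and title for your application.
            driver.get("http://localhost:8080/myapp/login");
            assertEquals("My App - Login", driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}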

How can I improve my junit tests

Right now my JUnit tests look like one long story:
I create 4 users
I delete 1 user
I try to login with the deleted user and make sure it fails
I login with one of the 3 remaining user and verify I can login
I send a message from one user to the other and verify that it appears in the outbox of the sender and in the inbox of the receiver.
I delete the message
...
...
Advantages:
The tests are quite effective (very good at detecting bugs) and very stable, because they only use the API; if I refactor the code, the tests are refactored too. As I don't use "dirty tricks" such as saving and reloading the DB in a given state, my tests are oblivious to schema changes and implementation changes.
Disadvantages:
The tests are getting difficult to maintain; any change in a test affects other tests. The tests run 8-9 minutes, which is fine for continuous integration but a bit frustrating for developers. Tests cannot be run in isolation; the best you can do is stop after the test you are interested in has run, but you absolutely must run all the tests that come before it.
How would you go about improving my tests?
First, understand that the tests you have are integration tests (they probably access external systems and hit a wide range of classes). Unit tests should be a lot more specific, which is a challenge on an already-built system. The main issue in achieving that is usually the way the code is structured,
i.e. classes tightly coupled to external systems (or to other classes that are). To avoid this, you need to build the classes in such a way that you can avoid hitting external systems during the unit tests.
Update 1: Read the following, and consider that the resulting design will allow you to test the encryption logic without hitting files/databases - http://www.lostechies.com/blogs/gabrielschenker/archive/2009/01/30/the-dependency-inversion-principle.aspx (not in Java, but it illustrates the issue very well) ... also note that you can write really focused integration tests for the readers/writers, instead of having to test it all together.
I suggest:
Gradually introduce real unit tests into your system. You can do this while making changes and developing new features, refactoring appropriately.
While doing the previous, include focused integration tests where appropriate. Make sure you are able to run the unit tests separately from the integration tests.
Consider that your tests come close to testing the system as a whole, and thus differ from automated acceptance tests only in that they operate at the border of the API. Given this, think about factors related to the importance of the API for the product (such as whether it will be used externally), and whether you have good coverage with automated acceptance tests. This can help you understand the value of having these tests on your system, and also why they naturally take so long. Then decide whether you will be testing the system as a whole at the interface level, or at both the interface and API levels.
Update 2: Based on other answers, I want to clarify something regarding doing TDD. Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data in the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you are expecting it to make. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does that. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent ...). This will help you focus on the task at hand and helps you get a decoupled system. It will also make it simple to test what is being sent to those systems.
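A hedged sketch of that last point with Mockito (all names invented): the test asserts the interaction with the email dependency, not actual email delivery.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class RegistrationTest {

    // Invented dependency; no real SMTP implementation is needed yet.
    interface EmailSender {
        void sendWelcome(String address);
    }

    static class Registration {
        private final EmailSender emails;
        Registration(EmailSender emails) { this.emails = emails; }
        void createUser(String address) {
            // ... persist the user ...
            emails.sendWelcome(address);
        }
    }

    @Test
    public void createdUserGetsWelcomeEmail() {
        EmailSender emails = mock(EmailSender.class);
        new Registration(emails).createUser("bob@example.com");
        // Verify the call was made; no mail system is involved.
        verify(emails).sendWelcome("bob@example.com");
    }
}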
Unit tests should - ideally - be independent and able to run in any order. So, I would suggest that you:
break up your tests to be independent
consider using an in-memory database as the backend for your tests (a sketch follows this answer)
consider wrapping each test or suite in a transaction that is rolled back at the end
profile the unit tests to see where the time is going, and concentrate on that
if it takes 8 minutes to create a few users and send a few messages, the performance problem may not be in the tests; rather, this may be a symptom of performance problems with the system itself - only your profiler knows for sure!
[caveat: I do NOT consider these kinds of tests to be 'integration tests', though I may be in the minority; I consider these kinds of tests to be unit tests of features, a la TDD]
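The in-memory database suggestion could be sketched like this with H2 and plain JDBC; the table and data are invented:

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

public class InMemoryDbTest {

    @Test
    public void insertedUserCanBeRead() throws Exception {
        // An unnamed H2 in-memory database is private to this connection
        // and vanishes when it closes, so tests stay independent.
        try (Connection db = DriverManager.getConnection("jdbc:h2:mem:");
             Statement sql = db.createStatement()) {
            sql.execute("CREATE TABLE users (id INT, name VARCHAR(50))");
            sql.execute("INSERT INTO users VALUES (1, 'alice')");

            ResultSet rs = sql.executeQuery("SELECT name FROM users WHERE id = 1");
            rs.next();
            assertEquals("alice", rs.getString("name"));
        }
    }
}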
Right now you are testing many things in one method (a violation of One Assertion Per Test). This is a bad thing, because when any of those things changes, the whole test fails. As a result it is not immediately obvious why a test failed and what needs to be fixed. Also, when you intentionally change the behaviour of the system, you need to change more tests to match the changed behaviour (i.e. the tests are fragile).
To know what kinds of tests are good, it helps to read more on BDD:
http://dannorth.net/introducing-bdd
http://techblog.daveastels.com/2005/07/05/a-new-look-at-test-driven-development/
http://jonkruger.com/blog/2008/07/25/why-behavior-driven-development-is-good/
To improve the test that you mentioned, I would split it into the following three test classes, with these context and test method names (a sketch of one such class follows the outline):

Creating user accounts
    Before a user is created
        the user does not exist
    When a user is created
        the user exists
    When a user is deleted
        the user does not exist anymore

Logging in
    When a user exists
        the user can login with the right password
        the user can not login with a wrong password
    When a user does not exist
        the user can not login

Sending messages
    When a user sends a message
        the message appears in the sender's outbox
        the message appears in the receiver's inbox
        the message does not appear in any other message boxes
    When a message is deleted
        the message does not exist anymore
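A sketch of how the "Logging in / When a user exists" context could look as a small, focused JUnit 4 class; the AuthService interface and its stand-in implementation are invented so the example is self-contained:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class LoggingInWhenUserExistsTest {

    // Invented service interface, standing in for your real application code.
    interface AuthService {
        boolean login(String user, String password);
    }

    private AuthService auth;

    @Before
    public void createUser() {
        // Build a fresh fixture for every test instead of relying on
        // state left behind by earlier tests.
        auth = (u, p) -> u.equals("alice") && p.equals("secret");
    }

    @Test
    public void userCanLoginWithTheRightPassword() {
        assertTrue(auth.login("alice", "secret"));
    }

    @Test
    public void userCanNotLoginWithAWrongPassword() {
        assertFalse(auth.login("alice", "wrong"));
    }
}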
You also need to improve the speed of the tests. You should have a unit test suite with good coverage that can run in a couple of seconds. If it takes longer than 10-20 seconds to run the tests, you will hesitate to run them after every change, and you lose some of the quick feedback that running the tests gives you. (If it talks to the database, it's not a unit test but a system or integration test; those have their uses, but they are not fast enough to be executed continually.) You need to break the dependencies of the classes under test by mocking or stubbing them. Also, from your description it appears that your tests are not isolated, but instead depend on side effects caused by previous tests - this is a no-no. Good tests are FIRST.
Reduce dependencies between tests. This can be done by using mocks. Martin Fowler writes about it in Mocks Aren't Stubs, in particular about why mocking reduces dependencies between tests.
You can use JExample, an extension of JUnit that allows test methods to have return values that are reused by other tests. JExample tests run with the normal JUnit plugin in Eclipse and work side by side with normal JUnit tests, so migration should be no problem. JExample is used as follows:
@RunWith(JExample.class)
public class MyTest {

    @Test
    public Object a() {
        return new Object();
    }

    @Test
    @Given("#a")
    public Object b(Object object) {
        // do something with object
        return object;
    }

    @Test
    @Given("#b")
    public void c(Object object) {
        // do some more things with object
    }
}
Disclaimer: I am among the JExample developers.
If you use TestNG, you can annotate tests in a variety of ways. For example, you can annotate your tests above as long-running (a sketch follows below). Then you can configure your automated-build/continuous-integration server to run these, while the standard "interactive" developer build would not (unless developers explicitly choose to).
This approach depends on developers checking into your continuous build on a regular basis, so that the tests do get run!
Some tests will inevitably take a long time to run. The comments in this thread regarding performance are all valid. However, if your tests do take a long time, the pragmatic solution is to run them without letting their time-consuming nature impact the developers to the point that they avoid running them.
Note: you can do something similar with JUnit by (say) naming tests in different fashions and getting your continuous build to run a particular subset of test classes.
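A minimal sketch of the TestNG grouping idea; the group name is arbitrary, and how the CI build includes it (e.g. via the surefire groups setting) is configuration outside this snippet:

import org.testng.annotations.Test;

public class UserStoryTest {

    // Fast enough for the interactive developer build; runs everywhere.
    @Test
    public void deletedUserCannotLogin() {
        // ... quick check ...
    }

    // Only the CI server is configured to include this group.
    @Test(groups = "long-running")
    public void fullMessagingRoundTrip() {
        // ... slow end-to-end scenario ...
    }
}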
By testing stories as you describe, you get very brittle tests. If only one tiny bit of functionality changes, your whole test might be messed up. Then you will likely have to change all the tests affected by that change.
In fact, the tests you are describing are more like functional tests or component tests than unit tests. So you are using a unit testing framework (JUnit) for non-unit tests. In my view there is nothing wrong with using a unit testing framework for non-unit tests, if (and only if) you are aware of it.
So you have the following options:
Choose another testing framework which supports a "story telling" style of testing much better, as other users have already suggested. You will have to evaluate and find a suitable testing framework.
Make your tests more "unit test"-like. For this you will need to break up your tests and maybe change your current production code. Why? Because unit testing aims at testing small units of code (unit-testing purists suggest only one class at a time). By doing this, your unit tests become more independent. If you change the behaviour of one class, you just need to change a relatively small amount of unit test code. This makes your unit tests more robust. During that process you might see that your current code does not support unit testing very well, mostly because of dependencies between classes. This is why you will also need to modify your production code.
If you are in a project and running out of time, neither option may help you any further. Then you will have to live with those tests, but you can try to ease your pain:
Remove code duplication in your tests: as in production code, eliminate duplication and move the shared code into helper methods or helper classes. If something changes, you might only need to change the helper method or class. This way you converge towards the next suggestion.
Add another layer of indirection to your tests: produce helper methods and helper classes which operate at a higher level of abstraction. They should act as an API for your tests. These helpers call your production code; your story tests should only call the helpers. If something changes, you only need to change one place in your API and don't need to touch all your tests.
Example signatures for your API:
void createUserAndDelete(String[] usersForCreation, String[] usersForDeletion);
void logonWithUser(String user);
void sendAndCheckMessageBoxes(String fromUser, String toUser);
For general unit testing, I suggest having a look at xUnit Test Patterns by Gerard Meszaros.
For breaking dependencies in your production code, have a look at Working Effectively with Legacy Code by Michael Feathers.
In addition to the above, pick up a good book on TDD (I can recommend "TDD and Acceptance TDD for Java Developers"). Even though it approaches the topic from a TDD point of view, there is a lot of helpful information about writing the right kind of unit tests.
Find someone who has a lot of knowledge in the area and use them to figure out how you can improve your tests.
Join a mailing list to ask questions and just read the traffic coming through, such as the JUnit list at Yahoo (something like groups.yahoo.com/junit). Some of the movers and shakers of the JUnit world are on that list and actively participate.
Get a list of the golden rules of unit testing and stick them on your (and others') cubicle wall, for example:
Thou shalt never access an external system
Thou shalt only test the code under test
Thou shalt only test one thing at once
etc.
Since everyone else is talking about structure, I'll pick different points. This sounds like a good opportunity to profile the code to find bottlenecks and to run it through code coverage to see if you are missing anything (given the time it takes to run, the results could be interesting).
I personally use the NetBeans profiler, but there are profilers in other IDEs and standalone ones as well.
For code coverage I use Cobertura, but EMMA works too (EMMA had an annoyance that Cobertura didn't have... I forget what it was, and it may not be an issue anymore). Those two are free; there are paid ones as well that are nice.
