Introduction
For a (school) project I'm working on, we have to develop a medium-sized Java app that in our case uses JavaFX and Java CDI dependency injection. We have a scripted Jenkins pipeline, based on a Groovy file, that checks every pull request so that branches that don't meet the quality requirements or fail to build can't be merged.
Problem
Only about 25% of the developers currently run the application to test whether their additions work, since the project is at such an early stage that a lot of the functionality we are working on is not used by the GUI yet. The other 75% of developers use JUnit unit and integration tests to check whether their code works. We could tell everyone to run the actual application before making a pull request to check that there are no runtime errors with Java CDI dependency injection, but people are people and don't always listen, so we quite often get code into develop that builds successfully with Maven but can't be run due to dependency injection issues.
Question
What is the easiest way to check in the Jenkins build that there are no Java CDI dependency injection issues that are going to pop up at runtime?
Sidenote
We already use JUnit, but not all tests currently use Weld injection. If it's possible to write a single test that checks for all dependency injection issues, that would also work instead of a Jenkins-based solution.
Your CI (continuous integration) approach is good. Testing all PRs is the way to go; you just need to decide what kind of tests should be executed. JUnit-style unit tests, integration tests, or maybe both?
CDI itself has a deployment validation phase during which it checks, at bootstrap, that things are all right: whether beans that should be passivation capable actually are, whether all declared injection points can be satisfied, and so on. This catches many user errors before they hit runtime, so simply deploying the app exposes them. You can, and should, use Arquillian to set up a testing environment much like the actual runtime environment. Note that no validation can catch dynamic resolution errors, e.g. when you use Instance<Object> and then try to resolve a bean that doesn't exist without first checking that it is resolvable.
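If the single test mentioned in the sidenote is the goal, a minimal sketch along these lines can trigger that same deployment validation, assuming a recent Weld SE version (where WeldContainer is AutoCloseable) is on the test classpath and a beans.xml is present so your beans are discovered:

```java
import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;
import org.junit.Test;

import static org.junit.Assert.assertNotNull;

public class CdiDeploymentSmokeTest {

    // Booting Weld SE runs the CDI deployment validation phase, so
    // unsatisfied or ambiguous injection points make this test fail
    // at bootstrap instead of surfacing at runtime.
    @Test
    public void containerBootsWithoutDeploymentErrors() {
        try (WeldContainer container = new Weld().initialize()) {
            assertNotNull(container);
        }
    }
}
```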
If this isn't what you wanted, you can consider the weld-junit extension. It leverages JUnit (4 or 5) to bootstrap a Weld SE container in which you can play around with your beans and test them. Look into the READMEs and tests in the project for examples. This doesn't reflect a pure EE environment, but it is more of a JUnit-style way of testing with CDI in it and is easy to get started with.
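For illustration, a rough weld-junit (JUnit 5 flavor) sketch might look like this; GreetingService and GreetingRepository are hypothetical beans standing in for your own classes:

```java
import org.jboss.weld.junit5.EnableWeld;
import org.jboss.weld.junit5.WeldInitiator;
import org.jboss.weld.junit5.WeldSetup;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertNotNull;

@EnableWeld
class GreetingServiceTest {

    // List the bean classes the test needs; GreetingService and
    // GreetingRepository are placeholders for your own beans.
    @WeldSetup
    public WeldInitiator weld = WeldInitiator.of(GreetingService.class, GreetingRepository.class);

    @Test
    void serviceCanBeResolvedAndInjected() {
        assertNotNull(weld.select(GreetingService.class).get());
    }
}
```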
Note that with Jenkins you can set up multiple tasks for each PR. You can make it run all the JUnit tests as well as all the integration (Arquillian) tests, for instance.
Related
I want to test the effects of a library call of my program on a real device. This call starts a service that sends an HTTP request to a server whose URL is hard-coded in the resources.
I want to verify that the request is sent correctly, so I set up a local HTTP server; but to be able to use it, I have to change/override/mock the resource so it points to http://127.0.0.1 instead.
I want to do "end-to-end" testing; in this case it's important that the service makes an actual network request, albeit a local one.
I've tried to override the value by creating a string resource with the same name in androidTest/res/values/strings.xml, but that resource is only visible in the test package, not in the application package.
Using the Instrumentation class only allows me to obtain the Context reference, but there's no way to replace it (or the return value of getResources()) with a mock or something similar.
How can I change a resource value of an Application under test?
You have a couple of choices:
Dependency injection
Stubs/mocks
SharedPreferences
Scripts or gradle tasks
Dependency injection
Use a library like RoboGuice or Dapper. Inject an object that handles making the API requests. Then, in your test setup, replace the injection modules with testing versions. That way your test code runs instead of the original, and that code can pass in different strings (either hard-coded or from the test strings.xml).
DI libraries can be expensive to set up: there is a high learning curve, and they can cause performance problems if not used correctly. They can even introduce hard-to-debug problems if the scope/lifetime of the objects isn't configured correctly. If testing is the only reason to use DI, it might not be worth it if you're not comfortable with a DI container.
Stubs/mocks
Wrap your calls in something that implements a custom interface you write. Your main implementation then fills in the host URL and calls the API. In tests, use a combination of stubs or mocks of that interface to replace the code that fills in the host URL.
This is less of an integration test, since the stubs or mocks replace parts of the code, but it is simpler than setting up a dependency injection framework.
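As a rough sketch of that idea (the interface, class, and resource names here are hypothetical):

```java
// Hypothetical interface that hides where the host URL comes from.
interface EndpointProvider {
    String baseUrl();
}

// Production implementation: reads the URL from the app's resources.
class ResourceEndpointProvider implements EndpointProvider {
    private final android.content.Context context;

    ResourceEndpointProvider(android.content.Context context) {
        this.context = context;
    }

    @Override
    public String baseUrl() {
        return context.getString(R.string.api_base_url); // hypothetical resource name
    }
}

// Test stub: points the same code at the local HTTP server.
class LocalhostEndpointProvider implements EndpointProvider {
    @Override
    public String baseUrl() {
        return "http://127.0.0.1:8080";
    }
}
```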
SharedPreferences
Use the Android SharedPreferences system. Have it default to a certain endpoint (production), but allow the app to be started on the testing device and provide some dialog or settings screen that lets you change the host URL. Run the tests again and now they point to a different API URL.
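A minimal sketch of that idea, with hypothetical preference and resource names:

```java
import android.content.Context;
import android.content.SharedPreferences;

public final class ApiUrlSettings {

    private static final String PREFS = "api_settings";    // hypothetical prefs file
    private static final String KEY_BASE_URL = "base_url"; // hypothetical key

    // Returns the override if one was set via a debug dialog/settings screen,
    // otherwise falls back to the production URL.
    public static String baseUrl(Context context) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        return prefs.getString(KEY_BASE_URL, "https://api.example.com");
    }

    public static void overrideBaseUrl(Context context, String url) {
        context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
               .edit()
               .putString(KEY_BASE_URL, url)
               .apply();
    }
}
```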
Scripts or gradle tasks
Write a script or a Gradle task that modifies the source before it is compiled in certain scenarios.
This can be fairly complicated and might even be too platform- or system-dependent if not done right. It will probably be fairly brittle to changes in the system, and it might introduce bugs if the wrong command is run when building the final packaged version and the wrong code goes out to the market.
Personal opinion
Which do I recommend? If you and/or your team are familiar with a DI library like RoboGuice or Dapper, I recommend that option. It is the most formal, type-safe and strict solution, and it maintains more of the integrity of the stack so you test the whole solution.
If you're not familiar with a good DI library, stubs/mocks and interface wrappers are a good fallback solution. They partly have to be used in the DI solution anyway, and you can write enough tests around them to cover a good majority of the cases you need to test (and are in control of). It is close enough to the DI solution that I would recommend it to everyone who doesn't already use DI in the project.
The SharedPreferences solution works great for switching between staging and production environments for QA and support. However, I wouldn't recommend it for automated tests, since the app will most likely be reinstalled/reset so often during development that it would get annoying to keep resetting that URL. Also, first runs of tests would probably fail, headless tests on a CI server would fail, etc. (You could default the URL to localhost, but then you run the risk of accidentally releasing that default to production at some point.)
I don't recommend scripts or hacked-up Gradle tasks. They are too brittle, less clear to other developers who come after you, and more complicated than they're worth, IMO.
In addition to Jon Adams's solutions, there's a further one:
Override resource in build type
By default, a library module is built in release mode when it's used by another module; the debug mode is only used for testing (unit tests and instrumented tests). Therefore, by overriding the resource in the debug build type, it's possible to change the resource value for that library's instrumented tests while the library's consumers keep the original value.
This has some caveats though:
Instrumented/integration tests must stay in the library itself, not in the main application package;
The same resource values have to be shared across all tests (unless you use product flavors).
I have a question. Say I have a big web application that relies on Java/Java EE (JSP/Servlets).
Before every drop we test each and every piece of functionality through the GUI to make sure everything works properly. Previously this was easy, but now that the number of modules has increased exponentially, manually testing every GUI flow and its required functionality is no longer feasible.
I am on the lookout for tools in which I can write my entire test suite, say about 1000 test cases, and then just run it once before the drop, and it will list all the test cases that have failed.
The tool should preferably be free to download and use.
I don't know whether using Arquillian or JUnit will help in this regard, but automated testing before the drop is really needed.
Please guide me.
Use JUnit together with a mocking framework, e.g. Mockito, to test units (service methods); see the sketch after this list.
Use Arquillian to test at the integration level (how different services and modules work together).
Use a database testing tool (e.g. DbUnit) to test your database/persistence layer.
Use Selenium to test your frontend
Test as much as possible.
Use Jenkins and Sonar to track your build process and the quality of your tests and code.
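To give an idea of the first point, a minimal JUnit 4 + Mockito sketch could look like this (OrderService and OrderRepository are hypothetical names):

```java
import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class OrderServiceTest {

    @Test
    public void totalIsComputedFromMockedRepository() {
        // The repository is mocked so the service method is tested in isolation.
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.findPrice("book")).thenReturn(12.50);

        OrderService service = new OrderService(repository);

        assertEquals(25.00, service.totalFor("book", 2), 0.001);
        verify(repository).findPrice("book");
    }
}
```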
You should always test your application at different levels; there is not just one solution.
Use unit testing to test small pieces of your application and to make refactoring as easy as possible.
Use integration tests to check that your modules still work together as expected.
Use GUI testing to check if your customers can work with your software.
If it's relevant, think about performance testing (e.g. JMeter).
Definitely Selenium. Couple it with Maven, because you will probably need to package your project specifically for testing purposes. Moreover, Maven allows you to launch a container during the integration-test phase and to shut it down automatically at the end. You can also configure this as a nightly build on Jenkins/Hudson so you will be quickly notified of any regression.
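For illustration, a minimal Selenium WebDriver test in Java might look like this; the URL and element id are hypothetical:

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class LoginPageIT {

    @Test
    public void loginFormIsDisplayed() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Hypothetical URL of the application deployed by the Maven build.
            driver.get("http://localhost:8080/myapp/login");
            assertTrue(driver.findElement(By.id("loginForm")).isDisplayed());
        } finally {
            driver.quit();
        }
    }
}
```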
I recently managed to convince my teammates on the project that we need testing (!). Due to the highly dynamic and flexible structure of our web application, with behavior depending on lots of parameters and permission relationships, they had rejected testing altogether for the usual reasons (time consuming, test maintenance, etc.).
We will introduce testing at the service layer:
Web Browser -> GWT/RPC -> GWT Servlet -> RMI -> SessionEJB -> RMI -> Spring beans
Thus after the GWT Servlet.
Do people recommend using JUnit? Or are there other test frameworks better suited? Any other general suggestions? Thanks.
You can indeed use plain JUnit or TestNG with a mocking framework to test your SessionEJB and individual Spring beans in isolation, i.e. proper unit testing.
But since there is already a lot of code written, you'll probably find more bugs with less code using system or integration testing, i.e. testing the complete SessionEJB and Spring bean round trip in a test application context, even with a real database behind it.
For integration and system testing, you can use DbUnit to set up a fixture of test data in a database. Spring also has a lot of test support utilities. All of these work with both JUnit and TestNG.
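A minimal sketch of such an integration test using Spring's JUnit support (the context file and CustomerService bean are hypothetical):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-application-context.xml") // hypothetical test context
public class CustomerServiceIntegrationTest {

    @Autowired
    private CustomerService customerService; // hypothetical Spring bean

    @Test
    public void customerCanBeLoadedFromTestDatabase() {
        assertNotNull(customerService.findByName("Alice"));
    }
}
```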
You should be able to JUnit-test your Servlets and EJBs. I suggest using some kind of mocking framework (e.g. EasyMock) for your servlet context, and also if you are using any kind of JNDI resource or dependency injection.
As for a testing framework, I highly recommend TestNG (http://testng.org) with Mockito (code.google.com/p/mockito/). I love using both due to their ease of use. @DataProvider in TestNG helps me a lot, as do the other annotations for setting up a test before/after running. I was using JUnit before, until I met TestNG at work, and I don't think I'll be going back anytime soon :)
Check them out; TestNG is definitely picking up steam and gaining a reputation.
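For example, a small TestNG test with a @DataProvider could look like this (PriceCalculator is a hypothetical class):

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import static org.testng.Assert.assertEquals;

public class PriceCalculatorTest {

    @DataProvider(name = "discounts")
    public Object[][] discounts() {
        return new Object[][] {
                {100.0, 0.10, 90.0},
                {200.0, 0.25, 150.0},
        };
    }

    // The same test method runs once per row of the data provider.
    @Test(dataProvider = "discounts")
    public void appliesDiscount(double price, double rate, double expected) {
        assertEquals(new PriceCalculator().apply(price, rate), expected, 0.001);
    }
}
```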
I am a little confused about integration testing of a simple EJB. If I want to test the EJB's local/no-interface view, do I need to use Arquillian? I stumbled upon Arquillian but have never used it. I have a Maven directory structure, Glassfish and Eclipse Indigo.
If I want to test the EJB's local/no-interface view, do I need to use Arquillian?
It is not necessary to use Arquillian, but certain things are made easier when you do.
Ordinarily, you would simply use the EJBContainer API available in EJB 3.1 to test EJBs in an embedded container (one that runs in the same JVM as the tests). In the case of embedded Glassfish, this typically results in deployment of the EJBs found on the application's classpath.
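A minimal sketch using the embeddable EJBContainer; the bean class and JNDI name are hypothetical and depend on your module layout:

```java
import javax.ejb.embeddable.EJBContainer;
import javax.naming.Context;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class GreeterBeanIT {

    @Test
    public void greetsFromEmbeddedContainer() throws Exception {
        // Boots an embedded container that scans the test classpath for EJBs.
        EJBContainer container = EJBContainer.createEJBContainer();
        try {
            Context ctx = container.getContext();
            // The JNDI name depends on the module name; "classes" is common
            // for beans picked up straight from target/classes.
            GreeterBean greeter = (GreeterBean) ctx.lookup("java:global/classes/GreeterBean");
            assertEquals("Hello", greeter.greet());
        } finally {
            container.close();
        }
    }
}
```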
Arquillian allows you to do a lot more than execute tests in a container. It manages the lifecycle of the container, so you don't have to write any code beyond setting the properties in the arquillian.xml file. It also allows you to manage deployments to a container in a far easier manner: using the ShrinkWrap API, you can programmatically perform different context-sensitive deployments to a container. Furthermore, injection of dependencies into the test (test enrichment) can also be performed, as long as it is supported by Arquillian.
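Roughly, an Arquillian test with a ShrinkWrap deployment looks like this (GreeterBean is a hypothetical EJB):

```java
import javax.ejb.EJB;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreeterBeanArquillianIT {

    // Builds a context-sensitive deployment containing only what this test needs.
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(GreeterBean.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    // Test enrichment: the container injects the EJB into the test instance.
    @EJB
    private GreeterBean greeter;

    @Test
    public void greets() {
        assertEquals("Hello", greeter.greet());
    }
}
```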
It is worth knowing that the embedded Glassfish container adapter for Arquillian uses the same APIs that are exposed by the embedded Glassfish API; without Arquillian you would usually end up duplicating its work, except in certain unique scenarios.
If you're interested in taking a look at examples using Arquillian, this GitHub project would help.
If you use Java EE 6, you can use EJBContainer to create a full EJB instance.
http://download.oracle.com/javaee/6/api/javax/ejb/embeddable/EJBContainer.html
http://download.oracle.com/javaee/6/tutorial/doc/gkcrr.html
If you are not a fan of mocking (just like me), then you could either have a look at ejb3unit (http://ejb3unit.sourceforge.net/) or try Arquillian.
I must say I had very good experiences with ejb3unit.
It seems that the ejb3unit project had not been maintained for 2-3 years, but surprisingly, a few weeks ago there was some activity on the ejb3unit site again.
Arquillian is not so easy to start with. I would say this mainly lies in the documentation: missing runnable examples and good tutorials.
But as soon as you have made your first Arquillian test run, Arquillian begins to shine!
Under the following link you can find a tutorial series on setting up Arquillian step by step:
http://milestonenext.blogspot.de/2012/12/ejb3-integration-test-with-arquillian.html
I read an article saying:
Testing support baked in: Testing is a priority and a first-class citizen in Grails. Grails promotes testing and provides utilities to make testing easier, from low-level unit tests to high-level functional tests. In addition, Grails is not married to a particular testing framework. You can use JUnit, Spock, EasyB, Geb, Selenium, Canoo, etc. Any testing framework can be made to work with Grails (by writing a plugin that hooks the testing framework into the Grails testing infrastructure).
Does this mean that I can test Grails just like any other Java EE framework? Is that block of text saying nothing special (e.g. that Grails has integration with JUnit), or is there anything special about Grails testing?
EDIT:
How does it compare to SeamTest?
I would say that Grails supports testing by means of a folder structure that already contains folders for unit and integration tests, and its commands help out with test writing. When you create a domain class or a controller, for instance, it automatically creates test stubs for you. It also has commands to run all tests, run unit or integration tests only, or run individual tests; these create reports for you automatically in the test folder.
You can also find a lot of plugins that support testing; there is a good functional test plugin that uses HtmlUnit to test actual requests, and there is also a Selenium plugin.
My overall experience with Grails has been very positive and I highly recommend it as a framework.
I hope this helps.
As Matthew pointed out, the testing infrastructure is all set up. The directory layout is defined and tests can be run through the grails script.
Overall, the testing environments of Grails and SeamTest aren't that different. They both have unit tests without a database and integration tests that exercise the whole stack. The differences are mostly of a Java vs. Groovy nature.
Just as SeamTest provides a layer over TestNG, Grails has a layer over JUnit that provides similar support. grails.test.GrailsUnitTestCase and groovy.util.GroovyTestCase are good starting points to see how they compare.
In my opinion, where Grails really stands out is its mocking support. It uses Groovy to provide very flexible mocking; in particular, you can dynamically override methods with mock versions directly on classes and objects, so there's no need to create mock classes. The framework provides shortcuts for mocking out the whole ORM layer, which lets you easily test higher-level components without the overhead of the database.
Take a look at the manual's chapter on testing for some concrete examples.