Integration tests, but how much? [closed] - java

A recent debate within my team made me wonder. The basic question is how much, and what, we should cover with functional/integration tests (sure, they are not the same, but the example is a dummy where the distinction doesn't matter).
Let's say you have a "controller" class something like:
public class SomeController {
    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        if (val.validate(input)) {
            return false;
        }
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }
        BusinessStuff businessStuff = tr.transform(stuffs);
        if (null == businessStuff) {
            return false;
        }
        return calc.check(businessStuff);
    }
}
We need a lot of unit testing for sure (e.g., if validation fails, or there is no data in the DB, ...); that's out of the question.
Our main issue, and the point we cannot agree on, is how much of this the integration tests should cover :-)
I'm on the side that we should aim for fewer integration tests (test pyramid). What I would cover here is only a single happy/unhappy path where execution returns from the last line, just to see that when I put these pieces together, nothing blows up.
The problem is that it is not that easy to tell why the test resulted in false, and that makes some of the guys uneasy about it (e.g., if we simply check only the return value, it is hidden that the test is green only because someone changed the validation and it now returns false). Sure, we could cover all the cases, but that would be heavy overkill IMHO.
Does anyone have a good rule of thumb for this kind of issue? Or a recommendation? Reading? A talk? A blog post? Anything on the topic?
Thanks a lot in advance!
PS: Sorry for the ugly example, but it's quite hard to translate a specific piece of code into an example. Yeah, one can argue about throwing exceptions/using a different return type/etc., but our hands are more or less tied because of external dependencies.

It's easy to figure out where the test should reside if you follow these rules:
We check the logic on Unit Tests level and we check if the logic is invoked on Component or System levels.
We don't use mocking frameworks (Mockito, jMock, etc.).
Let's dive in, but first let's agree on terminology:
Unit tests - check a method, a class, or a few of them in isolation
Component Test - initializes a piece of the app but doesn't deploy it to the app server. An example would be initializing Spring contexts in the tests.
System Test - requires a full deployment on the app server. An example would be sending HTTP REST requests to a remote server.
If we build a balanced pyramid, we'll end up with most tests at the Unit and Component levels and only a few left for System Testing. This is good, since lower-level tests are faster and easier to write. To achieve that:
We should put the business logic as low as possible (preferably in the Domain Model), as this allows us to test it easily in isolation. Whenever you go through a collection of objects and apply conditions there, that logic ideally belongs in the domain model.
But the fact that the logic works doesn't mean it's invoked correctly. That's where you need Component Tests: initialize your Controllers as well as the services and DAOs, then call them once or twice to see whether the logic is invoked.
Example: a user's name cannot exceed 50 symbols and may contain only Latin letters plus some special symbols.
Unit Tests - create Users with right and wrong usernames, check that exceptions are thrown or, vice versa, that valid names pass
Component Tests - check that when you pass an invalid user to the Controller (if you use Spring MVC, you can do that with MockMvc), it raises the error. Here you need to pass only one user - all the rules have already been checked by now; at this level you're interested only in whether those rules are invoked (see the sketch after this list).
System Tests - you may not need them for this scenario, actually.
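As a sketch of what such a component test can look like with MockMvc (assuming a Spring Boot test slice and a hypothetical UserController with Bean Validation behind POST /users; none of these names come from the answer itself):
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;

// Component test: initializes the web slice of the Spring context,
// no deployment to an app server.
@WebMvcTest(UserController.class)
class UserControllerComponentTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void rejectsInvalidUserName() throws Exception {
        // One invalid user is enough: the 50-symbol rule itself is already
        // covered by unit tests; here we only check that the rule is invoked.
        String tooLongName = "x".repeat(51);
        mockMvc.perform(post("/users")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content("{\"name\":\"" + tooLongName + "\"}"))
               .andExpect(status().isBadRequest());
    }
}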
Here is a more elaborate example of how you can implement a balanced pyramid.

In general we write an integration test at every starting point of the application (let's say every controller). We validate some happy flows and some error flows, with a couple of asserts to give us some peace of mind that we didn't break anything.
However, we also write tests at lower levels in response to regressions or when multiple classes are involved in a piece of complicated behaviour.
We use Integration tests mainly to catch the following types of regressions:
Refactoring mistakes (not caught by Unit tests).
For problems with refactoring, a couple of integration tests that hit a good portion of your application are more than sufficient. Refactoring often touches a large number of classes, so these tests will expose things like using the wrong class or parameter somewhere.
Early detection of injection problems (context not loading, Spring)
Injection problems often happen because of missing annotations or mistakes in XML config. The first integration test that runs and sets up the entire context (apart from mocked back-ends) will catch these every time.
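A minimal sketch of that first context-loading test, assuming Spring Boot (with plain Spring you would use @ContextConfiguration instead):
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

// If a missing annotation or an XML-config mistake breaks the wiring,
// this test fails during context startup, before any assertion runs.
@SpringBootTest
class ApplicationContextIT {

    @Test
    void contextLoads() {
        // Intentionally empty: reaching this point means the entire context
        // (apart from any mocked back-ends) was assembled successfully.
    }
}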
Bugs in super complicated logic that are nearly impossible to test without controlling all inputs
Sometimes you have code that is spread over several classes and involves filtering, transformations, etc., and sometimes no one really understands what is going on. What's worse, it is nearly impossible to test on a live system because the underlying data sources cannot easily produce the exact scenario that triggers the bug.
For these cases (once discovered) we add a new integration test where we feed the system the input that caused the bug, and then verify that it performs as expected. This gives a lot of peace of mind after extensive code changes.
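A hedged sketch of such a regression test, reusing SomeController from the question; the @MockBean back-end and the recorded fixtures are illustrative assumptions:
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.when;

import java.util.List;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;

@SpringBootTest
class SomeControllerRegressionIT {

    @Autowired
    private SomeController controller;

    @MockBean
    private DataAccess da; // only the back-end is mocked; the rest is real wiring

    @Test
    void reproducesTheOnceReportedBug() {
        // Feed the system the exact data constellation that triggered the bug.
        when(da.loadStuffs(any())).thenReturn(List.of(edgeCaseStuff()));
        assertTrue(controller.doCheck(inputThatTriggeredTheBug()));
    }

    // Hypothetical fixtures standing in for the recorded production data.
    private Stuff edgeCaseStuff() { return new Stuff(); }
    private Input inputThatTriggeredTheBug() { return new Input(); }
}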

Related

Why do we not mock domain objects in unit tests?

Here are two tests whose sole purpose is to confirm that when service.doSomething is called, emailService.sendEmail is called with the person's email as a parameter.
@Mock
private EmailService emailService;

@InjectMocks
private Service service;

@Captor
private ArgumentCaptor<String> stringCaptor;

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_NO_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    // There is only one way of building an Address and it requires all these fields
    final Address crowsNest = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    // There is only one way of building a Phone and it requires all these fields
    final Phone phone = new Phone("1", "978-281-2965");
    // There is only one way of building a Vessel and it requires all these fields
    final Vessel andreaGail = new Vessel("Andrea Gail", "Fishing", 92000);
    // There is only one way of building a Person and it requires all these fields
    final Person captain = new Person("Billy", "Tyne", email, crowsNest, phone, andreaGail);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_WITH_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    final Person captain = mock(Person.class);
    when(captain.getEmail()).thenReturn(email);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}
Why is my team telling me not to mock the domain objects that are required to run my tests but are not part of the actual test? I am told mocks are for the dependencies of the tested service only. In my opinion, the resulting test code is leaner, cleaner, and easier to understand. There is nothing to distract from the purpose of the test, which is to verify that the call to emailService.sendEmail occurs. This is something I have heard and accepted as gospel for a long time, over many jobs, but I still cannot agree with it.
I think I understand your team's position.
They are probably saying that you should reserve mocks for things that have hard-to-instantiate dependencies. That includes repositories that make calls to a database, and other services that can potentially have their own rat's nest of dependencies. It doesn't include domain objects that can be instantiated (even if filling out all the constructor arguments is a pain).
If you mock the domain objects then the test doesn't give you any code coverage of them. I know I'd rather get these domain objects covered by tests of services, controllers, repositories, etc. as much as possible and minimize tests written just to exercise their getters and setters directly. That lets tests of domain objects focus on any actual business logic.
That does mean that if the domain object has an error, then tests of multiple components can fail. I think that's OK. I would still have tests of the domain objects (because it's easier to test them in isolation than to make sure all paths are covered in a test of a service), but depending entirely on the domain-object tests to accurately reflect how those objects are used in the service seems like too much to ask.
You have a point that the mocks allow you to make the objects without filling in all their data (and I'm sure the real code can get a lot worse than what is posted). It's a trade-off, but having code coverage that includes the actual domain objects as well as the service under test seems like a bigger win to me.
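One common middle ground (my own sketch, not something the answer prescribes) is a small test-data factory: the real domain objects stay in play, but the constructor noise disappears from the test body:
// Test-data factory built on the constructors from the question. Only the
// field the test cares about (the email) is a parameter; the rest are canned.
final class TestPeople {

    static Person personWithEmail(String email) {
        Address crowsNest = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
        Phone phone = new Phone("1", "978-281-2965");
        Vessel andreaGail = new Vessel("Andrea Gail", "Fishing", 92000);
        return new Person("Billy", "Tyne", email, crowsNest, phone, andreaGail);
    }

    private TestPeople() {
    }
}
The first test then shrinks to service.doSomething(TestPeople.personWithEmail(email)) and still exercises the real Person.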
It seems to me like your team has chosen to err on the side of pragmatism over purity. If everybody else has arrived at this consensus, you need to respect that. Some things are worth making waves over; this isn't one of them.
It is a trade-off, and you have designed your example nicely to be 'on the edge'. Generally, mocking should be done for a reason. Good reasons are:
You cannot easily make the depended-on component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance-intensive (like a need for external files) (* see below).
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build/execution times.
The DOC has stability (maturity) issues that make the tests unreliable - or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the above-mentioned problems.
Why is it recommendable to avoid mocking where unnecessary?
For one thing, mocking increases test complexity.
Secondly, mocking makes your tests dependent on the inner workings of your code - namely, on how the code interacts with the DOCs (like, in your case, the fact that the captain's email is obtained using getEmail, although possibly another way might exist to get that information).
And, as Nathan mentioned, it may be seen as a plus that - without mocking - DOCs are tested for free. I would be careful here, though: there is a risk that your tests lose focus if you get tempted to also test the DOCs. The DOCs should have tests of their own.
Why is your scenario 'on the edge'?
One of the above-mentioned good reasons for mocking is marked with (*): "The test setup is overly complex ...", and your example is constructed to have a test setup that is a bit complex. Complexity of the test setup is obviously not a hard criterion, and developers simply have to make a choice. If you want to look at it this way, you could say that either way carries some risk when it comes to future maintenance scenarios.
Summarized, I would say that neither position (always mock or never mock) is right. Instead, developers should understand the decision criteria and then apply them to the specific situation. And when the scenario is in the grey zone, such that the criteria don't lead to a clear decision, don't fight over it.
There are two mistakes here.
First, testing that when a service method is called, it delegates to another method: that is a bad specification. A service method should be specified in terms of the values it returns (for getters) or the values that could subsequently be got (for mutators) through that service interface. The service layer should be treated as a Facade. In general, few methods should be specified in terms of which methods they delegate to and when. The delegations are implementation details and should not be tested.
Unfortunately, the popular mocking frameworks encourage this erroneous approach, and so does overzealous use of Behaviour Driven Development.
The second mistake is centered on the very concept of unit testing. We would like each of our unit tests to test one thing, so that when there is a fault in one thing, we have one test failure and locating the fault is easy. And we tend to think of "unit" as meaning the same as "method" or "class". This leads people to think that a unit test should involve only one real class and that all other classes should be mocked.
This is impossible for all but the simplest of classes. Almost all Java code uses classes from the standard library, such as String or HashSet. Most professional Java code uses classes from various frameworks, such as Spring. Nobody seriously suggests mocking those. We accept that those classes are trustworthy and so do not need mocking. We accept that it is OK not to mock "trustworthy" classes that the code of our unit uses. But, you say, our classes are not trustworthy, so we must mock them. Not so. You can trust those other classes by having good unit tests for them.
But how do you avoid a tangle of interdependent classes that causes a confusing mass of test failures when only one fault is present? That would be a nightmare to debug! Use a concept from 1970s programming (called a virtual machine hierarchy, which is now a rather confusing term given the additional meanings of virtual machine): arrange your software in layers from low level to high level, with higher layers performing operations using lower layers. Each layer provides a more expressive or advanced means of abstractly describing operations and objects. So domain objects are at a low level and the service layer is at a higher level. When several tests fail, start debugging the lowest-level test failure(s): the fault will probably be in that layer, possibly (but probably not) in a lower layer, and not in a higher layer.
Reserve mocks only for input and output interfaces that would make the tests very expensive to run (typically, this means mocking the repository layer and the logging interface).
The intention of an automated test is to reveal that the intended behavior of some unit of software is no longer performing as expected (that is, to reveal bugs).
The granularity/size/bounds of units under test in a given test suite is to be decided by you and your team.
Once that is decided, if something outside of that scope can be mocked without sacrificing the behavior being tested, then it is clearly irrelevant to the test and should be mocked. This will help make your tests more:
Isolated
Fast
Readable (as you mentioned)
...and most importantly, when the test fails, it will reveal that the intended behavior of some unit of software is no longer performing as expected. Given a sufficiently small unit under test, it will be obvious where the bug has occurred and why.
If your test-without-mocks example were to fail, it could indicate an issue with Address, Phone, Vessel, or Person, and tracking down exactly where the bug occurred would waste time.
One thing I will mention is that your example with mocks is actually a bit unreadable IMO, because you are asserting that a captured String will have the value "billy.tyne@myspace.com", but it is unclear why.

Unit testing methods which do not produce a distinct output

In my Vaadin GUI application, there are many methods which look like the ones below.
@Override
protected void loadLayout() {
    CssLayout statusLayout = new CssLayout();
    statusLayout.addComponent(connectedTextLabel);
    statusLayout.addComponent(connectedCountLabel);
    statusLayout.addComponent(notConnectedTextLabel);
    statusLayout.addComponent(notConnectedCountLabel);

    connectionsTable.getCustomHeaderLayout().addComponent(statusLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(commandLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(historyViewCheckbox);

    bodySplitter.addComponent(connectionsTable);
    bodySplitter.addComponent(connectionHistoryTable);
    bodySplitter.setSplitPosition(75, Sizeable.Unit.PERCENTAGE);
    bodySplitter.setSizeFull();
    bodyLayout.addComponent(bodySplitter);

    if (connectionDef.getConnectionHistoryDef() == null) {
        historyViewCheckbox.setVisible(false);
    }
    if (connectionDef.getConnectionStatusField() == null
            || connectionDef.getConnectedStatusValue() == null
            || connectionDef.getConnectedStatusValue().isEmpty()) {
        connectedTextLabel.setVisible(false);
        connectedCountLabel.setVisible(false);
        notConnectedTextLabel.setVisible(false);
        notConnectedCountLabel.setVisible(false);
    }
}

protected void setStyleNamesAndControlIds() {
    mainLayout.setId("mainLayout");
    header.setId("header");
    footer.setId("footer");
    propertyEditorLayout.setId("propertyEditorLayout");
    propertyEditor.setId("propertyEditor");

    mainLayout.setStyleName("mainLayout");
    propertyEditorLayout.setStyleName("ui_action_edit");
    header.setStyleName("TopPane");
    footer.setStyleName("footer");
}
These methods are used for setting up the layout of GUIs. They do not produce a single distinct output, and almost every line in them does a separate job that is mostly unrelated to the other lines.
Usually, when unit testing a method, I check the return value of the method, or validate calls on a limited number of external objects such as database connections.
But for methods like the above, there is no such single output. If I wrote unit tests for such methods, my test code would check that every method call in every line happens, and in the end it would look almost like the method itself.
If someone altered the code in any way, the test would break, and they would have to update the test to match the change. But there is no assurance that the change didn't actually break anything, since the test doesn't check the actual UI drawn in the browser.
For example, if someone changed the style name of a control, they would have to update the test code with the new style name, and the test would pass. But for things to actually work without any issue, they would also have to change the relevant SCSS style files - and the test contributed nothing to detecting this issue. The same applies to the layout setup code.
Is there any advantage to writing unit tests like the above, other than keeping the code-coverage rating at a higher level? For me, it feels useless; writing a test that compares the decompiled bytecode of the method to the original decompiled bytecode kept as a string in the test looks much better than these kinds of tests.
Is there any advantage of writing unit tests like above, other than keeping the code coverage rating at a higher level?
Yes, if you take a sensible approach. It might not make sense, as you say, to test that a control has a particular style. So focus your tests on the parts of your code that are likely to break. If there is any conditional logic that goes into producing your UI, test that logic. The test will then protect your code from future changes that could break your logic.
As for your comment about testing methods that don't return a value, you can address that in several ways.
It's your code, so you can restructure it to be more testable. Think about breaking it down into smaller methods. Isolate your logic into individual methods that can be called in a test.
Indirect verification - rather than focusing on return values, focus on the effect your method has on other objects in the system.
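For instance (a sketch of mine, reusing the ConnectionDef getters from loadLayout() above), the visibility rules can be pulled out of the Vaadin plumbing into plain methods:
// The conditional logic from loadLayout(), extracted so a plain unit test
// can exercise it without constructing a single Vaadin component.
final class ConnectionLayoutRules {

    static boolean statusLabelsVisible(ConnectionDef def) {
        return def.getConnectionStatusField() != null
                && def.getConnectedStatusValue() != null
                && !def.getConnectedStatusValue().isEmpty();
    }

    static boolean historyCheckboxVisible(ConnectionDef def) {
        return def.getConnectionHistoryDef() != null;
    }

    private ConnectionLayoutRules() {
    }
}
loadLayout() then just calls these methods, and the unit tests assert on the booleans for the interesting combinations of ConnectionDef state.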
Finally consider if unit testing of the UI is right for you and your organization. UIs are often difficult to unit test (as you have pointed out). Many companies write functional tests for their UIs. These are tests that drive the UI of the actual product. This is very different from unit tests which do not require the full product and are targeted at very small units of functionality.
Here's one simple example you could look at to see how to drive your application end to end and try out what is needed. It is a Vaadin 8, CDI & WildFly Swarm example, and in no way the only way to test the UIs of a Vaadin application.
https://github.com/wildfly-swarm/wildfly-swarm-examples/blob/master/vaadin/src/it/java/org/wildfly/swarm/it/vaadin/VaadinApplicationIT.java

Unit Testable convention for Service "Helper Classes" in DDD pattern

I'm fairly new to Java and joining a project that leverages the DDD pattern (supposedly). I come from a strong Python background and am fairly anal about unit-test-driven design. That said, one of the challenges of moving to Java is the testability of service layers.
Our REST-like project stack is laid out as follows:
ServiceHandlers, which handle request/response etc. and call specific implementations of IService (e.g. DocumentService)
DocumentService - handles auditing, permission checking, etc., with methods such as makeOwner(session, user, doc)
Currently, something like DocumentService has repository dependencies injected via Guice. In a public method like DocumentService.makeOwner, we want to ensure the session user is an admin as well as check whether the target user is already an owner (leveraging the injected repositories). This results in some duplicated code - once for each of the users involved, to resolve the user and ensure membership, permissions, etc. To eliminate this redundant code, I want to make a sort of super simple isOwner(user, doc) call that I can concisely mock out for various test scenarios (such as throwing the exception when the user can't be resolved). Here is where my googling fails me.
If I put this in the same class as DocumentService, I can't mock it while testing makeOwner in the same class (due to Mockito limitations), even though it somewhat feels like it should go there (option 1).
If I put it in a lower class like DocumentHelpers, it feels slightly funny, but I can easily mock it out. Also, DocumentHelpers needs the injected repository as well, which is fine with Guice (option 2).
I should add that there are numerous spots of this nature in our infant code base that are currently untestable because methods non-statically call helper-like methods in the same *Service class that are not used by the upper ServiceHandler class. However, at this stage, I can't tell whether this is poor design or just fine.
So I ask more experienced Java developers:
Does introducing "Service Helpers" seem like a valid solution?
Is this counter to DDD principles?
If not, is there are more DDD-friendly naming convention for this aside from "Helpers"?
Three bits to add:
My googling has mostly turned up debates over "helpers" as static utility methods for stateless operations like date formatting, which doesn't fit my issue.
I don't want to use PowerMock, since it breaks code coverage and is pretty ugly to use.
In Python I'd probably call the "Service Helper" layer described above internal_api, but that seems to have a different meaning in Java, especially since I need the classes to be public to unit test them.
Any guidance is appreciated.
That the user who initiates the action must be an admin looks like an application-level access-control concern. DDD doesn't have much of an opinion about how you should do that. For testability and separation-of-concerns purposes, though, some kind of separate non-static class might be a better idea than a method in the same service or a static helper.
Checking that the future owner is already an owner (if I understand correctly) might be a different animal. It could be an invariant of your domain. If so, the preferred way is to rely on an Aggregate to enforce that rule. However, it's not clear from your description whether Document is an aggregate, and whether it or another aggregate contains the data needed to tell if a user is an owner.
Alternatively, you could verify the rule at the Application-layer level, but that means your domain model could become inconsistent if the state change is triggered by something other than the Application layer.
As I learned more about DDD, my question turned out not to be all that DDD-related and more about the general hierarchy of the code structure and the interactions of the layers. We ended up going with a separate DocumentServiceHelpers class that could be mocked out. This contains methods like isOwner that we can mock to return true or false as needed, to test our DocumentService handling more easily. Thanks to everyone for playing along.
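A rough sketch of what that can look like (the repository and its method are illustrative, not our actual code):
import javax.inject.Inject;

// Thin, separately mockable collaborator for DocumentService.
public class DocumentServiceHelpers {

    private final DocumentRepository documents; // injected by Guice

    @Inject
    public DocumentServiceHelpers(DocumentRepository documents) {
        this.documents = documents;
    }

    public boolean isOwner(User user, Document doc) {
        // Resolving the user and checking membership/permissions lives in one
        // place, so the duplication disappears from makeOwner and friends.
        return documents.findOwnerIds(doc.getId()).contains(user.getId());
    }
}
In DocumentServiceTest the helper then becomes a one-line Mockito stub: when(helpers.isOwner(user, doc)).thenReturn(true).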

What is the best approach for Unit testing when you have interfaces with both dummy & real implementations?

I'm familiar with the basic principles of TDD, being:
Write tests; these will fail because there is no implementation
Write a basic implementation to make the tests pass
Refactor the code
However, I'm a little confused as to where interfaces and implementations fit in. I'm creating a Spring web application in my spare time, and rather than going in guns blazing, I'd like to understand how I can test interfaces/implementations a little better. Take this simple example code I've created:
public class RunMe
{
    public static void main(String[] args)
    {
        // Using a dummy service now, but would have a real implementation later (fetch from DB etc.)
        UserService userService = new DummyUserService();
        System.out.println(userService.getUserById(1));
    }
}

interface UserService
{
    public String getUserById(Integer id);
}

class DummyUserService implements UserService
{
    @Override
    public String getUserById(Integer id)
    {
        return "James";
    }
}
I've created the UserService interface; ultimately there will be a real implementation of this that queries a database. However, in order to get the application off the ground, I've substituted a DummyUserService implementation that just returns some static data for now.
Question: how can I implement a testing strategy for the above?
I could create a test class called DummyUserServiceTest and test that when I call getUserById() it returns "James" - seems pretty simple, if not a waste of time(?).
Subsequently, I could also create a test class RealUserServiceTest that would test that getUserById() returns a user's name from the database. This is the part that confuses me slightly: in doing so, do I not essentially overstep the boundary of a unit test and move into integration-test territory (with the hit on the DB)?
Question (improved, a little): when using interfaces with dummy/stubbed and real implementations, which parts should be unit tested, and which parts can safely be left untested?
I spent a few hours googling this topic last night and mostly found either tutorials on what TDD is or examples of how to use JUnit, but nothing in the realm of advice on what should actually be tested. It is entirely possible, though, that I didn't search hard enough or wasn't looking for the right thing...
Don't test the dummy implementations: they won't be used in production. It makes no real sense to test them.
If the real UserService implementation does nothing more than go to the database and get the user name by its ID, then the test should check that it does that and does it correctly. Call it an integration test if you want, but it's nevertheless a test that should be written and automated.
The usual strategy is to populate the database with minimal test data in the @Before-annotated method of the test, and have your test method check that for an ID which exists in the database, the corresponding user name is returned.
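A minimal sketch of that strategy with an in-memory H2 database (the DataSource-based RealUserService constructor is an assumption):
import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.Statement;

import org.h2.jdbcx.JdbcDataSource;
import org.junit.Before;
import org.junit.Test;

public class RealUserServiceIT {

    private UserService userService;

    @Before
    public void seedDatabase() throws Exception {
        JdbcDataSource ds = new JdbcDataSource();
        ds.setURL("jdbc:h2:mem:users;DB_CLOSE_DELAY=-1");
        try (Connection c = ds.getConnection(); Statement s = c.createStatement()) {
            s.execute("CREATE TABLE IF NOT EXISTS users(id INT PRIMARY KEY, name VARCHAR(50))");
            s.execute("MERGE INTO users VALUES (1, 'James')"); // minimal test data
        }
        userService = new RealUserService(ds); // hypothetical constructor taking a DataSource
    }

    @Test
    public void returnsUserNameForExistingId() {
        assertEquals("James", userService.getUserById(1));
    }
}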
I would recommend you read this book first: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. It answers your question and many others related to TDD.
In your particular case, you should make your RealUserService configurable with a DB adapter that makes the real DB queries. The service itself then does the servicing, not the data persistence. Read the book; it will help a lot :)
JB's answer is a good one; I thought I'd throw out another technique I've used.
When developing the original test, don't bother stubbing out the UserService in the first place. In fact, go ahead and write the real thing. Proceed by following Kent Beck's 3 rules.
1) Make it work.
2) Make it right.
3) Make it fast.
Your code will have tests that verify that find-by-id works. As JB stated, your tests will be considered integration tests at this point. Once they are passing, we have successfully achieved step 1. Now look at the design. Is it right? Tweak any design smells and check step 2 off your list.
For step 3, we need to make this test fast. We all know that integration tests are slow and error-prone, with all of the transaction management and database setup. Once we know the code works, I typically don't bother with the integration tests. This is the point where you can introduce your dummy service, effectively turning your integration test into a unit test (sketched below). Now that it doesn't touch the database in any way, we can check step 3 off the list because this test is now fast.
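A sketch of that swap, assuming the real service is backed by some UserRepository abstraction (the repository interface and wiring are my invention, not from the answers above):
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class UserServiceTest {

    // In-memory fake standing in for the database-backed repository.
    static class InMemoryUserRepository implements UserRepository {
        private final Map<Integer, String> users = new HashMap<>();

        void add(int id, String name) {
            users.put(id, name);
        }

        @Override
        public String findNameById(int id) {
            return users.get(id);
        }
    }

    @Test
    public void findsUserById() {
        InMemoryUserRepository repo = new InMemoryUserRepository();
        repo.add(1, "James");
        UserService userService = new RealUserService(repo); // hypothetical wiring
        assertEquals("James", userService.getUserById(1));
    }
}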
So, what are the problems with this approach? Well, many will say that I still need a test for the database-backed UserService. I typically don't keep integration tests lying around in my project. My opinion is that these types of tests are slow, brittle, and don't catch enough logic errors in most projects to pay for themselves.
Hope that helps!
Brandon

Java unit testing - proper isolation?

I'd like to know if my approach to unit testing has been wrong:
My application has a bootstrap process that initializes several components and provides services to various sub-systems - let's call it the "controller".
In many cases, in order to unit test these sub-systems, I need access to the controller, as the sub-systems may depend on it. My approach to unit testing has been to initialize the system and then provide the controller to any unit test that requires it. I achieve this via inheritance: I have a base unit test that initializes and tests the controller, and any unit test requiring the controller extends this base class and hence has access to it.
My question is this:
(1) Is this achieving proper isolation? It makes sense to me that unit tests should be done in isolation so that they are repeatable and independent. Is it OK that I am providing a real, initialized controller rather than mocking it or attempting to mock the specific environment required by each test?
(2) As a best practice (assuming my previous approach is OK), should I create the controller over and over for each unit test, or would it suffice to create it once (its state is not changing)?
If we are supplying a "real" controller to test another component, then strictly speaking we are performing an integration test rather than a unit test. This is not necessarily a bad thing, but consider the following points:
Cost of Creating the Controller
If the controller is a heavyweight object with a considerable construction cost, then every unit test will incur this cost. As the number of unit tests grows, that cost may begin to dominate the overall test execution time. It is always desirable to keep the runtime of unit tests as small as possible to allow a quick turnaround after code changes.
Controller Dependencies
If the controller is a complex object, it may have dependencies of its own that need to be instantiated in order to construct the controller itself. For example, it may need to access a database or configuration file of some kind. Now, not only does the controller need to be initialized, but also those components. As the application evolves over time, the controller may require more and more dependencies, just making this problem worse as time goes on.
Controller State
If the controller carries any state, the execution of a unit test may change that state. This, in turn, may change the behaviour of subsequent unit tests. Such changes may result in apparently non-deterministic behaviour of the unit tests, introducing the possibility of masking bugs. The cure for this problem is to create the controller anew for each test, which may be impractical if that creation is expensive (as noted above).
Combinatorial Problem
The number of combinations of possible inputs to the composite system of the unit under test and the controller object may be much larger than the number of combinations for the unit alone. That number might be too large to test practically. By testing the unit in isolation with a stub or mock object in place of the controller, it is easier to keep the number of combinations under control.
God Object
If the controller is conveniently accessible to all components in every unit test, there will be a great temptation to turn the controller into a God Object that knows everything about every component in the system. Even worse, those components may begin to interact with one another through that god object. The end result is that the separation between application components begins to erode and the system starts to become monolithic.
Technical Debt
Even if the controller is stateless and cheap to instantiate today, that may change as the application evolves. If that day arrives after we have written a large number of unit tests, we might be faced with a large refactoring exercise across all of those tests. Furthermore, the actual system code might also need refactoring to replace all of the controller references with lighter-weight interfaces. There is a risk that the refactoring cost is significant - possibly even too high to contemplate - resulting in a system that is "stuck" in an undesirable form.
Recommendation
In order to avoid these pitfalls now and in the future, my recommendation is to avoid supplying the real controller to the unit tests.
The full controller is likely to be difficult to stub or mock effectively. This will induce (desirable) pressure to express a component's dependencies as a "thin", focused interface in place of the "thick", "kitchen sink" interface that the controller is likely to present. Why is this desirable? Because this practice promotes better separation of concerns between system components, yielding architectural benefits far beyond the unit-test code base.
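As an illustration (my own sketch, not from the answer): instead of handing the whole controller to a component, let the component declare only the thin slice it actually needs:
import static org.junit.Assert.assertEquals;

import java.time.Instant;

import org.junit.Test;

// Thin, focused dependency: the component states exactly what it needs
// from the big controller, and nothing more.
interface ClockSource {
    Instant now();
}

class ReportHeader {
    private final ClockSource clock;

    ReportHeader(ClockSource clock) {
        this.clock = clock;
    }

    String render() {
        return "Report generated at " + clock.now();
    }
}

class ReportHeaderTest {
    @Test
    public void rendersFixedTimestamp() {
        // Stubbing the thin interface is a one-line lambda: no controller,
        // no bootstrap process, no shared state between tests.
        ReportHeader header = new ReportHeader(() -> Instant.EPOCH);
        assertEquals("Report generated at 1970-01-01T00:00:00Z", header.render());
    }
}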
For lots of good practical advice about how to achieve separation of concerns and generally write testable code, see Misko Hevery's guide and talks.
I think it's OK to provide a real controller. That will give you a good integration test of your system. At my company we do a lot of what you're doing: a base test class that sets up the environment, and the actual test cases that inherit from it.
Hrm ... I think I might create it once. That'll also exercise your controller and make sure its state isn't changing and that it can bear repeated invocations.
If you're looking for a strict unit test, why not use mock objects, like EasyMock:
http://www.easymock.org/
That way you can provide "mock" behavior for the controller without ever instantiating it. Unitils also provides integration with EasyMock: if you extend the UnitilsJUnit4 test class, you get automatic mock-object creation and injection. Unitils also supports DB unit/integration testing, which might be overkill for your software, though.
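A minimal EasyMock sketch of that record/replay/verify style (Controller, SubSystem, and getConfigValue are placeholders for your real types):
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

import org.junit.Test;

public class SubSystemTest {

    @Test
    public void usesControllerConfig() {
        // Record the only interaction the sub-system is allowed to need.
        Controller controller = createMock(Controller.class);
        expect(controller.getConfigValue("timeout")).andReturn("30");
        replay(controller);

        SubSystem subSystem = new SubSystem(controller);
        subSystem.initialise();

        // Fails if the sub-system made more or fewer calls than recorded.
        verify(controller);
    }
}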
