Preferable way of making code testable: Dependency injection vs encapsulation - java

I often find myself wondering what is the best practice for these problems. An example:
I have a Java program which should get the air temperature from a weather web service. I encapsulate this in a class which creates an HttpClient and does a GET REST request to the weather service. Writing a unit test for the class requires stubbing the HttpClient so that dummy data can be returned instead. There are some options for how to implement this:
Dependency injection in the constructor. This breaks encapsulation. If we switch to a SOAP web service instead, then a SoapConnection has to be injected instead of an HttpClient.
Creating a setter only for the purpose of testing. The "normal" HttpClient is constructed by default, but it is also possible to change the HttpClient by using the setter.
Reflection. Having the HttpClient as a private field set by the constructor (but not taken as a parameter), and then letting the test use reflection to replace it with a stubbed one.
Package-private. Lower the field's access restriction to make it accessible in tests.
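Option 2 above (a default client plus a test-only setter) can be sketched as follows. `HttpClientLike` is a hypothetical stand-in interface (not Apache's HttpClient), and the URL is a placeholder:

```java
// Hypothetical stand-in for an HTTP client interface (not Apache's HttpClient).
interface HttpClientLike {
    String get(String url);
}

class WeatherGateway {
    // Default production wiring; the sketch deliberately has no real network code.
    private HttpClientLike client = url -> {
        throw new IllegalStateException("no network in this sketch");
    };

    // The test-only seam described in option 2.
    void setHttpClient(HttpClientLike client) {
        this.client = client;
    }

    double airTemperature() {
        // "https://weather.example/temp" is a placeholder URL.
        return Double.parseDouble(client.get("https://weather.example/temp"));
    }
}

public class SetterSeamDemo {
    public static void main(String[] args) {
        WeatherGateway gateway = new WeatherGateway();
        gateway.setHttpClient(url -> "28.5"); // stubbed response
        System.out.println(gateway.airTemperature()); // prints 28.5
    }
}
```

The downside, as noted, is that the setter exists only for tests and leaves a mutable seam in production code.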
When trying to read about best practices on the subject it seems to me that the general consensus is that dependency injection is the preferred way, but I think the downside of breaking encapsulation is not given enough thought.
What do you think is the preferred way to make a class testable?

I believe the best way is through dependency injection, but not quite the way you describe. Instead of injecting an HttpClient directly, inject a WeatherStatusService (or some equivalent name). I would make this a simple interface with one method (in your use case), getWeatherStatus(). Then you can implement this interface with an HttpClientWeatherStatusService, and inject that at runtime. To unit test the core class, you have a choice of stubbing the interface yourself by implementing WeatherStatusService with your own unit-testing requirements, or using a mocking framework to mock the getWeatherStatus method. The main advantages of this approach are that:
You don't break encapsulation (because changing to a SOAP implementation involves creating a SOAPWeatherStatusService and deleting the HttpClient handler).
You have broken your initial single class down into two classes with distinct purposes: one class explicitly handles retrieving the data from an API, the other handles the core logic. This will probably be a flow like: receive weather status request (from higher up) -> request data retrieval from the API -> process/validate the returned data -> (optionally) store the data or trigger other processes to operate on it -> return the data.
You can re-use the WeatherStatusService implementation easily if a different use case emerges that utilises this data. (For example, perhaps you have one use case to store the weather conditions every 4 hours (to show the user an interactive map of the day's developments), and another use case to get the current weather. In this case, you have two different core logic requirements which both need to use the same API, so it makes sense to have the API access code shared between these approaches.)
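A minimal sketch of the interface-based injection described above. The WeatherStatus fields, the stub, and the WeatherReporter core class are illustrative assumptions:

```java
// Port: the core logic depends only on this interface.
interface WeatherStatusService {
    WeatherStatus getWeatherStatus(String location);
}

// Simple immutable value object (fields assumed for illustration; requires Java 16+).
record WeatherStatus(float temperature, float humidity, String weatherType) {}

// A hand-rolled stub, usable in unit tests instead of mocking HttpClient.
class StubWeatherStatusService implements WeatherStatusService {
    public WeatherStatus getWeatherStatus(String location) {
        return new WeatherStatus(21.5f, 0.4f, "Sunny");
    }
}

// Core class under test: it never sees HTTP or SOAP details.
public class WeatherReporter {
    private final WeatherStatusService service;

    public WeatherReporter(WeatherStatusService service) {
        this.service = service;
    }

    public String describe(String location) {
        WeatherStatus s = service.getWeatherStatus(location);
        return location + ": " + s.weatherType() + " at " + s.temperature() + "C";
    }

    public static void main(String[] args) {
        WeatherReporter reporter = new WeatherReporter(new StubWeatherStatusService());
        System.out.println(reporter.describe("Gloucester")); // Gloucester: Sunny at 21.5C
    }
}
```

Switching to SOAP would mean writing a SoapWeatherStatusService; WeatherReporter itself is untouched.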
This method is known as hexagonal/onion architecture which I recommend reading about here:
http://alistair.cockburn.us/Hexagonal+architecture
http://jeffreypalermo.com/blog/the-onion-architecture-part-1/
Or this post which sums the core ideas up:
http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html
EDIT:
Further to your comments:
What about testing the HttpClientWeatherStatus? Ignore unit testing or else we have to find a way to mock HttpClient there?
For the HttpClientWeatherStatusService class: it should ideally be immutable, so the HttpClient dependency is injected into the constructor on creation. This makes unit testing easy because you can mock HttpClient and prevent any interaction with the outside world. For example:
public class HttpClientWeatherStatusService implements WeatherStatusService {

    private final HttpClient httpClient;

    public HttpClientWeatherStatusService(HttpClient httpClient) {
        this.httpClient = httpClient;
    }

    public WeatherStatus getWeatherStatus(String location) {
        //Setup request.
        //Make request with the injected httpClient.
        //Parse response.
        return new WeatherStatus(temperature, humidity, weatherType);
    }
}
Where the returned WeatherStatus 'Event' is:
public class WeatherStatus {
    private final float temperature;
    private final float humidity;
    private final String weatherType;
    //Constructor and getters.
}
Then the tests look something like this:
public class WeatherStatusServiceTests {

    @Test
    public void givenALocation_WhenAWeatherStatusRequestIsMade_ThenTheCorrectStatusForThatLocationIsReturned() {
        //SETUP TEST.
        //Create httpClient mock.
        String location = "The World";
        //Create expected response.
        //Expect request containing location, return response.
        WeatherStatusService service = new HttpClientWeatherStatusService(httpClient);
        //Replay mock.

        //RUN TEST.
        WeatherStatus status = service.getWeatherStatus(location);

        //VERIFY TEST.
        //Assert status contains correctly parsed response.
    }
}
You will generally find that there will be very few conditionals and loops in the integration layers (because these constructs represent logic, and all logic should be in the core). Because of this (specifically because there will only be a single conditional branching path in the calling code), some people would argue that there is little point unit testing this class, and that it can be covered by an integration test just as easily, and in a less brittle way. I understand this viewpoint, and don't have a problem with skipping unit tests in the integration layers, but personally I would unit test it anyway. This is because I believe unit tests in an integration domain still help me ensure that my class is highly usable, and portable/re-usable (if it's easy to test, then it's easy to use from elsewhere in the codebase). I also use unit tests as documentation detailing the use of the class, with the advantage that any CI server will alert me when the documentation is out of date.
Isn't it bloating the code for a small problem which could have been "fixed" by just a few lines using reflection, or by simply changing to package-private field access?
The fact that you put "fixed" in quotes speaks volumes about how valid you think such a solution would be. ;) I agree that there is definitely some bloat to the code, and this can be disconcerting at first. But the real point is to make a maintainable codebase which is easy to develop against. I think some projects start fast because they "fix" problems by using hacks and dodgy coding practices to maintain the pace. Often productivity then grinds to a halt as the overwhelming technical debt turns changes which should be one-liners into mammoth refactors which take weeks or even months.
Once you have a project set up in a hexagonal way, the real payoffs come when you need to do one of the following:
Change the technology stack of one of your integration layers (e.g. from MySQL to Postgres). In this case (as touched on above), you simply implement a new persistence layer, making sure you use all the relevant interfaces from the binding/event/adapter layer. There should be no need to change core code or the interfaces. Finally, delete the old layer and inject the new layer in its place.
Add a new feature. Often integration layers will already exist, and may not even need modification to be used. Take the example of the getCurrentWeather() and store4HourlyWeather() use cases above, and assume you've already implemented the store4HourlyWeather() functionality using the class outlined above. To create the new functionality (let's assume the process begins with a RESTful request), you need to make three new files: a new class in your web layer to handle the initial request, a new class in your core layer to represent the user story of getCurrentWeather(), and an interface in your binding/event/adapter layer which the core class implements and the web class has injected into its constructor. Now on the one hand, yes, you've created three files when it would have been possible to create only one, or even just tack the feature onto an existing RESTful web handler. Of course you could, and in this simple example that would work fine. It is only over time that the blurring of layers becomes obvious and refactoring becomes hard. Consider the case where you tack it onto an existing class: that class no longer has an obvious single purpose. What will you call it? How will anyone know to look in it for this code? How complicated is your test set-up becoming so that you can test this class, now that there are more dependencies to mock?
Update integration layer changes. Following on from the example above, if the weather service API (where you are getting your information from) changes, there is only one place where you need to make changes in your program to be compatible with the new API again. This is the only place in the code which knows where the data actually comes from, so it's the only place which needs changing.
Introduce the project to a new team member. Arguable point, since any well laid out project will be fairly easy to understand, but my experience so far has been that most code looks simple and understandable. It achieves one thing, and it's very good at achieving that one thing. Understanding where to look (for example) for Amazon-S3 related code is obvious because there is an entire layer devoted to interacting with it, and this layer will have no code in it relating to other integration concerns.
Fix bugs. Linked to the above, often reproducibility is the biggest step towards a fix. The advantage of all the integration layers being immutable, independent, and accepting clear parameters, is that it is easy to isolate a single failing layer and modify the parameters until it fails. (Although again, well designed code will do this well too).
I hope I've answered your questions, let me know if you have more. :) Perhaps I will look into creating a sample hexagonal project over the weekend and linking to it here to demonstrate my point more clearly.

The preferable way should favor proper encapsulation and other object-oriented design qualities, while keeping the code under test simple. So, my recommended approach would be to:
Think of a good public API for the desired class (let's call it AirTemperatureMeasurement) that fits with the system architecture.
Write a unit test for it (which fails at this point, as the class is not implemented yet). The unit test will have to mock whatever dependency makes the call to the external web service.
Implement the class under test with the simplest solution which passes the test.
Repeat the previous steps, while looking for opportunities to simplify the code and remove duplication.
For example, here is a possible detailed solution:
Step 1:
public final class AirTemperatureMeasurement {
    public double getCelsius() { return 0; }
}
Step 2:
public final class AirTemperatureMeasurementTest {
    @Tested AirTemperatureMeasurement cut;
    @Capturing HttpClient anyHttpClient;

    @Test // a white-box test
    public void readAirTemperatureInCelsius() {
        final HttpResponse response = ...suitable response...
        new Expectations() {{
            anyHttpClient.request((HttpUriRequest) any);
            result = response;
        }};
        double airTemperatureInCelsius = cut.getCelsius();
        assertEquals(28.5, airTemperatureInCelsius, 0.0);
    }
}
Step 3:
public final class AirTemperatureMeasurement {
    public double getCelsius() {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        // Rest omitted for brevity.
        return airTemperatureInCelsius;
    }
}
The above uses the JMockit mocking library, but PowerMock would be an option too.
I would recommend using java.net.URL (if possible) instead of Apache's HttpClient, though; it would simplify both production and test code.
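A sketch of that java.net-based alternative, made testable by injecting the endpoint URL; the stub server in main uses the JDK's built-in com.sun.net.httpserver and stands in for the real weather service (the response format "28.5" is an assumption):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import com.sun.net.httpserver.HttpServer;

public final class AirTemperatureMeasurement {
    private final String endpoint; // injected so a test can point at a local stub

    public AirTemperatureMeasurement(String endpoint) {
        this.endpoint = endpoint;
    }

    public double getCelsius() throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            // Assumes the service returns the bare temperature, e.g. "28.5".
            return Double.parseDouble(in.readLine().trim());
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        // Local stub server standing in for the real weather service.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/temp", exchange -> {
            byte[] body = "28.5".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort() + "/temp";
        System.out.println(new AirTemperatureMeasurement(url).getCelsius()); // 28.5
        server.stop(0);
    }
}
```

Because the endpoint is a constructor parameter, the test never touches the real network.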

Related

Why do we not mock domain objects in unit tests?

I give you two tests, the purpose of which is solely to confirm that when service.doSomething() is called, emailService.sendEmail() is called with the person's email as a parameter.
@Mock
private EmailService emailService;

@InjectMocks
private Service service;

@Captor
private ArgumentCaptor<String> stringCaptor;

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_NO_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    // There is only one way of building an Address and it requires all these fields
    final Address crowsNest = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    // There is only one way of building a Phone and it requires all these fields
    final Phone phone = new Phone("1", "978-281-2965");
    // There is only one way of building a Vessel and it requires all these fields
    final Vessel andreaGail = new Vessel("Andrea Gail", "Fishing", 92000);
    // There is only one way of building a Person and it requires all these fields
    final Person captain = new Person("Billy", "Tyne", email, crowsNest, phone, andreaGail);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_WITH_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    final Person captain = mock(Person.class);
    when(captain.getEmail()).thenReturn(email);

    service.doSomething(captain); // <-- This requires the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}
Why is it that my team is telling me not to mock the domain objects that are required to run my tests but are not part of the actual test? I am told mocks are for the dependencies of the tested service only. In my opinion, the resulting test code is leaner, cleaner and easier to understand. There is nothing to distract from the purpose of the test, which is to verify that the call to emailService.sendEmail occurs. This is something that I have heard and accepted as gospel for a long time, over many jobs, but I still cannot agree with it.
I think I understand your team's position.
They are probably saying that you should reserve mocks for things that have hard-to-instantiate dependencies. That includes repositories that make calls to a database, and other services that can potentially have their own rat's nest of dependencies. It doesn't include domain objects that can be instantiated directly (even if filling out all the constructor arguments is a pain).
If you mock the domain objects then the test doesn't give you any code coverage of them. I know I'd rather get these domain objects covered by tests of services, controllers, repositories, etc. as much as possible and minimize tests written just to exercise their getters and setters directly. That lets tests of domain objects focus on any actual business logic.
That does mean that if the domain object has an error then tests of multiple components can fail. I think that's ok. I would still have tests of the domain objects (because it's easier to test those in isolation than to make sure all paths are covered in a test of a service), but I don't want to depend entirely on the domain object tests to accurately reflect how those objects are used in the service, it seems like too much to ask.
You have a point that the mocks allow you to make the objects without filling in all their data (and I'm sure the real code can get a lot worse than what is posted). It's a trade-off, but having code coverage that includes the actual domain objects as well as the service under test seems like a bigger win to me.
It seems to me like your team has chosen to err on the side of pragmatism vs purity. If everybody else has arrived at this consensus you need to respect that. Some things are worth making waves over. This isn't one of them.
It is a tradeoff, and you have designed your example nicely to be 'on the edge'. Generally, mocking should be done for a reason. Good reasons are:
You cannot easily make the depended-on component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance-intensive (like a need for external files) (* see below).
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build/execution times.
The DOC has stability (maturity) issues that make the tests unreliable, or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the abovementioned problems.
Why is it recommendable to avoid mocking where unnecessary?
For one thing, mocking increases test complexity.
Secondly, mocking makes your tests dependent on the inner workings of your code, namely, on how the code interacts with the DOCs (like, in your case, that the captain's first name is obtained using getFirstName, although possibly another way might exist to get that information).
And, as Nathan mentioned, it may be seen as a plus that - without mocking - DOCs are tested for free - although I would be careful here: There is a risk that your tests lose focus if you get tempted to also test the DOCs. The DOCs should have tests of their own.
Why is your scenario 'on the edge'?
One of the abovementioned good reasons for mocking is marked with (*): "The test setup is overly complex ...", and your example is constructed to have a test setup that is a bit complex. Complexity of the test setup is obviously not a hard criterion and developers will simply have to make a choice. If you want to look at it this way, you could say that either way has some risks when it comes to future maintenance scenarios.
Summarized, I would say that neither position (generally to mock or generally not to mock) is right. Instead, developers should understand the decision criteria and then apply them to the specific situation. And, when the scenario is in the grey zone such that the criteria don't lead to a clear decision, don't fight over it.
There are two mistakes here.
First, testing that when a service method is called, it delegates to another method. That is a bad specification. A service method should be specified in terms of the values it returns (for getters) or the values that could be subsequently got (for mutators) through that service interface. The service layer should be treated as a Facade. In general, few methods should be specified in terms of which methods they delegate to and when they delegate. The delegations are implementation details and so should not be tested.
Unfortunately, the popular mocking frameworks encourage this erroneous approach, and so does overzealous use of Behaviour Driven Development.
The second mistake is centered around the very concept of unit testing. We would like each of our unit tests to test one thing, so that when there is a fault in one thing, we have one test failure, and locating the fault is easy. And we tend to think of "unit" as meaning the same as "method" or "class". This leads people to think that a unit test should involve only one real class, and that all other classes should be mocked.
That is impossible for all but the simplest of classes. Almost all Java code uses classes from the standard library, such as String or HashSet. Most professional Java code uses classes from various frameworks, such as Spring. Nobody seriously suggests mocking those. We accept that those classes are trustworthy, and so do not need mocking. We accept that it is OK not to mock "trustworthy" classes that the code of our unit uses. But, you say, our classes are not trustworthy, so we must mock them. Not so. You can trust those other classes, by having good unit tests for them.
But how do you avoid a tangle of interdependent classes that causes a confusing mass of test failures when only one fault is present? That would be a nightmare to debug! Use a concept from 1970s programming (then called a virtual machine hierarchy, now a rather confusing term given the additional meanings of virtual machine): arrange your software in layers from low level to high level, with higher layers performing operations using lower layers. Each layer provides a more expressive or advanced means of abstractly describing operations and objects. So domain objects sit at a low level, and the service layer sits at a higher level. When several tests fail, start debugging the lowest-level test failure(s): the fault will probably be in that layer, possibly (but probably not) in a lower layer, and not in a higher layer.
Reserve mocks only for input and output interfaces that would make the tests very expensive to run (typically, this means mocking the repository layer and the logging interface).
The intention of an automated test is to reveal that the intended behavior of some unit of software is no longer performing as expected (aka reveal bugs.)
The granularity/size/bounds of units under test in a given test suite is to be decided by you and your team.
Once that is decided, if something outside of that scope can be mocked without sacrificing the behavior being tested, then that means it is clearly irrelevant to the test, and it should be mocked. This will help with making your tests more:
Isolated
Fast
Readable (as you mentioned)
...and most importantly, when the test fails, it will reveal that the intended behavior of some unit of software is no longer performing as expected. Given a sufficiently small unit under test, it will be obvious where the bug has occurred and why.
If your test-without-mocks example were to fail, it could indicate an issue with Address, Phone, Vessel, or Person. This will cause wasted time tracking down exactly where the bug has occurred.
One thing I will mention is that your example with mocks is actually a bit unreadable IMO because you are asserting that a String will have a value of "Billy" but it is unclear why.

Unit Testable convention for Service "Helper Classes" in DDD pattern

I'm fairly new to Java and joining a project that leverages the DDD pattern (supposedly). I come from a strong python background and am fairly anal about unit test driven design. That said, one of the challenges of moving to Java is the testability of Service layers.
Our REST-like project stack is laid out as follows:
ServiceHandlers, which handle request/response etc. and call specific implementations of IService (e.g. DocumentService)
DocumentService - handles auditing, permission checking, etc with methods such as makeOwner(session, user, doc)
Currently, something like DocumentService has repository dependencies injected via Guice. In a public method like DocumentService.makeOwner, we want to ensure the session user is an admin as well as check whether the target user is already an owner (leveraging the injected repositories). This results in some duplicated code: one instance for each of the users involved, to resolve the user and ensure membership, permissions, etc. To eliminate this redundant code, I want to make a sort of super-simple isOwner(user, doc) call that I can concisely mock out for various test scenarios (such as throwing the exception when the user can't be resolved). Here is where my googling fails me.
If I put this in the same class as DocumentService, I can't mock it while testing makeOwner in the same class (due to Mockito limitations) even though it somewhat feels like it should go here (option1).
If I put it in a lower class like DocumentHelpers, it feels slightly funny but I can easily mock it out. Also, DocumentHelpers needs the injected repository as well, which is fine with guice. (option 2)
I should add that there are numerous spots of this nature in our infant code base that are currently untestable, because methods non-statically call helper-like methods in the same *Service class which are not used by the upper ServiceHandler class. However, at this stage, I can't tell whether this is poor design or just fine.
So I ask more experienced Java developers:
Does introducing "Service Helpers" seem like a valid solution?
Is this counter to DDD principles?
If not, is there are more DDD-friendly naming convention for this aside from "Helpers"?
3 bits to add:
My googling has mostly come up with debates over "helpers" as static utility methods for stateless operations like date formatting, which doesn't fit my issue.
I don't want to use PowerMock since it breaks code coverage and is pretty ugly to use.
In python I'd probably call the "Service Helper" layer described above as internal_api, but that seems to have a different meaning in Java, especially since I need the classes to be public to unit test them.
Any guidance is appreciated.
That the user who initiates the action must be an admin looks like an application-level access control concern. DDD doesn't have much of an opinion about how you should do that. For testability and separation of concerns purposes, it might be a better idea to have some kind of separate non-static class than a method in the same service or a static helper though.
Checking that the future owner is already an owner (if I understand correctly) might be a different animal. It could be an invariant in your domain. If so, the preferred way is to rely on an Aggregate to enforce that rule. However, it's not clear from your description whether Document is an aggregate and if it or another aggregate contains the data needed to tell if a user is owner.
Alternatively, you could verify the rule at the Application layer level but it means that your domain model could go inconsistent if the state change is triggered by something else than that Application layer.
As I learn more about DDD, my question doesn't seem to be all that DDD related and more just about general hierarchy of the code structure and interactions of the layers. We ended up going with a separate DocumentServiceHelpers class that could be mocked out. This contains methods like isOwner that we can mock to return true or false as needed to test our DocumentService handling more easily. Thanks to everyone for playing along.
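The shape they arrived at can be sketched like this. The names (DocumentService, DocumentServiceHelpers, isOwner) follow the question; the method bodies, the string-based IDs, and the ownership rule are illustrative assumptions:

```java
// Helper extracted behind an interface so DocumentService can be tested with a stub.
interface DocumentServiceHelpers {
    boolean isOwner(String userId, String docId);
}

class DocumentService {
    private final DocumentServiceHelpers helpers;

    DocumentService(DocumentServiceHelpers helpers) {
        this.helpers = helpers;
    }

    void makeOwner(String sessionUserId, String targetUserId, String docId) {
        // Illustrative permission rule: the caller must already be an owner.
        if (!helpers.isOwner(sessionUserId, docId)) {
            throw new SecurityException("caller is not an owner");
        }
        if (helpers.isOwner(targetUserId, docId)) {
            return; // already an owner; nothing to do
        }
        // ...grant ownership via the injected repositories...
    }
}

public class DocumentServiceDemo {
    public static void main(String[] args) {
        // Hand-rolled stub: only "admin" counts as an owner. No Mockito needed,
        // because the helper is a one-method interface (a lambda suffices).
        DocumentServiceHelpers stub = (user, doc) -> user.equals("admin");
        DocumentService service = new DocumentService(stub);

        service.makeOwner("admin", "bob", "doc-1"); // passes the check
        try {
            service.makeOwner("mallory", "bob", "doc-1");
        } catch (SecurityException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Because the helper is an interface, each test scenario (owner, non-owner, unresolvable user throwing) is a one-line stub.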

Change method accessibility in order to test it

I have a public method that calls a group of private methods.
I would like to test each of the private methods with a unit test, as it is too complicated to test everything through the public method.
I think it would be bad practice to change method accessibility only for testing purposes.
But I don't see any other way to test it (maybe reflection, but that is ugly).
Private methods should only exist as a consequence of refactoring a public method, that you've developed using TDD.
If you create a class with public methods and plan to add private methods to it, then your architecture will fail.
I know it's harsh, but what you're asking for is really, really bad software design.
I suggest you buy Uncle Bob's book "Clean Code"
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
Which basically gives you a great foundation for getting it right and saving you a lot of grief in your future as a developer.
There is IMO only one correct answer to this question: if the class is too complex, it means it's doing too much and has too many responsibilities. You need to extract those responsibilities into other classes that can be tested separately.
So the answer to your question is NO!
What you have is a code smell. You're seeing the symptoms of a problem, but you're not curing it. What you need to do is to use refactoring techniques like extract class or extract subclass. Try to see if you can extract one of those private methods (or parts of it) into a class of its own. Then you can add unit tests to that new class. Divide and conquer until you have the code under control.
You could, as has been mentioned, change the visibility from private to package-private, and then ensure that the unit tests are in the same package (which should normally be the case anyway).
This can be an acceptable solution to your testing problem, given that the interfaces of the (now) package-private functions are sufficiently stable and that you also do some integration testing (that is, checking that the public methods call the package-private ones in the correct way).
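For example (all names are invented for illustration), a package-private method is directly reachable from a test class placed in the same package:

```java
// Both classes live in the same package (the default package here for brevity);
// in a real project the test would sit in src/test/java under the same package name.
class Parser {
    // Package-private on purpose: visible to same-package tests,
    // hidden from the rest of the codebase.
    int countTokens(String input) {
        String trimmed = input.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }
}

public class ParserPackageAccessDemo {
    public static void main(String[] args) {
        Parser parser = new Parser();
        System.out.println(parser.countTokens("a b  c")); // 3
        System.out.println(parser.countTokens("   "));    // 0
    }
}
```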
There are, however, some other options you might want to consider:
If the private functions are interface-stable but sufficiently complex, you might consider creating separate classes for them - it is likely that some of them might benefit from being split into several smaller functions themselves.
If testing the private functions via the public interface is inconvenient (maybe because of the need for a complex setup), this can sometimes be solved by the use of helper functions that simplify the setup and allow different tests to share common setup code.
You are right, changing the visibility of methods just so you are able to test them is a bad thing to do. Here are the options you have:
Test it through existing public methods. You really shouldn't test methods but behavior, which normally needs multiple methods anyway. So stop thinking about testing that method, but figure out the behavior that is not tested. If your class is well designed it should be easily testable.
Move the method into a new class. This is probably the best solution to your problem from a design perspective. If your code is so complex that you can't reach all the paths in that private method, parts of it should probably live in their own class. In that class they will have at least package scope and can easily be tested. Again: you should still test behavior not methods.
Use reflection. You can access private fields and methods using reflection. While this is technically possible, it just adds more legacy code on top of the existing legacy code in order to hide it. In the general case it is a rather unwise thing to do. There are exceptions, for example if for some reason you are not allowed to make even the smallest change to the production source code. If you really need this, google it.
Just change the visibility. Yes, it is bad practice. But sometimes the alternatives are: make large changes without tests, or don't test it at all. So sometimes it is OK to just bite the bullet and change the visibility, especially when it is the first step towards writing some tests and then extracting the behavior into its own class.
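A minimal sketch of option 2 above (move the method into a new class); the class names and the discount rule are invented for illustration:

```java
// Before: discounted() was a private method inside OrderService.
// After: it lives in its own class and can be unit tested directly.
class PriceCalculator {
    double discounted(double unitPrice, int quantity) {
        // Flat 5.0 off per unit on bulk orders (illustrative rule).
        return quantity >= 10 ? unitPrice - 5.0 : unitPrice;
    }
}

class OrderService {
    private final PriceCalculator calculator = new PriceCalculator();

    public double total(double unitPrice, int quantity) {
        return calculator.discounted(unitPrice, quantity) * quantity;
    }
}

public class ExtractClassDemo {
    public static void main(String[] args) {
        System.out.println(new PriceCalculator().discounted(100.0, 10)); // 95.0
        System.out.println(new OrderService().total(100.0, 2));          // 200.0
    }
}
```

The extracted class gets package scope (or better), so its paths can be exercised without going through OrderService's public API.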

Domain Driven Design - testability and the "new" keyword

I have been trying to follow a domain driven design approach in my new project. I have always generally used Spring for dependency injection, which nicely separates my application code from the construction code, however, with DDD I always seem to have one domain object wanting to create another domain object, both of which have state and behaviour.
For example, given a media file, we want to encode it to a different format - the media asset calls on a transcode service and receives a callback:
class MediaAsset implements TranscodingResultListener {

    private NetworkLocation permanentStorage;
    private Transcoder transcoder;

    public void transcodeTo(Format format) {
        transcoder.transcode(this, format);
    }

    public void onSuccessfulTranscode(TranscodeResult result) {
        Rendition rendition = new Rendition(this, result.getPath(), result.getFormat());
        rendition.moveTo(permanentStorage);
    }
}
Which throws two problems:
If the rendition needs some dependencies (like the MediaAsset requires a "Transcoder") and I want to use something like Spring to inject them, then I have to use AOP in order for my program to run, which I don't like.
If I want a unit test for MediaAsset that tests that a new format is moved to temporary storage, then how do I do that? I cannot mock the rendition class to verify that it had its method called... the real Rendition class will be created.
Having a factory to create this class is something that I've considered, but it is a lot of code overhead just to contain the "new" keyword which causes the problems.
Is there an approach here that I am missing, or am I just doing it all wrong?
I think that injecting a RenditionFactory is the right approach in this case. I know it requires extra work, but you also remove an SRP violation from your class. It is often tempting to construct objects inside business logic, but my experience is that injecting the object or an object factory pays off 99 times out of 100, especially if the object in question is complex and/or interacts with system resources.
I assume your approach for unit testing is to test the MediaAsset in isolation. Doing this, I think a factory is the common solution.
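A minimal sketch of that factory approach (hypothetical `RenditionFactory` interface and simplified types): the factory's only job is to own the `new` keyword, and the test substitutes it with one that hands back a recording Rendition.

```java
// Hypothetical factory interface whose only job is to own the "new" keyword.
interface RenditionFactory {
    Rendition create(MediaAsset asset, String path, String format);
}

class Rendition {
    Rendition(MediaAsset asset, String path, String format) { }

    void moveTo(String storage) {
        // would move the rendition file to the given storage
    }
}

class MediaAsset {
    private final String permanentStorage;
    private final RenditionFactory renditionFactory; // injected: no "new" in business logic

    MediaAsset(String permanentStorage, RenditionFactory renditionFactory) {
        this.permanentStorage = permanentStorage;
        this.renditionFactory = renditionFactory;
    }

    void onSuccessfulTranscode(String path, String format) {
        Rendition rendition = renditionFactory.create(this, path, format);
        rendition.moveTo(permanentStorage);
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        StringBuilder movedTo = new StringBuilder();

        // Test double: the factory returns a Rendition that records moveTo calls.
        RenditionFactory stubFactory = (asset, path, format) ->
            new Rendition(asset, path, format) {
                @Override
                void moveTo(String storage) { movedTo.append(storage); }
            };

        MediaAsset mediaAsset = new MediaAsset("/permanent", stubFactory);
        mediaAsset.onSuccessfulTranscode("/tmp/out.mp4", "mp4");

        System.out.println(movedTo); // prints /permanent
    }
}
```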
Another approach is to test the whole system (or almost the whole system). Let your test access the outer interface[1] (user interface, web service interface, etc) and create test doubles for all external systems that the system accesses (database, file system, external services, etc). Then let the test inject these external dependencies.
Doing this, you can let the tests be all about behaviour. The tests become decoupled from implementation details. For instance, you can use dependency injection for Rendition, or not: the tests don't care. Also, you might discover that MediaAsset and Rendition are not the correct concepts[2], and you might need to split MediaAsset in two and merge half of it with Rendition. Again, you can do it without worrying about the tests.
(Disclaimer: Testing on the outer level does not always work. Sometimes you need to test common concepts, which requires you to write micro tests. And then you might run into this problem again.)
[1] The best level might actually be a "domain interface", a level below the user interface where you can use the domain language instead of strings and integers, and where you can talk domain actions instead of button clicks and focus events.
[2] Perhaps this is actually your problem: Are MediaAsset and Rendition the correct concepts? If you ask your domain expert, does he know what these are? If not, are you really doing DDD?

Testing interactions with external services

prerequisites: I'm using the latest version of the Play! framework, and the Java version (not Scala).
I need to publish a message to a message queue when a user is created, and I'd like to test that behaviour. My issue is making this easily testable.
The Controller approach
In other frameworks, what I would've done would be to use constructor injection in the controller and pass a mocked queue into it in my tests; however, with Play! the controllers are static, which means I can't do new MyController(mockedQueue) in my tests.
I could use Google Guice and put an @Inject annotation on a static field in my controller, but that doesn't feel very nice to me, as it either means I have to make the field public to be replaced in the test, or I have to use a container in my tests. I'd much prefer to use constructor injection, but Play! doesn't seem to facilitate that.
The Model approach
It's often said your logic should be in your model, not your controller. That makes sense; however, we're not in Ruby here, and having your entities interact with external services (email, message queues, etc.) is considerably less testable than in a dynamic environment where you could just replace your MessageQueue static calls with a mocked instance at will.
If I make my entity call off to the queue, how is that testable?
Of course, both these situations are unnecessary if I do end-to-end integration tests, but I'd rather not need a message queue or SMTP server spun up for my tests to run.
So my question is: How do I model my Play! controllers and/or models to facilitate testing interactions with external services?
As I see it, there's no clean solution for this.
You could use an Abstract Factory for your dependencies. This factory could have setter methods for the objects it produces.
public class MyController {
    ...
    private static ServiceFactory serviceFactory = ServiceFactory.getInstance();
    ...
    public static void action() {
        ...
        QueueService queue = serviceFactory.getQueueService();
        ...
    }
}
Your test would look like this:
public void testAction() {
    QueueService mock = ...
    ...
    ServiceFactory serviceFactory = ServiceFactory.getInstance();
    serviceFactory.setQueueService(mock);
    ...
    MyController.action();
    verify(mock);
}
If you don't want to expose the setter methods of the factory, you could create an interface and configure the implementing class in your tests.
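That interface-based variant could look roughly like this (hypothetical names; the setter lives on a separate, test-visible holder rather than on the factory interface that production code depends on):

```java
// Hypothetical queue abstraction.
interface QueueService {
    void publish(String message);
}

// The factory interface production code depends on: no setters exposed here.
interface ServiceFactory {
    QueueService getQueueService();
}

// Holder that tests can repoint at a stub factory.
class ServiceFactoryHolder {
    private static ServiceFactory instance =
        () -> message -> { /* would talk to the real queue */ };

    static ServiceFactory getInstance() { return instance; }

    static void setInstance(ServiceFactory factory) { instance = factory; }
}

public class FactorySwapDemo {
    public static void main(String[] args) {
        StringBuilder published = new StringBuilder();

        // Test: install a factory that hands out a recording stub.
        ServiceFactoryHolder.setInstance(() -> published::append);

        ServiceFactoryHolder.getInstance().getQueueService().publish("user-created");
        System.out.println(published); // prints user-created
    }
}
```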
Another option would be the use of PowerMock for mocking static methods. I've used it before and it works reasonably well for most cases. Just don't overuse it, or you'll end up in maintenance hell...
And finally, since you're willing to use Guice in your application, this could be a viable option.
Good luck!
I'm a little confused. You can call a method of another class:
public class Users extends Controller {

    public static void save(@Valid User user) {
        // check for user validation
        user = user.save();
        QueueService queueService = new QueueService();
        queueService.publishMessage(user);
    }
}
You can write unit test cases for QueueService using a mock, and a functional test case for the Users controller's save method.
EDIT: extending the answer, as the previous version was not clear.
The first idea would be to add the reference to the Queue to the Model, as you have a POJO and access to the constructor. As you mention in the comments below, the Model approach is problematic when you consider Hibernate hydrating the entity, which would discard this.
The second approach would be to add this reference to the Queue to the Controller. Now, this seems like a bad idea. Besides the public member issue you mention, I believe that the idea behind the controller is to retrieve the parameters of the request, validate that they are correct (checkAuthenticity, validation, etc.), send the request on to be processed, and then prepare the response.
The key here is "send the request to be processed". In some cases we may do that work in the controller if it is simple, but in other cases it seems better to use a "Service" (for lack of a better term) which does the work you need with the given data.
I use this separation because, from a testing point of view, it's easier (for me) to test the controller via Selenium and write a separate test (using JUnit) for the service.
In your case this Service would include the reference to the Queue you mention.
As for initialization, that depends. You may create a singleton, initialize it every time via a constructor, etc. In your particular scenario this may depend on the work involved in initializing your queue service: if it's expensive, you may want a singleton with a factory method that retrieves the service (and can be mocked in testing), and pass that as a parameter to the constructor of the Service object.
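A minimal sketch of that Service layer (hypothetical `UserService`/`MessageQueue` names): the queue reference arrives through the constructor, so the controller wires in the real queue while a unit test passes a stub.

```java
// Hypothetical queue abstraction the Service depends on.
interface MessageQueue {
    void publish(String message);
}

// The "Service" the controller delegates to; it holds the queue reference.
class UserService {
    private final MessageQueue queue;

    UserService(MessageQueue queue) {
        this.queue = queue;
    }

    void createUser(String name) {
        // ...validate and persist the user here...
        queue.publish("user-created:" + name);
    }
}

public class ServiceLayerDemo {
    public static void main(String[] args) {
        StringBuilder sent = new StringBuilder();

        // Unit test: a recording stub replaces the real queue.
        UserService service = new UserService(sent::append);
        service.createUser("alice");

        System.out.println(sent); // prints user-created:alice
    }
}
```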
Hope this update clarifies more what I had in mind when I answered.
It is perhaps not what you are looking for, but in my current project we have solved that type of testing through integration tests and a JMS setup with a local queue and a messaging bridge.
In slightly more detail:
Your code always posts/reads messages to/from local queues, i.e. queues on your local app server (not on the external system).
A messaging bridge connects the local queue to the queue of the external service when needed, e.g. in production or in a manual testing environment.
An integration test creates the new user (or whatever you want to test), and then reads the expected message from the local queue. In this case, the messaging bridge is not active.
On my project, we use SoapUI to perform these tests as the system under test is a SOAP-based integration platform and SoapUI has good JMS support. But it could just as well be a plain JUnit test which performs the test and reads from the local JMS queue afterwards.
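The local-queue idea can be illustrated with a plain in-memory stand-in (simplified; a real setup would use an actual JMS queue on the app server, with the bridge switched off for tests):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simplified stand-in for a local JMS queue. In production a messaging
// bridge forwards its contents to the external system; in the test the
// bridge is inactive and the test reads the message back directly.
class LocalQueue {
    private final Queue<String> messages = new ArrayDeque<>();

    void post(String message) { messages.add(message); }

    String read() { return messages.poll(); } // null if the queue is empty
}

public class LocalQueueDemo {
    public static void main(String[] args) {
        LocalQueue localQueue = new LocalQueue();

        // The system under test posts to the local queue when a user is created.
        localQueue.post("user-created:bob");

        // The integration test then reads the expected message back.
        System.out.println(localQueue.read()); // prints user-created:bob
    }
}
```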
