Prerequisites: I'm using the latest version of the Play! framework, and the Java version (not Scala).
I need to publish a message to a message queue when a user is created, and I'd like to test that behaviour. My issue is making this easily testable.
The Controller approach
In other frameworks, what I would've done is use constructor injection in the controller and pass in a mocked queue in my tests; however, with Play! the controllers are static, which means I can't do new MyController(mockedQueue) in my tests.
I could use Google Guice and put an @Inject annotation on a static field in my controller, but that doesn't feel very nice to me, as it either means I have to make the field public so it can be replaced in the test, or I have to use a container in my tests. I'd much prefer to use constructor injection, but Play! doesn't seem to facilitate that.
The Model approach
It's often said your logic should be in your model, not your controller. That makes sense; however, we're not in Ruby here, and having your entities interact with external services (email, message queues, etc.) is considerably less testable than in a dynamic environment where you could just replace your MessageQueue static calls with a mocked instance at will.
If I make my entity call off to the queue, how is that testable?
Of course, both these situations are unnecessary if I do end-to-end integration tests, but I'd rather not need a message queue or SMTP server spun up for my tests to run.
So my question is: How do I model my Play! controllers and/or models to facilitate testing interactions with external services?
As I see it, there's no clean solution for this.
You could use an Abstract Factory for your dependencies. This factory could have setter methods for the objects it produces.
public class MyController {
    ...
    private static ServiceFactory serviceFactory = ServiceFactory.getInstance();
    ...
    public static void action() {
        ...
        QueueService queue = serviceFactory.getQueueService();
        ...
    }
}
Your test would look like this:
public void testAction() {
    QueueService mock = ...
    ...
    ServiceFactory serviceFactory = ServiceFactory.getInstance();
    serviceFactory.setQueueService(mock);
    ...
    MyController.action();
    verify(mock);
}
If you don't want to expose the setter methods of the factory, you could create an interface and configure the implementing class in your tests.
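As a hedged sketch of this factory idea (all names here, like ServiceFactory and QueueService, are illustrative, not Play! APIs):

```java
// Sketch of the Abstract Factory approach described above. The factory
// produces the real service by default; tests swap in a mock via the setter.
interface QueueService {
    void publishMessage(Object message);
}

final class ServiceFactory {
    private static final ServiceFactory INSTANCE = new ServiceFactory();

    // Defaults to the real implementation.
    private QueueService queueService = message -> {
        // real message-queue interaction would go here
    };

    private ServiceFactory() {}

    public static ServiceFactory getInstance() {
        return INSTANCE;
    }

    public QueueService getQueueService() {
        return queueService;
    }

    // Exposed so tests can replace the produced service with a mock.
    public void setQueueService(QueueService queueService) {
        this.queueService = queueService;
    }
}
```

A test then calls setQueueService(mock) before invoking the controller action, exactly as in the testAction() example above.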
Another option would be the use of PowerMock for mocking static methods. I've used it before and it works relatively well for most cases. Just don't overuse it, or you'll be in maintenance hell...
And finally, since you're willing to use Guice in your application, that could be a viable option too.
Good luck!
I'm a little confused. You can call a method of another class:
public class Users extends Controller {
    public static void save(@Valid User user) {
        // check for user validation
        user = user.save();
        QueueService queueService = new QueueService();
        queueService.publishMessage(user);
    }
}
You can write unit test cases for QueueService using a mock, and write a functional test case for the Users controller's save method.
EDIT: extending the answer, as the previous version was not clear.
The first idea would be to add the reference to the Queue to the Model, as you have a POJO and access to the constructor. As you mention in the comments below, the Model approach is problematic when thinking on Hibernate hydrating the entity, which would discard this.
The second approach would be to add this reference to the Queue to the Controller. Now, this seems like a bad idea. Besides the public member issue you mention, I believe that the idea behind the controller is to retrieve the parameters of the request, validate that they are correct (checkAuthenticity, validation, etc.), send the request to be processed, and then prepare the response.
The "key" here is "send the request to be processed". In some cases we may do that work in the Controller if it is simple, but in other cases it seems better to use a "Service" (to call it somehow) in which you do the work you need with the given data.
I use this separation as from the point of view of testing it's easier (for me) to test the controller via Selenium and do a separate test (using JUnit) for the service.
In your case this Service would include the reference to the Queue you mention.
On how to initialize it, that will depend. You may create a singleton, initialize it every time via a constructor, etc. In your particular scenario this may depend on the work involved in initializing your queue service: if it's heavy, you may want a Singleton with a factory method that retrieves the service (and can be mocked in testing) and pass that as a parameter to the constructor of the Service object.
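For illustration, a Service shaped like this (all names are hypothetical) keeps the queue replaceable in tests via plain constructor injection:

```java
// Hypothetical Service from the answer above: the controller delegates here,
// and the queue dependency arrives through the constructor, so a test can
// pass in a stub or mock instead of a real queue.
interface MessageQueue {
    void publish(String message);
}

class UserService {
    private final MessageQueue queue;

    public UserService(MessageQueue queue) {
        this.queue = queue;
    }

    public void createUser(String name) {
        // ...persist the user here, then notify interested parties...
        queue.publish("user.created:" + name);
    }
}
```

A test can hand in a recording stub and assert on what was published, with no real queue involved.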
Hope this update clarifies more what I had in mind when I answered.
It is perhaps not what you are looking for, but in my current project we have solved that type of testing through integration tests and a JMS setup with a local queue and a messaging bridge.
In slightly more detail:
Your code always posts/reads messages to/from local queues, i.e. queues on your local app server (not on the external system).
A messaging bridge connects the local queue to the queue of the external service when needed, e.g. in production or in a manual testing environment.
An integration test creates the new user (or whatever you want to test), and then reads the expected message from the local queue. In this case, the messaging bridge is not active.
On my project, we use SoapUI to perform these tests as the system under test is a SOAP-based integration platform and SoapUI has good JMS support. But it could just as well be a plain JUnit test which performs the test and reads from the local JMS queue afterwards.
Related
I have seen people around me using Spring Mock MVC in unit tests for controller classes, which misses the point of what a unit test is for.
Unit tests are supposed to test your actual implementation of the controller class, and this can be achieved more accurately with simple JUnit tests than with Spring Mock MVC.
But then the question arises, what's the real usage of Spring Mock MVC then? What do you need it for?
Let's say I have below code :
@Controller
@RequestMapping("/systemusers")
public class SystemUserController
{
    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public String getUser(final Model model)
    {
        // some logic to return user's details as json
        return UserDetailsPage;
    }
}
I can test this class/controller more accurately with JUnit than with Spring Mock MVC (all it does is generate some JSON, which can be asserted with JUnit).
I can also test with Spring Mock MVC like using the correct endpoint returns the correct HTTP status and correct response page string.
But doesn't that mean that we are testing Spring MVC's functionality rather than the actual code for method under test?
P.S. : I have kept the code to minimal which I think is sufficient to explain my question. Assume there is no compilation error.
When it comes to the unit-testing of a Controller (or any Endpoint which is exposed) classes, there are two things that we will validate:
(1) Controller's actual logic itself as standalone i.e., right service calls are invoked or not etc..
(2) Request URL mapping and the Response status & object
Item (1) above is what we test in general with all other classes like Services, Utility classes etc..
Item (2) needs to be covered/tested additionally for the endpoints (controller classes) which have been exposed, so whether we use Spring's MockMVC or other machinery to do that, it is up to us really.
Spring's MockMvc helps us simulate the servlet environment in memory, check that the right controller methods are invoked, and check that the right responses come back.
From my personal experience, testing the controllers (for item (2)) helped me resolve URL mapping conflict issues (within the same controller, of course) straight away, rather than fixing them at later stages of the project.
Based on my experience, I will try to answer your question.
First we need to understand: why do we use unit testing?
It is an extra check used by the developer to write clean, working code.
Clean working code means every line written does what it is expected to do. How do you achieve this? Here comes unit testing: the standalone unit of code you are writing should be verified standalone, and a method is the best part of the code to treat as a standalone unit.
Unit Testing for Method
A developer should write a test for each method which describes the method's behavior. The checks I follow are: is it returning the expected value, considering all positive scenarios? Is it working in case of an exception? Is it calling the correct subsequent methods? The developer should verify the method by actually calling it in a mock environment.
Below is a possible answer to your question, though it ultimately depends on the developer.
Controller methods are intended to invoke the correct service call, accepting input and returning the value from the service to the view.
So writing a unit test case for a controller method, as you are thinking, is the right approach. But you should verify the method by calling it the same way it will be called at runtime, i.e. the same way MVC calls it. Hence it is the better option to use MockMvc.
MockMvc is also useful to verify the URLs, input parameters, response, and status, which are all part of the controller method. Considering all of these together makes it a standalone unit of code.
Hope this clarifies your doubt.
I came to wonder about this question while having some trouble writing unit tests for my Spring application.
Let's take the following example:
@SpringBootTest
@RunWith(SpringRunner.class)
public class AlbumDTOConverterTest {
    @Autowired
    private AlbumDTOConverter albumDTOConverter;

    @Test
    public void ToEntity_ReturnValue_NotNull() {
        AlbumDTO albumDTO = new AlbumDTO("Summer album", new Date(), "This summer, we have made some wonderful photos. Have a look!", null);
        assertNotNull(albumDTOConverter.toEntity(albumDTO));
    }
}
In order to make @Autowired work properly, I am launching a container by annotating the test class with @SpringBootTest.
The thing is, I think I am doing this wrong. In my opinion, I should rather just create a new instance of AlbumDTOConverter using the new operator instead of relying on Spring's dependency injection.
What do you guys think about this ?
For unit tests you don't need a whole container to start. By definition, such units should be tested in isolation. Creating an instance of a class with the new keyword is perfectly fine. Even if the class has dependencies on other classes, you can also create them manually and pass to an appropriate constructor of the class.
Note that a unit is not the same as a class. The term is commonly confused among developers, especially beginners. You don't have to rely on dependency injection in your unit tests. The container will only increase the time needed to execute the tests, and long execution time is the main reason developers avoid running them often. There is nothing wrong with manually building your dependency tree for a unit under test.
In the long run, creating similar inputs for different tests might lead to duplication in the test code, but fortunately there are best practices for this problem, e.g. shared fixture.
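As a sketch of the point above (the classes here are minimal stand-ins, not the asker's real code), the container-free test needs nothing but `new`:

```java
// Minimal stand-ins for the classes in the question, just to show the shape
// of a container-free unit test: construct with `new`, assert directly.
class AlbumDTO {
    final String title;
    AlbumDTO(String title) { this.title = title; }
}

class Album {
    final String title;
    Album(String title) { this.title = title; }
}

class AlbumDTOConverter {
    Album toEntity(AlbumDTO dto) {
        return new Album(dto.title);
    }
}
```

The test body then shrinks to assertNotNull(new AlbumDTOConverter().toEntity(new AlbumDTO("Summer album"))), with no Spring context involved at all.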
If you are doing unit tests, then you should not use @Autowired every time.
Unit test basics say: "Unit tests are responsible for testing a specific piece of code, just a small functionality (unit) of the code."
Now the question is: when should you use Spring's capabilities?
Sometimes you'll need unit tests that rely on the Spring framework, like a web service call, a repository call, etc. For example, if you have a repository with a custom query using the @Query annotation, you might need to test your query. Also, if you are serialising/deserialising objects, you'd want to make sure that your object mapping is working. You might want to test your controllers as well, when you have some parameter validation or error handling. How can you be sure that you are using Spring correctly? In these situations you can take advantage of the new Spring Boot test annotations.
I think this will give you a better idea.
I often find myself wondering what is the best practice for these problems. An example:
I have a Java program which should get the air temperature from a weather web service. I encapsulate this in a class which creates an HttpClient and does a GET REST request to the weather service. Writing a unit test for the class requires stubbing the HttpClient so that dummy data can be received instead. There are some options for how to implement this:
Dependency injection in the constructor. This breaks encapsulation. If we switch to a SOAP web service instead, then a SoapConnection has to be injected instead of an HttpClient.
Creating a setter only for the purpose of testing. The "normal" HttpClient is constructed by default, but it is also possible to change the HttpClient by using the setter.
Reflection. Having the HttpClient as a private field set by the constructor (but not taking it by parameter), and then let the test use reflection to change it into a stubbed one.
Package private. Lower the field restriction to make it accessible in test.
When trying to read about best practices on the subject it seems to me that the general consensus is that dependency injection is the preferred way, but I think the downside of breaking encapsulation is not given enough thought.
What do you think is the preferred way to make a class testable?
I believe the best way is through dependency injection, but not quite the way you describe. Instead of injecting an HttpClient directly, instead inject a WeatherStatusService (or some equivalent name). I would make this a simple interface with one method (in your use case) getWeatherStatus(). Then you can implement this interface with an HttpClientWeatherStatusService, and inject this at runtime. To unit test the core class, you have a choice of stubbing the interface yourself by implementing the WeatherStatusService with your own unit testing requirements, or using a mocking framework to mock the getWeatherStatus method. The main advantages of this way are that:
You don't break encapsulation (because changing to a SOAP implementation involves creating a SOAPWeatherStatusService and deleting the HttpClient handler).
You have broken your initial single class down, and now have two classes with a distinct purpose, one class explicitly handles retrieving the data from an API, the other class handles the core logic. This will probably be a flow like: Receive weather status request (from higher up) -> request data retrieval from api -> process/validate the returned data -> (optionally) store data or trigger other processes to operate on the data -> return the data.
You can re-use the WeatherStatusService implementation easily if a different use case emerges to utilise this data. (For example, perhaps you have one use case to store the weather conditions every 4 hours (to show the user an interactive map of the days' developments), and another use case to get the current weather. In this case, you need two different core logic requirements which both need to use the same API, so it makes sense to have the API access code consistent between these approaches).
This method is known as hexagonal/onion architecture which I recommend reading about here:
http://alistair.cockburn.us/Hexagonal+architecture
http://jeffreypalermo.com/blog/the-onion-architecture-part-1/
Or this post which sums the core ideas up:
http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html
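A minimal sketch of that split (simplified: getWeatherStatus returns a plain double here rather than a WeatherStatus object, and all names are illustrative):

```java
// The core logic depends only on this interface; the HttpClient-backed
// implementation lives in its own class and can be swapped for a stub.
interface WeatherStatusService {
    double getWeatherStatus(String location); // temperature in Celsius
}

// Example core class: no HTTP details leak in here, so a unit test can
// drive it with a lambda stub instead of a mocked HttpClient.
class WeatherReporter {
    private final WeatherStatusService service;

    WeatherReporter(WeatherStatusService service) {
        this.service = service;
    }

    String describe(String location) {
        double celsius = service.getWeatherStatus(location);
        return celsius >= 25.0 ? "hot" : "mild";
    }
}
```

Because WeatherStatusService is a single-method interface, the test stub can be as small as `location -> 28.5`.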
EDIT:
Further to your comments:
What about testing the HttpClientWeatherStatus? Ignore unit testing or else we have to find a way to mock HttpClient there?
With the HttpClientWeatherStatusService class, it should ideally be immutable, so the HttpClient dependency is injected into the constructor on creation. This makes unit testing easy, because you can mock the HttpClient and prevent any interaction with the outside world. For example:
public class HttpClientWeatherStatusService implements WeatherStatusService {

    private final HttpClient httpClient;

    public HttpClientWeatherStatusService(HttpClient httpClient) {
        this.httpClient = httpClient;
    }

    public WeatherStatus getWeatherStatus(String location) {
        // Setup request.
        // Make request with the injected httpClient.
        // Parse response.
        return new WeatherStatus(temperature, humidity, weatherType);
    }
}
Where the returned WeatherStatus 'Event' is:
public class WeatherStatus {
    private final float temperature;
    private final float humidity;
    private final String weatherType;
    // Constructor and getters.
}
Then the tests look something like this:
public class WeatherStatusServiceTests {

    @Test
    public void givenALocation_WhenAWeatherStatusRequestIsMade_ThenTheCorrectStatusForThatLocationIsReturned() {
        // SETUP TEST.
        // Create httpClient mock.
        String location = "The World";
        // Create expected response.
        // Expect request containing location, return response.
        WeatherStatusService service = new HttpClientWeatherStatusService(httpClient);
        // Replay mock.

        // RUN TEST.
        WeatherStatus status = service.getWeatherStatus(location);

        // VERIFY TEST.
        // Assert status contains correctly parsed response.
    }
}
You will generally find that there will be very few conditionals and loops in the integration layers (because these constructs represent logic, and all logic should be in the core). Because of this (specifically because there will only be a single conditional branching path in the calling code), some people would argue that there is little point unit testing this class, and that it can be covered by an integration test just as easily, and in a less brittle way. I understand this viewpoint, and don't have a problem with skipping unit tests in the integration layers, but personally I would unit test it anyway. This is because I believe unit tests in an integration domain still help me ensure that my class is highly usable, and portable/re-usable (if it's easy to test, then it's easy to use from elsewhere in the codebase). I also use unit tests as documentation detailing the use of the class, with the advantage that any CI server will alert me when the documentation is out of date.
Isn't it bloating the code for a small problem which could have been "fixed" by just some lines using reflection or simply changing to package private field access?
The fact that you put "fixed" in quotes speaks volumes about how valid you think such a solution would be. ;) I agree that there is definitely some bloat to the code, and this can be disconcerting at first. But the real point is to make a maintainable codebase which is easy to develop for. I think some projects start fast because they "fix" problems by using hacks and dodgy coding practices to maintain the pace. Often productivity grinds to a halt as the overwhelming technical debt renders changes which should be one liners into mammoth re-factors which take weeks or even months.
Once you have a project set up in a hexagonal way, the real payoffs come when you need to do one of the following:
Change the technology stack of one of your integration layers. (e.g. from mysql to postgres). In this case (as touched on above), you simply implement a new persistence layer making sure you use all the relevant interfaces from the binding/event/adapter layer. There should be no need to change core code or the interface. Finally delete the old layer, and inject the new layer in place.
Add a new feature. Often integration layers will already exist, and may not even need modification to be used. In the example of the getCurrentWeather() and store4HourlyWeather() use-cases above. Let's assume you've already implemented the store4HourlyWeather() functionality using the class outlined above. To create this new functionality (let's assume the process begins with a restful request), you need to make three new files. You need a new class in your web layer to handle the initial request, you need a new class in your core layer to represent the user story of getCurrentWeather(), and you need an interface in your binding/event/adaptor layer which the core class implements, and the web class has injected to its constructor. Now on the one hand, yes, you've created 3 files when it would have been possible to create only one file, or even just tack it onto an existing restful web handler. Of course you could, and in this simple example that would work fine. It is only over time that the distinction between layers become obvious and refactors become hard. Consider in the case where you tack it onto an existing class, that class no longer has an obvious single purpose. What will you call it? How will anyone know to look in it for this code? How complicated is your test set-up becoming so that you can test this class now that there are more dependencies to mock?
Update integration layer changes. Following on from the example above, if the weather service API (where you are getting your information from) changes, there is only one place where you need to make changes in your program to be compatible with the new API again. This is the only place in the code which knows where the data actually comes from, so it's the only place which needs changing.
Introduce the project to a new team member. Arguable point, since any well laid out project will be fairly easy to understand, but my experience so far has been that most code looks simple and understandable. It achieves one thing, and it's very good at achieving that one thing. Understanding where to look (for example) for Amazon-S3 related code is obvious because there is an entire layer devoted to interacting with it, and this layer will have no code in it relating to other integration concerns.
Fix bugs. Linked to the above, often reproducibility is the biggest step towards a fix. The advantage of all the integration layers being immutable, independent, and accepting clear parameters, is that it is easy to isolate a single failing layer and modify the parameters until it fails. (Although again, well designed code will do this well too).
I hope I've answered your questions, let me know if you have more. :) Perhaps I will look into creating a sample hexagonal project over the weekend and linking to it here to demonstrate my point more clearly.
The preferable way should favor proper encapsulation and other object-oriented design qualities, while keeping the code under test simple. So, my recommended approach would be to:
Think of a good public API for the desired class (let's call it AirTemperatureMeasurement) that fits with the system architecture.
Write a unit test for it (which fails at this point, as the class is not implemented yet). The unit test will have to mock whatever dependency makes the call to the external web service.
Implement the class under test with the simplest solution which passes the test.
Repeat the previous steps, while looking for opportunities to simplify the code and remove duplication.
For example, here is a possible detailed solution:
Step 1:
public final class AirTemperatureMeasurement {
    public double getCelsius() { return 0; }
}
Step 2:
public final class AirTemperatureMeasurementTest {
    @Tested AirTemperatureMeasurement cut;
    @Capturing HttpClient anyHttpClient;

    @Test // a white-box test
    public void readAirTemperatureInCelsius() {
        final HttpResponse response = ...suitable response...

        new Expectations() {{
            anyHttpClient.request((HttpUriRequest) any);
            result = response;
        }};

        double airTemperatureInCelsius = cut.getCelsius();

        assertEquals(28.5, airTemperatureInCelsius, 0.0);
    }
}
Step 3:
public final class AirTemperatureMeasurement {
    public double getCelsius() {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        // Rest omitted for brevity.
        return airTemperatureInCelsius;
    }
}
The above uses the JMockit mocking library, but PowerMock would be an option too.
I would recommend using java.net.URL (if possible) instead of Apache's HttpClient, though; it would simplify both production and test code.
I have been trying to follow a domain driven design approach in my new project. I have always generally used Spring for dependency injection, which nicely separates my application code from the construction code, however, with DDD I always seem to have one domain object wanting to create another domain object, both of which have state and behaviour.
For example, given a media file, we want to encode it to a different format - the media asset calls on a transcode service and receives a callback:
class MediaAsset implements TranscodingResultListener {

    private NetworkLocation permanentStorage;
    private Transcoder transcoder;

    public void transcodeTo(Format format) {
        transcoder.transcode(this, format);
    }

    public void onSuccessfulTranscode(TranscodeResult result) {
        Rendition rendition = new Rendition(this, result.getPath(), result.getFormat());
        rendition.moveTo(permanentStorage);
    }
}
Which throws two problems:
If the rendition needs some dependencies (like the MediaAsset requires a "Transcoder") and I want to use something like Spring to inject them, then I have to use AOP in order for my program to run, which I don't like.
If I want a unit test for MediaAsset that tests that a new format is moved to temporary storage, then how do I do that? I cannot mock the rendition class to verify that it had its method called... the real Rendition class will be created.
Having a factory to create this class is something that I've considered, but it is a lot of code overhead just to contain the "new" keyword which causes the problems.
Is there an approach here that I am missing, or am I just doing it all wrong?
I think that the injection of a RenditionFactory is the right approach in this case. I know it requires extra work, but you also remove an SRP violation from your class. It is often tempting to construct objects inside business logic, but my experience is that injecting the object or an object factory pays off 99 times out of 100, especially if the object in question is complex and/or interacts with system resources.
I assume your approach for unit testing is to test the MediaAsset in isolation. Doing this, I think a factory is the common solution.
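A hedged sketch of that factory injection (class names follow the question, but the signatures here are simplified assumptions):

```java
// With the factory injected, MediaAsset never calls `new Rendition` itself,
// so a test can supply a factory that returns a recording fake.
interface Rendition {
    void moveTo(String networkLocation);
}

interface RenditionFactory {
    Rendition create(String path, String format);
}

class MediaAsset {
    private final String permanentStorage;
    private final RenditionFactory renditionFactory;

    MediaAsset(String permanentStorage, RenditionFactory renditionFactory) {
        this.permanentStorage = permanentStorage;
        this.renditionFactory = renditionFactory;
    }

    public void onSuccessfulTranscode(String path, String format) {
        Rendition rendition = renditionFactory.create(path, format);
        rendition.moveTo(permanentStorage);
    }
}
```

In a test, the injected factory returns a fake whose moveTo records its argument, which answers the "I cannot mock the Rendition class" problem directly.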
Another approach is to test the whole system (or almost the whole system). Let your test access the outer interface[1] (user interface, web service interface, etc) and create test doubles for all external systems that the system accesses (database, file system, external services, etc). Then let the test inject these external dependencies.
Doing this, you can let the tests be all about behaviour. The tests become decoupled from implementation details. For instance, you can use dependency injection for Rendition, or not: the tests don't care. Also, you might discover that MediaAsset and Rendition are not the correct concepts[2], and you might need to split MediaAsset in two and merge half of it with Rendition. Again, you can do it without worrying about the tests.
(Disclaimer: Testing on the outer level does not always work. Sometimes you need to test common concepts, which requires you to write micro tests. And then you might run into this problem again.)
[1] The best level might actually be a "domain interface", a level below the user interface where you can use the domain language instead of strings and integers, and where you can talk domain actions instead of button clicks and focus events.
[2] Perhaps this is actually your problem: Are MediaAsset and Rendition the correct concepts? If you ask your domain expert, does he know what these are? If not, are you really doing DDD?
Although I have tagged this as a Java/Spring question, it can easily be asked of any MVC framework with stateless controllers. I know Rails uses simple stateless controllers, for instance, so perhaps you know how best to solve this problem. I can best describe the problem with Java/Spring MVC, which is the implementation; forgive the Java jargon.
The issue
We are having trouble coming up with a satisfactory way of performing stateless-to-stateful handover in spring mvc.
In essence given a structure like:
Model: Unit
With the states: withdrawn, available, unavailable
And the operations: getOutline() and getHelp()
Controller: UnitController
with operations: displayOutline() and displayHelp()
We need a way to check the state of the unit before we execute the operation displayOutline() (because the unit itself may be withdrawn and so the user should be forwarded to a withdrawn page).
We have tried to do this a number of ways including:
The dead simple way (any language)
All methods in the controller that require an 'available' state unit call a method isAvailable() in the first line of their implementation. Obviously there is lots of replication here; it reeks.
The AOP way (Java specific)
An @Around advice can be created, called UnitAccess, which does the check and reroutes the control flow (i.e. instead of calling proceed(), which would invoke the underlying method, it calls another method on the controller). This seems like a hack and not really what AOP is for; it does remove the replication, but adds complexity and reduces transparency.
An Interceptor (Provided by servlet architecture but probably doable in other frameworks)
Which checks the unit state and essentially changes the actual URL call. Again, this does not seem right: we don't like the idea of invoking model logic before getting to a controller.
We have thought about
Command Pattern
Creating a command pattern structure which (with the use of inheritance) can return a withdrawn view or a valid displayOutline view, as the execute method will perform the checks in a superclass call, with the specific logic inside the concrete commands, i.e. creating an object structure like:
class DisplayOutlineCommand extends UnitCommand {
    public void execute() {
        super.execute(); // performs the availability checks
        // must be ok, perform getOutline()
    }
}
And finally, using a custom Exception
Calling getAvailableUnit() on a service-level object which will do the checks for availability, etc. before returning the unit. If the unit is withdrawn, then it will throw a UnitWithdrawnException, which could be caught by the servlet and handled by returning an appropriate view. We're still not convinced; we are also not hot on the idea of using an exception for normal flow control.
Are we missing something? Is there an easy way to do this under spring/another stateless controller framework?
Maybe I'm missing the point, but why should a user come to the controller if the Unit is withdrawn?
I would argue it is best to ensure that normally pages don't link to a controller that require the Unit to be 'OK', if that Unit is not 'OK'.
If the state of the Unit changes between the time the referring page is rendered and the actual call comes in to the controller (it is not longer 'OK'), then the use of an exception to handle that event seems perfectly fine to me (like having an exception when an optimistic locking error occurs).
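A minimal sketch of that exception path (names follow the question; the service and controller internals are assumed):

```java
// The service throws when the unit is withdrawn; the caller maps the
// exception to the "withdrawn" view instead of checking state everywhere.
class UnitWithdrawnException extends Exception {}

class UnitService {
    private final boolean withdrawn;

    UnitService(boolean withdrawn) { this.withdrawn = withdrawn; }

    String getAvailableUnit() throws UnitWithdrawnException {
        if (withdrawn) throw new UnitWithdrawnException();
        return "unit";
    }
}

class UnitController {
    String displayOutline(UnitService service) {
        try {
            service.getAvailableUnit();
            return "outlineView";
        } catch (UnitWithdrawnException e) {
            return "withdrawnView";
        }
    }
}
```

In a real Spring app the catch block would typically live in a shared exception handler rather than in each controller method, which keeps the per-method replication away.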
Perhaps you haven't described the whole problem, but why not put the check in displayOutline() itself? Perhaps route to a displayOutlineOrHelp() method, which looks essentially like:
ModelAndView displayOutlineOrHelp(...) {
    Unit unit = ...; // code to get the unit the request refers to
    return unit.isAvailable() ? displayOutline(...) : displayHelp(...);
}