How to mock a file with EasyMock? - java

I have recently been introduced to EasyMock and have been asked to develop some unit tests for a FileMonitor class using it. The FileMonitor class is based on a timed event that wakes up and checks for file modification(s) in a defined list of files and directories. I get how to do this using the actual file system: write a test that writes to a file and let the FileMonitor do its thing. So how do I do this using EasyMock? I just don't get how to have EasyMock mock the file system.
Thanks,
Todd

Something along the lines of:
import static org.easymock.classextension.EasyMock.*;
File testDir = createMock(File.class);
expect(testDir.lastModified()).andReturn(10L);
// more expectations
replay(testDir);
// create a FileMonitor watching testDir
// run the method which gets invoked by the trigger
verify(testDir);

Have a look at the excellent (and concise) user guide. You might reconsider using EasyMock though - most people are currently using or in the process of switching to the more advanced and more actively developed Mockito (inspired by EasyMock).
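If you do end up on Mockito, a minimal sketch of the same test setup (same hypothetical FileMonitor, Mockito syntax) might look like:

import static org.mockito.Mockito.*;
import java.io.File;

File testDir = mock(File.class);
when(testDir.lastModified()).thenReturn(10L);
// create a FileMonitor watching testDir
// run the method which gets invoked by the trigger
verify(testDir).lastModified();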

The basic technique for mocking is to introduce an interface (if the current design doesn't have one) that provides methods for the real service (the dependency) that is being mocked. The test is testing that the class under test interacts correctly with the dependency. Correctly here means that it does what you expect it to do. That does not mean it does the right thing, as the right thing can only be determined by an integration test that uses the real components (what you envision doing by creating a real file).
So you need to have a method on the class under test that lets you pass in an implementation of this interface. The most obvious is via the constructor. You have the production constructor which initializes the class with the real implementation of the interface that hits the real file system, and then under test you pass in the mock to the constructor.
In the test you run the methods on the class and assert that the interface was called in the way you expect.
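As a rough sketch of that shape - the FileSystem interface and its single method are invented here for illustration, not part of any real API:

import static org.easymock.EasyMock.*;
import static org.junit.Assert.assertTrue;

// The seam: an interface covering just the file-system calls FileMonitor needs.
public interface FileSystem {
    long lastModified(String path);
}

public class FileMonitor {
    private final FileSystem fileSystem;

    // Production code passes an implementation backed by java.io.File;
    // the test passes a mock.
    public FileMonitor(FileSystem fileSystem) {
        this.fileSystem = fileSystem;
    }

    public boolean hasChangedSince(String path, long timestamp) {
        return fileSystem.lastModified(path) > timestamp;
    }
}

// In the test:
FileSystem fs = createMock(FileSystem.class);
expect(fs.lastModified("/watched/file.txt")).andReturn(10L);
replay(fs);
assertTrue(new FileMonitor(fs).hasChangedSince("/watched/file.txt", 5L));
verify(fs);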
I will note that coming along after a class has already been written and unit testing it via mocks is of limited value, but it will help lock down behavior so that future changes to the class won't break expectations in surprising ways.
I hope that helps get you started.
Some mocking frameworks support mocking actual concrete classes, which can make a lot of sense in test-after unit tests (by intercepting calls to real classes, not just interfaces). I couldn't find out whether EasyMock lets you do that, but JDave is probably the place to go if you need that kind of functionality. It even lets you mock final classes.

I would put the actual call to the file system in its own package-private method. For testing, extend the class and override that method. That way you do not actually make a call to the file system.
EasyMock's class extension also has the ability to create partial mocks, but I'm not totally convinced by that approach.
http://easymock.org/EasyMock2_4_ClassExtension_Documentation.html
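A sketch of the subclass-and-override idea (class and method names invented for illustration):

public class FileMonitor {
    // Package-private seam: the only method that touches the real file system.
    long lastModified(File file) {
        return file.lastModified();
    }
    // ... the rest of the monitoring logic goes through lastModified(file) ...
}

// In the test source tree, same package:
class StubbedFileMonitor extends FileMonitor {
    long stamp;

    @Override
    long lastModified(File file) {
        return stamp; // no real file-system access
    }
}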

Related

Testing static methods of 3rd party Java SDKs

I've been working with Java, specifically in Android, for a few months now and I've found that working with PowerMockito is something I'd rather not do. The complexities of keeping it working have outweighed any benefit of it. I also think I'd agree with most of the comments I've read on Stack Overflow that say not to use PowerMockito, so please keep that in mind when answering my question. I am looking for guidance on testing without PowerMockito.
My question is: when writing code that interfaces with a 3rd party SDK that has some static method, how would you test it? Specifically, when it seems the only thing really worth testing is a behaviour, i.e. that the static method was called?
I can and do put these 3rd party services behind adapter classes, usually. And I can test that my adapter was called. But how do you live with never being able to test that the 3rd party itself was called, and maybe confirm which arguments it was called with? Is the only thing available in my toolbox to limit logic as much as possible, so that the untested area is less likely to fail?
When explaining this to someone coming from a dynamically typed language, would you just say that the test wasn't valuable? I'm thinking at this point that these kinds of tests are low value, but I can understand why others would want to test this kind of thing. It's the kind of test I've seen written a lot in Ruby projects I've worked on.
One thing I have done in the past in similar situations:
created a tiny wrapper interface and an impl class calling that static method, plus a test verifying that the wrapper is called
a single test case that invokes that impl class and thereby the real static method.
If one is "lucky", that call has an observable effect, for example some exception gets thrown (that is the problem with a lot of static code in my context - it simply breaks unless the whole stack is running). And then you check for that. But I also agree: there isn't much value in doing so. It proves correct plumbing, at the cost of being subject to change whenever the behavior of that static method changes.
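A sketch of that wrapper shape (ThirdPartySdk, SdkGateway and MyService are all invented names):

// The tiny wrapper interface and the impl class calling the static method:
public interface SdkGateway {
    void send(String payload);
}

public class RealSdkGateway implements SdkGateway {
    @Override
    public void send(String payload) {
        ThirdPartySdk.send(payload); // the 3rd-party static call
    }
}

// Unit tests mock the gateway and verify the interaction, e.g. with Mockito:
SdkGateway gateway = mock(SdkGateway.class);
new MyService(gateway).doWork();
verify(gateway).send("expected payload");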

How to mock, or otherwise test readPassword?

I'm developing a framework for simplifying the creation of console applications. My framework is written in Scala, and I'm using ScalaTest and Mockito for unit testing.
I need to be able to mock java.io.Console, but it's declared final. I'm trying to achieve 100% unit test coverage, and currently this is the only thing blocking me - in both functional and unit tests.
So far I've not been able to get very far with any solution, and I just can't think of a way of doing this. It doesn't implement an interface that I can mock, the method isn't available anywhere else, and obviously I can't extend it. I'm thinking perhaps there's a solution that could involve some sort of dynamic method of calling methods like readLine and readPassword, but I'm not experienced enough to get anywhere with that train of thought either!
You should create your own interface to wrap all interactions with java.io.Console, e.g.
public interface ConsoleService {
    ...
}
So long as you only interact with the console via an instance of ConsoleService, then you will be able to mock the ConsoleService and test 99% of your code as you normally would. The ConsoleService interface becomes the boundary of your application for both functional testing of the entire app and the unit tests of the classes that interact with it directly.
Now we have reduced the scope of the problem to "how do I test the ConsoleService implementation", and we need to get a little creative. For example, you could redirect Console output to a file and inspect the contents of the file. You might not even want to test the ConsoleService in Scala; you could write a skeleton application using the ConsoleService then use your scripting language of choice to start a real Console on your favourite OS, interact with your skeleton application and test the ConsoleService that way. You can get as creative (and hacky) as you like here because:
it only affects a small number of tests; and
your application will likely mature to a point where the ConsoleService implementation doesn't need to change very much, i.e. your wacky testing solution will not be a great burden on future developers.
For these reasons it should be obvious that it is a good idea to keep the ConsoleService wrapper very thin because any logic in there will be tested via the strange ConsoleService tests, not nice friendly Scala tests. Often direct delegation to java.io.Console methods is good enough, but you should allow your application's functional tests to drive out the ConsoleService interface rather than making any presumptions (your functional test assertions will likely rely on particular interactions with a mock ConsoleService, or perhaps on the state of a stub, test implementation of ConsoleService which you can control in the test).
Finally, you may decide that the ConsoleService wrapper is so thin that its implementation doesn't require any unit/functional tests at all. The implementation of ConsoleService will likely be so critical to your application that any defects will be exposed by integration tests or manual inspection of the app in UAT.
You might end up with something like this (apologies, I don't speak Scala so it's Java):
public class RealConsoleService implements ConsoleService {

    private final java.io.Console delegate;

    public RealConsoleService(java.io.Console delegate) {
        this.delegate = delegate;
    }

    @Override
    public String readLine() throws IOError {
        return delegate.readLine();
    }

    @Override
    public char[] readPassword() throws IOError {
        return delegate.readPassword();
    }
}
Two interesting points:
This is a great example of why test driven development helps write flexible code. If you wanted to rewrite your framework using another method of input and output, you would just rename ConsoleService to the more abstract ApplicationInputOutputService and plug in a different implementation.
The same concept can be used to test applications that use other difficult-to-test APIs. Many of Java's useful file IO methods are static methods and therefore difficult to control in tests. By wrapping in an interface as above, your application functionality becomes easy to test.
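As a concrete illustration, a class that reads credentials through the ConsoleService could be unit tested with Mockito roughly like this (LoginPrompt is an invented example, and ConsoleService is assumed to expose the readPassword() shown above):

import static org.mockito.Mockito.*;

ConsoleService console = mock(ConsoleService.class);
when(console.readPassword()).thenReturn("secret".toCharArray());

LoginPrompt prompt = new LoginPrompt(console);
prompt.login();

// The indirect input was requested via the wrapper, not java.io.Console.
verify(console).readPassword();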

Domain Driven Design - testability and the "new" keyword

I have been trying to follow a domain driven design approach in my new project. I have always generally used Spring for dependency injection, which nicely separates my application code from the construction code, however, with DDD I always seem to have one domain object wanting to create another domain object, both of which have state and behaviour.
For example, given a media file, we want to encode it to a different format - the media asset calls on a transcode service and receives a callback:
class MediaAsset implements TranscodingResultListener {

    private NetworkLocation permanentStorage;
    private Transcoder transcoder;

    public void transcodeTo(Format format) {
        transcoder.transcode(this, format);
    }

    public void onSuccessfulTranscode(TranscodeResult result) {
        Rendition rendition = new Rendition(this, result.getPath(), result.getFormat());
        rendition.moveTo(permanentStorage);
    }
}
Which throws two problems:
If the rendition needs some dependencies (like the MediaAsset requires a "Transcoder") and I want to use something like Spring to inject them, then I have to use AOP in order for my program to run, which I don't like.
If I want a unit test for MediaAsset that tests that a new format is moved to permanent storage, then how do I do that? I cannot mock the Rendition class to verify that it had its method called... the real Rendition class will be created.
Having a factory to create this class is something that I've considered, but it is a lot of code overhead just to contain the "new" keyword which causes the problems.
Is there an approach here that I am missing, or am I just doing it all wrong?
I think that the injection of a RenditionFactory is the right approach in this case. I know it requires extra work, but you also remove an SRP violation from your class. It is often tempting to construct objects inside business logic, but my experience is that injection of the object or an object factory pays off 99 out of 100 times. Especially if the object in question is complex, and/or if it interacts with system resources.
I assume your approach for unit testing is to test the MediaAsset in isolation. Doing this, I think a factory is the common solution.
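A minimal sketch of that factory seam (the interface is invented to match the snippet in the question; the path is assumed to be a String):

public interface RenditionFactory {
    Rendition create(MediaAsset asset, String path, Format format);
}

// MediaAsset receives a RenditionFactory via its constructor and uses it
// instead of "new":
public void onSuccessfulTranscode(TranscodeResult result) {
    Rendition rendition =
            renditionFactory.create(this, result.getPath(), result.getFormat());
    rendition.moveTo(permanentStorage);
}

// The unit test can now mock the factory, have it return a mock Rendition,
// and verify that moveTo(permanentStorage) was called on it.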
Another approach is to test the whole system (or almost the whole system). Let your test access the outer interface[1] (user interface, web service interface, etc) and create test doubles for all external systems that the system accesses (database, file system, external services, etc). Then let the test inject these external dependencies.
Doing this, you can let the tests be all about behaviour. The tests become decoupled from implementation details. For instance, you can use dependency injection for Rendition, or not: the tests don't care. Also, you might discover that MediaAsset and Rendition are not the correct concepts[2], and you might need to split MediaAsset in two and merge half of it with Rendition. Again, you can do it without worrying about the tests.
(Disclaimer: Testing on the outer level does not always work. Sometimes you need to test common concepts, which requires you to write micro tests. And then you might run into this problem again.)
[1] The best level might actually be a "domain interface", a level below the user interface where you can use the domain language instead of strings and integers, and where you can talk domain actions instead of button clicks and focus events.
[2] Perhaps this is actually your problem: Are MediaAsset and Rendition the correct concepts? If you ask your domain expert, does he know what these are? If not, are you really doing DDD?

When to use Mockito.verify()?

I write jUnit test cases for 3 purposes:
To ensure that my code satisfies all of the required functionality, under all (or most of) the input combinations/values.
To ensure that I can change the implementation, and rely on JUnit test cases to tell me that all my functionality is still satisfied.
As documentation of all the use cases my code handles, and to act as a spec for refactoring - should the code ever need to be rewritten. (Refactor the code, and if my jUnit tests fail, you probably missed some use case.)
I do not understand why or when Mockito.verify() should be used. When I see verify() being called, it is telling me that my jUnit is becoming aware of the implementation. (Thus changing my implementation would break my jUnits, even though my functionality was unaffected).
I'm looking for:
What should be the guidelines for appropriate usage of Mockito.verify()?
Is it fundamentally correct for jUnits to be aware of, or tightly coupled to, the implementation of the class under test?
If the contract of class A includes the fact that it calls method B of an object of type C, then you should test this by making a mock of type C, and verifying that method B has been called.
This implies that the contract of class A has sufficient detail that it talks about type C (which might be an interface or a class). So yes, we're talking about a level of specification that goes beyond just "system requirements", and goes some way to describing implementation.
This is normal for unit tests. When you are unit testing, you want to ensure that each unit is doing the "right thing", and that will usually include its interactions with other units. "Units" here might mean classes, or larger subsets of your application.
Update:
I feel that this doesn't apply just to verification, but to stubbing as well. As soon as you stub a method of a collaborator class, your unit test has become, in some sense, dependent on implementation. It's kind of in the nature of unit tests to be so. Since Mockito is as much about stubbing as it is about verification, the fact that you're using Mockito at all implies that you're going to run across this kind of dependency.
In my experience, if I change the implementation of a class, I often have to change the implementation of its unit tests to match. Typically, though, I won't have to change the inventory of what unit tests there are for the class; unless of course, the reason for the change was the existence of a condition that I failed to test earlier.
So this is what unit tests are about. A test that doesn't suffer from this kind of dependency on the way collaborator classes are used is really a sub-system test or an integration test. Of course, these are frequently written with JUnit too, and frequently involve the use of mocking. In my opinion, "JUnit" is a terrible name, for a product that lets us produce all different types of test.
David's answer is of course correct but doesn't quite explain why you would want this.
Basically, when unit testing you are testing a unit of functionality in isolation. You test whether the input produces the expected output. Sometimes, you have to test side effects as well. In a nutshell, verify allows you to do that.
For example, you have a bit of business logic that is supposed to store things using a DAO. You could do this using an integration test that instantiates the DAO, hooks it up to the business logic and then pokes around in the database to see if the expected stuff got stored. That's not a unit test any more.
Or, you could mock the DAO and verify that it gets called in the way you expect. With mockito you can verify that something is called, how often it is called, and even use matchers on the parameters to ensure it gets called in a particular way.
The flip side of unit testing like this is indeed that you are tying the tests to the implementation which makes refactoring a bit harder. On the other hand, a good design smell is the amount of code it takes to exercise it properly. If your tests need to be very long, probably something is wrong with the design. So code with a lot of side effects/complex interactions that need to be tested is probably not a good thing to have.
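A hedged sketch of that DAO example (UserDao, UserService and the method names are all invented):

import static org.mockito.Mockito.*;

UserDao dao = mock(UserDao.class);
UserService service = new UserService(dao);

service.register("alice");

// The side effect we care about: save() was called exactly once, with an
// argument matching our expectation (argThat takes a Mockito 2 ArgumentMatcher).
verify(dao, times(1)).save(argThat(user -> "alice".equals(user.getName())));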
This is a great question!
I think the root cause of it is the following: we are using JUnit not only for unit testing. So the question should be split up:
Should I use Mockito.verify() in my integration (or any other higher-than-unit) testing?
Should I use Mockito.verify() in my black-box unit-testing?
Should I use Mockito.verify() in my white-box unit-testing?
So if we ignore higher-than-unit testing, the question can be rephrased as: "Using white-box unit testing with Mockito.verify() creates tight coupling between the unit test and my implementation. Can I do some 'grey-box' unit testing, and what rules of thumb should I use for it?"
Now, let's go through all of this step-by-step.
Should I use Mockito.verify() in my integration (or any other higher-than-unit) testing?
I think the answer is clearly no; moreover, you shouldn't use mocks for this. Your test should be as close to the real application as possible. You are testing a complete use case, not an isolated part of the application.
Black-box vs. white-box unit testing
If you are using the black-box approach, what you are really doing is supplying input (all equivalence classes) and a state, and testing that you receive the expected output. In this approach, using mocks is generally justified (you just make them mimic doing the right thing; you don't want to test them), but calling Mockito.verify() is superfluous.
If you are using the white-box approach, what you are really doing is testing the behaviour of your unit. In this approach, calling Mockito.verify() is essential: you should verify that your unit behaves as you expect it to.
Rules of thumb for grey-box testing
The problem with white-box testing is that it creates high coupling. One possible solution is to do grey-box testing rather than white-box testing. This is a sort of combination of black- and white-box testing. You are really testing the behaviour of your unit, as in white-box testing, but in general you make the test implementation-agnostic where possible. Where it is possible, you just make a check as in the black-box case and simply assert that the output is what you expect it to be. So the essence of your question is when that is possible.
This is really hard. I don't have a good example, but I can give you two examples. In the case mentioned above with equals() vs equalsIgnoreCase(), you shouldn't call Mockito.verify(); just assert the output. If you can't do that, break your code down into smaller units until you can. On the other hand, suppose you have some @Service and you are writing a @WebService that is essentially a wrapper over your @Service - it delegates all calls to the @Service (and does some extra error handling). In this case calling Mockito.verify() is essential; you shouldn't duplicate all the checks you already made for the @Service. Verifying that you call the @Service with the correct parameter list is sufficient.
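To sketch that second example (OrderService and OrderWebService are invented names; the web service simply delegates to the service):

OrderService service = mock(OrderService.class);
OrderWebService webService = new OrderWebService(service);

webService.placeOrder("item-42", 3);

// Don't re-test OrderService's own logic here; just verify that the
// delegation happened with the correct parameter list.
verify(service).placeOrder("item-42", 3);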
I must say that you are absolutely right from the classical approach's point of view:
If you first create (or change) the business logic of your application and then cover it with (adapt) tests - the Test-Last approach - then it will be very painful and dangerous to let tests know anything about how your software works, other than checking inputs and outputs.
If you are practicing a Test-Driven approach, then your tests are the first to be written, to be changed and to reflect the use cases of your software's functionality. The implementation depends on the tests. That sometimes means that you want your software to be implemented in some particular way, e.g. to rely on some other component's method or even to call it a particular number of times. That is where Mockito.verify() comes in handy!
It is important to remember that there are no universal tools. The type of software, its size, company goals and market situation, team skills and many other things influence the decision on which approach to use in your particular case.
In most cases when people don't like using Mockito.verify, it is because it is used to verify everything that the tested unit is doing and that means you will need to adapt your test if anything changes in it.
But I don't think that is a problem. If you want to be able to change what a method does without needing to change its test, that basically means you want to write tests which don't test everything your method is doing, because you don't want them to test your changes. And that is the wrong way of thinking.
What really is a problem, is if you can modify what your method does and a unit test which is supposed to cover the functionality entirely doesn't fail. That would mean that whatever the intention of your change is, the result of your change isn't covered by the test.
Because of that, I prefer to mock as much as possible: also mock your data objects. When doing that you can not only use verify to check that the correct methods of other classes are called, but also that the data being passed is collected via the correct methods of those data objects. And to make it complete, you should test the order in which calls occur.
Example: if you modify a db entity object and then save it using a repository, it is not enough to verify that the setters of the object are called with the correct data and that the save method of the repository is called. If they are called in the wrong order, your method still doesn't do what it should do.
So I don't use Mockito.verify directly; I create an InOrder object with all the mocks and use inOrder.verify instead. And to make it complete, you should also call Mockito.verifyNoMoreInteractions at the end and pass it all the mocks. Otherwise someone can add new functionality/behavior without testing it, which would mean that after a while your coverage statistics can be 100% while you are piling up code which isn't asserted or verified.
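A sketch of that pattern (Entity, Repository and RenamingService are invented names):

import static org.mockito.Mockito.*;
import org.mockito.InOrder;

Entity entity = mock(Entity.class);
Repository repository = mock(Repository.class);
RenamingService service = new RenamingService(repository);

service.rename(entity, "new name");

// Verify both the calls and their order: modify the entity first, then save it.
InOrder inOrder = inOrder(entity, repository);
inOrder.verify(entity).setName("new name");
inOrder.verify(repository).save(entity);

// Fail if the unit did anything we did not explicitly verify.
verifyNoMoreInteractions(entity, repository);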
As some people have said:
Sometimes you don't have a direct output on which you can assert
Sometimes you just need to confirm that your tested method is sending the correct indirect outputs to its collaborators (which you are mocking).
Regarding your concern about breaking your tests when refactoring: that is somewhat expected when using mocks/stubs/spies. I mean that it follows by definition, not from a specific implementation such as Mockito.
But you could think of it this way: if you need to do a refactoring that would make major changes to the way your method works, it is a good idea to do it with a TDD approach, meaning you can change your test first to define the new behavior (which will fail the test), then make the changes and get the test passing again.

What is the difference between mocks and stubs (JMock)

What is the difference between mocks and stubs in jMock? Can I create both with jMock? How can I create stubs with it, and in what situations are they most appropriate? I believe stubs are used when I need to prepare some state for a test.
Thanks
Wikipedia has an article regarding mock objects, but the terminology is not explained as well as it could be. We used to make this distinction (which may be subject to discussion, of course):
Mocks and stubs both simulate an object which is required for testing a component.
The word "mock" is used when you want to assert that a specific kind of interaction between the tested component and the mocked object takes place. That's why mock frameworks (like EasyMock) provide methods to assert that all expected calls have actually been performed. E.g. you want to see that your service actually calls a (mocked) DAO. So this call is part of your test conditions / assertions.
The word "stub", however, is used when you are simply trying to provide an implementation which helps testing your component. What kind of interaction takes place does not matter; you just want the stub to fill in the gaps so you can test your component. Your focus lies on the tested component and what it does.
So it's just two words for the same thing, depending on what you are trying to achieve with it.
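In jMock 2 the difference shows up directly in the expectations DSL (Collaborator and its methods are invented for illustration):

Mockery context = new Mockery();
final Collaborator collaborator = context.mock(Collaborator.class);

context.checking(new Expectations() {{
    // Stub: a canned answer that fills in a gap; the call count is irrelevant.
    allowing(collaborator).currentRate();
    will(returnValue(42));

    // Mock: an interaction we assert must happen exactly once.
    oneOf(collaborator).applyRate(42);
}});

// ... exercise the component under test ...
context.assertIsSatisfied();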
Mocha is a traditional mocking library very much in the JMock mould. Stubba is a separate part of Mocha that allows mocking and stubbing of methods on real (non-mock) classes. It works by moving the method of interest to one side, adding a new stubbed version of the method which delegates to a traditional mock object. You can use this mock object to set up stubbed return values or set up expectations of methods to be called. After the test completes the stubbed version of the method is removed and replaced by the original.
For more detail, with examples, see:
http://jamesmead.org/blog/2006-09-11-the-difference-between-mocks-and-stubs
We usually make a distinction between queries and actions. Queries don't change the state of the world outside the mocked object - we can call them once or five times. They're like pre-conditions if you've done Design by Contract.
Actions change the outside world (e.g. subtract a value), and we specify mocks for those. It matters how many times we call a mock because the results will be different. These are like post-conditions.
Stub Queries, Mock Actions.
