How to mock, or otherwise test readPassword? - java

I'm developing a framework for simplifying the creation of console applications. My framework is written in Scala, and I'm using ScalaTest and Mockito for unit testing.
I need to be able to mock java.io.Console, but it's declared final. I'm trying to achieve 100% test coverage, and currently this is the only thing blocking me - in both functional and unit tests.
So far I haven't got anywhere with any solution; I just can't think of a way of doing this. It doesn't implement an interface that I can mock, the method isn't available anywhere else, and obviously I can't extend it. I'm thinking perhaps there's a solution that involves some sort of dynamic way of calling methods like readLine and readPassword, but I'm not experienced enough to get anywhere with that train of thought either!

You should create your own interface to wrap all interactions with java.io.Console, e.g.
public interface ConsoleService {
...
}
So long as you only interact with the console via an instance of ConsoleService, then you will be able to mock the ConsoleService and test 99% of your code as you normally would. The ConsoleService interface becomes the boundary of your application for both functional testing of the entire app and the unit tests of the classes that interact with it directly.
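For example (a sketch only - the exact method set should be driven out by your tests, as discussed below), the interface might expose just the operations your framework needs:
public interface ConsoleService {
    String readLine(String prompt);
    char[] readPassword(String prompt);
    void printLine(String message);
}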
Now we have reduced the scope of the problem to "how do I test the ConsoleService implementation", and we need to get a little creative. For example, you could redirect Console output to a file and inspect the contents of the file. You might not even want to test the ConsoleService in Scala; you could write a skeleton application using the ConsoleService then use your scripting language of choice to start a real Console on your favourite OS, interact with your skeleton application and test the ConsoleService that way. You can get as creative (and hacky) as you like here because:
it only affects a small number of tests; and
your application will likely mature to a point where the ConsoleService implementation doesn't need to change very much, i.e. your wacky testing solution will not be a great burden on future developers.
For these reasons it should be obvious that it is a good idea to keep the ConsoleService wrapper very thin because any logic in there will be tested via the strange ConsoleService tests, not nice friendly Scala tests. Often direct delegation to java.io.Console methods is good enough, but you should allow your application's functional tests to drive out the ConsoleService interface rather than making any presumptions (your functional test assertions will likely rely on particular interactions with a mock ConsoleService, or perhaps on the state of a stub, test implementation of ConsoleService which you can control in the test).
Finally, you may decide that the ConsoleService wrapper is so thin that its implementation doesn't require any unit/functional tests at all. The implementation of ConsoleService will likely be so critical to your application that any defects will be exposed by integration tests or manual inspection of the app in UAT.
You might end up with something like this (apologies, I don't speak Scala so it's Java):
public class RealConsoleService implements ConsoleService {
    private final java.io.Console delegate;

    public RealConsoleService(java.io.Console delegate) {
        this.delegate = delegate;
    }

    @Override
    public String readLine() throws IOError {
        return delegate.readLine();
    }
}
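The classes that depend on ConsoleService can then be unit tested with an ordinary Mockito mock. A sketch (LoginPrompt and its constructor are hypothetical stand-ins for whatever class of yours consumes the ConsoleService):
import static org.mockito.Mockito.*;

ConsoleService console = mock(ConsoleService.class);
when(console.readPassword("Password: ")).thenReturn("secret".toCharArray());

LoginPrompt prompt = new LoginPrompt(console); // hypothetical class under test
prompt.run();

verify(console).readPassword("Password: ");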
Two interesting points:
This is a great example of why test driven development helps write flexible code. If you wanted to rewrite your framework using another method of input and output, you would just rename ConsoleService to the more abstract ApplicationInputOutputService and plug in a different implementation.
The same concept can be used to test applications that use other difficult-to-test APIs. Many of Java's useful file IO methods are static methods and therefore difficult to control in tests. By wrapping in an interface as above, your application functionality becomes easy to test.
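For example (again just a sketch - FileService and its single method are illustrative), a static call such as Files.readAllLines can be hidden behind the same kind of seam:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public interface FileService {
    List<String> readAllLines(Path path) throws IOException;
}

// In a separate file: the thin production implementation.
public class RealFileService implements FileService {
    @Override
    public List<String> readAllLines(Path path) throws IOException {
        return Files.readAllLines(path); // direct delegation, no logic to test
    }
}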

Why do we not mock domain objects in unit tests?

I give you two tests, the sole purpose of which is to confirm that when service.doSomething is called, emailService.sendEmail is called with the person's email as a parameter.
@Mock
private EmailService emailService;
@InjectMocks
private Service service;
@Captor
private ArgumentCaptor<String> stringCaptor;
@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_NO_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    // There is only one way of building an Address and it requires all these fields
    final Address crowsNest = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    // There is only one way of building a Phone and it requires all these fields
    final Phone phone = new Phone("1", "978-281-2965");
    // There is only one way of building a Vessel and it requires all these fields
    final Vessel andreaGail = new Vessel("Andrea Gail", "Fishing", 92000);
    // There is only one way of building a Person and it requires all these fields
    final Person captain = new Person("Billy", "Tyne", email, crowsNest, phone, andreaGail);
    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else
    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}
@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_WITH_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    final Person captain = mock(Person.class);
    when(captain.getEmail()).thenReturn(email);
    service.doSomething(captain); // <-- This requires the person's email to be initialised, it doesn't care about anything else
    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}
Why is my team telling me not to mock the domain objects that are required to run my tests but are not part of the actual test? I am told mocks are for the dependencies of the tested service only. In my opinion, the resulting test code is leaner, cleaner and easier to understand. There is nothing to distract from the purpose of the test, which is to verify that the call to emailService.sendEmail occurs. This is something that I have heard and accepted as gospel for a long time, over many jobs, but I still cannot agree with it.
I think I understand your team's position.
They are probably saying that you should reserve mocks for things that have hard-to-instantiate dependencies. That includes repositories that make calls to a database, and other services that can potentially have their own rat's nest of dependencies. It doesn't include domain objects that can be instantiated (even if filling out all the constructor arguments is a pain).
If you mock the domain objects then the test doesn't give you any code coverage of them. I know I'd rather get these domain objects covered by tests of services, controllers, repositories, etc. as much as possible and minimize tests written just to exercise their getters and setters directly. That lets tests of domain objects focus on any actual business logic.
That does mean that if the domain object has an error then tests of multiple components can fail. I think that's ok. I would still have tests of the domain objects (because it's easier to test those in isolation than to make sure all paths are covered in a test of a service), but I don't want to depend entirely on the domain object tests to accurately reflect how those objects are used in the service, it seems like too much to ask.
You have a point that the mocks allow you to make the objects without filling in all their data (and I'm sure the real code can get a lot worse than what is posted). It's a trade-off, but having code coverage that includes the actual domain objects as well as the service under test seems like a bigger win to me.
It seems to me like your team has chosen to err on the side of pragmatism vs purity. If everybody else has arrived at this consensus you need to respect that. Some things are worth making waves over. This isn't one of them.
It is a tradeoff, and you have designed your example nicely to be 'on the edge'. Generally, mocking should be done for a reason. Good reasons are:
You cannot easily make the depended-on component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance intensive (like, need for external files) (* see below)
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build / execution times.
The DOC has stability (maturity) issues that make the tests unreliable, or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the abovementioned problems.
Why is it recommendable to avoid mocking where unnecessary?
For one thing, mocking increases test complexity.
Secondly, mocking makes your tests dependent on the inner workings of your code, namely, on how the code interacts with the DOCs (like, in your case, that the captain's email is obtained using getEmail, although possibly another way might exist to get that information).
And, as Nathan mentioned, it may be seen as a plus that - without mocking - DOCs are tested for free - although I would be careful here: There is a risk that your tests lose focus if you get tempted to also test the DOCs. The DOCs should have tests of their own.
Why is your scenario 'on the edge'?
One of the abovementioned good reasons for mocking is marked with (*): "The test setup is overly complex ...", and your example is constructed to have a test setup that is a bit complex. Complexity of the test setup is obviously not a hard criterion and developers will simply have to make a choice. If you want to look at it this way, you could say that either way has some risks when it comes to future maintenance scenarios.
Summarized, I would say that neither position (generally to mock or generally not to mock) is right. Instead, developers should understand the decision criteria and then apply them to the specific situation. And, when the scenario is in the grey zone such that the criteria don't lead to a clear decision, don't fight over it.
There are two mistakes here.
First, testing that when a service method is called, it delegates to another method. That is a bad specification. A service method should be specified in terms of the values it returns (for getters) or the values that could be subsequently got (for mutators) through that service interface. The service layer should be treated as a Facade. In general, few methods should be specified in terms of which methods they delegate to and when they delegate. The delegations are implementation details and so should not be tested.
Unfortunately, the popular mocking frameworks encourage this erroneous approach. And so does over zealous use of Behaviour Driven Development.
The second mistake is centered around the very concept of unit testing. We would like each of our unit tests to test one thing, so that when there is a fault in one thing, we have one test failure and locating the fault is easy. And we tend to think of "unit" as meaning the same as "method" or "class". This leads people to think that a unit test should involve only one real class, and all other classes should be mocked.
That is impossible for all but the simplest of classes. Almost all Java code uses classes from the standard library, such as String or HashSet. Most professional Java code uses classes from various frameworks, such as Spring. Nobody seriously suggests mocking those. We accept that those classes are trustworthy, and so do not need mocking. We accept that it is OK not to mock "trustworthy" classes that the code of our unit uses. But, you say, our classes are not trustworthy, so we must mock them. Not so. You can trust those other classes, by having good unit tests for them.
But how do you avoid a tangle of interdependent classes that causes a confusing mass of test failures when there is only one fault present? That would be a nightmare to debug! Use a concept from 1970s programming (called a virtual machine hierarchy, which is now a rather confusing term, given the additional meanings of virtual machine): arrange your software in layers from low level to high level, with higher layers performing operations using lower layers. Each layer provides a more expressive or advanced means of abstractly describing operations and objects. So domain objects are in a low level, and the service layer is at a higher level. When several tests fail, start debugging the lowest level test failure(s): the fault will probably be in that layer, possibly (but probably not) in a lower layer, and not in a higher layer.
Reserve mocks only for input and output interfaces that would make the tests very expensive to run (typically, this means mocking the repository layer and the logging interface).
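To make the first point concrete, here is a sketch (all names are hypothetical, including this simplified Person, which is not the one from the question; the repository is replaced by a simple in-memory fake at the boundary): the mutator is specified through what can subsequently be read back via the service facade, not through which methods it delegates to.
// Hypothetical: an in-memory fake stands in for the expensive repository boundary.
InMemoryPersonRepository repository = new InMemoryPersonRepository();
repository.save(new Person(42L, "Billy", "Tyne", "billy.tyne@myspace.com"));

PersonService service = new PersonService(repository);
service.changeEmail(42L, "captain@example.com");

// The specification: the new email can be observed through the service itself.
assertThat(service.getEmail(42L), equalTo("captain@example.com"));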
The intention of an automated test is to reveal that the intended behavior of some unit of software is no longer performing as expected (aka reveal bugs.)
The granularity/size/bounds of units under test in a given test suite is to be decided by you and your team.
Once that is decided, if something outside of that scope can be mocked without sacrificing the behavior being tested, then that means it is clearly irrelevant to the test, and it should be mocked. This will help with making your tests more:
Isolated
Fast
Readable (as you mentioned)
...and most importantly, when the test fails, it will reveal that the intended behavior of some unit of software is no longer performing as expected. Given a sufficiently small unit under test, it will be obvious where the bug has occurred and why.
If your test-without-mocks example were to fail, it could indicate an issue with Address, Phone, Vessel, or Person. This will cause wasted time tracking down exactly where the bug has occurred.
One thing I will mention is that your example with mocks is actually a bit less readable IMO, because you are asserting that the captured String equals the stubbed email value, but it is not obvious from the test why that should be the case.

Unit Testable convention for Service "Helper Classes" in DDD pattern

I'm fairly new to Java and joining a project that leverages the DDD pattern (supposedly). I come from a strong python background and am fairly anal about unit test driven design. That said, one of the challenges of moving to Java is the testability of Service layers.
Our REST-like project stack is laid out as follows:
ServiceHandlers, which handle request/response, etc. and call specific implementations of IService (e.g. DocumentService)
DocumentService - handles auditing, permission checking, etc. with methods such as makeOwner(session, user, doc)
Currently, something like DocumentService has repository dependencies injected via Guice. In a public method like DocumentService.makeOwner, we want to ensure the session user is an admin as well as check whether the target user is already an owner (leveraging the injected repositories). This results in some duplicated code - resolving each of the two users involved and checking membership, permissions, etc. To eliminate this redundant code, I want to make a sort of super simple isOwner(user, doc) call that I can concisely mock out for various test scenarios (such as throwing an exception when the user can't be resolved). Here is where my googling fails me.
If I put this in the same class as DocumentService, I can't mock it while testing makeOwner in the same class (due to Mockito limitations) even though it somewhat feels like it should go here (option1).
If I put it in a lower class like DocumentHelpers, it feels slightly funny but I can easily mock it out. Also, DocumentHelpers needs the injected repository as well, which is fine with guice. (option 2)
I should add that there are numerous spots of this nature in our infant code base that are currently untestable, because methods call helper-like methods (non-statically) in the same *Service class that aren't used by the upper ServiceHandler class. However, at this stage, I can't tell if this is poor design or just fine.
So I ask more experienced Java developers:
Does introducing "Service Helpers" seem like a valid solution?
Is this counter to DDD principles?
If not, is there are more DDD-friendly naming convention for this aside from "Helpers"?
3 bits to add:
My googling has mostly come up with debates over "helpers" as static utility methods for stateless operations like date formatting, which doesn't fit my issue.
I don't want to use PowerMock since it breaks code coverage and is pretty ugly to use.
In python I'd probably call the "Service Helper" layer described above as internal_api, but that seems to have a different meaning in Java, especially since I need the classes to be public to unit test them.
Any guidance is appreciated.
That the user who initiates the action must be an admin looks like an application-level access control concern. DDD doesn't have much of an opinion about how you should do that. For testability and separation-of-concerns purposes, though, it might be a better idea to have some kind of separate non-static class rather than a method in the same service or a static helper.
Checking that the future owner is already an owner (if I understand correctly) might be a different animal. It could be an invariant in your domain. If so, the preferred way is to rely on an Aggregate to enforce that rule. However, it's not clear from your description whether Document is an aggregate and if it or another aggregate contains the data needed to tell if a user is owner.
Alternatively, you could verify the rule at the Application layer level but it means that your domain model could go inconsistent if the state change is triggered by something else than that Application layer.
As I learn more about DDD, my question doesn't seem to be all that DDD related and more just about general hierarchy of the code structure and interactions of the layers. We ended up going with a separate DocumentServiceHelpers class that could be mocked out. This contains methods like isOwner that we can mock to return true or false as needed to test our DocumentService handling more easily. Thanks to everyone for playing along.
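A rough sketch of that shape (DocumentServiceHelpers, isOwner and the Guice injection come from the description above; every other name is illustrative):
public class DocumentServiceHelpers {
    private final UserRepository users; // injected via Guice

    @Inject
    public DocumentServiceHelpers(UserRepository users) {
        this.users = users;
    }

    public boolean isOwner(User user, Document doc) {
        // resolve the user and check membership/permissions via the repository
        return doc.getOwners().contains(users.resolve(user).getId());
    }
}

// In a DocumentService unit test the helper is simply mocked:
DocumentServiceHelpers helpers = mock(DocumentServiceHelpers.class);
when(helpers.isOwner(targetUser, doc)).thenReturn(true);
DocumentService service = new DocumentService(helpers /*, other mocked dependencies */);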

How to write unit tests for data access?

Unit tests have become increasingly important in modern software development, and I'm finding myself lost in the dust. Primarily a Java programmer, I understand the basics of unit tests: have methods in place that test fundamental operations in your application. I can implement this for the simple (and often used as examples) cases:
public int addNumbers(int a, int b) { return a + b; }
// Unit test for above
public boolean testAddNumbers() { return addNumbers(5, 10) == 15; }
What confuses me is how to move this out into practical application. After all, most simple functions are already in APIs or the JDK. A real-world situation that I deal with frequently in my job is data access, i.e. writing DAOs to work with a database. I can't write static tests like the example above, because pulling a record set from an Oracle box can return all manner of things. Writing a generalized unit test that just looks for a particular pattern in the return set seems too broad and unhelpful. Instead, I write no unit tests. This is bad.
Another example of a case where I don't know how to approach writing tests is web applications. My web applications are typically built on a J2EE stack, but they don't involve much logic. Generally it's delivering information from databases with little to no manipulation. Are these inappropriate targets for unit tests?
In short, I've found the vast majority of unit test examples to focus on test cases that are too simplistic and not relevant to what I do. I'm looking for any (preferably Java) examples/tips on writing unit tests for applications that move and display data, not perform logic on it.
You generally don't write unit tests for DAOs, but integration tests. These tests basically consist of:
setting the database to a well-known state, suitable for the test
calling the DAO method
verifying that the DAO returns the right data and/or changes the state of the database as expected.
Shameless plug: DbSetup is a good tool for the first part, but other tools exist, like DBUnit.
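A minimal sketch of such an integration test, seeding the state with plain JDBC rather than any particular library (the employee table, EmployeeDao and its methods are illustrative):
// Put the database into a well-known state.
try (Connection conn = dataSource.getConnection();
     Statement stmt = conn.createStatement()) {
    stmt.executeUpdate("DELETE FROM employee");
    stmt.executeUpdate("INSERT INTO employee (id, name, salary) VALUES (1, 'Alice', 1000)");
}

// Call the DAO method and verify what it returns.
EmployeeDao dao = new EmployeeDao(dataSource);
List<Employee> employees = dao.findAllEmployees();
assertEquals(1, employees.size());
assertEquals("Alice", employees.get(0).getName());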
To test the business logic of the app (complex or not, that doesn't change much), you typically mock the DAOs using a mocking framework like Mockito:
SomeDao mockDao = mock(SomeDao.class);
when(mockDao.findAllEmployees()).thenReturn(Arrays.asList(e1, e2, e3));
SomeService service = new SomeService(mockDao);
service.increaseSalaryOfAllEmployees(1000);
// TODO verify that e1, e2 and e3's salary is 1000 larger than before

What are the best practices for writing unit tests with Mock frameworks

I am new to mocking and I have googled a lot for best practices, but so far I couldn't find any satisfying resource, so I thought of asking on SO.
I have tried few test cases and had following doubts:
Do you write a separate unit test for each method (public, private, etc.) and mock the other method calls that are invoked inside the method, or do you test only public methods?
Is it okay to verify the invocation of a stubbed method at the end when testing a method that returns nothing, e.g. a DB insertion?
Please also add other practices that are a must to know.
There are many levels of testing. Unit testing is of a finer granularity than integration testing, which you should research separately. Regrettably this is still quite a young area of the software engineering industry, and as a result the terminology gets intermixed in ways that were never intended.
For unit testing you should write tests that determine whether the behaviour of the class meets expectations. Once you have all such tests, you should find any private methods are also tested as a consequence, so there is no need to test private methods directly. If you only test behaviour, you should find your tests never need to change even though the class under test may change over time - you may of course need to add tests to compensate, just never change the existing ones.
Each class, in a good design, should have minimal use of other classes (collaborators). Those collaborators that get mocked often implement infrastructure such as database access. Be wary of testing collaboration, as this is more closely associated with a larger system test - mocking collaborators gives your unit test knowledge not only of how the class behaves but also of how it operates, which is a different subject matter.
Sorry to be vague but you are embarking on a large topic and I wanted to be brief.
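That said, here is a brief sketch for your second question (the names are hypothetical): drive the test through the public method only, and for a method that returns nothing - like a DB insertion - verifying the call on the mocked repository at the boundary is a reasonable assertion.
// Only the public method is exercised; private helpers are covered indirectly.
OrderRepository repository = mock(OrderRepository.class);
OrderService service = new OrderService(repository);

service.placeOrder(new Order("book", 2));

// The method returns nothing, so the assertion is the interaction at the boundary.
verify(repository).insert(any(Order.class));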

How to mock a file with EasyMock?

I have recently been introduced to EasyMock and have been asked to develop some unit tests for a FileMonitor class using it. The FileMonitor class is based on a timed event that wakes up and checks for file modification(s) in a defined list of files and directories. I get how to do this using the actual file system, write a test that writes to a file and let the FileMonitor do its thing. So, how do I do this using EasyMock? I just don't get how to have EasyMock mock the file system.
Thanks,
Todd
Something along the lines of:
import static org.easymock.classextension.EasyMock.*;
File testDir = createMock(File.class);
expect(testDir.lastModified()).andReturn(10L);
// more expectations
replay(testDir);
// create a FileMonitor watching testDir
// run the method which gets invoked by the trigger
verify(testDir);
Have a look at the excellent (and concise) user guide. You might reconsider using EasyMock though - most people are currently using or in the process of switching to the more advanced and more actively developed Mockito (inspired by EasyMock).
The basic technique for mocking is to introduce an interface (if the current design doesn't have one) that provides methods for the real service (the dependency) that is being mocked. The test is testing that the class under test interacts correctly with the dependency. Correctly here means that it does what you expect it to do. That does not mean it does the right thing, as the right thing can only be determined by an integration test that uses the real components (what you envision doing by creating a real file).
So you need to have a method on the class under test that lets you pass in an implementation of this interface. The most obvious is via the constructor. You have the production constructor which initializes the class with the real implementation of the interface that hits the real file system, and then under test you pass in the mock to the constructor.
In the test you run the methods on the class and assert that the interface was called in the way you expect.
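For example (a sketch - the FileSystem interface, the FileMonitor constructor and poll() shown here are hypothetical, not part of your existing class):
// The seam: the monitor asks this interface instead of touching java.io.File directly.
public interface FileSystem {
    long lastModified(String path);
}

// In the test, mock the seam with the same EasyMock calls as above.
FileSystem fs = createMock(FileSystem.class);
expect(fs.lastModified("/watched/config.txt")).andReturn(10L);
replay(fs);

FileMonitor monitor = new FileMonitor(fs, "/watched/config.txt"); // hypothetical constructor
monitor.poll(); // invoke directly what the timer would normally trigger

verify(fs);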
I will note that coming along after a class is created and unit testing it via mocks is of limited value, but it will help lock down behavior so that future changes to the class won't break expectations in surprising ways.
I hope that helps get you started.
Some mocking frameworks support mocking actual concrete classes, which can make a lot of sense in test-after unit tests (by intercepting calls to real classes not just interfaces). I couldn't find if EasyMock lets you do that, but JDave is probably the place to go if you need that kind of functionality. It even lets you mock final classes.
I would put the actual call to the file system in its own package-private method. For testing, extend the class and override that method. That way you do not actually make a call to the file system.
EasyMock's class extension also has the possibility to create partial mocks, but I'm not totally convinced by that.
http://easymock.org/EasyMock2_4_ClassExtension_Documentation.html
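A sketch of the subclass-and-override approach described above (hasChanged and the field names are illustrative):
public class FileMonitor {
    // Package-private seam: the only place that touches the real file system.
    long lastModified(File file) {
        return file.lastModified();
    }

    public boolean hasChanged(File file, long previousTimestamp) {
        return lastModified(file) > previousTimestamp;
    }
}

// In a test (same package), override the seam so no real file system call is made:
FileMonitor monitor = new FileMonitor() {
    @Override
    long lastModified(File file) {
        return 10L;
    }
};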
