Unit Test Practice About Field Accessibility - java

I have a tree data structure, for example:
public class Tree {
    class Node {
        // stuff...
    }
    private Node root;
    // ...
}
I'm using JUnit 4. In my unit test, I'd like to run some sanity checks where I need to traverse the tree (e.g. check that the binary search tree properties are preserved). But since the root is kept private, I can't traverse it from outside the class.
Some things I can think of are:
A getter for root can prevent the reference itself from being changed, but external code may still change fields inside root.
A test is not part of the data structure itself, so I don't really want to put it in the Tree class. And even if I did, I would have to delete it after the test is done.
Please tell me the right thing to do, thanks.

There are multiple things you can do; a few options are:
Make a public getter (but this breaks encapsulation, and should only be considered if, for some odd reason, you cannot put the tests in the same package)
Write a package-private getter and place the tests in the same package (probably the best solution)
Use reflection to gain access to the value (not recommended)
I would personally go with option 2 (but please see my last paragraph for my recommended answer, since ideally I would not do any of the above). This is what we used in industry to test things we can't normally access. It is minimally invasive: it doesn't break encapsulation like a public getter does, and it doesn't require intrusive reflection code.
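As a minimal sketch of option 2 (assuming Node has package-visible value, left and right fields, Tree has an insert method, and the accessor name is invented for illustration), the getter stays package-private and the JUnit 4 test lives in the same package:

// In Tree.java: a package-private accessor, invisible outside the package.
Node getRootForTesting() {
    return root;
}

// In TreeTest.java, placed in the same package as Tree:
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class TreeTest {

    @Test
    public void insertKeepsBinarySearchTreeProperty() {
        Tree tree = new Tree();
        tree.insert(5);
        tree.insert(3);
        tree.insert(8);
        assertTrue(isBst(tree.getRootForTesting(), Integer.MIN_VALUE, Integer.MAX_VALUE));
    }

    // Every node's value must lie strictly between min and max.
    private boolean isBst(Tree.Node node, int min, int max) {
        if (node == null) {
            return true;
        }
        return node.value > min && node.value < max
                && isBst(node.left, min, node.value)
                && isBst(node.right, node.value, max);
    }
}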
As a discussion of why we didn't use (3): I was on a project once where the lead developer decided to do exactly this in all his unit tests. There were tens of thousands of unit tests, all using reflection to verify things. The speed-up we got from converting them away from a reflection-assisting library gave us noticeably snappier feedback when we ran the test suite.
Further, you should ask yourself whether you need such tests at all. While testing is nice, ideally you should be unit testing the interface of your class, not reaching into its guts to assert that everything works. Doing the latter makes refactoring very painful, because you invalidate a bunch of tests whenever you touch anything. Therefore I recommend testing only the public methods, and being very rigorous in those tests.

Related

Using application classes as utilities in unit tests

This is a question about best practices for unit tests. Let's say I have a ClassA that does some object transforms (from CSVs to Java beans, using external library). Then I have ClassB that needs these transformed Java beans to execute a calculation.
When writing a unit test for ClassB, is it acceptable to use ClassA's transform method to get these transformed bean objects? In other words, to use an application class as a utility in a unit test.
The other option would be to write a test utility method containing the same code as ClassA (the transformation is done using an external library, so it's easy to use it in both the application class and a test utility class).
You are asking about best practices, so I will not answer for your particular example (for which others have already given you some hints), but will look at the general question: is it acceptable to use application classes as utility classes within test code?
There can be situations where this is the design alternative which is the best tradeoff between all goals, but it depends on the specific situation. The competing goals are the following:
Avoiding code duplication, test code readability: Creating complex objects from test code can make the test code complex and maintenance intensive. Re-using other application code to create these objects can simplify the test code, which otherwise would somehow duplicate the code for object creation.
Trustworthiness of the test code: If your test code depends on other components, the test results become less trustworthy - failing tests may be due to the other components, passing tests can be due to the created objects having different properties than intended in the test. Note that the risk of this happening can be reduced if the functions used as utilities are themselves thoroughly tested.
Completeness of the test suite: When using utility functions to create input objects there is a risk that only those input objects are used during tests that the utility functions are able to create. That is, if the utility functions will never create malformed objects, then you might limit your testing such that you never test your SUT with malformed objects. If you are aware of this risk, you can certainly still have some additional tests where malformed objects are used - in these tests you can, however, not use the utility functions.
Complexity of the test setup, execution time etc.: If the utility functions themselves have further dependencies, or they compile, link or execute slowly, they bring all these disadvantages into the test code. Thus, only certain application functions will be suitable to be used in test code.
An example that comes to my mind is testing some function that receives a parse tree of an algebraic expression as input and shall evaluate that expression. Assume that the same application also has a parse function that creates parse trees from strings.
Using the parse function within unit tests of the evaluation function would IMO be a good option here: the expression strings would be nicely readable in the unit tests, and the parse function will likely only have dependencies that the test code would otherwise need by itself anyway (such as the classes for nodes and edges of the parse tree). The parsing can be expected to have no significant impact on test performance. Plus, if the parse function is well tested, its use within the test code will not hurt trustworthiness in a significant way. It may nevertheless still be necessary to have some extra tests where malformed parse trees are intentionally created to test the behaviour of the evaluation function in such cases.
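A minimal sketch of such a test, under the assumption that the application exposes something like Parser.parse(String) returning a ParseTree and Evaluator.evaluate(ParseTree) returning a double (all names are invented for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class EvaluatorTest {

    @Test
    public void respectsOperatorPrecedence() {
        // Re-using the application's own parser keeps the test readable:
        // the input is a plain string rather than a hand-built tree.
        ParseTree tree = new Parser().parse("2 + 3 * 4");
        assertEquals(14.0, new Evaluator().evaluate(tree), 1e-9);
    }
}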
When you unit test a method:
you prepare the data (mocks, hard-coded values, etc.),
you call the method under test,
you evaluate the result.
What you do not do is call other things, such as utility methods.
Why?
Because that means that your code has dependencies on other things and it does not run in isolation.
It also means that your test has too much knowledge about the code, and that's a bad thing, especially when you start refactoring. Too much knowledge about the code means that as soon as you change the code you will break most tests, which defeats one of their purposes: to make refactoring (and developers' lives) easier.

Unit testing methods which do not produce a distinct output

In my Vaadin GUI application, there are many methods that look like the ones below.
@Override
protected void loadLayout() {
    CssLayout statusLayout = new CssLayout();
    statusLayout.addComponent(connectedTextLabel);
    statusLayout.addComponent(connectedCountLabel);
    statusLayout.addComponent(notConnectedTextLabel);
    statusLayout.addComponent(notConnectedCountLabel);
    connectionsTable.getCustomHeaderLayout().addComponent(statusLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(commandLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(historyViewCheckbox);
    bodySplitter.addComponent(connectionsTable);
    bodySplitter.addComponent(connectionHistoryTable);
    bodySplitter.setSplitPosition(75, Sizeable.Unit.PERCENTAGE);
    bodySplitter.setSizeFull();
    bodyLayout.addComponent(bodySplitter);
    if (connectionDef.getConnectionHistoryDef() == null) {
        historyViewCheckbox.setVisible(false);
    }
    if (connectionDef.getConnectionStatusField() == null || connectionDef.getConnectedStatusValue() == null
            || connectionDef.getConnectedStatusValue().isEmpty()) {
        connectedTextLabel.setVisible(false);
        connectedCountLabel.setVisible(false);
        notConnectedTextLabel.setVisible(false);
        notConnectedCountLabel.setVisible(false);
    }
}

protected void setStyleNamesAndControlIds() {
    mainLayout.setId("mainLayout");
    header.setId("header");
    footer.setId("footer");
    propertyEditorLayout.setId("propertyEditorLayout");
    propertyEditor.setId("propertyEditor");
    mainLayout.setStyleName("mainLayout");
    propertyEditorLayout.setStyleName("ui_action_edit");
    header.setStyleName("TopPane");
    footer.setStyleName("footer");
}
These methods are used for setting up the layout of GUIs. They do not produce a single distinct output. Almost every line in these methods does a separate job that is largely unrelated to the other lines.
Usually, when unit testing a method, I check the return value of the method, or validate calls on a limited number of external objects such as database connections.
But for methods like the above, there is no such single output. If I wrote unit tests for such methods, my test code would check that each method call in every line of the method happens, and in the end it would look almost like the method itself.
If someone altered the code in any way, the test would break and they would have to update the test to match the change. But there is no assurance that the change didn't actually break anything, since the test doesn't check the actual UI drawn in the browser.
For example, if someone changed the style name of a control, they would have to update the test code with the new style name and the test would pass. But for things to actually work without any issue, they would also have to change the relevant SCSS style files, and the test made no contribution to detecting that issue. The same applies to the layout setup code.
Is there any advantage to writing unit tests like the above, other than keeping the code coverage rating at a higher level? To me it feels useless, and writing a test that compares the decompiled bytecode of the method against the original decompiled bytecode kept as a string in the test looks much better to me than these kinds of tests.
Is there any advantage of writing unit tests like above, other than keeping the code coverage rating at a higher level?
Yes, if you take a sensible approach. It might not make sense, as you say, to test that a control has a particular style. So focus your tests on the parts of your code that are likely to break. If there is any conditional logic that goes into producing your UI, test that logic. The test will then protect your code from future changes that could break your logic.
As for your comment about testing methods that don't return a value, you can address that in several ways.
It's your code, so you can restructure it to be more testable. Think about breaking it down into smaller methods. Isolate your logic into individual methods that can be called from a test (a sketch follows below).
Indirect verification - Rather than focusing on return values, focus on the effect your method has on other objects in the system.
Finally, consider whether unit testing the UI is right for you and your organization. UIs are often difficult to unit test (as you have pointed out). Many companies write functional tests for their UIs instead: tests that drive the UI of the actual product. This is very different from unit tests, which do not require the full product and are targeted at very small units of functionality.
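As a sketch of the restructuring idea above: the visibility decision in loadLayout() can be pulled into a small method that contains no Vaadin code, and that method can be tested on its own. ConnectionDef and its getters come from the question's code; the method name, the mock-based setup and the static imports (Mockito's mock/when, JUnit's assertFalse and @Test) are only one possible way to do it.

// Extracted decision logic, e.g. in the view class; no components are touched here.
// Static imports assumed in the test: org.mockito.Mockito.mock, org.mockito.Mockito.when,
// org.junit.Assert.assertFalse, plus org.junit.Test.
static boolean shouldShowConnectionStatus(ConnectionDef def) {
    return def.getConnectionStatusField() != null
            && def.getConnectedStatusValue() != null
            && !def.getConnectedStatusValue().isEmpty();
}

// The test exercises only the logic; no layout is ever built.
@Test
public void statusLabelsAreHiddenWhenNoStatusFieldIsConfigured() {
    ConnectionDef def = mock(ConnectionDef.class);
    when(def.getConnectionStatusField()).thenReturn(null);
    assertFalse(shouldShowConnectionStatus(def));
}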
Here's one simple example you can look at to see how to drive your application and try what is needed. This is a Vaadin 8, CDI & WildFly Swarm example, and by no means the only way to test the UIs of a Vaadin application.
https://github.com/wildfly-swarm/wildfly-swarm-examples/blob/master/vaadin/src/it/java/org/wildfly/swarm/it/vaadin/VaadinApplicationIT.java

Change method accessibility in order to test it

I have a public method that calls a group of private methods.
I would like to test each of the private methods with a unit test, as it is too complicated to test everything through the public method.
I think it would be bad practice to change method accessibility only for testing purposes.
But I don't see any other way to test it (maybe reflection, but that is ugly).
Private methods should only exist as a consequence of refactoring a public method that you've developed using TDD.
If you create a class with public methods and plan to add private methods to it, then your architecture will fail.
I know it's harsh, but what you're asking for is really, really bad software design.
I suggest you buy Uncle Bob's book "Clean Code"
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
Which basically gives you a great foundation for getting it right and saving you a lot of grief in your future as a developer.
There is, IMO, only one correct answer to this question: if the class is too complex, it means it is doing too much and has too many responsibilities. You need to extract those responsibilities into other classes that can be tested separately.
So the answer to your question is NO!
What you have is a code smell. You're seeing the symptoms of a problem, but you're not curing it. What you need to do is use refactoring techniques like Extract Class or Extract Subclass. Try to see if you can extract one of those private methods (or parts of it) into a class of its own. Then you can add unit tests for that new class. Divide and conquer until you have the code under control.
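A hedged sketch of that Extract Class move, with entirely invented names: a pricing rule that used to be a private method of a larger service becomes its own class, which a plain unit test can reach without any visibility tricks.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The logic that used to hide in a private method now has its own class.
class DiscountCalculator {
    double discountFor(int quantity) {
        // Simple illustrative rule: 10% off for bulk orders.
        return quantity >= 10 ? 0.10 : 0.0;
    }
}

public class DiscountCalculatorTest {

    @Test
    public void bulkOrdersGetTenPercentOff() {
        assertEquals(0.10, new DiscountCalculator().discountFor(12), 1e-9);
    }

    @Test
    public void smallOrdersGetNoDiscount() {
        assertEquals(0.0, new DiscountCalculator().discountFor(3), 1e-9);
    }
}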
You could, as has been mentioned, change the visibility from private to package, and then ensure that the unit-tests are in the same package (which should normally be the case anyway).
This can be an acceptable solution to your testing problem, given that the interfaces of the (now package-visible) functions are sufficiently stable and that you also do some integration testing (that is, checking that the public methods call these functions in the correct way).
There are, however, some other options you might want to consider:
If the private functions are interface-stable but sufficiently complex, you might consider creating separate classes for them - it is likely that some of them might benefit from being split into several smaller functions themselves.
If testing the private functions via the public interface is inconvenient (maybe because of the need for a complex setup), this can sometimes be solved by the use of helper functions that simplify the setup and allow different tests to share common setup code.
You are right, changing the visibility of methods just so you are able to test them is a bad thing to do. Here are the options you have:
Test it through existing public methods. You really shouldn't test methods but behavior, which normally needs multiple methods anyway. So stop thinking about testing that method, but figure out the behavior that is not tested. If your class is well designed it should be easily testable.
Move the method into a new class. This is probably the best solution to your problem from a design perspective. If your code is so complex that you can't reach all the paths in that private method, parts of it should probably live in their own class. In that class they will have at least package scope and can easily be tested. Again: you should still test behavior not methods.
Use reflection. You can access private fields and methods using reflection. While this is technically possible, it just adds more legacy code on top of the existing legacy code in order to hide that legacy code. In the general case it is a rather unwise thing to do. There are exceptions, for example if for some reason you are not allowed to make even the smallest change to the production source code. If you really need this, google it; a minimal sketch is also shown after this list.
Just change the visibility. Yes, it is bad practice. But sometimes the alternatives are: make large changes without tests, or don't test it at all. So sometimes it is OK to just bite the bullet and change the visibility, especially when it is the first step towards writing some tests and then extracting the behavior into its own class.
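For completeness, here is what the reflection route (option 3 above) looks like. The Widget class, its private computeScore method and the expected value are all made up for illustration; this is shown only so you can recognise the pattern, not as a recommendation.

import static org.junit.Assert.assertEquals;
import java.lang.reflect.Method;
import org.junit.Test;

public class WidgetReflectionTest {

    @Test
    public void callsPrivateMethodViaReflection() throws Exception {
        Widget widget = new Widget();
        // Look up the private method by name and parameter types...
        Method method = Widget.class.getDeclaredMethod("computeScore", int.class);
        method.setAccessible(true); // ...and bypass the private modifier.
        Object result = method.invoke(widget, 7);
        assertEquals(49, ((Integer) result).intValue());
    }
}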

When to use Mockito.verify()?

I write JUnit test cases for three purposes:
To ensure that my code satisfies all of the required functionality, under all (or most of) the input combinations/values.
To ensure that I can change the implementation, and rely on JUnit test cases to tell me that all my functionality is still satisfied.
As documentation of all the use cases my code handles, acting as a spec for refactoring, should the code ever need to be rewritten. (Refactor the code, and if my JUnit tests fail, you probably missed some use case.)
I do not understand why or when Mockito.verify() should be used. When I see verify() being called, it tells me that my JUnit test is becoming aware of the implementation. (Thus changing my implementation would break my JUnit tests, even though my functionality was unaffected.)
I'm looking for:
What should be the guidelines for appropriate usage of Mockito.verify()?
Is it fundamentally correct for jUnits to be aware of, or tightly coupled to, the implementation of the class under test?
If the contract of class A includes the fact that it calls method B of an object of type C, then you should test this by making a mock of type C, and verifying that method B has been called.
This implies that the contract of class A has sufficient detail that it talks about type C (which might be an interface or a class). So yes, we're talking about a level of specification that goes beyond just "system requirements", and goes some way to describing implementation.
This is normal for unit tests. When you are unit testing, you want to ensure that each unit is doing the "right thing", and that will usually include its interactions with other units. "Units" here might mean classes, or larger subsets of your application.
Update:
I feel that this doesn't apply just to verification, but to stubbing as well. As soon as you stub a method of a collaborator class, your unit test has become, in some sense, dependent on implementation. It's kind of in the nature of unit tests to be so. Since Mockito is as much about stubbing as it is about verification, the fact that you're using Mockito at all implies that you're going to run across this kind of dependency.
In my experience, if I change the implementation of a class, I often have to change the implementation of its unit tests to match. Typically, though, I won't have to change the inventory of what unit tests there are for the class; unless of course, the reason for the change was the existence of a condition that I failed to test earlier.
So this is what unit tests are about. A test that doesn't suffer from this kind of dependency on the way collaborator classes are used is really a sub-system test or an integration test. Of course, these are frequently written with JUnit too, and frequently involve the use of mocking. In my opinion, "JUnit" is a terrible name for a product that lets us produce so many different types of test.
David's answer is of course correct but doesn't quite explain why you would want this.
Basically, when unit testing you are testing a unit of functionality in isolation. You test whether the input produces the expected output. Sometimes, you have to test side effects as well. In a nutshell, verify allows you to do that.
For example, you have a bit of business logic that is supposed to store things using a DAO. You could test this with an integration test that instantiates the DAO, hooks it up to the business logic and then pokes around in the database to see if the expected things got stored. That's not a unit test any more.
Or you could mock the DAO and verify that it gets called in the way you expect. With Mockito you can verify that something is called, how often it is called, and even use matchers on the parameters to ensure it gets called in a particular way.
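A minimal sketch of that DAO case (OrderDao, OrderService and placeOrder are invented names; the lambda matcher syntax assumes Mockito 2 or later):

import static org.mockito.ArgumentMatchers.argThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class OrderServiceTest {

    @Test
    public void placingAnOrderStoresItExactlyOnce() {
        OrderDao dao = mock(OrderDao.class);
        OrderService service = new OrderService(dao);

        service.placeOrder("book", 2);

        // The side effect under test: save() was called once, with the right data.
        verify(dao, times(1)).save(argThat(order ->
                "book".equals(order.getItem()) && order.getQuantity() == 2));
    }
}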
The flip side of unit testing like this is indeed that you are tying the tests to the implementation, which makes refactoring a bit harder. On the other hand, a good indicator of design quality is the amount of code it takes to exercise it properly. If your tests need to be very long, something is probably wrong with the design. So code with a lot of side effects and complex interactions that need to be tested is probably not a good thing to have.
This is a great question!
I think the root cause is the following: we are using JUnit not only for unit testing. So the question should be split up:
Should I use Mockito.verify() in my integration (or any other higher-than-unit) testing?
Should I use Mockito.verify() in my black-box unit testing?
Should I use Mockito.verify() in my white-box unit testing?
So, if we ignore higher-than-unit testing, the question can be rephrased as: "Using white-box unit testing with Mockito.verify() creates tight coupling between the unit test and my implementation. Can I do some 'grey-box' unit testing, and what rules of thumb should I use for it?"
Now, let's go through all of this step-by-step.
Should I use Mockito.verify() in my integration (or any other higher-than-unit) testing?
I think the answer is clearly no; moreover, you shouldn't use mocks for this at all. Your test should be as close to the real application as possible. You are testing a complete use case, not an isolated part of the application.
Black-box vs. white-box unit testing
If you are using the black-box approach, what you are really doing is supplying input (covering all equivalence classes) and a state, and testing that you receive the expected output. In this approach the use of mocks is generally justified (you just mimic that they do the right thing; you don't want to test them), but calling Mockito.verify() is superfluous.
If you are using the white-box approach, what you are really doing is testing the behaviour of your unit. In this approach calling Mockito.verify() is essential: you should verify that your unit behaves as you expect.
Rules of thumb for grey-box testing
The problem with white-box testing is that it creates high coupling. One possible solution is to do grey-box testing instead of white-box testing. This is a sort of combination of black-box and white-box testing: you are really testing the behaviour of your unit, as in white-box testing, but in general you make the test implementation-agnostic where possible. When it is possible, you just make a check as in the black-box case and simply assert that the output is what you expect it to be. So the essence of your question is when that is possible.
This is really hard. I don't have a good general rule, but I can give you two examples. In the case mentioned above with equals() vs. equalsIgnoreCase(), you shouldn't call Mockito.verify(); just assert the output. If you can't do that, break your code down into smaller units until you can. On the other hand, suppose you have some @Service and you are writing a @WebService that is essentially a wrapper around your @Service: it delegates all calls to the @Service (and does some extra error handling). In this case calling Mockito.verify() is essential; you shouldn't duplicate all the checks you already did for the @Service, and verifying that you call the @Service with the correct parameter list is sufficient.
I must say that you are absolutely right from a classical approach's point of view:
If you first create (or change) the business logic of your application and then cover it with tests (a test-last approach), it will be very painful and dangerous to let the tests know anything about how your software works beyond checking inputs and outputs.
If you are practicing a test-driven approach, then your tests are the first to be written and changed, and they reflect the use cases of your software's functionality. The implementation depends on the tests. That sometimes means you want your software to be implemented in a particular way, e.g. relying on another component's method or even calling it a particular number of times. That is where Mockito.verify() comes in handy!
It is important to remember that there are no universal tools. The type of software, its size, company goals, the market situation, team skills and many other things influence the decision on which approach to use in your particular case.
In most cases when people don't like using Mockito.verify, it is because it is used to verify everything that the tested unit is doing and that means you will need to adapt your test if anything changes in it.
But I don't think that is a problem. If you want to be able to change what a method does without having to change its test, that basically means you want to write tests that don't test everything your method is doing, because you don't want them to test your changes. And that is the wrong way of thinking.
What really is a problem, is if you can modify what your method does and a unit test which is supposed to cover the functionality entirely doesn't fail. That would mean that whatever the intention of your change is, the result of your change isn't covered by the test.
Because of that, I prefer to mock as much as possible: mock your data objects too. When doing that you can use verify not only to check that the correct methods of other classes are called, but also that the data being passed is collected via the correct methods of those data objects. And to make it complete, you should test the order in which the calls occur.
Example: if you modify a db entity object and then save it using a repository, it is not enough to verify that the setters of the object are called with the correct data and that the save method of the repository is called. If they are called in the wrong order, your method still doesn't do what it should do.
So I don't use Mockito.verify directly; instead I create an InOrder object with all the mocks and use inOrder.verify. And if you want to make it complete, you should also call Mockito.verifyNoMoreInteractions at the end and pass it all the mocks. Otherwise someone can add new functionality/behavior without testing it, which means that after a while your coverage statistics can be 100% while you are still piling up code which isn't asserted or verified.
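A compact sketch of that combination (all class and method names are invented): InOrder pins down the call sequence, and verifyNoMoreInteractions fails the test if the production code starts doing anything with these mocks that the test does not know about.

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verifyNoMoreInteractions;
import org.junit.Test;
import org.mockito.InOrder;

public class EntityUpdaterTest {

    @Test
    public void setsTheNameBeforeSaving() {
        CustomerEntity entity = mock(CustomerEntity.class);
        CustomerRepository repository = mock(CustomerRepository.class);

        new EntityUpdater(repository).rename(entity, "Alice");

        // One InOrder object spanning both mocks checks the ordering of calls.
        InOrder inOrder = inOrder(entity, repository);
        inOrder.verify(entity).setName("Alice");
        inOrder.verify(repository).save(entity);

        // Any unverified interaction with either mock now fails the test.
        verifyNoMoreInteractions(entity, repository);
    }
}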
As some people have said:
Sometimes you don't have a direct output on which you can assert
Sometimes you just need to confirm that your tested method is sending the correct indirect outputs to its collaborators (which you are mocking).
Regarding your concern about breaking your tests when refactoring: that is somewhat expected when using mocks/stubs/spies. I mean that by definition, not with regard to a specific implementation such as Mockito.
But you could think of it this way: if you need to do a refactoring that would create major changes in the way your method works, it is a good idea to do it with a TDD approach, meaning you can change your test first to define the new behavior (so the test will fail), then make the changes and get the test passing again.

Sanity Check - Significant increase in the number of objects when using JUnit

I am using JUnit for the first time in a project and I'm fascinated by the way it is forcing me to restructure my code. One thing I've noticed is that the number of objects I've created in order to be able to test chunks of code has increased significantly. Is this typical?
Thanks,
Elliott
Yes, this is normal.
In general, the smaller and more focused your classes and methods are, the easier it is to understand and test them. This might produce more files and more actual lines of code, but that is because you are adding more abstractions, which gives your code a better, cleaner design.
You may want to read about the Single Responsibility Principle. Uncle Bob also has some refactoring examples in his book Clean Code where he touches on exactly these points.
One more thing when you are unit testing: Dependency Injection is one of the single most important things that will save you a lot of headaches when it comes to structuring your code. (And just for clarification, DI will not necessarily cause you to have more classes, but it will help decouple your classes from each other.)
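A tiny illustration of that last point (names invented): with constructor injection, a test can hand the class a mock or fake instead of letting it build its own collaborator.

public class ReportService {

    private final ReportRepository repository;

    // The collaborator is injected rather than created with `new` inside the class,
    // so a unit test can pass in a Mockito mock or a hand-written fake.
    public ReportService(ReportRepository repository) {
        this.repository = repository;
    }

    public int countReports() {
        return repository.findAll().size();
    }
}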
Yes, I think this is fairly typical. When I start introducing test code into a legacy codebase, I find myself creating smaller utility classes and POJOs and testing those. The original class just becomes a wrapper that calls these smaller classes.
One example would be a method which does a calculation, updates an object and then saves it to a database.
public void calculateAndUpdate(Thing t) {
    calculate(t); // quite a complex calculation with multiple results; updates t
    dao.save(t);
}
You could create a calculation object which is returned by the calculate method. The method then updates the Thing object and saves it.
public void calculateAndUpdate(Thing t) {
    Calculation calculation = new Calculator().calculate(t); // does not update t at all
    update(t, calculation); // updates t with the result of the calculation
    dao.save(t); // saves t to the database
}
So I've introduced two new objects, a Calculator & Calculation. This allows me to test the result of the calculation without having to have a database available. I can also unit test the update method as well. It's also more functional, which I like :-)
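A hedged sketch of what that enables (Thing, Calculator and Calculation come from the answer above; the setAmount and getTotal accessors are invented): the calculation is asserted directly, with no DAO or database in sight.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    @Test
    public void calculatesWithoutTouchingTheDatabase() {
        Thing thing = new Thing();
        thing.setAmount(100);

        Calculation calculation = new Calculator().calculate(thing);

        // No dao.save(), no database: the result object is inspected directly.
        assertEquals(100, calculation.getTotal());
    }
}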
If I continued to test with the original method, I would have to unit test the calculation, the update and the save as one item, which isn't nice.
For me, the second version is a better code design: better separation of concerns, smaller classes, more easily tested. The number of small classes goes up, but the overall complexity goes down.
It depends on what kind of objects you are referring to. Typically you should be fine using a mocking framework like EasyMock or Mockito, in which case the number of additional classes required solely for testing purposes should be fairly small. If you are referring to additional objects in your main source code, maybe unit testing is helping you refactor your code to make it more readable and reusable, which is a good idea anyway IMHO :-)
