Dependent functional JUnit tests can't share static fields - java

I have the following test (simplified for this question):
@FixMethodOrder(MethodSorters.JVM)
public class ArticleTest {
    private static Article article;

    @Test
    public void testCreateArticle() {
        articleService.create("My article");
        article = articleService.findByTitle("My article");
        assertNotNull(article);
    }

    @Test
    public void testUpdateArticle() {
        article.setTitle("New title");
        articleService.save(article);
        assertNull(articleService.findByTitle("My article"));
        article = articleService.findByTitle("New title");
        assertNotNull(article);
    }
}
testCreateArticle passes successfully, but testUpdateArticle fails on its first line because article is null, thus throwing an NPE (even though the first test asserted that article wasn't null).
Does anyone understand why? Note that I run the test with Play Framework (which loves bytecode manipulation), so this may be related somehow...
Also, I know that having dependent tests is a bad practice, but IRL, this isn't a unit test but a kind of test scenario, so I just wanted to give dependent tests a try to understand by myself why people don't like them ;)
But anyway, static fields are supposed to be shared between tests, am I wrong?
Update: I know that I could recreate an article in testUpdateArticle(), but the real tests are more complex (maybe I failed at creating an MVCE...). Say I have a third test that depends on the second one (that depends on the first one), etc. The first one needs nothing special, the second one needs a created article, the third one needs a created then updated article, etc. I wanted to try to avoid redoing all DB operations each time, by keeping the state between the tests (making them dependent then).

A better approach would be to recreate the article object before each test using the @Before annotation.
@Before
public void setUp() {
    articleService.create("My article");
}
That way the article object does not need to be static, which makes testing easier.
NOTE: Don't forget to clean up the article in an @After method:
@After
public void tearDown() {
    articleService.delete("My article");
}
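To make the idea concrete, here is a minimal, self-contained sketch of the pattern (the Article/ArticleService API is assumed from the question, and is backed here by an in-memory map purely so the snippet runs on its own):

```java
import java.util.HashMap;
import java.util.Map;

// Assumed, simplified versions of the question's Article and articleService.
class Article {
    private String title;
    Article(String title) { this.title = title; }
    String getTitle() { return title; }
    void setTitle(String title) { this.title = title; }
}

class ArticleService {
    private final Map<String, Article> db = new HashMap<>();
    void deleteAll() { db.clear(); }
    void create(String title) { db.put(title, new Article(title)); }
    Article findByTitle(String title) { return db.get(title); }
    void save(Article a) {
        db.values().removeIf(x -> x == a);  // drop the stale title index entry
        db.put(a.getTitle(), a);
    }
}

public class IndependentArticleTests {
    static ArticleService articleService = new ArticleService();

    // The @Before equivalent: every test starts from the same known state.
    static void setUp() {
        articleService.deleteAll();
        articleService.create("My article");
    }

    static void testUpdateArticle() {
        setUp();  // wipe + seed, so no other test needs to run first
        Article article = articleService.findByTitle("My article");
        article.setTitle("New title");
        articleService.save(article);
        if (articleService.findByTitle("My article") != null) throw new AssertionError();
        if (articleService.findByTitle("New title") == null) throw new AssertionError();
    }

    public static void main(String[] args) {
        testUpdateArticle();  // passes in isolation, in any order
        System.out.println("testUpdateArticle passed in isolation");
    }
}
```

With the wipe-and-seed setUp(), testUpdateArticle no longer depends on testCreateArticle having run before it.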

I can't reproduce your behavior:
@FixMethodOrder
public class RemoveMeTest {
    private static String string;

    @Test
    public void testOne() {
        string = "foo";
    }

    @Test
    public void testTwo() {
        System.out.println("in test two: " + string);
    }
}
And the output is indeed "foo". Do you have any special configuration or test runners for your test cases?
And a test case should really, really not be dependent on other test cases. I wanted to mention that, too, even if your case is more complex and others have mentioned it already ;)
If test A fails, then B and C will fail too, even though the code they exercise works. If you change test A (in your example, the string "My article"), B and C break as well, even though they work. And if C fails, you have to run A and B first just to verify that C passes again.
The worst part, in my opinion: it increases complexity in a way that is totally unnecessary. I believe test code is as valuable as production code when it comes to readability, complexity, maintainability, and so on. It's much easier to understand 10 lines of code than 30. You can trust my experience, and by experience I mean: I did it wrong a lot... and present self was very angry at (lazy) past self :P. With that said: I prefer an easy-to-understand test case over saving n DB round trips; it's much cheaper in the long run...

Related

How to unit test to the case where there are two public methods, one calling another?

What is the way to test the methods when there are two public methods and a method calls another public method in the same class?
How should I write unit tests in this scenario?
An example
class SpecificIntMath {
    public int add(int a, int b) {
        return a + b;
    }
    public int multiply(int a, int b) {
        int mul = 0;
        for (int i = 0; i < b; i++) {
            mul = add(a, mul);
        }
        return mul;
    }
}
This example doesn't show the complexity of both the methods involved but the concept.
Should I test add and multiply separately? If I test only multiply, I feel like we miss out on cases that multiply cannot feed into add as parameters.
Assuming multiply and add to be tested separately, should I be able to mock add? How is that possible?
Assuming multiply and add to be tested separately and I shouldn't mock, should I let add perform as it is. If this is the case how should I deal with the flow of the program inside add?
What is the approach to test such a kind of situation.
Edit 1:
In the below code,
class MCVC {
    public boolean getWhereFrom(List<User> users) {
        boolean allDone = true;
        for (User user : users) {
            String url = user.getUrl();
            switch (url) {
                case Consts.GOOGLE:
                    someDao.updateFromAddr(user);
                    user.setEntry("Search Engine");
                    break;
                case Consts.FACEBOOK:
                    someDao.updateFromAddr(user);
                    user.setEntry("Social Media");
                    break;
                case Consts.HOME:
                    someDao.updateToAddr(user);
                    user.setEntry("Company");
                default:
                    user.setEntry(null);
                    allDone = false;
                    break;
            }
        }
        return allDone;
    }

    public void likedDeck() {
        List<User> usersList = deckDao.getPotentialUsers(345L, HttpStatus.OK);
        boolean flag = getWhereFrom(usersList);
        if (flag) {
            for (User user : usersList) {
                // some action
            }
        }
    }
}
Should I consider getWhereFrom() while testing likedDeck(), or should I assume some default situation? If I assume a default situation, I lose the cases where the output isn't the default. I am not sure I should mock it, since the class doing the calling is the one being tested (see: Spying/Mocking class under test).
You don't care.
You use unit-testing to test the contract of each public method on its own. Thus you write tests that make sure that both add() and multiply() do what they are supposed to do.
The fact that the one uses the other internally is of no interest on the outside. Your tests should neither know nor care about this internal implementation detail.
And just for the record: as your code is written right now, you absolutely do not need to turn to mocking here. Mocking is not required, and it only adds the risk of testing something that has nothing to do with your real production code. You only use mocking in situations where you have to control aspects of objects in order to enable testing. Nothing in your example code needs mocking to be tested; and if it did, that would be an indication of poor design/implementation (given the contract of those methods)!
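A minimal sketch of that point: both contracts are tested directly, no mocking involved, and multiply's internal use of add stays invisible to the tests (a plain check helper stands in for JUnit asserts so the snippet is self-contained):

```java
// The question's class, with the for-loop typo fixed.
class SpecificIntMath {
    public int add(int a, int b) { return a + b; }
    public int multiply(int a, int b) {
        int mul = 0;
        for (int i = 0; i < b; i++) mul = add(a, mul);
        return mul;
    }
}

public class SpecificIntMathTest {
    public static void main(String[] args) {
        SpecificIntMath math = new SpecificIntMath();
        // add(): test its contract on its own.
        check(math.add(2, 3) == 5, "add(2,3)");
        check(math.add(-1, 1) == 0, "add(-1,1)");
        // multiply(): test its contract; we neither know nor care that it calls add().
        check(math.multiply(4, 3) == 12, "multiply(4,3)");
        check(math.multiply(7, 0) == 0, "multiply(7,0)");
        System.out.println("all contract checks passed");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what + " failed");
    }
}
```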
Edit; given the changes example in the question:
First of all, there is a bug in getWhereFrom(): the Consts.HOME case has no break, so it falls through into default, which overwrites the entry with null and flips allDone to false even for users coming from HOME.
I see two options for the actual question:
You can turn to Mockito and its "spy" concept to do partial mocking, in case you want to keep your source code as is.
Personally, I would rather invest the time into improving the production code. It looks to me as if getWhereFrom() could be worth its own class (where I would probably not have it work on a list of users, but on just one user; that also helps with returning a single boolean value ;-). And once you do that, you can use dependency injection to acquire a (mocked) instance of that "WhereFromService" class.
In other words: the code you are showing could be reworked/refactored, for example to more clearly follow the SRP. But that is of course a larger undertaking that you need to discuss with the people around you.
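As an illustration only (the WhereFromService name comes from the suggestion above; every other name here is hypothetical, not the asker's real code), the dependency-injection shape could look like:

```java
import java.util.List;

// Hypothetical extracted service: classifies a single user's URL.
interface WhereFromService {
    boolean classify(String url);  // true if the URL could be classified
}

// Hypothetical stand-in for likedDeck()'s owning class, with the
// service injected instead of called as a sibling method.
class DeckLiker {
    private final WhereFromService whereFrom;
    DeckLiker(WhereFromService whereFrom) { this.whereFrom = whereFrom; }

    boolean allClassified(List<String> urls) {
        boolean allDone = true;
        for (String url : urls)
            if (!whereFrom.classify(url)) allDone = false;
        return allDone;
    }
}

public class DeckLikerTest {
    public static void main(String[] args) {
        // A fake service replaces the real one: only "google" is classifiable.
        WhereFromService fake = url -> url.equals("google");
        DeckLiker liker = new DeckLiker(fake);
        if (!liker.allClassified(List.of("google"))) throw new AssertionError();
        if (liker.allClassified(List.of("google", "other"))) throw new AssertionError();
        System.out.println("DeckLiker tested without touching any DAO");
    }
}
```

The payoff is that the test controls getWhereFrom's behavior without any partial mocking of the class under test.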
At least test them both separately. That the multiply test implicitly tests add is not a problem. In most of these cases you should ask yourself whether it is necessary that both methods be public.
Should I test add and multiply separately?
You should test them separately, if you are doing unit testing. You would only like to test them together when doing component or integration tests.
Assuming multiply and add to be tested separately, should I be able to
mock add?
yes
How is that possible?
Use Mockito or any other mocking framework. For exactly how, see Use Mockito to mock some methods but not others.
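For reference, the technique behind that link is partial mocking: keep the real multiply() while controlling or observing add(). With Mockito you would use spy() plus doReturn(...).when(...); the same effect can be sketched framework-free by overriding just the method you want to observe:

```java
// The question's class, with the for-loop typo fixed.
class SpecificIntMath {
    public int add(int a, int b) { return a + b; }
    public int multiply(int a, int b) {
        int mul = 0;
        for (int i = 0; i < b; i++) mul = add(a, mul);
        return mul;
    }
}

public class PartialMockSketch {
    public static void main(String[] args) {
        // A hand-rolled "spy": real multiply(), instrumented add().
        final int[] calls = {0};
        SpecificIntMath spy = new SpecificIntMath() {
            @Override
            public int add(int a, int b) {
                calls[0]++;           // record the interaction, delegate to real logic
                return a + b;
            }
        };
        int result = spy.multiply(5, 4);
        if (result != 20) throw new AssertionError("multiply gave " + result);
        if (calls[0] != 4) throw new AssertionError("expected 4 add() calls, got " + calls[0]);
        System.out.println("partial-mock sketch ok");
    }
}
```

Note this also illustrates why the other answers discourage the approach: the test now knows that multiply calls add four times, which is an implementation detail.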
Assuming multiply and add to be tested separately and I shouldn't
mock, should I let add perform as it is.
I wouldn't do that. Internal changes in add could affect the tests for multiply, and your tests would become more complicated and unstable.

Should I test cases in which nothing is expected to happen

A Sample can be deleted if its status is S or P. I have these tests:
@Test
public void canBeDeletedWhenStatusIsP() {
    Sample sample = new Sample();
    sample.setState("P");
    assertTrue(sample.canBeDeleted());
}

@Test
public void canBeDeletedWhenStatusIsS() {
    Sample sample = new Sample();
    sample.setState("S");
    assertTrue(sample.canBeDeleted());
}
Should I go further? How should I test when the sample can't be deleted? For example:
@Test
public void cantBeDeletedWhenStatusIsNeitherPNorS() {
    Sample sample = new Sample();
    sample.setState("Z");
    assertFalse(sample.canBeDeleted());
}
Is this test useful? What about the test naming? Would this logic be tested enough?
SaintThread is giving you a good "direct" answer.
But let's step back, because you are doing something wrong in your production code. Most likely, your production code does something like a switch on that String that denotes the sample state; and not only once, but within all the methods it provides. And that is not good OO design!
Instead, you should use polymorphism, like:
abstract class Sample {
    abstract boolean canBeDeleted();
    // ... probably other methods as well
}
with various concrete subclasses, like
class ZSample extends Sample {
    @Override
    boolean canBeDeleted() { return false; }
    // ...
}
And finally, you have
class SampleFactory {
    Sample createSampleFrom(String stateIdentifier) {
        // here you might switch over that string and return a
        // corresponding object, for example of class ZSample
    }
}
And then, your tests boil down to:
Testing the factory; example for input "Z", it returns an instance of ZSample
Testing all your subclasses of Sample; for example that canBeDeleted() returns false for an instance of ZSample
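Sketched out in one runnable piece (the class names follow the suggestion above; the S/P rule and the DeletableSample name are assumptions for illustration), the two kinds of tests look like:

```java
abstract class Sample {
    abstract boolean canBeDeleted();
}

class DeletableSample extends Sample {          // assumed name for the S/P states
    @Override boolean canBeDeleted() { return true; }
}

class ZSample extends Sample {
    @Override boolean canBeDeleted() { return false; }
}

class SampleFactory {
    Sample createSampleFrom(String stateIdentifier) {
        switch (stateIdentifier) {
            case "S":
            case "P": return new DeletableSample();
            default:  return new ZSample();
        }
    }
}

public class SampleDesignTest {
    public static void main(String[] args) {
        SampleFactory factory = new SampleFactory();
        // 1. Factory test: input "Z" yields a ZSample.
        if (!(factory.createSampleFrom("Z") instanceof ZSample)) throw new AssertionError();
        // 2. Subclass tests: each state object answers for itself.
        if (new ZSample().canBeDeleted()) throw new AssertionError();
        if (!new DeletableSample().canBeDeleted()) throw new AssertionError();
        System.out.println("factory + subclass checks ok");
    }
}
```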
The point is: your code is doing the work of an FSM (finite state machine). So don't use if/elses all over the place; instead, do the OO thing: create an explicit state machine. And, free bonus: this approach would also make it possible to turn your Sample objects into immutable thingies, which is most often better than having to deal with objects that can change their state over time (immutability helps a lot with multi-threading issues, for example).
Disclaimer: if your "Sample" class is only about that one method, maybe the above is overkill. But in any other case ... maybe step back and see if my suggestions would add value to your design!
In my opinion you should test:
cantBeDeletedWithoutStatus
    assertFalse(sample.canBeDeleted());
cantBeDeletedWhenStatusIsInvalid
    sample.setState("Z");
    assertFalse(sample.canBeDeleted());
cantBeDeletedWhenStatusIsToggledToInvalid
    sample.setState("P");
    sample.setState("Z");
    assertFalse(sample.canBeDeleted());
canBeDeletedWhenStatusIsToggledToS
    sample.setState("Z");
    sample.setState("S");
    assertTrue(sample.canBeDeleted());
canBeDeletedWhenStatusIsToggledToP
    sample.setState("Z");
    sample.setState("P");
    assertTrue(sample.canBeDeleted());
Let me know your thoughts in the comments
We should want our tests to be thorough, so they are likely to detect many classes of bugs. So the simple answer is yes, test the no-op case.
You don't say what the possible values of the state are. Let us assume they must be uppercase English letters, giving 26 states. Your question is then essentially the same as "should I have 26 test cases?". That is many, but not prohibitively many. Now imagine a more complex case, in which the state is an int and all int values are possible. Testing them all would be impractical. What to do?
The means for dealing with testing when there are very many inputs or initial states is equivalence partitioning. Divide the inputs or states into sets, such that all the elements of a set should result in the same behaviour and are adjacent to each other. In your case the equivalence partitions might be A-O, P, Q-R, S, T-Z. Then have one test case for each partition.
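A sketch of that strategy with one representative state per partition (the canBeDeleted logic is the S-or-P rule from the question; the representatives are arbitrary picks from each partition):

```java
// Assumed, simplified version of the question's Sample class.
class Sample {
    private String state;
    void setState(String state) { this.state = state; }
    boolean canBeDeleted() { return "S".equals(state) || "P".equals(state); }
}

public class PartitionTest {
    public static void main(String[] args) {
        // partition -> representative state -> expected canBeDeleted()
        String[][] cases = {
            {"A-O", "G", "false"},
            {"P",   "P", "true"},
            {"Q-R", "Q", "false"},
            {"S",   "S", "true"},
            {"T-Z", "W", "false"},
        };
        for (String[] c : cases) {
            Sample sample = new Sample();
            sample.setState(c[1]);
            boolean expected = Boolean.parseBoolean(c[2]);
            if (sample.canBeDeleted() != expected)
                throw new AssertionError("partition " + c[0] + " failed");
        }
        System.out.println("all 5 partitions covered with 5 tests, not 26");
    }
}
```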

Deleting code and unit tests in java

I have been asked to create unit tests for code which I deleted from a java class (note this is not an API so I do not need to deprecate the code).
Assume you have a java class as per below, and you need to delete some code as commented below:
public class foo extends foobar {
    protected void doStuff() {
        doMoreStuff();
        // Everything below needs to be deleted, including the doStuffToo() method
        Object o = null;
        doStuffToo(o);
    }

    public void doMoreStuff() {
        boolean a = true;
    }

    // this method needs to be deleted
    public void doStuffToo(Object o) {
        o = new Object();
    }
}
I am of the opinion that you should simply delete the test cases for the deleted code, however I am being told that I should write unit tests to check for the existence of the old code in the event there is a bad merge in the future.
What is considered best practice in the above example?
Hopefully, you have unit tests in place for the doStuff method. Assuming you do, I would do the following:
Update the tests for the doStuff method to reflect what the new logic should do.
Run the doStuff tests and verify that they fail.
Delete the application code you have been asked to remove.
Re-run the doStuff tests and verify that they pass.
If the tests don't pass in step 4, analyse and refactor accordingly until they pass.
Remove the tests which test the deleted methods (you will have received a compiler error from step 3 anyway).
I would love to know what the unit tests that "check for the existence of the old code" would do and the benefit they would provide. I know what would happen to such code if left behind. It would remain in the code base and become redundant, causing confusion to new team members until a few years later when a wise person decides to remove them.
Anyway, your safety net is the doStuff tests which hopefully would catch any merge problems. If you have the tests under source control (I hope so!), then you can always revert to a previous revision of the code base to retrieve the deleted tests if required in the future.

Redundant methods across several unit tests/classes

Say in the main code, you've got something like this:
MyClass.java
public class MyClass {
    public List<Obj1> create(List<ObjA> list) {
        return (new MyClassCreator()).create(list);
    }
    // Similar methods for other CRUD operations
}
MyClassCreator.java
public class MyClassCreator {
    Obj1Maker obj1Maker = new Obj1Maker();

    public List<Obj1> create(List<ObjA> list) {
        List<Obj1> converted = new ArrayList<>();
        for (ObjA objA : list)
            converted.add(obj1Maker.convert(objA));
        return converted;
    }
}
Obj1Maker.java
public class Obj1Maker {
    public Obj1 convert(ObjA objA) {
        Obj1 obj1 = new Obj1();
        obj1.setProp(formatObjAProperty(objA));
        return obj1;
    }

    private String formatObjAProperty(ObjA objA) {
        // get objA prop and do some manipulation on it
    }
}
Assume that the unit test for Obj1Maker is already done, and involves a method makeObjAMock() which mocks complex object A.
My questions:
For unit testing MyClassCreator, how would I test create(List<ObjA> list)? All the method really does is delegate the conversion from ObjA to Obj1 and run it in a loop. The conversion itself is already tested. If I were to create a list of ObjA and test each object in the list of Obj1 I get back, I would have to copy makeObjAMock() into MyClassCreator's unit test. Obviously, this would be duplicate code, so is using verify() enough to ensure that create(List<ObjA> list) works?
For unit testing MyClass, again, its create(List<ObjA>) method just delegates the operation to MyClassCreator. Do I actually need to test this with full test cases, or should I just verify that MyClassCreator's create method was called?
In the unit test for Obj1Maker, I checked that the properties Obj1 and ObjA corresponded to each other by doing assertEquals(obj1.getProp(), formatObjAProperty(objA)). However, that means I had to duplicate the code for the private formatObjAProperty method from the Obj1Maker class into its unit test. How can I prevent code repetition in this case? I don't want to make this method public/protected just so I can use it in a unit test. Is repetition acceptable in this case?
Thanks, and sorry for the lengthy questions.
Here is my opinion. Picking which methods to test is a hard thing to do.
You have to think about a) whether you are meeting your requirements and b) what could go wrong when someone stupid makes changes to the code in the future. (Actually, the stupid person could be you. We all have bad days.)
I would say, writing new code to verify the two objects have the same data in two formats would be a good idea. Probably there is no reason to duplicate the code from the private method and copying the code over is a bad idea. Remember that you are verifying requirements. So if the original string said "6/30/13" and the reformatted one said "June 30th 2013", I would just hard code the check:
assertEquals("Wrong answer", "June 30th 2013", obj.getProp());
Add some more asserts for edge cases and errors. (In my example, use "2/30/13" and "2/29/12" and "12/1/14" to check illegal date, leap year day and that it gets "1st" not "1th" perhaps.)
In the test on the create method, I would probably just go for the easy error and verify that the returned list has the same number of elements as the one passed in. The list I passed in would have two identical elements and some different ones. I'd just check that the identical ones came back identical and the different ones non-identical. Why? Because we already know the formatter works.
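A sketch of that test (Obj1/ObjA/MyClassCreator are simplified stand-ins for the question's classes, and the toUpperCase "conversion" is invented purely so the example runs; it plays the role of the already-tested formatter):

```java
import java.util.ArrayList;
import java.util.List;

class ObjA { String prop; ObjA(String prop) { this.prop = prop; } }
class Obj1 { String prop; }

class Obj1Maker {
    Obj1 convert(ObjA a) {
        Obj1 o = new Obj1();
        o.prop = a.prop.toUpperCase();  // stand-in for the real formatting
        return o;
    }
}

class MyClassCreator {
    Obj1Maker obj1Maker = new Obj1Maker();
    List<Obj1> create(List<ObjA> list) {
        List<Obj1> converted = new ArrayList<>();
        for (ObjA objA : list) converted.add(obj1Maker.convert(objA));
        return converted;
    }
}

public class CreatorTest {
    public static void main(String[] args) {
        // Two identical elements and one different one, as suggested above.
        List<ObjA> in = List.of(new ObjA("x"), new ObjA("x"), new ObjA("y"));
        List<Obj1> out = new MyClassCreator().create(in);
        if (out.size() != in.size()) throw new AssertionError("size mismatch");
        if (!out.get(0).prop.equals(out.get(1).prop)) throw new AssertionError("identical differ");
        if (out.get(0).prop.equals(out.get(2).prop)) throw new AssertionError("different match");
        System.out.println("create() delegation checks ok");
    }
}
```

Note the test never re-checks what the converted values are, only that delegation happened once per element; the formatter's correctness stays the Obj1Maker test's job.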
I wouldn't test the constructor but would make sure some test ran the code in it. It's good to make sure most of the code actually runs in a test to catch dumb errors like null pointers you missed.
The balance point is what you are looking for.
Enough tests, testing enough different things, to feel good about the code working.
Enough tests, testing obvious things, that stupid changes in the future will get found.
Not so many tests that the tests take forever to run and all the developers (including you) will put off running them because they don't want to wait or lose their train of thought while they run.
Balance!

Specifying order of execution in JUnit test case [duplicate]

This question already has answers here:
How to run test methods in specific order in JUnit4?
(23 answers)
Closed 9 years ago.
I have a test case where I add an entity, update it and delete the same. Hence, the order of execution is important here. I want it to be :
Create
Update
Delete
Strangely, for just one test case ( out of 15) , JUnit executes it in the following order :
Delete
Update
Create .
How do I tell JUnit to execute them in a specific order ? In other cases, JUnit works totally fine ( executing serially ) . And why does JUnit behave weirdly in this one case ?
Relevant code snippet below :
public class ParkingTests extends TestCase {
    private static Date date;
    private static int entity;
    static Parking p;

    public ParkingTests(String name) {
        super(name);
    }

    public void testAdd() throws Exception {
        // add code here
    }

    public void testUpdate() throws Exception {
        // update code here
    }

    public void testDelete() throws Exception {
        // delete code here
    }
}
It gets weirder. I run a lot of test cases as part of a suite. If I run just the Parking case, the order is maintained. If I run it along with others, it is sometimes maintained, sometimes not !
Your kind of situation is awkward, as it feels bad to keep duplicating work in order to isolate the tests (see below) - but note that most of the duplication can be pulled out into setUp and tearDown (#Before, #After) methods, so you don't need much extra code. Provided that the tests are not running so slowly that you stop running them often, it's better to waste a bit of CPU in the name of clean testing.
public void testAdd() throws Exception {
    // wipe database
    // add something
    // assert that it was added
}

public void testUpdate() throws Exception {
    // wipe database
    // add something
    // update it
    // assert that it was updated
}

public void testDelete() throws Exception {
    // wipe database
    // add something
    // delete it
    // assert that it was deleted
}
The alternative is to stick everything into one test with multiple asserts, but this is harder to understand and maintain, and gives a bit less information when a test fails:
public void testCRUD() throws Exception {
    // wipe database
    // add something
    // assert that it was added
    // update it
    // assert that it was updated
    // delete it
    // assert that it was deleted
}
Testing with databases or collections or storage of any kind is tricky because one test can always affect other tests by leaving junk behind in the database/collection. Even if your tests don't explicitly rely on one another, they may still interfere with one another, especially if one of them fails.
Where possible, use a fresh instance for each test, or wipe the data, ideally in as simple a way as possible - e.g. for a database, wiping an entire table is more likely to succeed than a very specific deletion that you might accidentally get wrong.
Update: It's usually better to wipe data at the start of the test, so one failed test run doesn't affect the next run.
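The wipe-at-the-start pattern in miniature (a HashSet stands in for the database table; in real JUnit the setUp() body would live in an @Before method):

```java
import java.util.HashSet;
import java.util.Set;

public class WipeFirstPattern {
    static Set<String> table = new HashSet<>();  // stand-in for a DB table

    // Wipe at the START of each test: leftovers from a previous failed
    // run can't poison this one.
    static void setUp() {
        table.clear();
        table.add("seed-row");
    }

    static void testUpdate() {
        setUp();                         // independent of whatever ran before
        table.remove("seed-row");
        table.add("updated-row");
        if (!table.contains("updated-row")) throw new AssertionError();
    }

    static void testDelete() {
        setUp();                         // doesn't care about testUpdate's leftovers
        table.remove("seed-row");
        if (!table.isEmpty()) throw new AssertionError();
    }

    public static void main(String[] args) {
        testDelete();                    // deliberately "wrong" order
        testUpdate();
        System.out.println("tests pass in any order");
    }
}
```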
Generally junit tests(test methods) should not depend on each other.
The following is taken from the JUnit FAQ:
Each test runs in its own test fixture to isolate tests from the changes made by other tests. That is, tests don't share the state of objects in the test fixture. Because the tests are isolated, they can be run in any order... The ordering of test-method invocations is not guaranteed.
So if you want to do some common initialization stuff then you could do that in the method annotated with #Before and cleanup in method annotated with #After. Or else if that initialization is not required for all tests methods in your test class then you could put that in private methods and call them appropriately from your tests.
On a side note, if you still want to do ordering of tests then you may have a look at TestNG.
If you're determined to have an order of execution for your tests, JUnit 4.11 now supports this through an annotation. See this thread for more discussion - basically, you would use
@FixMethodOrder
to guarantee some test order that way. It is discouraged though.
If you are using Java 7, then you should know that JUnit gets the list of all tests using Method[] getDeclaredMethods() from java.lang.Class. You can read in the javadoc of this method (or in the JUnit docs) that "the elements in the array returned are not sorted and are not in any particular order", whereas in previous JVM implementations the methods list was ordered as the methods appeared in the source code.
This was taken from this blog post, and the author provides a workaround.
In general, JUnit does not guarantee the ordering of test cases. It's not guaranteed to be alphabetical, nor the order in the file. If the ordering of tests were important, then one depends on the output of the previous. What if the first one failed? Should we even bother with the later (and dependent) tests? Probably not.
So if we had this:
@Test
public void first() {...}
@Test
public void second() {...}
@Test
public void third() {...}
We don't know what order they will run in. Since we are hoping they go in order, and we should probably not bother running second or third if the previous one(s) failed, we can do this instead:
@Test
public void firstThree() {
    first();
    second();
    third();
}
public void first() {...}
public void second() {...}
public void third() {...}
Notice that we only have one #Test this time, and it guarantees ordering.
If you want to run junit tests in order "just as they present in your source code",
see my note about this here:
How to run junit tests in order as they present in your source code
But it is really not a good idea, tests must be independent.
What you can do :
Cleanup the database before every test
Start by testing the first logical operation first. When you have enough confidence, assume it is correct and move to the next, etc...
Write white-box tests where they are needed, but start with black-box tests. For example, if you have triggers or similar in your database, start with those.
