Deleting code and unit tests in Java

I have been asked to create unit tests for code which I deleted from a Java class (note: this is not an API, so I do not need to deprecate the code).
Assume you have a Java class as per below, and you need to delete some code as commented:
public class foo extends foobar {

    protected void doStuff() {
        doMoreStuff();
        // Everything below needs to be deleted, including the doStuffToo() method
        Object o = null;
        doStuffToo(o);
    }

    public void doMoreStuff() {
        boolean a = true;
    }

    // this method needs to be deleted
    public void doStuffToo(Object o) {
        o = new Object();
    }
}
I am of the opinion that you should simply delete the test cases for the deleted code; however, I am being told that I should write unit tests to check for the existence of the old code, in the event there is a bad merge in the future.
What is considered best practice in the above example?

Hopefully, you have unit tests in place for the doStuff method. Assuming you do, I would do the following:
1. Update the tests for the doStuff method to reflect what the new logic should do.
2. Run the doStuff tests and verify that they fail.
3. Delete the application code you have been asked to remove.
4. Re-run the doStuff tests and verify that they pass.
5. If the tests don't pass in step 4, analyse and refactor accordingly until they pass.
6. Remove the tests which test the deleted methods (you will have received a compiler error after step 3 anyway).
I would love to know what the unit tests that "check for the existence of the old code" would do and the benefit they would provide. I know what would happen to such code if left behind. It would remain in the code base and become redundant, causing confusion to new team members until a few years later when a wise person decides to remove them.
Anyway, your safety net is the doStuff tests which hopefully would catch any merge problems. If you have the tests under source control (I hope so!), then you can always revert to a previous revision of the code base to retrieve the deleted tests if required in the future.
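To make step 2 concrete, here is a minimal, JUnit-free sketch. The Foo/RecordingFoo names are hypothetical stand-ins (the real suite would use JUnit, and likely a Mockito spy instead of a hand-rolled recording subclass); Foo mirrors the class from the question after the deletion.

```java
// Sketch: assert the *new* behaviour of doStuff - it should do nothing but
// delegate to doMoreStuff(). While the old code is still present, the real
// test would fail; once the code is deleted, it passes.
class Foo {
    protected void doStuff() {
        doMoreStuff();
        // the old lines (Object o = null; doStuffToo(o);) are gone
    }

    public void doMoreStuff() {
        boolean a = true;
    }
}

public class FooDoStuffCheck {
    // Hypothetical recording subclass standing in for a Mockito spy.
    static class RecordingFoo extends Foo {
        boolean moreStuffCalled = false;
        @Override
        public void doMoreStuff() { moreStuffCalled = true; }
    }

    public static void main(String[] args) {
        RecordingFoo f = new RecordingFoo();
        f.doStuff(); // protected, so the test lives in the same package
        if (!f.moreStuffCalled) {
            throw new AssertionError("doMoreStuff was not called");
        }
        System.out.println("doStuff test passed");
    }
}
```

This is the safety net described above: if a bad merge ever reintroduces the deleted call, a test asserting the new behaviour is what catches it, not a test that pokes around looking for old code.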

Related

What does removed call to "com.some.Filename::someMethodName" --> SURVIVED mean in pitest?

What does removed call to "com.some.Filename::someMethodName" --> SURVIVED mean in pitest.
Does it mean that if that method call is removed, the code will still work properly?
When pitest says the mutation has survived, it means it changed the codebase and not a single test detected the change. So you are not being very demanding of your test suite.
Ideally each mutation created should be killed by at least 1 unit test.
Some more information regarding mutation testing that may help you: https://pedrorijo.com/blog/intro-mutation/ (disclaimer: I'm the tutorial author).
PIT mutation testing wants two things:
Every line of code to be executed by the unit tests.
Any data modified by the source code to be asserted/validated.
Master these two points and you have mastered PIT.
Suppose you have the following source code, for which the PIT mutation tests must pass.
public Optional<CodeRemovedModel> method1(List<CodeRemovedModel> list) {
    if (list.isEmpty()) {
        return Optional.empty();
    }
    return doSomething(list);
}

private Optional<CodeRemovedModel> doSomething(List<CodeRemovedModel> list) {
    // Iterates over the list items and modifies two fields.
    // Per mutation testing, this forEach loop must be executed,
    // and the modified fields must be asserted - otherwise you get the
    // "removed call ... -> SURVIVED" error.
    list.forEach(s -> {
        s.setFirstName("RAHUL");
        s.setLastName("VSK");
    });
    return Optional.of(list.get(0));
}
In the following test case I'm ignoring the assertion on one field, hence the error shows up.
@Test
public void testMethod1_NON_EMPTY_LIST() {
    List<CodeRemovedModel> l = new ArrayList<>();
    l.add(new CodeRemovedModel());
    Optional<CodeRemovedModel> actual = this.codeRemovedMutation.method1(l);
    assertEquals("RAHUL", actual.get().getFirstName());
    //assertEquals("VSK", actual.get().getLastName());
}
In simple mutation-testing terms:
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail, the mutation is killed; if your tests pass, the mutation lived/survived.
In most cases, developers tend to write unit tests targeting code coverage only, which may not test each statement. So using a mutation-testing tool like PIT is very helpful, because it actually detects faults in the executed code. In most cases, to kill the mutants introduced by PIT, you have to write assert statements. Mutation testing improves the testing standard because it enforces writing test cases which test each statement of the code.
Does it mean that if that method call is removed the code will still work properly?
It does not mean that the actual code will work when the method call is removed; it means that even with the method call removed, not a single test case detected the change: the tests passed and the mutant survived.
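To make the fix concrete: uncommenting the second assertion in the test above is exactly what kills the mutant. Here is a self-contained sketch of the fully-asserting version (with a minimal stand-in CodeRemovedModel, since the original class isn't shown, and method1/doSomething collapsed into one method):

```java
// With *both* fields asserted, a PIT mutant that removes either setter call
// in the forEach now fails a test and is killed instead of surviving.
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

class CodeRemovedModel {
    private String firstName, lastName;
    public void setFirstName(String f) { firstName = f; }
    public void setLastName(String l)  { lastName = l; }
    public String getFirstName() { return firstName; }
    public String getLastName()  { return lastName; }
}

public class KillTheMutant {
    static Optional<CodeRemovedModel> method1(List<CodeRemovedModel> list) {
        if (list.isEmpty()) {
            return Optional.empty();
        }
        list.forEach(s -> {
            s.setFirstName("RAHUL");
            s.setLastName("VSK");
        });
        return Optional.of(list.get(0));
    }

    public static void main(String[] args) {
        List<CodeRemovedModel> l = new ArrayList<>();
        l.add(new CodeRemovedModel());
        CodeRemovedModel actual = method1(l).get();
        // Both modified fields are validated, so neither setter can be
        // removed without a test noticing.
        if (!"RAHUL".equals(actual.getFirstName())) throw new AssertionError();
        if (!"VSK".equals(actual.getLastName())) throw new AssertionError();
        System.out.println("both fields asserted");
    }
}
```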

Dependent functional JUnit tests can't share static fields

I have the following test (simplified for this question):
@FixMethodOrder(MethodSorters.JVM)
public class ArticleTest {

    private static Article article;

    @Test
    public void testCreateArticle() {
        articleService.create("My article");
        article = articleService.findByTitle("My article");
        assertNotNull(article);
    }

    @Test
    public void testUpdateArticle() {
        article.setTitle("New title");
        articleService.save(article);
        assertNull(articleService.findByTitle("My article"));
        article = articleService.findByTitle("New title");
        assertNotNull(article);
    }
}
testCreateArticle passes successfully, but testUpdateArticle fails at the first line because article is null, thus throwing an NPE (although the first test asserted that article wasn't null).
Anyone understands why? Note that I run the test with Play Framework (which loves bytecode manipulations), so this may be related somehow...
Also, I know that having dependent tests is a bad practice, but IRL, this isn't a unit test but a kind of test scenario, so I just wanted to give dependent tests a try to understand by myself why people don't like them ;)
But anyway, static fields are supposed to be shared between tests, am I wrong?
Update: I know that I could recreate an article in testUpdateArticle(), but the real tests are more complex (maybe I failed at creating an MVCE...). Say I have a third test that depends on the second one (that depends on the first one), etc. The first one needs nothing special, the second one needs a created article, the third one needs a created then updated article, etc. I wanted to try to avoid redoing all DB operations each time, by keeping the state between the tests (making them dependent then).
A better approach would be to recreate the article object before each test using the @Before annotation.

@Before
public void setUp() {
    articleService.create("My article");
}

That way the article object does not need to be static, which makes testing easier.
NOTE: Don't forget to clean up the article in an @After method:

@After
public void tearDown() {
    articleService.delete("My article");
}
I can't reproduce your behavior:
@FixMethodOrder
public class RemoveMeTest {

    private static String string;

    @Test
    public void testOne() {
        string = "foo";
    }

    @Test
    public void testTwo() {
        System.out.println("in test two: " + string);
    }
}

And the output is indeed "foo". Do you have any special configuration or test runners for your test cases?
And a test case should really, really not be dependent on other test cases. I wanted to mention that too, even though your case is more complex and others have mentioned it already ;)
If test A fails, B and C will fail too, even though they would work on their own. If you change test A (in your example, the string "My article"), B and C fail even though they work. And if C fails, you have to run A and B first to verify that C passes again.
The worst part, in my opinion, is that it increases complexity which is totally unnecessary. I believe test code is as valuable as production code with regard to readability, complexity, maintainability, and so on. It's much easier to understand 10 lines of code than 30. You can trust my experience; and by experience I mean I did it wrong a lot, and present self was very angry at (lazy) past self :P. With that said: I prefer an easy-to-understand test case over n DB round trips; it's much cheaper in the long run.

Redundant methods across several unit tests/classes

Say in the main code, you've got something like this:
MyClass.java
public class MyClass {

    public List<Obj1> create(List<ObjA> list) {
        return (new MyClassCreator()).create(list);
    }

    // Similar methods for other CRUD operations
}
MyClassCreator.java
public class MyClassCreator {

    Obj1Maker obj1Maker = new Obj1Maker();

    public List<Obj1> create(List<ObjA> list) {
        List<Obj1> converted = new ArrayList<>();
        for (ObjA objA : list)
            converted.add(obj1Maker.convert(objA));
        return converted;
    }
}
Obj1Maker.java
public class Obj1Maker {

    public Obj1 convert(ObjA objA) {
        Obj1 obj1 = new Obj1();
        obj1.setProp(formatObjAProperty(objA));
        return obj1;
    }

    private String formatObjAProperty(ObjA objA) {
        // get objA prop and do some manipulation on it
    }
}
Assume that the unit test for Obj1Maker is already done, and involves a method makeObjAMock() which mocks complex object A.
My questions:
For unit testing MyClassCreator, how would I test create(List<ObjA> list)? All the method really does is delegate the conversion from ObjA to Obj1 and run it in a loop. The conversion itself is already tested. If I were to create a list of ObjA and test each object in the list of Obj1 I get back, I would have to copy makeObjAMock() into MyClassCreator's unit test. Obviously, this would be duplicate code, so is using verify() enough to ensure that create(List<ObjA> list) works?
For unit testing MyClass, again, its create(List<ObjA>) method just delegates the operation to MyClassCreator. Do I actually need to test this with full test cases, or should I just verify that MyClassCreator's create method was called?
In the unit test for Obj1Maker, I checked that the properties Obj1 and ObjA corresponded to each other by doing assertEquals(obj1.getProp(), formatObjAProperty(objA)). However, that means I had to duplicate the code for the private formatObjAProperty method from the Obj1Maker class into its unit test. How can I prevent code repetition in this case? I don't want to make this method public/protected just so I can use it in a unit test. Is repetition acceptable in this case?
Thanks, and sorry for the lengthy questions.
Here is my opinion. Picking which methods to test is a hard thing to do.
You have to think about a) whether you are meeting your requirements and b) what could go wrong when someone stupid makes changes to the code in the future. (Actually, the stupid person could be you. We all have bad days.)
I would say writing new code to verify that the two objects have the same data in two formats would be a good idea. There is probably no reason to duplicate the code from the private method; copying the code over is a bad idea. Remember that you are verifying requirements. So if the original string said "6/30/13" and the reformatted one said "June 30th 2013", I would just hard-code the check:
assertEquals("Wrong answer", "June 30th 2013", obj.getProp());
Add some more asserts for edge cases and errors. (In my example, use "2/30/13" and "2/29/12" and "12/1/14" to check illegal date, leap year day and that it gets "1st" not "1th" perhaps.)
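The hard-coded-expectation idea can be sketched as follows. The SimpleDateFormat-based formatter here is a hypothetical stand-in for the question's private formatObjAProperty (and it produces "June 30 2013" rather than "June 30th 2013", since plain date patterns don't emit ordinal suffixes); the point is that the test asserts literal strings instead of re-running the formatter's logic.

```java
// Hard-coded expectations: the test knows the required answers, so no
// formatter logic is duplicated into the test class.
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class HardCodedExpectationSketch {
    // Hypothetical formatter standing in for the private formatObjAProperty.
    static String reformat(String mdy) {
        try {
            SimpleDateFormat in = new SimpleDateFormat("M/d/yy", Locale.ENGLISH);
            in.setLenient(false); // reject impossible dates like 2/30/13
            Date d = in.parse(mdy);
            return new SimpleDateFormat("MMMM d yyyy", Locale.ENGLISH).format(d);
        } catch (ParseException e) {
            return "illegal date";
        }
    }

    public static void main(String[] args) {
        // literal expected values, including the edge cases suggested above
        if (!"June 30 2013".equals(reformat("6/30/13"))) throw new AssertionError();
        if (!"illegal date".equals(reformat("2/30/13"))) throw new AssertionError();
        if (!"February 29 2012".equals(reformat("2/29/12"))) throw new AssertionError();
        System.out.println("hard-coded expectation checks passed");
    }
}
```

If a future change breaks the formatting, these literals fail loudly, which is exactly the safety net you want from requirement-level assertions.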
In the test of the create method, I would probably just go for the easy errors and verify that the returned list had the same number of elements as the one passed in. The one I passed in would have two identical elements and some different ones. I'd just check that the identical ones came back identical and the different ones non-identical. Why? Because we already know the formatter works.
I wouldn't test the constructor, but I would make sure some test runs the code in it. It's good to make sure most of the code actually runs in a test, to catch dumb errors like null pointers you missed.
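That create() test can be sketched like this. ObjA/Obj1/Obj1Maker are minimal stand-ins mirroring the question's classes, and the toUpperCase formatting is an assumption just to give convert() observable behaviour:

```java
// Easy checks on create(): same size in and out, identical inputs map to
// equal outputs, a different input maps to a different output.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ObjA { final String prop; ObjA(String p) { prop = p; } }
class Obj1 { String prop; }

class Obj1Maker {
    Obj1 convert(ObjA a) {
        Obj1 o = new Obj1();
        o.prop = a.prop.toUpperCase(); // stand-in for formatObjAProperty
        return o;
    }
}

class MyClassCreator {
    private final Obj1Maker maker = new Obj1Maker();
    List<Obj1> create(List<ObjA> list) {
        List<Obj1> out = new ArrayList<>();
        for (ObjA a : list) out.add(maker.convert(a));
        return out;
    }
}

public class CreatorTestSketch {
    public static void main(String[] args) {
        List<ObjA> in = Arrays.asList(new ObjA("x"), new ObjA("x"), new ObjA("y"));
        List<Obj1> out = new MyClassCreator().create(in);
        if (out.size() != in.size()) throw new AssertionError("size mismatch");
        // identical inputs -> equal outputs; different input -> different output
        if (!out.get(0).prop.equals(out.get(1).prop)) throw new AssertionError();
        if (out.get(2).prop.equals(out.get(0).prop)) throw new AssertionError();
        System.out.println("create() mapping checks passed");
    }
}
```

Note that nothing here re-tests the formatting itself; the conversion is trusted because Obj1Maker already has its own tests.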
The balance point is what you are looking for.
Enough tests, testing enough different things, to feel good about the code working.
Enough tests, testing obvious things, that stupid changes in the future will get found.
Not so many tests that the tests take forever to run and all the developers (including you) will put off running them because they don't want to wait or lose their train of thought while they run.
Balance!

Can I place unit test result validation in the #After method?

I'm writing a Unit test for the behavior of a method, and want to try to reduce my code duplication - I'm about to double the test cases again, and the file is getting a bit hard to look at.
If anyone's really interested, the code is here (warning to the future - link may not be permanent).
Right now, I do the following, repeated several times in slightly different ways:
@Test
public void testAdd64OfMax64() {
    // Set up container
    TileEntityChest nmsinv = new TileEntityChest();
    Inventory container = new CraftInventory(nmsinv);
    // Call method
    HashMap<Integer, ItemStack> leftover = container.addItem(new ItemStack(Foo, 64));
    // Set up expected result
    ItemStack[] expected = empty.clone();
    expected[0] = new ItemStack(Foo, 64);
    // Verify result
    assertThat(container.getContents(), is(expected));
    assertThat(leftover, is(Collections.EMPTY_MAP));
}
(Note: assertThat comes from org.hamcrest.Matchers.)
I know that I can put the "Set up container" step and the cloning of the empty array into a @Before method.
My question is: can I put the assertThat calls into an @After method and have the test still fail (when it should fail)?
No. The @After method should be used to release external resources after a test run. Also, note that it is guaranteed to run even if your test method throws an exception, which may not be desirable in your case.
Also, think about the people who will have to maintain your code in the future. You would be violating the principle of least astonishment by validating the results of your tests in the @After method. Nobody expects that!
It would be better to create a helper method, e.g. checkResults, called at the end of each of your @Test methods to validate your results.
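A sketch of such a checkResults helper; the inventory types are replaced here by plain arrays and a map, since the Bukkit classes from the question aren't available, but the shape is the same: each test does its own setup and call, then ends with one shared assertion method instead of pushing asserts into @After.

```java
// Shared assertion helper, called explicitly at the end of every test method.
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class CheckResultsSketch {
    static void checkResults(Map<Integer, Integer> leftover,
                             int[] contents, int[] expected) {
        if (!Arrays.equals(contents, expected))
            throw new AssertionError("contents mismatch: " + Arrays.toString(contents));
        if (!leftover.isEmpty())
            throw new AssertionError("unexpected leftover: " + leftover);
    }

    public static void main(String[] args) {
        // "testAdd64OfMax64" equivalent: everything fits, nothing left over
        int[] contents = {64, 0, 0};
        int[] expected = {64, 0, 0};
        checkResults(new HashMap<>(), contents, expected);
        System.out.println("checkResults passed");
    }
}
```

Because checkResults is called from inside the @Test method, a failure is reported against the right test, which is exactly what moving the asserts into @After would lose.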

Specifying order of execution in JUnit test case [duplicate]

This question already has answers here:
How to run test methods in specific order in JUnit4?
(23 answers)
Closed 9 years ago.
I have a test case where I add an entity, update it and delete the same. Hence, the order of execution is important here. I want it to be :
Create
Update
Delete
Strangely, for just one test case (out of 15), JUnit executes them in the following order:
Delete
Update
Create
How do I tell JUnit to execute them in a specific order? In other cases, JUnit works totally fine (executing serially). And why does JUnit behave weirdly in this one case?
Relevant code snippet below :
public class ParkingTests extends TestCase {

    private static Date date;
    private static int entity;
    static Parking p;

    public ParkingTests(String name) {
        super(name);
    }

    public void testAdd() throws Exception {
        // Add code here
    }

    public void testUpdate() throws Exception {
        // update code here
    }

    public void testDelete() throws Exception {
        // delete code here
    }
}
It gets weirder. I run a lot of test cases as part of a suite. If I run just the Parking case, the order is maintained. If I run it along with others, it is sometimes maintained, sometimes not !
Your kind of situation is awkward, as it feels bad to keep duplicating work in order to isolate the tests (see below) - but note that most of the duplication can be pulled out into setUp and tearDown (#Before, #After) methods, so you don't need much extra code. Provided that the tests are not running so slowly that you stop running them often, it's better to waste a bit of CPU in the name of clean testing.
public void testAdd() throws Exception {
    // wipe database
    // add something
    // assert that it was added
}

public void testUpdate() throws Exception {
    // wipe database
    // add something
    // update it
    // assert that it was updated
}

public void testDelete() throws Exception {
    // wipe database
    // add something
    // delete it
    // assert that it was deleted
}
The alternative is to stick everything into one test with multiple asserts, but this is harder to understand and maintain, and gives a bit less information when a test fails:
public void testCRUD() throws Exception {
    // wipe database
    // add something
    // assert that it was added
    // update it
    // assert that it was updated
    // delete it
    // assert that it was deleted
}
Testing with databases or collections or storage of any kind is tricky because one test can always affect other tests by leaving junk behind in the database/collection. Even if your tests don't explicitly rely on one another, they may still interfere with one another, especially if one of them fails.
Where possible, use a fresh instance for each test, or wipe the data, ideally in as simple a way as possible - e.g. for a database, wiping an entire table is more likely to succeed than a very specific deletion that you might accidentally get wrong.
Update: It's usually better to wipe data at the start of the test, so one failed test run doesn't affect the next run.
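The wipe-at-the-start pattern above can be sketched with a tiny in-memory stand-in for the database (the db map and wipeDatabase() helper are hypothetical; with JUnit, the wipe would live in an @Before method):

```java
// Each test wipes and rebuilds its own fixture, so the tests can run in
// any order - demonstrated by running them backwards in main().
import java.util.HashMap;
import java.util.Map;

public class IsolatedCrudTests {
    static Map<Integer, String> db = new HashMap<>();

    // Runs at the *start* of each test, so a failed previous run can't
    // leave junk behind that breaks this one.
    static void wipeDatabase() { db.clear(); }

    static void testAdd() {
        wipeDatabase();
        db.put(1, "entity");
        if (!db.containsKey(1)) throw new AssertionError("add failed");
    }

    static void testUpdate() {
        wipeDatabase();
        db.put(1, "entity");           // each test recreates its own fixture
        db.put(1, "updated");
        if (!"updated".equals(db.get(1))) throw new AssertionError("update failed");
    }

    static void testDelete() {
        wipeDatabase();
        db.put(1, "entity");
        db.remove(1);
        if (db.containsKey(1)) throw new AssertionError("delete failed");
    }

    public static void main(String[] args) {
        // order no longer matters: run them backwards to prove independence
        testDelete();
        testUpdate();
        testAdd();
        System.out.println("all isolated tests passed");
    }
}
```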
Generally, JUnit tests (test methods) should not depend on each other.
The following is taken from the JUnit FAQ:
Each test runs in its own test fixture to isolate tests from the changes made by other tests. That is, tests don't share the state of objects in the test fixture. Because the tests are isolated, they can be run in any order... The ordering of test-method invocations is not guaranteed.
So if you want to do some common initialization stuff then you could do that in the method annotated with #Before and cleanup in method annotated with #After. Or else if that initialization is not required for all tests methods in your test class then you could put that in private methods and call them appropriately from your tests.
On a side note, if you still want to do ordering of tests then you may have a look at TestNG.
If you're determined to have an order of execution for your tests, JUnit 4.11 now supports this through the @FixMethodOrder annotation. See this thread for more discussion. It is discouraged, though.
If you are using Java 7, you should know that JUnit gets the list of all tests using Method[] getDeclaredMethods() from java.lang.Class. You can read in the javadoc of this method (or in the JUnit docs) that "The elements in the array returned are not sorted and are not in any particular order", whereas in previous JVM implementations the method list was ordered as in the source code.
This was taken from this blog, and the author provides a workaround.
In general, JUnit does not guarantee the ordering of test cases. It's not guaranteed to be alphabetical, nor the order in the file. If the ordering of tests matters, then one test depends on the output of the previous one. What if the first one failed? Should we even bother with the later (and dependent) tests? Probably not.
So if we had this:
@Test
public void first() {...}

@Test
public void second() {...}

@Test
public void third() {...}
We don't know what order they will run in. Since we are hoping they go in order, and we should probably not bother running second or third if the previous one(s) failed, we can do this instead:
@Test
public void firstThree() {
    first();
    second();
    third();
}

public void first() {...}

public void second() {...}

public void third() {...}
Notice that we only have one @Test this time, and it guarantees the ordering.
If you want to run junit tests in order "just as they present in your source code",
see my note about this here:
How to run junit tests in order as they present in your source code
But it is really not a good idea: tests must be independent.
What you can do:
Clean up the database before every test.
Start by testing the first logical operation. When you have enough confidence, assume it is correct and move on to the next, and so on.
Write white-box tests, but start with black-box tests; for example, if you have triggers or similar in your database, start with those.
