How to test Java software? - java

We have been assigned an open source software to test!
The software has 3 packages and each package has 10 or more classes and each class might have dozens of methods.
My question is: before I start structural (white-box) testing, do I need to understand every line of code in the software?
Do I need to understand the whole flow of program starting from main() method?
What is the approach I should take?

If you have specs on what each method is supposed to do (what output is expected for a given input), then no, you don't need to go into the implementation details of those methods. Normally this should be written down!
You could write unit tests that check if methods are satisfying predefined contracts (if they exist).
If you don't have specs, or, following the recent trend, 'user stories', you need to 'reverse engineer your specs' :) You would need to analyse each method to understand what it is doing, then check where those methods are called in order to figure out what values can be passed in those calls. The calling methods might also give you an idea of the corner cases. And those you definitely want to test.
.... and slowly you learn the whole code :)

No, you don't have to understand every line of code to write unit tests. I'm not so experienced with unit tests yet, but what I've seen so far is testing every (or most) methods that respond differently to certain inputs (arguments, object fields, ...).
So you have to know what a method does, when it succeeds, and when it is supposed to fail. And for some methods, even the turning points between those cases are important.
For Example
Let's say we have a method that sums two ints, both of which have to be 0 or larger:
public static int sumInts(int a, int b) {
    if (a < 0 || b < 0) throw new IllegalArgumentException("'a' and 'b' should be 0 or above!");
    return a + b;
}
What you could test:
if a = 49 and b = 16, does it return 65?
if a = -3 and b = 4, does it throw an exception?
if a = 5 and b = -13, does it throw an exception?
if a = -46 and b = -13, does it throw an exception?
if a = 0 and b = 0, does it return 0?
if a = -1 and b = -1, does it throw an exception?
Of course, this is just a very simple example. What you test depends on your method. For this method, the last two tests are likely unneeded. But there are more complex methods where they could be useful.
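Written out, the checks above can be a small self-contained test class. Plain `assert` statements are used instead of a test framework so the sketch runs on its own (with `java -ea`):

```java
public class SumIntsTest {
    // Method under test, copied from the example above.
    static int sumInts(int a, int b) {
        if (a < 0 || b < 0) throw new IllegalArgumentException("'a' and 'b' should be 0 or above!");
        return a + b;
    }

    // Helper: returns true if the call threw IllegalArgumentException.
    static boolean throwsIAE(int a, int b) {
        try {
            sumInts(a, b);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Happy path and the zero boundary.
        assert sumInts(49, 16) == 65;
        assert sumInts(0, 0) == 0;
        // Each way the precondition can be violated.
        assert throwsIAE(-3, 4);
        assert throwsIAE(5, -13);
        assert throwsIAE(-46, -13);
        System.out.println("all sumInts checks passed");
    }
}
```

In JUnit these would become separate `@Test` methods, but the cases themselves are the same.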

There are several types of tests.
A unit test checks the functionality of each method for different classes of input data. For example, if you have a BigInteger fact(int n) method, you should write tests for normal positive integers, zero, negative integers, and max/min values. Several libraries will help you here: JUnit, TestNG, etc.
An integration test exercises your whole application, all packages and classes, as a group. See the Arquillian project.
You can also write black-box tests using Selenium.
There are many more types of testing, like regression testing, load testing, etc. That is the work of a QA engineer.
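To make the fact(n) case concrete, here is a sketch with a hypothetical implementation (the original question gives none) and unit checks for the input classes listed above:

```java
import java.math.BigInteger;

public class FactTest {
    // Hypothetical implementation of the fact(n) method mentioned above;
    // negative input is assumed to be rejected with an exception.
    static BigInteger fact(int n) {
        if (n < 0) throw new IllegalArgumentException("n must be >= 0");
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        assert fact(0).equals(BigInteger.ONE);          // zero
        assert fact(1).equals(BigInteger.ONE);          // smallest positive input
        assert fact(5).equals(BigInteger.valueOf(120)); // normal positive input
        try {
            fact(-1);                                   // negative input
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) { }
        System.out.println("fact tests passed");
    }
}
```

In JUnit the negative case would typically use `@Test(expected = IllegalArgumentException.class)` instead of the try/catch.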
After the dark ages of the software industry, the community finally arrived at some good unit-testing practices:
Code coverage. We would be living in caves without code coverage metrics.
Fault injection helps you in building more robust and stable software.
Wide use of mocking to write more focused tests, without interference from the rest of your application.
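The mocking point can be illustrated without any framework: replace a collaborator with a hand-rolled fake so the unit under test runs in isolation. Everything here (PriceService, totalPrice) is a made-up example, not from the question:

```java
public class MockingSketch {
    // Collaborator interface; in production this might hit a database or network.
    interface PriceService {
        double priceOf(String item);
    }

    // Unit under test: depends only on the interface, not on a concrete service.
    static double totalPrice(PriceService service, String... items) {
        double total = 0;
        for (String item : items) {
            total += service.priceOf(item);
        }
        return total;
    }

    public static void main(String[] args) {
        // Hand-rolled fake: returns canned values, no real backend involved.
        PriceService fake = item -> item.equals("apple") ? 2.0 : 1.0;
        assert totalPrice(fake, "apple", "pear") == 3.0;
        System.out.println("mocking sketch passed");
    }
}
```

Frameworks like Mockito automate creating such fakes and verifying how they were called, but the principle is the same.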


What To Unit Test [closed]

I'm a bit confused about how much I should dedicate to unit tests.
Say I have a simple function like:
appendRepeats(StringBuilder strB, char c, int repeats)
[This function will append char c repeats number of times to strB.
e.g.:
strB = "hello"
c = 'h'
repeats = 5
// result
strB = "hellohhhhh"
]
For unit testing this function, I feel there's already so many possibilities:
AppendRepeats_ZeroRepeats_DontAppend
AppendRepeats_NegativeRepeats_DontAppend
AppendRepeats_PositiveRepeats_Append
AppendRepeats_NullStrBZeroRepeats_DontAppend
AppendRepeats_NullStrBNegativeRepeats_DontAppend
AppendRepeats_NullStrBPositiveRepeats_Append
AppendRepeats_EmptyStrBZeroRepeats_DontAppend
AppendRepeats_EmptyStrBNegativeRepeats_DontAppend
AppendRepeats_EmptyStrBPositiveRepeats_Append
etc. etc.
strB can be null or empty or have value.
c can be the null character ('\0') or have a value
repeats can be negative or positive or zero
That seems already 3 * 2 * 3 = 18 test methods. Could be a lot more on other functions if those functions also need to test for special characters, Integer.MIN_VALUE, Integer.MAX_VALUE, etc. etc.
What should be my line of stopping?
Should I assume for the purpose of my own program:
strB can only be empty or have value
c has value
repeats can only be zero or positive
Sorry for the bother. I'm just genuinely confused about how paranoid I should be with unit testing in general. Should I stay within the bounds of my assumptions, or is that bad practice? Should I have a test method for each potential case, in which case the number of unit test methods would scale up quite quickly?
There's no right answer, and it's a matter of personal opinion and feelings.
However, some things I believe are universal:
If you adopt Test Driven Development, in which you never write any non-test code unless you've first written a failing unit test, this will guide you in the number of tests you write. With some experience in TDD, you'll get a feel for this, so even if you need to write unit tests for old code that wasn't TDD'd, you'll be able to write tests as if it was.
If a class has too many unit tests, that's an indication that the class does too many things. "Too many" is hard to quantify, however. But when it feels like too many, try to split the class up into more classes each with fewer responsibilities.
Mocking is fundamental to unit testing -- without mocking collaborators, you're testing more than the "unit". So learn to use a mocking framework.
Checking for nulls, and testing those checks, can add up to a lot of code. If you adopt a style in which you never produce a null, then your code never needs to handle a null, and there's no need to test what happens in that circumstance.
There are exceptions to this, for example if you're supplying library code and want to give friendly invalid-parameter errors to the caller.
For some methods, property tests can be a viable way to hit your code with a lot of tests. JUnit's @Theory is one implementation of this. It allows you to test assertions like 'plus(x, y) returns a positive number for any positive x and positive y'.
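The property-test idea can be approximated without the Theories machinery by looping over generated inputs and checking the property for each. This is a sketch of the concept, not the actual @Theory API:

```java
import java.util.Random;

public class PlusPropertyTest {
    static int plus(int x, int y) { return x + y; }

    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed so the run is reproducible
        for (int i = 0; i < 1000; i++) {
            // Generate positive x and y, small enough that the sum cannot overflow.
            int x = 1 + random.nextInt(10000);
            int y = 1 + random.nextInt(10000);
            // The property: plus(x, y) is positive for any positive x and y.
            assert plus(x, y) > 0 : "property failed for x=" + x + ", y=" + y;
        }
        System.out.println("property held for all sampled inputs");
    }
}
```

With JUnit's `@RunWith(Theories.class)`, the loop and the data generation are replaced by `@DataPoints` and an `Assume` precondition, but the assertion is the same.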
The general rule of thumb is usually that every 'fork' in your code should have a test, meaning you should cover all possible edge-cases.
For example, if you have the following code:
if (x != null) {
    if (x.length > 100) {
        // do something
    } else {
        // do something else
    }
} else {
    // do something completely else
}
You should have three test cases: one for null, one for a length of 100 or less, and one for a length greater than 100.
This is if you are strict and want to be 100% covered.
Whether you use separate tests or a parameterized test is less important; it's a matter of style and you can go either way. The more important thing is to cover all the cases.
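For the branching code above, the three tests might look like this. The logic is wrapped in a hypothetical classify method (with x as a String) purely so there is something to assert on:

```java
public class BranchCoverageTest {
    // Hypothetical method wrapping the nested-if logic from the example,
    // returning a label so each branch is observable.
    static String classify(String x) {
        if (x != null) {
            if (x.length() > 100) {
                return "long";
            } else {
                return "short";
            }
        } else {
            return "null";
        }
    }

    public static void main(String[] args) {
        assert classify(null).equals("null");            // outer else branch
        assert classify("abc").equals("short");          // inner else branch
        assert classify("x".repeat(101)).equals("long"); // inner if branch (Java 11+ repeat)
        System.out.println("all three branches covered");
    }
}
```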
The set of test cases you have developed are the result of a black-box test-design approach, in fact they look as if you had applied the classification-tree-method. While it is perfectly fine to temporarily take a black-box perspective when doing unit-testing, limiting yourself to black-box testing only can have some undesired effects: First, as you have observed, you can end up with the Cartesian product of all possible scenarios for each of the inputs, second, you will probably still not find bugs that are specific to the chosen implementation.
By (also) taking a glass-box (aka white-box) perspective, you can avoid creating useless tests: Knowing that your code as the first step handles the special case that the number of repeats is negative means you don't have to multiply this scenario with all the others. Certainly, this means you are making use of your knowledge of implementation details: If you were later to change your code such that the check against negative repeats comes at several places, then you better also adjust your test suite.
Since there seems to be a wide spread concern about testing implementation details: unit-testing is about testing the implementation. Different implementations have different potential bugs. If you don't use unit-testing for finding these bugs, then any other test level (integration, subsystem, system) is definitely less suited for finding them systematically - and in a bigger project you don't want implementation level bugs escape to later development phases or even to the field. On a side note, coverage analysis implies you take a glass-box perspective, and TDD does the same.
It is correct, however, that a test suite or individual tests should not unnecessarily depend on implementation details - but that is a different statement than saying you should not depend on implementation details at all. A plausible approach therefore is, to have a set of tests that make sense from a black-box perspective, plus tests that are meant to catch those bugs that are implementation specific. The latter need to be adjusted when you change your code, but the effort can be reduced by various means, e.g. using test helper methods etc.
In your case, taking a glass-box perspective would probably reduce the number of tests with negative repeats to one, also the null char cases, possibly also the NullStrB cases (assuming you handle that early by replacing the null with an empty string), and so on.
First, use a code coverage tool. That will show you which lines of your code are executed by your tests. IDEs have plugins for code coverage tools so that you can run a test and see which lines were executed. Shoot for covering every line, that may be hard for some cases but for this kind of utility it is very do-able.
Using the code coverage tool makes uncovered edge cases stand out. For tests that are harder to implement code coverage shows you what lines your test executed, so if there's an error in your test you can see how far it got.
Next, understand that no test suite covers everything. There will always be values you don't test. So pick representative inputs that are of interest, and avoid ones that seem redundant. For instance, is passing in an empty StringBuilder really something you care about? It doesn't affect the behavior of the code. There are special values that may cause a problem, like null. If you are testing a binary search, you'll want to cover the case where the array is really big, to see if the midpoint calculation overflows. Look for the kinds of cases that matter.
If you validate upfront and kick out troublesome values, you don't have to do as much work testing. One test for null StringBuilder passed in to verify you throw IllegalArgumentException, one test for negative repeat value to verify you throw something for that.
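A sketch of appendRepeats with that upfront validation, which shrinks the test matrix to the rejection tests plus the happy path. The choice of IllegalArgumentException is an assumption; the question does not specify the error behavior:

```java
public class AppendRepeatsTest {
    // Validates upfront, so the rest of the code never sees bad input.
    static void appendRepeats(StringBuilder strB, char c, int repeats) {
        if (strB == null) throw new IllegalArgumentException("strB must not be null");
        if (repeats < 0) throw new IllegalArgumentException("repeats must be >= 0");
        for (int i = 0; i < repeats; i++) {
            strB.append(c);
        }
    }

    // Helper: true if the call was rejected with IllegalArgumentException.
    static boolean rejects(StringBuilder strB, char c, int repeats) {
        try {
            appendRepeats(strB, c, repeats);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello");
        appendRepeats(sb, 'h', 5);
        assert sb.toString().equals("hellohhhhh"); // happy path from the question
        assert rejects(null, 'h', 5);              // null builder rejected upfront
        assert rejects(new StringBuilder(), 'h', -1); // negative repeats rejected
        System.out.println("appendRepeats tests passed");
    }
}
```

Because the null and negative checks each fire before anything else, they no longer need to be multiplied against the other input dimensions.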
Finally, tests are for the developers. Do what is useful for you.

If local variables can't be tested, then in what other ways can the variable values be checked? [duplicate]

This is going to be a long question, driven by the desire to know how something works, as opposed to the conventional methodologies.
I came across a very interesting application, Codecademy, that was actually testing the local variables within a main method.
Here is a screenshot of the page that got me thinking about how this can be possible.
I found some similar questions on Stack Overflow.
Is it possible to obtain variables in main method from junit test?
How to use BCEL or ASM to get the value of a local declared variable in JAVA?
I am not satisfied with just knowing that it can't be done. What I want to know is: is there a way, like the Java Compiler API or something else, that can point me towards understanding how such an application is possible?
Testing local variables in implementation validation (like unit testing, or automated QA testing) is generally bad practice.
Local variables depend on the particular implementation, and the particular implementation should be hidden behind a reasonably abstract API, to allow the developers to replace the implementation in the future, if they get a better idea, without affecting consumers of the result (as long as the API is good enough that it doesn't need any change).
With some very complex implementations/algorithms a developer indeed may be interested to verify particular intermediate/inner results of the complex algorithm, to make the development of the implementation itself easier. At that point it makes sense to create internal inner-API providing reasonably abstract intermediate results, even if they are hard-bonded to particular algorithm, and test that inner-API in unit tests. But in case of the algorithm replacement you have to accept change of the inner-API, and all the inner unit tests. There should be still reasonable high-level abstract API which is unaffected by the inner changes.
A need to have tests on the level of local variables should indicate some deeper problem of the code base.
Your particular use-case of Java tutorial is quite unrelated to real Java code development, as the purpose of that lecture is completely different.
As was shown by the simple myNumber = 21 + 21; test, the validation in the lecture is purely text-compare based, probably using some regex on the source code entered by the student. It is not even checking the resulting bytecode, as that is identical to myNumber = 42;.
If you are working on some lecturing system, using some kind of custom virtual machine and debug interface may work, but in case of simple lectures even text-compare solution may be enough.
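A text-compare validator of that kind might be as simple as a regex over the submitted source. This is a guess at how such a lecture check could work, not Codecademy's actual implementation:

```java
import java.util.regex.Pattern;

public class LectureValidator {
    // Accepts "myNumber = 42;" with flexible whitespace, and nothing else.
    static final Pattern EXPECTED = Pattern.compile("myNumber\\s*=\\s*42\\s*;");

    static boolean validate(String submission) {
        return EXPECTED.matcher(submission).find();
    }

    public static void main(String[] args) {
        assert validate("int myNumber = 42;");
        // Compiles to the same bytecode, but fails the text compare:
        assert !validate("int myNumber = 21 + 21;");
        System.out.println("validator behaves as described");
    }
}
```

This illustrates exactly the weakness described above: semantically equivalent answers are rejected because only the surface text is inspected.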
Usually, once the student is advanced enough to solve some task, you can start to use input/output from/to stdin/stdout to create automated tests that validate the student's solution against a set of known input/output pairs, as sites like https://www.hackerrank.com/ and various programming contests do. At that point you don't need access to anything, neither local variables nor API unit testing; you just redirect stdin to feed the solution the desired input, and capture stdout to compare it with the designed output.
But lecture validation is almost completely unrelated to unit testing.
And overly stringent testing for an expected result can even be counterproductive in the lecturing process!
Imagine you task the student to write code which outputs the sum of squares from 1 to N, and you will accept only:
int sum = 0;
for (int i = 1; i <= N; ++i) {
    sum += i * i;
}
(verified on bytecode level, in a way that names of variables and ++ suffix/prefix doesn't matter)
Now some student will try to submit this:
int sum = 0, square = 0, sq_delta = -1;
for (int i = 1; i <= N; ++i) {
    sum += (square += (sq_delta += 2));
}
And it will fail... (even if his solution would be absolutely superior around 1985 to the variant with actual multiplication) ... sad story. :)
Probably we are over-thinking this.
If I read your question correctly, you are wondering how the online tool used here can know about the content of variables inside some main() method.
Thing is: this isn't necessarily a Java feature.
Keep in mind: it is their web application. They can do whatever they implement in there. In other words: of course you can use a Java compiler to generate an AST (abstract syntax tree) representation of any piece of Java code. And of course, that AST contains all the information that is present in the corresponding source code.

Should we check no changes in unit tests?

This example is custom-made to illustrate my doubt.
class Car {
    String color = null;
    int tyre = 0;
}

void fillCar(Car car, boolean b) {
    if (b) {
        car.color = "Red";
    } else {
        car.tyre = 4;
    }
}
Now I need to unit test my code.
My test1 is (Car, true) and test2 is (Car, false).
My question:
Do I need to test "tyre == 4" in test1, and along similar lines, do I need to check "color == null" in test2?
The answer is YES if this is part of the functional requirement of the method.
For example, if your specifications say that when b is false the value of tyre must be 4 and the other variables do not matter, then it is not necessary. But if your specifications say that not only must tyre be 4, but the rest of the variables must also keep their values, then you should check that too.
Take into account that unit tests are useful not only for checking that your code is fine now, but also for making sure that when you change your code in the future, you do not corrupt the expected functionality.
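Under that stricter reading, each test pins down both fields: the one the branch changes and the one it must leave alone. A self-contained sketch (Car rendered as a small Java class, since the question's example is pseudocode):

```java
public class FillCarTest {
    // Java rendering of the question's Car object.
    static class Car {
        String color = null;
        int tyre = 0;
    }

    static void fillCar(Car car, boolean b) {
        if (b) {
            car.color = "Red";
        } else {
            car.tyre = 4;
        }
    }

    public static void main(String[] args) {
        Car car1 = new Car();
        fillCar(car1, true);
        assert "Red".equals(car1.color); // the field the branch changes
        assert car1.tyre == 0;           // the field that must stay untouched

        Car car2 = new Car();
        fillCar(car2, false);
        assert car2.tyre == 4;
        assert car2.color == null;
        System.out.println("both branches pin down both fields");
    }
}
```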
Generally, there is no harm in testing all parts of the code. In fact, I would encourage it. It's a very easy way of checking that no mistakes have been made in the logic.
In this case, the code is simple enough to see the result. However, it could become much more complex if Car is extended, or more functionality added.
There is a lot of argument about this, but unit testing started to make sense to me once I focused on verifying the interface. In this case you have a function which exposes an interface: you pass in a car object and a boolean, and certain modifications are made to the car object depending on the value of the boolean. You quite rightly see two unit tests covering that, and personally I would stop there. If you are worried about nulls showing up, you can cover that in the unit tests when the car objects are constructed. If you assign something other than a straightforward literal, something that might be null, then tests for nulls would make sense.
One more tip: unit testing works much better for me within the context of test-driven development (TDD). YMMV, but I find non-TDD code very hard to unit test.
Finally, just a mention that I found learning TDD/unit testing to be well worth it.

Is there some kind of 'assertion' coverage tool (for Java)?

Before this question is marked as duplicate, please read it. ;)
There are already several questions about coverage tools and such, however this is a bit different than the usual ones (I hope).
According to Wikipedia, there are several different kinds of 'coverage', each measuring a different aspect of what the term means.
Here a little example:
public class Dummy {
    public int a = 0;
    public int b = 0;
    public int c = 0;

    public void doSomething() {
        a += 5;
        b += 5;
        c = b + 5;
    }
}

public class DummyTest {
    @Test
    public void testDoSomething() {
        Dummy dummy = new Dummy();
        dummy.doSomething();
        assertEquals(10, dummy.c);
    }
}
As you can see, the test will have 100% line coverage; the assertion on the value of field 'c' covers this field and indirectly also field 'b', but there is no assertion coverage on field 'a'.
This means that the test covers 100% of the code lines and assures that c contains the expected value, and most probably b contains the correct one too; however, a is not asserted at all and may have a completely wrong value.
So... now the question: Is there a tool able to analyze the (java) code and create a report about which fields/variables/whatever have not been (directly and/or indirectly) covered by an assertion?
(ok when using getters instead of public fields you would see that getA() is not called, but well this is not the answer I'd like to hear ;) )
Well, "cover" unfortunately means different things to different people... This test indeed exercises 100% of the code lines, but it does not test them all.
What you're looking for is handled well by mutation testing.
Have a look at Jester, which uses mutation testing to report on code coverage.
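Mutation testing makes the gap concrete: mutate the line that updates a, rerun the tests, and if they still pass, the mutant "survives", revealing that a is never asserted. This is a hand-rolled sketch of the idea, not what Jester actually generates:

```java
public class MutationSketch {
    // Original behaviour of Dummy.doSomething(), returned as {a, b, c}.
    static int[] doSomething() {
        int a = 0, b = 0, c = 0;
        a += 5;
        b += 5;
        c = b + 5;
        return new int[] { a, b, c };
    }

    // Mutant: the update to 'a' is deleted.
    static int[] doSomethingMutant() {
        int a = 0, b = 0, c = 0;
        // a += 5;  <-- mutated away
        b += 5;
        c = b + 5;
        return new int[] { a, b, c };
    }

    // The original test only asserts on c (index 2).
    static boolean testOnlyC(int[] result) {
        return result[2] == 10;
    }

    public static void main(String[] args) {
        assert testOnlyC(doSomething());
        // The mutant also passes: the test suite cannot "kill" it,
        // which is exactly the missing assertion coverage on 'a'.
        assert testOnlyC(doSomethingMutant());
        System.out.println("mutant survived: 'a' is not asserted");
    }
}
```

A mutation testing tool automates this: it generates many such mutants and reports the ones your assertions fail to kill.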
There are hundreds of definition of "test coverage", of which the COTS tools only handle a very few at best. (My company builds test coverage tools so we track this kind of thing). See this lecture on test coverage for an interesting overview.
The closest definition I have heard is one for data coverage; depending on your definition :-{ it tells you that each data item has been written and read during execution. The lecture talks about verifying that every write and every read has been exercised as a special case.
I don't know the hundreds of definitions by heart, but you may have invented yet one more: data coverage restricted to assertions.
There are assertions in Java, if that is what you are looking for.
To see how much code has been covered, there are tools you can use. Here are some examples:
Cobertura
Clover

Basic JUnit Questions

I was testing a string multiplier class with a multiply() method that takes two numbers as input (as String) and returns the resulting number (as String):
public String multiply(String num1, String num2);
I have done the implementation and created a test class with the following test cases for the input String parameters:
valid numbers
characters
special symbol
empty string
Null value
0
Negative number
float
Boundary values
Numbers that are valid but their product is out of range
numbers with a + sign (+23)
Now my questions are these:
I'd like to know whether each and every assertEquals() should be in its own test method. Or can I group similar test cases, like testInvalidArguments(), to contain all the asserts involving invalid characters, since all of them throw the same NumberFormatException?
If testing an input value like character ("a"), do I need to include test cases for ALL scenarios?
"a" as the first argument
"a" as the second argument
"a" and "b" as the 2 arguments
As per my understanding, the benefit of these unit tests is to find the cases where input from a user might fail and result in an exception. Then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo? When is enough?
Following from the above point, have I successfully tested the multiply() method?
Unit testing is great (in the 200 KLOC project I'm working on, I've got as much unit test code as regular code), but (assuming a correct unit test):
a unit test that passes does not guarantee that your code works
Think of it this way:
a unit test that fails proves your code is broken
It is really important to realize this.
In addition to that:
it is usually impossible to test every possible input
And then, when you're refactoring:
all your unit tests passing does not mean you didn't introduce a regression
But:
if one of your unit tests fails, you know you have introduced a regression
This is really fundamental and should be unit testing 101.
1) I do think it's a good idea to limit the number of assertions you make in each test. JUnit only reports the first failure in a test, so if you have multiple assertions some problems may be masked. It's more useful to be able to see everything that passed and everything that failed. If you have 10 assertEquals in one test and the first one fails, then you just don't know what would have happened with the other 9. Those would be good data points to have when debugging.
2) Yes, you should include tests for all of your inputs.
3) It's not just end-user input that needs to be tested. You'll want to write tests for any public methods that could possibly fail. There are some good guidelines for this, particularly concerning getters and setters, at the JUnit FAQ.
4) I think you've got it pretty well covered. (At least I can't think of anything else, but see #5).
5) Give it to some users to test out. They always find sample data that I never think of testing. :)
1) There is a tradeoff between granularity of tests (and hence ease of diagnosis) and verbosity of your unit test code. I'm personally happy to go for relatively coarse-grained test methods, especially once the tests and tested code have stabilized. The granularity issue is only relevant when tests fail. (If I get a failure in a multi-assertion testcase, I either fix the first failure and repeat, or I temporarily hack the testcase as required to figure out what is going on.)
2) Use your common sense. Based on your understanding of how the code is written, design your tests to exercise all of the qualitatively different subcases. Recognize that it is impossible to test all possible inputs in all but the most trivial cases.
3) The point of unit testing is to provide a level of assurance that the methods under test do what they are required to do. What this means depends on the code being tested. For example, if I am unit testing a sort method, validation of user input is irrelevant.
4) The coverage seems reasonable. However, without a detailed specification of what your class is required to do, and examination of the actual unit tests, it is impossible to say if you have covered everything. For example, is your method supposed to cope with leading / trailing whitespace characters, numbers with decimal points, numbers like "123,456", numbers expressed using non-latin digits, numbers in base 42?
5) Define "successfully tested". If you mean, do my tests prove that the code has no errors, then the answer is a definite "NO". Unless the unit tests enumerate each and every possible input, they cannot constitute a proof of correctness. (And in some circumstances, not even testing all inputs is sufficient.)
In all but the most trivial cases, testing cannot prove the absence of bugs. The only thing it can prove is that bugs are present. If you need to prove that a program has no bugs, you need to resort to "formal methods"; i.e. applying formal theorem proving techniques to your program.
And, as another answer points out, you need to give it to real users to see what they might come up with in the way of unexpected input. In other words ... whether the stated or inferred user requirements are actually complete and valid.
The true number of tests is, of course, infinite. That is not practical. You have to choose valid, representative cases. You seem to have done that. Good job.
1) It's best to keep your tests small and focused. That way, when a test fails, it's clear why the test failed. This usually results in a single assertion per test, but not always.
However, instead of hand-coding a test for each individual "invalid scenario", you might want to take a look at JUnit 4.4 Theories (see the JUnit 4.4 release notes and this blog post), or the JUnit Parameterized test runner.
Parameterized tests and Theories are perfect for "calculation" methods like this one. In addition, to keep things organized, I might make two test classes: one for "good" inputs and one for "bad" inputs.
2) You only need to include the test cases that you think are most likely to expose any bugs in your code, not all possible combinations of all inputs (that would be impossible as WizardOfOdds points out in his comments). The three sets that you proposed are good ones, but I probably wouldn't test more than those three. Using theories or parametrized tests, however, would allow you to add even more scenarios.
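A plain-Java stand-in for a parameterized runner: a table of bad inputs, each expected to raise NumberFormatException. The multiply implementation here is illustrative only (it just parses both operands with BigInteger and multiplies); the original question's implementation is not shown:

```java
import java.math.BigInteger;

public class MultiplyBadInputTest {
    // Illustrative implementation: parse both operands and multiply.
    static String multiply(String num1, String num2) {
        return new BigInteger(num1).multiply(new BigInteger(num2)).toString();
    }

    public static void main(String[] args) {
        // Each row is one "invalid scenario"; a parameterized runner
        // would turn each row into its own reported test case.
        String[][] badInputs = {
            { "a", "2" },   // character as first argument
            { "2", "a" },   // character as second argument
            { "a", "b" },   // characters on both sides
            { "", "2" },    // empty string
            { "2.5", "2" }, // float where an integer is expected
        };
        for (String[] pair : badInputs) {
            try {
                multiply(pair[0], pair[1]);
                throw new AssertionError("expected NumberFormatException for "
                        + pair[0] + " * " + pair[1]);
            } catch (NumberFormatException expected) { }
        }
        // A leading + sign is accepted by BigInteger's String constructor.
        assert multiply("+23", "2").equals("46");
        System.out.println("all bad inputs rejected");
    }
}
```

JUnit's Parameterized runner or Theories would report each row as a separate test, which is the main advantage over a hand-rolled loop like this.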
3) There are many benefits to writing unit tests, not just the one you mention. Some other benefits include:
Confidence in your code - you have a high degree of certainty that your code is correct.
Confidence to Refactor - you can refactor your code and know that if you break something, your tests will tell you.
Regressions - You will know right away if a change in one part of the system breaks this particular method unintentionally.
Completeness - The tests forced you to think about the possible inputs your method can receive, and how the method should respond.
5) It sounds like you did a good job with coming up with possible test scenarios. I think you got all the important ones.
I just want to add that with unit testing you can gain even more if you first think of the possible cases and then implement in the test-driven development fashion, because this helps you stay focused on the current case and enables you to create the simplest implementation possible, in DRY fashion. You might also use a test coverage tool, e.g. EclEmma in Eclipse, which is really easy to use and will show you whether your tests have executed all of your code; that can help you determine when it is enough (although this is not a proof, just a metric). Generally, when it comes to unit testing, I was much inspired by Kent Beck's Test-Driven Development by Example book, which I strongly recommend.
