I saw the following code snippet online:
As it mentioned, the argument order of JUnit (expected, actual) and Hamcrest (actual, expected) is reversed.
I was wondering what the reason behind this is. Does the argument order really matter for JUnit or Hamcrest? If someone accidentally places the arguments in the wrong order, will it affect the result?
Well, for Hamcrest, the types are actually different: the left-hand side is an Object and the right-hand side is a Matcher.
For JUnit the difference is "only" semantic, i.e. you get a misleading assert message in case of failure.
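A minimal sketch of both points (JUnit 4 plus Hamcrest; the value 1 stands in for whatever your code actually produced):

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.is;

public class ArgumentOrderTest {

    @Test
    public void argumentOrder() {
        int actual = 1;

        // JUnit: assertEquals(expected, actual).
        // Fails with "expected:<2> but was:<1>".
        assertEquals(2, actual);

        // Swapped arguments still compile and still fail, but the message
        // lies: assertEquals(actual, 2) would report "expected:<1> but was:<2>".

        // Hamcrest: assertThat(actual, matcher). Swapping here is a compile
        // error, because the second parameter must be a Matcher:
        // assertThat(is(2), actual);   // does not compile
        assertThat(actual, is(2));
    }
}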
AssertJ (to add another example) uses a fluent-style interface:
assertThat(actual).isEqualTo(expected);
assertThat(actual).isGreaterThan(expected);
...
A lot of that confusion goes away when you realize that you only need assertThat. This assertion simply does what all the others do, and it does it in a much better way.
And no, the order doesn't really matter. In the end, this is about matching two values. If the two values are identical, you don't care.
The only problem: if you reverse the values, fixing broken tests might be harder, because you first have to work out which value was the actual one and which was the expected one.
Finally, there is a simple mnemonic when using assertThat: it wants A(ctual), then E(xpected). Very much in alphabetical order.
I work in a TDD environment and I use assertTrue a lot, whereas there are many other methods, such as assertEquals etc. I have a class with more than 40 test cases, and they all use assertTrue. Is this acceptable?
I wanted to ask: as a matter of style, is this proper?
Any suggestions?
If you think this question is inappropriate, let me know and I'll delete it.
EDIT:
assertTrue(targetSpecifiers.size() == 2);
assertTrue(targetSpecifiers.get(0).getPlacementId().compareTo(new BigInteger("1")) == 0);
assertTrue(targetSpecifiers.get(1).getPlacementId().compareTo(new BigInteger("2")) == 0);
The main benefit of using other assertions is that they communicate intent better and are likely to give a more meaningful default message in the event of failure.
For example, if you write assertEquals(2, x) and x is actually 1, the failure message will be:
java.lang.AssertionError: expected:<2> but was:<1>
which is more helpful than if you write assertTrue(x == 2) where all you would see is the AssertionError and the stack trace.
This is even more important when you are using TDD, because when you write a failing test first, you want to be confident that the test is failing for the reason you expect, and not because of some accidental behaviour.
Where appropriate you should use the correct assertXXX methods, as they improve the reporting of failures. For example, if you are testing the equality of, say, two strings "abc" (expected) and "abxy" (actual), then the use of assertEquals
assertEquals("abc", "abxy")
will provide better output that is easier to reason about than using assertTrue like below
assertTrue("abc".equals("abxy"))
NOTE: Also pay attention to where you specify the actual and expected arguments. I see a lot of developers not following JUnit's convention that the expected value should be the first parameter to the assertXXX methods. Improper usage leads to a lot of confusion.
My guess is that you've got things like:
assertTrue(expectedValue.equals(actualValue));
That will still test the right thing, but when there's a failure, all it can tell you is that the assertion failed. If you used this instead:
assertEquals(expectedValue, actualValue);
... then the failure will say "Expected: 5; Was: 10" or something similar, which makes it considerably easier to work out what's going on.
Unless you're asserting the result of a method returning boolean or something like that, I find assertTrue to be pretty rarely useful.
If you could give examples of your assertions, we may be able to translate them into more idiomatic ones.
These assertions are perfectly valid, but other assertions are easier to read and deliver better failure messages.
I recommend looking at Hamcrest; it provides the most readable form of assertions and failure messages. Your example of
assertTrue(targetSpecifiers.size() == 2);
assertTrue(targetSpecifiers.get(0).getPlacementId().compareTo(new BigInteger("1")) == 0);
assertTrue(targetSpecifiers.get(1).getPlacementId().compareTo(new BigInteger("2")) == 0);
could be rewritten as
assertThat(targetSpecifiers, hasSize(2));
assertThat(targetSpecifiers.get(0).getPlacementId(), equalTo(BigInteger.valueOf(1)));
assertThat(targetSpecifiers.get(1).getPlacementId(), equalTo(BigInteger.valueOf(2)));
or even more succinctly as
assertThat(targetSpecifiers, contains(
    hasProperty("placementId", equalTo(BigInteger.valueOf(1))),
    hasProperty("placementId", equalTo(BigInteger.valueOf(2)))
));
contains verifies completeness and order, so this covers all three assertions.
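For reference, those matchers come from the following static imports (note that hasSize, contains and hasProperty live in the hamcrest-library jar, not just hamcrest-core):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasProperty;
import static org.hamcrest.Matchers.hasSize;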
I have a class A<X, Y> and I want to refactor it to A<Y, X> in such a way that all references to it are modified as well.
I don't think that has been implemented in Eclipse yet. It's a rather rare refactoring, though...
But if your type hierarchy below A is not too complex yet, try this regex search-and-replace (where A|B|C means A and all subtypes of A, e.g. B and C):
\b(A|B|C)<\s*(\w+)\s*,\s*(\w+)\s*>
Update: since you want to match more sophisticated stuff, try this (without the artificial line breaks):
\b(A|B|C)<
\s*((?:\w+|\?)(?:\s+(?:extends|super)\s+(?:\w+|\?))?)\s*,
\s*((?:\w+|\?)(?:\s+(?:extends|super)\s+(?:\w+|\?))?)\s*>
replace by
$1<$3, $2>
Since you're using Eclipse, you can manually check every replacement for correctness.
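For example (hypothetical declarations; note that nested generics such as A<List<String>, Integer> won't match this pattern), the update regex with that replacement turns

A<String, Integer> pair = new A<String, Integer>();
B<? extends Number, String> b;

into

A<Integer, String> pair = new A<Integer, String>();
B<String, ? extends Number> b;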
In Eclipse, right-click on the method, then Refactor -> Change Method Signature; you can change the order of the parameters there.
If you aren't using Eclipse (or another tool that has good refactoring support; highly recommended if you aren't), then I can think of two ways to do this:
First:
If you're using TDD, then write a test that will only succeed when the variables are properly swapped. Then make the change to the method signature, and make sure your test passes.
Second:
1. Remove the 2nd parameter from the method signature, which will throw compilation errors on all calls to that method
2. Go to each of the lines that are failing compilation, and carefully swap the variables
3. Put the 2nd variable back into the method signature, in the new, reversed order
4. Run some tests to make sure it still works the way you expect it to
The second method is obviously ugly. But if you aren't using an IDE with good refactoring support, compilation errors are a good way to capture 100% of the calls to that method (at least within your project). If you're writing a code library that is used by other people, or by other programs, then it becomes much more complicated to communicate that change to all affected parties.
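A minimal sketch of the compile-error approach, with a hypothetical configure method:

class Config {
    // Step 0 (original signature): void configure(String name, int retries)
    // Step 1 (temporary): void configure(String name) - all call sites break.

    // Step 3: the 2nd parameter restored in its new, reversed position.
    void configure(int retries, String name) {
        // ...
    }

    void caller() {
        // Step 2: each call site that broke in step 1 gets its arguments
        // swapped by hand while fixing the compile error.
        configure(3, "db");
    }
}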
I have two Java classes that are very similar in semantics but differ in syntax. The differences are minor, like:
Changes in variable names,
Changes in position of some statements (with no dependent lines in between),
Extra imports, etc.
I need to compare these two classes to prove that they are indeed semantically identical. The same needs to be done for a large number of java file pairs.
The first approach, reading from the two files and comparing the lines with logic to deal with the differences mentioned above, seems inefficient. Is there some other way that I can achieve this task? Any helpful APIs out there?
Compile both of the classes without debug information and then decompile them back to source files. The decompiled files should be a lot more similar than the original source files.
You can improve this further by running some optimizations on the compiled files. For example you can use Proguard with just shrinking enabled to removed unused code.
Changes in position of some statements can be hard to detect though.
If you want to examine the changes in the code try Araxis Merge or WinMerge.
But if you want logical differences, I am afraid you might have to do it manually.
I would advise to use one of these tools to look for textual changes and then look for logical differences.
There are a lot of similarity checkers out there, and as yet there is no perfect tool for this. Each has its own advantages and disadvantages. The approaches generally fall into two categories: token-based or tree-based.
Token-based similarity checking is usually done with regular expressions, but other approaches are possible. In one of my projects at university, we developed one utilizing an alignment strategy from the bioinformatics field. The main disadvantage of this technique shows up when the sizes of the two sources aren't roughly equal.
A tree-based approach is more like a compiler, so with some compilation techniques it's possible (well, more or less) to check for this. The disadvantage of the tree-based approach is that its comparison complexity is exponential.
Comparing line by line won't work. I think you may need to use a parser. I would suggest that you take a look at ANTLR. It should have a Java grammar in which you could put the actions that do the comparison.
As far as I know there's no way to compare the semantics of two Java classes automatically. Take for example the following two methods:
public String m1(String a, int b) { ... }
and
public String m2(String x, int y) { ... }
Apart from changes in variable and method names, their signatures are the same: same return type, and same input types. However, this is no guarantee that the two methods are semantically equivalent. For example, m1 could return a string consisting of the first b characters of a, while m2 could return a string consisting of y repetitions of x. As you can see, although only variables and names change, the semantics of the two methods are totally different.
I don't see an easy way out for your problem. You can perhaps make some assumptions and try the following approach:
assume that the method names in the two classes are the same
write test cases (for example with JUnit) for all the methods in the first class
run the test cases on the second class
ensure that the second class does not have other (untested) methods (for example using reflection)
This approach gives you an idea about equivalent semantics, but it makes strong assumptions.
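A rough sketch of that approach (OriginalAdder and RewrittenAdder are hypothetical stand-ins for one of your class pairs):

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

class OriginalAdder  { public int add(int a, int b) { return a + b; } }
class RewrittenAdder { public int add(int x, int y) { return y + x; } }

public class SemanticEquivalenceTest {

    @Test
    public void bothClassesAgreeOnSampleInputs() {
        // Run the same inputs through both classes and compare the outputs.
        int[][] samples = { {2, 2}, {0, 5}, {-3, 3} };
        for (int[] s : samples) {
            assertEquals(new OriginalAdder().add(s[0], s[1]),
                         new RewrittenAdder().add(s[0], s[1]));
        }
    }

    @Test
    public void secondClassHasNoUntestedExtraMethods() {
        // Reflection: the second class declares no methods beyond the first.
        assertEquals(methodNames(OriginalAdder.class), methodNames(RewrittenAdder.class));
    }

    private Set<String> methodNames(Class<?> cls) {
        return Arrays.stream(cls.getDeclaredMethods())
                     .map(Method::getName)
                     .collect(Collectors.toSet());
    }
}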
As a final remark, let me add that specifying the semantics of programs is an interesting and open research topic. Some interesting developments in this area include research on Semantic Web Services. A widely adopted approach to give machine-processable semantics to programs is that of specifying their IOPE: Input and Output types (as in the Java methods above), and their Preconditions and Effects. Preconditions are essentially logical conditions that must hold true for successfully invoking the program, and Effects are formal descriptions of the changes (in the state of the world) caused by the successful execution of the program. Even with IOPE there are a lot of problems ... which I skip in this short description.
I was testing a String multiplier class with a multiply() method that takes two numbers as input (as Strings) and returns the result number (as a String):
public String multiply(String num1, String num2);
I have done the implementation and created a test class with the following test cases involving the input String parameters:
Valid numbers
Characters
Special symbols
Empty string
Null value
0
Negative numbers
Float
Boundary values
Numbers that are valid but whose product is out of range
Numbers with a + sign (+23)
Now my questions are these:
I'd like to know whether "each and every" assertEquals() should be in its own test method. Or can I group similar test cases, like a testInvalidArguments() that contains all the asserts involving invalid characters, since all of them throw the same NumberFormatException?
If testing an input value like a character ("a"), do I need to include test cases for all scenarios?
"a" as the first argument
"a" as the second argument
"a" and "b" as the 2 arguments
As per my understanding, the benefit of these unit tests is to find the cases where input from a user might fail and result in an exception, so that we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is it enough?
Following from the above point, have I successfully tested the multiply() method?
Unit testing is great (in the 200 KLOC project I'm working on, I've got as much unit test code as regular code), but (assuming a correct unit test):
a unit test that passes does not guarantee that your code works
Think of it this way:
a unit test that fails proves your code is broken
It is really important to realize this.
In addition to that:
it is usually impossible to test every possible input
And then, when you're refactoring:
all your unit tests passing does not mean you didn't introduce a regression
But:
if one of your unit tests fails, you know you have introduced a regression
This is really fundamental and should be unit testing 101.
1) I do think it's a good idea to limit the number of assertions you make in each test. JUnit only reports the first failure in a test, so if you have multiple assertions some problems may be masked. It's more useful to be able to see everything that passed and everything that failed. If you have 10 assertEquals in one test and the first one fails, then you just don't know what would have happened with the other 9. Those would be good data points to have when debugging.
2) Yes, you should include tests for all of your inputs.
3) It's not just end-user input that needs to be tested. You'll want to write tests for any public methods that could possibly fail. There are some good guidelines for this, particularly concerning getters and setters, at the JUnit FAQ.
4) I think you've got it pretty well covered. (At least I can't think of anything else, but see #5).
5) Give it to some users to test out. They always find sample data that I never think of testing. :)
1) There is a tradeoff between granularity of tests (and hence ease of diagnosis) and verbosity of your unit test code. I'm personally happy to go for relatively coarse-grained test methods, especially once the tests and tested code have stabilized. The granularity issue is only relevant when tests fail. (If I get a failure in a multi-assertion testcase, I either fix the first failure and repeat, or I temporarily hack the testcase as required to figure out what is going on.)
2) Use your common sense. Based on your understanding of how the code is written, design your tests to exercise all of the qualitatively different subcases. Recognize that it is impossible to test all possible inputs in all but the most trivial cases.
3) The point of unit testing is to provide a level of assurance that the methods under test do what they are required to do. What this means depends on the code being tested. For example, if I am unit testing a sort method, validation of user input is irrelevant.
4) The coverage seems reasonable. However, without a detailed specification of what your class is required to do, and examination of the actual unit tests, it is impossible to say if you have covered everything. For example, is your method supposed to cope with leading/trailing whitespace characters, numbers with decimal points, numbers like "123,456", numbers expressed using non-Latin digits, numbers in base 42?
5) Define "successfully tested". If you mean, do my tests prove that the code has no errors, then the answer is a definite "NO". Unless the unit tests enumerate each and every possible input, they cannot constitute a proof of correctness. (And in some circumstances, not even testing all inputs is sufficient.)
In all but the most trivial cases, testing cannot prove the absence of bugs. The only thing it can prove is that bugs are present. If you need to prove that a program has no bugs, you need to resort to "formal methods"; i.e. applying formal theorem proving techniques to your program.
And, as another answer points out, you need to give it to real users to see what they might come up with in the way of unexpected input. In other words ... whether the stated or inferred user requirements are actually complete and valid.
The true number of possible tests is, of course, infinite. That is not practical. You have to choose valid representative cases. You seem to have done that. Good job.
1) It's best to keep your tests small and focused. That way, when a test fails, it's clear why the test failed. This usually results in a single assertion per test, but not always.
However, instead of hand-coding a test for each individual "invalid scenario", you might want to take a look at JUnit 4.4 Theories (see the JUnit 4.4 release notes and this blog post), or the JUnit Parameterized test runner.
Parameterized tests and Theories are perfect for "calculation" methods like this one. In addition, to keep things organized, I might make two test classes: one for "good" inputs and one for "bad" inputs.
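For example, a Parameterized test for the "bad" inputs might look like this (a sketch; the StringMultiplier implementation here is a stand-in for yours, and I'm assuming invalid input throws NumberFormatException as you describe):

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Stand-in for the class under test: parsing invalid input throws
// NumberFormatException.
class StringMultiplier {
    public String multiply(String num1, String num2) {
        return new java.math.BigInteger(num1)
                .multiply(new java.math.BigInteger(num2)).toString();
    }
}

@RunWith(Parameterized.class)
public class InvalidInputTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "a", "2" },   // character as the first argument
            { "2", "a" },   // character as the second argument
            { "a", "b" },   // characters as both arguments
            { "", "2" },    // empty string
            { "#", "2" },   // special symbol
        });
    }

    private final String num1;
    private final String num2;

    public InvalidInputTest(String num1, String num2) {
        this.num1 = num1;
        this.num2 = num2;
    }

    @Test(expected = NumberFormatException.class)
    public void multiplyRejectsInvalidInput() {
        new StringMultiplier().multiply(num1, num2);
    }
}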
2) You only need to include the test cases that you think are most likely to expose any bugs in your code, not all possible combinations of all inputs (that would be impossible as WizardOfOdds points out in his comments). The three sets that you proposed are good ones, but I probably wouldn't test more than those three. Using theories or parametrized tests, however, would allow you to add even more scenarios.
3) There are many benefits to writing unit tests, not just the one you mention. Some other benefits include:
Confidence in your code - You have a high degree of certainty that your code is correct.
Confidence to Refactor - you can refactor your code and know that if you break something, your tests will tell you.
Regressions - You will know right away if a change in one part of the system breaks this particular method unintentionally.
Completeness - The tests forced you to think about the possible inputs your method can receive, and how the method should respond.
5) It sounds like you did a good job with coming up with possible test scenarios. I think you got all the important ones.
I just want to add that with unit testing you can gain even more if you first think of the possible cases and then implement in the test-driven development fashion, because this will help you stay focused on the current case and will enable you to create the simplest implementation possible, in DRY fashion. You might also use a test coverage tool, e.g. EclEmma in Eclipse, which is really easy to use and will show you whether your tests have executed all of your code; this might help you determine when it is enough (although this is not a proof, just a metric). Generally, when it comes to unit testing, I was much inspired by Kent Beck's Test Driven Development: By Example book, which I strongly recommend.
If I have a basic accessor method that returns an ArrayList,
what exactly would I test for it?
I am very inexperienced when it comes to testing.
That depends on how you expect the method to behave. For example: If someone has called the method and changed the list that was retrieved, do you want those changes to show up the next time the getter is called? Either way, test that behaviour. What does the getter return when the list would be empty? Null or an empty list? This should also be tested.
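For instance, both of those behaviours can be pinned down in tests (a sketch; Basket is a hypothetical class whose getter returns a defensive copy):

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

// Hypothetical class under test.
class Basket {
    private final List<String> items = new ArrayList<String>();

    // Accessor that returns a defensive copy of the internal list.
    public List<String> getItems() {
        return new ArrayList<String>(items);
    }
}

public class BasketTest {

    @Test
    public void changesToTheReturnedListDoNotLeakBack() {
        Basket basket = new Basket();
        basket.getItems().add("surprise");
        // The getter hands out a copy, so the internal list is untouched.
        assertTrue(basket.getItems().isEmpty());
    }

    @Test
    public void emptyBasketReturnsAnEmptyListNotNull() {
        assertNotNull(new Basket().getItems());
        assertTrue(new Basket().getItems().isEmpty());
    }
}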
Typically, writing explicit JUnit tests for accessors is a little overkill (what are you testing? return foo;). Using a code coverage tool such as Clover can help you target your testing efforts at your most complicated code first.
Interesting questions for you:
How much should a unit test "test"
How to use Junit and Hibernate usefully
What should not be unit tested
Edit:
Added some of my favorite questions here on Stack Overflow regarding JUnit and unit testing.
Always keep in mind that test code is also code, and for every 1,000 lines of code you produce at least 4 bugs. So test what doesn't work, and don't write tests for something that can't possibly break (like code generated by your IDE). If it does break, write a test :)
In general unit tests should test that your method does what it states it should do.
If your method returns an ArrayList, your basic test is to assert that an ArrayList is indeed returned when it is called.
The next level of detail in the test is to check whether the ArrayList is constructed correctly. Have the values you expect in it been filled in correctly? If it's supposed to be an empty list, is that the case?
Now that you have your "sunny day" case (i.e. the method works under normal conditions), you should add some negative (or "rainy day") cases if appropriate. If the method accepts a length for the array, what happens if you pass in a negative number, or Integer.MAX_VALUE, etc.?
As stated in another answer, this is probably overkill for a simple accessor, but the principles apply to any unit tests you need to write.
Depends on your requirements. You might test:
If the return value is not null
If the returned collection is not empty
If the returned collection is modifiable/unmodifiable
If the returned collection is sorted
If the returned collection contains all expected values
If the accessor method does not throw a runtime exception
But, as I said, it depends on the requirements; it depends on the 'kind' of collection you expect when you call the accessor. Maybe you allow setting the list to null but create an empty list. Then the test could make sure that you really get an empty list when you set the list to null.
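For instance, the null-handling and modifiability checks could look like this (a sketch; Holder is a hypothetical class whose getter promises a non-null, unmodifiable list):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

// Hypothetical class under test.
class Holder {
    private List<String> values;

    public void setValues(List<String> values) {
        this.values = values;
    }

    // Contract: never null, never modifiable by the caller.
    public List<String> getValues() {
        return values == null ? Collections.<String>emptyList()
                              : Collections.unmodifiableList(values);
    }
}

public class HolderTest {

    @Test
    public void returnsAnEmptyListWhenSetToNull() {
        Holder holder = new Holder();
        holder.setValues(null);
        assertNotNull(holder.getValues());
        assertTrue(holder.getValues().isEmpty());
    }

    @Test(expected = UnsupportedOperationException.class)
    public void returnedListIsUnmodifiable() {
        Holder holder = new Holder();
        holder.setValues(Arrays.asList("a"));
        holder.getValues().add("b");
    }
}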
Hope it helps to give you an idea!