Basic JUnit Questions - Java

I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String)
public String multiply(String num1, String num2);
I have done the implementation and created a test class with the following test cases involving the input String parameter as
valid numbers
characters
special symbol
empty string
Null value
0
Negative number
float
Boundary values
Numbers that are valid but their product is out of range
numbers with + sign (+23)
Now my questions are these:
I'd like to know if "each and every" assertEquals() should be in its own test method. Or can I group similar test cases like testInvalidArguments() to contain all asserts involving invalid characters, since ALL of them throw the same NumberFormatException?
If testing an input value like character ("a"), do I need to include test cases for ALL scenarios?
"a" as the first argument
"a" as the second argument
"a" and "b" as the 2 arguments
As per my understanding, the benefit of these unit tests is to find out the cases where the input from a user might fail and result in an exception. And then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo? When is enough?
Following from the above point, have I successfully tested the multiply() method?

Unit testing is great (in the 200 KLOC project I'm working on I've got as much unit test code as regular code) but (assuming a correct unit test):
a unit test that passes does not guarantee that your code works
Think of it this way:
a unit test that fails proves your code is broken
It is really important to realize this.
In addition to that:
it is usually impossible to test every possible input
And then, when you're refactoring:
all your unit tests passing does not mean you didn't introduce a regression
But:
if one of your unit tests fails, you know you have introduced a regression
This is really fundamental and should be unit testing 101.

1) I do think it's a good idea to limit the number of assertions you make in each test. JUnit only reports the first failure in a test, so if you have multiple assertions some problems may be masked. It's more useful to be able to see everything that passed and everything that failed. If you have 10 assertEquals in one test and the first one fails, then you just don't know what would have happened with the other 9. Those would be good data points to have when debugging.
2) Yes, you should include tests for all of your inputs.
3) It's not just end-user input that needs to be tested. You'll want to write tests for any public methods that could possibly fail. There are some good guidelines for this, particularly concerning getters and setters, at the JUnit FAQ.
4) I think you've got it pretty well covered. (At least I can't think of anything else, but see #5).
5) Give it to some users to test out. They always find sample data that I never think of testing. :)

1) There is a tradeoff between granularity of tests (and hence ease of diagnosis) and verbosity of your unit test code. I'm personally happy to go for relatively coarse-grained test methods, especially once the tests and tested code have stabilized. The granularity issue is only relevant when tests fail. (If I get a failure in a multi-assertion testcase, I either fix the first failure and repeat, or I temporarily hack the testcase as required to figure out what is going on.)
2) Use your common sense. Based on your understanding of how the code is written, design your tests to exercise all of the qualitatively different subcases. Recognize that it is impossible to test all possible inputs in all but the most trivial cases.
3) The point of unit testing is to provide a level of assurance that the methods under test do what they are required to do. What this means depends on the code being tested. For example, if I am unit testing a sort method, validation of user input is irrelevant.
4) The coverage seems reasonable. However, without a detailed specification of what your class is required to do, and examination of the actual unit tests, it is impossible to say if you have covered everything. For example, is your method supposed to cope with leading / trailing whitespace characters, numbers with decimal points, numbers like "123,456", numbers expressed using non-latin digits, numbers in base 42?
5) Define "successfully tested". If you mean, do my tests prove that the code has no errors, then the answer is a definite "NO". Unless the unit tests enumerate each and every possible input, they cannot constitute a proof of correctness. (And in some circumstances, not even testing all inputs is sufficient.)
In all but the most trivial cases, testing cannot prove the absence of bugs. The only thing it can prove is that bugs are present. If you need to prove that a program has no bugs, you need to resort to "formal methods"; i.e. applying formal theorem proving techniques to your program.
And, as another answer points out, you need to give it to real users to see what they might come up with in the way of unexpected input. In other words ... whether the stated or inferred user requirements are actually complete and valid.

The true number of possible tests is, of course, infinite. That is not practical. You have to choose valid representative cases. You seem to have done that. Good job.

1) It's best to keep your tests small and focused. That way, when a test fails, it's clear why the test failed. This usually results in a single assertion per test, but not always.
However, instead of hand-coding a test for each individual "invalid scenario", you might want to take a look at JUnit 4.4 Theories (see the JUnit 4.4 release notes and this blog post), or the JUnit Parameterized test runner.
Parameterized tests and Theories are perfect for "calculation" methods like this one (a sketch of the Parameterized approach follows this answer). In addition, to keep things organized, I might make two test classes, one for "good" inputs, and one for "bad" inputs.
2) You only need to include the test cases that you think are most likely to expose any bugs in your code, not all possible combinations of all inputs (that would be impossible as WizardOfOdds points out in his comments). The three sets that you proposed are good ones, but I probably wouldn't test more than those three. Using theories or parametrized tests, however, would allow you to add even more scenarios.
3) There are many benefits to writing unit tests, not just the one you mention. Some other benefits include:
Confidence in your code - You have a high degree of certainty that your code is correct.
Confidence to Refactor - you can refactor your code and know that if you break something, your tests will tell you.
Regressions - You will know right away if a change in one part of the system breaks this particular method unintentionally.
Completeness - The tests forced you to think about the possible inputs your method can receive, and how the method should respond.
5) It sounds like you did a good job with coming up with possible test scenarios. I think you got all the important ones.
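For illustration, here is a minimal sketch of the Parameterized approach from point 1), assuming a StringMultiplier class exposing the multiply(String, String) method described in the question; the stand-in implementation at the bottom exists only so the sketch compiles:
import java.math.BigInteger;
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class MultiplyBadInputTest {

    // Each row is one invalid-input scenario from the question.
    @Parameters
    public static Collection<Object[]> badInputs() {
        return Arrays.asList(new Object[][] {
            { "a", "3" },   // character as the first argument
            { "3", "a" },   // character as the second argument
            { "a", "b" },   // characters as both arguments
            { "", "3" },    // empty string
            { "#", "3" },   // special symbol
        });
    }

    private final String num1;
    private final String num2;

    public MultiplyBadInputTest(String num1, String num2) {
        this.num1 = num1;
        this.num2 = num2;
    }

    // Every invalid input is expected to fail with the same NumberFormatException.
    @Test(expected = NumberFormatException.class)
    public void multiplyRejectsInvalidInput() {
        new StringMultiplier().multiply(num1, num2);
    }

    // Stand-in so the sketch compiles; in practice this is the class under test.
    static class StringMultiplier {
        public String multiply(String num1, String num2) {
            return new BigInteger(num1).multiply(new BigInteger(num2)).toString();
        }
    }
}
Each row shows up as its own test in the results, so one failing value does not mask the others.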

I just want to add that with unit testing you can gain even more if you first think of the possible cases and then implement in the test-driven development fashion, because this will help you stay focused on the current case and will enable you to create the simplest implementation possible in a DRY fashion. You might also use a test coverage tool, e.g. EclEmma in Eclipse, which is really easy to use and will show you whether the tests have executed all of your code, which might help you determine when it is enough (although this is not a proof, just a metric). Generally, when it comes to unit testing, I was much inspired by Kent Beck's Test Driven Development: By Example book; I strongly recommend it.

What To Unit Test

I'm a bit confused about how much I should dedicate to unit tests.
Say I have a simple function like:
appendRepeats(StringBuilder strB, char c, int repeats)
[This function will append char c repeats number of times to strB.
e.g.:
strB = "hello"
c = "h"
repeats = 5
// result
strB = "hellohhhhh"
]
For unit testing this function, I feel there are already so many possibilities:
AppendRepeats_ZeroRepeats_DontAppend
AppendRepeats_NegativeRepeats_DontAppend
AppendRepeats_PositiveRepeats_Append
AppendRepeats_NullStrBZeroRepeats_DontAppend
AppendRepeats_NullStrBNegativeRepeats_DontAppend
AppendRepeats_NullStrBPositiveRepeats_Append
AppendRepeats_EmptyStrBZeroRepeats_DontAppend
AppendRepeats_EmptyStrBNegativeRepeats_DontAppend
AppendRepeats_EmptyStrBPositiveRepeats_Append
etc. etc.
strB can be null or empty or have value.
c can be null or have value
repeats can be negative or positive or zero
That seems already 3 * 2 * 3 = 18 test methods. Could be a lot more on other functions if those functions also need to test for special characters, Integer.MIN_VALUE, Integer.MAX_VALUE, etc. etc.
What should be my line of stopping?
Should I assume for the purpose of my own program:
strB can only be empty or have value
c has value
repeats can only be zero or positive
Sorry for the bother. I'm just genuinely confused about how paranoid I should be with unit testing in general. Should I stay within the bounds of my assumptions, or is that bad practice and should I have a method for each potential case, in which case the number of unit test methods would scale exponentially quite quickly?
There's no right answer, and it's a matter of personal opinion and feelings.
However, some things I believe are universal:
If you adopt Test Driven Development, in which you never write any non-test code unless you've first written a failing unit test, this will guide you in the number of tests you write. With some experience in TDD, you'll get a feel for this, so even if you need to write unit tests for old code that wasn't TDD'd, you'll be able to write tests as if it was.
If a class has too many unit tests, that's an indication that the class does too many things. "Too many" is hard to quantify, however. But when it feels like too many, try to split the class up into more classes each with fewer responsibilities.
Mocking is fundamental to unit testing -- without mocking collaborators, you're testing more than the "unit". So learn to use a mocking framework.
Checking for nulls, and testing those checks, can add up to a lot of code. If you adopt a style in which you never produce a null, then your code never needs to handle a null, and there's no need to test what happens in that circumstance.
There are exceptions to this, for example if you're supplying library code, and want to give friendly invalid parameter errors to the caller
For some methods, property tests can be a viable way to hit your code with a lot of tests. JUnit's @Theory is one implementation of this. It allows you to test assertions like 'plus(x,y) returns a positive number for any positive x and positive y'.
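For illustration, a minimal sketch of that plus(x, y) property using the JUnit 4 Theories runner; the plus() method below is a hypothetical stand-in, not code from the question:
import static org.junit.Assert.assertTrue;
import static org.junit.Assume.assumeTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class PlusTheoryTest {

    // The theory is run against every combination of these data points.
    @DataPoints
    public static int[] values = { -5, 0, 1, 2, 100, Integer.MAX_VALUE / 2 };

    @Theory
    public void plusOfPositivesIsPositive(int x, int y) {
        assumeTrue(x > 0 && y > 0);   // only positive pairs are relevant to this property
        assertTrue(plus(x, y) > 0);
    }

    // Hypothetical method under test.
    private static int plus(int x, int y) {
        return x + y;
    }
}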
The general rule of thumb is usually that every 'fork' in your code should have a test, meaning you should cover all possible edge-cases.
For example, if you have the following code:
if (x != null) {
    if (x.length > 100) {
        // do something
    } else {
        // do something else
    }
} else {
    // do something completely else
}
You should have three test cases - one for null, one for a value shorter than 100, and one for a longer one.
This is if you are strict and want to be 100% covered.
Whether it's different tests or a parameterized one is less important; it's more a matter of style and you can go either way. I think the more important thing is to cover all cases.
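A minimal sketch of those three cases, assuming the snippet above lives in a hypothetical process(String x) method that returns a different marker per branch:
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class BranchCoverageTest {

    @Test
    public void nullInputTakesTheNullBranch() {
        assertEquals("null-branch", process(null));
    }

    @Test
    public void longInputTakesTheLongBranch() {
        // 101 characters, so length > 100
        String longInput = new String(new char[101]).replace('\0', 'x');
        assertEquals("long-branch", process(longInput));
    }

    @Test
    public void shortInputTakesTheShortBranch() {
        assertEquals("short-branch", process("short"));
    }

    // Hypothetical stand-in mirroring the if/else structure above.
    private static String process(String x) {
        if (x != null) {
            if (x.length() > 100) {
                return "long-branch";
            } else {
                return "short-branch";
            }
        } else {
            return "null-branch";
        }
    }
}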
The set of test cases you have developed are the result of a black-box test-design approach, in fact they look as if you had applied the classification-tree-method. While it is perfectly fine to temporarily take a black-box perspective when doing unit-testing, limiting yourself to black-box testing only can have some undesired effects: First, as you have observed, you can end up with the Cartesian product of all possible scenarios for each of the inputs, second, you will probably still not find bugs that are specific to the chosen implementation.
By (also) taking a glass-box (aka white-box) perspective, you can avoid creating useless tests: Knowing that your code as the first step handles the special case that the number of repeats is negative means you don't have to multiply this scenario with all the others. Certainly, this means you are making use of your knowledge of implementation details: If you were later to change your code such that the check against negative repeats comes at several places, then you better also adjust your test suite.
Since there seems to be a wide spread concern about testing implementation details: unit-testing is about testing the implementation. Different implementations have different potential bugs. If you don't use unit-testing for finding these bugs, then any other test level (integration, subsystem, system) is definitely less suited for finding them systematically - and in a bigger project you don't want implementation level bugs escape to later development phases or even to the field. On a side note, coverage analysis implies you take a glass-box perspective, and TDD does the same.
It is correct, however, that a test suite or individual tests should not unnecessarily depend on implementation details - but that is a different statement than saying you should not depend on implementation details at all. A plausible approach therefore is, to have a set of tests that make sense from a black-box perspective, plus tests that are meant to catch those bugs that are implementation specific. The latter need to be adjusted when you change your code, but the effort can be reduced by various means, e.g. using test helper methods etc.
In your case, taking a glass-box perspective would probably reduce the number of tests with negative repeats to one, also the null char cases, possibly also the NullStrB cases (assuming you handle that early by replacing the null with an empty string), and so on.
First, use a code coverage tool. It will show you which lines of your code are executed by your tests. IDEs have plugins for code coverage tools, so you can run a test and see which lines were executed. Shoot for covering every line; that may be hard in some cases, but for this kind of utility it is very doable.
Using the code coverage tool makes uncovered edge cases stand out. For tests that are harder to implement, code coverage shows you what lines your test executed, so if there's an error in your test you can see how far it got.
Next, understand no tests cover everything. There will always be values you don't test. So pick representative inputs that are of interest, and avoid ones that seem redundant. For instance, is passing in an empty StringBuilder really something you care about? It doesn't affect the behavior of the code. There are special values that may cause a problem, like null. If you are testing a binary search you'll want to cover the case where the array is really big, to see if the midpoint calculation overflows. Look for the kinds of cases that matter.
If you validate upfront and kick out troublesome values, you don't have to do as much work testing: one test for a null StringBuilder passed in to verify you throw IllegalArgumentException, one test for a negative repeat value to verify you throw something for that (see the sketch after this answer).
Finally, tests are for the developers. Do what is useful for you.
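A minimal sketch of that upfront-validation idea for appendRepeats, with one test per rejected input; the implementation at the bottom is illustrative, not the original code:
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AppendRepeatsTest {

    @Test(expected = IllegalArgumentException.class)
    public void nullBuilderIsRejected() {
        appendRepeats(null, 'h', 5);
    }

    @Test(expected = IllegalArgumentException.class)
    public void negativeRepeatsIsRejected() {
        appendRepeats(new StringBuilder("hello"), 'h', -1);
    }

    @Test
    public void positiveRepeatsAppends() {
        StringBuilder strB = new StringBuilder("hello");
        appendRepeats(strB, 'h', 5);
        assertEquals("hellohhhhh", strB.toString());
    }

    // Illustrative implementation that validates upfront, as suggested above.
    static void appendRepeats(StringBuilder strB, char c, int repeats) {
        if (strB == null) {
            throw new IllegalArgumentException("strB must not be null");
        }
        if (repeats < 0) {
            throw new IllegalArgumentException("repeats must not be negative");
        }
        for (int i = 0; i < repeats; i++) {
            strB.append(c);
        }
    }
}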

Should I test (duplicate) data, or only the behavior?

From the design perspective, I am wondering whether I should test the data, especially if it's generally known data (not something very configurable) - this can apply to things like popular file extensions, special IP addresses, etc.
Suppose we have an emergency phone number classifier:
public class ContactClassifier {

    public final static String EMERGENCY_PHONE_NUMBER = "911";

    public boolean isEmergencyNumber(String number) {
        return number.equals(EMERGENCY_PHONE_NUMBER);
    }
}
Should I test it this way ("911" duplication):
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber("911"));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
or (test if properly recognized "configured" number):
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
or inject "911" in the constructor,which looks the most reasonable for me, but even if I do so - should I wrote a test for the "application glue" if the component was instantiated with proper value? If someone can do a typo in data (code), then I see no reasons someone can do a typo in tests case (I bet such data would be copy-paste)
What is the point in testing data that you can test? That a constant value is in fact that constant value? It's already defined in code; Java makes sure that the value is in fact the value, so don't bother.
What you should do in a unit test is test the implementation, whether it's correct or not. To test incorrect behaviour you use data defined inside the test, marked as wrong, and send it to the method. To test that data is handled correctly you input it during the test, if it's border values that are not well known, or use application-wide known values (constants inside interfaces) if they're already defined somewhere.
What is bothering you is that the data (which should be well known to everyone) is placed in the test, and that is not correct at all. What you can do is move it to the interface level. This way, by design, you have your application-known data designed to be part of the contract, and its correctness is checked by the Java compiler.
Values that are well known should not be checked but should be handled by interfaces of some sort to maintain them. Changing such a value is easy, yes, and your test will not fail during that change, but to avoid accidents you should have merge requests, reviews and tasks associated with them. If someone does change it by accident you can find that at code review. If you commit everything to master you have bigger problems than doubly defined constants.
Now, onto parts that are bothering you in other approaches:
1) If someone can make a typo in the data (code), then I see no reason someone couldn't make a typo in the test case as well (I bet such data would be copy-pasted)
Actually, if someone changes values in the data and then continues to develop, at some point they will run a clean install and see those failed tests. At that point they will probably change or ignore the test to make it pass. If you have a person who changes data that randomly, you have bigger issues; and if not, and the change is defined by a task, you have made someone do the change twice (at least?). No pros and many cons.
2) Worrying about someone making a mistake is generally bad practice. You can't catch it using code. Code reviews are designed for that. You can worry though about someone not correctly using the interface you defined.
3) Should I test it this way:
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
Also not this way. This is not a test but a test batch, i.e. multiple tests in the same method. It should be this way (by convention):
@Test
public void testClassifier_emergencyNumberSupplied_correctnessConfirmed() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
}

@Test
public void testClassifier_incorrectValueSupplied_correctnessNotConfirmed() {
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
4) It's not necessary when the method is properly named, but if the value is long enough you might consider naming the values inside the test. For example:
@Test
public void testClassifier_incorrectValueSupplied_correctnessNotConfirmed() {
    String nonEmergencyNumber = "111-other-222";
    assertFalse(contactClassifier.isEmergencyNumber(nonEmergencyNumber));
}
External constants as such have a problem: the compiler inlines them, so the imported constant's value is copied into the test class's constant pool. Hence, when the constant is later changed in the original class, the compiler does not see a dependency between the .class files and leaves the old constant value in the test class.
So you would need a clean build.
Furthermore, tests should be short, clear to read and fast to write. Tests deal with concrete cases of data. Abstractions are counter-productive, and may even lead to errors in the tests themselves. Constants (like a speed limit) should be etched in stone, should be literals. Value properties like the maximum velocity of a car brand can stem from some kind of table lookup.
Of course repeated values could be placed in local constants. That prevents typos, is an easy (local) abstraction, and clarifies the semantic meaning of a value.
However, as cases in general will use a constant maybe two or three times (positive and negative test), I would go for bare literals.
In my opinion the test should check behaviour and not the internal implementation.
The fact that isEmergencyNumber checks the number against a constant declared in the class you're trying to test means the test is verifying internal implementation. You shouldn't rely on it in the test because it is not safe.
Let me give you some examples:
Example #1: Someone changed EMERGENCY_PHONE_NUMBER by mistake and didn't notice. The second test will never catch it.
Example #2: Suppose ContactClassifier is changed by a not very smart developer to the following code. Of course it is a complete edge case and most likely will never happen in practice, but it helps to illustrate what I mean.
public final static String EMERGENCY_PHONE_NUMBER = new String("911");

public boolean isEmergencyNumber(String number) {
    return number == EMERGENCY_PHONE_NUMBER;
}
In this case your second test will not fail because it relies on internal implementation, but your first test, which checks real-world behaviour, will catch the problem.
Writing a unit test serves an important purpose: you specify rules to be followed by the method being tested.
So, when the method breaks that rule i.e. the behavior changes, the test would fail.
I suggest, write in human language, what you want the rule to be, and then accordingly write it in computer language.
Let me elaborate.
Option 1: When I ask the ContactClassifier.isEmergencyNumber method, "Is the string "911" an emergency number?", it should say yes.
Translates to
assertTrue(contactClassifier.isEmergencyNumber("911"));
What this means is that you want to control and test what number is specified by the constant ContactClassifier.EMERGENCY_PHONE_NUMBER: its value should be "911", and the method isEmergencyNumber(String number) should do its logic against this "911" string.
Option 2: When I ask the ContactClassifier.isEmergencyNumber method, "Is the string specified in ContactClassifier.EMERGENCY_PHONE_NUMBER an emergency number?", it should say yes.
It translates to
assertTrue(contactClassifier.isEmergencyNumber("911"));
What this means is you don't care what string is specified by the constant ContactClassifier.EMERGENCY_PHONE_NUMBER. Just that the method isEmergencyNumber(String number) does its logic against that string.
So, the answer would depend on which one of above behaviors you want to ensure.
I'd opt for
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber("911"));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
as this doesn't test against something from the class under test that might be faulty. Testing with
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
will never catch if someone introduces a typo into ContactClassifier.EMERGENCY_PHONE_NUMBER.
In my opinion it is not necessary to test this logic. The reason: this logic is trivial.
We can test every line of our code, but I don't think it is a good idea to do so. For example, getters and setters: if we follow the theory of testing every line of code, we have to write a test for each getter and setter. But these tests have low value and cost more time to write and maintain. That is not a good investment.

Can JUnit make assertions about all Strings?

Is there any way in JUnit (or any other testing framework) to make assertions about all possible inputs to a method?
Something like:
assertTrue(myClass.myMethod(anyString))
EDIT:
I realize that there are an infinite number of possible inputs but I guess I was wondering if there were some frameworks that could statically analyze a method and detect that in all possible cases a certain result would be reached. E.g. public static boolean myMethod(String input) { return true; } will always return true?
No, there is practically an unlimited number of possible inputs.
It's your job to separate them into test cases with (expected) equivalent behaviour.
If such an artificial intelligence existed, it could also write the code to be tested.
There are test case generators that auto-create test cases, but they are mostly useless. They produce a huge number of test cases that mainly just touch the code instead of testing an expected result.
Such tools raise test coverage percentage, but in a very dubious way. (I would call that an illegal raise of test coverage: you should test, not touch!)
Such a tool is CodePro from Google. Use CodePro -> Test Case Generation (e.g. within Eclipse).
At first you will be a bit surprised; it's not too bad to try it out. Then you will know the limits of automatic test case generation.
You cannot do this with JUnit. The only way I think you could do such a thing would be using Formal Logic Verification
As said before, it's not possible. However, there's the approach of automated property-based testing. This comes somewhat close to your idea. Well, still far off, though...
For instance, have a look at ScalaCheck:
ScalaCheck is a library written in Scala and used for automated property-based testing of Scala or Java programs. ScalaCheck was originally inspired by the Haskell library QuickCheck, but has also ventured into its own.
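In plain JUnit, the closest built-in approximation is the Theories runner: the assertion below is checked for every @DataPoints value, not for all possible Strings. MyClass and myMethod are the hypothetical names from the question, stubbed here so the sketch compiles:
import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class MyMethodPropertyTest {

    // A hand-picked sample of inputs; the theory runs once per value.
    @DataPoints
    public static String[] samples = { "", "a", "911", "123,456", "   ", "\n" };

    @Theory
    public void myMethodHoldsForEverySample(String input) {
        assertTrue(MyClass.myMethod(input));
    }

    // Stub matching the signature from the question's edit.
    static class MyClass {
        static boolean myMethod(String input) {
            return true;
        }
    }
}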

Do I need to double check flows in UT?

I have a class A with a dependency B.
I wrote a UT for B::foo(String s1, String s2). Say I test a flow of B::foo("a", "a").
Assuming A::foo(..) calls B::foo(..)
Do I have to write a UT for A::foo("a", "a")?
I would inject a mock of B and check that B::foo was called once, and also that the result from A is as expected given a mocked result from B (see the sketch after this question).
Would you avoid mocking in such a situation?
Would you avoid testing the whole flow as it's already checked in B's UT?
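A minimal sketch of the mock-based A test described above, using Mockito; the shapes of A and B (constructor injection, String return type) are assumptions rather than anything given in the question:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ATest {

    @Test
    public void fooDelegatesToB() {
        B b = mock(B.class);
        when(b.foo("a", "a")).thenReturn("stubbed");

        A a = new A(b);                   // assumes A receives B via its constructor
        String result = a.foo("a", "a");

        assertEquals("stubbed", result);  // A returns what B produced
        verify(b).foo("a", "a");          // and called B exactly once
    }

    // Hypothetical shapes of the collaborator and the wrapper.
    interface B {
        String foo(String s1, String s2);
    }

    static class A {
        private final B b;

        A(B b) {
            this.b = b;
        }

        String foo(String s1, String s2) {
            return b.foo(s1, s2);
        }
    }
}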
Unit tests serve as an additional line of defense against software bugs. Making a bug in production code is likely; making the same bug in both the production code and the unit test is a lot less likely. This is one of the reasons you write unit tests - to gain one more guarantee your software works as intended.
I would inject a mock of B and check that B::foo was called once, and also that the result from A is as expected given a mocked result from B.
You need to ask yourself how much you gain by doing so. If A is a simple wrapper over B:
How valuable would such A tests be?
How hard would a bug in A code be to detect?
And how easy to make?
And how hard to fix?
Every unit test is a decision to be made. There is no "Yes, write tests for this class" guideline or rule. You need to determine whether your time can be spent on writing unit tests for a wrapper class or whether it would be better to invest it elsewhere.
B::foo is unit tested, so the best course of action is to assume it is perfect. If you have reason to doubt that B::foo is perfect, add tests in BTest until you're comfortable with it, then assume that it is perfect.
At that point, writing a unit test of A::foo is probably redundant, unless you're asserting that it accurately returns (some permutation of) B::foo. As jimmy_keen said, this may mean that your test for A is trivial. Remember that unit tests are designed to cover things likely to break, so if all you have is a wrapper, you probably don't need thorough testing.
(Caveat: If B is not under your control, and you can't be confident of its perfection, by all means add B tests wherever you can, including in a separate test class or in ATest. That's a separate abstraction-breaking case, though.)

Unit testing with assertTrue vs others

I work in a TDD environment and I use assertTrue a lot, whereas there are many other methods, such as assertEquals etc. I have a class for which I have more than 40 test cases and they all use assertTrue. Is this acceptable?
I wanted to ask, as a matter of style, is this proper?
Any suggestions?
If you think this question is inappropriate, let me know and I'll delete it.
EDIT:
assertTrue(targetSpecifiers.size() == 2);
assertTrue(targetSpecifiers.get(0).getPlacementId().compareTo(new BigInteger("1")) == 0);
assertTrue(targetSpecifiers.get(1).getPlacementId().compareTo(new BigInteger("2")) == 0);
The main benefits of using other assertions is that they better communicate intent and are likely to give a more meaningful default message in the event of failure.
e.g.
if you write assertEquals(2, x) if x is actually 1 then the failure message will be:
java.lang.AssertionError: expected:<2> but was:<1>
which is more helpful than if you write assertTrue(x == 2) where all you would see is the AssertionError and the stack trace.
This is even more important when you are using TDD, because when you write a failing test first you want to be confident that the test is failing for the reason you are expecting it to, and that there is not some accidental behaviour going on.
Where appropriate you should use the specific assertXXX methods as they improve the reporting of failures. For example, if you are testing for the equality of, let us say, two strings "abc" (expected) and "abxy" (actual), then the use of assertEquals
assertEquals("abc", "abxy")
will provide better output that is easier to reason about than using assertTrue like below:
assertTrue("abc".equals("abxy"))
NOTE: Also pay attention to where you are specifying the actual and expected arguments. I see a lot of developers not following the convention (JUnit's convention) that the expected value should be the first param to the assertXXX methods. Improper usage also leads to a lot of confusion.
My guess is that you've got things like:
assertTrue(expectedValue.equals(actualValue));
That will still test the right thing - but when there's a failure, all it can tell you was that the assertion failed. If you used this instead:
assertEquals(expectedValue, actualValue);
... then the failure will say "Expected: 5; Was: 10" or something similar, which makes it considerably easier to work out what's going on.
Unless you're asserting the result of a method returning boolean or something like that, I find assertTrue to be pretty rarely useful.
If you could give examples of your assertions, we may be able to translate them into more idiomatic ones.
These assertions are perfectly valid, but other assertions are easier to read and deliver better failure messages.
I recommend looking at Hamcrest - this provides the most readable form of assertions and failure messages. Your example of
assertTrue(targetSpecifiers.size() == 2);
assertTrue(targetSpecifiers.get(0).getPlacementId().compareTo(new BigInteger("1")) == 0);
assertTrue(targetSpecifiers.get(1).getPlacementId().compareTo(new BigInteger("2")) == 0);
could be rewritten as
assertThat(targetSpecifiers, hasSize(2));
assertThat(targetSpecifiers.get(0).getPlacementId(), equalTo(BigInteger.valueOf(1)));
assertThat(targetSpecifiers.get(1).getPlacementId(), equalTo(BigInteger.valueOf(2)));
or even more succinctly as
assertThat(targetSpecifiers, contains(
    hasProperty("placementId", equalTo(BigInteger.valueOf(1))),
    hasProperty("placementId", equalTo(BigInteger.valueOf(2)))
));
contains verifies completeness and order, so this covers all three assertions.
