A single JUnit test run under JUnit 4.11 fails intermittently when run via either the module test suite (40 runs: 2 failures, 38 passes) or the class test suite (40 runs: 6 failures, 34 passes), but running the test method by itself never produced a failure (50 runs: 0 failures, 50 passes).
To summarize what is happening: the equals(Object) implementation of MyObject returns true if the org.joda.time.DateTime values stored under the keys Stamp.START and Stamp.STOP are each the same for the current instance as for the instance passed to the method. Here's the code:
import java.util.Map;
import java.util.Objects;
import org.joda.time.DateTime;
...
private final Map<Stamp, DateTime> timeStampMap;
...
@Override
public boolean equals(Object obj) {
    if (this == obj) { return true; }
    if (obj == null || getClass() != obj.getClass()) { return false; }
    final MyObject other = (MyObject) obj;
    return Objects.equals(this.timeStampMap.get(Stamp.START),
                          other.timeStampMap.get(Stamp.START))
        && Objects.equals(this.timeStampMap.get(Stamp.STOP),
                          other.timeStampMap.get(Stamp.STOP));
}
...
public enum Stamp {
    START,
    STOP
}
And the test itself:
@Test
@Config(configuration = TestConfig.NO_CONFIG)
public void equalityTest() {
    MyObject a = new MyObject(BigDecimal.TEN);
    MyObject b = a;
    assertThat(a.hashCode(), is(b.hashCode()));
    assertTrue(a.equals(b));
    b = new MyObject(BigDecimal.TEN);
    // This line produces the failure
    assertThat(a, is(not(b)));
}
Why would this test only fail when run under either test suite, but not when run on its own?
Since you are using Joda-Time, an alternative approach is to fix the current time to a value of your choosing using DateTimeUtils.setCurrentMillisFixed(val).
For example:
@Test
@Config(configuration = TestConfig.NO_CONFIG)
public void equalityTest() {
    DateTimeUtils.setCurrentMillisFixed(someValue);
    MyObject a = new MyObject(BigDecimal.TEN);
    MyObject b = a;
    assertThat(a.hashCode(), is(b.hashCode()));
    assertTrue(a.equals(b));
    DateTimeUtils.setCurrentMillisFixed(someValue + someOffset);
    b = new MyObject(BigDecimal.TEN);
    // With the offset applied, this assertion now passes reliably
    assertThat(a, is(not(b)));
}
I suggest making the code more testable. Instead of having the code get the date directly, you can pass in an interface named Clock:
public interface Clock {
    DateTime now();
}
Then you could add Clock to the constructor:
MyObject(BigDecimal bigDecimal, Clock clock) {
    timeStampMap.put(Stamp.START, clock.now());
}
For production code, you can make a helper constructor:
MyObject(BigDecimal bigDecimal) {
    this(bigDecimal, new SystemClock());
}
...where SystemClock looks like this:
public class SystemClock implements Clock {
    @Override
    public DateTime now() {
        return new DateTime();
    }
}
Your tests could either mock Clock or you could create a fake clock implementation.
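For instance, a minimal fake implementation might look like the sketch below (the class name and the advance method are illustrative, not part of the original answer):

public class FakeClock implements Clock {
    private DateTime current;

    public FakeClock(DateTime start) {
        this.current = start;
    }

    @Override
    public DateTime now() {
        return current;
    }

    // Advance the fake time so consecutive MyObjects get distinct timestamps.
    public void advanceMillis(int millis) {
        current = current.plusMillis(millis);
    }
}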
While trying to produce an MCVE and write up this question, I discovered something interesting:
When the test is run at the method level, note the timestamp difference of 1 millisecond. The difference is never less than that:
[START: 2015-02-26T11:53:20.581-06:00, STOP: 2015-02-26T11:53:20.641-06:00, DURATION: 0.060]
[START: 2015-02-26T11:53:20.582-06:00, STOP: 2015-02-26T11:53:20.642-06:00, DURATION: 0.060]
But when the test ends up being run as part of the suites, this happens nearly every single time:
[START: 2015-02-26T12:25:31.183-06:00, STOP: 2015-02-26T12:25:31.243-06:00, DURATION: 0.060]
[START: 2015-02-26T12:25:31.183-06:00, STOP: 2015-02-26T12:25:31.243-06:00, DURATION: 0.060]
Zero difference. Weird, right?
My best guess is that the JVM is proverbially all warmed up and has some momentum built by the time it reaches this particular test when running the test suites, so much so that the instantiations occur nearly simultaneously. The tiny amount of time that passes between instantiating MyObject a and reassigning b as a new MyObject is so minute that it produces a MyObject with an identical pair of DateTimes.
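The effect is easy to reproduce in isolation. Here is a quick demonstration (mine, not from the original post): new DateTime() is backed by System.currentTimeMillis(), which has millisecond granularity, so two back-to-back instants frequently land on the same millisecond.

import org.joda.time.DateTime;

public class ClockGranularityDemo {
    public static void main(String[] args) {
        // Two back-to-back instants often share the same millisecond on a warm JVM,
        // and DateTime.equals() compares the underlying millisecond instants.
        DateTime first = new DateTime();
        DateTime second = new DateTime();
        System.out.println(first.equals(second)); // frequently true
    }
}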
Turns out, there are a few usable solutions:
The Solution I Went With:
This is really similar to Duncan's. Call DateTimeUtils.setCurrentMillisOffset(val) before reassigning MyObject b and then reset immediately afterward, since I only need the offset long enough to force a difference in the DateTimes between MyObjects a and b:
@Test
@Config(configuration = TestConfig.NO_CONFIG)
public void equalityTest() {
    MyObject a = new MyObject(BigDecimal.TEN);
    MyObject b = a;
    assertThat(a.hashCode(), is(b.hashCode()));
    assertTrue(a.equals(b));
    // Force an offset
    DateTimeUtils.setCurrentMillisOffset(1000);
    b = new MyObject(BigDecimal.TEN);
    // Clears the offset
    DateTimeUtils.setCurrentMillisSystem();
    assertThat(a, is(not(b)));
}
Namshubwriter's Solution (link to answer):
Easily the best solution in cases where this issue will likely be seen throughout a project and/or in actual use.
Duncan's Solution (link to answer):
Set the current time to a fixed value by calling DateTimeUtils.setCurrentMillisFixed(val) at the beginning of the unit test, then add an offset to that time by calling DateTimeUtils.setCurrentMillisFixed(val + someOffset) to force the difference before reassigning MyObject b. Click the link to jump right to his solution with the code.
It is worth pointing out that you'll need to call DateTimeUtils.setCurrentMillisSystem() at some point to reset the time, otherwise other tests dependent on the time could be affected.
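One way to guarantee that reset happens even when an assertion fails is to put it in a teardown method (a sketch, assuming JUnit 4):

@After
public void restoreSystemTime() {
    // Restore the real clock so later tests see the true current time.
    DateTimeUtils.setCurrentMillisSystem();
}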
Original Solution:
It is worth mentioning that, to my understanding, this is the only solution that does not depend on the program having particular security privileges on the host system.
Place a call to Thread.sleep() to ensure that there is a time separation between the DateTime timestamps of the two MyObjects. (Thread.sleep(0, 1) is rounded up to a full millisecond by the standard library, which is exactly the granularity needed here.)
@Test
@Config(configuration = TestConfig.NO_CONFIG)
public void equalityTest() {
    MyObject a = new MyObject(BigDecimal.TEN);
    MyObject b = a;
    assertThat(a.hashCode(), is(b.hashCode()));
    assertTrue(a.equals(b));
    try {
        Thread.sleep(0, 1);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    b = new MyObject(BigDecimal.TEN);
    // Line that was failing
    assertThat(a, is(not(b)));
}
Related
I've been learning Java for just a bit, so please advise what an exception-throwing test should look like in this case.
I have the following GamblingMachine class, and then two tests for it. I do not really know what should follow the Integer. in the second method (shouldThrowWhenNumbersOutOfRange). Could you please advise on the exact syntax?
public class GamblingMachine {
    public int howManyWins(Set<Integer> userNumbers) throws InvalidNumbersException {
        validateNumbers(userNumbers);
        Set<Integer> computerNumbers = generateComputerNumbers();
        int count = 0;
        for (Integer number : userNumbers) {
            if (computerNumbers.contains(number)) {
                count++;
            }
        }
        return count;
    }

    private void validateNumbers(Set<Integer> numbers) throws InvalidNumbersException {
        if (numbers.size() != 6) {
            throw new InvalidNumbersException();
        }
        // anyMatch checks whether any element satisfies the given condition
        if (numbers.stream().anyMatch(number -> number < 1 || number > 49)) {
            throw new InvalidNumbersException();
        }
    }

    private Set<Integer> generateComputerNumbers() {
        Set<Integer> numbers = new HashSet<>();
        Random generator = new Random();
        while (numbers.size() < 6) {
            numbers.add(generator.nextInt(49) + 1);
        }
        return numbers;
    }
}
private GamblingMachine machine = new GamblingMachine();

@ParameterizedTest
@NullAndEmptySource
public void shouldThrowWhenNumbersEmpty(Set<Integer> numbers) throws InvalidNumbersException {
    Assertions.assertThrows(NumberFormatException.class, () -> {
        Integer.parseInt(" ");
    });
}

@ParameterizedTest
@CsvFileSource(resources = "/numbersOutOfRange.csv", numLinesToSkip = 1)
public void shouldThrowWhenNumbersOutOfRange(Set<Integer> numbers) throws InvalidNumbersException {
    Assertions.assertThrows(NumberFormatException.class, () -> {
        Integer. // how should the code look here?
    });
}
The point of a test is to, you know, test something. Your shouldThrowWhenNumbersEmpty test doesn't do that (well, it tests that Integer.parseInt(" ") throws something. It does, of course. You... don't have to test the core libraries).
In other words, your gambling machine tests need to be calling some stuff from your GamblingMachine class. The idea is to test GamblingMachine. Not to test Integer.parseInt.
It's also a bizarre test: Why in the blazes is shouldThrowWhenNumbersEmpty parameterized? I assume the point of that test is: "Ensure that the gambling machine works as designed when passing an empty set of numbers in, specifically, the part of the design that states that an InvalidNumbersException is thrown if you do that".
Which is done with something like:
@Test
public void shouldThrowWhenNumbersEmpty() {
    Assertions.assertThrows(InvalidNumbersException.class, () -> {
        Set<Integer> empty = Set.of();
        machine.howManyWins(empty);
    });
}
Parameterized tests are a fairly exotic concept, and your test setup appears to be falling into a trap: it repeats all the logic that is already in your GamblingMachine class, applies that logic to the incoming (parameterized) data to figure out what the gambling machine ought to do, and then double-checks its work.
That's not how you should write tests. Tests focus on a specific result. Parameterized tests can make sense, but only if the stuff you have to do for any given input is roughly the same. For example:
Good use of parameterized testing
You have a csv file containing a bunch of lines, each of which has 6 rolls + the correct answer. Your parameterized test treats each line the same: Call howManyWins using the 6 rolls as input, then check that howManyWins returns the expected value.
Bad use of parameterized testing
You have a csv file containing a bunch of lines, each of which has 6 rolls. Your parameterized test will calculate the right result for the rolls, then invoke the gambling machine, and check that the gambling machine gives the same answer as what you calculated.
This is bad: You're just repeating the code. It also means your test code is itself doing more than the very basics (it's doing a bunch of business logic), thus raising the question: Who tests your test, then?
Both of your test methods seem like they should NOT be parameterized, unless that csv also contains results.
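To make the "good use" concrete, here is a sketch of the shape such a test could take. Everything in it is assumed: a hypothetical /winsExpected.csv whose columns are six picks plus the expected win count, and a GamblingMachine whose own numbers are made deterministic (injected or seeded) so that a fixed expected value exists at all:

@ParameterizedTest
@CsvFileSource(resources = "/winsExpected.csv", numLinesToSkip = 1)
void howManyWinsReturnsExpectedCount(int n1, int n2, int n3, int n4, int n5, int n6,
                                     int expectedWins) throws InvalidNumbersException {
    // Each CSV line is one case: six user picks, then the expected win count.
    Set<Integer> userNumbers = Set.of(n1, n2, n3, n4, n5, n6);
    Assertions.assertEquals(expectedWins, machine.howManyWins(userNumbers));
}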
I'm practicing unit testing on the methods of the Java Period class. The method minusDays looks like this:
public Period minusDays(long daysToSubtract) {
    return (daysToSubtract == Long.MIN_VALUE ? plusDays(Long.MAX_VALUE).plusDays(1) : plusDays(-daysToSubtract));
}
My unit test looks like this:
@Test
public void testMinusDays() {
    Period x = Period.of(1, 1, 2);
    Period y = Period.of(1, 1, 1);
    Assert.assertEquals(y, x.minusDays(1));
}
The problem is that I'm getting 50% branch coverage and don't know which part of the if/else I'm testing, because I can't follow it.
First step: if ? : is too confusing, replace it with an equivalent if statement:
public Period minusDays(long daysToSubtract) {
    if (daysToSubtract == Long.MIN_VALUE) {
        return plusDays(Long.MAX_VALUE).plusDays(1);
    }
    return plusDays(-daysToSubtract);
}
And now you know what you are missing: you are testing daysToSubtract == 1, but not the possibility daysToSubtract == Long.MIN_VALUE. In other words, you are only testing one of the two cases, which makes 50%.
You have to write a test with x.minusDays(Long.MIN_VALUE) and a test with another value. After that you should have 100%.
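A sketch of the missing test (assuming JUnit 4.13+, which provides Assert.assertThrows): taking the Long.MIN_VALUE branch overflows the int day count inside plusDays, so the call is expected to throw, and the branch still counts as covered.

@Test
public void testMinusDaysMinValue() {
    Period x = Period.of(1, 1, 2);
    // Long.MIN_VALUE cannot be negated, so minusDays takes the special branch;
    // the addition inside plusDays then overflows and throws ArithmeticException.
    Assert.assertThrows(ArithmeticException.class, () -> x.minusDays(Long.MIN_VALUE));
}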
I have one singleton object which stores user activity. I want to remove this data at a certain time (every night at 12). I want to know how we can achieve this without having a different thread running.
Add a field to the singleton that records the last date it ran, plus a method that returns it:

static LocalDate lastRun = LocalDate.now(); // set when the class initializes

static LocalDate lastDateRan() {
    return lastRun;
}

Then check whether today is after lastRun. Using java.time.LocalDate keeps the comparison date-only, so there is no time-of-day component to worry about.
Whenever the object is called, check:

LocalDate today = LocalDate.now();
if (today.isAfter(lastRun)) {
    lastRun = today;
    // ...and clean the object.
}
It won't run every day exactly at midnight, but it'll have the exact same effect: the first call after midnight will see "fresh" data.
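Wired into the singleton itself, the whole idea might look like this (a sketch; the class and field names are illustrative):

public final class ActivityStore {
    private static final ActivityStore INSTANCE = new ActivityStore();

    private final List<String> activity = new ArrayList<>();
    private LocalDate lastRun = LocalDate.now();

    private ActivityStore() {}

    public static ActivityStore getInstance() {
        return INSTANCE;
    }

    public synchronized void record(String entry) {
        clearIfNewDay(); // lazy cleanup: no extra thread needed
        activity.add(entry);
    }

    public synchronized List<String> getActivity() {
        clearIfNewDay();
        return new ArrayList<>(activity);
    }

    private void clearIfNewDay() {
        LocalDate today = LocalDate.now();
        if (today.isAfter(lastRun)) {
            lastRun = today;
            activity.clear();
        }
    }
}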
You can use the following code (note that java.util.Timer does start its own background thread):

new java.util.Timer().schedule(
    new java.util.TimerTask() {
        @Override
        public void run() {
            objectName.clear(); // your code to clean up the object
        }
    },
    delayUntilMidnight,     // milliseconds until the first run
    24 * 60 * 60 * 1000L    // repeat once per day thereafter
);
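One way to compute that initial delay (a sketch; delayUntilMidnight above is assumed to come from something like this):

// Milliseconds from now until the next midnight, for the Timer's initial delay.
static long millisUntilNextMidnight() {
    java.util.Calendar next = java.util.Calendar.getInstance();
    next.add(java.util.Calendar.DAY_OF_MONTH, 1);
    next.set(java.util.Calendar.HOUR_OF_DAY, 0);
    next.set(java.util.Calendar.MINUTE, 0);
    next.set(java.util.Calendar.SECOND, 0);
    next.set(java.util.Calendar.MILLISECOND, 0);
    return next.getTimeInMillis() - System.currentTimeMillis();
}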
I am working with a Java web application that has a lot of unit tests, which we run within Eclipse. I am going through the tests and refactoring some of them, and I have seen a few tests written like this (I'll boil it down to the assertions; my literal examples actually use variables in the tests):
assertEquals(new Integer(7), new Long(7));
This test passes, and I don't understand why, since the types are different. After seeing this behavior, I created a simple Java project in Eclipse and wrote basically the same unit test:
assertEquals(new Integer(7), new Long(7));
and it failed as I expected. I don't need a fix; I was just curious how the test passes in one environment and fails (as it should) in another.
If you look at the equals method for java.lang.Long it says:
public boolean equals(Object obj) {
    if (obj instanceof Long) {
        return value == ((Long) obj).longValue();
    }
    return false;
}
So new Long(7).equals(new Integer(7)) should be false, because an Integer is not an instance of Long. This test program confirms that:
public class Stuff {
    public static void main(String[] args) {
        System.out.println("int equals long : " + new Integer(7).equals(new Long(7)));
        System.out.println("long equals int : " + new Long(7).equals(new Integer(7)));
    }
}
which prints out
int equals long : false
long equals int : false
I'm guessing the web application test that came up with the opposite result used an add-on like ComparableAssert, which has this signature
public static void assertEquals(java.lang.Comparable expected,
                                java.lang.Comparable actual)
It would be easy to mistake one for the other, especially if the test uses static imports. If that overload compares the two values numerically via compareTo rather than with equals, the assert would succeed, since 7L and 7 are numerically equal.
I'm trying to write a unit test (using JMockit) that verifies that methods are called according to a partial order. The specific use case is ensuring that certain operations are called inside a transaction, but more generally I want to verify something like this:
Method beginTransaction is called.
Methods operation1 through to operationN are called in any order.
Method endTransaction is called.
Method someOtherOperation is called some time before, during or after the transaction.
The Expectations and Verifications APIs don't seem to be able to handle this requirement.
If I have a @Mocked BusinessObject bo, I can verify that the right methods are called (in any order) with this:
new Verifications() {{
    bo.beginTransaction();
    bo.endTransaction();
    bo.operation1();
    bo.operation2();
    bo.someOtherOperation();
}};
optionally making it a FullVerifications to check that there are no other side-effects.
To check the ordering constraints I can do something like this:
new VerificationsInOrder() {{
    bo.beginTransaction();
    unverifiedInvocations();
    bo.endTransaction();
}};
but this does not handle the someOtherOperation case. I can't replace the unverifiedInvocations with bo.operation1(); bo.operation2() because that puts a total ordering on the invocations. A correct implementation of the business method could call bo.operation2(); bo.operation1().
If I make it:
new VerificationsInOrder() {{
    unverifiedInvocations();
    bo.beginTransaction();
    unverifiedInvocations();
    bo.endTransaction();
    unverifiedInvocations();
}};
then I get a "No unverified invocations left" failure when someOtherOperation is called before the transaction. Trying bo.someOtherOperation(); minTimes = 0 also doesn't work.
So: is there a clean way to specify partial-ordering requirements on method calls using the Expectations/Verifications API in JMockit? Or do I have to use a MockClass and manually keep track of invocations, a la:
@MockClass(realClass = BusinessObject.class)
public class MockBO {
    private boolean op1Called = false;
    private boolean op2Called = false;
    private boolean beginCalled = false;

    @Mock(invocations = 1)
    public void operation1() {
        op1Called = true;
    }

    @Mock(invocations = 1)
    public void operation2() {
        op2Called = true;
    }

    @Mock(invocations = 1)
    public void someOtherOperation() {}

    @Mock(invocations = 1)
    public void beginTransaction() {
        assertFalse(op1Called);
        assertFalse(op2Called);
        beginCalled = true;
    }

    @Mock(invocations = 1)
    public void endTransaction() {
        assertTrue(beginCalled);
        assertTrue(op1Called);
        assertTrue(op2Called);
    }
}
If you really need such a test, then don't use a mocking library; create your own mock with state inside that can simply check the correct order of methods (see the sketch after this answer).
But testing the order of invocations is usually a bad sign. My advice would be: don't test it, refactor. You should test your logic and results rather than a sequence of invocations. Check that the side effects are correct (database content, service interactions, etc.). If you test the sequence, then your test is basically an exact copy of your production code, so what's the added value of such a test? It is also very fragile (as any duplication is).
Maybe you should make your code look like this:
beginTransaction()
doTransactionalStuff()
endTransaction()
doNonTransactionalStuff()
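A sketch of the hand-rolled mock suggested above: a fake that records calls so the test can assert the partial order directly. The class and method names mirror the question, but the recording scheme is illustrative and assumes BusinessObject's methods are overridable (plus java.util imports and a static import of JUnit's assertTrue):

// Records the order of calls for later partial-order assertions.
public class RecordingBusinessObject extends BusinessObject {
    private final List<String> calls = new ArrayList<>();

    @Override public void beginTransaction()   { calls.add("begin"); }
    @Override public void endTransaction()     { calls.add("end"); }
    @Override public void operation1()         { calls.add("op1"); }
    @Override public void operation2()         { calls.add("op2"); }
    @Override public void someOtherOperation() { calls.add("other"); }

    // begin must precede op1 and op2, which must precede end;
    // someOtherOperation may appear anywhere, so no constraint is placed on it.
    public void assertPartialOrder() {
        int begin = calls.indexOf("begin");
        int end = calls.indexOf("end");
        assertTrue(begin >= 0 && end > begin);
        for (String op : Arrays.asList("op1", "op2")) {
            int i = calls.indexOf(op);
            assertTrue(begin < i && i < end);
        }
    }
}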
From my usage of JMockit, I believe the answer is no, even in the latest version 1.49.
You can implement this type of advanced verification using a MockUp extension with some internal fields to keep track of which functions get called, when, and in what order.
For example, I implemented a simple MockUp to track method call counts. This example comes from a real case where the Verifications and Expectations times fields did not work when mocking a ThreadGroup (the approach is useful for other sensitive types as well):
public class CalledCheckMockUp<T> extends MockUp<T>
{
    private Map<String, Boolean> calledMap = Maps.newHashMap();
    private Map<String, AtomicInteger> calledCountMap = Maps.newHashMap();

    public void markAsCalled(String methodCalled)
    {
        if (methodCalled == null)
        {
            Log.logWarning("Caller attempted to mark a method string" +
                           " that is null as called, this is surely" +
                           " either a logic error or an unhandled edge" +
                           " case.");
        }
        else
        {
            calledMap.put(methodCalled, Boolean.TRUE);
            // computeIfAbsent (not putIfAbsent) so the first call doesn't NPE:
            // putIfAbsent returns null when there was no previous mapping.
            calledCountMap.computeIfAbsent(methodCalled, k -> new AtomicInteger()).
                incrementAndGet();
        }
    }

    public int methodCallCount(String method)
    {
        return calledCountMap.computeIfAbsent(method, k -> new AtomicInteger()).get();
    }

    public boolean wasMethodCalled(String method)
    {
        if (method == null)
        {
            Log.logWarning("Caller attempted to check a method string" +
                           " that is null, this is surely either a logic" +
                           " error or an unhandled edge case.");
            return false;
        }
        return calledMap.containsKey(method) ? calledMap.get(method) :
            Boolean.FALSE;
    }
}
With usage like the following, where cut1 is a dynamic proxy type that wraps an actual ThreadGroup:
String methodId = "activeCount";
CalledCheckMockUp<ThreadGroup> calledChecker = new CalledCheckMockUp<ThreadGroup>()
{
    @Mock
    public int activeCount()
    {
        markAsCalled(methodId);
        return active; // 'active' is defined in the surrounding test
    }
};

. . .

int callCount = 0;
int activeCount = cut1.activeCount();
callCount += 1;
Assertions.assertTrue(calledChecker.wasMethodCalled(methodId));
Assertions.assertEquals(callCount, calledChecker.methodCallCount(methodId));
I know the question is old and this example doesn't fit the OP's use case exactly, but I hope it helps guide others who come looking toward a potential solution (or the OP, God forbid this is still unsolved for an important use case, which is unlikely).
Given the complexity of what the OP is trying to do, it may help to override the $advice method in your custom MockUp to ease differentiating and recording method calls. Docs here: Applying AOP-style advice.
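A sketch of that idea, recording every invoked method name in order (modeled on the $advice example in the JMockit tutorial; double-check the details against your JMockit version):

public final class CallRecorder extends MockUp<BusinessObject>
{
    static final List<String> calls = new ArrayList<>();

    @Mock
    public static Object $advice(Invocation invocation)
    {
        // Record the invoked method's name, then let the real implementation run.
        Method invoked = invocation.getInvokedMember();
        calls.add(invoked.getName());
        return invocation.proceed();
    }
}

A test could then assert the partial order over calls, e.g. that "beginTransaction" appears before "operation1" and "operation2", and that "endTransaction" appears after both.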