JUnit test can't pass all test cases at once - java

I have a very strange problem: when I try to run a JUnit test class with multiple test cases, only the first test case passes and the rest fail with an IndexOutOfBoundsException.
public class ABCTest {
    @Test
    public void basicTest1(){...}
    @Test
    public void basicTest2(){...}
    ...
but if I comment out the remaining test cases and run them one by one, they all pass.
public class ABCTest {
    @Test
    public void basicTest1(){...}
    //@Test
    //public void basicTest2(){...}
    //...

Since you do not provide the complete test case and implementation class, I have to make some assumptions.
Most likely the test cases are mutating the state of the object under test.
Usually you try to get a clean test fixture for each unit test. This works by having a method annotated with @Before which creates a new instance of the class under test. (This was called setUp() in older versions of JUnit.)
This ensures that the order of test method execution, as well as the number of executions, does not matter, and that each test method works in isolation.
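A minimal sketch of such a fixture, assuming a hypothetical class ABC under test (the ABC name is made up for illustration):
import org.junit.Before;
import org.junit.Test;

public class ABCTest {

    private ABC abc; // hypothetical class under test

    @Before
    public void setUp() {
        // fresh instance for every test method, so no state leaks between tests
        abc = new ABC();
    }

    @Test
    public void basicTest1() {
        // exercise and assert against the fresh 'abc' instance
    }

    @Test
    public void basicTest2() {
        // also starts from a clean fixture, regardless of what basicTest1 did
    }
}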

Look at what you are doing inside the test cases and see whether you are changing data that may be used by the other test cases without restoring it to its original state. For example, if basicTest1 reads and writes a text file, and basicTest2 reads that file again assuming it is unchanged, the second test will fail when both run together.
This is just one possible problem; we would need to see the code for more insight.

How to reuse method and test in JUnit?

I've tried to avoid duplicate code in my JUnit tests, but I'm kind of stuck.
This is my first test; the second one has exactly the same methods but a different service (different input): instead of TestCaseResourceTest1 I have TestCaseResourceTest2. What would be the proper way to test both? I want to have a separate file for test number 2, so how should I avoid the duplicate code (e.g. reuse the beforeFileTest() method)?
public class TestCaseResourceTest1 {
    @Mock
    private TestService testService;
    @Mock
    private AreaService areaService;

    private TestCaseService1 testCaseService1; // is changed in test2

    @Before
    public void before() throws Exception {
        testCaseService1 = mock(TestCaseService1.class); // is changed in test2
        MockitoAnnotations.initMocks(this);
        beforeFileTest();
    }

    private void beforeFileTest() throws Exception {
        doReturn(true).when(areaService).chechExists(any(String.class), eq(false));
    }

    @Test
    public void verifyFileExists() throws Exception {
        verifyOtherArea(testCaseService1); // is changed in test2
        doReturn(false).when(areaService).chechExists(any(String.class), eq(false));
    }
}
Only the lines marked with the comment "is changed in test2" differ.
Thanks
Given this excerpt from your question:
… instead of the TestCaseResourceTest1 I have TestCaseResourceTest2 … I want to have a separate file for test number 2
… the standard ways of sharing code between test cases are:
Create a test suite and include the shared code in the suite (typically in @BeforeClass and @AfterClass methods). This allows you to (1) run setup code once per suite invocation, (2) encapsulate shared setup/teardown code and (3) easily add more test cases later. For example:
@RunWith(Suite.class)
@Suite.SuiteClasses({
    TestCaseResourceTest1.class,
    TestCaseResourceTest2.class
})
public class TestSuiteClass {

    @BeforeClass
    public static void setup() throws Exception {
        beforeFileTest();
    }

    private static void beforeFileTest() throws Exception {
        // ...
    }
}
Create an abstract class which is the parent of TestCaseResourceTest1 and TestCaseResourceTest2 and let those test cases invoke the shared code in the parent (typically via super() calls). With this approach you can declare default shared behaviour in the parent while still allowing subclasses to (1) have their own behaviour and (2) selectively override the parent/default behaviour.
Create a custom JUnit runner, define the shared behaviour in this runner and then annotate the relevant test cases with @RunWith(YourCustomRunner.class). More details on this approach here
Just to reiterate what some of the other posters have said: this is not a common first step, so you may prefer to start simple and only move to suites, abstract classes or custom runners if your usage provides a compelling reason to do so.
I have had such a situation, and it was a sign of a flawed implementation design. We are talking about pure unit tests, where we test exactly what is implemented in the production classes. If we need duplicated tests, it probably means we have duplication in the implementation.
How did I resolve it in my project?
Extracted the common logic into a parent service class and implemented unit tests for it (a sketch follows below).
For the child services I implemented tests only for the code actually implemented there, no more.
Implemented integration tests on a real environment where both services were involved and tested end to end.
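A rough sketch of the first two steps, with hypothetical class names (AbstractCaseService does not appear in the question; it only illustrates the idea):
// Hypothetical parent class that now owns the previously duplicated logic;
// it gets its own unit tests, written once.
public abstract class AbstractCaseService {

    protected boolean areaExists(String area) {
        // shared logic both services used to duplicate
        return area != null && !area.trim().isEmpty();
    }
}

// A child service adds only its own behaviour, so its unit tests
// cover only that behaviour and nothing already tested in the parent.
public class TestCaseService1 extends AbstractCaseService {

    public String describeArea(String area) {
        return areaExists(area) ? "known area: " + area : "unknown area";
    }
}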
Assuming you want to run the exact same test for 2 different classes (and not mock them as in your example code), you can create an abstract test class that has an abstract method returning an instance of the class to be tested.
Something in the vein of:
public abstract class TestCaseResourceTest {

    protected TestCaseService1 testCaseService1;

    protected abstract TestCaseService1 getServiceToTest();

    @Before
    public void before() throws Exception {
        testCaseService1 = getServiceToTest();
        MockitoAnnotations.initMocks(this);
        beforeFileTest();
    }

    protected void beforeFileTest() throws Exception {
        // shared stubbing goes here
    }

    @Test
    public void test() {
        // do your test here
    }
}

public class ConcreteTest extends TestCaseResourceTest {
    @Override
    protected TestCaseService1 getServiceToTest() {
        return new TestCaseService1();
    }
}

public class ConcreteTest2 extends TestCaseResourceTest {
    @Override
    protected TestCaseService1 getServiceToTest() {
        return new DifferentService(); // assuming DifferentService extends TestCaseService1
    }
}
Have you considered using JUnit 5 with its parameterized tests (http://junit.org/junit5/docs/current/user-guide/#writing-tests-parameterized-tests)?
It allows you to re-use your tests with different input. This is an example from the documentation which illustrates what you can do now with JUnit 5:
@ParameterizedTest
@ValueSource(strings = { "Hello", "World" })
void testWithStringParameter(String argument) {
    assertNotNull(argument);
}
But you can also create your own methods that return the input data:
@ParameterizedTest
@MethodSource("stringProvider")
void testWithSimpleMethodSource(String argument) {
    assertNotNull(argument);
}

static Stream<String> stringProvider() {
    return Stream.of("foo", "bar");
}
Here I am using just strings, but you can really use any objects.
If you are using Maven, you can add these dependencies to start using JUnit 5:
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-params</artifactId>
    <version>5.0.0-RC2</version>
    <scope>test</scope>
</dependency>
The only annoying thing about JUnit 5 is that it is not released yet.
When going from one test to two tests, you don't yet know what the duplicate code will be, so I find it useful to put everything into one test method. In this case, start by inlining the contents of the @Before and beforeFileTest() methods in the test.
Then you can see that it is just the service that needs changing, so you can extract everything except that into a helper method that is called from both tests.
Also, once you have two tests calling the same helper method and are happy with that test coverage, you could look into writing parameterized tests, for example with JUnitParams: https://github.com/Pragmatists/junitparams/wiki/Quickstart
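A minimal JUnitParams sketch, unrelated to your services, just to show the shape (the AdditionTest name and the arithmetic are made up for illustration):
import static org.junit.Assert.assertEquals;

import junitparams.JUnitParamsRunner;
import junitparams.Parameters;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JUnitParamsRunner.class)
public class AdditionTest {

    // each string is split on commas and bound to the method parameters
    @Test
    @Parameters({ "1, 2, 3",
                  "2, 2, 4" })
    public void addsTwoNumbers(int a, int b, int expectedSum) {
        assertEquals(expectedSum, a + b);
    }
}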

getting TestNG to treat class variables like JUnit with Guice

I am trying to set up TestNG so that it gives me new instances of my class variables for each test (basically like JUnit). I need this because I intend to parallelize my tests at the method level. I have been experimenting with both standalone Guice and the built-in Guice functionality that TestNG provides to try to accomplish this, but I have had no luck. I know that I can use ThreadLocal, but calling .get() for every variable in the test is pretty unappealing. I am wary of using GuiceBerry as it does not have a lot of updates/activity and its last release is not even available via Maven. I am pretty set on TestNG, as for all the inconvenience this is causing me it still does a lot of great things, but I am open to other tools to accomplish my goal. Basically I want things set up so the tests below would work consistently. Any help would be greatly appreciated.
// Parent just has a field 'child' whose class has a simple String field
// 'value' initialized to "original"
Parent p;

@Test
public void sometest1(){
    p.child.value = "Altered";
    Assert.assertTrue(p.child.value.equals("Altered"));
}

@Test
public void sometest2(){
    Assert.assertTrue(p.child.value.equals("original"));
}
TestNG doesn't create a new instance for each test. If you want such behavior, then I recommend creating separate test classes, e.g.:
public class SomeTest1 {
    Parent p;

    @Test
    public void something(){
        p.child.value = "Altered";
        Assert.assertTrue(p.child.value.equals("Altered"));
    }
}

public class SomeTest2 {
    Parent p;

    @Test
    public void something(){
        Assert.assertTrue(p.child.value.equals("original"));
    }
}
Note that TestNG can run JUnit 3 and JUnit 4 tests (you might maintain a mixed suite depending on the style you want to use in a given test class).
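If separate classes feel too heavy, another option is to rebuild the fixture in a @BeforeMethod, since TestNG reuses one instance of the test class for all of its methods. A minimal sketch, assuming Parent has a no-arg constructor that initializes child.value to "original":
import org.testng.Assert;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class SomeTest {

    Parent p;

    @BeforeMethod
    public void setUp() {
        // TestNG reuses this instance for every test method,
        // so recreate the fixture here to get a clean state each time
        p = new Parent(); // assumes a constructor that resets child.value to "original"
    }

    @Test
    public void sometest1() {
        p.child.value = "Altered";
        Assert.assertEquals(p.child.value, "Altered");
    }

    @Test
    public void sometest2() {
        Assert.assertEquals(p.child.value, "original");
    }
}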

How to make the unit test execute a particular test case every time it sees a certain function in the java project under test?

I am having a build failure issue while running a bunch of unit tests over a Java project. I am getting a NoClassDefFoundError, which happens because the unit tests cannot resolve some of their dependencies. I am trying to mock an object for the class and then call the function, but the code is structured in a way that makes the issue a bit complex for me to handle. I am very new to unit testing. I have provided below a sample of the code structure in my project:
class ServiceProvider {
    // obj declarations;

    public void mainFunction() {
        // Does a couple of things and calls a function in another class
        boolean val = subFunction();
    }

    public boolean subFunction() {
        boolean val = AnotherClass.someFunction(text);
        // this function throws lots of exceptions and all those are caught and handled
        return val;
    }
}

@RunWith(MockitoJUnitRunner.class)
class UnitTestBunch {

    @Mock
    AnotherClass acObj = new AnotherClass();

    @InjectMocks
    ServiceProvider sp = new ServiceProvider();

    @Test
    public void unitTest1() throws Exception {
        when(acObj.someFunction(text)).thenReturn(true);
    }

    @Test
    public void unitTest2() throws Exception {
        thrown.expect(ExceptionName.class);
        sp.mainFunction();
    }
}
I have a test that uses the mock object and performs the function call associated with that class. But the issue here is that there are a bunch of other unit tests written similarly to unitTest2 that call mainFunction at the end of the test. This mainFunction invokes someFunction() and causes the NoClassDefFoundError. I am trying to make the unit tests execute the content of unitTest1 every time they hit AnotherClass.someFunction(). I am not sure if this is achievable, and there could be a better way to resolve this issue. Could someone please pitch in some ideas?
In your test you seem to be using unitTest1 for setup, not for testing anything. When you run a unit test, each test should be able to run separately or together, in any order.
You're using JUnit 4 in your tests, so it would be very easy to move the statement you have in unitTest1 into a @Before method. JUnit 4 will call this method before each test method (annotated with @Test).
@Before
public void stubAcObj() throws Exception {
    when(acObj.someFunction(text)).thenReturn(true);
}
The method may be named anything (setUp() is a common name, borrowed from the method you would override in JUnit 3), but it must be annotated with org.junit.Before.
If you need this from multiple test cases, you should just create a helper, as you would with any other code. This doesn't work as well with @InjectMocks, but you may want to avoid using @InjectMocks in general, as it fails silently if you add a dependency to your system under test.
public class AnotherClassTestHelper {
    /** Returns a Mockito mock of AnotherClass with a stub for someFunction. */
    public static AnotherClass createAnotherClassMock() {
        AnotherClass mockAnotherClass = Mockito.mock(AnotherClass.class);
        when(mockAnotherClass.someFunction(text)).thenReturn(true);
        return mockAnotherClass;
    }
}
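A hedged usage sketch of that helper; the constructor-style injection of ServiceProvider shown here is an assumption, so adapt it to however the dependency actually reaches the class:
@Test
public void unitTest2() throws Exception {
    // obtain the pre-stubbed mock from the shared helper
    AnotherClass acObj = AnotherClassTestHelper.createAnotherClassMock();

    // hypothetical wiring: pass the mock into the system under test
    ServiceProvider sp = new ServiceProvider(acObj);

    sp.mainFunction(); // someFunction(...) is now stubbed to return true
}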
As a side note, this is a counterintuitive pattern:
/* BAD */
@Mock
AnotherClass acObj = new AnotherClass();

You create a new, real AnotherClass, then instruct Mockito to overwrite it with a mock (via MockitoJUnitRunner). It's much better to just say:

/* GOOD */
@Mock AnotherClass acObj;

Declaring JMockit mock parameters on @BeforeMethod of TestNG

I've been testing my code behavior using TestNG and JMockit for a while now and I have had no specific issues with the combination. Today I came across a situation where I needed to mock one of my internal dependencies in the so-called type-wide manner, and I did not need to keep that mock around, since none of the test cases dealt with it directly while they all relied on the mocked functionality. So, naturally, I put the mocking logic in my @BeforeMethod. Here is a sample:
public class SampleTest
{
    @Mocked
    @Cascading
    private InnerDependency dependency;

    @BeforeMethod
    public void beforeMethod()
    {
        new NonStrictExpectations()
        {
            {
                dependency.getOutputStream((String) any);
                result = new Delegate<OutputStream>()
                {
                    public OutputStream getOutputStream(String url)
                    {
                        return null;
                    }
                };
            }
        };
    }

    @Test
    public void testNormalOperation()
    {
        // The test whose desired behavior depends on dependency being mocked out
        // ..
    }
}
But since my tests do not care about the mocked dependency explicitly, I'm not willing to declare it as a test class field as is done above. To my knowledge of JMockit, the only remaining options would be:
Declare dependency as a local mock field:
new NonStrictExpectations()
{
    @Cascading
    private InnerDependency dependency;
    {
        //...
    }
}
Declare dependency as an input argument for beforeMethod(), similar to what is done for normal @Test methods:
@BeforeMethod
public void beforeMethod(@Mocked @Cascading final InnerDependency dependency)
{
    // ...
}
I see that JMockit 1.6+ does not like the first option and warns with WARNING: Local mock field "dependency" should be moved to the test class or converted to a parameter of the test method. Hence, to keep everyone happy, I'm ruling this option out.
But for the second option, TestNG (currently 6.8.6) throws an exception when running the test: java.lang.IllegalArgumentException: wrong number of arguments. I don't see this behavior with normal @Test cases that take @Mocked parameters. Even playing with @Parameters and @Optional does not help (and should not have to!).
So, is there any way I could make this work without declaring the unnecessary test class mock field, or am I missing something here?
Thanks
Only test methods (annotated with @Test in JUnit or TestNG) support mock parameters, so the only choice here is to declare a mock field at the test class level.
Even if it is not used in any test method, I think that is better than declaring it in a setup method (using @Before, @BeforeMethod, etc.). Even if that were possible, the mock would still have to apply to all tests because of the nature of setup methods; having a mock field on the test class makes it clear what the scope of the mock is.
Dynamic partial mocking is one more technique for applying @Mocked dependencies locally. However, it has its limitations (see comments below).

JUnit 4 test suite problems

I have a problem with some JUnit 4 tests that I run with a test suite.
If I run the tests individually they work with no problems, but when run in a suite most of them, about 90% of the test methods, fail with errors. What I noticed is that the first test always works fine but the rest fail. Another thing is that in a few of the tests the methods are not executed in the right order (reflection does not necessarily return the methods in the order they were written). This usually happens if there is more than one test class with methods that have the same name. I tried to debug some of the tests and it seems that from one line to the next the value of some attributes becomes null.
Does anyone know what the problem is, or whether this behavior is "normal"?
Thanks in advance.
P.S.:
OK, the tests do not depend on each other, none of them do, and they all have @BeforeClass, @Before, @After and @AfterClass methods, so between tests everything is cleaned up. The tests work with a database, but the database is cleared before each test in the @BeforeClass, so this should not be the problem.
Simplified example:
TEST SUITE:
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
// imports of the test classes...

@RunWith(Suite.class)
@Suite.SuiteClasses({ Test1.class, Test2.class })
public class TestSuiteX {

    @BeforeClass
    public static void setupSuite() { System.out.println("Tests started"); }

    @AfterClass
    public static void teardownSuite() { System.out.println("Tests finished"); }
}
TESTS:
The tests exercise functionality of a server application running on Glassfish.
The tests extend a base class that has the @BeforeClass method, which clears the database and logs in, and the @AfterClass method, which only logs off.
This is not the source of the problems, because the same thing happened before this class was introduced.
The base class has some public static attributes that are not used in the other tests and implements the two control methods.
The rest of the classes, the two in this example, extend the base class and do not override the inherited control methods.
Example of the test classes:
imports....

public class Test1 extends AbstractTestClass {
    protected static Log log = LogFactory.getLog( Test1.class.getName() );

    @Test
    public void test1_A() throws CustomException1, CustomException2 {
        System.out.println("text");
        // creates some entities with the server api
        // deletes a couple of entities with the server api
        // tests if the entities exist in the database
        Assert.assertNull( serverapi.isEntity(..) );
    }
}
and the second :
public class Test1 extends AbstractTestClass {
protected static Log log = LogFactory.getLog( Test1.class.getName() );
private static String keyEntity;
private static EntityDO entity;
#Test
public void test1_B() throws CustomException1, CustomException2 {
System.out.println("text");
creates some entities with the server api, adds one entities key to the static attribute and one entity DO to the static attribute for the use in the next method.
deletes a couple of entities with the server api.
//tests if the extities exists in the database
Assert.assertNull( serverapi.isEntity(..) );
}
#Test
public void test2_B() throws CustomException1, CustomException2 {
System.out.println("text");
deletes the 2 entities, the one retrieved by the key and the one associated with the static DO attribute
//tests if the deelted entities exists in the database
Assert.assertNull( serverapi.isEntity(..) );
}
This is a basic example; the actual tests are more complex, but I tried with simplified tests and it still does not work.
Thank you.
The situation you describe sounds like a side-effect problem. You mention that the tests work fine in isolation but depend on the order of operations: that's usually a critical symptom.
Part of the challenge of setting up a whole suite of test cases is the problem of ensuring that each test starts from a clean state, performs its testing and then cleans up after itself, putting everything back in the clean state.
Keep in mind that there are situations where the standard cleanup routines (e.g., @Before and @After) aren't sufficient. One problem I had some time ago was in a set of database tests: I was adding records to the database as part of the test and needed to specifically remove the records that I'd just added.
So, there are times when you need to add specific cleanup code to get back to your original state.
It seems that you built your test suite on the assumption that the order of executing methods is fixed. This is wrong - JUnit does not guarantee the order of execution of test methods, so you should not count on it.
This is by design: unit tests should be totally independent of each other. To help guarantee this, JUnit creates a distinct, new instance of your test class for executing each test method. So whatever attributes you set in one method will be lost in the next one.
If you have common test setup / teardown code, you should put it into separate methods annotated with @Before / @After. These are executed before and after each test method.
Update: you wrote
the database is cleared before each test in the @BeforeClass
if this is not a typo, this can be the source of your problems. The DB should be cleared in the @Before method - @BeforeClass is run only once per class.
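A minimal sketch of that change in the base class, with clearDatabase(), login() and logoff() as hypothetical stand-ins for your existing helpers:
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;

public abstract class AbstractTestClass {

    // runs once per test class: fine for genuinely one-off work such as logging in
    @BeforeClass
    public static void setUpClass() throws Exception {
        login();
    }

    // runs before *every* test method: clear the database here so each test
    // starts from a known state
    @Before
    public void setUp() throws Exception {
        clearDatabase();
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
        logoff();
    }

    // hypothetical helpers standing in for the existing implementations
    protected static void login() throws Exception { /* ... */ }
    protected static void logoff() throws Exception { /* ... */ }
    protected void clearDatabase() throws Exception { /* ... */ }
}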
Be careful in how you use @BeforeClass to set up things once and for all, and @Before to set up things before each individual test. And be careful about instance variables.
We may be able to help more specifically, if you can post a simplified example of what is going wrong.
