I'm executing a few hundred tests, in test classes consisting of a single beforeMethod test, followed by a variable number of primary tests, and occasionally an afterMethod.
The purpose of the beforeMethod test is to populate the test environment with the data used in the primary tests, while keeping its logging and recording separate from the primary tests, which we report on.
We have set up an automatic issue-creation tool using a listener. We've found that adding execution time to this tool would be of great value, since it would show us how long it takes to reproduce the errors in those issues.
To this end, I made a simple addition to this code that uses ITestResult.getEndMillis() and getStartMillis() to compute the execution time.
The problem we're experiencing with this approach is that if the test encounters a failure during the primary tests, ITestResult.getStartMillis() will reflect only the start time of the primary method, not the start time of the before method.
How would we go about determining the start time of the test class itself (always the beforeMethod), rather than just the current method?
Since we're running hundreds of tests in a massive setup, a solution that does not require changing each separate test class would definitely be preferable.
The setup of the Java test classes looks something like this (scrubbed of business specifics):
package foobar;

import foobar.*;

@UsingTunnel
@Test
public class FLOWNAME_TESTNAME extends TestBase {

    private final Value<String> parameter;

    public FLOWNAME_TESTNAME(Value<String> parameter) {
        super(PropertyProviderImpl.get());
        this.parameter = parameter;
    }

    @StoryCreating(test = "TESTNAME")
    @BeforeMethod
    public void CONDITIONS() throws Throwable {
        new TESTNAME_CONDITIONS(parameter).executeTest();
    }

    @TestCoverage(test = "TESTNAME")
    public void PRIMARYTESTS() throws Throwable {
        TESTCASE1 testcase1 = new TESTCASE1(parameter.get());
        testcase1.executeTest();
        testcase1.throwSoftAsserts();

        TESTCASE2 testcase2 = new TESTCASE2(parameter.get());
        testcase2.executeTest();
        testcase2.throwSoftAsserts();
    }
}
So in this case, the problem arises when the listener detects a failure in either TESTCASE1 or TESTCASE2: their results will not include the execution time of TESTNAME_CONDITIONS, because that test runs in a different method, yet practically speaking they are part of the same test flow, i.e. the same test class.
I found a solution to the issue.
It is possible to use ITestResult.getTestContext().getStartDate().getTime() to obtain the time at which the test class itself started running, rather than the current test method.
The final solution was quite simply:
(result.getEndMillis() - result.getTestContext().getStartDate().getTime()) / 60000
Where "result" corresponds to ITestResult.
This outputs the time between the start of the test and the end of the last executed method.
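For context, here is a minimal sketch of how this can look inside a listener; the listener class name and the println are placeholders for our actual issue-creation code, but the TestNG calls are real API:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class IssueCreationListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        // Start of the test context (which includes the @BeforeMethod),
        // rather than just the start of the failing method
        long contextStart = result.getTestContext().getStartDate().getTime();
        long minutes = (result.getEndMillis() - contextStart) / 60000;
        // Placeholder for the actual issue-creation call
        System.out.println("Time to reproduce (minutes): " + minutes);
    }
}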
I am trying to create test cases at runtime.
Background:
I'm calling the test like this:
public class XQTest {

    XQueryTest buildTest = new XQueryTest();

    @Test
    public void test() throws Exception {
        buildTest.test();
    }
}
Afterwards it searches the file directory for matching files and builds tests from them.
XQueryTest.java
tester = new XQueryTester(a, b);
tester.testHeader(c, d);
XQueryTester.java performs the actual assertion.
Is it possible to "outsource" these actual test cases, so that it's easier to identify which test failed on Jenkins? At the moment I only have one test (XQTest.java), which generates several tests.
Another problem is that if one test fails, the whole test fails and the rest are skipped, even though the failure is just one part of the whole.
JUnit 5 supports runtime test generation via the TestFactory and DynamicTest concepts.
See
https://dzone.com/articles/junit-5-dynamic-tests-generate-tests-at-run-time
https://www.baeldung.com/junit5-dynamic-tests
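For example, here is a minimal sketch of one dynamic test per file, assuming JUnit 5 is on the classpath; the file names and the parse(...) helper are placeholders for the real directory scan and the real XQueryTester assertion:

import java.util.stream.Stream;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class XQDynamicTest {

    // Placeholder for the real per-file check performed by XQueryTester
    private boolean parse(String file) {
        return true;
    }

    @TestFactory
    Stream<DynamicTest> xqueryFiles() {
        // Placeholder list; in practice, scan the file directory here
        return Stream.of("query1.xq", "query2.xq")
                .map(file -> DynamicTest.dynamicTest(
                        "parses " + file,
                        () -> Assertions.assertTrue(parse(file))));
    }
}

Each generated DynamicTest is reported as its own named test, so a single bad file no longer fails or skips the rest.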
I have a very strange problem: when I try to run a JUnit test class with multiple test cases, only the first test case passes, and the rest fail with an IndexOutOfBounds error.
public class ABCTest {

    @Test
    public void basicTest1() {...}

    @Test
    public void basicTest2() {...}

    ...
but if I comment out the remaining test cases and run them one by one, all of them pass.
public class ABCTest {

    @Test
    public void basicTest1() {...}

    //@Test
    //public void basicTest2(){...}
    //...
Since you do not provide the complete test case and implementation class, I have to make some assumptions.
Most likely the test cases are mutating the state of the tested object.
Usually you try to get a clean test fixture for each unit test. This works by having a method with the @Before annotation which creates a new instance of the class under test. (This was called setUp() in older versions of JUnit.)
This ensures that neither the order of test method execution nor the number of executions matters, and that each method works in isolation.
Look at what you are doing inside the test cases and see whether you are changing data used by other test cases without restoring it to its original state. For example, you have a text file that you read and write in basicTest1, and then read again in basicTest2, assuming the file is the same as it was before basicTest1 ran.
This is just one possible problem; we would need to see the code for more insight.
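For illustration, a minimal sketch of that clean-fixture pattern; the Counter class is a placeholder for whatever class is under test:

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class ABCTest {

    // Placeholder class under test
    static class Counter {
        private int value;
        void increment() { value++; }
        int value() { return value; }
    }

    private Counter counter;

    @Before
    public void setUp() {
        // Fresh instance before every @Test method, so no state leaks between tests
        counter = new Counter();
    }

    @Test
    public void basicTest1() {
        counter.increment();
        assertEquals(1, counter.value());
    }

    @Test
    public void basicTest2() {
        // Starts from a clean fixture regardless of what basicTest1 did
        assertEquals(0, counter.value());
    }
}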
I'm having a strange problem with EasyMock 3.0 and JUnit 4.8.2.
The problem only occurs when executing the tests from Maven, not from Eclipse.
This is the unit test (very simple):
...
protected ValueExtractorRetriever mockedRetriever;
...
@Before
public void before() {
    mockedRetriever = createStrictMock(ValueExtractorRetriever.class);
}

@After
public void after() {
    reset(mockedRetriever);
}

@Test
public void testNullValueExtractor() {
    expect(mockedRetriever.retrieve("PROP")).andReturn(null).once();
    replay(mockedRetriever);
    ValueExtractor retriever = mockedRetriever.retrieve("PROP");
    assertNull(retriever);
    assertTrue(true);
}
And I get:
java.lang.IllegalStateException: 1 matchers expected, 2 recorded.
The weird thing is that I'm not even using an argument matcher, and that is the only method of the test! And to make it even worse, it works from Eclipse and fails from Maven!
I found a few links which didn't provide me with an answer:
Another StackOverflow post
Expected Exceptions in JUnit
If I change the unit test and add one more method (which does use an argument matcher):
@Test
public void testIsBeforeDateOk() {
    expect(mockedRetriever.retrieve((String) anyObject())).andReturn(new PofExtractor()).anyTimes();
    replay(this.mockedRetriever);
    FilterBuilder fb = new FilterBuilder();
    assertNotNull(fb);
    CriteriaFilter cf = new CriteriaFilter();
    assertNotNull(cf);
    cf.getValues().add("2010-12-29T14:45:23");
    cf.setType(CriteriaType.DATE);
    cf.setClause(Clause.IS_BEFORE_THE_DATE);
    CriteriaQueryClause clause = CriteriaQueryClause.fromValue(cf.getClause());
    assertNotNull(clause);
    assertEquals(CriteriaQueryClause.IS_BEFORE_THE_DATE, clause);
    clause.buildFilter(fb, cf, mockedRetriever);
    assertNotNull(fb);
    Filter[] filters = fb.getFilters();
    assertNotNull(filters);
    assertEquals(filters.length, 1);
    verify(mockedRetriever);
    logger.info("OK");
}
this last method passes the test, but the other one does not. How is this possible!?!?!
Regards,
Nico
More links:
http://bartling.blogspot.com/2009/11/using-argument-matchers-in-easymock-and.html
http://www.springone2gx.com/blog/scott_leberknight/2008/09/the_n_matchers_expected_m_recorded_problem_in_easymock
https://stackoverflow.com/questions/4605997/3-matchers-expected-4-recorded
I had a very similar problem and wrote up my findings in the link below.
http://www.flyingtomoon.com/2011/04/unclosed-record-state-problem-in.html (just updated)
I believe the problem is in another test that affects your current one: the real problem is in a different test class, and it spills over into your test. To find where the real problem is, I advise disabling the suspect tests one by one until you spot the failing test.
Actually, this is what I did: I disabled the failing tests one by one until I found the problematic test. It was a test that throws an exception which is caught by the expected-exception annotation (@Test(expected = ...)) without stopping the recording.
We had this problem recently, and it only reared its head when we ran the entire test suite (1100+ test cases). Eventually, I found that I could put a breakpoint on the test that was blowing up, and then step back in the list of tests that Eclipse had already executed, looking for the previous test case that had set up a mock incorrectly.
Our problem turned out to be somebody using EasyMock.anyString() outside of an EasyMock.expect(...) statement. Sure enough, it was done two tests before the one that was failing.
So essentially, what was happening is that the misuse of a matcher outside of an expect statement was poisoning EasyMock's state, and the next time we tried to create a mock, EasyMock would blow up.
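To illustrate that failure mode, here is a sketch; the Retriever interface and both test methods are placeholders, not the poster's actual classes:

import static org.easymock.EasyMock.anyString;
import static org.easymock.EasyMock.createStrictMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;

import org.junit.Test;

public class MatcherLeakExample {

    interface Retriever {
        String retrieve(String key);
    }

    @Test
    public void poisoningTest() {
        // WRONG: a matcher invoked outside of an expect(...) call.
        // EasyMock records it in shared state, so the *next* expectation,
        // possibly in a different test class, fails with
        // "1 matchers expected, 2 recorded".
        anyString();
    }

    @Test
    public void victimTest() {
        Retriever mock = createStrictMock(Retriever.class);
        // Correct on its own, but blows up if poisoningTest() ran first
        expect(mock.retrieve("PROP")).andReturn(null).once();
        replay(mock);
    }
}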
I believe the first error message,
java.lang.IllegalStateException: 1 matchers expected, 2 recorded.
means your mockedRetriever method was called twice while the test expected it to be called once. So your Eclipse and Maven configurations differ.
Also, I see no reason to reset the mock after the test. Just keep in mind that JUnit creates a new instance of the test class for every single test method.
EDITED:
As for why the last test method passes, the answer is that it uses:
expect(mockedRetriever.retrieve((String)anyObject())).andReturn(new PofExtractor()).anyTimes();
But in your first test method it is:
expect(mockedRetriever.retrieve("PROP")).andReturn(null).once();
which is the equivalent of:
expect(mockedRetriever.retrieve("PROP")).andReturn(null);
I have a problem with some JUnit 4 tests that I run as part of a test suite.
If I run the tests individually they work with no problems, but when run in a suite most of them (90% of the test methods) fail with errors. What I noticed is that the first tests always work fine, but the rest fail. Another thing is that in a few of the tests the methods are not executed in the right order (reflection does not necessarily return the methods in their declared order). This usually happens when more than one test class has methods with the same name. I tried to debug some of the tests, and it seems that from one line to the next the value of some attributes becomes null.
Does anyone know what the problem is, or whether this behavior is "normal"?
Thanks in advance.
P.S.:
OK, the tests do not depend on each other, none of them do, and they all have @BeforeClass, @Before, @After and @AfterClass methods, so everything is cleaned up between tests. The tests work with a database, but the database is cleared before each test in the @BeforeClass, so this should not be the problem.
Simplified example:
TEST SUITE:
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import testclasses...;

@RunWith(Suite.class)
@Suite.SuiteClasses({ Test1.class, Test2.class })
public class TestSuiteX {

    @BeforeClass
    public static void setupSuite() { System.out.println("Tests started"); }

    @AfterClass
    public static void teardownSuite() { System.out.println("Tests finished"); }
}
TESTS:
The tests exercise functionality of a server application running on GlassFish.
The tests extend a base class whose @BeforeClass method clears the database and logs in, and whose @AfterClass method only logs off.
This is not the source of the problems, because the same thing happened before this class was introduced.
The base class has some public static attributes that are not used in the other tests, and it implements the two control methods.
The rest of the classes, the two in this example, extend the base class and do not override the inherited control methods.
Example of the test classes:
imports....

public class Test1 extends AbstractTestClass {

    protected static Log log = LogFactory.getLog( Test1.class.getName() );

    @Test
    public void test1_A() throws CustomException1, CustomException2 {
        System.out.println("text");
        // creates some entities with the server API
        // deletes a couple of entities with the server API
        // tests if the entities exist in the database
        Assert.assertNull( serverapi.isEntity(..) );
    }
}
and the second:

public class Test2 extends AbstractTestClass {

    protected static Log log = LogFactory.getLog( Test2.class.getName() );

    private static String keyEntity;
    private static EntityDO entity;

    @Test
    public void test1_B() throws CustomException1, CustomException2 {
        System.out.println("text");
        // creates some entities with the server API; stores one entity's key and one
        // entity DO in the static attributes for use in the next method
        // deletes a couple of entities with the server API
        // tests if the entities exist in the database
        Assert.assertNull( serverapi.isEntity(..) );
    }

    @Test
    public void test2_B() throws CustomException1, CustomException2 {
        System.out.println("text");
        // deletes the 2 entities: the one retrieved by the key and the one associated
        // with the static DO attribute
        // tests if the deleted entities exist in the database
        Assert.assertNull( serverapi.isEntity(..) );
    }
}
This is a basic example; the actual tests are more complex, but I tried with simplified tests and still it does not work.
Thank you.
The situation you describe sounds like a side-effecting problem. You mention that tests work fine in isolation but are dependent on order of operations: that's usually a critical symptom.
Part of the challenge of setting up a whole suite of test cases is the problem of ensuring that each test starts from a clean state, performs its testing and then cleans up after itself, putting everything back in the clean state.
Keep in mind that there are situations where the standard cleanup routines (e.g., @Before and @After) aren't sufficient. One problem I had some time ago was with a set of database tests: I was adding records to the database as part of the test and needed to specifically remove the records I'd just added.
So, there are times when you need to add specific cleanup code to get back to your original state.
It seems that you built your test suite on the assumption that the order of executing methods is fixed. This is wrong - JUnit does not guarantee the order of execution of test methods, so you should not count on it.
This is by design - unit tests should be totally independent of each other. To help guaranteeing this, JUnit creates a distinct, new instance of your test class for executing each test method. So whatever attributes you set in one method, will be lost in the next one.
If you have common test setup / teardown code, you should put it into separate methods, annotated with #Before / #After. These are executed before and after each test method.
Update: you wrote
the database is cleared before each test in the @BeforeClass
If this is not a typo, this can be the source of your problems. The DB should be cleared in the @Before method; @BeforeClass is run only once for each class.
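A minimal sketch of the distinction; clearDatabase() and login() stand in for the real helpers described above:

import org.junit.Before;
import org.junit.BeforeClass;

public abstract class AbstractTestClass {

    @BeforeClass
    public static void setUpClass() {
        // Runs ONCE per test class: fine for expensive setup such as logging in
        login();
    }

    @Before
    public void setUp() {
        // Runs before EVERY test method: clearing the DB belongs here,
        // otherwise later tests see leftovers from earlier ones
        clearDatabase();
    }

    static void login() { /* placeholder */ }

    void clearDatabase() { /* placeholder */ }
}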
Be careful in how you use #BeforeClass to set up things once and for all, and #Before to set up things before each individual test. And be careful about instance variables.
We may be able to help more specifically, if you can post a simplified example of what is going wrong.
Normally I would have one JUnit test that shows up in my integration server of choice (in this case TeamCity) as one test that passes or fails. What I need for this specific test is the ability to loop through a directory structure, testing that all of our data files can be parsed without throwing an exception.
Because we have 30,000+ files that take 1-5 seconds each to parse, this test will be run in its own suite. The problem is that I need a way to have one piece of code run as one JUnit test per file, so that if 12 of the 30,000 files fail I can see which 12 failed, not just that one test failed, threw a RuntimeException and stopped.
I realize that this is not a true "unit" test way of doing things, but this simulation is very important to make sure that our content providers are kept in check and do not check in invalid files.
Any suggestions?
I think what you want is parameterized tests. They are available if you're using JUnit 4 (or TestNG). Since you mention JUnit, you'll want to look at the documentation for the @RunWith(Parameterized.class) and @Parameters annotations.
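A hedged sketch of what that can look like for a directory of files; the "data" directory and the parse(...) helper are placeholders, and the name attribute of @Parameters requires JUnit 4.11+:

import java.io.File;
import java.util.ArrayList;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class FileParseTest {

    @Parameters(name = "{0}")  // each file shows up as its own named test
    public static Collection<Object[]> files() {
        Collection<Object[]> data = new ArrayList<Object[]>();
        for (File f : new File("data").listFiles()) {
            data.add(new Object[] { f });
        }
        return data;
    }

    private final File file;

    public FileParseTest(File file) {
        this.file = file;
    }

    @Test
    public void parses() throws Exception {
        // An exception here fails only this file's test, not the whole run
        parse(file);
    }

    // Placeholder for the real parser
    private void parse(File f) throws Exception { }
}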
I'd write one test that reads all the files, in a loop or by some other means, and collects all the failed files in a collection of some kind for reporting.
Maybe a better solution would be a TestNG test with a DataProvider to pass along the list of file paths to read. TestNG will create and run one test for each file path parameter passed in.
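A sketch of that TestNG variant, under the same assumptions as above (placeholder "data" directory and parse(...) helper):

import java.io.File;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class FileParseNGTest {

    @DataProvider(name = "dataFiles")
    public Object[][] dataFiles() {
        File[] files = new File("data").listFiles();
        Object[][] params = new Object[files.length][];
        for (int i = 0; i < files.length; i++) {
            params[i] = new Object[] { files[i] };
        }
        return params;
    }

    @Test(dataProvider = "dataFiles")
    public void parses(File file) throws Exception {
        // TestNG runs and reports this method once per file
        parse(file);
    }

    // Placeholder for the real parser
    private void parse(File f) throws Exception { }
}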
A JUnit 3 answer: create a TestSuite that creates the instances of the TestCases you need, with each TestCase initialized according to your dynamic data. The suite runs as a whole within a single JVM instance, but the individual TestCases are independent of each other (setUp and tearDown get called, error handling is correct, reporting gives what you asked for, etc.).
The actual implementation can be a bit clumsy, because TestCase conflates the name of the test with the method to be run, but that can be worked around.
We normally just combine the suite with the dynamic testcases in the same class, and use the suite() method to get the TestSuite. Ant's JUnit task is smart enough to notice this, for example.
import java.io.File;

import junit.framework.TestCase;
import junit.framework.TestSuite;

public class DynamicTest extends TestCase {

    String filename;

    public DynamicTest(String crntFile) {
        super("testMethod");
        filename = crntFile;
    }

    // This is gross, but necessary if you want to be able to
    // distinguish which test failed - otherwise they all share
    // the name DynamicTest.testMethod.
    public String getName() {
        return this.getClass().getName() + " : " + filename;
    }

    // Here's the actual test
    public void testMethod() {
        File f = new File(filename);
        assertTrue(f.exists());
    }

    // Here's the magic
    public static TestSuite suite() {
        TestSuite s = new TestSuite();
        for (String crntFile : getListOfFiles()) {
            s.addTest(new DynamicTest(crntFile));
        }
        return s;
    }

    // Placeholder: getListOfFiles() was assumed by the original snippet;
    // supply the real dynamic file list here.
    private static String[] getListOfFiles() {
        return new String[] { "a.txt", "b.txt" };
    }
}
You can, of course, separate the TestSuite from the TestCase if you prefer. The TestCase doesn't hold up well on its own, though, so you'll need to take some care with your naming conventions if your tests are being auto-detected.