JUnit test that creates other tests - Java

Normally I have one JUnit test that shows up in my integration server of choice (in this case TeamCity) as one test that passes or fails. What I need for this specific test is the ability to loop through a directory structure, testing that all of our data files can be parsed without throwing an exception.
Because we have 30,000+ files that take 1-5 seconds each to parse, this test will be run in its own suite. The problem is that I need a way to have one piece of code run as one JUnit test per file, so that if 12 files out of 30,000 fail I can see which 12 failed, not just that one failed, threw a RuntimeException and stopped the test.
I realize that this is not a true "unit" test way of doing things but this simulation is very important to make sure that our content providers are kept in check and do not check in invalid files.
Any suggestions?

I think what you want is parameterized tests. They're available if you're using JUnit 4 (or TestNG). Since you mention JUnit, you'll want to look at the documentation for the @RunWith(Parameterized.class) and @Parameters annotations.
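A rough sketch of what that can look like (not from the original answer): Parser.parse and the "data" directory below are hypothetical placeholders for your own parsing code and file layout, and the name = "{0}" attribute needs JUnit 4.11 or later so each file shows up as its own named test:
import java.io.File;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParseAllFilesTest {

    // One parameter set per data file; each becomes its own reported test.
    @Parameters(name = "{0}")
    public static Collection<Object[]> files() {
        List<Object[]> params = new ArrayList<Object[]>();
        for (File f : new File("data").listFiles()) {   // assumed flat data directory - recurse as needed
            params.add(new Object[] { f });
        }
        return params;
    }

    private final File file;

    public ParseAllFilesTest(File file) {
        this.file = file;
    }

    @Test
    public void parsesWithoutException() throws Exception {
        Parser.parse(file);   // hypothetical parser entry point
    }
}
Each failing file then appears as an individual failed test in TeamCity, instead of the run stopping at the first exception.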

I'd write one test that reads all the files, either in a loop or by some other means, and collects all the failed files in a collection of some kind for reporting.
Maybe a better solution would be a TestNG test with a DataProvider to pass along the list of file paths to read. TestNG will create and run one test for each file path parameter passed in.
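For comparison, a minimal TestNG sketch along those lines (again, Parser.parse and the "data" directory are assumed placeholders):
import java.io.File;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class ParseAllFilesNGTest {

    // One row per file; TestNG runs parseFile once per row and reports each separately.
    @DataProvider(name = "dataFiles")
    public Object[][] dataFiles() {
        File[] files = new File("data").listFiles();   // assumed flat data directory
        Object[][] rows = new Object[files.length][];
        for (int i = 0; i < files.length; i++) {
            rows[i] = new Object[] { files[i] };
        }
        return rows;
    }

    @Test(dataProvider = "dataFiles")
    public void parseFile(File file) throws Exception {
        Parser.parse(file);   // hypothetical parser entry point
    }
}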

A JUnit 3 answer: create a TestSuite that creates the instances of the TestCases you need, with each TestCase initialized according to your dynamic data. The suite runs as a whole within a single JVM instance, but the individual TestCases are independent of each other (setUp and tearDown get called, the error handling is correct, reporting gives what you asked for, etc.).
The actual implementation can be a bit clumsy, because TestCase conflates the name of the test with the method to be run, but that can be worked around.
We normally just combine the suite with the dynamic test cases in the same class, and use the suite() method to return the TestSuite. Ant's JUnit task is smart enough to notice this, for example.
public class DynamicTest extends TestCase {

    String filename;

    public DynamicTest(String crntFile) {
        super("testMethod");
        filename = crntFile;
    }

    // This is gross, but necessary if you want to be able to
    // distinguish which test failed - otherwise they all share
    // the name DynamicTest.testMethod.
    public String getName() {
        return this.getClass().getName() + " : " + filename;
    }

    // Here's the actual test
    public void testMethod() {
        File f = new File(filename);
        assertTrue(f.exists());
    }

    // Here's the magic
    public static TestSuite suite() {
        TestSuite s = new TestSuite();
        for (String crntFile : getListOfFiles()) {
            s.addTest(new DynamicTest(crntFile));
        }
        return s;
    }
}
You can, of course, separate the TestSuite from the TestCase if you prefer. The TestCase doesn't hold up well on its own, though, so you'll need to take some care with your naming conventions if your tests are being auto-detected.

Related

TestNG - Getting start time of before method

I'm executing a few hundred tests in test classes, each consisting of a single beforeMethod test, followed by a variable number of primary tests and occasionally an afterMethod.
The purpose of the beforeMethod test is to populate the test environment with data used in the primary tests, while separating its logging and recording from the primary tests, which we report on.
We have set up an automatic issue creation tool using a listener. We've found that it would give great value to add execution time to this tool, so that it can show us how long it would take to reproduce the errors in said issues.
To this end, I have made a simple addition to this code, that uses ITestResult.getEndMillis() and getStartMillis() to get the execution time.
The problem we're experiencing with this approach, is that if the test encounters a failure during the primary tests, ITestResult.getStartMillis() will not account for the start time of the before method, but only the primary method.
How would we go about determining the start time of the test class itself (always the beforeMethod), rather than just the current method?
Since we're running hundreds of tests in a massive setup, a solution that allows this without changing each separate test class, would definitely be preferable.
The setup of the Java test classes looks something like this (scrubbed of business specifics):
package foobar;

import foobar;

@UsingTunnel
@Test
public class FLOWNAME_TESTNAME extends TestBase {

    private final Value<String> parameter;

    public FLOWNAME_TESTNAME(Value<String> parameter) {
        super(PropertyProviderImpl.get());
        this.parameter = parameter;
    }

    @StoryCreating(test = "TESTNAME")
    @BeforeMethod
    public void CONDITIONS() throws Throwable {
        new TESTNAME_CONDITIONS(parameter).executeTest();
    }

    @TestCoverage(test = "TESTNAME")
    public void PRIMARYTESTS() throws Throwable {
        TESTCASE1 testcase1 = new TESTCASE1(parameter.get());
        testcase1.executeTest();
        testcase1.throwSoftAsserts();

        TESTCASE2 testcase2 = new TESTCASE2(parameter.get());
        testcase2.executeTest();
        testcase2.throwSoftAsserts();
    }
}
So in this case, the problem arises when the listener detects a failure in either TESTCASE1 or TESTCASE2: their ITestResult will not include the execution time of TESTNAME_CONDITIONS, because that runs in a different method, yet practically speaking they are part of the same test flow, i.e. the same test class.
I found a solution to the issue.
It is possible to use ITestResult.getTestContext().getStartDate().getTime() to obtain the time at which the test class itself started running, rather than just the current test method.
The final solution was quite simply:
(result.getEndMillis() - result.getTestContext().getStartDate().getTime()) / 60000
Where "result" corresponds to ITestResult.
This outputs the time between the start of the test and the end of the last executed method.
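As an illustrative sketch of where that calculation could live, assuming a listener based on TestListenerAdapter (the class name and message text here are made up):
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class IssueTimingListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        // Start of the surrounding test context (covers the @BeforeMethod too),
        // not just the failing method itself.
        long contextStart = result.getTestContext().getStartDate().getTime();
        long minutes = (result.getEndMillis() - contextStart) / 60000;
        System.out.println("Approximate time to reproduce: " + minutes + " min");   // made-up message
    }
}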

Specific setup for each JUnit test

I have a sequence of tests which have to be fed input data in the form of a file. However, the exact data content to be fed into each one is specific to that test.
I intend to use temporary files to achieve this.
The setup method does not take a parameter.
So, what could be done so that the setup can be made to read a specific fragment for each specific test?
The actual set of steps in the setup would be the same - creating a temporary file - but with a specific, tailored piece of data.
Setup methods (i.e., methods annotated with @Before) are designed for running the same steps before every test case. If this isn't the behavior you need, just don't use them.
At the end of the day, a JUnit test is just Java - you could just have a method that takes a parameter and sets up the test accordingly, and call it explicitly with the different arguments you need:
public class MyTest {

    private void init(String fileName) {
        // Reads data from the file and sets up the test
    }

    @Test
    public void testSomething() {
        init("/path/to/some/file");
        // Perform the test and assert the result
    }

    @Test
    public void testSomethingElse() {
        init("/path/to/another/file");
        // Perform the test and assert the result
    }
}

JUnit Test at runtime

I am trying to create test cases at runtime.
Background:
I'm calling the test like this:
public class XQTest {

    XQueryTest buildTest = new XQueryTest();

    @Test
    public void test() throws Exception {
        buildTest.test();
    }
}
Afterwards it searches the file directory for matching files and builds tests from them.
XQueryTest.java
tester = new XQueryTester(a, b);
tester.testHeader(c, d);
XQueryTester.java performs the actual assertion.
Is it possible to "outsource" these actual test cases, so it's easier to identify which test failed on Jenkins? At the moment I only have one test (XQTest.java) which generates several tests.
Another problem is that if one test fails, the whole test fails and skips the rest, even though each failure is just one part of the whole.
JUnit 5 supports runtime test generation via the @TestFactory and DynamicTest concepts.
See
https://dzone.com/articles/junit-5-dynamic-tests-generate-tests-at-run-time
https://www.baeldung.com/junit5-dynamic-tests
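A minimal sketch of that approach, adapted to the file-driven setup described above; the "xqueries" directory and the simplified XQueryTester call are placeholders for the existing lookup and assertion code:
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class XQDynamicTest {

    @TestFactory
    List<DynamicTest> xqueryFiles() {
        List<DynamicTest> tests = new ArrayList<>();
        for (File f : new File("xqueries").listFiles()) {   // assumed directory of test inputs
            tests.add(DynamicTest.dynamicTest(f.getName(), () -> {
                // Simplified stand-in for the existing XQueryTester logic
                new XQueryTester(f).testHeader();
            }));
        }
        return tests;
    }
}
Each file becomes its own entry in the Jenkins report, and a failure in one dynamic test does not stop the remaining ones.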

JUnit 4.0: how to know the current test method name?

I've implemented a feature in my JUnit tests that takes, for every test case, a fresh copy of a data source. This copy is placed in a folder specific to each test case. The idea is that every test case can start from a clean situation, manipulate it, and leave it as such after the run. This is often useful for analysing the problem when a test fails.
For now I have to call this feature directly in the test method because I don't know how to retrieve the current test name:
public void testTest1() {
    TestHelper th = TestHelper.create("testTest1", subPathToDataSource);
    // do the test...
    Path dataPath = th.getDataPath();
    ...
}
I would like to be able to write something like this:
Path dataPath;

@Before
public void initTest() {
    th = TestHelper.create(SomeJUnitObject.getCurrentTestName(), subPathToDataSource);
    ...
}

public void testTest1() {
    // do the test...
    Path dataPath = th.getDataPath();
    ...
}
Until now the answers I've found amount to "You don't need to know that"... but I do need it!
Is this possible?
Kind regards
Look at the TestName rule.
You should be able to add in your test class:
@Rule public TestName name = new TestName();
And then access it.
(On phone, so can't check versions support/details - might be 4.x only)
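A small sketch of how that can plug into the setup described in the question (TestHelper and subPathToDataSource are the asker's own names, reused here with placeholder values):
import java.nio.file.Path;

import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;

public class MyDataSourceTest {

    @Rule
    public TestName name = new TestName();

    private final String subPathToDataSource = "some/sub/path";   // placeholder value
    private TestHelper th;

    @Before
    public void initTest() {
        // name.getMethodName() returns e.g. "testTest1" while that test runs
        th = TestHelper.create(name.getMethodName(), subPathToDataSource);
    }

    @Test
    public void testTest1() {
        Path dataPath = th.getDataPath();
        // do the test...
    }
}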
Here is an alternative approach; create an abstract class which your "real" test classes inherit.
I have several such examples in my projects and here I will give one, mainly testing for individual JSON Patch operations.
All my test files are JSON, and located under an appropriately named resource directory. The base, abstract class is JsonPatchOperationTest. And here is the full code of AddOperationTest which tests for JSON Patch's add operation:
public final class AddOperationTest
    extends JsonPatchOperationTest
{
    public AddOperationTest()
        throws IOException
    {
        super("add");
    }
}
And that's it! Not even one test method in this class, but of course your implementation may vary.
In your case you probably want to pass the directory name as a constructor argument, or the like.
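Applied to the per-test data folder scenario from the question, such a base class might look roughly like this; the TestHelper call and the path constant are hypothetical placeholders:
import java.nio.file.Path;

import org.junit.Before;

public abstract class DataSourceTestBase {

    private static final String SUB_PATH = "some/sub/path";   // placeholder

    private final String directoryName;   // supplied by each concrete test class
    protected Path dataPath;

    protected DataSourceTestBase(String directoryName) {
        this.directoryName = directoryName;
    }

    @Before
    public void prepareDataSource() {
        // Hypothetical helper call: copies a fresh data source into the named folder
        dataPath = TestHelper.create(directoryName, SUB_PATH).getDataPath();
    }
}
A concrete test class then only needs a constructor that calls super("its-own-folder-name") and can use dataPath from its @Test methods.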

How to create a thread-safe customized HTML result in TestNG

I am learning Selenium Grid and TestNG.
I am trying to create customized HTML results using the TestNG interfaces. My understanding of how to create HTML results is fine for single-machine execution.
But when I factor in remote machines, I am not able to understand how results are consolidated if tests are executed in parallel on remote machines.
Are there any factors I should consider before implementing my own interfaces?
Any help is appreciated.
Thanks in advance
It works fine going from multi-threaded tests to the HTML report, because the ITestResult is always the same in the end, no matter what you do. You can simply create a "CustomReport" class that implements IReporter. Then, override the generateReport method and just let TestNG create and pass its arguments into it:
@Override
public void generateReport( List<XmlSuite> xml, List<ISuite> suites, String outdir )
{
    for ( ISuite thisSuite : suites ) {
        thisSuite.getResults(). ...
        ...
    }
    ....
Then, inside that method, do what you will, to customize the report and generate HTML tables or whatever.
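As a rough, self-contained sketch of that shape (the HTML produced here is deliberately trivial; getResults(), getTestContext() and the passed/failed counts are the standard ISuite/ITestContext API):
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;
import java.util.Map;

import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.ISuiteResult;
import org.testng.ITestContext;
import org.testng.xml.XmlSuite;

public class CustomReport implements IReporter {

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outdir) {
        StringBuilder html = new StringBuilder("<html><body><table>");
        for (ISuite suite : suites) {
            // One ISuiteResult per <test> element, already consolidated by TestNG
            for (Map.Entry<String, ISuiteResult> entry : suite.getResults().entrySet()) {
                ITestContext ctx = entry.getValue().getTestContext();
                html.append("<tr><td>").append(entry.getKey())
                    .append("</td><td>").append(ctx.getPassedTests().size())
                    .append(" passed</td><td>").append(ctx.getFailedTests().size())
                    .append(" failed</td></tr>");
            }
        }
        html.append("</table></body></html>");
        try (FileWriter out = new FileWriter(outdir + "/custom-report.html")) {
            out.write(html.toString());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
The reporter is then registered like any other listener, for example via the <listeners> element in testng.xml.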
Also, one thing I do (to reduce confusion in the console output while multi-threaded tests run) is log the thread ID in messages on the TestNG report, using something like:
public void logIt( String message ) {
Reporter.log( "Thread-" + Thread.currentThread().getId() + ": " + message, true );
}
TestNG is awesome, especially once you understand what I said above, as well as the fact that you can implicitly have TestNG pass an XmlTest, ITestContext, or ITestResult to some of the configuration methods (@BeforeClass and friends). For example:
@BeforeClass
public void setUp(ITestContext context) {
    logger.info("BeforeClass setUp...");
    suiteParams = context.getSuite().getXmlSuite().getAllParameters();
    ...
