I'm struggling to understand the flow of execution of some Java TestNG code that my boss assigned to me. Unfortunately, the original author is no longer with my company, so I'm on my own. I've read through a few TestNG tutorials (especially this one) but still have questions.
Here is the code's test class, which learns about the tests it is to run from an external file, runs the tests, then closes everything up:
public class MyTestDriver {

    public Object[][] data = null;

    @BeforeSuite
    public void beforeSuite() {
        // open external info file
    }

    @DataProvider(name = "GetConfigData")
    public Object[][] GetSyncConfigData() throws IOException {
        try {
            // Using external file, gather info about individual tests
            // load that info into Object[][] data, I think
        } catch (Exception e) {
            // log errors
        }
        return data;
    }
}
A test may be either an async'ed test or a sync'ed test, so those cases are handled by a subclass each. Here's the async'ed subclass:
public class AsyncAPITest extends MyTestDriver {

    @Test(dataProvider = "GetConfigData")
    public void asyncTestExecution(String RUN, String TYPE, String ENVIRONMENT, String TESTNAME) throws Exception {
        try {
            // run tests
        } catch (Exception e) {
            // log errors
        }
    }
}
Coders familiar with Java TestNG will spot the annotations.
Now, let's say I run the code and the external file specifies only one Async test should be run. In that event, I'm certain the code's order of execution would be:
@BeforeSuite
MyTestDriver.beforeSuite()

@DataProvider(name = "GetConfigData")
MyTestDriver.GetSyncConfigData()

@Test(dataProvider = "GetConfigData")
AsyncAPITest.asyncTestExecution()
But here's what I don't understand: How is information passed from MyTestDriver.GetSyncConfigData() to AsyncAPITest.asyncTestExecution()? If you look at method asyncTestExecution(), that method actually takes in quite a few arguments:
public void asyncTestExecution(String RUN, String TYPE, String ENVIRONMENT, String TESTNAME) throws Exception
What is supplying those arguments? If I look through the code of MyTestDriver.GetSyncConfigData(), shouldn't I see something like this somewhere:
// data initialized as Object[][]
// data = AsyncAPITest.asyncTestExecution(RUN, TYPE, ENVIRONMENT, TESTNAME);
return data;
I just don't understand how AsyncAPITest.asyncTestExecution() is called, or what is supplying those arguments. I'm largely asking because I want to send in more arguments for later modifications. Thank you in advance for any suggestions or observations.
data itself has no special significance. The arguments to your @Test method are being provided by the @DataProvider. The return type of a data provider is Object[x][y]:
x is the number of sets of arguments; y is the number of arguments in each set.
In your case, the data provider must be returning multiple rows for the set {String RUN, String TYPE, String ENVIRONMENT, String TESTNAME}. TestNG reads these sets and feeds them to the @Test method, which is run x times, once per set of arguments.
How? If you are interested, you should read the implementation in the TestNG code, but simply put, TestNG finds the annotations via the Reflection API and invokes the test method x times at runtime, each time with the y arguments of one set.
If you have to add an argument, then you need to add something to y: make your data provider return it and make your @Test method accept it.
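As a hedged illustration (the class name, method names, and row values below are invented, not taken from your code), here is what a data provider whose rows line up with a four-argument test method might look like; adding a fifth argument means adding a fifth column to every row and a fifth parameter to the test method:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataProviderSketch {

    // Object[x][y]: each inner array is one set of arguments (x = 2 sets here),
    // and each element of a set is one argument (y = 4).
    @DataProvider(name = "GetConfigData")
    public Object[][] getConfigData() {
        return new Object[][] {
            { "Y", "ASYNC", "QA",   "createOrderTest" },
            { "Y", "SYNC",  "PROD", "cancelOrderTest" }
        };
    }

    // TestNG invokes this method once per row, passing the row's elements as the arguments.
    @Test(dataProvider = "GetConfigData")
    public void testExecution(String run, String type, String environment, String testName) {
        // run the test described by this row
    }
}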
Since you have mentioned the name of the data provider in the annotation of your test method, TestNG scans for a matching data provider method; that is how the GetSyncConfigData data provider gets invoked. Each 1D array within the 2D array returned by the data provider represents one test case. So if your data is of size Object[3][4], there are 3 test cases and each test case supplies 4 arguments to your test method.
If a matching data provider is not found, or if there is any mismatch in the number or type of arguments, an exception is thrown.
I'm executing a few hundred tests in test classes consisting of a single beforeMethod test, followed by a variable number of primary tests and occasionally an afterMethod.
The purpose of the beforeMethod test is to populate the test environment with data used in the primary tests, while keeping its logging and recording separate from the primary tests, which we report on.
We have set up an automatic issue creation tool using a listener. We've found that it would add great value to include execution time in this tool, so that it can show us how long it would take to reproduce the errors in those issues.
To this end, I have made a simple addition to this code, that uses ITestResult.getEndMillis() and getStartMillis() to get the execution time.
The problem we're experiencing with this approach, is that if the test encounters a failure during the primary tests, ITestResult.getStartMillis() will not account for the start time of the before method, but only the primary method.
How would we go about determining the start time of the test class itself (always the beforeMethod), rather than just the current method?
Since we're running hundreds of tests in a massive setup, a solution that allows this without changing each separate test class, would definitely be preferable.
The setup of the Java test classes looks something like this (scrubbed of business specifics):
package foobar;

import foobar;

@UsingTunnel
@Test
public class FLOWNAME_TESTNAME extends TestBase {

    private final Value<String> parameter;

    public FLOWNAME_TESTNAME(Value<String> parameter) {
        super(PropertyProviderImpl.get());
        this.parameter = parameter;
    }

    @StoryCreating(test = "TESTNAME")
    @BeforeMethod
    public void CONDITIONS() throws Throwable {
        new TESTNAME_CONDITIONS(parameter).executeTest();
    }

    @TestCoverage(test = "TESTNAME")
    public void PRIMARYTESTS() throws Throwable {
        TESTCASE1 testcase1 = new TESTCASE1(parameter.get());
        testcase1.executeTest();
        testcase1.throwSoftAsserts();

        TESTCASE2 testcase2 = new TESTCASE2(parameter.get());
        testcase2.executeTest();
        testcase2.throwSoftAsserts();
    }
}
So in this case, the problem arises when the listener detects a failure in either TESTCASE1 or TESTCASE2: those results will not include the execution time of TESTNAME_CONDITIONS, because that test lives in a different method, yet practically speaking they are all part of the same test flow, i.e. the same test class.
I found a solution to the issue.
It is possible to use ITestResult.getTestContext().getStartDate().getTime() to obtain the time at which the test class itself started running, rather than the current test method.
The final solution was quite simply:
(result.getEndMillis() - result.getTestContext().getStartDate().getTime()) / 60000
Where "result" corresponds to the ITestResult.
This outputs the time, in minutes (hence the division by 60000), between the start of the test and the end of the last executed method.
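As a minimal sketch of where that calculation can live (the listener class name is made up; only the ITestResult and ITestContext calls come from the solution above):

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Sketch: reports elapsed time from the start of the test context (which covers the
// @BeforeMethod) to the end of the failing method, without touching the test classes.
public class DurationReportingListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        long contextStart = result.getTestContext().getStartDate().getTime();
        long elapsedMinutes = (result.getEndMillis() - contextStart) / 60000;
        System.out.println(result.getName() + " failed after ~" + elapsedMinutes + " minutes");
    }
}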
I am trying to write a test method in TestNG, that after it fails - the entire test suite will stop running.
@Test
public void stopTestingIfThisFailed() throws Exception {
    someTestSteps();
    if (softAsserter.isOneFailed()) {
        asserter.fail("stopTestingIfThisFailed test Failed");
        throw new Exception("Test can't continue, fail here!");
    }
}
The exception is being thrown, but other test methods are running.
How to solve this?
You can use the dependsOnMethods or dependsOnGroups annotation parameter in your other test methods:
@Test(dependsOnMethods = {"stopTestingIfThisFailed"})
public void testAnotherTestMethod() {
}
JavaDoc of the dependsOnMethods parameter:
The list of methods this method depends on. There is no guarantee on the order on which the methods depended upon will be run, but you are guaranteed that all these methods will be run before the test method that contains this annotation is run. Furthermore, if any of these methods was not a SUCCESS, this test method will not be run and will be flagged as a SKIP. If some of these methods have been overloaded, all the overloaded versions will be run.
See https://testng.org/doc/documentation-main.html#dependent-methods
It depends on what you expect (there is no direct support for this in TestNG). You can create a ShowStopperException that is thrown in a @Test and then, in your ITestListener implementation (see the docs), call System.exit(1) (or whatever number) when you find this exception in the result, but then there will be no report, and in general it's not good practice. A second option is to have a base class which is the parent of all test classes, plus some context variable that handles the ShowStopperException in a @BeforeMethod in that parent class and throws SkipException (a sketch follows the list below), so the workflow can be like:
test passed
test passed
showstopper exception in some test
test skipped
test skipped
test skipped
...
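A minimal sketch of that second option, with an invented base class and flag (none of these names are an established TestNG API):

import org.testng.SkipException;
import org.testng.annotations.BeforeMethod;

// Sketch of the "base class + context flag" idea; all names here are placeholders.
public abstract class AbstractShowStopperTest {

    // Shared across all test classes; set to true when a show stopper occurs.
    protected static volatile boolean showStopperHit = false;

    @BeforeMethod(alwaysRun = true)
    public void skipIfShowStopperHit() {
        if (showStopperHit) {
            // Every remaining test is reported as skipped rather than run.
            throw new SkipException("Skipping: a show stopper failure occurred earlier");
        }
    }

    // Call this from a catch block or listener when the fatal condition is detected.
    protected static void reportShowStopper() {
        showStopperHit = true;
    }
}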
I solved the problem like this: after a test that mustn't fail does fail, I write data to a temporary text file.
Later, in the next test, code in the @BeforeClass checks the data in the aforementioned text file. If a show stopper was found, I kill the current process.
If a test that "can't" fail actually fails:
public static void saveShowStopper() {
    try {
        General.createFile("ShowStopper", "tempShowStopper.txt");
    } catch (ParseException e) {
        e.printStackTrace();
    }
}
The @BeforeClass validating code:
@BeforeClass(alwaysRun = true)
public void beforeClass(ITestContext testContext, @Optional String step, @Optional String suiteLoopData,
        @Optional String group) throws Exception {
    boolean wasShowStopperFound = APIUtils.loadShowStopper();
    if (wasShowStopperFound) {
        Thread.currentThread().interrupt();
        return;
    }
}
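The APIUtils.loadShowStopper() implementation isn't shown above; conceptually it only needs to check whether the marker file was written, along these lines (purely a guess at the original, not its actual code):

import java.io.File;

// Hypothetical reconstruction: returns true if the show-stopper marker file exists.
public static boolean loadShowStopper() {
    return new File("tempShowStopper.txt").exists();
}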
You get that behaviour if you throw a specific exception, SkipException, from the @BeforeSuite setup method.
See (possible dupe)
TestNG - How to force end the entire test suite from the BeforeSuite annotation if a condition is met
If you want to do it from an arbitrary test, there doesn't appear to be a framework mechanism. But you could always flip a flag and check that flag in the @BeforeTest setup method. Before you jump to that, have a think about whether you could check once before the whole suite runs and just abort there (i.e. in @BeforeSuite).
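A minimal sketch of the @BeforeSuite variant (the class name and precondition check are placeholders for whatever you actually need to verify):

import org.testng.SkipException;
import org.testng.annotations.BeforeSuite;

public class SuiteGuard {

    @BeforeSuite(alwaysRun = true)
    public void abortIfPreconditionFails() {
        // environmentIsReady() stands in for your real check.
        if (!environmentIsReady()) {
            // A skipped @BeforeSuite causes the tests that depend on it to be skipped.
            throw new SkipException("Aborting: precondition not met");
        }
    }

    private boolean environmentIsReady() {
        return false; // placeholder
    }
}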
I have a sequence of tests which have to be fed input data in the form of a file. However, the exact data content to be fed into each one is specific to that test.
I intend to use temporary files to achieve this.
The Setup method does not take a parameter.
So, what could be done so that the setup reads a specific fragment of data for each specific test?
The actual set of steps in Setup would be the same (creating a temporary file), but with a specific, tailored piece of data.
Setup methods (i.e., methods annotated with @Before) are designed for running the same steps before every test case. If this isn't the behavior you need, just don't use them.
At the end of the day, a JUnit test is just Java - you could just have a method that takes a parameter and sets up the test accordingly and call it explicitly with the different arguments you need:
public class MyTest {

    private void init(String fileName) {
        // Reads data from the file and sets up the test
    }

    @Test
    public void testSomething() {
        init("/path/to/some/file");
        // Perform the test and assert the result
    }

    @Test
    public void testSomethingElse() {
        init("/path/to/another/file");
        // Perform the test and assert the result
    }
}
I've implemented a feature in my JUnit tests that takes, for every test case, a fresh copy of a data source. This copy is placed in a folder specific to each test case. The idea is that every test case can start from a clean situation, manipulate it, and leave it as such after the run. This is often useful for analysing the problem when a test fails.
For now I have to call this feature directly in the test method because I don't know how to retrieve the current test name:
public void testTest1() {
    TestHelper th = TestHelper.create("testTest1", subPathToDataSource);
    // do the test...
    Path dataPath = th.getDataPath();
    ...
}
I would like to be able to write something like this:
Path dataPath;

@Before
public void initTest() {
    th = TestHelper.create(SomeJUnitObject.getCurrentTestName(), subPathToDataSource);
    ...
}

public void testTest1() {
    // do the test...
    Path dataPath = th.getDataPath();
    ...
}
Until now, the answers I've found say "You don't need to know that"... but I do need it!
Is this possible ?
Kind regards
Look at the TestName rule.
You should be able to add in your test class:
@Rule public TestName name = new TestName();
And then access it.
(On phone, so can't check versions support/details - might be 4.x only)
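Roughly (TestHelper and subPathToDataSource come from your own code above; the rest is just a sketch), the rule can feed the current test name straight into the @Before method:

import java.nio.file.Path;

import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;

public class MyDataSourceTest {

    // Captures the name of the currently running test method (JUnit 4.7+).
    @Rule
    public TestName name = new TestName();

    private final String subPathToDataSource = "some/sub/path"; // placeholder
    private TestHelper th;

    @Before
    public void initTest() {
        // name.getMethodName() returns e.g. "testTest1" while that test runs.
        th = TestHelper.create(name.getMethodName(), subPathToDataSource);
    }

    @Test
    public void testTest1() {
        Path dataPath = th.getDataPath();
        // do the test...
    }
}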
Here is an alternative approach: create an abstract class which your "real" test classes inherit.
I have several such examples in my projects and here I will give one, mainly testing for individual JSON Patch operations.
All my test files are JSON, and located under an appropriately named resource directory. The base, abstract class is JsonPatchOperationTest. And here is the full code of AddOperationTest which tests for JSON Patch's add operation:
public final class AddOperationTest
    extends JsonPatchOperationTest
{
    public AddOperationTest()
        throws IOException
    {
        super("add");
    }
}
And that's it! Not even one test method in this class, but of course your implementation may vary.
In your case you probably want to pass the directory name as a constructor argument, or the like.
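Applied to the data-source question, a hedged sketch of such a base class might look like this (all names are invented and the resource-copying step is only indicated, not implemented):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.Before;

// Abstract base: concrete test classes name their resource directory via the constructor.
public abstract class AbstractDataDrivenTest {

    private final String resourceDirectory;
    protected Path dataPath;

    protected AbstractDataDrivenTest(String resourceDirectory) {
        this.resourceDirectory = resourceDirectory;
    }

    @Before
    public void setUpDataSource() throws IOException {
        // Fresh folder per test; copy the subclass's resources into it here.
        dataPath = Files.createTempDirectory("testdata");
        // ... copy files from resourceDirectory into dataPath ...
    }
}

A concrete test class then only needs to call super("some-directory") in its constructor, exactly like AddOperationTest above.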
I have looked at all the similar questions, but in my opinion none of them gives a solid answer to this. I have a test class (JUnit 4, but I'm also interested in JUnit 3) and I want to run individual test methods from within those classes programmatically/dynamically (not from the command line). Say there are 5 test methods but I only want to run 2. How can I achieve this programmatically/dynamically (not from the command line, Eclipse, etc.)?
Also, there is the case where there is a @Before annotated method in the test class. When running an individual test method, the @Before method should run beforehand as well. How can that be handled?
Thanks in advance.
This is a simple single-method runner. It's based on the JUnit 4 framework but can run any method, not necessarily one annotated with @Test.
private Result runTest(final Class<?> testClazz, final String methodName)
        throws InitializationError {

    BlockJUnit4ClassRunner runner = new BlockJUnit4ClassRunner(testClazz) {
        @Override
        protected List<FrameworkMethod> computeTestMethods() {
            try {
                Method method = testClazz.getMethod(methodName);
                return Arrays.asList(new FrameworkMethod(method));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    };

    Result res = new Result();
    runner.run(res);
    return res;
}
class Result extends RunNotifier {

    Failure failure;

    @Override
    public void fireTestFailure(Failure failure) {
        this.failure = failure;
    }

    boolean isOK() {
        return failure == null;
    }

    public Failure getFailure() {
        return failure;
    }
}
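A usage sketch (the test class and method names here are placeholders): because the selected method is still run through BlockJUnit4ClassRunner's normal lifecycle, any @Before method on the class runs before it.

// Run two of the five test methods by name and inspect the outcome.
Result first = runTest(MyTestClass.class, "testOne");
Result second = runTest(MyTestClass.class, "testTwo");

if (!first.isOK()) {
    System.err.println("testOne failed: " + first.getFailure().getException());
}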
I think this can only be done with a custom TestRunner. You could pass the names of the tests you wish to run as arguments when launching your tests. A fancier solution would be to implement a custom annotation (let's say @TestGroup) which takes a group name as argument. You could then annotate your test methods with it, giving the tests you want to run together the same group name. Again, pass the group name as an argument when launching the tests. Within your test runner, collect only those methods with the corresponding group name and launch those tests. A rough sketch of this idea follows.
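In this sketch, @TestGroup and TestGroupRunner are inventions for illustration, not an existing JUnit feature; test classes would be annotated with @RunWith(TestGroupRunner.class) and launched with, say, -Dtest.group=smoke:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;
import java.util.stream.Collectors;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

// Marker for the group a test method belongs to (invented for this sketch).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface TestGroup {
    String value();
}

// Runner that keeps only the @Test methods whose @TestGroup matches the requested group.
class TestGroupRunner extends BlockJUnit4ClassRunner {

    // In a real setup this would come from a launch argument or system property.
    private static final String ACTIVE_GROUP = System.getProperty("test.group", "smoke");

    public TestGroupRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        return super.computeTestMethods().stream()
                .filter(m -> {
                    TestGroup group = m.getAnnotation(TestGroup.class);
                    return group != null && group.value().equals(ACTIVE_GROUP);
                })
                .collect(Collectors.toList());
    }
}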
However, the simplest solution to this is to move those tests you want to run separately to another file...