I have a sequence of tests which have to be fed input data in the form of a file. However, the exact data content to be fed into each one is test-specific.
I intend to use temporary files to achieve this.
The setup method does not take a parameter.
So, what can be done so that the setup reads a specific data fragment for each specific test?
The actual set of steps in the setup would be the same - creating a temporary file - but with a specifically tailored piece of data.
Setup methods (i.e., methods annotated with @Before) are designed for running the same steps before every test case. If this isn't the behavior you need, just don't use them.
At the end of the day, a JUnit test is just Java - you could just have a method that takes a parameter and sets up the test accordingly and call it explicitly with the different arguments you need:
import org.junit.Test;

public class MyTest {

    // Reads data from the given file and sets up the test accordingly
    private void init(String fileName) {
        // ...
    }

    @Test
    public void testSomething() {
        init("/path/to/some/file");
        // Perform the test and assert the result
    }

    @Test
    public void testSomethingElse() {
        init("/path/to/another/file");
        // Perform the test and assert the result
    }
}
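If you specifically want the temporary-file part handled for you, the same pattern combines naturally with JUnit 4's TemporaryFolder rule. Here is a minimal sketch, assuming the tailored data can be written as a string; the file name and contents are made up:

import java.io.File;
import java.nio.file.Files;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class MyTempFileTest {

    // Creates a fresh folder before each test and deletes it afterwards
    @Rule
    public TemporaryFolder tempFolder = new TemporaryFolder();

    // Writes the data tailored to one test into a temporary file
    private File init(String content) throws Exception {
        File dataFile = tempFolder.newFile("input.dat");
        Files.write(dataFile.toPath(), content.getBytes());
        return dataFile;
    }

    @Test
    public void testSomething() throws Exception {
        File dataFile = init("data tailored to this test");
        // Perform the test against dataFile and assert the result
    }
}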
I'm executing a few hundred tests in test classes consisting of a single beforeMethod test, followed by a variable number of primary tests, and occasionally an afterMethod.
The purpose of the beforeMethod test is to populate the test environment with data used in the primary tests, while separating logging and recording from the primary tests, which we report on.
We have set up an automatic issue-creation tool using a listener. We've found that adding execution time to this tool would be very valuable, so that it can show us how long it takes to reproduce the errors in said issues.
To this end, I have made a simple addition to this code that uses ITestResult.getEndMillis() and getStartMillis() to get the execution time.
The problem we're experiencing with this approach is that if the test encounters a failure during the primary tests, ITestResult.getStartMillis() will not account for the start time of the before method, but only for the primary method.
How would we go about determining the start time of the test class itself (always the beforeMethod), rather than just the current method?
Since we're running hundreds of tests in a massive setup, a solution that allows this without changing each separate test class would definitely be preferable.
The setup of the Java test classes looks something like this (scrubbed of business specifics):
package foobar;

import foobar.*;

@UsingTunnel
@Test
public class FLOWNAME_TESTNAME extends TestBase {

    private final Value<String> parameter;

    public FLOWNAME_TESTNAME(Value<String> parameter) {
        super(PropertyProviderImpl.get());
        this.parameter = parameter;
    }

    @StoryCreating(test = "TESTNAME")
    @BeforeMethod
    public void CONDITIONS() throws Throwable {
        new TESTNAME_CONDITIONS(parameter).executeTest();
    }

    @TestCoverage(test = "TESTNAME")
    public void PRIMARYTESTS() throws Throwable {
        TESTCASE1 testcase1 = new TESTCASE1(parameter.get());
        testcase1.executeTest();
        testcase1.throwSoftAsserts();

        TESTCASE2 testcase2 = new TESTCASE2(parameter.get());
        testcase2.executeTest();
        testcase2.throwSoftAsserts();
    }
}
So in this case, the problem arises when the listener detects a failure in either TESTCASE1 or TESTCASE2: the reported time will not include the execution time of TESTNAME_CONDITIONS, because that test runs in a different method. Yet practically speaking, they are part of the same test flow, i.e. the same test class.
I found a solution to the issue.
It is possible to use ITestResult.getTestContext().getStartDate().getTime() to obtain the time at which the test class itself started running, rather than the current test method.
The final solution was quite simply:
(result.getEndMillis() - result.getTestContext().getStartDate().getTime()) / 60000
Where "result" corresponds to ITestResult.
This outputs the elapsed time, in minutes, between the start of the test context and the end of the last executed method.
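For reference, plugged into a listener this might look like the sketch below; the class name and output format are illustrative, not part of the original tool:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class ExecutionTimeListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        // The context start covers the @BeforeMethod, not just the failing method
        long startMillis = result.getTestContext().getStartDate().getTime();
        long minutes = (result.getEndMillis() - startMillis) / 60000;
        System.out.println("Time to reproduce: " + minutes + " min");
    }
}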
I'm struggling to understand the flow of execution of some Java TestNG code that my boss assigned to me. Unfortunately, the original author is no longer with my company, so I'm on my own. I've read through a few TestNG tutorials (especially this one) but still have questions.
Here is the code's test class, which learns about the tests it is to run from an external file, runs the tests, then closes everything up:
public class MyTestDriver {

    public Object[][] data = null;

    @BeforeSuite
    public void beforeSuite() {
        // open external info file
    }

    @DataProvider(name = "GetConfigData")
    public Object[][] GetSyncConfigData() throws IOException {
        try {
            // Using external file, gather info about individual tests
            // load that info into Object[][] data, I think
        } catch (Exception e) {
            // log errors
        }
        return data;
    }
}
A test may be either an async'ed test or a sync'ed test, so those cases are handled by a subclass each. Here's the async'ed subclass:
public class AsyncAPITest extends MyTestDriver {

    @Test(dataProvider = "GetConfigData")
    public void asyncTestExecution(String RUN, String TYPE, String ENVIRONMENT, String TESTNAME) throws Exception {
        try {
            // run tests
        } catch (Exception e) {
            // log errors
        }
    }
}
Coders familiar with Java TestNG will spot the annotations.
Now, let's say I run the code and the external file specifies only one Async test should be run. In that event, I'm certain the code's order of execution would be:
@BeforeSuite
MyTestDriver.beforeSuite()
@DataProvider(name = "GetConfigData")
MyTestDriver.GetSyncConfigData()
@Test(dataProvider = "GetConfigData")
AsyncAPITest.asyncTestExecution()
But here's what I don't understand: How is information passed from MyTestDriver.GetSyncConfigData() to AsyncAPITest.asyncTestExecution()? If you look at method asyncTestExecution(), that method actually takes in quite a few arguments:
public void asyncTestExecution(String RUN, String TYPE, String ENVIRONMENT, String TESTNAME) throws Exception
What is supplying those arguments? If I look through the code of MyTestDriver.GetSyncConfigData(), shouldn't I see something like this somewhere:
// data initialized as Object[][]
// data = AsyncAPITest.asyncTestExecution(RUN, TYPE, ENVIRONMENT, TESTNAME);
return data;
I just don't understand how AsyncAPITest.asyncTestExecution() is called, or what is supplying those arguments. I'm largely asking because I want to send in more arguments for later modifications. Thank you in advance for any suggestions or observations.
data has no significance here. The arguments to your @Test method are being provided by the @DataProvider. The return type of a data provider is Object[x][y].
x is the number of argument sets (the test will run once per set); y is the number of arguments in each set.
In your case, the data provider must be returning multiple sets of {String RUN, String TYPE, String ENVIRONMENT, String TESTNAME}. TestNG reads these sets and feeds them to the @Test method, which is run x times, once with each set of arguments.
How? If you are interested, you should read the implementation in the TestNG code, but simply put, TestNG searches for annotations using the reflection API and invokes the test method x times, each time with the y arguments of one set.
If you have to add an argument, then you need to add something to y - so you would need to make your data provider return it and your test method accept it.
Since you have mentioned the name of the data provider in the annotation of your test method, TestNG scans for a matching data provider method. Thus the GetSyncConfigData data provider method is invoked. Now, each 1D array within the 2D array returned by the data provider represents a test case. So if your data is of size Object[3][4], then there are 3 test cases, and each test case provides 4 arguments to your test method.
If a matching data provider is not found, or if the number or type of arguments doesn't match, an exception is thrown.
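As a concrete illustration, a provider/test pair shaped like yours could look like this (the row values are made up):

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class ConfigDataExample {

    // Object[2][4]: two test invocations, four arguments each
    @DataProvider(name = "GetConfigData")
    public Object[][] getConfigData() {
        return new Object[][] {
            { "Y", "ASYNC", "QA", "testA" },
            { "Y", "SYNC", "PROD", "testB" }
        };
    }

    // Runs twice, once per row; parameter count and types must match the rows
    @Test(dataProvider = "GetConfigData")
    public void asyncTestExecution(String run, String type, String environment, String testName) {
        // run the test with this row's arguments
    }
}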
Background: I'm executing tests with TestNG and I have a class annotated with @Test that generates a number, or ID if you will, and that same number is the input value of my second test. Is it possible to pass values between TestNG tests?
Sure. For example, if you have two tests that are related, you can pass values from one test to another via test context attributes:
@Test
public void test1(ITestContext context) { // will be injected by TestNG
    /* Do the test here */
    context.setAttribute("myOwnAttribute", "someTestResult");
}

@Test(dependsOnMethods = "test1")
public void test2(ITestContext context) { // will be injected by TestNG
    String prevResult = (String) context.getAttribute("myOwnAttribute");
}
You should create one test that handles the whole case. Tests shouldn't depend on each other; it's considered bad practice. If you are using Maven, the order of test execution can differ between environments.
Bad practice or not, it can be accomplished by simply using class fields. Just make sure your test cases are executed in a predictable order (e.g. using @Test(priority = ...) or TestNG's dependsOnMethods feature).
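A minimal sketch of the class-field approach (the generated value is a placeholder):

import org.testng.Assert;
import org.testng.annotations.Test;

public class SharedStateTest {

    // Holds the value produced by the first test for the second one
    private String generatedId;

    @Test
    public void createId() {
        generatedId = "id-42"; // whatever your first test generates
    }

    // dependsOnMethods guarantees createId() has already run
    @Test(dependsOnMethods = "createId")
    public void useId() {
        Assert.assertNotNull(generatedId);
        // use generatedId as the input here
    }
}

This works because TestNG, by default, runs all test methods of a class on the same instance.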
I've implemented a feature in my JUnit tests that takes, for every test case, a fresh copy of a data source. This copy is placed in a folder specific to each test case. The idea is that every test case can start from a clean situation, manipulate it, and leave it as such after the run. This is often useful for analysing the problem when a test fails.
For now I have to call this feature directly in the test method because I don't know how to retrieve the current test name:
@Test
public void testTest1() {
    TestHelper th = TestHelper.create("testTest1", subPathToDataSource);
    // do the test...
    Path dataPath = th.getDataPath();
    ...
}
I would like to be able to write something like this:
Path dataPath;

@Before
public void initTest() {
    th = TestHelper.create(SomeJUnitObject.getCurrentTestName(), subPathToDataSource);
    ...
}

@Test
public void testTest1() {
    // do the test...
    Path dataPath = th.getDataPath();
    ...
}
So far, the answers I've found say: "You don't need to know that"... But I do need it!
Is this possible ?
Kind regards
Look at the TestName rule.
You should be able to add in your test class:
@Rule
public TestName name = new TestName();
And then access it.
(On phone, so can't check versions support/details - might be 4.x only)
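Adapted to your fixture, that could look like the sketch below, assuming JUnit 4.7+ where the TestName rule is available; TestHelper and subPathToDataSource are from your own code:

import java.nio.file.Path;

import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;

public class MyDataTest {

    // Rule fields must be public for JUnit to pick them up
    @Rule
    public TestName name = new TestName();

    private TestHelper th;

    @Before
    public void initTest() {
        // getMethodName() returns the name of the currently running test
        th = TestHelper.create(name.getMethodName(), subPathToDataSource);
    }

    @Test
    public void testTest1() {
        Path dataPath = th.getDataPath();
        // do the test...
    }
}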
Here is an alternative approach: create an abstract class from which your "real" test classes inherit.
I have several such examples in my projects and here I will give one, mainly testing for individual JSON Patch operations.
All my test files are JSON, and located under an appropriately named resource directory. The base, abstract class is JsonPatchOperationTest. And here is the full code of AddOperationTest which tests for JSON Patch's add operation:
public final class AddOperationTest
    extends JsonPatchOperationTest
{
    public AddOperationTest()
        throws IOException
    {
        super("add");
    }
}
And that's it! Not even one test method in this class, but of course your implementation may vary.
In your case you probably want to pass the directory name as a constructor argument, or the like.
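For instance, a minimal sketch of that idea applied to your data-copy setup (all names are placeholders):

import org.junit.Before;

// Base class that prepares a fresh data copy for each concrete test class
public abstract class DataDrivenTestBase {

    private final String directoryName;

    // Each concrete test class passes its own data directory
    protected DataDrivenTestBase(String directoryName) {
        this.directoryName = directoryName;
    }

    @Before
    public void setUpData() {
        // copy the data source for directoryName into a per-test folder here
    }
}

A concrete class then reduces to public final class FooTest extends DataDrivenTestBase { public FooTest() { super("foo"); } } plus its test methods.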
I have a very strange problem: when I try to run a JUnit test class with multiple test cases, only the first test case passes, and the rest fail with an IndexOutOfBoundsException.
public class ABCTest {

    @Test
    public void basicTest1() {...}

    @Test
    public void basicTest2() {...}

    ...
but if I comment out the rest of the test cases and run them one by one, they all pass.
public class ABCTest {

    @Test
    public void basicTest1() {...}

    //@Test
    //public void basicTest2() {...}

    //...
Since you do not provide the complete test case and implementation class, I have to make some assumptions.
Most likely your test cases are mutating the state of the object under test.
Usually you try to get a clean test fixture for each unit test. This works by having a method with the @Before annotation which creates a new instance of the class under test. (This was called setUp() in older versions of JUnit.)
This ensures that the order of test method execution, as well as the number of executions, does not matter, and that each method works in isolation.
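A minimal sketch of that fixture pattern, assuming a hypothetical class ABC under test:

import org.junit.Before;
import org.junit.Test;

public class ABCTest {

    private ABC abc;

    @Before
    public void setUp() {
        // Every test method starts from a fresh, unmodified instance
        abc = new ABC();
    }

    @Test
    public void basicTest1() { /* exercise abc */ }

    @Test
    public void basicTest2() { /* exercise abc */ }
}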
Look at what you are doing inside of the test case and see if you are changing data that may be used by the other test cases without restoring it to its original state. For example, you have a text file that you read and write in basicTest1, and then read again in basicTest2, assuming the file is the same as it was before basicTest1 ran.
This is just one possible problem; we would need to see the code for more insight.