Can we execute only failed test cases in Jenkins? - java

If we build and execute a Selenium (TestNG) suite in Jenkins and some tests fail, is there any way to execute only those failed test cases in Jenkins after fixing them?

Yes, from the docs:
Every time tests fail in a suite, TestNG creates a file called testng-failed.xml in the output directory. This XML file contains the necessary information to rerun only these methods that failed, allowing you to quickly reproduce the failures without having to run the entirety of your tests.
If I understand your use case, you are looking to save the list of failed tests, make changes to the code and re-execute that list, is that correct? In that case you can store that testng-failed.xml file and use it for the next execution in Jenkins, possibly adding a checkbox to the job that lets you choose whether to use this test suite or the default one.
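If the Jenkins job builds with Maven, a minimal sketch of that idea, assuming the Surefire plugin and that the previous run's testng-failed.xml was kept in the workspace (the path below is Surefire's default output location and may differ in your setup):

mvn test -Dsurefire.suiteXmlFiles=target/surefire-reports/testng-failed.xml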

Please try the below to run only specific tests (for example, the test cases you have fixed) by mentioning them with an include tag in testng.xml:
<classes>
    <class name="test.IndividualMethodsTest">
        <methods>
            <include name="testMethod" />
        </methods>
    </class>
</classes>

From my QA and automation perspective, it's always better to run the entire suite from Jenkins; if you want to check why tests fail, you can do that locally. The other option is to parameterize the XML, but that is a lot of work: it could be done with Maven arguments and writing the values into the XML.

Why do this in Jenkins? You can build a TestNG retry listener that hooks into the test execution and re-executes only failed test cases.
Refer to the example below.
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryFailedTestCases implements IRetryAnalyzer {

    private int retryCnt = 0;
    // Set maxRetryCnt (maximum retry count) as per your requirement.
    // Here it is 1, so a failed test case is re-run once.
    private int maxRetryCnt = 1;

    // This method is called every time a test fails. It returns TRUE if the
    // test should be retried, otherwise FALSE.
    @Override
    public boolean retry(ITestResult result) {
        if (retryCnt < maxRetryCnt) {
            System.out.println("Retrying " + result.getName() + " again and the count is " + (retryCnt + 1));
            retryCnt++;
            return true;
        }
        return false;
    }
}
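For completeness: the analyzer above still has to be attached to your tests. A minimal sketch of the two usual ways to wire it up (PaymentTest and RetryTransformer are illustrative names; the transformer variant must additionally be registered, e.g. as a <listener> in testng.xml):

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;
import org.testng.annotations.Test;

public class PaymentTest {
    // Option 1: attach the analyzer to a single test method.
    @Test(retryAnalyzer = RetryFailedTestCases.class)
    public void flakyTest() {
        // ...
    }
}

// Option 2: attach the analyzer to every test method.
class RetryTransformer implements IAnnotationTransformer {
    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        annotation.setRetryAnalyzer(RetryFailedTestCases.class);
    }
}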

How should I manage dependencies among test cases [duplicate]

When I run go test, my output:
--- FAIL: TestGETSearchSuccess (0.00s)
        Location:   drivers_api_test.go:283
        Error:      Not equal: 200 (expected)
                    != 204 (actual)
--- FAIL: TestGETCOSearchSuccess (0.00s)
        Location:   drivers_api_test.go:391
        Error:      Not equal: 200 (expected)
                    != 204 (actual)
But after I run go test again, all my tests pass.
Tests fail only when I reset my MySQL database and then run go test for the first time.
For every GET request, I do a POST request first to ensure that there is data created in the DB.
Could anyone help me with how to make sure that the tests run sequentially? That is, that the POST requests run before the GET requests?
You can't / shouldn't rely on test execution order. The order in which tests are executed is not defined, and with the use of testing flags it is possible to exclude tests from running, so you have no guarantee that they will run at all.
For example, the following command will only run tests whose names contain the letter 'W':
go test -run W
Also note that if some test functions mark themselves eligible for parallel execution using the T.Parallel() method, the go tool will reorder the tests to first run non-parallel tests, and then run parallel tests in parallel under certain circumstances (controlled by test flags like -parallel). You can see examples of this in this answer: Are tests executed in parallel in Go or one by one?
Tests should be independent of each other. If a test function has prerequisites, those cannot be implemented in another test function.
Options to do additional tasks before a test function is run:
You may put it in the test function itself
You may put it in a package init() function, in the _test.go file itself. This will run once before execution of test functions begins.
You may choose to implement a TestMain() function, which will be called first, and in which you may do additional setup before you call M.Run() to trigger the execution of the test functions.
You may mix the above options.
In your case, in package init() or TestMain() you should check whether your DB is initialized (whether the test records are inserted), and if not, insert the test records.
Note that starting with Go 1.7, you may use subtests, in which you define the execution order of the subtests. For details see the blog post: Using Subtests and Sub-benchmarks, and the package doc of the testing package.
For those who, like me, are running into problems because of multiple concurrent tests running simultaneously: I found a way to limit the maximum number of tests running in parallel:
go test -p 1
With this, your tests will run sequentially, package by package: -p limits the number of packages built and tested in parallel.
Source
Apart from third-party libraries like Convey and Ginkgo, with plain Go 1.7+ you can use subtests to run tests sequentially. You can read more here
func TestFoo(t *testing.T) {
    // <setup code>
    t.Run("A=1", func(t *testing.T) { ... })
    t.Run("A=2", func(t *testing.T) { ... })
    t.Run("B=1", func(t *testing.T) { ... })
    // <tear-down code>
}
And you can run them conditionally with:
go test -run '' # Run all tests.
go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
So let's say you have a user package from a REST API that you want to test. You need to test the create handler in order to be able to test the login handler. Usually I would have this in user_test.go:
type UserTests struct{ Test *testing.T }

func TestRunner(t *testing.T) {
    t.Run("A=create", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestCreateRegularUser()
        test.TestCreateConfirmedUser()
        test.TestCreateMasterUser()
        test.TestCreateUserTwice()
    })
    t.Run("A=login", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestLoginRegularUser()
        test.TestLoginConfirmedUser()
        test.TestLoginMasterUser()
    })
}
Then I can add methods to the UserTests type that won't be executed by the go test command from any _test.go file:
func (t *UserTests) TestCreateRegularUser() {
    registerRegularUser := util.TableTest{
        Method:      "POST",
        Path:        "/iot/users",
        Status:      http.StatusOK,
        Name:        "registerRegularUser",
        Description: "register Regular User has to return 200",
        Body:        SerializeUser(RegularUser),
    }
    response := util.SpinSingleTableTests(t.Test, registerRegularUser)
    util.LogIfVerbose(color.BgCyan, "IOT/USERS/TEST", response)
}
The best way to achieve that is to create a TestMain, as presented here.
import (
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    // Do your stuff here
    os.Exit(m.Run())
}
It's also possible to synchronize the tests using wait groups:
awaitRootElement := sync.WaitGroup{}
awaitRootElement.Add(1)

t.Run("it should create root element", func(t0 *testing.T) {
    // do the test which creates the root element
    awaitRootElement.Done()
})

awaitRootElement.Wait()
t.Run("it should act on root element somehow", func(t0 *testing.T) {
    // do tests which depend on the root element
})
Note that you should wait before scheduling the dependent tests, since otherwise the asynchronous test execution might deadlock (the test routine is awaiting another test which never gets to run).

TestNG parallel test/method using dataprovider

I have a TestNG method just like this:
@Test(dataProvider = "takeMyProvider")
public void myTest(String param1, String param2) {
    System.out.println(param1 + " " + param2);
}
My data provider returns 10 elements, so my method will be executed 10 times in one thread. How is it possible to parallelize this? For example:
I want to have 5 methods running in parallel: the WebDriver should open 5 browsers at the same time, and after these 5 tests have run in parallel, the other 5 tests should be executed,
or
the WebDriver should open 10 browsers and run all 10 elements in parallel.
Does anybody have an idea?
You can define the parallelism via a suite file in TestNG. The following example runs methods in parallel with 10 threads:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="MySuiteNameHere" parallel="methods" thread-count="10">
    <test name="Selenium Tests">
        <classes>
            <class name="foo.bar.FooTest"/>
        </classes>
    </test>
</suite>
You also need to make sure that your data provider is thread safe and marked as parallel, so that it does not force the method to run sequentially:
// data providers force single-threaded execution by default
@DataProvider(name = "takeMyProvider", parallel = true)
Be careful, though. TestNG does not create new instances of the test class when running with parallel methods. That means that if you save values on the test class object, you can run into threading issues.
Also note that if you set the thread count to 5, it does not wait for the first 5 to all be finished before starting the next 5. It basically puts all the test methods into a queue and then starts up x threads; each thread simply polls the next method from the queue when it is available.
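One common way around that shared-state pitfall is to keep per-thread state, such as the WebDriver, in a ThreadLocal. A minimal sketch (not from the original answer; ChromeDriver and the class name are illustrative):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;

public class ParallelSafeTest {

    // One browser per worker thread, so parallel methods never share a driver.
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(ChromeDriver::new);

    @Test(dataProvider = "takeMyProvider")
    public void myTest(String param1, String param2) {
        WebDriver driver = DRIVER.get();
        // ... drive the browser; driver.quit() cleanup omitted for brevity
    }
}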
TestNG's @Test annotation already has what you want... To some degree:
// Execute 10 times with a pool of 5 threads
@Test(invocationCount = 10, threadPoolSize = 5)
What this won't do is fit your first scenario exactly, that is, run the first 5, wait for them to finish, run the other 5.
Many thanks for your feedback and useful tips.
My tests ran, possibly in parallel, but only in one browser instance.
Let's go into detail:
My data provider returns an Object[][]:
@DataProvider(name = "takeMyProvider", parallel = true)
public Object[][] myProvider() {
    return new Object[][] { { "1", "name1" }, { "2", "name2" }, { "3", "name3" } };
}
This test method is executed three times
@Test(dataProvider = "takeMyProvider")
public void myTest(String param1, String param2) {
    System.out.println(param1 + " " + param2);
}
but just in one browser instance. That's not what I want.
I want TestNG to start 3 Chrome instances and run the 3 tests in parallel.
Btw, I am running the tests on a Selenium Grid, maybe with 100 nodes.
It would be perfect if 100 nodes ran this test in parallel. Or even 1,000, depending on the data provider.
Does anybody have an idea?
Best regards

JUnit test case failed

I have a simple test case:
public class FileManagerTest {

    String dirPath = "/myDir/";
    MyFileManager mFileManager;

    @Before
    public void setUp() {
        mFileManager = MyFileManager.getInstance();
    }

    @Test
    public void testPersistFiles() {
        System.out.println("testPersistFiles()...");
        // it deletes old files & persists new files to the /myDir/ directory
        boolean successful = mFileManager.persistFiles();
        Assert.assertTrue(successful);
    }

    @Test
    public void testGetFiles() {
        System.out.println("testGetFiles()...");
        mFileManager.persistFiles();
        // I double checked, persistFiles() works, the files are persisted.
        List<File> files = mFileManager.getFilesAtPath(dirPath);
        Assert.assertNotNull(files); // Failure here!!!!
    }

    @Test
    public void testGetFilesMap() {
        System.out.println("testGetFilesMap()...");
        mFileManager.persistFiles();
        Map<String, File> filesMap = mFileManager.getFilesMapAtPath(dirPath);
        Assert.assertNotNull(filesMap);
    }
}
The persistFiles() function in FileManager deletes all files under /myDir/ and then persists the files again.
As you can see above, I have a System.out.println(...) in each test function. When I run the class, I can see all the prints in the following order:
testGetFilesMap()...
testGetFiles()...
testPersistFiles()...
However, the test fails at testGetFiles(). Two things I don't understand:
I don't understand: it failed at testGetFiles(), so why can I still see the print from testPersistFiles()? It sounds like even when a test fails, JUnit doesn't stop running but continues with the next test, testPersistFiles(). What is happening behind the scenes in a JUnit test case?
Another thing I don't understand is why testGetFiles() fails. I can see from the log that persistFiles() has persisted the files. Why does it get null after that?
I don't understand: it failed at testGetFiles(), so why can I still see the print from testPersistFiles()...
That is how unit testing works. Each test should be isolated and work using only its own set of data. Unit test frameworks run every test so you can see which parts of the system work and which do not; they do not stop on the first failure.
mFileManager.getFilesAtPath(dirPath);
You may not be searching for the files in the right place:
String dirPath = "/myDir/";
Are you sure that this path is OK, with a slash before the directory name?
For each of your tests, JUnit creates a separate instance of that class and runs it. Since you seem to have 3 tests, JUnit will create 3 instances of your class, execute @Before on each of them to initialize state, and then run them.
The order in which they are run is typically the order in which the tests are written but this is not guaranteed.
Now about the print statement: it's the first statement in your test, so it will be executed. Then mFileManager.persistFiles(); is executed. For some reason it returns false, and hence the test fails.
As to why it returns false, you can run a local debugger, put a break point at the beginning of that method, single-step and see.
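If you want to convince yourself of the fresh-instance behaviour, here is a minimal self-contained sketch (the class name is made up); each test prints a different instance identity, and the counter never reaches 2:

import org.junit.Test;

public class InstancePerTestDemo {

    // JUnit 4 creates a new InstancePerTestDemo object for every test method,
    // so state stored in fields does not leak between tests.
    private int counter = 0;

    @Test
    public void first() {
        counter++;
        System.out.println("first: counter=" + counter + ", instance=" + this);
    }

    @Test
    public void second() {
        counter++;
        System.out.println("second: counter=" + counter + ", instance=" + this);
    }
}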

Selenium webdriver tests pass individually but fail when run together

I have a Java file which has 7 JUnit tests to run. If I run all the tests at once, all but 1 pass. If I comment out certain tests, that one test always passes.
Can anybody offer any suggestions as to what could be causing this?
My first thought was something in the test setup or cleanup, but I am not sure what it could be. All I do in the cleanup is quit the driver and output the time taken to run the test.
In the setup I set up the driver, record the start time, create a Firefox profile and read in some data from a properties file to use in the tests.
If it were the setup / cleanup, surely the other 6 tests would also be affected? The test that fails is a simple test to check that entering an invalid card type displays an error message on the page.
UPDATE:
I've renamed the test so it runs first, and now all 7 pass each time. What could be causing this? Do I need to set something in my test cleanup to get it back to a default state?
My test cleanup:
@After
public void testCleanup() throws IOException {
    driver.quit();
    endTime = System.currentTimeMillis();
    long totalTime = ((endTime - startTime) / 1000) / 60;
    System.out.println();
    System.out.println("Test Suite Took: " + totalTime + " Minutes.");
}

Additional logging JBehave

The scenario is this:
We are using JBehave and Selenium for system, integration and end to end testing.
I am checking the results of a calculation on a page with in excess of 20 values to validate.
Using JUnit Assert, the entire test will fail on the first instance of one of the values being incorrect. What I wanted was that, if an assertion failure is met, the test continues to execute, so that I can collate all of the incorrect values in one test run rather than over multiple test runs (a sketch of one approach appears at the end of this thread).
To do this I capture the assertions and write out to a log file anything that fails the validation. This has left me with a couple of issues:
1) The log file where I write out the assertion failures does not contain the name of the JBehave Story or Scenario that was being run when the exception occurred.
2) The JBehave Story or Scenario is listed as having 'Passed' and I want it to be listed as 'Failed'.
Is there any way that I can either log the name of the Story and Scenario out to the additional log file OR get the additional logging written to the JBehave log file?
How can I get the Story / Scenario marked as failed?
In the JBehave configuration I have:
configuredEmbedder()
    .embedderControls()
    .doIgnoreFailureInStories(true)
    .doIgnoreFailureInView(false)
    .doVerboseFailures(true)
    .useStoryTimeoutInSecs(appSet.getMaxRunningTime());
and
.useStoryReporterBuilder(
    new StoryReporterBuilder()
        .withDefaultFormats()
        .withViewResources(viewResources)
        .withFormats(Format.HTML, Format.CONSOLE)
        .withFailureTrace(true)
        .withFailureTraceCompression(true)
        .withRelativeDirectory("jbehave/" + appSet.getApplication()));
Yes, you can create your own StoryReporter:
public class MyStoryReporter implements org.jbehave.core.reporters.StoryReporter {

    private Log log = ...

    @Override
    public void successful(String step) {
        log.info(">>successStep:" + step);
    }

    @Override
    public void failed(String step, Throwable cause) {
        log.error(">>error:" + step + ", reason:" + cause);
    }

    ...
}
and register it like this:
.useStoryReporterBuilder(
    new StoryReporterBuilder()
        .withReporters(new MyStoryReporter())
        ..
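As for the "carry on after a failed assertion and collate the failures" part of the question: one standard JUnit 4 mechanism for this, a sketch rather than necessarily what the asker used (the two page-reading helpers are hypothetical stand-ins for the real Selenium reads), is the ErrorCollector rule. It records each failed check, lets the test continue, reports all mismatches together at the end, and marks the test as failed:

import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class CalculationPageTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void validatesAllCalculatedValues() {
        // Each failed check is recorded instead of aborting the test;
        // the test fails once, at the end, with every mismatch listed.
        collector.checkThat("total", readTotalFromPage(), equalTo("100.00"));
        collector.checkThat("tax", readTaxFromPage(), equalTo("8.25"));
        // ... the remaining ~20 values
    }

    // Hypothetical page accessors standing in for the real Selenium reads.
    private String readTotalFromPage() { return "100.00"; }
    private String readTaxFromPage() { return "8.25"; }
}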
