Difference between Tests and Steps in testng extent report - java

I'm confused about the difference between Tests and Steps in the TestNG extent report.
I have 2 test cases: 1 passes and 1 fails. In the extent report, under Tests it shows: 1 test(s) passed, 1 test(s) failed, 0 others, and under Steps: 1 step(s) passed, 2 step(s) failed, 0 others.
So would anyone clarify what the difference is between the two?
Attaching a code snippet and the TestNG extent report.
@Test
public void demoTestPass()
{
    test = extent.createTest("demoTestPass", "This test will demonstrate the PASS test case");
    Assert.assertTrue(true);
}

@Test
public void demoTestFail()
{
    test = extent.createTest("demoTestFail", "This test will demonstrate the FAIL test case");
    Assert.assertEquals("Hi", "Hello");
}
Please click here for the Extent report.
Any clarification would be much appreciated.

Difference between Tests and Steps in ExtentReports:
Tests: the total number of test sections you have created in your report, with syntax like extentReport.createTest("name of section");
Steps: the total number of log entries you have generated in the script, with syntax like testlog.info(), testlog.pass() or testlog.fail(), where testlog is an object of the ExtentTest class.
Example:
In this report, 3 sections have been created, and they show up as Tests. Steps is the number of log entries recorded within those tests.
Your case:
Tests: 1 test(s) passed, 1 test(s) failed, 0 others; Steps: 1 step(s) passed, 2 step(s) failed, 0 others.
Tests shows 1 pass and 1 fail because one of the tests fails at the step level. Your Steps include 1 pass and 2 fails, and that is reflected in the Tests count.
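To make the counting concrete, here is a minimal sketch of the setup the question's snippet implies (the reporter class, file name and flush location are my assumptions, based on ExtentReports 5): every createTest() call contributes to the Tests count, and every log call such as pass() or fail() contributes to the Steps count.
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import org.testng.Assert;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

public class DemoExtentSetup
{
    private ExtentReports extent;
    private ExtentTest test;

    @BeforeSuite
    public void setUpReport()
    {
        extent = new ExtentReports();
        // The file name is arbitrary; ExtentSparkReporter is the ExtentReports 5 HTML reporter.
        extent.attachReporter(new ExtentSparkReporter("extent-report.html"));
    }

    @Test
    public void demoTestPass()
    {
        // One createTest() call -> one entry in the Tests count.
        test = extent.createTest("demoTestPass", "This test will demonstrate the PASS test case");
        // One log call -> one entry in the Steps count.
        test.pass("assertion passed");
        Assert.assertTrue(true);
    }

    @AfterSuite
    public void tearDownReport()
    {
        // Writes the collected tests and steps to the report file.
        extent.flush();
    }
}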

Test (startTest("test name")) is what creates a new test in Extent reports.
Steps denotes how many messages (test.pass("pass message"), test.fail("fail message"), test.info("info message")) you have logged to the report.
Suppose you have two test methods and each test method logs 1 pass and 1 info message.
In the Extent report, that will show as 2 tests and 4 steps in total:
2 pass steps and 2 info steps.
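Reusing the ExtentReports/ExtentTest setup from the sketch earlier in this thread, that scenario would look roughly like this (method and message names are mine):
@Test
public void firstTest()
{
    ExtentTest firstTest = extent.createTest("firstTest");   // counted as 1 test
    firstTest.pass("first check passed");                    // counted as 1 pass step
    firstTest.info("some extra detail");                     // counted as 1 info step
}

@Test
public void secondTest()
{
    ExtentTest secondTest = extent.createTest("secondTest"); // counted as 1 test
    secondTest.pass("second check passed");                  // counted as 1 pass step
    secondTest.info("some extra detail");                    // counted as 1 info step
}
// Report totals: 2 tests and 4 steps (2 pass, 2 info).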

Related

Print test summary to console in JUnit 5

Is it possible in JUnit 5 to print a test summary at the end of all tests to the console?
It should contain a list of the tests that have failed, and a list of the ones that were successful.
Try gradle test -i
This will print each test case with its result and the final summary.
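If you would rather print the summary from code instead of relying on Gradle's logging, one possible sketch uses the JUnit Platform Launcher API (the package name in the selector is a placeholder; listing every successful test individually would still need a custom TestExecutionListener):
import java.io.PrintWriter;

import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.launcher.listeners.TestExecutionSummary;

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectPackage;

public class RunAllTests
{
    public static void main(String[] args)
    {
        // "com.example.tests" is a placeholder for your own test package.
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectPackage("com.example.tests"))
                .build();

        Launcher launcher = LauncherFactory.create();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.registerTestExecutionListeners(listener);
        launcher.execute(request);

        TestExecutionSummary summary = listener.getSummary();
        PrintWriter out = new PrintWriter(System.out);
        summary.printTo(out);          // overall counts (started, succeeded, failed, skipped)
        summary.printFailuresTo(out);  // one entry per failed test
        out.flush();
    }
}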

How should I manage dependencies among test cases [duplicate]

When I run go test, my output:
--- FAIL: TestGETSearchSuccess (0.00s)
Location: drivers_api_test.go:283
Error: Not equal: 200 (expected)
!= 204 (actual)
--- FAIL: TestGETCOSearchSuccess (0.00s)
Location: drivers_api_test.go:391
Error: Not equal: 200 (expected)
!= 204 (actual)
But after I run go test again, all my tests pass.
Tests fail only when I reset my MySQL database and then run go test for the first time.
For every GET request, I do a POST request first to ensure that there is data created in the DB.
Could anyone help me with how to make sure that the tests run sequentially, i.e. that the POST requests run before the GET requests?
You can't / shouldn't rely on test execution order. The order in which tests are executed is not defined, and with the use of testing flags it is possible to exclude tests from running, so you have no guarantee that they will run at all.
For example the following command will only run tests whose name contains a 'W' letter:
go test -run W
Also note that if some test functions mark themselves eligible for parallel execution using the T.Parallel() method, the go tool will reorder the tests to first run non-parallel tests, and then run parallel tests in parallel under certain circumstances (controlled by test flags like -parallel). You can see examples of this in this answer: Are tests executed in parallel in Go or one by one?
Tests should be independent of each other. If a test function has prerequisites, those cannot be implemented in another test function.
Options to do additional tasks before a test function is run:
You may put it in the test function itself
You may put it in a package init() function, in the _test.go file itself. This will run once before execution of test functions begins.
You may choose to implement a TestMain() function which will be called first and in which you may do additional setup before you call M.Run() to trigger the execution of test functions.
You may mix the above options.
In your case in package init() or TestMain() you should check if your DB is initialized (there are test records inserted), and if not, insert the test records.
Note that starting with Go 1.7, you may use subtests in which you define execution order of subtests. For details see blog post: Using Subtests and Sub-benchmarks, and the package doc of the testing package.
For those who, like me, are running into problems because of multiple tests running concurrently: I found a way to limit the maximum number of tests running in parallel:
go test -p 1
With this, your tests will run sequentially, one by one.
Source
Apart from 3rd-party libraries like Convey and Ginkgo, with plain Go 1.7 you can run tests sequentially using subtests. You can read more here.
func TestFoo(t *testing.T) {
    // <setup code>
    t.Run("A=1", func(t *testing.T) { ... })
    t.Run("A=2", func(t *testing.T) { ... })
    t.Run("B=1", func(t *testing.T) { ... })
    // <tear-down code>
}
And you can run them conditionally with:
go test -run '' # Run all tests.
go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
So let's say you have a user package from a REST API that you want to test. You need to test the create handler in order to be able to test the login handler. Usually I would have this in user_test.go:
type UserTests struct{ Test *testing.T }

func TestRunner(t *testing.T) {
    t.Run("A=create", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestCreateRegularUser()
        test.TestCreateConfirmedUser()
        test.TestCreateMasterUser()
        test.TestCreateUserTwice()
    })
    t.Run("A=login", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestLoginRegularUser()
        test.TestLoginConfirmedUser()
        test.TestLoginMasterUser()
    })
}
Then I can attach methods to the UserTests type that won't be executed by the go test command in any _test.go file:
func (t *UserTests) TestCreateRegularUser() {
    registerRegularUser := util.TableTest{
        Method:      "POST",
        Path:        "/iot/users",
        Status:      http.StatusOK,
        Name:        "registerRegularUser",
        Description: "register Regular User has to return 200",
        Body:        SerializeUser(RegularUser),
    }
    response := util.SpinSingleTableTests(t.Test, registerRegularUser)
    util.LogIfVerbose(color.BgCyan, "IOT/USERS/TEST", response)
}
The best way to achieve that is to create a TestMain, as presented here.
import (
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    // Do your stuff here
    os.Exit(m.Run())
}
It's also possible to synchronize the tests using wait groups:
awaitRootElement := sync.WaitGroup{}
awaitRootElement.Add(1)

t.Run("it should create root element", func(t0 *testing.T) {
    // do the test which creates root element
    awaitRootElement.Done()
})

awaitRootElement.Wait()
t.Run("it should act on root element somehow", func(t0 *testing.T) {
    // do tests which depend on root element
})
Note that you should wait before scheduling the tests, since the asynchronous test execution otherwise might deadlock (the test routine is awaiting another test which never gets to run).

Customizing TestResults display in TestNG

I'd like to customize and display more information for test suites or tests, such as test run times, e.g. adding more information to the output displayed below:
===============================================
Demo-Suite
Total tests run: 19, Failures: 1, Skips: 0
===============================================
Any suggestions on how to add more to the above info, such as the average test suite run time, etc.?
Here is a solution for you:
Let us assume we have a TestNG script with 3 test cases, where 1 test case passes and 2 test cases fail.
@Test
public void test1()
{
    Assert.assertEquals(12, 13);
}

@Test
public void test2()
{
    System.out.println("Testcase 2 Started");
    Assert.assertEquals(12, 13, "Dropdown count does not match");
    System.out.println("Testcase 2 Completed");
}

@Test
public void test3()
{
    System.out.println("Testcase 3 Started");
    Assert.assertEquals("Hello", "Hello", "Words do not match. Please raise a Bug.");
    System.out.println("Testcase 3 Completed");
}
So you get the result on the console as: Tests run: 3, Failures: 2, Skips: 0
Now to look at the granular details you can do the following:
a. Move to the "Results of running class your_class_name" tab. Here you will see some finer details of the execution, such as the default suite execution time, the default test execution time, the time taken for each individual test, etc.
b. To view more details, click the "Open TestNG report" icon located on the top bar of "Results of running class your_class_name". This will give you a lot more information about test case results and the time taken.
Now if you need more detailed information in the form of a dashboard, execution info and system details, you can integrate ExtentReports with TestNG to get some superb graphical representations of your test execution.
(Screenshots: Dashboard, Execution Info, System Details.)
Let me know if this answers your question.
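If you go the ExtentReports route, a rough sketch of wiring it in as a TestNG listener might look like the following (it assumes TestNG 7+, where ITestListener has default methods, and ExtentReports 5; the class and file names are mine):
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class ExtentTestNGListener implements ITestListener
{
    private final ExtentReports extent = new ExtentReports();
    private ExtentTest test;

    @Override
    public void onStart(ITestContext context)
    {
        // Attach the HTML reporter once per <test> context.
        extent.attachReporter(new ExtentSparkReporter("extent-report.html"));
    }

    @Override
    public void onTestStart(ITestResult result)
    {
        // One Extent test per TestNG test method.
        test = extent.createTest(result.getMethod().getMethodName());
    }

    @Override
    public void onTestSuccess(ITestResult result)
    {
        test.pass("passed");
    }

    @Override
    public void onTestFailure(ITestResult result)
    {
        test.fail(result.getThrowable());
    }

    @Override
    public void onFinish(ITestContext context)
    {
        extent.flush();
    }
}
Register the listener through the <listeners> element in testng.xml or the @Listeners annotation on the test class.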
Too long by 3 characters to be a comment.
TestNG documentation is your friend. You could provide your own implementation. Very basic example here.
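As an illustration, a hedged sketch of such an implementation (the class name and output format are my own) that prints each suite's run time plus the average run time of its <test> blocks could be an ISuiteListener:
import org.testng.ISuite;
import org.testng.ISuiteListener;
import org.testng.ISuiteResult;
import org.testng.ITestContext;

public class SuiteTimingListener implements ISuiteListener
{
    private long suiteStart;

    @Override
    public void onStart(ISuite suite)
    {
        suiteStart = System.currentTimeMillis();
    }

    @Override
    public void onFinish(ISuite suite)
    {
        long suiteElapsed = System.currentTimeMillis() - suiteStart;

        // Average the run time of every <test> context in the suite.
        long total = 0;
        int count = 0;
        for (ISuiteResult result : suite.getResults().values())
        {
            ITestContext ctx = result.getTestContext();
            total += ctx.getEndDate().getTime() - ctx.getStartDate().getTime();
            count++;
        }
        long average = count == 0 ? 0 : total / count;

        System.out.println("Suite '" + suite.getName() + "' ran in " + suiteElapsed
                + " ms, average <test> run time: " + average + " ms");
    }
}
Hook it up via <listeners> in testng.xml and the extra line appears after the usual summary.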
Another approach is to use HTML/XML report generation and inspect the data from a test run there. It's a bunch of HTML pages with pretty colors and some data. Sample report here. Also, if your project is using Apache Maven, then just enable the Surefire plug-in. Sample report here.

Spock and Maven: tests with #Unroll fail with errors but placeholders are not filled with iteration data in Maven output

I have a parameterized test with Spock, and there are 10 cases coming into the test in the where block. So I decided to use the #Unroll annotation, so that when some of the 10 cases fail, I will understand which one.
So I added a placeholder with a message about what kind of case it is to the feature name, let's say it's
"Test with #message case"(String message, etc..){...}.
If I launch it in IDEA, the output looks as expected (on the left side of the window, where the tree of tests is shown):
Test with SomeIterationMessage case: failed
Test with AnotherIterationMessage case: failed
But the IDEA console output looks like:
Condition not satisfied:
resultList.size() == expectedSize
|          |      |  |
[]         0      |  1
                  false
<Click to see difference>
at transformer.NameOfSpec.Contract. Test with #message case (NameOfSpec.groovy:220)
If I launch the project build with Maven from the command line and these tests fail, I just get messages like the ones in the IDEA console output, which is absolutely useless:
Test with #message case: failed
Test with #message case: failed
So it does not replace the placeholders with the particular iteration data, and I get no info about which iteration crashed.
How can I fix this so that the IDEA console output and the Maven output get it right? Because if that's impossible, this #Unroll annotation is really worth nothing. In the IDE the test can pass with no problem, but in a big project with tons of dependencies it can crash when you build it, and you will never know why or which iteration failed, because the output tells you nothing.
Okay, so it can be used with Maven with no problem. In the IDE we can use the panel with the tree of tests; it works fine, as I said before.
As for Maven: yes, it doesn't show anything in the console output. But when a test fails we can go to
target\surefire-reports
in the root of our module (Maven generates a report file there for each test class and fills it with output), and get the right name of the iteration with the actual iteration data.
-------------------------------------------------------------------------------
Test set: *************************.***MapReduceSpec
-------------------------------------------------------------------------------
Tests run: 6, Failures: 0, Errors: 3, Skipped: 1, Time elapsed: 0.938 sec <<< FAILURE! - in *******************.DmiMapReduceSpec
MapReduce flow goes correctly for dataclass (contract) (*******************.***MapReduceSpec) Time elapsed: 0.185 sec <<< ERROR!
The value "(contract)" at the end of the method string has been taken from the iteration parameter #message. So the raw method name looks like
"MapReduce flow goes correctly for dataclass (#message)"(){...}
Of course, it's not a very cool trick, but it's still much faster than manually debugging 20 or more inputs in the where block to figure out which one is dead.

Junit preconditions and test data

I have a Java assignment to create an address book, then test and evaluate it. I have created it and written some JUnit tests. In the deliverables section of the assignment it says to list all the test cases for the full program in a table along with:
A unique id
a description of the test
pre-conditions for running the test
the test data
the expected result
Could somebody tell me what they mean by the preconditions and the test data for the test below?
public void testGetName()
{
    Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");
    assertEquals("Alison Murray", entry1.getName());
}
I tried emailing the tutor (I'm a distance learner) but it's taking too long to get a reply. Would the precondition be that entry1 needs to be populated? Test data: "Alison Murray"? Any help is appreciated.
There are two types of checks with JUnit:
assertions (org.junit.Assert.*);
assumptions (org.junit.Assume.*).
Assertions are usually used to check your test results. If the result is not what was expected, then the test fails.
Assumptions are used to check whether the test data is valid (whether it matches the test case). If it isn't, the test is cancelled (without any error).
As I read your code sample: there are no preconditions and the test data would be entry1.
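To illustrate the distinction on the question's own test, here is a rough sketch assuming JUnit 4 and the OP's Entry class (the assumeNotNull guard is invented purely to show the mechanism):
import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeNotNull;

import org.junit.Test;

public class EntryTest
{
    @Test
    public void testGetName()
    {
        // Test data: the values the Entry is constructed with.
        Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");

        // Assumption (a precondition-style check): if it fails, the test is skipped, not failed.
        assumeNotNull(entry1);

        // Assertion (the expected result): if it fails, the test fails.
        assertEquals("Alison Murray", entry1.getName());
    }
}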
