Print test summary to console in JUnit 5 - java

Is it possible in JUnit 5 to print a test summary at the end of all tests to the console?
It should contain a list of the tests that have failed, and a list of the ones that were successful.

Try running gradle test -i
This will print each test case with its result, as well as the final summary.
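If you want more control than the Gradle output gives you, the JUnit Platform also lets you register a TestExecutionListener and print your own summary. Here is a minimal sketch; the class name ConsoleSummaryListener is my own, and you would register it via a META-INF/services/org.junit.platform.launcher.TestExecutionListener file or programmatically through the Launcher API:

import java.util.ArrayList;
import java.util.List;

import org.junit.platform.engine.TestExecutionResult;
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;
import org.junit.platform.launcher.TestPlan;

public class ConsoleSummaryListener implements TestExecutionListener {

    private final List<String> passed = new ArrayList<>();
    private final List<String> failed = new ArrayList<>();

    @Override
    public void executionFinished(TestIdentifier id, TestExecutionResult result) {
        if (!id.isTest()) {
            return; // ignore containers such as test classes
        }
        if (result.getStatus() == TestExecutionResult.Status.SUCCESSFUL) {
            passed.add(id.getDisplayName());
        } else if (result.getStatus() == TestExecutionResult.Status.FAILED) {
            failed.add(id.getDisplayName());
        }
    }

    @Override
    public void testPlanExecutionFinished(TestPlan testPlan) {
        System.out.println("Successful tests:");
        passed.forEach(name -> System.out.println("  " + name));
        System.out.println("Failed tests:");
        failed.forEach(name -> System.out.println("  " + name));
    }
}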

Related

Code Coverage for every (different) input data

I want to get the code coverage of a simple test that gets its data from a DataProvider. I need a coverage result for each piece of data that runs through the test. For example:
if (value != 0) {
    // do something
}
if (value == 100) {
    // do something
}
// else do something
If the test gets a value like 0 from the DataProvider, it never reaches the body of the first if, so the coverage result is different than if the value were 100.
So how do I get a coverage result for each piece of data? I am using JaCoCo with the Maven plugin...
It might help if there were a way to run the sub-tests individually with Maven. Currently I am doing this:
mvn test
but I want to do something like this:
mvn -Dtest=myTestClass#myTest#myData (the #myData part of course does not work)
However, IntelliJ uses this parameter to specify the subtest:
java.exe -ea [.......] #name0 //-> to run the test only with first Data
java.exe -ea [.......] #name1 //-> to run the test only with second Data
etc.
Thanks for your help in advance!
Code coverage is the percentage of code which is covered by automated tests. Code coverage measurement simply determines which statements in a body of code have been executed through a test run, and which statements have not.
https://confluence.atlassian.com/clover/about-code-coverage-71599496.html
You can pass the value as an argument on the command line and then run your tests, like this:
mvn test -Dtest=<ClassName> -Dvalue=100
then access it in your test with
int value = Integer.valueOf(System.getProperty("value"));
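As a rough sketch of how that fits together (assuming TestNG; the class CoverageValueTest and the method process are hypothetical stand-ins for the real code under test):

import org.testng.Assert;
import org.testng.annotations.Test;

public class CoverageValueTest {

    @Test
    public void myTest() {
        // read the value passed with -Dvalue=...; "0" is an assumed default
        int value = Integer.valueOf(System.getProperty("value", "0"));

        // exercise the code under test with exactly this one input,
        // so the JaCoCo report for this run reflects only this path
        String result = process(value);
        Assert.assertNotNull(result);
    }

    // hypothetical stand-in for the branching code from the question
    private String process(int value) {
        StringBuilder sb = new StringBuilder();
        if (value != 0) {
            sb.append("non-zero ");
        }
        if (value == 100) {
            sb.append("hundred");
        }
        return sb.toString();
    }
}

Running mvn test -Dtest=CoverageValueTest -Dvalue=0 and then again with -Dvalue=100, each time with JaCoCo attached, gives you one coverage report per input.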

Difference between Tests and Steps in testng extent report

I'm confused about the difference between Tests and Steps in a TestNG extent report.
I have 2 test cases: 1 passes and 1 fails. In the extent report, under Tests it shows: 1 test(s) passed, 1 test(s) failed, 0 others; and under Steps: 1 step(s) passed, 2 step(s) failed, 0 others.
Could anyone clarify the difference between the two?
Attaching code snippet and testng extent report
@Test
public void demoTestPass() {
    test = extent.createTest("demoTestPass", "This test will demonstrate the PASS test case");
    Assert.assertTrue(true);
}
@Test
public void demoTestFail() {
    test = extent.createTest("demoTestFail", "This test will demonstrate the FAIL test case");
    Assert.assertEquals("Hi", "Hello");
}
Please click for Extent report here.
Any clarification would be much appreciated.
Difference between Tests and Steps in an extent report:
Tests: the total number of test sections you have created in your report, i.e. every call like extentReport.createTest("name of section");
Steps: the total number of log entries you have generated in the script, i.e. calls like testlog.info(), testlog.pass() or testlog.fail(), where testlog is an object of the ExtentTest class.
Example:
In this report, 3 sections have been created, and they show up as Tests. Steps is the number of log entries recorded inside those Tests.
Your case:
Tests: 1 test(s) passed, 1 test(s) failed, 0 others; Steps: 1 step(s) passed, 2 step(s) failed, 0 others.
Your Tests include 1 pass and 1 fail because one of them has a failing step. Your Steps include 1 pass and 2 fails, and that is reflected in the Tests count.
A Test (startTest("test name")) is what is used to create a new test in extent reports.
Steps denotes how many messages (test.pass("pass message"), test.fail("fail message"), test.info("info message")) you have logged to the report.
Say you have two test methods and each logs 1 pass and 1 info message.
Then the extent report will show 2 tests and 4 steps in total:
2 pass steps and 2 info steps.
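To make the counting concrete, here is a minimal sketch using the ExtentReports API (assuming the com.aventstack ExtentReports 4/5 style API with an ExtentSparkReporter; the file name extent.html is arbitrary):

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

public class ExtentCountsDemo {

    public static void main(String[] args) {
        ExtentReports extent = new ExtentReports();
        extent.attachReporter(new ExtentSparkReporter("extent.html"));

        // one "Test" in the report ...
        ExtentTest test1 = extent.createTest("demoTestPass");
        // ... with two "Steps" (log entries)
        test1.info("starting check");
        test1.pass("assertion held");

        // a second "Test" with a single failed step
        ExtentTest test2 = extent.createTest("demoTestFail");
        test2.fail("expected [Hello] but found [Hi]");

        extent.flush(); // writes the report: 2 tests, 3 steps in total
    }
}

So the Tests counter follows createTest() calls, while the Steps counter follows pass()/fail()/info() calls.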

Customizing TestResults display in TestNG

I would like to customize the results display for test suites or tests to show more information, such as test run times. For example, adding more information to the output displayed below:
===============================================
Demo-Suite
Total tests run: 19, Failures: 1, Skips: 0
===============================================
Any suggestions on how to add more to the above, such as the average test suite run time, etc.?
Here is the solution for you:
Let us assume we have a TestNG script with 3 test cases, where 1 test case passes & 2 test cases fail.
@Test
public void test1() {
    Assert.assertEquals(12, 13);
}
@Test
public void test2() {
    System.out.println("Testcase 2 Started");
    Assert.assertEquals(12, 13, "Dropdown count does not match");
    System.out.println("Testcase 2 Completed");
}
@Test
public void test3() {
    System.out.println("Testcase 3 Started");
    Assert.assertEquals("Hello", "Hello", "Words do not match. Please raise a Bug.");
    System.out.println("Testcase 3 Completed");
}
So you get the result on the console as: Tests run: 3, Failures: 2, Skips: 0
Now to look at the granular details you can do the following:
a. Move to the tab "Results of running class your_class_name". Here you will see some finer details of the execution, such as the default suite execution time, the default test execution time, the time taken for each individual test, etc.
b. To view more details, click the "Open TestNG report" icon located on the top bar of "Results of running class your_class_name". This will give you a lot more information about test case results and the time taken.
Now if you need more detailed information in the form of a dashboard, execution info and system details, you can integrate "ExtentReports" with TestNG to get some superb graphical representations of your test execution.
(Screenshots of the Dashboard, Execution Info and System Details views omitted.)
Let me know if this answers your question.
Too long by 3 characters to be a comment.
TestNG documentation is your friend. You could provide your own implementation. Very basic example here.
Another approach is to use HTML/XML report generation and to inspect the data from a test run there. It's a bunch of HTML pages with pretty colors and some data. Sample report here. Also, if your project uses Apache Maven, then just enable the Surefire plug-in. Sample report here.
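As a starting point for "provide your own implementation", here is a minimal sketch of a TestNG ISuiteListener that prints the total and average run time per <test> block. The class name SuiteTimingListener is my own; you would register it as a <listener> in testng.xml or with @Listeners on a test class:

import org.testng.ISuite;
import org.testng.ISuiteListener;
import org.testng.ISuiteResult;
import org.testng.ITestContext;

public class SuiteTimingListener implements ISuiteListener {

    @Override
    public void onStart(ISuite suite) {
        // nothing to do at suite start
    }

    @Override
    public void onFinish(ISuite suite) {
        long totalMillis = 0;
        int methods = 0;
        // each ISuiteResult corresponds to one <test> block in the suite
        for (ISuiteResult result : suite.getResults().values()) {
            ITestContext ctx = result.getTestContext();
            totalMillis += ctx.getEndDate().getTime() - ctx.getStartDate().getTime();
            methods += ctx.getAllTestMethods().length;
        }
        int blocks = suite.getResults().size();
        System.out.println("===============================================");
        System.out.println(suite.getName());
        System.out.println("Test blocks: " + blocks
                + ", test methods: " + methods
                + ", total time: " + totalMillis + " ms"
                + ", average per block: "
                + (blocks == 0 ? 0 : totalMillis / blocks) + " ms");
        System.out.println("===============================================");
    }
}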

Spock and Maven: tests with #Unroll fail with errors, but placeholders are not filled with iteration data in the Maven output

I have a parameterized Spock test with 10 cases in its where: block. So I decided to use the #Unroll annotation, so that when some of the 10 cases fail, I can see which ones.
I added a placeholder to the feature name with a message describing the kind of case, say
"Test with #message case"(String message, etc..){...}.
If I launch it in IDEA, the output looks as expected (in the tree of tests on the left side of the window):
Test with SomeIterationMessage case: failed
Test with AnotherIterationMessage case: failed
But IDEA's console output looks like this:
Condition not satisfied:
resultList.size() == expectedSize
| | | |
[] 0 | 1
false
<Click to see difference>
at transformer.NameOfSpec.Contract. Test with #message case (NameOfSpec.groovy:220)
If I build the project with Maven from the command line and these tests fail, I just get messages like the ones in the IDEA console output, which is just as useless:
Test with #message case: failed
Test with #message case: failed
So the placeholders are not replaced with the actual iteration data, and there is no way to tell which iteration failed.
How can I get the IDEA console output and the Maven output to fill them in? Because if that is impossible, the #Unroll annotation is not much use: in the IDE a test can pass with no problem, but in a big project with tons of dependencies it can fail during the build, and you will never know why or which iteration failed, because the output tells you nothing.
Okay, so it can be used with Maven after all. In the IDE we can use the panel with the tree of tests; it works fine, as I said before.
As for Maven: yes, it does not show anything in the console output. But when a test fails we can go to
target\surefire-reports
in the root of the module. Maven generates a report file there for each test class, and in it the iteration name contains the actual iteration data.
-------------------------------------------------------------------------------
Test set: *************************.***MapReduceSpec
-------------------------------------------------------------------------------
Tests run: 6, Failures: 0, Errors: 3, Skipped: 1, Time elapsed: 0.938 sec <<< FAILURE! - in *******************.DmiMapReduceSpec
MapReduce flow goes correctly for dataclass (contract) (*******************.***MapReduceSpec) Time elapsed: 0.185 sec <<< ERROR!
Here the value "(contract)" at the end of the method name was taken from the iteration parameter #message, so the raw method name looks like
"MapReduce flow goes correctly for dataclass (#message)"(){...}
Of course it is not a very elegant trick, but it is still much faster than manually debugging 20 or more inputs from the where: block to figure out which one is failing.

Junit preconditions and test data

I have a Java assignment to create an address book, then test and evaluate it. I have built it and written some JUnit tests. The deliverables section of the assignment says to list all the test cases for the full program in a table along with:
A unique id
a description of the test
pre-conditions for running the test
the test data
the expected result
Could somebody tell me what they mean by the preconditions and the test data for the test below:
public void testGetName() {
    Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");
    assertEquals("Alison Murray", entry1.getName());
}
I tried emailing the tutor (I'm a distance learner) but it's taking too long to get a reply. Would the pre-condition be that entry1 needs to be populated? Test data: "Alison Murray"? Any help is appreciated.
There are two types of checks in JUnit:
assertions (org.junit.Assert.*);
assumptions (org.junit.Assume.*).
Assertions are usually used to check your test results. If the result is not what was expected, the test fails.
Assumptions are used to check whether the test data are valid (i.e. whether they match the test case). If they don't, the test is cancelled (without any error).
As I read your code sample: there are no preconditions, and the test data would be entry1.
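A minimal sketch of how the two look side by side, assuming JUnit 4 and reusing the Entry class from your question:

import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class EntryTest {

    @Test
    public void testGetName() {
        // test data: the values the Entry is constructed with
        Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");

        // assumption (precondition): if it does not hold, the test is skipped, not failed
        assumeTrue(entry1.getName() != null);

        // assertion (expected result): if it does not hold, the test fails
        assertEquals("Alison Murray", entry1.getName());
    }
}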
