Access Allure reports while tests are running - java

We're using TestNG + Allure reports for our Java testing infrastructure. A test suite might take up to 4 hours to run, and we want to see the reports for the tests in the suite that have already finished.
Can I see a report partway through? How can one implement this?
I'm using Jenkins, but I believe the solution might lie elsewhere, since Jenkins supports publishing the reports only as a post-build action.
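One note on feasibility: Allure writes a result file per test as each test finishes, so an interim report can in principle be generated from the partial results at any point mid-run. A minimal sketch, assuming the Maven default results directory:
# build a static interim report from whatever results exist so far
allure generate --clean target/allure-results -o interim-report
# or spin up a throwaway local report server
allure serve target/allure-results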

Related

Test case in Allure report showing as passed even when there are failures for some test data

I ran a test case with multiple sets of test data; one set failed and the rest passed, but at the suites level the Allure report shows the test case as passed. When I go to the Retries tab, the failed run is there with a failed status.
I want the overall status of that test case at the suites level in the Allure report to be failed if even a single set of test data fails in the Retries tab. Can someone help me with suggestions?
I can only speculate, based on my own experience, that this might occur when there are multiple directories where test results are stored. For example, consider a scenario where we are using TestNG, the maven-surefire-plugin, and the Allure framework. By default, results are stored both in target/allure-results and in target/surefire-reports.
When you run allure serve target/surefire-reports/, it behaves as described because Allure picks up results that are not its own and does the best it can with them. It displays something, but it is far from what we expect.
When you run allure serve target/allure-results/, it behaves as it should, with each test treated separately and not counted as a retry.
Of course, this is just my speculation. In general, it's a good idea to check the target directory (or the build directory if using Gradle) and thoroughly examine what is being collected and where.
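As a concrete sketch of that check (assuming the Maven defaults above; allure.results.directory is the system property allure-java reads, so pinning it removes the ambiguity):
# make the results location explicit when running the tests
mvn clean test -Dallure.results.directory=target/allure-results
# then serve Allure's own results, not surefire's
allure serve target/allure-results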
In order to provide a better answer, it would be necessary to know which plugins you are using, what their configuration is, and how the Allure reports are generated.

I enabled the rerunFailingTestsCount Surefire feature. How do I configure Jenkins CI to show the rich test data?

I found out about the rerunFailingTestsCount feature in Surefire (commit). When a test fails, the runner reruns it up to a specified number of times. If any of these reruns succeed, the test is considered PASSED, but FLAKY.
This feature implements an extension to the JUnit XML report format, with additional information in the test results.
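For illustration, the extended report produced with rerunFailingTestsCount looks roughly like this (names and values are made up; intermediate failures of an eventually-passing test are recorded as flakyFailure entries rather than failures):
<testsuite name="com.example.CartTest" tests="1" failures="0" errors="0">
  <testcase name="testCheckout" classname="com.example.CartTest" time="0.31">
    <!-- first run failed, a rerun passed: recorded as flaky, not failed -->
    <flakyFailure message="expected:&lt;1&gt; but was:&lt;0&gt;"
                  type="java.lang.AssertionError">
      <stackTrace>java.lang.AssertionError: expected:&lt;1&gt; but was:&lt;0&gt;</stackTrace>
    </flakyFailure>
  </testcase>
</testsuite>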
How can I configure Jenkins CI to meaningfully show this newly gained data about my tests?
I would like to be able to monitor my flaky tests, so I can maintain a general overview of what's going on, and later prioritize fixing the ones that slow the build down the most.
A build containing only flaky tests should be easily distinguishable from one that contained some failed tests, and from one containing only passing tests.
It looks like you've found almost all the answers by yourself :)
The only thing that IMO is missing is a Jenkins plugin that can actually show (visualize) the flaky tests based on the Surefire reports.
There is indeed such a plugin, called Flaky Test Handler.
Disclaimer - I haven't tried it myself, but it seems it can do the job, and it would be my best bet for solving the issue.
An alternative would be creating a Jenkins plugin yourself, but that looks like a lot of hassle.
Yet another approach I can think of is creating a Maven plugin that would parse the results of the Surefire plugin and create an additional HTML report; based on that information you could just visualize the HTML report in Jenkins (avoiding writing a Jenkins plugin).
One last approach, which I worked with a long time ago, is a Maven plugin that again parses the Surefire test results and adds them to some database (like MongoDB or similar). It can be invoked only in Jenkins, so that Jenkins can supply some additional information like the build number.
Later on you could roll your own UI that queries MongoDB and gives statistics over builds (like: the previous build had 10 flaky tests, this build has 8, and so forth).
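A minimal sketch of that parsing step in plain Java (it assumes the Surefire extended XML format with flakyFailure elements; the class name and paths are illustrative, and the database write is left out):
import java.nio.file.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class FlakyReportScanner {
    public static void main(String[] args) throws Exception {
        Path reportsDir = Paths.get("target/surefire-reports");
        int flakyReruns = 0;
        // scan every surefire report file in the directory
        try (DirectoryStream<Path> xmlFiles =
                Files.newDirectoryStream(reportsDir, "TEST-*.xml")) {
            for (Path xml : xmlFiles) {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(xml.toFile());
                // each <flakyFailure> marks a rerun that eventually passed
                flakyReruns += doc.getElementsByTagName("flakyFailure").getLength();
            }
        }
        // here you would push the number to a database instead of printing it
        System.out.println("Flaky reruns found: " + flakyReruns);
    }
}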

Code coverage for FitNesse

I have just inherited an old Java codebase (around 10-15 years old). It does not have any automated test coverage, or at least none that the contemporary world knows about. I am planning to write some FitNesse scripts around it, to begin with.
I know about Concordion etc., and I have my reasons for picking FitNesse; I will keep away from that, since it is not the topic of this question.
My question is: I don't know of a quick way to measure the code coverage achieved by the FitNesse tests as I write them. I know that JaCoCo (or similar libraries) should be able to report it, but I just can't figure out exactly how.
So, if any of you have worked on FitNesse test scripts and have managed to have Jenkins report on the coverage achieved by the scripts, please help.
Thanks.
I have not done this myself, because I tend to use FitNesse to test deployed applications (and I cannot measure code coverage in the 'real installation'). But I believe this should be fairly straightforward if you run your FitNesse tests as part of a JUnit run on Jenkins (or any other build server) that measures code coverage.
To have your FitNesse tests executed as part of a JUnit run: create a Java class annotated with @RunWith(FitNesseRunner.class) and give it a @Suite("MyPageOrSuite.That.IWantToRun") annotation to indicate which tests to run. This will execute the specified page(s) in the same Java process the JUnit run is using, so if that process is instrumented in some way to determine code coverage, the coverage of your FitNesse tests will be included in the report.
Sample JUnit test class, running FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable:
import fitnesse.junit.FitNesseRunner;
import org.junit.runner.RunWith;

@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("../target/fitnesse-results")
public class FitNesseRunnerTest {
}
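If the build is Maven-based, one way to instrument that JUnit run is the jacoco-maven-plugin; a sketch using JaCoCo's standard goals (not something the answer above prescribes):
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <!-- attaches the JaCoCo agent to the test JVM, so the FitNesse
         pages run by the JUnit class above are measured too -->
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- writes the coverage report, which Jenkins can then pick up -->
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>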

JUnit tests returning as Unrooted Tests

I've seen similar questions asked on this site, but this is a bit of a different scenario from the ones I have seen.
We have a PC client that executes JUnit 4 tests. However, we have a custom test runner that ships the JUnit 4 tests as JUnit 3 tests (using JUnit38ClassRunner) over Ethernet to a target system running a service that executes the tests with JUnit 3.8. The tests execute as intended; however, when the results are returned to the PC client, they are marked as Unrooted Tests. Is there a way to organize these tests as "non-unrooted" tests? It is somewhat difficult to sift through the failed results when they all come back in one group and we're not using Eclipse. Using JUnit 4 is not an option on the remote system, as the target embedded system uses Java 1.4.2, and this is not changing anytime in the near future. We really do not want to downgrade to JUnit 3.8 on the PC client side because of the @RunWith annotation, which would take us a little while to figure out how to re-implement.
Any assistance on this is appreciated, thanks in advance.
I wasn't able to find a solution for this in Eclipse, but using Ant to manage the JUnit execution generates XML reports, and I can view a summary in a web page by compiling those reports with the <junitreport> task.
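A minimal sketch of such an Ant target (directory names are illustrative):
<target name="test-report">
  <!-- merges the TEST-*.xml files produced by <junit> and renders HTML -->
  <junitreport todir="build/reports">
    <fileset dir="build/reports">
      <include name="TEST-*.xml"/>
    </fileset>
    <report format="frames" todir="build/reports/html"/>
  </junitreport>
</target>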

Run JUnit Tests through web page

We would like to have a set of tests as part of our web application. The tests will be used for analyzing the health of the application, so a support person or a scheduler can run them to see whether the application itself and the various required remote systems are available.
I have seen this done using some kind of web-based JUnit frontend; it allowed running the tests and reported the results as HTML. This would be great because the developers know JUnit, but I couldn't find the library on the intertubes.
Where can I find a library doing this?
You can use some free services to verify the availability of your system. Here are two that I've used:
mon.itor.us
pingdom
Another thing you can take a look at is JMeter, but it does not have a web UI.
Original answer:
Perhaps you mean functional tests (that can be run through JUnit). Take a look at
Selenium - it's a web functional testing tool.
(Note that these are not unit tests. They don't test individual units of the code. Furthermore unit tests are executed at build time, not at runtime.)
Bozho is correct that these are not unit tests, but I have done something similar. At my company I am not the one who ultimately deploys these things to our test or production environments. During development I create a couple of servlets that test things like whether the application can get a valid database connection, whether it can hit our AD server, etc. Each one basically prints out a message and indicates success or failure.
That way, when the code is deployed to one of our environments, I can have the person deploying it hit the URL and make sure everything comes back OK. When I get ready to do the final deployment, I just remove the servlet config.
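A minimal sketch of that kind of health-check servlet (the JNDI name and messages are illustrative, not from the original answer):
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class HealthCheckServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        try {
            // hypothetical JNDI name; adjust to your container's config
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/appDb");
            // opening (and immediately closing) a connection proves the DB is reachable
            try (Connection ignored = ds.getConnection()) {
                out.println("DB connection: OK");
            }
        } catch (Exception e) {
            resp.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            out.println("DB connection: FAILED - " + e.getMessage());
        }
    }
}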
If you already have a set of tests composed and ready to run, then Hudson can run those tests on a schedule and report on the results.
Update: If you're looking for a tool to check your servers and applications every few minutes for availability check out Nagios.
Maybe you mean some kind of acceptance test tool. If so, have a look at FitNesse.
What you're probably looking for is CruiseControl.NET - it combines with NUnit/JUnit etc. to make an automated testing framework with HTML reporting tools and a tray app for your desktop as well. I actually just downloaded it again an hour ago for a new role - it's really good.
It can be used to run anything from unit tests to getting files from source control, to kicking off compiler builds or rebooting servers (when used with NAnt - a .Net build tool).
You should look for a Continuous Integration tool like Jenkins.
