I have used Cucumber with Jenkins in the past and jenkins-cucumber-jvm-reports-plugin-java is fantastic. All I need to do is generate Cucumber JSON output to get a beautiful report.
My new project uses Bamboo and I could not find anything similar, or at least something close. Has anyone had experience using Cucumber with Bamboo? I could use some helpful pointers. As this exercise is a POC, either Ruby or Java would be fine.
I can see that this was asked a long time ago. Not sure whether it is still valid but here is my experience with the same problem.
In my project we are using Cucumber with Bamboo, and report generation is also vital for us.
We found that the Cucumber Report Plugin for Bamboo was not sufficient for us, so we moved to the Cucumber Reporting tool from MasterThought. As far as I can see, this is the exact same library that jenkins-cucumber-jvm-reports-plugin-java uses.
To use it in Bamboo we call it directly through Maven after our tests have been executed during the build. There is also a maven-cucumber-reporting mojo that can be used if you want to configure it through Maven (we haven't used it because at that point it supported only one JSON input and we had hundreds of JSON files, but it's possible that this functionality has been added in the meantime).
We set the generated HTML output as an artifact of our build, so we can access the nice reports in Bamboo for every build.
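For reference, here is a minimal sketch of how this can be wired in through the maven-cucumber-reporting mojo; the parameter names have changed between plugin versions (older releases took a single JSON file), so treat the configuration keys below as assumptions and check the documentation for your version:

<plugin>
  <groupId>net.masterthought</groupId>
  <artifactId>maven-cucumber-reporting</artifactId>
  <executions>
    <execution>
      <id>cucumber-reports</id>
      <!-- run after the tests have written their cucumber JSON output -->
      <phase>verify</phase>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <projectName>my-project</projectName>
        <!-- the HTML report ends up here; expose this directory as a Bamboo artifact -->
        <outputDirectory>${project.build.directory}/cucumber-reports</outputDirectory>
        <!-- pick up every cucumber JSON file produced by the test run -->
        <inputDirectory>${project.build.directory}</inputDirectory>
        <jsonFiles>
          <param>**/*.json</param>
        </jsonFiles>
      </configuration>
    </execution>
  </executions>
</plugin>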
I hope it helps.
I'm trying to set up the integration between JUnit, Jenkins and TM4J (Jira) using this tutorial: https://support.smartbear.com/tm4j-cloud/docs/api-and-test-automation/junit-integration.html.
The problem is that the tm4j_result.json file is NOT generated, although the result from the Cucumber integration is being generated successfully.
Is it possible to generate BOTH Cucumber and JUnit reports?
This is the Jenkins reference, which contains the exact same file pattern example, and the Bitbucket code in the reference shows only the Adaptavist library and a Surefire listener.
To answer your question, "Is it possible to generate BOTH Cucumber and JUnit reports?":
I would say the question does not really arise, since from your automated test project you will either work with BDD, and hence use the Cucumber file, or work without BDD, in which case you fall back on the JUnit file.
Should you need both, a solution will be required at the pipeline level, and a double test execution command will be necessary to keep things reasonably simple; see the sketch below.
To put it simply, there is no mixing of BDD and non-BDD in the same test execution command!
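For illustration only, here is a minimal sketch of what a double execution could look like with two Maven profiles; the profile ids and include patterns are assumptions, and the TM4J/Adaptavist listener has to be registered as described in their documentation:

<!-- run as two separate commands, e.g. mvn test -Pbdd-tests and mvn test -Pplain-junit-tests -->
<profiles>
  <profile>
    <id>bdd-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <includes>
              <!-- only the Cucumber runner(s), which produce the cucumber JSON -->
              <include>**/*CucumberRunner.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
  <profile>
    <id>plain-junit-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <includes>
              <!-- plain JUnit tests; register the TM4J/Adaptavist listener here
                   per the SmartBear documentation so tm4j_result.json gets written -->
              <include>**/*Test.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>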
I found out about the rerunFailingTestsCount feature in Surefire (commit). When a test fails, the runner reruns it up to a specified number of times. If any of these reruns succeeds, the test is considered PASSED, but FLAKY.
This feature implements an extension to the JUnit XML report format, adding extra attributes to the test result.
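For reference, enabling it is a one-line Surefire configuration; a minimal sketch (the retry count of 2 is just an example):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- rerun each failing test up to 2 times; a test that passes on a rerun
         shows up as flaky in the extended JUnit XML report -->
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>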
How can I configure Jenkins CI to meaningfully show this newly gained test data?
I would like to be able to monitor my flaky tests, so I can maintain a general overview of what's going on and later prioritize fixing the ones that slow the build down the most.
A build containing only flaky tests should be easily distinguishable from one that contained some failed tests, and from one containing only passing tests.
It looks like you've found almost all the answers by yourself :)
The only thing that IMO is missing is some Jenkins plugin that can indeed show (visualize) the flaky tests based on the Surefire reports.
There is indeed such a plugin called Flaky Test Handler.
Disclaimer: I haven't tried it myself, but it seems it can do the job, and it would be my best bet for solving the issue.
An alternative would be to create a Jenkins plugin yourself, but that looks like a lot of hassle.
Yet another approach I can think of is creating a Maven plugin that parses the Surefire results and creates some additional HTML report; based on that, you could just visualize the HTML report in Jenkins (avoiding writing a Jenkins plugin).
One last approach, which I worked with a long time ago, is a Maven plugin that again parses the Surefire test results and adds them to some database (like MongoDB or similar). It can be invoked only in Jenkins, so that Jenkins can supply some additional information like the build number.
Later on you could roll your own UI that queries MongoDB and gives statistics over builds (e.g. the previous build had 10 flaky tests, this build has 8, and so forth).
I am working for a company that is using multiple Maven projects/modules to create what will eventually become one product. To help me explain, imagine a file structure similar to the one below:
- Parent Directory
- Project_1
- /src/
- /target/
- POM.xml
- Project_2
- /src/
- /target/
- POM.xml
Along the way we are using JUnit to unit test our code, and it is an important contractual requirement that we achieve above a certain percentage threshold of code coverage with our tests.
We are using JaCoCo to generate coverage reports in the form of an HTML website. JaCoCo itself is proving invaluable, but one major problem is that each project gets its own site under its /target/site/jacoco/ directory.
I have done some investigating myself and found that, unless I am mistaken, JaCoCo by default does not support merging multiple Maven projects into a single JaCoCo report.
So my question is: can anybody suggest an alternative solution, something that will allow us to combine multiple reports on a single web server?
One option we have is to move all sites into individual folders on a web server and then have an index page linking them together, but it's "clumsy" at best. For example:
- Web Server
- index.html
- Project_1
- (Generated report files)
- Project_2
- (Generated report files)
Any better suggestions would be greatly appreciated.
JaCoCo does not provide a simple way to do this as of today. However, they do specify three alternatives that are described here: https://github.com/jacoco/jacoco/wiki/MavenMultiModule
Their most suitable approach involves creating a separate reporter module that declares dependencies on all the other modules (referred to in the GitHub article as "Strategy: Module with Dependencies").
The reporter module uses the jacoco:report-aggregate (http://www.eclemma.org/jacoco/trunk/doc/report-aggregate-mojo.html) Maven goal to fetch all the individual coverage data and bind it together into one report.
An example project:
https://prismoskills.appspot.com/lessons/Maven/Chapter_06_-_Jacoco_report_aggregation.jsp
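As an illustration, here is a minimal sketch of such a reporter module's pom.xml; the group id, module names and phase binding are placeholders:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <artifactId>coverage-report</artifactId>

  <!-- depend on every module whose coverage should end up in the aggregated report -->
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>project_1</artifactId>
      <version>${project.version}</version>
    </dependency>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>project_2</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>report-aggregate</id>
            <phase>verify</phase>
            <goals>
              <!-- merges the execution data of all dependency modules into one report -->
              <goal>report-aggregate</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>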
There are many different approaches you can go with.
First of all, you might want to consider something like Sonar: you compile all your modules and run a Sonar analysis that inspects coverage among other things. Sonar takes the results and uploads them to the Sonar server (with its database and everything), so that you'll be able to see in the UI what went wrong.
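A minimal sketch of what that could look like with the SonarScanner for Maven; the host URL and project key are assumptions, the coverage property name depends on your SonarQube version, and the analysis is then triggered with something like mvn verify sonar:sonar:

<properties>
  <!-- assumed values; point these at your own SonarQube instance -->
  <sonar.host.url>http://sonarqube.example.com:9000</sonar.host.url>
  <sonar.projectKey>com.example:parent</sonar.projectKey>
  <!-- where recent SonarQube versions expect the JaCoCo XML report -->
  <sonar.coverage.jacoco.xmlReportPaths>
    ${project.build.directory}/site/jacoco/jacoco.xml
  </sonar.coverage.jacoco.xmlReportPaths>
</properties>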
Another approach is just rolling your own Maven plugin (assuming you're using Maven). The report generated by JaCoCo is also available as XML, if I'm not mistaken, so it can be parsed pretty easily. One could write a Maven plugin that identifies all such reports, parses them and provides some unified view.
Yet another approach is to fail the whole build when coverage doesn't reach some threshold. I know it doesn't answer your question directly, but if you do it like this you more or less guarantee a minimum level of coverage (which can be raised from time to time at the project level); see the sketch below.
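For example, a minimal sketch using JaCoCo's check goal (the 80% line-coverage threshold is only an illustration, and the usual prepare-agent execution is assumed to be configured as well):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>check-coverage</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <!-- fail the build if overall line coverage drops below 80% -->
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>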
I have a Maven-based Java webapp that has a bunch of unit tests, integration tests, code coverage reports, etc., plus some more technical details.
I would like to generate project information that aggregates all of the above in one place so that it can be seen by others.
What are some of the tools available to achieve this?
- Remote server upload can be done via HTTP/DAV/SCP, etc.
- You can maintain a roadmap via Markdown/APT or other formats.
- Current open issues (maybe from Jira) via the maven-changes-plugin.
- Technical details such as tools etc. can be documented by the above as well.
- Unit test results can be reported via the usual Maven site generation (Surefire reporting); code coverage via Cobertura the old way, or via JaCoCo (see the sketch below).
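A minimal sketch of a reporting section that pulls the test results and coverage into the generated site (plugin versions omitted; the JaCoCo report assumes its agent is configured in the build section):

<reporting>
  <plugins>
    <!-- standard project information pages -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-project-info-reports-plugin</artifactId>
    </plugin>
    <!-- test result pages generated from the Surefire/Failsafe XML -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
    </plugin>
    <!-- code coverage pages from JaCoCo -->
    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <reportSets>
        <reportSet>
          <reports>
            <report>report</report>
          </reports>
        </reportSet>
      </reportSets>
    </plugin>
  </plugins>
</reporting>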
We're adding unit tests to previously untested code, as the need arises to modify that code. It's difficult to get useful coverage metrics since the majority of code in any package is known to be untested.
Are there any tools available to measure differential code coverage, that is, the percent of code modified in a given changeset which was covered by a unit test?
Use pycobertura. It's a command-line tool to prevent code coverage regression by diffing two coverage reports. It tells you whether your new code is better or worse than the previous version, coverage-wise.
$ pycobertura diff ./master/coverage.xml ./myfeature/coverage.xml
It's language agnostic since it just relies on the Cobertura report (XML file) generated by your testing/coverage tool.
Pycobertura can also generate HTML reports which fit nicely in CI/CD tools such as Jenkins.
https://github.com/aconrad/pycobertura
Continuous integration tools like Jenkins let you keep a history of test coverage and show you a graph that includes a coverage trend compared to previous builds. Example: Cobertura Jenkins Plugin
There is a Gradle plugin that computes diff code coverage:
https://github.com/form-com/diff-coverage-gradle
All you need is to provide a diff file to the plugin, or you can use the git diff tool as shown in the example.
Take a look at Sonar, a really good tool for analyzing overall application quality and coverage.
I recently did exactly that, using JaCoCo and the ConQAT framework for code analysis. The approach is as follows:
- Load the source code from the baseline version and the head version (and potentially intermediate ones, if you have also run tests in between)
- Compare the program history method by method to determine where changes happened
- Annotate coverage information to each tested revision
- Search for methods that have not been covered since they last changed
There is also a blog post containing a more detailed description, including visualizations and more advanced topics such as refactoring detection, to identify only the changes worth testing.
You can use diff-test-coverage for this. diff-test-coverage is a Node.js command-line tool that filters test coverage based on a (source control) diff.
It works by filtering the coverage report information, so it can report the coverage for just the new code.
It supports both Git and Mercurial and all common coverage report formats.