How to Suppress Karate Failed Features Logs [duplicate]

I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a bamboo build that runs our karate repository of features and scenarios. At the end it produces nice cucumber html reports. On the "overview-features.html" page I would like an option added to the top right, alongside "Features", "Tags", "Steps" and "Failures", that says "Excluded Fails" or something like that. When clicked, it would show exactly the same information as overview-features.html, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for 6 months to a year. We've tagged them with a specific tag, "#bug=abc-12345". I want them muted/excluded from the cucumber report produced at the end of the bamboo build so I can quickly look at the number of passed features/scenarios and see whether it's 100% or not. If it is, great, that build is good. If not, I need to look into it further as we appear to have some regression. With these scenarios that are expected to fail (and will continue to fail until they're resolved) included, it is very tedious and time consuming to go through all the individual feature file reports, look at the failing scenarios, and then work out why each one failed. I don't want them removed completely, because when they start to pass I need to know so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?

Karate 1.0 has overhauled the reporting system, with the following key changes:
after the Runner completes, you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports: in theory you can implement a new SuiteReports, and the Runner has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
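For example, building on (a), post-processing the Results after the Runner completes could let you count only the failures that are not tagged as known bugs. This is only a rough sketch: the @bug tag name and the ScenarioResult / Scenario accessors used below are assumptions, so verify them against RetryTest.java and the Javadoc of your Karate version.

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.core.ScenarioResult;

import java.util.List;
import java.util.stream.Collectors;

public class KnownBugAwareRunner {

    public static void main(String[] args) {
        // run the suite as usual; outputCucumberJson keeps the normal cucumber html reports
        Results results = Runner.path("classpath:features")
                .outputCucumberJson(true)
                .parallel(5);

        // keep only failed scenarios that do NOT carry the known-bug tag
        // NOTE: getScenario().getTagsEffective().contains("bug") is an assumption - check your Karate version
        List<ScenarioResult> unexpectedFailures = results.getScenarioResults()
                .filter(ScenarioResult::isFailed)
                .filter(sr -> !sr.getScenario().getTagsEffective().contains("bug"))
                .collect(Collectors.toList());

        unexpectedFailures.forEach(sr -> System.err.println("unexpected failure: " + sr.getScenario().getName()));

        // fail the build only when something other than a known bug has failed
        if (!unexpectedFailures.isEmpty()) {
            System.exit(1);
        }
    }
}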

Is it correct if requirement verification happens in JUnit tests

I have some logic that checks a Java file's content and verifies that it has a comment saying who the author (not necessarily the creator) of that file is - a special requirement of the project.
I wrote a unit test using JUnit to check the logic and it works fine.
I want all the .java files to adhere to that standard, and I want the build to fail if at least one of them does not comply.
So far my JUnit test method does the following:
Read all the .java file contents in the application
For each file's content, check whether it contains a comment in the standard format
Fail the test case if at least one of them has no comment in that format (so that eventually the build will fail)
Is that a correct approach? It will serve the purpose, but is it good practice to use a JUnit test to do this kind of verification work?
If not, what kind of approach should I use to analyze all the files at build time (using my logic - I have an Analyzer.java file with the logic) and have the build succeed only if all files comply with the required standard?
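To make it concrete, the test currently looks roughly like the sketch below (the source directory, the class name and the comment pattern are just placeholders for my real logic):

import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.junit.Test;

public class AuthorCommentTest {

    // placeholder: the real check lives in Analyzer.java and is more involved
    private static final Pattern AUTHOR_COMMENT = Pattern.compile("@author\\s+\\S+");

    @Test
    public void everyJavaFileDeclaresAnAuthor() throws IOException {
        try (Stream<Path> paths = Files.walk(Paths.get("src/main/java"))) {
            List<Path> offenders = paths
                    .filter(p -> p.toString().endsWith(".java"))
                    .filter(p -> !hasAuthorComment(p))
                    .collect(Collectors.toList());
            assertTrue("Files missing the author comment: " + offenders, offenders.isEmpty());
        }
    }

    private boolean hasAuthorComment(Path file) {
        try {
            return AUTHOR_COMMENT.matcher(new String(Files.readAllBytes(file))).find();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}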
EDIT :
The code comment check is only one verification. There are several checks that need to be done (e.g. variable names should end with a given suffix, patterns for using some internal libraries, etc.). All of those scenarios are handled in that logic (Analyzer.java). I just need to read all the Java file contents and use that logic to verify them.
It's safe to say I have a Java library with a method that accepts a file name, check(fileName), which analyzes the file and returns true if it passes some internal logic. If it returns false, the build should fail. Since I need to fail the build if something is not right, I'm using it in a JUnit test to check all the .java files in the code base.
If this can be done by a static code analysis tool (as long as it can use the logic I have), that is also acceptable. But I don't know whether this kind of custom verification is supported by existing static code analyzers.
Is the approach I'm using correct? ... is it good practice to use a JUnit test to do some verification work
No. Unit testing is for checking the integrity of a code unit, ensuring the unit's behavior works properly. You should not be checking comments/documentation in unit tests.
Since I need to fail the build if something is not right..
You need to add more steps to your build process, more specifically a static analysis step.
Unit testing is considered a build step, along with compilation, execution and deployment. Your project requires an extra step, which brings me to the following...
You could use a build tool, such as Apache Ant, to add steps to your project's build. Although static analysis doesn't come bundled (Ant is simply a build automation tool), it does allow you to ensure that the build fails if a custom build step fails.
With that said, you could add a step that triggers a static analysis program. This page contains an example of using Ant to create multiple build steps, including static code analysis and bug checking. You could even create your own analyzer to use.
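As a rough sketch of that last idea (the class and paths below are hypothetical, not part of any existing tool): wrap your Analyzer logic in a small command-line class that exits with a non-zero status when any file fails a check, and invoke it from Ant with a <java> task using failonerror="true", so the build fails in its own step rather than inside the unit tests.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// hypothetical command-line wrapper around your existing Analyzer.check(fileName) logic
public class AnalyzerMain {

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src/main/java");
        try (Stream<Path> paths = Files.walk(sourceRoot)) {
            List<Path> violations = paths
                    .filter(p -> p.toString().endsWith(".java"))
                    .filter(p -> !Analyzer.check(p.toString())) // your check(fileName) method
                    .collect(Collectors.toList());
            if (!violations.isEmpty()) {
                violations.forEach(p -> System.err.println("violates project standard: " + p));
                System.exit(1); // non-zero exit + failonerror="true" fails the Ant build
            }
        }
    }
}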
For more information about build tools & automation:
StackOverflow: What is a Build Tool?
Wiki: Software Build > Build Tools
Wiki: Build Automation
You can use Checkstyle for that.
The build can be failed.
It checks comments; this kind of check is called static code analysis.
To define the format for an author tag or a version tag, set the property authorFormat or versionFormat, respectively, to a regular expression.
If it were my project, I would not place this kind of verification in the standard src/test/java and run it as part of the application's test suite.
The unit test suite should test production logic, not perform 'coding style' checks.
A good place for this kind of verification would be, for example, a pre-commit hook on the git repository, where all the checking scripts (or style-checking tools) would be invoked.
You can put everything in one place, but as far as I can see, separation of concerns has been the leading trend in all areas of software development for quite a while, and there is a good point to that.

Creating a Java reporting project -- would like to use JUnit and Ant but not sure how

I'm interested in generating reports for a certain group of (non-software) engineers in my company using JUnit. Unlike typical JUnit testing, where you are ensuring objects, methods, etc. are bug free, I want to use JUnit to confirm whether results in large log files are within range and produce reports on them.
My background: Experienced with Java. I am familiar with JUnit 4, I have spent the last hour or two looking through the Ant documentation and I think I have a strong idea of how it works (I just don't know all the XML commands), and I use Eclipse religiously. Specifically Eclipse Juno (latest).
My goal is to create a slew of testcases that assert if the values in the logs are within the right range. The test class can then be run over and over on every single log file. After each run, the results will be added to an HTML file.
I am aware of the JUnit Report features in Ant but I am having a lot of trouble putting all the pieces of the puzzle together properly. I know I can hack my way through this project quite easily, but I want to do this the right way.
Can anyone point me in the right direction on how to:
Make the right build.xml files
Set up JUnit classes/suites/runners so that it runs over and over for a given list of logs
Set up Ant so it outputs HTML/XML, and point it to the right style-sheet so the report can be opened in IE/Firefox
How to make the output file
Override features so the reports are custom.
What to override for post-processing. (So I can convert the output HTML into a PDF, etc)
How to create a standalone executable that will do all this automatically - Perhaps incorporating a little Swing if the user doesn't supply an input manually.
How to properly loop through many tests (currently I am just making a main function and doing the following):
code:
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public static void main(String[] args) {
    JUnitCore junit = new JUnitCore();
    RunListener listener = new RunListener(); // no-op base listener; subclass it to react to test events
    junit.addListener(listener);
    // run the log-checking test class programmatically
    Result result = junit.run(GearAndBrakeFaultLogReports.class);
    for (Failure f : result.getFailures()) {
        System.out.println(f.toString());
        System.out.println(f.getDescription());
        System.out.println(f.getTestHeader());
        System.out.println(f.getException());
        System.out.println(f.getMessage());
    }
    System.out.println("Done.");
}
I have a feeling that is the "hacky" way of doing it, and that if I knew how to plug into Ant properly, it would run my tests, generate the XML/HTML, and I would have the output I want.
I imagine I will have more questions as this project develops, but I think getting over the initial hump will be very good for me! I know there are a lot of questions, so any help is appreciated and I'll gladly bump points :) My hope is that someone familiar with these tools can answer all of this simply by pointing me to web sites, functions, or an example project that does something similar.
For creating a PDF from JUnit test results:
http://junitpdfreport.sourceforge.net/managedcontent/
As a unit testing framework, JUnit's reports are very specific to software engineering. Although you might be able to customize it for a non-unit-testing scenario, it will take a lot of customizing. Unlike some other testing frameworks (such as TestNG), JUnit doesn't provide the ability to make your own report generator. TestNG, however, can run your JUnit tests, produce a JUnit-style report in addition to its own, and let you create custom reports by providing an implementation of the org.testng.IReporter interface.
http://testng.org/doc/documentation-main.html
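A minimal sketch of such a reporter might look like the following (the class name and output file are made up; only the org.testng.IReporter contract and the suite/context accessors come from TestNG). It would then be registered as a listener in testng.xml or on the Ant/TestNG task.

import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.ISuiteResult;
import org.testng.ITestContext;
import org.testng.xml.XmlSuite;

// hypothetical custom reporter: writes one summary line per test context to a plain-text file
public class LogRangeReporter implements IReporter {

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
        try (PrintWriter out = new PrintWriter(new File(outputDirectory, "log-range-summary.txt"))) {
            for (ISuite suite : suites) {
                for (ISuiteResult suiteResult : suite.getResults().values()) {
                    ITestContext ctx = suiteResult.getTestContext();
                    out.printf("%s: passed=%d, failed=%d, skipped=%d%n",
                            ctx.getName(),
                            ctx.getPassedTests().size(),
                            ctx.getFailedTests().size(),
                            ctx.getSkippedTests().size());
                }
            }
        } catch (IOException e) {
            throw new RuntimeException("could not write report", e);
        }
    }
}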
Having written reports in Java for many years now, I would strongly recommend trying out a reporting tool such as JasperReports and the iReport tool. Jasper Reports can process custom data sources such as your log files and produce reports in many formats, including XML, HTML, and PDF.
http://community.jaspersoft.com/project/ireport-designer
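If you go that route, a rough idea of the flow (compile a .jrxml template, fill it from your parsed log data, export to PDF) is sketched below - the template name, the LogEntry bean and the sample values are placeholders, while the compile/fill/export managers are standard JasperReports classes:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;
import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

public class LogReportGenerator {

    // hypothetical bean holding one parsed value from a log file
    public static class LogEntry {
        private final String signal;
        private final double value;

        public LogEntry(String signal, double value) {
            this.signal = signal;
            this.value = value;
        }

        public String getSignal() { return signal; }
        public double getValue() { return value; }
    }

    public static void main(String[] args) throws Exception {
        // in reality these would come from the log-file parser
        List<LogEntry> entries = Arrays.asList(new LogEntry("brake_pressure", 42.0));

        // compile the report template, fill it with the log data, export to PDF
        JasperReport report = JasperCompileManager.compileReport("log-report.jrxml"); // placeholder template
        Map<String, Object> params = new HashMap<>();
        JasperPrint filled = JasperFillManager.fillReport(report, params, new JRBeanCollectionDataSource(entries));
        JasperExportManager.exportReportToPdfFile(filled, "log-report.pdf");
    }
}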

clover: how does it work?

I am evaluating clover currently and wonder how to use it best. First I'd like to understand how it works conceptually.
1) What does instrumentation mean? Are the test calls attached to the implementation's statements?
2) How is this done? Are the tests actually executed with some fancy execution context (similar to JRebel, for example)? Or is it more like static analysis?
3) After a "clover-run", some DB is saved to disk, and based on this, reports are generated, right? Is the DB format accessible? I mean, can I launch my own analysis on it, e.g. using my own reporting tools? What information does the DB contain exactly? Can I see the mapping between test and implementation there?
4) Are there other tools that find the mapping between test and implementation? Not just the numbers, but which test actually covers a line of code ...
Thanks, Bastl.
How is this done? Are the tests actually executed with some fancy execution context (similar to JRebel e.g.) for this? Or is it more like static analysis?
During code instrumentation, Clover detects which methods are test methods (by default it recognizes JUnit 3/4 and TestNG). Such methods get additional instrumentation instructions. In short, entering a test method will usually instantiate a dedicated coverage recorder which measures coverage exclusively for that test. More information about the per-test recording strategies available in Clover:
https://confluence.atlassian.com/display/CLOVER/Clover+Performance+Results
https://confluence.atlassian.com/display/CLOVER/About+Distributed+Per-Test+Coverage
After a "clover-run", some DB is saved to disk, and based on this, reports are generated right?
A Clover database (clover.db) contains information about code structure (packages, files, classes, methods, statements, branches); it also has information about test methods. There are also separate coverage recording files (produced at runtime) containing information about the number of "hits" of a given code element. Clover supports both global coverage (i.e. for the whole run) and per-test coverage (i.e. coverage from a single test).
More information is here:
https://confluence.atlassian.com/display/CLOVER/Managing+the+Coverage+Database
Is the DB-Format accessible?
The API is still in development (https://jira.atlassian.com/browse/CLOV-1375), but it is already possible to get basic information. See:
https://confluence.atlassian.com/display/CLOVER/Database+Structure
for more details about DB model and code samples.
But the question is: do you really need to manually read this DB? You wrote that:
Can I see the mapping between test and implementation there ?
Such a mapping is already provided by Clover - in the HTML report, for example, clicking on a source line pops up a list of the test methods hitting that line.
PS: I'm a Clover developer at Atlassian, feel free to contact me if you have any questions.
What does instrumentation mean?
Additional code is woven in with your code.
Are the test-calls attached to implementation's statements?
I am not sure what you mean, but it could be instructions or calls to methods. Trivial methods will be inlined by the JIT at runtime.
How is this done?
There are many ways to do it, but often the java.lang.instrument Instrumentation class is used to intercept classes as they are being loaded, and a library like ObjectWeb's ASM is used to manipulate the bytecode.
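A bare-bones sketch of that mechanism (not Clover's actual code - just the standard java.lang.instrument hooks, with a made-up agent class name):

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// minimal java agent: packaged in a jar whose manifest declares Premain-Class: CoverageAgent,
// and registered on the JVM command line with -javaagent:agent.jar
public class CoverageAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain, byte[] classfileBuffer) {
                // a real coverage tool would rewrite classfileBuffer here (e.g. with ASM)
                // to insert "this line/branch was hit" counters; this sketch just logs and leaves it alone
                System.out.println("loading: " + className);
                return null; // null means "no transformation"
            }
        });
    }
}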
Are the tests actually executed with some fancy execution context
The context counts which lines have been executed.
Or is it more like static analysis ?
No, it is based on what is called.
After a "clover-run", some DB is saved to disk, and based on this, reports are generated right? Is the DB-Format accessible?
You had best ask the producers of clover as to the content of their files.
Are there other tools that find the mapping between test and implementation? Not just the numbers, but which test, actually covers a line of code ...
There are many code coverage tools available, including EMMA, JaCoCo, and Cobertura; IDEA has one built in.

Developing Jenkins post-build plugin

I am currently developing a simple plugin that retrieves results from a Jenkins build. I am extending Notifier and using build.getResults() to get the information. However, when I upload my plugin, I can't set it as a post-build action.
When I run my builds, they break on build.getResults() since I am trying to get the results while the build is still running.
What can I do to properly get the build result?
The best thing is to look at existing plugins which use the Notifier extension point (click to expand the list of implementing plugins on that page).
Check that you have the Descriptor implementation (inner) class, as well as config.jelly. Also check the jenkins.out and jenkins.err logs for any exceptions (such as a malformed config.jelly).
Edit: Actually, the Notifier subclass of this plugin looks really simple as Notifiers go: https://wiki.jenkins-ci.org/display/JENKINS/The+Continuous+Integration+Game+plugin - see especially its GamePublisher.java and corresponding config.jelly, and its GameDescriptor.java, which has been made a full outer class (often the descriptor is an inner class). Also, if you want options in Jenkins' global configuration, you need a global.jelly, but if you don't have such options, you can just leave that out (unlike config.jelly, which you must have for a Notifier even if it is empty, as it is here).
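For reference, a skeletal Notifier looks roughly like the sketch below (the class names are hypothetical; @Extension, BuildStepDescriptor and the rest come from the Jenkins plugin API). Because a Notifier runs after the build steps have completed, build.getResult() should already be populated by the time perform() is called.

import hudson.Extension;
import hudson.Launcher;
import hudson.model.AbstractBuild;
import hudson.model.AbstractProject;
import hudson.model.BuildListener;
import hudson.tasks.BuildStepDescriptor;
import hudson.tasks.BuildStepMonitor;
import hudson.tasks.Notifier;
import hudson.tasks.Publisher;
import org.kohsuke.stapler.DataBoundConstructor;

// hypothetical post-build notifier that simply logs the build result
public class ResultReporterNotifier extends Notifier {

    @DataBoundConstructor
    public ResultReporterNotifier() {
    }

    @Override
    public boolean perform(AbstractBuild<?, ?> build, Launcher launcher, BuildListener listener) {
        // the build has finished by now, so the result is available
        listener.getLogger().println("Build finished with result: " + build.getResult());
        return true;
    }

    @Override
    public BuildStepMonitor getRequiredMonitorService() {
        return BuildStepMonitor.NONE;
    }

    // the Descriptor (plus a config.jelly next to this class) is what makes the
    // plugin show up under "Add post-build action"
    @Extension
    public static final class DescriptorImpl extends BuildStepDescriptor<Publisher> {

        @Override
        public boolean isApplicable(Class<? extends AbstractProject> jobType) {
            return true;
        }

        @Override
        public String getDisplayName() {
            return "Report build result";
        }
    }
}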
As a general note, it can be really annoying when things do not work and you do not get any error: your stuff simply is not displayed by Jenkins. If you just want to get things to work for you, using a Groovy build step might be easier, but if you want things to work for others, then doing a decent full plugin reduces support requests.
Since this sounds so simple, are you sure you need a plugin? Take a look at using a Groovy Postbuild step instead; they're much easier to write. There are some good usage examples in the link. If you decide you really need a plugin, see if you can extend an existing one rather than writing your own; it's an easier way to understand the ins and outs of Jenkins plugin writing.
