I have been writing some tests using Groovy/Spock and I need some decorator/logic that runs certain tests only based on a variable's value.
I was reading about @IgnoreIf, which is the perfect option as it allows me to write something along the lines of
@IgnoreIf(value = { !enabled }, reason = "Feature flag not enabled")
The problem I have is that the reason argument was only released in Spock 2.1, and my company is using 1.3 due to some major issues whilst migrating to v2.
Is there any other option I could use to achieve the same result, bearing in mind that, at the end of the day, I want to see skipped tests in the pipeline, but with a reason why?
Thanks
The problem I have is that the reason argument was only released in Spock 2.1, and my company is using 1.3 due to some major issues whilst migrating to v2.
My recommendation is to fix those issues and upgrade instead of investing a lot of work to add a feature to a legacy Spock version. Solve the problem instead of evading it.
I want to see skipped tests in the pipeline but with a reason why.
You did not specify what exactly you need. Is this about the console log or rather about generated test reports? In the former case, a cheap trick would be:
@IgnoreIf({ println 'reason: back-end server unavailable on Sundays'; true })
If you need a nice ignore message in the test report, it gets more complicated. You might need to develop your own annotation-based Spock extension which would react to your custom @MyIgnoreIf(value = { true }, reason = 'whatever') and also somehow hook into report generation, which might be possible, but I never tried.
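For what it's worth, the skipping half of such an extension is not much code in Spock 1.3. Below is an untested sketch (the annotation name, extension class and condition handling are all made up by me); unlike the real @IgnoreIf it does not set up a PreconditionContext delegate, so the condition closure has to be self-contained, and the reason still only reaches the console, not the report:

// MyIgnoreIf.java (hypothetical annotation)
import java.lang.annotation.*;
import groovy.lang.Closure;
import org.spockframework.runtime.extension.ExtensionAnnotation;

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@ExtensionAnnotation(MyIgnoreIfExtension.class)
public @interface MyIgnoreIf {
  Class<? extends Closure> value();
  String reason() default "unknown";
}

// MyIgnoreIfExtension.java (hypothetical extension)
import groovy.lang.Closure;
import org.spockframework.runtime.extension.AbstractAnnotationDrivenExtension;
import org.spockframework.runtime.model.FeatureInfo;

public class MyIgnoreIfExtension extends AbstractAnnotationDrivenExtension<MyIgnoreIf> {
  @Override
  public void visitFeatureAnnotation(MyIgnoreIf annotation, FeatureInfo feature) {
    try {
      // Groovy compiles the '{ ... }' literal into a Closure subclass with an (owner, thisObject) constructor
      Closure<?> condition = annotation.value()
          .getConstructor(Object.class, Object.class)
          .newInstance(null, null);
      Object result = condition.call();
      if (result instanceof Boolean && (Boolean) result) {
        // Spock 1.x has no skip-reason API, so the reason can only be logged
        System.out.println("Skipping feature '" + feature.getName() + "': " + annotation.reason());
        feature.setSkipped(true);
      }
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException("Cannot evaluate @MyIgnoreIf condition", e);
    }
  }
}

A spec could then use @MyIgnoreIf(value = { System.getenv('FEATURE_FLAG') == null }, reason = 'Feature flag not enabled'); getting that reason into the generated report is the part I have not tried.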
Besides, the reason that Spock 2.x can offer users skip reasons more easily is that its engine runs on top of the JUnit 5 platform, which has a Node.SkipResult.skip(String reason) method out of the box that Spock can delegate the skip reason to.
Spock 1.x, however, runs on top of JUnit 4, where there is no such infrastructure. Instead, you could use JUnit 4 assumptions, which are basically the equivalent of dynamic skip conditions with reasons, but that would be code-based rather than annotation-based, e.g.:
def dummy() {
  // the assumption fails (the condition is true), so the rest of the feature is skipped
  org.junit.Assume.assumeFalse('back-end server unavailable on Sundays', true)

  expect:
  true
}
The output would be something like this, and the rest of your test method would be skipped:
org.junit.AssumptionViolatedException: back-end server unavailable on Sundays
at org.junit.Assume.assumeTrue(Assume.java:68)
at org.junit.Assume.assumeFalse(Assume.java:75)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:234)
at de.scrum_master.stackoverflow.q74575745.AuthRequestInterceptorTest.dummy(AuthRequestInterceptorTest.groovy:26)
In IntelliJ IDEA, the method is actually reported as skipped.
I think that JUnit 4 assumptions are good enough for your legacy code base until you upgrade it to Spock 2.x and can enjoy all of the nice new syntax features and extensions.
Related
I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top right, next to "Features", "Tags", "Steps" and "Failures", that says "Excluded Fails" or something like that. That option, when clicked, would provide the exact same information that overview-features.html does, except that any scenario tagged with a special tag, for example @bug=abc-12345, is removed from the report and excluded from the numbers.
Why do I need this? We have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specific tag, "@bug=abc-12345". I want them muted/excluded from the Cucumber report that is produced at the end of the Bamboo Karate build, so I can quickly look at the number of passed features/scenarios and see whether it is 100% or not. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. With these scenarios that are expected to fail (and will continue to fail until they are resolved) included, it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios and then look into why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system with the following key changes.
after the Runner completes you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can implement a new SuiteReports in theory. And in the Runner, there is a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
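If touching the report renderer is too much, a cheap (if wasteful) stop-gap is to drive Karate from a small wrapper that runs the suite twice and compares the fail counts with and without the known-bug tag; the full run still tells you when a bugged scenario unexpectedly starts passing. This is only a sketch, not the pluggable-report approach above, and the classpath, tag name and thread count are assumptions on my part:

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

class KnownBugAwareRunner {

    public static void main(String[] args) {
        // full run, including scenarios tagged as known bugs
        Results all = Runner.path("classpath:features")   // assumed feature root
                .outputCucumberJson(true)
                .parallel(5);

        // second pass excluding anything tagged @bug (tag name assumed)
        Results withoutKnownBugs = Runner.path("classpath:features")
                .tags("~@bug")
                .outputCucumberJson(true)
                .parallel(5);

        System.out.println("failures incl. known bugs: " + all.getFailCount());
        System.out.println("failures excl. known bugs: " + withoutKnownBugs.getFailCount());

        // fail the build only when something outside the known-bug set regresses
        if (withoutKnownBugs.getFailCount() > 0) {
            throw new AssertionError(withoutKnownBugs.getErrorMessages());
        }
    }
}

Running a big repository twice is obviously expensive, so for anything beyond a quick check the post-processing / custom SuiteReports route described above is the cleaner answer.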
So I'm currently on a project where we are using the Java Play Framework 2.3.7 with Activator.
One of the things I like about Play is the hot-reloading feature: I can modify Java files, save, and the changes are compiled and refreshed at runtime.
How do I get that functionality for testing? I want to be able to run a single test with this hot-reloading feature, so that when I save, the tests for the given file (specified with test-only) are re-run automatically.
There is no such solution; however, you have two choices:
Use IntelliJ: To re-run the previous test(s) in IntelliJ, you press Shift + F10.
Write a watcher: Write a file/directory watcher such as this question/answer here, and then as soon as there are changes, the program re-runs the test command, such as sbt clean compile test or activator compile test (see the sketch after this list).
Little advice on auto-running tests: I don't know how complicated your application is, but as soon as you have a couple of injections here and there plus some additional concurrency, you do not want to run the tests every time you type a character.
Little advice on Test Driven Development: Your approach should be the other way around! You write a test, which fails because there is no implementation; then you leave it alone. You go and write the implementation, then rerun the test to pass it or to get feedback. Again, you need your CPU/memory power to focus on one thing; you don't want to brute-force your implementation. Hope this makes sense!
Little advice on your Play version: Play 2.6 is much better than Play 2.3; you should slowly but surely update your application, at least for the sake of security.
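To make option 2 concrete, here is a minimal, non-recursive watcher using plain JDK APIs; the watched directory and the test name are placeholders, and on Windows you would invoke activator.bat instead:

import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class SingleTestWatcher {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get("app");  // directory to watch -- placeholder, not watched recursively
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take();   // blocks until something changes
            key.pollEvents();                // drain the events; we only care that there was a change
            // re-run a single test in batch mode ("MyCoolTest" is a placeholder)
            new ProcessBuilder("activator", "test-only MyCoolTest")
                    .inheritIO()
                    .start()
                    .waitFor();
            if (!key.reset()) {
                break;                       // watched directory is no longer accessible
            }
        }
    }
}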
OK, so I found what I was looking for.
For anybody in need of this particular feature in that particular version of Play (I'm not sure about other versions), what you need to do is really simple: run activator and put the ~ prefix before test. For example:
$ activator
[my-cool-project] ~test
That will re-run your tests whenever you make a change. If you want to do this for a particular test, then you do the same but with test-only:
$ activator
[my-cool-project] ~test-only MyCoolTest
Hope it helps anyone looking for the same thing.
I use Java 8, Maven, JUnit (with Spring and Mockito).
The idea is to know whether it is possible to build my application with tests running (not skipped and not ignored) while marking some of them as not mandatory.
Maybe there is an annotation like "@FailureWarningOnly" in JUnit or somewhere else?
Requirements:
I do not want to allow all tests to fail, only those that are not mandatory for the build.
I do not want to use @Ignore, because I want to be informed if something goes wrong during the build phase.
I also want the exception stack trace of the error with whatever solution is used.
Personally, I am not a big fan of making some tests mandatory while others may fail.
It is an open door to making your tests more brittle and less helpful, and it may encourage bad habits.
In the vast majority of cases you can use profiles or improve the setup of the tests to make their behavior reproducible.
In the rare cases where that is not possible, you could use assumptions which, when they do not hold, mean that the remaining statements of the test should be ignored.
You could do that manually or use a feature of your test framework.
JUnit (4 and 5) provides Assumptions, that is:
a collection of utility methods that support conditional test execution based on assumptions. In direct contrast to failed assertions, failed assumptions do not result in a test failure; rather, a failed assumption results in a test being aborted.
It looks like this:
@Test
public void foo() {
    // abort (skip) the test when it is expected to fail in this environment
    Assume.assumeFalse(shouldFailTestInThisCase(...));
    // my assertions
    Assert.assertEquals(...);
    Assert.assertEquals(...);
}
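If you really want the annotation style from the question, one way to approximate @FailureWarningOnly in JUnit 4.12+ is a custom TestRule that downgrades failures of annotated tests to aborted (skipped) results while still printing the stack trace; the annotation and class names below are invented for this sketch:

// FailureWarningOnly.java (hypothetical marker annotation)
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface FailureWarningOnly {
}

// FailureWarningRule.java (hypothetical rule)
import org.junit.AssumptionViolatedException;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class FailureWarningRule implements TestRule {
    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                try {
                    base.evaluate();
                } catch (Throwable t) {
                    if (description.getAnnotation(FailureWarningOnly.class) == null) {
                        throw t;  // mandatory test: fail the build as usual
                    }
                    // non-mandatory test: keep the stack trace in the build log,
                    // but report the test as skipped instead of failed
                    t.printStackTrace();
                    throw new AssumptionViolatedException(
                            "Non-mandatory test failed: " + description.getDisplayName(), t);
                }
            }
        };
    }
}

Each test class would then declare @Rule public FailureWarningRule failureWarning = new FailureWarningRule(); and mark its non-mandatory methods with @FailureWarningOnly. Be aware that this inherits the drawbacks mentioned above: a "skipped" failure is easy to stop noticing.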
Our app is as follows:
[Frontend] <-restAPI-> [Backend]
The backend is supposed to always be at the latest version and can support multiple versions of the frontend, like Ver1, Ver2, etc. There could be minor changes in the REST API protocol or even in how the frontend reacts (more functionality or different behavior).
This test project tests correct communication, how the frontend behaves, and whether the backend serves the right data.
We would like the same test project branch to be used for all supported versions. Right now there are really only minor differences, so our Java test code has:
if ("ver1".equals(version)) {
    ....
} else if ("ver2".equals(version)) {
    ....
}
The question is: what is the most elegant way to split the versions? Right now it works, but as the number of versions rises it will become a mess.
I thought of having a parent @Test method and deciding which child test to run according to the version, something like the sketch after the file list:
BasicTest.java
BasicVer1Test.java
BasicVer2Test.java
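A rough sketch of what that layout could look like with JUnit 4 (class names, the system property and the version guard are purely illustrative):

// BasicTest.java -- shared base class
import org.junit.Assume;
import org.junit.Before;

public abstract class BasicTest {
    // which frontend version this run targets, e.g. passed as -Dfrontend.version=ver2
    static final String VERSION = System.getProperty("frontend.version", "ver1");

    protected abstract String targetVersion();

    @Before
    public void skipUnlessVersionMatches() {
        // tests in a non-matching subclass are reported as skipped, not failed
        Assume.assumeTrue(targetVersion().equals(VERSION));
    }
}

// BasicVer1Test.java -- version-specific tests
import org.junit.Test;

public class BasicVer1Test extends BasicTest {
    @Override
    protected String targetVersion() { return "ver1"; }

    @Test
    public void restApiBehavesLikeVer1() {
        // ver1-specific assertions go here
    }
}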
The question is whether my idea is good; maybe somebody has faced a similar problem.
The responsibility of your "test system" is testing. Not version control.
In other words: the "correct" answer here is to use a source code management system!
Your code base contains
A) source code
B) associated tests
So, when your product has several distinct versions, that should be managed by keeping A) and B) together within the same branches.
Your setup, however, seems to be that those aspects are really separated: your "test code base" is not version controlled in the same way as your product code. That is the crucial point to address.
Anything else is just fighting symptoms!
[EDIT] To add an example:
Branch 1 - Version 1
Source for Version 1
Tests for Version 1
-------------------------------
Branch 2 - Version 2
Source for Version 2
Tests for Version 2
When a new version adds more functionality or changes behavior, it should be tested separately and its source should be maintained separately!