I have a scenario where I need to run multiple tags but skip certain scenarios when a condition occurs. Specifically, I don't want to run Feature 1 when the browser is IE.
#run_me #do_not_run
Feature 1
#Tag1 #Tag2
Scenario: Scenario 1
#run_me
Feature 2
#Tag1 #Tag3
Scenario: Scenario 2
Condition:
if (browser == "IE")
    execute all #run_me scenarios, but skip any that are also tagged #do_not_run
else
    run all #run_me scenarios
Current code:
--tags #run_me
What should be the right way to achieve it?
Note:
I tried --tags #run_me ~#do_not_run inside the if condition, but I'm not sure whether that is the correct method.
You can follow the approach below to combine tags:
#fast -- Scenarios tagged with #fast
#wip and not #slow -- Scenarios tagged with #wip that aren’t also tagged with #slow
#smoke and #fast -- Scenarios tagged with both #smoke and #fast
#gui or #database -- Scenarios tagged with either #gui or #database
Reference Link
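Applied to your case, the switch could look roughly like this (a sketch, assuming the tag-expression syntax above; adjust the tag prefix and quoting to whatever your runner expects):

--tags "#run_me and not #do_not_run"   (when the browser is IE)
--tags "#run_me"                       (otherwise)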
When I run go test, my output:
--- FAIL: TestGETSearchSuccess (0.00s)
Location: drivers_api_test.go:283
Error: Not equal: 200 (expected)
!= 204 (actual)
--- FAIL: TestGETCOSearchSuccess (0.00s)
Location: drivers_api_test.go:391
Error: Not equal: 200 (expected)
!= 204 (actual)
But after I run go test again, all my tests pass.
Tests fail only when I reset my MySQL database and then run go test for the first time.
For every GET request, I do a POST request first to ensure that there is data created in the DB.
Could anyone help me with how to make sure the tests run sequentially, i.e. that the POST requests run before the GET requests?
You can't / shouldn't rely on test execution order. The order in which tests are executed is not defined, and with the use of testing flags it is possible to exclude tests from running, so you have no guarantee that they will run at all.
For example the following command will only run tests whose name contains a 'W' letter:
go test -run W
Also note that if some test functions mark themselves eligible for parallel execution using the T.Parallel() method, the go tool will reorder the tests to first run non-parallel tests, and then run parallel tests in parallel under certain circumstances (controlled by test flags such as -parallel). You can see examples of this in this answer: Are tests executed in parallel in Go or one by one?
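For illustration, opting a test into parallel execution only takes one call (a minimal sketch):

func TestSomething(t *testing.T) {
    t.Parallel() // this test may run concurrently with other parallel tests
    // ...
}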
Tests should be independent of each other. If a test function has prerequisites, those cannot be delegated to another test function.
Options to do additional tasks before a test function is run:
You may put it in the test function itself
You may put it in a package init() function, in the _test.go file itself. This will run once before execution of test functions begins.
You may choose to implement a TestMain() function which will be called first and in which you may do additional setup before you call M.Run() to trigger the execution of test functions.
You may mix the above options.
In your case, in package init() or TestMain() you should check whether your DB is initialized (i.e. the test records are inserted) and, if not, insert the test records; see the sketch below.
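A minimal sketch of that, assuming a hypothetical seedTestRecords() helper (the name and its body are placeholders for your own seeding logic):

package drivers_test // placeholder package name

import (
    "log"
    "os"
    "testing"
)

// seedTestRecords stands in for whatever inserts the data your GET tests expect,
// e.g. issuing the POST requests or running the SQL inserts directly.
func seedTestRecords() error {
    return nil
}

func TestMain(m *testing.M) {
    // Runs once, before any test function in this package.
    if err := seedTestRecords(); err != nil {
        log.Fatalf("failed to seed test database: %v", err)
    }
    os.Exit(m.Run())
}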
Note that starting with Go 1.7, you may use subtests in which you define execution order of subtests. For details see blog post: Using Subtests and Sub-benchmarks, and the package doc of the testing package.
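Applied to your POST-before-GET case, that could look roughly like this (a sketch; the subtest names and bodies are placeholders for your actual requests):

func TestDriversSearch(t *testing.T) {
    // Subtests run in declaration order as long as they don't call t.Parallel,
    // so the POST finishes before the GET starts.
    t.Run("POST create driver", func(t *testing.T) {
        // issue the POST request that creates the record
    })
    t.Run("GET search success", func(t *testing.T) {
        // issue the GET request that expects the record created above
    })
}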
For those who, like me, are having problems because of multiple concurrent tests running simultaneously, I found a way to limit the number of test binaries running in parallel:
go test -p 1
With this, the test binaries of different packages run sequentially, one package at a time.
Source
Apart from 3rd-party libraries like Convey and Ginkgo, with plain Go 1.7+ you can run subtests sequentially. You can read more here
func TestFoo(t *testing.T) {
    // <setup code>
    t.Run("A=1", func(t *testing.T) { ... })
    t.Run("A=2", func(t *testing.T) { ... })
    t.Run("B=1", func(t *testing.T) { ... })
    // <tear-down code>
}
And you can run them conditionally with:
go test -run '' # Run all tests.
go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
So let's say you have a user package from a REST API that you want to test. You need to test the create handler in order to be able to test the login handler. Usually I would have this in user_test.go:
type UserTests struct{ Test *testing.T }

func TestRunner(t *testing.T) {
    t.Run("A=create", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestCreateRegularUser()
        test.TestCreateConfirmedUser()
        test.TestCreateMasterUser()
        test.TestCreateUserTwice()
    })
    t.Run("A=login", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestLoginRegularUser()
        test.TestLoginConfirmedUser()
        test.TestLoginMasterUser()
    })
}
Then I can attach methods to the UserTests type; since they are methods rather than top-level TestXxx functions, the go test command will not execute them on its own from any _test.go file:
func (t *UserTests) TestCreateRegularUser() {
    registerRegularUser := util.TableTest{
        Method:      "POST",
        Path:        "/iot/users",
        Status:      http.StatusOK,
        Name:        "registerRegularUser",
        Description: "register Regular User has to return 200",
        Body:        SerializeUser(RegularUser),
    }
    response := util.SpinSingleTableTests(t.Test, registerRegularUser)
    util.LogIfVerbose(color.BgCyan, "IOT/USERS/TEST", response)
}
The best way to achieve that is to create a TestMain, as presented here.
import (
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    // Do your stuff here
    os.Exit(m.Run())
}
It's also possible to synchronize the test using wait groups:
awaitRootElement := sync.WaitGroup{}
awaitRootElement.Add(1)

t.Run("it should create root element", func(t0 *testing.T) {
    // do the test which creates root element
    awaitRootElement.Done()
})

awaitRootElement.Wait()

t.Run("it should act on root element somehow", func(t0 *testing.T) {
    // do tests which depend on root element
})
Note that you should wait before scheduling the tests, since the asynchronous test execution otherwise might deadlock (the test routine is awaiting another test which never gets to run).
I want to get the code coverage of a simple test which gets data from a DataProvider. I need the coverage result for each data set that runs through the test. For example:
if (value != 0) {
    // do something
}

if (value == 100) {
    // do something
}
// else do something
If the test gets a value like 0 from the DataProvider, it never reaches the first part of the code, so the coverage result is different than if the value was 100.
So how do I get those coverage results for each data set? I am using JaCoCo with the Maven plugin...
It might help if there were a possibility to run the subtests with Maven... Currently I am doing this:
mvn test
but I want to do something like this:
mvn -Dtest=myTestClass#myTest#myData (#myData of course does not work)
However, IntelliJ uses this parameter to specify the subtest:
java.exe -ea [.......] #name0 //-> to run the test only with first Data
java.exe -ea [.......] #name1 //-> to run the test only with second Data
etc.
Thanks for your help in advance!
Code coverage is the percentage of code which is covered by automated tests. Code coverage measurement simply determines which statements in a body of code have been executed through a test run, and which statements have not.
https://confluence.atlassian.com/clover/about-code-coverage-71599496.html
You can pass the arguments from the command line and then run your tests, like this:
mvn test -Dtest=<ClassName> -Dvalue=100
then access them in your test with
int value = Integer.valueOf(System.getProperty("value"));
I have 2 pages: a suite page, Suite1, and a SuiteSetUp page. Inside the SuiteSetUp page I am calling a Java constructor with the suite name as a parameter.
Now when I run my FitNesse suite, I get the string 'SuiteSetUp' in Java, but I should get 'Suite1'. Is this doable in FitNesse using Java?
!|library |
|suite set up|${USERNAME}|${suiteName}|
suiteName is defined in a template which is included in this page.
suiteName is defined as follows.
!define suiteName {${RUNNING_PAGE_NAME}}
This is how I included the template in the page; the !define command is written inside the template page:
!include -c .FrontPage._TEMPLATE
I guess you include the template page in the SuiteSetUp page, which means RUNNING_PAGE_NAME will be "SuiteSetUp"; that's why you are getting that value. (OK, it may sound like I did not say anything... but that's the simple fact.)
You can switch to RUNNING_PAGE_PATH or PAGE_PATH and perform string manipulation in Java to retrieve the last token, if you want to do it in a similar way.
So we have tests that look like this:
Scenario: XXX- 9056: Change password to special characters
Meta:
#Regression
#ticket #5732
#skip
Given a customer with the following properties:...
We put the #skip tag there whenever we are still working on a scenario or we know it will not work properly.
We want to get Serenity reports, but we don't want them to include skipped stories. How can we exclude them from being reported?
We found that our issue was that some of our test cases had a Scenario line that looked like
Scenario: XXX-#9056: Change password to special characters
Instead of
Scenario: XXX- 9056: Change password to special characters
So having the pound symbol (#) on the Scenario line was messing it up; it doesn't matter if it is under the Meta tag. Now none of the skipped tests show up in the report.