If I execute the test cases one by one they work, but if I execute the test suite the results I get are just random: sometimes the test cases pass and other times they don't.
I tried explicit waits to see if it was a response-time issue. I also went through the suite test case by test case to see if there was something wrong.
{
    driver.get(Constant.VTenantURL);
    driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);
    BlockUI_Wait.Execute(driver);
    WebElement TestTenant = driver.findElement(By.linkText("TC 1839 Tenant Creation"));
    TestTenant.click();
    BlockUI_Wait.Execute(driver);
    Manage_VTenants.VTUsers(driver).click();
    WebElement Add = driver.findElement(By.xpath("//*[@id=\"tab_users_domain\"]/div/div[1]/table/tbody/tr[3]/td[1]/i"));
    Add.click();
    Manage_VTenants.VTAddItem3(driver).click();
    BlockUI_Wait.Execute(driver);
    Save_Action.Execute(driver);
}
I would like to know a better way, or best practices to implement, in order to get reliable results.
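For what it's worth, a common cause of this kind of flakiness is relying on implicitlyWait plus ad-hoc helpers; waiting explicitly for each element you interact with is the usual recommendation, and Selenium's documentation warns against mixing implicit and explicit waits. A minimal sketch, reusing the locator from the code above (the 10-second timeout and the Selenium 3 WebDriverWait constructor are assumptions):

import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Wait until the element is actually clickable instead of relying on implicitlyWait.
WebDriverWait wait = new WebDriverWait(driver, 10); // timeout in seconds; an assumption
WebElement add = wait.until(ExpectedConditions.elementToBeClickable(
        By.xpath("//*[@id=\"tab_users_domain\"]/div/div[1]/table/tbody/tr[3]/td[1]/i")));
add.click();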
I am running a test with a write, then a read operation:
emailRepository.insertEmail(email); // returns a Completable
emailRepository.getEmail(email); // returns a Maybe<String>
If I try to test it using TestObserver<String> as below, I get an empty Maybe from getEmail() and the test fails:
emailRepository.insertEmail(email)
.andThen(Maybe.defer(() -> emailRepository.getEmail(email)))
.test()
.assertValue(email); // fails -- empty Maybe
However, if I run it with blockingGet(), the same test works:
String emailFromDb = emailRepository.insertEmail(email)
.andThen(Maybe.defer(() -> emailRepository.getEmail(email)))
.blockingGet();
assertEquals(email, emailFromDb); // succeeds
I thought by chaining the test() method I would be decreasing the risk of race conditions, but weirdly this does not seem to be the case.
Could it be just a coincidence that blockingGet() is working?
My understanding was that once the task is handed over to the database (MySql, in this case), the Completable would complete but the DB may not have finished the write. But if that were the case, then shouldn't I be seeing it fail some times as well with blockingGet()? Or am I misinterpreting things here?
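For reference: TestObserver assertions run as soon as test() returns, so if any part of the chain is asynchronous (e.g. the repository subscribes on an I/O scheduler), the Maybe may not have emitted yet, which would explain why only the blocking variant passes. A sketch of waiting for the terminal event first, assuming RxJava 2 (await() blocks and throws InterruptedException, so the test method must declare it):

emailRepository.insertEmail(email)
        .andThen(Maybe.defer(() -> emailRepository.getEmail(email)))
        .test()
        .await() // block until the Maybe terminates
        .assertValue(email);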
When I run go test, I get this output:
--- FAIL: TestGETSearchSuccess (0.00s)
Location: drivers_api_test.go:283
Error: Not equal: 200 (expected)
!= 204 (actual)
--- FAIL: TestGETCOSearchSuccess (0.00s)
Location: drivers_api_test.go:391
Error: Not equal: 200 (expected)
!= 204 (actual)
But after I run go test again, all my tests pass.
Tests fail only when I reset my mysql database, and then run go test for the first time.
For every GET request, I do a POST request beforehand to ensure that there is data created in the DB.
Could anyone help me with how to make sure that the tests run sequentially, i.e. that the POST requests run before the GET requests?
You can't / shouldn't rely on test execution order. The order in which tests are executed is not defined, and with the use of testing flags it is possible to exclude tests from running, so you have no guarantee that they will run at all.
For example the following command will only run tests whose name contains a 'W' letter:
go test -run W
Also note that if some test functions mark themselves eligible for parallel execution using the T.Parallel() method, the go tool will reorder the tests to first run non-parallel tests, and then run parallel tests in parallel under certain circumstances (controlled by test flags like -p). You can see examples of this in this answer: Are tests executed in parallel in Go or one by one?
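For context, a test function opts in to parallel execution like this (a minimal sketch):

func TestSomethingParallel(t *testing.T) {
    t.Parallel() // marks this test as eligible to run in parallel with other parallel tests
    // ...
}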
Tests should be independent of each other. If a test function has prerequisites, those cannot be implemented in another test function.
Options to do additional tasks before a test function is run:
You may put it in the test function itself
You may put it in a package init() function, in the _test.go file itself. This will run once before execution of test functions begins.
You may choose to implement a TestMain() function which will be called first and in which you may do additional setup before you call M.Run() to trigger the execution of test functions.
You may mix the above options.
In your case, in the package init() or in TestMain() you should check whether your DB is initialized (i.e. whether the test records are inserted), and if not, insert the test records.
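A sketch of that check in TestMain (isSeeded and seedTestRecords are hypothetical helpers, and db stands for your already-opened test database handle):

import (
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    // Hypothetical helpers: insert the test records once, before any test runs.
    if !isSeeded(db) {
        seedTestRecords(db)
    }
    os.Exit(m.Run())
}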
Note that starting with Go 1.7, you may use subtests in which you define execution order of subtests. For details see blog post: Using Subtests and Sub-benchmarks, and the package doc of the testing package.
For those who, like me, are having problems because of multiple concurrent tests running simultaneously: I found a way to limit the number of test binaries running in parallel:
go test -p 1
With this, the tests run one package at a time. Note that -p controls package-level parallelism; to also serialize subtests that call t.Parallel() within a package, use -parallel 1.
Apart from 3rd-party libraries like Convey and Ginkgo, with plain Go 1.7+ you can run subtests sequentially; see the Using Subtests and Sub-benchmarks blog post mentioned above.
func TestFoo(t *testing.T) {
    // <setup code>
    t.Run("A=1", func(t *testing.T) { ... })
    t.Run("A=2", func(t *testing.T) { ... })
    t.Run("B=1", func(t *testing.T) { ... })
    // <tear-down code>
}
And you can run them conditionally with:
go test -run '' # Run all tests.
go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
So let's say you have a user package from a REST API that you want to test. You need to test the create handler in order to be able to test the login handler. Usually I would have this in user_test.go:
type UserTests struct{ Test *testing.T }

func TestRunner(t *testing.T) {
    t.Run("A=create", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestCreateRegularUser()
        test.TestCreateConfirmedUser()
        test.TestCreateMasterUser()
        test.TestCreateUserTwice()
    })
    t.Run("A=login", func(t *testing.T) {
        test := UserTests{Test: t}
        test.TestLoginRegularUser()
        test.TestLoginConfirmedUser()
        test.TestLoginMasterUser()
    })
}
Then I can attach methods to the UserTests type, in any _test.go file, that won't be picked up and executed directly by the go test command:
func (t *UserTests) TestCreateRegularUser() {
    registerRegularUser := util.TableTest{
        Method:      "POST",
        Path:        "/iot/users",
        Status:      http.StatusOK,
        Name:        "registerRegularUser",
        Description: "register Regular User has to return 200",
        Body:        SerializeUser(RegularUser),
    }
    response := util.SpinSingleTableTests(t.Test, registerRegularUser)
    util.LogIfVerbose(color.BgCyan, "IOT/USERS/TEST", response)
}
The best way to achieve that is to create a TestMain, as described in the testing package documentation.
import (
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    // Do your setup here
    os.Exit(m.Run())
}
It's also possible to synchronize the test using wait groups:
awaitRootElement := sync.WaitGroup{}
awaitRootElement.Add(1)

t.Run("it should create root element", func(t0 *testing.T) {
    // do the test which creates the root element
    awaitRootElement.Done()
})

awaitRootElement.Wait()

t.Run("it should act on root element somehow", func(t0 *testing.T) {
    // do tests which depend on the root element
})
Note that you should wait before scheduling the dependent tests, since otherwise the asynchronous test execution might deadlock (the test routine awaits another test which never gets to run).
How can I get the assertion result in a JSR223 Sampler or Beanshell Sampler, so that I can show the total number of test cases, plus the success and failure counts, in a Swing showMessageDialog once test execution has completed?
I am not really sure if I have understood the question correctly!
You seem to be using JMeter for functional testing, which is fine! I assume that each request (or Sampler) is a test case. If the answer below is not really what you were looking for, provide more info.
setUp Thread Group:
I create these 2 variables in a Beanshell Sampler.
bsh.shared.success = 0;
bsh.shared.fail = 0;
Thread Group:
I add the code below to a Beanshell Listener to update the success / fail count based on the assertion result.
if (sampleResult.isSuccessful()) {
    bsh.shared.success++;
} else {
    bsh.shared.fail++;
}
tearDown Thread Group:
I finally display the passed and failed test case counts. You can use your Swing method here!
log.info("PASSED : " + bsh.shared.success);
log.info("FAILED : " + bsh.shared.fail);
I don't know what you're trying to do, but you're violating 2 main rules of JMeter Best Practices:
Run tests in non-GUI mode. Use GUI for tests development and debugging only.
Use the best-performing scripting language (JSR223 with Groovy is the usual recommendation)
When you run your tests in command-line non-GUI mode you can get interim and final statistics via Summariser which is enabled by default:
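For example, a typical non-GUI invocation looks like this (the file names are placeholders):

jmeter -n -t testplan.jmx -l results.jtl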
See the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure guide for a more detailed explanation of the above recommendations and for other JMeter performance and tuning tips and tweaks.
I have a java assignment to create an address book then test and evaluate it. I have created it and created some junit tests. In the deliverables section of the assignment it says to list all the test cases for the full program in a table along with:
A unique id
a description of the test
pre-conditions for running the test
the test data
the expected result
Could somebody tell me what they mean by the preconditions and the test data for the test below:
public void testGetName() {
    Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");
    assertEquals("Alison Murray", entry1.getName());
}
I tried emailing the tutor (I'm a distance learner) but it's taking too long to get a reply. Would the pre-condition be that entry1 needs to be populated? Test data: "Alison Murray"? Any help is appreciated.
There are two types of checks with JUnit:
assertions (org.junit.Assert.*);
assumptions (org.junit.Assume.*).
Assertions are usually used to check your test results. If the result is not what was expected, then the test fails.
Assumptions are used to check whether the test data are valid (whether they match the test case). If they don't, the test is cancelled (without any error).
As I read your code sample: there are no preconditions and the test data would be entry1.
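To illustrate the difference, a sketch reusing the Entry class from the question (JUnit 4 is assumed):

import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

@Test
public void testGetName() {
    Entry entry1 = new Entry("Alison Murray", "34 Station Rd", "Workington", "CA14 4TG");
    // Assumption: if this fails, the test is skipped rather than marked as failed.
    assumeTrue(entry1.getName() != null);
    // Assertion: if this fails, the test is marked as failed.
    assertEquals("Alison Murray", entry1.getName());
}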
I'm quite new to WebDriver and the TestNG framework. I've started with a project that does a regression test of an e-commerce website. I'm done with the login, registration and so on. But there is something that I don't quite understand.
Example, I have this easy code that searches for a product.
driver.get(url + "/k/k.aspx");
driver.findElement(By.id("q")).clear();
driver.findElement(By.id("q")).sendKeys("xxxx"); //TODO: Make this dynamic
driver.findElement(By.cssSelector("input.submit")).click();
Now I want to check whether xxxx is present on the page. This can be done with:
webdriver.findElement(By.cssSelector("BODY")).getText().matches("^[\\s\\S]*xxxxxx[\\s\\S]*$")
I store this in a boolean and check whether it's true or false.
Now to the question, based on this Boolean value I want to say that the test result is success or fail. How can I do that? What triggers a testNG test to fail?
TestNG, or any other testing tool, decides success or failure of a test based on assertions.
Assert.assertEquals(actualVal, expectedVal);
So if actualVal and expectedVal are the same, the test will pass; otherwise it will fail.
Similarly, you will find other assertion options if you are using an IDE like Eclipse.
If you want to stop your test execution based on the verification of that text value, then you can use asserts. However, if you want to log the outcome of the test as a failure and carry on, you should try soft assertions, which log the verification as passed or failed and continue with the test. The latest TestNG comes equipped to handle this; info at Cedric's blog.
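A sketch with TestNG's SoftAssert (the result boolean is the one computed from the page text above):

import org.testng.asserts.SoftAssert;

SoftAssert soft = new SoftAssert();
soft.assertTrue(result, "product text not found"); // logged as failed, test continues
// ... further checks ...
soft.assertAll(); // marks the test failed at the end if any soft assertion failed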
Write this code where your if condition fails:
throw new RuntimeException("XXXX not found: ");
You can throw an exception; each method that calls this method should then also declare throws Exception after its name, or you can use try/catch. Sample:
protected Boolean AssertIsCorrectURL(String expectedURL) throws Exception {
    String actualURL = this.driver.getCurrentUrl();
    if (!actualURL.equals(expectedURL)) {
        String errMsg = String.format("Actual URL page: '%s'. Expected URL page: '%s'",
                actualURL, expectedURL);
        throw new Exception(errMsg);
    }
    return true;
}
You can do this:
boolean result = webdriver.findElement(By.cssSelector("BODY")).getText().matches("^[\\s\\S]*xxxxxx[\\s\\S]*$");
Assert.assertTrue(result);