Display success/failure count in a Swing dialog box in JMeter - java

How can I get the assertion result in a JSR223 Sampler or BeanShell Sampler, so that I can show the total number of test cases, successful test cases and failed test cases in a Swing showMessageDialog once test case execution has completed?

I am not really sure if I have understood the question correctly!
You seem to be using JMeter for functional testing, which is fine! I assume that each request (or Sampler) is a test case. If the answer below is not what you were looking for, please provide more info.
setUp Thread Group:
I create these two shared variables in a BeanShell Sampler:
bsh.shared.success = 0;
bsh.shared.fail = 0;
Thread Group:
I add the code below to a BeanShell Listener to update the success / fail count based on the assertion result:
if (sampleResult.isSuccessful()) {
    bsh.shared.success++;
} else {
    bsh.shared.fail++;
}
tearDown Thread Group:
I finally display the count of passed and failed test cases. You can use your Swing method here:
log.info("PASSED : " + bsh.shared.success);
log.info("FAILED : " + bsh.shared.fail);

I don't know what you're trying to do, but you're violating two main rules of JMeter best practices:
Run tests in non-GUI mode. Use the GUI for test development and debugging only.
Use the best-performing scripting language.
When you run your tests in command-line non-GUI mode, you can get interim and final statistics via the Summariser, which is enabled by default.
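For reference, a typical non-GUI run that prints the Summariser statistics to the console is started like this (the test plan and results file names are placeholders):
jmeter -n -t test-plan.jmx -l results.jtl
Here -n selects non-GUI mode, -t points to the test plan and -l sets the results file.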
See the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure guide for a more detailed explanation of the above recommendations and other JMeter performance and tuning tips and tweaks.

Related

Sample result's label is not changed through BeanShell post processor for some conditions

I wrote this code so that if status == "500" the name of the API would be "FAKE_CLIENT_RETRY", and if the status of the API == "ERROR" the name would be "FAKE_CLIENT_CALLBACK_ERROR":
import org.apache.jmeter.samplers.SampleResult;

//process main sample
if (("${status}").equals("500")) {
    SampleResult.setResponseCodeOK();
    SampleResult.setSuccessful(true);
    vars.put("Api_Fake_Client_Name", "FAKE_CLIENT_RETRY");
} else if (("${status}").equals("ERROR")) {
    SampleResult.setSuccessful(false);
    vars.put("Api_Fake_Client_Name", "FAKE_CLIENT_CALLBACK_ERROR");
} else {
    vars.put("Api_Fake_Client_Name", "FAKE_CLIENT_CALLBACK_SUCESS");
}
But even when status == "ERROR", the name it returns is "FAKE_CLIENT_RETRY".
The strangest thing is that I know the execution entered the "ERROR" branch of the condition, because the response with status == "ERROR" shows up as a failure in JMeter, and I forced the failure in that case via this snippet:
SampleResult.setSuccessful(false);
But despite entering that branch, it ignores the line that is supposed to rename the API.
You're setting the Api_Fake_Client_Name variable value in the Assertion, so it will only be available (updated) in the next sampler or during the next iteration of the current sampler.
Also be aware that, starting from JMeter 3.1, you're supposed to be using JSR223 Test Elements and the Groovy language for scripting, so you could change your code to something like:
switch (vars.get('status')) {
    case '500':
        prev.setResponseCodeOK()
        prev.setSuccessful(true)
        vars.put('Api_Fake_Client_Name', 'FAKE_CLIENT_RETRY')
        prev.setSampleLabel('FAKE_CLIENT_RETRY')
        break
    case 'ERROR':
        prev.setSuccessful(false)
        vars.put('Api_Fake_Client_Name', 'FAKE_CLIENT_CALLBACK_ERROR')
        prev.setSampleLabel('FAKE_CLIENT_CALLBACK_ERROR')
        break
    default:
        vars.put('Api_Fake_Client_Name', 'FAKE_CLIENT_CALLBACK_SUCESS')
        prev.setSampleLabel('FAKE_CLIENT_CALLBACK_SUCESS')
}
More information:
JMeter Test Elements Execution Order
Scripting JMeter Assertions in Groovy - A Tutorial
The following script worked without any issue within a BeanShell PostProcessor:
if(("${status}").equals("500")) {
prev.setResponseCodeOK();
prev.setSuccessful(true);
prev.setSampleLabel("FAKE_CLIENT_RETRY");
vars.put("Api_Fake_Client_Name","FAKE_CLIENT_RETRY");
}else if(("${status}").equals("ERROR")){
prev.setSuccessful(false);
prev.setSampleLabel("FAKE_CLIENT_CALLBACK_ERROR");
vars.put("Api_Fake_Client_Name","FAKE_CLIENT_CALLBACK_ERROR");
}else{
prev.setSampleLabel("FAKE_CLIENT_CALLBACK_SUCESS");
vars.put("Api_Fake_Client_Name","FAKE_CLIENT_CALLBACK_SUCESS");
}
Please note that you have access to the SampleResult through the prev variable. Use prev.setSampleLabel("Label"); to set the label of the sample result.
prev - (SampleResult) - gives access to the previous SampleResult
Migration to a JSR223 PostProcessor with Groovy is highly recommended for performance, support of new Java features, and because the BeanShell library receives only limited maintenance.
Add a Debug PostProcessor to view the JMeter variables in the View Results Tree listener.
A sample test plan (JMX) is available on GitHub.

In the qTest Pulse test management tool, unable to link Test Scenarios to Requirements

We can set up rules, triggers, actions and constants to link requirements, link scenarios, update Cucumber JSON results, etc., which is ultimately reflected in qTest Manager. This in turn is linked to Jira, so I can see all the linking and execution statuses in Jira as well. But I am getting the error below while executing the rules in qTest Pulse. What does it mean?
VMError: Failed to load '#qasymphony/pulse-sdk': Unknown type. at _require (/tmp/tmp-8224DLoF0tvd3tni/node_modules/vm2/lib/sandbox.js:324:10) at eval (eval at (/tmp/tmp-8224DLoF0tvd3tni/vm.js:7:3), :1:22) at Object. (/tmp/tmp-8224DLoF0tvd3tni/vm.js:7:3) at NodeVM.run (/tmp/tmp-8224DLoF0tvd3tni/node_modules/vm2/lib/main.js:449:23) at [stdin]:52:34 at Script.runInThisContext (vm.js:116:20) at Object.runInThisContext (vm.js:306:38) at Object. ([stdin]-wrapper:9:26) at Module._compile (internal/modules/cjs/loader.js:955:30) at evalScript (internal/process/execution.js:80:25) { name: 'VMError', code: 'ELOADFAIL' } " result: null
Do you need to use qTest Pulse?
I would suggest starting the automation host on the machine where you want to execute the tests, or adding the qTest plugin to Jenkins and linking it to qTest Manager to update your results.
I would also suggest contacting Tricentis support, as they would be more helpful here.

Issue with Test Cases when executed as a whole

If I execute the test cases one by one they work, but if I execute the test suite the results I receive are just random: sometimes the test cases pass and other times they don't.
I tried explicit waits to see if it was down to response time.
I also went through it test case by test case to see if there was something wrong.
{
    driver.get(Constant.VTenantURL);
    driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);
    BlockUI_Wait.Execute(driver);
    WebElement TestTenant = driver.findElement(By.linkText("TC 1839 Tenant Creation"));
    TestTenant.click();
    BlockUI_Wait.Execute(driver);
    Manage_VTenants.VTUsers(driver).click();
    WebElement Add = driver.findElement(By.xpath("//*[@id=\"tab_users_domain\"]/div/div[1]/table/tbody/tr[3]/td[1]/i"));
    Add.click();
    Manage_VTenants.VTAddItem3(driver).click();
    BlockUI_Wait.Execute(driver);
    Save_Action.Execute(driver);
}
I would like to know a better approach, or best practices to implement, in order to get reliable results.
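For reference, an explicit wait of the kind mentioned above is usually written with WebDriverWait and ExpectedConditions rather than a fixed implicit wait; a minimal sketch, with the locator and timeout as placeholders:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaitExample {

    // Clicks an element only once it is actually clickable,
    // instead of relying on a fixed implicit wait.
    static void clickWhenReady(WebDriver driver, By locator) {
        // Selenium 3 style constructor (timeout in seconds); Selenium 4 takes Duration.ofSeconds(10)
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement element = wait.until(ExpectedConditions.elementToBeClickable(locator));
        element.click();
    }
}
For example, clickWhenReady(driver, By.linkText("TC 1839 Tenant Creation")) instead of a plain findElement(...).click() is typically more robust against slow page loads.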

Jenkins - check Git before every build

I am currently setting up a Continuous Integration system with Jenkins, and I came across a problem:
Almost every project depends on other projects. So, in order to perform daily builds, I use the CloudBees Build Flow plugin. It does its job pretty nicely, actually, but not in an optimal way: it builds EVERY job I tell it to, without even checking on Git whether there are any changes. So I would like to know if there is a way to force Jenkins to check Git for changes before actually building the project.
PS: Sorry for my English, I am not a native speaker.
I am not sure if you have looked at the configuration in the job settings; there is a place to force a fresh checkout. I have SVN linked, and it will be similar with Git.
If not, you can look into adding manual commands and check whether you can arrange the order so that the checkout executes first and the build runs afterwards.
In the end, I chose to stick with Build Flow and the Groovy language instead of using scripts, but that is just for convenience; this solution would work just as well with shell scripts. Moreover, using Build Flow allows you to use parallel() to start multiple jobs at the same time.
Here's my solution:
I found the Jenkins Poll SCM plugin, which polls the SCM before trying to build, and builds only if necessary.
The only problem with the CloudBees Build Flow plugin is that it does not wait for previous jobs to be completed, since I am not using the build() method. To overcome this, I wrote my own buildWithPolling() method, which waits for the job to finish before moving on. The only downside of my method is that it does not wait for downstream jobs to finish (but I don't know if the build() method does either...). Here is the code of my method:
def buildWithPolling(project)
{
    // Connect to the URL that starts the polling, which triggers the build if needed
    def address = "http://localhost:8080/jenkins/job/" + project + "/poll"
    println "Connecting to " + address + " to poll Git and build if necessary."
    def poll = new URL(address)
    poll.openStream()
    // Variables used to know whether the build is still running or has finished
    boolean inProgress = true
    def parser = new XmlParser()
    def rootNode = null
    address = "http://localhost:8080/jenkins/job/" + project + "/lastBuild/api/xml?depth=1&tree=building&xpath=*/building"
    while (inProgress) {
        // Pause for 5 seconds, because we don't need to overload the server
        sleep(5000)
        // Ask the server whether the job is finished
        def baos = new ByteArrayOutputStream()
        baos << new URL(address).openStream()
        rootNode = parser.parseText(new String(baos.toByteArray()))
        inProgress = rootNode.text().toBoolean()
    }
}
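In the flow DSL, each dependent job can then be started with a call such as buildWithPolling("my-downstream-job"), where the job name is just a placeholder for one of your Jenkins jobs.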
It is probably not the best solution, but it works for me!

WebDriver + TestNG - How to handle test results

I'm quite new to WebDriver and the TestNG framework. I've started with a project that does a regression test of an e-commerce website. I'm done with the login, registration and so on, but there is something that I don't quite understand.
For example, I have this simple code that searches for a product:
driver.get(url + "/k/k.aspx");
driver.findElement(By.id("q")).clear();
driver.findElement(By.id("q")).sendKeys("xxxx"); //TODO: Make this dynamic
driver.findElement(By.cssSelector("input.submit")).click();
Now I want to check whether xxxx is present on the page. This can be done with:
webdriver.findElement(By.cssSelector("BODY")).getText().matches("^[\\s\\S]*xxxxxx[\\s\\S]*$")
I store this in a boolean and check whether it's true or false.
Now to the question: based on this boolean value I want to mark the test result as success or failure. How can I do that? What makes a TestNG test fail?
TestNG, like any other testing tool, decides the success or failure of a test based on assertions.
Assert.assertEquals(actualVal, expectedVal);
So if actualVal and expectedVal are equal the test will pass; otherwise it will fail.
You will find the other assertion options through code completion if you are using an IDE like Eclipse.
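As a minimal sketch (the class name and message text are made up for illustration), a failing assertion throws an AssertionError, which is what marks a TestNG test as failed:
import org.testng.Assert;
import org.testng.annotations.Test;

public class SearchResultTest {

    @Test
    public void searchResultContainsProduct() {
        // In the real test this boolean comes from the WebDriver check above,
        // e.g. driver.findElement(By.cssSelector("BODY")).getText().contains("xxxx")
        boolean found = true;

        // If the assertion fails, TestNG records an AssertionError and reports the test as failed;
        // otherwise the test passes.
        Assert.assertTrue(found, "Expected 'xxxx' to appear on the results page");
    }
}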
If you want to stop your test execution based on the verification of that text value, you can use hard asserts. However, if you want to log the outcome as a failure and carry on, try soft assertions, which record the verification as passed or failed and then continue with the test. Recent TestNG versions handle this out of the box; there is more info on Cedric's blog.
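A minimal soft-assertion sketch using TestNG's SoftAssert (the checks and messages are placeholders):
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertExample {

    @Test
    public void multipleChecksInOneTest() {
        SoftAssert softly = new SoftAssert();

        // Each failed check is recorded, but the test keeps running
        softly.assertTrue(pageContains("xxxx"), "product text missing");
        softly.assertTrue(pageContains("Add to cart"), "add-to-cart button missing");

        // Reports all recorded failures at once; the test fails here if any check failed
        softly.assertAll();
    }

    // Placeholder for whatever page check the real test performs
    private boolean pageContains(String text) {
        return true;
    }
}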
Write this code where your if condition fails:
throw new RuntimeException("XXXX not found: ");
You can throw an exception; each method that calls this one should then also declare throws Exception after its name, or you can use try/catch. Sample:
protected boolean assertIsCorrectURL(String expectedURL) throws Exception {
    String actualURL = this.driver.getCurrentUrl();
    if (actualURL.equals(expectedURL)) {
        return true;
    }
    String errMsg = String.format("Actual URL page: '%s'. Expected URL page: '%s'",
            actualURL, expectedURL);
    throw new Exception(errMsg);
}
You can do this:
boolean result = webdriver.findElement(By.cssSelector("BODY")).getText().matches("^[\\s\\S]*xxxxxx[\\s\\S]*$");
Assert.assertTrue(result);
