I have recently converted my project from JUnit5 to TestNG, solely for the purpose of getting decent reports.
I have added a listener that generates the report at the end of each run:
@Override
public void onFinish(ITestContext context) {
    System.out.println("FINISH. Sending email report.");
    utils.EmailHandler.sendEmail("Finished test", context.toString());
}
My problem is that the reports being sent by email are not from the current run, as desired, but from the previous run.
Yet if I open the report at /test-output/custom-report.html in the Eclipse IDE, it is the correct one!
How do I ensure the emails sent out are current?
I have looked at a couple of similar questions here, but neither is applicable to my case:
TestNg emailable-report is not updating?
ReportNG HTML report not updating
It finally worked when I moved the call to sendEmail to the end of the listener's GenerateReport method. That removes all ambiguity and ensures that the output file is complete before attempting to send it out.
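For illustration, a minimal sketch of that ordering, assuming the listener is a custom IReporter (the writeReport helper and the report path are placeholders for whatever the real implementation does):

import java.util.List;
import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.xml.XmlSuite;

public class CustomReporter implements IReporter {
    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
        String report = outputDirectory + "/custom-report.html";
        writeReport(suites, report);  // write, flush, and close the report file first...
        // ...and only then email it, so the attachment reflects the current run
        utils.EmailHandler.sendEmail("Finished test", report);
    }

    private void writeReport(List<ISuite> suites, String path) {
        // placeholder for the actual report generation
    }
}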
Unless you're somehow attaching the old version, from your description I would say that the file is most likely created AFTER the email is sent (hence the previous version every time). However, if this theory is correct, it must have emailed an empty file the first time :) Did it?
Idea: insert a couple of minutes' delay in your code where the email is sent. Check the file as soon as the email leaves the ground; I think it will be the old version (as the new one hasn't been created yet!).
Have you tried using the @AfterTest annotation? I'm not sure, but onFinish(ITestContext context) could be firing somewhere between @AfterMethod and @AfterTest, causing your email to leave slightly early, before the report is fully written. I'm also not sure why you send an email after each test rather than after the whole suite has finished [so as to use onFinish(ISuite suite), sketched after the code below].
@AfterTest
public void afterTest(ITestContext context) throws IOException {
    // improving answer after initial comments:
    // make sure the report writer is flushed and closed before emailing
    if (bufferedWriter != null) {
        bufferedWriter.close();
    }
    System.out.println("FINISH. Sending email report.");
    utils.EmailHandler.sendEmail("Finished test", context.toString());
}
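For the suite-level variant, a minimal sketch using TestNG's ISuiteListener (the email call is the asker's own helper):

import org.testng.ISuite;
import org.testng.ISuiteListener;

public class EmailSuiteListener implements ISuiteListener {
    @Override
    public void onFinish(ISuite suite) {
        // runs once, after the whole suite has completed
        System.out.println("Suite finished. Sending email report.");
        utils.EmailHandler.sendEmail("Finished suite", suite.getName());
    }
}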
Best of luck!
PS. Nevertheless, I would highly recommend having a look at ExtentReports. Definitely better reporting than the built-in reports that come with TestNG!
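To give a flavour, a minimal sketch using the classic ExtentReports 2.x API (class and method names are from that version; later releases changed the API considerably, so treat this as illustrative):

import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;

// create (or replace) the HTML report and log one test into it
ExtentReports extent = new ExtentReports("test-output/extent-report.html", true);
ExtentTest test = extent.startTest("Login test");
test.log(LogStatus.PASS, "Logged in successfully");
extent.endTest(test);
extent.flush();  // writes the report to disk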
I know it is possible to delete the entire JMeter result tree through code:
import org.apache.jmeter.gui.GuiPackage;
import org.apache.jmeter.gui.JMeterGUIComponent;
import org.apache.jmeter.gui.tree.JMeterTreeNode;
import org.apache.jmeter.samplers.Clearable;

log.info("Clearing All ...");
GuiPackage guiPackage = GuiPackage.getInstance();
guiPackage.getMainFrame().clearData();
// clear every Clearable component in the test plan tree
for (JMeterTreeNode node : guiPackage.getTreeModel().getNodesOfType(Clearable.class)) {
    JMeterGUIComponent guiComp = guiPackage.getGui(node.getTestElement());
    if (guiComp instanceof Clearable) {
        Clearable item = (Clearable) guiComp;
        try {
            item.clearData();
        } catch (Exception ex) {
            log.error("Can't clear: " + node + " " + guiComp, ex);
        }
    }
}
but I don't want to delete the entire result tree, only the samples that returned status == 500. My API returns 500 until the callback is available for consultation; once it finds the callback, it returns "success". So while the API keeps retrying, those retries show up as errors in the report, even though the callback simply hasn't returned yet; when it does, the API returns the callback and succeeds. I would like to remove these retry requests from the report.
Add a JSR223 PostProcessor with the following code to ignore the test result when the response code is 500:
if (prev.getResponseCode() == "500") {
    prev.setIgnore()  // tell listeners to skip this sample
}
prev - (SampleResult) - gives access to the previous SampleResult (if any)
API documentation for prev variable (SampleResult)
I don't think it's possible without modifying the JMeter source code or making heavy use of reflection; in any case, that answer would not fit here.
In general:
You should be using the JMeter GUI only for test development and debugging; when it comes to test execution, you should run your test in command-line non-GUI mode
You should not be using listeners during test execution, as they don't add any value and just consume valuable resources; all the necessary information is stored in the .jtl results file
There is the Filter Results plugin, which allows removing the "unwanted" data from the .jtl results file
You can also generate the HTML Reporting Dashboard out of the .jtl results file; the Dashboard has its own response-filtering facilities
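For reference (assuming a standard JMeter installation), the dashboard can be generated from an existing results file with jmeter -g results.jtl -o <output-folder>, or produced automatically at the end of a non-GUI run by adding -e -o <output-folder> to the usual jmeter -n -t test.jmx -l results.jtl command line.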
Guys, first of all, thanks for the answers. I ended up giving up on the idea of removing the result from the tree and instead looked for a way to change how that result is presented.
I made a BeanShell Assertion, so that if status == 500, it would show "Success" in the result tree:
(screenshot: BeanShell Assertion)
I also made it so that, if it was a retry, the name displayed in the results tree would indicate this, making the API name change depending on the response:
(screenshot: the API name set from a variable)
and I have this logic:
import org.apache.jmeter.samplers.SampleResult;

// process main sample: mask retries (HTTP 500) as successful
if ("500".equals(vars.get("status"))) {  // "status" is the JMeter variable holding the response code
    SampleResult.setResponseCodeOK();
    SampleResult.setSuccessful(true);
    vars.put("Api_Fake_Client_name", "API_FAKE_CLIENT_RETRY");
}
I will configure the responses for the other conditions, but I believe this approach will solve my problem, because the retries no longer appear as errors.
I work with JBehave on a daily basis, but have been tasked with working on a project that uses Cucumber. In order to add custom reporting functionality to that project, I need to add two steps: one at the start of the feature (story) and another at the start of the scenario. I merely want to pass a description of the feature/story and of the scenario to the reporting module. I know that Cucumber can access the scenario name through code, but that would only resolve one of the two lines; I would still need another one that passes the description of the feature/story.
What I've tried in the feature file:
Feature: Ecolab BDD Test Automation Demo
Scenario Outline: User can login and logout from the landing page
Given story "EcolabWebDemo_TestCases - Ecolab BDD Test Automation Demo"
Given scenario "User can login and logout from the landing page"
Given I am on the Ecolab landing page
The corresponding code for the two added Given statements at the beginning above:
#Given("^story {string}$") // \"(\\S+)\"
public void givenStory(String storyName) {
test.initStory(storyName); // will show on report in Features column
}
#Given("^scenario {string}$") // \"(\\S+)\"
public void givenScenario(String scenarioName) {
test.initScenario(scenarioName);
}
The regex patterns in the trailing comments are the ones I was advised to try, but they do not seem to work either.
The current configuration at least seems to "find" the steps but reports:
cucumber.runtime.CucumberException:
java.util.regex.PatternSyntaxException: Illegal repetition near index
13 ^the scenario {string}$
So that's obviously not the solution. The suggested regex used instead of {string} simply never matches, so the step does not run.
Regex is absolute Greek to me; I'm not sure why it can't just be as simple as the {string} option implied it would be in the Cucumber documentation. I've been searching online for guidance for the better part of two days to no avail; apparently I don't even know what to search for.
Based on Grasshopper's suggestion, I updated the version of Cucumber from 1.2.0 to 1.2.5. I was prepared to change the pom.xml to use the 3.x versions, but tried the latest of the specified libraries first, and after an attempted run it did report what the correct regex should be for the two steps I added. (It turns out {string} is a Cucumber Expression, which Cucumber 1.x does not support; the annotation text is treated as pure regex, in which {string} parses as a malformed repetition quantifier, hence the PatternSyntaxException.)
#Given("^story \"([^\"]*)\"$")
and
#Given("^scenario \"([^\"]*)\"$")
Now that the project has a version that seems to recognize strings and also reports the missing steps, the project now runs as intended.
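Put together, the working step definitions look like this (a sketch; test.initStory and test.initScenario are the reporting hooks from the question):

@Given("^story \"([^\"]*)\"$")
public void givenStory(String storyName) {
    test.initStory(storyName); // shows on the report in the Features column
}

@Given("^scenario \"([^\"]*)\"$")
public void givenScenario(String scenarioName) {
    test.initScenario(scenarioName);
}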
Thanks for your help, Grasshopper.
I'm developing a skill for Amazon Alexa and trying to test it using echosim.io, but I have the problem described below.
My skill name is MyBot, and the invocation name is the same.
In echosim.io, when I say "Alexa, launch MyBot", it gives the welcome response (the help response that I've coded in). When I say "help", it gives me the help response that I've entered.
I have 4 intents:
FaqIntentOne
FIntentOne
FaqIntentTwo
FIntentTwo
And my sample utterances are as below:
FaqIntentOne what is first answer
FIntentOne give me first answer
FaqIntentTwo what is second answer
FIntentTwo give me second answer
When I try these utterances, Alexa doesn't give me a response.
I have the correct handler methods and the correct responses set up there. Please let me know why it is not working for utterances other than the built-in ones.
When I test in Alexa's test interface on developer.amazon.com, it gives me the correct response.
This is quite confusing.
Below is how it looks in my code.
if ("FaqIntentOne".equals(intentName) || "FIntentOne".equals(intentName)) {
return getFirstHelp(intent, session);
}
else if ("FaqIntentTwo".equals(intentName) || "FIntentTwo".equals(intentName)) {
return getSecondHelp(intent, session);
}
Thanks
Though Amazon has referred people to echosim, it is not 'official' (it was developed by a third party), so if your skill works in Amazon's test environment and not in echosim, then it is possible that the issue is with echosim.
Otherwise, I think you are going to need to look more closely at what is happening in your code, i.e. debug it or put in some print statements and compare what happens when it is invoked in those two ways.
If you are running on Lambda - which seems to be the most common setup - then you will need to take a look at the CloudWatch logs.
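For example, a minimal sketch of such a print statement, placed just before the dispatch shown in the question (intent is the variable from that code):

// log what actually arrived before branching on it;
// when running on Lambda, this output ends up in the CloudWatch logs
System.out.println("Received intent: " + (intent != null ? intent.getName() : "null"));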
I have a PageObject startPage where I have a login and a logout method. The login method works fine and is executed in the @BeforeScenario:
@BeforeScenario
public void login() {
    // {..} declaration of baseUrl, user, password...
    homeVM.setDefaultBaseUrl(baseUrl);
    homeVM.open();
    homeVM.login(user, password);
}
and login(user,password) in class homeVM is like:
typeInto(find(By.id(getUserFieldId())), user);
typeInto(find(By.id(getPasswordFieldId())), password);
findBy(getLoginButtonXPath()).then().click();
so nothing special, this works all fine.
Then I switch through several PageObjects in different test steps without a problem, until the code reaches the @AfterScenario, which looks like:
@AfterScenario
public void logout() {
    homeVM.logoff();
}
and class homeVM with method logoff() looks like:
WebElement btnLogout = getDriver().findElement(By.xpath("//a[contains(@class,'lnkLogout')]"));
btnLogout.click();
But this isn't working (nothing happens, no exception, no click.. just nothing). Then I tried to log some information about getDriver() with:
System.out.println("WindowHandles:"+getDriver().getWindowHandles().size());
System.out.println("Title: "+getDriver().getTitle());
and both values are just empty (""). So it seems that getDriver() is just empty (not even null, so I don't get a NullPointerException). Why is that? I checked getDriver() for the last PageObject I used in my test, and there I get all the information I need; only getDriver() in the @AfterScenario is empty. Any idea or solution as to what to do next, or why this is happening? I'm using ChromeDriver.
EDIT:
Okay, I recognized something unexpected:
I have an assertThat(<something>) method in my last step, and this step is actually producing an assertion failure (because the behaviour is not implemented yet)... and if I comment this assertThat() out, the @AfterScenario and its logout are executed correctly. So the WebDriver gets "emptied" if the test fails? Is this on purpose?
EDIT2:
If I catch the AssertionError, the test runs fine again, but of course the test will be marked as "Test Passed". So it really does have something to do with the current WebDriver being emptied when the exception is thrown. But this seems wrong...
Once Serenity (or Thucydides in this case) spots a test failure (e.g. from an assertion error), the test is switched to "dry-run" mode, as it considers the subsequent steps compromised and wants to avoid unnecessary (and slow) WebDriver calls.
As I found out from John Smart, once Serenity spots a test failure the test is switched to "dry-run" mode and no WebDriver calls are possible any more, so I had to find another way to perform a logout.
As my ChromeDriver by default runs all scenarios in the same session and browser, I would have had to perform a manual logout after every scenario. But by setting
System.setProperty("restart.browser.each.scenario", "true");
it was possible to restart the browser and get a clean session after every scenario. This worked for me, so I no longer need the @AfterScenario with logoff().
Overcoming the issue in a Cucumber + Watir framework:
require 'date'  # DateTime needs this

filename = DateTime.now.strftime("%Y-%m-%d--%Hh_%Mm_%Ss")
@browser.driver.save_screenshot("#{filename}.png")
Note:
filename is the name of the screenshot file.
You can pass the location of the screenshot file as well, like this:
@browser.driver.save_screenshot("/Screenshots/#{filename}.png")
I'm writing an Eclipse plugin with a custom launch configuration, i.e. a launch() method inside a subclass of LaunchConfigurationDelegate. This method essentially just calls Runtime.exec(), but when I write to System.out from within launch() it goes to the console of the Eclipse instance which is debugging the plugin, rather than to the console of the plugin instance itself. I've analysed the ILaunchConfiguration and ILaunch arguments to the method but cannot find anywhere that they specify any output/error streams I can write to.
As recommended in the tutorials, I have 2 separate plugins running together: one which handles the UI stuff (LaunchConfigurationTab, LaunchConfigurationTabGroup, LaunchShortcut) and another which contains the LaunchConfigurationDelegate itself.
I created a console in my UI plugin using this code, and I can write to it fine from within the UI code. But I cannot figure out how to direct output generated in my non-UI plugin to the console created in my UI plugin.
I've read this post and this one, but they do not specify how to "get ahold" of the output which is generated within the launch() method in the first place.
Any pointers would be really welcome, I am stuck!
Well I finally managed to get something working as follows:
In my LaunchConfigurationDelegate I introduced the following static method:
// redirect the plugin's stdout/stderr to the given console stream
public static void setConsole(PrintStream ps) {
    System.setOut(ps);
    System.setErr(ps);
}
Then when creating my console in my UI plugin's PerspectiveFactory I call it as follows:
private void createConsole() {
    console = new MessageConsole("My Console", null);
    console.activate();
    ConsolePlugin.getDefault().getConsoleManager().addConsoles(new IConsole[]{ console });
    MessageConsoleStream stream = console.newMessageStream();
    MyLaunchConfigurationDelegate.setConsole(new PrintStream(stream));
}
This works, except that every time I close Eclipse and restart it, the console disappears. However, when I reset my perspective, the console appears again. So obviously I need that code to be called on startup, not in the PerspectiveFactory itself.
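One way to get it called on startup is a workbench early-startup hook (a sketch, untested: it assumes the UI plugin registers this class under the org.eclipse.ui.startup extension point in plugin.xml, and that the console-creation code is reachable from here; the class name is hypothetical):

import org.eclipse.ui.IStartup;

public class ConsoleStartup implements IStartup {
    @Override
    public void earlyStartup() {
        // runs once when the workbench starts, independently of any perspective,
        // so the console is created and wired up for every session
        createConsole();
    }

    private void createConsole() {
        // ... same body as the createConsole() shown above ...
    }
}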
Hope this helps someone... and if anybody has input on this last problem (or on my approach in general), please do comment!