When I'm running my tests on JUnit 5, the test report automatically collapses between one parameterized test and the next during execution:
For example, while the first device is running, the run report is expanded like that. When the first test finishes and the second run starts, all these lines collapse and I have to right-click "TestCases" at the top and choose the "Expand All" option.
Not sure if this is clear (I wish I could record a GIF), but I'd like to know whether there is an option to stop JUnit from collapsing the executions automatically, because sometimes I want to follow the execution in real time and see whether any test has failed, rather than only finding out at the end of the run.
JUnit is just sending out "events" to your IDE about tests being started and finished. JUnit has no control over how the IDE will animate this stream of events.
So I recommend that you open a feature request for your IDE vendor.
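For illustration only, this is roughly what that event stream looks like on the JUnit Platform side; a minimal sketch of a custom TestExecutionListener (the class name and log format are mine, not part of any IDE):

    import org.junit.platform.engine.TestExecutionResult;
    import org.junit.platform.launcher.TestExecutionListener;
    import org.junit.platform.launcher.TestIdentifier;

    // Sketch of a listener receiving the same start/finish events an IDE consumes.
    // How those events are rendered (expanded, collapsed, animated) is entirely up to the consumer.
    public class ConsoleEventListener implements TestExecutionListener {

        @Override
        public void executionStarted(TestIdentifier testIdentifier) {
            System.out.println("STARTED  " + testIdentifier.getDisplayName());
        }

        @Override
        public void executionFinished(TestIdentifier testIdentifier, TestExecutionResult result) {
            System.out.println("FINISHED " + testIdentifier.getDisplayName() + " -> " + result.getStatus());
        }
    }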
IntelliJ keeps track of test (JUnit) history, which is very nice...
But it registers failing tests as 'problems', which are picked up by commit checks. I want to commit non-working tests at the end of the day, so I want to be able to clear the test history.
However, I can't find that option anywhere, nor does 'delete' work on the selected test run.
How can I clear all of the test history?
In Settings, Editor, Inspections, select "Failed line in test" and turn the severity down to "Weak Warning".
A workaround would be to comment out the non-working tests, then run the suite again and commit with only the working tests.
Also put a TODO in the comments of the non-working tests, so that you know you have to check them later on.
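If you are on JUnit 5, an alternative to commenting the tests out is JUnit's @Disabled annotation, which keeps the code compiling while the commit check no longer sees a failing test; a minimal sketch (class and test names are made up):

    import org.junit.jupiter.api.Disabled;
    import org.junit.jupiter.api.Test;

    class CheckoutTest {

        // Hypothetical example: the TODO stays visible, but the test no longer runs or fails.
        @Disabled("TODO: broken by the new tax rules, re-enable once fixed")
        @Test
        void appliesRegionalTax() {
            // assertions go here once the feature is fixed
        }
    }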
I have an automated test that switches frames to a PDF viewer to read its text. I have four tests which basically do the same thing: look for a number, a letter... and finally check that a word does not exist in the PDF. If I run these tests individually, or all the frame-related tests, they work well. But if I run the full test suite (which includes around 500 tests), one of them systematically fails every time, showing me this error:
org.openqa.selenium.NoSuchFrameException: no such frame
I'm using try/catch, Thread.sleep... and all my tests work fine, but I cannot figure out why it's always the same test that fails, and why it works if I run it individually or run the whole feature. Just wondering if you can show me different reasons why this could happen, so I can improve my skills.
Does your test suite take screenshots when a test fails? If it doesn't, I would encourage you to implement a rule for it (there are numerous examples on the web). A screenshot could shed some light on what is going on.
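A rough sketch of such a rule, assuming JUnit 4 and that your test class can hand the rule its WebDriver (the directory and file names are placeholders):

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import org.junit.Rule;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    public abstract class ScreenshotOnFailure {

        // Subclasses provide the WebDriver instance the test is actually using.
        protected abstract WebDriver getDriver();

        @Rule
        public TestWatcher screenshotRule = new TestWatcher() {
            @Override
            protected void failed(Throwable e, Description description) {
                try {
                    // Save the screenshot under the failing test method's name.
                    File shot = ((TakesScreenshot) getDriver()).getScreenshotAs(OutputType.FILE);
                    Files.createDirectories(Paths.get("screenshots"));
                    Files.copy(shot.toPath(),
                               Paths.get("screenshots", description.getMethodName() + ".png"),
                               StandardCopyOption.REPLACE_EXISTING);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        };
    }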
It sounds like a performance issue though. When you run 1 test, there just isn't a lot of load on the system, and the frame is loaded fast enough for the test to locate it.
However, when you add in the whole suite, tests can sometimes run slower and steps could fail as a result.
It's possible that the failing step is the previous step. If the frame is supposed to load after clicking something, but the click action took place before the link was fully loaded (the link was not actually clicked), then the frame won't be there and the test fails.
It wouldn't matter how long the failing step waited, as the previous step was really the one that failed.
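One common mitigation along those lines is to replace fixed Thread.sleep calls with explicit waits, so the click only happens once the link is clickable and the frame switch only happens once the frame exists. A sketch only (the locators are hypothetical; on Selenium 3 the WebDriverWait constructor takes seconds instead of a Duration):

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class PdfFrameSteps {

        // Wait for the link, click it, then wait for the frame before switching into it.
        public static void openPdfViewer(WebDriver driver) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(20));
            wait.until(ExpectedConditions.elementToBeClickable(By.id("openPdfLink"))).click();
            wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt(By.id("pdfViewerFrame")));
        }
    }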
Is there a way to have Eclipse show JUnit assertion errors (or, more generally, unhandled exceptions during testing) automatically, without me going to the JUnit view, selecting the failed test, clicking the corresponding line on the right-hand side, and then clicking the corresponding button every single time? This is ludicrous. There surely must be a better way I have not found yet!
I'm pretty new to TestNG, coming from a Cucumber background.
In my current project, the Jenkins jobs are configured with Maven and TestNG, using Java and Selenium for the scripting.
Take any job with, say, 30 tests that takes two hours to complete: when it is abruptly terminated for some reason in the last minute, I do not get the results of any of the tests that ran successfully, so I am forced to run the entire job again.
All I see is the error stack trace and a result like:
Results :Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
I am sure there is a better way of designing this, and I'm hoping for better approaches.
How can I make sure I do not lose the results of the tests that did run (whether they passed or failed)?
Is a test management tool, or some entity to store the runtime results, a mandatory requirement, or does TestNG have some provision built in?
How can I make sure I do not lose the results of the tests that did run (whether they passed or failed)?
There are usually two ways in which you can build reporting into your TestNG-driven tests.
Batched mode - This is usually how all the TestNG reports are built. A listener that implements the org.testng.IReporter interface is created, and within its generateReport() all the logic of consolidating the test results into a report is carried out.
Realtime mode - This is usually done by implementing the TestNG listener org.testng.IInvokedMethodListener and then, within its afterInvocation(), doing the following:
Check the type of the incoming org.testng.IInvokedMethod (to see whether it's a configuration method or a @Test method) and handle these types of methods differently (if the report needs to show them separately). Then check the status of the org.testng.ITestResult and, based on that status, report the method as PASS/FAIL/SKIPPED.
IReporter implementations are run at the end, after all the tests have finished (which is why I call this batched mode). So if something crashes towards the end, before the reporting phase is executed, you lose all execution data.
So you might want to try to build a realtime reporting model. You can take a look at the RuntimeReporter that SeLion uses; it's built on the realtime model.
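A minimal sketch of the realtime idea, assuming plain TestNG (where the results end up - a file, a database, a dashboard - is up to you; writeLine() below is just a placeholder):

    import org.testng.IInvokedMethod;
    import org.testng.IInvokedMethodListener;
    import org.testng.ITestResult;

    public class RealtimeResultListener implements IInvokedMethodListener {

        @Override
        public void beforeInvocation(IInvokedMethod method, ITestResult result) {
            // Nothing to do before the method runs.
        }

        @Override
        public void afterInvocation(IInvokedMethod method, ITestResult result) {
            // Distinguish configuration methods from @Test methods if the report shows them separately.
            String kind = method.isTestMethod() ? "TEST" : "CONFIG";
            String status;
            switch (result.getStatus()) {
                case ITestResult.SUCCESS: status = "PASS"; break;
                case ITestResult.FAILURE: status = "FAIL"; break;
                case ITestResult.SKIP:    status = "SKIPPED"; break;
                default:                  status = "UNKNOWN"; break;
            }
            writeLine(kind + " " + result.getMethod().getMethodName() + " -> " + status);
        }

        // Placeholder sink: persist each line somewhere that survives an aborted run.
        private void writeLine(String line) {
            System.out.println(line);
        }
    }

Register the listener via @Listeners or in testng.xml so it applies to every test; because it writes after each method, an aborted job still leaves behind the results recorded up to that point.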
Is a test management tool, or some entity to store the runtime results, a mandatory requirement, or does TestNG have some provision built in?
TestNG places no such mandatory requirement. As I explained above, it all boils down to how you construct your reports. If you construct them in a realtime fashion, you can leverage templating engines such as Velocity/FreeMarker/Thymeleaf to build your reporting template and then use the IInvokedMethodListener to inject values into it, so that it can be rendered easily.
Read more here for a comparison of the templating engines so that you can choose what fits your needs.
I have plenty of test cases, say 100. When I want to run my regression/smoke tests, I can do that by dividing them into groups and running the testng.xml file. But my wish is to create a UI that lists the test case names and browsers: when I want to run 2-3 test cases, I'll just select the test cases and browsers and then click 'Run' (a button in my UI). The UI would then interact with testng.xml and pass values to it, so indirectly I want to edit the testng.xml file and then run the test suite. Can anyone help me out in this context, or give me links to an online tutorial or anything else I can learn from?
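For what it's worth, a hedged sketch of what the 'Run' button could do - building the suite in memory from the user's selection instead of hand-editing testng.xml (the class names and the browser parameter are invented placeholders):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.testng.TestNG;
    import org.testng.xml.XmlClass;
    import org.testng.xml.XmlSuite;
    import org.testng.xml.XmlTest;

    public class SuiteLauncher {

        // selectedTestClasses: fully qualified class names the user ticked in the UI.
        public static void runSelected(List<String> selectedTestClasses, String browser) {
            XmlSuite suite = new XmlSuite();
            suite.setName("Ad-hoc suite");

            XmlTest test = new XmlTest(suite);      // attaches the <test> to the suite
            test.setName("Selected tests");
            test.addParameter("browser", browser);  // read in tests via @Parameters("browser")

            List<XmlClass> classes = new ArrayList<>();
            for (String className : selectedTestClasses) {
                classes.add(new XmlClass(className));
            }
            test.setXmlClasses(classes);

            TestNG testng = new TestNG();
            testng.setXmlSuites(Collections.singletonList(suite));
            testng.run();
        }
    }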
I use Eclipse, Cucumber and tags to do this.