I've seen similar questions asked on this site, but my scenario is a bit different from what I have seen.
We have a PC client that executes JUnit 4 tests. However, we have a custom test runner that ships the JUnit 4 tests as JUnit 3 tests (using JUnit38ClassRunner) over Ethernet to a target system running a service that executes JUnit tests using JUnit 3.8. The tests execute as intended; however, when the results are returned to the PC client they are marked as "Unrooted Tests". Is there a way to organize these tests so they are not unrooted? It is somewhat difficult to sift through the failed results when they are all returned in one group and we are not using Eclipse. Using JUnit 4 on the remote system is not an option, as the target embedded system uses Java 1.4.2, and this is not changing anytime in the near future. We also really do not want to downgrade to JUnit 3.8 on the PC client side, because re-implementing what the @RunWith annotation gives us would take us a while to figure out.
Any assistance on this is appreciated, thanks in advance.
I wasn't able to find a solution for this in Eclipse, but using Ant to manage the JUnit execution generates XML reports, and I can view a summary in a web page by compiling those reports with the <junitreport> task.
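For reference, a minimal sketch of that Ant setup: the <junit> task writes per-test XML reports and <junitreport> merges them into browsable HTML. The target name, directories, and the test fileset pattern here are assumptions; adjust them to your layout.

```xml
<target name="test-report">
  <!-- Run the tests, emitting one XML result file per test class -->
  <junit printsummary="yes" fork="true">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="build/test-results">
      <fileset dir="test" includes="**/*Test.java"/>
    </batchtest>
  </junit>
  <!-- Merge the XML results into an HTML summary you can open in a browser -->
  <junitreport todir="build/reports">
    <fileset dir="build/test-results" includes="TEST-*.xml"/>
    <report format="frames" todir="build/reports/html"/>
  </junitreport>
</target>
```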
Related
We're using TestNG + Allure reports for our Java testing infrastructure. A test run might take up to 4 hours. We want to see the reports for tests that are already done in the suite.
Can I see a report in between? How can one implement it?
I'm using Jenkins but I believe the solution might be different (since Jenkins supports publishing the reports only as a post-build action).
I am trying to create the following setup:
A Selenium (Java) project that has a set of 10 automated test cases.
When this project is executed, it generates an HTML test execution report.
This project should be 'hosted' on an internal network.
Anyone who has access to the network should be able to 'invoke' this project, which in turn executes the test cases and passes the HTML report to the person who invoked it.
The project should be accessible ONLY for execution and the code should NOT be accessible.
My goal is that this implementation should be executable by any framework irrespective of the technology that the framework uses. I was thinking of creating the project as a WebService using Java (servlet).
My question is:
Can this implementation be accessed by any external automation framework?
Are there any limitations to this implementation?
Is there a better way to implement this requirement?
Thanks in advance.
You can create a Maven project and put your automated tests under the Maven test folder. Configure your tests to run through the POM.xml (use the Maven Surefire plugin). Then configure a Jenkins job to run the Maven tests. Anybody with access to Jenkins can build/run this job!
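A minimal Surefire configuration sketch for the POM, as described above (the version number and include pattern are assumptions; Surefire's default patterns already pick up `*Test` classes):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.22.2</version>
      <configuration>
        <includes>
          <!-- Which test classes under src/test/java to run -->
          <include>**/*Test.java</include>
        </includes>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With this in place, `mvn test` runs the suite, and the Jenkins job only needs to invoke that goal.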
The link below should give you a head start:
http://learn-automation.com/selenium-integration-with-jenkins/
As a matter of fact, this is something we did on one of our projects. As I cannot share specifics, I will give you an overall architectural view of the project.
The core of it all was a service that could run JUnit tests on request. It was a SOAP web service, but nothing stops you from making it REST. To implement this you need to write your own JUnit test runner (see, for example: http://www.mscharhag.com/java/understanding-junits-runner-architecture or https://github.com/junit-team/junit4/wiki/Test-runners)
If you use JUnit as the test framework for running your Selenium tests, this may be a great solution for you: JUnit will generate HTML reports if you configure it properly, the service hides the actual test suite implementation from users, and it runs the test suite on demand. This solution is also great because it operates at the JUnit level and does not care what kind of tests it actually runs, so it can be reused for any other kind of automated tests.
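To make the idea concrete, here is a minimal sketch of such a "run tests on demand" service. It assumes JUnit 4 is on the classpath, and it uses the JDK's built-in HTTP server rather than SOAP to keep the example self-contained; `SampleSuite` is a stand-in for the real test suite.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class TestRunnerService {

    // Stand-in for the real test suite; callers never see this source.
    public static class SampleSuite {
        @Test public void passes() { Assert.assertEquals(2, 1 + 1); }
    }

    // Runs the suite and renders a plain-text summary of the JUnit Result.
    static String runSuite() {
        Result result = JUnitCore.runClasses(SampleSuite.class);
        StringBuilder sb = new StringBuilder();
        sb.append("Tests run: ").append(result.getRunCount())
          .append(", failures: ").append(result.getFailureCount()).append('\n');
        for (Failure f : result.getFailures()) {
            sb.append(f.getDescription()).append(": ").append(f.getMessage()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Anyone on the network can GET /run; only results leave the server.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/run", (HttpExchange ex) -> {
            byte[] body = runSuite().getBytes("UTF-8");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();
    }
}
```

A real version would return the HTML report JUnit produces instead of plain text, but the shape is the same: HTTP request in, test execution server-side, report out.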
So to answer all your questions:
Can this implementation be accessed by any external automation framework? -> Yes, it can be accessed by anybody who is able to send HTTP requests.
Are there any limitations to this implementation? -> None that I am aware of.
Is there a better way to implement this requirement? -> Well, I didn't actually work with TestNG much, so I don't know whether it is easier or harder to do at the JUnit level. You can also use Jenkins or another CI tool to achieve the same results: they can run JUnit tests for you and almost always have an API ready for this, although those APIs may not be perfect.
So I'd say that if you need this for only one thing, you can use CI tools for the purpose; if you don't have CI tools available, then the choice has been made for you. However, in our experience, having this kind of service was a great asset for the company, and I really wonder why no such products are available elsewhere yet.
I have just inherited an old Java codebase (around 10-15 years old). It does not have any automated test coverage, or at least none that anyone today knows about. I am planning to write some FitNesse scripts around it, to begin with.
I know about Concordion etc., and I have my reasons for picking FitNesse. I will keep away from that, since it is not the topic of this question.
My question is: I don't know of a quick way to measure the code coverage achieved by the FitNesse tests as I write them. I know that JaCoCo (or a similar library) should be able to report it, but I just can't figure out exactly how.
So, if any of you have worked on FitNesse test scripts and have managed to have Jenkins report the coverage achieved by the scripts, please help.
Thanks.
I have not done this myself, because I tend to use FitNesse to test deployed applications (and I cannot measure code coverage in the 'real installation'). But I believe this should be fairly straightforward if you run your FitNesse tests as part of a Jenkins (or any other build server's) JUnit run, which measures code coverage.
To have your FitNesse tests executed as part of a JUnit run: create a Java class annotated with @RunWith(FitNesseRunner.class) and give it a @Suite("MyPageOrSuite.That.IWantToRun") annotation to indicate which tests to run. This will execute the specified page(s) in the same Java process that the JUnit process is using, so if that process is instrumented in some way to determine code coverage, the coverage of your FitNesse tests will be included in the report.
Sample JUnit test class, running FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable:
@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("../target/fitnesse-results")
public class FitNesseRunnerTest {
}
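I haven't verified this end-to-end, but with a Maven build the usual way to instrument that JUnit run is the jacoco-maven-plugin; a configuration sketch (the version number is an assumption):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.8</version>
  <executions>
    <execution>
      <goals>
        <!-- Attaches the JaCoCo agent to the JVM that runs the tests -->
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals>
        <!-- Writes the HTML/XML report under target/site/jacoco -->
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Because the FitNesse pages run inside the same instrumented JVM as the JUnit run, their coverage should be picked up, and Jenkins can then publish the generated JaCoCo report.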
I'm writing some system tests in Groovy and piggybacking on its unit testing infrastructure. It works pretty well, except that I don't like the default JUnit test runner, which prints a '.' for each test in the suite and waits until the end to report details about the errors and failures. The system tests can take a long time to run, so it's useful to be able to interrupt them in the middle once you know that a failure or error exists, but you can only do that if you know which test case failed and what the failure was while the suite is running.
Are there any alternate JUnit test runners which I could use that provide this functionality out of the box?
I'm not exactly sure what you mean by "textui", but IntelliJ IDEA 11 Community Edition has an integrated JUnit test runner that works with Groovy exactly as you describe (but it has a graphical user interface only, as far as I know).
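If an IDE is not an option, the same live feedback is easy to get from a small command-line runner: JUnit 4's RunListener receives callbacks as each test starts and fails, so failures show up immediately instead of at the end of the suite. A sketch, assuming JUnit 4 on the classpath; `SampleTest` is a stand-in for your real suite:

```java
import java.util.ArrayList;
import java.util.List;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public class LiveReportRunner {

    // Stand-in suite: one passing test, one deliberate failure.
    public static class SampleTest {
        @Test public void passes() { Assert.assertTrue(true); }
        @Test public void fails() { Assert.fail("deliberate failure"); }
    }

    // Runs the class, printing each event as it happens; returns the log
    // so the output can also be inspected programmatically.
    public static List<String> runWithLiveOutput(Class<?> testClass) {
        final List<String> log = new ArrayList<String>();
        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener() {
            @Override public void testStarted(Description d) {
                report(log, "STARTED  " + d.getMethodName());
            }
            @Override public void testFailure(Failure f) {
                // Reported the moment the test fails, not after the whole run.
                report(log, "FAILED   " + f.getDescription().getMethodName()
                        + ": " + f.getMessage());
            }
            @Override public void testFinished(Description d) {
                report(log, "FINISHED " + d.getMethodName());
            }
        });
        core.run(testClass);
        return log;
    }

    private static void report(List<String> log, String line) {
        System.out.println(line);
        log.add(line);
    }

    public static void main(String[] args) {
        runWithLiveOutput(SampleTest.class);
    }
}
```

Since Groovy runs on the JVM, the same listener can watch Groovy test classes too; once a FAILED line appears you can safely interrupt the run.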
I just want to quickly ask: I have found all over the internet, and even here on SO, how Selenium IDE can create Java source files from what you are doing in the browser. But all these sources result in some unit test. For Java, I believe JUnit and some others are supported by Selenium IDE.
But I want to ask: why? I mean, if you still need to compile them before executing, why are unit tests used instead of just running the code and seeing if WebDriver throws any exception? What is the advantage of using, for example, JUnit here? I know it's mostly used this way, I just don't know why. Thanks.
Here's a couple of reasons off the top of my head:
1) You can hook your selenium tests into your build process (and hence your CI process).
2) You can use JUnit assertions.
3) You can build up multiple suites of JUnit tests (which can then be run in parallel).
I'm sure there are more, but I guess it depends on the number of tests you have and the size of the project you are working on. If your project already has a set of JUnit tests, then it's quite nice to be able to write Selenium tests without too much effort.
If you use JUnit, you can quickly recover from failures before starting a new test with the @Before and @After annotations. You can also tear down the tests with them. This also makes the tests more organized.
You won't always get an exception. Your application can handle an exception/user input/etc. and navigate to a different page than expected without throwing anything. That can easily be verified with JUnit: assert the expected title of a page, or the presence of an element, against the actual values.
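As a concrete illustration of the @Before/@After and assertion points above, a sketch assuming JUnit 4. `FakeBrowser` is a hypothetical stand-in for a real WebDriver so the example stays self-contained; with Selenium you would swap in `new ChromeDriver()`, `driver.get(...)`, and `driver.getTitle()`.

```java
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class NavigationTest {

    // Hypothetical stand-in for a WebDriver instance.
    static class FakeBrowser {
        String currentTitle = "Home";
        void open(String page) { currentTitle = page; }
        void quit() { currentTitle = null; }
    }

    FakeBrowser browser;

    @Before
    public void startBrowser() {
        browser = new FakeBrowser(); // fresh session before every test
    }

    @After
    public void tearDown() {
        browser.quit(); // runs even when the test fails, so sessions never leak
    }

    @Test
    public void landsOnExpectedPage() {
        browser.open("Checkout");
        // A JUnit assertion catches a wrong page even when no exception is thrown:
        Assert.assertEquals("Checkout", browser.currentTitle);
    }
}
```

The point is that the pass/fail decision lives in the assertion, not in whether WebDriver happened to blow up, and the setup/teardown hooks keep each test isolated.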