What I am doing: Using Jenkins to run the same test suites and test cases against various environments - dev / staging / production. I'm using WebDriver with a Java implementation and TestNG.
What I'd like to do: Selectively disable some tests, but not entire test suites, from running depending on the environment. Rather than maintain separate codebases between environments, I'd like to know of a way to accomplish this.
Initial thoughts: I was thinking of setting a system property in Jenkins for each job in each environment, and each test decorator would have to pull this piece of information out to determine whether it should be run or not. I think it's clunky, I'm not sure how to do it, and I'm not sure if this is the right approach.
Can someone tell me the best way to accomplish this? I'm hoping this isn't the best way.
Thanks,
Joe
Have you looked at TestNG listeners?
You can write a listener that runs just before the test suite starts, but after the tests to run have been identified, which iterates over the list of tests and removes those you do not want to run.
Because this is programmatic, you can write any Java you need to achieve what you want.
Also, you could create annotations to identify which tests run in which environment, e.g. annotate those tests with something like @RunInEnvironment({"UAT", "INT"}). Your listener could then use those annotations to remove tests from the list that are not required.
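For illustration, here's a minimal sketch of such a listener using TestNG's IMethodInterceptor, which is called after the run list is built but before any test executes. The @RunInEnvironment annotation, the test.env property name, and the environment names are all assumptions, not fixed API:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;
    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    import org.testng.IMethodInstance;
    import org.testng.IMethodInterceptor;
    import org.testng.ITestContext;

    // Hypothetical annotation naming the environments a test may run in.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface RunInEnvironment {
        String[] value();
    }

    public class EnvironmentInterceptor implements IMethodInterceptor {
        @Override
        public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
            // Environment name supplied by the Jenkins job, e.g. -Dtest.env=UAT
            String env = System.getProperty("test.env", "dev");
            return methods.stream()
                    .filter(m -> allowedIn(m, env))
                    .collect(Collectors.toList());
        }

        private boolean allowedIn(IMethodInstance m, String env) {
            Method method = m.getMethod().getConstructorOrMethod().getMethod();
            RunInEnvironment ann = method.getAnnotation(RunInEnvironment.class);
            // Tests without the annotation run everywhere.
            return ann == null || Arrays.asList(ann.value()).contains(env);
        }
    }

Register the interceptor with @Listeners on a base test class or in the suite XML, and have each Jenkins job set -Dtest.env accordingly.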
I think groups are your answer. With TestNG you can include/exclude groups; you just need to define which tests are in which groups.
http://testng.org/doc/documentation-main.html#exclusions
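For example, a sketch of how the groups might look (the group names and class are made up; you'd then pass -excludegroups destructive on the production job, or do the equivalent include/exclude in your suite XML):

    import org.testng.annotations.Test;

    public class CheckoutTests {

        // Safe in every environment.
        @Test(groups = { "smoke" })
        public void cartTotalIsComputed() { /* ... */ }

        // Excluded from the production run via -excludegroups destructive.
        @Test(groups = { "destructive" })
        public void ordersCanBeDeleted() { /* ... */ }
    }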
@MrTi this solution is rather static, and I believe he wants something more dynamic.
If detecting your environment can be done at startup, you may be able to try one of the solutions described in this thread. This framework might be useful as well: https://github.com/wolfs/testng-rules
Note: with JUnit you would use Rules: https://github.com/lacostej/web-validators/commit/2e1af8e1d9d1bf206849702d4231961563457815 (the implementation uses the old API)
Is there a program out there that can help me find all ignored JUnit tests?
By this I mean, I have seen unit tests that use @Ignore and tests with method names like ignore_testFoo() or xtestBar() or xxtestBar1(), all of which get ignored and are sometimes very hard to find.
I could grep for those cases, but I was wondering if there was an application that would find any of those situations automatically.
I tried using Cobertura to obtain coverage on the JUnit tests, to see which methods were and were not being executed, and picking out the bad unit tests that way.
I was just wondering if there was a program or another method to obtain this information without hacking something up.
A static analysis tool would serve you well here. Checkstyle is a decent choice amongst them: it has a long list of modules, and worst case you can easily write your own module to validate any coding convention you need.
You would locate or create a module for the task, then execute it to find any non-conforming code.
Edit
PMD looks to be an excellent choice to handle this task. It actually comes with a set of JUnit rules already built in, and it's very easy to combine rules or create new ones.
It should be easy to detect ignored tests in JUnit 3 with a grep over your Java test files: find all lines containing test and a parenthesis, but where the method name doesn't start with test.
For JUnit 4, you could:
* implement your own test runner by extending the default one and print out ignored tests
* build a small app that loads the test classes, gets all declared methods through reflection, and prints out those marked as ignored.
There may be a tool that does this, and maybe some runners already do, but it would actually only take a few hours to build these tools from scratch if you really need them.
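As a rough sketch of the second option (the class name and the naming-convention regex here are my own; adjust them to your codebase):

    import java.lang.reflect.Method;

    import org.junit.Ignore;

    // Prints every method JUnit would skip: either annotated with @Ignore,
    // or named so that JUnit 3 never picks it up (xtestFoo, ignore_testFoo, ...).
    public class IgnoredTestFinder {
        public static void main(String[] args) throws Exception {
            for (String className : args) {
                Class<?> testClass = Class.forName(className);
                for (Method m : testClass.getDeclaredMethods()) {
                    if (m.isAnnotationPresent(Ignore.class)) {
                        System.out.println(className + "#" + m.getName() + " (@Ignore)");
                    } else if (m.getName().matches("(x+|ignore_?)test.*")) {
                        System.out.println(className + "#" + m.getName() + " (naming convention)");
                    }
                }
            }
        }
    }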
I'm currently re-using JUnit 4 tests from another project against my code. I obtain them directly from the other project's repository as part of my automated Ant build. This is great, as it ensures I keep my code green against the very latest version of the tests.
However, there is a subset of tests that I never expect to pass on my code. But if I start adding @Ignore annotations to those tests, I will have to maintain my own separate copy of the test implementation, which I really don't want to do.
Is there a way of excluding individual tests without modifying the Test source? Here's what I have looked at so far:
As far as I can see, the Ant JUnit task only allows you to exclude entire Test classes, not individual test methods, so that's no good for me; I need method granularity.
I considered putting together a TestSuite that uses reflection to dynamically find and add all of the original tests, then add code to explicitly remove the tests I don't want to run. But I ditched that idea when I noticed that the TestSuite API doesn't provide a method for removing tests.
I can create my own Test classes that extend the original Test classes, override the specific tests I don't want to run, and annotate them with @Ignore. I then run JUnit on my subclasses. The downside here is that if new Test classes are added to the original project, I won't pick them up automatically. I'll have to monitor for new Test classes as they are added to the original project. This is my best option so far, but doesn't feel ideal.
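For what it's worth, a minimal sketch of that option (UpstreamFooTest stands in for one of the other project's test classes):

    import org.junit.Ignore;
    import org.junit.Test;

    // Inherits and runs every test from the original class except the overridden one.
    public class MyFooTest extends UpstreamFooTest {

        @Override
        @Test
        @Ignore("never expected to pass against my implementation")
        public void testUnsupportedFeature() {
            // Body is irrelevant; JUnit reports it as ignored.
        }
    }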
The only other option I can think of is to run the bad tests anyway and ignore the failures. However, these tests take a while to run (and fail!) so I'd prefer to not run them at all. Additionally, I can't see a way of telling the Ant task to ignore failures on specific test methods (again - I see how you can do it for individual Test classes, but not methods).
If you can't touch the original tests at all, you are going to have some serious limitations. Your overriding approach sounds like the best bet, but with a couple of changes:
Set up the Ant test run to specifically exclude the overridden superclasses, so that additional classes you don't know about still get run.
You can use the @Rule annotation (new to JUnit 4.7) to know what test is being run and abort it (by returning an empty Statement implementation) rather than overriding specific methods, giving you more flexibility in knowing whether or not to avoid the test. The only problem with this approach is that you can't stop the @Before methods from running, which may be slow. If that is a problem (and you really can't touch the tests) then @Ignore in the overridden method is the only thing I can think of.
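A sketch of that @Rule idea, using the TestRule interface from slightly later JUnit 4 releases (4.7's MethodRule works the same way); the hard-coded skip list is just for illustration and could come from a config file:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    public class SkipTestsRule implements TestRule {

        private final Set<String> skipped = new HashSet<>(
                Arrays.asList("testThatNeverPassesHere", "anotherUnwantedTest"));

        @Override
        public Statement apply(Statement base, Description description) {
            if (skipped.contains(description.getMethodName())) {
                // Empty Statement: the test "runs" and passes without doing anything.
                return new Statement() {
                    @Override
                    public void evaluate() {
                    }
                };
            }
            return base;
        }
    }

The overriding subclass then declares @Rule public SkipTestsRule skip = new SkipTestsRule(); once, instead of overriding each unwanted method.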
If, however, you can touch those tests, some additional options open up:
You could run them with a custom runner by specifying the @RunWith tag on the class. This runner would just pass execution over to the standard runner (JUnit4.class) in that project, but in your project (via a system property or some other mechanism) would inspect the test name and not run a test. This has the advantage of being the least intrusive, but the most difficult to implement (runners are hairy beasts; one of the stated goals of @Rule was to eliminate most of the need to write them).
Another is to make an assumeThat statement in the test that checks some configuration setting which is true only if that test should run. That would involve injecting code right into the test, which is most likely a deal breaker in anything remotely labeled a "separate project."
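That could look something like the following sketch (the property name is invented; a failed assumption is reported as a skip rather than a failure):

    import static org.junit.Assume.assumeTrue;

    import org.junit.Test;

    public class SharedBehaviourTest {

        @Test
        public void featureOnlyTheUpstreamProjectSupports() {
            // Hypothetical flag: the project that can pass this sets
            // -Dupstream.features.enabled=true; everyone else skips here.
            assumeTrue(Boolean.getBoolean("upstream.features.enabled"));
            // ... the real assertions would follow ...
        }
    }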
It doesn't help you now, but TestNG supports this sort of ability.
OK, this is a rather heavyweight solution, but don't throw things at me if it sounds ridiculous.
The core of JUnit 4 is the org.junit.runner.Runner class and its various subclasses, most importantly org.junit.runners.Suite. These runners determine what the tests are for a given test class, using things like @Test and @Ignore.
It's quite easy to create custom implementations of a runner, and normally you would hook them up by using the @RunWith annotation on your test classes, but obviously that's not an option for you.
However, in theory you could write your own Ant task, perhaps based upon the standard Ant JUnit task, which takes your custom test runner and uses it directly, passing each test class to it in turn. Your runner implementation could use an external config file which specifies which test methods to ignore.
It'd be quite a lot of work, and you'd have to spend time digging around in the prehistoric Ant JUnit codebase to find out how it works. The investment in time may be worth it, however.
It's just a shame that the JUnit Ant task doesn't provide a mechanism to specify the test Runner; that would be ideal.
A possibility I can think of to achieve what you want with the stated constraints is to use bytecode modification. You could keep a list of classes and methods to ignore in a separate file, and patch the bytecode of the test classes as you load them to remove these methods altogether.
If I am not mistaken, JUnit uses reflection to find the test methods to execute. A method rename operation would then allow you to hide these methods before JUnit finds them. Alternatively, the method can be modified to return immediately, without performing any operation.
A library like BCEL can be used to modify the classes when loaded.
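A very rough sketch with BCEL, assuming you post-process the compiled .class files before the test run rather than hooking the class loader (the paths and method names are placeholders):

    import java.util.Arrays;
    import java.util.List;

    import org.apache.bcel.classfile.ClassParser;
    import org.apache.bcel.classfile.JavaClass;
    import org.apache.bcel.classfile.Method;
    import org.apache.bcel.generic.ClassGen;

    // Strips the named test methods out of a compiled class so JUnit never sees them.
    public class TestMethodStripper {
        public static void main(String[] args) throws Exception {
            List<String> toRemove = Arrays.asList("testKnownFailure", "testOtherFailure");

            JavaClass original = new ClassParser("build/classes/FooTest.class").parse();
            ClassGen cg = new ClassGen(original);
            for (Method m : cg.getMethods()) {
                if (toRemove.contains(m.getName())) {
                    cg.removeMethod(m);
                }
            }
            // Overwrite the class file with the stripped version.
            cg.getJavaClass().dump("build/classes/FooTest.class");
        }
    }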
If you want to run only a subset of the tests, it sounds like that class has more than one responsibility and should be refactored. Alternately, the test class could be broken apart so that the original project keeps all the tests but in one or more classes (I'm guessing some of the tests are really integration tests that touch the database or network), and you could exclude the class(es) you didn't want.
If you can't do any of that, your option of overriding is probably best. Adopt the process that whenever you need to ignore some methods, you extend that class and add it to your Ant exclude list. That way you can exclude what you can't pass and will still pull in all new tests (methods you didn't override and new test classes) without modifying your build.
If the unwanted tests are in specific classes/packages, you could use a fileset exclude in Ant to exclude them during import.
Two options:
Work with the owner of the borrowed tests to extract the ones you need into a separate class you can both share.
Create your own test class which proxies the test class you want to use. For each method you want to include, have a method in your class. You'll need to construct an instance of the test class you are calling, and call its before and after methods too if they're in the original.
Create a custom JUnit runner based on BlockJUnit4ClassRunner and use it to filter the tests you want in or out.
At work we are currently still using JUnit 3 to run our tests. We have been considering switching over to JUnit 4 for new tests, but I have been keeping an eye on TestNG for a while now. What experiences have you had with either JUnit 4 or TestNG, and which seems to work better for very large numbers of tests? Having flexibility in writing tests is also important to us, since our functional tests cover a wide range of functionality and need to be written in a variety of ways to get results.
Old tests will not be re-written as they do their job just fine. What I would like to see in new tests though is flexibility in the way the test can be written, natural assertions, grouping, and easily distributed test executions.
I've used both, but I have to agree with Justin Standard that you shouldn't really consider rewriting your existing tests to any new format. Regardless of the decision, it is pretty trivial to run both. TestNG strives to be much more configurable than JUnit, but in the end they both work equally well.
TestNG has a neat feature where you can mark tests as a particular group, and then easily run all tests of a specific group, or exclude tests of a particular group. Thus you can mark tests that run slowly as in the "slow" group and then ignore them when you want quick results. A suggestion from their documentation is to mark some subset as "checkin" tests which should be run whenever you check new files in. I never saw such a feature in JUnit, but then again, if you don't have it, you don't REALLY miss it.
For all its claims of high configurability, I did run into a corner case a couple of weeks ago where I couldn't do what I wanted to do... I wish I could remember what it was, but I wanted to bring it up so you know that it's not perfect.
The biggest advantage TestNG has is annotations... which JUnit added in version 4 anyway.
First I would say: don't rewrite all your tests just to suit the latest fad. JUnit 3 works perfectly well, and the introduction of annotations in JUnit 4 doesn't buy you very much (in my opinion). It is much more important that you guys write tests, and it sounds like you do.
Use whatever seems most natural and helps you get your work done.
I can't comment on TestNG because I haven't used it. But I would recommend Unitils, a great wrapper for JUnit/TestNG/DBUnit/EasyMock, regardless of which route you take. (It supports all the flavors mentioned above.)
TestNG's biggest draw cards for me include its support for test groups, and more importantly, test group dependencies (marking a test as dependent on a group causes the test to simply skip when the dependent group fails).
TestNG's other big draw cards for me include test parameters, data providers, annotation transformers, and more than anything - the vibrant and responsive user community.
Whilst on the surface one might not think all of TestNG's features above are needed, once you start to understand the flexibility they bring to your tests, you'll wonder how you coped with JUnit.
(disclaimer - I've not used JUnit 4.x at all, so am unable to really comment on advances or new features there).
About a year ago, we had the same problem. I spent some time considering which move was better, and eventually we realized that TestNG has no 'killer features'. It's nice, and has some features JUnit 4 doesn't have, but we don't need them.
We didn't want people to feel uncomfortable writing tests while getting to know TestNG because we wanted them to keep writing a lot of tests.
Also, JUnit is pretty much the de facto standard in the Java world. There's no decent tool that doesn't support it out of the box, you can find a lot of help on the web, and they added a lot of new features in the past year, which shows it's alive.
We decided to stick with JUnit and never looked back.
Cheers to all the above. Some other things I've personally found I like more in TestNG are:
The @BeforeClass method in TestNG runs after class creation, so you aren't constrained to calling only static methods of your class in it (see the sketch below).
Parallel and parameterized tests. Maybe I just don't have enough of a life... but I get a kick out of writing one set of Selenium tests, accepting a driver name as a parameter, then defining 3 parallel test groups, 1 each for the IE, FF and Chrome drivers, and watching the race! I originally did 4, but way too many of the pages I've worked on break the HtmlUnit driver for one reason or another.
Yeah, probably need to find that life. ;)
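To illustrate the @BeforeClass point above, a small sketch; the fixture is a made-up example, the point being that in TestNG the setup method is an instance method:

    import java.util.ArrayList;
    import java.util.List;

    import org.testng.Assert;
    import org.testng.annotations.BeforeClass;
    import org.testng.annotations.Test;

    public class InstanceSetupTest {

        private List<String> fixture;

        // Runs once on the test instance, so it may touch instance state;
        // the JUnit 4 equivalent would have to be a static method.
        @BeforeClass
        public void setUpOnce() {
            fixture = new ArrayList<>();
            fixture.add("seed data");
        }

        @Test
        public void fixtureIsAvailable() {
            Assert.assertFalse(fixture.isEmpty());
        }
    }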
I wanted to share one I encountered today. I found the built-in Parameterized runner quite crude in JUnit 4 compared to TestNG (I know each framework has its strengths, but still). The JUnit 4 @Parameters annotation is restricted to one set of parameters per class. I encountered this problem while testing the valid and invalid behavior of the same functionality in one test class: the first public static annotated method it finds will be used, and it may find them in any order. This forces us to write separate classes unnecessarily. TestNG, however, provides a clean way to attach a different data provider to each method, so we can test the same unit of code the valid way and the invalid way in the same test class, keeping the valid/invalid data separate. I will go with TestNG.
One more advantage of TestNG is its support for parallel testing. In our era of multicores it's important, I think.
I have used both frameworks, but I use Hamcrest for assertions. Hamcrest allows you to easily write your own assert methods. So instead of
assertEquals(operation.getStatus(), Operation.Status.Active);
You can write
assertThat(operation, isActive());
That gives you the opportunity to use a higher level of abstraction in your tests, and that makes your tests more robust.
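A minimal sketch of what the isActive() matcher behind that assertion might look like (Operation is the hypothetical class from the example above):

    import org.hamcrest.Description;
    import org.hamcrest.TypeSafeMatcher;

    public class IsActive extends TypeSafeMatcher<Operation> {

        @Override
        protected boolean matchesSafely(Operation operation) {
            return operation.getStatus() == Operation.Status.Active;
        }

        @Override
        public void describeTo(Description description) {
            // Appears in failure messages: "Expected: an active operation".
            description.appendText("an active operation");
        }

        // Static factory so tests can write assertThat(operation, isActive()).
        public static IsActive isActive() {
            return new IsActive();
        }
    }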
JUnit 4 vs TestNG – a comparison by mkyong.com (updated in 2013).
Conclusion: I suggest using TestNG as the core unit test framework for Java projects, because TestNG is more advanced in parameterized testing, dependency testing, and suite testing (the grouping concept).
TestNG is meant for functional, high-level testing and complex integration tests. Its flexibility is especially useful with large test suites.
In addition, TestNG also covers the entire core JUnit 4 functionality. There's just no reason for me to use JUnit anymore.
In simple terms, TestNG = JUnit + a lot more. So why debate? Go and grab TestNG :-)
You can find more detailed comparison here.
Why do we use TestNG instead of JUnit?
The @BeforeClass and @AfterClass methods have to be declared static in JUnit, whereas there is more flexibility in TestNG's method declarations; it does not have these constraints.
In TestNG, we can parameterize tests in two ways: the @Parameters or @DataProvider annotation.
i) @Parameters for simple cases, where a key-value mapping is required (the data is provided through an XML file).
ii) @DataProvider for complex cases; it can provide data using a two-dimensional array.
In TestNG, since a @DataProvider method need not be static, we can use multiple data provider methods in the same test class.
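A minimal sketch of that @DataProvider style (the login scenario is invented for illustration):

    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class LoginTest {

        // Non-static provider: valid and invalid cases live in the same class.
        @DataProvider(name = "credentials")
        public Object[][] credentials() {
            return new Object[][] {
                    { "alice", "right-password", true },
                    { "alice", "wrong-password", false },
            };
        }

        @Test(dataProvider = "credentials")
        public void login(String user, String password, boolean expectSuccess) {
            // ... call the hypothetical login API and assert against expectSuccess ...
        }
    }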
Dependency testing: in TestNG, if the initial test fails, then all subsequent dependent tests are skipped, not marked as failed; JUnit marks them as failed.
Grouping: single tests can belong to multiple groups and then run in different contexts (like slow or fast tests). A similar feature exists in JUnit Categories, but it lacks the @BeforeGroups / @AfterGroups TestNG annotations that allow initializing the tests / tearing them down.
Parallelism: if you'd like to run the same test in parallel on multiple threads, TestNG has you covered with a simple-to-use annotation, while JUnit doesn't offer a simple way to do so out of the box.
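For instance, a sketch of the annotation-driven variant (parallel execution can also be configured at the suite level):

    import org.testng.annotations.Test;

    public class ParallelInvocationTest {

        // Runs the same test six times, spread across a pool of three threads.
        @Test(invocationCount = 6, threadPoolSize = 3)
        public void endpointHandlesConcurrentCalls() {
            // ... exercise the code under test ...
        }
    }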
TestNG's @DataProvider can also be fed from XML files, CSVs, or even plain text files.
TestNG allows you to declare dependencies between tests, and skip them if the dependency test didn’t pass.
@Test(dependsOnMethods = { "dependOnSomething" })
This functionality doesn't exist in JUnit.
Reporting:
TestNG reports are generated by default into a test-output folder, including HTML reports with all of the test data: passed/failed/skipped, how long the tests ran, which input was used, and the complete test logs. In addition, it also exports everything to an XML file which can be used to construct your own report template.
On the JUnit front, all of this data is also available via XML, but there's no out-of-the-box report and you need to rely on plugins.
Resource links:
A Quick JUnit vs TestNG Comparison
JUnit vs. TestNG: Which Testing Framework Should You Choose?
A good side-by-side comparison is given in this tutorial: TestNG Vs JUnit: What's the Difference?
A couple of additions to Mike Stone's reply:
1) The most frequent thing I use TestNG's groups for is when I want to run a single test method in a test suite. I simply add this test to the group "phil" and then run this group. When I was using JUnit 3, I would comment out the entries for all methods but the one I wanted to run in the suite() method, but then would commonly forget to uncomment them before check-in. With groups, I no longer have this problem.
2) Depending on the complexity of the tests, migrating tests from JUnit 3 to TestNG can be done somewhat automatically with sed and by creating a base class to replace TestCase that statically imports all of the TestNG assert methods.
I have info on my migration from JUnit to TestNG here and here.
My opinion about what makes TestNG truly far more powerful:
1. JUnit still requires the before/after class methods to be static, which limits what you can do prior to running the tests; TestNG never has this issue.
2. TestNG @Configuration methods can all take an optional argument to their annotated methods in the form of an ITestResult, XmlTest, Method, or ITestContext. This allows you to pass things around that JUnit wouldn't provide you. JUnit only does this in listeners, and it is limited in use.
3. TestNG comes with some pre-made report generation classes that you can copy, edit, and turn into your own beautiful test output with very little effort. Just copy the report class into your project and add a listener to run it. Also, ReportNG is available.
4. TestNG has a handful of nice listeners that you can hook onto, so you can do additional AOP-style magic at certain phases during testing.
Your question seems twofold to me. On one hand you would like to compare two test frameworks; on the other hand you would like to implement tests easily, have natural assertions, etc.
OK, firstly JUnit has been playing catch-up with TestNG in terms of functionality; they have bridged the gap somewhat with v4, but not well enough in my opinion. Things like annotations and data providers are still much better in TestNG. It is also more flexible in terms of test execution, since TestNG has test dependencies, grouping, and ordering.
JUnit still requires certain before/after methods to be static, which limits what you can do prior to running the tests; TestNG never has this issue.
TBH, mostly the differences between the two frameworks don't mean much, unless you're focusing on integration/automation testing. JUnit, from my experience, was built from the ground up for unit testing and is now being pushed towards higher levels of testing, which IMO makes it the wrong tool for the job. TestNG does well at unit testing and, due to its robust data providing and great test execution abilities, works even better at the integration/automation test level.
Now for what I believe is a separate issue: how to write well-structured, readable, and maintainable tests. Most of this I am sure you know, but things like the Factory pattern, the Command pattern, and PageObjects (if you're testing websites) are vital. It is very important to have a layer of abstraction between what you're testing (the SUT) and what the actual test is (assertions of business logic). In order to have much nicer assertions, you can use Hamcrest. Make use of Java's inheritance/interfaces to reduce repetition and enforce commonality.
Almost forgot: also use the Test Data Builder pattern; coupled with TestNG's data provider annotation, this is very useful.