Is there currently a way to disable a TestNG test based on a condition?
I know you can currently disable a test in TestNG like so:
@Test(enabled=false, groups={"blah"})
public void testCurrency(){
...
}
I would like to disable the same test based on a condition but don't know how. Something like this:
@Test(enabled=(isUk() ? false : true), groups={"blah"})
public void testCurrency(){
...
}
Does anyone have a clue whether this is possible or not?
An easier option is to use the @BeforeMethod annotation on a method which checks your condition. If you want to skip the tests, then just throw a SkipException. Like this:
@BeforeMethod
protected void checkEnvironment() {
if (!resourceAvailable) {
throw new SkipException("Skipping tests because resource was not available.");
}
}
You have two options:
Implement an annotation transformer.
Use BeanShell.
Your annotation transformer would test the condition and then override the @Test annotation to add the attribute "enabled=false" if the condition is not satisfied.
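For the BeanShell route, a minimal sketch of a method selector in testng.xml could look like the following. This is a hedged example: the group name "uk" and the test name are assumptions, and TestNG exposes the method, testngMethod, and groups variables to the script; returning false skips the method.
<test name="regression">
  <method-selectors>
    <method-selector>
      <script language="beanshell"><![CDATA[
        // Hypothetical condition: skip methods belonging to the "uk" group.
        !groups.containsKey("uk")
      ]]></script>
    </method-selector>
  </method-selectors>
  <classes>
    <class name="TestSuite"/>
  </classes>
</test>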
There are two ways that I know of that allow you to control "disabling" tests in TestNG.
The differentiation that is very important to note is that SkipException will break out of all subsequent tests, while implementing IAnnotationTransformer uses reflection to disable individual tests, based on a condition that you specify. I will explain both SkipException and IAnnotationTransformer.
SkipException example
import java.lang.reflect.Method;
import org.testng.*;
import org.testng.annotations.*;
public class TestSuite
{
// You set this however you like.
boolean myCondition;
// Executed before each test is run
@BeforeMethod
public void before(Method methodName){
// Check the condition; note that once your condition is met, the rest of the tests will be skipped as well
if(myCondition)
throw new SkipException("Skipping remaining tests: condition was met.");
}
@Test(priority = 1)
public void test1(){}
@Test(priority = 2)
public void test2(){}
@Test(priority = 3)
public void test3(){}
}
IAnnotationTransformer example
A bit more complicated, but the idea behind it is a concept known as reflection.
Wiki - http://en.wikipedia.org/wiki/Reflection_(computer_programming)
First implement the IAnnotationTransformer interface and save it in a *.java file.
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;
public class Transformer implements IAnnotationTransformer {
// Do not worry about calling this method, as TestNG calls it behind the scenes before EVERY method (or test).
// It will disable single tests, not the entire suite like SkipException
public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod){
// If we have chosen not to run this test, then disable it.
if (disableMe()){
annotation.setEnabled(false);
}
}
// Logic YOU control; return true to disable the test
private boolean disableMe() {
return false; // replace with your own condition
}
}
Then in your test suite Java file, do the following in the @BeforeClass function:
import org.testng.*;
import org.testng.annotations.*;
/* Execute before the tests run. */
@BeforeClass
public void before(){
TestNG testNG = new TestNG();
testNG.setAnnotationTransformer(new Transformer());
}
@Test(priority = 1)
public void test1(){}
@Test(priority = 2)
public void test2(){}
@Test(priority = 3)
public void test3(){}
One last step is to ensure that you add a listener in your build.xml file.
Mine ended up looking like this (this is just a single element from the build.xml):
<testng classpath="${test.classpath}:${build.dir}" outputdir="${report.dir}"
haltonfailure="false" useDefaultListeners="true"
listeners="org.uncommons.reportng.HTMLReporter,org.uncommons.reportng.JUnitXMLReporter,Transformer"
classpathref="reportnglibs"></testng>
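If you are not driving TestNG from Ant, the transformer can also be registered with the -listener flag on the command line. A hedged sketch, with placeholder classpath entries:
java -cp "build:lib/*" org.testng.TestNG -listener Transformer testng.xml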
I prefer this annotation-based way of disabling/skipping some tests based on environment settings. It is easy to maintain and does not require any special coding technique.
Using the IInvokedMethodListener interface
Create a custom annotation, e.g. @SkipInHeadlessMode (a sketch of the annotation follows below)
Throw SkipException
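For completeness, a minimal sketch of what such a marker annotation could look like; the retention and target are the usual choices for annotations read via reflection at runtime:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
// Marker annotation; carries no logic, it is only detected by the listener below.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SkipInHeadlessMode {
}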
public class ConditionalSkipTestAnalyzer implements IInvokedMethodListener {
protected static PropertiesHandler properties = new PropertiesHandler();
@Override
public void beforeInvocation(IInvokedMethod invokedMethod, ITestResult result) {
Method method = result.getMethod().getConstructorOrMethod().getMethod();
if (method == null) {
return;
}
if (method.isAnnotationPresent(SkipInHeadlessMode.class)
&& properties.isHeadlessMode()) {
throw new SkipException("These Tests shouldn't be run in HEADLESS mode!");
}
}
@Override
public void afterInvocation(IInvokedMethod iInvokedMethod, ITestResult iTestResult) {
//Auto generated
}
}
Check the details here:
https://www.lenar.io/skip-testng-tests-based-condition-using-iinvokedmethodlistener/
A third option can also be Assumption
Assumptions for TestNG - when an assumption fails, TestNG will be instructed to ignore the test case and will thus not execute it. There are two ways to use it:
Using the @Assumption annotation
Using AssumptionListener with the Assumes.assumeThat(...) method
You can use this example: Conditionally Running Tests In TestNG
Throwing a SkipException in a method annotated with @BeforeMethod did not work for me, because it skipped all the remaining tests of my test suite regardless of whether a SkipException was thrown for those tests.
I did not investigate it thoroughly, but I found another way: using the dependsOnMethods attribute of the @Test annotation:
import org.testng.SkipException;
import org.testng.annotations.Test;
public class MyTest {
private boolean conditionX = true;
private boolean conditionY = false;
@Test
public void isConditionX(){
if(!conditionX){
throw new SkipException("skipped because of X is false");
}
}
@Test
public void isConditionY(){
if(!conditionY){
throw new SkipException("skipped because of Y is false");
}
}
@Test(dependsOnMethods="isConditionX")
public void test1(){
}
@Test(dependsOnMethods="isConditionY")
public void test2(){
}
}
SkipException: it's useful when we have only one @Test method in the class. For example, in a data-driven framework I have only one test method, which needs to be either executed or skipped on the basis of some condition. Hence I've put the logic for checking the condition inside the @Test method and got the desired result; a sketch of this pattern follows below.
It helped me get the Extent Report with the test case result as Pass/Fail, and the particular Skips as well.
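A minimal sketch of this pattern, assuming a hypothetical data provider whose rows carry a run/skip flag (the provider name and flag values are assumptions, not part of the original answer):
import org.testng.SkipException;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
public class DataDrivenTest {
    // Hypothetical data rows: a yes/no flag plus the actual test input.
    @DataProvider(name = "rows")
    public Object[][] rows() {
        return new Object[][] {
            {"yes", "GBP"},
            {"no", "USD"}
        };
    }
    @Test(dataProvider = "rows")
    public void testCurrency(String run, String currency) {
        // The condition check lives inside the single @Test method;
        // skipped rows are reported as Skip rather than Pass/Fail.
        if (!"yes".equalsIgnoreCase(run)) {
            throw new SkipException("Row disabled for currency: " + currency);
        }
        // ... actual assertions go here
    }
}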
Related
Suppose I develop an extension which disallows test method names starting with an uppercase character.
public class DisallowUppercaseLetterAtBeginning implements BeforeEachCallback {
@Override
public void beforeEach(ExtensionContext context) {
char c = context.getRequiredTestMethod().getName().charAt(0);
if (Character.isUpperCase(c)) {
throw new RuntimeException("test method names should start with lowercase.");
}
}
}
Now I want to test that my extension works as expected.
@ExtendWith(DisallowUppercaseLetterAtBeginning.class)
class MyTest {
@Test
void validTest() {
}
@Test
void TestShouldNotBeCalled() {
fail("test should have failed before");
}
}
How can I write a test to verify that the attempt to execute the second method throws a RuntimeException with a specific message?
Another approach could be to use the facilities provided by the new JUnit 5 - Jupiter framework.
I put below the code which I tested with Java 1.8 on Eclipse Oxygen. The code suffers from a lack of elegance and conciseness but could hopefully serve as a basis to build a robust solution for your meta-testing use case.
Note that this is actually how JUnit 5 itself is tested; I refer you to the unit tests of the Jupiter engine on GitHub.
public final class DisallowUppercaseLetterAtBeginningTest {
@Test
void testIt() {
// Warning here: I checked the test container created below will
// execute on the same thread as used for this test. We should remain
// careful though, as the map used here is not thread-safe.
final Map<String, TestExecutionResult> events = new HashMap<>();
EngineExecutionListener listener = new EngineExecutionListener() {
@Override
public void executionFinished(TestDescriptor descriptor, TestExecutionResult result) {
if (descriptor.isTest()) {
events.put(descriptor.getDisplayName(), result);
}
// skip class and container reports
}
@Override
public void reportingEntryPublished(TestDescriptor testDescriptor, ReportEntry entry) {}
@Override
public void executionStarted(TestDescriptor testDescriptor) {}
@Override
public void executionSkipped(TestDescriptor testDescriptor, String reason) {}
@Override
public void dynamicTestRegistered(TestDescriptor testDescriptor) {}
};
// Build our test container and use Jupiter fluent API to launch our test. The following static imports are assumed:
//
// import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass
// import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request
JupiterTestEngine engine = new JupiterTestEngine();
LauncherDiscoveryRequest request = request().selectors(selectClass(MyTest.class)).build();
TestDescriptor td = engine.discover(request, UniqueId.forEngine(engine.getId()));
engine.execute(new ExecutionRequest(td, listener, request.getConfigurationParameters()));
// Bunch of verbose assertions, should be refactored and simplified in real code.
assertEquals(new HashSet<>(asList("validTest()", "TestShouldNotBeCalled()")), events.keySet());
assertEquals(Status.SUCCESSFUL, events.get("validTest()").getStatus());
assertEquals(Status.FAILED, events.get("TestShouldNotBeCalled()").getStatus());
Throwable t = events.get("TestShouldNotBeCalled()").getThrowable().get();
assertEquals(RuntimeException.class, t.getClass());
assertEquals("test method names should start with lowercase.", t.getMessage());
}
}
Though a little verbose, one advantage of this approach is that it doesn't require mocking and executes the tests in the same JUnit container as will be used later for the real unit tests.
With a bit of clean-up, a much more readable code is achievable. Again, JUnit-Jupiter sources can be a great source of inspiration.
If the extension throws an exception then there's not much a @Test method can do, since the test runner will never reach the @Test method. In this case, I think, you have to test the extension outside of its use in the normal test flow, i.e. let the extension be the SUT.
For the extension provided in your question, the test might be something like this:
@Test
public void willRejectATestMethodHavingANameStartingWithAnUpperCaseLetter() throws NoSuchMethodException {
ExtensionContext extensionContext = Mockito.mock(ExtensionContext.class);
Method method = Testable.class.getMethod("MethodNameStartingWithUpperCase");
Mockito.when(extensionContext.getRequiredTestMethod()).thenReturn(method);
DisallowUppercaseLetterAtBeginning sut = new DisallowUppercaseLetterAtBeginning();
RuntimeException actual =
assertThrows(RuntimeException.class, () -> sut.beforeEach(extensionContext));
assertThat(actual.getMessage(), is("test method names should start with lowercase."));
}
@Test
public void willAllowTestMethodHavingANameStartingWithAnLowerCaseLetter() throws NoSuchMethodException {
ExtensionContext extensionContext = Mockito.mock(ExtensionContext.class);
Method method = Testable.class.getMethod("methodNameStartingWithLowerCase");
Mockito.when(extensionContext.getRequiredTestMethod()).thenReturn(method);
DisallowUppercaseLetterAtBeginning sut = new DisallowUppercaseLetterAtBeginning();
sut.beforeEach(extensionContext);
// no exception - good enough
}
public class Testable {
public void MethodNameStartingWithUpperCase() {
}
public void methodNameStartingWithLowerCase() {
}
}
However, your question suggests that the above extension is only an example. So, more generally: if your extension has a side effect (e.g. sets something in an addressable context, populates a system property, etc.), then your @Test method could assert that this side effect is present. For example:
public class SystemPropertyExtension implements BeforeEachCallback {
@Override
public void beforeEach(ExtensionContext context) {
System.setProperty("foo", "bar");
}
}
@ExtendWith(SystemPropertyExtension.class)
public class SystemPropertyExtensionTest {
@Test
public void willSetTheSystemProperty() {
assertThat(System.getProperty("foo"), is("bar"));
}
}
This approach has the benefit of side-stepping the potentially awkward setup steps of creating the ExtensionContext and populating it with the state required by your test, but it may come at the cost of limiting the test coverage, since you can really only test one outcome. And, of course, it is only feasible if the extension has a side effect which can be evaluated in a test case which uses the extension.
So, in practice, I suspect you might need a combination of these approaches; for some extensions the extension can be the SUT and for others the extension can be tested by asserting against its side effect(s).
After trying the solutions in the answers and the question linked in the comments, I ended up with a solution using the JUnit Platform Launcher.
class DisallowUppercaseLetterAtBeginningTest {
@Test
void should_succeed_if_method_name_starts_with_lower_case() {
TestExecutionSummary summary = runTestMethod(MyTest.class, "validTest");
assertThat(summary.getTestsSucceededCount()).isEqualTo(1);
}
@Test
void should_fail_if_method_name_starts_with_upper_case() {
TestExecutionSummary summary = runTestMethod(MyTest.class, "InvalidTest");
assertThat(summary.getTestsFailedCount()).isEqualTo(1);
assertThat(summary.getFailures().get(0).getException())
.isInstanceOf(RuntimeException.class)
.hasMessage("test method names should start with lowercase.");
}
private TestExecutionSummary runTestMethod(Class<?> testClass, String methodName) {
SummaryGeneratingListener listener = new SummaryGeneratingListener();
LauncherDiscoveryRequest request = request().selectors(selectMethod(testClass, methodName)).build();
LauncherFactory.create().execute(request, listener);
return listener.getSummary();
}
@ExtendWith(DisallowUppercaseLetterAtBeginning.class)
static class MyTest {
@Test
void validTest() {
}
@Test
void InvalidTest() {
fail("test should have failed before");
}
}
}
JUnit itself will not run MyTest because it is an inner class without @Nested. So there are no failing tests during the build process.
Update
JUnit itself will not run MyTest because it is an inner class without @Nested. So there are no failing tests during the build process.
This is not completely correct. JUnit itself would also run MyTest, e.g. if "Run All Tests" is started within the IDE or within a Gradle build.
The reason why MyTest was not executed is that I used Maven and tested with mvn test. Maven uses the Maven Surefire Plugin to execute tests, and this plugin has a default configuration which excludes all nested classes like MyTest.
See also this answer about "Run tests from inner classes via Maven" and the linked issues in the comments.
JUnit 5.4 introduced the JUnit Platform Test Kit which allows you to execute a test plan and inspect the results.
To take a dependency on it from Gradle, it might look something like this:
testImplementation("org.junit.platform:junit-platform-testkit:1.4.0")
And using your example, your extension test could look something like this:
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.extension.ExtendWith
import org.junit.jupiter.api.fail
import org.junit.platform.engine.discovery.DiscoverySelectors
import org.junit.platform.testkit.engine.EngineTestKit
import org.junit.platform.testkit.engine.EventConditions
import org.junit.platform.testkit.engine.TestExecutionResultConditions
internal class DisallowUpperCaseExtensionTest {
@Test
internal fun `succeed if starts with lower case`() {
val results = EngineTestKit
.engine("junit-jupiter")
.selectors(
DiscoverySelectors.selectMethod(ExampleTest::class.java, "validTest")
)
.execute()
results.tests().assertStatistics { stats ->
stats.finished(1)
}
}
@Test
internal fun `fail if starts with upper case`() {
val results = EngineTestKit
.engine("junit-jupiter")
.selectors(
DiscoverySelectors.selectMethod(ExampleTest::class.java, "TestShouldNotBeCalled")
)
.execute()
results.tests().assertThatEvents()
.haveExactly(
1,
EventConditions.finishedWithFailure(
TestExecutionResultConditions.instanceOf(java.lang.RuntimeException::class.java),
TestExecutionResultConditions.message("test method names should start with lowercase.")
)
)
}
@ExtendWith(DisallowUppercaseLetterAtBeginning::class)
internal class ExampleTest {
@Test
fun validTest() {
}
@Test
fun TestShouldNotBeCalled() {
fail("test should have failed before")
}
}
}
I wrote a JUnit 5 extension, but I cannot find a way to obtain the test result.
The extension looks like this:
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.TestExtensionContext;
public class TestResultExtension implements AfterTestExecutionCallback {
@Override
public void afterTestExecution(TestExtensionContext context) throws Exception {
//How to get test result? SUCCESS/FAILED
}
}
Any hints how to obtain test result?
This works for me:
public class RunnerExtension implements AfterTestExecutionCallback {
@Override
public void afterTestExecution(ExtensionContext context) throws Exception {
Boolean testResult = context.getExecutionException().isPresent();
System.out.println(testResult); //false - SUCCESS, true - FAILED
}
}
@ExtendWith(RunnerExtension.class)
public abstract class Tests {
}
As other answers point out, JUnit communicates failed tests with exceptions, so an AfterTestExecutionCallback can be used to glean what happened. Note that this is error-prone, as an extension running later might still fail the test.
Another way to do that is to register a custom TestExecutionListener (a sketch follows below). Both of these approaches are a little roundabout, though. There is an issue that tracks a specific extension point for reacting to test results, which would likely be the most straightforward answer to your question. If you can provide a specific use case, it would be great if you could head over to #542 and leave a comment describing it.
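For reference, a minimal sketch of such a listener; the class name is an assumption, while the interface and callback come from the JUnit Platform launcher API:
import org.junit.platform.engine.TestExecutionResult;
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;
public class ResultLoggingListener implements TestExecutionListener {
    @Override
    public void executionFinished(TestIdentifier testIdentifier, TestExecutionResult testExecutionResult) {
        // Only report tests, not containers such as classes or the engine itself.
        if (testIdentifier.isTest()) {
            System.out.println(testIdentifier.getDisplayName()
                    + ": " + testExecutionResult.getStatus()); // SUCCESSFUL, ABORTED or FAILED
        }
    }
}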
You can use SummaryGeneratingListener from org.junit.platform.launcher.listeners.
It contains a MutableTestExecutionSummary field, which implements the TestExecutionSummary interface, and this way you can obtain info about containers, tests, time, failures, etc.
You can create a custom listener, for example:
Create a class that extends SummaryGeneratingListener
public class ResultAnalyzer extends SummaryGeneratingListener {
@Override
public void testPlanExecutionFinished(TestPlan testPlan) {
//This method is invoked after all tests in all containers is finished
super.testPlanExecutionFinished(testPlan);
analyzeResult();
}
private void analyzeResult() {
var summary = getSummary();
var failures = summary.getFailures();
//Do something
}
}
Register the listener by creating the file
src\main\resources\META-INF\services\org.junit.platform.launcher.TestExecutionListener
and specifying your implementation in it:
path.to.class.ResultAnalyzer
Enable auto-detection of extensions by setting the parameter
-Djunit.jupiter.extensions.autodetection.enabled=true
And that's it!
Docs
https://junit.org/junit5/docs/5.0.0/api/org/junit/platform/launcher/listeners/SummaryGeneratingListener.html
https://junit.org/junit5/docs/5.0.0/api/org/junit/platform/launcher/listeners/TestExecutionSummary.html
https://junit.org/junit5/docs/current/user-guide/#extensions-registration-automatic
I have only this solution:
String testResult = context.getTestException().isPresent() ? "FAILED" : "OK";
It seems that it works well. But I am not sure if it will work correctly in all situations.
Failures in JUnit are propagated with exceptions. There are several exceptions which indicate various types of errors.
So an exception in TestExtensionContext#getTestException() indicates an error. The method can't manipulate actual test results, so depending on your use case you might want to implement TestExecutionExceptionHandler, which allows you to swallow exceptions, thus changing whether a test succeeded or not; a sketch follows below.
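A minimal sketch of such a handler (the class name and the choice to swallow only assertion errors are assumptions):
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestExecutionExceptionHandler;
public class SwallowAssertionErrors implements TestExecutionExceptionHandler {
    @Override
    public void handleTestExecutionException(ExtensionContext context, Throwable throwable) throws Throwable {
        // Returning normally swallows the exception, so the test is reported as passed;
        // rethrowing keeps the failure.
        if (throwable instanceof AssertionError) {
            return;
        }
        throw throwable;
    }
}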
You're almost there.
To implement a test execution callback and get the test result for logging (or generating a report) you can do the following:
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
public class TestResultExtension implements AfterTestExecutionCallback
{
@Override
public void afterTestExecution(ExtensionContext context) throws Exception
{
// check the context for an exception
Boolean passed = context.getExecutionException().isEmpty();
// if there isn't, the test passed
String result = passed ? "PASSED" : "FAILED";
// now that you have the result, you can do whatever you want
System.out.println("Test Result: " + context.getDisplayName() + " " + result);
}
}
And then you just add the TestResultExtension using the @ExtendWith() annotation for your test cases:
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import static org.junit.jupiter.api.Assertions.assertTrue;
@ExtendWith(TestResultExtension.class)
public class SanityTest
{
@Test
public void testSanity()
{
assertTrue(true);
}
@Test
public void testInsanity()
{
assertTrue(false);
}
}
It's a good idea to extend a base test that includes the extension:
import org.junit.jupiter.api.extension.ExtendWith;
@ExtendWith(TestResultExtension.class)
public class BaseTest
{}
And then you don't need to include the annotation in every test:
public class SanityTest extends BaseTest
{ //... }
I have a set of test cases and I would like the entire file ignored if some condition is met. Can I use
Assume.assumeTrue(precondition); in the setup method to ensure that, if the precondition is false, none of the tests in the file will run?
So if I had a setup method like this:
@Before
public void setUp() throws Exception {
super.setUp();
Assume.assumeTrue(1==3);//entire test file should be ignored is my hope
//some setup stuff ...
}
Can I hope that none of my tests will run? My end goal is that, on some condition being met, I can ignore all tests in a file. I have tried it and it seems to ignore them, but I want an expert opinion, and I don't want the Assume method to affect any tests besides the ones in the file it's called in.
Even though I think it is not good practice, I can understand using the @Ignore annotation to put some tests in quarantine. But I am not sure about conditioning it on a flag.
That said, implement this:
https://gist.github.com/yinzara/9980184
Then use this @ConditionalIgnore annotation.
public class SomeTest {
@Rule
public ConditionalIgnoreRule rule = new ConditionalIgnoreRule();
@Test
@ConditionalIgnore( condition = IgnoredByTeamA.class )
public void testIgnoredByTeamA() {
...
}
}
public class IgnoredByTeamA implements IgnoreCondition {
public boolean isSatisfied() {
return true;
}
}
More details here
I have 2 test methods and I need to run them with different configurations:
void myTest() {
.....
.....
}
@Test
void myTest_c1() {
setConf1();
myTest();
}
@Test
void myTest_c2() {
setConf2();
myTest();
}
//------------------
void nextTest() {
.....
.....
}
@Test
void nextTest_c1() {
setConf1();
nextTest();
}
@Test
void nextTest_c2() {
setConf2();
nextTest();
}
I cannot run them both from one config method (as in the code below) because I need separate methods for Tosca execution.
@Test
void tests_c1() {
setConf1();
myTest();
nextTest();
}
I don't want to write those 2 extra methods for each test; how can I solve this?
First I thought of writing a custom annotation:
@Test
@RunWithBothConf
myTest() {
....
}
But maybe there are other solutions for this?
What about using Theories?
@RunWith(Theories.class)
public class MyTest{
private static enum Configs{
C1, C2, C3;
}
@DataPoints
public static Configs[] configValues = Configs.values();
private void doConfig(Configs config){
switch(config){...}
}
@Theory
public void test1(Configs config){
doConfig(config);
// rest of test
}
@Theory
public void test2(Configs config){
doConfig(config);
// rest of test
}
}
I have a similar issue in a bunch of test cases I have, where certain tests need to be run with different configurations. Now, 'configuration' in your case might be more like settings, in which case maybe this isn't the best option, but for me it's more like a deployment model, so it fits.
Create a base class containing the tests.
Extend the base class with one that represents the different configuration.
As you execute each of the derived classes, the tests in the base class will be run with the configuration setup in its own class.
To add new tests, you just need to add them to the base class. A sketch of this pattern follows below.
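A minimal sketch of this pattern, assuming JUnit 4; the class and method names (ConfigurableTestBase, applyConfig) are hypothetical:
import org.junit.Before;
import org.junit.Test;
public abstract class ConfigurableTestBase {
    // Each configuration subclass decides how to set up the environment.
    protected abstract void applyConfig();
    @Before
    public void setUp() {
        applyConfig();
    }
    @Test
    public void myTest() {
        // written once here, executed once per concrete subclass
    }
}
// In a separate file: one subclass per configuration.
class Config1Test extends ConfigurableTestBase {
    @Override
    protected void applyConfig() {
        // apply configuration 1
    }
}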
Here is how I would approach it:
Create two test classes.
The first class configures conf1, using the @Before annotation to trigger the setup.
The second class extends the first but overrides the configure method.
In the example below I have a single member variable, conf. If no configuration is run, it stays at its default value 0. setConf1 is now setConf in the Conf1Test class, which sets this variable to 1. setConf2 is now setConf in the Conf2Test class.
Here is the main test class:
public class Conf1Test
{
protected int conf = 0;
@Before
public void setConf()
{
conf = 1;
}
@Test
public void myTest()
{
System.out.println("starting myTest; conf=" + conf);
}
@Test
public void nextTest()
{
System.out.println("starting nextTest; conf=" + conf);
}
}
And the second test class
public class Conf2Test extends Conf1Test
{
// Override setConf to do the "setConf2" function
@Override
public void setConf()
{
conf = 2;
}
}
When I configure my IDE to run all tests in the package I get the following output:
starting myTest; conf=1
starting nextTest; conf=1
starting myTest; conf=2
starting nextTest; conf=2
I think this gives you what you want. Each test only has to be written once, and each test gets run twice: once with conf1 and once with conf2.
The way you have it right now seems fine to me. You aren't duplicating any code, and each test is clear and easy to understand.
I have a requirement to read a text file which contains a list of all the test methods with yes/no values, pick only the "yes"-marked test methods for a TestCase class, and execute them in JUnit.
So I have written a script to read the file and group it in a map< TestCaseName, ArrayList_ofEnabledTestMethods >. To run that, I found one option is to use Assume.assumeTrue().
But I wanted to try some other way, instead of writing extra lines before each test method, so I tried to write a custom runner (ABCSuite, which extends ParentRunner) and planned to use it in my TestSuite file like below:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
@RunWith(ABCSuite.class)
@Suite.SuiteClasses({TestCalc.class})
public class BatTest{
}
Here TestCalc.class contains all the test methods, some of which are marked "yes" in the earlier-mentioned text file.
Please let me know how I can extend the ParentRunner class / JUnit libraries to achieve this. If there is a good tutorial or a link which addressed this before, please share.
You can do this by extending BlockJUnit4ClassRunner:
import java.util.Arrays;
import java.util.List;
import org.junit.Ignore;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
public class FilterRunner extends BlockJUnit4ClassRunner {
private List<String> testsToRun = Arrays.asList(new String[] { "test1" });
public FilterRunner(Class<?> klass) throws InitializationError {
super(klass);
}
@Override
protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
Description description= describeChild(method);
if (method.getAnnotation(Ignore.class) != null || !testsToRun.contains(method.getName())) {
notifier.fireTestIgnored(description);
} else {
runLeaf(methodBlock(method), description, notifier);
}
}
}
You can fill in testsToRun as you like. The above will mark the other tests as ignored. You use it like this:
@RunWith(Suite.class)
@SuiteClasses({Class1Test.class})
public class TestSuite {
}
@RunWith(FilterRunner.class)
public class Class1Test {
@Test
public void test1() {
System.out.println("test1");
}
@Test
public void test2() {
System.out.println("test2");
}
}
This produces the following output:
test1
If you don't want to add the FilterRunner to each test class, look at my answer to How to define JUnit method rule in a suite?.
The JUnit way of implementing this would be an implementation of a Filter. It must be instantiated by a Runner that implements Filterable. Filters are applied recursively through the tree of tests, so you only need to apply that filter once in your base suite.
You need to extend a runner and apply the filter in its constructor. To make things more flexible, you could configure the filters that should be applied with annotations. A sketch of this idea follows below.
I had the same requirement and that worked out well.
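A minimal sketch of that idea, assuming JUnit 4; the MethodNameFilter class and the hard-coded name list are assumptions standing in for the map read from the text file:
import java.util.Arrays;
import java.util.List;
import org.junit.runner.Description;
import org.junit.runner.manipulation.Filter;
import org.junit.runner.manipulation.NoTestsRemainException;
import org.junit.runners.Suite;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.RunnerBuilder;
// A Filter that only lets through the methods marked "yes".
class MethodNameFilter extends Filter {
    private final List<String> enabled = Arrays.asList("test1");
    @Override
    public boolean shouldRun(Description description) {
        // Containers (suites and classes) have no method name and must pass,
        // so that their children are visited.
        return description.getMethodName() == null
                || enabled.contains(description.getMethodName());
    }
    @Override
    public String describe() {
        return "only methods marked yes in the text file";
    }
}
// A Suite that applies the filter once for the whole tree of tests.
public class FilteredSuite extends Suite {
    public FilteredSuite(Class<?> klass, RunnerBuilder builder) throws InitializationError {
        super(klass, builder);
        try {
            filter(new MethodNameFilter());
        } catch (NoTestsRemainException e) {
            // Every test was filtered out; nothing will run.
        }
    }
}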