How do I pass data into a test class I started programmatically with
junitCore.run(MyAwesomeClass.class);
I need to pass in some objects constructed based on input from the command line. My program is an executable jar.
A little context on why I'm doing this. I'm writing a command line program to drive tests based on inputs from a spreadsheet for my QA guy. I'm trying to test some code that has Android code mixed in, and I want to run it on the JVM. For that, I'm using Robolectric to fill in the stubs just so I can run, but the caveat is, you have to use their JUnit test runner.
What you want is not really how JUnit is meant to be used, because a test class should be complete, runnable testing code on its own.
However, you can always perform static initialization before running your test class, like:
MyAwesomeClass.prepare(myParameter);
junitCore.run(MyAwesomeClass.class);
For example:
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.JUnitCore;

public class Test2 {

    private static int param;

    public static void prepare(int param) {
        Test2.param = param;
    }

    @Test
    public void test() {
        Assert.assertEquals(2, param);
    }

    public static void main(String[] args) {
        JUnitCore jUnitCore = new JUnitCore();
        Test2.prepare(2);
        jUnitCore.run(Test2.class);
    }
}
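Since the goal is to feed in values from the command line, here is a minimal sketch (my extrapolation, not part of the original answer) of wiring the program arguments of the executable jar through prepare() and reporting the outcome:

public static void main(String[] args) {
    // Hypothetical wiring: the parameter arrives as the first program argument.
    Test2.prepare(Integer.parseInt(args[0]));
    org.junit.runner.Result result = new org.junit.runner.JUnitCore().run(Test2.class);
    // Print each failure and exit non-zero if anything failed.
    result.getFailures().forEach(System.out::println);
    System.exit(result.wasSuccessful() ? 0 : 1);
}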
Related
Suppose I develop an extension which disallows test method names to start with an uppercase character.
public class DisallowUppercaseLetterAtBeginning implements BeforeEachCallback {

    @Override
    public void beforeEach(ExtensionContext context) {
        char c = context.getRequiredTestMethod().getName().charAt(0);
        if (Character.isUpperCase(c)) {
            throw new RuntimeException("test method names should start with lowercase.");
        }
    }
}
Now I want to test that my extension works as expected.
@ExtendWith(DisallowUppercaseLetterAtBeginning.class)
class MyTest {

    @Test
    void validTest() {
    }

    @Test
    void TestShouldNotBeCalled() {
        fail("test should have failed before");
    }
}
How can I write a test to verify that the attempt to execute the second method throws a RuntimeException with a specific message?
Another approach could be to use the facilities provided by the new JUnit 5 Jupiter framework itself.
I put the code below, which I tested with Java 1.8 in Eclipse Oxygen. It suffers from a lack of elegance and conciseness, but could hopefully serve as a basis for a robust solution to your meta-testing use case.
Note that this is actually how JUnit 5 is tested; I refer you to the unit tests of the Jupiter engine on GitHub.
public final class DisallowUppercaseLetterAtBeginningTest {

    @Test
    void testIt() {
        // Warning here: I checked that the test container created below will
        // execute on the same thread as used for this test. We should remain
        // careful though, as the map used here is not thread-safe.
        final Map<String, TestExecutionResult> events = new HashMap<>();

        EngineExecutionListener listener = new EngineExecutionListener() {
            @Override
            public void executionFinished(TestDescriptor descriptor, TestExecutionResult result) {
                if (descriptor.isTest()) {
                    events.put(descriptor.getDisplayName(), result);
                }
                // skip class and container reports
            }

            @Override
            public void reportingEntryPublished(TestDescriptor testDescriptor, ReportEntry entry) {}

            @Override
            public void executionStarted(TestDescriptor testDescriptor) {}

            @Override
            public void executionSkipped(TestDescriptor testDescriptor, String reason) {}

            @Override
            public void dynamicTestRegistered(TestDescriptor testDescriptor) {}
        };

        // Build our test container and use the Jupiter fluent API to launch our test.
        // The following static imports are assumed:
        //
        //   import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;
        //   import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;
        JupiterTestEngine engine = new JupiterTestEngine();
        LauncherDiscoveryRequest request = request().selectors(selectClass(MyTest.class)).build();
        TestDescriptor td = engine.discover(request, UniqueId.forEngine(engine.getId()));
        engine.execute(new ExecutionRequest(td, listener, request.getConfigurationParameters()));

        // Bunch of verbose assertions; should be refactored and simplified in real code.
        assertEquals(new HashSet<>(asList("validTest()", "TestShouldNotBeCalled()")), events.keySet());
        assertEquals(Status.SUCCESSFUL, events.get("validTest()").getStatus());
        assertEquals(Status.FAILED, events.get("TestShouldNotBeCalled()").getStatus());

        Throwable t = events.get("TestShouldNotBeCalled()").getThrowable().get();
        assertEquals(RuntimeException.class, t.getClass());
        assertEquals("test method names should start with lowercase.", t.getMessage());
    }
}
Though a little verbose, one advantage of this approach is that it doesn't require mocking and executes the tests in the same JUnit container that will later run the real unit tests.
With a bit of clean-up, much more readable code is achievable. Again, the JUnit Jupiter sources can be a great source of inspiration.
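For instance, one possible clean-up (a sketch only, reusing exactly the classes and static imports from the block above) is to extract the engine bootstrapping into a helper, so each test reduces to a few assertions:

// Sketch of a helper extracted from testIt(); same imports as above assumed.
private static Map<String, TestExecutionResult> runTests(Class<?> testClass) {
    final Map<String, TestExecutionResult> events = new HashMap<>();
    EngineExecutionListener listener = new EngineExecutionListener() {
        @Override
        public void executionFinished(TestDescriptor descriptor, TestExecutionResult result) {
            if (descriptor.isTest()) {
                events.put(descriptor.getDisplayName(), result);
            }
        }
        @Override public void reportingEntryPublished(TestDescriptor d, ReportEntry e) {}
        @Override public void executionStarted(TestDescriptor d) {}
        @Override public void executionSkipped(TestDescriptor d, String reason) {}
        @Override public void dynamicTestRegistered(TestDescriptor d) {}
    };
    JupiterTestEngine engine = new JupiterTestEngine();
    LauncherDiscoveryRequest request = request().selectors(selectClass(testClass)).build();
    TestDescriptor td = engine.discover(request, UniqueId.forEngine(engine.getId()));
    engine.execute(new ExecutionRequest(td, listener, request.getConfigurationParameters()));
    return events;
}

// testIt() then shrinks to a call plus assertions:
@Test
void testIt() {
    Map<String, TestExecutionResult> events = runTests(MyTest.class);
    assertEquals(Status.FAILED, events.get("TestShouldNotBeCalled()").getStatus());
    // ... remaining assertions as above ...
}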
If the extension throws an exception, then there's not much a @Test method can do, since the test runner will never reach the @Test method. In this case, I think, you have to test the extension outside of its use in the normal test flow, i.e. let the extension be the SUT.
For the extension provided in your question, the test might be something like this:
@Test
public void willRejectATestMethodHavingANameStartingWithAnUpperCaseLetter() throws NoSuchMethodException {
    ExtensionContext extensionContext = Mockito.mock(ExtensionContext.class);
    Method method = Testable.class.getMethod("MethodNameStartingWithUpperCase");
    Mockito.when(extensionContext.getRequiredTestMethod()).thenReturn(method);

    DisallowUppercaseLetterAtBeginning sut = new DisallowUppercaseLetterAtBeginning();

    RuntimeException actual =
        assertThrows(RuntimeException.class, () -> sut.beforeEach(extensionContext));

    assertThat(actual.getMessage(), is("test method names should start with lowercase."));
}
@Test
public void willAllowATestMethodHavingANameStartingWithALowerCaseLetter() throws NoSuchMethodException {
    ExtensionContext extensionContext = Mockito.mock(ExtensionContext.class);
    Method method = Testable.class.getMethod("methodNameStartingWithLowerCase");
    Mockito.when(extensionContext.getRequiredTestMethod()).thenReturn(method);

    DisallowUppercaseLetterAtBeginning sut = new DisallowUppercaseLetterAtBeginning();

    sut.beforeEach(extensionContext);

    // no exception - good enough
}
public class Testable {
public void MethodNameStartingWithUpperCase() {
}
public void methodNameStartingWithLowerCase() {
}
}
However, your question suggests that the above extension is only an example so, more generally: if your extension has a side effect (e.g. sets something in an addressable context, populates a system property, etc.) then your @Test method could assert that this side effect is present. For example:
public class SystemPropertyExtension implements BeforeEachCallback {

    @Override
    public void beforeEach(ExtensionContext context) {
        System.setProperty("foo", "bar");
    }
}
@ExtendWith(SystemPropertyExtension.class)
public class SystemPropertyExtensionTest {

    @Test
    public void willSetTheSystemProperty() {
        assertThat(System.getProperty("foo"), is("bar"));
    }
}
This approach has the benefit of sidestepping the potentially awkward setup steps of creating the ExtensionContext and populating it with the state required by your test, but it may come at the cost of limiting test coverage, since you can really only test one outcome. And, of course, it is only feasible if the extension has a side effect which can be evaluated in a test case which uses the extension.
So, in practice, I suspect you might need a combination of these approaches; for some extensions the extension can be the SUT and for others the extension can be tested by asserting against its side effect(s).
After trying the solutions in the answers and the question linked in the comments, I ended up with a solution using the JUnit Platform Launcher.
class DisallowUppercaseLetterAtBeginningTest {

    @Test
    void should_succeed_if_method_name_starts_with_lower_case() {
        TestExecutionSummary summary = runTestMethod(MyTest.class, "validTest");

        assertThat(summary.getTestsSucceededCount()).isEqualTo(1);
    }

    @Test
    void should_fail_if_method_name_starts_with_upper_case() {
        TestExecutionSummary summary = runTestMethod(MyTest.class, "InvalidTest");

        assertThat(summary.getTestsFailedCount()).isEqualTo(1);
        assertThat(summary.getFailures().get(0).getException())
                .isInstanceOf(RuntimeException.class)
                .hasMessage("test method names should start with lowercase.");
    }

    private TestExecutionSummary runTestMethod(Class<?> testClass, String methodName) {
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        LauncherDiscoveryRequest request = request().selectors(selectMethod(testClass, methodName)).build();
        LauncherFactory.create().execute(request, listener);
        return listener.getSummary();
    }

    @ExtendWith(DisallowUppercaseLetterAtBeginning.class)
    static class MyTest {

        @Test
        void validTest() {
        }

        @Test
        void InvalidTest() {
            fail("test should have failed before");
        }
    }
}
JUnit itself will not run MyTest because it is an inner class without @Nested. So there are no failing tests during the build process.
Update
JUnit itself will not run MyTest because it is an inner class without @Nested. So there are no failing tests during the build process.
This is not completely correct. JUnit itself would also run MyTest, e.g. if "Run All Tests" is started within the IDE or within a Gradle build.
The reason why MyTest was not executed is that I used Maven and ran the tests with mvn test. Maven uses the Maven Surefire Plugin to execute tests, and this plugin has a default configuration which excludes nested classes like MyTest.
See also this answer about "Run tests from inner classes via Maven" and the linked issues in the comments.
JUnit 5.4 introduced the JUnit Platform Test Kit which allows you to execute a test plan and inspect the results.
To take a dependency on it from Gradle, it might look something like this:
testImplementation("org.junit.platform:junit-platform-testkit:1.4.0")
And using your example, your extension test could look something like this:
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.extension.ExtendWith
import org.junit.jupiter.api.fail
import org.junit.platform.engine.discovery.DiscoverySelectors
import org.junit.platform.testkit.engine.EngineTestKit
import org.junit.platform.testkit.engine.EventConditions
import org.junit.platform.testkit.engine.TestExecutionResultConditions
internal class DisallowUpperCaseExtensionTest {

    @Test
    internal fun `succeed if starts with lower case`() {
        val results = EngineTestKit
            .engine("junit-jupiter")
            .selectors(
                DiscoverySelectors.selectMethod(ExampleTest::class.java, "validTest")
            )
            .execute()

        results.tests().assertStatistics { stats ->
            stats.finished(1)
        }
    }

    @Test
    internal fun `fail if starts with upper case`() {
        val results = EngineTestKit
            .engine("junit-jupiter")
            .selectors(
                DiscoverySelectors.selectMethod(ExampleTest::class.java, "TestShouldNotBeCalled")
            )
            .execute()

        results.tests().assertThatEvents()
            .haveExactly(
                1,
                EventConditions.finishedWithFailure(
                    TestExecutionResultConditions.instanceOf(java.lang.RuntimeException::class.java),
                    TestExecutionResultConditions.message("test method names should start with lowercase.")
                )
            )
    }

    @ExtendWith(DisallowUppercaseLetterAtBeginning::class)
    internal class ExampleTest {

        @Test
        fun validTest() {
        }

        @Test
        fun TestShouldNotBeCalled() {
            fail("test should have failed before")
        }
    }
}
I am working in Eclipse Oxygen.1a (4.7.1a) with the JUnit 5 library, and it seems like none of my annotated methods are running correctly when I run a test class using JUnitCore.
For example, if I call the following class using JUnitCore.run(TestClass.class):
public class TestClass {

    @BeforeAll
    public static void beforeAll() {
        System.out.println("In TestClass.beforeAll");
    }

    @Test
    public void testMethod() {
        System.out.println("In TestClass.testMethod");
    }

    @AfterAll
    public static void afterAll() {
        System.out.println("In TestClass.afterAll");
    }
}
There is no output to System.out, and the Result object says that 1 test failed, implying that none of these methods ran. I can use a JUnit launcher to run the test class like so:
final LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
.selectors(selectClass(TestClass.class)).build();
final Launcher launcher = LauncherFactory.create();
launcher.execute(request);
However, this does not give me any feedback about how many tests passed or failed; it seems to just run them. I can't find much documentation on using JUnitCore with JUnit 5. Is there something newer that I should be working with?
You should not use JUnitCore with JUnit 5; instead, use the platform launcher API, as you did in the second part of your analysis.
Launcher#execute(...) does not return any value, so you need to use a listener to aggregate the results from the execution of your tests, as per the JUnit 5 documentation, 7.1.2 Executing Tests:
There is no return value for the execute() method, but you can easily use a listener to aggregate the final results in an object of your own. For an example see the SummaryGeneratingListener.
You can for instance produce a TestExecutionSummary which may indeed provide you with the information you want to collect:
final LauncherDiscoveryRequest request =
LauncherDiscoveryRequestBuilder.request()
.selectors(selectClass(TestClass.class))
.build();
final Launcher launcher = LauncherFactory.create();
final SummaryGeneratingListener listener = new SummaryGeneratingListener();
launcher.registerTestExecutionListeners(listener);
launcher.execute(request);
TestExecutionSummary summary = listener.getSummary();
long testFoundCount = summary.getTestsFoundCount();
List<Failure> failures = summary.getFailures();
...
I am trying to write a test suite using JUnit 4 by relying on JUnit4TestAdapter. Looking at the code of this class, I saw that it only accepts a Class as input. I would like to build a test class and set a parameter on it before running it with my TestSuite. Unfortunately, JUnit4TestAdapter builds the test using reflection (not 100% sure about the mechanism behind it), which means that I cannot change my test class at runtime.
Has anybody done anything similar before? Is there any possible workaround to this issue? Thanks for your help!
public class SimpleTest {

    @Test
    public void testBasic() {
        TemplateTester tester = new TemplateTester();

        ActionIconsTest test = new ActionIconsTest();
        test.setParameter("New Param Value");

        tester.addTests(test);
        tester.run();
    }
}
/////
public class TemplateTester {

    private TestSuite suite;

    public TemplateTester() {
        suite = new TestSuite();
    }

    public void addTests(TemplateTest... tests) {
        for (TemplateTest test : tests) {
            suite.addTest(new JUnit4TestAdapter(test.getClass()));
        }
    }

    public void run() {
        suite.run(new TestResult());
    }
}
/////
public interface TemplateTest {
}
/////
public class ActionIconsTest extends BaseTestStrategy implements TemplateTest {

    @Test
    public void icons() {
        // Test logic here
    }

    @Override
    public void navigateToTestPage() {
        // Here I need the parameter
    }
}
/////
public abstract class BaseTestStrategy {

    protected String parameter;

    @Before
    public void init() {
        navigateToTestPage();
    }

    public abstract void navigateToTestPage();

    public void setParameter(String parameter) {
        this.parameter = parameter;
    }
}
I am trying to test a web application with Selenium. The way I want to test is by splitting the functionality, e.g., I want to test the available icons (ActionIconsTest), then other parts like buttons, etc.
The idea behind this is to have a better categorization of the functionality available on a certain screen. This is quite coupled to the way we are currently developing our web app.
With this in mind, TemplateTest is just an interface implemented by the different kinds of tests (ActionIconsTest, ButtonTest, etc.) available in my system.
TemplateTester is a JUnit test suite with all the different tests that implement the TemplateTest interface.
The reason for this question is that I was trying to implement a Strategy pattern and then realized the inconvenience of passing a class to JUnit4TestAdapter at runtime.
Well, taking into account that JUnit needs your tester's Class object as an object factory (so it can create several instances of your tester), I can only suggest you pass parameters to your tester through system properties.
Moreover, it's the recommended way of passing parameters: http://junit.org/faq.html#running_7
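Applied to the code in the question, a minimal sketch might look like this; the property name "test.parameter" and the default value are my assumptions, not from the FAQ:

import org.junit.Before;

// BaseTestStrategy reworked to read its parameter from a system property,
// e.g. java -Dtest.parameter="New Param Value" -jar mytests.jar
// (the property name "test.parameter" is a hypothetical choice for this sketch).
public abstract class BaseTestStrategy {

    protected String parameter;

    @Before
    public void init() {
        // JUnit instantiates the test class itself, so the value has to come
        // from the environment rather than from a setter we call beforehand.
        parameter = System.getProperty("test.parameter", "default value");
        navigateToTestPage();
    }

    public abstract void navigateToTestPage();
}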
I have written a simple Java program:
package bsh;

import test.Testclass;

public class Whatever {
    public static void main(String args[]) {
        Testclass t = new Testclass();
        System.out.println(t.squareIt(8));
    }
}

package test;

public class Testclass {

    public Testclass() {
    }

    public int squareIt(int i) {
        return i * i;
    }
}
I have two questions about this Java program:
How to execute this Java program from JMeter?
How to call the squareIt(int i) method from JMeter?
How can I achieve this?
I haven't tried executing a main class, but I have certainly executed JUnit test cases through JMeter.
Have a look at this doc: JUnit sampler tutorial
Aside from following the tutorial as Sudhakar mentioned...
Your main test case must extend TestCase or some form of it, and your test method must begin with the word test or use annotations.
Do not use a void main method as in a normal Java application.
JMeter will automatically call and run your methods whose names start with test.
So you could do this:
import junit.framework.TestCase;
import test.Testclass;

public class Whatever extends TestCase {

    public void testIt() {
        //test code here
        new Testclass().squareIt(5);
    }
}
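Note that the JUnit Request sampler looks for test classes in jars placed under JMeter's lib/junit directory, so the compiled test classes typically need to be packaged into a jar and copied there before configuring the sampler.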
JMeter is typically used for testing performance on web applications. Correct me if I'm wrong, but unless you plan on converting this into some sort of web app, you should try using VisualVM to measure your program's performance.
I have 2 test methods, and I need to run them with different configurations:
void myTest() {
    .....
    .....
}

@Test
void myTest_c1() {
    setConf1();
    myTest();
}

@Test
void myTest_c2() {
    setConf2();
    myTest();
}

//------------------

void nextTest() {
    .....
    .....
}

@Test
void nextTest_c1() {
    setConf1();
    nextTest();
}

@Test
void nextTest_c2() {
    setConf2();
    nextTest();
}
I cannot run them both from one config (as in the code below), because I need separate methods for Tosca execution.
@Test
void tests_c1() {
    setConf1();
    myTest();
    nextTest();
}
I don't want to write those 2 extra methods to run each test; how can I solve this?
First I thought of writing a custom annotation:

@Test
@RunWithBothConf
void myTest() {
    ....
}

But maybe there are other solutions for this?
What about using Theories?
@RunWith(Theories.class)
public class MyTest {

    private static enum Configs {
        C1, C2, C3;
    }

    @DataPoints
    public static Configs[] configValues = Configs.values();

    private void doConfig(Configs config) {
        switch (config) { ... }
    }

    @Theory
    public void test1(Configs config) {
        doConfig(config);
        // rest of test
    }

    @Theory
    public void test2(Configs config) {
        doConfig(config);
        // rest of test
    }
}
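Note that JUnit runs each @Theory method once per matching @DataPoints value, so test1 and test2 each execute three times here, once for each of C1, C2, and C3.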
I have a similar issue in a bunch of test cases I have, where certain tests need to be run with different configurations. Now, 'configuration' in your case might be more like settings, in which case maybe this isn't the best option, but for me it's more like a deployment model, so it fits.
Create a base class containing the tests.
Extend the base class with one that represents the different configuration.
As you execute each of the derived classes, the tests in the base class will be run with the configuration set up in its own class.
To add new tests, you just need to add them to the base class.
Here is how I would approach it:
Create two test classes
The first class configures conf1 but uses the @Before annotation to trigger the setup
The second class extends the first but overrides the configure method
In the example below I have a single member variable, conf. If no configuration is run, it stays at its default value of 0. setConf1 is now setConf in the Conf1Test class, which sets this variable to 1. setConf2 is now setConf in the Conf2Test class.
Here is the main test class:
public class Conf1Test
{
    protected int conf = 0;

    @Before
    public void setConf()
    {
        conf = 1;
    }

    @Test
    public void myTest()
    {
        System.out.println("starting myTest; conf=" + conf);
    }

    @Test
    public void nextTest()
    {
        System.out.println("starting nextTest; conf=" + conf);
    }
}
And here is the second test class:
public class Conf2Test extends Conf1Test
{
    // override setConf to do the "setConf2" function
    @Override
    public void setConf()
    {
        conf = 2;
    }
}
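Note that Conf2Test does not need to repeat the @Before annotation: JUnit discovers it on the inherited setConf method, and dynamic dispatch then invokes the overriding version, which is why conf becomes 2.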
When I configure my IDE to run all tests in the package I get the following output:
starting myTest; conf=1
starting nextTest; conf=1
starting myTest; conf=2
starting nextTest; conf=2
I think this gives you what you need: each test only has to be written once, and each test gets run twice, once with conf1 and once with conf2.
The way you have it right now seems fine to me. You aren't duplicating any code, and each test is clear and easy to understand.