Is there any way we can share data between different extensions in JUnit 5 using the Store?
Example:

public class Extension1 {
    beforeAllCallback() {
        getStore(GLOBAL).put(projectId, "112");
    }
}

public class Extension2 {
    beforeTestExecutionCallback() {
        System.out.println("projectId=" + getStore(GLOBAL).get(projectId));
    }
}
Yes, two extensions can share state via the Store as follows.
Note, however, that you may wish to store the shared state in the root context Store if you want the state to be accessible across test classes.
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;

@ExtendWith({ Extension1.class, Extension2.class })
public class Tests {

    @Test
    void test() {
        // Executing this results in the following being printed to SYS_OUT:
        // PROJECT_ID=112
    }
}

class Extension1 implements BeforeAllCallback {

    public static final String PROJECT_ID = Extension1.class.getName() + ".PROJECT_ID";

    @Override
    public void beforeAll(ExtensionContext context) throws Exception {
        context.getStore(Namespace.GLOBAL).put(PROJECT_ID, "112");
    }
}

class Extension2 implements BeforeTestExecutionCallback {

    @Override
    public void beforeTestExecution(ExtensionContext context) throws Exception {
        System.out.println("PROJECT_ID=" + context.getStore(Namespace.GLOBAL).get(Extension1.PROJECT_ID));
    }
}
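As noted above, if the value should also be accessible across test classes, Extension1 can write to the root context's Store instead. A minimal sketch (same imports as above):

class Extension1 implements BeforeAllCallback {

    public static final String PROJECT_ID = Extension1.class.getName() + ".PROJECT_ID";

    @Override
    public void beforeAll(ExtensionContext context) throws Exception {
        // The root context is shared by the entire test run, so extensions
        // registered on other test classes can read this value as well.
        context.getRoot().getStore(Namespace.GLOBAL).put(PROJECT_ID, "112");
    }
}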
I'm trying to add spans when the constructor of some class is called. I'm using the OpenTelemetry Java agent and extensions to add tracing to my application.
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.javaagent.extension.instrumentation.TypeInstrumentation;
import io.opentelemetry.javaagent.extension.instrumentation.TypeTransformer;
import net.bytebuddy.asm.Advice;
import net.bytebuddy.description.type.TypeDescription;
import net.bytebuddy.matcher.ElementMatcher;
import net.bytebuddy.matcher.ElementMatchers;

import static net.bytebuddy.matcher.ElementMatchers.isConstructor;
import static net.bytebuddy.matcher.ElementMatchers.named;

public class ClassConstructorInstrumentation implements TypeInstrumentation {

    @Override
    public ElementMatcher<TypeDescription> typeMatcher() {
        return ElementMatchers.namedOneOf("org.example.ServiceManagerDummy");
    }

    @Override
    public void transform(TypeTransformer transformer) {
        transformer.applyAdviceToMethod(
            isConstructor(),
            this.getClass().getName() + "$ConstructorSpanCreateAdvice");
        // transformer.applyAdviceToMethod(
        //     named("dummyMethod"),
        //     this.getClass().getName() + "$ConstructorSpanCreateAdvice");
    }

    @SuppressWarnings("unused")
    public static class ConstructorSpanCreateAdvice {

        @Advice.OnMethodEnter
        public static void onEnter() {
            System.out.println("START SPAN ");
        }

        @Advice.OnMethodExit(onThrowable = Throwable.class)
        public static void onExit(@Advice.Thrown Throwable throwable) {
            System.out.println("END SPAN ");
        }
    }
}

public class ServiceManagerDummy {

    public ServiceManagerDummy() {
        System.out.println("SERVICE MANAGER CONSTR");
        dummyMethod();
    }

    private void dummyMethod() {
        System.out.println("DUMMY METHOD CALLED");
    }
}
I'm using the simple configuration above just to verify that my advice methods log when the constructor is called. But with the constructor matcher I receive nothing in the log, whereas when I apply the advice to a method call (the commented-out code) it works. What's wrong with my configuration?
What Byte Buddy would normally do is wrap the instrumented method in a try-finally block. For a constructor that is not possible, because the super constructor call cannot be wrapped in such a block; onThrowable is therefore not possible for constructors.
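So, for the constructor matcher to take effect, the exit advice has to be declared without onThrowable (and without the @Advice.Thrown parameter). A minimal sketch of the adjusted nested advice class, reusing the net.bytebuddy.asm.Advice import from the question:

@SuppressWarnings("unused")
public static class ConstructorSpanCreateAdvice {

    @Advice.OnMethodEnter
    public static void onEnter() {
        System.out.println("START SPAN ");
    }

    // No onThrowable here: the super constructor call cannot be wrapped in a
    // try-finally block, so exception handling is unavailable for constructors.
    @Advice.OnMethodExit
    public static void onExit() {
        System.out.println("END SPAN ");
    }
}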
How can I create a Scenario object in the Cucumber framework and keep it alive through the whole test run?
First class:

import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import io.cucumber.java.en.Given;

public class TestStepDefinitionOne {

    Scenario scenario;

    @Before
    public void keepScenario(Scenario scenario) {
        this.scenario = scenario;
    }

    @Given("First step")
    public void first_step() {
        // Some code
        scenario.log("Some text");
    }

    @Given("Second step")
    public void second_step() {
        // Some code
        scenario.log("Some text");
    }
}

Second class:

import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import io.cucumber.java.en.Given;

public class TestStepDefinitionTwo {

    Scenario scenario;

    @Before
    public void keepScenario(Scenario scenario) {
        this.scenario = scenario;
    }

    @Given("First step")
    public void first_step() {
        // Some code
        scenario.log("Some text");
    }

    @Given("Second step")
    public void second_step() {
        // Some code
        scenario.log("Some text");
    }
}
If I create a @Before method in each step definition class, everything works fine.
My question: is there any way to create the Scenario object in one place (like a Hooks class) and keep it alive through the whole test run?
My project was generated by http://start.vertx.io.
My HTTP handler:

// ...
public void handle(RoutingContext ctx) {
    String verticleName = ctx.queryParams().get("v");
    ctx.vertx().deployVerticle(verticleName);
    ctx.response().end();
}
// ...

But it reports an error:
java.lang.RuntimeException: Resource not found: verticles/TestVerticle.java
at io.vertx.core.impl.verticle.CompilingClassLoader.<init>(CompilingClassLoader.java:68)
at io.vertx.core.impl.JavaVerticleFactory.createVerticle(JavaVerticleFactory.java:37)
at io.vertx.core.impl.VerticleManager.doDeployVerticle(VerticleManager.java:217)
at io.vertx.core.impl.VerticleManager.doDeployVerticle(VerticleManager.java:193)
at io.vertx.core.impl.VerticleManager.doDeployVerticle(VerticleManager.java:180)
at io.vertx.core.impl.VerticleManager.deployVerticle(VerticleManager.java:156)
at io.vertx.core.impl.VertxImpl.deployVerticle(VertxImpl.java:623)
at io.vertx.core.impl.VertxImpl.deployVerticle(VertxImpl.java:608)
By default the verticle name should be a fully-qualified class name that has a no-args constructor and implements Verticle (or extends one of its implementors), e.g.:
package demo;

import io.vertx.core.AbstractVerticle;

public class MyVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        // ...
    }
}
Then you can do:
vertx.deployVerticle("demo.MyVerticle");
If you want to use another mechanism, you can create a custom VerticleFactory (https://vertx.io/docs/vertx-service-factory/java/) and apply your own logic, e.g.:
package demo;

import io.vertx.core.Promise;
import io.vertx.core.Verticle;
import io.vertx.core.spi.VerticleFactory;

import java.util.concurrent.Callable;

public class CustomVerticleFactory implements VerticleFactory {

    @Override
    public String prefix() {
        return "custom";
    }

    @Override
    public void createVerticle(String verticleName, ClassLoader classLoader, Promise<Callable<Verticle>> promise) {
        if (verticleName.equals("custom:x")) {
            promise.complete(() -> new MyVerticle());
        } else {
            promise.fail("...");
        }
    }
}
Register it with your Vertx instance:
vertx.registerVerticleFactory(new CustomVerticleFactory());
And then you can do:
vertx.deployVerticle("custom:x");
Found the right way: I should start the app with -Xbootclasspath/a:.
Then the class loader can find Java source files (XXXVerticle.java) in . (the working directory).
I'm testing extensively with JUnit and sometimes, while debugging my code, I want to (temporarily) run only a single @Test of my @RunWith(Arquillian.class) test class. Currently I'm adding an @Ignore to all other tests and wondering whether something like @IgnoreOther exists.
Are there better solutions for ignoring all other tests?
The simplest way is to replace every @Test with //###$$$@Test. Then, when your debugging is finished, replace //###$$$@Test back with @Test.
Moreover, IDEs typically allow running a single test only. For example, in Eclipse you can do it from the Outline view.
Just my two cents. You can try to use JUnit rules as srkavin suggested.
Here is an example.
package org.foo.bar;

import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

public class SingleTestRule implements MethodRule {

    private String applyMethod;

    public SingleTestRule(String applyMethod) {
        this.applyMethod = applyMethod;
    }

    @Override
    public Statement apply(final Statement statement, final FrameworkMethod method, final Object target) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                if (applyMethod.equals(method.getName())) {
                    statement.evaluate();
                }
            }
        };
    }
}

package org.foo.bar;

import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;

public class IgnoreAllTest {

    @Rule
    public SingleTestRule test = new SingleTestRule("test1");

    @Test
    public void test1() throws Exception {
        System.out.println("test1");
    }

    @Test
    public void test2() throws Exception {
        Assert.fail("test2");
    }

    @Test
    public void test3() throws Exception {
        Assert.fail("test3");
    }
}
Test rules (JUnit 4.7+) will help. For example, you can write a rule that ignores all @Test methods except the one with a specific name.
The answer from srkavin (and mijer) is correct, but the code is deprecated as of JUnit 4.9. The interface and the method signature have changed. I want to provide this for others interested in this issue.
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class IgnoreOtherRule implements TestRule {

    private String applyMethod;

    public IgnoreOtherRule(String applyMethod) {
        this.applyMethod = applyMethod;
    }

    @Override
    public Statement apply(final Statement statement, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                if (applyMethod.equals(description.getMethodName())) {
                    statement.evaluate();
                }
            }
        };
    }
}
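Usage is the same as with the MethodRule-based SingleTestRule above; a minimal sketch (the class and test names are just placeholders):

import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;

public class IgnoreOtherTest {

    @Rule
    public IgnoreOtherRule test = new IgnoreOtherRule("test1");

    @Test
    public void test1() throws Exception {
        System.out.println("test1"); // the only statement the rule evaluates
    }

    @Test
    public void test2() throws Exception {
        Assert.fail("test2"); // never evaluated, so it cannot fail the run
    }
}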
Is there a way in JUnit to detect within an @After annotated method whether there was a test failure or error in the test case?
One ugly solution would be something like this:

boolean withoutFailure = false;

@Test
public void test() {
    ...
    asserts...
    withoutFailure = true;
}

@After
public void tearDown() {
    if (!withoutFailure) {
        this.dontReuseTestenvironmentForNextTest();
    }
}

This is ugly because one needs to take care of the "infrastructure" (the withoutFailure flag) in the test code.
I hope there is something that lets me get the test status in the @After method!?
If you are lucky enough to be using JUnit 4.9 or later, TestWatcher will do exactly what you want.
Share and Enjoy!
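For example, a minimal sketch (the test class name and the dontReuseTestenvironmentForNextTest() cleanup method are placeholders borrowed from the question):

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class EnvironmentAwareTest {

    @Rule
    public TestWatcher watcher = new TestWatcher() {
        @Override
        protected void failed(Throwable e, Description description) {
            // Invoked after a failed test, so no manual "withoutFailure" flag is needed.
            dontReuseTestenvironmentForNextTest();
        }
    };

    @Test
    public void test() {
        // asserts...
    }

    private void dontReuseTestenvironmentForNextTest() {
        // placeholder for the cleanup from the question
    }
}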
I extend dsaff's answer to solve the problem that a TestRule cannot execute a code snippet between the execution of the test method and the @After method. So with a simple MethodRule one cannot use this rule to provide a success flag that is used in the @After annotated methods.
My idea is a hack! Anyway, it is to use a TestRule (extending TestWatcher). A TestRule gets to know whether a test failed or succeeded. My TestRule then scans the class for all methods annotated with my new AfterHack annotation and invokes those methods with a success flag.
AfterHack annotation
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

@Retention(RUNTIME)
@Target(METHOD)
public @interface AfterHack {}
AfterHackRule
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class AfterHackRule extends TestWatcher {

    private Object testClassInstance;

    public AfterHackRule(final Object testClassInstance) {
        this.testClassInstance = testClassInstance;
    }

    @Override
    protected void succeeded(Description description) {
        invokeAfterHackMethods(true);
    }

    @Override
    protected void failed(Throwable e, Description description) {
        invokeAfterHackMethods(false);
    }

    public void invokeAfterHackMethods(boolean successFlag) {
        for (Method afterHackMethod : this.getAfterHackMethods(this.testClassInstance.getClass())) {
            try {
                afterHackMethod.invoke(this.testClassInstance, successFlag);
            } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
                throw new RuntimeException("error while invoking afterHackMethod " + afterHackMethod);
            }
        }
    }

    private List<Method> getAfterHackMethods(Class<?> testClass) {
        List<Method> results = new ArrayList<>();
        for (Method method : testClass.getMethods()) {
            if (method.isAnnotationPresent(AfterHack.class)) {
                results.add(method);
            }
        }
        return results;
    }
}
Usage:

import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;

public class DemoTest {

    @Rule
    public AfterHackRule afterHackRule = new AfterHackRule(this);

    @AfterHack
    public void after(boolean success) {
        System.out.println("afterHack:" + success);
    }

    @Test
    public void demofails() {
        Assert.fail();
    }

    @Test
    public void demoSucceeds() {}
}
BTW:
1) Hopefully there is a better solution in JUnit 5.
2) The better way is to use the TestWatcher rule instead of the @Before and @After methods at all (that is the way I read dsaff's answer).
I don't know any easy or elegant way to detect the failure of a JUnit test in an @After method.
If it is possible to use a TestRule instead of an @After method, one possibility is to use two chained TestRules, with a TestWatcher as the inner rule.
Example:
package org.example;

import static org.junit.Assert.fail;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class ExampleTest {

    private String name = "";
    private boolean failed;

    @Rule
    public TestRule afterWithFailedInformation = RuleChain
        .outerRule(new ExternalResource() {
            @Override
            protected void after() {
                System.out.println("Test " + name + " " + (failed ? "failed" : "finished") + ".");
            }
        })
        .around(new TestWatcher() {
            @Override
            protected void finished(Description description) {
                name = description.getDisplayName();
            }

            @Override
            protected void failed(Throwable e, Description description) {
                failed = true;
            }
        });

    @Test
    public void testSomething() {
        fail();
    }

    @Test
    public void testSomethingElse() {
    }
}