I'm trying to find an example of how to run a test suite for test classes that use Robolectric. For example, I have this class:
@Config(sdk = 16, manifest = "src/main/AndroidManifest.xml")
@RunWith(RobolectricTestRunner.class)
public class RestaurantModelTest {
    // setUp code here...

    @Test
    public void testFindByLocation() throws Exception {
        // test code here
    }
}
All unit tests and assertions in RestaurantModelTest pass, and I also have another class, let's call it XModelTest, in which all tests and assertions pass as well.
My problem
I can't find any tutorial or example of how to use a test suite with Robolectric.
Should this be done in the same package where my RestaurantModelTest and XModelTest are? If not where?
I also tried doing this with JUnit's TestSuite, but many questions arise: should my suite class extend the TestSuite superclass?
If someone could give me a short example using my RestaurantModelTest and XModelTest classes, that would be great.
I believe I've also partially covered this while answering your second question - Can't run android test suite: Exception in thread "main"
Here's how to write a suite with Robolectric:
Let's say we have 2 model classes.
CartModel.java
public class CartModel {
    public float totalAmount;
    public int products;

    public void addToCart(float productPrice) {
        products++;
        totalAmount += productPrice;
    }
}
and RestaurantModel.java
public class RestaurantModel {
    public int staff;

    public void hire(int numberOfHires) {
        staff += numberOfHires;
    }
}
Let's write some dummy tests for them:
CartModelTest.java
@RunWith(RobolectricGradleTestRunner.class)
@Config(constants = BuildConfig.class, sdk = 21)
public class CartModelTest {
    @Test
    public void addToCart() throws Exception {
        CartModel cartModel = new CartModel();
        assertEquals(0, cartModel.totalAmount, 0);
        assertEquals(0, cartModel.products);
        cartModel.addToCart(10.2f);
        assertEquals(10.2f, cartModel.totalAmount, 0);
        assertEquals(1, cartModel.products);
    }
}
RestaurantModelTest.java
@RunWith(RobolectricGradleTestRunner.class)
@Config(constants = BuildConfig.class, sdk = 21)
public class RestaurantModelTest {
    @Test
    public void hire() throws Exception {
        RestaurantModel restaurantModel = new RestaurantModel();
        assertEquals(0, restaurantModel.staff);
        restaurantModel.hire(1);
        assertEquals(1, restaurantModel.staff);
    }
}
And now the last step: group them together into one ModelsTestSuite.java:
@RunWith(Suite.class)
@Suite.SuiteClasses({
        RestaurantModelTest.class,
        CartModelTest.class
})
public class ModelsTestSuite {}
To run it, just right-click ModelsTestSuite and choose "Run ModelsTestSuite". That's it!
NB! In Android Studio 2.0 3b, you have to disable Instant Run (Preferences -> Build, Execution, Deployment -> Instant Run -> uncheck "Enable Instant Run") in order to run Robolectric tests; otherwise you get java.lang.RuntimeException: java.lang.ClassNotFoundException: Could not find a class for package: <package name> and class name: com.android.tools.fd.runtime.BootstrapApplication.
I hope it helps.
Related
I am trying to get some context about the result of the test run in an @AfterEach method. At a bare minimum I would like to know whether the test passed, and ideally also the thrown exception if there is one.
However, every parameter I try fails to resolve, and I can't find any documentation on what should be available.
Code:
public class TestClass {
    @AfterEach
    public void afterEach(
            TestInfo testInfo, // works, but no report on the state of the test
            // none of these work
            TestExecutionSummary summary,
            TestExecutionResult result,
            TestFailure fail,
            Optional<Throwable> execOp,
            Throwable exception
    ) {
        // ...
    }
}
What can I do to get this context?
Not sure if this is what you want, but you can use either a TestExecutionListener or a TestWatcher (there are other approaches as well; check the documentation).
An example of TestWatcher can be found here: TestWatcher in junit5, and a more detailed explanation here: https://www.baeldung.com/junit-testwatcher.
The following code example was partially taken from here.
public class TestResultLoggerExtension implements TestWatcher, AfterAllCallback {
    ...

    @Override
    public void testSuccessful(ExtensionContext context) {
        System.out.printf("Test successful for test %s%n", context.getDisplayName());
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        System.out.printf("Test failed for test %s, with cause %s%n", context.getDisplayName(), cause.getMessage());
    }
}
Your test class would then look something like this:
@ExtendWith(TestResultLoggerExtension.class)
public class TestClass {
    // ...
}
You can adapt the logic to your needs.
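If you also need a roll-up at the end (that is what the AfterAllCallback above is for), one option is to have the extension record every outcome and print a summary in afterAll. The tallying itself needs nothing from JUnit; here is a stdlib-only sketch of it (ResultTally is a hypothetical helper that the testSuccessful/testFailed callbacks would feed):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ResultTally {
    // displayName -> "OK" or "FAILED", kept in execution order
    private final Map<String, String> results = new LinkedHashMap<>();

    // testSuccessful(...) would call record(name, "OK"),
    // testFailed(...) would call record(name, "FAILED")
    public void record(String displayName, String status) {
        results.put(displayName, status);
    }

    // afterAll(ExtensionContext) could print this summary
    public String summary() {
        long failed = results.values().stream().filter("FAILED"::equals).count();
        return results.size() + " tests, " + failed + " failed";
    }

    public static void main(String[] args) {
        ResultTally tally = new ResultTally();
        tally.record("addToCart()", "OK");
        tally.record("hire()", "FAILED");
        System.out.println(tally.summary());  // prints "2 tests, 1 failed"
    }
}
```

The extension would hold one ResultTally instance and delegate to it from both watcher callbacks.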
More References:
https://junit.org/junit5/docs/current/user-guide/#extensions-test-result-processing
https://junit.org/junit5/docs/current/user-guide/#launcher-api-listeners-custom
I'm looking for a way to optimize running multiple parameterized tests with expensive setup.
My current code looks like this:
@ParameterizedTest
@MethodSource("test1CasesProvider")
void test1(String param) {
    // expensive setup code 1
    // execution & assertions
}

@ParameterizedTest
@MethodSource("test2CasesProvider")
void test2(String param) {
    // expensive setup code 2
    // execution & assertions
}
In that shape, however, the expensive setup runs for every test case, which is not very good.
I could split this into two separate test classes and use @BeforeAll, so the setup runs only once per class, but I'm looking for a way to keep both cases in one test class.
You can use @Nested tests in this case, like this:
public class MyTests {
    @Nested
    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    class Test1Cases {
        @BeforeAll
        void setUpForTest1() {
            System.out.println("Test1Cases: setting up things!");
        }

        @AfterAll
        void tearDownForTest1() {
            System.out.println("Test1Cases: tear down things!");
        }

        @ParameterizedTest
        @MethodSource("source")
        void shouldDoSomeTests(String testCase) {
            System.out.println("Test1Cases: Doing parametrized tests: " + testCase);
        }

        Stream<Arguments> source() {
            return Stream.of(
                    Arguments.of("first source param!"),
                    Arguments.of("second source param!"),
                    Arguments.of("third source param!")
            );
        }
    }

    @Nested
    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    class Test2Cases {
        @BeforeAll
        void setUpForTest2() {
            System.out.println("Test2Cases: setting up things!");
        }

        @AfterAll
        void tearDownForTest2() {
            System.out.println("Test2Cases: tear down things!");
        }

        @ParameterizedTest
        @MethodSource("source")
        void shouldDoSomeTests(String testCase) {
            System.out.println("Test2Cases: Doing parametrized tests: " + testCase);
        }

        Stream<Arguments> source() {
            return Stream.of(
                    Arguments.of("first source param!"),
                    Arguments.of("second source param!"),
                    Arguments.of("third source param!")
            );
        }
    }
}
The output in this case was:
Test2Cases: setting up things!
Test2Cases: Doing parametrized tests: first source param!
Test2Cases: Doing parametrized tests: second source param!
Test2Cases: Doing parametrized tests: third source param!
Test2Cases: tear down things!
Test1Cases: setting up things!
Test1Cases: Doing parametrized tests: first source param!
Test1Cases: Doing parametrized tests: second source param!
Test1Cases: Doing parametrized tests: third source param!
Test1Cases: tear down things!
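If you would rather keep both groups of cases in one flat test class instead of using @Nested, another option is to memoize each expensive setup so it runs at most once no matter how many parameterized cases touch it. A stdlib-only sketch of that idea (Memoized is a hypothetical helper, not a JUnit API; each test method would start with something like setup1.get()):

```java
import java.util.function.Supplier;

// Lazy holder: runs the expensive setup at most once,
// no matter how many test cases ask for it.
public class Memoized<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private T value;
    private boolean computed;

    public Memoized(Supplier<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T get() {
        if (!computed) {
            value = delegate.get();
            computed = true;
        }
        return value;
    }

    public static void main(String[] args) {
        int[] runs = {0};
        Memoized<String> setup1 = new Memoized<>(() -> {
            runs[0]++; // expensive setup code 1 would go here
            return "fixture-1";
        });
        // three "test cases" sharing the same setup
        setup1.get();
        setup1.get();
        setup1.get();
        System.out.println("setup ran " + runs[0] + " time(s)");  // prints "setup ran 1 time(s)"
    }
}
```

A static Memoized field per test group gives the same once-per-class behavior as @BeforeAll, while all cases stay in one class.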
We defined a TestNG result listener that sends the test result for each test case defined in testng.xml to an internal tool, like below:
public class TestResultsListener implements ITestListener, ISuiteListener {
    @Override
    public void onFinish(ISuite suite) {
        // some code to send the final suite result to internal tools
    }

    @Override
    public void onTestSuccess(ITestResult iTestResult) {
        this.sendStatus(iTestResult, "PASS");
    }

    private void sendStatus(ITestResult iTestResult, String status) {
        // Set test case information
        ......
        jsonArr.add(testResult);
    }
}
And then we registered this listener in the other project's testng.xml file, like so:
<listeners>
    <listener class-name="com.qa.test.listener.TestResultsListener" />
</listeners>
It worked as designed: once the test suite finishes, the test results are uploaded to the internal tool.
Now we have a new requirement: in one project, one test case in testng.xml is related to 3 test cases in the internal tool, which means that for one test case in testng.xml we need to update 3 test cases in the internal tool. How can we update our current TestNG listener to fulfill this?
Thanks a lot.
You can annotate each of your tests with the list of corresponding internal test tool ids:
Here I suppose that you have 2 TestNG tests: one is related to internal test IT-1, and the other one to internal tests IT-2, IT-3 and IT-4:
@Listeners(MyTestListener.class)
public class TestA {
    @Test
    @InternalTool(ids = "IT-1")
    public void test1() {
        System.out.println("test1");
        fail();
    }

    @Test
    @InternalTool(ids = {"IT-2", "IT-3", "IT-4"})
    public void test2() {
        System.out.println("test2");
    }
}
The annotation is simply defined like this:
@Retention(RetentionPolicy.RUNTIME)
public @interface InternalTool {
    String[] ids();
}
Then your listener just has to figure out which annotations are present on successful/failed tests:
public class MyTestListener extends TestListenerAdapter implements ITestListener {
    @Override
    public void onTestSuccess(ITestResult tr) {
        super.onTestSuccess(tr);
        updateInternalTool(tr, true);
    }

    @Override
    public void onTestFailure(ITestResult tr) {
        super.onTestFailure(tr);
        updateInternalTool(tr, false);
    }

    private void updateInternalTool(ITestResult tr, boolean success) {
        InternalTool annotation = tr.getMethod().getConstructorOrMethod().getMethod().getAnnotation(InternalTool.class);
        for (String id : annotation.ids()) {
            System.out.println(String.format("Test with id [%s] is [%s]", id, success ? "successful" : "failed"));
        }
    }
}
The following output is produced:
test1
Test with id [IT-1] is [failed]
test2
Test with id [IT-2] is [successful]
Test with id [IT-3] is [successful]
Test with id [IT-4] is [successful]
You can extend this mechanism to suite listeners as well.
Disclaimer: the line
InternalTool annotation = tr.getMethod().getConstructorOrMethod().getMethod().getAnnotation(InternalTool.class); is not bullet-proof (it throws a NullPointerException for any test method without the annotation) and should be made more robust.
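To make that line defensive, as the disclaimer suggests, you can treat a missing annotation as an empty id list so unannotated tests are simply skipped. Here is a self-contained sketch of that lookup with the TestNG plumbing reduced to plain reflection (class and method names are illustrative):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class SafeLookup {
    @Retention(RetentionPolicy.RUNTIME)
    @interface InternalTool {
        String[] ids();
    }

    // Returns the ids, or an empty array when the annotation is absent,
    // so the listener loop becomes a no-op for unannotated tests.
    static String[] idsOf(Method method) {
        InternalTool annotation = method.getAnnotation(InternalTool.class);
        return annotation != null ? annotation.ids() : new String[0];
    }

    @InternalTool(ids = {"IT-2", "IT-3"})
    void annotated() {}

    void notAnnotated() {}

    public static void main(String[] args) throws Exception {
        System.out.println(idsOf(SafeLookup.class.getDeclaredMethod("annotated")).length);    // prints 2
        System.out.println(idsOf(SafeLookup.class.getDeclaredMethod("notAnnotated")).length); // prints 0
    }
}
```

In the listener, updateInternalTool would loop over idsOf(tr.getMethod().getConstructorOrMethod().getMethod()) instead of dereferencing the annotation directly.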
I am setting up a Spring Boot application on Jenkins. For the unit tests I am getting the error below. The error is not tied to one particular test case; every time I run it, a different test fails. I am not sure what is wrong. The same project works fine (build and unit tests) locally and in other environments (development, stage). Any idea about the errors below?
00:49:42.836 [main] DEBUG org.springframework.test.context.support.ActiveProfilesUtils - Could not find an 'annotation declaring class' for annotation type [org.springframework.test.context.ActiveProfiles] and class [com.abc.services.tokens.crypto.aws.AesGcmDynamoCryptoCipherProviderTest]
00:49:42.836 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.web.ServletTestExecutionListener@43195e57, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener@333291e3, org.springframework.test.context.support.DependencyInjectionTestExecutionListener@479d31f3, org.springframework.test.context.support.DirtiesContextTestExecutionListener@40ef3420]
Here is the test class
@SuppressWarnings("unchecked")
public class AesGcmDynamoCryptoCipherProviderTest extends AbstractTestNGBeanMockingTests {
    @MockBean
    AwsCrypto awsCrypto;

    @MockBean
    DynamoDBProvider dynamoDBProvider;

    @MockBean
    MasterKeyProvider masterKeyProvider;

    @MockBean
    Table table;

    private static Item mockCipherItem(UUID cipherId) {
        Item item = mock(Item.class);
        return item;
    }

    private static <T> CryptoResult<T, ?> mockCryptoResult(T result) {
        // do something
        return cryptoResult;
    }

    @BeforeMethod
    private void init() {
        CryptoResult<String, ?> decryptoResult = mockCryptoResult(Base64.getEncoder().encodeToString("*decrypted*".getBytes()));
        CryptoResult<String, ?> encryptoResult = mockCryptoResult("*encrypted*");
    }

    @Test
    public void testGetCipher() {
        AesGcmDynamoCryptoCipherProvider provider = new AesGcmDynamoCryptoCipherProvider("table", awsCrypto, dynamoDBProvider, masterKeyProvider);
        UUID cipherId = UUID.randomUUID();
        Item cipherItem = mockCipherItem(cipherId);
        AesGcmCipher cipher = provider.getCipher(cipherId);
        assertNotNull(cipher);
        assertEquals(cipher.getCipherId(), cipherId);
    }
}
Base class
@ContextConfiguration(classes = { //...
        AbstractTestNGBeanMockingTests.MockBeanConfiguration.class //...
})
@DirtiesContext
public class AbstractTestNGBeanMockingTests extends AbstractTestNGSpringContextTests {
    private static ThreadLocal<Class<? extends AbstractTestNGBeanMockingTests>> currentTestClass = new ThreadLocal<>();

    @AfterClass(alwaysRun = true)
    @Override
    protected void springTestContextAfterTestClass() throws Exception {
        super.springTestContextAfterTestClass();
    }

    @BeforeClass(alwaysRun = true, dependsOnMethods = { "springTestContextBeforeTestClass" })
    @Override
    protected void springTestContextPrepareTestInstance() throws Exception {
        currentTestClass.set(this.getClass());
        super.springTestContextPrepareTestInstance();
        currentTestClass.set(null);
    }

    @BeforeMethod
    public void initializeMockedBeans() {
        MockBeanRegistration.initializeMockedBeans(this);
    }

    protected static class MockBeanConfiguration {
        MockBeanConfiguration(ApplicationContext context) {
            MockBeanRegistration.registerMocks((BeanDefinitionRegistry) context, currentTestClass.get());
        }
    }
}
I bumped into this error after moving classes into new packages under the java folder while omitting to move the corresponding test classes in the test folder.
After applying the changes in the test packages as well, it runs again.
You wrote that you experience the problem only in the Jenkins environment.
My guess is that Jenkins always starts with a fresh checkout of the project from a 100% clean state. In the other environments you might have residues from previous development, and these somehow allow the tests to 'work'; I would expect that it is Jenkins that gets it right...
Try setting up the app in a development environment from scratch. If you get the error there, you will be able to analyze it properly and correct it.
I have a JUnit main test suite. This suite contains many suites, one for each testing configuration:
@RunWith(ProgressSuite.class)
@SuiteClasses({
        SimpleTest.class,
        AboutTest.class,
        CDH4_JDBC_TestSuite.class,
        CDH5_JDBC_TestSuite.class,
        CDH4_Metastore_TestSuite.class,
        CDH5_Metastore_TestSuite.class,
        CDH4_JDBC_Kerberos_TestSuite.class,
        CDH5_JDBC_Kerberos_TestSuite.class,
        CDH4_Metastore_Kerberos_TestSuite.class,
        CDH5_Metastore_Kerberos_TestSuite.class,
})
public class TestSuite {
}
The suites for each testing configuration contain the same test cases, but different setUpClass() and tearDownClass() methods:
@RunWith(Suite.class)
@SuiteClasses({
        PerspectiveSwitchTest.class,
        NewFolderFromToolbarTest.class,
        RenameFolderFromToolbarTest.class,
        RenameFileFromToolbarTest.class,
        OpenFilePropertiesFromToolbarTest.class,
        OpenFolderPropertiesFromToolbarTest.class,
        DeleteFileFromToolbarTest.class,
        DeleteFolderFromToolbarTest.class,
        CopyPasteFolderFromToolbarTest.class,
        CopyPasteFileFromToolbarTest.class,
        CutPasteFolderFromToolbarTest.class,
        CutPasteFileFromToolbarTest.class,
})
public class CDH4_JDBC_Kerberos_TestSuite {
    private static SWTWorkbenchBot bot = new SWTWorkbenchBot();
    private static AddNewEcosystemNavigator addNewEcosystemNavigator;
    private static EcosystemConfigurationLoader ecosystemConfigurationLoader;
    private static EcosystemConfiguration ecosystemConfiguration;
    private static GenericNavigator genericNavigator;

    @BeforeClass
    public static void setUpClass() {
        bot = new SWTWorkbenchBot();
        addNewEcosystemNavigator = new AddNewEcosystemNavigator();
        ecosystemConfigurationLoader = new EcosystemConfigurationLoader();
        genericNavigator = new GenericNavigator();
        ecosystemConfiguration = ecosystemConfigurationLoader
                .getDefaultCDH4JDBCKerberosEcosystemConfiguration();
        addNewEcosystemNavigator.addNewEcosystemManually(bot,
                ecosystemConfiguration);
    }

    @AfterClass
    public static void tearDownClass() {
        genericNavigator.closeDialogWindow();
        addNewEcosystemNavigator.discardEcosystem(bot, ecosystemConfiguration);
    }
}
I am using Jenkins and Tycho for building the tests. When I run the test suite and some tests fail, I am not able to distinguish which configuration they failed on. In Jenkins I can see only information like: NewFolderFromToolbarTest was run 8 times (3 times failed, 5 times passed). Of course I am able to get the required information from the log, but that is time consuming.
Is there any way to get the required information? E.g. use a different test structure, use a different Jenkins plugin, rename methods dynamically if that is even possible, etc.? Any ideas please? Thanks a lot.
You could make the test classes abstract and have each configuration be a subclass:
public class CDH4NewFolderFromToolbarTest extends AbstractNewFolderFromToolbarTest {
    //...
}
Then, in your suite, reference the configuration-specific test classes:
@RunWith(Suite.class)
@SuiteClasses({
        CDH4PerspectiveSwitchTest.class,
        CDH4NewFolderFromToolbarTest.class,
        CDH4RenameFolderFromToolbarTest.class,
        CDH4RenameFileFromToolbarTest.class,
        //...etc
})
public class CDH4_JDBC_Kerberos_TestSuite {
    //same as before
}
I would advocate this over reconfiguring in each subclass, since @BeforeClass and @AfterClass will only be called once when they are put in the suite.
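The reason this helps in Jenkins is that each configuration subclass has its own class name, and the class name is what appears next to every result in the report. A plain-Java sketch of the template-method pattern behind it (JUnit annotations and the real navigator/configuration classes are omitted; names are illustrative):

```java
public class ConfigurationNamingDemo {
    // Shared test logic lives in the abstract base; each configuration
    // subclass only supplies its own setup (here reduced to a name).
    static abstract class AbstractNewFolderFromToolbarTest {
        abstract String configurationName(); // would drive @BeforeClass setup

        String reportLabel() {
            // getClass() is the concrete subclass, so the label
            // identifies the configuration a failure belongs to
            return getClass().getSimpleName() + " [" + configurationName() + "]";
        }
    }

    static class CDH4NewFolderFromToolbarTest extends AbstractNewFolderFromToolbarTest {
        @Override
        String configurationName() {
            return "CDH4_JDBC_Kerberos";
        }
    }

    public static void main(String[] args) {
        System.out.println(new CDH4NewFolderFromToolbarTest().reportLabel());
        // prints "CDH4NewFolderFromToolbarTest [CDH4_JDBC_Kerberos]"
    }
}
```

Because the concrete class name encodes the configuration, Jenkins shows e.g. eight distinct one-run tests instead of one test "run 8 times".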