I've started to write a class for permission checking on writing and updating objects in a database. For development and testing, the permission checking can be disabled via a Play config value. My first take was to write the tests with permission checking enabled. If the check is disabled while the test is run, I simply disable the test via
org.junit.Assume.assumeTrue(false);
Of course this means that in development, when permission checking is disabled, the tests will probably never be run.
Would it be more proper to have two code paths in each test, one for config enabled, one for config disabled?
Also, is this the time to introduce a mocking framework that can inject config values via static method replacement at will? While googling I stumbled over https://blog.codecentric.de/en/2011/11/testing-and-mocking-of-static-methods-in-java/, which recommends PowerMock for the job.
An alternative would be to create a static wrapper class for the play configuration values and allow the change of values there at will.
Which way is preferable?
1. make the config configurable at runtime
2. use a mocking framework to change the (static) methods at will
I vote for #1 - make the configuration configurable. I wrote a "framework" (two methods in a superclass) in one of my applications to allow the test to say what it wants the security role to be for the test (and putting it back at the end). This lets the test choose which role it wants to be run under independent of all other conditions.
I've used the same approach for other types of configuration as well.
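A minimal sketch of that "two methods in a superclass" idea, with a hypothetical static AppConfig standing in for the Play configuration (all names here are illustrative, not from a real framework): the test says what it wants the flag to be, and the old value is restored afterwards.

```java
// Sketch: a test superclass override/restore pair for a config flag.
// AppConfig is a stand-in for the application's configuration access.
public class ConfigOverrideExample {

    static class AppConfig {
        // Dev default: permission checking disabled.
        static boolean permissionCheckingEnabled = false;
    }

    // What the superclass provides: remember the old value,
    // set the one the test wants, and restore it at the end.
    static boolean savedValue;

    static void overridePermissionChecking(boolean enabled) {
        savedValue = AppConfig.permissionCheckingEnabled;
        AppConfig.permissionCheckingEnabled = enabled;
    }

    static void restorePermissionChecking() {
        AppConfig.permissionCheckingEnabled = savedValue;
    }

    public static void main(String[] args) {
        // A test that needs the check enabled, regardless of the dev default:
        overridePermissionChecking(true);
        try {
            System.out.println("during test: " + AppConfig.permissionCheckingEnabled);
        } finally {
            restorePermissionChecking();
        }
        System.out.println("after test: " + AppConfig.permissionCheckingEnabled);
    }
}
```

In JUnit, the override call would go in a @Before method and the restore in @After, so each test chooses its configuration independently of all other conditions.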
Related
I'd like to know whether any of you have experience with automated UI testing of modular apps. The whole app is like a typical CRM, where based on the client's needs you put together some of the available (predefined) modules to provide all the necessary functionality.
If there were a "static" app built of all these modules put together, we could test it quite easily, just going through all the defined test classes, because we would know the behaviour of and interactions between all these modules.
But if we need to test the app's behaviour when some random subset of its modules is put together, to check that they work well, we need some other approach.
Is there a solution, some recommended architectural pattern, or anything else that could help me perform such automated tests (using e.g. Selenium WebDriver)? Is this kind of test even possible to perform with the WebDriver library?
I'd be grateful if you'd share any of your thoughts and experiences in this area.
I am working in that area and had a similar situation, here's what I learned from it:
Avoid creating UI tests if you can. UI tests are intended to test the look of your application and that's it. Business logic (like when I change that setting, the displayed data should change, etc.) should be tested in unit tests which are much easier to implement. Interaction between the modules should be covered as much as possible in integration tests.
If you still have functionality left over that needs to be tested, create a config file that contains the information about what customer has which modules enabled. In your test, read that config and if a test is not supposed to run, abort it.
In case a future reader is looking for a working solution for this case: we can define a separate test suite for each app module, and then check each suite for a certain condition being met. If a suite doesn't meet its condition, we just skip it. E.g. we can fetch the app's bundles.json file, which will most likely contain all the information about the app's modules, and then process this file to find the modules that are unavailable in the currently deployed app.
See this as a nice reference on how to achieve this: Introduction to conditional test running in TestNG
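A framework-free sketch of that decision step, assuming a bundles.json that lists the deployed modules (the file name, its shape, and the naive string check are illustrative; a real implementation would use a JSON parser):

```java
import java.util.List;

// Sketch: decide per test suite whether to run it, based on which
// modules the deployed app's bundles.json declares.
public class ConditionalSuites {

    // Naive availability check; replace with real JSON parsing in practice.
    static boolean moduleAvailable(String bundlesJson, String moduleName) {
        return bundlesJson.contains("\"" + moduleName + "\"");
    }

    public static void main(String[] args) {
        // Stand-in for the fetched bundles.json content.
        String bundlesJson = "{ \"modules\": [\"contacts\", \"billing\"] }";

        for (String suite : List.of("contacts", "billing", "reporting")) {
            if (moduleAvailable(bundlesJson, suite)) {
                System.out.println("running suite: " + suite);
            } else {
                System.out.println("skipping suite: " + suite);
            }
        }
    }
}
```

In JUnit, the result would feed Assume.assumeTrue(...) so skipped suites are reported as ignored rather than failed; in TestNG, a @BeforeSuite method can throw SkipException to the same effect.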
I want to test the effects of a library call of my program with a real device. This call starts a service that sends an HTTP request to a server whose URL is hard-coded in the resources.
I want to verify that the request is sent correctly. So I set up a local HTTP server, but to be able to use it I have to change/override/mock the resource so it points to http://127.0.0.1 instead.
I want to do "end-to-end" testing; in this case it's important that the service makes an actual network request, albeit a local one.
I've tried to override the value by creating a string resource with the same name in androidTest/res/values/strings.xml, but that resource is only visible in the test package, not in the application package.
Using the Instrumentation class only allows me to obtain the Context reference, but there's no way to replace it (or the return value of getResources()) with a mock or something similar.
How can I change a resource value of an Application under test?
You have a couple of choices:
Dependency injection
Stubs/mocks
SharedPreferences
Scripts or gradle tasks
Dependency injection
Use a library like RoboGuice or Dagger. Inject an object that handles making the API requests. Then, in your test setup, you can replace the injection modules with testing versions instead. That way your test code runs instead of the original, and that code can pass in different strings (either hard-coded or from the test strings.xml).
DI libraries can be expensive to set up: there's a high learning curve, and there can be performance problems if they're not used correctly. They can even introduce hard-to-debug problems if the scope/lifetime of the objects isn't configured correctly. If testing is the only reason to use DI, it might not be worth it if you're not comfortable with a DI container.
Stubs/mocks
Wrap up your calls in something that implements a custom interface you write. Your main implementation then fills in the host URL and calls the API. Then, in tests, use a combination of stubs or mocks on that interface to replace the code that fills in the host URL part.
This is less of an integration test, since the stubs or mocks replace parts of the code, but it is simpler than setting up a dependency injection framework.
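A minimal sketch of the interface-wrapper idea; ApiEndpoint and both implementations are hypothetical names introduced for illustration, not an existing API:

```java
// Sketch: wrap the host URL behind an interface so tests can
// substitute a local endpoint for the hard-coded production one.
public class EndpointStubExample {

    interface ApiEndpoint {
        String baseUrl();
    }

    // Production implementation: the host hard-coded in the resources.
    static class ProductionEndpoint implements ApiEndpoint {
        public String baseUrl() { return "https://api.example.com"; }
    }

    // Test stub: points at the local HTTP server instead.
    static class LocalEndpoint implements ApiEndpoint {
        public String baseUrl() { return "http://127.0.0.1:8080"; }
    }

    // The service under test only ever sees the interface.
    static String buildRequestUrl(ApiEndpoint endpoint, String path) {
        return endpoint.baseUrl() + path;
    }

    public static void main(String[] args) {
        System.out.println(buildRequestUrl(new ProductionEndpoint(), "/v1/ping"));
        System.out.println(buildRequestUrl(new LocalEndpoint(), "/v1/ping"));
    }
}
```

The instrumented test constructs the service with LocalEndpoint (by hand or via a mocking library), so the real request goes to the local server while production code paths stay unchanged.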
SharedPreferences
Use the Android SharedPreferences system. Have it default to a certain (production) endpoint, but allow the app to be started on the testing device with some dialog or settings screen that lets you change the host URL. Run the tests again and now they point to a different API URL.
Scripts or gradle tasks
Write a script or gradle task that modifies the source before it is compiled in certain scenarios.
This can be fairly complicated and might be too platform- or system-dependent if not done right. It will probably be fairly brittle to changes in the system, and might introduce bugs if the wrong command is run when building the final packaged version and the wrong code goes out to the market.
Personal opinion
Which do I recommend? If you and/or your team are familiar with a DI library like RoboGuice or Dagger, I recommend that option. It is the most formal, type-safe and strict solution, and it maintains more of the integrity of the stack, testing the whole solution.
If you're not familiar with a good DI library, stubs/mocks and interface wrappers are a good fall back solution. They partly have to be used in the DI solution anyway, and you can write enough tests around them to cover a good majority of the cases you need to test (and are in control of). It is close enough to the DI solution that I would recommend this to everyone who doesn't use DI in the project already.
The SharedPreferences solution works great for switching between staging and production environments for QA and support. However, I wouldn't recommend it for automated tests: the app will most likely be reinstalled/reset so often during development that it would get annoying resetting that URL each time. Also, first runs of tests would probably fail, headless tests on a CI server would fail, etc. (You could default the URL to localhost, but then you run the risk of accidentally releasing that default to production sometime.)
I don't recommend scripts or hacked-up gradle tasks. Too brittle, less clear to other developers who come behind you, and more complicated than they're worth, IMO.
In addition to Jon Adams's solutions, there's a further one:
Override resource in build type
By default, a library module is built in release mode when it is used by another module; the debug mode is only used for testing (unit tests and instrumented tests). Therefore, using resource overriding, it's possible to change a resource value for that library's instrumentation tests only, while the library's consumers still see the original value.
This has some caveats though:
Instrumented/integration tests must stay on the library itself, not on the main application package;
The same resource values have to be shared across all tests (unless using product flavors)
We are using JUnit to execute integration tests, as well as system integration tests that rely on external test systems (not necessarily maintained by our own company).
I wonder where to put the code that checks whether the system is available prior to running the test cases, so I can tell whether there is a network (or other) issue rather than a problem with the test itself.
JUnit allows you to set up parts of a test in JUnit rules. Is it a good idea to set up the service that communicates with the external system within the rule and do some basic checks ("ping") against the external system there? Or to store the state and, within the test, use a JUnit assume(rule.isAvailable()) to avoid having the test executed?
Or would it be smarter to put this verification code in a custom JUnit Runner?
Or is there even another way to do this? (simply create some utils?)
The goal is to skip the tests if some conditions are not met, since it is obvious the tests will fail. I know this indicates bad exception handling, but there is a lot of legacy code I can't change all at once.
I tried to find some articles myself, but it seems the search terms ("test", "external system" and so on) are a little thankless.
thanks!
Consider using org.junit.Assume.* as described here. When assumptions fail, your tests are ignored by default. You can write a custom runner to do something else when your assumptions fail.
The key thing though is that the tests don't fail when assumptions like the availability of your external services fail.
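A self-contained sketch of the availability check such a rule (or a @Before method) could run; in a real test the boolean would feed Assume.assumeTrue(...) so the tests are skipped, not failed, when the external system is down. The local ServerSocket here only exists to make the example runnable:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: a cheap TCP "ping" for an external system, suitable for
// use inside a JUnit rule or a @Before method.
public class AvailabilityCheck {

    static boolean isAvailable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;   // connection accepted: system is reachable
        } catch (IOException e) {
            return false;  // refused or timed out: skip the tests
        }
    }

    public static void main(String[] args) throws IOException {
        int port;
        // Start a throwaway local server so the demo is deterministic.
        try (ServerSocket server = new ServerSocket(0)) {
            port = server.getLocalPort();
            System.out.println("up: " + isAvailable("127.0.0.1", port, 500));
        }
        // After the server is closed, the same check reports unavailability.
        System.out.println("down: " + isAvailable("127.0.0.1", port, 500));
    }
}
```

Wrapping this in an ExternalResource rule that calls assumeTrue(isAvailable(...)) once per class keeps the check out of the individual test methods.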
If the ping applies to every single test in the class, I would put the ping call in the @Before method. @Before will get executed before every single test method (i.e., every @Test-annotated method).
If the ping does not apply to all tests in the class, then you would have to call ping explicitly from those methods.
I'm creating a Spring-based web application where I use Spring to wire the implementation of a service to the defined interface. So far, so standard. Works fine.
I want to allow a user to override the application's behavior at runtime for his session. For that I want Spring to change the implementation behind an interface depending on the user session.
A use case for this is automated test cases that run on INT and need to test the output of an email created by the system. On INT there is an email service configured that sends emails to our mail server. I don't want the test case to have to check mails using a mail protocol. I want, while automated test cases are running, to change the email implementation to write the email as comments into the HTML, so my tests can easily check the result. And there are some more such cases where it would be nice to change the implementing bean under special circumstances.
Is there a concept in spring that helps me to implement such a feature or do I have to create that on my own?
Additional information: it's all about automated acceptance tests. These tests run on systems that we share with manual testers.
=> Manual testers want to get a real email for their tests
=> Automatic tests reduce complexity by not receiving emails, just checking the email-content with less dependencies.
There would be no problem if we had two systems, one configured for the humans' needs and one for the automated tests' needs. But that's not the case, so I need a way to change the system's behavior at runtime.
It is usually done in the following way:
set up the tests using the spring-test module, which lets you create a Spring context for testing and inject beans into your test classes,
use the same context files for the tests as you do for production, except that you create a separate Spring profile in which the default mail service implementation is substituted with a mock,
write a test case where you simulate the steps done by the user programmatically and finally assert something like assertEquals("<expected_email_text>", mailServiceMock.getLastEmail()).
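The profile idea can be sketched without the framework; MailService, the two implementations, and the profile map below are all hypothetical names standing in for what, with spring-test, would be @Profile("test") beans activated by @ActiveProfiles("test") on the test class:

```java
import java.util.Map;
import java.util.function.Supplier;

// Framework-free sketch of profile-based bean substitution:
// the "test" profile wires a recording mock instead of the real sender.
public class ProfileMockExample {

    interface MailService {
        void send(String to, String body);
    }

    static class SmtpMailService implements MailService {
        public void send(String to, String body) {
            // would talk to the real mail server here
        }
    }

    static class MockMailService implements MailService {
        String lastEmail;
        public void send(String to, String body) { lastEmail = body; }
    }

    public static void main(String[] args) {
        // One bean definition per profile.
        Map<String, Supplier<MailService>> profiles = Map.of(
            "production", SmtpMailService::new,
            "test", MockMailService::new);

        MailService mail = profiles.get("test").get();   // active profile: test
        mail.send("user@example.com", "Welcome!");

        // The assertion step from the test case:
        System.out.println("last email: " + ((MockMailService) mail).lastEmail);
    }
}
```

Manual testers keep the production profile and receive real emails; the automated run activates the test profile and asserts on the recorded content directly.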
From your question it is not clear why you would deviate from the standard approach described above. If you explained your reasons perhaps it would be easier to come up with an appropriate answer.
Can I create a junit environment without file system and network access? I want to enforce stricter rules for our test cases.
Ideally, this would be configurable with maven and work for the default test phase.
Based on this answer https://stackoverflow.com/a/309427/116509, I think you could set a SecurityManager in test setup and restore the original on teardown.
However, IMHO, some unit tests should be allowed to touch the file system, if for example the class under test actually creates files as part of its contract. (You can use dependency injection to make sure the files are created in a temp directory). Likewise, a good unit test of a class that uses HTTP should test it against an HTTP endpoint. Otherwise you just end up mocking everything and your test becomes worthless. I suppose the default should be to deny access, and then a developer would need to specifically override the permissions for this kind of test.
The typical way to handle these dependencies on the file system/network access is to mock them out in a test context. This way, the real code can go through the normal channels, but your tests don't have to rely on a file system or a network.
Look into mocking frameworks to help you do a lot of this. Enabling this kind of testing will also make your code cleaner, too. :)
You can use Ashcroft to prohibit access to the file system and other resources from your tests. It uses Java's SecurityManager to restrict access to certain resources.
Another approach would be to use AspectJ and implement several advices that prohibit calls to certain APIs or packages.
I'm not sure what you mean by a JUnit environment, but you should not need a file system or network access to run unit tests. On the other hand, if you are testing code that uses network and filesystem APIs, you may have an issue. In that case, you may need to abstract your code into smaller testable chunks. You should not be testing whether the network and filesystem APIs are working in a unit test; those are integration tests. You should only be testing your own code in unit tests.