User-dependent bean definition in Spring - java

I'm creating a Spring-based web application where I use Spring to bind the implementation of a service to its defined interface. So far, so standard. Works fine.
I want to allow a user to override the application's behavior at runtime for his session. For that I want Spring to change the implementation behind an interface depending on the user session.
A use case for this is automated test cases that run on INT and need to check the output of an email created by the system. On INT there is an email service configured that sends emails to our mail server. I don't want the test case to have to check mails via a mail protocol. Instead, when automated test cases are running, I want to change the email implementation to one that writes the email as comments into the HTML, so my tests can easily check the result. And there are more such cases where it would be nice to change the implementing bean under special circumstances.
Is there a concept in spring that helps me to implement such a feature or do I have to create that on my own?
Additional information: it's all about automated acceptance tests. Those tests run on systems that we share with manual testers.
=> Manual testers want to receive a real email for their tests.
=> Automated tests reduce complexity by not receiving emails and just checking the email content, with fewer dependencies.
There would be no problem if we had two systems, one configured for the humans' needs and one for the automated tests' needs. But that's not the case, so I need a way to change the system's behavior at runtime.

It is usually done in the following way:
set up tests using the spring-test module, which lets you create a Spring context for testing and inject beans into your test classes,
use the same context files for tests as you do for production, except create a separate Spring profile in which the default mail service implementation is substituted with a mock (as sketched after this list),
write a test case where you simulate the steps done by the user programmatically, and finally assert something like assertEquals("<expected_email_text>", mailServiceMock.getLastEmail()).
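A minimal sketch of that setup, assuming the application exposes a MailService interface (MailService, RecordingMailService, and AppConfig are hypothetical names, not taken from the question):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.context.annotation.Profile;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// Assumed application interface:
interface MailService {
    void send(String recipient, String body);
}

// Recording stub: keeps the last "sent" mail in memory instead of using SMTP.
class RecordingMailService implements MailService {
    private String lastEmail;

    @Override
    public void send(String recipient, String body) {
        this.lastEmail = body;
    }

    public String getLastEmail() {
        return lastEmail;
    }
}

@Configuration
class TestMailConfig {
    // Only active under the "test" profile; @Primary lets it win over the real bean.
    @Bean
    @Primary
    @Profile("test")
    public MailService mailService() {
        return new RecordingMailService();
    }
}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { AppConfig.class, TestMailConfig.class })
@ActiveProfiles("test")
public class RegistrationMailTest {

    @Autowired
    private MailService mailService;

    @Test
    public void confirmationMailHasExpectedText() {
        // ... simulate the user's steps programmatically ...
        assertEquals("<expected_email_text>",
                ((RecordingMailService) mailService).getLastEmail());
    }
}

With the "test" profile inactive, the production implementation is used, so manual testers on the shared system still receive real mails.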
From your question it is not clear why you would deviate from the standard approach described above. If you explained your reasons perhaps it would be easier to come up with an appropriate answer.

Related

Mocking a resource on Android instrumentation test

I want to test the effects of a library call of my program with a real device. This call starts a service that sends an HTTP request to a server whose URL is hard-coded in the resources.
I want to verify that the request is sent correctly. So I set up a local HTTP server, but to be able to use it I have to change/override/mock the resource so it points to http://127.0.0.1 instead.
I want to do "end-to-end" testing; in this case it's important that the service makes an actual network request, although locally.
I've tried to override the value by creating a string resource with the same name in androidTest/res/values/strings.xml, but that resource is only visible in the test package, not in the application package.
Using the Instrumentation class only allows me to obtain the Context reference, but there's no way to replace it (or the return value of getResources()) with a mock or something similar.
How can I change a resource value of an Application under test?
You have a couple of choices:
Dependency injection
Stubs/mocks
SharedPreferences
Scripts or gradle tasks
Dependency injection
Use a library like RoboGuice or Dapper. Inject an object that handles making the API requests. Then, in your test setup, you can replace the injection modules with testing versions. That way your test code runs instead of the original, and that code can pass in different strings (either hard-coded or from the test strings.xml).
DI libraries can be expensive to set up: there's a high learning curve, and they can cause performance problems if not used correctly. They can even introduce hard-to-debug problems if the scope/lifetime of the objects isn't configured correctly. If testing is the only reason to use DI, it might not be worth it if you're not comfortable with a DI container.
Stubs/mocks
Wrap your calls up in something that implements a custom interface you write. Your main implementation then fills in the host URL and calls the API. In tests, use a combination of stubs or mocks of that interface to replace the code that fills in the host URL part.
This is less of an integration test, since the stubs or mocks replace parts of the code, but it is simpler than setting up a dependency injection framework.
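For illustration, a hand-rolled wrapper of that shape (ApiService, HostProvider, and the URLs are hypothetical, not from the question):

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;

// The code that "fills in the host URL" lives behind this interface.
interface HostProvider {
    String baseUrl();
}

// The service itself never hard-codes the host.
class ApiService {
    private final HostProvider hostProvider;

    ApiService(HostProvider hostProvider) {
        this.hostProvider = hostProvider;
    }

    String fetch(String path) {
        try (InputStream in = new URL(hostProvider.baseUrl() + path).openStream();
             Scanner scanner = new Scanner(in).useDelimiter("\\A")) {
            return scanner.hasNext() ? scanner.next() : "";
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

// In the instrumentation test, a stub provider redirects the real HTTP
// request to the local server, so the network round trip still happens.
class ApiServiceTest {
    @org.junit.Test
    public void requestHitsLocalServer() {
        ApiService service = new ApiService(new HostProvider() {
            @Override
            public String baseUrl() {
                return "http://127.0.0.1:8080"; // the local HTTP server from the question
            }
        });
        // ... start the local server, call service.fetch("/ping"),
        // then assert on what the server received ...
    }
}

Production code would pass in a HostProvider that reads the URL from the resources.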
SharedPreferences
Use the Android SharedPreferences system. Have it default to a certain endpoint (production), but allow the app to be started on the testing device with some dialog or settings screen that lets you change the host URL. Run the tests again and now they point to a different API URL.
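A sketch of that lookup (the preference key and default URL are made up):

import android.content.Context;
import android.content.SharedPreferences;
import android.preference.PreferenceManager;

public final class Endpoints {

    private Endpoints() {
    }

    // Defaults to production; a hidden settings screen on the test device
    // can overwrite "api_base_url" before a test run.
    public static String baseUrl(Context context) {
        SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(context);
        return prefs.getString("api_base_url", "https://api.example.com/");
    }
}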
Scripts or gradle tasks
Write a script or Gradle task that modifies the source before it is compiled in certain scenarios.
This can be fairly complicated and might even be too platform- or system-dependent if not done right. It will probably be fairly brittle to changes in the system, and it might introduce bugs if the wrong command is run when building the final packaged version and the wrong code goes out to the market.
Personal opinion
Which do I recommend? If you and/or your team are familiar with a DI library like RoboGuice or Dapper, I recommend that option. It is the most formal, type-safe, and strict solution, and it maintains more of the integrity of the stack so you test the whole solution.
If you're not familiar with a good DI library, stubs/mocks and interface wrappers are a good fallback. They partly have to be used in the DI solution anyway, and you can write enough tests around them to cover a good majority of the cases you need to test (and are in control of). This is close enough to the DI solution that I would recommend it to everyone who doesn't already use DI in the project.
The SharedPreferences solution works great for switching between staging and production environments for QA and support. However, I wouldn't recommend it for automated tests: the app will most likely be reinstalled/reset so often during development that it would get annoying to reset that URL every time. Also, first runs of tests would probably fail, headless tests on a CI server would fail, etc. (You could default the URL to localhost, but then you run the risk of accidentally releasing that default to production at some point.)
I don't recommend scripts or hacked-up Gradle tasks. Too brittle, less clear to other developers who come after you, and more complicated than they're worth, IMO.
In addition to Jon Adams's solutions, there's a further one:
Override resource in build type
By default, a library module is built in release mode when it's used by another module; the debug mode is only used for testing (unit tests and instrumented tests). Therefore, using resource overriding, it's possible to change a resource value for the library's instrumentation tests only, while its consumers keep the original value.
This has some caveats though:
Instrumented/integration tests must stay in the library itself, not in the main application package;
The same resource values have to be shared across all tests (unless you use product flavors).

Custom Filter Chain for TestClasses?

I'm using Spring Boot and @WebIntegrationTest to run some Selenium tests, and I'm trying to figure out how to add/remove some filters for my test cases.
I've gone over the docs a few times and have not been able to find a way to do this. Is it possible?
Please note: I am not using MockMvc, and for these test cases we do not want to.
See the Spring Boot reference docs on how to register or disable servlet filters. To register one, just implement the Filter interface and register it with the @Bean annotation.
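For example, a sketch of such a registration in a configuration class (the filter body is a placeholder; in the Boot 1.x era of @WebIntegrationTest, FilterRegistrationBean lives in org.springframework.boot.context.embedded):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.springframework.boot.context.embedded.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TestFilterConfig {

    @Bean
    public FilterRegistrationBean auditFilter() {
        return new FilterRegistrationBean(new Filter() {
            @Override
            public void init(FilterConfig filterConfig) {
            }

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                // ... inspect or record the request for the test here ...
                chain.doFilter(req, res);
            }

            @Override
            public void destroy() {
            }
        });
    }
}

Going the other way, wrapping an existing filter in a FilterRegistrationBean and calling setEnabled(false) on the registration keeps that filter out of the chain.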
But my understanding is that Selenium testing should treat the application as a black box and shouldn't mix the testing context with the production context. Optionally, this testing can happen against a production environment.
Personally, I would include one or two sanity tests in the application build itself to make sure it's working end to end. But I wouldn't mix contexts anyway.
Otherwise I would place all the tests in a separate project firing requests against PROD or a continuous delivery environment.
BTW, I highly recommend looking into the Page Object pattern when doing Selenium testing.

Proper testing when config influences method results?

I've started to write a class for permission checking on writing and updating objects in a DB. For development and testing, the permission checking can be disabled via a Play config value. My first take was to write the tests with the permission checking enabled. If the check is disabled while the test is run, I simply disable the test via
org.junit.Assume.assumeTrue(false);
Of course this means that in development, when permission checking is disabled, the tests will probably never run.
Would it be more proper to have two code paths in each test, one for config enabled, one for config disabled?
Also, is this the time to introduce a mocking framework that injects config values via static method replacement at will? When googling I stumbled over https://blog.codecentric.de/en/2011/11/testing-and-mocking-of-static-methods-in-java/, which recommends PowerMock for the job.
An alternative would be to create a static wrapper class for the Play configuration values and allow changing the values there at will.
Which way is preferable?
make the config configurable at runtime
use a mocking framework to change the (static) methods at will
I vote for #1: make the configuration configurable. I wrote a "framework" (two methods in a superclass) in one of my applications to allow a test to say what it wants the security role to be (and to put it back at the end). This lets the test choose which role it runs under, independent of all other conditions.
I've used the same approach for other types of configuration as well.
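The superclass can be as small as this sketch; ConfigHolder is a stand-in for however the application stores the current role (all names hypothetical):

import org.junit.After;

public abstract class RoleAwareTest {

    // Stand-in for the application's static config access.
    static final class ConfigHolder {
        private static String role = "default";

        static String getRole() {
            return role;
        }

        static void setRole(String newRole) {
            role = newRole;
        }
    }

    private String originalRole;

    // A test calls this to choose the security role it runs under.
    protected void runAsRole(String role) {
        originalRole = ConfigHolder.getRole();
        ConfigHolder.setRole(role);
    }

    // Put the original role back at the end, whatever the test did.
    @After
    public void restoreRole() {
        if (originalRole != null) {
            ConfigHolder.setRole(originalRole);
        }
    }
}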

Mocking / stubbing "third party" services during integrated development

The application I am currently working on is being worked on by 3 separate teams, each working away on different functional areas that come together at the end of the day. The difficulty is keeping the 3 teams in sync and not having one team's issues affect another. I am looking for a way to stub out / mock the calls being made to some of the services provided by the other teams, so that we can work separately most of the time yet quickly switch back to integrated mode when needed.
Ideally I would like:
- during normal development, I could turn on a flag so that those services become mock services (for example, when I am just developing away on my part of the code and don't really care whether the other team's service returns the right thing, just that it returns something)
- I don't want to have to add code everywhere that checks this flag and, if it is on, uses the mock, else uses the real thing... I just want it to automatically know to use the mock class when this flag is on
We are using Java 7 + CDI + JBoss. Is this possible to do with some kind of wiring or filters?
TIA.
Using alternatives you can achieve this in a clean way: you can switch back to integrated mode whenever needed by disabling the alternative in beans.xml.
CDI alternatives are a very good fit for mock services.
Instead of having to change the source code of your application, however, you can make the choice at deployment time by using alternatives.
Alternatives are commonly used for purposes like the following:
To handle client-specific business logic that is determined at runtime
To specify beans that are valid for a particular deployment scenario (for example, when country-specific sales tax laws require country-specific sales tax business logic)
To create dummy (mock) versions of beans to be used for testing
Alternatives allow you to override this at execution time using the beans.xml file, a simple deployment artifact.
A typical scenario would be to use a different beans.xml for each environment and thereby enable mock alternatives for components that you don't want to execute in your local/integration environments.
Using Alternatives in CDI Applications
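A minimal sketch of such an alternative, assuming the other team's service is consumed through an InventoryService interface (interface, class, and package names are hypothetical):

import javax.enterprise.inject.Alternative;

// Assumed application interface:
// public interface InventoryService { int stockLevel(String productId); }

// Picked up only in deployments whose beans.xml lists it as an alternative.
@Alternative
public class MockInventoryService implements InventoryService {

    @Override
    public int stockLevel(String productId) {
        // Just return something plausible instead of calling the other team's service.
        return 42;
    }
}

and in the development deployment's beans.xml:

<beans xmlns="http://java.sun.com/xml/ns/javaee">
    <alternatives>
        <class>com.example.inventory.MockInventoryService</class>
    </alternatives>
</beans>

Shipping a beans.xml without the <alternatives> entry switches everything back to the real implementations, with no source changes.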
In Spring-based applications (using dependency injection), I've accomplished this by keeping two application contexts, one configured to use stubs and one configured to use real code. I wrote some wrapper code to load the appropriate application context at the right time, but you could manage it other ways, i.e. with separate run targets that use a different classpath.
In terms of the stubs, I've often created them by hand (i.e. written stubbed services that wrap a spreadsheet of canned data or generate random data), but Mockito could be useful when building certain kinds of stubs.
Depending on what kinds of resources you need to stub (and whether you have a budget), another option is service virtualization. I don't have any direct experience using service virtualization tools, but my current client is using the commercial LISA tool to stub out a SOA service layer. I gather that several companies sell similar tools.

End to End testing of Webservices

first time poster and TDD adopter. :-) I'll be a bit verbose so please bear with me.
I've recently started developing SOAP-based web services using the Apache CXF framework, Spring, and Commons Chain for implementing business flow. The problem I'm facing is with testing the web services -- testing as in unit testing and functional testing.
My first attempt at unit testing was a complete failure. To keep the unit tests flexible, I kept my test data in a Spring XML file. Also, instead of creating instances of the "components" to be tested, I retrieved them from my Spring application context. The XML files harboring the data quickly got out of hand; creating object graphs in XML turned out to be a nightmare. Since the "components" to be tested were picked from the Spring application context, each test run loaded all the components involved in my application, the DAO objects used, etc. Also, as opposed to unit tests being concentrated on testing only the component, my unit tests started hitting databases, communicating with mail servers, etc. Bad, really bad.
I knew what I had done wrong and started to think of ways to rectify it. Following advice from one of the posts on this board, I looked up Mockito, the Java mocking framework, so that I could do away with using real DAO classes and mail servers and just mock the functionality.
With unit tests a bit under control, this brings me to my second problem: the dependence on data. The web services I have been developing have very little logic but rely heavily on data. As an example, consider one of my components:
public class PaymentScheduleRetrievalComponent implements Command {

    private BillingDAO billingDAO; // injected, e.g. via Spring

    public boolean execute(Context ctx) {
        Policy policy = (Policy) ctx.get("POLICY");
        List<PaymentSchedule> list = billingDAO.getPaymentStatementForPolicy(policy);
        ctx.put("PAYMENT_SCHEDULE_LIST", list);
        return false;
    }
}
A majority of my components follow the same route: pick a domain object from the context, hit the DAO [we are using iBatis as the SQL mapper here] and retrieve the result.
So, now the questions:
- How are DAO classes tested, especially when a single insertion or update might leave the database in an "unstable" state [in cases where, let's say, 3 insertions into different tables actually form a single transaction]?
- What is the de-facto standard for functional testing of web services which move around a lot of data, i.e. mindless insertions/retrievals from the data store?
Your personal experiences/comments would be greatly appreciated. Please let me know if I've missed any details in explaining the problem at hand.
-sasuke
I would stay well away from the "Context as global hashmap" 'pattern' if I were you.
Looks like you are testing your persistence mapping...
You might want to take a look at: testing persistent objects without spring
I would recommend running your unit tests against an in-memory database such as HSQLDB. You can use it to create your schema on the fly (for example, if you are using Hibernate, you can use your XML mapping files), then insert/update/delete as required before destroying the database at the end of your unit test. At no time will your test interfere with your actual database.
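A bare-bones sketch of that lifecycle with plain JDBC and HSQLDB on the test classpath (table and column names are invented):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class PaymentScheduleDaoTest {

    private Connection connection;

    @Before
    public void createInMemoryDatabase() throws Exception {
        // "mem:" means purely in-memory; the database vanishes with the JVM.
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        try (Statement st = connection.createStatement()) {
            st.execute("CREATE TABLE payment_schedule (id INT PRIMARY KEY, amount DECIMAL(10,2))");
        }
    }

    @After
    public void destroyDatabase() throws Exception {
        try (Statement st = connection.createStatement()) {
            st.execute("SHUTDOWN"); // drop the whole in-memory database
        }
        connection.close();
    }

    @Test
    public void insertAndReadBack() throws Exception {
        try (Statement st = connection.createStatement()) {
            st.executeUpdate("INSERT INTO payment_schedule VALUES (1, 42.00)");
            // ... run the DAO under test against this connection and assert ...
        }
    }
}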
For your second problem (end-to-end testing of web services), I have successfully unit tested CXF-based services in the past. The trick is to publish your web service using a lightweight web server at the beginning of your test (Jetty is ideal), then use CXF to point a client at your web service endpoint, run your calls, and finally shut down the Jetty instance hosting your web service once your unit test has completed.
To achieve this, you can use the JaxWsServerFactoryBean (server-side) and JaxWsProxyFactoryBean (client-side) classes provided with CXF; see this page for sample code:
http://cwiki.apache.org/CXF20DOC/a-simple-jax-ws-service.html#AsimpleJAX-WSservice-Publishingyourservice
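Condensed from that sample, the round trip looks roughly like this (HelloService and HelloServiceImpl are hypothetical names):

import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;
import org.junit.Test;

public class HelloServiceEndToEndTest {

    @Test
    public void roundTrip() {
        // Publish the service on an embedded endpoint for the duration of the test.
        JaxWsServerFactoryBean serverFactory = new JaxWsServerFactoryBean();
        serverFactory.setServiceClass(HelloService.class);
        serverFactory.setAddress("http://localhost:9000/hello");
        serverFactory.setServiceBean(new HelloServiceImpl());
        Server server = serverFactory.create();

        try {
            // Point a CXF client at the same address and run the calls.
            JaxWsProxyFactoryBean clientFactory = new JaxWsProxyFactoryBean();
            clientFactory.setServiceClass(HelloService.class);
            clientFactory.setAddress("http://localhost:9000/hello");
            HelloService client = (HelloService) clientFactory.create();

            // ... assertions against client ...
        } finally {
            server.stop(); // tear the endpoint down once the test completes
        }
    }
}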
I would also give a big thumbs up to SoapUI for doing functional testing of your web service. JMeter is also extremely useful for stress testing web services, which is particularly important for those services doing database lookups.
First of all: Is there a reason you have to retrieve the subject under test (SUT) from the Spring Application context? For efficient unit testing you should be able to create the SUT without the context. It sounds like you have some hidden dependencies somewhere. That might be the root of some of your headache.
How are DAO classes tested, especially when a single insertion or update might leave the database in an "unstable" state [in cases where, let's say, 3 insertions into different tables actually form a single transaction]?
It seems you are worried about the database's consistency after running the tests. If possible, use your own database for testing, where you don't need to care about it. If you have such a sandbox database you can delete data as you wish. In this case I would do the following:
Flag all your fake data with some common identifier, like putting a special prefix in a field.
Before running the test, run a delete statement that removes the flagged data (see the sketch below). If there is none, nothing bad happens.
Run your single DAO test. After that, repeat step 2 for the next test.
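Step 2 can be as simple as this sketch (table, column, and prefix are invented):

import java.sql.Connection;
import java.sql.Statement;

import org.junit.Before;

public abstract class SandboxDbTest {

    protected Connection connection; // obtained from the sandbox database

    @Before
    public void deleteFlaggedData() throws Exception {
        try (Statement st = connection.createStatement()) {
            // Removes leftovers from earlier runs; deletes nothing if none exist.
            st.executeUpdate("DELETE FROM payment_schedule WHERE policy_ref LIKE 'TEST_%'");
        }
    }
}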
What is the de-facto standard for functional testing of web services which move around a lot of data, i.e. mindless insertions/retrievals from the data store?
I am not aware of any. From the question you are asking I infer that you have the web service on one side and the database on the other. Split up the responsibilities. Have separate test suites for each side: one just testing database access (as described above), the other just testing web service requests and responses. In this case it pays off to stub/fake/mock the layer talking to the network. Or consider https://wsunit.dev.java.net/.
If the program is only shoving data in and out, there is not much behavior. If that is the case, then the hardest work is to unit test the database side and the web service side. The point is that you can do unit testing without the need for "realistic" data. For functional testing you will need hand-rolled data which is close to reality. This might be cumbersome, but if you have already unit tested the database and web service parts intensively, it should reduce the need for "realistic" test cases considerably.
First of all, let's make things clear.
In an ideal world the lifecycle of the software you are building is something like this:
- somebody makes a report with the customer, so you get a user story with examples of how the application should work
- you generalize the user story, so you get rules, which you call use cases
- you start to write a piece of functional (end-to-end) test, and it fails...
- after that you build the UI and mock out the services, so you get a green functional test and a specification of how your services should work...
- your job is to keep the functional test green and implement the services step by step, writing integration tests and mocking out dependencies with the same approach, until you reach the level of unit tests
- after that you do the next iteration of the use cases, write the next piece of functional test, and so on until the end of the project
- after that you run acceptance tests with the customer, who accepts the product and pays a lot
So what did we learn from this:
There are many types of tests (don't confuse them with each other):
functional tests - for testing the use cases (mock out nothing)
integration tests - for testing application, component, module, class interactions (mock out the irrelevant components)
unit tests - for testing a single class in isolation from its environment (mock out everything)
user acceptance tests - the customer makes sure that she accepts the product (manual functional tests, or a presentation built from automated functional tests in action)
You don't need to test everything with functional tests and integration tests, because that is impossible. Test only the relevant parts with functional and integration tests, and test everything with unit tests! Familiarize yourself with the testing pyramid.
Use TDD, it makes life easier!
How are DAO classes tested, especially when a single insertion or update might leave the database in an "unstable" state [in cases where, let's say, 3 insertions into different tables actually form a single transaction]?
You don't have to test your database transactions. Assume that they work well, because the database developers have already tested them, and I am sure you don't want to write concurrency tests... The DB is an external component, so you don't have to test it yourself. You can write a data access layer to adapt the data storage to your system, and write integration tests only for those adapters. In case of a database migration these tests will work with the adapters for the new database as well, because you write them to implement a specific interface... In every other test (except functional tests) you can mock out your data access layer. Do the same with every other external component: write adapters and mock them out. Put this kind of integration test in a different test suite than the other tests, because they are slow due to database access, filesystem access, etc...
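As a sketch of the adapter idea, reusing Policy and PaymentSchedule from the question's snippet (the store names are invented):

import java.util.ArrayList;
import java.util.List;

// The rest of the system talks to storage only through this adapter.
interface PaymentScheduleStore {
    List<PaymentSchedule> findByPolicy(Policy policy);

    void save(PaymentSchedule schedule);
}

// In-memory fake for all fast tests; the real iBatis-backed implementation
// gets its own (slow) integration test suite.
class InMemoryPaymentScheduleStore implements PaymentScheduleStore {

    private final List<PaymentSchedule> rows = new ArrayList<PaymentSchedule>();

    @Override
    public List<PaymentSchedule> findByPolicy(Policy policy) {
        // A real fake would filter by policy; kept trivial for the sketch.
        return new ArrayList<PaymentSchedule>(rows);
    }

    @Override
    public void save(PaymentSchedule schedule) {
        rows.add(schedule);
    }
}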
What is the de-facto standard for functional testing of web services which move around a lot of data, i.e. mindless insertions/retrievals from the data store?
You can mock out your data store with an in-memory DB which implements the same storage adapters, until you have implemented everything else except the database. After that you implement the data access layer for the real database and test it with your functional tests as well. It will be slow, but it only has to run once, for example for every new release... If you need functional tests during development, you can mock it out with an in-memory solution again... An alternative approach is to run only the affected functional tests during development, or to modify the settings of the test DB to make things faster, and so on... I am sure there are many test optimization solutions...
I must say I don't really understand your exact problem. Is the problem that your database is left in an altered state after you've run the test?
Yes, there are actually two issues here. The first is the database being left in an inconsistent state after running the test cases. The second is that I'm looking for an elegant solution for end-to-end testing of web services.
For efficient unit testing you should be able to create the SUT without the context. It sounds like you have some hidden dependencies somewhere. That might be the root of some of your headache.
That indeed was the root cause of my headaches, which I am now about to do away with, with the help of a mocking framework.
It seems you are worried about the database's consistency after running the tests. If possible, use your own database for testing, where you don't need to care about it. If you have such a sandbox database you can delete data as you wish.
This is indeed one of the solutions to the problem I mentioned in my previous post, but it might not work in all cases, especially when integrating with a legacy system in which the database/data isn't in your control, and in cases where some DAO methods require certain data to already be present in a given set of tables. Should I look into database unit testing frameworks like DbUnit?
In this case it pays off to stub/fake/mock the layer talking to the network. Or consider https://wsunit.dev.java.net/.
Ah, looks interesting. I've also heard of tools like SoapUI and the like which can be used for functional testing. Has anyone here had success with such tools?
Thanks for all the answers and apologies for the ambiguous explanation; English isn't my first language.
-sasuke
