Blanking fields with Reflection - java

I am writing a JUnit test framework to test web services.
There is a requirement for the input values to come from many different sources, e.g. an earlier web service call or literals within the class.
To achieve this I have constructors that accept the different inputs in different ways; all simple so far.
The problem is the web services also need to be exercised with a full data load and a mandatory-fields-only payload.
Rather than litter the (in some cases very long) tests with if statements deciding whether to set a value or not, I have written an @Optional annotation.
Adding this annotation causes a field to be nulled by the following code:
/**
 * Find all of the fields annotated with @Optional and null them.
 * @throws IllegalAccessException
 * @throws IllegalArgumentException
 */
private void blankOptionalFields() throws IllegalAccessException {
    for (Field field : this.getClass().getDeclaredFields()) {
        Annotation optionalAnnotation = field.getAnnotation(Optional.class);
        if (!(field.isSynthetic()) && optionalAnnotation instanceof Optional) {
            field.setAccessible(true);
            try {
                field.set(this, null);
            }
            catch (IllegalArgumentException e) {
                logger.debug("Tried to set a scalar field to null!", e);
            }
        }
    }
}
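For this to work, the @Optional annotation (whose definition is not shown in the question) must have runtime retention, otherwise field.getAnnotation() returns null. A minimal definition might look like:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks a test-payload field as optional so it can be blanked for mandatory-only runs.
// RUNTIME retention is required for reflective lookup via field.getAnnotation().
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface Optional {
}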
So two things:
1: Although this works it somehow feels fragile/dangerous, there must be a better approach?
2: If this is not a crazy approach, what is the best way to go about setting the scalar values to appropriate values?

How about defining an interface containing just the one method that blanks out optional attributes? You can test an object for implementing the interface and call the method directly.
This handles specific exceptions more elegantly than trying to create a catch all situation using reflection:
interface Blankable {
    /** @return true if all optional fields are successfully blanked. **/
    public boolean blankOptionalFields();
}
and use like:
if (obj instanceof Blankable) {
    if (!((Blankable) obj).blankOptionalFields()) {
        logger.debug("Could not blank optional fields for " + obj);
    }
}
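As a rough sketch of what an implementing class could look like (the payload class and its fields are hypothetical):

// Hypothetical test payload implementing Blankable.
public class OrderPayload implements Blankable {
    private String customerName;  // mandatory
    private String giftMessage;   // optional
    private Integer discountCode; // optional

    @Override
    public boolean blankOptionalFields() {
        // Each class knows its own optional fields, so no reflection is needed,
        // and scalar fields could be given explicit "empty" values here instead.
        giftMessage = null;
        discountCode = null;
        return true;
    }
}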

I would refactor the tests, splitting out the initialization code from the actual test code. For that matter, you could put the actual test code (the code that invokes the web service) into a method that is shared between multiple test methods.
As a semi-related comment: I would think of "unit" tests as exercising the service methods stand-alone, while "integration" tests would exercise it as an actual web service.

I'm not enamored with this approach because you're mixing test code in with your production code.
If you know which fields are mandatory ahead of time, is it possible to just loop through those fields and set them without a complicated if structure?
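A minimal sketch of that idea, assuming the mandatory field names are known up front (all names here are hypothetical):

import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Hypothetical: populate only the known mandatory fields by name, via reflection.
private static final List<String> MANDATORY_FIELDS = Arrays.asList("customerName", "orderId");

private void populateMandatoryOnly(Object payload, Map<String, Object> values) throws Exception {
    for (String name : MANDATORY_FIELDS) {
        Field field = payload.getClass().getDeclaredField(name);
        field.setAccessible(true);
        field.set(payload, values.get(name));
    }
}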

Related

exclusion of some fields/parameters from logging via Spring AOP

In my spring project I have such an aspect class for logging
@Aspect
@Component
public class BaseLoggingAspect {

    private static final Logger logger = LoggerFactory.getLogger(BaseLoggingAspect.class);

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ ElementType.FIELD, ElementType.PARAMETER })
    public @interface NonLoggingField {
    }

    @Pointcut("execution(public * *(..))")
    private void allPublicMethods() {
    }

    @Pointcut("within(img.imaginary.service.*)")
    private void inServices() {
    }

    @Pointcut("within(img.imaginary.dao.*)")
    private void inDao() {
    }

    @Before("allPublicMethods() && (inServices() || inDao())")
    public void logBeforeCall(JoinPoint joinPoint) {
        if (logger.isDebugEnabled()) {
            logger.debug("begin method {} in {} class with arguments: {}", joinPoint.getSignature().getName(),
                    joinPoint.getTarget().getClass().getSimpleName(), joinPoint.getArgs());
        }
    }
}
This aspect simply intercepts all the public methods of the service and dao layers and, at the start of execution, logs the method name, the class name, and the values of the method's arguments.
In this aspect I created a NonLoggingField annotation that I want to apply to certain fields of the objects that can be passed as parameters to these logged methods, for example:
public class User {

    @NonLoggingField
    public String userEmail;

    public String name;

    public User(String userEmail, String name) {
        this.userEmail = userEmail;
        this.name = name;
    }

    @Override
    public String toString() {
        return String.format("user name: %s and his email: %s", name, userEmail);
    }
}
The problem is that such objects are written to the log through their toString method, but the email must somehow be kept out of the log by means of the NonLoggingField annotation. I have thought about doing this through reflection, but it is not clear how to do it without overly difficult reflection code, especially considering that objects may contain objects of other types, which may in turn have annotated fields of their own, or collections of objects with such fields. Perhaps the AspectJ library can help, but I cannot find such mechanisms in it. Please help me come up with something.
During runtime, a method parameter is just a value. The JVM does not know at this point if the caller called the method using constants, literals, fields or results of other method calls. That kind of information, you only see in the source code. In byte code, whatever dereferencing operation or computation necessary to determine the parameter's value (or a reference to the corresponding object) is done before calling the method. So there is no connection to the field annotation.
Would annotating method parameters be an alternative for you?
If your requirement is very specific, e.g. intercept field accesses from toString methods and return dummy values instead if the field is annotated, that would be possible. But this would not be fool-proof. Imagine for example that toString calls a getter method instead of directly accessing the field, or that a method other than toString logs the field. You do not always want to falsify the field value on read access, because other parts of the application might rely on it working correctly. Not every toString call is made in order to log something.
I think you should solve the problem in another way, e.g. by applying filter rules in the logging tool you use. Or if you really want to solve it at the application level, you could create an interface like
public interface PrivacyLogger {
    String toStringSensitive();
}
and make each class containing sensitive information implement that interface. The logging aspect could then determine, for each printed object, whether it is an instance of PrivacyLogger. If so, it would log the result of toStringSensitive() instead of toString(), i.e. in the simplest case something like
Object toBeLogged = whatever();
logger.log(
    toBeLogged instanceof PrivacyLogger
        ? ((PrivacyLogger) toBeLogged).toStringSensitive()
        : toBeLogged
);
Of course, you need to iterate over getArgs() and determine the correct log string for each object. Probably, you want to write a utility method doing that for the whole parameters array.
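A minimal sketch of such a utility method (the helper name is illustrative):

import java.util.Arrays;

// Illustrative helper: replaces each sensitive argument by its privacy-safe representation.
static Object[] toLogSafe(Object[] args) {
    Object[] safe = new Object[args.length];
    for (int i = 0; i < args.length; i++) {
        safe[i] = args[i] instanceof PrivacyLogger
                ? ((PrivacyLogger) args[i]).toStringSensitive()
                : args[i];
    }
    return safe;
}

The aspect could then log Arrays.toString(toLogSafe(joinPoint.getArgs())) instead of the raw arguments.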
Moreover, in a complex class, the toStringSensitive() implementation should of course also check if its own fields are PrivacyLogger instances and in that case fold the values of their respective toStringSensitive() methods into its own, so that it works recursively.
I am sorry I have no better news for you, but privacy is something which needs to be built into an application from the ground up. There is no simple, fool-proof way to do that with one little aspect. The aspect can utilise the existing application infrastructure and avoid scattering and tangling, but it cannot decide on its own what needs to be prohibited from getting logged and what not.

What methods should I have JUnit tests for - How to mock methods with many dependencies

I am new to JUnit, and do not know which methods should have tests and which should not. Take the following example:
public List<Site> getSites(String user)
{
    SiteDao dao = new SiteDaoImpl();
    List<Site> siteList = new ArrayList<Site>();
    ServiceRequest rq = new ServiceRequest();
    rq.setUser(user);
    try
    {
        ServiceResponse response = dao.getResponse(rq);
        List<String> siteNums = response.getSiteNums();
        if (siteNums != null && !siteNums.isEmpty())
        {
            List<DbModelSite> siteInfo = dao.getSiteInfo(siteNums);
            if (siteInfo != null && !siteInfo.isEmpty())
            {
                siteList = SiteMapper.mapSites(siteInfo);
            }
        }
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    return siteList;
}
public static List<Site> mapSites(List<DbModelSite> siteInfo)
{
    List<Site> siteList = null;
    if (siteInfo != null && !siteInfo.isEmpty())
    {
        siteList = new ArrayList<Site>();
        for (DbModelSite temp : siteInfo)
        {
            Site currSite = mapSite(temp);
            siteList.add(currSite);
        }
    }
    return siteList;
}

public static Site mapSite(DbModelSite site)
{
    Site mappedSite = null;
    if (site != null)
    {
        mappedSite = new Site();
        mappedSite.setSiteNum(site.getSiteNum());
        mappedSite.setSpace(site.getSpace());
        mappedSite.setIndicator("Y");
    }
    return mappedSite;
}
It is pretty trivial to come up with a unit test for both the mapSites() and mapSite() methods, but where I am having trouble is with the getSites() method. Does it make sense to unit test this method? If so, how would I go about doing so? It seems that this would require quite a bit of mocking, and as I am very new to JUnit, I have not been able to figure out how to mock all of these objects.
So my question is really two fold:
How do you determine if a method needs to be unit tested?
How does one unit test a complex method which requires a large amount of mocking?
Yes, it makes sense to test that method.
The first thing to be able to test it, would be to use dependency injection. If the method creates its own SiteDao instance using new, there is no way you can tell the method to use another, mock instance of SiteDao.
So, read on dependency injection, and use it. Basically, it boils down to
public class MyService {
    private SiteDao siteDao;

    public MyService(SiteDao siteDao) {
        this.siteDao = siteDao;
    }

    // use the siteDao passed when constructing the object, instead of constructing it
}
That way, when testing your service, you can do
SiteDao mockSiteDao = mock(SiteDao.class);
SiteService service = new SiteService(mockSiteDao);
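Putting it together, a test might look roughly like this sketch (it assumes getSites has been refactored to use the injected dao, and that setSiteNums exists on ServiceResponse):

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class SiteServiceTest {

    @Test
    public void getSitesReturnsMappedSites() {
        // Stub the dao so no real web service or database is touched.
        SiteDao mockSiteDao = mock(SiteDao.class);
        ServiceResponse response = new ServiceResponse();
        response.setSiteNums(Arrays.asList("1001")); // assumed setter
        when(mockSiteDao.getResponse(any(ServiceRequest.class))).thenReturn(response);
        when(mockSiteDao.getSiteInfo(anyList())).thenReturn(Arrays.asList(new DbModelSite()));

        SiteService service = new SiteService(mockSiteDao);
        List<Site> sites = service.getSites("user");

        assertEquals(1, sites.size());
    }
}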
Here's one piece of advice that is not directly related to your question, but will make your code much simpler, and thus easier to test, too:
Never return null from a method returning a collection. Return an empty collection to signal "no element".
In general, don't accept null as a valid method argument value, especially if the argument is a collection.
Corollary of 1 and 2: by following these principles, you never need to check a collection for null or emptiness. Just use it directly.
This will reduce the number of if (siteNums != null && !siteNums.isEmpty()) cluttering your code, and you'll have way fewer branches to test, too.
Note that all sane libraries (the JDK methods, JPA, etc.) follow these principles. A JPA query will never return a null list for example.
Also, don't swallow an exception by just printing its stack trace and returning an empty list, as if nothing bad happened. Let the exception propagate so that you can notice and fix the bug.
Just imagine that this method is a method returning the number of cancer tumors found by a medical analysis system. Would you really like the system to tell you that you're in perfect health, whereas the system was in fact unable to do its job due to an exception? I would really prefer the system to say "I'm out of order, use another machine to be sure".
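Following that advice, the original method shrinks to something like this sketch (ServiceException is a hypothetical checked exception; the sketch assumes the null-free conventions above):

// Sketch: no null checks, no swallowed exceptions.
public List<Site> getSites(String user) throws ServiceException
{
    SiteDao dao = new SiteDaoImpl();
    ServiceRequest rq = new ServiceRequest();
    rq.setUser(user);
    ServiceResponse response = dao.getResponse(rq); // a failure propagates to the caller
    List<String> siteNums = response.getSiteNums(); // empty, never null, per principle 1
    return SiteMapper.mapSites(dao.getSiteInfo(siteNums));
}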
The idea of unit testing is to ensure that each "unit" (which is usually a method) can be tested in isolation so you can test that for given input you receive an expected output, so to answer your questions:
I would say all public methods should be unit tested.
If you are doing so much in your method that you need to mock a lot, then you probably want to break the functionality out into another class.
Going back to your example there are a couple of things to be wary of if you want to unit test:
new - anytime you use this keyword in a method you will find it difficult to mock that object. In some cases (like ServiceRequest) it's fine but in others such as SiteDao you'll have problems.
Static methods - same thing, with SiteMapper.mapSites(siteInfo) you will find it difficult to mock
You can use libraries such as PowerMock to mock new, private and static methods but I personally try to avoid that.

Getting single NonNull value using Spring Data

I'd like to have a method in my Repository that returns a single value.
Like this:
TrainingMode findByTrainingNameAndNameEng( String trainingName, String nameEng );
http://docs.spring.io/spring-data/data-jpa/docs/current/reference/html/
Spring Data Docs describe that in this case the method can return null if no entity is found.
I'd like to throw an exception with a generic message like No TrainingMode found by %trainingName% and %nameEng% or something like that.
I can use Optional<TrainingMode> as a return value and then use orElseThrow
Optional<TrainingMode> findByTrainingNameAndNameEng( String trainingName, String nameEng );
repository.findByTrainingNameAndNameEng(name, nameEng).orElseThrow(() -> new RuntimeException(...));
But then I would have to repeat that orElseThrow call everywhere the method is used. That is not clean - the DRY principle is broken.
How can I get a non-null single value with orElseThrow using Spring Data?
The DRY principle would be violated if you duplicate the null handling throughout the application logic wherever the method is invoked. If the DRY principle is the thing you are worried about most, then I can think of:
You can make a "Service" class which delegates calls to the annotated repository and handles the null-response logic, and use that service class instead of calling repositories directly. The drawback would be introducing another layer to your application (which would decouple repositories from your app logic).
There is possibility of adding custom behavior to your data repositories which is described in "3.6.1. Adding custom behavior to single repositories" section of documentation. Sorry for not posting the snippet.
The issue I personally have with the second approach is that it pollutes the app with interfaces, forces you to follow certain naming patterns (I never liked 'Impl' suffixes), and might make migrating code a bit more time consuming (when the app becomes big, it gets harder to track which interface is responsible for which custom behavior, and people then simply start creating their own behavior, which turns out to duplicate another).
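A minimal sketch of the first option (the service class name and exception message are illustrative):

import org.springframework.stereotype.Service;

// Illustrative service that centralizes the null handling in one place.
@Service
public class TrainingModeService {

    private final TrainingModeRepository repository;

    public TrainingModeService(TrainingModeRepository repository) {
        this.repository = repository;
    }

    public TrainingMode getByTrainingNameAndNameEng(String trainingName, String nameEng) {
        return repository.findByTrainingNameAndNameEng(trainingName, nameEng)
                .orElseThrow(() -> new RuntimeException(String.format(
                        "No TrainingMode found by %s and %s", trainingName, nameEng)));
    }
}

This assumes the Optional-returning repository signature from the question.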
I found a solution.
First, Spring Data treats getByName and findByName identically, and we can use that: in my case find* may return null (or a non-null Optional, as you wish), while get* must return a value: if null is returned, an exception is thrown.
I decided to use AOP for this case.
Here's the aspect:
@Aspect
@Component
public class GetFromRepositoryAspect {

    @Around("execution(public !void org.springframework.data.repository.Repository+.get*(..))")
    public Object aroundDaoMethod(ProceedingJoinPoint joinpoint) throws Throwable {
        Object result = joinpoint.proceed();
        if (null == result) {
            throw new FormattedException("No entity found with args %s",
                    Arrays.toString(joinpoint.getArgs()));
        }
        return result;
    }
}
That's all.
You can achieve this by using the Spring nullability annotations. If the method return type is just some Entity and it's not a wrapper type, such as Optional<T>, then org.springframework.dao.EmptyResultDataAccessException will be thrown in case of no results.
Read more about Null Handling of Repository Methods.
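A minimal sketch of that nullability approach (the package name is hypothetical): declare non-null defaults for the repository package in its package-info.java.

// package-info.java: makes non-null the default for repository methods in this package.
@org.springframework.lang.NonNullApi
package com.example.repository;

With that in place, the repository method can be declared without a wrapper, and an absent result raises EmptyResultDataAccessException:

TrainingMode findByTrainingNameAndNameEng(String trainingName, String nameEng);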

Record method calls in one session for replaying in future test sessions?

I have a backend system which we use a third-party Java API to access from our own applications. I can access the system as a normal user along with other users, but I do not have godly powers over it.
Hence to simplify testing I would like to run a real session and record the API calls, and persist them (preferably as editable code), so we can do dry test runs later with API calls just returning the corresponding response from the recording session - and this is the important part - without needing to talk to the above mentioned backend system.
So if my application contains a line of the form:
Object b = callBackend(a);
I would like the framework to first capture that callBackend() returned b given the argument a, and then when I do the dry run at any later time say "hey, given a this call should return b". The values of a and b will be the same (if not, we will rerun the recording step).
I can override the class providing the API so all the method calls to capture will go through my code (i.e. byte code instrumentation to alter behavior of classes outside my control is not necessary).
What framework should I look into to do this?
EDIT: Please note that bounty hunters should provide actual code demonstrating the behavior I look for.
You can actually build such a framework or template by using the proxy pattern. Here I explain how you can do it with the dynamic proxy pattern. The idea is to:
Write a proxy manager that hands out recorder and replayer proxies of the API on demand.
Write a wrapper class to store your collected information, and implement the hashCode and equals methods of that wrapper class for efficient lookup in a Map-like data structure.
Finally, use the recorder proxy to record and the replayer proxy to replay.
How recorder works:
invokes the real API
collects the invocation information
persists data in expected persistence context
How replayer works:
Collect the method information (method name, parameters)
If the collected information matches previously recorded information, then return the previously collected return value.
If no matching recorded invocation is found, invoke the real method and persist the collected information (as you wanted).
Now, let's look at the implementation. Suppose your API is MyApi, like below:
public interface MyApi {
    public String getMySpouse(String myName);
    public int getMyAge(String myName);
    ...
}
Now we will record and replay the invocation of public String getMySpouse(String myName). To do that, we can use a class to store the invocation information, like below:
public class RecordedInformation {

    private String methodName;
    private Object[] args;
    private Object returnValue;

    public String getMethodName() {
        return methodName;
    }

    public void setMethodName(String methodName) {
        this.methodName = methodName;
    }

    public Object[] getArgs() {
        return args;
    }

    public void setArgs(Object[] args) {
        this.args = args;
    }

    public Object getReturnValue() {
        return returnValue;
    }

    public void setReturnValue(Object returnValue) {
        this.returnValue = returnValue;
    }

    @Override
    public int hashCode() {
        return super.hashCode(); //change your implementation as you like!
    }

    @Override
    public boolean equals(Object obj) {
        return super.equals(obj); //change your implementation as you like!
    }
}
Now here comes the main part: the RecordReplyManager. It gives you a proxy object of your API, depending on whether you need recording or replaying.
public class RecordReplyManager implements java.lang.reflect.InvocationHandler {

    private Object objOfApi;
    private boolean isForRecording;

    public static Object newInstance(Object obj, boolean isForRecording) {
        return java.lang.reflect.Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                new RecordReplyManager(obj, isForRecording));
    }

    private RecordReplyManager(Object obj, boolean isForRecording) {
        this.objOfApi = obj;
        this.isForRecording = isForRecording;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        Object result = null;
        if (isForRecording) {
            try {
                System.out.println("recording...");
                System.out.println("method name: " + method.getName());
                System.out.print("method arguments:");
                for (Object arg : args) {
                    System.out.print(" " + arg);
                }
                System.out.println();
                result = method.invoke(objOfApi, args);
                System.out.println("result: " + result);
                RecordedInformation recordedInformation = new RecordedInformation();
                recordedInformation.setMethodName(method.getName());
                recordedInformation.setArgs(args);
                recordedInformation.setReturnValue(result);
                // persist your information
            } catch (InvocationTargetException e) {
                throw e.getTargetException();
            } catch (Exception e) {
                throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
            }
            return result;
        } else {
            try {
                System.out.println("replaying...");
                System.out.println("method name: " + method.getName());
                System.out.print("method arguments:");
                for (Object arg : args) {
                    System.out.print(" " + arg);
                }
                RecordedInformation recordedInformation = new RecordedInformation();
                recordedInformation.setMethodName(method.getName());
                recordedInformation.setArgs(args);
                // if this invocation information (RecordedInformation) is found in the previously
                // collected map, then return the returnValue from that RecordedInformation.
                // if no corresponding RecordedInformation exists, then invoke the real method
                // (like in the recording step), wrap the collected information into a
                // RecordedInformation and persist it as you like!
            } catch (InvocationTargetException e) {
                throw e.getTargetException();
            } catch (Exception e) {
                throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
            }
            return result;
        }
    }
}
If you want to record the method invocation, all you need is to get an API proxy like below:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithRecorder = (MyApi) RecordReplyManager.newInstance(realApi, true); // true for recording
myApiWithRecorder.getMySpouse("richard"); // to record getMySpouse
myApiWithRecorder.getMyAge("parker"); // to record getMyAge
...
And to replay, all you need is:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithReplayer = (MyApi) RecordReplyManager.newInstance(realApi, false); // false for replaying
myApiWithReplayer.getMySpouse("richard"); // to replay getMySpouse
myApiWithReplayer.getMyAge("parker"); // to replay getMyAge
...
And you are done!
Edit:
The basic steps of the recorder and replayer can be done in the above-mentioned way. Now it is up to you how you want to use or perform those steps. You can do whatever you like in the recorder and replayer code blocks; just choose your implementation!
I should prefix this by saying I share some of the concerns in Yves Martin's answer: that such a system may prove frustrating to work with and ultimately less helpful than it would seem at first blush.
That said, from a technical standpoint this is an interesting problem, and I couldn't resist taking a go at it. I put together a gist to log method calls in a fairly general way. The CallLoggingProxy class defined there allows usage such as the following.
Calendar original = CallLoggingProxy.create(Calendar.class, Calendar.getInstance());
original.getTimeInMillis(); // 1368311282470
CallLoggingProxy.ReplayInfo replayInfo = CallLoggingProxy.getReplayInfo(original);
// Persist the replay info to disk, serialize to a DB, whatever floats your boat.
// Come back and load it up later...
Calendar replay = CallLoggingProxy.replay(Calendar.class, replayInfo);
replay.getTimeInMillis(); // 1368311282470
You could imagine wrapping your API object with CallLoggingProxy.create prior to passing it into your testing methods, capturing the data afterwards, and persisting it using whatever your favorite serialization system happens to be. Later, when you want to run your tests, you can load the data back up, create a new instance based on the data with CallLoggingProxy.replay, and passing that into your methods instead.
The CallLoggingProxy is written using Javassist, as Java's native Proxy is limited to working against interfaces. This should cover the general use case, but there are a few limitations to keep in mind:
Classes declared final can't be proxied by this method. (Not easily fixable; this is a system limitation)
The gist assumes the same input to a method will always produce the same output. (More easily fixable; the ReplayInfo would need to keep track of sequences of calls for each input instead of single input/output pairs.)
The gist is not even remotely threadsafe (Fairly easily fixable; just requires a little thought and effort)
Obviously the gist is simply a proof of concept, so it's also not been very thoroughly tested, but I believe the general principle is sound. It's also possible there's a more fully baked framework out there to achieve this sort of goal, but if such a thing does exist, I'm not aware of it.
If you do decide to continue with the replay approach, then hopefully this will be enough to give you a possible direction to work in.
I had the same needs some months ago for non-regression testing when planning a heavy technical refactoring of a large application and... I have found nothing available as a framework.
In fact, replaying may be particularly difficult and may only work in a specific context - few applications of ordinary complexity can really be considered stateless. It is a common problem when testing persistence code against a relational database. To be relevant, the complete initial system state must be restored, and each replay step must impact the global state in the same way. That becomes a challenge when system state is distributed across databases, files, memory... Just imagine what happens if a timestamp taken from the system clock is used somewhere!
So a more practical option is to only record... and then do a clever comparison on subsequent runs.
Depending on the number of runs you plan, a human-driven session on the application may be enough, or you may have to invest in an automated scenario driving your application's user interface.
First, to record: you can use a dynamic proxy or aspect-oriented programming to intercept method calls and capture state before and after each invocation. That may mean: dumping the relevant database tables, copying some files, serializing Java objects in a text format like XML.
Then compare this reference capture with a new run. This comparison should be tuned to exclude any irrelevant elements from each piece of state, like row identifiers, timestamps, file names... to only compare data where your backend's added value shines.
Finally, nothing here is really standard, and often a few specific scripts may be enough to achieve the aim: detect as many errors as possible and try to prevent unexpected side effects.
This can be done with AOP, aspect-oriented programming. It allows method calls to be intercepted through byte code manipulation. Do a bit of searching for examples.
In one case this can do the recording, in the other the replaying.
Pointers: wikipedia, AspectJ, Spring AOP.
Unfortunately this moves a bit outside plain Java syntax, and a simple, well-explained example is better sought elsewhere.
Maybe combine this with unit tests / a mocking test framework for offline testing with recorded data.
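For illustration only, a minimal Spring AOP around-advice that records calls could look like this sketch (the pointcut and BackendClient type are hypothetical):

import java.util.Arrays;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class RecordingAspect {

    // Hypothetical pointcut: every call into the backend API wrapper.
    @Around("execution(* com.example.backend.BackendClient.*(..))")
    public Object record(ProceedingJoinPoint pjp) throws Throwable {
        Object result = pjp.proceed();
        // Persist (signature, args, result) here for later replay.
        System.out.println(pjp.getSignature() + " " + Arrays.toString(pjp.getArgs()) + " -> " + result);
        return result;
    }
}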
You could look into Mockito.
Example:
//You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);
//stubbing
when(mockedList.get(0)).thenReturn("first");
when(mockedList.get(1)).thenThrow(new RuntimeException());
//following prints "first"
System.out.println(mockedList.get(0));
//following throws runtime exception
System.out.println(mockedList.get(1));
//following prints "null" because get(999) was not stubbed
System.out.println(mockedList.get(999));
Afterwards you can replay each test as many times as you like, and it will return the data you put in.
// pseudocode
class LogMethod {
    List<String> parameters;
    String method;

    void addCallTo(String method, List<String> params) {
        this.method = method;
        this.parameters = params;
    }
}
Have a list of LogMethods and call new LogMethod().addCallTo() before every call in your test method.
The idea of playing back the API calls sounds like a use case for the event sourcing pattern. Martin Fowler has a good article on it here. This is a nice pattern that records events as a sequence of objects which are then stored, you can then replay the sequence of events as required.
There is an implementation of this pattern using Akka called Eventsourced, which may help you build the type of system you require.
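As a rough illustration of the pattern (all types here are invented), recording call events and replaying them might look like:

import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Invented sketch: each backend call is captured as an event that can be
// stored and later replayed without touching the real backend.
final class BackendCallEvent {
    final String method;
    final Object arg;
    final Object result;

    BackendCallEvent(String method, Object arg, Object result) {
        this.method = method;
        this.arg = arg;
        this.result = result;
    }
}

class EventLog {
    private final List<BackendCallEvent> events = new ArrayList<BackendCallEvent>();

    void append(BackendCallEvent e) {
        events.add(e);
    }

    // During replay, look up the recorded result instead of calling the backend.
    Object replay(String method, Object arg) {
        for (BackendCallEvent e : events) {
            if (e.method.equals(method) && Objects.equals(e.arg, arg)) {
                return e.result;
            }
        }
        throw new IllegalStateException("no recorded event for " + method);
    }
}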
I had a similar problem some years ago. None of the above solutions would have worked for methods that are not pure functions (side effect free). The major task is in my opinion:
how to extract a snapshot of the recorded object(s) (not only restricted to objects implementing Serializable)
how to generate test code of a serialized representation in a readable way (not only restricted to beans, primitives and collections)
So I had to go my own way - with testrecorder.
For example, given:
ResultObject b = callBackend(a);
...
ResultObject callBackend(SourceObject source) {
...
}
you will only have to annotate the method like this:
@Recorded
ResultObject callBackend(SourceObject source) {
...
}
and start your application (the one that should be recorded) with the testrecorder agent. Testrecorder will manage all tasks for you, such as:
serializing arguments, results, state of this, exceptions (complete object graph!)
finding a readable representation for object construction and object matching
generating a test from the serialized data
you can extend recordings to global variables, input and output with annotations
An example for the test will look like this:
void testCallBackend() {
    //arrange
    SourceObject sourceObject1 = new SourceObject();
    sourceObject1.setState(...); // testrecorder can use setters but is not limited to them
    ... // setting up backend
    ... // setting up globals, mocking inputs

    //act
    ResultObject resultObject1 = backend.callBackend(sourceObject1);

    //assert
    assertThat(resultObject1, new GenericMatcher() {
        ... // property matchers
    }.matching(ResultObject.class));
    ... // assertions on backend and sourceObject1 for potential side effects
    ... // assertions on outputs and globals
}
If I understood your question correctly, you should try db4o.
You would store the objects with db4o and restore them later for mocking in JUnit tests.

How to key off a parameter to a stubbed method using Mockito

Greetings.
I am mocking a search engine for testing in my web app. This search engine returns xml documents with different schemas. The schema depends on a parameter known as a collection set. Returning different schemas based on collection sets is the part that's difficult to mock, because specifying the collection set is basically a setup method, and a void one at that. This search engine is an external jar file so I can't modify the API. I have to work with what's been provided. Here's an example:
Engine engine = factory.getEngine();
Search search = engine.getSearch();
search.addCollectionSet(someCollectionSet);
SearchResult result = search.getSearchResult();
Document[] documents = result.getAllDocuments();
Then for each document, I can get the xml by calling:
document.getDocumentText();
When I'm using my mock objects, getDocumentText() returns an xml string, created by a generator, that conforms to the schema. What I want to do is use a different type of generator depending on which collection set was provided in step 3 in the first code snippet above. I've been trying to do something like this:
doAnswer(new Answer() {
    Object answer(InvocationOnMock invocation) {
        if (args == "foo") {
            SearchResult result = getMockSearchResult();
            when(search.getSearchResult()).thenReturn(result);
        }
    }
}).when(search.addCollectionSet(anyString()));
But this results in lots of red highlighting :)
Basically, my goal is to key off of addCollectionSet(someCollectionSet) so that when it's called, I can do some kind of switch off of the parameter and ensure that a different generator is used. Does anyone know how I can accomplish something like this? Or is there maybe some form of Dependency Injection that could be used to conditionally wire up my generator?
Thanks!
Update
I've changed my factory object so that it never returns the engine, but rather, the Search and Find objects from that engine, so now I can do something like this:
Search search = factory.getSearch(collectionSet);
So what I'd like to do is something like this:
when(factory.getSearch(anyString())).thenAnswer(new Answer() {
    Object answer(InvocationOnMock invocation) {
        switch (args[0]) {
            case fooSet: return fooSearch;
            case barSet: return barSearch;
        }
    }
});
In other words, I still want to key off the string that was passed in to getSearch in a switch statement. Admittedly, I could do something more like felix has suggested below, but I'd rather have all my cases wrapped in a switch. Can someone provide an example of how this could be done? Thanks!
Update
I've seen that you can capture the arguments that are passed into mocked calls, but these captured arguments are used for later assertions. I haven't seen a way that I can key off these arguments so that a call to my mock will return different values depending on the arguments. It seems like there has to be a way to do this, I just don't have enough experience with Mockito to figure it out. But surely someone does!
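For reference, the usual way to key off arguments in Mockito is inside thenAnswer, reading them from the InvocationOnMock (a sketch; fooSearch and barSearch are assumed to be pre-built mocks):

import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

when(factory.getSearch(anyString())).thenAnswer(new Answer<Search>() {
    public Search answer(InvocationOnMock invocation) {
        // The collection set passed to getSearch decides which mock is returned.
        String collectionSet = (String) invocation.getArguments()[0];
        switch (collectionSet) {
            case "fooSet": return fooSearch;
            case "barSet": return barSearch;
            default: throw new IllegalArgumentException("unexpected collection set: " + collectionSet);
        }
    }
});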
I would recommend wrapping the call to the legacy code into your own object.
So you end up with your own method along these lines:
class SearchEngineWrapper {
    public String getSearchResult(String collection) {
        Engine engine = factory.getEngine();
        Search search = engine.getSearch();
        search.addCollectionSet(collection);
        SearchResult result = search.getSearchResult();
        ...
        return document.getDocumentText();
    }
}
Now you can mock out this method. The method also nicely documents your intent. Also you could test the actual implementation in an integration test.
when(searchEngineWrapper.getSearchResult("abc")).thenReturn("foo");
when(searchEngineWrapper.getSearchResult("xyz")).thenReturn("bar");
