Sequential processing of multi-threaded results - java

I am setting up a Spring Boot application (DAO pattern with @Repository classes) in which I am attempting to write a @Service that asynchronously pulls data from a database in multiple threads and merge-processes the incoming payloads sequentially, preferably on arrival.
The goal is to utilize parallel database access for requests in which multiple non-overlapping sets of filter conditions have to be queried individually, but post-processed (transformed, e.g. aggregated) into a combined result.
Being rather new to Java, and coming from Golang and its comparably trivial syntax for multi-threading and task communication, I struggle to identify a preferable API in Java and Spring Boot - or to determine whether this approach is even favorable to begin with.
Question:
Given
a Controller:
@RestController
@RequestMapping("/api")
public class MyController {

    private final MyService myService;

    @Autowired
    public MyController(MyService myService) {
        this.myService = myService;
    }

    @PostMapping("/processing")
    public DeferredResult<MyResult> myHandler(@RequestBody MyRequest myRequest) {
        DeferredResult<MyResult> myDeferredResult = new DeferredResult<>();
        myService.myProcessing(myRequest, myDeferredResult);
        return myDeferredResult;
    }
}
a Service:
import com.acme.parallel.util.MyDataTransformer;

@Service
public class MyServiceImpl implements MyService {

    private final MyRepository myRepository;

    @Autowired
    public MyServiceImpl(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
        MyDataTransformer myDataTransformer = new MyDataTransformer();
        /* PLACEHOLDER CODE
        for (MyFilter myFilter : myRequest.getMyFilterList()) {
            // MyPartialResult myPartialResult = myRepository.myAsyncQuery(myFilter);
            // myDataTransformer.transformMyPartialResult(myPartialResult);
        }
        */
        myDeferredResult.setResult(myDataTransformer.getMyResult());
    }
}
a Repository:
@Repository
public class MyRepository {

    public MyPartialResult myAsyncQuery(MyFilter myFilter) {
        // for the sake of an example
        return new MyPartialResult(myFilter, TakesSomeAmountOfTimeToQuery.TRUE);
    }
}
as well as a MyDataTransformer helper class:
public class MyDataTransformer {

    private final MyResult myResult = new MyResult(); // e.g. a Map

    public void transformMyPartialResult(MyPartialResult myPartialResult) {
        /* PLACEHOLDER CODE
        this.myResult.transformAndMergeIntoMe(myPartialResult);
        */
    }

    public MyResult getMyResult() {
        return this.myResult;
    }
}
how can I implement
the MyService.myProcessing method asynchronously and multi-threaded, and
the MyDataTransformer.transformMyPartialResult method sequentially/thread-safely
(or redesign the above)
most performantly, to merge incoming MyPartialResults into one single MyResult?
Attempts:
The easiest solution seems to be to skip the "on arrival" part, and a commonly preferred implementation might e.g. be:
public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
    MyDataTransformer myDataTransformer = new MyDataTransformer();
    List<CompletableFuture<MyPartialResult>> myPartialResultFutures = new ArrayList<>();
    for (MyFilter myFilter : myRequest.getMyFilterList()) { // Stream is the way they say, but I like for
        myPartialResultFutures.add(CompletableFuture.supplyAsync(() -> myRepository.myAsyncQuery(myFilter)));
    }
    myPartialResultFutures.stream()
            .map(CompletableFuture::join)
            .forEach(myDataTransformer::transformMyPartialResult);
    myDeferredResult.setResult(myDataTransformer.getMyResult());
}
However, if feasible I'd like to benefit from sequentially processing incoming payloads when they arrive, so I am currently experimenting with something like this:
public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
    MyDataTransformer myDataTransformer = new MyDataTransformer();
    List<CompletableFuture<Void>> myPartialResultFutures = new ArrayList<>();
    for (MyFilter myFilter : myRequest.getMyFilterList()) {
        myPartialResultFutures.add(CompletableFuture.supplyAsync(() -> myRepository.myAsyncQuery(myFilter))
                .thenAccept(myDataTransformer::transformMyPartialResult));
    }
    myPartialResultFutures.forEach(CompletableFuture::join);
    myDeferredResult.setResult(myDataTransformer.getMyResult());
}
but I don't understand whether I need to implement any thread-safety measures when calling myDataTransformer.transformMyPartialResult, and if so, how - or whether this even makes sense, performance-wise.
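For instance, I assume the naive guard would be to synchronize the merge method, so that the thenAccept callbacks, which may run on different worker threads, never interleave (a sketch based on the helper class above):

public class MyDataTransformer {

    private final MyResult myResult = new MyResult(); // e.g. a Map

    // synchronized, since callbacks may arrive on different pool threads
    public synchronized void transformMyPartialResult(MyPartialResult myPartialResult) {
        this.myResult.transformAndMergeIntoMe(myPartialResult);
    }
}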
Update:
Based on the assumptions that
myRepository.myAsyncQuery takes slightly varying amounts of time, and
myDataTransformer.transformMyPartialResult takes an ever-increasing amount of time with each call,
implementing a thread-safe/atomic type/Object, e.g. a ConcurrentHashMap:
public class MyDataTransformer {

    private final ConcurrentMap<K, V> myResult = new ConcurrentHashMap<>();

    public void transformMyPartialResult(MyPartialResult myPartialResult) {
        myPartialResult.myRows
                .forEach(row -> this.myResult.merge(row[0], row[1], Integer::sum));
    }
}
into the latter Attempt (processing "on arrival"):
public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
    MyDataTransformer myDataTransformer = new MyDataTransformer();
    List<CompletableFuture<Void>> myPartialResultFutures = new ArrayList<>();
    for (MyFilter myFilter : myRequest.getMyFilterList()) {
        myPartialResultFutures.add(CompletableFuture.supplyAsync(() -> myRepository.myAsyncQuery(myFilter))
                .thenAccept(myDataTransformer::transformMyPartialResult));
    }
    myPartialResultFutures.forEach(CompletableFuture::join);
    myDeferredResult.setResult(myDataTransformer.getMyResult());
}
is up to one order of magnitude faster than waiting on all threads first, even with the atomicity overhead.
Now, this may have been obvious (though not trivially so, as async/multi-threaded processing is by no means always the better choice), and I am glad this approach is a valid one.
What remains looks to me like a hacky, inflexible solution - or at least an ugly one. Is there a better approach?
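For example, I am wondering whether a fully non-blocking variant would be preferable, combining the futures with CompletableFuture.allOf and completing the DeferredResult in a callback instead of join()-ing on the request thread. A rough sketch, assuming the same MyRepository and MyDataTransformer as above:

public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
    MyDataTransformer myDataTransformer = new MyDataTransformer();

    // kick off one async query per filter; each partial result is merged on arrival
    CompletableFuture<?>[] futures = myRequest.getMyFilterList().stream()
            .map(myFilter -> CompletableFuture
                    .supplyAsync(() -> myRepository.myAsyncQuery(myFilter))
                    .thenAccept(myDataTransformer::transformMyPartialResult))
            .toArray(CompletableFuture[]::new);

    // complete the DeferredResult on a worker thread once all merges are done
    CompletableFuture.allOf(futures)
            .whenComplete((ignored, throwable) -> {
                if (throwable != null) {
                    myDeferredResult.setErrorResult(throwable);
                } else {
                    myDeferredResult.setResult(myDataTransformer.getMyResult());
                }
            });
}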

Related

Kafka Stream with multiple consumers / processors not persisting data concurrently

I'm new to Kafka and want to persist data from Kafka topics to database tables (each topic flows to a specific table). I know Kafka Connect exists and could be used to achieve this, but there are reasons why this approach is preferred.
Unfortunately only one topic is writing to the database. Kafka seems to not process() all processors concurrently: either MyFirstData is written to the database or MySecondData, but never both at the same time.
According to my reading, there is the option of overriding init() from the Kafka Streams Processor interface, which offers context.forward(); I am not sure whether this would help, or how to use it in my use case.
I use Spring Cloud Stream (but got the same behaviour with Kafka DSL and Processor API implementations).
My code snippet:
Configuring the consumers:
@Configuration
@RequiredArgsConstructor
public class DatabaseProcessorConfiguration {

    private final MyFirstDao myFirstDao;
    private final MySecondDao mySecondDao;

    @Bean
    public Consumer<KStream<GenericData.Record, GenericData.Record>> myFirstDbProcessor() {
        return stream -> stream.process(() -> new MyFirstDbProcessor(myFirstDao));
    }

    @Bean
    public Consumer<KStream<GenericRecord, GenericRecord>> mySecondDbProcessor() {
        return stream -> stream.process(() -> new MySecondDbProcessor(mySecondDao));
    }
}
MyFirstDbProcessor and MySecondDbProcessor are implemented analogously:
@Slf4j
@RequiredArgsConstructor
public class MyFirstDbProcessor implements Processor<GenericData.Record, GenericData.Record, Void, Void> {

    private final MyFirstDao myFirstDao;

    @Override
    public void process(Record<GenericData.Record, GenericData.Record> record) {
        CdcRecordAdapter adapter = new CdcRecordAdapter(record.key(), record.value());
        MyFirstTopicKey myFirstTopicKey = adapter.getKeyAs(MyFirstTopicKey.class);
        MyFirstTopicValue myFirstTopicValue = adapter.getValueAs(MyFirstTopicValue.class);
        MyFirstData data = PersistenceMapper.map(myFirstTopicKey, myFirstTopicValue);
        switch (myFirstTopicValue.getCrudOperation()) {
            case UPDATE, INSERT -> myFirstDao.persist(data);
            case DELETE -> myFirstDao.delete(data);
            default -> System.err.println("unimplemented CDC operation streamed by kafka");
        }
    }
}
My DAO implementations: I tried implementing MyFirstRepository with both JpaRepository and ReactiveCrudRepository, but got the same behaviour. MySecondRepository is implemented analogously to MyFirstRepository.
@Component
@RequiredArgsConstructor
public class MyFirstDaoImpl implements MyFirstDao {

    private final MyFirstRepository myFirstRepository;

    @Override
    public MyFirstData persist(MyFirstData myFirstData) {
        Optional<MyFirstData> dataOptional = myFirstRepository.findById(myFirstData.getId());
        if (dataOptional.isPresent()) {
            var data = dataOptional.get();
            myFirstData.setCreatedDate(data.getCreatedDate());
        }
        return myFirstRepository.save(myFirstData);
    }

    @Override
    public void delete(MyFirstData myFirstData) {
        System.out.println("delete() from transaction detail dao called");
        myFirstRepository.delete(myFirstData);
    }
}
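In case it matters: the two processors are activated via spring.cloud.function.definition, roughly like this (an assumed sketch; the destination names are placeholders, but with Spring Cloud Stream multiple functions have to be declared explicitly, separated by ';'):

spring:
  cloud:
    function:
      definition: myFirstDbProcessor;mySecondDbProcessor
    stream:
      bindings:
        myFirstDbProcessor-in-0:
          destination: my-first-topic
        mySecondDbProcessor-in-0:
          destination: my-second-topic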

java factory class test mocking list of implementations

I have created a factory to provide an instance of IMyProcessor based on a boolean flag.
The code below populates the map with both of my implementations.
@Component
public class MyProcessorFactory {

    private static final Map<String, IMyProcessor> processorServiceCache = new HashMap<>();

    @Value("${processor.async:true}")
    private boolean isAsync;

    public MyProcessorFactory(final List<IMyProcessor> processors) {
        for (IMyProcessor service : processors) {
            processorServiceCache.put(service.getType(), service);
        }
    }

    public IMyProcessor getInstance() {
        return isAsync ? processorServiceCache.get("asynchronous") : processorServiceCache.get("synchronous");
    }
}
I am now trying to write a unit test using JUnit 5, but I am struggling to set up the list of implementations. I have tried the following:
@ExtendWith(MockitoExtension.class)
class ProcessorFactoryTest {

    @InjectMocks
    private MyProcessorFactory myProcessorFactory;

    @Test
    void testAsyncIsReturned() {
    }

    @Test
    void testSyncIsReturned() {
    }
}
I want to test that, depending on the boolean flag being true/false, the correct implementation is returned.
It would be helpful to see how you would write such test cases. I autowire the implementations of the interface into a list via constructor injection, then add them to a map using a String key.
Along with an answer, I am open to other ideas/refactorings that may make the testing easier.
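For illustration, here is a minimal sketch of the kind of test I have in mind (assuming Mockito for the IMyProcessor mocks and spring-test's ReflectionTestUtils to set the @Value-injected flag):

import static org.junit.jupiter.api.Assertions.assertSame;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.test.util.ReflectionTestUtils;

@ExtendWith(MockitoExtension.class)
class ProcessorFactoryTest {

    @Mock
    private IMyProcessor asyncProcessor;
    @Mock
    private IMyProcessor syncProcessor;

    private MyProcessorFactory myProcessorFactory;

    @BeforeEach
    void setUp() {
        // the factory keys its cache by getType(), so stub the types accordingly
        when(asyncProcessor.getType()).thenReturn("asynchronous");
        when(syncProcessor.getType()).thenReturn("synchronous");
        myProcessorFactory = new MyProcessorFactory(List.of(asyncProcessor, syncProcessor));
    }

    @Test
    void testAsyncIsReturned() {
        // simulate processor.async=true, normally injected via @Value
        ReflectionTestUtils.setField(myProcessorFactory, "isAsync", true);
        assertSame(asyncProcessor, myProcessorFactory.getInstance());
    }

    @Test
    void testSyncIsReturned() {
        ReflectionTestUtils.setField(myProcessorFactory, "isAsync", false);
        assertSame(syncProcessor, myProcessorFactory.getInstance());
    }
}

(One refactoring that would simplify testing: make the cache an instance field and pass the flag in through the constructor instead of @Value, so no reflection is needed.)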

Spring Data: default 'not deleted' logic for automatic method-based queries when using soft-delete policy

Let's say we use soft-delete policy: nothing gets deleted from the storage; instead, a 'deleted' attribute/column is set to true on a record/document/whatever to make it 'deleted'. Later, only non-deleted entries should be returned by query methods.
Let's take MongoDB as an example (although JPA is also interesting).
For standard methods defined by MongoRepository, we can extend the default implementation (SimpleMongoRepository), override the methods of interest and make them ignore 'deleted' documents.
But, of course, we'd also like to use custom query methods like
List<Person> findByFirstName(String firstName)
In a soft-delete environment, we are forced to do something like
List<Person> findByFirstNameAndDeletedIsFalse(String firstName)
or write queries manually with @Query (adding the same boilerplate condition about 'not deleted' all the time).
Here comes the question: is it possible to add this 'non-deleted' condition to any generated query automatically? I did not find anything in the documentation.
I'm looking at Spring Data (Mongo and JPA) 2.1.6.
Similar questions
Query interceptor for spring-data-mongodb for soft deletions - here they suggest Hibernate's @Where annotation, which only works for JPA+Hibernate, and it is not clear how to override it if you still need to access deleted items in some queries
Handling soft-deletes with Spring JPA - here people either suggest the same @Where-based approach, or the solution's applicability is limited to the already-defined standard methods, not the custom ones.
It turns out that for Mongo (at least for spring-data-mongodb 2.1.6) we can hook into the standard QueryLookupStrategy implementation to add the desired 'soft-deleted documents are not visible by finders' behavior:
public class SoftDeleteMongoQueryLookupStrategy implements QueryLookupStrategy {

    private final QueryLookupStrategy strategy;
    private final MongoOperations mongoOperations;

    public SoftDeleteMongoQueryLookupStrategy(QueryLookupStrategy strategy,
            MongoOperations mongoOperations) {
        this.strategy = strategy;
        this.mongoOperations = mongoOperations;
    }

    @Override
    public RepositoryQuery resolveQuery(Method method, RepositoryMetadata metadata, ProjectionFactory factory,
            NamedQueries namedQueries) {
        RepositoryQuery repositoryQuery = strategy.resolveQuery(method, metadata, factory, namedQueries);

        // revert to the standard behavior if requested
        if (method.getAnnotation(SeesSoftlyDeletedRecords.class) != null) {
            return repositoryQuery;
        }

        if (!(repositoryQuery instanceof PartTreeMongoQuery)) {
            return repositoryQuery;
        }
        PartTreeMongoQuery partTreeQuery = (PartTreeMongoQuery) repositoryQuery;

        return new SoftDeletePartTreeMongoQuery(partTreeQuery);
    }

    private Criteria notDeleted() {
        return new Criteria().orOperator(
                where("deleted").exists(false),
                where("deleted").is(false)
        );
    }

    private class SoftDeletePartTreeMongoQuery extends PartTreeMongoQuery {

        SoftDeletePartTreeMongoQuery(PartTreeMongoQuery partTreeQuery) {
            super(partTreeQuery.getQueryMethod(), mongoOperations);
        }

        @Override
        protected Query createQuery(ConvertingParameterAccessor accessor) {
            Query query = super.createQuery(accessor);
            return withNotDeleted(query);
        }

        @Override
        protected Query createCountQuery(ConvertingParameterAccessor accessor) {
            Query query = super.createCountQuery(accessor);
            return withNotDeleted(query);
        }

        private Query withNotDeleted(Query query) {
            return query.addCriteria(notDeleted());
        }
    }
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SeesSoftlyDeletedRecords {
}
We just add an 'and not deleted' condition to all the queries unless @SeesSoftlyDeletedRecords asks us to avoid it.
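With that in place, a repository can declare plain query methods and opt out per method where needed. For example (Person being an illustrative domain class):

public interface PersonRepository extends MongoRepository<Person, String> {

    // automatically restricted to non-deleted documents
    List<Person> findByFirstName(String firstName);

    // sees soft-deleted documents as well
    @SeesSoftlyDeletedRecords
    List<Person> findByLastName(String lastName);
}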
Then we need the following infrastructure to plug in our QueryLookupStrategy implementation:
public class SoftDeleteMongoRepositoryFactory extends MongoRepositoryFactory {

    private final MongoOperations mongoOperations;

    public SoftDeleteMongoRepositoryFactory(MongoOperations mongoOperations) {
        super(mongoOperations);
        this.mongoOperations = mongoOperations;
    }

    @Override
    protected Optional<QueryLookupStrategy> getQueryLookupStrategy(QueryLookupStrategy.Key key,
            QueryMethodEvaluationContextProvider evaluationContextProvider) {
        Optional<QueryLookupStrategy> optStrategy = super.getQueryLookupStrategy(key,
                evaluationContextProvider);
        return optStrategy.map(this::createSoftDeleteQueryLookupStrategy);
    }

    private SoftDeleteMongoQueryLookupStrategy createSoftDeleteQueryLookupStrategy(QueryLookupStrategy strategy) {
        return new SoftDeleteMongoQueryLookupStrategy(strategy, mongoOperations);
    }
}

public class SoftDeleteMongoRepositoryFactoryBean<T extends Repository<S, ID>, S, ID extends Serializable>
        extends MongoRepositoryFactoryBean<T, S, ID> {

    public SoftDeleteMongoRepositoryFactoryBean(Class<? extends T> repositoryInterface) {
        super(repositoryInterface);
    }

    @Override
    protected RepositoryFactorySupport getFactoryInstance(MongoOperations operations) {
        return new SoftDeleteMongoRepositoryFactory(operations);
    }
}
Then we just need to reference the factory bean in an @EnableMongoRepositories annotation like this:
@EnableMongoRepositories(repositoryFactoryBeanClass = SoftDeleteMongoRepositoryFactoryBean.class)
If it is required to determine dynamically whether a particular repository needs to be a 'soft-delete' repository or a regular 'hard-delete' one, we can introspect the repository interface (or the domain class) and decide whether we need to change the QueryLookupStrategy or not.
As for JPA, this approach does not work without rewriting (possibly duplicating) a substantial part of the code in PartTreeJpaQuery.

Spring Boot: Wrapping JSON response in dynamic parent objects

I have a REST API specification that talks with back-end microservices, which return the following values:
On "collections" responses (e.g. GET /users) :
{
    users: [
        {
            ... // single user object data
        }
    ],
    links: [
        {
            ... // single HATEOAS link object
        }
    ]
}
On "single object" responses (e.g. GET /users/{userUuid}) :
{
    user: {
        ... // {userUuid} user object
    }
}
This approach was chosen so that single responses would be extensible (for example, if GET /users/{userUuid} gains an additional query parameter down the line, such as ?detailedView=true, we would have a place for the additional request information).
Fundamentally, I think it is an OK approach for minimizing breaking changes between API updates. However, translating this model to code is proving very arduous.
Let's say that for single responses, I have the following API model object for a single user:
public class SingleUserResource {

    private MicroserviceUserModel user;

    public SingleUserResource(MicroserviceUserModel user) {
        this.user = user;
    }

    public String getName() {
        return user.getName();
    }

    // other getters for fields we wish to expose
}
The advantage of this method is that we can expose only the fields from the internally used models for which we have public getters, but not others. Then, for collections responses I would have the following wrapper class:
public class UsersResource extends ResourceSupport {

    @JsonProperty("users")
    public final List<SingleUserResource> users;

    public UsersResource(List<MicroserviceUserModel> users) {
        // add each user as a SingleUserResource
        this.users = users.stream().map(SingleUserResource::new).collect(Collectors.toList());
    }
}
For single object responses, we would have the following:
public class UserResource {

    @JsonProperty("user")
    public final SingleUserResource user;

    public UserResource(SingleUserResource user) {
        this.user = user;
    }
}
This yields JSON responses formatted as per the API specification at the top of this post. The upside of this approach is that we only expose those fields that we want to expose. The heavy downside is that I have a ton of wrapper classes flying around that perform no discernible logical task aside from being read by Jackson to yield a correctly formatted response.
My questions are the following:
How can I possibly generalize this approach? Ideally, I would like to have a single BaseSingularResponse class (and maybe a BaseCollectionsResponse extends ResourceSupport class) that all my models can extend, but seeing how Jackson seems to derive the JSON keys from the object definitions, I would have to use something like Javassist to add fields to the base response classes at runtime - a dirty hack that I would like to stay as far away from as humanly possible.
Is there an easier way to accomplish this? Unfortunately, I may have a variable number of top-level JSON objects in the response a year from now, so I cannot use something like Jackson's SerializationConfig.Feature.WRAP_ROOT_VALUE, because that wraps everything into a single root-level object (as far as I am aware).
Is there perhaps something like @JsonProperty for the class level (as opposed to just the method and field level)?
There are several possibilities.
You can use a java.util.Map:
List<UserResource> userResources = new ArrayList<>();
userResources.add(new UserResource("John"));
userResources.add(new UserResource("Jane"));
userResources.add(new UserResource("Martin"));

Map<String, List<UserResource>> usersMap = new HashMap<>();
usersMap.put("users", userResources);

ObjectMapper mapper = new ObjectMapper();
System.out.println(mapper.writeValueAsString(usersMap));
You can use an ObjectWriter to wrap the response, used like below (root being the desired wrapper name, object the payload):
ObjectMapper mapper = new ObjectMapper();
ObjectWriter writer = mapper.writer().withRootName(root);
String result = writer.writeValueAsString(object);
Here is a proposition for generalizing this serialization.
A class to handle simple object:
public abstract class BaseSingularResponse {

    private String root;

    protected BaseSingularResponse(String rootName) {
        this.root = rootName;
    }

    public String serialize() {
        ObjectMapper mapper = new ObjectMapper();
        ObjectWriter writer = mapper.writer().withRootName(root);
        String result = null;
        try {
            result = writer.writeValueAsString(this);
        } catch (JsonProcessingException e) {
            result = e.getMessage();
        }
        return result;
    }
}
A class to handle collection:
public abstract class BaseCollectionsResponse<T extends Collection<?>> {

    private String root;
    private T collection;

    protected BaseCollectionsResponse(String rootName, T aCollection) {
        this.root = rootName;
        this.collection = aCollection;
    }

    public T getCollection() {
        return collection;
    }

    public String serialize() {
        ObjectMapper mapper = new ObjectMapper();
        ObjectWriter writer = mapper.writer().withRootName(root);
        String result = null;
        try {
            result = writer.writeValueAsString(collection);
        } catch (JsonProcessingException e) {
            result = e.getMessage();
        }
        return result;
    }
}
And a sample application:
public class Main {

    private static class UsersResource extends BaseCollectionsResponse<ArrayList<UserResource>> {
        public UsersResource() {
            super("users", new ArrayList<UserResource>());
        }
    }

    private static class UserResource extends BaseSingularResponse {

        private String name;
        private String id = UUID.randomUUID().toString();

        public UserResource(String userName) {
            super("user");
            this.name = userName;
        }

        public String getUserName() {
            return this.name;
        }

        public String getUserId() {
            return this.id;
        }
    }

    public static void main(String[] args) throws JsonProcessingException {
        UsersResource userCollection = new UsersResource();
        UserResource user1 = new UserResource("John");
        UserResource user2 = new UserResource("Jane");
        UserResource user3 = new UserResource("Martin");
        System.out.println(user1.serialize());

        userCollection.getCollection().add(user1);
        userCollection.getCollection().add(user2);
        userCollection.getCollection().add(user3);
        System.out.println(userCollection.serialize());
    }
}
You can also use the Jackson annotation @JsonTypeInfo at the class level:
@JsonTypeInfo(include = JsonTypeInfo.As.WRAPPER_OBJECT, use = JsonTypeInfo.Id.NAME)
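For example, a minimal illustration (the wrapper key is taken from @JsonTypeName, falling back to the simple class name):

@JsonTypeInfo(include = JsonTypeInfo.As.WRAPPER_OBJECT, use = JsonTypeInfo.Id.NAME)
@JsonTypeName("user")
public class UserResource {
    public String name;
}

Serializing an instance then yields {"user":{"name":...}}.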
Personally I don't mind the additional DTO classes: you only need to create them once, and there is little to no maintenance cost. And if you need to do MockMvc tests, you will most likely need the classes anyway to deserialize your JSON responses and verify the results.
As you probably know, the Spring framework handles the serialization/deserialization of objects in the HttpMessageConverter layer, so that is the correct place to change how objects are serialized.
If you don't need to deserialize the responses, it is possible to create a generic wrapper and a custom HttpMessageConverter (and place it before MappingJackson2HttpMessageConverter in the message converter list). Like this:
public class JSONWrapper {

    public final String name;
    public final Object object;

    public JSONWrapper(String name, Object object) {
        this.name = name;
        this.object = object;
    }
}

public class JSONWrapperHttpMessageConverter extends MappingJackson2HttpMessageConverter {

    @Override
    protected void writeInternal(Object object, Type type, HttpOutputMessage outputMessage) throws IOException, HttpMessageNotWritableException {
        // cast is safe because this is only called when supports() returns true
        JSONWrapper wrapper = (JSONWrapper) object;
        Map<String, Object> map = new HashMap<>();
        map.put(wrapper.name, wrapper.object);
        super.writeInternal(map, type, outputMessage);
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return clazz.equals(JSONWrapper.class);
    }
}
You then need to register the custom HttpMessageConverter in the Spring configuration, which extends WebMvcConfigurerAdapter, by overriding configureMessageConverters(). Be aware that doing this disables the default auto-detection of converters, so you will probably have to add the defaults yourself (check the Spring source code for WebMvcConfigurationSupport#addDefaultHttpMessageConverters() to see the defaults). If you extend WebMvcConfigurationSupport instead of WebMvcConfigurerAdapter, you can call addDefaultHttpMessageConverters() directly. (Personally I prefer WebMvcConfigurationSupport over WebMvcConfigurerAdapter if I need to customize anything, but there are some minor implications to doing this, which you can read about in other articles.)
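A minimal registration sketch, assuming the WebMvcConfigurationSupport route described above:

@Configuration
public class WebConfig extends WebMvcConfigurationSupport {

    @Override
    protected void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        // our wrapper converter must run before the default Jackson converter
        converters.add(new JSONWrapperHttpMessageConverter());
        // re-add the defaults, since overriding this hook disables auto-detection
        addDefaultHttpMessageConverters(converters);
    }
}

A controller method can then simply return new JSONWrapper("users", users).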
Jackson doesn't have a lot of support for dynamic/variable JSON structures, so any solution that accomplishes something like this is going to be pretty hacky, as you mentioned. As far as I know and from what I've seen, the standard and most common method is using wrapper classes like you are currently. The wrapper classes do add up, but if you get creative with your inheritance you may be able to find some commonalities between classes and thus reduce the number of wrapper classes. Otherwise you might be looking at writing a custom framework.
I guess you are looking for a custom Jackson serializer. With a simple code implementation, the same object can be serialized into different structures.
Some examples:
https://stackoverflow.com/a/10835504/814304
http://www.davismol.net/2015/05/18/jackson-create-and-register-a-custom-json-serializer-with-stdserializer-and-simplemodule-classes/

Spock Mock with Guava Collection

I'm having difficulties trying to mock a dependency used within a Guava collection transformation.
Let's assume I have the following code to test:
@Service
public final class ServiceA {

    private final ServiceB serviceB;

    @Autowired
    public ServiceA(ServiceB serviceB) {
        this.serviceB = serviceB;
    }

    public Collection<String> runAll(Collection<String> dataList) {
        final ImmutableList.Builder<String> builder = ImmutableList.builder();
        for (String data : dataList) {
            builder.add(serviceB.run(data));
        }
        return builder.build();
    }
}
My Spock Spec looks like this:
class ServiceASpec extends Specification {

    def serviceB = Mock(ServiceB.class)
    def serviceA = new ServiceA(serviceB)

    def "test"() {
        when:
        def strings = serviceA.runAll(['data1', 'data2'])

        then:
        1 * serviceB.run('data1') >> 'result1'
        1 * serviceB.run('data2') >> 'result2'
        0 * _._
        strings == ['result1', 'result2']
    }
}
This spec runs just fine and it is doing what I want it to do.
Then I refactored my implementation to use Guava's Collections2.transform(..):
@Service
public final class ServiceA {

    private final ServiceB serviceB;

    @Autowired
    public ServiceA(ServiceB serviceB) {
        this.serviceB = serviceB;
    }

    public Collection<String> runAll(Collection<String> dataList) {
        return Collections2.transform(dataList, new Function<String, String>() {
            @Override
            public String apply(final String input) {
                return serviceB.run(input);
            }
        });
    }
}
When I rerun my spec, I get this error:
Too few invocations for:
1 * serviceB.run('data1') >> 'result1' (0 invocations)
Unmatched invocations (ordered by similarity):
None
Too few invocations for:
1 * serviceB.run('data2') >> 'result2' (0 invocations)
Unmatched invocations (ordered by similarity):
None
My take is that it has something to do with timing, because the Guava function will only be executed when the collection is used.
However, I am not sure how to refactor my spec to make this work.
How do I go about solving this? Thanks.
Under the hood, the transform() method returns a TransformedCollection instance. As you can see here, the transformation is applied no sooner than when the wrapped collection is iterated. Since you don't iterate the transformed collection, the mocked service is not invoked and no interaction is recorded.
It seems that simply iterating the collection would solve the problem, although such a test should be really well documented.
Another way is to use FluentIterable.from(list).transform(function).toList() instead of Collections2.transform(list, function).
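Applied to the service above, that might look like this (a sketch; toList() materializes the results eagerly, so serviceB.run() is invoked inside runAll() and the original spec passes unchanged):

public Collection<String> runAll(Collection<String> dataList) {
    return FluentIterable.from(dataList)
            .transform(new Function<String, String>() {
                @Override
                public String apply(final String input) {
                    return serviceB.run(input);
                }
            })
            // toList() returns an ImmutableList, forcing the transformation now
            .toList();
}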
