What's the difference between Spring Data's MongoTemplate and MongoRepository?

I need to write an application with which I can do complex queries using spring-data and mongodb. I started by using MongoRepository but struggled to find examples of complex queries or to actually understand the syntax.
I'm talking about queries like this:
@Repository
public interface UserRepositoryInterface extends MongoRepository<User, String> {
    List<User> findByEmailOrLastName(String email, String lastName);
}
or the use of JSON-based queries, which I tried by trial and error because I can't get the syntax right, even after reading the MongoDB documentation (the following example doesn't work due to wrong syntax):
@Repository
public interface UserRepositoryInterface extends MongoRepository<User, String> {
    @Query("'$or':[{'firstName':{'$regex':?0,'$options':'i'}},{'lastName':{'$regex':?0,'$options':'i'}}]")
    List<User> findByEmailOrFirstnameOrLastnameLike(String searchText);
}
After reading through all the documentation it seems that MongoTemplate is far better documented than MongoRepository. I'm referring to the following documentation:
http://static.springsource.org/spring-data/data-mongodb/docs/current/reference/html/
Can you tell me which is more convenient and powerful to use: MongoTemplate or MongoRepository? Are both equally mature, or does one of them lack features compared to the other?

"Convenient" and "powerful to use" are contradicting goals to some degree. Repositories are by far more convenient than templates but the latter of course give you more fine-grained control over what to execute.
As the repository programming model is available for multiple Spring Data modules, you'll find more in-depth documentation for it in the general section of the Spring Data MongoDB reference docs.
TL;DR
We generally recommend the following approach:
Start with the repository abstraction and just declare simple queries using the query derivation mechanism or manually defined queries.
For more complex queries, add manually implemented methods to the repository (as documented here). For the implementation use MongoTemplate.
Details
For your example this would look something like this:
Define an interface for your custom code:
interface CustomUserRepository {
    List<User> yourCustomMethod();
}
Add an implementation of this interface, following the naming convention so the infrastructure can find the class.
class UserRepositoryImpl implements CustomUserRepository {

    private final MongoOperations operations;

    @Autowired
    public UserRepositoryImpl(MongoOperations operations) {
        Assert.notNull(operations, "MongoOperations must not be null!");
        this.operations = operations;
    }

    public List<User> yourCustomMethod() {
        // custom implementation here
    }
}
Now let your base repository interface extend the custom one and the infrastructure will automatically use your custom implementation:
interface UserRepository extends CrudRepository<User, Long>, CustomUserRepository {
}
This way you essentially get the choice: everything that's easy to declare goes into UserRepository, everything that's better implemented manually goes into CustomUserRepository. The customization options are documented here.
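As an illustration, a hedged sketch of what yourCustomMethod could look like when implemented with the injected MongoOperations, mirroring the question's case-insensitive regex search (Criteria and Query come from org.springframework.data.mongodb.core.query; the searchText value is a hypothetical example and would normally be a method parameter):
public List<User> yourCustomMethod() {
    String searchText = "doe"; // hypothetical example value
    Criteria criteria = new Criteria().orOperator(
            Criteria.where("firstName").regex(searchText, "i"),
            Criteria.where("lastName").regex(searchText, "i"));
    return operations.find(Query.query(criteria), User.class);
}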

FWIW, regarding updates in a multi-threaded environment:
MongoTemplate provides "atomic" out-of-the-box operations updateFirst, updateMulti, findAndModify, upsert... which allow you to modify a document in a single operation. The Update object used by these methods also allows you to target only the relevant fields.
MongoRepository only gives you the basic CRUD operations find, insert, save, delete, which work with POJOs containing all the fields. This forces you either to update the documents in several steps (1. find the document to update, 2. modify the relevant fields from the returned POJO, then 3. save it), or to define your own update queries by hand using @Query.
In a multi-threaded environment, like e.g. a Java back-end with several REST endpoints, single-method updates are the way to go, in order to reduce the chances of two concurrent updates overwriting one another's changes.
Example: given a document like this: { _id: "ID1", field1: "a string", field2: 10.0 } and two different threads concurrently updating it...
With MongoTemplate it would look somewhat like this:
THREAD_001                                               THREAD_002
    |                                                        |
    | update(query("ID1"),                                   | update(query("ID1"),
    |        Update().set("field1", "another string"))       |        Update().inc("field2", 5))
    |                                                        |
and the final state of the document is always { _id: "ID1", field1: "another string", field2: 15.0 } since each thread is accessing the DB only once and only the specified field is changed.
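In plain code, those two single-round-trip updates could look roughly like this (a hedged sketch assuming the document is mapped to some MyDocument class; query, where and Update come from Spring Data's query API):
import static org.springframework.data.mongodb.core.query.Criteria.where;
import static org.springframework.data.mongodb.core.query.Query.query;
import org.springframework.data.mongodb.core.query.Update;

// THREAD_001: touches only field1
mongoTemplate.updateFirst(query(where("_id").is("ID1")),
        new Update().set("field1", "another string"), MyDocument.class);

// THREAD_002: touches only field2
mongoTemplate.updateFirst(query(where("_id").is("ID1")),
        new Update().inc("field2", 5), MyDocument.class);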
Whereas the same case scenario with MongoRepository would look like this:
THREAD_001                                               THREAD_002
    |                                                        |
    | pojo = findById("ID1")                                 | pojo = findById("ID1")
    | pojo.setField1("another string")                       | pojo.setField2(pojo.getField2() + 5)
    |     /* field2 still 10.0 */                            |     /* field1 still "a string" */
    | save(pojo)                                             | save(pojo)
    |                                                        |
and the final document being either { _id: "ID1", field1: "another string", field2: 10.0 } or { _id: "ID1", field1: "a string", field2: 15.0 } depending on which save operation hits the DB last.
(NOTE: Even if we used Spring Data's @Version annotation as suggested in the comments, not much would change: one of the save operations would throw an OptimisticLockingFailureException, and the final document would still be one of the above, with only one field updated instead of both.)
So I'd say that MongoTemplate is a better option, unless you have a very elaborate POJO model or need the custom query capabilities of MongoRepository for some reason.

This answer may be a bit delayed, but I would recommend avoiding the whole repository route. You get very few implemented methods of any great practical value, and to make it work you run into Java configuration nonsense that you can spend days and weeks on without much help from the documentation.
Instead, go with the MongoTemplate route and create your own data access layer, which frees you from the configuration nightmares faced by Spring programmers. MongoTemplate is really the savior for engineers who are comfortable architecting their own classes and interactions, since there is a lot of flexibility. The structure can be something like this:
Create a MongoClientFactory class that will run at the application level and give you a MongoClient object. You can implement this as a singleton or as an enum singleton (which is thread safe).
Create a data access base class from which you can derive a data access object for each domain object. The base class can implement a method for creating a MongoTemplate object which your class-specific methods can use for all DB access (see the sketch after this list).
Each data access class for a domain object can implement the basic methods itself, or you can implement them in the base class.
The controller methods can then call methods in the data access classes as needed.
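A hedged sketch of that structure (class and database names such as MongoClientFactory, BaseDao, UserDao and "mydb" are illustrative; a recent Spring Data MongoDB is assumed, where MongoTemplate accepts a com.mongodb.client.MongoClient):
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Application-level client; an enum singleton is thread safe by construction.
enum MongoClientFactory {
    INSTANCE;
    private final MongoClient client = MongoClients.create("mongodb://localhost:27017");
    MongoClient getClient() { return client; }
}

// Data access base class; each domain object's DAO inherits the template creation.
abstract class BaseDao {
    protected final MongoTemplate template =
            new MongoTemplate(MongoClientFactory.INSTANCE.getClient(), "mydb");
}

// One DAO per domain object; controllers call these methods.
class UserDao extends BaseDao {
    User findByEmail(String email) {
        return template.findOne(
                Query.query(Criteria.where("email").is(email)), User.class);
    }
}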

Related

Mandatory Flush for a repository

Situation: JPA, SpringBoot, Hibernate
public interface ViewRepository
        extends JpaRepository<SomeView, Long>, JpaSpecificationExecutor<SomeView> {
    Optional<SomeView> findByIdAndLanguageId(Long id, Long lid);
}
//service
SomeView getSomeView() {
    SomeView someView1 = repo.findByIdAndLanguageId(id, lid1);
    ....
    //now get it with a different language id
    SomeView someView2 = repo.findByIdAndLanguageId(id, lid2);
    //the problem here is that the value of someView2 is the same as someView1, since Hibernate caches it
}
Is there an annotation or some other way to prevent this caching for calls to this repository only (not application-wide disabling of caching), at the service level or repository level?
This is a core feature of Hibernate.
If you look something up and change the newly found entity, the changes will be saved to the database without any additional code.
If you don't want that, you need to use something called a "stateless session". But please warn everyone around you about it, because otherwise you will end up with many surprised people. The "stateless session" isn't a very popular thing and no one will expect you to use it.
If you don't want this caching to happen, you can immediately detach the entity after reading it i.e.:
SomeView someView1 = repo.findByIdAndLanguageId(id,lid1);
entityManager.detach(someView1);
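A minimal sketch of how this could look in the service, assuming the EntityManager is obtained through @PersistenceContext (id, lid1 and lid2 are the question's placeholders):
@PersistenceContext
private EntityManager entityManager;

SomeView getSomeView() {
    SomeView someView1 = repo.findByIdAndLanguageId(id, lid1);
    entityManager.detach(someView1);                            // drop it from the persistence context
    SomeView someView2 = repo.findByIdAndLanguageId(id, lid2);  // hydrated fresh from the database
    return someView2;
}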

How to properly convert domain entities to DTOs while considering scalability & testability

I have read several articles and Stackoverflow posts for converting domain objects to DTOs and tried them out in my code. When it comes to testing and scalability I am always facing some issues. I know the following three possible solutions for converting domain objects to DTOs. Most of the time I am using Spring.
Solution 1: Private method in the service layer for converting
The first possible solution is to create a small "helper" method in the service layer code which converts the retrieved database object into my DTO object.
@Service
public class MyEntityService {

    public SomeDto getEntityById(Long id) {
        SomeEntity dbResult = someDao.findById(id);
        SomeDto dtoResult = convert(dbResult);
        // ... more logic happens
        return dtoResult;
    }

    public SomeDto convert(SomeEntity entity) {
        // ... object creation and using getters/setters for converting
    }
}
Pros:
easy to implement
no additional class for conversion needed -> project doesn't blow up with entities
Cons:
problems when testing, as new SomeEntity() is used in the private method, and if the object is deeply nested I have to provide an adequate result for my when(someDao.findById(id)).thenReturn(alsoDeeplyNestedObject) to avoid NullPointerExceptions if the conversion also dissolves the nested structure
Solution 2: Additional constructor in the DTO for converting domain entity to DTO
My second solution would be to add an additional constructor to my DTO entity to convert the object in the constructor.
public class SomeDto {
    // ... some attributes

    public SomeDto(SomeEntity entity) {
        this.attribute = entity.getAttribute();
        // ... nested conversion & conversion of lists and arrays
    }
}
Pros:
no additional class for converting needed
conversion hidden in the DTO entity -> service code is smaller
Cons:
usage of new SomeDto() in the service code, and therefore I have to provide the correct nested object structure as a result of my someDao mocking.
Solution 3: Using Spring's Converter or any other externalized Bean for this converting
I recently saw that Spring offers an interface for conversion purposes, Converter<S, T>, but this solution stands for any externalized class that does the conversion. With this solution I inject the converter into my service code and call it when I want to convert the domain entity to my DTO.
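For example, such an externalized converter could look like this sketch (using Spring's Converter interface with the SomeEntity/SomeDto names and attribute accessors from above):
@Component
public class SomeEntityToDtoConverter implements Converter<SomeEntity, SomeDto> {

    @Override
    public SomeDto convert(SomeEntity entity) {
        SomeDto dto = new SomeDto();
        dto.setAttribute(entity.getAttribute()); // plus nested objects, lists and arrays as needed
        return dto;
    }
}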
Pros:
easy to test as I can mock the result during my test case
separation of tasks -> a dedicated class is doing the job
Cons:
doesn't "scale" that much as my domain model grows. With a lot of entities I have to create two converters for every new entity (-> converting DTO entitiy and entitiy to DTO)
Do you have more solutions for my problem and how do you handle it? Do you create a new Converter for every new domain object and can "live" with the amount of classes in the project?
Thanks in advance!
Solution 1: Private method in the service layer for converting
I guess Solution 1 will not work well, because your DTOs are domain-oriented and not service-oriented. Thus it is likely that they are used in different services. So a mapping method does not belong to one service and therefore should not be implemented in one service. How would you re-use the mapping method in another service?
The first solution would work well if you used dedicated DTOs per service method. But more about this at the end.
Solution 2: Additional constructor in the DTO for converting domain entity to DTO
In general a good option, because you can see the DTO as an adapter to the entity. In other words: the DTO is another representation of an entity. Such designs often wrap the source object and provide methods that give you another view on the wrapped object.
But a DTO is a data transfer object, so it might be serialized sooner or later and sent over a network, e.g. using Spring's remoting capabilities. In that case the client that receives this DTO must deserialize it and thus needs the entity classes on its classpath, even if it only uses the DTO's interface.
Solution 3: Using Spring's Converter or any other externalized Bean for this converting
Solution 3 is the one I would also prefer. But I would create a Mapper<S,T> interface that is responsible for mapping from source to target and vice versa, e.g.
public interface Mapper<S, T> {
    public T map(S source);
    public S map(T target);
}
The implementation can be done using a mapping framework like modelmapper.
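For instance, a minimal sketch of a ModelMapper-backed implementation (SomeEntity and SomeDto are the question's example types):
import org.modelmapper.ModelMapper;

public class SomeEntityMapper implements Mapper<SomeEntity, SomeDto> {

    private final ModelMapper modelMapper = new ModelMapper();

    @Override
    public SomeDto map(SomeEntity source) {
        return modelMapper.map(source, SomeDto.class);
    }

    @Override
    public SomeEntity map(SomeDto target) {
        return modelMapper.map(target, SomeEntity.class);
    }
}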
You also said that a converter for each entity
doesn't "scale" that much as my domain model grows. With a lot of entities I have to create two converters for every new entity (-> converting DTO entitiy and entitiy to DTO)
I doubt that you only have to create two converters, or one mapper, per DTO, because your DTO is domain-oriented.
As soon as you start to use it in another service you will recognize that the other service usually should not or cannot return all the values that the first service does.
You will start to implement another mapper or converter for each additional service.
This answer would get too long if I started on the pros and cons of dedicated versus shared DTOs, so I can only ask you to read my blog post on the pros and cons of service layer designs.
EDIT
About the third solution: where do you prefer to put the call for the mapper?
In the layer above the use cases. DTOs are data transfer objects because they pack data into data structures that are best suited for the transfer protocol; thus I call that layer the transport layer.
This layer is responsible for mapping the use case's request and result objects from and to the transport representation, e.g. JSON data structures.
EDIT
I see you're ok with passing an entity as a DTO constructor parameter. Would you also be ok with the opposite? I mean, passing a DTO as an Entity constructor parameter?
A good question. The opposite would not be ok for me, because I would then introduce a dependency from the entity to the transport layer. This would mean that a change in the transport layer could impact the entities, and I don't want changes in more detailed layers to impact more abstract layers.
If you need to pass data from the transport layer to the entity layer you should apply the dependency inversion principle.
Introduce an interface that returns the data through a set of getters, let the DTO implement it, and use this interface in the entity's constructor. Keep in mind that this interface belongs to the entity's layer and thus should not have any dependencies on the transport layer.
                        interface
+-----+  implements ||  +------------+   uses   +--------+
| DTO | ------------||> | EntityData | <------- | Entity |
+-----+             ||  +------------+          +--------+
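In code, that dependency inversion could look roughly like this sketch (EntityData and the attribute getter are illustrative names; the interface lives in the entity's layer):
// Entity layer: the only abstraction the entity knows about.
public interface EntityData {
    String getAttribute();
}

// Transport layer: the DTO implements the entity-layer interface.
public class SomeDto implements EntityData {
    private String attribute;

    @Override
    public String getAttribute() {
        return attribute;
    }
}

// Entity layer: the entity depends only on EntityData, never on the DTO.
public class SomeEntity {
    private final String attribute;

    public SomeEntity(EntityData data) {
        this.attribute = data.getAttribute();
    }
}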
I like the third solution from the accepted answer.
Solution 3: Using Spring's Converter or any other externalized Bean for this converting
And I create the DtoConverter this way:
BaseEntity class marker:
public abstract class BaseEntity implements Serializable {
}
AbstractDto class marker:
public class AbstractDto {
}
GenericConverter interface:
public interface GenericConverter<D extends AbstractDto, E extends BaseEntity> {

    E createFrom(D dto);

    D createFrom(E entity);

    E updateEntity(E entity, D dto);

    default List<D> createFromEntities(final Collection<E> entities) {
        return entities.stream()
                .map(this::createFrom)
                .collect(Collectors.toList());
    }

    default List<E> createFromDtos(final Collection<D> dtos) {
        return dtos.stream()
                .map(this::createFrom)
                .collect(Collectors.toList());
    }
}
CommentConverter interface:
public interface CommentConverter extends GenericConverter<CommentDto, CommentEntity> {
}
CommentConverter class implementation:
@Component
public class CommentConverterImpl implements CommentConverter {

    @Override
    public CommentEntity createFrom(CommentDto dto) {
        CommentEntity entity = new CommentEntity();
        updateEntity(entity, dto);
        return entity;
    }

    @Override
    public CommentDto createFrom(CommentEntity entity) {
        CommentDto dto = new CommentDto();
        if (entity != null) {
            dto.setAuthor(entity.getAuthor());
            dto.setCommentId(entity.getCommentId());
            dto.setCommentData(entity.getCommentData());
            dto.setCommentDate(entity.getCommentDate());
            dto.setNew(entity.getNew());
        }
        return dto;
    }

    @Override
    public CommentEntity updateEntity(CommentEntity entity, CommentDto dto) {
        if (entity != null && dto != null) {
            entity.setCommentData(dto.getCommentData());
            entity.setAuthor(dto.getAuthor());
        }
        return entity;
    }
}
I ended up NOT using some magical mapping library or external converter class, but just adding a small bean of my own which has convert methods from each entity to each DTO I need. The reason is that the mapping was:
either stupidly simple and I would just copy some values from one field to another, perhaps with a small utility method,
or was quite complex, and would be more complicated to express via the custom parameters of some generic mapping library than to just write out that code. This is, for example, the case where the client can send JSON but under the hood it is transformed into entities, and when the client retrieves the parent object of these entities again, it is converted back into JSON.
This means I can just call .map(converter::convert) on any collection of entities to get back a stream of my DTO's.
Is it scalable to have it all in one class? Well, the custom configuration for this mapping would have to be stored somewhere even if I used a generic mapper. The code is generally extremely simple, except for a handful of cases, so I'm not too worried about this class exploding in complexity. I'm also not expecting to have dozens more entities, but if I did, I might group these converters into a class per subdomain.
Adding a base class to my entities and DTO's so I can write a generic converter interface and implement it per class just isn't needed (yet?) either for me.
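A hedged sketch of what such a hand-rolled converter bean might look like (User/UserDto and the copied fields are illustrative):
@Component
public class UserDtoConverter {

    // Trivial field-by-field copy; the few complex cases get their own dedicated methods.
    public UserDto convert(User entity) {
        UserDto dto = new UserDto();
        dto.setId(entity.getId());
        dto.setName(entity.getName());
        return dto;
    }
}

// Usage on any collection of entities:
// List<UserDto> dtos = users.stream().map(converter::convert).collect(Collectors.toList());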
In my opinion the third solution is the best one. Yes, for each entity you'll have to create two new converter classes, but when the time comes for testing you won't have a lot of headaches. You should never choose the solution that makes you write less code at the beginning and then much more when it comes to testing and maintaining that code.
Another point: if you use the second approach and your entity has lazy dependencies, your DTO can't tell whether a dependency is loaded unless you inject the EntityManager into the DTO and use it to check whether the dependency was loaded. I don't like this approach because a DTO shouldn't know anything about the EntityManager. As a solution I personally prefer converters, but at the same time I prefer to have multiple DTO classes for the same entity. For example, if I am 100% sure that the User entity will be loaded without its corresponding Company, then there has to be a UserDto that doesn't have a CompanyDto field. At the same time, if I know that the User entity will be loaded with its correlated Company, then I use the aggregate pattern, something like a UserCompanyDto class that contains UserDto and CompanyDto as fields.
On my side I prefer using option 3 with a third-party library such as ModelMapper or MapStruct. I also use it through an interface in a util package, because I don't want any external tool or library to interact directly with my code.
Definition:
public interface MapperWrapper {
    <T> T performMapping(Object source, Class<T> destination);
}

@Component
public class ModelMapperWrapper implements MapperWrapper {

    private ModelMapper mapper;

    public ModelMapperWrapper() {
        this.mapper = new ModelMapper();
    }

    @Override
    public <T> T performMapping(Object source, Class<T> destination) {
        mapper.getConfiguration()
                .setMatchingStrategy(MatchingStrategies.STRICT);
        return mapper.map(source, destination);
    }
}
Then I can test it easily:
Testing:
@SpringJUnitWebConfig(TestApplicationConfig.class)
class ModelMapperWrapperTest implements WithAssertions {

    private final MapperWrapper mapperWrapper;

    @Autowired
    public ModelMapperWrapperTest(MapperWrapper mapperWrapper) {
        this.mapperWrapper = mapperWrapper;
    }

    @BeforeEach
    void setUp() {
    }

    @Test
    void givenModel_whenMapModelToDto_thenReturnsDto() {
        var model = new DummyModel();
        model.setId(1);
        model.setName("DUMMY_NAME");
        model.setAge(25);
        var modelDto = mapperWrapper.performMapping(model, DummyModelDto.class);
        assertAll(
                () -> assertThat(modelDto.getId()).isEqualTo(String.valueOf(model.getId())),
                () -> assertThat(modelDto.getName()).isEqualTo(model.getName()),
                () -> assertThat(modelDto.getAge()).isEqualTo(String.valueOf(model.getAge()))
        );
    }

    @Test
    void givenDto_whenMapDtoToModel_thenReturnsModel() {
        var modelDto = new DummyModelDto();
        modelDto.setId("1");
        modelDto.setName("DUMMY_NAME");
        modelDto.setAge("25");
        var model = mapperWrapper.performMapping(modelDto, DummyModel.class);
        assertAll(
                () -> assertThat(model.getId()).isEqualTo(Integer.valueOf(modelDto.getId())),
                () -> assertThat(model.getName()).isEqualTo(modelDto.getName()),
                () -> assertThat(model.getAge()).isEqualTo(Integer.valueOf(modelDto.getAge()))
        );
    }
}
After that it would be very easy to swap in another mapper library. I should also have created an abstract factory or used a strategy pattern.

How to select JDBC / JPA implementations at run time?

I'm writing an application meant to manage a database using both JDBC and JPA, for an exam. I would like the user to select the API once at the beginning, so that the whole application uses the selected API (whether JPA or JDBC).
For the moment I decided to use this approach:
I created an interface for each DAO class (e.g. interface UserDAO) with all needed method declarations.
I created two classes for each DAO distinguished by the API used (e.g UserDAOImplJDBC and UserDAOImplJPA). Both of them implement the interface (in our case, UserDAO).
I created a third class (e.g. UserDAOImpl) that extends the JDBC implementation class, and I have been using this class throughout my code. When I wanted to switch to JPA I just had to change, in every DAO class, the extends ***ImplDAOJDBC to extends ***ImplDAOJPA.
Now, as I'm starting to have many DAO classes, it's becoming complicated to modify the code each time.
Is there a way to change all extends faster?
I was considering adding an option on the first screen (for example a radio group) to select JDBC or JPA, but I have no idea yet how to make it work without restructuring all the code. Any idea?
Use a factory to get the appropriate DAO, every time you need one:
public class UserDaoFactory {
    public UserDao create() {
        if (SomeSharedSingleton.getInstance().getPersistenceOption() == JDBC) {
            return new UserDAOImplJDBC();
        } else {
            return new UserDAOImplJPA();
        }
    }
}
That's a classic OO pattern.
That said, I hope you realize that what you're doing there should really never be done in a real application:
there's no reason to do the exact same thing in two different ways
the persistence model of JPA and JDBC is extremely different: JPA entities are managed by the JPA engine, so every change to JPA entities is transparently made persistent. That's not the case with JDBC, where the data you get from the database is detached. So the way to implement business logic is very different between JPA and JDBC: you typically never need to save any change when using JPA (see the sketch below).
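A minimal sketch of that JPA behavior, assuming a User entity with a name field and an injected EntityManager (illustrative names):
// JPA: inside a transaction, a managed entity's changes are flushed automatically.
@Transactional
public void renameUser(Long id, String newName) {
    User user = entityManager.find(User.class, id); // returns a managed entity
    user.setName(newName);                          // persisted at commit, no explicit save call
}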
You got 1 and 2 right, but 3 completely wrong.
Instead of having Impl extend one of the other implementations, choose which implementation to initialize using a utility method, for example. That's assuming you don't use a Dependency Injection framework such as Spring.
UserDAO dao = DBUtils.getUserDAO();

public class DBUtils {

    public static boolean shouldUseJdbc() {
        // Decide from some configuration which implementation to use
    }

    public static UserDAO getUserDAO() {
        if (shouldUseJdbc()) {
            return new UserDAOImplJDBC();
        } else {
            return new UserDAOImplJPA();
        }
    }
}
This is still just an example, as your DAOs don't need to be instantiated each time; they should actually be singletons (see the sketch below).
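A hedged sketch of keeping the DAO as a lazily created singleton inside DBUtils (reusing the shouldUseJdbc() method from above):
public class DBUtils {

    // Created once and reused on every subsequent call.
    private static UserDAO userDao;

    public static synchronized UserDAO getUserDAO() {
        if (userDao == null) {
            userDao = shouldUseJdbc() ? new UserDAOImplJDBC() : new UserDAOImplJPA();
        }
        return userDao;
    }
}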

Spring Data: multiple repository interfaces into a single 'repository' service class

I have quite a few Repository interfaces extending JpaRepository, due to the design of the database.
In order to construct a simple object, e.g. Person, I have to make calls to about 4-5 repositories just because the data is spread like that throughout the database. Something like this (pardon the pseudocode):
@Service
public class PersonConstructService {

    public PersonConstructService(Repository repository,
                                  RepositoryTwo repositoryTwo,
                                  RepositoryThree repositoryThree) {
    }

    public Person constructPerson() {
        person
            .add(getDataFromRepositoryOne())
            .add(getDataFromRepositoryTwo())
            .add(getDataFromRepositoryThree());
        return person;
    }

    private SomeDataTypeReturnedOne getDataFromRepositoryOne() {
        repository.doSomething();
    }

    private SomeDataTypeReturnedTwo getDataFromRepositoryTwo() {
        repositoryTwo.doSomething();
    }

    private SomeDataTypeReturnedThree getDataFromRepositoryThree() {
        repositoryThree.doSomething();
    }
}
PersonConstructService class uses all these interfaces just to construct a simple Person object. I am calling these repositories from different methods inside the PersonConstructService class. I have thought about spreading this class into multiple classes, but I do not think this is correct.
Instead I would like to use a repositoryService which would include all the repositories necessary for the creation of a Person object. Is that a good approach? Is it possible in Spring?
The reason I am asking is that sometimes the number of services injected into a class is about 7-8, which is definitely not good.
I do not think you can or should create a meta-repository-like abstraction. Repositories have a well-defined meaning: conceptually, they are CRUD services (and a bit more sometimes :-)) for your Hibernate/JPA/Datastore entities. And I guess this is enough for them; anything more is confusing.
Now what I would propose is a "smart" way of building your "Person" objects that is automa(g)tically aware of any new services that contribute to the meaning of the Person object.
The crux of it would be that :
you could have your Repositories implement a given interface, say PersonDataProvider, which would have a method, say public PersonPart contributeDataToPersonBuilder(PersonBuilder).
You would make your @Service implement Spring's BeanFactoryPostProcessor interface, allowing you to inspect the container for all such PersonDataProvider instances and inject them into your service (see the accepted answer at How to collect and inject all beans of a given type in Spring XML configuration).
Your @Service implementation would then ask each of the PersonDataProviders in turn to contribute its data.
I could expand a bit, but this seems to me like the way to go.
One could argue that this is not clean (it makes your Repositories aware of "something" that happens at the service layer, and they should not have to), and one could work around that, but it's simpler to expose the gist of the solution that way.
EDIT: since this post was first written, I became aware that Spring can auto-detect and inject all beans of a certain type, without the need for PostProcessors. See the accepted answer here: Autowire reference beans into list by type
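A hedged sketch of that auto-injection approach (PersonDataProvider, PersonBuilder and contributeDataToPersonBuilder are the hypothetical names from above, with the contribution simplified to a void method; Spring collects every bean implementing the interface into the injected list):
public interface PersonDataProvider {
    void contributeDataToPersonBuilder(PersonBuilder builder);
}

@Service
public class PersonConstructService {

    private final List<PersonDataProvider> providers;

    // Spring injects all PersonDataProvider beans (e.g. the repositories) here.
    public PersonConstructService(List<PersonDataProvider> providers) {
        this.providers = providers;
    }

    public Person constructPerson() {
        PersonBuilder builder = new PersonBuilder();
        providers.forEach(provider -> provider.contributeDataToPersonBuilder(builder));
        return builder.build();
    }
}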
I see it as quite reasonable and practical data aggregation in the service layer.
It's perfectly achievable in Spring. If you have access to the repositories' code you can name them all, like:
#Repository("repoOne")
public class RepositoryOne {
#Repository("repoTwo")
public class RepositoryTwo {
And inject them into the aggregation service as necessary:
@Service
public class MultipleRepoService {

    @Autowired
    @Qualifier("repoOne")
    private RepositoryOne repositoryOne;

    @Autowired
    @Qualifier("repoTwo")
    private RepositoryTwo repositoryTwo;

    public void doMultipleBusiness() {
        repositoryOne.one();
        repositoryTwo.two();
    }
}
In fact, you don't even need to name and qualify them if they are different classes, but you do if they are in a hierarchy or share the same interface...
Also, you can inject them directly into the constructing method if field autowiring is not an option:
public void construct(@Qualifier("repoOne") RepositoryOne repoOne,
                      @Qualifier("repoTwo") RepositoryTwo repoTwo) {
    repoOne.one();
    repoTwo.two();
}

GemFire EntryNotFoundException for @CacheEvict

In short, when @CacheEvict is called on a method and the key for the entry is not found, GemFire throws an EntryNotFoundException.
Now in detail,
I have a class
class Person {
    String mobile;
    int dept;
    String name;
}
I have two cache regions defined, personRegion and personByDeptRegion, and the service is as below.
@Service
class PersonServiceImpl {

    @Cacheable(value = "personRegion")
    public Person findByMobile(String mobile) {
        return personRepository.findByMobile(mobile);
    }

    @Cacheable(value = "personByDeptRegion")
    public List<Person> findByDept(int deptCode) {
        return personRepository.findByDept(deptCode);
    }

    @Caching(
        evict = { @CacheEvict(value = "personByDeptRegion", key = "#p0.dept") },
        put = { @CachePut(value = "personRegion", key = "#p0.mobile") }
    )
    public Person updatePerson(Person p1) {
        return personRepository.save(p1);
    }
}
When there is a call to updatePerson and there are no entries in personByDeptRegion, this throws an EntryNotFoundException for the key 1 (or whatever the dept code is). There is a very good chance that this method will be called before the @Cacheable methods are called, and I want to avoid this exception.
Is there any way we could tweak the GemFire behavior to gracefully return when the key does not exist for a given region?
Alternatively, I am also eager to know if there is a better implementation of the above scenario using Gemfire as cache.
Spring Data Gemfire : 1.7.4
Gemfire Version : v8.2.1
Note: The above code is for representation purpose only and I have multiple services with same issue in actual project.
First, I commend you for using Spring's caching annotations on your application @Service components. All too often developers enable caching in their Repositories, which I think is bad form, especially if complex business rules (or even additional IO, e.g. calling a web service from a service component) are involved before or after the Repository interaction(s), particularly in cases where caching behavior should not be affected (or determined).
I also think your caching use case (updating one cache (personRegion) while invalidating another (personByDeptRegion) on a data store update) by following a @CachePut with a @CacheEvict seems reasonable to me. Though I would point out that the seemingly intended use of the @Caching annotation is to combine multiple caching annotations of the same type (e.g. multiple @CacheEvict or multiple @CachePut), as explained in the core Spring Framework Reference Guide. Still, there is nothing preventing your intended use (see the sketch below).
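For illustration, a hedged sketch of that more typical use of @Caching, combining two evictions (deletePerson is a hypothetical method; the SpEL keys mirror the ones above):
@Caching(evict = {
    @CacheEvict(value = "personByDeptRegion", key = "#p0.dept"),
    @CacheEvict(value = "personRegion", key = "#p0.mobile")
})
public void deletePerson(Person p0) {
    personRepository.delete(p0);
}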
I created a similar test class here, modeled after your example above, to verify the problem. Indeed the jonDoeUpdateSuccessful test case fails (with the GemFire EntryNotFoundException, shown below) since no people in Department "R&D" were previously cached in the "DepartmentPeople" GemFire Region prior to the update, unlike the janeDoeUpdateSuccessful test case, which causes the cache to be populated before the update (even if the entry has no values, which is of no consequence).
com.gemstone.gemfire.cache.EntryNotFoundException: RESEARCH_DEVELOPMENT
at com.gemstone.gemfire.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:1435)
NOTE: My test uses GemFire as both a "cache provider" and a System of Record (SOR).
The problem really lies in SDG's use of Region.destroy(key) in the GemfireCache.evict(key) implementation rather than, and perhaps more appropriately, Region.remove(key).
GemfireCache.evict(key) has been implemented with Region.destroy(key) since inception. However, Region.remove(key) was not introduced until GemFire v5.0. Still, I can see no discernible difference between Region.destroy(key) and Region.remove(key) other than the EntryNotFoundException thrown by Region.destroy(key). Essentially, they both destroy the local entry (both key and value) as well as distribute the operation to other caches in the cluster (providing a non-LOCAL Scope is used).
So, I have filed SGF-539 to change SDG to call Region.remove(key) in GemfireCache.evict(key) rather than Region.destroy(key).
As for a workaround, well, there are basically only two things you can do:
Restructure your code and your use of the @CacheEvict annotation, and/or...
Make use of the condition attribute on @CacheEvict.
It is unfortunate that a condition cannot be specified using a class type, something akin to a Spring Condition (in addition to SpEL), but that interface is intended for another purpose and the @CacheEvict condition attribute does not accept a class type.
At the moment, I don't have a good example of how this might work so I am moving forward on SGF-539.
You can follow this ticket for more details and progress.
Sorry for the inconvenience.
-John
