Replicated MongoDB + spring-data-mongodb - java

There is a replicated MongoDB deployment (mongodb-1 is the primary, mongodb-2 and mongodb-3 are secondaries).
The app runs through spring-boot-starter-data-mongodb.
Service:
public class FooBarService {

    private FooBarRepository repository;

    public FooBar method1() {
        return repository.someQuery();
    }

    public FooBar method2() {
        return repository.someQuery();
    }
}
Repository:
public interface FooBarRepository extends MongoRepository<FooBar, String> {
    FooBar someQuery();
}
My question is: what is a clean way to make method1 read from the primary member of the Mongo replica set, while method2 reads from a secondary member?
I would like to manage this at the service level (something like @Transactional, but for selecting a replica set member).
Can you advise me on any solutions for this?

Solution 1: The @Meta Annotation
If you want to continue using repository interfaces, you can annotate the query method definition with the @Meta annotation, which lets you pass flags indicating that the query may read from a secondary MongoDB member.
public interface FooBarRepository extends MongoRepository<FooBar, String> {

    @Query("{}")
    @Meta(flags = Meta.CursorOption.SECONDARY_READS)
    FooBar someQuery();
}
However, you cannot control this flag from the service level. You would have to create two query methods, one with the flag and one without it, e.g. someQueryFromSecondary() and someQueryFromPrimary().
Solution 2: Using MongoTemplate
Another option would be to use MongoTemplate directly and set the flag on the Query.
public FooBar someQuery(boolean readFromSecondary) {
    var query = Query.query(Criteria.where("someKey").is("1"));
    if (readFromSecondary) {
        query.allowSecondaryReads();
    }
    return mongoTemplate.findOne(query, FooBar.class);
}
Regardless of which solution you choose, be aware that reading from secondary members can return stale data. Consider taking a look at the MongoDB documentation on read preference.

Related

Javers not recognizing insert as an initial change

Working on a Spring Boot application using MongoDB as a persistent store.
Using Spring Data and MongoRepository to access MongoDB.
Using Javers to provide auditing.
If I use mongoRepository.insert(document) followed later by mongoRepository.save(document), and then use Javers to query the changes to that document, Javers does not detect the difference between the object inserted and the object saved. It reports only a single change, as if the save call had persisted the original object.
If I replace the insert call with a save, letting Spring Data decide whether to insert or update, Javers reports the expected changes.
Example:
Consider the following:
@JaversSpringDataAuditable
public interface SomeDocumentRepository extends MongoRepository<SomeDocument, String> {
}

@Builder
@Data
@Document(collection = "someDocuments")
public class SomeDocument {

    @Id
    private String id;

    private String status;
}

@Service
public class SomeDocumentService {

    @Autowired
    private SomeDocumentRepository someDocumentRepository;

    public SomeDocument insert(SomeDocument doc) {
        return someDocumentRepository.insert(doc);
    }

    public SomeDocument save(SomeDocument doc) {
        return someDocumentRepository.save(doc);
    }
}

@Service
public class AuditService {

    @Autowired
    private Javers javers;

    public List<Change> getStatusChangesById(String documentId) {
        JqlQuery query = QueryBuilder
            .byInstanceId(documentId, SomeDocument.class)
            .withChangedProperty("status")
            .build();
        return javers.findChanges(query);
    }
}
If I call my service as follows:
var doc = SomeDocument.builder().status("new").build();
doc = someDocumentService.insert(doc);
doc.setStatus("change1");
doc = someDocumentService.save(doc);
and then call the audit service to get the changes:
auditService.getStatusChangesById(doc.getId());
I get a single change with "left" set to a blank and "right" set to "change1".
If I call "save" instead of "insert" like:
var doc = SomeDocument.builder().status("new").build();
doc = someDocumentService.save(doc);
doc.setStatus("change1");
doc = someDocumentService.save(doc);
and then call the audit service to get the changes, I get two changes: the first (most recent) with "left" set to "new" and "right" set to "change1", and a second change with "left" set to "" and "right" set to "new".
Is this a bug?
That's a good point. In the case of Mongo, Javers covers only the methods from the CrudRepository interface. See https://github.com/javers/javers/blob/master/javers-spring/src/main/java/org/javers/spring/auditable/aspect/springdata/JaversSpringDataAuditableRepositoryAspect.java
Looks like MongoRepository#insert() should also be covered by the aspect.
Feel free to contribute a PR to Javers, and I will merge it. If you want to discuss the design first, please create a discussion here: https://github.com/javers/javers/discussions

Not able to update data in MySQL using test class in spring boot

@Test
public void testUpdateStudent() {
    Optional<Student> student = studentRepository.findById((long) 1);
    student.setName();
}
When I am using Optional I am not getting the setName() method; how can I update the record in my database?
Also, why should I use this Optional?
You can use getOne() if you do not want to receive an Optional.
@Test
public void testUpdateStudent() {
    Student student = studentRepository.getOne(1L);
    student.setName("new name");
    studentRepository.saveAndFlush(student);
}
Note that getOne() returns a lazy reference, so accessing it when the record is not present in the DB throws an EntityNotFoundException.
When we use findById(<some_parameter>), Spring Data JPA returns an Optional. You can get the object once the isPresent() check is satisfied, as follows:
@Test
public void testUpdateStudent() {
    Optional<Student> studentOpt = studentRepository.findById(1L);
    if (studentOpt.isPresent()) {
        Student student = studentOpt.get();
        student.setName("new name");
        studentRepository.saveAndFlush(student);
    }
}
When I am using Optional I am not getting setName() method, how can I update record in my database.
Since you are getting an Optional, you can set the value in the following manner:
@Test
public void testUpdateStudent() {
    Optional<Student> student = studentRepository.findById((long) 1);
    student.get().setName("new name");
}
Optional.get() gives you the value T present in the Optional.
Also, why should I use this Optional?
Honestly, this really depends on which type of repository your studentRepository extends. Spring Data offers a variety of repositories, such as CrudRepository, PagingAndSortingRepository, etc. You can decide after going through the official documentation.
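As an aside, the Optional API itself lets you express the "update if present" flow without an explicit isPresent()/get() pair. A minimal plain-Java sketch (the Student class here is just a stand-in for the entity, not the asker's actual class):

```java
import java.util.Optional;

public class OptionalUpdateSketch {

    // Stand-in for the JPA entity; only what this example needs.
    public static class Student {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Applies the rename only when the Optional holds a value.
    public static Optional<Student> rename(Optional<Student> found, String newName) {
        // ifPresent() runs the lambda only for a non-empty Optional,
        // so there is no NoSuchElementException risk.
        found.ifPresent(s -> s.setName(newName));
        return found;
    }

    public static void main(String[] args) {
        // Simulates a successful repository.findById()
        Optional<Student> found = rename(Optional.of(new Student()), "new name");
        System.out.println(found.get().getName()); // prints "new name"

        // On an empty Optional the update is simply skipped, no exception.
        Optional<Student> missing = rename(Optional.empty(), "ignored");
        System.out.println(missing.isPresent()); // prints "false"
    }
}
```

In a real test you would still call saveAndFlush() inside the ifPresent() lambda to persist the change.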

Cache invalidation through a setter or service?

I've recently had to implement a cache invalidation system, and ended up hesitating between several ways of doing it.
I'd like to know what the best practice is in my case. I have a classic java back-end, with entities, services and repositories.
Let's say I have a Person object, with the usual setters and getters, persisted in a database.
public class Person {

    private Long id;
    private String firstName;
    private String lastName;
    ...
}
A PersonRepository, extending JpaRepository<Person, Long>:
public interface PersonRepository extends JpaRepository<Person, Long> {
    // save(), findById(), delete() are inherited from JpaRepository
}
I have a PersonService with the usual save(), find(), delete() methods and other more functional methods.
public class PersonService {

    public Person save(Person person) {
        doSomeValidation(person);
        return personRepository.save(person);
    }
    ...
}
Now I also have some jobs that run periodically and manipulate the Person objects. One of them runs every second and uses a cache of Person objects that needs to be rebuilt only if the firstName attribute of a Person has been modified elsewhere in the application.
public class EverySecondPersonJob {

    private List<Person> cache;
    private boolean cacheValid;

    public void invalidateCache() {
        cacheValid = false;
    }

    public void execute() { // runs every second
        if (!cacheValid)
            cache = buildCache();
        doStuff(cache);
    }
}
There are lots of places in the code that manipulate Person objects and persist them. Some may change the firstName attribute, requiring an invalidation of the cache; others change other things, not requiring it. For example:
public class ServiceA {

    public void doStuffA(Person person) {
        doStuff();
        person.setFirstName("aaa");
        personRepository.save(person);
    }

    public void doStuffB(Person person) {
        doStuff();
        person.setLastName("aaa");
        personService.save(person);
    }
}
What is the best way of invaliding the cache?
First idea:
Create a PersonService.saveAndInvalidateCache() method, then check every method that calls personService.save(): if it modifies the relevant attribute, make it call PersonService.saveAndInvalidateCache() instead:
public class PersonService {

    public Person save(Person person) {
        doSomeValidation(person);
        return personRepository.save(person);
    }

    public Person saveAndInvalidateCache(Person person) {
        doSomeValidation(person);
        Person saved = personRepository.save(person);
        everySecondPersonJob.invalidateCache();
        return saved;
    }
    ...
}

public class ServiceA {

    public void doStuffA(Person person) {
        doStuff();
        person.setFirstName("aaa");
        personService.saveAndInvalidateCache(person);
    }

    public void doStuffB(Person person) {
        doStuff();
        person.setLastName("aaa");
        personService.save(person);
    }
}
It requires lots of modifications and is error prone when the doStuffX() methods are modified or added: every doStuffX() has to know whether it must invalidate the cache of an entirely unrelated job.
Second idea:
Modify setFirstName() to track the state of the Person object, and make PersonService.save() handle the cache invalidation:
public class Person {

    private Long id;
    private String firstName;
    private String lastName;
    private boolean mustInvalidateCache;

    public void setFirstName(String firstName) {
        this.firstName = firstName;
        this.mustInvalidateCache = true;
    }
    ...
}

public class PersonService {

    public Person save(Person person) {
        doSomeValidation(person);
        Person saved = personRepository.save(person);
        if (person.isMustInvalidateCache())
            everySecondPersonJob.invalidateCache();
        return saved;
    }
    ...
}
That solution is less error prone, since the doStuffX() methods no longer need to know whether to invalidate the cache, but it makes the setter do more than just change the attribute, which seems to be a big no-no.
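Stripped of the persistence layer, the dirty-flag mechanics of this second idea can be exercised in plain Java. The names below mirror the question's code, but CacheJob and the save() helper are trivial stand-ins, not the real service or scheduler:

```java
public class DirtyFlagSketch {

    // Entity that tracks whether a cache-relevant field changed.
    public static class Person {
        private String firstName;
        private String lastName;
        private boolean mustInvalidateCache;

        public void setFirstName(String firstName) {
            this.firstName = firstName;
            this.mustInvalidateCache = true;   // cache-relevant change
        }
        public void setLastName(String lastName) {
            this.lastName = lastName;          // not cache-relevant
        }
        public boolean mustInvalidateCache() { return mustInvalidateCache; }
    }

    // Stand-in for EverySecondPersonJob: only the validity flag matters here.
    public static class CacheJob {
        private boolean cacheValid = true;
        public void invalidateCache() { cacheValid = false; }
        public boolean isCacheValid() { return cacheValid; }
    }

    // Stand-in for PersonService.save(): checks the flag after "persisting".
    public static Person save(Person person, CacheJob job) {
        if (person.mustInvalidateCache()) {
            job.invalidateCache();
        }
        return person;
    }

    public static void main(String[] args) {
        CacheJob job = new CacheJob();
        Person p = new Person();

        p.setLastName("aaa");
        save(p, job);
        System.out.println(job.isCacheValid());  // prints "true": lastName is irrelevant

        p.setFirstName("aaa");
        save(p, job);
        System.out.println(job.isCacheValid());  // prints "false": firstName changed
    }
}
```

One refinement worth considering: have save() reset the flag after invalidating, so a later save of the same instance does not invalidate again.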
Which solution is the best practice and why?
Thanks in advance.
Clarification: My job running every second calls, if the cache is invalid, a method that retrieves the Person objects from the database, builds a cache of other objects based upon the properties of the Person objects (here, firstName), and doesn't modify the Person.
The job then uses that cache of other objects for its job, and doesn't persist anything in the database either, so there is no potential consistency issue.
1) You don't
In the usage scenario you described, the best practice is not to do any self-grown caching but to use the cache inside the JPA implementation. A lot of JPA implementations provide one (e.g. Hibernate, EclipseLink, DataNucleus, Apache OpenJPA).
Now I also have some jobs that run periodically and manipulate the Person objects
You would never manipulate a cached object. To manipulate, you need a session/transaction context, and the JPA implementation makes sure that you have the current object.
If you do "invalidation" as you described, you lose transactional properties and get inconsistencies. What happens if a transaction fails after you have already updated the cache with the new value? But if you update the cache only after the transaction commits, concurrent jobs read the old value in the meantime.
2) Different Usage Scenario with an Eventually Consistent View
You could do caching "on top" of your data storage layer that provides an eventually consistent view. But you cannot write data back into the same object, since JPA always updates (and caches) the complete object.
Maybe you can store the data that your "doStuff" code derives in another entity?
If that is a possibility, then you have several options. I would wire in the cache invalidation via JPA triggers (entity lifecycle listeners) or the change data capture (CDC) capabilities of the database. JPA triggers are similar to your second idea, except that you don't need all code to go through your PersonService. If you run the trigger inside the application, your application cannot have multiple instances, so I would prefer getting change events from the database. You should also reread everything from time to time in case you miss an event.

Spring Cache - How can i get list of all actual data entities after changing in other cache-methods with custom key?

I have CRUD methods that modify the data in the cache and database.
I also have a method that returns all entities; when I use it after changes in the cache and database, I get stale data.
As I understand it, the problem is in the method that returns all entities: it uses the default key, which is different from the keys used by the other methods.
What do I need to do so that this method returns up-to-date data?
@Service
@CacheConfig(cacheNames = "configuration")
class ServiceConfiguration {

    @Cacheable // this method returns stale data
    public List<MySomeConfiguration> getAllProxyConfigurations() {
        return repository.getAllConfigurations();
    }

    @Cacheable(key = "#root.target.getConfigurationById(#id).serverId")
    public MySomeConfiguration getConfigurationById(Long id) {
        ...
        return configuration;
    }

    @CachePut(key = "#configuration.serverId", condition = "#result.id != null")
    public MySomeConfiguration addOrUpdateConfiguration(Configuration configuration) {
        return configuration;
    }

    @Cacheable(key = "#serverId")
    public MySomeConfiguration getConfigurationByServerId(String serverId) {
        ...
        return configuration;
    }

    @CacheEvict(key = "#root.target.getConfigurationById(#id).serverId")
    public void deleteConfigurationById(Long id) {
        ...
    }
} // end class
p.s. sorry for my english
By default, the Redis cache manager uses StringRedisSerializer as the key serializer.
Your class's toString() is used to serialize the key of the object, so don't give different keys to the different methods (put, get, evict, etc.); just rely on your toString() to produce the key, or override it by using a Spring Cache custom KeyGenerator.
Refer to:
https://www.baeldung.com/spring-cache-custom-keygenerator
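The idea behind a custom KeyGenerator (derive one deterministic key from the target, the method, and its arguments, so every cache operation agrees on the key) can be sketched without Spring. Note the simplification: Spring's real KeyGenerator#generate() receives a java.lang.reflect.Method, while this sketch takes a plain method name, and ConfigService is a hypothetical target class:

```java
import java.util.Arrays;

public class KeyGeneratorSketch {

    // Simplified take on Spring's KeyGenerator contract: build one
    // deterministic key from the target class, method name, and arguments.
    public static String generate(Object target, String methodName, Object... params) {
        return target.getClass().getSimpleName()
                + "_" + methodName
                + "_" + Arrays.deepToString(params);
    }

    // Hypothetical service, only here to act as a cache target.
    public static class ConfigService { }

    public static void main(String[] args) {
        String key = generate(new ConfigService(), "getConfigurationByServerId", "srv-1");
        System.out.println(key); // prints "ConfigService_getConfigurationByServerId_[srv-1]"
    }
}
```

With a scheme like this, get, put, and evict on the same method and arguments always hit the same cache entry, which is exactly what the mismatched per-method keys in the question break.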

Fetch HashMap instead of entity from Spring Repository using ehcache

I am using Ehcache for caching data in my Spring project.
For example, when fetching data from the mst_state table, I currently use the code below:
public interface MstStateRepository extends JpaRepository<MstState, Integer> {

    @Override
    @Cacheable("getAllState")
    List<MstState> findAll();
}
You can see that the findAll method returns List<MstState>.
But instead of a List, I need the return type to be a Map, with stateId as the key and the object as the value.
I can do this at the service level, but then I need to write separate logic for it, as below:
@Service
class CacheService {

    @Autowired
    private MstStateRepository mstStateRepository;

    private Map<Integer, MstState> cacheData = new HashMap<>();

    public Map<Integer, MstState> findAllState() {
        List<MstState> mstStates = mstStateRepository.findAll();
        for (MstState mstState : mstStates) {
            cacheData.put(mstState.getStateId(), mstState);
        }
        return cacheData;
    }
}
So instead of writing separate logic, can we get a Map directly from the repository? Please suggest.
You could use a Java 8 default method for that: it lets you provide an implementation directly in the repository interface, which Spring Data JPA will invoke as-is rather than deriving a query for it. You can also use the streams introduced in Java 8:
public interface MstStateRepository extends JpaRepository<MstState, Integer> {

    @Cacheable("getAllState")
    default Map<Integer, MstState> getAllState() {
        return findAll().stream()
            .collect(Collectors.toMap(
                MstState::getStateId,
                UnaryOperator.identity()
            ));
    }
}
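The collector pattern itself can be exercised without Spring or a database. A plain-Java sketch with a minimal stand-in for the MstState entity (the fields here are assumptions for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

public class ToMapSketch {

    // Minimal stand-in for the MstState entity.
    public static class MstState {
        private final Integer stateId;
        private final String name;
        public MstState(Integer stateId, String name) {
            this.stateId = stateId;
            this.name = name;
        }
        public Integer getStateId() { return stateId; }
        public String getName() { return name; }
    }

    // Same pattern as the repository default method: key the map by stateId.
    public static Map<Integer, MstState> toMap(List<MstState> states) {
        return states.stream()
            .collect(Collectors.toMap(
                MstState::getStateId,
                UnaryOperator.identity()
            ));
    }

    public static void main(String[] args) {
        Map<Integer, MstState> byId = toMap(List.of(
            new MstState(1, "Alpha"),
            new MstState(2, "Beta")
        ));
        System.out.println(byId.get(2).getName()); // prints "Beta"
    }
}
```

One caveat: Collectors.toMap(keyMapper, valueMapper) throws an IllegalStateException if two entities share the same stateId; the three-argument overload with a merge function avoids that.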
