I'm attempting to use both the @Cacheable and @PostFilter annotations in Spring. The desired behavior is that the application will cache the full, unfiltered list of Segments (it's a very small and very frequently referenced list, so performance is the goal), but that a User will only have access to certain Segments based on their roles.
I started out with both @Cacheable and @PostFilter on a single method, but when that wasn't working I broke them out into two separate classes so I could have one annotation on each method. However, it behaves the same either way: when User A hits the service for the first time they get their correctly filtered list, then when User B hits the service next they get NO results, because the cache is only storing User A's filtered results, and User B does not have access to any of them. (So the @PostFilter still runs, but the cache seems to be storing the filtered list, not the full list.)
So here's the relevant code:
configuration:
@Configuration
@EnableCaching
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class BcmsSecurityAutoConfiguration {

    @Bean
    public CacheManager cacheManager() {
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Arrays.asList(
                new ConcurrentMapCache("bcmsSegRoles"),
                new ConcurrentMapCache("bcmsSegments")
        ));
        return cacheManager;
    }
}
Service:
@Service
public class ScopeService {

    private final ScopeRepository scopeRepository;

    public ScopeService(final ScopeRepository scopeRepository) {
        this.scopeRepository = scopeRepository;
    }

    // Filters the list of segments based on User Roles. A User will have one role for each
    // Segment they have access to, and then it's just a simple equality check between the
    // role and the Segment model.
    @PostFilter(value = "@bcmsSecurityService.canAccessSegment( principal, filterObject )")
    public List<BusinessSegment> getSegments() {
        List<BusinessSegment> segments = scopeRepository.getSegments();
        return segments; // Debugging shows 4 results for User A (post-filtered to 1), and 1 result for User B (post-filtered to 0).
    }
}
Repository:
@Repository
public class ScopeRepository {

    private final ScopeDao scopeDao; // This is a MyBatis interface.

    public ScopeRepository(final ScopeDao scopeDao) {
        this.scopeDao = scopeDao;
    }

    @Cacheable(value = "bcmsSegments")
    public List<BusinessSegment> getSegments() {
        List<BusinessSegment> segments = scopeDao.getSegments(); // Simple SELECT * FROM TABLE; works as expected.
        return segments; // Shows 4 results for User A; breakpoint not hit for User B because the cache takes over.
    }
}
Does anyone know why the cache seems to be storing the result of the Service method after the filter runs, rather than storing the full result set at the Repository level as I expect? Or is there another way to achieve my desired behavior?
Bonus points if you know how I could gracefully achieve both caching and filtering on the same method in the Service. I only built the superfluous Repository because I thought splitting the methods would resolve the caching problem.
It turns out that the contents of Spring caches are mutable, and the @PostFilter annotation modifies the returned list in place; it does not filter into a new one.
So when @PostFilter ran after my Service method call above it was actually removing items from the list stored in the cache, so the second request only had one result to start with, and the third would have had zero.
My solution was to modify the Service to return new ArrayList<>(scopeRepo.getSegments()); so that @PostFilter wasn't changing the cached list.
(NOTE: that's not a deep clone, of course, so if someone modified a Segment model upstream from the Service it would likely change the model in the cache as well. So this may not be the best solution, but it works for my personal use case.)
I can't believe Spring Caches are mutable...
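The behaviour is easy to reproduce without Spring at all: a ConcurrentMapCache hands out the very list object it stores, so filtering that list in place mutates the cache for everyone. Here is a minimal plain-Java sketch of the problem and the defensive-copy fix (the class and method names are illustrative stand-ins, not the original project's code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class DefensiveCopyDemo {

    // Stand-in for the Spring cache: a ConcurrentMapCache is backed by a map much like this.
    private static final ConcurrentHashMap<String, List<String>> cache = new ConcurrentHashMap<>();

    // Simulates the @Cacheable repository method: returns the list object stored in the cache.
    public static List<String> getSegments() {
        return cache.computeIfAbsent("bcmsSegments",
                k -> new ArrayList<>(List.of("A", "B", "C", "D")));
    }

    // Simulates the fixed service method: the copy shields the cached list from @PostFilter.
    public static List<String> getSegmentsCopy() {
        return new ArrayList<>(getSegments());
    }

    public static void main(String[] args) {
        List<String> forUserA = getSegmentsCopy();
        forUserA.removeIf(s -> !s.equals("A")); // what @PostFilter effectively does, in place
        // The cached list is untouched, so the next caller still starts from all four entries.
        System.out.println(getSegments().size()); // prints 4
    }
}
```

Without the new ArrayList<>(...) wrapper, removeIf would shrink the cached list itself, which is exactly the one-result-then-zero symptom described above.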
I'm relatively new to WebFlux and am trying to zip two objects together in order to return a tuple, which is then used to build an object in the following step.
I'm doing it like so:
// I've shortened my code quite a bit; I'm including only the pieces
// that are involved in my problem "Flux.zip" call.

// This is a repository used in my "problem" code. It is simply an
// interface which extends ReactiveCrudRepository from Spring Data.
MyRepository repo;

// Wiring in my repository...
public MyClass(MyRepository repo) {
    this.repo = repo;
}

// Below is later in my reactive chain.
// Starting halfway down the chain, we have a Flux of objA
// (flatMapMany returning Flux<ObjectA>).

// Problem code below...
// Some context: I am zipping ObjectA with a reference to an object
// I am saving. The object being saved is from earlier, stored in an
// AtomicReference<ObjectB>.
.flatMap(obj ->
    Flux.zip(Mono.just(obj), repo.save(atomicReferenceFromEarlier.get()))
        // Below, when calling "getId()" it logs the SAME ID FOR EACH OBJECT,
        // even though I want it to return EACH OBJECT'S ID THAT WAS SAVED.
        .map(myTuple2 -> log("I've saved this object {} ::", myTuple2.getT2().getId())))
// Further processing....
So, my ultimate issue is that the "second" parameter I'm zipping, the repo.save(someAtomicReferencedObject.get()), is the same for every "zipped" tuple.
In the following step, I'm logging something like "I'm now building object", just to see what object I've returned for each event, but the "second" object in my tuple is always the same.
How can I zip and ensure that the "save" call to the repo returns a unique object for each event in the flux?
However, when I check the database, I really have saved unique entities for each event in my flux. So the save is happening as expected, but when the repo returns a Mono, it's the same one for each tuple returned.
Please let me know if anything is unclear. Thank you in advance for any and all help.
I am currently working on a project to manage a reservation system.
There is a new requirement: to be able to keep track of all booking status changes.
I hope this does not affect the existing logic and exists as an independent module.
At first I thought of AOP, but there are some problems.
This requirement should record what data was changed by what action.
I thought I could extract the changed data by applying AOP to the save method of the repository.
However, this is not enough, because many different actions update data.
For example, a reservation is updated by using the save method in the repository, but this method is used in various actions such as check-in, check-out, etc.
Therefore, the difference in data can be obtained, but it is not possible to tell which action updated the data.
@Service
public class BookingService {

    @Autowired
    private BookingRepository bookingRepository;

    public Booking create(Booking booking) {
        return bookingRepository.save(booking);
    }

    public void update(Booking booking) {
        Booking oldBooking = bookingRepository.findById(booking.getId())
                .orElseThrow(() -> new RuntimeException("Entity not found"));
        oldBooking.update(booking);
        bookingRepository.save(oldBooking);
    }

    public void checkIn(long id) {
        Booking booking = bookingRepository.findById(id)
                .orElseThrow(() -> new RuntimeException("Entity not found"));
        booking.setStatus(Booking.Status.CheckIn);
        bookingRepository.save(booking);
    }
}
And since I use AOP, I don't want to force the parameters or return values of the existing logic into a particular form.
While contemplating how to solve this, I wondered about using the approach Mockito takes.
With Mockito, we can know when a method is executed within another method.
Wouldn't it be possible to create a method like this, for example?
@Aspect
public class BookingHistory {

    @Autowired
    private BookingRepository bookingRepository;

    @Around("execution(* *Service.update(..))")
    public void update(ProceedingJoinPoint proceedingJoinPoint) {
        long id = getBookingId(proceedingJoinPoint);
        Booking origin = getBooking(id);
        final DiffData diffData;
        // Pseudocode: the Mockito-style hook I wish I could write
        when(bookingRepository::save).thenReturn(result -> diffData = diff(origin, result));
        saveHistory("UPDATE", "Booking", diffData);
    }
}
But I have no idea how to implement "when", "thenReturn", etc. the way Mockito does.
Could I get some hints on implementing something like Mockito's mechanism?
And if not this way, is there another good way?
Mockito is a testing framework and should only be used for unit testing. If you want to keep track of which method changes the data using Spring AOP, you can use custom annotations. With a custom annotation, you can pass a value which identifies the action and do whatever you want with it, e.g. log it, or publish it to MQ for analytics. Try the following article on creating custom annotations and this one on how to get the method caller's information.
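As a rough sketch of the annotation half of that advice (the names TrackAction and actionOf are invented for illustration): in a real aspect you would read the annotation off the intercepted method through the JoinPoint's MethodSignature, but the lookup itself is plain reflection:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical marker annotation carrying the action name for the history record.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface TrackAction {
    String value();
}

public class ActionTrackingDemo {

    public static class BookingService {
        @TrackAction("CHECK_IN")
        public void checkIn(long id) { /* ... */ }

        @TrackAction("UPDATE")
        public void update(long id) { /* ... */ }
    }

    // What the aspect would do with the intercepted method: read the annotation's
    // value and use it as the action label when saving the history entry.
    public static String actionOf(Method method) {
        TrackAction track = method.getAnnotation(TrackAction.class);
        return track == null ? "UNKNOWN" : track.value();
    }

    public static void main(String[] args) throws Exception {
        Method checkIn = BookingService.class.getMethod("checkIn", long.class);
        System.out.println(actionOf(checkIn)); // prints CHECK_IN
    }
}
```

The aspect would then record both the diff and this action label, so the history entry knows whether the save came from a check-in, a check-out, or a plain update.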
Is there a way to populate a Map once from the DB (through a Mongo repository) and reuse it when required from multiple classes, instead of hitting the database through the repository each time?
As per your comment, what you are looking for is a caching mechanism. Caches are components which keep data in memory, as opposed to files, databases or other mediums, so as to allow fast retrieval of information (at the cost of a higher memory footprint).
There are various tutorials online, but caches usually share the following behaviour:
1. They are key-value pair structures.
2. Each entity living in the cache also has a Time To Live (TTL), that is, how long it will be considered valid.
You can implement this in the repository layer, so the cache mechanism will be transparent to the rest of your application (but you might want to consider exposing functionality that allows you to clear/invalidate part or all of the cache).
So basically, when a query comes to your repository layer, check the cache. If the key exists in there, check the time to live. If it is still valid, return that.
If the key does not exist or the TTL has expired, you add/overwrite the data in the cache. Keep in mind that when you update the data model yourself, you should also invalidate the cache accordingly so that new/fresh data will be pulled from the DB on the next call.
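As a concrete illustration of that read-through/TTL behaviour, here is a minimal plain-Java sketch (all names such as TtlCache and loader are made up; in a Spring project you would more likely lean on @Cacheable or a library such as Caffeine rather than roll your own):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal cache-aside cache with a per-entry Time To Live.
public class TtlCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAt; // epoch millis after which the entry is stale
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Cache-aside read: return the cached value if still valid, otherwise load and store.
    public V get(K key, Supplier<V> loader) {
        Entry<V> e = store.get(key);
        if (e == null || System.currentTimeMillis() >= e.expiresAt) {
            V fresh = loader.get(); // e.g. the actual Mongo repository call
            store.put(key, new Entry<>(fresh, System.currentTimeMillis() + ttlMillis));
            return fresh;
        }
        return e.value;
    }

    // Invalidate after writes so the next read pulls fresh data from the DB.
    public void invalidate(K key) { store.remove(key); }
}
```

A repository method would then read with something like cache.get("users", repo::findAll) and call cache.invalidate("users") after any write, matching the invalidation rule described above.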
You can declare the map field as public static, which would allow application-wide access via ClassLoadingData.mapField.
I think a better solution, if I understood the problem, would be a memoized function, that is, a function storing the value of its call. Here is a sketch of how this could be done (note this does not handle possible synchronization problems in a multi-threaded environment):
class ClassLoadingData {

    private static Map<KeyType, ValueType> memoizedData = new HashMap<>();

    public Map<KeyType, ValueType> getMyData() {
        if (memoizedData.isEmpty()) { // you can use a more complex check to handle data refresh
            populateData();
        }
        return memoizedData;
    }

    private void populateData() {
        // do your query, and assign the result to memoizedData
    }
}
Premise: I suggest you use an object-relational mapping tool like Hibernate in your Java project to map the object-oriented domain model to a relational database and let the tool handle the cache mechanism implicitly. Hibernate specifically implements a multi-level caching scheme (take a look at the following link for more information: https://www.tutorialspoint.com/hibernate/hibernate_caching.htm).
Regardless of my suggestion in the premise, you can also manually create a singleton class that will be used by every class in the project that interacts with the DB:
public class MongoDBConnector {

    private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBConnector.class);

    private static MongoDBConnector instance;

    // Cache period in seconds
    public static int DB_ELEMENTS_CACHE_PERIOD = 30;

    // Latest cache update time
    private DateTime latestUpdateTime;

    // The cached data from the DB
    private Map<KType, VType> elements;

    private MongoDBConnector() {
    }

    public static synchronized MongoDBConnector getInstance() {
        if (instance == null) {
            instance = new MongoDBConnector();
        }
        return instance;
    }
}
Here you can then define a load method that updates the map with values stored in the DB, and also a write method that writes values to the DB, with the following characteristics:
1- These methods should be synchronized in order to avoid issues if multiple calls are performed.
2- The load method should apply a cache-period logic (maybe with a configurable period) to avoid loading the data from the DB on every call.
Example: Suppose your cache period is 30s. If 10 reads are performed from different points of the code within 30s, you will load data from the DB only on the first call, while the others will read from the cached map, improving performance.
Note: The greater the cache period, the better the performance of your code, but if the DB is shared you'll create inconsistency with the cache if an insertion is performed externally (from another tool or manually). So choose the best value for your case.
public synchronized Map<KType, VType> getElements() throws ConnectorException {
    final DateTime currentTime = new DateTime();
    if (latestUpdateTime == null || (Seconds.secondsBetween(latestUpdateTime, currentTime).getSeconds() > DB_ELEMENTS_CACHE_PERIOD)) {
        LOGGER.debug("Cache is expired. Reading values from DB");
        //Read from DB and update cache
        //....
        latestUpdateTime = currentTime;
    }
    return elements;
}
3- The store method should automatically update the cache if the insert is performed correctly, regardless of whether the cache period has expired:
public synchronized void storeElement(final VType object) throws ConnectorException {
    //Insert object into DB ( throws a ConnectorException if insert fails )
    //...
    //Update cache regardless of the cache period
    loadElementsIgnoreCachePeriod();
}
Then you can get elements from every point in your code as follows:
Map<KType, VType> liveElements = MongoDBConnector.getInstance().getElements();
I have a List I need to return as PagedResources in a Spring HATEOAS powered REST API. I have tried this:
List<User> users = someUserGenerationMethod();
PageImpl<User> page = new PageImpl<User>(users);//users size is greater than 1
return parAssembler.toResource(page, userResourceAssembler);
having:
#Autowired
private PagedResourcesAssembler<User> parAssembler;
and userResourceAssembler is an instance of:
public class UserResourceAssembler extends ResourceAssemblerSupport<User, UserResource> {...}
and:
public class UserResource extends ResourceSupport{...}
but it results in java.lang.IllegalArgumentException: Page size must not be less than one!
How could I achieve that?
The problem was how I was instantiating PageImpl. I'm not sure why, but using a different constructor:
Page<User> page = new PageImpl<User>(users, new PageRequest(0, DEFAULT_USER_PAGE_SIZE), 1);
solved the problem. Does anybody know why? Bug or bad use?
There are two ways to achieve it.
Either re-query the database with repo.findAll() to get the list of users from the database (this is useful in case there is some DB processing involved, like time-stamping or seed generation for IDs).
Or, if no DB processing is involved, I would return generatedUsers instead of savedUsers. (In this case make sure that repo.saveAll(generatedUsers) executed successfully and without errors.)
Example from SpringSource:
@Cacheable(value = "vets")
public Collection<Vet> findVets() throws DataAccessException {
    return vetRepository.findAll();
}
How does findVets() work exactly?
The first time, it takes the data from vetRepository and saves the result in the cache. But what happens if a new vet is inserted in the database: does the cache update (out-of-the-box behavior)? If not, can we configure it to update?
EDIT:
But what happens if the DB is updated from an external source (e.g. an application which uses the same DB) ?
@CachePut("vets")
public void save(Vet vet) {..}
You have to tell the cache that an object is stale. If data changes without going through your service methods then, of course, you will have a problem. You can, however, clear the whole cache with:
@CacheEvict(value = "vets", allEntries = true)
public void clearCache() {..}
It depends on the caching provider, though. If another app updates the database without notifying your app, but it uses the same cache, then the other app would probably update the cache too.
It would not happen automatically, and there is no way for the cache to know if the data has been externally introduced.
Check @CacheEvict, which will help you invalidate the cache entry in case of any change to the underlying collections.
@CacheEvict(value = "vet", allEntries = true)
public void saveVet() {
    // Intentionally blank
}
allEntries
Whether or not all the entries inside the cache(s) are removed.
By default, only the value under the associated key is removed. Note that setting this parameter to true while also specifying a key is not allowed.
You can also use @CachePut on the method which creates the new entry. The return type has to be the same as in your @Cacheable method.
@CachePut(value = "vets")
public Collection<Vet> updateVets() throws DataAccessException {
    return vetRepository.findAll();
}
In my opinion an external service has to call the same methods.