Setting up a Hazelcast cache for multi-tenancy (Java)

I am currently using the JHipster generator for boilerplate code, which uses Hazelcast as a second-level cache. I was able to get multi-tenancy (schema per tenant) working with a header-based tenant context. The problem I have now is that the @Cacheable annotations all share a context. If the cache is hot, I end up with cross-schema data. For example, tenant 1 pulls all records from their table, which is cached. Tenant 2 then pulls the same items from their table, the cache is read, and the request never goes to the actual tenant DB. An easy fix would be to disable caching altogether, but I would like not to do that. I cannot for the life of me figure out how to make Hazelcast aware of the tenant context; the documentation is lacking. Some others have solved this with custom name resolvers, but that doesn't appear to be as dynamic as I was hoping (i.e., you have to know all of the tenants ahead of time). Thoughts?
Current cache config:
@Configuration
@EnableCaching
public class CacheConfiguration implements DisposableBean {

    private final Logger log = LoggerFactory.getLogger(CacheConfiguration.class);

    private final Environment env;
    private final ServerProperties serverProperties;
    private final DiscoveryClient discoveryClient;
    private Registration registration;

    public CacheConfiguration(Environment env, ServerProperties serverProperties, DiscoveryClient discoveryClient) {
        this.env = env;
        this.serverProperties = serverProperties;
        this.discoveryClient = discoveryClient;
    }

    @Autowired(required = false)
    public void setRegistration(Registration registration) {
        this.registration = registration;
    }

    @Override
    public void destroy() throws Exception {
        log.info("Closing Cache Manager");
        Hazelcast.shutdownAll();
    }

    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        log.debug("Starting HazelcastCacheManager");
        return new com.hazelcast.spring.cache.HazelcastCacheManager(hazelcastInstance);
    }

    @Bean
    public HazelcastInstance hazelcastInstance(JHipsterProperties jHipsterProperties) {
        log.debug("Configuring Hazelcast");
        HazelcastInstance hazelCastInstance = Hazelcast.getHazelcastInstanceByName("SampleApp");
        if (hazelCastInstance != null) {
            log.debug("Hazelcast already initialized");
            return hazelCastInstance;
        }
        Config config = new Config();
        config.setInstanceName("SampleApp");
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        if (this.registration == null) {
            log.warn("No discovery service is set up, Hazelcast cannot create a cluster.");
        } else {
            // The serviceId is by default the application's name,
            // see the "spring.application.name" standard Spring property
            String serviceId = registration.getServiceId();
            log.debug("Configuring Hazelcast clustering for instanceId: {}", serviceId);
            // In development, everything goes through 127.0.0.1, with a different port
            if (env.acceptsProfiles(Profiles.of(JHipsterConstants.SPRING_PROFILE_DEVELOPMENT))) {
                log.debug("Application is running with the \"dev\" profile, Hazelcast " +
                    "cluster will only work with localhost instances");
                System.setProperty("hazelcast.local.localAddress", "127.0.0.1");
                config.getNetworkConfig().setPort(serverProperties.getPort() + 5701);
                config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
                for (ServiceInstance instance : discoveryClient.getInstances(serviceId)) {
                    String clusterMember = "127.0.0.1:" + (instance.getPort() + 5701);
                    log.debug("Adding Hazelcast (dev) cluster member {}", clusterMember);
                    config.getNetworkConfig().getJoin().getTcpIpConfig().addMember(clusterMember);
                }
            } else { // Production configuration, one host per instance all using port 5701
                config.getNetworkConfig().setPort(5701);
                config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
                for (ServiceInstance instance : discoveryClient.getInstances(serviceId)) {
                    String clusterMember = instance.getHost() + ":5701";
                    log.debug("Adding Hazelcast (prod) cluster member {}", clusterMember);
                    config.getNetworkConfig().getJoin().getTcpIpConfig().addMember(clusterMember);
                }
            }
        }
        config.getMapConfigs().put("default", initializeDefaultMapConfig(jHipsterProperties));
        // Full reference is available at: http://docs.hazelcast.org/docs/management-center/3.9/manual/html/Deploying_and_Starting.html
        config.setManagementCenterConfig(initializeDefaultManagementCenterConfig(jHipsterProperties));
        config.getMapConfigs().put("com.test.sampleapp.domain.*", initializeDomainMapConfig(jHipsterProperties));
        return Hazelcast.newHazelcastInstance(config);
    }

    private ManagementCenterConfig initializeDefaultManagementCenterConfig(JHipsterProperties jHipsterProperties) {
        ManagementCenterConfig managementCenterConfig = new ManagementCenterConfig();
        managementCenterConfig.setEnabled(jHipsterProperties.getCache().getHazelcast().getManagementCenter().isEnabled());
        managementCenterConfig.setUrl(jHipsterProperties.getCache().getHazelcast().getManagementCenter().getUrl());
        managementCenterConfig.setUpdateInterval(jHipsterProperties.getCache().getHazelcast().getManagementCenter().getUpdateInterval());
        return managementCenterConfig;
    }

    private MapConfig initializeDefaultMapConfig(JHipsterProperties jHipsterProperties) {
        MapConfig mapConfig = new MapConfig();
        /*
        Number of backups. If 1 is set as the backup-count for example,
        then all entries of the map will be copied to another JVM for
        fail-safety. Valid numbers are 0 (no backup), 1, 2, 3.
        */
        mapConfig.setBackupCount(jHipsterProperties.getCache().getHazelcast().getBackupCount());
        /*
        Valid values are:
        NONE (no eviction),
        LRU (Least Recently Used),
        LFU (Least Frequently Used).
        NONE is the default.
        */
        mapConfig.setEvictionPolicy(EvictionPolicy.LRU);
        /*
        Maximum size of the map. When max size is reached,
        the map is evicted based on the policy defined.
        Any integer between 0 and Integer.MAX_VALUE. 0 means
        Integer.MAX_VALUE. Default is 0.
        */
        mapConfig.setMaxSizeConfig(new MaxSizeConfig(0, MaxSizeConfig.MaxSizePolicy.USED_HEAP_SIZE));
        return mapConfig;
    }

    private MapConfig initializeDomainMapConfig(JHipsterProperties jHipsterProperties) {
        MapConfig mapConfig = new MapConfig();
        mapConfig.setTimeToLiveSeconds(jHipsterProperties.getCache().getHazelcast().getTimeToLiveSeconds());
        return mapConfig;
    }
}
Sample Repository using cacheNames...
@Repository
public interface UserRepository extends JpaRepository<User, Long> {

    String USERS_BY_LOGIN_CACHE = "usersByLogin";
    String USERS_BY_EMAIL_CACHE = "usersByEmail";
    String USERS_BY_ID_CACHE = "usersById";

    Optional<User> findOneByActivationKey(String activationKey);

    List<User> findAllByActivatedIsFalseAndActivationKeyIsNotNullAndCreatedDateBefore(Instant dateTime);

    Optional<User> findOneByResetKey(String resetKey);

    Optional<User> findOneByEmailIgnoreCase(String email);

    Optional<User> findOneByLogin(String login);

    @EntityGraph(attributePaths = "roles")
    @Cacheable(cacheNames = USERS_BY_ID_CACHE)
    Optional<User> findOneWithRolesById(Long id);

    @EntityGraph(attributePaths = "roles")
    @Cacheable(cacheNames = USERS_BY_LOGIN_CACHE)
    Optional<User> findOneWithRolesByLogin(String login);

    @EntityGraph(attributePaths = { "roles", "roles.permissions" })
    @Cacheable(cacheNames = USERS_BY_LOGIN_CACHE)
    Optional<User> findOneWithRolesAndPermissionsByLogin(String login);

    @EntityGraph(attributePaths = "roles")
    @Cacheable(cacheNames = USERS_BY_EMAIL_CACHE)
    Optional<User> findOneWithRolesByEmail(String email);

    Page<User> findAllByLoginNot(Pageable pageable, String login);
}

I am using database-per-tenant (MySQL), but as long as you set a thread-local tenant context as described above, the following works; I'm using Spring Boot. I created a custom cache key generator which combines the tenant name, the class, and the method. You can really choose any combination. Whenever I pass that tenant back, it pulls the correct entries. In the Hazelcast Management Center, for my AppointmentType map I see the number of entries increment per tenant.
Some other references that may be helpful:
https://www.javadevjournal.com/spring/spring-cache-custom-keygenerator/
https://docs.spring.io/spring-framework/docs/4.3.x/spring-framework-reference/html/cache.html (search for keyGenerator="myKeyGenerator")
In your class where you want to cache (mine is a service class):
@Service
public class AppointmentTypeService {

    private static final Logger LOGGER = LoggerFactory.getLogger(AppointmentTypeService.class);

    private final AppointmentTypeRepository appointmentTypeRepository;

    @Autowired
    AppointmentTypeService(AppointmentTypeRepository appointmentTypeRepository) {
        this.appointmentTypeRepository = appointmentTypeRepository;
    }

    // Add the keyGenerator attribute; its value is the bean name of the key generator class
    @Cacheable(value = "appointmentType", keyGenerator = "multiTenantCacheKeyGenerator")
    public List<AppointmentType> list() {
        return this.appointmentTypeRepository.findAll();
    }

    @CacheEvict(value = "appointmentType", allEntries = true)
    public Long create(AppointmentType request) {
        this.appointmentTypeRepository.saveAndFlush(request);
        return request.getAppointmentTypeId();
    }

    @CacheEvict(value = "appointmentType", allEntries = true)
    public void delete(Long id) {
        this.appointmentTypeRepository.deleteById(id);
    }

    public Optional<AppointmentType> findById(Long id) {
        return this.appointmentTypeRepository.findById(id);
    }
}
Create the key generator class:
// Setting the bean name here
@Component("multiTenantCacheKeyGenerator")
public class MultiTenantCacheKeyGenerator implements KeyGenerator {

    @Override
    public Object generate(Object o, Method method, Object... os) {
        StringBuilder sb = new StringBuilder();
        sb.append(TenantContext.getCurrentTenantInstanceName()) // my tenant context class, backed by a ThreadLocal; the value is set in a Spring filter
          .append("_")
          .append(o.getClass().getSimpleName())
          .append("-")
          .append(method.getName());
        return sb.toString();
    }
}
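To make the resulting key shape concrete, here is a plain-Java sketch (no Spring required) of what the generator above produces; the tenant names and the sample service class are illustrative, not from the original post:

```java
import java.lang.reflect.Method;

// Stand-alone illustration of the key format produced by the generator
// above: tenant + "_" + simple class name + "-" + method name.
public class KeyFormatDemo {

    static String buildKey(String tenant, Object target, Method method) {
        return tenant + "_" + target.getClass().getSimpleName() + "-" + method.getName();
    }

    // Illustrative stand-in for the cached service class
    public static class AppointmentTypeService {
        public void list() {
        }
    }

    public static void main(String[] args) throws Exception {
        Object service = new AppointmentTypeService();
        Method listMethod = AppointmentTypeService.class.getMethod("list");
        // Different tenants yield different keys for the same class/method,
        // so cached entries no longer collide across tenants.
        System.out.println(buildKey("tenant1", service, listMethod)); // tenant1_AppointmentTypeService-list
        System.out.println(buildKey("tenant2", service, listMethod)); // tenant2_AppointmentTypeService-list
    }
}
```

Because the tenant is the first key component, two tenants calling the same method get distinct cache entries in the same map.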

One approach to defining different cache keys per tenant is to override the getCache method of org.springframework.cache.CacheManager, as suggested here: Extended spring cache...
As of JHipster 7.0.1, the CacheManager for Hazelcast is defined in the CacheConfiguration class as shown below:
@Configuration
@EnableCaching
public class CacheConfiguration {
    // ...

    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        return new com.hazelcast.spring.cache.HazelcastCacheManager(hazelcastInstance);
    }

    // ...
}
To have the cache keys prefixed with the tenant id, the following code may be used as a starting point:
@Configuration
@EnableCaching
public class CacheConfiguration {

    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        return new com.hazelcast.spring.cache.HazelcastCacheManager(hazelcastInstance) {
            @Override
            public Cache getCache(String name) {
                String tenantId = TenantStorage.getTenantId();
                if (StringUtils.isNotBlank(tenantId)) {
                    return super.getCache(String.format("%s:%s", tenantId, name));
                }
                return super.getCache(name);
            }
        };
    }
}
Note: in the above code, TenantStorage.getTenantId() is a static function one should implement that returns the current tenant id.
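Since the note above leaves TenantStorage unimplemented, here is a hypothetical minimal sketch; a ThreadLocal is the usual backing store, populated per request by a servlet filter (the setter and clear method names are assumptions):

```java
// Hypothetical sketch of the TenantStorage class referenced above.
// A servlet filter would call setTenantId() when a request arrives
// and clear() when the request completes.
public final class TenantStorage {

    private static final ThreadLocal<String> TENANT_ID = new ThreadLocal<>();

    private TenantStorage() {
    }

    public static void setTenantId(String tenantId) {
        TENANT_ID.set(tenantId);
    }

    public static String getTenantId() {
        return TENANT_ID.get();
    }

    // Clear at the end of each request so pooled threads do not leak tenant ids
    public static void clear() {
        TENANT_ID.remove();
    }
}
```

Clearing the ThreadLocal after each request matters: servlet containers reuse threads, and a stale tenant id would route one tenant's reads to another tenant's cache.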
Consider the class posted by the OP:
@Cacheable(cacheNames = "usersByLogin")
Optional<User> findOneWithRolesByLogin(String login);
The following cache names will then be used by Hazelcast:
tenant1 => tenant1:usersByLogin
tenant2 => tenant2:usersByLogin
null => usersByLogin

Related

(FIXED) Multi-tenant application: Can't set the desired dataSource (Separated Schema, Shared Database)

I have an application where I want to use different DataSources. All the requests coming from the front end use the primary DataSource (this works so far), but I also have to perform operations every certain number of minutes on another database with different schemas.
Looking around, I found this approach:
Application.yml
spring:
  datasource:
    primary:
      url: jdbc:mysql://SERVER_IP:3306/DATABASE?useSSL=false&useUnicode=true&useLegacyDatetimeCode=false&serverTimezone=UTC
      username: MyUser
      password: MyPassword
      driver-class-name: com.mysql.cj.jdbc.Driver
    secondary:
      url: jdbc:mysql://localhost:3306/
      urlEnd: ?useSSL=false&useUnicode=true&useLegacyDatetimeCode=false&serverTimezone=UTC
      username: root
      password: root
      driver-class-name: com.mysql.cj.jdbc.Driver
Here I separate "url" and "urlEnd" because the name of the schema to use will be inserted between them in each case, as shown later.
ContextHolder
public abstract class ContextHolder {

    private static final Logger logger = LoggerFactory.getLogger(ContextHolder.class);

    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();

    public static void setClient(String context) {
        contextHolder.set(context);
    }

    public static String getClient() {
        return contextHolder.get();
    }
}
CustomRoutingDataSource
@Component
public class CustomRoutingDataSource extends AbstractRoutingDataSource {

    private org.slf4j.Logger logger = LoggerFactory.getLogger(CustomRoutingDataSource.class);

    @Autowired
    DataSourceMap dataSources;

    @Autowired
    private Environment env;

    public void setCurrentLookupKey() {
        determineCurrentLookupKey();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        String key = ContextHolder.getClient();
        if (key == null || key.equals("primary")) {
            DriverManagerDataSource ds = new DriverManagerDataSource();
            ds.setDriverClassName(env.getProperty("spring.datasource.primary.driver-class-name"));
            ds.setPassword(env.getProperty("spring.datasource.primary.password"));
            ds.setUsername(env.getProperty("spring.datasource.primary.username"));
            ds.setUrl(env.getProperty("spring.datasource.primary.url"));
            dataSources.addDataSource("primary", ds);
            setDataSources(dataSources);
            afterPropertiesSet();
            return "primary";
        } else {
            DriverManagerDataSource ds = new DriverManagerDataSource();
            ds.setDriverClassName(env.getProperty("spring.datasource.secondary.driver-class-name"));
            ds.setPassword(env.getProperty("spring.datasource.secondary.password"));
            ds.setUsername(env.getProperty("spring.datasource.secondary.username"));
            ds.setUrl(env.getProperty("spring.datasource.secondary.url") + key + env.getProperty("spring.datasource.secondary.urlEnd"));
            dataSources.addDataSource(key, ds);
            setDataSources(dataSources);
            afterPropertiesSet();
        }
        return key;
    }

    @Autowired
    public void setDataSources(DataSourceMap dataSources) {
        setTargetDataSources(dataSources.getDataSourceMap());
    }
}
DatabaseSwitchInterceptor (Not used so far AFAIK)
@Component
public class DatabaseSwitchInterceptor implements HandlerInterceptor {

    private static final Logger logger = LoggerFactory.getLogger(DatabaseSwitchInterceptor.class);

    @Autowired
    private CustomRoutingDataSource customRoutingDataSource;

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        String hostname = request.getServerName();
        ContextHolder.setClient(hostname);
        return true;
    }
}
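A HandlerInterceptor only runs if it is registered with Spring MVC, which would explain the "Not used so far" remark above. A hypothetical registration sketch (the WebConfig class name is an assumption; addInterceptors is the standard WebMvcConfigurer hook):

```java
// Hypothetical registration sketch: without this, the interceptor above is
// never invoked and ContextHolder is never populated by it.
@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Autowired
    private DatabaseSwitchInterceptor databaseSwitchInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(databaseSwitchInterceptor);
    }
}
```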
DataSourceMap
@Component
public class DataSourceMap {

    private static final Logger logger = LoggerFactory.getLogger(DataSourceMap.class);

    private Map<Object, Object> dataSourceMap = new ConcurrentHashMap<>();

    public void addDataSource(String session, DataSource dataSource) {
        this.dataSourceMap.put(session, dataSource);
    }

    public Map<Object, Object> getDataSourceMap() {
        return dataSourceMap;
    }
}
And last but not least, the controller where I am doing my test
@RestController
@RequestMapping("/open/company")
public class CompanyOpenController extends GenericCoreController<Company, Integer> {

    @Autowired
    private CompanyService companyService;

    @Autowired
    private CompltpvRepository compltpvRepository;

    @Autowired
    private CustomRoutingDataSource customRoutingDataSource;

    @GetMapping("/prueba/{companyId}")
    public List<CompltpvDTO> getAll(@PathVariable Integer companyId) throws ServiceException {
        List<CompltpvDTO> response = new ArrayList<>();
        ContextHolder.setClient(companyService.getById(companyId).getSchema());
        for (Compltpv e : compltpvRepository.findAll()) {
            response.add(new CompltpvDTO(e));
        }
        return response;
    }
}
What I want all this to do is that, when I call "/open/company/prueba/3", it searches (in the main database) for the company with ID = 3. It then retrieves that company's "schema" attribute value (let's say it's "12345678") and switches to the secondary datasource with the following URL:
url = env.getProperty("spring.datasource.secondary.url") + key + env.getProperty("spring.datasource.secondary.urlEnd")
which is something like:
jdbc:mysql://localhost:3306/12345678?useSSL=false&useUnicode=true&useLegacyDatetimeCode=false&serverTimezone=UTC
When I try this and look into the DataSource pool, both exist with keys "primary" and "12345678", but it's always using the "primary" one.
How can I tell Spring to use the DataSource I need it to use?
EDIT: Found the solution
I finally got a deeper understanding of what was happening and also found the problem.
In case someone is interested in this approach, the problem I was having was this line in my application.yml:
spring:
  jpa:
    open-in-view: true
which does the following:
Default: true
Registers OpenEntityManagerInViewInterceptor. Binds a JPA EntityManager to the thread for the entire processing of the request.
That was the reason why, despite creating the datasource for every company (tenant), it wasn't being used. So if you are reading this and are in my situation, find that line and set it to false. If you don't find that property, note that it defaults to true.
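In other words, assuming the standard Spring Boot property shown above, the fix is the one-line change:

```yaml
spring:
  jpa:
    open-in-view: false
```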

How to create instrumentation to add a constraint on the number of queries executed in a single API call?

I'm using spring-graphql 1.0.1 and am looking to add a custom instrumentation that rejects a request if the user tries to execute more than n queries. This helps protect the GraphQL server by serving only a limited number of queries per request.
MaxQueryComplexityInstrumentation and MaxQueryDepthInstrumentation are available, and I have configured them as follows. Is there any way to create a custom instrumentation for a maximum query count?
@Bean
@ConditionalOnMissingBean
@ConditionalOnProperty(prefix = "spring.graphql.instrumentation", name = "max-query-complexity")
public MaxQueryComplexityInstrumentation maxQueryComplexityInstrumentation(@Value("${spring.graphql.instrumentation.max-query-complexity}") int maxComplexity) {
    return new MaxQueryComplexityInstrumentation(maxComplexity);
}

@Bean
@ConditionalOnMissingBean
@ConditionalOnProperty(prefix = "spring.graphql.instrumentation", name = "max-query-depth")
public MaxQueryDepthInstrumentation maxQueryDepthInstrumentation(@Value("${spring.graphql.instrumentation.max-query-depth}") int maxDepth) {
    return new MaxQueryDepthInstrumentation(maxDepth);
}
I found a solution by writing a custom instrumentation extending SimpleInstrumentation from graphql-java.
The instrumentation class:
public class MaxQueryCountInstrumentation extends SimpleInstrumentation {

    private final int maxQueryCount;

    public MaxQueryCountInstrumentation(int maxQueryCount) {
        this.maxQueryCount = maxQueryCount;
    }

    @Override
    public InstrumentationContext<ExecutionResult> beginExecuteOperation(InstrumentationExecuteOperationParameters parameters) {
        int size = ((OperationDefinition) parameters.getExecutionContext().getDocument().getDefinitions().get(0))
            .getSelectionSet().getSelections().size();
        if (maxQueryCount < size) {
            throw new AbortExecutionException("Maximum query count exceeded " + size + " > " + maxQueryCount);
        }
        return SimpleInstrumentationContext.noOp();
    }
}
Config as below:
@Bean
@ConditionalOnMissingBean
@ConditionalOnProperty(prefix = "spring.graphql.instrumentation", name = "max-query-count")
public MaxQueryCountInstrumentation maxQueryCountComplexity(@Value("${spring.graphql.instrumentation.max-query-count}") int maxQueryCount) {
    return new MaxQueryCountInstrumentation(maxQueryCount);
}

Spring Boot cache not caching method call based on dynamic controller parameter

I am attempting to use Spring Boot Cache with a Caffeine cacheManager.
I have injected a service class into a controller like this:
@RestController
@RequestMapping("property")
public class PropertyController {

    private final PropertyService propertyService;

    @Autowired
    public PropertyController(PropertyService propertyService) {
        this.propertyService = propertyService;
    }

    @PostMapping("get")
    public Property getPropertyByName(@RequestParam("name") String name) {
        return propertyService.get(name);
    }
}
and the PropertyService looks like this:
@CacheConfig(cacheNames = "property")
@Service
public class PropertyServiceImpl implements PropertyService {

    private final PropertyRepository propertyRepository;

    @Autowired
    public PropertyServiceImpl(PropertyRepository propertyRepository) {
        this.propertyRepository = propertyRepository;
    }

    @Override
    public Property get(@NonNull String name, @Nullable String entity, @Nullable Long entityId) {
        System.out.println("inside: " + name);
        return propertyRepository.findByNameAndEntityAndEntityId(name, entity, entityId);
    }

    @Cacheable
    @Override
    public Property get(@NonNull String name) {
        return get(name, null, null);
    }
}
Now, when I call the REST controller's get endpoint and supply a value for the name, every request ends up going inside the method that should be getting cached.
However, if I call the controller's get endpoint but pass a hardcoded String into the service class method, like this:
@PostMapping("get")
public Property getPropertyByName(@RequestParam("name") String name) {
    return propertyService.get("hardcoded");
}
Then the method is only invoked the first time, but not on subsequent calls.
What's going on here? Why is it not caching the method call when I supply a value dynamically?
Here is some configuration:
@Configuration
public class CacheConfiguration {

    @Bean
    public CacheManager cacheManager() {
        val caffeineCacheManager = new CaffeineCacheManager("property", "another");
        caffeineCacheManager.setCaffeine(caffeineCacheBuilder());
        return caffeineCacheManager;
    }

    public Caffeine<Object, Object> caffeineCacheBuilder() {
        return Caffeine.newBuilder()
            .initialCapacity(200)
            .maximumSize(500)
            .weakKeys()
            .recordStats();
    }
}
Two solutions (they work for me):
remove .weakKeys()
call propertyService.get(name.intern()) (I wouldn't really do that, though, as interning can carry a real cost)
The reason: with .weakKeys(), Caffeine compares keys by identity (==) instead of equals(). A request parameter arrives as a fresh String instance on every call, so it never matches the key already stored in the cache and the method is re-executed. The "hardcoded" literal, by contrast, is interned by the JVM and is always the same instance, so the identity comparison succeeds and the cache hits.
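The identity-vs-equality distinction can be shown with plain Java (no Caffeine dependency; this only demonstrates the String behavior that makes weakKeys() miss):

```java
// Demonstrates why Caffeine's weakKeys() breaks caching for dynamically
// supplied strings: weakKeys() makes the cache compare keys with ==, and
// only interned strings share an instance with the "hardcoded" literal.
public class WeakKeysDemo {
    public static void main(String[] args) {
        String literal = "hardcoded";             // interned: always the same instance
        String dynamic = new String("hardcoded"); // fresh instance, like a request parameter

        System.out.println(literal == "hardcoded");          // true  - same interned instance
        System.out.println(dynamic == "hardcoded");          // false - identity differs, cache would miss
        System.out.println(dynamic.equals("hardcoded"));     // true  - equals() would have matched
        System.out.println(dynamic.intern() == "hardcoded"); // true  - interning restores identity
    }
}
```

This is also why the name.intern() workaround functions: it forces every incoming name back to the single pooled instance.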

Custom hibernate second level cache name lookup

I'm using spring boot with second level cache for the entities, e.g.
@Entity
@Table(name = "customer")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Customer implements Serializable {
    ....
}
This is working as expected, but now I have to turn the application into a multi-tenant version. Each tenant has its own database; we use a ThreadLocal to store the current tenant and an AbstractRoutingDataSource for routing to the tenant's database. This works if the second-level cache is off.
It would be nice to get the second-level cache working as well. The problem seems to be the cache name, which is the FQCN of an entity. Since the cache is neither tenant- nor database-aware, each tenant uses the same cache.
The current tenant is resolved via a ThreadLocal and is simply accessible through
TenantContext.getCurrentTenant();
which returns the tenant name.
We use EhCache, backed by the Spring cache abstraction:
@Bean
public CacheManager cacheManager() {
    net.sf.ehcache.CacheManager cacheManager = net.sf.ehcache.CacheManager.create();
    EhCacheCacheManager ehCacheManager = new EhCacheCacheManager();
    ehCacheManager.setCacheManager(cacheManager);
    return ehCacheManager;
}
Is it possible to intercept the generation of the cache name, so that the current tenant name is used together with the FQCN and each insert/lookup/evict resolves this tenant-aware cache name?
In order to intercept the generation of the cache name, one approach is to override the getCache(String name) method of the EhCacheCacheManager as follows:
@Bean
public CacheManager cacheManager() {
    net.sf.ehcache.CacheManager cacheManager = net.sf.ehcache.CacheManager.create();
    EhCacheCacheManager ehCacheManager = new EhCacheCacheManager() {
        @Override
        public Cache getCache(String name) {
            String tenantId = TenantContext.getCurrentTenant();
            return super.getCache(String.format("%s:%s", tenantId, name));
        }
    };
    ehCacheManager.setCacheManager(cacheManager);
    return ehCacheManager;
}

My @Cacheable seems to be ignored (Spring)

I have to cache the result of the following public method:
@Cacheable(value = "tasks", key = "#user.username")
public Set<MyPojo> retrieveCurrentUserTailingTasks(UserInformation user) {
    Set<MyPojo> resultSet;
    try {
        resultSet = taskService.getTaskList(user);
    } catch (Exception e) {
        throw new ApiException("Error while retrieving tailing tasks", e);
    }
    return resultSet;
}
I also configured caching here:
@Configuration
@EnableCaching(mode = AdviceMode.PROXY)
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        final SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Arrays.asList(new ConcurrentMapCache("tasks"), new ConcurrentMapCache("templates")));
        return cacheManager;
    }

    @Bean
    public CacheResolver cacheResolver() {
        final SimpleCacheResolver cacheResolver = new SimpleCacheResolver(cacheManager());
        return cacheResolver;
    }
}
I assert the following:
- The cache is initialized and does exist within the Spring context
- I used jvisualvm to track the ConcurrentMapCache (2 instances); they are there in the heap but empty
- The method returns the same values per user.username
- I tried the same configuration in a spring-boot-based project and it worked
- The method is public and is inside a Spring controller
- The annotation @CacheConfig(cacheNames = "tasks") is added on top of my controller
- Spring version 4.1.3.RELEASE
- JDK 1.6
Update 001 :
@RequestMapping(value = "/{kinematicId}/status/{status}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
public DocumentNodeWrapper getDocumentsByKinematicByStatus(@PathVariable String kinematicId, @PathVariable String status, HttpServletRequest request) {
    UserInformation user = getUserInformation(request);
    Set<ParapheurNodeInformation> nodeInformationList = retrieveCurrentUserTailingTasks(user);
    final List<DocumentNodeVO> documentsList = getDocumentsByKinematic(kinematicId, user, nodeInformationList);
    List<DocumentNodeVO> onlyWithGivenStatus = filterByStatus(documentsList);
    return new DocumentNodeWrapper("filesModel", onlyWithGivenStatus, user, currentkinematic);
}
Thanks
Is the calling method getDocumentsByKinematicByStatus() in the same bean as the cacheable method? If so, this is normal behavior: you are not calling the cacheable method through the Spring proxy but directly (self-invocation), so the caching advice is never applied.
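A common fix for this self-invocation problem is to move the cacheable method into its own Spring bean, so the controller's call crosses the proxy boundary. A sketch using the code from the question (the TailingTaskService bean name is an assumption):

```java
// Hypothetical sketch: with the @Cacheable method in a separate bean, the
// controller invokes it through the Spring proxy and the caching advice runs.
@Service
public class TailingTaskService {

    @Autowired
    private TaskService taskService;

    @Cacheable(value = "tasks", key = "#user.username")
    public Set<MyPojo> retrieveCurrentUserTailingTasks(UserInformation user) {
        try {
            return taskService.getTaskList(user);
        } catch (Exception e) {
            throw new ApiException("Error while retrieving tailing tasks", e);
        }
    }
}
```

The controller then injects TailingTaskService and calls its method instead of a method on itself.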
