I'm having an issue with the unless condition of the @Cacheable annotation.
From the documentation, I understand that the unless condition is verified after the annotated method is called, and the value the method returns is cached (and actually returned) only if the unless condition is not met; otherwise, the cached value should be returned.
Firstly, is this assumption true?
EDIT:
[From the Spring documentation] As the name implies, @Cacheable is used to demarcate methods that are cacheable - that is, methods for whom the result is stored into the cache so on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method.
[My understanding] So for a given key, the method will always be executed until the unless condition is not met once. Then the cached value will be returned for all subsequent calls to the method.
To illustrate my issue, I tried to break down my code into four simple classes:
1) DummyObject represents the instances to be cached and retrieved. It's a timestamp wrapper that shows the last value that was cached. The toBeCached boolean is a flag to be checked in the unless condition to decide whether the returned instance should be cached or not.
2) DummyDAO returns DummyObject instances based on provided keys. Upon retrieval of an instance, the DAO checks when the last value was retrieved and verifies whether it should be cached or not (independently of what key is provided; it doesn't matter that this logic is "broken", as I always use the same key in my example). The DAO then marks the returned instance with the toBeCached flag. If the value is marked to be cached, the DAO also updates its lastRetrieved timestamp, as the instance should eventually be cached by the CachedDAO (because the unless condition won't be met).
3) DummyCachedDAO calls the DummyDAO to retrieve instances of DummyObject. If an instance is marked toBeCached, it should cache the newly returned value; otherwise it should return the previously cached value.
4) The Application retrieves a value (that will be cached), sleeps for a short time (not long enough for the cache duration to pass), retrieves a value (that should be the cached one), sleeps again (long enough for the cache duration to pass), then retrieves a value again (that should be a new value to be cached).
Unfortunately, this code does not work as expected: the retrieved value is always the original value that was cached.
To ensure that the logic worked as expected, I checked whether the unless condition was met by calling retrieveTimestampBypass instead of retrieveTimestamp in the Application class. Since internal calls bypass the Spring proxy, retrieveTimestamp is then executed on every call, and whatever breakpoints or logs I put in it are actually hit/shown.
What would cause the value to never be cached again? Does the cache need to be cleared of previous values first?
public class DummyObject
{
    private long timestamp;
    private boolean toBeCached;

    public DummyObject(long timestamp, boolean toBeCached)
    {
        this.timestamp = timestamp;
        this.toBeCached = toBeCached;
    }

    public long getTimestamp()
    {
        return timestamp;
    }

    public boolean isToBeCached()
    {
        return toBeCached;
    }
}
@Service
public class DummyDAO
{
    private long cacheDuration = 3000;
    private long lastRetrieved;

    public DummyObject retrieveTimestamp(String key)
    {
        long renewalTime = lastRetrieved + cacheDuration;
        long time = System.currentTimeMillis();
        boolean markedToBeCached = renewalTime < time;
        System.out.println(renewalTime + " < " + time + " = " + markedToBeCached);
        if(markedToBeCached)
        {
            lastRetrieved = time;
        }
        return new DummyObject(time, markedToBeCached);
    }
}
@Service
public class DummyCachedDAO
{
    @Autowired
    private DummyDAO dao;

    // to check the flow: an internal call bypasses the Spring proxy, so no caching applies.
    public DummyObject retrieveTimestampBypass(String key)
    {
        return retrieveTimestamp(key);
    }

    @Cacheable(cacheNames = "timestamps", unless = "#result.isToBeCached() != true")
    public DummyObject retrieveTimestamp(String key)
    {
        return dao.retrieveTimestamp(key);
    }
}
@SpringBootApplication
@EnableCaching
public class Application
{
    public final static String KEY = "cache";
    public final static String MESSAGE = "Cached timestamp is: %s [%s]";

    public static void main(String[] args) throws InterruptedException
    {
        SpringApplication app = new SpringApplication(Application.class);
        ApplicationContext context = app.run(args);
        DummyCachedDAO cache = context.getBean(DummyCachedDAO.class);

        // new value
        long value = cache.retrieveTimestamp(KEY).getTimestamp();
        System.out.println(String.format(MESSAGE, value, new Date(value)));
        Thread.sleep(1000);

        // expecting same value
        value = cache.retrieveTimestamp(KEY).getTimestamp();
        System.out.println(String.format(MESSAGE, value, new Date(value)));
        Thread.sleep(5000);

        // expecting new value
        value = cache.retrieveTimestamp(KEY).getTimestamp();
        System.out.println(String.format(MESSAGE, value, new Date(value)));

        SpringApplication.exit(context, () -> 0);
    }
}
There are many details and possibly several issues here, but first of all you should remove
private long lastRetrieved;
from the DummyDAO class.
DummyDAO is a singleton instance, so the lastRetrieved field is shared state and is not thread safe.
As you can also see from the logs, after you cache the item the first time, it will always be retrieved from there, as it was cached on the first call. Otherwise you would have seen the following log:
3000 < 1590064933733 = true
The problem is actually quite simple.
There is no solution to my problem and rightfully so.
The original assumption I had was that "the unless condition is verified every time after the method being annotated is called and the value the method returns is cached (and actually returned) only if the unless condition is not met. Otherwise, the cached value should be returned."
However, this was not the actual behavior, because, as the documentation states, "@Cacheable is used to demarcate methods that are cacheable - that is, methods for whom the result is stored into the cache so on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method."
So for a given key, the method will always be executed until the unless condition is not met once. From then on, the cached value will be returned for all subsequent calls to the method.
So I tried to approach the problem in a different way for my experiment, by trying to use a combination of annotations (@Caching with @Cacheable and @CachePut, although the documentation advises against it).
The value that I was retrieving was always the new one while the one in the cache was always the expected one. (*)
That's when I realized that I couldn't update the value in the cache based on an internal timestamp generated in the method being cached AND, at the same time, retrieve the cached value if the unless condition was met or the new one otherwise.
What would be the point of executing the method every single time to compute the latest value but returning the cached one (because of the unless condition I was setting)? There is no point...
What I wanted to achieve (updating the cache once a period has expired) would have been possible if the condition of @Cacheable specified when to retrieve the cached version versus when to retrieve/generate a new one. As far as I am aware, @Cacheable only specifies whether a method's result should be cached in the first place.
That is the end of my experiment. The need to test this arose when I came across an issue with an actual production project that used an internal timestamp with this unless condition.
FYI, the most obvious solution to this problem is to use a cache provider that actually provides TTL capabilities.
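For illustration, here is a minimal sketch of what that could look like with a Caffeine-backed cache manager. This is an assumption on my part, not part of the original experiment: it presumes the caffeine dependency is on the classpath, and the 3-second TTL simply mirrors the cacheDuration used above.

import java.time.Duration;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.github.benmanes.caffeine.cache.Caffeine;

@Configuration
@EnableCaching
public class CacheConfig
{
    @Bean
    public CacheManager cacheManager()
    {
        // entries silently expire 3 seconds after being written, so the
        // annotated method is executed again on the next call for that key
        CaffeineCacheManager manager = new CaffeineCacheManager("timestamps");
        manager.setCaffeine(Caffeine.newBuilder().expireAfterWrite(Duration.ofSeconds(3)));
        return manager;
    }
}

With a configuration like this, the unless condition and the internal timestamp bookkeeping become unnecessary; the provider handles expiry itself.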
(*) PS: I also tried a few @Caching combinations of @CacheEvict (with condition="#root.target.myNewExpiringCheckMethod()==true") and @Cacheable, but that failed as well, because @CacheEvict enforces the execution of the annotated method.
Related
I'm trying to create a class that uses the java_rosbridge library, but I am having issues accessing and updating the variable status between class scopes.
Boolean isDoorbellRinging() {
    Boolean status = false;
    bridge.subscribe(SubscriptionRequestMsg.generate("/doorbell").setType("std_msgs/Bool")
            .setThrottleRate(1).setQueueLength(1), new RosListenDelegate() {
        public void receive(JsonNode data, String stringRep) {
            MessageUnpacker<PrimitiveMsg<String>> unpacker =
                    new MessageUnpacker<PrimitiveMsg<String>>(PrimitiveMsg.class);
            PrimitiveMsg<String> msg = unpacker.unpackRosMessage(data);
            logger.info(data.get("msg").get("data").asText());
            status = ((data.get("msg").get("data").asInt() > 0) ? true : false);
        }
    });
    return status;
}
It receives the data correctly, as I get the correct output with logger.info(..) when not trying to access status. However, when I include status = ((data.get("msg")..., I get this error:
Local variables referenced from an inner class must be final or effectively final
What that message means is that in closures (lambdas and methods in anonymous classes), variables from the outer scope must not be reassigned after their definition. You can circumvent this using containers (collections, arrays, atomics, and so on). In this case, AtomicBoolean may come in handy: define your variable as an AtomicBoolean and use set instead of assignment. If you have to distinguish between null and false, use AtomicReference<Boolean>.
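A minimal sketch of that workaround applied to the method above (the AtomicBoolean reference itself stays effectively final; only its contents change). Note it still returns before the callback has necessarily fired, which the next answer addresses:

import java.util.concurrent.atomic.AtomicBoolean;

Boolean isDoorbellRinging() {
    final AtomicBoolean status = new AtomicBoolean(false);
    bridge.subscribe(SubscriptionRequestMsg.generate("/doorbell").setType("std_msgs/Bool")
            .setThrottleRate(1).setQueueLength(1), new RosListenDelegate() {
        public void receive(JsonNode data, String stringRep) {
            // set() mutates the container instead of reassigning the local variable
            status.set(data.get("msg").get("data").asInt() > 0);
        }
    });
    return status.get(); // most likely still false: the callback runs later
}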
'status' is on a stack frame corresponding to the execution of isDoorbellRinging().
The 'receive' call will presumably execute at some future time. At that point, isDoorbellRinging() will have returned; there will be no 'status' to modify. It will have ceased to be.
By the same token, isDoorbellRinging() cannot 'now' return a value that will only be determined at some point in the future.
You need some way to handle the asynchronous nature of this.
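For example, one hedged option (assuming a one-shot, blocking check is acceptable in your design, and reusing the subscribe call from the question) is to bridge the callback into a CompletableFuture and wait with a timeout:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

Boolean isDoorbellRinging() throws Exception {
    final CompletableFuture<Boolean> ringing = new CompletableFuture<>();
    bridge.subscribe(SubscriptionRequestMsg.generate("/doorbell").setType("std_msgs/Bool")
            .setThrottleRate(1).setQueueLength(1), new RosListenDelegate() {
        public void receive(JsonNode data, String stringRep) {
            // hands the value back to whichever thread is waiting on the future
            ringing.complete(data.get("msg").get("data").asInt() > 0);
        }
    });
    // blocks until the first message arrives, or throws TimeoutException after 5s
    return ringing.get(5, TimeUnit.SECONDS);
}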
Is there a way to populate a Map once from the DB (through a Mongo repository) and reuse it when required from multiple classes, instead of hitting the database through the repository every time?
As per your comment, what you are looking for is a Caching mechanism. Caches are components which allow data to live in memory, as opposed to files, databases or other mediums so as to allow for the fast retrieval of information (against a higher memory footprint).
There are probably various tutorials online, but usually caches all have the following behaviour:
1. They are key-value pair structures.
2. Each entity living in the cache also has a Time To Live (TTL), that is, how long it will be considered valid.
You can implement this in the repository layer, so the cache mechanism will be transparent to the rest of your application (but you might want to consider exposing functionality that allows clearing/invalidating part or all of the cache).
So basically, when a query comes to your repository layer, check in the cache. If it exists in there, check the time to live. If it is still valid, return that.
If the key does not exist or the TTL has expired, you add/overwrite the data in the cache. Keep in mind that when you update the data model yourself, you should also invalidate the cache accordingly, so that new/fresh data will be pulled from the DB on the next call. A sketch of this pattern follows.
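As a rough illustration of that flow, here is a minimal sketch of a TTL cache that the repository layer could delegate to; every name in it is made up for the example:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    V get(K key, Function<K, V> loader) {
        Entry<V> entry = store.get(key);
        long now = System.currentTimeMillis();
        if (entry == null || entry.expiresAt <= now) {
            V value = loader.apply(key); // miss or expired: hit the database
            store.put(key, new Entry<>(value, now + ttlMillis));
            return value;
        }
        return entry.value; // still valid: serve from memory
    }

    void invalidate(K key) { store.remove(key); } // call this when you write to the DB yourself
}

The repository would then call something like cache.get(id, repo::findById), keeping the caching invisible to the callers.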
You could declare the map field as public static, which would allow application-wide access via ClassLoadingData.mapField.
I think a better solution, if I understood the problem, would be a memoized function, that is, a function storing the result of its call. Here is a sketch of how this could be done (note that this does not handle the possible synchronization problems of a multi-threaded environment):
class ClassLoadingData {
    private static Map<KeyType, ValueType> memoizedData = new HashMap<>();

    public Map<KeyType, ValueType> getMyData() {
        if (memoizedData.isEmpty()) { // you can use a more complex check to handle data refresh
            populateData();
        }
        return memoizedData;
    }

    private void populateData() {
        // do your query, and assign the result to memoizedData
    }
}
Premise: I suggest you use an object-relational mapping tool like Hibernate in your Java project to map the object-oriented domain model to a relational database, and let the tool handle the cache mechanism implicitly. Hibernate specifically implements a multi-level caching scheme (take a look at the following link for more information: https://www.tutorialspoint.com/hibernate/hibernate_caching.htm ).
Regardless of my suggestion in the premise, you can also manually create a singleton class that will be used by every class in the project that interacts with the DB:
public class MongoDBConnector {
    private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBConnector.class);
    private static MongoDBConnector instance;

    //Cache period in seconds
    public static int DB_ELEMENTS_CACHE_PERIOD = 30;

    //Latest cache update time
    private DateTime latestUpdateTime;

    //The cache data layer from DB
    private Map<KType, VType> elements;

    private MongoDBConnector() {
    }

    public static synchronized MongoDBConnector getInstance() {
        if (instance == null) {
            instance = new MongoDBConnector();
        }
        return instance;
    }
}
Here you can then define a load method that updates the map with the values stored in the DB, and also a write method that writes values to the DB, with the following characteristics:
1- These methods should be synchronized in order to avoid issues if multiple calls are performed.
2- The load method should apply a cache-period logic (maybe with a configurable period) to avoid loading the data from the DB on every call.
Example: suppose your cache period is 30s. This means that if 10 reads are performed from different points of the code within 30s, you will load data from the DB only on the first call, while the others will read from the cached map, improving performance.
Note: the greater the cache period, the better the performance of your code; but you'll create inconsistency with the cache if an insertion is performed externally (from another tool or manually). So choose the best value for your case.
public synchronized Map<KType, VType> getElements() throws ConnectorException {
    final DateTime currentTime = new DateTime();
    if (latestUpdateTime == null || (Seconds.secondsBetween(latestUpdateTime, currentTime).getSeconds() > DB_ELEMENTS_CACHE_PERIOD)) {
        LOGGER.debug("Cache is expired. Reading values from DB");
        //Read from DB and update cache
        //....
        latestUpdateTime = currentTime;
    }
    return elements;
}
3- The store method should automatically update the cache if the insert is performed correctly, regardless of whether the cache period has expired:
public synchronized void storeElement(final VType object) throws ConnectorException {
    //Insert object on DB ( throws a ConnectorException if insert fails )
    //...
    //Update cache regardless of the cache period
    loadElementsIgnoreCachePeriod();
}
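The loadElementsIgnoreCachePeriod() helper isn't shown in the answer; a minimal sketch of what it could look like (its name comes from the call above, and its body is an assumption consistent with getElements):

private synchronized void loadElementsIgnoreCachePeriod() throws ConnectorException {
    //Read from DB and update the 'elements' map unconditionally
    //....
    latestUpdateTime = new DateTime(); // reset the cache timer after a successful reload
}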
Then you can get the elements from any point in your code as follows:
Map<KType, VType> liveElements = MongoDBConnector.getInstance().getElements();
I'm attempting to use both the @Cacheable and @PostFilter annotations in Spring. The desired behavior is that the application will cache the full, unfiltered list of Segments (it's a very small and very frequently referenced list, so performance is the goal), but a User will only have access to certain Segments based on their roles.
I started out with both @Cacheable and @PostFilter on a single method, but when that wasn't working I broke them out into two separate classes so I could have one annotation on each method. However, it seems to behave the same either way: when User A hits the service for the first time they get their correct filtered list, then when User B hits the service next they get NO results, because the cache is only storing User A's filtered results and User B does not have access to any of them. (So @PostFilter still runs, but the cache seems to be storing the filtered list, not the full list.)
So here's the relevant code:
Configuration:
@Configuration
@EnableCaching
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class BcmsSecurityAutoConfiguration {

    @Bean
    public CacheManager cacheManager() {
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Arrays.asList(
                new ConcurrentMapCache("bcmsSegRoles"),
                new ConcurrentMapCache("bcmsSegments")
        ));
        return cacheManager;
    }
}
Service:
@Service
public class ScopeService {
    private final ScopeRepository scopeRepository;

    public ScopeService(final ScopeRepository scopeRepository) {
        this.scopeRepository = scopeRepository;
    }

    // Filters the list of segments based on User Roles. A User will have 1 role for each
    // segment they have access to, and then it's just a simple equality check between
    // the role and the Segment model.
    @PostFilter(value = "@bcmsSecurityService.canAccessSegment( principal, filterObject )")
    public List<BusinessSegment> getSegments() {
        List<BusinessSegment> segments = scopeRepository.getSegments();
        return segments; // Debugging shows 4 results for User A (post-filtered to 1), and 1 result for User B (post-filtered to 0)
    }
}
Repository:
@Repository
public class ScopeRepository {
    private final ScopeDao scopeDao; // This is a MyBatis interface.

    public ScopeRepository(final ScopeDao scopeDao) {
        this.scopeDao = scopeDao;
    }

    @Cacheable(value = "bcmsSegments")
    public List<BusinessSegment> getSegments() {
        List<BusinessSegment> segments = scopeDao.getSegments(); // Simple SELECT * FROM TABLE; works as expected.
        return segments; // Shows 4 results for User A; breakpoint not hit for User B as the cache takes over.
    }
}
Does anyone know why the cache seems to be storing the result of the Service method after the filter runs, rather than storing the full result set at the Repository level as I'm expecting it should? Or does anyone know another way to achieve my desired behavior?
Bonus points if you know how I could gracefully achieve both caching and filtering on the same method in the Service. I only built the superfluous Repository because I thought splitting the methods would resolve the caching problem.
Turns out that the contents of Spring caches are mutable, and the @PostFilter annotation modifies the returned list in place; it does not filter into a new one.
So when @PostFilter ran after my Service method call above, it was actually removing items from the list stored in the cache, so the second request only had 1 result to start with, and the third would have zero.
My solution was to modify the Service to return new ArrayList<>(scopeRepository.getSegments()); so that @PostFilter wasn't changing the cached list.
(NOTE: that's not a deep clone, of course, so if someone modified a Segment model upstream from the Service it would likely change the model in the cache as well. So this may not be the best solution, but it works for my personal use case.)
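Put together, the fixed Service method looks roughly like this (a sketch based on the code above; the copy is shallow, as noted):

@PostFilter(value = "@bcmsSecurityService.canAccessSegment( principal, filterObject )")
public List<BusinessSegment> getSegments() {
    // copy the cached list so @PostFilter mutates the copy, not the cache contents
    return new ArrayList<>(scopeRepository.getSegments());
}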
I can't believe Spring Caches are mutable...
I had the problem that every time I retrieved a collection from the GWT RequestFactory, the findEntity() method was called for every entity in that collection, and this findEntity() method calls the SQL database.
I found out that this happens because RequestFactory checks the "liveness" of every entity in the ServiceLayerDecorator.isLive() method (also described here: requestfactory and findEntity method in GWT).
So I provided my own RequestFactoryServlet:
public class MyCustomRequestFactoryServlet extends RequestFactoryServlet {
    public MyCustomRequestFactoryServlet() {
        super(new DefaultExceptionHandler(), new MyCustomServiceLayerDecorator());
    }
}
And my own ServiceLayerDecorator:
public class MyCustomServiceLayerDecorator extends ServiceLayerDecorator {
    /**
     * This check normally does a lookup against the DB for every element in a collection
     * -> therefore overridden.
     */
    @Override
    public boolean isLive(Object domainObject) {
        return true;
    }
}
This works so far, and I no longer get that massive number of queries against the database.
Now I am wondering whether I will run into other issues with this. Or is there a better way to solve it?
RequestFactory expects a session-per-request pattern with the session guaranteeing a single instance per entity (i.e. using a cache).
The proper fix is to have isLive hit that cache, not the database. If you use JPA or JDO, they should do that for you for free. What matters is what "the request" thinks about it (if you issued a delete request, isLive should return false), not really what's exactly stored in the DB, taking into account what other users could have done concurrently.
That being said, isLive is only used for driving EntityProxyChange events on the client side, so if you don't use them, it shouldn't cause any problem unconditionally returning true like you do.
I'm trying to store a number in the application scope of a GlassFish web service.
The web service:
@WebService()
public class datacheck {
    //TODO 080 disable sql_log in the settings of hibernate
    //TODO 090 check todo's from webservice_1
    private int counter = 5;
When I request the counter variable, I get 5, and
#WebMethod(operationName = "increaseCounter")
public Integer increaseCounter() {
counter++;
return counter;
}
returns 6. But when I call the following afterwards, I get 5 again:
#WebMethod(operationName = "getCounter")
public Integer getCounter() {
return counter;
}
How do I store a variable that is available to all methods in the web service?
This depends on your use case and architecture to an extent. If every user should see the result of the incremented counter, then you could declare it statically in your code:
private static int counter = 5;
This will only work if your application runs in a single JVM, though, and would require careful thought about synchronization; see the sketch below.
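For example, a minimal sketch using an AtomicInteger (an assumption about your setup, not a drop-in fix for clustered deployments) keeps the increments thread safe within one JVM:

import java.util.concurrent.atomic.AtomicInteger;

@WebService()
public class datacheck {
    // shared by all instances of this endpoint class within the JVM
    private static final AtomicInteger counter = new AtomicInteger(5);

    @WebMethod(operationName = "increaseCounter")
    public Integer increaseCounter() {
        return counter.incrementAndGet(); // atomic read-modify-write
    }

    @WebMethod(operationName = "getCounter")
    public Integer getCounter() {
        return counter.get();
    }
}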
Alternatively, you could persist it externally (to a database or a file, for example).
Implementing the Singleton pattern should work; you will end up with the same instance in the whole JVM. Beware though: writing to a singleton from different threads might be a contended lock, and there be dragons! A minimal sketch follows.
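Here is one way such a singleton could look, using the enum idiom (which gives a single instance and safe publication for free) plus an atomic counter to keep writes thread safe; the names are illustrative:

import java.util.concurrent.atomic.AtomicInteger;

public enum CounterHolder {
    INSTANCE;

    private final AtomicInteger counter = new AtomicInteger(5);

    public int increment() { return counter.incrementAndGet(); }
    public int get()       { return counter.get(); }
}

Usage from any method in the web service: CounterHolder.INSTANCE.increment();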
There's also ThreadLocal if you want to constrain an object to one thread (I think GlassFish is one thread per request, but don't cite me :)