I have two functions which must run in a critical section:
public synchronized void f1() { ... }
public synchronized void f2() { ... }
Assume the behavior is as follows:
f1 is almost never called. Actually, under normal conditions, this method is never called. If f1 is called anyway, it should return quickly.
f2 is called at a very high rate. It returns very quickly.
These methods never call each other, and there is no reentrancy either.
In other words, there is very low contention. So when f2 is called, we have some overhead to acquire the lock, which is granted immediately in 99.9% of cases. I am wondering if there are approaches to avoid this overhead.
I came up with the following alternative:
private final AtomicInteger lock = new AtomicInteger(0);

public void f1() {
    while (!lock.compareAndSet(0, 1)) {}
    try {
        ...
    } finally {
        lock.set(0);
    }
}

public void f2() {
    while (!lock.compareAndSet(0, 2)) {}
    try {
        ...
    } finally {
        lock.set(0);
    }
}
Are there other approaches? Does the java.util.concurrent package offer something natively?
Update
Although my intention is for this to be a generic question, here is some information regarding my situation:
f1: This method creates a new remote stream if, for some reason, the current one becomes corrupt, for example due to a timeout. A remote stream can be thought of as a socket connection that consumes a remote queue starting from a given position:
private Stream stream;

public synchronized Stream f1() {
    final Stream stream = new Stream(...);
    if (this.stream != null) {
        stream.setPosition(this.stream.getPosition());
    }
    this.stream = stream;
    return stream;
}
f2: This method advances the stream position. It is a plain setter:
public synchronized void f2(Long p) {
    stream.setPosition(p);
}
Here, stream.setPosition(Long) is implemented as a plain setter as well:
public class Stream {
    private volatile Long position = 0L;

    public void setPosition(Long position) {
        this.position = position;
    }
}
In Stream, the current position is periodically sent to the server, asynchronously. Note that Stream is not implemented by me.
My idea was to introduce compare-and-swap as illustrated above, and mark stream as volatile.
Your example isn't doing what you want it to. You are actually executing your code when the lock is being used. Try something like this:
public void f1() {
    while (!lock.compareAndSet(0, 1)) {
    }
    try {
        ...
    } finally {
        lock.set(0);
    }
}
To answer your question: I don't believe this will be any faster than using synchronized methods, and this approach is harder to read and comprehend.
From the description and your example code, I've inferred the following:
Stream has its own internal position, and you're also tracking the most recent position externally. You use this as a sort of 'resume point': when you need to reinitialize the stream, you advance it to this point.
The last known position may be stale; I'm assuming this based on your assertion that the stream periodically and asynchronously notifies the server of its current position.
At the time f1 is called, the stream is known to be in a bad state.
The functions f1 and f2 access the same data, and may run concurrently. However, neither f1 nor f2 will ever run concurrently against itself. In other words, you almost have a single-threaded program, except for the rare cases when both f1 and f2 are executing.
[Side note: My solution doesn't actually care if f1 gets called concurrently with itself; it only cares that f2 is not called concurrently with itself]
If any of this is wrong, then the solution below is wrong. Heck, it might be wrong anyway, either because of some detail left out, or because I made a mistake. Writing low-lock code is hard, which is exactly why you should avoid it unless you've observed an actual performance issue.
static class Stream {
    private long position = 0L;

    void setPosition(long position) {
        this.position = position;
    }
}

static final class StreamInfo {
    final Stream stream = new Stream();
    volatile long resumePosition = -1;

    final void setPosition(final long position) {
        stream.setPosition(position);
        resumePosition = position;
    }
}

private final Object updateLock = new Object();
private final AtomicReference<StreamInfo> currentInfo = new AtomicReference<>(new StreamInfo());

void f1() {
    synchronized (updateLock) {
        final StreamInfo oldInfo = currentInfo.getAndSet(null);
        final StreamInfo newInfo = new StreamInfo();
        if (oldInfo != null && oldInfo.resumePosition > 0L) {
            newInfo.setPosition(oldInfo.resumePosition);
        }
        // Only `f1` modifies `currentInfo` (`f2` just reads it), so
        // publish the fully initialized replacement last.
        currentInfo.set(newInfo);
        // The `f2` thread might be waiting for us, so wake it up.
        updateLock.notifyAll();
    }
}

void f2(final long newPosition) {
    while (true) {
        final StreamInfo s = acquireStream();
        s.setPosition(newPosition);
        s.resumePosition = newPosition;
        // Make sure the stream wasn't replaced while we worked.
        // If it was, run again with the new stream.
        if (acquireStream() == s) {
            break;
        }
    }
}

private StreamInfo acquireStream() {
    // Optimistic concurrency: hope we get a stream that's ready to go.
    // If we fail, branch off into a slower code path that waits for it.
    final StreamInfo s = currentInfo.get();
    return s != null ? s : acquireStreamSlow();
}

private StreamInfo acquireStreamSlow() {
    synchronized (updateLock) {
        while (true) {
            final StreamInfo s = currentInfo.get();
            if (s != null) {
                return s;
            }
            try {
                updateLock.wait();
            } catch (final InterruptedException ignored) {
            }
        }
    }
}
If the stream has faulted and is being replaced by f1, it is possible that an earlier call to f2 is still performing some operations on the (now defunct) stream. I'm assuming this is okay, and that it won't introduce undesirable side effects (beyond those already present in your lock-based version). I make this assumption because we've already established in the list above that your resume point may be stale, and we also established that f1 is only called once the stream is known to be in a bad state.
Based on my JMH benchmarks, this approach is around 3x faster than the CAS or synchronized versions (which are pretty close themselves).
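For reference, this is roughly the shape a JMH harness for such a comparison could take. This is a hypothetical sketch, not the actual benchmark behind the numbers above; it reuses the StreamInfo class from the code above (assumed to be in the same package), and PositionUpdateBenchmark is a made-up name:
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class PositionUpdateBenchmark {

    private final StreamInfo info = new StreamInfo();
    private long next = 0L;

    @Benchmark
    public void updatePosition() {
        // The hot path of f2: a plain write plus a volatile write.
        info.setPosition(next++);
    }
}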
Another approach is to use a timestamp lock, which works like a modification count. This works well if you have a high read-to-write ratio.
Another approach is to have an immutable object which stores state via an AtomicReference (see the sketch below). This works well if you have a very high read-to-write ratio.
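A minimal sketch of the second idea, assuming the stream and its position are bundled into one immutable object. StreamState and StreamHolder are made-up names, and Object stands in for the real Stream type; the timestamp-lock idea is essentially what StampedLock (shown in a later answer) provides:
import java.util.concurrent.atomic.AtomicReference;

// Immutable snapshot: a new one is swapped in on every change.
final class StreamState {
    final Object stream;   // placeholder for the real Stream type
    final long position;

    StreamState(Object stream, long position) {
        this.stream = stream;
        this.position = position;
    }
}

public class StreamHolder {
    private final AtomicReference<StreamState> state =
            new AtomicReference<>(new StreamState(new Object(), 0L));

    // f2: advance the position by swapping in a new immutable state.
    public void advance(long newPosition) {
        StreamState old;
        do {
            old = state.get();
        } while (!state.compareAndSet(old, new StreamState(old.stream, newPosition)));
    }

    // f1: replace the stream, carrying over the last known position.
    public void replaceStream(Object newStream) {
        StreamState old;
        do {
            old = state.get();
        } while (!state.compareAndSet(old, new StreamState(newStream, old.position)));
    }
}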
Related
I have a scenario where I have to maintain a Map that can be populated by multiple threads, each modifying its respective List (the unique identifier/key being the thread name). When the list size for a thread exceeds a fixed batch size, we have to persist the records to the database.
Aggregator class
private volatile ConcurrentHashMap<String, List<T>> instrumentMap = new ConcurrentHashMap<String, List<T>>();
private final ReentrantLock lock = new ReentrantLock();

public void addAll(List<T> entityList, String threadName) {
    lock.lock();
    try {
        List<T> instrumentList = instrumentMap.get(threadName);
        if (instrumentList == null) {
            instrumentList = new ArrayList<T>(batchSize);
            instrumentMap.put(threadName, instrumentList);
        }
        if (instrumentList.size() >= batchSize - 1) {
            instrumentList.addAll(entityList);
            recordSaver.persist(instrumentList);
            instrumentList.clear();
        } else {
            instrumentList.addAll(entityList);
        }
    } finally {
        lock.unlock();
    }
}
There is one more separate thread that runs every 2 minutes (using the same lock) to persist all the records in the Map (to make sure something is persisted every 2 minutes and the map size does not get too big):
if (/* some condition */) {
    Thread.sleep(/* 2 minutes */);
    aggregator.getLock().lock();
    List<T> instrumentList = instrumentMap.values().stream().flatMap(x -> x.stream()).collect(Collectors.toList());
    if (instrumentList.size() > 0) {
        saver.persist(instrumentList);
        instrumentMap.values().parallelStream().forEach(x -> x.clear());
        aggregator.getLock().unlock();
    }
}
This solution works fine in almost every scenario that we tested, except that sometimes we see some of the records go missing, i.e. they are not persisted at all, although they were added fine to the Map.
My questions are:
What is the problem with this code?
Is ConcurrentHashMap not the best solution here?
Does the List that is used with the ConcurrentHashMap have an issue?
Should I use the compute method of ConcurrentHashMap here (no need I think, as ReentrantLock is already doing the same job)?
The answer provided by @Slaw in the comments did the trick. We were letting the instrumentList instance escape in a non-synchronized way, i.e. access/operations were happening on the list without any synchronization. Fixing that by passing a copy to the downstream methods did the trick.
The following lines of code are where the issue was happening:
recordSaver.persist(instrumentList);
instrumentList.clear();
Here we were allowing the instrumentList instance to escape in a non-synchronized way: the list is passed to another class (recordSaver.persist) where it is to be acted on, but we also clear the list on the very next line (in the Aggregator class), and all of this happens without any synchronization. The list's state in the record saver can't be predicted... a really stupid mistake.
We fixed the issue by passing a cloned copy of instrumentList to the recordSaver.persist(...) method. That way, instrumentList.clear() has no effect on the list available in recordSaver for further operations.
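For illustration, the fixed call site could look like this (a sketch based on the description above, not the exact production code):
if (instrumentList.size() >= batchSize - 1) {
    instrumentList.addAll(entityList);
    // Hand recordSaver its own snapshot, so clearing the original
    // list cannot affect the records persist(...) is working on.
    recordSaver.persist(new ArrayList<T>(instrumentList));
    instrumentList.clear();
}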
I see that you are using ConcurrentHashMap's parallelStream within a lock. I am not knowledgeable about Java 8+ stream support, but some quick searching shows that:
ConcurrentHashMap is a complex data structure, that used to have concurrency bugs in past
Parallel streams must abide to complex and poorly documented usage restrictions
You are modifying your data within a parallel stream
Based on that information (and my gut-driven concurrency bug detector™), I wager a guess that removing the call to parallelStream might improve the robustness of your code. In addition, as mentioned by @Slaw, you should use an ordinary HashMap in place of ConcurrentHashMap if all instrumentMap usage is already guarded by the lock.
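Concretely, the flush inside the lock could be a plain sequential loop instead (a sketch of the suggested change, not tested against your full code):
// Replace:
// instrumentMap.values().parallelStream().forEach(x -> x.clear());
// with a sequential loop; we already hold the lock, so there is
// nothing to gain from parallelism here.
for (List<T> list : instrumentMap.values()) {
    list.clear();
}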
Of course, since you don't post the code of recordSaver, it is possible that it too has bugs (and not necessarily concurrency-related ones). In particular, you should make sure that the code that reads records back from persistent storage (the code you are using to detect the loss of records) is safe, correct, and properly synchronized with the rest of your system (preferably by using a robust, industry-standard SQL database).
It looks like this was an attempt at optimization where it was not needed. In that case, less is more and simpler is better. In the code below, only two concurrency concepts are used: synchronized to ensure that the shared list is properly updated, and final to ensure all threads see the same value.
import java.util.ArrayList;
import java.util.List;

public class Aggregator<T> implements Runnable {

    private final List<T> instruments = new ArrayList<>();
    private final RecordSaver recordSaver;
    private final int batchSize;

    public Aggregator(RecordSaver recordSaver, int batchSize) {
        super();
        this.recordSaver = recordSaver;
        this.batchSize = batchSize;
    }

    public synchronized void addAll(List<T> moreInstruments) {
        instruments.addAll(moreInstruments);
        if (instruments.size() >= batchSize) {
            storeInstruments();
        }
    }

    public synchronized void storeInstruments() {
        if (instruments.size() > 0) {
            // in case recordSaver works async:
            // recordSaver.persist(new ArrayList<T>(instruments));
            // else just:
            recordSaver.persist(instruments);
            instruments.clear();
        }
    }

    @Override
    public void run() {
        while (true) {
            try { Thread.sleep(1L); } catch (Exception ignored) {
                break;
            }
            storeInstruments();
        }
    }

    class RecordSaver {
        void persist(List<?> l) {}
    }
}
I already have a topic with the same code:
public abstract class Digest {

    private Map<String, byte[]> cache = new HashMap<>();

    public byte[] digest(String input) {
        byte[] result = cache.get(input);
        if (result == null) {
            synchronized (cache) {
                result = cache.get(input);
                if (result == null) {
                    result = doDigest(input);
                    cache.put(input, result);
                }
            }
        }
        return result;
    }

    protected abstract byte[] doDigest(String input);
}
In the previous topic it was proven that this code is not thread-safe.
In this topic I want to present the solutions I have in mind, and I ask you to review them:
Solution#1 through ReadWriteLock:
public abstract class Digest {

    private final ReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Lock readLock = rwl.readLock();
    private final Lock writeLock = rwl.writeLock();

    private Map<String, byte[]> cache = new HashMap<>(); // I still don't know whether I should use volatile or not

    public byte[] digest(String input) {
        byte[] result = null;
        readLock.lock();
        try {
            result = cache.get(input);
        } finally {
            readLock.unlock();
        }
        if (result == null) {
            writeLock.lock();
            try {
                result = cache.get(input);
                if (result == null) {
                    result = doDigest(input);
                    cache.put(input, result);
                }
            } finally {
                writeLock.unlock();
            }
        }
        return result;
    }

    protected abstract byte[] doDigest(String input);
}
Solution#2 through CHM
public abstract class Digest {

    private Map<String, byte[]> cache = new ConcurrentHashMap<>(); // should this be volatile?

    public byte[] digest(String input) {
        return cache.computeIfAbsent(input, this::doDigest);
    }

    protected abstract byte[] doDigest(String input);
}
Please review the correctness of both solutions. This is not a question about which solution is better; I understand that the CHM one is better. Please review the correctness of the implementations.
Unlike the clusterfudge we got into in the last question, this is better.
As was shown in the previous question's duplicate, the original code is not thread-safe, since HashMap is not thread-safe and the initial get() can be called while the put() is being executed inside the synchronized block. This can break all sorts of things, so it's definitely not thread-safe.
The ReadWriteLock solution is thread-safe, since all accesses to cache are done in guarded code. The initial get() is protected by the read lock, and the put() is done inside the write lock, guaranteeing that threads can't read the cache while it's being written to, but are free to read it at the same time as other reading threads. No concurrency issues, no visibility issues, no chance of deadlocks. Everything's fine.
The ConcurrentHashMap solution is of course the most elegant one. Since computeIfAbsent() is an atomic operation, it guarantees that the value is either returned directly or computed at most once. From the javadoc:
If the specified key is not already associated with a value, attempts
to compute its value using the given mapping function and enters it
into this map unless null. The entire method invocation is
performed atomically, so the function is applied at most once per key.
Some attempted update operations on this map by other threads may be
blocked while computation is in progress, so the computation should be
short and simple, and must not attempt to update any other mappings of
this map.
The Map in question shouldn't be volatile, but it should be final. If it's not final, it could (at least in theory) be changed, and it would be possible for 2 threads to work on different objects, which is not what you want.
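In other words, the field declaration in the CHM version would become (a one-line sketch of that suggestion):
// final guarantees safe publication: all threads see the same map instance.
private final Map<String, byte[]> cache = new ConcurrentHashMap<>();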
For learning purposes I have tried to implement a queue data structure plus a consumer/producer chain that is thread-safe; also for learning purposes, I have not used the wait/notify mechanism:
SyncQueue:
package syncpc;

/**
 * Created by Administrator on 01/07/2009.
 */
public class SyncQueue {
    private int val = 0;
    private boolean set = false;

    boolean isSet() {
        return set;
    }

    synchronized public void enqueue(int val) {
        this.val = val;
        set = true;
    }

    synchronized public int dequeue() {
        set = false;
        return val;
    }
}
Consumer:
package syncpc;

/**
 * Created by Administrator on 01/07/2009.
 */
public class Consumer implements Runnable {
    SyncQueue queue;

    public Consumer(SyncQueue queue, String name) {
        this.queue = queue;
        new Thread(this, name).start();
    }

    public void run() {
        while (true) {
            if (queue.isSet()) {
                System.out.println(queue.dequeue());
            }
        }
    }
}
Producer:
package syncpc;

import java.util.Random;

/**
 * Created by Administrator on 01/07/2009.
 */
public class Producer implements Runnable {
    SyncQueue queue;

    public Producer(SyncQueue queue, String name) {
        this.queue = queue;
        new Thread(this, name).start();
    }

    public void run() {
        Random r = new Random();
        while (true) {
            if (!queue.isSet()) {
                queue.enqueue(r.nextInt() % 100);
            }
        }
    }
}
Main:
import syncpc.*;

/**
 * Created by Administrator on 27/07/2015.
 */
public class Program {
    public static void main(String[] args) {
        SyncQueue queue = new SyncQueue();
        new Producer(queue, "PRODUCER");
        new Consumer(queue, "CONSUMER");
    }
}
The problem here is that if the isSet method is not synchronized, I get output like:
97,
55
and then the program just continues running without outputting any more values, while if isSet is synchronized the program works correctly.
I don't understand why. There is no deadlock, and isSet just queries the set instance variable without setting it, so there shouldn't be a race condition.
set needs to be volatile:
private volatile boolean set = false;
This ensures that all readers see the updated value when a write completes. Otherwise they may keep seeing a stale cached value. This is discussed in more detail in this article on concurrency, which also provides examples of different patterns that use volatile.
Now, the reason your code works with synchronized is probably best explained with an example. synchronized methods can be rewritten as follows (i.e., they are equivalent to the following representation):
public class SyncQueue {
    private int val = 0;
    private boolean set = false;

    boolean isSet() {
        synchronized (this) {
            return set;
        }
    }

    public void enqueue(int val) {
        synchronized (this) {
            this.val = val;
            set = true;
        }
    }

    public int dequeue() {
        synchronized (this) {
            set = false;
            return val;
        }
    }
}
Here, the instance itself is used as the lock, and only one thread can hold that lock at a time. This means that any thread will always get the updated value, because only one thread can be writing the value, and a thread that wants to read set won't be able to execute isSet until the writing thread releases the lock on this, at which point the value of set will have been updated.
If you want to understand concurrency in Java properly you should really read Java: Concurrency In Practice (I think there's a free PDF floating around somewhere as well). I'm still going through this book because there are still many things that I do not understand or am wrong about.
As matt forsythe commented, you will run into issues when you have multiple consumers. They could both check isSet() and find that there is a value to dequeue, which means both will attempt to dequeue that same value. What you really want is for the "check and dequeue if set" operation to be atomic, but the way you have coded it, it is not: the thread that initially calls isSet is not necessarily the same thread that then calls dequeue, so the operation as a whole would have to be synchronized.
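For example, here is a sketch of one way to make that check-and-dequeue atomic (tryDequeue is a made-up method name, not part of the original code):
// Atomically checks and dequeues, so two consumers can never both
// observe `set == true` and dequeue the same value.
synchronized public Integer tryDequeue() {
    if (!set) {
        return null; // nothing to consume right now
    }
    set = false;
    return val;
}
The consumer would then loop calling tryDequeue() and print only non-null results.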
The problem you have is visibility (or rather, the lack of it).
Without any instructions to the contrary, the JVM will assume that the value assigned to a variable in one thread need not be visible to the other threads. It may be made visible sometime later (when it's convenient to do so), or maybe not at all. The rules governing what should be made visible, and when, are defined by the Java Memory Model and are summed up here. (They may be a bit dry and scary at first, but it's absolutely crucial to understand them.)
So even though the producer sets set to true, the consumer will continue to see it as false. How can you publish a new value?
Mark the field as volatile. This works well for primitive values like boolean; with references you have to be a bit more careful.
synchronized provides not just mutual exclusion but also guarantees that any values set in it will be visible to anyone entering a synchronized block that uses the same object. (This is why everything works if you declare the isSet() method synchronized.)
Using a thread-safe library class, like the Atomic* classes of java.util.concurrent.
In your case volatile is probably the best solution because you're only updating a boolean, so atomicity of the update is guaranteed by default.
As matt forsythe pointed out, there is also a TOCTTOU issue with your code, because a thread can be interrupted by another between isSet() and enqueue()/dequeue().
I assume that when we get stuck on a threading issue, the first step is to make sure that both threads are running well. (I know they will here, as there are no locks to create a deadlock.)
For that you could add a print statement in the enqueue function as well. That would confirm that both the enqueue and dequeue threads are running well.
The second step should be to check that set, the shared resource, toggles well enough for the code to run in the desired fashion.
I think if you reason about it and place the logging well enough, you can work out the issues in the problem yourself.
What's a good way of allowing searches from multiple threads on a list (or other data structure), while preventing searches on the list and edits to the list on different threads from interleaving? I tried using synchronized blocks in the searching and editing methods, but that can cause unnecessary blocking when trying to run searches in multiple threads.
EDIT: The ReadWriteLock is exactly what I was looking for! Thanks.
Usually, yes, ReadWriteLock is good enough.
But if you're using Java 8, you can get a performance boost with the new StampedLock, which lets you avoid the read lock entirely. This applies when you have much more frequent reads (searches) than writes (edits).
private final StampedLock sl = new StampedLock();

public void edit() { // write method
    long stamp = sl.writeLock();
    try {
        doEdit();
    } finally {
        sl.unlockWrite(stamp);
    }
}

public Object search() { // read method
    long stamp = sl.tryOptimisticRead();
    Object result = doSearch(); // first try without a lock; the search ideally should be fast
    if (!sl.validate(stamp)) {  // if something was modified in the meantime,
        stamp = sl.readLock();  // acquire the read lock and search again
        try {
            result = doSearch();
        } finally {
            sl.unlockRead(stamp);
        }
    }
    return result;
}
I was recently looking for a way to implement a doubly buffered thread-safe cache for regular objects.
The need arose because we had some cached data structures that were being hit numerous times for each request and had to be reloaded from a very large document (1s+ unmarshalling time), and we couldn't afford to let all requests be delayed by that long every minute.
Since I couldn't find a good thread-safe implementation, I wrote my own, and now I am wondering if it's correct and whether it can be made smaller... Here it is:
package nl.trimpe.michiel;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * Abstract class implementing a double buffered cache for a single object.
 *
 * Implementing classes can load the object to be cached by implementing the
 * {@link #retrieve()} method.
 *
 * @param <T>
 *            The type of the object to be cached.
 */
public abstract class DoublyBufferedCache<T> {

    private static final Log log = LogFactory.getLog(DoublyBufferedCache.class);

    private Long timeToLive;
    private long lastRetrieval;
    private T cachedObject;
    private Object lock = new Object();
    private volatile Boolean isLoading = false;

    public T getCachedObject() {
        checkForReload();
        return cachedObject;
    }

    private void checkForReload() {
        if (cachedObject == null || isExpired()) {
            if (!isReloading()) {
                synchronized (lock) {
                    // Recheck expiration because another thread might have
                    // refreshed the cache before we were allowed into the
                    // synchronized block.
                    if (isExpired()) {
                        isLoading = true;
                        try {
                            cachedObject = retrieve();
                            lastRetrieval = System.currentTimeMillis();
                        } catch (Exception e) {
                            log.error("Exception occurred retrieving cached object", e);
                        } finally {
                            isLoading = false;
                        }
                    }
                }
            }
        }
    }

    protected abstract T retrieve() throws Exception;

    private boolean isExpired() {
        return (timeToLive > 0) ? ((System.currentTimeMillis() - lastRetrieval) > (timeToLive * 1000)) : true;
    }

    private boolean isReloading() {
        return cachedObject != null && isLoading;
    }

    public void setTimeToLive(Long timeToLive) {
        this.timeToLive = timeToLive;
    }
}
What you've written isn't thread-safe. In fact, you've stumbled onto a common fallacy that is quite a famous problem: the double-checked locking problem. Many solutions like yours (and there are several variations on this theme) have the same issues.
There are a few potential solutions to this, but imho the easiest is simply to use a ScheduledExecutorService and reload what you need every minute, or however often you need to. When a reload completes, you replace the cached result, and calls for it just return the latest version. This is thread-safe and easy to implement. Sure, it's not loaded on demand but, apart from the initial value, you'll never take a performance hit while you retrieve the value. I'd call this over-eager loading rather than lazy loading.
For example:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Cache<T> {

    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private final Callable<T> method;
    private final Runnable refresh;
    private volatile Future<T> result;
    private final long ttl;

    public Cache(Callable<T> method, long ttl) {
        if (method == null) {
            throw new NullPointerException("method cannot be null");
        }
        if (ttl <= 0) {
            throw new IllegalArgumentException("ttl must be positive");
        }
        this.method = method;
        this.ttl = ttl;
        // initial hits may result in a delay until we've loaded
        // the result once, after which there will never be another
        // delay because we only refresh with complete results
        result = executor.submit(method);
        // schedule the refresh process
        refresh = new Runnable() {
            public void run() {
                // compute the new value on this scheduler thread
                FutureTask<T> future = new FutureTask<>(method);
                future.run();
                try {
                    future.get(); // check that it completed successfully
                    result = future; // publish only complete results
                } catch (InterruptedException | ExecutionException e) {
                    // keep serving the previous result if the refresh failed
                }
                executor.schedule(this, ttl, TimeUnit.MILLISECONDS);
            }
        };
        executor.schedule(refresh, ttl, TimeUnit.MILLISECONDS);
    }

    public T getResult() throws InterruptedException, ExecutionException {
        return result.get();
    }
}
That takes a little explanation. Basically, you're creating a generic interface for caching the result of a Callable, which will be your document load. Submitting a Callable returns a Future, and calling Future.get() blocks until the computation completes.
So what this does is implement getResult() in terms of a Future, so initial queries won't fail (they will just block). After that, every ttl milliseconds the refresh task runs: it computes the method's new value as a FutureTask on the scheduler thread and, once it has completed successfully, replaces the result member. Subsequent Cache.getResult() calls return the new value.
There is a scheduleAtFixedRate() method on ScheduledExecutorService, but I avoid it because if the Callable takes longer than the scheduled delay, you will end up with multiple refreshes running at the same time and then have to worry about throttling. It's easier for the refresh task simply to reschedule itself at the end of each run.
I'm not sure I understand your need. Is it to have faster loading (and reloading) of the cache for a portion of the values?
If so, I would suggest breaking your data structure into smaller pieces.
Just load the piece that you need at the time. If you divide the size by 10, you will divide the loading time by something related to 10.
This could apply to the original document you are reading, if possible. Otherwise, it would be the way you read it, where you skip a large part of it and load only the relevant part.
I believe most data can be broken down into pieces. Choose the most appropriate partitioning; here are examples (a sketch of the second one follows the list):
by starting letter: A*, B*, ...
partition your id into two parts: the first part is a category; look it up in the cache, and load it if needed, then look for the second part inside.
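A minimal sketch of that second example, assuming a hypothetical Record value type and loadPartition(...) loader (neither name is from the original post):
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PartitionedCache {

    static class Record { /* placeholder for the cached value type */ }

    private final ConcurrentHashMap<String, Map<String, Record>> partitions =
            new ConcurrentHashMap<>();

    public Record lookup(String category, String id) {
        // computeIfAbsent loads each partition at most once, on first use,
        // so only requests for a cold category pay the loading cost.
        Map<String, Record> partition =
                partitions.computeIfAbsent(category, this::loadPartition);
        return partition.get(id);
    }

    private Map<String, Record> loadPartition(String category) {
        // placeholder: unmarshal only the part of the document
        // belonging to this category
        return new HashMap<>();
    }
}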
If your need is not the initial loading time but the reloading, maybe you don't mind the actual time spent reloading, but want to be able to use the old version while loading the new one?
If that is your need, I suggest making your cache an instance (as opposed to static) that is available in a field.
You trigger reloading every minute with a dedicated thread (or at least not the regular request threads), so that you don't delay your regular threads.
Reloading creates a new instance, loads it with data (takes 1 second), and then simply replaces the old instance with the new one. (The old one will be garbage-collected.) Replacing one object reference with another is an atomic operation.
Analysis: what happens in that case is that any other thread can still access the old cache until the last instant.
In the worst case, on the instruction just after obtaining the old cache instance, another thread replaces the old instance with a new one. But this doesn't make your code faulty: asking the old cache instance will still give a value that was correct just before, which is acceptable under the requirement I gave in the first sentence.
To make your code more correct, you can make your cache instance immutable (no setters available, no way to modify its internal state). This makes it clearer that it is correct to use in a multi-threaded context.
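A minimal sketch of this pattern, assuming the cache content can be represented as a Map (DocumentCache, lookup, and loadFromDocument are made-up names for illustration):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class DocumentCache {

    // volatile: readers always see the most recently published snapshot
    private volatile Map<String, Object> snapshot = Collections.emptyMap();

    // called by readers on the regular request threads; never blocks
    public Object lookup(String key) {
        return snapshot.get(key); // always a fully built, immutable map
    }

    // called every minute from a dedicated refresh thread
    public void reload() {
        Map<String, Object> fresh = loadFromDocument();  // slow (~1s)
        snapshot = Collections.unmodifiableMap(fresh);   // atomic swap
    }

    private Map<String, Object> loadFromDocument() {
        // placeholder for the expensive unmarshalling step
        return new HashMap<>();
    }
}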
You appear to be locking more than is required: in your good case (cache full and valid), every request acquires the lock. You can get away with locking only when the cache has expired:
If we are reloading, do nothing.
If we are not reloading and not expired, go ahead.
If we are not reloading and we are expired, take the lock and double-check expiration, to make sure we have not successfully reloaded since the last check.
Also note you may wish to reload the cache in a background thread, so that not even the one request is held up waiting for the cache to fill.
private void checkForReload() {
    if (cachedObject == null || isExpired()) {
        if (!isReloading()) {
            // Recheck expiration because another thread might have
            // refreshed the cache before we were allowed into the
            // synchronized block.
            if (isExpired()) {
                synchronized (lock) {
                    if (isExpired()) {
                        isLoading = true;
                        try {
                            cachedObject = retrieve();
                            lastRetrieval = System.currentTimeMillis();
                        } catch (Exception e) {
                            log.error("Exception occurred retrieving cached object", e);
                        } finally {
                            isLoading = false;
                        }
                    }
                }
            }
        }
    }
}