I am trying to implement a thread-safe access counter. What I need to do is create a HashMap which maps a path to an integer counter. It has two methods which check whether the map contains the path and increase the count accordingly if the path occurs, as shown below:
public class AccessCounter {
    private HashMap<Integer, java.nio.file.Path> counter = new HashMap<Integer, java.nio.file.Path>();
    private ReentrantLock lock = new ReentrantLock();
    private Path path;
    private RequestHandler rq;

    public void map(HashMap<Integer, java.nio.file.Path> counter) {
        counter.put(10, Paths.get("/Users/a.html"));
        counter.put(5, Paths.get("b.html"));
        counter.put(2, Paths.get("c.txt"));
        counter.put(7, Paths.get("d.txt"));
    }

    public void increment() {
        lock.lock();
        System.out.println("Lock Obtained");
        try {
            if (counter.keySet().equals(rq.fileName())) {
                for (Integer key : counter.keySet()) {
                    Path text = counter.get(key);
                    key++;
                }
            } else {
                counter.put(1, path);
            }
        } finally {
            lock.unlock();
            System.out.println("Lock released");
        }
    }

    public void getCount() {
        lock.lock();
        System.out.println("Lock Obtained");
        try {
            if (counter.keySet().equals(rq.fileName())) {
                for (Integer key : counter.keySet()) {
                    Path text = counter.get(key);
                    key++;
                }
            } else {
                return;
            }
        } finally {
            lock.unlock();
            System.out.println("Lock released");
        }
    }
}
RequestHandler (a Runnable class) -> run() -> picks up one of the files and calls increment() and getCount(). In the main method I have to create multiple threads that access AccessCounter concurrently. Could anyone point me in the right direction? I am doing something wrong but haven't been able to find it.
public class RequestHandler implements Runnable {
    private AccessCounter ac;
    private File file;

    public void select(File file) {
        File file1 = new File("a.html");
        File file2 = new File("b.html");
        File file3 = new File("a.txt");
    }

    public File fileName() {
        return file;
    }

    public void run() {
        select(file);
        ac.increment();
        ac.getCount();
    }

    public static void main(String[] args) {
        Thread thread1 = new Thread(new RequestHandler());
        thread1.start();
        Thread thread2 = new Thread(new RequestHandler());
        thread2.start();
    }
}
Using a ConcurrentHashMap and an AtomicLong is all you need for thread safety and simplicity.
final ConcurrentMap<Path, AtomicLong> counterMap = new ConcurrentHashMap<>();

public void incrementFor(Path path) {
    counterMap.computeIfAbsent(path, p -> new AtomicLong()).incrementAndGet();
}

public long getCount(Path path) {
    AtomicLong l = counterMap.get(path);
    return l == null ? 0 : l.get();
}
computeIfAbsent will place a new AtomicLong as required, in a thread-safe manner.
Note: as ConcurrentMap supports concurrent access, you can have many threads using this map at the same time (provided they are accessing different Paths).
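For completeness, a hedged sketch of how these two methods might be wired into the question's setup: the class name PathHitCounter, the thread-pool size, and the sample path are illustrative assumptions, not part of the answer.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class PathHitCounter {
    private final ConcurrentMap<Path, AtomicLong> counterMap = new ConcurrentHashMap<>();

    public void incrementFor(Path path) {
        // computeIfAbsent atomically creates the counter on first access
        counterMap.computeIfAbsent(path, p -> new AtomicLong()).incrementAndGet();
    }

    public long getCount(Path path) {
        AtomicLong l = counterMap.get(path);
        return l == null ? 0 : l.get();
    }

    public static void main(String[] args) throws InterruptedException {
        PathHitCounter ac = new PathHitCounter();
        Path page = Paths.get("a.html");
        ExecutorService pool = Executors.newFixedThreadPool(4); // pool size is arbitrary
        for (int i = 0; i < 1000; i++) {
            pool.execute(() -> ac.incrementFor(page));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(ac.getCount(page)); // 1000
    }
}
```

No explicit lock is needed anywhere: the map handles concurrent insertion, and AtomicLong handles concurrent increments.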
private HashMap<Integer, java.nio.file.Path> counter = new HashMap<Integer, java.nio.file.Path>();
Instead of HashMap<Integer, java.nio.file.Path> you need HashMap<java.nio.file.Path, Integer>, as your intent is to keep a count per Path.
A good optimization, unless you want to stick with a plain HashMap and locking in the user code:
You can use ConcurrentHashMap<Path, AtomicInteger> instead of HashMap<..> above.
ConcurrentHashMap has a putIfAbsent(...) method that performs it atomically in a single statement.
AtomicInteger allows us to increment atomically using incrementAndGet method.
Both without extra locks/synchronization in the user code.
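A minimal sketch of the two points above, assuming a ConcurrentHashMap keyed by Path; the class and method names here are illustrative, not from the answer.

```java
import java.nio.file.Path;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class PathCounter {
    private final ConcurrentMap<Path, AtomicInteger> counter = new ConcurrentHashMap<>();

    public int countAccess(Path path) {
        // putIfAbsent atomically inserts the fresh counter only if the key is
        // missing, and returns the previously mapped value (null if our insert won)
        AtomicInteger fresh = new AtomicInteger();
        AtomicInteger existing = counter.putIfAbsent(path, fresh);
        AtomicInteger current = (existing != null) ? existing : fresh;
        // incrementAndGet is atomic, so no extra lock is needed in user code
        return current.incrementAndGet();
    }
}
```

Even if two threads race on the same new Path, both end up incrementing the single AtomicInteger that won the putIfAbsent.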
I need to implement thread-safe synchronization to multiple resources, where each resource can be accessed by one thread at a time, but different resources can be accessed concurrently. I have come up with the following code, meant to be used in a try-with-resources statement.
public class Gatekeeper implements AutoCloseable
{
    private static final ConcurrentMap<Long, ReentrantLock> lockMap = new ConcurrentHashMap<>();
    private final ReentrantLock lock;
    private final Long key;

    public Gatekeeper(Long key)
    {
        this.key = key;
        lock = lockMap.computeIfAbsent(key, (Long absentKey) -> new ReentrantLock(true)); // computeIfAbsent is an atomic operation
        try
        {
            lock.tryLock(30, TimeUnit.SECONDS);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            throw new Something(":(", e);
        }
    }

    @Override
    public void close()
    {
        if (lock.isHeldByCurrentThread())
        {
            lock.unlock();
        }
    }
}
One problem with this code is that no items are ever removed from the lockMap, and I don't know how to do this thread-safe. The following is definitely not thread-safe:
@Override
public void close()
{
    if (lock.isHeldByCurrentThread())
    {
        if (lock.getQueueLength() == 1) // todo: getQueueLength is meant for system monitoring purposes only
        {
            lockMap.remove(key); // todo: not thread-safe, queue length could have changed by now
        }
        lock.unlock();
    }
}
the documentation for getQueueLength:
Returns an estimate of the number of threads waiting to acquire this lock. The value is only an estimate because the number of threads may change dynamically while this method traverses internal data structures. This method is designed for use in monitoring of the system state, not for synchronization control.
Does anyone know a solution for this? Are there different strategies to achieve my goal?
After some more experimentation I came up with the code below, can anyone comment on whether this is a good approach and the code is correct?
public class Gatekeeper implements AutoCloseable
{
    private static final ConcurrentMap<Long, ReentrantLock> lockMap = new ConcurrentHashMap<>();
    private final ReentrantLock lock;
    private final Long key;
    private static final ConcurrentMap<Long, Integer> claimsPerLock = new ConcurrentHashMap<>();
    private static final Object mutex = new Object();

    public Gatekeeper(Long key)
    {
        this.key = key;
        synchronized (mutex)
        {
            lock = lockMap.computeIfAbsent(key, (Long absentKey) -> new ReentrantLock(true));
            claimsPerLock.compute(key, (k, oldValue) -> oldValue == null ? 1 : ++oldValue);
        }
        try
        {
            if (!lock.tryLock(30, TimeUnit.SECONDS))
            {
                throw new SomeException("Timeout occurred while trying to acquire lock");
            }
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            throw new SomeException("Interrupted", e);
        }
    }

    @Override
    public void close()
    {
        lock.unlock();
        synchronized (mutex)
        {
            claimsPerLock.compute(key, (k, oldValue) -> oldValue == null ? 0 : --oldValue);
            if (claimsPerLock.get(key) <= 0)
            {
                lockMap.remove(key);
                claimsPerLock.remove(key);
            }
        }
    }
}
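As an aside on the removal problem: ConcurrentHashMap.compute runs atomically per key, so a reference count kept inside the map entry can stand in for the global mutex. The sketch below is one possible shape of that idea, not the poster's code; all names are invented, and the fairness flag mirrors the post.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

public class RefCountedLocks {
    private static final class Entry {
        final ReentrantLock lock = new ReentrantLock(true);
        int refs; // only mutated inside compute, so the map's per-key atomicity guards it
    }

    private final ConcurrentMap<Long, Entry> lockMap = new ConcurrentHashMap<>();

    public void acquire(Long key) {
        // Bump the refcount before locking: the entry cannot be removed
        // while any thread holds, or is waiting for, its lock.
        Entry e = lockMap.compute(key, (k, cur) -> {
            if (cur == null) cur = new Entry();
            cur.refs++;
            return cur;
        });
        e.lock.lock();
    }

    public void release(Long key) {
        lockMap.get(key).lock.unlock();
        // Returning null from compute removes the mapping atomically,
        // but only when no holder or waiter remains.
        lockMap.compute(key, (k, cur) -> --cur.refs == 0 ? null : cur);
    }

    public int size() {
        return lockMap.size();
    }
}
```

Because any other acquirer increments refs inside its own compute before waiting on the lock, a release can never remove an entry out from under a waiter.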
I have this small sample of code. While modifying the list I lock it with synchronized, but reading the list still hits a ConcurrentModificationException, because without "synchronized" on the reading side the lock has no effect. Is it possible to lock an object for all threads which use the object, even the unsynchronized ones?
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Test {
    public static void main(String[] args) {
        final Random r = new Random(System.currentTimeMillis());
        final List<Integer> list = new ArrayList<>();
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    synchronized (list) {
                        list.add(r.nextInt());
                    }
                }
            }
        }).start();
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    for (Integer i : list) {
                        System.out.println(i);
                    }
                }
            }
        }).start();
    }
}
The background is that I don't want to change all the pieces of my code which read the object.
You might consider using a concurrent implementation of List, instead of ArrayList. Perhaps a CopyOnWriteArrayList.
final List<Integer> list = new CopyOnWriteArrayList<Integer>();
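As a quick illustration of why this avoids the exception: every CopyOnWriteArrayList iterator works on an immutable snapshot of the backing array, so writes during iteration are safe. The demo below is single-threaded just to show the semantics; the class name is mine.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotDemo {
    public static void main(String[] args) {
        List<Integer> list = new CopyOnWriteArrayList<>(Arrays.asList(1, 2, 3));
        for (Integer i : list) {  // the loop iterates over a snapshot of [1, 2, 3]
            list.add(i + 10);     // no ConcurrentModificationException is thrown
        }
        System.out.println(list); // [1, 2, 3, 11, 12, 13]
    }
}
```

The trade-off is that every mutation copies the whole array, so this fits read-mostly workloads; the question's tight add loop would copy constantly.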
Is it possible to lock an object for all threads which use the object.
In a word, No. When one thread enters a synchronized(foo) {...} block, that does not prevent other threads from accessing or modifying foo. The only thing it prevents is, it prevents other threads from synchronizing on the same object at the same time.
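A small demonstration of that point: one thread holds the list's monitor the whole time, yet an unsynchronized mutation from another thread still goes through. The latches only order the two threads deterministically; this demo is illustrative, not from the answer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class NoImplicitLock {
    static List<Integer> demo() throws InterruptedException {
        final List<Integer> list = new ArrayList<>();
        final CountDownLatch lockHeld = new CountDownLatch(1);
        final CountDownLatch mutated = new CountDownLatch(1);

        Thread locker = new Thread(() -> {
            synchronized (list) {    // holds list's monitor for its whole lifetime
                lockHeld.countDown();
                try {
                    mutated.await(); // park while the main thread mutates list
                } catch (InterruptedException ignored) {
                }
            }
        });
        locker.start();

        lockHeld.await();            // the monitor is now held by locker
        list.add(42);                // still succeeds: this code never synchronizes
        mutated.countDown();
        locker.join();
        return list;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // [42]
    }
}
```

The add succeeds while the monitor is held, which is exactly why every access path has to opt in to the same lock.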
What you can do, is you can create your own class that encapsulates both the lock and the data that the lock protects.
class MyLockedList {
    private final Object lock = new Object();
    private final List<Integer> theList = new ArrayList<>();

    void add(int i) {
        synchronized (lock) {
            theList.add(i);
        }
    }

    void printAll() {
        synchronized (lock) {
            for (Integer i : theList) {
                System.out.println(... i ...);
            }
        }
    }

    ...
}
If you can modify the function which concurrently uses the object, just add synchronized in every critical section:
while (true) {
    synchronized (list) {
        for (Integer i : list) {
            System.out.println(i);
        }
    }
}
If you can't, create a dedicated lock object that is responsible for synchronizing the threads:
final Object lock = new Object();
new Thread(new Runnable() {
    public void run() {
        // ...
        synchronized (lock) {
            // do unsynchronized work on list
        }
        // ...
    }
}).start();
new Thread(new Runnable() {
    public void run() {
        // ...
        synchronized (lock) {
            // do unsynchronized work on list
        }
        // ...
    }
}).start();
The latter may slow down the process if one of the functions is already doing some locking, but this way you can ensure you always synchronize access to shared objects.
I have a Metrics class that's supposed to keep track of how many transactions we process each second and how long they take. The relevant part of its structure looks like this:
public class Metrics {
    AtomicLong sent = new AtomicLong();
    AtomicLong totalElapsedMsgTime = new AtomicLong();
    AtomicLong sentLastSecond = new AtomicLong();
    AtomicLong avgTimeLastSecond = new AtomicLong();

    public void outTick(long elapsedMsgTime) {
        sent.getAndIncrement();
        totalElapsedMsgTime.getAndAdd(elapsedMsgTime);
    }

    class CalcMetrics extends TimerTask {
        @Override
        public void run() {
            sentLastSecond.set(sent.getAndSet(0));
            long tmpElapsed = totalElapsedMsgTime.getAndSet(0);
            long tmpSent = sentLastSecond.longValue();
            if (tmpSent != 0) {
                avgTimeLastSecond.set(tmpElapsed / tmpSent);
            } else {
                avgTimeLastSecond.set(0);
            }
        }
    }
}
My issue is that the outTick function gets called hundreds of times a second from lots of different threads. AtomicLong already ensures that each variable is individually thread-safe, and they don't interact with each other in that function, so I don't want a lock that makes one call to outTick block another thread's call to outTick. It's perfectly fine if a couple of different threads increment the sent variable and then both add to the totalElapsedMsgTime variable.
However, once it gets into CalcMetrics run method (which only happens once each second), they do interact. I want to ensure that I can pick up and reset both of those variables without being in the middle of an outTick call or having another outTick call occur between picking up one variable and the next.
Is there any way of doing this? (Does my explanation even make sense?) Is there a way of saying that A cannot interleave with B but multiple B's can interleave with each other?
EDIT:
I went with the ReadWriteLock that James suggested. Here's what my result looks like for anyone interested:
public class Metrics {
    AtomicLong numSent = new AtomicLong();
    AtomicLong totalElapsedMsgTime = new AtomicLong();
    long sentLastSecond = 0;
    long avgTimeLastSecond = 0;
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private final Lock readLock = readWriteLock.readLock();
    private final Lock writeLock = readWriteLock.writeLock();

    public void outTick(long elapsedMsgTime) {
        readLock.lock();
        try {
            numSent.getAndIncrement();
            totalElapsedMsgTime.getAndAdd(elapsedMsgTime);
        } finally {
            readLock.unlock();
        }
    }

    class CalcMetrics extends TimerTask {
        @Override
        public void run() {
            long elapsed;
            writeLock.lock();
            try {
                sentLastSecond = numSent.getAndSet(0);
                elapsed = totalElapsedMsgTime.getAndSet(0);
            } finally {
                writeLock.unlock();
            }
            if (sentLastSecond != 0) {
                avgTimeLastSecond = (elapsed / sentLastSecond);
            } else {
                avgTimeLastSecond = 0;
            }
        }
    }
}
The usual solution is to wrap all variables as one atomic data type.
class Data
{
    long v1, v2;
    Data add(Data another) { ... }
}

AtomicReference<Data> aData = ...;

public void outTick(long elapsedMsgTime)
{
    Data delta = new Data(1, elapsedMsgTime);
    aData.accumulateAndGet(delta, Data::add);
}
In your case, it may not be much faster than just locking.
There is another interesting lock in Java 8 - StampedLock. The javadoc example pretty much matches your use case. Basically, you can do optimistic reads on multiple variables and afterwards check that no writes were done during the reads. In your case, with "hundreds" of writes per second, the optimistic reads would mostly succeed.
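A sketch of that optimistic-read pattern, loosely adapted from the StampedLock javadoc to the two counters in the question; the class and field names here are my own, not from the answer.

```java
import java.util.concurrent.locks.StampedLock;

public class StampedMetrics {
    private final StampedLock sl = new StampedLock();
    private long sent;         // guarded by sl
    private long totalElapsed; // guarded by sl

    public void outTick(long elapsedMsgTime) {
        long stamp = sl.writeLock();
        try {
            sent++;
            totalElapsed += elapsedMsgTime;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public long[] snapshot() {
        long stamp = sl.tryOptimisticRead(); // non-blocking on the hot path
        long s = sent, t = totalElapsed;
        if (!sl.validate(stamp)) {           // a write slipped in: retry under a real read lock
            stamp = sl.readLock();
            try {
                s = sent;
                t = totalElapsed;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return new long[]{s, t};
    }
}
```

validate() guarantees the pair (sent, totalElapsed) was read consistently, which is exactly the "pick up both variables together" requirement.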
Sounds like you need a reader/writer lock. (java.util.concurrent.locks.ReentrantReadWriteLock).
Your outTick() function would lock the ReaderLock. Any number of threads are allowed to lock the ReaderLock at the same time.
Your calcMetrics() would lock the WriterLock. No new readers are allowed in once a thread is waiting for the writer lock, and the writer is not allowed in until all the readers are out.
You would still need the atomics to protect the individual counters that are incremented by outTick().
Use locks ( https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html ). Once you implement locks you'll have finer control. An additional side effect will be that you won't need to use AtomicLong anymore (although you still can); you can use volatile long instead, which would be more efficient. I did not make that change in the example.
Basically just create a new Object:
private Object lock = new Object();
Then, use the synchronized keyword with that object around all the code that should never happen at the same time as another synchronized block with the same lock. Example:
synchronized (lock)
{
    sent.getAndIncrement();
    totalElapsedMsgTime.getAndAdd(elapsedMsgTime);
}
So your whole program will look like this (note: untested code)
public class Metrics {
    private Object lock = new Object();
    AtomicLong sent = new AtomicLong();
    AtomicLong totalElapsedMsgTime = new AtomicLong();
    AtomicLong sentLastSecond = new AtomicLong();
    AtomicLong avgTimeLastSecond = new AtomicLong();

    public void outTick(long elapsedMsgTime) {
        synchronized (lock) {
            sent.getAndIncrement();
            totalElapsedMsgTime.getAndAdd(elapsedMsgTime);
        }
    }

    class CalcMetrics extends TimerTask {
        @Override
        public void run() {
            synchronized (lock) {
                sentLastSecond.set(sent.getAndSet(0));
                long tmpElapsed = totalElapsedMsgTime.getAndSet(0);
                long tmpSent = sentLastSecond.longValue();
                if (tmpSent != 0) {
                    avgTimeLastSecond.set(tmpElapsed / tmpSent);
                } else {
                    avgTimeLastSecond.set(0);
                }
            }
        }
    }
}
Edit: I threw together a quick (and ugly) efficiency test program and found that when I synchronize with locks, I get overall better performance. Note that the results of the first 2 runs are discarded because the timing results when the Java JIT still hasn't compiled all code paths to machine code are not representative of the long term runtime.
Results:
With Locks: 8365ms
AtomicLong: 21254ms
Code:
import java.util.concurrent.atomic.AtomicLong;

public class Main
{
    private AtomicLong testA_1 = new AtomicLong();
    private AtomicLong testB_1 = new AtomicLong();
    private volatile long testA_2 = 0;
    private volatile long testB_2 = 0;
    private Object lock = new Object();
    private volatile boolean a = false;
    private volatile boolean b = false;
    private volatile boolean c = false;
    private static boolean useLocks = false;

    public static void main(String args[])
    {
        System.out.println("Locks:");
        useLocks = true;
        test();
        System.out.println("No Locks:");
        useLocks = false;
        test();
        System.out.println("Locks:");
        useLocks = true;
        test();
        System.out.println("No Locks:");
        useLocks = false;
        test();
    }

    private static void test()
    {
        final Main main = new Main();
        new Thread()
        {
            public void run()
            {
                for (int i = 0; i < 80000000; ++i)
                    main.outTick(10);
                main.a = true;
            }
        }.start();
        new Thread()
        {
            public void run()
            {
                for (int i = 0; i < 80000000; ++i)
                    main.outTick(10);
                main.b = true;
            }
        }.start();
        new Thread()
        {
            public void run()
            {
                for (int i = 0; i < 80000000; ++i)
                    main.outTick(10);
                main.c = true;
            }
        }.start();
        long startTime = System.currentTimeMillis();
        // Okay this isn't the best way to do this, but it's good enough
        while (!main.a || !main.b || !main.c)
        {
            try
            {
                Thread.sleep(1);
            } catch (InterruptedException e)
            {
            }
        }
        System.out.println("Elapsed time: " + (System.currentTimeMillis() - startTime) + "ms");
        System.out.println("Test A: " + main.testA_1 + " " + main.testA_2);
        System.out.println("Test B: " + main.testB_1 + " " + main.testB_2);
        System.out.println();
    }

    public void outTick(long elapsedMsgTime)
    {
        if (!useLocks)
        {
            testA_1.getAndIncrement();
            testB_1.getAndAdd(elapsedMsgTime);
        }
        else
        {
            synchronized (lock)
            {
                ++testA_2;
                testB_2 += elapsedMsgTime;
            }
        }
    }
}
I am working on a project in which I am making connections to a database, and I need to see how many times exceptions occur, if there are any. The code is multithreaded: multiple threads make connections and insert into the database, so it is possible that at some point a connection gets lost, and we need to see how many times those exceptions have occurred.
So I wrote the code below. In the catch block I catch the exception, increment a counter every time there is an exception, and put it in a ConcurrentHashMap.
class Task implements Runnable {
    public static final AtomicInteger counter_sql_exception = new AtomicInteger(0);
    public static final AtomicInteger counter_exception = new AtomicInteger(0);
    public static ConcurrentHashMap<String, Integer> exceptionMap = new ConcurrentHashMap<String, Integer>();

    @Override
    public void run() {
        try {
            // Make a db connection and then execute the SQL
        } catch (SQLException e) {
            synchronized (this) {
                exceptionMap.put(e.getCause().toString(), counter_sql_exception.incrementAndGet());
            }
            LOG.Error("Log Exception");
        } catch (Exception e) {
            synchronized (this) {
                exceptionMap.put(e.getCause().toString(), counter_exception.incrementAndGet());
            }
            LOG.Error("Log Exception");
        }
    }
}
My question is: today I had a code review, and one of my senior team members said you won't need synchronized(this) around the exceptionMap in the catch block. I said yes we will: incrementing the counter is atomic, and putting a new value in the map is atomic, but doing both without synchronization is not atomic. And he said ConcurrentHashMap will be doing this for you.
So do I need the synchronized(this) block around that exceptionMap or not? If not, why not? And if yes, what reason should I quote to him?
If you are trying to count the number of times each exception occurred, then you need something like this:
private static final ConcurrentMap<String, AtomicInteger> exceptionMap = new ConcurrentHashMap<String, AtomicInteger>();

private static void addException(String cause) {
    AtomicInteger count = exceptionMap.get(cause);
    if (count == null) {
        count = new AtomicInteger();
        AtomicInteger curCount = exceptionMap.putIfAbsent(cause, count);
        if (curCount != null) {
            count = curCount;
        }
    }
    count.incrementAndGet();
}
note that having a static map of exceptions is a resource leak unless you periodically clean it out.
As #dnault mentioned, you could also use guava's AtomicLongMap.
UPDATE: some comments on your original:
You are correct that you do need another wrapping synchronized block to ensure that the latest value actually makes it into the map. However, as @Perception already pointed out in the comments, you are synchronizing on the wrong object instance: since you are updating static maps, you need a static instance, such as Task.class.
However, you are using a static counter but a String key, which can differ between exceptions, so you aren't actually counting each exception cause; you are sticking unrelated running totals in as the map values.
Lastly, as shown in my example, you can solve the aforementioned issues and discard the synchronized blocks completely by making appropriate use of the ConcurrentMap.
And this way should also work.
private static final ConcurrentMap<String, Integer> exceptionMap = new ConcurrentHashMap<String, Integer>();

private static void addException(String cause) {
    Integer oldVal, newVal;
    do {
        oldVal = exceptionMap.get(cause);
        newVal = (oldVal == null) ? 1 : (oldVal + 1);
    } while (!exceptionMap.replace(cause, oldVal, newVal));
}
ConcurrentHashMap doesn't allow null values, so replace will throw an exception if called with oldValue == null. I used the following code to increase a counter by delta, returning the old value.
private final ConcurrentMap<Integer, Integer> counters = new ConcurrentHashMap<Integer, Integer>();

private Integer counterAddDeltaAndGet(Integer key, Integer delta) {
    Integer oldValue = counters.putIfAbsent(key, delta);
    if (oldValue == null) return null;
    while (!counters.replace(key, oldValue, oldValue + delta)) {
        oldValue = counters.get(key);
    }
    return oldValue;
}
You don't have to use synchronized block and AtomicInteger. You can do it just with ConcurrentHashMap.compute method which is a thread-safe atomic operation. So your code will look something like this
public class Task implements Runnable {
    public static final Map<String, Integer> EXCEPTION_MAP = new ConcurrentHashMap<String, Integer>();

    @Override
    public void run() {
        try {
            // Make a db connection and then execute the SQL
        } catch (Exception e) {
            EXCEPTION_MAP.compute(e.getCause().toString(), (key, value) -> {
                if (value == null) {
                    return 1;
                }
                return ++value;
            });
        }
    }
}
In my application I'm performing somewhat heavy lookup operations. These operations must be done within a single thread (persistence framework limitation).
I want to cache the results. Thus, I have a class UMRCache, with an inner class Worker:
public class UMRCache {
    private Worker worker;
    private List<String> requests = Collections.synchronizedList(new ArrayList<String>());
    private Map<String, Object> cache = Collections.synchronizedMap(new HashMap<String, Object>());

    public UMRCache(Repository repository) {
        this.worker = new Worker(repository);
        this.worker.start();
    }

    public Object get(String key) {
        if (this.cache.containsKey(key)) {
            // If the element is already cached, get value from cache
            return this.cache.get(key);
        }
        synchronized (this.requests) {
            // Add request to queue
            this.requests.add(key);
            // Notify the Worker thread that there's work to do
            this.requests.notifyAll();
        }
        synchronized (this.cache) {
            // Wait until Worker has updated the cache
            this.cache.wait();
            // Now, cache should contain a value for key
            return this.cache.get(key);
        }
    }

    private class Worker extends Thread {
        public void run() {
            boolean doRun = true;
            while (doRun) {
                synchronized (requests) {
                    while (requests.isEmpty() && doRun) {
                        requests.wait(); // Wait until there's work to do
                    }
                    synchronized (cache) {
                        Set<String> processed = new HashSet<String>();
                        for (String key : requests) {
                            // Do the lookup
                            Object result = repository.lookup(key);
                            // Save to cache
                            cache.put(key, result);
                            processed.add(key);
                        }
                        // Remove processed requests from queue
                        requests.removeAll(processed);
                        // Notify all threads waiting for their requests to be served
                        cache.notifyAll();
                    }
                }
            }
        }
    }
}
I have a testcase for this:
public class UMRCacheTest extends TestCase {
    private UMRCache umrCache;

    public void setUp() throws Exception {
        super.setUp();
        umrCache = new UMRCache(repository);
    }

    public void testGet() throws Exception {
        for (int i = 0; i < 10000; i++) {
            final List fetched = Collections.synchronizedList(new ArrayList());
            final String[] keys = new String[]{"key1", "key2"};
            final String[] expected = new String[]{"result1", "result2"};
            final Random random = new Random();
            Runnable run1 = new Runnable() {
                public void run() {
                    for (int i = 0; i < keys.length; i++) {
                        final String key = keys[i];
                        final Object result = umrCache.get(key);
                        assertEquals(key, results[i]);
                        fetched.add(um);
                        try {
                            Thread.sleep(random.nextInt(3));
                        } catch (InterruptedException ignore) {
                        }
                    }
                }
            };
            Runnable run2 = new Runnable() {
                public void run() {
                    for (int i = keys.length - 1; i >= 0; i--) {
                        final String key = keys[i];
                        final String result = umrCache.get(key);
                        assertEquals(key, results[i]);
                        fetched.add(um);
                        try {
                            Thread.sleep(random.nextInt(3));
                        } catch (InterruptedException ignore) {
                        }
                    }
                }
            };
            final Thread thread1 = new Thread(run1);
            thread1.start();
            final Thread thread2 = new Thread(run2);
            thread2.start();
            final Thread thread3 = new Thread(run1);
            thread3.start();
            thread1.join();
            thread2.join();
            thread3.join();
            umrCache.dispose();
            assertEquals(6, fetched.size());
        }
    }
}
The test fails randomly, at about 1 out of 10 runs. It will fail at the last assertion: assertEquals(6, fetched.size()), at assertEquals(key, results[i]), or sometimes the test runner will never finish.
So there's something buggy about my thread logic. Any tips?
EDIT:
I might have cracked it now, thanks to all who have helped.
The solution seems to be:
public Object get(String key) {
    if (this.cache.containsKey(key)) {
        // If the element is already cached, get value from cache
        return this.cache.get(key);
    }
    synchronized (this.requests) {
        // Add request to queue
        this.requests.add(key);
        // Notify the Worker thread that there's work to do
        this.requests.notifyAll();
    }
    synchronized (this.cache) {
        // Wait until Worker has updated the cache
        while (!this.cache.containsKey(key)) {
            this.cache.wait();
        }
        // Now, cache should contain a value for key
        return this.cache.get(key);
    }
}
The get() method logic can miss the result and get stuck:
synchronized (this.requests) {
    // Add request to queue
    this.requests.add(key);
    // Notify the Worker thread that there's work to do
    this.requests.notifyAll();
}
// ----- MOMENT 1. If at this moment the Worker puts the result into the cache,
// it will be missed, since the notification will be lost
synchronized (this.cache) {
    // Wait until Worker has updated the cache
    this.cache.wait();
    // ----- MOMENT 2. May be too late, since the cache notification already happened at MOMENT 1
    // Now, cache should contain a value for key
    return this.cache.get(key);
}
The variable fetched in your test is an ArrayList and is accessed and updated from your two anonymous Runnable instances.
ArrayList is not thread safe, from the documentation:
Note that this implementation is not synchronized. If multiple threads access an ArrayList instance concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more elements, or explicitly resizes the backing array; merely setting the value of an element is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the list. If no such object exists, the list should be "wrapped" using the Collections.synchronizedList method. This is best done at creation time, to prevent accidental unsynchronized access to the list.
Hence I think your test needs a little adjusting.
I noticed your cache lookup isn't an atomic operation:
if (this.cache.containsKey(key)) {
    // If the element is already cached, get value from cache
    return this.cache.get(key);
}
Since you never delete from the cache in your code, this code will always get some value. But if, in future, you plan to clean the cache, the lack of atomicity here will become a problem.
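If eviction is ever added, one way to make the check-and-get atomic is ConcurrentHashMap.computeIfAbsent, which performs the lookup-or-compute as a single atomic step per key. Note that it runs the computation on the calling thread, so this only illustrates the atomicity point; it is not a drop-in replacement for the single-worker design above. The stand-in lookup method below is invented for the sketch.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCacheLookup {
    static final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<>();
    static final AtomicInteger lookups = new AtomicInteger();

    // Stand-in for repository.lookup(key); counts how often it actually runs
    static Object lookup(String key) {
        lookups.incrementAndGet();
        return "result-for-" + key;
    }

    public static void main(String[] args) {
        // The "is it cached? if not, compute and insert" sequence is one atomic call,
        // so a concurrent eviction can no longer slip between the check and the get.
        Object a = cache.computeIfAbsent("key1", AtomicCacheLookup::lookup);
        Object b = cache.computeIfAbsent("key1", AtomicCacheLookup::lookup);
        System.out.println(a + " " + lookups.get()); // result-for-key1 1
    }
}
```

The second call returns the cached value without invoking the lookup again, even under concurrent access.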