public static ConcurrentHashMap<Integer,Session> USER_SESSIONS...
Everything works fine. But what if the system must allow two authorized sessions with the same user ID? Roughly speaking, two PCs logged into the same account, each with its own session.
I tried this:
ConcurrentHashMap<Integer,List<Session>> USER_SESSIONS....
...............
private void addUser(Session session) {
    List<Session> userSessions = Server.USER_SESSIONS.get(session.mUserId);
    if (userSessions == null) {
        userSessions = new ArrayList<Session>(); // List is an interface; instantiate a concrete class
        userSessions.add(session);
        Server.USER_SESSIONS.put(session.getUserId(), userSessions);
    } else {
        userSessions.add(session);
    }
}
private void removeUser(Session session) {
    List<Session> userSessions = Server.USER_SESSIONS.get(session.mUserId);
    if (userSessions != null) {
        userSessions.remove(session);
        if (userSessions.size() == 0) {
            Server.USER_SESSIONS.remove(session.getUserId());
        }
    }
}
.................
private void workWithUsers(int userId) {
    for (Session session : Server.USER_SESSIONS.get(userId)) {
        <do it!>
    }
}
Naturally, all these methods can be called from different threads, and I am getting errors related to the List. That is to be expected: while one thread is inside the foreach loop, removeUser can remove a session from another thread. What should I do? How can I make all other threads wait for the List until the thread currently occupying it is finished? So far I have done this :)
public static ConcurrentHashMap<Integer,ConcurrentHashMap<Session,Session>> USER_SESSIONS
since ConcurrentHashMap is thread-safe. But I think this is a clumsy solution. Many thanks in advance for your help!
P.S: JRE 1.6
Please excuse my English.
You could use List myList = Collections.synchronizedList(new ArrayList<String>()); if you don't want to use CopyOnWriteArrayList.
The only thing you need to keep in mind is that you must synchronize the code where you iterate over the list. You can see more info here: Collections.synchronizedList and synchronized
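To see why the iteration itself needs guarding, here is a minimal sketch (class and method names are mine) of the failure mode: structurally modifying a plain ArrayList while a for-each loop is walking it trips the fail-fast iterator with a ConcurrentModificationException, which is essentially what happens across threads in the question:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class IterationGuardDemo {
    // Removing elements while a for-each loop is in flight trips the
    // fail-fast iterator, just like removeUser racing workWithUsers.
    public static boolean removalDuringIterationFails() {
        List<String> sessions = new ArrayList<String>();
        sessions.add("s1");
        sessions.add("s2");
        sessions.add("s3");
        try {
            for (String s : sessions) {
                sessions.remove(s); // structural modification mid-iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }
}
```

With synchronized (list) around both the loop and every mutation, the two operations can no longer interleave.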
Using List myList = Collections.synchronizedList(new ArrayList<String>()); would be better.
But if there are many more read operations than writes, you can also use CopyOnWriteArrayList, which is safe to iterate.
Using a thread safe list is still not enough to prevent race conditions in this case.
There's a possibility that two addUser calls at the same time may overwrite each other's puts. Also, an add could happen between the size check and the remove call in removeUser.
You need something like this (not tested). This code assumes that a session will not be removed before the call that adds it.
private void addUser(Session session) {
    while (true) {
        List<Session> userSessions = Collections.synchronizedList(new ArrayList<Session>());
        List<Session> oldSessions = USER_SESSIONS.putIfAbsent(session.mUserId, userSessions);
        if (oldSessions != null) {
            userSessions = oldSessions;
        }
        userSessions.add(session);
        // want to make sure the map still contains this list and not another,
        // so checking references;
        // this could be false if the list was removed since the call to putIfAbsent
        if (userSessions == USER_SESSIONS.get(session.mUserId)) {
            break;
        }
    }
}
private void removeUser(Session session) {
    List<Session> userSessions = USER_SESSIONS.get(session.mUserId);
    if (userSessions != null) {
        // make the whole operation synchronized to make sure a new session is not added
        // after the check for empty
        synchronized (userSessions) {
            userSessions.remove(session);
            if (userSessions.isEmpty()) {
                USER_SESSIONS.remove(session.mUserId);
            }
        }
    }
}
You could try to use CopyOnWriteArrayList or CopyOnWriteArraySet in your case.
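As a less ad-hoc variant of the question's nested ConcurrentHashMap<Session,Session> trick, Java 6 also offers Collections.newSetFromMap, which wraps a ConcurrentHashMap into a thread-safe Set. A minimal sketch (names are illustrative):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentSetDemo {
    // A thread-safe Set view backed by a ConcurrentHashMap; available on JRE 1.6.
    public static Set<String> newConcurrentSet() {
        return Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
    }
}
```

This gives Set semantics (no duplicates) without storing each session as both key and value.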
Related
I would like to convert the following code to fit a multithreaded environment.
List<Observer> list = new ArrayList<>();

public void removeObserver(Observer p) {
    for (Observer observer : list) {
        if (observer.equals(p)) {
            list.remove(observer);
            break;
        }
    }
}

public void addObserver(Observer p) {
    list.add(p);
}

public void notifyObserver(Event obj) {
    for (Observer observer : list) {
        observer.notify(obj);
    }
}
Definitely, one of the easiest ways to do so is to add the synchronized keyword, which ensures only one thread can run the logic, thereby ensuring the result is correct.
However, is there a better way to solve the issue? I have done some research and found that I can use Collections.synchronizedList; I also noticed that such a list's iterator is not thread-safe, so I should avoid using a for-each loop or the iterator directly unless I wrap it in synchronized (list).
I just don't want to use synchronized, and wonder if there is another possible approach. Here is my second attempt.
List<Observer> list = Collections.synchronizedList(new ArrayList<Observer>()); // which is thread safe

public void removeObserver(Observer p) {
    // as the list may get modified, I create a copy first
    List<Observer> copy = new CopyOnWriteArrayList(list);
    for (Observer observer : copy) {
        if (observer.equals(p)) {
            // but now, no use of iterator
            list.remove(observer); // remove it from the original list
            break;
        }
    }
}

public void addObserver(Observer p) {
    list.add(p);
}

public void notifyObserver(Event obj) {
    List<Observer> copy = new CopyOnWriteArrayList(list);
    // not using the iterator directly, as a thread-safe list's iterator can be thread-unsafe,
    // and the for-each loop uses an iterator under the hood
    for (Observer observer : copy) {
        observer.notify(obj);
    }
}
I just want to ask if my second attempt is thread-safe? Also, is there a better approach than my proposed second method?
Definitely, one of the easiest ways to do so is to add the synchronized keyword, which ensures only one thread can run the logic, thereby ensuring the result is correct.
This is correct.
However, is there better way to solve the issue?
Possibly. But let's take a look at your second attempt:
List<Observer> list = Collections.synchronizedList(new ArrayList<Observer>());
// which is thread safe
Yes it is thread-safe. With certain constraints.
public void removeObserver(Observer p) {
    // as the list may get modify, I create a copy first
    List<Observer> copy = new CopyOnWriteArrayList(list);
    ...
Three problems here:
You are creating a copy of the list. That is an O(N) operation.
The CopyOnWriteArrayList constructor is going to iterate list ... and iteration of a list created by synchronizedList is not atomic / thread-safe, so you have a race condition.
There is no actual benefit to using CopyOnWriteArrayList here over (say) ArrayList. The copy object is local and thread-confined, so it doesn't need to be thread-safe.
In summary, this is not thread-safe AND it is more expensive than simply making the original methods synchronized.
A possibly better way:
List<Observer> list = new CopyOnWriteArrayList<Observer>();

public void removeObserver(Observer p) {
    list.remove(p);
}

public void addObserver(Observer p) {
    list.add(p);
}

public void notifyObserver(Event obj) {
    for (Observer observer : list) {
        observer.notify(obj);
    }
}
This is thread-safe with the caveat that an Observer added while a notifyObserver call is in progress will not be notified.
The only potential problem is that mutations to a CopyOnWriteArrayList are expensive, since each one copies the entire list. So if the ratio of mutations to notifications is too high, this may be more expensive than the solution using synchronized methods. On the other hand, if multiple threads call notifyObserver, those calls can proceed in parallel.
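The snapshot semantics behind the caveat above can be sketched as follows (demo names are mine): a CopyOnWriteArrayList iterator walks the array as it was when iteration started, so elements added mid-loop are simply not visited:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotIterationDemo {
    public static int seenDuringIteration() {
        List<String> observers = new CopyOnWriteArrayList<String>();
        observers.add("a");
        observers.add("b");
        int seen = 0;
        for (String o : observers) {
            // Safe: the iterator works on a snapshot, so this never throws
            observers.add("late-" + o);
            seen++;
        }
        return seen; // only the two original elements were visited
    }
}
```

This is exactly why an Observer registered during notifyObserver misses that round of notifications but is picked up by the next one.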
I have a Set with values of any type and an AtomicBoolean that indicates whether the functionality provided by that class is running.
private Set<Object> set = new HashSet<>();
private AtomicBoolean running;
Now, I have two methods: one adds objects to the set, and the other serves as a setup method for my class.
public void start() {
    // ...
    set.forEach(someApi::addObject);
    // ...
    running.set(true);
}
public void addObject(Object o) {
    set.add(o);
    if (running.get()) {
        someApi.addObject(o);
    }
}
However, there is a problem with this code: if addObject is invoked from another thread while the start method is iterating through the set, running is still false, so the object will not be added to the API.
Question: How can I guarantee that all objects in the set, and objects added with addObject, will be added to the API exactly once?
My ideas:
use a lock and block the addObject method while the setup is adding objects to the API (or make both methods synchronized, which will slightly decrease performance, though)
Question: How can I guarantee that all objects in the set, and objects added with addObject, will be added to the API exactly once?
You have to be careful here because this gets close to the old "double-checked locking bug".
If I understand you question you want to:
queue the objects passed into addObject(...) in the set before the call to start().
then when start() is called, call the API method on the objects in the set.
handle the overlap if additional objects are added during the call to start()
call the method once and only once on all objects passed to addObject(...).
What is confusing is that your API call is also named addObject(). I assume this is different from the addObject(...) method in your code sample. I'm going to rename it below to someApiMethod(...) to show that it's not recursive.
The easiest way is going to be, unfortunately, having a synchronized block in each of the methods:
private final Set<Object> set = new HashSet<>();

public void start() {
    synchronized (set) {
        set.forEach(someApi::someApiMethod);
    }
}

public void addObject(Object obj) {
    synchronized (set) {
        if (set.add(obj)) {
            someApi.someApiMethod(obj);
        }
    }
}
Making it faster is going to take more complicated code. One thing you could do is use a ConcurrentHashMap and an AtomicBoolean running. Something like:
private final ConcurrentMap<Object, Object> map = new ConcurrentHashMap<>();
private final Set<Object> beforeStart = new HashSet<>();
private final AtomicBoolean running = new AtomicBoolean();

public void start() {
    synchronized (beforeStart) {
        for (Object obj : beforeStart) {
            doIfAbsent(obj);
        }
        running.set(true);
    }
}

public void addObject(Object obj) {
    if (running.get()) {
        doIfAbsent(obj);
    } else {
        synchronized (beforeStart) {
            // we have to test running again once we get the lock
            if (running.get()) {
                doIfAbsent(obj);
            } else {
                beforeStart.add(obj);
            }
        }
    }
}

private void doIfAbsent(Object obj) {
    // putIfAbsent returns null if the key was not already present
    if (map.putIfAbsent(obj, obj) == null) {
        someApi.someApiMethod(obj);
    }
}
This is pretty complicated and it may not be any faster depending on how large your hash map is and other factors.
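The heart of the fast path is that ConcurrentMap.putIfAbsent returns null exactly once per key, for whichever caller wins the insert. A stripped-down sketch (the counter is mine, standing in for the API call):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class OnceDemo {
    private static final ConcurrentMap<String, Boolean> map =
            new ConcurrentHashMap<String, Boolean>();
    static int apiCalls = 0; // stands in for someApi.someApiMethod(obj)

    public static void doIfAbsent(String obj) {
        // putIfAbsent returns null only for the caller that actually inserted the key
        if (map.putIfAbsent(obj, Boolean.TRUE) == null) {
            apiCalls++;
        }
    }
}
```

However many times (or from however many threads) doIfAbsent is called with the same key, the guarded block runs once.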
Is it possible to modify a Runnable object after it has been submitted to the executor service (single thread with an unbounded queue)?
For example:
public class Test {

    @Autowired
    private Runner userRunner;

    @Autowired
    private ExecutorService executorService;

    public void init() {
        for (int i = 0; i < 100; ++i) {
            userRunner.add("Temp" + i);
            Future runnerFuture = executorService.submit(userRunner);
        }
    }
}
public class Runner implements Runnable {

    private List<String> users = new ArrayList<>();

    public void add(String user) {
        users.add(user);
    }

    public void run() {
        /* Something here to do with users */
    }
}
As you can see in the above example, we submit a runnable object and also modify its contents inside the loop. Will the first submit to the executor service use the newly added users? Consider that the run method is doing something really intensive and subsequent submits are queued.
If we submit a runnable object and modify its contents inside the loop, will the first submit to the executor service use the newly added users?
Only if the users ArrayList is properly synchronized. What you are doing is modifying the users field from two different threads, which can cause exceptions and other unpredictable results. Synchronization ensures mutual exclusion, so multiple threads aren't changing the ArrayList at the same time unexpectedly, as well as memory synchronization, which ensures that one thread's modifications are seen by the others.
What you could do is to add synchronization to your example:
public void add(String user) {
    synchronized (users) {
        users.add(user);
    }
}
...
public void run() {
    synchronized (users) {
        /* Something here to do with users */
    }
}
Another option would be to synchronize the list:
// you can't use this if you are iterating on this list (for, etc.)
private List<String> users = Collections.synchronizedList(new ArrayList<>());
However, you'll need to manually synchronize if you are using a for loop on the list or otherwise iterating across it.
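That manual synchronization can be sketched like this (names are mine): the loop must hold the list's own monitor, which is the lock Collections.synchronizedList uses internally for its individual operations:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncIterationDemo {
    public static int countUsers() {
        List<String> users = Collections.synchronizedList(new ArrayList<String>());
        users.add("Temp0");
        users.add("Temp1");
        int n = 0;
        // Iteration is the one operation synchronizedList does not guard for you
        synchronized (users) {
            for (String u : users) {
                if (u != null) {
                    n++;
                }
            }
        }
        return n;
    }
}
```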
The cleanest, most straightforward approach would be to call cancel on the Future and then submit a new task with the updated user list. Otherwise, not only do you face visibility issues from tampering with the list across threads, but there is no way to know whether you are modifying a task that is already running.
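Another way to sidestep the shared mutable list entirely, sketched below with illustrative names: give each submitted task its own copy of the users at submission time, so later add calls cannot affect tasks already queued:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SnapshotSubmit {
    public static int snapshotSize() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            List<String> users = new ArrayList<String>();
            users.add("Temp0");
            // Each task captures its own copy; later mutations are invisible to it
            final List<String> snapshot = new ArrayList<String>(users);
            Future<Integer> size = pool.submit(new Callable<Integer>() {
                public Integer call() {
                    return snapshot.size();
                }
            });
            users.add("Temp1"); // does not change the already-submitted snapshot
            return size.get();
        } catch (Exception e) {
            return -1;
        } finally {
            pool.shutdown();
        }
    }
}
```

With per-task snapshots there is nothing shared to synchronize, at the cost of one list copy per submit.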
I have a service bean which provides access to a Map. From time to time I need to rebuild the content of the Map, which takes several seconds, and I want to block access to the map while it is rebuilding, because it can be accessed from different threads.
@Service
public class MyService {

    private Map<Key,Value> cache = null;
    private ReentrantLock reentrantLock = new ReentrantLock();

    public void rebuildCache() {
        try {
            reentrantLock.lock();
            cache = new ConcurrentHashMap<>();
            ... // processing time-consuming stuff and building up the cache
        } finally {
            reentrantLock.unlock();
        }
    }

    public Value getValue(Key key) {
        while (reentrantLock.isLocked()) {}
        return cache.get(key);
    }
    ...
}
As you can see I use
while (reentrantLock.isLocked()){}
to check if the lock is locked and wait until it's unlocked. This solution seems quite dirty. Is there a better solution?
Use a ReentrantReadWriteLock instead.
In your write method:
theLock.writeLock().lock();
try {
    // update the map
} finally {
    theLock.writeLock().unlock();
}
In the read method, use the .readLock() instead.
This has the problem, however, that during the update of the map all readers will be blocked. Another solution would be to build the new, updated map on the side and then swap the reference to the old map for it, using a plain old synchronized block.
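The reference-swap idea can be sketched like this (class name is mine): the rebuild happens off to the side, and a volatile write publishes the finished map, so readers always see either the complete old map or the complete new one and never block:

```java
import java.util.HashMap;
import java.util.Map;

public class VolatileSwapCache {
    // volatile guarantees readers see a fully built map after the swap
    private volatile Map<String, Integer> cache = new HashMap<String, Integer>();

    public void rebuild(Map<String, Integer> freshData) {
        Map<String, Integer> next = new HashMap<String, Integer>(freshData);
        cache = next; // single atomic reference swap
    }

    public Integer get(String key) {
        return cache.get(key);
    }
}
```

The trade-off is that readers may briefly see stale data while a rebuild is in progress, which is often acceptable for a cache.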
More importantly though, your use of locks is incorrect. You should do:
theLock.lock();
try {
    // whatever
} finally {
    theLock.unlock();
}
Imagine what happens if locking fails with your current code: you'll still try to unlock and you'll end up with an IllegalMonitorStateException.
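A quick sketch confirming that failure mode (demo name is mine): unlocking a ReentrantLock the current thread does not hold throws IllegalMonitorStateException, which is why unlock belongs in a finally block that pairs with a lock call made before the try:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockIdiomDemo {
    // Unlocking a lock this thread never acquired is an error, not a no-op
    public static boolean unlockWithoutLockThrows() {
        ReentrantLock lock = new ReentrantLock();
        try {
            lock.unlock();
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;
        }
    }
}
```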
I would propose a ReadWriteLock.
With it, any number of threads can read concurrently, as long as the write lock is not held.
@Service
public class MyService {

    private Map<Key,Value> cache = null;
    private ReentrantReadWriteLock reentrantLock = new ReentrantReadWriteLock();

    public void rebuildCache() {
        reentrantLock.writeLock().lock();
        try {
            cache = new ConcurrentHashMap<>();
            ... // processing time-consuming stuff and building up the cache
        } finally {
            reentrantLock.writeLock().unlock();
        }
    }

    public Value getValue(Key key) {
        reentrantLock.readLock().lock();
        try {
            return cache.get(key);
        } finally {
            reentrantLock.readLock().unlock();
        }
    }
    ...
}
We need to lock a method responsible for loading database data into a HashMap-based cache.
A possible situation is that a second thread tries to access the method while the first method is still loading cache.
We consider the second thread's effort in this case to be superfluous. We would therefore like to have that second thread wait until the first thread is finished, and then return (without loading the cache again).
What I have works, but it seems quite inelegant. Are there better solutions?
private static final ReentrantLock cacheLock = new ReentrantLock();

private void loadCachemap() {
    if (cacheLock.tryLock()) {
        try {
            this.cachemap = retrieveParamCacheMap();
        } finally {
            cacheLock.unlock();
        }
    } else {
        try {
            cacheLock.lock(); // wait until the thread doing the load is finished
        } finally {
            try {
                cacheLock.unlock();
            } catch (IllegalMonitorStateException e) {
                logger.error("loadCachemap() finally {}", e);
            }
        }
    }
}
I prefer a more resilient approach using read locks AND write locks. Something like:
private static final ReadWriteLock cacheLock = new ReentrantReadWriteLock();
private static final Lock cacheReadLock = cacheLock.readLock();
private static final Lock cacheWriteLock = cacheLock.writeLock();

private void loadCache() throws Exception {
    // Expiry.
    while (storeCache.expired(CachePill)) {
        /*
         * Allow only one in - all others will wait for 5 seconds before checking again.
         *
         * Eventually the one that got in will finish loading, refresh the cache pill
         * and let all the waiting ones out.
         *
         * Also waits until all read locks have been released - not sure if that might
         * cause problems under busy conditions.
         */
        if (cacheWriteLock.tryLock(5, TimeUnit.SECONDS)) {
            try {
                // Got a lock! Start the rebuild if still out of date.
                if (storeCache.expired(CachePill)) {
                    rebuildCache();
                }
            } finally {
                cacheWriteLock.unlock();
            }
        }
    }
}
Note that storeCache.expired(CachePill) detects a stale cache, which may be more than you want, but the concept here is the same: establish a write lock before updating the cache, which will block all read attempts until the rebuild is done. Also, manage multiple write attempts in a loop of some sort, or just drop out and let the read lock wait for access.
A read from the cache now looks like this:
public Object load(String id) throws Exception {
    Store store = null;
    // Make sure the cache is fresh.
    loadCache();
    try {
        // Establish a read lock so we do not attempt a read while the cache is being updated.
        cacheReadLock.lock();
        store = storeCache.get(id);
    } finally {
        // Make sure the lock is cleared.
        cacheReadLock.unlock();
    }
    return store;
}
The primary benefit of this form is that read access does not block other read access but everything stops cleanly during a rebuild - even other rebuilds.
You didn't say how complicated your structure is or how much concurrency / contention you need. There are many ways to address your need.
If your data is simple, use a ConcurrentHashMap or similar to hold it. Then just read and write from your threads freely.
Another alternative is to use the actor model and put reads and writes on the same queue.
If all you need is to fill a read-only map which is initialized from the database once requested, you could use any form of double-checked locking, which may be implemented in a number of ways. The easiest variant would be the following:
private volatile Map<T, V> cacheMap;

public void loadCacheMap() {
    if (cacheMap == null) {
        synchronized (this) {
            if (cacheMap == null) {
                cacheMap = retrieveParamCacheMap();
            }
        }
    }
}
But I would personally prefer to avoid any form of synchronization here and just make sure that the initialization is done before any other thread can access it (for example, in the form of an init method in a DI container). In that case you would even avoid the overhead of volatile.
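A self-contained sketch of the double-checked pattern above (the counter is mine, standing in for retrieveParamCacheMap), showing that the expensive load runs only once no matter how many times loadCacheMap is called:

```java
import java.util.HashMap;
import java.util.Map;

public class DclDemo {
    private static volatile Map<String, String> cacheMap;
    static int buildCount = 0; // counts how often the expensive load actually runs

    static Map<String, String> retrieveParamCacheMap() {
        buildCount++;
        Map<String, String> m = new HashMap<String, String>();
        m.put("param", "value");
        return m;
    }

    public static void loadCacheMap() {
        if (cacheMap == null) {                  // first check, without the lock
            synchronized (DclDemo.class) {
                if (cacheMap == null) {          // second check, under the lock
                    cacheMap = retrieveParamCacheMap();
                }
            }
        }
    }
}
```

The volatile on cacheMap is what makes this form of double-checked locking safe on the modern Java memory model.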
EDIT: The answer above works only when a single initial load is expected. In the case of multiple updates, you could replace the tryLock with some other form of test and test-and-set, for example using something like this:
private final AtomicReference<CountDownLatch> sync =
        new AtomicReference<>(new CountDownLatch(0));

private void loadCacheMap() throws InterruptedException {
    CountDownLatch oldSync = sync.get();
    if (oldSync.getCount() == 0) { // if nobody is updating now
        CountDownLatch newSync = new CountDownLatch(1);
        if (sync.compareAndSet(oldSync, newSync)) {
            cacheMap = retrieveParamCacheMap();
            newSync.countDown();
            return;
        }
    }
    sync.get().await(); // await can throw InterruptedException, hence the throws clause
}