Simple Java String cache with expiration possibility

I am looking for a concurrent Set with expiration functionality for a Java 1.5 application. It would be used as a simple way to store / cache names (i.e. String values) that expire after a certain time.
The problem I'm trying to solve is that two threads should not be able to use the same name value within a certain time (so this is sort of a blacklist ensuring that the same "name", which is something like a message reference, can't be reused by another thread until a certain time period has passed). I do not control name generation myself, so there's nothing I can do about the actual names / strings to enforce uniqueness; it should rather be seen as a throttling / limiting mechanism to prevent the same name from being used more than once per second.
Example:
Thread #1 does cache.add("unique_string", 1) which stores the name "unique_string" for 1 second.
If any thread is looking for "unique_string" by doing e.g. cache.get("unique_string") within 1 second it will get a positive response (item exists), but after that the item should be expired and removed from the set.
The container would at times handle 50-100 inserts / reads per second.
I have really been looking around at different solutions but am not finding anything that I feel really suits my needs. It feels like an easy problem, but all the solutions I find are way too complex or overkill.
A simple idea would be to have a ConcurrentHashMap with the key set to the name and the value to the expiration time, plus a thread running every second that removes all elements whose value (expiration time) has passed, but I'm not sure how efficient that would be. Is there a simpler solution I'm missing?

Google's Guava library contains exactly such a cache: CacheBuilder.
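For example, a minimal sketch (assuming a Guava version that provides the Cache API is on the classpath; the wrapper class and method names are just illustrative):

import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class NameThrottle {
    // Entries silently drop out roughly one second after they were written.
    private final Cache<String, Boolean> recentNames = CacheBuilder.newBuilder()
            .expireAfterWrite(1, TimeUnit.SECONDS)
            .build();

    public void add(String name) {
        recentNames.put(name, Boolean.TRUE);
    }

    public boolean contains(String name) {
        return recentNames.getIfPresent(name) != null;
    }
}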

How about creating a Map where each item expires using a scheduled executor?
// Declare your map and executor service
final Map<String, ScheduledFuture<String>> cacheNames = new ConcurrentHashMap<String, ScheduledFuture<String>>(); // concurrent map, since callers add while the executor removes
final ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
You can then have a method that adds the cache name to your collection and schedules its removal once it has expired; in this example that is one second. I know it seems like quite a bit of code, but it can be quite an elegant solution in just a couple of methods.
ScheduledFuture<String> task = executorService.schedule(new Callable<String>() {
    @Override
    public String call() {
        cacheNames.remove("unique_string");
        return "unique_string";
    }
}, 1, TimeUnit.SECONDS);
cacheNames.put("unique_string", task);
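For instance, the snippet above could be wrapped in a small method (a sketch reusing the map and executor declared earlier; the method name is illustrative):

public void add(final String name, long ttlSeconds) {
    // Schedule the removal first, mirroring the snippet above, then register the task.
    ScheduledFuture<String> task = executorService.schedule(new Callable<String>() {
        @Override
        public String call() {
            cacheNames.remove(name);
            return name;
        }
    }, ttlSeconds, TimeUnit.SECONDS);
    cacheNames.put(name, task);
}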

A simple unique string pattern which doesn't repeat
private static final AtomicLong COUNTER = new AtomicLong(System.currentTimeMillis() * 1000);

public static String generateId() {
    return Long.toString(COUNTER.getAndIncrement(), 36);
}
This won't repeat even if you restart your application.
Note: it will repeat only if you restart after having generated more than one million ids per second, or after 293 years. If that is not long enough, you can reduce the 1000 to 100 and get 2930 years.

It depends on whether you need a strict time condition or a soft one (like 1 second +/- 20 ms), and whether you need discrete cache invalidation or invalidation "by call".
For strict conditions I would suggest adding a dedicated thread which invalidates the cache every 20 milliseconds.
You can also store a timestamp alongside each key and check on every access whether it has expired.
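A minimal sketch of that check-on-access idea, using a ConcurrentHashMap keyed by name with the expiration timestamp as the value (class and method names are illustrative; the get/put pair is not fully race-free, which may be acceptable for a soft throttling requirement):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ExpiringNameSet {
    private final ConcurrentMap<String, Long> expiryByName = new ConcurrentHashMap<String, Long>();

    // Returns true if the name was free and has now been reserved for ttlMillis.
    boolean tryAdd(String name, long ttlMillis) {
        long now = System.currentTimeMillis();
        Long expiry = expiryByName.get(name);
        if (expiry != null && expiry > now) {
            return false; // still reserved
        }
        // Entry is absent or expired; not fully race-free between get and put,
        // but close enough for a soft throttling requirement.
        expiryByName.put(name, now + ttlMillis);
        return true;
    }

    boolean contains(String name) {
        Long expiry = expiryByName.get(name);
        if (expiry == null) return false;
        if (expiry <= System.currentTimeMillis()) {
            expiryByName.remove(name, expiry); // lazy cleanup of expired entries
            return false;
        }
        return true;
    }
}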

Why not store the time for which the key is blacklisted in the map (as Konoplianko hinted)?
Something like this:
private final Map<String, Long> _blacklist = new LinkedHashMap<String, Long>() {
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
        return size() > 1000;
    }
};

public boolean isBlacklisted(String key, long timeoutMs) {
    synchronized (_blacklist) {
        long now = System.currentTimeMillis();
        Long blacklistUntil = _blacklist.get(key);
        if (blacklistUntil != null && blacklistUntil >= now) {
            // still blacklisted
            return true;
        } else {
            // not blacklisted, or blacklisting has expired
            _blacklist.put(key, now + timeoutMs);
            return false;
        }
    }
}
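Used in the scenario from the question, with a 1000 ms timeout (a sketch; it assumes the method above lives on some shared registry object):

// Thread #1 reserves the name; returns false and records it for the next second.
boolean wasBlacklisted = isBlacklisted("unique_string", 1000);
// Any thread checking within that second gets true and does not extend the period.
boolean stillBlocked = isBlacklisted("unique_string", 1000);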

Related

Interactive Broker Java API

Every time before I place a new order to IB, I need to make a request to IB for the next valid orderId and do Thread.sleep(500) to sleep for 0.5 seconds while waiting for the IB API's callback function nextValidId to return the latest orderId. If I want to place multiple orders, I have to naively do this sleep multiple times, which is not a very good way to handle it: the orderId could have been updated earlier, and hence the new order could have been placed earlier. And if the orderId takes longer to update than the sleep time, this results in an error.
Is there a more efficient and elegant way to do this ?
Ideally, I want the program to hold off on running placeNewOrder until the latest available orderId has been updated, and to be notified when it can run placeNewOrder.
I do not know much about Java data synchronization but I reckon there might be a better solution using synchronized or wait-notify or locking or blocking.
my code:
// place first order
ib_client.reqIds(-1);
Thread.sleep(500);
int currentOrderId = ib_wrapper.getCurrentOrderId();
placeNewOrder(currentOrderId, orderDetails); // my order placement method

// place 2nd order
ib_client.reqIds(-1);
Thread.sleep(500);
currentOrderId = ib_wrapper.getCurrentOrderId();
placeNewOrder(currentOrderId, orderDetails); // my order placement method
IB EWrapper:
public class EWrapperImpl implements EWrapper {
    ...
    protected int currentOrderId = -1;
    ...
    public int getCurrentOrderId() {
        return currentOrderId;
    }

    public void nextValidId(int orderId) {
        System.out.println("Next Valid Id: [" + orderId + "]");
        currentOrderId = orderId;
    }
    ...
}
You never need to ask for ids. Just increment by one for every order.
When you first connect, nextValidId is the first or second message to be received; just keep track of the id and keep incrementing.
The only rule for orderId is to use an integer and always increment by some amount. This is per clientId, so if you connect with a new clientId then the last orderId is something else.
I always use max(1000, nextValidId) to make sure my ids start at 1000 or more, since I use < 1000 for data requests. It just helps with errors that have ids.
You can also reset the sequence somehow.
https://interactivebrokers.github.io/tws-api/order_submission.html
This means that if there is a single client application submitting
orders to an account, it does not have to obtain a new valid
identifier every time it needs to submit a new order. It is enough to
increase the last value received from the nextValidId method by one.
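A minimal sketch of that bookkeeping (my own illustrative class; only the nextValidId callback and the 1000 floor come from the answer above):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class OrderIdTracker {
    private final AtomicInteger nextOrderId = new AtomicInteger(-1);
    private final CountDownLatch firstIdReceived = new CountDownLatch(1);

    // Call this from your EWrapper.nextValidId(int) callback.
    public void onNextValidId(int orderId) {
        nextOrderId.set(Math.max(1000, orderId)); // keep ids >= 1000, as suggested above
        firstIdReceived.countDown();
    }

    // Blocks until the first nextValidId callback has arrived, then hands out a
    // fresh id per call by incrementing locally -- no Thread.sleep needed.
    public int takeNextOrderId() throws InterruptedException {
        firstIdReceived.await();
        return nextOrderId.getAndIncrement();
    }
}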
You should not mess around with the order id; it is automatically tracked and set by the API. Otherwise you will get the annoying "Duplicate order id" error 103. From the ApiController class:
public void placeOrModifyOrder(Contract contract, final Order order, final IOrderHandler handler) {
    if (!checkConnection())
        return;

    // when placing new order, assign new order id
    if (order.orderId() == 0) {
        order.orderId(m_orderId++);
        if (handler != null) {
            m_orderHandlers.put(order.orderId(), handler);
        }
    }

    m_client.placeOrder(contract, order);
    sendEOM();
}

Atomically searchKeys() and put() in a ConcurrentHashMap

I am developing a web server in java which, among other things, is supposed to implement a challenge service between couples of users.
Each user can compete in only one challenge at a time.
Currently I am storing the "Challenge" objects in a ConcurrentHashMap<String, Challenge>, and I am using a String that is the union of the two players' usernames as the key for each mapping.
For example, if the usernames of the two players are "Mickey" and "Goofy" then the key of the Challenge object inside the ConcurrentHashMap will be the string:
Mickey:Goofy
When recording a new challenge between two users in the ConcurrentHashMap, I have to check whether they are already engaged in other challenges before actually putting the challenge in the map; in other words, I have to check whether there is a key stored in the map that contains one of the two usernames of the players who want to start the new challenge.
For example, given a filled ConcurrentHashMap<String, Challenge> and a challenge request for the users Mickey and Goofy, I want to know in an atomic way, and without locking the whole map, whether one (or possibly both) of them is already engaged in another registered challenge within the map, and if not, then put the new Challenge in the map.
I hope to have been clear enough.
Do any of you have a suggestion?
Thanks in advance.
Using string concatenation is a bad choice for a compound key. String concatenation is an expensive operation and it doesn't guarantee uniqueness, as the key becomes ambiguous when one of the strings contains the separator of your choice.
Of course, you can forbid that particular character in user names, but this adds additional requirements you have to check, whereas a dedicated key object holding two references is simpler and more efficient. You may even use a two-element List<String> as an ad-hoc key type, as it has useful hashCode and equals implementations.
But since you want to perform lookups for both parts of the compound key anyway, you should not use a compound key in the first place. Just associate both user names with the same Challenge object. This still can’t be done in a single atomic operation, but it doesn’t need to:
final ConcurrentHashMap<String, Challenge> challenges = new ConcurrentHashMap<>();

Challenge startNewChallenge(String user1, String user2) {
    if (user1.equals(user2))
        throw new IllegalArgumentException("same user");
    Challenge c = new Challenge();
    if (challenges.putIfAbsent(user1, c) != null)
        throw new IllegalStateException(user1 + " has an ongoing challenge");
    if (challenges.putIfAbsent(user2, c) != null) {
        challenges.remove(user1, c);
        throw new IllegalStateException(user2 + " has an ongoing challenge");
    }
    return c;
}
This code will never overwrite an existing value. If both putIfAbsent calls were successful, both users definitely had no ongoing challenge and are now associated with the same new challenge.
When the first putIfAbsent succeeds but the second fails, we have to remove the first association. remove(user1, c) will only remove the entry if the user is still associated with our new challenge. As long as all operations on the map follow the principle of never overwriting an existing entry (unless all prerequisites are met), this is not strictly necessary and a plain remove(user1) would do as well, but it doesn't hurt to use the safe variant here.
The only issue with the non-atomicity is that two overlapping attempts involving the same user could both fail, due to the temporarily added first user, when actually one of them could succeed. I do not consider that a significant problem; the user simply shouldn’t attempt to join two challenges at the same time.
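For completeness, one way to use it and to release both users afterwards; the release step is my own addition, following the same never-overwrite principle:

try {
    Challenge c = startNewChallenge("Mickey", "Goofy");
    // ... run the challenge, then release both users.
    // remove(key, value) only removes our own association, never someone else's.
    challenges.remove("Mickey", c);
    challenges.remove("Goofy", c);
} catch (IllegalStateException alreadyEngaged) {
    // at least one of the users already has an ongoing challenge
}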
You must review your code.
You cannot do this in one operation, because you have two names to check; even iterating, you would need at least two accesses to the map in the best case.
I suggest you use your current map without the string concatenation, so yes, one Challenge will appear twice in the map, once for each participant. Then you will be able to check easily whether a user is engaged.
If you need to know with whom they are engaged, simply store both names in the Challenge class.
Of course, lock your map while you are looking up both entries. A function that returns a boolean will do the job!
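A rough sketch of that suggestion, assuming the Challenge class from the question and one lock around the paired lookup and insert (class and method names are illustrative):

import java.util.HashMap;
import java.util.Map;

class ChallengeRegistry {
    private final Map<String, Challenge> byUser = new HashMap<String, Challenge>();

    // Returns true and registers the challenge only if neither user is engaged.
    synchronized boolean tryStart(String user1, String user2, Challenge challenge) {
        if (byUser.containsKey(user1) || byUser.containsKey(user2)) {
            return false; // at least one user is already engaged
        }
        byUser.put(user1, challenge);
        byUser.put(user2, challenge);
        return true;
    }

    synchronized void finish(String user1, String user2) {
        byUser.remove(user1);
        byUser.remove(user2);
    }
}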
From my perspective it is possible, but the map has to use individual player names as keys, so the same challenge is put into the map twice, once for each player.
Having this, we can introduce an additional asynchronous check of whether the new challenge was successfully stored for both players.
private boolean put(Map<String, Challenge> challenges, String firstPlayerName,
                    String secondPlayerName, Challenge newChallenge) {
    // order the player names so concurrent calls always merge in the same order
    if (firstPlayerName.compareTo(secondPlayerName) > 0) {
        String tmp = firstPlayerName;
        firstPlayerName = secondPlayerName;
        secondPlayerName = tmp;
    }
    boolean firstPlayerAccepted = newChallenge == challenges.merge(firstPlayerName, newChallenge,
            (oldValue, newValue) -> oldValue.isInitiated() ? oldValue : newValue);
    boolean secondPlayerAccepted = firstPlayerAccepted
            && newChallenge == challenges.merge(secondPlayerName, newChallenge,
                    (oldValue, newValue) -> oldValue.isInitiated() ? oldValue : newValue);
    boolean success = firstPlayerAccepted && secondPlayerAccepted;
    newChallenge.initiate(success);
    if (firstPlayerAccepted) {
        // remove unsuccessful entries
        challenges.computeIfPresent(firstPlayerName, (s, challenge) -> challenge.isInitiated() ? challenge : null);
        if (secondPlayerAccepted) {
            challenges.computeIfPresent(secondPlayerName, (s, challenge) -> challenge.isInitiated() ? challenge : null);
        }
    }
    return success;
}
class Challenge {
    private final CompletableFuture<Boolean> initiated = new CompletableFuture<>();

    public void initiate(boolean success) {
        initiated.complete(success);
    }

    public boolean isInitiated() {
        try {
            return initiated.get();
        } catch (ExecutionException e) {
            throw new IllegalStateException(e);
        } catch (InterruptedException e) {
            return false;
        }
    }
...
}

Adding an Immutable Array Slows Down a Thread

I have encountered a bit of a paradox that I am trying to understand. Basically I have two variants of an object in a threaded setting; the variants only differ in that one has an immutable array of immutable objects of fixed length, and yet this second variant is considerably slower than the first. Here is the setup:
final class Object {
    public Pair<Long, ImmutableThing> cache;

    public ImmutableThing getThing(long timestamp) {
        if (timestamp > cache.getKey()) {
            ImmutableThing newThing = doExpensiveComputation(timestamp);
            cache = new Pair(newThing.getLong(), newThing);
            return newThing;
        } else {
            return cache.getValue();
        }
    }
}
This first version shows much better performance for the getThing method: it looks up the cache; if the data is valid it returns it, otherwise it does a fairly expensive computation, updates the cache, and returns the new value. I understand this is not thread-safe as written, but here is the second variant:
final class SlowerObject {
    public Pair<Long, ImmutableThing> cache;
    public final ArrayList<ImmutableThing> timelineOfThings;

    public ImmutableThing getThing(long timestamp) {
        if (timestamp > cache.getKey()) {
            ImmutableThing newThing = findInTimelineOfThings(timestamp);
            cache = new Pair(newThing.getLong(), newThing);
            return newThing;
        } else {
            return cache.getValue();
        }
    }
}
In this second variant, we pre-compute an array which stores all the possible values we might want to return from getThing (there are only 4 possibilities in my case). Instead of doing a computation when the cache is invalid, we just look up entries in the array until we find the correct one, and the computation to figure out which one is correct is nearly instant, just comparing long values. The array is never rewritten, only read.
This is all occurring in a threaded environment. Why should the second one be slower?

Java Server Client, shared variable between threads

I am working on a project to create a simple auction server that multiple clients connect to. The server class implements Runnable and so creates a new thread for each client that connects.
I am trying to have the current highest bid stored in a variable that can be seen by each client. I found answers saying to use AtomicInteger, but when I used it with methods such as atomicVariable.intValue() I got NullPointerException errors.
What ways can I manipulate the AtomicInteger without getting this error or is there an other way to have a shared variable that is relatively simple?
Any help would be appreciated, thanks.
Update
I have the AtomicInteger working. The problem now is that only the most recent client to connect to the server seems to be able to interact with it. The other clients just sort of freeze.
Would I be correct in saying this is a problem with locking?
Well, most likely you forgot to initialize it:
private final AtomicInteger highestBid = new AtomicInteger();
However, working with highestBid correctly without any locking requires a great deal of care. For example, if you want to update it with a new highest bid:
public boolean saveIfHighest(int bid) {
    int currentBid = highestBid.get();
    while (currentBid < bid) {
        if (highestBid.compareAndSet(currentBid, bid)) {
            return true;
        }
        currentBid = highestBid.get();
    }
    return false;
}
or, the same method written in a more compact way:
public boolean saveIfHighest(int bid) {
    for (int currentBid = highestBid.get(); currentBid < bid; currentBid = highestBid.get()) {
        if (highestBid.compareAndSet(currentBid, bid)) {
            return true;
        }
    }
    return false;
}
You might wonder why it is so hard. Imagine two threads (requests) bidding at the same time. The current highest bid is 10. One is bidding 11, the other 12. Both threads compare against the current highestBid and realize their bid is bigger. Now the second thread happens to go first and updates it to 12. Unfortunately the first request then steps in and reverts it to 11 (because it has already checked the condition).
This is a typical race condition that you can avoid either by explicit synchronization or by using atomic variables with their implicit low-level compare-and-set support.
Seeing the complexity introduced by the more performant lock-free atomic integer, you might want to resort to classic synchronization:
public synchronized boolean saveIfHighest(int bid) {
    if (highestBid < bid) {
        highestBid = bid;
        return true;
    } else {
        return false;
    }
}
I wouldn't look at the problem like that. I would simply store all the bids in a ConcurrentSkipListSet, which is a thread-safe SortedSet. With the correct implementation of compareTo(), which determines the ordering, the first element of the Set will automatically be the highest bid.
Here's some sample code:
public class Bid implements Comparable<Bid> {
    String user;
    int amountInCents;
    Date created;

    @Override
    public int compareTo(Bid o) {
        if (amountInCents == o.amountInCents) {
            return created.compareTo(o.created); // earlier bids sort first
        }
        return o.amountInCents - amountInCents; // larger bids sort first
    }
}
public class Auction {
    private SortedSet<Bid> bids = new ConcurrentSkipListSet<Bid>();

    public Bid getHighestBid() {
        return bids.isEmpty() ? null : bids.first();
    }

    public void addBid(Bid bid) {
        bids.add(bid);
    }
}
Doing this has the following advantages:
Automatically provides a bidding history
Allows a simple way to save any other bid info you need
You could also consider this method:
/**
 * @param bid
 * @return true if the bid was successful
 */
public boolean makeBid(Bid bid) {
    if (bids.isEmpty()) {
        bids.add(bid);
        return true;
    }
    // reject the bid unless it sorts ahead of (i.e. is higher than) the current highest bid
    if (bid.compareTo(bids.first()) >= 0) {
        return false;
    }
    bids.add(bid);
    return true;
}
Using an AtomicInteger is fine, provided you initialise it as Tomasz has suggested.
What you might like to think about, however, is whether all you will literally ever need to store is just the highest bid as an integer. Will you never need to store associated information, such as the bidding time, user ID of the bidder etc? Because if at a later stage you do, you'll have to start undoing your AtomicInteger code and replacing it.
I would be tempted from the outset to set things up to store arbitrary information associated with the bid. For example, you can define a "Bid" class with the relevant field(s). Then on each bid, use an AtomicReference to store an instance of "Bid" with the relevant information. To be thread-safe, make all the fields on your Bid class final.
You could also consider using an explicit Lock (e.g. see the ReentrantLock class) to control access to the highest bid. As Tomasz mentions, even with an AtomicInteger (or AtomicReference: the logic is essentially the same) you need to be a little careful about how you access it. The atomic classes are really designed for cases where they are very frequently accessed (as in thousands of times per second, not every few minutes as on a typical auction site). They won't really give you any performance benefit here, and an explicit Lock object might be more intuitive to program with.
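For illustration, a compare-and-set loop over an AtomicReference could look roughly like this; it reuses the Bid class sketched earlier and assumes its amount field is accessible from this class:

import java.util.concurrent.atomic.AtomicReference;

class HighestBidHolder {
    private final AtomicReference<Bid> highest = new AtomicReference<Bid>();

    // Returns true if this bid became the new highest bid.
    public boolean saveIfHighest(Bid candidate) {
        while (true) {
            Bid current = highest.get();
            if (current != null && candidate.amountInCents <= current.amountInCents) {
                return false; // not strictly higher than the current highest bid
            }
            if (highest.compareAndSet(current, candidate)) {
                return true;
            }
            // another thread swapped in a different bid; loop and re-check
        }
    }

    public Bid getHighestBid() {
        return highest.get();
    }
}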

How to handle synchronized access to List within Map<String, List>?

UPDATE: Please note.
The question I asked was answered. Unfortunately for me, the issue is quite a bit bigger than the question in the title. Apart from adding new entries to the map, I had to handle updates and removals at the same time. The scenario I have in mind seems impossible to implement without one or the other:
a. deadlocks
b. complex & time consuming checks and locks
Check the bottom of the Question for final thoughts.
ORIGINAL POST:
Hi,
I've got a spring bean with a Map.
Here's what I want to use it for:
A few concurrent JMS listeners will receive messages with actions. Each action consists of two users: long userA and long userB. Each message will have its own String replyTo queue, which will be used to identify the action.
Because I cannot allow an action to execute while one of its users is participating in another action that is currently executing, I am going to use this map as a registry of what is going on and to control the execution of actions.
So let's say I receive three actions:
1. userA, userB
2. userB, userC
3. userC, userA
When first action is received the map is empty so I am going to record info about the action in it and start executing the action.
When second action is received I can see that userB is 'busy' with first action so I simply record information about the action.
Same thing for third action.
Map is going to look like this:
[userA:[action1, action3],
userB:[action1, action2],
userC:[action2, action3]]
Once first action is complete I will remove information about it from the registry and get info about next actions for userA and userB [action3, action2]. Then I will try to restart them.
I think by now you get what I want to do with this map.
Because map is going to be accessed from several threads at the same time I have to handle synchronization somehow.
I will have methods to add new information to the map and to remove info from the map when action is done. The remove method will return next actions [if there are any] for the two users for whom the action just finished.
Because there could be hundreds of actions executed at the same time and the percentage of actions with busy users is supposed to be low I don't want to block access to the map for every add/remove operation.
I thought about making synchronized access only to each of the Lists within the Map to allow concurrent access to several user entries at the same time. However... because when there are no actions left for the user I want to remove entry for this user from the map. Also... when user has no entry in the map I will have to create one. I am a little bit afraid there could be clashes in there somewhere.
What would be the best way to handle this scenario?
Is making both methods - add and remove - synchronized (which I consider the worst case scenario) the only proper [safe] way to do it?
Additionally I will have another map which will contain action id as keys and user ids as values so it's easier to identify/remove user pairs. I believe I can skip synchronization on this one since there's no scenario where one action would be executed twice at the same time.
Although the code is in Groovy, I believe no Java programmer will find it difficult to read; it is Java behind it.
Please consider following as pseudo code as I am just prototyping.
class UserRegistry {
    // ['actionA':[userA, userB]]
    // ['actionB':[userC, userA]]
    // ['actionC':[userB, userC]]
    private Map<String, List<Long>> messages = [:]

    /**
     * ['userA':['actionA', 'actionB'],
     *  ['userB':['actionA', 'actionC'],
     *  ['userC':['actionB', 'actionC']
     */
    private Map<Long, List<String>> users = [:].asSynchronized()

    /**
     * Adds entries for the users and the action to the registry.
     * @param userA
     * @param userB
     * @param action
     * @return true if a new entry was added, false if entries for at least one user already existed
     */
    public boolean add(long userA, long userB, String action) {
        boolean userABusy = users.containsKey(userA)
        boolean userBBusy = users.containsKey(userB)
        boolean retValue
        if (userABusy || userBBusy) {
            if (userABusy) {
                users.get(userA).add(action)
            } else {
                users.put(userA, [action].asSynchronized())
            }
            if (userBBusy) {
                users.get(userB).add(action)
            } else {
                users.put(userB, [action].asSynchronized())
            }
            messages.put(action, [userA, userB])
            retValue = false
        } else {
            users.put(userA, [action].asSynchronized())
            users.put(userB, [action].asSynchronized())
            messages.put(action, [userA, userB])
            retValue = true
        }
        return retValue
    }

    public List remove(String action) {
        if (!messages.containsKey(action)) throw new Exception("we're screwed, I'll figure this out later")
        List nextActions = []
        long userA = messages.get(action).get(0)
        long userB = messages.get(action).get(1)
        if (users.get(userA).size() > 1) {
            users.get(userA).remove(0)
            nextActions.add(users.get(userA).get(0))
        } else {
            users.remove(userA)
        }
        if (users.get(userB).size() > 1) {
            users.get(userB).remove(0)
            nextActions.add(users.get(userB).get(0))
        } else {
            users.remove(userB)
        }
        messages.remove(action)
        return nextActions
    }
}
}
EDIT
I thought about this solution last night, and it seems that the messages map could go away and the users map would become:
Map<String, List<UserRegistryEntry>> users
where UserRegistryEntry holds:
String actionId
boolean waiting
now let's assume I get these actions:
action1: userA, userC
action2: userA, userD
action3: userB, userC
action4: userB, userD
This means that action1 and action4 can be executed simultaneously and action2 and action3 are blocked. Map would look like this:
[
[userAId: [actionId: action1, waiting: false],[actionId: action2, waiting: true]],
[userBId: [actionId: action3, waiting: true], [actionId: action4, waiting: false]],
[userCId: [actionId: action1, waiting: false],[actionId: action3, waiting: true]],
[userDId: [actionId: action2, waiting: true], [actionId: action4, waiting: false]]
]
This way, when action execution is finished I remove entry from the map using:
userAId, userBId, actionId
And take details about first non blocked waiting action on userA and userB [if there are any] and pass them for execution.
So here are the two methods I will need, which write data to the map and remove it from the map.
public boolean add(long userA, long userB, String action) {
    boolean userAEntryExists = users.containsKey(userA)
    boolean userBEntryExists = users.containsKey(userB)
    boolean actionWaiting = false
    UserRegistryEntry userAEntry = new UserRegistryEntry(actionId: action, waiting: false)
    UserRegistryEntry userBEntry = new UserRegistryEntry(actionId: action, waiting: false)
    if (userAEntryExists || userBEntryExists) {
        if (userAEntryExists) {
            for (entry in users.get(userA)) {
                if (!entry.waiting) {
                    userAEntry.waiting = true
                    userBEntry.waiting = true
                    actionWaiting = true
                    break
                }
            }
        }
        if (!actionWaiting && userBEntryExists) {
            for (entry in users.get(userB)) {
                if (!entry.waiting) {
                    userAEntry.waiting = true
                    userBEntry.waiting = true
                    actionWaiting = true
                    break
                }
            }
        }
    }
    if (userAEntryExists) {
        users.get(userA).add(userAEntry)
    } else {
        users.put(userA, [userAEntry])
    }
    if (userBEntryExists) {
        users.get(userB).add(userBEntry)
    } else {
        users.put(userB, [userBEntry])
    }
    return actionWaiting
}
And for removes:
public List remove(long userA, long userB, String action) {
    List<String> nextActions = []
    finishActionAndReturnNew(userA, action, nextActions)
    finishActionAndReturnNew(userB, action, nextActions)
    return nextActions
}

private def finishActionAndReturnNew(long userA, String action, List<String> nextActions) {
    boolean userRemoved = false
    boolean actionFound = false
    Iterator itA = users.get(userA).iterator()
    while (itA.hasNext()) {
        UserRegistryEntry entry = itA.next()
        if (!userRemoved && entry.actionId == action) {
            itA.remove()
        } else {
            if (!actionFound && isUserFree(entry.otherUser)) {
                nextActions.add(entry.actionId)
            }
        }
        if (userRemoved && actionFound) break
    }
}

public boolean isUserFree(long userId) {
    boolean userFree = true
    if (!users.containsKey(userId)) return true
    for (entry in users.get(userId)) {
        if (!entry.waiting) userFree = false
    }
    return userFree
}
FINAL THOUGHT:
This scenario is a killer:
[ActionID, userA,userB]
[a, 1,2]
[b, 1,3]
[c, 3,4]
[d, 3,1]
Actions a and c are executed simultaneously; b and d are waiting.
When a and c are done, the entries for users 1, 2, 3 and 4 will have to be removed, so one thread will have users 1 and 2 locked while the other thread has users 3 and 4 locked. While these users are locked, a check for the next action of each of them has to be performed. When the code determines that for user 1 the next action is with user 3, and for user 3 the next action is with user 1, the threads will try to lock them. This is when the deadlock happens. I know I could code around that, but it seems it would take a lot of time to execute and it would block two workers.
For now I will ask another question on SO, more on the subject of my issue and try to prototype the solution using JMS in the meantime.
You may need to review how synchronized (collections) work again:
This (as a non-exclusive example) is not thread-safe:
if (users.get(userA).size() > 1) {
    users.get(userA).remove(0)
Remember that only individual "synchronized" methods are guaranteed atomic without a larger lock scope.
Happy coding.
Edit - per-user synchronization locks (updated for comment):
Just by using the standard data-structures you can achieve per-key locks by using ConcurrentHashMap -- in particular by using the 'putIfAbsent' method. (This is significantly different than just using get/put of a 'synchronized HashMap', see above.)
Below is some pseudo-code and notes:
public boolean add(long userA, long userB, String action) {
    // putIfAbsent ensures that every caller ends up with *the same* list object
    // per user, provided that:
    // - users is never re-assigned
    // - the locking approach below is followed consistently
    // A new list is created if needed; note that putIfAbsent returns the existing
    // list when there is one and null when the new list was just inserted, hence
    // the null checks below.
    // Since we synchronize manually here, these lists themselves do not need to
    // be synchronized, provided access is consistently protected across the
    // "higher" structure (per user entry in the map) when using this approach.
    List listA = users.putIfAbsent(userA, new ArrayList());
    if (listA == null) listA = users.get(userA);
    List listB = users.putIfAbsent(userB, new ArrayList());
    if (listB == null) listB = users.get(userB);

    // The locks must be ordered consistently so that
    // an A-B / B-A deadlock cannot occur.
    Object lock1, lock2;
    if (userA < userB) {
        lock1 = listA; lock2 = listB;
    } else {
        lock1 = listB; lock2 = listA;
    }

    synchronized (lock1) { synchronized (lock2) { // start locks
        // The rest of the code can be simplified, since the per-user lists are
        // already *guaranteed* to exist; there is no need to alternate between
        // add and creating a new list.
        boolean eitherUserBusy = listA.size() > 0 || listB.size() > 0;
        listA.add(action);
        listB.add(action);
        // make sure messages allows thread-safe access as well
        messages.put(action, [userA, userB]);
        return !eitherUserBusy;
    }} // end locks
}
I have no idea how this fares under your usage scenario versus a single common lock object. It is often advisable to go with "simpler" unless there is a clear advantage to doing otherwise.
HTH and Happy coding.
You might want to check out Collections.synchronizedMap() or Collections.synchronizedList()
You have two global state-holders in the class and compound actions in each of the two methods that modify both of them. So even if we changed the Maps to ConcurrentHashMaps and the List to something like CopyOnWriteArrayList, it would still not guarantee a consistent state.
I see that you will be writing to the List often, so CopyOnWriteArrayList might be too expensive anyway. ConcurrentHashMap is only 16-way striped by default. If you have better hardware, an alternative would be Cliff Click's highscalelib (after appropriate locking in the methods).
Back to the consistency question: how about using a ReentrantLock instead of synchronizing, and seeing whether you can exclude some statements from the lock()-to-unlock() sequence? If you went with a ConcurrentMap, the first two statements in add() that do containsKey() can be optimistic, and you may be able to exclude them from the lock block.
Do you really need the messages map? It is kind of an inverse index of users. One other option would be to have a watch() method that periodically updates the messages map based on a signal from add() after a change to users. The refresh could alternatively be completely asynchronous. In doing that, you might be able to use a ReadWriteLock, holding the readLock() on users while you update messages; add() would then safely acquire the writeLock() on users. It is just some more work to get this reasonably correct.
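A rough sketch of that last idea, with users guarded by a ReentrantReadWriteLock and messages treated as a derived index that is rebuilt asynchronously and swapped in; the names mirror the question's code, but the structure is only an assumption:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class UserRegistrySketch {
    private final Map<Long, List<String>> users = new HashMap<Long, List<String>>();
    private final ReadWriteLock usersLock = new ReentrantReadWriteLock();
    // Derived inverse index, rebuilt asynchronously and swapped in atomically.
    private volatile Map<String, List<Long>> messages = new HashMap<String, List<Long>>();

    public void add(long user, String action) {
        usersLock.writeLock().lock();
        try {
            List<String> actions = users.get(user);
            if (actions == null) {
                actions = new ArrayList<String>();
                users.put(user, actions);
            }
            actions.add(action);
        } finally {
            usersLock.writeLock().unlock();
        }
    }

    // Called periodically, or on a signal from add(); it only needs the read
    // lock on users, so it can run concurrently with other readers.
    public void refreshMessages() {
        Map<String, List<Long>> rebuilt = new HashMap<String, List<Long>>();
        usersLock.readLock().lock();
        try {
            for (Map.Entry<Long, List<String>> e : users.entrySet()) {
                for (String action : e.getValue()) {
                    List<Long> participants = rebuilt.get(action);
                    if (participants == null) {
                        participants = new ArrayList<Long>();
                        rebuilt.put(action, participants);
                    }
                    participants.add(e.getKey());
                }
            }
        } finally {
            usersLock.readLock().unlock();
        }
        messages = rebuilt; // atomic reference swap to publish the new index
    }
}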
