Every time before I place a new order with IB, I have to request the next valid orderId and then Thread.sleep(500) for 0.5 seconds to wait for the IB API's nextValidId callback to deliver the latest orderId. If I want to place multiple orders, I have to naively sleep like this multiple times. This is not a good way to handle it: the orderId may have been updated earlier, so the new order could have been placed sooner, and if the orderId takes longer to update than the sleep time, the result is an error.
Is there a more efficient and elegant way to do this?
Ideally, I want the program to block placeNewOrder until the latest available orderId has been updated, and then be notified to run placeNewOrder.
I do not know much about Java data synchronization, but I reckon there might be a better solution using synchronized, wait/notify, locking, or blocking.
my code:
// place first order
ib_client.reqIds(-1);
Thread.sleep(500);
int currentOrderId = ib_wrapper.getCurrentOrderId();
placeNewOrder(currentOrderId, orderDetails); // my order placement method

// place 2nd order
ib_client.reqIds(-1);
Thread.sleep(500);
currentOrderId = ib_wrapper.getCurrentOrderId(); // reuse the variable; redeclaring it would not compile
placeNewOrder(currentOrderId, orderDetails); // my order placement method
IB EWrapper:
public class EWrapperImpl implements EWrapper {
    ...
    protected int currentOrderId = -1;
    ...
    public int getCurrentOrderId() {
        return currentOrderId;
    }

    public void nextValidId(int orderId) {
        System.out.println("Next Valid Id: [" + orderId + "]");
        currentOrderId = orderId;
    }
    ...
}
You never need to ask for ids; just increment by one for every order.
When you first connect, nextValidId is the first or second message to be received. Just keep track of the id and keep incrementing.
The only rule for orderId is that it must be an integer and must always increase. The sequence is per clientId, so if you connect with a new clientId then the last orderId is something else.
I always use max(1000, nextValidId) to make sure my ids start at 1000 or more, since I use ids below 1000 for data requests. It just helps with errors that carry ids.
You can also reset the sequence somehow.
https://interactivebrokers.github.io/tws-api/order_submission.html
This means that if there is a single client application submitting
orders to an account, it does not have to obtain a new valid
identifier every time it needs to submit a new order. It is enough to
increase the last value received from the nextValidId method by one.
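Since the question specifically asks about a wait/notify-style alternative to Thread.sleep, here is a minimal Java sketch of that idea. Only nextValidId comes from the IB API; the class and method names below are made up for the example. The helper waits once for the first nextValidId callback (which arrives automatically on connect) and then hands out ids locally, so no further reqIds or sleeps are needed:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper; wire onNextValidId(...) into your EWrapperImpl.nextValidId callback.
public class OrderIdSource {
    private final CountDownLatch firstId = new CountDownLatch(1);
    private final AtomicInteger nextOrderId = new AtomicInteger(-1);

    // Call this from EWrapperImpl.nextValidId(int orderId)
    public void onNextValidId(int orderId) {
        nextOrderId.set(orderId);
        firstId.countDown();          // wake any thread waiting for the first id
    }

    // Blocks until the first id has arrived, then increments locally;
    // subsequent orders never need another reqIds()/sleep round trip.
    public int take() throws InterruptedException {
        firstId.await();
        return nextOrderId.getAndIncrement();
    }
}

Every placeNewOrder call would then simply use take() for its id.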
You should not mess around with the order ID; it is automatically tracked and set by the API. Otherwise you will get the annoying "Duplicate order id" error 103. From the ApiController class:
public void placeOrModifyOrder(Contract contract, final Order order, final IOrderHandler handler) {
    if (!checkConnection())
        return;

    // when placing new order, assign new order id
    if (order.orderId() == 0) {
        order.orderId( m_orderId++);
        if (handler != null) {
            m_orderHandlers.put( order.orderId(), handler);
        }
    }

    m_client.placeOrder( contract, order);
    sendEOM();
}
Related
I am just getting started with IBKR API on Java. I am following the API sample code, specifically the options chain example, to figure out how to get options chains for specific stocks.
The example works well for this, but I have one question - how do I know once ALL data has been loaded? There does not seem to be a way to tell. The sample code is able to tell when each individual row has been loaded, but there doesn't seem to be a way to tell when ALL strikes have been successfully loaded.
I thought that using tickSnapshotEnd() would be beneficial, but it doesn't seem to work as I would expect. I would expect it to be called once for every request that completes. For example, if I query a stock like SOFI on the 2022/03/18 expiry, I see that there are 35 strikes, but tickSnapshotEnd() is called 40+ times, with some strikes repeated more than once.
Note that I am doing requests for snapshot data, not live/streaming data
reqOptionsMktData is obviously a method in the sample code you are using. I'm not sure which particular code you're using, so this is a general response.
Firstly you are correct, there is no way to tell via the API, this must be done by the client. Of course it will provide the requestID that was used when the request was made. The client needs to remember what each requestID was for and decide how to process that information when it is received in the callbacks.
This can be done via a dictionary or hashtable, where upon receiving data in the callback then check if the chain is complete.
Message delivery from the API often has unexpected results, receiving extra messages is common and is something that needs to be taken into account by the client. Consider the API stateless, and track everything in the client.
Seems you are referring to Regulatory Snapshots, I would encourage you to look at the cost. It could quite quickly add up to the price of streaming live data. Add to that the 1/sec limit will make a chain take a long time to load. I wouldn't even recommend using snapshots with live data, cancelling the request yourself is trivial and much faster.
Something like this (obviously incomplete C#, just a starting point):
class OptionData
{
    public int ReqId { get; }
    public double Strike { get; }
    public string Expiry { get; }
    public double? Bid { get; set; } = null;
    public double? Ask { get; set; } = null;

    public bool IsComplete()
    {
        return Bid != null && Ask != null;
    }

    public OptionData(int reqId, double strike, ....
    { ...
    }
...
class MyData
{
    // Create somewhere to store our data, indexed by reqId.
    Dictionary<int, OptionData> optChain = new();

    public MyData()
    {
        // We would want to call reqSecDefOptParams to get a list of strikes etc.
        // Choose which part of the chain you want, likely you'll want to
        // get the current price of the underlying to decide.
        int reqId = 1;
        ...
        optChain.Add(++reqId, new OptionData(reqId, strike, expiry));
        ...

        // Request data for each contract
        // Note the 50 msg/sec limit https://interactivebrokers.github.io/tws-api/introduction.html#fifty_messages
        // Only 1/sec for Reg snapshot
        foreach (OptionData opt in optChain.Values)   // iterate the values, not the dictionary entries
        {
            Contract con = new()
            {
                Symbol = "SPY",
                Currency = "USD",
                Exchange = "SMART",
                Right = "C",
                SecType = "OPT",
                Strike = opt.Strike,
                Expiry = opt.Expiry
            };
            ibClient.ClientSocket.reqMktData(opt.ReqId, con, "", false, true, new List<TagValue>());
        }
    }
...
    private void Recv_TickPrice(TickPriceMessage msg)
    {
        if (optChain.ContainsKey(msg.RequestId))
        {
            if (msg.Field == 2) optChain[msg.RequestId].Ask = msg.Price;
            if (msg.Field == 1) optChain[msg.RequestId].Bid = msg.Price;
            // You may want other tick types as well
            // see https://interactivebrokers.github.io/tws-api/tick_types.html

            if (optChain[msg.RequestId].IsComplete())
            {
                // This won't apply for reg snapshot.
                ibClient.ClientSocket.cancelMktData(msg.RequestId);

                // You have the data, and have cancelled the request.
                // Maybe request more data or update display etc...

                // Check if the whole chain is complete
                bool complete = true;
                foreach (OptionData opt in optChain.Values)
                    if (!opt.IsComplete()) complete = false;
                if (complete)
                {
                    // do whatever
                }
            }
        }
    }
This program is about showing the oldest, youngest etc. person in a network.
I need to figure out how to improve it so I don't get the ConcurrentModificationException. I get it when I ask for several of these displays at the same time, for example asking for the youngest and the oldest and then having it refresh to tell me who the current youngest is.
public void randomIncreaseCoupling(int amount, double chance, double inverseChance) {
    randomChangeCoupling(amount, chance, inverseChance, true);
}

public void randomDecreaseCoupling(int amount, double chance, double inverseChance) {
    randomChangeCoupling(amount, chance, inverseChance, false);
}
This code is used in the network to randomly change the data outcome.
Also, I currently have this running in a single Thread, but I need to speed it up, so I need each of these 'functions' to run in its own Thread.
The Class MainController is starting the Thread by:
public void startEvolution() {
    if (display == null)
        throw new Error("Controller not initialized before start");
    evolutionThread = new NetworkEvolutionSimulator(network, display);
    evolutionThread.start();
}
When I click on any button, e.g. a button to show me the oldest in this network, it is done by:
public void startOldest() {
    if (display == null)
        throw new Error("Not properly initialized");
    int order = display.getDistanceFor(Identifier.OLDEST);
    Set<Person> oldest = network.applyPredicate(PredicateFactory.IS_OLDEST, order);
    display.displayData(Identifier.OLDEST, summarize(order, oldest));
}
I tried to make it like:
public void startOldest() {
    if (display == null)
        throw new Error("Not properly initialized");
    int order = display.getDistanceFor(Identifier.OLDEST);
    Set<Person> oldest = network.applyPredicate(PredicateFactory.IS_OLDEST, order);
    display.displayData(Identifier.OLDEST, summarize(order, oldest));
    evolutionThread2 = new NetworkEvolutionSimulator(network, display);
    evolutionThread2.start();
}
But this starts the main simulation thread over and over when I press the button. What I want is that when I press a certain button, that specific function (and likewise the others) runs in its own thread, so I can use more than one of them at a time. How should I do this?
I can explain more if needed.
Thanks in advance.
My first post, so sorry if I didn't follow a specific rule.
You could use the synchronized keyword -
The synchronized keyword can be used to mark several types of code blocks:
Instance methods
Static methods
Code blocks inside instance methods
Code blocks inside static methods
Everywhere you use your oldest set, you could add a synchronized code block like this:
synchronized(oldest) { ... }
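Applied to the code from the question, that could look roughly like this (a sketch only; networkLock is an illustrative field, and the important point is that the simulation thread's writes take the same monitor as the readers):

private final Object networkLock = new Object();

public void startOldest() {
    if (display == null)
        throw new Error("Not properly initialized");
    int order = display.getDistanceFor(Identifier.OLDEST);
    Set<Person> oldest;
    synchronized (networkLock) {   // reader: the iteration happens inside applyPredicate
        oldest = network.applyPredicate(PredicateFactory.IS_OLDEST, order);
    }
    display.displayData(Identifier.OLDEST, summarize(order, oldest));
}

public void randomIncreaseCoupling(int amount, double chance, double inverseChance) {
    synchronized (networkLock) {   // writer: the simulation must hold the same lock
        randomChangeCoupling(amount, chance, inverseChance, true);
    }
}

Synchronizing only one side still allows the ConcurrentModificationException; both the reading and the modifying code have to share one monitor.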
I am working on a project to create a simple auction server that multiple clients connect to. The server class implements Runnable and so creates a new thread for each client that connects.
I am trying to have the current highest bid stored in a variable that can be seen by each client. I found answers saying to use AtomicInteger, but when I used it with methods such as atomicVariable.intValue() I got NullPointerExceptions.
In what ways can I manipulate the AtomicInteger without getting this error, or is there another relatively simple way to have a shared variable?
Any help would be appreciated, thanks.
Update
I have the AtomicInteger working. The problem now is that only the most recent client to connect to the server seems to be able to interact with it. The other clients just sort of freeze.
Would I be correct in saying this is a locking problem?
Well, most likely you forgot to initialize it:
private final AtomicInteger highestBid = new AtomicInteger();
However working with highestBid requires a great deal of knowledge to get it right without any locking. For example if you want to update it with new highest bid:
public boolean saveIfHighest(int bid) {
    int currentBid = highestBid.get();
    while (currentBid < bid) {
        if (highestBid.compareAndSet(currentBid, bid)) {
            return true;
        }
        currentBid = highestBid.get();
    }
    return false;
}
or in a more compact way:
for (int currentBid = highestBid.get(); currentBid < bid; currentBid = highestBid.get()) {
    if (highestBid.compareAndSet(currentBid, bid)) {
        return true;
    }
}
return false;
You might wonder why it is so hard. Imagine two threads (requests) bidding at the same time. The current highest bid is 10. One is bidding 11, the other 12. Both threads compare against the current highestBid and see that theirs is bigger. Now the second thread happens to go first and updates it to 12. Unfortunately the first request then steps in and reverts it to 11 (because it already checked the condition).
This is a typical race condition that you can avoid either by explicit synchronization or by using atomic variables with implicit compare-and-set low-level support.
Seeing the complexity introduced by the more performant lock-free atomic integer, you might want to resort to classic synchronization:
public synchronized boolean saveIfHighest(int bid) {
    if (highestBid < bid) {
        highestBid = bid;
        return true;
    } else {
        return false;
    }
}
I wouldn't look at the problem like that. I would simply store all the bids in a ConcurrentSkipListSet, which is a thread-safe SortedSet. With the correct implementation of compareTo(), which determines the ordering, the first element of the Set will automatically be the highest bid.
Here's some sample code:
public class Bid implements Comparable<Bid> {
    String user;
    int amountInCents;
    Date created;

    @Override
    public int compareTo(Bid o) {
        if (amountInCents == o.amountInCents) {
            return created.compareTo(o.created); // earlier bids sort first
        }
        return o.amountInCents - amountInCents; // larger bids sort first
    }
}
public class Auction {
    private SortedSet<Bid> bids = new ConcurrentSkipListSet<Bid>();

    public Bid getHighestBid() {
        return bids.isEmpty() ? null : bids.first();
    }

    public void addBid(Bid bid) {
        bids.add(bid);
    }
}
Doing this has the following advantages:
Automatically provides a bidding history
Allows a simple way to save any other bid info you need
You could also consider this method:
/**
 * @param bid
 * @return true if the bid was successful
 */
public boolean makeBid(Bid bid) {
    if (bids.isEmpty()) {
        bids.add(bid);
        return true;
    }
    // the set sorts larger bids first, so >= 0 means the new bid does not beat the current best
    if (bid.compareTo(bids.first()) >= 0) {
        return false;
    }
    bids.add(bid);
    return true;
}
Using an AtomicInteger is fine, provided you initialise it as Tomasz has suggested.
What you might like to think about, however, is whether all you will literally ever need to store is just the highest bid as an integer. Will you never need to store associated information, such as the bidding time, user ID of the bidder etc? Because if at a later stage you do, you'll have to start undoing your AtomicInteger code and replacing it.
I would be tempted from the outset to set things up to store arbitrary information associated with the bid. For example, you can define a "Bid" class with the relevant field(s). Then on each bid, use an AtomicReference to store an instance of "Bid" with the relevant information. To be thread-safe, make all the fields on your Bid class final.
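For illustration, here is a minimal sketch of that AtomicReference idea, reusing the Bid class from the answer above (the class name AtomicAuction is just for this example):

import java.util.concurrent.atomic.AtomicReference;

public class AtomicAuction {
    private final AtomicReference<Bid> highest = new AtomicReference<Bid>();

    /** Returns true if this bid became the new highest bid. */
    public boolean saveIfHighest(Bid bid) {
        for (;;) {
            Bid current = highest.get();
            if (current != null && current.amountInCents >= bid.amountInCents) {
                return false;                  // not an improvement
            }
            if (highest.compareAndSet(current, bid)) {
                return true;                   // we won the race
            }
            // another thread updated the reference first; re-read and retry
        }
    }
}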
You could also consider using an explicit Lock (e.g. see the ReentrantLock class) to control access to the highest bid. As Tomasz mentions, even with an AtomicInteger (or AtomicReference: the logic is essentially the same) you need to be a little careful about how you access it. The atomic classes are really designed for cases where they are very frequently accessed (as in thousands of times per second, not every few minutes as on a typical auction site). They won't really give you any performance benefit here, and an explicit Lock object might be more intuitive to program with.
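And a comparable sketch with an explicit ReentrantLock, which trades the retry loop for straightforward blocking (again, LockedAuction and the field names are just for illustration):

import java.util.concurrent.locks.ReentrantLock;

public class LockedAuction {
    private final ReentrantLock lock = new ReentrantLock();
    private Bid highest;                       // guarded by 'lock'

    public boolean saveIfHighest(Bid bid) {
        lock.lock();
        try {
            if (highest == null || highest.amountInCents < bid.amountInCents) {
                highest = bid;
                return true;
            }
            return false;
        } finally {
            lock.unlock();                     // always release, even on exceptions
        }
    }
}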
I am looking for a concurrent Set with expiration functionality for a Java 1.5 application. It would be used as a simple way to store / cache names (i.e. String values) that expire after a certain time.
The problem I'm trying to solve is that two threads should not be able to use the same name value within a certain time window. It is a sort of blacklist ensuring that the same "name" (which is something like a message reference) can't be reused by another thread until a certain period has passed. I do not control name generation myself, so there is nothing I can do about the actual strings to enforce uniqueness; it should rather be seen as a throttling/limiting mechanism to prevent the same name from being used more than once per second.
Example:
Thread #1 does cache.add("unique_string", 1), which stores the name "unique_string" for 1 second.
If any thread is looking for "unique_string" by doing e.g. cache.get("unique_string") within 1 second it will get a positive response (item exists), but after that the item should be expired and removed from the set.
The container would at times handle 50-100 inserts / reads per second.
I have really been looking around at different solutions but am not finding anything that I feel really suites my needs. It feels like an easy problem, but all solutions I find are way too complex or overkill.
A simple idea would be a ConcurrentHashMap with the key set to the name and the value to the expiration time, plus a thread running every second that removes all entries whose expiration time has passed, but I'm not sure how efficient that would be. Is there not a simpler solution I'm missing?
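For clarity, here is a rough Java 5 style sketch of the idea I described; the class and method names are invented just for the example. The once-per-second sweep only keeps the map from growing, since the read path checks the timestamp anyway:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpiringNames {
    private final ConcurrentMap<String, Long> expiries = new ConcurrentHashMap<String, Long>();
    private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();

    public ExpiringNames() {
        sweeper.scheduleAtFixedRate(new Runnable() {
            public void run() {
                long now = System.currentTimeMillis();
                for (Map.Entry<String, Long> e : expiries.entrySet()) {
                    if (e.getValue() <= now) {
                        expiries.remove(e.getKey(), e.getValue()); // remove only if unchanged
                    }
                }
            }
        }, 1, 1, TimeUnit.SECONDS);
    }

    public void add(String name, long ttlSeconds) {
        expiries.put(name, System.currentTimeMillis() + ttlSeconds * 1000);
    }

    public boolean contains(String name) {
        Long until = expiries.get(name);
        return until != null && until > System.currentTimeMillis();
    }
}

At 50-100 operations per second this kind of sweep is trivially cheap; my question is mostly whether something simpler already exists.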
Google's Guava library contains exactly such cache: CacheBuilder.
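A hedged sketch of what that looks like with CacheBuilder; the Boolean values are just placeholders since only the keys matter, and note that current Guava releases require Java 6+, so double-check compatibility with a 1.5 runtime:

import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class GuavaNameThrottle {
    // Entries silently expire one second after being written.
    private final Cache<String, Boolean> recentNames = CacheBuilder.newBuilder()
            .expireAfterWrite(1, TimeUnit.SECONDS)
            .build();

    /** Returns true if the name was free and has now been claimed for one second. */
    public boolean tryUse(String name) {
        // asMap() is a ConcurrentMap view, so the check-and-insert is atomic.
        return recentNames.asMap().putIfAbsent(name, Boolean.TRUE) == null;
    }
}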
How about creating a Map where the item expires using a thread executor
// Declare your map and executor service (a ConcurrentHashMap, since both the caller and the executor thread touch it)
final Map<String, ScheduledFuture<String>> cacheNames = new ConcurrentHashMap<String, ScheduledFuture<String>>();
ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
You can then have a method that adds the cache name to your collection and removes it after it has expired; in this example it's one second. I know it seems like quite a bit of code, but it can be quite an elegant solution in just a couple of methods.
ScheduledFuture<String> task = executorService.schedule(new Callable<String>() {
    @Override
    public String call() {
        cacheNames.remove("unique_string");
        return "unique_string";
    }
}, 1, TimeUnit.SECONDS);

cacheNames.put("unique_string", task);
A simple unique string pattern which doesn't repeat
private static final AtomicLong COUNTER = new AtomicLong(System.currentTimeMillis()*1000);
public static String generateId() {
return Long.toString(COUNTER.getAndIncrement(), 36);
}
This won't repeat even if you restart your application.
Note: It will repeat after:
you restart and you have been generating over one million ids per second.
after 293 years. If this is not long enough you can reduce the 1000 to 100 and get 2930 years.
It depends on whether you need a strict time condition or a soft one (like 1 sec +/- 20 ms).
Also on whether you need discrete cache invalidation or invalidation 'per call'.
For strict conditions I would suggest adding a distinct thread which invalidates the cache every 20 milliseconds.
You can also store a timestamp with each key and check whether it has expired on access.
Why not store the time for which the key is blacklisted in the map (as Konoplianko hinted)?
Something like this:
private final Map<String, Long> _blacklist = new LinkedHashMap<String, Long>() {
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
        return size() > 1000;
    }
};

public boolean isBlacklisted(String key, long timeoutMs) {
    synchronized (_blacklist) {
        long now = System.currentTimeMillis();
        Long blacklistUntil = _blacklist.get(key);
        if (blacklistUntil != null && blacklistUntil >= now) {
            // still blacklisted
            return true;
        } else {
            // not blacklisted, or blacklisting has expired
            _blacklist.put(key, now + timeoutMs);
            return false;
        }
    }
}
UPDATE: Please note.
The question I asked was answered. Unfortunately for me, the issue is quite a bit bigger than the question in the title. Apart from adding new entries to the map, I have to handle updates and removals at the same time. The scenario I have in mind seems impossible to implement without one or the other:
a. deadlocks
b. complex and time-consuming checks and locks
Check the bottom of the question for final thoughts.
ORIGINAL POST:
Hi,
I've got a spring bean with a Map.
Here's what I want to use it for:
A few concurrent JMS listeners will receive messages with actions. Each action consists of two users: long userA and long userB. Each message will have its own String replyTo queue, which will be used to identify the action.
Because I cannot allow an action to execute when one of its users is participating in another action that is currently executing, I am going to use this map as a registry of what is going on and to control the execution of actions.
So let's say I receive three actions:
1. userA, userB
2. userB, userC
3. userC, userA
When first action is received the map is empty so I am going to record info about the action in it and start executing the action.
When second action is received I can see that userB is 'busy' with first action so I simply record information about the action.
Same thing for third action.
Map is going to look like this:
[userA:[action1, action3],
userB:[action1, action2],
userC:[action2, action3]]
Once first action is complete I will remove information about it from the registry and get info about next actions for userA and userB [action3, action2]. Then I will try to restart them.
I think by now you get what I want to do with this map.
Because the map is going to be accessed from several threads at the same time, I have to handle synchronization somehow.
I will have methods to add new information to the map and to remove info from the map when an action is done. The remove method will return the next actions [if there are any] for the two users whose action just finished.
Because there could be hundreds of actions executing at the same time, and the percentage of actions with busy users is expected to be low, I don't want to block access to the map for every add/remove operation.
I thought about synchronizing access only to each of the Lists within the Map, to allow concurrent access to several user entries at the same time. However, when there are no actions left for a user I want to remove that user's entry from the map, and when a user has no entry in the map I will have to create one. I am a little bit afraid there could be clashes in there somewhere.
What would be the best way to handle this scenario?
Is making both methods - add and remove - synchronized (which I consider the worst case scenario) the only proper [safe] way to do it?
Additionally I will have another map which will contain action id as keys and user ids as values so it's easier to identify/remove user pairs. I believe I can skip synchronization on this one since there's no scenario where one action would be executed twice at the same time.
Although the code is in Groovy, I believe no Java programmer will find it difficult to read; it is Java behind it.
Please consider the following as pseudo code, as I am just prototyping.
class UserRegistry {

    // ['actionA':[userA, userB]]
    // ['actionB':[userC, userA]]
    // ['actionC':[userB, userC]]
    private Map<String, List<Long>> messages = [:]

    /**
     * ['userA':['actionA', 'actionB'],
     * ['userB':['actionA', 'actionC'],
     * ['userC':['actionB', 'actionC']
     */
    private Map<Long, List<String>> users = [:].asSynchronized()
    /**
     * Function will add entries for users and action to the registry.
     * @param userA
     * @param userB
     * @param action
     * @return true if a new entry was added, false if entries for at least one user already existed
     */
    public boolean add(long userA, long userB, String action) {
        boolean userABusy = users.containsKey(userA)
        boolean userBBusy = users.containsKey(userB)
        boolean retValue
        if (userABusy || userBBusy) {
            if (userABusy) {
                users.get(userA).add(action)
            } else {
                users.put(userA, [action].asSynchronized())
            }
            if (userBBusy) {
                users.get(userB).add(action)
            } else {
                users.put(userB, [action].asSynchronized())
            }
            messages.put(action, [userA, userB])
            retValue = false
        } else {
            users.put(userA, [action].asSynchronized())
            users.put(userB, [action].asSynchronized())
            messages.put(action, [userA, userB])
            retValue = true
        }
        return retValue
    }
    public List remove(String action) {
        if (!messages.containsKey(action)) throw new Exception("we're screwed, I'll figure this out later")
        List nextActions = []
        long userA = messages.get(action).get(0)
        long userB = messages.get(action).get(1)
        if (users.get(userA).size() > 1) {
            users.get(userA).remove(0)
            nextActions.add(users.get(userA).get(0))
        } else {
            users.remove(userA)
        }
        if (users.get(userB).size() > 1) {
            users.get(userB).remove(0)
            nextActions.add(users.get(userB).get(0))
        } else {
            users.remove(userB)
        }
        messages.remove(action)
        return nextActions
    }
}
EDIT
I thought about this solution last night, and it seems that the messages map could go away and the users map would become:
Map<String, List<UserRegistryEntry>> users
where
UserRegistryEntry:
String actionId
boolean waiting
now let's assume I get these actions:
action1: userA, userC
action2: userA, userD
action3: userB, userC
action4: userB, userD
This means that action1 and action4 can be executed simultaneously and action2 and action3 are blocked. Map would look like this:
[
[userAId: [actionId: action1, waiting: false],[actionId: action2, waiting: true]],
[userBId: [actionId: action3, waiting: true], [actionId: action4, waiting: false]],
[userCId: [actionId: action1, waiting: false],[actionId: action3, waiting: true]],
[userDId: [actionId: action2, waiting: true], [actionId: action4, waiting: false]]
]
This way, when action execution is finished I remove entry from the map using:
userAId, userBId, actionId
And take details about first non blocked waiting action on userA and userB [if there are any] and pass them for execution.
So here are the two methods I will need, one to write data to the map and one to remove it:
public boolean add(long userA, long userB, String action) {
    boolean userAEntryExists = users.containsKey(userA)
    boolean userBEntryExists = users.containsKey(userB)
    boolean actionWaiting = false
    UserRegistryEntry userAEntry = new UserRegistryEntry(actionId: action, waiting: false)
    UserRegistryEntry userBEntry = new UserRegistryEntry(actionId: action, waiting: false)
    if (userAEntryExists || userBEntryExists) {
        if (userAEntryExists) {
            for (entry in users.get(userA)) {
                if (!entry.waiting) {
                    userAEntry.waiting = true
                    userBEntry.waiting = true
                    actionWaiting = true
                    break
                }
            }
        }
        if (!actionWaiting && userBEntryExists) {
            for (entry in users.get(userB)) {
                if (!entry.waiting) {
                    userAEntry.waiting = true
                    userBEntry.waiting = true
                    actionWaiting = true
                    break
                }
            }
        }
    }
    if (userAEntryExists) {
        users.get(userA).add(userAEntry)
    } else {
        users.put(userA, [userAEntry])
    }
    if (userBEntryExists) {
        users.get(userB).add(userBEntry)
    } else {
        users.put(userB, [userBEntry])
    }
    return actionWaiting
}
And for removes:
public List remove(long userA, long userB, String action) {
    List<String> nextActions = []
    finishActionAndReturnNew(userA, action, nextActions)
    finishActionAndReturnNew(userB, action, nextActions)
    return nextActions
}

private def finishActionAndReturnNew(long userA, String action, List<String> nextActions) {
    boolean userRemoved = false
    boolean actionFound = false
    Iterator itA = users.get(userA).iterator()
    while (itA.hasNext()) {
        UserRegistryEntry entry = itA.next()
        if (!userRemoved && entry.actionId == action) {
            itA.remove()
        } else {
            if (!actionFound && isUserFree(entry.otherUser)) {
                nextActions.add(entry.actionId)
            }
        }
        if (userRemoved && actionFound) break
    }
}
public boolean isUserFree(long userId) {
    boolean userFree = true
    if (!users.containsKey(userId)) return true
    for (entry in users.get(userId)) {
        if (!entry.waiting) userFree = false
    }
    return userFree
}
FINAL THOUGHT:
This scenario is a killer:
[ActionID, userA,userB]
[a, 1,2]
[b, 1,3]
[c, 3,4]
[d, 3,1]
Action a and c are executed simultaneously, b and d are waiting.
When a and c are done, the entries for users 1, 2, 3 and 4 will have to be removed, so one thread will have 1 and 2 locked while the other thread has 3 and 4 locked. While these users are locked, a check for the next action for each of them has to be performed. When the code determines that for user 1 the next action is with user 3, and for user 3 the next action is with user 1, they will try to lock them. This is when the deadlock happens. I know I could code around that, but it seems it would take a lot of time to execute and it would block two workers.
For now I will ask another question on SO, more focused on the actual subject of my issue, and try to prototype a solution using JMS in the meantime.
You may need to review how synchronized (collections) work again:
This (as a non-exclusive example) is not thread-safe:
if (users.get(userA).size() > 1) {
users.get(userA).remove(0)
Remember that only individual "synchronized" methods are guaranteed atomic without a larger lock scope.
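For illustration, the compound check-then-act from the question's remove() would need to sit under one lock, something like this Java-style sketch (locking the whole map is the simplest correct option, even if it is coarse):

synchronized (users) {
    List<String> actions = users.get(userA);
    if (actions != null && actions.size() > 1) {
        actions.remove(0);
        // ... read the next action while still holding the lock
    } else {
        users.remove(userA);
    }
}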
Happy coding.
Edit - per-user synchronization locks (updated for comment):
Just by using the standard data-structures you can achieve per-key locks by using ConcurrentHashMap -- in particular by using the 'putIfAbsent' method. (This is significantly different than just using get/put of a 'synchronized HashMap', see above.)
Below is some pseudo-code and notes:
public boolean add(long userA, long userB, String action) {
    // The put-if-absent ensures *the same* object is used, but may be violated when:
    // - users is re-assigned
    // - the following approach is violated
    // A new list is created if needed and the existing list is returned if it
    // already exists (as per the method name). Note: in real Java, putIfAbsent
    // returns null when there was no previous mapping, so that case must be
    // handled (see the sketch below).
    // Since we have synchronized manually here, these lists
    // themselves do not need to be synchronized, provided:
    // access is consistently protected across the "higher"
    // structure (per user-entry in the map) when using this approach.
    List listA = users.putIfAbsent(userA, new List)
    List listB = users.putIfAbsent(userB, new List)

    // The locks must be ordered consistently so that
    // an A B / B A deadlock does not occur.
    Object lock1, lock2
    if (userA < userB) {
        lock1 = listA; lock2 = listB
    } else {
        lock1 = listB; lock2 = listA
    }

    synchronized (lock1) { synchronized (lock2) { // start locks
        // The rest of the code can be simplified, since the
        // list entries are already *guaranteed* to exist there is no
        // need to alternate between add and creating a new list.
        bool eitherUserBusy = listA.length > 0 || listB.length > 0
        listA.add(action)
        listB.add(action)

        // make sure messages allows thread-safe access as well
        messages.put(action, [userA, userB])
        return !eitherUserBusy
    }} // end locks
}
I have no idea how this fares under your usage scenario vs. a single common lock object. It is often advisable to go with "simpler" unless there is a clear advantage to doing otherwise.
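For anyone translating the pseudo-code above into real Java: putIfAbsent returns null when there was no previous mapping, so the "new or existing list" step needs one extra line (or computeIfAbsent on Java 8+). A compilable sketch of the same per-key, ordered-lock idea follows; the class and field names are illustrative:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PerUserLockingRegistry {
    private final ConcurrentMap<Long, List<String>> users =
            new ConcurrentHashMap<Long, List<String>>();

    public boolean add(long userA, long userB, String action) {
        List<String> listA = listFor(userA);
        List<String> listB = listFor(userB);

        // Always lock in the same order so an A/B vs B/A deadlock cannot occur.
        List<String> first  = (userA < userB) ? listA : listB;
        List<String> second = (userA < userB) ? listB : listA;

        synchronized (first) {
            synchronized (second) {
                boolean eitherUserBusy = !listA.isEmpty() || !listB.isEmpty();
                listA.add(action);
                listB.add(action);
                return !eitherUserBusy;
            }
        }
    }

    // putIfAbsent returns the previous list if one existed, otherwise null,
    // in which case our freshly created list is the one now in the map.
    private List<String> listFor(long user) {
        List<String> fresh = new ArrayList<String>();
        List<String> existing = users.putIfAbsent(user, fresh);
        return existing != null ? existing : fresh;
    }
}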
HTH and Happy coding.
You might want to check out Collections.synchronizedMap() or Collections.synchronizedList()
You have two global state-holders in the class and compound actions in each of the two methods that modify both of them. So even if we changed the Maps to ConcurrentHashMaps and the List to something like CopyOnWriteArrayList, it would still not guarantee a consistent state.
I see that you will be writing to the List often, so CopyOnWriteArrayList might be too expensive anyway. ConcurrentHashMap is only 16-way striped by default. If you have better hardware, an alternative would be Cliff Click's high-scale-lib (after appropriate locking in the methods).
Back to the consistency question: how about using a ReentrantLock instead of synchronized, and seeing whether you can exclude some statements from the lock()-to-unlock() sequence? If you go with a ConcurrentMap, the first two statements in add() that do containsKey() can be optimistic, and you may be able to exclude them from the lock block.
Do you really need the messages map? It is kind of an inverse index of users. One other option would be to have a watch() method that periodically updates the messages map based on a signal from add() after a change to users. The refresh could alternatively be completely asynchronous. In doing that, you might be able to use a ReadWriteLock with readLock() on users while you update messages. In this situation, add() can safely acquire a writeLock() on users. It is just some more work to get this reasonably correct.
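A small sketch of that last suggestion, assuming users is guarded by one ReentrantReadWriteLock and messages is rebuilt asynchronously (all names here are illustrative):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegistryLocking {
    private final ReadWriteLock usersLock = new ReentrantReadWriteLock();

    public void add(long userA, long userB, String action) {
        usersLock.writeLock().lock();
        try {
            // mutate the users map here
        } finally {
            usersLock.writeLock().unlock();
        }
    }

    public void refreshMessages() {
        usersLock.readLock().lock();
        try {
            // rebuild the messages index from a consistent view of users
        } finally {
            usersLock.readLock().unlock();
        }
    }
}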