I'm looking for a collection that:
is a Deque/List - i.e. supports inserting elements at "the top" (newest items go to the top) - deque.addFirst(..) / list.add(0, ..). It could be a Queue, but the iteration order should be reverse - i.e. the most recently added items should come first.
is bounded - i.e. has a limit of 20 items
auto-discards the oldest items (those "at the bottom", added first) when the capacity is reached
non-blocking - if the deque is empty, retrievals should not block. It should also not block / return false / null / throw an exception if the deque is full.
concurrent - multiple threads should be able to operate on it
I can take LinkedBlockingDeque and wrap it into my custom collection that, on add operations checks the size and discards the last item(s). Is there a better option?
I made this simple implementation:
public class AutoDiscardingDeque<E> extends LinkedBlockingDeque<E> {
public AutoDiscardingDeque() {
super();
}
public AutoDiscardingDeque(int capacity) {
super(capacity);
}
@Override
public synchronized boolean offerFirst(E e) {
if (remainingCapacity() == 0) {
removeLast();
}
super.offerFirst(e);
return true;
}
}
For my needs this suffices, but it should be well documented that methods other than addFirst / offerFirst still follow the semantics of a blocking deque.
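For illustration, a quick usage sketch of the class above (the capacity of 3 is just an example value); the newest item sits at the head, and pushing past the capacity silently evicts the oldest item from the tail:
AutoDiscardingDeque<Integer> deque = new AutoDiscardingDeque<>(3);
for (int i = 1; i <= 5; i++) {
    deque.offerFirst(i);               // 4 and 5 evict 1 and 2
}
System.out.println(deque);             // [5, 4, 3] - newest first
System.out.println(deque.pollFirst()); // 5; an empty deque would return null without blocking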
I believe what you're looking for is a bounded stack. There isn't a core library class that does this, so I think the best way of doing this is to take a non-synchronized stack (LinkedList) and wrap it in a synchronized collection that does the auto-discard and returns null on empty pop. Something like this:
import java.util.Iterator;
import java.util.LinkedList;
public class BoundedStack<T> implements Iterable<T> {
private final LinkedList<T> ll = new LinkedList<T>();
private final int bound;
public BoundedStack(int bound) {
this.bound = bound;
}
public synchronized void push(T item) {
ll.push(item);
if (ll.size() > bound) {
ll.removeLast();
}
}
public synchronized T pop() {
return ll.poll();
}
public synchronized Iterator<T> iterator() {
return ll.iterator();
}
}
...adding methods like isEmpty as required, if you want it to implement e.g. List.
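A quick usage sketch of the stack above (the bound of 3 and the string values are arbitrary), showing the auto-discard and the null returned on an empty pop:
BoundedStack<String> stack = new BoundedStack<>(3);
stack.push("a");
stack.push("b");
stack.push("c");
stack.push("d");                 // "a" falls off the bottom
for (String s : stack) {
    System.out.print(s + " ");   // d c b - most recent first
}
System.out.println();
System.out.println(stack.pop()); // d
System.out.println(stack.pop()); // c
System.out.println(stack.pop()); // b
System.out.println(stack.pop()); // null - empty, does not block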
The simplest and classic solution is a bounded ring buffer that overwrites the oldest elements.
The implementation is rather easy. You need one AtomicInteger/AtomicLong for the index plus an AtomicReferenceArray, and you have a lock-free, general-purpose stack with only two methods, offer/poll, and no size(). Most concurrent/lock-free structures have hardships with size(). A non-overwriting stack can have O(1) size, but with an allocation on put.
Something along the lines of:
package bestsss.util;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;
public class ConcurrentArrayStack<E> extends AtomicReferenceArray<E>{
//easy to extend and avoids indirections,
//feel free to wrap the ConcurrentArrayStack in a field instead if you prefer composition
final AtomicLong index = new AtomicLong(-1);
public ConcurrentArrayStack(int length) {
super(length);
}
/**
* @param e the element to offer into the stack
* @return the previously evicted element
*/
public E offer(E e){
for (;;){
long i = index.get();
//get the result, CAS expect before the claim
int idx = idx(i+1);
E result = get(idx);
if (!index.compareAndSet(i, i+1))//claim index spot
continue;
if (compareAndSet(idx, result, e)){
return result;
}
}
}
private int idx(long idx){//can/should use golden ratio to spread the index around and reduce false sharing
return (int)(idx%length());
}
public E poll(){
for (;;){
long i = index.get();
if (i==-1)
return null;
int idx = idx(i);
E result = get(idx);//get before the claim
if (!index.compareAndSet(i, i-1))//claim index spot
continue;
if (compareAndSet(idx, result, null)){
return result;
}
}
}
}
Last note:
the mod operation is an expensive one, so a power-of-two capacity is preferred; then the index can be mapped via & (length() - 1) instead of %, which also guards against long overflow.
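A sketch of the idx() variant that note describes (it assumes the stack is only ever constructed with a power-of-two length):
private int idx(long idx) {
    // length() must be a power of two; the mask replaces the costly % and
    // stays correct even if the long index wraps around
    return (int) (idx & (length() - 1));
}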
Here is an implementation that handles concurrency and never returns null.
import com.google.common.base.Optional;
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.locks.ReentrantLock;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
public class BoundedStack<T> {
private final Deque<T> list = new ConcurrentLinkedDeque<>();
private final int maxEntries;
private final ReentrantLock lock = new ReentrantLock();
public BoundedStack(final int maxEntries) {
checkArgument(maxEntries > 0, "maxEntries must be greater than zero");
this.maxEntries = maxEntries;
}
public void push(final T item) {
checkNotNull(item, "item must not be null");
lock.lock();
try {
list.push(item);
if (list.size() > maxEntries) {
list.removeLast();
}
} finally {
lock.unlock();
}
}
public Optional<T> pop() {
lock.lock();
try {
return Optional.fromNullable(list.poll());
} finally {
lock.unlock();
}
}
public Optional<T> peek() {
return Optional.fromNullable(list.peekFirst());
}
public boolean empty() {
return list.isEmpty();
}
}
For the solution @remery gave, could you not run into a race condition where, after if (list.size() > maxEntries), you erroneously remove the last element because another thread ran pop() in that window and the list is now within capacity, given there is no thread synchronization across pop() and push(final T item)?
For the solution @Bozho gave, I would think a similar scenario could be possible? The synchronization is happening on the AutoDiscardingDeque and not with the ReentrantLock inside LinkedBlockingDeque, so after running remainingCapacity() another thread could remove some objects from the list and the removeLast() would remove an extra object.
Related
So I have this piece of Java code written with recursion.
package test;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.IntStream;
public class Main {
public static void main(String[] args) {
List<AllocationCandidateSet> candidates = new ArrayList<>();
IntStream.range(0, 5000).forEach(__ -> candidates.add(new AllocationCandidateSet()));
AllocationContext ctx = new AllocationContext();
ctx.currentPlacementZoneAllocationCandidatesProvider = candidates.iterator();
Main main = new Main();
main.attemptAllocation(ctx).join();
System.out.println(ctx.firstSuccessfulCandidateSet);
System.out.println(ctx.iteration);
}
private CompletableFuture<AllocationContext> attemptAllocation(AllocationContext context) {
return CompletableFuture.completedFuture(context)
.thenCompose(this::getNextCandidateSet)
.thenCompose(this::runAffinityFilters)
.thenCompose(ctx -> {
if (ctx.firstSuccessfulCandidateSet == null
&& context.currentPlacementZoneAllocationCandidatesProvider.hasNext()) {
return attemptAllocation(ctx);
} else {
return CompletableFuture.completedFuture(ctx);
}
});
}
private CompletableFuture<AllocationContext> getNextCandidateSet(AllocationContext ctx) {
// For the sake of simplicity, I omitted most of the logic inside this method.
ctx.currentCandidateSet = ctx.currentPlacementZoneAllocationCandidatesProvider.next();
ctx.iteration++;
return CompletableFuture.completedFuture(ctx);
}
private CompletableFuture<AllocationContext> runAffinityFilters(AllocationContext ctx) {
// This is a long running async operation doing DB calls
return CompletableFuture.completedFuture(ctx)
.thenCompose(context -> {
// Do some DB calls and run business logic and evaluate if the current AllocationCandidateSet is successful
boolean success = context.iteration == 1876;
return CompletableFuture.completedFuture(success);
})
.thenAccept(success -> {
if (success) {
ctx.firstSuccessfulCandidateSet = ctx.currentCandidateSet;
}
})
.thenApply(__ -> ctx);
}
public static class AllocationContext {
Iterator<AllocationCandidateSet> currentPlacementZoneAllocationCandidatesProvider;
AllocationCandidateSet currentCandidateSet;
AllocationCandidateSet firstSuccessfulCandidateSet;
int iteration = 0;
}
public static class AllocationCandidateSet {
}
}
Results are accumulated inside the AllocationContext, which is a thread-safe object holder that contains the Iterator context.currentPlacementZoneAllocationCandidatesProvider.
The iterator is a normal one - having next() and hasNext() methods.
The getNextCandidateSet() method advances the iterator forward. This method is actually sync as opposed to async so theoretically .thenCompose(this::getNextCandidateSet) can be replaced with .thenApply(this::getNextCandidateSet) with little refactoring.
The problem I'm having is with the runAffinityFilters() method... This method is doing async calls to the database, which must be non-blocking. Its signature returns CompletableFuture<AllocationContext> and it's not feasible to change it.
Therefore, calls to runAffinityFilters() are chained in this CompletableFuture chain with recursion - after each runAffinityFilters() call, a check for a successful result is made: if running the affinity filters yielded no success (ctx.firstSuccessfulCandidateSet == null) and the iterator can be advanced (context.currentPlacementZoneAllocationCandidatesProvider.hasNext()), then attemptAllocation(ctx) is invoked again.
Is there a way to replace the recursion with regular iteration? With the recursive solution I'm hitting StackOverflowError near the 1000th loop...
It would be ideal to keep running runAffinityFilters() as an async call, waiting for it to complete, checking the condition if (ctx.firstSuccessfulCandidateSet == null && context.currentPlacementZoneAllocationCandidatesProvider.hasNext()) and then continuing with the next iteration.
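One possible way to keep the stack flat, sketched under the assumption that the signatures above stay as they are (the method names attemptAllocationLoop and attemptStep are made up here): hold the final result in an externally completed future and re-dispatch each pass with an *Async method, so that callbacks of synchronously completed futures no longer pile up on the caller's stack.
private CompletableFuture<AllocationContext> attemptAllocationLoop(AllocationContext context) {
    CompletableFuture<AllocationContext> result = new CompletableFuture<>();
    attemptStep(context, result);
    return result;
}

private void attemptStep(AllocationContext ctx, CompletableFuture<AllocationContext> result) {
    getNextCandidateSet(ctx)
            .thenCompose(this::runAffinityFilters)
            .whenCompleteAsync((c, ex) -> {   // runs as a fresh task, so the stack stays shallow
                if (ex != null) {
                    result.completeExceptionally(ex);
                } else if (c.firstSuccessfulCandidateSet == null
                        && c.currentPlacementZoneAllocationCandidatesProvider.hasNext()) {
                    attemptStep(c, result);   // logically the next loop iteration
                } else {
                    result.complete(c);
                }
            });
}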
How can I provide synchronization based on method parameter values?
All method calls using the 'same' parameter value A should be synchronized. A method call with a different parameter value, e.g. B, can proceed, even when calls with A are already waiting. The next concurrent call for B must also wait for the first B to be released.
My use case: I want to synchronize the access to JPA entities on ID level but want to avoid pessimistic locking because I need kind of a queue. The 'key' for locking is intended to be the entity ID - which is in fact of the type Java Long.
protected void entityLockedAccess(SomeEntity myEntity) {
//getId() returns different Long objects so the lock does not work
synchronized (myEntity.getId()) {
//the critical section ...
}
}
I read about lock objects but I am not sure how they would suit in my case.
On the top level I want to manage a specific REST call to my application which executes critical code.
Thanks,
Chris
As far as I understood you basically want a different, unique lock for each of your SomeEntity IDs.
You could realize this with a Map<Integer, Object>.
You simply map each ID to an object. Should there already be an object, you reuse it. This could look something like this:
static Map<Integer, Object> locks = new ConcurrentHashMap<>();
public static void main(String[] args)
{
int i1 = 1;
int i2 = 2;
foo(i1);
foo(i1);
foo(i2);
}
public static void foo(int o)
{
synchronized (locks.computeIfAbsent(o, k -> new Object()))
{
// computation
}
}
This will create 2 lock objects in the map as the object for i1 is reused in the second foo(i1) call.
Objects which are pooled and potentially reused should not be used for synchronization. If they are, it can cause unrelated threads to deadlock with unhelpful stacktraces.
Specifically, String literals, and boxed primitives such as Integers should NOT be used as lock objects because they are pooled and reused.
The story is even worse for Boolean objects because there are only two instances of Boolean, Boolean.TRUE and Boolean.FALSE and every class that uses a Boolean will be referring to one of the two.
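A short illustration of why boxed primitives make dangerous locks (the values here are arbitrary): small Integer values are cached, so two supposedly independent locks can be the very same object.
Integer lockA = Integer.valueOf(42);
Integer lockB = Integer.valueOf(42);
System.out.println(lockA == lockB);   // true - the same cached instance (-128..127 are always cached)
// synchronized (lockA) and synchronized (lockB) therefore contend on one monitor,
// together with any unrelated code that also locks on Integer 42.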
I read about lock objects but I am not sure how they would suit in my case. On the top level I want to manage a specific REST call to my application which executes critical code.
Your DB will take care of concurrent writes and other transactional issues.
All you need to do is use Transactions.
I would also recommend going through the classical problems (dirty reads, non-repeatable reads). You can also use optimistic locking.
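A minimal sketch of what JPA optimistic locking looks like (the entity and field names are just examples): a @Version column makes a conflicting concurrent update fail with an OptimisticLockException instead of silently overwriting the other transaction's write.
import javax.persistence.Entity;   // jakarta.persistence in newer stacks
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class SomeEntity {

    @Id
    private Long id;

    @Version            // incremented by the provider on every successful update
    private long version;

    // other fields, getters and setters omitted
}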
The problem is that you simply should not synchronize on values (for example strings, or Integer objects).
Meaning: you would need to define some special EntityId class here, and of course, all "data" that uses the same ID would somehow need to be using the same EntityId object then.
private static final Set<Integer> lockedIds = new HashSet<>();
private void lock(Integer id) throws InterruptedException {
synchronized (lockedIds) {
while (!lockedIds.add(id)) {
lockedIds.wait();
}
}
}
private void unlock(Integer id) {
synchronized (lockedIds) {
lockedIds.remove(id);
lockedIds.notifyAll();
}
}
public void entityLockedAccess(SomeEntity myEntity) throws InterruptedException {
try {
lock(myEntity.getId());
//Put your code here.
//For different ids it is executed in parallel.
//For equal ids it is executed synchronously.
} finally {
unlock(myEntity.getId());
}
}
id can be not only an 'Integer' but any class with correctly overridden 'equals' and 'hashCode' methods.
try-finally - is very important - you must guarantee to unlock waiting threads after your operation even if your operation threw exception.
It will not work if your back-end is distributed across multiple servers/JVMs.
Just use this class:
(and the map will NOT increase in size over time)
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;
public class SameKeySynchronizer<T> {
private final ConcurrentHashMap<T, Object> sameKeyTasks = new ConcurrentHashMap<>();
public void serializeSameKeys(T key, Consumer<T> keyConsumer) {
// This map will never be filled (because function returns null), it is only used for synchronization purposes for the same key
sameKeyTasks.computeIfAbsent(key, inputArgumentKey -> acceptReturningNull(inputArgumentKey, keyConsumer));
}
private Object acceptReturningNull(T inputArgumentKey, Consumer<T> keyConsumer) {
keyConsumer.accept(inputArgumentKey);
return null;
}
}
Like in this test:
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
class SameKeySynchronizerTest {
private static final boolean SHOW_FAILING_TEST = false;
@Test
void sameKeysAreNotExecutedParallel() throws InterruptedException {
TestService testService = new TestService();
TestServiceThread testServiceThread1 = new TestServiceThread(testService, "a");
TestServiceThread testServiceThread2 = new TestServiceThread(testService, "a");
testServiceThread1.start();
testServiceThread2.start();
testServiceThread1.join();
testServiceThread2.join();
Assertions.assertFalse(testService.sameKeyInProgressSimultaneously);
}
@Test
void differentKeysAreExecutedParallel() throws InterruptedException {
TestService testService = new TestService();
TestServiceThread testServiceThread1 = new TestServiceThread(testService, "a");
TestServiceThread testServiceThread2 = new TestServiceThread(testService, "b");
testServiceThread1.start();
testServiceThread2.start();
testServiceThread1.join();
testServiceThread2.join();
Assertions.assertFalse(testService.sameKeyInProgressSimultaneously);
Assertions.assertTrue(testService.differentKeysInProgressSimultaneously);
}
private class TestServiceThread extends Thread {
TestService testService;
String key;
TestServiceThread(TestService testService, String key) {
this.testService = testService;
this.key = key;
}
@Override
public void run() {
testService.process(key);
}
}
private class TestService {
private final SameKeySynchronizer<String> sameKeySynchronizer = new SameKeySynchronizer<>();
private Set<String> keysInProgress = ConcurrentHashMap.newKeySet();
private boolean sameKeyInProgressSimultaneously = false;
private boolean differentKeysInProgressSimultaneously = false;
void process(String key) {
if (SHOW_FAILING_TEST) {
processInternal(key);
} else {
sameKeySynchronizer.serializeSameKeys(key, inputArgumentKey -> processInternal(inputArgumentKey));
}
}
@SuppressWarnings("MagicNumber")
private void processInternal(String key) {
try {
boolean keyInProgress = !keysInProgress.add(key);
if (keyInProgress) {
sameKeyInProgressSimultaneously = true;
}
try {
int sleepTimeInMillis = 100;
for (long elapsedTimeInMillis = 0; elapsedTimeInMillis < 1000; elapsedTimeInMillis += sleepTimeInMillis) {
Thread.sleep(sleepTimeInMillis);
if (keysInProgress.size() > 1) {
differentKeysInProgressSimultaneously = true;
}
}
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
} finally {
keysInProgress.remove(key);
}
}
}
}
I was asked to demonstrate a Singleton class design for my assignment. The version I submitted uses Strings and works fine, but I just can't get the removeLane method to work properly with integers. Whenever I call the removeLane method in the code below, it removes the element at the index of the integer passed in instead of the element whose value matches the integer passed in. The program is supposed to print each message in the removeLane method once.
import java.util.*;
public class Race {
// store one instance
private static final Race INSTANCE = new Race(); // (this is the singleton)
List<Integer> lanes = new ArrayList<>();
public static Race getInstance() { // callers can get to
return INSTANCE; // the instance
}
private Race() {
lanes.add(1);
lanes.add(2);
}
public void removeLane(int lane) {
if(lanes.contains(lane)){
lanes.remove(lane);
System.out.println("Lane successfully reserved.");
} else {
System.out.println("Lane is already reserved.");
}
}
public static void main(String[] args) {
assignLane(1);
assignLane(1);
}
private static void assignLane(int lane) {
Race race = Race.getInstance();
race.removeLane(lane);
}
}
I'm wondering if I'm wasting my time trying to go this route or is there a way to fix it?
Integer integer = Integer.valueOf(lane);
lanes.remove(integer);
Your lanes is an ArrayList of Integer objects, not int. Passing an int to ArrayList.remove(int index) removes the element at that index, but if you pass an Integer object, remove(Object o) deletes the first occurrence of that object.
You are using a primitive type to remove the element. You can convert it to the wrapper class and do it that way. Change the removeLane method as follows:
public void removeLane(Integer lane) {
if(lanes.contains(lane)){
lanes.remove(lane);
System.out.println("Lane successfully reserved.");
}
else{
System.out.println("Lane is already reserved.");
}
}
ArrayList Docs
E remove(int index)- Removes the element at the specified position in this list.
boolean remove(Object o) - Removes the first occurrence of the specified element from this list, if it is present.
As you passed an int primitive to remove, it called remove(int index). Instead, pass an Integer object so that remove(Object o) is called, and it will work fine.
Working Code:
package stackoverflow;
import java.util.ArrayList;
import java.util.List;
public class Race {
private static final Race INSTANCE // store one instance
= new Race(); // (this is the singleton)
List<Integer> lanes = new ArrayList<>();
public static Race getInstance() { // callers can get to
return INSTANCE; // the instance
}
private Race() {
lanes.add(1);
lanes.add(2);
}
public void removeLane(int lane) {
if (lanes.contains(lane)) {
lanes.remove((Integer) lane);
System.out.println("Lane successfully reserved.");
} else {
System.out.println("Lane is already reserved.");
}
}
public static void main(String[] args) {
assignLane(1);
assignLane(1);
}
private static void assignLane(int lane) {
Race race = Race.getInstance();
race.removeLane(lane);
}
}
I would like to know what would be the best mechanism to implement a multiple Producer - single Consumer scenario, where I have to keep the current number of unprocessed requests up to date.
My first thought was to use ConcurrentLinkedQueue:
public class SomeQueueAbstraction {
private Queue<SomeObject> concurrentQueue = new ConcurrentLinkedQueue<>();
private int size;
public void add(Object request) {
SomeObject object = convertIncomingRequest(request);
concurrentQueue.add(object);
size++;
}
public SomeObject getHead() {
SomeObject object = concurrentQueue.poll();
size--;
return object;
}
// other methods
}
The problem with this is that I have to explicitly synchronize on add and size++, as well as on poll and size--, to always have an accurate size, which makes ConcurrentLinkedQueue pointless to begin with.
What would be the best way to achieve the best possible performance while maintaining data consistency?
Should I use ArrayDeque instead and explicitly synchronize, or is there a better way to achieve this?
There is sort of similar question/answer here:
java.util.ConcurrentLinkedQueue
where it is discussed how composite operations on ConcurrentLinkedQueue are naturally not atomic but there is no direct answer what is the best option for the given scenario.
Note: I am calculating size explicitly because the time complexity of the inherent .size() method is O(n).
Note 2: I am also worried that the getSize() method, which I haven't explicitly written, will add even more contention overhead. It could be called relatively frequently.
I am looking for the most efficient way to handle multiple producers - single consumer with frequent getSize() calls.
Alternative suggestion: if there were an elementId in the SomeObject structure, I could derive the current size from ConcurrentLinkedQueue.poll(), and locking would only have to be done within the mechanism that generates such an id. Add and get could then be used without additional locking. How would this fare as an alternative?
So the requirement is to report an up-to-date count of unprocessed requests, and this is requested often, which indeed makes ConcurrentLinkedQueue.size() unsuitable.
This can be done using an AtomicInteger: it is fast and is always as close to the current number of unprocessed requests as possible.
Here is an example, note some small updates to ensure that the reported size is accurate:
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
public class SomeQueueAbstraction {
private final Queue<SomeObject> concurrentQueue = new ConcurrentLinkedQueue<>();
private final AtomicInteger size = new AtomicInteger();
public boolean add(Object request) {
SomeObject object = convertIncomingRequest(request);
if (concurrentQueue.add(object)) {
size.incrementAndGet();
return true;
}
return false;
}
public SomeObject remove() {
SomeObject object = concurrentQueue.poll();
if (object != null) {
size.decrementAndGet();
}
return object;
}
public int getSize() { return size.get(); }
private SomeObject convertIncomingRequest(Object request) {
return new SomeObject(getSize());
}
class SomeObject {
int id;
SomeObject(int id) { this.id = id; }
}
}
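A small driver sketch for the multiple-producer / single-consumer scenario from the question (thread counts and loop bounds are arbitrary example values), showing the frequent, cheap getSize() reads against the class above:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class QueueSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        SomeQueueAbstraction queue = new SomeQueueAbstraction();

        // several producers
        ExecutorService producers = Executors.newFixedThreadPool(4);
        for (int p = 0; p < 4; p++) {
            producers.submit(() -> {
                for (int i = 0; i < 1_000; i++) {
                    queue.add(new Object());
                }
            });
        }

        // single consumer
        Thread consumer = new Thread(() -> {
            int consumed = 0;
            while (consumed < 4_000) {
                if (queue.remove() != null) {
                    consumed++;
                }
            }
        });
        consumer.start();

        for (int i = 0; i < 5; i++) {
            // O(1) read; it may lag the true count by a few elements, never by much
            System.out.println("unprocessed ~ " + queue.getSize());
            TimeUnit.MILLISECONDS.sleep(10);
        }

        producers.shutdown();
        consumer.join();
    }
}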
You can use an explicit lock, which means you probably won't need a concurrent queue.
public class SomeQueueAbstraction {
private Queue<SomeObject> queue = new LinkedList<>();
private volatile int size;
private Object lock = new Object();
public void add(Object request) {
SomeObject object = convertIncomingRequest(request);
synchronized(lock) {
queue.add(object);
size++;
}
}
public SomeObject getHead() {
SomeObject object = null;
synchronized(lock) {
object = queue.poll();
size--;
}
return object;
}
public int getSize() {
synchronized(lock) {
return size;
}
}
// other methods
}
This way, adding/removing elements to/from the queue and updating the size will be done safely.
Is it possible to have multiple iterators in a single collection and have each keep track independently? This is assuming no deletes or inserts after the iterators were assigned.
Yes.
Sometimes it's really annoying that answers have to be 30 characters.
Yes, it is possible. That's one reason they are iterators, and not simply methods of the collection.
For example, List iterators (defined in AbstractList) hold an int cursor for that iterator's current position. If you create multiple iterators and call next() a different number of times on each, each of them will have its cursor at a different value.
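A tiny demonstration of that independence (the values are arbitrary):
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class TwoIterators {
    public static void main(String[] args) {
        List<String> letters = Arrays.asList("a", "b", "c");

        Iterator<String> first = letters.iterator();
        Iterator<String> second = letters.iterator();

        first.next();                       // advance only the first iterator
        first.next();

        System.out.println(first.next());   // c
        System.out.println(second.next());  // a - unaffected by the first iterator
    }
}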
Yes and no. That depends on the implementation of the Iterable<T> interface.
Usually iterator() should return a new instance of a class that implements the Iterator interface; the class AbstractList implements it like this:
public Iterator<E> iterator() {
return new Itr(); // where Itr is a private inner class that implements Iterator<E>
}
If you are using standard Java classes, you may expect that it is done this way.
Otherwise you can do a simple test by calling iterator() on the object twice, iterating over the first and after that the second; if they were dependent, the second would not produce any results. But this is very unlikely.
You could do something like this:
import java.util.ArrayList;
import java.util.Iterator;
public class Miterate {
abstract class IteratorCaster<E> implements Iterable<E>, Iterator<E> {
int mIteratorIndex = 0;
public boolean hasNext() {
return mStorage.size() > mIteratorIndex;
}
public void remove() {
// element removal is not supported by this example iterator
throw new UnsupportedOperationException();
}
public Iterator<E> iterator() {
return this;
}
}
class FloatCast extends IteratorCaster<Float> {
public Float next() {
Float tFloat = Float.parseFloat((String)mStorage.get(mIteratorIndex));
mIteratorIndex ++;
return tFloat;
}
}
class StringCast extends IteratorCaster<String> {
public String next() {
String tString = (String)mStorage.get(mIteratorIndex);
mIteratorIndex ++;
return tString;
}
}
class IntegerCast extends IteratorCaster<Integer> {
public Integer next() {
Integer tInteger = Integer.parseInt((String)mStorage.get(mIteratorIndex));
mIteratorIndex ++;
return tInteger;
}
}
ArrayList<Object> mStorage;
StringCast mSC;
IntegerCast mIC;
FloatCast mFC;
Miterate() {
mStorage = new ArrayList<Object>();
mSC = new StringCast();
mIC = new IntegerCast();
mFC = new FloatCast();
mStorage.add(new String("1"));
mStorage.add(new String("2"));
mStorage.add(new String("3"));
}
Iterable<String> getStringIterator() {
return mSC;
}
Iterable<Integer> getIntegerIterator() {
return mIC;
}
Iterable<Float> getFloatIterator() {
return mFC;
}
public static void main(String[] args) {
Miterate tMiterate = new Miterate();
for (String tString : tMiterate.getStringIterator()) {
System.out.println(tString);
}
for (Integer tInteger : tMiterate.getIntegerIterator()) {
System.out.println(tInteger);
}
for (Float tFloat : tMiterate.getFloatIterator()) {
System.out.println(tFloat);
}
}
}
With the concurrent collections you can have multiple iterators in different threads even if there are inserts and deletes.
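For example (a small sketch, kept single-threaded so the output stays deterministic), ConcurrentLinkedQueue hands out weakly consistent iterators that keep working while the queue changes, and each iterator still advances on its own:
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConcurrentIterators {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        queue.add(1);
        queue.add(2);

        Iterator<Integer> first = queue.iterator();
        Iterator<Integer> second = queue.iterator();

        first.next();          // advance only the first iterator
        queue.add(3);          // modify the queue while both iterators are live
        queue.poll();          // removes 1

        System.out.println(first.next());   // 2
        System.out.println(second.next());  // no ConcurrentModificationException; may still report the
                                            // already-removed 1, since the iterator is only weakly consistent
    }
}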