I'm trying to implement a concurrent cache in Java for learning purposes.
This code is responsible for guaranteeing thread-safe operations. So, whenever a thread tries to fetch a value, if that value is not already cached, the algorithm should calculate it from the last cached one.
My problem is that I'm getting null values that are supposed to be already cached. I'm using a semaphore (though I've tried with ReentrantLock too, so I don't think that's the problem) to ensure thread-safe access to a HashMap.
Note that I would like to restrict the locked area to the smallest region possible. So I would rather not synchronize the entire method or use an already thread-safe ConcurrentMap.
Here is a complete, simple example:
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Semaphore;
public class ConcurrentCache {
private final Semaphore semaphore = new Semaphore(1);
private final Map<Integer, Integer> cache;
private int lastCachedNumber;
public ConcurrentCache() {
cache = new HashMap<Integer, Integer>();
cache.put(0, 0);
lastCachedNumber = 0;
}
public Integer fetchAndCache(int n) {
//if it's already cached, supposedly i can access it in an unlocked way
if (n <= lastCachedNumber)
return cache.get(n);
lock();
Integer number;
if (n < lastCachedNumber) { // check it again. it may be updated by another thread
number = cache.get(n);
} else {
//fetch a previous calculated number.
number = cache.get(lastCachedNumber);
if (number == null)
throw new IllegalStateException(String.format(
"this should be cached. n=%d, lastCachedNumber=%d", n,
lastCachedNumber));
for (int i = lastCachedNumber + 1; i <= n; i++) {
number = number + 1;
cache.put(i, number);
lastCachedNumber = i;
}
}
unlock();
return number;
}
private void lock() {
try {
semaphore.acquire();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
private void unlock() {
semaphore.release();
}
public static void main(String[] args) {
ConcurrentCache cachedObject = new ConcurrentCache();
for (int nThreads = 0; nThreads < 5; nThreads++) {
new Thread(new Runnable() {
@Override
public void run() {
for (int cacheValue = 0; cacheValue < 1000; cacheValue++) {
if (cachedObject.fetchAndCache(cacheValue) == null) {
throw new IllegalStateException(String.format(
"the number %d should be cached",
cacheValue));
}
}
}
}).start();
}
}
}
Thank you for your help.
Few pointers/ideas:
1) pre-size your Map when you create it to accommodate all/many of your future cached values; resizing a Map while other threads are using it is very thread-unsafe and time consuming
2) you can simplify your whole algorithm to
YourClass.get(int i) {
if (!entryExists(i)) {
lockEntry(i);
entry = createEntry(i);
putEntryInCache(i, entry);
unlockEntry(i);
}
return entry;
}
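Here is a minimal, untested Java sketch of that shape, assuming one global lock, a pre-sized HashMap (point 1), and the increment-by-one computation from the question. Note that the existence check stays inside the lock, so the plain HashMap is never read while another thread might be writing to it:
import java.util.HashMap;
import java.util.Map;
public class SimpleCache {
    // pre-sized so the map never has to rehash while it is in use (point 1)
    private final Map<Integer, Integer> cache = new HashMap<>(2048);
    private final Object lock = new Object();

    public SimpleCache() {
        cache.put(0, 0);
    }

    public Integer get(int n) {
        synchronized (lock) {               // lockEntry(i)
            Integer entry = cache.get(n);   // entryExists(i)?
            if (entry == null) {
                entry = createEntry(n);     // createEntry(i) + putEntryInCache(i, entry)
            }
            return entry;                   // unlockEntry(i) happens on exit
        }
    }

    // must be called while holding the lock; walks up from the nearest cached value
    private Integer createEntry(int n) {
        int i = n;
        while (cache.get(i - 1) == null) {
            i--;
        }
        Integer value = cache.get(i - 1);
        for (; i <= n; i++) {
            value = value + 1;
            cache.put(i, value);
        }
        return value;
    }
}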
Edit
Another point:
3) your approach to caching is very bad - imagine what will happen if the 1st request is to get something at position 1,000,000?
Pre-populating in a separate thread is going to be a lot better...
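For example, a rough sketch of pre-populating on a separate thread, reusing the question's ConcurrentCache with a made-up upper bound of 1000 (the surrounding method would have to handle InterruptedException):
final ConcurrentCache cachedObject = new ConcurrentCache();
// fill the cache on a background thread; join() before the readers start so
// the writes are guaranteed to be visible to them
Thread warmup = new Thread(new Runnable() {
    @Override
    public void run() {
        for (int i = 1; i <= 1000; i++) {   // assumed upper bound
            cachedObject.fetchAndCache(i);
        }
    }
});
warmup.start();
warmup.join();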
Related
I am writing a thread-safe counter. When I test it and the threads run one after the other, everything works correctly. But when threads enter the increment() method at the same time, the counter does not work properly. The reason is not clear to me, since I am using atomic variables.
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
public class CASCount {
private final AtomicReference<Integer> count = new AtomicReference<>(0);
private AtomicInteger oldValue = new AtomicInteger(0);
private AtomicInteger newValue = new AtomicInteger(0);
public void increment() {
do {
oldValue.set(count.get());
System.out.println(oldValue + " old");
if (oldValue.get() == -1) {
throw new UnsupportedOperationException("Count is not impl.");
}
newValue.incrementAndGet();
System.out.println(newValue + " new");
} while (!count.compareAndSet(oldValue.get(), newValue.get()));
}
public int get() {
int result = -1;
result = count.get();
if (result == -1) {
throw new UnsupportedOperationException("Count is not impl.");
}
return result;
}
}
@Test
public void whenUseCASCount() throws InterruptedException {
CASCount count = new CASCount();
Thread one = new Thread(() -> {
for (int i = 0; i < 5; i++) {
System.out.println("one");
count.increment();
}
});
Thread two = new Thread(() -> {
for (int i = 0; i < 5; i++) {
System.out.println("two");
count.increment();
}
});
one.start();
two.start();
one.join();
two.join();
assertThat(count.get(), is(10));
}
Here is my solution:
private final AtomicReference<Integer> count = new AtomicReference<>(0);
public void increment() {
int current, next;
do {
current = count.get();
next = current + 1;
} while (!count.compareAndSet(current, next));
}
public int get() {
return count.get();
}
TL;DR - Make your increment method synchronized.
Details - Even though you have atomic variables that you use, that does not mean that your class is thread safe. It's not safe because there can be (and are) race conditions between the checks and increments for your variables.
do {
oldValue.set(count.get());
System.out.println(oldValue + " old");
if (oldValue.get() == -1) {
throw new UnsupportedOperationException("Count is not impl.");
}
newValue.incrementAndGet(); <--- between here
System.out.println(newValue + " new");
} while (!count.compareAndSet(oldValue.get(), newValue.get())); <--- and here
A typical case of check-then-act race condition.
This happens because your atomic variables can be accessed by multiple threads and their shared state can mutate from one thread and not be seen in another.
To preserve state consistency, update related state variables in a single
atomic operation.
- Java Concurrency in Practice
Hence, we use intrinsic locks (the built-in synchronized keyword) to make the method safe when multiple threads access it. The shared state then cannot change out from under a thread, because each thread enters the increment method one at a time.
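A minimal sketch of that fix; to keep it simple this drops the atomic wrappers and uses a plain int, since synchronized already provides both atomicity and visibility:
public class SyncCount {
    private int count = 0;

    // only one thread at a time can be inside increment(), so the
    // read-modify-write sequence is effectively a single atomic step
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}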
This is for learning purposes.
Imagine I want to calculate prime numbers and use a ThreadPoolExecutor to do so.
Below you can see my current implementation, which is kind of silly.
My structure:
I generate numbers in a certain range.
For each generated number, create a task to check whether the given number is a prime.
If it is a prime, the result of the operation is the number, else it is null.
A collector goes through the result list and checks whether there is a number or null. If it is a number, it writes that number to a certain file (here: sorted by number of digits).
What I would like to do instead: If the number to be checked in the task is not a prime, delete my future from the list/cancel it. As far as I know, only the Executor itself can cancel a Future.
What I want is the task itself to say "Hey, I know my result is no use to you, so please ignore me while iterating through the list".
I do not know how to do so.
What I do right now (relevant part):
final List<Future<Long>> resultList = new ArrayList<>();
final BlockingQueue<Runnable> workingQueue = new ArrayBlockingQueue<>(CAPACITY);
final ExecutorService exec = new ThreadPoolExecutor(
Runtime.getRuntime().availableProcessors() - 2,
Runtime.getRuntime().availableProcessors() - 1,
5, TimeUnit.SECONDS,
workingQueue,
new ThreadPoolExecutor.CallerRunsPolicy()
);
for (long i = GENERATEFROM; i <= GENERATETO; i++) {
Future<Long> result = exec.submit(new Worker(i));
resultList.add(result);
}
Collector collector = new Collector(resultList,GENERATETO);
collector.start();
exec.shutdown();
A Worker is there to execute one task (is it a prime number?):
import java.util.concurrent.Callable;
public class Worker implements Callable<Long> {
private long number;
public Worker(long number) {
this.number = number;
}
//checks whether an int is prime or not.
boolean isPrime(long n) {
//check if n is a multiple of 2
if (n % 2 == 0) return false;
//if not, then just check the odds
for (long i = 3; i * i <= n; i += 2) {
if (n % i == 0)
return false;
}
return true;
}
@Override
public Long call() throws Exception {
if (isPrime(number)) {
return number;
}
return null;
}
}
And, for the sake of completeness, my collector:
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
public class Collector {
private List<Future<Long>> primeNumbers;
private long maxNumberGenerated;
private HashMap<Integer, PrintWriter> digitMap;
private final long maxWaitTime;
private final TimeUnit timeUnit;
public Collector(List<Future<Long>> primeNumbers, long maxNumberGenerated) {
this.primeNumbers = primeNumbers;
this.maxNumberGenerated = maxNumberGenerated;
this.digitMap = new HashMap<>();
this.maxWaitTime = 1000;
this.timeUnit = TimeUnit.MILLISECONDS;
}
public void start() {
try {
//create Files
int filesToCreate = getDigits(maxNumberGenerated);
for (int i = 1; i <= filesToCreate; i++) {
File f = new File(System.getProperty("user.dir") + "/src/solutionWithExecutor/PrimeNumsWith_" + i +
"_Digits.txt");
PrintWriter pw = new PrintWriter(f, "UTF-8");
digitMap.put(i, pw);
}
for (Future<Long> future : primeNumbers) {
Object possibleNumber = future.get();
if (possibleNumber != null) {
long numberToTest = (long) possibleNumber;
int numOfDigits = getDigits(numberToTest);
PrintWriter correspondingFileWriter = digitMap.get(numOfDigits);
correspondingFileWriter.println(possibleNumber.toString());
correspondingFileWriter.flush();
}
}
for (PrintWriter fw : digitMap.values()) {
fw.close();
}
} catch (InterruptedException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
}
private int getDigits(long maxNumberGenerated) {
return String.valueOf(maxNumberGenerated).length();
}
}
What I would like to do instead: If the number to be checked in the task is not a prime, delete my future from the list/cancel it. As far as I know, only the Executor itself can cancel a Future.
To me this seems like an unnecessary optimization. The Future is there so that the task can return a value. Once the task figures out it's not a prime and returns null the "cost" to the program associated with the Future is negligible. There is nothing to "cancel". That task has completed and all that is left is the memory that allows the Future to pass back the null or the prime Long.
Since we are talking about learning: in many situations programmers worry too quickly about performance, and we often spend time optimizing parts of our application which really aren't the problem. If I saw using some JVM monitor (maybe jconsole) that the application was running out of memory then I might worry about the list of Futures, but otherwise I'd write clean and easily maintained code.
If you really are worried about the Future objects then don't save them in a list at all and just share a BlockingQueue<Long> between the prime-checking tasks and the main thread. The prime-checking jobs would add(...) to the queue and the main thread would take(). You should still add something for non-primes (a BlockingQueue won't accept null, so use a sentinel value), because otherwise you wouldn't know when the prime tasks were done unless you counted the results. You'd want to check X numbers, and then you'll know it is done when X results (sentinels or numbers) have been taken from the BlockingQueue<Long>.
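A rough sketch of that variant, assuming the isPrime check from the Worker class is available as a static helper, a made-up range and pool size, and a surrounding method that may throw InterruptedException:
BlockingQueue<Long> results = new LinkedBlockingQueue<>();
ExecutorService exec = Executors.newFixedThreadPool(4);      // assumed pool size
long from = 2, to = 10_000;                                   // assumed range
for (long i = from; i <= to; i++) {
    final long candidate = i;
    // -1 is a sentinel for "not prime"; the queue cannot hold null
    exec.execute(() -> results.add(isPrime(candidate) ? candidate : -1L));
}
long expected = to - from + 1;
for (long seen = 0; seen < expected; seen++) {
    long value = results.take();     // blocks until some worker has finished
    if (value != -1L) {
        System.out.println(value);   // or hand it to the Collector instead
    }
}
exec.shutdown();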
Hope this helps.
I am doing a sample program with wait() and notify(), but when notify() is called, more than one thread wakes up instead of one.
The code is:
public class MyQueue<T> {
Object[] entryArr;
private volatile int addIndex;
private volatile int pending = -1;
private final Object lock = new Object();
private volatile long notifiedThreadId;
private int capacity;
public MyQueue(int capacity) {
entryArr = new Object[capacity];
this.capacity = capacity;
}
public void add(T t) {
synchronized (lock) {
if (pending >= 0) {
try {
pending++;
lock.wait();
System.out.println(notifiedThreadId + ":" + Thread.currentThread().getId());
} catch (InterruptedException e) {
e.printStackTrace();
}
} else if (pending == -1) {
pending++;
}
}
if (addIndex == capacity) { // its ok to replace existing value
addIndex = 0;
}
try {
entryArr[addIndex] = t;
} catch (ArrayIndexOutOfBoundsException e) {
System.out.println("ARRAYException:" + Thread.currentThread().getId() + ":" + pending + ":" + addIndex);
e.printStackTrace();
}
addIndex++;
synchronized (lock) {
if (pending > 0) {
pending--;
notifiedThreadId = Thread.currentThread().getId();
lock.notify();
} else if (pending == 0) {
pending--;
}
}
}
}
public class TestMyQueue {
public static void main(String args[]) {
final MyQueue<String> queue = new MyQueue<>(2);
for (int i = 0; i < 200; i++) {
Runnable r = new Runnable() {
@Override
public void run() {
for (int i = 0; i < Integer.MAX_VALUE; i++) {
queue.add(Thread.currentThread().getName() + ":" + i);
}
}
};
Thread t = new Thread(r);
t.start();
}
}
}
After some time, I see two threads being woken up by a single thread. The output looks like:
91:114
114:124
124:198
198:106
106:202
202:121
121:40
40:42
42:83
83:81
81:17
17:189
189:73
73:66
66:95
95:199
199:68
68:201
201:70
70:110
110:204
204:171
171:87
87:64
64:205
205:115
Here I see that thread 115 notified two threads, and thread 84 notified two threads; because of this we are seeing the ArrayIndexOutOfBoundsException.
115:84
115:111
84:203
84:200
ARRAYException:200:199:3
ARRAYException:203:199:3
What is the issue in the program?
What is the issue in the program?
You have a couple of problems with your code that may be causing this behavior. First, as @Holder commented, there are a lot of code segments that can be run by multiple threads simultaneously that should be protected using synchronized blocks.
For example:
if (addIndex == capacity) {
addIndex = 0;
}
If multiple threads run this then multiple threads might see addIndex == capacity and multiple would be overwriting the 0th index. Another example is:
addIndex++;
This is a classic race condition if 2 threads try to execute this statement at the same time. If addIndex was 0 beforehand, after the 2 threads execute this statement, the value of addIndex might be 1 or 2 depending on the race conditions.
Any statements that could be executed at the same time by multiple threads have to be properly locked within a synchronized block or otherwise protected. Even though you have volatile fields, there can still be race conditions because there are multiple operations being executed.
Also, a classic mistake is to use if statements when checking for over- or under-flows on your array. They should be while statements to make sure you don't have the classic consumer-producer race conditions. See my docs here or take a look at the associated SO question: Why does java.util.concurrent.ArrayBlockingQueue use 'while' loops instead of 'if' around calls to await()?
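To illustrate that pattern (this is not a drop-in fix for MyQueue), a minimal bounded buffer where all shared state is touched only inside synchronized methods and every wait() sits in a while loop could look like this:
public class BoundedBuffer<T> {
    private final Object[] items;
    private int putIndex, takeIndex, count;

    public BoundedBuffer(int capacity) {
        items = new Object[capacity];
    }

    public synchronized void add(T t) throws InterruptedException {
        // while, not if: re-check the condition after every wake-up, because
        // a notify can wake a thread whose condition is no longer (or not yet) true
        while (count == items.length) {
            wait();
        }
        items[putIndex] = t;
        putIndex = (putIndex + 1) % items.length;
        count++;
        notifyAll();
    }

    @SuppressWarnings("unchecked")
    public synchronized T take() throws InterruptedException {
        while (count == 0) {
            wait();
        }
        T t = (T) items[takeIndex];
        takeIndex = (takeIndex + 1) % items.length;
        count--;
        notifyAll();
        return t;
    }
}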
I have a huge table, about 1M records. I want to do some processing on all of them, so the single-threaded way would be: get, say, 1000 records, process them, get another 1000 records, and so on.
But what if I want to use multithreading? That is, 2 threads, each fetching 1000 records and doing the processing in parallel. How can I make sure that each thread will fetch a different 1000 records?
Note: I am using Hibernate.
Something like this:
public void run() {
partList=getKParts(10);
operateOnList(partList);
}
Sure, you can synchronize the code.
public class MyClass {
private final HibernateFetcher hibernateFetcher = new HibernateFetcher();
private class Worker implements Runnable {
public void run() {
List partList = hibernateFetcher.fetchRecords();
operateOnList(partList);
}
}
public void myBatchProcessor() {
while(!hibernateFetcher.isFinished()) {
// create *n* workers and go!
}
}
}
class HibernateFetcher {
private int count = 0;
private final Object lock = new Object();
private volatile boolean isFinished = false;
public List fetchRecords() {
Criteria criteria = ...;
synchronized(lock) {
criteria.setFirstResult(count) // offset
.setMaxResults(1000);
count=count+1000;
}
List result = criteria.list();
isFinished = result.isEmpty();
return result;
}
public synchronized boolean isFinished(){
return isFinished;
}
}
If I understood correctly, you don't want the 1M records fetched up front, but want them in batches of 1000, processed in parallel by threads.
First you have to implement a paging-type feature in your database query using a row count or something similar. From Java you can pass fromRowCount and toRowCount and fetch records in batches of 1000 and process them in parallel threads. I am adding sample code here, but you have to further implement your logic for the different variables.
int totalRecordCount = 100000;
int batchSize =1000;
ExecutorService executor = Executors.newFixedThreadPool(totalRecordCount/batchSize);
for(int x=0; x < totalRecordCount;){
int toRowCount = x+batchSize;
final List partList = getKParts(10, x, toRowCount);
x = toRowCount + 1;
executor.submit(new Runnable() {
@Override
public void run() {
operateOnList(partList);
}
});
}
Hope this helps. Let me know if further clarification is required.
If your records in the database do have a primary key of type int or long, add a restriction to each thread to fetch only records from ranges:
Thread1: 0000 - 0999, 2000 - 2999, etc
Thread2: 1000 - 1999, 3000 - 3999, etc
This way you need only an offset, a counter, and an increment for each thread. For example, Thread1 would have an offset of 0 while Thread2 would have an offset of 1000. Because there are two threads in this example, you have an increment of 2000. For each round, increment the counter (starting at 0) of each thread and calculate the next range as:
from = offset + (count * 2000)
to = from + 999
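As a quick sketch of that arithmetic (the thread count and batch size are just the example's numbers):
int batchSize = 1000;
int threadCount = 2;
int increment = batchSize * threadCount;           // 2000 in this example
// offset 0 plays the role of Thread1, offset 1000 the role of Thread2
for (int count = 0; count < 3; count++) {
    for (int offset = 0; offset < increment; offset += batchSize) {
        int from = offset + (count * increment);
        int to = from + batchSize - 1;             // 999 records after 'from'
        System.out.println("offset " + offset + ": rows " + from + " - " + to);
    }
}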
import com.se.sas.persistance.utils.HibernateUtils;
public class FinderWorker implements Runnable {
@Override
public void run() {
operateOnList(getNParts(IndexLocker.getAllowedListSize()));
}
public List<Parts> getNParts(int listSize) {
try {
criteria = .....
// *********** SYNCHRONIZATION OCCURS HERE ********************//
criteria.setFirstResult(IndexLocker.getAvailableIndex());
criteria.setMaxResults(listSize);
partList = criteria.list();
} catch (Exception e) {
e.printStackTrace();
} finally {
session.close();
}
return partList;
}
public void operateOnList(List<Parts> partList) {
....
}
}
The locker class:
public class IndexLocker {
private static AtomicInteger index = new AtomicInteger(0);
private final static int batchSize = 1000;
public IndexLocker() {
}
public static int getAllowedListSize() {
return batchSize;
}
public static synchronized void incrementIndex(int hop) {
index.getAndAdd(hop);
}
public static synchronized int getAvailableIndex() {
int result = index.get();
index.getAndAdd(batchSize);
return result;
}
}
I have some thread-related questions, assuming the following code. Please ignore the possible inefficiency of the code, I'm only interested in the thread part.
//code without thread use
public static int getNextPrime(int from) {
int nextPrime = from+1;
boolean superPrime = false;
while(!superPrime) {
boolean prime = true;
for(int i = 2;i < nextPrime;i++) {
if(nextPrime % i == 0) {
prime = false;
}
}
if(prime) {
superPrime = true;
} else {
nextPrime++;
}
}
return nextPrime;
}
public static void main(String[] args) {
int primeStart = 5;
ArrayList list = new ArrayList();
for(int i = 0;i < 10000;i++) {
list.add(primeStart);
primeStart = getNextPrime(primeStart);
}
}
If I run the code like this, it takes about 56 seconds. If, however, I have the following code (as an alternative):
public class PrimeRunnable implements Runnable {
private int from;
private int lastPrime;
public PrimeRunnable(int from) {
this.from = from;
}
public boolean isPrime(int number) {
for(int i = 2;i < from;i++) {
if((number % i) == 0) {
return false;
}
}
lastPrime = number;
return true;
}
public int getLastPrime() {
return lastPrime;
}
public void run() {
while(!isPrime(++from))
;
}
}
public static void main(String[] args) throws InterruptedException {
int primeStart = 5;
ArrayList list = new ArrayList();
for(int i = 0;i < 10000;i++) {
PrimeRunnable pr = new PrimeRunnable(primeStart);
Thread t = new Thread(pr);
t.start();
t.join();
primeStart = pr.getLastPrime();
list.add(primeStart);
}
}
The whole operation takes about 7 seconds. I am almost certain that even though I only create one thread at a time, a thread doesn't always finish when another is created. Is that right? I am also curious: why is the operation ending so fast?
When I'm joining a thread, do other threads keep running in the background, or is the joined thread the only one that's running?
By putting the join() in the loop, you're starting a thread, then waiting for that thread to stop before running the next one. I think you probably want something more like this:
public static void main(String[] args) throws InterruptedException {
int primeStart = 5;
// Make thread-safe list for adding results to
List list = Collections.synchronizedList(new ArrayList());
// Pull thread pool count out into a value so you can easily change it
int threadCount = 10000;
Thread[] threads = new Thread[threadCount];
// Start all threads
for(int i = 0;i < threadCount;i++) {
// Pass list to each Runnable here
// Also, I added +i here as I think the intention is
// to test 10000 possible numbers>5 for primeness -
// was testing 5 in all loops
PrimeRunnable pr = new PrimeRunnable(primeStart+i, list);
threads[i] = new Thread(pr);
threads[i].start(); // thread is now running in parallel
}
// All threads now running in parallel
// Then wait for all threads to complete
for(int i=0; i<threadCount; i++) {
threads[i].join();
}
}
By the way pr.getLastPrime() will return 0 in the case of no prime, so you might want to filter that out before adding it to your list. The PrimeRunnable has to absorb the work of adding to the final results list. Also, I think PrimeRunnable was actually broken by still having incrementing code in it. I think this is fixed, but I'm not actually compiling this.
public class PrimeRunnable implements Runnable {
private int from;
private List results; // shared but thread-safe
public PrimeRunnable(int from, List results) {
this.from = from;
this.results = results;
}
public void isPrime(int number) {
for(int i = 2;i < from;i++) {
if((number % i) == 0) {
return;
}
}
// found prime, add to shared results
this.results.add(number);
}
public void run() {
isPrime(from); // don't increment, just check one number
}
}
Running 10000 threads in parallel is not a good idea. It's a much better idea to create a reasonably sized fixed thread pool and have them pull work from a shared queue. Basically every worker pulls tasks from the same queue, works on them and saves the results somewhere. The closest port of this with Java 5+ is to use an ExecutorService backed by a thread pool. You could also use a CompletionService which combines an ExecutorService with a result queue.
An ExecutorService version would look like:
public static void main(String[] args) throws InterruptedException {
int primeStart = 5;
// Make thread-safe list for adding results to
List list = Collections.synchronizedList(new ArrayList());
int threadCount = 16; // Experiment with this to find best on your machine
ExecutorService exec = Executors.newFixedThreadPool(threadCount);
int workCount = 10000; // See how # of work is now separate from # of threads?
for(int i = 0;i < workCount;i++) {
// submit work to the svc for execution across the thread pool
exec.execute(new PrimeRunnable(primeStart+i, list));
}
// Request an orderly shutdown, then wait for all tasks to be done or the timeout to go off
exec.shutdown();
exec.awaitTermination(1, TimeUnit.DAYS);
}
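For comparison, the CompletionService flavor mentioned above might look roughly like this; a boolean isPrime(int) helper is assumed, and the surrounding method would have to declare InterruptedException and ExecutionException:
ExecutorService exec = Executors.newFixedThreadPool(threadCount);
CompletionService<Long> ecs = new ExecutorCompletionService<>(exec);
for (int i = 0; i < workCount; i++) {
    final int candidate = primeStart + i;
    // each task is a Callable that returns the number if it is prime, null otherwise
    ecs.submit(() -> isPrime(candidate) ? Long.valueOf(candidate) : null);
}
for (int i = 0; i < workCount; i++) {
    Long prime = ecs.take().get();   // results arrive in completion order
    if (prime != null) {
        list.add(prime);
    }
}
exec.shutdown();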
Hope that gave you some ideas. And I hope the last example seemed a lot better than the first.
You can test this better by making the exact code in your first example run with threads. Substitute your main method with this:
private static int currentPrime;
public static void main(String[] args) throws InterruptedException {
for (currentPrime = 0; currentPrime < 10000; currentPrime++) {
Thread t = new Thread(new Runnable() {
public void run() {
getNextPrime(currentPrime);
}});
t.start();
t.join();
}
}
This will run in the same time as the original.
To answer your "join" question: yes, other threads can be running in the background when you use "join", but in this particular case you will only have one active thread at a time, because you are blocking the creation of new threads until the last thread is done executing.
JesperE is right, but I don't believe in only giving hints (at least outside a classroom):
Note this loop in the non-threaded version:
for(int i = 2;i < nextPrime;i++) {
if(nextPrime % i == 0) {
prime = false;
}
}
As opposed to this in the threaded version:
for(int i = 2;i < from;i++) {
if((number % i) == 0) {
return false;
}
}
The first loop will always run completely through, while the second will exit early if it finds a divisor.
You could make the first loop also exit early by adding a break statement like this:
for(int i = 2;i < nextPrime;i++) {
if(nextPrime % i == 0) {
prime = false;
break;
}
}
Read your code carefully. The two cases aren't doing the same thing, and it has nothing to do with threads.
When you join a thread, other threads will run in the background, yes.
Running a test, the second one doesn't seem to take 9 seconds; in fact, it takes at least as long as the first (which is to be expected; threading can't help the way it's implemented in your example).
Thread.join will only return when the joined thread terminates; then the current thread will continue, and the one you called join on will be dead.
For a quick reference: think threading when starting one iteration does not depend on the result of the previous one.