Java REST: optimizing data structure access

I have a Java REST application in which one endpoint always deals with a ConcurrentMap. I am running load tests, and performance degrades badly as the load increases.
What strategies can I implement to improve the application's efficiency?
Should I tune Jetty's threads, since that is the server I'm using? Or is it mainly the code? Or both?
The method below is the bottleneck.
Basically, I need to read a given line from a file. I can't store the contents in a database, so I came up with this Map-based processing. However, I'm aware that for large files it will take long just to reach the line, and I also risk the Map consuming a lot of memory once it has many entries...
dict is the ConcurrentMap.
public String getLine(int lineNr) throws IllegalArgumentException {
    if (lineNr > nrLines) {
        throw new IllegalArgumentException();
    }
    if (dict.containsKey(lineNr)) {
        return dict.get(lineNr);
    }
    synchronized (this) {
        try (Stream<String> st = Files.lines(doc.toPath())) {
            Optional<String> optionalLine = st.skip(lineNr - 1).findFirst();
            if (optionalLine.isPresent()) {
                dict.put(lineNr, optionalLine.get());
            } else {
                nrLines = nrLines > lineNr ? lineNr : nrLines;
                throw new IllegalArgumentException();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return dict.get(lineNr);
    }
}

Mixing a ConcurrentMap with synchronized(this) is probably not the right approach. The classes in the java.util.concurrent package are designed for specific use cases and optimize synchronization internally.
Instead, I'd suggest first trying a well-designed caching library and seeing whether the performance is good enough. One example is Caffeine. As per its population docs, it gives you a way to declare how to load the data, even asynchronously:
AsyncLoadingCache<Key, Graph> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    // Either: build with a synchronous computation that is wrapped as asynchronous
    .buildAsync(key -> createExpensiveGraph(key));
    // Or: build with an asynchronous computation that returns a future
    .buildAsync((key, executor) -> createExpensiveGraphAsync(key, executor));
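To connect this to the question's getLine, a synchronous LoadingCache is probably enough. A minimal sketch (my adaptation, not from the Caffeine docs; the path field and the maximumSize bound are assumptions):

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

class LineCache {
    private final Path path; // the file the endpoint reads from

    private final LoadingCache<Integer, String> lines = Caffeine.newBuilder()
            .maximumSize(10_000) // arbitrary bound; evicts cold lines on large files
            .build(this::loadLine);

    LineCache(Path path) {
        this.path = path;
    }

    String getLine(int lineNr) {
        return lines.get(lineNr); // loads at most once per key, thread-safe
    }

    private String loadLine(Integer lineNr) throws IOException {
        try (Stream<String> st = Files.lines(path)) {
            return st.skip(lineNr - 1)
                     .findFirst()
                     .orElseThrow(IllegalArgumentException::new);
        }
    }
}

The maximumSize bound also addresses the memory concern from the question: rarely used lines get evicted instead of accumulating forever.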

This solution is based on ConcurrentHashMap#computeIfAbsent, with two assumptions:
Multiple threads reading the same file is not a problem.
While the documentation says the computation should be short and simple because of blocking, I believe that is only a problem for access to the same key (or bucket/stripe), and only for updates (not reads)? In this scenario it is not a problem, as we either successfully compute the value or throw IllegalArgumentException.
Using this, we only open the file once per key, by making that the computation required to put the key.
public String getLine(int lineNr) throws IllegalArgumentException {
    if (lineNr > nrLines) {
        throw new IllegalArgumentException();
    }
    return cache.computeIfAbsent(lineNr, l -> {
        try (Stream<String> st = Files.lines(path)) {
            Optional<String> optionalLine = st.skip(l - 1).findFirst();
            if (optionalLine.isPresent()) {
                return optionalLine.get();
            } else {
                nrLines = nrLines > l ? l : nrLines;
                throw new IllegalArgumentException();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null; // computeIfAbsent treats null as "no mapping"
    });
}
I "verified" the second assumption by spawning 3 threads, where:
Thread1 computes key 0 by looping infinitely (blocks forever).
Thread2 attempts to put at key 0, but never does because Thread1 blocks.
Thread3 attempts to put at key 1, and does so immediately.
Try it out: maybe it works, or maybe the assumptions are wrong and it sucks. The Map uses buckets internally, so the computation may become a bottleneck even with different keys, as it locks the whole bucket/stripe.
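For reference, here is a rough reconstruction of that experiment (my own sketch; the original test code was not posted):

import java.util.concurrent.ConcurrentHashMap;

public class ComputeBlockingDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();

        // Thread1: computes key 0 and never finishes (simulates a stuck computation).
        new Thread(() -> map.computeIfAbsent(0, k -> {
            while (true) { Thread.onSpinWait(); }
        })).start();

        // Thread2: also targets key 0 and blocks behind Thread1.
        new Thread(() -> {
            map.computeIfAbsent(0, k -> "never");
            System.out.println("key 0 computed"); // never prints
        }).start();

        // Thread3: targets key 1; with the default table size it lands in a
        // different bin and completes immediately.
        new Thread(() -> {
            map.computeIfAbsent(1, k -> "one");
            System.out.println("key 1 computed"); // prints right away
        }).start();
    }
}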


Nested spin-lock vs volatile check

I was about to write an answer about this, but maybe it is better to get a second opinion before appearing like a fool...
The idea in the next piece of code (Android's Room package v2.4.1, RoomTrackingLiveData) is that the winner thread is kept alive and forced to check for contention that may have entered the process (from losing threads) while it was computing.
Meanwhile, failed CAS operations keep the losing threads from entering and executing the code, preventing repeated signals (mComputeFunction.call() or postValue()).
final Runnable mRefreshRunnable = new Runnable() {
    @WorkerThread
    @Override
    public void run() {
        if (mRegisteredObserver.compareAndSet(false, true)) {
            mDatabase.getInvalidationTracker().addWeakObserver(mObserver);
        }
        boolean computed;
        do {
            computed = false;
            if (mComputing.compareAndSet(false, true)) {
                try {
                    T value = null;
                    while (mInvalid.compareAndSet(true, false)) {
                        computed = true;
                        try {
                            value = mComputeFunction.call();
                        } catch (Exception e) {
                            throw new RuntimeException("Exception while computing database"
                                    + " live data.", e);
                        }
                    }
                    if (computed) {
                        postValue(value);
                    }
                } finally {
                    mComputing.set(false);
                }
            }
        } while (computed && mInvalid.get());
    }
};
final Runnable mInvalidationRunnable = new Runnable() {
    @MainThread
    @Override
    public void run() {
        boolean isActive = hasActiveObservers();
        if (mInvalid.compareAndSet(false, true)) {
            if (isActive) {
                getQueryExecutor().execute(mRefreshRunnable);
            }
        }
    }
};
The most obvious thing here is that atomics are being used for things they are not good at:
Identifying losers and ignoring winners (what reactive patterns need),
AND a happens-once behavior, performed by the losing thread.
This is completely counterintuitive to what atomics are able to achieve, since they are extremely good at defining winners, AND anything that requires "happens once" semantics makes state consistency impossible to ensure (that last point is suitable for starting a philosophical debate about concurrency, and I will definitely agree with any conclusion).
If atomics are used as "contention checkers" and "contention blockers", then we can implement the exact same principle with a volatile read of an atomic reference after a successful CAS,
checking that volatile against the snapshot/witness during every other step of the process.
private final AtomicInteger invalidationCount = new AtomicInteger();

private final IntFunction<Runnable> invalidationRunnableFun = invalidationVersion -> (Runnable) () -> {
    if (invalidationVersion != invalidationCount.get()) return;
    try {
        T value = computeFunction.call();
        if (invalidationVersion != invalidationCount.get()) return; // in case computation takes too long...
        postValue(value);
    } catch (Exception e) {
        e.printStackTrace();
    }
};

getQueryExecutor().execute(invalidationRunnableFun.apply(invalidationCount.incrementAndGet()));
In this version, each thread is left with the individual responsibility of checking its position in the contention lane; if its position has moved and it is no longer at the front, it means a new thread has entered the process, and it should stop further processing.
This alternative is so laughably simple that my first question is:
Why didn't they do it like this?
Maybe my solution has a flaw... but the thing about the first alternative (the nested spin-lock) is that it follows the idea that an atomic CAS operation cannot be verified a second time, and that verification can only be achieved with a cmpxchg process... which is... false.
It also follows the common (but wrong) belief that whatever you define after a successful CAS is the sacred word of GOD... as I've seen code seldom check for concurrency issues once it enters the if body.
if (mInvalid.compareAndSet(false, true)) {
    // Ummm... yes... mInvalid is still true...
    // Let's use a second atomicReference just in case...
}
It also follows common code conventions that involve "double-<enter something>" in concurrency scenarios.
Only because the first code follows those ideas am I inclined to believe that my solution is a valid and better alternative.
There is an argument in favor of the "nested spin-lock" option, but it does not hold up well:
The first alternative is "safer" precisely because it is SLOWER, so it has MORE time to identify contention at the tail end of a stream of incoming threads.
BUT it is not even 100% safe, because of the "happens once" issue that is impossible to ensure.
There is also a behavior where, once the code reaches the end of a continuous flow of incoming threads, two signals are dispatched one after the other: the second-to-last one, and then the last one.
But if it is safer because it is slower, wouldn't that defeat the point of using atomics, whose whole appeal is supposed to be better performance in the first place?
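To make the version-token idea concrete, here is a toy, self-contained harness (my own sketch, with illustrative names): a worker that observes a newer version than the one it was launched with simply bails out.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class VersionedInvalidation {
    private static final AtomicInteger invalidationCount = new AtomicInteger();
    private static final ExecutorService executor = Executors.newFixedThreadPool(2);

    // Mirrors the pattern above: bump the version, then hand the worker a
    // snapshot of it to check against at each step.
    static void invalidate() {
        final int version = invalidationCount.incrementAndGet();
        executor.execute(() -> {
            if (version != invalidationCount.get()) return; // superseded before starting
            String value = "result@" + version;             // stand-in for computeFunction.call()
            if (version != invalidationCount.get()) return; // superseded while computing
            System.out.println("posting " + value);
        });
    }

    public static void main(String[] args) throws InterruptedException {
        invalidate();
        invalidate(); // the first worker usually sees the bumped count and bails out
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}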

How can I block ConcurrentHashMap get() operations during a put()

ConcurrentHashMap<String, Config> configStore = new ConcurrentHashMap<>();
...
void updateStore() {
    Config newConfig = generateNewConfig();
    Config oldConfig = configStore.get(configName);
    if (newConfig.replaces(oldConfig)) {
        configStore.put(configName, newConfig);
    }
}
The ConcurrentHashMap can be read by multiple threads but can be updated only by a single thread. I'd like to block the get() operations when a put() operation is in progress. The rationale here being that if a put() operation is in progress, that implies the current entry in the map is stale and all get() operations should block until the put() is complete. How can I go about achieving this in Java without synchronizing the whole map?
It surely looks like you can defer this to compute and it will take care of that for you:
Config newConfig = generateNewConfig();
configStore.compute(
    configName,
    (key, oldConfig) -> {
        if (newConfig.replaces(oldConfig)) {
            return newConfig;
        }
        return oldConfig;
    }
);
You get two guarantees from using this method:
Some attempted update operations on this map by other threads may be blocked while computation is in progress, so the computation should be short and simple
and
The entire method invocation is performed atomically
according to its documentation.
The accepted answer proposed to use compute(...) instead of put().
But if you want
to block the get() operations when a put() operation is in progress
then you should also use compute(...) instead of get().
That's because for ConcurrentHashMap get() doesn't block while compute() is in progress.
Here is a unit test to prove it:
@Test
public void myTest() throws Exception {
    var map = new ConcurrentHashMap<>(Map.of("key", "v1"));
    var insideComputeLatch = new CountDownLatch(1);
    var threadGet = new Thread(() -> {
        try {
            insideComputeLatch.await();
            System.out.println("threadGet: before get()");
            var v = map.get("key");
            System.out.println("threadGet: after get() (v='" + v + "')");
        } catch (InterruptedException e) {
            throw new Error(e);
        }
    });
    var threadCompute = new Thread(() -> {
        System.out.println("threadCompute: before compute()");
        map.compute("key", (k, v) -> {
            try {
                System.out.println("threadCompute: inside compute(): start");
                insideComputeLatch.countDown();
                threadGet.join();
                System.out.println("threadCompute: inside compute(): end");
                return "v2";
            } catch (InterruptedException e) {
                throw new Error(e);
            }
        });
        System.out.println("threadCompute: after compute()");
    });
    threadGet.start();
    threadCompute.start();
    threadGet.join();
    threadCompute.join();
}
Output:
threadCompute: before compute()
threadCompute: inside compute(): start
threadGet: before get()
threadGet: after get() (v='v1')
threadCompute: inside compute(): end
threadCompute: after compute()
This fundamentally doesn't work. Think about it: When the code realizes that the information is stale, some time passes and then a .put call is done. Even if the .put call somehow blocks, the timeline is as follows:
Some event occurs in the cosmos that makes your config stale.
Some time passes. [A]
You run some code that realizes this is the case.
Some time passes. [B]
Your code begins the .put call.
An extremely tiny amount of time passes. [C]
Your code finishes the .put call.
What you're asking for is a strategy that eliminates [C] while doing absolutely nothing whatsoever to prevent reads of stale data at point [A] and [B], both of which seem considerably more problematic.
Whatever, just give me the answer
ConcurrentHashMap is just wrong if you want this, it's a thing that is designed for multiple concurrent (hence the name) accesses. What you want is a plain old HashMap, where every access to it goes through a lock. Or, you can turn the logic around: The only way to do what you want is to engage a lock for everything (both reads and writes); at which point the 'Concurrent' part of ConcurrentHashMap has become completely pointless:
private final Object lock = new Object[0];

public void updateConfig() {
    synchronized (lock) {
        // do the stuff
    }
}

public Config getConfig(String key) {
    synchronized (lock) {
        return configStore.get(key);
    }
}
NB: Use private locks; public locks are like public fields. If there is an object that code outside of your control can get a ref to, and you lock on it, you need to describe the behaviour of your code in regards to that lock, and then sign up to maintain that behaviour forever, or indicate clearly when you change the behaviour that your API just went through a breaking change, and you should thus also bump the major version number.
For the same reason public fields are almost invariably a bad idea in light of the fact that you want API control, you want the refs you lock on to be not accessible to anything except code you have under your direct control. Hence why the above code does not use the synchronized keyword on the method itself (as this is usually a ref that leaks all over the place).
Okay, maybe I want a different answer
The answer is either 'it does not matter' or 'use locks'. If [C] truly is all you care about, that time is so short, and pales so much in comparison to the times for [A] and [B], that if A/B are acceptable, certainly C is too. In that case: just accept the situation.
Alternatively, you can use locks but lock even before the data ever becomes stale. This timeline guarantees that no stale data reads can ever occur:
The cosmos cannot ever make your data stale.
Your code, itself, is the only causal agent for stale data.
Whenever code runs that will or may end up making data stale:
Acquire a lock before you even start.
Do the thing that (may) make some config stale.
Keep holding on to the lock; fix the config.
Release the lock.
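A sketch of that timeline in code, reusing the private-lock style from above (doThingThatMayInvalidateConfig and the other names are illustrative, not from the question):

private final Object lock = new Object[0];

void mutateAndRefreshConfig() {
    synchronized (lock) {
        doThingThatMayInvalidateConfig();                 // staleness can only begin here...
        configStore.put(configName, generateNewConfig()); // ...and is repaired before the lock is released
    }
}

Config getConfig(String key) {
    synchronized (lock) { // readers wait out any in-flight mutation
        return configStore.get(key);
    }
}

Because the lock is held from before the config can become stale until after it is fixed, no reader can ever observe the stale window.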
How can I go about achieving this in Java without synchronizing the whole map?
There are some good answers here, but there is a simpler one: use the ConcurrentMap.replace(key, oldValue, newValue) method, which is atomic.
while (true) {
    Config newConfig = generateNewConfig();
    Config oldConfig = configStore.get(configName);
    if (oldConfig == null) {
        // no existing entry: putIfAbsent is atomic and returns null on success
        if (configStore.putIfAbsent(configName, newConfig) == null) {
            break;
        }
        continue; // another thread inserted first; retry with a fresh snapshot
    }
    if (!newConfig.replaces(oldConfig)) {
        // nothing to do
        break;
    }
    // this is atomic and will only replace the config if the old one hasn't changed
    if (configStore.replace(configName, oldConfig, newConfig)) {
        // if we replaced it then we are done
        break;
    }
    // otherwise, loop around and create a new config
}

Closing external process in CompletableFuture chain

I'm looking for a better way to "close" some resource, here destroy an external Process, in a CompletableFuture chain. Right now my code looks roughly like this:
public CompletableFuture<ExecutionContext> createFuture() {
    final Process[] processHolder = new Process[1];
    return CompletableFuture.supplyAsync(
        () -> {
            try {
                processHolder[0] = new ProcessBuilder(COMMAND)
                        .redirectErrorStream(true)
                        .start();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return PARSER.parse(processHolder[0].getInputStream());
        }, SCHEDULER)
        .applyToEither(createTimeoutFuture(DURATION), Function.identity())
        .exceptionally(throwable -> {
            processHolder[0].destroyForcibly();
            if (throwable instanceof TimeoutException) {
                throw new DatasourceTimeoutException(throwable);
            }
            Throwables.propagateIfInstanceOf(throwable, DatasourceException.class);
            throw new DatasourceException(throwable);
        });
}
The problem I see is the "hacky" one-element array which holds a reference to the process so that it can be destroyed in case of error. Is there some CompletableFuture API which allows passing some "context" to exceptionally (or some other method to achieve that)?
I was considering custom CompletionStage implementation, but it looks like a big task to get rid of "holder" variable.
There is no need to have a linear chain of CompletableFutures. Well, actually, you already don't, due to the createTimeoutFuture(DURATION), which is a quite convoluted way of implementing a timeout. You can simply put it this way:
public CompletableFuture<ExecutionContext> createFuture() {
    CompletableFuture<Process> proc = CompletableFuture.supplyAsync(
        () -> {
            try {
                return new ProcessBuilder(COMMAND).redirectErrorStream(true).start();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }, SCHEDULER);
    CompletableFuture<ExecutionContext> result
        = proc.thenApplyAsync(process -> PARSER.parse(process.getInputStream()), SCHEDULER);
    proc.thenAcceptAsync(process -> {
        try {
            if (!process.waitFor(DURATION, TimeUnit.WHATEVER_DURATION_REFERS_TO)) {
                process.destroyForcibly();
                result.completeExceptionally(
                    new DatasourceTimeoutException(new TimeoutException()));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // waitFor is a blocking call
        }
    });
    return result;
}
If you want to keep the timeout future (perhaps you consider the process startup time to be significant), you could use:
public CompletableFuture<ExecutionContext> createFuture() {
    CompletableFuture<Throwable> timeout = createTimeoutFuture(DURATION);
    CompletableFuture<Process> proc = CompletableFuture.supplyAsync(
        () -> {
            try {
                return new ProcessBuilder(COMMAND).redirectErrorStream(true).start();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }, SCHEDULER);
    CompletableFuture<ExecutionContext> result
        = proc.thenApplyAsync(process -> PARSER.parse(process.getInputStream()), SCHEDULER);
    timeout.exceptionally(t -> new DatasourceTimeoutException(t))
        .thenAcceptBoth(proc, (x, process) -> {
            if (process.isAlive()) {
                process.destroyForcibly();
                result.completeExceptionally(x);
            }
        });
    return result;
}
I've used the one item array myself to emulate what would be proper closures in Java.
Another option is using a private static class with fields. The advantages are that it makes the purpose clearer and has a bit less impact on the garbage collector with big closures, i.e. one object with N fields versus N arrays of length 1. It also becomes useful if you need to close over the same fields in other methods.
This is a de facto pattern, even outside the scope of CompletableFuture and it has been (ab)used long before lambdas were a thing in Java, e.g. anonymous classes. So, don't feel so bad, it's just that Java's evolution didn't provide us with proper closures (yet? ever?).
If you want, you may return values from CompletableFutures inside .handle(), so you can wrap the completion result in full and return a wrapper. In my opinion, this is not any better than manual closures, plus you will be creating such a wrapper per future.
Subclassing CompletableFuture is not necessary. You're not interested in altering its behavior, only in attaching data to it, which you can do with current Java's final variable capturing. That is, unless you profile and see that creating these closures is actually affecting performance somehow, which I highly doubt.
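For illustration, here is roughly what the "private static class with fields" variant looks like when applied to the question's code (the volatile modifier is my addition, for visibility between the supplyAsync and exceptionally stages; all other names come from the question):

private static final class ProcessHolder {
    volatile Process process;
}

public CompletableFuture<ExecutionContext> createFuture() {
    final ProcessHolder holder = new ProcessHolder();
    return CompletableFuture.supplyAsync(
        () -> {
            try {
                holder.process = new ProcessBuilder(COMMAND)
                        .redirectErrorStream(true)
                        .start();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return PARSER.parse(holder.process.getInputStream());
        }, SCHEDULER)
        .applyToEither(createTimeoutFuture(DURATION), Function.identity())
        .exceptionally(throwable -> {
            if (holder.process != null) { // guard: the process may never have started
                holder.process.destroyForcibly();
            }
            if (throwable instanceof TimeoutException) {
                throw new DatasourceTimeoutException(throwable);
            }
            Throwables.propagateIfInstanceOf(throwable, DatasourceException.class);
            throw new DatasourceException(throwable);
        });
}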

Per-key blocking Map in Java

I'm dealing with some third-party library code that involves creating expensive objects and caching them in a Map. The existing implementation is something like
lock.lock();
try {
    Foo result = cache.get(key);
    if (result == null) {
        result = createFooExpensively(key);
        cache.put(key, result);
    }
    return result;
} finally {
    lock.unlock();
}
Obviously this is not the best design when Foos for different keys can be created independently.
My current hack is to use a Map of Futures:
lock.lock();
Future<Foo> future;
try {
    future = allFutures.get(key);
    if (future == null) {
        future = executorService.submit(new Callable<Foo>() {
            public Foo call() {
                return createFooExpensively(key);
            }
        });
        allFutures.put(key, future);
    }
} finally {
    lock.unlock();
}
try {
    return future.get();
} catch (InterruptedException e) {
    throw new MyRuntimeException(e);
} catch (ExecutionException e) {
    throw new MyRuntimeException(e);
}
But this seems... a little hacky, for two reasons:
The work is done on an arbitrary pooled thread. I'd be happy to have the work done on the first thread that tries to get that particular key, especially since it's going to be blocked anyway.
Even when the Map is fully populated, we still go through Future.get() to get the results. I expect this is pretty cheap, but it's ugly.
What I'd like is to replace cache with a Map that will block gets for a given key until that key has a value, but allow other gets meanwhile. Does any such thing exist? Or does someone have a cleaner alternative to the Map of Futures?
Creating a lock per key sounds tempting, but it may not be what you want, especially when the number of keys is large.
As you would probably need to create a dedicated (read-write) lock for each key, it has an impact on your memory usage. Also, that fine granularity may hit a point of diminishing returns given a finite number of cores if concurrency is truly high.
ConcurrentHashMap is oftentimes a good enough solution in a situation like this. It normally provides full reader concurrency (readers do not block), and updates can be concurrent up to the desired concurrency level. This gives you pretty good scalability. The above code may be expressed with ConcurrentHashMap like the following:
ConcurrentMap<Key, Foo> cache = new ConcurrentHashMap<>();
...
Foo result = cache.get(key);
if (result == null) {
    result = createFooExpensively(key);
    Foo old = cache.putIfAbsent(key, result);
    if (old != null) {
        result = old;
    }
}
The straightforward use of ConcurrentHashMap does have one drawback, which is that multiple threads may find that the key is not cached, and each may invoke createFooExpensively(). As a result, some threads may do throw-away work. To avoid this, you would want to use the memoizer pattern that's mentioned in "Java Concurrency in Practice".
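For reference, a compact sketch of that memoizer pattern (my paraphrase of the JCiP idea, not the book's exact code): a FutureTask is installed atomically, so only one thread runs the expensive computation per key while the others wait on the same future.

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

class Memoizer<K, V> {
    private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();

    V compute(K key, Callable<V> loader) throws InterruptedException, ExecutionException {
        Future<V> f = cache.get(key);
        if (f == null) {
            FutureTask<V> task = new FutureTask<>(loader);
            f = cache.putIfAbsent(key, task);
            if (f == null) { // we won the race: run the computation on this thread
                f = task;
                task.run();
            }
        }
        return f.get(); // losers block here until the winner finishes
    }
}

Usage would be along the lines of memoizer.compute(key, () -> createFooExpensively(key)). Note that the winning caller runs the computation itself, which also addresses the "arbitrary pooled thread" complaint from the question.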
But then again, the nice folks at Google already solved these problems for you in the form of CacheBuilder:
LoadingCache<Key, Foo> cache = CacheBuilder.newBuilder()
    .concurrencyLevel(32)
    .build(new CacheLoader<Key, Foo>() {
        public Foo load(Key key) {
            return createFooExpensively(key);
        }
    });
...
Foo result = cache.get(key);
You can use funtom-java-utils - PerKeySynchronizedExecutor.
It will create a lock for each key but will clear it for you immediately when it becomes unused.
It will also guarantee memory visibility between invocations with the same key, and it is designed to be very fast and to minimize contention between invocations of different keys.
Declare it in your class:
final PerKeySynchronizedExecutor<KEY_CLASS> executor = new PerKeySynchronizedExecutor<>();
Use it:
Foo foo = executor.execute(key, () -> createFooExpensively());
public class Cache {
    private static final Set<String> lockedKeys = new HashSet<>();

    private void lock(String key) {
        synchronized (lockedKeys) {
            while (!lockedKeys.add(key)) {
                try {
                    lockedKeys.wait();
                } catch (InterruptedException e) {
                    log.error("...");
                    throw new RuntimeException(e);
                }
            }
        }
    }

    private void unlock(String key) {
        synchronized (lockedKeys) {
            lockedKeys.remove(key);
            lockedKeys.notifyAll();
        }
    }

    public Foo getFromCache(String key) {
        try {
            lock(key);
            Foo result = cache.get(key);
            if (result == null) {
                result = createFooExpensively(key);
                cache.put(key, result);
            }
            return result;
            // For different keys it is executed in parallel.
            // For the same key it is executed synchronously.
        } finally {
            unlock(key);
        }
    }
}
key can be not only a String but any class with correctly overridden equals and hashCode methods.
The try-finally is very important: you must guarantee that waiting threads are unlocked after your operation, even if the operation throws an exception.
It will not work if your back-end is distributed across multiple servers/JVMs.

Java concurrency pattern to parallel parts of task

I read lines from a file, in one thread of course. The lines are sorted by key.
Then I collect lines with the same key (15-20 lines), parse them, do a big calculation, etc., and push the resulting object to a statistics class.
I want to parallelize my program: read in one thread, parse and calculate in many threads, and join the results in one thread to write to the statistics class.
Is there any ready-made pattern or solution for this problem in the Java 7 framework?
I implemented it with an executor for multithreading, pushing to a BlockingQueue, and reading the queue in another thread, but I think my code sucks and will produce bugs.
Many thanks.
upd:
I can't map the whole file into memory; it's very big.
You already have the main classes of approaches in mind: CountDownLatch, Thread.join, Executors, Fork/Join. Another option is the Akka framework, which has message-passing overheads measured in 1-2 microseconds and is open source. However, let me share another approach that often outperforms the above and is simpler. It was born from working on batch file loads in Java for a number of companies.
Assuming that your goal in splitting the work up is performance rather than learning, and performance is measured by how long it takes from start to finish, then it is often difficult to do better than memory-mapping the file and processing it in a single thread that has been pinned to a single core. It also gives much simpler code. A double win.
This may be counterintuitive; however, the speed of processing files is nearly always limited by how efficient the file loading is, not by how parallel the processing is. Hence memory-mapping the file is a huge win. Once it is memory-mapped, we want the algorithm to have low contention with the hardware as it performs the file load. Modern hardware tends to have the IO controller and the memory controller on the same socket as the CPU, which, combined with the prefetchers within the CPU itself, leads to a great deal of efficiency when processing the file in an orderly fashion from a single thread. This can be so extreme that going parallel may actually be a lot slower. Pinning a thread to a core usually speeds up memory-bound algorithms by a factor of 5. Which is why the memory-mapping part is so important.
If you have not already, give it a try.
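If you want something to experiment with, a bare-bones starting point might look like this (my sketch; counting lines stands in for real parsing, and it maps the whole file at once, which only works for files under 2 GB per mapping):

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedScan {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("path/to/my/file"); // illustrative path
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            long lines = 0;
            while (buf.hasRemaining()) {
                if (buf.get() == '\n') { // sequential, prefetch-friendly scan
                    lines++;
                }
            }
            System.out.println("lines: " + lines);
        }
    }
}

Note that mapping does not load the file into the heap; the OS pages it in on demand, which is why this can work even for files that do not fit "in memory".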
Without facts and numbers it is hard to give advice, so let's start from the beginning:
You must identify the bottleneck. Do you really need to perform the computation in parallel, or is your job IO-bound? Avoid concurrency if possible; it could be faster.
If computations must be done in parallel, you must decide how fine- or coarse-grained your tasks should be. You need to measure your computations and tasks to be able to size them. Avoid creating too many tasks.
You should have an IO thread, several workers, and a "data gatherer" thread. No mutable shared data.
Be sure not to slow down the IO thread because of task submission. Otherwise you should use more coarse-grained tasks or a better task dispatcher (who said disruptor?).
The "data gatherer" thread should be the only one to mutate the final state.
Avoid unnecessary data copies and object creation. Quite often, when iterating over large files, the bottleneck is the GC. Last week I achieved a 6x speedup by replacing a standard Scala object with a flyweight pattern. You should also try to pre-allocate everything and use large buffers (page-sized).
Avoid disk seeks.
Having said that, you should be on the right track. You can start with an Executor using properly sized tasks. Tasks write into a data structure, like your blocking queue, shared between the workers and the "data gatherer" thread (a sketch follows below). This threading model is really simple, efficient, and hard to get wrong. It is usually efficient enough. If you still require better performance, then you must profile your application and understand the bottleneck. Then you can decide the way to go: refine your task size, use faster tools like the disruptor/Akka, improve IO, create fewer objects, tune your code, buy a bigger machine or faster disks, move to Hadoop, etc. Pinning each thread to a core (requires platform-specific code) could also provide a significant boost.
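A skeletal version of that threading model, with one IO/producer thread, a worker pool, and a single gatherer owning the mutable state (all names and sizes are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class Pipeline {
    static final int CHUNKS = 3; // stand-in for the number of key groups in the file

    public static void main(String[] args) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        BlockingQueue<long[]> results = new LinkedBlockingQueue<>();

        // Gatherer: the only thread that mutates the final statistics.
        Thread gatherer = new Thread(() -> {
            long total = 0;
            try {
                for (int i = 0; i < CHUNKS; i++) {
                    total += results.take()[0];
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("total = " + total);
        });
        gatherer.start();

        // IO/producer thread (here: main) submits coarse-grained tasks.
        for (int chunk = 0; chunk < CHUNKS; chunk++) {
            final int c = chunk;
            workers.execute(() -> results.add(new long[] { compute(c) }));
        }
        workers.shutdown();
        gatherer.join();
    }

    static long compute(int chunk) {
        return chunk * 100L; // stand-in for parsing and the big calculation
    }
}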
You can use CountDownLatch
http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/CountDownLatch.html
to synchronize the starting and joining of threads. This is better than looping on the set of threads and calling join() on each thread reference.
Here is what I would do if asked to split work as you are trying to:
public class App {

    public static class Statistics {
    }

    public static class StatisticsCalculator implements Callable<Statistics> {

        private final List<String> lines;

        public StatisticsCalculator(List<String> lines) {
            this.lines = lines;
        }

        @Override
        public Statistics call() throws Exception {
            //do stuff with lines
            return new Statistics();
        }
    }

    public static void main(String[] args) {
        final File file = new File("path/to/my/file");
        final List<List<String>> partitionedWork = partitionWork(readLines(file), 10);
        final List<Callable<Statistics>> callables = new LinkedList<>();
        for (final List<String> work : partitionedWork) {
            callables.add(new StatisticsCalculator(work));
        }
        final ExecutorService executorService = Executors.newFixedThreadPool(Math.min(partitionedWork.size(), 10));
        final List<Future<Statistics>> futures;
        try {
            futures = executorService.invokeAll(callables);
        } catch (InterruptedException ex) {
            throw new RuntimeException(ex);
        }
        try {
            for (final Future<Statistics> future : futures) {
                final Statistics statistics = future.get();
                //do whatever to aggregate the individual
            }
        } catch (InterruptedException | ExecutionException ex) {
            throw new RuntimeException(ex);
        }
        executorService.shutdown();
        try {
            executorService.awaitTermination(1, TimeUnit.DAYS);
        } catch (InterruptedException ex) {
            throw new RuntimeException(ex);
        }
    }

    static List<String> readLines(final File file) {
        //read lines
        return new ArrayList<>();
    }

    static List<List<String>> partitionWork(final List<String> lines, final int blockSize) {
        //divide up the incoming list into a number of chunks
        final List<List<String>> partitionedWork = new LinkedList<>();
        for (int i = lines.size(); i > 0; i -= blockSize) {
            int start = i > blockSize ? i - blockSize : 0;
            partitionedWork.add(lines.subList(start, i));
        }
        return partitionedWork;
    }
}
I have created a Statistics object; this holds the result of the work done.
There is a StatisticsCalculator object, which is a Callable<Statistics> - this does the calculation. It is given a List<String>, and it processes the lines and creates the Statistics.
The readLines method I leave to you to implement.
The most important method in many ways is the partitionWork method; this divides the incoming List<String>, which is all the lines in the file, into a List<List<String>> using the blockSize. This essentially decides how much work each thread should have; tuning the blockSize parameter is very important. If each unit of work is only one line, then the overheads would probably outweigh the advantages, whereas if each unit of work is ten thousand lines, then you only have one working Thread.
Finally, the meat of the operation is the main method. This calls the read and then the partition methods. It spawns an ExecutorService with a number of threads equal to the number of pieces of work, but up to a maximum of 10. You may want to make this equal to the number of cores you have.
The main method then submits a List of all the Callables, one for each chunk, to the executorService. The invokeAll method blocks until the work is done.
The method now loops over each returned List<Future> and gets the generated Statistics object from each, ready for aggregation.
Afterwards, don't forget to shut down the executorService, as otherwise it will prevent your application from exiting.
EDIT
The OP wants to read line by line, so here is a revised main:
public static void main(String[] args) throws IOException {
    final File file = new File("path/to/my/file");
    final ExecutorService executorService = Executors.newFixedThreadPool(10);
    final List<Future<Statistics>> futures = new LinkedList<>();
    try (final BufferedReader reader = new BufferedReader(new FileReader(file))) {
        List<String> tmp = new LinkedList<>();
        String line = null;
        while ((line = reader.readLine()) != null) {
            tmp.add(line);
            if (tmp.size() == 100) {
                futures.add(executorService.submit(new StatisticsCalculator(tmp)));
                tmp = new LinkedList<>();
            }
        }
        if (!tmp.isEmpty()) {
            futures.add(executorService.submit(new StatisticsCalculator(tmp)));
        }
    }
    try {
        for (final Future<Statistics> future : futures) {
            final Statistics statistics = future.get();
            //do whatever to aggregate the individual
        }
    } catch (InterruptedException | ExecutionException ex) {
        throw new RuntimeException(ex);
    }
    executorService.shutdown();
    try {
        executorService.awaitTermination(1, TimeUnit.DAYS);
    } catch (InterruptedException ex) {
        throw new RuntimeException(ex);
    }
}
This streams the file line by line and, after a given number of lines, fires a new task to process the lines to the executor.
You would need to call clear on the List<String> in the Callable when you are done with it, as the Callable instances are referenced by the Futures they return. If you clear the Lists when you're done with them, that should reduce the memory footprint considerably.
A further enhancement may well be to use the suggestion here for an ExecutorService that blocks until there is a spare thread - this will guarantee that there are never more than threads*blockSize lines in memory at a time, if you clear the Lists when the Callables are done with them.
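One common way to get that blocking behavior (a sketch of the general technique, not the linked suggestion verbatim) is a ThreadPoolExecutor with a bounded queue and CallerRunsPolicy: when the pool and queue are saturated, the submitting (reader) thread runs the chunk itself, which pauses reading instead of buffering unbounded lines.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Fixed pool of 10 threads; at most 10 chunks waiting in the queue.
ExecutorService executorService = new ThreadPoolExecutor(
        10, 10,
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(10),
        new ThreadPoolExecutor.CallerRunsPolicy());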
