ParallelStream pool size - Java

I have a parallel stream with a few database queries inside like this:
private void processParallel() {
    List<Result> results = objects.parallelStream()
            .peek(object -> doSomething(object))
            .collect(Collectors.toList());
}
private void doSomething(SomeObject object) {
    CompletableFuture<String> param = CompletableFuture
            .supplyAsync(() -> objectService.getParam(object.getField()), executor)
            .thenApply(result -> Optional.ofNullable(result)
                    .map(Result::getParam)
                    .orElse(null));
}
I need to specify the pool size, but setting the property "java.util.concurrent.ForkJoinPool.common.parallelism" to "20" is not working, probably because the stream is bound to the common pool. Is there any way to limit the maximum number of threads?

Since parallel streams use the Fork/Join framework under the hood, you can limit the number of threads employed by the stream by wrapping it in a Callable and submitting that to a ForkJoinPool with the required level of parallelism, as described.
The threads used by the parallel stream are then taken from the newly created ForkJoinPool to which the Callable task was submitted, not from the common pool, as described here.
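A minimal sketch of that wrapping trick, reusing objects and doSomething() from the question:
ForkJoinPool customPool = new ForkJoinPool(20); // the required parallelism
try {
    customPool.submit(() ->
            objects.parallelStream().forEach(object -> doSomething(object))
    ).get(); // block until the whole pipeline has finished
} finally {
    customPool.shutdown();
}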
The downside of this approach is that it relies on an implementation detail of the Stream API.
Also, as @Louis Wasserman pointed out in the comments, you probably need another way of limiting the number of threads used in the stream. Because you're performing database queries, each thread needs a database connection to do its job, so the number of threads should not exceed the number of connections the data source can provide at that moment. And if you have multiple processes that can fire these asynchronous tasks simultaneously (for instance, in a web application), developing your own solution doesn't seem like a good idea. In that case, you might consider a framework like Spring WebFlux.
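As a hedged illustration of that sizing advice: a dedicated executor matched to the connection pool (10 is HikariCP's default maximumPoolSize) keeps the number of concurrent queries within the number of available connections. objectService and objects come from the question:
ExecutorService dbExecutor = Executors.newFixedThreadPool(10); // == max DB connections
List<CompletableFuture<String>> params = objects.stream()
        .map(o -> CompletableFuture.supplyAsync(
                () -> objectService.getParam(o.getField()), dbExecutor))
        .collect(Collectors.toList());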

Related

Using multiple threadpools and connection pool

I am currently using 5 thread pools and I want to find the optimal sizing for them. This is a kind of prior analysis. The pools are divided by usage: handling commands (cmdPool), handling inventory transactions (invPool), database transactions (dbPool), common things that simply need to run async like I/O (fastPool), and scheduled tasks (timerPool). I do not yet have any statistical data that could be used to solve the problem.
For database queries I am using HikariCP with default values. I will try changing the maximum connection count and minimum idle connections later to find the optimal performance. For now, the Hikari pool is always called from one of the thread pools so as not to block the main thread. A typical database query runs under dbPool, but only when the code block is not already part of a Runnable submitted to one of the thread pools.
The current setup seems to work fine in the application. So my questions are:
1.) How will performance and resource usage be affected if I stop using a cachedThreadPool and switch to a pool with a minimum number of idle threads, like timerPool, or should I stick with cached?
2.) Is setting a maximum pool size the right way to prevent spikes, e.g. when around 100 clients join in a short period of time, letting them wait briefly while other tasks complete? (A hedged sketch of such a pool follows the setup code below.)
3.) Is there any better solution for managing many kinds of tasks?
cmdPool = Executors.newFixedThreadPool(3);      // command handling
invPool = Executors.newFixedThreadPool(2);      // inventory transactions
dbPool = Executors.newCachedThreadPool();       // database transactions
fastPool = Executors.newCachedThreadPool();     // common async work and I/O
timerPool = new ScheduledThreadPoolExecutor(5); // scheduled tasks
timerPool.allowCoreThreadTimeOut(true);
timerPool.setKeepAliveTime(3, TimeUnit.MINUTES);
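The bounded pool asked about in question 2 might look like this; the sizes are illustrative, not a recommendation. Note that a ThreadPoolExecutor only grows past its core size once its queue is full, so the queue is what absorbs a join spike:
ThreadPoolExecutor boundedCmdPool = new ThreadPoolExecutor(
        3,                               // core threads kept alive
        8,                               // hard cap on threads during a spike
        3, TimeUnit.MINUTES,             // reclaim idle non-core threads
        new LinkedBlockingQueue<>(100),  // joining clients' tasks wait here first
        new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the caller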
First of all, every action depends on how many clients are connected; let's assume values like 5-25 clients. The pools should be designed to handle even extremes like 100 clients without creating too many threads in a short time period.
Expected usage varies and is not the same every second; it may even happen that no task comes in at all. Expected usage of cmdPool is about 3-8 uses per second (lightweight tasks). invPool usage is nearly the same as cmdPool, 2-6 uses per second (also lightweight tasks). dbPool is more unpredictable than all the others, but expected usage is 5-20 uses per second (lightweight and medium-weight tasks), depending on how busy the network is. The timer and fast pools are designed to take any kind of task and just run it; expected usage is 20-50 uses per second.
I appreciate any suggestions, thank you.
The best solution is to adapt your application to the expected traffic.
You can do that in many ways:
Design it with a microservice architecture, leaving the orchestrator to handle peaks of traffic
Design the application to read the thread pool sizes on the fly (from a database, from a file, from a configuration server...), so you can change the values when needed
If you only need to tune your application and don't need to change the values on the fly, put your configuration in a file (or database) and test different configurations to find the one best adapted to your needs
What is important is to move away from code similar to this one:
cmdPool = Executors.newFixedThreadPool(3);
and replace it with code similar to this one:
@Value("${cmdPoolSize}")
private int cmdPoolSize;
...
cmdPool = Executors.newFixedThreadPool(cmdPoolSize);
where the size of the pool is not taken from the code, but from an external configuration.
A better way is also to define the kind of pool with parameters:
@Value("${cmdPoolType}")
private String cmdPoolType;
@Value("${cmdPoolSize}")
private int cmdPoolSize;
...
if (cmdPoolType.equals("cached")) {
    cmdPool = Executors.newCachedThreadPool();
} else if (cmdPoolType.equals("fixed")) {
    cmdPool = Executors.newFixedThreadPool(cmdPoolSize);
}
where you choose among the reasonable kinds of available pools.
In this last case you can also use a Spring configuration file and change it before starting the application.

Is Java 8 stream().map().reduce() really map-reduce?

I saw this code somewhere using stream().map().reduce().
Does this map() really run in parallel? If yes, what is the maximum number of threads it can use for the map() step?
What happens if I use parallelStream() instead of just stream() for the particular use case below?
Can anyone give me a good example of where NOT to use parallelStream()?
The code below just extracts tName from each tCode and returns a comma-separated String.
String ts = atList.stream().map(tCode -> {
    return CacheUtil.getTCache().getTInfo(tCode).getTName();
}).reduce((tName1, tName2) -> {
    return tName1 + ", " + tName2;
}).get();
This stream().map().reduce() is not parallel, so a single thread acts on the stream.
You have to use parallel(), or parallelStream() when starting from a collection (different entry points, same thing). By default a parallel stream runs in the ForkJoinPool#commonPool, which has availableProcessors() - 1 worker threads; since the calling thread participates as well, you effectively get as many threads as processors, so usually 2, 4, 8, etc. To check how many processors are available, use:
Runtime.getRuntime().availableProcessors()
You can use a custom pool and get as many threads as you want, as shown here.
Also notice that the entire pipeline is run in parallel, not just the map operation.
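For the snippet in the question, the parallel variant would look like this (a sketch; the string concatenation in reduce is associative, so the result keeps the encounter order):
String ts = atList.parallelStream()
        .map(tCode -> CacheUtil.getTCache().getTInfo(tCode).getTName())
        .reduce((tName1, tName2) -> tName1 + ", " + tName2)
        .get();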
There isn't a golden rule about when to use and when not to use parallel streams; the best way is to measure. But there are obvious cases, like a stream of 10 elements: that is far too little to get any real benefit from parallelization.
All parallel streams use the common fork-join thread pool, and if you submit a long-running task you effectively block all threads in the pool. Consequently you block all other tasks that are using parallel streams.
There are only two ways to make sure that never happens. The first is to ensure that all tasks submitted to the common fork-join pool don't get stuck and finish in a reasonable time, but that's easier said than done, especially in complex applications. The other option is to not use parallel streams and wait until Oracle allows us to specify the thread pool to be used for parallel streams.
Use case
Let's say you have a collection (a List) which gets loaded with values at the start of the application and no new value is added to it at any later point. In that scenario you can use a parallel stream without any concerns.
Don't worry, the stream is efficient and safe.
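A hedged sketch of that use case; loadCodesAtStartup() is a hypothetical loader, and the list is wrapped so it cannot be modified afterwards:
List<String> codes = Collections.unmodifiableList(loadCodesAtStartup()); // filled once at startup
long count = codes.parallelStream()
        .filter(code -> code.startsWith("T")) // concurrent reads of an unchanging list are safe
        .count();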

How to keep a fixed size pool of ListenableFutures?

I am reading a large file of URLs and making requests to a service. The requests are executed by a client that returns a ListenableFuture. Now I want to keep a pool of ListenableFutures, e.g. have at most N futures being executed concurrently.
The problem I see is that I have no control over the ExecutorService the ListenableFutures are executed in, because of the third-party library. Otherwise I would just create a fixed-size pool and my own Callables.
1) A naïve implementation would be to spawn N futures and then use allAsList, which satisfies the fixed-size criterion but makes everything wait for the slowest request. Out-of-order processing is OK.
2) A slightly better naïve option would be to combine the first idea with a rate limiter, setting N and the rate so that the number of concurrent requests approximates the desired pool size. But I am not actually looking for a way to throttle the calls, e.g. with RateLimiter.
3) A last option would be to spawn N futures and have a callback that spawns a new one. This satisfies the fixed-size criterion and minimizes idle time, but I don't know how to detect the end of my program, i.e. when to close the file.
4) A non-ListenableFuture approach would be to just .get() the result directly and handle the embarrassingly parallel tasks by creating a simple thread pool.
For knowing when the job queue is empty, i.e. when to close the file, I am thinking of using a CountDownLatch, which should work with many of these options.
Hmm. How do you feel about just using a java.util.concurrent.Semaphore?
Semaphore gate = new Semaphore(10); // at most 10 requests in flight
Runnable release = gate::release;   // java 8 syntax.
Iterator<URL> work = ...;
while (work.hasNext()) {
    gate.acquire(); // blocks until a slot is free
    ListenableFuture<?> f = ThirdPartyLibrary.doWork(work.next());
    f.addListener(release, MoreExecutors.sameThreadExecutor());
}
You could add other listeners maybe by using Futures.addCallback(ListenableFuture, FutureCallback) to do something with the results, as long as you're careful to release() on both success and error.
It might work.
Your option 3 sounds reasonable. If you want to cleanly detect when all requests have completed, one simple approach is to create a new SettableFuture to represent completion.
When your callback tries to take the next request from the queue, and finds it to be empty, you can call set() on the future to notify anything that's waiting for all requests to complete. Propagating exceptions from the individual request futures is left as an exercise for the reader.
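A hedged sketch of option 3 with that SettableFuture: keep N requests in flight, start the next from a listener, and complete allDone once the queue has drained and nothing is still running. ThirdPartyLibrary.doWork and the URL queue are assumptions carried over from the question.
import com.google.common.util.concurrent.*;
import java.net.URL;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

class BoundedRequester {
    private final Queue<URL> work = new ConcurrentLinkedQueue<>();
    private final AtomicInteger inFlight = new AtomicInteger();
    private final SettableFuture<Void> allDone = SettableFuture.create();

    ListenableFuture<Void> start(int n) {
        for (int i = 0; i < n; i++) startNext(); // N requests in flight
        return allDone; // completes when the queue is empty and nothing is running
    }

    private void startNext() {
        URL url = work.poll();
        if (url == null) {
            if (inFlight.get() == 0) allDone.set(null); // idempotent, safe to race
            return;
        }
        inFlight.incrementAndGet();
        ListenableFuture<?> f = ThirdPartyLibrary.doWork(url);
        f.addListener(() -> { inFlight.decrementAndGet(); startNext(); },
                MoreExecutors.sameThreadExecutor());
    }
}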
Use a fixed-size pool for the embarrassingly parallel tasks and .get() each future's result immediately.
This simplifies the code and allows each worker to have modifiable context.
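A hedged sketch of that alternative: N plain workers each block on the third-party future, so at most N requests run concurrently. ThirdPartyLibrary and the URL iterator are carried over from the question:
ExecutorService pool = Executors.newFixedThreadPool(10);
while (work.hasNext()) {
    URL url = work.next();
    Callable<String> task = () -> ThirdPartyLibrary.doWork(url).get(); // blocks one worker only
    pool.submit(task);
}
pool.shutdown();
pool.awaitTermination(1, TimeUnit.HOURS); // once this returns, the file can be closed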

Best practices with Akka in Scala and third-party Java libraries

I need to use a memcached Java API in my Scala/Akka code. This API gives you both synchronous and asynchronous methods. The asynchronous ones return java.util.concurrent.Future. There was already a question about dealing with Java Futures in Scala: How do I wrap a java.util.concurrent.Future in an Akka Future?. However, in my case I have two options:
Using the synchronous API and wrapping the blocking code in a Future, marking it with blocking:
Future {
  blocking {
    cache.get(key) // synchronous blocking call
  }
}
Using the asynchronous Java API and polling the Java Future every n ms to check whether it has completed (as described in one of the answers to the linked question).
Which one is better? I am leaning towards the first option because polling can dramatically impact response times. Shouldn't the blocking { } block prevent blocking the whole pool?
I always go with the first option, but I do it in a slightly different way: I don't use the blocking feature. (Actually I have not thought about it yet.) Instead I provide a custom execution context to the Future that wraps the synchronous blocking call. It looks basically like this:
// I create a separate EC for each blocking client/resource/API I use.
val ecForBlockingMemcachedStuff = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(100)) // whatever number you think is appropriate

Future {
  cache.get(key) // synchronous blocking call
}(ecForBlockingMemcachedStuff) // or mark the execution context implicit; I like to mention it explicitly
So all the blocking calls use a dedicated execution context (= thread pool), separated from your main execution context, which is responsible for non-blocking work.
This approach is also explained in an online training video for Play/Akka provided by Typesafe. Lesson 4 has a video about how to handle blocking calls, presented by Nilanjan Raychaudhuri (hope I spelled that correctly), a well-known author of Scala books.
Update: I had a discussion with Nilanjan on Twitter. He explained the difference between the blocking approach and a custom ExecutionContext. The blocking feature just creates a special ExecutionContext that takes a naive approach to the question of how many threads you will need: it spawns a new thread every time all the existing threads in the pool are busy. So it is effectively an uncontrolled ExecutionContext; it could create lots of threads and lead to problems like an out-of-memory error. The solution with a custom execution context is actually better, because it makes this problem obvious. Nilanjan also added that you need to consider circuit breaking for the case where this pool gets overloaded with requests.
TL;DR: Yeah, blocking calls suck. Use a custom/dedicated ExecutionContext for blocking calls. Also consider circuit breaking.
The Akka documentation provides a few suggestions on how to deal with blocking calls:
In some cases it is unavoidable to do blocking operations, i.e. to put a thread to sleep for an indeterminate time, waiting for an external event to occur. Examples are legacy RDBMS drivers or messaging APIs, and the underlying reason is typically that (network) I/O occurs under the covers. When facing this, you may be tempted to just wrap the blocking call inside a Future and work with that instead, but this strategy is too simple: you are quite likely to find bottlenecks or run out of memory or threads when the application runs under increased load.
The non-exhaustive list of adequate solutions to the "blocking problem" includes the following suggestions:
Do the blocking call within an actor (or a set of actors managed by a router), making sure to configure a thread pool which is either dedicated for this purpose or sufficiently sized.
Do the blocking call within a Future, ensuring an upper bound on the number of such calls at any point in time (submitting an unbounded number of tasks of this nature will exhaust your memory or thread limits).
Do the blocking call within a Future, providing a thread pool with an upper limit on the number of threads which is appropriate for the hardware on which the application runs.
Dedicate a single thread to manage a set of blocking resources (e.g. a NIO selector driving multiple channels) and dispatch events as they occur as actor messages.
The first possibility is especially well-suited for resources which are single-threaded in nature, like database handles which traditionally can only execute one outstanding query at a time and use internal synchronization to ensure this. A common pattern is to create a router for N actors, each of which wraps a single DB connection and handles queries as sent to the router. The number N must then be tuned for maximum throughput, which will vary depending on which DBMS is deployed on what hardware.

Java: TaskExecutor for Asynchronous Database Writes?

I'm thinking of using Java's TaskExecutor to fire off asynchronous database writes. Understandably threads don't come for free, but assuming I'm using a fixed threadpool size of say 5-10, how is this a bad idea?
Our application reads from a very large file using a buffer and flushes this information to a database after performing some data manipulation. Using asynchronous writes seems ideal here so that we can continue working on the file. What am I missing? Why doesn't every application use asynchronous writes?
Why doesn't every application use asynchronous writes?
It's often necessary/useful/easier to deal with a write failure in a synchronous manner.
I'm not sure a thread pool is even necessary. I would consider using a dedicated database-writer thread which does all the writing and error handling for you. Something like:
public class AsyncDatabaseWriter implements Runnable {
    private final LinkedBlockingQueue<Data> queue = new LinkedBlockingQueue<>();
    private volatile boolean terminate = false;

    public void run() {
        while (!terminate) {
            try {
                Data data = queue.take(); // blocks until data is available
                // write to database
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void scheduleWrite(Data data) {
        queue.add(data);
    }
}
I personally fancy the style of using a Proxy for offloading operations that might take a long time. I'm not saying this approach is better than using executors in any way, just adding it as an alternative.
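A hedged sketch of that proxy idea: a JDK dynamic proxy that hands every call on an interface to an executor and returns immediately. This is only suitable for void, fire-and-forget methods, and all the names here (DatabaseWriter, AsyncProxyFactory) are illustrative, not an established API.
import java.lang.reflect.Proxy;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;

interface DatabaseWriter { void write(String record); }

class AsyncProxyFactory {
    @SuppressWarnings("unchecked")
    static <T> T asyncProxy(Class<T> iface, T target, ExecutorService executor) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface },
                (proxy, method, args) -> {
                    // Queue the real call; the caller's thread returns at once.
                    executor.submit((Callable<Object>) () -> method.invoke(target, args));
                    return null; // fine for void methods only
                });
    }
}
Calling AsyncProxyFactory.asyncProxy(DatabaseWriter.class, realWriter, executor) then gives you a writer whose write() returns immediately while the real write runs on the executor.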
The idea is not bad at all. I actually tried it just yesterday, because I needed to create a copy of an online database which has 5 different categories with around 60000 items each.
By moving the parse/save operation for each category into parallel tasks, and partitioning each category import into smaller batches run in parallel, I reduced the total import time from several hours (estimated) to 26 minutes. Along the way I found a good piece of code for splitting a collection: http://www.vogella.de/articles/JavaAlgorithmsPartitionCollection/article.html
I used ThreadPoolTaskExecutor to run the tasks. The tasks themselves are just simple implementations of the Callable interface; see the sketch below.
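A hedged sketch of that partition-and-import approach: split the items into fixed-size chunks and run each chunk as its own Callable. All names and sizes are illustrative, not from the original answer.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class BatchImporter {
    static void importAll(List<String> items, int batchSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // tune to your hardware
        List<Callable<Void>> tasks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            List<String> batch = items.subList(i, Math.min(i + batchSize, items.size()));
            tasks.add(() -> { saveBatch(batch); return null; }); // parse/save one chunk
        }
        pool.invokeAll(tasks); // blocks until every batch has been imported
        pool.shutdown();
    }

    static void saveBatch(List<String> batch) { /* write the chunk to the copy database */ }
}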
Why doesn't every application use asynchronous writes? - erm, because every application does a different thing.
Can you believe some applications don't even use a database? OMG!!!!!!!!!
Seriously though, given that you don't say what your failure strategies are, it sounds like it could be reasonable. What happens if the write fails, or the DB goes away somehow?
Some databases - like Sybase - have (or at least had) a thing where they really don't like multiple writers to a single table - all the writers end up blocking each other - so maybe it won't actually make much difference...
