Using Threads in Loop - java

I have a for loop which needs to execute 36000 times
for(int i=0;i<36000;i++)
{
}
Is it possible to use multiple threads so that parts of the loop execute at the same time and the whole thing finishes faster?
Please suggest how to do this.

If you want a more explicit method, you can use thread pools with Thread, Callable or Runnable. See my answers here for examples:
Java : a method to do multiple calculations on arrays quickly
Thread won't naturally exit at end of run()
I do not recommend using Java's Fork/Join framework directly; it is not as great as it was hyped to be, and its performance is often disappointing. Instead, I would use Java 8's map and parallel streams if you want to make it easy. You have several options using this method.
IntStream.range(1, 4)
    .mapToObj(i -> "testing " + i)
    .forEach(System.out::println);
You would want to call map(lambda); Java 8 finally brings lambda expressions. It is possible to feed the stream one huge list, but there will be a performance impact. IntStream.range will do what you want. Then you need to figure out which of the new operations you want to use, such as filter, map, count, sum, reduce, etc. You may also have to tell the stream explicitly that you want it to be parallel. See these links; a parallel sketch follows them.
https://docs.oracle.com/javase/tutorial/collections/streams/parallelism.html
http://winterbe.com/posts/2014/07/31/java8-stream-tutorial-examples/
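For instance, a parallel version of the 36000-iteration loop might look roughly like this, assuming the loop body can be extracted into a method (the doWork(int) below is only a placeholder, not code from the question):

import java.util.stream.IntStream;

public class ParallelLoop {
    public static void main(String[] args) {
        // runs the iterations on the common ForkJoinPool worker threads
        IntStream.range(0, 36000)
                 .parallel()
                 .forEach(i -> doWork(i));
    }

    private static void doWork(int i) {   // placeholder for the real loop body
        System.out.println("testing " + i);
    }
}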
The classic method, which still has the best performance, is to do it yourself using a thread pool:
Basically, you would create a Runnable (does not return anything) or Callable (returns a result) object that will do some work on one of the threads in the pool. The pool will handle the scheduling, which is great for us. Java has several options for the kind of pool you use. You can create a Runnable/Callable in a loop, then submit it to the pool. The pool immediately returns a Future object that represents the task. You can add that Future to an ArrayList if you have many of these. After adding all the futures to the list, loop through them and call future.get(), which will wait for the end of execution. See the linked example above, which does not use a list, but does everything else I said.
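A rough sketch of that approach might look like this (doWork(int) is again just a placeholder for the real loop body):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoopWithThreadPool {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // submit one Callable per iteration; the pool handles the scheduling
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 36000; i++) {
            final int n = i;
            futures.add(pool.submit((Callable<Integer>) () -> doWork(n)));
        }

        // wait for every task and collect the results
        int total = 0;
        for (Future<Integer> f : futures) {
            total += f.get();   // blocks until this particular task is done
        }
        pool.shutdown();
        System.out.println("total = " + total);
    }

    private static int doWork(int i) {    // placeholder for the real loop body
        return i % 7;
    }
}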

Related

Thread vs Runnable vs CompletableFuture in Java multi threading

I am trying to implement multi-threading in my Spring Boot app. I am a beginner at multi-threading in Java, and after searching and reading articles on various pages, I would like clarification on the following points.
As far as I can see, I can use Thread, Runnable or CompletableFuture to implement multi-threading in a Java app. CompletableFuture seems like a newer and cleaner way, but Thread may have more advantages. So, should I stick to CompletableFuture or use all of them depending on the scenario?
Basically I want to send 2 concurrent requests to the same service method by using CompletableFuture:
CompletableFuture<Integer> future1 = fetchAsync(1);
CompletableFuture<Integer> future2 = fetchAsync(2);
Integer result1 = future1.get();
Integer result2 = future2.get();
How can I send these requests concurrently and then return a result based on the following conditions:
if the first result is not null, return result and stop process
if the first result is null, return the second result and stop process
How can I do this? Should I use CompletableFuture.anyOf() for that?
CompletableFuture is a tool which settles atop the Executor/ExecutorService abstraction, which has implementations dealing with Runnable and Thread. You usually have no reason to deal with Thread creation manually. If you find CompletableFuture unsuitable for a particular task you may try the other tools/abstractions first.
If you want to proceed with the first (in the sense of faster) non‑null result, you can use something like
CompletableFuture<Integer> future1 = fetchAsync(1);
CompletableFuture<Integer> future2 = fetchAsync(2);
Integer result = CompletableFuture.anyOf(future1, future2)
    .thenCompose(i -> i != null?
        CompletableFuture.completedFuture((Integer)i):
        future1.thenCombine(future2, (a, b) -> a != null? a: b))
    .join();
anyOf allows you to proceed with the first result, but regardless of its actual value. So to use the first non‑null result we need to chain another operation which will resort to thenCombine if the first result is null. This will only complete when both futures have been completed but at this point we already know that the faster result was null and the second is needed. The overall code will still result in null when both results were null.
Note that anyOf accepts arbitrarily typed futures and results in a CompletableFuture<Object>. Hence, i is of type Object and a type cast is needed. An alternative with full type safety would be
CompletableFuture<Integer> future1 = fetchAsync(1);
CompletableFuture<Integer> future2 = fetchAsync(2);
Integer result = future1.applyToEither(future2, Function.identity())
    .thenCompose(i -> i != null?
        CompletableFuture.completedFuture(i):
        future1.thenCombine(future2, (a, b) -> a != null? a: b))
    .join();
which requires us to specify a function which we do not need here, so this code resorts to Function.identity(). You could also just use i -> i to denote an identity function; that’s mostly a stylistic choice.
Note that most complications stem from the design that tries to avoid blocking threads by always chaining a dependent operation to be executed when the previous stage has been completed. The examples above follow this principle as the final join() call is only for demonstration purposes; you can easily remove it and return the future, if the caller expects a future rather than being blocked.
If you are going to perform the final blocking join() anyway, because you need the result value immediately, you can also use
Integer result = future1.applyToEither(future2, Function.identity()).join();
if(result == null) {
    Integer a = future1.join(), b = future2.join();
    result = a != null? a: b;
}
which might be easier to read and debug. This ease of use is the motivation behind the upcoming Virtual Threads feature. When an action is running on a virtual thread, you don't need to avoid blocking calls. So with this feature, if you still need to return a CompletableFuture without blocking your caller's thread, you can use
CompletableFuture<Integer> resultFuture = future1.applyToEitherAsync(future2, r -> {
    if(r != null) return r;
    Integer a = future1.join(), b = future2.join();
    return a != null? a: b;
}, Executors.newVirtualThreadPerTaskExecutor());
By requesting a virtual thread for the dependent action, we can use blocking join() calls within the function without hesitation which makes the code simpler, in fact, similar to the previous non-asynchronous variant.
In all cases, the code will provide the faster result if it is non-null, without waiting for the completion of the second future. But it does not stop the evaluation of the unnecessary future. Stopping an already ongoing evaluation is not supported by CompletableFuture at all. You can call cancel(…) on it, but this will only set the completion state (result) of the future to "exceptionally completed with a CancellationException".
So whether you call cancel or not, the already ongoing evaluation will continue in the background and only its final result will be ignored.
This might be acceptable for some operations. If not, you would have to change the implementation of fetchAsync significantly. You could use an ExecutorService directly and submit an operation to get a Future which supports cancellation with interruption.
But it also requires the operation's code to be sensitive to interruption to have an actual effect (a sketch follows the two points below):
When calling blocking operations, use those methods that may abort and throw an InterruptedException and do not catch-and-continue.
When performing a long running computational intense task, poll Thread.interrupted() occasionally and bail out when true.
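As a rough sketch of that pattern, where fetch(int) is only a stand-in for the real blocking operation and is not code from the question:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CancellableFetch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // fetch(...) checks the interrupt flag, so cancel(true) can stop it early
        Future<Integer> f1 = pool.submit(() -> fetch(1));
        Future<Integer> f2 = pool.submit(() -> fetch(2));

        Integer result = f1.get();        // wait for the first result
        if (result == null) {
            result = f2.get();
        } else {
            f2.cancel(true);              // interrupt the still-running task
        }
        System.out.println("result = " + result);
        pool.shutdown();
    }

    private static Integer fetch(int id) throws InterruptedException {
        for (int i = 0; i < 1000; i++) {
            if (Thread.interrupted()) {   // bail out when cancelled
                throw new InterruptedException();
            }
            Thread.sleep(1);              // stands in for blocking I/O
        }
        return id == 1 ? null : id;
    }
}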
So, should I stick to CompletableFuture or use all of them based on the scenario?
Use the one that is most appropriate to the scenario. Obviously, we can't be more specific unless you explain the scenario.
There are various factors to take into account. For example:
Thread + Runnable doesn't have a natural way to wait for / return a result. (But it is not hard to implement.)
Repeatedly creating bare Thread objects is inefficient because thread creation is expensive. Thread pooling is better but you shouldn't implement a thread pool yourself.
Solutions that use an ExecutorService take care of thread pooling and allow you to use Callable and return a Future. But for a once-off async computation this might be overkill.
Solutions that involve CompletableFuture allow you to compose and combine asynchronous tasks. But if you don't need to do that, using CompletableFuture may be overkill.
As you can see ... there is no single correct answer for all scenarios.
Should I use CompletableFuture.anyOf() for that?
No. The logic of your example requires that you must have the result for future1 to determine whether or not you need the result for future2. So the solution is something like this:
Integer i1 = future1.get();
if (i1 == null) {
    return future2.get();
} else {
    future2.cancel(true);
    return i1;
}
Note that the above works with plain Future as well as CompletableFuture. If you were using CompletableFuture because you thought that anyOf was the solution, then you didn't need to do that. Calling ExecutorService.submit(Callable) will give you a Future ...
It will be more complicated if you need to deal with exceptions thrown by the tasks and/or timeouts. In the former case, you need to catch ExecutionException and then extract its cause to get the exception thrown by the task.
There is also the caveat that the second computation may ignore the interrupt and continue on regardless.
So, should I stick to CompletableFuture or use all of them based on the scenario?
Well, they all have different purposes and you'll probably use them all either directly or indirectly:
Thread represents a thread, and while it can be subclassed, in most cases you shouldn't do so. Many frameworks maintain thread pools, i.e. they spin up several threads that then take tasks from a task queue. This is done to reduce the overhead of thread creation as well as the amount of contention (many threads and few CPU cores mean a lot of context switches, so you'd normally try to have fewer threads that just work on one task after another).
Runnable was one of the first interfaces to represent tasks that a thread can work on. Another is Callable, which has two major differences from Runnable: 1) it can return a value while Runnable returns void, and 2) it can throw checked exceptions. Depending on your case you can use either, but since you want to get a result, you'll more likely use Callable.
CompletableFuture and Future are basically a way for cross-thread communication, i.e. you can use those to check whether the task is done already (non-blocking) or to wait for completion (blocking).
So in many cases it's like this:
you submit a Runnable or Callable to some executor
the executor maintains a pool of Threads to execute the tasks you submitted
the executor returns a Future (one implementation being CompletableFuture) for you to check on the status and results of the task without having to synchronize yourself.
However, there may be other cases where you directly provide a Runnable to a Thread or even subclass Thread but nowadays those are far less common.
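A tiny sketch of that typical flow, combining a thread pool with a CompletableFuture (fetchFromService(int) is a hypothetical placeholder, not an API from the question):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TypicalFlow {
    public static void main(String[] args) {
        // the executor maintains the pool of threads
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // the task is a lambda; the CompletableFuture lets the caller observe
        // its status and result without synchronizing manually
        CompletableFuture<Integer> future =
                CompletableFuture.supplyAsync(() -> fetchFromService(1), pool);

        System.out.println("done yet? " + future.isDone());  // non-blocking check
        System.out.println("result = " + future.join());     // blocking wait
        pool.shutdown();
    }

    private static Integer fetchFromService(int id) {   // placeholder work
        return id * 10;
    }
}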
How can I do this? Should I use CompletableFuture.anyOf() for that?
CompletableFuture.anyOf() wouldn't work since you'd not be able to determine which of the 2 you'd pass in was successful first.
Since you're interested in result1 first (which btw can't be null if the type is int) you basically want to do the following:
Integer result1 = future1.get(); //block until result 1 is ready
if( result1 != null ) {
return result1;
} else {
return future2.get(); //result1 was null so wait for result2 and return it
}
You wouldn't want to call future2.get() right away, since then you would end up waiting for both futures to complete; you are interested in future1 first, and if it produces a non-null result you never have to wait for future2 to finish at all.
Note that the code above doesn't handle exceptional completions, and there's probably also a more elegant way of composing the futures the way you want, but I don't remember it at the moment (if I do, I'll add it as an edit).
Another note: you could call future2.cancel(true) if result1 isn't null, but I'd suggest you first check whether cancelling would even work (e.g. you'd have a hard time really cancelling a web service request) and what the results of interrupting the service would be. If it's OK to just let it complete and ignore the result, that's probably the easier way to go.

Can we improve performance on lists other than java 8 parallel streams

I have to dump data from somewhere by calling a REST API which returns a List.
First I have to get a List of objects from one REST API. Then, using a parallel stream, I go through each item with forEach.
For each element I have to call some other API to get the data, which again returns a list, and save that list by calling another REST API.
This takes around 1 hour for the 6000 records from step 1.
I tried something like this:
restApiMethodWhichReturns6000Records
    .parallelStream().forEach(id -> {
        anotherMethodWhichgetsSomeDataAndPostsToOtherRestCall(id);
    });

public void anotherMethodWhichgetsSomeDataAndPostsToOtherRestCall(String id) {
    sestApiToPostData(url, methodThatGetsListOfData(id));
}
parallelStream can sometimes cause unexpected behavior. It uses the common ForkJoinPool, so if you have parallel streams somewhere else in the code, long-running tasks in one of them can block the shared worker threads. Even within the same stream, if some tasks take a long time, all the worker threads can end up blocked.
There is a good discussion of this on Stack Overflow, including some tricks for running a stream on a task-specific ForkJoinPool.
First of all, make sure your REST service is non-blocking.
One more thing you can do is to play with the pool size by supplying -Djava.util.concurrent.ForkJoinPool.common.parallelism=4 to the JVM.
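A rough sketch of the task-specific ForkJoinPool trick mentioned above; ids and processOneId(...) are placeholders standing in for the question's records and REST calls, and the sketch relies on the well-known (if undocumented) behaviour that a parallel stream started inside a ForkJoinPool task runs on that pool:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class DedicatedPoolSketch {
    public static void main(String[] args) throws Exception {
        List<String> ids = Arrays.asList("1", "2", "3");   // stands in for the 6000 records
        ForkJoinPool restPool = new ForkJoinPool(8);        // sized for blocking REST calls
        try {
            // the parallel stream runs on restPool instead of the common pool
            restPool.submit(() ->
                    ids.parallelStream().forEach(id -> processOneId(id))
            ).get();                                        // wait for the whole stream
        } finally {
            restPool.shutdown();
        }
    }

    private static void processOneId(String id) {           // placeholder for the two REST calls
        System.out.println("processed " + id);
    }
}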
If the API calls are blocking, then even when you run them in parallel you will only be able to make a few calls at a time.
I would try out a solution using CompletableFuture.
The code would be something like this:
List<CompletableFuture<Void>> apiCallsFutures = restApiMethodWhichReturns6000Records
        .stream()
        .map(id -> CompletableFuture.supplyAsync(() -> getListOfData(id))        // the get-list call as a future
                .thenAccept(listOfData -> callAPItoPOSTData(url, listOfData)))   // when the get call completes, perform the post call
        .collect(Collectors.toList());
CompletableFuture[] completableFutures = apiCallsFutures.toArray(new CompletableFuture[apiCallsFutures.size()]); // CompletableFuture.allOf accepts only arrays :(
CompletableFuture<Void> all = CompletableFuture.allOf(completableFutures); // combine all the futures
all.get(); // wait for every call to complete
For more details about CompletableFutures, have a look over: https://www.baeldung.com/java-completablefuture

Processing sub-streams of a stream in Java using executors

I have a program that processes a huge stream (not in the sense of java.util.stream, but rather InputStream) of data coming in through the network. The stream consists of objects, each having a sort of sub-stream identifier. Right now the whole processing is done in a single thread, but it takes a lot of CPU time and each sub-stream can easily be processed independently, so I'm thinking of multi-threading it.
However, each sub-stream requires keeping a lot of bulky state, including various buffers, hash maps and such. There is no particular reason to make it concurrent or synchronized, since the sub-streams are independent of each other. Moreover, each sub-stream requires that its objects are processed in the order they arrive, which means that there should probably be a single thread for each sub-stream (but possibly one thread processing multiple sub-streams).
I'm thinking of several approaches to this, but they are not quite elegant.
1. Create a single ThreadPoolExecutor for all tasks. Each task will contain the next object to process and a reference to a Processor instance which keeps all the state. That would ensure the necessary happens-before relationship, thus ensuring that the processing thread will see the up-to-date state for this sub-stream. This approach has no way to make sure that the next object of the same sub-stream will be processed in the same thread, as far as I can see. Moreover, it needs some guarantee that objects will be processed in the order they come in, which will require additional synchronization of the Processor objects, introducing unnecessary delays.
2. Create multiple single-thread executors manually and a sort of hash map that maps sub-stream identifiers to executors. This approach requires manual management of the executors, creating or shutting them down as new sub-streams begin or end, and distributing the tasks between them accordingly.
3. Create a custom executor that processes a special subclass of tasks, each having a sub-stream ID. This executor would use it as a hint to execute the task on the same thread as the previous one with the same ID. However, I don't see an easy way to implement such an executor. Unfortunately, it doesn't seem possible to extend any of the existing executor classes, and implementing an executor from scratch is kind of overkill.
4. Create a single ThreadPoolExecutor, but instead of creating a task for each incoming object, create a single long-running task for each sub-stream that would block on a concurrent queue, waiting for the next object. Then put objects in queues according to their sub-stream IDs. This approach needs as many threads as there are sub-streams because the tasks will be blocked. The expected number of sub-streams is about 30-60, so that may be acceptable.
5. Alternatively, proceed as in 4, but limit the number of threads, assigning multiple sub-streams to a single task. This is sort of a hybrid between 2 and 4. As far as I can see, this is the best approach of these, but it still requires some sort of manual sub-stream distribution between tasks and some way to shut the extra tasks down as sub-streams end.
What would be the best way to ensure that each sub-stream is processed in its own thread without a lot of error-prone code? So that the following pseudo-code will work:
// loop {
Item next = stream.read();
int id = next.getSubstreamID();
Processor processor = getProcessor(id);
SubstreamTask task = new SubstreamTask(processor, next, id);
executor.submit(task); // This makes sure that the task will
// be executed in the same thread as the
// previous task with the same ID.
// } // loop
I suggest having an array of single threaded executors. If you can devise a consistent hashing strategy for sub-streams, you can map sub-streams to individual threads. e.g.
final ExecutorService[] es = ...

public void submit(int id, Runnable run) {
    es[(id & 0x7FFFFFFF) % es.length].submit(run);
}
The key could be a String or a long, or whatever identifies the sub-stream. If you know a particular sub-stream is very expensive, you could assign it a dedicated thread.
The solution I finally chose looks like this:
private final Executor[] streamThreads
        = new Executor[Runtime.getRuntime().availableProcessors()];
{
    for (int i = 0; i < streamThreads.length; ++i) {
        streamThreads[i] = Executors.newSingleThreadExecutor();
    }
}
private final ConcurrentHashMap<SubstreamId, Integer>
        threadById = new ConcurrentHashMap<>();
This code determines which executor to use:
Message msg = in.readNext();
SubstreamId msgSubstream = msg.getSubstreamId();
int exe = threadById.computeIfAbsent(msgSubstream,
        id -> findBestExecutor());
streamThreads[exe].execute(() -> {
    // processing goes here
});
And the findBestExecutor() function is this:
private int findBestExecutor() {
    // Thread index -> substream count mapping:
    final int[] loads = new int[streamThreads.length];
    for (int thread : threadById.values()) {
        ++loads[thread];
    }
    // return the index of the minimum load
    return IntStream.range(0, streamThreads.length)
            .reduce((i, j) -> loads[i] <= loads[j] ? i : j)
            .orElse(0);
}
This is, of course, not very efficient, but note that this function is only called when a new sub-stream shows up (which happens several times every few hours, so it's not a big deal in my case). My real code looks a bit more complicated because I have a way to determine whether two sub-streams are likely to finish simultaneously, and if they are, I try to assign them to different threads in order to maintain even load after they do finish. But since I never mentioned this detail in the question, I guess it doesn't belong to the answer either.

LinkedList Iterator throwing Concurrent Modification Exception

Is there a way to stop a ListIterator from throwing a ConcurrentModificationException? This is what I want to do:
Create a LinkedList with a bunch of objects that have a certain method that is to be executed frequently.
Have a set number of threads (say N) all of which are responsible for executing the said method of the objects in the LinkedList. For example, if there are k objects in the list, thread n would execute the method of the n-th object in the list, then move on to n+N-th object, then to n+2N-th, etc., until it loops back to the beginning.
The problem here lies in the retrieval of these objects. I would obviously be using a ListIterator to do this work. However, I predict this will not get very far, thanks to the ConcurrentModificationException that will be thrown according to the documentation. I want the list to be modifiable, and for the iterators to not care. In fact, it is expected that these objects will create and destroy other objects in the list.
I've thought of a few work-arounds:
Create and destroy a new iterator to retrieve the object at the given index. However, this is O(n), undesirable.
Use an ArrayList instead; however, this is also undesirable, since deletions are O(n) and there are problems with the list needing to expand (and perhaps contract?) from time to time.
Write my own LinkedList class. Don't want to.
Thus, my question. Is there a way to stop a ListIterator from throwing a ConcurrentModificationException?
You seem concerned with performance. Have you actually measured the performance hit of using an O(n) vs O(1) algorithm? Depending on what you are doing and how frequently you are doing it, it might be acceptable to simply use a CopyOnWriteArrayList which is thread safe. Its iterators are also thread safe.
The main performance drag is on mutative operations (set, add, remove...): a new list is recreated each time.
However, the performance will be good enough for most applications. I would personally try using that, profile my application to check that the performance is good enough, and move on if it is. If it is not, you will need to find other ways.
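A small sketch of what that might look like; Task and its execute() method are hypothetical stand-ins for the objects in the question:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowExample {
    interface Task { void execute(); }

    public static void main(String[] args) {
        List<Task> tasks = new CopyOnWriteArrayList<>();
        tasks.add(() -> System.out.println("task 1"));
        tasks.add(() -> System.out.println("task 2"));

        // iteration works on a snapshot of the backing array, so concurrent
        // add/remove never throws ConcurrentModificationException
        for (Task t : tasks) {
            tasks.remove(t);      // mutating while iterating is fine here
            t.execute();
        }
        System.out.println("remaining: " + tasks.size());
    }
}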
Is there a way to stop a ListIterator from throwing a ConcurrentModificationException?
The fact that you are asking the question this way shows a lack of understanding of how to properly use threads to increase the performance of your application.
The whole purpose of using threads is to divide processing and IO into separate runnable entities that can be executed in parallel -- independent of each other. If you are forking threads to all work on the same LinkedList then you most likely will have a performance loss or minimal gain since the overhead of the synchronization necessary to keep each of the threads' "view" of the LinkedList in sync would counter any gains due to parallel execution.
The question should not be "how do I stop ConcurrentModificationException", it should be "how can I use threads to improve the processing of a list of objects". That's the right question.
To process a collection of objects in parallel with a number of threads, you should be using an ExecutorService thread-pool. You create the pool with something like the following code. Each of the entries in your LinkedList (in this example Job) would then be processed by the threads in the pool in parallel.
// create a thread pool with 10 workers
ExecutorService threadPool = Executors.newFixedThreadPool(10);
// submit each of the objects in the list to the pool
for (Job job : jobLinkedList) {
    threadPool.submit(new MyJobProcessor(job));
}
// once we have submitted all jobs to the thread pool, it should be shut down
threadPool.shutdown();
// wait for the thread-pool jobs to finish
threadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
synchronized (jobLinkedList) {
    // not sure this is necessary, but we need a memory barrier somewhere
}
...
// you wouldn't need this if Job implemented Runnable
public class MyJobProcessor implements Runnable {
    private final Job job;

    public MyJobProcessor(Job job) {
        this.job = job;
    }

    public void run() {
        // process the job
    }
}
You could use one Iterator to scan the list, and use an Executor to do the work on each object by passing it off to a pool of threads. That's easy, though there is overhead in packaging up work units this way. You still have to be careful to modify the list only through the Iterator's methods, but maybe that simplifies the problem.
Or can you perform your work in one pass, then list modification in the next?
Can you split into N lists?
Please see the answer from @assylias -- his advice is good. I would add that if you decide to write your own linked list class, you need to think very carefully about how to make it thread-safe.
Think about all the ways your list could get mangled if multiple threads tried to modify it simultaneously. Just locking 1 or 2 nodes is not enough -- as an example, take the following list:
A -> B -> C -> D
Imagine that one thread tries to remove B, just as another thread is removing C. To remove B, the link from A needs to "jump" over B to C. But what if C is no longer part of the list by that time? Likewise, to remove C, the link from B needs to be changed to jump to D, but what if B has already been removed from the list by that time? Similar issues arise when nodes are added simultaneously to nearby parts of the list.
If you have 1 lock per node, and you lock 3 nodes when doing a "remove" operation (the node to be removed, and the nodes before and after it), I think it will be thread-safe. You need to also think carefully about which nodes must be locked when adding nodes, and when traversing the list. To avoid deadlocks, you need to make sure to always acquire locks in a constant order, and when traversing the list, you need to use "hand-over-hand" locking (which precludes the use of ordinary Java monitors -- you need explicit lock objects).
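To make the idea concrete, here is a minimal sketch of hand-over-hand ("lock coupling") removal using explicit locks. It is a simplified variant that holds only two locks at a time (the predecessor and the node being examined); Node with an int payload is purely an assumption for illustration, and insertion and traversal, which need the same coupling, are omitted:

import java.util.concurrent.locks.ReentrantLock;

public class LockCoupledList {
    static final class Node {
        final int value;
        Node next;
        final ReentrantLock lock = new ReentrantLock();
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    private final Node head = new Node(Integer.MIN_VALUE, null);  // sentinel

    // removes the first node with the given value; at any moment at most two
    // node locks are held, always acquired in list order (so no deadlock)
    public boolean remove(int value) {
        Node pred = head;
        pred.lock.lock();
        Node curr = pred.next;
        if (curr != null) curr.lock.lock();
        try {
            while (curr != null) {
                if (curr.value == value) {
                    pred.next = curr.next;     // unlink while both locks are held
                    return true;
                }
                // hand over hand: lock the next node before releasing pred
                Node next = curr.next;
                if (next != null) next.lock.lock();
                pred.lock.unlock();
                pred = curr;
                curr = next;
            }
            return false;
        } finally {
            pred.lock.unlock();
            if (curr != null) curr.lock.unlock();
        }
    }
}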

Java Iterator Concurrency

I'm trying to loop over a Java iterator concurrently, but am having trouble figuring out the best way to do this.
Here is what I have where I don't try to do anything concurrently.
Long l;
Iterator<Long> i = getUserIDs();
while (i.hasNext()) {
l = i.next();
someObject.doSomething(l);
anotheObject.doSomething(l);
}
There should be no race conditions between the things I'm doing on the non-iterator objects, so I'm not too worried about that. I'd just like to speed up how long it takes to loop through the iterator by not doing it sequentially.
Thanks in advance.
One solution is to use an executor to parallelise your work.
Simple example:
ExecutorService executor = Executors.newCachedThreadPool();
Iterator<Long> i = getUserIDs();
while (i.hasNext()) {
    final Long l = i.next();
    Runnable task = new Runnable() {
        public void run() {
            someObject.doSomething(l);
            anotheObject.doSomething(l);
        }
    };
    executor.submit(task);
}
executor.shutdown();
This uses a cached thread pool, which creates new threads as needed and reuses idle ones, so the work for each item runs asynchronously. You can tune how many threads are used by choosing a different factory method on the Executors class (e.g. newFixedThreadPool), or subdivide the work as you see fit (e.g. a different Runnable for each of the method calls).
I can offer two possible approaches:
Use a thread pool and dispatch the items received from the iterator to a set of processing threads. This will not accelerate the iterator operations themselves, since those would still happen in a single thread, but it will parallelize the actual processing.
Depending on how the iteration is created, you might be able to split the iteration process into multiple segments, each to be processed by a separate thread via a different Iterator object. For an example, have a look at the List.subList(int fromIndex, int toIndex) and List.listIterator(int index) methods.
This would allow the iterator operations to happen in parallel, but it is not always possible to segment the iteration like this, usually due to the simple fact that the items to be iterated over are not immediately available.
As a bonus trick, if the iteration operations are expensive or slow, such as those required to access a database, you might see a throughput improvement if you separate them out to a separate thread that will use the iterator to fill in a BlockingQueue. The dispatcher thread will then only have to access the queue, without waiting on the iterator object to retrieve the next item.
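A sketch of that trick, reusing getUserIDs(), someObject and anotheObject from the question; the POISON end-marker and the queue capacity are assumptions, and the snippet assumes it runs in a method declared to throw InterruptedException:

BlockingQueue<Long> queue = new LinkedBlockingQueue<>(1024);
final Long POISON = Long.MIN_VALUE;   // marks the end of the iteration

// producer thread: drains the slow iterator into the queue
Thread producer = new Thread(() -> {
    try {
        Iterator<Long> it = getUserIDs();
        while (it.hasNext()) {
            queue.put(it.next());
        }
        queue.put(POISON);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
producer.start();

// consumer/dispatcher: only ever waits on the queue, never on the iterator
Long l;
while (!(l = queue.take()).equals(POISON)) {
    someObject.doSomething(l);
    anotheObject.doSomething(l);
}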
The most important advice in this case is this: "Use your profiler", usually to be followed by "Do not optimise prematurely". By using a profiler, such as VisualVM, you should be able to ascertain the exact cause of any performance issues, without taking shots in the dark.
If you are using Java 7, you can use the new fork/join; see the tutorial.
Not only does it split automatically the tasks among the threads, but if some thread finishes its tasks earlier than the other threads, it "steals" some tasks from the other threads.
