Executor framework to process 1 million records - java

I had a requirement where I had to process a file containing 1 million records and save them in a Redis cache. I was supposed to use a Redis pipeline, but I couldn't find any information on it; I asked about that in an earlier question.
So I decided to use the multithreading/executor framework. I am new to multithreading.
Here is my code:
@Async
public void createSubscribersAsync(Subscription subscription, MultipartFile file) throws EntityNotFoundException, InterruptedException, ExecutionException, TimeoutException {
    ExecutorService executorService = Executors.newFixedThreadPool(8);
    Collection<Callable<String>> callables = new ArrayList<>();
    List<Subscriber> cache = new ArrayList<>();
    int batchSize = defaultBatchSize.intValue();
    while ((line = br.readLine()) != null) {
        try {
            Subscriber subscriber = createSubscriber(subscription, line);
            cache.add(subscriber);
            if (cache.size() >= batchSize) {
                IntStream.rangeClosed(1, 8).forEach(i -> {
                    callables.add(createCallable(cache, subscription.getSubscriptionId()));
                });
            }
        } catch (InvalidSubscriberDataException e) {
            invalidRows.add(line + ":" + e.getMessage());
            invalidCount++;
        }
    }
    List<Future<String>> taskFutureList = executorService.invokeAll(callables);
    for (Future<String> future : taskFutureList) {
        String value = future.get(4, TimeUnit.SECONDS);
        System.out.println(String.format("TaskFuture returned value %s", value));
    }
}

private Callable<String> createCallable(List<Subscriber> cache, String subscriptionId) {
    return new Callable<String>() {
        public String call() throws Exception {
            System.out.println(String.format("starting expensive task thread %s", Thread.currentThread().getName()));
            processSubscribers(cache, subscriptionId);
            System.out.println(String.format("finished expensive task thread %s", Thread.currentThread().getName()));
            return "Finish Thread:" + Thread.currentThread().getName();
        }
    };
}

private void processSubscribers(List<Subscriber> cache, String subscriptionId) {
    subscriberRedisRepository.saveAll(cache);
    cache.clear();
}
The idea here is that I want to split the file into batches and save each batch using a thread. I created a pool of 8 threads.
Is this a correct way to use the executor framework? If not, could you please help me out? I appreciate the help.

Quick modifications to your current code to achieve what you're asking:
In your while loop, once the current cache reaches the batch size, create a callable that takes the current cache, then create a new list and assign it as the cache.
You are collecting callables so you can submit them as one batch; why not submit each callable right after creating it? That way already-read records start being written to Redis while your main thread continues reading the file.
List<Future<String>> taskFutureList = new LinkedList<Future<String>>();
while ((line = br.readLine()) != null) {
    try {
        Subscriber subscriber = createSubscriber(subscription, line);
        cache.add(subscriber);
        if (cache.size() >= batchSize) {
            taskFutureList.add(executorService.submit(createCallable(cache, subscription.getSubscriptionId())));
            cache = new ArrayList<>(); // hand the full batch to the callable, start a fresh one
        }
    } catch (InvalidSubscriberDataException e) {
        invalidRows.add(line + ":" + e.getMessage());
        invalidCount++;
    }
}
// submit last batch that could be < batchSize
if (!cache.isEmpty()) {
    taskFutureList.add(executorService.submit(createCallable(cache, subscription.getSubscriptionId())));
}
You do not have to store a separate list of callables.
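Once the read loop finishes, you still need to wait for the submitted batches and shut the pool down; a minimal sketch, reusing the executorService and taskFutureList from the code above:

// after the read loop: wait for every submitted batch to complete
for (Future<String> future : taskFutureList) {
    String value = future.get(4, TimeUnit.SECONDS); // rethrows any failure from the worker thread
    System.out.println(String.format("TaskFuture returned value %s", value));
}
executorService.shutdown(); // accept no new tasks; lets the pool's threads exit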

Related

Process large text file concurrently

So I have a large text file, in this case it's roughly 4.5 GB, and I need to process the entire file as fast as is possible. Right now I have multi-threaded this using 3 threads (not including the main thread). An input thread for reading the input file, a processing thread to process the data, and an output thread to output the processed data to a file.
Currently, the bottleneck is the processing section. Therefore, I'd like to add more processing threads into the mix. However, this creates a situation where I've got multiple threads accessing the same BlockingQueue, and their results are therefore not maintaining the order of the input file.
An example of the functionality I'm looking for would be something like this:
Input file: 1, 2, 3, 4, 5
Output file: ^ the same. Not 2, 1, 4, 3, 5 or any other combination.
I've written a dummy program that is identical in functionality to the actual program minus the processing part (I can't give you the actual program because the processing class contains confidential information). I should also mention that all of the classes (Input, Processing, and Output) are inner classes of a Main class that contains the initialise() method and the class-level variables mentioned in the main-thread code listed below.
Main thread:
static volatile boolean readerFinished = false; // class level variables
static volatile boolean writerFinished = false;

private void initialise() throws IOException {
    BlockingQueue<String> inputQueue = new LinkedBlockingQueue<>(1_000_000);
    BlockingQueue<String> outputQueue = new LinkedBlockingQueue<>(1_000_000); // capacity 1 million.
    String inputFileName = "test.txt";
    String outputFileName = "outputTest.txt";
    BufferedReader reader = new BufferedReader(new FileReader(inputFileName));
    BufferedWriter writer = new BufferedWriter(new FileWriter(outputFileName));
    Thread T1 = new Thread(new Input(reader, inputQueue));
    Thread T2 = new Thread(new Processing(inputQueue, outputQueue));
    Thread T3 = new Thread(new Output(writer, outputQueue));
    T1.start();
    T2.start();
    T3.start();
    while (!writerFinished) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    reader.close();
    writer.close();
    System.out.println("Exited.");
}
Input thread: (Please forgive the commented-out debug code; I was using it to ensure the reader thread was actually executing properly.)
class Input implements Runnable {
    BufferedReader reader;
    BlockingQueue<String> inputQueue;

    Input(BufferedReader reader, BlockingQueue<String> inputQueue) {
        this.reader = reader;
        this.inputQueue = inputQueue;
    }

    @Override
    public void run() {
        String poisonPill = "ChH92PU2KYkZUBR";
        String line;
        //int linesRead = 0;
        try {
            while ((line = reader.readLine()) != null) {
                inputQueue.put(line);
                //linesRead++;
                /*
                if (linesRead == 500_000) {
                    //batchesRead += 1;
                    //System.out.println("Batch read");
                    linesRead = 0;
                }
                */
            }
            inputQueue.put(poisonPill);
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }
        readerFinished = true;
    }
}
Processing thread: (Normally this would actually be doing something to the line, but for purposes of the mockup I've just made it immediately push to the output thread). If necessary we can simulate it doing some work by making the thread sleep for a small amount of time for each line.
class Processing implements Runnable {
    BlockingQueue<String> inputQueue;
    BlockingQueue<String> outputQueue;

    Processing(BlockingQueue<String> inputQueue, BlockingQueue<String> outputQueue) {
        this.inputQueue = inputQueue;
        this.outputQueue = outputQueue;
    }

    @Override
    public void run() {
        while (true) {
            try {
                if (inputQueue.isEmpty() && readerFinished) {
                    break;
                }
                String line = inputQueue.take();
                outputQueue.put(line);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Output thread:
class Output implements Runnable {
    BufferedWriter writer;
    BlockingQueue<String> outputQueue;

    Output(BufferedWriter writer, BlockingQueue<String> outputQueue) {
        this.writer = writer;
        this.outputQueue = outputQueue;
    }

    @Override
    public void run() {
        String line;
        ArrayList<String> outputList = new ArrayList<>();
        while (true) {
            try {
                line = outputQueue.take();
                if (line.equals("ChH92PU2KYkZUBR")) {
                    for (String outputLine : outputList) {
                        writer.write(outputLine);
                    }
                    System.out.println("Writer finished - executing termination");
                    writerFinished = true;
                    break;
                }
                line += "\n";
                outputList.add(line);
                if (outputList.size() == 500_000) {
                    for (String outputLine : outputList) {
                        writer.write(outputLine);
                    }
                    System.out.println("Writer wrote batch");
                    outputList = new ArrayList<>();
                }
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
So right now the general data flow is very linear, looking something like this:
Input > Processing > Output.
What I'd like instead is a fan-out/fan-in arrangement: one Input thread feeding several Processing threads, which all feed a single Output thread.
But the catch is, when the data gets to output, it either needs to be sorted into the correct order, or it needs to already be in the correct order.
Recommendations or examples on how to go about this would be greatly appreciated.
In the past I have used the Future and Callable interfaces to solve a task involving parallel data flows like this, but unfortunately that code was not reading from a single queue, and so is of minimal help here.
I should also add, for those of you that will notice this, batchSize and poisonPill are normally defined in the main thread and then passed around via variables, they are not usually hard coded as they are in the code for Input thread, and the output checks for the writer thread. I was just a wee bit lazy when writing the mockup for experimentation at ~1am.
Edit: I should also mention, this is required to use Java 8 at most. Java 9 features and above cannot be used due to these versions not being installed in the environments in which this program will be run.
What you could do:
Take X threads for processing, where X is the number of cores available for processing
Give each thread its own input queue.
The reader thread gives records to each thread's input queue round-robin in a predictable fashion (see the sketch after this list).
Since the output files are too big for memory, you write X output files, one for each thread, and each file name has the index of the thread in it, so that you can reconstitute the original order from the file names.
After the process is complete, you merge the X output files. One line from the file for thread 1, one from the files for thread 2, etc. in a round-robin fashion again. This reconstitutes the original order.
As an added bonus, since you have an input queue per thread, you don't have lock contention on the queue between readers. (only between the reader and the writer) You could even optimize this by putting things in the input queues in batches larger than 1.
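A minimal sketch of that round-robin hand-off, reusing the reader and poisonPill from the question (the WORKERS and workerQueues names are illustrative, not from the original answer):

int WORKERS = Runtime.getRuntime().availableProcessors();
List<BlockingQueue<String>> workerQueues = new ArrayList<>();
for (int i = 0; i < WORKERS; i++) {
    workerQueues.add(new LinkedBlockingQueue<>(10_000));
}
String line;
int next = 0;
while ((line = reader.readLine()) != null) {
    // worker i receives lines i, i + WORKERS, i + 2 * WORKERS, ...
    workerQueues.get(next).put(line);
    next = (next + 1) % WORKERS;
}
for (BlockingQueue<String> q : workerQueues) {
    q.put(poisonPill); // one sentinel per worker so every worker terminates
}

Each worker then writes its results to its own indexed output file, and the merge step reads one line from each file in the same round-robin order.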
As was also proposed by Alexei, you can create OrderedTask:
class OrderedTask implements Comparable<OrderedTask> {
    private final Integer index;
    private final String line;

    public OrderedTask(Integer index, String line) {
        this.index = index;
        this.line = line;
    }

    @Override
    public int compareTo(OrderedTask o) {
        // compare by value; == on Integer would compare references
        return index.compareTo(o.getIndex());
    }

    public Integer getIndex() {
        return index;
    }

    public String getLine() {
        return line;
    }
}
As an output queue you can use your own backed by priority queue:
class OrderedTaskQueue {
    private final ReentrantLock lock;
    private final Condition waitForOrderedItem;
    private final int maxQueuesize;
    private final PriorityQueue<OrderedTask> backedQueue;
    private int expectedIndex;

    public OrderedTaskQueue(int maxQueueSize, int startIndex) {
        this.maxQueuesize = maxQueueSize;
        this.expectedIndex = startIndex;
        this.backedQueue = new PriorityQueue<>(2 * this.maxQueuesize);
        this.lock = new ReentrantLock();
        this.waitForOrderedItem = this.lock.newCondition();
    }

    public boolean put(OrderedTask item) {
        ReentrantLock lock = this.lock;
        lock.lock();
        try {
            while (this.backedQueue.size() >= maxQueuesize && item.getIndex() != expectedIndex) {
                this.waitForOrderedItem.await();
            }
            boolean result = this.backedQueue.add(item);
            this.waitForOrderedItem.signalAll();
            return result;
        } catch (InterruptedException e) {
            throw new RuntimeException();
        } finally {
            lock.unlock();
        }
    }

    public OrderedTask take() {
        ReentrantLock lock = this.lock;
        lock.lock();
        try {
            while (this.backedQueue.peek() == null || this.backedQueue.peek().getIndex() != expectedIndex) {
                this.waitForOrderedItem.await();
            }
            OrderedTask result = this.backedQueue.poll();
            expectedIndex++;
            this.waitForOrderedItem.signalAll();
            return result;
        } catch (InterruptedException e) {
            throw new RuntimeException();
        } finally {
            lock.unlock();
        }
    }
}
startIndex is the index of the first ordered task, and
maxQueueSize is used to stop processing further tasks (so memory doesn't fill up) while we wait for some earlier task to finish. It should be double or triple the number of processing threads, so that processing isn't stalled immediately and can still scale.
Then you should create your tasks:

int indexOrder = 0;
while ((line = reader.readLine()) != null) {
    inputQueue.put(new OrderedTask(indexOrder++, line));
}
Line-by-line tasks are used here only to match your example; you should change OrderedTask to carry a batch of lines instead, as sketched below.
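A rough sketch of that batched variant (the OrderedBatch name and fields are illustrative assumptions, not from the original answer):

class OrderedBatch implements Comparable<OrderedBatch> {
    private final int index;          // position of this batch in the file
    private final List<String> lines; // up to batchSize consecutive lines

    OrderedBatch(int index, List<String> lines) {
        this.index = index;
        this.lines = lines;
    }

    public int getIndex() {
        return index;
    }

    public List<String> getLines() {
        return lines;
    }

    @Override
    public int compareTo(OrderedBatch o) {
        return Integer.compare(index, o.getIndex());
    }
}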
Why not reverse the flow?
The output side calls for X batches;
generate X promises/tasks (promise pattern), each of which randomly calls one of the processing cores (keeping a batch number to pass through to the input core); collect the call handlers into an ordered list;
each processing core in turn calls for a batch from the input core;
enjoy.

How to read lines from a CSV to use in multiple threads

Suppose I have a CSV file with hundreds of lines, each containing two random keywords as cells, that I'd like to Google-search and have the first result on the page printed to the console or stored in some array. In the case of this example, I imagine I would do this successfully, reading one line at a time, using something like the following:
CSVReader reader = new CSVReader(new FileReader(FILE_PATH));
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
    driver.get("http://google.com/");
    driver.findElement(By.name("q")).click();
    driver.findElement(By.name("q")).clear();
    driver.findElement(By.name("q")).sendKeys(nextLine[0] + " " + nextLine[1]);
    System.out.println(driver.findElement(By.xpath(XPATH_TO_1ST)));
}
How would I go about having 5 (or however many) chromedriver threads through Selenium process the CSV file as fast as possible? I've managed to get 5 lines done at a time by implementing Runnable on a class that does this and starting 5 threads, but I'd like a solution where, as soon as one thread completes, it picks up the next unprocessed line, rather than waiting for all 5 searches to finish before going on to the next 5 lines. I would appreciate any suggested reading or tips on cracking this!
This is a pure Java response, rather than specifically a Selenium response.
You want to partition the data. A crude but effective partitioner can be made by reading a row from the CSV file and putting it in a Queue. Afterwards, run as many threads as you can profitably use to simply pull the next entry off of the queue and process it.
If you want to do 5 (or more) threads at the same time, you would need to start 5 instances of WebDriver as it is not thread safe. As for updating the CSV, you would need to synchronize writes to that for each thread to prevent corruption to the file itself, or you could batch up updates at some threshold and write several lines at once.
See: Can Selenium use multi threading in one browser?
Update:
How about this? It ensures the web driver is not re-used between threads.
CSVReader reader = new CSVReader(new FileReader(FILE_PATH));
// number to do at same time
int concurrencyCount = 5;
ExecutorService executorService = Executors.newFixedThreadPool(concurrencyCount);
CompletionService<Boolean> completionService = new ExecutorCompletionService<Boolean>(executorService);
String[] nextLine;
// ensure we use a distinct WebDriver instance per thread
final LinkedBlockingQueue<WebDriver> webDrivers = new LinkedBlockingQueue<WebDriver>();
for (int i = 0; i < concurrencyCount; i++) {
    webDrivers.offer(new ChromeDriver());
}
int count = 0;
while ((nextLine = reader.readNext()) != null) {
    final String[] line = nextLine;
    completionService.submit(new Callable<Boolean>() {
        public Boolean call() {
            try {
                // take a webdriver from the queue to use
                final WebDriver driver = webDrivers.take();
                driver.get("http://google.com/");
                driver.findElement(By.name("q")).click();
                driver.findElement(By.name("q")).clear();
                driver.findElement(By.name("q")).sendKeys(line[0] + " " + line[1]);
                System.out.println(line[1]);
                line[2] = driver.findElement(By.xpath(XPATH_TO_1ST)).getText();
                // put webdriver back on the queue
                webDrivers.offer(driver);
                return true;
            } catch (InterruptedException e) {
                e.printStackTrace();
                return false;
            }
        }
    });
    count++;
}
boolean errors = false;
while (count-- > 0) {
    Future<Boolean> resultFuture = completionService.take();
    try {
        Boolean result = resultFuture.get();
    } catch (Exception e) {
        e.printStackTrace();
        errors = true;
    }
}
System.out.println("done, errors=" + errors);
for (WebDriver webDriver : webDrivers) {
    webDriver.close();
}
executorService.shutdown();
You can create a Callable for each row and hand it to the ExecutorService. It takes care of executing the tasks and manages the worker threads for you. Carefully choose the thread pool size for optimal execution time.
More information about choosing a thread pool size can be found here.
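As a rule of thumb (a general heuristic, not from the linked material): size the pool to the core count for CPU-bound work, and scale it up by the wait-to-compute ratio for IO-heavy work such as waiting on a browser:

// CPU-bound work: one thread per core is a good baseline
int poolSize = Runtime.getRuntime().availableProcessors();
// IO-heavy work: poolSize = cores * (1 + waitTime / computeTime)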

One Producer ten consumers file-processing with Executors.newSingleThreadExecutor()

I have a LinkedBlockingQueue with an arbitrarily picked capacity of 10, and an input file with 1000 lines. I have one ExecutorService-type variable in the main method of the service class that, to my knowledge, first runs (via Executors.newSingleThreadExecutor()) a single thread calling buffer.readLine() until the file line == null, and then runs (within a loop, again via Executors.newSingleThreadExecutor()) ten threads that process lines and write them to output files, until !queue.take().equals("Stop"). However, after writing some lines to files, when I am in debug mode, I see that the queue eventually reaches its maximum capacity (10), and the processing threads do not execute queue.take(). All threads are in the running state, but the process halts after queue.put(). What would cause this problem, and is it solvable using some combination of thread pooling, or multiple ExecutorService handler variables instead of a single variable?
Outline for current state of main method in service:
// app settings to get values for keys within a properties file
AppSettings appSettings = new AppSettings();
BlockingQueue<String> queue = new LinkedBlockingQueue<String>(10);
maxProdThreads = 1;
maxConsThreads = 10;
ExecutorService execSvc = null;
for (int i = 0; i < maxProdThreads; i++) {
    execSvc = Executors.newSingleThreadExecutor();
    execSvc.submit(new ReadJSONMessage(appSettings, queue));
}
for (int i = 0; i < maxConsThreads; i++) {
    execSvc = Executors.newSingleThreadExecutor();
    execSvc.submit(new ProcessJSONMessage(appSettings, queue));
}
Reading method code:
buffer = new BufferedReader(new FileReader(inputFilePath));
while ((line = buffer.readLine()) != null) {
    line = line.trim();
    queue.put(line);
}
Processing and Writing code:
while (!(line = queue.take()).equals("Stop")) {
    if (line.length() > 10) {
        try {
            if (processMessage(line, outputFilePath) == true) {
                ++count;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public boolean processMessage(String line, String outputFilePath) {
    CustomObject cO = new CustomObject();
    cO.setText(line);
    writeToFile1(cO, ...);
    writeToFile2(cO, ...);
}

public void writeOutputAToFile(CustomObject cO, ...) {
    synchronized (cO) {
        ...
        org.apache.commons.io.FileUtils.writeStringToFile(...)
    }
}

public void writeOutputBToFile(CustomObject cO, ...) {
    synchronized (cO) {
        ...
        org.apache.commons.io.FileUtils.writeStringToFile(...)
    }
}
In the processing and writing code, ensure that all resources are closed properly. If a resource is not closed, its thread can keep running, and the ExecutorService cannot find an idle thread.
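Separately, one thing worth checking in the code as posted (my observation, not part of the answer above): the consumers loop until they take the literal "Stop", but the reading method never puts it on the queue, so every consumer eventually blocks forever in queue.take(). A minimal sketch of the fix is one sentinel per consumer after the read loop:

// at the end of the reading method, after the while loop
for (int i = 0; i < maxConsThreads; i++) {
    queue.put("Stop"); // each consumer takes exactly one sentinel and exits
}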

Increasing Disk Read Throughput By Concurrency

I am trying to read a log file and parse it; the parsing consumes only CPU. I have a server that reads a huge text file at 230 MB/second, just reading the text file, not parsing. When I try to parse the text file with a single thread, I can parse around 50-70 MB/second.
I want to increase my throughput by doing that job concurrently. With this code, I reached 130 MB/second. At the peak, I saw 190 MB/second. I tried BlockingQueue, Semaphore, ExecutorService, etc. Is there any advice you can give me to reach 200 MB/second throughput?
public static void fileReaderTestUsingSemaphore(String[] args) throws Exception {
    CustomFileReader reader = new CustomFileReader(args[0]);
    final int concurrency = Integer.parseInt(args[1]);
    ExecutorService executorService = Executors.newFixedThreadPool(concurrency);
    Semaphore semaphore = new Semaphore(concurrency, true);
    System.out.println("Concurrency in Semaphore: " + concurrency);
    String line;
    while ((line = reader.getLine()) != null) {
        semaphore.acquire();
        try {
            final String p = line;
            executorService.execute(new Runnable() {
                @Override
                public void run() {
                    reader.splitNginxLinewithIntern(p); // the method which parses the string and converts it to a class.
                    semaphore.release();
                }
            });
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            semaphore.release();
        }
    }
    executorService.shutdown();
    executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.MINUTES);
    System.out.println("ReadByteCount: " + reader.getReadByteCount());
}
You might benefit from the Files.lines() method and the Stream paradigm introduced in Java 8. It will use the system's common fork/join pool. Try this pattern:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LineCounter {

    public static void main(String[] args) throws IOException {
        Files.lines(Paths.get("/your/file/here"))
             .parallel()
             .forEach(LineCounter::processLine);
    }

    private static void processLine(String line) {
        // do the processing
    }
}
Assuming that you don't care about order of lines:
final String MARKER = new String("");
BlockingQueue<String> q = new LinkedBlockingDeque<>(1024);
for (int i = 0; i < concurrency; i++) {
    executorService.execute(() -> {
        for (;;) {
            try {
                String s = q.take();
                if (s == MARKER) {
                    q.put(s); // pass the marker on so the other workers see it too
                    return;
                }
                reader.splitNginxLinewithIntern(s);
            } catch (InterruptedException e) {
                return;
            }
        }
    });
}
String line;
while ((line = reader.readLine()) != null) {
    q.put(line);
}
q.put(MARKER);
executorService.shutdown(); // without this, awaitTermination would always run out its full timeout
executorService.awaitTermination(10, TimeUnit.MINUTES);
This starts a number of threads that each runs a specific task; that task is to read from the queue and run the split method. The reader just feeds the queue, notifies when it's complete and waits for termination.
If you were to use RxJava2 and rxjava2-extras, that would simply be:
Strings.from(reader)
       .flatMap(str -> Flowable
           .just(str)
           .observeOn(Schedulers.computation())
           .doOnNext(reader::splitNginxLinewithIntern)
       )
       .blockingSubscribe();
You need to go multi-thread, and you need to have the reader thread delegate the parsing to worker threads, that's clear. The point is how to do this delegating with as little overhead as possible.
@Tassos provided code that looks like a solid improvement.
One more thing you can try is to change the delegation granularity: not delegating every single line individually, but building chunks of e.g. 100 lines, thus reducing the delegating/synchronizing overhead by a factor of 100 (but then needing a String[] array or similar, which shouldn't hurt too much); a sketch follows.
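A rough sketch of that chunked hand-off, reusing reader.getLine() and splitNginxLinewithIntern() from the question (the CHUNK constant is an arbitrary illustration):

final int CHUNK = 100;
List<String> buf = new ArrayList<>(CHUNK);
String line;
while ((line = reader.getLine()) != null) {
    buf.add(line);
    if (buf.size() == CHUNK) {
        final List<String> chunk = buf; // hand the whole chunk to a single task
        executorService.execute(() -> {
            for (String s : chunk) {
                reader.splitNginxLinewithIntern(s);
            }
        });
        buf = new ArrayList<>(CHUNK);
    }
}
// remember to submit the final, partially filled chunk as well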

Using a threadpool to add in to a list

I am trying to read a file and add each line to a list.
Simple drawing explaining the goal
Main class -
public class SimpleTreadPoolMain {

    public static void main(String[] args) {
        ReadFile reader = new ReadFile();
        File file = new File("C:\\myFile.csv");
        try {
            reader.readFile(file);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Reader class -
public class ReadFile {
    ExecutorService executor = Executors.newFixedThreadPool(5); // creating a pool of 5 threads
    List<String> list = new ArrayList<>();

    void readFile(File file) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = br.readLine()) != "") {
                Runnable saver = new SaveToList(line, list);
                executor.execute(saver); // calling execute method of ExecutorService
            }
        }
        executor.shutdown();
        while (!executor.isTerminated()) { }
    }
}
Saver class -
public class SaveToList<E> implements Runnable {
    List<E> myList;
    E line;

    public SaveToList(E line, List<E> list) {
        this.line = line;
        this.myList = list;
    }

    public void run() {
        // modify the line
        myList.add(line);
    }
}
I tried to have many saver threads adding to the same list, instead of one saver adding to the list one by one. I want to use threads because I need to modify the data before adding it to the list, and I assume the modification takes some time, so parallelizing that part should reduce the overall time, right?
But this doesn't work. I am unable to return a global list that includes all the values from the file. I want only one global list of values from the file, so the code definitely has to change. If someone can guide me, it would be greatly appreciated.
Even though adding one by one in a single thread would work, wouldn't using a thread pool make it faster?
Using multiple threads won't speed anything up here.
You are:
Reading a line from a file, serially.
Creating a runnable and submitting it into a thread pool
The runnable then adds things into a list
Given that you're using an ArrayList, you need to synchronize access to it, because you're mutating it from multiple threads. So, you are adding things into the list serially.
But even without the synchronization, the time taken for the IO will far exceed the time taken to add the string into the list. And adding in multithreading is just going to slow it down more, because it's doing work to construct the runnable, submit it to the thread pool, schedule it, etc.
It's simpler just to miss out the whole middle step:
Read a line from a file, serially.
Add the line to the list, serially.
So:
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
    String line;
    while ((line = br.readLine()) != null) { // readLine() returns null at end of file
        list.add(line);
    }
}
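For completeness (my addition, not part of this answer): if the list really were mutated from multiple threads as in the original version, it would at minimum need a thread-safe wrapper:

// minimal thread-safe alternative to the plain ArrayList
List<String> list = Collections.synchronizedList(new ArrayList<>());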
You should in fact test whether multithreading is worth it in your application: compare how much time it takes to read the whole file without doing any processing on the rows against the time it takes to process the whole file serially.
If your per-row processing is not too complex, my guess is it is not worth using multithreading.
If you find that processing takes much longer, you can think about using one or more threads for the computation.
If so, you could use Futures to process batches of input strings, or you could use a thread-safe Queue to send strings to another process.
private static final int BATCH_SIZE = 1000;

public static void main(String[] args) throws IOException {
    BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream("big_file.csv"), "utf-8"));
    ExecutorService pool = Executors.newFixedThreadPool(8);
    String line;
    List<String> batch = new ArrayList<>(BATCH_SIZE);
    List<Future<List>> results = new LinkedList<>();
    while ((line = reader.readLine()) != null) {
        batch.add(line);
        if (batch.size() >= BATCH_SIZE) {
            results.add(noWaitExec(batch, pool));
            batch = new ArrayList<>(BATCH_SIZE);
        }
    }
    results.add(noWaitExec(batch, pool)); // last, possibly partial batch
    for (Future<List> future : results) {
        try {
            Object object = future.get();
            // Use your results here
        } catch (Exception e) {
            // Manage this....
        }
    }
}

private static Future<List> noWaitExec(final List<String> batch, ExecutorService pool) {
    return pool.submit(new Callable<List>() {
        public List call() throws Exception {
            List result = new ArrayList<>(batch.size());
            for (String string : batch) {
                result.add(process(string));
            }
            return result;
        }
    });
}

private static Object process(String string) {
    // Your process ....
    return null;
}
There are many other possible solutions (Observables, parallel Streams, Pipes, CompletableFutures... you name it), but I still think most of the time is spent reading the file; just using a BufferedInputStream with a big enough buffer to read the file could cut your times more than parallel computing would.
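For instance, a minimal sketch of that (the 1 MiB size is an arbitrary illustration; the default buffer is 8 KiB):

// enlarge the buffer that backs the reader from the default 8 KiB to 1 MiB
BufferedReader reader = new BufferedReader(
        new InputStreamReader(
                new BufferedInputStream(new FileInputStream("big_file.csv"), 1 << 20),
                StandardCharsets.UTF_8));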
