I would like to create 5 million CSV files. I have been waiting for almost 3 hours, but the program is still running. Can somebody give me some advice on how to speed up the file generation?
After these 5 million files are generated, I have to upload them to an S3 bucket.
It would be even better if someone knows how to generate these files on AWS directly; then we could move the files into the S3 bucket without worrying about network speed. (I have just started learning AWS, and there is a lot to know.)
The following is my code.
public class ParallelCsvGenerate implements Runnable {
private static AtomicLong baseID = new AtomicLong(8160123456L);
private static ThreadLocalRandom random = ThreadLocalRandom.current();
private static ThreadLocalRandom random2 = ThreadLocalRandom.current();
private static String filePath = "C:\\5millionfiles\\";
private static List<String> headList = null;
private static String csvHeader = null;
public ParallelCsvGenerate() {
headList = generateHeadList();
csvHeader = String.join(",", headList);
}
@Override
public void run() {
for(int i = 0; i < 1000000; i++) {
generateCSV();
}
}
private void generateCSV() {
StringBuilder builder = new StringBuilder();
builder.append(csvHeader).append(System.lineSeparator());
for (int i = 0; i < headList.size(); i++) {
if(i < headList.size() - 1) {
builder.append(i % 2 == 0 ? generateRandomInteger() : generateRandomStr()).append(",");
} else {
builder.append(i % 2 == 0 ? generateRandomInteger() : generateRandomStr());
}
}
String fileName = String.valueOf(baseID.addAndGet(1));
File csvFile = new File(filePath + fileName + ".csv");
FileWriter fileWriter = null;
try {
fileWriter = new FileWriter(csvFile);
fileWriter.write(builder.toString());
fileWriter.flush();
} catch (Exception e) {
System.err.println(e);
} finally {
try {
if(fileWriter != null) {
fileWriter.close();
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
private static List<String> generateHeadList() {
List<String> headList = new ArrayList<>(20);
String baseFieldName = "Field";
for(int i = 1; i <= 20; i++) {
headList.add(baseFieldName + i);
}
return headList;
}
/**
* generate a number in the range of 0 - 50000
* @return
*/
private Integer generateRandomInteger() {
return random.nextInt(0,50000);
}
/**
* generate a string of length 5 - 8
* @return
*/
private String generateRandomStr() {
int strLength = random2.nextInt(5, 8);
String str="abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
int length = str.length();
StringBuilder builder = new StringBuilder();
for (int i = 0; i < strLength; i++) {
builder.append(str.charAt(random.nextInt(length)));
}
return builder.toString();
}
Main
ParallelCsvGenerate generate = new ParallelCsvGenerate();
Thread a = new Thread(generate, "A");
Thread b = new Thread(generate, "B");
Thread c = new Thread(generate, "C");
Thread d = new Thread(generate, "D");
Thread e = new Thread(generate, "E");
a.run();
b.run();
c.run();
d.run();
e.run();
Thanks for your advice, guys. I refactored the code and generated 3.8 million files in 2.8 hours, which is much better.
Refactored code:
public class ParallelCsvGenerate implements Callable<Integer> {
private static String filePath = "C:\\5millionfiles\\";
private static String[] header = new String[]{
"FIELD1","FIELD2","FIELD3","FIELD4","FIELD5",
"FIELD6","FIELD7","FIELD8","FIELD9","FIELD10",
"FIELD11","FIELD12","FIELD13","FIELD14","FIELD15",
"FIELD16","FIELD17","FIELD18","FIELD19","FIELD20",
};
private String fileName;
public ParallelCsvGenerate(String fileName) {
this.fileName = fileName;
}
@Override
public Integer call() throws Exception {
try {
generateCSV();
} catch (IOException e) {
e.printStackTrace();
}
return 0;
}
private void generateCSV() throws IOException {
CSVWriter writer = new CSVWriter(new FileWriter(filePath + fileName + ".csv"), CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER);
String[] content = new String[header.length];
// even positions get a random integer, odd positions a random string
for (int i = 0; i < content.length; i += 2) {
    content[i] = RandomGenerator.generateRandomInteger();
    content[i + 1] = RandomGenerator.generateRandomStr();
}
writer.writeNext(header);
writer.writeNext(content);
writer.close();
}
}
Main
public static void main(String[] args) {
System.out.println("Start generate");
long start = System.currentTimeMillis();
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(8, 8,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
List<ParallelCsvGenerate> taskList = new ArrayList<>(3800000);
for(int i = 0; i < 3800000; i++) {
taskList.add(new ParallelCsvGenerate(i+""));
}
try {
List<Future<Integer>> futures = threadPoolExecutor.invokeAll(taskList);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Success");
long end = System.currentTimeMillis();
System.out.println("Using time: " + (end-start));
}
You could write directly into the file (without building the whole file in one StringBuilder first). I think this is the biggest time and memory bottleneck here: builder.toString().
You could generate each file in parallel.
(Little tweaks:) Omit the ifs inside the loop.
The if (i < headList.size() - 1) is not needed when you use a more clever loop plus one extra iteration.
The i % 2 == 0 check can be eliminated by a better iteration (i += 2) and a bit more work inside the loop (index i produces the int, index i + 1 the string).
If applicable, prefer append(char) to append(String): append(',') is better than append(",")! A sketch combining these tweaks follows below.
...
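A minimal sketch combining these tweaks (my interpretation, reusing baseID, filePath, csvHeader, headList and generateRandomStr() from the question; treat it as untested):
private void generateCSV() {
    String fileName = String.valueOf(baseID.incrementAndGet());
    // write straight to a buffered stream: no big StringBuilder, no toString()
    try (BufferedWriter writer = new BufferedWriter(new FileWriter(filePath + fileName + ".csv"))) {
        writer.write(csvHeader);
        writer.newLine();
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        // first int/string pair is written outside the loop, so no if is needed inside
        writer.write(Integer.toString(rnd.nextInt(0, 50000)));
        writer.write(',');
        writer.write(generateRandomStr());
        for (int i = 2; i < headList.size(); i += 2) {
            writer.write(',');
            writer.write(Integer.toString(rnd.nextInt(0, 50000))); // even column: int
            writer.write(',');
            writer.write(generateRandomStr());                     // odd column: string
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}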
You can use the Fork/Join framework (Java 7 and above) to run your process in parallel and make use of the multiple cores of your CPU.
Here is an example for you.
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;
import java.util.concurrent.RecursiveTask;
import java.util.stream.LongStream;
public class ForkJoinAdd extends RecursiveTask<Long> {
private final long[] numbers;
private final int start;
private final int end;
public static final long threshold = 10_000;
public ForkJoinAdd(long[] numbers) {
this(numbers, 0, numbers.length);
}
private ForkJoinAdd(long[] numbers, int start, int end) {
this.numbers = numbers;
this.start = start;
this.end = end;
}
@Override
protected Long compute() {
int length = end - start;
if (length <= threshold) {
return add();
}
ForkJoinAdd firstTask = new ForkJoinAdd(numbers, start, start + length / 2);
firstTask.fork(); //start asynchronously
ForkJoinAdd secondTask = new ForkJoinAdd(numbers, start + length / 2, end);
Long secondTaskResult = secondTask.compute();
Long firstTaskResult = firstTask.join();
return firstTaskResult + secondTaskResult;
}
private long add() {
long result = 0;
for (int i = start; i < end; i++) {
result += numbers[i];
}
return result;
}
public static long startForkJoinSum(long n) {
long[] numbers = LongStream.rangeClosed(1, n).toArray();
ForkJoinTask<Long> task = new ForkJoinAdd(numbers);
return new ForkJoinPool().invoke(task);
}
}
Use this example as a starting point.
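For instance, a sketch of the same pattern adapted to file generation might look like the following (my adaptation, not from the original answer: it uses RecursiveAction since nothing is returned, and a hypothetical generateCSV(int) helper that writes one file):
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ForkJoinCsvGenerate extends RecursiveAction {
    private static final int THRESHOLD = 10_000; // files per leaf task
    private final int start; // first file index, inclusive
    private final int end;   // last file index, exclusive

    public ForkJoinCsvGenerate(int start, int end) {
        this.start = start;
        this.end = end;
    }

    @Override
    protected void compute() {
        if (end - start <= THRESHOLD) {
            for (int i = start; i < end; i++) {
                // generateCSV(i); // hypothetical helper: writes the i-th CSV file
            }
            return;
        }
        // split the range in half; the pool's worker threads steal the halves
        int mid = start + (end - start) / 2;
        invokeAll(new ForkJoinCsvGenerate(start, mid),
                  new ForkJoinCsvGenerate(mid, end));
    }

    public static void main(String[] args) {
        new ForkJoinPool().invoke(new ForkJoinCsvGenerate(0, 5_000_000));
    }
}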
If you want to read more about it, Guide to the Fork/Join Framework in Java | Baeldung and Fork/Join (The Java™ Tutorials) can help you to better understand and better design your app.
Good luck.
Remove the for(int i = 0; i < 1000000; i++) loop from the run method (leave a single generateCSV() call).
Create 5 million ParallelCsvGenerate objects.
Submit them to a ThreadPoolExecutor.
Converted main:
public static void main(String[] args) {
ThreadPoolExecutor ex = (ThreadPoolExecutor) Executors.newFixedThreadPool(8);
for(int i = 0; i < 5000000; i++) {
ParallelCsvGenerate generate = new ParallelCsvGenerate();
ex.submit(generate);
}
ex.shutdown(); // stops accepting new tasks; already queued tasks still run to completion
}
It takes roughly 5 minutes to complete on my laptop (4 physical cores with hyperthreading, SSD drive).
EDIT:
I've replaced FileWriter with AsynchronousFileChannel using the following code:
Path file = Paths.get(filePath + fileName + ".csv");
try(AsynchronousFileChannel asyncFile = AsynchronousFileChannel.open(file,
StandardOpenOption.WRITE,
StandardOpenOption.CREATE)) {
asyncFile.write(ByteBuffer.wrap(builder.toString().getBytes()), 0); // note: the returned Future is not awaited here
} catch (IOException e) {
e.printStackTrace();
}
to achieve a 30% speedup.
I believe that the main bottleneck is the hard drive and filesystem itself. Not much more can be achieved here.
I have a series of mp4 files saved on the device that need to be merged together to make a single mp4 file.
video_p1.mp4 video_p2.mp4 video_p3.mp4 > video.mp4
The solutions I have researched such as the mp4parser framework use deprecated code.
The best solution I could find is using a MediaMuxer and MediaExtractor.
The code runs but my videos are not merged (only the content in video_p1.mp4 is displayed and it is in landscape orientation, not portrait).
Can anyone help me sort this out?
public static boolean concatenateFiles(File dst, File... sources) {
if ((sources == null) || (sources.length == 0)) {
return false;
}
boolean result;
MediaExtractor extractor = null;
MediaMuxer muxer = null;
try {
// Set up MediaMuxer for the destination.
muxer = new MediaMuxer(dst.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// Copy the samples from MediaExtractor to MediaMuxer.
boolean sawEOS = false;
//int bufferSize = MAX_SAMPLE_SIZE;
int bufferSize = 1 * 1024 * 1024;
int frameCount = 0;
int offset = 100;
ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
long timeOffsetUs = 0;
int dstTrackIndex = -1;
for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
int numberOfSamplesInSource = getNumberOfSamples(sources[fileIndex]);
// Set up MediaExtractor to read from the source.
extractor = new MediaExtractor();
extractor.setDataSource(sources[fileIndex].getPath());
// Set up the tracks.
SparseIntArray indexMap = new SparseIntArray(extractor.getTrackCount());
for (int i = 0; i < extractor.getTrackCount(); i++) {
extractor.selectTrack(i);
MediaFormat format = extractor.getTrackFormat(i);
if (dstTrackIndex < 0) {
dstTrackIndex = muxer.addTrack(format);
muxer.start();
}
indexMap.put(i, dstTrackIndex);
}
long lastPresentationTimeUs = 0;
int currentSample = 0;
while (!sawEOS) {
bufferInfo.offset = offset;
bufferInfo.size = extractor.readSampleData(dstBuf, offset);
if (bufferInfo.size < 0) {
sawEOS = true;
bufferInfo.size = 0;
timeOffsetUs += (lastPresentationTimeUs + 0);
}
else {
lastPresentationTimeUs = extractor.getSampleTime();
bufferInfo.presentationTimeUs = extractor.getSampleTime() + timeOffsetUs;
bufferInfo.flags = extractor.getSampleFlags();
int trackIndex = extractor.getSampleTrackIndex();
if ((currentSample < numberOfSamplesInSource) || (fileIndex == sources.length - 1)) {
muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
}
extractor.advance();
frameCount++;
currentSample++;
Log.d("tag2", "Frame (" + frameCount + ") " +
"PresentationTimeUs:" + bufferInfo.presentationTimeUs +
" Flags:" + bufferInfo.flags +
" TrackIndex:" + trackIndex +
" Size(KB) " + bufferInfo.size / 1024);
}
}
extractor.release();
extractor = null;
}
result = true;
}
catch (IOException e) {
result = false;
}
finally {
if (extractor != null) {
extractor.release();
}
if (muxer != null) {
muxer.stop();
muxer.release();
}
}
return result;
}
public static int getNumberOfSamples(File src) {
MediaExtractor extractor = new MediaExtractor();
int result;
try {
extractor.setDataSource(src.getPath());
extractor.selectTrack(0);
result = 0;
while (extractor.advance()) {
result ++;
}
}
catch(IOException e) {
result = -1;
}
finally {
extractor.release();
}
return result;
}
I'm using this library for muxing videos: ffmpeg-android-java
gradle dependency:
implementation 'com.writingminds:FFmpegAndroid:0.3.2'
Here's how I use it in my project to mux video and audio in Kotlin: VideoAudioMuxer
So basically it works like ffmpeg in a terminal, but you input your command to a method as an array of strings, along with a listener:
ffmpeg.execute(arrayOf("-i", videoPath, "-i", audioPath, "$targetPath.mp4"), object : ExecuteBinaryResponseHandler() {
You'll have to search for how to merge videos with ffmpeg and convert the command into an array of strings for the arguments you need.
You could probably do almost anything, since ffmpeg is a very powerful tool.
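For example, merging clips with ffmpeg's concat demuxer might look roughly like this (a sketch in Java against the same library; listFilePath and outputPath are my placeholders, and list.txt must contain lines like: file 'video_p1.mp4'):
// stream-copies the inputs named in list.txt into a single MP4 (no re-encode)
String[] cmd = {"-f", "concat", "-safe", "0", "-i", listFilePath, "-c", "copy", outputPath};
try {
    ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
        @Override
        public void onSuccess(String message) {
            // the merged file is now at outputPath
        }

        @Override
        public void onFailure(String message) {
            // inspect ffmpeg's log output in message to see what went wrong
        }
    });
} catch (FFmpegCommandAlreadyRunningException e) {
    // another ffmpeg command is still running
}
Note that the concat demuxer only works when all parts share the same codecs and parameters; otherwise the clips have to be re-encoded.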
I have a CSV file. After I overwrite one line with the Write method, every subsequent write is appended to the end of the file instead of replacing the specific line.
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;
using UnityEngine;
using System.Text;
using System.IO;
public class LoadQuestion : MonoBehaviour
{
int index;
string path;
FileStream file;
StreamReader reader;
StreamWriter writer;
public Text City;
public string[] allQuestion;
public string[] addedQuestion;
private void Start()
{
index = 0;
path = Application.dataPath + "/Files/Questions.csv";
allQuestion = File.ReadAllLines(path, Encoding.GetEncoding(1251));
file = new FileStream(path, FileMode.Open, FileAccess.ReadWrite);
writer = new StreamWriter(file, Encoding.GetEncoding(1251));
reader = new StreamReader(file, Encoding.GetEncoding(1251));
writer.AutoFlush = true;
List<string> _questions = new List<string>();
for (int i = 0; i < allQuestion.Length; i++)
{
char status = allQuestion[i][0];
if (status == '0')
{
_questions.Add(allQuestion[i]);
}
}
addedQuestion = _questions.ToArray();
City.text = ParseToCity(addedQuestion[0]);
}
private string ParseToCity(string current)
{
string _city = "";
string[] data = current.Split(';');
_city = data[2];
return _city;
}
private void OnApplicationQuit()
{
writer.Close();
reader.Close();
file.Close();
}
public void IKnow()
{
string[] quest = addedQuestion[index].Split(';');
int indexFromFile = int.Parse(quest[1]);
string questBeforeAnsver = "";
for (int i = 0; i < quest.Length; i++)
{
if (i == 0)
{
questBeforeAnsver += "1";
}
else
{
questBeforeAnsver += ";" + quest[i];
}
}
Debug.Log("indexFromFile : " + indexFromFile);
for (int i = 0; i < allQuestion.Length; i++)
{
if (i == indexFromFile)
{
writer.Write(questBeforeAnsver);
break;
}
else
{
reader.ReadLine();
}
}
reader.DiscardBufferedData();
reader.BaseStream.Seek(0, SeekOrigin.Begin);
if (index < addedQuestion.Length - 1)
{
index++;
}
City.text = ParseToCity(addedQuestion[index]);
}
}
The lines in the file look like this:
0;0;Africa
0;1;London
0;2;Paris
The bottom line is that this is a game, and only questions whose status is 0 (unanswered) are loaded from the file. If during the game the user clicks that he knows the answer, the corresponding line in the file is overwritten so that its status is no longer 0 but 1, and when the game is replayed this question will not load.
It turns out that the first question is overwritten successfully, but all subsequent ones are simply appended at the end of the file:
1;0;Africa
0;1;London
0;2;Paris1;1;London1;2;Paris
What's wrong?
The video shows everything in detail.
I am trying to split a text file using multiple threads. The file is 1 GB and I am reading it character by character. The execution time is 24 minutes 54 seconds. Instead of reading the file by char, is there a better way that will reduce the execution time?
I'm having a hard time figuring out an approach that will reduce the execution time. Please also suggest if there is any other, better way to split a file with multiple threads. I am very new to Java.
Any help will be appreciated. :)
public static void main(String[] args) throws Exception {
long startTime = System.currentTimeMillis();
RandomAccessFile raf = new RandomAccessFile("D:\\sample\\file.txt", "r");
long numSplits = 10;
long sourceSize = raf.length();
System.out.println("file length:" + sourceSize);
long bytesPerSplit = sourceSize / numSplits;
long remainingBytes = sourceSize % numSplits;
int maxReadBufferSize = 9 * 1024;
List<String> filePositionList = new ArrayList<String>();
long startPosition = 0;
long endPosition = bytesPerSplit;
for (int i = 0; i < numSplits; i++) {
raf.seek(endPosition);
String strData = raf.readLine();
if (strData != null) {
endPosition = endPosition + strData.length();
}
String str = startPosition + "|" + endPosition;
if (sourceSize > endPosition) {
startPosition = endPosition;
endPosition = startPosition + bytesPerSplit;
} else {
break;
}
filePositionList.add(str);
}
for (int i = 0; i < filePositionList.size(); i++) {
String str = filePositionList.get(i);
String[] strArr = str.split("\\|");
String strStartPosition = strArr[0];
String strEndPosition = strArr[1];
long startPositionFile = Long.parseLong(strStartPosition);
long endPositionFile = Long.parseLong(strEndPosition);
MultithreadedSplit objMultithreadedSplit = new MultithreadedSplit(startPositionFile, endPositionFile);
objMultithreadedSplit.start();
}
long endTime = System.currentTimeMillis();
System.out.println("It took " + (endTime - startTime) + " milliseconds");
}
}
public class MultithreadedSplit extends Thread {
public static String filePath = "D:\\tenlakh\\file.txt";
private int localCounter = 0;
private long start;
private long end;
public static String outPath;
List<String> result = new ArrayList<String>();
public MultithreadedSplit(long startPos, long endPos) {
start = startPos;
end = endPos;
}
@Override
public void run() {
try {
String threadName = Thread.currentThread().getName();
long currentTime = System.currentTimeMillis();
RandomAccessFile file = new RandomAccessFile("D:\\sample\\file.txt", "r");
String outFile = "out_" + threadName + ".txt";
System.out.println("Thread Reading started for start:" + start + ";End:" + end+";threadname:"+threadName);
FileOutputStream out2 = new FileOutputStream("D:\\sample\\" + outFile);
file.seek(start);
int nRecordCount = 0;
char c = (char) file.read();
StringBuilder objBuilder = new StringBuilder();
int nCounter = 1;
while (c != -1) {
objBuilder.append(c);
// System.out.println("char-->" + c);
if (c == '\n') {
nRecordCount++;
out2.write(objBuilder.toString().getBytes());
objBuilder.delete(0, objBuilder.length());
//System.out.println("--->" + nRecordCount);
// break;
}
c = (char) file.read();
nCounter++;
if (nCounter > end) {
break;
}
}
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
The fastest way would be to map the file into memory segment by segment (mapping a large file as a whole may cause undesired side effects). It skips a few relatively expensive copy operations: the operating system loads the file into RAM, and the JRE exposes it to your application as a view into an off-heap memory area in the form of a ByteBuffer. That would usually let you squeeze out the last 2x/3x of performance.
The memory-mapped way requires quite a bit of helper code (see the fragment at the bottom), so it is not always the best tactical choice. If your input is line-based and you just need reasonable performance (what you have now probably is not), then just do something like:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
...
Files.lines(Paths.get("/path/to/the/file"), StandardCharsets.ISO_8859_1)
// .parallel() // parallel processing is still possible
.forEach(line -> { /* your code goes here */ });
In contrast, a working example of code that processes the file via memory mapping would look something like the snippet below. In the case of fixed-size records (when segments can be aligned precisely to record boundaries), subsequent segments can be processed in parallel.
static ByteBuffer mapFileSegment(FileChannel fileChannel, long fileSize, long regionOffset, long segmentSize) throws IOException {
long regionSize = Math.min(segmentSize, fileSize - regionOffset);
// small last region prevention
final long remainingSize = fileSize - (regionOffset + regionSize);
if (remainingSize < segmentSize / 2) {
regionSize += remainingSize;
}
return fileChannel.map(FileChannel.MapMode.READ_ONLY, regionOffset, regionSize);
}
...
final ToIntFunction<ByteBuffer> consumer = ...
try (FileChannel fileChannel = FileChannel.open(Paths.get("/path/to/file"), StandardOpenOption.READ)) {
final long fileSize = fileChannel.size();
long regionOffset = 0;
while (regionOffset < fileSize) {
final ByteBuffer regionBuffer = mapFileSegment(fileChannel, fileSize, regionOffset, segmentSize);
while (regionBuffer.hasRemaining()) {
final int usedBytes = consumer.applyAsInt(regionBuffer);
if (usedBytes == 0)
break;
}
regionOffset += regionBuffer.position();
}
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
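For illustration, a hypothetical consumer (the fragment above leaves it unspecified) that scans each mapped region and reacts to line boundaries could look like this; it needs java.util.function.ToIntFunction:
// scans the region byte by byte and reports how many bytes it consumed
// (returning 0 would stop the inner loop above)
final ToIntFunction<ByteBuffer> consumer = buffer -> {
    int consumed = 0;
    while (buffer.hasRemaining()) {
        byte b = buffer.get();
        consumed++;
        if (b == '\n') {
            // a complete line ended here; a real consumer would hand it off
        }
    }
    return consumed;
};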
public static void getTestData() {
try {
filename = "InventoryData_" + form_id;
PrintWriter writer = new PrintWriter("/Users/pnroy/Documents/" +filename + ".txt");
pids = new ArrayList<ProductId>();
GetData productList = new GetData();
System.out.println("Getting productId");
pids = productList.GetProductIds(form_id);
int perThreadSize = pids.size() / numberOfCrawlers;
ArrayList<ArrayList<ProductId>> perThreadData = new
ArrayList<ArrayList<ProductId>>(numberOfCrawlers);
for (int i = 1; i <= numberOfCrawlers; i++) {
perThreadData.add(new ArrayList<ProductId>(perThreadSize));
for (int j = 0; j < perThreadSize; j++) {
ProductId ids = new ProductId();
ids.setEbProductID((pids.get(((i - 1) * perThreadSize + j))).getEbProductID());
ids.setECProductID((pids.get(((i - 1) * perThreadSize + j))).getECProductID());
perThreadData.get(i - 1).add(ids);
}
}
BlockingQueue<String> q = new LinkedBlockingQueue<String>();
Consumer c1 = new Consumer(q);
Thread[] thread = new Thread[numberOfCrawlers];
for (int k = 0; k <= numberOfCrawlers; k++) {
// System.out.println(k);
GetCombinedData data = new GetCombinedData();
thread[k] = new Thread(data);
thread[k].setDaemon(true);
data.setVal(perThreadData.get(k), filename, q);
thread[k].start();
// writer.println(data.getResult());
}
new Thread(c1).start();
for (int l = 0; l <= numberOfCrawlers; l++) {
thread[l].join();
}
} catch (Exception e) {
}
}
Here the number of crawlers is the number of threads.
The run method of the GetCombinedData class has the following code. The pids list is passed as perThreadData.get(k-1) from the main method.
The CassController class queries an API, and I get a string result back after some processing.
public void run(){
try{
for(int i=0;i<pids.size();i++){
//System.out.println("before cassini");
CassController cass = new CassController();
String result=cass.getPaginationDetails(pids.get(i));
queue.put(result);
// System.out.println(result);
Thread.sleep(1000);
}
writer.close();
}catch(Exception ex){
}
}
Consumer.java has the following code:
public class Consumer implements Runnable{
private final BlockingQueue queue;
Consumer(BlockingQueue q) { queue = q; }
public void run(){
try {
while (queue.size()>0)
{
consume(queue.take());
}
} catch (InterruptedException ex)
{
}
}
void consume(Object x) {
try{
PrintWriter writer = new PrintWriter(new FileWriter("/Users/pnroy/Documents/Inventory", true));
writer.println(x.toString());
writer.close();
}catch(IOException ex){
}
}
So if I set the number of crawlers to 10 and there are 500 records, each thread will process 50 records. I need to write the results into a file, and I am confused about how to achieve this, since there is an array of threads and each thread is doing a bunch of operations.
I tried using a blocking queue, but that prints repetitive results. I am new to multithreading and not sure how to handle this case.
Can you please suggest an approach?
With the introduction of many useful high-level concurrency classes, it is now recommended not to use the Thread class directly anymore. Even the BlockingQueue class is rather low-level.
Instead, you have a nice application for the CompletionService, which builds upon the ExecutorService. The below example shows how to use it.
You want to replace the code in PartialResultTask (that's where the main processing happens) and System.out.println (that's where you probably want to write your result to a file).
import java.util.concurrent.*;

public class ParallelProcessing {
public static void main(String[] args) {
ExecutorService executionService = Executors.newFixedThreadPool(10);
CompletionService<String> completionService = new ExecutorCompletionService<>(executionService);
// submit tasks
for (int i = 0; i < 500; i++) {
completionService.submit(new PartialResultTask(i));
}
// collect result
for (int i = 0; i < 500; i++) {
String result = getNextResult(completionService);
if (result != null)
System.out.println(result);
}
executionService.shutdown();
}
private static String getNextResult(CompletionService<String> completionService) {
Future<String> result = null;
while (result == null) {
try {
result = completionService.take();
} catch (InterruptedException e) {
// ignore and retry
}
}
try {
return result.get();
} catch (ExecutionException e) {
e.printStackTrace();
return null;
} catch (InterruptedException e) {
e.printStackTrace();
return null;
}
}
static class PartialResultTask implements Callable<String> {
private int n;
public PartialResultTask(int n) {
this.n = n;
}
@Override
public String call() {
return String.format("Partial result %d", n);
}
}
}
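If you want the results in a file rather than on the console, the result-collecting loop is the single place to change. A minimal sketch (the path parameter is an assumption, and it needs java.io.FileWriter, java.io.IOException and java.io.PrintWriter imports) that could replace the System.out.println loop:
// runs on the one collecting thread, so the writer needs no synchronization
private static void collectToFile(CompletionService<String> completionService,
                                  int taskCount, String path) throws IOException {
    try (PrintWriter writer = new PrintWriter(new FileWriter(path))) {
        for (int i = 0; i < taskCount; i++) {
            String result = getNextResult(completionService);
            if (result != null) {
                writer.println(result);
            }
        }
    }
}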