Split files in a directory uniformly across threads in Java

I have a variable number of files in a directory and several Java threads to process them. The number of threads depends on the current processor:
int numberOfThreads=Runtime.getRuntime().availableProcessors();
File[] inputFilesArr=currentDirectory.listFiles();
How do I split the files uniformly across threads? If I do simple math like
int filesPerThread = inputFilesArr.length / numberOfThreads;
then I might end up missing some files if inputFilesArr.length is not exactly divisible by numberOfThreads. What is an efficient way of doing this so that the partitioning and load across all the threads are uniform?

Here is another take on this problem:
Use Java's ThreadPoolExecutor (via ExecutorService).
It works on the thread-pool principle: instead of creating a new thread every time you need one, a specified number of threads is created at the start and reused from the pool.
The idea is to treat the processing of each file in the directory as an independent task.
You then submit all tasks to the executor in a loop, which guarantees that no file is left out.
The executor adds these tasks to an internal queue and, at the same time, hands them to threads from the pool until all threads are busy.
Remaining tasks wait until a thread becomes available, so configuring the pool size is vital here: you can have as many threads as there are files, or fewer.
This assumes that the files are independent of each other and that no particular group of files needs to be processed by a single thread.
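Below is a minimal sketch of that approach; the processFile method, the directory name, and the use of a fixed-size pool are assumptions for illustration, not part of the original answer:

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DirectoryProcessor {
    public static void main(String[] args) throws InterruptedException {
        File[] inputFilesArr = new File("someDirectory").listFiles(); // assume the directory exists
        int numberOfThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService executor = Executors.newFixedThreadPool(numberOfThreads);

        // One task per file; the executor distributes them across the pool.
        for (File file : inputFilesArr) {
            executor.submit(() -> processFile(file)); // processFile is a placeholder
        }

        executor.shutdown();                          // no new tasks accepted
        executor.awaitTermination(1, TimeUnit.HOURS); // wait for all tasks to finish
    }

    private static void processFile(File file) {
        // hypothetical per-file work goes here
        System.out.println("Processed " + file.getName());
    }
}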

You can use a round-robin algorithm for an even distribution. Here is the idea in Java-like pseudocode:
ProcessThread[] t = new ProcessThread[numberOfCores];
int i = 0;
for (File f : files)
{
    t[i++ % t.length].queueForProcessing(f);
}
for (ProcessThread tt : t)
{
    tt.join();
}

The Producer-Consumer pattern solves this gracefully. Have one producer (the main thread) put all the files on a bounded blocking queue (see BlockingQueue). Then have a number of worker threads take a file from the queue and process it.
The work (rather than the files) will be uniformly distributed over threads, since a thread that is done processing one file comes back and asks for the next one. This avoids the possible problem that one thread gets assigned only large files while other threads get only small files.
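A possible sketch of that pattern, assuming a hypothetical handle(File) method for the per-file work and a poison-pill object to signal shutdown:

import java.io.File;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    // A poison pill signals the workers that no more files are coming.
    private static final File POISON_PILL = new File("");

    public static void main(String[] args) throws InterruptedException {
        File[] files = new File("someDirectory").listFiles(); // assume the directory exists
        int workers = Runtime.getRuntime().availableProcessors();
        BlockingQueue<File> queue = new ArrayBlockingQueue<>(64); // bounded queue

        Thread[] threads = new Thread[workers];
        for (int i = 0; i < workers; i++) {
            threads[i] = new Thread(() -> {
                try {
                    for (File f = queue.take(); f != POISON_PILL; f = queue.take()) {
                        handle(f); // hypothetical per-file work
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            threads[i].start();
        }

        for (File f : files) {
            queue.put(f);            // blocks when the queue is full
        }
        for (int i = 0; i < workers; i++) {
            queue.put(POISON_PILL);  // one pill per worker
        }
        for (Thread t : threads) {
            t.join();
        }
    }

    private static void handle(File f) {
        System.out.println(Thread.currentThread().getName() + " processed " + f.getName());
    }
}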

You can compute the range (start and end index in inputFilesArr) of files per thread:
if (inputFilesArr.length < numberOfThreads)
    numberOfThreads = inputFilesArr.length;
int[][] filesRangePerThread = getFilesRangePerThread(inputFilesArr.length, numberOfThreads);
and
private static int[][] getFilesRangePerThread(int filesCount, int threadsCount)
{
    int[][] filesRangePerThread = new int[threadsCount][2];
    if (threadsCount > 1)
    {
        float odtRangeIncrementFactor = (float) filesCount / threadsCount;
        float lastEndIndexSet = odtRangeIncrementFactor - 1;
        int rangeStartIndex = 0;
        int rangeEndIndex = Math.round(lastEndIndexSet);
        filesRangePerThread[0] = new int[] { rangeStartIndex, rangeEndIndex };
        for (int processCounter = 1; processCounter < threadsCount; processCounter++)
        {
            rangeStartIndex = rangeEndIndex + 1;
            lastEndIndexSet += odtRangeIncrementFactor;
            rangeEndIndex = Math.round(lastEndIndexSet);
            filesRangePerThread[processCounter] = new int[] { rangeStartIndex, rangeEndIndex };
        }
    }
    else
    {
        filesRangePerThread[0] = new int[] { 0, filesCount - 1 };
    }
    return filesRangePerThread;
}

If you are dealing with I/O, multiple threads can make progress even on one processor, because while one thread is waiting on read(byte[]) the processor can run another thread.
Anyway, this is my solution:
int nThreads = 2;
File[] files = new File[9];
int filesPerThread = files.length / nThreads;

class Task extends Thread {
    List<File> list = new ArrayList<>();
    // implement run() here
}

Task task = new Task();
List<Task> tasks = new ArrayList<>();
tasks.add(task);
for (int i = 0; i < files.length; i++) {
    if (task.list.size() == filesPerThread && files.length - i >= filesPerThread) {
        task = new Task();
        tasks.add(task);
    }
    task.list.add(files[i]);
}
for (Task t : tasks) {
    System.out.println(t.list.size());
}
prints 4 5
Note that it will create 3 threads if you have 3 files and 5 processors

Related

Why is the speedup of parallel programming using Executor larger than the number of cores?

I am writing a program dealing with parallel matrix computation using the ExecutorService framework. I set the fixed pool size to 4, but what surprises me is that when the matrix dimension is set to 5000, the speedup of multithreading over serial execution is greater than 4 (which is also the number of my CPU cores). I have checked that my CPU does not support hyper-threading.
I use Callable and Future since my multithreaded task needs to return a result.
// Part of code for parallel programming
double[][] x = new double[N][N];
List<Future<double[]>> futureList = new ArrayList<>();
for (int k = 0; k < N; k++)
{
    Future<double[]> temp = service.submit(new Thread.Task(N, k, matrix, vector));
    futureList.add(temp);
}
for (int j = 0; j < N; j++) {
    x[j] = futureList.get(j).get();
}

public double[] call() throws Exception {
    for (int i = N - 1; i >= 0; i--)
    {
        double sum = 0;
        for (int j = i + 1; j < N; j++)
        {
            sum += matrix[i][j] * x[j];
        }
        x[i] = (vector[i][k] - sum) / matrix[i][i];
    }
    return x;
}
// Part of code for serial programming
double[][] x = new double[N][N];
for (int k = 0; k < N; k++)
{
    for (int i = N - 1; i >= 0; i--)
    {
        double sum = 0;
        for (int j = i + 1; j < N; j++)
        {
            sum += matrix[i][j] * x[j][k];
        }
        x[i][k] = (vector[i][k] - sum) / matrix[i][i];
    }
}
In short, I just move the inner loop into a task run by a thread and leave the outer loop unchanged.
But how can the speedup be greater than 4?
My understanding was that the maximum speedup can only be 4, and I have verified that the work is actually done by just 4 threads.
Threads can be utilized on the same CPU; you do not need a multi-core processor to execute multithreaded applications.
Think of a thread as a small process that gets created by the parent program and destroyed once it is done. Even single-CPU computers can run multiple threads at once.
ExecutorService schedules tasks onto threads and will run as many in parallel as available resources, including the cores, allow.
Here are the docs on newFixedThreadPool:
public static ExecutorService newFixedThreadPool(int nThreads)
Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available. If any thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks. The threads in the pool will exist until it is explicitly shutdown.
You can also try newWorkStealingPool:
public static ExecutorService newWorkStealingPool()
Creates a work-stealing thread pool using all available processors as its target parallelism level.
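For illustration, a minimal sketch contrasting the two factory methods; the squaring tasks and pool sizes are placeholders, not taken from the question:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolComparison {
    public static void main(String[] args) throws Exception {
        // Fixed pool: exactly 4 worker threads; extra tasks wait in the queue.
        ExecutorService fixed = Executors.newFixedThreadPool(4);
        // Work-stealing pool: targets the number of available processors.
        ExecutorService stealing = Executors.newWorkStealingPool();

        List<Future<Long>> results = new ArrayList<>();
        for (int k = 0; k < 16; k++) {
            final long n = k;
            results.add(fixed.submit(() -> n * n)); // swap in `stealing` to compare
        }
        for (Future<Long> f : results) {
            System.out.println(f.get());
        }
        fixed.shutdown();
        stealing.shutdown();
    }
}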
This could be an effect of CPU cache affinity. If each core works on a different part of the problem, it may use its cache more efficiently. Because RAM can be tens of times slower than cache, this can make a huge difference.

Multithreading Usage

I am iterating through a HashMap with about 20 million entries. In each iteration I am again iterating through another HashMap with about 20 million entries.
HashMap<String, BitSet> data_1 = new HashMap<String, BitSet>();
HashMap<String, BitSet> data_2 = new HashMap<String, BitSet>();
I am dividing data_1 into chunks based on the number of threads (threads = cores; I have a four-core processor).
My code takes more than 20 hours to execute, and that is without even storing the results to a file.
1) If I want to store the results of each thread into a file without overlapping, how can I do that?
2) How can I make the following much faster?
3) How do I create the chunks dynamically, based on the number of cores?
int cores = Runtime.getRuntime().availableProcessors();
int threads = cores;
// Number of threads
int Chunks = data_1.size() / threads;
// I don't trust the chunks created by the line below; that's why I created chunk1, chunk2, chunk3, chunk4 separately and validated them.
Map<Integer, BitSet>[] Chunk = (Map<Integer, BitSet>[]) new HashMap<?, ?>[threads];
4) How do I create threads using for loops? Is what I am doing correct?
ClassName thread1 = new ClassName(data2, chunk1);
ClassName thread2 = new ClassName(data2, chunk2);
ClassName thread3 = new ClassName(data2, chunk3);
ClassName thread4 = new ClassName(data2, chunk4);
thread1.start();
thread2.start();
thread3.start();
thread4.start();
thread1.join();
thread2.join();
thread3.join();
thread4.join();
Representation of My Code
public class ClassName {
    Integer nSimilarEntities = 30;

    public void run() {
        for (String kNonRepeater : data_1.keySet()) {
            // Extract the feature vector
            BitSet vFeaturesNonRepeater = data_1.get(kNonRepeater);
            // Calculate the sum of 1s (L2 norm is the sqrt of this)
            double nNormNonRepeater = Math.sqrt(vFeaturesNonRepeater.cardinality());
            // Loop through the repeater set
            double nMinSimilarity = 100;
            int nMinSimIndex = 0;
            // Maintain the list of top similar repeaters and the similarity values
            long dpind = 0;
            ArrayList<String> vSimilarKeys = new ArrayList<String>();
            ArrayList<Double> vSimilarValues = new ArrayList<Double>();
            for (String kRepeater : data_2.keySet()) {
                // Status output at regular intervals
                dpind++;
                if (Math.floorMod(dpind, pct) == 0) {
                    System.out.println(dpind + " dot products (" + Math.round(dpind / pct) + "%) out of "
                            + nNumSimilaritiesToCompute + " completed!");
                }
                // Calculate the norm of repeater, and the dot product
                BitSet vFeaturesRepeater = data_2.get(kRepeater);
                double nNormRepeater = Math.sqrt(vFeaturesRepeater.cardinality());
                BitSet vTemp = (BitSet) vFeaturesNonRepeater.clone();
                vTemp.and(vFeaturesRepeater);
                double nCosineDistance = vTemp.cardinality() / (nNormNonRepeater * nNormRepeater);
                // queue.add(new MyClass(kRepeater, kNonRepeater, nCosineDistance));
                // if (queue.size() > YOUR_LIMIT)
                //     queue.remove();
                // Don't bother if the similarity is 0, obviously
                if ((vSimilarKeys.size() < nSimilarEntities) && (nCosineDistance > 0)) {
                    vSimilarKeys.add(kRepeater);
                    vSimilarValues.add(nCosineDistance);
                    nMinSimilarity = vSimilarValues.get(0);
                    nMinSimIndex = 0;
                    for (int j = 0; j < vSimilarValues.size(); j++) {
                        if (vSimilarValues.get(j) < nMinSimilarity) {
                            nMinSimilarity = vSimilarValues.get(j);
                            nMinSimIndex = j;
                        }
                    }
                } else { // If there are more, keep only the best
                    // If this is better than the smallest distance, then remove the smallest
                    if (nCosineDistance > nMinSimilarity) {
                        // Remove the lowest similarity value
                        vSimilarKeys.remove(nMinSimIndex);
                        vSimilarValues.remove(nMinSimIndex);
                        // Add this one
                        vSimilarKeys.add(kRepeater);
                        vSimilarValues.add(nCosineDistance);
                        // Refresh the index of lowest similarity value
                        nMinSimilarity = vSimilarValues.get(0);
                        nMinSimIndex = 0;
                        for (int j = 0; j < vSimilarValues.size(); j++) {
                            if (vSimilarValues.get(j) < nMinSimilarity) {
                                nMinSimilarity = vSimilarValues.get(j);
                                nMinSimIndex = j;
                            }
                        }
                    }
                } // End loop for maintaining list of similar entries
            } // End iteration through repeaters
            for (int i = 0; i < vSimilarValues.size(); i++) {
                System.out.println(Thread.currentThread().getName() + kNonRepeater + "|" + vSimilarKeys.get(i) + "|" + vSimilarValues.get(i));
            }
        }
    }
}
Finally, if not multithreading, are there any other approaches in Java to reduce the time complexity?
The computer works similarly to what you would do by hand (it processes more digits/bits at a time, but the problem is the same).
If you do addition, the time is proportional to the size of the number.
If you do multiplication or division, it's proportional to the square of the size of the number.
For the computer the size is based on multiples of 32 or 64 significant bits, depending on the implementation.
I'd say this task is suitable for parallel streams. Take a look at this concept if you have time: parallel streams use multithreading seamlessly and at full speed.
The top-level processing will look like this:
data_1.entrySet()
      .parallelStream()
      .flatMap(nonRepeaterEntry -> processOne(nonRepeaterEntry.getKey(), nonRepeaterEntry.getValue(), data2))
      .forEach(System.out::println);
You should provide a processOne function with a prototype like this:
Stream<String> processOne(String nonRepeaterKey, BitSet nonRepeaterBitSet, Map<String, BitSet> data2);
It returns the prepared strings that you currently print to a file.
To build the stream inside, you can prepare a List first and then turn it into a stream in the return statement:
return list.stream();
Even though the inner loop could also be written with streams, nesting another parallel stream inside is discouraged: you already have enough parallelism.
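For illustration, a rough sketch of what processOne could look like; it omits the top-N filtering from the original loop, and every name other than data2 and the BitSet types is an assumption:

import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;
import java.util.Map;
import java.util.stream.Stream;

class SimilarityTask {
    // Sketch only: compares one non-repeater against every repeater and
    // returns the output lines as a stream (no top-30 filtering here).
    static Stream<String> processOne(String nonRepeaterKey, BitSet nonRepeaterBits,
                                     Map<String, BitSet> data2) {
        double normNonRepeater = Math.sqrt(nonRepeaterBits.cardinality());
        List<String> lines = new ArrayList<>();
        for (Map.Entry<String, BitSet> repeater : data2.entrySet()) {
            double normRepeater = Math.sqrt(repeater.getValue().cardinality());
            if (normNonRepeater == 0 || normRepeater == 0) {
                continue; // zero-cardinality vectors can never have positive similarity
            }
            BitSet and = (BitSet) nonRepeaterBits.clone();
            and.and(repeater.getValue());
            double similarity = and.cardinality() / (normNonRepeater * normRepeater);
            if (similarity > 0) {
                lines.add(nonRepeaterKey + "|" + repeater.getKey() + "|" + similarity);
            }
        }
        return lines.stream();
    }
}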
For your questions:
1) If I want to store the results of each thread into a file without overlapping, how can I do that?
Any logging framework (Logback, Log4j) can deal with it, and so can parallel streams. You can also store the prepared lines in a queue or array and print them in a separate thread. That takes a bit of care, though; ready-made solutions are easier and effectively do the same thing.
2) How can I make the following much faster?
Optimize and parallelize. In a typical situation you get between number_of_threads/1.5 and number_of_threads times faster processing, assuming hyper-threading is in play, but it depends on the parts that do not parallelize well and on the underlying implementations.
3) How do I create the chunks dynamically, based on the number of cores?
You don't have to. Make a list of tasks (one task per data_1 entry) and feed them to an executor service; that is already a big enough task size. You can use a fixed thread pool with the number of threads as a parameter, and it will distribute the tasks evenly.
Note that you should create a task class, get a Future for each task from threadpool.submit, and at the end run a loop calling .get() on each Future. This throttles the main thread down to the executor's processing speed, implicitly giving fork/join-like behaviour.
4) Direct thread creation is an outdated technique; it's recommended to use an executor service of some sort, parallel streams, etc. If you do want a loop, create a list of chunks, create a thread per chunk in the loop and add it to a list of threads, then in another loop join each thread in that list.
Ad hoc optimizations:
1) Make a Repeater class that stores the key, the BitSet and its cardinality. Preprocess your maps into Repeater instances, calculating each cardinality once (i.e. not on every inner-loop run). That saves you roughly 20M * (20M - 1) calls to .cardinality(); you still need to call it for the AND result.
2) Replace vSimilarKeys/vSimilarValues with a size-limited PriorityQueue of combined entries; it is faster for keeping the top 30 elements (a sketch combining optimizations 1 and 2 follows after this list).
Take a look at this question for info about PriorityQueue:
Java PriorityQueue with fixed size
3) You can skip processing a non-repeater whose cardinality is already 0: ANDing BitSets never increases the resulting cardinality, and you filter out all 0-distance values anyway.
4) You can likewise skip (remove from the temporary list created in optimization 1) every Repeater with zero cardinality; as in point 3, it will never produce anything fruitful.
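A hedged sketch of optimizations 1 and 2 combined: a small value class whose cardinality is computed once per repeater, plus a bounded min-heap for the top entries. All class and method names here are illustrative, not from the original code:

import java.util.AbstractMap;
import java.util.BitSet;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.stream.Collectors;

class Repeater {
    final String key;
    final BitSet bits;
    final double norm; // sqrt(cardinality), computed once up front

    Repeater(String key, BitSet bits) {
        this.key = key;
        this.bits = bits;
        this.norm = Math.sqrt(bits.cardinality());
    }
}

class TopSimilarities {
    // Preprocess data_2 once, dropping zero-cardinality entries (optimization 4).
    static List<Repeater> preprocess(Map<String, BitSet> data2) {
        return data2.entrySet().stream()
                .filter(e -> e.getValue().cardinality() > 0)
                .map(e -> new Repeater(e.getKey(), e.getValue()))
                .collect(Collectors.toList());
    }

    // Keep only the best `limit` similarities with a bounded min-heap
    // instead of the two parallel ArrayLists.
    static PriorityQueue<Map.Entry<String, Double>> top(BitSet nonRepeater,
                                                        List<Repeater> repeaters, int limit) {
        double normNonRepeater = Math.sqrt(nonRepeater.cardinality());
        PriorityQueue<Map.Entry<String, Double>> best =
                new PriorityQueue<>((a, b) -> Double.compare(a.getValue(), b.getValue()));
        if (normNonRepeater == 0) {
            return best; // optimization 3: a zero-cardinality non-repeater yields nothing
        }
        for (Repeater r : repeaters) {
            BitSet and = (BitSet) nonRepeater.clone();
            and.and(r.bits);
            double similarity = and.cardinality() / (normNonRepeater * r.norm);
            if (similarity <= 0) continue;
            best.offer(new AbstractMap.SimpleEntry<>(r.key, similarity));
            if (best.size() > limit) best.poll(); // evict the current minimum
        }
        return best;
    }
}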

Set thread number limitation [duplicate]

I want to launch a lot of tasks against a database of about 42 million records, in batches of 5000 records at a time (which results in 850 tasks).
I also want to limit the number of threads (to 16) that Java starts to do this for me, and I am using the following code to accomplish this:
ExecutorService executorService = Executors.newFixedThreadPool(16);
for (int j = 1; j < 900 + 1; j++) {
    int start = (j - 1) * 5000;
    int stop = (j) * 5000 - 1;
    FetcherRunner runner = new FetcherRunner(routes, start, stop);
    executorService.submit(runner);

    Thread t = new Thread(runner);
    threadsList.add(t);
    t.start();
}
Is this the correct way to do this? Particularly since I have the impression that Java just fires away all tasks... (FetcherRunner implements Runnable)
The first part using ExecutorService looks good:
...
FetcherRunner runner = new FetcherRunner(routes, start, stop);
executorService.submit(runner);
The part with Thread should not be there; I am assuming you have it there just to show how you had it before?
Update:
Yes, you don't require the code after executorService.submit(runner); it is going to end up spawning a huge number of threads. If your objective is to wait for all submitted tasks to complete after the loop, then you can get a reference to a Future when submitting each task and wait on it, something like this:
ExecutorService executorService = Executors.newFixedThreadPool(16);
List<Future<Result>> futures = new ArrayList<>();
for (int j = 1; j < 900 + 1; j++) {
    int start = (j - 1) * 5000;
    int stop = (j) * 5000 - 1;
    FetcherRunner runner = new FetcherRunner(routes, start, stop);
    futures.add(executorService.submit(runner));
}
for (Future<Result> future : futures) {
    future.get(); // Do something with the results..
}
Is this the correct way of working?
The first part is correct. But you shouldn't be creating and starting new Thread objects. When you submit the Runnable, the ExecutorService puts it on its queue, and then runs it when a worker thread becomes available.
.... I use the threadlist to detect when all my threads are finished so I can continue processing results.
Well if you do what you are currently doing, you are running each task twice. Worse still, the swarm of manually created threads will all try to run in parallel.
A simple way to make sure that all of the tasks have completed is to call awaitTermination(...) on the ExecutorService. (An orderly shutdown of the executor service will have the same effect ... if you don't intend to use it again.)
The other approach is to create a Future for each FetcherRunner's results, and attempt to get the result after all of the tasks have been submitted. That has the advantage that you can start processing early results before later ones have been produced. (However, if you don't need to ... or can't ... do that, using Futures won't achieve anything.)
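A minimal sketch of the awaitTermination approach, reusing the FetcherRunner and routes from the question; the two-hour timeout is arbitrary, and the surrounding method is assumed to declare throws InterruptedException:

ExecutorService executorService = Executors.newFixedThreadPool(16);
for (int j = 1; j < 900 + 1; j++) {
    int start = (j - 1) * 5000;
    int stop = j * 5000 - 1;
    executorService.submit(new FetcherRunner(routes, start, stop));
}
executorService.shutdown();                          // no new tasks will be accepted
if (!executorService.awaitTermination(2, TimeUnit.HOURS)) {
    executorService.shutdownNow();                   // give up after the timeout
}
// All tasks have finished (or the timeout expired); continue processing results here.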
You don't need the part after the call to submit. The code you have that creates a Thread will result in 900 threads being created! Yowza. The ExecutorService has a pool of 16 threads and can run 16 jobs at once; any jobs submitted when all 16 threads are busy will be queued. From the docs:
Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available. If any thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks. The threads in the pool will exist until it is explicitly shutdown.
So there is no need for yet another thread. If you need to be notified after a task has finished, you can have it call out. Another option is to keep all of the Futures returned from submit and, as each task finishes, check whether all Futures are done. After all Futures have finished you can dispatch another function to run, but it will run on one of the threads in the ExecutorService.
Changed from your code:
ExecutorService executorService = Executors.newFixedThreadPool(16);
for (int j = 1; j < 900 + 1; j++) {
    int start = (j - 1) * 5000;
    int stop = (j) * 5000 - 1;
    FetcherRunner runner = new FetcherRunner(routes, start, stop);
    executorService.submit(runner);
}
The best way would be to use a CountDownLatch, as follows:
ExecutorService executorService = Executors.newFixedThreadPool(16);
CountDownLatch latch = new CountDownLatch(900);
for (int j = 1; j < 900 + 1; j++) {
    FetcherRunner runner = new FetcherRunner(routes, (j - 1) * 5000, j * 5000 - 1, latch);
    executorService.submit(runner);
}
latch.await();
In FetcherRunner, call latch.countDown() inside a finally block; the code after await() will only execute once all the tasks have completed.

Divide calculations among multiple threads

I've just started working with threads in Java. I have a simple algorithm that does a lot of calculations, and I need to divide those calculations among different threads. It looks like this:
while(...) {
....
doCalculations(rangeStart, rangeEnd);
}
And what I want to do is something like this:
while(...) {
...
// Notify N threads to start calculations in specific range
// Wait for them to finish calculating
// Check results
... Repeat
}
The calculating threads don't need a critical section or synchronization between each other, because they don't change any shared variables.
What I can't figure out is how to tell the threads to start and how to wait for them to finish.
thread[n].start() followed by thread[n].join() throws an exception.
Thank you!
I use an ExecutorService:
private static final int procs = Runtime.getRuntime().availableProcessors();
private final ExecutorService es = Executors.newFixedThreadPool(procs);

int tasks = ....
int blockSize = (tasks + procs - 1) / procs;
List<Future<Result>> futures = new ArrayList<>();
for (int i = 0; i < procs; i++) {
    int start = i * blockSize;
    int end = Math.min(tasks, (i + 1) * blockSize);
    futures.add(es.submit(new Task(start, end)));
}
for (Future<Result> future : futures) {
    Result result = future.get();
    // check/accumulate result.
}
Use a CountDownLatch to start, and another CountDownLatch to finish:
CountDownLatch start = new CountDownLatch(1);
CountDownLatch finish = new CountDownLatch(NUMBER_OF_THREADS);
start.countDown();
finish.await();
And in each worker thread:
start.await();
// do the computation
finish.countDown();
And if you need to do that several times, then a CyclicBarrier is probably what you should use.
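A small sketch of the CyclicBarrier variant, assuming a hypothetical doCalculations method and a fixed number of repetitions; the barrier action runs the result check once per round:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    static final int NUMBER_OF_THREADS = Runtime.getRuntime().availableProcessors();
    static final int ROUNDS = 10;

    public static void main(String[] args) throws InterruptedException {
        // The barrier action runs after all workers arrive and before any is released.
        CyclicBarrier barrier = new CyclicBarrier(NUMBER_OF_THREADS, BarrierDemo::checkResults);

        Thread[] workers = new Thread[NUMBER_OF_THREADS];
        for (int t = 0; t < NUMBER_OF_THREADS; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                try {
                    for (int round = 0; round < ROUNDS; round++) {
                        doCalculations(id, round); // hypothetical per-thread range work
                        barrier.await();           // wait for the other workers
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
    }

    static void doCalculations(int threadId, int round) {
        // placeholder for the real range calculation
    }

    static void checkResults() {
        // runs on one of the worker threads each time the barrier trips
    }
}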
Learn MapReduce and Hadoop. I think they could be a better approach than rolling your own, at the cost of greater dependencies.

Code inside thread slower than outside thread..?

I'm trying to alter some code so it can work with multithreading. I stumbled upon a performance loss when putting a Runnable around some code.
For clarification: The original code, let's call it
//doSomething
got a Runnable around it like this:
Runnable r = new Runnable()
{
    public void run()
    {
        //doSomething
    }
};
Then I submit the runnable to a CachedThreadPool ExecutorService. This is my first step towards multithreading this code, to see if the code runs as fast with one thread as the original code.
However, this is not the case. Where //doSomething executes in about 2 seconds, the Runnable executes in about 2.5 seconds. I need to mention that some other code, say, //doSomethingElse, inside a Runnable had no performance loss compared to the original //doSomethingElse.
My guess is that //doSomething has some operations that are not as fast when working in a Thread, but I don't know what it could be or what, in that aspect is the difference with //doSomethingElse.
Could it be the use of final int[]/float[] arrays that makes a Runnable so much slower? The //doSomethingElse code also used some finals, but //doSomething uses more. This is the only thing I could think of.
Unfortunately, the //doSomething code is quite long and out of context, but I will post it here anyway. For those who know the Mean Shift segmentation algorithm, this is the part of the code where the mean shift vector is calculated for each pixel. The for-loop
for(int i=0; i<L; i++)
runs through each pixel.
timer.start(); // this is where I start the timer

// Initialize mode table used for basin of attraction
char[] modeTable = new char[L]; // (L is a class property and is about 100,000)
Arrays.fill(modeTable, (char) 0);
int[] pointList = new int[L];

// Allocate memory for yk (current vector)
double[] yk = new double[lN]; // (lN is a final int, defined earlier)
// Allocate memory for Mh (mean shift vector)
double[] Mh = new double[lN];

int idxs2 = 0;
int idxd2 = 0;

for (int i = 0; i < L; i++) {
    // if a mode was already assigned to this data point
    // then skip this point, otherwise proceed to
    // find its mode by applying mean shift...
    if (modeTable[i] == 1) {
        continue;
    }

    // initialize point list...
    int pointCount = 0;

    // Assign window center (window centers are
    // initialized by createLattice to be the point
    // data[i])
    idxs2 = i * lN;
    for (int j = 0; j < lN; j++)
        yk[j] = sdata[idxs2 + j]; // (sdata is an earlier defined final float[] of about 100,000 items)

    // Calculate the mean shift vector using the lattice
    /*****************************************************/

    // Initialize mean shift vector
    for (int j = 0; j < lN; j++) {
        Mh[j] = 0;
    }
    double wsuml = 0;
    double weight;

    // find bucket of yk
    int cBucket1 = (int) yk[0] + 1;
    int cBucket2 = (int) yk[1] + 1;
    int cBucket3 = (int) (yk[2] - sMinsFinal) + 1;
    int cBucket = cBucket1 + nBuck1 * (cBucket2 + nBuck2 * cBucket3);
    for (int j = 0; j < 27; j++) {
        idxd2 = buckets[cBucket + bucNeigh[j]]; // (buckets is a final int[] of about 75,000 items)
        // list parse, crt point is cHeadList
        while (idxd2 >= 0) {
            idxs2 = lN * idxd2;
            // determine if inside search window
            double el = sdata[idxs2 + 0] - yk[0];
            double diff = el * el;
            el = sdata[idxs2 + 1] - yk[1];
            diff += el * el;
            //...
            idxd2 = slist[idxd2]; // (slist is a final int[] of about 100,000 items)
        }
    }
    //...
}
timer.end(); // this is where I stop the timer.
There is more code, but the last while loop was where I first noticed the difference in performance.
Could anyone think of a reason why this code runs slower inside a Runnable than original?
Thanks.
Edit: The time is measured inside the code, so it excludes the startup of the thread.
All code always runs "inside a thread".
The slowdown you see is most likely caused by the overhead that multithreading adds. Try parallelizing different parts of your code - the tasks should neither be too large, nor too small. For example, you'd probably be better off running each of the outer loops as a separate task, rather than the innermost loops.
There is no single correct way to split up tasks, though, it all depends on how the data looks and what the target machine looks like (2 cores, 8 cores, 512 cores?).
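For instance, a hedged sketch of submitting chunks of the outer pixel loop as tasks; the executor setup and chunking are assumptions, the surrounding method is assumed to declare throws Exception, and note that the original loop mutates modeTable, which would need care when parallelizing:

int cores = Runtime.getRuntime().availableProcessors();
ExecutorService pool = Executors.newFixedThreadPool(cores);
List<Future<?>> pending = new ArrayList<>();
int chunk = (L + cores - 1) / cores;
for (int startIdx = 0; startIdx < L; startIdx += chunk) {
    final int from = startIdx;
    final int to = Math.min(L, startIdx + chunk);
    pending.add(pool.submit(() -> {
        for (int i = from; i < to; i++) {
            // run the body of the outer loop for pixel i here
        }
    }));
}
for (Future<?> f : pending) {
    f.get(); // wait for every chunk to finish
}
pool.shutdown();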
Edit: What happens if you run the test repeatedly? E.g., if you do it like this:
Executor executor = ...;
for (int i = 0; i < 10; i++) {
    final int lap = i;
    Runnable r = new Runnable() {
        public void run() {
            long start = System.currentTimeMillis();
            //doSomething
            long duration = System.currentTimeMillis() - start;
            System.out.printf("Lap %d: %d ms%n", lap, duration);
        }
    };
    executor.execute(r);
}
Do you notice any difference in the results?
I personally do not see any reason for this. Any program has at least one thread, all threads are equal, and all threads are created by default with medium priority (5). So the code should show the same performance in the main application thread as in any other thread you open.
Are you sure you are measuring the time of "do something" and not the overall time your program runs? I believe you are measuring the time of the operation together with the time required to create and start the thread.
When you create a new thread you always have an overhead. If you have a small piece of code, you may experience performance loss.
Once you have more code (bigger tasks), you may get a performance improvement from your parallelization (the code on the thread will not necessarily run faster, but you are doing two things at once).
Just a detail: deciding how small a task can be while still being worth parallelizing is a well-known topic in parallel computation :)
You haven't explained exactly how you are measuring the time taken. Clearly there are thread start-up costs but I infer that you are using some mechanism that ensures that these costs don't distort your picture.
Generally speaking when measuring performance it's easy to get mislead when measuring small pieces of work. I would be looking to get a run of at least 1,000 times longer, putting the whole thing in a loop or whatever.
Here the one difference between the "no thread" and "threaded" cases is that you have gone from having one thread (as has been pointed out, you always have a thread) to two threads, so now the JVM has to mediate between two threads. For this kind of work I can't see why that should make a difference, but it is a difference.
I would want to be using a good profiling tool to really dig into this.