Threads and StringBuilder - Java

I have code which should output String length = 10000, but I keep getting different outputs, and I am confused about how exactly this happens. Is it because, for example, thread 1 appends, say, 95 times, then another thread (say thread 2) interrupts it, and thread 2 appends up to 98 before getting interrupted by thread 3, and so on?

StringBuilder is not thread-safe. You can't use one from concurrent threads.
Replace it with a thread-safe StringBuffer, and you'll get the result you expect.
Since it's not thread-safe, you can't expect a deterministic result when using it from different threads. For example, the code of StringBuilder might contain something like
int newIndex = size();
buffer[newIndex] = appendedCharacter;
If two threads execute these two lines concurrently, then both might execute the first instruction and get the same value for newIndex, and then both would insert the new character at the same index. That's called a data race, and such data races are the primary reason why non-thread-safe classes shouldn't be used from multiple threads.
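
Since the original code isn't shown, here is a minimal sketch of that kind of experiment (splitting the work as 10 threads × 1000 appends is an assumption): with StringBuffer the final length is reliably 10000, while swapping StringBuilder back in reproduces the varying, smaller results.

import java.util.concurrent.CountDownLatch;

public class AppendDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread-safe buffer: append() is synchronized. Replacing this with a
        // StringBuilder makes the final length unpredictable.
        StringBuffer sb = new StringBuffer();
        CountDownLatch done = new CountDownLatch(10);
        for (int t = 0; t < 10; t++) {
            new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    sb.append('x');
                }
                done.countDown();
            }).start();
        }
        done.await(); // wait for all appender threads to finish
        System.out.println("String length = " + sb.length()); // reliably 10000
    }
}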

Related

Concurrent in-order processing of work items from a Java BlockingQueue

I have part of a system that processes a BlockingQueue of input items within a worker thread and puts the results on a BlockingQueue of output items, where the relevant code (simplified) looks something like this:
while (running()) {
    InputObject a = inputQueue.take();  // Get from input BlockingQueue
    OutputObject b = doProcessing(a);   // Process the item
    outputQueue.put(b);                 // Place on output BlockingQueue
}
doProcessing is the main performance bottleneck in this code, but the processing of queue items could be parallelised since the processing steps are all independent of each other.
I would therefore like to improve this so that items can be processed concurrently by multiple threads, with the constraint that this must not change the order of outputs (e.g. I can't simply have 10 threads running the loop above, because that might result in outputs being ordered differently depending on processing times).
What is the best way to achieve this in pure, idiomatic Java?
Parallel streams from List preserve ordering:
List<T> input = ...
List<T> output = input.parallelStream()
.filter(this::running)
.map(this::doProcessing)
.collect(Collectors.toList());
PriorityBlockingQueue can be used if your work items can be compared to one another, and you will wait until running() is false before reading from the output queue:
outputQueue = new PriorityBlockingQueue<>();
Or you could order them after they have all been processed (if they can be compared to one another):
outputQueue.drainTo(outputList);
outputList.sort(null);
A simple way to implement the comparison is to assign a sequential ID to each element put into the input queue.
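A minimal sketch of that idea (the SequencedResult wrapper type is an assumption, not part of the original code): each result carries the sequence number of the input item it came from, so the priority queue, or a final sort, restores the input order.

// Hypothetical wrapper: pairs a result with the sequence number of the input
// item it was produced from (OutputObject is the question's own result type).
class SequencedResult implements Comparable<SequencedResult> {
    final long seq;
    final OutputObject result;

    SequencedResult(long seq, OutputObject result) {
        this.seq = seq;
        this.result = result;
    }

    @Override
    public int compareTo(SequencedResult other) {
        return Long.compare(seq, other.seq); // order by original input position
    }
}

// Workers would then do something like:
//   outputQueue.put(new SequencedResult(id, doProcessing(a)));
// with:
//   PriorityBlockingQueue<SequencedResult> outputQueue = new PriorityBlockingQueue<>();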
Create X event-loop threads, where X is the number of steps that can be processed in parallel.
The steps run in parallel, but never on the same item: while one thread performs a later step on one item, another thread performs the earlier step on the next item, and so on. Because each step is handled by exactly one thread reading from exactly one queue, items leave the pipeline in the order they entered it.
To further optimize this, you can use the concurrent queues provided by JCTools, which are optimized for single-producer single-consumer scenarios (the JDK's BlockingQueue implementations support multiple producers and multiple consumers).
// Thread 1: step 1 takes raw input and feeds queue1
while (running()) {
    InputObject a = inputQueue.take();
    OutputObject b = doProcessingStep1(a);
    queue1.put(b);
}

// Thread 2: step 2 takes step 1's results from queue1 and feeds queue2
while (running()) {
    OutputObject a = queue1.take();
    OutputObject b = doProcessingStep2(a);
    queue2.put(b);
}

// Thread 3: step 3 takes step 2's results from queue2 and feeds the output queue
while (running()) {
    OutputObject a = queue2.take();
    OutputObject b = doProcessingStep3(a);
    outputQueue.put(b);
}
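
Here is a small, self-contained sketch of that pipeline wiring (Strings stand in for InputObject/OutputObject, and the queue capacity of 1024 is an assumption): each stage is a single thread owning a single input queue, which is what preserves the item order.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.UnaryOperator;

public class PipelineSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> inputQueue = new ArrayBlockingQueue<>(1024);
        BlockingQueue<String> queue1 = new ArrayBlockingQueue<>(1024);
        BlockingQueue<String> outputQueue = new ArrayBlockingQueue<>(1024);

        // Each stage runs on its own thread and owns exactly one input queue.
        Thread stage1 = new Thread(() -> pump(inputQueue, queue1, s -> s + "-step1"));
        Thread stage2 = new Thread(() -> pump(queue1, outputQueue, s -> s + "-step2"));
        stage1.setDaemon(true);
        stage2.setDaemon(true);
        stage1.start();
        stage2.start();

        for (int i = 0; i < 5; i++) {
            inputQueue.put("item" + i);
        }
        for (int i = 0; i < 5; i++) {
            System.out.println(outputQueue.take()); // items come out in input order
        }
    }

    // Take from 'in', apply this stage's step, put on 'out', forever (daemon thread).
    private static void pump(BlockingQueue<String> in, BlockingQueue<String> out,
                             UnaryOperator<String> step) {
        try {
            while (true) {
                out.put(step.apply(in.take()));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}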

MPJ Express eclipse - remove combination of letters

I have to do an exercise for a parallel computing course.
The task is using N parallel processes to remove all combinations of letters "RTY" from the string.
Normally I'd do it with
String strAfter=str1.replaceAll("[RTY]","") ;
But how to make it in parallel?
Split, work, merge.
Split in the main thread storing the output in a Set
Create N worker threads.
Have each worker thread pick() a string from the set at a given index in a synchronized way, increase the index, and process the entry.
When index reaches Set size, glue everything back together. You may want to use StringBuilder and append() instead of concatenating Strings
Split the String into N parts, then make each process work on one chunk of the String. The splitting mechanism should be intelligent enough to handle boundary values. You need to communicate each chunk of the String to the corresponding process using the Send() and Recv() methods for processing, and in the end the updated String should be communicated back in the same manner. Here you can find the Javadocs: http://mpj-express.org/docs/javadocs/index.html
My guess is you need to find a way to do this without using single-threaded functions on the entire string. What about breaking the string into N parts, letting each of your N parallel processes run the replace function on its part, and concatenating the results after all the threads have finished?
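A minimal sketch of that split/replace/join idea using plain Java threads (the original exercise targets MPJ Express processes with Send()/Recv(), so this only illustrates the approach; the input string and worker count are made up): because the replacement removes the single characters R, T and Y, chunk boundaries need no special handling.

import java.util.ArrayList;
import java.util.List;

public class ParallelReplace {
    public static void main(String[] args) throws InterruptedException {
        String str1 = "QWERTYUIOPRTYASDFRTY";
        int n = 4; // number of parallel workers (assumption)
        String[] results = new String[n];
        List<Thread> threads = new ArrayList<>();

        int chunk = (str1.length() + n - 1) / n; // ceiling division so nothing is lost
        for (int p = 0; p < n; p++) {
            final int idx = p;
            final String part = str1.substring(Math.min(idx * chunk, str1.length()),
                                               Math.min((idx + 1) * chunk, str1.length()));
            Thread t = new Thread(() -> results[idx] = part.replaceAll("[RTY]", ""));
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join(); // wait for all workers; also makes their writes visible
        }

        StringBuilder merged = new StringBuilder(); // merge in original chunk order
        for (String r : results) {
            merged.append(r);
        }
        System.out.println(merged); // QWEUIOPASDF
    }
}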

thread skips some iterations

I wrote a method that should be repeated 1000 times, and the method itself is another loop (like a nested loop). Since the running time was not reasonable, I decided to use threads to run it faster. Here is the method:
public class NewClass implements Runnable {
    @Override
    public void run() {
        for (int j = 0; j < 50; j++) {
            System.out.println(i + "," + j);
            /*
             * my method
             */
        }
    }
}
and here is the way the main class calls it:
for (int i = 0; i < 1000; i++) {
    NewClass myMethod = new NewClass();
    Thread thread = new Thread(myMethod);
    thread.start();
}
The problem is that when I run it, it skips the first iteration (when i=0) of the loop in the main class, and in later iterations it skips some iterations of the inner loop (myMethod). Here is the result of the println:
1,0
1,1
2,0
2,1
3,0
3,1
3,2
...
3,22
4,0
...
Clearly it skips i=0, and for the other iterations it does not finish them. I'm sure the problem is not in the body of the method; I ran it several times without threads. This is the first time I have written a thread, and I think the problem is in the threading.
Your loop index for i starts from 1 (I'm guessing? It's not even clear why it's in scope), so it's unsurprising that i = 0 does not occur. Similarly, I think you're confused about the printing order, which need not be deterministic. It may be printing the output interspersed with output from other threads. This is normal and expected. I think the code is behaving correctly, it might just not be doing what you want.
I noticed you edited your code. This doesn't really bode well for anybody being able to help you diagnose whatever unexpected behavior you're seeing.
It just so happens that no thread calls the print function while i is zero. You don't compel this ordering, so it may or may not happen that way. Either of these sequences can occur:
i is zero
A thread is created.
The thread prints the value of i.
i is incremented.
Or
i is zero
A thread is created.
i is incremented.
The thread prints the value of i.
Your code doesn't enforce either ordering, so it can happen either way. You only have guaranteed ordering between threads when something in your code guarantees such ordering.
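One way to make the printed value deterministic is to capture the loop counter into a final local before starting the thread (or pass it in through a constructor, as the next answer suggests); a minimal sketch:

public class CapturedIndexDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            final int iteration = i; // value fixed before the thread starts
            new Thread(() -> {
                for (int j = 0; j < 50; j++) {
                    System.out.println(iteration + "," + j); // prints the intended i
                }
            }).start();
        }
    }
}

The output lines can still interleave across threads, but each thread now prints the i value it was created for.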
Clearly it skips i=0, and for the other iterations it does not finish them. I'm sure the problem is not in the body of the method; I ran it several times without threads. This is the first time I have written a thread, and I think the problem is in the threading.
There are two reasons why this could be happening.
You seem to be printing out i, which is shared between threads, without any memory synchronization. See the Java tutorial on memory consistency. Each processor has its own cache, so even though i is being modified in main memory, a processor's cache may still see the old value.
You are also probably seeing a race condition. What is happening is that the first two threads (for i=0 and i=1) are started before either of their run() methods is actually called, so when they both run, they print 1 because that is i's value at that moment.
Because of either or both of these reasons, if you run your application 10 times you should see vastly different output.
If you want i to be shared then you could turn it into an AtomicInteger and do something like:
final AtomicInteger i = new AtomicInteger(0);
...
// replacement for your for loop
i.set(0);
while (true) {
    if (i.incrementAndGet() >= 1000) {
        break;
    }
    ...
}
...
// inside the thread you would print
System.out.println(i + "," + j);
But this mechanism does not solve the race condition. Even better would be to pass in the i value to the NewClass constructor:
private int i;
public NewClass(int i) {
    this.i = i;
}
...
Then when you create the instances you do:
NewClass myMethod = new NewClass(i);

Java- FixedThreadPool with known pool size but unknown workers

So I think I sort of understand how fixed thread pools work (using Executors.newFixedThreadPool built into Java), but from what I can see, there's usually a set number of jobs you want done, and you know how many when you start the program. For example:
int numWorkers = Integer.parseInt(args[0]);
int threadPoolSize = Integer.parseInt(args[1]);
ExecutorService tpes = Executors.newFixedThreadPool(threadPoolSize);
WorkerThread[] workers = new WorkerThread[numWorkers];
for (int i = 0; i < numWorkers; i++) {
    workers[i] = new WorkerThread(i);
    tpes.execute(workers[i]);
}
where each WorkerThread does something really simple; that part is arbitrary. What I want to know is: what if you have a fixed pool size (say 8 max) but you don't know how many workers you'll need to finish the task until runtime?
The specific example is: If I have a pool size of 8 and I'm reading from standard input. As I read, I split the input into blocks of a set size. Each one of these blocks is given to a thread (along with some other information) so that they can compress it. As such, I don't know how many threads I'll need to create as I need to keep going until I reach the end of the input. I also have to somehow ensure that the data stays in the same order. If thread 2 finishes before thread 1 and just submits its work, my data will be out of order!
Would a thread pool be the wrong approach in this situation then? It seems like it'd be great (since I can't use more than 8 threads at a time).
Basically, I want to do something like this:
ExecutorService tpes = Executors.newFixedThreadPool(threadPoolSize);
BufferedInputStream inBytes = new BufferedInputStream(System.in);
byte[] buff = new byte[BLOCK_SIZE];
byte[] dict = new byte[DICT_SIZE];
WorkerThread worker;
int bytesRead = 0;
while ((bytesRead = inBytes.read(buff)) != -1) {
    System.arraycopy(buff, BLOCK_SIZE - DICT_SIZE, dict, 0, DICT_SIZE);
    worker = new WorkerThread(buff, dict);
    tpes.execute(worker);
}
This is not working code, I know, but I'm just trying to illustrate what I want.
I left out a bit, but notice how buff and dict have changing values and that I don't know how long the input is. I don't think I can actually do this, though, because worker already exists after the first call! I can't just say worker = new WorkerThread(...) a bunch of times, since isn't it already pointing at an existing thread (true, a thread that might be dead), and obviously in this implementation, if it did work, I wouldn't be running in parallel. But my point is, I want to keep creating threads until I hit the max pool size, wait until a thread is done, then keep creating threads until I hit the end of the input.
I also need to keep stuff in order, which is the part that's really annoying.
Your solution is completely fine (the only point is that parallelism is perhaps not necessary if the workload of your WorkerThreads is very small).
With a thread pool, the number of submitted tasks is not relevant. There may be fewer or more than the number of threads in the pool; the thread pool takes care of that.
However, and this is important: you rely on some kind of order of the results of your WorkerThreads, but when using parallelism, this order is not guaranteed! It doesn't matter whether you use a thread pool or how many worker threads you have; it will always be possible for your results to finish in an arbitrary order!
To keep the order right, give each WorkerThread the number of the current item in its constructor, and let them put their results in the right order after they are finished:
int noOfWorkItem = 0;
while ((bytesRead = inBytes.read(buff)) != -1) {
    System.arraycopy(buff, BLOCK_SIZE - DICT_SIZE, dict, 0, DICT_SIZE);
    worker = new WorkerThread(buff, dict, noOfWorkItem++);
    tpes.execute(worker);
}
As #ignis points out, parallel execution may not be the best answer for your situation.
However, to answer the more general question, there are several other Executor implementations to consider beyond FixedThreadPool, some of which may have the characteristics that you desire.
As far as keeping things in order, typically you would submit tasks to the executor, and for each submission, you get a Future (which is an object that promises to give you a result later, when the task finishes). So, you can keep track of the Futures in the order that you submitted tasks, and then when all tasks are done, invoke get() on each Future in order, to get the results.
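A minimal sketch of that Future-based approach (the compressBlock helper and the sample inputs are placeholders, not the question's real compression code): results are consumed in submission order regardless of which task finishes first.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OrderedFuturesDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<String>> futures = new ArrayList<>();

        // Submit tasks as the "blocks" arrive; the list remembers submission order.
        for (String block : new String[] {"alpha", "bravo", "charlie", "delta"}) {
            futures.add(pool.submit(() -> compressBlock(block)));
        }

        // Consume in submission order: get() blocks until that particular task is done,
        // so the output order matches the input order even if later tasks finish first.
        for (Future<String> f : futures) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }

    // Hypothetical stand-in for the real per-block compression work.
    private static String compressBlock(String block) {
        return block.toUpperCase();
    }
}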

Extra bytes appearing when building file data using multiple threads

I am working on a large scale dataset and after building a model, I use multithreading (whole project in Java) as follows:
OutputStream out = new BufferedOutputStream(new FileOutputStream(outFile));
int i = 0;
Collection<Track1Callable> callables = new ArrayList<Track1Callable>();

// For each entry in the test file, do whatever needs to be done.
// Track1Callable actually processes that entry and returns a double value.
for (Pair<PreferenceArray, long[]> tests : new DataFileIterable(
        KDDCupDataModel.getTestFile(dataFileDirectory))) {
    PreferenceArray userTest = tests.getFirst();
    callables.add(new Track1Callable(recommender, userTest));
    i++;
}

ExecutorService executor = Executors.newFixedThreadPool(cores); // 24 cores
List<Future<byte[]>> results = executor.invokeAll(callables);
executor.shutdown();

for (Future<byte[]> result : results) {
    for (byte estimate : result.get()) {
        out.write(estimate);
    }
}
out.flush();
out.close();
When I receive the result from each callable, I output it to a file. Does the output appear in the exact order in which the list of initial Callables was made, in spite of some completing before others? It seems it should, but I'm not sure.
Also, I expect a total of 6.2 million bytes to be written to the output file, but I get an additional 2000 bytes (yay, for free). That messes up my submission, and I think it is because of some concurrency issue. I tested this on a small dataset and it seems to work fine there (264 bytes expected and received).
Am I doing anything wrong with the Executor framework or Futures?
Q: Is the order the same as the one specified for the tasks? Yes.
From the API:
Returns: a list of Futures representing the tasks, in the same sequential order as produced by the iterator for the given task list. If the operation did not time out, each task will have completed. If it did time out, some of these tasks will not have completed.
As for the "extra" bytes: have you tried doing all of this in sequential order (i.e., without using an executor) and checking if you obtain different results? It seems that your problem is outside the code provided (and probably is not due to concurrency).
The order in which the callables are executed doesn't matter with the code you have here: you write the results in the order you stored the Futures in the list. Even if the tasks executed in reverse order, the file would look the same, because your file writing is single-threaded.
I suspect your callables are interacting with each other, and you get different results depending on the number of cores you use. For example, you might be using SimpleDateFormat.
I suggest you run this twice in the same program with a dataset which completes in a short time: first with only one thread in the thread pool, and a second time with 24 threads. You should be able to compare the results of both runs with Arrays.equals(byte[], byte[]) and see whether you get exactly the same results.
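A sketch of that comparison (the placeholder tasks stand in for the question's Track1Callable instances): run the same workload on a single-threaded pool and on a 24-thread pool, concatenate the results in submission order, and compare the byte arrays.

import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompareRunsSketch {
    // Run the callables on a pool of the given size and concatenate the results
    // in submission order (mirrors the question's write loop, but into memory).
    static byte[] runWithThreads(List<Callable<byte[]>> callables, int threads) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (Future<byte[]> result : executor.invokeAll(callables)) {
                out.write(result.get());
            }
            return out.toByteArray();
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder workload; in the real test these would be the Track1Callable instances.
        List<Callable<byte[]>> callables = List.of(
                () -> new byte[] {1, 2, 3},
                () -> new byte[] {4, 5},
                () -> new byte[] {6});

        byte[] sequential = runWithThreads(callables, 1);
        byte[] parallel = runWithThreads(callables, 24);
        System.out.println("identical: " + Arrays.equals(sequential, parallel));
    }
}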
