Multithreading and Virtual Memory System in Java

I'm trying to model a virtual memory system. What I would like to do is simulate multiple concurrent user processes using multi-threading.
I'm going to take in, through the command line: page size (bytes as ints), the number of pages each process gets in the simulated logical memory, the number of frames in the corresponding simulated physical memory, the number of total processes to simulate, and the actual logical addresses for each process (which will just be arbitrary ints contained in text files).
I guess what is mainly relevant to my question is how to create the n threads from the input and fill each thread's list of logical addresses. I'm reading up on multithreading but struggling a bit.
I'm trying to work with this thread code:
class Proccess implements Runnable {
Thread t;
List<Integer> addresses = new ArrayList<>();
Proccess(String pNum) {
t = new Thread(this, pNum);
System.out.println("Child " + t.getName());
t.start();
}
public void run() {
try {
//Is this where I want to fill in the addresses list?
//I will be reading in a file to do this. Each
//file is unique for each individual process
//so I don't have to worry about multiple processes
//accessing the same file.
} catch (InterruptedException e) {
System.out.println("Interrupted.");
}
System.out.println(".");
}
}
Each process is going to have its own page table as well, and I am open to suggestions on how to effectively add/maintain that.
This isn't for school, so there are no specs I need to follow.
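A minimal sketch of one possible setup, assuming each simulated process reads its own text file of logical addresses and keeps its page table as a plain int array indexed by page number (the file names, parameter values and helper names here are placeholders rather than anything from the question):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class SimulatedProcess implements Runnable {
    private final int id;
    private final String addressFile;
    private final List<Integer> addresses = new ArrayList<>();
    private final int[] pageTable; // index = page number, value = frame number, -1 = not resident

    SimulatedProcess(int id, String addressFile, int pagesPerProcess) {
        this.id = id;
        this.addressFile = addressFile;
        this.pageTable = new int[pagesPerProcess];
        Arrays.fill(pageTable, -1);
    }

    @Override
    public void run() {
        try {
            // Each process reads its own file, so there is no cross-thread file contention.
            for (String line : Files.readAllLines(Paths.get(addressFile))) {
                addresses.add(Integer.parseInt(line.trim()));
            }
        } catch (IOException e) {
            System.err.println("Process " + id + " could not read " + addressFile);
            return;
        }
        // ... translate each logical address through pageTable here, counting page faults, etc. ...
        System.out.println("Process " + id + " loaded " + addresses.size() + " addresses");
    }
}

public class VmSimulator {
    public static void main(String[] args) throws InterruptedException {
        int processCount = 4;      // would come from the command line
        int pagesPerProcess = 64;  // likewise
        ExecutorService pool = Executors.newFixedThreadPool(processCount);
        for (int i = 0; i < processCount; i++) {
            pool.submit(new SimulatedProcess(i, "process" + i + ".txt", pagesPerProcess));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}

An ExecutorService is used here instead of starting a raw Thread from each constructor, which keeps the thread creation in one place and makes it easy to wait for all processes to finish.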


How to avoid context switching in Java ExecutorService

I use a piece of software (AnyLogic) to export runnable jar files that themselves repeatedly re-run a set of simulations with different parameters (so-called parameter variation experiments). The simulations I'm running are very RAM intensive, so I have to limit the number of cores available to the jar file. In AnyLogic, the number of available cores is easily set, but from the Linux command line on the servers, the only way I know how to do this is by using the taskset command to manually specify the available cores to use (using a CPU affinity "mask"). This has worked very well so far, but since you have to specify individual cores to use, I'm learning that there can be pretty substantial differences in performance depending on which cores you select. For example, you would want to maximize the use of the CPU cache levels, so if you choose cores that share too much cache, you'll get much slower performance.
Since AnyLogic is written in Java, I can use Java code to specify how the simulations are run. I'm looking at using the Java ExecutorService to build a pool of individual runs such that I can just specify the size of the pool to be whatever number of cores matches the RAM of the machine I'm using. I'm thinking that this would offer a number of benefits, perhaps most importantly that the computer's scheduler can do a better job of selecting the cores to minimize runtime.
In my tests, I built a small AnyLogic model that takes about 10 seconds to run (it just switches between 2 statechart states repeatedly). Then I created a custom experiment with this simple code:
ExecutorService service = Executors.newFixedThreadPool(2);
for (int i=0; i<10; i++)
{
Simulation experiment = new Simulation();
experiment.variable = i;
service.execute( () -> experiment.run() );
}
What I would hope to see is that only 2 Simulation objects start up at a time, since that's the size of the thread pool. But I see all 10 start up and run in parallel over the 2 threads. This makes me think that context switching is happening, which I assume is pretty inefficient.
When, instead of calling the AnyLogic Simulation, I just call a custom Java class (below) in the service.execute function, it seems to work fine, showing only 2 Tasks running at a time.
public class Task implements Runnable, Serializable {
public void run() {
traceln("Starting task on thread " + Thread.currentThread().getName());
try {
TimeUnit.SECONDS.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
traceln("Ending task on thread " + Thread.currentThread().getName());
}
}
Does anyone know why the AnyLogic function seems to be setting up all the simulations at once?
I'm guessing Simulation extends from ExperimentParamVariation. The key to achieve what you want would be to determine when the experiment has ended.
The documentation shows some interesting methods like getProgress() and getState(), but you would have to poll those methods until the progress is 1 or the state is FINISHED or ERROR. There are also the methods onAfterExperiment() and onError() that should be called by the engine to indicate that the experiment has ended or there was an error. I think you could use these last two methods with a Semaphore to control how many experiments run at once:
import java.util.concurrent.Semaphore;
import com.anylogic.engine.ExperimentParamVariation;
public class Simulation extends ExperimentParamVariation</* Agent */> {
private final Semaphore semaphore;
public Simulation(Semaphore semaphore) {
this.semaphore = semaphore;
}
public void onAfterExperiment() {
this.semaphore.release();
super.onAfterExperiment();
}
public void onError(Throwable error) {
this.semaphore.release();
super.onError(error);
}
// run() cannot be overridden because it is final
// You could create another run method or acquire a permit from the semaphore elsewhere
public void runWithSemaphore() throws InterruptedException {
// This acquire() will block until a permit is available or the thread is interrupted
this.semaphore.acquire();
this.run();
}
}
Then you will have to configure a semaphore with the desired number of permits and pass it to the Simulation instances:
import java.util.concurrent.Semaphore;
// ...
Semaphore semaphore = new Semaphore(2);
for (int i = 0; i < 10; i++)
{
Simulation experiment = new Simulation(semaphore);
// ...
// Handle the InterruptedException thrown here
experiment.runWithSemaphore();
/* Alternative to runWithSemaphore(): acquire the permit and call run().
semaphore.acquire();
experiment.run();
*/
}
Firstly, this whole question has been nullified by what I think is a relatively new addition to AnyLogic's functionality: you can supply an ini file that specifies the number of "parallel workers".
https://help.anylogic.com/index.jsp?topic=%2Fcom.anylogic.help%2Fhtml%2Frunning%2Fexport-java-application.html&cp=0_3_9&anchor=customize-settings
But I had managed to find a workable solution just before finding this (better) option. Hernan's answer was almost enough. I think it was hampered by some vagaries of AnyLogic's engine (as I detailed in a comment).
The best version I could muster myself used ExecutorService. In a Custom Experiment, I put this code:
ExecutorService service = Executors.newFixedThreadPool(2);
List<Callable<Integer>> tasks = new ArrayList<>();
for (int i=0; i<10; i++)
{
int t = i;
tasks.add( () -> simulate(t) );
}
try{
traceln("starting setting up service");
List<Future<Integer>> futureResults = service.invokeAll(tasks);
traceln("finished setting up service");
List<Integer> res = futureResults.stream().parallel().map(
f -> {
try {
return f.get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
return null;
}).collect(Collectors.toList());
System.out.println("----- Future Results are ready -------");
System.out.println("----- Finished -------");
} catch (InterruptedException e) {
e.printStackTrace();
}
service.shutdown();
The key here was using the Java Future. Also, to use the invokeAll function, I created a function in the Additional class code block:
public int simulate(int variable){
// Create Engine, initialize random number generator:
Engine engine = createEngine();
// Set stop time
engine.setStopTime( 100000 );
// Create new root object:
Main root = new Main( engine, null, null );
root.parameter = variable;
// Prepare Engine for simulation:
engine.start( root );
// Start simulation in fast mode:
//traceln("attempting to acquire 1 permit on run "+variable);
//s.acquireUninterruptibly(1);
traceln("starting run "+variable);
engine.runFast();
traceln("ending run "+variable);
//s.release();
// Destroy the model:
engine.stop();
traceln( "Finished, run "+variable);
return 1;
}
The only limitation I could see to this approach is that I don't have a waiting-while loop to output progress every few minutes. But instead of finding a solution to that, I must abandon this work for the much better settings file solution in the link up top.
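If anyone does need a progress readout, one possible workaround (a sketch only, reusing the tasks list and service from the code above) is to submit the tasks individually instead of using invokeAll, and poll the returned futures from a scheduled reporter:

List<Future<Integer>> futures = new ArrayList<>();
for (Callable<Integer> task : tasks) {
    futures.add(service.submit(task));
}

// Report progress once a minute from a separate single-threaded scheduler.
ScheduledExecutorService reporter = Executors.newSingleThreadScheduledExecutor();
reporter.scheduleAtFixedRate(() -> {
    long done = futures.stream().filter(Future::isDone).count();
    System.out.println(done + " of " + futures.size() + " runs finished");
}, 1, 1, TimeUnit.MINUTES);

try {
    for (Future<Integer> f : futures) {
        f.get(); // block until every run has completed
    }
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
} finally {
    reporter.shutdown();
    service.shutdown();
}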

Send data to serial port when data is available

I'm building an interactive LED table with a 14x14 matrix consisting of addressable LED strips for a university assignment. Those are controlled by 2 Arduinos that get the data about which LED should have which RGB value from a Pi running a server, which runs several games that should be playable on the LED table. To control the games I send the respective int codes from an Android app to the server running on the Raspberry Pi.
The serial communication is realized using jSerialComm. The problem I'm facing is that I don't want to send data over the serial port constantly, but only at the moment when a new array that specifies the matrix is available.
Therefore I don't want to busy-wait, permanently checking whether the matrix got updated, nor do I want to check for an update with
while(!matrixUpdated) {
try {
Thread.sleep(100);
} catch (InterruptedException e) {}
}
So what I've been trying is running a while(true) loop in which I call wait(), so the thread stops until I wake it up by calling notify() when an updated matrix is available.
My run() method in the serial thread looks like this at the moment:
@Override
public void run() {
arduino1.setComPortTimeouts(SerialPort.TIMEOUT_SCANNER, 0, 0);
arduino2.setComPortTimeouts(SerialPort.TIMEOUT_SCANNER, 0, 0);
try {
Thread.sleep(100);
} catch (Exception e) {}
PrintWriter outToArduino1 = new PrintWriter(arduino1.getOutputStream());
PrintWriter outToArduino2 = new PrintWriter(arduino2.getOutputStream());
while(true) {
try {
wait();
} catch (InterruptedException e) {}
System.out.println("Matrix received");
outToArduino1.print(matrix);
outToArduino2.print(matrix);
}
}
I wake the thread up by this method which is nested in the same class:
public void setMatrix(int[][][] pixelIdentifier) {
matrix = pixelIdentifier;
notify();
}
I also tried notifyAll() which didn't change the outcome.
In one of the games (Tic Tac Toe) I call this method after every game turn to update and send the matrix to the arduinos:
private void promptToMatrix() {
synchronized (GameCenter.serialConnection) {
GameCenter.serialConnection.setMatrix(matrix);
}
}
I previously called it without using the synchronized block, but as I've been reading through many articles on the topic on Stack Overflow, I read that one should use synchronized for this. I have also read that using wait() and notify() is not recommended; however, the assignment needs to get done quite quickly and I don't know if any other approach makes sense, as I don't want to restructure my whole application: I run up to 5 threads when a game is being played (due to threads for communication and so on).
If it is possible to solve this using wait() and notify(), I would be really grateful to hear how, as I have not been able to fully grasp how to work properly with synchronized blocks.
However, if such a solution is not possible, or would also end in restructuring the whole application, I'm open to different suggestions. Just pointing out that using wait() and notify() is not recommended doesn't help me, though; I've already read that often enough, I'm aware of it, but I'd prefer to use it in this case if possible.
EDIT:
The application executes like this:
Main Thread
|--> SerialCommunication Thread --> waiting for updated data
|--> NetworkController Thread
|--> Client Thread --> interacting with the game thread
|--> Game Thread --> sending updated data to the waiting SerialCommunication Thread
Really appreciate any help and thanks in advance for your time!
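For reference, a correctly guarded wait()/notify() pair has to hold the same monitor in both places and re-check a condition flag in a loop; a minimal sketch of what that could look like here (class and field names are assumed, not taken from the actual project):

public class SerialSender implements Runnable {
    private final Object lock = new Object();
    private int[][][] matrix;
    private boolean matrixUpdated = false;

    public void setMatrix(int[][][] pixelIdentifier) {
        synchronized (lock) {
            matrix = pixelIdentifier;
            matrixUpdated = true;
            lock.notifyAll(); // wake the sender thread
        }
    }

    @Override
    public void run() {
        while (true) {
            int[][][] toSend;
            synchronized (lock) {
                while (!matrixUpdated) { // guards against spurious wake-ups
                    try {
                        lock.wait();
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                toSend = matrix;
                matrixUpdated = false;
            }
            // write toSend to the serial ports outside the synchronized block
        }
    }
}

The key points are that wait() and notifyAll() are only called while holding the lock they belong to, and that the boolean flag is re-checked in a loop rather than assumed after a single wake-up.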
You are dealing with asynchronous updates, possibly coming from different threads; the best match in my opinion is RxJava.
You could use a Subject to receive matrix events and then subscribe to it to update the LEDs.
You can write something like this (don't take it too literally).
public static void main(String[] args) {
int[][] initialValue = new int[32][32];
BehaviorSubject<int[][]> matrixSubject = BehaviorSubject.createDefault(initialValue);
SerialPort arduino1 = initSerial("COM1");
SerialPort arduino2 = initSerial("COM2");
PrintWriter outToArduino1 = new PrintWriter(arduino1.getOutputStream());
PrintWriter outToArduino2 = new PrintWriter(arduino2.getOutputStream());
Observable<String> serializedMatrix = matrixSubject.map(Sample::toChars);
serializedMatrix.observeOn(Schedulers.io()).subscribe(mat -> {
// Will run on a newly created thread
outToArduino1.println(mat);
});
serializedMatrix.observeOn(Schedulers.io()).subscribe(mat -> {
// Will run on a newly created thread
outToArduino2.println(mat);
});
// Wait forever
while(true) {
try {
// get your matrix somehow ...
// then publish it on your subject
// your subscribers will receive the data and use it.
matrixSubject.onNext(matrix);
Thread.sleep(100);
} catch (InterruptedException e) {
// SWALLOW error
}
}
}
public static String toChars(int[][] data) {
// Serialize data
return null;
}
There are many operators that you could use to make it do what you need, and you can use different schedulers to choose among different threading policies.
You can also transform your input before publishing it on the subject; an observable or a subject can be created directly from your input.
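For example (continuing the sketch above, so matrixSubject, Sample::toChars and the two PrintWriters are assumed), duplicate frames could be dropped and the update rate capped before anything is written to the serial ports:

Observable<String> throttled = matrixSubject
        .map(Sample::toChars)               // serialize the matrix as above
        .distinctUntilChanged()             // skip frames identical to the previous one
        .sample(50, TimeUnit.MILLISECONDS); // forward at most ~20 updates per second

throttled.observeOn(Schedulers.io()).subscribe(outToArduino1::println);
throttled.observeOn(Schedulers.io()).subscribe(outToArduino2::println);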

Using Multiple Threads in Java To Shorten Program Time

I do not have much experience making multi-threaded applications but I feel like my program is at a point where it may benefit from having multiple threads. I am doing a larger scale project that involves using a classifier (as in machine learning) to classify roughly 32000 customers. I have debugged the program and discovered that it takes about a second to classify each user. So in other words this would take 8.8 hours to complete!
Is there any way that I can run 4 threads handling 8000 users each? The first thread would handle users 1-8000, the second 8001-16000, the third 16001-24000, and the fourth 24001-32000. Also, as of now each classification is done by calling a static function from another class...
Then, when they are done, the other threads besides the main one should end. Is something like this feasible? If so, I would greatly appreciate it if someone could provide tips or steps on how to do this. I am familiar with the idea of critical sections (wait/signal) but have little experience with them.
Again, any help would be very much appreciated! Tips and suggestions on how to handle a situation like this are welcome! Not sure it matters, but I have a Core 2 Duo PC with a 2.53 GHz processor.
This is too lightweight for Apache Hadoop, which works with chunks of data of around 64 MB per server... but it's a perfect opportunity for Akka actors, and Akka happens to support Java!
http://doc.akka.io/docs/akka/2.1.4/java/untyped-actors.html
Basically, you can have 4 actors doing the work, and as they finish classifying a user (or, probably better, a batch of users), they either pass the result to a "receiver" actor that puts the info into a data structure or a file for output, or you can do concurrent I/O by having each actor write to its own file; the files can then be examined/combined when they're all done.
If you want to get even more fancy/powerful, you can put the actors on remote servers. It's still really easy to communicate with them, and you'd be leveraging the CPU/resources of multiple servers.
I wrote an article myself on Akka actors, but it's in Scala, so I'll spare you that. But if you google "akka actors", you'll get lots of hand-holding examples on how to use it. Be brave, dive right in and experiment. The "actor system" is such an easy concept to pick up. I know you can do it!
Split the data up into objects that implement Runnable, then pass them to new threads.
Having more than four threads in this case won't kill you, but you cannot get more parallel work than you have cores (as mentioned in the comments); if there are more threads than cores, the system will have to decide who gets to go when.
If I had a class Customer, and I wanted to issue a thread to handle 8000 customers out of a greater collection, I might do something like this:
public class CustomerClassifier implements Runnable {
    private Customer[] customers;
    public CustomerClassifier(Customer[] customers) {
        this.customers = customers;
    }
    @Override
    public void run() {
        for (int i = 0; i < customers.length; i++) {
            classify(customers[i]); // critical that this classify function does not
                                    // attempt to modify a resource outside this class
                                    // unless it handles locking, or is talking to a database
                                    // or something that won't throw fits about resource locking
        }
    }
}
Then, to issue these threads elsewhere:
int jobSize = 8000;
for (int start = 0; start < fullCustomerArray.length; start += jobSize) {
    int end = Math.min(start + jobSize, fullCustomerArray.length);
    // java.util.Arrays.copyOfRange takes the chunk for this thread
    Customer[] customers = Arrays.copyOfRange(fullCustomerArray, start, end);
    new Thread(new CustomerClassifier(customers)).start(); // run() will be invoked by the thread
}
If your classify method affects the same resource somewhere, you will have to implement locking, which will also erode the advantage gained to some degree.
Concurrency is extremely complicated and requires a lot of thought; I also recommend looking at the Oracle docs: http://docs.oracle.com/javase/tutorial/essential/concurrency/index.html
(I know links are bad, but hopefully the oracle docs don't move around too much?)
Disclaimer: I am no expert in concurrent design or in multithreading (different topics).
If you split the input array into 4 equal subarrays for 4 threads, there is no guarantee that all threads finish simultaneously. You are better off putting all the data in a single queue and letting all the worker threads feed from that common queue. Use thread-safe BlockingQueue implementations in order not to write low-level synchronize/wait/notify code.
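A rough sketch of that shared-queue approach, using a poison pill to shut the workers down (the Customer type and the classify call are stand-ins for the asker's real classes):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SharedQueueClassifier {
    static class Customer { }
    static final Customer POISON = new Customer(); // tells a worker there is no more work

    static void classify(Customer c) { /* the existing static classification call */ }

    public static void classifyAll(Customer[] all, int workerCount) throws InterruptedException {
        BlockingQueue<Customer> queue = new ArrayBlockingQueue<>(1024);
        Thread[] workers = new Thread[workerCount];
        for (int i = 0; i < workerCount; i++) {
            workers[i] = new Thread(() -> {
                try {
                    for (Customer c = queue.take(); c != POISON; c = queue.take()) {
                        classify(c);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[i].start();
        }
        for (Customer c : all) {
            queue.put(c);            // blocks if the queue is full, so memory stays bounded
        }
        for (int i = 0; i < workerCount; i++) {
            queue.put(POISON);       // one pill per worker
        }
        for (Thread t : workers) {
            t.join();
        }
    }
}

Because every worker pulls from the same queue, they all run out of work at roughly the same time, regardless of how long individual customers take to classify.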
Since Java 5 we have had some handy concurrency utilities. You might want to consider using thread pools for a cleaner implementation.
package com.threads;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class ParalleliseArrayConsumption {
private int[] itemsToBeProcessed ;
public ParalleliseArrayConsumption(int size){
itemsToBeProcessed = new int[size];
}
/**
* @param args
*/
public static void main(String[] args) {
(new ParalleliseArrayConsumption(32)).processUsers(4);
}
public void processUsers(int numOfWorkerThreads){
ExecutorService threadPool = Executors.newFixedThreadPool(numOfWorkerThreads);
int chunk = itemsToBeProcessed.length/numOfWorkerThreads;
int start = 0;
List<Future> tasks = new ArrayList<Future>();
for(int i=0;i<numOfWorkerThreads;i++){
tasks.add(threadPool.submit(new WorkerThread(start, start+chunk)));
start = start+chunk;
}
// join all worker threads to main thread
for(Future f:tasks){
try {
f.get();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ExecutionException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
threadPool.shutdown();
while(!threadPool.isTerminated()){
}
}
private class WorkerThread implements Callable{
private int startIndex;
private int endIndex;
public WorkerThread(int startIndex, int endIndex){
this.startIndex = startIndex;
this.endIndex = endIndex;
}
@Override
public Object call() throws Exception {
for(int currentUserIndex = startIndex;currentUserIndex<endIndex;currentUserIndex++){
// process the user. Add your logic here
System.out.println(currentUserIndex+" is the user being processed in thread " +Thread.currentThread().getName());
}
return null;
}
}
}

Java concurrency pattern to parallel parts of task

I read lines from a file, in one thread of course. The lines are sorted by key.
Then I collect lines with the same key (15-20 lines), do the parsing, a big calculation, etc., and push the resulting object to a statistics class.
I want to parallelize my program: read in one thread, parse and calculate in many threads, and join the results in one thread to write to the statistics class.
Is there any ready-made pattern or solution in the Java 7 framework for this problem?
I implemented it with an executor for multithreading, pushing to a BlockingQueue and reading the queue in another thread, but I think my code sucks and will produce bugs.
Many thanks
Update:
I can't map the whole file into memory - it's very big.
You already have the main classes of approaches in mind: CountDownLatch, Thread.join, Executors, Fork/Join. Another option is the Akka framework, which has message-passing overheads measured in 1-2 microseconds and is open source. However, let me share another approach that often outperforms those and is simpler; it was born from working on batch file loads in Java for a number of companies.
Assuming that your goal in splitting the work up is performance (as measured by how long it takes from start to finish) rather than learning, it is often difficult to beat memory-mapping the file and processing it in a single thread that has been pinned to a single core. It also gives much simpler code. A double win.
This may be counter-intuitive, but the speed of processing files is nearly always limited by how efficient the file loading is, not by how parallel the processing is. Hence memory-mapping the file is a huge win. Once it is memory-mapped, we want the algorithm to have low contention with the hardware as it performs the file load. Modern hardware tends to have the IO controller and the memory controller on the same socket as the CPU, which, combined with the prefetchers within the CPU itself, leads to a great deal of efficiency when processing the file in an orderly fashion from a single thread. This can be so extreme that going parallel may actually be a lot slower. Pinning a thread to a core usually speeds up memory-bound algorithms by a factor of 5, which is why the memory-mapping part is so important.
If you have not already, give it a try.
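For reference, a minimal sketch of memory-mapping a file with FileChannel.map; the file name is a placeholder, the per-byte loop stands in for the real parsing, and a file larger than 2 GB would have to be mapped in smaller windows rather than in one call:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedScan {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("data.txt", "r");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            long lines = 0;
            while (buffer.hasRemaining()) {
                if (buffer.get() == '\n') {
                    lines++; // replace with the real per-line parsing and calculation
                }
            }
            System.out.println("lines: " + lines);
        }
    }
}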
Without facts and numbers it is hard to give you advice. So let's start from the beginning:
You must identify the bottleneck. Do you really need to perform the computation in parallel, or is your job IO bound? Avoid concurrency if possible; it could be faster.
If computations must be done in parallel you must decide how fine- or coarse-grained your tasks should be. You need to measure your computations and tasks to be able to size them. Avoid creating too many tasks.
You should have an IO thread, several workers, and a "data gatherer" thread. No mutable data.
Be sure not to slow down the IO thread because of task submission. Otherwise you should use more coarse-grained tasks or a better task dispatcher (who said Disruptor?).
The "Data gatherer" thread should be the only one to mutate the final state
Avoid unnecessary data copies and object creation. Quite often, when iterating over large files, the bottleneck is the GC. Last week I achieved a 6x speedup by replacing a standard Scala object with a flyweight pattern. You should also try to pre-allocate everything and use large buffers (page-sized).
Avoid disk seeks.
Having said that, you should be on the right track. You can start with an Executor using properly sized tasks. Tasks write into a data structure, like your blocking queue, shared between the workers and the "data gatherer" thread. This threading model is really simple, efficient and hard to get wrong. It is usually efficient enough. If you still require better performance then you must profile your application and understand the bottleneck. Then you can decide the way to go: refine your task size, use faster tools like the Disruptor/Akka, improve IO, create fewer objects, tune your code, buy a bigger machine or faster disks, move to Hadoop, etc. Pinning each thread to a core (which requires platform-specific code) could also provide a significant boost.
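A compressed sketch of that model, with a worker pool and a single "data gatherer" thread draining a shared BlockingQueue (the group contents and the parseAndCompute calculation are placeholders, not the asker's actual code):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class GathererExample {

    // Stand-in for the real parsing / big calculation on a group of lines with the same key.
    static int parseAndCompute(List<String> group) {
        return group.size();
    }

    public static void main(String[] args) throws InterruptedException {
        List<List<String>> groups = Arrays.asList(
                Arrays.asList("a1", "a2"), Arrays.asList("b1", "b2", "b3"));

        BlockingQueue<Integer> results = new LinkedBlockingQueue<>();
        List<Integer> stats = new ArrayList<>(); // the final state, touched only by the gatherer

        ExecutorService workers = Executors.newFixedThreadPool(4);
        Thread gatherer = new Thread(() -> {
            try {
                while (true) {
                    stats.add(results.take()); // single writer of the final state
                }
            } catch (InterruptedException e) {
                // interrupted once all results have been drained
            }
        });
        gatherer.start();

        for (List<String> group : groups) {
            workers.submit(() -> {
                try {
                    results.put(parseAndCompute(group));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.HOURS);
        while (!results.isEmpty()) { // let the gatherer drain anything still queued
            Thread.sleep(10);
        }
        gatherer.interrupt();
        gatherer.join();
        System.out.println(stats);
    }
}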
You can use CountDownLatch
http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/CountDownLatch.html
to synchronize the starting and joining of threads. This is better than looping on the set of threads and calling join() on each thread reference.
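A small self-contained sketch of that pattern (the per-worker processing is just a placeholder):

import java.util.concurrent.CountDownLatch;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        int workerCount = 4;
        CountDownLatch done = new CountDownLatch(workerCount);
        for (int i = 0; i < workerCount; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    // ... process this worker's share of the lines ...
                    System.out.println("worker " + id + " finished");
                } finally {
                    done.countDown(); // counted down even if the work throws
                }
            }).start();
        }
        done.await(); // the main thread blocks here until every worker has counted down
        System.out.println("all workers done, aggregate the results here");
    }
}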
Here is what I would do if asked to split work as you are trying to:
public class App {
public static class Statistics {
}
public static class StatisticsCalculator implements Callable<Statistics> {
private final List<String> lines;
public StatisticsCalculator(List<String> lines) {
this.lines = lines;
}
@Override
public Statistics call() throws Exception {
//do stuff with lines
return new Statistics();
}
}
public static void main(String[] args) {
final File file = new File("path/to/my/file");
final List<List<String>> partitionedWork = partitionWork(readLines(file), 10);
final List<Callable<Statistics>> callables = new LinkedList<>();
for (final List<String> work : partitionedWork) {
callables.add(new StatisticsCalculator(work));
}
final ExecutorService executorService = Executors.newFixedThreadPool(Math.min(partitionedWork.size(), 10));
final List<Future<Statistics>> futures;
try {
futures = executorService.invokeAll(callables);
} catch (InterruptedException ex) {
throw new RuntimeException(ex);
}
try {
for (final Future<Statistics> future : futures) {
final Statistics statistics = future.get();
//do whatever to aggregate the individual
}
} catch (InterruptedException | ExecutionException ex) {
throw new RuntimeException(ex);
}
executorService.shutdown();
try {
executorService.awaitTermination(1, TimeUnit.DAYS);
} catch (InterruptedException ex) {
throw new RuntimeException(ex);
}
}
static List<String> readLines(final File file) {
//read lines
return new ArrayList<>();
}
static List<List<String>> partitionWork(final List<String> lines, final int blockSize) {
//divide up the incoming list into a number of chunks
final List<List<String>> partitionedWork = new LinkedList<>();
for (int i = lines.size(); i > 0; i -= blockSize) {
int start = i > blockSize ? i - blockSize : 0;
partitionedWork.add(lines.subList(start, i));
}
return partitionedWork;
}
}
I have created a Statistics object; this holds the result of the work done.
There is a StatisticsCalculator object which is a Callable<Statistics> - this does the calculation. It is given a List<String> and it processes the lines and creates the Statistics.
The readLines method I leave to you to implement.
In many ways the most important method is the partitionWork method; it divides the incoming List<String>, which is all the lines in the file, into a List<List<String>> using the blockSize. This essentially decides how much work each thread should have, so tuning the blockSize parameter is very important: if each unit of work is only one line, then the overheads would probably outweigh the advantages, whereas if each unit of work is ten thousand lines, then you may end up with only one working Thread.
Finally, the meat of the operation is the main method. This calls the read and then the partition methods. It spawns an ExecutorService with a number of threads equal to the number of chunks of work, but up to a maximum of 10. You may want to make this equal to the number of cores you have.
The main method then submits a List of all the Callables, one for each chunk, to the executorService. The invokeAll method blocks until the work is done.
The method then loops over the returned List of Futures and gets the generated Statistics object for each one, ready for aggregation.
Afterwards, don't forget to shut down the executorService, as otherwise it will prevent your application from exiting.
EDIT
The OP wants to read line by line, so here is a revised main:
public static void main(String[] args) throws IOException {
final File file = new File("path/to/my/file");
final ExecutorService executorService = Executors.newFixedThreadPool(10);
final List<Future<Statistics>> futures = new LinkedList<>();
try (final BufferedReader reader = new BufferedReader(new FileReader(file))) {
List<String> tmp = new LinkedList<>();
String line = null;
while ((line = reader.readLine()) != null) {
tmp.add(line);
if (tmp.size() == 100) {
futures.add(executorService.submit(new StatisticsCalculator(tmp)));
tmp = new LinkedList<>();
}
}
if (!tmp.isEmpty()) {
futures.add(executorService.submit(new StatisticsCalculator(tmp)));
}
}
try {
for (final Future<Statistics> future : futures) {
final Statistics statistics = future.get();
//do whatever to aggregate the individual
}
} catch (InterruptedException | ExecutionException ex) {
throw new RuntimeException(ex);
}
executorService.shutdown();
try {
executorService.awaitTermination(1, TimeUnit.DAYS);
} catch (InterruptedException ex) {
throw new RuntimeException(ex);
}
}
This streams the file line by line and, after a given number of lines, fires off a new task to the executor to process those lines.
You would need to call clear on the List<String> in the Callable when you are done with it, as the Callable instances are referenced by the Futures they return. If you clear the Lists when you're done with them, that should reduce the memory footprint considerably.
A further enhancement may well be to use the suggestion here for an ExecutorService that blocks until there is a spare thread; this will guarantee that there are never more than threads*blocksize lines in memory at a time, provided you clear the Lists when the Callables are done with them.
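One rough way to approximate that, using only classes from java.util.concurrent, is a ThreadPoolExecutor with a small bounded queue and CallerRunsPolicy. It is not a true blocking submit, but it keeps the number of queued chunks bounded because the reading thread runs a chunk itself whenever the pool is saturated:

ExecutorService bounded = new ThreadPoolExecutor(
        10, 10,                                     // fixed pool of 10 worker threads
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(10),               // at most 10 chunks waiting in the queue
        new ThreadPoolExecutor.CallerRunsPolicy()); // submitter runs the task itself when the queue is full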

Reduced performance when using multithreading in Java

I am new to multi-threading and I have to write a program using multiple threads to increase its efficiency. On my first attempt, what I wrote produced just the opposite result. Here is what I have written:
class ThreadImpl implements Callable<ArrayList<Integer>> {
//Bloom filter instance for one of the table
BloomFilter<Integer> bloomFilterInstance = null;
// Data member for complete data access.
ArrayList< ArrayList<UserBean> > data = null;
// Store the result of the testing
ArrayList<Integer> result = null;
int tableNo;
public ThreadImpl(BloomFilter<Integer> bloomFilterInstance,
ArrayList< ArrayList<UserBean> > data, int tableNo) {
this.bloomFilterInstance = bloomFilterInstance;
this.data = data;
result = new ArrayList<Integer>(this.data.size());
this.tableNo = tableNo;
}
public ArrayList<Integer> call() {
int[] tempResult = new int[this.data.size()];
for(int i=0; i<data.size() ;++i) {
tempResult[i] = 0;
}
ArrayList<UserBean> chkDataSet = null;
for(int i=0; i<this.data.size(); ++i) {
if(i==tableNo) {
//do nothing;
} else {
chkDataSet = new ArrayList<UserBean> (data.get(i));
for(UserBean toChk: chkDataSet) {
if(bloomFilterInstance.contains(toChk.getUserId())) {
++tempResult[i];
}
}
}
this.result.add(new Integer(tempResult[i]));
}
return result;
}
}
In the above class there are two data members, data and bloomFilterInstance, and they (the references) are passed from the main program. So there is actually only one instance of data and bloomFilterInstance, and all the threads access it simultaneously.
The class that launches the threads is (a few irrelevant details have been left out, so you can assume all variables etc. are declared):
class MultithreadedVrsion {
public static void main(String[] args) {
if(args.length > 1) {
ExecutorService es = Executors.newFixedThreadPool(noOfTables);
List<Callable<ArrayList<Integer>>> threadedBloom = new ArrayList<Callable<ArrayList<Integer>>>(noOfTables);
for (int i=0; i<noOfTables; ++i) {
threadedBloom.add(new ThreadImpl(eval.bloomFilter.get(i),
eval.data, i));
}
try {
List<Future<ArrayList<Integer>>> answers = es.invokeAll(threadedBloom);
long endTime = System.currentTimeMillis();
System.out.println("using more than one thread for bloom filters: " + (endTime - startTime) + " milliseconds");
System.out.println("**Printing the results**");
for(Future<ArrayList<Integer>> element: answers) {
ArrayList<Integer> arrInt = element.get();
for(Integer i: arrInt) {
System.out.print(i.intValue());
System.out.print("\t");
}
System.out.println("");
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
I did the profiling with JProfiler, and here is a snapshot of the CPU threads (http://tinypic.com/r/wh1v8p/6), where red shows blocked, green runnable and yellow waiting. The problem is that the threads are running one at a time, and I do not know why.
Note: I know that this is not thread-safe, but I know that I will only be doing read operations for now and just want to analyse the raw performance gain that can be achieved; later I will implement a better version.
Can anyone please tell me what I have missed?
One possibility is that the cost of creating threads is swamping any possible performance gains from doing the computations in parallel. We can't really tell if this is a real possibility because you haven't included the relevant code in the question.
Another possibility is that you only have one processor / core available. Threads only run when there is a processor to run them. So your expectation of a linear speedup with the number of threads can only be achieved (in theory) if there is a free processor for each thread.
Finally, there could be memory contention due to the threads all attempting to access a shared array. If you had proper synchronization, that would potentially add further contention. (Note: I haven't tried to understand the algorithm to figure out if contention is likely in your example.)
My initial advice would be to profile your code, and see if that offers any insights.
And take a look at the way you are measuring performance to make sure that you aren't just seeing some benchmarking artefact; e.g. JVM warmup effects.
That process looks CPU-bound (no I/O, database calls, network calls, etc.). I can think of two explanations:
How many CPUs does your machine have? How many is Java allowed to use? - if the threads are competing for the same CPU, you've added coordination work and placed more demand on the same resource.
How long does the whole method take to run? For very short times, the additional work in context switching threads could overpower the actual work. The way to deal with this is to make a longer job. Also, run it a lot of times in a loop not counting the first few iterations (like a warm up, they aren't representative.)
Several possibilities come to mind:
There is some synchronization going on inside bloomFilterInstance's implementation (which is not given).
There is a lot of memory allocation going on, e.g., what appears to be an unnecessary copy of an ArrayList when chkDataSet is created, use of new Integer instead of Integer.valueOf. You may be running into overhead costs for memory allocation.
You may be CPU-bound (if bloomFilterInstance#contains is expensive) and threads are simply blocking for CPU instead of executing.
A profiler may help reveal the actual problem.
