Calling different bundles in parallel using multithreaded code - java

I am working on a project in which I will have different bundles. As an example, suppose I have 5 bundles, and each of those bundles has a method named process.
Currently I am calling the process method of all 5 bundles sequentially, one by one, and then writing to the database. But that is not what I want.
Here is what I am looking for:
I need to call the process method of all 5 bundles in parallel using multithreaded code and then write to the database. I am not sure what the right way to do that is. Should I have five threads, one thread for each bundle? But what happens in that scenario if I have 50 bundles; will I then have 50 threads?
I also want a timeout feature. If any bundle takes more time than the threshold we set up, it should time out and log an error saying that this bundle took too long.
I hope the question is clear enough.
Below is the code I have so far, which calls the process method of each bundle sequentially, one by one.
public void callBundles(final Map<String, Object> eventData) {
    final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.HOLDER);
    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        // calling the process method of a bundle
        final Map<String, String> response = entry.getPlugin().process(outputs);
        // then write to the database.
        System.out.println(response);
    }
}
I am not sure what the best and most efficient way to do this is, and I don't want to do it sequentially, because in the future I might have more than 5 bundles.
Can anyone provide an example of how I can do this? I have tried, but somehow what I came up with is not what I am looking for.
Any help will be appreciated. Thanks.
Update:
This is what I came up with:
public void callBundles(final Map<String, Object> eventData) {
    // One thread for the database writer, five threads for the plugin processors
    final ExecutorService executor = Executors.newFixedThreadPool(5);
    final BlockingQueue<Map<String, String>> queue = new LinkedBlockingQueue<Map<String, String>>();

    @SuppressWarnings("unchecked")
    final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);

    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        executor.submit(new Runnable() {
            public void run() {
                final Map<String, String> response = entry.getPlugin().process(outputs);
                // put the response map in the queue for the database to read
                queue.offer(response);
            }
        });
    }

    Future<?> future = executor.submit(new Runnable() {
        public void run() {
            Map<String, String> map;
            try {
                while (true) {
                    // blocks until a map is available in the queue, or until interrupted
                    map = queue.take();
                    // write map to database
                    System.out.println(map);
                }
            } catch (InterruptedException ex) {
                // IF we're catching InterruptedException then this means that future.cancel(true)
                // was called, which means that the plugin processors are finished;
                // process the rest of the queue and then exit
                while ((map = queue.poll()) != null) {
                    // write map to database
                    System.out.println(map);
                }
            }
        }
    });

    // this interrupts the database thread, which sends it into its catch block
    // where it processes the rest of the queue and exits
    future.cancel(true); // interrupt database thread

    // wait for the threads to finish
    try {
        executor.awaitTermination(5, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        // log error here
    }
}
But I have not been able to add any timeout feature to this yet. Also, if I run the above code as it is, it does not work. Am I missing anything?
Can anybody help me with this?

This is a BASIC example, partially based on the solution presented in ExecutorService that interrupts tasks after a timeout.
You will have to figure out the best way to implement this in your own code. Use it only as a guide!
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorExample {

    // This is used to "expire" long running tasks
    protected static final ScheduledExecutorService EXPIRE_SERVICE = Executors.newScheduledThreadPool(1);
    // This is used to manage the bundles and process them as required
    protected static final ExecutorService BUNDLES_SERVICE = Executors.newFixedThreadPool(10);

    public static void main(String[] args) {
        // A list of the future tasks created by the BUNDLES_SERVICE.
        // We need this so we can monitor the progress of the output
        List<Future<String>> futureTasks = new ArrayList<>(100);
        // This is a list of all the tasks that have either completed
        // or been canceled...we want these so we can determine
        // the results...
        List<Future<String>> completedTasks = new ArrayList<>(100);

        // Add all the Bundles to the BUNDLES_SERVICE
        for (int index = 0; index < 100; index++) {
            Bundle bundle = new Bundle();
            // We need a reference to the future so we can cancel it if we
            // need to
            Future<String> futureBundle = BUNDLES_SERVICE.submit(bundle);
            // Set this bundle's future, see Bundle for details
            bundle.setFuture(futureBundle);
            // Add it to our monitor queue...
            futureTasks.add(futureBundle);
        }

        // Basically we are going to move all completed/canceled bundles
        // from the "active" to the completed list and wait until there
        // are no more "active" tasks
        while (futureTasks.size() > 0) {
            try {
                // Little bit of a pressure release...
                Thread.sleep(1000);
            } catch (InterruptedException ex) {
            }
            // Check all the bundles...
            for (Future<String> future : futureTasks) {
                // If it has completed or was cancelled, move it to the completed
                // list. AFAIK, isDone will return true if isCancelled is true as well,
                // but this illustrates the point
                if (future.isCancelled() || future.isDone()) {
                    completedTasks.add(future);
                }
            }
            // Remove all the completed tasks from the future tasks list
            futureTasks.removeAll(completedTasks);
            // Some idea of progress...
            System.out.println("Still have " + futureTasks.size() + " outstanding tasks...");
        }

        // Dump the results...
        int index = 0;
        for (Future<String> future : completedTasks) {
            index++;
            System.out.print("Task " + index);
            if (future.isCancelled()) {
                System.out.println(" was canceled");
            } else if (future.isDone()) {
                try {
                    System.out.println(" completed with " + future.get());
                } catch (Exception ex) {
                    System.out.println(" failed because of " + ex.getMessage());
                }
            }
        }

        System.exit(0);
    }

    public static class ExpireBundle implements Runnable {

        private final Future futureBundle;

        public ExpireBundle(Future futureBundle) {
            this.futureBundle = futureBundle;
        }

        @Override
        public void run() {
            futureBundle.cancel(true);
        }
    }

    public static class Bundle implements Callable<String> {

        private volatile Future<String> future;

        @Override
        public String call() throws Exception {
            // This is the tricky bit. In order to cancel a task, we
            // need to wait until it runs, but we also need its future...
            // We could use another, single threaded queue to do the job
            // but that's getting messy again and it won't provide the information
            // we need back to the original calling thread that we are using
            // to schedule and monitor the threads...

            // We need to have a valid future before we can continue...
            while (future == null) {
                Thread.sleep(250);
            }

            // Schedule an expiry call for 5 seconds from NOW...this is important.
            // I originally thought about doing this when I scheduled the original
            // bundle, but that ignored the fact that some tasks would not
            // have started yet...
            EXPIRE_SERVICE.schedule(new ExpireBundle(future), 5, TimeUnit.SECONDS);

            // Sleep for a random amount of time from 1-10 seconds
            Thread.sleep((long) (Math.random() * 9000) + 1000);

            return "Happy";
        }

        protected void setFuture(Future<String> future) {
            this.future = future;
        }
    }
}
Also, I had thought of using ExecutorService#invokeAll to wait for the tasks to complete, but its timeout applies to the whole batch rather than to each task from the moment it starts, so it precluded the per-task timeout I wanted. I don't like having to feed the Future into the Callable, but no other solution came to mind that would allow me to get the results from the submitted Future.
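For reference, here is a minimal sketch of the invokeAll variant with an overall timeout, reusing BUNDLES_SERVICE from the example above; the placeholder Callable stands in for the real bundle work. The 5-second budget covers the whole batch rather than starting when each task starts, which is why it did not fit the per-task expiry above.
// Sketch: invokeAll with an overall timeout; the budget is shared by the batch,
// not applied per bundle.
List<Callable<String>> tasks = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    tasks.add(new Callable<String>() {
        @Override
        public String call() throws Exception {
            return "Happy"; // placeholder for the real bundle work
        }
    });
}
try {
    // Returns when every task has completed or the timeout elapses;
    // tasks that did not finish in time come back cancelled.
    List<Future<String>> results = BUNDLES_SERVICE.invokeAll(tasks, 5, TimeUnit.SECONDS);
    for (Future<String> result : results) {
        System.out.println(result.isCancelled() ? "timed out" : "completed");
    }
} catch (InterruptedException ex) {
    // the calling thread itself was interrupted while waiting
}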

Related

How to wait for the function finish (Threads in Thread)

I think I'm a little lost with threads, because I don't understand why this is not working...
I have a program that has a Server, Clients, and Nodes.
The client sends grouped data (GroupedDataFrame) to the server asking for a calculation of, e.g., averages. The server distributes this task to its Nodes, which calculate it and return their results to the Server.
My problem is that the NODE returns an empty DataFrame (without calculated results) when I use applyWithThreads (if I use the normal apply, without threads, it works fine).
From what I can tell, the line that creates the NodeResult does not wait for the previous call, applyWithThreads, to finish; as a result, it sends an empty DataFrame (if I add a wait of, for example, 2 seconds it works :/, and the commented-out loop also works, but I know that is bad practice).
I would like to solve this properly. How can I wait for applyWithThreads to finish (so that returnToServer is fully built)?
class ListenFromServer extends Thread {
    public void run() {
        while (true) {
            try {
                Object obj = sInput.readObject();
                if (obj instanceof String) {
                    display((String) obj);
                } else if (obj instanceof NodeRequestGDF) {
                    display("I have received NodeRequest from client ID: " + ((NodeRequestGDF) obj).getClientID());
                    // DataFrame returnToServer = (((NodeRequestGDF) obj).groupedDF.apply(((NodeRequestGDF) obj).getFunction()));
                    DataFrame returnToServer = (((NodeRequestGDF) obj).groupedDF.applywithThreads(((NodeRequestGDF) obj).getFunction()));
                    // while (returnToServer.size() != (((NodeRequestGDF) obj).getGroupedDF().getSize()));
                    NodeResultDF nodeResultDF = new NodeResultDF(returnToServer, ((NodeRequestGDF) obj).getClientID());
                    sendToServer(nodeResultDF);
                    display("I have returned result to Server to client ID: " + nodeResultDF.clientID);
                } else {
                    display("I have received something i do not know what is this :(");
                }
            } catch (IOException e) {
                display("Server has close the connection: " + e);
            } catch (ClassNotFoundException e2) {
                e2.printStackTrace();
            }
        }
    }
}
Here is the code of applyWithThreads:
public DataFrame applyWithThreads(Applyable fun) {
    DataFrame ret = new DataFrame();
    ArrayList<DataFrameThread> threadList = new ArrayList<>();
    for (DataFrame df : this.data) {
        DataFrameThread tmp = new DataFrameThread(df, fun, ret);
        threadList.add(tmp);
    }
    ExecutorService threadPool = Executors.newFixedThreadPool(MAX_Threads);
    for (DataFrameThread th : threadList) {
        threadPool.execute(th);
    }
    threadPool.shutdown();
    return ret;
}
and the code of DataFrameThread:
import GroupFunctions.Applyable;

public class DataFrameThread extends Thread {
    DataFrame ret;
    DataFrame DF;
    Applyable fun;

    public DataFrameThread(DataFrame df, Applyable fun, DataFrame ret) {
        this.DF = df;
        this.fun = fun;
        this.ret = ret;
    }

    @Override
    public void run() {
        DataFrame d = null;
        try {
            d = fun.apply(DF);
        } catch (InconsistentTypeException e) {
            e.printStackTrace();
        }
        synchronized (ret) {
            ret.addAnotherDF(d);
        }
    }
}
There are quite a few problems with your code. I'll try to go over them.
The critical ones:
Your DataFrameThread extends Thread, but if you use a thread-pool, as you do with ExecutorService, the threads are not created by you anymore. Instead of saying extends Thread, say implements Runnable.
Calling shutdown() on an ExecutorService does not wait for it to stop. You could call awaitTermination after you call shutdown, but that's not how you are supposed to use an ExecutorService.
One improvement is to use submit instead of execute. submit returns Future<?>, and you can call get() on a Future, which will block until it's complete.
Even better, instead of implementing Runnable, implement Callable. Callable is for tasks that return values. Instead of calling ret.addAnotherDF in your Runnable, you return your DataFrame from your callable. Then, submit will return a Future<DataFrame> and when you call get on it, it returns the DataFrame object when the thread is done.
Note: only call get on the Future when you have submitted all your tasks to the ExecutorService, not after each task (if you do it after each task, you are no longer parallelizing the problem)
Important:
don't create a new ExecutorService for each applyWithThreads call. The point of a thread-pool is to keep it around for as long as possible, as creating threads is an expensive operation. And that's why you shouldn't use shutdown and awaitTermination to find out when your tasks have completed; that's what the Future objects are for (see the sketch below).
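A minimal sketch of that approach, assuming the DataFrame, Applyable and InconsistentTypeException types from the question and the usual java.util / java.util.concurrent imports; the pool is kept as a field so it is reused across calls rather than recreated each time:
// Shared pool, created once and reused (MAX_Threads as in the question)
private final ExecutorService pool = Executors.newFixedThreadPool(MAX_Threads);

public DataFrame applyWithThreads(final Applyable fun) throws InterruptedException, ExecutionException {
    List<Future<DataFrame>> futures = new ArrayList<Future<DataFrame>>();
    for (final DataFrame df : this.data) {
        // each task returns its partial result instead of mutating a shared DataFrame
        futures.add(pool.submit(new Callable<DataFrame>() {
            public DataFrame call() throws InconsistentTypeException {
                return fun.apply(df);
            }
        }));
    }
    DataFrame ret = new DataFrame();
    // all tasks are already submitted and run in parallel; get() just waits for
    // each result in turn and rethrows failures as ExecutionException
    for (Future<DataFrame> future : futures) {
        ret.addAnotherDF(future.get());
    }
    return ret;
}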

Thread pools and stopping execution of a for loop when a counter reaches X

I'm a bit confused about thread pools and providing an exit condition for a for loop. I haven't found a decent explanation yet of how to do it properly. I have been experimenting with a few possibilities, but I'm stuck.
I have this piece of code.
@Override
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Throwable.class)
public void auditAllDomainConfigurationStatuses() {
    logger.info("Starting audit of all domain configuration statusses");
    int errorStatusCounter = 0;
    Map<SubdomainRegistryStatus, List<String>> domainsByStatus = new HashMap<SubdomainRegistryStatus, List<String>>();
    List<DomainConfigurationStatus> domains = domainConfigurationStatusDao.findAll();
    for (DomainConfigurationStatus domainConfigurationStatus : domains) {
        String domainName = domainConfigurationStatus.getDomainName();
        DomainConfigurationStatus result = domainConfigurationStatusAuditor.auditDomainConfigurationStatus(domainConfigurationStatus.getId());
        addDomainToDomainsByStatusMap(domainsByStatus, result, domainName);
        if (SubdomainRegistryStatus.ERROR.equals(result.getStatus())) {
            errorStatusCounter++;
            if (errorStatusCounter >= EMERGENCY_AUDIT_STOP_LIMIT) {
                logger.error("Emergency audit stop more then " + EMERGENCY_AUDIT_STOP_LIMIT + " records went into status ERROR");
                mailEmergencyDomainConfigurationStatusAuditStop();
                return;
            }
        } else {
            errorStatusCounter = 0;
        }
    }
    mailDomainConfigurationStatusReport(domainsByStatus);
    logger.info("Audit of all domain configuration statusses completed");
}
Somewhere this code calls the DNS of a domain to fetch its IP, and then it updates a status in the database. Quite a simple thing. However, the business wants us to stop the entire process if the status translates to ERROR X times in a row. I managed to write this quite simply with the method above. However, the DNS call to fetch the IP is slow; I can process about 6 domains per second, and we have to process over 32,000 domains. We need to get performance up, and for this multithreading is advisable.
So I started programming a task, creating a thread pool in Spring, etc. Then I realized: wait, that EMERGENCY_AUDIT_STOP_LIMIT, how can I still honor it if the counter runs over multiple threads, without any callback? So I tried a Callable instead of a Runnable so I was working with a Future, and then I came to the conclusion: what am I thinking, the Future will block on its get() method, so all I'm going to end up with is a method just as slow as, or slower than, my original implementation.
So this was my road so far, and I'm now a bit stuck: a Runnable can't throw an exception, so passing the counter to the task won't work either, and a Callable will block, so that's no option either.
If any multithreading guru has an idea I would be very grateful. Below is my latest attempt; it wasn't broken, but it was just as slow as my method above.
@Override
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Throwable.class)
public void auditAllDomainConfigurationStatuses() throws InterruptedException, ExecutionException {
    logger.info("Starting audit of all domain configuration statusses");
    int errorStatusCounter = 0;
    Map<SubdomainRegistryStatus, List<String>> domainsByStatus = new HashMap<SubdomainRegistryStatus, List<String>>();
    List<DomainConfigurationStatus> domains = domainConfigurationStatusDao.findAll();
    for (DomainConfigurationStatus domainConfigurationStatus : domains) {
        try {
            Future<Integer> futureResult = taskExecutor.submit(new DomainConfigurationAuditTask(errorStatusCounter, domainConfigurationStatusAuditor, domainConfigurationStatus.getId(), domainsByStatus, EMERGENCY_AUDIT_STOP_LIMIT));
            futureResult.get();
        } catch (Exception e) {
            logger.error("Emergency audit stop more then " + EMERGENCY_AUDIT_STOP_LIMIT + " records went into status ERROR");
            mailEmergencyDomainConfigurationStatusAuditStop();
            return;
        }
    }
    mailDomainConfigurationStatusReport(domainsByStatus);
    logger.info("Audit of all domain configuration statusses completed");
}
Here's a pretty simple solution. Basically, the task doing the work (i.e. the DNS lookup) is completely isolated and parallelizable. Part of its work, after a success or failure, is to submit a success boolean to another ExecutorService with a fixed size of 1, which can do whatever error-condition checking you want.
In this case, it simply increments an integer on consecutive errors until a maximum is reached, and then sets an error condition which the work threads (DNS lookups) all check first for a fail-fast approach, so all queued-up tasks will exit quickly after the error condition is met.
This ends up being a pretty simple way of tracking consecutive errors in a multi-threaded scenario like this, as you're single-threading the check on the responses.
I can think of a much more elegant solution using Java 8's CompletableFuture, but it sounds like that is off the table (a sketch is included after the code below).
package so.thread.errcondition;

import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class Main {

    public static Random rand = new Random();
    public static ExecutorService workers = Executors.newFixedThreadPool(5);
    // NOTE, this executor has a FIXED size of 1 for in-order processing
    public static ExecutorService watcher = Executors.newFixedThreadPool(1);
    public static AtomicBoolean errorCondition = new AtomicBoolean(false);
    public static AtomicInteger errorCount = new AtomicInteger(0);
    public static Integer MAX_ERRORS = 5;

    public static void main(String[] args) throws Exception {
        int jobs = 1000;
        for (int i = 0; i < jobs; i++) {
            workers.submit(getWork());
        }
        Thread.sleep(TimeUnit.SECONDS.toMillis(5));
    }

    // parallelizable task, the number of parallel workers is irrelevant
    public static Runnable getWork() {
        return new Runnable() {
            @Override
            public void run() {
                // fail fast
                if (errorCondition.get()) {
                    System.out.println("%%% MAX_ERRORS of [" + MAX_ERRORS + "] occurred, skipping task");
                    return;
                }
                // do work
                if (rand.nextBoolean()) {
                    // GOOD JOB
                    System.out.println("+++ GOOD RESULT");
                    submitDoneTask(true);
                } else {
                    // ERROR
                    System.out.println("*** BAD RESULT");
                    submitDoneTask(false);
                }
            }
        };
    }

    public static void submitDoneTask(final boolean success) {
        watcher.submit(new Runnable() {
            @Override
            public void run() {
                if (!errorCondition.get() && success) {
                    errorCount.set(0);
                } else {
                    int errors = errorCount.incrementAndGet();
                    if (errors >= MAX_ERRORS) {
                        errorCondition.set(true);
                    }
                }
            }
        });
    }
}
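For completeness, here is a minimal sketch of the CompletableFuture variant mentioned above (Java 8+), reusing the same errorCount / errorCondition / MAX_ERRORS idea; the fake lookup in main is just a stand-in for the real DNS call.
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class CompletableFutureSketch {

    static final ExecutorService workers = Executors.newFixedThreadPool(5);
    // single-threaded executor so the error bookkeeping happens in order
    static final ExecutorService watcher = Executors.newSingleThreadExecutor();
    static final AtomicBoolean errorCondition = new AtomicBoolean(false);
    static final AtomicInteger errorCount = new AtomicInteger(0);
    static final int MAX_ERRORS = 5;

    static CompletableFuture<Void> submitJob(final Callable<Boolean> dnsLookup) {
        return CompletableFuture
                // run the lookup on the worker pool
                .supplyAsync(() -> {
                    try {
                        return dnsLookup.call();   // true = lookup succeeded
                    } catch (Exception e) {
                        return false;              // treat exceptions as failures
                    }
                }, workers)
                // route the success/failure bookkeeping through the single-threaded watcher
                .thenAcceptAsync(success -> {
                    if (success) {
                        errorCount.set(0);
                    } else if (errorCount.incrementAndGet() >= MAX_ERRORS) {
                        errorCondition.set(true);  // workers can check this to fail fast
                    }
                }, watcher);
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 20; i++) {
            submitJob(() -> Math.random() > 0.5); // fake lookup with ~50% failure rate
        }
        Thread.sleep(2000);
        workers.shutdown();
        watcher.shutdown();
    }
}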

Execution timeout strategy for a Java method

I have a system which services a large number of transactions.
The timeout strategy applies to only a part of a transaction.
A complete transaction here consists of some execution workflow: pre-processing, a remote call, post-processing, etc.
For example,
// some code
// START TIMER
try {
    CallInput remoteInput = fInputProcessor.transform(callContext);
    CallOutput remoteOutput = fRemoteInvoker.invoke(remoteInput);
    TransactionOutput output = fOutputProcessor.transform(remoteOutput);
} catch (TimeoutException ex) {
}
// some code
Say the timeout is 500 ms. It may occur during the input processing, the remote call, or the output processing.
Can you list some possible ways of generating the timeout after 500 ms? Assume I cannot split the 3 blocks into a new thread.
Consider using java.util.concurrent.Future.get(long, TimeUnit).
Runnable yourTask = <wrap your work code in a Runnable>;
Future future = threadpool.submit(yourTask);
try {
    Object result = future.get(10, TimeUnit.SECONDS);
} catch (Exception e) {
    <handle exception, e.g. TimeoutException; you may also want future.cancel(true) to interrupt the task>
} finally {
    <your closing code>
}
Here is an example of how to create/destroy a thread pool:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

private ExecutorService threadpool;

public void start() {
    threadpool = Executors.newCachedThreadPool();
}

public void stop() throws InterruptedException {
    threadpool.shutdown();
    if (false == threadpool.awaitTermination(1, TimeUnit.SECONDS))
        threadpool.shutdownNow();
}
You can use Guava with code such as this:
TimeLimiter limiter = new SimpleTimeLimiter();
limiter.callWithTimeout(new Callable<Void>() {
    public Void call() throws Exception {
        CallInput remoteInput = fInputProcessor.transform(callContext);
        CallOutput remoteOutput = fRemoteInvoker.invoke(remoteInput);
        TransactionOutput output = fOutputProcessor.transform(remoteOutput);
        return null;
    }
}, 500, TimeUnit.MILLISECONDS, false);
DelayQueues (http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/DelayQueue.html) are also useful. You add elements to the queue that implement the Delayed interface, which is used to define the expiration of the Delayed object. Then you can poll the queue for elements that have expired and do something with them (a small self-contained example follows the fragment below):
while (!Thread.currentThread().isInterrupted()) {
    doSomethingWithExpiredObj();
}
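A small, self-contained sketch of that pattern; ExpiringTask and the 500 ms timeout are illustrative, not from the original post.
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueExample {

    // Each element reports how much time is left before it "expires";
    // take() only returns elements whose delay has elapsed.
    static class ExpiringTask implements Delayed {
        private final String name;
        private final long expiresAtNanos;

        ExpiringTask(String name, long timeoutMillis) {
            this.name = name;
            this.expiresAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(expiresAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<ExpiringTask> queue = new DelayQueue<>();
        queue.put(new ExpiringTask("call-1", 500)); // expires after 500 ms

        // take() blocks until the head of the queue has expired
        ExpiringTask expired = queue.take();
        System.out.println(expired.name + " timed out");
    }
}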

Timeout the thread if any bundle is taking lot of time

I am working on a project in which I will have different bundles. As an example, suppose I have 5 bundles, and each of those bundles has a method named process.
Currently I am calling the process method of all 5 bundles sequentially, one by one, and then writing to the database. But that is not what I want.
I need to call the process method of all 5 bundles in parallel using multithreading and then write to the database.
I also want some timeout feature for those threads. I will have a default timeout setting for all the bundle threads. If any bundle takes longer than my timeout setting, I want to time out that thread and log that this bundle timed out because it was taking too long.
I hope the question is clear enough...
Below is the code I have so far, which calls the process method sequentially, one by one.
public void processEvents(final Map<String, Object> eventData) {
    final Map<String, String> outputs = (Map<String, String>) eventData.get(BConstants.EVENT_HOLDER);
    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        final Map<String, String> response = entry.getPlugin().process(outputs);
        // write to the database.
        System.out.println(response);
    }
}
I am not sure what the best and most efficient way to do this is, because in the future I might have more than 5 bundles.
Can anyone provide an example of how I can achieve this? Any help will be appreciated. Thanks.
It is not too difficult to achieve what you want, but you should be aware that both with concurrency and timeouts you get added complexity, especially when it comes to error handling.
For instance, threads that are running when a timeout occurs may keep running after the timeout. Only well-behaved threads that cooperate by handling an interrupt signal will be able to stop successfully in the middle of processing.
You must also make sure that individual bundle entries may be processed in parallel, i.e. that they are thread safe. If they modify some shared resource while processing, then you might get strange errors as a result.
I was also wondering whether you wanted to include the database writing to each of these threads. If so, you will need to handle interruptions while writing to the database; e.g. by rolling back a transaction.
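To make the interruption point concrete, here is a minimal, hypothetical sketch of what an interrupt-aware process implementation could look like; doOneUnitOfWork and the loop structure are illustrative, not taken from the actual plugins.
// Hypothetical interrupt-aware process(...) body: it checks the interrupt flag
// between units of work so that future.cancel(true) or an invokeAll timeout can
// actually stop it part-way through.
public Map<String, String> process(Map<String, String> outputs) throws InterruptedException {
    Map<String, String> result = new HashMap<String, String>();
    for (Map.Entry<String, String> item : outputs.entrySet()) {
        if (Thread.currentThread().isInterrupted()) {
            // stop early; the caller will see this bundle as cancelled/timed out
            throw new InterruptedException("bundle processing interrupted");
        }
        result.put(item.getKey(), doOneUnitOfWork(item.getValue())); // doOneUnitOfWork is illustrative
    }
    return result;
}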
Anyways, to get thread pooling and a total timeout for all threads, you can use ExecutorService with (for instance) a fixed pool size and invoke all threads using the invokeAll method.
The following attempt is most probably flawed and error handling is by no means complete, but it should give you a starting point.
First, an implementation of Callable for your threads:
public class ProcessBundleHolderEntry implements Callable<Object> {

    private BundleRegistration.BundlesHolderEntry entry;
    private Map<String, String> outputs;

    public ProcessBundleHolderEntry(BundleRegistration.BundlesHolderEntry entry, Map<String, String> outputs) {
        this.entry = entry;
        this.outputs = outputs;
    }

    public Object call() throws Exception {
        final Map<String, String> response = entry.getPlugin().process(outputs);
        // write to the database.
        System.out.println(response);
        return response;
    }
}
and now, the modified processEvents method:
public void processEvents(final Map<String, Object> eventData) {
    ExecutorService pool = Executors.newFixedThreadPool(5);
    List<ProcessBundleHolderEntry> entries = new ArrayList<ProcessBundleHolderEntry>();
    Map<String, String> outputs = (Map<String, String>) eventData.get(BConstants.EVENT_HOLDER);
    for (BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        ProcessBundleHolderEntry processBundleHolderEntry = new ProcessBundleHolderEntry(entry, outputs);
        entries.add(processBundleHolderEntry);
    }
    try {
        List<Future<Object>> futures = pool.invokeAll(entries, 30, TimeUnit.SECONDS);
        for (int i = 0; i < futures.size(); i++) {
            // This works since the list of future objects is in the
            // same sequential order as the list of entries
            Future<Object> future = futures.get(i);
            ProcessBundleHolderEntry entry = entries.get(i);
            // invokeAll cancels tasks that did not finish within the timeout;
            // cancelled futures still report isDone() == true, so check isCancelled()
            if (future.isCancelled()) {
                // log error for this entry
            }
        }
    } catch (InterruptedException e) {
        // handle this exception!
    }
}
The reply given by Steinar is correct, but that solution is not very scalable: as you said, "in future, it might be possible that I will have more than 5 bundles," and I am sure you might be adding bundles at runtime, or afterwards as some tasks complete. There might also be a limit of at most 'n' bundles executing in parallel; in that case, executorService.invokeAll will cancel pending tasks that have not started once the specified timeout is reached.
I have created a simple sample which might be useful to you. This example provides flexibility in how many threads you want to run in parallel, and you can add tasks or bundles as and when you need.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import testproject.Bundles;
import testproject.ExecuteTimedOperation;

public class ParallelExecutor {

    public static int NUMBER_OF_PARALLEL_POLL = 4;

    public static void main(String[] args) {
        ExecutorService executorService = Executors.newFixedThreadPool(NUMBER_OF_PARALLEL_POLL);
        // Create the bundle of objects you want
        List<Bundles> lstBun = new ArrayList<Bundles>();
        for (Bundles bundles : lstBun) {
            final ExecuteTimedOperation ope = new ExecuteTimedOperation(bundles, new HashMap<String, Object>());
            executorService.submit(new Runnable() {
                public void run() {
                    ope.ExecuteTask();
                }
            });
        }
    }
}
package testproject;

import java.util.Map;
import java.util.Random;

public class ExecuteTimedOperation {

    Bundles _bun;
    Map<String, Object> _eventData;
    public static long TimeInMilleToWait = 60 * 1000; // Time which each thread should wait to complete the task

    public ExecuteTimedOperation(Bundles bun, Map<String, Object> eventData) {
        _bun = bun;
        _eventData = eventData;
    }

    public void ExecuteTask() {
        try {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    _bun.processEvents(_eventData);
                }
            });
            t.start();
            t.join(TimeInMilleToWait);
            if (t.isAlive()) {
                // join(timeout) simply returns when the wait times out, so check
                // isAlive() to detect the timeout; log back that this bundle timed
                // out because it was taking too long, and optionally interrupt it
                t.interrupt();
            }
        } catch (InterruptedException e) {
            // the pool thread itself was interrupted while waiting
        } catch (Exception e) {
            // All other types of exception will be handled here
        }
    }
}
package testproject;

import java.util.Map;

public class Bundles {
    public void processEvents(final Map<String, Object> eventData) {
        // THE code you want to execute
    }
}

How to call same method of a different class in multithreaded way

I have a method named process in two of my classes, let's say CLASS-A and CLASS-B. In the loop below, I am calling the process method of both of my classes sequentially, meaning one by one, and it works fine, but that is not the way I am looking for.
for (ModuleRegistration.ModulesHolderEntry entry : ModuleRegistration.getInstance()) {
    final Map<String, String> response = entry.getPlugin().process(outputs);
    // write to database
    System.out.println(response);
}
Is there any way I can call the process method of both of my classes in a multithreaded way, meaning one thread calls the process method of CLASS-A and a second thread calls the process method of CLASS-B?
After that, I was thinking of writing the data returned by the process method into the database, so I can have one more thread for writing to the database.
Below is the code that I came up with in a multithreaded way, but somehow it is not running at all.
public void writeEvents(final Map<String, Object> data) {
    // Three threads: one thread for the database writer, two threads for the plugin processors
    final ExecutorService executor = Executors.newFixedThreadPool(3);
    final BlockingQueue<Map<String, String>> queue = new LinkedBlockingQueue<Map<String, String>>();

    @SuppressWarnings("unchecked")
    final Map<String, String> outputs = (Map<String, String>) data.get(ModelConstants.EVENT_HOLDER);

    for (final ModuleRegistration.ModulesHolderEntry entry : ModuleRegistration.getInstance()) {
        executor.submit(new Runnable() {
            public void run() {
                final Map<String, String> response = entry.getPlugin().process(outputs);
                // put the response map in the queue for the database to read
                queue.offer(response);
            }
        });
    }

    Future<?> future = executor.submit(new Runnable() {
        public void run() {
            Map<String, String> map;
            try {
                while (true) {
                    // blocks until a map is available in the queue, or until interrupted
                    map = queue.take();
                    // write map to database
                    System.out.println(map);
                }
            } catch (InterruptedException ex) {
                // IF we're catching InterruptedException then this means that future.cancel(true)
                // was called, which means that the plugin processors are finished;
                // process the rest of the queue and then exit
                while ((map = queue.poll()) != null) {
                    // write map to database
                    System.out.println(map);
                }
            }
        }
    });

    // this interrupts the database thread, which sends it into its catch block
    // where it processes the rest of the queue and exits
    future.cancel(true); // interrupt database thread

    // wait for the threads to finish
    try {
        executor.awaitTermination(5, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        // log error here
    }
}
But if I remove the last line, executor.awaitTermination(5, TimeUnit.MINUTES);, then it starts running fine, and after some time I always get an error like this:
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130827.142415.16456.0001.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130827.142415.16456.0001.phd
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
Can anybody help me figure out what the problem is and what I am doing wrong in my code above? If I run it sequentially, I don't get any errors and it works fine.
Also, is there a better way of doing this compared to the way I am doing it? In the future I could have more plugin processors than just two.
What I am trying to do is: call the process method of both of my classes in a multithreaded way and then write to the database, because my process method returns a Map.
Any help will be appreciated, and I am looking for a workable example if possible. Thanks for the help.
The code snippet you pasted has a few issues; if you fix them, this should work.
1. You are using an infinite loop to fetch elements from the blocking queue and trying to break out of it using the future. This is definitely not a good approach. The problem is that your database thread might never run at all, because the caller thread may cancel its future even before it runs. This is error-prone.
- You should run the while loop a fixed number of times (you already know how many producers there are, or how many times you are going to get a response).
Also, tasks submitted to an executor service should be independent tasks. Here, your database task depends on the execution of the other tasks; this can also lead to deadlock if your execution policy changes. For example, if you use a single-thread pool executor and the database thread is scheduled first, it will just block waiting for producers to add data to the queue.
A good way is to create a task that retrieves the data and updates the database in the same thread.
Or retrieve all the responses first and then execute the database operations in parallel (a sketch of this variant follows the code below).
public void writeEvents(final Map<String, Object> data) {
    final ExecutorService executor = Executors.newFixedThreadPool(3);

    @SuppressWarnings("unchecked")
    final Map<String, String> outputs = (Map<String, String>) data.get(ModelConstants.EVENT_HOLDER);

    for (final ModuleRegistration.ModulesHolderEntry entry : ModuleRegistration.getInstance()) {
        executor.submit(new Runnable() {
            public void run() {
                try {
                    final Map<String, String> response = entry.getPlugin().process(outputs);
                    // process the response and update the database.
                    System.out.println(response);
                } catch (Throwable e) {
                    // handle exception
                } finally {
                    // clean up resources
                }
            }
        });
    }
    // Initiates an orderly shutdown: previously submitted tasks still run to
    // completion, but note that this call does not block waiting for them.
    executor.shutdown();
}
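For the second variant mentioned above (collect all plugin responses first, then write), a sketch could look roughly like this, assuming the same ModuleRegistration / ModelConstants types from the question and the usual java.util and java.util.concurrent imports:
// Sketch: collect all responses via Futures, then write them from the calling thread
public void writeEvents(final Map<String, Object> data) throws InterruptedException {
    final ExecutorService executor = Executors.newFixedThreadPool(3);

    @SuppressWarnings("unchecked")
    final Map<String, String> outputs = (Map<String, String>) data.get(ModelConstants.EVENT_HOLDER);

    List<Future<Map<String, String>>> futures = new ArrayList<Future<Map<String, String>>>();
    for (final ModuleRegistration.ModulesHolderEntry entry : ModuleRegistration.getInstance()) {
        futures.add(executor.submit(new Callable<Map<String, String>>() {
            public Map<String, String> call() {
                return entry.getPlugin().process(outputs);
            }
        }));
    }

    // All tasks are already submitted, so the plugins run in parallel;
    // get() just waits for each result in turn.
    for (Future<Map<String, String>> future : futures) {
        try {
            Map<String, String> response = future.get();
            // write response to the database
            System.out.println(response);
        } catch (ExecutionException e) {
            // a plugin threw an exception; log it
        }
    }
    executor.shutdown();
}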
OK, here's some code for the comments I suggested above. Disclaimer: I'm not sure whether it works or even compiles, or whether it solves the problem. But the idea is to take control of the cancellation process instead of relying on future.cancel, which I suspect could cause problems.
class CheckQueue implements Runnable {

    private volatile boolean cancelled = false;

    public void cancel() { cancelled = true; }

    public void run() {
        Map<String, String> map;
        try {
            while (!cancelled) {
                // blocks until a map is available in the queue, or until interrupted;
                // note that if the queue is empty, take() keeps blocking, so the caller
                // may still need to interrupt this thread or offer a sentinel element
                // for the cancelled flag to be noticed
                map = queue.take();
                if (cancelled) break;
                // write map to database
                System.out.println(map);
            }
        } catch (InterruptedException e) {
        }
        // drain whatever is left in the queue
        while ((map = queue.poll()) != null) {
            // write map to database
            System.out.println(map);
        }
    }
}

CheckQueue queueChecker = new CheckQueue();
Future<?> future = executor.submit(queueChecker);

// this signals the database thread to stop; once it notices the flag
// (or is interrupted) it processes the rest of the queue and exits
queueChecker.cancel();
