Tapestry Hibernate session closed after exceeding ExecutorService fixed thread pool - java

I'm a tapestry-hibernate user and I'm running into an issue where my session stays closed once I exceed the size of Executors.newFixedThreadPool(1).
I have the following code, which works perfectly for the first thread, while the remaining threads hit a closed-session exception. If I increase the thread pool size to 10, all the threads run without issue. As soon as I exceed the fixed thread pool size, I get the session-closed exception, and I don't know how to reopen the session since it's managed by tapestry-hibernate. If I use newCachedThreadPool, everything works perfectly. Does anybody know what might be happening here?
public void setupRender() {
    ExecutorService executorService = Executors.newFixedThreadPool(1);
    final ConcurrentHashMap<String, Computer> map = new ConcurrentHashMap<>();
    final String key = "myKey";
    final Date date = new Date();
    List<Future> futures = new ArrayList<>();

    for (int i = 0; i < 10; i++) {
        final int thread = i;
        Future future = executorService.submit(new Callable() {
            @Override
            public String call() {
                try {
                    Computer computer = new Computer("Test Computer thread");
                    computer = getComputer(map, key, key, computer);

                    Monitor monitor = new Monitor();
                    monitor.setComputer(computer);
                    session.save(monitor);
                    session.flush();
                    System.out.println("thread " + thread);
                    try {
                        sessionManager.commit();
                    } catch (HibernateException ex) {
                        sessionManager.abort();
                    } finally {
                        session.close();
                    }
                } catch (Exception ex) {
                    System.out.println("ex " + ex);
                }
                System.out.println(new Date().getTime() - date.getTime());
                return "completed";
            }
        });
        futures.add(future);
    }

    for (Future future : futures) {
        try {
            System.out.println(future.get());
        } catch (InterruptedException | ExecutionException ex) {
            Logger.getLogger(MultiThreadDemo.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}

public synchronized Computer getComputer(ConcurrentHashMap<String, Computer> map, String key, String thread, Computer computer) {
    if (map.putIfAbsent(key, computer) == null) {
        session.save(computer);
    } else {
        computer = map.get(key);
    }
    return computer;
}

I've told you this before: you MUST either use ParallelExecutor OR call PerThreadManager.cleanup(). You need to understand that tapestry-hibernate has PerThread-scoped services that MUST be cleaned up if you are using them outside of a normal request/response (or ParallelExecutor).
I also don't think you should be calling session.close(). You should mimic CommitAfterWorker.
It would probably look something like:
@Inject PerThreadManager perThreadManager;
@Inject HibernateSessionManager sessionManager; // this is a proxy to a per-thread value
@Inject Session session; // this is a proxy to a per-thread value

public void someMethod() {
    ExecutorService executorService = ...;

    executorService.submit(new Callable() {
        public String call() {
            try {
                Monitor monitor = ...
                session.save(monitor);
                session.flush(); // optional
                sessionManager.commit();
            } catch (Exception ex) {
                sessionManager.abort();
            } finally {
                // this allows Session and HibernateSessionManager to
                // clean up after themselves
                perThreadManager.cleanup();
            }
            return ...
        }
    });
}
If you choose to use ParallelExecutor (and Invokable) instead of Executors.newFixedThreadPool(1), you can remove the references to PerThreadManager, since it automatically cleans up the thread.
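For completeness, a rough sketch of the ParallelExecutor route might look like the following (assuming the same injected Session and HibernateSessionManager as above; the Monitor setup is elided just as in the previous snippet):
@Inject ParallelExecutor parallelExecutor;

public void someMethod() {
    // ParallelExecutor runs the Invokable on Tapestry's own thread pool and
    // performs the per-thread cleanup itself when the invocation completes.
    Future<String> future = parallelExecutor.invoke(new Invokable<String>() {
        @Override
        public String invoke() {
            try {
                Monitor monitor = ...
                session.save(monitor);
                sessionManager.commit();
            } catch (Exception ex) {
                sessionManager.abort();
            }
            return "completed";
        }
    });
}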

Related

Java Using CountDownLatch to poll a method until a success response

I am trying to call a method every 60 seconds until it returns a success response; the method itself calls a REST endpoint on a different service. Right now I am using a do-while loop with
Thread.sleep(60000);
to make the main thread wait 60 seconds, which I feel is not the ideal way due to concurrency issues.
I came across CountDownLatch and tried using
CountDownLatch latch = new CountDownLatch(1);
boolean processingCompleteWithin60Second = latch.await(60, TimeUnit.SECONDS);
@Override
public void run() {
    String processStat = null;
    try {
        processStat = getStat(processStatId);
        if ("SUCCEEDED".equals(processStat)) {
            latch.countDown();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I have the run method in a different class which implements Runnable. I'm not able to get this working. Any idea what is wrong?
You could use a CompletableFuture instead of CountDownLatch to return the result:
CompletableFuture<String> future = new CompletableFuture<>();
invokeYourLogicInAnotherThread(future);
String result = future.get(); // this blocks
And in another thread (possibly in a loop):
@Override
public void run() {
    String processStat = null;
    try {
        processStat = getStat(processStatId);
        if ("SUCCEEDED".equals(processStat)) {
            future.complete(processStat);
        }
    } catch (Exception e) {
        future.completeExceptionally(e);
    }
}
future.get() will block until something is submitted via the complete() method and will return the submitted value, or it will throw the exception supplied via completeExceptionally(), wrapped in an ExecutionException.
There is also a get() variant with a timeout:
String result = future.get(60, TimeUnit.SECONDS);
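Putting those pieces together, a minimal, self-contained sketch might look like this (the getStat() call, processStatId, and the overall 10-minute give-up point are assumptions based on the question, not part of the original code):
import java.util.concurrent.*;

public class PollUntilSuccess {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public String pollForSuccess(String processStatId) throws Exception {
        CompletableFuture<String> future = new CompletableFuture<>();

        // Poll every 60 seconds until the status is SUCCEEDED.
        ScheduledFuture<?> poll = scheduler.scheduleWithFixedDelay(() -> {
            try {
                String processStat = getStat(processStatId); // assumed to call the REST endpoint
                if ("SUCCEEDED".equals(processStat)) {
                    future.complete(processStat);
                }
            } catch (Exception e) {
                future.completeExceptionally(e);
            }
        }, 0, 60, TimeUnit.SECONDS);

        try {
            return future.get(10, TimeUnit.MINUTES); // overall give-up point, chosen arbitrarily here
        } finally {
            poll.cancel(true);     // stop polling once we have a result or gave up
            scheduler.shutdown();
        }
    }

    private String getStat(String processStatId) {
        // placeholder for the real REST call
        return "SUCCEEDED";
    }
}
The scheduled poll is cancelled in the finally block either way, so the scheduler does not keep running after a result or a timeout.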
Finally got it to work using the Executor framework.
final int[] value = new int[1];
pollExecutor.scheduleWithFixedDelay(new Runnable() {
    Map<String, String> statMap = null;

    @Override
    public void run() {
        try {
            statMap = coldService.doPoll(id);
        } catch (Exception e) {
        }
        if (statMap != null) {
            for (Map.Entry<String, String> entry : statMap.entrySet()) {
                if ("failed".equals(entry.getValue())) {
                    value[0] = 2;
                    pollExecutor.shutdown();
                }
            }
        }
    }
}, 0, 5, TimeUnit.MINUTES);

try {
    pollExecutor.awaitTermination(40, TimeUnit.MINUTES);
} catch (InterruptedException e) {
}

Java Executor Service Connection Pool

I am attempting to use connection pooling with an ExecutorService.
I am facing a problem when the connection pool config is initialSize=3, maxTotal=5, maxIdle=5.
I need to process 10 services at a time every minute, but it picks up only 5 services per minute.
If I configure initialSize=3, maxTotal=10, maxIdle=10, then it picks up 10 services per minute.
I am new to multithreading and connection pooling. Below is my code snippet. Please provide suggestions.
public class TestScheduledExecutorService {
    public static void main(String a[]) {
        ScheduledExecutorService service = null;
        try {
            TestObject runnableBatch = new TestObject() {
                public void run() {
                    testMethod();
                }
            };
            service = Executors.newSingleThreadScheduledExecutor();
            service.scheduleAtFixedRate(runnableBatch, 0, 30, TimeUnit.SECONDS);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public class TestObject implements Runnable {
    public void testMethod(int inc) {
        ExecutorService service = null;
        try {
            service = Executors.newFixedThreadPool(10);
            for (int i = 0; i < 10; i++) {
                service.submit(new TestService());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void run() {
    }
}

public class TestService implements Callable {
    Connection conn = null;

    public void process(Connection conn) {
        try {
            if (conn != null) {
                System.out.println("Thread & Connection pool conn : " + Thread.currentThread() + " :: " + conn);
                // service process here
            } else {
                System.out.println("Connection pool conn is null : ");
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
        }
    }

    @Override
    public Object call() throws Exception {
        ConnectionPoolTest cp = ConnectionPoolTest.getInstance();
        BasicDataSource bds = cp.getBasicDataSource();
        conn = bds.getConnection();
        System.out.println(" call() "); // it prints only 5 times per minute even though there are 10 services
        process(conn);
        return null;
    }
}

public class ConnectionPoolTest {
    private static ConnectionPoolTest dataSource = new ConnectionPoolTest();
    private static BasicDataSource basicDataSource = null;

    private ConnectionPoolTest() {
    }

    public static ConnectionPoolTest getInstance() {
        if (dataSource == null)
            dataSource = new ConnectionPoolTest();
        return dataSource;
    }

    public BasicDataSource getBasicDataSource() throws Exception {
        try {
            basicDataSource = new BasicDataSource();
            basicDataSource.setInitialSize(3);
            basicDataSource.setMaxTotal(10);
            basicDataSource.setMaxIdle(10);
        } catch (Exception e) {
            throw e;
        }
        return basicDataSource;
    }
}
initialSize, maxTotal, and maxIdle are connection-pool (BasicDataSource) settings, and they effectively cap how many tasks can run in parallel, because each task needs to borrow a connection:
initialSize: the number of connections created up front when the pool is initialized.
maxTotal: the maximum number of connections that can exist at peak load.
maxIdle: the number of idle connections kept open even when load drops below that level.
As you mentioned, you want to pick up 10 tasks in parallel, so maxTotal should be set to 10. initialSize can be set to whatever you think is optimal at start-up, say 3 to 5. maxIdle is the number of connections you want to keep open while idle; there is no standard recommendation, and the value depends on several factors, such as:
Distribution of the tasks submitted during the minute
Duration of each task
Urgency of executing those tasks in parallel
Since you need 10 parallel tasks, you will have to configure maxTotal as 10, given that your task distribution and duration cause overlap. If each task is short and the distribution is even, you could also get by with a lower number, as sketched below.
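A rough sketch of that fix, keeping the question's class and method names (sharing one BasicDataSource and returning connections promptly are my assumptions, not taken from the original code):
public class ConnectionPoolTest {
    private static final ConnectionPoolTest dataSource = new ConnectionPoolTest();
    private static BasicDataSource basicDataSource;

    private ConnectionPoolTest() {
    }

    public static ConnectionPoolTest getInstance() {
        return dataSource;
    }

    // Create the pool once and reuse it, instead of rebuilding it on every call.
    public synchronized BasicDataSource getBasicDataSource() {
        if (basicDataSource == null) {
            basicDataSource = new BasicDataSource();
            basicDataSource.setInitialSize(3);
            basicDataSource.setMaxTotal(10); // must be >= the number of parallel tasks
            basicDataSource.setMaxIdle(10);
        }
        return basicDataSource;
    }
}

@Override
public Object call() throws Exception {
    BasicDataSource bds = ConnectionPoolTest.getInstance().getBasicDataSource();
    try (Connection conn = bds.getConnection()) { // close() returns the connection to the pool
        process(conn);
    }
    return null;
}
Closing each connection in a try-with-resources block returns it to the pool, so maxTotal, rather than leaked connections, is what limits parallelism.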

How can I terminate Tasks that have timed out in multithreading?

I need to make a library that will have both synchronous and asynchronous methods in it.
executeSynchronous() - waits until I have a result, returns the result.
executeAsynchronous() - returns a Future immediately which can be processed after other things are done, if needed.
Core Logic of my Library
The customer will use our library by passing a DataKey builder object. We will construct a URL from that DataKey, make an HTTP client call to that URL, and, after we get the response back as a JSON string, return that JSON string to the customer as-is in a DataResponse object. Some customers will call executeSynchronous() and some will call executeAsynchronous(), which is why I need to provide the two methods separately in my library.
Interface:
public interface Client {
    // for synchronous
    public DataResponse executeSynchronous(DataKey key);

    // for asynchronous
    public Future<DataResponse> executeAsynchronous(DataKey key);
}
And then I have my DataClient which implements the above Client interface:
public class DataClient implements Client {
    private RestTemplate restTemplate = new RestTemplate();
    private ExecutorService executor = Executors.newFixedThreadPool(10);

    // for synchronous call
    @Override
    public DataResponse executeSynchronous(DataKey key) {
        DataResponse dataResponse = null;
        Future<DataResponse> future = null;
        try {
            future = executeAsynchronous(key);
            dataResponse = future.get(key.getTimeout(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException ex) {
            PotoLogging.logErrors(ex, DataErrorEnum.TIMEOUT_ON_CLIENT, key);
            dataResponse = new DataResponse(null, DataErrorEnum.TIMEOUT_ON_CLIENT, DataStatusEnum.ERROR);
            // does this look right the way I am doing it?
            future.cancel(true); // terminating tasks that have timed out
        } catch (Exception ex) {
            PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
            dataResponse = new DataResponse(null, DataErrorEnum.CLIENT_ERROR, DataStatusEnum.ERROR);
        }
        return dataResponse;
    }

    // for asynchronous call
    @Override
    public Future<DataResponse> executeAsynchronous(DataKey key) {
        Future<DataResponse> future = null;
        try {
            Task task = new Task(key, restTemplate);
            future = executor.submit(task);
        } catch (Exception ex) {
            PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
        }
        return future;
    }
}
Simple class which will perform the actual task:
public class Task implements Callable<DataResponse> {
    private DataKey key;
    private RestTemplate restTemplate;

    public Task(DataKey key, RestTemplate restTemplate) {
        this.key = key;
        this.restTemplate = restTemplate;
    }

    @Override
    public DataResponse call() {
        DataResponse dataResponse = null;
        String response = null;
        try {
            String url = createURL();
            response = restTemplate.getForObject(url, String.class);
            // it is a successful response
            dataResponse = new DataResponse(response, DataErrorEnum.NONE, DataStatusEnum.SUCCESS);
        } catch (RestClientException ex) {
            PotoLogging.logErrors(ex, DataErrorEnum.SERVER_DOWN, key);
            dataResponse = new DataResponse(null, DataErrorEnum.SERVER_DOWN, DataStatusEnum.ERROR);
        } catch (Exception ex) {
            PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
            dataResponse = new DataResponse(null, DataErrorEnum.CLIENT_ERROR, DataStatusEnum.ERROR);
        }
        return dataResponse;
    }

    // create a URL by using the key object
    private String createURL() {
        String url = somecode;
        return url;
    }
}
Problem statement:
When I started working on this solution, I was not terminating tasks that had timed out. I was reporting the timeout to the client, but the task kept running in the thread pool (potentially occupying one of my limited 10 threads for a long time). So I did some research online and found that I can cancel tasks that have timed out by calling cancel on the Future, as shown below:
future.cancel(true);
But I wanted to make sure: does the way I am cancelling timed-out tasks in my executeSynchronous method look right?
Since I am calling cancel() on the Future, which will stop the task from running if it is still in the queue, I am not sure whether what I am doing is right. What is the right approach?
If there is a better way, can anyone provide an example?
If the task is still in the queue, then cancelling it by simply calling future.cancel() is fine, but obviously you don't know whether it is still in the queue. Also, even if you ask the Future to interrupt the task, that may not work, because your task can still be doing something that ignores the thread's interrupted status.
So you can use future.cancel(true), but you need to make sure that your task (thread) actually respects the interrupted status. For example, since you make an HTTP call, you might need to close the HTTP client resource as soon as the thread is interrupted.
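Before the fuller example, here is a minimal illustration of what respecting the interrupted status can look like (a hypothetical loop-style task, not the HTTP case):
class InterruptAwareTask implements Callable<String> {
    @Override
    public String call() {
        // Check the interrupted flag between units of work, so that
        // future.cancel(true) actually makes the task stop early.
        while (!Thread.currentThread().isInterrupted()) {
            doOneUnitOfWork(); // hypothetical helper doing a small slice of the job
        }
        return "stopped";
    }

    private void doOneUnitOfWork() {
        // a small, bounded piece of work goes here
    }
}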
Please refer to the example below.
I have tried to implement a task-cancellation scenario. Normally a thread can check isInterrupted() and try to terminate itself, but this becomes more complex when you are using thread pool executors and Callables, and when the task is not really of the form while (!Thread.isInterrupted()) { /* execute task */ }.
In this example, the task writes a file (I did not use an HTTP call, to keep it simple). A thread pool executor starts running the task, but the caller wants to cancel it after just 100 milliseconds. The Future sends the interrupt signal to the thread, but the Callable cannot check it immediately while writing to the file. So to make cancellation happen, the Callable maintains a list of the IO resources it is going to use, and as soon as the Future wants to cancel the task it simply closes all of those IO resources, which terminates the task with an IOException, and the thread then finishes.
public class CancellableTaskTest {
    public static void main(String[] args) throws Exception {
        CancellableThreadPoolExecutor threadPoolExecutor =
                new CancellableThreadPoolExecutor(0, 10, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
        long startTime = System.currentTimeMillis();
        Future<String> future = threadPoolExecutor.submit(new CancellableTask());
        while (System.currentTimeMillis() - startTime < 100) {
            Thread.sleep(10);
        }
        System.out.println("Trying to cancel task");
        future.cancel(true);
    }
}

class CancellableThreadPoolExecutor extends ThreadPoolExecutor {
    public CancellableThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
    }

    @Override
    protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
        return new CancellableFutureTask<T>(callable);
    }
}

class CancellableFutureTask<V> extends FutureTask<V> {
    private WeakReference<CancellableTask> weakReference;

    public CancellableFutureTask(Callable<V> callable) {
        super(callable);
        if (callable instanceof CancellableTask) {
            this.weakReference = new WeakReference<CancellableTask>((CancellableTask) callable);
        }
    }

    @Override
    public boolean cancel(boolean mayInterruptIfRunning) {
        boolean result = super.cancel(mayInterruptIfRunning);
        if (weakReference != null) {
            CancellableTask task = weakReference.get();
            if (task != null) {
                try {
                    task.cancel();
                } catch (Exception e) {
                    e.printStackTrace();
                    result = false;
                }
            }
        }
        return result;
    }
}

class CancellableTask implements Callable<String> {
    private volatile boolean cancelled;
    private final Object lock = new Object();
    private LinkedList<Object> cancellableResources = new LinkedList<Object>();

    @Override
    public String call() throws Exception {
        if (!cancelled) {
            System.out.println("Task started");
            // write file
            File file = File.createTempFile("testfile", ".txt");
            BufferedWriter writer = new BufferedWriter(new FileWriter(file));
            synchronized (lock) {
                cancellableResources.add(writer);
            }
            try {
                long lineCount = 0;
                while (lineCount++ < 100000000) {
                    writer.write("This is a test text at line: " + lineCount);
                    writer.newLine();
                }
                System.out.println("Task completed");
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                writer.close();
                file.delete();
                synchronized (lock) {
                    cancellableResources.clear();
                }
            }
        }
        return "done";
    }

    public void cancel() throws Exception {
        cancelled = true;
        Thread.sleep(1000);
        boolean success = false;
        synchronized (lock) {
            for (Object cancellableResource : cancellableResources) {
                if (cancellableResource instanceof Closeable) {
                    ((Closeable) cancellableResource).close();
                    success = true;
                }
            }
        }
        System.out.println("Task " + (success ? "cancelled" : "could not be cancelled. It might have completed or not started at all"));
    }
}
For your REST HTTP client requirement, you can modify the request factory class to something like this:
public class CancellableSimpleClientHttpRequestFactory extends SimpleClientHttpRequestFactory {
    private List<Object> cancellableResources;

    public CancellableSimpleClientHttpRequestFactory() {
    }

    public CancellableSimpleClientHttpRequestFactory(List<Object> cancellableResources) {
        this.cancellableResources = cancellableResources;
    }

    @Override
    protected HttpURLConnection openConnection(URL url, Proxy proxy) throws IOException {
        HttpURLConnection connection = super.openConnection(url, proxy);
        if (cancellableResources != null) {
            cancellableResources.add(connection);
        }
        return connection;
    }
}
You need to use this factory when creating the RestTemplate in your task:
RestTemplate template = new RestTemplate(new CancellableSimpleClientHttpRequestFactory(this.cancellableResources));
Make sure that you pass the same list of cancellable resources that you maintain in CancellableTask.
Then modify the cancel() method in CancellableTask like this:
synchronized (lock) {
    for (Object cancellableResource : cancellableResources) {
        if (cancellableResource instanceof HttpURLConnection) {
            ((HttpURLConnection) cancellableResource).disconnect();
            success = true;
        }
    }
}

OutOfMemoryError - No trace in the console

I call the testMethod below, after wrapping it in a Callable (along with a few other Callable tasks), from an ExecutorService. I suspect that map.put() suffers an OutOfMemoryError, as I'm trying to put some 20 million entries.
But I'm not able to see the error trace in the console; the thread just stops. I tried to catch the Error (I know we shouldn't, but I did it for debugging), yet the error is not caught: execution goes directly to finally and stops, and the thread stands still.
private HashMap<String, Integer> testMethod(String file) {
    try {
        in = new FileInputStream(new File(file));
        br = new BufferedReader(new InputStreamReader(in), 102400);
        for (String line; (line = br.readLine()) != null;) {
            map.put(line.substring(1, 17), Integer.parseInt(line.substring(18, 20)));
        }
        System.out.println("Loop End"); // Not executed
    } catch (Error e) {
        e.printStackTrace(); // Not executed
    } finally {
        System.out.println(map.size()); // Executed
        br.close();
        in.close();
    }
    return map;
}
What could be the mistake I'm making?
EDIT: This is how I execute the threads:
Callable<Void> callable1 = new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        testMethod(inputFile);
        return null;
    }
};

Callable<Void> callable2 = new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        testMethod1();
        return null;
    }
};

List<Callable<Void>> taskList = new ArrayList<Callable<Void>>();
taskList.add(callable1);
taskList.add(callable2);

// create a pool executor with 3 threads
ExecutorService executor = Executors.newFixedThreadPool(3);
List<Future<Void>> future = executor.invokeAll(taskList);
//executor.invokeAll(taskList);
latch.await();
future.get(0); future.get(1); // Added this as per SubOptimal's comment
But this future.get() didn't show the OOME in the console.
You should not throw away the Future after submitting the Callable.
Future future = pool.submit(callable);
future.get(); // this would show you the OOME
An example based on the information from the asker, to demonstrate:
public static void main(String[] args) throws InterruptedException, ExecutionException {
    Callable<Void> callableOOME = new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            System.out.println("callableOOME");
            HashMap<String, Integer> map = new HashMap<>();
            // some code to force an OOME
            try {
                for (int i = 0; i < 10_000_000; i++) {
                    map.put(Integer.toString(i), i);
                }
            } catch (Error e) {
                e.printStackTrace();
            } finally {
                System.out.println("callableOOME: map size " + map.size());
            }
            return null;
        }
    };

    Callable<Void> callableNormal = new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            System.out.println("callableNormal");
            // some code to have a short "processing time"
            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (InterruptedException ex) {
                System.err.println(ex.getMessage());
            }
            return null;
        }
    };

    List<Callable<Void>> taskList = new ArrayList<>();
    taskList.add(callableOOME);
    taskList.add(callableNormal);

    ExecutorService executor = Executors.newFixedThreadPool(3);
    List<Future<Void>> future = executor.invokeAll(taskList);

    System.out.println("get future 0: ");
    future.get(0).get();
    System.out.println("get future 1: ");
    future.get(1).get();
}
Try catching Throwable, as it could be an Exception such as IOException or NullPointerException; Throwable captures everything except System.exit().
Another possibility is that the thread doesn't die; instead it becomes slower and slower because it is almost, but never completely, out of memory. You should be able to see this with a stack dump, or with jvisualvm while it is running.
BTW, unless all your strings are exactly 16 characters long, you might like to call trim() on them to remove any padding in the String. This can make them shorter and use less memory.
I assume you are using a recent version of Java 7 or 8. If you are using Java 6 or older, substring() doesn't create a new underlying char[] (to save CPU), which in this case wastes memory.
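For what it's worth, on those older JVMs the usual workaround was to copy the key so the whole line's backing char[] is not retained; combined with the trim() suggestion above, the loop body might look like this (a sketch against the question's line and map variables):
// Pre-Java-7 String.substring() shares the original line's char[];
// new String(...) copies just the key and lets the rest be collected.
String key = new String(line.substring(1, 17).trim());
map.put(key, Integer.parseInt(line.substring(18, 20)));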

Java - running jobs async using ReentrantLock?

The code below allows us to run a job while ensuring that only one job at a time can run, by using a ReentrantLock.
Is there any way to modify this code to run job.call() asynchronously and to return the MyConcurrentJobException to the client before starting the thread?
We tried wrapping the try/catch/finally block in a new Thread, but the unlock and lock have to happen in the same thread, so we get an IllegalMonitorStateException.
final static Lock lock = new ReentrantLock();

public Object runJob(String desc, Callable job, boolean wait) {
    logger.info("Acquiring lock");
    if (!lock.tryLock()) {
        throw new MyConcurrentJobException();
    }
    activeJob = new JobStatus(desc);
    logger.info("Lock acquired");
    try {
        return job.call();
    } catch (MarginServiceAssertionException e) {
        throw e;
    } catch (MarginServiceSystemException e) {
        throw e;
    } catch (Exception e) {
        throw new MarginServiceSystemException(e);
    } finally {
        activeJob = null;
        logger.info("Releasing lock");
        lock.unlock();
        logger.info("Lock released");
    }
}
You can use a Semaphore instead of the ReentrantLock; its permits are not bound to a thread.
Something like this (I'm not sure what you want to do with the result of job.call() in the asynchronous case):
final static Semaphore lock = new Semaphore(1);

public void runJob(String desc, Callable job, boolean wait) {
    logger.info("Acquiring lock");
    if (!lock.tryAcquire()) {
        throw new MyConcurrentJobException();
    }
    startThread(new Runnable() {
        public void run() {
            try {
                job.call();
            } catch (Exception e) {
                // handle or log the job failure as appropriate
            } finally {
                lock.release();
            }
        }
    });
}
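And if the caller also needs the asynchronous result, one option is a sketch along these lines (using CompletableFuture and an ExecutorService instead of the bare startThread(); the exception type follows the question, the rest is illustrative):
final static Semaphore lock = new Semaphore(1);
final static ExecutorService executor = Executors.newCachedThreadPool();

public CompletableFuture<Object> runJobAsync(String desc, Callable<Object> job) {
    // Fail fast on the caller's thread if another job already holds the permit.
    if (!lock.tryAcquire()) {
        throw new MyConcurrentJobException();
    }
    CompletableFuture<Object> result = new CompletableFuture<>();
    executor.submit(() -> {
        try {
            result.complete(job.call());
        } catch (Exception e) {
            result.completeExceptionally(e);
        } finally {
            lock.release(); // a Semaphore permit can be released from a different thread
        }
    });
    return result;
}
The caller still gets MyConcurrentJobException synchronously, and can inspect the returned CompletableFuture later for the job's outcome.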
I think I am misunderstanding something, because blocking and waiting while doing something asynchronously doesn't make much sense to me unless some progress can be made on the invoking thread.
Could you do something like this:
final static Lock lock = new ReentrantLock();
final static ExecutorService service = Executors.newCachedThreadPool();

public Object runJob(String desc, Callable job, boolean wait) {
    logger.info("Acquiring lock");
    if (!lock.tryLock()) {
        throw new MyConcurrentJobException();
    }
    activeJob = new JobStatus(desc);
    logger.info("Lock acquired");
    try {
        Future<?> future = service.submit(job);
        // This next loop will block until the job is finished
        // and will also hold onto the lock.
        boolean finished = false;
        Object o = null;
        while (!finished) {
            try {
                o = future.get(300, TimeUnit.MILLISECONDS);
                finished = true;
            } catch (TimeoutException e) {
                // Do some periodic task while waiting
                // foot.tapLots();
            }
        }
        if (o instanceof MarginServiceAssertionException) {
            throw ((MarginServiceAssertionException) o);
        } else if (o instanceof MarginServiceSystemException) {
            throw ((MarginServiceSystemException) o);
        } else if (o instanceof Exception) {
            throw new MarginServiceSystemException((Exception) o);
        }
        return o;
    } catch (InterruptedException | ExecutionException e) {
        // catch whatever get() throws as part of this
        // and do whatever needs to be done
        return null;
    } finally {
        activeJob = null;
        logger.info("Releasing lock");
        lock.unlock();
        logger.info("Lock released");
    }
}
