I need to build a library that exposes both synchronous and asynchronous methods.
executeSynchronous() - waits until I have a result, returns the result.
executeAsynchronous() - returns a Future immediately which can be processed after other things are done, if needed.
Core Logic of my Library
The customer will use our library by passing in a DataKey builder object. We will construct a URL from that DataKey, execute an HTTP client call against it, and once we get the response back as a JSON string, return that JSON string to the customer as-is, wrapped in a DataResponse object. Some customers will call executeSynchronous() and some will call executeAsynchronous(), which is why I need to provide the two methods separately in my library.
Interface:
public interface Client {
// for synchronous
public DataResponse executeSynchronous(DataKey key);
// for asynchronous
public Future<DataResponse> executeAsynchronous(DataKey key);
}
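For context, here is a rough usage sketch of the two methods from a caller's point of view. The DataKey builder fields shown here are placeholders, not the real API, and exception handling is omitted:

// Hypothetical usage sketch; the builder fields are assumptions.
Client client = new DataClient();
DataKey key = new DataKey.Builder()
        .id(1234)          // assumed field
        .timeout(500)      // milliseconds, read by executeSynchronous via getTimeout()
        .build();

// Synchronous: blocks until a response (or a timeout/error DataResponse) is available.
DataResponse syncResponse = client.executeSynchronous(key);

// Asynchronous: returns immediately; the result is collected later.
Future<DataResponse> future = client.executeAsynchronous(key);
// ... do other work ...
DataResponse asyncResponse = future.get(500, TimeUnit.MILLISECONDS);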
And then I have my DataClient which implements the above Client interface:
public class DataClient implements Client {
private RestTemplate restTemplate = new RestTemplate();
private ExecutorService executor = Executors.newFixedThreadPool(10);
// for synchronous call
@Override
public DataResponse executeSynchronous(DataKey key) {
DataResponse dataResponse = null;
Future<DataResponse> future = null;
try {
future = executeAsynchronous(key);
dataResponse = future.get(key.getTimeout(), TimeUnit.MILLISECONDS);
} catch (TimeoutException ex) {
PotoLogging.logErrors(ex, DataErrorEnum.TIMEOUT_ON_CLIENT, key);
dataResponse = new DataResponse(null, DataErrorEnum.TIMEOUT_ON_CLIENT, DataStatusEnum.ERROR);
// does this look right the way I am doing it?
future.cancel(true); // terminating tasks that have timed out.
} catch (Exception ex) {
PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
dataResponse = new DataResponse(null, DataErrorEnum.CLIENT_ERROR, DataStatusEnum.ERROR);
}
return dataResponse;
}
//for asynchronous call
@Override
public Future<DataResponse> executeAsynchronous(DataKey key) {
Future<DataResponse> future = null;
try {
Task task = new Task(key, restTemplate);
future = executor.submit(task);
} catch (Exception ex) {
PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
}
return future;
}
}
Simple class which will perform the actual task:
public class Task implements Callable<DataResponse> {
private DataKey key;
private RestTemplate restTemplate;
public Task(DataKey key, RestTemplate restTemplate) {
this.key = key;
this.restTemplate = restTemplate;
}
@Override
public DataResponse call() {
DataResponse dataResponse = null;
String response = null;
try {
String url = createURL();
response = restTemplate.getForObject(url, String.class);
// it is a successful response
dataResponse = new DataResponse(response, DataErrorEnum.NONE, DataStatusEnum.SUCCESS);
} catch (RestClientException ex) {
PotoLogging.logErrors(ex, DataErrorEnum.SERVER_DOWN, key);
dataResponse = new DataResponse(null, DataErrorEnum.SERVER_DOWN, DataStatusEnum.ERROR);
} catch (Exception ex) {
PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
dataResponse = new DataResponse(null, DataErrorEnum.CLIENT_ERROR, DataStatusEnum.ERROR);
}
return dataResponse;
}
// create a URL by using key object
private String createURL() {
String url = somecode;
return url;
}
}
Problem Statement:-
When I started working on this solution, I was not terminating the tasks that had timed out. I was reporting the timeout to the client, but the task kept running in the thread pool (potentially occupying one of my limited 10 threads for a long time). So I did some research online and found that I can cancel tasks that have timed out by calling cancel on the future, as shown below:
future.cancel(true);
But I wanted to make sure: does the way I cancel timed-out tasks in my executeSynchronous method look right?
Since I am calling cancel() on the Future, which only stops the task if it is still in the queue, I am not sure whether what I am doing is correct. What is the right approach to do this?
If there is a better way, can anyone provide an example?
If the task is still in the queue, then cancelling it by simply calling future.cancel() is fine, but obviously you don't know whether it is still in the queue. Also, even if you ask the future to interrupt the task, it may not work, because your task may still be doing something that ignores the thread's interrupted status.
So you can use future.cancel(true), but you need to make sure that your task (thread) actually honors the interrupted status. For example, since you mentioned you make an HTTP call, you might need to close the HTTP client resource as soon as the thread is interrupted.
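As a tiny illustration (separate from the fuller example below), a task that cooperates with future.cancel(true) simply keeps checking the interrupt flag:

// Sketch of an interrupt-aware task; cancel(true) makes the loop exit promptly.
Callable<String> interruptAwareTask = () -> {
    while (!Thread.currentThread().isInterrupted()) {
        // do one small unit of work, then re-check the flag
    }
    return "stopped";
};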
Please refer to the example below.
I have tried to implement the task cancellation scenario. Normally a thread can check isInterrupted() and terminate itself, but this becomes more complex when you are using thread pool executors and Callables, and when the task is not really shaped like while (!Thread.isInterrupted()) { /* execute task */ }.
In this example, a task is writing a file (I did not use an HTTP call, to keep it simple). A thread pool executor starts running the task, but the caller wants to cancel it after just 100 milliseconds. The future sends the interrupt signal to the thread, but the Callable cannot check it immediately while writing to the file. So the Callable maintains a list of IO resources it is going to use, and as soon as the future wants to cancel the task, it calls close() on all of those IO resources, which terminates the task with an IOException, and the thread then finishes.
public class CancellableTaskTest {
public static void main(String[] args) throws Exception {
CancellableThreadPoolExecutor threadPoolExecutor = new CancellableThreadPoolExecutor(0, 10, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
long startTime = System.currentTimeMillis();
Future<String> future = threadPoolExecutor.submit(new CancellableTask());
while (System.currentTimeMillis() - startTime < 100) {
Thread.sleep(10);
}
System.out.println("Trying to cancel task");
future.cancel(true);
}
}
class CancellableThreadPoolExecutor extends ThreadPoolExecutor {
public CancellableThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
}
@Override
protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
return new CancellableFutureTask<T>(callable);
}
}
class CancellableFutureTask<V> extends FutureTask<V> {
private WeakReference<CancellableTask> weakReference;
public CancellableFutureTask(Callable<V> callable) {
super(callable);
if (callable instanceof CancellableTask) {
this.weakReference = new WeakReference<CancellableTask>((CancellableTask) callable);
}
}
public boolean cancel(boolean mayInterruptIfRunning) {
boolean result = super.cancel(mayInterruptIfRunning);
if (weakReference != null) {
CancellableTask task = weakReference.get();
if (task != null) {
try {
task.cancel();
} catch (Exception e) {
e.printStackTrace();
result = false;
}
}
}
return result;
}
}
class CancellableTask implements Callable<String> {
private volatile boolean cancelled;
private final Object lock = new Object();
private LinkedList<Object> cancellableResources = new LinkedList<Object>();
@Override
public String call() throws Exception {
if (!cancelled) {
System.out.println("Task started");
// write file
File file = File.createTempFile("testfile", ".txt");
BufferedWriter writer = new BufferedWriter(new FileWriter(file));
synchronized (lock) {
cancellableResources.add(writer);
}
try {
long lineCount = 0;
while (lineCount++ < 100000000) {
writer.write("This is a test text at line: " + lineCount);
writer.newLine();
}
System.out.println("Task completed");
} catch (Exception e) {
e.printStackTrace();
} finally {
writer.close();
file.delete();
synchronized (lock) {
cancellableResources.clear();
}
}
}
return "done";
}
public void cancel() throws Exception {
cancelled = true;
Thread.sleep(1000);
boolean success = false;
synchronized (lock) {
for (Object cancellableResource : cancellableResources) {
if (cancellableResource instanceof Closeable) {
((Closeable) cancellableResource).close();
success = true;
}
}
}
System.out.println("Task " + (success ? "cancelled" : "could not be cancelled. It might have completed or not started at all"));
}
}
For your REST HTTP client requirement, you can modify the request factory class, something like this:
public class CancellableSimpleClientHttpRequestFactory extends SimpleClientHttpRequestFactory {
private List<Object> cancellableResources;
public CancellableSimpleClientHttpRequestFactory() {
}
public CancellableSimpleClientHttpRequestFactory(List<Object> cancellableResources) {
this.cancellableResources = cancellableResources;
}
protected HttpURLConnection openConnection(URL url, Proxy proxy) throws IOException {
HttpURLConnection connection = super.openConnection(url, proxy);
if (cancellableResources != null) {
cancellableResources.add(connection);
}
return connection;
}
}
Here you need to use this factory while creating the RestTemplate in your Callable task.
RestTemplate template = new RestTemplate(new CancellableSimpleClientHttpRequestFactory(this.cancellableResources));
Make sure that you pass the same list of cancellable resources that you have maintained in CancellableTask.
Now you need to modify the cancel() method in CancellableTask like this -
synchronized (lock) {
for (Object cancellableResource : cancellableResources) {
if (cancellableResource instanceof HttpURLConnection) {
((HttpURLConnection) cancellableResource).disconnect();
success = true;
}
}
}
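Tying this back to the original Task, its call() method might wire the factory in roughly as follows. This is only a sketch, and it assumes the task keeps its own cancellableResources list shared with the cancel() logic above; that field is not part of the original code:

// Sketch only: assumes Task has a List<Object> cancellableResources field
// that a cancel() method iterates over, as in CancellableTask above.
@Override
public DataResponse call() {
    try {
        RestTemplate template = new RestTemplate(
                new CancellableSimpleClientHttpRequestFactory(this.cancellableResources));
        String url = createURL();
        // If cancel() disconnects the underlying HttpURLConnection, this call fails
        // with a RestClientException and the task ends promptly.
        String response = template.getForObject(url, String.class);
        return new DataResponse(response, DataErrorEnum.NONE, DataStatusEnum.SUCCESS);
    } catch (RestClientException ex) {
        PotoLogging.logErrors(ex, DataErrorEnum.SERVER_DOWN, key);
        return new DataResponse(null, DataErrorEnum.SERVER_DOWN, DataStatusEnum.ERROR);
    }
}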
Related
I am trying to call a method repeatedly, every 60 seconds, until I get a success response from it; the method calls a REST endpoint on a different service. As of now I am using a do-while loop and
Thread.sleep(60000);
to make the main thread wait 60 seconds, which I feel is not the ideal way due to concurrency issues.
I came across CountDownLatch, used like this:
CountDownLatch latch = new CountDownLatch(1);
boolean processingCompleteWithin60Second = latch.await(60, TimeUnit.SECONDS);
@Override
public void run(){
String processStat = null;
try {
processStat = getStat(processStatId);
if("SUCCEEDED".equals(processStat))
{
latch.countDown();
}
} catch (Exception e) {
e.printStackTrace();
}
}
The run method is in a different class that implements Runnable. I am not able to get this working. Any idea what is wrong?
You could use a CompletableFuture instead of CountDownLatch to return the result:
CompletableFuture<String> future = new CompletableFuture<>();
invokeYourLogicInAnotherThread(future);
String result = future.get(); // this blocks
And in another thread (possibly in a loop):
@Override
public void run() {
String processStat = null;
try {
processStat = getStat(processStatId);
if("SUCCEEDED".equals(processStat))
{
future.complete(processStat);
}
} catch (Exception e) {
future.completeExceptionally(e);
}
}
future.get() will block until something is submitted via the complete() method and will then return the submitted value, or it will throw the exception supplied via completeExceptionally(), wrapped in an ExecutionException.
There is also a get() variant with a timeout:
String result = future.get(60, TimeUnit.SECONDS);
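Putting the pieces together, one way to drive the retry loop is a ScheduledExecutorService that polls every 60 seconds and completes the future on success. This is a sketch under the question's assumptions: getStat and processStatId come from the question, the 10-minute overall timeout is arbitrary, and the checked exceptions thrown by get() are not shown:

// Sketch: poll every 60 seconds until getStat(...) reports success, then unblock the caller.
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
CompletableFuture<String> future = new CompletableFuture<>();

scheduler.scheduleWithFixedDelay(() -> {
    try {
        String processStat = getStat(processStatId); // from the question
        if ("SUCCEEDED".equals(processStat)) {
            future.complete(processStat);
        }
    } catch (Exception e) {
        future.completeExceptionally(e);
    }
}, 0, 60, TimeUnit.SECONDS);

String result = future.get(10, TimeUnit.MINUTES); // blocks until success, failure, or timeout
scheduler.shutdownNow();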
Finally got it to work using the executor framework (pollExecutor below is a ScheduledExecutorService):
final int[] value = new int[1];
pollExecutor.scheduleWithFixedDelay(new Runnable() {
Map<String, String> statMap = null;
@Override
public void run() {
try {
statMap = coldService.doPoll(id);
} catch (Exception e) {
}
if (statMap != null) {
for (Map.Entry<String, String> entry : statMap.entrySet()) {
if ("failed".equals(entry.getValue())) {
value[0] = 2;
pollExecutor.shutdown();
}
}
}
}
}, 0, 5, TimeUnit.MINUTES);
try {
pollExecutor.awaitTermination(40, TimeUnit.MINUTES);
} catch (InterruptedException e) {
}
I am writing a job queue using BlockingQueue and ExecutorService. It basically waits for new data in the queue; if any data is put into the queue, the ExecutorService fetches it. The problem is that I am using a loop to wait for the queue to have data, and that loop drives CPU usage through the roof.
I am new to this API and not sure how to improve this.
ExecutorService mExecutorService = Executors.newSingleThreadExecutor();
BlockingQueue<T> mBlockingQueue = new ArrayBlockingQueue<>(10); // ArrayBlockingQueue requires a capacity; 10 is a placeholder
public void handleRequests() {
Future<T> future = mExecutorService.submit(new WorkerHandler(mBlockingQueue, mQueueState));
T value = null;
try {
value = future.get();
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
if (mListener != null && value != null) {
mListener.onNewItemDequeued(value);
}
}
private static class WorkerHandler<T> implements Callable<T> {
private final BlockingQueue<T> mBlockingQueue;
private PollingQueueState mQueueState;
WorkerHandler(BlockingQueue<T> blockingQueue, PollingQueueState state) {
mBlockingQueue = blockingQueue;
mQueueState = state;
}
@Override
public T call() throws Exception {
T value = null;
while (true) { // problem is here, this loop takes full cpu usage if queue is empty
if (mBlockingQueue.isEmpty()) {
mQueueState = PollingQueueState.WAITING;
} else {
mQueueState = PollingQueueState.FETCHING;
}
if (mQueueState == PollingQueueState.FETCHING) {
try {
value = mBlockingQueue.take();
break;
} catch (InterruptedException e) {
Log.e(TAG, e.getMessage(), e);
break;
}
}
}
return value;
}
}
Any suggestions on how to improve this would be much appreciated!
You don't need to test whether the queue is empty; just call take(), and the thread blocks until data is available.
When an element is put on the queue, the thread wakes up and the value is set.
If you don't need to cancel the task, you just need:
@Override
public T call() throws Exception {
T value = mBlockingQueue.take();
return value;
}
If you want to be able to cancel the task :
@Override
public T call() throws Exception {
T value = null;
while (value==null) {
try {
value = mBlockingQueue.poll(50L,TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
Log.e(TAG, e.getMessage(), e);
break;
}
}
return value;
}
if (mBlockingQueue.isEmpty()) {
mQueueState = PollingQueueState.WAITING;
} else {
mQueueState = PollingQueueState.FETCHING;
}
if (mQueueState == PollingQueueState.FETCHING)
Remove these lines, the break;, and the matching closing brace.
In the code below, DataGather (= endDataGather - beginDataGather) takes 1.7 ms,
and the time for a service to respond (= service_COMPLETED - service_REQUEST_SENT)
varies from 20 us to 200 us (the services are mocked dummies on the same LAN, hence so low).
Now if I increase the Tomcat 8 thread count from 10 to 200, DataGather increases to 150 ms+, and if I increase the thread count from 200 to 1000, it increases to 250 ms+. Machine specs: 8-core Xeon, 64 GB RAM. Times are measured while Apache Bench runs with -n 40000 -c 100. Is this due to thread scheduling/context switching or something else? How do I get rid of this variation? Will it remain when real services, which have a latency of 20-100 ms, come into the picture?
public List<ServiceResponse> getData(final List<Service> services, final Data data) {
//beginDateGather;
final List<ServiceResponse> serviceResponses = Collections.synchronizedList(new ArrayList<>());
try {
final CountDownLatch latch = new CountDownLatch(services.size());
Map<Future<HttpResponse>, HttpRequestBase> responseRequestMap = new HashMap<Future<HttpResponse>, HttpRequestBase>();
for (final Service service : services) {
//creating request for a service
try {
HttpRequestBase request = RequestCreator.getRequestBase(service, data);
//service_REQUEST_SENT
Future<HttpResponse> response = client.execute(request,
new MyFutureCallback(service, data, latch, serviceResponses));
responseRequestMap.put(response, request);
} catch (Exception e) {
latch.countDown();
}
}
try {
boolean isWaitIsOver = latch.await(timeout, TimeUnit.MILLISECONDS);
if (!isWaitIsOver) {
for (Future<HttpResponse> response : responseRequestMap.keySet()) {
if (!response.isDone()) {
response.cancel(true);
}
}
}
} catch (InterruptedException e) {
}
} catch (Exception e) {
}
//endDataGather
return serviceResponses;
}
public class MyFutureCallback implements FutureCallback<HttpResponse> {
private Service service;
private Data data;
private CountDownLatch latch;
private List<ServiceResponse> serviceResponses;
public MyFutureCallback( Service service, Data data, CountDownLatch latch, List<ServiceResponse> serviceResponses) {
this.service = service;
this.data = data;
this.latch = latch;
this.serviceResponses = serviceResponses;
}
@Override
public void completed(HttpResponse result) {
try {
ServiceResponse serviceResponse = parseResponse(result, data, service);
serviceResponses.add(serviceResponse);
} catch (Exception e) {
} finally {
//service_COMPLETED
latch.countDown();
}
}
@Override
public void failed(Exception ex) {
latch.countDown();
}
@Override
public void cancelled() {
latch.countDown();
}
}
Yes, it seems to be due to context switching of threads.
Increasing the number of threads won't help in this case.
You can use a thread pool for the callbacks.
Check this link for reference and try to use PoolingClientAsyncConnectionManager:
How to use HttpAsyncClient with multithreaded operation?
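As a rough sketch of that suggestion (this uses the newer PoolingNHttpClientConnectionManager name from Apache HttpAsyncClient 4.1 rather than the older PoolingClientAsyncConnectionManager; the pool sizes are placeholders and checked exceptions are not shown):

// Sketch: share one async client with a bounded connection pool across all requests,
// instead of compensating with a very large Tomcat thread count.
ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
PoolingNHttpClientConnectionManager connManager = new PoolingNHttpClientConnectionManager(ioReactor);
connManager.setMaxTotal(200);          // placeholder, tune for your load
connManager.setDefaultMaxPerRoute(50); // placeholder, tune for your load

CloseableHttpAsyncClient client = HttpAsyncClients.custom()
        .setConnectionManager(connManager)
        .build();
client.start();
// reuse this single client instance inside getData(...) for every request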
I am facing a problem with ThreadPoolExecutor in Java.
How can I execute a continuous task using it? For example, I want to execute something like this:
@Async
void MyVoid(){
Globals.getInstance().increment();
System.out.println(Thread.currentThread().getName()+" iteration # "+ Globals.getInstance().Iterator);
}
I want it to run forever in two parallel asynchronous threads until the user sends a request to the "/stop" controller to stop the ThreadPoolExecutor.
If I use this for example:
@Controller
@RequestMapping("api/test")
public class SendController {
ThreadPoolExecutor executor = new ErrorReportingThreadPoolExecutor(5);
boolean IsRunning = true;
#RequestMapping(value = "/start_new", method = RequestMethod.POST)
public Callable<String> StartNewTask(#RequestBody LaunchSend sendobj) throws IOException, InterruptedException {
Runnable runnable = () -> { MyVoid();};
executor.setCorePoolSize(2);
executor.setMaximumPoolSize(2);
while (IsRunning) {
executor.execute(runnable);
System.out.println("Active threads: " + executor.getActiveCount());
}
return () -> "Callable result";
}
#RequestMapping(value = "/stop", method = RequestMethod.GET)
public Callable<String> StopTasks() {
executor.shutdown(); //for test
if (SecurityContextHolder.getContext().getAuthentication().getName() != null && !"anonymousUser".equals(SecurityContextHolder.getContext().getAuthentication().getName())) {
executor.shutdown();
return () -> "Callable result good";
}
else { return () -> "Callable result bad";}
}
}
public class ErrorReportingThreadPoolExecutor extends ThreadPoolExecutor {
public ErrorReportingThreadPoolExecutor(int nThreads) {
super(nThreads, nThreads,
0, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
@Override
protected void afterExecute(Runnable task, Throwable thrown) {
super.afterExecute(task, thrown);
if (thrown != null) {
// an unexpected exception happened inside ThreadPoolExecutor
thrown.printStackTrace();
}
if (task instanceof Future<?>) {
// try getting result
// if an exception happened in the job, it'll be thrown here
try {
Object result = ((Future<?>)task).get();
} catch (CancellationException e) {
// the job get canceled (may happen at any state)
e.printStackTrace();
} catch (ExecutionException e) {
// some uncaught exception happened during execution
e.printStackTrace();
} catch (InterruptedException e) {
// current thread is interrupted
// ignore, just re-throw
Thread.currentThread().interrupt();
}
}
}
}
I'm seeing the following problems:
As I understand it, a lot of tasks get submitted into the executor's queue within a few seconds and the executor then handles them all. (But I think I need each thread to wait until its current task ends before the next one is submitted to the executor.)
HTTP requests to these controllers stay "IDLE" forever until the next request comes in; i.e., after sending a request to /api/test/start_new, the controller's code runs and the tasks execute, but the request itself stays IDLE.
How can I do this in Java?
P.S. Spring MVC is used in the project. It has its own ThreadPoolExecutor implementation, ThreadPoolTaskExecutor, but I am facing similar problems with it.
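For what it's worth, here is a minimal sketch of one way to get exactly two long-running workers without flooding the executor queue, assuming the goal is two threads that loop until the /stop request arrives. The running flag and the start/stop methods are illustrative, not the original code; the loop body reuses Globals from the question:

// Sketch: submit exactly two long-running workers; each loops until stop() is called.
private final ExecutorService executor = Executors.newFixedThreadPool(2);
private volatile boolean running = true;

public void start() {
    for (int i = 0; i < 2; i++) {
        executor.execute(() -> {
            while (running) {
                Globals.getInstance().increment();
                System.out.println(Thread.currentThread().getName()
                        + " iteration # " + Globals.getInstance().Iterator);
            }
        });
    }
}

public void stop() {
    running = false;     // let both loops finish their current iteration
    executor.shutdown(); // no new tasks; the workers exit once running is false
}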
I'm a tapestry-hibernate user and I'm experiencing an issue where my session remains closed once I exceed the size of my Executors.newFixedThreadPool(1).
I have the following code, which works perfectly for the first thread, while the remaining threads hit a closed session. If I increase the thread pool size to 10, all the threads run without issue; as soon as the number of tasks exceeds the fixed thread pool size, I get the closed-session exception, and I do not know how to reopen the session since it's managed by tapestry-hibernate. If I use newCachedThreadPool, everything works perfectly. Does anybody know what might be happening here?
public void setupRender() {
ExecutorService executorService = Executors.newFixedThreadPool(1);
final ConcurrentHashMap<String, Computer> map = new ConcurrentHashMap<>();
final String key = "myKey";
final Date date = new Date();
List<Future> futures = new ArrayList<>();
for (int i = 0; i < 10; i++) {
final int thread = i;
Future future = executorService.submit(new Callable() {
@Override
public String call() {
try {
Computer computer = new Computer("Test Computer thread");
computer = getComputer(map, key, key, computer);
Monitor monitor = new Monitor();
monitor.setComputer(computer);
session.save(monitor);
session.flush();
System.out.println("thread " + thread);
try {
sessionManager.commit();
} catch (HibernateException ex) {
sessionManager.abort();
} finally {
session.close();
}
} catch (Exception ex) {
System.out.println("ex " + ex);
}
System.out.println( new Date().getTime() - date.getTime());
return "completed";
}
});
futures.add(future);
}
for(Future future : futures) {
try {
System.out.println(future.get());
} catch (InterruptedException | ExecutionException ex) {
Logger.getLogger(MultiThreadDemo.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
public synchronized Computer getComputer(ConcurrentHashMap<String, Computer> map, String key, String thread, Computer computer) {
if (map.putIfAbsent(key, computer) == null) {
session.save(computer);
} else {
computer = map.get(key);
}
return computer;
}
I've told you this before: you MUST either use ParallelExecutor OR call PerThreadManager.cleanup(). You need to understand that tapestry-hibernate has PerThread-scoped services that MUST be cleaned up if you are using them outside of a normal request/response (or ParallelExecutor).
I also don't think you should be calling session.close(). You should mimic CommitAfterWorker.
It would probably look something like:
@Inject PerThreadManager perThreadManager;
@Inject HibernateSessionManager sessionManager; // this is a proxy to a per-thread value
@Inject Session session; // this is a proxy to a per-thread value
public void someMethod() {
ExecutorService executorService = ...;
executorService.submit(new Callable() {
public String call() {
try {
Monitor monitor = ...
session.save(monitor);
session.flush(); // optional
sessionManager.commit();
} catch (Exception ex) {
sessionManager.abort();
} finally {
// this allows Session and HibernateSessionManager to
// clean up after themselves
perThreadManager.cleanup();
}
return ...
}
});
}
If you choose to use the ParallelExecutor (and Invokable) instead of Executors.newFixedThreadPool(1) you can remove the references to PerThreadManager since it automatically cleans up the thread.
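A minimal sketch of that variant, assuming Tapestry's ParallelExecutor and Invokable; the wiring mirrors the example above and should be treated as an illustration rather than verified code:

@Inject ParallelExecutor parallelExecutor;
@Inject HibernateSessionManager sessionManager; // proxy to a per-thread value
@Inject Session session;                        // proxy to a per-thread value

public void someMethod() {
    Future<String> future = parallelExecutor.invoke(new Invokable<String>() {
        @Override
        public String invoke() {
            // No perThreadManager.cleanup() needed here: ParallelExecutor
            // cleans up per-thread services when the invocation finishes.
            try {
                Monitor monitor = new Monitor();
                session.save(monitor);
                sessionManager.commit();
                return "completed";
            } catch (Exception ex) {
                sessionManager.abort();
                return "failed";
            }
        }
    });
}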