This may be a bit of an old question. I am confused about how ExecutorService works in a JBoss environment. I used some sample code where I submit a task with an ExecutorService and, after everything is done, shut the executor down.
The problem I am facing is that after submitting one request, I get the exception below for every subsequent request.
Caused by: java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@518ad6a2 rejected from java.util.concurrent.ThreadPoolExecutor@72114f80[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
ExecutorService executorService = Executors.newFixedThreadPool(3);

@POST
@Path("/request")
public Response checkAsync(final MultiMedia multiMedia) {
    final Random rand = new Random();
    final String random = String.valueOf(rand.nextInt(50) + 1);
    multiMediaJobs.put(random, multiMedia);
    final String jobId = "{ 'jobId' : " + random + "}";

    executorService.submit(new Runnable() {
        @Override
        public void run() {
            boolean result = veryExpensiveOperation(jobId);
            if (result) {
                try {
                    MultiMedia multiMedia = (MultiMedia) multiMediaJobs.get(random);
                    multiMedia.getMediadata().getMetadata().setAssetId(random);
                    final String uri = multiMedia.getCallback().getUri() + multiMedia.getCallback().getResource();
                    RestTemplate restTemplate = new RestTemplate();
                    String code = restTemplate.postForObject(uri, multiMedia, String.class);
                    System.out.println(code);
                } finally {
                    logger.debug("Map size: " + multiMediaJobs.size());
                    logger.debug("Time: " + System.currentTimeMillis());
                    multiMediaJobs.remove(random);
                }
            }
        }

        private boolean veryExpensiveOperation(String jobId) {
            try {
                Thread.sleep(7000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            logger.debug("Task is processed fully");
            return true;
        }
    });

    executorService.shutdown();
    try {
        executorService.awaitTermination(1, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

    return Response.status(Status.ACCEPTED)
            .entity(commonHelper.toJSON(jobId)).build();
}
Is it really required to call shutdown in a JBoss environment? If I remove it, all my requests are accepted. The examples I see everywhere only use a main method; I just want to know how this works in a real application.
Forgive me if I have misunderstood some concept.
The problem is that you shut down the ExecutorService, so any task submitted afterwards is rejected right away.
I think you have some misunderstanding here.
When you submit to an executor, you normally get a Future<T> object back. If you need a result from it, you call Future.get(), which blocks until the thread pool has executed your job. Otherwise you can just leave your jobs to be executed.
You wouldn't normally shut down the executor unless you really want it to stop accepting new jobs and only finish those already queued.
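For the JBoss/JAX-RS case above, a minimal sketch of that idea, assuming the resource is managed as a singleton (e.g. an EJB or @Singleton component; the class and method names here are only illustrative): keep one ExecutorService for the whole application, submit to it from every request, and only shut it down when the container tears the component down.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import javax.annotation.PreDestroy;

public class JobResource {

    // One shared pool for the lifetime of the application; never shut down per request
    private final ExecutorService executorService = Executors.newFixedThreadPool(3);

    public Future<?> submitJob(Runnable job) {
        // Each request only submits work; the pool keeps accepting new tasks
        return executorService.submit(job);
    }

    @PreDestroy
    public void onShutdown() {
        // Shut the pool down exactly once, when the container destroys this component
        executorService.shutdown();
    }
}

In a full Java EE container you could also inject a container-managed ManagedExecutorService instead of creating your own pool, in which case you don't manage its lifecycle at all.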
I am using the resilience4j TimeLimiter in my project.
The TimeLimiter throws an error if a request takes more than 10s, but it does not interrupt the thread.
When the call comes from Postman, I debugged and tested it: after 10s Postman displays an exception, but the thread still executes the method, and the print statements I added afterwards were executed as well.
How can I cancel or interrupt the thread after 10s in resilience4j?
class A {
    TimeLimiterConfig config = TimeLimiterConfig.custom().cancelRunningFuture(true)
            .timeoutDuration(Duration.ofMillis(TimeLimit)).build();
    TimeLimiterRegistry timeLimiterRegistry = TimeLimiterRegistry.of(config);
    TimeLimiter timeLimiter = timeLimiterRegistry.timeLimiter("APITimelimiter", config);

    public Response someMethod() throws Exception {
        try {
            timeLimiter.executeFutureSupplier(() -> CompletableFuture.supplyAsync(() -> {
                return getData();
            }));
        } catch (Exception e) {
            logger.error("Request has crossed the execution time of " + TimeLimit + " seconds");
            throw new Exception("Your request has crossed the execution time of " + TimeLimit + " seconds.");
        }
    }

    public UserData getData() {
        String jsonData = "";
        return jsonData;
    }
}
TimeLimiter cannot cancel a CompletableFuture. See issue #905, "TimeLimiter times out slow method but does not cancel running future", which points out that the limited cancel() in the case of CompletableFuture is not a bug but a design decision: a CompletableFuture is not inherently bound to any thread, while a Future almost always represents a background task.
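A minimal sketch of a workaround, assuming the same 10s limit as in the question (class and method names such as callWithTimeout are illustrative): hand the TimeLimiter a plain Future produced by an ExecutorService instead of a CompletableFuture, so that cancelRunningFuture(true) can cancel the future and thereby interrupt the worker thread.

import io.github.resilience4j.timelimiter.TimeLimiter;
import io.github.resilience4j.timelimiter.TimeLimiterConfig;

import java.time.Duration;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

class TimeLimiterWithFuture {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    private final TimeLimiter timeLimiter = TimeLimiter.of("APITimelimiter",
            TimeLimiterConfig.custom()
                    .cancelRunningFuture(true)              // cancel(true) interrupts the worker thread
                    .timeoutDuration(Duration.ofSeconds(10))
                    .build());

    public String callWithTimeout() throws Exception {
        // The supplier returns a Future bound to the executor's thread,
        // which the TimeLimiter can actually cancel on timeout
        Supplier<Future<String>> futureSupplier = () -> executor.submit(this::getData);
        Callable<String> decorated = TimeLimiter.decorateFutureSupplier(timeLimiter, futureSupplier);
        return decorated.call();                            // throws TimeoutException after 10s
    }

    private String getData() {
        // Long-running work; it should react to interruption (e.g. InterruptedException
        // from blocking calls) for the cancellation to take effect promptly
        return "";
    }
}

Note that the interrupt only helps if getData() actually reacts to it; a tight CPU loop that never checks the interrupted flag will keep running regardless.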
I am trying to launch an async transcription job inside a Lambda. I have a CloudWatch event configured that should trigger on completion of the transcription job, so that I can perform some action on job completion in a different Lambda.
The problem is that the async transcription job is launched successfully, with the following jobResult in the log, but the job never completes and the job-completed event is never triggered.
jobResult = java.util.concurrent.CompletableFuture@481a996b[Not completed, 1 dependents]
My code looks like the following:
public class APIGatewayTranscriptHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        S3Client s3Client = S3Client.create();
        String fileUrl = s3Client.utilities().getUrl(GetUrlRequest.builder().bucket("srcBucket").key("fileName").build()).toString();
        Media media = Media.builder().mediaFileUri(fileUrl).build();
        StartTranscriptionJobRequest request = StartTranscriptionJobRequest.builder()
                .languageCode(LanguageCode.ES_ES)
                .media(media).outputBucketName("destBucket")
                .transcriptionJobName("jobName")
                .mediaFormat("mp3")
                .settings(Settings.builder().showSpeakerLabels(true).maxSpeakerLabels(2).build())
                .build();
        TranscribeAsyncClient transcribeAsyncClient = TranscribeAsyncClient.create();
        CompletableFuture<StartTranscriptionJobResponse> jobResult = transcribeAsyncClient.startTranscriptionJob(request);
        logger.log("jobResult = " + jobResult.toString());
        jobResult.whenComplete((jobResponse, err) -> {
            try {
                if (jobResponse != null) {
                    logger.log("CompletableFuture : response = " + jobResponse.toString());
                } else {
                    logger.log("CompletableFuture : NULL response: error = " + err.getMessage());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        // Job is completed only if Thread is made to sleep
        /*
        try {
            Thread.sleep(50000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        */
        APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent();
        response.setStatusCode(200);
        Map<String, String> responseBody = new HashMap<String, String>();
        responseBody.put("Status", jobResult.toString());
        String responseBodyString = new JSONObject(responseBody).toJSONString();
        response.setBody(responseBodyString);
        return response;
    }
}
I have verified that the audio file exists in the source bucket.
The above job completes and the job-completed event is triggered ONLY if I add some sleep time in the Lambda after launching the job.
For example,
Thread.sleep(50000);
Everything works as expected if the sleep time is added.
But without Thread.sleep() the job never completes.
The timeout for the Lambda is configured as 60 seconds.
Any help or pointers would be really appreciated.
You are starting a CompletableFuture, but not waiting for it to complete.
Call get() to block until it finishes executing.
[...]
logger.log("jobResult = " + jobResult.toString());
jobResult.get();
APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent();
[...]
This also explains why it works when you call sleep(), as that gives the Future enough time to complete.
Even if the call only performs an HTTPS request, the Lambda will finish sooner than the request does (HTTPS connections are expensive to create).
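A hedged sketch of the tail of handleRequest with that change applied (this is only one way to do it; java.util.concurrent.CompletionException needs to be imported, and the response body built here is just for illustration): join() blocks until the SDK call completes and wraps any failure in an unchecked CompletionException, so the Lambda only returns after the job has actually been started.

CompletableFuture<StartTranscriptionJobResponse> jobResult =
        transcribeAsyncClient.startTranscriptionJob(request);

APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent();
try {
    // Block until the StartTranscriptionJob call has actually completed
    StartTranscriptionJobResponse jobResponse = jobResult.join();
    logger.log("CompletableFuture : response = " + jobResponse.toString());
    response.setStatusCode(200);
    response.setBody("{\"Status\":\"STARTED\",\"JobName\":\"" + jobResponse.transcriptionJob().transcriptionJobName() + "\"}");
} catch (CompletionException e) {
    logger.log("Transcription job failed to start: " + e.getCause());
    response.setStatusCode(500);
    response.setBody("{\"Status\":\"FAILED\"}");
}
return response;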
In a Spring Boot service class, let's say I am calling a method processEvent().
The method processEvent() might do any number of things, including making REST calls to other services.
How can I monitor, in parallel, the time taken by the method and, if it crosses a threshold, do something else, e.g. throw an exception?
class EventService {

    public void processEvent(ServiceContext context, Event event) {
        // Field a time checker here for the below method.
        processEvent(event);
    }

    public void processEvent(Event event) {
        // this method does many things.
    }
}
Can this be achieved using a CompletionService? If yes, please give an example!
EDIT:
The following code works but I have one query:
public EventResponse processEvent(ServiceContext context, Event event) throws Exception {
    LOGGER.debug("Timestamp before submitting task = {}", System.currentTimeMillis());
    Future<EventResponse> future = executor.submit(() -> {
        LOGGER.debug("Timestamp before invoking = {}", System.currentTimeMillis());
        EventResponse eventResponse = processEvent(event);
        LOGGER.debug("Timestamp after invoking = {}", System.currentTimeMillis());
        return eventResponse;
    });
    try {
        LOGGER.debug("Thread sleep starts at = {}", System.currentTimeMillis());
        Thread.sleep(5000);
        LOGGER.debug("Thread sleep ended at = {}", System.currentTimeMillis());
    } catch (InterruptedException e) {
        LOGGER.debug("Going to print stack trace....");
        e.printStackTrace();
    }
    if (!future.isDone()) {
        future.cancel(true);
        LOGGER.debug("task executor cancelled at = {}", System.currentTimeMillis());
    } else {
        EventResponse response = future.get();
        LOGGER.debug("Received Event ID = {}", response.getEventDetailsList().get(0).getEventID());
        return response;
    }
    LOGGER.debug("Going to return error response at = {}", System.currentTimeMillis());
    throw new Exception("Message");
}
I am getting the below logs:
Timestamp before submitting task = 1579005638324
Thread sleep starts at = 1579005638326
Timestamp before invoking = 1579005638326
Thread sleep ended at = 1579005638526
task executor cancelled at = 1579005638527
Going to return error response at = 1579005638527
Timestamp after invoking = 1579005645228
How "Timestamp after invoking" is logged after "task executor cancelled at" ?
You can use a ThreadPoolTaskExecutor to submit the task, then sleep for a certain amount of time, then check whether the task is complete and interrupt it if it is still working. However, you can't just kill the task; you have to periodically check the interrupted flag inside the task itself. The code would be something like:
@Autowired
private ThreadPoolTaskExecutor executor;

// ...

Future<?> future = executor.submit(() -> {
    doOneThing();
    if (Thread.interrupted()) {
        return;
    }
    doAnotherThing();
    if (Thread.interrupted()) {
        return;
    }
    // etc.
});

Thread.sleep(10000);
if (!future.isDone()) {
    future.cancel(true);
}
You can combine a standard ThreadPoolExecutor with a ScheduledThreadPoolExecutor. The latter will cancel the task submitted to the former if it is still running.
ThreadPoolExecutor executor = ...;
ScheduledThreadPoolExecutor watcher = ...;
Future<?> future = executor.submit(() -> { ... });
watcher.schedule(() -> future.cancel(true), THRESHOLD_SECONDS, TimeUnit.SECONDS);
The future.cancel(true) call will be a no-op if the task has already completed. For this to work, though, you should know how to handle cross-thread communication and cancellation: cancel(true) says "either prevent this from running entirely, or, if it is already running, interrupt the thread to indicate that we need to stop execution immediately".
From there your Runnable should handle interruption as a stop condition:
executor.submit(() -> {
    // do something
    if (Thread.currentThread().isInterrupted()) {
        // clean up and exit early
    }
    // continue doing something
});
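Putting the two pieces together, here is a minimal self-contained sketch of this watcher pattern (the pool sizes and the 5-second threshold are arbitrary choices for illustration, not values from the question):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WatcherExample {

    private static final long THRESHOLD_SECONDS = 5;

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        ScheduledExecutorService watcher = Executors.newSingleThreadScheduledExecutor();

        Future<?> future = executor.submit(() -> {
            try {
                // Simulate a long-running task that honours interruption
                TimeUnit.SECONDS.sleep(30);
                System.out.println("Task finished normally");
            } catch (InterruptedException e) {
                System.out.println("Task interrupted, cleaning up and exiting early");
            }
        });

        // Cancel the task if it is still running once the threshold elapses;
        // cancel(true) is a no-op if the task has already completed
        watcher.schedule(() -> future.cancel(true), THRESHOLD_SECONDS, TimeUnit.SECONDS);

        executor.shutdown();
        watcher.shutdown();
    }
}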
I want to get an idea of how to use a ManagedExecutorService in a stateless bean. Basically, I am trying to send an HTTP call in a separate thread inside my Java EE application. The executorService sends this request and waits a number of seconds for a response; if no response arrives in the specified time, or an exception is thrown, it retries (up to X times) and finally reports whether the HTTPS service call succeeded or failed. Here is my code:
@SuppressWarnings("EjbEnvironmentInspection")
@Resource
ManagedExecutorService executorService;

public static final long RETRY_DELAY = 3000;
public static final int MAX_RETRIES = 3;

executorService.execute(() -> {
    int retry = 0;
    Collection<Info> responseInfo = null;
    while (responseInfo == null && retry++ < MAX_RETRIES) {
        try {
            responseInfo = httpsService.requestAccessInfo(requestInfo);
            Thread.sleep(RETRY_DELAY);
        } catch (Exception e) {
            log.error("Error while receiving response retry attempt {}", retry);
        }
    }
    boolean status = filledLockAccessInfo == null ? false : true;
    event.fire(regularMessage(status, GENERATION_RESULT));
});
Can someone tell me whether this is the right way to do this or not?
You shouldn't need to sleep explicitly (Thread.sleep(RETRY_DELAY)). What you need is an asynchronous invocation of the service that supports a timeout.
The following two methods use the CompletableFuture API's timeout and error handling to implement that.
The first uses recursion to retry the given number of times:
private static Collection<Info> callService(int retryCount) {
    try {
        CompletableFuture<Collection<Info>> f = invoke();
        return f.get(RETRY_DELAY, TimeUnit.MILLISECONDS);
    } catch (TimeoutException te) {
        if (retryCount > 0) {
            return callService(retryCount - 1);
        } else {
            throw new RuntimeException("Fatally failed!!");
        }
    } catch (Exception ee) {
        throw new RuntimeException("Unexpectedly failed", ee);
    }
}
Note that the executorService object is passed as the second argument of supplyAsync:
private static CompletableFuture<Collection<Info>> invoke() {
    return CompletableFuture.supplyAsync(() -> {
        // call the service
        return httpsService.requestAccessInfo(requestInfo);
    }, executorService);
}
With that, you can simply call it with the number of retries:
Collection<Info> responseInfo = callService(MAX_RETRIES);
To make the above call run asynchronously, you can replace the preceding statement with:
CompletableFuture<Void> f = CompletableFuture.supplyAsync(() -> callService(MAX_RETRIES))
        .thenAccept(res -> System.out.println("Result: " + res));
This will make the call in the background. Later, you can check how it completed:
f.isCompletedExceptionally() //will tell whether it completed with an exception.
I have a Connector class that establishes the connection and delegates tasks to two subtasks, JobManager and DataRetriever. I used the observer pattern with JobManager as the Observable; it submits an entry pair to the Connector class.
A typical Connector class looks like:
class Connector implements Observable, Closeable
{
    ....

    private void submitandMonitor(List<Callable<String>> bulkTasks, List<Callable<String>> soapTasks)
            throws InterruptedException
    {
        // Bulk job submission
        bulkExecutor = Executors.newFixedThreadPool(NBULKTHREADS,
                new ThreadFactoryBuilder().setNameFormat("BulkDownloader-%d").build());
        bulkCompletionService = new ExecutorCompletionService<String>(bulkExecutor);
        bulkTasks.forEach(task -> bulkCompletionService.submit(task));

        // Status poll thread configuration
        statusPollExec = Executors.newScheduledThreadPool(0,
                new ThreadFactoryBuilder().setNameFormat("StatusPoller").build());
        statusPollExec.scheduleAtFixedRate(statusPoller, 15, 15, TimeUnit.MINUTES);

        // Wait until all the bulk jobs are completed
        shutdownLatch.await();
        bulkExecutor.shutdown();
    }

    @Override
    public void close() throws SQLException, ClientProtocolException, IOException,
            RetriesExhaustedException
    {
        try
        {
            if (bulkExecutor != null)
            {
                if (!bulkExecutor.isShutdown())
                    bulkExecutor.shutdown();
                bulkExecutor.awaitTermination(15, TimeUnit.SECONDS);
                logger.debug("Bulk executor shutdown completed");
            }
        }
        catch (InterruptedException e)
        {
            logger.warn("Auto shutdown duration exceeded, manually terminating Bulk executor!");
            bulkExecutor.shutdownNow();
            logger.warn("Manual shutdown for Bulk executor completed");
        }
        .... // Same set of try catches for executors
    }
}
The job manager consists of:
class JobManager
{
    // Method that does not bother about thread shutdown
    private void submitJobs()
    {
        // Had sObjects.parallelStream() but changed to an iterative loop suspecting not responding to shutdown - Probably the offending method
        for (Entry<SalesforceObject, Boolean> item : sObjects.entrySet())
        {
            SalesforceObject sObject = item.getKey();
            Boolean queryAll = item.getValue();
            try
            {
                // Method to submit the values for bulk requests. No loop
                submitBulkJob(sObject, queryAll);
                // Add to jobDetailMap <,> when job successful; used in monitorJobs()
            }
            catch (Exception e)
            {
                // Set the params and send info to observer
            }
        }
    }

    private void monitorJobs() throws InterruptedException
    {
        while (jobdetailMap.size() > 0)
        {
            for (Iterator<Entry<SalesforceObject, JobDetail>> iterator = jobdetailMap.entrySet().iterator(); iterator.hasNext();)
            {
                Entry<SalesforceObject, JobDetail> entry = iterator.next();
                SalesforceObject sObject = entry.getKey();
                String sObjname = sObject.getsObjname();
                // Check for status and send info to observer
            }
            Thread.sleep(Constants.sleep5000);
        }
    }

    @Override
    public String call() throws Exception
    {
        submitJobs();
        monitorJobs();
        setsObjectstatus(null);
        return this.getClass().getSimpleName();
    }
}
submitJobs() iterates through the task list and submits each job. monitorJobs() iterates over the task list, checks the status of each job, and does not terminate until everything is complete.
Since close() already closes and terminates this, I noticed that the JobManager still notifies the Connector, and I frequently end up with a TaskRejectedExecution. Does this imply that shutdownNow() does not terminate the instance? Follow-up: if I use a parallel stream to submit the jobs in the JobManager and the thread terminates, how should I handle ending the parallel stream?