Stop UI-method until Async task is finished - java

Sorry for my poor English.
I'm experimenting with a Java class that needs to do UI work, but the UI work has to wait for an AsyncTask. The AsyncTask retrieves a SOAP API response from the internet; once the response is retrieved, it is assigned to a global variable jsonResponseBody. A UI method then uses jsonResponseBody to do the UI work.
In my current code, I use a while loop to stop execution from moving on before jsonResponseBody is ready. Is a while loop the best idea here? I suspect the while loop will slow down the main thread, no?
// Pre-async-task stuff is run
connectDbAsync(db, sqlQuery); // This will set jsonResponseBody sooner or later
while (jsonResponseBody == null) {
    // Do nothing, just busy-wait
}
// Post-async-task stuff which uses jsonResponseBody

When performing asynchronous tasks in Java there are several ways to handle the output. One way, as you discovered, is to use a loop to block execution until the task completes. I personally prefer to use a thread pool and Future objects to wait on my threads to complete. There are advantages and disadvantages to both approaches, but be aware that your while loop busy-waits: it spins at full speed, burning CPU until jsonResponseBody is set, and if it runs on the UI thread it will freeze the UI in the meantime. (Also, unless jsonResponseBody is declared volatile or otherwise synchronized, the loop is not even guaranteed to see the other thread's write.)
That said, the benefit of an asynchronous task is that it can do its thing while your code is doing something else. If you MUST wait for the asynchronous task to complete before continuing in your method, then the task could just as well be done synchronously, and you wouldn't need the loop to pause execution.
Example of blocking code that waits on a network poll of multiple "sites" before continuing execution. This shows the benefit of asynchronous tasks/multithreading when it comes to doing multiple things at once:
// Invoke the run method on each site concurrently and collect the Futures
List<Future<Site>> futures = threadmaker.invokeAll(
        active_sites.stream().map(TAG_SCANNER::new).collect(Collectors.toList()));
List<Site> alarm_sites = new ArrayList<>();
// Now fetch all the results serially; get() blocks until each task is done
for (Future<Site> result : futures) {
    // SOUND THE ALARMS
    alarm_sites.add(result.get());
}
// Continue synchronous method execution

You might have a look at Java's Future. You can use it to launch some code asynchronously while keeping a handle to it, so you can check whether it has finished (Future.isDone()) or block until it finishes (Future.get()).
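To make that concrete, here is a minimal sketch of replacing the busy-wait with an ExecutorService and Future.get(). The fetchJsonResponseBody method is a hypothetical stand-in for the asker's SOAP call:

```java
import java.util.concurrent.*;

public class FutureWait {
    // Hypothetical stand-in for the SOAP call that fills jsonResponseBody
    static String fetchJsonResponseBody() {
        return "{\"status\":\"ok\"}";
    }

    public static String runBlocking() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // submit() returns immediately; the work runs on a pool thread
            Future<String> future = pool.submit(FutureWait::fetchJsonResponseBody);
            // Pre-async-task stuff could run here, in parallel with the fetch.
            // get() parks the calling thread (no busy spin) until the result is
            // ready, and the timeout guards against the call hanging forever.
            return future.get(5, TimeUnit.SECONDS);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runBlocking());
    }
}
```

Unlike the busy-wait, get() suspends the calling thread instead of spinning. On Android specifically, the UI thread should not block at all; doing the UI work from a callback such as AsyncTask's onPostExecute would be the idiomatic route there.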

Related

Java 8 concurrency in AWS lambda?

We have an AWS lambda function that needs to perform a few checks done by calling remote services. As long as one of them returns false, the lambda can return; otherwise, all the checks need to finish to make sure none returns false. Right now I am using a parallel stream to run the tasks, as they can run independently.
In a may-not-be-rare situation, the main thread returns while one of the tasks is still running on its thread, or blocked waiting on I/O, because short-circuiting has already seen a false from another task. The AWS Lambda documentation says that all threads in a Lambda are frozen when the main thread returns, and thawed once the Lambda handles the next request. Will the busy/blocked thread keep working on the original task after being re-activated, or will it take on the new task for the current request?
I would really appreciate it if any Lambda gurus could share some insights.
I hope I understood correctly: you want to perform activities in parallel and wait for them to finish.
I just read a comment on Stack Overflow saying the following:
Streams is about data-parallelism; data parallel problems are CPU-bound, not IO-bound. It seems that you're simply looking to run a number of mostly unrelated IO-intensive tasks concurrently. Use a plain-old thread pool for that; your first example is an ideal candidate for ExecutorService.invokeAll()
Maybe ExecutorService can help. I don't know how your code is structured, but I can propose something like this:
int processors = Runtime.getRuntime().availableProcessors();
ExecutorService executorService = Executors.newFixedThreadPool(processors);
List<Callable<Boolean>> services = getURLToCheck().parallelStream()
        .map(this::checkService)
        .collect(Collectors.toList());
try {
    List<Future<Boolean>> futures = executorService.invokeAll(services);
    // do your validation with the concurrent tasks.
} catch (InterruptedException e) {
    // Handle as you wish
}
Where also:
private List<URL> getURLToCheck() {
    // Fetch your URLs from wherever :)
}

private Callable<Boolean> checkService(URL url) {
    // Logic to check the service
}
The Future class has two key methods that may be useful for you: isDone() and get().
The first indicates whether the task has finished; the second waits for it to finish, rethrowing any exception that occurred inside, wrapped in an ExecutionException. Maybe you can combine those methods to do the validation. Thinking quickly, I imagined a while loop where you ask whether each future has finished and, if so, get its validation result and break out of the loop on false. But I don't love that design, haha.
I hope I made myself clear, and I hope this helps. If not, I tried my best.
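As an aside on the short-circuit requirement from the question ("as long as one of them returns false, lambda can return"): an ExecutorCompletionService lets you consume results in completion order rather than submission order, so you can return on the first false and cancel the stragglers. A minimal sketch, where the check Callables are placeholders:

```java
import java.util.*;
import java.util.concurrent.*;

public class ShortCircuitChecks {
    // Returns false as soon as any check yields false; true only if all pass.
    public static boolean allChecksPass(List<Callable<Boolean>> checks)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, checks.size()));
        CompletionService<Boolean> ecs = new ExecutorCompletionService<>(pool);
        List<Future<Boolean>> futures = new ArrayList<>();
        try {
            for (Callable<Boolean> c : checks) {
                futures.add(ecs.submit(c));
            }
            for (int i = 0; i < checks.size(); i++) {
                try {
                    // take() hands back results in completion order
                    if (!ecs.take().get()) return false; // first false short-circuits
                } catch (ExecutionException e) {
                    return false; // assumption: a check that throws counts as a failure
                }
            }
            return true;
        } finally {
            for (Future<Boolean> f : futures) f.cancel(true); // interrupt stragglers
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<Callable<Boolean>> checks = Arrays.asList(() -> true, () -> false, () -> true);
        System.out.println(allChecksPass(checks)); // false
    }
}
```

The explicit cancel-and-shutdown in the finally block also speaks to the Lambda freeze concern: nothing is left running on a stray thread when the method returns.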

Java: Controlling hardware tasks with pausable ThreadPoolExecutor

I want to implement single-producer / multi-consumer logic where each consumer's processing time depends on a hardware response.
EDIT
I have a Set of objects (devices). Each object (device) corresponds to a real hardware unit I want to simulate in software.
My main class distributes a list of tasks to each device. Each task takes a certain time to complete, which I want to control in order to simulate the hardware operation. Each device object has its own single-thread ExecutorService to manage its own queued tasks. A sleep in a task of one device object should not interfere with main or with other device objects' performance.
So far things are working, but I am not sure how to get a Future from the tasks without blocking the main thread with a while (!future.isDone()) loop. When I do that, two problems occur:
Task 1 is submitted to device[1].executor and sleeps to simulate hardware operation time.
Task 2 should be submitted to device[2].executor as soon as task 1 is submitted, but it isn't, because the main thread is held while waiting for task 1 to return a Future. This issue accumulates delay in the simulation, since every task added causes the next device to wait for the previous one to complete instead of running simultaneously.
The orange line indicates a command forcing the device to wait for 1000 milliseconds.
When the Future returns, a new task is then submitted to device 2, but it is already one second late, seen in the blue line. And so on; the green line shows the delay increasing.
If I don't use a Future to learn when tasks finished, the simulation seems to run correctly. I couldn't find a way to use future.isDone() without creating a new thread just to check it. I would really be glad if someone could advise me how to proceed in this scenario.
If your goal is to implement something where each consumer task is talking to a hardware device during the processing of its task, then the run method of the task should simply talk to the device and block until it receives the response from the device. (How you do that will depend on the device and its API ...)
If your goal is to do the above with a simulated device (i.e. for testing purposes) then have the task call Thread.sleep(...) to simulate the time that the device would take to respond.
Based on your problem description (as I understand it), the PausableSchedulerThreadPoolExecutor class that you have found won't help. What that class does is to pause the threads themselves. All of them.
UPDATE
task 2 should be submitted to device[2].executor as soon as task 1 is submitted, but it isn't, because the main thread is held while waiting for task 1 to return a Future.
That is not correct. The Future object is returned immediately, when the task is submitted.
Your mistake (probably) is that the main thread is calling get on the Future. That will block. But the point is that if your main thread actually needs to call get on the Future before submitting the next task, then it is essentially single-threaded.
Real solution: figure out how to break the dependency that makes your application single-threaded. (But beware: if you pass the Future as a parameter to a task, then the corresponding worker thread may block. Unless you have enough threads in the thread pool, you could end up with starvation and reduced concurrency.)
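To make that concrete, here is a minimal sketch of the non-blocking submission order: collect the Future from each device's executor first, and only call get() after everything has been submitted, so the simulated hardware delays overlap. The Device class and the sleep-based task are simplified stand-ins for the asker's setup:

```java
import java.util.concurrent.*;

public class DeviceSim {
    // One single-thread executor per device, so each device queues its own tasks
    static class Device {
        final ExecutorService executor = Executors.newSingleThreadExecutor();

        Future<Long> runTask(long sleepMillis) {
            // submit() hands back the Future immediately; no waiting here
            return executor.submit(() -> {
                Thread.sleep(sleepMillis); // simulated hardware response time
                return sleepMillis;
            });
        }
    }

    public static long simulate() throws Exception {
        Device d1 = new Device(), d2 = new Device();
        long start = System.nanoTime();
        // Submit to BOTH devices before touching any Future
        Future<Long> f1 = d1.runTask(200);
        Future<Long> f2 = d2.runTask(200);
        // Only now collect results; the two 200 ms sleeps overlapped,
        // so total elapsed time stays well under the serial 400 ms
        f1.get();
        f2.get();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        d1.executor.shutdown();
        d2.executor.shutdown();
        return elapsedMillis;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(simulate());
    }
}
```

If the main thread must react as each task finishes (rather than in submission order), an ExecutorCompletionService spanning the device executors, or CompletableFuture callbacks, would avoid polling isDone() from a watcher thread.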

Run parallel tasks in a long running application

I am building a long-running application, modeled as a service based on service-oriented architecture. Call it 'serviceA'. It has an activity to perform, call it 'activityA', whenever an API call is made to it.
activityA has an activity handler that has to perform 'n' tasks in parallel, after which it consolidates and returns the result to the client who called the serviceA API.
I am planning to use the ExecutorService to achieve this parallelism.
There are 2 ways to go ahead with this:
Create ExecutorService in a singleton scope, and have it as an attribute of the activity handler. Thus this same ExecutorService object is available throughout the lifetime of the service. When a new request comes, handler uses this ExecutorService object to submit parallel tasks. Then wait on the Future objects for certain timeout time. After all the parallel tasks complete, consolidate and return the activityA response.
Create a new ExecutorService object every time a request to activityA is received, in the activity handler. Submit the parallel tasks to this object, wait for the Future results up to a certain timeout, consolidate the results, call shutdown on the ExecutorService object, and return the activityA API response.
Thus:
Which of the two approaches should be followed? The major difference between them is the lifetime of the ExecutorService object.
The service is expected to be called at a volume of ~15k transactions per second, if that data helps with deciding between the two approaches.
The advantage of the 1st approach is that we avoid the overhead of creating and shutting down new ExecutorService objects and threads. But what happens when there is no Future result by the timeout? Does the thread shut down automatically? Is it available for new requests coming into the ExecutorService thread pool? Or will it sit in some waiting state and eat up memory, in which case we manually need to do something (and what)?
Also, is the timeout we pass to future.get() measured from the time we make the get call, or from the time we submitted the task to the executor service?
Please also let me know if either of the two ways is the obvious approach to this problem.
Thanks.
The first way looks like the obvious and correct way to solve this problem, especially with the given amount of transactions. You certainly don't want to restart threads.
The Future.get timeout doesn't affect the executing thread: it will continue to run the task until the task either completes or throws an exception. Until then, that thread won't accept new tasks (though other threads in the same executor will). In this case you may want to cancel the task explicitly by invoking Future.cancel, to free the thread for new work. This requires the task itself to respond properly to interruption (instead of looping forever, for example, or sitting blocked on I/O). However, this would be the same for any threading approach, since interruption is the only safe way to terminate a thread anyway. To mitigate this issue you could use a dynamic pool with a maximum number of threads greater than n, which allows new tasks to be processed while the stuck tasks are in the process of terminating.
It's from the time you call it.
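A small sketch of both points: the timeout is counted from the get() call, and on timeout the task keeps running until it is cancelled explicitly. The 10-second sleep stands in for a stuck task:

```java
import java.util.concurrent.*;

public class TimeoutCancel {
    public static boolean runWithTimeout() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> slow = pool.submit(() -> {
            Thread.sleep(10_000); // simulates a task that overruns the budget
            return "never";
        });
        try {
            // The 100 ms budget starts NOW, not at submission time
            slow.get(100, TimeUnit.MILLISECONDS);
            return false;
        } catch (TimeoutException e) {
            // The worker thread is still running the task at this point;
            // cancel(true) interrupts it so the thread can take new work
            slow.cancel(true);
            return true;
        } catch (ExecutionException e) {
            return false;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithTimeout()); // true: timed out, then cancelled
    }
}
```

Note that cancel(true) only helps if the task honors interruption; a task blocked on non-interruptible I/O will hold its thread regardless.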

ForkJoinTask completion handler

I have a long-running calculation that I have split up with Java's ForkJoinTask.
Java's FutureTask provides a template method done(). Overriding this method allows for "registering a completion handler".
Is it possible to register a completion handler for a ForkJoinTask?
I am asking because I don't want to have blocking threads in my application - but my application will have a blocking thread as soon as I retrieve the calculation result via calls to result = ForkJoinPool.invoke(myForkJoinTask) or result = ForkJoinPool.submit(myForkJoinTask).get().
I think you mean "lock-free" programming (http://en.wikipedia.org/wiki/Non-blocking_algorithm)? While FutureTask.get() may block the current thread (and thus leave an idle CPU), ForkJoinTask.get() (or join()) tries to keep the CPU busy.
This works well if you are able to split your problem into many small pieces (ForkJoinTasks). If one FJTask is internally waiting for the result of another task which is not yet ready, the ForkJoinTask tries to pick up other work (tasks) from its ForkJoinPool and executes those tasks in the meantime.
As long as all your tasks are CPU-bound, it works fine: all your CPUs are kept busy.
It does NOT work if any of your tasks waits for some external event (e.g. sending a REST call to the Mars rover). Also, the problem should form a DAG, or you may get a deadlock. But as long as you only join tasks you forked earlier within the same task, it works well; even better if you join the task you forked last.
So it is not too bad to call get() or join() within/between your tasks.
You mentioned a completion handler to solve the problem. If you are implementing the ForkJoinTask yourself, have a look at RecursiveTask or even RecursiveAction. You will implement compute(), and you can easily forward the result of each task to some collector at the end of your compute() function instead of returning it.
But you have to consider that your collector will be called concurrently! For adding values or counting completions, have a look at java.util.concurrent.atomic, and avoid synchronized blocks; otherwise all your tasks will wait at this single bottleneck and only one CPU will keep working.
I think propagating the results involves more problems than returning them (since the FJPool handles the latter). In addition, it becomes difficult to decide (and to communicate to the outside) at which point your final result is done.
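A minimal sketch of the "forward the result from compute() to a concurrent collector" idea, using RecursiveAction and a LongAdder; the array-summing task is just an illustration:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.LongAdder;

public class SumAction extends RecursiveAction {
    // Shared lock-free collector; add() is called concurrently from many tasks
    static final LongAdder total = new LongAdder();

    final long[] data;
    final int lo, hi;

    SumAction(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected void compute() {
        if (hi - lo <= 1_000) {              // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            total.add(sum);                  // the "completion handler": push result out
        } else {                             // otherwise: fork both halves
            int mid = (lo + hi) >>> 1;
            invokeAll(new SumAction(data, lo, mid), new SumAction(data, mid, hi));
        }
    }

    public static long sum(long[] data) {
        total.reset();
        ForkJoinPool.commonPool().invoke(new SumAction(data, 0, data.length));
        return total.sum();
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = 1;
        System.out.println(sum(data)); // 10000
    }
}
```

Note the caveat from above still applies: the invoke() at the root blocks its caller, and with side-effect collection you must decide separately when "all results are in" (here, when invoke() returns).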

How can I get the jobs that are in execution in a ScheduledThreadPoolExecutor?

I need to force the release of resources when a task is interrupted. For that, I implemented this solution. But with it, when I call shutdown(), all the tasks in ScheduledThreadPoolExecutor.getQueue() are forced correctly, yet some of my jobs keep their resources. I examined the behavior closely and figured out the following: when a task is executing, it is not present in the ScheduledThreadPoolExecutor queue (I know that sounds obvious). The problem is that I need to force the release of resources for all jobs, whether queued or executing.
So, how can I get the jobs that are in execution? Or do you have a better idea?
You can maintain a list of all the Futures you create when you submit the jobs, then use that list to cancel them all.
Don't you want to call
executor.shutdownNow()
That will attempt to cancel currently running tasks (using Thread.interrupt, so you'll need to implement an 'interruption policy' in each task that checks the interrupt flag).
from the javadoc
Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt, so any task that fails to respond to interrupts may never terminate.
shutdownNow() also returns the list of waiting tasks, so you can always put them back onto a 'wait list' rather than lose them completely. You might also want to follow that up with an await-termination call to avoid runaway code, for example executor.awaitTermination(...).
tempus-fugit has some handy classes for handling this. You just call
shutdown(executor).waitingForShutdown(millis(400));
see here for details.
Also, regarding the solution you outline in the blog post: I'm not sure it's quite right. Future.cancel (without the interrupt flag) will only stop the task from being scheduled. If you updated the example in the blog to allow interruption (i.e. cancel(true)), it would be equivalent (more or less) to shutdownNow. That is to say, it will call interrupt on the underlying task, which (if you've implemented an interruption policy) will stop its processing. As for cleaning up after interruption, you just need to make sure you handle that appropriately within the interruption-policy implementation. The upshot is that I think you can cancel and clean up correctly using shutdownNow (or cancel(true)).
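A small sketch of that pattern: the task's catch block is its interruption policy (release resources there), and shutdownNow delivers the interrupt to the running task. The latch here is just a way to observe that the cleanup actually ran:

```java
import java.util.List;
import java.util.concurrent.*;

public class InterruptibleShutdown {
    public static boolean demo() throws InterruptedException {
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
        CountDownLatch cleanedUp = new CountDownLatch(1);
        executor.submit(() -> {
            try {
                Thread.sleep(60_000); // stands in for real work or blocked I/O
            } catch (InterruptedException e) {
                // Interruption policy: release resources here, then exit promptly
                cleanedUp.countDown();
            }
        });
        Thread.sleep(100); // let the task start running (it's now off the queue)
        // Interrupts the RUNNING task and returns the tasks still waiting
        List<Runnable> neverStarted = executor.shutdownNow();
        executor.awaitTermination(2, TimeUnit.SECONDS);
        return cleanedUp.await(2, TimeUnit.SECONDS) && neverStarted.isEmpty();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // true: cleanup ran, queue was empty
    }
}
```

This covers exactly the gap the asker hit: the executing job is no longer in getQueue(), but shutdownNow still reaches it via interruption.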
