Async custom method in Vert.x - Java

I am trying to create my own async custom method in Vert.x, something similar to their code:
// call the external service
WebClient client = WebClient.create(vertx);
client.get(8080, "localhost", "/fast").send(ar -> {
    if (ar.succeeded()) {
        HttpResponse<Buffer> response = ar.result();
        System.out.println("response.bodyAsString() " + response.bodyAsString());
    } else {
        System.out.println("Something went wrong " + ar.cause().getMessage());
    }
});
When you run this code, the call returns immediately without blocking the calling thread, and the provided handler is executed when the endpoint responds.
I found ways to do it with executeBlocking, createSharedWorkerExecutor(...).executeBlocking and the event bus, but in all of them a thread gets blocked.
I am looking for a way to do it without blocking the container thread, but I can't find one. There is a related post:
How can I implement custom asynchronous operation in Vert.x?
I tried to do it but it also blocks the thread:
vertx.runOnContext(v -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
    }
    handler.handle(Future.succeededFuture("result"));
});
The code above runs in the same thread but doesn't run concurrently, so I assume the thread is blocked.
Is there any way to do it?

Calling Thread.sleep() like that sends your current JVM thread to sleep, effectively blocking your Vert.x event loop, which runs on the same thread. That is not the idiomatic way in Vert.x to execute blocking code.
See here: "The Golden Rule - don't block the event loop".
If you have to run blocking code, like Thread.sleep(), you should implement that code using a worker verticle. Worker verticles use JVM threads from a different thread pool and consequently do not block the event loop.
The first code example that you posted above does not use blocking code, as you correctly described yourself. It uses the idiomatic way with asynchronous, non-blocking event handlers.
EDIT
See this short example of how to start a very simple worker verticle.
Code from the class WorkerVerticle will never block the event loop. You make it a worker during verticle deployment by setting the correct option, as shown in the DeployerVerticle.
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;

public class DeployerVerticle extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        System.out.println("Main verticle has started, let's deploy another...");
        // Deploy it as a worker verticle
        vertx.deployVerticle("io.example.WorkerVerticle",
                new DeploymentOptions().setWorker(true));
    }
}
// ----
package io.example;

import io.vertx.core.AbstractVerticle;

/**
 * An example of a worker verticle
 */
public class WorkerVerticle extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        System.out.println("[Worker] Starting in " +
                Thread.currentThread().getName());
        // consume event bus messages sent to address "sample.data"
        // reply with the incoming message transformed to upper case
        vertx.eventBus().<String>consumer("sample.data", message -> {
            try {
                Thread.sleep(1000); // will not block the event loop,
                                    // but only this worker verticle
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("[Worker] Consuming data in " +
                    Thread.currentThread().getName());
            String body = message.body();
            message.reply(body.toUpperCase());
        });
    }
}
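For completeness, here is a minimal sketch (assuming Vert.x 3.8 or newer, where eventBus().request() is available) of how another verticle could call this worker asynchronously. The handler runs later on the caller's own event loop, so nothing blocks:
// ask the worker to transform "hello"; the reply arrives asynchronously
vertx.eventBus().<String>request("sample.data", "hello", reply -> {
    if (reply.succeeded()) {
        System.out.println("Got reply: " + reply.result().body()); // prints "HELLO"
    } else {
        System.out.println("Request failed: " + reply.cause().getMessage());
    }
});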

Related

How to keep the calling thread on hold until all the processing threads have completed?

I am working on the scenario described below:
We are consuming messages from Kafka, and each received message should be processed in parallel (one message per thread). I have to keep the main (or calling) thread on hold until all the received messages are done processing.
The number of messages is known and is available from the Kafka message headers.
Only after processing has completed for all the threads should the calling thread proceed.
I tried using a CountDownLatch with the count set to the number of messages to be received, but this keeps the main thread on hold and does not allow it to consume the next messages.
Is there any way this can be achieved?
Code Snippet:
class MessageHandler {
    @Autowired private ProcessorAsync processorAsync;

    public void handle() throws InterruptedException {
        CountDownLatch countdown = new CountDownLatch(noOfMessages);
        CompletableFuture<Void> future = processorAsync.processMessage(message, countdown);
        countdown.await();
        if (future.isDone()) {
            // post message processing methods/api calls
            m1();
            m2();
        }
    }
}
class ProcessorAsync {
    @Async("customThreadPool") // max 20 threads
    public CompletableFuture<Void> processMessage(Message msg, CountDownLatch countdown) {
        // DB update statements
        // ...
        countdown.countDown();
        return CompletableFuture.allOf();
    }
}
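A non-blocking alternative, sketched here as an assumption rather than taken from the post: if processMessage() is changed to return one CompletableFuture per message and the latch is dropped, the post-processing can be chained instead of awaited (messages is an illustrative collection of the received messages):
List<CompletableFuture<Void>> futures = new ArrayList<>();
for (Message msg : messages) {
    futures.add(processorAsync.processMessage(msg));   // one future per message
}
// run the follow-up only when every message has been processed,
// without blocking the consuming thread
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
    .thenRun(() -> {
        m1();
        m2();
    });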

Solving consumer producer concurrency issue with SynchronousQueue. Fairness property not working

I am having an issue debugging my SynchronousQueue. It's in Android Studio, but that shouldn't matter since it's Java code. I am passing true to the constructor of SynchronousQueue so it's "fair", meaning it's a FIFO queue. But it's not obeying the rules: it's still letting the consumer print first and the producer after. The second issue I have is that I want these threads to never die. Do you think I should use a while loop in the producer and the consumer thread and let them keep producing and consuming?
Here is my simple code:
package com.example.android.floatingactionbuttonbasic;

import java.util.concurrent.SynchronousQueue;
import trikita.log.Log;

public class SynchronousQueueDemo {
    public SynchronousQueueDemo() {
    }

    public void startDemo() {
        final SynchronousQueue<String> queue = new SynchronousQueue<String>(true);

        Thread producer = new Thread("PRODUCER") {
            public void run() {
                String event = "FOUR";
                try {
                    queue.put(event); // thread will block here
                    Log.v("myapp", "published event:", Thread
                            .currentThread().getName(), event);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        producer.start(); // starting publisher thread

        Thread consumer = new Thread("CONSUMER") {
            public void run() {
                try {
                    String event = queue.take(); // thread will block here
                    Log.v("myapp", "consumed event:", Thread
                            .currentThread().getName(), event);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        consumer.start(); // starting consumer thread
    }
}
To start the threads I simply call new SynchronousQueueDemo().startDemo();
The logs always look like this, no matter what I pass to the SynchronousQueue constructor to make it "fair":
/SynchronousQueueDemo$2$override(26747): myapp consumed event: CONSUMER FOUR
V/SynchronousQueueDemo$1$override(26747): myapp published event:PRODUCER FOUR
Checking the docs here, it says the following:
public SynchronousQueue(boolean fair)
Creates a SynchronousQueue with the specified fairness policy.
Parameters:
fair - if true, waiting threads contend in FIFO order for access; otherwise the order is unspecified.
The fairness policy relates to the order in which the queue is read. The order of execution for a producer/consumer is for the consumer to take(), releasing the producer (which was blocking on put()). Set fairness=true if the order of consumption is important.
If you want to keep the threads alive, have a loop condition which behaves well when interrupted (see below). Presumably you want to put a Thread.sleep() in the Producer, to limit the rate at which events are produced.
public void run() {
    boolean interrupted = false;
    while (!interrupted) {
        try {
            // or sleep, then queue.put(event)
            String event = queue.take();
        } catch (InterruptedException e) {
            interrupted = true;
        }
    }
}
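For the producer side, a matching loop could look like this (a sketch that reuses the event value from your example):
public void run() {
    boolean interrupted = false;
    while (!interrupted) {
        try {
            Thread.sleep(1000);   // limit the rate at which events are produced
            queue.put("FOUR");    // blocks until a consumer takes the event
        } catch (InterruptedException e) {
            interrupted = true;
        }
    }
}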
SynchronousQueue works on a simple concept: you can only produce if you have a consumer.
1) If you start doing queue.put() without any queue.take(), the thread will block there. So as soon as you have a queue.take(), the producer thread will be unblocked.
2) Similarly, if you start doing queue.take() it will block until there is a producer. So once you have a queue.put(), the consumer thread will be unblocked.
So as soon as queue.take() is executed, both the producer and consumer threads are unblocked. But you do realize that the producer and consumer are running in separate threads, so either of the messages printed after the blocking calls can execute first. In my case the order of the output was this; the producer was printed first.
V/SynchronousQueueDemo$1$override(26747): myapp published event:PRODUCER FOUR
/SynchronousQueueDemo$2$override(26747): myapp consumed event: CONSUMER FOUR

Clever asynchronous repaint in Java

I have a use-case coming from a GUI problem I would like to submit to your sagacity.
Use case
I have a GUI that displays a computation result depending on some parameters the user sets in the GUI. For instance, when the user moves a slider, several events are fired, and they all trigger a new computation. When the user adjusts the slider value from A to B, dozens of events are fired.
But the computation can take up to several seconds, whereas the slider adjustment can fire an event every few hundred milliseconds.
How do I write a proper thread that listens to these events and filters them so that the repaint of the results stays lively? Ideally you would like something like:
start a new computation as soon as the first change event is received;
cancel the first computation if a new event is received, and start a new one with the new parameters;
but ensure that the last event will not be lost, because the last completed computation needs to be the one with the last updated parameters.
What I have tried
A friend of mine (A. Cardona) proposed this low-level approach of an Updater thread that prevents too many events from triggering a computation. I copy-paste it here (GPL):
He puts this in a class that extends Thread:
public void doUpdate() {
    if (isInterrupted())
        return;
    synchronized (this) {
        request++;
        notify();
    }
}

public void quit() {
    interrupt();
    synchronized (this) {
        notify();
    }
}

public void run() {
    while (!isInterrupted()) {
        try {
            final long r;
            synchronized (this) {
                r = request;
            }
            // Call refreshable update from this thread
            if (r > 0)
                refresh(); // Will trigger re-computation
            synchronized (this) {
                if (r == request) {
                    request = 0; // reset
                    wait();
                }
                // else loop through to update again
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public void refresh() {
    // Execute computation and paint it
    ...
}
Every time the GUI sends an event stating that parameters have changed, we call updater.doUpdate(). This causes the refresh() method to be called much less frequently.
But I have no control over this.
Another way?
I was wondering if there is another way to do this that uses the java.util.concurrent classes, but I could not work out from the Executors framework which one I should start with.
Does any of you have some experience with a similar use case?
Thanks
If you're using Swing, the SwingWorker provides capabilities for this, and you don't have to deal with the thread pool yourself.
Fire off a SwingWorker for each request. If a new request comes in and the worker is not done, you can cancel() it and just start a new SwingWorker. Regarding what the other poster said, I don't think publish() and process() are what you are looking for (although they are also very useful), since they are meant for a case where the worker might fire off events faster than the GUI can process them.
ThingyWorker worker;

public void actionPerformed(ActionEvent e) {
    if (worker != null) worker.cancel(true);
    worker = new ThingyWorker();
    worker.execute();
}

class ThingyWorker extends SwingWorker<YOURCLASS, Object> {
    @Override protected YOURCLASS doInBackground() throws Exception {
        return doSomeComputation(); // Should be interruptible
    }

    @Override protected void done() {
        worker = null; // Reset the reference to worker
        YOURCLASS data;
        try {
            data = get();
        } catch (Exception e) {
            // May be InterruptedException or ExecutionException
            e.printStackTrace();
            return;
        }
        // Do something with data
    }
}
Both the action and the done() method are executed on the same thread, so they can effectively check the reference to see whether there is an existing worker.
Note that effectively this is doing the same thing that allows a GUI to cancel an existing operation, except the cancel is done automatically when a new request is fired.
I would provide a further degree of disconnect between the GUI and the controls by using a queue.
If you use a BlockingQueue between the two processes, then whenever the controls change you can post the new settings to the queue.
Your graphics component can read the queue whenever it likes and act on the arriving events or discard them as necessary.
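A minimal sketch of that idea (the names settings, slider and recompute are illustrative, not from the original post):
BlockingQueue<Integer> settings = new LinkedBlockingQueue<>();

// GUI thread, e.g. in the slider's ChangeListener:
settings.offer(slider.getValue());

// Rendering thread:
try {
    while (true) {
        Integer value = settings.take();          // wait for at least one change
        Integer newer;
        while ((newer = settings.poll()) != null) {
            value = newer;                        // discard stale values, keep the latest
        }
        recompute(value);                         // long-running computation
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}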
I would look into SwingWorker.publish() (http://docs.oracle.com/javase/6/docs/api/javax/swing/SwingWorker.html)
publish() allows the background thread of a SwingWorker object to cause calls to the process() method, but not every publish() call results in a process() call. If multiple publish() calls are made before process() returns and can be called again, SwingWorker concatenates the parameters used for the multiple publish() calls into one call to process().
I had a progress dialog which displayed files being processed; the files were processed faster than the UI could keep up with, and I didn't want the processing to slow down just to display the file names. I used this and had process() display only the final filename sent to it; all I wanted in this case was to indicate to the user where the current processing was, since they weren't going to read all the filenames anyway. My UI worked very smoothly with this.
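A rough sketch of that pattern (files, handle() and statusLabel are illustrative): the worker publishes every filename, but process() only displays the last chunk it receives:
class FileWorker extends SwingWorker<Void, String> {
    @Override
    protected Void doInBackground() {
        for (File f : files) {
            handle(f);                // long-running work per file
            publish(f.getName());     // calls may be coalesced by SwingWorker
        }
        return null;
    }

    @Override
    protected void process(List<String> chunks) {
        // chunks can contain several coalesced publish() values;
        // show only the most recent one on the EDT
        statusLabel.setText(chunks.get(chunks.size() - 1));
    }
}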
Take a look at the implementation of javax.swing.SwingWorker (source code in the Java JDK),
with a focus on the handshaking between two methods: publish and process.
These won't be directly applicable, as-is, to your problem - however they demonstrate how you might queue (publish) updates to a worker thread and then service them in your worker thread (process).
Since you only need the last work request, you don't even need a queue for your situation: keep only the last work request. Sample that "last request" over some small period (say 1 second) to avoid stopping and restarting many times per second, and if it has changed, stop the work and restart it.
The reason you don't want to use publish/process as-is is that process() always runs on the Swing Event Dispatch Thread, which is not at all suitable for long-running calculations.
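A sketch of that sampling idea (Request, cancelRunningComputation() and startComputation() are illustrative placeholders):
AtomicReference<Request> lastRequest = new AtomicReference<>();

// Event thread: just overwrite whatever was there before
lastRequest.set(newRequest);

// Worker thread: sample roughly once per second and restart only on change
Request current = null;
try {
    while (true) {
        Thread.sleep(1000);                      // sampling period
        Request latest = lastRequest.get();
        if (latest != null && latest != current) {
            cancelRunningComputation();          // stop the stale computation
            current = latest;
            startComputation(current);           // start with the newest parameters
        }
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}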
The key here is that you want to be able to cancel an ongoing computation. The computation must frequently check a condition to see if it needs to abort.
volatile Param newParam;

Result compute(Param param)
{
    loop
        compute a small sub-problem
        if (newParam != null) // abort
            return null;
    return result
}
To hand over param from the event thread to the compute thread:
synchronized void put(Param param) // invoked by event thread
    newParam = param;
    notify();

synchronized Param take()
    while (newParam == null)
        wait();
    Param param = newParam;
    newParam = null;
    return param;
And the compute thread does:
public void run()
    while (true)
        Param param = take();
        Result result = compute(param);
        if (result != null)
            paint result in event thread
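Spelled out as compilable Java, the same structure could look like this (a sketch: Param, Result, steps() and paint() are placeholders for your own types and repaint code):
import javax.swing.SwingUtilities;

class ComputeLoop implements Runnable {
    private volatile Param newParam;              // latest requested parameters

    synchronized void put(Param param) {          // invoked by the event thread
        newParam = param;
        notify();
    }

    private synchronized Param take() throws InterruptedException {
        while (newParam == null)
            wait();
        Param param = newParam;
        newParam = null;
        return param;
    }

    private Result compute(Param param) {
        Result result = new Result();
        for (int i = 0; i < param.steps(); i++) { // compute a small sub-problem
            // ... partial work on 'result' ...
            if (newParam != null)                 // abort: a newer request arrived
                return null;
        }
        return result;
    }

    public void run() {
        try {
            while (true) {
                Param param = take();
                Result result = compute(param);
                if (result != null)
                    SwingUtilities.invokeLater(() -> paint(result)); // paint in the event thread
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // allows the loop to be shut down
        }
    }
}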

Concurrency issue when implementing a simple HTTP request queue for an Android app

I have a problem when I try to implement a queue for HTTP requests from scratch. Sorry, this might be a very naive concurrency problem to someone.
Basically I want my application to execute only one request at any time. Extra requests go into a queue and execute later.
I am aware of other advanced stuff such as FutureTask and execution pools, but I want an answer because I am curious about how to solve the basic concurrency problem. Following is my class that maintains the requestQueue:
private Queue<HttpRequest> requestQueue;
private AsyncTask<Void, Void, Boolean> myAsyncTask = null;

public boolean send(HttpRequest hr) {
    // if there isn't an existing task, start a new one, otherwise just enqueue the request
    // COMMENT 1.
    if (myAsyncTask == null) {
        requestQueue.offer(hr);
        myAsyncTask = new RequestTask();
        myAsyncTask.execute();
        return true;
    } else {
        // enqueue
        // COMMENT 2
        requestQueue.offer(hr);
        return true;
    }
}

// nested class
class RequestTask extends AsyncTask<Void, Void, Boolean> {

    protected Boolean doInBackground(Void... v) {
        // send all requests in the queue
        while (requestQueue.peek() != null) {
            HttpRequest r = requestQueue.poll();
            // ... leave out code about executing the request
        }
        return true;
    }

    protected void onPostExecute(Boolean success) {
        // COMMENT 3: if the scheduler stops here, just before myAsyncTask is set to null
        myAsyncTask = null;
    }
}
The question is: what if the thread scheduler stops the background thread at the point COMMENT 3 (just before myAsyncTask is set to null)?
// COMMENT 3: if the scheduler stops here, just before myAsyncTask is set to null
myAsyncTask = null;
At that moment, another thread happens to reach the point COMMENT 1 and enters the if ... else ... block. Because myAsyncTask has not been set to null yet, the request gets enqueued in the else block (COMMENT 2) but no new AsyncTask will be created, which means the queue will get stuck!
// COMMENT 1.
if (myAsyncTask == null) {
    requestQueue.offer(hr);
    myAsyncTask = new RequestTask();
    myAsyncTask.execute();
    return true;
} else {
    // enqueue
    // COMMENT 2
    requestQueue.offer(hr);
}
I hope that is clear. There is a chance that the queue stops being processed. I am keen to know how to avoid this. Thank you in advance.
The way I would normally implement something like this is to create a class that extends Thread. This would contain a queue object (use whichever one you prefer) and would have methods for adding jobs. I'd use synchronization to keep everything thread safe. notify() and wait() can be used to avoid polling.
Here's an example that might help...
import java.util.*;

public class JobProcessor extends Thread
{
    private Queue<Object> queue = new LinkedList<Object>();

    public void addJob(Object job)
    {
        synchronized (queue)
        {
            queue.add(job);
            queue.notify(); // lets the thread know that an item is ready
        }
    }

    @Override
    public void run()
    {
        while (true)
        {
            Object job = null;
            synchronized (queue) // ensures thread safety
            {
                // waits until something is added to the queue.
                try {
                    while (queue.isEmpty()) queue.wait();
                } catch (InterruptedException e) {
                    // the wait method can throw an exception you have to catch,
                    // but you can ignore it if you like.
                }
                job = queue.poll();
            }
            // at this point you have the job object and can process it!
            // with minimal time waiting on other threads.
            // be sure to check that job isn't null anyway,
            // in case you got an InterruptedException.

            // ... processing code ...

            // job done, loop back and wait for another job in the queue.
        }
    }
}
You pretty much just have to instantiate a class like this and start the thread, then begin inserting objects to process as jobs. When the queue is empty, the wait() causes this thread to sleep (and also temporarily releases the synchronization lock); notify() in the addJob method wakes it back up when required. Synchronization is a way of ensuring that only one thread has access to the queue at a time. If you're not sure how it works, look it up in the Java SDK reference.
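For example, usage could look like this (a small sketch; the jobs here are just strings):
JobProcessor processor = new JobProcessor();
processor.start();                   // starts the consumer thread
processor.addJob("first request");   // notify() wakes the thread up
processor.addJob("second request");  // queued while the first job is processed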
Your code doesn't have any thread-safety code in it (synchronization) and that's where your problem is. It's probably a little over-complicated, which won't help you debug it either. The main thing is that you need to add synchronization blocks, but make sure you keep them as short as possible.
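Applied to your send() method, the minimal fix is to make the check-and-start and the reset in onPostExecute use the same lock. A sketch (RequestSender stands in for your enclosing class; the queue itself should also be a thread-safe implementation such as ConcurrentLinkedQueue):
public synchronized boolean send(HttpRequest hr) {
    requestQueue.offer(hr);
    if (myAsyncTask == null) {                  // check-and-start is now atomic
        myAsyncTask = new RequestTask();
        myAsyncTask.execute();
    }
    return true;
}

// in RequestTask
protected void onPostExecute(Boolean success) {
    synchronized (RequestSender.this) {         // same lock as send()
        if (requestQueue.isEmpty()) {
            myAsyncTask = null;                 // nothing slipped in, safe to stop
        } else {
            myAsyncTask = new RequestTask();    // drain requests that arrived
            myAsyncTask.execute();              // after doInBackground finished
        }
    }
}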

Submitting a background task from a Spring MVC app

I've got a working Spring MVC app, and what I'm trying to do next is to start or submit a background task from the app.
Basically I'd like to keep the task going until it completes, even if the user decides to do something else in the app.
But I'd also like to be able to stop/kill/pause the task if needed. Since I haven't done this before, I'm looking for a good/better way to do this.
I found these to be useful:
http://blog.springsource.com/2010/01/05/task-scheduling-simplifications-in-spring-3-0/
How do you kill a thread in Java?
Java threads: Is it possible view/pause/kill a particular thread from a different java program running on the same JVM?
So I wanted to use an @Async task to submit my background task, and then use the thread's id to obtain it later on and stop it if needed?
Is this the right approach? I don't have any experience with multithreading, so I'm here to listen.
Code update:
public interface Worker {
    public void work();
    public void cancel();
}
Implementation:
#Component("asyncWorker")
public class AsyncWorker implements Worker {
#Async
public void work() {
String threadName = Thread.currentThread().getName();
System.out.println(" " + threadName + " beginning work");
try {
Thread.sleep(10000); // simulates work
} catch (InterruptedException e) {
System.out.println("I stopped");
}
System.out.println(" " + threadName + " completed work");
}
public void cancel() { Thread.currentThread().interrupt(); }
}
Controller for testing purposes:
@ResponseBody
@RequestMapping("/job/start")
public String start() {
    asyncWorker.work();
    return "start";
}

@ResponseBody
@RequestMapping("/job/stop")
public String stop() {
    asyncWorker.cancel();
    return "stop";
}
When I visit /job/start, I can't execute more than one task simultaneously; the second one starts to execute only after the first one has completed.
Also, when I visit /job/stop the process isn't stopped. What am I missing here?
Using the thread ID is too low-level and brittle. If you decided to use the @Async annotation (good choice), you can use a Future<T> to control the task execution. Basically your method should return a Future<T> instead of void:
@Async
public Future<Work> work() // ...
Now you can cancel() that Future or wait for it to complete:
@ResponseBody
@RequestMapping("/job/start")
public String start() {
    Future<Work> future = asyncWorker.work();
    // store future somewhere
    return "start";
}

@ResponseBody
@RequestMapping("/job/stop")
public String stop() {
    future.cancel(true);
    return "stop";
}
The tricky part is to store the returned Future object somehow so that it is available for subsequent requests. Of course you cannot use a field or a ThreadLocal. You can put it in the session; note however that Future is not serializable and won't work across clusters.
Since @Async is typically backed by a thread pool, chances are your task hasn't even started yet; cancelling will then simply remove it from the pool's queue. If the task is already running, you can check the isInterrupted() thread flag or handle InterruptedException to discover the cancel() call.
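For example, the worker could return a Future and react to cancellation like this (a sketch using Spring's AsyncResult; Work is the placeholder type from above):
@Async
public Future<Work> work() {
    try {
        for (int i = 0; i < 100; i++) {
            Thread.sleep(100);                    // simulates one chunk of work
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();       // future.cancel(true) interrupts us here
        return new AsyncResult<>(null);           // give up early
    }
    return new AsyncResult<>(new Work());         // normal completion
}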
