Vertx - wait till data available in redis - java

I am new to Vert.x and was exploring request-reply using the event bus.
I want to implement the flow below:
The user requests some data.
The controller sends a message on the event bus to a redis-processor verticle.
The redis-processor waits up to n seconds until the value is available in Redis (a background process keeps refreshing the cache, hence the wait).
The redis-processor sends a reply back to the controller.
The controller responds to the user.
I want to implement this in Vert.x since it can run asynchronously. Using the event bus I can isolate the controller from the processor, so the controller can accept multiple user requests and stay responsive under load. (I hope I am right about this!)
I have implemented this in a very crude fashion in Java/Vert.x and am stuck on the part below.
// receive request from controller
vertx.eventBus().consumer(REQUEST_PROCESSOR, evtHandler -> {
    String txnId = evtHandler.body().toString();
    LOGGER.info("Received message:: {}", txnId);
    this.redisAPI.get(txnId, result -> { // <=====
        String value = result.result().toString();
        LOGGER.info("Value in redis : {}", value);
        evtHandler.reply(value); // reply to controller
    });
});
Please see the line marked with the arrow. How can I wait for x seconds there without blocking the event loop?

That's actually very simple: you need a timer. Please see the docs for details, but you will need more or less something like this:
vertx.setTimer(1000, id -> {
    this.redisAPI.get(txnId, result -> {
        String value = result.result().toString();
        LOGGER.info("Value in redis : {}", value);
        evtHandler.reply(value); // reply to controller
    });
});
You might want to store the timer IDs somewhere so that you can cancel them, or so that you at least know something is still running when a shutdown request comes in for your verticle and can delay it. But this all depends on your needs.
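For illustration, here is a minimal sketch of that bookkeeping (this is not part of the original answer; the verticle and field names are made up):

import io.vertx.core.AbstractVerticle;
import java.util.HashSet;
import java.util.Set;

public class RedisProcessorVerticle extends AbstractVerticle {

    // hypothetical bookkeeping of outstanding timers
    private final Set<Long> pendingTimers = new HashSet<>();

    // schedule work and remember the timer ID until it fires
    private void schedule(long delayMs, Runnable action) {
        long timerId = vertx.setTimer(delayMs, id -> {
            pendingTimers.remove(id);
            action.run();
        });
        pendingTimers.add(timerId);
    }

    @Override
    public void stop() {
        // cancel anything still scheduled when the verticle is undeployed
        pendingTimers.forEach(vertx::cancelTimer);
        pendingTimers.clear();
    }
}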

As @mohamnag said, you could use a Vert.x timer.
Here is another example of how to use the timer. Note that the timer value is in milliseconds.
As an improvement, I recommend checking that the callback succeeded before attempting to get the value from redisAPI; this is done using the succeeded() method. In an asynchronous environment, getting that result could fail for several reasons (network errors, etc.).
vertx.setTimer(n * 1000, id -> {
    this.redisAPI.get(txnId, result -> {
        if (result.succeeded()) { // the callback succeeded in getting a value from redis
            String value = result.result().toString();
            LOGGER.info("Value in redis : {}", value);
            evtHandler.reply(value); // reply to controller
        } else {
            LOGGER.error("Value could not be retrieved from redis : {}", result.cause());
            evtHandler.fail(someIntegerCode, result.cause().getMessage()); // reply with failure-related info
        }
    });
});
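If the value might not be there yet after a single fixed delay, another non-blocking option is to poll with a periodic timer and give up after a deadline. This is only a sketch, not part of either answer; maxWaitMs and pollMs are made-up names:

// poll Redis every pollMs; reply as soon as the key appears,
// fail the message if nothing shows up within maxWaitMs
long maxWaitMs = 10_000;
long pollMs = 1_000;
long deadline = System.currentTimeMillis() + maxWaitMs;

vertx.setPeriodic(pollMs, timerId -> {
    redisAPI.get(txnId, result -> {
        if (result.succeeded() && result.result() != null) {
            vertx.cancelTimer(timerId);
            evtHandler.reply(result.result().toString());
        } else if (System.currentTimeMillis() > deadline) {
            vertx.cancelTimer(timerId);
            evtHandler.fail(504, "value not available within " + maxWaitMs + " ms");
        }
        // otherwise keep polling on the next tick
    });
});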

Related

Wait 10 seconds after send command in RabbitListener

I have a RabbitListener which receives the results of calculations. This is a lot of data, so I batch it and then send a command to the Axon commandGateway. I wonder if it is possible for the listener to wait a few seconds before sending the next command. Note that I can't use Thread.sleep here. It should work something like this method, for every batch:
private void myMethod() {
    commandGateway.send(batchedData);
    // wait 10 seconds
}
What is the reason for having a waiting period here?
If the reason is to wait until Axon Framework has processed the data present in the command, you can use commandGateway.sendAndWait(Object command, ...) instead. This will make the current thread wait until the command has been executed.
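For illustration, a minimal Java sketch of that first option; the queue name and the CalculationResult and StoreCalculationsCommand types are placeholders:

@RabbitListener(queues = "calculation-results") // placeholder queue name
public void onCalculationBatch(List<CalculationResult> batch) {
    // blocks this listener thread until Axon has handled the command
    // (or the timeout expires), so the next batch is only sent afterwards
    commandGateway.sendAndWait(new StoreCalculationsCommand(batch), 10, TimeUnit.SECONDS);
}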
If it is a mechanism to batch the data, I would suggest keeping an in-memory List to queue the items, and then sending a command every 10 seconds using the Spring scheduling mechanism. I created a small example in Kotlin to illustrate this:
@Service
class CalculationBatcher(
    private val commandGateway: CommandGateway
) {
    private val calculationQueue = LinkedList<Any>()

    fun queueCalculation(calculation: Any) {
        calculationQueue.add(calculation)
    }

    @Scheduled(fixedRate = 10000) // Send every 10 seconds
    @PreDestroy // When destroying the application, send remaining events
    fun sendCalculations() {
        // Use pop here on the LinkedList while having items to prevent threading issues
        val calculationsToSend = LinkedList<Any>()
        while (calculationQueue.isNotEmpty()) {
            calculationsToSend.push(calculationQueue.pop())
        }
        commandGateway.sendAndWait<Any>(MyEventsCommand(calculationsToSend), 10, TimeUnit.SECONDS)
    }

    data class MyEventsCommand(val events: List<Any>)
}
I hope this helps. If the reason was something else, let me know!

How to invoke Azure Function on User Defined Schedules

I have an HTTP-triggered Azure function (Java) that performs a series of actions. In my application UI, I have a button which triggers this function to initiate the task.
Everything works as expected.
Now I have to perform this operation on user-defined schedules. That is, from the UI the user can specify the interval (say, every 3 hours) at which the function needs to be executed. As this schedule will be custom and dynamic, I cannot rely on timer-triggered Azure functions. Also, the same function needs to be executed at different intervals with different input parameters.
How can I dynamically create schedules and invoke the Azure function at the scheduled time? Does Azure have an option to run the function on specific events, something like an AWS CloudWatch rule plus Lambda invocation?
EDIT: It's different from the suggested question because that changes the schedule of an existing function, and I think configuring a new schedule would break the previously configured schedules for the function. I want to run the same function on different schedules as per the user configuration, without breaking any of the schedules previously set for the function.
You can try modifying function.json, changing the cron expression in it. Please refer to the steps below:
Use the Kudu API to change function.json: https://github.com/projectkudu/kudu/wiki/REST-API
PUT https://{functionAppName}.scm.azurewebsites.net/api/vfs/{pathToFunction.json}, Headers: If-Match: "*", Body: new function.json content.
Then send a request to apply the changes:
POST https://{functionAppName}.scm.azurewebsites.net/api/functions/synctriggers
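Purely as an illustration, the two calls could look roughly like this from Java using java.net.http (the app name, file path, credentials, and schedule are placeholders; Kudu expects your deployment credentials via basic auth):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class FunctionScheduleUpdater {
    public static void main(String[] args) throws Exception {
        String app = "myfunctionapp"; // placeholder
        String creds = Base64.getEncoder()
                .encodeToString("deployUser:deployPassword".getBytes()); // placeholder deployment credentials
        String newFunctionJson = "{ \"bindings\": [ { \"type\": \"timerTrigger\", "
                + "\"schedule\": \"0 0 */3 * * *\", \"name\": \"timer\", \"direction\": \"in\" } ] }";

        HttpClient client = HttpClient.newHttpClient();

        // 1. overwrite function.json via the Kudu VFS API
        HttpRequest put = HttpRequest.newBuilder()
                .uri(URI.create("https://" + app + ".scm.azurewebsites.net/api/vfs/site/wwwroot/MyFunction/function.json"))
                .header("Authorization", "Basic " + creds)
                .header("If-Match", "*")
                .PUT(HttpRequest.BodyPublishers.ofString(newFunctionJson))
                .build();
        System.out.println(client.send(put, HttpResponse.BodyHandlers.ofString()).statusCode());

        // 2. tell the Functions runtime to pick up the new trigger definition
        HttpRequest sync = HttpRequest.newBuilder()
                .uri(URI.create("https://" + app + ".scm.azurewebsites.net/api/functions/synctriggers"))
                .header("Authorization", "Basic " + creds)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println(client.send(sync, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}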
You can use a durable function for this, applying the Monitor Pattern (shamelessly copied from this MSDN documentation).
This orchestration function sets a dynamic timer trigger using context.CreateTimer.
Code is in C#, but hopefully there is something you can use here.
[FunctionName("MonitorJobStatus")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    int jobId = GetJobId();
    int pollingInterval = context.GetInput<int>();
    DateTime expiryTime = GetExpiryTime();

    while (context.CurrentUtcDateTime < expiryTime)
    {
        var jobStatus = await context.CallActivityAsync<string>("GetJobStatus", jobId);
        if (jobStatus == "Completed")
        {
            // Perform an action when a condition is met.
            await context.CallActivityAsync("SendAlert", jobId);
            break;
        }

        // Orchestration sleeps until this time.
        var nextCheck = context.CurrentUtcDateTime.AddSeconds(pollingInterval);
        await context.CreateTimer(nextCheck, CancellationToken.None);
    }

    // Perform more work here, or let the orchestration end.
}

Project Reactor async send email with retry on error

I need to send some data after a user registers. I want to make the first attempt on the main thread, but if there are any errors, I want to retry 5 times with a 10-minute interval.
@Override
public void sendRegisterInfo(MailData data) {
    Mono.just(data)
        .doOnNext(this::send)
        .doOnError(ex -> logger.warn("Main queue {}", ex.getMessage()))
        .doOnSuccess(d -> logger.info("Send mail to {}", d.getRecipient()))
        .onErrorResume(ex -> retryQueue(data))
        .subscribe();
}

private Mono<MailData> retryQueue(MailData data) {
    return Mono.just(data)
        .delayElement(Duration.of(10, ChronoUnit.MINUTES))
        .doOnNext(this::send)
        .doOnError(ex -> logger.warn("Retry queue {}", ex.getMessage()))
        .doOnSuccess(d -> logger.info("Send mail to {}", d.getRecipient()))
        .retry(5);
}
It works.
But I've got some questions:
Is it correct to perform the operation in the doOnNext function?
Is it correct to use delayElement to create a delay between executions?
Is the thread blocked while waiting for the delay?
And what is the best practice for retrying on error with a delay between attempts?
doOnXXX for logging is fine. But for the actual element processing, you must prefer using flatMap rather than doOnNext (assuming your processing is asynchronous / can be converted to returning a Flux/Mono).
This is correct. Another way is to turn the code around and start from a Flux.interval, but here delayElement is better IMO.
The delay runs on a separate thread/scheduler (by default, Schedulers.parallel()), so not blocking the main thread.
There's actually a Retry builder dedicated to that kind of use case in the reactor-extra addon: https://github.com/reactor/reactor-addons/blob/master/reactor-extra/src/main/java/reactor/retry/Retry.java
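Putting the first and last points together, one possible shape is sketched below. This is only an illustration, not the original poster's code: it assumes a reactor-core version that provides reactor.util.retry.Retry.fixedDelay and Schedulers.boundedElastic(); with older versions the reactor-extra builder linked above gives the same effect.

@Override
public void sendRegisterInfo(MailData data) {
    Mono.fromRunnable(() -> send(data))                         // do the work inside the pipeline, not in doOnNext
        .retryWhen(Retry.fixedDelay(5, Duration.ofMinutes(10))) // resubscribe up to 5 times, 10 minutes apart
        .doOnSuccess(v -> logger.info("Send mail to {}", data.getRecipient()))
        .doOnError(ex -> logger.warn("Mail send failed after retries: {}", ex.getMessage()))
        .subscribeOn(Schedulers.boundedElastic())               // send(...) is blocking, keep it off the main threads
        .subscribe();
}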

Google Cloud Platform blocking BatchRequest request - Java

Is it possible to wait until a batch job (a BatchRequest object) in GCP is completed?
E.g., you can do it with a normal Job:
final Job job = createJob(jobId, projectId, datasetId, tableId, destinationBucket);
service.jobs().insert(projectId, job).execute();

final Get request = service.jobs().get(projectId, jobId);
JobStatus response;
while (true) {
    Thread.sleep(500); // improve this sleep policy
    response = request.execute().getStatus();
    if (response.getState().equals("DONE") || response.getState().equals("FAILED"))
        break;
}
Something like the above code works fine. The problem with BatchRequest is that its execute() method does not return a response object.
When you execute it, the batch request returns after it has initialised all the jobs specified in its queue, but it does not wait until all of them have actually finished. Indeed, execute() returns, yet jobs can still fail later on (e.g. errors due to quota issues, schema issues, etc.), and I can't notify the client in time with the right information.
You can only check the status of all the created jobs in the web UI, via the job history button in the BigQuery view; you can't return an error message to a client.
Any ideas?

Modeling an event Sink in RxJava for events that need onComplete/onError

I'm in the process of writing a client for Apache Mesos' new HTTP Scheduler API using RxJava and RxNetty.
I've managed to successfully create the connection with RxNetty and create an Observable<Event> from the resulting chunked stream.
Now I'm at the point of trying to model a sink that can be used to send calls back to Mesos in order to claim/decline resource offers, acknowledge task status updates, etc.
The message that will be sent to Mesos is a Call. I need to be able to provide an onCompleted or onError for every Call that comes into the sink, because Mesos performs synchronous validation on each Call sent to it.
I'm essentially trying to allow for the following:
final MesosSchedulerClient client = new MesosSchedulerClient();
final Observable<Event> events = client.openEventStream(subscribeCall);

final Observable<Observable<Call>> ackCalls = events
    .filter(event -> event.getType() == Event.Type.UPDATE && event.getUpdate().getStatus().hasUuid())
    .zipWith(frameworkIDObservable, (Event e, AtomicReference<FrameworkID> fwId) -> {
        final TaskStatus status = e.getUpdate().getStatus();
        final Call ackCall = ackUpdate(fwId.get(), status.getUuid(), status.getAgentId(), status.getTaskId());
        return Observable.just(ackCall)
            .doOnCompleted(() -> { ... })
            .doOnError(err -> { ... });
    });

client.sink(ackCalls);
Right now I've come up with a custom object [1] that extends Subject and specifies the Call, an Action0 for onCompleted, and an Action1<Throwable> for onError. However, I would prefer to use existing constructs from RxJava if possible. Sample usage of what I've come up with is at [2].
Any guidance would be greatly appreciated.
[1] https://github.com/BenWhitehead/mesos-rxjava/blob/sink-operation/mesos-rxjava-core/src/main/java/org/apache/mesos/rx/java/SinkOperation.java#L17
[2] https://github.com/BenWhitehead/mesos-rxjava/blob/sink-operation/mesos-rxjava-example/mesos-rxjava-example-framework/src/main/java/org/apache/mesos/rx/java/example/framework/sleepy/Main.java#L117-L124
The solution I ended up with was to create a custom Subscriber that processes the event stream and sends the requests back to Mesos.
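For completeness, a rough sketch of what such a Subscriber might look like (RxJava 1.x). This is not the actual mesos-rxjava implementation; sendCallToMesos and handleFailedCall are hypothetical hooks:

final Subscriber<Call> callSubscriber = new Subscriber<Call>() {
    @Override
    public void onNext(Call call) {
        try {
            sendCallToMesos(call); // hypothetical: POST the Call to the Mesos scheduler endpoint
        } catch (Exception ex) {
            // surface the synchronous validation failure for this particular call
            handleFailedCall(call, ex); // hypothetical per-call error handling
        }
    }

    @Override
    public void onError(Throwable e) {
        // the stream of calls itself failed
    }

    @Override
    public void onCompleted() {
        // no more calls will be sent
    }
};

// flatten the Observable<Observable<Call>> from the question and attach the sink
Observable.merge(ackCalls).subscribe(callSubscriber);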
