The following class is my worker verticle, in which I want to execute blocking code on receiving a message from the event bus on a channel named events-config.
The objective is to generate and publish JSON messages indefinitely until I receive a stop-operation message on the events-config channel.
I am using executeBlocking to achieve this. However, since the blocking operation runs indefinitely, the Vert.x blocked-thread checker keeps dumping warnings.
Question:
- Is there a way to disable the blocked-thread checker only for a specific verticle?
- Does the code below adhere to best practice for running an infinite loop on demand in Vert.x? If not, can you please suggest a better way to do this?
public class WorkerVerticle extends AbstractVerticle {
    Logger logger = LoggerFactory.getLogger(WorkerVerticle.class);
    private MessageConsumer<Object> mConfigConsumer;
    AtomicBoolean shouldPublish = new AtomicBoolean(true);
    private JsonGenerator mJsonGenerator = new JsonGenerator();

    @Override
    public void start() {
        mConfigConsumer = vertx.eventBus().consumer("events-config", message -> {
            String msgBody = (String) message.body();
            if (msgBody.contains(PublishOperation.START_PUBLISH.getName()) && !mJsonGenerator.isPublishOnGoing()) {
                logger.info("Message received to start producing data onto kafka " + msgBody);
                vertx.<Void>executeBlocking(voidFutureHandler -> {
                    Integer numberOfMessagesToBePublished = 100000;
                    if (numberOfMessagesToBePublished <= 0) {
                        logger.info("Skipping message publish: " + numberOfMessagesToBePublished);
                        voidFutureHandler.complete();
                        return; // is this the best way to do it??
                    }
                    publishData(numberOfMessagesToBePublished);
                    voidFutureHandler.complete();
                }, false, voidAsyncResult -> logger.info("Blocking publish operation is terminated"));
            } else if (msgBody.contains(PublishOperation.STOP_PUBLISH.getName()) && mJsonGenerator.isPublishOnGoing()) {
                logger.info("Message received to terminate " + msgBody);
                mJsonGenerator.terminatePublish();
            }
        });
    }

    private void publishData(int numberOfMessagesToBePublished) {
        while (shouldPublish.get()) {
            // code to generate JSON indefinitely until someone resets the shouldPublish flag
        }
    }
}
You don't want to use busy loops in your asynchronous code.
Use vertx.setPeriodic() or vertx.setTimer() instead:
vertx.setTimer(20, (l) -> {
    // Generate your JSON
    if (shouldPublish.get()) {
        // Set timer again
    }
});
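For example, here is a minimal sketch of that re-scheduling pattern inside the verticle; generateAndPublishOneBatch() is a hypothetical helper standing in for one small unit of the JSON/Kafka publishing work, and shouldPublish is the AtomicBoolean from the question:
private void scheduleNextPublish() {
    vertx.setTimer(20, timerId -> {
        generateAndPublishOneBatch();  // do one small unit of work per timer tick
        if (shouldPublish.get()) {
            scheduleNextPublish();     // re-arm the timer instead of spinning in a loop
        }
    });
}
Because each tick does a bounded amount of work and then returns, the event loop is never blocked and the blocked-thread checker stays quiet.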
I have a Spring WebFlux application. There are two important parts to this application:
A job is scheduled to run at a fixed interval.
The job fetches the data from the DB and stores it in Redis.
void run() {
    redisAdapter.getTtl()
        .doOnError(RefreshExternalCache::logError)
        .switchIfEmpty(Mono.defer(() -> {
            log.debug(">> RefreshExternalCache > refreshExternalCacheIfNeeded => Remaining TTL could not be retrieved. Cache does not exist. " +
                    "Trying to create the cache.");
            return Mono.just(Duration.ofSeconds(0));
        }))
        .subscribe(remainingTtl -> {
            log.debug(">> RefreshExternalCache > refreshExternalCacheIfNeeded => original ttl for the cache: {} | ttl for cache in seconds = {} | ttl for cache in minutes = {}",
                    remainingTtl, remainingTtl.getSeconds(), remainingTtl.toMinutes());
            if (isExternalCacheRefreshNeeded(remainingTtl, offerServiceProperties.getExternalCacheExpiration(), offerServiceProperties.getExternalCacheRefreshPeriod())) {
                log.debug(">> RefreshExternalCache > refreshExternalCacheIfNeeded => external cache is outdated, updating the external cache");
                offerService.refreshExternalCache();
            } else {
                log.debug(">> RefreshExternalCache > refreshExternalCacheIfNeeded => external cache is up-to-date, skipping refresh");
            }
        });
}
This basically calls another method, refreshExternalCache(); its implementation is below:
public void refreshExternalCache() {
    fetchOffersFromSource()
        .doOnNext(offerData -> {
            log.debug(LOG_REFRESH_CACHE + "Updating local offer cache with data from source");
            localCache.put(OFFER_DATA_KEY, offerData);
            storeOffersInExternalCache(offerData, offerServiceProperties.getExternalCacheExpiration());
        })
        .doOnSuccess(offerData -> meterRegistry.counter(METRIC_EXTERNAL_CACHE_REFRESH_COUNTER, TAG_OUTCOME, SUCCESS).increment())
        .doOnError(sourceThrowable -> {
            log.debug(LOG_REFRESH_CACHE + "Error while refreshing external cache {}", sourceThrowable.getMessage());
            meterRegistry.counter(METRIC_EXTERNAL_CACHE_REFRESH_COUNTER, TAG_OUTCOME, FAILURE).increment();
        }).subscribe();
}
Also, in the above method, you can see a call to storeOffersInExternalCache:
public void storeOffersInExternalCache(OfferData offerData, Duration ttl) {
    log.info(LOG_STORING_OFFER_DATA + "Storing the offer data in external cache...");
    redisAdapter.storeOffers(offerData, ttl);
}

public void storeOffers(OfferData offerData, Duration ttl) {
    Mono.fromRunnable(() -> redisClient.storeSerializedOffers(serializeFromDomain(offerData), ttl)
        .doOnNext(status -> {
            if (Boolean.TRUE.equals(status)) {
                log.info(LOG_STORE_OFFERS + "Data stored in redis.");
                meterRegistry.counter(METRIC_REDIS_STORE_OFFERS, TAG_OUTCOME, SUCCESS).increment();
            } else {
                log.error(LOG_STORE_OFFERS + "Unable to store data in redis.");
                meterRegistry.counter(METRIC_REDIS_STORE_OFFERS, TAG_OUTCOME, FAILURE).increment();
            }
        }).retryWhen(Retry.backoff(redisRetryProperties.getMaxAttempts(), redisRetryProperties.getWaitDuration()).jitter(redisRetryProperties.getBackoffJitter()))
        .doOnError(throwable -> {
            meterRegistry.counter(METRIC_REDIS_STORE_OFFERS, TAG_OUTCOME, FAILURE).increment();
            log.error(LOG_STORE_OFFERS + "Unable to store data in redis. Error: [{}]", throwable.getMessage());
        })).subscribeOn(Schedulers.boundedElastic());
}
Redis Client
@Slf4j
@Component
public class RedisClient {
    private final ReactiveRedisTemplate<String, String> reactiveRedisTemplate;
    private final ReactiveValueOperations<String, String> reactiveValueOps;

    public RedisClient(@Qualifier("reactiveRedisTemplate") ReactiveRedisTemplate<String, String> reactiveRedisTemplate) {
        this.reactiveRedisTemplate = reactiveRedisTemplate;
        this.reactiveValueOps = reactiveRedisTemplate.opsForValue();
    }

    Mono<Optional<String>> fetchSerializedOffers() {
        return reactiveValueOps.get(OFFER_DATA_KEY).map(Optional::ofNullable);
    }

    Mono<Boolean> storeSerializedOffers(String serializedOffers, Duration ttl) {
        return reactiveValueOps.set(OFFER_DATA_KEY, serializedOffers, ttl);
    }

    Mono<Duration> getTtl() {
        return reactiveRedisTemplate.getExpire(OFFER_DATA_KEY);
    }
}
Now my concerns are:
1. If I do not call subscribe on these Mono streams, the methods are not executed at all. This is fair, since they won't execute until someone subscribes to them.
2. As I understand it, subscribe is a blocking call. Doesn't this defeat the whole purpose of reactive programming?
3. I looked at several ways to make this work, one of which is shown above. I tried wrapping one of the calls in Mono.fromRunnable, but that is not a very good approach either (I read this on another StackOverflow thread).
So, is the approach I am taking above incorrect? How do we execute Mono streams that no one subscribes to?
Answering your concern number 2 (which seems to be the only real doubt in your question): not really. block() (https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#block--) is the one that subscribes to a Mono or Flux and waits indefinitely until the next signal is received. subscribe() (https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#subscribe--), on the other hand, subscribes to a Mono or Flux but does not block; it simply reacts when an element is emitted.
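A minimal, self-contained sketch of that difference (illustrative code, not taken from the question):
import reactor.core.publisher.Mono;
import java.time.Duration;

public class SubscribeVsBlock {
    public static void main(String[] args) {
        Mono<String> mono = Mono.delay(Duration.ofMillis(200)).map(tick -> "value");

        // subscribe(): registers a callback and returns immediately,
        // so the calling thread is free to continue.
        mono.subscribe(v -> System.out.println("subscribe received: " + v));
        System.out.println("subscribe() returned immediately");

        // block(): subscribes and then waits until the value is emitted.
        String value = mono.block();
        System.out.println("block() returned: " + value);
    }
}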
I am trying to create my own custom async method in Vert.x, something similar to their code:
// call the external service
WebClient client = WebClient.create(vertx);
client.get(8080, "localhost", "/fast").send(ar -> {
    if (ar.succeeded()) {
        HttpResponse<Buffer> response = ar.result();
        System.out.println("response.bodyAsString() " + response.bodyAsString());
    } else {
        System.out.println("Something went wrong " + ar.cause().getMessage());
    }
});
When you run this code, the calling thread is not blocked while waiting for the response, and the provided handler is executed when the endpoint responds.
I found ways to do it with executeBlocking, createSharedWorkerExecutor().executeBlocking() and with the event bus, but in all of them a thread gets blocked.
I am looking for a way to do it without blocking the container thread, but I can't find it. There is a post:
How can I implement custom asynchronous operation in Vert.x?
I tried to do it, but it also blocks the thread:
vertx.runOnContext(v -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
    }
    handler.handle(Future.succeededFuture("result"));
});
The code above runs in the same thread but doesn't run concurrently, so I assume the thread is blocked.
Is there any way to do it?
The way you call Thread.sleep() sends your current JVM thread to sleep, effectively blocking your Vert.x event loop, which runs on the same thread. That is not the idiomatic way to execute blocking code in Vert.x.
See here: "The Golden Rule - don't block the event loop".
If you have to run blocking code, like Thread.sleep(), you should implement that code in a worker verticle. Worker verticles use JVM threads from a different thread pool and consequently do not block the event loop.
The first code example that you posted above does not use blocking code, as you correctly described yourself. It uses the idiomatic approach with asynchronous, non-blocking event handlers.
EDIT
See this short example of how to start a very simple worker verticle.
Code from the class WorkerVerticle will never block the event loop. You make it a worker during verticle deployment by setting the corresponding option, as shown in the DeployerVerticle.
public class DeployerVerticle extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        System.out.println("Main verticle has started, let's deploy another...");
        // Deploy it as a worker verticle
        vertx.deployVerticle("io.example.WorkerVerticle",
                new DeploymentOptions().setWorker(true));
    }
}
// ----
package io.example;
/**
* An example of a worker verticle
*/
public class WorkerVerticle extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        System.out.println("[Worker] Starting in " +
                Thread.currentThread().getName());
        // consume event bus messages sent to address "sample.data"
        // reply with the incoming message transformed to upper case
        vertx.eventBus().<String>consumer("sample.data", message -> {
            try {
                Thread.sleep(1000); // will not block the event loop,
                                    // but only this verticle
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("[Worker] Consuming data in " +
                    Thread.currentThread().getName());
            String body = message.body();
            message.reply(body.toUpperCase());
        });
    }
}
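To see this in action, another (non-worker) verticle could send a message to the sample.data address and handle the asynchronous reply. A small sketch, assuming a Vert.x version where eventBus().request(...) is available (the sender code is illustrative, not part of the original example):
// Somewhere in a regular event-loop verticle:
vertx.eventBus().<String>request("sample.data", "hello worker", reply -> {
    if (reply.succeeded()) {
        // Prints "HELLO WORKER" after the worker has slept and replied;
        // the event loop was never blocked while waiting.
        System.out.println("[Sender] Got reply: " + reply.result().body());
    } else {
        System.out.println("[Sender] Failed: " + reply.cause().getMessage());
    }
});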
I have this Camel route:
from("file:{{PATH_INPUT}}?charset=iso-8859-1&delete=true")
.process(new ProcessorName())
.pollEnrich().simple("${property.URI_FILE}", String.class).aggregationStrategy(new Estrategia()).timeout(10000).aggregateOnException(true)
.choice()
.when(simple("${property.result} == 'OK'"))
.to(URI_OUTPUT)
.endChoice();
This route takes a file from PATH_INPUT, compares it with the file URI_FILE (I generate the URI_FILE property in ProcessorName()), and if the URI_FILE body contains specific data, the result is "OK" and it is sent to URI_OUTPUT (ActiveMQ).
This works fine, but later I noticed that it generated a lot of waiting threads, one for each exchange.
I don't know why this is happening. I have tried with a ConsumerTemplate and the results are the same.
Yes this is expected if you generate a unique URI per endpoint you poll. I assume you generate a dynamic fileName which you specify in that URI, and that you see a thread per endpoint?
I have logged a ticket to make this easier in the future
https://issues.apache.org/jira/browse/CAMEL-11250
If you just want to set the message body to a specific file name, then the fastest and easiest way is to use setBody as a java.io.File type:
.setBody(simple("${property.URI_FILE}", java.io.File.class))
I have run into the same trouble and faced a memory leak. As a workaround, I implemented my own 'org.apache.camel.spi.PollingConsumerPollStrategy', which captures the Consumer when it is started (by pollEnrich) and hands it to a bean that holds all of these consumers in a Map.
Then I added a timer route whose only job is to trigger a purge action on the Map, checking whether a given time limit has been reached for each consumer. If so, it stops the Consumer (interrupting its related thread) and removes it from the Map.
Like this:
from("direct://foo")
.to("an endpoint that returns the file name")
.pollEnrich()
.simple("file://{{app.runtime.draft.path}}"
+ "?fileName=${body}"
+ "&recursive=true"
+ "&delete=true"
+ "&pollStrategy=#myFilePollingStrategy" // my poll strategy
+ "&maxMessagesPerPoll=1")
.timeout(6 * 1000L)
.end()
.to("direct://a")
.to("direct://b")
.to("direct://c")
.end();
from("timer://file-consumer-purge?period=5s")
.bean(fileConsumerController, "purge")
.end();
@Component
public class FileConsumerController {
    private Map<Consumer, Long> mapConsumers = new ConcurrentHashMap<>();
    private static final long LIMIT = 25 * 1000L; // 25 seconds

    public void hold(Consumer consumer) {
        mapConsumers.put(consumer, System.currentTimeMillis());
    }

    public void purge() {
        mapConsumers.forEach((consumer, startTime) -> {
            if (System.currentTimeMillis() - startTime > LIMIT) {
                try {
                    consumer.stop();
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    mapConsumers.remove(consumer);
                }
            }
        });
    }
}
@Component
public class MyFilePollingStrategy extends DefaultPollingConsumerPollStrategy {
    @Autowired
    FileConsumerController fileConsumerController;

    @Override
    public boolean begin(Consumer consumer, Endpoint endpoint) {
        fileConsumerController.hold(consumer);
        return super.begin(consumer, endpoint);
    }
}
Notes:
I monitored the behavior through jconsole;
I've only overridden the begin() method and haven't tested the effects in unexpected or error scenarios.
Hope this helps for now, and hopefully the component will evolve to make this easier. :)
First of all, yes, I looked this question up on Google and did not find an answer to it. There are only answers where the thread is FINISHED and then the value is returned. What I want is to return an "infinite" amount of values.
Just to make it clearer: my thread is reading messages from a socket and never really finishes. So whenever a new message comes in, I want another class to get this message. How would I do that?
public void run() {
    while ((ircMessage = in.readLine()) != null) {
        System.out.println(ircMessage);
        if (ircMessage.contains("PRIVMSG")) {
            String[] ViewerNameRawRaw = ircMessage.split("@");
            String ViewerNameRaw = ViewerNameRawRaw[2];
            String[] ViewerNameR = ViewerNameRaw.split(".tmi.twitch.tv");
            viewerName = ViewerNameR[0];
            String[] ViewerMessageRawRawRaw = ircMessage.split("PRIVMSG");
            String ViewerMessageRawRaw = ViewerMessageRawRawRaw[1];
            String[] ViewerMessageRaw = ViewerMessageRawRaw.split(":", 2);
            viewerMessage = ViewerMessageRaw[1];
        }
    }
}
What you are describing is a typical scenario of asynchronous communication, and the usual solution is a queue. Your thread is a producer: each time it reads a message from the socket, it builds its result and puts it on a queue. Any entity interested in the result listens to the queue (i.e. is a consumer). Read more about queues: you can send a message so that only one consumer receives it, or publish it so that all registered consumers may receive it. The queue implementation could be a commercially available product such as RabbitMQ, or as simple as the in-memory queues the JDK provides (see the Queue interface and its various implementations). Another option is communication over the web (HTTP): your thread reads a message from the socket, builds a result, and sends it, for example via REST, to a consumer that exposes an API your thread can call.
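As an illustration of the queue approach, here is a minimal in-memory sketch using the JDK's BlockingQueue; the producer thread stands in for your socket-reading thread, and the consumer is the class that wants every message (all names here are illustrative):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandoffExample {
    public static void main(String[] args) {
        BlockingQueue<String> messages = new LinkedBlockingQueue<>();

        // Producer: in your code this would be the thread reading from the IRC socket.
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                messages.add("viewer message " + i); // hand each parsed message to the queue
            }
        });

        // Consumer: the other class receives messages as they arrive.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    String msg = messages.take(); // blocks until the next message is available
                    System.out.println("Consumed: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
In a real application the consumer would loop forever (or until interrupted) rather than stopping after a fixed count; the fixed count here just lets the example terminate.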
Why not have a status variable in your thread class? You can then update this during execution and before exiting. Once the thread has completed, you can still query the status.
public static void main(String[] args) throws InterruptedException {
    Threading th = new Threading();
    System.out.println("before run Status:" + th.getStatus());
    th.start();
    Thread.sleep(500);
    System.out.println("running Status:" + th.getStatus());
    th.join(); // wait for the thread to finish instead of busy-waiting
    System.out.println("after run Status:" + th.getStatus());
}
Extend Thread like this:
public class Threading extends Thread {
    private int status = -1; // not started

    private void setStatus(int status) {
        this.status = status;
    }

    @Override
    public void run() {
        setStatus(1); // running
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        setStatus(0); // exited cleanly
    }

    public int getStatus() {
        return this.status;
    }
}
And get an output of:
before run Status:-1
running Status:1
after run Status:0
I have an application that receives alerts from other applications, usually about once a minute, but I need to be able to handle a higher volume per minute. The interface I am using, and the Alert framework in general, requires that alerts may be processed asynchronously and can be stopped if they are being processed asynchronously. The stop method specifically is documented as stopping a thread. I wrote the code below to create an AlertRunner thread and then stop it. However, is this a proper way to handle terminating a thread? And will this code scale easily (not to a ridiculous volume, but maybe an alert a second or several alerts at the same time)?
private AlertRunner alertRunner;

@Override
public void receive(Alert a) {
    assert a != null;
    alertRunner = new AlertRunner(a.getName());
    alertRunner.start();
}

@Override
public void stop(boolean synchronous) {
    if (!synchronous) {
        if (alertRunner != null) {
            alertRunner.interrupt();
        }
    }
}
class AlertRunner extends Thread {
    private final String alertName;

    public AlertRunner(String alertName) {
        this.alertName = alertName;
    }

    @Override
    public void run() {
        try {
            TimeUnit.SECONDS.sleep(5);
            log.info("New alert received: " + alertName);
        } catch (InterruptedException e) {
            log.error("Thread interrupted: " + e.getMessage());
        }
    }
}
This code will not scale easily because Thread is quite a 'heavy' object: it is expensive to create and expensive to start. It is much better to use an ExecutorService for your task. It will maintain a limited number of threads that are ready to process your requests:
int threadPoolSize = 5;
ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);

public void receive(Alert a) {
    assert a != null;
    executor.submit(() -> {
        // Do your work here
    });
}
Here executor.submit() will handle your request in a separate thread. If all threads are currently busy, the request will wait in a queue, preventing resource exhaustion. It also returns an instance of Future that you can use to wait for completion of the handling, set a timeout, receive the result, cancel execution, and many other useful things.
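For instance, that Future could be used to implement the question's stop(boolean synchronous) contract. A rough sketch (Alert is the framework type from the question; the rest is illustrative, not a definitive implementation):
private final ExecutorService executor = Executors.newFixedThreadPool(5);
private volatile Future<?> lastAlertTask;

@Override
public void receive(Alert a) {
    assert a != null;
    lastAlertTask = executor.submit(() -> {
        // process the alert here
        log.info("New alert received: " + a.getName());
    });
}

@Override
public void stop(boolean synchronous) {
    // note: this only tracks the most recent alert task,
    // mirroring the single alertRunner field in the question
    if (!synchronous && lastAlertTask != null) {
        lastAlertTask.cancel(true); // interrupts the task if it is still running
    }
}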