How to use Vertx EventBus to send messages between Verticles? - java

I am currently maintaining an application written in Java with the Vert.x framework.
I would like to implement sending messages between two application instances (primary and secondary) using the EventBus, over the network. Is it possible?
In the Vert.x documentation I do not see an example of how to achieve that: https://vertx.io/docs/vertx-core/java/#event_bus
I see that EventBus has send(...) methods that take an address, but the address can be any String. I would like to publish events to another application instance (for example, from primary to secondary).

It is possible using a Vert.x cluster manager.
Choose one of the supported cluster managers and put it on the classpath of your application.
In your main method, instead of creating a standalone Vertx instance, create a clustered one:
Vertx.clusteredVertx(new VertxOptions(), res -> {
    if (res.succeeded()) {
        Vertx vertx = res.result();
    } else {
        // failed!
    }
});
Deploy a receiver:
public class Receiver extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        EventBus eb = vertx.eventBus();
        eb.consumer("ping-address", message -> {
            System.out.println("Received message: " + message.body());
            // Now send back reply
            message.reply("pong!");
        });
        System.out.println("Receiver ready!");
    }
}
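For completeness, a minimal sketch of deploying the Receiver from the clustered instance created above (the ReceiverMain class name is mine, not from the docs):
public class ReceiverMain {
    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                res.result().deployVerticle(new Receiver());
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}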
In a separate JVM, deploy a sender:
public class Sender extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        EventBus eb = vertx.eventBus();
        // Send a message every second
        vertx.setPeriodic(1000, v -> {
            eb.request("ping-address", "ping!", reply -> {
                if (reply.succeeded()) {
                    System.out.println("Received reply " + reply.result().body());
                } else {
                    System.out.println("No reply");
                }
            });
        });
    }
}
That's it for the basics. You may also need to follow the configuration instructions for your chosen cluster manager in the docs.
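For example, instead of relying on classpath discovery you can configure the cluster manager programmatically; a minimal sketch, assuming the vertx-hazelcast artifact is on the classpath:
ClusterManager mgr = new HazelcastClusterManager();
VertxOptions options = new VertxOptions().setClusterManager(mgr);
Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        // deploy your verticles on res.result()
    } else {
        res.cause().printStackTrace();
    }
});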

Related

How to write a Vert.x worker verticle - indefinite blocking operation?

The following class is my worker verticle, in which I want to execute blocking code on receiving a message from the event bus on a channel named events-config.
The objective is to generate and publish JSON messages indefinitely until I receive a stop-operation message on the events-config channel.
I am using executeBlocking to achieve the desired functionality. However, since I am running the blocking operation indefinitely, the Vert.x blocked-thread-checker dumps warnings.
Question:
- Is there a way to disable the blocked-thread-checker only for a specific verticle?
- Does the code below adhere to the best practice of executing an infinite loop on an as-needed basis in Vert.x? If not, can you please suggest the best way to do this?
public class WorkerVerticle extends AbstractVerticle {
    Logger logger = LoggerFactory.getLogger(WorkerVerticle.class);
    private MessageConsumer<Object> mConfigConsumer;
    AtomicBoolean shouldPublish = new AtomicBoolean(true);
    private JsonGenerator mJsonGenerator = new JsonGenerator();

    @Override
    public void start() {
        mConfigConsumer = vertx.eventBus().consumer("events-config", message -> {
            String msgBody = (String) message.body();
            if (msgBody.contains(PublishOperation.START_PUBLISH.getName()) && !mJsonGenerator.isPublishOnGoing()) {
                logger.info("Message received to start producing data onto kafka " + msgBody);
                vertx.<Void>executeBlocking(voidFutureHandler -> {
                    Integer numberOfMessagesToBePublished = 100000;
                    if (numberOfMessagesToBePublished <= 0) {
                        logger.info("Skipping message publish: " + numberOfMessagesToBePublished);
                        return; // is it the best way to do it ??
                    }
                    publishData(numberOfMessagesToBePublished);
                }, false, voidAsyncResult -> logger.info("Blocking publish operation is terminated"));
            } else if (msgBody.contains(PublishOperation.STOP_PUBLISH.getName()) && mJsonGenerator.isPublishOnGoing()) {
                logger.info("Message received to terminate " + msgBody);
                mJsonGenerator.terminatePublish();
            }
        });
    }

    private void publishData(Integer numberOfMessagesToBePublished) {
        while (shouldPublish.get()) {
            // code to generate JSON indefinitely until someone resets the shouldPublish flag
        }
    }
}
You don't want to use busy loops in your asynchronous code.
Use vertx.setPeriodic() or vertx.setTimer() instead:
private void publishChunk() {
    vertx.setTimer(20, id -> {
        // Generate and publish a batch of your JSON here
        if (shouldPublish.get()) {
            publishChunk(); // set the timer again for the next batch
        }
    });
}
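Equivalently, with setPeriodic the timer can cancel itself once publishing should stop; a sketch (the 20 ms period is carried over from the timer example above):
vertx.setPeriodic(20, id -> {
    if (!shouldPublish.get()) {
        vertx.cancelTimer(id); // stop publishing
        return;
    }
    // generate and publish one batch of JSON here
});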

How to keep the Apache Camel context alive in the main thread

I'm trying to make a simple application that will listen to one queue from Artemis, process the messages, and then create a new message in a second queue.
I have created a Camel context in the main method and added a route (it forwards messages to a bean). To test that this route and bean work correctly, I'm sending
a few messages to this queue right after the context is started in the main thread:
public static void main(String[] args) throws Exception {
    CamelContext context = new DefaultCamelContext();
    ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616", "admin", "admin");
    context.addComponent("cmp/q2", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
    context.addRoutes(new RouteBuilder() {
        public void configure() {
            from("cmp/q2:cmp/q2").bean(DataRequestor.class, "doSmth(${body}, ${headers})");
        }
    });
    ProducerTemplate template = context.createProducerTemplate();
    context.start();
    for (int i = 0; i < 2; i++) {
        HashMap<String, Object> headers = new HashMap<String, Object>();
        headers.put("header1", "some header info");
        template.sendBodyAndHeaders("cmp/q2:cmp/q2", "Test Message: " + i, headers);
    }
    context.stop();
}
In this case the application works fine, but it stops when the main method completes - it processes only the messages that it created itself.
Now that I have tested the bean used in the route, I want to modify the application so that it starts and stays active (keeping the Camel context and routes alive), so that I can create messages manually in the web UI (the ActiveMQ management console).
But I really don't know how.
I have tried an infinite loop with Thread.sleep(5000);
I tried to start one more thread (also with an infinite loop) in the main method.
But it didn't work. (The most suspicious thing, in the infinite-loop case, is that the application is running, but when I create a message in the web UI it just disappears - no trace in system out that it was processed by my bean in the route. I would expect it either to be processed by my bean or to stay in the queue untouched, but it simply disappears.)
I know that my question is basic, but I have already wasted three days trying to find a solution, so any advice, links to tutorials, or other valuable information is appreciated.
PS: I've got one painful restriction - Spring frameworks are not allowed.
I think the simplest solution for running standalone Camel is starting it with Camel Main. The Camel online documentation also has an example of using it: http://camel.apache.org/running-camel-standalone-and-have-it-keep-running.html.
I will copy paste the example code here just in case:
public class MainExample {
    private Main main;

    public static void main(String[] args) throws Exception {
        MainExample example = new MainExample();
        example.boot();
    }

    public void boot() throws Exception {
        // create a Main instance
        main = new Main();
        // bind MyBean into the registry
        main.bind("foo", new MyBean());
        // add routes
        main.addRouteBuilder(new MyRouteBuilder());
        // add event listener
        main.addMainListener(new Events());
        // set the properties from a file
        main.setPropertyPlaceholderLocations("example.properties");
        // run until you terminate the JVM
        System.out.println("Starting Camel. Use ctrl + c to terminate the JVM.\n");
        main.run();
    }

    private static class MyRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("timer:foo?delay={{millisecs}}")
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        System.out.println("Invoked timer at " + new Date());
                    }
                })
                .bean("foo");
        }
    }

    public static class MyBean {
        public void callMe() {
            System.out.println("MyBean.callMe method has been called");
        }
    }

    public static class Events extends MainListenerSupport {
        @Override
        public void afterStart(MainSupport main) {
            System.out.println("MainExample with Camel is now started!");
        }

        @Override
        public void beforeStop(MainSupport main) {
            System.out.println("MainExample with Camel is now being stopped!");
        }
    }
}
The route keeps executing until you hit Ctrl+C or stop it in some other way.
If you test this, note that you need an example.properties file on your classpath, with the property millisecs set.
At the very minimum you need the main thread to kick off a thread that runs the Camel route and then wait for that thread to finish. The simple Java threading approach - the main thread calling .wait() and the Camel route thread calling .notify() when it finishes (or shuts down) - would get the job done.
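A minimal sketch of that handshake, using a CountDownLatch as the more idiomatic equivalent of raw wait()/notify() (the shutdown-hook wiring is my assumption, not part of the original answer):
import java.util.concurrent.CountDownLatch;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class KeepAliveMain {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // ... add the JMS component and routes exactly as in the question ...
        CountDownLatch done = new CountDownLatch(1);
        // Release the latch when the JVM is asked to shut down (e.g. Ctrl+C)
        Runtime.getRuntime().addShutdownHook(new Thread(done::countDown));
        context.start();
        done.await(); // the main thread parks here while the routes keep consuming
        context.stop();
    }
}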
From there you can look into an executor service, or use a micro-container like Apache Karaf.
PS. Props for going Spring-free!
Disclaimer: this is written in Kotlin, but it is fairly trivial to port to Java.
Disclaimer: this is written for Apache Camel 2.24.2.
Disclaimer: I am also learning about Apache Camel. The docs are a little heavy for me.
I tried the Main route to set it up but it quickly got a little convoluted. I know that this is a Java thread, but I'm using Kotlin at the moment; I'll leave most of the types and imports in place so it's easier for Java devs.
class Listener
The first thing I had to fight with was understanding the lifecycle of Main. It turns out that there is an interface you can implement to add in implementations of such events. With such an implementation I can hook up any routines that have to be sure that Camel has started (no guessing required).
import org.apache.camel.CamelContext
import org.apache.camel.main.MainListener
import org.apache.camel.main.MainSupport

typealias Action = () -> Unit

class Listener : MainListener {
    private var afterStart: Action? = null

    fun registerOnStart(action: Action) {
        afterStart = action
    }

    override fun configure(context: CamelContext) {}
    override fun afterStop(main: MainSupport?) {}

    override fun afterStart(main: MainSupport?) {
        println("started!")
        afterStart?.also { it(); println("Launched the registered function") }
            ?: println("Nothing registered to start")
    }

    override fun beforeStop(main: MainSupport?) {}
    override fun beforeStart(main: MainSupport?) {}
}
class ApplicationCore
Then I set up the configuration of the context (routes, components, etc.):
import org.apache.camel.CamelContext
import org.apache.camel.impl.DefaultCamelContext
import org.apache.camel.impl.SimpleRegistry
import org.apache.camel.main.Main

class ApplicationCore : Runnable {
    private val main = Main()
    private val registry = SimpleRegistry()
    private val context = DefaultCamelContext(registry)
    private val listener = Listener() // defined above

    // for Java devs: this is more or less a constructor block
    init {
        main.camelContexts.clear()
        listener.registerOnStart { whateverYouAreDoing().start() } // <- your stuff should run in its own thread because main will be blocked
        main.camelContexts.add(context)
        main.duration = -1
        context.addComponent("artemis", ...) // <- you need to implement your own
        context.addRoutes(...) // <- you already know how to do this
        ... // <- anything else you could need to initialize
        main.addMainListener(listener)
    }

    override fun run() {
        /* ... add whatever else you need ... */
        // The next line blocks the thread until you close it
        main.run()
    }

    fun whateverYouAreDoing(): Thread {
        return Thread {
            val template = context.createProducerTemplate()
            for (i in 0..1) {
                val headers = HashMap<String, Any>()
                headers["header1"] = "some header info"
                template.sendBodyAndHeaders("cmp/q2:cmp/q2", "Test Message: $i", headers)
            }
            context.stop() // <- this is not good practice here but it's what you seem to want
        }
    }
}
In Kotlin, initialization is rather easy, and you can translate this into Java quite directly:
// top level declaration
fun main(args: Array<String>) = ApplicationCore().run()
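For Java devs, the equivalent entry point is a one-liner; a sketch assuming ApplicationCore has been ported as-is:
public class EntryPoint {
    public static void main(String[] args) {
        new ApplicationCore().run();
    }
}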

Polling SQS using Dropwizard

What I am trying to achieve:
I want to make a dropwizard client that polls Amazon SQS.
Whenever a message is found in the queue, it is processed and stored.
Some information about the processed messages will be available through an API.
Why I chose Dropwizard:
Seemed like a good choice to make a REST client. I need to have metrics, DB connections and integrate with some Java services.
What I need help with:
It is not very clear how and where the SQS polling fits in a typical Dropwizard application.
Should it be a managed resource? Or a console reporter? Or something else?
You can use com.google.common.util.concurrent.AbstractScheduledService to create a consumer thread and add it to Dropwizard's environment lifecycle as a ManagedTask. The following is pseudocode:
public class YourSQSConsumer extends AbstractScheduledService {
    @Override
    protected void startUp() {
        // maybe print something
    }

    @Override
    protected void shutDown() {
        // maybe print something
    }

    @Override
    protected void runOneIteration() {
        // code to poll SQS
    }

    @Override
    protected Scheduler scheduler() {
        return newFixedRateSchedule(5, 1, SECONDS);
    }
}
In Main, do this:
YourSQSConsumer consumer = new YourSQSConsumer();
Managed managedTask = new ManagedTask(consumer);
environment.lifecycle().manage(managedTask);
As an alternative to RishikeshDhokare's answer, one can also go with the following code, which does not need an additional jar as a dependency, keeping the uber jar as lightweight as possible.
public class SQSPoller implements Managed, Runnable {
    private ScheduledExecutorService mainRunner;

    @Override
    public void start() throws Exception {
        mainRunner = Executors.newSingleThreadScheduledExecutor();
        mainRunner.scheduleWithFixedDelay(this, 0, 100, TimeUnit.MILLISECONDS);
    }

    @Override
    public void run() {
        // poll SQS here
    }

    @Override
    public void stop() throws Exception {
        mainRunner.shutdown();
    }
}
And in the run() of your Application class, you can register the above class as follows.
environment.lifecycle().manage(new SQSPoller());
You can use either scheduleWithFixedDelay() or scheduleAtFixedRate() depending upon your use case.
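For reference, a sketch of what the run() body might look like with the AWS SDK v1; the sqs client, queueUrl, and process() helper are assumptions, not part of the original answer:
@Override
public void run() {
    // Long-poll SQS, handle each message, then delete it to acknowledge
    ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
            .withMaxNumberOfMessages(10)
            .withWaitTimeSeconds(20); // long polling
    for (Message message : sqs.receiveMessage(request).getMessages()) {
        process(message);                                        // hypothetical handler
        sqs.deleteMessage(queueUrl, message.getReceiptHandle()); // remove from the queue
    }
}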

Akka Distributed Pub/Sub back-pressure

I am using Akka Distributed Pub/Sub and have a single publisher and a subscriber. My publisher is way faster than the subscriber. Is there a way to slow down the publisher after a certain point?
Publisher code:
public class Publisher extends AbstractActor {
    private ActorRef mediator;

    static public Props props() {
        return Props.create(Publisher.class, () -> new Publisher());
    }

    public Publisher() {
        this.mediator = DistributedPubSub.get(getContext().system()).mediator();
        this.self().tell(0, ActorRef.noSender());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Integer.class, msg -> {
                // Sending message to Subscriber
                mediator.tell(
                    new DistributedPubSubMediator.Send(
                        "/user/" + Subscriber.class.getName(),
                        msg.toString(),
                        false),
                    getSelf());
                getSelf().tell(++msg, ActorRef.noSender());
            })
            .build();
    }
}
Subscriber code:
public class Subscriber extends AbstractActor {
    static public Props props() {
        return Props.create(Subscriber.class, () -> new Subscriber());
    }

    public Subscriber() {
        ActorRef mediator = DistributedPubSub.get(getContext().system()).mediator();
        mediator.tell(new DistributedPubSubMediator.Put(getSelf()), getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(String.class, msg -> {
                System.out.println("Subscriber message received: " + msg);
                Thread.sleep(10000);
            })
            .build();
    }
}
Unfortunately, as currently designed, I don't think there is a way to provide "back-pressure" to the original sender. Since you are using ActorRef.tell to send the message to the mediator, there is no way to get a signal that the downstream receiver is backing up; tell returns void.
Switch To Ask
If you switch your tell to an ask, you can set an appropriate Timeout value that will at least let you know when you don't receive a response within a particular duration.
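For example, a minimal sketch of the ask variant, assuming a recent Akka version where akka.pattern.Patterns.ask accepts a java.time.Duration (the 5-second timeout and the reaction to a timeout are assumptions, not from the original code):
CompletionStage<Object> reply = Patterns.ask(
        mediator,
        new DistributedPubSubMediator.Send(
                "/user/" + Subscriber.class.getName(), msg.toString(), false),
        Duration.ofSeconds(5));
reply.whenComplete((response, failure) -> {
    if (failure != null) {
        // No reply in time: the subscriber is likely backed up, so pause or slow the publisher here
    }
});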
Switch To Streams
"Back-pressure" is a primary feature of akka streams. Therefore, by switching to a stream implementation you will be able to achieve your desired goal.
If it is possible to create a stream Source from your original data, then you could use Sink.actorRef to create a Sink from the mediator and the throttle stage to control the rate of flow to the mediator.
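For illustration, a sketch of that pipeline under stated assumptions: a synthetic integer Source stands in for your real data, the one-message-per-10-seconds rate is chosen to match the subscriber's Thread.sleep(10000), and the Akka version is recent enough for the java.time.Duration overloads.
ActorMaterializer materializer = ActorMaterializer.create(system);
Source.range(1, 1000000)
    .throttle(1, Duration.ofSeconds(10)) // emit at most one element per 10 seconds
    .map(i -> new DistributedPubSubMediator.Send(
            "/user/" + Subscriber.class.getName(), i.toString(), false))
    .runWith(Sink.actorRef(mediator, "done"), materializer);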

Why does Vert.x create a new event loop for an http server?

I have a very simple Vert.x application that exposes a ping endpoint:
LauncherVerticle.java
public class LauncherVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> future) throws Exception {
        DeploymentOptions options = new DeploymentOptions();
        options.setConfig(config());
        options.setInstances(1);
        String verticleName = PingVerticle.class.getName();
        vertx.deployVerticle(verticleName, options, ar -> {
            if (ar.succeeded()) {
                future.complete();
            } else {
                future.fail(ar.cause());
            }
        });
    }
}
PingVerticle.java
public class PingVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> future) throws Exception {
        Router router = Router.router(vertx);
        router.get("/ping").handler(context -> {
            String payload = new JsonObject().put("hey", "ho").encode();
            context.response().putHeader("content-type", "application/json").end(payload);
        });
    }
}
As expected, by default Vert.x creates two event loop threads, which I can see with VisualVM.
Of course, the application doesn't do anything yet, so now I go and add an HTTP server to PingVerticle:
String host = "0.0.0.0";
int port = 7777;
vertx.createHttpServer().requestHandler(router::accept).listen(port, host, ar -> {
    if (ar.succeeded()) {
        future.complete();
    } else {
        future.fail(ar.cause());
    }
});
Now I see in VisualVM that there are two new threads: an acceptor-thread, which I can more or less understand, and another eventloop-thread.
Why is this third eventloop-thread created?
According to the Vert.x javadoc:
The default number of event loop threads to be used = 2 * number of cores on the machine.
It seems you have more than one core.
There is not much documentation on the Vert.x architecture, but there is an interesting read in Understanding Vert.x Architecture.
BTW, I have a four-core machine and I see the same number of threads at application startup. I noticed an increase in the number of event loop threads as more load is generated, while the other threads remain single per Vert.x process.
In short:
- vert.x-acceptor-thread-0: always there when an HttpServer is created.
- vert.x-eventloop-thread-0 and vert.x-eventloop-thread-1: a Vert.x app starts with two event loop threads and adds more dynamically as needed, up to double the number of cores (2 * cores, as per the documentation).
- vert.x-blocked-thread-checker: always there, to detect routines that block the event loop for more than 2000 milliseconds.
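If you want a deterministic thread count while experimenting, you can cap the pool explicitly; a sketch (the size of 1 is only for illustration):
VertxOptions options = new VertxOptions().setEventLoopPoolSize(1);
Vertx vertx = Vertx.vertx(options);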
