Vertx http server only creating one instance - java

I am creating a simple microservice using Vert.x, and when I start my server it only creates one event loop thread even though 12 are available.
My code to start the server is:
public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    int processorCounts = Runtime.getRuntime().availableProcessors();
    DeploymentOptions options = new DeploymentOptions().setInstances(processorCounts);
    vertx.deployVerticle(HttpRouter.class.getName(), options);
}
And my HTTP router looks like this:
@Override
public void start() throws Exception {
    super.start();
    Router router = Router.router(vertx);
    router.get("/").handler(event -> {
        event.response().end("Hello World");
    });
    vertx.createHttpServer().requestHandler(router::accept).listen(8001);
}

What is your process for testing? I assume you opened a browser and hit refresh on the same page. Then yes, the same verticle instance will handle the requests. The reason is that Vert.x load balances connections among verticle instances, not requests.
Open a different browser and you should see different event loop names.
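If you want to see the distribution directly, here is a minimal sketch of the HttpRouter verticle (hypothetical, purely for illustration) that logs which event loop handles each request; separate connections should then print different thread names:

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

public class HttpRouter extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.get("/").handler(event -> {
            // Each deployed instance is pinned to one event loop thread, so requests
            // arriving over different connections should print different thread
            // names (e.g. vert.x-eventloop-thread-3).
            System.out.println("Handled by " + Thread.currentThread().getName());
            event.response().end("Hello World");
        });
        vertx.createHttpServer().requestHandler(router::accept).listen(8001);
    }
}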

Related

How to use Vertx EventBus to send messages between Verticles?

I am currently maintaining an application written in Java with the Vert.x framework.
I would like to implement sending messages between two application instances (primary and secondary) using the EventBus (over the network). Is that possible?
In the Vert.x documentation I do not see an example of how I can achieve that. https://vertx.io/docs/vertx-core/java/#event_bus
I see that there are send(...) methods in EventBus that take an address, but the address can be any String. I would like to publish events to another application instance (for example from Primary to Secondary).
It is possible using a Vert.x cluster manager.
Choose one of the supported cluster managers and put it on the classpath of your application.
In your main method, instead of creating a standalone Vertx instance, create a clustered one:
Vertx.clusteredVertx(new VertxOptions(), res -> {
    if (res.succeeded()) {
        Vertx vertx = res.result();
    } else {
        // failed!
    }
});
Deploy a receiver:
public class Receiver extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        EventBus eb = vertx.eventBus();
        eb.consumer("ping-address", message -> {
            System.out.println("Received message: " + message.body());
            // Now send back reply
            message.reply("pong!");
        });
        System.out.println("Receiver ready!");
    }
}
In a separate JVM, deploy a sender:
public class Sender extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        EventBus eb = vertx.eventBus();
        // Send a message every second
        vertx.setPeriodic(1000, v -> {
            eb.request("ping-address", "ping!", reply -> {
                if (reply.succeeded()) {
                    System.out.println("Received reply " + reply.result().body());
                } else {
                    System.out.println("No reply");
                }
            });
        });
    }
}
That's it for the basics. You may need to follow individual cluster manager configuration instructions in the docs.
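For completeness, here is a minimal sketch of a main class that ties the pieces together (the class name is made up; it assumes a supported cluster manager such as vertx-hazelcast is on the classpath so clusteredVertx can discover it):

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ReceiverMain {
    public static void main(String[] args) {
        // Create a clustered Vertx instance and deploy the Receiver verticle;
        // the sender side looks the same, deploying Sender instead.
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                res.result().deployVerticle(new Receiver());
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}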

Vert.x - How to get POST Value passed between Verticles?

This is the class that runs the verticles:
public class RequestResponseExample {
    public static void main(String[] args) throws InterruptedException {
        Logger LOG = LoggerFactory.getLogger(RequestResponseExample.class);
        final Vertx vertx = Vertx.vertx();
        final Handler<AsyncResult<String>> RequestHandler = dar -> vertx.deployVerticle(new ResponseVerticle());
        vertx.deployVerticle(new ResponseVerticle(), RequestHandler);
    }
}
This is the Request verticle class:
public class RequestVerticle extends AbstractVerticle {
    public static final Logger LOG = LoggerFactory.getLogger(RequestVerticle.class);
    static final String ADDRESS = "my.request.address";

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.get("/Test").handler(rc -> rc.response().sendFile("index.html"));
        vertx.createHttpServer()
                .requestHandler(router)
                .listen(8080);
    }
}
This is the Response verticle class. Here I'm having difficulty getting the value submitted from the HTML file:
public class ResponseVerticle extends AbstractVerticle {
    public static final Logger LOG = LoggerFactory.getLogger(RequestVerticle.class);

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        // How to handle the POST value ?
        router.post("/Test/Result").handler(rc -> rc.end("The Post Value"));
        vertx.createHttpServer()
                .requestHandler(router)
                .listen(8080);
    }
}
When the user invokes POST /Test/Result and sends some POST value, you receive it in your Response verticle class (third snippet). If you want to share that value with other verticles, you should either store it somewhere inside the handler method so other verticles can access it, or immediately forward it to another verticle via the event bus.
One possible solution would be to create a third verticle (e.g. StorageVerticle) which has get and set methods. That way, ResponseVerticle invokes the set method to store the value it got, and RequestVerticle invokes the get method to fetch the value the user sent when calling the POST method.
The other solution, direct communication between verticles, involves event bus message exchange: one verticle publishes/sends a message and any other verticle can register as a consumer to receive it. You can find more on this here: https://vertx.io/docs/vertx-core/java/#_the_event_bus_api.
It is hard to say which approach is better because it depends on the case, and I have limited information here about the scope of the project.
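To make the event bus option concrete, here is a rough sketch (the address name is made up, and it assumes the Vert.x Web BodyHandler is available so the POST body can be read from the RoutingContext):

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.BodyHandler;

public class ResponseVerticle extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        // BodyHandler must run before the POST route so the body is buffered.
        router.route().handler(BodyHandler.create());
        router.post("/Test/Result").handler(rc -> {
            String postedValue = rc.getBodyAsString();
            // Forward the value to any interested verticle over the event bus.
            vertx.eventBus().publish("posted.value", postedValue);
            rc.response().end("Received: " + postedValue);
        });
        vertx.createHttpServer().requestHandler(router).listen(8080);
    }
}

Any other verticle (RequestVerticle, or a StorageVerticle as suggested above) can then register a consumer on the same address, e.g. vertx.eventBus().<String>consumer("posted.value", msg -> LOG.info(msg.body())).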

Why does Vert.x create a new event loop for an http server?

I have a very simple Vert.x application that exposes a ping endpoint:
LauncherVerticle.java
public class LauncherVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> future) throws Exception {
        DeploymentOptions options = new DeploymentOptions();
        options.setConfig(config());
        options.setInstances(1);
        String verticleName = Example1HttpServerVerticle.class.getName();
        vertx.deployVerticle(verticleName, options, ar -> {
            if (ar.succeeded()) {
                future.complete();
            } else {
                future.fail(ar.cause());
            }
        });
    }
}
PingVerticle.java
public class PingVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> future) throws Exception {
        Router router = Router.router(vertx);
        router.get("/ping").handler(context -> {
            String payload = new JsonObject().put("hey", "ho").encode();
            context.response().putHeader("content-type", "application/json").end(payload);
        });
    }
}
As expected, by default Vert.x creates two event loop threads that I can see with VisualVM:
Of course, the application doesn't do anything yet, so now I go and add an HTTP server to PingVerticle:
String host = "0.0.0.0";
int port = 7777;
vertx.createHttpServer().requestHandler(router::accept).listen(port, host, ar -> {
    if (ar.succeeded()) {
        future.complete();
    } else {
        future.fail(ar.cause());
    }
});
Now I see in VisualVM that there are two new threads: an acceptor-thread, which I can more or less understand, and another eventloop-thread:
Why is this third eventloop-thread created?
According to the Vert.x javadoc:
The default number of event loop threads to be used = 2 * number of cores on the machine.
It seems you have more than one core.
There is not much documentation on Vert.x architecture, but there is an interesting read on Understanding Vert.x Architecture.
BTW, I have a four-core machine and I see the same number of threads at application startup. I noticed an increase in the number of event loop threads as more load is generated, while the other threads remain one per Vert.x process.
In short:
vert.x-acceptor-thread-0
Always there when an HttpServer is created.
vert.x-eventloop-thread-0
vert.x-eventloop-thread-1
A Vert.x app starts with two event loop threads and adds more dynamically as needed, up to double the number of cores (2 * cores), as per the documentation.
vert.x-blocked-thread-checker
Always there to detect handlers blocking an event loop for more than 2000 milliseconds.
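Both figures are configurable through VertxOptions if the defaults do not suit you; here is a small illustrative sketch (the values are arbitrary, not recommendations):

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class TunedVertx {
    public static void main(String[] args) {
        VertxOptions options = new VertxOptions()
                // Default is 2 * number of cores; cap it explicitly if you prefer.
                .setEventLoopPoolSize(4)
                // The blocked-thread checker warns when an event loop is blocked
                // longer than maxEventLoopExecuteTime (2 seconds by default).
                .setMaxEventLoopExecuteTime(5_000_000_000L); // nanoseconds
        Vertx vertx = Vertx.vertx(options);
    }
}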

Communication in Netty Nio java

I want to create a communication system with two clients and a server in Netty NIO. More specifically, first I want the server to send a message once both clients are connected to it, and after that the two clients should be able to exchange data. I am using the code provided in this example. My modifications to the code can be found here: link
It seems that channelRead in the server handler works when the first client is connected, so it always returns 1, but when a second client connects it does not change to 2. How can I properly check from the server when both clients are connected? How can I read this value dynamically from the main function of my client? And what is the best way to let both clients communicate?
EDIT1: Apparently the client service runs and closes immediately, so every time I run a new NettyClient it connects but the connection is closed right after. So the counter always changes from zero to one. As advised in the comments below, I tested it using telnet on the same port and the counter seems to increase normally; with the NettyClient service, however, it does not.
EDIT2: It seems that the issue came from future.addListener(ChannelFutureListener.CLOSE); in channelRead in the ProcessingHandler class. When I commented it out, the code seems to work. However, I am not sure what the consequences of commenting it out are. Moreover, from the main function of the client I want to check when the returned message is exactly two. How could I create a method that waits for a specific message from the server and meanwhile blocks the main functionality?
static EventLoopGroup workerGroup = new NioEventLoopGroup();
static Promise<Object> promise = workerGroup.next().newPromise();

public static void callClient() throws Exception {
    String host = "localhost";
    int port = 8080;
    try {
        Bootstrap b = new Bootstrap();
        b.group(workerGroup);
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast(new RequestDataEncoder(), new ResponseDataDecoder(), new ClientHandler(promise));
            }
        });
        ChannelFuture f = b.connect(host, port).sync();
    } finally {
        //workerGroup.shutdownGracefully();
    }
}
Inside the main function I want to call the method, get the result, and continue with the main functionality when it is 2. However, I cannot call callClient inside the while loop since it would run the same client multiple times.
callBack();
while (true) {
    Object msg = promise.get();
    System.out.println("Case1: the connected clients is not two");
    int ret = Integer.parseInt(msg.toString());
    if (ret == 2) {
        break;
    }
}
System.out.println("Case2: the connected clients is two");
// proceed with the main functionality
How can I update the promise variable for the first client? When I run two clients, for the first client I always receive the message:
Case1: the connected clients is not two
It seems that the promise is not updated properly, while for the second client I always receive:
Case2: the connected clients is two
If my memory is correct, there is one ChannelHandlerContext per channel, and it can have multiple ChannelHandlers in its pipeline. Your channels variable is an instance variable of your handler class, and you create a new ProcessingHandler instance for each connection. Thus each handler will have one and only one connection in its channels variable once initialized: the one it was created for.
See new ProcessingHandler() in the initChannel function in the server code (NettyServer.java).
You can either make the channels variable static so that it is shared between ProcessingHandler instances, or you can create a single ProcessingHandler instance elsewhere (e.g. as a local variable in the run() function) and then pass that instance to the addLast call instead of new ProcessingHandler().
Why is the size of ChannelGroup channels always one, even if I connect more clients?
Because the child ChannelInitializer is called for every new Channel (client). There you are creating a new instance of ProcessingHandler, so every channel sees its own instance of ChannelGroup.
Solution 1 - Channel Attribute
Use an Attribute and associate it with the Channel.
Create the attribute somewhere (let's say inside a Constants class):
public static final AttributeKey<ChannelGroup> CH_GRP_ATTR =
        AttributeKey.valueOf(SomeClass.class.getName());
Now create the ChannelGroup which will be used by all instances of ProcessingHandler:
final ChannelGroup channels = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
Update your child ChannelInitializer in NettyServer:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(
            new RequestDecoder(),
            new ResponseDataEncoder(),
            new ProcessingHandler());
    ch.attr(Constants.CH_GRP_ATTR).set(channels);
}
Now you can access the ChannelGroup instance inside your handlers like this:
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    final ChannelGroup channels = ctx.channel().attr(Constants.CH_GRP_ATTR).get();
    channels.add(ctx.channel());
}
This will work, because every time a new client connects, the ChannelInitializer will be called with the same reference to the ChannelGroup.
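If you also need to act once both clients are present (the original goal), a hypothetical extension of this handler could check the group size and broadcast to every connected channel (assuming your pipeline's encoder can serialize whatever you write):

@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    final ChannelGroup channels = ctx.channel().attr(Constants.CH_GRP_ATTR).get();
    channels.add(ctx.channel());
    if (channels.size() == 2) {
        // Broadcast the client count to both channels; the ClientHandler can
        // complete its promise when it sees the value 2 and unblock the main loop.
        channels.writeAndFlush(channels.size());
    }
}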
Solution 2 - static field
If you declare the ChannelGroup as static, all class instances will see the same ChannelGroup instance:
private static final ChannelGroup channels =
        new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
Solution 3 - propagate shared instance
Introduce a parameter into the constructor of ProcessingHandler:
private final ChannelGroup channels;

public ProcessingHandler(ChannelGroup chg) {
    this.channels = chg;
}
Now, inside your NettyServer class, create an instance of ChannelGroup and propagate it to the ProcessingHandler constructor:
final ChannelGroup channels = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

@Override
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(
            new RequestDecoder(),
            new ResponseDataEncoder(),
            new ProcessingHandler(channels)); // <- here
}
Personally, I would choose the first solution, because:
It clearly associates the ChannelGroup with the Channel context
You can access the same ChannelGroup in other handlers
You can have multiple instances of the server (running on different ports, within the same JVM)

Can I make a Java HttpServer threaded/process requests in parallel?

I have built a simple HttpServer following tutorials I found online, using Sun's lightweight HttpServer.
Basically the main function looks like this:
public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
    //Create the context for the server.
    server.createContext("/", new BaseHandler());
    server.setExecutor(null); // creates a default executor
    server.start();
}
And I have implemented the BaseHandler interface's method to process the HTTP request and return a response.
static class BaseHandler implements HttpHandler {
    //Handler method
    public void handle(HttpExchange t) throws IOException {
        //Implementation of http request processing
        //Read the request, get the parameters and print them
        //in the console, then build a response and send it back.
    }
}
I have also created a Client that sends multiple requests via threads. Each thread sends the following request to the server:
http://localhost:8000/[context]?int="+threadID
On each client run, the requests seem to arrive at the server in a different order, but they are served in a serial manner.
What I wish to accomplish is for the requests to be processed in parallel, if that is possible.
Is it possible, for example, to run each handler in a separate thread, and if so, is it a good thing to do?
Or should I just drop Sun's lightweight server altogether and focus on building something from scratch?
Thanks for any help.
As you can see in ServerImpl, the default executor just runs the task:
private static class DefaultExecutor implements Executor {
    public void execute(Runnable task) {
        task.run();
    }
}
You must provide a real executor for your HttpServer, like this:
server.setExecutor(java.util.concurrent.Executors.newCachedThreadPool());
and your server will handle requests in parallel.
Careful: this is an unbounded executor; see Executors.newFixedThreadPool to limit the number of threads.
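For example (the pool size of 10 is chosen arbitrarily, only to illustrate the bounded variant):

server.setExecutor(java.util.concurrent.Executors.newFixedThreadPool(10)); // at most 10 handler threads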
You used server.setExecutor(null), which runs the handler in the calling thread, in this case the main thread that runs the server.
You only need to change the line as follows:
public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
    //Create the context for the server.
    server.createContext("/", new BaseHandler());
    server.setExecutor(Executors.newCachedThreadPool());
    server.start();
}
