How to vertically scale Vert.x without Verticles?

According to the Vert.x docs, deploying using Verticles is optional. If this is the case, how can I deploy, say, an HTTP server onto multiple event loops? Here's what I tried; I also read the API docs and couldn't find anything:
Vertx vertx = Vertx.vertx(new VertxOptions().setEventLoopPoolSize(10));
HttpServerOptions options = new HttpServerOptions().setLogActivity(true);

for (int i = 0; i < 10; i++) {
    vertx.createHttpServer(options).requestHandler(request -> {
        request.response().end("Hello world");
    }).listen(8081);
}
This appears to create 10 HTTP servers on the first event loop but I'm hoping for 1 server per event loop.
Here's what I see in my logs - all eventloop-thread-0:
08:42:46.667 [vert.x-eventloop-thread-0] DEBUG
io.netty.handler.logging.LoggingHandler - [id: 0x0c651def,
L:/0:0:0:0:0:0:0:1:8081 - R:/0:0:0:0:0:0:0:1:50978] READ: 78B
08:42:46.805 [vert.x-eventloop-thread-0] DEBUG
io.netty.handler.logging.LoggingHandler - [id: 0xe050d078,
L:/0:0:0:0:0:0:0:1:8081 - R:/0:0:0:0:0:0:0:1:51000] READ: 78B
08:42:47.400 [vert.x-eventloop-thread-0] DEBUG
io.netty.handler.logging.LoggingHandler - [id: 0x22b626b8,
L:/0:0:0:0:0:0:0:1:8081 - R:/0:0:0:0:0:0:0:1:51002] READ: 78B

"Optional" doesn't mean "you can, getting the same benefits". "Optional" simply means "you can".
Vert.x has the notion of thread affinity: HTTP servers created from the same thread will always be assigned to the same event loop. Otherwise you'd run into nasty thread-safety problems.
You can compare the example code from above with the following code:
Vertx vertx = Vertx.vertx();
HttpServerOptions options = new HttpServerOptions().setLogActivity(true);
// Spawn multiple threads, so EventLoops won't be bound to main
ExecutorService tp = Executors.newWorkStealingPool(10);
CountDownLatch l = new CountDownLatch(1);

for (int i = 0; i < 10; i++) {
    tp.execute(() -> {
        vertx.createHttpServer(options).requestHandler(request -> {
            System.out.println(Thread.currentThread().getName());
            // Slow the response somewhat
            vertx.setTimer(1000, (h) -> {
                request.response().end("Hello world");
            });
        }).listen(8081);
    });
}

// Just wait here
l.await();
Output is something like:
vert.x-eventloop-thread-0
vert.x-eventloop-thread-1
vert.x-eventloop-thread-2
vert.x-eventloop-thread-0
That's because each server is now created from a different calling thread, so each one gets bound to a separate event loop.

Related

Spring Boot WebSocket Broker does not send CONNECTED frame

I recently faced an issue, unfortunately in a production environment, where my Spring Boot server was not sending a response to the CONNECT frame (i.e. a CONNECTED frame). It first started happening occasionally, but later on none of the CONNECT requests sent by the browser were replied to.
After some investigating, I found out that the inboundChannel queue was holding many requests at that time. I believe this was the reason.
On the console I was able to see the following log:
2022-06-01 18:22:59,943 INFO Thread-id-74- springframework.web.socket.config.WebSocketMessageBrokerStats: WebSocketSession[130 current WS(129)-HttpStream(1)-HttpPoll(0), 225 total, 0 closed abnormally (0 connect failure, 0 send limit, 2 transport error)], stompSubProtocol[processed CONNECT(220)-CONNECTED(132)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 2, active threads = 2, queued tasks = 10774, completed tasks = 31806], outboundChannel[pool size = 2, active threads = 0, queued tasks = 0, completed tasks = 570895], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 134, completed tasks = 1985]
I was wondering what might be the cause of the issue. What can cause queuing in the inboundChannel queue?
Here is my current STOMP config in my Angular application:
const config: StompJS.StompConfig = {
    brokerURL: this.serverUrl,
    connectHeaders: {
        ccid: this.cookieService.get('ccid'),
        username: `${this.globalContext.get('me')['username']}`,
    },
    debug: (str) => {
        this.loggerService.log(this.sessionId, ' | ', str);
    },
    webSocketFactory: () => {
        return new SockJS(this.serverUrl);
    },
    logRawCommunication: true,
    reconnectDelay: 3000,
    heartbeatIncoming: 100,
    heartbeatOutgoing: 100,
    discardWebsocketOnCommFailure: true,
    connectionTimeout: 4000
};
Finally, I think I found the solution. The problem was around the queued tasks for the inbound channel, as can be seen in the appended log:
2022-06-01 18:22:59,943 INFO Thread-id-74- springframework.web.socket.config.WebSocketMessageBrokerStats: WebSocketSession[130 current WS(129)-HttpStream(1)-HttpPoll(0), 225 total, 0 closed abnormally (0 connect failure, 0 send limit, 2 transport error)], stompSubProtocol[processed CONNECT(220)-CONNECTED(132)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 2, active threads = 2, queued tasks = 10774, completed tasks = 31806], outboundChannel[pool size = 2, active threads = 0, queued tasks = 0, completed tasks = 570895], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 134, completed tasks = 1985]
I was shocked to see that only 2 threads were allocated to the task even though I was running on an 8-core machine. So I checked the code for the TaskExecutor and found this:
this.taskExecutor.setCorePoolSize(Runtime.getRuntime().availableProcessors() * 2);
According to this, my corePoolSize should have been around 8 * 2 = 16. I figured out there is some bug with Runtime.getRuntime().availableProcessors() because of which it does not return the correct value on Java 8 (it has been fixed in newer versions). Hence, I decided to fix this manually:
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    logger.debug("Configuring task executor for Client Inbound Channel");
    if (inboundCoreThreads != null && inboundCoreThreads > 0) {
        registration.taskExecutor().corePoolSize(inboundCoreThreads);
    }
}
Now the question was why it was getting queued, so we started looking at the thread dump and figured out that most of the threads were stuck in the WAITING state due to the cache limit. Hence we updated the cacheLimit from 1024 to 4096:
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.setCacheLimit(messageBrokerCacheLimit);
}
Of course, inboundCoreThreads and messageBrokerCacheLimit are just variable names; you have to put actual values into them.
After this, everything seems to be working just fine. Thank you @Ilya Lapitan for the help.
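For reference, here is a minimal sketch of how the two overrides above might sit together in a single configuration class, assuming Spring 5's WebSocketMessageBrokerConfigurer; the property keys used for inboundCoreThreads and messageBrokerCacheLimit are placeholders, not something from the original setup:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.ChannelRegistration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    // Placeholder property keys; wire these to whatever configuration source you use
    @Value("${websocket.inbound.core-threads:16}")
    private Integer inboundCoreThreads;

    @Value("${websocket.broker.cache-limit:4096}")
    private Integer messageBrokerCacheLimit;

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        if (inboundCoreThreads != null && inboundCoreThreads > 0) {
            registration.taskExecutor().corePoolSize(inboundCoreThreads);
        }
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        // The usual broker setup (enableSimpleBroker, destination prefixes, ...) would also go here
        config.setCacheLimit(messageBrokerCacheLimit);
    }
}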

Why doesn't this thread pool execute HTTP requests simultaneously?

I wrote a few lines of code which send 50 HTTP GET requests to a service running on my machine. The service always sleeps 1 second and then returns an HTTP status code 200 with an empty body. As expected, the code runs for about 50 seconds.
To speed things up a little I tried to create an ExecutorService with 4 threads so I could always send 4 requests at the same time to my service. I expected the code to run for about 13 seconds (50 requests / 4 threads ≈ 13 rounds of 1 second each).
final List<String> urls = new ArrayList<>();
for (int i = 0; i < 50; i++)
    urls.add("http://localhost:5000/test/" + i);

final RestTemplate restTemplate = new RestTemplate();
final List<Callable<String>> tasks = urls
        .stream()
        .map(u -> (Callable<String>) () -> {
            System.out.println(LocalDateTime.now() + " - " + Thread.currentThread().getName() + ": " + u);
            return restTemplate.getForObject(u, String.class);
        }).collect(Collectors.toList());

final ExecutorService executorService = Executors.newFixedThreadPool(4);
final long start = System.currentTimeMillis();
try {
    final List<Future<String>> futures = executorService.invokeAll(tasks);
    final List<String> results = futures.stream().map(f -> {
        try {
            return f.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }).collect(Collectors.toList());
    System.out.println(results);
} finally {
    executorService.shutdown();
    executorService.awaitTermination(10, TimeUnit.SECONDS);
}
final long elapsed = System.currentTimeMillis() - start;
System.out.println("Took " + elapsed + " ms...");
But, if you look at the seconds in the debug output, it seems like the first 4 requests are executed simultaneously while all the other requests are executed one after another:
2018-10-21T17:42:16.160 - pool-1-thread-3: http://localhost:5000/test/2
2018-10-21T17:42:16.160 - pool-1-thread-1: http://localhost:5000/test/0
2018-10-21T17:42:16.160 - pool-1-thread-2: http://localhost:5000/test/1
2018-10-21T17:42:16.159 - pool-1-thread-4: http://localhost:5000/test/3
2018-10-21T17:42:17.233 - pool-1-thread-3: http://localhost:5000/test/4
2018-10-21T17:42:18.232 - pool-1-thread-2: http://localhost:5000/test/5
2018-10-21T17:42:19.237 - pool-1-thread-4: http://localhost:5000/test/6
2018-10-21T17:42:20.241 - pool-1-thread-1: http://localhost:5000/test/7
...
Took 50310 ms...
So for debugging purposes I changed the HTTP request to a sleep call:
// return restTemplate.getForObject(u, String.class);
TimeUnit.SECONDS.sleep(1);
return "";
And now the code works as expected:
...
Took 13068 ms...
So my question is: why does the code with the sleep call work as expected while the code with the HTTP request doesn't? And how can I get it to behave the way I expected?
From the information given, this is the most probable root cause:
The requests you make are issued in parallel, but the HTTP server which fulfils these requests handles only one request at a time.
So when you start making requests, the executor service fires them off concurrently, and thus you get the first 4 at the same time.
But the HTTP server can only respond to one request at a time, i.e. one per second.
When the 1st request is fulfilled, the executor service picks another request and fires it, and this goes on until the last request.
So at any given time 4 requests are blocked at the HTTP server, being served serially, one after the other.
To get a proof of concept of this theory, point the same client at something that genuinely handles requests concurrently, for example a messaging service (queue) that can receive on 4 channels at once, and test again. That should reduce the time.
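Another way to test the theory is to point the same client code at a throwaway server that definitely handles requests concurrently. A minimal sketch using the JDK's built-in com.sun.net.httpserver (the port and path mirror the question; the 4-thread executor is the assumption being tested):

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SlowTestServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(5000), 0);
        server.createContext("/test", exchange -> {
            try {
                TimeUnit.SECONDS.sleep(1);              // simulate the 1-second service time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            exchange.sendResponseHeaders(200, -1);      // 200 with an empty body
            exchange.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(4));   // serve 4 requests concurrently
        server.start();
    }
}

If the client pool finishes in roughly 13 seconds against this server, the original 50-second run was the downstream service serialising requests rather than a problem with the thread pool.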

Concurrent Execution Java 8

I am new to concurrent programming with Java and am trying to start some Callables asynchronously. But the code seems to block my program flow at the point where the Callables are given to the executor service via es.invokeAll(tasks):
public void checkSensorConnections(boolean fireEvent) {
    List<Callable<Void>> tasks = new ArrayList<>();
    getSensors().forEach(sensor -> {
        tasks.add(writerService.openWriteConnection(sensor));
        tasks.add(readerService.openReadConnection(sensor));
    });
    try {
        LOG.info("Submmitting tasks");
        ExecutorService es = Executors.newWorkStealingPool();
        es.invokeAll(tasks);
        LOG.info("Tasks submitted");
    } catch (InterruptedException e) {
        LOG.error("could not open sensor-connections", e);
        error(MeasurmentScrewMinerError.OPEN_CONNECTION_ERROR);
    }
}
I have some log statements tracking the flow of the program. As you can see, the execution waits until the tasks have been executed:
2017-01-19 16:06:06,474 INFO [main]
de.cgh.screwminer.service.measurement.MeasurementService
(MeasurementService.java:127) - Submmitting tasks
2017-01-19 16:06:08,477 ERROR [pool-2-thread-2]
de.cgh.screwminer.service.measurement.SensorReadService
(SensorReadService.java:68) - sensor Drehmoment read-connection could
not be opened java.net.SocketTimeoutException: Receive timed out ...
2017-01-19 16:06:08,477
ERROR [pool-2-thread-4]
de.cgh.screwminer.service.measurement.SensorReadService
(SensorReadService.java:68) - sensor Kraft read-connection could not
be opened java.net.SocketTimeoutException: Receive timed out ...
2017-01-19 16:06:08,482 INFO
[main] de.cgh.screwminer.service.measurement.MeasurementService
(MeasurementService.java:132) - Tasks submitted
From the Javadoc of invokeAll:
Returns:
a list of Futures representing the tasks, in the same sequential order as produced by the iterator for the given task list, each of which has completed
So yes, invokeAll waits: by the time it returns, all tasks have finished.
What you can do instead is hold the Executor in the class and submit each task individually in your forEach(), which achieves the same thing without blocking. You then get a list of Futures which you should check for errors.
You could do something like this (a sketch; exec is the executor held in the class, and since openWriteConnection returns a Callable you invoke it inside the async task):
getSensors().forEach(s -> CompletableFuture
        .runAsync(() -> {
            try { writerService.openWriteConnection(s).call(); }
            catch (Exception e) { throw new CompletionException(e); }
        }, exec)
        .exceptionally(ex -> { /* error handling */ return null; }));
CompletableFuture is a Java 8 feature and lets you handle errors nicely, since you don't have to ask each Future whether it completed successfully (which often leads to unexpected, non-logged errors).
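If you still want a single point where you know every connection attempt has finished (which is what invokeAll gave you), a minimal sketch is to collect the futures and combine them with allOf, attaching a callback instead of blocking; exec and LOG stand for the executor and logger already present in the class:

List<CompletableFuture<Void>> futures = new ArrayList<>();
getSensors().forEach(sensor -> futures.add(
        CompletableFuture
                .runAsync(() -> { /* open the read/write connections for sensor, as above */ }, exec)
                .exceptionally(ex -> {
                    LOG.error("could not open sensor-connections", ex);
                    return null;
                })));

// Fires once every future above has completed; the calling thread is never blocked
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
        .thenRun(() -> LOG.info("All sensor connections attempted"));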

Handling remote events with Java futures

I'm programming RPC-style communication with microcontrollers in Java. The issue I'm facing is blocking client code execution until I receive the result from the microcontroller, which arrives asynchronously.
Namely, I send commands out and receive results in two different threads (in the same class, though). The approach I've taken is to use CompletableFuture, but it does not work as I expect it to.
My RPC invoke method sends the command out and instantiates a CompletableFuture as below:
protected synchronized CompletableFuture<String> sendCommand(String command) {
    // ... send command ...
    this.handler = new CompletableFuture<String>();
    return this.handler;
}
The calling code looks like this:
CompletableFuture<String> result = procedure.sendCommand("readSensor(0x1508)");
String value = result.get(5, TimeUnit.SECONDS); // line X
Next, there is a listener method which receives data from the microcontroller:
protected synchronized void onReceiveResult(String data) {
    this.handler.complete(data); // line Y
}
I expect client code execution to block at line X, and it indeed does. But for some reason line Y does not unblock it, resulting in a timeout exception.
To answer comments from below...
Calling code (sorry, names do not match exactly what I have provided above, but that's the only difference, I think):
CompletableFuture<String> result = this.device.sendCommand(cmd);
log.debug("Waiting for callback, result=" + result);
String sid = result.get(timeout, unit);
Produces output:
2016-10-14 21:58:30 DEBUG RemoteProcedure:36 - Waiting for callback, result=com.***.rpc.RemoteDevice$ActiveProcedure#44c519a2[Not completed]
Completion code:
log.debug("Dispatching msg [" + msg + "] to a procedure: " + this.commandForResult);
log.debug("result=" + this.result);
log.debug("Cancelled = " + this.result.isCancelled());
log.debug("Done = " + this.result.isDone());
log.debug("CompletedExceptionally = " + this.result.isCompletedExceptionally());
boolean b = this.result.complete(msg);
this.result = null;
log.debug("b=" + b);
Produces output:
2016-10-14 21:58:35 DEBUG RemoteDevice:141 - Dispatching msg [123] to a procedure: getId;
2016-10-14 21:58:35 DEBUG RemoteDevice:142 - result=com.***.rpc.RemoteDevice$ActiveProcedure#44c519a2[Not completed]
2016-10-14 21:58:35 DEBUG RemoteDevice:143 - Cancelled = false
2016-10-14 21:58:35 DEBUG RemoteDevice:144 - Done = false
2016-10-14 21:58:35 DEBUG RemoteDevice:145 - CompletedExceptionally = false
2016-10-14 21:58:35 DEBUG RemoteDevice:150 - b=true
ActiveProcedure is the actual CompletableFuture:
public static class ActiveProcedure extends CompletableFuture<String> {
    @Getter String command;

    public ActiveProcedure(String command) {
        this.command = command;
    }
}
OK, things became clear:
There was an integration issue with the underlying library I use to communicate with the microcontroller. I expected to receive data from the device on a separate thread, but it was happening on the same thread. Therefore CompletableFuture.get did not unblock.
I do not understand exactly the mechanism leading to this behaviour, but placing
handler.complete(msg);
on a separate thread solved the issue.
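For reference, a minimal sketch of that workaround: hand the completion off to another thread (a single-thread executor here, which is an assumption) so the library's receive thread is not the one completing the future:

private final ExecutorService completer = Executors.newSingleThreadExecutor();

protected synchronized void onReceiveResult(String data) {
    CompletableFuture<String> h = this.handler;
    // complete on a different thread than the one the library delivers data on
    completer.execute(() -> h.complete(data));
}

On Java 9+ the same thing can be written as handler.completeAsync(() -> data, completer).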

Vert.x multi-thread web-socket

I have a simple Vert.x app:
public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40));
        Router router = Router.router(vertx);
        long main_pid = Thread.currentThread().getId();

        Handler<ServerWebSocket> wsHandler = serverWebSocket -> {
            if (!serverWebSocket.path().equalsIgnoreCase("/ws")) {
                serverWebSocket.reject();
            } else {
                long socket_pid = Thread.currentThread().getId();
                serverWebSocket.handler(buffer -> {
                    String str = buffer.getString(0, buffer.length());
                    long handler_pid = Thread.currentThread().getId();
                    log.info("Got ws msg: " + str);
                    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    serverWebSocket.writeFinalTextFrame(res);
                });
            }
        };

        vertx
            .createHttpServer()
            .websocketHandler(wsHandler)
            .listen(8080);
    }
}
When I connect to this server with multiple clients I see that it works on one thread. But I want to handle each client connection in parallel. How should I change this code to do that?
This:
new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40)
looks like you're trying to create your own HTTP connection pool, which is likely not what you really want.
The idea of Vert.x and other non-blocking, event-loop-based frameworks is that we don't attempt a 1-thread-to-1-connection affinity; rather, when a request currently being served by the event-loop thread is waiting for IO (e.g. the response from a DB), that event-loop thread is freed to service another connection. This allows a single event-loop thread to service multiple connections in a concurrent-like fashion.
If you want to fully utilise all cores on your machine, and you're only going to be running a single verticle, then set the number of instances to the number of cores when you deploy your verticle, i.e.:
Vertx.vertx().deployVerticle("MyVerticle", new DeploymentOptions().setInstances(Runtime.getRuntime().availableProcessors()));
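For completeness, a sketch of what such a verticle could look like for a websocket server ("MyVerticle" is just an illustrative name). Each instance is assigned its own event loop, and because all instances listen on the same port, Vert.x round-robins incoming connections between them:

import io.vertx.core.AbstractVerticle;

public class MyVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer()
            .websocketHandler(ws -> ws.handler(buffer ->
                ws.writeFinalTextFrame("handled on " + Thread.currentThread().getName())))
            .listen(8080);
    }
}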
Vert.x is a reactive framework, which means that it uses a single-threaded model to handle all of your application load. This model is known to scale better than the thread-per-connection model.
The key point to know is that code you put in a handler must never block (like your Thread.sleep), since that would block the event-loop thread. If you have blocking code (say, for example, a JDBC call) you should wrap it in an executeBlocking handler, e.g.:
serverWebSocket.handler(buffer -> {
    String str = buffer.getString(0, buffer.length());
    long handler_pid = Thread.currentThread().getId();
    log.info("Got ws msg: " + str);
    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);

    vertx.executeBlocking(future -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        serverWebSocket.writeFinalTextFrame(res);
        future.complete();
    });
});
Now all the blocking code will be run on a thread from the thread pool that you can configure as already shown in other replies.
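One detail worth knowing: by default, executeBlocking calls submitted from the same context are executed serially, in order, on the worker pool. If the blocking jobs are independent, you can pass ordered = false so they may run in parallel; a sketch using the Vert.x 3 three-argument signature:

vertx.executeBlocking(future -> {
    // blocking work goes here
    future.complete();
}, false /* ordered */, res -> {
    // back on the event loop once the blocking work has finished
});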
If you would like to avoid writing all these executeBlocking handlers, and you know that you need to do several blocking calls, then you should consider using a worker verticle, since these scale at the event-bus level.
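A minimal sketch of deploying such a worker verticle (the class name and instance count are placeholders); all of its handlers run on the worker pool, so blocking inside them is acceptable:

DeploymentOptions workerOpts = new DeploymentOptions()
        .setWorker(true)
        .setInstances(Runtime.getRuntime().availableProcessors());
vertx.deployVerticle("com.example.BlockingWorkVerticle", workerOpts);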
A final note on multithreading: if you use multiple threads your server will not be as efficient as a single thread; for example, it won't be able to handle 10 million websockets, since 10 million threads, even on a modern machine (we're in 2016), will bring your OS scheduler to its knees.
