Why isn't the awaiting thread activated with a signalAll? - java

I have two functions. The first one, discoverHosts(), sends a request message to other computers and then waits on a condition with await(). A separate thread calls the handleMessage() function when it receives a response. After handling the response it calls signalAll() to let discoverHosts() know that it should check whether all responses have been received.
discoverHosts() does await when it reaches the loop. However, when the separate thread calls handleMessage(), discoverHosts() does not wake up when handleMessage() calls signalAll(). I checked while debugging whether signalAll() is called, and it is. I have almost the same bit of code elsewhere in my project, where it does work.
Do any of you know what I am overlooking?
private final Lock lock = new ReentrantLock();
private final Condition allReceived = lock.newCondition();

private void discoverHosts() throws Exception {
    lock.lock();
    externalNodes = new HashMap<String, NodeAddress>();
    Message msg = new Message(null, "REQUEST_IP");
    logger.debug("Broadcasting ip request, waiting responses");
    channel.send(msg);
    // TODO: Write a time-out
    while (channel.getView().size() - 1 != externalNodes.keySet().size()) {
        logger.debug("Channel: " + (channel.getView().size() - 1));
        logger.debug("Responses: " + externalNodes.keySet().size());
        allReceived.await();
    }
    logger.debug("All answers received");
    lock.unlock();
}
protected void handleMessage(Message msg) {
    lock.lock();
    if (!((String) msg.getObject()).matches("IP_RESPONSE:[0-9.]*"))
        return;
    logger.debug("Received answer from " + msg.getObject());
    String ip = ((String) msg.getObject()).replaceAll("IP_RESPONSE:", "");
    // externalHostIps.add(ip);
    NodeAddress currentAddress = new NodeAddress(ip, msg.getSrc());
    externalNodes.put(ip, currentAddress);
    logger.debug("Signalling all threads");
    allReceived.signalAll();
    lock.unlock();
    logger.debug("Unlocked");
}
Logger output:
4372 [main] DEBUG com.conbit.webhackarena.monitor.monitor.Monitor#3b91eb - Broadcasting ip request, waiting responses
4372 [main] DEBUG com.conbit.webhackarena.monitor.monitor.Monitor#3b91eb - Channel: 1
4372 [main] DEBUG com.conbit.webhackarena.monitor.monitor.Monitor#3b91eb - Responses: 0
4394 [Incoming-1,webhackarena,leendert-K53SV-53745] DEBUG com.conbit.webhackarena.monitor.monitor.Monitor#3b91eb - Received answer from IP_RESPONSE:192.168.1.106
4396 [Incoming-1,webhackarena,leendert-K53SV-53745] DEBUG com.conbit.webhackarena.monitor.monitor.Monitor#3b91eb - Signalling all threads
4397 [Incoming-1,webhackarena,leendert-K53SV-53745] DEBUG com.conbit.webhackarena.monitor.monitor.Monitor#3b91eb - Unlocked

I think your problem is this line:
if (!((String) msg.getObject()).matches("IP_RESPONSE:[0-9.]*"))
    return;
which means that under some conditions you acquire the lock and never release it.
Always use a try...finally block with a Lock to avoid this issue:
protected void handleMessage(Message msg) {
    lock.lock();
    try {
        if (!((String) msg.getObject()).matches("IP_RESPONSE:[0-9.]*"))
            return;
        logger.debug("Received answer from " + msg.getObject());
        String ip = ((String) msg.getObject()).replaceAll("IP_RESPONSE:", "");
        // externalHostIps.add(ip);
        NodeAddress currentAddress = new NodeAddress(ip, msg.getSrc());
        externalNodes.put(ip, currentAddress);
        logger.debug("Signalling all threads");
        allReceived.signalAll();
    } finally {
        lock.unlock();
    }
}
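The same advice applies to discoverHosts() from the question: channel.send() and await() can both throw, in which case the lock is never released there either. A minimal sketch of the same try...finally pattern, assuming the fields from the question:
private void discoverHosts() throws Exception {
    lock.lock();
    try {
        externalNodes = new HashMap<String, NodeAddress>();
        Message msg = new Message(null, "REQUEST_IP");
        logger.debug("Broadcasting ip request, waiting responses");
        channel.send(msg);
        while (channel.getView().size() - 1 != externalNodes.keySet().size()) {
            allReceived.await(); // releases the lock while waiting, reacquires it on wake-up
        }
        logger.debug("All answers received");
    } finally {
        lock.unlock(); // released even if send() or await() throws
    }
}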

However when the separate thread calls handleMessage(), discoverHosts() doesn't wake up when handleMessage() calls signalAll()
I suspect that you have two different instances of the class in question, and since the Condition allReceived is private final, the two threads are not actually dealing with the same condition.
If you are trying to debug this (or use System.out.println debugging), make sure that the instance of the wrapping class is the same in both threads.
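A quick way to check this is to log the identity of the instance (and of the lock) from both methods; if the two threads print different values, they are awaiting and signalling on different Condition objects. A small sketch, reusing the logger from the question:
private void discoverHosts() throws Exception {
    logger.debug("discoverHosts on instance " + System.identityHashCode(this)
            + ", lock " + System.identityHashCode(lock));
    // ... rest of the method unchanged ...
}

protected void handleMessage(Message msg) {
    logger.debug("handleMessage on instance " + System.identityHashCode(this)
            + ", lock " + System.identityHashCode(lock));
    // ... rest of the method unchanged ...
}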

Related

Why doesn't this thread pool execute HTTP requests simultaneously?

I wrote a few lines of code that send 50 HTTP GET requests to a service running on my machine. The service always sleeps 1 second and returns an HTTP status code 200 with an empty body. As expected, the code runs for about 50 seconds.
To speed things up a little I tried to create an ExecutorService with 4 threads so I could always send 4 requests at the same time to my service. I expected the code to run for about 13 seconds (50 requests / 4 threads ≈ 13 one-second rounds).
final List<String> urls = new ArrayList<>();
for (int i = 0; i < 50; i++)
    urls.add("http://localhost:5000/test/" + i);

final RestTemplate restTemplate = new RestTemplate();
final List<Callable<String>> tasks = urls
        .stream()
        .map(u -> (Callable<String>) () -> {
            System.out.println(LocalDateTime.now() + " - " + Thread.currentThread().getName() + ": " + u);
            return restTemplate.getForObject(u, String.class);
        }).collect(Collectors.toList());

final ExecutorService executorService = Executors.newFixedThreadPool(4);
final long start = System.currentTimeMillis();
try {
    final List<Future<String>> futures = executorService.invokeAll(tasks);
    final List<String> results = futures.stream().map(f -> {
        try {
            return f.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }).collect(Collectors.toList());
    System.out.println(results);
} finally {
    executorService.shutdown();
    executorService.awaitTermination(10, TimeUnit.SECONDS);
}
final long elapsed = System.currentTimeMillis() - start;
System.out.println("Took " + elapsed + " ms...");
But - if you look at the seconds in the debug output - it seems like the first 4 requests are executed simultaneously, while all the other requests are executed one after another:
2018-10-21T17:42:16.160 - pool-1-thread-3: http://localhost:5000/test/2
2018-10-21T17:42:16.160 - pool-1-thread-1: http://localhost:5000/test/0
2018-10-21T17:42:16.160 - pool-1-thread-2: http://localhost:5000/test/1
2018-10-21T17:42:16.159 - pool-1-thread-4: http://localhost:5000/test/3
2018-10-21T17:42:17.233 - pool-1-thread-3: http://localhost:5000/test/4
2018-10-21T17:42:18.232 - pool-1-thread-2: http://localhost:5000/test/5
2018-10-21T17:42:19.237 - pool-1-thread-4: http://localhost:5000/test/6
2018-10-21T17:42:20.241 - pool-1-thread-1: http://localhost:5000/test/7
...
Took 50310 ms...
So for debugging purposes I replaced the HTTP request with a sleep call:
// return restTemplate.getForObject(u, String.class);
TimeUnit.SECONDS.sleep(1);
return "";
And now the code works as expected:
...
Took 13068 ms...
So my question is: why does the code with the sleep call work as expected, while the code with the HTTP request doesn't? And how can I get it to behave the way I expected?
From the information given, this is the most probable root cause:
The requests you make are fired in parallel, but the HTTP server which fulfils them handles only one request at a time.
So when you start making requests, the executor service fires up the requests concurrently, which is why you see the first 4 at the same time. But the HTTP server can only respond to requests one at a time, i.e. after 1 second each.
Now when the 1st request is fulfilled, the executor service picks another request and fires it, and this goes on until the last request. In other words, 4 requests are blocked at the HTTP server at any moment, and they are served serially one after the other.
To get a proof of concept of this theory, run the same client against a server (or a messaging service / queue) that can genuinely handle 4 requests concurrently and test again. That should reduce the time.
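One way to test this locally is to run a tiny HTTP server whose handler pool is larger than the client's 4 threads; if the client now finishes in roughly 13 seconds, the bottleneck was the original single-threaded server. A sketch using the JDK's built-in com.sun.net.httpserver (the port and path below simply mirror the question and are otherwise arbitrary):
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentTestServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(5000), 0);
        server.createContext("/test", exchange -> {
            try {
                TimeUnit.SECONDS.sleep(1);          // simulate the 1-second service
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            exchange.sendResponseHeaders(200, -1);  // 200 with an empty body
            exchange.close();
        });
        // Pool larger than the client's 4 threads, so the server is not the bottleneck.
        server.setExecutor(Executors.newFixedThreadPool(8));
        server.start();
    }
}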

scheduleAtFixedRate not executing after first run

I have a scheduled executor that resets a parameter to 0 and wakes all active threads to continue processing. However, after the initial run the task is not executed again.
ScheduledExecutorService exec = Executors.newScheduledThreadPool(4);
exec.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        logger.info("Setting hourly limit record count back to 0 to continue processing");
        lines = 0;
        executor.notifyAll();
        Thread.currentThread().interrupt();
        return;
    }
}, 0, 1, TimeUnit.MINUTES);
There is another Executor defined in the class which executes further processes, and I am not sure if this influences it:
ExecutorService executor = Executors.newCachedThreadPool();
for (String processList : processFiles) {
    String appName = processList.substring(0, processList.indexOf("-"));
    String scope = processList.substring(processList.lastIndexOf("-") + 1);
    logger.info("Starting execution of thread for app " + appName + " under scope: " + scope);
    try {
        File processedFile = new File(ConfigurationReader.processedDirectory + appName + "-" + scope + ".csv");
        processedFile.createNewFile();
        executor.execute(new APIInitialisation(appName, processedFile.length(), scope));
    } catch (InterruptedException | IOException e) {
        e.printStackTrace();
    }
}
From the documentation for ScheduledExecutorService.scheduleAtFixedRate():
If any execution of the task encounters an exception, subsequent executions are suppressed.
So something in your task is throwing an exception. My guess is the call to executor.notifyAll(), which is documented to throw an IllegalMonitorStateException
if the current thread is not the owner of this object's monitor.
Your scheduled task most probably ends in an uncaught exception. Taken from the JavaDoc of ScheduledExecutorService.scheduleAtFixedRate:
If any execution of the task encounters an exception, subsequent
executions are suppressed.
Because you are provoking an uncaught exception, all further executions are cancelled.
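A way to address both points is to own the monitor before calling notifyAll() and to catch any unexpected exception inside the task, so that a single failure cannot silently cancel all future runs. A hedged sketch, reusing the names from the question (the Thread.currentThread().interrupt() call from the original task is dropped, as it serves no purpose here):
exec.scheduleAtFixedRate(() -> {
    try {
        logger.info("Setting hourly limit record count back to 0 to continue processing");
        synchronized (executor) {      // own the monitor before calling notifyAll()
            lines = 0;
            executor.notifyAll();      // wakes threads that wait() on the same executor object
        }
    } catch (RuntimeException e) {
        // Log and swallow: an escaping exception would suppress all further executions.
        logger.error("Periodic reset failed", e);
    }
}, 0, 1, TimeUnit.MINUTES);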

SSH Server Identification never received - Handshake Deadlock [SSHJ]

We're having some trouble trying to implement a pool of SftpConnections for our application.
We're currently using SSHJ (Schmizz) as the transport library, and we're facing an issue we simply cannot reproduce in our development environment (but the error keeps showing up randomly in production, sometimes after three days, sometimes after just 10 minutes).
The problem is that, when trying to send a file via SFTP, the thread gets locked in the init method of Schmizz's TransportImpl class:
@Override
public void init(String remoteHost, int remotePort, InputStream in, OutputStream out)
        throws TransportException {
    connInfo = new ConnInfo(remoteHost, remotePort, in, out);
    try {
        if (config.isWaitForServerIdentBeforeSendingClientIdent()) {
            receiveServerIdent();
            sendClientIdent();
        } else {
            sendClientIdent();
            receiveServerIdent();
        }
        log.info("Server identity string: {}", serverID);
    } catch (IOException e) {
        throw new TransportException(e);
    }
    reader.start();
}
isWaitForServerIdentBeforeSendingClientIdent is FALSE for us, so first of all the client (we) sends its identification, as appears in the logs:
"Client identity String: blabla"
Then it is the turn of receiveServerIdent:
private void receiveServerIdent() throws IOException {
    final Buffer.PlainBuffer buf = new Buffer.PlainBuffer();
    while ((serverID = readIdentification(buf)).isEmpty()) {
        int b = connInfo.in.read();
        if (b == -1)
            throw new TransportException("Server closed connection during identification exchange");
        buf.putByte((byte) b);
    }
}
The thread never gets control back, as the server never replies with its identity. It seems the code is stuck in this while loop. No timeouts or SSH exceptions are thrown; my client just keeps waiting forever and the thread is effectively deadlocked.
This is the readIdentification method's implementation:
private String readIdentification(Buffer.PlainBuffer buffer) throws IOException {
    String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
    if (ident.isEmpty()) {
        return ident;
    }
    if (!ident.startsWith("SSH-2.0-") && !ident.startsWith("SSH-1.99-"))
        throw new TransportException(DisconnectReason.PROTOCOL_VERSION_NOT_SUPPORTED,
                "Server does not support SSHv2, identified as: " + ident);
    return ident;
}
It seems like ConnInfo's InputStream never gets any data to read, as if the server had closed the connection (even though, as said earlier, no exception is thrown).
I've tried to simulate this error by saturating the negotiation, closing sockets while connecting, and using conntrack to kill established connections while the handshake is being made, but with no luck at all, so any help would be highly appreciated. :)
I bet the following code creates the problem:
String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
if (ident.isEmpty()) {
    return ident;
}
If IdentificationStringParser.parseIdentificationString() returns an empty string, it is returned to the caller. The caller then keeps looping in while ((serverID = readIdentification(buf)).isEmpty()) since the string stays empty. The only way to break the loop would be if the call int b = connInfo.in.read(); returned -1... but if the server keeps sending data (or resending the data) this condition is never met.
If this is the case, I would add some kind of artificial way to detect it, like:
private String readIdentification(Buffer.PlainBuffer buffer, AtomicInteger numberOfAttempts)
        throws IOException {
    String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
    numberOfAttempts.incrementAndGet();
    if (ident.isEmpty() && numberOfAttempts.intValue() < 1000) {
        return ident;
    } else if (ident.isEmpty() && numberOfAttempts.intValue() >= 1000) {
        throw new TransportException("Too many attempts to read the server ident");
    }
    if (!ident.startsWith("SSH-2.0-") && !ident.startsWith("SSH-1.99-"))
        throw new TransportException(DisconnectReason.PROTOCOL_VERSION_NOT_SUPPORTED,
                "Server does not support SSHv2, identified as: " + ident);
    return ident;
}
This way you would at least confirm that this is the case and could dig further into why parseIdentificationString() returns an empty string.
We faced a similar issue where we would see:
INFO [net.schmizz.sshj.transport.TransportImpl : pool-6-thread-2] - Client identity string: blablabla
INFO [net.schmizz.sshj.transport.TransportImpl : pool-6-thread-2] - Server identity string: blablabla
but on some occasions there was no server response at all.
Our service would typically wake up and transfer several files simultaneously, one file per connection / thread.
The issue was in the sshd server config: we increased MaxStartups from its default value of 10 (we noticed the problems started shortly after batch sizes increased to above 10).
Default in /etc/ssh/sshd_config:
MaxStartups 10:30:100
Changed to:
MaxStartups 30:30:100
MaxStartups
Specifies the maximum number of concurrent unauthenticated connections to the SSH daemon. Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10:30:100. Alternatively, random early drop can be enabled by specifying the three colon separated values start:rate:full (e.g. "10:30:60"). sshd will refuse connection attempts with a probability of rate/100 (30%) if there are currently start (10) unauthenticated connections. The probability increases linearly and all connection attempts are refused if the number of unauthenticated connections reaches full (60).
If you cannot control the server, you might have to find a way to limit your concurrent connection attempts in your client code instead.
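For example, a simple client-side throttle is a shared Semaphore sized below the server's MaxStartups "start" value, held only for the duration of the connect/authenticate phase. A rough sketch, not tied to any particular SSHJ version; connectAndAuth() and SftpSession are placeholders standing in for your own connection code:
import java.util.concurrent.Semaphore;

public class ThrottledConnector {
    // Keep this below the server's MaxStartups "start" value (10 by default).
    private static final Semaphore HANDSHAKE_SLOTS = new Semaphore(8);

    public SftpSession connect(String host) throws Exception {
        HANDSHAKE_SLOTS.acquire();          // wait for a free handshake slot
        try {
            return connectAndAuth(host);    // placeholder: connect + authenticate here
        } finally {
            HANDSHAKE_SLOTS.release();      // free the slot once authenticated (or failed)
        }
    }

    // Hypothetical helper and return type, standing in for the real SSHJ calls.
    private SftpSession connectAndAuth(String host) throws Exception {
        throw new UnsupportedOperationException("wire up SSHJ connect/auth here");
    }

    interface SftpSession {}
}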

Handling remote events with Java futures

I'm programming RPC-style communication with microcontrollers in Java. The issue I'm facing is blocking client code execution until I receive the result from the microcontroller, which arrives asynchronously.
Namely, I send commands out and receive results in two different threads (in the same class, though). The approach I've taken is to use CompletableFuture, but it does not work as I expect it to.
My RPC invoke method sends the command out and instantiates a CompletableFuture as below:
protected synchronized CompletableFuture<String> sendCommand(String command) {
    // ... send command ...
    this.handler = new CompletableFuture<String>();
    return this.handler;
}
The calling code looks like this:
CompletableFuture<String> future = procedure.sendCommand("readSensor(0x1508)");
String result = future.get(5, TimeUnit.SECONDS); // line X
Next, there is a listener method which receives data from the microcontroller:
protected synchronized void onReceiveResult(String data) {
    this.handler.complete(data); // line Y
}
I expect the client code execution to block at line X, and it indeed does. But for some reason line Y does not unblock it, resulting in a timeout exception.
To answer the comments below...
Calling code (sorry, the names do not match exactly what I have provided above, but that's the only difference, I think):
CompletableFuture<String> result = this.device.sendCommand(cmd);
log.debug("Waiting for callback, result=" + result);
String sid = result.get(timeout, unit);
Produces output:
2016-10-14 21:58:30 DEBUG RemoteProcedure:36 - Waiting for callback, result=com.***.rpc.RemoteDevice$ActiveProcedure#44c519a2[Not completed]
Completion code:
log.debug("Dispatching msg [" + msg + "] to a procedure: " + this.commandForResult);
log.debug("result=" + this.result);
log.debug("Cancelled = " + this.result.isCancelled());
log.debug("Done = " + this.result.isDone());
log.debug("CompletedExceptionally = " + this.result.isCompletedExceptionally());
boolean b = this.result.complete(msg);
this.result = null;
log.debug("b=" + b);
Produces output:
2016-10-14 21:58:35 DEBUG RemoteDevice:141 - Dispatching msg [123] to a procedure: getId;
2016-10-14 21:58:35 DEBUG RemoteDevice:142 - result=com.***.rpc.RemoteDevice$ActiveProcedure#44c519a2[Not completed]
2016-10-14 21:58:35 DEBUG RemoteDevice:143 - Cancelled = false
2016-10-14 21:58:35 DEBUG RemoteDevice:144 - Done = false
2016-10-14 21:58:35 DEBUG RemoteDevice:145 - CompletedExceptionally = false
2016-10-14 21:58:35 DEBUG RemoteDevice:150 - b=true
ActiveProcedure is the actual CompletableFuture:
public static class ActiveProcedure extends CompletableFuture<String> {
    @Getter String command;

    public ActiveProcedure(String command) {
        this.command = command;
    }
}
OK, things became clear:
There was an integration issue with the underlying library I use to communicate with the microcontroller. I expected to receive data from the device on a separate thread, but that was happening on the same thread; therefore CompletableFuture.get() did not unblock.
I do not understand exactly the mechanism leading to this behaviour, but placing
handler.complete(msg);
into a separate thread solved the issue.
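For illustration, "placing the completion into a separate thread" can be as small as handing it off to a single-thread executor, so the thread delivering data from the library is never the one that completes (or waits on) the future. A sketch under that assumption; completionExecutor is a name introduced here, not part of the original code:
private final ExecutorService completionExecutor = Executors.newSingleThreadExecutor();

protected synchronized void onReceiveResult(String data) {
    // Complete the future on a different thread than the one delivering the data,
    // so the callback thread is decoupled from any caller blocked in handler.get(...).
    final CompletableFuture<String> pending = this.handler;
    completionExecutor.execute(() -> pending.complete(data));
}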

Vert.x multi-thread web-socket

I have a simple vert.x app:
public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40));
        Router router = Router.router(vertx);
        long main_pid = Thread.currentThread().getId();
        Handler<ServerWebSocket> wsHandler = serverWebSocket -> {
            if (!serverWebSocket.path().equalsIgnoreCase("/ws")) {
                serverWebSocket.reject();
            } else {
                long socket_pid = Thread.currentThread().getId();
                serverWebSocket.handler(buffer -> {
                    String str = buffer.getString(0, buffer.length());
                    long handler_pid = Thread.currentThread().getId();
                    log.info("Got ws msg: " + str);
                    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    serverWebSocket.writeFinalTextFrame(res);
                });
            }
        };
        vertx
            .createHttpServer()
            .websocketHandler(wsHandler)
            .listen(8080);
    }
}
When I connect to this server with multiple clients, I see that it runs on one thread. But I want to handle each client connection in parallel. How should I change this code to do that?
This:
new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40)
looks like you're trying to create your own HTTP connection pool, which is likely not what you really want.
The idea of Vert.x and other non-blocking, event-loop based frameworks is that we don't attempt the 1 thread -> 1 connection affinity; rather, when a request currently being served by the event-loop thread is waiting for IO - e.g. the response from a DB - that event-loop thread is freed to service another connection. This allows a single event-loop thread to service multiple connections in a concurrent-like fashion.
If you want to fully utilise all cores on your machine, and you're only going to be running a single verticle, then set the number of instances to the number of cores when you deploy your verticle, i.e.:
Vertx.vertx().deployVerticle("MyVerticle", new DeploymentOptions().setInstances(Runtime.getRuntime().availableProcessors()));
Vert.x is a reactive framework, which means that it uses a single-thread model to handle all of your application load. This model is known to scale better than the threaded model.
The key point to know is that code you put in a handler must never block (like your Thread.sleep), since it will block the event-loop thread. If you have blocking code (say, for example, a JDBC call), you should wrap it in an executeBlocking handler, e.g.:
serverWebSocket.handler(buffer -> {
    String str = buffer.getString(0, buffer.length());
    long handler_pid = Thread.currentThread().getId();
    log.info("Got ws msg: " + str);
    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
    vertx.executeBlocking(future -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        serverWebSocket.writeFinalTextFrame(res);
        future.complete();
    });
});
Now all the blocking code will run on a thread from the worker pool, which you can configure as already shown in the other replies.
If you would like to avoid writing all these executeBlocking handlers, and you know that you need to make several blocking calls, then you should consider using a worker verticle, since these scale at the event-bus level.
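For reference, turning a verticle into a worker verticle is just a deployment option; a minimal sketch, where MyWorkerVerticle is a made-up name standing in for a verticle that does the blocking work:
// Run the verticle on the worker pool instead of the event loop.
// Worker verticles may block, and they can be scaled with setInstances(...).
vertx.deployVerticle("MyWorkerVerticle",
        new DeploymentOptions()
                .setWorker(true)
                .setInstances(4));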
A final note on multi-threading: if you use multiple threads your server will not be as efficient as a single thread; for example, it won't be able to handle 10 million websockets, since 10 million threads, even on a modern machine (we're in 2016), will bring your OS scheduler to its knees.
