Concurrent execution in Java 8

I am new to concurrent programming with Java and am trying to start some Callables asynchronously. But the code seems to block my program flow at the point where the Callables are handed to the executor service, es.invokeAll(tasks):
public void checkSensorConnections(boolean fireEvent) {
    List<Callable<Void>> tasks = new ArrayList<>();
    getSensors().forEach(sensor -> {
        tasks.add(writerService.openWriteConnection(sensor));
        tasks.add(readerService.openReadConnection(sensor));
    });
    try {
        LOG.info("Submmitting tasks");
        ExecutorService es = Executors.newWorkStealingPool();
        es.invokeAll(tasks);
        LOG.info("Tasks submitted");
    } catch (InterruptedException e) {
        LOG.error("could not open sensor-connections", e);
        error(MeasurmentScrewMinerError.OPEN_CONNECTION_ERROR);
    }
}
I have some log statements tracing the flow of the program. As you can see, the execution waits until the two tasks have finished.
2017-01-19 16:06:06,474 INFO [main] de.cgh.screwminer.service.measurement.MeasurementService (MeasurementService.java:127) - Submmitting tasks
2017-01-19 16:06:08,477 ERROR [pool-2-thread-2] de.cgh.screwminer.service.measurement.SensorReadService (SensorReadService.java:68) - sensor Drehmoment read-connection could not be opened java.net.SocketTimeoutException: Receive timed out ...
2017-01-19 16:06:08,477 ERROR [pool-2-thread-4] de.cgh.screwminer.service.measurement.SensorReadService (SensorReadService.java:68) - sensor Kraft read-connection could not be opened java.net.SocketTimeoutException: Receive timed out ...
2017-01-19 16:06:08,482 INFO [main] de.cgh.screwminer.service.measurement.MeasurementService (MeasurementService.java:132) - Tasks submitted

From the Javadoc of invokeAll:
Returns:
a list of Futures representing the tasks, in the same sequential order as produced by the iterator for the given task list, each of which has completed
So yes, invokeAll returns only after all tasks have finished; the blocking you see is documented behavior.
What you can do instead is hold the executor as a field of the class and submit each task individually in your forEach(), which achieves the same thing without blocking the caller. You then get a list of Futures, which you should check for errors.
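For reference, a minimal sketch of that submit-based variant (assuming, as in your code, that openWriteConnection and openReadConnection return Callable<Void>, and that the executor lives in a field):
ExecutorService es = Executors.newWorkStealingPool(); // ideally a field, created once
List<Future<Void>> futures = new ArrayList<>();
getSensors().forEach(sensor -> {
    futures.add(es.submit(writerService.openWriteConnection(sensor))); // submit() returns immediately
    futures.add(es.submit(readerService.openReadConnection(sensor)));
});
// later, at a point of your choosing, collect the results:
for (Future<Void> f : futures) {
    try {
        f.get(); // blocks only here, not at submission time
    } catch (ExecutionException | InterruptedException e) {
        LOG.error("could not open sensor-connection", e);
    }
}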
Alternatively, with CompletableFuture you could do something like this:
getSensors().forEach(s -> {
    CompletableFuture
        .runAsync(() -> writerService.openWriteConnection(s), exec) // assumes this call does the blocking open
        .exceptionally(ex -> {
            // error handling, e.g. LOG.error("could not open connection", ex)
            return null;
        });
});
CompletableFuture is a Java 8 feature and lets you handle errors nicely, as you don't have to ask the Futures whether they completed successfully (forgetting to do so often leads to unexpected, never-logged errors).
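For instance, a sketch of collecting all connection attempts and reacting once every one of them has settled (openConnection is a placeholder for your open-call; exec is the executor held in the class):
List<CompletableFuture<Void>> cfs = new ArrayList<>();
getSensors().forEach(s -> cfs.add(
    CompletableFuture
        .runAsync(() -> openConnection(s), exec) // placeholder for the real open-call
        .exceptionally(ex -> {
            LOG.error("could not open connection for sensor " + s, ex);
            return null; // swallow after logging so allOf still completes
        })));
CompletableFuture.allOf(cfs.toArray(new CompletableFuture[0]))
    .thenRun(() -> LOG.info("all connection attempts finished")); // callback, no blocking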

Related

Azure ServiceBus: weird behavior of IMessageReceiver.receiveBatch()

I'm dealing with an Azure Service Bus subscription and its dead-letter queue (DLQ) in a Java Spring Boot setup.
I need to process the messages in the DLQ when there is a trigger.
I have 12 messages in the DLQ and need to read 5 messages in one go, submitting them to an ExecutorService to process the individual messages.
I created an IMessageReceiver deadLetterReceiver, then did batch receiving with deadLetterReceiver.receiveBatch(5).
The catch here is that the next batch must not be read until the messages in the 1st batch have been processed, and until then the 1st batch must not be removed from the DLQ; it remains there.
The problem is that after I process the 1st batch and read the 2nd batch from the ASB, instead of getting the next 5 messages I get the same messages again.
For example, if I have messages with messageId 1 to 12 in the DLQ, after reading the 1st batch I get messages with messageId 1,2,3,4,5. After reading the second batch, instead of getting 6,7,8,9,10 I get 1,2,3,4,5 again.
Here is the code:
public void processDeadLetterQueue() {
    IMessageReceiver deadLetterReceiver = getDeadLetterMessageReceiver();
    Long deadLetterMessageCount = getDeadLetterMessageCount();
    Long receivedMessageCount = 0L;
    ExecutorService executor = Executors.newFixedThreadPool(2);
    while (receivedMessageCount < deadLetterMessageCount) {
        Collection<IMessage> messageList = deadLetterReceiver.receiveBatch(5);
        receivedMessageCount += messageList.size();
        List<Callable<Void>> callableDeadLetterMessages = new ArrayList<>();
        messageList.forEach(message -> callableDeadLetterMessages.add(() -> {
            handleDeadLetterMessage(message, deadLetterReceiver);
            return null;
        }));
        try {
            List<Future<Void>> futureList = executor.invokeAll(callableDeadLetterMessages);
            for (Future<Void> future : futureList) {
                future.get();
            }
        } catch (InterruptedException | ExecutionException ex) {
            log.error("Interrupted during processing callableDeadLetterMessage: ", ex);
            Thread.currentThread().interrupt();
        }
    }
    executor.shutdown();
    deadLetterReceiver.close();
}
How can I stop it from reading the same messages again in the next batch and make it read the next available messages instead?
Note: I'm not abandoning the messages from the DLQ (deadLetterReceiver.abandon(message.getLockToken());).
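Presumably the receiver is working in the default PEEKLOCK mode, where a message that is never completed has its lock expire and becomes visible again, which would explain the redelivery. A minimal sketch of settling each message inside the handler (process(...) is a placeholder for the actual per-message work):
private void handleDeadLetterMessage(IMessage message, IMessageReceiver receiver) throws Exception {
    try {
        process(message);                          // placeholder for the actual processing
        receiver.complete(message.getLockToken()); // settle the message so it is not delivered again
    } catch (Exception e) {
        receiver.abandon(message.getLockToken());  // release the lock so the message can be retried
    }
}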

ScheduledThreadPoolExecutor stops executing with caught exceptions

I have the following class:
public class MaintanceTools {
    public static final ScheduledThreadPoolExecutor THREADSUPERVISER;
    private static final int ALLOWEDIDLESECONDS = 1 * 20;
    static {
        THREADSUPERVISER = new ScheduledThreadPoolExecutor(10);
    }
    public static void launchThreadsSupervising() {
        THREADSUPERVISER.scheduleWithFixedDelay(() -> {
            System.out.println("maintance launched " + Instant.now());
            ACTIVECONNECTIONS.forEach((connection) -> {
                try {
                    if (!connection.isDataConnected() &&
                            (connection.getLastActionInstant()
                                    .until(Instant.now(), SECONDS) > ALLOWEDIDLESECONDS)) {
                        connection.closeFTPConnection();
                        ACTIVECONNECTIONS.remove(connection);
                    }
                } catch (Throwable e) { }
            });
            System.out.println("maintance finished " + Instant.now());
        }, 0, 20, TimeUnit.SECONDS);
    }
}
This iterates over all FTP connections (because I am writing an FTP server), checks whether a connection is not transmitting any data and has been idle for some time, and closes the connection if so.
The problem is that the task never runs again after some exceptions are thrown in the interrupting thread. I know that it is written in the docs:
If any execution of the task encounters an exception, subsequent executions are suppressed. Otherwise, the task will only terminate via cancellation or termination of the executor.
And I do have an exception, but it is caught and does not escape the throwing function.
That function throws an AsynchronousCloseException because it hangs on channel.read(readBuffer); when the connection is closed, the exception is thrown and caught.
The question is how to make THREADSUPERVISER keep working regardless of any thrown and handled exceptions.
Debug output:
maintance launched 2017-08-30T14:03:05.504Z // launched and finished as expected
maintance finished 2017-08-30T14:03:05.566Z
output: FTPConnection id: 176 220 Service ready.
……
output: FTPConnection id: 190 226 File stored 135 bytes.
closing data socket: FTP connection 190, /0:0:0:0:0:0:0:1:1409
maintance launched 2017-08-30T14:03:25.581Z // launched and finished as expected
maintance finished 2017-08-30T14:03:25.581Z
async exception error reading. // got exception
maintance launched 2017-08-30T14:03:45.596Z // launched, but not finished and never run again
output: FTPConnection id: 176 221 Timeout exceeded, closing control and data connection.
closing data socket: FTP connection 176, /0:0:0:0:0:0:0:1:1407
As it turns out, the problem was in
ACTIVECONNECTIONS.remove(connection);
I was getting a ConcurrentModificationException. The solution in http://code.nomad-labs.com/2011/12/09/mother-fk-the-scheduledexecutorservice/ worked perfectly.
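The fix from that link boils down to wrapping the scheduled task so that no Throwable can ever reach the executor; a minimal sketch of such a wrapper:
static Runnable logAndSwallow(Runnable task) {
    return () -> {
        try {
            task.run();
        } catch (Throwable t) {
            // log instead of rethrowing, so scheduleWithFixedDelay keeps rescheduling the task
            t.printStackTrace();
        }
    };
}
Scheduling via THREADSUPERVISER.scheduleWithFixedDelay(logAndSwallow(() -> { ... }), 0, 20, TimeUnit.SECONDS) then logs an exception such as the ConcurrentModificationException above instead of letting it silently cancel the task.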

Vert.x multi-thread web-socket

I have a simple Vert.x app:
public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40));
        Router router = Router.router(vertx);
        long main_pid = Thread.currentThread().getId();
        Handler<ServerWebSocket> wsHandler = serverWebSocket -> {
            if (!serverWebSocket.path().equalsIgnoreCase("/ws")) {
                serverWebSocket.reject();
            } else {
                long socket_pid = Thread.currentThread().getId();
                serverWebSocket.handler(buffer -> {
                    String str = buffer.getString(0, buffer.length());
                    long handler_pid = Thread.currentThread().getId();
                    log.info("Got ws msg: " + str);
                    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    serverWebSocket.writeFinalTextFrame(res);
                });
            }
        };
        vertx
            .createHttpServer()
            .websocketHandler(wsHandler)
            .listen(8080);
    }
}
When I connect to this server with multiple clients, I see that it works in one thread. But I want to handle each client connection in parallel. How should I change this code to do that?
This:
new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40)
looks like you're trying to create your own HTTP connection pool, which is likely not what you really want.
The idea of Vert.x and other non-blocking, event-loop-based frameworks is that we don't attempt the 1-thread-to-1-connection affinity. Rather, when a request currently being served by the event-loop thread is waiting for IO (e.g. the response from a DB), that event-loop thread is freed to service another connection. This allows a single event-loop thread to service multiple connections in a concurrent-like fashion.
If you want to fully utilise all cores on your machine, and you're only going to be running a single verticle, then set the number of instances to the number of cores when you deploy your verticle, i.e.:
Vertx.vertx().deployVerticle("MyVerticle", new DeploymentOptions().setInstances(Runtime.getRuntime().availableProcessors()));
Vert.x is a reactive framework, which means that it uses a single-threaded model to handle all your application load. This model is known to scale better than the threaded model.
The key point to know is that any code you put in a handler must never block (like your Thread.sleep), since it would block the main thread. If you have blocking code (say, for example, a JDBC call), you should wrap it in an executeBlocking handler, e.g.:
serverWebSocket.handler(buffer -> {
    String str = buffer.getString(0, buffer.length());
    long handler_pid = Thread.currentThread().getId();
    log.info("Got ws msg: " + str);
    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
    vertx.executeBlocking(future -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        serverWebSocket.writeFinalTextFrame(res);
        future.complete();
    });
});
Now all the blocking code will run on a thread from the worker pool, which you can configure as already shown in the other replies.
If you would like to avoid writing all these executeBlocking handlers, and you know you will need to make several blocking calls, you should consider using a worker verticle instead, since these scale at the event-bus level.
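A sketch of such a deployment (the verticle name is illustrative):
vertx.deployVerticle("com.example.BlockingWorkVerticle",
        new DeploymentOptions().setWorker(true).setInstances(4)); // its handlers run on the worker pool, off the event loop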
A final note on multithreading: if you use multiple threads your server will not be as efficient as a single thread. For example, it won't be able to handle 10 million websockets, since 10 million threads, even on a modern machine (we're in 2016), will bring your OS scheduler to its knees.

Java ExecutorService is closing even though its threads are running

I am using an ExecutorService to generate files from a database, with JDBC and core Java to get the table data into files.
After creating the ExecutorService with 10 threads, I submit 60 tasks in a for loop to generate 60 files in parallel. This works fine with small data and tables with few columns. But in the case of a huge file, or of tables having more columns, the thread working on that table's data stops without leaving any information in the log, while the other threads complete.
ExecutorService executor = Executors.newFixedThreadPool(THREAD_COUNT);
for (String filename : filenames) {
    EachFileThread worker = new EachFileThread(destdir, converter, filename, this);
    executor.execute(worker);
}
executor.shutdown();
Inside EachFileThread I read the XML to get the columns and the table, form a query, execute it, and format and write the data to a file:
forTable = (FileData) converter.convertFromXMLToObject(filename + ".xml");
String query = getQuery(forTable);
statement = connection.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
resultSet = statement.executeQuery(query);
resultSet.setFetchSize(3000);
WriteData(resultSet, filepath, forTable); // formats the data from the DB and writes it to the file
The problem is that you are not waiting for all the jobs to finish what they were doing. As @msandiford suggested in the comment, you should add a call to awaitTermination(..) after calling shutdown(), as in the sample shutdownAndAwaitTermination() method at https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html
For example, you can do it like so:
ExecutorService executor = Executors.newFixedThreadPool(THREAD_COUNT);
for (String filename : filenames) {
    EachFileThread worker = new EachFileThread(destdir, converter, filename, this);
    executor.execute(worker);
}
executor.shutdown();
try {
    // Wait a while for existing tasks to terminate
    if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
        executor.shutdownNow(); // Cancel currently executing tasks
        // Wait a while for tasks to respond to being cancelled
        if (!executor.awaitTermination(60, TimeUnit.SECONDS))
            System.err.println("Executor did not terminate");
    }
} catch (InterruptedException ie) {
    // (Re-)Cancel if current thread also interrupted
    executor.shutdownNow();
    // Preserve interrupt status
    Thread.currentThread().interrupt();
}

Hibernate Search synchronous execution in main thread

It seems that Hibernate Search's synchronous execution uses threads other than the calling thread for parallel execution.
How do I make the Hibernate Search executions run serially in the calling thread?
The problem seems to be in the org.hibernate.search.backend.impl.lucene.QueueProcessors class:
private void runAllWaiting() throws InterruptedException {
    List<Future<Object>> futures = new ArrayList<Future<Object>>(dpProcessors.size());
    // execute all work in parallel on each DirectoryProvider;
    // each DP has its own ExecutorService.
    for (PerDPQueueProcessor process : dpProcessors.values()) {
        ExecutorService executor = process.getOwningExecutor();
        // wrap each Runnable in a Future
        FutureTask<Object> f = new FutureTask<Object>(process, null);
        futures.add(f);
        executor.execute(f);
    }
    // and then wait for all tasks to be finished:
    for (Future<Object> f : futures) {
        if (!f.isDone()) {
            try {
                f.get();
            }
            catch (CancellationException ignore) {
                // ignored, as in java.util.concurrent.AbstractExecutorService.invokeAll(Collection<Callable<T>> tasks)
            }
            catch (ExecutionException error) {
                // rethrow cause to serviced thread - this could hide more exceptions:
                Throwable cause = error.getCause();
                throw new SearchException(cause);
            }
        }
    }
}
A serial, synchronous execution would happen in the calling thread and would expose context information, such as authentication information, to the underlying DirectoryProvider.
Very old question, but I might as well answer it...
Hibernate Search does that to ensure single-threaded access to the Lucene IndexWriter for a directory (which is required by Lucene). I imagine the use of a single-threaded executor per directory was a way of dealing with the queueing problem.
If you want it all to run in the calling thread, you need to re-implement the LuceneBackendQueueProcessorFactory and bind it to hibernate.search.worker.backend in your Hibernate properties. Not trivial, but doable.
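The binding would then look something like this in your Hibernate properties (the factory class name is hypothetical, standing in for your re-implementation):
hibernate.search.worker.backend = com.example.InThreadBackendQueueProcessorFactory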
