Synchronize Threads triggered by ScheduledExecutorService - java

I have the following test project:
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class myThreadTest {
private static final Logger log = LoggerFactory.getLogger(myThreadTest.class);
private ScheduledExecutorService executorService1 = Executors.newSingleThreadScheduledExecutor();
private ScheduledExecutorService executorService2 = Executors.newSingleThreadScheduledExecutor();
private Future<?> task1;
private Future<?> task2;
private class Task1 implements Runnable {
@Override
public synchronized void run() {
log.debug("-----------------------");
for (int i = 0; i < 100; i++) {
log.debug("{} Hello from Task 1",i);
try {
Thread.sleep(2);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
log.debug("-----------------------");
}
}
private class Task2 implements Runnable {
@Override
public synchronized void run() {
log.debug("********************");
for (int i = 0; i < 100; i++) {
log.debug("{} Hello from Task 2",i);
}
log.debug("********************");
}
}
private void start() {
task1 = executorService1.scheduleAtFixedRate(new Task1(), 1, 500, TimeUnit.MILLISECONDS);
task2 = executorService2.scheduleAtFixedRate(new Task2(), 1, 505, TimeUnit.MILLISECONDS);
}
public void stop() throws InterruptedException {
if (task1 != null) {
task1.cancel(false);
}
if (task2 != null) {
task2.cancel(false);
}
}
public static void main(String[] args) {
myThreadTest mtt = new myThreadTest();
mtt.start();
}
}
I have 2 tasks running in different threads. Although the run methods are synchronized, I can see in the debug log that the threads interleave while executing a method - why is this?
...
17:56:09,593 pool-2-thread-1 DEBUG myThreadTest:39 - ********************
17:56:09,593 pool-1-thread-1 DEBUG myThreadTest:21 - -----------------------
17:56:09,600 pool-2-thread-1 DEBUG myThreadTest:41 - 0 Hello from Task 2
17:56:09,600 pool-2-thread-1 DEBUG myThreadTest:41 - 1 Hello from Task 2
17:56:09,600 pool-1-thread-1 DEBUG myThreadTest:23 - 0 Hello from Task 1
17:56:09,600 pool-2-thread-1 DEBUG myThreadTest:41 - 2 Hello from Task 2
17:56:09,600 pool-2-thread-1 DEBUG myThreadTest:41 - 3 Hello from Task 2
17:56:09,601 pool-2-thread-1 DEBUG myThreadTest:41 - 4 Hello from Task 2
17:56:09,601 pool-2-thread-1 DEBUG myThreadTest:41 - 5 Hello from Task 2
17:56:09,601 pool-2-thread-1 DEBUG myThreadTest:41 - 6 Hello from Task 2
17:56:09,601 pool-2-thread-1 DEBUG myThreadTest:41 - 7 Hello from Task 2
17:56:09,602 pool-2-thread-1 DEBUG myThreadTest:41 - 8 Hello from Task 2
17:56:09,602 pool-2-thread-1 DEBUG myThreadTest:41 - 9 Hello from Task 2
17:56:09,602 pool-2-thread-1 DEBUG myThreadTest:41 - 10 Hello from Task 2
17:56:09,603 pool-2-thread-1 DEBUG myThreadTest:41 - 11 Hello from Task 2
17:56:09,603 pool-1-thread-1 DEBUG myThreadTest:23 - 1 Hello from Task 1
17:56:09,603 pool-2-thread-1 DEBUG myThreadTest:41 - 12 Hello from Task 2
17:56:09,606 pool-2-thread-1 DEBUG myThreadTest:41 - 13 Hello from Task 2
17:56:09,607 pool-2-thread-1 DEBUG myThreadTest:41 - 14 Hello from Task 2
17:56:09,607 pool-2-thread-1 DEBUG myThreadTest:41 - 15 Hello from Task 2
17:56:09,607 pool-2-thread-1 DEBUG myThreadTest:41 - 16 Hello from Task 2
17:56:09,608 pool-2-thread-1 DEBUG myThreadTest:41 - 17 Hello from Task 2
17:56:09,608 pool-2-thread-1 DEBUG myThreadTest:41 - 18 Hello from Task 2
17:56:09,608 pool-2-thread-1 DEBUG myThreadTest:41 - 19 Hello from Task 2
17:56:09,608 pool-2-thread-1 DEBUG myThreadTest:41 - 20 Hello from Task 2
17:56:09,609 pool-1-thread-1 DEBUG myThreadTest:23 - 2 Hello from Task 1
17:56:09,609 pool-2-thread-1 DEBUG myThreadTest:41 - 21 Hello from Task 2
17:56:09,609 pool-2-thread-1 DEBUG myThreadTest:41 - 22 Hello from Task 2
17:56:09,609 pool-2-thread-1 DEBUG myThreadTest:41 - 23 Hello from Task 2
17:56:09,610 pool-2-thread-1 DEBUG myThreadTest:41 - 24 Hello from Task 2
17:56:09,610 pool-2-thread-1 DEBUG myThreadTest:41 - 25 Hello from Task 2
17:56:09,610 pool-2-thread-1 DEBUG myThreadTest:41 - 26 Hello from Task 2
17:56:09,611 pool-2-thread-1 DEBUG myThreadTest:41 - 27 Hello from Task 2
17:56:09,611 pool-2-thread-1 DEBUG myThreadTest:41 - 28 Hello from Task 2
17:56:09,611 pool-2-thread-1 DEBUG myThreadTest:41 - 29 Hello from Task 2
17:56:09,611 pool-2-thread-1 DEBUG myThreadTest:41 - 30 Hello from Task 2
17:56:09,611 pool-1-thread-1 DEBUG myThreadTest:23 - 3 Hello from Task 1
17:56:09,612 pool-2-thread-1 DEBUG myThreadTest:41 - 31 Hello from Task 2
17:56:09,612 pool-2-thread-1 DEBUG myThreadTest:41 - 32 Hello from Task 2
17:56:09,612 pool-2-thread-1 DEBUG myThreadTest:41 - 33 Hello from Task 2
...

Both new Task1() and new Task2() are different objects. A synchronized instance method locks on the object it is invoked on (this), so each task's run() acquires its own monitor and the two tasks never block each other. The interleaving in the log is expected.
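If the goal were to keep the two tasks from interleaving, they would have to synchronize on one shared object rather than on themselves. A minimal sketch (the sharedLock field is an addition for illustration, not part of the original code):

public class myThreadTest {
    // One lock object shared by both tasks; only one run() body can hold it at a time.
    private final Object sharedLock = new Object();

    private class Task1 implements Runnable {
        @Override
        public void run() {
            synchronized (sharedLock) {      // instead of a synchronized instance method
                // ... the original Task1 loop and logging ...
            }
        }
    }

    // Task2.run() would wrap its body in synchronized (sharedLock) in the same way,
    // so whichever task starts first finishes its loop before the other begins.
}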

Related

Parallel requests from one client processed in series in RSocket

I expect all invocations of the server to be processed in parallel, but that is not the case.
Here is a simple example.
RSocket version: 1.1.0
Server
public class ServerApp {
private static final Logger log = LoggerFactory.getLogger(ServerApp.class);
public static void main(String[] args) throws InterruptedException {
RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
Mono.fromCallable(() -> {
log.debug("Start of my business logic");
sleepSeconds(5);
return DefaultPayload.create("OK");
})))
.bind(WebsocketServerTransport.create(15000))
.block();
log.debug("Server started");
TimeUnit.MINUTES.sleep(30);
}
private static void sleepSeconds(int sec) {
try {
TimeUnit.SECONDS.sleep(sec);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
Client
public class ClientApp {
private static final Logger log = LoggerFactory.getLogger(ClientApp.class);
public static void main(String[] args) throws InterruptedException {
RSocket client = RSocketConnector.create()
.connect(WebsocketClientTransport.create(15000))
.block();
long start1 = System.currentTimeMillis();
client.requestResponse(DefaultPayload.create("Request 1"))
.doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start1))
.subscribe();
long start2 = System.currentTimeMillis();
client.requestResponse(DefaultPayload.create("Request 2"))
.doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start2))
.subscribe();
TimeUnit.SECONDS.sleep(20);
}
}
In the client logs, we can see that both requests were sent at the same time, and both responses were received at the same time after 10 seconds (each request was processed in 5 seconds).
In the server logs, we can see that the requests were executed sequentially and not in parallel.
Could you please help me understand this behavior?
1. Why did we receive the first response after 10 seconds and not 5?
2. How do I create the server correctly if I want all requests to be processed in parallel?
If I replace Mono.fromCallable with Mono.fromFuture(CompletableFuture.supplyAsync(() -> myBusinessLogic(), executorService)), it resolves 1.
If I replace Mono.fromCallable with Mono.delay(Duration.ZERO).map(ignore -> myBusinessLogic()), it resolves 1. and 2.
If I replace Mono.fromCallable with Mono.create(sink -> sink.success(myBusinessLogic())), it does not resolve my issues.
Client logs:
2021-07-16 10:39:46,880 DEBUG [reactor-tcp-nio-1] [/] - sending ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:46,952 DEBUG [main] [/] - sending ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:46,957 DEBUG [main] [/] - sending ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,043 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10120ms
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10094ms
Server Logs:
2021-07-16 10:39:46,965 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:47,021 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:47,027 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:52,037 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:57,039 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
You shouldn't mix asynchronous code like Reactive Mono operations with blocking code like
private static void sleepSeconds(int sec) {
try {
TimeUnit.SECONDS.sleep(sec);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
I suspect the central issue here is that a framework like rsocket-java doesn't want to run everything on new threads, at the cost of excessive context switching, so it generally relies on you to run long-running CPU or I/O operations on an appropriate scheduler yourself.
You should look at the various async delay operators instead, e.g. https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#delayElement-java.time.Duration-
If your delay is meant to simulate a long-running operation, then you should look at subscribing on a different scheduler like https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html#boundedElastic--
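As a rough sketch of that second option (reusing the ServerApp from the question, and assuming reactor.core.scheduler.Schedulers is imported), the only change is the subscribeOn call, which moves the blocking callable off the Netty event-loop thread:

RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
        Mono.fromCallable(() -> {
            log.debug("Start of my business logic");
            sleepSeconds(5);                        // stands in for the blocking work
            return DefaultPayload.create("OK");
        })
        // run the callable on a scheduler intended for blocking tasks,
        // so concurrent requests no longer queue up behind one another
        .subscribeOn(Schedulers.boundedElastic())))
    .bind(WebsocketServerTransport.create(15000))
    .block();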

Trying to understand threads in java in this case

Hi, I'm trying to understand my code. I have used the synchronized keyword on a method so that only one thread can use it at a time.
The main class:
public class Principal {
public static void main(String[] args) {
Messenger messenger = new Messenger();
Hilo t1 = new Hilo(messenger);
Hilo t2 = new Hilo(messenger);
t1.start();
t2.start();
}
}
The messenger class:
public class Messenger {
private String msg = "hello";
synchronized public void sendMessage() {
System.out.println(msg + " from " + Thread.currentThread().getName());
}
}
And the thread class:
public class Hilo extends Thread {
private Messenger messenger;
public Hilo(Messenger messenger) {
this.messenger = messenger;
}
@Override
public void run() {
while (true) {
messenger.sendMessage();
}
}
}
I had this output:
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-0
hello from Thread-0
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
...
But I was expected this:
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
...
I've been thinking about it but I can't understand the failure.
Please add your opinion.
Thanks in advance.
Your resulting output got about 18 cycles done with about 3 context switches. Your "expected" output got 14 cycles done with 14 context switches. It seems the behavior you got is much better than you expected.
The question is why you would expect such inefficient behavior. Alternation is about the worst possible performance given that more context switches are needed, more caches are blown out, and so on. No sensible implementation would do that if it could find any way to avoid it.
Generally speaking, you want to keep a thread running as long as possible because context switches have cost. A good implementation balances performance with other priorities, sure, but it doesn't give up performance for no good reason.
Always use timestamps to verify order of happening
Actually, System.out.println makes a poor mechanism for testing concurrency. The calls do not actually get output in the order they were called. In other words, never rely on the order of appearance in System.out as representing the actual order of happening.
You can see this behavior by including calls to Instant.now() or System.nanoTime(). I suggest always adding such calls in almost any kind of testing/debugging where order matters. If you look carefully at the microseconds, you will see later items appearing on your console before earlier items.
Even better suggestion: put your outputs into a thread-safe collection, and dump to console at the end of your test.
Executor service
In modern Java, we rarely need to address the Thread class directly.
Instead, use an executor service. The service is backed by a pool of one or more threads. The service handles creating and recreating threads as needed, depending on its promised behavior.
Write your task as a Runnable, an object with a run method, or a Callable with a call method.
Example code
Here is my revised version of your code.
Our singleton Messenger class. One instance to be shared across two threads.
Tip: Call Thread#getId rather than Thread#getName to identify a thread. Virtual threads in the future Project Loom may lack names by default.
package work.basil.example.order;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class Messenger
{
final private String msg = "hello";
final List < String > outputs = new ArrayList <>( 100 ); // Need not be thread-safe for this demo, as we only touch it from within our `synchronized` method `sendMessage`.
synchronized public void sendMessage ( )
{
String output = this.msg + " from thread id # " + Thread.currentThread().getId() + " at " + Instant.now();
this.outputs.add( output );
}
}
Our Runnable task class. It keeps hold of a Messenger object to be used on each run execution.
Tip: Rather than running endlessly as seen in your code with while ( true ), write while ( ! Thread.interrupted() ) to run until the interrupted flag has been set for that thread. The ExecutorService#shutdownNow method will likely set that flag for us, enabling our threads to shut themselves down.
package work.basil.example.order;
import java.util.Objects;
public class Hilo implements Runnable
{
// Member fields
final private String id;
final private Messenger messenger;
// Constructors
public Hilo ( final String id , final Messenger messenger )
{
this.id = id;
this.messenger = Objects.requireNonNull( messenger );
}
@Override
public void run ( )
{
while ( ! Thread.interrupted() )
{
this.messenger.sendMessage();
}
}
}
An app class to run our demo.
package work.basil.example.order;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class App
{
public static void main ( String[] args )
{
App app = new App();
app.demo();
}
private void demo ( )
{
Messenger messenger = new Messenger();
ExecutorService executorService = Executors.newFixedThreadPool( 2 );
System.out.println( "INFO - Submitting tasks. " + Instant.now() );
executorService.submit( new Hilo( "Alice" , messenger ) );
executorService.submit( new Hilo( "Bob" , messenger ) );
executorService.shutdown();
try
{
// Wait a while for existing tasks to terminate
if ( ! executorService.awaitTermination( 15 , TimeUnit.MILLISECONDS ) )
{
executorService.shutdownNow(); // Set "interrupted" flag on threads currently executing tasks.
// Wait a while for tasks to respond to interrupted flag being set.
if ( ! executorService.awaitTermination( 1 , TimeUnit.SECONDS ) )
System.err.println( "WARN - executorService did not terminate." );
}
}
catch ( InterruptedException e ) { e.printStackTrace(); }
System.out.println( "INFO - Done with demo. Results array appears next. " + Instant.now() );
int nthOutput = 0;
for ( String output : messenger.outputs )
{
nthOutput++;
System.out.println( "output " + nthOutput + " = " + output );
}
}
}
When run on my Mac mini Intel with six real cores and no Hyper-Threading using early-access Java 17, I see dozens of outputs at a time per thread. Notice in this sample below how the first 3 are thread ID 14, followed by # 4-71 being all thread ID 15.
As the Answer by David Schwartz explains, letting a thread run a while is usually more efficient.
INFO - Submitting tasks. 2021-03-23T02:46:58.490916Z
INFO - Done with demo. Results array appears next. 2021-03-23T02:46:58.527018Z
output 1 = hello from thread id # 14 at 2021-03-23T02:46:58.509450Z
output 2 = hello from thread id # 14 at 2021-03-23T02:46:58.522884Z
output 3 = hello from thread id # 14 at 2021-03-23T02:46:58.522923Z
output 4 = hello from thread id # 15 at 2021-03-23T02:46:58.522956Z
output 5 = hello from thread id # 15 at 2021-03-23T02:46:58.523011Z
output 6 = hello from thread id # 15 at 2021-03-23T02:46:58.523041Z
output 7 = hello from thread id # 15 at 2021-03-23T02:46:58.523077Z
output 8 = hello from thread id # 15 at 2021-03-23T02:46:58.523106Z
output 9 = hello from thread id # 15 at 2021-03-23T02:46:58.523134Z
output 10 = hello from thread id # 15 at 2021-03-23T02:46:58.523165Z
output 11 = hello from thread id # 15 at 2021-03-23T02:46:58.523197Z
output 12 = hello from thread id # 15 at 2021-03-23T02:46:58.523227Z
output 13 = hello from thread id # 15 at 2021-03-23T02:46:58.523254Z
output 14 = hello from thread id # 15 at 2021-03-23T02:46:58.523282Z
output 15 = hello from thread id # 15 at 2021-03-23T02:46:58.523312Z
output 16 = hello from thread id # 15 at 2021-03-23T02:46:58.523343Z
output 17 = hello from thread id # 15 at 2021-03-23T02:46:58.523381Z
output 18 = hello from thread id # 15 at 2021-03-23T02:46:58.523410Z
output 19 = hello from thread id # 15 at 2021-03-23T02:46:58.523436Z
output 20 = hello from thread id # 15 at 2021-03-23T02:46:58.523466Z
output 21 = hello from thread id # 15 at 2021-03-23T02:46:58.523495Z
output 22 = hello from thread id # 15 at 2021-03-23T02:46:58.523522Z
output 23 = hello from thread id # 15 at 2021-03-23T02:46:58.523550Z
output 24 = hello from thread id # 15 at 2021-03-23T02:46:58.523583Z
output 25 = hello from thread id # 15 at 2021-03-23T02:46:58.523612Z
output 26 = hello from thread id # 15 at 2021-03-23T02:46:58.523640Z
output 27 = hello from thread id # 15 at 2021-03-23T02:46:58.523668Z
output 28 = hello from thread id # 15 at 2021-03-23T02:46:58.523696Z
output 29 = hello from thread id # 15 at 2021-03-23T02:46:58.523760Z
output 30 = hello from thread id # 15 at 2021-03-23T02:46:58.523798Z
output 31 = hello from thread id # 15 at 2021-03-23T02:46:58.523828Z
output 32 = hello from thread id # 15 at 2021-03-23T02:46:58.523858Z
output 33 = hello from thread id # 15 at 2021-03-23T02:46:58.523883Z
output 34 = hello from thread id # 15 at 2021-03-23T02:46:58.523915Z
output 35 = hello from thread id # 15 at 2021-03-23T02:46:58.523943Z
output 36 = hello from thread id # 15 at 2021-03-23T02:46:58.523971Z
output 37 = hello from thread id # 15 at 2021-03-23T02:46:58.523996Z
output 38 = hello from thread id # 15 at 2021-03-23T02:46:58.524020Z
output 39 = hello from thread id # 15 at 2021-03-23T02:46:58.524049Z
output 40 = hello from thread id # 15 at 2021-03-23T02:46:58.524077Z
output 41 = hello from thread id # 15 at 2021-03-23T02:46:58.524102Z
output 42 = hello from thread id # 15 at 2021-03-23T02:46:58.524128Z
output 43 = hello from thread id # 15 at 2021-03-23T02:46:58.524156Z
output 44 = hello from thread id # 15 at 2021-03-23T02:46:58.524181Z
output 45 = hello from thread id # 15 at 2021-03-23T02:46:58.524212Z
output 46 = hello from thread id # 15 at 2021-03-23T02:46:58.524239Z
output 47 = hello from thread id # 15 at 2021-03-23T02:46:58.524262Z
output 48 = hello from thread id # 15 at 2021-03-23T02:46:58.524284Z
output 49 = hello from thread id # 15 at 2021-03-23T02:46:58.524308Z
output 50 = hello from thread id # 15 at 2021-03-23T02:46:58.524336Z
output 51 = hello from thread id # 15 at 2021-03-23T02:46:58.524359Z
output 52 = hello from thread id # 15 at 2021-03-23T02:46:58.524381Z
output 53 = hello from thread id # 15 at 2021-03-23T02:46:58.524405Z
output 54 = hello from thread id # 15 at 2021-03-23T02:46:58.524428Z
output 55 = hello from thread id # 15 at 2021-03-23T02:46:58.524454Z
output 56 = hello from thread id # 15 at 2021-03-23T02:46:58.524477Z
output 57 = hello from thread id # 15 at 2021-03-23T02:46:58.524499Z
output 58 = hello from thread id # 15 at 2021-03-23T02:46:58.524521Z
output 59 = hello from thread id # 15 at 2021-03-23T02:46:58.524544Z
output 60 = hello from thread id # 15 at 2021-03-23T02:46:58.524570Z
output 61 = hello from thread id # 15 at 2021-03-23T02:46:58.524591Z
output 62 = hello from thread id # 15 at 2021-03-23T02:46:58.524613Z
output 63 = hello from thread id # 15 at 2021-03-23T02:46:58.524634Z
output 64 = hello from thread id # 15 at 2021-03-23T02:46:58.524659Z
output 65 = hello from thread id # 15 at 2021-03-23T02:46:58.524685Z
output 66 = hello from thread id # 15 at 2021-03-23T02:46:58.524710Z
output 67 = hello from thread id # 15 at 2021-03-23T02:46:58.524731Z
output 68 = hello from thread id # 15 at 2021-03-23T02:46:58.524752Z
output 69 = hello from thread id # 15 at 2021-03-23T02:46:58.524780Z
output 70 = hello from thread id # 15 at 2021-03-23T02:46:58.524801Z
output 71 = hello from thread id # 15 at 2021-03-23T02:46:58.524826Z
output 72 = hello from thread id # 14 at 2021-03-23T02:46:58.524852Z
output 73 = hello from thread id # 14 at 2021-03-23T02:46:58.524902Z
output 74 = hello from thread id # 14 at 2021-03-23T02:46:58.524929Z
output 75 = hello from thread id # 14 at 2021-03-23T02:46:58.524954Z
output 76 = hello from thread id # 14 at 2021-03-23T02:46:58.524975Z
output 77 = hello from thread id # 14 at 2021-03-23T02:46:58.524998Z
output 78 = hello from thread id # 14 at 2021-03-23T02:46:58.525021Z
output 79 = hello from thread id # 14 at 2021-03-23T02:46:58.525042Z
output 80 = hello from thread id # 14 at 2021-03-23T02:46:58.525075Z
output 81 = hello from thread id # 14 at 2021-03-23T02:46:58.525095Z
output 82 = hello from thread id # 14 at 2021-03-23T02:46:58.525115Z
output 83 = hello from thread id # 14 at 2021-03-23T02:46:58.525138Z
output 84 = hello from thread id # 14 at 2021-03-23T02:46:58.525159Z
output 85 = hello from thread id # 14 at 2021-03-23T02:46:58.525194Z
output 86 = hello from thread id # 14 at 2021-03-23T02:46:58.525215Z
output 87 = hello from thread id # 14 at 2021-03-23T02:46:58.525241Z
output 88 = hello from thread id # 14 at 2021-03-23T02:46:58.525277Z
output 89 = hello from thread id # 14 at 2021-03-23T02:46:58.525298Z
output 90 = hello from thread id # 14 at 2021-03-23T02:46:58.525319Z
output 91 = hello from thread id # 14 at 2021-03-23T02:46:58.525339Z
output 92 = hello from thread id # 14 at 2021-03-23T02:46:58.525359Z
output 93 = hello from thread id # 14 at 2021-03-23T02:46:58.525381Z
output 94 = hello from thread id # 14 at 2021-03-23T02:46:58.525401Z
output 95 = hello from thread id # 14 at 2021-03-23T02:46:58.525422Z
output 96 = hello from thread id # 14 at 2021-03-23T02:46:58.525452Z
output 97 = hello from thread id # 14 at 2021-03-23T02:46:58.525474Z
output 98 = hello from thread id # 14 at 2021-03-23T02:46:58.525496Z
output 99 = hello from thread id # 14 at 2021-03-23T02:46:58.525515Z
output 100 = hello from thread id # 14 at 2021-03-23T02:46:58.525533Z
output 101 = hello from thread id # 14 at 2021-03-23T02:46:58.525555Z
output 102 = hello from thread id # 14 at 2021-03-23T02:46:58.525581Z
output 103 = hello from thread id # 14 at 2021-03-23T02:46:58.525603Z
output 104 = hello from thread id # 14 at 2021-03-23T02:46:58.525625Z
output 105 = hello from thread id # 14 at 2021-03-23T02:46:58.525645Z
output 106 = hello from thread id # 14 at 2021-03-23T02:46:58.525664Z
output 107 = hello from thread id # 14 at 2021-03-23T02:46:58.525686Z
output 108 = hello from thread id # 14 at 2021-03-23T02:46:58.525705Z
output 109 = hello from thread id # 14 at 2021-03-23T02:46:58.525723Z
output 110 = hello from thread id # 14 at 2021-03-23T02:46:58.525741Z
output 111 = hello from thread id # 14 at 2021-03-23T02:46:58.525758Z
output 112 = hello from thread id # 14 at 2021-03-23T02:46:58.525783Z
output 113 = hello from thread id # 14 at 2021-03-23T02:46:58.525801Z
output 114 = hello from thread id # 14 at 2021-03-23T02:46:58.525818Z
output 115 = hello from thread id # 14 at 2021-03-23T02:46:58.525837Z
output 116 = hello from thread id # 14 at 2021-03-23T02:46:58.525855Z
output 117 = hello from thread id # 14 at 2021-03-23T02:46:58.525875Z
output 118 = hello from thread id # 14 at 2021-03-23T02:46:58.525897Z
output 119 = hello from thread id # 14 at 2021-03-23T02:46:58.525913Z
output 120 = hello from thread id # 15 at 2021-03-23T02:46:58.525931Z
output 121 = hello from thread id # 15 at 2021-03-23T02:46:58.525965Z
output 122 = hello from thread id # 15 at 2021-03-23T02:46:58.526002Z
output 123 = hello from thread id # 15 at 2021-03-23T02:46:58.526023Z
output 124 = hello from thread id # 15 at 2021-03-23T02:46:58.526050Z
output 125 = hello from thread id # 15 at 2021-03-23T02:46:58.526075Z
output 126 = hello from thread id # 15 at 2021-03-23T02:46:58.526095Z
output 127 = hello from thread id # 15 at 2021-03-23T02:46:58.526135Z
output 128 = hello from thread id # 15 at 2021-03-23T02:46:58.526169Z
output 129 = hello from thread id # 15 at 2021-03-23T02:46:58.526233Z
output 130 = hello from thread id # 15 at 2021-03-23T02:46:58.526260Z
output 131 = hello from thread id # 15 at 2021-03-23T02:46:58.526279Z
output 132 = hello from thread id # 15 at 2021-03-23T02:46:58.526297Z
output 133 = hello from thread id # 15 at 2021-03-23T02:46:58.526315Z
output 134 = hello from thread id # 15 at 2021-03-23T02:46:58.526335Z
output 135 = hello from thread id # 15 at 2021-03-23T02:46:58.526352Z
output 136 = hello from thread id # 15 at 2021-03-23T02:46:58.526370Z
output 137 = hello from thread id # 15 at 2021-03-23T02:46:58.526389Z
output 138 = hello from thread id # 15 at 2021-03-23T02:46:58.526405Z
output 139 = hello from thread id # 15 at 2021-03-23T02:46:58.526424Z
output 140 = hello from thread id # 15 at 2021-03-23T02:46:58.526441Z
output 141 = hello from thread id # 14 at 2021-03-23T02:46:58.526465Z
output 142 = hello from thread id # 14 at 2021-03-23T02:46:58.526500Z
output 143 = hello from thread id # 14 at 2021-03-23T02:46:58.526524Z
output 144 = hello from thread id # 14 at 2021-03-23T02:46:58.526552Z
output 145 = hello from thread id # 14 at 2021-03-23T02:46:58.526570Z
output 146 = hello from thread id # 14 at 2021-03-23T02:46:58.526588Z
output 147 = hello from thread id # 14 at 2021-03-23T02:46:58.526605Z
output 148 = hello from thread id # 14 at 2021-03-23T02:46:58.526621Z
output 149 = hello from thread id # 14 at 2021-03-23T02:46:58.526642Z
output 150 = hello from thread id # 14 at 2021-03-23T02:46:58.526658Z
output 151 = hello from thread id # 14 at 2021-03-23T02:46:58.526674Z
output 152 = hello from thread id # 14 at 2021-03-23T02:46:58.526696Z
output 153 = hello from thread id # 15 at 2021-03-23T02:46:58.526715Z
Your expectation seems to be that the thread scheduler was going to give equal, round-robin time slices to each thread.
The Java language, however, offers no guarantees with respect to thread scheduling, leaving that responsibility instead to the underlying operating system. If the threads in your application aren't performing time-sensitive work, you can sort of file this away as an implementation detail and just assume the scheduler is going to try to give some fair distribution of running time to each thread (assuming, of course, that the thread logic itself doesn't starve the other runnable threads).
Note also that you can assign a priority to threads, but again, this doesn't offer any guarantee of execution order and is only declarative of your intent since the scheduling is ultimately left up to the underlying OS.
The Java thread scheduling algorithm decides which thread should be running at any given point in time, and it is only concerned with threads in the Runnable state.
Now comes the role of thread priority. A Java thread can have an integer value from 1 to 10 as its priority. You can set a thread's priority with threadName.setPriority(value).
If not specified, a thread takes the priority value 5 by default.
So, in your case, both threads have the same priority value of 5.
As for the Java scheduling rule: at any given point in time, the highest-priority runnable thread should be running. But this rule is not guaranteed, for a few reasons (e.g. the JVM tries to avoid starvation).
Now, since you have 2 threads of the same priority level, the Java thread scheduler picks one thread (let's say t1) and executes it. The other thread (t2) will get a chance to execute when one of the following things happens:
t1 completes whatever is inside its run method and has to compete again to execute - but you can't guarantee the next chance will be t2's (this is what is happening in your case);
t1 calls Thread.yield(), giving up the processor and giving a chance to another thread of the same priority - in this case t2 (but again, this can't always be guaranteed);
t1 is preempted by a higher-priority thread;
the system uses time slicing and t1's time slice expires (again, there is no guarantee that t2 will be next).
If you really want to get the desired 0,1,0,1 "fair" output here, you can add logic to the synchronized method in the Messenger class. Something like this:
class Messenger {
private String msg = "hello";
private boolean state; //define a state variable
synchronized public void sendMessage() {
while(state == false){
if(!Thread.currentThread().getName().equals("Thread-0")){ //if not thread-0, then go to wait queue
try {
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
else{//if thread-0 , print and change state variable value.
System.out.println(msg + " from " + Thread.currentThread().getName());
state = true;
notifyAll();
}
}
if(!Thread.currentThread().getName().equals("Thread-1")){//if not thread-1, then go to wait queue
try {
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
else{//if thread-1 , print and change state variable value.
System.out.println(msg + " from " + Thread.currentThread().getName());
state = false;
notifyAll();
}
}
}
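A variation on the same idea that does not depend on the default thread names: give each Hilo its own index (0 or 1) and let a turn field decide who may print. The int id parameter is an addition for illustration only; each Hilo would call messenger.sendMessage(id) inside its loop.

class Messenger {
    private final String msg = "hello";
    private int turn = 0; // index of the thread whose turn it is

    synchronized public void sendMessage(int id) throws InterruptedException {
        while (turn != id) {
            wait();                       // not our turn yet: release the lock and wait
        }
        System.out.println(msg + " from " + Thread.currentThread().getName());
        turn = 1 - id;                    // hand the turn to the other thread
        notifyAll();
    }
}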

Netty: Best way to create stateful, synchronous outbound pipeline?

Using Netty, I'm receiving multiple asynchronous messages from a framework on multiple threads. I need to send these messages to a network device (UDP) which uses a synchronous, stateful protocol. So, I need to use a state variable, and only allow one message to be sent at a time, which should only happen when the client is in the "idle" state.
In addition, the state machine will need to send its own internally generated messages - retries - ahead of whatever is waiting in the queue. For this use case I know how to inject messages into the pipeline, which would work as long as outbound messages can be held at the head of the pipeline.
Any idea how to control the output using a client state?
TIA
I've come up with a proof-of-concept / proposed solution, although hopefully someone knows a better way. While this works as intended, it introduces a number of undesirable side effects that would need to be solved for.
Run a Blocking Write Handler on its own thread.
Put the thread to sleep if the state isn't idle, and wake it when it becomes Idle.
If the State was Idle, or becomes Idle, send the message on its way.
This is the bootstrap I used
public class UDPConnector {
public void init() {
this.workerGroup = EPOLL ? new EpollEventLoopGroup() : new NioEventLoopGroup();
this.blockingExecutor = new DefaultEventExecutor();
bootstrap = new Bootstrap()
.channel(EPOLL ? EpollDatagramChannel.class : NioDatagramChannel.class)
.group(workerGroup)
.handler(new ChannelInitializer<DatagramChannel>() {
@Override
public void initChannel(DatagramChannel ch) throws Exception {
ch.pipeline().addLast("logging", new LoggingHandler());
ch.pipeline().addLast("encode", new RequestEncoderNetty());
ch.pipeline().addLast("decode", new ResponseDecoderNetty());
ch.pipeline().addLast("ack", new AckHandler());
ch.pipeline().addLast(blockingExecutor, "blocking", new BlockingOutboundHandler());
}
});
}
}
The Blocking Outbound Handler looks like this
public class BlockingOutboundHandler extends ChannelOutboundHandlerAdapter {
private final Logger logger = LoggerFactory.getLogger(BlockingOutboundHandler.class);
private final AtomicBoolean isIdle = new AtomicBoolean(true);
public void setIdle() {
synchronized (this.isIdle) {
logger.debug("setIdle() called");
this.isIdle.set(true);
this.isIdle.notify();
}
}
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
synchronized (isIdle) {
if (!isIdle.get()) {
logger.debug("write(): I/O State not Idle, Waiting");
isIdle.wait();
logger.debug("write(): Finished waiting on I/O State");
}
isIdle.set(false);
}
logger.debug("write(): {}", msg.toString());
ctx.write(msg, promise);
}
}
Finally, when the state machine transitions to idle, the block is released:
Optional.ofNullable((BlockingOutboundHandler) ctx.pipeline().get("blocking")).ifPresent(h -> h.setIdle());
All of this results in the outbound messages being synchronized with the synchronous, stateful responses from the device.
Of course, I'd prefer not to have to deal with additional threads and the synchronization that comes with them. I'm also not sure what kind of "yet to be discovered" issues I'm going to run into doing it this way. It does have the side effect of causing the main handler to be visited by multiple threads, which just creates a new problem which needs to be solved.
Also, next up is the implementation of a timeout and retry with back-off; this thread can't stay blocked indefinitely.
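One way to keep the handler from blocking forever, sketched only (TIMEOUT_MS and the failure handling are illustrative and not part of the code above): replace the unbounded isIdle.wait() with a deadline-based wait and fail the promise when it expires, leaving a place to hook in the retry/back-off logic.

private static final long TIMEOUT_MS = 2000; // hypothetical timeout

@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    long deadline = System.currentTimeMillis() + TIMEOUT_MS;
    synchronized (isIdle) {
        while (!isIdle.get()) {                       // loop also guards against spurious wake-ups
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                promise.setFailure(new java.util.concurrent.TimeoutException("device did not return to idle"));
                return;                               // a retry with back-off could be scheduled here instead
            }
            isIdle.wait(remaining);
        }
        isIdle.set(false);
    }
    logger.debug("write(): {}", msg);
    ctx.write(msg, promise);
}

For reference, the debug output from the original proof-of-concept follows.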
15:07:16.539 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2] REGISTERED
15:07:16.540 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2] CONNECT: portserver1.tedworld.net/192.168.2.173:2102
15:07:16.541 [DEBUG] [internal.connection.UDPConnectorNetty] - connect(): connect() complete
15:07:16.541 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] ACTIVE
15:07:16.542 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: creating test message
15:07:16.543 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: sending test message
15:07:16.544 [DEBUG] [internal.connection.UDPConnectorNetty] - sendRequest: Adding msg to queue { super={ messageType=21, channelId=test, data= } }
15:07:16.546 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: Finished
15:07:16.545 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write { super={ messageType=21, channelId=test, data= } }
15:07:16.547 [DEBUG] [internal.connection.UDPConnectorNetty] - sendRequest: Adding msg to queue { super={ messageType=3F, channelId=lamp, data= } }
15:07:16.548 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write(): I/O State not Idle, Waiting
15:07:16.548 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] WRITE: 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 21 89 01 00 00 0a |!..... |
+--------+-------------------------------------------------+----------------+
15:07:16.550 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] FLUSH
15:07:16.567 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ: DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048)), 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 06 89 01 00 00 0a |...... |
+--------+-------------------------------------------------+----------------+
15:07:16.568 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048))
15:07:16.569 [DEBUG] [rojector.internal.protocol.AckHandler] - channelRead0 { super={ messageType=06, channelId=test, data= } }
15:07:16.570 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - setIdle called
15:07:16.571 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write(): Finished waiting on I/O State
15:07:16.571 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
15:07:16.571 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write { super={ messageType=3F, channelId=lamp, data= } }
15:07:16.573 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] WRITE: 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 3f 89 01 50 57 0a |?..PW. |
+--------+-------------------------------------------------+----------------+
15:07:16.573 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] FLUSH
15:07:16.587 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ: DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048)), 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 06 89 01 50 57 0a |...PW. |
+--------+-------------------------------------------------+----------------+
15:07:16.588 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048))
15:07:16.589 [DEBUG] [rojector.internal.protocol.AckHandler] - channelRead0 { super={ messageType=06, channelId=lamp, data= } }
15:07:16.590 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - setIdle called
15:07:16.591 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
15:07:16.592 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ: DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 7, cap: 2048)), 7B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 40 89 01 50 57 30 0a |@..PW0. |
+--------+-------------------------------------------------+----------------+
15:07:16.593 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 7, cap: 2048))
15:07:16.594 [DEBUG] [.netty.channel.DefaultChannelPipeline] - Discarded inbound message { super={ messageType=40, channelId=lamp, data=30 } } that reached at the tail of the pipeline. Please check your pipeline configuration.
15:07:16.595 [DEBUG] [.netty.channel.DefaultChannelPipeline] - Discarded message pipeline : [logging, encode, decode, ack, blocking, DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102].
15:07:16.596 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE

Apache Camel quartz2 timer starting multiple exchanges

I have an application that creates routes to connect to a REST endpoint and process the responses for several vendors. Each route is triggered with a quartz2 timer. Recently, when the timer fires, it creates multiple exchanges instead of just one, and I cannot determine what is causing it.
The method that creates the routes is here:
public String generateRoute(String vendorId) {
routeBuilders.add(new RouteBuilder() {
@Override
public void configure() throws Exception {
System.out.println("Building REST input route for vendor " + vendorId);
String vendorCron = vendorProps.getProperty(vendorId + ".rest.cron");
String vendorEndpoint = vendorProps.getProperty(vendorId + ".rest.endpoint");
String vendorAuth = vendorProps.getProperty(vendorId + ".rest.auth");
int vendorTimer = Integer.valueOf(vendorId) * 10000;
GsonDataFormat format = new GsonDataFormat(RestResponse.class);
from("quartz2://timer" + vendorId + "?cron=" + vendorCron)
.routeId("Rte-vendor" + vendorId)
.streamCaching()
.log("Starting route " + vendorId)
.setHeader("Authorization",constant(vendorAuth))
.to("rest:get:" + vendorEndpoint)
.to("direct:processRestResponse")
.end();
};
});
return "direct:myRoute." + vendorId;
}
and a sample 'vendorCron' string is
"*+5+*+*+*+?&trigger.timeZone=America/New_York".
When the quartz route fires I see this type of output in the log
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
When I should (and used to) only see one of these.
Any ideas what would cause this?
Thanks!
This is because of your vendorCron.
If the cron trigger fires every 5 seconds, then you see this log every 5 seconds.
If the cron trigger fires every 5 minutes/hours, you see these logs every 5 minutes/hours.
I was staring so hard I missed the obvious. I need a 0 in the seconds place of the cron expression.
Thank you for the time.
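For reference, the difference comes down to the seconds field of the Quartz cron expression (hypothetical values shown; the real expressions come from vendorProps, and the '+' signs are URI-encoded spaces):

// "*" in the seconds field matches every second of the target minute,
// so the quartz2 timer starts an exchange roughly once per second for that whole minute:
String badCron  = "*+5+*+*+*+?&trigger.timeZone=America/New_York";

// "0" in the seconds field matches only second 0 of that minute,
// so the timer starts a single exchange:
String goodCron = "0+5+*+*+*+?&trigger.timeZone=America/New_York";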

java Producer-Consumer Not always terminating

I have a system that reads names from a list, calls an external server for a true/false status check and actions those with a true status. The call to the external server takes some time, so running it all in one thread isn't very efficient.
I am currently trying to implement it as a producer/consumer system where many producer threads read the names from the list, call the external server and put the valid names in a blocking queue, while a single consumer picks items from the queue and actions them. Sadly, however, the system will at times run to completion and will at other times hang indefinitely.
Test code is as follows
public class SubscriberTest {
static Queue<String> subscribed = new ConcurrentLinkedQueue<String>();
static BlockingQueue<String> valid = new LinkedBlockingQueue<String>(100);
Random rand = new Random();
public SubscriberTest(int i) {
for (int j = 0; j < i; j++) {
subscribed.add("I love:" + j);
}
}
public SubscriberTest(Queue<String> subs) {
subscribed = subs;
}
public static void main(String[] args) {
SubscriberTest fun = new SubscriberTest(10000);
System.out.println(subscribed.size());
ExecutorService producers = Executors.newCachedThreadPool();
ExecutorService consumers = Executors.newSingleThreadExecutor();
Consumer consumer = fun.new Consumer();
Producer producer = fun.new Producer();
while (!subscribed.isEmpty()) {
producers.execute(producer);
consumers.execute(consumer);
}
producers.shutdown();
consumers.shutdown();
System.out.println("finally");
}
// take names from subscribed and get status
class Producer implements Runnable {
public void run() {
String x = subscribed.poll();
System.out.println("Producer: " + x + " " + Thread.currentThread().getName());
try {
if (getStatus(x)) {
valid.put(x);
}
} catch (InterruptedException e) {
e.printStackTrace();
}
}
// this is a call to an external server
private boolean getStatus(String x) {
return rand.nextBoolean();
}
}
// takes names from valid queue and save them
class Consumer implements Runnable {
public void run() {
try {
System.out.println("Consumer: " + valid.take() + " " + Thread.currentThread().getName());
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
Please show me where I go wrong.
String x = subscribed.poll();
will return null if nothing is available in the queue, which means that you will try to put a null onto the 'valid' queue, which will cause a NullPointerException, and that particular thread will exit. When this happens with all the threads in the pool, the application will hang.
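For illustration, a minimal guard in the original Producer.run() avoids the NullPointerException when the subscribed queue has already been drained (it does not change the consumer side, which can still block forever on valid.take() once nothing more arrives):

public void run() {
    String x = subscribed.poll();
    if (x == null) {
        return;              // another producer already drained the queue; nothing to do
    }
    System.out.println("Producer: " + x + " " + Thread.currentThread().getName());
    try {
        if (getStatus(x)) {
            valid.put(x);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}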
An ExecutorService is a pool of threads with a queue of tasks. Adding another queue just adds complexity and increases the chance you will do something incorrect. I suggest you just use the queue already there.
public class SubscriberTest {
public static void main(String[] args) throws InterruptedException {
final ExecutorService consumers = Executors.newSingleThreadExecutor();
// middle producer
final ExecutorService producers = Executors.newFixedThreadPool(
Runtime.getRuntime().availableProcessors());
// subscribed/original producer.
for (int i = 0; i < 1000*1000; i++) {
final String task = "I love:" + i;
producers.execute(new MidProducer(task, consumers));
}
producers.shutdown();
producers.awaitTermination(10, TimeUnit.SECONDS);
consumers.shutdown();
System.out.println("finally");
}
static class MidProducer implements Runnable {
private final Random rand = new Random();
private final String task;
private final ExecutorService consumers;
public MidProducer(String task, ExecutorService consumers) {
this.task = task;
this.consumers = consumers;
}
public void run() {
System.out.println("Producer: " + task + " " + Thread.currentThread().getName());
if (getStatus(task))
consumers.execute(new Consumer(task));
}
private boolean getStatus(String x) {
return rand.nextBoolean();
}
}
static class Consumer implements Runnable {
private final String task;
private Consumer(String task) {
this.task = task;
}
public void run() {
System.out.println("Consumer: " + task + " " + Thread.currentThread().getName());
}
}
}
prints
Producer: I love: 1 pool-2-thread-2
Producer: I love: 3 pool-2-thread-4
Producer: I love: 2 pool-2-thread-3
Producer: I love: 5 pool-2-thread-2
Producer: I love: 7 pool-2-thread-2
Producer: I love: 4 pool-2-thread-5
Producer: I love: 6 pool-2-thread-6
Producer: I love: 8 pool-2-thread-7
Producer: I love: 10 pool-2-thread-2
Producer: I love: 9 pool-2-thread-5
Producer: I love: 11 pool-2-thread-8
Producer: I love: 12 pool-2-thread-9
Producer: I love: 14 pool-2-thread-10
Producer: I love: 13 pool-2-thread-2
Producer: I love: 16 pool-2-thread-10
Producer: I love: 15 pool-2-thread-11
Producer: I love: 17 pool-2-thread-12
Producer: I love: 20 pool-2-thread-14
Producer: I love: 19 pool-2-thread-10
Producer: I love: 18 pool-2-thread-13
Producer: I love: 0 pool-2-thread-1
Producer: I love: 22 pool-2-thread-12
Producer: I love: 21 pool-2-thread-15
Producer: I love: 25 pool-2-thread-3
Producer: I love: 27 pool-2-thread-12
Producer: I love: 26 pool-2-thread-10
Producer: I love: 24 pool-2-thread-15
Producer: I love: 28 pool-2-thread-1
Producer: I love: 23 pool-2-thread-16
Producer: I love: 31 pool-2-thread-11
Producer: I love: 30 pool-2-thread-16
Producer: I love: 32 pool-2-thread-1
Producer: I love: 36 pool-2-thread-3
Consumer: I love: 2 pool-1-thread-1
...
Consumer: I love: 9975 pool-1-thread-1
Consumer: I love: 9977 pool-1-thread-1
Consumer: I love: 9978 pool-1-thread-1
Consumer: I love: 9979 pool-1-thread-1
Consumer: I love: 9981 pool-1-thread-1
Producer: I love: 9996 pool-2-thread-16
Consumer: I love: 9984 pool-1-thread-1
Consumer: I love: 9985 pool-1-thread-1
Consumer: I love: 9990 pool-1-thread-1
Consumer: I love: 9992 pool-1-thread-1
Producer: I love: 9997 pool-2-thread-16
Consumer: I love: 9994 pool-1-thread-1
Consumer: I love: 9995 pool-1-thread-1
Consumer: I love: 9996 pool-1-thread-1
Producer: I love: 9998 pool-2-thread-16
Producer: I love: 9999 pool-2-thread-16
Consumer: I love: 9997 pool-1-thread-1
Consumer: I love: 9998 pool-1-thread-1
Consumer: I love: 9999 pool-1-thread-1
finally
