Is there a Java way to queue an asynchronous call to a thread sitting in an alertable wait?

Like the link below, is there a Java facility where one thread can queue an asynchronous procedure call to another thread that is sitting in an alertable wait?
https://learn.microsoft.com/en-us/windows/desktop/sync/using-a-waitable-timer-with-an-asynchronous-procedure-call
I want to build an async timer facility in Java. I've confirmed this works in C++, but I don't know whether it exists in Java.
The main thread runs, periodically checks whether any async timers have fired, runs them if so, then goes back to its own work and runs again. That is what I want to do.
Of course, when checking whether an async timer has fired, I would use an alertable sleep.
I've tried searching Google, but I couldn't find anything.
Thanks in advance!
What I want to do, in more detail, is below.
Let's assume there is a program receiving requests like:
msg1 : msgname=aa1 to=sub1 waittime=1000 msgbody
msg2 : msgname=aa2 to=sub2 waittime=2000 msgbody
msg3 : msgname=aa3 to=sub1 waittime=3000 msgbody
msg4 : msgname=aa3 to=sub1 msgbody
...
The program should pass each msg to the subscriber (sub1, sub2, ...) named in the msg's to field.
If waittime exists, it should deliver the message that many milliseconds later. The program should do this in one thread, and there are over ten thousand msgs per second. If it just used a synchronous sleep, the msgs couldn't all be delivered on time and would be delayed. I've checked that this works well in C++ code, and I have seen a commercial program (made in Java, I think) that does this. But I am a novice in Java and I want to know whether it is possible in Java.

Java doesn't have concepts analogous to Windows' "alertable" and the APC queue, and I doubt that it would be possible to both use the Windows native APIs and integrate this with normal Java thread behavior.
The simple way to implement timers in Java is to use the standard Timer class; see the javadoc. If this isn't going to work for you, please explain your problem in more detail.
In response to your follow-up problem: yes, it is possible in Java. In fact, there are probably many ways to do it, but Timer and TimerTask are as good a way as any. Something like this:
public class MyTask extends TimerTask {
    private String msg;
    private String to;

    public MyTask(String msg, String to) {
        this.msg = msg;
        this.to = to;
    }

    public void run() {
        // process message
    }
}

Timer timer = new Timer();
while (...) {
    // read message
    timer.schedule(new MyTask(msg, to), waitTime);
}
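For illustration only, here is a self-contained sketch of how that dispatch loop might look for the message format in the question. The input array, the DeliverTask class, and the parsing code are all assumptions made up for the example, not part of the answer above:
import java.util.Timer;
import java.util.TimerTask;

public class Dispatcher {
    // Hypothetical task: a real program would forward the body to the named subscriber.
    static class DeliverTask extends TimerTask {
        private final String to;
        private final String body;
        DeliverTask(String to, String body) { this.to = to; this.body = body; }
        @Override public void run() {
            System.out.println("delivering to " + to + ": " + body);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer();
        // Assumed input format, matching the examples in the question.
        String[] sampleInput = {
                "msgname=aa1 to=sub1 waittime=1000 hello",
                "msgname=aa2 to=sub2 waittime=2000 world",
                "msgname=aa3 to=sub1 now"          // no waittime => deliver immediately
        };
        for (String line : sampleInput) {
            String to = null;
            long waitTime = 0;
            StringBuilder body = new StringBuilder();
            for (String token : line.split(" ")) {
                if (token.startsWith("to=")) to = token.substring(3);
                else if (token.startsWith("waittime=")) waitTime = Long.parseLong(token.substring(9));
                else if (!token.startsWith("msgname=")) body.append(token).append(' ');
            }
            // Timer runs every task on one background thread, so this loop never sleeps;
            // each message is delivered waitTime milliseconds after it is scheduled.
            timer.schedule(new DeliverTask(to, body.toString().trim()), waitTime);
        }
        Thread.sleep(3000);   // give the sample tasks time to fire, then stop the timer thread
        timer.cancel();
    }
}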

Related

How to pass a message from TimerTask to main thread?

I have a main client which keeps background timers for each peer. These timers run in a background thread and, after 30s (the timeout period), are scheduled to mark the respective peer as offline. The block of code to do this is:
public void startTimer() {
    timer = new Timer();
    timer.schedule(new TimerTask() {
        public void run() {
            status = false;
            System.out.println("Setting " + address.toString() + " status to offline");
            // need to send failure message somehow
            thread.sendMessage();
        }
    }, 5*1000);
}
Then, in the main program, I need some way to detect when the above timer task has been run, so that the main client can then send a failure message to all other peers, something like:
while (true)
    if (msgFromThreadReceived)
        notifyPeers();
How would I be able to accomplish this with TimerTask? As I understand, the timer is running in a separate thread, and I want to somehow pass a message to the main thread to notify the main thread that the task has been run.
I would have the class that handles the timers for the peers take a concurrent queue and place a message in the queue when the peer goes offline. Then the "main" thread can poll the queue(s) in an event-driven way, receiving and processing the messages.
Please note that this "main" thread MUST NOT be the event dispatch thread of a GUI framework. If there is something that needs to be updated in the GUI when the main thread receives the message, it can invoke another piece of code on the event dispatch thread upon reception of the message.
Two good choices for the queue would be ConcurrentLinkedQueue if the queue should be unbounded (the timer threads can put any number of messages in the queue before the main thread picks them up), or LinkedBlockingQueue if there should be a limit on the size of the queue, and if it gets too large, the timer threads have to wait before they can put another message on it (this is called backpressure, and can be important in distributed, concurrent systems, but may not be relevant in your case).
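For example, the bounded variant differs from the unbounded one only in the constructor argument (using the PeerDownMessage record defined below; the capacity is an arbitrary illustration):
// Unbounded inbox: producers never block.
BlockingQueue<PeerDownMessage> inbox = new LinkedBlockingQueue<>();

// Bounded inbox: once 1000 messages are waiting, put() blocks the timer
// threads until the main thread drains some of them (backpressure).
BlockingQueue<PeerDownMessage> boundedInbox = new LinkedBlockingQueue<>(1000);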
The idea here is to implement a version of the Actor Model (q.v.), in which nothing is shared between threads (actors), and any data that needs to be sent (which should be immutable) is passed between them. Each actor has an inbox in which it can receive messages and it acts upon them. Only, your timer threads probably don't need inboxes, if they take all their data as parameters to the constructor and don't need to receive any messages from the main thread after they're started.
public record PeerDownMessage(String peerName, int errorCode) {
}

public class PeerWatcher {
    private final Peer peer;
    private final BlockingQueue<PeerDownMessage> queue;

    public PeerWatcher(Peer peer, BlockingQueue<PeerDownMessage> queue) {
        this.peer = Objects.requireNonNull(peer);
        this.queue = Objects.requireNonNull(queue);
    }

    public void startTimer() throws InterruptedException {
        // . . .
        // time to send failure message
        queue.put(new PeerDownMessage(peer.getName(), error));
        // . . .
    }
}

public class Main {
    public void eventLoop(List<Peer> peers) throws InterruptedException {
        LinkedBlockingQueue<PeerDownMessage> inbox =
                new LinkedBlockingQueue<>();
        for (Peer peer : peers) {
            PeerWatcher watcher = new PeerWatcher(peer, inbox);
            watcher.startTimer();
        }
        while (true) {
            PeerDownMessage message = inbox.take();
            SwingUtilities.invokeLater(() -> {
                // suppose there is a map of labels for each peer
                JLabel label = labels.get(message.peerName());
                label.setText(message.peerName() +
                        " failed with error " + message.errorCode());
            });
        }
    }
}
Notice that to update the GUI, we cause that action to be performed on yet another thread, the Swing Event Dispatch Thread, which must be different from our main thread.
There are big, complex frameworks you can use to implement the actor model, but the heart of it is this: nothing is shared between threads, so you never need to synchronize or make anything volatile. Anything an actor needs, it either receives as a parameter to its constructor or via its inbox (in this example, only the main thread has an inbox, since the worker threads don't need to receive anything once they are started), and it is best to make everything immutable. I used a record instead of a class for the message, but you could use a regular class. Just make the fields final, set them in the constructor, and guarantee they can't be null, as in the PeerWatcher class.
I said the main thread can poll the "queue(s)," implying there could be more than one, but in this case they all send the same type of message, and they identify which peer the message is for in the message body. So I just gave every watcher a reference to the same inbox for the main thread. That's probably best. An actor should just have one inbox; if it needs to do multiple things, it should probably be multiple actors (that's the Erlang way, and that's where I've taken the inspiration for this from).
But if you really needed to have multiple queues, main could poll them like so:
while (true) {
    for (LinkedBlockingQueue<PeerDownMessage> queue : queues) {
        if (queue.peek() != null) {
            PeerDownMessage message = queue.take();
            handleMessageHowever(message);
        }
    }
}
But that's a lot of extra stuff you don't need. Stick to one inbox queue per actor, and then polling the inbox for messages to process is simple.
I initially wrote this to use ConcurrentLinkedQueue, but I used put and take, which are methods of BlockingQueue, so I changed it to LinkedBlockingQueue. If you prefer ConcurrentLinkedQueue, you can use add and poll instead. On further consideration, though, I would really recommend BlockingQueue for the simplicity of its take() method: it lets you block while waiting for the next available item instead of busy waiting.
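To make that trade-off concrete, here is a small self-contained sketch; the class and queue names are made up for illustration:
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TakeVsPoll {
    public static void main(String[] args) throws InterruptedException {
        // Blocking: take() parks the consumer thread until an element arrives.
        LinkedBlockingQueue<String> blocking = new LinkedBlockingQueue<>();
        blocking.put("hello");
        String msg = blocking.take();   // returns immediately here; would block if empty
        System.out.println("blocking consumer got: " + msg);

        // Non-blocking: poll() returns null when the queue is empty, so the
        // consumer has to loop (and ideally sleep) while waiting -- busy waiting.
        ConcurrentLinkedQueue<String> nonBlocking = new ConcurrentLinkedQueue<>();
        nonBlocking.add("world");
        String polled;
        while ((polled = nonBlocking.poll()) == null) {
            Thread.sleep(10);           // without some sleep this spins the CPU
        }
        System.out.println("polling consumer got: " + polled);
    }
}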

Executor service forgets about queued tasks

Working with Java 11 and Spring 2.1.6.RELEASE.
I'm experiencing an issue where, if I send a few records to the topic that this Kafka consumer consumes from, everything works as planned. However, if I produce a lot of records (a hundred or so), the executor queues the processing but never actually does it. Am I using the executor wrong? I don't think it's a Kafka issue. Is there a way to query the executor to debug this?
@Configuration
public class ExecutorServiceConfig {
    @Bean
    public ExecutorService createExecutorService() {
        return Executors.newFixedThreadPool(10);
    }
}

@KafkaListener(topics = "${kafka.consumer.topic.name}",
        groupId = "${spring.kafka.consumer.group-id}")
public void consume(PayrollDto message) {
    log.info("Consumed message for processing:" + message); // this log is hit for all records
    executor.execute(new ConsumerExecutor(message));
}

private class ConsumerExecutor implements Runnable {
    PayrollDto message;

    public ConsumerExecutor(PayrollDto message) {
        this.message = message;
    }

    @Override
    public void run() {
        log.info("Beginning processing for payroll:" + this.message); // this log is hit for only some records
        processPayrollList(this.message);
        log.info("Finished processing for payroll:" + this.message);
    }
}
It looks like you are using pure Java SE ExecutorService classes rather than Spring-specific TaskExecutor classes.
There is not enough information to diagnose this properly. (You haven't provided any clear evidence that the tasks have been "forgotten". Your reported evidence is that they are not executed; "forgotten tasks" is only one of a number of possible explanations.)
The only explanations that I can think of are:
1. Your processPayrollList method is not terminating in some circumstances. It could be deadlocking, going into an infinite loop, waiting forever on some external service and so on. If enough (i.e. 10) tasks failed to terminate, you would run out of threads in the pool, and no more tasks would be processed. That is consistent with your evidence.
2. Something in your application is replacing executor with a different ExecutorService object.
3. Something in your application is removing tasks from the queue without executing them.
4. A build or deployment "process" issue; e.g. the code you are running is different to the code you are looking at. (It happens.)
5. An unreported bug in the Java 11 class library.
Of these, (1) is the most likely (IMO). Explanations (2) and (3) involve application code that I assume you would have mentioned in the question. I would treat (5) as implausible ... unless you can provide some clear evidence in the form of a minimal reproducible example.
Am I using the executor wrong?
It doesn't look like it from the code you have shown us.
Is there a way to query the executor to debug this?
You could take a thread stack dump (e.g. using the jstack command) and look at the status of the threads in the pool.
You could also cast executor to ThreadPoolExecutor and use that API to look at the queue length, the number of active threads and so on.
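For example, a minimal sketch of that kind of check, assuming executor is the fixed thread pool bean shown in the question (where and how often you log it is up to you):
import java.util.concurrent.ThreadPoolExecutor;

// Somewhere with access to the ExecutorService bean, e.g. a temporary debug log:
if (executor instanceof ThreadPoolExecutor) {
    ThreadPoolExecutor pool = (ThreadPoolExecutor) executor;
    log.info("active=" + pool.getActiveCount()
            + " queued=" + pool.getQueue().size()
            + " completed=" + pool.getCompletedTaskCount());
}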
Note that this is not due to the ExecutorService being shut down. If that happened, you would get RejectedExecutionException in calls to execute.

Do while loop behaving unexpectedly, for some inexplicable reason

I've been all over the internet and the Java docs regarding this one; I can't seem to figure out what it is about do while loops I'm not understanding. Here's the background: I have some message handler code that takes some JSON formatted data from a REST endpoint, parses it into a runnable task, then adds this task to a linked blocking queue for processing by the worker thread. Meanwhile, on the worker thread, I have this do while loop to process the message tasks:
do {
    PublicTask currentTask = pubMsgQ.poll();
    currentTask.run();
} while(pubMsgQ.size() > 0);
pubMsgQ is a LinkedBlockingQueue<PublicTask> (PublicTask implements the Runnable interface). I can't see any problems with this loop (obviously, or else I wouldn't be here), but this is how it behaves during execution: Upon entering the do block, pubMsgQ is polled and returns the runnable task as expected. The task is then run successfully with expected results, but then we get to the while statement. Now, according to the Java docs, poll() should return and remove the head of the queue, so I should expect that pubMsgQ.size() will return 0, right? Wrong I guess, because somehow the while statement passes and the program enters the do block again; of course this time pubMsgQ.poll() returns null (as I would have expected it should) and the program crashes with NullPointerException. What? Please explain like I'm five...
EDIT:
I decided to leave my original post as is above; because I think I actually explain the undesired behavior of that specific piece of the code quite succinctly (the loop is being executed twice while I'm fairly certain there is no way the loop should be executing twice). However, I realize that probably doesn't give enough context for that loop's existence and purpose in the first place, so here is the complete breakdown for what I am actually trying to accomplish with this code as I am sure there is a better way to implement this altogether anyways.
What this loop is actually a part of is a message handler class which implements the MessageHandler interface belonging to my Client Endpoint class [correction from my previous post; I had said the messages coming in were JSON formatted strings from a REST endpoint. This is technically not true: they are JSON formatted strings being received through a web socket connection. Note that while I am using the Spring framework, this is not a STOMP client; I am only using the built-in javax WebSocketContainer as this is more lightweight and easier for me to implement]. When a new message comes in onMessage() is called, which passes the JSON string to the MessageHandler; so here is the code for the entire MessageHandler class:
public class MessageHandler implements com.innotech.gofish.AutoBrokerClient.MessageHandler {
    private LinkedBlockingQueue<PublicTask> pubMsgQ = new LinkedBlockingQueue<PublicTask>();
    private LinkedBlockingQueue<AuthenticatedTask> authMsgQ = new LinkedBlockingQueue<AuthenticatedTask>();
    private MessageLooper workerThread;
    private CyclicBarrier latch = new CyclicBarrier(2);
    private boolean running = false;
    private final boolean authenticated;

    public MessageHandler(boolean authenticated) {
        this.authenticated = authenticated;
    }

    @Override
    public void handleMessage(String msg) {
        try {
            //Create new Task and submit it to the message queue:
            if(authenticated) {
                AuthenticatedTask msgTsk = new AuthenticatedTask(msg);
                authMsgQ.put(msgTsk);
            } else {
                PublicTask msgTsk = new PublicTask(msg);
                pubMsgQ.put(msgTsk);
            }
            //Check status of worker thread:
            if(!running) {
                workerThread = new MessageLooper();
                running = true;
                workerThread.start();
            } else if(running && !workerThread.active) {
                latch.await();
                latch.reset();
            }
        } catch(InterruptedException | BrokenBarrierException e) {
            e.printStackTrace();
        }
    }

    private class MessageLooper extends Thread {
        boolean active = false;

        public MessageLooper() {
        }

        @Override
        public synchronized void run() {
            while(running) {
                active = true;
                if(authenticated) {
                    do {
                        AuthenticatedTask currentTask = authMsgQ.poll();
                        currentTask.run();
                        if(GoFishApplication.halt) {
                            GoFishApplication.reset();
                        }
                    } while(authMsgQ.size() > 0);
                } else {
                    do {
                        PublicTask currentTask = pubMsgQ.poll();
                        currentTask.run();
                    } while(pubMsgQ.size() > 0);
                }
                try {
                    active = false;
                    latch.await();
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
You can probably see where I'm going with this... what this jury-rigged code is trying to do is act as a facsimile for the Looper class provided by the Android Development Kit. The actual desired behavior is: as messages are received, the handleMessage() method adds them to the queue for processing, and the messages are processed on the worker thread separately as long as there are messages to process. If there are no more messages to process, the worker thread waits until it is notified by the handler that more messages have been received, at which point it resumes processing those messages until the queue is once again empty. Rinse and repeat until the user stops the program.
Of course, the closest thing the JDK provides to this is the ThreadPoolExecutor (which I know is probably the actual proper way to implement this), but for the life of me I couldn't figure out how to do it for this exact case. Finally, as a quick aside so I can be sure to explain everything fully: the reason there are two queues (and a public and an authenticated handler) is that there are two web socket connections. One is an authenticated channel for sending/receiving private messages; the other is unauthenticated and used only to send/receive public messages. There should be no interference, however, given that the authenticated status is final and set at construction, and each Client Endpoint is passed its own Handler, which is instantiated at the time of server connection.
You appear to have a number of concurrency / threading bugs in your code.
Assumptions:
It looks like there could be multiple MessageHandler objects, each with its own pair of queues and (supposedly) at most one MessageLooper thread. It also looks as if a given MessageHandler could be used by multiple request worker threads.
If that is the case, then one problem is that MessageHandler is not thread-safe. Specifically, the handleMessage is accessing and updating fields of the MessageHandler instance without doing any synchronization.
Some of the fields are initialized during object creation and then never changed. They are probably OK. (But you should declare them as final to be sure!) But some of the variables are supposed to change during operation, and they must be handled correctly.
One section that rings particular alarm bells is this:
if (!running) {
    workerThread = new MessageLooper();
    running = true;
    workerThread.start();
} else if (running && !workerThread.active) {
    latch.await();
    latch.reset();
}
Since this is not synchronized, and the variables are not volatile:
There are race conditions if two threads call this code simultaneously; e.g. between testing running and assigning true to it.
If one thread sets running to true, there are no guarantees that a second thread will see the new value.
The net result is that you could potentially get two or more MessageLooper threads for a given set of queues. That breaks your assumptions in the MessageLooper code.
Looking at the MessageLooper code, I see that you have declared the run method as synchronized. Unfortunately, that doesn't help. The problem is that the run method will be synchronizing on this ... which is the specific instance of MessageLooper. And it will acquire the lock once and release it once. In short, the synchronized is wrong.
(For Java synchronized methods and synchronized blocks to work properly, 1) the threads involved need to synchronize on the same object (i.e. the same primitive lock), and 2) all read and write operations on the state guarded by the lock need to be done while holding the lock. This applies to use of Lock objects as well.)
So ...
There is no synchronization between a MessageLooper thread and any other threads that are adding to or removing from the queues.
There are no guarantees that the MessageLooper thread will notice changes to the running flag.
As I previously noted, you could have two or more MessageLooper threads polling the same pair of queues.
In short, there are lots of possible explanations for strange behavior in the code in the Question. This includes the specific problem you noticed with the queue size.
Writing correct multi-threaded code is difficult. This is why you should be using an ExecutorService rather than attempting to roll your own code.
But if you do need to roll your own concurrency code, I recommend buying and reading "Java Concurrency in Practice" by Brian Goetz et al. It is still the only good textbook on this topic ...
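To illustrate the recommendation, here is a rough sketch of the same handler built on a single-threaded ExecutorService instead of the hand-rolled looper. The task classes come from the question; everything else (including the omission of the halt/reset check) is an assumption for the sketch, not a drop-in replacement:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MessageHandler implements com.innotech.gofish.AutoBrokerClient.MessageHandler {
    // One worker thread per handler. It parks when idle and wakes when work arrives,
    // so the running flag, the active flag, and the CyclicBarrier are no longer needed.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final boolean authenticated;

    public MessageHandler(boolean authenticated) {
        this.authenticated = authenticated;
    }

    @Override
    public void handleMessage(String msg) {
        // submit() places the task on the executor's internal queue; tasks run one
        // at a time, in arrival order (assuming both task classes implement Runnable).
        if (authenticated) {
            worker.submit(new AuthenticatedTask(msg));
        } else {
            worker.submit(new PublicTask(msg));
        }
    }

    public void shutdown() {
        worker.shutdown();   // call this when the client endpoint closes
    }
}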

Recursive function call with time delay

I have a web application and I need to run a background process which will hit a web service; after getting the response it will wait a few seconds (say 30) and then hit the service again. The response data can vary from very small to very large, so I don't want to call the process again until I am finished processing the data. So, it's a recursive call with a time delay. How I intend to do it is:
Add a ContextListener to the web app.
In the contextInitialized() method, call invokeWebService(), i.e. an arbitrary method that hits the web service.
invokeWebService() will look like:
invokeWebService() {
    // make request
    // hit service
    // get response
    // process response
    timeDelayInSeconds(30);
    // recursive call
    invokeWebService();
}
Please suggest whether I am doing it right, or whether I should go with threads or schedulers. Please answer with sample code.
You could use a ScheduledExecutorService, which is part of the standard JDK since 1.5:
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
Runnable r = new Runnable() {
    @Override
    public void run() {
        invokeWebService();
    }
};
scheduler.scheduleAtFixedRate(r, 0, 30, TimeUnit.SECONDS);
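One detail worth noting, since you said you don't want to hit the service again until the previous response is fully processed: scheduleWithFixedDelay measures the 30 seconds from the end of one run to the start of the next, which may match that requirement better than scheduleAtFixedRate. A minimal sketch:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
// The next run starts 30 seconds after the previous one finishes, so a slow
// response simply pushes the following call back instead of piling up runs.
scheduler.scheduleWithFixedDelay(() -> invokeWebService(), 0, 30, TimeUnit.SECONDS);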
It is not recursive but repeated. You have two choices here:
Use a Timer and a TimerTask with scheduleAtFixedRate
Use Quartz with a repeated schedule.
In quartz, you can create a repeated schedule with this code:
TriggerBuilder.newTrigger()
    .withSchedule(SimpleScheduleBuilder.repeatSecondlyForever(30))
    .build()
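For completeness, a rough sketch (assuming Quartz 2.x) of how that trigger could be attached to a job that calls invokeWebService(); the job class and identity name are made up for the example:
// Job wrapping the existing method.
public class InvokeWebServiceJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        invokeWebService();
    }
}

// Wiring the job to the repeating trigger and starting the scheduler.
Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
JobDetail job = JobBuilder.newJob(InvokeWebServiceJob.class)
        .withIdentity("invokeWebServiceJob")
        .build();
Trigger trigger = TriggerBuilder.newTrigger()
        .withSchedule(SimpleScheduleBuilder.repeatSecondlyForever(30))
        .build();
scheduler.scheduleJob(job, trigger);
scheduler.start();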
From what I am getting, waiting sort of implies hanging, which I do not really think is a good idea. I would recommend you use something such as Quartz and run your method at whatever interval you wish.
Quartz is a full-featured, open source job scheduling service that can
be integrated with, or used along side virtually any Java EE or Java
SE application
Tutorials can be accessed here.
As stated here, you can do something like this:
JobDetail existingJobDetail = sched.getJobDetail(jobName, jobGroup);
if (existingJobDetail != null) {
    List<JobExecutionContext> currentlyExecutingJobs = (List<JobExecutionContext>) sched.getCurrentlyExecutingJobs();
    for (JobExecutionContext jec : currentlyExecutingJobs) {
        if (existingJobDetail.equals(jec.getJobDetail())) {
            //String message = jobName + " is already running.";
            //log.info(message);
            //throw new JobExecutionException(message,false);
        }
    }
    //sched.deleteJob(jobName, jobGroup); if you want to delete the scheduled but not-currently-running job
}

Java thread start() without join() or interrupt() in servlet

I have a servlet filter that carries out some logic and produces output before a request is served by its primary page. I have a new need to send the output a few seconds later than the moment it is generated (with a ~10s delay). Because of certain poor design choices made earlier, I can't move the position of the filter just to have the output sent afterwards.
I've chosen to spawn off a thread and delay transmission of the message in there. I'm currently not taking any explicit steps to halt execution of this thread. I'm not sure if everything is getting cleaned up properly though. Should I be using join() or interrupt() or any other Thread methods to clean up safely after this?
So within the main servlet code I have something like this...
Thread t = new Thread(new MessageSender(message, 10000));
t.start();
//Carry on.. la la la
While there are other fields in this class, I just stripped out a lot of the non-essential stuff like making DB connections etc to make the point clear.
private static class MessageSender implements Runnable {
    String message;
    int delay;

    public MessageSender(String message, int delay) {
        this.message = message;
        this.delay = delay;
    }

    public void run() {
        try {
            Thread.sleep(delay);
            System.out.println(new java.util.Date() + ": hello world");
        } catch (InterruptedException e) {
            // Do blah
        } catch (Exception e) {
            // Do blah blah
        } finally {
            // Close connections and stuff
        }
    }
}
Your code should be fine, the VM will clean up the thread once it completes.
However, I'd advise not using raw threads like that, but instead using a java.util.concurrent.ScheduledExecutorService, created using java.util.concurrent.Executors. It's a nicer abstraction that would give you better control over your thread allocation.
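For example, a rough sketch of what that could look like, assuming a single scheduler owned by the filter (created in init() and shut down in destroy()); the field name and pool size are arbitrary:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One shared, bounded pool instead of a new Thread per request.
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

// In the filter, instead of new Thread(new MessageSender(message, 10000)).start():
// the scheduler supplies the 10-second delay, so MessageSender no longer needs
// its own Thread.sleep() and can be given a delay of 0.
scheduler.schedule(new MessageSender(message, 0), 10, TimeUnit.SECONDS);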
Yes, everything will be properly cleaned up. The thread dies after finishing its run() method, and since you have no more references to that thread object, it will be properly garbage-collected.
Just be sure that the "Thread t" object is not referenced by anything. To be sure of that, you can use:
(new Thread(...)).start();
The servlet specification explicitly states (section "Thread safety") that request and response objects are not guaranteed to be thread-safe, and that if those objects are handed off to other threads, then the application is responsible for ensuring that these objects are synchronized and that they are accessed only within the scope of the servlet's service method. In other words, you must .join() those threads.
I've just had to answer the same question myself :)
I can confirm that the threads are indeed cleaned up after they complete. If you're not completely certain the spawned threads ever die, you should be able to monitor the process and see how many threads it's currently running. If the number keeps growing, something's outta control.
On a Unix-system, you can use the ps command, but I'm rusty, so I asked google instead of reading the man-page.
One of the first hits on Google was this script that lists threads for each process. Output looks like this:
PID TID CLS RTPRIO STAT COMMAND WCHAN
....
16035 16047 TS - S (java)
16035 16050 TS - S (java)
16035 16054 TS - S (java)
16035 16057 TS - S (java)
16035 16058 TS - S (java)
16035 16059 TS - S (java)
16035 16060 TS - S (java)
....
And I just grep the output for the process id (pid) of the process I want to watch and count the number of lines, each one corresponding to a thread. Like this:
morten@squeeze:~$ sh /tmp/list_threads.sh | grep 16035 | wc -l
20
So the program I'm currently watching (PID 16035) has 20 threads running.
This required no knowledge of jconsole or any changes to the code. The last part is probably the most important part, as I haven't written the program myself, so now I don't have to read and understand the program.
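If you'd rather check from inside the JVM than grep ps output, the standard ThreadMXBean exposes the live thread count. A minimal standalone sketch (you would run the equivalent inside the process you actually want to watch):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountProbe {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // If spawned threads are cleaned up correctly, this count should level off
        // rather than growing without bound.
        while (true) {
            System.out.println("live threads: " + threads.getThreadCount());
            Thread.sleep(5000);
        }
    }
}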
