I have a Quartz job like this:
@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class MyJob implements Job {
    public void execute(JobExecutionContext jec) throws JobExecutionException {
        // connect to an FTP server, monitor a directory for new files and download them
        // using FTPClient from commons-net-3.5.jar
    }
}
The job is triggered with:
JobDetail jobDetail = newJob(MyJob.class)
        .withIdentity(jobName, DEFAULT_GROUP)
        .usingJobData(new JobDataMap(jobProperties))
        .build();

// trigger every minute
Trigger trigger = newTrigger()
        .withIdentity(jobName, DEFAULT_GROUP)
        .startNow()
        .withSchedule(cronSchedule(cronExpression))
        .build();

scheduler.scheduleJob(jobDetail, trigger);
The job fires every minute. It works well for about a week (roughly 10,000 executions) and then, inexplicably, it is never launched again. There are no errors in the log, and I can see that the previous execution completed. The other jobs keep firing correctly.
After upgrading to quartz-2.2.3 and commons-net-3.5 (looking for a possible bug in the FTP library), it managed to last three weeks.
I have a monitoring job that queries the scheduler, and it reports the trigger state as BLOCKED. The thread of the blocked job is never reused by the application server:
TriggerState triggerState = scheduler.getTriggerState(triggerKey);
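For reference, a fuller version of that check, iterating over all triggers, might look like the sketch below (scheduler is assumed to be the running Scheduler instance; the println is just illustrative):
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerKey;
import org.quartz.impl.matchers.GroupMatcher;

// Inside the monitoring job: flag any trigger that Quartz reports as BLOCKED.
try {
    for (TriggerKey key : scheduler.getTriggerKeys(GroupMatcher.anyTriggerGroup())) {
        Trigger.TriggerState state = scheduler.getTriggerState(key);
        if (state == Trigger.TriggerState.BLOCKED) {
            System.out.println("Trigger " + key + " is BLOCKED");
        }
    }
} catch (SchedulerException e) {
    e.printStackTrace();
}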
I have not found any documentation on this kind of problem with Quartz, so my suspicion is a bug in the FTP library that interferes with the thread started by Quartz, for example through the use of @PersistJobDataAfterExecution.
I wonder whether this is a known issue or a bug, so that I can apply a fix or a workaround (for example killing the Quartz job manually, as in "how to stop/interrupt quartz scheduler job manually"; see the sketch below).
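For reference, the manual-interruption workaround mentioned above relies on Quartz's InterruptableJob contract together with Scheduler.interrupt(); a minimal sketch (the class name and the thread bookkeeping are illustrative, not part of the original job):
import org.quartz.DisallowConcurrentExecution;
import org.quartz.InterruptableJob;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.UnableToInterruptJobException;

@DisallowConcurrentExecution
public class InterruptableFtpJob implements InterruptableJob {

    private volatile Thread workerThread;

    @Override
    public void execute(JobExecutionContext jec) throws JobExecutionException {
        workerThread = Thread.currentThread();
        // ... connect to the FTP server, poll the directory, download files ...
    }

    @Override
    public void interrupt() throws UnableToInterruptJobException {
        Thread t = workerThread;
        if (t != null) {
            t.interrupt(); // wakes the thread if it is blocked on interruptible I/O or sleep
        }
    }
}

// From a monitoring job, when the trigger is stuck:
// scheduler.interrupt(new JobKey(jobName, DEFAULT_GROUP));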
After months of occasional service drops, suspecting that FTP connectivity errors were blocking the service, we finally implemented a measure that seems to solve the problem.
Each execution now does the following:
FTPClient ftp = new FTPClient();
//Added connection timeout before connect()
ftp.setDefaultTimeout(getTimeoutInMilliseconds());
ftp.connect(host, port);
//Added more timeouts to see if thread locks disappear...
ftp.setBufferSize(1024 * 1024);
ftp.setSoTimeout(getTimeoutInMilliseconds());
The weird thing is that the process did not previously appear to block in connect(); it continued and finished, only never ran again. But since setting the timeouts, the problem has not happened again.
I'm trying to set up the Debezium engine with MariaDB and ActiveMQ. I'm using the Quarkus framework and following the official documentation (https://debezium.io/documentation/reference/development/engine.html). When I start the engine I get the following error:
2021-05-03 10:05:53,184 INFO [io.deb.pip.sou.AbstractSnapshotChangeEventSource] (debezium-mysqlconnector-my-app-connector-change-event-source-coordinator) Snapshot - Final stage
2021-05-03 10:05:53,184 WARN [io.deb.pip.ChangeEventSourceCoordinator] (debezium-mysqlconnector-my-app-connector-change-event-source-coordinator) Change event source executor was interrupted: java.lang.InterruptedException: Interrupted while emitting initial DROP TABLE events
I'm not really sure why this happens, and so far I haven't been able to track down the source of the problem, so any kind of help would be appreciated.
I was able to resolve this by deleting the file configured with the property offset.storage.file.filename.
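For context, that file is the one the engine is pointed at through the offset-storage properties; a minimal sketch of the relevant part of the configuration (the path and connector name are placeholders):
Properties props = new Properties();
props.setProperty("name", "my-app-connector");
props.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
// Where the embedded engine stores the source offsets; deleting this file resets them.
props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
props.setProperty("offset.flush.interval.ms", "60000");
// ... plus the usual database.* connection properties ...
Separately, the way the engine is run matters too: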
// Run the engine asynchronously ...
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.execute(engine);
// Do something else or wait for a signal or an event
Make sure you DO wait for something, or the connector thread will be terminated when the main thread exits, and you will get a message like "Snapshot was interrupted before completion".
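One simple way to do that waiting, continuing from the snippet above, is to block on the executor itself; a minimal sketch (the 10-second poll interval is arbitrary):
// Register a hook so that stopping the application closes the engine cleanly.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try {
        engine.close();   // DebeziumEngine implements Closeable; its Runnable then returns
    } catch (IOException e) {
        e.printStackTrace();
    }
}));

// Block the main thread until the engine task has actually finished.
executor.shutdown();
try {
    while (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
        // engine still running; keep waiting
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}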
Is it right to say that the Java gRPC server thread will still run even after the DEADLINE time? But that the gRPC server will stop/block that thread only from making further gRPC calls once the DEADLINE has been crossed?
If the above is correct, is there a way to also stop/block the thread from making any Redis/DB calls once the DEADLINE has been crossed? Or, once the DEADLINE is crossed, to interrupt the thread immediately?
Is it right to say that the Java gRPC server thread will still run even after the DEADLINE time?
Correct. Java doesn't offer any real alternatives.
But that the gRPC server will stop/block that thread only from making further gRPC calls once the DEADLINE has been crossed?
Mostly. Outgoing gRPC calls observe the io.grpc.Context, which means deadlines and cancellations are propagated (unless you fail to propagate Context to another thread or use Context.fork()).
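For example, when work is handed off to another thread, the Context has to be carried across explicitly; a small sketch using Context.current().wrap() (the executor and the outgoing call are placeholders):
Context rpcContext = Context.current();          // captured on the gRPC request thread
executor.execute(rpcContext.wrap(() -> {
    // The original RPC's deadline and cancellation apply inside this Runnable,
    // so outgoing gRPC calls made on this thread still observe them.
    makeOutgoingCall();                          // placeholder for a stub call
}));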
If the above is correct, is there a way to also stop/block the thread from making any Redis/DB calls once the DEADLINE has been crossed? Or, once the DEADLINE is crossed, to interrupt the thread immediately?
You can listen for the Context cancellation via Context.addListener(). The gRPC server will cancel the Context when the deadline expires or when the client cancels the RPC. This notification is how outgoing RPCs are cancelled.
I will note that thread interruption is a bit involved to perform without racing. If you want interruption and don't have a Future already, I suggest wrapping your work in a FutureTask (and simply calling FutureTask.run() on the current thread) in order to get its non-racy cancel(true) implementation.
final FutureTask<Void> future = new FutureTask<Void>(work, null);
Context current = Context.current();
CancellationListener listener = new CancellationListener() {
    @Override public void cancelled(Context context) {
        future.cancel(true);
    }
};
// addListener() takes an Executor for running the listener; Runnable::run runs it on the cancelling thread
current.addListener(listener, Runnable::run);
future.run();
current.removeListener(listener);
You can check Context.isCancelled() before making Redis/DB queries and throw a StatusException(CANCELLED) if it has been cancelled.
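A minimal sketch of that check (the query method is a placeholder; Status.CANCELLED.asRuntimeException() is used here instead of a checked StatusException):
if (Context.current().isCancelled()) {
    // The deadline expired or the client cancelled; don't bother hitting Redis/DB.
    throw Status.CANCELLED
            .withDescription("RPC cancelled or deadline exceeded")
            .asRuntimeException();
}
queryRedisOrDb();  // placeholder for the actual Redis/DB call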
I've configured an MDB to listen to a queue on an external ActiveMQ broker. It works fine, but the MDB takes a message from the queue and starts processing it only after a 2-minute delay. I haven't configured any timeouts, but it really looks like there is some property that delays the processing. Could someone advise how I can tune this delay and switch to immediate processing?
This is a long-standing bug in Glassfish. There was a bug report recorded here, but that site has since been shut down:
http://java.net/jira/browse/GLASSFISH-1429
Add System.exit(0) (in a finally block), which closes all threads:
try {
    // code ...
} finally {
    System.exit(0);
}
You can also enable debugging:
1) Use jstack to see how many threads from the MDB thread pool are in use.
2) Try enabling monitoring statistics for the work-manager and thread pools:
http://download.oracle.com/docs/cd/E19879-01/820-4335/6nfqc3qp8/index.html
I'm working with the Java Play Framework (2.3.6) and have some problems handling scheduled tasks. Sometimes some of my recurring tasks crash with an exception, so I want to create a web page that checks the status of my scheduled tasks and gives a quick view of whether each task is still running without problems.
I know I also have to check the exceptions, but the page would be a nice monitoring tool.
So I have some scheduled tasks like this one:
ExecutionContext dispatcher = Akka.system().dispatcher();
FiniteDuration timeNow = Duration.create(0, TimeUnit.SECONDS);
FiniteDuration time1m = Duration.create(1, TimeUnit.MINUTES);
Recorder recorderTask = new Recorder(); // implements Runnable
Akka.system().scheduler().schedule(timeNow, time1m, recorderTask, dispatcher);
Now, is there any way to check the status of my task (e.g. whether it's still alive)?
Thanks for your help!
We have an application deployed in a clustered environment. Every 5 minutes our application sends a ping operation to all the other applications connected to it. We have used a non-persistent Quartz scheduler to do this work.
The problem is that in the clustered environment only one node performs this activity (the ping operation). Are there any references or sample code for this? (This is a plain servlet application.)
Since all the nodes work as a cluster, every job runs on just a single machine (the most idle one); that is the whole point of clustering. But you want every machine to run a given job independently, without being aware of the other cluster nodes. Basically, you don't need Quartz (or its clustering) at all!
It is enough to use ScheduledExecutorService.scheduleAtFixedRate():
final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
final Runnable pinger = new Runnable() {
    public void run() {
        // send PING
    }
};
scheduler.scheduleAtFixedRate(pinger, 5, 5, TimeUnit.MINUTES);
Just run this code on every machine and use Quartz where you need it.