When a Quartz job fires, is it a new job class instance? - java

I am very new to Quartz and I have some doubts about the job lifecycle.
Let's suppose I have a single job configured to do some stuff.
The job fires and ends its work. When it fires again, is it the same instance (maybe put to sleep and awakened by the scheduler) or is it a new job instance (once the job ends it is discarded, and when the trigger condition is met again a new job instance is created)?
I ask this question because when I debug my application (Spring 3 MVC with Quartz support) I see new instances of the job, and new SimpleThreadPool$WorkerThread.run() threads opened every time the job fires, so the SimpleThreadPool$WorkerThread.run() threads pile up and are never terminated.
I just want to know if this behaviour is all right or whether I'm bound to fill up the memory ;-)
Can anyone give me an explanation? Thanks in advance.

Quartz creates a new instance of your job class every time it triggers that job. Suppose you have hundreds of thousands of jobs scheduled to trigger very infrequently - it would be a waste of memory to keep all those job objects around.
However, if you are using Spring's support for Quartz, especially MethodInvokingJobDetailFactoryBean, Spring handles the lifecycle of your job (it basically calls a designated method on one of your beans). But that seems not to be the case in your application.
Of course, after the job is done and no other references point to it (which is the normal case), the garbage collector will eventually release the memory occupied by the job.
Finally, about threads - Quartz creates a fixed pool of worker threads (see the org.quartz.threadPool.threadCount configuration option). Every time a job runs, Quartz may pick a different thread from that pool - but it won't create a new thread for every trigger.
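As a minimal sketch (plain Quartz 2.x API without Spring; the job class and identity names are made up for illustration), you can log the instance identity and thread name on every firing and watch both effects:

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class HelloJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // A different identity hash prints on every firing, because the JobFactory
        // constructs a fresh HelloJob instance for each trigger firing, while the
        // thread name comes from Quartz's fixed worker pool and repeats.
        System.out.println("instance=" + System.identityHashCode(this)
                + " thread=" + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = JobBuilder.newJob(HelloJob.class).withIdentity("helloJob").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(5)
                        .repeatForever())
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}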

This describes version 2.1.5 (the latest at the time of writing), but it is likely also true for other versions.
The job instance is created by an instance of JobFactory via its newJob method (SimpleJobFactory, for example). The call to newJob happens in the initialize method of the JobRunShell class. The JobRunShell object is held in a local variable of QuartzSchedulerThread.run and is not stored in any other list or field.
So a new job instance is created for every firing, and after execution it is cleaned up normally by the garbage collector.
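If you want to watch this happen, a rough sketch of a custom JobFactory (the name LoggingJobFactory is made up; it just mimics what SimpleJobFactory does and logs each instantiation), which you would install with scheduler.setJobFactory(new LoggingJobFactory()):

import org.quartz.Job;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.spi.JobFactory;
import org.quartz.spi.TriggerFiredBundle;

public class LoggingJobFactory implements JobFactory {
    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        Class<? extends Job> jobClass = bundle.getJobDetail().getJobClass();
        System.out.println("Creating a new " + jobClass.getSimpleName() + " for this firing");
        try {
            // Same as SimpleJobFactory: a brand-new instance per trigger firing.
            return jobClass.newInstance();
        } catch (Exception e) {
            throw new SchedulerException("Could not instantiate job " + jobClass.getName(), e);
        }
    }
}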

Related

Do I need to clean up Thread objects in Java?

In my Java application I have a Runnable such as:
this.runner = new Runnable() {
    @Override
    public void run() {
        // do something that takes roughly 5 seconds.
    }
};
I need to run this roughly every 30 seconds (although this can vary) in a separate thread. The nature of the code is such that I can run it and forget about it (whether it succeeds or fails). I do this as follows as a single line of code in my application:
(new Thread(this.runner)).start()
Now, this works fine. However, I'm wondering if there is any sort of cleanup I should be doing on each of the thread instances after they finish running? I am doing CPU profiling of this application in VisualVM and I can see that, over the course of 1 hour runtime, a lot of threads are being created. Is this concern valid or is everything OK?
N.B. The reason I start a new Thread instead of simply defining this.runner as a Thread, is that I sometimes need to run this.runner twice simultaneously (before the first run call has finished), and I can't do that if I defined this.runner as a Thread since a single Thread object can only be run again once the initial execution has finished.
Java objects that need to be "cleaned up" or "closed" after use conventionally implement the AutoCloseable interface. This makes it easy to do the clean up using try-with-resources. The Thread class does not implement AutoCloseable, and has no "close" or "dispose" method. So, you do not need to do any explicit clean up.
However
(new Thread(this.runner)).start()
is not guaranteed to immediately start computation of the Runnable. You might not care whether it succeeds or fails, but I guess you do care whether it runs at all. And you might want to limit the number of these tasks running concurrently. You might want only one to run at once, for example. So you might want to join() the thread (or, perhaps, join with a timeout). Joining the thread will ensure that the thread completes its computation before your code proceeds. Joining the thread with a timeout increases the chance that the thread starts its computation (because the current thread will be suspended, freeing a CPU that might run the other thread).
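For example, a rough sketch of start-then-join-with-timeout (the 10-second timeout is an arbitrary value, not something from the question):

Thread worker = new Thread(this.runner);
worker.start();
try {
    // Wait up to 10 seconds for the roughly 5-second task; join() returns
    // earlier if the thread finishes sooner.
    worker.join(10_000);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();   // restore the interrupt flag and move on
}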
However, creating multiple threads to perform regular or frequent tasks is not recommended. You should instead submit tasks to a thread pool. That will enable you to control the maximum amount of concurrency, and can provide you with other benefits (such as prioritising different tasks), and amortises the expense of creating threads.
You can configure a thread pool to use a bounded (fixed-length) task queue and to cause submitting threads to execute submitted tasks themselves when the queue is full (ThreadPoolExecutor.CallerRunsPolicy). By doing that you can guarantee that tasks submitted to the thread pool are (eventually) executed. The documentation of ThreadPoolExecutor.execute(Runnable) says it
Executes the given task sometime in the future
which suggests that the implementation will eventually run all submitted tasks, even if you do not take those specific steps to ensure submitted tasks are executed.
I recommend looking at the Concurrency API; it provides numerous ready-made building blocks for general use. With an ExecutorService you can call the shutdown method after submitting your tasks: the executor stops accepting new tasks, finishes executing the previously submitted ones, and then terminates (use awaitTermination if you need to block until that happens).
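Putting those pieces together, a minimal sketch (pool size, queue capacity and timeout are arbitrary example values):

import java.util.concurrent.*;

// Fixed-size pool with a bounded queue; when the queue is full the submitting
// thread runs the task itself (CallerRunsPolicy), so no accepted task is lost.
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        4, 4,                                // core and maximum pool size
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(100),       // bounded task queue
        new ThreadPoolExecutor.CallerRunsPolicy());

executor.execute(this.runner);               // instead of new Thread(this.runner).start()

// On application shutdown: stop accepting new tasks, let queued tasks finish.
executor.shutdown();
try {
    if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
        executor.shutdownNow();              // give up and cancel whatever is still running
    }
} catch (InterruptedException e) {
    executor.shutdownNow();
    Thread.currentThread().interrupt();
}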
For a short introduction:
https://www.baeldung.com/java-executor-service-tutorial

Camunda MockExpressionManager doesn't work when the delegate invoked by timer

I've configured Camunda engine with org.camunda.bpm.engine.test.mock.MockExpressionManager.
At first glance it works as expected: when I do
Mocks.register("myDelegate", myDelegateMock), the bpmn process invokes my mock, but not the real delegate.
But when there is a task that is invoked by a timer boundary event, the mock is ignored and the real delegate is invoked.
I've looked at the code and found that mocks are stored in a ThreadLocal. If the task is invoked by a timer, the execution happens in a different thread, and that looks like the root cause of this behaviour. Probably mocks will also not work if the task is marked as asynchronous.
I've also tried the extension
https://github.com/camunda/camunda-bpm-mockito
but it looks like it uses the same Mocks.register internally, and it also doesn't work for me.
Maybe there are some other ways to mock the delegate that will work for the timer case?
Well, this is already answered in the thread you mentioned:
Mocks.register is meant to be used in a purely single-threaded,
no-job-executor, "unit test" environment. In such an environment,
instead of setting the time and waiting for the job executor to
process the jobs, you need to explicitly trigger the timer job in your
own testing thread:
Job job = processEngineRule.getManagementService().createJobQuery().singleResult();
processEngineRule.getManagementService().executeJob(job.getId());
Then it should happily resolve the name and should work.
So the solution is: let the process run into the timer event, and then manually execute the job so the process continues as if the timer had fired. This is a good idea even without the single-thread problem: do not simulate timers in Camunda tests; just verify that the process is waiting at the correct step and check that the timer condition (due date) is equal to the one you expected.
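For illustration, a rough sketch of such a test (the process key "myProcess", the delegate name and the mock are placeholders; it assumes a JUnit ProcessEngineRule and a Mockito mock of the JavaDelegate, with the usual imports):

// Register the mock so ${myDelegate} resolves to it (single-threaded test, no job executor).
Mocks.register("myDelegate", myDelegateMock);

ProcessInstance pi = processEngineRule.getRuntimeService()
        .startProcessInstanceByKey("myProcess");

// The process is now waiting on the timer boundary event; find its job and run it
// in the test thread, so the ThreadLocal holding the registered mocks is visible.
Job timerJob = processEngineRule.getManagementService()
        .createJobQuery()
        .processInstanceId(pi.getId())
        .singleResult();
processEngineRule.getManagementService().executeJob(timerJob.getId());

// The activity behind the timer executed in this thread, so the mock was invoked.
verify(myDelegateMock).execute(any(DelegateExecution.class));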

thread pool - make a new one per task, detect when a set of tasks is done

I am running concurrent tasks via ThreadPoolExecutors. Since I have 2-3 sets of tasks to do, for now I have a map of ThreadPoolExecutors and can send a set of tasks to one of them.
Now I want to know when a pool has completed all the tasks assigned to it. The way it's organized is that I know the list of tasks beforehand, so I send them to a newly constructed pool and then plan to start polling/tracking to know when all are done.
One way would be to have another pool with 1-2 threads that polls the other pools to know when their queues are empty. If a few scans show them as empty (with a one-second sleep between polls), it assumes they are done.
Another way would be to subclass ThreadPoolExecutor, keep track via the queue, and override afterExecute(Runnable r, Throwable t), so I know exactly when each task is done; good for showing status and for knowing when all are complete if everything is moving smoothly.
Is there an implementation of the second somewhere? It would be good to have an interface that listeners can implement and then add themselves to the subclassed executor.
I'm also looking for an implementation that can:
Ask a pool to shut down within a timeout,
If after the timeout the shutdown is not complete, call shutdownNow(),
And if that fails, get the thread factory and stop all threads in its group (assuming we set the factory and it uses a group or some other way to get a reference to all its threads).
Basically, as sure a way as we can to clean up a pool, so that we can have this running in an app container. Some of the tasks call Selenium etc., so there can be hung threads.
The last resort would be to restart the container (Tomcat/JBoss), but we want that to be truly the last resort.
The question is - do you know of an open-source implementation of this, or any code to start off with?
For your first question, you can use an ExecutorCompletionService. It adds every completed task to a queue, so by blocking on that queue you can wait until all tasks have arrived.
Or create a subclass of FutureTask and override its done method to define the “after execute” action. Then submit instances of this class wrapping your jobs to the executor.
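A small sketch of the first (completion-service) approach (the pool size is an arbitrary example; the known set of tasks is passed in as Callables):

import java.util.List;
import java.util.concurrent.*;

// Submits a known list of tasks and blocks until every one of them has completed.
static void runAndAwait(List<Callable<Void>> tasks) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    CompletionService<Void> done = new ExecutorCompletionService<>(pool);
    for (Callable<Void> task : tasks) {
        done.submit(task);
    }
    for (int i = 0; i < tasks.size(); i++) {
        done.take();        // blocks until the next task finishes, in completion order
    }
    pool.shutdown();        // all tasks of this set are finished
}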
The second question has a straightforward solution. "Shut down within a timeout, and if after the timeout the shutdown is not complete then call shutdownNow()":
executor.shutdown();
if (!executor.awaitTermination(timeout, timeUnit))
    executor.shutdownNow();
Stopping threads is something you shouldn’t do (Thread.stop is deprecated for a good reason). But you may invoke cancel(true) on your jobs. That could accelerate the termination if your tasks support interruption.
By the way, it looks very unnatural to me to have multiple ThreadPoolExecutors and to play around with shutting them down, instead of simply having one ThreadPoolExecutor for all jobs and letting that ThreadPoolExecutor manage the life cycle of all threads. That's what the ThreadPoolExecutor is made for.

Any available design pattern for a thread that is capable of executing a specific job sent by another threads?

I'm working on a project where execution time is critical. In one of the algorithms I have, I need to save some data into a database.
What I did is call a method that does that. It fires a new thread every time it's called. I ran into an out-of-memory problem since more than 20,000 threads had been created...
My question now is: I want to start only one thread; when the method is called it adds the job to a queue and notifies the thread, and the thread sleeps when no jobs are available, and so on. Are there any design patterns or examples available online?
Run, do not walk to your friendly Javadocs and look up ExecutorService, especially Executors.newSingleThreadExecutor().
ExecutorService myXS = Executors.newSingleThreadExecutor();
// then, as needed...
myXS.submit(myRunnable);
And it will handle the rest.
Yes, you want a worker thread or thread pool pattern.
http://en.wikipedia.org/wiki/Thread_pool_pattern
See http://www.ibm.com/developerworks/library/j-jtp0730/index.html for Java examples
I believe the pattern you're looking for is called producer-consumer. In Java, you can use the blocking methods on a BlockingQueue to pass tasks from the producers (that create the jobs) to the consumer (the single worker thread). This will make the worker thread automatically sleep when no jobs are available in the queue, and wake up when one is added. The concurrent collections should also handle using multiple worker threads.
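A bare-bones sketch of that pattern (the class name DbWriteWorker is made up for illustration; a single worker thread drains the queue and blocks when it is empty):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DbWriteWorker {
    private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                // take() blocks (the thread "sleeps") until a job is available.
                jobs.take().run();
            }
        } catch (InterruptedException e) {
            // Interrupted: stop the worker.
        }
    }, "db-write-worker");

    public DbWriteWorker() {
        worker.start();
    }

    // Called by the algorithm instead of spawning a new thread per save.
    public void submit(Runnable saveJob) {
        jobs.add(saveJob);
    }

    public void stop() {
        worker.interrupt();
    }
}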
Are you looking for java.util.concurrent.Executor?
That said, if you have 20,000 concurrent inserts into the database, using a thread pool will probably not save you: if the database can't keep up, the queue will get longer and longer until you run out of memory again. Also, note that an executor's queue lives only in memory, i.e. if the server crashes, the data in it will be gone.

ScheduledThreadPoolExecutor and corePoolSize 0?

I'd like to have a ScheduledThreadPoolExecutor which also stops the last thread if there is no work to do, and which creates threads (and keeps them alive for some time) when there are new tasks. But once there is no more work to do, it should again discard all threads.
I naively created it as new ScheduledThreadPoolExecutor(0), but as a consequence no thread is ever created and no scheduled task is ever executed.
Can anybody tell me if I can achieve my goal without writing my own wrapper around the ScheduledThreadpoolExecutor?
Thanks in advance!
Actually you can do it, but it's non-obvious:
Create a new ScheduledThreadPoolExecutor
In the constructor, set the core threads to the maximum number of threads you want
Set the keepAliveTime of the executor
And at last, allow the core threads to time out
m_Executor = new ScheduledThreadPoolExecutor(16);
m_Executor.setKeepAliveTime(5, TimeUnit.SECONDS);
m_Executor.allowCoreThreadTimeOut(true);
This only works with Java 6 and later, though.
I suspect that nothing provided in java.util.concurrent will do this for you, just because if you need a scheduled execution service, then you often have recurring tasks to perform. If you have a recurring task, then it usually makes more sense to just keep the same thread around and use it for the next recurrence of the task, rather than tearing down your thread and having to build a new one at the next recurrence.
Of course, a scheduled executor could be used for inserting delays between non-recurring tasks, or it could be used in cases where resources are so scarce and recurrence is so infrequent that it makes sense to tear down all your threads until new work arrives. So, I can see cases where your proposal would definitely make sense.
To implement this, I would consider trying to wrap a cached thread pool from Executors.newCachedThreadPool together with a single-threaded scheduled executor service (i.e. new ScheduledThreadPoolExecutor(1)). Tasks could be scheduled via the scheduled executor service, but the scheduled tasks would be wrapped in such a way that rather than having your single-threaded scheduled executor execute them, the single-threaded executor would hand them over to the cached thread pool for actual execution.
That compromise would give you a maximum of one thread running when there is absolutely no work to do, and it would give you as many threads as you need (within the limits of your system, of course) when there is lots of work to do.
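A rough sketch of that compromise (the class name and the 30-second delay are just illustrative):

import java.util.concurrent.*;

public class ScheduleOnCachedPool {
    public static void main(String[] args) {
        // One scheduling thread that only decides *when* tasks run; a cached pool
        // that actually runs them and shrinks back to zero threads when idle.
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        ExecutorService workers = Executors.newCachedThreadPool();

        Runnable task = () -> System.out.println("doing the real work");

        // The scheduled wrapper merely hands the real task over to the cached pool.
        timer.schedule(() -> workers.submit(task), 30, TimeUnit.SECONDS);
    }
}

The cached pool drops its threads after its default 60-second idle timeout, so when there is no work only the single scheduling thread remains.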
Reading the ThreadPoolExecutor javadocs might suggest that Alex V's solution is okay. However, doing so will result in unnecessarily creating and destroying threads, nothing like a cached thread pool. The ScheduledThreadPoolExecutor is not designed to work with a variable number of threads. Having looked at the source, I'm sure you'll end up spawning a new thread almost every time you submit a task. Joe's solution should work even if you are ONLY submitting delayed tasks.
PS. I'd monitor your threads to make sure you're not wasting resources in your current implementation.
This problem is a known bug in ScheduledThreadPoolExecutor (Bug ID 7091003) and has been fixed in Java 7u4. Though looking at the patch, the fix is that "at least one thread is started even if corePoolSize is 0."
