Quartz, scheduled process always alive - Java

Currently, I have a web application based on Java 7, Tomcat 7 and Spring 4 that spawns a thread on Tomcat startup.
This thread is always alive, and the Java code is:
public class Scheduler {

    // ArrayBlockingQueue requires a capacity; Queue has no get(), so poll() is used below
    private static final BlockingQueue<Long> queue = new ArrayBlockingQueue<Long>(1024);

    private static class ThreadExecutor implements Runnable {
        .......
        @Override
        public void run() {
            while (true) {
                Long id = queue.poll(); // returns null when the queue is empty
                if (id != null) {
                    Object o = webFacade.get(id);
                    //Exec....
                } else {
                    try {
                        Thread.sleep(30000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}
An external event fills the queue with the objects' IDs.
With one Tomcat this thread works well, but now I need to add another Tomcat, so I want to introduce Quartz in clustered mode.
I've configured Quartz in my project and it seems to work, but how can I "translate" this class to Quartz? I want only one thread to be active across the cluster, because this work is very expensive for my database.
Thanks in advance

In general, Quartz running in clustered mode guarantees that a job will be triggered (and handled) on one server only.
So the Job will be the task that you execute (in other words, what should be executed).
Quartz also introduces the concept of a Trigger, which defines when the job will be triggered.
From your code snippet, you run the work every 30000 ms = 30 seconds, so you'll trigger your stuff every 30 seconds (a SimpleTrigger will do the job).
So the 'while' loop goes away; it will be handled by Quartz automatically.
In the job you'll only work with the queue. It's unclear who fills this queue, but that looks like a different question.
It's hard to say exactly how you translate the queue, but in general the job should:
Get an ID from the queue
Call webFacade just like now
That's it. Last but not least, Spring has a beautiful integration with Quartz. See Chapter 33.6.
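For example, the translation might look roughly like this. It's only a sketch: QuartzQueueSetup, DrainQueueJob, the static QUEUE and the main() wiring are illustrative stand-ins, and in a real cluster the pending IDs would have to live somewhere shared (such as a database table), since an in-memory queue is per-JVM.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzQueueSetup {

    // Stand-in for the queue that the external event fills.
    static final Queue<Long> QUEUE = new ConcurrentLinkedQueue<Long>();

    // The job body replaces the while(true)/sleep loop entirely.
    @DisallowConcurrentExecution
    public static class DrainQueueJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            Long id;
            while ((id = QUEUE.poll()) != null) {
                // Object o = webFacade.get(id); //Exec.... as before
            }
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = JobBuilder.newJob(DrainQueueJob.class)
                .withIdentity("drainQueueJob").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(30)
                        .repeatForever())
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}

With the clustered JDBC job store configured in quartz.properties, only one node fires each 30-second trigger, which gives you the "only one active worker" behaviour you want for the database.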

Related

Spring @Scheduled job that will run every second but at a specific number of milliseconds after the second

I'm trying to run a scheduled job in spring boot that will run every second. That in itself is easy with a cron job or with the delay/rate attributes. The issue is that I don't want it to run on the second. I want it to run at a specified number of milliseconds after the second.
For example, run every second at 900 ms past the second. So the logs would look like this:
System started at 20:00:00:000.
20:00:00:900 - log
20:00:01:900 - log
20:00:02:900 - log
It's important that it is not dependent on when the system starts. It has to be at that specified time every second.
Cron jobs are too imprecise to do this, but surely something already exists that can?
The top answer in the following thread mentions creating a custom Trigger for that issue. Would that be possible for this?
Spring's #Scheduled cron job is triggering a few milliseconds before the scheduled time
Cron supports resolution up to the second, so I think the only option is implementing Trigger on your own, as mentioned in the other question.
My idea is to write a trigger which recalculates the next execution time with the required milliseconds after the second. A somewhat naive implementation, just to illustrate the idea by reusing Spring's CronTrigger, may look like this:
public class DelayingCronTrigger extends CronTrigger {

    private final long delayMs;

    public DelayingCronTrigger(String expression, long delayMs) {
        super(expression);
        this.delayMs = delayMs;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        Date next = super.nextExecutionTime(triggerContext);
        if (next == null) {
            return null;
        }
        long delayedNext = next.getTime() + this.delayMs;
        return new Date(delayedNext);
    }
}
Keep in mind that the parent class implementation I reuse here bases the next execution on the completion time of the previous execution, in order to avoid overlapping executions. If the actual tasks you are running take longer, you may need to rewrite the logic from scratch, basing it on lastScheduledExecutionTime() or lastActualExecutionTime() (see the sketch after the configuration below).
After implementing the required trigger, you need to register the task programmatically:
@Configuration
@EnableScheduling
public class SchedulerConfig implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        String cron = "* * * ? * *";
        long delayMs = 900;
        taskRegistrar.addTriggerTask(
                () -> System.out.println("Time - " + LocalDateTime.now()),
                new DelayingCronTrigger(cron, delayMs));
    }
}
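If you want the schedule anchored to the cron boundaries instead of completion times, a hypothetical variant built on Spring's CronExpression (available since Spring 5.3) could look like the following sketch. FixedOffsetCronTrigger is an illustrative name, not Spring API. The trade-off is that a task slower than one second may now overlap with the next run, which is exactly what the completion-time-based CronTrigger avoids.

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.Date;
import org.springframework.scheduling.Trigger;
import org.springframework.scheduling.TriggerContext;
import org.springframework.scheduling.support.CronExpression;

public class FixedOffsetCronTrigger implements Trigger {

    private final CronExpression expression;
    private final Duration offset;

    public FixedOffsetCronTrigger(String expression, Duration offset) {
        this.expression = CronExpression.parse(expression);
        this.offset = offset;
    }

    @Override
    public Date nextExecutionTime(TriggerContext ctx) {
        Date last = ctx.lastScheduledExecutionTime();
        // Recover the cron boundary of the previous run (scheduled time minus
        // offset), then step to the boundary after it; completion time is ignored.
        LocalDateTime from = (last == null)
                ? LocalDateTime.now()
                : LocalDateTime.ofInstant(last.toInstant().minus(offset), ZoneId.systemDefault());
        LocalDateTime next = this.expression.next(from);
        if (next == null) {
            return null;
        }
        return Date.from(next.atZone(ZoneId.systemDefault()).toInstant().plus(offset));
    }
}

It would be registered the same way as above, e.g. new FixedOffsetCronTrigger("* * * ? * *", Duration.ofMillis(900)).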

Spring: PESSIMISTIC_READ/WRITE not working

I have two servers connected to the same database. Both have scheduled jobs; I don't really care which one runs them, as long as only one does. So the idea was to keep a key-value pair in the DB, and whichever app reads the value as 0 first gets to run the scheduled job.
Ideally this would work as follows:
App A and App B run the scheduled job at the same time.
App A accesses the DB first and locks the table for reading & writing.
App A sets the value to 1 and releases the lock.
App A starts working on the scheduled job.
App B reads the value 1 from its DB request and does not run the scheduled job.
I have a config table where I keep status on my locks.
config:
name: VARCHAR(55)
value: VARCHAR(55)
The repository:
@Repository
public interface ConfigRepository extends CrudRepository<Config, Long> {

    @Lock(LockModeType.PESSIMISTIC_READ)
    Config findOneByName(String name);

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    <S extends Config> S save(S entity);
}
The service:
@Service
public class ConfigService {

    @Autowired
    private ConfigRepository configRepository;

    @Transactional
    public void unlock(ConfigEnum lockable) {
        Config lock = configRepository.findOneByName(lockable.getSetting());
        lock.setValue("0");
        configRepository.save(lock);
    }

    @Transactional
    public void lock(ConfigEnum lockable) {
        Config lock = configRepository.findOneByName(lockable.getSetting());
        lock.setValue("1");
        configRepository.save(lock);
    }

    @Transactional
    public boolean isLocked(ConfigEnum lockable) {
        Config lock = configRepository.findOneByName(lockable.getSetting());
        return lock.getValue().equals("1");
    }
}
The Scheduler:
@Component
public class JobScheduler {

    @Autowired
    private ConfigService configService;

    @Autowired
    private JobService jobService;

    @Async
    @Scheduled(cron = "0 0 1 * * *")
    @Transactional
    public void run() {
        if (!configService.isLocked(ConfigEnum.CNF_JOB)) {
            configService.lock(ConfigEnum.CNF_JOB);
            jobService.run();
            configService.unlock(ConfigEnum.CNF_JOB);
        }
    }
}
However, I have noticed that the scheduled jobs still run at the same time on both apps. At times one will hit a deadlock, but it appears that Spring retries the transaction; by that time the other app has finished, so this one begins the same job again (not sure).
The tasks are not so short that the lock could be established, the table updated, the task run and the lock released before the other app checks. I would like to keep this really simple, without involving additional libraries like Quartz or ShedLock.
I think your transactions are too short. You don't start a transaction in the run method, but each ConfigService method is transactional. Most likely each method gets a new transaction and commits when done. A commit will release the lock, so there is a race condition between isLocked and lock.
Combine isLocked and lock:
@Transactional
public boolean tryLock(ConfigEnum lockable) {
    Config lock = configRepository.findOneByName(lockable.getSetting());
    if ("1".equals(lock.getValue())) {
        return false;
    }
    lock.setValue("1");
    configRepository.save(lock);
    return true;
}
This checks and writes in the same transaction and should work.
As a side note, this is a dangerous method. What happens if the node that holds the lock dies? There are many possible solutions. One is to lock a specific record and keep that lock throughout the job; the other node cannot proceed, and if the first one dies the lock is released. Another is to use a timestamp instead of 1 and require the owner to refresh the timestamp on a regular basis. Or you could introduce something like ZooKeeper.
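A sketch of the first option, a row lock held for the whole job, assuming the Config entity from the question and an existing row named CNF_JOB (ExclusiveJobRunner is an illustrative name):

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ExclusiveJobRunner {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void runExclusively(Runnable job) {
        // PESSIMISTIC_WRITE typically issues SELECT ... FOR UPDATE; the row stays
        // locked until this transaction ends, so the other node blocks here instead
        // of running the job twice. If this node dies, the database rolls back the
        // transaction and releases the lock automatically.
        em.createQuery("select c from Config c where c.name = :name", Config.class)
                .setParameter("name", "CNF_JOB")
                .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                .getSingleResult();
        job.run();
    }
}

The downside is that the second node waits instead of skipping; a javax.persistence.lock.timeout hint on the query can turn that wait into a fast failure that you catch and treat as "someone else is running it".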

Hazelcast Distributed Lock with iMap

We are currently using Hazelcast 3.1.5.
I have a simple distributed locking mechanism that is supposed to provide thread safety across multiple JVM nodes. Code is pretty simple.
private static HazelcastInstance hInst = getHazelcastInstance();
private IMap<String, Integer> mapOfLocks = null;
...
...
mapOfLocks = hInst.getMap("mapOfLocks");
if (mapOfLocks.get(name) == null) {
    mapOfLocks.put(name, 1);
    mapOfLocks.lock(name);
} else {
    mapOfLocks.put(name, mapOfLocks.get(name) + 1);
}
...
<STUFF HAPPENS HERE>
mapOfLocks.unlock(name);
..
}
Earlier, I used to call HazelcastInstance.getLock() directly and things seemed to work, though we never saw anything out of place when multiple JVMs were involved.
Recently, I was asked to investigate a database deadlock, and after weeks of investigation and log analysis I was able to determine it was caused by multiple threads being able to acquire the lock against the same key. Before the first thread can commit, a second thread manages to get another lock, at which point the second thread is blocked by the database lock held by the first thread.
Is there any outstanding bug against Hazelcast's implementation of distributed locks, or should I be doing anything differently in my configuration?
And, oh: my configuration has multicast disabled and tcp-ip enabled.
Here is how you can use IMap as a lock container.
You don't need to have an entry for the name present in the map in order to lock it:

HazelcastInstance instance = Hazelcast.newHazelcastInstance();
IMap<Object, Object> lockMap = instance.getMap("lockMap");

lockMap.lock(name);
try {
    // do some work
} finally {
    lockMap.unlock(name);
}

TomEE chokes on too many @Asynchronous operations

I am using Apache TomEE 1.5.2 JAX-RS, pretty much out of the box, with the predefined HSQLDB.
The following is simplified code. I have a REST-style interface for receiving signals:
@Stateless
@Path("signal")
public class SignalEndpoint {

    @Inject
    private SignalStore store;

    @POST
    public void post() {
        store.createSignal();
    }
}
Receiving a signal triggers a lot of stuff. The store will create an entity, then fire an asynchronous event.
public class SignalStore {

    @PersistenceContext
    private EntityManager em;

    @EJB
    private EventDispatcher dispatcher;

    @Inject
    private Event<SignalEntity> created;

    public void createSignal() {
        SignalEntity entity = new SignalEntity();
        em.persist(entity);
        dispatcher.fire(created, entity);
    }
}
The dispatcher is very simple, and merely exists to make the event handling asynchronous.
@Stateless
public class EventDispatcher {

    @Asynchronous
    public <T> void fire(Event<T> event, T parameter) {
        event.fire(parameter);
    }
}
Receiving the event is something else, which derives data from the signal, stores it, and fires another asynchronous event:
@Stateless
public class DerivedDataCreator {

    @PersistenceContext
    private EntityManager em;

    @EJB
    private EventDispatcher dispatcher;

    @Inject
    private Event<DerivedDataEntity> created;

    @Asynchronous
    public void onSignalEntityCreated(@Observes SignalEntity signalEntity) {
        DerivedDataEntity entity = new DerivedDataEntity(signalEntity);
        em.persist(entity);
        dispatcher.fire(created, entity);
    }
}
Reacting to that is even a third layer of entity creation.
To summarize, I have a REST call, which synchronously creates a SignalEntity, which asynchronously triggers the creation of a DerivedDataEntity, which asynchronously triggers the creation of a third type of entity. It all works perfectly, and the storage processes are beautifully decoupled.
Except for when I programmatically trigger a lot (e.g. 1000) of signals in a for-loop. Depending on my AsynchronousPool size, after processing roughly half that many signals (quite fast), the application completely freezes for up to several minutes. Then it resumes, processes about the same number of signals quite fast, and freezes again.
I have been playing around with the AsynchronousPool settings for the last half hour. Setting it to 2000, for instance, will easily make all my signals be processed at once, without any freezes. But the system isn't sane afterwards either: triggering another 1000 signals resulted in them being created all right, but the creation of derived data never happened.
Now I am completely at a loss as to what to do. I can of course get rid of all those asynchronous events and implement some sort of queue myself, but I always thought the point of an EE container was to relieve me of such tedium. Asynchronous EJB events should already bring their own queue mechanism. One which should not freeze as soon as the queue is too full.
Any ideas?
UPDATE:
I have now tried it with 1.6.0-SNAPSHOT. It behaves a little bit differently: It still doesn't work, but I do get an exception:
Aug 01, 2013 3:12:31 PM org.apache.openejb.core.transaction.EjbTransactionUtil handleSystemException
SEVERE: EjbTransactionUtil.handleSystemException: fail to allocate internal resource to execute the target task
javax.ejb.EJBException: fail to allocate internal resource to execute the target task
at org.apache.openejb.async.AsynchronousPool.invoke(AsynchronousPool.java:81)
at org.apache.openejb.core.ivm.EjbObjectProxyHandler.businessMethod(EjbObjectProxyHandler.java:240)
at org.apache.openejb.core.ivm.EjbObjectProxyHandler._invoke(EjbObjectProxyHandler.java:86)
at org.apache.openejb.core.ivm.BaseEjbProxyHandler.invoke(BaseEjbProxyHandler.java:303)
at <<... my code ...>>
...
Caused by: java.util.concurrent.RejectedExecutionException: Timeout waiting for executor slot: waited 30 seconds
at org.apache.openejb.util.executor.OfferRejectedExecutionHandler.rejectedExecution(OfferRejectedExecutionHandler.java:55)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
at org.apache.openejb.async.AsynchronousPool.invoke(AsynchronousPool.java:75)
... 38 more
It is as though TomEE would not do ANY queueing of operations. If no thread is free to process at the moment of the call, tough luck. Surely this cannot be intended?
UPDATE 2:
Okay, I seem to have stumbled upon a semi-solution: Setting the AsynchronousPool.QueueSize property to maxint solves the freeze. But questions remain: Why is the QueueSize so limited in the first place, and, more worryingly: Why would this block the entire application? If the queue is full, it blocks, but as soon as a task is taken from it, another should pop in, right? The queue appears to be blocked until it is completely empty again.
UPDATE 3:
For anyone who wants to have a go: http://github.com/JanDoerrenhaus/tomeefreezetestcase
UPDATE 4:
As it turns out, increasing the queue size does NOT solve the problem; it merely delays it. The problem remains the same: too many asynchronous operations at once, and TomEE chokes so badly that it cannot even undeploy the application on termination anymore.
So far, my diagnosis is that the task cleanup does not work properly. My tasks are all very small and fast (see the test case on GitHub). I was already afraid that it was OpenJPA or HSQLDB slowing down on too many concurrent calls, but I commented out all em.persist calls and the problem remained the same. So if my tasks are quite small and fast, but still manage to block TomEE so badly that it cannot get any further task in after 30 seconds (javax.ejb.EJBException: fail to allocate internal resource to execute the target task), I would imagine that completed tasks linger, clogging up the pipe, so to speak.
How could I resolve this issue?
Basically, BlockingQueues use locks to ensure the consistency of data and avoid data loss, so in a highly concurrent environment they will reject a lot of tasks (your case).
You can play on trunk with the RejectedExecutionHandler implementation to retry offering the task. One implementation can be:

new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(final Runnable r, final ThreadPoolExecutor executor) {
        for (int i = 0; i < 10; i++) {
            if (executor.getQueue().offer(r)) {
                return;
            }
            try {
                Thread.sleep(50);
            } catch (final InterruptedException e) {
                // no-op
            }
        }
        throw new RejectedExecutionException();
    }
}
It even works better with random sleep (between min and max).
The idea is basically: if the queue is full, wait some short time to reduce the concurrency.
It is configurable through WEB-INF/application.properties; see https://issues.apache.org/jira/browse/TOMEE-1012
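For illustration only, here is how such a retrying handler plugs into a plain java.util.concurrent pool; the TomEE-internal pool is configured through the properties above, not like this:

import java.util.concurrent.*;

public class RetryingPoolDemo {

    public static void main(String[] args) throws InterruptedException {
        RejectedExecutionHandler retrying = new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(final Runnable r, final ThreadPoolExecutor executor) {
                for (int i = 0; i < 10; i++) {
                    if (executor.getQueue().offer(r)) {
                        return; // a slot freed up after all
                    }
                    try {
                        Thread.sleep(50);
                    } catch (final InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
                throw new RejectedExecutionException("queue still full after retries");
            }
        };

        // Deliberately small pool and queue so that rejections actually happen.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(10), retrying);

        for (int i = 0; i < 1000; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(5); // stand-in for the real work
                    } catch (InterruptedException ignored) {
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}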

Java: Asynchronous task

For some HTTP requests from clients, there is very complex business logic on the server side.
Some of this business logic doesn't require responding to the client immediately, like sending an email to somebody. Can I put those tasks in an asynchronous method, so that I only need to ensure they get executed, without waiting for all tasks to complete before responding to the user?
Update: some people asked about the framework I am using. I am using Struts2 + Spring.
You can use the following 'fire and forget' pattern:
new Thread(new Runnable() {
    public void run() {
        System.out.println("I Am Sending Email");
        sendEmailFunction();
    }
}).start();
But too many such threads will lead to trouble. If you are going to do this, then you should use a ThreadPoolExecutor to ensure that you have some control over thread production. At the very least, place a maximum on the number of threads.
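A minimal bounded setup might look like this (pool and queue sizes are arbitrary examples):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// At most 4 worker threads and 100 queued tasks; CallerRunsPolicy makes the
// submitting thread run the task itself when saturated, instead of dropping it.
ExecutorService emailPool = new ThreadPoolExecutor(
        1, 4, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(100),
        new ThreadPoolExecutor.CallerRunsPolicy());

emailPool.submit(new Runnable() {
    public void run() {
        sendEmailFunction(); // same call as in the snippet above
    }
});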
I don't know what framework you're using, but, in basic Java, you can just create a new Thread:
interface MyTaskCallback {
    void myTaskCallback();
}

class MyTask implements Runnable {
    MyTaskCallback callback;
    Thread me;

    public MyTask(MyTaskCallback callback) {
        this.callback = callback;
    }

    public void start() {
        this.me = new Thread(this);
        this.me.start();
    }

    public void stop() {
        try {
            this.me.join(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public void run() {
        // Calls here will not block the other threads
        sendEmailRequest();
        callback.myTaskCallback();
    }
}

class Main implements MyTaskCallback {
    public void foo() {
        MyTask m = new MyTask(this);
        m.start();
    }

    public void myTaskCallback() {
        // called when MyTask completes
    }
}
Yes. Read about concurrency.
You can probably set up an asynchronous producer/consumer queue, for example.
There is no "asynchronous method" in Java, but you can either use threads (possibly through a framework like Quartz: http://www.quartz-scheduler.org/ ) or a message queue like JMS: http://java.sun.com/products/jms/
You want to look at java.util.concurrent.Executors. One way to solve your problem is to have a ScheduledExecutorService which keeps a queue and runs every so often. There are many different ways to offload work available in the concurrency utilities, however; it depends on your requirements, how expensive the tasks are, how fast they need to be done, etc.
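A minimal sketch of that queue-draining approach (all names here are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BackgroundMailer {

    private final BlockingQueue<String> pending = new LinkedBlockingQueue<String>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Called once at startup: drain whatever has accumulated, every 5 seconds.
    public void start() {
        scheduler.scheduleWithFixedDelay(new Runnable() {
            public void run() {
                String address;
                while ((address = pending.poll()) != null) {
                    // send the email here
                }
            }
        }, 0, 5, TimeUnit.SECONDS);
    }

    // Called from the request thread; returns immediately.
    public void enqueue(String address) {
        pending.offer(address);
    }
}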
You should respond to all HTTP requests immediately, otherwise the client may think the server is not responding, or time out. However, you can start other threads or processes to complete tasks in the background before you respond to the client.
You could also keep sending HTTP 100 (Continue) responses until the task is complete.
Yes, you can. Servlet 3.0 has great asynchronous support.
Watch this; it's a really great resource. You can watch the entire cast if you are unfamiliar with Servlet 3.0.
A good rundown of it is here.
The API docs.
Spring has good support for Quartz scheduling as well as Java threading. This link will give you a better idea about it.
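Since Spring is already in the stack (per the update above), a sketch using Spring's own @Async support; this assumes @EnableAsync (or <task:annotation-driven/>) in the configuration, and MailService is a hypothetical name:

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class MailService {

    @Async // Spring runs this on a worker thread; the caller returns immediately
    public void sendEmailAsync(String to) {
        // build and send the mail here
    }
}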
Can I put those tasks in an asynchronous method, so I don't need to wait for all tasks to complete to respond to the user?
YES