I'm building a small Employee Management System. The application has a Quartz scheduler that is used for tracking daily employee attendance, leave info, etc. This batch is scheduled to run every day at 11 PM.
Now, I have made some changes to my Java code for the leave calculation; this code is supposed to run under the batch. It works fine in my local environment as well as in the DEV environment, but after releasing it to PROD, the new code changes are not reflected when the batch runs. There are no error messages in the log, and the scheduler does fire at 11 PM, but the new code changes are not reflected in PROD.
One thing I would like to mention is that my local scheduler and the DEV scheduler are started and stopped manually by the user through a GUI, whereas the PROD scheduler remains running throughout the entire year for everyday record tracking.
Can anyone suggest a feasible solution? Remember, I'm getting this problem only on the PROD server.
You need to make sure your scheduler is destroyed when you undeploy the application. It's probably still firing for code from the previous version, because it created its own thread which doesn't get stopped.
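As a rough sketch of that, assuming a WAR deployed on a servlet container and that you keep a reference to your Quartz Scheduler somewhere reachable (the SchedulerHolder lookup and the listener class name below are just placeholders), you could shut the scheduler down from a ServletContextListener so the old worker threads die when the old version is undeployed:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.quartz.Scheduler;
import org.quartz.SchedulerException;

// Hypothetical listener: register it in web.xml (or annotate with @WebListener)
// so the scheduler is stopped when the application is undeployed.
public class SchedulerShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // the scheduler is created and started elsewhere in the application
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        Scheduler scheduler = SchedulerHolder.getScheduler(); // your own lookup
        if (scheduler != null) {
            try {
                // true = wait for currently running jobs to finish before shutting down
                scheduler.shutdown(true);
            } catch (SchedulerException e) {
                sce.getServletContext().log("Failed to shut down Quartz scheduler", e);
            }
        }
    }
}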
If this is too hard, just restart the PROD server.
You might also want to look into Java EE scheduled tasks instead of Quartz: http://docs.oracle.com/javaee/6/tutorial/doc/bnboy.html
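For reference, a minimal sketch of that Java EE alternative (a container-managed timer on a singleton EJB, assuming your PROD server is a full Java EE container; the class and method names are just examples):

import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class AttendanceBatch {

    // Container-managed timer that fires every day at 23:00; no scheduler
    // thread of your own to clean up when the application is undeployed.
    @Schedule(hour = "23", minute = "0", second = "0", persistent = false)
    public void runDailyBatch() {
        // leave calculation / attendance tracking logic goes here
    }
}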
Make sure you have undeployed it cleanly and deploy it again.
Your old code probably still exists in the staging area of the PROD server.
I have a Spring Boot microservice deployed in Azure that is supposed to run at a fixed rate using the @Scheduled Spring annotation.
When I run it locally, it performs exactly as expected.
When deployed in Azure, it seems to be a mixed bag as to when it will run as scheduled.
During off-peak hours (~00:00 - 8:00AM) it seems to work as scheduled with a little variation here and there.
However, during peak business hours (12:00 PM - 6:00 PM) the scheduled times can vary DRASTICALLY.
A service that should be running once every minute will run potentially every 5 minutes during this time.
The service is required to stay up and running (I can't just kick off the service anew when it's scheduled). It has a list of customers that it loops through (the list is fetched from a DB when the service first starts or whenever it gets through the whole list). Each time it is scheduled it works through a fixed number of customers, then moves on to the next fixed set until it has gone through them all, and then it starts the process anew.
Is this due to throttling during peak hours?
Does anyone know of a good way to keep my service firing on its schedule, or an Azure alternative to the @Scheduled annotation?
Thanks
Use fixedRate
fixedRate: makes Spring run the task at periodic intervals even if the last invocation may still be running.
fixedDelay: controls the next execution time relative to when the last execution finishes. In code:
@Scheduled(fixedDelay = 5000)
public void updateEmployeeInventoryWithFixedDelay() {
    // next run starts 5 seconds after the previous run finishes
}

@Scheduled(fixedRate = 5000)
public void updateEmployeeInventoryWithFixedRate() {
    // runs every 5 seconds, measured from the start of the previous run
}
I got the answer from here: How wait @Scheduled till previous task is not finished?
Every time we add a new attribute to items.xml, we have to execute a Hybris update, otherwise we get an error like: JaloItemNotFoundException: no attribute Cart.newAttribute
But sometimes, after executing an update, instead of getting JaloItemNotFoundException we get something like:
de.hybris.platform.servicelayer.exceptions.AttributeNotSupportedException: cannot find attribute newAttribute
For this second case, it always works if we restart the server after the update.
Is there any other way to fix that besides restarting the server after the update?
I worked for a company years ago that added this restart as a "deploy step" after the update. I am trying to avoid that here.
I tried executing several updates and cleaning the type cache, but no luck.
Platform Update with "Update Running System" is usually enough. If you have localization, impex, or some other changes, you might need to include the other options or extensions.
If you have a clustered environment, make sure all nodes have been updated / refreshed as well.
Make sure that your build and deploy process is something like:
Build
Deploy
Restart Server. You stop/start manually (or by script), or let Hybris restart itself when it detects changes from the deployment.
Run Platform Update
You can try updating the platform directly after the build, from the command line (i.e. "ant updatesystem"), before starting the server.
The restart after deploy is a pretty common step (in case the system update is performed with the server started).
I believe one of the reasons the restart is needed is that the Spring context needs to be reinitialized, since some of the beans need the new type system information.
For example, let's say you need to create a new type and an interceptor for that newly created type. When deploying this change you do the following:
Change the binaries and start the server
Perform an update system in order for the database to get the latest columns and so on
Now if you check whether the interceptor is working, you will see that it does not work, because when its Spring bean was instantiated (during server startup) the type it is supposed to handle was not yet present in the database.
Because of that, the interceptor works as expected only after a restart.
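To make that example concrete, here is a minimal sketch of such an interceptor, assuming a hypothetical NewThingModel item type (the registration is normally done with an InterceptorMapping bean in your extension's *-spring.xml):

import de.hybris.platform.servicelayer.interceptor.InterceptorContext;
import de.hybris.platform.servicelayer.interceptor.InterceptorException;
import de.hybris.platform.servicelayer.interceptor.ValidateInterceptor;

// Hypothetical interceptor for a newly added item type NewThingModel.
// Its Spring bean is created at server startup, which is why the new
// type system already needs to be in place for it to work.
public class NewThingValidateInterceptor implements ValidateInterceptor<NewThingModel> {

    @Override
    public void onValidate(final NewThingModel model, final InterceptorContext ctx) throws InterceptorException {
        if (model.getCode() == null || model.getCode().isEmpty()) {
            throw new InterceptorException("code must not be empty");
        }
    }
}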
PS: The above described Interceptor problem might have been fixed somehow in the latest Hybris Versions.
I had earlier added Quartz scheduler code to our application.
The job was scheduled to run every hour.
But now I have removed that code and deployed a new WAR which does not contain it.
However, I can still see log entries like the one below every hour:
10:00:00.001 [org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-4]
How can I kill these threads?
If you are making use of the Quartz JDBC job store, you need to delete the trigger and job detail records from the database.
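Alternatively, as a rough sketch (assuming you can still obtain a Scheduler connected to the same JDBC job store), you can let Quartz remove the leftover jobs and their triggers through its API instead of deleting the rows by hand:

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.matchers.GroupMatcher;

public class RemoveLeftoverJobs {

    public static void main(String[] args) throws Exception {
        // Assumes quartz.properties points at the same JDBC job store
        // that the removed scheduler code used.
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();

        for (JobKey jobKey : scheduler.getJobKeys(GroupMatcher.anyJobGroup())) {
            // deleteJob removes the job detail and all triggers that reference it
            scheduler.deleteJob(jobKey);
        }

        scheduler.shutdown();
    }
}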
We run several Spring Batch jobs within Tomcat, in the same web application that serves up our UI. Lately we have been adding many more jobs, and we are noticing that when we patch our app, several jobs may get stuck in a STARTING or STARTED status. Many of those jobs check that another job is not running before they start, so after we patch the server some of our jobs are broken until we manually run SQL to update the job statuses to ABANDONED or STOPPED.
I have read here that JobScope and StepScope jobs don't play nicely with shutting down.
That article suggests not using JobScope or StepScope but I can't help but think that this is a solved problem where people must be doing something when the application exits to prevent this problem.
Are there some best practices for handling this scenario? What are you doing in your applications?
We are using spring-batch version 3.0.3.RELEASE
I will give you an idea of how to solve this scenario; it is not necessarily a spring-batch-specific solution.
Every time I need to add jobs to an application, I do the following:
Create a table to control the jobs (queue, priority, status, etc.)
Create a JobController class to manage all jobs
All jobs are identified by a status: R (running), F (finished), Q (queued); you can add more as needed, such as aborted or cancelled (the jobs themselves control these statuses)
The JobController must be loaded only once; you can define it as a Spring bean for this
Add a boolean attribute to JobController that records whether the jobs have already been checked when it was instantiated, and set it to false
Check whether there are jobs with the R status, which would mean they were running at the last stop of the server; update every job with the R status to Q and increase its priority so it gets executed first after the server restarts. This check goes inside an if on that boolean attribute; after the check, set the attribute to true
That way, every time you call the JobController for the first time and there are unfinished jobs from a server crash, you can set them all to a status where they can be executed again, and this check happens only once because you test that boolean attribute. A minimal sketch of this recovery check is shown below.
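That recovery check could look like this minimal sketch, assuming a plain JDBC table named jobs with STATUS and PRIORITY columns (all names here are illustrative, not from Spring Batch or any other framework):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.sql.DataSource;

// Illustrative controller: on first use it re-queues jobs that were left
// in the "running" state by a crash or an unclean shutdown.
public class JobController {

    private final DataSource dataSource;
    private boolean recoveryChecked = false;

    public JobController(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public synchronized void recoverInterruptedJobsOnce() throws SQLException {
        if (recoveryChecked) {
            return;
        }
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "UPDATE jobs SET status = 'Q', priority = priority + 1 WHERE status = 'R'")) {
            // Jobs that were still marked as running when the server stopped go
            // back to the queue with a bumped priority so they run first.
            ps.executeUpdate();
        }
        recoveryChecked = true;
    }
}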
One thing you should be careful about is your jobs' priority; if you manage it wrong you may run into a starvation problem.
You can easily adapt this solution to spring-batch.
Hope it helps.
So I started to tinker around with JDBCJobStore in Quartz. Firstly, I could not find a single good resource on how to configure it from scratch. After looking for it for a while and singling out a good resource for beginners, I downloaded the sample application at Job scheduling with Quartz. I have a few doubts regarding it.
How does JDBCJobStore capture jobs? I mean, in order for a job to get stored in the database, does the job have to be run manually once? Or will JDBCJobStore automatically detect the jobs and their details?
How does JDBCJobStore schedule the jobs? Does it hit the database at a fixed interval, like a heartbeat, to check whether there are any scheduled jobs? Or does it keep the triggers in memory while the application is running?
In order to run the jobs, will I have to manually specify the details of the job, like name and group, and fetch the trigger accordingly? Is there any alternative to this?
On each application restart, how can I tell the scheduler to start automatically? Can this be specified somehow?
If you are using servlet/app server you can start it during startup:
http://quartz-scheduler.org/documentation/quartz-2.2.x/cookbook/ServletInitScheduler
If you are running standalone, you have to initialize it manually, I think.
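For the standalone case, a minimal sketch (assuming a quartz.properties on the classpath that configures the JDBCJobStore) might look like this:

import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerBootstrap {

    public static void main(String[] args) throws Exception {
        // Reads quartz.properties from the classpath, including the
        // org.quartz.jobStore.* settings for the JDBC job store.
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();

        // Jobs and triggers already persisted in the database are picked up
        // and fired once the scheduler is started.
        scheduler.start();
    }
}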
You can read more about JobStores here:
http://quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-09
And about jobs and triggers:
http://quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-02
http://quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-03
http://quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-04
I guess that Quartz checks jobs based on a time interval so that it works properly in clusters and distributed systems.