In my system, users can create a schedule with a time and conditions. Thirty minutes before the scheduled time, if the conditions are not satisfied, the system raises an alarm to notify the users.
My system consists of Spring Boot applications and uses Spring scheduled tasks to trigger alarms. The problem is that when a user creates a lot of schedules in the future, creating a scheduled task for each schedule record leads to memory problems.
My current solution is to create a scheduled task that runs once a day, scans all data for the next 24 hours, and creates scheduled tasks to trigger their alarms. This reduces the number of scheduled tasks created, but if a user creates new schedule data within the next 24 hours after the scan, no alarm will ever be triggered for it.
So what should I do?
Is there a reason you are scheduling all of this in JVM memory? If the JVM crashes (or is simply rebooted), the timers would be lost as if the user had never scheduled any alarm. As you mentioned, creating a timer per request would likely not be a scalable solution.
Without knowing the specific details of your system, the most common approach would be to persist the data (e.g. in a DB, flat file, etc.) each time a user requests to schedule an event. This way, in the event of a crash or reboot, you won't lose events. Similarly, this approach can scale to multiple servers if necessary. Then, at whatever granularity you support (e.g. minute, hour, day, etc.), a single monitor thread would find all of the events which have come due since the last run. Finally, once this thread has identified events that need an "alarm," it can either handle each event itself or submit them to a work queue for parallel processing.
More specifically, if you have alarms which could go off at any minute, you should schedule a monitor thread to run every minute. This thread should find all the events which require an alarm and then actually send those alarms.
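For illustration, here is a minimal sketch of that single monitor thread in Spring. The ScheduleRepository query method, the AlarmService, and the Schedule type are all illustrative names (not from your code), and @EnableScheduling is assumed to be active on the application:

import java.time.Duration;
import java.time.Instant;
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class AlarmMonitor {

    private final ScheduleRepository repository; // hypothetical data access
    private final AlarmService alarmService;     // hypothetical alarm sender

    public AlarmMonitor(ScheduleRepository repository, AlarmService alarmService) {
        this.repository = repository;
        this.alarmService = alarmService;
    }

    // One @Scheduled method in total, no matter how many schedules exist.
    @Scheduled(fixedRate = 60_000) // run every minute
    public void checkDueSchedules() {
        Instant now = Instant.now();
        // Schedules whose "30 minutes before" mark falls inside this
        // one-minute window and whose conditions are still unsatisfied.
        List<Schedule> due = repository.findUnsatisfiedDueBetween(
                now, now.plus(Duration.ofMinutes(1)));
        for (Schedule schedule : due) {
            alarmService.raiseAlarm(schedule); // or submit to a work queue
        }
    }
}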
Remember that how often you schedule your monitor thread is a function of the resolution you want for your alarms and your tolerance for late alarms. If late alarms are totally unacceptable, your monitor must run at least as often as the finest granularity at which an alarm event can be scheduled. This assumes, of course, that alarms are always scheduled in the future; otherwise, you will probably want to double the frequency of your monitoring checks. To see why, consider the following example:
minute 0: Run monitor
minute 0: User schedules alarm for minute 0
minute 1: Run monitor
If we run the monitor once per minute but allow the user to schedule an alarm in the current minute, it's quite possible that we'll miss the event (as shown in the example above). I can go into this more deeply if necessary, but this is here mostly for completeness, as nothing in your description indicates that this will actually pose a problem.
Good luck.
Related
I'm programming an update interface in my Android Things project. I can do a manual update with user input, but I'm trying to schedule an auto-update every night at midnight. I want to use a custom UpdatePolicy with a deadline, but I have failed to use it.
I tried this in the onCreate method of my activity:
mUpdateManager.setPolicy(
        new UpdatePolicy.Builder()
                .setPolicy(POLICY_APPLY_AND_REBOOT)
                .setUpdateDeadline(10, TimeUnit.SECONDS)
                .build());
But there isn't any update after 10 seconds.
Maybe I don't understand the deadline.
Am I using it wrong?
The deadline has nothing to do with when an update check is performed. The usual schedule of update checks is:
once shortly after boot
once every 5 hours (approximately) thereafter
(These times are not exact for reasons that aren't relevant to this discussion.)
The deadline reflects how long the device will let an available update sit without being applied before the device will force it to apply and reboot. The device doesn't know about an available update until it performs a check, so you could be waiting up to 5 hours for that.
The deadline is meant to operate on a longer timescale (for instance, 5 days, a week, etc). This is useful as a fallback in case there's some kind of bug with the update scheduler, or in case you allow users to postpone the update but don't want them to be able to do that forever.
To achieve what you want, you should schedule (using WorkManager, JobScheduler, etc.) a task that runs at midnight each day and calls UpdateManager.performUpdateNow(UpdatePolicy.POLICY_APPLY_AND_REBOOT).
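As a rough sketch of that approach with WorkManager: the worker class, the unique work name, and the use of UpdateManager.getInstance() are my assumptions here. WorkManager has no exact-time trigger, so the sketch computes the delay until the next midnight and re-arms itself on each run:

import java.util.Calendar;
import java.util.concurrent.TimeUnit;
import android.content.Context;
import androidx.work.ExistingWorkPolicy;
import androidx.work.OneTimeWorkRequest;
import androidx.work.WorkManager;
import androidx.work.Worker;
import androidx.work.WorkerParameters;
import com.google.android.things.update.UpdateManager;
import com.google.android.things.update.UpdatePolicy;

public class NightlyUpdateWorker extends Worker {

    public NightlyUpdateWorker(Context context, WorkerParameters params) {
        super(context, params);
    }

    @Override
    public Result doWork() {
        // Re-arm for the next midnight first: the policy below reboots
        // the device whenever an update is actually available.
        scheduleNextRun(getApplicationContext());
        UpdateManager.getInstance()
                .performUpdateNow(UpdatePolicy.POLICY_APPLY_AND_REBOOT);
        return Result.success();
    }

    // Call this once from onCreate as well, to arm the first run.
    public static void scheduleNextRun(Context context) {
        Calendar now = Calendar.getInstance();
        Calendar midnight = (Calendar) now.clone();
        midnight.add(Calendar.DAY_OF_YEAR, 1);
        midnight.set(Calendar.HOUR_OF_DAY, 0);
        midnight.set(Calendar.MINUTE, 0);
        midnight.set(Calendar.SECOND, 0);
        midnight.set(Calendar.MILLISECOND, 0);
        long delayMs = midnight.getTimeInMillis() - now.getTimeInMillis();

        OneTimeWorkRequest request =
                new OneTimeWorkRequest.Builder(NightlyUpdateWorker.class)
                        .setInitialDelay(delayMs, TimeUnit.MILLISECONDS)
                        .build();
        // Unique name so re-arming replaces any previously queued run.
        WorkManager.getInstance(context).enqueueUniqueWork(
                "nightly-update", ExistingWorkPolicy.REPLACE, request);
    }
}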
TL;DR: Update checks are very much a background thing. If you care about timing at all, use UpdateManager.performUpdateNow, but no more than once every 5 hours.
I am writing a reminder app for Android that repeatedly sends the user a notification with increasing intervals in between. Namely after 30s, 2m, 10m, ..., 25 days, 4 months, 2 years.
I originally intended to do this by registering a JobService that would run every 30s to check whether it was time to send a notification. However, as this post warns (and as I found out), periodic jobs are limited to a minimum interval of 15 minutes. That means my job runs at most once every 15 minutes, which prevents me from sending my 30s, 2m and 10m reminders.
What would be the correct/most efficient way of implementing such functionality?
(Also, running a job every 30s just to check for the 2-year notification is quite inefficient.)
Don't make it a periodic job. Make each reminder a one-time job instead: each time the job fires, schedule the next one.
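A minimal sketch of that pattern with JobScheduler, assuming a ReminderJobService registered in the manifest (the class name, job IDs, extras key, and trimmed delay table are all illustrative); one-off jobs are not subject to the 15-minute floor that periodic jobs have:

import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;
import android.os.PersistableBundle;

public class ReminderJobService extends JobService {

    // Increasing delays: 30 s, 2 min, 10 min, ... (trimmed; extend up to 2 years)
    private static final long[] DELAYS_MS = { 30_000L, 120_000L, 600_000L };

    @Override
    public boolean onStartJob(JobParameters params) {
        int step = params.getExtras().getInt("step", 0);
        showNotification(step);              // app-specific
        if (step + 1 < DELAYS_MS.length) {
            scheduleStep(this, step + 1);    // arm the next one-time job
        }
        return false; // no work left running on a background thread
    }

    @Override
    public boolean onStopJob(JobParameters params) {
        return true; // reschedule if the system stopped us early
    }

    public static void scheduleStep(Context context, int step) {
        PersistableBundle extras = new PersistableBundle();
        extras.putInt("step", step);
        JobInfo job = new JobInfo.Builder(1000 + step,
                new ComponentName(context, ReminderJobService.class))
                .setMinimumLatency(DELAYS_MS[step])              // earliest
                .setOverrideDeadline(DELAYS_MS[step] + 60_000L)  // latest
                .setExtras(extras)
                .build();
        context.getSystemService(JobScheduler.class).schedule(job);
    }

    private void showNotification(int step) { /* app-specific */ }
}

For the month- and year-scale steps you would probably also want setPersisted(true) (with the RECEIVE_BOOT_COMPLETED permission) so the chain survives reboots.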
I have a non-concurrent Quartz job running on 6 application server instances. At a high level, the job's responsibility is to walk through a DB table and process and update whichever rows have expired. Now I see a behavior of the job which I cannot explain.
I have a configuration by which the job should be triggered every 15 minutes, but as a single run can span multiple days, each of these 15-minute triggers should be suppressed by the lock already held by the running job instance.
So the ideal behavior is: the job starts running on one of the 6 server instances and completes a single DB table iteration in, let us say, 3 days. Meanwhile, Quartz tries to push in another job every 15 minutes, but as the lock is already held, it cannot. After 3 days, when the first run finishes, the Quartz scheduler should succeed in starting another job within <= 15 minutes of the first run's end time.
But in reality I see a behavior where the job has run on some days and not on others; sometimes this gap is as long as 8-10 days. I am unable to explain this scenario.
The closest theory I can come up with is that during a particular job run the server instance got killed (due to deployment/redeployment), so Quartz did not get a chance to release the shared lock. All attempts to acquire a lock for the next run then keep failing until the orphaned lock reaches its expiry date. The moment it expires, a new job kicks in.
My question is: what are the possible explanations for this, and more importantly, how do I debug it? Any leads to Quartz lock-management documentation for non-concurrent jobs would be helpful.
I use the @DisallowConcurrentExecution annotation for non-concurrency.
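For reference, a minimal sketch of the shape of such a non-concurrent job (the class name and body are placeholders; the debugging hint in the comment is an assumption based on how the clustered JDBC JobStore coordinates instances):

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

@DisallowConcurrentExecution // one execution of this JobDetail at a time
public class ExpiredRowsJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Walk the DB table and process/update whichever rows have expired.
        // With a clustered JDBC JobStore, Quartz enforces the annotation
        // across instances through trigger states in the QRTZ_* tables;
        // inspecting those tables (e.g. triggers stuck in state BLOCKED
        // after an instance was killed) is a reasonable place to start
        // debugging gaps like the ones described above.
    }
}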
I'm building a system where users can set a future date (down to hours and minutes) in a calendar. At that date a trigger calls a certain task, unique for every user.
Every user can set a different date. The system will have 10k+ users from the start, and a user can create more than one trigger.
So assuming I have 10k users and each user creates on average 3 triggers, that gives 30k triggers with 30k different dates.
All dates are saved in a database.
I'm new to Quartz; can this be done in a more optimized way?
I was thinking about making a task run every minute that would fetch the tasks that are supposed to run in the next hour and remove them from the database.
Do you have any better ideas? Has anyone used Quartz for a large number of triggers?
You have the schedule backed by the database. If I understand the idea, you want Quartz to load all the upcoming tasks and execute them in the future.
This is a problematic approach:
Synchronization issues: I assume that users can edit, remove, and add new tasks to the database. You would have to periodically ask the database to refresh the state of the Quartz jobs, remove some jobs, edit others, etc. This may not be trivial. The state of the program would be a long-lived cache which needs to be synchronized often.
Performance and scalability issues: even if the proposed solution is OK for 30k tasks, it may not be OK for 70k or 700k tasks. This approach is not easy to scale: adding a new machine would require an additional layer of synchronization to decide which machine should actually execute which job (as all of them have all the tasks).
What I would propose:
Add the "stage" to the Tasks table (new, queued, running, finished, failed)
divide your solution into several components. (Initially they can run on a single machine but it will be easy to scale)
Components:
Task Finder: executed periodically (once every few seconds). Scans the database for tasks that are "new" and due soon, sends the tasks found to the Message Queue, and marks them as "queued" in the DB. Marking as "queued" has to be done carefully, as there can be multiple Task Finders (a JDBC sketch of this step appears at the end of this answer). In addition, it may find tasks that were marked as "queued" or "running" more than N minutes ago and are neither "finished" nor "failed"; these probably need to be re-run.
Message Queue: connector between the Task Finder and the Task Executor.
Task Executor: listens to the Message Queue and processes the tasks it receives. Marks each task as "running" initially and as "finished" or "failed" later on.
With this approach you can have:
multiple Task Executors on multiple machines
multiple Task Finders on multiple machines
no single point of failure: even if one of the Task Finders or Executors fails, some tasks will be delayed, but they will be picked up and run afterwards.
This may not address all the scenarios but would be a good starting point.
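To make the Task Finder's careful "mark as queued" step concrete, here is a minimal plain-JDBC sketch. The table and column names, the PostgreSQL-style interval syntax, and the MessageQueue interface are all assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TaskFinder implements Runnable {

    // Hypothetical hand-off to the Message Queue component.
    public interface MessageQueue { void publish(long taskId); }

    private final DataSource dataSource;
    private final MessageQueue queue;

    public TaskFinder(DataSource dataSource, MessageQueue queue) {
        this.dataSource = dataSource;
        this.queue = queue;
    }

    @Override // run this periodically, e.g. once every few seconds
    public void run() {
        String select = "SELECT id FROM tasks WHERE stage = 'new' "
                + "AND run_at <= now() + interval '1 minute'";
        // The UPDATE succeeds only if the row is still 'new', so two
        // concurrent finders can never both claim the same task.
        String claim = "UPDATE tasks SET stage = 'queued' "
                + "WHERE id = ? AND stage = 'new'";
        try (Connection con = dataSource.getConnection();
             PreparedStatement find = con.prepareStatement(select);
             PreparedStatement mark = con.prepareStatement(claim);
             ResultSet rs = find.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("id");
                mark.setLong(1, id);
                if (mark.executeUpdate() == 1) { // we won the claim
                    queue.publish(id);
                }
            }
        } catch (SQLException e) {
            e.printStackTrace(); // log and retry on the next tick
        }
    }
}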
I don't see why you need Quartz here at all. As far as I remember, Quartz is best used to schedule backend internal processes, not user-defined tasks obtained from a DB.
Just process the trigger as it is created: save a row to your tasks table with a start_date based on the trigger, and every second select all incomplete tasks with start_date < sysdate. If the job is repeating, calculate the next execution time and insert a new task row / update the previous one accordingly.
As Sam pointed out, there are some nice topics addressing the same problem:
Quartz Performance
Quartz FAQ
In a system like the one mentioned, handling this number of triggers should not usually be a problem. But in my experience it is better to create something like a "JobChecker". If you let your users create their own triggers, it could really break Quartz in some cases. For example, if 5000 users create an event for the exact same time, Quartz will have a hard time handling them correctly. (It is not a situation that is likely to occur often, but it is possible, as your specification does not exclude it.) Quartz has difficulties only when a lot of triggers must fire at the same time.
My recommendation for this problem is to create one job that runs every hour/minute etc. and handles all the user-set events. This works like a cron job in bash. With this kind of processing your system will stay quite stable even if the number of "triggers" increases dramatically. Basically, your line of thought is correct if you strive for scalability.
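A minimal sketch of wiring up such a single "JobChecker" with Quartz (the job class, identities, and one-minute interval are illustrative; the execute body would do the DB scan for due user events):

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class JobCheckerSetup {

    public static class JobCheckerJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // Load the user-set events due in this tick from the DB and fire them.
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();

        JobDetail jobChecker = JobBuilder.newJob(JobCheckerJob.class)
                .withIdentity("jobChecker", "system")
                .build();

        // One trigger firing every minute replaces tens of thousands of
        // user-specific triggers.
        Trigger everyMinute = TriggerBuilder.newTrigger()
                .withIdentity("jobCheckerTrigger", "system")
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(1)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(jobChecker, everyMinute);
        scheduler.start();
    }
}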
I need to run a thread every second. But when the application is killed, the thread must still be alive.
The thread's task is to increment a Unix timestamp (synchronized from our server's time the first time the application runs) by one every second. I need this because on some devices the date/time can change unpredictably (maybe from a low battery, a hard reset, a drop, or something else).
My Activity must be able to get that Unix timestamp value whenever it needs it.
From SO, AlarmManager is not a good choice:
I would recommend you not to use an AlarmManager for 30 seconds, as some have suggested, because 30 seconds is too short: it will drain the battery. For AlarmManager, use a minimum of 1 minute with RTC.
Other people suggest using TimerTask or ScheduledExecutorService; what is the best approach to fit my need?
Thanks.
You will never achieve that: any process can be killed by the system, and a task running every second is horrible (as the AlarmManager quote says).
One idea: save your server time together with the device time from SystemClock.elapsedRealtime(). (Do not use System.currentTimeMillis() for this purpose; it is the display time for the user and can be changed by the user or the system.)
When you need the time later, get elapsedRealtime() again, compare it with the stored elapsedRealtime(), and add the difference to the stored server time. You will get the desired time.
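A minimal sketch of this anchor-and-diff technique (the class name and SharedPreferences keys are my own; note that elapsedRealtime() restarts at zero on reboot, so re-sync with the server after boot):

import android.content.Context;
import android.content.SharedPreferences;
import android.os.SystemClock;

public class ServerClock {

    private static final String PREFS = "server_clock";

    // Call once after fetching the Unix time (seconds) from your server.
    public static void sync(Context context, long serverTimeSeconds) {
        context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                .edit()
                .putLong("server_time", serverTimeSeconds)
                // elapsedRealtime(): ms since boot, immune to clock changes
                .putLong("elapsed_at_sync", SystemClock.elapsedRealtime())
                .apply();
    }

    // Derive the current Unix time from the stored anchor.
    // No once-per-second thread is needed at all.
    public static long now(Context context) {
        SharedPreferences prefs =
                context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        long serverTime = prefs.getLong("server_time", 0);
        long elapsedAtSync = prefs.getLong("elapsed_at_sync", 0);
        return serverTime
                + (SystemClock.elapsedRealtime() - elapsedAtSync) / 1000;
    }
}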
Or simply ask your server for the current time; it depends on your needs :).
If you want to handle a hard reset, I think you should have a database on your server to track the first time each user launches the app.