I have an app that runs across several instances, and all requests come through one servlet.
I need to run a cron job that executes once a week for about 3 minutes. During that cron call, some kind of flag/boolean will be modified somewhere so that the servlet can pick it up and send back a "server temporarily unavailable" type of message instead of processing the request. Once the cron job is complete, it will set the flag back to true.
I cannot use a singleton or a static boolean, as the app runs in multiple instances. Nor do I want the servlet to fetch a value from the datastore on every request, as that would mean hundreds of thousands of extra datastore reads.
What can I do? Any ideas?
I think you may be able to store the boolean in memcache; GAE has a Cache API for memcache. Note, however, that cache values are not persistent and may not survive even 3 minutes. Alternatively, you could hard-code a firm start time for the cron task in one of your Java classes or in a .properties file; when your task finishes, it looks at that hard-coded time and schedules itself for the next round accordingly.
That way your servlet can also look at that time and refuse to serve requests during the interval you specify. This will be very fast, but your jobs will be pinned to a fixed schedule, and you won't be able to change it without redeploying the application.
I think the better solution is to keep the boolean in the datastore and back it with the cache. See the following algorithm:
is my boolean in the cache?
yes:
[alright, then choose whether or not to serve the request using it.] (cache hit)
no:
[fetch the value from the datastore and put it in the cache.] (cache miss)
Again, the cache will be fast, though not as fast as hard-coding the schedule in the program.
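A minimal sketch of that algorithm using the low-level Memcache and Datastore APIs (the cache key, entity kind and property name are assumptions):

```java
import com.google.appengine.api.datastore.*;
import com.google.appengine.api.memcache.*;

public class MaintenanceFlag {
    private static final String CACHE_KEY = "maintenance-flag"; // hypothetical key

    /** Returns true while the weekly cron job is running. */
    public static boolean isInMaintenance() {
        MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
        Boolean flag = (Boolean) cache.get(CACHE_KEY);
        if (flag == null) { // cache miss: fall back to the datastore
            flag = readFlagFromDatastore();
            // Short expiration so a flipped flag is picked up quickly.
            cache.put(CACHE_KEY, flag, Expiration.byDeltaSeconds(60));
        }
        return flag;
    }

    private static boolean readFlagFromDatastore() {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        try {
            // Hypothetical kind/property names; adapt to your schema.
            Entity e = ds.get(KeyFactory.createKey("Config", "maintenance"));
            return Boolean.TRUE.equals(e.getProperty("inMaintenance"));
        } catch (EntityNotFoundException notFound) {
            return false; // no flag entity yet: assume we are serving normally
        }
    }
}
```

The short cache expiration bounds how stale the flag can be, so the datastore is read roughly once a minute per instance instead of on every request.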
EDIT: Another solution (which, as it turns out, is not possible to implement).
If you want to keep serving pages during the task execution, you could use the Task Queue API.
First of all, you should be familiar with using a countdown for your task (in this case, one week): http://code.google.com/appengine/docs/java/javadoc/com/google/appengine/api/taskqueue/TaskOptions.html#countdownMillis(long)
Then you could use a size() method on Queue (which I expected to be there, but apparently Google didn't implement it) to check whether the queue size is 0. If it is, the task is being processed right now, because when the task finishes it re-submits itself for one week later.
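For reference, the re-enqueueing half is straightforward; a minimal sketch with the Task Queue API (the handler URL is an assumption):

```java
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import java.util.concurrent.TimeUnit;

public class WeeklyJob {
    /** Schedules the next run one week from now; call at the end of the task handler. */
    public static void reEnqueue() {
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder
                .withUrl("/tasks/weekly")                     // hypothetical handler URL
                .countdownMillis(TimeUnit.DAYS.toMillis(7))); // fire one week from now
    }
}
```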
One approach would be to have the cron job publish a message to a JMS topic to which all the servlet instances listen. The messages would tell the servlet instances to set the static boolean you mentioned to true or false.
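A rough sketch of the listening side, assuming your environment provides a JMS 1.1 broker (the topic name and flag holder are illustrative):

```java
import javax.jms.*;

public class MaintenanceListener implements MessageListener {
    // The static flag each instance checks before serving a request.
    public static volatile boolean inMaintenance = false;

    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                // Cron publishes "true" before the job starts and "false" after.
                inMaintenance = Boolean.parseBoolean(((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }

    /** Subscribe this instance to the hypothetical "maintenance" topic. */
    public static void subscribe(TopicConnectionFactory factory) throws JMSException {
        TopicConnection conn = factory.createTopicConnection();
        TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        TopicSubscriber sub = session.createSubscriber(session.createTopic("maintenance"));
        sub.setMessageListener(new MaintenanceListener());
        conn.start();
    }
}
```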
Related
I'm adding caching functionality to one of the DoFns inside a Dataflow pipeline in Java. The DoFn currently uses a REST client to send a request to an endpoint (which charges based on the number of requests, and whose response changes roughly every hour) for every input element. What I want to achieve is to cache the response from the endpoint and have it expire every 15 minutes.
I found two ways to do this. One, from a similar Stack Overflow question, suggested using a static variable to host a cache service (I used Guava for caching). However I wasn't sure how to update the expiry from outside of the DoFn.
The other approach I found is to use stateful processing to store a hash that keeps track of the requests and responses, and to use a TimerSpec to clear the "cache" every 15 minutes. Although it appears that there is no way to set a timer for each element in the cache.
I haven't tried the second approach yet. While I'm going to implement it, I wonder if anyone has run into similar situations and has any suggestions or better approaches.
However I wasn't sure how to update the expiry from outside of the DoFn.
Do you need to? The DoFn does a lookup first, and if the entry has already expired it issues a request and updates the cache. Cache reads and writes need to be thread-safe.
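A minimal sketch of that pattern with a static Guava LoadingCache (the endpoint call is a placeholder); expireAfterWrite handles the 15-minute expiry, and the cache is thread-safe, so all DoFn instances in the same JVM share it:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import org.apache.beam.sdk.transforms.DoFn;
import java.util.concurrent.TimeUnit;

public class EnrichFn extends DoFn<String, String> {
    // Static: shared by all instances of this DoFn on the same worker JVM.
    private static final LoadingCache<String, String> CACHE = CacheBuilder.newBuilder()
            .expireAfterWrite(15, TimeUnit.MINUTES) // entries live at most 15 minutes
            .maximumSize(10_000)
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) {
                    return callEndpoint(key); // issued only on a miss or after expiry
                }
            });

    @ProcessElement
    public void processElement(ProcessContext c) throws Exception {
        c.output(CACHE.get(c.element()));
    }

    private static String callEndpoint(String key) {
        // Placeholder for the real REST call.
        return "response-for-" + key;
    }
}
```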
Although it appears that there is no way to set a timer for each element in the cache.
You can probably set a timer that scans the entire cache every X minutes and refreshes expired entries. However, if your state is not keyed, using a global state like this will limit the parallelism of your pipeline.
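A rough sketch of that idea with Beam's stateful processing, assuming the input is keyed (state and timers are scoped per key, and all names here are hypothetical):

```java
import org.apache.beam.sdk.coders.BooleanCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.state.*;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;

public class CachingFn extends DoFn<KV<String, String>, String> {
    @StateId("cache")
    private final StateSpec<MapState<String, String>> cacheSpec =
            StateSpecs.map(StringUtf8Coder.of(), StringUtf8Coder.of());

    @StateId("armed")
    private final StateSpec<ValueState<Boolean>> armedSpec =
            StateSpecs.value(BooleanCoder.of());

    @TimerId("flush")
    private final TimerSpec flushSpec = TimerSpecs.timer(TimeDomain.PROCESSING_TIME);

    @ProcessElement
    public void process(ProcessContext c,
                        @StateId("cache") MapState<String, String> cache,
                        @StateId("armed") ValueState<Boolean> armed,
                        @TimerId("flush") Timer flush) {
        // Arm the flush timer once; it clears this key's cache 15 minutes later.
        if (armed.read() == null) {
            flush.offset(Duration.standardMinutes(15)).setRelative();
            armed.write(true);
        }
        String key = c.element().getValue();
        String cached = cache.get(key).read();
        if (cached == null) {                  // miss: hit the endpoint once
            cached = callEndpoint(key);
            cache.put(key, cached);
        }
        c.output(cached);
    }

    @OnTimer("flush")
    public void onFlush(@StateId("cache") MapState<String, String> cache,
                        @StateId("armed") ValueState<Boolean> armed) {
        cache.clear();  // drop this key's whole cache every 15 minutes
        armed.clear();  // the next element will re-arm the timer
    }

    private static String callEndpoint(String key) {
        return "response-for-" + key; // placeholder for the real REST call
    }
}
```

Note the parallelism caveat above: state is partitioned by key, so funnelling everything through a single global key would serialize the pipeline.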
I'm building a system where users can set a future date (down to hours and minutes) in a calendar. At that date, a trigger calls a certain task, unique for every user.
Every user can set a different date. The system will have 10k+ users from the start, and a user can create more than one trigger.
So assuming I have 10k users and each user creates on average 3 triggers, that's 30k triggers with 30k different dates.
All dates are saved in a database.
I'm new to Quartz; can this be done in a more optimized way?
I was thinking about making a task run every minute that fetches the tasks supposed to run in the next hour and removes them from the database.
Do you have any better ideas? Has anyone used Quartz for a large number of triggers?
You have the schedule backed by the database. If I understand the idea, you want Quartz to load all the upcoming tasks and execute them in the future.
This is a problematic approach:
Synchronization issues: I assume that users can edit, remove and add new tasks to the database. You would have to periodically ask the database to refresh the state of the Quartz jobs, remove some jobs, edit other jobs, etc. This may not be trivial. The state of the program would be a long-lived cache which needs to be synchronised often.
Performance and scalability issues: even if the proposed solution is OK for 30k tasks, it may not be OK for 70k or 700k. Your approach is not easy to scale: adding a new machine would require an additional layer of synchronisation to decide which machine should actually execute which job (as all of them hold all the tasks).
What I would propose:
Add the "stage" to the Tasks table (new, queued, running, finished, failed)
divide your solution into several components. (Initially they can run on a single machine but it will be easy to scale)
Components:
Task Finder: executed periodically (once every few seconds). Scans the database for tasks that are "new" and due soon, sends the tasks it finds to the Message Queue, and marks them as "queued" in the DB. Marking as "queued" has to be done carefully, as there can be multiple Task Finders; a sketch of this claim step follows the component list. (As an addition, it may also find tasks that were marked as "queued" or "running" more than N minutes ago and are not "finished" nor "cancelled" - these probably need to be re-run.)
Message Queue: connector between the Task Finder and the Task Executor.
Task Executor: listens to the Message Queue and processes the tasks it receives. Marks a task as "running" initially, and "finished" or "failed" later on.
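Here is a minimal sketch of the careful "queued" marking mentioned above, assuming a hypothetical tasks(id, stage, due_at) table; the conditional UPDATE makes the claim atomic, so only one of several concurrent Task Finders wins a given task:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TaskFinder {
    /** Returns true if this finder won the task and may enqueue it. */
    static boolean claim(Connection db, long taskId) throws SQLException {
        // Atomic compare-and-set on the stage column: only one finder
        // sees updateCount == 1 for a given task.
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE tasks SET stage = 'queued' WHERE id = ? AND stage = 'new'")) {
            ps.setLong(1, taskId);
            return ps.executeUpdate() == 1;
        }
    }
}
```

Only send the task to the Message Queue when claim() returns true.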
With this approach you can have:
multiple Task Executors on multiple machines
multiple Task Finders on multiple machines
even if one of the Task Finders or Executors fails, it will not be a single point of failure; some tasks will be delayed, but they will be picked up and run afterwards.
This may not address all the scenarios but would be a good starting point.
I don't see why you need Quartz here at all. As far as I remember, Quartz is best used to schedule internal backend processes, not user-defined tasks obtained from the DB.
Just process the trigger as it is created: save a row to your tasks table with a start_date based on the trigger, and every second select all incomplete tasks with start_date < sysdate. If the job is repeating, calculate the next execution time and insert a new task row / update the previous one accordingly.
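A minimal sketch of that polling loop over plain JDBC (table and column names are assumptions):

```java
import java.sql.*;

public class TaskPoller implements Runnable {
    private final Connection db;

    TaskPoller(Connection db) { this.db = db; }

    public void run() {
        // Poll once per second for tasks whose start time has passed.
        String due = "SELECT id FROM tasks WHERE completed = 0 AND start_date < ?";
        try (PreparedStatement ps = db.prepareStatement(due)) {
            while (true) {
                ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        execute(rs.getLong("id")); // run (or hand off) the task
                    }
                }
                Thread.sleep(1000);
            }
        } catch (SQLException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    private void execute(long taskId) {
        // Mark the task complete; insert the next row if the job repeats.
    }
}
```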
As Sam pointed out there are some nice topics addressing the same problem:
Quartz Performance
Quartz FAQ
In a system like the one mentioned, handling this number of triggers should mostly not be a problem. But in my experience it is better to create something like a "JobChecker". If you let your users create their own triggers, it could really break Quartz in some cases: for example, if 5000 users create an event for the exact same time, Quartz will have a hard time handling them correctly. (This situation is not likely to occur often, but it is possible, as your specification does not exclude it.) Quartz has difficulties only when a lot of triggers fire at the same time.
My recommendation is to create one job that runs every hour/minute etc. and handles all the user-set events, similar to a cron job in bash. With this kind of processing your system will be pretty stable even if the number of "triggers" increases dramatically. Basically your line of thought is correct if you strive for scalability.
At our company we have a server which is distributed across a few instances. The server handles user requests. Requests from different users can be processed in parallel; requests from the same user must be executed strictly sequentially, but they can arrive at different instances due to load balancing. Currently we use Redis-based distributed locks, but this is error-prone and requires more work on concurrency than on business logic.
What I want is something like this (more like a concept):
A distinct queue for each user
The queue is named after the user id
Each request is identified by a request id
Imagine two requests from the same user arriving at two different instances concurrently:
1. Each instance puts its request id into this user's queue.
2. Additionally, both store their request ids locally.
3. Then some broker takes the request id from the top of "some_user_queue" and moves it into "some_user_queue_processing".
4. Both instances listen on "some_user_queue_processing". They peek into it and check whether it holds the request id they stored locally. If yes, they do the processing; if not, they ignore it and wait.
5. When the work is done, the server deletes this id from "some_user_queue_processing".
6. Then step 3 happens again.
And all of this happens concurrently for a lot of different users (thousands of them) and their queues.
Now, I know this sounds a lot like actors, but:
We need a solution requiring as few changes as possible, to make a fast transition from locks. Akka would force us to rewrite almost everything from scratch.
We need a production-ready solution. Quasar sounds good, but it is not production-ready yet (more precisely, its Galaxy cluster isn't).
The higher-ups at my work are very conservative; they simply don't want another dependency we'd need to support. But we already use Redis (for the distributed locks), so I thought maybe it could help with this too.
Thanks
The best solution that matches the description of your problem is Redis Cluster.
Basically, the cluster solves your concurrency problem in the following way:
Two (or more) requests from the same user will always go to the same instance, assuming you use the user id as the key and the request as the value. The value should actually be a list of requests: when you receive one, you append it to that list. In other words, that list is your queue of requests (a single one for every user).
That mapping is made possible by the design of the cluster implementation: it is based on a range of hash slots spread over all the instances.
When a set command is executed, the cluster performs a hashing operation on the key, which yields the hash slot we are going to write to; each slot is located on a specific instance. The cluster finds the instance that holds the right range and then performs the write.
Also, when a get is performed, the cluster does the same procedure: it finds the instance that contains the key, and then it gets the value.
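A minimal sketch of those queue operations with the Jedis client (the key naming is an assumption); since the key is derived from the user id, the cluster routes every request for that user to the same hash slot:

```java
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import java.util.Collections;

public class UserQueues {
    private final JedisCluster cluster =
            new JedisCluster(Collections.singleton(new HostAndPort("127.0.0.1", 7000)));

    /** Append a request to the user's queue (creates the list if absent). */
    public void enqueue(String userId, String requestId) {
        cluster.rpush("queue:" + userId, requestId);
    }

    /** Take the oldest pending request, or null if the queue is empty. */
    public String dequeue(String userId) {
        return cluster.lpop("queue:" + userId);
    }
}
```

If you later need several keys per user (say, a separate processing list) to land on the same slot, Redis hash tags such as "queue:{user42}" force them into one slot.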
The transition from locks is very easy to perform, because you only need to have the instances ready (with the cluster-enabled directive set to "yes") and then run the create command from the redis-trib.rb script.
I worked with the cluster in a production environment last summer and it behaved very well.
I want to run some kind of thread continuously in App Engine. What the thread does is:
continuously check a hashmap and update entries according to some business logic.
My hashmap is a public member variable of class X, and X is a singleton class.
Now, I know that App Engine does not support threads, but it has something called backends.
Now my question is: if I run a backend continuously, 24*7, will I be charged?
There is no heavy processing in the backend; it just updates a hashmap based on some condition.
Can I apply some trick so that I am not charged? My webapp is not for commercial use and is just for fun.
Yes, backends are billed per hour. It does not matter how much they are used: https://developers.google.com/appengine/docs/billing#Billable_Resource_Unit_Costs
Do you need this calculation to happen immediately? You could run a cron job, say every 5 minutes, and perform the task then.
Or you can enqueue a 10-minute task and re-enqueue it when it is about to hit its 10-minute limit. You can use the task parameters to pass the state of the process to the next task, or you can use the datastore.
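A minimal sketch of such a self-chaining task handler (the handler URL and parameter name are assumptions):

```java
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import javax.servlet.http.*;

public class ChainedWorkerServlet extends HttpServlet {
    private static final long BUDGET_MS = 9 * 60 * 1000; // stop before the 10-min limit

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        long start = System.currentTimeMillis();
        String state = req.getParameter("state"); // state handed over by the previous task

        while (System.currentTimeMillis() - start < BUDGET_MS) {
            state = doSomeWork(state);
            if (state == null) return; // all done: stop the chain
        }
        // Out of budget: hand the remaining work to a fresh task.
        QueueFactory.getDefaultQueue().add(TaskOptions.Builder
                .withUrl("/tasks/worker") // hypothetical handler URL
                .param("state", state));
    }

    private String doSomeWork(String state) {
        // Placeholder: do a slice of work, return the new state (null when finished).
        return null;
    }
}
```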
I am trying to decide whether or not to use a Java EE timer in my application. The server I am using is WebLogic 10.3.2.
The need is: one hour after a call to an async web service from an EJB, if the async callback method has not been invoked, some actions need to be executed. Whether the callback method has been called, and the date of the call, are stored in a database.
The two possibilities I see are:
Using a batch process that runs every half hour, looks for all the calls that have gone more than one hour without a response, and executes the needed actions.
Create a one-hour timer after every single call to the WS, and in the @Timeout method check whether the answer has arrived; if it has not, execute the required actions.
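For illustration, the second option would look roughly like this with the EJB TimerService (the DB check is a placeholder):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerService;

@Stateless
public class CallWatchdogBean {
    @Resource
    private TimerService timerService;

    /** Call right after invoking the async web service. */
    public void watch(String callId) {
        long oneHour = 60L * 60 * 1000;
        timerService.createTimer(oneHour, callId); // callId travels with the timer
    }

    @Timeout
    public void onTimeout(Timer timer) {
        String callId = (String) timer.getInfo();
        if (!callbackArrived(callId)) { // hypothetical DB check
            executeRequiredActions(callId);
        }
    }

    private boolean callbackArrived(String callId) { /* query the DB */ return false; }
    private void executeRequiredActions(String callId) { /* compensate */ }
}
```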
From a pure programming point of view, the second one looks easier and cleaner, but I am worried about the performance issues I could have if, say, there are 100,000 timers created at a single moment.
Any thoughts?
You would be better off having a more specialized process. The real problem is the 100,000 issue. It would depend on how long your actions take.
Because it's easy to see that, each second, the EJB timer would fire up about 30 threads to process all of the currently pending jobs, since that's how it works.
Also, timers are persistent, so your EJB-managed timer table will be saving and deleting 30 rows per second (60 operations in total); this is assuming 100k transactions/hour.
So that's a lot of work happening very quickly. I can easily see the system simply "falling behind" and never catching up.
A specialized process would be much lighter weight and could perhaps batch the action calls (call 5 actions per thread instead of one per thread), etc. It would be nice if you didn't have to persist the timer events, but that is what it is. You could fairly easily append the timer events to a file for safety and keep them in memory. On system restart you can reload that file, and then roll the file (every hour create a new file, delete the older file after it has all been consumed, etc.). That would save a lot of DB traffic, but you could lose the transactional nature of the DB.
Anyway, I don't think you want to use the EJB timer for this; I don't think it's really designed for this amount of traffic. But you can always test it and see. Make sure you test restarting your container to see how well it works with 100k pending timer jobs in its table.
It all depends on what the container uses. E.g. JBoss uses the Quartz Scheduler to implement the EJB timer functionality, and Quartz is pretty good when you have around 100,000 timer instances.
@Pau: why do you need to create a timer for every call made? Instead, you can have a single timer thread, created at application startup, which runs every half-hour (configurable) and looks in your database for all web service calls whose response has not been received and whose request time is more than one hour past. For the selected records, it can execute the required action in a loop.
The above design may not be useful if you have a time-critical activity to perform, though.
If you have the Spring Framework in your application, you may also look at its timer services: http://static.springsource.org/spring/docs/1.2.9/reference/scheduling.html
Maybe you could use some of these ideas:
Where I work, we've built a cron-like scheduler which is powered by a single timer. When the timer fires, the system checks which crons need to run using a Quartz CronTrigger. Generally these crons have a lot of work to do, and the way we handle that is that each cron spins its individual tasks off as JMS messages, and MDBs then handle the messages. Currently this runs on a single Glassfish instance, and as our task load increases we should be able to scale up with a cluster so multiple nodes process the JMS messages. We balance the JMS message-processing load for each type of task by setting max-pool-size in glassfish-ejb-jar.xml (also known as sun-ejb-jar.xml).
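For illustration, the due-check can be done with Quartz's CronExpression alone; a minimal sketch (the cron string is an example):

```java
import org.quartz.CronExpression;
import java.util.Date;

public class CronCheck {
    /**
     * Returns true if the cron schedule has a fire time inside the
     * window (lastCheck, now], i.e. the cron is due since we last looked.
     */
    static boolean isDue(String cron, Date lastCheck, Date now) throws Exception {
        CronExpression expr = new CronExpression(cron); // e.g. "0 0/15 * * * ?"
        Date next = expr.getNextValidTimeAfter(lastCheck);
        return next != null && !next.after(now);
    }
}
```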
Building a system like this and getting all the details right isn't trivial, but it's proving really effective.