Multiple Quartz schedulers to run same job - java

I searched around for my situation but found many threads about making multiple Quartz schedulers on different machines run a job only once. My situation is the opposite. We have multiple web servers behind a load balancer; all of them use Quartz and connect to the same database. One of the jobs loads log files from a third-party app into the database. When the job is triggered, only one of the web servers picks it up. I am trying to find a solution where there is one scheduled job and, when it is triggered, all of the attached web servers pick it up and each starts processing the logs from that third-party app on its own machine.
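One way this is often handled is to keep the shared, database-backed scheduler for the singleton jobs and give each web server an additional non-clustered, in-memory scheduler for the log import, so every node fires that trigger independently. Below is a minimal sketch, assuming Quartz 2.x; the scheduler name, job class and cron expression are made up for illustration:

    import org.quartz.*;
    import org.quartz.impl.StdSchedulerFactory;

    import java.util.Properties;

    public class LocalLogImportScheduler {

        public static void main(String[] args) throws SchedulerException {
            // Separate per-node scheduler: not clustered and not backed by the shared
            // database, so each web server fires this trigger on its own.
            Properties props = new Properties();
            props.put("org.quartz.scheduler.instanceName", "LocalLogImportScheduler"); // hypothetical name
            props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.put("org.quartz.threadPool.threadCount", "2");
            props.put("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();

            JobDetail job = JobBuilder.newJob(LogImportJob.class)            // placeholder job class
                    .withIdentity("logImport", "local")
                    .build();

            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("logImportTrigger", "local")
                    .withSchedule(CronScheduleBuilder.cronSchedule("0 0/15 * * * ?")) // every 15 minutes
                    .build();

            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        }

        /** Placeholder job that would load the third-party log files present on this machine. */
        public static class LogImportJob implements Job {
            @Override
            public void execute(JobExecutionContext context) {
                // read the local log files and write them to the shared database
            }
        }
    }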

Related

How to make only one server run a job in cluster mode

I am a beginner to web development. In my project I was given an R&D task:
I am deploying the same WAR to two servers and running them in cluster mode. The code schedules some jobs, and because there are two servers these jobs run on both, which results in duplicated data in the DB (the two servers run the same jobs independently). Please, can anyone help me solve this situation?
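For reference, the usual fix for this duplication is Quartz clustering: both servers point a JDBC-backed job store at the same database tables with isClustered=true, and Quartz's row locking then ensures each trigger fires on only one node. A minimal sketch, assuming Quartz 2.x and a MySQL data source; the data source name, URL and credentials are placeholders:

    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.impl.StdSchedulerFactory;

    import java.util.Properties;

    public class ClusteredSchedulerFactory {

        /** Builds a clustered scheduler; with both servers sharing the same Quartz tables,
            only one node fires each scheduled job. */
        public static Scheduler build() throws SchedulerException {
            Properties props = new Properties();
            props.put("org.quartz.scheduler.instanceName", "ClusteredScheduler");
            props.put("org.quartz.scheduler.instanceId", "AUTO");              // unique id per node
            props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.put("org.quartz.threadPool.threadCount", "5");
            props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
            props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
            props.put("org.quartz.jobStore.dataSource", "appDS");              // hypothetical data source name
            props.put("org.quartz.jobStore.isClustered", "true");
            props.put("org.quartz.dataSource.appDS.driver", "com.mysql.cj.jdbc.Driver"); // assumption: MySQL
            props.put("org.quartz.dataSource.appDS.URL", "jdbc:mysql://dbhost/appdb");   // placeholder URL
            props.put("org.quartz.dataSource.appDS.user", "app");
            props.put("org.quartz.dataSource.appDS.password", "secret");

            return new StdSchedulerFactory(props).getScheduler();
        }
    }

The Quartz tables (QRTZ_*) have to be created in the shared database first, using the DDL scripts that ship with the Quartz distribution.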

How to start slaves on different machines in spring remote partitioning strategy

I am using Spring Batch local partitioning to process my job. In local partitioning, multiple slaves are created in the same instance, i.e. in the same job. How is remote partitioning different from local partitioning? What I am assuming is that in remote partitioning each slave is executed on a different machine. Is my understanding correct? If so, how do I start the slaves on different machines without using Cloud Foundry? I have seen Michael Minella's talk on remote partitioning: https://www.youtube.com/watch?v=CYTj5YT7CZU. I am curious to know how remote partitioning works without Cloud Foundry. How can I start slaves on different machines?
While that video uses CloudFoundry, the premise of how it works applies off CloudFoundry as well. In that video I launch multiple JVM processes (web apps in that case). Some are configured as slaves so they listen for work. The other is configured as a master and he's the one I use to do the actual launching of the job.
Off of CloudFoundry, this would be no different than deploying WAR files onto Tomcat instances on multiple servers. You could also use Spring Boot to package executable jar files that run your Spring applications in a web container. In fact, the code for that video (which is available on Github here: https://github.com/mminella/Spring-Batch-Talk-2.0) can be used in the same way it was on CF. The only change you'd need to make is to not use the CF specific connection factories and use traditional configuration for your services.
In the end, the deployment model is the same off CloudFoundry or on. You launch multiple JVM processes on multiple machines (connected by middleware of your choice) and Spring Batch handles the rest.
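As an illustration of the master side, the partitioning itself is just a Partitioner bean that describes the slices of work; the partition handler (for example MessageChannelPartitionHandler over your chosen middleware) then ships each slice to a remote slave, which runs the worker step with that context. A rough sketch, assuming the input is split by an id range; the class and context key names are made up:

    import java.util.HashMap;
    import java.util.Map;

    import org.springframework.batch.core.partition.support.Partitioner;
    import org.springframework.batch.item.ExecutionContext;

    public class IdRangePartitioner implements Partitioner {

        private final long minId;   // assumed: lowest id to process
        private final long maxId;   // assumed: highest id to process

        public IdRangePartitioner(long minId, long maxId) {
            this.minId = minId;
            this.maxId = maxId;
        }

        @Override
        public Map<String, ExecutionContext> partition(int gridSize) {
            // One ExecutionContext per slave; each describes the id range that slave should handle.
            Map<String, ExecutionContext> partitions = new HashMap<>();
            long rangeSize = (maxId - minId + 1) / gridSize + 1;
            for (int i = 0; i < gridSize; i++) {
                ExecutionContext context = new ExecutionContext();
                context.putLong("minId", minId + i * rangeSize);
                context.putLong("maxId", Math.min(maxId, minId + (i + 1) * rangeSize - 1));
                partitions.put("partition" + i, context);
            }
            return partitions;
        }
    }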

Tomcat server becomes very slow after advancing the system time

We are creating a test automation framework for a web application. For test scenarios that include the scheduled jobs, we need to advance the system time so the jobs get triggered, but this makes the server (Tomcat) very slow. What could be the reason, and what is the solution?

WebLogic server - Identifying the managed server status in Java code

I have some jobs that run in my application (these jobs are created and managed by the application), and the application is deployed on a cluster of 2 managed servers.
We have distributed the load across these 2 managed servers based on even and odd job numbers.
Now, if one of the instances goes down, we want to create its jobs on the other instance.
How do we know from Java code that the other instance in the WebLogic cluster has gone down? My application is built with Java and Spring.
Thanks
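One way to check this from Java is to ask the admin server's domain runtime MBean server over JMX for the ServerRuntime MBeans and read each server's State; a server that is down will either be missing from that list or not in the RUNNING state. A rough sketch, assuming the WebLogic client libraries are on the classpath; host, port and credentials are placeholders:

    import java.util.Hashtable;

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import javax.naming.Context;

    public class ManagedServerStatus {

        /** Prints the state (RUNNING, SHUTDOWN, ...) of every running server in the domain,
            by connecting to the admin server's domain runtime MBean server. */
        public static void printServerStates(String adminHost, int adminPort,
                                             String user, String password) throws Exception {
            JMXServiceURL url = new JMXServiceURL("t3", adminHost, adminPort,
                    "/jndi/weblogic.management.mbeanservers.domainruntime");

            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.SECURITY_PRINCIPAL, user);
            env.put(Context.SECURITY_CREDENTIALS, password);
            env.put("jmx.remote.protocol.provider.pkgs", "weblogic.management.remote");

            try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                ObjectName domainRuntimeService = new ObjectName(
                        "com.bea:Name=DomainRuntimeService,"
                        + "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
                // Only running servers expose a ServerRuntime MBean, so a managed server
                // that is absent here (or not RUNNING) can be treated as down.
                ObjectName[] serverRuntimes =
                        (ObjectName[]) connection.getAttribute(domainRuntimeService, "ServerRuntimes");
                for (ObjectName serverRuntime : serverRuntimes) {
                    String name = (String) connection.getAttribute(serverRuntime, "Name");
                    String state = (String) connection.getAttribute(serverRuntime, "State");
                    System.out.println(name + " is " + state);
                }
            }
        }
    }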

Scaling Scheduler Web Service

We are developing an application which periodically syncs the LDAP servers of different clients with our database. This application needs to be accessed via a web portal. A web user will create, modify or delete scheduled tasks on this application. So, we have developed this application as a web service.
Now, we have to scale this application and also ensure high availability.
The application is an Axis2-based web service running on Tomcat. We have thought of an httpd + mod_jk + Tomcat combination for load balancing. The problem is that if a request for modification/deletion comes in, it should land on the same Tomcat server on which the task was originally created. But since requests can come from different web users accessing the web portal from different IP addresses, we cannot rely on the same session ID (sticky sessions).
Any solutions? Different architecture? Anything.
We have also thought of using the Quartz scheduler API. The site says it supports load balancing and clustering. Does anyone have experience working with Quartz in such a scenario?
If you are using Quartz for your scheduling, that can be backed by a database (see JDBCJobStore). Then you could access any Tomcat server and the scheduling would be centralized. I would recommend using a key in the database that you return back to the Axis service, so that the user can reference the same data between calls.
Alternatively it is not difficult to use the database as a job scheduler, then have your tasks run on Tomcat (any location), and put the results into the database. If the results of the job (such as its status) are small, this would work fine.
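To illustrate the first suggestion, here is a sketch of a service that schedules against a JDBC-backed (and therefore shared) scheduler and returns the job key, so the Axis service can hand it to the caller and later modify/delete calls can land on any Tomcat node. It assumes a quartz.properties with JDBCJobStore is on the classpath; the job class and naming scheme are made up:

    import org.quartz.*;
    import org.quartz.impl.StdSchedulerFactory;

    public class LdapSyncScheduleService {

        private final Scheduler scheduler;

        public LdapSyncScheduleService() throws SchedulerException {
            // Assumes quartz.properties configures JDBCJobStore, so every Tomcat
            // node sees the same job data in the database.
            this.scheduler = StdSchedulerFactory.getDefaultScheduler();
            if (!scheduler.isStarted()) {
                scheduler.start();
            }
        }

        /** Creates a sync task and returns its key as the reference to store and pass between calls. */
        public String createSyncTask(String clientId, String cronExpression) throws SchedulerException {
            JobKey key = new JobKey("ldapSync-" + clientId, "ldapSync");   // hypothetical naming scheme

            JobDetail job = JobBuilder.newJob(LdapSyncJob.class)           // placeholder Job implementation
                    .withIdentity(key)
                    .usingJobData("clientId", clientId)
                    .storeDurably()
                    .build();

            Trigger trigger = TriggerBuilder.newTrigger()
                    .forJob(key)
                    .withSchedule(CronScheduleBuilder.cronSchedule(cronExpression))
                    .build();

            scheduler.scheduleJob(job, trigger);
            return key.toString();                                          // "group.name", e.g. "ldapSync.ldapSync-acme"
        }

        /** Deletes the task by the key returned from createSyncTask, on whichever node the request lands. */
        public boolean deleteSyncTask(String keyString) throws SchedulerException {
            String[] parts = keyString.split("\\.", 2);                     // split back into group and name
            return scheduler.deleteJob(new JobKey(parts[1], parts[0]));
        }

        public static class LdapSyncJob implements Job {
            @Override
            public void execute(JobExecutionContext context) {
                String clientId = context.getMergedJobDataMap().getString("clientId");
                // sync this client's LDAP server with the database here
            }
        }
    }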
