Quartz not delaying jobs already started in cluster environment - java

I have Quartz running in a cluster, and jobs run periodically as expected: the job starts on one machine and the others hold off until the next execution time.
What I want now is to delay a job invocation if the previous invocation hasn't finished yet. For instance:
10:00 - instance invocation#1
10:06 - invocation#1 finished
10:10 - instance invocation#2
10:13 - invocation#2 finished
10:20 - instance invocation#3
10:31 - invocation#3 finished // took longer than expected
10:31 - instance invocation#4 // start delayed
10:35 - invocation#4 finished
Even this would be acceptable:
10:00 - instance invocation#1
10:06 - invocation#1 finished
10:10 - instance invocation#2
10:13 - invocation#2 finished
10:20 - instance invocation#3
10:31 - invocation#3 finished // took longer than expected
10:40 - instance invocation#4 // waits for next timed invocation
10:44 - invocation#4 finished
I'm using cron-expression triggers, and the job is triggered once every 10 minutes (0 0/10 * * *).

Annotating your job class with @DisallowConcurrentExecution should do the trick.
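For illustration, a minimal sketch (the class name is a placeholder, not from the question):
import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// With @DisallowConcurrentExecution, Quartz will not fire a new execution of this
// JobDetail while a previous execution is still running; with a clustered JDBC
// job store the next firing is held back until the running one completes.
@DisallowConcurrentExecution
public class MyTenMinuteJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // long-running work goes here
    }
}
That should give you roughly your first scenario: invocation #4 simply starts late instead of overlapping invocation #3.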

Related

How to identify the optimum number of shuffle partitions in Spark

I am running a Spark Structured Streaming job (restarted every day) on EMR. After a few hours of execution my application hits an OOM error and gets killed. My configuration and Spark SQL code are below.
I am new to Spark and would appreciate your input.
The EMR cluster has 10 instances, each with 16 cores and 64 GB of memory.
Spark-Submit arguments:
num_of_executors: 17
executor_cores: 5
executor_memory: 19G
driver_memory: 30G
The job reads input as micro-batches from Kafka at a 30-second interval. The average number of rows read per batch is 90k.
spark.streaming.kafka.maxRatePerPartition: 4500
spark.streaming.stopGracefullyOnShutdown: true
spark.streaming.unpersist: true
spark.streaming.kafka.consumer.cache.enabled: true
spark.hadoop.fs.s3.maxRetries: 30
spark.sql.shuffle.partitions: 2001
Spark SQL aggregation code:
dataset.groupBy(functions.col(NAME), functions.window(functions.col(TIMESTAMP_COLUMN), "30 seconds"))
        .agg(functions.concat_ws(SPLIT, functions.collect_list(DEPARTMENT)).as(DEPS))
        .select(NAME, DEPS)
        .map((MapFunction<Row, byte[]>) row -> {
            Map<String, Object> map = Maps.newHashMap();
            map.put(NAME, row.getString(0));
            map.put(DEPS, row.getString(1));
            return new KryoMapSerializationService().serialize(map);
        }, Encoders.BINARY());
Some logs from the driver:
20/04/04 13:10:51 INFO TaskSetManager: Finished task 1911.0 in stage 1041.0 (TID 1052055) in 374 ms on <host> (executor 3) (1998/2001)
20/04/04 13:10:52 INFO TaskSetManager: Finished task 1925.0 in stage 1041.0 (TID 1052056) in 411 ms on <host> (executor 3) (1999/2001)
20/04/04 13:10:52 INFO TaskSetManager: Finished task 1906.0 in stage 1041.0 (TID 1052054) in 776 ms on <host> (executor 3) (2000/2001)
20/04/04 13:11:04 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 3.
20/04/04 13:11:04 INFO DAGScheduler: Executor lost: 3 (epoch 522)
20/04/04 13:11:04 INFO BlockManagerMasterEndpoint: Trying to remove executor 3 from BlockManagerMaster.
20/04/04 13:11:04 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(3, <host>, 38533, None)
20/04/04 13:11:04 INFO BlockManagerMaster: Removed 3 successfully in removeExecutor
20/04/04 13:11:04 INFO YarnAllocator: Completed container container_1582797414408_1814_01_000004 on host: <host> (state: COMPLETE, exit status: 143)
And by the way, I am using collectAsList in my foreachBatch code:
List<Event> list = dataset.select("value")
.selectExpr("deserialize(value) as rows")
.select("rows.*")
.selectExpr(NAME, DEPS)
.as(Encoders.bean(Event.class))
.collectAsList();
With these settings, you may be causing your own issues.
num_of_executors: 17
executor_cores: 5
executor_memory: 19G
driver_memory: 30G
You are basically creating extra containers here that Spark has to shuffle data between. Instead, start off with something like 10 executors, 15 cores, and 60g of memory. If that works, you can then adjust these a bit to try to optimize performance. I usually try splitting my containers in half each step (though I also haven't needed to do this since Spark 2.0).
Leave spark.sql.shuffle.partitions at the default of 200. The more you break this up, the more work you make Spark do to plan the shuffles. If anything, I'd try to match the parallelism to the number of executors you have, so in this case just 10. This is how you would tune Hive queries when 2.0 came out.
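For example, a minimal sketch (the app name is a placeholder, not from the question) of pinning the setting explicitly when building the session:
import org.apache.spark.sql.SparkSession;

// Sketch only: the point is simply not to override spark.sql.shuffle.partitions
// with 2001; either leave it at the default of 200 or try a value close to the
// number of executors (e.g. 10).
SparkSession spark = SparkSession.builder()
        .appName("streaming-aggregation")   // placeholder name
        .config("spark.sql.shuffle.partitions", "200")
        .getOrCreate();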
Making the job more complex to break up also puts all of that planning load on the driver.
Using Datasets and encoders is also generally not as performant as plain DataFrame operations. I have seen big performance lifts from refactoring Dataset code into DataFrame operations.
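As an illustrative sketch of that last point (reusing the column constants from the question, and assuming the 30 in the original code was meant as a 30-second window), the same aggregation can stay entirely in DataFrame operations, without the typed map(...) into a custom Kryo serializer:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.functions;

// Untyped (DataFrame) version of the aggregation from the question; any
// serialization is left to the sink instead of a per-row map(...).
Dataset<Row> result = dataset
        .groupBy(functions.col(NAME),
                 functions.window(functions.col(TIMESTAMP_COLUMN), "30 seconds"))
        .agg(functions.concat_ws(SPLIT, functions.collect_list(DEPARTMENT)).as(DEPS))
        .select(NAME, DEPS);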

JTA Transaction Timeout Troubleshooting

Setup:
Oracle 12 DB
JBoss EAP7
A web service running on JBoss that inserts into the DB
A batch program calling the web service from multiple threads, about 130,000 times in the span of an hour
The problem:
2018-04-26 18:20:44,675 +0200 [WARN ] [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffffac110923:-4c44ed1d:5ac9329e:6866ea in state RUN
2018-04-26 18:20:44,675 +0200 [WARN ] [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012095: Abort of action id 0:ffffac110923:-4c44ed1d:5ac9329e:6866ea invoked while multiple threads active within it.
2018-04-26 18:20:44,679 +0200 [WARN ] [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012381: Action id 0:ffffac110923:-4c44ed1d:5ac9329e:6866ea completed with multiple threads - thread default task-48 was in progress with xxx.BaseEntity.getNextValue(BaseEntity.java:28)
This happens routinely in the production environment under heavy load, not when processing fewer records and not in an identical test environment with the exact same load.
The last line shows that this transaction timeout (300s) occurs while fetching the next value from a sequence:
CREATE SEQUENCE "XXX_S" MINVALUE xxx MAXVALUE xxx INCREMENT BY 1 START WITH xxx CACHE 2 NOORDER NOCYCLE NOPARTITION ;
I know Oracle needs to lock/unlock the sequence in order to keep it consistent, so my parallel web service calls must somehow be running into a deadlock or massive contention, producing the timeout.
How do I find the root of this problem? Which parameters can I try to manipulate?
The issue is now resolved, though very unsatisfyingly: we removed the parallelism.

Spark Job in YARN - Executors are not executing the tasks for long time

From the Spark UI I can see that the executors have not been executing any tasks for a long time.
When I look at the stderr in the Executors tab, I see the logs below.
16/02/04 05:30:56 INFO storage.MemoryStore: Block broadcast_91 of size 153016 dropped from memory (free 6665239401)
16/02/04 06:11:20 WARN hdfs.DFSClient: Slow ReadProcessor read fields took 31337ms (threshold=30000ms); ack: seqno: 1240 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 4835789, targets: [DatanodeInfoWithStorage[10.25.36.18:1004,DS-f6e20cf7-0ccb-45aa-988f-f3310d5acf89,DISK], DatanodeInfoWithStorage[10.25.36.11:1004,DS-61ad0a2d-a6fd-402e-b0a1-61682d1755fb,DISK], DatanodeInfoWithStorage[10.25.36.5:1004,DS-c77503a2-0c7f-4b5c-8f4a-9c61cb4f18d7,DISK]]
I do not see any log output for a long time, and I do not see any errors either. It just keeps running.
Has anyone faced the same problem? How can we improve this?
Update:
It is actually the saveAsTextFile() method that takes a long time.

Quartz Job executed multiple times simultaneously by each cluster machine, rather than one time by one machine for the entire cluster

Goal:
* Have Job1 run once for a three-node cluster every 10 minutes, and Job2 run once for the same cluster every 5 minutes. Each job generates an email; so at 10:55am I should receive only one Job2 email from the cluster, and at 11:00am I should receive one Job1 email and one Job2 email from the cluster, at 11:05am I should receive only one Job2 email from the cluster, and so on...
Problem:
* Job1 is being run multiple times every 10 minutes on each node in the cluster, and the same for Job2 (except every 5 minutes). This leads to many, many more than one or two emails.
Configuration:
* Three-node linux cluster
* Each machine NTP configured and time-sync'd
* Oracle DB
* Quartz v2.2.0 (cluster mode)
* Jobs configured via CronTrigger
* Each node has an instance of the same standalone Java application running on it, and the Java application instantiates an instance of the quartz scheduler in cluster-mode.
* quartz.properties files are identical on each machine.
I have investigated all the obvious potential causes, but nothing explains it or presents a fix. I have even tried inserting an artificial 10-second sleep instruction in the job, to ensure that it doesn't finish in under a second. Please find relevant artifacts below (quartz.properties and log output). Any help would be greatly appreciated!
Artifact #1:
============================================================================
============================================================================
Q U A R T Z --- P R O P E R T I E S
==================
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName: MyQrtzScheduler
org.quartz.scheduler.instanceId: AUTO
org.quartz.scheduler.skipUpdateCheck: true
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount: 1
org.quartz.threadPool.threadPriority: 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold: 2592000000
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=myDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=60000
#============================================================================
# Other Example Delegates
#============================================================================
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.DB2v6Delegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.DB2v7Delegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.DriverDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.HSQLDBDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.MSSQLDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PointbaseDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.WebLogicDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
#org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.WebLogicOracleDelegate
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.myDS.driver: oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.myDS.URL: jdbc:oracle:thin:@myServer:myPort:blah
org.quartz.dataSource.myDS.user: myDBUser
org.quartz.dataSource.myDS.password: myDBPassword
org.quartz.dataSource.myDS.maxConnections: 2
org.quartz.dataSource.myDS.validationQuery: select 0
#============================================================================
# Configure Plugins
#============================================================================
org.quartz.plugin.shutdownHook.class: org.quartz.plugins.management.ShutdownHookPlugin
org.quartz.plugin.shutdownHook.cleanShutdown: true
org.quartz.plugin.triggerHistory.class=org.quartz.plugins.history.LoggingTriggerHistoryPlugin
org.quartz.plugin.jobHistory.class=org.quartz.plugins.history.LoggingJobHistoryPlugin
Artifact #2:
============================================================================
============================================================================
L O G --- O U T P U T
==================
2015-01-29 12:56:16,602 [main] INFO com.mycompany.myapp.jobs.QuartzHelper - Initializing Quartz scheduler...
2015-01-29 12:56:16,829 [main] INFO org.quartz.impl.StdSchedulerFactory - Using default implementation for ThreadExecutor
2015-01-29 12:56:16,855 [main] INFO org.quartz.core.SchedulerSignalerImpl - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2015-01-29 12:56:16,855 [main] INFO org.quartz.core.QuartzScheduler - Quartz Scheduler v.2.2.0 created.
2015-01-29 12:56:16,857 [main] INFO org.quartz.plugins.management.ShutdownHookPlugin - Registering Quartz shutdown hook.
2015-01-29 12:56:16,859 [main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - Using db table-based data access locking (synchronization).
2015-01-29 12:56:16,864 [main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - JobStoreTX initialized.
2015-01-29 12:56:16,865 [main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.2.0) 'MyQrtzScheduler' with instanceId 'node1_1422554176832'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 1 threads.
Using job-store 'org.quartz.impl.jdbcjobstore.JobStoreTX' - which supports persistence. and is clustered.
2015-01-29 12:56:16,865 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler 'MyQrtzScheduler' initialized from specified file: '/my/install/directory/quartz.properties'
2015-01-29 12:56:16,866 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler version: 2.2.0
2015-01-29 12:56:16,866 [main] INFO com.mycompany.myapp.jobs.QuartzHelper - Quartz scheduler initialized successfully.
2015-01-29 12:59:53,450 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.core.QuartzSchedulerThread - batch acquisition of 1 triggers
2015-01-29 13:00:00,007 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is desired by: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:00:00,008 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is being obtained: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:00:00,809 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' given to: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:00:00,836 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' returned by: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:00:00,839 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.simpl.PropertySettingJobFactory - Producing instance of Job 'node2_1422546730757.Job1', class=com.mycompany.myapp.job.Job1
2015-01-29 13:00:00,851 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger node2_1422546730757.Job1Trigger fired job node2_1422546730757.Job1 at: 13:00:00 01/29/2015
2015-01-29 13:00:00,852 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingJobHistoryPlugin - Job node2_1422546730757.Job1 fired (by trigger node2_1422546730757.Job1Trigger) at: 13:00:00 01/29/2015
2015-01-29 13:00:00,852 [MyQrtzScheduler_Worker-1] DEBUG org.quartz.core.JobRunShell - Calling execute on job node2_1422546730757.Job1
2015-01-29 13:00:00,853 [MyQrtzScheduler_Worker-1] INFO com.mycompany.myapp.job.Job1 - ***Executing Inbound File SLA Job...
2015-01-29 13:00:02,054 [MyQrtzScheduler_Worker-1] INFO com.mycompany.myapp.job.Job1 - ***Inbound File SLA Job: No SLA breaches found...
2015-01-29 13:00:02,150 [MyQrtzScheduler_Worker-1] INFO com.mycompany.myapp.job.Job1 - Job1 completed successfully in [1297ms]; sleeping [63703ms] to meet the required minimum runtime for quartz-jobs
2015-01-29 13:00:24,881 [QuartzScheduler_MyQrtzScheduler-node1_1422554176832_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.JobStoreTX - ClusterManager: Check-in complete.
2015-01-29 13:01:05,862 [MyQrtzScheduler_Worker-1] INFO com.mycompany.myapp.job.Job1 - Job1 sleep-delay completed.
2015-01-29 13:01:05,864 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingJobHistoryPlugin - Job node2_1422546730757.Job1 execution complete at 13:01:05 01/29/2015 and reports: SUCCESS
2015-01-29 13:01:05,865 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger node2_1422546730757.Job1Trigger completed firing job node2_1422546730757.Job1 at 13:01:05 01/29/2015 with resulting trigger instruction code: DO NOTHING
2015-01-29 13:01:05,868 [MyQrtzScheduler_Worker-1] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is desired by: MyQrtzScheduler_Worker-1
2015-01-29 13:01:05,869 [MyQrtzScheduler_Worker-1] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is being obtained: MyQrtzScheduler_Worker-1
2015-01-29 13:01:05,872 [MyQrtzScheduler_Worker-1] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' given to: MyQrtzScheduler_Worker-1
2015-01-29 13:01:05,880 [MyQrtzScheduler_Worker-1] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' returned by: MyQrtzScheduler_Worker-1
2015-01-29 13:01:05,915 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.core.QuartzSchedulerThread - batch acquisition of 1 triggers
2015-01-29 13:01:05,917 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is desired by: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:01:05,918 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is being obtained: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:01:05,921 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' given to: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:01:05,954 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' returned by: MyQrtzScheduler_QuartzSchedulerThread
2015-01-29 13:01:05,955 [MyQrtzScheduler_QuartzSchedulerThread] DEBUG org.quartz.simpl.PropertySettingJobFactory - Producing instance of Job 'node1_1422543657050.Job2', class=com.mycompany.myapp.jobs.Job2
2015-01-29 13:01:05,961 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger node1_1422543657050.Job2Trigger fired job node1_1422543657050.Job2 at: 13:01:05 01/29/2015
2015-01-29 13:01:05,962 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingJobHistoryPlugin - Job node1_1422543657050.Job2 fired (by trigger node1_1422543657050.Job2Trigger) at: 13:01:05 01/29/2015
2015-01-29 13:01:05,963 [MyQrtzScheduler_Worker-1] DEBUG org.quartz.core.JobRunShell - Calling execute on job node1_1422543657050.Job2
2015-01-29 13:01:05,963 [MyQrtzScheduler_Worker-1] WARN com.mycompany.myapp.jobs.Job2 - No outbound files found; Outbound File SLA Job cannot check for SLA breaches.
2015-01-29 13:01:05,965 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingJobHistoryPlugin - Job node1_1422543657050.Job2 execution complete at 13:01:05 01/29/2015 and reports: null
2015-01-29 13:01:05,966 [MyQrtzScheduler_Worker-1] INFO org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger node1_1422543657050.Job2Trigger completed firing job node1_1422543657050.Job2 at 13:01:05 01/29/2015 with resulting trigger instruction code: DO NOTHING
The following answer was given by the OP.
The problem was that I was defining Quartz jobs with identities that used a unique group id (the scheduler id) instead of a group id common to all hosts in the cluster. Since the scheduler id is unique to each host, each host in the cluster would check whether the job already existed using the fully qualified job name groupId.jobName, find that it didn't, and create a new instance of Job1 and Job2 during startup. Quartz jobs/triggers are never expired or cleared without an explicit request in Java or a manual SQL statement in Oracle. So over time the instances would build up, and instead of running a single instance of Job1 and Job2, Quartz would run all the instances of each job that had accumulated over time (hence the multiple executions and multiple email alerts).
The solution was to replace the scheduler id with a static string such as "MyQuartzJobs" when defining a job's identity.
Basically, I changed the following line of Java code:
JobDetail job = newJob(Job1.class)
        .withIdentity(JOB1_JOB_NAME, uniqueSchedulerId)
        .withDescription(JOB1_DESC + " created [" + new Date() + "]")
        .storeDurably(false)
        .requestRecovery(false)
        .build();
to something like the following:
JobDetail job = newJob(Job1.class)
        .withIdentity(JOB1_JOB_NAME, "MyQuartzJobs")
        .withDescription(JOB1_DESC + " created [" + new Date() + "]")
        .storeDurably(false)
        .requestRecovery(false)
        .build();

Play Framework scheduled job does not work?

I configured a job to execute every 3 hours during the daytime; below is the cron config:
@On("0 0 10-20/3 * * ?")
But it didn't work.
This is my play status output:
Requests execution pool:
~~~~~~~~~~~~~~~~~~~~~~~~
Pool size: 20
Active count: 0
Scheduled task count: 876
Queue size: 0
I think I got the answer:
@On("0 0 10-20/3 * * ?")
does not mean the job will run 4 times (at 10, 13, 16, and 19). Play waits for the current run of the job to finish, and only then waits for the next scheduled time to run it again.
So if the job takes 10 hours, it will only execute once per day.
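For reference, a minimal sketch of a Play 1.x job using that annotation (the class name is a placeholder):
import play.jobs.Job;
import play.jobs.On;

// Scheduled for 10:00, 13:00, 16:00 and 19:00; Play will not start a new run
// while a previous run of this job is still in progress, so a 10-hour run
// effectively skips the remaining slots for that day.
@On("0 0 10-20/3 * * ?")
public class ReportJob extends Job {

    @Override
    public void doJob() {
        // long-running work goes here
    }
}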
