JMockit - Strict Expectation is ignored - java

I want to test the behavior of a private method.
The method "moveDataToArchive" performs 4 steps: each step calculates a date and then calls a sub-method.
This is my test:
@Test
public void testMoveData2Archive() throws Exception{
final long now = 123456789000L;
//Necessary to make the archivingBean runnable.
Vector<LogEntry> logCollector = new Vector<LogEntry>();
Deencapsulation.setField(archivingBean, "logCollector", logCollector);
new NonStrictExpectations(archivingBean) {
{ //Lets fake the DB stuff.
invoke(archivingBean, "getConnection");result = connection;
connection.prepareStatement(anyString); result = prepStatement;
prepStatement.executeUpdate(); returns(Integer.valueOf(3), Integer.valueOf(0), Integer.valueOf(3));
}
};
new NonStrictExpectations(props) {
{ //This is important. The numbers will be used for one of each 4 submethods
props.getProperty(ArchivingHandlerBean.ARCHIVING_CREDMATURITY_OVER_IN_DAYS); result = "160";
props.getProperty(ArchivingHandlerBean.ARCHIVING_CREDHIST_AGE_IN_DAYS); result = "150";
props.getProperty(ArchivingHandlerBean.ARCHIVING_DEBTHIST_AGE_IN_DAYS); result = "140";
props.getProperty(ArchivingHandlerBean.ARCHIVING_LOG_AGE_IN_DAYS); result = "130";
}
};
new Expectations() {
{
Date expected = new Date(now - (160 * 24 * 60 * 60 * 1000));
invoke(archivingBean, "moveCreditBasic2Archive", expected);
expected = new Date(now - (150 * 24 * 60 * 60 * 1000));
invoke(archivingBean, "moveCreditHistory2Archive", expected);
expected = new Date(now - (999 * 24 * 60 * 60 * 1000));
invoke(archivingBean, "moveDebtorHistory2Archive", expected);
expected = new Date(now - (130 * 24 * 60 * 60 * 1000));
invoke(archivingBean, "moveLog2Archive", expected);
}
};
Calendar cal = Calendar.getInstance();
cal.setTimeInMillis(now);
Deencapsulation.invoke(archivingBean,"moveDataToArchive",cal, props);
}
What's the problem? Look at the third expected date: it is deliberately wrong (999 instead of 140), yet the test passes.
I also changed the order of the calls, and I even made those private methods public and tried it again. None of those changes affected the outcome: the test stays green.
What is wrong here? Why is the test green?

The test is misusing the mocking API, by mixing strict and non-strict expectations for the same mock (archivingBean). The first expectations recorded on this mock are non-strict, so JMockit regards it as a non-strict mock for the whole test.
The correct way to write the test would be to turn the strict expectation block (the one with the 4 calls to "sub methods") into a verification block at the end of the test.
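For illustration, a minimal sketch of that change (placed after the Deencapsulation.invoke(...) call), assuming the JMockit version from the question supports invoke(...) for private methods inside verification blocks just as it does inside expectation blocks; note the long literals, which avoid the silent int overflow in the original date arithmetic:
new Verifications() {
    {
        // Each sub-method should have been called with the date derived from its property value.
        invoke(archivingBean, "moveCreditBasic2Archive", new Date(now - (160L * 24 * 60 * 60 * 1000)));
        invoke(archivingBean, "moveCreditHistory2Archive", new Date(now - (150L * 24 * 60 * 60 * 1000)));
        invoke(archivingBean, "moveDebtorHistory2Archive", new Date(now - (140L * 24 * 60 * 60 * 1000)));
        invoke(archivingBean, "moveLog2Archive", new Date(now - (130L * 24 * 60 * 60 * 1000)));
    }
};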
(As an aside, the whole test has several problems: 1) In general, private methods should be tested indirectly, through some public method. 2) Private methods should not be mocked unless there is a strong reason otherwise; in this case, I would probably write a test which verifies the actual contents of the output file. 3) Don't mock things unnecessarily, such as props; props.setProperty could probably be used instead (see the sketch below). 4) Use auto-boxing: Integer.valueOf(3) -> 3.)
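A minimal sketch of point 3, assuming props is (or can be replaced by) a plain java.util.Properties instance rather than a mock:
Properties props = new Properties();
props.setProperty(ArchivingHandlerBean.ARCHIVING_CREDMATURITY_OVER_IN_DAYS, "160");
props.setProperty(ArchivingHandlerBean.ARCHIVING_CREDHIST_AGE_IN_DAYS, "150");
props.setProperty(ArchivingHandlerBean.ARCHIVING_DEBTHIST_AGE_IN_DAYS, "140");
props.setProperty(ArchivingHandlerBean.ARCHIVING_LOG_AGE_IN_DAYS, "130");
// Pass this real object to Deencapsulation.invoke(archivingBean, "moveDataToArchive", cal, props)
// instead of recording getProperty(...) expectations on a mocked props.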

@Rogério:
Your assumptions don't hold completely; for example, I don't have a setProperty(). What I tried is to use a Verifications block.
Sadly, I don't understand JMockit well enough to get it running...
I did 2 things. First, I tried to mock the 4 private methods: I only want to see whether they are called, but I don't want their logic to run.
I tried it by extending the first NonStrictExpectations-Block like this:
new NonStrictExpectations(archivingBean) {
{
invoke(archivingBean, "getConnection");result = connection;
connection.prepareStatement(anyString); result = prepStatement;
prepStatement.executeUpdate(); returns(Integer.valueOf(3), Integer.valueOf(0), Integer.valueOf(3));
//New part
invoke(archivingBean, "moveCreditBasic2Archive", withAny(new Date()));
invoke(archivingBean, "moveCreditHistory2Archive", withAny(new Date()));
invoke(archivingBean, "moveDebtorHistory2Archive", withAny(new Date()));
invoke(archivingBean, "moveLog2Archive", withAny(new Date()));
}
};
Second, I moved the Expectations block down and made it a Verifications block. Now the JUnit test fails with a
mockit.internal.MissingInvocation: Missing invocation of:
de.lpm.ejb.archiving.ArchivingHandlerBean#moveCreditBasic2Archive(java.util.Date pOlderThan)
with arguments: Tue Feb 03 03:39:51 CET 2009
on mock instance: de.lpm.ejb.archiving.ArchivingHandlerBean#1601bde
at de.lpm.ejb.archiving.ArchivingHandlerBean.moveCreditBasic2Archive(ArchivingHandlerBean.java:175)
[...]
Caused by: Missing invocation
These are lines 170-175 in ArchivingHandlerBean.java:
170: Connection connection = getConnection();
171: SQLService service = new SQLService(connection);
172:
173: PreparedStatement prepStmtMove = null;
174:
175: Vector<HashMap<String, String>> where_clauses = new Vector<HashMap<String,String>>();
I just want to verify that the 4 private methods are executed with the right date.


testHarness ListState TTL not getting applied on Flink 1.8.2

I am testing a window function which has a listState, with TTL enabled.
Snippet of window function:
public class CustomWindowFunction extends ProcessWindowFunction<InputPOJO, OutputPOJO, String, TimeWindow> {
...
@Override
public void open(Configuration config) {
StateTtlConfig ttlConfig =
StateTtlConfig.newBuilder(listStateTTl)
.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
.setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired) // NOTE: NeverReturnExpired
.build();
listStateDescriptor = new ListStateDescriptor<>("unprocessedItems", InputPOJO.class);
listStateDescriptor.enableTimeToLive(ttlConfig);
}
@Override
public void process( String key, Context context, Iterable<InputPOJO> windowElements, Collector<OutputPOJO> out) throws Exception {
ListState<InputPOJO> listState = getRuntimeContext().getListState(listStateDescriptor);
....
Iterator<InputPOJO> iterator;
// Getting unexpired listStateItems for computation.
iterator = listState.get().iterator();
while (iterator.hasNext()) {
InputPOJO listStateInput = iterator.next();
System.out.println("There are unexpired elements in listState");
/** Business Logic to compute result using the unexpired values in listState**/
}
/** Business Logic to compute result using the current window elements.*/
// Adding unProcessed WindowElements to ListState(with TTL)
// NOTE: processed WindowElements are removed manually.
iterator = windowElements.iterator();
while (iterator.hasNext()) {
System.out.println("unProcessed Item added to ListState.")
InputPOJO unprocessedItem = iterator.next();
listState.add(unprocessedItem); // This part gets executed for listStateInput1
}
}
....
}
I am using testHarness to perform the integration test. I am testing the listState item count when the TTL for the listState is expired. Below is my test function snippet.
NOTE:
There is a custom allowedLateness which is implemented using a custom Timer.
private OneInputStreamOperatorTestHarness<InputPOJO, OutputPOJO> testHarness;
private CustomWindowFunction customWindowFunction;
@Before
public void setup_testHarness() throws Exception {
KeySelector<InputPOJO, String> keySelector = InputPOJO::getKey;
TypeInformation<InputPOJO> STRING_INT_TUPLE = TypeInformation.of(new TypeHint<InputPOJO>() {}); // Any suggestion ?
ListStateDescriptor<InputPOJO> stateDesc = new ListStateDescriptor<>("window-contents", STRING_INT_TUPLE.createSerializer(new ExecutionConfig())); // Any suggestion ?
/**
* Creating windowOperator for the below function
*
* <pre>
*
* DataStream<OutputPOJO> OutputPOJOStream =
* inputPOJOStream
* .keyBy(InputPOJO::getKey)
* .window(ProcessingTimeSessionWindows.withGap(Time.seconds(triggerMaximumTimeoutSeconds)))
* .trigger(new CustomTrigger(triggerAllowedLatenessMillis))
* .process(new CustomWindowFunction(windowListStateTtlMillis));
* </pre>
*/
customWindowFunction = new CustomWindowFunction(secondsToMillis(windowListStateTtlMillis));
WindowOperator<String, InputPOJO, Iterable<InputPOJO>, OutputPOJO, TimeWindow>
operator =
new WindowOperator<>(
// setting .window(ProcessingTimeSessionWindows.withGap(maxTimeout))
ProcessingTimeSessionWindows.withGap(Time.seconds(triggerMaximumTimeoutSeconds)),
new TimeWindow.Serializer(),
// setting .keyBy(InputPOJO::getKey)
keySelector,
BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()),
stateDesc,
// setting .process(new CustomWindowFunction(windowListStateTtlMillis))
new InternalIterableProcessWindowFunction<>(customWindowFunction),
// setting .trigger(new CustomTrigger(allowedLateness))
new CustomTrigger(secondsToMillis(allowedLatenessSeconds)),
0,
null);
// Creating testHarness for window operator
testHarness = new KeyedOneInputStreamOperatorTestHarness<>(operator, keySelector, BasicTypeInfo.STRING_TYPE_INFO);
// Setup and Open Test Harness
testHarness.setup();
testHarness.open();
}
@Test
public void test_listStateTtl_exclusion() throws Exception {
int allowedLatenessSeconds = 3;
int listStateTTL = 10;
//1. Arrange
InputPOJO listStateInput1 = new InputPOJO(1,"Arjun");
InputPOJO listStateInput2 = new InputPOJO(2,"Arun");
// 2. Act
// listStateInput1 comes at 1 sec
testHarness.setProcessingTime(secondsToMillis(1));
testHarness.processElement(new StreamRecord<>(listStateInput1));
// Setting current processing time to 1 + 3 = 4 > allowedLateness.
// Window.process() is called, and window is purged (FIRE_AND_PURGE)
// Expectation: listStateInput1 is put into listState with TTL (10 secs), before process() ends.
testHarness.setProcessingTime(secondsToMillis(4));
// Setting processing time after listStateTTL, ie 4 + listStateTTL(10) + 1 = 15
// Expectation: listStateInput1 is evicted from the listState (Fails)
testHarness.setProcessingTime(secondsToMillis(15));
// Using sleep(), the listStateTTL is getting applied to listState and listStateInput1 is evicted (Pass)
//Thread.sleep(secondsToMillis(15))
//Passing listStateInput2 to the test Harness
testHarness.setProcessingTime(secondsToMillis(16));
testHarness.processElement(new StreamRecord<>(listStateInput2));
// Setting processing time after allowedLateness = 16 + 3 + 1 = 20
testHarness.setProcessingTime(secondsToMillis(20));
// 3. Assert
List<StreamRecord<? extends OutputPOJO>> streamRecords = testHarness.extractOutputStreamRecords();
// Expectation: streamRecords will only contain listStateInput2, since listStateInput1 was evicted.
// Actual: Getting both listStateInput1 & listStateInput2 in the output.
}
I noticed that TTL is not getting applied by setting processing time. When I tried the same function with Thread.sleep(TTL), the result was as expected.
Is listState TTL using system time for eviction (with testHarness)?
Is there any way to test listStateTTL using testHarness?
A TTL test should be written the following way, driving the TTL clock through the harness's setStateTtlProcessingTime():
@Test
public void testSetTtlTimeProvider() throws Exception {
AbstractStreamOperator<Integer> operator = new AbstractStreamOperator<Integer>() {};
try (AbstractStreamOperatorTestHarness<Integer> result =
new AbstractStreamOperatorTestHarness<>(operator, 1, 1, 0)) {
result.config.setStateKeySerializer(IntSerializer.INSTANCE);
result.config.serializeAllConfigs();
Time timeToLive = Time.hours(1);
result.initializeState(OperatorSubtaskState.builder().build());
result.open();
ValueStateDescriptor<Integer> stateDescriptor =
new ValueStateDescriptor<>("test", IntSerializer.INSTANCE);
stateDescriptor.enableTimeToLive(StateTtlConfig.newBuilder(timeToLive).build());
KeyedStateBackend<Integer> keyedStateBackend = operator.getKeyedStateBackend();
ValueState<Integer> state =
keyedStateBackend.getPartitionedState(
VoidNamespace.INSTANCE,
VoidNamespaceSerializer.INSTANCE,
stateDescriptor);
int expectedValue = 42;
keyedStateBackend.setCurrentKey(1);
result.setStateTtlProcessingTime(0L);
state.update(expectedValue);
Assert.assertEquals(expectedValue, (int) state.value());
result.setStateTtlProcessingTime(timeToLive.toMilliseconds() + 1);
Assert.assertNull(state.value());
}
}
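In other words (my reading of the harness API above, not something stated in your snippet): state TTL is driven by the harness's own TTL time provider, so setProcessingTime() alone never expires TTL state. Since OneInputStreamOperatorTestHarness extends AbstractStreamOperatorTestHarness, the original test could presumably advance both clocks together, roughly like this sketch:
// Sketch only: keep the TTL clock in step with the processing-time clock.
testHarness.setProcessingTime(secondsToMillis(4));
testHarness.setStateTtlProcessingTime(secondsToMillis(4));
// ...
testHarness.setProcessingTime(secondsToMillis(15));
testHarness.setStateTtlProcessingTime(secondsToMillis(15)); // listStateInput1 should now be expired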

Need to implement Thread.sleep(100) in wait function context to evaluate the sleep time?

The following test case fails with a sleep of 100 ms but passes if the sleep is increased. Since a static sleep is a lazy approach, I want to replace it with a function that waits only as long as actually needed, so the test works reliably on different environments without a hard-coded sleep value.
public void testDeRegistration() throws Exception {
storeEntity(EntityType.PROCESS, "summarize4");
Process mockProcess = getStore().get(EntityType.PROCESS, "summarize4");
mockProcess.setParallel(2);
Date startTime = EntityUtil.getStartTime(mockProcess, cluster);
ExecutionInstance instance1 = new ProcessExecutionInstance(mockProcess, new DateTime(startTime), cluster);
// Schedule 3 instances.
SchedulerService.JobScheduleRequestBuilder request = (SchedulerService.JobScheduleRequestBuilder)
scheduler.createRequestBuilder(handler, instance1.getId());
request.setInstance(instance1);
scheduler.register(request.build());
ExecutionInstance instance2 = new ProcessExecutionInstance(mockProcess,
new DateTime(startTime.getTime() + 60000), cluster);
SchedulerService.JobScheduleRequestBuilder request2 = (SchedulerService.JobScheduleRequestBuilder)
scheduler.createRequestBuilder(handler, instance2.getId());
request2.setInstance(instance2);
scheduler.register(request2.build());
ExecutionInstance instance3 = new ProcessExecutionInstance(mockProcess,
new DateTime(startTime.getTime() + 120000), cluster);
SchedulerService.JobScheduleRequestBuilder request3 = (SchedulerService.JobScheduleRequestBuilder)
scheduler.createRequestBuilder(handler, instance3.getId());
request3.setInstance(instance3);
scheduler.register(request3.build());
// Abort third instance
stateStore.putExecutionInstance(new InstanceState(instance3));
scheduler.unregister(handler, instance3.getId());
Thread.sleep(100);
Assert.assertEquals(((MockDAGEngine) mockDagEngine).getTotalRuns(instance1), new Integer(1));
Assert.assertEquals(((MockDAGEngine) mockDagEngine).getTotalRuns(instance2), new Integer(1));
// Third instance should not run.
Assert.assertEquals(((MockDAGEngine) mockDagEngine).getTotalRuns(instance3), null);
}
Try Awaitility:
Awaitility.await().atMost(500, TimeUnit.MILLISECONDS)
.until(() -> Objects.equals(((MockDAGEngine) mockDagEngine).getTotalRuns(instance1), 1))
And do the same for the second instance.
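Putting it together, a minimal sketch of the end of the test with the static sleep removed (assumed imports: org.awaitility.Awaitility — com.jayway.awaitility in older versions — plus java.util.concurrent.TimeUnit and java.util.Objects; the timeout is only a generous upper bound, Awaitility returns as soon as the condition holds):
// Instead of Thread.sleep(100):
Awaitility.await().atMost(5, TimeUnit.SECONDS)
    .until(() -> Objects.equals(((MockDAGEngine) mockDagEngine).getTotalRuns(instance1), 1));
Awaitility.await().atMost(5, TimeUnit.SECONDS)
    .until(() -> Objects.equals(((MockDAGEngine) mockDagEngine).getTotalRuns(instance2), 1));
// The unregistered third instance should never run, so it can be asserted directly.
Assert.assertNull(((MockDAGEngine) mockDagEngine).getTotalRuns(instance3));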

MongoDB ACKNOWLEDGED write concern faster than UNACKNOWLEDGED?

I've got a very simple test program that performs faster with ACKNOWLEDGED bulk inserts than with UNACKNOWLEDGED. And it's not just a little faster - I'm seeing a factor of nearly 100!
My understanding of the difference between these two write concerns is solely that with ACKNOWLEDGED the client waits for confirmation from the server that the operation has been executed (but not necessarily made durable), while with UNACKNOWLEDGED the client only knows that the request made it out onto the wire. So it would seem preposterous that the former could actually perform at a higher speed, yet that's what I'm seeing.
I'm using the Java driver (v2.12.0) with Oracle's Java JDK v1.7.0_71, and mongo version 3.0.0 on 64-bit Windows 7. I'm running mongod, completely out-of-the-box (fresh install), no sharding or anything. And before each test I ensure that the collection is empty and has no non-default indexes.
I would appreciate any insight into why I'm consistently seeing the opposite of what I'd expect.
Thanks.
Here's my code:
package test;
import com.mongodb.BasicDBObject;
import com.mongodb.BulkWriteOperation;
import com.mongodb.BulkWriteResult;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import com.mongodb.WriteConcern;
import java.util.Arrays;
public class Test {
private static final int BATCHES = 100;
private static final int BATCH_SIZE = 1000;
private static final int COUNT = BATCHES * BATCH_SIZE;
public static void main(String[] argv) throws Exception {
DBCollection coll = new MongoClient(new ServerAddress()).getDB("test").getCollection("test");
for (String wcName : Arrays.asList("UNACKNOWLEDGED", "ACKNOWLEDGED")) {
WriteConcern wc = (WriteConcern) WriteConcern.class.getField(wcName).get(null);
coll.dropIndexes();
coll.remove(new BasicDBObject());
long start = System.currentTimeMillis();
BulkWriteOperation bulkOp = coll.initializeUnorderedBulkOperation();
for (int i = 1; i <= COUNT; i++) { // inclusive bound so the final batch is also executed
DBObject doc = new BasicDBObject().append("int", i).append("string", Integer.toString(i));
bulkOp.insert(doc);
if (i % BATCH_SIZE == 0) {
BulkWriteResult results = bulkOp.execute(wc);
if (wc == WriteConcern.ACKNOWLEDGED && results.getInsertedCount() != 1000) {
throw new RuntimeException("Bogus insert count: " + results.getInsertedCount());
}
bulkOp = coll.initializeUnorderedBulkOperation();
}
}
long time = System.currentTimeMillis() - start;
double rate = COUNT / (time / 1000.0);
System.out.printf("%s[w=%s,j=%s]: Inserted %d documents in %s @ %f/sec\n",
wcName, wc.getW(), wc.getJ(), COUNT, duration(time), rate);
}
}
private static String duration(long msec) {
return String.format("%d:%02d:%02d.%03d",
msec / (60 * 60 * 1000),
(msec % (60 * 60 * 1000)) / (60 * 1000),
(msec % (60 * 1000)) / 1000,
msec % 1000);
}
}
And here's typical output:
UNACKNOWLEDGED[w=0,j=false]: Inserted 100000 documents in 0:01:27.025 @ 1149.095088/sec
ACKNOWLEDGED[w=1,j=false]: Inserted 100000 documents in 0:00:00.927 @ 107874.865156/sec
EDIT
Ran more extensive tests, per request from Markus W. Mahlberg. For these tests, I ran the code with four write concerns: UNACKNOWLEDGED, ACKNOWLEDGED, JOURNALED, and FSYNCED. (I would expect this order to show decreasing speed.) I ran 112 repetitions, each of which performed 100 batches of 1000 inserts under each of the four write concerns, each time into an empty collection with no indexes. Code was identical to original post but with two additional write concerns, and with output to CSV format for easy analysis.
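For reference, a sketch of the only change made to the benchmark loop for these runs (the write-concern names match the fields reported in the summary below; the CSV output code is omitted):
// Same benchmark loop as in the original code, extended with the two extra write concerns.
for (String wcName : Arrays.asList("UNACKNOWLEDGED", "ACKNOWLEDGED", "JOURNALED", "FSYNCED")) {
    WriteConcern wc = (WriteConcern) WriteConcern.class.getField(wcName).get(null);
    // ... identical body: drop indexes, empty the collection, run 100 batches of 1000 inserts,
    // then record (elapsed time, docs/sec) as a CSV row instead of printing ...
}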
Results summary:
UNACKNOWLEDGED: 1147.105004 docs/sec avg, std dev 27.88577035
ACKNOWLEDGED: 77539.27653 docs/sec avg, std dev 1567.520303
JOURNALED: 29574.45243 docs/sec avg, std dev 123.9927554
FSYNCED: 29567.02467 docs/sec avg, std dev 147.6150994
The huge inverted performance difference between UNACKNOWLEDGED and ACKNOWLEDGED is what's got me baffled.
Here's the raw data if anyone cares for it ("time" is elapsed msec for 100*1000 insertions; "rate" is docs/second):
"UNACK time","UNACK rate","ACK time","ACK rate","JRNL time","JRNL rate","FSYNC time","FSYNC rate"
92815,1077.4120562409094,1348,74183.9762611276,3380,29585.798816568047,3378,29603.31557134399
90209,1108.5368422219512,1303,76745.97083653108,3377,29612.081729345577,3375,29629.62962962963
91089,1097.8273995762386,1319,75815.01137225171,3382,29568.30277942046,3413,29299.73630237328
90159,1109.1516099335618,1320,75757.57575757576,3375,29629.62962962963,3377,29612.081729345577
89922,1112.0749093658949,1315,76045.62737642587,3380,29585.798816568047,3376,29620.853080568722
89997,1111.1481493827573,1306,76569.67840735069,3381,29577.048210588586,3379,29594.55460195324
90141,1109.373093264996,1319,75815.01137225171,3386,29533.372711163614,3378,29603.31557134399
89771,1113.9454835080371,1325,75471.69811320755,3387,29524.65308532625,3521,28401.022436807725
89716,1114.6283828971423,1325,75471.69811320755,3379,29594.55460195324,3379,29594.55460195324
90205,1108.5859985588381,1323,75585.78987150417,3377,29612.081729345577,3376,29620.853080568722
90092,1109.976468498868,1328,75301.2048192771,3382,29568.30277942046,3379,29594.55460195324
89822,1113.3129968159249,1322,75642.965204236,3385,29542.097488921714,3383,29559.562518474726
89821,1113.3253916122064,1310,76335.87786259541,3380,29585.798816568047,3383,29559.562518474726
89945,1111.7905386625162,1318,75872.53414264036,3379,29594.55460195324,3379,29594.55460195324
89917,1112.1367483345753,1352,73964.49704142011,3381,29577.048210588586,3377,29612.081729345577
90358,1106.7088691648773,1303,76745.97083653108,3377,29612.081729345577,3380,29585.798816568047
90187,1108.8072560346836,1348,74183.9762611276,3387,29524.65308532625,3395,29455.081001472754
90634,1103.3387029150208,1322,75642.965204236,3384,29550.827423167848,3381,29577.048210588586
90148,1109.2869503483162,1331,75131.48009015778,3389,29507.22927117144,3381,29577.048210588586
89767,1113.9951207013714,1321,75700.22710068131,3380,29585.798816568047,3382,29568.30277942046
89910,1112.2233344455567,1321,75700.22710068131,3381,29577.048210588586,3385,29542.097488921714
89852,1112.9412812180028,1316,75987.84194528875,3381,29577.048210588586,3401,29403.116730373422
89537,1116.8567184515898,1319,75815.01137225171,3380,29585.798816568047,3380,29585.798816568047
89763,1114.0447623185498,1331,75131.48009015778,3380,29585.798816568047,3382,29568.30277942046
90070,1110.2475852115022,1325,75471.69811320755,3383,29559.562518474726,3378,29603.31557134399
89771,1113.9454835080371,1302,76804.91551459293,3389,29507.22927117144,3378,29603.31557134399
90518,1104.7526458825869,1325,75471.69811320755,3383,29559.562518474726,3380,29585.798816568047
90314,1107.2480457071995,1322,75642.965204236,3380,29585.798816568047,3384,29550.827423167848
89874,1112.6688474976079,1329,75244.54477050414,3386,29533.372711163614,3379,29594.55460195324
89954,1111.6793027547415,1318,75872.53414264036,3381,29577.048210588586,3381,29577.048210588586
89903,1112.3099340400208,1325,75471.69811320755,3379,29594.55460195324,3388,29515.9386068477
89842,1113.0651588343983,1314,76103.500761035,3382,29568.30277942046,3377,29612.081729345577
89746,1114.2557885588217,1325,75471.69811320755,3378,29603.31557134399,3385,29542.097488921714
93249,1072.3975592231552,1327,75357.95026375283,3381,29577.048210588586,3377,29612.081729345577
93638,1067.9425019756936,1331,75131.48009015778,3377,29612.081729345577,3392,29481.132075471698
87775,1139.2765593847905,1340,74626.86567164179,3379,29594.55460195324,3378,29603.31557134399
86495,1156.136192843517,1271,78678.20613690009,3375,29629.62962962963,3376,29620.853080568722
85584,1168.442699570013,1276,78369.90595611285,3432,29137.529137529138,3376,29620.853080568722
86648,1154.094728095282,1278,78247.2613458529,3382,29568.30277942046,3411,29316.91586045148
85745,1166.2487608606916,1274,78492.93563579278,3380,29585.798816568047,3363,29735.355337496283
85813,1165.3246011676551,1279,78186.08287724786,3375,29629.62962962963,3376,29620.853080568722
85831,1165.0802157728558,1288,77639.75155279503,3376,29620.853080568722,3377,29612.081729345577
85807,1165.4060857505797,1259,79428.11755361399,3466,28851.702250432772,3375,29629.62962962963
85964,1163.2776511097668,1258,79491.2559618442,3378,29603.31557134399,3378,29603.31557134399
85854,1164.7680946723508,1257,79554.49482895785,3382,29568.30277942046,3375,29629.62962962963
85787,1165.6777833471272,1257,79554.49482895785,3377,29612.081729345577,3377,29612.081729345577
85537,1169.084723569917,1272,78616.35220125786,3377,29612.081729345577,3377,29612.081729345577
85408,1170.8505058074186,1271,78678.20613690009,3375,29629.62962962963,3425,29197.080291970804
85577,1168.5382754712132,1261,79302.14115781126,3378,29603.31557134399,3375,29629.62962962963
85663,1167.365140142185,1261,79302.14115781126,3377,29612.081729345577,3378,29603.31557134399
85812,1165.3381811401669,1273,78554.59544383347,3377,29612.081729345577,3378,29603.31557134399
85783,1165.7321380693145,1273,78554.59544383347,3377,29612.081729345577,3376,29620.853080568722
85682,1167.106276697556,1280,78125.0,3381,29577.048210588586,3376,29620.853080568722
85753,1166.1399601180133,1260,79365.07936507936,3379,29594.55460195324,3377,29612.081729345577
85573,1168.5928972923703,1332,75075.07507507507,3377,29612.081729345577,3377,29612.081729345577
86206,1160.0120641254668,1263,79176.56373713381,3376,29620.853080568722,3383,29559.562518474726
85593,1168.31983923919,1264,79113.92405063291,3380,29585.798816568047,3378,29603.31557134399
85903,1164.1036983574495,1261,79302.14115781126,3378,29603.31557134399,3377,29612.081729345577
85516,1169.3718134618082,1277,78308.53563038372,3375,29629.62962962963,3376,29620.853080568722
85553,1168.8660830128692,1291,77459.3338497289,3490,28653.295128939826,3377,29612.081729345577
85550,1168.907071887785,1293,77339.52049497294,3379,29594.55460195324,3379,29594.55460195324
85610,1168.0878402055835,1298,77041.60246533128,3384,29550.827423167848,3378,29603.31557134399
85522,1169.2897733916418,1267,78926.59826361484,3379,29594.55460195324,3379,29594.55460195324
85595,1168.2925404521293,1276,78369.90595611285,3379,29594.55460195324,3376,29620.853080568722
85451,1170.2613193526115,1286,77760.49766718507,3376,29620.853080568722,3391,29489.82601002654
85792,1165.609847071988,1252,79872.20447284346,3382,29568.30277942046,3376,29620.853080568722
86501,1156.0559993526085,1255,79681.2749003984,3379,29594.55460195324,3379,29594.55460195324
85718,1166.616113301757,1269,78802.20646178094,3382,29568.30277942046,3376,29620.853080568722
85605,1168.156065650371,1265,79051.38339920949,3378,29603.31557134399,3380,29585.798816568047
85398,1170.9876109510762,1274,78492.93563579278,3377,29612.081729345577,3395,29455.081001472754
86370,1157.809424568716,1273,78554.59544383347,3376,29620.853080568722,3376,29620.853080568722
85905,1164.0765962400326,1280,78125.0,3379,29594.55460195324,3379,29594.55460195324
86020,1162.5203441060219,1285,77821.01167315176,3375,29629.62962962963,3376,29620.853080568722
85726,1166.5072440099852,1272,78616.35220125786,3380,29585.798816568047,3380,29585.798816568047
85628,1167.8422945765403,1270,78740.15748031496,3379,29594.55460195324,3376,29620.853080568722
85989,1162.93944574306,1258,79491.2559618442,3376,29620.853080568722,3378,29603.31557134399
85981,1163.047650062223,1276,78369.90595611285,3376,29620.853080568722,3376,29620.853080568722
86558,1155.2947156819703,1269,78802.20646178094,3385,29542.097488921714,3378,29603.31557134399
85745,1166.2487608606916,1293,77339.52049497294,3378,29603.31557134399,3375,29629.62962962963
85544,1168.9890582624148,1266,78988.94154818325,3376,29620.853080568722,3377,29612.081729345577
85536,1169.0983913206135,1268,78864.35331230283,3380,29585.798816568047,3380,29585.798816568047
85477,1169.9053546568082,1278,78247.2613458529,3388,29515.9386068477,3377,29612.081729345577
85434,1170.4941826439124,1253,79808.45969672786,3378,29603.31557134399,3375,29629.62962962963
85609,1168.1014846569872,1276,78369.90595611285,3364,29726.516052318668,3376,29620.853080568722
85740,1166.316771635176,1258,79491.2559618442,3377,29612.081729345577,3377,29612.081729345577
85640,1167.6786548341897,1266,78988.94154818325,3378,29603.31557134399,3377,29612.081729345577
85648,1167.569587147394,1281,78064.012490242,3378,29603.31557134399,3376,29620.853080568722
85697,1166.9019919017,1287,77700.0777000777,3377,29612.081729345577,3378,29603.31557134399
85696,1166.9156086631815,1256,79617.83439490446,3379,29594.55460195324,3376,29620.853080568722
85782,1165.7457275419085,1258,79491.2559618442,3379,29594.55460195324,3379,29594.55460195324
85837,1164.9987767512844,1264,79113.92405063291,3379,29594.55460195324,3376,29620.853080568722
85632,1167.7877428998504,1278,78247.2613458529,3380,29585.798816568047,3459,28910.089621277824
85517,1169.3581393173288,1256,79617.83439490446,3379,29594.55460195324,3380,29585.798816568047
85990,1162.925921618793,1302,76804.91551459293,3380,29585.798816568047,3377,29612.081729345577
86690,1153.535586572846,1281,78064.012490242,3375,29629.62962962963,3381,29577.048210588586
86045,1162.1825788831425,1274,78492.93563579278,3380,29585.798816568047,3383,29559.562518474726
86146,1160.820003250296,1274,78492.93563579278,3382,29568.30277942046,3418,29256.87536571094
86027,1162.4257500552153,1280,78125.0,3382,29568.30277942046,3381,29577.048210588586
85992,1162.8988743138896,1281,78064.012490242,3376,29620.853080568722,3380,29585.798816568047
85857,1164.727395553071,1288,77639.75155279503,3382,29568.30277942046,3376,29620.853080568722
85853,1164.7816616775185,1284,77881.6199376947,3375,29629.62962962963,3374,29638.41138114997
86069,1161.8585088707896,1295,77220.07722007722,3378,29603.31557134399,3378,29603.31557134399
85842,1164.930919596468,1296,77160.49382716049,3378,29603.31557134399,3376,29620.853080568722
86195,1160.160102094089,1301,76863.95080707148,3376,29620.853080568722,3379,29594.55460195324
85523,1169.2761011657683,1305,76628.35249042146,3376,29620.853080568722,3378,29603.31557134399
85752,1166.1535591006625,1275,78431.37254901961,3374,29638.41138114997,3377,29612.081729345577
85441,1170.3982865369085,1286,77760.49766718507,3377,29612.081729345577,3380,29585.798816568047
85566,1168.6884977678048,1265,79051.38339920949,3377,29612.081729345577,3380,29585.798816568047
85523,1169.2761011657683,1267,78926.59826361484,3377,29612.081729345577,3376,29620.853080568722
86152,1160.7391586962578,1285,77821.01167315176,3374,29638.41138114997,3378,29603.31557134399
85684,1167.0790345922226,1272,78616.35220125786,3378,29603.31557134399,3384,29550.827423167848
86252,1159.3934053703103,1271,78678.20613690009,3376,29620.853080568722,3377,29612.081729345577

cron library for java

I am looking for a cron expression library in java. Something that can parse cron expressions and return me future fire times for the trigger.
An API along the lines of:
CronExpression cronExpression = new CronExpression("0 30 4 * * *");
List<Date> fireTimes = cronExpression.getFireTimes(todaysDate, nextWeekDate);
I don't want to use something as complicated as quartz. The purpose is to basically use cron like a regex for timings. That's all. I do not want a background scheduler.
I tried googling but wasn't able to find anything very helpful. Any suggestions would be appreciated.
Regards,
Pulkit
P.S. - I looked at using the CronExpression class from Quartz. It wasn't very helpful; it was failing some tests.
You can definitely make use of cron4j for cron expressions and scheduling.
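For just computing future fire times (no scheduler), a minimal sketch with cron4j's Predictor; treat the two-argument constructor as an assumption to check against the cron4j Javadoc, and note that cron4j patterns have five fields, so there is no seconds field:
// Assumed import: it.sauronsoftware.cron4j.Predictor
Predictor predictor = new Predictor("30 4 * * *", todaysDate); // "every day at 04:30", starting at todaysDate
List<Date> fireTimes = new ArrayList<Date>();
Date next = predictor.nextMatchingDate();
while (next.before(nextWeekDate)) {
    fireTimes.add(next);
    next = predictor.nextMatchingDate();
}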
also you might find this post from chirag interesting,
cronTrigger.getExpressionSummary()
Example:
CronTrigger t = new CronTrigger();
t.setCronExpression("0 30 10-13 ? * WED,FRI");
System.out.println(""+t.getExpressionSummary());
Output:
seconds: 0
minutes: 30
hours: 10,11,12,13
daysOfMonth: ?
months: *
daysOfWeek: 4,6
lastdayOfWeek: false
nearestWeekday: false
NthDayOfWeek: 0
lastdayOfMonth: false
years: *
Sounds like cron-utils may be useful to you. It is not a scheduler; it provides methods to handle a cron definition and return the last/next execution given a DateTime.
Here a snippet from the docs:
CronDefinition cronDefinition = CronDefinitionBuilder.instanceDefinitionFor(QUARTZ);
CronParser parser = new CronParser(cronDefinition);
//Get date for last execution
DateTime now = DateTime.now();
ExecutionTime executionTime = ExecutionTime.forCron(parser.parse("* * * * * * *"));
DateTime lastExecution = executionTime.lastExecution(now);
//Get date for next execution
DateTime nextExecution = executionTime.nextExecution(now);
//Time from last execution
Duration timeFromLastExecution = executionTime.timeFromLastExecution(now);
//Time to next execution
Duration timeToNextExecution = executionTime.timeToNextExecution(now);
I was able to solve the problem using dummy triggers in Quartz. I didn't schedule any jobs; I simply used the trigger API to compute all the times the job should fire based on a cron expression.
Best,
Pulkit
OperableTrigger trigger = (OperableTrigger)TriggerBuilder
.newTrigger()
.withIdentity("trigger1", "group1")
.withSchedule(
SimpleScheduleBuilder.simpleSchedule()
.withIntervalInSeconds(5).repeatForever()
)
.build();
Date startDate = new Date(); Date endDate = new Date(startDate.getTime() + 1000000);
List<Date> dateList = TriggerUtils.computeFireTimesBetween(trigger, new BaseCalendar(), startDate, endDate);
System.out.println("******Times**********");
for(Date date : dateList) {
System.out.println(date.toString());
}
System.out.println("*********************");
In case this is helpful to others too, I tried the other options but wasn't satisfied with any and ended up writing my own very small library just for that purpose, crony. It's available on maven-central.
The code you wanted would be, with crony:
Cron cronExpression = Cron.parseCronString("0 30 4 * * *").get();
Stream<ZonedDateTime> fireTimes = CronExecution
.getNextExecutionDates(cronExpression, todaysDate)
.takeUntil(d -> d.isAfter(nextWeekDate));

Running two jobs with Quartz in Java

I have Quartz coded as follows and the first job runs perfectly:
JobDetail jd = null;
CronTrigger ct = null;
jd = new JobDetail("Job1", "Group1", Job1.class);
ct = new CronTrigger("cronTrigger1","Group1","0/5 * * * * ?");
scheduler.scheduleJob(jd, ct);
jd = new JobDetail("Job2", "Group2", Job2.class);
ct = new CronTrigger("cronTrigger2","Group2","0/20 * * * * ?");
scheduler.scheduleJob(jd, ct);
But I'm finding that Job2, which is a completely separate job to Job1, will not execute.
The scheduler is started using a listener in Java. I've also tried using scheduler.addJob(jd, true); but nothing changes. I'm running Java in a JVM on Windows 7.
How do you know the job does not run? If you substitute Job1.class for Job2.class, does it still fail? What if you swap the order in which they're added to the scheduler, or only leave Job2? Or if you strip Job2 down so it only prints a message to the console?
I suspect Job2 execution dies with an exception.
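One way to check that suspicion (a sketch using only the standard org.quartz.Job interface, nothing specific to your setup) is to strip Job2 down and make any failure visible instead of letting it die silently:
public class Job2 implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            System.out.println("Job2 fired at " + new java.util.Date());
            // ... put the real Job2 logic back here once the trigger is confirmed to fire ...
        } catch (Exception e) {
            e.printStackTrace(); // make the failure visible in the console/log
            throw new JobExecutionException(e);
        }
    }
}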
