I want to create a time-based rule that is triggered every 5 minutes, and the Drools documentation states:
Conversely, when the Drools engine runs in passive mode (i.e. using fireAllRules instead of fireUntilHalt), by default it doesn't fire consequences of timed rules unless fireAllRules is invoked again. However, it is possible to change this default behavior by configuring the KieSession with a TimedRuleExecutionOption, as shown in the following example:
KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption( TimedRuleExecutionOption.YES );
KieSession ksession = kbase.newKieSession(ksconf, null);
However, I am not accessing the KieSession object directly, because I am using the Java REST API to send requests to a Drools project deployed on a KIE Execution Server, like so (example taken directly from the Drools documentation):
public class MyConfigurationObject {

    private static final String URL = "http://localhost:8080/kie-server/services/rest/server";
    private static final String USER = "baAdmin";
    private static final String PASSWORD = "password#1";
    private static final MarshallingFormat FORMAT = MarshallingFormat.JSON;

    private static KieServicesConfiguration conf;
    private static KieServicesClient kieServicesClient;

    public static void initializeKieServerClient() {
        conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD);
        conf.setMarshallingFormat(FORMAT);
        kieServicesClient = KieServicesFactory.newKieServicesClient(conf);
    }

    public void executeCommands() {
        String containerId = "hello";
        System.out.println("== Sending commands to the server ==");
        RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
        KieCommands commandsFactory = KieServices.Factory.get().getCommands();

        Command<?> insert = commandsFactory.newInsert("Some String OBJ");
        Command<?> fireAllRules = commandsFactory.newFireAllRules();
        Command<?> batchCommand = commandsFactory.newBatchExecution(Arrays.asList(insert, fireAllRules));

        ServiceResponse<ExecutionResults> executeResponse = rulesClient.executeCommandsWithResults(containerId, batchCommand);

        if (executeResponse.getType() == ResponseType.SUCCESS) {
            System.out.println("Commands executed with success! Response: ");
            System.out.println(executeResponse.getResult());
        } else {
            System.out.println("Error executing rules. Message: ");
            System.out.println(executeResponse.getMsg());
        }
    }
}
So I'm a bit confused: how can I pass this TimedRuleExecutionOption to the session?
I've already found a workaround by sending a FireAllRules command periodically (sketched below), but I'd like to know if I can configure this session option so that I don't have to add periodic triggering for every timed event I want to create.
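For reference, a minimal sketch of that workaround, reusing the rulesClient from the snippet above (the container id "hello" and the 5-minute interval are placeholders):

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    // Periodically fire the rules so timed consequences get evaluated
    KieCommands commandsFactory = KieServices.Factory.get().getCommands();
    Command<?> fireAllRules = commandsFactory.newFireAllRules();
    Command<?> batch = commandsFactory.newBatchExecution(Arrays.asList(fireAllRules));
    rulesClient.executeCommandsWithResults("hello", batch);
}, 0, 5, TimeUnit.MINUTES);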
Also, I've tried using FireUntilHalt instead of FireAllRules, but to my understanding that command blocks the execution thread on the server and I have to send a HaltCommand at some point, all of which I would like to avoid since I have a multi-threaded client that sends events to the server.
pass "-Ddrools.timedRuleExecution=true" while starting server instance where kie-server.war is deployed.
You can use the Drools cron function. It acts as a timer and invokes the rule based on the cron expression. For example, to execute a rule once every 5 minutes:
rule "Send SMS every 5 minutes"
timer (cron:* 0/5 * * * ?)
when
$a : Event( )
then
end
You can find an explanation here.
I'd like to join data coming in from two Kafka topics ("left" and "right").
Matching records are to be joined using an ID, but if a "left" or a "right" record is missing, the other one should be passed downstream after a certain timeout. Therefore I have chosen to use the coGroup function.
This works, but there is one problem: If there is no message at all, there is always at least one record which stays in an internal buffer for good. It gets pushed out when new messages arrive. Otherwise it is stuck.
The expected behaviour is that all records should be pushed out after the configured idle timeout has been reached.
Some information which might be relevant:
Flink 1.14.4
The Flink parallelism is set to 8, so is the number of partitions in both Kafka topics.
Flink checkpointing is enabled
Event-time processing is to be used
Lombok is used: So val is like final var
Some code snippets:
Relevant join settings
public static final int AUTO_WATERMARK_INTERVAL_MS = 500;
public static final Duration SOURCE_MAX_OUT_OF_ORDERNESS = Duration.ofMillis(4000);
public static final Duration SOURCE_IDLE_TIMEOUT = Duration.ofMillis(1000);
public static final Duration TRANSFORMATION_MAX_OUT_OF_ORDERNESS = Duration.ofMillis(5000);
public static final Duration TRANSFORMATION_IDLE_TIMEOUT = Duration.ofMillis(1000);
public static final Time JOIN_WINDOW_SIZE = Time.milliseconds(1500);
Create KafkaSource
private static KafkaSource<JoinRecord> createKafkaSource(Config config, String topic) {
    val properties = KafkaConfigUtils.createConsumerConfig(config);

    val deserializationSchema = new KafkaRecordDeserializationSchema<JoinRecord>() {
        @Override
        public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<JoinRecord> out) {
            val m = JsonUtils.deserialize(record.value(), JoinRecord.class);
            val copy = m.toBuilder()
                    .partition(record.partition())
                    .build();
            out.collect(copy);
        }

        @Override
        public TypeInformation<JoinRecord> getProducedType() {
            return TypeInformation.of(JoinRecord.class);
        }
    };

    return KafkaSource.<JoinRecord>builder()
            .setProperties(properties)
            .setBootstrapServers(config.kafkaBootstrapServers)
            .setTopics(topic)
            .setGroupId(config.kafkaInputGroupIdPrefix + "-" + String.join("_", topic))
            .setDeserializer(deserializationSchema)
            .setStartingOffsets(OffsetsInitializer.latest())
            .build();
}
Create DataStreamSource
Then the DataStreamSource is built on top of the KafkaSource:
Configure "max out of orderness"
Configure "idleness"
Extract timestamp from record, to be used for event time processing
private static DataStreamSource<JoinRecord> createLeftSource(Config config,
                                                             StreamExecutionEnvironment env) {
    val leftKafkaSource = createLeftKafkaSource(config);
    val leftWms = WatermarkStrategy
            .<JoinRecord>forBoundedOutOfOrderness(SOURCE_MAX_OUT_OF_ORDERNESS)
            .withIdleness(SOURCE_IDLE_TIMEOUT)
            .withTimestampAssigner((joinRecord, __) -> joinRecord.timestamp.toEpochSecond() * 1000L);

    return env.fromSource(leftKafkaSource, leftWms, "left-kafka-source");
}
Use keyBy
The keyed sources are created on top of the DataSource instances like this:
Again configure "out of orderness" and "idleness"
Again extract timestamp
val leftWms = WatermarkStrategy
        .<JoinRecord>forBoundedOutOfOrderness(TRANSFORMATION_MAX_OUT_OF_ORDERNESS)
        .withIdleness(TRANSFORMATION_IDLE_TIMEOUT)
        .withTimestampAssigner((joinRecord, __) -> {
            if (VERBOSE_JOIN)
                log.info("Left : " + joinRecord);
            return joinRecord.timestamp.toEpochSecond() * 1000L;
        });

val leftKeyedSource = leftSource
        .keyBy(jr -> jr.id)
        .assignTimestampsAndWatermarks(leftWms)
        .name("left-keyed-source");
Join using coGroup
The join then combines the left and the right keyed sources:
val joinedStream = leftKeyedSource
        .coGroup(rightKeyedSource)
        .where(left -> left.id)
        .equalTo(right -> right.id)
        .window(TumblingEventTimeWindows.of(JOIN_WINDOW_SIZE))
        .apply(new CoGroupFunction<JoinRecord, JoinRecord, JoinRecord>() {
            @Override
            public void coGroup(Iterable<JoinRecord> leftRecords,
                                Iterable<JoinRecord> rightRecords,
                                Collector<JoinRecord> out) {
                // Transform
                val result = ...;
                out.collect(result);
            }
        });
Write stream to console
The resulting joinedStream is written to the console:
val consoleSink = new PrintSinkFunction<JoinRecord>();
joinedStream.addSink(consoleSink);
How can I configure this join operation, so that all records are pushed downstream after the configured idle timeout?
If it can't be done this way: Is there another option?
This is the expected behavior. withIdleness doesn't try to handle the case where all streams are idle. It only helps in cases where there are still events flowing from at least one source partition/shard/split.
To get the behavior you desire (in the context of a continuous streaming job), you'll have to implement a custom watermark strategy that advances the watermark based on a processing time timer. Here's an implementation that uses the legacy watermark API.
On the other hand, if the job is complete and you just want to drain the final results before shutting it down, you can use the --drain option when you stop the job. Or if you use bounded sources this will happen automatically.
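The linked implementation uses the legacy AssignerWithPeriodicWatermarks API. As a rough sketch of the same idea on the current WatermarkGenerator interface (the class name and the exact way the processing-time bound is mixed in are my own assumptions, not the linked code):

public class ProcessingTimeBackedWatermarks<T> implements WatermarkGenerator<T> {

    private final long maxOutOfOrdernessMs;
    private long maxTimestampSeen = Long.MIN_VALUE;

    public ProcessingTimeBackedWatermarks(long maxOutOfOrdernessMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
    }

    @Override
    public void onEvent(T event, long eventTimestamp, WatermarkOutput output) {
        maxTimestampSeen = Math.max(maxTimestampSeen, eventTimestamp);
    }

    @Override
    public void onPeriodicEmit(WatermarkOutput output) {
        // Never let the watermark fall behind "wall clock - maxOutOfOrderness",
        // even when no events arrive, so buffered windows eventually fire.
        // (Watermark here is org.apache.flink.api.common.eventtime.Watermark.)
        long lowerBound = System.currentTimeMillis() - maxOutOfOrdernessMs;
        output.emitWatermark(new Watermark(Math.max(maxTimestampSeen, lowerBound) - 1));
    }
}

It would be plugged in instead of forBoundedOutOfOrderness:

val wms = WatermarkStrategy
        .<JoinRecord>forGenerator(ctx ->
                new ProcessingTimeBackedWatermarks<>(TRANSFORMATION_MAX_OUT_OF_ORDERNESS.toMillis()))
        .withTimestampAssigner((joinRecord, __) -> joinRecord.timestamp.toEpochSecond() * 1000L);

Note that mixing processing time into event-time watermarks trades correctness for liveness: records arriving later than the wall-clock bound will be dropped from the join windows.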
I have a Spring Boot application which iteratively calls a MockServer instance via a Hystrix command with a fallback method.
The MockServer is configured to always respond with status code 500. When running without circuitBreaker.sleepWindowInMilliseconds configured, everything works fine: the call is made to the MockServer and then the fallback method is invoked.
After setting the circuitBreaker.sleepWindowInMilliseconds value to 5 minutes or so, I would expect that for 5 minutes no calls are made to the MockServer, with all calls going directly to the fallback method, but that's not the case.
It looks like the circuitBreaker.sleepWindowInMilliseconds configuration is ignored.
For instance, if I reconfigure the MockServer to reply with status code 200 while the iteration is still running, it immediately prints the "mockservice response", without waiting 5 minutes.
In the Spring Boot main application class:
@RequestMapping("/iterate")
public void iterate() {
    for (int i = 1; i < 100; i++) {
        try {
            System.out.println(bookService.readingMockService());
            Thread.sleep(3000);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
In the Spring Boot service:
@HystrixCommand(groupKey = "ReadingMockService", commandKey = "ReadingMockService", threadPoolKey = "ReadingMockService", fallbackMethod = "reliableMock", commandProperties = {
        @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "300000") })
public String readingMockService() {
    URI uri = URI.create("http://localhost:1080/askmock");
    return this.restTemplate.getForObject(uri, String.class);
}
Also, the MockServer is running on the same machine and is configured like this:
new MockServerClient("127.0.0.1", 1080).reset();

new MockServerClient("127.0.0.1", 1080)
        .when(request("/askmock"))
        .respond(response()
                .withStatusCode(500)
                .withBody("mockservice response")
                .applyDelay());
Found the problem:
This property (...circuitBreaker.sleepWindowInMilliseconds) works together with another one (...circuitBreaker.requestVolumeThreshold).
If not specifically set, the latter defaults to 20, meaning that Hystrix will first try to connect 20 times the usual way, and only afterwards does sleepWindowInMilliseconds get activated, routing everything to the fallback.
Also, the circuit breaker opens only if the percentage of failed calls exceeds circuitBreaker.errorThresholdPercentage
and, at the same time, the total number of failed calls exceeds circuitBreaker.requestVolumeThreshold, all within a window of metrics.rollingStats.timeInMilliseconds.
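To make this concrete, here is a sketch based on the question's command with all four properties set together (the threshold values are illustrative assumptions, not recommendations):

@HystrixCommand(groupKey = "ReadingMockService", commandKey = "ReadingMockService",
        threadPoolKey = "ReadingMockService", fallbackMethod = "reliableMock", commandProperties = {
        @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "5"),
        @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
        @HystrixProperty(name = "metrics.rollingStats.timeInMilliseconds", value = "10000"),
        @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "300000") })
public String readingMockService() {
    URI uri = URI.create("http://localhost:1080/askmock");
    return this.restTemplate.getForObject(uri, String.class);
}

With these settings, the circuit opens once at least 5 calls have failed at a rate of 50% or more within the 10-second rolling window; only then does the 5-minute sleep window keep everything on the fallback.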
From the docs:
https://github.com/Netflix/Hystrix/wiki/configuration#circuitBreaker.sleepWindowInMilliseconds
and by looking at the source code:
https://github.com/Netflix/Hystrix/blob/master/hystrix-core/src/main/java/com/netflix/hystrix/HystrixCommandProperties.java
Using
@HystrixProperty(name = "hystrix.command.ReadingMockService.circuitBreaker.sleepWindowInMilliseconds", value = "300000")
should work.
I have been working on a process that continuously monitors a distributed atomic long counter. It monitors the counter every minute using the getCounter method of the class below. In fact, I have multiple threads running, each of which monitors a different counter (a distributed atomic long) stored in the Zookeeper nodes. Each thread specifies the path of its counter via the parameters of the getCounter method.
public class TagserterZookeeperManager {

    public enum ZkClient {
        COUNTER("10.11.18.25:2181"); // Integration URL

        private CuratorFramework client;

        private ZkClient(String servers) {
            Properties props = TagserterConfigs.ZOOKEEPER.getProperties();
            String zkFromConfig = props.getProperty("servers", "");
            if (zkFromConfig != null && !zkFromConfig.isEmpty()) {
                servers = zkFromConfig.trim();
            }
            ExponentialBackoffRetry exponentialBackoffRetry = new ExponentialBackoffRetry(1000, 3);
            client = CuratorFrameworkFactory.newClient(servers, exponentialBackoffRetry);
            client.start();
        }

        public CuratorFramework getClient() {
            return client;
        }
    }

    public static String buildPath(String... node) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < node.length; i++) {
            if (node[i] != null && !node[i].isEmpty()) {
                sb.append("/");
                sb.append(node[i]);
            }
        }
        return sb.toString();
    }

    public static DistributedAtomicLong getCounter(String taskType, int hid, String jobId, String countType) {
        String path = buildPath(taskType, hid + "", jobId, countType);
        Builder builder = PromotedToLock.builder().lockPath(path + "/lock").retryPolicy(new ExponentialBackoffRetry(10, 10));
        DistributedAtomicLong count = new DistributedAtomicLong(ZkClient.COUNTER.getClient(), path, new RetryNTimes(5, 20), builder.build());
        return count;
    }
}
From within the threads, this is how I am calling this method:
DistributedAtomicLong counterTotal = TagserterZookeeperManager
.getCounter("testTopic", hid, jobId, "test");
Now, it seems that after the threads have run for a few hours, at some stage I start getting the following org.apache.zookeeper.KeeperException$ConnectionLossException inside the getCounter method, where it tries to read the count:
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /contentTaskProd
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1073)
        at org.apache.curator.utils.ZKPaths.mkdirs(ZKPaths.java:215)
        at org.apache.curator.utils.EnsurePath$InitialHelper$1.call(EnsurePath.java:148)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
        at org.apache.curator.utils.EnsurePath$InitialHelper.ensure(EnsurePath.java:141)
        at org.apache.curator.utils.EnsurePath.ensure(EnsurePath.java:99)
        at org.apache.curator.framework.recipes.atomic.DistributedAtomicValue.getCurrentValue(DistributedAtomicValue.java:254)
        at org.apache.curator.framework.recipes.atomic.DistributedAtomicValue.get(DistributedAtomicValue.java:91)
        at org.apache.curator.framework.recipes.atomic.DistributedAtomicLong.get(DistributedAtomicLong.java:72)
        ...
I keep getting this exception from then on for a while, and I get the feeling it is causing internal memory leaks that eventually cause an OutOfMemory error, at which point the whole process bails out. Does anybody have any idea what the reason for this could be? Why would Zookeeper suddenly start throwing the connection loss exception? After the process bails out, I can manually connect to Zookeeper through another small console program that I have written (also using Curator), and everything looks good there.
In order to monitor a node in Zookeeper using Curator, you can use a NodeCache. This won't solve your connection problems, but instead of polling the node once a minute you will get a push event when it changes.
In my experience, the NodeCache handles disconnections and connection resumes quite well.
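A minimal sketch of the NodeCache approach, reusing the client and path-building from the question's TagserterZookeeperManager (the listener body is a placeholder):

CuratorFramework client = TagserterZookeeperManager.ZkClient.COUNTER.getClient();
String path = TagserterZookeeperManager.buildPath("testTopic", hid + "", jobId, "test");

final NodeCache cache = new NodeCache(client, path);
cache.getListenable().addListener(new NodeCacheListener() {
    @Override
    public void nodeChanged() throws Exception {
        ChildData data = cache.getCurrentData();
        if (data != null) {
            // React to the counter update here instead of polling every minute
        }
    }
});
cache.start(true); // true = prime the cache with the node's current data before returning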
I'm experimenting with Java-flavored ZMQ to test the benefits of using PGM over TCP in my project. So I changed the weather example from the ZMQ guide to use the epgm transport.
Everything compiles and runs, but nothing is being sent or received. If I change the transport back to TCP, the server receives the messages sent from the client and I get the console output I'm expecting.
So, what are the requirements for using PGM? I changed the string that I'm passing to the bind and connect methods to follow the ZMQ API for zmq_pgm: "transport://interface;multicast address:port". That didn't work: I get an invalid argument error whenever I attempt to use this format. So I simplified it by dropping the interface and semicolon, which "works", but I'm not getting any results.
I haven't been able to find a jzmq example that uses pgm/epgm, and the API documentation for the Java binding does not define the appropriate string format for an endpoint passed to bind or connect. So what am I missing here? Do I have to use different hosts for the client and the server?
One thing of note is that I'm running my code on a VirtualBox VM (Ubuntu 14.04/OSX Mavericks host). I'm not sure if that has anything to do with the issue I'm currently facing.
Server:
public class wuserver {

    public static void main(String[] args) throws Exception {
        // Prepare our context and publisher
        ZMQ.Context context = ZMQ.context(1);

        ZMQ.Socket publisher = context.socket(ZMQ.PUB);
        publisher.bind("epgm://xx.x.x.xx:5556");
        publisher.bind("ipc://weather");

        // Initialize random number generator
        Random srandom = new Random(System.currentTimeMillis());
        while (!Thread.currentThread().isInterrupted()) {
            // Get values that will fool the boss
            int zipcode, temperature, relhumidity;
            zipcode = 10000 + srandom.nextInt(10000);
            temperature = srandom.nextInt(215) - 80 + 1;
            relhumidity = srandom.nextInt(50) + 10 + 1;

            // Send message to all subscribers
            String update = String.format("%05d %d %d", zipcode, temperature, relhumidity);
            publisher.send(update, 0);
        }

        publisher.close();
        context.term();
    }
}
Client:
public class wuclient {

    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);

        // Socket to talk to server
        System.out.println("Collecting updates from weather server");
        ZMQ.Socket subscriber = context.socket(ZMQ.SUB);
        //subscriber.connect("tcp://localhost:5556");
        subscriber.connect("epgm://xx.x.x.xx:5556");

        // Subscribe to zipcode, default is NYC, 10001
        String filter = (args.length > 0) ? args[0] : "10001 ";
        subscriber.subscribe(filter.getBytes());

        // Process 100 updates
        int update_nbr;
        long total_temp = 0;
        for (update_nbr = 0; update_nbr < 100; update_nbr++) {
            // Use trim to remove the trailing '\0' character
            String string = subscriber.recvStr(0).trim();
            StringTokenizer sscanf = new StringTokenizer(string, " ");
            int zipcode = Integer.valueOf(sscanf.nextToken());
            int temperature = Integer.valueOf(sscanf.nextToken());
            int relhumidity = Integer.valueOf(sscanf.nextToken());
            total_temp += temperature;
        }
        System.out.println("Average temperature for zipcode '"
                + filter + "' was " + (int) (total_temp / update_nbr));

        subscriber.close();
        context.term();
    }
}
There are a couple possibilities:
You need to make sure ZMQ is compiled with the --with-pgm option (see here), but this doesn't appear to be your issue, as you're not seeing "protocol not supported".
Using raw pgm requires root privileges because it requires the ability to create raw sockets... but epgm doesn't require that, so it shouldn't be your issue either (I only bring it up because you use the term "pgm/epgm", and you should be aware that they are not equally available in all situations)
What actually appears to be the problem in your case is that pgm/epgm requires support along the network path. In theory, it requires support out to your router, so your application can send a single message and have your router send out multiple messages to each client, but if your server is aware enough, it can probably send out multiple messages immediately and bypass this router support. The problem is, as you correctly guessed, trying to do this all on one host is not supported.
So, you need different hosts for client and server.
Another thing to be aware of is that some virtualization environments (RHEV/oVirt, and libvirt/KVM with the mac_filter option enabled, come to mind) by default use (eb|ip)tables rules that prevent guests from using multicast between each other. With libvirt, of course, the solution is simply to set the option to '0' and restart libvirtd; RHEV/oVirt requires a custom plugin.
At any rate, I would suggest putting a sniffer on the network devices on each system you are using and checking that traffic exiting one host is actually visible on the other.
I'd like to generate alarms in my Java desktop application:
alarms set with a specific date/time which can be in 5 minutes or 5 months
I need to be able to create a SWT application when the alarm is triggered
I need this to be able to work on any OS. The software users will likely have Windows (90% of them), and the rest Mac OS (including me)
the software license must allow me to use it in a commercial program, without requiring to open source it (hence, no GPL)
I cannot require the users to install Cygwin, so the implementation needs to be native to Windows and Unix
I am developing using Java, Eclipse, SWT and my application is deployed from my server using Java Web Start. I'm using Mac OS X.6 for developing.
I think I have a few options:
Run my application at startup, and handle everything myself;
Use a system service;
Use the cron table on Unix, and Scheduled Tasks on Windows.
Run at startup
I don't really like this solution, I'm hoping for something more elegant.
Refs: I would like to run my Java program on System Startup on Mac OS/Windows. How can I do this?
System service
If I run it as a system service, I can benefit from this, because the OS will ensure that my software:
is always running
doesn't have/need a GUI
restarts on failure
I've researched some resources that I can use:
run4j — CPL — runs on Windows only, seems like a valid candidate
jsvc — Apache 2.0 — Unix only, seems like a valid candidate
Java Service Wrapper — Various — I cannot afford paid licenses, and the free one is a GPL. Hence, I don't want to/can't use this
My questions in the system service options are:
Are there other options?
Is my planned implementation correct:
at the application startup, check for existence of the service
if it is not installed:
escalate the user to install the service (root on Unix, UAC on Windows)
if the host OS is Windows, use run4j to register the service
if the host OS is Unix, use jsvc to register the service
if it is not running, start it
Thus, at the first run, the application will install the service and start it. When the application closes the service is still running and won't need the application ever again, except if it is unregistered.
However, I think I still miss the "run on startup" feature.
Am I right? Am I missing something?
cron / Task Scheduler
On Unix, I can easily use the cron table without needing the application to escalate the user to root. I don't need to handle restarts, system date changes, etc. Seems nice.
On Windows, I can use the Task Scheduler, even in command-line using At or SchTasks. This seems nice, but I need this to be compatible from XP up to 7, and I can't easily test this.
So what would you do? Did I miss something? Do you have any advice that could help me pick the best and most elegant solution?
Bicou: great that you shared your solution!
Note that schtasks.exe has some localization issues: if you want to create a daily trigger with it, on an English Windows you have to use "daily", while on a German one (for example) you have to use "täglich" instead.
To resolve this issue I've implemented the call to schtasks.exe with the /XML option, providing a temporary XML file which I create from a template.
The easiest way to create such a template is to create a task by hand and use the export function in the Task Scheduler GUI.
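A rough sketch of that approach in Java (the task name, template contents, and encoding handling are illustrative assumptions):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class SchtasksXml {
    public static void createTask(String taskName, String taskXml) throws IOException {
        // Task Scheduler XML exports are UTF-16; write the filled-in template to a temp file
        Path xmlFile = Files.createTempFile("task", ".xml");
        Files.write(xmlFile, taskXml.getBytes(StandardCharsets.UTF_16));

        // /XML sidesteps localized keywords such as "daily" vs. "täglich"
        new ProcessBuilder("schtasks", "/Create", "/TN", taskName,
                "/XML", xmlFile.toString(), "/F")
                .inheritIO()
                .start();
    }
}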
Of the available options you have listed, IMHO option 3 is best.
As you are looking only for an external trigger to execute the application, cron or Scheduled Tasks are better solutions than the other options you have listed. This way you remove complexity from your application, and it does not need to be running at all times: it is triggered externally, and when execution is over it stops, avoiding unnecessary resource consumption.
Here's what I ended up implementing:
public class AlarmManager {

    public static final String ALARM_CLI_FORMAT = "startalarm:";

    public static SupportedOS currentOS = SupportedOS.UNSUPPORTED_OS;

    public enum SupportedOS {
        UNSUPPORTED_OS,
        MAC_OS,
        WINDOWS,
    }

    public AlarmManager() {
        final String osName = System.getProperty("os.name");
        if (osName == null) {
            L.e("Unable to retrieve OS!");
        } else if ("Mac OS X".equals(osName)) {
            currentOS = SupportedOS.MAC_OS;
        } else if (osName.contains("Windows")) {
            currentOS = SupportedOS.WINDOWS;
        } else {
            L.e("Unsupported OS: " + osName);
        }
    }

    /**
     * Windows only: name of the scheduled task
     */
    private String getAlarmName(final long alarmId) {
        return new StringBuilder("My_Alarm_").append(alarmId).toString();
    }

    /**
     * Gets the command line to trigger an alarm
     * @param alarmId
     * @return
     */
    private String getAlarmCommandLine(final long alarmId) {
        return new StringBuilder("javaws -open ").append(ALARM_CLI_FORMAT).append(alarmId).append(" ").append(G.JNLP_URL).toString();
    }

    /**
     * Adds an alarm to the system list of scheduled tasks
     * @param when
     */
    public void createAlarm(final Calendar when) {
        // Create alarm
        // ... stuff here
        final long alarmId = 42;

        // Schedule alarm
        String[] commandLine;
        Process child;
        final String alarmCL = getAlarmCommandLine(alarmId);
        try {
            switch (currentOS) {
            case MAC_OS:
                final String cron = new SimpleDateFormat("mm HH d M '*' ").format(when.getTime()) + alarmCL;
                commandLine = new String[] {
                        "/bin/sh", "-c",
                        "crontab -l | (cat; echo \"" + cron + "\") | crontab"
                };
                child = Runtime.getRuntime().exec(commandLine);
                break;

            case WINDOWS:
                commandLine = new String[] {
                        "schtasks",
                        "/Create",
                        "/ST " + when.get(Calendar.HOUR_OF_DAY) + ":" + when.get(Calendar.MINUTE),
                        "/SC ONCE",
                        "/SD " + new SimpleDateFormat("dd/MM/yyyy").format(when.getTime()), // careful with locale here! dd/MM/yyyy or MM/dd/yyyy? I'm French! :)
                        "/TR \"" + alarmCL + "\"",
                        "/TN \"" + getAlarmName(alarmId) + "\"",
                        "/F",
                };
                L.d("create command: " + Util.join(commandLine, " "));
                child = Runtime.getRuntime().exec(commandLine);
                break;
            }
        } catch (final IOException e) {
            L.e("Unable to schedule alarm #" + alarmId, e);
            return;
        }
        L.i("Created alarm #" + alarmId);
    }

    /**
     * Removes an alarm from the system list of scheduled tasks
     * @param alarmId
     */
    public void removeAlarm(final long alarmId) {
        L.i("Removing alarm #" + alarmId);
        String[] commandLine;
        Process child;
        try {
            switch (currentOS) {
            case MAC_OS:
                commandLine = new String[] {
                        "/bin/sh", "-c",
                        "crontab -l | (grep -v \"" + ALARM_CLI_FORMAT + "\") | crontab"
                };
                child = Runtime.getRuntime().exec(commandLine);
                break;

            case WINDOWS:
                commandLine = new String[] {
                        "schtasks",
                        "/Delete",
                        "/TN \"" + getAlarmName(alarmId) + "\"",
                        "/F",
                };
                child = Runtime.getRuntime().exec(commandLine);
                break;
            }
        } catch (final IOException e) {
            L.e("Unable to remove alarm #" + alarmId, e);
        }
    }

    public void triggerAlarm(final long alarmId) {
        // Do stuff
        //...
        L.i("Hi! I'm alarm #" + alarmId);

        // Remove alarm
        removeAlarm(alarmId);
    }
}
Usage is simple. Schedule a new alarm using:
final AlarmManager m = new AlarmManager();
final Calendar cal = new GregorianCalendar();
cal.add(Calendar.MINUTE, 1);
m.createAlarm(cal);
Trigger an alarm like this:
public static void main(final String[] args) {
if (args.length >= 2 && args[1] != null && args[1].contains(AlarmManager.ALARM_CLI_FORMAT)) {
try {
final long alarmId = Long.parseLong(args[1].replace(AlarmManager.ALARM_CLI_FORMAT, ""));
final AlarmManager m = new AlarmManager();
m.triggerAlarm(alarmId);
} catch (final NumberFormatException e) {
L.e("Unable to parse alarm !", e);
}
}
}
Tested on Mac OS X.6 and Windows Vista. The class L is a helper around System.out.println, and G holds my global constants (here, the URL of the JNLP file on my server used to launch my application).
You can also try Quartz (http://quartz-scheduler.org/). It has a cron-like syntax for scheduling jobs.
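A minimal sketch, assuming the Quartz jar is on the classpath (job and trigger names here are illustrative):

import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class AlarmJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        System.out.println("Alarm fired!");
    }
}

// At application startup:
Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();

JobDetail job = JobBuilder.newJob(AlarmJob.class).withIdentity("alarm").build();
Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("alarmTrigger")
        // .startAt(date) would fire once at a specific date/time instead
        .withSchedule(CronScheduleBuilder.cronSchedule("0 0/5 * * * ?")) // every 5 minutes
        .build();

scheduler.scheduleJob(job, trigger);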
I believe your scenario is correct. Since services are system-specific things, IMHO you should not use a generic package to cover them all, but have a specific mechanism for every system.