I'm using Drools 7.29 in STREAM mode, and I would like to collect events that are not followed by a certain event.
$transaction: TransactionOmDto(service_type == "CASHOUT", transfer_status == "TS", $requestId: transfer_id, $msisdn: msisdn) over window:length(1)
not(TransactionOmDto(service_type == "CONFIRMATION", transfer_status == "TS", transfer_id == $requestId, msisdn == $msisdn, this after [0s, 5m] $transaction))
// How can I collect such $transaction events in a List?
For example, in my system, when I receive a transaction $t1 that matches the following pattern,
$t1.transfer_id == "my_uniq_transfer_id"
$t1.msisdn == "07xxxxxxxx"
$t1.service_type == "CASHOUT"
$t1.transfer_status == "TS"
then I am also supposed to receive within the next 5 minutes another transaction $t2 from the same POS which should meet the following conditions:
$t2.transfer_id == "my_uniq_transfer_id"
$t2.msisdn == "07xxxxxxxx"
$t2.service_type == "CONFIRMATION"
$t2.transfer_status == "TS"
Sometimes the $t2 transaction does not arrive, and the goal of my rule is to report all $t1 transactions that never got their $t2 confirmation transaction in the system.
This is the common 'heartbeat arrival problem'. Please check whether the simplified example below suits your needs, and ask additional questions if any.
Rule
dialect "mvel"
import TemporalReasoningTest.Heartbeat
declare Heartbeat @role(event) end
global java.io.PrintStream stdout
rule "Sound the Alarm"
when
$h: Heartbeat() from entry-point "MonitoringStream"
not(Heartbeat(this != $h, this after[0s,10s] $h) from entry-point "MonitoringStream")
then
stdout.println('No heartbeat')
end
Model and test
@DroolsSession("classpath:/temporalReasoning.drl")
public class TemporalReasoningTest {
public static class Heartbeat {
public int ordinal;
public Heartbeat(int ordinal) {
this.ordinal = ordinal;
}
}
@RegisterExtension
public DroolsAssert drools = new DroolsAssert();
@BeforeEach
public void before() {
drools.setGlobal("stdout", System.out);
}
@Test
@TestRules(expected = {})
public void testRegularHeartbeat() {
Heartbeat heartbeat1 = new Heartbeat(1);
drools.insertAndFireAt("MonitoringStream", heartbeat1);
drools.advanceTime(5, SECONDS);
drools.assertExist(heartbeat1);
Heartbeat heartbeat2 = new Heartbeat(2);
drools.insertAndFireAt("MonitoringStream", heartbeat2);
drools.assertExist(heartbeat1, heartbeat2);
drools.advanceTime(5, SECONDS);
drools.assertDeleted(heartbeat1);
drools.assertExist(heartbeat2);
Heartbeat heartbeat3 = new Heartbeat(3);
drools.insertAndFireAt("MonitoringStream", heartbeat3);
drools.assertExist(heartbeat2, heartbeat3);
drools.advanceTime(5, SECONDS);
drools.assertDeleted(heartbeat2);
drools.assertExist(heartbeat3);
Heartbeat heartbeat4 = new Heartbeat(4);
drools.insertAndFireAt("MonitoringStream", heartbeat4);
drools.assertExist(heartbeat3, heartbeat4);
drools.advanceTime(5, SECONDS);
drools.assertDeleted(heartbeat3);
drools.assertExist(heartbeat4);
drools.assertFactsCount(1);
assertEquals(4, drools.getObject(Heartbeat.class).ordinal);
drools.printFacts();
}
@Test
@TestRules(expectedCount = { "1", "Sound the Alarm" })
public void testIrregularHeartbeat() {
drools.insertAndFireAt("MonitoringStream", new Heartbeat(1));
drools.advanceTime(5, SECONDS);
drools.advanceTime(5, SECONDS);
drools.insertAndFireAt("MonitoringStream", new Heartbeat(2), new Heartbeat(3));
drools.advanceTime(5, SECONDS);
drools.assertFactsCount(2);
drools.printFacts();
}
}
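Adapting the same pattern to the original transaction use case, here is a hedged sketch (the rule and global names are illustrative, not from the original post) that reports each unconfirmed CASHOUT by adding it to a global List:
global java.util.List unconfirmedTransactions;

rule "Report unconfirmed CASHOUT"
when
    $t : TransactionOmDto(service_type == "CASHOUT", transfer_status == "TS",
                          $requestId : transfer_id, $msisdn : msisdn)
    not(TransactionOmDto(service_type == "CONFIRMATION", transfer_status == "TS",
                         transfer_id == $requestId, msisdn == $msisdn,
                         this after[0s, 5m] $t))
then
    // collect the unconfirmed transaction; the caller reads the list after firing
    unconfirmedTransactions.add($t);
end
The caller registers the list with kieSession.setGlobal("unconfirmedTransactions", new ArrayList<>()) before inserting events; in STREAM mode the rule activates once the 5-minute window has elapsed without a matching CONFIRMATION event.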
I found a memory leak in my Spring Batch code; it appears just when I run the code below. Some people seem to say that JobExplorer causes a memory leak. Should I not use JobExplorer? Thanks for the help.
At boot, memory usage is normal. Just 5 minutes later there is 5 GB more memory consumption, and an hour later the OOM killer kills some processes.
I use
java 11
spring boot 2.7.1
spring-boot-starter-batch 2.4.0
This is my code: the Spring Batch process configuration and a validator class.
- BlockProcessConfiguration
- JobValidator
BlockProcessConfiguration
@Configuration
@RequiredArgsConstructor
@Slf4j
@Profile("block")
public class BlockProcessConfiguration {

    // The original post referenced these without showing their declarations;
    // the job name value here is a hypothetical placeholder.
    private static final String JOB_NAME = "blockProcessJob";
    private final JobValidator jobValidator;

    @Value("${isStanby:false}")
    private Boolean isStanby;

    @Scheduled(fixedDelay = 500)
public String launch() throws JobInstanceAlreadyCompleteException, JobExecutionAlreadyRunningException, JobParametersInvalidException, JobRestartException {
if (isStanby != null && isStanby) {
Boolean isRunningJob = jobValidator.isExistLatestRunningJob(JOB_NAME, 5000);
if (isRunningJob) {
return "skip";
}
}
return "completed";
    }
}
JobValidator
import java.util.*;
@RequiredArgsConstructor
@Slf4j
@Component
public class JobValidator {

    public enum BatchMode {
        RECOVER, FORWARD
    }
private final JobExplorer jobExplorer;
public Boolean isExistLatestRunningJob(String jobName, long jobTTL) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 10000);
if (jobInstances.size() > 0) {
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstances.get(0));
jobInstances.clear();
if (jobExecutions.size() > 0) {
JobExecution jobExecution = jobExecutions.get(0);
jobExecutions.clear();
// boolean isRunning = jobExecution.isRunning();
Date createTime = jobExecution.getCreateTime();
long now = new Date().getTime();
long timeFrame = now - createTime.getTime();
log.info("createTime.getTime() : {}", createTime.getTime());
log.info("isExistLatestRunningJob found jobExecution : id, status, timeFrame, jobTTL : {}, {}, {}, {}", jobExecution.getJobId(), jobExecution.getStatus(), timeFrame, jobTTL);
// if (jobExecution.isRunning() && (now.getTime() - createTime.getTime()) < jobTTL) {
if ( timeFrame < jobTTL ) {
log.info("isExistLatestRunningJob result : {}", true);
log.info("Job is already running, skip this job, job name : {}", jobName);
return true;
}
}
}
return false;
}
public Boolean isExecutableJob(String jobName, String paramKey, Long paramValue) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 1);
if (jobInstances.size() > 0) {
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstances.get(0));
if (jobExecutions.size() > 0) {
JobExecution jobExecution = jobExecutions.get(0);
JobParameters jobParameters = jobExecution.getJobParameters();
Optional<Long> blockNumber = Optional.ofNullable(jobParameters.getLong(paramKey));
if (blockNumber.isPresent() && blockNumber.get().equals(paramValue)) {
if (jobExecution.getStatus().equals(BatchStatus.STARTED)) {
// throw new RuntimeException("waiting until previous job done");
log.info("waiting until previous job done ... : {}", jobName);
return false;
}
}
}
}
return true;
}
public Long getStartNumberFromBatch(String jobName, String batchMode, String paramKey1, String paramKey2, long defaultValue) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 20);
ArrayList<Long> failExecutionNumbers = new ArrayList<>();
ArrayList<Long> successExecutionNumbers = new ArrayList<>();
ArrayList<Long> successEndExecutionNumbers = new ArrayList<>();
ArrayList<JobExecution> executions = new ArrayList<>();
jobInstances.stream().map(jobInstance -> jobExplorer.getJobExecutions(jobInstance)).forEach(jobExecution -> {
JobParameters jobParameters = jobExecution.get(0).getJobParameters();
Optional<Long> param1 = Optional.ofNullable(jobParameters.getLong(paramKey1));
Optional<Long> param2 = Optional.ofNullable(jobParameters.getLong(paramKey2));
if (param1.isPresent() && param2.isPresent()) {
if (jobExecution.get(0).getExitStatus().getExitCode().equals("FAILED")) {
failExecutionNumbers.add(param1.get());
} else {
successExecutionNumbers.add(param1.get());
successEndExecutionNumbers.add(param2.get());
}
}
});
if (failExecutionNumbers.size() == 0 && successExecutionNumbers.size() == 0) {
return defaultValue;
}
long successMax = defaultValue;
long failMin = defaultValue;
if (successEndExecutionNumbers.size() > 0) {
successMax = Collections.max(successEndExecutionNumbers);
}
if (failExecutionNumbers.size() > 0) {
failExecutionNumbers.removeIf(successExecutionNumbers::contains);
if (failExecutionNumbers.size() > 0) {
failMin = Collections.min(failExecutionNumbers);
} else {
return successMax;
}
}
if (Objects.equals(batchMode, JobValidator.BatchMode.RECOVER.toString())) {
return Math.min(failMin, successMax);
} else {
return Math.max(failMin, successMax);
}
}
}
I would not consider that a memory leak (and definitely not in Spring Batch's code). The way you are checking things like isExistLatestRunningJob involves retrieving a lot of data that is not really needed. For example, the method isExistLatestRunningJob() could be implemented with a single database query instead of retrieving 10000 job instances with:
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 10000);
A query like the following should work:
SELECT E.JOB_EXECUTION_ID from BATCH_JOB_EXECUTION E, BATCH_JOB_INSTANCE I where E.JOB_INSTANCE_ID = I.JOB_INSTANCE_ID and I.JOB_NAME=? and E.START_TIME is not NULL and E.END_TIME is NULL
Add to that that your method is called every 500ms. Also, clearing the lists does not necessarily free memory at the time you might expect.
So I think you should find a way to optimize the way you retrieve data by doing the filtering on the database side instead of the application side.
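For illustration, here is a minimal sketch of that idea using JdbcTemplate (the helper class and its wiring are assumptions for the example, not Spring Batch API):
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class RunningJobChecker { // hypothetical helper class

    private final JdbcTemplate jdbcTemplate;

    public RunningJobChecker(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // true if at least one execution of the job has started but not yet ended
    public boolean isJobRunning(String jobName) {
        List<Long> ids = jdbcTemplate.queryForList(
                "SELECT E.JOB_EXECUTION_ID FROM BATCH_JOB_EXECUTION E, BATCH_JOB_INSTANCE I "
                        + "WHERE E.JOB_INSTANCE_ID = I.JOB_INSTANCE_ID AND I.JOB_NAME = ? "
                        + "AND E.START_TIME IS NOT NULL AND E.END_TIME IS NULL",
                Long.class, jobName);
        return !ids.isEmpty();
    }
}
This way the filtering happens in the database, and only the matching execution ids (usually zero or one row) travel to the application.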
I debugged your code. The problem is that your enum field is declared public and not static. I think it will cause a serious problem if you reference this field from another class. Declare the enum field as static or private. I hope this helps you find a solution.
Fatal Exception: java.lang.ArrayIndexOutOfBoundsException
Out of range in /Users/cm/Realm/realm-java/realm/realm-library/src/main/cpp/io_realm_internal_OsResults.cpp line 108(requested: 0 valid: 0) io.realm.internal.OsResults.nativeGetRow
MainActivity.randomizeEvents (MainActivity.java:1600)
...MainActivity$42.run (MainActivity.java:1739)
randomizeEvents()
public void randomizeEvents() {
Realm nrealm;
nrealm = Realm.getDefaultInstance();
RealmResults<Event> eventList = nrealm.where(Event.class).equalTo("theevent", "theevent").findAll();
if(eventList.size() != 0) {
evt = eventList.get(0); <<<<<<<<<line 1600
nrealm.beginTransaction();
evt.setDurations();
nrealm.commitTransaction();
}
}
Method starting Runnable
RealmResults<Event> eventList = realm.where(Event.class).equalTo("theevent", "theevent").findAll();
evt = eventList.get(0);
if(!eventTimerRunning) {
runnable = new Runnable() {
public void run() {
if(!realm.isClosed()) {
eventTimerRunning = true;
randomizeEvents(); <<<<<<<<line 1739
handler.postDelayed(runnable, 30000);
}
}
};
handler.postDelayed(runnable, 5000);
}
I'm getting this error in quite a few different places, but they are all doing the same thing. I have a Runnable for different things; for example, this one runs that method every 30 seconds after an initial delay. I am using a new Realm instance in the Runnable due to errors from using one instance on the UI thread and another on the Runnable, etc.
The if statement is if(eventList.size() != 0), so the size is 1+. Why would eventList.size() equal 1 or more, yet I still get the error about eventList.get(0) being null?
You should use
eventList.size()
instead of
eventList.size
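A more likely explanation is that RealmResults is a live, auto-updating view, so on a looper thread its contents can change between the size() check and the get(0) call. Here is a hedged sketch that closes that gap, assuming your Realm Java version has OrderedRealmCollection.first(defaultValue):
Realm nrealm = Realm.getDefaultInstance();
try {
    // first(null) combines the emptiness check and the read into a single call,
    // so a concurrent deletion cannot invalidate the index in between
    Event evt = nrealm.where(Event.class)
            .equalTo("theevent", "theevent")
            .findAll()
            .first(null); // returns null instead of throwing when the results are empty
    if (evt != null) {
        nrealm.beginTransaction();
        evt.setDurations();
        nrealm.commitTransaction();
    }
} finally {
    nrealm.close();
}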
I'm writing a test for a function and I want to combine expected and timeout, and I don't want to use a global timeout
like:
@Rule
public Timeout globalTimeout= new Timeout(200);
My example:
@Test(expected = IllegalStateException.class)
@Test(timeout = 200)
public void lattesRequireMilk() throws InterruptedException {
// given
Cafe cafe = new Cafe();
cafe.restockBeans(7);
// when
cafe.brew(Latte);
Thread.sleep(400);
}
Oh, I just found it, sorry for my silly question:
@Test(expected = IllegalStateException.class, timeout = 1000)
public void lattesRequireMilk() {
// given
Cafe cafe = new Cafe();
cafe.restockBeans(7);
// when
cafe.brew(Latte);
}
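As a side note, JUnit 5 (Jupiter) drops the expected and timeout attributes; a hedged sketch of the equivalent there, reusing the Cafe and Latte names from the question, would be:
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class CafeTest {

    @Test
    void lattesRequireMilk() {
        // given (Cafe, restockBeans and Latte come from the question)
        Cafe cafe = new Cafe();
        cafe.restockBeans(7);
        // when / then: fail unless brew() throws within one second
        assertTimeoutPreemptively(Duration.ofMillis(1000), () ->
                assertThrows(IllegalStateException.class, () -> cafe.brew(Latte)));
    }
}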
I'm trying to make distributed pub-sub work across different cluster systems, but it's not working whatever I try.
All I'm trying to do is create a simple example where:
1) I create a topic, say "content".
2) One node, say JVM A, creates the topic, subscribes to it, and also has a publisher that publishes to it.
3) On a different node, say JVM B on a different port, I create a subscriber.
4) When I send a message to the topic from JVM A, I want the subscriber on JVM B to receive it too, as it's subscribed to the same topic.
Any help would be greatly appreciated, as would a simple working example of distributed pub-sub with subscribers and publishers in different cluster systems on different ports, in Java.
Here is the code for app1 and its config file.
public class App1{
public static void main(String[] args) {
System.setProperty("akka.remote.netty.tcp.port", "2551");
ActorSystem clusterSystem = ActorSystem.create("ClusterSystem");
ClusterClientReceptionist clusterClientReceptionist1 = ClusterClientReceptionist.get(clusterSystem);
ActorRef subcriber1=clusterSystem.actorOf(Props.create(Subscriber.class), "subscriber1");
clusterClientReceptionist1.registerSubscriber("content", subcriber1);
ActorRef publisher1=clusterSystem.actorOf(Props.create(Publisher.class), "publisher1");
clusterClientReceptionist1.registerSubscriber("content", publisher1);
publisher1.tell("testMessage1", ActorRef.noSender());
}
}
app1.conf
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = "DEBUG"
stdout-loglevel = "DEBUG"
logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
enabled-transports = ["akka.remote.netty.tcp"]
netty.tcp {
hostname = "127.0.0.1"
port = 2551
}
}
cluster {
seed-nodes = [
"akka.tcp://ClusterSystem#127.0.0.1:2551"
]
auto-down-unreachable-after = 10s
}
akka.extensions = ["akka.cluster.pubsub.DistributedPubSub",
"akka.contrib.pattern.ClusterReceptionistExtension"]
akka.cluster.pub-sub {
name = distributedPubSubMediator
role = ""
routing-logic = random
gossip-interval = 1s
removed-time-to-live = 120s
max-delta-elements = 3000
use-dispatcher = ""
}
akka.cluster.client.receptionist {
name = receptionist
role = ""
number-of-contacts = 3
response-tunnel-receive-timeout = 30s
use-dispatcher = ""
heartbeat-interval = 2s
acceptable-heartbeat-pause = 13s
failure-detection-interval = 2s
}
}
code for app2 and its config file
public class App
{
public static Set<ActorPath> initialContacts() {
return new HashSet<ActorPath>(Arrays.asList(
ActorPaths.fromString("akka.tcp://ClusterSystem#127.0.0.1:2551/system/receptionist")));
}
public static void main( String[] args ) {
System.setProperty("akka.remote.netty.tcp.port", "2553");
ActorSystem clusterSystem = ActorSystem.create("ClusterSystem2");
ClusterClientReceptionist clusterClientReceptionist2 = ClusterClientReceptionist.get(clusterSystem);
final ActorRef clusterClient = clusterSystem.actorOf(ClusterClient.props(ClusterClientSettings.create(
clusterSystem).withInitialContacts(initialContacts())), "client");
ActorRef subcriber2=clusterSystem.actorOf(Props.create(Subscriber.class), "subscriber2");
clusterClientReceptionist2.registerSubscriber("content", subcriber2);
ActorRef publisher2=clusterSystem.actorOf(Props.create(Publisher.class), "publisher2");
publisher2.tell("testMessage2", ActorRef.noSender());
clusterClient.tell(new ClusterClient.Send("/user/publisher1", "hello", true), null);
}
}
app2.conf
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = "DEBUG"
stdout-loglevel = "DEBUG"
logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
enabled-transports = ["akka.remote.netty.tcp"]
netty.tcp {
hostname = "127.0.0.1"
port = 2553
}
}
cluster {
seed-nodes = [
"akka.tcp://ClusterSystem#127.0.0.1:2553"
]
auto-down-unreachable-after = 10s
}
akka.extensions = ["akka.cluster.pubsub.DistributedPubSub",
"akka.contrib.pattern.ClusterReceptionistExtension"]
akka.cluster.pub-sub {
name = distributedPubSubMediator
role = ""
routing-logic = random
gossip-interval = 1s
removed-time-to-live = 120s
max-delta-elements = 3000
use-dispatcher = ""
}
akka.cluster.client.receptionist {
name = receptionist
role = ""
number-of-contacts = 3
response-tunnel-receive-timeout = 30s
use-dispatcher = ""
heartbeat-interval = 2s
acceptable-heartbeat-pause = 13s
failure-detection-interval = 2s
}
}
The Publisher and Subscriber classes are the same for both applications and are given below.
Publisher:
public class Publisher extends UntypedActor {
private final ActorRef mediator =
DistributedPubSub.get(getContext().system()).mediator();
@Override
public void onReceive(Object msg) throws Exception {
if (msg instanceof String) {
mediator.tell(new DistributedPubSubMediator.Publish("events", msg), getSelf());
} else {
unhandled(msg);
}
}
}
Subscriber:
public class Subscriber extends UntypedActor {
private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);
public Subscriber(){
ActorRef mediator = DistributedPubSub.get(getContext().system()).mediator();
mediator.tell(new DistributedPubSubMediator.Subscribe("events", getSelf()), getSelf());
}
public void onReceive(Object msg) throws Throwable {
if (msg instanceof String) {
log.info("Got: {}", msg);
} else if (msg instanceof DistributedPubSubMediator.SubscribeAck) {
log.info("subscribing");
} else {
unhandled(msg);
}
}
}
I got this error on the receiver-side app while running both apps: dead letters encountered.
[ClusterSystem-akka.actor.default-dispatcher-21] INFO akka.actor.RepointableActorRef - Message [java.lang.String] from Actor[akka://ClusterSystem/system/receptionist/akka.tcp%3A%2F%2FClusterSystem2%40127.0.0.1%3A2553%2FdeadLetters#188707926] to Actor[akka://ClusterSystem/system/distributedPubSubMediator#1119990682] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
And in the sender-side app's log, the messages appear to be sent successfully:
[ClusterSystem2-akka.actor.default-dispatcher-22] DEBUG akka.cluster.client.ClusterClient - Sending buffered messages to receptionist
Using the ClusterClient in that way does not really make sense and does not have anything to do with using distributed pub-sub: as both your nodes are part of the cluster, you can just use the Distributed Pub-Sub API directly.
Here is a simple main, including config, that creates a two-node cluster using your exact Publisher and Subscriber actors and works as expected:
public static void main(String[] args) throws Exception {
final Config config = ConfigFactory.parseString(
"akka.actor.provider=cluster\n" +
"akka.remote.netty.tcp.port=2551\n" +
"akka.cluster.seed-nodes = [ \"akka.tcp://ClusterSystem#127.0.0.1:2551\"]\n");
ActorSystem node1 = ActorSystem.create("ClusterSystem", config);
ActorSystem node2 = ActorSystem.create("ClusterSystem",
ConfigFactory.parseString("akka.remote.netty.tcp.port=2552")
.withFallback(config));
// wait a bit for the cluster to form
Thread.sleep(3000);
ActorRef subscriber = node1.actorOf(
Props.create(Subscriber.class),
"subscriber");
ActorRef publisher = node2.actorOf(
Props.create(Publisher.class),
"publisher");
// wait a bit for the subscription to be gossiped
Thread.sleep(3000);
publisher.tell("testMessage1", ActorRef.noSender());
}
Note that distributed pub-sub does not give any guarantees of delivery, so if you send a message before the mediators have gotten in contact with each other, the message will simply be lost (hence the Thread.sleep statements, which are of course not something you should do in actual code).
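If you want to avoid the fixed sleeps, one hedged alternative with the classic Cluster API is to defer the work until the node has actually joined:
// Sketch only: run the publisher setup once node2 has joined the cluster,
// instead of sleeping for a fixed amount of time.
akka.cluster.Cluster.get(node2).registerOnMemberUp(() -> {
    ActorRef publisher = node2.actorOf(Props.create(Publisher.class), "publisher");
    // The subscription still needs a gossip round to reach this node's mediator,
    // so a robust application would only publish after observing that state.
    publisher.tell("testMessage1", ActorRef.noSender());
});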
I think the issue is that your actor systems have different names, ClusterSystem and ClusterSystem2. At least, I was having the same issue because I had two different services in the cluster but had named the system in each service differently.
This will emit a tick every 5 seconds.
Observable.interval(5, TimeUnit.SECONDS, Schedulers.io())
.subscribe(tick -> Log.d(TAG, "tick = "+tick));
To stop it you can use
Schedulers.shutdown();
But then all the Schedulers stop and it is not possible to resume the ticking later. How can I stop and resume the emitting "gracefully"?
Here's one possible solution:
class TickHandler {
private AtomicLong lastTick = new AtomicLong(0L);
private Subscription subscription;
void resume() {
System.out.println("resumed");
subscription = Observable.interval(5, TimeUnit.SECONDS, Schedulers.io())
.map(tick -> lastTick.getAndIncrement())
.subscribe(tick -> System.out.println("tick = " + tick));
}
void stop() {
if (subscription != null && !subscription.isUnsubscribed()) {
System.out.println("stopped");
subscription.unsubscribe();
}
}
}
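A minimal usage sketch for the handler above (the sleeps just give the interval time to emit a few ticks):
public static void main(String[] args) throws InterruptedException {
    TickHandler handler = new TickHandler();
    handler.resume();        // starts printing a tick every 5 seconds
    Thread.sleep(12_000);
    handler.stop();          // unsubscribes; ticking pauses
    Thread.sleep(12_000);    // nothing is printed while stopped
    handler.resume();        // resubscribes; lastTick keeps the numbering going
    Thread.sleep(12_000);
    handler.stop();
}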
Some time ago I was also looking for RX "timer" solutions, but none of them met my expectations. So here you can find my own solution:
AtomicLong elapsedTime = new AtomicLong();
AtomicBoolean resumed = new AtomicBoolean();
AtomicBoolean stopped = new AtomicBoolean();
public Flowable<Long> startTimer() { // creates and starts the timer
resumed.set(true);
stopped.set(false);
return Flowable.interval(1, TimeUnit.SECONDS)
.takeWhile(tick -> !stopped.get())
.filter(tick -> resumed.get())
.map(tick -> elapsedTime.addAndGet(1000));
}
public void pauseTimer() {
resumed.set(false);
}
public void resumeTimer() {
resumed.set(true);
}
public void stopTimer() {
stopped.set(true);
}
public void addToTimer(int seconds) {
elapsedTime.addAndGet(seconds * 1000);
}
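A short usage sketch for this timer (RxJava 2; the subscriber just prints the elapsed milliseconds once per second):
// Subscribe to start receiving elapsed-time updates
Disposable d = startTimer()
        .subscribe(elapsedMillis -> System.out.println("elapsed = " + elapsedMillis));

pauseTimer();   // ticks are filtered out, so elapsedTime stops growing
resumeTimer();  // ticks pass the filter again and accumulation continues
stopTimer();    // takeWhile sees stopped == true and completes the stream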
val switch = new java.util.concurrent.atomic.AtomicBoolean(true)
val tick = new java.util.concurrent.atomic.AtomicLong(0L)
val suspendableObservable =
Observable.
interval(5 seconds).
takeWhile(_ => switch.get()).
repeat.
map(_ => tick.incrementAndGet())
You can set switch to false to suspend the ticking and true to resume it.
Sorry this is in RxJS instead of RxJava, but the concept will be the same. I adapted this from learn-rxjs.io and here it is on codepen.
The idea is that you start out with two streams of click events, startClick$ and stopClick$. Each click on the stopClick$ stream gets mapped to an empty observable, and each click on startClick$ gets mapped to the interval$ stream. The two resulting streams get merge-d together into one observable-of-observables. In other words, a new observable of one of the two types is emitted from merge each time there's a click. The resulting observable goes through switchMap, which starts listening to this new observable and stops listening to whatever it was listening to before. switchMap will also merge the values from this new observable onto its existing stream.
After the switch, scan only ever sees the "increment" values emitted by interval$, and it doesn't see any values when "stop" has been clicked.
And until the first click occurs, startWith will start emitting values from interval$, just to get things going:
const start = 0;
const increment = 1;
const delay = 1000;
const stopButton = document.getElementById('stop');
const startButton = document.getElementById('start');
const startClick$ = Rx.Observable.fromEvent(startButton, 'click');
const stopClick$ = Rx.Observable.fromEvent(stopButton, 'click');
const interval$ = Rx.Observable.interval(delay).mapTo(increment);
const setCounter = newValue => document.getElementById("counter").innerHTML = newValue;
setCounter(start);
const timer$ = Rx.Observable
// a "stop" click will emit an empty observable,
// and a "start" click will emit the interval$ observable.
// These two streams are merged into one observable.
.merge(stopClick$.mapTo(Rx.Observable.empty()),
startClick$.mapTo(interval$))
// until the first click occurs, merge will emit nothing, so
// use the interval$ to start the counter in the meantime
.startWith(interval$)
// whenever a new observable starts, stop listening to the previous
// one and start emitting values from the new one
.switchMap(val => val)
// add the increment emitted by the interval$ stream to the accumulator
.scan((acc, curr) => curr + acc, start)
// start the observable and send results to the DIV
.subscribe((x) => setCounter(x));
And here's the HTML
<html>
<body>
<div id="counter"></div>
<button id="start">
Start
</button>
<button id="stop">
Stop
</button>
</body>
</html>
Here is another way to do this, I think.
When you check the source code, you will find that interval() uses the class OnSubscribeTimerPeriodically. The key code is below.
@Override
public void call(final Subscriber<? super Long> child) {
final Worker worker = scheduler.createWorker();
child.add(worker);
worker.schedulePeriodically(new Action0() {
long counter;
@Override
public void call() {
try {
child.onNext(counter++);
} catch (Throwable e) {
try {
worker.unsubscribe();
} finally {
Exceptions.throwOrReport(e, child);
}
}
}
}, initialDelay, period, unit);
}
So, if you want to cancel the loop, what about throwing an exception in onNext()? Example code below.
Observable.interval(1000, TimeUnit.MILLISECONDS)
.subscribe(new Action1<Long>() {
@Override
public void call(Long aLong) {
Log.i("abc", "onNext");
if (aLong == 5) throw new NullPointerException();
}
}, new Action1<Throwable>() {
@Override
public void call(Throwable throwable) {
Log.i("abc", "onError");
}
}, new Action0() {
@Override
public void call() {
Log.i("abc", "onCompleted");
}
});
Then you will see this:
08-08 11:10:46.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:47.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:48.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:49.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:50.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:51.008 28146-28181/net.bingyan.test I/abc: onNext
08-08 11:10:51.018 28146-28181/net.bingyan.test I/abc: onError
You can use takeWhile and loop until the condition is true:
Observable.interval(1, TimeUnit.SECONDS)
.takeWhile {
Log.i(TAG, " time " + it)
it != 30L
}
.subscribe(object : Observer<Long> {
override fun onComplete() {
Log.i(TAG, "onComplete " + format.format(System.currentTimeMillis()))
}
override fun onSubscribe(d: Disposable) {
Log.i(TAG, "onSubscribe " + format.format(System.currentTimeMillis()))
}
override fun onNext(t: Long) {
Log.i(TAG, "onNext " + format.format(System.currentTimeMillis()))
}
override fun onError(e: Throwable) {
Log.i(TAG, "onError")
e.printStackTrace()
}
});
@AndroidEx, that's a wonderful answer. I did it a bit differently:
private fun disposeTask() {
    if (disposable?.isDisposed == false)
        disposable?.dispose()
}

private fun runTask() {
    disposable = Observable.interval(0, 30, TimeUnit.SECONDS)
        .flatMap {
            apiCall.runTaskFromServer()
                .map {
                    when (it) {
                        is ResponseClass.Success -> {
                            keepRunningsaidTasks()
                        }
                        is ResponseClass.Failure -> {
                            disposeTask() // this will stop the task, for instance on a network failure
                        }
                    }
                }
        }
        .subscribe()
}