Switching to different behaviors does not work as intended - java

So I was playing with different behaviors in Akka. When I executed this code:
@Override
public Receive<CommonCommand> createReceive() {
    return notYetStarted();
}

public Receive<CommonCommand> notYetStarted() {
    return newReceiveBuilder()
            .onMessage(RaceLengthCommand.class, message -> {
                // business logic
                return running();
            })
            .build();
}

public Receive<CommonCommand> running() {
    return newReceiveBuilder()
            .onMessage(AskPosition.class, message -> {
                if (someCondition) { // placeholder for the actual check
                    // business logic
                    return this;
                } else {
                    // business logic
                    return completed(completedTime);
                }
            })
            .build();
}

public Receive<CommonCommand> completed(long completedTime) {
    return newReceiveBuilder()
            .onMessage(AskPosition.class, message -> {
                // business logic
                return this;
            })
            .build();
}
I got the following log:
21:46:41.038 [monitor-akka.actor.default-dispatcher-6] INFO akka.actor.LocalActorRef - Message [learn.tutorial._5_racing_game_akka.RacerBehavior$AskPosition] to Actor[akka://monitor/user/racer_1#-1301834398] was unhandled. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Initially the RaceLengthCommand message is sent to the notYetStarted() behavior. That works fine. This behavior should then transition to the running() behavior, and that second behavior should receive the AskPosition message.
But according to my tests, the AskPosition message is delivered to the notYetStarted() behavior. This contradicts my whole understanding of the concept.
I confirmed this by copying the onMessage() part from the running() behavior and pasting it into the notYetStarted() behavior. Now the code executes fine and there are no more dead letters.
So apparently the notYetStarted() behavior is indeed receiving messages even after I switched behaviors? Why is this happening?

It seems like your actor definition is mixing the OO and functional styles and hitting some warts in the interplay between the two when they're used in the same actor. Some of this confusion arises from Receive in the Java API being a Behavior but not an AbstractBehavior. This may be a vestige of an earlier evolution of the API: I've suggested to the Akka maintainers that it be dropped in 2.7 (the absolute earliest it could be dropped, owing to binary compatibility), communications with some of them haven't yielded a reason for the distinction, and there's no such distinction in the analogous Scala API.
Disclaimer: I tend to work exclusively in the Scala functional API for actor definition.
There's a duality in the actor model between the state of an actor (i.e. its fields) and its behavior (how it responds to the next message it receives): to the world outside the actor, they are one and the same because the only way (ignoring something like taking a heap dump) to observe the actor's state is to observe its response to a message. Of course, in fact, the current behavior is a field in the runtime representation of the actor, and in the functional definitions, the behavior generally has fields for state.
The OO style of behavior definition favors:
mutable fields in the actor
the current behavior being immutable (the behavior can make decisions based on the fields)
The functional style of behavior definition favors:
no mutable fields in the actor
updating the behavior (which is an implicitly mutable field)
(the distinction is analogous to imperative programming having a while/for loop where a variable is updated vs. functional programming's preference for defining a recursive function which the compiler turns behind the scenes into a loop).
The AbstractBehavior API seems to assume that the message handler is createReceive(): returning this within an AbstractBehavior means to go back to the createReceive(). Conversely, Behaviors.same() in the functional style means "whatever the current behavior is, don't change it". Where there are multiple sub-Behaviors/Receives in one AbstractBehavior, this difference is important (it's not important when there's one Receive in the AbstractBehavior).
TL;DR: if defining multiple message handlers in an AbstractBehavior, prefer return Behaviors.same() to return this in message handlers. Alternatively: only define one message handler per AbstractBehavior.
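For the code in the question, a minimal sketch of that advice (reusing the running() handler from above, with a hypothetical stillRacing() check standing in for the elided condition):

// requires: import akka.actor.typed.Behavior; import akka.actor.typed.javadsl.Behaviors;
public Receive<CommonCommand> running() {
    return newReceiveBuilder()
            .onMessage(AskPosition.class, message -> {
                if (stillRacing(message)) {      // stillRacing() is a placeholder predicate
                    // business logic
                    return Behaviors.same();     // keep whatever the current behavior is (running())
                } else {
                    // business logic
                    return completed(completedTime);
                }
            })
            .build();
}

With Behaviors.same(), the actor keeps the behavior returned by the last handler instead of falling back to createReceive() (which here is notYetStarted()).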

Related

Update object's state in Reactor

Given the following method:
private Mono<UserProfileUpdate> upsertUserIdentifier(UserProfileUpdate profileUpdate, String id) {
    return userIdentifierRepository.findUserIdentifier(id)
            .switchIfEmpty(Mono.defer(() -> {
                profileUpdate.setNewUser(true);
                return createProfileIdentifier(profileUpdate.getType(), id);
            }))
            .map(userIdentifier -> {
                profileUpdate.setProfileId(userIdentifier.getProfileId());
                return profileUpdate;
            });
}
The switchIfEmpty and map operators mutate the profileUpdate object. Is it safe to mutate it in the switchIfEmpty operator? Regarding map, if I have understood correctly, this is not safe and the profileUpdate object must be immutable, right? E.g.:
private Mono<UserProfileUpdate> upsertUserIdentifier(UserProfileUpdate profileUpdate, String id) {
    return userIdentifierRepository.findUserIdentifier(id)
            .switchIfEmpty(Mono.defer(() -> {
                profileUpdate.setNewUser(true);
                return createProfileIdentifier(profileUpdate.getType(), id);
            }))
            .map(userIdentifier -> profileUpdate.withProfileId(userIdentifier.getProfileId()));
}
Later in the chain, another method mutates the object:
public Mono<UserProfileUpdate> transform(UserProfileUpdate profUpdate) {
    if (profUpdate.isNewUser()) {
        profUpdate.getAttributesToSet().putAll(profUpdate.getAttributesToSim());
    } else if (!profUpdate.getAttributesToSim().isEmpty()) {
        return userProfileRepository.findUserProfileById(profUpdate.getProfileId())
                .map(profile -> {
                    profUpdate.getAttributesToSet().putAll(
                            collectMissingAttributes(profUpdate.getAttributesToSim(), profile.getAttributes().keySet()));
                    return profUpdate;
                });
    }
    return Mono.just(profUpdate);
}
The above methods are called as follows:
Mono.just(update)
    .flatMap(u -> upsertUserIdentifier(u, id))
    .flatMap(this::transform)
Nebulous answer, but... it depends!
The danger of mutating an input parameter in the returned Mono or Flux comes from the fact that said Mono or Flux could be subscribed multiple times. In such a case, suddenly you have a shared resource on your hands, and it can lead to puzzling issues.
But if the methods in question are called from a well controlled context, it can be safe.
In your case, flatMap ensures that the inner publishers are subscribed to only once. So as long as you only use these methods inside such flatMaps, they can safely mutate their input parameter (it is kept in the scope of the flatmapping function).
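To make that concrete, here is a small illustration (hypothetical usage, reusing the names from the question) of why multiple subscriptions turn the mutation into shared state, and why flatMap avoids it:

Mono<UserProfileUpdate> upsert = upsertUserIdentifier(profileUpdate, id);
upsert.subscribe(); // first subscription runs the mutation of profileUpdate
upsert.subscribe(); // a second subscription (another caller, retry(), repeat(), ...)
                    // re-runs the mutation against the very same profileUpdate instance

// Inside flatMap, the inner Mono is created and subscribed exactly once per upstream element,
// so the mutation stays confined to that single execution:
Mono.just(update)
    .flatMap(u -> upsertUserIdentifier(u, id));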
There are arguably two factors here: the "dangerous" aspect and the "style" aspect.
Simon has covered the "dangerous" aspect very well; there's one thing I'd add, however. Even though your code is "safe" within the method we can see here (thanks to the guarantee of a single subscription to the inner flatMap publisher), we still can't absolutely guarantee it's safe in a wider context: we don't know what else has visibility of, or might mutate, your profileUpdate parameter. If it's created, immediately passed into this method only, and then read after this method completes, then sure, it's fine. If it was created at some point in the past, passed around to a few methods that may or may not mutate it, passed back, passed into this method, and then into a few other methods... then it might be safe, but it becomes increasingly difficult to analyse and say so for certain, and if it's not safe, it becomes just as difficult to track down where that one-in-a-million bug caused by the mutation might occur.
Now, your code may look nothing like the complex mess I've just described, but with just a few lines changed here and there, or "just one more mutation" of this object elsewhere before it's passed in, it could start to get there.
That leads into the "style" aspect.
Personally, I'm very much a fan of keeping everything as part of the reactive chain wherever possible. Even ignoring the potential for bad mutations and side effects, code that mutates state outside the chain is much harder to read: you have to mentally keep track of both the value being passed through the chain and the values external to the chain being mutated. In this example that's reasonably trivial; in larger examples it becomes almost unreadable (at least to my brain!)
So with that in mind, if I were reviewing this code I'd strongly prefer UserProfileUpdate to be immutable, then to use something like this instead:
private Mono<UserProfileUpdate> upsertUserIdentifier(UserProfileUpdate profileUpdate, String id) {
    return userIdentifierRepository.findUserIdentifier(id)
            .switchIfEmpty(Mono.defer(() -> createProfileIdentifier(profileUpdate.withNewUser(true), id)))
            .map(userIdentifier -> profileUpdate.withProfileId(userIdentifier.getProfileId()));
}
...note this isn't a drop-in replacement, in particular:
createProfileIdentifier() would just take a UserProfileUpdate object and an id, and be expected to return a new UserProfileUpdate object from those parameters;
UserProfileUpdate would need to be enhanced with @With (Lombok) or a similar implementation to allow it to produce immutable copies of itself with only a single value changed (a sketch follows below).
Other code would likely need to be modified similarly to encapsulate the profileUpdate as part of the reactive chain, and not rely on mutating it.
However, in my opinion at least, the resulting code would be far more robust and readable.
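For reference, a hypothetical immutable UserProfileUpdate along those lines, sketched with Lombok (field names are guessed from the snippets above):

import lombok.Value;
import lombok.With;

// @Value makes the class immutable (final class, private final fields, getters, equals/hashCode);
// @With generates withXxx(...) methods that return a modified copy instead of mutating.
@Value
@With
public class UserProfileUpdate {
    String profileId;
    String type;
    boolean newUser;
    // attribute maps omitted for brevity
}

With that in place, profileUpdate.withNewUser(true) and profileUpdate.withProfileId(...) each produce a new copy, so nothing in the chain mutates the original.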

What kind of object is a Reactive Java Subscription?

In Reactive Java, we're told that the .subscribe() call returns "a Subscription reference". But Subscription is an interface, not a class. So what kind of object are we handed that implements this interface? Do we have any control over this?
There is the class Subscriptions that can create and return several different kinds of Subscription, but what does one do with them? If I write
Subscription mSub = Subscriptions.create(<some Action0>);
mSub = someObservable.subscribe();
won't my just-created Subscription simply be overwritten by whatever the .subscribe() call returns? How do you use a Subscription you create?
(On a somewhat related note, what is the point of Subscriptions.unsubscribed(), which "returns a Subscription to which unsubscribe does nothing, as it is already unsubscribed." Huh?)
Short answer: You shouldn't care.
Longer answer: a subscription gives you two methods:
unsubscribe(), which causes the subscription to terminate.
isUnsubscribed(), which checks whether that has already happened.
You can use these methods to (a) check whether an Observable chain has terminated and (b) cause it to terminate prematurely, for example if the user switched to a different Activity.
That's it. You aren't exposed to the internals on purpose. Also, do you notice that there's no resubscribe method? That's because if you want to restart the operation, you need to resubscribe to the Observable, giving you a new Subscription.
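In practice that looks like this (a minimal RxJava 1.x sketch; someObservable and render() are hypothetical):

Subscription subscription = someObservable
        .subscribe(value -> render(value)); // subscribe() hands back the Subscription

// later, e.g. in Activity.onDestroy(), or wherever the work is no longer wanted:
if (!subscription.isUnsubscribed()) {
    subscription.unsubscribe();             // terminate the chain early
}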
As you know, Subscriptions are used to keep references to ongoing Observables, mainly for resource management. For example, in Android applications, when you change an Activity (screen) you flush the old Activity's Observables. In this scenario, Subscription instances are handed back by .subscribe() (as you mentioned) and stored. So for what reason would one create a Subscription directly, especially Subscriptions.unsubscribed()? I encountered two cases:
Default implementation: it avoids a declaration like Subscription mSub; that would be filled in later and could cause an NPE. This is especially true if you use Kotlin, which requires property initialization.
Testing
On a somewhat related note, what is the point of Subscriptions.unsubscribed(), which "returns a Subscription to which unsubscribe does nothing, as it is already unsubscribed." Huh?
In 1.x, Subscriptions.unsubscribed() is used to return a Subscription instance indicating that the operation has already completed (or was never run in the first place) by the time control is returned to your code from RxJava. Since being unsubscribed is a stateless, constant condition, the returned Subscription is a singleton: just by looking at the Subscription interface there is no (reasonable) way to distinguish one completed/unsubscribed Subscription from another.
In 2.x, there is a public and internal version of its equivalent interface, Disposable. The internal version is employed mostly to swap out a live Disposable with a terminated one, avoiding NullPointerException and null checks in general and to help the GC somewhat.
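A small sketch of the "default implementation" case mentioned above (hypothetical field and reload() method), where the pre-unsubscribed singleton acts as a safe placeholder:

// start with a harmless placeholder instead of null, so unsubscribe() is always safe to call
private Subscription subscription = Subscriptions.unsubscribed();

void reload(Observable<Data> source) {
    subscription.unsubscribe();                    // no-op on the placeholder, cancels the old work afterwards
    subscription = source.subscribe(this::render); // render() is a placeholder consumer
}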
what does one do with them?
Usually you don't need to worry about Subscriptions.create(); it is provided for the case you have a resource you'd like to attach to the lifecycle of your end-subscriber:
FileReader file = new FileReader("file.txt"); // throws FileNotFoundException; handle it in real code
readLines(file)
    .map(line -> line.length())
    .reduce(0, (a, b) -> a + b)
    .subscribe(new Subscriber<Integer>() {
        { // instance initializer: tie the resource to this subscriber's lifecycle
            add(Subscriptions.create(() -> {
                Closeables.closeQuietly(file); // utility from Guava
            }));
        }
        @Override public void onNext(Integer length) {
            // process
        }
        // onError(), onCompleted() omitted for brevity
    });
This example demonstrates one way of using it; the same thing can also be expressed with using instead:
Observable.using(
    () -> new FileReader("file.txt"), // + try { } catch { }
    file -> readLines(file).map(...).reduce(...),
    file -> Closeables.closeQuietly(file)
)
.subscribe(...)

Akka Java FSM by Example

Please note: I am a Java developer with no working knowledge of Scala (sadly). I would ask that any code examples provided in the answer would be using Akka's Java API.
I am trying to use the Akka FSM API to model the following super-simple state machine. In reality, my machine is much more complicated, but the answer to this question will allow me to extrapolate to my actual FSM.
And so I have 2 states: Off and On. You can go from Off -> On by powering the machine on by calling SomeObject#powerOn(<someArguments>). You can go from On -> Off by powering the machine off by calling SomeObject#powerOff(<someArguments>).
I'm wondering what actors and supporting classes I'll need in order to implement this FSM. I believe the actor representing the FSM has to extend AbstractFSM. But what classes represent the 2 states? What code exposes and implements the powerOn(...) and powerOff(...) state transitions? A working Java example, or even just Java pseudo-code, would go a long way for me here.
I think we can do a bit better than copypasta from the FSM docs (http://doc.akka.io/docs/akka/snapshot/java/lambda-fsm.html). First, let's explore your use case a bit.
You have two triggers (or events, or signals): powerOn and powerOff. You would like to send these signals to an Actor and have it change state, of which the two meaningful states are On and Off.
Now, strictly speaking an FSM needs one additional component: an action you wish to take on transition.
FSM:
State (S) x Event (E) -> Action (A), State (S')
Read: "When in state S, if signal E is received, produce action A and advance to state S'"
You don't NEED an action, but an Actor cannot be directly inspected, nor directly modified. All mutation and acknowledgement occurs through asynchronous message passing.
In your example, which provides no action to perform on transition, you basically have a state machine that's a no-op. Events come in, the state transitions without side effects, and that state is invisible, so a working machine is identical to a broken one. And since this all occurs asynchronously, you don't even know when the broken thing has finished.
So allow me to expand your contract a little bit, and include the following actions in your FSM definitions:
When in Off, if powerOn is received, advance state to On and respond to the caller with the new state
When in On, if powerOff is received, advance state to Off and respond to the caller with the new state
Now we might be able to build an FSM that is actually testable.
Let's define a pair of classes for your two signals (the AbstractFSM DSL expects to match on class):
public static class PowerOn {}
public static class PowerOff {}
Let's define a pair of enums for your two states:
enum LightswitchState { on, off }
Let's define an AbstractFSM Actor (http://doc.akka.io/japi/akka/2.3.8/akka/actor/AbstractFSM.html). Extending AbstractFSM allows us to define an actor using a chain of FSM definitions similar to those above rather than defining message behavior directly in an onReceive() method. It provides a nice little DSL for these definitions, and (somewhat bizarrely) expects that the definitions be set up in an instance initializer block.
A quick detour, though: AbstractFSM has two generics defined which are used to provide compile time type checking.
S is the base of State types we wish to use, and D is the base of Data types. If you're building an FSM that will hold and modify data (maybe a power meter for your light switch?), you would build a separate class to hold this data rather than trying to add new members to your subclass of AbstractFSM. Since we have no data, let's define a dummy class just so you can see how it gets passed around:
public static class NoDataItsJustALightswitch {}
And so, with this out of the way, we can build our actor class.
public class Lightswitch extends AbstractFSM<LightswitchState, NoDataItsJustALightswitch> {
    { // instance initializer: this is where the FSM DSL definitions live
        startWith(LightswitchState.off, new NoDataItsJustALightswitch()); // when a new Lightswitch is born, it's in the off state with a fresh NoDataItsJustALightswitch() as data

        // our first FSM definition
        when(LightswitchState.off,                        // when in off,
            matchEvent(PowerOn.class,                     // if we receive a PowerOn message,
                NoDataItsJustALightswitch.class,          // and have data of this type,
                (powerOn, noData) ->                      // we'll handle it using this function:
                    goTo(LightswitchState.on)             // go to the on state,
                        .replying(LightswitchState.on))); // and reply to the sender that we went to the on state

        // our second FSM definition
        when(LightswitchState.on,
            matchEvent(PowerOff.class,
                NoDataItsJustALightswitch.class,
                (powerOff, noData) -> {
                    // here you can use multiline functions, and use the contents of the
                    // event (powerOff) or data (noData) to make decisions, alter the data, etc.
                    return goTo(LightswitchState.off).replying(LightswitchState.off);
                }));

        initialize(); // boilerplate
    }
}
I'm sure you're wondering: how do I use this?! So let's make you a test harness using straight JUnit and the Akka Testkit for java:
public class LightswitchTest {
    @Test public void testLightswitch() {
        ActorSystem system = ActorSystem.create("lightswitchtest"); // should make this static if you're going to test a lot of things; actor systems are a bit expensive
        new JavaTestKit(system) {{ // there's that instance initializer again
            ActorRef lightswitch = system.actorOf(Props.create(Lightswitch.class)); // here is our lightswitch: an ActorRef, a reference to an actor of type Lightswitch
                                                                                    // created on our behalf. We can't, as mentioned earlier, actually touch the instance
                                                                                    // of Lightswitch, but we can send messages to it via this reference.
            lightswitch.tell(   // using the reference to our actor, tell it
                new PowerOn(),  // to "power on," using our message type
                getRef());      // and give it an actor to call back (in this case, the JavaTestKit itself)
            // because tell is asynchronous, it returns immediately; somewhere off in the distance, on another thread, our lightswitch is receiving its message
            expectMsgEquals(LightswitchState.on); // we block until the lightswitch sends us back a message with its current state ("on").
                                                  // If our actor is broken, this call will time out and fail.
            lightswitch.tell(new PowerOff(), getRef());
            expectMsgEquals(LightswitchState.off);
            system.stop(lightswitch); // the switch works; kill the instance, leave the system up for further use
        }};
    }
}
And there you are: an FSM lightswitch. Honestly though, an example this trivial doesn't really show the power of FSMs, since a data-free example can be implemented as a set of "become/unbecome" behaviors in about half as many lines of code, with no generics or lambdas. Much more readable, IMO.
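For comparison, here is a rough sketch of that become/unbecome version (assuming classic Akka's AbstractActor API, reusing the message and state types from the FSM example):

public class BecomeLightswitch extends AbstractActor {
    @Override
    public Receive createReceive() {
        return off();
    }

    private Receive off() {
        return receiveBuilder()
                .match(PowerOn.class, msg -> {
                    getSender().tell(LightswitchState.on, getSelf());
                    getContext().become(on());  // swap in the "on" behavior
                })
                .build();
    }

    private Receive on() {
        return receiveBuilder()
                .match(PowerOff.class, msg -> {
                    getSender().tell(LightswitchState.off, getSelf());
                    getContext().become(off()); // swap back to the "off" behavior
                })
                .build();
    }
}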
PS consider learning Scala, if only to be able to read other peoples' code! The first half of the book Atomic Scala is available free online.
PPS if all you really want is a composable state machine, I maintain Pulleys, a state machine engine based on statecharts in pure Java. It's getting on in years (lots of XML and old patterns, no DI integration), but if you really want to decouple the implementation of a state machine from its inputs and outputs, there may be some inspiration there.
I know about Actors in Scala.
This Java starter code may help you get going:
Yes, extend your SimpleFSM from AbstractFSM.
The State is an enum in the AbstractFSM.
Your <someArguments> can be the Data part of your AbstractFSM.
Your powerOn and powerOff are actor messages/events.
And the state switching happens in the transitions part:
// states
enum State {
    Off, On
}

enum Uninitialized implements Data {
    Uninitialized
}

public class SimpleFSM extends AbstractFSM<State, Data> {
    {
        // fsm body
        startWith(Off, Uninitialized);

        // transitions
        when(Off,
            matchEvent(... .class ...,
                (... Variable Names ...) ->
                    goTo(On).using(...))); // powerOn(<someArguments>)

        when(On,
            matchEvent(... .class ...,
                (... Variable Names ...) ->
                    goTo(Off).using(...))); // powerOff(<someArguments>)

        initialize();
    }
}
For a real working project, see:
Scala and Java 8 with Lambda Template for an Akka AbstractFSM
Well, this is a really old question, but if you got here from Google and are still interested in implementing an FSM with Akka, I suggest looking at this part of the documentation.
If you want to see a practical model-driven state machine implementation, you can check my blog1, blog2.

Are tagged classes in Java acceptable when simply conveying a value and (differing) associated attributes?

I have an object representing a network session, with a list of actions to execute - so it could send a message, receive a message, pause, receive a message and receive a message, for example. Actions have some extra data associated with them - for example, when receiving a message you have a regular expression that matches it, whereas when sending a message you just have the literal message and whether to retransmit.
I'd like the session object to handle the actual receiving or sending of messages - those rely on state contained in the session object (fields to fill in, what to do on failure, and so on) and I think it's cleaner to have the session do that based on the current action than to delegate it to the action and pass the action all of its state.
Instinctively I'd have a single Action class, with a field indicating its type (send/receive/pause) and some other fields, not all of which would be used for a given type (message to send/regexp to match/pause duration). But I've been reading Effective Java, which says that using a "tagged class" like this is bad and is better done with inheritance. I'm not really sure how to make that work, though - if I had a RecvAction, SendAction and PauseAction subclass, I think my session object would have to do an instanceof check to figure out the right behaviour, and I was under the impression that instanceof checks are a bit of a code smell.
What is the right approach to this problem, in terms of good Java style? If I have a value object conveying a piece of primary information (send a message) and related secondary information (what message to send), is that a legitimate exception where I can use tagged classes, or is there a cleaner way to approach this problem?
If you need this kind of flexibility, you can toss the tags and allow the actions to be plain objects that are picked up by accompanying processors. E.g. a list of Processor classes with the methods boolean supports(Action action) and void handle(Action action). You will also need an orchestrator, which holds an arbitrary list of processors; for each message received, the processors are asked whether they support it, and the one that answers true from supports(Action action) gets handle(Action action) called on it.
E.g.
public interface Processor<A extends Action> {
    boolean supports(Action action);
    void handle(A action);
}

public class ActionRouter {
    private final List<Processor> processors = new LinkedList<>();

    public void handle(Action a) {
        for (Processor p : processors) {
            if (p.supports(a)) {
                p.handle(a); // raw type here; the supports() check is what makes this safe
                return;
            }
        }
    }
}
This way you can achieve an acceptable level of cohesion, e.g. by implementing focused action processors like SendActionProcessor implements Processor<SendAction>.
Yeah, the instanceof doesn't seem very elegant, but it can be tolerated for this purpose. If you don't like it and want to avoid repeating yourself, implement an abstract processor class that takes the needed type as a constructor argument and encapsulates the acceptance by type (see the sketch below).
On the other hand, it doesn't always have to be an instanceof test: the supports check acts as a predicate and can also inspect the state of the action before deciding to handle it. Really, it depends on what you need.
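A rough sketch of that abstract processor (class names here are illustrative, matching the interface above): the accepted type is passed to the constructor, so concrete processors never write instanceof themselves.

public abstract class TypedProcessor<A extends Action> implements Processor<A> {
    private final Class<A> type;

    protected TypedProcessor(Class<A> type) {
        this.type = type;
    }

    @Override
    public boolean supports(Action action) {
        return type.isInstance(action); // acceptance by type, in one place
    }
}

// e.g. a focused processor:
// public class SendActionProcessor extends TypedProcessor<SendAction> {
//     public SendActionProcessor() { super(SendAction.class); }
//     @Override public void handle(SendAction action) { /* send the message */ }
// }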

Java pattern for nested callbacks?

I'm looking for a Java pattern for making a nested sequence of non-blocking method calls. In my case, some client code needs to asynchronously invoke a service to perform some use case, and each step of that use case must itself be performed asynchronously (for reasons outside the scope of this question). Imagine I have existing interfaces as follows:
public interface Request {}

public interface Response {}

public interface Callback<R extends Response> {
    void onSuccess(R response);
    void onError(Exception e);
}
There are various paired implementations of the Request and Response interfaces, namely RequestA + ResponseA (given by the client), RequestB + ResponseB (used internally by the service), etc.
The processing flow (shown as a diagram in the original question) chains these request/response pairs in sequence.
In between the receipt of each response and the sending of the next request, some additional processing needs to happen (e.g. based on values in any of the previous requests or responses).
So far I've tried two approaches to coding this in Java:
anonymous classes: gets ugly quickly because of the required nesting
inner classes: neater than the above, but still hard for another developer to comprehend the flow of execution
Is there some pattern to make this code more readable? For example, could I express the service method as a list of self-contained operations that are executed in sequence by some framework class that takes care of the nesting?
Since the implementation (not only the interface) must not block, I like your list idea.
Set up a list of "operations" (perhaps Futures?), for which the setup should be pretty clear and readable. Then upon receiving each response, the next operation should be invoked.
With a little imagination, this sounds like the chain of responsibility. Here's some pseudocode for what I'm imagining:
public void setup() {
    this.operations.add(new Operation(new RequestA(), new CallbackA()));
    this.operations.add(new Operation(new RequestB(), new CallbackB()));
    this.operations.add(new Operation(new RequestC(), new CallbackC()));
    this.operations.add(new Operation(new RequestD(), new CallbackD()));
    startNextOperation();
}

private void startNextOperation() {
    if (this.operations.isEmpty()) {
        reportAllOperationsComplete();
        return;
    }
    Operation op = this.operations.remove(0);
    op.request.go(op.callback);
}

private class CallbackA implements Callback<ResponseA> {
    public void onSuccess(ResponseA response) {
        // store response? etc?
        startNextOperation();
    }
    public void onError(Exception e) { /* abort or retry the sequence */ }
}
...
In my opinion, the most natural way to model this kind of problem is with Future<V>.
So instead of using a callback, just return a "thunk": a Future<Response> that represents the response that will be available at some point in the future.
You can either model subsequent steps as methods like Future<ResponseB> step2(Future<ResponseA>), or use ListenableFuture<V> from Guava, where Futures.transform() or one of its overloads lets you chain your functions in a natural way while still preserving the asynchronous nature.
If used in this way, Future<V> behaves like a monad (in fact, I think it may qualify as one, although I'm not sure off the top of my head), and so the whole process feels a bit like IO in Haskell as performed via the IO monad.
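A hedged sketch of that chaining with Guava's ListenableFuture (callServiceA/B/C and buildRequestB/C are hypothetical asynchronous service methods; the types come from com.google.common.util.concurrent and java.util.concurrent):

Executor executor = MoreExecutors.directExecutor(); // pick a real executor in production code

ListenableFuture<ResponseA> a = callServiceA(requestA);
ListenableFuture<ResponseB> b =
        Futures.transformAsync(a, respA -> callServiceB(buildRequestB(respA)), executor);
ListenableFuture<ResponseC> c =
        Futures.transformAsync(b, respB -> callServiceC(buildRequestC(respB)), executor);

Futures.addCallback(c, new FutureCallback<ResponseC>() {
    @Override public void onSuccess(ResponseC result) { /* whole sequence done */ }
    @Override public void onFailure(Throwable t)      { /* any step failed */ }
}, executor);

Each transformAsync step only runs once the previous future completes, so the flow stays non-blocking while reading top to bottom instead of nesting callbacks.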
You can use the actor computing model. In your case, the client, the services, and the callbacks [B-D] can all be represented as actors.
There are many actor libraries for Java. Most of them, however, are heavyweight, so I wrote a compact and extendable one: df4j. It treats the actor model as a specific case of the more general dataflow computing model and, as a result, lets the user create new types of actors to fit their requirements.
I am not sure if I understand your question correctly. If you want to invoke a service and, on its completion, pass the result to another object that can continue processing with it, you can look at using the Composite and Observer patterns to achieve this.
