Akka Java FSM by Example

Please note: I am a Java developer with no working knowledge of Scala (sadly). I would ask that any code examples provided in the answer would be using Akka's Java API.
I am trying to use the Akka FSM API to model the following super-simple state machine. In reality, my machine is much more complicated, but the answer to this question will allow me to extrapolate to my actual FSM.
And so I have 2 states: Off and On. You can go from Off -> On by powering the machine on by calling SomeObject#powerOn(<someArguments>). You can go from On -> Off by powering the machine off by calling SomeObject#powerOff(<someArguments>).
I'm wondering what actors and supporting classes I'll need in order to implement this FSM. I believe the actor representing the FSM has to extend AbstractFSM. But what classes represent the 2 states? What code exposes and implements the powerOn(...) and powerOff(...) state transitions? A working Java example, or even just Java pseudo-code, would go a long way for me here.

I think we can do a bit better than copypasta from the FSM docs (http://doc.akka.io/docs/akka/snapshot/java/lambda-fsm.html). First, let's explore your use case a bit.
You have two triggers (or events, or signals) -- powerOn and powerOff. You would like to send these signals to an Actor and have it change state, of which the two meaningful states are On and Off.
Now, strictly speaking an FSM needs one additional component: an action you wish to take on transition.
FSM:
State (S) x Event (E) -> Action (A), State (S')
Read: "When in state S, if signal E is received, produce action A and advance to state S'"
You don't NEED an action, but an Actor cannot be directly inspected, nor directly modified. All mutation and acknowledgement occurs through asynchronous message passing.
In your example, which provides no action to perform on transition, you basically have a state machine that's a no-op. Signals arrive, the state transitions without side effects, and that state is invisible, so a working machine is identical to a broken one. And since this all occurs asynchronously, you don't even know when the broken thing has finished.
So allow me to expand your contract a little bit, and include the following actions in your FSM definitions:
When in Off, if powerOn is received, advance state to On and respond to the caller with the new state
When in On, if powerOff is received, advance state to Off and respond to the caller with the new state
Now we might be able to build an FSM that is actually testable.
Let's define a pair of classes for your two signals (the AbstractFSM DSL expects to match on class):
public static class PowerOn {}
public static class PowerOff {}
Let's define a pair of enums for your two states:
enum LightswitchState { on, off }
Let's define an AbstractFSM Actor (http://doc.akka.io/japi/akka/2.3.8/akka/actor/AbstractFSM.html). Extending AbstractFSM allows us to define an actor using a chain of FSM definitions similar to those above, rather than defining message behavior directly in an onReceive() method. It provides a nice little DSL for these definitions, and (somewhat bizarrely) expects that the definitions be set up in an instance initializer block.
A quick detour, though: AbstractFSM has two generics defined which are used to provide compile time type checking.
S is the base of State types we wish to use, and D is the base of Data types. If you're building an FSM that will hold and modify data (maybe a power meter for your light switch?), you would build a separate class to hold this data rather than trying to add new members to your subclass of AbstractFSM. Since we have no data, let's define a dummy class just so you can see how it gets passed around:
public static class NoDataItsJustALightswitch {}
And so, with this out of the way, we can build our actor class.
public class Lightswitch extends AbstractFSM<LightswitchState, NoDataItsJustALightswitch> {
    { //instance initializer block: the "FSM body"
        //assumes a static import of LightswitchState.* so we can write on/off directly
        startWith(off, new NoDataItsJustALightswitch()); //okay, we're saying that when a new Lightswitch is born, it'll be in the off state and have a new NoDataItsJustALightswitch() object as data

        //our first FSM definition
        when(off,                                //when in off,
            matchEvent(PowerOn.class,            //if we receive a PowerOn message,
                NoDataItsJustALightswitch.class, //and have data of this type,
                (powerOn, noData) ->             //we'll handle it using this function:
                    goTo(on)                     //go to the on state,
                        .replying(on)            //and reply to the sender that we went to the on state
            )
        );

        //our second FSM definition
        when(on,
            matchEvent(PowerOff.class,
                NoDataItsJustALightswitch.class,
                (powerOff, noData) -> {
                    //here you could use multiline functions,
                    //and use the contents of the event (powerOff) or data (noData) to make decisions, alter the state data, etc.
                    return goTo(off)
                        .replying(off);
                }
            )
        );

        initialize(); //boilerplate
    }
}
I'm sure you're wondering: how do I use this?! So let's make you a test harness using straight JUnit and the Akka Testkit for java:
public class LightswitchTest {
@Test public void testLightswitch() {
ActorSystem system = ActorSystem.create("lightswitchtest");//should make this static if you're going to test a lot of things, actor systems are a bit expensive
new JavaTestKit(system) {{ //double-brace: an instance initializer on an anonymous subclass
ActorRef lightswitch = system.actorOf(Props.create(Lightswitch.class)); //here is our lightswitch. It's an actor ref, a reference to an actor that will be created on
//our behalf of type Lightswitch. We can't, as mentioned earlier, actually touch the instance
//of Lightswitch, but we can send messages to it via this reference.
lightswitch.tell( //using the reference to our actor, tell it
new PowerOn(), //to "Power On," using our message type
getRef()); //and giving it an actor to call back (in this case, the JavaTestKit itself)
//because it is asynchronous, the tell will return immediately. Somewhere off in the distance, on another thread, our lightbulb is receiving its message
expectMsgEquals(LightswitchState.on); //we block until the lightbulb sends us back a message with its current state ("on.")
//If our actor is broken, this call will timeout and fail.
lightswitch.tell(new PowerOff(), getRef());
expectMsgEquals(LightswitchState.off);
system.stop(lightswitch); //switch works, kill the instance, leave the system up for further use
}};
}
}
And there you are: an FSM lightswitch. Honestly though, an example this trivial doesn't really show the power of FSMs, as a data-free example can be performed as a set of "become/unbecome" behaviors in like half as many LoC with no generics or lambdas. Much more readable IMO.
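For the curious, here is a rough sketch of that become-based alternative using the plain (untyped) AbstractActor API. It reuses the PowerOn/PowerOff/LightswitchState types from above and is meant as an illustration of the idea, not a drop-in replacement:
import akka.actor.AbstractActor;

public class BecomeLightswitch extends AbstractActor {

    @Override
    public Receive createReceive() {
        return off(); //start out "off"
    }

    private Receive off() {
        return receiveBuilder()
            .match(PowerOn.class, msg -> {
                getContext().become(on());                         //swap in the "on" behavior
                getSender().tell(LightswitchState.on, getSelf());  //acknowledge the new state
            })
            .build();
    }

    private Receive on() {
        return receiveBuilder()
            .match(PowerOff.class, msg -> {
                getContext().become(off());                        //swap back to the "off" behavior
                getSender().tell(LightswitchState.off, getSelf());
            })
            .build();
    }
}
The same JavaTestKit harness shown above would pass against this actor, since it replies with the same LightswitchState values.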
PS consider learning Scala, if only to be able to read other people's code! The first half of the book Atomic Scala is available free online.
PPS if all you really want is a composable state machine, I maintain Pulleys, a state machine engine based on statecharts in pure Java. It's getting on in years (lots of XML and old patterns, no DI integration), but if you really want to decouple the implementation of a state machine from its inputs and outputs, there may be some inspiration there.

I know Actors from Scala.
This Java starter code may help you get going:
Yes, extend your SimpleFSM from AbstractFSM.
The states are an enum used by the AbstractFSM.
Your <someArguments> can be the data part of your AbstractFSM.
Your powerOn and powerOff are actor messages/events.
The state switching happens in the transitions part.
// states
enum State {
    Off, On
}

// the data part (a placeholder interface here)
interface Data {}

enum Uninitialized implements Data {
    Uninitialized
}

public class SimpleFSM extends AbstractFSM<State, Data> {
    {
        // fsm body
        startWith(Off, Uninitialized);

        // transitions
        when(Off,
            matchEvent(... .class ...,
                (... Variable Names ...) ->
                    goTo(On).using(...) ) ); // powerOn(<someArguments>)

        when(On,
            matchEvent(... .class ...,
                (... Variable Names ...) ->
                    goTo(Off).using(...) ) ); // powerOff(<someArguments>)

        initialize();
    }
}
For a real working project, see:
Scala and Java 8 with Lambda Template for an Akka AbstractFSM

Well, this is a really old question, but if you got here from Google and are still interested in implementing an FSM with Akka, I suggest looking at this part of the documentation.
If you want to see a practical, model-driven state machine implementation, you can check my blog1, blog2.

Related

Switching to different behaviors does not work as intended

So I was playing with different behaviors in Akka. When I executed this code:
@Override
public Receive<CommonCommand> createReceive() {
return notYetStarted();
}
public Receive<CommonCommand> notYetStarted() {
return newReceiveBuilder()
.onMessage(RaceLengthCommand.class, message -> {
// business logic
return running();
})
.build();
}
public Receive<CommonCommand> running() {
return newReceiveBuilder()
.onMessage(AskPosition.class, message -> {
if ("some_condition") {
// business logic
return this;
} else {
// business logic
return completed(completedTime);
}
})
.build();
}
public Receive<CommonCommand> completed(long completedTime) {
return newReceiveBuilder()
.onMessage(AskPosition.class, message -> {
// business logic
return this;
})
.build();
}
I got the following log:
21:46:41.038 [monitor-akka.actor.default-dispatcher-6] INFO akka.actor.LocalActorRef - Message [learn.tutorial._5_racing_game_akka.RacerBehavior$AskPosition] to Actor[akka://monitor/user/racer_1#-1301834398] was unhandled. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Initially the RaceLengthCommand message is sent to notYetStarted() behavior. That works fine. Then this behavior should transition to running() behavior, and this second one should receive the message AskPosition.
But according to my tests, the AskPosition message is delivered to notYetStarted() behavior. This contradicts my whole understanding of the concept.
I confirmed this by copying the onMessage() part from the running() behavior and pasting it into the notYetStarted() behavior. Now the code executes fine and there are no more dead letters.
So apparently notYetStarted() behavior is indeed receiving messages even after I switched behaviors? Why is this happening???
It seems like your actor definition is mixing the OO and functional styles and hitting some warts in the interplay between those styles in the same actor. Some of this confusion arises from Receive in the Java API being a Behavior but not an AbstractBehavior. This may be a vestige of an earlier evolution of the API: I've suggested to the Akka maintainers that it be dropped in 2.7 (the absolute earliest it could be dropped, owing to binary compatibility), but communications with some of them haven't yielded a reason for the distinction, and there's no such distinction in the analogous Scala API.
Disclaimer: I tend to work exclusively in the Scala functional API for actor definition.
There's a duality in the actor model between the state of an actor (i.e. its fields) and its behavior (how it responds to the next message it receives): to the world outside the actor, they are one and the same because the only way (ignoring something like taking a heap dump) to observe the actor's state is to observe its response to a message. Of course, in fact, the current behavior is a field in the runtime representation of the actor, and in the functional definitions, the behavior generally has fields for state.
The OO style of behavior definition favors:
mutable fields in the actor
the current behavior being immutable (the behavior can make decisions based on the fields)
The functional style of behavior definition favors:
no mutable fields in the actor
updating the behavior (which is an implicitly mutable field)
(the distinction is analogous to imperative programming having a while/for loop where a variable is updated vs. functional programming's preference for defining a recursive function which the compiler turns behind the scenes into a loop).
The AbstractBehavior API seems to assume that the message handler is createReceive(): returning this within an AbstractBehavior means to go back to the createReceive(). Conversely, Behaviors.same() in the functional style means "whatever the current behavior is, don't change it". Where there are multiple sub-Behaviors/Receives in one AbstractBehavior, this difference is important (it's not important when there's one Receive in the AbstractBehavior).
TL;DR: if defining multiple message handlers in an AbstractBehavior, prefer return Behaviors.same to return this in message handlers. Alternatively: only define one message handler per AbstractBehavior.
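To make the TL;DR concrete, here is a minimal sketch of an AbstractBehavior with two handlers where the handlers return Behaviors.same() rather than this (the class and message names are hypothetical, loosely echoing the question):
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.AbstractBehavior;
import akka.actor.typed.javadsl.ActorContext;
import akka.actor.typed.javadsl.Behaviors;
import akka.actor.typed.javadsl.Receive;

public class RacerSketch extends AbstractBehavior<RacerSketch.Command> {

    public interface Command {}
    public static final class Start implements Command {}
    public static final class AskPosition implements Command {}

    public static Behavior<Command> create() {
        return Behaviors.setup(RacerSketch::new);
    }

    private RacerSketch(ActorContext<Command> context) {
        super(context);
    }

    @Override
    public Receive<Command> createReceive() {
        return notYetStarted();
    }

    private Receive<Command> notYetStarted() {
        return newReceiveBuilder()
            .onMessage(Start.class, msg -> running()) // switch to the running handler
            .build();
    }

    private Receive<Command> running() {
        return newReceiveBuilder()
            // Behaviors.same() keeps whatever the current behavior is (running()),
            // whereas returning `this` would fall back to createReceive(), i.e. notYetStarted().
            .onMessage(AskPosition.class, msg -> Behaviors.same())
            .build();
    }
}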

Command Pattern with additional argument parsing

Recently I have really focused on design patterns and implementing them to solve different problems. Today I am working on the Command Pattern.
I have ended up creating an interface:
public interface Command {
public void execute();
}
I have several concrete implementations:
public class PullCommand implements Command {
public void execute() {
// logic
}
}
and:
public class PushCommand implements Command {
public void execute() {
// logic
}
}
There are several other commands as well.
Now, the thing is there's a BlockingQueue<Command> consumed on a different thread, which uses .take() to retrieve queued commands and execute them as they come in (I'll call this the Executor class below); the commands are produced by another class which parses the user input and calls .queue(). So far so good...
The hard part for me is parsing the command (this is a CLI application).
I have put all of them in a HashMap:
private HashMap<String, Command> commands = new HashMap<String, Command>();
commands.put("pull", new PullCommand());
commands.put("push", new PushCommand());
//etc..
When the user inputs a command, the syntax guarantees one thing: the "action" (pull / push) comes as the first argument, so I can always do commands.get(arguments[0]) and check whether that is null. If it is, the command is invalid; if it isn't, I have successfully retrieved an object that represents that command. The tricky part is that there are other arguments that also need to be parsed, and for each command the parsing algorithm is different... Obviously one thing I can do is pass arguments[] as a parameter to the execute() method and end up with execute(String[] args), but that would mean putting the argument parsing inside the execute() method of the command, which I would like to avoid for several reasons:
The execution of a Command happens on a different thread that uses a BlockingQueue: it executes a single command, then another one, and so on. The logic inside execute() should ONLY be the execution of the command itself, without the parsing or any other heavy tasks that would slow the execution. (I do realize parsing a few args would not hurt performance that much, but here I am learning structural designs and trying to build good coding habits and nice solutions. Parsing inside execute() would not be ideal by any means.)
It makes me feel like I am breaking some fundamental principles of the "Command" pattern. (Even if not so, I'd like to think of a better way to solve this)
It is obvious that I cannot use the constructors of the concrete commands, since the HashMap returns already initialized objects. The next thing that comes to mind is adding another method to the object that "processes" the arguments (process(String[] args)) and sets private variables to the result of the parsing. This process(String[] args) method would be called by the Producer class before calling queue() on the command, so the parsing would end up OUT of the Executor class (thread) and point 1 from above would not be a problem.
But there's another problem. What happens if the user enters a lot of commands? The application does .get(args[0]) on the arguments and retrieves a PullCommand, calls process(String[] args) so the private variables are set, and queues the command on the Executor class, where it waits to be executed. Meanwhile, another command is input by the user: .get(args[0]) is used again and retrieves a PullCommand from the HashMap, but that PullCommand is the same instance as the one already queued for execution, and process() gets called BEFORE the first command has been executed by the Executor class, which overwrites the private variables. We end up with two PullCommand entries in the BlockingQueue: the second one is correct from the user's point of view (since it does what was just typed), but the first one is now identical to the second (since it is the same object) and no longer corresponds to its original arguments.
Another thing I thought of is using a Factory class that implements the parsing for each of the commands and returns the appropriate Command object.
This would mean, though, that I need to change the way the HashMap is used, and store the Factory class instead of Command:
HashMap<String, CommandFactory> commands = new HashMap<String, CommandFactory>();
commands.put("pull", new CommandFactory("pull"));
commands.put("pull", new CommandFactory("push"));
and based on the String passed to the Factory, its process() method would use the appropriate parsing for that command and return the corresponding Command object. But this would mean that this class could potentially get very big, because it contains the parsing for all commands.
Overall, this seems like my only option, but I am very hesitant since, from a structural point of view, I don't think I am solving this problem nicely. Is this a nice way to deal with this situation? Is there anything I am missing? Is there any way I can maybe refactor part of my code to ease it?
You are overthinking this. The command pattern is basically "keep everything you need to know to do something, and do it later", so it's OK to pass stuff to the execution code.
Just do this:
user inputs a String[]
first string is the command "name" (use it as you are now)
the remaining strings are the parameters to the command (if any)
change your interface to public void execute(String[] parameters);
to execute, pass the parameters to the command object
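A minimal sketch of that wiring, assuming the executor's queue becomes a BlockingQueue<Runnable> (the queue type and the variable names line, commands, and queue are my assumptions, not part of the question):
import java.util.Arrays;

// revised interface: the parameters travel with the call
public interface Command {
    void execute(String[] parameters);
}

// producer side: the first token picks the command, the rest are its parameters
String[] input = line.trim().split("\\s+");
Command command = commands.get(input[0]);
if (command == null) {
    System.out.println("unknown command: " + input[0]);
} else {
    String[] parameters = Arrays.copyOfRange(input, 1, input.length);
    queue.offer(() -> command.execute(parameters)); // bind the parameters now, execute later
}

// executor side (separate thread): take and run, no parsing involved
// (take() throws InterruptedException; handle or propagate it in real code)
queue.take().run();
Because the parameters are bound at enqueue time, the shared-command-instance problem described in the question does not arise.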
Throwing a design question broadly like this in SO is in general not a good idea. It's a bit surprising to only see a downvote with no close request.
In any case, without understanding your problem thoroughly, it's hard to say what the "best" design is; and even if I did understand it, I wouldn't call any design the best. So I will stick with what I said and suggest the Builder pattern.
In general, a builder pattern is used whenever the logic of construction is too complicated to fit in a constructor, or is necessarily divided into phases. In this case, if how your commands look varies a lot depending on the action, then you will want some builder like this:
interface CommandBuilder<T extends Command> {
void parseArgs(String[] args);
T build();
}
The generic parameter here is optional if you don't plan to use these builders beyond your current architecture; otherwise it's beneficial to be more precise with the types. parseArgs is responsible for the parsing you were referring to. build should spit out an instance of Command based on the current arguments.
Then you want your dispatcher map to look like this:
HashMap<String, Supplier<? extends CommandBuilder<? extends Command>>> builders = new HashMap<>();
builders.put("pull", () -> new PullBuilder());
builders.put("push", () -> new PushBuilder());
// etc
Any of these builders can potentially have extremely complicated logic, as you would desire. Then you can do
CommandBuilder<? extends Command> builder = builders.get(args[0]).get();
builder.parseArgs(args);
queue.add(builder.build());
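For concreteness, a PullBuilder might look roughly like this (the fields, the parsing, and the PullCommand constructor are made up for illustration):
public class PullBuilder implements CommandBuilder<PullCommand> {
    private String remote;
    private String branch;

    @Override
    public void parseArgs(String[] args) {
        // args[0] is "pull"; the remaining tokens are command-specific
        this.remote = args.length > 1 ? args[1] : "origin";
        this.branch = args.length > 2 ? args[2] : "master";
    }

    @Override
    public PullCommand build() {
        // the command receives everything it needs up front and can stay immutable
        return new PullCommand(remote, branch);
    }
}
Since builders.get(args[0]).get() produces a fresh builder for every input (via the Supplier), the shared-instance problem described in the question goes away.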
In this way, your Command interface can focus on what exactly it's supposed to do.
Notice that after your builders map is built, it is only read, and mutation is localized to the builders. I don't fully understand what your concern is, but it should be addressed by doing this.
However, it could be an overkill design, depending on what you want to do.

Are tagged classes in Java acceptable when simply conveying a value and (differing) associated attributes?

I have an object representing a network session, with a list of actions to execute - so it could send a message, receive a message, pause, receive a message and receive a message, for example. Actions have some extra data associated with them - for example, when receiving a message you have a regular expression that matches it, whereas when sending a message you just have the literal message and whether to retransmit.
I'd like the session object to handle the actual receiving or sending of messages - those rely on state contained in the session object (fields to fill in, what to do on failure, and so on) and I think it's cleaner to have the session do that based on the current action than to delegate it to the action and pass the action all of its state.
Instinctively I'd have a single Action class, with a field indicating its type (send/receive/pause) and some other fields, not all of which would be used for a given type (message to send/regexp to match/pause duration). But I've been reading Effective Java, which says that using a "tagged class" like this is bad and is better done with inheritance. I'm not really sure how to make that work, though - if I had a RecvAction, SendAction and PauseAction subclass, I think my session object would have to do an instanceof check to figure out the right behaviour, and I was under the impression that instanceof checks are a bit of a code smell.
What is the right approach to this problem, in terms of good Java style? If I have a value object conveying a piece of primary information (send a message) and related secondary information (what message to send), is that a legitimate exception where I can use tagged classes, or is there a cleaner way to approach this problem?
If you need this kind of flexibility, you can toss the tags and let the actions be plain objects that are picked up by accompanying processors, e.g. a list of Processor classes with the methods boolean supports(Action action) and void handle(Action action). You will also need an orchestrator, which holds an arbitrary list of processors; for each action received, the processors are asked whether they support it, and the first one that answers true from supports(Action action) gets handle(Action action) called.
E.g.
public interface Processor<A extends Action> {
    boolean supports(Action action);
    void handle(A action);
}

public class ActionRouter {
    private final List<Processor> processors = new LinkedList<Processor>();

    public void handle(Action a) {
        for (Processor p : processors) {
            if (p.supports(a)) {
                p.handle(a);
                return;
            }
        }
    }
}
This way you can achieve acceptable level of cohesion, e.g. by implementing focused action processors, like SendActionProcessor implements Processor<SendAction>.
Yeah, the instanceof doesn't seem very elegant, but it can be tolerated for the purpose. If you don't like it and don't want to repeat yourself, implement an abstract processor class which takes the needed type as a constructor argument and encapsulates the acceptance by type (see the sketch below).
On the other hand, it's not always an instanceof test. Your supports method acts as a predicate and can also test the state of your actions before deciding to handle them. Really, it depends on what you need.
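Here is a small sketch of that abstract processor idea, with illustrative names (SendAction is assumed to be one of your action subclasses):
public abstract class TypedProcessor<A extends Action> implements Processor<A> {
    private final Class<A> supportedType;

    protected TypedProcessor(Class<A> supportedType) {
        this.supportedType = supportedType;
    }

    @Override
    public boolean supports(Action action) {
        return supportedType.isInstance(action); // the type check lives in one place
    }
}

// a focused processor then only declares its type and its handling logic
public class SendActionProcessor extends TypedProcessor<SendAction> {
    public SendActionProcessor() {
        super(SendAction.class);
    }

    @Override
    public void handle(SendAction action) {
        // send the message using the action's data and the session state
    }
}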

Java pattern for nested callbacks?

I'm looking for a Java pattern for making a nested sequence of non-blocking method calls. In my case, some client code needs to asynchronously invoke a service to perform some use case, and each step of that use case must itself be performed asynchronously (for reasons outside the scope of this question). Imagine I have existing interfaces as follows:
public interface Request {}
public interface Response {}
public interface Callback<R extends Response> {
void onSuccess(R response);
void onError(Exception e);
}
There are various paired implementations of the Request and Response interfaces, namely RequestA + ResponseA (given by the client), RequestB + ResponseB (used internally by the service), etc.
The processing flow looks like this:
In between the receipt of each response and the sending of the next request, some additional processing needs to happen (e.g. based on values in any of the previous requests or responses).
So far I've tried two approaches to coding this in Java:
anonymous classes: gets ugly quickly because of the required nesting
inner classes: neater than the above, but still hard for another developer to comprehend the flow of execution
Is there some pattern to make this code more readable? For example, could I express the service method as a list of self-contained operations that are executed in sequence by some framework class that takes care of the nesting?
Since the implementation (not only the interface) must not block, I like your list idea.
Set up a list of "operations" (perhaps Futures?), for which the setup should be pretty clear and readable. Then upon receiving each response, the next operation should be invoked.
With a little imagination, this sounds like the chain of responsibility. Here's some pseudocode for what I'm imagining:
public void setup() {
this.operations.add(new Operation(new RequestA(), new CallbackA()));
this.operations.add(new Operation(new RequestB(), new CallbackB()));
this.operations.add(new Operation(new RequestC(), new CallbackC()));
this.operations.add(new Operation(new RequestD(), new CallbackD()));
startNextOperation();
}
private void startNextOperation() {
    if ( this.operations.isEmpty() ) {
        reportAllOperationsComplete();
        return; // nothing left to start
    }
    Operation op = this.operations.remove(0);
    op.request.go( op.callback );
}
private class CallbackA implements Callback<ResponseA> {
    public void onSuccess(ResponseA response) {
        // store response? etc?
        startNextOperation();
    }
    public void onError(Exception e) {
        // abort the whole chain, or retry?
    }
}
...
In my opinion, the most natural way to model this kind of problem is with Future<V>.
So instead of using a callback, just return a "thunk": a Future<Response> that represents the response that will be available at some point in the future.
Then you can either model subsequent steps as things like Future<ResponseB> step2(Future<ResponseA>), or use ListenableFuture<V> from Guava. Then you can use Futures.transform() or one of its overloads to chain your functions in a natural way, but while still preserving the asynchronous nature.
If used in this way, Future<V> behaves like a monad (in fact, I think it may qualify as one, although I'm not sure off the top of my head), and so the whole process feels a bit like IO in Haskell as performed via the IO monad.
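The answer mentions Guava's ListenableFuture; for a library-free illustration of the same chaining idea, here is a sketch using the JDK's CompletableFuture (the service methods and helper functions are hypothetical):
import java.util.concurrent.CompletableFuture;

// each call returns immediately with a future of its response
CompletableFuture<ResponseA> stepA = serviceA.call(new RequestA());

CompletableFuture<ResponseC> result =
    stepA.thenCompose(a -> serviceB.call(buildRequestB(a)))  // use ResponseA to build RequestB
         .thenCompose(b -> serviceC.call(buildRequestC(b))); // use ResponseB to build RequestC

result.whenComplete((response, error) -> {
    if (error != null) {
        // corresponds to Callback.onError(error)
    } else {
        // corresponds to Callback.onSuccess(response)
    }
});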
You can use the actor computing model. In your case, the client, the services, and the callbacks [B-D] can all be represented as actors.
There are many actor libraries for Java. Most of them, however, are heavyweight, so I wrote a compact and extendable one: df4j. It treats the actor model as a specific case of the more general dataflow computing model and, as a result, lets the user create new types of actors that optimally fit their requirements.
I am not sure if I get your question correctly. If you want to invoke a service and, on its completion, have the result passed to another object which can continue processing with that result, you can look at using Composite and Observer to achieve this.

What is a callback in java [duplicate]

Possible Duplicate:
What is a callback function?
I have read the Wikipedia definition of a callback but I still didn't get it. Can anyone explain to me what a callback is, especially the following line:
In computer programming, a callback is a reference to executable code, or a piece of executable code, that is passed as an argument to other code. This allows a lower-level software layer to call a subroutine (or function) defined in a higher-level layer.
Callbacks are most easily described in terms of the telephone system. A function call is analogous to calling someone on a telephone, asking her a question, getting an answer, and hanging up; adding a callback changes the analogy so that after asking her a question, you also give her your name and number so she can call you back with the answer.
Paul Jakubik, Callback Implementations in C++.
Maybe an example would help.
Your app wants to download a file from some remote computer and then write it to a local disk. The remote computer is on the other side of a dial-up modem and a satellite link. The latency and transfer time will be huge and you have other things to do. So, you have a function/method that will write a buffer to disk. You pass a pointer to this method to your network API, together with the remote URI and other stuff. This network call returns 'immediately' and you can do your other stuff. 30 seconds later, the first buffer from the remote computer arrives at the network layer. The network layer then calls the function that you passed during the setup and so the buffer gets written to disk - the network layer has 'called back'. Note that, in this example, the callback would happen on a network layer thread rather than the originating thread, but that does not matter - the buffer still gets written to the disk.
A callback is some code that you pass to a given method, so that it can be called at a later time.
In Java one obvious example is java.util.Comparator. You do not usually use a Comparator directly; rather, you pass it to some code that calls the Comparator at a later time:
Example:
class CodedString implements Comparable<CodedString> {
    private int code;
    private String text;
    ...
    @Override
    public boolean equals(Object other) {
        // member-wise equality
    }
    @Override
    public int hashCode() {
        // member-wise hash code
    }
    @Override
    public int compareTo(CodedString cs) {
        // Compare using "code" first, then
        // "text" if both codes are equal.
    }
}
...
public void sortCodedStringsByText(List<CodedString> codedStrings) {
    Comparator<CodedString> comparatorByText = new Comparator<CodedString>() {
        @Override
        public int compare(CodedString cs1, CodedString cs2) {
            // Compare cs1 and cs2 using just the "text" field
        }
    };
    // Here we pass the comparatorByText callback to Collections.sort(...)
    // Collections.sort(...) will then call this callback whenever it
    // needs to compare two items from the list being sorted.
    // As a result, we will get the list sorted by just the "text" field.
    // If we do not pass a callback, Collections.sort will use the default
    // comparison for the class (first by "code", then by "text").
    Collections.sort(codedStrings, comparatorByText);
}
Strictly speaking, the concept of a callback function does not exist in Java, because in Java there are no functions, only methods, and you cannot pass a method around; you can only pass objects and interfaces. So, whoever has a reference to that object or interface may invoke any of its methods, not just the one method that you might wish them to.
However, this is all fine and well, and we often speak of callback objects and callback interfaces, and when there is only one method in that object or interface, we may even speak of a callback method or even a callback function; we humans tend to thrive in inaccurate communication.
(Actually, perhaps the best approach is to just speak of "a callback" without adding any qualifications: this way, you cannot possibly go wrong.
See next sentence.)
One of the most famous examples of using a callback in Java is when you call an ArrayList object to sort itself, and you supply a comparator which knows how to compare the objects contained within the list.
Your code is the high-level layer, which calls the lower-level layer (the standard java runtime list object) supplying it with an interface to an object which is in your (high level) layer. The list will then be "calling back" your object to do the part of the job that it does not know how to do, namely to compare elements of the list. So, in this scenario the comparator can be thought of as a callback object.
A callback is commonly used in asynchronous programming: you could create a method which handles the response from a web service, and when you call the web service, you pass that method to it so that when the web service responds, it calls the method you gave it ... it "calls back".
In Java this can commonly be done through implementing an interface and passing an object (or an anonymous inner class) that implements it. You find this often with transactions and threading - such as the Futures API.
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Future.html
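For illustration, here is a minimal sketch of that style of callback (the ResponseHandler interface and WebServiceClient are made-up names, not a real API):
public interface ResponseHandler {
    void onResponse(String body);     // called back when the web service answers
    void onFailure(Exception error);  // called back if the request fails
}

public class WebServiceClient {
    // starts the request and returns immediately; the handler is invoked later,
    // typically on another thread, once the response arrives
    public void fetchAsync(String url, ResponseHandler handler) {
        new Thread(() -> {
            try {
                String body = doBlockingHttpGet(url); // stand-in for real I/O
                handler.onResponse(body);             // the "call back"
            } catch (Exception e) {
                handler.onFailure(e);
            }
        }).start();
    }

    private String doBlockingHttpGet(String url) throws Exception {
        // real HTTP code would go here
        return "response for " + url;
    }
}

// caller side (assuming a WebServiceClient instance named client):
// pass the callback as an anonymous inner class, as described above
client.fetchAsync("http://example.com/data", new ResponseHandler() {
    public void onResponse(String body) { System.out.println("got: " + body); }
    public void onFailure(Exception error) { error.printStackTrace(); }
});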
In Java, callback methods are mainly used to address the "Observer Pattern", which is closely related to "Asynchronous Programming". Callbacks are also used to simulate passing methods as parameters, as is done in functional programming languages.
