Recently I have really focused on design patterns and implementing them to solve different problems. Today I am working on the Command Pattern.
I have ended up creating an interface:
public interface Command {
public void execute();
}
I have several concrete implementations:
public class PullCommand implements Command {
public void execute() {
// logic
}
}
and:
public class PushCommand implements Command {
public void execute() {
// logic
}
}
There are several other commands as well.
Now, the thing is, there's a BlockingQueue<Command> consumed on a different thread (I'd call this the Executor class below), which uses .take() to retrieve queued commands and execute them as they come in. The commands are produced by another class that parses the user input and calls .queue(). So far so good...
The hard part for me is parsing the commands (this is a CLI application).
I have put all of them in a HashMap:
private HashMap<String, Command> commands = new HashMap<String, Command>();
commands.put("pull", new PullCommand());
commands.put("push", new PushCommand());
//etc..
When the user inputs a command, the syntax guarantees one thing: the "action" (pull / push) comes as the first argument. So I can always do commands.get(arguments[0]) and check whether the result is null; if it is, the command is invalid, and if it isn't, I have successfully retrieved an object that represents that command. The tricky part is that there are other arguments that also need to be parsed, and the parsing algorithm is different for each command... Obviously, one thing I could do is pass arguments[] as a parameter to execute(), ending up with execute(String[] args), but that would mean putting the argument parsing inside the execute() method of the command, which I would like to avoid for several reasons:
The execution of a Command happens on a different thread that uses a BlockingQueue, executing one command after another. The logic inside execute() should ONLY be the execution of the command itself, without the parsing or any other heavy work that would slow the execution. (I do realize parsing a few args would not hurt performance that much, but here I am learning structural designs and trying to build good coding habits and nice solutions. This would not be perfect by any means.)
It makes me feel like I am breaking some fundamental principle of the Command pattern. (Even if I am not, I would like to think of a better way to solve this.)
Obviously I cannot use the constructors of the concrete commands, since the HashMap returns already-initialized objects. The next thing that comes to mind is adding another method to the object, process(String[] args), which parses the arguments and stores the results in private fields. This process(String[] args) method would be called by the producer class before it calls queue() on the command, so the parsing would stay OUT of the Executor class (thread), and point 1 above would not be a problem.
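A hedged sketch of that process(String[]) idea (the pull <branch> syntax and all names are invented for illustration):

```java
interface Command {
    void process(String[] args); // parse arguments, store the result in fields
    void execute();              // run, using the previously stored fields
}

class PullCommand implements Command {
    private String branch; // mutable state, shared by every "copy" in the queue

    public void process(String[] args) {
        // hypothetical syntax: pull <branch>
        this.branch = args.length > 1 ? args[1] : "master";
    }

    public void execute() {
        System.out.println("pulling " + branch);
    }

    String branch() { return branch; } // exposed only for this demo
}

public class ProcessDemo {
    public static void main(String[] argv) {
        PullCommand cmd = new PullCommand();       // the single instance from the map
        cmd.process(new String[]{"pull", "dev"});  // first user input, queued
        cmd.process(new String[]{"pull", "main"}); // second input overwrites the fields
        cmd.execute(); // both queued "copies" would now pull "main"
    }
}
```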
But there's another problem. Suppose the user enters a lot of commands. The application does .get(args[0]), retrieves a PullCommand, calls process(String[] args) to set its private fields, and queues the command for execution. Meanwhile, the user inputs another command; .get(args[0]) is called again and retrieves a PullCommand from the HashMap (the same PullCommand instance that is already queued for execution), and process() is called BEFORE the first command has been executed, clobbering the private fields. We end up with two PullCommand entries in the BlockingQueue; the second one is correct from the user's point of view (since he input what he wants it to do and it does just that), but the first one, being the same object, now carries the second command's arguments instead of its own.
Another thing I thought of is using a Factory class that implements the parsing for each of the commands and returns the appropriate Command object.
This would mean though, that I need to change the way HashMap is used and instead of Command I have to use the Factory class instead:
HashMap<String, CommandFactory> commands = new HashMap<String, CommandFactory>();
commands.put("pull", new CommandFactory("pull"));
commands.put("push", new CommandFactory("push"));
Based on the String passed to the factory, its process() method would use the appropriate parsing for that command and return the corresponding Command object. But this would mean the class could grow very large, since it contains the parsing for every command...
Overall, this seems like my only option, but I am very hesitant since from structural point of view, I don't think I am solving this problem nicely. Is this a nice way to deal with this situation? Is there anything I am missing? Is there any way I can maybe refactor part of my code to ease it?
You are overthinking this. The command pattern is basically "keep everything you need to know to do something, and do it later", so it's OK to pass stuff to the execution code.
Just do this:
user inputs a String[]
first string is the command "name" (use it as you are now)
the remaining strings are the parameters to the command (if any)
change your interface to public void execute(String[] parameters);
to execute, pass the parameters to the command object
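A minimal sketch of this answer's approach, assuming a producer that parses input and a consumer thread that runs the queued closures (all class names and command bodies here are invented):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

interface Command {
    void execute(String[] parameters); // parameters travel with the call
}

public class Dispatcher {
    private final Map<String, Command> commands = new HashMap<>();
    // the Executor thread would loop on queue.take().run()
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public Dispatcher() {
        commands.put("pull", params -> System.out.println("pull " + String.join(" ", params)));
        commands.put("push", params -> System.out.println("push " + String.join(" ", params)));
    }

    // parse on the producer side; queue a closure that carries its own arguments
    public boolean submit(String[] input) {
        Command cmd = commands.get(input[0]);
        if (cmd == null) return false; // unknown command
        String[] params = Arrays.copyOfRange(input, 1, input.length);
        queue.offer(() -> cmd.execute(params)); // each queued item owns its params
        return true;
    }
}
```

Because each queued Runnable captures its own params array, two successive pull inputs no longer share any mutable state.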
Throwing out a broad design question like this on SO is generally not a good idea. It's a bit surprising to see only a downvote and no close request.
In any case, without understanding your problem thoroughly, it's hard to say what the "best" design is; and even if I did understand it, I wouldn't call any single design the best. So I'll stick with my suggestion of using the Builder pattern.
In general, a builder pattern is used whenever the logic of construction is too complicated to fit in constructor, or it's necessarily divided into phases. In this case, if you want some extreme diversity of how your commands should look like depending on action, then you will want some builder like this:
interface CommandBuilder<T extends Command> {
void parseArgs(String[] args);
T build();
}
The generic here is optional; if you don't plan to use these builders beyond your current architecture you can drop it, but otherwise it's beneficial to be more precise in the types. parseArgs is responsible for the parsing you were referring to. build should spit out an instance of Command based on the current arguments.
Then you want your dispatcher map to look like this:
HashMap<String, Supplier<? extends CommandBuilder<? extends Command>>> builders = new HashMap<>();
builders.put("pull", () -> new PullBuilder());
builders.put("push", () -> new PushBuilder());
// etc
Any of these builders can potentially have extremely complicated logic, as you would desire. Then you can do
CommandBuilder builder = builders.get(args[0]).get();
builder.parseArgs(args);
queue.add(builder.build());
In this way, your Command interface can focus on what exactly it's supposed to do.
Notice that after your builders map is built, everything is wired statically and mutation is localized. I don't fully understand what your concern is, but it should be addressed by doing this.
However, it could be an overkill design, depending on what you want to do.
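Putting the pieces of this answer together, a runnable sketch might look like this (the pull <remote> syntax, the default remote, and the class names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface Command {
    void execute();
}

interface CommandBuilder<T extends Command> {
    void parseArgs(String[] args);
    T build();
}

// illustrative only: a pull command parsed from "pull <remote>"
class PullCommand implements Command {
    private final String remote;
    PullCommand(String remote) { this.remote = remote; }
    public void execute() { System.out.println("pulling from " + remote); }
    String remote() { return remote; }
}

class PullBuilder implements CommandBuilder<PullCommand> {
    private String remote = "origin"; // hypothetical default
    public void parseArgs(String[] args) {
        if (args.length > 1) remote = args[1];
    }
    public PullCommand build() { return new PullCommand(remote); }
}

public class BuilderDemo {
    public static void main(String[] argv) {
        Map<String, Supplier<CommandBuilder<? extends Command>>> builders = new HashMap<>();
        builders.put("pull", PullBuilder::new);

        String[] args = {"pull", "upstream"};
        CommandBuilder<? extends Command> b = builders.get(args[0]).get();
        b.parseArgs(args);
        Command cmd = b.build(); // a fresh, effectively immutable command per input line
        cmd.execute();
    }
}
```

Since build() returns a new object for every input line, the shared-instance problem from the question disappears.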
NOTE: This isn't specific to Minecraft Fabric. I'm just new to rigid pre-runtime optimization.
I'm writing an API hook for Minecraft mods that allows the mapping of various tasks to a Villager's "profession" attribute, allowing other mods to add custom tasks for custom professions. I have all of the backend code done, so now I'm worried about optimization.
I have an ImmutableMap.Builder<VillagerProfession, VillagerTask> that I'm using to store the other mods' added tasks. Problem is, while I know that the "put" method will never be called at runtime, I don't know if the compiler does. Obviously, since this is a game and startup times in modpacks are already long, I'd like to optimize this as much as possible, since it will be used by every mod that wishes to add a new villager task.
Here's my current source code for the "task registry":
private static final ImmutableMap.Builder<VillagerProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>>> professionToVillagerTaskBuilder = ImmutableMap.builder();
private static final ImmutableMap<VillagerProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>>> professionToVillagerTaskMap;
// The hook that any mods will use in their source code
public static void addVillagerTasks(VillagerProfession executingProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>> task)
{
professionToVillagerTaskBuilder.put(executingProfession, task);
}
//The tasklist retrieval method used at runtime
static ImmutableList<Pair<Task<? super VillagerEntity>, Integer>> getVillagerRandomTasks(VillagerProfession profession)
{
return professionToVillagerTaskMap.get(profession);
}
static { // probably not the correct way to do this, but it lets me mark the map as final
professionToVillagerTaskMap = professionToVillagerTaskBuilder.build();
}
Thanks!
The brief answer is: you can't do what you want to do.
Problem is, while I know that the "put" method will never be called at runtime, I don't know if the compiler does.
The put method has to be called at runtime for your mod to be useful. By the time your code is being loaded in a form that it can be executed -- that's runtime. It may be the setup phase for your mod, but it's running in a JVM.
If the source code doesn't contain the registry itself, then the compiler can't translate it to executable code; it can't optimize something it doesn't know exists. You (the developer) can't know what mods will be loading, hence the compiler can't know, hence it can't optimize or pre-calculate it. That's the price you pay for dynamic loading of code.
As for the code you put up: it won't work.
The static block is executed when the class is loaded. Think of it as a constructor for your class instead of the objects. By the time a mod can call any of its methods, the class has to be loaded, and its static blocks will already have been executed. Your map will be set and empty before any method is called from the outside. All tasks added will forever linger in the builder, unused, unseen, unloved.
Keep the builder. Let mods add their entries to it. Then, when all mod-loading is done and the game starts, call build() and use the result as a registry. (Use whichever 'game is starting' hook your modding framework provides.)
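A sketch of that lifecycle, with plain collections standing in for Guava's ImmutableMap.Builder (the class and method names are invented, not the Fabric API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of the "build once, at game start" registry idea.
public class TaskRegistry {
    private static final Map<String, String> builder = new HashMap<>();
    private static Map<String, String> registry; // frozen view, built later

    // called by mods during the loading phase
    public static void register(String profession, String task) {
        if (registry != null) throw new IllegalStateException("registry already frozen");
        builder.put(profession, task);
    }

    // called once from the 'game is starting' hook the mod framework provides
    public static void freeze() {
        registry = Collections.unmodifiableMap(new HashMap<>(builder));
    }

    // runtime lookup; only valid after freeze()
    public static String taskFor(String profession) {
        return registry.get(profession);
    }
}
```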
Please note: I am a Java developer with no working knowledge of Scala (sadly). I would ask that any code examples provided in the answer would be using Akka's Java API.
I am trying to use the Akka FSM API to model the following super-simple state machine. In reality, my machine is much more complicated, but the answer to this question will allow me to extrapolate to my actual FSM.
And so I have 2 states: Off and On. You can go from Off -> On by powering the machine on, by calling SomeObject#powerOn(<someArguments>). You can go from On -> Off by powering the machine off, by calling SomeObject#powerOff(<someArguments>).
I'm wondering what actors and supporting classes I'll need in order to implement this FSM. I believe the actor representing the FSM has to extend AbstractFSM. But what classes represent the 2 states? What code exposes and implements the powerOn(...) and powerOff(...) state transitions? A working Java example, or even just Java pseudo-code, would go a long way for me here.
I think we can do a bit better than copypasta from the FSM docs (http://doc.akka.io/docs/akka/snapshot/java/lambda-fsm.html). First, let's explore your use case a bit.
You have two triggers (or events, or signals): powerOn and powerOff. You would like to send these signals to an Actor and have it change state; the two meaningful states are On and Off.
Now, strictly speaking an FSM needs one additional component: an action you wish to take on transition.
FSM:
State (S) x Event (E) -> Action (A), State (S')
Read: "When in state S, if signal E is received, produce action A and advance to state S'"
You don't NEED an action, but an Actor cannot be directly inspected, nor directly modified. All mutation and acknowledgement occurs through asynchronous message passing.
In your example, which provides no action to perform on transition, you basically have a state machine that's a no-op: state transitions occur without side effects, and the state is invisible, so a working machine is identical to a broken one. And since this all occurs asynchronously, you don't even know when the broken thing has finished.
So allow me to expand your contract a little bit, and include the following actions in your FSM definitions:
When in Off, if powerOn is received, advance state to On and respond to the caller with the new state
When in On, if powerOff is received, advance state to Off and respond to the caller with the new state
Now we might be able to build an FSM that is actually testable.
Let's define a pair of classes for your two signals. (the AbstractFSM DSL expects to match on class):
public static class PowerOn {}
public static class PowerOff {}
Let's define a pair of enums for your two states:
enum LightswitchState { on, off }
Let's define an AbstractFSM Actor (http://doc.akka.io/japi/akka/2.3.8/akka/actor/AbstractFSM.html). Extending AbstractFSM allows us to define an actor using a chain of FSM definitions similar to those above, rather than defining message behavior directly in an onReceive() method. It provides a nice little DSL for these definitions, and (somewhat bizarrely) expects that the definitions be set up in an instance initializer.
A quick detour, though: AbstractFSM has two generics defined which are used to provide compile time type checking.
S is the base of State types we wish to use, and D is the base of Data types. If you're building an FSM that will hold and modify data (maybe a power meter for your light switch?), you would build a separate class to hold this data rather than trying to add new members to your subclass of AbstractFSM. Since we have no data, let's define a dummy class just so you can see how it gets passed around:
public static class NoDataItsJustALightswitch {}
And so, with this out of the way, we can build our actor class.
public class Lightswitch extends AbstractFSM<LightswitchState, NoDataItsJustALightswitch> {
{ //instance initializer (assumes a static import of LightswitchState.*)
startWith(off, new NoDataItsJustALightswitch()); //okay, we're saying that when a new Lightswitch is born, it'll be in the off state and have a new NoDataItsJustALightswitch() object as data
//our first FSM definition
when(off, //when in off,
matchEvent(PowerOn.class, //if we receive a PowerOn message,
NoDataItsJustALightswitch.class, //and have data of this type,
(powerOn, noData) -> //we'll handle it using this function:
goTo(on) //go to the on state,
.replying(on) //and reply to the sender that we went to the on state
)
);
//our second FSM definition
when(on,
matchEvent(PowerOff.class,
NoDataItsJustALightswitch.class,
(powerOff, noData) -> {
//here you can use a multiline function body,
//and use the contents of the event (powerOff) or data (noData) to make decisions, alter the data, etc.
return goTo(off)
.replying(off);
}
)
);
initialize(); //boilerplate
}
}
I'm sure you're wondering: how do I use this?! So let's make you a test harness using straight JUnit and the Akka Testkit for java:
public class LightswitchTest {
@Test public void testLightswitch() {
ActorSystem system = ActorSystem.create("lightswitchtest");//should make this static if you're going to test a lot of things, actor systems are a bit expensive
new JavaTestKit(system) {{ //there's that instance initializer again (the double-brace idiom)
ActorRef lightswitch = system.actorOf(Props.create(Lightswitch.class)); //here is our lightswitch. It's an actor ref, a reference to an actor that will be created on
//our behalf of type Lightswitch. We can't, as mentioned earlier, actually touch the instance
//of Lightswitch, but we can send messages to it via this reference.
lightswitch.tell( //using the reference to our actor, tell it
new PowerOn(), //to "Power On," using our message type
getRef()); //and giving it an actor to call back (in this case, the JavaTestKit itself)
//because it is asynchronous, the tell will return immediately. Somewhere off in the distance, on another thread, our lightbulb is receiving its message
expectMsgEquals(LightswitchState.on); //we block until the lightbulb sends us back a message with its current state ("on.")
//If our actor is broken, this call will timeout and fail.
lightswitch.tell(new PowerOff(), getRef());
expectMsgEquals(LightswitchState.off);
system.stop(lightswitch); //switch works, kill the instance, leave the system up for further use
}};
}
}
And there you are: an FSM lightswitch. Honestly, though, an example this trivial doesn't really show the power of FSMs: a data-free example like this can be implemented as a set of "become/unbecome" behaviors in about half as many LoC, with no generics or lambdas, which is much more readable IMO.
PS consider learning Scala, if only to be able to read other peoples' code! The first half of the book Atomic Scala is available free online.
PPS if all you really want is a composable state machine, I maintain Pulleys, a state machine engine based on statecharts in pure java. It's getting on in years (lot of XML and old patterns, no DI integration) but if you really want to decouple the implementation of a state machine from inputs and outputs there may be some inspiration there.
I know about Actors in Scala.
This Java starter code may help you get going:
Yes, extend your SimpleFSM from AbstractFSM.
The State is an enum in the AbstractFSM.
Your <someArguments> can be the Data part of your AbstractFSM.
Your powerOn and powerOff are Actor messages/events.
And the state switching is in the transitions part:
// states
enum State {
Off, On
}
// data (a marker interface plus an "empty" value)
interface Data {}
enum Uninitialized implements Data {
Uninitialized
}
public class SimpleFSM extends AbstractFSM<State, Data> {
{ // fsm body (instance initializer)
startWith(Off, Uninitialized);
// transitions
when(Off,
matchEvent(... .class ...,
(... variable names ...) ->
goTo(On).using(...))); // powerOn(<someArguments>)
when(On,
matchEvent(... .class ...,
(... variable names ...) ->
goTo(Off).using(...))); // powerOff(<someArguments>)
initialize();
}
}
For a real working project, see:
Scala and Java 8 with Lambda Template for an Akka AbstractFSM
Well, this is a really old question, but if you got here as a hit from Google and are still interested in implementing an FSM with Akka, I suggest looking at this part of the documentation.
If you want to see a practical, model-driven state machine implementation, you can check my blog posts: blog1, blog2.
I am building a game simulator that has hundreds of micro steps like the following. They each perform a unique task, but I left out the implementation details for the sake of brevity.
public class Sim {
static void phase() {
phaseIn();
phaseOut();
}
static void untap() {
}
static void upkeep() {
}
static void draw() {
}
...
}
A Turn usually involves executing steps sequentially, but there are times when a special effect may cause the sequence to change. For example, I may be required to repeat a step twice, skip a step, or rearrange the order of the steps. These actions are all special cases, as the turn typically just occurs in order from start to finish.
For example, the following series of events represents my normal turn.
... > upkeep() > draw() > preCombatMain() > ...
Now, I play something that requires me to repeat my draw step. I need my turn to look like this:
... > upkeep() > draw() > draw() > preCombatMain() > ...
The steps of a turn are methods, and I do not know how to enqueue or dequeue methods. I know that Java 8 has method references, but the feature is relatively new. I have been unable to apply existing tutorials to what I am trying to accomplish. I got as far as Sim::untap, but I have no idea how to assign it, invoke it, etc. How do I queue methods in Java 8, or otherwise call methods in an order determined at run-time by the choices a player makes?
Note: I realize that my inability to understand may be due to a fundamental design flaw. I have never taken a game design course, I am completely open to criticism, and I will change my design if it is flawed. That said, the question is not to be misconstrued as "Please recommend a design pattern." I considered an alternate design, where I "number" each step in a massive switch statement, queue "numbers", and repeatedly switch on the front of the queue, but that seemed like a poor plan (in my opinion).
If you simply want them to run sequentially, you can of course call them one after the other. If the order can change, an alternative is to use a queue of method references:
LinkedList<Runnable> queue = new LinkedList<>();
queue.add(Sim::upkeep);
queue.add(Sim::draw);
queue.add(Sim::preCombatMain);
queue.forEach(Runnable::run);
I was able to use a LinkedList<Runnable> because the signature of your methods is void m(). For other signatures you can use other types, for example:
void m() use Runnable
T m() use Supplier<T>
void m(T o) use Consumer<T>
R m(T o) use Function<T, R>
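Applied to the question's repeat-the-draw-step example, a mutable deque of method references might look like this (the trigger condition and logging are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class TurnDemo {
    static final List<String> log = new ArrayList<>(); // records execution order

    static void upkeep() { log.add("upkeep"); }
    static void draw()   { log.add("draw"); }
    static void preCombatMain() { log.add("preCombatMain"); }

    public static void main(String[] argv) {
        log.clear();
        Deque<Runnable> turn = new ArrayDeque<>();
        turn.add(TurnDemo::upkeep);
        turn.add(TurnDemo::draw);
        turn.add(TurnDemo::preCombatMain);

        while (!turn.isEmpty()) {
            Runnable step = turn.poll();
            step.run();
            // a special effect can mutate the queue mid-turn, e.g. a card that
            // says "repeat your draw step" schedules draw() to run again next:
            if (log.size() == 2 && log.get(1).equals("draw")) {
                turn.addFirst(TurnDemo::draw);
            }
        }
        System.out.println(log); // [upkeep, draw, draw, preCombatMain]
    }
}
```

Skipping a step is just removing it before it is polled, and reordering is reordering the deque; the turn structure becomes data instead of hard-coded calls.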
The solution is to use polymorphism. Define an interface for the step:
interface Step {
void process();
}
Then define each step by implementing it:
class UpkeepStep implements Step {
public void process() { ... }
}
Now you can put all your steps in an array, shuffle it, if needed, and execute all steps, like this:
for (Step step : steps) {
step.process();
}
An alternative approach that may run faster is to generate code that contains the method calls, compile it, and load the class. However, it only gives you better performance if each step takes little runtime compared to the method-call overhead, and if you execute each generated piece of code a lot, so that the JIT will optimize it.
I'm looking for a Java pattern for making a nested sequence of non-blocking method calls. In my case, some client code needs to asynchronously invoke a service to perform some use case, and each step of that use case must itself be performed asynchronously (for reasons outside the scope of this question). Imagine I have existing interfaces as follows:
public interface Request {}
public interface Response {}
public interface Callback<R extends Response> {
void onSuccess(R response);
void onError(Exception e);
}
There are various paired implementations of the Request and Response interfaces, namely RequestA + ResponseA (given by the client), RequestB + ResponseB (used internally by the service), etc.
The processing flow looks like this:
In between the receipt of each response and the sending of the next request, some additional processing needs to happen (e.g. based on values in any of the previous requests or responses).
So far I've tried two approaches to coding this in Java:
anonymous classes: gets ugly quickly because of the required nesting
inner classes: neater than the above, but still hard for another developer to comprehend the flow of execution
Is there some pattern to make this code more readable? For example, could I express the service method as a list of self-contained operations that are executed in sequence by some framework class that takes care of the nesting?
Since the implementation (not only the interface) must not block, I like your list idea.
Set up a list of "operations" (perhaps Futures?), for which the setup should be pretty clear and readable. Then upon receiving each response, the next operation should be invoked.
With a little imagination, this sounds like the chain of responsibility. Here's some pseudocode for what I'm imagining:
public void setup() {
this.operations.add(new Operation(new RequestA(), new CallbackA()));
this.operations.add(new Operation(new RequestB(), new CallbackB()));
this.operations.add(new Operation(new RequestC(), new CallbackC()));
this.operations.add(new Operation(new RequestD(), new CallbackD()));
startNextOperation();
}
private void startNextOperation() {
if ( this.operations.isEmpty() ) { reportAllOperationsComplete(); return; }
Operation op = this.operations.remove(0);
op.request.go( op.callback );
}
private class CallbackA implements Callback<ResponseA> {
public void onSuccess(ResponseA response) {
// store response? etc?
startNextOperation();
}
public void onError(Exception e) { /* report the failure and stop the sequence */ }
}
...
In my opinion, the most natural way to model this kind of problem is with Future<V>.
So instead of using a callback, just return a "thunk": a Future<Response> that represents the response that will be available at some point in the future.
Then you can either model subsequent steps as things like Future<ResponseB> step2(Future<ResponseA>), or use ListenableFuture<V> from Guava. Then you can use Futures.transform() or one of its overloads to chain your functions in a natural way, but while still preserving the asynchronous nature.
If used in this way, Future<V> behaves like a monad (in fact, I think it may qualify as one, although I'm not sure off the top of my head), and so the whole process feels a bit like IO in Haskell as performed via the IO monad.
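For what it's worth, since Java 8 the standard library's CompletableFuture supports this kind of chaining directly; a minimal sketch with invented service steps standing in for the request/response pairs above:

```java
import java.util.concurrent.CompletableFuture;

public class ChainDemo {
    // each step is non-blocking and returns a future of the next response;
    // the string payloads are stand-ins for ResponseA, ResponseB, etc.
    static CompletableFuture<String> callServiceB(String responseA) {
        return CompletableFuture.supplyAsync(() -> responseA + " -> B");
    }

    static CompletableFuture<String> callServiceC(String responseB) {
        return CompletableFuture.supplyAsync(() -> responseB + " -> C");
    }

    public static void main(String[] argv) {
        CompletableFuture<String> result =
            CompletableFuture.supplyAsync(() -> "A")   // initial request
                .thenCompose(ChainDemo::callServiceB)  // sequence steps without nesting
                .thenCompose(ChainDemo::callServiceC);
        System.out.println(result.join()); // A -> B -> C
    }
}
```

thenCompose plays the role of monadic bind here: the flow reads top-to-bottom while every step remains asynchronous.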
You can use the actor computing model. In your case, the client, the services, and the callbacks B-D can all be represented as actors.
There are many actor libraries for Java. Most of them, however, are heavyweight, so I wrote a compact and extendable one: df4j. It treats the actor model as a special case of the more general dataflow computing model and, as a result, allows the user to create new types of actors that optimally fit their requirements.
I am not sure if I understand your question correctly. If you want to invoke a service and, on its completion, have the result passed to another object that can continue processing with it, you can look at using Composite and Observer to achieve this.
I have an application that consists of two processes, one client process with a (SWT-based) GUI and one server process. The client process is very lightweight, which means a lot of GUI operations will have to query the server process or request it to something, for example in response to the user clicking a button or choosing a menu item. This means that there will be a lot of event handlers that looks like this:
// Method invoked e.g. in response to the user choosing a menu item
void execute(Event event) {
// This code is executed on the client, and now we need some info off the server:
server.execute(new RemoteRequest() {
public void run() {
// This code is executed on the server, and we need to update the client
// GUI with current progress
final Result result = doSomeProcessing();
client.execute(new RemoteRequest() {
public void run() {
// This code is again executed on the client
updateUi(result);
}
});
}
});
}
However, since server.execute implies serialization (it is executed on a remote machine), this pattern is not possible without making the whole class serializable, because the RemoteRequest inner classes are not static. (Just to be clear: it is not a requirement that the Request implementation can access the parent instance; for the purposes of the application they could be static.)
Of course, one solution is to create separate (possibly static inner) classes for the Request and Response, but this hurts readability and makes it harder to understand the execution flow.
I have tried to find a standard pattern for solving this problem, but I have not found anything that addresses my concern about readability.
To be clear, there will be a lot of these operations, and the operations are often quite short. Note that Future objects are not entirely useful here, since in many cases one request to the server will need to do multiple (often varying) things on the client, and it is not always a result being returned.
Ideally, I would like to be able to write code like this: (obvious pseudo-code now, please disregard the obvious errors in details)
String personName = nameField.getText();
async exec on server {
String personAddress = database.find(personName);
async exec on client {
addressField.setText(personAddress);
}
Order[] orders = database.searchOrderHistory(personName);
async exec on client {
orderListViewer.setInput(orders);
}
}
Now I want to be clear, that the underlying architecture is in place and works well, the reason this solution is there is of course not the above example. The only thing I am looking for is a way to write code like the above, without having to define static classes for each process transition. I hope that I did not just complicate things by giving this example...
My preference is to use Command Pattern and generic AsynchronousCallbacks. This kind of approach is used in GWT for communicating with the server, for example. Commands are Serializable, AsyncCallback is an interface.
Something along these lines:
// from the client
server.execute(new GetResultCommand(args), new AsyncCallback<Result>()
{
public void onSuccess(Result result) {
updateUi(); // happens on the client
}
});
The server then needs to receive a command, process it and issue an appropriate response with Result.
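A minimal single-process sketch of that command-plus-callback idea (all names are invented; a real implementation would serialize the command and send it over the wire):

```java
// The callback runs on the client once the server has produced a result.
interface AsyncCallback<T> {
    void onSuccess(T result);
    void onError(Exception e);
}

// The command travels to the server, so it must be serializable.
interface ServerCommand<T> extends java.io.Serializable {
    T process(); // runs on the server
}

class Server {
    // in reality this would go over the network; here it just runs the command
    <T> void execute(ServerCommand<T> command, AsyncCallback<T> callback) {
        try {
            callback.onSuccess(command.process());
        } catch (Exception e) {
            callback.onError(e);
        }
    }
}

public class CallbackDemo {
    public static void main(String[] argv) {
        Server server = new Server();
        server.execute((ServerCommand<String>) () -> "address of Bob",
            new AsyncCallback<String>() {
                public void onSuccess(String result) {
                    System.out.println("updateUi(" + result + ")"); // on the client
                }
                public void onError(Exception e) { e.printStackTrace(); }
            });
    }
}
```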
I ran into a similar problem the other day.
There is a solution which makes use of anonymous classes (and thus does not require you to define static inner classes), yet makes those anonymous classes be static (and thus not reference the outer object).
Simply define the anonymous classes in a static method, like this:
void execute(Event event) {
static_execute(server, client, event);
}
// An anonymous class is static if it is defined in a static method. Let's use that.
static void static_execute(final TheServer server, final TheClient client, final Event event) {
server.execute(new RemoteRequest() {
public void run() {
final Result result = doSomeProcessing();
// See note below!
client.execute(new RemoteRequest() {
public void run() {
updateUi(result);
}
});
}
});
}
The main advantage of this approach, compared to using named static inner classes, is probably that you avoid having to define fields and constructors for those classes.
-- Come to think of it, the same trick probably needs to be applied once more, for the server->client direction. I'll leave this as an exercise for the reader :-)
FIRST: without a pattern, if I may suggest, you can make a separate class to handle all the requests. Just pass the instance of each event-generated object to that class and delegate the event request to other classes. Delegation leads to a much clearer approach; you only need instanceof checks before delegating further, and every event can be handled concisely in a separate place.
Along with the above approach: yes, the COMMAND PATTERN is definitely a good option for logging requests. But since you receive an event state for every request raised, you could also try the STATE PATTERN, as it allows an object to alter its behaviour when its state changes.
I actually ended up solving this by creating a base class with a custom serializer that takes care of this. I still hope this is eventually solved at the language level.