In the reference book "Design Patterns: Elements of Reusable Object-Oriented Software" by the Gang of Four, the intent of the visitor pattern is explained as follows:
Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.
Another advantage I read about the visitor pattern is that it lets you:
ADD A NEW OPERATION WITHOUT HAVING THE SOURCE CODE OF THE CLASSES.
I searched Google thoroughly, but I did not find any example showing how to do that.
So let's take a simple example :
public interface MyInterface {
public void myMethod();
}
public class MyClassA implements MyInterface {
/* (non-Javadoc)
* @see com.mycomp.tutorials.designpattern.behavorials.MyInterface#myMethod()
*/
public void myMethod() {
System.out.println("myMethodA implemented in MyClassA");
}
}
public class MyClassB implements MyInterface {
/* (non-Javadoc)
* @see com.mycomp.tutorials.designpattern.behavorials.MyInterface#myMethod()
*/
public void myMethod() {
System.out.println("myMethod implemented in MyClassB");
}
}
So how would I add a new method myNewMethod() to this hierarchy of classes without changing them, using the visitor pattern?
Your example is not a visitor pattern. It is just inheritance.
A visitor pattern first requires a visitor interface:
interface ThingVisitor {
void visit(ThingA a);
void visit(ThingB b);
}
Now you need an interface Thing:
interface Thing {
void accept(ThingVisitor visitor);
}
And your implementation of, for example, ThingA would be
class ThingA implements Thing {
public void accept(final ThingVisitor visitor) {
visitor.visit(this);
}
}
Now you see the logic to handle the Thing types is contained in the implementations of ThingVisitor.
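For illustration (this visitor is made up here, not part of the original question), a concrete visitor adding a brand-new operation without touching ThingA or ThingB could look like this:
// Hypothetical new operation: printing a description of each Thing.
class PrintThingVisitor implements ThingVisitor {
    public void visit(ThingA a) {
        System.out.println("Visiting a ThingA");
    }
    public void visit(ThingB b) {
        System.out.println("Visiting a ThingB");
    }
}
Calling thing.accept(new PrintThingVisitor()) then runs the new operation, and neither ThingA nor ThingB had to change.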
Let's say you have a Message class and two subclasses, Email and Sms.
You could have many operations on these two classes, like sendToOnePerson() and sendToSeveralPeople(). But you probably don't want these methods in the Email and Sms classes directly, because that tightly couples them to the SMTP/phone system. And you would also like to be able to add other operations in the future, like forward() or delete(), or whatever. So the first implementation you could use is:
public void delete(Message message) {
if (message instanceof Email) {
deleteEmail((Email) message);
}
else if (message instanceof Sms) {
deleteSms((Sms) message);
}
}
But this is ugly: it's not object-oriented, and it will break as soon as a new VoiceMessage subclass appears.
An alternative is to use the visitor pattern.
public interface MessageVisitor {
void visitEmail(Email email);
void visitSms(Sms sms);
}
public abstract class Message {
public abstract void accept(MessageVisitor visitor);
}
public class Email extends Message {
@Override
public void accept(MessageVisitor visitor) {
visitor.visitEmail(this);
}
}
public class Sms extends Message {
@Override
public void accept(MessageVisitor visitor) {
visitor.visitSms(this);
}
}
This way, to implement send(), all you need is a MessageVisitor implementation that can send an email and send an Sms:
SendMessageVisitor visitor = new SendMessageVisitor();
message.accept(visitor);
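A minimal sketch of what SendMessageVisitor could look like (the sending calls below are placeholders, not a real SMTP or SMS API):
public class SendMessageVisitor implements MessageVisitor {
    @Override
    public void visitEmail(Email email) {
        // placeholder: hand the email off to the SMTP client here
        System.out.println("Sending email...");
    }
    @Override
    public void visitSms(Sms sms) {
        // placeholder: hand the SMS off to the phone gateway here
        System.out.println("Sending SMS...");
    }
}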
And if you introduce a new delete() operation, you don't have to touch the Message classes at all. All you need is a DeleteMessageVisitor:
DeleteMessageVisitor visitor = new DeleteMessageVisitor();
message.accept(visitor);
So, basically, it's as if you added polymorphic methods to the Message classes without actually modifying them.
The visitor pattern assumes that the classes you want to "visit" have a method which accepts and executes the visitor; here is an example. The pattern is not motivated by adding functionality to foreign classes, but by localizing functionality in the visitors that would otherwise be spread over several classes, e.g. for saving elements (see the example).
Quick description of the visitor pattern.
The classes that require modification must all implement the accept method. Clients call this accept method to perform some new action on that family of classes, thereby extending their functionality. Clients can use this one accept method to perform a wide range of new actions by passing in a different visitor class for each specific action. A visitor class contains multiple visit methods, one per class within the family, each defining how to achieve the same specific action for that class. These visit methods are passed an instance on which to work.
Related
I have a service which needs to handle two types of meals.
@Service
class MealService {
private final List<MealStrategy> strategies;
MealService(…) {
this.strategies = strategies;
}
void handle() {
var foo = …;
var bar = …;
strategies.forEach(s -> s.remove(foo, bar));
}
}
There are two strategies, BurgerStrategy and PastaStrategy. Both implement the MealStrategy interface, which has one method called remove that takes two parameters.
The BurgerStrategy class retrieves meals of enum type burger from the database, iterates over them, and performs some operations. The PastaStrategy does something similar.
The question is, does it make sense to call it Strategy and implement it this way or not?
Also, how to handle duplications of the code in those two services, let’s say both share the same private methods. Does it make sense to create a Helper class or something?
does it make sense to call it Strategy and implement it this way or not
I think these classes BurgerStrategy and PastaStrategy have common behaviour. The Strategy pattern is used when you want to inject one strategy and use it. However, you are iterating through all the strategies; you do not pick one strategy and stick with it. So, in my honest opinion, it is better to avoid the word Strategy here.
So the strategy pattern would look like this. I am sorry, I am not a Java guy, so let me show it in C#; I've provided comments on how the code could look in Java.
This is our abstraction of strategy:
public interface ISoundBehaviour
{
void Make();
}
and its concrete implementation:
public class DogSound : ISoundBehaviour // implements in Java
{
public void Make()
{
Console.WriteLine("Woof");
}
}
public class CatSound : ISoundBehaviour
{
public void Make()
{
Console.WriteLine("Meow");
}
}
And then we stick with one behaviour that can also be replaced:
public class Dog
{
ISoundBehaviour _soundBehaviour;
public Dog(ISoundBehaviour soundBehaviour)
{
_soundBehaviour = soundBehaviour;
}
public void Bark()
{
_soundBehaviour.Make();
}
public void SetAnotherSound(ISoundBehaviour anotherSoundBehaviour)
{
_soundBehaviour = anotherSoundBehaviour;
}
}
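Since the answer promises Java hints in the comments, here is a rough Java translation of the same strategy sketch (my own, not part of the original answer):
public interface SoundBehaviour {
    void make();
}

public class DogSound implements SoundBehaviour {
    @Override
    public void make() {
        System.out.println("Woof");
    }
}

public class Dog {
    private SoundBehaviour soundBehaviour;

    public Dog(SoundBehaviour soundBehaviour) {
        this.soundBehaviour = soundBehaviour;
    }

    public void bark() {
        soundBehaviour.make(); // delegate to the injected strategy
    }

    public void setAnotherSound(SoundBehaviour anotherSoundBehaviour) {
        this.soundBehaviour = anotherSoundBehaviour;
    }
}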
how to handle duplications of the code in those two services, let’s say both share the same private methods.
You can create one abstract base class. The basic idea is to put the common logic into that shared base class, and to declare an abstract method in it for the part that differs. Why? By doing this, each subclass provides the specific logic for its own concrete case. Let me show an example.
An abstract class which has common behaviour:
public abstract class BaseMeal
{
// I am not a Java guy, but if I am not mistaken, in Java,
// if you do not want a method to be overridden, you should use the `final` keyword
public void CommonBehaviourHere()
{
// put here code that can be shared among subclasses to avoid code duplication
}
public abstract void UnCommonBehaviourShouldBeImplementedBySubclass();
}
And its concrete implementations:
public class BurgerSubclass : BaseMeal // extends in Java
{
public override void UnCommonBehaviourShouldBeImplementedBySubclass()
{
throw new NotImplementedException();
}
}
public class PastaSubclass : BaseMeal // extends in Java
{
public override void UnCommonBehaviourShouldBeImplementedBySubclass()
{
throw new NotImplementedException();
}
}
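For readers following along in Java, a hedged translation of the same abstract-base-class sketch could look like this (method and class names copied from the C# version):
public abstract class BaseMeal {

    // Shared logic lives once here; final prevents subclasses from overriding it.
    public final void commonBehaviourHere() {
        // put code shared by the subclasses here to avoid duplication
    }

    // Each subclass must provide its own version of the varying part.
    public abstract void unCommonBehaviourShouldBeImplementedBySubclass();
}

public class BurgerSubclass extends BaseMeal {
    @Override
    public void unCommonBehaviourShouldBeImplementedBySubclass() {
        throw new UnsupportedOperationException("not implemented yet");
    }
}

public class PastaSubclass extends BaseMeal {
    @Override
    public void unCommonBehaviourShouldBeImplementedBySubclass() {
        throw new UnsupportedOperationException("not implemented yet");
    }
}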
I was wondering if it's frowned upon, when designing a framework to be used by others, for a class to provide some function as default behavior and expect its customers to override it if necessary. An example would be something like the following:
public class RecordProcessor<T extends Record> {
// ...
public void process() {
// process record logic
}
}
Consumers of this library create their own concrete classes to process their records of type T.
Now I want to add a function called preProcess() to offer the ability for the consumers to preprocess their records. It would then look something like this:
public class RecordProcessor<T extends Record> {
// ...
public void process() {
preProcess();
// process record logic
}
public void preProcess() {
// By default no preprocessing
}
}
I know I could make preProcess an abstract method, but I don't want to, for a couple of reasons:
Not all customers need to preprocess their records
We have a pipeline structure that autodeploys pushed code, so making RecordProcessor an abstract class would immediately break our customers' applications.
Is making preProcess do nothing in the parent class and letting child classes override it considered bad practice? If not, what is the best way to let customers know that they now have the power to preprocess the records? Through Javadocs?
One approach is to mark the public method as final (but this might also break existing apps) and allow protected hook methods to be overridden. For example:
public class RecordProcessor<T extends Record> {
// ...
public final void process() {
doPreProcess();
doProcess();
doPostProcess();
}
protected void doPreProcess() {
// By default no preprocessing
return;
}
protected void doProcess() {
// some default implementation
}
protected void doPostProcess() {
// By default no postprocessing
return;
}
}
Having some documentation should make it natural for other developers to recognize the optional extension methods.
I don't see anything wrong with having a hook method which does nothing. However, it should contain a return statement so static analysis tools won't complain.
UPDATE: in order to avoid breaking existing apps, if possible mark the existing method as deprecated and introduce a new method. For example:
public class RecordProcessor<T extends Record> {
// ...
public final void execute() {
doPreProcess();
doProcess();
doPostProcess();
}
@Deprecated // use execute() method instead
public void process() {
doProcess();
}
protected void doPreProcess() {
// By default no preprocessing
return;
}
protected void doProcess() {
// some default implementation
}
protected void doPostProcess() {
// By default no postprocessing
return;
}
}
Prefer composition over inheritance. If you want your clients to add custom pre-processing, let them do it by delegating to a separate object.
public interface RecordPreProcessor<T extends Record>{
public void process(T record);
}
public class RecordProcessor<T extends Record> {
private RecordPreProcessor<T> recordPreProcessor = null;
public void setRecordPreProcessor(RecordPreProcessor<T> recordPreProcessor) {
this.recordPreProcessor = recordPreProcessor;
}
public void process() {
if (recordPreProcessor != null) recordPreProcessor.process(record); // 'record' stands for the record being processed (elided here)
// process record logic
}
}
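A client could then plug in its own pre-processing without subclassing, for example (MyRecord is a hypothetical Record subtype used only for illustration, and Java 8+ is assumed so the pre-processor can be a lambda):
RecordProcessor<MyRecord> processor = new RecordProcessor<>();
processor.setRecordPreProcessor(record -> {
    // custom pre-processing for each record goes here
});
processor.process();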
No, overriding is not discouraged in Java.
The language allows overriding.
The language makes all methods overridable by default.
The Java class library includes examples of the same pattern.
Your approach is one reasonable way to allow subclasses to extend the behavior of their parent class. There are alternatives, such as passing a behavior as an object. However, there is no one true way.
One way you could improve your code is to mark preProcess() as protected. It's an implementation detail of the class. You don't want just anyone holding a RecordProcessor to decide they can call preProcess() by itself, right?
public class RecordProcessor<T extends Record> {
...
protected void preProcess() {
^^^^^^^^^
// By default no preprocessing
}
}
Another way to improve this is to consider whether you intend anyone to create an instance of the superclass RecordProcessor. If you don't, make the class abstract to prevent that. The class name can express that too, if you like or if your coding guidelines call for it.
public abstract class AbstractRecordProcessor<T extends Record> {
^^^^^^^^ ^^^^^^^^
...
protected void preProcess() {
// By default no preprocessing
}
}
One common way to document such methods is with the phrase "The default implementation does nothing. Subclasses may override this method ...". For example, below is the documentation for java.util.concurrent.FutureTask.done(). You can find more examples by searching for the first sentence of that phrase online.
public class FutureTask<V> implements RunnableFuture<V> {
...
/**
* Protected method invoked when this task transitions to state
* {@code isDone} (whether normally or via cancellation). The
* default implementation does nothing. Subclasses may override
* this method to invoke completion callbacks or perform
* bookkeeping. Note that you can query status inside the
* implementation of this method to determine whether this task
* has been cancelled.
*/
protected void done() { }
}
What I ended up doing, which I also thought was pretty good, inspired by @tsolakp, was simply creating a child class of RecordProcessor called something like PreprocessRecordProcessor. This has no way of interfering with existing code, because nothing existing was touched. The class would look something like this:
public abstract class PreprocessRecordProcessor<T extends Record> extends RecordProcessor<T> {
// ...
@Override
public void process() {
preProcess();
super.process();
}
protected abstract void preProcess();
}
And if customers of this library would like to add their own logic, they can simply extend this class, and they'd be forced to provide the pre-processing logic (as opposed to merely having the option to provide it, which could lead to unexpected results if they forgot to).
I have an interface called Section and a MapSection interface which extends Section. I have a list of Sections, and if an element is a MapSection I need to do some additional processing. I can think of two ways to handle this. I could add a boolean isAMapSection() to the Section interface, but that leads to a lot of isA... methods if I add more types. The other way I can think of is an instanceof check, but my OOP senses tell me this is not great either.
curSection instanceof MapSection
which one of these is the right way? or is there another way?
As mentioned in Oliver Charlesworth's comment above, you could use the Visitor design pattern to let your code perform different actions depending on the type involved, without resorting to a bunch of instanceof checks or class comparisons.
For example, say you have two similar interfaces, Section and MapSection, where, for grins, we'll give MapSection one additional method:
interface Section {
void someMethod();
void accept(SectionVisitor visitor);
}
interface MapSection extends Section {
void additionalProcessingMethod();
}
We'll also give Section the accept(...) method to allow action by a Visitor of type SectionVisitor whose interface looks like:
interface SectionVisitor {
void visit(Section section);
void visit(MapSection mapSection);
}
The visit method will hold code that knows which methods to call depending on the type passed into it.
A very simple concrete example could look like:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
public class VisitorTest {
public static void main(String[] args) {
Random random = new Random();
List<Section> sectionList = new ArrayList<>();
for (int i = 0; i < 10; i++) {
Section section = random.nextBoolean() ? new ConcreteSection() : new ConcreteMapSection();
sectionList.add(section);
}
SectionVisitor visitor = new ConcreteSectionVisitor();
for (Section section : sectionList) {
section.accept(visitor);
}
}
}
interface Section {
void someMethod();
void accept(SectionVisitor visitor);
}
interface MapSection extends Section {
void additionalProcessingMethod();
}
interface SectionVisitor {
void visit(Section section);
void visit(MapSection mapSection);
}
class ConcreteSection implements Section {
@Override
public void someMethod() {
System.out.println("someMethod in ConcreteSection");
}
@Override
public void accept(SectionVisitor visitor) {
visitor.visit(this);
}
}
class ConcreteMapSection implements MapSection {
@Override
public void someMethod() {
System.out.println("someMethod in ConcreteMapSection");
}
@Override
public void additionalProcessingMethod() {
System.out.println("additionalProcessingMethod in ConcreteMapSection");
}
@Override
public void accept(SectionVisitor visitor) {
visitor.visit(this);
}
}
class ConcreteSectionVisitor implements SectionVisitor {
@Override
public void visit(Section section) {
section.someMethod();
}
@Override
public void visit(MapSection mapSection) {
mapSection.someMethod();
mapSection.additionalProcessingMethod();
}
}
The best way might be to add a method additionalProcessing() to Section. Implement this method to do your additional processing in MapSection, and leave it blank in your other implementations.
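A minimal sketch of that idea (class names below are invented for illustration):
interface Section {
    void someMethod();
    void additionalProcessing();
}

class PlainSection implements Section {
    public void someMethod() { /* ... */ }

    public void additionalProcessing() {
        // intentionally blank: plain sections have nothing extra to do
    }
}

class MapSectionImpl implements Section {
    public void someMethod() { /* ... */ }

    public void additionalProcessing() {
        // the extra work that only map sections need
    }
}
The client then simply calls section.additionalProcessing() on every element, with no instanceof checks.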
Sometimes it's fine to have an isXXX method (and the corresponding asXXX method is nice too), but it really depends on how open-ended your object hierarchy is.
For example in StAX the XMLEvent interface will have descendants that represent the different types of events that can come from an XML document. But the list of those types is closed (no-one's going to radically change the XML format any time soon) and very short (there are about 10 different types of events in the StAX API), so it's fine. These interfaces also define the primary nature of their implementations, you wouldn't realistically just tag an object with an XMLEvent interface like you do with Serializable or Iterable.
If your interface is more "behavioural" (for want of a better word), more optional (like Comparable) or too open-ended (like LayoutManager), things like the visitor or the strategy pattern may be more appropriate.
Judging just by the names Section and MapSection, your model seems to belong to the first category but really only you can make that decision. What I definitely wouldn't do is leave it to the client of the code to fool around with instanceof calls. One way or another the solution should be part of Section.
What would be the preferred Java interface, or similar pattern, to use as a generic callback mechanism?
For example it could be something like
public interface GenericCallback
{
public String getID();
public void callback(Object notification);
// or public void callback(String id, Object notification);
}
The ID would be needed for cases of overridden hashCode() methods, so that the callee can identify the caller.
A pattern like the above is useful for objects that need to report a condition (e.g., end of processing) back to the class they were spawned from.
In this scenario, the "parent" class would use the getID() method of each of these GenericCallback objects to keep track of them in a Map<String, GenericCallback>, and add or remove them according to the notifications received.
Also, how should such an interface be actually named?
Many people seem to prefer the Java Observer pattern, but the Observable class defined there is not convenient: it is not an interface (so it cannot work around single inheritance), and it carries more functionality than is actually needed in the simple scenario above.
I would genericize the callback based upon the type of object passed. This is particularly useful for event listeners listening for different classes of events, e.g.:
public interface Callback<T> {
public void callback(T t);
}
You may be able to use the type T as the key in a Map. Of course, if you want to differentiate between two callbacks that take the same argument type, like a String, then you'd need something like your getID().
Here's my old blog post about using this for event listeners. The interface Events.Listener corresponds to Callback<T> above, and Broadcasters uses a Map to keep track of multiple listeners based upon the class they accept as the argument.
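A rough sketch of that idea (my own illustration, not the blog's actual code): a broadcaster that maps each event class to the callbacks registered for it.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Broadcaster {
    // event class -> callbacks registered for that class
    private final Map<Class<?>, List<Callback<?>>> callbacks = new HashMap<>();

    public <T> void register(Class<T> type, Callback<T> callback) {
        callbacks.computeIfAbsent(type, k -> new ArrayList<>()).add(callback);
    }

    @SuppressWarnings("unchecked")
    public <T> void broadcast(T event) {
        List<Callback<?>> registered = callbacks.get(event.getClass());
        if (registered == null) {
            return;
        }
        for (Callback<?> cb : registered) {
            ((Callback<T>) cb).callback(event); // safe because register() pairs each type with its callback
        }
    }
}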
I'd recommend using the Observer pattern, since it is the gold standard in decoupling, i.e. the separation of objects that depend on each other.
But I'd recommend avoiding the java.util.Observable class if you are looking for a generic callback mechanism, because Observable has a couple of weaknesses: it's not an interface, and it forces you to use Object to represent events.
You can define your own event listener like this:
import java.util.EventObject;

public class MyEvent extends EventObject {
public MyEvent(Object source) {
super(source);
}
}
public interface MyEventListener {
void handleEvent(EventObject event);
}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class MyEventSource {
private final List<MyEventListener> listeners;
public MyEventSource() {
listeners = new CopyOnWriteArrayList<MyEventListener>();
}
public void addMyEventListener(MyEventListener listener) {
listeners.add(listener);
}
public void removeMyEventListener(MyEventListener listener) {
listeners.remove(listener);
}
void fireEvent() {
MyEvent event = new MyEvent(this);
for (MyEventListener listener : listeners) {
listener.handleEvent(event);
}
}
}
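Usage could look like this (assuming Java 8+ so the listener can be written as a lambda):
MyEventSource source = new MyEventSource();
source.addMyEventListener(event -> System.out.println("Got event from " + event.getSource()));
source.fireEvent();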
It looks like you want to implement the Observer pattern. In this URL there is a complete implementation of the observer pattern in Java; in your case the observer will be the callback.
Also, if you need to implement something more complex, you will end up with an event/notifier pattern. Take a look at this other pattern here.
Callbacks in Java 8 can now be done with the java.util.function package.
See Java 8 lambda Void argument for more information.
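For instance, a method can simply accept a Consumer as its callback (a minimal sketch; the Worker class and its result type are made up for illustration):
import java.util.function.Consumer;

public class Worker {
    // The caller supplies the callback as a Consumer of the result type.
    public void doWork(Consumer<String> onDone) {
        String result = "finished"; // placeholder for the real processing
        onDone.accept(result);
    }
}

// Usage:
// new Worker().doWork(result -> System.out.println("Callback got: " + result));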
I had an idea and it goes like this:
Parse a file on service side.
Create a list of actions based on the file's contents.
Pass the list of actions to the client side.
Have the client define and perform actions based on the items on the list.
As in the visitor pattern, we'd have a class for each action, and all of them would implement the Action interface. The clients would then implement the visitors. In Java it'd be something like this:
public interface Action {
void act(Visitor visitor);
}
public class PerfectAction implements Action {
public void act(Visitor visitor) {
visitor.bePerfect();
}
}
public class VisibleAction implements Action {
public void act(Visitor visitor) {
visitor.beVisible();
}
}
public interface Visitor {
void bePerfect();
void beVisible();
}
The Problem
I can't create proxy classes for the Action and Visitor interfaces: they do not contain setters and/or getters, and they do not contain any data. Is it possible to pass this knowledge of which method should be called on the Visitor object from the service side to the client side?
Request Factory can only move data around (EntityProxy and/or ValueProxy), and ask the server to do things on behalf of the client (RequestContext).
To transfer actions, the client and server first need to share the knowledge of those actions that can be performed.
You then have two solutions:
move to GWT-RPC
because the client has to know every possible action upfront anyway, create an enum or whatever to identify each action, and transfer those identifiers to the client, which will map them back to concrete actions to perform (see the sketch below).
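A hedged sketch of that second option (all names below are invented for illustration): the server sends identifiers, and the client maps them back onto the existing Visitor methods.
import java.util.List;

// Shared between client and server, e.g. in a common package:
public enum ActionType {
    BE_PERFECT,
    BE_VISIBLE
}

// Client side: turn each identifier back into a call on the visitor.
public class ActionDispatcher {
    public void dispatch(List<ActionType> actions, Visitor visitor) {
        for (ActionType action : actions) {
            switch (action) {
                case BE_PERFECT:
                    visitor.bePerfect();
                    break;
                case BE_VISIBLE:
                    visitor.beVisible();
                    break;
            }
        }
    }
}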
I don't think this is how you'd implement the visitor pattern. I'd do something like this:
public interface ActionVisitor {
void visit(VisibleAction va);
void visit(PerfectAction pa);
}
public class PerfectAction implements Action {
public void act(ActionVisitor visitor) {
visitor.visit(this);
}
}
public class VisibleAction implements Action {
public void act(ActionVisitor visitor) {
visitor.visit(this);
}
}
Then I'd define an implementation of the visitor that performs the appropriate actions.
It's important to define it this way so that the logic of what the visitor does is kept outside the Action classes. In the original version, each action was tied to one specific visitor method, so it would be harder to change the behaviour.
I think that this will solve your problem, because now the logic of what to do is externalized to the visitor.