What's this java pattern called? - java

I'm wondering what the following pattern is called, if it has a name at all.
Purpose
Store data that is associated with an object (MyObject), but that is private to an implementation of an interface that deals with that object. Clients of the object have no business looking at this data.
Alternatives
Some alternatives are
a WeakHashMap<MyObject, FooApiMyObjectAttachment> maintained in the implementation of the interface,
using subclassing and factories everywhere the value is created, so that the extra data can be stored in the subclass or
using subclassing and accepting both MyObject and subclasses in the API.
Code example
public interface MyApi {
    void doSomething(MyObject x);
}
public class MyObject {
    public interface Attachment {} // empty interface, type bound only
    private Attachment attachment;
    public void setAttachment(Attachment attachment) {
        this.attachment = attachment;
    }
    public <T extends Attachment> T getAttachment(Class<T> type) {
        return type.cast(attachment);
    }
}
class FooApiMyObjectAttachment implements MyObject.Attachment {
    Foo foo; // some data that one MyApi implementer `foo' wants to persist between calls, but that is neither needed nor desired on MyObject
}
class BarApiMyObjectAttachment implements MyObject.Attachment {
    Bar bar; // some data that another MyApi implementer `bar' wants to persist between calls, but that is neither needed nor desired on MyObject
}
class FooApi implements MyApi {
    // associates FooApiMyObjectAttachment with any MyObjects passed to it or created by it
}
class BarApi implements MyApi {
    // associates BarApiMyObjectAttachment with any MyObjects passed to it or created by it
}
Compared to subclassing, the advantage is that no factories are needed for MyObject, just so that implementers of MyApi can associate extra data with the objects.
Compared to a WeakHashMap in the implementers, a disadvantage is two methods on MyObject that aren't useful to clients, but an advantage is the simplicity.
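For comparison, the WeakHashMap alternative would look roughly like this (a sketch; the field name and the computeIfAbsent usage are my own, not from the original code):
import java.util.Map;
import java.util.WeakHashMap;

class FooApi implements MyApi {
    // Entries disappear automatically once the MyObject key is no longer
    // strongly reachable elsewhere, so nothing leaks.
    private final Map<MyObject, FooApiMyObjectAttachment> attachments = new WeakHashMap<>();

    public void doSomething(MyObject x) {
        FooApiMyObjectAttachment a =
                attachments.computeIfAbsent(x, k -> new FooApiMyObjectAttachment());
        // ... read and update a.foo between calls ...
    }
}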
A nice property of this pattern is that you can generalize it to store any number of attachments of different types with each node by changing the field to Map<Class<?>, Attachment> attachments, which cannot be done with subclassing at all.
I've seen the generalized form used successfully to annotate tree nodes in a tree rewriting system with various data used by various modules that processed the nodes. (c.f. pointers to parent nodes, origin information)
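A sketch of that generalized form, assuming the accessors are keyed by the attachment class (my reconstruction, not code from the original system):
import java.util.HashMap;
import java.util.Map;

public class MyObject {
    public interface Attachment {} // empty interface, type bound only

    private final Map<Class<?>, Attachment> attachments = new HashMap<>();

    public <T extends Attachment> void setAttachment(Class<T> type, T attachment) {
        attachments.put(type, attachment);
    }

    public <T extends Attachment> T getAttachment(Class<T> type) {
        return type.cast(attachments.get(type));
    }
}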
Question
Does this pattern have a name? If so, what is it? Any references?

It looks like a structural pattern, closely derived from Whole-Part, or Composite.
Looking for a reference online, here is an overview of Whole-Part:
Sometimes called Composite
Helps with the aggregation of components (parts) that together form a semantic unit (whole).
Direct access to the Parts is not possible
Compose objects into tree structures to represent part-whole hierarchies.
Whole-Part lets clients treat individual objects and compositions of objects uniformly
Composite Pattern
Really the difference between what you are doing and the composite is that you are storing non-composites, so you don't get the tree structure that composites would allow, but a UML diagram would look similar, just without the pig's ear (the self-referencing association).

Found it!
The form where multiple attachments are possible (Map<Class<?>, Attachment> attachments) is described by Erich Gamma as the Extension Objects Pattern.

The Gang of Four calls this a Memento.

The Role Object Pattern is really really similar, maybe even up to the point where I conclude that the answer to my own question is: It's the Role Object Pattern.

Related

Java CRUD DAO Persistence design

Recently I have really focused on writing clean code and implementing good designs, and I have stumbled across a situation where I have several options but cannot decide which one is appropriate. I am working on software that requires persistence of a collection of objects. I decided to implement the DAO pattern. The thing is that persistence could be either JSON or XML, so I implemented it this way:
I created a Generic DAO:
public interface GenericDao<T> {
    public boolean add(T type);
    public boolean change(T type);
    public void delete(T type);
}
Then I created a CarDAO:
public interface CarDao extends GenericDao<Car> {
    public Car getByIdentificationNumber(int id);
    public void process();
}
For JSON persistence:
JsonGenericDao:
public class JsonGenericDao<T> implements GenericDao<T> {
    public boolean add(T type) {
        // implement ADD for JSON
        return true; // placeholder result
    }
    public boolean change(T type) {
        // implement Change for JSON
        return true; // placeholder result
    }
    public void delete(T type) {
        // implement Delete for JSON
    }
}
JsonCarDao:
public class JsonCarDao extends JsonGenericDao<Car> implements CarDao {
    public Car getByIdentificationNumber(int id) {
        // Implement logic
        return null; // placeholder result
    }
    public void process() {
        // Logic
    }
}
JsonCarDao extends JsonGenericDao to inherit add, change, and delete, and it also provides the additional methods.
XmlGenericDao and XmlCarDao are implemented the same way.
So I end up with the possibility of using XmlCarDao OR JsonCarDao depending on the persistence I want to use.
When implementing the persistence, I used JAXB for XML and Gson for JSON.
I made an EntityCollection<T> class to store the objects. I convert this collection to either XML or JSON depending on the persistence used; to modify data, I read the file back into this collection, change what needs to be changed, and then rewrite the file.
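For context, a minimal sketch of what such an EntityCollection<T> might look like (its exact shape is not shown in the question, so this is an assumption):
import java.util.ArrayList;
import java.util.List;

public class EntityCollection<T> {
    private final List<T> entities = new ArrayList<>();

    public List<T> getEntities() { return entities; }
    public void add(T entity) { entities.add(entity); }
    public boolean remove(T entity) { return entities.remove(entity); }
}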
There are two ways I can implement it:
Option 1:
I could implement the persistence using Gson inside JsonGenericDao and do the same for JAXB inside XmlGenericDao.
Option 2:
I can create an interface Persister<T> and write two classes that implement it, JsonPersister<T> and XmlPersister<T>, with methods such as update(T type) and acquireAllFromFile(): the first rewrites the whole file with the new data, and the second retrieves the information from the file. (The same thing could be done in option 1, but without the additional classes.)
Then inside JsonGenericDao<T> I can use: JsonPersister<EntityCollection<T>>
and inside XmlGenericDao<T> I can use: XmlPersister<EntityCollection<T>>
thereby packing everything together.
The problem is that, thinking about this, it means I could get rid of JsonGenericDao and XmlGenericDao and implement a single PersistenceGenericDao that takes a Persister in its constructor to specify whether JsonPersister or XmlPersister should be used. It would basically be a combination of the DAO and Strategy patterns. This seems like something I can do, but it also appears to mess up my initial DAO design. Is it an appropriate thing to do, or is it bad practice?
I think your option 2 actually looks like the GoF Bridge pattern: XmlPersister/JsonPersister are the ConcreteImplementors, PersistenceGenericDao is the Abstraction, and JsonCarDao is a RefinedAbstraction.
So the idea actually makes sense. See What problems can the Bridge design pattern solve? to check if you really need the pattern or not.
If you only plan to use XML or JSON persistence, I personally would go with option 2. If you compare JsonCarDao with XmlCarDao, the only difference between them will probably be the mechanics of saving/loading data from some resource (JSON vs. XML). The rest of the logic will probably be pretty much the same. From this point of view, it is reasonable to extract the "saving/loading" into specific implementors and have one generic class for the rest of the DAO logic.
However, if you consider relational or NoSQL database persistence, this might not fit that well, because the DAO logic will probably be different. A method like findById will be quite different in a relational DAO (a query in the DB) compared to a JSON DAO (load data from a JSON file and search the collection of objects for one with the given ID). In that situation, a relational implementation of Persister will probably not be very efficient.
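To make that Bridge mapping concrete, here is a minimal sketch of option 2 (method bodies are placeholders and the EntityCollection type is assumed from the question; this is not a definitive implementation):
// Implementor side of the Bridge: hides the storage format.
interface Persister<T> {
    void update(T data);        // rewrite the whole file with the new data
    T acquireAllFromFile();     // read everything back from the file
}
class JsonPersister<T> implements Persister<T> {
    public void update(T data) { /* write as JSON, e.g. with Gson */ }
    public T acquireAllFromFile() { /* read JSON back */ return null; }
}
class XmlPersister<T> implements Persister<T> {
    public void update(T data) { /* write as XML, e.g. with JAXB */ }
    public T acquireAllFromFile() { /* read XML back */ return null; }
}
// Abstraction side: one generic DAO that delegates the format-specific work.
class PersistenceGenericDao<T> implements GenericDao<T> {
    private final Persister<EntityCollection<T>> persister;

    PersistenceGenericDao(Persister<EntityCollection<T>> persister) {
        this.persister = persister;
    }
    public boolean add(T type) {
        EntityCollection<T> all = persister.acquireAllFromFile();
        all.add(type);
        persister.update(all);
        return true;
    }
    public boolean change(T type) { /* load, modify, update, as in add */ return true; }
    public void delete(T type)    { /* load, remove, update */ }
}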

Is the Composite pattern a good choice in this scenario?

Here is one design dilemma I have...
In my program, I have different kinds of entries - numeric, textual, date, image, etc.
My first idea was to have model structured with inheritance like this:
Entry
-- NumericEntry (extends Entry)
-- TextualEntry (extends Entry)
-- DateEntry (extends Entry)
-- ImageEntry (extends Entry)
Then I can have a list of Entry objects, and each object will know how to handle & expose its data through common members (i.e. showData(), makeSummary(), etc.). If I want to add a new kind of entry, I just add another class for that specific type.
But Java limitations, and also Android ORM library limitations, make this pretty complicated.
So I have turned to the composite pattern, but I am not sure if I am approaching it right.
So, now I have this (pseudocode):
class Entry
{
    Type type;
    (nullable) NumericEntry numericEntry;
    (nullable) TextualEntry textualEntry;
    (nullable) DateEntry dateEntry;
    (nullable) ImageEntry imageEntry;
    public showData()
    {
        switch (type)
        {
            case numeric: ..
            case textual: ..
            case date: ..
            case image: ..
        }
    }
}
But this seems too weird to me, doesn't it?
What would be the right approach in the described scenario?
I think what you're trying to do is legit, but I think the composite pattern is a bit off here. The composite pattern is rather used for hierarchical structures, as far as I know (like directory structures).
Your model seems quite good: use an (abstract) base class and let the other types extend from it. However, I fail to understand why you want to have all the different types of entries in your base Entry class.
If I understand correctly what you want, then this would be more logical.
Interface example:
public interface Entry {
    // Define all your methods as abstract and define them here
    void showData();
}
public class TextualEntry implements Entry {
    @Override
    public void showData() {
        // Your implementation for textual entries here
    }
}
// Repeat this for the other types of entries
You could also consider an abstract class implementation, which can define properties/fields used in all the extended classes. Moreover, you can implement methods in the abstract class which have the same implementation for all extended classes.
Abstract class example:
abstract class Entry {
    // Define your properties here that are used in all the other classes
    // Define all your methods as abstract and define them here
    public abstract void showData();
}
class TextualEntry extends Entry {
    // Define new properties here
    @Override
    public void showData() {
        // Your implementation for textual entries here
    }
}
On http://docs.oracle.com/javase/tutorial/java/IandI/abstract.html they discuss a similar problem.
If I understand your request correctly you can use Composite, but I did not get how you arrived at your pseudocode.
The Composite pattern composes objects into tree structures to represent part-whole hierarchies. A group of objects is to be treated in the same way as a single instance of an object.
The Component interface defines the common method(s) for leaves and composites.
A Leaf implements the Component interface; the catch is that you can have multiple leaf types (numeric, text, ...).
A Composite implements the Component interface, but it is also a container for leaf objects.
So usage can be:
Component leaf1 = new Leaf(); //numeric entry
Component leaf2 = new Leaf(); // text entry
Composite composite = new Composite();
composite.add(leaf1);
composite.add(leaf2);
composite.operation(); // showData()
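For completeness, a minimal sketch of the Component/Leaf/Composite types that the usage above assumes (the names and method bodies are placeholders):
import java.util.ArrayList;
import java.util.List;

interface Component {
    void operation(); // e.g. showData()
}
class Leaf implements Component {
    public void operation() {
        // handle one concrete entry type (numeric, textual, ...)
    }
}
class Composite implements Component {
    private final List<Component> children = new ArrayList<>();

    public void add(Component child) {
        children.add(child);
    }
    public void operation() {
        for (Component child : children) {
            child.operation(); // delegate to every child
        }
    }
}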

How to avoid potentially long if statements for the same thing in multiple places

I am creating an application and at the front I check if the user is an admin, user, moderator or superadmin. Based on this I create a different XML.
So what I currently do is pass a string as an argument to the method that converts the object to XML, to specify which mapping it should use. However, passing those strings around isn't good. Are there any patterns to do this better?
I could bring the role check into the mapping class, and then change the mapping id to match the role of the current user. But I don't think security checks fit those classes.
Would you just create an enum to keep the roles and pass that instead of a string?
Or create different classes and use a factory to return the right object?
A Common Interface Approach
By implementing a common interface between all return objects, you can develop some loose coupling in your code. For example:
public interface XmlReturn
{
    public void displayXML(); // Just an example method.
}
And a class that implements this interface:
public class AdminXmlReturn implements XmlReturn
{
    public void displayXML() {
        // Some code here for the admin XML
    }
}
With this, you can generate some sort of factory that takes a discriminator:
public abstract class XmlFactory
{
    public static XmlReturn getInstance(String type)
    {
        // Using string as an example type. Doesn't need to be.
        if(type.equals("Admin")) {
            return new AdminXmlReturn();
        }
        throw new IllegalArgumentException("Unknown type: " + type); // or return a default implementation
    }
}
By referring to the object by its interface type, you can generate as many different XML files as you want without having to change any code, e.g.:
public void loadPage(String permission)
{
    // permission can be any type. This is just an example.
    XmlReturn xml = XmlFactory.getInstance(permission);
    xml.displayXML();
    // This method exists in all objects that implement XmlReturn
}
Advantages
This approach has the main advantage that you can add as many new XML files and permissions as you want, and you won't need to change the code that loads the XML. This "separation of concerns" will help you to make your program very manageable and extendable.
By moving your decision logic into a factory, you make your code more readable and allow other people to abstract away from the details of the inner workings of your program, if you intend to share your code.
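The question also asks whether an enum would be better than a plain string; a hedged sketch of that variant of the factory above (the role names are just examples):
enum Role { ADMIN, MODERATOR, USER, SUPERADMIN }

public abstract class XmlFactory {
    public static XmlReturn getInstance(Role role) {
        switch (role) {
            case ADMIN:
                return new AdminXmlReturn();
            // add a case per role: MODERATOR, USER, SUPERADMIN, ...
            default:
                throw new IllegalArgumentException("Unsupported role: " + role);
        }
    }
}
With an enum the compiler can warn about unhandled roles, and callers cannot pass an arbitrary, misspelled string.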
Your question is not very clear. Anyway, I'll try to give some options:
If you want to serialize different kinds of users to XML, then I would suggest modeling the different kinds of users as a hierarchy of classes and having a specialized toXML() serialization method in each class. By the way, JAXB can help you a lot if this is what you want to do.
If you have a class XMLBuilder that writes some XML, and the way the XML is built depends on the kind of user, then I would again suggest modeling your different kinds of users as a hierarchy of classes, and then using method overloading in XMLBuilder, i.e. several build() methods, each taking as input a different subclass of your user hierarchy.
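For the first option, a minimal JAXB sketch might look like this (the AdminUser class and its field are hypothetical, just to show the marshalling calls):
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class AdminUser {
    public String name; // JAXB picks up public fields/properties by default

    public String toXML() throws Exception {
        JAXBContext context = JAXBContext.newInstance(AdminUser.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        StringWriter writer = new StringWriter();
        marshaller.marshal(this, writer);
        return writer.toString();
    }
}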
I hope this helps.

API design: one generic interface VS three specialized interfaces?

I'm working on a tool where users can use their own annotations to describe a data processing workflow (validation, transformation, etc.).
Besides using ready-to-use annotations, users can define their own: in order to do this they need to declare the annotation class itself and then implement an annotation processor (which is actually the main point of this question).
The configured method for data processing may look like this:
void foo(@Provide("dataId") @Validate(Validator.class) String str) {
    doSmth(str);
}
There're naturally three groups of annotations:
those which produce initial values;
those which transform values (converters);
those which just read values and perform some work (validators, different consumers).
So I need to make a choice: either create one interface for handling all these types of annotations, which could look like this:
interface GenericAnnotationProcessor {
    Object processAnnotation(Annotation annotation, Object processedValue);
}
Or I can add three interfaces to the API:
interface ProducerAnnotationProcessor {
    Object produceInitValue(Annotation annotation);
}
interface TransformerAnnotationProcessor {
    Object transformValue(Annotation annotation, Object currentValue);
}
interface ConsumerAnnotationProcessor {
    void consumeValue(Annotation annotation, Object currentValue);
}
The first option is not very clear in use, but the second option pollutes the API with three almost identical interfaces.
What would you choose (first of all as an API user) and why?
Thanks!
I would create the first, more general interface, then define the three different implementation classes. Without knowing more about how you will be using this, my first instinct would be to define the interface and/or a base class (depending upon how much common implementation code is shared between the different processors), and then add specialized processor implementations in derived types, all of which share the common interface.
In using the API, I would expect to declare a variable of the GenericAnnotationProcessor interface type, and then assign the appropriate implementation type depending upon my needs.
It is early here in Portland, OR, but at this moment, at 50% of my required caffeine level, this seems to me like it would provide maximum flexibility while maximizing code re-use.
Of course, your actual requirements might dictate otherwise...
Hope that was helpful!
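A rough sketch of what this answer describes: the generic interface plus a base class, with specialized processors behind it (all class names here are illustrative):
import java.lang.annotation.Annotation;

// Optional base class for code shared by all processors.
abstract class AbstractAnnotationProcessor implements GenericAnnotationProcessor {
    // shared helper code for all processors could live here
}
// A producer ignores the incoming value and creates the initial one.
class ProvideProcessor extends AbstractAnnotationProcessor {
    public Object processAnnotation(Annotation annotation, Object processedValue) {
        return "initial value"; // illustrative only
    }
}
// A consumer (e.g. a validator) inspects the value and passes it through.
class ValidateProcessor extends AbstractAnnotationProcessor {
    public Object processAnnotation(Annotation annotation, Object processedValue) {
        // perform validation here, then return the value unchanged
        return processedValue;
    }
}
// Usage, as the answer suggests: refer to every processor through the common interface.
// GenericAnnotationProcessor processor = new ProvideProcessor();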
Just diving deep into your problem.
As they execute similar tasks with some variance, the Strategy pattern should assist you.
Your problem should look something like the code below.
interface GenericAnnotationProcessor {
    Object processAnnotation(Annotation annotation, Object processedValue);
}
interface ProducerAnnotationProcessor extends GenericAnnotationProcessor {
}
interface TransformerAnnotationProcessor extends GenericAnnotationProcessor {
}
interface ConsumerAnnotationProcessor extends GenericAnnotationProcessor {
}
Now you can follow the example from the Wikipedia article:
import java.lang.annotation.Annotation;
import java.util.HashMap;
import java.util.Map;

class Context {
    // map of annotation processors, keyed by annotation type
    private final Map<Class<? extends Annotation>, GenericAnnotationProcessor> processors = new HashMap<>();
    // register (add/remove) annotation processors in the map
    public void register(Class<? extends Annotation> type, GenericAnnotationProcessor processor) {
        processors.put(type, processor);
    }
    public Object executeAnnotationProcessor(Annotation annotation, Object processedValue) {
        return locateAnnotationProcessor(annotation).processAnnotation(annotation, processedValue);
    }
    private GenericAnnotationProcessor locateAnnotationProcessor(Annotation annotation) {
        // return the expected annotation processor
        return processors.get(annotation.annotationType());
    }
}
I believe you can understand.
You can use interfaces extending interfaces. More on that here:
Similar to classes, you can build up inheritance hierarchies of interfaces by using the extends keyword, as in:
interface Washable {
    void wash();
}
interface Soakable extends Washable {
    void soak();
}
In this example, interface Soakable extends interface Washable. Consequently, Soakable inherits all the members of Washable. A class that implements Soakable must provide bodies for all the methods declared in or inherited by Soakable, wash() and soak(), or be declared abstract. Note that only interfaces can "extend" other interfaces. Classes can't extend interfaces, they can only implement interfaces.
Hope it helps.

When do you decide to use a visitor for your objects?

I always thought an object needs the data and the messages to act on it. When would you want a method that is extrinsic to the object? What rule of thumb do you follow to have a visitor? This is supposing that you have full control of the object graph.
The visitor pattern is particularly useful when applying an operation to all elements of a fairly complicated data structure for which traversal is non-trivial (e.g. traversing over the elements in parallel, or traversing a highly interconnected data structure) or in implementing double-dispatch. If the elements are to be processed sequentially and if double-dispatch is not needed, then implementing a custom Iterable and Iterator is usually the better choice, especially since it fits in better with the other APIs.
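For contrast, the sequential case mentioned above can usually be covered with a plain Iterable, roughly like this (a sketch with made-up names):
import java.util.Iterator;
import java.util.List;

// When elements are just processed one after another, exposing iteration is enough.
class Elements<T> implements Iterable<T> {
    private final List<T> items;

    Elements(List<T> items) {
        this.items = items;
    }
    public Iterator<T> iterator() {
        return items.iterator();
    }
}
// Clients then simply write: for (T item : elements) { ... } with no visitor needed.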
I always thought an object needs the data and the messages to act on it. When would you want a method that is extrinsic to the object? What rule of thumb do you follow to have a visitor? This is supposing that you have full control of the object graph.
It's sometimes not convenient to have all behaviors for a particular object defined in one class. For instance in Java, if your module requires a method toXml to be implemented in a bunch of classes originally defined in another module, it's complicated, because you cannot write toXml anywhere other than the original class file, which means you cannot extend the system without changing existing sources (in Smalltalk and some other languages, you can group methods in extensions that are not tied to a particular file).
More generally, there's a tension in statically typed languages between the ability to (1) add new functions to existing data types, and (2) add new data types implementations supporting the same functions -- that's called the expression problem (wikipedia page).
Object oriented languages excel at point 2. If you have an interface, you can add new implementations safely and easily. Functional languages excel at point 1. They rely on pattern matching/ad-hoc polymorphism/overloading so you can add new functions to existing types easily.
The visitor pattern is a way to support point 1 in an object-oriented design: you can easily extend the system with new behaviors in a type-safe way (which wouldn't be the case if you do kind of manual pattern matching with if-else-instanceof because the language would never warn you if a case is not covered).
Visitors are then typically used when there is a fixed set of known types, which I think is what you meant by "full control of the object graph". Examples include tokens in a parser, trees with various types of nodes, and similar situations.
So to conclude, I would say you were right in your analysis :)
PS: The visitor pattern works well with the composite pattern, but they are also useful individually
Sometimes it's just a matter of organization. If you have n kinds of objects (i.e. classes) with m kinds of operations (i.e. methods), do you want the n * m class/method pairs to be grouped by class or by method? Most OO languages strongly lean towards grouping things by class, but there are cases where organizing by operation makes more sense. For example, in multi-phase processing of object graphs, as in a compiler, it is often more useful to think about each phase (i.e. operation) as a unit rather than about all of the operations that can happen to a particular sort of node.
A common use-case for the Visitor pattern where it's more than just strictly organizational is to break unwanted dependencies. For example, it's generally undesirable to have your "data" objects depend on your presentation layer, especially if you imagine that you may have multiple presentation layers. By using the visitor pattern, details of the presentation layer live in the visitor objects, not in methods of the data objects. The data objects themselves only know about the abstract visitor interface.
I use it a lot when I want to put a method that would be stateful onto an Entity/DataObject/BusinessObject but I really don't want to introduce that statefulness to my object. A stateful visitor can do the job, or generate a collection of stateful executor objects from my non-stateful data objects. This is particularly useful when the work is going to be farmed out to executor threads: many stateful visitors/workers can reference the same group of non-stateful objects.
For me, the one and only reason to use the visitor pattern is when I need to perform double dispatch on a graph-like data structure such as a tree or trie.
When you have the following problem:
Many distinct and unrelated operations need to be performed on node objects in a heterogeneous aggregate structure. You want to avoid “polluting” the node classes with these operations. And, you don’t want to have to query the type of each node and cast the pointer to the correct type before performing the desired operation.
Then you can use a Visitor pattern with one of the following intents:
Represent an operation to be performed on the elements of an object structure.
Define a new operation without changing the classes of the elements on which it operates.
The classic technique for recovering lost type information.
Do the right thing based on the type of two objects.
Double dispatch
(From http://sourcemaking.com/design_patterns/visitor)
The visitor pattern is most useful when you need behaviour to vary by object type (in a class hierarchy), and that behaviour can be defined in terms of the public interface provided by the object. The behaviour is not intrinsic to that object and doesn't benefit from or require encapsulation by the object.
I find visitors often arise naturally with graphs/trees of objects, where each node is part of a class hierarchy. To allow clients to walk the graph/tree and handle each type of node in a uniform way, the Visitor pattern is really the simplest alternative.
For example, consider an XML DOM - a Node is the base class, with Element, Attribute and other types of Node defining the class hierarchy.
Imagine the requirement is to output the DOM as JSON. The behaviour is not intrinsic to a Node - if it were, we would have to add methods to Node to handle all formats that the client might need (toJSON(), toASN1(), toFastInfoSet() etc.) We could even argue that toXML() doesn't belong there, although that might be provided for convenience since it is going to be used by most clients, and is conceptually "closer" to the DOM, so toXML could be made intrinsic to Node for convenience - although it doesn't have to be, and could be handled like all the other formats.
As Node and its subclasses make their state fully available as methods, we have all the information needed externally to be able to convert the DOM to some output format. Instead of then putting the output methods on the Node object, we can use a Visitor interface, with an abstract accept() method on Node, and implementation in each subclass.
The implementation of each visitor method handles the formatting for each node type. It can do this because all the state needed is available from the methods of each node type.
By using a visitor, we open the door to implementing any output format desired without needing to burden each Node class with that functionality.
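A condensed sketch of that idea (the Node/Element/Attribute types here are simplified stand-ins, not the real DOM API):
interface NodeVisitor {
    void visit(Element element);
    void visit(Attribute attribute);
}
abstract class Node {
    abstract void accept(NodeVisitor visitor); // the double-dispatch entry point
}
class Element extends Node {
    void accept(NodeVisitor visitor) { visitor.visit(this); }
}
class Attribute extends Node {
    void accept(NodeVisitor visitor) { visitor.visit(this); }
}
// One visitor per output format; the Node classes never learn about JSON.
class JsonVisitor implements NodeVisitor {
    private final StringBuilder json = new StringBuilder();
    public void visit(Element element) { /* append the element as JSON */ }
    public void visit(Attribute attribute) { /* append the attribute as JSON */ }
    public String result() { return json.toString(); }
}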
I would always recommend using a visitor when you have full knowledge of the classes that implement an interface. That way you won't need any not-so-pretty instanceof calls, and the code becomes a lot more readable. Also, once a visitor has been implemented, it can be reused in many places, present and future.
The Visitor pattern is a very natural solution to double dispatch problems. The double dispatch problem is a subset of dynamic dispatch problems, and it stems from the fact that method overloads are resolved statically at compile time, unlike virtual (overridden) methods, which are resolved at runtime.
Consider this scenario:
public class CarOperations {
    void doCollision(Car car){}
    void doCollision(Bmw car){}
}
public class Car {
    public void doVroom(){}
}
public class Bmw extends Car {
    public void doVroom(){}
}
public class Example {
    public static void main(String[] args) {
        Car bmw = new Bmw();
        bmw.doVroom(); // calls Bmw.doVroom() - single dispatch, works out that bmw is actually a Bmw at runtime
        CarOperations carops = new CarOperations();
        carops.doCollision(bmw); // calls CarOperations.doCollision(Car car) because the compiler chose the doCollision overload based on the declared type of the bmw variable
    }
}
The code below is adapted from my previous answer and translated to Java. The problem is somewhat different from the above sample, but it demonstrates the essence of the Visitor pattern.
// This is the car operations interface. It knows about all the different kinds of cars it supports
// and is statically typed to accept only certain Car subclasses as parameters
public interface CarVisitor {
    void stickAccelerator(Toyota car);
    void chargeCreditCardEveryTimeCigaretteLighterIsUsed(Bmw car);
}
// Car interface; a car-specific operation is invoked by calling performOperation
public interface Car {
    public String getMake();
    public void setMake(String make);
    public void performOperation(CarVisitor visitor);
}
public class Toyota implements Car {
    private String make;
    public String getMake() { return this.make; }
    public void setMake(String make) { this.make = make; }
    public void performOperation(CarVisitor visitor) {
        visitor.stickAccelerator(this);
    }
}
public class Bmw implements Car {
    private String make;
    public String getMake() { return this.make; }
    public void setMake(String make) { this.make = make; }
    public void performOperation(CarVisitor visitor) {
        visitor.chargeCreditCardEveryTimeCigaretteLighterIsUsed(this);
    }
}
public class Program {
    public static void main(String[] args) {
        // carDealer and SomeCarVisitor are assumed to be defined elsewhere
        Car car = carDealer.getCarByPlateNumber("4SHIZL");
        CarVisitor visitor = new SomeCarVisitor();
        car.performOperation(visitor);
    }
}
