Design Pattern Advice - Java

I have a class with a method that accepts an argument of a particular type. The method's behavior should depend on the argument's concrete class. For example,
public void doSomething(SomeInterface t) {
...
}
Depending on the actual class of the argument, I need the behavior to change. The outer class must perform an action based on the values found in t. Specifically, the outer class needs to construct a Hibernate Criteria object whose restrictions depend on the type of t, which is an implementation of a "Query" interface. The outer class implements a parameterized builder interface that constructs objects used to execute queries against a data store (for example, a Criteria for Hibernate, a SearchQueryBuilder for Elasticsearch, etc.). So as you can see, the problem with having t do the work itself is that t would then need to know HOW to construct these criteria, which is beyond its intended purpose of describing WHAT to query.
It feels dirty and wrong to do something like
if (t instanceof X) {
...
} else if (t instanceof Y) {
...
}
I see a couple of problems here:
This requires prior knowledge about the types being passed in.
The class is not "closed for modification" and will require a change every time a new type needs to be supported.
Can someone suggest a good design pattern that can be used to solve this problem? My first thought is to use a factory pattern in combination with strategy and create instances of the class with a "handler" for a specific type. Another thought I had was to create a mapping of Class -> Handler which is supplied to the class at construction time.
Ideas appreciated.
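The Class-to-handler mapping mentioned in the question could be sketched roughly like this. All names here (QueryHandler, HandlerRegistry, apply) are illustrative assumptions, not part of the question's code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical handler for one query type; e.g. its apply() would add
// the restrictions for that type to a Criteria under construction.
interface QueryHandler<Q> {
    void apply(Q query);
}

// Registry supplied to the builder at construction time.
class HandlerRegistry {
    private final Map<Class<?>, QueryHandler<?>> handlers = new HashMap<>();

    <Q> void register(Class<Q> type, QueryHandler<Q> handler) {
        handlers.put(type, handler);
    }

    @SuppressWarnings("unchecked")
    <Q> void handle(Q query) {
        // Exact-class lookup; no instanceof chain in the builder itself.
        QueryHandler<Q> handler = (QueryHandler<Q>) handlers.get(query.getClass());
        if (handler == null) {
            throw new IllegalArgumentException("No handler for " + query.getClass());
        }
        handler.apply(query);
    }
}
```

Note that handlers.get(query.getClass()) is an exact-class lookup; supporting subclasses or interfaces would need a walk up the type hierarchy. New query types are supported by registering a new handler, so the builder stays closed for modification.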

The simplest idea would be to put the logic in the implementations of SomeInterface:
public interface SomeInterface {
public void actOnUsage();
}
public class SomeOtherClass {
public void doSomething(SomeInterface t) {
t.actOnUsage();
}
}


How to avoid empty visit functions in visitor pattern?

I have the following use case. I have a restriction interface that needs to fill its members from dependencies and perform validations. These methods are applicable to all implementations, so it is fine until now. Some restrictions require additional validation later. In the main function, I want to loop over each restriction and call the methods in a general way instead of using instanceof and then calling the specific method. I think this might be a use case for the visitor pattern, as mentioned here. Now I have the following classes.
interface Restriction {
void fillFields();
void firstRoundValidation();
void accept(ValidationVisitor visitor);
}
class RestrictionBasic implements Restriction {
Field field;
// Inject dependencies
@Override
public void fillFields() {
// Get field from dependencies
}
@Override
public void firstRoundValidation() {
// Implement
}
@Override
public void accept(ValidationVisitor visitor) {
visitor.visitRestrictionBasic(this);
}
}
class RestrictionAdvanced implements Restriction {
// Same as above except below function.
@Override
public void accept(ValidationVisitor visitor) {
visitor.visitRestrictionAdvanced(this);
}
}
interface ValidationVisitor {
void visitRestrictionBasic(RestrictionBasic restrictionBasic);
void visitRestrictionAdvanced(RestrictionAdvanced restrictionAdvanced);
}
class SecondRoundValidationVisitor implements ValidationVisitor {
@Override
public void visitRestrictionBasic(RestrictionBasic restrictionBasic) {
// Empty function
}
@Override
public void visitRestrictionAdvanced(RestrictionAdvanced restrictionAdvanced) {
// Perform second level of validation
}
}
class Main {
public static void main(String[] args) {
List<Restriction> restrictionList = new ArrayList<>();
ValidationVisitor validationVisitor = new SecondRoundValidationVisitor();
for (Restriction restriction : restrictionList) {
restriction.accept(validationVisitor);
}
}
}
Could you please tell me if there is any issue with this approach? There is also another approach where a getSecondValidationNeeded() method could be added to the interface and, based on its result, secondValidation could be called, with a default empty body. But that does not follow the interface segregation principle. My doubt is: how does the visitor pattern solve this issue? Even with the visitor pattern, there is only one interface, and accept is added to the base interface even though only some visitors have non-empty visit functions.
The visitor pattern uses method overloading to choose the appropriate implementation. It can be seen in the Wikipedia example:
interface CarElementVisitor {
void visit(Body body);
void visit(Car car);
void visit(Engine engine);
void visit(Wheel wheel);
}
So I would edit interface ValidationVisitor:
interface ValidationVisitor {
void visitRestrictionBasic(RestrictionBasic restrictionBasic);
void visitRestrictionAdvanced(RestrictionAdvanced restrictionAdvanced);
}
to this:
public interface ValidationVisitor {
void visitRestriction(RestrictionBasic restrictionBasic);
void visitRestriction(RestrictionAdvanced restrictionAdvanced);
}
So we have created visitRestriction() with different overloads.
Why? Because without overloading, when you don't know the concrete type of the object, you would need to find out the real type of the Restriction yourself and then call visitRestrictionBasic or visitRestrictionAdvanced accordingly. I highly recommend reading this very nice answer about What's the point of the accept method?
Pattern-wise I don't think there is a problem. visitRestrictionBasic is only empty because apparently you don't have second round validation for basic restrictions. This is a business rule, not a flaw of the design. If you later decide that you DO want second round validation for basic restrictions, you know where you can add it.
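If the empty body still bothers you, one option (assuming Java 8+; this is a sketch, not the asker's code) is to push the no-op into the visitor interface as default methods, so concrete visitors override only the overloads they care about:

```java
// Minimal sketch: RestrictionBasic/RestrictionAdvanced stand in for the
// question's classes; the default methods are deliberate no-ops.
class RestrictionBasic {}
class RestrictionAdvanced {}

interface ValidationVisitor {
    default void visitRestriction(RestrictionBasic r) { /* no second round */ }
    default void visitRestriction(RestrictionAdvanced r) { /* no second round */ }
}

class SecondRoundValidationVisitor implements ValidationVisitor {
    boolean advancedValidated = false;

    @Override
    public void visitRestriction(RestrictionAdvanced r) {
        // Only advanced restrictions get a second validation round.
        advancedValidated = true;
    }
}
```

The business rule "basic restrictions have no second round" is then stated once, in the interface, instead of as an empty override in every visitor.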
Apart from that, the whole set up might be overkill. It's usually good to start off simple. But I don't know your complete domain and use case, so cannot judge if this is the case here.
EDIT: To evaluate your approach we should get more context and take a step back to understand the problem. So far what I understand is there are several restriction types which have the following characteristics:
each restriction has a fixed group of dependencies
a restriction extracts values from these dependencies into its fields
each restriction performs two rounds of validation on these fields
the first round validation is implemented in each restriction type
the second round validation is also specific per restriction type, but implemented in the form of a separate visitor
The fundamental difference between the first round and second round validation is not clear to me. Both of them have specific validation code for each restriction type if I understand it correctly. If not, and the basic validator is only used in the first round, and the advanced validator only in the second round, then the model could probably be simplified. In that case first round = basic and second round = advanced...

Java Interface that forces implementation of an enum - How to approach?

I have a situation where I would like to use an instance of an object called Abstraction, which would be a Java interface looking something like:
public interface Abstraction {
public enum Actions {
}
}
The idea being that any class implementing Abstraction has to implement enum Actions (I realise this doesn't work).
A class implementing the interface may look like:
public class AbstractionA implements Abstraction {
public enum Actions {
f, c, r, a
}
}
In another class I would want to create an Abstraction object like:
Abstraction abs = new AbstractionA();
and then be able to access the enum values applicable to the Abstraction object created, e.g.
abs.Actions.r;
I realise my approach to this is all wrong but cannot see an appropriate way to handle this type of situation. How can I implement something like this where different implementations of the interface have a varying subset of the options I would generally want to put in an enum?
Perhaps I can implement the enum with all possible options in the interface and then somehow restrict implementations of the interface to using a subset of those enum values?
EDIT:
Another example implementation might be
public class AbstractionB implements Abstraction {
public enum Actions {
f, c, b, r, a
}
}
I think I have figured out a way forward with this:
public interface Abstraction {
public enum Actions {
f, c, b, r, s, a
}
public Actions[] availableActions();
}
Then implement with:
public class HunlAbstractionA implements Abstraction {
@Override
public Actions[] availableActions()
{
Actions[] actions = new Actions[] {Actions.f, Actions.c, Actions.r, Actions.a};
return actions;
}
}
This way I have access to all possible actions listed in the interfaces enum and can make checks to ensure an Action to be dealt with is one of the availableActions for the created class.
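That check could be sketched as follows; the isAvailable default method is my assumption, added on top of the question's code (and requires Java 8+):

```java
import java.util.Arrays;

// Abstraction and HunlAbstractionA follow the question's code;
// isAvailable is an assumed convenience helper.
interface Abstraction {
    enum Actions { f, c, b, r, s, a }

    Actions[] availableActions();

    // Check that an action is one of the availableActions before acting on it.
    default boolean isAvailable(Actions action) {
        return Arrays.asList(availableActions()).contains(action);
    }
}

class HunlAbstractionA implements Abstraction {
    @Override
    public Actions[] availableActions() {
        return new Actions[] { Actions.f, Actions.c, Actions.r, Actions.a };
    }
}
```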
Recommendation
I'd recommend the following approach.
This approach uses a combination of generics and reflection to explicitly indicate the need to implement or choose an appropriate enum; it also gives you the option of preserving information about the enum type while hiding all other information about the specific Abstraction implementation.
/**
* An abstraction with an implementation-defined enum.
* @param <E> your custom enum.
*/
interface Abstraction<E extends Enum<E>> {
// exposes the enum type; use getEnumType().getEnumConstants() for the values
Class<E> getEnumType();
}
class AbstractionA implements Abstraction<AbstractionA.EnumA> {
enum EnumA {
FOO,
BAR
}
@Override
public Class<EnumA> getEnumType() {
return EnumA.class;
}
}
class AbstractionB implements Abstraction<AbstractionB.EnumB> {
enum EnumB {
FOO,
BAR
}
@Override
public Class<EnumB> getEnumType() {
return EnumB.class;
}
}
Note that unfortunately we cannot supply a default implementation of getEnumType() due to type erasure.
Usage Example
class Main {
public static void main(String[] args) {
Abstraction myAbstractionA = new AbstractionA();
Abstraction<AbstractionB.EnumB> myAbstractionB = new AbstractionB();
Class enumAType = myAbstractionA.getEnumType();
Class<AbstractionB.EnumB> enumBType = myAbstractionB.getEnumType();
Object[] enumsA = enumAType.getEnumConstants();
AbstractionB.EnumB[] enumsB = enumBType.getEnumConstants();
System.out.printf("Enums of the same order are still non-identical: %s", enumsA[0].equals(enumsB[0]));
System.out.println();
Enum enumA = ((Enum)enumsA[0]);
Enum enumB = ((Enum)enumsB[1]);
System.out.printf("We can get enum constants in order, and get the ordinal of the enum: A=%s, B=%s", enumA.ordinal(), enumB.ordinal());
System.out.println();
enumA = Enum.valueOf(enumAType, "FOO");
enumB = Enum.valueOf(enumBType, "BAR");
System.out.printf("We can get enum constants by name and get the name out of the enum: A=%s, B=%s", enumA.name(), enumB.name());
System.out.println();
}
}
Alternatives
If you can use an abstract class instead of an interface, you may prefer a solution similar to this related answer.
Edit: If you have a common set of constants you want to share across your actions, you should probably use a global/shared enum for those constants and define only the extensions themselves in the custom Abstractions. If you cast them all to Enum and use .equals() as needed, this should work in most cases.
Background
As you have stated you know, it is not possible to declare members (variables or nested classes) in an interface in a way that forces implementations to provide them.
However, the good news is that Java actually supports the behaviour you want pretty well.
There are 3 key features that relate to my recommendation:
Enums are Objects
Firstly, enums in Java are fully-fledged Objects: they all extend java.lang.Enum, and all implement .equals().
So you can store any enum class's values in a variable of type java.lang.Enum and compare them with .equals().
And, if you want to pretend that values of different enum classes are the same because they share the same name (or are the nth constant in their respective class), you can do that too.
Note that this also means that custom enums can contain complex data and behaviour like any other class, beyond their use as unique identifiers.
See the Enum API documentation for details.
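A small sketch of what this means in practice (EnumA and EnumB are arbitrary example enums, not from the question):

```java
// Two unrelated enum classes; their values can be held as
// java.lang.Enum and compared by name when desired.
enum EnumA { FOO, BAR }
enum EnumB { FOO, BAZ }

class EnumCompareDemo {
    static boolean sameConstantName(Enum<?> a, Enum<?> b) {
        // equals() is identity-based for enums, so values of different
        // classes are never equal; compare by name() for
        // "same constant" semantics across enum classes.
        return a.name().equals(b.name());
    }
}
```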
Java Reflection
Secondly, Java has extensive reflection support. For our purposes, java.lang.Class has a method called getEnumConstants() for getting the enum constants (or null if the class is not an enum).
See the Class API documentation for details.
Cyclic Dependencies
Thirdly, at least when it comes to generics, Java is permissive about cyclic dependencies, so you can define a generic interface that depends on a specialisation of that same generic. Java won't mind.
An interface is a contract for which you want others to provide an implementation. In your example code you do not have a method but a definition of an enum called Actions.
Generally an enum is a set of constants, so we do not expect multiple classes to come up with different implementations of the same constants.
So you might want to rethink your approach and figure out a better way. I hope this helps move you in the right direction.

Double Dispatch and inheritance

I have a number of dumb object classes that I would like to serialize as Strings for the purpose of out-of-process storage. This is a pretty typical place to use double-dispatch / the visitor pattern.
public interface Serializeable {
<T> T serialize(Serializer<T> serializer);
}
public interface Serializer<T> {
T serialize(Serializeable s);
T serialize(FileSystemIdentifier fsid);
T serialize(ExtFileSystemIdentifier extFsid);
T serialize(NtfsFileSystemIdentifier ntfsFsid);
}
public class JsonSerializer implements Serializer<String> {
public String serialize(Serializeable s) {...}
public String serialize(FileSystemIdentifier fsid) {...}
public String serialize(ExtFileSystemIdentifier extFsid) {...}
public String serialize(NtfsFileSystemIdentifier ntfsFsid) {...}
}
public abstract class FileSystemIdentifier implements Serializeable {}
public class ExtFileSystemIdentifier extends FileSystemIdentifier {...}
public class NtfsFileSystemIdentifier extends FileSystemIdentifier {...}
With this model, the classes that hold data don't need to know about the possible ways to serialize that data. JSON is one option, but another serializer might "serialize" the data classes into SQL insert statements, for example.
If we take a look at the implementation of one of the data classes, the implementation looks pretty much the same as all the others. The class calls the serialize() method on the Serializer passed to it, providing itself as the argument.
public class ExtFileSystemIdentifier extends FileSystemIdentifier {
public <T> T serialize(Serializer<T> serializer) {
return serializer.serialize(this);
}
}
I understand why this common code cannot be pulled into a parent class. Although the code is shared, the compiler knows unambiguously when it is in that method that the type of this is ExtFileSystemIdentifier and can (at compile time) write out the bytecode to call the most type-specific overload of the serialize().
I believe I understand most of what is happening when it comes to the V-table lookup as well. The compiler only knows the serializer parameter as being of the abstract type Serializer. It must, at runtime, look into the V-table of the serializer object to discover the location of the serialize() method for the specific subclass, in this case JsonSerializer.serialize()
The typical usage is to take a data object, known to be a Serializeable, and serialize it by giving it to a serializer object, known to be a Serializer. The specific types of the objects are not known at compile time.
List<Serializeable> list = //....
Serializer<String> serializer = //....
List<String> serialized = list.stream().map(serializer::serialize).collect(Collectors.toList());
This instance works similar to the other invocation, but in reverse.
public class JsonSerializer implements Serializer<String> {
public String serialize(Serializeable s) {
return s.serialize(this);
}
// ...
}
The V-table lookup is now done on the instance of Serializeable, and it will find, for example, ExtFileSystemIdentifier.serialize. The compiler can statically determine that the closest matching overload is the one for Serializer<T> (which happens to be the only overload).
This is all well and good. It achieves the main goal of keeping the input and output data classes oblivious to the serialization class. And it also achieves the secondary goal of giving the user of the serialization classes a consistent API regardless of what sort of serialization is being done.
Imagine now that a second set of dumb data classes exists in a different project. A new serializer needs to be written for these objects. The existing Serializeable interface can be used in this new project. The Serializer interface, however, contains references to the data classes from the other project.
In an attempt to generalize this, the Serializer interface could be split into three
public interface Serializer<T> {
T serialize(Serializeable s);
}
public interface ProjectASerializer<T> extends Serializer<T> {
T serialize(FileSystemIdentifier fsid);
T serialize(ExtFileSystemIdentifier fsid);
// ... other data classes from Project A
}
public interface ProjectBSerializer<T> extends Serializer<T> {
T serialize(ComputingDevice device);
T serialize(PortableComputingDevice portable);
// ... other data classes from Project B
}
In this way, the Serializer and Serializeable interfaces could be packaged and reused. However, this breaks the double dispatch and results in an infinite loop in the code. This is the part I'm uncertain about in the V-table lookup.
When stepping through the code in a debugger, the issue arises when in the data class' serialize method.
public class ExtFileSystemIdentifier implements Serializeable {
public <T> T serialize(Serializer<T> serializer) {
return serializer.serialize(this);
}
}
What I think is happening is that at compile time, the compiler selects the overload of serialize from the options available on the Serializer interface (since the compiler knows the parameter only as a Serializer<T>). This means that by the time the runtime does the V-table lookup, it is resolving the wrong method, and it will select JsonSerializer.serialize(Serializeable), leading to the infinite loop.
A possible solution to this problem is to provide a more type-specific serialize method in the data class.
public interface ProjectASerializeable extends Serializeable {
<T> T serialize(ProjectASerializer<T> serializer);
}
public class ExtFileSystemIdentifier implements ProjectASerializeable {
public <T> T serialize(Serializer<T> serializer) {
return serializer.serialize(this);
}
public <T> T serialize(ProjectASerializer<T> serializer) {
return serializer.serialize(this);
}
}
Program control flow will bounce around until the most type-specific Serializer overload is reached. At that point, the ProjectASerializer<T> interface will have a more specific serialize method for the data class from Project A; avoiding the infinite loop.
This makes double dispatch slightly less attractive. There is now more boilerplate code in the data classes. It was bad enough that obviously duplicated code could not be factored out to a parent class because that would circumvent the double-dispatch trickery. Now there is more of it, and it compounds with the depth of the Serializer inheritance hierarchy.
Double-dispatch is static typing trickery. Is there some more static typing trickery that will help me avoid the duplicated code?
As you noticed, the serialize method of
public interface Serializer<T> {
T serialize(Serializable s);
}
does not make sense. The visitor pattern is there for doing case analysis, but with this method you make no progress (you already know it is a Serializeable), hence the inevitable infinite recursion.
What would make sense is a base Serializer interface that has at least one concrete type to visit, and that concrete type shared between the two projects. If there is no shared concrete type, then there is no hope of a Serializer hierarchy being useful.
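To illustrate, a minimal sketch under the assumption that both projects share one concrete type; SharedHeader is a made-up name standing in for whatever type the projects actually have in common:

```java
// Hypothetical concrete type known to both projects.
class SharedHeader {
    final String name;
    SharedHeader(String name) { this.name = name; }
}

// The base visitor is only useful because it visits a concrete type:
// dispatching on SharedHeader is real case analysis, not a no-op.
interface BaseSerializer<T> {
    T serialize(SharedHeader header);
}

class JsonHeaderSerializer implements BaseSerializer<String> {
    @Override
    public String serialize(SharedHeader header) {
        return "{\"name\":\"" + header.name + "\"}";
    }
}
```

Project-specific serializer interfaces would then extend BaseSerializer with overloads for their own concrete types, as in the question's split.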
Now, if you are looking to reduce boilerplate when implementing the visitor pattern, I suggest using a code generator (via annotation processing), e.g. adt4j or derive4j.

Is there benefit in a generified interface?

Recently in an answer it was suggested to me that this:
public interface Operation<R extends OperationResult, P extends OperationParam> {
public R execute(P param);
}
Is better than this:
public interface Operation {
public OperationResult execute(OperationParam param);
}
I however can't see any benefit in using the first code block over the second one.
Given that both OperationResult and OperationParam are interfaces, an implementer needs to return a derived class anyway, and this seems quite obvious to me.
So do you see any reason to use the first code block over the second one?
This way you can declare your Operation implementations to return a more specific result, e.g.
class SumOperation implements Operation<SumResult, SumParam>
Though whether this is of any value to your application depends entirely on the situation.
Update: Of course you could return a more specific result without having a generic interface, but this way you can restrict the input parameter type as well.
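To make that concrete, here is a sketch with assumed SumParam/SumResult types (not from the question); the payoff is that callers get the specific types with no casts:

```java
// Assumed marker interfaces and types, illustrating the compile-time benefit
// of the generified Operation interface.
interface OperationParam {}
interface OperationResult {}

interface Operation<R extends OperationResult, P extends OperationParam> {
    R execute(P param);
}

class SumParam implements OperationParam {
    final int a, b;
    SumParam(int a, int b) { this.a = a; this.b = b; }
}

class SumResult implements OperationResult {
    final int sum;
    SumResult(int sum) { this.sum = sum; }
}

class SumOperation implements Operation<SumResult, SumParam> {
    @Override
    public SumResult execute(SumParam param) {
        // The return type is the specific SumResult, not OperationResult.
        return new SumResult(param.a + param.b);
    }
}
```

With the non-generic interface, execute would return OperationResult and the caller would have to cast it to SumResult before reading sum; here the specific type is available directly, and passing the wrong parameter type is a compile error.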

Factory configured by an object's class - how to do it nicely?

In my current project we have a couple of data classes that deal with core concepts of our application domain. Now in some places in our project we need different behavior depending on the concrete object at hand. E.g. a JList has the task of rendering a list of objects, but we want the rendering to be slightly different depending on the object's class. E.g. an object of class A should be rendered differently than one of class B, and class C is a totally different animal, too.
We encapsulate the behavior in strategy classes and then have a factory that returns a class suitable for the object that is to be rendered. From the client perspective, that is okay, I guess.
Now from the perspective of the factory this gets pretty ugly, because all we could come up with so far is stuff like
if (obj instanceof classA) return strategyA;
else if (obj instanceof classB) return strategyB;
...
Now, for a pool of already instantiated objects, a map would also work. But if the factory has to actually create a new object, we'd have to put another layer of factory/strategy objects into that map that then return a suitable strategies for displaying.
Is there any design pattern that deals nicely with this kind of problem?
One way to do this is to delegate the implementation to the object itself. For instance, if classes A, B, and C are all rendered differently, you might have them each implement an interface such as:
interface IRenderable {
public void render();
}
Then, each one would provide its own implementation of render(). To render a List of IRenderable, you would only need to iterate over its members and call the render() method of each.
Using this approach, you never have to explicitly check the type of an object. This is particularly useful if any of your classes are ever subclassed. Suppose you had class classD which extends classA, and was to be rendered differently from A. Now code like:
if (obj instanceof classA) return strategyA;
...
else if (obj instanceof classD) return strategyD;
will fail - you would always need to check in order of most to least specific. Better not to have to think about such things.
Edit: in response to your comment - if your goal is to keep the front end code out of the model objects, but you still want to avoid explicit checks, you can use the visitor pattern.
Something like this:
class Renderer {
public void visit(classA obj) { /* render A */ }
public void visit(classB obj) { /* render B */ }
// etc
}
and
class classA {
public void accept(Renderer r) {
r.visit(this);
}
}
Now, all the rendering code goes into the Renderer, and the model objects choose which method to call.
Instead of the if/else block you can have a Factory interface, like this
interface RendererFactory {
boolean supports(Object obj);
Renderer createRenderer(Object obj);
}
Then you can have an implementation which asks a list of other implementations whether one of them supports a given type. The other implementations may do an instanceof check in their supports method.
The consumer of the renderer only needs to call createRenderer.
Advantage: Configuration of your RendererFactories possible
Disadvantage: You have to take care of the order of the RendererFactories (but you have to do that with if/else too)
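A sketch of that composite factory; the Renderer interface and all names here are assumed for illustration:

```java
import java.util.List;

// Assumed renderer abstraction; the question's strategies stand in for this.
interface Renderer {
    void render(Object obj);
}

interface RendererFactory {
    boolean supports(Object obj);
    Renderer createRenderer(Object obj);
}

// Composite that delegates to the first factory supporting the object's type.
class CompositeRendererFactory implements RendererFactory {
    private final List<RendererFactory> delegates;

    CompositeRendererFactory(List<RendererFactory> delegates) {
        this.delegates = delegates;
    }

    @Override
    public boolean supports(Object obj) {
        return delegates.stream().anyMatch(f -> f.supports(obj));
    }

    @Override
    public Renderer createRenderer(Object obj) {
        return delegates.stream()
                .filter(f -> f.supports(obj))
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException(
                        "No factory for " + obj.getClass()))
                .createRenderer(obj);
    }
}
```

Order matters because the first supporting factory wins, which mirrors the ordering concern of an if/else chain.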
I like the factory-serving-strategy objects a lot. But I wonder if you could treat it like IoC and register strategies for specific types? You wouldn't have a bunch of if-elses, but you would have to 'register' them. It might also be nice for testing: rather than implementing a 'mock factory', you'd register 'mock strategies'.
You can have your model classes implement an interface like:
public interface RenderingStrategyProvider {
public RenderingStrategy getRenderingStrategy();
}
and return an instance of the appropriate strategy. Like:
public class ClassA implements RenderingStrategyProvider {
public RenderingStrategy getRenderingStrategy() {
return new ClassARenderingStrategy(this);
// or without this, depending on your other code
}
}
In that case you wouldn't even need a factory. Or if you want one, it would contain just a single method call. That way you don't have the presentation logic inside your model classes.
Alternatively, you can use convention plus reflection, but this is a bit weird. The strategy for a model class ModelClass would be ModelClassStrategy, and you can have:
public RenderingStrategy createRenderingStrategy(Object modelObject) {
return (RenderingStrategy) Class.forName(
modelObject.getClass().getName() + "Strategy").newInstance();
}
