I have an abstract class that implements an interface. I then have several classes that extend that abstract class; each is in turn composed of a hierarchy of objects plus one or more Lists of objects extending the same abstract class, repeated for several levels. In essence,
public interface Bar
public abstract class BarImpl implements Bar
public class Foo extends BarImpl {
private String value1;
private String value2;
private List<Foo2> fooSubs;
public List<Foo2> getFooSubs() {
return fooSubs;
}
}
public class Foo2 extends BarImpl {
private String value3;
private String value4;
private List<Foo3> fooSubs;
public List<Foo3> getFooSubs() {
return fooSubs;
}
}
...etc...
The data in question is actually X12 healthcare claim data, for those who are familiar. I've defined a Loop interface to correspond to the various loops that compose the X12 file.
My issue is this - I need to also be able to describe a single transaction, in theory using the same object or some wrapper on that object, where for some specified depth the size of each list of objects is 1.
My first thought was to add a boolean singleTransaction to the BarImpl abstract class. Each class extending it would then have a check in its addFoo methods to make sure that the object did not grow beyond a single entry. Before converting to FooSingle I would check as well.
public void addFoo(Foo foo) throws FooException {
if (singleTransaction && fooSubs.size() >= 1)
throw new FooException();
else
fooSubs.add(foo);
}
I would also have to remove the setFoo method, so as to prevent an already-populated List from being assigned. Perhaps just make it final...
Does this seem like a reasonable way to go about this? I could then have a SingleBarImpl class that would verify it had a single path down the hierarchy, propagate the boolean down, and could then safely assume that there was only one object per list for the specified classes. This would simplify access to the hierarchy, since I would no longer need to worry about multiple list entries.
This feels very ugly, which is why I raise the question, and I wasn't quite sure what to search for to find an alternative. So I decided to stop lurking, create an account, and throw this out there. So... any ideas? Am I missing some design pattern that would make this much more elegant?
I am not familiar with X12 healthcare claim data and hence can't properly model the domain, but it sounds like you want to use the GoF composite pattern. A "Leaf" implementation class could easily replace your singleTransaction flag.
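Sketched out, that might look something like the following - CompositeLoop and LeafLoop are names I made up for illustration, and the method names are guesses at your Loop interface:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

interface Loop {
    List<Loop> getSubLoops();
}

class CompositeLoop implements Loop {
    private final List<Loop> subLoops = new ArrayList<Loop>();

    public List<Loop> getSubLoops() {
        return subLoops;
    }

    public void addSubLoop(Loop subLoop) {
        subLoops.add(subLoop);
    }
}

class LeafLoop implements Loop {
    // A leaf has no children, so the "no further branching" case is
    // expressed by the type itself rather than a runtime boolean check.
    public List<Loop> getSubLoops() {
        return Collections.emptyList();
    }
}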
I need to add one optional method to an existing abstract class that is extended by more than 50 classes:
public abstract class Animal{...}
This method is not used by all of those classes, but in the future it probably will be.
The structure of one of my classes is:
public class Dog extends Animal {...}
The cleanest way would be an abstract method, but that obliges me to change all the existing classes.
The workaround is to create an "empty" method in the abstract class:
public String getString(Map<String, Object> params){
return "";
}
and then override it where needed in the classes that extend the abstract class.
Is there any better solution?
Having an "empty" method is fine. But in order to be sure, that it will be implemented where it is really needed, consider throwing an exception by default from this method:
throw new UnsupportedOperationException();
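In your Animal class that would look something like this (a sketch using the Map-based signature from your question):
import java.util.Map;

public abstract class Animal {
    // Fail-fast default: subclasses that actually support the operation
    // override this; all the others inherit the exception.
    public String getString(Map<String, Object> params) {
        throw new UnsupportedOperationException();
    }
}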
A similar approach is used in the java.util.AbstractList class:
public E set(int index, E element) {
throw new UnsupportedOperationException();
}
I can't help feeling like you have some architectural/design issues here, but without knowing more, I can't say for sure. If 50 classes are going to inherit from Animal, but not all of them are going to use this method, then I'm wondering if they should really inherit from one common class. Perhaps you need further levels of sub-classing... think Kingdom->Phylum->Sub-Phylum. But my gut says that's still not the right answer for you.
Step back - what are you trying to accomplish? If you're going to implement this function on these classes in the future, then you must also be changing your code to know to use/expect this. The point of inheritance is to allow code to refer to an object's expected common behavior without knowing what type of object it's referencing. In your getString() example, you might have a function as such:
public void sendMessage(Animal someAnimal) {
    String message = someAnimal.getString(); // parameters elided for brevity
    // Send the message
}
You can pass it a dog, a cat, a platypus - whatever. The function doesn't care, because it can query the message from its base class.
So when you say you'll have animals that don't implement this message... that implies you'll have logic that ensures only cats and dogs will call this function, and that a platypus is handled differently (or not at all). That kind of defeats the point of inheritance.
A more modern approach is to use interfaces to establish a "has a" relationship instead of an "is a" relationship. A plane might have an IEngine member, but the specific type of engine can be set at run-time, either by the plane class itself or by the app if the member is writable.
public interface IEngine {
    String getStatus();
    String getMileage();
}

public class Cessna {
    public IEngine engine;

    public Cessna() {
        engine = new PropellerEngine();
    }
}
You could also inherit directly from that interface... Animals that don't implement IAnimalMessage wouldn't implement that function; animals that do would be required to. The downside is that each animal has to have its own implementation, but since your base class currently has an abstract function with no body, I'm assuming that's a non-issue. With this approach, you can determine whether the object implements the interface like this:
// If your platypus doesn't implement IAnimalMessage,
// the instanceof test fails and the block is skipped.
if (myPlatypus instanceof IAnimalMessage) {
    IAnimalMessage animalMessage = (IAnimalMessage) myPlatypus;
    String message = animalMessage.getMessage();
}
public interface IAnimalMessage {
    String getMessage();
}

public class Platypus implements IAnimalMessage {
    // Add this implementation when Platypus implements IAnimalMessage...
    // Not needed before then
    public String getMessage() {
        return "I'm a cowboy, howdy, howdy, howdy!";
    }
}
That's probably the closest to what you're asking for that I can suggest... classes that don't need the message won't implement that interface until they do, but the code can easily check whether the interface is implemented and act accordingly.
I can offer more helpful/specific thoughts, but I'd need to understand the problem you're trying to solve better.
I have a number of dumb object classes that I would like to serialize as Strings for the purpose of out-of-process storage. This is a pretty typical place to use double-dispatch / the visitor pattern.
public interface Serializable {
    <T> T serialize(Serializer<T> serializer);
}
public interface Serializer<T> {
    T serialize(Serializable s);
    T serialize(FileSystemIdentifier fsid);
    T serialize(ExtFileSystemIdentifier extFsid);
    T serialize(NtfsFileSystemIdentifier ntfsFsid);
}
public class JsonSerializer implements Serializer<String> {
    public String serialize(Serializable s) {...}
    public String serialize(FileSystemIdentifier fsid) {...}
    public String serialize(ExtFileSystemIdentifier extFsid) {...}
    public String serialize(NtfsFileSystemIdentifier ntfsFsid) {...}
}
public abstract class FileSystemIdentifier implements Serializable {}
public class ExtFileSystemIdentifier extends FileSystemIdentifier {...}
public class NtfsFileSystemIdentifier extends FileSystemIdentifier {...}
With this model, the classes that hold data don't need to know about the possible ways to serialize that data. JSON is one option, but another serializer might "serialize" the data classes into SQL insert statements, for example.
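For illustration, such a serializer might look like this (the SqlSerializer name and the table and column names are invented for the example):
public class SqlSerializer implements Serializer<String> {
    public String serialize(Serializable s) {
        return s.serialize(this); // dispatch back to the concrete overload
    }
    public String serialize(FileSystemIdentifier fsid) {
        return "INSERT INTO fs_identifiers (type) VALUES ('unknown');";
    }
    public String serialize(ExtFileSystemIdentifier extFsid) {
        return "INSERT INTO fs_identifiers (type) VALUES ('ext');";
    }
    public String serialize(NtfsFileSystemIdentifier ntfsFsid) {
        return "INSERT INTO fs_identifiers (type) VALUES ('ntfs');";
    }
}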
If we take a look at the implementation of one of the data classes, it looks pretty much the same as all the others. The class calls the serialize() method on the Serializer passed to it, providing itself as the argument.
public class ExtFileSystemIdentifier extends FileSystemIdentifier {
public <T> T serialize(Serializer<T> serializer) {
return serializer.serialize(this);
}
}
I understand why this common code cannot be pulled into a parent class. Although the code is shared, the compiler knows unambiguously, inside that method, that the type of this is ExtFileSystemIdentifier, and can (at compile time) emit the bytecode to call the most type-specific overload of serialize().
I believe I understand most of what is happening in the V-table lookup as well. The compiler only knows the serializer parameter as being of the abstract type Serializer. It must, at runtime, look into the V-table of the serializer object to discover the location of the serialize() method for the specific subclass, in this case JsonSerializer.serialize().
The typical usage is to take a data object, known to be Serializable and serialize it by giving it to a serializer object, known to be a Serializer. The specific types of the objects are not known at compile time.
List<Serializable> list = //....
Serializer<String> serializer = //....
List<String> output = list.stream()
        .map(serializer::serialize)
        .collect(Collectors.toList()); // a terminal operation is needed for the mapping to run
This instance works similarly to the other invocation, but in reverse.
public class JsonSerializer implements Serializer<String> {
    public String serialize(Serializable s) {
        return s.serialize(this);
    }
    // ...
}
The V-table lookup is now done on the instance of Serializable, and it will find, for example, ExtFileSystemIdentifier.serialize. The compiler can statically determine that the closest matching overload is the one for Serializer<T> (it just so happens to also be the only overload).
This is all well and good. It achieves the main goal of keeping the input and output data classes oblivious to the serialization class. And it also achieves the secondary goal of giving the user of the serialization classes a consistent API regardless of what sort of serialization is being done.
Imagine now that a second set of dumb data classes exist in a different project. A new serializer needs to be written for these objects. The existing Serializable interface can be used in this new project. The Serializer interface, however, contains references to the data classes from the other project.
In an attempt to generalize this, the Serializer interface could be split into three:
public interface Serializer<T> {
T serialize(Serializable s);
}
public interface ProjectASerializer<T> extends Serializer<T> {
T serialize(FileSystemIdentifier fsid);
T serialize(ExtFileSystemIdentifier fsid);
// ... other data classes from Project A
}
public interface ProjectBSerializer<T> extends Serializer<T> {
T serialize(ComputingDevice device);
T serialize(PortableComputingDevice portable);
// ... other data classes from Project B
}
In this way, the Serializer and Serializable interfaces could be packaged and reused. However, this breaks the double dispatch and results in an infinite loop. This is the part of the V-table lookup I'm uncertain about.
When stepping through the code in a debugger, the issue arises in the data class' serialize method.
public class ExtFileSystemIdentifier implements Serializable {
public <T> T serialize(Serializer<T> serializer) {
return serializer.serialize(this);
}
}
What I think is happening is that at compile time, the compiler is attempting to select the correct overload for the serialize method, from the available options in the Serializer interface (since the compiler knows it only as a Serializer<T>). This means by the time we get to the runtime to do the V-table lookup, the method being looked for is the wrong one and the runtime will select JsonSerializer.serialize(Serializable), leading to the infinite loop.
A possible solution to this problem is to provide a more type-specific serialize method in the data class.
public interface ProjectASerializable extends Serializable {
<T> T serialize(ProjectASerializer<T> serializer);
}
public class ExtFileSystemIdentifier implements ProjectASerializable {
public <T> T serialize(Serializer<T> serializer) {
return serializer.serialize(this);
}
public <T> T serialize(ProjectASerializer<T> serializer) {
return serializer.serialize(this);
}
}
Program control flow will bounce around until the most type-specific Serializer overload is reached. At that point, the ProjectASerializer<T> interface will have a more specific serialize method for the data class from Project A; avoiding the infinite loop.
This makes the double dispatch slightly less attractive. There is now more boilerplate code in the data classes. It was bad enough that obviously duplicate code couldn't be factored out to a parent class because that would circumvent the double-dispatch trickery. Now there is more of it, and it compounds with the depth of the Serializer inheritance hierarchy.
Double-dispatch is static typing trickery. Is there some more static typing trickery that will help me avoid the duplicated code?
As you noticed, the serialize method of
public interface Serializer<T> {
T serialize(Serializable s);
}
does not make sense. The visitor pattern is there for doing case analysis, but with this method you make no progress (you already know it is a Serializable) - hence the inevitable infinite recursion.
What would make sense is a base Serializer interface that has at least one concrete type to visit, with that concrete type shared between the two projects. If there is no shared concrete type, then there is no hope of a Serializer hierarchy being useful.
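For example (a sketch - Document stands in for whatever concrete type the two projects actually share):
// The base interface visits only the types common to both projects.
public interface Serializer<T> {
    T serialize(Document shared);
}

// Each project extends it with overloads for its own data classes.
public interface ProjectASerializer<T> extends Serializer<T> {
    T serialize(FileSystemIdentifier fsid);
    T serialize(ExtFileSystemIdentifier extFsid);
}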
Now, if you are looking to reduce boilerplate when implementing the visitor pattern, I suggest using a code generator (via annotation processing), e.g. adt4j or derive4j.
I have read Item 16 from Effective Java and Prefer composition over inheritance?, and I am now trying to apply it to code I wrote a year ago, when I was just starting to learn Java.
I am trying to model an animal, which can have traits (e.g. Swimming, Carnivorous) and eat different types of food.
public class Animal {
private final List<Trait> traits = new ArrayList<Trait>();
private final List<Food> eatenFood = new ArrayList<Food>();
}
In Item 16, a reusable composition-and-forwarding approach is suggested:
public class ForwardingSet<E> implements Set<E> {
private final Set<E> s;
public ForwardingSet(Set<E> s) {this.s = s;}
//implement all interface methods
public void clear() {s.clear();}
//and so on
}
public class InstrumentedSet<E> extends ForwardingSet<E> {
//counter for how many elements have been added since set was created
}
I can implement ForwardingList<E>, but I am not sure how I would apply it twice in the Animal class. Right now Animal has many methods like the ones below, for traits and also for eatenFood. This seems awkward to me.
public boolean addTrait (Trait trait) {
return traits.add(trait);
}
public boolean removeTrait (Trait trait) {
return traits.remove(trait);
}
How would you redesign the Animal class?
Should I keep it as it is or try to apply ForwardingList?
There is no reason you'd want to specialize a List for this problem. You are already using Composition here, and it's pretty much what I would expect from the class.
Composition is basically creating a class which has one (or usually more) members. Forwarding is effectively having your methods simply make a call to one of the objects it holds, to handle it. This is exactly what you're already doing.
Anyhow, the methods you mention are exactly the sort of methods I would expect for a class that has-a Trait. I would expect similar addFood / removeFood sorts of methods for the food. If they're wrong, they're the exact sort of wrong that pretty much everyone does.
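In other words, something like this (a sketch reusing your Trait and Food types):
import java.util.ArrayList;
import java.util.List;

public class Animal {
    private final List<Trait> traits = new ArrayList<Trait>();
    private final List<Food> eatenFood = new ArrayList<Food>();

    public boolean addTrait(Trait trait) { return traits.add(trait); }
    public boolean removeTrait(Trait trait) { return traits.remove(trait); }

    // Mirror-image methods for food, as described above.
    public boolean addFood(Food food) { return eatenFood.add(food); }
    public boolean removeFood(Food food) { return eatenFood.remove(food); }
}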
IIRC (my copy of Effective Java is at work): ForwardingSet exists simply because you cannot safely extend a class that wasn't explicitly designed to be extended. If self-usage patterns etc. aren't documented, you can't reasonably delegate calls to super methods, because you don't know whether addAll calls add repeatedly in the default implementation. You can, however, safely delegate calls, because the object you are delegating to will never call back into the wrapper object. This absolutely doesn't apply here; you're already delegating calls to the list.
Recently, I discovered code with the following structure:
Interface:
public interface Base<T> {
public T fromValue(String v);
}
Enum implementation:
public enum AddressType implements Base<AddressType> {
NotSpecified("Not Specified."),
Physical("Physical"),
Postal("Postal");
private final String label;
private AddressType(String label) {
this.label = label;
}
public String getLabel() {
return this.label;
}
@Override
public AddressType fromValue(String v) {
return valueOf(v);
}
}
My immediate reaction is that one cannot create an instance of an enum by deserialization or by reflection, so fromValue() should be static.
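That is, I would expect something like this instead (a sketch of the static variant):
public enum AddressType {
    NotSpecified("Not Specified."),
    Physical("Physical"),
    Postal("Postal");

    private final String label;

    private AddressType(String label) {
        this.label = label;
    }

    // Static factory: no enum instance is needed to do the lookup.
    // (The Base interface is dropped, since a static method cannot
    // implement an interface method.)
    public static AddressType fromValue(String v) {
        return valueOf(v);
    }
}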
I'm not trying to start a debate, but is this correct? I have read Why would an Enum implement an interface, and I totally agree with the answers provided, but the above example is invalid.
I am asking because the "architect" doesn't want to take my answer, so this is to build a strong argument (with facts) for why the above approach is good or bad.
Your Base interface does not declare valueOf, and the fromValue method is indeed implemented, so I see no reason why this code should not compile. If you are referring to the valueOf call inside fromValue, that is a call to the static method defined for every enum. I would have to agree, though, that the design is quite misguided, as you need an arbitrary member of the enum just to call fromValue and get the real member.
On the other hand, in a project that I'm working on right now I have several enums implementing a common interface, because the enums are related and I want to be able to treat them uniformly with respect to their common semantics.
In my opinion this design is wrong. In order to use fromValue() one has to get an instance of this enum beforehand. Thus, it will look like:
AddressType type = AddressType.Postal.fromValue("Physical");
What sense does it make?
Your Base interface seems to serve a whole other purpose (if any).
It is probably meant to be a String-to-T converter, since it generates a T from a String. The enum is simply wrong if it implements this interface (@yegor256 already pointed out why). So you can keep the enum, and you can have some AddressTypeConverter implements Base<AddressType> that calls AddressType.valueOf() in its fromValue() method.
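A sketch of that converter (the class name is just illustrative):
public class AddressTypeConverter implements Base<AddressType> {
    @Override
    public AddressType fromValue(String v) {
        return AddressType.valueOf(v);
    }
}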
But don't get me wrong: enums implementing interfaces are NOT a bad practice, it's just this particular usage that is completely wrong.
I have n classes which either stack or do not stack on top of one another. All these classes extend the same class (CellObject). I know that more classes will be added to this list, and I want to create some kind of way that it is easy to manipulate "stackability" in one place.
I was thinking of creating a matrix, where the row-index is the class on the bottom of the stack and the column index is the class on the top of the stack. The value would be true (or 1) if you can stack top on bottom, false (0) otherwise.
However, my colleague suggests creating n+1 methods called canStack. One general canStack method would switch on an instanceof statement that would direct it into one of the n submethods. Each of the submethods would just answer the question of whether the top object can stack on the bottom object by itself.
I think my solution is more elegant/clean. Is this true? If so, how would I implement it?
I changed objects to classes
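For concreteness, I imagine the matrix looking something like this (a sketch; Crate and Barrel are stand-ins for my real subclasses):
import java.util.HashMap;
import java.util.Map;

public final class StackRules {
    // Row index = bottom class, column index = top class.
    private static final boolean[][] CAN_STACK = {
            /* bottom: Crate  */ { false, true },
            /* bottom: Barrel */ { true, false },
    };

    private static final Map<Class<? extends CellObject>, Integer> INDEX =
            new HashMap<Class<? extends CellObject>, Integer>();
    static {
        INDEX.put(Crate.class, 0);
        INDEX.put(Barrel.class, 1);
    }

    // Assumes every concrete subclass has been registered in INDEX.
    public static boolean canStack(CellObject top, CellObject bottom) {
        return CAN_STACK[INDEX.get(bottom.getClass())][INDEX.get(top.getClass())];
    }
}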
Your solution would be shorter, but it has the drawback that if you add a subclass of CellObject, you could potentially forget to alter your matrix. Even if you know this should happen, someone else might some day work on the code. Then again, his solution has that same issue.
Now, this is a slightly wild idea, but since you're essentially saying something about classes it feels like a metadata facility is in order. What you could do is define an annotation that states which classes can be stacked onto the annotated class and/or which classes it can stack on.
Something like this:
@interface Stackable {
    Class<? extends CellObject>[] stackables(); // Classes that may stack on the annotated one
    Class<? extends CellObject>[] pillars(); // Classes this one can stack on
}
Then you could create an annotation processor that uses this metadata. It could generate, at compile time, a configuration file that you read in at run time, or generate some boilerplate code for you. You could generate meta-classes, as JPA does for its type-safe query API, that say something about the class. Or you could even retain the annotations at runtime and use reflection to find out what can stack on what, building up your desired array ad hoc rather than having to code it.
If you use an annotation processor, then maybe it would be safer to use String arrays with canonical class names, since the Class objects might not be available yet at compile time. Its feasibility would also depend on whether all CellObject classes are always in the same compilation run or not.
Using reflection (possible when you make sure the annotation has RetentionPolicy.RUNTIME) seems like a viable option here. Check the array; if the corresponding element is null (possible by using Boolean instead of boolean), do the reflection work and fill in that element. The next time, you can avoid the reflection overhead, lazily filling the array as needed.
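A rough sketch of that lazy lookup, using nested maps rather than a Boolean array (and assuming the annotation above is retained with RetentionPolicy.RUNTIME):
import java.util.HashMap;
import java.util.Map;

public final class StackabilityCache {
    private static final Map<Class<?>, Map<Class<?>, Boolean>> cache =
            new HashMap<Class<?>, Map<Class<?>, Boolean>>();

    public static boolean canStackOn(Class<? extends CellObject> top,
                                     Class<? extends CellObject> bottom) {
        Map<Class<?>, Boolean> row = cache.get(bottom);
        if (row == null) {
            row = new HashMap<Class<?>, Boolean>();
            cache.put(bottom, row);
        }
        Boolean known = row.get(top);
        if (known == null) {
            // First query for this pair: fall back to reflection once.
            known = Boolean.FALSE;
            Stackable meta = bottom.getAnnotation(Stackable.class);
            if (meta != null) {
                for (Class<? extends CellObject> c : meta.stackables()) {
                    if (c.equals(top)) {
                        known = Boolean.TRUE;
                        break;
                    }
                }
            }
            row.put(top, known); // remember the answer for next time
        }
        return known.booleanValue();
    }
}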
EDIT: I forgot to mention that my solution doesn't force you to keep the metadata up to date either. Also, the complexity could be reduced if stackability is transitive - that is, A can stack on B and B can stack on C implies A can stack on C.
The matrix approach would scale as O(n²). In contrast, the other approach would scale as O(n), but it would be riskier to maintain.
As an alternative, consider letting an abstract CellObject implement a suitable Stackable interface, but defer the implementation to the n concrete subclasses. The compiler will identify missing implementations immediately. See also When an Abstract Class Implements an Interface.
interface Stackable {
boolean canStack(Stackable other);
}
abstract class CellObject implements Stackable {}
class Cell01 extends CellObject {
@Override
public boolean canStack(Stackable other) {
return true; // TODO
}
}
class Cell02 extends CellObject {
@Override
public boolean canStack(Stackable other) {
return true; // TODO
}
}
...
I don't think your matrix concept is a good way to achieve your goal. You'll end up with a huge matrix that contains every possibility. Obviously, extracting the information you want from the matrix will be fairly easy, but maintaining it in the long run might prove to be a painful experience as more CellObject subclasses are added. The same applies to the n + 1 methods your colleague suggested.
In both cases, every time you add a subclass of CellObject, you will have to either go to the class that holds the matrix, create a new row, and a new column for each existing row, and manually specify whether this new class can be stacked on class X, or add a new canStackOnNewClassX() method to each existing class. Both solutions are bug-prone in my opinion (you might easily forget to update your matrix, or enter the wrong information, as the code might not be easily readable); there are more elegant ways to handle this kind of problem.
One thing you could do is keep a map in your CellObject superclass that holds your "stackability" information, and provide methods to populate this map and to query whether a member of class A can be stacked on a member of class B. Something like this:
public abstract class CellObject
{
private static Map<Class<? extends CellObject>, Map<Class<? extends CellObject>, Boolean>> fullStackabilityMap =
new HashMap<Class<? extends CellObject>, Map<Class<? extends CellObject>, Boolean>> ();
protected static void addStackableOnObjectInformation (Class<? extends CellObject> baseObjectClass, Class<? extends CellObject> objectToStack, boolean canStackOnObject)
{
Map<Class<? extends CellObject>, Boolean> stackableMapForObject = fullStackabilityMap.get (baseObjectClass);
if (stackableMapForObject == null)
{
stackableMapForObject = new HashMap<Class<? extends CellObject>, Boolean> ();
fullStackabilityMap.put (baseObjectClass, stackableMapForObject);
}
stackableMapForObject.put (objectToStack, canStackOnObject);
}
protected boolean isStackableOnObject (CellObject baseObject)
{
Map<Class<? extends CellObject>, Boolean> stackableMapForObject = CellObject.fullStackabilityMap.get (baseObject.getClass ());
if (stackableMapForObject == null)
{
return false;
}
Boolean canStackOnObject = stackableMapForObject.get (this.getClass ());
return canStackOnObject != null ? canStackOnObject : false; //Assume that the object cannot be stacked if it was not specified
}
}
public class CellObjectA extends CellObject
{
}
public class CellObjectB extends CellObject
{
static
{
addStackableOnObjectInformation (CellObjectB.class, CellObjectA.class, true);
}
}
public class CellObjectC extends CellObject
{
static
{
addStackableOnObjectInformation (CellObjectC.class, CellObjectA.class, true);
addStackableOnObjectInformation (CellObjectC.class, CellObjectB.class, true);
}
}
The creation of fullStackabilityMap in CellObject looks complicated due to the lack of the diamond operator in Java 6, but it could be simplified if you wrote a utility method that creates maps, or used Guava.
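For example, a small helper in the pre-Java-7 style (this is what Guava's Maps.newHashMap() provides):
// Generic-method type inference supplies the type arguments, so the field
// declaration shrinks to: ... fullStackabilityMap = newHashMap();
public static <K, V> HashMap<K, V> newHashMap() {
    return new HashMap<K, V>();
}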
So, in this example, CellObjectC instances would not be stackable on any kind of object; CellObjectB instances could be stacked on CellObjectC objects only; and CellObjectA instances could be stacked on either CellObjectB or CellObjectC objects.
The only work you would have to do each time you add a new class is to update the static initializers of your existing classes to make sure this new class is accounted for. The advantages of this solution are:
You only have to specify which kind of object can be stacked on which kind of object. No need to fully initialize a matrix with all possibilities.
You can ask an object directly whether it can be stacked on any kind of object, rather than having to statically poll an external class, which to me is easier to maintain and generates cleaner code.
You do not have to maintain n+1 methods that tell you whether object A can be stacked on object B, which would be a total nightmare if you end up with a significant number of CellObject subclasses.