I have to take over and improve/finish some code that transforms Java objects from a third party library into internal objects. Currently this is done through a big if-else statement along the lines of:
if (obj instanceof X)
{
    // code to initialize internal object
}
else if (obj instanceof Y)
{
    // code to initialize different object
}
else if (obj instanceof Z)
{
    // more init code
}
...
Personally I don't find this solution satisfactory; it's long and messy, and to make matters worse, many of the if-else blocks contain nested if-else blocks dealing with subclasses and edge cases. Is there a better solution to this problem?
Create an interface like this
public interface Converter<S, T> {
    T convert(S source);
}
and implement it for each object of X,Y,Z. Then put all known converters into a Map and get happy!
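For example, the registry might be wired up like this (a sketch; XConverter, YConverter and InternalObject are placeholder names, not types from the question):

import java.util.HashMap;
import java.util.Map;

public class ConverterRegistry {
    private final Map<Class<?>, Converter<?, InternalObject>> converters = new HashMap<>();

    public ConverterRegistry() {
        converters.put(X.class, new XConverter());
        converters.put(Y.class, new YConverter());
    }

    @SuppressWarnings("unchecked")
    public InternalObject convert(Object source) {
        Converter<Object, InternalObject> converter =
                (Converter<Object, InternalObject>) converters.get(source.getClass());
        if (converter == null) {
            throw new IllegalArgumentException("No converter registered for " + source.getClass());
        }
        return converter.convert(source);
    }
}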
While it doesn't work for edge cases, building a Map between Classes and Converters
X.getClass() -> X Converter
Y.getClass() -> Y Converter
would get you a lot closer. You'd want to also check superclasses if the leaf class is not found.
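A superclass-walking lookup might look like this (a sketch, assuming a Map<Class<?>, Converter<?, InternalObject>> field named converters):

// Walk up the class hierarchy until a registered converter is found;
// returns null if no superclass matches either.
private Converter<?, InternalObject> findConverter(Class<?> type) {
    for (Class<?> c = type; c != null; c = c.getSuperclass()) {
        Converter<?, InternalObject> converter = converters.get(c);
        if (converter != null) {
            return converter;
        }
    }
    return null;
}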
Code like this, with all of its instanceof conditions, screams for an interface!
You may want to create a public interface Initializable, with a method public void initialize().
Then all of your if-else blocks simply resolve into a single obj.initialize() call.
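A minimal sketch of that idea; note it presumes the classes whose instances you are initializing are under your control (the third-party classes from the question would first have to be wrapped). InternalX is an illustrative name:

public interface Initializable {
    void initialize();
}

// A class you control can carry its own initialization logic.
public class InternalX implements Initializable {
    @Override
    public void initialize() {
        // code to initialize this object's state
    }
}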
If these internal objects present an interface to the application, rather than being used directly, adapt them rather than converting them.
That is, if you have something like this:
public class ThirdPartyClass { ... }
public interface InternalInterface { ... }
public class InternalClass { ... }
InternalInterface foo(ThirdPartyClass thirdParty) {
    InternalClass internal = new InternalClass();
    // convert thirdParty -> internal
    return internal;
}
Then instead do something like this:
public class ThirdPartyClass { ... }
public interface InternalInterface { ... }
public class InternalClass { ... }
public class ThirdPartyInternalAdapter implements InternalInterface {
    private final ThirdPartyClass thirdParty;

    public ThirdPartyInternalAdapter(ThirdPartyClass thirdParty) {
        this.thirdParty = thirdParty;
    }

    // implement interface in terms of thirdParty
}
It's not clear from your question if this applies, but if it does, this may be easier and more efficient than direct object-to-object conversion.
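A usage sketch: instead of copying fields into a new object, the caller simply wraps the third-party instance:

// No field-by-field conversion; the adapter exposes the third-party object
// through the internal interface.
InternalInterface internal = new ThirdPartyInternalAdapter(thirdParty);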
I have a design question I can't get a good solution for. This is my problem:
There are two different object "trees" which need to be processed together.
Object tree one: AbstractObjectTreeOne with Sub1ObjectTreeOne and Sub2ObjectTreeOne
Object tree two: AbstractObjectTreeTwo with Sub1ObjectTreeTwo and Sub2ObjectTreeTwo
I now have a method where I get a list of AbstractObjectTreeOne and a list of AbstractObjectTreeTwo. They are exactly the same size and "match" each other by name, so I can loop through the objects in the list of AbstractObjectTreeOne and get the corresponding AbstractObjectTreeTwo by name.
Now it needs to be validated that the objects matched by name really are compatible with each other, so the current code contains a lot of instanceof checks. Example:
if (!(objectOfAbstractObjectTreeOne instanceof Sub1ObjectTreeOne)) {
throw exception;
}
and then also in the same method
if (!(objectOfAbstractObjectTreeTwo instanceof Sub1ObjectTreeTwo)) {
throw exception;
}
After that, both parameters are cast to their "real" subtype to be further processed. This also does not feel very good.
None of this feels very object-oriented, but I currently do not have a good idea how to solve it. I tried the visitor pattern, but it only removes the instanceof checks for either AbstractObjectTreeOne or AbstractObjectTreeTwo, and still contains a lot of instanceof.
Maybe some of you have a good idea about this kind of problem. Maybe it's easy to solve, but I do not have the right idea yet.
This is the OO principle of polymorphism.
No need to use instanceof. You have to create an interface and use it in the declaration of the tree. All subtypes should implement this interface, and you can call the required methods without typecasting.
This is an example.
public interface ObjectTreeOne { void payloadOne(); }
public class Sub1ObjectTreeOne implements ObjectTreeOne { public void payloadOne() { /* ... */ } }
public class Sub2ObjectTreeOne implements ObjectTreeOne { public void payloadOne() { /* ... */ } }

List<ObjectTreeOne> treeOneObjects = new ArrayList<>();
treeOneObjects.add(new Sub1ObjectTreeOne());
treeOneObjects.add(new Sub2ObjectTreeOne());

public interface ObjectTreeTwo { void payloadTwo(); }
public class Sub1ObjectTreeTwo implements ObjectTreeTwo { public void payloadTwo() { /* ... */ } }
public class Sub2ObjectTreeTwo implements ObjectTreeTwo { public void payloadTwo() { /* ... */ } }

List<ObjectTreeTwo> treeTwoObjects = new ArrayList<>();
treeTwoObjects.add(new Sub1ObjectTreeTwo());
treeTwoObjects.add(new Sub2ObjectTreeTwo());

for (int i = 0; i < treeOneObjects.size(); i++) {
    ObjectTreeOne one = treeOneObjects.get(i);
    ObjectTreeTwo two = treeTwoObjects.get(i);
    one.payloadOne();
    two.payloadTwo();
}
Is it possible to design a way to call different method overloads at compile time?
Let's say I have this little class:
@RequiredArgsConstructor
public class BaseValidator<T> {
    private final T newValue;
}
Now I need methods that return different objects (depending on T).
Like this:
private StringValidator getValidator() {
    return new StringValidator(newValue);
}

private IntegerValidator getValidator() {
    return new IntegerValidator(newValue);
}
In the end, I want a call hierarchy that is fluent and looks like this:
new BaseValidator("string")
.getValidator() // which returns now at compile-time a StringValidator
.checkIsNotEmpty();
//or
new BaseValidator(43)
.getValidator() // which returns now a IntegerValidator
.checkIsBiggerThan(42);
And in my "real"-case (I have a very specific way to update objects and a lot of conditions for every object and the chance of a copy-and-paste issue is very high. So the wizard enforces all developer to implement exact this way.) :
I tried diffrent ways. Complex generics inside the Validators, or play around with the generics. My last approch looks like this.
public <C> C getValidator() {
    return (C) getValidation(newValue);
}

private ValidationString getValidation(String newValue) {
    return new ValidationString(newValue);
}

private ValidationInteger getValidation(Integer newValue) {
    return new ValidationInteger(newValue);
}
What is the trick?
//edit: I want it at compile-time and not with instanceof-checks at runtime.
What is the trick?
Not to do it like this.
Provide static factory methods:
class BaseValidator<T> {
    static ValidationString getValidation(String newValue) {
        return new ValidationString(newValue);
    }

    static ValidationInteger getValidation(Integer newValue) {
        return new ValidationInteger(newValue);
    }
}
class ValidationString extends BaseValidator<String> { ... }
class ValidationInteger extends BaseValidator<Integer> { ... }
Although I consider this to be odd: you are referring to subclasses inside the base class. Such cyclical dependencies make the code hard to work with, especially when it comes to refactoring, but also perhaps in initialization.
Instead, I would suggest creating a utility class to contain the factory methods:
class Validators {
    private Validators() {}

    static ValidationString getValidation(String newValue) {
        return new ValidationString(newValue);
    }

    static ValidationInteger getValidation(Integer newValue) {
        return new ValidationInteger(newValue);
    }
}
which has no such cycles.
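Usage then relies on ordinary overload resolution at compile time, for example:

// The compiler picks the overload from the static type of the argument.
ValidationString stringValidation = Validators.getValidation("some text");
ValidationInteger integerValidation = Validators.getValidation(42);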
A really important thing to realize about generics is that they are nothing more than a way of making explicit casts implicit (with the compiler checking that all of these implicit casts are type-safe).
In other words, this:
List<String> list = new ArrayList<>();
list.add("foo");
System.out.println(list.get(0).length());
is just a nicer way of writing:
List list = new ArrayList();
list.add((String) "foo");
System.out.println(((String) list.get(0)).length());
Whilst <String> looks like it is part of the type, it is basically just an instruction to the compiler to squirt in a load of casts.
Generic classes with different type parameters all have the same methods. This is the specific difficulty in your approach: you can't make the BaseValidator<String>.getValidator() return something with a checkIsNotEmpty method (only), and the BaseValidator<Integer>.getValidator() return something with a checkIsGreaterThan method (only).
Well, it isn't quite true to say you can't. With your attempt involving the method-scoped type variable (<C> C getValidator()), you can write:
new BaseValidator<>("string").<StringValidator>getValidator().checkIsNotEmpty()
(assuming StringValidator has the checkIsNotEmpty method on it)
But:
Let's not mince words: it is ugly.
Worse than being ugly, it isn't type safe. You can equally write:
new BaseValidator<>("string").getValidator().checkIsGreaterThan(42)
which is nonsensical, but allowed by the compiler. The problem is that the return type is chosen at the call site: you will either have to return null (and get a NullPointerException when you try to invoke the following method); or return some non-null value and risk a ClassCastException. Either way: not good.
What you can do, however, is to make a generic validator a parameter of the method call. For example:
interface Validator<T> {
    boolean validate(T value);
}

class BaseValidator<T> {
    private final T value;

    BaseValidator(T value) { this.value = value; }

    BaseValidator<T> validate(Validator<T> v) {
        if (!v.validate(this.value)) {
            throw new IllegalArgumentException("validation failed");
        }
        return this;
    }
}
And invoke like so, demonstrating how you can chain method calls to apply multiple validations:
new BaseValidator<>("")
.validate(s -> !s.isEmpty())
.validate(s -> s.matches("pattern"))
...
new BaseValidator<>(123)
.validate(v -> v >= 0)
...
We decided to add more class steps. You can go the generic way or use explicit types (in this example, String). Our requirements for all update methods (we have many database objects ...) are a little complicated. We want only one update method (for each DB object), which ...
Ignores fields that are null.
Ignores fields that are equal to the "old" value.
Validates the fields that are not ignored.
Saves only when no validation issues occur.
Doing that with many if-blocks is possible but not really readable, and copy-paste mistakes are highly probable.
Our code looks like this:
private void update(@NonNull final User.UpdateFinalStep params) {
    UpdateWizard.update(dbUserService.get(params.getId()))
        .field(params.getStatus())
            .withGetter(DbUser::getAccountStatus)
            .withSetter(DbUser::setAccountStatus)
            .finishField()
        .field(Optional.ofNullable(params.getUsername())
                .map(String::toLowerCase)
                .orElse(null))
            .withGetter(DbUser::getUsername)
            .withSetter(DbUser::setUsername)
            .beginValidationOfField(FieldName.USERNAME)
                .notEmptyAndMatchPattern(USERNAME_PATTERN,
                        () -> this.checkUniqueUsername(params.getUsername(), params.getId()))
            .endValidation()
        .field(params.getLastName())
            .withGetter(DbUser::getLastname)
            .withSetter(DbUser::setLastname)
            .beginValidationOfField(FieldName.USER_LASTNAME)
                .notEmptyAndMatchPattern(LAST_NAME_PATTERN)
            .endValidation()
        .field(params.getFirstName())
            .withGetter(DbUser::getFirstname)
            .withSetter(DbUser::setFirstname)
            .beginValidationOfField(FieldName.USER_FIRSTNAME)
                .notEmptyAndMatchPattern(FIRST_NAME_PATTERN)
            .endValidation()
        .save(dbUserService::save);
}
This is very readable and allows adding new fields in a very simple way. With the generics, we don't give the "stupid developer" a chance to make a mistake.
As you can see in the image, accountStatus and username point to different classes.
In the end, we can use the update method in a very fluent way:
userService.startUpdate()
    .withId(currentUserId)
    .setStatus(AccountStatus.INACTIVE)
    .finallyUpdate();
I'm currently working at a company that has a diverse set of modules. In that company, if you want to expose module internals, you provide them via a Java interface that hides the actual implementing type and gives the requesting module an interface to program against. Now I want one provider to be able to provide data for multiple modules that expose different fields or methods of the actual internal data.
Therefore I have an internal object which holds some data, and an interface for each module that needs access to some, but not strictly all, of its fields. Finally, I have an external object that implements all those interfaces and holds an instance of the internal object to delegate the method calls to:
public class InternalObject {
    public int getA() { return 0; }
    public int getB() { return 0; }
}

public interface ModuleXObject {
    int getA();
}

public interface ModuleYObject {
    int getA();
    int getB();
}

public class ExternalObject implements ModuleXObject, ModuleYObject {
    private InternalObject _internal;

    public int getA() { return _internal.getA(); }
    public int getB() { return _internal.getB(); }
}
Now that is all fine and dandy, but if I want to provide - let's say - repository methods for finding a list of said objects, typed for the correct module, I run into problems with how to achieve that. I would wish for something like the following:
public interface ModuleXObjectRepository {
    List<ModuleXObject> loadAllObjects();
}

public interface ModuleYObjectRepository {
    List<ModuleYObject> loadAllObjects();
}

public class ExternalObjectRepository implements ModuleXObjectRepository, ModuleYObjectRepository {
    public List<ExternalObject> loadAllObjects() {
        // ...
    }
}
This doesn't compile; the compiler complains that the return type is incompatible.
So my question is whether it is possible to achieve something like that, and if so, how?
I should note that I tried some different approaches which I want to include for completeness and to portray their downsides (in my eyes).
Approach 1:
public interface ModuleXObjectRepository {
    List<? extends ModuleXObject> loadAllObjects();
}

public interface ModuleYObjectRepository {
    List<? extends ModuleYObject> loadAllObjects();
}

public class ExternalObjectRepository implements ModuleXObjectRepository, ModuleYObjectRepository {
    public List<ExternalObject> loadAllObjects() {
        // ...
    }
}
This approach is quite close to the solution I would prefer, but results in code like this:
List<? extends ModuleXObject> objects = repository.loadAllObjects();
This requires the user to include "? extends" in every List declaration that receives the result of loadAllObjects().
Approach 2:
public interface ModuleXObjectRepository {
    List<ModuleXObject> loadAllObjects();
}

public interface ModuleYObjectRepository {
    List<ModuleYObject> loadAllObjects();
}

public class ExternalObjectRepository implements ModuleXObjectRepository, ModuleYObjectRepository {
    public List loadAllObjects() {
        // ...
    }
}
This approach just omits the generic type in ExternalObjectRepository and therefore reduces type safety too much, in my opinion. Also, I haven't tested whether it actually works.
Just to recap: is there any way to define the loadAllObjects method so that users get lists typed with the objects for their respective module, without
requiring "? extends" in the users code
degrading type safety in the repository implementation
using class/interface level generics
The challenge with allowing it to be typed as List<ModuleXObject> is that other code may hold it as a List<ExternalObject>.
All ExternalObject instances are ModuleXObject instances but the inverse is not true.
Consider the following additional class:
public class MonkeyWrench implements ModuleXObject {
    // STUFF
}
MonkeyWrench instances are NOT ExternalObject instances, but if one could cast a List<ExternalObject> to a List<ModuleXObject>, one could add MonkeyWrench instances to that collection. That creates a risk of runtime ClassCastExceptions and ruins type safety.
Other code could very easily have:
for (ExternalObject externalObject : externalObjectRepository.loadAllObjects())
If one of those instances is a MonkeyWrench, you get a runtime ClassCastException, which is exactly what generics are meant to prevent.
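A short, deliberately unsafe sketch of that failure mode (the unchecked cast stands in for what a directly exposed List<ModuleXObject> would allow):

// The same list becomes visible under two incompatible generic types.
List<ExternalObject> externalObjects = new ArrayList<>();

@SuppressWarnings("unchecked")
List<ModuleXObject> moduleObjects = (List<ModuleXObject>) (List<?>) externalObjects;
moduleObjects.add(new MonkeyWrench()); // compiles fine, pollutes the list

ExternalObject first = externalObjects.get(0); // ClassCastException at runtime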
The implication of ? extends ModuleXObject is that you can read any object from the collection as a ModuleXObject but you can't add anything to the collection as other code may have additional constraints on the collection that are not obvious/available at compile time.
I'd suggest in your case to use ? extends ModuleXObject as its semantics seem to align with what you want, namely pulling out ModuleXObject instances, e.g.
ModuleXObjectRepository repo = //get repo however
for(ModuleXObject obj : repo.loadAllObjects()){
//do stuff with obj
}
If I have two interfaces, quite different in their purposes but with the same method signature, how do I make a class implement both without being forced to write a single method that serves both interfaces and contains convoluted logic that checks which interface the call is being made through and invokes the proper code?
In C#, this is overcome by what is called explicit interface implementation. Is there any equivalent in Java?
No, there is no way to implement the same method in two different ways in one class in Java.
That can lead to many confusing situations, which is why Java has disallowed it.
interface ISomething {
    void doSomething();
}

interface ISomething2 {
    void doSomething();
}

class Impl implements ISomething, ISomething2 {
    public void doSomething() {} // There can only be one implementation of this method.
}
What you can do is compose a class out of two classes that each implement a different interface. Then that one class will have the behavior of both interfaces.
class CompositeClass {
    ISomething class1;
    ISomething2 class2;

    void doSomething1() { class1.doSomething(); }
    void doSomething2() { class2.doSomething(); }
}
There's no real way to solve this in Java. You could use inner classes as a workaround:
interface Alfa { void m(); }
interface Beta { void m(); }

class AlfaBeta implements Alfa {
    private int value;

    public void m() { ++value; } // Alfa.m()

    public Beta asBeta() {
        return new Beta() {
            public void m() { --value; } // Beta.m()
        };
    }
}
Although this doesn't allow casts from AlfaBeta to Beta, downcasts are generally evil anyway. If it can be expected that an Alfa instance often has a Beta aspect too, and for some reason (usually optimization is the only valid reason) you want to be able to convert it to Beta, you could make a sub-interface of Alfa with Beta asBeta() in it.
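For example, that sub-interface could look like this (the name BetaAware is illustrative):

// Alfa implementations that also carry a Beta aspect can implement this instead of plain Alfa.
interface BetaAware extends Alfa {
    Beta asBeta();
}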
If you are encountering this problem, it is most likely because you are using inheritance where you should be using delegation. If you need to provide two different, albeit similar, interfaces for the same underlying model of data, then you should use a view to cheaply provide access to the data using some other interface.
To give a concrete example for the latter case, suppose you want to implement both Collection and MyCollection (which does not inherit from Collection and has an incompatible interface). You could provide a Collection getCollectionView() and MyCollection getMyCollectionView() functions which provide a light-weight implementation of Collection and MyCollection, using the same underlying data.
For the former case... suppose you really want an array of integers and an array of strings. Instead of inheriting from both List<Integer> and List<String>, you should have one member of type List<Integer> and another member of type List<String>, and refer to those members, rather than try to inherit from both. Even if you only needed a list of integers, it is better to use composition/delegation over inheritance in this case.
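Going back to the Collection/MyCollection view example above, a rough sketch of such views might look like this (MyCollection and its size()/item() methods are assumed here purely for illustration):

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class DataHolder {
    private final List<Integer> data = new ArrayList<>();

    // Standard Collection view over the underlying data.
    public Collection<Integer> getCollectionView() {
        return Collections.unmodifiableList(data);
    }

    // Lightweight view implementing the assumed, incompatible MyCollection interface
    // over the same underlying data.
    public MyCollection getMyCollectionView() {
        return new MyCollection() {
            public int size() { return data.size(); }
            public Object item(int index) { return data.get(index); }
        };
    }
}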
The "classical" Java problem also affects my Android development...
The reason seems to be simple:
The more frameworks/libraries you have to use, the more easily things can get out of control...
In my case, I have a BootStrapperApp class inherited from android.app.Application,
and the same class should also implement a Platform interface of an MVVM framework in order to get integrated.
A method collision occurred on a getString() method, which is declared by both interfaces and should have different implementations in different contexts.
The workaround (ugly, IMO) is using an inner class to implement all Platform methods, just because of one minor method signature conflict... in some cases such a borrowed method is not even used at all (but it still affects the overall design semantics).
I tend to agree C#-style explicit context/namespace indication is helpful.
The only solution that came to my mind is using reference objects that wrap the one you want to implement multiple interfaces for.
E.g., suppose you have two interfaces to implement:
public interface Framework1Interface {
    void method(Object o);
}
and
public interface Framework2Interface {
    void method(Object o);
}
You can enclose them in two Facador objects:
public class Facador1 implements Framework1Interface {
    private final ObjectToUse reference;

    public static Framework1Interface Create(ObjectToUse ref) {
        return new Facador1(ref);
    }

    private Facador1(ObjectToUse refObject) {
        this.reference = refObject;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj instanceof Framework1Interface) {
            return this == obj;
        } else if (obj instanceof ObjectToUse) {
            return reference == obj;
        }
        return super.equals(obj);
    }

    @Override
    public void method(Object o) {
        reference.methodForFrameWork1(o);
    }
}
and
public class Facador2 implements Framework2Interface {
    private final ObjectToUse reference;

    public static Framework2Interface Create(ObjectToUse ref) {
        return new Facador2(ref);
    }

    private Facador2(ObjectToUse refObject) {
        this.reference = refObject;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj instanceof Framework2Interface) {
            return this == obj;
        } else if (obj instanceof ObjectToUse) {
            return reference == obj;
        }
        return super.equals(obj);
    }

    @Override
    public void method(Object o) {
        reference.methodForFrameWork2(o);
    }
}
In the end, the class you wanted should look something like this:
public class ObjectToUse {
    private Framework1Interface facFramework1Interface;
    private Framework2Interface facFramework2Interface;

    public ObjectToUse() {
    }

    public Framework1Interface getAsFramework1Interface() {
        if (facFramework1Interface == null) {
            facFramework1Interface = Facador1.Create(this);
        }
        return facFramework1Interface;
    }

    public Framework2Interface getAsFramework2Interface() {
        if (facFramework2Interface == null) {
            facFramework2Interface = Facador2.Create(this);
        }
        return facFramework2Interface;
    }

    public void methodForFrameWork1(Object o) {
    }

    public void methodForFrameWork2(Object o) {
    }
}
You can now use the getAs* methods to "expose" your class.
You can use the Adapter pattern to make this work: create an adapter for each interface and use those. It should solve the problem.
That's all well and good when you have total control over all of the code in question and can implement this up front.
Now imagine you have an existing public class used in many places with a method
public class MyClass {
    private String name;

    MyClass(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
Now you need to pass it into the off-the-shelf WizzBangProcessor, which requires classes to implement the WBPInterface... which also has a getName() method, but instead of your concrete implementation, this interface expects the method to return the name of a type of Wizz Bang Processing.
In C# it would be trivial:
public class MyClass : WBPInterface {
    private String name;

    String WBPInterface.getName() {
        return "MyWizzBangProcessor";
    }

    MyClass(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
In Java, though, you are going to have to identify every point in the existing deployed code base where you need to convert from one interface to the other. Sure, the WizzBangProcessor company should have used getWizzBangProcessName(), but they are developers too; in their context getName was fine. Actually, outside of Java, most other OO-based languages support this. Java is rare in forcing all interfaces to be implemented with the same method name.
Most other languages have a compiler that is more than happy to take an instruction saying "this method in this class, which matches the signature of this method in this implemented interface, is its implementation". After all, the whole point of defining interfaces is to allow the definition to be abstracted from the implementation. (Don't even get me started on default methods in Java interfaces, let alone default overriding... because sure, every component designed for a road car should be able to get slammed into a flying car and just work - hey, they are both cars... I'm sure the default functionality of, say, your sat nav will not be affected by default pitch and roll inputs, because cars only yaw!)
Having a chain of "instanceof" operations is considered a "code smell". The standard answer is "use polymorphism". How would I do it in this case?
There are a number of subclasses of a base class; none of them are under my control. An analogous situation would be with the Java classes Integer, Double, BigDecimal etc.
if (obj instanceof Integer) {NumberStuff.handle((Integer)obj);}
else if (obj instanceof BigDecimal) {BigDecimalStuff.handle((BigDecimal)obj);}
else if (obj instanceof Double) {DoubleStuff.handle((Double)obj);}
I do have control over NumberStuff and so on.
I don't want to use many lines of code where a few lines would do. (Sometimes I make a HashMap mapping Integer.class to an instance of IntegerStuff, BigDecimal.class to an instance of BigDecimalStuff etc. But today I want something simpler.)
I'd like something as simple as this:
public static void handle(Integer num) { ... }
public static void handle(BigDecimal num) { ... }
But Java just doesn't work that way.
I'd like to use static methods when formatting. The things I'm formatting are composite, where a Thing1 can contain an array of Thing2s and a Thing2 can contain an array of Thing1s. I had a problem when I implemented my formatters like this:
class Thing1Formatter {
    private static Thing2Formatter thing2Formatter = new Thing2Formatter();

    public void format(Thing1 thing) {
        thing2Formatter.format(thing.innerThing2);
    }
}

class Thing2Formatter {
    private static Thing1Formatter thing1Formatter = new Thing1Formatter();

    public void format(Thing2 thing) {
        thing1Formatter.format(thing.innerThing1);
    }
}
Yes, I know the HashMap and a bit more code can fix that too. But the "instanceof" seems so readable and maintainable by comparison. Is there anything simple but not smelly?
Note added 5/10/2010:
It turns out that new subclasses will probably be added in the future, and my existing code will have to handle them gracefully. The HashMap on Class won't work in that case because the Class won't be found. A chain of if statements, starting with the most specific and ending with the most general, is probably the best after all:
if (obj instanceof SubClass1) {
    // Handle all the methods and properties of SubClass1
} else if (obj instanceof SubClass2) {
    // Handle all the methods and properties of SubClass2
} else if (obj instanceof Interface3) {
    // Unknown class but it implements Interface3,
    // so handle those methods and properties
} else if (obj instanceof Interface4) {
    // Likewise. May want to also handle the case of an
    // object that implements both interfaces.
} else {
    // New (unknown) subclass; do what I can with the base class
}
You might be interested in this entry from Steve Yegge's Amazon blog: "when polymorphism fails". Essentially he's addressing cases like this, when polymorphism causes more trouble than it solves.
The issue is that to use polymorphism you have to make the logic of "handle" part of each 'switching' class - i.e. Integer etc. in this case. Clearly this is not practical. Sometimes it isn't even logically the right place to put the code. He recommends the 'instanceof' approach as being the lesser of several evils.
As with all cases where you are forced to write smelly code, keep it buttoned up in one method (or at most one class) so that the smell doesn't leak out.
As highlighted in the comments, the visitor pattern would be a good choice, but without direct control over the target/acceptor/visitee you can't implement that pattern directly. Here's one way the visitor pattern could still be used, even though you have no direct control over the subclasses, by using wrappers (taking Integer as an example):
public class IntegerWrapper {
    private Integer integer;

    public IntegerWrapper(Integer anInteger) {
        integer = anInteger;
    }

    // Access the integer directly, such as:
    public Integer getInteger() { return integer; }

    // or via method passthrough:
    public int intValue() { return integer.intValue(); }

    // then implement your visitor:
    public void accept(NumericVisitor visitor) {
        visitor.visit(this);
    }
}
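The NumericVisitor type referenced above is not shown in the answer; a minimal sketch of what it might look like (BigDecimalWrapper is an assumed sibling wrapper):

// Hypothetical visitor interface: one overload per wrapper type.
public interface NumericVisitor {
    void visit(IntegerWrapper integer);
    void visit(BigDecimalWrapper bigDecimal);
}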
Of course, wrapping a final class might be considered a smell of its own, but maybe it's a good fit with your subclasses. Personally, I don't think instanceof is that bad a smell here, especially if it is confined to one method, and I would happily use it (probably over my own suggestion above). As you say, it's quite readable, type-safe and maintainable. As always, keep it simple.
Instead of a huge if, you can put the instances you handle in a map (key: class, value: handler).
If the lookup by key returns null, call a special handler method which tries to find a matching handler (for example by calling isInstance() on every key in the map).
When a handler is found, register it under the new key.
This makes the general case fast and simple and allows you to handle inheritance.
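A minimal sketch of that idea (the Handler interface and registered types are placeholders):

import java.util.HashMap;
import java.util.Map;

public class HandlerRegistry {
    interface Handler { void handle(Object o); }

    private final Map<Class<?>, Handler> handlers = new HashMap<>();

    public void register(Class<?> type, Handler handler) {
        handlers.put(type, handler);
    }

    public void handle(Object o) {
        Handler handler = handlers.get(o.getClass());
        if (handler == null) {
            // Slow path: look for a handler registered for a supertype or interface.
            for (Map.Entry<Class<?>, Handler> entry : handlers.entrySet()) {
                if (entry.getKey().isInstance(o)) {
                    handler = entry.getValue();
                    // Cache under the concrete class so the next lookup is a fast hit.
                    handlers.put(o.getClass(), handler);
                    break;
                }
            }
        }
        if (handler == null) {
            throw new IllegalArgumentException("No handler for " + o.getClass());
        }
        handler.handle(o);
    }
}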
You can use reflection:
import java.lang.reflect.Method;
import java.math.BigDecimal;

public final class Handler {
    public static void handle(Object o) {
        try {
            Method handler = Handler.class.getMethod("handle", o.getClass());
            handler.invoke(null, o);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void handle(Integer num) { /* ... */ }
    public static void handle(BigDecimal num) { /* ... */ }
    // to handle new types, just add more handle methods...
}
You can expand on the idea to generically handle subclasses and classes that implement certain interfaces.
I think the best solution is a HashMap with Class as key and Handler as value. Note that the HashMap-based solution runs in constant time, Θ(1), while a chain of if-instanceof-else checks runs in linear time, O(N), where N is the number of links in the chain (i.e. the number of different classes to be handled). So the HashMap-based solution is asymptotically faster by a factor of N.
Suppose you need to handle different descendants of a Message class differently: Message1, Message2, etc. Below is a code snippet for HashMap-based handling.
public class YourClass {
    private class Handler {
        public void go(Message message) {
            // the default implementation just notifies that it doesn't handle the message
            System.out.printf(
                    "Possibly due to a typo, empty handler is set to handle message of type %s : %s%n",
                    message.getClass().toString(), message.toString());
        }
    }

    private Map<Class<? extends Message>, Handler> messageHandling =
            new HashMap<Class<? extends Message>, Handler>();

    // Constructor of your class is a place to initialize the message handling mechanism
    public YourClass() {
        messageHandling.put(Message1.class, new Handler() { public void go(Message message) {
            // TODO: IMPLEMENT HERE SOMETHING APPROPRIATE FOR Message1
        } });
        messageHandling.put(Message2.class, new Handler() { public void go(Message message) {
            // TODO: IMPLEMENT HERE SOMETHING APPROPRIATE FOR Message2
        } });
        // etc. for Message3, etc.
    }

    // The method in which you receive a variable of base class Message, but you need to
    // handle it in accordance with what derived type that instance is
    public void handleMessage(Message message) {
        Handler handler = messageHandling.get(message.getClass());
        if (handler == null) {
            System.out.printf(
                    "Don't know how to handle message of type %s : %s%n",
                    message.getClass().toString(), message.toString());
        } else {
            handler.go(message);
        }
    }
}
More info on usage of variables of type Class in Java: http://docs.oracle.com/javase/tutorial/reflect/class/classNew.html
You could consider the Chain of Responsibility pattern. For your first example, something like:
public abstract class StuffHandler {
    private StuffHandler next;

    public final boolean handle(Object o) {
        boolean handled = doHandle(o);
        if (handled) { return true; }
        else if (next == null) { return false; }
        else { return next.handle(o); }
    }

    public void setNext(StuffHandler next) { this.next = next; }

    protected abstract boolean doHandle(Object o);
}

public class IntegerHandler extends StuffHandler {
    @Override
    protected boolean doHandle(Object o) {
        if (!(o instanceof Integer)) {
            return false;
        }
        NumberHandler.handle((Integer) o);
        return true;
    }
}
and then similarly for your other handlers. Then it's a case of stringing together the StuffHandlers in order (most specific to least specific, with a final 'fallback' handler), and your despatcher code is just firstHandler.handle(o);.
(An alternative is to, rather than using a chain, just have a List<StuffHandler> in your dispatcher class, and have it loop through the list until handle() returns true).
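A sketch of that list-based alternative (ordering still matters: most specific handlers first):

import java.util.List;

public class StuffDispatcher {
    private final List<StuffHandler> handlers;

    public StuffDispatcher(List<StuffHandler> handlers) {
        this.handlers = handlers; // ordered from most specific to least specific
    }

    public boolean dispatch(Object o) {
        for (StuffHandler handler : handlers) {
            if (handler.handle(o)) {
                return true;
            }
        }
        return false; // no handler claimed the object
    }
}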
Just go with the instanceof. All the workarounds seem more complicated. Here is a blog post that talks about it: http://www.velocityreviews.com/forums/t302491-instanceof-not-always-bad-the-instanceof-myth.html
I solved this problem using reflection (around 15 years back, in the pre-generics era).
GenericClass object = (GenericClass) Class.forName(specificClassName).newInstance();
I defined one generic class (an abstract base class) and many concrete implementations of it. Each concrete class is loaded with its class name as a parameter, and that class name is defined as part of the configuration.
The base class defines common state across all concrete classes, and the concrete classes modify that state by overriding the abstract rules defined in the base class.
At the time, I didn't know the name of this mechanism, which is known as reflection.
A few more alternatives are listed in this article: Map and enum, apart from reflection.
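A rough sketch of the enum alternative (the type and constant names are illustrative):

// Each constant carries the handling code for one concrete class; dispatch maps
// the runtime class name onto a constant (valueOf throws for unregistered types).
public enum NumberKind {
    INTEGER {
        @Override void handle(Object o) { /* handle Integer */ }
    },
    BIGDECIMAL {
        @Override void handle(Object o) { /* handle BigDecimal */ }
    };

    abstract void handle(Object o);

    static void dispatch(Object o) {
        NumberKind.valueOf(o.getClass().getSimpleName().toUpperCase()).handle(o);
    }
}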
Add a method in BaseClass which returns the name of the class, and override that method in each subclass to return the specific class name:
public class BaseClass {
    // properties and methods
    public String classType() {
        return BaseClass.class.getSimpleName();
    }
}

public class SubClass1 extends BaseClass {
    // properties and methods
    @Override
    public String classType() {
        return SubClass1.class.getSimpleName();
    }
}

public class SubClass2 extends BaseClass {
    // properties and methods
    @Override
    public String classType() {
        return SubClass2.class.getSimpleName();
    }
}
Now use a switch on the class type in the following way:
switch (obj.classType()) {
    case "SubClass1":
        // do subclass1 task
        break;
    case "SubClass2":
        // do subclass2 task
        break;
}
What I use for Java 8:
void checkClass(Object object) {
    if (object.getClass().toString().equals("class MyClass")) {
        // your logic
    }
}