I have a method to which I need to pass either a single domain object or a collection of them. Processing of the parameter differs slightly depending on whether it is a single instance or a collection.
May I ask for advice on the best approach? Should I make the method signature accept an Object type and then process it with instanceof and downcasting, as below?
private static synchronized void mymethod(Object obj) {
if (obj instanceof List ) {
...
}
else if (obj instanceof MyObjectClass) {
...
}
}
Or should I use overloading? Are there any pitfalls in each case?
I understand the first case is a bit dangerous as it could accept anything passed to it; however, my code is not meant to be used as an API or extended.
There are different approaches to this kind of design "problem".
Using method overloads:
void myMethod(final MyObject myObject);
void myMethod(final List<? extends MyObject> myObjects);
Using a var-args input parameter:
void myMethod(final MyObject... myObject);
-> myMethod(myObject);
-> myMethod(myObject, myOtherObject);
-> myMethod(myObjectsArray); // myObjectsArray = new MyObject[]
Using a Collection/List as input parameter:
void myMethod(final Collection<? extends MyObject> myObjects);
-> myMethod(Collections.singletonList(myObject));
-> myMethod(myObjectCollection); // List<MyObject>, Set<MyObject>, Collection<MyObject>
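For illustration, a minimal runnable sketch of the var-args and Collection variants side by side (the class SingleOrMany and the process() helper are placeholders, not part of the question):
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

class MyObject { }

class SingleOrMany {

    // Var-args: accepts one object, several objects, or an existing array.
    static void myMethod(final MyObject... myObjects) {
        for (MyObject o : myObjects) {
            process(o);
        }
    }

    // Collection: accepts any Collection of MyObject (or of a subtype).
    static void myMethod(final Collection<? extends MyObject> myObjects) {
        myObjects.forEach(SingleOrMany::process);
    }

    private static void process(MyObject o) {
        // placeholder for the real per-object logic
    }

    public static void main(String[] args) {
        MyObject single = new MyObject();
        myMethod(single);                                // var-args, one element
        myMethod(single, new MyObject());                // var-args, two elements
        myMethod(new MyObject[] { single });             // var-args, an existing array
        myMethod(Collections.singletonList(single));     // Collection overload
        myMethod(Arrays.asList(single, new MyObject())); // Collection overload
    }
}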
Personally I'd go with method overloads, as the internal logic usually changes, if only slightly. The intent is clearer, and the JavaDoc can be customized for each method.
I'm a picky developer, and I prefer explicitly stating that there can be two forms of input. I prefer overloads even when they might not be necessary (at the moment). In that case I just delegate to the Collection<?> overload, or the other way around.
void myMethod(final MyObject object) {
    myMethod(Collections.singletonList(object));
}
But that is based on opinions.
I'd say the most important aspect is, don't duplicate code!
Overloading is usually the way to go in such situations. Remember that the generic type of the list is type-erased at runtime, so you won't really know that your List is actually a List<MyObjectClass>. Overloading gives you compile-time checks, so it's safer.
When using generics, also consider whether MyObjectClass is going to be extended in some way, in which case you might receive a collection of those subtype objects instead.
Also, as a general pattern, try to avoid repeating code in both overloaded methods. If you are doing the same thing to every object when a List is passed, you can call one method from the other, as follows:
private static synchronized void mymethod(MyObjectClass obj) {
//todo: do the logic on the object
}
private static synchronized void mymethod(Collection<? extends MyObjectClass> collection) {
//assuming the logic is the same, otherwise do whatever you need to do here
collection.forEach(obj -> mymethod(obj));
}
Downcasting and instanceof are usually symptoms of design decisions that do not quite fit what you need. Sometimes it is difficult to get out of them, and you have to resort to them, but in general it is ideal to let the compiler verify your types and do the right method resolution for the behaviour you want.
Method overloading will suffice for your need. I can think of the following:
private static synchronized void mymethod(MyObjectClass myObj){
...
}
private static synchronized void mymethod(Collection<MyObjectClass> myObj){
...
}
TreffnonX has already given a more detailed, generics-based approach while I was editing my answer. Refer to it :)
Overloading the method seems more correct here, though both versions would work. That way, the compiler can check the code as far as possible. However, your code seems a bit incomplete, to be honest. My personal approach would be to go even further and generalize the method with a generic type:
private static synchronized <T extends MyObjectClass> void mymethod(T obj) {
...
}
private static synchronized <T extends MyObjectClass> void mymethod(Collection<T> obj) {
...
}
The advantage of this version is that whatever you do inside mymethod, you can return results related to the actual type, and modern IDEs can assist you better and resolve lambdas more precisely.
Also, why specifically a List? Would a Collection do? When you limit yourself to lists, you miss out on Sets and other useful collections.
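To illustrate what returning something tied to the type parameter could look like, here is a minimal sketch; the return values and the class names are illustrative, not from the question:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

class MyObjectClass { }   // stand-in for the domain class from the question

class GenericOverloads {

    // The caller gets back exactly the type it passed in, not just MyObjectClass.
    static <T extends MyObjectClass> T mymethod(T obj) {
        // ... process obj ...
        return obj;
    }

    // The element type is preserved for the caller as well.
    static <T extends MyObjectClass> List<T> mymethod(Collection<T> objects) {
        List<T> processed = new ArrayList<>();
        for (T obj : objects) {
            processed.add(mymethod(obj));
        }
        return processed;
    }
}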
Related
I came up with this question while writing specific code, but I'll try to keep the question as generic as possible.
Other similar questions refer to C#, which seems to have some language-specific handling for this; the code below is Java, but again let's try to keep it generic.
Let's say I have class A which implements interface I.
This is useful to me because I can write methods that use A only as an I and abstract away the implementation.
Let's now say I have class B which implements methods with the same name and signature as those in interface I, but it doesn't actually implement the interface and is never referred to through I.
Should I always explicitly implement I?
Even if I don't use it (though I might in the future) for type abstraction?
A more meaningful, even if probably not realistic, example would be:
interface Printable {
    String print();
}
class A implements Printable {
    //code...
    public String print(){ return "A"; }
    //code...
}
class B {
//code...
String print(){return "B";}
void otherMethod(){/*code*/}
//code...
}
class Test {
    public static void main(String[] args) {
        Printable a = new A();
        System.out.println(a.print());
        B b = new B();
        b.otherMethod();
        System.out.println(b.print());
    }
}
Are there any drawbacks to explicitly implementing, or not implementing, the interface Printable?
The only one I can think of is scalability in the second case, in the sense that if one day I want to explicitly use B as a Printable, I'll be able to do so without any extra effort.
But is there anything else (patterns, optimization, good programming, style, ..) I should take into consideration?
In some cases the type hierarchy will affect the method call cost because it does not play well with JIT method inlining. An example can be found in the Guava bug report "ImmutableList (and others) offer awful performance in some cases due to size-optimized specializations" (#1268):
Many of the guava Immutable collections have a cute trick where they have specializations for zero (EmptyImmutableList) and one (SingletonImmutableList) element collections. These specializations take the form of subclasses of ImmutableList, to go along with the "Regular" implementation and a few other specializations like ReverseImmutable, SubList, etc.
Unfortunately, the result is that when these subclasses mix at some call site, the call is megamorphic, and performance is awful compared to classes without these specializations (worse by a factor of 20 or more).
I don't think there is a simple correct answer for this question.
However, if you do not implement the method, you should do this:
public void unusedBlahMethod() {
throw new UnsupportedOperationException("operation blah not supported");
}
The advantages of omitting the unused method are:
You save yourself time and money (at least in the short term).
Since you don't need the method, it might not be clear to you how best to implement it anyway.
The disadvantages of omitting the method are:
If you need the method in the future, it will take longer to add it as you may have to refamiliarize yourself with the code, check-out, re-test, etc.
Throwing an UnsupportedOperationException may cause bugs in the future (though good test coverage should prevent that).
If you're writing disposable code, you don't need to write interfaces, but one day you might notice that you should have taken your time and written an interface.
The main advantage and purpose of interfaces is the flexibility of using different implementations: I can pass something that offers the same functionality into a method, I can create a fake of it for test purposes, and I can create a decorator that behaves like the original object but also logs things.
Example:
public interface A {
void someMethod();
}
public class AImplementation implements A {
@Override
public void someMethod() {
// implementation
}
}
public class ADecorator implements A {
private final A a;
public ADecorator(A a) {
this.a = a;
}
@Override
public void someMethod() {
System.out.println("Before method call");
a.someMethod();
System.out.println("After method call");
}
}
Nice side effect: ADecorator works with every implementation of A.
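For illustration, a minimal usage sketch assuming the classes above (the Main class is just a placeholder):
public class Main {
    public static void main(String[] args) {
        A logged = new ADecorator(new AImplementation());
        logged.someMethod(); // prints the before/after lines around the real call
    }
}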
The cost of this flexibility isn't that high, and if your code will live a little longer, you should pay it.
Minimal working example:
static void foo(boolean bar){
some code A
if(bar){
some code B
}
else{
some code C
}
some code D
}
Here we use the parameter bar to determine the method's behavior, not to actually do something with its value. As a result we redundantly check the value of bar: the method that calls foo() already knows the value of bar, since it passed it as a parameter. A simple alternative would be:
static void foo1(){
A;B;D;
}
static void foo2(){
A;C;D
}
The result is that we have redundant code. Now we could put A and D into methods, but what if they manipulate several variables? Java doesn't have methods with multiple return types. Even assuming we could put them into methods, we would still have foo1 looking like a();b();d(); and foo2 looking like a();c();d();. My current solution to this issue is to create a functional interface for b() and c(), then to define foo as
static void foo(BCinterface baz){ A; baz.do() ;D;}
The issue is that every time I want to write a method with slightly different behaviors, I have to define an interface for the parts where they differ. I know other languages have function pointers. Is there any way to achieve something similar in Java without having to define an interface every time? Or is there some practice that avoids these situations in the first place?
In fact, I think your very first code snippet is the best and most readable solution.
bar is used to determine what the method will do, so what? Why try to move this logic to the caller of foo? There is no point. If I were reading the caller of foo, would I need to know how foo works (given that it's well named)? No, because I'm only interested in what happens in the caller of foo. Abstraction is a good thing, not a bad thing. So my advice is: leave it as it is.
If you really want to extract the logic, you don't need a new functional interface every time. The java.util.function and java.lang packages already provide some functional interfaces; just use them. For example, in your specific case, BCinterface can be replaced by Runnable.
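For illustration, a sketch of foo rewritten around Runnable; the class name FooWithRunnable and the println calls are placeholders for "some code B" and "some code C":
class FooWithRunnable {

    // The Runnable supplies the part that used to depend on the boolean flag.
    static void foo(Runnable middle) {
        // some code A
        middle.run();   // either "some code B" or "some code C"
        // some code D
    }

    public static void main(String[] args) {
        foo(() -> System.out.println("B")); // plays the role of foo(true)
        foo(() -> System.out.println("C")); // plays the role of foo(false)
    }
}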
Your way of solving duplicated invocations seems over complicated.
To provide distinct behavior at a specific step of a processing/algorithm, you can simply use the template method pattern, which relies on abstract methods and polymorphism:
In software engineering, the template method pattern is a behavioral design pattern that defines the program skeleton of an algorithm in an operation, deferring some steps to subclasses. It lets one redefine certain steps of an algorithm without changing the algorithm's structure.
Of course you will have to remove all these static modifiers, which prevent you from taking advantage of OOP features.
The boolean parameter is no longer required either.
Define a base class Foo whose foo() method implements the general behavior and relies on an abstract method, and let each subclass provide the implementation of that abstract method.
public abstract class Foo{
public abstract void specificBehavior();
public void foo(){
a();
specificBehavior();
d();
}
public void a(){
...
}
public void d(){
...
}
}
Now subclasses :
public class FooOne extends Foo {
public void specificBehavior(){
...
}
}
public class FooTwo extends Foo {
public void specificBehavior(){
...
}
}
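The caller then picks the behavior simply by choosing which subclass to instantiate, for example:
Foo foo = new FooOne();   // or new FooTwo()
foo.foo();                // runs a(), FooOne's specificBehavior(), then d()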
There is a possible optimization I could apply to one of my methods, if I can determine that another method in the same class is not overridden. It is only a slight optimization, so reflection is out of the question. Should I just make a protected method that returns whether or not the method in question is overridden, such that a subclass can make it return true?
I wouldn't do this. It violates encapsulation and changes the contract of what your class is supposed to do without implementers knowing about it.
If you must do it, though, the best way is to invoke
class.getMethod("myMethod").getDeclaringClass();
If the class that's returned is your own, then it's not overridden; if it's something else, that subclass has overridden it. Yes, this is reflection, but it's still pretty cheap.
I do like your protected-method approach, though. That would look something like this:
public class ExpensiveStrategy {
public void expensiveMethod() {
// ...
if (employOptimization()) {
// take a shortcut
}
}
protected boolean employOptimization() {
return false;
}
}
public class TargetedStrategy extends ExpensiveStrategy {
@Override
protected boolean employOptimization() {
return true; // Now we can shortcut ExpensiveStrategy.
}
}
Well, my optimization yields only a small gain on a case-by-case basis; it only speeds things up significantly overall because it is called hundreds of times per second.
You might want to see just what the Java optimizer can do. Your hand-coded optimization might not be necessary.
If you decide that hand-coded optimization is necessary, the protected method approach you described is not a good idea because it exposes the details of your implementation.
How many times do you expect the function to be called during the lifetime of the program? Reflection for a specific single method should not be too bad. If it is not worth that much time over the lifetime of the program my recommendation is to keep it simple, and don't include the small optimization.
Jacob
Annotate the subclasses that override the particular method, e.g. @OverridesMethodX.
Perform the necessary reflective work on class load (i.e., in a static block) so that you publish the information via a final boolean flag. Then query the flag where and when you need it.
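A rough sketch of that idea; the annotation and class names are invented, and the flag is computed once per instance in a field initializer rather than in a literal static block, since it depends on the runtime subclass:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface OverridesMethodX { }

class ExpensiveBase {

    // Looked up once per instance from the runtime class; querying the flag is then free.
    private final boolean methodXOverridden =
            getClass().isAnnotationPresent(OverridesMethodX.class);

    void expensiveMethod() {
        if (methodXOverridden) {
            // take the general, overridable path
        } else {
            // take the optimized shortcut
        }
    }
}

@OverridesMethodX
class CustomizedSubclass extends ExpensiveBase {
    // overrides the particular method in question
}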
Maybe there is a cleaner way to do this via the Strategy pattern. I do not know how the rest of your application and data are modeled, but it seems like it might fit.
It did for me when I was faced with a similar problem. You could have a heuristic that decides which strategy to use depending on the data to be processed.
Again, I do not have enough information on your specific usage to see whether this is overkill or not. However, I would refrain from changing the class signature for such a specific optimization. Usually when I feel the urge to go against the current, I take it as a sign that I had not foreseen a corner case when I designed the thing, and that I should refactor it into a cleaner, more comprehensive solution.
However, beware: such refactoring, when done solely on optimization grounds, almost inevitably leads to disaster. If that is the case, I would take the reflective approach suggested above. It does not alter the inheritance contract, and when done properly it needs to be done only once per subclass that requires it, for the runtime life of the application.
I know this is a slightly old question, but for the sake of other googlers:
I came up with a different solution using interfaces.
class FastSub extends Super {}
class SlowSub extends Super implements Super.LetMeHandleThis {
    @Override
    public void doSomethingSlow() {
        //not optimized
    }
}
class Super {
static interface LetMeHandleThis {
void doSomethingSlow();
}
void doSomething() {
if (this instanceof LetMeHandleThis)
((LetMeHandleThis) this).doSomethingSlow();
else
doSomethingFast();
}
private final void doSomethingFast() {
//optimized
}
}
or the other way around:
class FastSub extends Super implements Super.OptimizeMe {}
class SlowSub extends Super {
void doSomethingSlow() {
//not optimized
}
}
class Super {
static interface OptimizeMe {}
void doSomething() {
if (this instanceof OptimizeMe)
doSomethingFast();
else
doSomethingSlow();
}
private final void doSomethingFast() {
//optimized
}
void doSomethingSlow(){}
}
private static boolean isMethodImplemented(Object obj, String name)
{
try
{
Class<? extends Object> clazz = obj.getClass();
return clazz.getMethod(name).getDeclaringClass().equals(clazz);
}
catch (SecurityException e)
{
log.error("{}", e);
}
catch (NoSuchMethodException e)
{
log.error("{}", e);
}
return false;
}
Reflection can be used to determine if a method is overridden. The code is a little bit tricky. For instance, you need to be aware that the runtime class may itself be a subclass of the class that overrides the method.
You are going to see the same runtime classes over and over again. So you can save the results of the check in a WeakHashMap keyed on the Class.
See my code in java.awt.Component dealing with coalesceEvents for an example.
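A sketch of that caching idea with invented names (BaseWithHook, hook(), OverrideCheckCache), assuming the overridable method is public:
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical base class with a public method that subclasses may override.
class BaseWithHook {
    public void hook() { }
}

final class OverrideCheckCache {

    // Weak keys so cached Class objects don't prevent class unloading.
    private static final Map<Class<?>, Boolean> CACHE =
            Collections.synchronizedMap(new WeakHashMap<>());

    static boolean overridesHook(Class<? extends BaseWithHook> runtimeClass) {
        return CACHE.computeIfAbsent(runtimeClass, c -> {
            try {
                // Overridden if some class other than BaseWithHook declares hook().
                return c.getMethod("hook").getDeclaringClass() != BaseWithHook.class;
            } catch (NoSuchMethodException e) {
                return false; // cannot happen for a public method declared on the base
            }
        });
    }

    private OverrideCheckCache() { }
}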
Here is another workaround, similar to overriding a protected method that returns true/false:
I would suggest creating an empty marker interface, making the subclass implement this interface, and in the superclass checking whether this instance is an instanceof that interface before calling the overridden expensive method.
I have a String which can be of Double or Integer type, or some other type. I first need to create a Double or Integer object and then send it to an overloaded method. Here's my code so far:
public void doStuff1(Object obj, String dataType){
if ("Double".equalsIgnoreCase(dataType)) {
doStuff2(Double.valueOf(obj.toString()));
} else if ("Integer".equalsIgnoreCase(dataType)) {
doStuff2(Integer.valueOf(obj.toString()));
}
}
public void doStuff2(double d1){
//do some double related stuff here
}
public void doStuff2(int d1){
//do some int related stuff here
}
I'd like to do this without if/else, with something like this:
Class<?> theClass = Class.forName(dataType);
The problem is that 'theClass' still can't be cast to either double or int. I would be grateful for any ideas.
Thanks.
Found a related thread: Overloading in Java and multiple dispatch
This is not just a problem of dealing with primitive types.
Which method to call is decided in compile time, that is, if you want to be able to call different methods depending on the type of the arguments, you'll need several calls (i.e. you need the if-construct).
In other words, it wouldn't work even if doStuff2 took Integer and Double as arguments (your code is basically as good as it gets).
(In fancy words, this is due to the fact that Java has single dispatch. To emulate multiple dispatch you either need to use conditional statements or a visitor pattern.)
Since the method call is decided at compile time, as the other answer told you, overloading won't work for you. I think this problem can be solved with inheritance: write a base class with yourMethod() and override it in your derived classes.
As aioobe says, the choice between overloaded methods is made at compile time based on the static types of the arguments.
If you want to simulate overload choice at runtime, you will need to do some complicated runtime analysis of the different possible methods. It would go something like this:
get all declared methods of the class that declared doStuff2.
filter out the methods whose name is not doStuff2.
filter out the methods whose argument type cannot be assigned from the (dynamic) type of the argument value.
of the remaining methods, pick the one that is the best match ... taking care to deal with "ties" as ambiguous.
This will be tricky to code, and trickier if you also throw in handling of primitive types. It will also make the method calls expensive.
Frankly, some kind of hard-wired dispatching is much simpler. If you don't like if / else tests (or switching on a String in Java 7), then you could do something like this.
interface Operation { void doIt(Object obj); } // simple callback type assumed by this sketch

Map<String, Operation> map = ...
map.put("Double", new Operation(){
public void doIt(Object obj) {
doStuff2((Double) obj);
}});
map.put("Integer", new Operation(){
public void doIt(Object obj) {
doStuff2((Integer) obj);
}});
...
map.get(typeName).doIt(obj);
... which at least allows you to "plug in" support for new types dynamically.
If you resort to reflection, you'll only have to deal specially with primitive types. So your technique can work, but with the addition of a few explicit tests. If you need to reflectively find a method that accepts a primitive double, use double.class.
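A hedged sketch of that reflective lookup; the callDoStuff2 helper name is made up, and the doStuff2 overloads mirror the question:
import java.lang.reflect.Method;

class ReflectiveDispatch {

    public void doStuff2(double d) { /* do some double related stuff here */ }
    public void doStuff2(int i)    { /* do some int related stuff here */ }

    public void callDoStuff2(Object obj, String dataType) throws ReflectiveOperationException {
        // For a method taking a primitive, look it up with double.class / int.class,
        // not Double.class / Integer.class.
        Class<?> paramType = "Double".equalsIgnoreCase(dataType) ? double.class : int.class;
        Object arg = paramType == double.class
                ? Double.valueOf(obj.toString())
                : Integer.valueOf(obj.toString());
        Method m = getClass().getMethod("doStuff2", paramType);
        m.invoke(this, arg); // invoke() unboxes the wrapper to match the primitive parameter
    }
}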
Due to the use of generics in Java I ended up having to implement a function with Void as its return type:
public Void doSomething() {
//...
}
and the compiler demands that I return something. For now I'm just returning null, but I'm wondering if that is good coding practice...
I'm asking about Void, not void. The class Void, not the reserved keyword void.
I've also tried Void.class, void, Void.TYPE, new Void(), and no return at all, but none of that works (for more or less obvious reasons). (See this answer for details.)
So what am I supposed to return if the return type of a function is Void?
What's the general use of the Void class?
So what am I supposed to return if the return type of a function has to be Void?
Use return null. Void can't be instantiated and is merely a placeholder for the Class<T> type of void.
What's the point of Void?
As noted above, it's a placeholder. Void is what you'll get back if you, for example, use reflection to look at a method with a return type of void. (Technically, you'll get back Class<Void>.) It has other assorted uses along these lines, like if you want to parameterize a Callable<T>.
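For instance, a minimal Callable<Void> sketch (the class name is just for illustration) where returning null is exactly what you end up doing:
import java.util.concurrent.Callable;

class VoidCallableDemo {
    public static void main(String[] args) throws Exception {
        Callable<Void> task = () -> {
            System.out.println("only side effects, nothing meaningful to return");
            return null; // null is the only possible value of type Void
        };
        task.call();
    }
}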
Due to the use of generics in Java I ended up in having to implement this function
I'd say that something may be funky with your API if you needed to implement a method with this signature. Consider carefully whether there's a better way to do what you want (perhaps you can provide more details in a different, follow-up question?). I'm a little suspicious, since this only came up "due to the use of generics".
There's no way to instantiate a Void, so the only thing you can return is null.
return null is the way to go.
To make clear why the other suggestions you gave don't work:
Void.class and Void.TYPE point to the same object and are of type Class<Void>, not of Void.
That is why you can't return those values. new Void() would be of type Void but that constructor doesn't exist. In fact, Void has no public constructors and so cannot be instantiated: You can never have any object of type Void except for the polymorphic null.
Hope this helps! :-)
If, for obscure reasons, you MUST use this type, then returning null indeed seems to be a sensible option, since I suppose the return value will not be used anyway.
The compiler will force you to return something anyway.
And this class doesn't seem to have a public constructor so new Void() is not possible.
Just like this:
public class TestClass {
    public Void testMethod() {
        return null;
    }
}