There is a possible optimization I could apply to one of my methods, if I can determine that another method in the same class is not overridden. It is only a slight optimization, so reflection is out of the question. Should I just make a protected method that returns whether or not the method in question is overridden, such that a subclass can make it return true?
I wouldn't do this. It violates encapsulation and changes the contract of what your class is supposed to do without implementers knowing about it.
If you must do it, though, the best way is to invoke
getClass().getMethod("myMethod").getDeclaringClass();
If the class that's returned is your own, then it's not overridden; if it's something else, that subclass has overridden it. Yes, this is reflection, but it's still pretty cheap.
I do like your protected-method approach, though. That would look something like this:
public class ExpensiveStrategy {
    public void expensiveMethod() {
        // ...
        if (employOptimization()) {
            // take a shortcut
        }
    }

    protected boolean employOptimization() {
        return false;
    }
}
public class TargetedStrategy extends ExpensiveStrategy {
    @Override
    protected boolean employOptimization() {
        return true; // Now we can shortcut ExpensiveStrategy.
    }
}
Well, my optimization yields only a small gain per call; it speeds things up a lot only because the method is called hundreds of times per second.
You might want to see just what the Java optimizer can do. Your hand-coded optimization might not be necessary.
If you decide that hand-coded optimization is necessary, the protected method approach you described is not a good idea because it exposes the details of your implementation.
How many times do you expect the function to be called during the lifetime of the program? Reflection on a single specific method should not be too expensive. If the savings are not worth much over the lifetime of the program, my recommendation is to keep it simple and leave the small optimization out.
Annotate subclasses that override the particular method with @OverridesMethodX.
Perform the necessary reflective work on class load (i.e., in a static block) so that you publish the information via a final boolean flag. Then, query the flag where and when you need it.
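A minimal sketch of that idea (the annotation and class names are illustrative; here the flag is a per-instance final field rather than a static one, since its value depends on the runtime class):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface OverridesMethodX {}

class ExpensiveStrategy {
    // Resolved once per instance; querying the flag afterwards is cheap.
    private final boolean optimized =
            getClass().isAnnotationPresent(OverridesMethodX.class);

    public void expensiveMethod() {
        if (optimized) {
            // take a shortcut
        }
    }
}

@OverridesMethodX
class TargetedStrategy extends ExpensiveStrategy {
    // overrides the method in question
}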
Maybe there is a cleaner way to do this via the Strategy pattern. I do not know how the rest of your application and data are modeled, but it seems like it might fit.
It did for me, anyhow, when I was faced with a similar problem. You could have a heuristic that decides which strategy to use depending on the data to be processed.
Again, I do not have enough information on your specific usage to tell whether this is overkill. However, I would refrain from changing the class signature for such a specific optimization. Usually, when I feel the urge to go against the current, I take it as a sign that I had not foreseen a corner case when I designed the thing and that I should refactor it into a cleaner, more comprehensive solution.
However, beware: such refactoring, when done solely on optimization grounds, almost inevitably leads to disaster. If that is the case here, I would take the reflective approach suggested above. It does not alter the inheritance contract, and when done properly it needs to be done only once per subclass that requires it, for the runtime life of the application.
I know this is a slightly old question, but for the sake of other googlers:
I came up with a different solution using interfaces.
class FastSub extends Super {}

class SlowSub extends Super implements Super.LetMeHandleThis {
    public void doSomethingSlow() {
        // not optimized
    }
}

class Super {
    static interface LetMeHandleThis {
        void doSomethingSlow();
    }

    void doSomething() {
        if (this instanceof LetMeHandleThis)
            ((LetMeHandleThis) this).doSomethingSlow();
        else
            doSomethingFast();
    }

    private void doSomethingFast() {
        // optimized
    }
}
or the other way around:
class FastSub extends Super implements Super.OptimizeMe {}

class SlowSub extends Super {
    void doSomethingSlow() {
        // not optimized
    }
}

class Super {
    static interface OptimizeMe {}

    void doSomething() {
        if (this instanceof OptimizeMe)
            doSomethingFast();
        else
            doSomethingSlow();
    }

    private void doSomethingFast() {
        // optimized
    }

    void doSomethingSlow() {}
}
This utility performs the reflective check; it returns true only if the method is declared directly in the object's runtime class:

private static boolean isMethodImplemented(Object obj, String name) {
    try {
        Class<?> clazz = obj.getClass();
        return clazz.getMethod(name).getDeclaringClass().equals(clazz);
    } catch (SecurityException | NoSuchMethodException e) {
        log.error("{}", e);
    }
    return false;
}
Reflection can be used to determine whether a method is overridden. The code is a little tricky; for instance, you need to be aware that the runtime class you see may be a subclass of the class that overrides the method.
You are going to see the same runtime classes over and over again. So you can save the results of the check in a WeakHashMap keyed on the Class.
See my code in java.awt.Component dealing with coalesceEvents for an example.
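A hedged sketch of that caching approach (the method and class names are illustrative, not the actual java.awt.Component code):

import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Weak keys let classes be unloaded; the Boolean values hold no reference back to the key.
private static final Map<Class<?>, Boolean> OVERRIDE_CACHE =
        Collections.synchronizedMap(new WeakHashMap<>());

static boolean overridesExpensiveMethod(Class<?> clazz) {
    return OVERRIDE_CACHE.computeIfAbsent(clazz, c -> {
        try {
            // Overridden if the method is declared anywhere below the base class.
            return c.getMethod("expensiveMethod").getDeclaringClass() != ExpensiveStrategy.class;
        } catch (NoSuchMethodException e) {
            throw new AssertionError(e);
        }
    });
}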
Here is another workaround, similar to overriding a protected method that returns true/false:
I would suggest creating an empty marker interface, having the subclass implement it, and checking in the superclass whether this instance is an instanceof that interface before calling the overridden expensive method.
I have a method in which I need to pass to it either a single domain object or a collection of them. Processing of the parameter passed differs slightly depending on whether it's a single instance or a collection.
May I ask for advice on the best approach? Should I make the method signature accept an Object type and then process with instanceof and downcasting, as below?
private static synchronized void mymethod(Object obj) {
    if (obj instanceof List) {
        ...
    } else if (obj instanceof MyObjectClass) {
        ...
    }
}
Or should I use overloading? Are there any pitfalls in either case?
I understand the first approach is a bit dangerous as it accepts anything passed to it; however, my code is not meant to be used as an API or extended.
There are different approaches to this kind of design "problem".
Using method overloads:
void myMethod(final MyObject myObject);
void myMethod(final List<? extends MyObject> myObjects);
Using a var-args input parameter:
void myMethod(final MyObject... myObject);
-> myMethod(myObject);
-> myMethod(myObject, myOtherObject);
-> myMethod(myObjectsArray); // myObjectsArray = new MyObject[]
Using a Collection/List as input parameter:
void myMethod(final Collection<? extends MyObject> myObjects);
-> myMethod(Collections.singletonList(myObject));
-> myMethod(myObjectCollection); // List<MyObject>, Set<MyObject>, Collection<MyObject>
Personally I'd go with method overloads, as the internal logic usually changes between the two cases, if only slightly. The intent is clearer, and the JavaDoc can be customized for each method.
I'm a "picky" developer, and I prefer explicitly stating that there can be two forms of input. I prefer overloads even when they might not be necessary (at the moment). In that case I just delegate to the Collection<?> method, or the opposite.
void myMethod(final MyObject object) {
    myMethod(Collections.singleton(object));
}
But that is based on opinions.
I'd say the most important aspect is, don't duplicate code!
Overloading is usually the way to go in such situations. Remember that the generic type of the list is 'type-erased' at runtime, so you won't really know whether your List is actually a List<MyObjectClass>. Overloading gives you compile-time checks, so it's safer.
When using generics, also consider whether MyObjectClass is going to be extended in some way; you might then receive a collection of those subclass objects instead.
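A minimal illustration of the erasure point (the variable is made up):

Object obj = new ArrayList<MyObjectClass>();
if (obj instanceof List) {           // compiles: only the raw type can be checked
    List<?> list = (List<?>) obj;    // the element type is unknown at runtime
}
// if (obj instanceof List<MyObjectClass>) { ... }   // does not compile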
Also, as a general pattern, try to avoid repeating code in both overloaded methods. So if you are doing the same thing on all objects when you pass a List you can call one method from the other as follows:
private static synchronized void mymethod(MyObjectClass obj) {
    // TODO: do the logic on the object
}

private static synchronized void mymethod(Collection<? extends MyObjectClass> collection) {
    // assuming the logic is the same; otherwise do whatever you need to do here
    collection.forEach(obj -> mymethod(obj));
}
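Illustrative call sites for these overloads (the instances are hypothetical):

mymethod(new MyObjectClass());              // resolves to the single-object overload
mymethod(Arrays.asList(new MyObjectClass(),
        new MyObjectClass()));              // resolves to the collection overload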
Downcasting and instanceof are usually symptoms of design decisions that do not quite fit what you need. Sometimes it is difficult to get out of them, and you have to resort to them, but in general it is ideal to let the compiler verify your types and do the right method resolution for the behaviour you want.
Method overloading will suffice for your need. I can think of the following ways:
private static synchronized void mymethod(MyObjectClass myObj) {
    ...
}

private static synchronized void mymethod(Collection<MyObjectClass> myObj) {
    ...
}
TreffnonX has already given a more detailed, generics-based approach while I was editing my answer. Refer to it :)
Overloading the method seems more correct here, though both versions would work; that way, the compiler can evaluate the code as far as possible. However, your code seems a bit incomplete, to be honest. My personal approach would be to go even further and generalize the method with a generic type:
private static synchronized <T extends MyObjectClass> void mymethod(T obj) {
    ...
}

private static synchronized <T extends MyObjectClass> void mymethod(Collection<T> obj) {
    ...
}
The advantage of this version is that whatever you do inside mymethod, you can return values related to the type, and modern IDEs can help you more and resolve lambdas better.
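For instance, a type-preserving variant might look like this (renamed to avoid clashing with the void overloads above; the body is made up for illustration):

private static synchronized <T extends MyObjectClass> List<T> processed(Collection<T> objs) {
    // Callers get back a list of their own element type, with no casts.
    return new ArrayList<>(objs);
}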
Also, why specifically a List? Does a Collection do? Usually when you limit yourself to lists, you miss out on Sets and other important collections.
I came up with this question writing specific code, but I'll try to keep the question as generic as possible.
Other similar questions refer to C#, which seems to have some language-specific handling for this; the code below is Java, but again, let's try to keep it generic.
Let's say I have class A which implements interface I.
This is useful to me because I can write methods that use A only as an I type, abstracting away the implementation.
Now let's say I have class B, which implements methods with the same names/signatures as the ones in interface I, but does not implement the interface itself and is never referred to as an I.
Should I always explicitly implement I?
Even if I don't use it (though I might in the future) for type abstraction?
A more meaningful, even if probably not realistic, example would be:
interface Printable {
    String print();
}

class A implements Printable {
    // code...
    public String print() { return "A"; }
    // code...
}

class B {
    // code...
    String print() { return "B"; }
    void otherMethod() { /* code */ }
    // code...
}

class Test {
    public static void main(String[] args) {
        Printable a = new A();
        System.out.println(a.print());
        B b = new B();
        b.otherMethod();
        System.out.println(b.print());
    }
}
Are there any drawbacks on explicitly implementing, or not, the interface Printable?
The only one I can think of is scalability in the second case: if one day I want to use B explicitly as a Printable, I'll be able to do so without any extra effort.
But is there anything else (patterns, optimization, good programming, style, ..) I should take into consideration?
In some cases the type hierarchy will affect method call cost because it does not play well with JIT method inlining. An example can be found in the Guava bug "ImmutableList (and others) offer awful performance in some cases due to size-optimized specializations" (#1268):
Many of the guava Immutable collections have a cute trick where they have specializations for zero (EmptyImmutableList) and one (SingletonImmutableList) element collections. These specializations take the form of subclasses of ImmutableList, to go along with the "Regular" implementation and a few other specializations like ReverseImmutable, SubList, etc.
Unfortunately, the result is that when these subclasses mix at some call site, the call is megamorphic, and performance is awful compared to classes without these specializations (worse by a factor of 20 or more).
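A small illustration of the effect being described (assuming Guava on the classpath; the class names in the comments come from the bug report):

import com.google.common.collect.ImmutableList;
import java.util.Arrays;

ImmutableList<String> empty = ImmutableList.of();          // EmptyImmutableList
ImmutableList<String> one   = ImmutableList.of("a");       // SingletonImmutableList
ImmutableList<String> many  = ImmutableList.of("a", "b");  // RegularImmutableList

int total = 0;
for (ImmutableList<String> list : Arrays.asList(empty, one, many)) {
    total += list.size();  // three receiver classes mix here, so the call is megamorphic
}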
I don't think there is a simple correct answer for this question.
However, if you do not implement the method, you should do this:
public void unusedBlahMethod() {
    throw new UnsupportedOperationException("operation blah not supported");
}
The advantages of omitting the unused method are:
You save yourself time and money (at least in the short term).
Since you don't need the method, it might not be clear to you how best to implement it anyway.
The disadvantages of omitting the method are:
If you need the method in the future, it will take longer to add it as you may have to refamiliarize yourself with the code, check-out, re-test, etc.
Throwing an UnsupportedOperationException may cause bugs in the future (though good test coverage should prevent that).
If you're writing disposable code, you don't need to write interfaces, but one day you might notice that you should have taken your time and written an interface.
The main advantage and purpose of interfaces is the flexibility of using different implementations: I can pass a method anything that offers the same functionality, I can create a fake of it for test purposes, and I can create a decorator that behaves like the original object but logs the calls.
Example:
public interface A {
    void someMethod();
}

public class AImplementation implements A {
    @Override
    public void someMethod() {
        // implementation
    }
}

public class ADecorator implements A {
    private final A a;

    public ADecorator(A a) {
        this.a = a;
    }

    @Override
    public void someMethod() {
        System.out.println("Before method call");
        a.someMethod();
        System.out.println("After method call");
    }
}
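Illustrative usage of the decorator (not from the original answer):

A plain = new AImplementation();
A logged = new ADecorator(plain);  // wrap the real implementation
logged.someMethod();               // prints the before/after lines around the call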
Nice side effect: ADecorator works with every implementation of A.
The cost of this flexibility isn't that high, and if your code is going to live a little longer, you should pay it.
Minimal working example:
static void foo(boolean bar) {
    // some code A
    if (bar) {
        // some code B
    } else {
        // some code C
    }
    // some code D
}
Here we use the parameter bar to determine the method's behavior, not to actually do something with its value. As a result we redundantly check the value of bar. The method that calls foo() knows the value of bar, since it actually passed it as a parameter. A simple alternative would be:
static void foo1() {
    A; B; D;
}

static void foo2() {
    A; C; D;
}
The result is that we have redundant code. Now we could put A and D into methods, but what if they manipulate several variables? Java doesn't have methods with multiple return types. Even assuming we could put them into methods, we would still have foo1 looking like a(); b(); d() and foo2 looking like a(); c(); d(). My current solution is to create a functional interface for b() and c(), then define foo as
static void foo(BCinterface baz) { A; baz.run(); D; }
The issue is that every time I want to write a method with slightly different behaviors, I have to define an interface for the methods where they differ. I know in other languages there are function pointers. Is there any way to achieve something similar in java without having to define an interface every time? Or is there some practice to avoid having these kinds of situations come up in the first place?
In fact, I think your very first code snippet is the best and most readable solution.
bar is used to determine what the method will do, so what? Why try to move this logic to the caller of foo? There is no point. If I were trying to read the caller of foo, do I need to know how foo works (given it's well named)? No. Because I'm only interested in what happens in the caller of foo. Abstraction is a good thing, not a bad thing. So my advice is, leave it as that.
If you really want to extract the logic, you don't need a new functional interface every time. The java.util.function and java.lang packages already provide some functional interfaces; just use them. For example, in your specific case, BCinterface can be replaced by Runnable.
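A minimal sketch of that suggestion, where a(), b(), c(), and d() stand for the code blocks from the question:

static void foo(Runnable baz) {
    a();        // common code A
    baz.run();  // the step that varies
    d();        // common code D
}

// call sites
foo(() -> b());
foo(() -> c());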
Your way of solving the duplicated invocations seems overcomplicated.
To provide distinct behavior at a specific step of an algorithm, you can simply use the template method pattern, which relies on abstract methods and polymorphism:
In software engineering, the template method pattern is a behavioral design pattern that defines the program skeleton of an algorithm in an operation, deferring some steps to subclasses.[1] It lets one redefine certain steps of an algorithm without changing the algorithm's structure.[2]
Of course, you will have to remove all these static modifiers, which prevent you from taking advantage of OOP features.
The boolean parameter is no longer required either.
Define in a base class Foo a foo() method with the general behavior that relies on an abstract method, and let subclasses provide the implementation of that abstract method.
public abstract class Foo {

    public abstract void specificBehavior();

    public void foo() {
        a();
        specificBehavior();
        d();
    }

    public void a() {
        ...
    }

    public void d() {
        ...
    }
}
Now the subclasses:
public class FooOne extends Foo {
    public void specificBehavior() {
        ...
    }
}

public class FooTwo extends Foo {
    public void specificBehavior() {
        ...
    }
}
Is it a good idea to change private class members to default (package) access in order to test their behavior? I mean, the test case would live in the test directory but in the same package as the tested class.
EDIT: You are all telling the truth. But classes often have private helper methods, and these methods can be complicated enough to need testing. And it is too bad to have to test public methods just to ensure those complicated private methods work correctly. Don't you think so?
I generally prefer writing my classes and tests in a way that makes writing the tests against the public API sensible. So basically I'm saying that if you need to access the private state of the class under test, your test is probably already too involved in the internals of that class.
No, it isn't, because changing the test object may change the result. If you really need to call private members or methods during a test, it's safer to add an accessor. This still changes the class, but with lower risk. Example:
private void method() { /* ... */ }

// For testing purposes only; remove for production
@Deprecated // just another way to create awareness ;)
void testMethod() {
    method();
}
OK, one more solution if you need to test private methods: you can call any method through the reflection API.
Assuming we have:

public class SomeClass {
    private Object helper(String s, String t) { /* ... */ return null; }
}

then we can test it like this:

@Test
public void testHelper() {
    try {
        SomeClass some = new SomeClass();
        Method helperMethod = some.getClass().getDeclaredMethod("helper", String.class, String.class);
        helperMethod.setAccessible(true);
        Object result = helperMethod.invoke(some, "s", "t");
        // do some assert...
    } catch (Exception e) {
        // TODO: proper exception handling
    }
}
I understand what you mean about needing to test private methods, and I also see why people say only test the public methods. I have just encountered some legacy code that has a lot of private methods, some of which are called by public methods, but some are threads, or called by threads, which are kicked off when the object is constructed. Since the code is riddled with bugs and lacks any comments I am forced to test the private code.
I have used this method to address the issue.
MainObject.cs
class MainObject
{
    protected int MethodOne() { /* ... */ }  // Should have been private.
    // ...
}
TestMainObject.cs
class ExposeMainObject : MainObject
{
    public new int MethodOne() { return base.MethodOne(); }
}
class TestMainObject
{
public void TestOne()
{
}
}
Since the test objects aren't shipped I can't see a problem with it, but if there is please tell me.
Testing trumps privacy modifiers. Really, how often is a bug caused by having "a little too much" visibility for a method? Compared to bugs caused by a method that was not fully tested?
It would be nice if Java had a "friend" option, like C++. But a limitation in the language should never be an excuse for not testing something.
Michael Feathers chimes in on this debate in "Working Effectively with Legacy Code" (excellent book), and suggests that this may be a smell of a sub-class that wants to be extracted (and have public methods).
In our shop (~1M LOC), we replace 'private' with '/*TestScope*/' as an indicator that a method should be effectively private but is still testable.
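For illustration (the method is hypothetical):

/*TestScope*/ void rebuildIndexCache() {
    // effectively private; package access is kept only so tests can call it
}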
Trying to circumvent 'private' with reflection is, IMHO, a smell. It makes the tests harder to write, read, and debug in order to retain a 'fetish' of privacy that you're working around anyway. Why bother?
Recently in an answer it was suggested to me that this:
public interface Operation<R extends OperationResult, P extends OperationParam> {
    public R execute(P param);
}
Is better than this:
public interface Operation {
    public OperationResult execute(OperationParam param);
}
I however can't see any benefit in using the first code block over the second one ...
Given that both OperationResult and OperationParam are interfaces, an implementer needs to return a derived class anyway, and this seems quite obvious to me.
So do you see any reason to use the first code block over the second one?
This way you can declare your Operation implementations to return a more specific result, e.g.
class SumOperation implements Operation<SumResult, SumParam>
Though whether this is of any value to your application depends entirely on the situation.
Update: Of course you could return a more specific result without having a generic interface, but this way you can restrict the input parameters as well.
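A brief sketch of that point (SumParam and SumResult follow the answer's naming; OtherParam stands for some unrelated OperationParam implementation):

class SumParam implements OperationParam { /* ... */ }
class SumResult implements OperationResult { /* ... */ }

class SumOperation implements Operation<SumResult, SumParam> {
    @Override
    public SumResult execute(SumParam param) {  // specific parameter and return types
        return new SumResult();
    }
}

SumResult r = new SumOperation().execute(new SumParam());  // no cast needed
// new SumOperation().execute(new OtherParam());           // would not compile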