Why is "final" not allowed in Java 8 interface methods? - java

One of the most useful features of Java 8 is the new default methods on interfaces. There are essentially two reasons (there may be others) why they were introduced:
Providing actual default implementations. Example: Iterator.remove()
Allowing for JDK API evolution. Example: Iterable.forEach()
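For reference, the JDK 8 default implementation of Iterator.remove() is essentially the following (paraphrased from java.util.Iterator):

default void remove() {
    throw new UnsupportedOperationException("remove");
}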
From an API designer's perspective, I would have liked to be able to use other modifiers on interface methods, e.g. final. This would be useful when adding convenience methods, preventing "accidental" overrides in implementing classes:
interface Sender {
    // Convenience method to send an empty message
    default final void send() {
        send(null);
    }

    // Implementations should only implement this method
    void send(String message);
}
The above is already common practice if Sender were a class:
abstract class Sender {
    // Convenience method to send an empty message
    final void send() {
        send(null);
    }

    // Implementations should only implement this method
    abstract void send(String message);
}
Now, default and final are obviously contradicting keywords, but the default keyword itself would not have been strictly required, so I'm assuming that this contradiction is deliberate, to reflect the subtle differences between "class methods with body" (just methods) and "interface methods with body" (default methods), i.e. differences which I have not yet understood.
At some point in time, support for modifiers like static and final on interface methods had not yet been fully explored. Citing Brian Goetz:
The other part is how far we're going to go to support class-building tools in interfaces, such as final methods, private methods, protected methods, static methods, etc. The answer is: we don't know yet.
Since that time in late 2011, obviously, support for static methods in interfaces was added. Clearly, this added a lot of value to the JDK libraries themselves, such as with Comparator.comparing().
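To illustrate the value, here is a minimal, self-contained sketch of a static interface method in use; Comparator.comparing() is the real java.util.Comparator API (Java 8+), and the class name is made up for the example:

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class StaticInterfaceMethodDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("pear", "fig", "banana");
        // comparing() is declared directly on the Comparator interface,
        // so no separate "Comparators" utility class is needed.
        words.sort(Comparator.comparing(String::length));
        System.out.println(words); // [fig, pear, banana]
    }
}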
Question:
What is the reason final (and also static final) never made it to Java 8 interfaces?

This question is, to some degree, related to What is the reason why “synchronized” is not allowed in Java 8 interface methods?
The key thing to understand about default methods is that the primary design goal is interface evolution, not "turn interfaces into (mediocre) traits". While there's some overlap between the two, and we tried to be accommodating to the latter where it didn't get in the way of the former, these questions are best understood when viewed in this light. (Note too that class methods are going to be different from interface methods, no matter what the intent, by virtue of the fact that interface methods can be multiply inherited.)
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation. And because the design center was interface evolution, it was a critical design goal that default methods be able to be added to interfaces after the fact in a source-compatible and binary-compatible manner.
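A minimal sketch of that design goal, with hypothetical names: a default method can be added to an already-published interface without touching, or even recompiling, existing implementors:

interface Shape {
    double area();

    // Added in a later release of the library; the pre-existing
    // implementation below keeps compiling and running unchanged.
    default String describe() {
        return "shape with area " + area();
    }
}

// Written against the original version of Shape, before describe() existed.
class Square implements Shape {
    public double area() { return 4.0; }
}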
The too-simple answer to "why not final default methods" is that the body would then not simply be the default implementation, it would be the only implementation. While that's a little too simple an answer, it gives us a clue that the question is already heading in a questionable direction.
Another reason why final interface methods are questionable is that they create impossible problems for implementors. For example, suppose you have:
interface A {
    default void foo() { ... }
}

interface B {
}

class C implements A, B {
}
Here, everything is good; C inherits foo() from A. Now suppose B is changed to have a foo method, with a default:
interface B {
    default void foo() { ... }
}
Now, when we go to recompile C, the compiler will tell us that it doesn't know what behavior to inherit for foo(), so C has to override it (and could choose to delegate to A.super.foo() if it wanted to retain the same behavior.) But what if B had made its default final, and A is not under the control of the author of C? Now C is irretrievably broken; it can't compile without overriding foo(), but it can't override foo() if it was final in B.
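Concretely, the required resolution looks like this (a sketch reusing A and B from above, with both now declaring a default foo()):

class C implements A, B {
    @Override
    public void foo() {
        // Explicitly keep A's behavior; this escape hatch is exactly
        // what a final foo() in B would make impossible.
        A.super.foo();
    }
}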
This is just one example, but the point is that finality for methods is really a tool that makes more sense in the world of single-inheritance classes (which generally couple state to behavior) than in the world of interfaces, which merely contribute behavior and can be multiply inherited. It's too hard to reason about "what other interfaces might be mixed into the eventual implementor", and allowing an interface method to be final would likely cause these problems (and they would blow up not on the person who wrote the interface, but on the poor user who tries to implement it.)
Another reason to disallow them is that they wouldn't mean what you think they mean. A default implementation is only considered if the class (or its superclasses) don't provide a declaration (concrete or abstract) of the method. If a default method were final, but a superclass already implemented the method, the default would be ignored, which is probably not what the default author was expecting when declaring it final. (This inheritance behavior is a reflection of the design center for default methods -- interface evolution. It should be possible to add a default method (or a default implementation to an existing interface method) to existing interfaces that already have implementations, without changing the behavior of existing classes that implement the interface, guaranteeing that classes that already worked before default methods were added will work the same way in the presence of default methods.)
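A minimal sketch of that inheritance rule, with hypothetical names; the interface default is silently skipped, which is why a final on it would not mean what it appears to mean:

interface Greeter {
    default String greet() { return "hello from the interface"; }
}

class Base {
    public String greet() { return "hello from the superclass"; }
}

class Derived extends Base implements Greeter {
    // Inherits Base.greet(); Greeter's default is never consulted,
    // so a hypothetical 'final' on it would be quietly defeated.
}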

In the lambda mailing list there are plenty of discussions about it. One of those that seems to contain a lot of discussion about all that stuff is the following: On Varied interface method visibility (was Final defenders).
In this discussion, Talden, the author of the original question, asks something very similar to your question:
The decision to make all interface members public was indeed an unfortunate decision. That any use of an interface in internal design exposes implementation-private details is a big one.

It's a tough one to fix without adding some obscure or compatibility-breaking nuances to the language. A compatibility break of that magnitude and potential subtlety would seem unconscionable, so a solution has to exist that doesn't break existing code.

Could reintroducing the 'package' keyword as an access-specifier be viable? The absence of a specifier in an interface would imply public access, and the absence of a specifier in a class implies package access. Which specifiers make sense in an interface is unclear - especially if, to minimise the knowledge burden on developers, we have to ensure that access-specifiers mean the same thing in both class and interface if they're present.

In the absence of default methods I'd have speculated that the specifier of a member in an interface has to be at least as visible as the interface itself (so the interface can actually be implemented in all visible contexts) - with default methods that's not so certain.

Has there been any clear communication as to whether this is even a possible in-scope discussion? If not, should it be held elsewhere?
Eventually Brian Goetz's answer was:
Yes, this is already being explored.
However, let me set some realistic expectations -- language / VM features have a long lead time, even trivial-seeming ones like this. The time for proposing new language feature ideas for Java SE 8 has pretty much passed.
So, most likely it was never implemented because it was never part of the scope. It was never proposed in time to be considered.
In another heated discussion about final defender methods on the subject, Brian said again:
And you have gotten exactly what you wished for. That's exactly what this feature adds -- multiple inheritance of behavior. Of course we understand that people will use them as traits. And we've worked hard to ensure that the model of inheritance they offer is simple and clean enough that people can get good results doing so in a broad variety of situations. We have, at the same time, chosen not to push them beyond the boundary of what works simply and cleanly, and that leads to "aw, you didn't go far enough" reactions in some cases. But really, most of this thread seems to be grumbling that the glass is merely 98% full. I'll take that 98% and get on with it!
So this reinforces my theory that it simply was not part of the scope or part of their design. What they did was to provide enough functionality to deal with the issues of API evolution.

It will be hard to find and identify "THE" answer, for the reasons mentioned in the comments from @EJP: there are roughly 2 (+/- 2) people in the world who can give the definitive answer at all. And in doubt, the answer might just be something like "Supporting final default methods did not seem to be worth the effort of restructuring the internal call resolution mechanisms". This is speculation, of course, but it is at least backed by subtle evidence, like this statement (by one of the two persons) on the OpenJDK mailing list:
"I suppose if "final default" methods were allowed, they might need rewriting from internal invokespecial to user-visible invokeinterface."
and by trivial facts, such as that a method is simply not considered to be a (really) final method when it is a default method, as currently implemented in the Method::is_final_method method in the OpenJDK.
Further, really "authoritative" information is indeed hard to find, even with extensive web searches and by reading commit logs. I thought that it might be related to potential ambiguities during the resolution of interface method calls with the invokeinterface instruction and class method calls, corresponding to the invokevirtual instruction: for the invokevirtual instruction, there may be a simple vtable lookup, because the method must either be inherited from a superclass, or implemented by the class directly. In contrast to that, an invokeinterface call must examine the respective call site to find out which interface this call actually refers to (this is explained in more detail on the InterfaceCalls page of the HotSpot Wiki). However, final methods either do not get inserted into the vtable at all, or replace existing entries in the vtable (see klassVtable.cpp, line 333), and similarly, default methods replace existing entries in the vtable (see klassVtable.cpp, line 202). So the actual reason (and thus, the answer) must be hidden deeper inside the (rather complex) method call resolution mechanisms, but maybe these references will nevertheless be considered helpful, be it only for others who manage to derive the actual answer from them.
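The distinction is visible at the source level, too. In the following sketch (hypothetical names), the same call compiles to invokeinterface or invokevirtual depending only on the static type of the receiver, which can be verified with javap -c:

import java.util.ArrayList;
import java.util.List;

class DispatchDemo {
    static int viaInterface(List<String> l) {
        return l.size(); // compiles to invokeinterface
    }

    static int viaClass(ArrayList<String> l) {
        return l.size(); // compiles to invokevirtual
    }
}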

I don't think it is necessary to specify final on a convenience interface method. I can agree that it may be helpful, but seemingly the costs have outweighed the benefits.
What you are supposed to do, either way, is to write proper Javadoc for the default method, stating exactly what the method is and is not allowed to do. That way the classes implementing the interface "are not allowed" to change the implementation, though there are no guarantees.
Anyone could write a Collection that adheres to the interface and then does things in the methods that are absolutely counterintuitive. There is no way to shield yourself from that, other than writing extensive unit tests.

We add the default keyword to a method inside an interface when we know that the class implementing the interface may or may not override our implementation. But what if we want to add a method that we don't want any implementing class to override? Well, two options were available to us:
Add a default final method.
Add a static method.
Now, Java says that if we have a class implementing two or more interfaces such that they have a default method with exactly the same method name and signature, i.e. they are duplicates, then we need to provide an implementation of that method in our class. Now, in the case of default final methods, we couldn't provide an implementation and we would be stuck. And that's why the final keyword isn't used in interfaces.
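Of the two options, only the static method actually works today. A minimal sketch (hypothetical names, echoing the Sender interface from the question): a static interface method is not inherited by implementing classes, so it cannot be overridden at all:

interface Sender {
    void send(String message);

    // Callers must write Sender.sendEmpty(s); implementing classes cannot
    // override this, which approximates a 'final' convenience method.
    static void sendEmpty(Sender sender) {
        sender.send(null);
    }
}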

Related

Override the Object.toString() Method with a default interface method in Java [duplicate]

Default methods are a nice new tool in our Java toolbox. However, I tried to write an interface that defines a default version of the toString method. Java tells me that this is forbidden, since methods declared in java.lang.Object may not be defaulted. Why is this the case?
I know that there is the "base class always wins" rule, so by default (pun ;), any default implementation of an Object method would be overwritten by the method from Object anyway. However, I see no reason why there shouldn't be an exception for methods from Object in the spec. Especially for toString it might be very useful to have a default implementation.
So, what is the reason why Java designers decided to not allow default methods overriding methods from Object?
This is yet another of those language design issues that seems "obviously a good idea" until you start digging and you realize that it's actually a bad idea.
This mail has a lot on the subject (and on other subjects too.) There were several design forces that converged to bring us to the current design:
The desire to keep the inheritance model simple;
The fact that once you look past the obvious examples (e.g., turning AbstractList into an interface), you realize that inheriting equals/hashCode/toString is strongly tied to single inheritance and state, and interfaces are multiply inherited and stateless;
That it potentially opened the door to some surprising behaviors.
You've already touched on the "keep it simple" goal; the inheritance and conflict-resolution rules are designed to be very simple (classes win over interfaces, derived interfaces win over superinterfaces, and any other conflicts are resolved by the implementing class.) Of course these rules could be tweaked to make an exception, but I think you'll find when you start pulling on that string, that the incremental complexity is not as small as you might think.
Of course, there's some degree of benefit that would justify more complexity, but in this case it's not there. The methods we're talking about here are equals, hashCode, and toString. These methods are all intrinsically about object state, and it is the class that owns the state, not the interface, who is in the best position to determine what equality means for that class (especially as the contract for equality is quite strong; see Effective Java for some surprising consequences); interface writers are just too far removed.
It's easy to pull out the AbstractList example; it would be lovely if we could get rid of AbstractList and put the behavior into the List interface. But once you move beyond this obvious example, there are not many other good examples to be found. At root, AbstractList is designed for single inheritance. But interfaces must be designed for multiple inheritance.
Further, imagine you are writing this class:
class Foo implements com.libraryA.Bar, com.libraryB.Moo {
    // Implementation of Foo, that does NOT override equals
}
The Foo writer looks at the supertypes, sees no implementation of equals, and concludes that to get reference equality, all he need do is inherit equals from Object. Then, next week, the library maintainer for Bar "helpfully" adds a default equals implementation. Oops! Now the semantics of Foo have been broken by an interface in another maintenance domain "helpfully" adding a default for a common method.
Defaults are supposed to be defaults. Adding a default to an interface where there was none (anywhere in the hierarchy) should not affect the semantics of concrete implementing classes. But if defaults could "override" Object methods, that wouldn't be true.
So, while it seems like a harmless feature, it is in fact quite harmful: it adds a lot of complexity for little incremental expressivity, and it makes it far too easy for well-intentioned, harmless-looking changes to separately compiled interfaces to undermine the intended semantics of implementing classes.
It is forbidden to define default methods in interfaces for methods from java.lang.Object, since the default methods would never be "reachable".
Default interface methods can be overridden in classes implementing the interface, and the class implementation of the method has a higher precedence than the interface implementation, even if the method is implemented in a superclass. Since all classes inherit from java.lang.Object, the methods of java.lang.Object would have precedence over the default methods in the interface and would be invoked instead.
Brian Goetz from Oracle provides a few more details on the design decision in this mailing list post.
To give a very pedantic answer, it is only forbidden to define a default method for a public method from java.lang.Object. There are 11 methods to consider, which can be categorized in three ways to answer this question.
Six of the Object methods cannot have default methods because they are final and cannot be overridden at all: getClass(), notify(), notifyAll(), wait(), wait(long), and wait(long, int).
Three of the Object methods cannot have default methods for the reasons given above by Brian Goetz: equals(Object), hashCode(), and toString().
Two of the Object methods can have default methods, though the value of such defaults is questionable at best: clone() and finalize().
public class Main {
    public static void main(String... args) {
        new FOO().clone();
        new FOO().finalize();
    }

    interface ClonerFinalizer {
        default Object clone() { System.out.println("default clone"); return this; }
        default void finalize() { System.out.println("default finalize"); }
    }

    static class FOO implements ClonerFinalizer {
        @Override
        public Object clone() {
            return ClonerFinalizer.super.clone();
        }

        @Override
        public void finalize() {
            ClonerFinalizer.super.finalize();
        }
    }
}
I cannot see into the heads of the Java language authors, so we may only guess. But I see many reasons and agree with them absolutely on this issue.
The main reason for introducing default methods is to be able to add new methods to interfaces without breaking the backward compatibility of older implementations. The default methods may also be used to provide "convenience" methods without the necessity to define them in each of the implementing classes.
None of these applies to toString and other methods of Object. Simply put, default methods were designed to provide the default behavior where there is no other definition. Not to provide implementations that will "compete" with other existing implementations.
The "base class always wins" rule has its solid reasons, too. It is supposed that classes define real implementations, while interfaces define default implementations, which are somewhat weaker.
Also, introducing ANY exception to a general rule causes unnecessary complexity and raises other questions. Object is (more or less) a class like any other, so why should it have different behaviour?
All in all, the solution you propose would probably bring more cons than pros.
The reasoning is very simple: Object is the base class for all Java classes. So even if we had Object's method defined as a default method in some interface, it would be useless, because Object's method will always be used. That is why, to avoid confusion, we cannot have default methods that override Object class methods.


Why interfaces must be declared in Java?

Sometimes we have several classes that have some methods with the same signature, but that don't correspond to a declared Java interface. For example, both JTextField and JButton (among several others in javax.swing.*) have a method
public void addActionListener(ActionListener l)
Now, suppose I wish to do something with objects that have that method; then, I'd like to have an interface (or perhaps to define it myself), e.g.
public interface CanAddActionListener {
    public void addActionListener(ActionListener l);
}
so that I could write:
public void myMethod(CanAddActionListener aaa, ActionListener li) {
    aaa.addActionListener(li);
    ....
But, sadly, I can't:
JButton button;
ActionListener li;
...
this.myMethod((CanAddActionListener)button,li);
This cast would be illegal. The compiler knows that JButton is not a CanAddActionListener, because the class has not declared to implement that interface ... however it "actually" implements it.
This is sometimes an inconvenience - and Java itself has modified several core classes to implement a new interface made of old methods (String implements CharSequence, for example).
My question is: why is this so? I understand the utility of declaring that a class implements an interface. But anyway, looking at my example, why can't the compiler deduce that the class JButton "satisfies" the interface declaration (by looking inside it) and accept the cast? Is it an issue of compiler efficiency, or are there more fundamental problems?
My summary of the answers: this is a case in which Java could have made allowance for some "structural typing" (a sort of duck typing, but checked at compile time). It didn't. Apart from some (unclear to me) performance and implementation difficulties, there is a much more fundamental concept here: in Java, the declaration of an interface (and in general, of everything) is not meant to be merely structural (to have methods with these signatures) but semantic: the methods are supposed to implement some specific behavior/intent. So, a class which structurally satisfies some interface (i.e., it has the methods with the required signatures) does not necessarily satisfy it semantically (an extreme example: recall the "marker interfaces", which do not even have methods!). Hence, Java can assert that a class implements an interface because (and only because) this has been explicitly declared. Other languages (Go, Scala) have other philosophies.
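For what it's worth, the conventional Java workaround is explicit adaptation, and since Java 8 a method reference can do it in one line. A sketch using the CanAddActionListener interface defined above (it must be in scope; the rest of the names are standard Swing):

import java.awt.event.ActionListener;
import javax.swing.JButton;

public class AdapterDemo {
    static void myMethod(CanAddActionListener aaa, ActionListener li) {
        aaa.addActionListener(li);
    }

    public static void main(String[] args) {
        JButton button = new JButton("OK");
        ActionListener li = e -> System.out.println("clicked");
        // JButton never declared the interface, but its method matches the
        // single abstract method, so a method reference adapts it explicitly.
        myMethod(button::addActionListener, li);
    }
}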
Java's design choice to make implementing classes expressly declare the interface they implement is just that -- a design choice. To be sure, the JVM has been optimized for this choice and implementing another choice (say, Scala's structural typing) may now come at additional cost unless and until some new JVM instructions are added.
So what exactly is the design choice about? It all comes down to the semantics of methods. Consider: are the following methods semantically the same?
draw(String graphicalShapeName)
draw(String handgunName)
draw(String playingCardName)
All three methods have the signature draw(String). A human might infer that they have different semantics from the parameter names, or by reading some documentation. Is there any way for the machine to tell that they are different?
Java's design choice is to demand that the developer of a class explicitly state that a method conforms to the semantics of a pre-defined interface:
interface GraphicalDisplay {
    ...
    void draw(String graphicalShapeName);
    ...
}

class JavascriptCanvas implements GraphicalDisplay {
    ...
    public void draw(String shape) { ... }
    ...
}
There is no doubt that the draw method in JavascriptCanvas is intended to match the draw method for a graphical display. If one attempted to pass an object that was going to pull out a handgun, the machine could detect the error.
Go's design choice is more liberal and allows interfaces to be defined after the fact. A concrete class need not declare what interfaces it implements. Rather, the designer of a new card game component may declare that an object that supplies playing cards must have a method that matches the signature draw(String). This has the advantage that any existing class with that method can be used without having to change its source code, but the disadvantage that the class might pull out a handgun instead of a playing card.
The design choice of duck-typing languages is to dispense with formal interfaces altogether and simply match on method signatures. Any concept of interface (or "protocol") is purely idiomatic, with no direct language support.
These are but three of many possible design choices. The three can be glibly summarized like this:
Java: the programmer must explicitly declare his intent, and the machine will check it. The assumption is that the programmer is likely to make a semantic mistake (graphics / handgun / card).
Go: the programmer must declare at least part of his intent, but the machine has less to go on when checking it. The assumption is that the programmer is likely to make a clerical mistake (integer / string), but not likely to make a semantic mistake (graphics / handgun / card).
Duck-typing: the programmer needn't express any intent, and there is nothing for the machine to check. The assumption is that the programmer is unlikely to make either a clerical or semantic mistake.
It is beyond the scope of this answer to address whether interfaces, and typing in general, are adequate to test for clerical and semantic mistakes. A full discussion would have to consider build-time compiler technology, automated testing methodology, run-time/hot-spot compilation and a host of other issues.
It is acknowledged that the draw(String) example is deliberately exaggerated to make a point. Real examples would involve richer types that would give more clues to disambiguate the methods.
Why can't the compiler deduce that the class JButton "satisfies" the interface declaration (looking inside it) and accept the cast? Is it an issue of compiler efficiency or there are more fundamental problems?
It is a more fundamental issue.
The point of an interface is to specify that there is a common API / set of behaviors that a number of classes support. So, when a class is declared as implements SomeInterface, any methods in the class whose signatures match method signatures in the interface are assumed to be methods that provide that behavior.
By contrast, if the language simply matched methods based on signatures ... irrespective of the interfaces ... then we'd be liable to get false matches, when two methods with the same signature actually mean / do something semantically unrelated.
(The name for the latter approach is "duck typing" ... and Java doesn't support it.)
The Wikipedia page on type systems says that duck typing is neither "nominative typing" nor "structural typing". By contrast, Pierce doesn't even mention "duck typing", but he defines nominative (or "nominal", as he calls it) typing and structural typing as follows:
"Type systems like Java's, in which names [of types] are significant and subtyping is explicitly declared, are called nominal. Type systems like most of the ones in this book in which names are inessential and subtyping is defined directly on the structure of the types, are called structural."
So by Pierce's definition, duck typing is a form of structural typing, albeit one that is typically implemented using runtime checks. (Pierce's definitions are independent of compile-time versus runtime-checking.)
Reference:
"Types and Programming Languages" - Benjamin C Pierce, MIT Press, 2002, ISBN 0-26216209-1.
Likely it's a performance feature.
Since Java is statically typed, the compiler can assert the conformance of a class to an identified interface. Once validated, that assertion can be represented in the compiled class as simply a reference to the conforming interface definition.
Later, at runtime, when an object is cast to the interface type, all the runtime needs to do is check the metadata of the class to see if the class it is being cast to is compatible (via the interface or the inheritance hierarchy).
This is a reasonably cheap check to perform since the compiler has done most of the work.
Mind, it's not authoritative. A class can SAY that it conforms to an interface, but that doesn't mean that the actual method about to be executed will actually work. The conforming class may well be out of date and the method may simply not exist.
But a key component of the performance of Java is that while it still must do a form of dynamic method dispatch at runtime, there's a contract that the method isn't going to suddenly vanish behind the runtime's back. So once the method is located, its location can be cached for the future. This is in contrast to a dynamic language, where methods may come and go, and which must continue to hunt the methods down each time one is invoked. Obviously, dynamic languages have mechanisms to make this perform well.
Now, if the runtime were to ascertain that an object complies with an interface by doing all of the work itself, you can see how much more expensive that can be, especially with a large interface. A JDBC ResultSet, for example, has over 140 methods.
Duck typing is effectively dynamic interface matching. Check what methods are called on an object, and map it at runtime.
All of that kind of information can be cached and built at runtime, etc. All of this can be done (and is, in other languages), but having much of it done at compile time is actually quite efficient both for the runtime's CPU and its memory. While we use Java with multi-GB heaps for long-running servers, it's actually pretty suitable for small deployments and lean runtimes, even outside of J2ME. So there is still motivation to try and keep the runtime footprint as lean as possible.
Duck typing can be dangerous for the reasons Stephen C discussed, but it is not necessarily the evil that breaks all static typing. A static and safer version of duck typing lies at the heart of Go's type system, and a version is available in Scala, where it is called "structural typing." These versions still perform a compile-time check to make sure the object obeys the requirements, but have potential problems because they break the design paradigm in which implementing an interface is always an intentional decision.
See http://markthomas.info/blog/?p=66 and http://programming-scala.labs.oreilly.com/ch12.html and http://beust.com/weblog/2008/02/11/structural-typing-vs-duck-typing/ for a discussion of the Scala feature.
I can't say I know why certain design decisions were made by the Java development team. I also caveat my answer with the fact that those individuals are far smarter than I'll ever be with regards to software development and (particularly) language design. But, here's a crack at trying to answer your question.
In order to understand why they may not have chosen to use an interface like "CanAddActionListener" you have to look at the advantages of NOT using an interface and, instead, preferring abstract (and, ultimately, concrete) classes.
As you may know, when declaring abstract functionality, you can provide default functionality to subclasses. Okay... so what? Big deal, right? Well, particularly in the case of designing a language, it is a big deal. When designing a language, you will need to maintain those base classes over the life of the language (and you can be sure that there will be changes as your language evolves). If you had chosen to use interfaces instead of providing base functionality in an abstract class, any later addition to an interface would break every class that implements it. This is particularly important after publication - once customers (developers in this case) start using your libraries, you can't change up the interfaces on a whim or you are going to have a lot of pissed-off developers!
So, my guess is that the Java development team fully realized that many of their AbstractJ* classes shared the same method names, but that it would not be advantageous to have them share a common interface, as it would make their API rigid and inflexible.
To sum it up (thank you to this site here):
Abstract classes can easily be extended by adding new (non-abstract) methods.
An interface cannot be modified without breaking its contract with the classes that implement it. Once an interface has been shipped, its member set is permanently fixed. An API based on interfaces can only be extended by adding new interfaces.
Of course, this is not to say that you could do something like this in your own code, (extend AbstractJButton and implement CanAddActionListener interface) but be aware of the pitfalls in doing so.
Interfaces represent a form of substitution class. A reference of a type which implements or inherits from a particular interface may be passed to a method which expects that interface type. An interface will generally not only specify that all implementing classes must have methods with certain names and signatures, but will also have an associated contract which specifies that those methods must behave in certain designated ways. It is entirely possible that even if two interfaces contain members with the same names and signatures, an implementation may satisfy the contract of one but not the other.
As a simple example, if one were designing a framework from scratch, one might start out with an Enumerable<T> interface (which can be used as often as desired to create an enumerator which will output a sequence of T's, but different requests may yield different sequences), but then derive from it an interface ImmutableEnumerable<T> which would behave as above but guarantee that every request would return the same sequence. A mutable collection type would support all of the members required for ImmutableEnumerable<T>, but since requests for enumeration received after a mutation would report a different sequence from requests made before, it would not abide by the ImmutableEnumerable contract.
The ability of an interface to be regarded as encapsulating a contract beyond the signatures of its members is one of the things that makes interface-based programming more semantically powerful than simple duck-typing.
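A sketch of the two hypothetical interfaces described above, following the text's names: structurally they are identical, and only the documented contract, which no type checker can see, distinguishes them:

import java.util.Iterator;

interface Enumerable<T> {
    // Each call may yield a different sequence.
    Iterator<T> enumerate();
}

interface ImmutableEnumerable<T> extends Enumerable<T> {
    // Contract only, not expressible in the signature:
    // every call to enumerate() must yield the same sequence.
}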

Java method keyword "final" and its use

When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:
interface Garble {
    int zork();
}

interface Gnarf extends Garble {
    /**
     * This is the same as calling {@link #zblah(int)} with a value of 0
     */
    int zblah();

    int zblah(int defaultZblah);
}
And then
abstract class AbstractGarble implements Garble {
    @Override
    public final int zork() { ... }
}

abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
    // Here I absolutely want to fix the default behaviour of zblah
    // No Gnarf should be allowed to set 1 as the default, for instance
    @Override
    public final int zblah() {
        return zblah(0);
    }

    // This method is not implemented here, but in a subclass
    @Override
    public abstract int zblah(int defaultZblah);
}
I do this for several reasons:
It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear, what methods I have to implement, and what methods I may not override (in case I forgot the details about the hierarchy)
I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.
So the final keyword works perfectly for me. My question is:
Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?
Why is it used so rarely in the wild?
Because you have to write one more word to make a variable or method final.
Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?
Usually I see such examples in third-party libraries. In some cases I want to extend some class and change some behavior. It is especially dangerous in non-open-source libraries without interface/implementation separation.
I always use final when I write an abstract class and want to make it clear which methods are fixed. I think this is the most important function of this keyword.
But when you're not expecting a class to be extended anyway, why the fuss? Of course, if you're writing a library for someone else, you try to safeguard it as much as you can, but when you're writing "end user code", there is a point where trying to make your code foolproof will only serve to annoy the maintenance developers who will try to figure out how to work around the maze you built.
The same goes for making classes final. Although some classes should by their very nature be final, all too often a short-sighted developer will simply mark all the leaf classes in the inheritance tree as final.
After all, coding serves two distinct purposes: to give instructions to the computer and to pass information to other developers reading the code. The second one is ignored most of the time, even though it's almost as important as making your code work. Putting in unnecessary final keywords is a good example of this: it doesn't change the way the code behaves, so its sole purpose should be communication. But what do you communicate? If you mark a method as final, a maintainer will assume you had a good reason to do so. If it turns out that you hadn't, all you achieved was to confuse others.
My approach is (and I may be utterly wrong here obviously): don't write anything down unless it changes the way your code works or conveys useful information.
Why is it used so rarely in the wild?
That doesn't match my experience. I see it used very frequently in all kinds of libraries. Just one (random) example: look at the abstract classes in Guava (http://code.google.com/p/guava-libraries/), e.g. com.google.common.collect.AbstractIterator. peek(), hasNext(), next() and endOfData() are final, leaving just computeNext() to the implementor. This is a very common example IMO.
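A minimal sketch of using that class, assuming Guava is on the classpath; the iterator itself is made up for the example:

import com.google.common.collect.AbstractIterator;

// Counts down from a starting value; only computeNext() can be supplied,
// because hasNext(), next() and peek() are final in AbstractIterator.
class CountdownIterator extends AbstractIterator<Integer> {
    private int current;

    CountdownIterator(int start) { this.current = start; }

    @Override
    protected Integer computeNext() {
        if (current > 0) {
            return current--;
        }
        return endOfData(); // signals exhaustion to the final hasNext()/next()
    }
}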
The main reason against using final is to allow implementors to change an algorithm - you mentioned the "template method" pattern: It can still make sense to modify a template method, or to enhance it with some pre-/post actions (without spamming the entire class with dozens of pre-/post-hooks).
The main reason for using final is to avoid accidental implementation mistakes, or when the method relies on internals of the class which aren't specified (and thus may change in the future).
I think it is not commonly used for two reasons:
People don't know it exists
People are not in the habit of thinking about it when they build a method.
I typically fall into the second reason. I do override concrete methods on a somewhat common basis. In some cases this is bad, but there are many times it doesn't conflict with design principles and in fact might be the best solution. Therefore when I am implementing an interface, I typically don't think deeply enough at each method to decide if a final keyword would be useful. Especially since I work on a lot of business applications that change frequently.
Why is it used so rarely in the wild?
Because it should not be necessary. It also does not fully close down the implementation, so in effect it might give you a false sense of security.
It should not be necessary, due to the Liskov substitution principle. The method has a contract, and in a correctly designed inheritance hierarchy that contract is fulfilled (otherwise it's a bug). Example:
interface Animal {
    void bark();
}

abstract class AbstractAnimal implements Animal {
    @Override
    public final void bark() {
        playSound("whoof.wav"); // you were thinking about a dog, weren't you?
    }

    void playSound(String file) { /* assumed helper for the example */ }
}

class Dog extends AbstractAnimal {
    // ok
}

class Cat extends AbstractAnimal {
    // oops - no barking allowed!
}
By not allowing a subclass to do the right thing (for it) you might introduce a bug. Or you might require another developer to put an inheritance tree of your Garble interface right beside yours because your final method does not allow it to do what it should do.
The false sense of security is typical of a non-static final method. A static method should not use state from the instance (it cannot). A non-static method probably does. Your final (non-static) method probably does too, but it does not own the instance variables - they can be different than expected. So you place a burden on the developer of the class inheriting from AbstractGarble: to ensure instance fields are in a state expected by your implementation at any point in time, without giving the developer a way to prepare the state before calling your method, as in:
int zblah() {
    prepareState();
    return super.zblah();
}
In my opinion you should not close down an implementation in such a fashion unless you have a very good reason. If you document your method contract and provide a JUnit test, you should be able to trust other developers. Using the JUnit test they can actually verify the Liskov substitution principle.
As a side note, I do occasionally close a method. Especially if it's on the boundary part of a framework. My method does some bookkeeping and then continues to an abstract method to be implemented by someone else:
final boolean login() {
    bookkeeping();
    return doLogin();
}

abstract boolean doLogin();
That way no-one forgets to do the bookkeeping but they can provide a custom login. Whether you like such a setup is of course up to you :)

Why is subclassing not allowed for many of the SWT Controls?

I often find myself wanting to do it. It can be very useful when you want to store some useful information or extra state.
So my question is, is there a very good/strong reason why this is forbidden?
Thanks
EDIT:
Thanks a lot for all these answers. So it sounds like there's no right-or-wrong answer to this.
Assuming I accept the fact that these classes are not to be subclassed, what's the point of not marking a Control class final, but prohibiting subclassing - effectively demoting the exception/error from compile-time to run-time?
EDIT^2:
See my own answer to this: apparently, these classes are overridable, but it requires explicit acknowledgement by the overrider.
Thanks
It doesn't look like anybody mentioned this in any of the answers, but SWT does provide an overridable checkSubclass() method, which is precisely where the "unextendable" exception is thrown. Override the method with a no-op and extending effectively becomes legal. I guess leaving this option open is ultimately the reason that the class is not made final and the extension error is not made compile-time instead of run-time.
Example:
@Override
protected void checkSubclass() {
    // allow subclass
    System.out.println("info : checking menu subclass");
}
Designing components for inheritance is hard, and can limit future implementation changes (certainly if you leave some methods overridable, and call them from other methods). Prohibiting subclassing restricts users, but means it's easier to write robust code.
This follows Josh Bloch's suggestion of "design for inheritance or prohibit it". This is a bit of a religious topic in the dev community - I agree with the sentiment, but others prefer everything to be as open to extension as possible.
It is very hard to create a class that can be safely subclassed. You have to think about endless use cases and protect your class very well. I believe that this is the general reason for marking an API class as final.
As for your follow-up question:
what's the point of not marking a Control class final, but prohibiting subclassing - effectively demoting the exception/error from compile-time to run-time?
If SWT marked the Control class final, it would not be possible for SWT itself to subclass it - but internally it has to. So they defer the check to runtime.
BTW, if you want an insane hack, you can still subclass Control or any other SWT class, by putting your subclass into the org.eclipse.swt.widgets package. But I never really had to do that.
The method description of org.eclipse.swt.widgets.Widget.checkSubclass() says:
Checks that this class can be subclassed.

The SWT class library is intended to be subclassed only at specific, controlled points (most notably, Composite and Canvas when implementing new widgets). This method enforces this rule unless it is overridden.

IMPORTANT: By providing an implementation of this method that allows a subclass of a class which does not normally allow subclassing to be created, the implementer agrees to be fully responsible for the fact that any such subclass will likely fail between SWT releases and will be strongly platform specific. No support is provided for user-written classes which are implemented in this fashion.

The ability to subclass outside of the allowed SWT classes is intended purely to enable those not on the SWT development team to implement patches in order to get around specific limitations in advance of when those limitations can be addressed by the team. Subclassing should not be attempted without an intimate and detailed understanding of the hierarchy.

Throws: SWTException
ERROR_INVALID_SUBCLASS - if this class is not an allowed subclass
