Java 8 gave us many fun ways to use functional interfaces, and with them a new annotation: @FunctionalInterface. Its job is to tell the compiler to yell at us if we fail to stick to the rules of a functional interface (only one abstract method that needs overriding, please).
There are 43 interfaces in the java.util.function package with this annotation. A search of jdk.1.8.0/src for @FunctionalInterface turns up only 57 hits. Why are the other interfaces (such as AutoCloseable) that could have added @FunctionalInterface still missing it?
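For illustration, here is a quick sketch of what the compiler actually checks (StringMapper is just a made-up example interface):

@FunctionalInterface
interface StringMapper {
    String map(String input);

    // A second abstract method would be a compile-time error:
    // String mapTwice(String input);  // error: StringMapper is not a functional interface

    // default and static methods don't count as abstract, so they're allowed:
    default String mapOrEmpty(String input) {
        return input == null ? "" : map(input);
    }
}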
There is a bit of a vague hint in the annotation's documentation:
"An informative annotation type used to indicate that an interface type declaration is intended to be a functional interface"
Is there any good reason NOT to intend that an interface I've designed (one that may simply happen to be a functional interface) be used as one? Is leaving the annotation off an indication of anything besides not realizing it could have been added?
Isn't adding abstract methods to any published interface going to screw anyone implementing it, functional or not? I feel cynical assuming they just didn't bother to hunt them all down but what other explanation is there?
Update: After looking over "Should 'Comparable' be a 'Functional interface'?" I find I still have nagging questions. When a single-method interface and a functional interface are structurally identical, what's left to be different? Is the difference simply the names? Comparable and Comparator are close enough to the same semantically. It turns out they are different structurally, though, so still not the best example...
Is there a case where an SMI is structurally fine to use as a functional interface but is still discouraged because of the semantic meaning of the name of the interface and of its method? Or perhaps because of the contract implied by the Javadocs?
Well, an annotation documenting an intention would be useless if you assumed the intention were always there anyway.
You named AutoCloseable as an example, which is obviously not intended to be implemented as a function, as there’s Runnable, which is much more convenient for a function with a ()->void signature. It’s intended that a class implementing AutoCloseable manages an external resource, which anonymous classes implemented via lambda expressions don’t do.
A clearer example is Comparable, an interface not only not intended to be implemented as a lambda expression, but impossible to implement correctly using a lambda expression.
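To spell that out with a minimal sketch (the class name is made up): the compiler accepts a lambda for Comparable, but the resulting object cannot honor the contract.

public class ComparableLambdaDemo {
    public static void main(String[] args) {
        // Compiles: Comparable has a single abstract method, so a lambda fits.
        Comparable<String> c = other -> 42;

        // But the contract requires sgn(x.compareTo(y)) == -sgn(y.compareTo(x)),
        // which presumes the implementing object is itself a String here.
        // The lambda object is not a String, so that relation cannot hold.
        System.out.println(c.compareTo("anything")); // prints 42, meaningless as an ordering
    }
}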
Possible reasons for not marking an interface with @FunctionalInterface, by example:
The interface has programming-language semantics, e.g. AutoCloseable or Iterable (that’s unlikely to happen for your own interfaces)
It’s not expected that the interface has arbitrary implementations and/or it is more an identifier than the actual implementation, e.g. java.net.ProtocolFamily or java.lang.reflect.GenericArrayType (note that the latter would also inherit a default implementation for getTypeName() that is useless for lambda implementations, as it relies on toString())
The instances of this interface should have an identity, e.g. java.net.ProtocolFamily, java.nio.file.WatchEvent.Modifier, etc. Note that these are typically implemented by an enum
Another example is java.time.chrono.Era which happens to have only a single abstract method but its specification says “Instances of Era may be compared using the == operator.”
The interface is intended to alter the behavior of an operation for which an implementation of the interface without inheriting/implementing anything else makes no sense, e.g. java.rmi.server.Unreferenced
It’s an abstraction of common operations of classes which should have more than just these operations, e.g. java.io.Closeable, java.io.Flushable, java.lang.Readable
The expected inheritance is part of the contract and forbids lambda expression implementations, e.g. in java.awt: ActiveEvent should be implemented by an AWTEvent, PrinterGraphics by a Graphics, and the same applies to java.awt.print.PrinterGraphics (hey, two interfaces for exactly the same thing…), whereas javax.print.FlavorException should be implemented by a javax.print.PrintException subclass
I don’t know whether the various event listener interfaces aren’t marked with @FunctionalInterface for symmetry with other multi-method event listeners that can’t be functional interfaces, but actually event listeners are good candidates for lambda expressions (see the sketch below). If you want to remove a listener at a later time, you have to store the instance, but that’s no different from, e.g., inner-class listener implementations.
The library maintainer has a large codebase with more than 200 candidate types and not the resources to discuss, for every interface, whether it should be annotated, and hence focuses on the primary candidates for being used in a functional context. I’m sure that, e.g., java.io.ObjectInputValidation, java.lang.reflect.InvocationHandler, and java.util.concurrent’s RejectedExecutionHandler and ThreadFactory wouldn’t be bad as @FunctionalInterface, but I have no idea whether, e.g., java.security.spec.ECField makes a good candidate. The more general the library is, the more likely users of the library will be able to answer that question for a particular interface they are interested in, but it would be unfair to insist that the library maintainer answer it for all interfaces.
In this context it makes more sense to see the presence of @FunctionalInterface as a message that an interface is definitely intended to be usable with lambda expressions than to treat the absence of the annotation as an indicator of its not being intended to be used this way. This is exactly how the compiler handles it: you can implement every interface that has a single abstract method using a lambda expression, but when the annotation is present the compiler will ensure the interface stays usable that way.
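For illustration, a minimal sketch (java.awt.event.ActionListener has a single abstract method and, as discussed above, the listener interfaces were apparently left unannotated): the lambda works anyway, and keeping the reference around is all you need if the listener must be removed later.

import java.awt.event.ActionListener;
import javax.swing.JButton;

public class ListenerLambdaDemo {
    public static void main(String[] args) {
        JButton button = new JButton("Click me");

        // No @FunctionalInterface on ActionListener, yet a lambda is accepted:
        ActionListener listener = e -> System.out.println("clicked");
        button.addActionListener(listener);

        // Store the reference if you need to unregister it later,
        // just as you would with an inner-class implementation.
        button.removeActionListener(listener);
    }
}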
Planned expansion. Just because an interface matches the requirements of an SMI now doesn't mean that expansion won't be needed later.
In Java 8, a functional interface is an interface having exactly one abstract method, called the functional method, to which the lambda expression's parameter and return types are matched.
The java.util.function package contains general-purpose functional interfaces used by the JDK and also available to end users. While they are not the complete set of functional interfaces to which lambda expressions might be applicable, they provide enough to cover common requirements. You are free to create your own functional interfaces whenever the existing set is not enough.
There are many interfaces available that deserve to be designated as functional interfaces, but the java.util.function package already provides functional interfaces for almost all of our purposes.
For example, look at the following code.
public interface Comparable<T> {
    public int compareTo(T o);
}

@FunctionalInterface
public interface ToIntFunction<T> {
    int applyAsInt(T value);
}

public static void main(String[] args) {
    ToIntFunction<String> f = str -> Integer.parseInt(str);
    Comparable<String> c = str -> Integer.parseInt(str);
}
Comparable could also take an object and derive some int value from it, but a more general, dedicated interface, ToIntFunction, is provided to perform this task. There is no hard rule that every deserving interface must be annotated with @FunctionalInterface, but to gain the advantage of the lambda feature, an interface must fulfill all the criteria defined by FunctionalInterface.
Related
I have been working in Java for some time. I know there is something known as an interface in Java. While reading about interfaces I came to know there is also a marker interface. Recently, when I started reading about Java 8, I came across another kind of interface: the functional interface.
I am just wondering: what are the different kinds of interfaces available in Java?
The Java language specification doesn't itself define the term marker interface; the term has been coined by authors, developers, and designers. One common question is whether we can create our own marker interface, and the answer is a qualified yes:
We can't create a marker interface that works exactly like Serializable or Cloneable, but we can simulate their functionality by writing extra code around a custom marker interface.
An empty interface is known as a tag or marker interface. For example, Serializable, EventListener, and Remote (java.rmi.Remote) are tag interfaces. These interfaces do not declare any fields or methods.
Read more here: http://beginnersbook.com/2016/03/tag-or-marker-interfaces-in-java/
The functional interface is a new addition in Java 8: an interface with exactly one abstract method is called a functional interface. Read more here.
There are no other types of Interfaces in Java.
There's no special meaning to either of them.
Marker interface is a kind of "design pattern": you attach a label/tag to a set of objects to indicate that they have something in common, that they're OK for some kind of process or operation. Serializable is a typical example; it marks objects as being safe to serialize/deserialize.
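A minimal sketch (Auditable and Invoice are made-up names, not from any library): the "extra code around the marker" is typically just an instanceof check.

// A marker interface: no methods, purely a tag.
interface Auditable {
}

class Invoice implements Auditable {
}

public class MarkerDemo {
    public static void main(String[] args) {
        Object o = new Invoice();
        if (o instanceof Auditable) {
            System.out.println("audit this object");
        }
    }
}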
FunctionalInterface, on the other hand, is just an interface with the restriction that it can only have one abstract method, and it thus represents a single function contract. Java 8 added lambda expressions for functional programming, and in FP we need to pass functions back and forth very often. Say we have an interface like:
public interface StringTransformer {
    String transform(Object obj);
}
Traditionally we could only create an instance of an anonymous class, like:
someObj.doTransform(new StringTransformer() {
    @Override
    public String transform(Object object) {
        return "result";
    }
});
But there's only one method to implement, so there's no need to make the code so verbose; with a lambda expression it can be as short as:
someObj.doTransform(object -> "result");
The FunctionalInterface annotation is used by the compiler to check whether the interface you have annotated is a valid functional interface. Even though functional interfaces are meant for lambda expressions, method references and constructor references, nothing prevents you from using one in the traditional way, because essentially it is just a normal interface.
Why did the new Spliterators class appear in Java 8? Since Java 8 we have had the possibility of adding static methods to interfaces.
Since the Spliterators class has only static methods, wouldn't it be simpler to declare all of them in the Spliterator interface?
The same question about Collectors/Collector pair.
Thank you.
It’s perfectly possible that this decision was made without even thinking about this brand-new possibility, but simply by following the pattern established over the previous twenty years.
Besides that, it can be debated whether it is really useful to add 25 to 30 static methods to an interface. It makes sense to offer a few factories for canonical implementations, but you have to draw a line somewhere. It’s not feasible to add factories for all implementations of an interface just because they are offered by the same library. But this debate would be off-topic.
Further, Spliterators does not only offer static methods, but also nested classes. Unlike static methods, these classes would pollute the namespace of every implementing class if they were defined in an interface.
Collectors and Spliterators may also contain implementation-specific non-public methods and even fields.
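For illustration, a typical use of one of those static factory methods (a minimal sketch): bridging an ordinary Iterator into the Stream API.

import java.util.Arrays;
import java.util.Iterator;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.StreamSupport;

public class SpliteratorsDemo {
    public static void main(String[] args) {
        Iterator<String> it = Arrays.asList("a", "b", "c").iterator();

        // One of the static factories in Spliterators: wrap an ordinary Iterator.
        Spliterator<String> sp =
                Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED);

        // Then hand it to the Stream API.
        long count = StreamSupport.stream(sp, false).count();
        System.out.println(count); // 3
    }
}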
No, it is not a good idea, because an interface declares a contract while a class represents logic. Even with the addition of default methods to interfaces in Java 8, we can only declare public methods in an interface, whereas in an abstract class we can have public as well as non-public (e.g. protected abstract) methods, so we can still hide some logic in abstract classes. Imagine if, at the level of the language, you could declare only public methods: everyone could then change your design for, e.g., Collection.
Because there is a difference between an interface and a class; the two have different intentions. An interface declares a contract. Default methods in interfaces should be used carefully, for instance where you can't afford to break compatibility by adding a method declaration to an interface and can't declare an xxxV2 interface.
A class is an entity, which represents a unit of the program logic.
While learning Haskell, I noticed its type classes, which are supposed to be a great invention that originated in Haskell.
However, the Wikipedia page on type classes says:
The programmer defines a type class by specifying a set of function or
constant names, together with their respective types, that must exist
for every type that belongs to the class.
Which seems rather close to Java's Interface to me (quoting Wikipedia's Interface(Java) page):
An interface in the Java programming language is an abstract type that
is used to specify an interface (in the generic sense of the term)
that classes must implement.
These two look rather similar: a type class constrains a type's behavior, while an interface constrains a class's behavior.
I wonder what the differences and similarities are between type classes in Haskell and interfaces in Java, or maybe they are fundamentally different?
EDIT: I noticed even haskell.org admits that they are similar. If they are so similar (or are they?), then why are type classes treated with such hype?
MORE EDIT: Wow, so many great answers! I guess I'll have to let the community decide which is the best one. However, while reading the answers, all of them seem to just say that "there are many things typeclasses can do while interfaces cannot, or have to cope with generics". I cannot help but wonder: is there anything interfaces can do that typeclasses cannot? Also, I noticed that Wikipedia claims that typeclasses were originally invented in the 1989 paper "How to make ad-hoc polymorphism less ad hoc", when Haskell was still in its cradle, while the Java project was started in 1991 and first released in 1995. So maybe instead of typeclasses being similar to interfaces, it's the other way around: interfaces were influenced by typeclasses? Are there any documents/papers that support or disprove this? Thanks for all the answers, they are all very enlightening!
Thanks for all the inputs!
I would say that an interface is kind of like a type class SomeInterface t where all of the values have the type t -> whatever (where whatever does not contain t). This is because, with the kind of inheritance relationship in Java and similar languages, the method called depends on the type of the object it is called on, and nothing else.
That means it's really hard to make things like add :: t -> t -> t with an interface, where it is polymorphic on more than one parameter, because there's no way for the interface to specify that the argument type and return type of the method are the same type as the type of the object it is called on (i.e. the "self" type). With generics, there are ways to sort of fake this by making an interface with a generic parameter that is expected to be the same type as the object itself, like how Comparable<T> does it, where you are expected to write Foo implements Comparable<Foo> so that compareTo(T otherObject) kind of has the type t -> t -> Ordering. But that still requires the programmer to follow this rule, and it also causes headaches when people want to write a function that uses this interface: they have to use recursive generic type parameters.
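A minimal sketch of that recursive-generics workaround (Addable and Money are made-up names, not from any library):

public class SelfTypeDemo {
    // Approximating Haskell's  add :: t -> t -> t  with a recursive type parameter.
    interface Addable<T extends Addable<T>> {
        T add(T other);
    }

    static final class Money implements Addable<Money> {
        final long cents;

        Money(long cents) {
            this.cents = cents;
        }

        @Override
        public Money add(Money other) {
            return new Money(this.cents + other.cents);
        }
    }

    public static void main(String[] args) {
        Money total = new Money(150).add(new Money(250));
        System.out.println(total.cents); // 400
    }
}

Note that nothing stops someone from declaring class Weight implements Addable<Money> (providing Money add(Money other)), which is exactly the "the programmer has to follow this rule" caveat above.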
Also, you won't have things like empty :: t because you're not calling a function here, so it isn't a method.
What is similar between interfaces and type classes is that they name and describe a set of related operations. The operations themselves are described via their names, inputs, and outputs. Likewise there may be many implementations of these operations that will likely differ in their implementation.
With that out of the way, here are some notable differences:
Interface methods are always associated with an object instance. In other words, there is always an implied 'this' parameter, which is the object on which the method is called. All inputs to a type class function are explicit.
An interface implementation must be defined as part of the class that implements the interface. Conversely, a type class 'instance' can be defined completely separately from its associated type... even in another module.
In general, I think it's fair to say that type classes are more powerful and flexible than interfaces. How would you define an interface for converting a string to some value or instance of the implementing type? It's certainly not impossible, but the result would not be intuitive or elegant. Have you ever wished it were possible to implement an interface for a type in some compiled library? These are both easy to accomplish with type classes.
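For example, the usual Java workaround for "build a value of the implementing type from a string" is a separate factory object, because there is no instance of the target type to call a method on yet (Parser is a made-up name); a type class like Read keeps that association implicit.

public class ParserDemo {
    // The parsing step cannot live on the type being created, so it moves
    // to a separate object that the caller has to pass around explicitly.
    interface Parser<T> {
        T fromString(String s);
    }

    public static void main(String[] args) {
        Parser<Integer> intParser = Integer::valueOf;
        System.out.println(intParser.fromString("42") + 1); // 43
    }
}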
Type classes were created as a structured way to express "ad-hoc polymorphism", which is basically the technical term for overloaded functions. A type class definition looks something like this:
class Foobar a where
    foo :: a -> a -> Bool
    bar :: String -> a
What this means is that, when you apply the function foo to some arguments of a type that belongs to the class Foobar, it looks up an implementation of foo specific to that type and uses that. This is very similar to the situation with operator overloading in languages like C++/C#, except more flexible and generalized.
Interfaces serve a similar purpose in OO languages, but the underlying concept is somewhat different; OO languages come with a built-in notion of type hierarchies that Haskell simply doesn't have, which complicates matters in some ways because interfaces can involve both overloading by subtyping (i.e., calling methods on appropriate instances, subtypes implementing interfaces their supertypes do) and by flat type-based dispatch (since two classes implementing an interface may not have a common superclass that also implements it). Given the huge additional complexity introduced by subtyping, I suggest it's more helpful to think of type classes as an improved version of overloaded functions in a non-OO language.
Also worth noting is that type classes have vastly more flexible means of dispatch: interfaces generally apply only to the single class implementing them, whereas type classes are defined for a type, which can appear anywhere in the signature of the type class's functions. The equivalent of this in OO interfaces would be allowing the interface to define ways to pass an object of that class to other classes, define static methods and constructors that would select an implementation based on what return type is required in the calling context, define methods that take arguments of the same type as the class implementing the interface, and various other things that don't really translate at all.
In short: they serve similar purposes, but the way they work is somewhat different, and type classes are both significantly more expressive and, in some cases, simpler to use because they work on fixed types rather than on pieces of an inheritance hierarchy.
I've read the above answers. I feel I can answer slightly more clearly:
A Haskell "type class" and a Java/C# "interface" or a Scala "trait" are basically analogous. There is no conceptual distinction between them but there are implementation differences:
Haskell type classes are implemented with "instances" that are separate from the data type definition. In C#/Java/Scala, the interfaces/traits must be implemented in the class definition.
Haskell type classes allow you to return a this type or self type. Scala traits do as well (this.type). Note that "self types" in Scala are a completely unrelated feature. Java/C# require a messy workaround with generics to approximate this behavior.
Haskell type classes let you define functions (including constants) without an input "this" type parameter. Java/C# interfaces and Scala traits require a "this" input parameter on all functions.
Haskell type classes let you define default implementations for functions. So do Scala traits and Java 8+ interfaces. C# can approximate something like this with extension methods.
In Masterminds of Programming, there's an interview about Haskell with Phil Wadler, the inventor of type classes, who explains the similarities between interfaces in Java and type classes in Haskell:
A Java method like:
public static <T extends Comparable<T>> T min (T x, T y)
{
if (x.compare(y) < 0)
return x;
else
return y;
}
is very similar to the Haskell function:
min :: Ord a => a -> a -> a
min x y = if x < y then x else y
So, type classes are related to interfaces, but the real correspondence would be a static method parameterized with a type, as above.
Watch Philip Wadler's talk Faith, Evolution, and Programming Languages. Wadler worked on Haskell and was a major contributor to Java Generics.
I can't speak to the "hype" level; if it seems that way, fine. But yes, type classes are similar in lots of ways. One difference I can think of is that in Haskell you can provide behavior for some of the type class's operations:
class Eq a where
    (==), (/=) :: a -> a -> Bool
    x /= y = not (x == y)
    x == y = not (x /= y)
which shows that there are two operations, equal (==), and not-equal (/=), for things that are instances of the Eq type class. But the not-equal operation is defined in terms of equals (so that you'd only have to provide one), and vice versa.
So in probably-not-legal-Java that would be something like:
interface Equal<T> {
    boolean isEqual(T other) {
        return !isNotEqual(other);
    }
    boolean isNotEqual(T other) {
        return !isEqual(other);
    }
}
and the way it would work is that you'd only need to provide one of those methods to implement the interface. So I'd say that the ability to provide a sort of partial implementation of the desired behavior at the interface level is a difference.
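As an aside, with Java 8 default methods that sketch becomes (almost) legal Java; a class still has to override at least one of the two methods to avoid infinite mutual recursion (Point is a made-up example class):

interface Equal<T> {
    default boolean isEqual(T other) {
        return !isNotEqual(other);
    }

    default boolean isNotEqual(T other) {
        return !isEqual(other);
    }
}

class Point implements Equal<Point> {
    final int x, y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Override just one; the other keeps its default implementation.
    @Override
    public boolean isEqual(Point other) {
        return other != null && x == other.x && y == other.y;
    }
}

With that, new Point(1, 2).isNotEqual(new Point(1, 2)) returns false through the default method, with the class providing only isEqual.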
Read Software Extension and Integration with Type Classes where examples are given of how type classes can solve a number of problems that interfaces cannot.
Examples listed in the paper are:
the expression problem,
the framework integration problem,
the problem of independent extensibility,
the tyranny of the dominant decomposition, scattering and tangling.
They are similar (read: have similar uses), and probably implemented similarly: polymorphic functions in Haskell take, under the hood, a 'vtable' listing the functions associated with the typeclass.
This table can often be deduced at compile time. This is probably less true in Java.
But this is a table of functions, not methods. Methods are bound to an object, Haskell typeclasses are not.
See them rather like Java's generics.
As Daniel says, interface implementations are defined separately from data declarations. And as others have pointed out, there's a straightforward way to define operations that use the same free type in more than one place. So it's easy to define Num as a typeclass. Thus in Haskell we get the syntactic benefits of operator overloading without actually having any magic overloaded operators, just standard typeclasses.
Another difference is that you can use methods based on a type, even when you don't have a concrete value of that type hanging around yet!
For example, read :: Read a => String -> a. So if you have enough other type information hanging around about how you'll use the result of a "read", you can let the compiler figure out which dictionary to use for you.
You can also do things like instance (Read a) => Read [a] where... which lets you define a read instance for any list of readable things. I don't think that's quite possible in Java.
And all this is just standard single-parameter typeclasses with no trickery going on. Once we introduce multi-parameter typeclasses, then a whole new world of possibilities opens up, and even more so with functional dependencies and type families, which let you embed much more information and computation in the type system.
Sometimes we have several classes that have some methods with the same signature, but that don't correspond to a declared Java interface. For example, both JTextField and JButton (among several others in javax.swing.*) have a method
public void addActionListener(ActionListener l)
Now, suppose I wish to do something with objects that have that method; then, I'd like to have an interface (or perhaps to define it myself), e.g.
public interface CanAddActionListener {
    public void addActionListener(ActionListener l);
}
so that I could write:
public void myMethod(CanAddActionListener aaa, ActionListener li) {
    aaa.addActionListener(li);
    ....
}
But, sadly, I can't:
JButton button;
ActionListener li;
...
this.myMethod((CanAddActionListener)button,li);
This cast would be illegal. The compiler knows that JButton is not a CanAddActionListener, because the class has not been declared to implement that interface... however, it "actually" implements it.
This is sometimes an inconvenience - and Java itself has modified several core classes to implement a new interface made of old methods (String implements CharSequence, for example).
My question is: why is this so? I understand the utility of declaring that a class implements an interface. But anyway, looking at my example, why can't the compiler deduce that the class JButton "satisfies" the interface declaration (by looking inside it) and accept the cast? Is it an issue of compiler efficiency, or are there more fundamental problems?
My summary of the answers: This is a case in which Java could have made allowance for some "structural typing" (a sort of duck typing, but checked at compile time). It didn't. Apart from some (unclear to me) performance and implementation difficulties, there is a much more fundamental concept here: in Java, the declaration of an interface (and in general, of everything) is not meant to be merely structural (to have methods with these signatures) but semantic: the methods are supposed to implement some specific behavior/intent. So, a class which structurally satisfies some interface (i.e., it has the methods with the required signatures) does not necessarily satisfy it semantically (an extreme example: recall the "marker interfaces", which do not even have methods!). Hence, Java can assert that a class implements an interface because (and only because) this has been explicitly declared. Other languages (Go, Scala) have other philosophies.
Java's design choice to make implementing classes expressly declare the interface they implement is just that -- a design choice. To be sure, the JVM has been optimized for this choice and implementing another choice (say, Scala's structural typing) may now come at additional cost unless and until some new JVM instructions are added.
So what exactly is the design choice about? It all comes down to the semantics of methods. Consider: are the following methods semantically the same?
draw(String graphicalShapeName)
draw(String handgunName)
draw(String playingCardName)
All three methods have the signature draw(String). A human might infer that they have different semantics from the parameter names, or by reading some documentation. Is there any way for the machine to tell that they are different?
Java's design choice is to demand that the developer of a class explicitly state that a method conforms to the semantics of a pre-defined interface:
interface GraphicalDisplay {
    ...
    void draw(String graphicalShapeName);
    ...
}

class JavascriptCanvas implements GraphicalDisplay {
    ...
    public void draw(String shape);
    ...
}
There is no doubt that the draw method in JavascriptCanvas is intended to match the draw method for a graphical display. If one attempted to pass an object that was going to pull out a handgun, the machine can detect the error.
Go's design choice is more liberal and allows interfaces to be defined after the fact. A concrete class need not declare what interfaces it implements. Rather, the designer of a new card game component may declare that an object that supplies playing cards must have a method that matches the signature draw(String). This has the advantage that any existing class with that method can be used without having to change its source code, but the disadvantage that the class might pull out a handgun instead of a playing card.
The design choice of duck-typing languages is to dispense with formal interfaces altogether and simply match on method signatures. Any concept of interface (or "protocol") is purely idiomatic, with no direct language support.
These are but three of many possible design choices. The three can be glibly summarized like this:
Java: the programmer must explicitly declare his intent, and the machine will check it. The assumption is that the programmer is likely to make a semantic mistake (graphics / handgun / card).
Go: the programmer must declare at least part of his intent, but the machine has less to go on when checking it. The assumption is that the programmer might make a clerical mistake (integer / string), but is not likely to make a semantic mistake (graphics / handgun / card).
Duck-typing: the programmer needn't express any intent, and there is nothing for the machine to check. The assumption is that the programmer is unlikely to make either a clerical or a semantic mistake.
It is beyond the scope of this answer to address whether interfaces, and typing in general, are adequate to test for clerical and semantic mistakes. A full discussion would have to consider build-time compiler technology, automated testing methodology, run-time/hot-spot compilation and a host of other issues.
It is acknowledged that the draw(String) examples are deliberately exaggerated to make a point. Real examples would involve richer types that would give more clues to disambiguate the methods.
Why can't the compiler deduce that the class JButton "satisfies" the interface declaration (by looking inside it) and accept the cast? Is it an issue of compiler efficiency, or are there more fundamental problems?
It is a more fundamental issue.
The point of an interface is to specify that there is a common API / set of behaviors that a number of classes support. So, when a class is declared as implements SomeInterface, any methods in the class whose signatures match method signatures in the interface are assumed to be methods that provide that behavior.
By contrast, if the language simply matched methods based on signatures ... irrespective of the interfaces ... then we'd be liable to get false matches, when two methods with the same signature actually mean / do something semantically unrelated.
(The name for the latter approach is "duck typing" ... and Java doesn't support it.)
The Wikipedia page on type systems says that duck typing is neither "nominative typing" nor "structural typing". By contrast, Pierce doesn't even mention "duck typing", but he defines nominative (or "nominal", as he calls it) typing and structural typing as follows:
"Type systems like Java's, in which names [of types] are significant and subtyping is explicitly declared, are called nominal. Type systems like most of the ones in this book in which names are inessential and subtyping is defined directly on the structure of the types, are called structural."
So by Pierce's definition, duck typing is a form of structural typing, albeit one that is typically implemented using runtime checks. (Pierce's definitions are independent of compile-time versus runtime-checking.)
Reference:
"Types and Programming Languages" - Benjamin C. Pierce, MIT Press, 2002, ISBN 0-262-16209-1.
Likely it's a performance feature.
Since Java is statically typed, the compiler can assert the conformance of a class to an identified interface. Once validated, that assertion can be represented in the compiled class as simply a reference to the conforming interface definition.
Later, at runtime, when an object has its class cast to the interface type, all the runtime needs to do is check the metadata of the class to see whether the class it is being cast to is compatible (via the interface or the inheritance hierarchy).
This is a reasonably cheap check to perform since the compiler has done most of the work.
Mind, it's not authoritative. A class can SAY that it conforms to an interface, but that doesn't mean that the actual method invocation about to be executed will actually work. The conforming class may well be out of date and the method may simply not exist.
But a key component of the performance of Java is that, while it still must do a form of dynamic method dispatch at runtime, there's a contract that the method isn't going to suddenly vanish behind the runtime's back. So once the method is located, its location can be cached for the future. In contrast, in a dynamic language methods may come and go, and the runtime must keep hunting the methods down each time one is invoked. Obviously, dynamic languages have mechanisms to make this perform well.
Now, if the runtime were to ascertain that an object complies with an interface by doing all of that work itself, you can see how much more expensive that would be, especially with a large interface. A JDBC ResultSet, for example, has over 140 methods in it.
Duck typing is effectively dynamic interface matching: check what methods are called on an object, and map them at runtime.
All of that kind of information can be cached and built at runtime, etc. All of this can be done (and is, in other languages), but having much of it done at compile time is actually quite efficient both for the runtime's CPU and for its memory. While we use Java with multi-GB heaps for long-running servers, it's actually pretty suitable for small deployments and lean runtimes, even outside of J2ME. So there is still motivation to try to keep the runtime footprint as lean as possible.
Duck typing can be dangerous for the reasons Stephen C discussed, but it is not necessarily the evil that breaks all static typing. A static and safer version of duck typing lies at the heart of Go's type system, and a version is available in Scala, where it is called "structural typing". These versions still perform a compile-time check to make sure the object obeys the requirements, but they have potential problems because they break the design paradigm in which implementing an interface is always an intentional decision.
See http://markthomas.info/blog/?p=66 and http://programming-scala.labs.oreilly.com/ch12.html and http://beust.com/weblog/2008/02/11/structural-typing-vs-duck-typing/ for a discussion of the Scala feature.
I can't say I know why certain design decisions were made by the Java development team. I also caveat my answer with the fact that those individuals are far smarter than I'll ever be with regards to software development and (particularly) language design. But, here's a crack at trying to answer your question.
In order to understand why they may not have chosen to use an interface like "CanAddActionListener" you have to look at the advantages of NOT using an interface and, instead, preferring abstract (and, ultimately, concrete) classes.
As you may know, when declaring abstract functionality, you can provide default functionality to subclasses. Okay... so what? Big deal, right? Well, particularly in the case of designing a language, it is a big deal. When designing a language, you will need to maintain those base classes over the life of the language (and you can be sure that there will be changes as your language evolves). If you had chosen to use interfaces, instead of providing base functionality in an abstract class, any class that implements the interface would break. This is particularly important after publication: once customers (developers in this case) start using your libraries, you can't change the interfaces on a whim or you are going to have a lot of pissed-off developers!
So, my guess is that the Java development team fully realized that many of their AbstractJ* classes shared the same method names, and that it would not be advantageous to have them share a common interface, as that would make their API rigid and inflexible.
To sum it up (thank you to this site here):
Abstract classes can easily be extended by adding new (non-abstract) methods.
An interface cannot be modified without breaking its contract with the classes that implement it. Once an interface has been shipped, its member set is permanently fixed. An API based on interfaces can only be extended by adding new interfaces.
Of course, this is not to say that you couldn't do something like this in your own code (extend AbstractJButton and implement the CanAddActionListener interface), but be aware of the pitfalls in doing so.
Interfaces represent a form of substitution class. A reference of a type which implements or inherits from a particular interface may be passed to a method which expects that interface type. An interface will generally not only specify that all implementing classes must have methods with certain names and signatures, but it will also have an associated contract which specifies that all legitimate implementing classes must have methods with certain names and signatures which behave in certain designated ways. It is entirely possible that, even if two interfaces contain members with the same names and signatures, an implementation may satisfy the contract of one but not the other.
As a simple example, if one were designing a framework from scratch, one might start out with an Enumerable<T> interface (which can be used as often as desired to create an enumerator that will output a sequence of T's, though different requests may yield different sequences), but then derive from it an interface ImmutableEnumerable<T> which would behave as above but guarantee that every request returns the same sequence. A mutable collection type would support all of the members required for ImmutableEnumerable<T>, but since requests for enumeration received after a mutation would report a different sequence from requests made before, it would not abide by the ImmutableEnumerable<T> contract.
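A minimal sketch of that example (all names here are hypothetical): a mutable type satisfies ImmutableEnumerable structurally while violating its documented contract.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ContractDemo {
    interface Enumerable<T> {
        Iterator<T> enumerator();
    }

    // Structurally identical to Enumerable, but the documented contract adds:
    // every call to enumerator() must yield the same sequence.
    interface ImmutableEnumerable<T> extends Enumerable<T> {
    }

    // Compiles fine as an ImmutableEnumerable, yet a mutation between two
    // enumerator() calls breaks the contract without any compiler complaint.
    static final class MutableBag<T> implements ImmutableEnumerable<T> {
        private final List<T> items = new ArrayList<>();

        void add(T item) {
            items.add(item);
        }

        @Override
        public Iterator<T> enumerator() {
            return items.iterator();
        }
    }

    public static void main(String[] args) {
        MutableBag<String> bag = new MutableBag<>();
        bag.add("a");
        System.out.println(bag.enumerator().next()); // "a"
    }
}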
The ability of an interface to be regarded as encapsulating a contract beyond the signatures of its members is one of the things that makes interface-based programming more semantically powerful than simple duck-typing.
Why do many Collection classes in Java extend the Abstract class and also implement the interface (which is also implemented by the given abstract class)?
For example, class HashSet extends AbstractSet and also implements Set, but AbstractSet already implements Set.
It's a way to remember that this class really implements that interface.
It won't have any bad effect and it can help to understand the code without going through the complete hierarchy of the given class.
From the perspective of the type system the classes wouldn't be any different if they didn't implement the interface again, since the abstract base classes already implement them.
That much is true.
The reason they do implement it anyway is (probably) mostly documentation: a HashSet is-a Set, and that is made explicit by adding implements Set to the declaration, although it's not strictly necessary.
Note that the difference is actually observable using reflection, but I'd be hard-pressed to produce some code that would break if HashSet didn't implement Set directly.
This may not matter much in practice, but I wanted to clarify that explicitly implementing an interface is not exactly the same as implementing it by inheritance. The difference is present in compiled class files and visible via reflection. E.g.,
for (Class<?> c : ArrayList.class.getInterfaces())
    System.out.println(c);
The output shows only the interfaces explicitly implemented by ArrayList, in the order they were written in the source, which [on my Java version] is:
interface java.util.List
interface java.util.RandomAccess
interface java.lang.Cloneable
interface java.io.Serializable
The output does not include interfaces implemented by superclasses, or interfaces that are superinterfaces of those which are included. In particular, Iterable and Collection are missing from the above, even though ArrayList implements them implicitly. To find them you have to recursively iterate the class hierarchy.
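If you do need the full set, a sketch of that recursive walk might look like this (the helper names are made up):

import java.util.LinkedHashSet;
import java.util.Set;

public class AllInterfacesDemo {
    // Collect every interface a class implements, directly or indirectly,
    // by walking superclasses and superinterfaces.
    static Set<Class<?>> allInterfaces(Class<?> type) {
        Set<Class<?>> result = new LinkedHashSet<>();
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            collect(c.getInterfaces(), result);
        }
        return result;
    }

    private static void collect(Class<?>[] interfaces, Set<Class<?>> result) {
        for (Class<?> i : interfaces) {
            if (result.add(i)) {
                collect(i.getInterfaces(), result);
            }
        }
    }

    public static void main(String[] args) {
        // Unlike getInterfaces() alone, this also reports Collection and Iterable.
        allInterfaces(java.util.ArrayList.class).forEach(System.out::println);
    }
}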
It would be unfortunate if some code out there uses reflection and depends on interfaces being explicitly implemented, but it is possible, so the maintainers of the collections library may be reluctant to change it now, even if they wanted to. (There is an observation termed Hyrum's Law: "With a sufficient number of users of an API, it does not matter what you promise in the contract; all observable behaviors of your system will be depended on by somebody".)
Fortunately this difference does not affect the type system. The expressions new ArrayList<>() instanceof Iterable and Iterable.class.isAssignableFrom(ArrayList.class) still evaluate to true.
Unlike Colin Hebert, I don't buy that the people who wrote that cared about readability. (Everyone who thinks the standard Java libraries were written by impeccable gods should take a look at their sources. The first time I did this I was horrified by the code formatting and the numerous copy-pasted blocks.)
My bet is it was late, they were tired and didn't care either way.
From the "Effective Java" by Joshua Bloch:
You can combine the advantages of interfaces and abstract classes by adding an abstract skeletal implementation class to go with an interface.
The interface defines the type, perhaps providing some default methods, while the skeletal class implements the remaining non-primitive interface methods atop the primitive interface methods. Extending a skeletal implementation takes most of the work out of implementing an interface. This is the Template Method pattern.
By convention, skeletal implementation classes are called AbstractInterface where Interface is the name of the interface they implement. For example:
AbstractCollection
AbstractSet
AbstractList
AbstractMap
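A minimal sketch of extending one of these skeletal classes (the range helper is a made-up example): AbstractList supplies most of the List machinery once get() and size() are provided.

import java.util.AbstractList;
import java.util.List;

public class SkeletalDemo {
    static List<Integer> range(int from, int to) {
        return new AbstractList<Integer>() {
            @Override
            public Integer get(int index) {
                return from + index;
            }

            @Override
            public int size() {
                return to - from;
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(range(3, 7)); // [3, 4, 5, 6]
    }
}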
I also believe it is for clarity. The Java Collections Framework has quite a hierarchy of interfaces that define the different types of collection. It starts with the Collection interface, which is then extended by three main subinterfaces: Set, List and Queue. There is also SortedSet extending Set and BlockingQueue extending Queue.
Now, concrete classes implementing them are more understandable if they explicitly state which interface in the hierarchy they are implementing, even though it may look redundant at times. As you mentioned, a class like HashSet implements Set, but a class like TreeSet, though it also extends AbstractSet, implements SortedSet instead, which is more specific than just Set. HashSet may look redundant, but TreeSet is not, because it is required to implement SortedSet. Still, both classes are concrete implementations and are more understandable if both follow a certain convention in their declaration.
There are even classes that implement more than one collection type, like LinkedList, which implements both List and Queue. However, there is at least one class that is a bit 'unconventional': PriorityQueue. It extends AbstractQueue but doesn't explicitly implement Queue. Don't ask me why. :)
(reference is from Java 5 API)
Too late for answer?
I am taking a guess to validate my answer. Assume the following code:
HashMap extends AbstractMap (does not implement Map)
AbstractMap implements Map
Now imagine some random guy came along and changed implements Map to some java.util.Map1 with exactly the same set of methods as Map.
In this situation there won't be any compilation error and the JDK gets compiled (of course, tests will fail and catch this).
Now any client using HashMap as Map m = new HashMap() will start failing. This is much further downstream.
Since AbstractMap, Map, etc. come from the same product, this argument appears childish (which in all probability it is, or maybe not), but think of a project where the base class comes from a different jar or third-party library. Then the third party/different team can change their base implementation.
By implementing the interface in the child class as well, developers try to make the class self-sufficient and proof against API breakage.
In my view, when a class implements an interface it has to implement all the methods present in it (as, by default, they are public abstract methods of the interface).
If we don't want to implement all the methods of the interface, it must be an abstract class.
So if some methods are already implemented in some abstract class that implements a particular interface, and we have to extend the functionality for the other methods that were left unimplemented, we need to implement the original interface in our class again to get those remaining methods. This helps in maintaining the contractual rules laid down by the interface.
It would result in rework if we were to implement only the interface and override all the methods again with method definitions in our class.
I suppose there might be different ways to handle members of the set defined by the interface, even when supplying a default operation implementation does not serve as a one-size-fits-all. A circular queue vs. a LIFO queue might both implement the same interface, but their specific operations will be implemented differently, right?
If you only had an abstract class, you couldn't make a class of your own which also inherits from another class.