In a modern functional language like Scala, type variance is inherent in the type. Here's e.g. Scala's Function1:
trait Function1[-T1, +R] { ... }
contravariant in the parameter type and covariant in the return type. And here's Java's counterpart:
interface Function<T,R> { ... }
Now, to express the variance relationship, the special "bounded wildcard" syntax is used. For example, Stream's map method is declared as
<R> Stream<R> map(Function<? super T, ? extends R> mapper);
Here, Java shifts the declaration of the variance relationship from the type itself to its use as a parameter in some method signature.
Here's my question. Would I be amiss to say that there cannot be any legitimate usages of Function<T,R> that are not contravariant in T and covariant in R? In other words, does Java's way offer useful extra flexibility not found in Scala, or is it just a lot of repetitive unwieldy boilerplate?
For Function specifically, no. Function defines exactly one abstract method, apply, which uses T contravariantly and R covariantly. But Function isn't what they had in mind when they designed that feature.
When the Java devs designed call-site variance, they were imagining classes that had both covariant and contravariant uses. For instance, in principle, the E in List<E> must be invariant. It appears in covariant position in get and in contravariant position in add.
So the rationale was this. Suppose we have a type hierarchy X <= Y <= Z. That is, X is a class that subclasses Y, and Y in turn subclasses Z. A List<Y> can do anything with type Y. It can have Ys added to the end, and a user can retrieve elements of type Y from it. But it can never be a List<Z> or a List<X>, since adding to a List<X> would be unsound, and so would retrieving as a List<Z>.
But we can express our intention. List<? extends Y> is a type we can only ever read from. It actually can take a List<Z> under the hood, since a list of Z elements is genuinely still (at least for covariant methods) a list of Y elements. We can get elements from this list, but we can't add to the end of it, since we said we're using the type argument in covariant position but add uses the type argument contravariantly. Essentially, List<? extends Y> is a smaller interface that includes some of the methods from the actual interface List.
The same is true of List<? super Y>. We can't read Ys from it, since we don't know that every element is of type Y (reads only give us Object). But we can add to it, since we know that the list at least supports elements of type Y. We can use all of the contravariant methods, like add, but none of the covariant ones.
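As a concrete sketch of those two projections, reusing the X <= Y <= Z hierarchy from above (the class and method names are mine, purely for illustration):

import java.util.ArrayList;
import java.util.List;

class Z {}
class Y extends Z {}
class X extends Y {}

class VarianceDemo {
    // Covariant use: we only read, so a List<X> is acceptable too.
    static Y firstY(List<? extends Y> ys) {
        // ys.add(new Y());      // would not compile: no writes through ? extends
        return ys.get(0);        // reading as Y is safe
    }

    // Contravariant use: we only write, so a List<Z> is acceptable too.
    static void addY(List<? super Y> ys) {
        ys.add(new Y());         // writing a Y is safe
        // Y y = ys.get(0);      // would not compile: reads only yield Object
    }

    public static void main(String[] args) {
        List<X> xs = new ArrayList<>();
        xs.add(new X());
        Y y = firstY(xs);        // a List<X> through the covariant view

        List<Z> zs = new ArrayList<>();
        addY(zs);                // a List<Z> through the contravariant view
    }
}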
For a type like List that uses its type arguments in different ways, the call-site variance makes some amount of sense. For a special-purpose interface like Function that does one thing, it makes little sense.
That was the Java developers' rationale some twenty years ago, when generics were added to Java. A lot has happened since then. If someone wrote an interface like List in today's world, an interface with upwards of 20 abstract methods, half of which have "this method may not be supported and might just throw UnsupportedOperationException" built into the contract, they'd rightly be laughed off the stage.
Today's world is one of small, tight interfaces. We follow the SOLID principles. An interface does one thing and does it well. If an interface defines more than two or three (non-defaulted, non-inherited) methods, we give pause and ask if we can make it more modular. And we try to design systems that are more immutable by design, to support scaling and concurrency. We have records, or data classes or whatever your favorite language calls them, that are immutable by default.
So twenty years ago, the idea of a massive super-interface that does twenty things and that can be narrowed down dynamically via type projections seemed pretty cool. Today, it makes far more sense to specify the variance at the declaration site, since most interfaces are small and have a clear use case in mind.
The scala.collection.Seq trait defines three abstract, non-inherited methods (apply, iterator, and length), and all of them use the type argument covariantly, so Seq is declared with a covariant type parameter. The corresponding mutable trait adds one more method (update), which uses its type argument contravariantly, so its type parameter is invariant.
In Scala, if you want to modify a sequence, you take a scala.collection.mutable.Seq. If you want to read, you take a scala.collection.Seq. Those interfaces are small enough and narrow enough in purpose that having several of them doesn't hurt code quality (it also helps that traits and classes in Scala are cheap to write, compared to the boilerplate needed in Java for even a simple class).
Actually, Scala supports both declaration-site and use-site variance. Specifically, you can specify bounded wildcards just like in Java.
This already hints that declaration-site variance cannot replace use-site variance in all cases. The reason is that a declaration can only be variant if it is variant in all possible uses. If some uses are variant but other uses are not, we can't use declaration-site variance, but we can use use-site variance.
For instance, class Array[A] cannot be declared variant, but the method appendedAll from ArrayOps can employ use-site variance
def appendedAll[B >: A](suffix: Array[_ <: B])(implicit arg0: ClassTag[B]): Array[B]
since it uses covariant methods of suffix.
In other words, does Java's way offer useful extra flexibility not found in Scala, or is it just a lot of repetitive unwieldy boilerplate?
Suppose that class Foo extends class Parent and is in turn extended by class Child.
Then, as you know, we can pass an instance of ArrayList<Foo> to a method that takes a List<? extends Parent>, or to a method that takes a List<? super Child>.
A reason that a method might take List<? extends Parent> is if it only reads from the list (it never writes to it), and can happily support any element that's an instance of Parent (because it doesn't need anything specific to Foo or another subtype).
A reason that a method might take List<? super Child> is if it only writes to the list (it never reads from it), and the elements that it writes are always instances of Child (so it doesn't care whether the list can accept arbitrary instances of Foo or another supertype).
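The JDK combines both forms in the signature of java.util.Collections.copy; here's a minimal sketch of a method with that same shape (copyInto is a hypothetical name):

import java.util.List;

class CopyDemo {
    // Same shape as java.util.Collections.copy: src is a producer
    // (? extends), dest is a consumer (? super). As with Collections.copy,
    // dest must already be at least as long as src.
    static <T> void copyInto(List<? super T> dest, List<? extends T> src) {
        for (int i = 0; i < src.size(); i++) {
            dest.set(i, src.get(i));
        }
    }
}

With Foo, Parent, and Child as above, this lets you fill a List<Parent> from a List<Child>, with T inferred as Child.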
That said, yes, it is a lot of repetitive unwieldy boilerplate! As a result, it's not uncommon to come across a method that could take a List<? extends Parent> or a List<? super Child> but instead just takes a List<Parent> or a List<Child> (respectively).
I am not an expert on this, but it can be argued that Java's wildcard syntax allows more flexibility in expressing variance relationships than Scala's Function1 trait, where variance is inherent in the type: the variance can be declared specifically in the context of a method's parameter or return type, rather than being fixed on the type itself. However, it can also be seen as repetitive and unwieldy boilerplate. It depends on the perspective of the developer and their use case.
Related
Is it feasible to say that generic wildcard types should not be used in return parameters of a method?
In other words, does it make sense to declare an interface like the following:
interface Foo<T> {
    Collection<? extends T> next();
}
Additionally, is it OK to say that generic wildcard types make sense only in a method's parameter declarations?
The main benefit of using wildcard types, say in a method's formal parameters, is the flexibility it gives the user: they can pass, say, any type of Collection, or List, or anything else that implements Collection (assuming the parameter is declared as Collection<?>). You will often find yourself using wildcard types in formal parameters.
But ideally you should avoid using them as the return type of your method, because that forces the user of the method to deal with wildcard types at the caller's end, even if they didn't want to. By using wildcard types you're saying: hey, this method can return any type of Collection, so it's your job to take care of that. You shouldn't do that. It's better to use a bounded type parameter; with a bounded type parameter, the type will be inferred based on the type you pass, or the target type of the method invocation.
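A minimal sketch of the difference from the caller's side (both methods are hypothetical):

import java.util.ArrayList;
import java.util.List;

class ReturnTypes {
    // Wildcard return type: the wildcard leaks into every caller.
    static List<? extends Number> wildcardNumbers() {
        return new ArrayList<Integer>();
    }

    // Plain return type: callers get a list they can actually use.
    static List<Number> plainNumbers() {
        return new ArrayList<Number>();
    }

    public static void main(String[] args) {
        List<? extends Number> a = wildcardNumbers();
        // a.add(1);              // does not compile: the caller fights the wildcard
        List<Number> b = plainNumbers();
        b.add(1);                 // fine
    }
}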
And here's a quote from Effective Java Item 28:
Do not use wildcard types as return types. Rather than providing additional flexibility for your users, it would force them to use wildcard types in client code.
Properly used, wildcard types are nearly invisible to users of a class. They cause methods to accept the parameters they should accept and reject those they should reject. If the user of a class has to think about wildcard types, there is probably something wrong with the class’s API.
No, it is not feasible to say this.
Or to put it the other way around: it does make sense to have such an interface.
Imagine the following
interface Foo<T>
{
    Collection<? extends T> next();
}

class FooInteger implements Foo<Number>
{
    private final List<Integer> integers = new ArrayList<Integer>();

    void useInternally()
    {
        integers.add(123);
        Integer i = integers.get(0);
    }

    @Override
    public Collection<? extends Number> next()
    {
        return integers;
    }
}
// Using it:
Foo<Number> foo = new FooInteger();
Collection<? extends Number> next = foo.next();
Number n = next.iterator().next();
If you wrote the return type as Collection<T>, you could not return a collection containing a subtype of T.
Whether or not it is desirable to have such a return type depends on the application case. In some cases, it may simply be necessary. But if it is easy to avoid, then avoiding it is preferable.
EDIT: Edited the code to point out the difference, namely that you might not always be able to choose the type internally. However, in most cases returning something that involves a wildcard can be avoided - and as I said, if possible, it should be avoided.
The example sketched above should still be considered as an example to emphasize the key point. Although, of course, such an implementation would be bad practice, because it exposes internal state.
In this and similar cases, one can often return something like a
return Collections.<Number>unmodifiableList(integers);
and thereby declare the return type as Collection<Number>: the unmodifiableList method solves the problem of the exposed internal state, and has the neat property that it allows changing the type parameter to a supertype, because the list is then... well, unmodifiable anyhow.
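As a sketch, the earlier FooInteger example could then be reworked along these lines (Foo2 and FooInteger2 are renamed copies, to avoid clashing with the interface above):

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

interface Foo2<T>
{
    Collection<T> next(); // no wildcard in the return type any more
}

class FooInteger2 implements Foo2<Number>
{
    private final List<Integer> integers = new ArrayList<Integer>();

    @Override
    public Collection<Number> next()
    {
        // unmodifiableList takes a List<? extends T>, so the Integer list
        // may be widened to Number; it can't be written to anyway.
        return Collections.<Number>unmodifiableList(integers);
    }
}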
https://rules.sonarsource.com/java/RSPEC-1452
It is highly recommended not to use wildcard types as return types. Because the type inference rules are fairly complex it is unlikely the user of that API will know how to use it correctly. Let's take the example of method returning a "List<? extends Animal>". Is it possible on this list to add a Dog, a Cat, ... we simply don't know. And neither does the compiler, which is why it will not allow such a direct use. The use of wildcard types should be limited to method parameters.
This rule raises an issue when a method returns a wildcard type.
Noncompliant Code Example
List<? extends Animal> getAnimals(){...}
Compliant Solution
List<Animal> getAnimals(){...}
or
List<Dog> getAnimals(){...}
From the text "Java Generics and Collections" by Naftalin and Wadler: a passage states that though Integer is a subtype of Number, List<Integer> is not a subtype of List<Number>.
This prevents one from polymorphically using references to the former in places where the latter is expected, as one might traditionally expect to be allowed.
My question is this: since the substitution principle does not apply between List<Integer> and List<Number>, does the restriction arise wherever a class and a specific type argument are combined, in all cases and in general (as in 'List<Integer>' here)? By this I mean substitution with respect to a statement like 'List<Integer>' as a whole, as opposed to 'List' or 'Integer' separately.
Or, alternatively, is the restriction instead defined through some mechanism that specifies whether and which classes are subtypes (and thus when the principle applies and when it doesn't), as is done through the usual extends and implements mechanisms?
Essentially, I do not understand the mechanism through which the substitution principle is determined to apply or not apply in such cases.
Many thanks.
If I understood the question correctly: in Java, and in the situation you described, the Liskov substitution principle does not apply because List<Integer> is not a subtype of List<Number>, which however is a requirement for the Liskov substitution principle.
That being said, the relation between List<Integer> and List<Number> can be described by covariance and contravariance, which model the expectation that could be formulated as follows.
As Integer is a subtype of Number, meaning that every implementation for Number can be used for Integer, one might expect the same to apply to types that instantiate the same generic template but use Integer and Number as type arguments.
However, to my understanding, this is a different mechanism, which is also discussed in this question for generics.
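A minimal sketch of that distinction, using the types from the question:

import java.util.ArrayList;
import java.util.List;

class InvarianceDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        ints.add(42);

        // List<Number> nums = ints;  // does not compile: generics are invariant
        // If it did, nums.add(3.14) would smuggle a Double into ints.

        List<? extends Number> view = ints; // fine: an explicit covariant view
        Number n = view.get(0);             // reading as Number is safe
        // view.add(7);                     // does not compile: no writes allowed
    }
}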
Referring to: Wildcard Capture Helper Methods
It says to create a helper method to capture the wildcard.
public void foo(List<?> i) {
    fooHelper(i);
}

private <T> void fooHelper(List<T> l) {
    l.set(0, l.get(0));
}
Just using this function below alone doesn't produce any compilation errors, and seems to work the same way. What I don't understand is: why wouldn't you just use this and avoid using a helper?
public <T> void foo(List<T> l) {
    l.set(0, l.get(0));
}
I thought that this question would really boil down to: what's the difference between wildcard and generics? So, I went to this: difference between wildcard and generics.
It says to use type parameters:
1) If you want to enforce some relationship on the different types of method arguments, you can't do that with wildcards, you have to use type parameters.
But, isn't that exactly what the wildcard with helper function is actually doing? Is it not enforcing a relationship on different types of method arguments with its setting and getting of unknown values?
My question is: If you have to define something that requires a relationship on different types of method args, then why use wildcards in the first place and then use a helper function for it?
It seems like a hacky way to incorporate wildcards.
In this particular case it's because the List.set(int, E) method requires the type to be the same as the type in the list.
If you don't have the helper method, the compiler doesn't know if ? is the same for List<?> and the return from get(int) so you get a compiler error:
The method set(int, capture#1-of ?) in the type List<capture#1-of ?> is not applicable for the arguments (int, capture#2-of ?)
With the helper method, you are telling the compiler: the type is the same, I just don't know what the type is.
So why have the non-helper method?
Generics weren't introduced until Java 5, so there is a lot of code out there that predates generics. A pre-Java 5 List is now a List<?>, so if you were trying to compile old code with a generics-aware compiler, you would have to add these helper methods if you couldn't change the method signatures.
I agree: Delete the helper method and type the public API. There's no reason not to, and every reason to.
Just to summarise the need for the helper with the wildcard version: although it's obvious to us as humans, the compiler doesn't know that the unknown type returned from l.get(0) is the same unknown type as that of the list itself. That is, it doesn't factor in that the parameter of the set() call comes from the same list object as the target, so it must be a safe operation. It only notices that the type returned from get() is unknown and the type of the target list is unknown, and two unknowns are not guaranteed to be the same type.
You are correct that we don't have to use the wildcard version.
It comes down to which API looks/feels "better", which is subjective.
void foo(List<?> i)
<T> void foo(List<T> i)
I'll say the 1st version is better.
If there are bounds
void foo(List<? extends Number> i)
<T extends Number> void foo(List<T> i)
The 1st version looks even more compact; the type information is all in one place.
At this point in time, the wildcard version is the idiomatic way, and it's more familiar to programmers.
There are a lot of wildcards in JDK method definitions, particularly after Java 8's introduction of lambdas and Stream. They are very ugly, admittedly, because we don't have declaration-site variance. But think how much uglier it would be if we expanded all wildcards to type variables.
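For a taste of that expansion, compare the real shape of Stream.map with a hypothetical wildcard-free variant (mapExpanded is made up; note that the ? super half cannot be expanded at all):

import java.util.function.Function;
import java.util.stream.Stream;

interface Expansion<T> {
    // The real JDK shape (java.util.stream.Stream):
    <R> Stream<R> map(Function<? super T, ? extends R> mapper);

    // Expanding just the '? extends R' already costs an extra type variable:
    <R, B extends R> Stream<R> mapExpanded(Function<T, B> mapper);

    // And '? super T' has no expansion at all: Java offers no
    // '<A super T>' syntax for declaring a lower-bounded type variable.
}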
The Java 14 Language Specification, Section 5.1.10 (PDF) devotes some paragraphs to why one would prefer providing the wildcard method publicly, while using the generic method privately. Specifically, they say (of the public generic method):
This is undesirable, as it exposes implementation information to the caller.
What do they mean by this? What exactly is getting exposed in one and not the other?
Did you know you can pass type parameters directly to a method? If you have a static method <T> Foo<T> create() on a Foo class -- yes, this has been most useful to me for static factory methods -- then you can invoke it as Foo.<String>create(). You normally don't need -- or want -- to do this, since Java can sometimes infer those types from any provided arguments. But the fact remains that you can provide those types explicitly.
So the generic <T> void foo(List<T> i) really takes two parameters at the language level: the element type of the list, and the list itself. We've modified the method contract just to save ourselves some time on the implementation side!
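A small sketch of that extra, caller-visible parameter (Util and its methods are hypothetical):

import java.util.ArrayList;
import java.util.List;

class Util {
    static <T> void process(List<T> l) { /* ... */ }
    static void processAny(List<?> l)  { /* ... */ }
}

class Caller {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();

        // The generic version exposes a type parameter at the call site:
        Util.<String>process(names);

        // The wildcard version has no such knob; the element type stays
        // the list's own business.
        Util.processAny(names);
    }
}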
It's easy to think that <?> is just shorthand for the more explicit generic syntax, but I think Java's notation actually obscures what's really going on here. Let's translate into the language of type theory for a moment:
/* Java */                     /* Type theory */
List<?>                   ~~   ∃T. List<T>
void foo(List<?> l)       ~~   (∃T. List<T>) -> ()
<T> void foo(List<T> l)   ~~   ∀T. (List<T> -> ())
A type like List<?> is called an existential type. The ? means that there is some type that goes there, but we don't know what it is. On the type theory side, ∃T. means "there exists some T", which is essentially what I said in the previous sentence -- we've just given that type a name, even though we still don't know what it is.
In type theory, functions have type A -> B, where A is the input type and B is the return type. (We write void as () for silly reasons.) Notice that on the second line, our input type is the same existential list we've been discussing.
Something strange happens on the third line! On the Java side, it looks like we've simply named the wildcard (which isn't a bad intuition for it). On the type theory side we've said something superficially very similar to the previous line: for any type of the caller's choice, we will accept a list of that type. (∀T. is, indeed, read as "for all T".) But the scope of T is now totally different -- the brackets have moved to include the output type! That's critical: we couldn't write something like <T> List<T> reverse(List<T> l) without that wider scope.
But if we don't need that wider scope to describe the function's contract, then reducing the scope of our variables (yes, even type-level variables) makes it easier to reason about those variables. The existential form of the method makes it abundantly clear to the caller that the relevance of the list's element type extends no further than the list itself.
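For contrast, here's a sketch of the reverse example just mentioned, which genuinely needs the wider scope:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Scope {
    // T links the input to the output, so the quantifier must scope over both.
    static <T> List<T> reverse(List<T> l) {
        List<T> copy = new ArrayList<>(l);
        Collections.reverse(copy);
        return copy;
    }

    // A wildcard version, static List<?> reverse(List<?> l), would compile,
    // but its contract no longer tells the caller that the element types match.
}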
I am reading about multi-level wildcards from the AngelikaLangerGenericsFaq. I am pretty confused about the syntax. The document says
The type Collection<Pair<String,?>> is a concrete instantiation of the generic Collection interface. It is a heterogenous collection of pairs of different types. It can contain elements of type Pair<String,Long>, Pair<String,Date>, Pair<String,Object>, Pair<String,String>, and so on and so forth. In other words, Collection<Pair<String,?>> contains a mix of pairs of different types of the form Pair<String,?>.
The type Collection<? extends Pair<String,?>> is a wildcard parameterized type; it does NOT stand for a concrete parameterized type. It stands for a representative from the family of collections that are instantiations of the Collection interface, where the type argument is of the form Pair<String,?>. Compatible instantiations are Collection<Pair<String,Long>>, Collection<Pair<String,String>>, Collection<Pair<String,Object>>, or Collection<Pair<String,?>>. In other words, we do not know which instantiation of Collection it stands for.
As a rule of thumb, you have to read multi-level wildcards top-down.
I am confused about the following points.
Can someone elaborate on these three quotes with examples? I am totally lost in the syntax.
The document says para 1 is a concrete instantiation of a generic type and the other is not a concrete instantiation. How is that?
What does it mean to read the wildcards top-down?
What is the advantage of multi-level wildcards?
Can someone elaborate on these points? Thanks.
Can someone elaborate on these three quotes with examples? I am totally lost in the syntax.
Well, it wouldn't make sense to write those 3 quotes again here, as I can't give a better explanation than that. Instead, I will try to answer your other questions below; then possibly you will understand the answer to this one too. If not, you can ask again and I'll try to elaborate a little further.
The document says para 1 is a concrete instantiation of a generic type and the other is not a concrete instantiation. How is that?
A concrete instantiation is one in which all the type arguments are concrete types, known at compile time. For example, List<String> is a concrete instantiation, because String is a concrete type whose identity is known at compile time. Whereas List<? extends Number> is not a concrete instantiation, because ? extends Number can be any type that extends Number, so its type is unknown at compile time. Similarly, Map<String, Integer> is a concrete instantiation of the generic type Map<K, V>.
In the case of multi-level type parameters, say List<List<? extends Number>>, the outer List is a concrete instantiation of List<E>, because the type of its elements is known to be List at compile time, even though the inner List is a wildcard instantiation, as the type of the elements stored there can be Integer, Double, or any other subclass of Number. But that paragraph is talking about the outer type only, and the outer list can only contain elements of that List type.
That's why the first paragraph says it's a heterogeneous collection of Pair: the actual type parameters of Pair can be anything, but every element is certain to be a Pair and nothing else.
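Here's a minimal sketch of the two kinds of instantiation side by side (Pair is a hypothetical class, standing in for the one from the quotes):

import java.util.ArrayList;
import java.util.Collection;

// Hypothetical Pair, just for illustration.
class Pair<A, B> {
    final A first;
    final B second;
    Pair(A first, B second) { this.first = first; this.second = second; }
}

class MultiLevel {
    public static void main(String[] args) {
        // Concrete instantiation: one collection type, holding mixed pairs.
        Collection<Pair<String, ?>> mixed = new ArrayList<>();
        mixed.add(new Pair<String, Long>("id", 1L));
        mixed.add(new Pair<String, String>("name", "x"));

        // Wildcard instantiation: stands for a whole family of collections.
        Collection<? extends Pair<String, ?>> family;
        family = new ArrayList<Pair<String, Long>>();   // one family member
        family = new ArrayList<Pair<String, String>>(); // another member
        // family.add(new Pair<String, Long>("id", 1L)); // does not compile
    }
}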
What does it mean to read the wildcards top-down?
Talking in layman's terms, it means from left to right. While determining the type of a parameterized type, you first look at the outermost type parameter. Then, if that type parameter is itself a parameterized type, you move on to the type parameters of that parameterized type. So we read the type parameters from left to right.
What is the advantage of multi-level wildcards?
Suppose you want to create a List of Lists of fruits. Now your inner List can contain any kind of fruit. An apple is also a fruit, and a banana is also a fruit, so you have to make sure you can take all of them. Since generic types are invariant, in the sense that List<Apple> is not the same as List<Fruit>, you can't add a List<Apple> if your type is List<List<Fruit>>. For that you need to use wildcards, like this: List<List<? extends Fruit>>, which can now take a List<Apple>, a List<Banana>, a list of any fruit.
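A minimal sketch of that (Fruit, Apple, and Banana are hypothetical classes):

import java.util.ArrayList;
import java.util.List;

class Fruit {}
class Apple extends Fruit {}
class Banana extends Fruit {}

class Baskets {
    public static void main(String[] args) {
        List<Apple> apples = new ArrayList<>();
        List<Banana> bananas = new ArrayList<>();

        // List<List<Fruit>> baskets = new ArrayList<>();
        // baskets.add(apples);            // does not compile: invariance

        List<List<? extends Fruit>> baskets = new ArrayList<>();
        baskets.add(apples);               // fine
        baskets.add(bananas);              // fine
    }
}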
Generic types with wildcards are really "existential" types. If you're familiar at all with logic, you can read G<? extends T> as ∃S extends T: G<S>.
Angelika's explanation about reading types "top-down" really means that the imaginary existential quantifier implied by a type that contains a ? is always as close as possible to the ?. For example, you should mentally rewrite G<H<? extends T>> as G<∃S extends T: H<S>>. Since there's no quantifier on the outside, the type is called concrete.
I am a C++ / Java programmer and the main paradigm I happen to use in everyday programming is OOP. In some thread I read a comment that Type classes are more intuitive in nature than OOP. Can someone explain the concept of type classes in simple words so that an OOP guy like me can understand it?
First, I am always very suspicious of claims that this or that program structure is more intuitive. Programming is counter-intuitive and always will be because people naturally think in terms of specific cases rather than general rules. Changing this requires training and practice, otherwise known as "learning to program".
Moving on to the meat of the question, the key difference between OO classes and Haskell typeclasses is that in OO a class (even an interface class) is both a type and a template for new types (descendants). In Haskell a typeclass is only a template for new types. More precisely, a typeclass describes a set of types that share a common interface, but it is not itself a type.
So the typeclass "Num" describes numeric types with addition, subtraction and multiplication operators. The "Integer" type is an instance of "Num", meaning that Integer is a member of the set of types that implement those operators.
So I can write a summation function with this type:
sum :: Num a => [a] -> a
The bit to the left of the "=>" operator says that "sum" will work for any type "a" that is an instance of Num. The bit to the right says it takes a list of values of type "a" and returns a single value of type "a" as a result. So you could use it to sum a list of Integers or a list of Doubles or a list of Complex, because they are all instances of "Num". The implementation of "sum" will use the "+" operator of course, which is why you need the "Num" type constraint.
However you cannot write this:
sum :: [Num] -> Num
because "Num" is not a type.
This distinction between type and typeclass is why we don't talk about inheritance and descendants of types in Haskell. There is a sort of inheritance for typeclasses: you can declare one typeclass as a descendant of another. The descendant here describes a subset of the types described by the parent.
An important consequence of all this is that you can't have heterogeneous lists in Haskell. In the "sum" example you can pass it a list of integers or a list of doubles, but you cannot mix doubles and integers in the same list. This looks like a tricky restriction; how would you implement the old "cars and lorries are both types of vehicle" example? There are several answers depending on the problem you are actually trying to solve, but the general principle is that you do your indirection explicitly using first-class functions rather than implicitly using virtual functions.
Well, the short version is: Type classes are what Haskell uses for ad-hoc polymorphism.
...but that probably didn't clarify anything for you.
Polymorphism should be a familiar concept to people from an OOP background. The key point here, however, is the difference between parametric and ad-hoc polymorphism.
Parametric polymorphism means functions that operate on a structural type that itself is parameterized by other types, such as a list of values. Parametric polymorphism is pretty much the norm everywhere in Haskell; C# and Java call it "generics". Basically, a generic function does the same thing to a specific structure, no matter what the type parameters are.
Ad-hoc polymorphism, on the other hand, means a collection of distinct functions, doing different (but conceptually related) things depending on types. Unlike parametric polymorphism, ad-hoc polymorphic functions need to be specified individually for each possible type they can be used with. Ad-hoc polymorphism is thus a generalized term for a variety of features found in other languages, such as function overloading in C/C++ or class-based dispatch polymorphism in OOP.
A major selling point of Haskell's type classes over other forms of ad-hoc polymorphism is greater flexibility due to allowing polymorphism anywhere in the type signature. For instance, most languages won't distinguish overloaded functions based on return type; type classes can.
Interfaces as found in many OOP languages are somewhat similar to Haskell's type classes--you specify a group of function names/signatures that you want to treat in an ad-hoc polymorphic fashion, then explicitly describe how various types can be used with those functions. Haskell's type classes are used similarly, but with greater flexibility: you can write arbitrary type signatures for the type class functions, with the type variable used for instance selection appearing anywhere you like, not just as the type of an object that methods are being called on.
Some Haskell compilers--including the most popular, GHC--offer language extensions that make type classes even more powerful, such as multi-parameter type classes, which let you do ad-hoc polymorphic function dispatch based on multiple types (similar to what's called "multiple dispatch" in OOP).
To try to give you a bit of the flavor of it, here's some vaguely Java/C#-flavored pseudocode:
interface IApplicative<>
{
    IApplicative<T> Pure<T>(T item);
    IApplicative<U> Map<T, U>(Function<T, U> mapFunc, IApplicative<T> source);
    IApplicative<U> Apply<T, U>(IApplicative<Function<T, U>> apFunc, IApplicative<T> source);
}

interface IReducible<>
{
    U Reduce<T, U>(Function<T, U, U> reduceFunc, U seed, IReducible<T> source);
}
Note that we're, among other things, defining an interface over a generic type and defining a method where the interface type appears only as the return type, Pure. Not apparent is that every use of the interface name should mean the same type (i.e., no mixing different types that implement the interface), but I wasn't sure how to express that.
In C++/etc, "virtual methods" are dispatched according to the type of the this/self implicit argument. (The method is pointed to in a function table which the object implicitly points to)
Type classes work differently, and can do everything that "interfaces" can and more. Let's start with a simple example of something that interfaces can't do: Haskell's Read type-class.
ghci> -- this is a Haskell comment, like using "//" in C++
ghci> -- and ghci is an interactive Haskell shell
ghci> 3 + read "5" -- Haskell syntax is different, in C: 3 + read("5")
8
ghci> sum (read "[3, 5]") -- [3, 5] is a list containing 3 and 5
8
ghci> -- let us find out the type of "read"
ghci> :t read
read :: (Read a) => String -> a
read's type is (Read a) => String -> a, which means for every type that implements the Read type-class, read can convert a String to that type. This is dispatch based on return type, impossible with "interfaces".
This can't be done in C++ et al.'s approach, where the function table is retrieved from the object: here, you don't even have the relevant object until after read returns it, so how could you call it?
A key implementation difference from interfaces that allows this to happen, is that the function table isn't pointed to inside the object, it is passed separately by the compiler to the called functions.
Additionally, in C++/etc., when one defines a class, they are also responsible for implementing its interfaces. This means that you can't just invent a new interface and make Int or std::vector implement it.
In Haskell you can, and it isn't by "monkey patching" like in Ruby. Haskell has a good name-spacing scheme that means that two type classes can both have a function of the same name and a type can still implement both.
This allows Haskell to have many simple classes like Eq (types that support equality checking), Show (types that can be printed to a String), Read (types that can be parsed from a String), Monoid (types that have a concatenation operation and an empty element) and many more, and allows for even the primitive types like Int to implement the appropriate type-classes.
With the richness of type-classes, people tend to program against more general types and thus write more reusable functions; and since they also have less freedom when the types are general, they may even produce fewer bugs!
tldr: type-classes == awesome
In addition to what xtofl and camccann have already written in their excellent answers, a useful thing to notice when comparing Java's interfaces to Haskell's type classes is the following:
Java interfaces are closed, meaning that the set of interfaces any given class implements is decided once and for all when and where it is defined;
Haskell's type classes are open, meaning that any type (or group of types for multi-parameter type classes) can be made a member of any type class at any time, as long as suitable definitions can be provided for the functions defined by the type class.
This openness of type classes (and Clojure's protocols, which are very similar) is a very useful property; it is quite usual for a Haskell programmer to come up with a new abstraction and immediately apply it to a range of problems involving pre-existing types through clever use of type classes.
A type class can be compared to the concept of 'implementing' an interface. If some data type in Haskell implements the "Show" interface, it can be used with all functions that expect a "Show" object.
With OOP, you inherit both interface and implementation. Haskell type-classes allow these to be separated. Two utterly unrelated types can both expose the same interface.
Perhaps more importantly, Haskell allows class implementations to be added "after the fact". That is, I can invent some new type-class of my own, and then go and make all the standard pre-defined types be instances of this class. In an OO language, you [usually] can't easily add a new method to an existing class, no matter how useful that would be.