Background:
As a Java programmer, I extensively inherit from (rather: implement) interfaces, and sometimes I design abstract base classes. However, I have never really felt the need to subclass a concrete (non-abstract) class (in the cases where I did, it later turned out that another solution, such as delegation, would have been better).
So now I'm beginning to feel that there is almost no situation where inheriting from a concrete class is appropriate. For one thing, the Liskov substitution principle (LSP) seems almost impossible to satisfy for non-trivial classes; also many other questions here seem to echo a similar opinion.
So my question:
In which situation (if any) does it actually make sense to inherit from a concrete class?
Can you give a concrete, real-world example of a class that inherits from another concrete class, where you feel this is the best design given the constraints? I'd be particularly interested in examples that satisfy the LSP (or examples where satisfying the LSP seems unimportant).
I mainly have a Java background, but I'm interested in examples from any language.
You often have a skeletal implementation for an interface I. If you can offer extensibility without abstract methods (e.g. via hooks), it is preferable to have a non-abstract skeletal class, because you can instantiate it.
An example would be a forwarding wrapper class, used to forward to another object of a concrete class C implementing I, e.g. enabling decoration or simple code reuse of C without having to inherit from C. You can find such an example in Effective Java, Item 16: favor composition over inheritance. (I do not want to post it here because of copyright, but it really just forwards all method calls of I to the wrapped implementation.)
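A minimal sketch of the idea (the interface and class names here are hypothetical, not Bloch's actual example):

import java.util.Objects;

// Hypothetical interface standing in for "I".
interface Engine {
    int horsepower();
}

// Skeletal forwarding wrapper: implements the interface by delegating
// every call to a wrapped instance of any concrete implementation.
class ForwardingEngine implements Engine {
    private final Engine delegate;

    ForwardingEngine(Engine delegate) {
        this.delegate = Objects.requireNonNull(delegate);
    }

    @Override
    public int horsepower() {
        return delegate.horsepower();
    }
}

// A decorator built on the wrapper: it tweaks the behavior of any Engine
// without inheriting from a concrete Engine class.
class TunedEngine extends ForwardingEngine {
    TunedEngine(Engine delegate) {
        super(delegate);
    }

    @Override
    public int horsepower() {
        return super.horsepower() + 50; // the decoration
    }
}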
I think the following is a good example when it can be appropriate:
public class LinkedHashMap<K,V>
extends HashMap<K,V>
Another good example is inheritance of exceptions:
public class IllegalFormatPrecisionException extends IllegalFormatException
public class IllegalFormatException extends IllegalArgumentException
public class IllegalArgumentException extends RuntimeException
public class RuntimeException extends Exception
public class Exception extends Throwable
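Application code often continues this pattern; a hedged sketch, with a made-up domain exception:

// Hypothetical domain exception extending the concrete
// IllegalArgumentException, in the same spirit as the hierarchy above.
public class InvalidOrderQuantityException extends IllegalArgumentException {
    public InvalidOrderQuantityException(int quantity) {
        super("Order quantity must be positive, got: " + quantity);
    }
}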
One very common case I can think of is to derive from basic UI controls, such as forms, textboxes, comboboxes, etc. They are complete, concrete, and well able to stand on their own; however, most of them are also very basic, and sometimes their default behavior isn't what you want. Virtually nobody, for instance, would use an instance of an unadulterated Form, unless possibly they were creating an entirely dynamic UI layer.
For example, in a piece of software I wrote that recently reached relative maturity (meaning I ran out of time to focus primarily on developing it :) ), I found I needed to add "lazy loading" capability to ComboBoxes, so it wouldn't take 50 years (in computer years) for the first window to load. I also needed the ability to automatically filter the available options in one ComboBox based on what was shown in another, and lastly I needed a way to "mirror" one ComboBox's value in another editable control, and make a change in one control happen to the other as well. So, I extended the basic ComboBox to give it these extra features, and created two new types: LazyComboBox, and then further, MirroringComboBox. Both are based on the totally serviceable, concrete ComboBox control, just overriding some behaviors and adding a couple others. They're not very loosely-coupled and therefore not too SOLID, but the added functionality is generic enough that if I had to, I could rewrite either of these classes from scratch to do the same job, possibly better.
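The answer doesn't say which UI toolkit this was; as a rough Swing-flavored sketch of the LazyComboBox idea (the names and the loading strategy are assumptions, not the author's actual code):

import java.util.List;
import java.util.function.Supplier;
import javax.swing.JComboBox;
import javax.swing.event.PopupMenuEvent;
import javax.swing.event.PopupMenuListener;

// Sketch: a combo box that defers loading its items until the user
// first opens the dropdown, keeping initial window construction fast.
public class LazyComboBox<E> extends JComboBox<E> {
    private boolean loaded = false;

    public LazyComboBox(Supplier<List<E>> itemSupplier) {
        addPopupMenuListener(new PopupMenuListener() {
            @Override
            public void popupMenuWillBecomeVisible(PopupMenuEvent e) {
                if (!loaded) {
                    for (E item : itemSupplier.get()) {
                        addItem(item); // inherited from JComboBox
                    }
                    loaded = true;
                }
            }

            @Override
            public void popupMenuWillBecomeInvisible(PopupMenuEvent e) {}

            @Override
            public void popupMenuCanceled(PopupMenuEvent e) {}
        });
    }
}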
Generally speaking, the only time I derive from concrete classes is when they're in the framework. Deriving from Applet or JApplet being the trivial example.
This is an example of a current implementation that I'm undertaking.
In an OAuth 2 environment, since the documentation is still in draft stage, the specification keeps changing (as of the time of writing, we're on draft 21).
Thus, I had to extend my concrete AccessToken class to accommodate the different access tokens.
In an earlier draft, there was no token_type field, so the actual access token was as follows:
public class AccessToken extends OAuthToken {
private static final long serialVersionUID = -4419729971477912556L;
private String accessToken;
private String refreshToken;
private Map<String, String> additionalParameters;
//Getters and setters are here
}
Now, with access tokens that return token_type, I have:
public class TokenTypedAccessToken extends AccessToken {
private String tokenType;
//Getter and setter are here...
}
So, I can return both and the end user is none the wiser. :-)
In summary: if you want a customized class that has the same functionality as your concrete class, without changing the structure of the concrete class, I suggest extending the concrete class.
I mainly have a Java background, but I'm interested in examples from any language.
Like many frameworks, ASP.NET makes heavy use of inheritance to share behaviour between classes. For example, HtmlInputPassword has this inheritance hierarchy:
System.Object
System.Web.UI.Control
System.Web.UI.HtmlControls.HtmlControl // abstract
System.Web.UI.HtmlControls.HtmlInputControl // abstract
System.Web.UI.HtmlControls.HtmlInputText
System.Web.UI.HtmlControls.HtmlInputPassword
in which you can see examples of concrete classes being derived from.
If you're building a framework - and you're sure you want to do that - you may well find yourself wanting a nice big inheritance hierarchy.
Another use case would be to override the default behavior:
Let's say there is a class which uses the standard JAXB parser for parsing:
public class Util {
    public void mainOperation() {
        // ... workflow that calls parse() internally ...
    }

    protected MyDataStructure parse() {
        // standard JAXB code
        return null; // stub for illustration
    }
}
Now, say I want to use a different binding (say, XMLBeans) for the parsing operation:
public class MyUtil extends Util {
    @Override
    protected MyDataStructure parse() {
        // XMLBeans code
        return null; // stub for illustration
    }
}
Now I can use the new binding while reusing the superclass's code.
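A usage sketch, assuming the two classes above compile in the same package:

public class BindingDemo {
    public static void main(String[] args) {
        // Callers depend only on Util's API; the parse() override is
        // picked up polymorphically.
        Util jaxbUtil = new Util();       // standard JAXB parsing
        Util xmlBeansUtil = new MyUtil(); // XMLBeans parsing
        xmlBeansUtil.mainOperation();     // reuses Util's workflow unchanged
    }
}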
The decorator pattern, a handy way of adding additional behaviour to a class without making it too general, makes heavy use of inheritance of concrete classes. It was mentioned here already, but under the somewhat more scientific name of "forwarding wrapper class".
Lots of answers already, but I thought I'd add my own $0.02.
I subclass concrete classes infrequently, but under some specific circumstances. At least one has already been mentioned: when framework classes are designed to be extended. Two additional ones come to mind, with some examples:
1) If I want to tweak the behavior of a concrete class. Sometimes I want to change how the concrete class works or I want to know when a certain method is called so I can trigger something. Often concrete classes will define a hook method whose sole usage is for subclasses to override the method.
Example: we overrode MBeanExporter because we needed to be able to unregister a JMX bean:
public class MBeanRegistrationSupport {
    // the concrete class has hooks defined for subclasses
    protected void onRegister(ObjectName objectName) {
    }

    protected void onUnregister(ObjectName objectName) {
    }
}
Our class:
public class UnregisterableMBeanExporter extends MBeanExporter {
    @Override
    protected void onUnregister(ObjectName name) {
        // always a good idea to invoke the super hook
        super.onUnregister(name);
        objectMap.remove(name);
    }
}
Here's another good example. LinkedHashMap is designed to have its removeEldestEntry method overridden.
private static class LimitedLinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > 1000;
    }
}
2) If a class shares a significant amount of overlap with the concrete class except for some tweaks to functionality.
Example: My ORMLite project handles persisting Long object fields and long primitive fields. Both have almost identical definitions. LongObjectType provides all of the methods that describe how the database deals with long fields.
public class LongObjectType {
    // a whole bunch of methods
}
while LongType extends LongObjectType and only tweaks a single method to say that it handles primitives:
public class LongType extends LongObjectType {
    ...
    @Override
    public boolean isPrimitive() {
        return true;
    }
}
Hope this helps.
Inheriting from a concrete class is the only option if you want to extend third-party library functionality.
For a real-life example, you can look at the hierarchy of DataInputStream, which extends the concrete FilterInputStream and implements the DataInput interface.
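A sketch in the same spirit (this class is illustrative, not part of the JDK): extending the concrete FilterInputStream to layer on new behavior, just as DataInputStream does.

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Counts the bytes read through the stream, the same way
// DataInputStream layers data-reading methods over FilterInputStream.
public class CountingInputStream extends FilterInputStream {
    private long count = 0;

    public CountingInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1) count++;
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) count += n;
        return n;
    }

    public long getCount() {
        return count;
    }
}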
I'm beginning to feel that there is almost no situation where inheriting from a concrete class is appropriate.
This is one 'almost'. Try writing an applet without extending Applet or JApplet.
Here is an example from the applet info page.
/* <!-- Defines the applet element used by the appletviewer. -->
<applet code='HelloWorld' width='200' height='100'></applet> */
import javax.swing.*;
/** A 'Hello World' Swing-based applet.
To compile and launch:
prompt> javac HelloWorld.java
prompt> appletviewer HelloWorld.java */
public class HelloWorld extends JApplet {
public void init() {
// Swing operations need to be performed on the EDT.
// The Runnable/invokeLater() ensures that happens.
Runnable r = new Runnable() {
public void run() {
// the crux of this simple applet
getContentPane().add( new JLabel("Hello World!") );
}
};
SwingUtilities.invokeLater(r);
}
}
Another good example would be data storage types. To give a precise example: a red-black tree is a more specific binary tree, but retrieving data and other information like size can be handled identically. Of course, a good library should have that already implemented, but sometimes you have to add specific data types for your problem.
I am currently developing an application which calculates matrices for the users. The user can provide settings to influence the calculation. There are several types of matrices that can be calculated, but there is a clear similarity, especially in the configurability: matrix A can use all the settings of matrix B but has additional parameters which can be used. In that case, I inherited from ConfigObjectB for my ConfigObjectA and it works pretty well.
In general, it is better to inherit from an abstract class than from a concrete class. A concrete class must provide a definition for its data representation, and some subclasses will need a different representation. Since an abstract class does not have to provide a data representation, future subclasses can use any representation without fear of conflicting with the one that they inherited.
Even I have never found a situation where I felt concrete inheritance was necessary. But there could be some situations for concrete inheritance, especially when you are providing backward compatibility in your software. In that case you might have specialized a class A, but you want it to remain concrete because your older applications might be using it.
Your concerns are also echoed in the classic principle "favor composition over inheritance", for the reasons you stated. I can't remember the last time I inherited from a concrete class. Any common code that needs to be reused by child classes almost always needs to declare abstract interfaces for those classes. In this order I try to prefer the following strategies:
Composition (no inheritance)
Interface
Abstract Class
Inheriting from a concrete class really isn't ever a good idea.
[EDIT] I'll qualify this statement by saying I don't see a good use case for it when you have control over the architecture. Of course when using an API that expects it, whaddaya gonna do? But I don't understand the design choices made by those APIs. The calling class should always be able to declare and use an abstraction according to the Dependency Inversion Principle. If a child class has additional interfaces to be consumed you'd either have to violate DIP or do some ugly casting to get at those interfaces.
from the gdata project:
com.google.gdata.client.Service is designed to act as a base class that can be customized for specific types of GData services.
Service javadoc:
The Service class represents a client connection to a GData service. It encapsulates all protocol-level interactions with the GData server and acts as a helper class for higher level entities (feeds, entries, etc) that invoke operations on the server and process their results.
This class provides the base level common functionality required to access any GData service. It is also designed to act as a base class that can be customized for specific types of GData services. Examples of supported customizations include:
Authentication - implementing a custom authentication mechanism for services that require authentication and use something other than HTTP basic or digest authentication.
Extensions - define expected extensions for feed, entry, and other types associated with the service.
Formats - define additional custom resource representations that might be consumed or produced by the service and client side parsers and generators to handle them.
I find the Java collection classes a very good example.
So you have an AbstractCollection with children like AbstractList, AbstractSet, AbstractQueue...
I think this hierarchy has been well designed, and just to ensure there's no explosion, there's the Collections class with all its inner static classes.
You do that, for instance, in GUI libraries. It doesn't make much sense to inherit from a mere Component and delegate to a Panel; it is likely much easier to inherit from the Panel directly.
Just a general thought: abstract classes are missing something, and it makes sense when what is missing differs in each derived class. But you may have a case where you don't want to modify a class but just want to add something. To avoid duplication of code you would inherit. And if you need both classes, it would be inheritance from a concrete class.
So my answer would be: In all cases where you really only want to add something. Maybe this just doesn't happen very often.
In his book Effective Java, Joshua Bloch recommends against using Interfaces to hold constants,
The constant interface pattern is a poor use of interfaces. That a class uses some constants internally is an implementation detail. Implementing a constant interface causes this implementation detail to leak into the class’s exported API. It is of no consequence to the users of a class that the class implements a constant interface. In fact, it may even confuse them. Worse, it represents a commitment: if in a future release the class is modified so that it no longer needs to use the constants, it still must implement the interface to ensure binary compatibility. If a nonfinal class implements a constant interface, all of its subclasses will have their namespaces polluted by the constants in the interface.
His reasoning makes sense to me, and it seems to be the prevailing logic whenever the question is brought up, but it overlooks storing constants in interfaces and then NOT implementing them.
For instance,
public interface SomeInterface {
public static final String FOO = "example";
}
public class SomeOtherClass {
//notice that this class does not implement anything
public void foo() {
thisIsJustAnExample("Designed to be short", SomeInterface.FOO);
}
}
I work with someone who uses this method all the time. I tend to use classes with private constructors to hold my constants, but I've started using interfaces in this manner to keep our code in a consistent style. Are there any reasons not to use interfaces in the way I've outlined above?
Essentially it's shorthand that saves you from having to give the class a private constructor, since an interface cannot be instantiated.
I guess it does the job, but as a friend once said: "You can try mopping a floor with an octopus; it might get the job done, but it's not the right tool".
Interfaces exist to specify contracts, which are then implemented by classes. When I see an interface, I assume that there are some classes out there that implement it. So I'd lean towards saying that this is an example of abusing interfaces rather than using them, simply because I don't think that's the way interfaces were meant to be used.
I guess I don't understand why these values are public in the first place if they're simply going to be used privately in a class. Why not just move them into the class? Now if these values are going to be used by a bunch of classes, then why not create an enum? Another pattern that I've seen is a class that just holds public constants. This is similar to the pattern you've described. However, the class can be made final so that it cannot be extended; there is nothing that stops a developer from implementing your interface. In these situations, I just tend to use enum.
UPDATE
This was going to be a response to a comment, but then it got long. Creating an interface to hold just one value is even more wasteful! :) You should use a private constant for that. While putting unrelated values into a single enum is bad, you could group them into separate enums, or simply use private constants for the class.
Also, if it appears that all these classes are sharing these unrelated constants (but which make sense in the context of the class), why not create an abstract class where you define these constants as protected? All you have to do then is extend this class and your derived classes will have access to the constants.
I don't think a class with a private constructor is any better than using an interface.
What the quote says is that using implements ConstantInterface is not best practice because this interface becomes part of the API.
However, you can use static import, or qualified names like SomeInterface.FOO, to refer to the values from the interface instead and avoid this issue.
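For example (assuming SomeInterface from the question lives in a package com.example; the package name is made up for illustration):

// Uses the constant without implementing the interface, so the
// constant never leaks into this class's exported API.
import static com.example.SomeInterface.FOO;

public class Consumer {
    public void foo() {
        System.out.println(FOO); // no "implements SomeInterface" needed
    }
}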
Constants are a bad thing anyway. Stuffing a bunch of strings in a single location is a sign that your application has design problems from the get-go. It's not object-oriented and (especially for String constants) can lead to the development of fragile APIs.
If a class needs some static values then they should be local to that class. If more classes need access to those values, they should be promoted to an enumeration and modeled as such. If you really insist on having a class full of constants, then you create a final class with a private no-args constructor. With this approach you can at least ensure that the buck stops there: there are no instantiations allowed and you can only access state in a static manner.
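A sketch of that promotion to an enum (the domain and names are made up):

// Related constants modeled as an enum instead of a constants interface;
// the enum can carry data and behavior alongside each constant.
public enum Status {
    ACTIVE("A"), SUSPENDED("S"), CLOSED("C");

    private final String code;

    Status(String code) {
        this.code = code;
    }

    public String code() {
        return code;
    }
}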
This particular anti-pattern has one serious problem: there is no mechanism to stop someone from using your class that implements this rogue constants interface. It's really about addressing a limitation of Java that allows you to do nonsensical things.
The net out is that it reduces the meaningfulness of the application's design because the grasp on the principles of the language aren't there. When I inherit code with constants interfaces, I immediately second guess everything because who knows what other interesting hacks I'll find.
Creating a separate class for constants seems silly. It's more work than making an enum, and the only reason to do it would be to keep unrelated constants all in one place just because presumably they all happen to be referenced by the same chunks of code. Hopefully your Bad Smell alarm goes off when you think about slapping a bunch of unrelated stuff together and calling it a class.
As for interfaces, as long as you're not implementing the interface it's not the end of the world (and the JDK has a number of classes implementing SwingConstants for example), but there may be better ways depending on what exactly you're doing.
You can use enums to group related constants together, and even add methods to them
you can use Resource Bundles for UI text
use a Map<String,String> passed through Collections.unmodifiableMap for more general needs
you could also read constants from a file using java.util.Properties and wrap or subclass it to prevent changes
Also, with static imports there's no reason for lazy people to implement an interface to get its constants when you can be lazy by doing import static SomeInterface.*; instead.
When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:
interface Garble {
int zork();
}
interface Gnarf extends Garble {
/**
* This is the same as calling {@link #zblah(int) zblah(0)}
*/
int zblah();
int zblah(int defaultZblah);
}
And then
abstract class AbstractGarble implements Garble {
@Override
public final int zork() { ... }
}
abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
// Here I absolutely want to fix the default behaviour of zblah.
// No Gnarf should be allowed to set 1 as the default, for instance.
@Override
public final int zblah() {
return zblah(0);
}
// This method is not implemented here, but in a subclass
@Override
public abstract int zblah(int defaultZblah);
}
I do this for several reasons:
It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear, what methods I have to implement, and what methods I may not override (in case I forgot the details about the hierarchy)
I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.
So the final keyword works perfectly for me. My question is:
Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?
Why is it used so rarely in the wild?
Because you have to write one more word to make a variable/method final.
Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?
Usually I see such examples in third-party libraries. In some cases I want to extend some class and change some behavior. It is especially dangerous in closed-source libraries without interface/implementation separation.
I always use final when I write an abstract class and want to make it clear which methods are fixed. I think this is the most important function of this keyword.
But when you're not expecting a class to be extended anyway, why the fuss? Of course if you're writing a library for someone else, you try to safeguard it as much as you can but when you're writing "end user code", there is a point where trying to make your code foolproof will only serve to annoy the maintenance developers who will try to figure out how to work around the maze you had built.
The same goes for making classes final. Although some classes should by their very nature be final, all too often a short-sighted developer will simply mark all the leaf classes in the inheritance tree as final.
After all, coding serves two distinct purposes: to give instructions to the computer and to pass information to other developers reading the code. The second one is ignored most of the time, even though it's almost as important as making your code work. Putting in unnecessary final keywords is a good example of this: it doesn't change the way the code behaves, so its sole purpose should be communication. But what do you communicate? If you mark a method as final, a maintainer will assume you had a good reason to do so. If it turns out that you hadn't, all you achieved was to confuse others.
My approach is (and I may be utterly wrong here obviously): don't write anything down unless it changes the way your code works or conveys useful information.
Why is it used so rarely in the wild?
That doesn't match my experience. I see it used very frequently in all kinds of libraries. Just one (random) example: Look at the abstract classes in:
http://code.google.com/p/guava-libraries/
, e.g. com.google.common.collect.AbstractIterator. peek(), hasNext(), next() and endOfData() are final, leaving just computeNext() to the implementor. This is a very common example IMO.
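A minimal subclass might look like this (a sketch against Guava's documented AbstractIterator API):

import com.google.common.collect.AbstractIterator;

// Only computeNext() is supplied; hasNext(), next() and peek() are the
// final methods inherited from AbstractIterator.
public class CountdownIterator extends AbstractIterator<Integer> {
    private int current = 3;

    @Override
    protected Integer computeNext() {
        if (current > 0) {
            return current--;
        }
        return endOfData(); // signals exhaustion
    }
}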
The main reason against using final is to allow implementors to change an algorithm - you mentioned the "template method" pattern: It can still make sense to modify a template method, or to enhance it with some pre-/post actions (without spamming the entire class with dozens of pre-/post-hooks).
The main reason pro using final is to avoid accidental implementation mistakes, or when the method relies on internals of the class which aren't specified (and thus may change in the future).
I think it is not commonly used for two reasons:
People don't know it exists
People are not in the habit of thinking about it when they build a method.
I typically fall into the second reason. I do override concrete methods on a somewhat common basis. In some cases this is bad, but there are many times it doesn't conflict with design principles and in fact might be the best solution. Therefore when I am implementing an interface, I typically don't think deeply enough at each method to decide if a final keyword would be useful. Especially since I work on a lot of business applications that change frequently.
Why is it used so rarely in the wild?
Because it should not be necessary. It also does not fully close down the implementation, so in effect it might give you a false sense of security.
It should not be necessary due to the Liskov substitution principle. The method has a contract and in a correctly designed inheritance diagram that contract is fulfilled (otherwise it's a bug). Example:
interface Animal {
    void bark();
}

abstract class AbstractAnimal implements Animal {
    @Override
    public final void bark() {
        playSound("whoof.wav"); // you were thinking about a dog, weren't you?
    }

    void playSound(String file) { /* ... */ }
}

class Dog extends AbstractAnimal {
    // ok
}

class Cat extends AbstractAnimal {
    // oops - no barking allowed!
}
By not allowing a subclass to do the right thing (for it) you might introduce a bug. Or you might require another developer to put an inheritance tree of your Garble interface right beside yours because your final method does not allow it to do what it should do.
The false sense of security is typical of a non-static final method. A static method should not use state from the instance (it cannot). A non-static method probably does. Your final (non-static) method probably does too, but it does not own the instance variables - they can be different than expected. So you add a burden on the developer of the class inheriting from AbstractGarble - to ensure instance fields are in a state expected by your implementation at any point in time. Without giving the developer a way to prepare the state before calling your method, as in:
int zblah() {
prepareState();
return super.zblah();
}
In my opinion you should not close an implementation in such a fashion unless you have a very good reason. If you document your method contract and provide a JUnit test, you should be able to trust other developers. Using the JUnit test they can actually verify the Liskov substitution principle.
As a side note, I do occasionally close a method. Especially if it's on the boundary part of a framework. My method does some bookkeeping and then continues to an abstract method to be implemented by someone else:
final boolean login() {
bookkeeping();
return doLogin();
}
abstract boolean doLogin();
That way no-one forgets to do the bookkeeping but they can provide a custom login. Whether you like such a setup is of course up to you :)
Why is the Object class, which is base class of 'em all in Java, not abstract?
I've had this question for a really really long time and it is asked here purely out of curiosity, that's all. Nothing in my code or anybody's code is breaking because it is not abstract, but I was wondering why they made it concrete?
Why would anyone want an "instance" (and not its presence, a.k.a. a reference) of this Object class? One case is poor synchronization code which uses an instance of Object for locking (at least I used it this way once... my bad).
Is there any practical use of an "instance" of an Object class? And how does its instantiation fit in OOP? What would have happened if they had marked it abstract (of course after providing implementations to its methods)?
Without the designers of java.lang.Object telling us, we have to base our answers on opinion. There's a few questions which can be asked which may help clear it up.
Would any of the methods of Object benefit from being abstract?
It could be argued that some of the methods would benefit from this. Take hashCode() and equals(), for instance: there would probably have been a lot less frustration around the complexities of these two if they had both been made abstract. This would require developers to figure out how they should be implementing them, making it more obvious that they should be consistent (see Effective Java). However, I'm more of the opinion that hashCode(), equals() and clone() belong on separate, opt-in abstractions (i.e. interfaces). The other methods, wait(), notify(), finalize(), etc., are sufficiently complicated and/or are native, so it's best they're already implemented, and they would not benefit from being abstract.
So I'd guess the answer would be no, none of the methods of Object would benefit from being abstract.
Would it be a benefit to mark the Object class as abstract?
Assuming all the methods are implemented, the only effect of marking Object abstract is that it cannot be constructed (i.e. new Object() is a compile error). Would this have a benefit? I'm of the opinion that the term "object" is itself abstract (can you find anything around you which can be totally described as "an object"?), so it would fit with the object-oriented paradigm. It is, however, on the purist side. It could be argued that forcing developers to pick a name for any concrete subclass, even empty ones, will result in code which better expresses their intent. I think, to be totally correct in terms of the paradigm, Object should be marked abstract, but when it comes down to it, there's no real benefit, it's a matter of design preference (pragmatism vs. purity).
Is the practice of using a plain Object for synchronisation a good enough reason for it to be concrete?
Many of the other answers talk about constructing a plain object to use in the synchronized() operation. While this may have been a common and accepted practice, I don't believe it would be a good enough reason to prevent Object being abstract if the designers wanted it to be. Other answers have mentioned how we would have to declare a single, empty subclass of Object any time we wanted to synchronise on a certain object, but this doesn't stand up - an empty subclass could have been provided in the SDK (java.lang.Lock or whatever), which could be constructed any time we wanted to synchronise. Doing this would have the added benefit of creating a stronger statement of intent.
Are there any other factors which could have been adversely affected by making Object abstract?
There are several areas, separate from a pure design standpoint, which may have influenced the choice. Unfortunately, I do not know enough about them to expand on them. However, it would not surprise me if any of these had an impact on the decision:
Performance
Security
Simplicity of implementation of the JVM
Could there be other reasons?
It's been mentioned that it may be in relation to reflection. However, reflection was introduced after Object was designed. So whether it affects reflection or not is moot - it's not the reason. The same for generics.
There's also the inescapable point that java.lang.Object was designed by humans: they may have made a mistake, they may not have considered the question. There is no language without flaws, and this may be one of them, but if it is, it's hardly a big one. And I think I can safely say, without lack of ambition, that I'm very unlikely to be involved in designing a key part of such a widely used technology, especially one that's lasted 15(?) years and still going strong, so this shouldn't be considered a criticism.
Having said that, I would have made it abstract ;-p
Summary
Basically, as far as I see it, the answer to both questions "Why is java.lang.Object concrete?" or (if it were so) "Why is java.lang.Object abstract?" is... "Why not?".
Plain instances of java.lang.Object are typically used in locking/synchronization scenarios and that's accepted practice.
Also - what would be the reason for it to be abstract? Because it's not fully functional in its own right as an instance? Could it really do with some abstract members? Don't think so. So the argument for making it abstract in the first place is non-existent. So it isn't.
Take the classic hierarchy of animals, where you have an abstract class Animal; the reasoning to make the Animal class abstract is that an instance of Animal is effectively an 'invalid' (for lack of a better word) animal, even if all its methods provide a base implementation. With Object, that is simply not the case. There is no overwhelming case to make it abstract in the first place.
From everything I've read, it seems that Object does not need to be concrete, and in fact should have been abstract.
Not only is there no need for it to be concrete, but after some more reading I am convinced that Object not being abstract is in conflict with the basic inheritance model - we should not be allowing abstract subclasses of a concrete class, since subclasses should only add functionality.
Clearly this is not the case in Java, where we have abstract subclasses of Object.
I can think of several cases where instances of Object are useful:
Locking and synchronization, like you and other commenters mention. It is probably a code smell, but I have seen Object instances used this way all the time.
As Null Objects, because equals will always return false, except on the instance itself.
In test code, especially when testing collection classes. Sometimes it's easiest to fill a collection or array with dummy objects rather than nulls.
As the base instance for anonymous classes. For example:
Object o = new Object() { /* ...code here... */ };
I think it probably should have been declared abstract, but once it is done and released it is very hard to undo without causing a lot of pain - see Java Language Spec 13.4.1:
"If a class that was not abstract is changed to be declared abstract, then preexisting binaries that attempt to create new instances of that class will throw either an InstantiationError at link time, or (if a reflective method is used) an InstantiationException at run time; such a change is therefore not recommended for widely distributed classes."
From time to time you need a plain Object that has no state of its own. Although such objects seem useless at first sight, they still have utility, since each one has a different identity. This is useful in several scenarios, the most important of which is locking: you want to coordinate two threads. In Java you do that by using an object that will be used as a lock. The object need not have any state; its mere existence is enough for it to become a lock:
class MyThread extends Thread {
private Object lock;
public MyThread(Object l) { lock = l; }
public void run() {
doSomething();
synchronized(lock) {
doSomethingElse();
}
}
}
Object lock = new Object();
new MyThread(lock).start();
new MyThread(lock).start();
In this example, we used a lock to prevent the two threads from concurrently executing doSomethingElse().
If Object were abstract and we needed a lock, we'd have to subclass it without adding any methods or fields, just so that we could instantiate the lock.
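That hypothetical subclass would be pure boilerplate; a sketch:

// The empty class an abstract Object would force on us, existing only
// so that something instantiable can serve as a monitor.
class PlainLock {
}

class LockDemo {
    public static void main(String[] args) {
        Object lock = new PlainLock();
        synchronized (lock) {
            System.out.println("holding the lock");
        }
    }
}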
Coming to think about it, here's a dual question to yours: Suppose Object were abstract, will it define any abstract methods? I guess the answer is No. In such circumstances there is not much value to defining the class as abstract.
I don't understand why most seem to believe that making a fully functional class, which implements all of its methods in a useful way, abstract would be a good idea.
I would rather ask: why make it abstract? Does it do something it shouldn't? Is it missing some functionality it should have? Both those questions can be answered with no; it is a fully working class on its own, and making it abstract just leads to people implementing empty classes.
public class UseableObject extends AbstractObject{}
UseableObject inherits from the abstract Object and, surprise, it can be instantiated. It does not add any functionality; its only reason to exist is to allow access to the methods exposed by Object.
Also I have to disagree with the use in "poor" synchronisation. Using private Objects to synchronize access is safer than using synchronized(this), and safer as well as easier to use than the Lock classes from java.util.concurrent.
Seems to me there's a simple question of practicality here. Making a class abstract takes away the programmer's ability to do something, namely, to instantiate it. There is nothing you can do with an abstract class that you cannot do with a concrete class. (Well, you can declare abstract functions in it, but in this case we have no need to have abstract functions.) So by making it concrete, you make it more flexible.
Of course if there was some active harm that was done by making it concrete, that "flexibility" would be a drawback. But I can't think of any active harm done by making Object instantiable. (Is "instantiable" a word? Whatever.) We could debate whether any given use that someone has made of a raw Object instance is a good idea. But even if you could convince me that every use that I have ever seen of a raw Object instance was a bad idea, that still wouldn't prove that there might not be good uses out there. So if it doesn't hurt anything, and it might help, even if we can't think of a way that it would actually help at the moment, why prohibit it?
I think all of the answers so far forget what it was like with Java 1.0. In Java 1.0, you could not make an anonymous class, so if you just wanted an object for some purpose (synchronization or a null placeholder) you would have to declare a class for that purpose, and then a whole bunch of code would have these extra classes. Much more straightforward to just allow direct instantiation of Object.
Sure, if you were designing Java today you might say that everyone should do:
Object NULL_OBJECT = new Object(){};
But that was not an option in 1.0.
I suspect the designers did not know in which ways an Object might be used in the future, and therefore didn't want to limit programmers by forcing them to create an additional class where not necessary, e.g. for things like mutexes, keys, etc.
It also means that it can be instantiated in an array. In the pre-1.5 days, this would allow you to have generic data structures. This could still be true on some platforms (I'm thinking J2ME, but I'm not sure)
Reasons why Object needs to be concrete.
reflection
see Object.getClass()
generic use (pre Java 5)
comparison/output
see Object.toString(), Object.equals(), Object.hashCode(), etc.
synchronization
see Object.wait(), Object.notify(), etc.
Even though a couple of areas have been replaced/deprecated, there was still a need for a concrete parent class to provide these features to every Java class.
The Object class is used in reflection so code can call methods on instances of indeterminate type, i.e. Object.class.getDeclaredMethods(). If Object were abstract, then code that wanted to participate would have to implement all abstract methods before client code could use reflection on it.
According to Sun, an abstract class is a class that is declared abstract; it may or may not include abstract methods. Abstract classes cannot be instantiated, but they can be subclassed. This also means you can't call instance methods or access instance fields of an abstract class without an instance of a concrete subclass.
Example of an abstract root class:
abstract public class AbstractBaseClass
{
public Class clazz;
public AbstractBaseClass(Class clazz)
{
super();
this.clazz = clazz;
}
}
A child of our AbstractBaseClass:
public class ReflectedClass extends AbstractBaseClass
{
public ReflectedClass()
{
super(this);
}
public static void main(String[] args)
{
ReflectedClass me = new ReflectedClass();
}
}
This will not compile because it's invalid to reference 'this' in a constructor unless it's to call another constructor in the same class. I can get it to compile if I change it to:
public ReflectedClass()
{
super(ReflectedClass.class);
}
but that only works because ReflectedClass has a parent ("Object") which is 1) concrete and 2) has a field to store the type for its children.
An example more typical of reflection would be in a non-static member function:
public void foo()
{
Class localClass = AbstractBaseClass.clazz;
}
This fails unless you change the field 'clazz' to be static. For the class field of Object this wouldn't work because it is supposed to be instance specific. It would make no sense for Object to have a static class field.
Now, I did try the following change and it works but is a bit misleading. It still requires the base class to be extended to work.
public void genericPrint(AbstractBaseClass c)
{
Class localClass = c.clazz;
System.out.println("Class is: " + localClass);
}
public static void main(String[] args)
{
ReflectedClass me = new ReflectedClass();
ReflectedClass meTwo = new ReflectedClass();
me.genericPrint(meTwo);
}
Pre-Java 5, generic-style use of Object (as with arrays) would have been impossible:
Object[] array = new Object[100];
array[0] = me;
array[1] = meTwo;
Instances need to be constructed to serve as placeholders until the actual objects are received.
I suspect the short answer is that the collection classes lost type information in the days before Java generics. If a collection is not generic, then it must return a concrete Object (and be downcast at runtime to whatever type it was previously).
Since making a concrete class into an abstract class would break binary compatibility (as noted upthread), the concrete Object class was kept. I would like to point out that in no case was it created for the sole purpose of synchronization; dummy classes work just as well.
The design flaw is not including generics from the beginning. A lot of design criticism is aimed at that decision and its consequences. [oh, and the array subtyping rule.]
It's not abstract because every new class we create extends Object. If Object were abstract with abstract methods, every class would need to implement all of those methods, which would be overhead... The methods are already implemented in that class...
For the past decade or so, I've been using the pattern below for my Java utility classes. The class contains only static methods and fields, is declared final so it can't be extended, and has a private constructor so it can't be instantiated.
public final class SomeUtilityClass {
public static final String SOME_CONSTANT = "Some constant";
private SomeUtilityClass() {}
public static Object someUtilityMethod(Object someParameter) {
/* ... */
return null;
}
}
Now, with the introduction of static methods in interfaces in Java 8, I lately find myself using a utility interface pattern:
public interface SomeUtilityInterface {
String SOME_CONSTANT = "Some constant";
static Object someUtilityMethod(Object someParameter) {
/* ... */
return null;
}
}
This allows me to get rid of the constructor, and a lot of keywords (public, static, final) that are implicit in interfaces.
Are there any downsides to this approach? Are there any benefits to using a utility class over a utility interface?
You should use an interface only if you expect that somebody will implement it. For example, the java.util.stream.Stream interface has a bunch of static methods which could have been located in some Streams or StreamUtils class prior to Java 8. However, it's a valid interface which has non-static methods as well and can be implemented. The java.util.Comparator is another example: all its static methods just support the interface. You cannot forbid users from implementing your public interface, but for a utility class you can forbid them to instantiate it. Thus, for code clarity, I suggest not using interfaces unless they are intended to be implemented.
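A small illustration of static methods living on genuinely implementable interfaces:

import java.util.Comparator;
import java.util.stream.Stream;

public class InterfaceStaticsDemo {
    public static void main(String[] args) {
        // Stream.of(...) and Comparator.naturalOrder() are static methods
        // on interfaces that also declare instance methods for implementors.
        Stream.of("banana", "apple", "cherry")
              .sorted(Comparator.naturalOrder())
              .forEach(System.out::println);
    }
}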
A note regarding @saka1029's answer: while it's true that you cannot define helper private methods and constants in the same interface, it's not a problem to create a package-private class in the same package, like MyInterfaceHelper, which will have all the necessary implementation-related stuff. In general, package-private classes are good for hiding your implementation details from the outer world.
Going by the person who labeled the Constant Interface pattern an anti-pattern, I would say that although you don't intend the client(s) to implement the interface, it's still possible, possibly easier, and shouldn't be allowed:
APIs should be easy to use and hard to misuse. It should be easy to do simple things; possible to do complex things; and impossible, or at least difficult, to do wrong things.
Although as mentioned below, it really depends on the target audience
A lot of easy-to-use design patterns get a lot of criticism (Context pattern, Singleton pattern, Constant Interface pattern). Heck, even design principles such as the Law of Demeter get criticised for being too verbose.
I'd hate to say it, but these kinds of decisions are opinion based. Although the context pattern is seen as an anti-pattern, it's apparent in mainstream frameworks such as Spring and the Android SDK. It boils down to the environment, as well as target audience.
The main downside that I can find is listed as the third listing under "downsides" in the Constant Interface wiki:
If binary code compatibility is required in future releases, the constants interface must remain forever an interface (it cannot be converted into a class), even though it has not been used as an interface in the conventional sense.
If you ever figure "Hey, this actually isn't a contract and I want to enforce stronger design", you will not be able to change it. But as I've said, it's up to you; maybe you won't care to change it in the future.
On top of that, there's code clarity, as mentioned by @TagirValeev. Interfaces have the intent of being implemented; if you don't want someone to implement the API you're supplying, don't make it implementable. But I believe this revolves around the "target audience" statement. Not gonna lie, I'm with you on the less-verbose foundation, but it depends on who my code is for; I wouldn't want to use a constant interface for code that may get reviewed.
You should not use interface.
Interfaces cannot have private constants and static initializers.
public class Utility {
private Utility() {}
public static final Map<String, Integer> MAP_CONSTANT;
static {
Map<String, Integer> map = new HashMap<>();
map.put("zero", 0);
map.put("one", 1);
map.put("three", 3);
MAP_CONSTANT = Collections.unmodifiableMap(map);
}
private static final String PRIVATE_CONSTANT = "Hello, ";
public static String hello(String name) {
return PRIVATE_CONSTANT + name;
}
}
I think it would work. The variable SOME_CONSTANT is implicitly static final in your SomeUtilityInterface, even though you didn't explicitly say so. So it would work as a utility, but wouldn't you have some mutability problems that you wouldn't have with a regular class with all member variables required to be final? As long as that's not an issue with your particular implementation of the default methods, I can't think of a problem.