Java method keyword "final" and its use

When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:
interface Garble {
    int zork();
}

interface Gnarf extends Garble {
    /**
     * This is the same as calling {@link #zblah(int) zblah(0)}
     */
    int zblah();
    int zblah(int defaultZblah);
}
And then
abstract class AbstractGarble implements Garble {
    @Override
    public final int zork() { ... }
}
abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
    // Here I absolutely want to fix the default behaviour of zblah.
    // No Gnarf should be allowed to set 1 as the default, for instance.
    @Override
    public final int zblah() {
        return zblah(0);
    }

    // This method is not implemented here, but in a subclass
    @Override
    public abstract int zblah(int defaultZblah);
}
I do this for several reasons:
It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear which methods I have to implement and which methods I may not override (in case I have forgotten the details of the hierarchy).
I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.
So the final keyword works perfectly for me. My question is:
Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

Why is it used so rarely in the wild?
Because it takes one more word to make a variable or method final.
Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?
I usually run into such examples in third-party libraries: in some cases I want to extend a class and change some behavior, and final prevents it. This is especially dangerous in closed-source libraries without interface/implementation separation.

I always use final when I write an abstract class and want to make it clear which methods are fixed. I think this is the most important function of this keyword.
But when you're not expecting a class to be extended anyway, why the fuss? Of course, if you're writing a library for someone else, you try to safeguard it as much as you can, but when you're writing "end user code", there is a point where trying to make your code foolproof will only serve to annoy the maintenance developers who will try to figure out how to work around the maze you built.
The same goes for making classes final. Although some classes should by their very nature be final, all too often a short-sighted developer will simply mark all the leaf classes in the inheritance tree as final.
After all, coding serves two distinct purposes: to give instructions to the computer and to pass information to other developers reading the code. The second one is ignored most of the time, even though it's almost as important as making your code work. Putting in unnecessary final keywords is a good example of this: it doesn't change the way the code behaves, so its sole purpose should be communication. But what do you communicate? If you mark a method as final, a maintainer will assume you had a good reason to do so. If it turns out that you didn't, all you achieved was to confuse others.
My approach is (and I may be utterly wrong here obviously): don't write anything down unless it changes the way your code works or conveys useful information.

Why is it used so rarely in the wild?
That doesn't match my experience. I see it used very frequently in all kinds of libraries. Just one (random) example: look at the abstract classes in http://code.google.com/p/guava-libraries/, e.g. com.google.common.collect.AbstractIterator. peek(), hasNext(), next() and endOfData() are final, leaving just computeNext() to the implementor. This is a very common example IMO.
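For illustration, here is roughly what that looks like from the implementor's side - the iteration protocol itself stays final, and only computeNext() is supplied (the countdown logic is invented for this example):

import com.google.common.collect.AbstractIterator;

class CountdownIterator extends AbstractIterator<Integer> {
    private int current = 3; // arbitrary starting value, for illustration only

    @Override
    protected Integer computeNext() {
        if (current > 0) {
            return current--; // produces 3, 2, 1
        }
        return endOfData(); // final helper signalling the end of iteration
    }
}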
The main reason against using final is to allow implementors to change an algorithm - you mentioned the "template method" pattern: It can still make sense to modify a template method, or to enhance it with some pre-/post actions (without spamming the entire class with dozens of pre-/post-hooks).
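A middle ground, sketched below under invented names, is to keep the template method itself final but expose a small number of deliberate hooks:

abstract class Report {
    // Template method: the algorithm is fixed, but subclasses get two hooks.
    public final void render() {
        beforeRender();  // pre-action hook
        renderBody();    // the step subclasses must supply
        afterRender();   // post-action hook
    }

    protected void beforeRender() {} // no-op by default
    protected void afterRender() {}  // no-op by default

    protected abstract void renderBody();
}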
The main reason pro using final is to avoid accidental implementation mistakes, or when the method relies on internals of the class which aren't specified (and thus may change in the future).

I think it is not commonly used for two reasons:
People don't know it exists
People are not in the habit of thinking about it when they build a method.
I typically fall into the second reason. I override concrete methods on a fairly regular basis. In some cases this is bad, but there are many times it doesn't conflict with design principles and in fact might be the best solution. Therefore, when I am implementing an interface, I typically don't think deeply enough about each method to decide whether a final keyword would be useful. Especially since I work on a lot of business applications that change frequently.

Why is it used so rarely in the wild?
Because it should not be necessary. It also does not fully close down the implementation, so in effect it might give you a false sense of security.
It should not be necessary due to the Liskov substitution principle. The method has a contract, and in a correctly designed inheritance hierarchy that contract is fulfilled (otherwise it's a bug). Example:
interface Animal {
    void bark();
}

abstract class AbstractAnimal implements Animal {
    public final void bark() {
        playSound("whoof.wav"); // you were thinking about a dog, weren't you?
    }
}

class Dog extends AbstractAnimal {
    // ok
}

class Cat extends AbstractAnimal {
    // oops - no barking allowed!
}
By not allowing a subclass to do the right thing (for it) you might introduce a bug. Or you might force another developer to build an inheritance tree of your Garble interface right beside yours, because your final method does not allow his class to do what it should do.
The false sense of security is typical of a non-static final method. A static method should not use state from the instance (it cannot). A non-static method probably does. Your final (non-static) method probably does too, but it does not own the instance variables - they can be different than expected. So you place a burden on the developer of the class inheriting from AbstractGarble: to ensure instance fields are in the state expected by your implementation at any point in time. And you do so without giving the developer a way to prepare the state before calling your method, as in:
int zblah() {
    prepareState();
    return super.zblah();
}
In my opinion you should not close an implementation in such a fashion unless you have a very good reason. If you document your method contract and provide a junit test you should be able to trust other developers. Using the Junit test they can actually verify the Liskov substitution principle.
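Such a contract test could look roughly like this (JUnit 4 style; the contract clause asserted here - that zork() is never negative - is only an assumed example):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public abstract class GarbleContractTest {
    // Each implementation's test subclass supplies its own instance.
    protected abstract Garble createGarble();

    @Test
    public void zorkNeverReturnsNegative() {
        assertTrue(createGarble().zork() >= 0); // hypothetical contract clause
    }
}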
As a side note, I do occasionally close a method. Especially if it's on the boundary part of a framework. My method does some bookkeeping and then continues to an abstract method to be implemented by someone else:
final boolean login() {
    bookkeeping();
    return doLogin();
}

abstract boolean doLogin();
That way no-one forgets to do the bookkeeping but they can provide a custom login. Whether you like such a setup is of course up to you :)
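Assuming those two methods live in a class called Authenticator (a name invented here), a subclass then only fills in the variable part:

class LdapAuthenticator extends Authenticator { // Authenticator is hypothetical
    @Override
    boolean doLogin() {
        // custom authentication goes here; this sketch always "succeeds"
        return true;
    }
}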

Related

Why does Java 8 not allow non-public default methods?

Let's take an example:
public interface Testerface {
    default public String example() {
        return "Hello";
    }
}

public class Tester implements Testerface {
    @Override
    public String example() {
        return Testerface.super.example() + " world!";
    }
}

public class Internet {
    public static void main(String[] args) {
        System.out.println(new Tester().example());
    }
}
Simply enough, this would print Hello world!. But say I was doing something else with the return value of Testerface#example, for instance initializing a data file and returning a sensitive internal value that shouldn't leave the implementing class. Why does Java not allow access modifiers on default interface methods? Why can't they be protected or private and potentially elevated by a subclass (similar to how a class that extends a parent class can use a more visible modifier for an overridden method)?
A common solution is moving to an abstract class; however, in my specific case I have an interface for enums, so that does not apply here. I imagine it was either overlooked, or it follows from the original idea behind interfaces, that they are a "contract" of available methods, but I would like input as to what's going on with this.
I've read "Why is “final” not allowed in Java 8 interface methods?", which states:
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation
And it sounds to me like visibility wouldn't break that aspect at all.
As with the linked question, since it looks like it had trouble being closed, an authoritative answer would be appreciated in this matter rather than opinion-based ones.
As we saw in What is the reason why “synchronized” is not allowed in Java 8 interface methods? and Why is "final" not allowed in Java 8 interface methods?, extending interfaces to define behavior is more subtle than it might first appear. It turns out that each of the possible modifiers has its own story; it's not simply a matter of blindly copying from how classes work. (This is at least obvious in hindsight, as tools for OO modeling that work for single inheritance do not automatically work for multiple inheritance.)
Let's start with the obvious answer: interfaces have always been restricted to only having public members, and while we added default methods and static methods to interfaces in Java 8, that doesn't mean we have to change everything just to be "more like" classes.
Unlike with synchronized and final, which would have been serious mistakes to support for default methods, weaker accessibilities, especially private, are reasonable features to consider. Private interface methods, whether static or instance (note that these would not be defaults, since they do not participate in inheritance) are a perfectly sensible tool (though they can be easily simulated by nonpublic helper classes.)
We actually did consider doing private interface methods in Java 8; this was mostly something that just fell off the bottom of the list due to resource and time constraints. It is quite possible this feature might reappear on the to-do list some day. (UPDATE: private methods in interfaces were added in Java 9.)
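Since Java 9 this compiles: a private interface method acting as a shared helper for the public default methods, without being inherited or overridable (the Greeter interface is made up for illustration):

interface Greeter {
    default String greetWorld() { return greet("world"); }
    default String greetUser(String user) { return greet(user); }

    private String greet(String name) { // not part of the inherited contract
        return "Hello, " + name + "!";
    }
}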
Package-private and protected methods, however, are more complicated than they look; the complexity of multiple inheritance and the complexity of the true meaning of protected would interact in all sorts of not-so-fun ways. So I wouldn't hold your breath for that.
So, the short answer is, private interface methods is something we could have done in 8, but we couldn't do everything that could have been done and still ship, so it was cut, but could come back.

Why is "final" not allowed in Java 8 interface methods?

One of the most useful features of Java 8 is the new default methods on interfaces. There are essentially two reasons (there may be others) why they have been introduced:
Providing actual default implementations. Example: Iterator.remove()
Allowing for JDK API evolution. Example: Iterable.forEach()
From an API designer's perspective, I would have liked to be able to use other modifiers on interface methods, e.g. final. This would be useful when adding convenience methods, preventing "accidental" overrides in implementing classes:
interface Sender {
    // Convenience method to send an empty message
    default final void send() {
        send(null);
    }

    // Implementations should only implement this method
    void send(String message);
}
The above is already common practice if Sender were a class:
abstract class Sender {
    // Convenience method to send an empty message
    final void send() {
        send(null);
    }

    // Implementations should only implement this method
    abstract void send(String message);
}
Now, default and final are obviously contradicting keywords, but the default keyword itself would not have been strictly required, so I'm assuming that this contradiction is deliberate, to reflect the subtle differences between "class methods with body" (just methods) and "interface methods with body" (default methods), i.e. differences which I have not yet understood.
At some point in time, support for modifiers like static and final on interface methods was not yet fully explored; citing Brian Goetz:
The other part is how far we're going to go to support class-building
tools in interfaces, such as final methods, private methods, protected
methods, static methods, etc. The answer is: we don't know yet
Since that time in late 2011, obviously, support for static methods in interfaces was added. Clearly, this added a lot of value to the JDK libraries themselves, such as with Comparator.comparing().
Question:
What is the reason final (and also static final) never made it to Java 8 interfaces?
This question is, to some degree, related to What is the reason why “synchronized” is not allowed in Java 8 interface methods?
The key thing to understand about default methods is that the primary design goal is interface evolution, not "turn interfaces into (mediocre) traits". While there's some overlap between the two, and we tried to be accommodating to the latter where it didn't get in the way of the former, these questions are best understood when viewed in this light. (Note too that class methods are going to be different from interface methods, no matter what the intent, by virtue of the fact that interface methods can be multiply inherited.)
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation. And because the design center was interface evolution, it was a critical design goal that default methods be able to be added to interfaces after the fact in a source-compatible and binary-compatible manner.
The too-simple answer to "why not final default methods" is that then the body would not simply be the default implementation, it would be the only implementation. While that's a little too simple an answer, it gives us a clue that the question is already heading in a questionable direction.
Another reason why final interface methods are questionable is that they create impossible problems for implementors. For example, suppose you have:
interface A {
    default void foo() { ... }
}

interface B {
}

class C implements A, B {
}
Here, everything is good; C inherits foo() from A. Now supposing B is changed to have a foo method, with a default:
interface B {
    default void foo() { ... }
}
Now, when we go to recompile C, the compiler will tell us that it doesn't know what behavior to inherit for foo(), so C has to override it (and could choose to delegate to A.super.foo() if it wanted to retain the same behavior.) But what if B had made its default final, and A is not under the control of the author of C? Now C is irretrievably broken; it can't compile without overriding foo(), but it can't override foo() if it was final in B.
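The fixed-up C then looks like this - the override is mandatory, and delegation picks A's behavior explicitly:

class C implements A, B {
    @Override
    public void foo() {
        A.super.foo(); // keep A's default despite B's competing one
    }
}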
This is just one example, but the point is that finality for methods is really a tool that makes more sense in the world of single-inheritance classes (generally which couple state to behavior), than to interfaces which merely contribute behavior and can be multiply inherited. It's too hard to reason about "what other interfaces might be mixed into the eventual implementor", and allowing an interface method to be final would likely cause these problems (and they would blow up not on the person who wrote the interface, but on the poor user who tries to implement it.)
Another reason to disallow them is that they wouldn't mean what you think they mean. A default implementation is only considered if the class (or its superclasses) don't provide a declaration (concrete or abstract) of the method. If a default method were final, but a superclass already implemented the method, the default would be ignored, which is probably not what the default author was expecting when declaring it final. (This inheritance behavior is a reflection of the design center for default methods -- interface evolution. It should be possible to add a default method (or a default implementation to an existing interface method) to existing interfaces that already have implementations, without changing the behavior of existing classes that implement the interface, guaranteeing that classes that already worked before default methods were added will work the same way in the presence of default methods.)
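A small sketch of that "class wins" rule (names invented):

interface Named {
    default String name() { return "default"; }
}

class Base {
    public String name() { return "base"; }
}

class Derived extends Base implements Named {
    // inherits Base.name(); the interface default is never consulted
}

Here new Derived().name() returns "base" - exactly the behavior existing classes relied on before the default was added.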
In the lambda mailing list there are plenty of discussions about it. One of those that seems to contain a lot of discussion about all that stuff is the following: On Varied interface method visibility (was Final defenders).
In this discussion, Talden, the author of the original question asks something very similar to your question:
The decision to make all interface members public was indeed an
unfortunate decision. That any use of interface in internal design
exposes implementation private details is a big one.
It's a tough one to fix without adding some obscure or compatibility-breaking
nuances to the language. A compatibility break of that magnitude and
potential subtlety would seem unconscionable, so a solution has to exist
that doesn't break existing code.
Could reintroducing the 'package' keyword as an access-specifier be
viable? Its absence in an interface would imply public access, and the
absence of a specifier in a class implies package access. Which
specifiers make sense in an interface is unclear - especially if, to
minimise the knowledge burden on developers, we have to ensure that
access-specifiers mean the same thing in both class and interface if
they're present.
In the absence of default methods I'd have speculated that the
specifier of a member in an interface has to be at least as visible as
the interface itself (so the interface can actually be implemented in
all visible contexts) - with default methods that's not so certain.
Has there been any clear communication as to whether this is even a
possible in-scope discussion? If not, should it be held elsewhere?
Eventually Brian Goetz's answer was:
Yes, this is already being explored.
However, let me set some realistic expectations -- language / VM
features have a long lead time, even trivial-seeming ones like this.
The time for proposing new language feature ideas for Java SE 8 has
pretty much passed.
So, most likely it was never implemented because it was never part of the scope. It was never proposed in time to be considered.
In another heated discussion about final defender methods on the subject, Brian said again:
And you have gotten exactly what you wished for. That's exactly what
this feature adds -- multiple inheritance of behavior. Of course we
understand that people will use them as traits. And we've worked hard
to ensure that the model of inheritance they offer is simple and
clean enough that people can get good results doing so in a broad
variety of situations. We have, at the same time, chosen not to push
them beyond the boundary of what works simply and cleanly, and that
leads to "aw, you didn't go far enough" reactions in some case. But
really, most of this thread seems to be grumbling that the glass is
merely 98% full. I'll take that 98% and get on with it!
So this reinforces my theory that it simply was not part of the scope or part of their design. What they did was to provide enough functionality to deal with the issues of API evolution.
It will be hard to find and identify "THE" answer, for the reasons mentioned in the comments by @EJP: There are roughly 2 (+/- 2) people in the world who can give the definitive answer at all. And in doubt, the answer might just be something like "Supporting final default methods did not seem to be worth the effort of restructuring the internal call resolution mechanisms". This is speculation, of course, but it is at least backed by subtle evidence, such as this statement (by one of the two persons) on the OpenJDK mailing list:
"I suppose if "final default" methods were allowed, they might need rewriting from internal invokespecial to user-visible invokeinterface."
and trivial facts like that a method is simply not considered to be a (really) final method when it is a default method, as currently implemented in the Method::is_final_method method in the OpenJDK.
Further really "authoritative" information is indeed hard to find, even with extensive web searches and by reading commit logs. I thought that it might be related to potential ambiguities during the resolution of interface method calls with the invokeinterface instruction and class method calls, corresponding to the invokevirtual instruction: For the invokevirtual instruction, there may be a simple vtable lookup, because the method must either be inherited from a superclass or implemented by the class directly. In contrast to that, an invokeinterface call must examine the respective call site to find out which interface this call actually refers to (this is explained in more detail in the InterfaceCalls page of the HotSpot Wiki). However, final methods either do not get inserted into the vtable at all, or replace existing entries in the vtable (see klassVtable.cpp, line 333), and similarly, default methods replace existing entries in the vtable (see klassVtable.cpp, line 202). So the actual reason (and thus, the answer) must be hidden deeper inside the (rather complex) method call resolution mechanisms, but maybe these references will nevertheless be considered helpful, be it only for others who manage to derive the actual answer from them.
I wouldn't think it is necessary to specify final on a convenience interface method. I can agree, though, that it may be helpful, but seemingly the costs have outweighed the benefits.
What you are supposed to do, either way, is to write proper javadoc for the default method, showing exactly what the method is and is not allowed to do. In that way the classes implementing the interface "are not allowed" to change the implementation, though there are no guarantees.
Anyone could write a Collection that adheres to the interface and then does things in the methods that are absolutely counter intuitive, there is no way to shield yourself from that, other than writing extensive unit tests.
We add the default keyword to a method inside an interface when we know that the classes implementing the interface may or may not override our implementation. But what if we want to add a method that we don't want any implementing class to override? Well, two options were available to us:
Add a default final method.
Add a static method.
Now, Java says that if we have a class implementing two or more interfaces such that they have a default method with exactly the same method name and signature, i.e. they are duplicates, then we need to provide an implementation of that method in our class. Now, in the case of default final methods, we couldn't provide an implementation and we would be stuck. And that's why the final keyword isn't allowed in interfaces.
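The conflict case in code (invented names): both interfaces supply the same default, so the class must override - which would be impossible if either default were final:

interface Left  { default String greet() { return "left"; } }
interface Right { default String greet() { return "right"; } }

class Both implements Left, Right {
    @Override
    public String greet() {
        return Left.super.greet(); // mandatory override; picks one side explicitly
    }
}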

Reasoning behind not using non-implemented Interfaces to hold constants?

In his book Effective Java, Joshua Bloch recommends against using Interfaces to hold constants,
The constant interface pattern is a poor use of interfaces. That a class uses some constants internally is an implementation detail. Implementing a constant interface causes this implementation detail to leak into the class’s exported API. It is of no consequence to the users of a class that the class implements a constant interface. In fact, it may even confuse them. Worse, it represents a commitment: if in a future release the class is modified so that it no longer needs to use the constants, it still must implement the interface to ensure binary compatibility. If a nonfinal class implements a constant interface, all of its subclasses will have their namespaces polluted by the constants in the interface.
His reasoning makes sense to me and it seems to be the prevailing logic whenever the question is brought up but it overlooks storing constants in interfaces and then NOT implementing them.
For instance,
public interface SomeInterface {
    public static final String FOO = "example";
}

public class SomeOtherClass {
    // notice that this class does not implement anything
    public void foo() {
        thisIsJustAnExample("Designed to be short", SomeInterface.FOO);
    }
}
I work with someone who uses this method all the time. I tend to use class with private constructors to hold my constants, but I've started using interfaces in this manner to keep our code a consistent style. Are there any reasons to not use interfaces in the way I've outlined above?
Essentially it's a shorthand that saves you from having to give a class a private constructor, since an interface cannot be instantiated.
I guess it does the job, but as a friend once said: "You can try mopping a floor with an octopus; it might get the job done, but it's not the right tool".
Interfaces exist to specify contracts, which are then implemented by classes. When I see an interface, I assume that there are some classes out there that implement it. So I'd lean towards saying that this is an example of abusing interfaces rather than using them, simply because I don't think that's the way interfaces were meant to be used.
I guess I don't understand why these values are public in the first place if they're simply going to be used privately in a class. Why not just move them into the class? Now if these values are going to be used by a bunch of classes, then why not create an enum? Another pattern that I've seen is a class that just holds public constants. This is similar to the pattern you've described. However, the class can be made final so that it cannot be extended; there is nothing that stops a developer from implementing your interface. In these situations, I just tend to use enum.
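For instance, a rough sketch of the enum alternative for a group of related constants (the values here are made up):

enum Environment {
    DEV("http://localhost:8080"),
    PROD("https://example.com");

    private final String baseUrl;

    Environment(String baseUrl) { this.baseUrl = baseUrl; }

    public String baseUrl() { return baseUrl; }
}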
UPDATE
This was going to be a response to a comment, but then it got long. Creating an interface to hold just one value is even more wasteful! :) You should use a private constant for that. While putting unrelated values into a single enum is bad, you could group them into separate enums, or simply use private constants for the class.
Also, if it appears that all these classes are sharing these unrelated constants (but which make sense in the context of the class), why not create an abstract class where you define these constants as protected? All you have to do then is extend this class and your derived classes will have access to the constants.
I don't think a class with a private constructor is any better than using an interface.
What the quote says is that using implements ConstantInterface is not best practice because this interface becomes part of the API.
However, you can use static imports or qualified names like SomeInterface.FOO to refer to the values from the interface instead, which avoids this issue.
Constants are a bad thing anyway. Stuffing a bunch of strings into a single location is a sign that your application has design problems from the get-go. It's not object-oriented and (especially for string constants) can lead to the development of fragile APIs.
If a class needs some static values then they should be local to that class. If more classes need access to those values, they should be promoted to an enumeration and modeled as such. If you really insist on having a class full of constants, then you create a final class with a private no-args constructor. With this approach you can at least ensure that the buck stops there: no instantiations are allowed, and you can only access state in a static manner.
This particular anti-pattern has one serious problem: there is no mechanism to stop someone from using your class that implements this rogue constants interface. It's really about addressing a limitation of Java that allows you to do nonsensical things.
The net result is that it reduces the meaningfulness of the application's design, because the grasp of the principles of the language isn't there. When I inherit code with constant interfaces, I immediately second-guess everything, because who knows what other interesting hacks I'll find.
Creating a separate class for constants seems silly. It's more work than making an enum, and the only reason to do it would be to keep unrelated constants all in one place just because presumably they all happen to be referenced by the same chunks of code. Hopefully your Bad Smell alarm goes off when you think about slapping a bunch of unrelated stuff together and calling it a class.
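That idiom, as a minimal sketch:

public final class AppConstants { // final: cannot be extended
    private AppConstants() {
        throw new AssertionError("no instances"); // guards against instantiation even from within
    }

    public static final String FOO = "example";
}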
As for interfaces, as long as you're not implementing the interface it's not the end of the world (and the JDK has a number of classes implementing SwingConstants for example), but there may be better ways depending on what exactly you're doing.
You can use enums to group related constants together, and even add methods to them
you can use Resource Bundles for UI text
use a Map<String,String> passed through Collections.unmodifiableMap for more general needs
you could also read constants from a file using java.util.Properties and wrap or subclass it to prevent changes
Also, with static imports there's no reason for lazy people to implement an interface to get its constants when you can be lazy by doing import static SomeInterface.*; instead.
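For completeness, here is a sketch of the unmodifiable-map option from the list above (keys and values are invented):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

final class Messages {
    static final Map<String, String> TEXTS;

    static {
        Map<String, String> m = new HashMap<>();
        m.put("greeting", "Hello");
        m.put("farewell", "Goodbye");
        TEXTS = Collections.unmodifiableMap(m); // read-only view; writes throw
    }
}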

In Java, is there any disadvantage to static methods on a class?

Let's assume that a rule (or rule of thumb, anyway) has been imposed in my coding environment that any method on a class that doesn't use, modify, or otherwise need any instance variables to do its work be made static. Is there any inherent compile-time, runtime, or any other disadvantage to doing this?
(edited for further clarifications)
I know the question was somewhat open-ended and vague, so I apologize for that. My intent in asking was in the context of mostly "helper" methods. We already use utility classes (with private constructors so they can't be instantiated) as holders for static methods. My question here was more in line with those little methods that HELP OUT the main class API.
I might have 4 or 5 main API/instance methods on a class that do the real work, but in the course of doing so they share some common functionality that might only be working on the input parameters to the API method, and not internal state. THESE are the code sections I typically pull out into their own helper methods, and if they don't need to access the class' state, make them static.
My question was thus, is this inherently a bad idea, and if so, why? (Or why not?)
In my opinion, there are four reasons to avoid static methods in Java. This is not to say that static methods are never applicable, only to say that they should generally be avoided.
As others have pointed out, static methods cannot be mocked out in a unit test. If a class is depending on, say, DatabaseUtils.createConnection(), then that dependent class, and any classes that depend on it, will be almost impossible to test without actually having a database or some sort of "testing" flag in DatabaseUtils. In the latter case, it sounds like you actually have two implementations of a DatabaseConnectionProvider interface -- see the next point.
If you have a static method, its behavior applies to all classes, everywhere. The only way to alter its behavior conditionally is to pass in a flag as a parameter to the method or set a static flag somewhere. The problem with the first approach is that it changes the signature for every caller, and quickly becomes cumbersome as more and more flags are added. The problem with the second approach is that you end up with code like this all over the place:
boolean oldFlag = MyUtils.getFlag();
MyUtils.someMethod();
MyUtils.setFlag( oldFlag );
One example of a common library that has run into this problem is Apache Commons Lang: see StringUtilsBean and so forth.
Classes are loaded once per ClassLoader, which means that you could actually have multiple copies of your static methods and static variables around unwittingly, which can cause problems. This usually doesn't matter as much with instance methods, because the objects are ephemeral.
If you have static methods that reference static variables, those stay around for the life of the classloader and never get garbage collected. If these accumulate information (e.g. caches) and you are not careful, you can run into "memory leaks" in your application. If you use instance methods instead, the objects tend to be shorter-lived and so are garbage-collected after a while. Of course, you can still get into memory leaks with instance methods too! But it's less of a problem.
Hope that helps!
The main disadvantage is that you cannot swap, override or choose method implementations at runtime.
The performance advantage is likely negligible. Use static methods for anything that's not state dependent. This clarifies the code, as you can immediately see with a static method call that there's no instance state involved.
Disadvantage -> Static
Members are part of the class and thus remain in memory until the application terminates; they can never be garbage collected. Using an excess of static members sometimes indicates that you have failed to design your product properly and are trying to cope by falling back on static/procedural programming. It denotes that the object-oriented design is compromised. This can result in memory overflow.
I really like this question, as this has been a point I have been debating for the last 4 years in my professional life. Static methods make a lot of sense for classes which are not carrying any state. But lately I have revised my thoughts somewhat.
Utility classes having static methods is a good idea.
Service classes carrying business logic can be stateless in many cases. Initially I always added static methods to them, but then when I gained more familiarity with the Spring framework (and some more general reading), I realized these methods become untestable as an independent unit, as you cannot inject mock services easily into such a class. E.g. with a static method calling another static method in another class, there is no way a JUnit test can short-circuit this path by injecting a dummy implementation at runtime.
So I kind of settled on the thought that utility static methods which do not need to call other classes or methods can pretty much stay static. But service classes in general should be non-static. This allows you to leverage OOP features like overriding.
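A sketch of that non-static, injectable style (all names invented): the collaborator arrives through the constructor, so a test can hand in a mock where a static call could not be short-circuited:

interface MailGateway {
    void send(String to, String body);
}

class WelcomeService {
    private final MailGateway gateway; // injected dependency

    WelcomeService(MailGateway gateway) {
        this.gateway = gateway;
    }

    void welcome(String user) {
        gateway.send(user, "Welcome!"); // replaceable in tests via a mock MailGateway
    }
}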
Also, having a singleton instance class lets us make a class behave pretty much like a static class while still using OOP concepts.
It's all a question of context. Some people have already given examples where static is absolutely preferable, such as when writing utility functions with no conceivable state. For example, if you are writing a collection of different sort algorithms to be used on arrays, making your method anything but static just confuses the situation. Any programmer reading your code would have to ask, why did you NOT make it static, and would have to look to see if you are doing something stateful to the object.
public class Sorting {
    public static void quicksort(int[] array) { }
    public static void heapsort(int[] array) { }
}
Having said that, there are many people who write code of some kind, and insist that they have some special one-off code, only to find later that it isn't so. For example, you want to calculate statistics on a variable. So you write:
public class Stats {
    public static void printStats(float[] data) { }
}
The first element of bad design here is that the programmer intends to just print out the results, rather than generically use them. Embedding I/O in computation is terrible for reuse. However, the next problem is that this general purpose routine should be computing max, min, mean, variance, etc. and storing it somewhere. Where? In the state of an object. If it were really a one-off, you could make it static, but of course, you are going to find that you want to compute the mean of two different things, and then it's awfully nice if you can just instantiate the object multiple times.
public class Stats {
    private double min, max, mean, var;

    public void compute(float[] data) { ... }

    public double getMin() { return min; }
    public double getMax() { return max; }
    // ... and so on
}
The knee-jerk reaction against static is often the reaction of programmers to the stupidity of doing this sort of thing statically, since it's easier to just say "never do that" than to actually explain which cases are OK and which are stupid.
Note that in this case, I am actually using the object as a kind of special-purpose pass by reference, because Java is so obnoxious in that regard. In C++, this sort of thing could have been a function, with whatever state passed as references. But even in C++, the same rules apply, it's just that Java forces us to use objects more because of the lack of pass by reference.
As far as performance goes, the biggest performance increase from switching away from a regular method is actually avoiding the dynamic polymorphic check, which is the default in Java and which in C++ is specified manually with virtual.
When I last tried, there was a 3:1 advantage to calling a final method over a regular method, but no discernible difference for calling static functions over final ones.
Note that if you call one method from another, the JIT is often smart enough to inline the code, in which case there is no call at all, which is why making any statement about exactly how much you save is extremely dangerous. All you can say is that when the compiler has to call a function, it can't hurt if it can call one like static or final which requires less computation.
The main problem you may face is that you won't be able to provide a new implementation if needed.
If you still have doubts (whether your implementation may change in the future or not), you can always use a private instance underneath with the actual implementation:
class StringUtil {
    private static StringUtil impl = new DefaultStringUtil();

    public static String nullOrValue( String s ) {
        return impl.doNullOrValue( s );
    }

    // ... rest omitted
}
If for "some" reason, you need to change the implementation class you may offer:
class StringUtil {
    private static StringUtil impl = new ExoticStringUtil();

    public static String nullOrValue( String s ) {
        return impl.doNullOrValue( s );
    }

    // ... rest omitted
}
But may be excessive in some circumstances.
No, actually the reason for that advice is that it provides a performance advantage. Static methods can be called with less overhead so any method that doesn't need a reference to this ought to be made static.
No, there are no disadvantages; rather, when you are not accessing any instance members in the method, there is no point in having it as an instance method. It is good programming practice to make it a static method.
And adding to that, you don't have to create any instances to access these methods, thus saving memory and garbage-collection time.
In order to call the static methods you don't need to create class objects. The method is available immediately.
Assuming the class is already loaded. Otherwise there's a bit of a wait. :-)
I think of static as a good way to separate the functional code from procedural/state-setting code. The functional code typically needs no extension and changes only when there are bugs.
There's also the use of static as an access-control mechanism--such as with singletons.
One disadvantage is if your static methods are general but scattered across different classes as far as usage is concerned. You might consider putting all general static methods in a utility class.
There shouldn't be any disadvantages--there may even be a slight advantage in performance (although it wouldn't be measurable) since the dynamic lookup can be avoided.
It's nice to tag functions as functions instead of having them look like methods (and static "methods" ARE functions, not methods--that's actually by definition).
In general a static method is a bad OO code smell--it probably means that your OO model isn't fully integrated. This happens all the time with libraries that can't know about the code that will be using them, but in integrated non-library code static methods should be examined to evaluate which of their parameters they are most closely associated with--there is a good chance the method should be a member of that class.
If a static method just takes native values, then you're probably missing a handful of classes; you should also keep passing native variables or library objects (like collections) to a minimum--instead, contain them in classes with business logic.
I guess what I'm saying is that if this is really an issue, you might want to re-examine your modeling practices--statics should be so rare that this isn't even an issue.
As others have said, it provides a slight performance advantage and is good programming practice. The only exception is when the method needs to be an instance method for overriding purposes, but those are usually easily recognised. For example, if a class provides default behaviour for an instance method that happens not to need instance variables, that clearly can't be made static.
In general:
You should be writing your software to take advantage of interfaces and not implementations. Who's to say that you won't use some instance variable now, but will in the future? An example of coding to interfaces...
ArrayList badList = new ArrayList(); //bad
List goodList = new ArrayList(); //good
You should be allowed to swap implementations, especially for mocking & testing. Spring dependency injection is pretty nice in this respect. Just inject the implementation from Spring and bingo you have pretty much a "static" (well, singleton) method...
Now, those types of APIs that are purely "utility" in purpose (i.e., Apache Commons Lang) are the exception here, because I believe that most (if not all) of the implementations are static. In this situation, what are the odds that you will ever want to swap Apache Commons out for another API?
Specifically:
How would you elegantly handle the "staticness" of your implementation when you're targeting, say, a Websphere vs. Tomcat deployment? I'm sure there would be an instance (no pun intended) of when your implementation would differ between the two...and relying on a static method in one of those specific implementations might be dangerous...

Why java.lang.Object is not abstract? [duplicate]

Possible Duplicate:
Java: Rationale of the Object class not being declared abstract
Why is the Object class, which is the base class of them all in Java, not abstract?
I've had this question for a really really long time and it is asked here purely out of curiosity, that's all. Nothing in my code or anybody's code is breaking because it is not abstract, but I was wondering why they made it concrete?
Why would anyone want an "instance" (as opposed to its presence, a.k.a. a reference) of this Object class? One case is poor synchronization code which uses an instance of Object for locking (at least I used it this way once... my bad).
Is there any practical use of an "instance" of an Object class? And how does its instantiation fit in OOP? What would have happened if they had marked it abstract (of course after providing implementations to its methods)?
Without the designers of java.lang.Object telling us, we have to base our answers on opinion. There's a few questions which can be asked which may help clear it up.
Would any of the methods of Object benefit from being abstract?
It could be argued that some of the methods would benefit from this. Take hashCode() and equals(), for instance: there would probably have been a lot less frustration around the complexities of these two if they had both been made abstract. This would require developers to figure out how they should be implementing them, making it more obvious that they should be consistent (see Effective Java). However, I'm more of the opinion that hashCode(), equals() and clone() belong on separate, opt-in abstractions (i.e. interfaces). The other methods, wait(), notify(), finalize(), etc. are sufficiently complicated and/or are native, so it's best they're already implemented, and they would not benefit from being abstract.
So I'd guess the answer would be no, none of the methods of Object would benefit from being abstract.
Would it be a benefit to mark the Object class as abstract?
Assuming all the methods are implemented, the only effect of marking Object abstract is that it cannot be constructed (i.e. new Object() is a compile error). Would this have a benefit? I'm of the opinion that the term "object" is itself abstract (can you find anything around you which can be totally described as "an object"?), so it would fit with the object-oriented paradigm. It is however, on the purist side. It could be argued that forcing developers to pick a name for any concrete subclass, even empty ones, will result in code which better expresses their intent. I think, to be totally correct in terms of the paradigm, Object should be marked abstract, but when it comes down to it, there's no real benefit, it's a matter of design preference (pragmatism vs. purity).
Is the practice of using a plain Object for synchronisation a good enough reason for it to be concrete?
Many of the other answers talk about constructing a plain Object to use in a synchronized block. While this may have been a common and accepted practice, I don't believe it would be a good enough reason to prevent Object being abstract if the designers wanted it to be. Other answers have mentioned how we would have to declare a single, empty subclass of Object any time we wanted to synchronise on a certain object, but this doesn't stand up - an empty subclass could have been provided in the SDK (java.lang.Lock or whatever), which could be constructed any time we wanted to synchronise. Doing this would have the added benefit of creating a stronger statement of intent.
Are there any other factors which could have been adversely affected by making Object abstract?
There are several areas, separate from a pure design standpoint, which may have influenced the choice. Unfortunately, I do not know enough about them to expand on them. However, it would not surprise me if any of these had an impact on the decision:
Performance
Security
Simplicity of implementation of the JVM
Could there be other reasons?
It's been mentioned that it may be in relation to reflection. However, reflection was introduced after Object was designed. So whether it affects reflection or not is moot - it's not the reason. The same for generics.
There's also the unforgettable point that java.lang.Object was designed by humans: they may have made a mistake, they may not have considered the question. There is no language without flaws, and this may be one of them, but if it is, it's hardly a big one. And I think I can safely say, without lack of ambition, that I'm very unlikely to be involved in designing a key part of such a widely used technology, especially one that's lasted 15(?) years and still going strong, so this shouldn't be considered a criticism.
Having said that, I would have made it abstract ;-p
Summary
Basically, as far as I see it, the answer to both questions "Why is java.lang.Object concrete?" or (if it were so) "Why is java.lang.Object abstract?" is... "Why not?".
Plain instances of java.lang.Object are typically used in locking/synchronization scenarios, and that's accepted practice.
Also - what would be the reason for it to be abstract? Because it's not fully functional in its own right as an instance? Could it really do with some abstract members? Don't think so. So the argument for making it abstract in the first place is non-existent. So it isn't.
Take the classic hierarchy of animals, where you have an abstract class Animal. The reasoning for making the Animal class abstract is that an instance of Animal is effectively an 'invalid' (for lack of a better word) animal, even if all its methods provide a base implementation. With Object, that is simply not the case. There is no overwhelming case for making it abstract in the first place.
From everything I've read, it seems that Object does not need to be concrete, and in fact should have been abstract.
Not only is there no need for it to be concrete, but after some more reading I am convinced that Object not being abstract is in conflict with the basic inheritance model - we should not be allowing abstract subclasses of a concrete class, since subclasses should only add functionality.
Clearly this is not the case in Java, where we have abstract subclasses of Object.
I can think of several cases where instances of Object are useful:
Locking and synchronization, like you and other commenters mention. It is probably a code smell, but I have seen Object instances used this way all the time.
As Null Objects, because equals will always return false, except on the instance itself.
In test code, especially when testing collection classes. Sometimes it's easiest to fill a collection or array with dummy objects rather than nulls.
As the base instance for anonymous classes. For example:
Object o = new Object() { /* ...code here... */ };
I think it probably should have been declared abstract, but once it is done and released it is very hard to undo without causing a lot of pain - see Java Language Spec 13.4.1:
"If a class that was not abstract is changed to be declared abstract, then preexisting binaries that attempt to create new instances of that class will throw either an InstantiationError at link time, or (if a reflective method is used) an InstantiationException at run time; such a change is therefore not recommended for widely distributed classes."
From time to time you need a plain Object that has no state of its own. Although such objects seem useless at first sight, they still have utility since each one has a different identity. This is useful in several scenarios, the most important of which is locking: you want to coordinate two threads, and in Java you do that by using an object as a lock. The object need not have any state; its mere existence is enough for it to become a lock:
class MyThread extends Thread {
    private Object lock;

    public MyThread(Object l) { lock = l; }

    public void run() {
        doSomething();
        synchronized(lock) {
            doSomethingElse();
        }
    }
}

Object lock = new Object();
new MyThread(lock).start();
new MyThread(lock).start();
In this example we used a lock to prevent the two threads from concurrently executing doSomethingElse().
If Object were abstract and we needed a lock, we'd have to subclass it without adding any methods or fields just so that we could instantiate a lock.
Come to think of it, here's a dual question to yours: suppose Object were abstract - would it define any abstract methods? I guess the answer is no. In such circumstances there is not much value in defining the class as abstract.
I don't understand why most seem to believe that making a fully functional class, which implements all of its methods in a useful way, abstract would be a good idea.
I would rather ask: why make it abstract? Does it do something it shouldn't? Is it missing some functionality it should have? Both those questions can be answered with no; it is a fully working class on its own, and making it abstract just leads to people implementing empty classes.
public class UseableObject extends AbstractObject{}
UseableObject inherits from the abstract Object and, surprise, it can be instantiated. It does not add any functionality, and its only reason to exist is to allow access to the methods exposed by Object.
Also, I have to disagree with the use in "poor" synchronisation. Using private Objects to synchronize access is safer than using synchronized(this), and safer as well as easier to use than the Lock classes from java.util.concurrent.
Seems to me there's a simple question of practicality here. Making a class abstract takes away the programmer's ability to do something, namely, to instantiate it. There is nothing you can do with an abstract class that you cannot do with a concrete class. (Well, you can declare abstract functions in it, but in this case we have no need to have abstract functions.) So by making it concrete, you make it more flexible.
Of course if there was some active harm that was done by making it concrete, that "flexibility" would be a drawback. But I can't think of any active harm done by making Object instantiable. (Is "instantiable" a word? Whatever.) We could debate whether any given use that someone has made of a raw Object instance is a good idea. But even if you could convince me that every use that I have ever seen of a raw Object instance was a bad idea, that still wouldn't prove that there might not be good uses out there. So if it doesn't hurt anything, and it might help, even if we can't think of a way that it would actually help at the moment, why prohibit it?
I think all of the answers so far forget what it was like with Java 1.0. In Java 1.0, you could not make an anonymous class, so if you just wanted an object for some purpose (synchronization or a null placeholder) you would have to go declare a class for that purpose, and then a whole bunch of code would have these extra classes for this purpose. Much more straight forward to just allow direct instantiation of Object.
Sure, if you were designing Java today you might say that everyone should do:
Object NULL_OBJECT = new Object(){};
But that was not an option in 1.0.
I suspect the designers did not know in which ways an Object might be used in the future, and therefore didn't want to limit programmers by forcing them to create an additional class where it wasn't necessary, e.g. for things like mutexes, keys etc.
It also means that it can be instantiated in an array. In the pre-1.5 days, this would allow you to have generic data structures. This could still be true on some platforms (I'm thinking J2ME, but I'm not sure)
Reasons why Object needs to be concrete:
reflection - see Object.getClass()
generic use (pre Java 5)
comparison/output - see Object.toString(), Object.equals(), Object.hashCode(), etc.
synchronization - see Object.wait(), Object.notify(), etc.
Even though a couple of these areas have been replaced or deprecated, there was still a need for a concrete parent class to provide these features to every Java class.
The Object class is used in reflection so code can call methods on instances of indeterminate type, e.g. Object.class.getDeclaredMethods(). If Object were abstract, then code that wanted to participate would have to implement all abstract methods before client code could use reflection on it.
According to Sun, an abstract class is a class that is declared abstract - it may or may not include abstract methods. Abstract classes cannot be instantiated, but they can be subclassed. This also means you can't call instance methods or access instance fields of an abstract class without a concrete subclass.
Example of an abstract root class:
abstract public class AbstractBaseClass
{
    public Class clazz;

    public AbstractBaseClass(Class clazz)
    {
        super();
        this.clazz = clazz;
    }
}
A child of our AbstractBaseClass:
public class ReflectedClass extends AbstractBaseClass
{
    public ReflectedClass()
    {
        super(this);
    }

    public static void main(String[] args)
    {
        ReflectedClass me = new ReflectedClass();
    }
}
This will not compile because it's invalid to reference 'this' in a constructor unless it's to call another constructor in the same class. I can get it to compile if I change it to:
public ReflectedClass()
{
    super(ReflectedClass.class);
}
but that only works because ReflectedClass has a parent ("Object") which is 1) concrete and 2) has a field to store the type for its children.
A more typical example of reflection would be in a non-static member function:
public void foo()
{
    Class localClass = AbstractBaseClass.clazz;
}
This fails unless you change the field 'clazz' to be static. For the class field of Object this wouldn't work, because it is supposed to be instance-specific. It would make no sense for Object to have a static class field.
Now, I did try the following change and it works but is a bit misleading. It still requires the base class to be extended to work.
public void genericPrint(AbstractBaseClass c)
{
    Class localClass = c.clazz;
    System.out.println("Class is: " + localClass);
}

public static void main(String[] args)
{
    ReflectedClass me = new ReflectedClass();
    ReflectedClass meTwo = new ReflectedClass();
    me.genericPrint(meTwo);
}
Pre-Java 5 generic use (as with arrays) would have been impossible:
Object[] array = new Object[100];
array[0] = me;
array[1] = meTwo;
Instances need to be constructed to serve as placeholders until the actual objects are received.
I suspect the short answer is that the collection classes lost type information in the days before Java generics. If a collection is not generic, then it must return a concrete Object (and be downcast at runtime to whatever type it was previously).
Since making a concrete class into an abstract class would break binary compatibility (as noted upthread), the concrete Object class was kept. I would like to point out that in no case was it created for the sole purpose of synchronization; dummy classes work just as well.
The design flaw is not including generics from the beginning. A lot of design criticism is aimed at that decision and its consequences. [oh, and the array subtyping rule.]
It's not abstract because whenever we create a new class it extends the Object class; if Object were abstract, you would need to implement all the abstract methods of the Object class, which is overhead... The methods are already implemented in that class...
