Quoting from http://sites.google.com/site/gson/gson-design-document:
Why are most classes in Gson marked as
final?
While Gson provides a fairly
extensible architecture by providing
pluggable serializers and
deserializers, Gson classes were not
specifically designed to be
extensible. Providing non-final
classes would have allowed a user to
legitimately extend Gson classes, and
then expect that behavior to work in
all subsequent revisions. We chose to
limit such use-cases by marking
classes as final, and waiting until a
good use-case emerges to allow
extensibility. Marking a class final
also has a minor benefit of providing
additional optimization opportunities
to Java compiler and virtual machine.
Why is this the case? [If I were to guess: if the JVM knows a class is final, it does not need to maintain method override tables? Are there any other reasons?]
What is the performance benefit?
Does this apply to classes that are frequently instantiated (POJOs?), or perhaps to classes that only hold static methods (utility classes)?
Can methods declared as final also theoretically improve performance?
Are there any implications?
Thank you,
Maxim.
Virtual (overridden) methods are generally implemented via some sort of table (a vtable) that ultimately holds function pointers. Each method call has the overhead of having to go through such a pointer. When a class is marked final, none of its methods can be overridden, so the table lookup is no longer needed - thus it is faster.
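To make the difference concrete, here is a minimal sketch (the class names are made up purely for illustration; whether a given JVM actually exploits this is implementation-specific, as noted below):

// Non-final: a call through a Shape reference might reach Shape.area()
// or any override, so the JVM must allow for dynamic dispatch.
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override
    double area() { return Math.PI * r * r; }
}

// Final: a call through a Label reference can only ever reach this exact
// implementation, so the JIT may call it directly and potentially inline it.
final class Label {
    private final String text;
    Label(String text) { this.text = text; }
    int length() { return text.length(); }
}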
Some VMs (like HotSpot) may do things more intelligently and know when methods are/are not overridden and generate faster code as appropriate.
Here is some more specific info on HotSpot. And some general info too.
An old, apparently no longer available, but still largely relevant article on this from IBM developerWorks states:
The common perception is that
declaring classes or methods final
makes it easier for the compiler to
inline method calls, but this
perception is incorrect (or at the
very least, greatly overstated).
final classes and methods can be a
significant inconvenience when
programming -- they limit your options
for reusing existing code and
extending the functionality of
existing classes. While sometimes a
class is made final for a good reason,
such as to enforce immutability, the
benefits of using final should
outweigh the inconvenience.
Performance enhancement is almost
always a bad reason to compromise good
object-oriented design principles, and
when the performance enhancement is
small or nonexistent, this is a bad
trade-off indeed.
Also see this related answer on another question, as well as the equivalent question for .NET, discussed here. There is also the SO discussion "Are final methods inlined?", and on a question titled "What optimizations are going to be useless tomorrow", this one appears on the list.
Note also that there is an entangling of the effects of final classes vs. final methods. You may get some performance benefit (again, I don't have a good reference) for final methods for sure, as it could cue the JIT to do inlining it couldn't otherwise do (or not so simply). You get the same effect when you mark the class final, which means that all the methods are suddenly final as well. Note that the Sun/Oracle folks claim that HotSpot can usually do this with or without the final keyword. Are there any additional effects from having the class itself final?
For reference, links to the JLS on final methods and final classes.
Not knowing the implementation of every particular JVM, I would theoretically say that if a JVM knows that a pointer to an object is a pointer to a type that is final, it can make non-virtual calls to its member functions (i.e., direct rather than indirect, with no indirection through a function pointer), which may result in faster execution. This may in turn open up inlining possibilities.
Marking classes as final allows further optimizations to be applied during the JIT stage.
If you are calling a virtual method on a non-final class, you don't know whether the proper implementation is the one defined in that class, or some sub-class that you don't know about.
However, if you have a reference to a final class, you know the specific implementation that is required.
Consider:
class B extends A { ... }
class C extends B { ... }

B myInstance = null;
if (someCondition)
    myInstance = new B();
else
    myInstance = new C();

myInstance.toString();
In this case, the JIT can't know whether C's implementation of toString() or B's implementation of toString() will be called. However, if B is marked as final, no subclass such as C can exist, so it is impossible for any implementation other than B's to be the proper one.
No difference; that's just speculation. The only situation where it makes sense is for classes like String, etc., which the JVM treats differently.
Related
I read that declaring a method as final leads to performance enhancement. So, doesn't it make sense to declare methods that are not expected to be overridden as final? My question is specifically about the improvement in performance and any associated cons of such usage.
I read that declaring a method as final leads to performance enhancement.
That is incorrect for recent HotSpot JIT compilers. My understanding is that the JIT compiler looks at all currently loaded classes to determine whether there is any overriding for each method that it compiles. If none is found, then the JIT compiler treats the method as if it were final.¹
So, declaring methods final as an optimization does not make sense.
(This may not apply to all Java platforms; e.g. non-HotSpot platforms with a primitive JIT compiler.)
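If you want to see what a HotSpot JIT compiler actually decides at a given call site, a rough (and hedged) way is to run a small program with the HotSpot-specific diagnostic flags -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining. In the sketch below, compute() is not declared final, yet as long as no subclass overriding it has been loaded, the call site stays monomorphic and is a candidate for inlining:

// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
public class InlineDemo {
    int compute(int x) { return x * 31 + 7; }

    public static void main(String[] args) {
        InlineDemo demo = new InlineDemo();
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += demo.compute(i);   // hot call site; no override loaded, so inlinable
        }
        System.out.println(sum);
    }
}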
A better use of final on a method is when you want / need to forbid certain kinds of extension of your classes by subclassing. Whether / when to do this is a matter of opinion. I certainly wouldn't do this "as a matter of course".
¹ A HotSpot JIT compiler will even recompile previously compiled methods if dynamic loading introduces a new subclass that overrides a method that was previously not overridden.
So, doesn't it make sense to declare methods that are not expected to
be overridden as final?
I don't believe so. The final keyword is saying that this method can't be overridden (for security reasons for example), not that the original developer doesn't expect that you'd want to override it. That would be very presumptuous and definitely reduce the extensibility of APIs.
As mentioned in the other answer, I wouldn't expect it to make any difference to performance and if it did, you wouldn't notice!
One of the most useful features of Java 8 are the new default methods on interfaces. There are essentially two reasons (there may be others) why they have been introduced:
Providing actual default implementations. Example: Iterator.remove()
Allowing for JDK API evolution. Example: Iterable.forEach()
From an API designer's perspective, I would have liked to be able to use other modifiers on interface methods, e.g. final. This would be useful when adding convenience methods, preventing "accidental" overrides in implementing classes:
interface Sender {
    // Convenience method to send an empty message
    default final void send() {
        send(null);
    }

    // Implementations should only implement this method
    void send(String message);
}
The above is already common practice if Sender were a class:
abstract class Sender {
    // Convenience method to send an empty message
    final void send() {
        send(null);
    }

    // Implementations should only implement this method
    abstract void send(String message);
}
Now, default and final are obviously contradicting keywords, but the default keyword itself would not have been strictly required, so I'm assuming that this contradiction is deliberate, to reflect the subtle differences between "class methods with body" (just methods) and "interface methods with body" (default methods), i.e. differences which I have not yet understood.
At some point in time, support for modifiers like static and final on interface methods was not yet fully explored. Citing Brian Goetz:
The other part is how far we're going to go to support class-building
tools in interfaces, such as final methods, private methods, protected
methods, static methods, etc. The answer is: we don't know yet
Since that time in late 2011, obviously, support for static methods in interfaces was added. Clearly, this added a lot of value to the JDK libraries themselves, such as with Comparator.comparing().
Question:
What is the reason final (and also static final) never made it to Java 8 interfaces?
This question is, to some degree, related to What is the reason why “synchronized” is not allowed in Java 8 interface methods?
The key thing to understand about default methods is that the primary design goal is interface evolution, not "turn interfaces into (mediocre) traits". While there's some overlap between the two, and we tried to be accommodating to the latter where it didn't get in the way of the former, these questions are best understood when viewed in this light. (Note too that class methods are going to be different from interface methods, no matter what the intent, by virtue of the fact that interface methods can be multiply inherited.)
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation. And because the design center was interface evolution, it was a critical design goal that default methods be able to be added to interfaces after the fact in a source-compatible and binary-compatible manner.
The too-simple answer to "why not final default methods" is that the body would then not simply be the default implementation, it would be the only implementation. While that's a little too simple an answer, it gives us a clue that the question is already heading in a questionable direction.
Another reason why final interface methods are questionable is that they create impossible problems for implementors. For example, suppose you have:
interface A {
    default void foo() { ... }
}

interface B {
}

class C implements A, B {
}
Here, everything is good; C inherits foo() from A. Now supposing B is changed to have a foo method, with a default:
interface B {
    default void foo() { ... }
}
Now, when we go to recompile C, the compiler will tell us that it doesn't know what behavior to inherit for foo(), so C has to override it (and could choose to delegate to A.super.foo() if it wanted to retain the same behavior.) But what if B had made its default final, and A is not under the control of the author of C? Now C is irretrievably broken; it can't compile without overriding foo(), but it can't override foo() if it was final in B.
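For concreteness, here is a hedged sketch of what the author of C has to write once B gains its own default foo() (building on the A and B interfaces above):

class C implements A, B {
    @Override
    public void foo() {
        // The compiler forces C to disambiguate; delegating to A's default
        // keeps the original behavior. This is exactly the move that would be
        // impossible if B were allowed to declare its foo() final.
        A.super.foo();
    }
}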
This is just one example, but the point is that finality for methods is really a tool that makes more sense in the world of single-inheritance classes (generally which couple state to behavior), than to interfaces which merely contribute behavior and can be multiply inherited. It's too hard to reason about "what other interfaces might be mixed into the eventual implementor", and allowing an interface method to be final would likely cause these problems (and they would blow up not on the person who wrote the interface, but on the poor user who tries to implement it.)
Another reason to disallow them is that they wouldn't mean what you think they mean. A default implementation is only considered if the class (or its superclasses) don't provide a declaration (concrete or abstract) of the method. If a default method were final, but a superclass already implemented the method, the default would be ignored, which is probably not what the default author was expecting when declaring it final. (This inheritance behavior is a reflection of the design center for default methods -- interface evolution. It should be possible to add a default method (or a default implementation to an existing interface method) to existing interfaces that already have implementations, without changing the behavior of existing classes that implement the interface, guaranteeing that classes that already worked before default methods were added will work the same way in the presence of default methods.)
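A small sketch of that inheritance rule (the names are invented for illustration): a concrete method inherited from a superclass always wins over an interface default, so a final default would silently lose anyway.

interface Logger {
    default String prefix() { return "[default] "; }
}

class BaseLogger {
    public String prefix() { return "[base] "; }
}

class AppLogger extends BaseLogger implements Logger {
    // Inherits prefix() from BaseLogger; the interface default is ignored
    // because a superclass declaration takes precedence over a default method.
}

// new AppLogger().prefix() returns "[base] ", not "[default] "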
In the lambda mailing list there are plenty of discussions about it. One of those that seems to contain a lot of discussion about all that stuff is the following: On Varied interface method visibility (was Final defenders).
In this discussion, Talden, the author of the original question asks something very similar to your question:
The decision to make all interface members public was indeed an
unfortunate decision. That any use of interface in internal design
exposes implementation private details is a big one.
It's a tough one to fix without adding some obscure or compatibility
breaking nuances to the language. A compatibility break of that
magnitude and potential subtlety would seem unconscionable, so a
solution has to exist that doesn't break existing code.
Could reintroducing the 'package' keyword as an access-specifier be
viable? The absence of a specifier in an interface would imply
public-access and the absence of a specifier in a class implies
package-access. Which specifiers make sense in an interface is unclear
- especially if, to minimise the knowledge burden on developers, we have to ensure that access-specifiers mean the same thing in both
class and interface if they're present.
In the absence of default methods I'd have speculated that the
specifier of a member in an interface has to be at least as visible as
the interface itself (so the interface can actually be implemented in
all visible contexts) - with default methods that's not so certain.
Has there been any clear communication as to whether this is even a
possible in-scope discussion? If not, should it be held elsewhere.
Eventually Brian Goetz's answer was:
Yes, this is already being explored.
However, let me set some realistic expectations -- language / VM
features have a long lead time, even trivial-seeming ones like this.
The time for proposing new language feature ideas for Java SE 8 has
pretty much passed.
So, most likely it was never implemented because it was never part of the scope. It was never proposed in time to be considered.
In another heated discussion about final defender methods on the subject, Brian said again:
And you have gotten exactly what you wished for. That's exactly what
this feature adds -- multiple inheritance of behavior. Of course we
understand that people will use them as traits. And we've worked hard
to ensure that the model of inheritance they offer is simple and
clean enough that people can get good results doing so in a broad
variety of situations. We have, at the same time, chosen not to push
them beyond the boundary of what works simply and cleanly, and that
leads to "aw, you didn't go far enough" reactions in some case. But
really, most of this thread seems to be grumbling that the glass is
merely 98% full. I'll take that 98% and get on with it!
So this reinforces my theory that it simply was not part of the scope or part of their design. What they did was to provide enough functionality to deal with the issues of API evolution.
It will be hard to find and identify "THE" answer, for the reasons mentioned in the comments from #EJP: there are roughly 2 (+/- 2) people in the world who can give the definite answer at all. And in doubt, the answer might just be something like "supporting final default methods did not seem to be worth the effort of restructuring the internal call resolution mechanisms". This is speculation, of course, but it is at least backed by subtle evidence, like this statement (by one of the two persons) on the OpenJDK mailing list:
"I suppose if "final default" methods were allowed, they might need rewriting from internal invokespecial to user-visible invokeinterface."
and by trivial facts, like the fact that a method is simply not considered to be a (really) final method when it is a default method, as currently implemented in the Method::is_final_method method in the OpenJDK.
Further really "authorative" information is indeed hard to find, even with excessive websearches and by reading commit logs. I thought that it might be related to potential ambiguities during the resolution of interface method calls with the invokeinterface instruction and and class method calls, corresponding to the invokevirtual instruction: For the invokevirtual instruction, there may be a simple vtable lookup, because the method must either be inherited from a superclass, or implemented by the class directly. In contrast to that, an invokeinterface call must examine the respective call site to find out which interface this call actually refers to (this is explained in more detail in the InterfaceCalls page of the HotSpot Wiki). However, final methods do either not get inserted into the vtable at all, or replace existing entries in the vtable (see klassVtable.cpp. Line 333), and similarly, default methods are replacing existing entries in the vtable (see klassVtable.cpp, Line 202). So the actual reason (and thus, the answer) must be hidden deeper inside the (rather complex) method call resolution mechanisms, but maybe these references will nevertheless be considered as being helpful, be it only for others that manage to derive the actual answer from that.
I wouldn't think it is necessary to specify final on a convenience interface method. I can agree, though, that it may be helpful, but seemingly the costs have outweighed the benefits.
What you are supposed to do, either way, is to write proper javadoc for the default method, stating exactly what the method is and is not allowed to do. That way the classes implementing the interface "are not allowed" to change the implementation, though there are no guarantees.
Anyone could write a Collection that adheres to the interface and then does things in the methods that are absolutely counterintuitive; there is no way to shield yourself from that, other than writing extensive unit tests.
We add the default keyword to a method inside an interface when we know that a class implementing the interface may or may not override our implementation. But what if we want to add a method that we don't want any implementing class to override? Well, two options were available to us:
Add a default final method.
Add a static method.
Now, Java says that if we have a class implementing two or more interfaces such that they have default methods with exactly the same name and signature, i.e. they are duplicates, then we need to provide an implementation of that method in our class. Now, in the case of final default methods, we couldn't provide an implementation and we would be stuck. And that's why the final keyword isn't used in interfaces.
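The second option is already available and covers much of the same ground. As a hedged sketch (reusing the Sender example from earlier; sendEmpty is a made-up name), a static interface method is not inherited by implementing classes and therefore can never be overridden:

interface Sender {
    void send(String message);

    // Since Java 8, interfaces may declare static methods; they are not
    // inherited by implementers, so no class can override this behavior.
    static void sendEmpty(Sender sender) {
        sender.send(null);
    }
}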
I was asked this question in an interview recently:
Can you name any class in the Java API that is final but shouldn't be, or one that isn't final but should be?
I couldn't think of any. The question implies that I should know all the API classes like the back of my hand, which I personally wouldn't expect any Java developer to know.
If anyone knows any such classes, please provide examples.
java.awt.Dimension isn't final or immutable and should have been. Anything that returns a Dimension (e.g. a Window object) needs to make defensive copies to prevent callers from doing nasty things.
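A hedged sketch of the defensive copying this forces (getPreferredSize() is a real Swing method; the surrounding class is made up):

import java.awt.Dimension;
import javax.swing.JPanel;

class SizedPanel extends JPanel {
    private final Dimension preferred = new Dimension(640, 480);

    @Override
    public Dimension getPreferredSize() {
        // Return a copy; handing out the internal Dimension would let a
        // caller mutate our state via dim.width = ... or dim.setSize(...).
        return new Dimension(preferred);
    }
}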
The first examples that come to mind are some of the non-final Number subclasses, such as BigDecimal and BigInteger, which should probably have been final.
In particular, all of their methods can be overridden. That enables you to create a broken BigDecimal, for example:
public class BrokenBigDecimal extends BigDecimal {
    public BrokenBigDecimal(String val) {
        super(val);   // BigDecimal has no no-arg constructor
    }
    @Override
    public BigDecimal add(BigDecimal augend) {
        return BigDecimal.ZERO;
    }
}
That could create significant issues if you receive a BigDecimal from untrusted code, for example.
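A hedged sketch of the defensive measure this suggests (the class and method names are invented): check that the argument really is a plain BigDecimal, and copy it into one if not.

import java.math.BigDecimal;

final class Payments {
    static BigDecimal sanitize(BigDecimal amount) {
        // If the argument is a (possibly malicious) subclass, rebuild it as a
        // genuine BigDecimal before trusting its arithmetic.
        return amount.getClass() == BigDecimal.class
                ? amount
                : new BigDecimal(amount.toString());
    }
}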
To paraphrase Effective Java:
Design and document for inheritance or else prohibit it
Classes should be immutable unless there's a very good reason to make them mutable
In my opinion, your reply should have been that it is a matter of taste which classes should be final and which shouldn't.
There are good reasons to make Integer, Double and String all final.
There are good reasons to complain about this.
Then there are BitSet, BigInteger, etc., which could be made final.
There are a number of situations where classes are not final, but they also cannot be extended reasonably, so they probably should have been made final.
To pick on a particular class: BitSet. It is not final, yet you cannot extend it to add a bit shift operation. They might as well have made it final then, or allow us to add such functionality.
The Date class leaps out. It is a mutable simple value class (essentially a wrapper around a long), but a good heuristic is that simple value classes should be immutable. Note also its numerous deprecated methods: more evidence that the design was botched. The mutability of the Date is a source of bugs, requiring disciplined defensive copying.
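A hedged sketch of the defensive copying that Date's mutability forces (adapted from memory from the classic Period example in Effective Java):

import java.util.Date;

final class Period {
    private final Date start;
    private final Date end;

    Period(Date start, Date end) {
        // Copy first, then validate the copies, so a caller cannot mutate the
        // arguments between the check and the use, or the stored state later.
        this.start = new Date(start.getTime());
        this.end = new Date(end.getTime());
        if (this.start.after(this.end)) {
            throw new IllegalArgumentException(start + " after " + end);
        }
    }

    Date start() { return new Date(start.getTime()); }
    Date end() { return new Date(end.getTime()); }
}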
one that isn't and should be
Most final classes in Java are designed that way with security considerations in mind; overall there are relatively few final ones. For instance, java.lang.String is final for that very reason. So are many others.
Some classes with a private constructor are declared final (Math, StrictMath), but it doesn't matter in such a case.
Basically, unless there are security issues involved, I don't care whether the class is final; you can always use a non-public constructor with some factory, effectively limiting the ability to subclass. Usually that's my preferred way, as it allows package-private subclassing.
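A rough sketch of that non-public-constructor-plus-factory approach (the class name is made up):

public class Token {
    // Package-private constructor: classes in the same package can still
    // subclass Token, but code outside the package cannot.
    Token() { }

    public static Token create() {
        return new Token();
    }
}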
In short: I can't think of a final class that should not be; however, there are some that could potentially have been. For instance, had java.lang.Thread been final, it might not have needed to protect against a malicious clone().
I believe java.util.Arrays and java.util.Collections should be declared final.
Here is why:
They contain only static members and a private constructor.
The private constructor prevents those classes from being extended.
So, those classes cannot be extended, but this fact is not visible in their public interface. Declaring them final would expose it and clarify intent.
Additionally, java.lang.Math (another so-called utility class) has the same structure and it is also declared final.
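For reference, the structure being described looks roughly like this (a hedged sketch; the class and method names are invented, and Arrays and Collections follow this shape but, unlike Math, omit the final modifier):

public final class MathLikeUtility {
    private MathLikeUtility() {
        // No instances and, because the class is also final, no subclasses.
        throw new AssertionError();
    }

    public static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }
}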
Check the String class, which is final and probably should have been your answer in the interview.
Check the docs.
http://docs.oracle.com/javase/7/docs/api/java/lang/String.html
My question is pretty simple:
Does the compiler treat all the methods in a final class as being final themselves? Does adding the final keyword to methods in a final class have any effect?
I understood that final methods have a better chance of getting inlined and this is why I am asking.
Thanks in advance.
You're correct, all methods in a final class are implicitly final.
See here:
"Note that you can also declare an entire class final. A class that is
declared final cannot be subclassed. This is particularly useful, for
example, when creating an immutable class like the String class."
And here:
All methods in a final class are implicitly final.
This may also be of interest for you: Performance tips for the Java final keyword
Does the compiler treat all the methods in a final class as being final themselves?
In effect, yes it does. A method in a final class cannot be overridden. Adding (or removing) a final keyword to a method makes no difference to this rule.
Does adding the final keyword to methods in a final class has any effect?
In practice, it has minimal effect. It has no effect on the rules on overriding (see above), and no effect on inlining (see below).
It is possible to tell at runtime whether a method was declared with the final keyword ... using reflection to look at the method's flags. So it does have some effect, albeit an effect that is irrelevant to 99.99% of programs.
I understood that final methods have a better chance of getting inlined and this is why I am asking.
This understanding is incorrect. The JIT compiler in a modern JVM keeps track of which methods are not overridden in the classes loaded by an application. It uses this information, together with the static types, to determine whether a particular call requires virtual dispatching or not. If not, then inlining is possible, and will be applied depending on how large the method body is. In effect, the JIT compiler ignores the presence / absence of final, and uses a more accurate way to detect method calls where inlining is allowable.
(In fact, it is more complex than this. An application can dynamically load subclasses that cause the JIT compiler's method override analysis to become incorrect. If this happens, the JVM needs to invalidate any affected compiled methods and cause them to be recompiled.)
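A hedged sketch of that scenario (whether and when HotSpot actually inlines or deoptimizes here depends on the JVM and on warm-up; the class names are made up):

class Base {
    int value() { return 1; }
}

class ExtendedBase extends Base {
    @Override
    int value() { return 2; }
}

public class DeoptDemo {
    static long sum(Base b, int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += b.value();   // call site the JIT may devirtualize and inline
        }
        return s;
    }

    public static void main(String[] args) throws Exception {
        // While only Base has been loaded, value() is effectively final.
        System.out.println(sum(new Base(), 5_000_000));

        // ExtendedBase is loaded only here (classes load lazily, on first use);
        // at this point the "value() is never overridden" assumption no longer
        // holds, and the JVM must deoptimize/recompile the affected code.
        Base other = (Base) Class.forName("ExtendedBase")
                .getDeclaredConstructor().newInstance();
        System.out.println(sum(other, 5_000_000));
    }
}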
The bottom line is:
There is NO performance advantage in adding final to methods in final classes.
There might be a performance advantage in adding final to methods in non-final classes, but only if you are using an old Sun JVM, or some other Java / Java-like platform with a poor-quality JIT compiler.
If you care about performance, it is better to use an up-to-date / high performance Java platform with a decent JIT compiler than to pollute your code-base with final keywords that are liable to cause you problems in the future.
You wrote in a comment:
#RussellZahniser I have read differently in many places.
The internet is full of old information, much of which is out of date ... or was never correct in the first place.
Maybe the compiler treats them as final.
The following prints "false":
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

final class FinalClass {
    public void testMethod() {}
}

// e.g. in a main method declared with "throws Exception":
Method method = FinalClass.class.getDeclaredMethod("testMethod");
int m = method.getModifiers();
System.out.println(Modifier.isFinal(m));   // prints "false"
As the source of the ActivityThread class shows, it is a final class, and all the methods in this class are also final. By the definition of the final keyword in Java, this class cannot be subclassed, so why did the Android developers keep those methods final?
Maybe I didn't express the question clearly; let me fix it here.
ActivityThread is a final class, so it will not have any subclass, and no method will be overridden. Yet all the methods in this class are final. I want to know why they need these final keywords; they could remove them with no impact.
The Java Language Specification makes it clear that no method of a final class can be overridden. Thus, the final declarations on the methods appear to be redundant. (Maybe they are left over from a beta version of the Android API, when ActivityThread was perhaps not a final class?)
On the other hand, optimizers and obfuscators can sometimes do a little more with methods declared final. Although they ought to be smart enough to make the inference that a final class won't have any overridden methods, it can't hurt to give them the extra hint.
why android developers keep those methods final?
Actually, the Java engineers did that. It's a design decision: sometimes you want to prohibit subclassing one of your classes. Why? Because allowing it implies a strong responsibility, and it forces you to make very careful design decisions.
Let me reference part of Item 17 of Effective Java, Second Edition, by Joshua Bloch:
So what does it mean for a class to be designed and documented for inheritance?
First, the class must document precisely the effects of overriding any method. In other words, the class must document its self-use of overridable methods. For each public or protected method or constructor, the documentation must indicate which overridable methods the method or constructor invokes, in what sequence, and how the results of each invocation affect subsequent processing. (By overridable, we mean nonfinal and either public or protected.) More generally, a class must document any circumstances under which it might invoke an overridable method. For example, invocations might come from background threads or static initializers.
...
Design for inheritance involves more than just documenting patterns of self-use. To allow programmers to write efficient subclasses without undue pain, a class may have to provide hooks into its internal workings in the form of judiciously chosen protected methods or, in rare instances, protected fields.
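A hedged sketch of the kind of "hook" Bloch means (the canonical example in the book is AbstractList.removeRange; the class below is invented):

public abstract class EventProcessor {

    public final void process(String event) {
        beforeProcess(event);   // documented, overridable hook
        handle(event);
        afterProcess(event);    // documented, overridable hook
    }

    // Protected no-op hooks that subclasses may override; their role in
    // process() would be spelled out in the javadoc.
    protected void beforeProcess(String event) { }
    protected void afterProcess(String event) { }

    protected abstract void handle(String event);
}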