SemVer: can new functionality be considered breaking change? - java

Consider a library that defines a class A, with several methods (A1, A2, ...), at version 1.0.0 (semantic versioning)
Now imagine I add a new method to class A (method Ab). Is this a minor release, since it only adds functionality and shouldn't be a breaking change?
But if someone who's using the library has declared a class B that extends A, and B already defines a method Ab with the same signature as the new method, the code won't compile any more, because it now requires an override declaration (in Scala; in Java it breaks only if the signatures clash).
So, is this a breaking change?

First of all, adding a public method to a class is in general not a breaking change in semantic versioning. Removing a public method, however, is clearly a breaking change.
If you provide a Java library and you add a method to an interface, that is a breaking change (at least prior to Java 8 default methods), since others have to change or extend their code.
If you add a public method to a class, this is only a problem if the class is not final, so that others can extend it and override methods.
So the best approach, I think, is to declare the class final, so that the method-override problem can never occur; then you can skip the major update and only increase the minor version. Adding methods to interfaces is a breaking change, so there you should increase the major version.
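To make the question's clash concrete, here is a small sketch (the names follow the question; the String return type in the client class is made up to force the compile error, and in Scala the subclass would additionally need the override modifier):

// Library class A as published in 1.1.0; version 1.0.0 only had a1().
public class A {
    public void a1() { }
    public void ab() { }   // added in 1.1.0
}

// Client subclass, written against 1.0.0, where A had no ab() yet.
class B extends A {
    // Compiles fine against 1.0.0; against 1.1.0 javac rejects it:
    // "ab() in B cannot override ab() in A" (incompatible return types).
    public String ab() { return "client-specific"; }
}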

Effective Java: Safety of Forwarding Classes

Effective Java 3rd Edition, Item 18: Favor composition over inheritance describes an issue with using inheritance to add behavior to a class:
A related cause of fragility in subclasses is that their superclass can acquire new methods in subsequent releases. Suppose a program depends for its security on the fact that all elements inserted into some collection satisfy some predicate. This can be guaranteed by subclassing the collection and overriding each method capable of adding an element to ensure that the predicate is satisfied before adding the element. This works fine until a new method capable of inserting an element is added to the superclass in a subsequent release. Once this happens, it becomes possible to add an "illegal" element merely by invoking the new method, which is not overridden in the subclass.
The recommended solution:
Instead of extending an existing class, give your new class a private field that references an instance of the existing class... Each instance method in the new class invokes the corresponding method on the contained instance of the existing class and returns the results. This is known as forwarding, and the methods in the new class are known as forwarding methods... adding new methods to the existing class will have no impact on the new class... It's tedious to write forwarding methods, but you have to write the reusable forwarding class for each interface only once, and forwarding classes may be provided for you. For example, Guava provides forwarding classes for all of the collection interfaces.
My question is, doesn't the risk remain that methods could also be added to the forwarding class, thereby breaking the invariants of the subclass? How could an external library like Guava ever incorporate newer methods in forwarding classes without risking the integrity of its clients?
The tacit assumption seems to be that you are the one writing the forwarding class, therefore you are in control of whether anything gets added to it. That's the common way of using composition over inheritance, anyway.
The Guava example seems to refer to the Forwarding Decorators, which are explicitly designed to be inherited from. But they are just helpers to make it simpler to create these forwarding classes without having to define every method in the interface; they explicitly don't shield you from any methods being added in the future that you might need to override as well:
Remember, by default, all methods forward directly to the delegate, so overriding ForwardingMap.put will not change the behavior of ForwardingMap.putAll. Be careful to override every method whose behavior must be changed, and make sure that your decorated collection satisfies its contract.
So, if I understood all this correctly, Guava is not such a great example.
doesn't the risk remain that methods could also be added to the forwarding class, thereby breaking the invariants of the subclass?
Composition is an alternative to inheritance, so when you use composition, there is no sub-class. If you add new public methods to the forwarding class (which may access methods of the contained instance), that means you want these methods to be used.
Because you are the owner of the forwarding class, only you can add new methods to it, thus maintaining the invariant.
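For illustration, here is a minimal sketch of such a forwarding wrapper built on Guava's ForwardingSet (ValidatingSet and its predicate are made-up names; Guava must be on the classpath):

import java.util.Collection;
import java.util.Set;
import java.util.function.Predicate;
import com.google.common.collect.ForwardingSet;

// Every element added must satisfy the predicate. Because the wrapper owns
// the delegate instead of subclassing it, methods added later to the wrapped
// Set implementation are simply not exposed here and cannot bypass the check.
final class ValidatingSet<E> extends ForwardingSet<E> {
    private final Set<E> delegate;
    private final Predicate<? super E> allowed;

    ValidatingSet(Set<E> delegate, Predicate<? super E> allowed) {
        this.delegate = delegate;
        this.allowed = allowed;
    }

    @Override
    protected Set<E> delegate() {
        return delegate;
    }

    @Override
    public boolean add(E element) {
        check(element);
        return super.add(element);
    }

    // ForwardingSet.addAll forwards straight to the delegate, so it must be
    // overridden as well -- exactly the caveat quoted above.
    @Override
    public boolean addAll(Collection<? extends E> elements) {
        for (E element : elements) {
            check(element);
        }
        return super.addAll(elements);
    }

    private void check(E element) {
        if (!allowed.test(element)) {
            throw new IllegalArgumentException("rejected: " + element);
        }
    }
}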

Adding methods or not adding methods to interface?

In Java 8, we can have default implementations for methods in interfaces, in addition to declarations which need to be implemented in the concrete classes.
Is it good design or best practice to have default methods in an interface, or did Java 8 introduce them only to provide more support for evolving older APIs? Should we start using default methods in new Java 8 projects?
Please help me to understand what is good design here, in detail.
Prior to Java 8, you were looking towards versioned capabilities when talking about "reasonable" ways of extending interfaces:
You have something like:
interface Capability ...
interface AppleDealer {
    List<Apples> getApples();
}
and in order to retrieve an AppleDealer, there is some central service like
public <T> T getCapability (Class<T> type);
So your client code would be doing:
AppleDealer dealer = service.getCapability(AppleDealer.class);
When the need for another method comes up, you go:
interface AppleDealerV2 extends AppleDealer { ...
And clients that want V2 just do a getCapability(AppleDealerV2.class) call. Those that don't care don't have to modify their code!
Please note: of course, this only works for extending interfaces. You can't use this approach to change signatures or to remove methods from existing interfaces.
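Here is a minimal, self-contained sketch of that capability pattern (CapabilityService and its register method are made-up names; only getCapability appears in the fragments above):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class Apples { }

interface AppleDealer {
    List<Apples> getApples();
}

// V2 extends V1, so existing clients keep compiling; new clients ask for V2 explicitly.
interface AppleDealerV2 extends AppleDealer {
    List<Apples> getOrganicApples();
}

final class CapabilityService {
    private final Map<Class<?>, Object> capabilities = new HashMap<>();

    <T> void register(Class<T> type, T implementation) {
        capabilities.put(type, implementation);
    }

    public <T> T getCapability(Class<T> type) {
        return type.cast(capabilities.get(type));
    }
}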
Thus: being able to just add a new method to an interface, with a default implementation right there, without breaking any existing client code, is a huge step forward!
Meaning: why wouldn't you use default methods on existing interfaces? Existing code will not care. It doesn't know about the new defaulted methods.
Default methods in interfaces have some limitations: for example, you still cannot have instance fields in an interface. The main reason default methods were added is the following. Say in a previous version you wrote a class that implements an interface "A". In your next version you decide it would be a good idea to add a method to interface "A". But you cannot do so, since any class that implements "A" will now lack that extra method and thus will not compile. That would be a MAJOR backwards-compatibility breakdown. So in Java 8 you can add a default implementation of the method to the interface, and all classes that implemented the old version of "A" will not break but will fall back on the default implementation. So use this feature sparingly, only if you indeed need to expand an existing interface.
In earlier Java versions this wasn't possible: if you wanted to mix concrete and declared-only methods, you had to use an abstract class.
Java 8 introduces the "default method" (or defender method) feature, which allows developers to add new methods to interfaces without breaking existing implementations of those interfaces. It provides the flexibility to let an interface define an implementation which will be used as the default wherever a concrete class fails to provide an implementation for that method.
Let's consider a small example to understand how it works:
public interface oldInterface {
    public void existingMethod();

    default public void newDefaultMethod() {
        System.out.println("New default method"
            + " is added in interface");
    }
}
The following class will compile successfully in Java JDK 8
public class oldInterfaceImpl implements oldInterface {
    public void existingMethod() {
        // existing implementation is here…
    }
}
Why default methods?
Reengineering an existing JDK framework is always very complex. Modifying one interface in the JDK framework breaks all classes that extend it, which means that adding any new method could break millions of lines of code. Therefore, default methods were introduced as a mechanism for extending interfaces in a backward-compatible way.
NOTE:
However, even though we can achieve this backward compatibility, it is still recommended to use interfaces with declarations only; that is what they are best used for.
For a simple example: if you have an interface Human_behaviour, you can implement all the actions of this interface, such as to_Walk(), to_Eat(), to_Love() and to_Fight(), in a unique way in every implementing class and for every human object. For instance, one Human can fight using swords and another using guns, and so forth.
Thus interfaces are a blessing, but they should be used as the need dictates.
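A minimal sketch of that example (the Swordsman and Gunner classes are made up for illustration):

interface Human_behaviour {
    void to_Walk();
    void to_Fight();
}

class Swordsman implements Human_behaviour {
    public void to_Walk()  { System.out.println("Marches in formation"); }
    public void to_Fight() { System.out.println("Fights with a sword"); }
}

class Gunner implements Human_behaviour {
    public void to_Walk()  { System.out.println("Patrols the perimeter"); }
    public void to_Fight() { System.out.println("Fights with a gun"); }
}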

Why is "final" not allowed in Java 8 interface methods?

One of the most useful features of Java 8 is the new default methods on interfaces. There are essentially two reasons (there may be others) why they were introduced:
Providing actual default implementations. Example: Iterator.remove()
Allowing for JDK API evolution. Example: Iterable.forEach()
From an API designer's perspective, I would have liked to be able to use other modifiers on interface methods, e.g. final. This would be useful when adding convenience methods, preventing "accidental" overrides in implementing classes:
interface Sender {
    // Convenience method to send an empty message
    default final void send() {
        send(null);
    }

    // Implementations should only implement this method
    void send(String message);
}
The above is already common practice if Sender were a class:
abstract class Sender {
    // Convenience method to send an empty message
    final void send() {
        send(null);
    }

    // Implementations should only implement this method
    abstract void send(String message);
}
Now, default and final are obviously contradicting keywords, but the default keyword itself would not have been strictly required, so I'm assuming that this contradiction is deliberate, to reflect the subtle differences between "class methods with body" (just methods) and "interface methods with body" (default methods), i.e. differences which I have not yet understood.
At some point in time, support for modifiers like static and final on interface methods was not yet fully explored; citing Brian Goetz:
The other part is how far we're going to go to support class-building tools in interfaces, such as final methods, private methods, protected methods, static methods, etc. The answer is: we don't know yet.
Since that time in late 2011, obviously, support for static methods in interfaces was added. Clearly, this added a lot of value to the JDK libraries themselves, such as with Comparator.comparing().
Question:
What is the reason final (and also static final) never made it to Java 8 interfaces?
This question is, to some degree, related to What is the reason why “synchronized” is not allowed in Java 8 interface methods?
The key thing to understand about default methods is that the primary design goal is interface evolution, not "turn interfaces into (mediocre) traits". While there's some overlap between the two, and we tried to be accommodating to the latter where it didn't get in the way of the former, these questions are best understood when viewed in this light. (Note too that class methods are going to be different from interface methods, no matter what the intent, by virtue of the fact that interface methods can be multiply inherited.)
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation. And because the design center was interface evolution, it was a critical design goal that default methods be able to be added to interfaces after the fact in a source-compatible and binary-compatible manner.
The too-simple answer to "why not final default methods" is that then the body would not simply be the default implementation, it would be the only implementation. While that's a little too simple an answer, it gives us a clue that the question is already heading in a questionable direction.
Another reason why final interface methods are questionable is that they create impossible problems for implementors. For example, suppose you have:
interface A {
    default void foo() { ... }
}

interface B {
}

class C implements A, B {
}
Here, everything is good; C inherits foo() from A. Now supposing B is changed to have a foo method, with a default:
interface B {
    default void foo() { ... }
}
Now, when we go to recompile C, the compiler will tell us that it doesn't know what behavior to inherit for foo(), so C has to override it (and could choose to delegate to A.super.foo() if it wanted to retain the same behavior.) But what if B had made its default final, and A is not under the control of the author of C? Now C is irretrievably broken; it can't compile without overriding foo(), but it can't override foo() if it was final in B.
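For reference, this is what that override looks like once both A and B (from the example above) declare a non-final default foo():

class C implements A, B {
    @Override
    public void foo() {
        A.super.foo();   // explicitly keep A's default behavior
    }
}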
This is just one example, but the point is that finality for methods is really a tool that makes more sense in the world of single-inheritance classes (generally which couple state to behavior), than to interfaces which merely contribute behavior and can be multiply inherited. It's too hard to reason about "what other interfaces might be mixed into the eventual implementor", and allowing an interface method to be final would likely cause these problems (and they would blow up not on the person who wrote the interface, but on the poor user who tries to implement it.)
Another reason to disallow them is that they wouldn't mean what you think they mean. A default implementation is only considered if the class (or its superclasses) don't provide a declaration (concrete or abstract) of the method. If a default method were final, but a superclass already implemented the method, the default would be ignored, which is probably not what the default author was expecting when declaring it final. (This inheritance behavior is a reflection of the design center for default methods -- interface evolution. It should be possible to add a default method (or a default implementation to an existing interface method) to existing interfaces that already have implementations, without changing the behavior of existing classes that implement the interface, guaranteeing that classes that already worked before default methods were added will work the same way in the presence of default methods.)
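A small sketch of that resolution rule (the names are hypothetical):

interface Greeter {
    default String greet() { return "hello from the interface"; }
}

class BaseGreeter {
    public String greet() { return "hello from the superclass"; }
}

class MyGreeter extends BaseGreeter implements Greeter { }

// new MyGreeter().greet() returns "hello from the superclass":
// the method inherited from the class takes precedence, and the default is ignored.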
In the lambda mailing list there are plenty of discussions about it. One of those that seems to contain a lot of discussion about all that stuff is the following: On Varied interface method visibility (was Final defenders).
In this discussion, Talden, the author of the original question asks something very similar to your question:
The decision to make all interface members public was indeed an unfortunate decision. That any use of interface in internal design exposes implementation private details is a big one.
It's a tough one to fix without adding some obscure or compatibility-breaking nuances to the language. A compatibility break of that magnitude and potential subtlety would seem unconscionable, so a solution has to exist that doesn't break existing code.
Could reintroducing the 'package' keyword as an access-specifier be viable? The absence of a specifier in an interface would imply public access, and the absence of a specifier in a class implies package access. Which specifiers make sense in an interface is unclear - especially if, to minimise the knowledge burden on developers, we have to ensure that access-specifiers mean the same thing in both class and interface if they're present.
In the absence of default methods I'd have speculated that the specifier of a member in an interface has to be at least as visible as the interface itself (so the interface can actually be implemented in all visible contexts) - with default methods that's not so certain.
Has there been any clear communication as to whether this is even a possible in-scope discussion? If not, should it be held elsewhere?
Eventually Brian Goetz's answer was:
Yes, this is already being explored.
However, let me set some realistic expectations -- language / VM features have a long lead time, even trivial-seeming ones like this. The time for proposing new language feature ideas for Java SE 8 has pretty much passed.
So, most likely it was never implemented because it was never part of the scope. It was never proposed in time to be considered.
In another heated discussion about final defender methods on the subject, Brian said again:
And you have gotten exactly what you wished for. That's exactly what this feature adds -- multiple inheritance of behavior. Of course we understand that people will use them as traits. And we've worked hard to ensure that the model of inheritance they offer is simple and clean enough that people can get good results doing so in a broad variety of situations. We have, at the same time, chosen not to push them beyond the boundary of what works simply and cleanly, and that leads to "aw, you didn't go far enough" reactions in some case. But really, most of this thread seems to be grumbling that the glass is merely 98% full. I'll take that 98% and get on with it!
So this reinforces my theory that it simply was not part of the scope or part of their design. What they did was to provide enough functionality to deal with the issues of API evolution.
It will be hard to find and identify "THE" answer, for the reasons mentioned in the comments by EJP: there are roughly 2 (+/- 2) people in the world who can give the definite answer at all. And in doubt, the answer might just be something like "supporting final default methods did not seem to be worth the effort of restructuring the internal call resolution mechanisms". This is speculation, of course, but it is at least backed by subtle evidence, like this statement (by one of the two persons) on the OpenJDK mailing list:
"I suppose if "final default" methods were allowed, they might need rewriting from internal invokespecial to user-visible invokeinterface."
and trivial facts like that a method is simply not considered to be a (really) final method when it is a default method, as currently implemented in the Method::is_final_method method in the OpenJDK.
Further, really "authoritative" information is indeed hard to find, even with extensive web searches and by reading commit logs. I thought that it might be related to potential ambiguities during the resolution of interface method calls with the invokeinterface instruction versus class method calls, corresponding to the invokevirtual instruction: for the invokevirtual instruction, there may be a simple vtable lookup, because the method must either be inherited from a superclass or implemented by the class directly. In contrast to that, an invokeinterface call must examine the respective call site to find out which interface this call actually refers to (this is explained in more detail in the InterfaceCalls page of the HotSpot Wiki). However, final methods either do not get inserted into the vtable at all, or replace existing entries in the vtable (see klassVtable.cpp, line 333), and similarly, default methods replace existing entries in the vtable (see klassVtable.cpp, line 202). So the actual reason (and thus, the answer) must be hidden deeper inside the (rather complex) method call resolution mechanisms, but maybe these references will nevertheless be considered helpful, be it only for others who manage to derive the actual answer from them.
I wouldn't think it is necessary to specify final on a convenience interface method. I can agree, though, that it may be helpful, but seemingly the costs have outweighed the benefits.
What you are supposed to do, either way, is to write proper javadoc for the default method, showing exactly what the method is and is not allowed to do. In that way the classes implementing the interface "are not allowed" to change the implementation, though there are no guarantees.
Anyone could write a Collection that adheres to the interface and then does things in its methods that are absolutely counter-intuitive; there is no way to shield yourself from that, other than writing extensive unit tests.
We add the default keyword to a method inside an interface when we know that a class implementing the interface may or may not override our implementation. But what if we want to add a method that we don't want any implementing class to override? Well, two options were available to us:
Add a default final method.
Add a static method.
Now, Java says that if we have a class implementing two or more interfaces such that they have a default method with exactly the same method name and signature, i.e. they are duplicates, then we need to provide an implementation of that method in our class. Now, in the case of default final methods, we couldn't provide an implementation and we would be stuck. And that's why the final keyword isn't allowed in interfaces.
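For completeness, a sketch of the static-method alternative, reusing the Sender example from the question (sendEmpty is a made-up name):

interface Sender {
    void send(String message);

    // A static interface method is not inherited by implementing classes and
    // cannot be overridden, so the convenience behavior stays fixed.
    static void sendEmpty(Sender sender) {
        sender.send(null);
    }
}

The call site then becomes Sender.sendEmpty(mySender) instead of mySender.send().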

Do I need a rebuild for Java here?

I have a ClassA that is being used by many components and libraries in various areas of a project.
Now I need to add an extra member to this class but since it will not be needed/used by other areas it does not feel proper to extend the class.
If I add the member to ClassA instead of extending it, would I have any issues? Would everything need to be rebuilt?
Adding a new member preserves binary compatibility, see also Chapter 13. Binary Compatibility of the Java Language specification.
Obviously you need to rebuild the modified class, but not classes which are using the modified one.
Unless the existing contracts and interactions between ClassA and other classes BREAK, there should be no issue. But if you change the signature of a method that is used by other classes, you could get a runtime error when they try to locate the old version of the method, as it no longer exists.
If you change your ClassA, obviously a rebuild of it is necessary. To minimize the impact you can extend ClassA and use the subclass for your work. The other components and libraries will continue to use ClassA, while your code refers to the subclass which has the added member.
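A tiny sketch of that approach (ClassAExtended and its member are hypothetical names):

// Only your own code uses this subclass; other components keep using ClassA
// unchanged and therefore do not need to be rebuilt.
public class ClassAExtended extends ClassA {
    private String extraMember;

    public String getExtraMember() {
        return extraMember;
    }

    public void setExtraMember(String extraMember) {
        this.extraMember = extraMember;
    }
}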
Again, it depends on how you define your objects.

(Java) Creating methods for new object created via reflection?

I have abstract methods in a class that need to be implemented by a foreign class in a SEPARATE project that uses my project.
-- All classes instanceof A are initially generated using reflection --
So anyway, say Class A is abstract, and Class B (non-abstract) extends A
B implements all the abstract methods of class A; because B is in my workspace, I know to add those methods.
C also extends A, but C only has a subset of the abstract methods in A. C, however, is not in my workspace.
Therefore, for each abstract method of A that is NOT implemented in C, I need to find some way to add a stub implementation to C, like so:
(For each method)
public <corresponding return type> <missingMethodName>() { return null; }
Is this possible?
P.S. Please assume that I either have to completely rewrite my code to be in sync with the objects I have no control over, or implement a solution like the one I am alluding to above.
No, unless I'm reading you incorrectly, what you're asking for doesn't really make much sense.
If you wanted to inject a method
public <corresponding return type> <missingMethodName>() { super.<missingMethodName>(); }
into C, which extends A, which doesn't implement that method, what would it exactly do?
If you want to provide a default implementation in A, that's fine, and it won't affect C. If you add abstract methods to A, C must implement them or mark itself as abstract; otherwise it won't compile (or it will fail with some strange error at runtime if you run with a C that was compiled against an older A).
You should never need to do this as any instance method which has a super implementation can be called on a sub-class instance.
You can add these methods using bytecode manipulation, but the only difference they would make is to change the list returned by getDeclaredMethods(). It wouldn't change the behaviour of objects of the class.
It's quite difficult, but you can do it with Javassist.
Javassist (Java Programming Assistant) is a Java library providing a means to manipulate the Java bytecode of an application. In this sense, Javassist provides support for structural reflection, i.e. the ability to change the implementation of a class at run time.
Bytecode manipulation is performed at load-time through a provided class loader.
http://en.wikipedia.org/wiki/Javassist
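For what it's worth, here is a minimal sketch of adding a stub method with Javassist (the class name com.example.C and the method name missingMethod are hypothetical; Javassist must be on the classpath):

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.CtNewMethod;

public class AddStubMethod {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass ctClass = pool.get("com.example.C");

        // Compile and attach: public String missingMethod() { return null; }
        CtMethod stub = CtNewMethod.make(
            "public String missingMethod() { return null; }", ctClass);
        ctClass.addMethod(stub);

        // Load the modified class; this must happen before com.example.C
        // is loaded by the application class loader.
        Class<?> patched = ctClass.toClass();
        System.out.println(patched.getMethod("missingMethod"));
    }
}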
