Interface Segregation Principle and default methods in Java 8

As per the Interface Segregation Principle
clients should not be forced to implement the unwanted methods of an interface
...and so we should define interfaces to have logical separation.
But the default methods introduced in Java 8 now allow interfaces to carry method implementations. It seems Java 8 has made it possible to enhance an interface with methods not related to its core logic, backed by some default or empty implementation.
Does it not violate the ISP?

Good question. It definitely violates the Interface Segregation Principle, and I personally don't like the concept of default implementations because it spoils the elegance of interface design and muddies polymorphism a bit as well. Developers who aren't aware of the ISP will start designing fat interfaces and end up with everything packed into one interface, without thinking through the design logically.
This leads to code smells, and I'm sure those who don't know the underlying concepts will start writing bad code. I believe default implementations are an unwanted feature because they tempt people to write smelly code.

The ISP will be violated only if you intend to violate it. You can still segregate your interfaces so that each fulfills a single responsibility. The group of methods for a particular responsibility will most probably follow an 80-20 rule: you can provide default implementations for the 40-50% of methods within that 80% that are rarely used, and defaults are fine there precisely because those methods are seldom called. If an interface fulfills a single responsibility it will rarely grow too big, and it will usually stay within the ISP.

No.
In fact, default methods can be a good way of providing additional useful functionality while avoiding violating ISP.
An excerpt from my longer answer here:
There are examples in the standard library of good uses of default methods. One would be java.util.function.Function with its andThen(...) and compose(...) methods. ...
These default methods do not violate ISP, as classes that implement Function have no need to call or override them. There may be many use-cases where concrete instances of Function never have their andThen(...) method called, but that's fine – you don't break ISP by providing useful but non-essential functionality, as long as you don't encumber all those use-cases by forcing them to do something with it. In the case of Function, providing these methods as abstract rather than default would violate ISP, as all implementing classes would have to add their own implementations, even when they know it's unlikely to ever be called.
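To make that concrete, here is a minimal sketch of the Function example; the class and values are invented for illustration, but andThen(...) and compose(...) are the real default methods on java.util.function.Function:

    import java.util.function.Function;

    public class ComposeDemo {
        public static void main(String[] args) {
            // A class (or lambda) only needs to supply apply(); andThen() and
            // compose() are inherited default methods and never have to be overridden.
            Function<Integer, Integer> doubler = x -> x * 2;
            Function<Integer, Integer> addTen = x -> x + 10;

            // compose() applies its argument first: (5 + 10) * 2 = 30
            System.out.println(doubler.compose(addTen).apply(5));
            // andThen() applies its argument last: (5 * 2) + 10 = 20
            System.out.println(doubler.andThen(addTen).apply(5));
        }
    }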

The quote in the OP is not quite accurate. The actual statement of the ISP is,
Clients should not be forced to depend upon interfaces that they do not use.
The client is the consumer of the interface, i.e. the caller of its abstract methods. The client is not the implementer of the interface. If we are following the dependency inversion principle, the client should not even know the concrete implementation of the interface.
The reasoning behind the ISP is not to save developers from implementing additional abstract methods. It is to save the caller of those methods from unnecessary transitive dependencies. Additional interface methods risk pulling in additional dependencies (via declaration or implementation). A client who doesn't use those methods still acquires those dependencies and becomes coupled to clients who do use those methods through the changes those clients force upon the interface and its shared implementation.
Applying this principle to default methods in Java 8, it is certainly possible for such a method to violate the ISP by adding dependencies that not all clients require. On the other hand, it is also possible (and preferable) for a default method to not add any dependency and to never change in any way that breaks clients.
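As a hypothetical illustration of that distinction (the Report interface and the commented-out PDF exporter below are invented for this sketch, not taken from any real API):

    import java.util.List;

    // Invented interface, used only to illustrate the dependency argument.
    interface Report {
        List<String> lines();

        // Harmless default: defined purely in terms of the interface's own
        // abstract method, so it drags no new dependency onto clients.
        default int lineCount() {
            return lines().size();
        }

        // Riskier default (sketched as a comment): it would couple every client
        // of Report to a separate rendering concern, even though many clients
        // never export anything.
        // default byte[] toPdf(PdfExporter exporter) { return exporter.render(lines()); }
    }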
In summary, default methods are just another tool. They may be used to help your code or to hurt it.

Related

Turning abstract class without fields to interface (Java, Sonar rule s1610)

I refer to this rule:
With Java 8's "default method" feature, any abstract class without direct or inherited field should be converted into an interface.
In my perception, default + private methods in Java 8+ interfaces are a pure compromise the JDK designers made in order to solve a dilemma they faced: they had to introduce new methods (for example, on the Map interface) without breaking old code that used those interfaces (backward compatibility).
In practice, the JDK designers introduced implementation inheritance in places where only interface inheritance existed before, so they potentially increased the coupling and brittleness of our existing and future code.
In my perception, this introduced implementation inheritance now diminishes the cool property of interfaces: multiple inheritance.
I understand why "abstract classes without fields" is mentioned in the Sonar rule. By that restriction the authors of the rule do lessen brittleness (but they don't eliminate the fact of implementation inheritance). Compare this to the problems of Scala traits, which do permit fields (new Java interfaces look more and more like Scala's traits); the Scala language designers tried to solve those problems with things like trait linearization, lazy vals, etc.
I'm avoiding arguments like "it is in JDK so it is a pattern", let us speak more at the conceptual level here.
Question: Could anyone explain to me why Sonar promotes this (IMHO flawed) rule?
What do we gain by such a rule? What benefit/use case am I missing here?
Thanks.
I don't necessarily agree with the rule, but I see where it's coming from.
The thing is that you don't really lose anything in this case. The only time you'd run into problems with regard to multiple inheritance is if you implemented several interfaces containing default methods with the same signature but different implementations.
However, Java forbids this unless you resolve the conflict by providing your own overriding implementation of that method, so there is no real danger here.
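For illustration, a minimal sketch of that conflict and how Java forces you to resolve it (the interfaces here are invented):

    interface Swimmer {
        default String move() { return "swim"; }
    }

    interface Runner {
        default String move() { return "run"; }
    }

    // Inheriting two conflicting defaults is a compile error unless the class
    // overrides move() itself; InterfaceName.super lets it reuse one explicitly.
    class Triathlete implements Swimmer, Runner {
        @Override
        public String move() {
            return Swimmer.super.move() + " then " + Runner.super.move();
        }
    }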
In general, interfaces are more versatile than classes, so it makes sense to use them if possible.
As a counterpoint, the interface default feature was introduced to allow adding methods to existing interfaces without breaking existing code, but that's not the case here.

Mocking ScheduledExecutorService vs the "Don't mock type you don't own" philosophy

Mocking ScheduledExecutorService would really make testing my classes easier, but according to the Mockito recommendations this seems like a bad idea: the logic of the mocked class can change so that my code ends up using it incorrectly, yet the unit tests would still report success.
It seems that writing a wrapper for it would be the "clean" way, but I have a feeling that this would merely result in the complete duplication of an interface, which would just make my code less straightforward. I'd like to follow the practical recommendations of this answer, but I am not sure that the contract of ScheduledExecutorService will always remain the same.
Can I assume that the contract for the existing methods of ScheduledExecutorService (or more generally, any other class in the JRE libs) will never change? If not, is it enough if I test the correct use of it in the integration tests, while still mocking it directly in the unit tests?
It's more of a guideline than a rule; do the thing that will most likely result in a clean, reliable, and non-brittle test. As in the document you quoted:
This is not a hard line, but crossing this line may have repercussions! (it most likely will)
One important thing here is that "don't mock types you don't own" usually refers to concrete or internal types, because those are much more likely to change their behavior between versions, or to gain or lose modifiers like final or static that Mockito's dynamic overrides might not pick up on. After all, if you were to subclass a third-party class manually, Java would throw a compiler error; Mockito's syntax would hide that from you until test runtime.
To list out the factors I think of:
As assylias pointed out in the comments, you're referring to a Java interface, which insulates you from common changes to final methods or method visibility.
The interface is well-documented and designed for third-party extension, providing yet another reason that Java would be unlikely to make breaking changes to the general contract of the interface.
The interface in question is a very well-used interface in Java, which overall has a lot of users and a lot of backwards-compatibility concerns. It is very unlikely that you'd be subject to breaking changes, compared to a smaller library or one under active development. One might even say that the JRE is in such lock step with the Java language that you have as much to worry about from breaking syntax changes as from breaking interface changes.
Though I believe strongly in "don't mock types you don't own" as a general heuristic or code smell, I'd agree with you here that the type is worth mocking, and that—unless you were to write and test a full implementation to be used in other tests—it's the best path forward for you here.
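For what it's worth, mocking it directly can look roughly like the sketch below; RetryScheduler and the five-second delay are made up for the example, and it assumes Mockito and JUnit 5 are on the test classpath:

    import static org.mockito.ArgumentMatchers.any;
    import static org.mockito.ArgumentMatchers.eq;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import org.junit.jupiter.api.Test;

    class RetrySchedulerTest {

        @Test
        void schedulesRetryAfterFiveSeconds() {
            ScheduledExecutorService executor = mock(ScheduledExecutorService.class);
            RetryScheduler scheduler = new RetryScheduler(executor); // hypothetical class under test

            scheduler.scheduleRetry();

            // Verify the interaction with the JRE interface instead of relying on real timing.
            verify(executor).schedule(any(Runnable.class), eq(5L), eq(TimeUnit.SECONDS));
        }
    }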
I'd say "Don't mock types you don't own!" is a false conclusion drawn from correct reasoning.
Unit tests should only need to change if your API changes, or if the part of a dependency's API that your code uses changes.
Example:
You use an interface from a dependency as an input parameter, but your tested code uses only one method of that interface. If you don't mock this interface (which is a type you don't own), you have to create your own dummy implementation covering all of the interface's methods, even those you don't use.
If you change the version of that dependency, the interface might have gained additional methods and/or lost some, and you would have to change all of your implementations of it throughout your program. If you mocked the interface, you wouldn't need to change your tests, and they would still give you confidence that your code's behavior did not change after the required refactoring.
Furthermore, your unit tests should only fail because the behavior of your code changed, not because of a change in a dependency's behavior.
Changes in a dependency's behavior should be pinned down with separate unit tests that you set up for the dependency's behavior (if it is crucial for your application) and/or with integration tests.

Questions concerning an interface class

I know this is a question that has been asked a 100 times over but I would like to provide some of my own definitions to see if I understand an interface correctly.
Questions:
What is an interface?
An interface defines the structure for code design. An interface lays the foundation for your design and is made up of a collection of abstract methods and contains behaviors that a class must implement.
When to use an interface?
When similar methods of a design are to be reused across a project. This creates the structure of the behavior within a project.
Why use an interface?
You use an interface in a project to create the foundation much like the construction of a new home. When a new home is built the frame is built, then the walls and doors and so on are added.
The above answers are how I would describe an interface; I'd like to know whether I am correct or not. If not, please explain.
Thanks
You use an interface so that any class can make use of it as long as they implement it. That is why List is so cool and we use it in everyday programming.
Cows and Goats are both animals, and they eat, walk, sleep, etc. You don't want to declare the same behaviors separately for each; without a common contract, every new animal would have to define similar behaviors its own way. An interface forces each implementation to provide the required behaviors.
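A rough sketch of that idea (the names are invented for illustration):

    // Every animal must provide these behaviors; how it does so is up to it.
    interface Animal {
        String eat();
        String sleep();
    }

    class Cow implements Animal {
        public String eat()   { return "Cow grazes on grass"; }
        public String sleep() { return "Cow sleeps standing up"; }
    }

    class Goat implements Animal {
        public String eat()   { return "Goat nibbles shrubs"; }
        public String sleep() { return "Goat sleeps in the barn"; }
    }

    // Callers depend only on Animal, so a new animal slots in without changes here.
    class Farm {
        static void feed(Animal animal) {
            System.out.println(animal.eat());
        }
    }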
I would argue that an interface is more like an optional contract -- you specify the method names, parameters, and return types, and if a class chooses to implement the interface, it must then conform to the terms of that contract. It's more like an API spec than a design foundation, as interfaces are relatively flexible.
Note that this flexibility gives you considerable leeway in how you choose to use interfaces. As long as the implementing class provides the agreed-upon methods, the implementation is entirely up to you. There are interfaces (e.g. Serializable) that require no methods, and simply act as markers for the programmer's intent regarding a certain class.
Another use of interfaces is to mitigate the disadvantages of the fact that Java doesn't support multiple inheritance. Though each implementing class must contain the actual code, you might use an interface in an 'inheritance-like' way, to indicate that a certain set of classes derives behavior (in name, if not necessarily in implementation) from some common, more abstract pattern.
I would definitely suggest looking through the javadocs (perhaps the Collections framework) for more examples of interfaces. To continue with the contract analogy, the main use of interfaces is as a sort of API that specifies behavior you can count on, without having to know the implementation details.
The problem with the home analogy is that it's too restrictive-- interfaces don't restrict the design of a class as much as a foundation prescribes a certain structure for a building. Additionally, a building can only have one foundation, and in Java, there is no limit to the number of interfaces a class may implement.
Think of a TV remote analogy.
What
It's a standard way of using a television (any television: flat screen, CRT, LED, LCD, plasma, etc.).
So it's basically an interface to a television. Now, all TV remotes must have some basic common buttons (On, Off, Vol+, Vol-, Ch+, Ch-); these are the methods that should be present in the interface. Different brands implement them using different technologies.
When
Now all Television brands want to share this standard way of controlling a Television (which is a big complicated machine).
Why
Think about it: it helps the consumer. As far as the consumer is concerned, they do not need to know how to operate the television manually or from the inside (you could do that from the circuit board!).
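Rendered as code, the analogy might look something like this sketch (names invented):

    // The remote is the interface to the television: a fixed set of buttons...
    interface TvRemote {
        void turnOn();
        void turnOff();
        void volumeUp();
        void channelUp();
    }

    // ...that each brand wires up with its own technology behind the scenes.
    class PlasmaTv implements TvRemote {
        public void turnOn()    { /* fire up the plasma panel */ }
        public void turnOff()   { /* power down */ }
        public void volumeUp()  { /* drive the amplifier */ }
        public void channelUp() { /* retune */ }
    }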

Interfaces, what if not all implementations use all methods?

I'm fairly new to programming against interfaces and am trying to get it right as a major tool for developing test driven.
Currently we have a lot of Manager classes that all implement a CRUD interface. However, some Managers don't do updates yet and some don't do deletes; some may never do either.
Not implemented exception?
Is it okay, to just
throw new NotImplementedException()
until the method gets implemented or even for all time if it never does?
(obviously with a source code comment telling the programmer "this method is not supposed to be used", because e.g. types like 'male'/'female' never get deleted)?
Split?
Or should I split my CRUD interface into Creatable, Readable(Searchable), Updatable and Deletable? Wouldn't that clutter my class definition?
PersonManager implements Creatable<Person>, Updateable<Person>, Deletable<Person>, Searchable<Person>
Split and combine?
Or should I combine some interfaces like all 4 into CRUD and maybe some other combinations like Read + Update?
Maybe that would also create a load of interfaces, where one has to click through a big inheritance path to find out which interface extends all the desired atomic interfaces for the current situation (I need read and create, so which one combines just those two? This can get a lot more complex quickly).
IMO, for the middle stage - it is OK to use NotImplementedException, until you finish implementing it.
However, as a permanent solution I believe it is bad practice [in most cases].
Instead, I'd create an interface that contains behavior common to all implementing classes, and use subinterfaces to cluster them up for more specific behavior.
The idea is similar to the standard Java SortedSet, which extends Set: we wouldn't want to treat every Set as a SortedSet and assign, say, a HashSet to a variable of that type; instead we use the sub-interface SortedSet for that purpose.
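A rough sketch of that kind of split, with all names invented for illustration:

    // Atomic capabilities, one responsibility each.
    interface Creatable<T>  { T create(T entity); }
    interface Searchable<T> { T findById(long id); }
    interface Updatable<T>  { T update(T entity); }
    interface Deletable<T>  { void delete(long id); }

    // Optional composites for common combinations, so callers don't have to
    // list every atomic interface themselves.
    interface ReadWrite<T> extends Searchable<T>, Updatable<T> {}
    interface Crud<T> extends Creatable<T>, Searchable<T>, Updatable<T>, Deletable<T> {}

    // A manager only claims the capabilities it genuinely supports:
    // no NotImplementedException stubs for delete().
    class GenderManager implements Creatable<String>, Searchable<String> {
        public String create(String entity) { /* persist */ return entity; }
        public String findById(long id)     { /* look up */ return "male"; }
    }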
Generally you would like to throw UnsupportedOperationException which is a runtime exception, clearly mentioning that the requested operation is not supported.
Having loads of interfaces leads to too many files, and anyone who tries to look through them will get confused. Javadocs don't help much in such cases either.
Splitting an interface makes sense if it has too many operations and not all of them are logically bound together.
For database operations that is rarely the case, as you will have a set of basic operations that apply in most scenarios.
NotImplementedException doesn't mean that the class doesn't support this action. It means it's not implemented yet, but will be in the future.
From a logical point of view, all interface methods must be implemented and must work well. If you leave one unimplemented in an application written just for yourself, you will remember the limitation. On the other hand, I would be angry if some other developer implemented an interface and left a method unimplemented. So I don't think you can leave an interface method unimplemented just for future development.
My suggestion is rather to modify the interfaces than to throw exceptions inside the implemented methods.
In frameworks that support covariance and contravariance, it can be very useful to split up interfaces and then define some composite interfaces. For frameworks that do not offer such support, (and even sometimes on frameworks which do) it is sometimes more helpful to have an interface include methods which individual implementations may or may not support (implementations should throw an exception when unsupported actions are attempted); if one is going to do that, one should include methods or properties by which outside code can ask what actions are supported without needing to use any code that will throw an exception.
Even when using interfaces where support for some actions is optional, however, it may sometimes be helpful to define additional interfaces which guarantee that certain actions will be available. Having interfaces which inherit other interfaces without adding new members can be a good way to do this. If done properly, the only extra work this requires of implementations is to declare themselves as the most specific type applicable. The situation for clients is a little more complex: if a client's needs can be expressed in the type system, the client can avoid run-time type-checking by demanding the specific type. On the other hand, routines that pass instances between clients may be complicated by some clients demanding a more specific type than the instance-passing code itself would otherwise require.
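A small sketch of the "ask what is supported" idea from the first paragraph (the Repository interface here is invented):

    interface Repository<T> {
        T findById(long id);
        void delete(long id);

        // Lets callers check support without triggering an exception.
        default boolean supportsDelete() {
            return true;
        }
    }

    class ReadOnlyRepository implements Repository<String> {
        public String findById(long id) { return "record-" + id; }

        public void delete(long id) {
            throw new UnsupportedOperationException("read-only repository");
        }

        @Override
        public boolean supportsDelete() {
            return false;
        }
    }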

why are interfaces created instead of their implementations for every class

It seems to be the standard so I have been going along with it so far, but now I am building a new class from scratch instead of modifying the old ones and feel I should understand why I should follow the projects convention.
Almost every class has an interface to go with it, named classnameable. In the code, database.class never appears even once, but everywhere I would want to use that class I see databaseable.class instead.
To my understanding, an interface was a class that was never implemented but was inherited from to keep standards. So why are the interfaces being used as if they were real classes?
To my understanding an interface was a class that was never implemented but was inherited from to keep standards. So why are the interfaces being used as if they were real classes.
This is a bit confused. An interface defines an API, so that pieces of code from different authors, modules or projects can interact. For example, java.util.Collections.sort() can sort anything that implements the List interface and contains objects that implement the Comparable interface - even though the implementation classes may not have existed yet when the sorting code was written!
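As a tiny illustration of that point (the Ticket class is invented), Collections.sort() happily sorts a class written long after it was:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Collections.sort() predates this class, yet it can sort it because the
    // class commits to the Comparable contract and is stored in a List.
    class Ticket implements Comparable<Ticket> {
        final int priority;
        Ticket(int priority) { this.priority = priority; }

        @Override
        public int compareTo(Ticket other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    class SortDemo {
        public static void main(String[] args) {
            List<Ticket> tickets = new ArrayList<>(List.of(new Ticket(3), new Ticket(1)));
            Collections.sort(tickets);
            System.out.println(tickets.get(0).priority); // prints 1
        }
    }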
Now the situation in your project seems to reflect an unfortunately rather common antipattern: having an interface for everything, mostly with a single implementation class, even for internal classes.
This used to be strongly promoted by proponents of Test-Driven-Development (TDD) who see it as vital to be able to test every class in isolation with all its dependencies replaced by mock objects. Older mocking frameworks could only mock interfaces, so to be able to test every class in isolation, all inter-class dependencies had to be through interfaces.
Fortunately, newer mocking frameworks can mock concrete classes and don't require you to pollute your project with unnecessary interfaces. Some people will probably still argue that it should be done anyway to "reduce coupling", but IMO they're just rationalizing their desire not to change their practices.
And of course, if you don't do fundamentalist TDD, there never was a good reason to have an interface for everything - but very good reasons to have interfaces for some things.
If you've got an interface for pretty much every single class in your project even though there's no reason for it, that's not a good thing and in this day and age there's no great reason for it. It may be a legacy from days gone by when it was required by some external testing toolkit for instance - but these days that's not a requirement.
It may be of course that someone's heard that loose coupling is a good thing, that you should always couple to interfaces and not concrete classes, and taken this idea to an extreme.
On the other hand, it is good practice to define interfaces for some classes even if there's only one of them (at the moment.) When I'm writing a class I try to think along the lines of whether another (potentially useful) implementation could exist, and if so I'll put an interface in. If it's not used it's no problem, but if it is it saves time and hassle and refactoring later.
If you want a class for your interfaces then a common way is to create an AbstractFoo class to go with the Foo interface. You can provide simple implementation of the required methods, allowing derived classes to overwrite them as needed. See AbstractCollection for an example of such a class.
The advantage is that you don't have to implement all the small stuff, it is already done for you. The disadvantage is that you can't inherit from any other class. You pays your money and you takes your choice.
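In sketch form (Foo is just a placeholder name echoing the AbstractFoo convention above):

    // The interface states the full contract...
    interface Foo {
        String name();
        String describe();
    }

    // ...while the skeletal class fills in the boilerplate once.
    abstract class AbstractFoo implements Foo {
        @Override
        public String describe() {
            return "Foo named " + name();
        }
    }

    // Concrete classes implement only what genuinely differs,
    // at the cost of spending their single superclass slot.
    class SimpleFoo extends AbstractFoo {
        @Override
        public String name() {
            return "simple";
        }
    }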
A good indication of bad design is when you have an ISomething or a SomethingImpl. The interface name should state how to use it (i.e. List), the class name should state how it works (i.e. ArrayList).
If you need pre- or suffixes because the names would be the same, this means there is only one way to implement it, and then there is probably no need for a separation. (If you think there will be more sophisticated implementations in the future, name your class DefaultSomething or SimpleSomething)
