Is inheritance chaining something that should be avoided?
I'm currently in the process of refactoring a Java project, and I have come across a number of classes (8) that all share some functionality. So, using the Extract Super Class refactoring, I created a super class with that functionality and made all the sub-classes extend it. Now, 2 of those classes also share a second piece of functionality. Should I create another super class that inherits from the first super class and is, in turn, extended by those 2 sub-classes, or should the 2 sub-classes simply extend multiple classes?
Thoughts
Even though both options would work, I feel that extending multiple classes is preferable, since it describes and clarifies the responsibilities of each class. It would also avoid the chain of super() calls needed to reach the top super class.
On the other hand, maybe having an inheritance chain kinda tidies up the place.
Edit: I realize I stated my problem poorly. The 8 classes mentioned above implement the command pattern, so each one has a different purpose and functionality. I extracted a super class to avoid code duplication inside their constructors. Two of those classes also share a little more duplicate code (duplicate code, not duplicate functionality in the sense of doing the same thing). Unfortunately, the nature of the project, the fact that I did not write the original code, and the tight time schedule do not allow me to completely restructure the whole project. It's, let's say, beyond repair for now. But what I can do is patchwork: remove duplicate code and tidy the place up a bit.
Super classes should be used when they model an is-a relationship. If the 8 classes are the same kind of entity, a common super class can be used to avoid duplication. Class hierarchies can quickly hurt maintainability and should be kept as simple as possible. Avoiding duplicate code is not the main concern when introducing them.
As @davidza5 already pointed out, the principle of composition over inheritance applies here. Common functionality should preferably be extracted into classes that model a has-a relationship, which decouples functionality and makes it easier to change later on.
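A minimal sketch of what that could look like for the constructor duplication described in the question; all names here are hypothetical, not from the actual project:

// The shared constructor logic lives in its own component (has-a)
// instead of a common super class (is-a).
class CommandContext {
    private final String name;

    CommandContext(String name) {
        this.name = name; // the duplicated initialization, written once
    }

    String name() {
        return name;
    }
}

class SaveCommand {
    // Each command delegates instead of calling super().
    private final CommandContext context = new CommandContext("save");

    void execute() {
        System.out.println("executing " + context.name());
    }
}

The two classes with the extra shared code would then get a second, unrelated component, rather than a deeper super-class chain.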
Another aspect: if a refactoring does not work in all cases this does not mean it is the wrong idea. If it leads to good results for the majority, working around the quirks of the exceptions is a good trade off to make.
Both approaches seem to be bad practice in most cases (favor composition over inheritance). If you can, avoid it by using design patterns (facade, composite, adapter, decorator...). If that can't be done in your case, the principles of encapsulation and inheritance are what I would apply. So, chain inheritance when necessary.
Related
I have been learning Java for the past two days and I have gotten confused with composition. What exactly is the point of composition, and what does it do? Please leave some examples as well.
I think this has already been answered in many places. Refer to these links:
Implementation difference between Aggregation and Composition in Java,
Favor composition over inheritance
https://softwareengineering.stackexchange.com/questions/176049/what-is-the-use-of-association-aggregation-and-composition-encapsulation-in-c
Also read this link
http://www.journaldev.com/1325/what-is-composition-in-java-java-composition-example
I'll put here what Chandra posted. This might be what you are looking for:
I think this is one of the most discussed points in object-oriented design. As suggested in the article, composition is always preferred over inheritance. That doesn't mean you should never use inheritance; you should use it where it makes more sense (which can be debatable).
There are many advantages to using composition; a couple of them are listed here, with a small sketch after the list:
You will have full control of your implementation, i.e., you can expose only the methods you intend to expose.
Any changes in the contained class can be shielded by modifying only your class; client classes that use your class need not be modified.
It allows you to control when to load the contained class (lazy loading).
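A small sketch of the first two advantages, using an invented HistoryLog class that wraps a List:

import java.util.ArrayList;
import java.util.List;

// Composition: HistoryLog has-a List and exposes only the methods it
// intends to support, shielding callers from the List API.
class HistoryLog {
    private final List<String> entries = new ArrayList<>();

    void record(String entry) {
        entries.add(entry);
    }

    String entryAt(int index) {
        return entries.get(index);
    }
}

If the backing List ever changes (say, to a LinkedList), only HistoryLog is touched; its callers never notice.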
It seems to be the standard, so I have been going along with it so far, but now I am building a new class from scratch instead of modifying the old ones, and I feel I should understand why I should follow the project's convention.
Almost every class has an interface to it, named classnameable. In the code, database.class would never appear even once, but in places where I would want to use that class I see databaseable.class instead.
To my understanding, an interface was a class that was never implemented but was inherited from to keep standards. So why are the interfaces being used as if they were real classes?
To my understanding, an interface was a class that was never implemented but was inherited from to keep standards. So why are the interfaces being used as if they were real classes?
This is a bit confused. An interface defines an API, so that pieces of code from different authors, modules or projects can interact. For example, java.util.Collections.sort() can sort anything that implements the List interface and contains objects that implement the Comparable interface - even though the implementation classes may not have existed yet when the sorting code was written!
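A small, self-contained illustration of that contract, using only standard library types:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        // Collections.sort() knows only the List and Comparable interfaces,
        // so it works with any implementing classes, including ones written
        // long after the sorting code itself.
        List<String> names = new ArrayList<>();
        names.add("Charlie");
        names.add("Alice");
        names.add("Bob");
        Collections.sort(names); // String implements Comparable<String>
        System.out.println(names); // [Alice, Bob, Charlie]
    }
}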
Now the situation in your project seems to reflect an unfortunately rather common antipattern: having an interface for everything, mostly with a single implementation class, even for internal classes.
This used to be strongly promoted by proponents of Test-Driven-Development (TDD) who see it as vital to be able to test every class in isolation with all its dependencies replaced by mock objects. Older mocking frameworks could only mock interfaces, so to be able to test every class in isolation, all inter-class dependencies had to be through interfaces.
Fortunately, newer mocking frameworks can mock concrete classes and don't require you to pollute your project with unnecessary interfaces. Some people will probably still argue that it should be done anyway to "reduce coupling", but IMO they're just rationalizing their desire not to change their practices.
And of course, if you don't do fundamentalist TDD, there never was a good reason to have an interface for everything - but very good reasons to have interfaces for some things.
If you've got an interface for pretty much every single class in your project even though there's no reason for it, that's not a good thing. It may be a legacy from days gone by when it was required by some external testing toolkit, for instance, but in this day and age that's not a requirement.
It may be of course that someone's heard that loose coupling is a good thing, that you should always couple to interfaces and not concrete classes, and taken this idea to an extreme.
On the other hand, it is good practice to define interfaces for some classes even if there's only one of them (at the moment). When I'm writing a class, I try to think about whether another (potentially useful) implementation could exist, and if so I'll put an interface in. If it's not used, it's no problem; but if it is, it saves time, hassle, and refactoring later.
If you want a class for your interfaces then a common way is to create an AbstractFoo class to go with the Foo interface. You can provide a simple implementation of the required methods, allowing derived classes to override them as needed. See AbstractCollection for an example of such a class.
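A minimal sketch of that pattern, continuing the hypothetical Foo naming:

// The interface defines the full contract.
interface Foo {
    String name();
    String describe();
}

// AbstractFoo supplies the boilerplate; subclasses fill in what varies.
abstract class AbstractFoo implements Foo {
    @Override
    public String describe() {
        return "Foo named " + name(); // default built on the abstract method
    }
}

class NamedFoo extends AbstractFoo {
    @Override
    public String name() {
        return "example";
    }
}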
The advantage is that you don't have to implement all the small stuff, it is already done for you. The disadvantage is that you can't inherit from any other class. You pays your money and you takes your choice.
A good indication of bad design is when you have an ISomething or a SomethingImpl. The interface name should state how to use it (e.g., List); the class name should state how it works (e.g., ArrayList).
If you need prefixes or suffixes because the names would otherwise be the same, that means there is only one way to implement it, and then there is probably no need for the separation. (If you think there will be more sophisticated implementations in the future, name your class DefaultSomething or SimpleSomething.)
Is it a bad practice to have a package with only one class in it? Would it make more sense just to move the single class to a util package that would contain other random useful classes?
Is it a bad practice to have a package with only one class in it?
Not necessarily. It could be a sign of somebody getting obsessed with classifying things. On the other hand, it could just be a logical consequence of a sensible general classification scheme applied in an unusual case.
An example of the latter might be where you have a general API and multiple implementations of that API, where each of the implementations consists of multiple classes. But one of those implementations (let's call it the Null implementation) consists of just one class.
The real test is whether the package structure is serving its purpose(s):
Is it making it easier to find library classes?
Do the packages organize the application classes along the lines of the application's logical module structure?
Does the structure allow you to effectively make use of "package private" visibility? (A quick sketch of this follows the list.)
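On that last point, a sketch of what package-private visibility buys you (two files; all names are invented):

// file: com/example/report/ReportWriter.java
package com.example.report;

public class ReportWriter {
    public String write() {
        return new LineFormatter().format("total: 42");
    }
}

// file: com/example/report/LineFormatter.java
package com.example.report;

// No access modifier = package private: only classes in com.example.report
// can use this helper, so it never leaks into the public API.
class LineFormatter {
    String format(String line) {
        return "| " + line + " |";
    }
}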
Would it make more sense just to move the single class to a util package that would contain other random useful classes?
Not necessarily. If the class is just another "randomly useful" leaf class, then there is a good case for moving it. On the other hand, if it has a specific function and is not intended to be used generally, then it may be better to leave it where it is.
It is best not to get too obsessed with creating elegant package hierarchies, or with rejigging them when they turn out to be not as elegant (or useful) as you first thought. There are usually more important things to do, like implementing functionality, writing tests, writing documentation and so on.
No.
Packages are used to put similar classes together.
If there is no similar class in your system, then it is fine for the class to have a package of its own.
Is it a bad practice to have a package with only one class in it?
Not necessarily. Packages are used to group together logically related entities. That doesn't prevent you from having just one such entity in a package.
Would it make more sense just to move the single class to a util package that would contain other random useful classes?
Not to me, for two reasons:
Util has a specific meaning. Moving an arbitrary entity to util for reasons of loneliness would be a borderline case of util-abuse.
This is premature organization. With Java, the IDE support is rich enough to reorganize easily and effectively in a few clicks. Wait a while to see how your project evolves, and then decide.
There are different strategies for static util classes. I use this one:
if your util class is generic (String utils, DB utils, etc.), I put it in a "util" package that is used across the whole application.
if the util class is specific to a domain, I call it "DomainHelper" by convention and put it in the domain package, at the same level as the domain classes.
Yes, it's a definite code smell.
This doesn't mean it's necessarily wrong, but there should be a really good reason for a lone class in a package.
Most instances of a package with a single class that I've seen have been erroneous.
Packages should implement features. It's rare that a feature is implemented using only a single class.
It's not 'bad' to have a single class in a package. Create a new package to group more than one related class, or when you expect more classes related to your present single class in the future, to avoid refactoring later. Moving all the random utility-type classes into a single package is a common practice seen in many places; it's a matter of choice, really.
I guess it depends. It is quite rare to have a package with one class in it because, in addition to the reasons in the answers above, packages also serve the purpose of creating a layered system. A package with only one class in it may indicate that the decomposition of the system has not surfaced some objects. So yes, I would take a closer look at this package and question what its purpose is.
It is better not to stick random stuff in a Util package, precisely for the reason mentioned above. Ask yourself whether you would think to look in Util for this class in the future before putting it there. When Util grows large, it starts to get difficult to find the utility one is looking for.
I have a hierarchy of three interfaces, grandparent, parent and child. Parent and child have a method "add", which requires different input parameters in the child. While it's no problem to add the required signature in the child, the inherited method will be pointless, so is there a way to not have it in there at all? The other methods work fine.
Maybe, to achieve what I want, I can improve the design altogether, so I'll shortly outline what the interfaces are about:
I collect meter readings that consist of a time and a value. The grandparent interface is for a single reading. I also have classes that represent a number of consecutive readings (a series), and one that contains multiple series running over the same period of time (let's just call that a table).
The table can be viewed as a series (which aggregates the values orthogonally to the time axis), and both table and series can be viewed as a single reading (the implementations providing different means of aggregation), hence the inheritance. This seems to work out fine, but for the add method. (I can add a single point to the series, but for the table I need an additional parameter to tell me to which series it belongs.)
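To make the structure concrete, here is a sketch of the hierarchy as I read it; all names are invented for illustration:

import java.time.Instant;

interface Reading {            // grandparent: one time/value pair
    Instant time();
    double value();
}

interface Series extends Reading {  // parent: consecutive readings
    void add(Reading reading);      // add a single point
}

interface Table extends Series {    // child: several parallel series
    // Needs to know which series the point belongs to, so the inherited
    // one-argument add(Reading) is pointless here.
    void add(Reading reading, int seriesIndex);
}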
No, you cannot avoid inheriting a method, since doing so would violate the Liskov substitution principle.
In practice, you could have implementations throw an UnsupportedOperationException, but that would be pretty nasty.
Can't you implement the inherited method with some sort of default value for the series?
Maybe it would make sense to break the interface inheritance all together. Just have specific interfaces for specific types of behaviors. Whatever classes you have that implement these interfaces can just pick the ones that make sense, and won't have to worry about implementing methods that don't make sense.
The problem with inheritance is that the focus on the language mechanism makes people think about implementation rather than semantics.
When B inherits from A, it means that every instance of B is also an instance of A. In OOP, being an instance of something typically means that you should have a sensible response to its methods and at least support their messages.
If you feel that B should not support one of the messages of A, then as far as I am concerned you have two options:
Bad - Throw an "Unimplemented" exception as you would get with the collections framework. However, this is in my opinion poor form.
Good - Accept that B is not a type of A and avoid the inheritance, or restructure it (e.g., using composition and/or interfaces) so that you don't have to rewrite the code but you do not use a subtyping relation. If your application will live over time, you don't want to have semantic issues in your hierarchies.
Thanks for putting me on the right track, I upvoted the posts I found most helpful. Since my solution was inspired by the posts, but is not posted, I'll share what I decided to do:
As the hierarchy was inspired by how the data should be viewed, while the problems arise on the semantics of how you add data, I'm going to split up the interfaces for series and table into a read and a write interface each. The write interfaces have nothing to do with each other, and the read interfaces can inherit without conflicts.
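A sketch of that split, with invented names and signatures:

import java.time.Instant;

// Read side: inheritance is safe because the table really can be
// viewed as a series.
interface SeriesReader {
    double valueAt(Instant time);
}

interface TableReader extends SeriesReader {
    double valueAt(Instant time, int seriesIndex);
}

// Write sides: deliberately unrelated interfaces, so neither inherits
// an add() it cannot honor.
interface SeriesWriter {
    void add(Instant time, double value);
}

interface TableWriter {
    void add(Instant time, double value, int seriesIndex);
}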
I'll make this wiki, in case someone wants to expand on this.
You might want to look at the Refused Bequest code smell.
An interface is a contract. It means that anything that implements that interface will necessarily implement the methods it defines. You could technically implement a method as a dummy (no body, simply return, whatever), but to my knowledge, it must be implemented.
You can always implement the method as empty, for example:
interface B { void add(A a); }
class A implements B { @Override public void add(A a) { /* goes nowhere, does nothing */ } }
but really, it's not a good idea. A better solution would be for all of your grandparents, parents, and children to be the same class, with two extra methods: hasParent():boolean and hasChild():boolean. This has the benefit of being a Liskov-substitution-compatible change as well as a cleaner design.
The open-closed principle states that "Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification".
However, Joshua Bloch in his famous book "Effective Java" gives the following advice: "Design and document for inheritance, or else prohibit it", and encourages programmers to use the "final" modifier to prohibit subclassing.
I think these two principles clearly contradict each other (am I wrong?). Which principle do you follow when writing your code, and why? Do you leave your classes open, disallow inheritance on some of them (which ones?), or use the final modifier whenever possible?
Frankly, I think the open/closed principle is more an anachronism than not. It stems from the 80s and 90s, when OO frameworks were built on the principle that everything must inherit from something else and that everything should be subclassable.
This was most typified in UI frameworks of the era, like MFC and Java Swing. In Swing, you have ridiculous inheritance where (iirc) button extends checkbox (or the other way around), giving one of them behaviour that isn't used (I think it's the setDisabled() call on checkbox). Why do they share an ancestry? No reason other than, well, they had some methods in common.
These days composition is favoured over inheritance. Whereas Java makes methods overridable by default, .NET took the (more modern) approach of making them non-virtual by default, which I think is more correct (and more consistent with Josh Bloch's principles).
DI/IoC have also further made the case for composition.
Josh Bloch also points out that inheritance breaks encapsulation and gives some good examples of why. It's also been demonstrated that changing the behaviour of Java collections is more consistent if done by delegation rather than extending the classes.
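Bloch's well-known counting-set example, re-sketched here from memory, shows the breakage:

import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;

// Extending HashSet to count insertions looks right but is fragile:
// HashSet inherits addAll() from AbstractCollection, which calls add()
// on each element, so every element ends up counted twice.
class CountingHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c); // internally calls our add(), counting again
    }

    int addCount() {
        return addCount;
    }

    public static void main(String[] args) {
        CountingHashSet<String> s = new CountingHashSet<>();
        s.addAll(Arrays.asList("a", "b", "c"));
        System.out.println(s.addCount()); // prints 6, not 3
    }
}

A delegating wrapper that forwards to a contained Set does not depend on such implementation details.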
Personally, I largely view inheritance as little more than an implementation detail these days.
I don't think the two statements contradict each other. A type can be open for extension and still be closed for inheritance.
One way to do this is to employ dependency injection. Instead of creating instances of its own helper types, a type can have these supplied upon creation. This allows you to change the parts (i.e. open for extension) of the type without changing the type itself (i.e. close for modification).
With the open-closed principle (open for extension, closed for modification) you can still use the final modifier. Here is one example:
public final class ClosedClass {
    private final IMyExtension myExtension;

    public ClosedClass(IMyExtension myExtension) {
        this.myExtension = myExtension;
    }

    // methods that use the IMyExtension object
}

public interface IMyExtension {
    void doStuff();
}
The ClosedClass is closed for modification inside the class, but open for extension through another one: in this case, anything that implements the IMyExtension interface. This trick is a variation of dependency injection, since we're feeding the closed class with another one, here through the constructor. Since the extension is an interface it can't be final, but its implementing classes can be.
Using final on classes to close them in java is similar to using sealed in C#. There are similar discussions about it on the .NET side.
I respect Joshua Bloch a great deal, and I consider Effective Java to pretty much be the Java bible. But I think that automatically defaulting to private access is often a mistake. I tend to make things protected by default so that they can at least be accessed by extending the class. This mostly grew out of a need to unit test components, but I also find it handy for overriding the default behavior of classes. I find it very annoying when I'm working in my own company's codebase and end up having to copy & modify the source because the author chose to "hide" everything. If it's at all in my power, I lobby to have the access changed to protected to avoid the duplication, which is far worse IMHO.
Also keep in mind that Bloch's background is in designing very public bedrock API libraries; the bar for getting such code "correct" must be set very high, so chances are it's not really the same situation as most code you'll be writing. Important libraries such as the JRE itself tend to be more restrictive in order to ensure that the language is not abused. See all the deprecated APIs in the JRE? It's almost impossible to change or remove them. Your codebase is probably not set in stone, so you do have the opportunity to fix things if it turns out you made a mistake initially.
Nowadays I use the final modifier by default, almost reflexively as part of the boilerplate. It makes things easier to reason about, when you know that a given method will always function as seen in the code you're looking at right now.
Of course, sometimes there are situations where a class hierarchy is exactly what you want, and it would be silly not to use one then. But be scared of hierarchies of more than two levels, or ones where non-abstract classes are further subclassed. A class should be either abstract or final.
Most of the time, using composition is the way to go: put all the common machinery into one class, put the different cases into different classes, then compose instances to get a working whole.
You can call this "dependency injection", or "strategy pattern" or "visitor pattern" or whatever, but what it boils down to is using composition instead of inheritance to avoid repetition.
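For instance, a minimal strategy-style sketch (all names invented):

// The common machinery is final; the varying step is composed in.
interface DiscountPolicy {
    double apply(double price);
}

final class Checkout {
    private final DiscountPolicy policy;

    Checkout(DiscountPolicy policy) {
        this.policy = policy;
    }

    double total(double price) {
        return policy.apply(price);
    }
}

final class HalfPrice implements DiscountPolicy {
    @Override
    public double apply(double price) {
        return price / 2;
    }
}

Here new Checkout(new HalfPrice()).total(10.0) yields 5.0, and supporting a new case means adding a class, not modifying or subclassing Checkout.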
The two statements
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
and
Design and document for inheritance, or else prohibit it.
are not in direct contradiction with one another. You can follow the open-closed principle as long as you design and document for it (as per Bloch's advice).
I don't think that Bloch states that you should prefer to prohibit inheritance by using the final modifier, just that you should explicitly choose to allow or disallow inheritance in each class you create. His advice is that you should think about it and decide for yourself, instead of just accepting the default behavior of the compiler.
I don't think that the Open/closed principle as originally presented allows the interpretation that final classes can be extended through injection of dependencies.
In my understanding, the principle is all about not allowing direct changes to code that has been put into production, and the way to achieve that while still permitting modifications to functionality is to use implementation inheritance.
As pointed out in the first answer, this has historical roots. Decades ago, inheritance was in favor, developer testing was unheard of, and recompilation of the codebase often took too long.
Also, consider that in C++ the implementation details of a class (in particular, private fields) were commonly exposed in the ".h" header file, so if a programmer needed to change it, all clients would require recompilation. Notice this isn't the case with modern languages like Java or C#. Besides, I don't think developers back then could count on sophisticated IDEs capable of performing on-the-fly dependency analysis, avoiding the need for frequent full rebuilds.
In my own experience, I prefer to do the exact opposite: "classes should be closed for extension (final) by default, but open for modification". Think about it: today we favor practices like version control (which makes it easy to recover or compare previous versions of a class), refactoring (which encourages us to modify code to improve its design, or as a prelude to introducing new features), and developer testing, which provides a safety net when modifying existing code.