Axiom Deathmatch: Final Classes - java

Hibernate Community Documentation :
"A central feature of Hibernate, proxies (lazy loading), depends upon
the persistent class being either non-final, or the implementation of
an interface that declares all public methods. You can persist final
classes that do not implement an interface with Hibernate; you will
not, however, be able to use proxies for lazy association fetching
which will ultimately limit your options for performance tuning."
Effective Java Second Edition :
"Design & document for inheritance or else prohibit it"
Well, which one is right? Or better yet, while using Hibernate, when should I follow one principle or the other? Should I make all classes final until I need the extra performance of using dynamic proxies? If I choose to use final classes, can I still implement interfaces?

There are no hard laws, only guidelines. Effective Java is a great set of Java axioms that should be further investigated, validated, and meditated on. When it comes to our livelihood, however, we typically have little say in the hand we are given. Never blindly follow any philosophy. Put in the work, do the tests, and choose the right "Way" for the job.
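To answer the last question directly: yes, final classes can implement interfaces, and per the Hibernate quote above that is exactly what keeps proxying possible. A minimal sketch (Customer/CustomerImpl are hypothetical names, not anything Hibernate prescribes):

// An interface declaring all public methods of the entity, so Hibernate
// can generate a proxy from the interface even though the class is final.
public interface Customer {
    Long getId();
    String getName();
}

// The final class cannot be subclassed for proxying, but lazy loading
// can still work through the Customer interface.
public final class CustomerImpl implements Customer {
    private Long id;
    private String name;

    @Override
    public Long getId() { return id; }

    @Override
    public String getName() { return name; }
}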

Related

What does composition do?

I have been learning Java for the past two days and I have gotten confused with composition. What exactly is the point of composition, and what does it do? Please leave some examples as well.
I think this has already been answered in many places. Refer to these links:
Implementation difference between Aggregation and Composition in Java,
Favor composition over inheritance
https://softwareengineering.stackexchange.com/questions/176049/what-is-the-use-of-association-aggregation-and-composition-encapsulation-in-c
Also read this link
http://www.journaldev.com/1325/what-is-composition-in-java-java-composition-example
I'm including what Chandra posted below; this might be what you are looking for.
I think this is one of the most discussed points in object-oriented design. As suggested in the article, composition is generally preferred over inheritance. That doesn't mean that you should never use inheritance; you should use it where it makes more sense (which can be debatable).
There are many advantages to using composition; a couple of them are (see the sketch after this list):
You have full control of your implementation, i.e., you can expose only the methods you intend to expose.
Any changes in the composed (would-be super) class can be shielded by modifying only your class; client classes that use your class need not be modified.
It allows you to control when you want to load the composed class (lazy loading).
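To illustrate those three points, here is a minimal sketch (Engine and Car are made-up names):

// The composed class; its API may change over time.
class Engine {
    void start() { System.out.println("engine started"); }
    void runDiagnostics() { /* internal detail we don't want to expose */ }
}

// Car exposes only what it intends to; clients never see Engine directly.
class Car {
    private Engine engine;  // created lazily, only when first needed

    void start() {
        if (engine == null) {
            engine = new Engine();  // lazy loading of the composed object
        }
        engine.start();
        // If Engine's API changes, only Car needs updating; Car's clients don't.
    }
}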

Interfaces, what if not all implementations use all methods?

I'm fairly new to programming against interfaces and am trying to get it right as a major tool for test-driven development.
Currently we have a lot of Manager classes that all implement a CRUD interface. However, some Managers don't yet do updates, some don't do deletes, and some may never do so.
Not implemented exception?
Is it okay, to just
throw new NotImplementedException()
until the method gets implemented or even for all time if it never does?
(obviously with a source code comment telling the programmer "this method is not supposed to be used", since, e.g., types like 'male'/'female' never get deleted)?
Split?
Or should I split my CRUD interface into Creatable, Readable(Searchable), Updatable and Deletable? Wouldn't that clutter my class definition?
PersonManager implements Creatable<Person>, Updateable<Person>, Deletable<Person>, Searchable<Person>
Split and combine?
Or should I combine some interfaces like all 4 into CRUD and maybe some other combinations like Read + Update?
Maybe that would also create a load of interfaces where one has to click through a big inheritance path to find out which interface implements all the desired atomic interfaces for the current situation (I need read and create, so which one implements just those two? This can get a lot more complex quickly.)
IMO, for the middle stage it is OK to use NotImplementedException, until you finish implementing it.
However, as a permanent solution I believe it is bad practice [in most cases].
Instead, I'd create an interface that contains behavior common to all implementing classes, and use subinterfaces to cluster them up for more specific behavior.
The idea is similar to the standard Java SortedSet, which extends Set: we wouldn't want to regard every Set as a SortedSet and give a variable of that type a HashSet value; instead we use the sub-interface, SortedSet, for that purpose.
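As a sketch of that clustering (the atomic interface names come from the question; CrudManager is my own illustrative name):

interface Creatable<T>  { void create(T entity); }
interface Searchable<T> { T findById(long id); }
interface Updatable<T>  { void update(T entity); }
interface Deletable<T>  { void delete(T entity); }

// Sub-interface clustering the full behavior, just as SortedSet extends Set;
// callers that only need reads can keep depending on Searchable alone.
interface CrudManager<T> extends Creatable<T>, Searchable<T>, Updatable<T>, Deletable<T> { }

class Person { }

class PersonManager implements CrudManager<Person> {
    public void create(Person p) { /* ... */ }
    public Person findById(long id) { /* ... */ return null; }
    public void update(Person p) { /* ... */ }
    public void delete(Person p) { /* ... */ }
}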
Generally you would want to throw UnsupportedOperationException, which is a runtime exception clearly stating that the requested operation is not supported.
Having loads of interfaces leads to too many files, and anyone trying to look through them will get confused; Javadocs don't help much in such cases either.
Splitting an interface makes sense if there are too many operations for one interface and not all of the operations are logically bound together.
For database operations that's rarely the case, as you will have some basic operations that hold for most scenarios.
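A minimal sketch of that advice (GenderManager is a made-up name, echoing the 'male'/'female' example from the question):

class Gender { }

interface Deletable<T> { void delete(T entity); }

class GenderManager implements Deletable<Gender> {
    @Override
    public void delete(Gender g) {
        // Types like 'male'/'female' are never deleted: fail fast and clearly.
        throw new UnsupportedOperationException("gender types cannot be deleted");
    }
}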
NotImplementedException doesn't mean that the class doesn't support the action. It means it's not implemented yet, but will be in the future.
From a logical point of view, all interface methods must be implemented and must work well. If you leave one unimplemented in an application written just for yourself, you will remember the limitation. On the other hand, I would be angry to find that some developer implemented an interface and left a method unimplemented. So I don't think you can leave an interface method unimplemented just for future development.
My suggestion is rather to modify the interfaces than to use exceptions inside implemented methods.
In frameworks that support covariance and contravariance, it can be very useful to split up interfaces and then define some composite interfaces. For frameworks that do not offer such support, (and even sometimes on frameworks which do) it is sometimes more helpful to have an interface include methods which individual implementations may or may not support (implementations should throw an exception when unsupported actions are attempted); if one is going to do that, one should include methods or properties by which outside code can ask what actions are supported without needing to use any code that will throw an exception.
Even when using interfaces where support for actions is optional, however, it may sometimes be helpful to define additional interfaces which guarantee that certain actions will be available. Having interfaces which inherit other interfaces without adding new members can be a good way to do this. If done properly, the only extra work this requires of implementations is to make sure they declare themselves as the most specific type applicable. The situation for clients is a little more complex: if clients' needs can be adequately expressed in the type system, clients can avoid the need for run-time type-checking by demanding specific types. On the other hand, routines that pass instances between clients may be complicated by some clients' demands for a more specific type than the instance-passing code itself would otherwise require.
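A sketch of that "ask before you act" idea (all names here are illustrative): the interface keeps the optional operation but also exposes a query, so callers can check support without triggering the exception:

interface Repository<T> {
    void save(T entity);
    void delete(T entity);

    // Capability query: outside code can ask instead of catching exceptions.
    boolean supportsDelete();
}

class AppendOnlyRepository<T> implements Repository<T> {
    public void save(T entity) { /* ... */ }

    public boolean supportsDelete() { return false; }

    public void delete(T entity) {
        throw new UnsupportedOperationException("delete not supported");
    }
}

// Client side:
//     if (repo.supportsDelete()) { repo.delete(entity); }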

How to test Java app operating directly on external API

After coming from the Ruby world, I'm having some problems doing TDD in Java. The biggest issue is when I have an application that is just communicating with an external API.
Say I want to just fetch some data from Google Calendar, or 5 tweets from some Twitter user and display it.
In Ruby, I don't have any problems, because I can monkey-patch the API library in tests directly, but I have no such option in Java.
If I think about this in terms of MVC, my model objects are directly accessing the API through some library. The question is, is this bad design? Should I always wrap any API library in some interface, so I can mock/stub it in Java?
Because when I think about this, the only purpose of that interface would be to simulate (please don't kill me for saying this) the monkey-patch. Meaning that any time I use any external resource, I have to wrap each layer in an interface that can be stubbed out.
# do I have to abstract everything just to do this in Java?
Twitter.stub!(:search)
Now you might say that I should always abstract away the interface, so I can change the underlying layer to anything else. But if I'm writing a Twitter app, I'm not going to change it to an RSS reader.
Yes, I could add, for example, Facebook, and then it would make sense to have an interface. But when there is no other resource that can be substituted for the one I'm using, I still have to wrap everything in interfaces just to make it testable.
Am I missing something, or is this just a way to test in the Java world?
Using interfaces is just generally good practice in Java. Some languages have multiple inheritance, others have duck typing; Java has interfaces. It's a key feature of the language; it lets me use
different aspects of a class in different contexts and
different implementations of the same contract without changing client code.
So interfaces are a concept you should embrace in general, and then you would reap the benefits in situations like this where you could substitute your services by mock objects.
One of the most important books about Java best practices is Effective Java by Joshua Bloch. I would highly suggest you read it. In this context the most important part is Item 52: Refer to objects by their interfaces. Quote:
More generally, you should favor the use of interfaces rather than
classes to refer to objects. If appropriate interface types exist, then parameters, return values, variables, and fields should all be declared using interface
types. The only time you really need to refer to an object’s class is when you’re
creating it with a constructor.
And if you take things even further (e.g. when using dependency injection), you aren't even calling the constructor.
One of the key problems of switching languages is that you have to switch the way of thinking too. You can't program language x effectively while thinking in language y. You can't program C effectively without using pointers, Ruby not without duck typing and Java not without Interfaces.
Wrapping the external API is the way I would do this.
So, as you already said, you would have an interface and two classes: the real one and the dummy implementation.
Yes, it may seem unreasonable from the perspective of some services indeed being specific, like Twitter. But, this way your build process doesn't depend on external resources. Depending on external libraries isn't all that bad, but having your tests depend on actual data present or not present out there on the web can mess up the build process.
The easiest way is to wrap the API service with your interface/class pair and use that throughout your code.
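A minimal sketch of that seam (TweetSource and both implementations are hypothetical names): production code uses the real implementation, tests use the canned one.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// The seam: all application code depends only on this interface.
interface TweetSource {
    List<String> latestTweets(String user, int count);
}

// Real implementation delegating to whatever Twitter client library you use.
class TwitterApiTweetSource implements TweetSource {
    public List<String> latestTweets(String user, int count) {
        // ... call the actual Twitter API client here ...
        return Collections.emptyList();
    }
}

// Test double with canned data: no network access during the build.
class FakeTweetSource implements TweetSource {
    public List<String> latestTweets(String user, int count) {
        return Arrays.asList("tweet one", "tweet two");
    }
}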
I understand that what you want are Mock objects.
As you described it, one of the ways one can generate "test versions" of objects is by implementing a common interface and using it.
However, what you are missing is to simply extend the class (provided that it is not declared final) and override the methods that you want to mock. (NB: the possibility of doing that is the reason why it is considered bad form for a library to declare its classes final - it can make testing considerably harder.)
There are a number of Java libraries that aim to facilitate the use of mock objects - you can look at Mockito or EasyMock.
Mockito is handier and closer to your Ruby mocks.
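For instance, a Mockito-based test reads close to the Ruby version (TwitterClient here is a hypothetical wrapper class of your own; Mockito can mock concrete classes as long as the mocked methods aren't final):

import static java.util.Arrays.asList;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.util.List;
import org.junit.Test;

public class TwitterClientTest {
    // Hypothetical wrapper around the Twitter API.
    static class TwitterClient {
        List<String> search(String query) { return null; /* real HTTP call */ }
    }

    @Test
    public void stubsSearchMuchLikeRubyMocks() {
        TwitterClient twitter = mock(TwitterClient.class);
        when(twitter.search("java")).thenReturn(asList("tweet one", "tweet two"));

        // Code under test now receives canned results instead of hitting the network.
        assertEquals(2, twitter.search("java").size());
    }
}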
You can "monkey-patch" an API in Java. The Java language itself does not provide specific means to do it, but the JVM and the standard libraries do. In Ruby, developers can use the Mocha library for that. In Java, you can use the JMockit library (which I created because of limitations in older mocking tools).
Here is an example JMockit test, equivalent to the test_should_calculate_value_of_unshipped_orders test available in Mocha documentation:
@Test
public void shouldCalculateValueOfUnshippedOrders()
{
    final Order anOrder = new Order();
    final List<Order> orders = asList(anOrder, new Order(), new Order());

    // Dynamic partial mocking of the Order class: both the static
    // findAll() and the instance method getTotalCost() get recorded results.
    new NonStrictExpectations(Order.class)
    {{
        Order.findAll(); result = orders;
        anOrder.getTotalCost(); result = 10;
    }};

    // 3 unshipped orders at a total cost of 10 each.
    assertEquals(30, Order.unshippedValue());
}

What is the meaning of Plain Old Java Object (POJO)?

What does the term Plain Old Java Object (POJO) mean? I couldn't find anything explanatory enough.
POJO's Wikipedia page says that a POJO is an ordinary Java object, not a special object. Now, what makes or doesn't make an object special in Java?
The above page also says that a POJO should not have to extend prespecified classes, implement prespecified interfaces, or contain prespecified annotations. Does that also mean that POJOs are not allowed to implement interfaces like Serializable or Comparable, or extend classes like Applet, or any other user-written classes/interfaces?
Also, does the above policy (no extending, no implementing) mean that we are not allowed to use any external libraries?
Where exactly are POJOs used?
EDIT: To be more specific, am I allowed to extend/implement classes/interfaces that are part of the Java or any external libraries?
Plain Old Java Object: the name is used to emphasize that a given object is an ordinary Java object, not a special object such as those defined by the EJB 2 framework.
class A {}
class B extends/implements C {}
Note: B is not a POJO when C is some kind of distributed-framework class or interface, e.g. javax.servlet.http.HttpServlet, javax.ejb.EntityBean, or another J2EE extension - and not Serializable or Comparable, since those are valid for POJOs.
Here A is a simple, independent object.
B is a special object, since B extends/implements C: B gets extra meaning from C, is restricted to following C's rules, and is tightly coupled to the distributed framework. Hence B is not a POJO by definition.
Code using a reference of type A does not have to know anything special about its type, and A can be used with many frameworks.
So a POJO should not have to 1) extend prespecified classes or 2) implement prespecified interfaces.
A JavaBean is an example of a POJO: it is serializable, has a no-argument constructor, and allows access to its properties using getter and setter methods that follow a simple naming convention.
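A minimal JavaBean along those lines (Person is just an illustrative name):

import java.io.Serializable;

public class Person implements Serializable {
    private String name;
    private int age;

    // No-argument constructor, as the JavaBean convention requires.
    public Person() { }

    // Property access through get/set methods following the naming convention.
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}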
A POJO purely focuses on business logic and has no dependencies on (enterprise) frameworks.
It means it has the code for business logic, but how its instances are created, which service (EJB, ...) the object belongs to, and what special characteristics (stateful/stateless) it has are decided by the framework, using an external XML file.
Example 1: JAXB is a service that represents a Java object as XML; those Java objects are simple and come with a default constructor, getters, and setters.
Example 2: Hibernate, where a simple Java class represents a table and the columns are its instance variables.
Example 3: REST services. In REST services we have a service layer and a DAO layer to perform operations on the DB. The DAO holds vendor-specific queries and operations, and the service layer is responsible for deciding which DAO methods to call. The create and update methods of the DAO take POJOs as arguments and insert/update them in the DB. These POJOs (Java classes) have only the state (instance variables) of each column plus its getters and setters.
In practice, some people find annotations elegant while they see XML as verbose, ugly, and hard to maintain; others find that annotations pollute the POJO model.
Thus, as an alternative to XML, many frameworks (e.g. Spring, EJB, and JPA) allow annotations to be used instead of or in addition to XML.
Advantages:
Decoupling the application code from the infrastructure frameworks is one of the many benefits of using POJOs. Using POJOs future-proofs your application's business logic by decoupling it from volatile, constantly evolving infrastructure frameworks. Upgrading to a new version or switching to a different framework becomes easier and less risky. POJOs also make testing easier, which simplifies and accelerates development. Your business logic will be clearer and simpler because it won't be tangled with infrastructure code.
References: the POJO Wikipedia page.
According to Martin Fowler, he and some others came up with it as a way to describe something which was a standard class as opposed to an EJB etc.
Usage of the term implies what it's supposed to tell you. If, for example, a dependency injection framework tells you that you can inject a POJO into any other POJO they want to say that you do not have to do anything special: there is no need to obey any contracts with your object, implement any interfaces or extend special classes. You can just use whatever you've already got.
UPDATE: To give another example: while Hibernate can map any POJO (any object you created) to SQL tables, in Core Data (Objective-C on the iPhone) your objects have to extend NSManagedObject in order for the system to be able to persist them to a database. In that sense, Core Data cannot work with any POJO (or rather POOCO = PlainOldObjectiveCObject) while Hibernate can. (I might not be 100% correct re Core Data since I just started picking it up. Any hints / corrections are welcome :-) )
Plain Old Java Object :)
Well, you make it sound like those are all terrible restrictions.
In the usual context where POJO is/are used, it's more like a benefit:
It means that whatever library/API you're working with is perfectly willing to work with Java objects that haven't been doctored or manhandled in any way, i.e. you don't have to do anything special to get them to work.
For example, the XStream XML processor will (I think) happily serialize Java classes that don't implement the Serializable interface. That's a plus! Many products that work with data objects used to force you to implement SomeProprietaryDataObject or even extend an AbstractProprietaryDataObject class. Many libraries will expect bean behavior, i.e. getters and setters.
Usually, whatever works with POJOs will also work with not-so-PO-JO's. So XStream will of course also serialize Serializable classes.
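A sketch of what that looks like in practice (assuming XStream's basic toXML API; Person is a made-up class):

import com.thoughtworks.xstream.XStream;

public class XStreamDemo {
    // Plain class: no Serializable, no framework base class, no annotations.
    static class Person {
        String name = "Ada";
    }

    public static void main(String[] args) {
        XStream xstream = new XStream();
        // XStream serializes the undoctored object as-is.
        System.out.println(xstream.toXML(new Person()));
    }
}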
POJO is a Plain Old Java Object - as compared to something needing Enterprise Edition's (J2EE) stuff (beans etc...).
POJO is not really a hard-and-fast definition, and more of a hand-wavy way of describing "normal" non-enterprise Java objects. Whether using an external library or framework makes an object a POJO or not is kind of in the eye of the beholder, largely depending on WHAT library/framework, although I'd venture to guess that a framework tends to make something less of a POJO.
The whole point of a POJO is simplicity; you appear to be assuming it's something more complicated than it is.
If a library supports POJOs, it implies an object of any class is acceptable. It doesn't mean the POJO cannot have annotations/interfaces or that they won't be used if they are there, but it is not a requirement.
IMHO the wiki page is fairly clear. It doesn't say a POJO cannot have annotations/interfaces.
What does the term Plain Old Java Object (POJO) mean?
POJO was coined by Martin Fowler, Rebecca Parsons and Josh Mackenzie when they were preparing for a talk at a conference in September 2000. In Patterns of Enterprise Application Architecture, Martin Fowler explains how to implement a Domain Model pattern in Java, after enumerating some of the disadvantages of using EJB Entity Beans:
There's always a lot of heat generated when people talk about
developing a Domain Model in J2EE. Many of the teaching materials and
introductory J2EE books suggest that you use entity beans to develop a
domain model, but there are some serious problems with this approach,
at least with the current (2.0) specification.
Entity beans are most useful when you use Container Managed
Persistence (CMP)...
Entity beans can't be re-entrant. That is, if you call out from one
entity bean into another object, that other object (or any object it
calls) can't call back into the first entity bean...
...If you have remote objects with fine-grained interfaces you get
terrible performance...
To run with entity beans you need a container and a database
connected. This will increase build times and also increase the time
to do test runs since the tests have to execute against a database.
Entity beans are also tricky to debug.
As an alternative, he proposed to use Regular Java Objects for Domain Model implementation:
The alternative is to use normal Java objects, although this often
causes a surprised reaction—it's amazing how many people think that
you can't run regular Java objects in an EJB container. I've come to
the conclusion that people forget about regular Java objects because
they haven't got a fancy name. That's why, while preparing for a talk
in 2000, Rebecca Parsons, Josh Mackenzie, and I gave them one: POJOs
(plain old Java objects). A POJO domain model is easy to put together,
is quick to build, can run and test outside an EJB container, and is
independent of EJB (maybe that's why EJB vendors don't encourage you
to use them).
There is an abundance of posts that are half correct and half incorrect. The best example of the correct interpretation is given by Rex M in their answer here.
[POJOs are classes] that don't require any significant "guts" to make
it work. The idea is in contrast with very dependent objects that have
a hard time being (or can't be) instantiated and manipulated on their
own - they require other services, drivers, provider instances, etc.
to also be present.
Unfortunately, these very same answers often come along with the misunderstanding that POJOs are somehow simple or must have a simple structure. This is not necessarily true, and the confusion seems to stem from the fact that in the Java (POJO) and C# (POCO) world, business logic is relatively easily modeled, especially in web applications.
POJOs can have multiple levels of inheritance, generic types, abstractions, etc. It just so happens that this isn't required in the majority of web applications, as business logic doesn't necessitate it - a lot of the effort goes into databases, queries, data transfer objects, and repositories.
As soon as you step away from simple web apps, your POJOs start looking a lot more complex. E.g., make a web app that assigns taxis to user schedules. To do this, you need a graph-coloring algorithm. To color the graphs, you need a graph object, where each node is a schedule object. Now what if we want to make it generic, so that coloring the graph can be done not only with schedules but with other things as well? We can make it generic and abstract and add levels of inheritance - almost to the point of making it a mini library.
At this point, though, no matter its complexity, it's still a POJO because it doesn't rely on the guts of other frameworks.
A Plain Old Java Object (POJO) can contain all of the business logic for your extension.
For example, a POJO which contains a single method:
public class Extension {
    public static void logInfo(String message) {
        System.out.println(message);
    }
}

Open-closed principle and Java "final" modifier

The open-closed principle states that "Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification".
However, Joshua Bloch in his famous book "Effective Java" gives the following advice: "Design and document for inheritance, or else prohibit it", and encourages programmers to use the "final" modifier to prohibit subclassing.
I think these two principles clearly contradict each other (am I wrong?). Which principle do you follow when writing your code, and why? Do you leave your classes open, disallow inheritance on some of them (which ones?), or use the final modifier whenever possible?
Frankly, I think the open/closed principle is more an anachronism than not. It stems from the 80s and 90s, when OO frameworks were built on the principle that everything must inherit from something else and that everything should be subclassable.
This was most typified in UI frameworks of the era like MFC and Java Swing. In Swing, you have ridiculous inheritance where (IIRC) button extends checkbox (or the other way around), giving one of them behaviour that isn't used (I think it's the setDisabled() call on checkbox). Why do they share an ancestry? No reason other than, well, they had some methods in common.
These days composition is favoured over inheritance. Whereas Java allowed inheritance by default, .Net took the (more modern) approach of disallowing it by default, which I think is more correct (and more consistent with Josh Bloch's principles).
DI/IoC have also further made the case for composition.
Josh Bloch also points out that inheritance breaks encapsulation and gives some good examples of why. It's also been demonstrated that changing the behaviour of Java collections is more consistent if done by delegation rather than extending the classes.
Personally, I largely view inheritance as little more than an implementation detail these days.
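The collections point above usually refers to wrapping rather than subclassing, along the lines of Bloch's own InstrumentedSet example; a condensed sketch:

import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Counts insertions by delegating to a wrapped Set instead of extending HashSet.
// Extending HashSet is fragile because its addAll() may or may not call add()
// internally (an implementation detail); delegation has no such trap.
public class CountingSet<E> {
    private final Set<E> inner = new HashSet<E>();
    private int addCount = 0;

    public boolean add(E e) {
        addCount++;
        return inner.add(e);
    }

    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return inner.addAll(c);
    }

    public int getAddCount() { return addCount; }
}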
I don't think the two statements contradict each other. A type can be open for extension and still be closed for inheritance.
One way to do this is to employ dependency injection. Instead of creating instances of its own helper types, a type can have these supplied upon creation. This allows you to change the parts (i.e. open for extension) of the type without changing the type itself (i.e. close for modification).
Under the open-closed principle (open for extension, closed for modification) you can still use the final modifier. Here is one example:
public final class ClosedClass {
    private IMyExtension myExtension;

    public ClosedClass(IMyExtension myExtension)
    {
        this.myExtension = myExtension;
    }

    // methods that use the IMyExtension object
}

public interface IMyExtension {
    public void doStuff();
}
The ClosedClass is closed for modification inside the class, but open for extension through another one - in this case, anything that implements the IMyExtension interface. This trick is a variation of dependency injection, since we're feeding the closed class another object, here through the constructor. Since the extension point is an interface it can't be final, but its implementing classes can be.
Using final on classes to close them in java is similar to using sealed in C#. There are similar discussions about it on the .NET side.
I respect Joshua Bloch a great deal, and I consider Effective Java to pretty much be the Java bible. But I think that automatically defaulting to private access is often a mistake. I tend to make things protected by default so that they can at least be accessed by extending the class. This mostly grew out of a need to unit test components, but I also find it handy for overriding the default behavior of classes. I find it very annoying when I'm working in my own company's codebase and end up having to copy & modify the source because the author chose to "hide" everything. If it's at all in my power, I lobby to have the access changed to protected to avoid the duplication, which is far worse IMHO.
Also keep in mind that Bloch's background is in designing very public bedrock API libraries; the bar for getting such code "correct" must be set very high, so chances are it's not really the same situation as most code you'll be writing. Important libraries such as the JRE itself tend to be more restrictive in order to ensure that the language is not abused. See all the deprecated APIs in the JRE? It's almost impossible to change or remove them. Your codebase is probably not set in stone, so you do have the opportunity to fix things if it turns out you made a mistake initially.
Nowadays I use the final modifier by default, almost reflexively as part of the boilerplate. It makes things easier to reason about, when you know that a given method will always function as seen in the code you're looking at right now.
Of course, sometimes there are situations where a class hierarchy is exactly what you want, and it would be silly not to use one then. But be scared of hierarchies of more than two levels, or ones where non-abstract classes are further subclassed. A class should be either abstract or final.
Most of the time, using composition is the way to go. Put all the common machinery into one class, put the different cases into different classes, then compose instances to make a working whole.
You can call this "dependency injection", or "strategy pattern" or "visitor pattern" or whatever, but what it boils down to is using composition instead of inheritance to avoid repetition.
The two statements
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
and
Design and document for inheritance, or else prohibit it.
are not in direct contradiction with one another. You can follow the open-closed principle as long as you design and document for it (as per Bloch's advice).
I don't think that Bloch states that you should prefer to prohibit inheritance by using the final modifier, just that you should explicitly choose to allow or disallow inheritance in each class you create. His advice is that you should think about it and decide for yourself, instead of just accepting the default behavior of the compiler.
I don't think that the Open/closed principle as originally presented allows the interpretation that final classes can be extended through injection of dependencies.
In my understanding, the principle is all about not allowing direct changes to code that has been put into production, and the way to achieve that while still permitting modifications to functionality is to use implementation inheritance.
As pointed out in the first answer, this has historical roots. Decades ago, inheritance was in favor, developer testing was unheard of, and recompilation of the codebase often took too long.
Also, consider that in C++ the implementation details of a class (in particular, private fields) were commonly exposed in the ".h" header file, so if a programmer needed to change it, all clients would require recompilation. Notice this isn't the case with modern languages like Java or C#. Besides, I don't think developers back then could count on sophisticated IDEs capable of performing on-the-fly dependency analysis, avoiding the need for frequent full rebuilds.
In my own experience, I prefer to do the exact opposite: "classes should be closed for extension (final) by default, but open for modification". Think about it: today we favor practices like version control (makes it easy to recover/compare previous versions of a class), refactoring (which encourages us to modify code to improve design, or as a prelude to introducing new features), and developer testing, which provides a safety net when modifying existing code.
