Effective Java item 1 applicability with TDD and dependency injection - java

I have been reading Effective Java and I have some concerns regarding the first item, "Consider static factory methods instead of constructors", in relation to TDD and dependency injection.
The item says that rather than exposing a public/protected/default constructor, you should expose instance creation through a static factory. I agree with all the advantages of static factories: they can have names, they can return a subtype, they can reduce verbosity, and so on. But among the disadvantages I think Joshua missed TDD, because static factory calls in your code lead to tight coupling, and you cannot mock the class that is obtained through the static factory. That hampers test-driven development.
Second, I think he missed that in today's enterprise development most applications use one dependency injection container or another. If we can inject dependencies using DI, why should I use static factories at all?
Please explain how this item applies to today's Java enterprise development, which includes DI and TDD.

The DI engine is the factory.
Joshua Bloch understands DI well enough. I think this is an artifact of history, because DI's ascent came after the first edition of "Effective Java".
"Effective Java" was first published in 2001; Martin Fowler coined the term "dependency injection" in early 2004, and Spring's 1.0 release came in March 2004.
Joshua Bloch didn't modify that chapter for the second edition.
The point is the coupling that "new" introduces. Anyone who understands that coupling, and understands factories, will easily make the leap to DI engines. His statements about "new", and the remedy of using factories, are still true.

I see two separate issues here:
static factories vs. using new
dependency injection
When you use new, your code is coupled just as tightly as when you call a static method; actually the static factory is less limiting, since it can do some magic and return a specific implementation, as long as it is a subclass of the declared type or an implementation of the interface.
With dependency injection, the principle is also called the Hollywood Principle: "Don't call us, we'll call you." In that philosophy you should not call new() or the static factory in your code; instead, an external party does that for you, either the DI framework or the unit test. Whether a factory or new is used is then decided somewhere else.
In that case factories are better, because you can inject a test factory under your control. With new this is not possible (short of doing weird things to the classpath, like hiding the real implementation behind dummy objects in the test classpath, which I do not recommend, by the way).
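A minimal sketch of that difference (Widget, WidgetFactory, and the Report classes are made up for illustration):
// Hypothetical types, made up for illustration.
class Widget {}

interface WidgetFactory {
    Widget newWidget();
}

// Tightly coupled: the class chooses its own dependency; a test cannot intercept it.
class ReportA {
    private final Widget widget = new Widget(); // Widget.create() would be just as coupled here
}

// Hollywood Principle: construction happens elsewhere; a test can pass a fake factory.
class ReportB {
    private final WidgetFactory factory;

    ReportB(WidgetFactory factory) {
        this.factory = factory;
    }

    void run() {
        Widget w = factory.newWidget(); // real or test implementation, decided by the caller
    }
}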

I had the same concern you have, and that's how I found your question.
You say:
because having static factories in your code will lead to tight
coupling and you can't mock the class using it
The book suggests that you expose construction of your class through a static method (API design). It does not suggest that you "hardcode" the static call all over your application (API usage).
Suppose you are using Guice for DI, and your interface is called Connection, you could do:
bind(Connection.class).toInstance(Connections.makeASpecificConnection(params));
And then your usual @Inject Connection connection;
Of course this is if your connection is a singleton. If it's not, you could inject an abstract factory whose implementation creates instances by calling the static method of your class, but that might be overkill and you'd be better off using Guice alone.
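For context, a minimal sketch of that binding in a Guice module (Connection and Connections are stand-ins for the types assumed above; the URL argument is invented):
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;

// Hypothetical stand-ins for the answer's Connection/Connections:
interface Connection {}
final class Connections {
    static Connection makeASpecificConnection(String url) {
        return new Connection() {}; // imagine an expensive, configured connection
    }
}

public class ConnectionModule extends AbstractModule {
    @Override
    protected void configure() {
        // The module calls the static factory exactly once; client code never does.
        bind(Connection.class).toInstance(Connections.makeASpecificConnection("db.example.com"));
    }

    public static void main(String[] args) {
        Client client = Guice.createInjector(new ConnectionModule()).getInstance(Client.class);
    }
}

class Client {
    @Inject Connection connection; // supplied by Guice, not by new or a static call
}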
Remember that this book is not targeted primarily at building huge enterprise applications, although it's still very helpful there.
Quote from the preface of the book:
While this book is not targeted solely at developers of reusable
components, it is inevitably colored by my experience writing such
components over the past two decades. I naturally think in terms of
exported APIs (Application Programming Interfaces), and I encourage
you to do likewise.

Related

Dependency injection example

There's a nice tutorial about DI and IoC that seems to make sense.
Wouldn't it be easier to make the getCustomer methods static in the injector classes? I don't really see any reason to instantiate injector objects in this case.
So instead of:
MessageServiceInjector injector = null;
Consumer app = null;
injector = new EmailServiceInjector();
app = injector.getConsumer();
app.processMessages(msg, email);
We would just do:
Consumer app = null;
app = EmailServiceInjector.getConsumer();
app.processMessages(msg, email);
How to write injector classes properly? At the end of the article it says
We can achieve IoC through Factory Pattern, Template Method Design
Pattern, Strategy Pattern and Service Locator pattern too
but I guess none of those patterns is used in this example (the IoC here is just simplistic). So how would it be done in a real project (assuming no external frameworks or IoC containers are used), i.e. if I wanted to build all the necessary pieces to use DI in my code?
"Easier"? I don't think it makes any difference to your DI engine.
Static would assume one instance. What if you want an instance per request, as you might for each web request? Static won't do then.
I've used Spring for more than ten years. It has a bean factory and DI capability that can deal with singleton or per-request instances. Have a look at how it works to see one successful way to do it.
I think Martin Fowler's DI article, now 11 years old, still explains it well.
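For illustration, a sketch of those two scopes in Java-config Spring (MessageService, EmailService, Consumer, and MyDIApplication are the tutorial's hypothetical types; the request scope requires Spring Web):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.web.context.WebApplicationContext;

@Configuration
public class AppConfig {

    // One shared instance for the whole application (the default, singleton scope).
    @Bean
    public MessageService messageService() {
        return new EmailService();
    }

    // A fresh instance per HTTP request (web applications only); static can't express this.
    @Bean
    @Scope(value = WebApplicationContext.SCOPE_REQUEST,
           proxyMode = ScopedProxyMode.TARGET_CLASS)
    public Consumer consumer(MessageService service) {
        return new MyDIApplication(service);
    }
}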
In a real-world project you should use a dependency injection framework like the Spring Framework. There are others as well, such as Google Guice.
I have personally been using Spring for a couple of years now, and it has grown well beyond basic DI.
Wouldn't it be easier to make the getCustomer methods static in the injector classes?
It seems to me that what this article calls an Injector is actually an abstract factory. We use abstract factories when we want a class to create objects at a later point in time.
In general, static methods are not recommended, mainly because classes that call a static method take a hard dependency on it: you cannot later consume a different dependency without changing the consuming class's code. That is exactly the problem DI tries to solve.
Another reason is that it is hard to mock static methods in a unit test.
So how would it be done in a real project?
In real projects you should construct your objects (e.g. services) in the composition root.
There are two ways to do it. Either use a DI container, or use Pure DI.
In my opinion, Pure DI is a better choice. See this article for a reason why.
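A minimal Pure DI sketch of such a composition root, reusing the tutorial's hypothetical names (MessageService, EmailServiceImpl, Consumer, MyDIApplication):
interface MessageService {
    void sendMessage(String msg, String recipient);
}

class EmailServiceImpl implements MessageService {
    public void sendMessage(String msg, String recipient) {
        System.out.println("email to " + recipient + ": " + msg);
    }
}

interface Consumer {
    void processMessages(String msg, String recipient);
}

class MyDIApplication implements Consumer {
    private final MessageService service;
    MyDIApplication(MessageService service) { this.service = service; } // constructor injection
    public void processMessages(String msg, String recipient) {
        service.sendMessage(msg, recipient);
    }
}

public class Application {
    // Composition root: the only place that knows concrete types and calls new.
    public static void main(String[] args) {
        Consumer app = new MyDIApplication(new EmailServiceImpl());
        app.processMessages("Hi!", "user@example.com");
    }
}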

Axiom Deathmatch: Final Classes

Hibernate Community Documentation :
"A central feature of Hibernate, proxies (lazy loading), depends upon
the persistent class being either non-final, or the implementation of
an interface that declares all public methods. You can persist final
classes that do not implement an interface with Hibernate; you will
not, however, be able to use proxies for lazy association fetching
which will ultimately limit your options for performance tuning."
Effective Java Second Edition :
"Design & document for inheritance or else prohibit it"
Well, which one is right, or better yet, while using hibernate when should I follow one principle or the other? Should I make all classes final until I need the extra performance of using dynamic proxies? If I choose to use final classes, can I implement interfaces?
There are no hard laws, only guidelines. Effective Java is a great set of Java axioms that should be further investigated, validated, and meditated on. When it comes to our livelihood, however, we typically have little say in the hand we are given. Never blindly follow any philosophy. Put in the work, do the tests, and choose the right "way" for the job.

Should member variables of global objects be made global as well?

I'm developing plugins in Eclipse which mandates the use of singleton pattern for the Plugin class in order to access the runtime plugin. The class holds references to objects such as Configuration and Resources.
In Eclipse 3.0 plug-in runtime objects are not globally managed and so are not generically accessible. Rather, each plug-in is free to declare API which exposes the plug-in runtime object (e.g., MyPlugin.getInstance()).
In order for the other components of my system to access these objects, I have to do the following:
MyPlugin.getInstance().getConfig().getValue(MyPlugin.CONFIGKEY_SOMEPARAMETER);
which is overly verbose, IMO.
Since MyPlugin provides global access, wouldn't it be easier for me to just provide global access to the objects it manages as well?
MyConfig.getValue(MyPlugin.CONFIGKEY_SOMEPARAMETER);
Any thoughts?
(I'm actually asking because I was reading about the whole "Global variable access and singletons are evil" debates)
Any thoughts?
Yes, for the current use-case you are examining, you could marginally simplify your example code by using statics.
But think of the potential disadvantages of using statics:
What if in a future version of Eclipse Plugin objects are globally managed?
What if you want to reuse your configuration classes in a related Plugin?
What if you want to use a mock version of your configuration class for unit testing?
Also, you can make the code less verbose by refactoring; e.g.
... = MyPlugin.getInstance().getConfig().getValue(MyPlugin.CONFIGKEY_P1);
... = MyPlugin.getInstance().getConfig().getValue(MyPlugin.CONFIGKEY_P2);
becomes
MyConfig config = MyPlugin.getInstance().getConfig();
... = config.getValue(MyPlugin.CONFIGKEY_P1);
... = config.getValue(MyPlugin.CONFIGKEY_P2);
You suggest that
MyPlugin.getInstance().getConfig().getValue(MyPlugin.CONFIGKEY_SOMEPARAMETER);
is too verbose and
MyConfig.getValue(MyPlugin.CONFIGKEY_SOMEPARAMETER);
might be better. By that logic, wouldn't:
getMyConfigValue(MyPlugin.CONFIGKEY_SOMEPARAMETER);
be better yet (maybe not shorter, but simpler)? I'm suggesting you write a local helper method.
This gives you the advantage of readability without bypassing concepts that were crafted by people who have been through the effort of fixing code that was done the easy/short/simple way.
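A sketch of such a helper inside the consuming class (the String return type is an assumption about getValue):
// Local helper: keeps the verbose chain in one place without introducing new globals.
private static String getMyConfigValue(String key) {
    return MyPlugin.getInstance().getConfig().getValue(key);
}

// Usage:
String value = getMyConfigValue(MyPlugin.CONFIGKEY_SOMEPARAMETER);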
Generally globals are pretty nasty in any situation. Singletons are also an iffy concept, but they beat the hell out of public statics in a class.
Consider how you will mock out such a class. Mocking out public statics is amazingly annoying. Mocking out singletons is hard (you have to override your getter everywhere the singleton is used). Dependency injection is the next level, but it can be a tough call between DI and a few simple singletons.
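A sketch of that getter-override technique (all names hypothetical): the consuming class reaches the singleton only through a protected getter, which a test subclass overrides.
class Config {
    private static final Config INSTANCE = new Config();
    static Config getInstance() { return INSTANCE; }
    String getValue(String key) { return "real:" + key; }
}

class FakeConfig extends Config {
    @Override
    String getValue(String key) { return "fake:" + key; }
}

class ReportService {
    protected Config config() { return Config.getInstance(); } // the seam for tests
    String title() { return config().getValue("report.title"); }
}

// In the test: override the seam to supply the fake.
class TestableReportService extends ReportService {
    @Override
    protected Config config() { return new FakeConfig(); }
}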

cheap way to mock an interface with no runtime overhead

Suppose I have an interface with lots of methods that I want to mock for a test, and suppose that I don't need it to do anything, I just need the object under test to have an instance of it. For example, I want to run some performance testing/benchmarking over a certain bit of code and don't want the methods on this interface to contribute.
There are plenty of tools to do that easily, for example
Interface mock = Mockito.mock(Interface.class);
ObjectUnderTest obj = ...
obj.setItem(mock);
or whatever.
However, they all come with some runtime overhead that I would rather avoid:
Mockito records all calls, stashing the arguments for verification later
JMock and others (I believe) require you to define what they are going to do (not such a big deal), and then execution goes through a proxy of various sorts to actually invoke the method.
Good old java.lang.reflect.Proxy and friends all go through at least a few more method calls on the stack before getting to the method to be invoked, often reflectively.
(I'm willing to be corrected on any of the details of those examples, but I believe the principle holds.)
What I'm aiming for is a "real" no-op implementation of the interface, such as I could write by hand with everything returning null, false or 0. But that doesn't help if I'm feeling lazy and the interface has loads of methods. So, how can I generate and instantiate such a no-op implementation of an arbitrary interface at runtime?
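For a small interface (made up here), the hand-written version I mean would look like:
interface Item {
    String name();
    int count();
    boolean valid();
}

// Hand-written no-op implementation: every method returns null, false, or 0.
class NoOpItem implements Item {
    public String name() { return null; }
    public int count() { return 0; }
    public boolean valid() { return false; }
}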
There are tools available, such as PowerMock and CGLib, that use bytecode generation, but only as part of a larger mocking/proxying context, and I haven't yet figured out what to pick out of their internals.
OK, so the example may be a little contrived and I doubt that proxying will have too substantial an impact on the timings, but I'm curious now as to how to generate such a class. Is it easy in CGLib, ASM?
EDIT: Yes, this is premature optimisation and there's no real need to do it. After writing this question I think the last sentence didn't quite make my point that I'm more interested in how to do it in principle, and easy ways into dynamic class-generation than the actual use-case I gave. Perhaps poorly worded from the start.
Not sure if this is what you're looking for, but the "new class" wizard in Eclipse lets you build a new class and specify superclass and/or interface(s). If you let it, it will auto-generate dummy implementations of all interface/abstract methods (returning null unless void). It's pretty painless to do.
I suspect the other "big name" IDEs, such as NetBeans and Idea, have similar facilities.
EDIT:
Looking at your question again, I wonder why you'd be concerned about performance of auto proxies when dealing with test classes. It seems to me that if performance is an issue, you should be testing "real" functionality, and if you're dealing with mostly-unimplemented classes anyway then you shouldn't be in a testing situation where performance matters.
It would take a little work to build the utility, but probably not too much for a basic vanilla Java interface without edge cases (annotations, etc.): use Javassist code generation to create, at runtime, a class that implements a null version of every method defined on the interface. This is different from Javassist's ProxyFactory (or CGLib's Enhancer) proxy objects, which still have a few layers of indirection. With direct bytecode generation there should be no overhead in the resulting class. If you are brave you could also dive into ASM to do the same thing.
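A rough, untested sketch of that Javassist approach, ignoring edge cases (generic bridge methods, interfaces in java.* packages, and the toClass(Lookup) requirements of newer JVMs):
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.CtNewMethod;
import javassist.Modifier;

public final class NoOpGenerator {
    @SuppressWarnings("unchecked")
    public static <T> T noOpInstance(Class<T> iface) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass ctIface = pool.get(iface.getName());
        CtClass impl = pool.makeClass(iface.getName() + "NoOp");
        impl.addInterface(ctIface);
        for (CtMethod m : ctIface.getMethods()) {
            if (!Modifier.isAbstract(m.getModifiers())) {
                continue; // skip non-abstract (e.g. Object-inherited) methods
            }
            CtMethod noOp = CtNewMethod.copy(m, impl, null);
            noOp.setModifiers(noOp.getModifiers() & ~Modifier.ABSTRACT);
            noOp.setBody(null); // a null body returns 0/false/null (or nothing for void)
            impl.addMethod(noOp);
        }
        return (T) impl.toClass().getDeclaredConstructor().newInstance();
    }
}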

Open-closed principle and Java "final" modifier

The open-closed principle states that "Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification".
However, Joshua Bloch in his famous book "Effective Java" gives the following advice: "Design and document for inheritance, or else prohibit it", and encourages programmers to use the "final" modifier to prohibit subclassing.
I think these two principles clearly contradict each other (am I wrong?). Which principle do you follow when writing your code, and why? Do you leave your classes open, disallow inheritance on some of them (which ones?), or use the final modifier whenever possible?
Frankly, I think the open/closed principle is more an anachronism than not. It stems from the 80s and 90s, when OO frameworks were built on the principle that everything must inherit from something else and that everything should be subclassable.
This was most typified in UI frameworks of the era like MFC and Java Swing. In Swing you have ridiculous inheritance where (IIRC) button extends checkbox (or the other way around), giving one of them behaviour that isn't used (I think it's the setDisabled() call on checkbox). Why do they share an ancestry? No reason other than, well, they had some methods in common.
These days composition is favoured over inheritance. Whereas Java allows overriding by default, .NET took the (more modern) approach of making methods non-virtual by default, which I think is more correct (and more consistent with Josh Bloch's principles).
DI/IoC have also further made the case for composition.
Josh Bloch also points out that inheritance breaks encapsulation and gives some good examples of why. It's also been demonstrated that changing the behaviour of Java collections is more consistent if done by delegation rather than extending the classes.
Personally, I largely view inheritance as little more than an implementation detail these days.
I don't think the two statements contradict each other. A type can be open for extension and still be closed for inheritance.
One way to do this is to employ dependency injection. Instead of creating instances of its own helper types, a type can have these supplied upon creation. This allows you to change the parts (i.e. open for extension) of the type without changing the type itself (i.e. close for modification).
In open-closed principle (open for extension, closed for modification) you can still use the final modifier. Here is one example:
public final class ClosedClass {
    private final IMyExtension myExtension;

    public ClosedClass(IMyExtension myExtension) {
        this.myExtension = myExtension;
    }

    // methods that use the IMyExtension object
}

public interface IMyExtension {
    void doStuff();
}
ClosedClass is closed for modification, but open for extension through the injected object: in this case, anything that implements the IMyExtension interface. This trick is a variation of dependency injection, since we feed the closed class another object, here through the constructor. Since the extension point is an interface it can't be final, but its implementing classes can be.
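For instance, a final implementation (name made up) that ClosedClass can be extended with:
public final class LoggingExtension implements IMyExtension {
    @Override
    public void doStuff() {
        System.out.println("doing stuff");
    }
}

// Usage: new ClosedClass(new LoggingExtension())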
Using final on classes to close them in java is similar to using sealed in C#. There are similar discussions about it on the .NET side.
I respect Joshua Bloch a great deal, and I consider Effective Java to pretty much be the Java bible. But I think that automatically defaulting to private access is often a mistake. I tend to make things protected by default so that they can at least be accessed by extending the class. This mostly grew out of a need to unit test components, but I also find it handy for overriding the default behavior of classes. I find it very annoying when I'm working in my own company's codebase and end up having to copy & modify the source because the author chose to "hide" everything. If it's at all in my power, I lobby to have the access changed to protected to avoid the duplication, which is far worse IMHO.
Also keep in mind that Bloch's background is in designing very public bedrock API libraries; the bar for getting such code "correct" must be set very high, so chances are it's not really the same situation as most code you'll be writing. Important libraries such as the JRE itself tend to be more restrictive in order to ensure that the language is not abused. See all the deprecated APIs in the JRE? It's almost impossible to change or remove them. Your codebase is probably not set in stone, so you do have the opportunity to fix things if it turns out you made a mistake initially.
Nowadays I use the final modifier by default, almost reflexively as part of the boilerplate. It makes things easier to reason about, when you know that a given method will always function as seen in the code you're looking at right now.
Of course, sometimes there are situations where a class hierarchy is exactly what you want, and it would be silly not to use one then. But be scared of hierarchies of more than two levels, or ones where non-abstract classes are further subclassed. A class should be either abstract or final.
Most of the time, using composition is the way to go. Put all the common machinery into one class, put the different cases into different classes, then compose instances to form a working whole.
You can call this "dependency injection", or "strategy pattern" or "visitor pattern" or whatever, but what it boils down to is using composition instead of inheritance to avoid repetition.
The two statements
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
and
Design and document for inheritance, or else prohibit it.
are not in direct contradiction with one another. You can follow the open-closed principle as long as you design and document for it (as per Bloch's advice).
I don't think that Bloch states that you should prefer to prohibit inheritance by using the final modifier, just that you should explicitly choose to allow or disallow inheritance in each class you create. His advice is that you should think about it and decide for yourself, instead of just accepting the default behavior of the compiler.
I don't think that the Open/closed principle as originally presented allows the interpretation that final classes can be extended through injection of dependencies.
In my understanding, the principle is all about not allowing direct changes to code that has been put into production, and the way to achieve that while still permitting modifications to functionality is to use implementation inheritance.
As pointed out in the first answer, this has historical roots. Decades ago, inheritance was in favor, developer testing was unheard of, and recompilation of the codebase often took too long.
Also, consider that in C++ the implementation details of a class (in particular, private fields) were commonly exposed in the ".h" header file, so if a programmer needed to change it, all clients would require recompilation. Notice this isn't the case with modern languages like Java or C#. Besides, I don't think developers back then could count on sophisticated IDEs capable of performing on-the-fly dependency analysis, avoiding the need for frequent full rebuilds.
In my own experience, I prefer to do the exact opposite: "classes should be closed for extension (final) by default, but open for modification". Think about it: today we favor practices like version control (makes it easy to recover/compare previous versions of a class), refactoring (which encourages us to modify code to improve design, or as a prelude to introducing new features), and developer testing, which provides a safety net when modifying existing code.
