IoC and Dependency Injection - Java

What's wrong with the following code?
public class DeDoper {
    public boolean wackapediaOkToday() {
        DnsResolver resolver = ResolverFactory.getInstance().makeResolver();
        return resolver.getIpAddressFor("wackapedia.org").equals("123.456.78.9");
    }
}
Why is this version preferred?
public class DeDoper {
    @InjectService
    private DnsResolver resolver;

    public boolean wackapediaOkToday() {
        return resolver.getIpAddressFor("wackapedia.org").equals("123.456.78.9");
    }
}
I can easily mock ResolverFactory.makeResolver(), and that seems equivalent to setting resolver in the latter example.
This is what was said in this article from ProQuest.biz:
This [first] version of WackapediaOkToday is, very loosely speaking, "injected" with a DnsResolver (although it's admittedly less like getting a shot, and more like asking a waiter for the check). But it does solve the testing problem, and the "turtles all the way down" problem.
Chained to the Factory
But in this [first version] approach, we are literally "chained" to the Factory classes. (Worse, if the objects created by our Factories have dependencies in turn, we may have to introduce new Factories inside our Factories.) We haven't fully "inverted" our control, we're still calling (controlling) the Factories from inside our classes.
What was needed was a way to get rid of the control in our classes entirely, and have them told what they were getting (for their dependencies).
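For comparison, here is a minimal constructor-injection sketch of the same class; the container (or a test) that supplies the DnsResolver is assumed and not shown:

public class DeDoper {
    private final DnsResolver resolver;

    // The class no longer looks anything up; it is told what it gets.
    public DeDoper(DnsResolver resolver) {
        this.resolver = resolver;
    }

    public boolean wackapediaOkToday() {
        return resolver.getIpAddressFor("wackapedia.org").equals("123.456.78.9");
    }
}

A test can now hand in a fake DnsResolver directly, with no Factory and no mocking of static methods required.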

Let's say you had a service which can have multiple implementations, like "FileLocator". This has a FilesystemFileLocator, which takes as an argument the path to the root of the files, and an S3FileLocator, which takes as an argument your S3 credentials. The former approach would require you to write a service locator to figure out which version you want and then return it. That code, in turn, has to go get your data, build the appropriate type of file locator, etc. You're doing what the IoC container should do for you. On top of that, you've injected a dependency on that specific creation method.
In the second version you've defined (through annotations or XML) what type of file locator you want. The IoC container instantiates and manages it for you. It's less code you have to maintain. It's also less work if you want to introduce a third type of FileLocator. Or maybe you refactor your code so file locators are singletons, or if they were singletons they're now factories for fresh locators, or maybe you want to pool locator instances. In all these cases there will be less breakage if you work with your IoC container.
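As a rough sketch of what "defining it through annotations or XML" could look like with a Guice-style container (the module below is hypothetical, and the FilesystemFileLocator's root path would still need its own binding or an @Provides method):

import com.google.inject.AbstractModule;

public class StorageModule extends AbstractModule {
    @Override
    protected void configure() {
        // The container, not the calling code, decides which implementation is built.
        bind(FileLocator.class).to(FilesystemFileLocator.class);
        // Switching to S3 later is a one-line change here:
        // bind(FileLocator.class).to(S3FileLocator.class);
    }
}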

Related

Using dependency injection (via Guice) in a multi-module environment

I have a slight dilemma involving Guice and avoiding non-Guice singletons. Consider a multi-module project where there are 3 modules: shared, frontend and backend. Both the frontend and the backend use an instance of a Profiling class inside of the shared module (which times methods, and is used extensively throughout the project).
Almost every single class requires the use of this Profiling instance (including User objects created dynamically when a user connects).
If every single class requires an instance of the Profiling class, there are drawbacks to different methods of doing it:
Solution 1 (in the constructor, copied to the instance field):
private final Profiling profiling;
@Inject
public User(Profiling profiling, String username)
Drawback: You have to include a Profiling object in every single constructor. This is cumbersome and slightly pointless. You also have to store Guice's Injector statically (or inject it) so that you can create the User objects on-the-fly and not just when the program is first loaded.
Solution 2 (as a instance field only):
@Inject
private Profiling profiling;
public User(String username)
Drawback: Similar as above, you have to use Guice's Injector to instantiate every single object. This means that to create User objects dynamically, you need an instance of the Injector in the class creating the User objects.
Solution 3 (as a static field in one (main) class, created manually by us)
public static Profiling PROFILING; // Can also use a get method

public Application() {
    Application.PROFILING = injector.getInstance(Profiling.class);
}
Drawback: Goes against Guice's/dependency injection recommendations - creating a singleton Profiling object which is accessed statically (by Application.PROFILING.start()) defeats the purpose of Guice?
Solution 4 (as a static field in every single class, injected by Guice)
@Inject
private static Profiling profiling;
// You need to request every single class:
// requestStaticInjection(XXXX.class)
Drawback: Again, this goes against Guice's/dependency injection recommendations because it relies on static injection. I also have to request static injection for every single class that Guice needs to inject Profiling into (which is also cumbersome).
Is there any better way to design my project and avoid falling back to the singleton design patterns I used to use?
TL;DR: I want to be able to access this Profiling instance (one per module) across every single class without falling back to singleton design patterns.
Thanks!
Pragmatically, I would use a normal singleton, maybe through a single-field Enum or a similar pattern.
To see why, you should ask the question: What is the purpose of Guice, and of dependency injection in general? The purpose is to decouple the pieces of your application so they can be independently developed and tested, and centrally configured and rearranged. With that in mind, you need to weigh the cost of coupling against the cost of decoupling. This varies based on the object you're choosing to couple or decouple.
The cost of coupling, here, is that you would be unable to operate any piece of your application without a real working instance of Profiling, including in tests, including for model objects like User. Consequently, if Profiling makes any assumptions about its environment—the availability of high-resolution system timing calls, for instance—you will be unable to use classes like User without allowing Profiling to be disabled. Furthermore, if you want your tests to profile with a new (non-Singleton) Profiling instance for the sake of test isolation, you'll need to implement that separately. However, if your Profiling class is lightweight enough not to pose a huge burden, then you might still choose this way forward.
The cost of decoupling is that it can force every object to become an injectable, as opposed to a newable. You would then be able to substitute new/dummy/fake implementations of Profiling in your classes and tests, and reconfigure to use different Profiling behavior in different containers, though that flexibility may not have immediate benefits if you don't have a reason to substitute those implementations. For classes like User created later, you would need to pursue factory implementations, such as those provided through Guice assisted injection or AutoFactory code generation. (Remember that you can create an arbitrary number of objects by injecting Provider<T> instead of T for any object you would otherwise inject, and that injecting a Factory instance is like injecting a customized Provider whose get method takes the parameters you choose.)
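As an illustration of the Provider<T> point, a small sketch (the RequestHandler class is invented; Profiling and start() come from the question):

import com.google.inject.Inject;
import com.google.inject.Provider;

public class RequestHandler {
    private final Provider<Profiling> profilingProvider;

    @Inject
    public RequestHandler(Provider<Profiling> profilingProvider) {
        this.profilingProvider = profilingProvider;
    }

    public void handle() {
        // get() returns a fresh or shared instance depending on how
        // Profiling is scoped in the module; this class doesn't care.
        Profiling profiling = profilingProvider.get();
        profiling.start();
        // ... do the actual work ...
    }
}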
Regarding your solutions:
Solutions 1 and 2, per-object injection: This is where a Factory would shine. (I'd favor constructor injection, given the choice, so I'd go with solution 1 between them.) Of course, everything that creates a new User would need to instead inject a User.Factory, so that may turn a tightly-scoped project into a project to convert every class in your codebase to DI—which may be an unacceptable cost for what you're trying to do now.
// Nested interface for your Factory:
public interface Factory { User get(String username); }

// Mark the parameters that the caller provides with @Assisted:
@Inject public User(Profiling profiling, @Assisted String username) { ... }

// Wire up your Factory in a Module's configure:
install(new FactoryModuleBuilder().build(User.Factory.class));

// Now you can create new Users on the fly:
@Inject User.Factory userFactory;
User myNewUser = userFactory.get("timothy");
Solution 3, requesting static injection of a main holder, approximates what I had in mind: for the objects not created through dependency injection, request static injection for a single class, like ProfilingHolder or some such. You could even give it no-op behavior for flexibility's sake:
public class ProfilingHolder {
    // Populate with requestStaticInjection(ProfilingHolder.class).
    @Inject static Profiling profilingInstance;

    private ProfilingHolder() { /* static access only */ }

    public static Profiling getInstance() {
        if (profilingInstance == null) {
            // Run without profiling in isolation and tests.
            return new NoOpProfilingInstance();
        }
        return profilingInstance;
    }
}
Of course, if you're relying on calls to VM singletons, you're really embracing a normal VM-global static singleton pattern, just with interplay to use Guice where possible. You could easily turn this pattern around and have a Guice module bind(Profiling.class).toInstance(Profiling.INSTANCE); and get the same effect (assuming Profiling can be instantiated without Guice).
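A sketch of that turnaround, assuming Profiling can reasonably be written as an enum singleton (the enum form is illustrative, not taken from the question; each type would live in its own file):

// Plain VM-global singleton, usable with or without Guice.
public enum Profiling {
    INSTANCE;

    public void start() { /* ... */ }
}

// Guice module handing the same instance to injectable classes.
public class ProfilingModule extends com.google.inject.AbstractModule {
    @Override
    protected void configure() {
        bind(Profiling.class).toInstance(Profiling.INSTANCE);
    }
}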
Solution 4, requestStaticInjection for every single class is the only one I wouldn't consider. The list of classes is too long, and the likelihood that they'll vary Profiling is too slim. You'd turn a Module into a high-maintenance-cost grocery list rather than any sort of valuable configuration, and you'd force yourself into breaking encapsulation or using Guice for testing.
So, in summary, I'd choose Guice singleton injection for your current injectable objects, normal singleton for your current newable objects, and the option of migrating to Factories if/when any of your newables make the leap to injectables.

Should I inject objects needed for execution of an algorithm? Should I inject everything?

Maybe I missed it in the documentation, but I'm wondering how I should handle "helper objects".
Code example:
public Path dijkstra(Node startNode, Node endNode) {
    Set<Node> nodesToInspect = new HashSet<Node>(); // should this object be injected?
    Path path = new Path(); // and this one?

    while (!nodesToInspect.isEmpty()) {
        // some logic like:
        path.add(currentNode);
    }
    return path;
}
Should I inject everything, or should I say at some point that the algorithm "knows" best what it needs?
Should I try to eliminate every "new"? Or are some object creations fine, for example API classes like HashSet, ArrayList, etc.?
Before you replace a simple new with dependency injection, you need to ask yourself "why am I doing this?" ... "what real benefit does it have?". If the answer is "I don't know" or "nothing", then you shouldn't.
In this case, I can see no real benefit in using DI in the first cases in your example code. There is no need for anything outside of that method to know about how the internal set is represented ... or even to know that it exists.
The other question you should ask is whether there is a simpler, more obvious way of achieving the goal. For example, the (most likely) purpose of using DI for the path variable is to allow the application to use a different Path class. But the simple way to do that is to pass a Path instance to the dijkstra method as an explicit parameter. You could even use overloading to make this more palatable; e.g.
public Path dijkstra(Node startNode, Node endNode) {
    return dijkstra(startNode, endNode, new Path());
}

public Path dijkstra(Node startNode, Node endNode, Path path) {
    ...
}
The final thing to consider is that DI (in Java) involves reflection at some level, and is inevitably more expensive than the classical approaches of using new or factory objects / methods. If you don't need the extra flexibility of DI, you shouldn't pay for it.
I just noticed that the two variables you are referring to are local variables. I'm not aware of any DI framework that allows you to inject local variables ...
Remember the design principle: "Encapsulate what changes often" or "encapsulate what varies". As the engineer, you know best what is likely to change. You wouldn't want to hard-code the year 2012 into code that will live until the next decade, but you also wouldn't want to make "Math.PI" a configuration setting either; that'd be extra overhead and configuration that you'd never need to touch.
That principle doesn't change with dependency injection.
Are you writing one algorithm and you know which implementation of Set you need, like in your Dijkstra example? Create your object yourself. But what if your co-worker is soon to deliver a fantastic new implementation of Set optimized for your use-case, or what if you are experimenting with different collection implementations for benchmarking? Maybe you'll want to inject a Provider until the dust settles. That's less likely for collections, but maybe it's more likely for similar disposable objects you might think of as "helper objects".
Suppose you're choosing between different types of CreditCardAuthService that may vary at runtime. Great case for injection. But what if you've signed a five-year contract and you know your code is going to be replaced long before then? Maybe hard-coding to one service makes more sense. But then you have to write some unit tests or integration tests and you really don't want to use a real credit card backend. Back to dependency injection.
Remember: code is malleable. Someday you may decide that you need to rip out your hard-coded HashSet and replace it with something else, which is fine. Or maybe you'll discover that you need your service to vary often enough that it should be Guice-controlled, and then you add a constructor parameter and a binding and call it a day. Don't worry about it too much. Just keep in mind that just because you have a hammer, not every problem is a Provider<WoodFasteningService>.
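For reference, "a constructor parameter and a binding" is roughly this much code; a hedged sketch in which the CheckoutService and implementation names are invented for illustration:

import com.google.inject.Inject;

public class CheckoutService {
    private final CreditCardAuthService authService;

    @Inject
    public CheckoutService(CreditCardAuthService authService) {
        this.authService = authService;
    }
    // ... use authService instead of a hard-coded implementation ...
}

// Plus one line in a module, e.g.:
// bind(CreditCardAuthService.class).to(AcmeCreditCardAuthService.class);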
When working with DI, I prefer to avoid "new" whenever possible. Every instance you create outside the container (e.g. the Guice injector) cannot be refactored to use injection (what if your "Path" instance should get a String "root" injected from configuration?), and you cannot use interceptors on those objects either.
So unless it is a pure entity POJO, I use Provides/Inject all the time.
It's also part of the inversion of control (Hollywood) pattern and helps testability: what if you need to mock Path or Set in JUnit? If you have them injected (in this case via Providers), you can easily switch the concrete implementation afterwards.
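For example, a sketch of what injecting a Provider for Path might look like, assuming you decide that flexibility is worth it (DijkstraRouter is an invented name; Path and Node come from the question):

import com.google.inject.Inject;
import com.google.inject.Provider;

public class DijkstraRouter {
    private final Provider<Path> pathProvider;

    @Inject
    public DijkstraRouter(Provider<Path> pathProvider) {
        this.pathProvider = pathProvider;
    }

    public Path dijkstra(Node startNode, Node endNode) {
        // A test module can bind Path to a mock or a preconfigured instance.
        Path path = pathProvider.get();
        // ... algorithm logic ...
        return path;
    }
}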

Java Annotations and apt (fundamentals)

I'm really rolling up my sleeves and trying to understand Java annotations for the first time, and have read the Sun, Oracle and Wikipedia articles on the subject. They're easy to understand conceptually, but I'm finding it difficult to put all the pieces of the puzzle together.
The following example is probably terrible engineering, but just humor me (it's an example!).
Let's say I have the following class:
public class Widget
{
    // ...
    public void foo(int cmd)
    {
        switch (cmd)
        {
            case 1:
                function1();
                break;
            case 2:
                function2();
                break;
            case 3:
            default:
                function3();
                break;
        }
    }
}
Now, somewhere else in my project, I have another class, SpaceShuttle, that has a method called blastOff():
public class SpaceShuttle
{
    // ...
    public void blastOff()
    {
        // ...
    }
}
Now then, I want to configure an annotation called Widgetize so that any methods annotated with @Widgetize will have Widget::foo(int) invoked prior to their own call.
@interface Widgetize
{
    int cmd() default 2;
}
So now let's revisit SpaceShuttle:
public class SpaceShuttle
{
    // ...
    @Widgetize(cmd = 3)
    public void blastOff()
    {
        // Since we pass a cmd of 3 to @Widgetize,
        // Widget::function3() should be invoked, per
        // Widget::foo()'s definition.
    }
}
Alas, my questions!
I assume that somewhere I need to define an annotation processor; a Java class that will specify what to do when @Widgetize(int) annotations are encountered, yes? Or does this happen in, say, XML config files that get fed into apt (like the way ant reads build.xml files)?
Edit: If I was correct about these annotation processors in question #1 above, then how do I "map"/"register"/make these processors known to apt?
In buildscripts, is apt typically run before javac, so that annotation-based changes or code generation take place prior to the compile? (This is a best practices-type question.)
Thanks and I apologize for my code samples, they turned out a lot bulkier than I intended them to (!)
This sounds more like AOP (aspect-oriented programming) than annotations. The topics are often confused, since AOP uses annotations to achieve its goals. Rather than reinvent AOP from scratch, I would recommend looking at an existing AOP library such as AspectJ.
However, to answer your specific question, there are two possible approaches to achieve your goal.
Runtime Approach
This is the approach typically taken by container frameworks (like Spring). The way it works is that instead of instantiating your classes yourself, you ask a container for an instance of your class.
The container has logic to examine the class for any runtime annotations (like @Widgetize). The container will then dynamically create a proxy of your class that calls the correct Widgetize method first and then calls the target method.
The container will then return that proxy to the original requester. The requester will still think it got the class (or interface) that it asked for and be completely unaware of the proxying behavior added by the container.
This is also the behavior used by AspectJ.
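To make the runtime mechanics concrete, here is a rough, container-free sketch using a plain JDK dynamic proxy. It assumes the annotation has runtime retention and that the annotated method lives on an interface (JDK proxies can only proxy interfaces); the Launchable interface and the helper class are invented for illustration:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;

@Retention(RetentionPolicy.RUNTIME)
@interface Widgetize {
    int cmd() default 2;
}

interface Launchable {
    @Widgetize(cmd = 3)
    void blastOff();
}

public class WidgetizeProxies {
    @SuppressWarnings("unchecked")
    static <T> T widgetized(T target, Class<T> iface, Widget widget) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (proxy, method, args) -> {
                    // If the interface method carries @Widgetize, run Widget.foo first.
                    Widgetize w = method.getAnnotation(Widgetize.class);
                    if (w != null) {
                        widget.foo(w.cmd());
                    }
                    return method.invoke(target, args);
                });
    }
}

A container does essentially this for you (often with generated subclasses rather than JDK proxies) when it hands back the object you asked it for.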
Enhancement Approach
This is the approach taken by AspectJ. To be honest, I don't know a lot of the details of how it works. Somehow, AspectJ will scan your class files (the byte code), figure out where the annotations are, and then modify the byte code itself to call the proxy class instead of the actual class.
The benefit of this approach is that you don't need to use a container. The drawback is that you now have to do this enhancement step after you compile your code.
I assume that somewhere I need to define an annotation processor; a Java class that will specify what to do when @Widgetize(int) annotations are encountered, yes? Or does this happen in, say, XML config files that get fed into apt (like the way ant reads build.xml files)?
In Java 1.6, the standard way to define annotation processors is through the ServiceLoader SPI.
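For illustration, a bare-bones processor might look like the sketch below (the package and class names are invented); it is registered by listing its fully qualified name in a file named META-INF/services/javax.annotation.processing.Processor on the compiler's classpath:

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.Widgetize")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class WidgetizeProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                // Generate source files or emit diagnostics here.
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.NOTE, "Found @Widgetize on " + element);
            }
        }
        return false; // don't claim the annotation; let other processors see it too
    }
}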
In buildscripts, is apt typically run before javac, so that annotation-based changes or code generation take place prior to the compile? (This is a best practices-type question.)
APT must take place before compilation, as it operates on source files (actually on syntax trees).
I use method interceptors often with Hibernate. Hibernate requires that a transaction be started and committed round every query. Rather than have lots of duplicate code I intercept every Hibernate method and start the transaction in the interceptor.
I use AOP Alliance method interceptors in conjunction with Google Guice for this. With these, you keep your Widgetize annotation and then use Guice to say: wherever you see this annotation, use this method interceptor. The following Guice snippet does this.
bindInterceptor(Matchers.any(), Matchers.annotatedWith(Transactional.class), new TransactionalInterceptor());
The interceptor catches the method call; you can then call foo and then tell the method interceptor to proceed with the invocation of the original method. For example (in a simplified form):
public class Interceptor implements MethodInterceptor {
    // Put any dependencies here in a constructor.
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // Do foo here, before the original method runs.
        Object result = invocation.proceed();
        return result;
    }
}
This might be a little confusing and it took me a while to get my head around but it is quite simple once you understand it.
It seems that the basis of your confusion is an incorrect belief that annotations are something more than just metadata. Check out this page from the JSE 1.5 language guide. Here is a relevant snippet:
Annotations do not directly affect program semantics, but they do affect the way programs are treated by tools and libraries, which can in turn affect the semantics of the running program. Annotations can be read from source files, class files, or reflectively at run time.
In your code,
@Widgetize(cmd = 3)
public void blastOff()
does not cause Widget::function3() to execute (aside: in Java we reference a member as Widget.function3()). The annotation is just the equivalent of a machine-readable comment (i.e. metadata).
Annotations can be processed both at compile time and at runtime. To process them at runtime requires using the reflection API, which will have a performance impact if it's not done smartly.
It's also possible to process annotations at compile time. The goal is to generate a class which can then be used by your code. There is a specific interface annotation processors have to satisfy, and they have to be declared in order to be executed.
Take a look at the following article for an example that is structurally similar to your use case:
Annotation processing 101

Would you use DI or a factory?

My application stores files, and you have the option of storing the files on your own server or using S3.
I defined an interface:
interface FileStorage {
}
Then I have 2 implementations, S3FileStorage and LocalFileStorage.
In the control panel, the administrator chooses which FileStorage method they want, and this value is stored in a SiteConfiguration object.
Since the FileStorage setting can be changed while the application is already running, would you still use spring's DI to do this?
Or would you just do this in your code:
FileStorage fs = null;
switch (siteConfig.FileStorageMethod) {
    case S3:
        fs = new S3FileStorage();
        break;
    case Local:
        fs = new LocalFileStorage();
        break;
}
Which one makes more sense?
I believe you can use DI with Spring at runtime, but I haven't read much about it at this point.
I would inject a factory, and let clients request the actual services from it at runtime. This will decouple your clients from the actual factory implementation, so you can have several factory implementations as well, for example, for testing.
You can also use some kind of proxy object with several strategies behind it instead of the factory, but it can cause problems if a sequence of calls (like open, write, close for file storage) from one client cannot be served by different implementations.
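A rough sketch of the injected-factory idea (SiteConfiguration, FileStorageMethod and the two implementations come from the question; the factory interface itself is invented, and each type would live in its own file):

public interface FileStorageFactory {
    FileStorage create();
}

public class ConfiguredFileStorageFactory implements FileStorageFactory {
    private final SiteConfiguration siteConfig;

    public ConfiguredFileStorageFactory(SiteConfiguration siteConfig) {
        this.siteConfig = siteConfig;
    }

    @Override
    public FileStorage create() {
        // Read the current setting on every call, so an admin change in the
        // control panel takes effect without restarting the application.
        switch (siteConfig.FileStorageMethod) {
            case S3:
                return new S3FileStorage();
            case Local:
            default:
                return new LocalFileStorage();
        }
    }
}

Clients then depend only on FileStorageFactory (injected), so substituting a test factory is trivial.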
I would still use Dependency Injection here. If it can be changed at runtime, you should inject it using setter injection, rather than constructor injection. The benefit of using any dependency injection is that you can easily add new implementations of the interface without changing the actual code.
DI without question. Or would you prefer to update your factory code every time you create/update/delete an implementation? IMO, if you're programming to an interface, then you shouldn't bootstrap your implementations, however many layers deep it actually occurs.
Also, DI isn't synonymous with Spring, et al. It's as simple as having a constructor that takes the abstracted interface as an argument, e.g. public FileApp(FileStorage fs) { }.
FYI, another possibility is a proxy.

Correct approach to Properties

I am working in Java on a fairly large project. My question is about how to best structure the set of Properties for my application.
Approach 1: Have some static Properties object that's accessible by every class. (Disadvantages: then, some classes lose their generality should they be taken out of the context of the application; they also require explicit calls to some static object that is located in a different class and may in the future disappear; it just doesn't feel right, am I wrong?)
Approach 2: Have the Properties be instantiated by the main class and handed down to the other application classes. (Disadvantages: you end up passing a pointer to the Properties object to almost every class and it seems to become very redundant and cumbersome; I don't like it.)
Any suggestions?
I like using Spring dependency injection for many of the properties. You can treat your application like building blocks and inject the properties directly into the component that needs them. This preserves (encourages) encapsulation. Then, you assemble your components together and create the "main class".
A nice side effect of the dependency injection is that your code should be more easily testable.
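For instance, with Spring you can inject just the values a component needs instead of handing it the whole Properties object (the property keys and class below are made up, and a property source such as a PropertySourcesPlaceholderConfigurer is assumed to be configured):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class ReportGenerator {
    private final int batchSize;
    private final String outputDir;

    @Autowired
    public ReportGenerator(@Value("${reports.batchSize}") int batchSize,
                           @Value("${reports.outputDir}") String outputDir) {
        this.batchSize = batchSize;
        this.outputDir = outputDir;
    }
    // ... no other class needs to know these keys exist ...
}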
Actually, approach 2 works really well.
I tried using a Singleton properties object on a recent project. Then, when it came time to add features, I needed to revise the Singleton, and I regretted having to locate every place where I used MySingleton.getInstance().
Approach 2 of passing a global information object through your various constructors is easier to control.
Using an explicit setter helps, too.
class MyConfig extends Properties { ... }

class SomeClass {
    MyConfig theConfig;
    public void setConfig(MyConfig c) {
        theConfig = c;
    }
    ...
}
It works well, and you'll be happy that you tightly controlled precisely which classes actually need configuration information.
If the properties are needed by a lot of classes, I would go for approach 1. Or perhaps a variant in which you use the Singleton design pattern instead of all static methods. This means that you don't have to keep passing some properties object around. On the other hand, if only a few classes need these properties, you might choose approach 2, for the reasons you mentioned. You might also want to ask yourself how likely it is that the classes you write are actually going to be reused and if so, how much of a problem it is to also reuse the properties object. If reuse is not likely, don't bother with it now and choose the solution that is the simplest for the current situation.
Sounds like you need a configuration manager component. It would be found via some sort of service locator, which could be as simple as ConfigurationManagerClass.instance(). This would encapsulate all that fun. Or you could use a dependency injection framework like Spring.
Much depends on how components find each other in your architecture. If your other components are being passed around as references, do that. Just be consistent.
If you are looking for something quick, you can use the System properties; they are available to all classes. You can store a String value, or if you need to store a collection of 'stuff' you can use the System.getProperties() method. This returns a Properties object, which is a Hashtable, and you can store whatever you want in it. It's not pretty, but it's a quick way to have global properties. YMMV.
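A quick illustration of that approach (the key name is arbitrary):

// Somewhere at startup:
System.setProperty("app.storage.root", "/var/data/files");

// Anywhere else in the application, with an optional default:
String root = System.getProperty("app.storage.root", "/tmp");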
I usually go for a singleton object that resides in a common project and contains a hashtable of instances keyed by namespace, so there is effectively a properties class for each namespace.
Dependency injection is also a nice way of doing it.
I feel more comfortable when I have a static pointer to my in-memory properties. Sometimes you want to reload properties at runtime, or add other functionality that is easier to implement with a static reference.
Just remember that no class is an island. A reusable class can have a client class to keep the core free of the singleton reference.
You can also use interfaces; just try not to overdo it.
Approach 2 is definitely better.
In any case, you should not let other classes search through the config object. You should inject the values taken from the config object from outside the object.
Take a look at Apache Commons Configuration for help with the configuration implementation.
So in the main() you could have
MyObject mobj = new MyObject();
mobj.setLookupDelay(appConfig.getMyObjectLookupDelay());
mobj.setTrackerName(appConfig.getMyObjectTrackerName());
Instead of
MyObject mobj = new MyObject();
mobj.setConfig(appConfig);
where appConfig is a wrapper around the Apache configuration library that does all the lookup of values by name in a config file.
This way your objects become very easy to test.
Haven't done Java for a while, but can't you just put your properties into the java.lang.System properties? This way you can access the values from everywhere and avoid having a "global" property class.
