My understanding of dependency injection is that it allows someone to quickly switch out implementations or use test implementations. I'm trying to understand how you're expected to do this in Dagger. It seems like you should be able to switch out Module implementations, but that doesn't seem to be supported by Dagger. What is the idiomatic way to do it?
For example:
@Component(modules = UserStoreModule.class)
interface ServerComponent {
  Server server();
}

@Module
class UserStoreModule {
  @Provides
  UserStore providesUserStore() {
    return // Different user stores depending on the application
  }
}
Assuming UserStore is an interface, what if I want to be able to use a MySQL UserStore or a Redis UserStore depending on the situation? Would I need to have two different server components? Intuitively I feel like I should be able to switch out which user store I use in the DaggerServerComponent.builder(), since that'd be a lot less code than multiple components.
Conceptually, it is true that dependency injection "allows someone to switch out implementations or use test implementations": You've written your classes to accept any implementation of UserStore, and can supply an arbitrary one in a constructor call for tests. This is the case whether or not you use Dagger, and is a big advantage in design.
However, Dagger's most prominent feature--its compile-time code generation--makes it somewhat more limited here than alternatives such as Spring or Guice. Because Dagger generates the classes it needs at compile time, it needs to know exactly which implementations it might encounter, so that it can prepare those implementations' dependencies. Consequently, you couldn't take in an arbitrary Class<? extends UserStore> at runtime and expect Dagger to fill in the rest.
This leaves you with a few options:
Separate Modules, separate Components
Create two separate Module classes, one for each implementation, and use them to let Dagger generate two separate components. This will generate the most efficient code, particularly when using @Binds, because Dagger will not need to generate any code for the implementation you're not binding. Of course, though this allows you to reuse your classes and some of your modules, it doesn't allow the decision between implementations to be made at runtime (short of choosing between entire Dagger component implementations).
This option entails a very small amount of handwritten code, but does generate a lot of extra code in the component implementations. It probably isn't what you're looking for, but it's included to highlight its differences from the others, and should still be used when possible.
@Module public interface MySqlModule {
  @Binds UserStore bindUserStore(MySqlUserStore mySqlUserStore);
}

@Module public interface RedisModule {
  @Binds UserStore bindUserStore(RedisUserStore redisUserStore);
}

@Component(modules = {MySqlModule.class, OtherModule.class})
public interface MySqlServerComponent { Server server(); }

@Component(modules = {RedisModule.class, OtherModule.class})
public interface RedisServerComponent { Server server(); }
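Even here, the "choosing between entire Dagger component implementations" mentioned above can be deferred to startup. A minimal sketch, assuming all of the modules have default constructors so the generated create() factory is available:

// Pick between the two fully generated components at startup.
Server server = useRedis
    ? DaggerRedisServerComponent.create().server()
    : DaggerMySqlServerComponent.create().server();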
Subclassing Modules
Create a subclass of your Module with different behavior. This precludes you from using @Binds or static/final @Provides methods, causes your @Provides method to take (and generate code for) unnecessary extra dependencies, and requires you to explicitly make and update constructor calls as dependencies may change. Due to its fragility and optimization opportunity-cost, I wouldn't recommend this option in most cases, but it can be handy for limited cases like substituting dependency-light fakes in tests.
@Module public abstract class UserStoreModule {
  @Provides public abstract UserStore bindUserStore(Dep1 dep1, Dep2 dep2, Dep3 dep3);
}

public class MySqlUserStoreModule extends UserStoreModule {
  @Override public UserStore bindUserStore(Dep1 dep1, Dep2 dep2, Dep3 dep3) {
    return new MySqlUserStore(dep1, dep2);
  }
}

public class RedisUserStoreModule extends UserStoreModule {
  @Override public UserStore bindUserStore(Dep1 dep1, Dep2 dep2, Dep3 dep3) {
    return new RedisUserStore(dep1, dep3);
  }
}

DaggerServerComponent.builder()
    .userStoreModule(
        useRedis
            ? new RedisUserStoreModule()
            : new MySqlUserStoreModule())
    .build();
Of course, your Module could even delegate to an arbitrary external Provider<UserStore>, at which point it would become tantamount to a component dependency. If you want to use Dagger to generate the Provider or Component you depend on, though, this technique won't help you other than to break your graph into smaller pieces.
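For illustration, such a delegating module might look like this (a sketch; DelegatingUserStoreModule is a hypothetical name, and Provider is javax.inject.Provider):

@Module
public class DelegatingUserStoreModule {
  private final Provider<UserStore> delegate;

  public DelegatingUserStoreModule(Provider<UserStore> delegate) {
    this.delegate = delegate;
  }

  // Dagger consults the externally supplied Provider whenever a UserStore is needed.
  @Provides
  UserStore provideUserStore() {
    return delegate.get();
  }
}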
Choose among multiple Providers
Wire up both types at compile time, and only use one at runtime. This requires Dagger to prepare injection code for all your options, but allows you to switch by providing a Module parameter, and even allows you to change which object is provided (if you use a mutable parameter or read a value out of the object graph). Note that you'll still have slightly more overhead than with @Binds, and Dagger will still generate code for your options' dependencies, but the selection process here is clear, efficient, and Proguard-friendly.
This is probably the best general solution, but it's not ideal for test implementations; it's generally frowned upon to let test-specific code sneak into production. You'll want module overrides or separate components for that kind of case instead.
@Module public class UserStoreModule {
  private final StoreType storeType;

  UserStoreModule(StoreType storeType) { this.storeType = storeType; }

  @Provides UserStore provideUserStore(
      Provider<MySqlUserStore> mySqlProvider,
      Provider<RedisUserStore> redisProvider,
      Provider<FakeUserStore> fakeProvider) {
    switch (storeType) {
      case MYSQL: return mySqlProvider.get();
      case REDIS: return redisProvider.get();
      case FAKE: return fakeProvider.get(); // you probably don't want this in prod
    }
    throw new AssertionError("Unknown store type requested");
  }
}
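Building the component then selects the implementation through the module's constructor argument; a sketch, assuming a StoreType enum and a ServerComponent that lists UserStoreModule:

// The module parameter decides which wired-up binding is actually used.
ServerComponent component = DaggerServerComponent.builder()
    .userStoreModule(new UserStoreModule(StoreType.REDIS))
    .build();
Server server = component.server();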
In summary
When you truly need to decide at runtime, inject multiple Providers and choose from there. If you only need to select at compile time or test time, you should use module overrides or separate Components, as sketched below.
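A sketch of the separate-component route at test time (FakeUserStore is a hypothetical class, assumed to have an @Inject constructor): only the binding module changes, and everything else is reused.

@Module
public interface FakeUserStoreModule {
  @Binds UserStore bindUserStore(FakeUserStore fakeUserStore);
}

// Lives in test sources only, so no fake code ships in production.
@Component(modules = {FakeUserStoreModule.class, OtherModule.class})
public interface TestServerComponent {
  Server server();
}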
My team owns a library that provides components that must be referenceable by code that consumes the library. Some of our consumers use Spring to instantiate their apps; others use Guice. We'd like some feedback on best practices for how to provide these components. Two options that present themselves are:
Have our library provide a Spring Configuration that consumers can #Import, and a Guice Module that they can install.
Have our library provide a ComponentProvider singleton, which provides methods to fetch the relevant components the library provides.
Quick sketches of what these would look like:
Present in both approaches
// In their code
@AllArgsConstructor(onConstructor = @__(@Inject))
public class ConsumingClass {
  private final FooDependency foo;
  ...
}
First approach
// In our code
@Configuration
public class LibraryConfiguration {
  @Bean public FooDependency foo() {...}
  ...
}

---

public class LibraryModule extends AbstractModule {
  @Provides FooDependency foo() {...}
  ...
}
========================
// In their code
@Configuration
@Import(LibraryConfiguration.class)
public class ConsumerConfiguration {
  // Whatever initiation logic they want - but, crucially, does
  // *not* need to define a FooDependency
  ...
}
---
// *OR*
public class ConsumerModule extends AbstractModule {
  @Override
  public void configure() {
    // Or, simply specify LibraryModule when creating the injector
    install(new LibraryModule());
    ...
    // As above, no requirement to define a FooDependency
  }
}
Second approach
// In our code
public class LibraryProvider {
  public static final LibraryProvider INSTANCE = buildInstance();
  private static LibraryProvider buildInstance() {...}
  public static LibraryProvider getInstance() { return INSTANCE; }
}
========================
// In their code
@Configuration
public class ConsumerConfiguration {
  @Bean public FooDependency foo() {
    return LibraryProvider.getInstance().getFoo();
  }
  ...
}
// or equivalent for Guice
Is there an accepted best practice for this situation? If not, what are some pros and cons of each, or of another option I haven't yet thought of? The first approach has the advantage that consumers don't need to write any code to initialize dependencies, and that DI frameworks can override dependencies (e.g. with mocked dependencies for testing); the second approach has the advantage of being DI-framework agnostic (if a new consumer wanted to use Dagger to instantiate their app, for instance, we wouldn't need to change the library at all).
I think the first option is better. If your library has inter-dependencies between beans, then with the second approach the consumer's configuration code (the @Configuration class, in the case of Spring) will be:
Fragile (what if the application doesn't know that a certain bean should be created?)
Duplicated - this code will appear in each and every consumer's module
In need of updating whenever a new version of your library is released and a consumer wants to upgrade (the lib might expose a new bean, deprecate or even remove some old stuff, etc.)
One small suggestion:
You can use Spring factories, and then in the case of Spring Boot you don't even need an @Import - just add a Maven dependency and it will load the configuration automatically.
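A sketch of that registration, assuming a Spring Boot 2.x consumer (the com.example package is hypothetical):

# META-INF/spring.factories shipped inside the library's jar
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.LibraryConfiguration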
Now, make sure that you handle dependencies correctly with that approach.
Since your code will include both Spring- and Guice-dependent parts, you'll declare dependencies on both in the Maven/Gradle module of your library. This means that a consumer who uses, say, Guice will pull in all the Spring stuff because of your library. There are many ways to overcome this issue depending on the build system of your choice; I just wanted to bring it up.
I need to support two versions of a dependency, which have the same API but different package names.
How do I handle this without maintaining two versions of my code, with the only change being the import statement?
For local variables, I guess I could use reflection (ugly!), but I use the classes in question as method arguments. If I don't want to pass around Object instances, what else can I do to abstract from the package name?
Is it maybe possible to apply a self-made interface - one that is compatible with the API - to existing instances and pass them around as instances of this interface?
I am actually mostly using Xtend for my code, if that changes the answer.
Since you're using Xtend, here's a solution that makes use of Xtend's @Delegate annotation. There might be better solutions that aren't based on Xtend, though, and this will only work for simple APIs that consist of interfaces with exactly the same method signatures.
So assuming you have interfaces with exactly the same method signatures in different packages, e.g. like this:
package vendor.api1

interface Greeter {
  def void sayHello(String name)
}

package vendor.api2

interface Greeter {
  def void sayHello(String name)
}
Then you can combine both into a single interface and use only this combined interface in your code.
package example.api
interface Greeter extends vendor.api1.Greeter, vendor.api2.Greeter {
}
This is also possible in Java, but you would have to write a lot of boilerplate for each interface method to make it work. In Xtend you can use @Delegate instead to automatically generate everything, without having to care how many methods the interface has or what they look like:
package example.internal
import example.api.Greeter
import org.eclipse.xtend.lib.annotations.Delegate
import org.eclipse.xtend.lib.annotations.FinalFieldsConstructor
@FinalFieldsConstructor
class GreeterImpl implements Greeter {
  @Delegate val Greeter delegate
}

@FinalFieldsConstructor
class Greeter1Wrapper implements Greeter {
  @Delegate val vendor.api1.Greeter delegate
}

@FinalFieldsConstructor
class Greeter2Wrapper implements Greeter {
  @Delegate val vendor.api2.Greeter delegate
}
Both Greeter1Wrapper and Greeter2Wrapper actually implement the interface of both packages here but since the signature is identical all methods are forwarded to the respective delegate instance. These wrappers are necessary because the delegate of GreeterImpl needs to implement the same interface as GreeterImpl (usually a single delegate would be enough if the packages were the same).
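For comparison, here is roughly what Greeter1Wrapper amounts to when written out by hand in Java; with a one-method API the boilerplate is tolerable, but it grows with every interface method, which is exactly what @Delegate spares you:

package example.internal;

import example.api.Greeter;

public class Greeter1Wrapper implements Greeter {
  private final vendor.api1.Greeter delegate;

  public Greeter1Wrapper(vendor.api1.Greeter delegate) {
    this.delegate = delegate;
  }

  // One hand-written forwarding method per interface method.
  @Override
  public void sayHello(String name) {
    delegate.sayHello(name);
  }
}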
Now you can decide at run-time which version to use.
val vendor.api1.Greeter greeterApi1 = ... // get from vendor API
val vendor.api2.Greeter greeterApi2 = ... // get from vendor API
val apiWrapper = switch version {
  case 1: new Greeter1Wrapper(greeterApi1)
  case 2: new Greeter2Wrapper(greeterApi2)
}
val example.api.Greeter myGreeter = new GreeterImpl(apiWrapper)
myGreeter.sayHello("world")
This pattern can be repeated for all interfaces. You might be able to avoid even more boilerplate by implementing a custom active annotation processor that generates all of the required classes from a single annotation.
I'm having a project based on Dagger 2 which consists of two modules. The core module includes some interfaces and some classes that have member injections declared for these interfaces.
The actual implementations of these interfaces are included in the second module which is an Android project. So, naturally the provide methods for these are included in the Android project.
Dagger will complain during compilation about not knowing how to inject these in the core module.
Any thoughts on how to achieve this without using constructor injections?
In short, I just tried this, and it works. Be sure to check the exact error messages and make sure you are providing these interfaces and that the @Inject annotations are present.
There is probably just some wrongly named interface or a missing annotation. What follows is a full sample using your described architecture that compiles just fine. The issue you are currently experiencing is probably the one described in the last part of this post. If possible, you should go with the first solution, though, and just add those annotations.
The library
For reproducability this sample has minimalist models. First, the interface needed by my class in the library module:
public interface MyInterface {
}
Here is my class that needs that interface. Make sure to declare it in the constructor and provide the @Inject annotation!
@MyScope // be sure to add scopes in your class if you use constructor injection!
public class MyClassUsingMyInterface {
  private MyInterface mMyInterface;

  @Inject
  public MyClassUsingMyInterface(MyInterface myInterface) {
    mMyInterface = myInterface;
  }
}
The idea is that the interface will be implemented by the app using MyClassUsingMyInterface and provided by dagger. The code is nicely decoupled, and my awesome library with not so many features is complete.
The application
Here we need to supply the actual coupling. This means that to get MyClassUsingMyInterface, we have to make sure we can supply MyInterface. Let's start with the module supplying that:
@Module
public class MyModule {
  @Provides
  MyInterface providesMyInterface() {
    return new MyInterface() {
      // my super awesome implementation. MIT license applies.
    };
  }
}
And to actually use this, we provide a component that can inject into MyTestInjectedClass that is going to need MyClassUsingMyInterface.
@MyScope // the component must carry the scope used by MyClassUsingMyInterface
@Component(modules = MyModule.class)
public interface MyComponent {
  void inject(MyTestInjectedClass testClass);
}
Now we have a way to provide the requested interface. We declared that interface needed by the library class in a constructor marked with @Inject. Now I want a class that makes use of my awesome library class, and I want to inject it with Dagger.
public class MyTestInjectedClass {
  @Inject
  MyClassUsingMyInterface mMyClassUsingMyInterface;

  void onStart() {
    DaggerMyComponent.create().inject(this);
  }
}
Now we hit compile... and Dagger will create all the factories needed.
Inject Libraries you can not modify
To show the full scale of Dagger, this sample could also have been written without actual access to the source code of the library. If there is no @Inject annotation, Dagger will have a hard time creating the object. Notice the missing annotation:
public class MyClassUsingMyInterface {
  private MyInterface mMyInterface;

  public MyClassUsingMyInterface(MyInterface myInterface) {
    mMyInterface = myInterface;
  }
}
In that case we have to provide the class manually. The module would need to be modified as follows:
@Module
public class MyModule {
  @Provides
  MyInterface providesMyInterface() {
    return new MyInterface() {
    };
  }

  @Provides
  MyClassUsingMyInterface providesMyClass(MyInterface myInterface) {
    return new MyClassUsingMyInterface(myInterface);
  }
}
This introduces more code for us to write, but it makes available those classes that you cannot modify.
My Dagger configuration for an Android project that I'm working on:
Note: I've provided the @Component, @Module, and @Provides annotations wherever needed.
MainActivity {
  @Inject A a;
  @Inject B b;

  onCreate() {
    ComponentX.inject(this);
    ComponentY.inject(this);
  }
}

ComponentX -> ModuleA -> providerA
ComponentY -> ModuleB -> providerB
As you can see, these are two completely independent components not related to each other in anyway except for at the point of injection.
During compilation I get the following error:
In file A.java
error: B cannot be provided without an @Provides- or @Produces-annotated method.
MainActivity.b
[injected field of type: B b]
Am I mistaken in thinking that multiple components can be used with Dagger 2, or is the application supposed to use one big component which takes care of all the injections?
Can anyone help me understand where I'm going wrong?
You do not have to have a single component, there are various ways to modularize them, but each object that you create, or inject values into, must have all its values provided by a single component.
One way you could restructure your code is to have ComponentY depend on ComponentX, or vice versa, e.g.
@Component(dependencies = ComponentX.class)
interface ComponentY {
  void inject(MainActivity activity);
}
Or you could create a third Component, say ComponentZ, if ComponentX and ComponentY are completely orthogonal to one another.
@Component(dependencies = {ComponentX.class, ComponentY.class})
interface ComponentZ {
  void inject(MainActivity activity);
}
Or you could just reuse the modules, e.g.
@Component(modules = {ModuleA.class, ModuleB.class})
interface ComponentZ {
  void inject(MainActivity activity);
}
How exactly you decide to split it largely depends on the structure of your code. If the components X and Y are visible but the modules are not, then use component dependencies, as they (and module dependencies) are really implementation details of the component. Otherwise, if the modules are visible, then simply reuse them.
I wouldn't use scopes for this, as they are really for managing objects with different lifespans, e.g. objects associated with a specific user whose lifespan is the time from when a user logs in to when they log out, or the lifespan of a specific request. If your objects do have different lifespans, then you are looking at using scopes and subcomponents.
is the application supposed to use one big component
Kind of, you should think of it in scopes. For a given scope, there is one component. Scopes are for example ApplicationScope, FragmentScope (retained), ActivityScope, ViewScope. For each scope, there is a given component; scopes are not shared between components.
(This essentially means that if you want to have global singletons in the @ApplicationScope, there is one application scoped component for it. If you want activity-specific classes, then you create a component for it for that specific activity, which will depend on the application scoped component).
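For reference, custom scope annotations like the @XScope and @YScope used below are just annotations meta-annotated with javax.inject.Scope; a minimal sketch:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.inject.Scope;

// A custom scope annotation; Dagger treats each such annotation as a distinct scope.
@Scope
@Retention(RetentionPolicy.RUNTIME)
public @interface YScope {}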
Refer to @MyDogTom's answer for the @Subcomponent annotation, but you can also use component dependencies for the creation of subscoped components as well:
@YScope
@Component(dependencies = ComponentX.class, modules = ModuleB.class)
public interface ComponentY extends ComponentX {
  B b();
  void inject(MainActivity mainActivity);
}

@XScope
@Component(modules = ModuleA.class)
public interface ComponentX {
  A a();
}
ComponentY componentY = DaggerComponentY.builder().componentX(componentX).build();
is the application supposed to use one big component which takes care
of all the injections?
You can use a Subcomponent. In your case the component declarations will look like this:
@Subcomponent(modules = ModuleB.class)
public interface ComponentY {
  void inject(MainActivity mainActivity);
}

@Component(modules = ModuleA.class)
public interface ComponentX {
  ComponentY plus(ModuleB module);
}
ComponentY creation: componentY = componentX.plus(new ModuleB()); (where componentX is your built ComponentX instance).
Now in MainActivity you only call componentY.inject(this):
MainActivity {
  @Inject A a;
  @Inject B b;

  onCreate() {
    componentY.inject(this);
  }
}
More information about subcomponents can be found in the Dagger 1 migration guide (look at the Subgraphs part), the Subcomponent JavaDoc, and the Component JavaDoc (look at the Subcomponents part).
I gave to Google Guice the responsibility of wiring my objects. But, how can I test if the bindings are working well?
For example, suppose we have a class A which has a dependence B. How can I test that B is injected correctly?
class A {
  private B b;

  public A() {}

  @Inject
  public void setB(B b) {
    this.b = b;
  }
}
Notice that A hasn't got a getB() method and I want to assert that A.b isn't null.
For any complex Guice project, you should add tests to make sure that the modules can be used to create your classes. In your example, if B were a type that Guice couldn't figure out how to create, then Guice won't be able to create A. If A wasn't needed to start the server but was needed when your server was handling a request, that would cause problems.
In my projects, I write tests for non-trivial modules. For each module, I use requireBinding() to declare what bindings the module requires but doesn't define. In my tests, I create a Guice injector using the module under test and another module that provides the required bindings. Here's an example using JUnit4 and JMock:
/** Module that provides LoginService */
public class LoginServiceModule extends AbstractModule {
  @Override
  protected void configure() {
    requireBinding(UserDao.class);
  }

  @Provides
  LoginService provideLoginService(UserDao dao) {
    ...
  }
}
@RunWith(JMock.class)
public class LoginServiceModuleTest {
  private final Mockery context = new Mockery();

  @Test
  public void testModule() {
    Injector injector = Guice.createInjector(
        new LoginServiceModule(), new ModuleDeps());
    // next line will throw an exception if dependencies missing
    injector.getProvider(LoginService.class);
  }

  private class ModuleDeps extends AbstractModule {
    private final UserDao fakeUserDao;

    public ModuleDeps() {
      fakeUserDao = context.mock(UserDao.class);
    }

    @Override
    protected void configure() {}

    @Provides
    UserDao provideUserDao() {
      return fakeUserDao;
    }
  }
}
Notice how the test only asks for a provider. That's sufficient to determine that Guice could resolve the bindings. If LoginService was created by a provider method, this test wouldn't test the code in the provider method.
This test also doesn't test that you bound the right thing to UserDao, or that UserDao was scoped correctly. Some would argue that those types of things are rarely worth checking; if there's a problem, it happens once. You should "test until fear turns to boredom."
I find Module tests useful because I often add new injection points, and it's easy to forget to add a binding.
The requireBinding() calls can help Guice catch missing bindings before it returns your injector! In the above example, the test would still work if the requireBinding() calls were not there, but I like having them because they serve as documentation.
For more complicated modules (like my root module) I might use Modules.override() to override bindings that I don't want at test time (for instance, if I want to verify that my root object to be created, I probably don't want it to create an object that will connect to the database). For simple projects, you might only test the top-level module.
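A minimal sketch of that override pattern (RootModule, FakeDatabaseModule, and RootObject are hypothetical names for a production module, its test replacement, and the top-level class):

// Modules.override keeps every binding from RootModule except those that
// FakeDatabaseModule redefines.
Injector injector = Guice.createInjector(
    Modules.override(new RootModule()).with(new FakeDatabaseModule()));
injector.getProvider(RootObject.class); // fails fast if the graph is broken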
Note that Guice will not inject nulls unless the field is annotated with @Nullable, so you very rarely need to verify that the injected objects are non-null in your tests. In fact, when I annotate constructors with @Inject I do not bother to check if the parameters are null (my tests often inject null into the constructor to keep the tests simple).
Another way to test your configuration is by having a test suite that tests your app end-to-end. Although end-to-end tests nominally test use cases, they indirectly check that your app is configured correctly (that all the dependencies are wired, etc.). Unit tests, on the other hand, should focus exclusively on the domain, and not on the context in which your code is deployed.
I also agree with NamshubWriter's answer. I'm not against tests that check configuration, as long as they are grouped in a test suite separate from your unit tests.
IMHO, you should not be testing that. The Google Guice guys have the unit tests to assert that the injections work as expected - after all, that's what Guice is designed to do. You should only be writing tests for your own code (A and B).
I don't think you should test that private members are being set. Better to test against the public interface of your class. If member b were not injected, you'd probably get a NullPointerException when executing your tests, which should be plenty of warning.
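For example, a sketch of that style of test (greet() and AppModule are hypothetical, since A's real public API isn't shown in the question):

// Exercise A through its public API; if b were never injected, the call
// below would blow up with a NullPointerException.
A a = Guice.createInjector(new AppModule()).getInstance(A.class);
a.greet();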