So I'm writing a bunch of components (that will be packaged as JARs), and they are all using Guice for DI. These components are just reusable, "commons"-type JARs that will be used by other downstream projects.
My understanding with Guice is that you implement a concrete Module and use that to bind objects together and, in effect, configure all of your DI. It is also my understanding that you should then have a single "bootstrapping" phase where the Guice injector is created, and all dependencies the module is configured with are fetched from that injector with injector.getInstance(SomeClass.class).
That would work great in a standalone application that has some entry point, where you could invoke an init()-style method to bootstrap Guice. But in a headless JAR that has no entry point, I'm struggling to determine when/where/how to bootstrap Guice.
These will be JARs living on the classpath, and, at any point in time, an external entity could invoke any class and any method inside them. I thought about using a "lazy initialization" setup, where a method checks whether its dependencies have been configured yet and, if not, kicks off a bootstrap method.
But that's a really terrible solution! Partly because it would require every class to have its own Module (which is ridiculous), and partly because it would pollute my entire codebase with DI-related code.
I'm clearly missing some Guice fundamentals here; otherwise I don't see how Guice could be used in anything other than an app where execution from start to finish is known and controlled. Any code samples are a huge plus! Thanks in advance.
If other code wants to configure your classes without using Guice, it should be able to. However, you should provide a Guice module which binds everything in a reasonable way so that other code (perhaps other modules) can install your module, and then inject the dependencies into their own classes.
Of course, you don't need to expose a module yourself at all - you can leave it up to others to perform all the binding. However, you may wish to provide a module to avoid exposing your implementation details - you can expose a public interface and a public module, but then keep the implementation package-private. The module can bind the interface to the implementation without the caller knowing anything about it.
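A minimal sketch of that layout, with made-up names (Greeter, DefaultGreeter, GreeterModule); the interface and the module are public, while the implementation stays package-private (each type in its own file, same package):

import com.google.inject.AbstractModule;

// Public API exposed by the library JAR.
public interface Greeter {
    String greet(String name);
}

// Package-private implementation; callers never reference it directly.
class DefaultGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// Public module that downstream projects can install() in their own modules.
public class GreeterModule extends AbstractModule {
    protected void configure() {
        bind(Greeter.class).to(DefaultGreeter.class);
    }
}

A downstream project would then install(new GreeterModule()) in its own module, or pass it to Guice.createInjector(), and simply @Inject Greeter wherever it is needed; it never has to see DefaultGreeter.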
You may also want to investigate private modules, so that you can bind dependencies that your code needs, without exposing them more widely.
Something, somewhere is going to have to create an injector - but if your code is just "library" code, then it almost certainly shouldn't be you. You shouldn't be performing the injection yourself - you should just be making your code amenable to injection.
Related
I have an interface A with a method Result doAction(Param param). I have a Spring application that will use implementations of the interface and call doAction() on it.
But the application does not define an implementation itself. The idea is that other people can provide their own implementations of the interface in JARs (plugins), the main application will pull those in as dependencies, and call doAction() on the JAR's implementation.
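For concreteness, the contract might look like this sketch (Result and Param are just placeholders from the question; their real shape is up to the application):

// Placeholder types; their real fields are application-specific.
public class Param { }
public class Result { }

// The plugin contract: implementations are supplied by external "plugin" JARs.
public interface A {
    Result doAction(Param param);
}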
Any idea how I can do this in practice? The ideas I had were:
Try to autowire the implementation through Spring Boot, but for that I would need to know its package and add it to the component scan. That would mean putting requirements on the naming of the "plugin" JAR, something I would prefer not to do.
With plain Java my first idea was to keep a registry of implementations (e.g. a Set<Interface A>), but the plugin wouldn't be able to access the registry -- it would be a dependency cycle.
What I'm doing right now is defining a Rest API that the "plugin" needs to implement, deploy the plugin in the same environment and the main application just makes the calls through the Rest API.
But for performance reasons I'm looking for a solution with more direct calls that doesn't involve communication over the network. Any suggestions?
I have a whole bunch of framework modules that work fine on OSGi; all the services and components are finding one another and running just fine.
There is however one framework that does some dynamic stuff regarding classes. Basically at some point you give it a class name and it performs Class.forName() and then reflection magic happens.
This works great when running in a standard JVM and using SPI to wire the frameworks together, but it fails in OSGi because, of course, that random class "test.MyTest" that you are trying to access via the framework is not visible to said framework.
It will throw a "java.lang.ClassNotFoundException: test.MyTest not found by framework"
So my question: how can I solve this lack of visibility for the framework that needs to see all? Import-Package: *?
UPDATE
Assuming OSGi hasn't changed much since 2010 on this front, the article http://njbartlett.name/2010/08/30/osgi-readiness-loading-classes.html is very interesting. I have currently added support for both actively registering classes and a domain factory to be injected via OSGi.
Apart from that, the default resolution uses the context classloader anyway, so if all else fails that will be used to try to load the class.
UPDATE
I have added support for the suggested DynamicImport-Package as well which is easier for small projects.
You can use DynamicImport-Package:*. This will allow the bundle to see all classes. The problem is that you have no real control over what exactly is exposed. So this is normally a last resort and not the recommended way.
You should first try to use Thread.currentThread().setContextClassLoader() and set it to the classloader of the class you provide to the framework. Sometimes the frameworks also consult this classloader.
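A common pattern here, sketched with a hypothetical framework.loadAndRun() entry point, is to swap the context classloader just around the call and restore it afterwards:

ClassLoader previous = Thread.currentThread().getContextClassLoader();
try {
    // Use the classloader that can actually see "test.MyTest".
    Thread.currentThread().setContextClassLoader(MyTest.class.getClassLoader());
    framework.loadAndRun("test.MyTest"); // hypothetical framework call
} finally {
    // Always restore the previous context classloader.
    Thread.currentThread().setContextClassLoader(previous);
}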
An even better way is to find a method in the framework that allows you to provide the user's classloader.
If you have control over the code, then avoid Class.forName(). Instead, let the user either give you a Class object instead of a class name, or give you the combination of a class name and the classloader to use. Both ways work perfectly inside and outside OSGi.
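If you control the framework, the API could look something like this sketch (the method names are made up) instead of a bare Class.forName(className):

// Preferred: the caller hands over the Class object directly.
public void register(Class<?> type) {
    // reflection magic happens on 'type'
}

// Alternative: the caller supplies the class name plus the classloader that can see it.
public void register(String className, ClassLoader loader) throws ClassNotFoundException {
    Class<?> type = Class.forName(className, true, loader);
    // reflection magic happens on 'type'
}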
How do you keep an overview of which objects get injected where? I have a rather small project where I use Guice, not so much because I really need it (given the project is still small), but rather because I want to get to know it a little better.
I am already starting to lose the overview with only ~10 classes; are there tools that analyze the code to show something like a dependency graph?
That would make it easier to see quickly where I forgot something or where I need singleton-scoped injection. Also, with Guice a lot of things happen implicitly; being able to see these things explicitly would help debugging in the future.
I have a couple of principles which help to manage dependencies using Guice.
Keep all bindings inside modules only. Do not use the just-in-time binding features; that is, do not use @Singleton or @ImplementedBy or @ProvidedBy, i.e. all that is described here. Also try to always call binder.requireExplicitBindings() at the top of your modules; it will force you to always bind your dependencies explicitly. When you keep all bindings in the modules, you can easily find which interface is fulfilled by which implementation. This simplifies navigation around bindings a lot.
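A sketch of that first principle (the repository types are made up for illustration):

import com.google.inject.AbstractModule;

// Hypothetical types, for illustration only.
interface UserRepository { }
class JdbcUserRepository implements UserRepository { }

public class PersistenceModule extends AbstractModule {
    protected void configure() {
        // Fail at injector-creation time if anything relies on just-in-time bindings.
        binder().requireExplicitBindings();
        // Every dependency is bound explicitly here, not via annotations on the classes.
        bind(UserRepository.class).to(JdbcUserRepository.class).asEagerSingleton();
    }
}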
Try to keep your modules as small as possible, and then combine them when creating an injector (directly via a createInjector() call, or using a central module that does nothing but install() other modules). Each module should be responsible for its own part of the application and should be named accordingly. Also, your modules should not contain complex initialization or dynamic binding code. This way you will be able to find the module responsible for some part of your application quite easily.
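And a sketch of the second principle: small feature modules combined at the single bootstrap point, here through a central module (WebModule and MetricsModule are just illustrative names):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

// A central module that does nothing but install() the feature modules.
public class ApplicationModule extends AbstractModule {
    protected void configure() {
        install(new PersistenceModule());
        install(new WebModule());
        install(new MetricsModule());
    }
}

// At the single bootstrap point of the application:
Injector injector = Guice.createInjector(new ApplicationModule());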
These principles are really simple but they make dependency management very easy.
Also, you can visualize the dependency graph using a special Guice extension. It has its bugs though, and it has been a while since I have used it, so I can't give you exact links on how to avoid those bugs, but googling for it won't take long.
I'm writing a framework that uses Guice to bootstrap a server, and so I've extended Guice's AbstractModule to create a Module that provides some convenience methods for users to configure their code. However, I want to check that the configuration is sane before launching the code. So it has to go somewhere in here:
// here, before the injector is created?
Injector injector = Guice.createInjector(someModule);
// here, after configure() is called?
Object something = injector.getInstance(SomeServer.class);
// start the server
It seems that there's not much I can check before the injector is created, because the modules have not been configure()d yet. There is some mention of using the Guice SPI to validate module configuration, but the documentation is not too clear. Can someone who uses Guice give a short description of the best practices for validating modules before injectors are used?
I haven't experienced much of this first-hand, but it seems to me that you have three choices:
Refactor to MyConvenienceMethodModule.myConfigure() and MyConvenienceMethodModule.validate() if your convenience methods are expressive enough to provide useful information without ever running configure(). In theory you could call Module.configure(Binder) with a mock, but with Guice's EDSL that's far too complex; use ElementVisitor (below) instead.
Call Elements.getElements() on a particular Module to check on the binding status. Because the elements might be of a variety of types, you'd probably want to create an ElementVisitor (by subclassing DefaultElementVisitor, to insulate you from future Element types yet to be created). This way you get a good view of all bindings, even bindings in Guice's EDSL, while still in the context of the Module. I think this is your best bet; it is sketched below, together with the third option.
Create your Injector as usual and call getAllBindings() to investigate it. This is probably your best option if your configuration's sanity depends on how multiple modules interact, rather than how individual modules are structured. If you only check at this point, you won't really be able to tell one Module from another.
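Here is a sketch of the last two options using the Guice SPI; the validation logic itself is just a placeholder:

import java.util.List;
import com.google.inject.Binding;
import com.google.inject.Injector;
import com.google.inject.Module;
import com.google.inject.spi.DefaultElementVisitor;
import com.google.inject.spi.Element;
import com.google.inject.spi.Elements;

public class ModuleValidator {

    // Option 2: inspect a module's bindings without creating an injector.
    public void validate(Module module) {
        List<Element> elements = Elements.getElements(module);
        for (Element element : elements) {
            element.acceptVisitor(new DefaultElementVisitor<Void>() {
                public <T> Void visit(Binding<T> binding) {
                    // Placeholder: check the bound key/type against your expectations.
                    System.out.println("Bound: " + binding.getKey());
                    return null;
                }
            });
        }
    }

    // Option 3: create the injector as usual, then inspect the merged bindings.
    public void validate(Injector injector) {
        injector.getAllBindings().forEach((key, binding) ->
                System.out.println(key + " -> " + binding.getSource()));
    }
}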
Getting started with OSGi, I wonder what the conceptual difference is between bundles and components, and when to use which of them. Any pointers are welcome.
EDIT:
Components and Bundles provide different interfaces and are therefore probably not interchangeable:
ComponentContext
BundleContext
A component is:
an active participant in the system
aware of and adapts to its environment
environment = services provided by other components
environment = resources, devices, ...
may provide services to other components and use services from other components
has a lifecycle
In short:
Components provide services
Bundles manage the lifecycle
A bundle can have only one activator (needing a BundleContext), and can have as many active components as you want.
That means you may end up trying to fit several loosely related concerns into a single activator class.
That is why it may be easier to manage those components by Declarative Services, through the SCR (the "Service Component Runtime" which is an "extender bundle" implementing the new and improved OSGi R4.2 DS - Declarative Service - specification).
This is especially true since OSGi 4.2 because it is now much easier to write DS components as POJOs: the activate and deactivate methods are no longer required to take a ComponentContext parameter. See also Lazy Declarative Service.
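A minimal sketch of such a POJO-style component (the class name is made up; the component would still be declared to the SCR, e.g. through an OSGI-INF component description):

// A plain POJO managed by the Service Component Runtime.
// Since DS 1.1 (OSGi 4.2) the lifecycle methods no longer need a ComponentContext parameter.
public class TemperatureMonitor {

    protected void activate() {
        // called by the SCR once the component's dependencies are satisfied
    }

    protected void deactivate() {
        // called by the SCR when the component is deactivated
    }
}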
Note:
It can help to place those terms in the context of OSGi and look at "how we got there" (excellent blog post by Neil Bartlett).
Here are some relevant extracts, where the "modules" end up being the OSGi Bundles (managing Components which declare Services):
Module Separation
Our first requirement is to cleanly separate modules so that classes from one module do not have the uncontrolled ability to see and obscure classes from other modules.
In traditional Java the so-called “classpath” is an enormous list of classes, and if multiple classes happen to have the same fully-qualified name then the first will always be found and the second and all others will be ignored.
The way to prevent uncontrolled visibility and obscuring of classes is to create a class loader for each module. A class loader is able to load only the classes it knows about directly, which in our system would be the contents of a single module.
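As a very rough sketch of that idea (one loader per module, knowing only that module's own JAR; this is not OSGi's actual implementation, and the paths and class names are hypothetical):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ModuleLoaderSketch {
    public static void main(String[] args) throws Exception {
        File moduleJar = new File("modules/module-a.jar"); // hypothetical module JAR
        // A null parent means only this module's classes (plus bootstrap classes) are visible.
        try (URLClassLoader moduleLoader =
                new URLClassLoader(new URL[] { moduleJar.toURI().toURL() }, null)) {
            Class<?> type = moduleLoader.loadClass("com.example.a.SomeClass"); // hypothetical class
            System.out.println("Loaded " + type.getName());
        }
    }
}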
Module Access Level
If we stop here then modules will be completely isolated and unable to communicate with each other. To make the system practical we need to add back in the ability to see classes in other modules, but we do it in a careful and constrained way.
At this point we input another requirement: modules would like the ability to hide some of their implementation details.
We would like to have a “module” access level, but the problem today is that the javac compiler has no idea where the module boundaries lie.
The solution we choose in our module system is to allow modules to “export” only portions of their contents. If some part of a module is non-exported then it simply cannot be seen by other modules.
When importing, we should import what we actually need to use, irrespective of where it comes from and ignoring all the things that happen to be packaged alongside it.
Granularity of Exports and Imports
OSGi chooses packages.
The contents of a Java package are intended to be somewhat coherent, but it is not too onerous to list packages as imports and exports, and it doesn’t break anything to put some packages in one module and other packages in another module.
Code that is supposed to be internal to our module can be placed in one or more non-exported packages.
Package Wiring
Now that we have a model for how modules isolate themselves and then reconnect, we can imagine building a framework that constructs concrete runtime instances of these modules. It would be responsible for installing modules and constructing class loaders that know about the contents of their respective modules.
Then it would look at the imports of newly installed modules and try to find matching exports.
An unexpected benefit from this is we can dynamically install, update and uninstall modules. Installing a new module has no effect on those modules that are already resolved, though it may enable some previously unresolvable modules to be resolved. When uninstalling or updating, the framework knows exactly which modules are affected and it will change their state if necessary.
Versions
Our module system is looking good, but we cannot yet handle the changes that inevitably occur in modules over time. We need to support versions.
How do we do this? First, an exporter can simply state some useful information about the packages it is exporting: "this is version 1.0.0 of the API". An importer can now import only the version that is compatible with what it expects and has been compiled/tested against, and refuse to accept incompatible versions.
Packaging Modules and Metadata
Our module system will need a way to package the contents of a module along with metadata describing the imports and exports into a deployable unit.
So the only question is, where should we put the metadata, i.e. the lists of imports and exports, versions and so on?
As it happens OSGi was designed before 2000, so it did not choose either of these solutions. Instead it looked back at the JAR File Specification, where the answer is spelled out:
META-INF/MANIFEST.MF is the standard location for arbitrary application-specific metadata.
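A hypothetical fragment of such a manifest, tying together the exports, imports and versions discussed above (bundle name, package names and versions are all made up):

Manifest-Version: 1.0
Bundle-SymbolicName: com.example.greeter
Bundle-Version: 1.0.0
Export-Package: com.example.greeter.api;version="1.0.0"
Import-Package: com.example.logging;version="[1.0,2.0)"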
Late Binding
The final piece of the modularity puzzle is late binding of implementations to interfaces. I would argue that it is a crucial feature of modularity, even though some module systems ignore it entirely, or at least consider it out of scope.
We should look for a decentralised approach.
Rather than being told what to do by the God Class, let us suppose that each module can simply create objects and publish them somewhere that the other modules can find them. We call these published objects “services”, and the place where they are published the “service registry”.
The most important information about a service is the interface (or interfaces) that it implements, so we can use that as the primary registration key.
Now a module needing to find instances of a particular interface can simply query the registry and find out what services are available at that time. The registry itself is still a central component existing outside of any module, but it is not “God”… rather, it is like a shared whiteboard.
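Sketched with the standard OSGi framework API (PaymentService and CardPaymentService are hypothetical, and bundleContext would come from the bundle's activator or component), the two sides of that whiteboard look roughly like this:

// Publishing side: one bundle registers an implementation under its interface.
ServiceRegistration<PaymentService> registration =
        bundleContext.registerService(PaymentService.class, new CardPaymentService(), null);

// Consuming side: another bundle queries the registry by interface at runtime.
ServiceReference<PaymentService> reference =
        bundleContext.getServiceReference(PaymentService.class);
if (reference != null) {
    PaymentService service = bundleContext.getService(reference);
    // use the service, then release it
    bundleContext.ungetService(reference);
}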
In OSGi terminology a "component" is like a run-time service. Each component has an implementation class, and can optionally implement a public interface, effectively providing this "service". This aspect of OSGi is sometimes likened to a service registry pattern.
Components in OSGi are, by definition, provided by a bundle. A bundle may contain/provide multiple components. While by itself a bundle may not provide a service, components/declarative services are used to make OSGi more service oriented. You are under no obligation to use components/services.