For a new project I'm looking at what OSGi has to offer in terms of dependency injection, and I kinda like iPOJO (pure annotation-based, not XML-backed).
However, from a testing perspective, Blueprint might be better, since for different test cases (functional testing) it might be enough to rewrite the Blueprint configuration and another service would be injected right away.
What are your thoughts on this subject?
Can I abandon XML-based Blueprint (I loathe XML) in favor of iPOJO without sacrificing flexibility in terms of testing?
Yes, you can. I don't think you will sacrifice anything.
In fact, iPOJO is more powerful than Blueprint: it supports things like field injection, service lifecycle control, and Configuration Admin, which Blueprint does not (reference).
Blueprint, however, is part of the OSGi Enterprise Specification, if that matters.
I have not worked with Blueprint, but just looking at its specification -- unless you are a JavaBeans person, I would stay away. Also, I prefer iPOJO over DS, as it seems to be a lot smarter in many cases and just does the right thing.
Regarding iPOJO unit testing, you can rely on constructors to inject mock services and effectively test the behavior of your component.
So, two choices:
either you add an additional constructor to inject the dependencies you need as mocks,
or your component already uses constructor injection, and then you just call that constructor with mock objects (see the sketch below).
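To illustrate the first option, here is a minimal sketch (GreetingConsumer, GreetingService, and the test are invented names, not from the question): the component keeps iPOJO field injection for normal use and adds a package-private constructor that a plain unit test can call with a mock.

import org.apache.felix.ipojo.annotations.Component;
import org.apache.felix.ipojo.annotations.Requires;

@Component
public class GreetingConsumer {

    @Requires // injected by iPOJO at runtime
    private GreetingService service;

    public GreetingConsumer() {
        // used by the container
    }

    GreetingConsumer(GreetingService service) {
        // extra constructor used only by unit tests
        this.service = service;
    }

    public String greet(String name) {
        return service.hello(name);
    }
}

// plain JUnit test in a separate file -- no OSGi container required
public class GreetingConsumerTest {

    @org.junit.Test
    public void greetDelegatesToTheService() {
        GreetingService mock = org.mockito.Mockito.mock(GreetingService.class);
        org.mockito.Mockito.when(mock.hello("world")).thenReturn("Hello world");

        org.junit.Assert.assertEquals("Hello world", new GreetingConsumer(mock).greet("world"));
    }
}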
If you are trying to do integration testing, you can just use Pax Exam and deploy your bundle (in addition to iPOJO).
Related
I think my understanding of Spring beans is a bit off.
I was working on my project and started thinking about this situation.
Say I have a class Foo:
class Foo {
    public void doSomething(Object a, Object b) { // the input parameters don't actually matter
        // do something
    }
}
If I am using this class in another class like:
class Scheduler {
    ....
    @Autowired
    private Foo foo;

    void someMethod() {
        foo.doSomething(a, b);
    }
    ....
}
In the above case, instead of autowiring Foo, I could make doSomething static and call Foo.doSomething(a, b) directly.
I was just wondering whether there is any advantage to creating a bean, or any disadvantage to using static methods like this.
If they are equivalent, when should I go for a Spring bean and when should I simply use a static method?
Static methods are fine for small utility functions. The limitation of static code is that you can't change its behavior without changing the code itself.
Spring, on the other hand, gives you flexibility.
IoC. Your classes don't know about the exact implementation of their dependencies; they just rely on the API defined by an interface. All wiring is specified in configuration, which can differ for production, test, and other environments.
The power of metaprogramming. You can change the behavior of your methods merely by marking them (via annotations or in XML). Thus you can wrap a method in a transaction, make it asynchronous or scheduled, add custom AOP interceptors, and so on (sketched below).
Spring can also instrument a POJO method to make it an endpoint for a remote web service or RPC call.
http://docs.spring.io/spring-framework/docs/current/spring-framework-reference/html/
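A rough sketch of the metaprogramming point (ReportService and its methods are invented for illustration): by merely annotating bean methods, Spring wraps them with transactional, asynchronous, or scheduled behavior, none of which a plain static call would pick up.

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    @Transactional // Spring wraps the call in a transaction
    public void storeReport(String reportId) {
        // persist via an injected repository
    }

    @Async // executed on a task executor; the caller does not block
    public void sendNotification(String recipient) {
        // ...
    }

    @Scheduled(cron = "0 0 * * * *") // invoked by the container every hour
    public void cleanUp() {
        // ...
    }
}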
Methods in Spring beans can benefit from dependency injection, whereas static methods cannot. So an ideal candidate for a static method is one that works more or less independently and is not envisioned to ever need another dependency (say, a DAO or a service).
People don't use Spring because of some narrow, specific features that could never be replaced by static classes, hand-rolled DI, or whatever. People use Spring because of the more abstract features and ideas it provides out of the box.
Here is a nice quote from someone's blog:
Following are some of the major benefits offered by the Spring Framework:
Spring Enables POJO Programming. Spring enables programmers to develop enterprise-class applications using POJOs. With Spring, you are able to choose your own services and persistence framework. You program in POJOs and add enterprise services to them with configuration files. You build your program out of POJOs and configure it, and the rest is hidden from you.
Spring Provides Better Leverage. With Spring, more work can be done with each line of code. You code faster and maintain less. There's no hand-written transaction processing: Spring lets you handle that through configuration. You don't have to close the session to manage resources, and you don't have to do the plumbing configuration on your own. You are also free to handle exceptions at the most appropriate place, rather than being forced to deal with them at every level, because the exceptions are unchecked.
Dependency Injection Helps Testability. Spring greatly improves your testability through a design pattern called Dependency Injection (DI). DI lets you swap a production dependency for a test dependency. Testing a Spring-based application is easy because all the environment-related and dependent code is moved into the framework.
Inversion of Control Simplifies JDBC. JDBC applications are quite verbose and time-consuming to write. What helps is a good abstraction layer. With Spring you can customize a default JDBC method with a query and an anonymous inner class, which removes much of the hard work.
Spring’s coherence. Spring is a combination of ideas into a coherent whole, along with an overall architectural vision to facilitate effective use, so it is much better to use Spring than create your own equivalent solution.
Basis in existing technologies. The Spring Framework builds on existing technologies such as logging frameworks, ORM frameworks, Java EE, JDK timers, Quartz, and various view technologies.
During unit testing you have more flexibility with a bean, because you can easily mock its methods. That is not the case with static methods, where you may have to resort to PowerMock (which I recommend you stay away from if you can).
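A hedged sketch of the difference (it assumes Scheduler takes Foo through a constructor, which the snippet above does not show): with a bean, a plain Mockito mock is enough, whereas a static Foo.doSomething would need PowerMock or Mockito's newer mockStatic.

import static org.mockito.Mockito.*;

public class SchedulerTest {

    @org.junit.Test
    public void someMethodDelegatesToFoo() {
        Foo foo = mock(Foo.class);                // plain mock of the injected bean
        Scheduler scheduler = new Scheduler(foo); // assumed constructor, for illustration

        scheduler.someMethod();

        verify(foo).doSomething(any(), any());    // easy to verify the interaction
    }
}
// If doSomething were static, none of this would work without PowerMock
// (or Mockito's mockStatic from mockito-inline).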
It actually depends on the role of the component you are referring to. Is this feature:
An internal tool? Then you can use static methods (you wouldn't wrap Math.abs or String.trim in a bean).
Or a module of the project? Then design it as a bean/module class (a DAO class is best kept modular so you can change or mock it easily).
Overall, you should decide, with respect to your project design, what is a bean and what is not. I think many devs put too much into beans by default and forget that every bean is a public API that will be harder to maintain when refactoring (i.e. restricted visibility is a good thing).
In general, several answers already describe the advantages of using Spring beans, so I won't expand on that. Also note that you don't need Spring to use a bean/module design. Here are the main reasons not to use it:
Type safety: Spring beans are wired together only at runtime. Without Spring, you (can) get much stronger guarantees at compile time.
It can be easier to follow your code, as there is no indirection due to IoC.
You don't need the additional Spring dependencies, which are quite heavy.
Obviously, point (3) holds only if you don't use Spring at all in your project/library.
Points (1) and (2) really depend on how you code, and the most important thing is to write and maintain clean, readable code. Spring provides a framework that forces you to follow a standard that many people like. I personally don't, because of (1) and (2), but I have seen that in heterogeneous dev teams it is better to use it than nothing. So, if you are not using Spring, you have to follow some strong coding guidelines.
First some background:
I'm working on some webapp prototype code based on Apache Sling, which is OSGi-based and runs on Apache Felix. I'm still relatively new to OSGi, even though I think I've grasped most concepts by now. However, what puzzles me is that I haven't been able to find a "full" dependency injection (DI) framework. I've successfully employed rudimentary DI using Declarative Services (DS). But my understanding is that DS is used to -- how do I put this? -- wire OSGi-registered services and components together. And for that it works fine, but I personally use DI frameworks like Guice to wire entire object graphs together and put objects in the correct scopes (think @RequestScoped or @SessionScoped, for example). However, none of the OSGi-specific frameworks I've looked at seem to support this concept.
I've started reading about OSGI blueprints and iPOJO but these frameworks seem to be more concerned with wiring OSGI services together than with providing a full DI solution. I have to admit that I haven't done any samples yet, so my impression could be incorrect.
I've also experimented with Peaberry, an extension to Guice; however, I found documentation very hard to find, and while I got basic DI working, a lot of guice-servlet's advanced functionality (automatic injection into filters, servlets, etc.) didn't work at all.
So, my questions are the following:
How do declarative services compare to "traditional" DI like Guice or Spring? Do they solve the same problem or are they geared towards different problems?
All OSGI specific solutions I've seen so far lack the concept of scopes for DI. For example, Guice + guice-servlet has request scoped dependencies which makes writing web applications really clean and easy. Did I just miss that in the docs or are these concerns not covered by any of these frameworks?
Are JSR 330 and OSGI based DI two different worlds? iPOJO for example brings its own annotations and Felix SCR Annotations seem to be an entirely different world.
Does anybody have experience with building OSGI based systems and DI? Maybe even some sample code on github?
Does anybody use different technologies like Guice and iPOJO together or is that just a crazy idea?
Sorry for the rather long question.
Any feedback is greatly appreciated.
Updates
Scoped injection: scoped injection is a useful mechanism to have objects from a specific lifecycle automatically injected. Think, for example, of code that relies on a Hibernate Session object created in a servlet filter. By marking the dependency, the container automatically rebuilds the object graph for that scope. Maybe there are just different approaches to that?
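For example, a rough Guice sketch of what I mean (OrderRepository is an invented name; it assumes guice-servlet's GuiceFilter is installed so the request scope exists, and a filter still closes the Session at the end of the request):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.servlet.RequestScoped;

// binds a per-request Hibernate Session so any object built during the
// request gets the Session created for that request
public class PersistenceModule extends AbstractModule {

    @Provides
    @RequestScoped
    Session provideSession(SessionFactory factory) {
        return factory.openSession();
    }
}

// a consumer (separate file) simply declares the dependency; guice-servlet
// supplies the request-scoped instance when the object graph is built
public class OrderRepository {

    private final Session session;

    @javax.inject.Inject
    public OrderRepository(Session session) {
        this.session = session;
    }
}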
JSR 330 vs DS: from all your excellent answers I see that these are two different things. That raises the question of how to deal with third-party libraries and frameworks that use JSR 330 annotations when they are used in an OSGi context. What's a good approach? Running a JSR 330 container within the bundle?
I appreciate all your answers, you've been very helpful!
Overall approach
The simplest way to have dependency injection with Apache Sling, and the one used throughout the codebase, is to use the maven-scr-plugin.
You can annotate your java classes and then at build time invoke the SCR plugin, either as a Maven plugin, or as an Ant task.
For instance, to register a servlet you could do the following:
@Component // signal that it's OSGi-managed
@Service(Servlet.class) // register as a Servlet service
public class SampleServlet implements Servlet {

    @Reference
    SlingRepository repository; // get a reference to the repository
}
Specific answers
How do declarative services compare to "traditional" DI like Guice or Spring? Do they solve the same problem or are they geared towards different problems?
They solve the same problem - dependency injection. However (see below) they are also built to take into account dynamic systems where services can appear or disappear at any time.
All OSGI specific solutions I've seen so far lack the concept of scopes for DI. For example, Guice + guice-servlet has request scoped dependencies which makes writing web applications really clean and easy. Did I just miss that in the docs or are these concerns not covered by any of these frameworks?
I haven't seen any approach in the SCR world to add session-scoped or request-scoped services. However, SCR is a generic approach, and scoping can be handled at a more specific layer.
Since you're using Sling, I think there will be little need for session- or request-scoped bindings, since Sling has built-in objects for each request which are created appropriately for the current user.
One good example is the JCR Session. It is automatically constructed with the correct privileges, and in practice it acts as a request-scoped DAO. The same goes for the Sling ResourceResolver.
If you find yourself needing per-user work the simplest approach is to have services which receive a JCR Session or a Sling ResourceResolver and use those to perform the work you need. The results will be automatically adjusted for the privileges of the current user without any extra effort.
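A minimal sketch of that pattern (PageService and the property name are invented for illustration): the service itself is a plain singleton, but each call receives the caller's request-scoped ResourceResolver, so the result automatically respects that user's privileges.

import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Service;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

@Component
@Service(PageService.class)
public class PageService {

    public String readTitle(ResourceResolver resolver, String path) {
        Resource resource = resolver.getResource(path); // null if the current user may not see it
        return resource == null ? null : resource.getValueMap().get("jcr:title", String.class);
    }
}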
Are JSR 330 and OSGI based DI two different worlds? iPOJO for example brings its own annotations and Felix SCR Annotations seem to be an entirely different world.
Yes, they're different. Keep in mind that although Spring and Guice are more mainstream, OSGi services are more complex and support more use cases. In OSGi, bundles (and implicitly services) are free to come and go at any time.
This means that when a component depends on a service which just became unavailable, your component is deactivated. Or, when you hold a list of components (for instance, Servlet implementations) and one of them is deactivated, you are notified of that. To my knowledge, neither Spring nor Guice supports this, as their wirings are static.
That's a great deal of flexibility which OSGi gives you.
Does anybody have experience with building OSGI based systems and DI? Maybe even some sample code on github?
There's a large number of samples in the Sling Samples SVN repository. You should find most of what you need there.
Does anybody use different technologies like Guice and iPOJO together or is that just a crazy idea?
If you have frameworks which are configured with JSR 330 annotations it does make sense to configure them at runtime using Guice or Spring or whatever works for you. However, as Neil Bartlett has pointed out, this will not work cross-bundles.
I'd just like to add a little more information to Robert's excellent answer, particularly with regard to JSR330 and DS.
Declarative Services, Blueprint, iPOJO and the other OSGi "component models" are primarily intended for injecting OSGi services. These are slightly harder to handle than regular dependencies because they can come and go at any time, including in response to external events (e.g. network disconnected) or user actions (e.g. bundle removed). Therefore all these component models provide an additional lifecycle layer over pure dependency injection frameworks.
This is the main reason why the DS annotations are different from the JSR330 ones... the JSR330 ones don't provide enough semantics to deal with lifecycle. For example they say nothing about:
When should the dependency be injected?
What should we do when the dependency is not currently available (i.e., is it optional or mandatory)?
What should we do when a service we are using goes away?
Can we dynamically switch from one instance of a service to another?
etc...
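For contrast, here is a hedged sketch using the standard OSGi DS annotations (Worker is an invented component; LogService is just an example dependency): the extra attributes express exactly the lifecycle decisions listed above, which @Inject alone cannot.

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component
public class Worker {

    // optional and dynamic: the component stays active if the service disappears,
    // and it can switch to a replacement instance at any time
    @Reference(cardinality = ReferenceCardinality.OPTIONAL,
               policy = ReferencePolicy.DYNAMIC)
    private volatile org.osgi.service.log.LogService log;
}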
Unfortunately because the component models are primarily focused on services -- that is, the linkages between bundles -- they are comparatively spartan with regard to wiring up dependencies inside the bundle (although Blueprint does offer some support for this).
There should be no problem using an existing DI framework for wiring up dependencies inside the bundle. For example I had a customer that used Guice to wire up the internal pieces of some Declarative Services components. However I tend to question the value of doing this, because if you need DI inside your bundle it suggests that your bundle may be too big and incoherent.
Note that it is very important NOT to use a traditional DI framework to wire up components between bundles. If the DI framework needs to access a class from another bundle then that other bundle must expose its implementation details, which breaks the encapsulation that we seek in OSGi.
I have some experience in building applications using Aries Blueprint. It has some very nice features regarding OSGi services and config admin support.
If you are looking for some great examples, have a look at the code of Apache Karaf, which uses Blueprint for all of its wiring.
See http://svn.apache.org/repos/asf/karaf/
I also have some tutorials for Blueprint and Apache Karaf on my website:
http://www.liquid-reality.de/display/liquid/Karaf+Tutorials
In your environment with embedded Felix it will be a bit different, as you do not have the management features of Karaf, but you simply need to install the same bundles and it should work nicely.
I can recommend Bnd and, if you use the Eclipse IDE, especially Bndtools as well. With those you can avoid describing DS in XML and use annotations instead. There is a dedicated Reference annotation for DI, which also takes a filter so that you can reference only a specific subset of services.
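Roughly like this, using the standard DS annotations that Bndtools processes (MessageDispatcher, MessageSender, and the protocol property are invented names):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component
public class MessageDispatcher {

    // the target filter restricts injection to the services whose properties match
    @Reference(target = "(protocol=smtp)")
    private MessageSender sender;
}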
I am using OSGi and DI in my current project. I chose Gemini Blueprint because it is the successor of Spring Dynamic Modules; based on that, I suggest you read Spring Dynamic Modules in Action. The book will help you understand how to build the architecture and why it is a good approach :)
Running into a similar architecture problem here - as Robert mentioned above in his answer:
If you find yourself needing per-user work the simplest approach is to have services which receive a JCR Session or a Sling ResourceResolver and use those to perform the work you need. The results will be automatically adjusted for the privileges of the current user without any extra effort.
Extrapolating from this (and what I am currently coding), one approach would be to add a resourceResolver parameter to any @Service methods so that you can pass the appropriately request-scoped object down the execution chain.
Specifically, we've got a XXXXService / XXXXDao layer, called from XXXXServlet / XXXXViewHelper / JSP equivalents. By managing all of these components via the OSGi @Service annotations, we can easily wire up the entire stack.
The downside here is that you need to litter your interface design with ResourceResolver or Session parameters.
Originally we tried to inject ResourceResolverFactory into the DAO layer, so that we could easily access the session at will via the factory. However, we are interacting with the session at multiple points in the hierarchy, and multiple times per request. This resulted in session-closed exceptions.
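Roughly what we are doing now looks like this (OrderService, the path, and the method names are invented; the SCR registration annotations on the servlet are omitted for brevity): the servlet grabs the request's ResourceResolver and threads it through the service and on to the DAO, and nobody closes it because Sling owns its lifecycle.

import java.io.IOException;

import org.apache.felix.scr.annotations.Reference;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;

public class OrderServlet extends SlingSafeMethodsServlet {

    @Reference
    private OrderService orderService;

    @Override
    protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response)
            throws IOException {
        ResourceResolver resolver = request.getResourceResolver(); // managed by Sling; do not close it here
        response.getWriter().write(orderService.loadOrder(resolver, "/content/orders/42"));
    }
}

// invented service interface; the DAO receives the same resolver in turn
public interface OrderService {
    String loadOrder(ResourceResolver resolver, String path);
}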
Is there a way to get at that per-request ResourceResolver reliably without having to pass it into every service method?
With request-scoped injection on the service layer, you could instead pass the ResourceResolver as a constructor argument and keep it in an instance variable. Of course, the downside there is that you would have to think about request-scoped vs. prototype-scoped service code and separate them accordingly.
This seems like a common problem wherever you want to separate concerns into service/DAO code and keep the JCR interactions in the DAO, analogous to Hibernate: how can you easily get at the per-request Session to perform repository operations?
I'm reading the Arquillian Reference Guide which is very well written, however in the chapter that talks about setting up dependency injection I can't find where you actually specify the beans/bindings.
Most of the Arquillian CDI code examples show the use of Java's @Inject annotation. So I'm just wondering: where do I define these beans/DI mappings/bindings, and how do I configure Arquillian to use them?
In Spring DI, you specify a bean descriptor, like spring-config.xml. In Guice you implement a Module and define its configure(Binder) method. What does this look like in Arquillian-land when using javax.inject.Inject? Thanks in advance.
The short answer: there is no need to define bean mappings in CDI, because CDI works with annotations exclusively. You can add extra information in configuration files, but that is usually not required.
The long answer is best taken from this excellent introduction into CDI.
I think you need to use the CDI "alternatives" mechanism:
Alternatives are beans whose implementation is specific to a particular client module or deployment scenario.
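A minimal sketch of how that looks (PaymentService and friends are invented names, each type in its own file): the binding is purely type-based, and the alternative is switched on in the test archive's beans.xml rather than in a mapping file.

public interface PaymentService {
    void pay(long amountInCents);
}

public class RealPaymentService implements PaymentService {
    public void pay(long amountInCents) { /* talk to the real gateway */ }
}

@javax.enterprise.inject.Alternative // only used when listed under <alternatives> in beans.xml
public class MockPaymentService implements PaymentService {
    public void pay(long amountInCents) { /* record the call for assertions */ }
}

public class Checkout {

    @javax.inject.Inject
    PaymentService paymentService; // CDI resolves this by type; no bean descriptor needed
}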
I've recently studied about Guice in a University course, and have seen the Google I/O video about it. In the video, they claim to use it in every Google project, including Wave, etc. I was wondering - is Guice really that ubiquitous? Is it really a must-know-must-use for programmers in Java? Should I always use it over a factory?
Thanks
Guice is a dependency injection (DI) framework. When using Java, you should definitely consider a DI framework. DI can save you a lot of boilerplate code for (web) security/authentication, transaction management, logging, and database access, and it results in cleaner code.
Alternatively, you could consider Spring. Guice is, or at least was, easier to use because it didn't rely on XML so much, but Spring has caught up in its latest versions (annotations, JavaConfig, etc.).
Either way, use a DI framework over your own factory code, transaction boilerplate (transaction.start ... commit, finally ... etc.), singletons (static getInstance methods), and so on.
is Guice really that ubiquitous? Is it really a must-know-must-use for programmers in Java?
Outside of Google - no, not really. I'm not saying it's not a good product, it just doesn't seem very widely used right now. There are other, more established frameworks out there that provide dependency injection, like Spring or EJB. The main difference is that Guice only does dependency injection.
Should I always use it over a factory?
Of course not. Dependency injection is a useful pattern, but as with all useful tools, there's a right and a wrong time to use it.
I've read about Google Guice, and understand the general issues with other approaches to dependency injection, however I haven't yet seen an example of someone using Guice "in practice" where its value becomes clear.
I'm wondering if anyone is aware of any such examples?
Easier unit testing is only the high-level advantage of using Google Guice; some people might not even write unit tests in their projects. People have been using Spring/dependency injection for more than just unit testing.
The low-level advantage of using Google Guice is coupling: the classes in your project can be loosely coupled to each other. I can provide one class to another class without them depending directly on each other.
Consider this example:
public class A {
}
public class B {
A a = new A();
}
Class B is tightly coupled to class A; in other words, it depends on class A's existence.
But with Guice I can instead make it loosely coupled like this:
public class B {

    private A a;

    @Inject
    public B(A a) {
        this.a = a;
    }
}
Class B is now loosely coupled to A, and Guice is responsible for providing the instance of A instead of B instantiating it itself. You can extend this to provide an interface of A to B, and the implementation can be a mock object if you want to unit test your app.
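A rough sketch of that last step (ProductionModule and AImpl are invented names; it also works if A stays a concrete class, since Mockito can mock classes too): the production wiring lives in a Module, while a unit test bypasses Guice entirely and hands B a mock.

import com.google.inject.AbstractModule;

// production wiring: tell Guice which implementation of A to use
public class ProductionModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(A.class).to(AImpl.class);
    }
}

// unit test (separate file): no injector needed, constructor injection does the job
public class BTest {

    @org.junit.Test
    public void bUsesWhateverAItIsGiven() {
        A mock = org.mockito.Mockito.mock(A.class);
        B b = new B(mock);
        // ... exercise b and verify the interactions on the mock
    }
}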
Having said that, we have only been discussing the benefits of dependency injection so far. Beyond dependency injection, the benefits of using Google Guice are:
Guice has a very clean implementation of constructor injection. As you can see from the example, you just add the @Inject annotation to the constructor.
Guice also supports setter injection using the same annotation.
Annotation-based injection is a very clean approach compared to the XML-based injection of some other DI implementations.
All of the dependency injection and configuration is done in Java, so you get type safety in your application by default.
Guice has a very lightweight implementation of aspect-oriented programming (you could also call it a wrapper around the AOP Alliance AOP API). The good thing about it is that it doesn't generate stubs or anything of the sort (sketched below).
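A hedged sketch of that AOP support (Timed is an invented marker annotation): you bind an AOP Alliance MethodInterceptor to matching methods, and Guice applies it to the objects it creates, with no generated stubs.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import com.google.inject.AbstractModule;
import com.google.inject.matcher.Matchers;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Timed {}

public class TimingModule extends AbstractModule {
    @Override
    protected void configure() {
        MethodInterceptor timer = new MethodInterceptor() {
            public Object invoke(MethodInvocation invocation) throws Throwable {
                long start = System.nanoTime();
                try {
                    return invocation.proceed();
                } finally {
                    System.out.println(invocation.getMethod() + " took "
                            + (System.nanoTime() - start) + " ns");
                }
            }
        };
        // intercept every @Timed method on any object Guice creates
        bindInterceptor(Matchers.any(), Matchers.annotatedWith(Timed.class), timer);
    }
}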
That's the overview. As you get deeper into Guice, there are many more good things about it. A simple real-life example: if you are using GWT with an MVP implementation, the components/widgets in your GWT application end up very loosely coupled rather than tightly integrated with each other.
Maybe you should go back in time and look closer at the problems Guice wanted to solve. To understand the motivations behind Guice, the Bob Lee: I Don't Get Spring news on TheServerSide.COM (and its comments) is the perfect starting point. Then, go ahead with the announcement of Google Guice, A Java Dependency Injection Framework (and the comments) and the Tech Talk: Bob Lee on Google Guice (and the comments).
Personally, I shared the concerns about evil XML: XML configuration hell, XML and possible runtime errors, error-prone and refactoring-averse string identifiers, etc. Actually, I believe that skeptical opinions on Spring and its competitors were good for everybody (including Spring). I was thus happy to see a new player in the DI framework landscape, especially a modern framework leveraging Java 5 features (generics and annotations for the sake of type safety).
And because Google runs Guice in mission-critical applications (almost every Java-based application is also a Guice-based application: AdWords, Google Docs, Gmail, and even YouTube, as reported by "Crazy" Bob Lee in Guice²), I can't believe Guice is totally wrong and doesn't provide any value. Sadly, I don't think Google will publish much code from these applications as examples... But you may find interesting things in the list of applications that use Guice and/or the list of third-party Guice add-ons. Or check out the books mentioned in Guice². Or ask Bob :)
I think the advantage comes with coding to interfaces, testing, and proxies.
Coding to an interface helps keep your code layered properly, makes it possible to inject mocks for testing, and lets you generate proxies automagically so client code need not worry about implementation.
This is true for Guice, Spring, PicoContainer, and all DI frameworks.
Succinct enough?