As I am learning JSF 2, I realized I am not sure what the backing components should be. From a design point of view, what is the difference between EJBs and @ManagedBeans?
In the end I am going to use JPA, so EJB is a natural choice for the business layer. Is it a good practice to use EJB directly from JSF (as explained here)?
At the moment I'm leaning towards using @ManagedBeans for components which don't need access to the business layer (e.g. view helpers) or which deal with request/session data. For other purposes, e.g. listing something in a grid, I would access the EJB directly.
Is this a good design? Should I use @ManagedBeans for all backing beans for the sake of clean layer separation, even if in some cases they only delegate to the EJB?
Very valid question, but I guess the answer depends on the "strictness" of the approach for your project. There is indeed a bit of redundancy between JSF backing beans and EJBs, as both can implement some business logic.
In an ideal usage of JSF features with converters, rendered attributes, validators, etc., the backing bean could indeed be clean business-logic code. But in practice some presentation-related logic frequently leaks into it.
This presentation-related logic should ideally not be in an EJB. It may depend on the faces package, but not necessarily; it's what it does that makes it presentation-related or not.
A unified component model has some advantages though, and it's the approach taken by Seam and Spring. In both cases, business components with declarative transactions, etc. can be used directly in JSF (Spring does not use EJB but provides a similar model). EJB purists would however say that you should dissociate the two.
So to me it's ultimately a question of taste and of the size of the project. I can imagine that for a small/medium project, using EJBs in JSF works fine. For a bigger project where this strictness is critical, make sure you don't mix up the layers.
In Java EE 6 there is some overlap between the various managed beans: JSF managed beans (@ManagedBean), CDI managed beans (@Named) and EJB beans (@Stateless, @Stateful, @Singleton).
In the view layer I don't see any particular advantage in sticking with @ManagedBean. The CDI variant @Named seems to be able to do the same and more, e.g. provide you with access to the conversation scope.
The current thinking seems to be that eventually the EJB component model will also be retrofitted as a set of CDI annotations. Especially expert group member Reza Rahman frequently hints at this. See e.g. Dependency Injection in Java EE 6 - Part 1
For the time being this has not happened, so EJB beans remain the easiest place to put business logic, especially when JPA is used for persistence.
Nevertheless, whether or not CDI will obtain the capabilities of EJB, IMHO it's still a best practice to use a separate bean for the "backing bean" concept and a separate bean for the "business logic".
The backing bean can be really slim, just containing some references to model objects and services (EJBs). Action methods of the backing bean can delegate almost directly to the services, but their added value is in providing the user with feedback (adding FacesMessages upon success or failure) and doing small UI modifications (e.g. setting to false a boolean that displayed some dialog).
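For illustration, here is a minimal sketch of such a slim backing bean delegating to an EJB service; OrderBacking, OrderService and Order are hypothetical names, not something from the question:

import javax.ejb.EJB;
import javax.ejb.EJBException;
import javax.faces.application.FacesMessage;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;
import javax.faces.context.FacesContext;

@ManagedBean
@ViewScoped
public class OrderBacking {

    @EJB
    private OrderService orderService; // the business logic lives in the EJB

    private Order order = new Order(); // model object bound to the view
    private boolean dialogVisible;     // small piece of pure UI state

    public String save() {
        try {
            orderService.save(order); // delegate straight to the service
            FacesContext.getCurrentInstance().addMessage(null,
                    new FacesMessage("Order saved"));
            dialogVisible = false;     // UI-only concern stays in the backing bean
            return "orders?faces-redirect=true";
        } catch (EJBException e) {
            FacesContext.getCurrentInstance().addMessage(null,
                    new FacesMessage(FacesMessage.SEVERITY_ERROR, "Save failed", null));
            return null;
        }
    }

    // getters and setters omitted
}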
The Services (business logic) should not know anything about any particular presentation. They should be equally well usable from JSF backing beans, JAX-RS, Servlets, standalone Java SE remote clients or whatever.
Even if all beans eventually become CDI beans, this basic division of responsibility does not change.
Interesting article, I didn't know about that. However, to me this article smells more like a rant against JSF managed beans. It also tightly couples EJB with JSF. It's maybe interesting in small (personal) applications, but likely not in real-world SOA applications where you'd like to have full control over the EJBs being serviced.
As to which one to use for what, I would just stick to @ManagedBean to denote a JSF model, which is tied to one or more specific JSF views. You can perfectly well use @EJBs for non-JSF-specific business logic and/or data models. You can inject @EJBs into a JSF model and delegate to them in action methods and maybe also getters/setters, but you should not do it the other way round, i.e. do not define the JSF model in an @EJB; this leads to tight coupling and would make the @EJBs unusable outside the JSF context. So far your design sounds good, as long as you take care not to import anything from the javax.faces package in the @EJB class.
By calling EJB from your XHTML, you are introducing tight coupling between the choice of implementation in the view and business tier.
If you use a managed bean to call an EJB, [or even, put in a business delegate], you will be able to change out your business tier completely to Spring without affecting the view layer (XHTML).
It is technically possible, and very easy (via JSR 299), to use an EJB as your managed bean, and that is probably what these vendors want you to do: get glued to specifics. But is that the correct thing to do? No.
Related
I have been working on a few web applications and REST web services recently (Spring IoC/MVC/Data JPA etc). They usually follow the same pattern: Controller classes --> Service classes (which have several "utility"/business logic classes autowired) --> Spring Data Repositories.
Pretty much all of the classes above are Spring singletons. I feel like this makes the code and some functions within a class dirtier; for example, I can't have state in a class, I need to pass a lot of parameters between methods, and I don't really like having more than 1-2 parameters (although I know sometimes it is necessary).
I was wondering how this problem is overcome in the big (e.g. enterprise) kind of application.
Is it a common practice to use non-Spring managed classes in the Spring application? If so, how do you pass dependencies into it (the ones that would normally be autowired)? If I use constructor injection for example, then I need to autowire all necessary dependencies into the class that creates the object and I wanted to avoid that. Also, I don't really want to be messing with load time weaving etc. to autowire beans into non-Spring objects.
Is using prototype scoped beans a good solution? The only thing is that I need to use AOP's scoped proxies (or method injection etc) to make sure that I get a new instance for any bean that is autowired into a singleton in the first place. Is that a "clean" and reliable option (i.e., is it certain that there will be no concurrency type of issues)? Can I still autowire any singletons into those classes with no issues?
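For concreteness, here is a sketch of the prototype-scope option mentioned above, using @Lookup method injection; ReportBuilder and ReportService are hypothetical names, not from my actual project:

import org.springframework.beans.factory.annotation.Lookup;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Component
@Scope("prototype")              // a new instance per lookup, so per-instance state is safe
class ReportBuilder {
    private final StringBuilder state = new StringBuilder();

    void append(String line) { state.append(line).append('\n'); }
    String build()           { return state.toString(); }
}

@Service
class ReportService {            // a normal singleton

    @Lookup                      // Spring overrides this method to return a fresh prototype
    ReportBuilder reportBuilder() {
        return null;             // body is replaced by the container
    }

    String createReport() {
        ReportBuilder builder = reportBuilder(); // new, non-shared instance per call
        builder.append("header");
        builder.append("body");
        return builder.build();
    }
}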
Does anyone that worked on a large system (and actually managed to keep the structure not "bloated" and clean) have any recommendations? Maybe there are some patterns I am not aware and could use?
Any help appreciated.
Spring is well designed, and you should not worry about its IoC implementation of DI. The pattern that you have mentioned (Controller layer -> Service layer -> Data Access layer) is good practice, and it is fine that these singleton objects do not have state, following the OOP rule: "Think about objects as service providers that do one thing well". Model objects can have state, such as JPA entities used for storing something in the database. It is not inevitable that in large systems you will have the dirty code you mention, like passing a lot of parameters; it just depends on your design decisions, which will need deeper consideration.
I think my understanding of spring beans is a bit off.
I was working on my project and I was thinking about this situation.
Say I have a class Foo:

class Foo {
    public void doSomething(Object a, Object b) { // the input parameters don't actually matter
        // do something
    }
}
If I am using this class in another class like:

class Scheduler {
    ....
    @Autowired
    private Foo foo;

    public void someMethod() {
        foo.doSomething(a, b);
    }
    ....
}
In the above case, instead of autowiring Foo, I can make doSomething static and directly call Foo.doSomething(a, b).
I was just wondering if there is any advantage to creating a bean, or any disadvantage to using static methods like this?
If they are the same, when should I go for a Spring bean and when should I simply use a static method?
Static methods are OK for small utility functions. The limitation of static code is that you can't change its behavior without changing the code itself.
Spring, on the other hand, gives you flexibility.
IoC. Your classes don't know about the exact implementation of their dependencies; they just rely on the API defined by an interface. All wiring is specified in configuration, which can be different for production/test/other environments.
The power of metaprogramming. You can change the behavior of your methods by merely marking them (via annotations or in XML). Thus, you can wrap a method in a transaction, make it asynchronous or scheduled, add custom AOP interceptors, etc. (see the sketch after the link below).
Spring can instrument your POJO method to make it an endpoint for a remote web service/RPC.
http://docs.spring.io/spring-framework/docs/current/spring-framework-reference/html/
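As a concrete sketch of the metaprogramming point above, marking methods is enough to change their behavior; AccountService is a hypothetical class, and the example assumes the corresponding features (@EnableTransactionManagement, @EnableAsync, @EnableScheduling) are switched on elsewhere in the configuration:

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    @Transactional              // wrapped in a transaction by a Spring proxy
    public void transfer(long fromId, long toId, long amount) {
        // ... debit one account, credit the other ...
    }

    @Async                      // executed on a task executor, the caller does not block
    public void sendStatementEmail(long accountId) {
        // ... build and send the e-mail ...
    }

    @Scheduled(cron = "0 0 2 * * *")  // run every night at 02:00
    public void archiveOldTransactions() {
        // ... housekeeping ...
    }
}

None of this is possible with plain static methods without writing the plumbing yourself.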
Methods in Spring beans can benefit from dependency injection whereas static methods cannot. So, an ideal candidate for a static method is one that does things more or less independently and is not envisioned to ever need any other dependency (say a DAO or service).
People use Spring not because of some narrow, specific features that cannot be replaced by static classes or DI or whatever. People use Spring because of the more abstract features and ideas it provides out of the box.
Here is a nice quote from someone's blog:
Following are some of the major benefits offered by the Spring Framework:
Spring Enables POJO Programming. Spring enables programmers to develop enterprise-class applications using POJOs. With Spring, you are able to choose your own services and persistence framework. You program in POJOs and add enterprise services to them with configuration files. You build your program out of POJOs and configure it, and the rest is hidden from you.
Spring Provides Better Leverage. With Spring, more work can be done with each line of code. You code faster and maintain less. There's no hand-written transaction processing; Spring allows you to handle that with configuration. You don't have to close the session to manage resources, and you don't have to do that configuration on your own. Besides, you are free to handle exceptions at the most appropriate place, not being forced to handle them at this level, as the exceptions are unchecked.
Dependency Injection Helps Testability. Spring greatly improves your testability through a design pattern called Dependency Injection (DI). DI lets you code a production dependency and a test dependency. Testing of a Spring-based application is easy because all the related environment and dependent code is moved into the framework.
Inversion of Control Simplifies JDBC. JDBC applications are quite verbose and time-consuming to write. What may help is a good abstraction layer. With Spring you can customize a default JDBC method with a query and an anonymous inner class to take away much of the hard work (see the JdbcTemplate sketch after this list).
Spring’s coherence. Spring is a combination of ideas into a coherent whole, along with an overall architectural vision to facilitate effective use, so it is much better to use Spring than create your own equivalent solution.
Basis in existing technologies. The Spring framework is based on existing technologies like logging frameworks, ORM frameworks, Java EE, JDK timers, Quartz and other view-related technologies.
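To make the JDBC point above concrete, here is a minimal sketch using Spring's JdbcTemplate with an anonymous inner RowMapper; UserDao and the users table are hypothetical:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class UserDao {

    private final JdbcTemplate jdbcTemplate;

    public UserDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public List<String> findActiveUserNames() {
        // JdbcTemplate opens and closes the connection and translates SQLExceptions;
        // we only supply the query and the row mapping.
        return jdbcTemplate.query(
                "SELECT name FROM users WHERE active = 1",
                new RowMapper<String>() {
                    public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return rs.getString("name");
                    }
                });
    }
}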
During unit testing you have more flexibility using beans, because you can easily mock your bean's methods. That is not the case with static methods, where you may have to resort to PowerMock (which I recommend you stay away from if you can).
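For example, a quick sketch of how an injected bean is mocked in a plain unit test with Mockito (PriceCalculator and InvoiceService are hypothetical):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

interface PriceCalculator { double priceFor(String item); }

class InvoiceService {                       // the bean under test, using constructor injection
    private final PriceCalculator calculator;
    InvoiceService(PriceCalculator calculator) { this.calculator = calculator; }
    double totalFor(String item) { return calculator.priceFor(item); }
}

public class InvoiceServiceTest {

    @Test
    public void usesMockedCollaborator() {
        // The collaborator is an injected bean, so replacing it with a mock is trivial.
        PriceCalculator calculator = mock(PriceCalculator.class);
        when(calculator.priceFor("book")).thenReturn(10.0);

        InvoiceService service = new InvoiceService(calculator);
        assertEquals(10.0, service.totalFor("book"), 0.001);
        // Had priceFor() been static, we would need PowerMock or a refactoring.
    }
}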
It actually depends on the role of the component you are referring to. Is this feature:
An internal tool? Then you can use static methods (you wouldn't wrap Math.abs or String.trim in a bean).
Or a module of the project? Then design it as a bean/module class (a DAO class is best kept modular so it can be changed/mocked easily).
Globally, you should decide, with respect to your project design, what is a bean and what is not. I think many developers put too much stuff inside beans by default and forget that every bean is a public API that will be more difficult to maintain when refactoring (i.e. restricted visibility is a good thing).
In general, there are already several answers describing the advantages of using Spring beans, so I won't expand on that. Also note that you don't need Spring to use a bean/module design. Here are the main reasons not to use it:
Type safety: Spring beans are wired together "only" at runtime. Without Spring, you (can) get many more guarantees at compile time.
It can be easier to follow your code, as there is no indirection due to IoC.
You don't need the additional Spring dependency/ies, which are quite heavy.
Obviously, (3) holds only if you don't use Spring at all in your project/lib.
Also, (1) and (2) really depend on how you code. The most important thing is to have and maintain clean, readable code. Spring provides a framework that forces you to follow standards that many people like. I personally don't, because of (1) and (2), but I have seen that in heterogeneous dev teams it is better to use it than nothing. So, if you are not using Spring, you have to follow some strong coding guidelines.
First some background:
I'm working on some webapp prototype code based on Apache Sling, which is OSGi-based and runs on Apache Felix. I'm still relatively new to OSGi, even though I think I've grasped most concepts by now. However, what puzzles me is that I haven't been able to find a "full" dependency injection (DI) framework. I've successfully employed rudimentary DI using Declarative Services (DS). But my understanding is that DS is used to wire -- how do I put this? -- OSGi-registered services and components together. And for that it works fine, but I personally use DI frameworks like Guice to wire entire object graphs together and put objects in the correct scopes (think @RequestScoped or @SessionScoped, for example). However, none of the OSGi-specific frameworks I've looked at seem to support this concept.
I've started reading about OSGi Blueprint and iPOJO, but these frameworks seem to be more concerned with wiring OSGi services together than with providing a full DI solution. I have to admit that I haven't done any samples yet, so my impression could be incorrect.
I've experimented with Peaberry, an extension to Guice; however, I found documentation very hard to find, and while I got basic DI working, a lot of guice-servlet's advanced functionality (automatic injection into filters, servlets, etc.) didn't work at all.
So, my questions are the following:
How do declarative services compare to "traditional" DI like Guice or Spring? Do they solve the same problem or are they geared towards different problems?
All OSGI specific solutions I've seen so far lack the concept of scopes for DI. For example, Guice + guice-servlet has request scoped dependencies which makes writing web applications really clean and easy. Did I just miss that in the docs or are these concerns not covered by any of these frameworks?
Are JSR 330 and OSGi-based DI two different worlds? iPOJO, for example, brings its own annotations, and the Felix SCR annotations seem to be an entirely different world.
Does anybody have experience with building OSGi-based systems and DI? Maybe even some sample code on GitHub?
Does anybody use different technologies like Guice and iPOJO together or is that just a crazy idea?
Sorry for the rather long question.
Any feedback is greatly appreciated.
Updates
Scoped injection: scoped injection is a useful mechanism for having objects with a specific lifecycle automatically injected. Think, for example, of code that relies on a Hibernate session object created as part of a servlet filter. By marking the dependency, the container will automatically rebuild the object graph. Maybe there are just different approaches to that?
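For what it's worth, here is a sketch of the kind of scoped injection I mean, using guice-servlet; it assumes a GuiceFilter/ServletModule is set up and that the Session is closed by a filter at the end of the request:

import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.servlet.RequestScoped;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class PersistenceModule extends AbstractModule {

    @Override
    protected void configure() {
        // the SessionFactory binding is omitted; assume it is bound elsewhere as a singleton
    }

    @Provides
    @RequestScoped   // one Session per HTTP request, managed by guice-servlet
    Session provideSession(SessionFactory sessionFactory) {
        return sessionFactory.openSession();
    }
}

Any object in the graph can then simply declare a Session dependency and receive the one belonging to the current request.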
JSR 330 vs DS: from all your excellent answers I see that these are two different things. That poses the question of how to deal with third-party libraries and frameworks that use JSR 330 annotations when used in an OSGi context. What's a good approach? Running a JSR 330 container within the bundle?
I appreciate all your answers, you've been very helpful!
Overall approach
The simplest way to have dependency injection with Apache Sling, and the one used throughout the codebase, is to use the maven-scr-plugin.
You can annotate your java classes and then at build time invoke the SCR plugin, either as a Maven plugin, or as an Ant task.
For instance, to register a servlet you could do the following:
@Component // signal that it's OSGi-managed
@Service(Servlet.class) // register it as a Servlet service
public class SampleServlet implements Servlet {

    @Reference
    SlingRepository repository; // get a reference to the repository

    // Servlet interface methods omitted
}
Specific answers
How do declarative services compare to "traditional" DI like Guice or Spring? Do they solve the same problem or are they geared towards different problems?
They solve the same problem - dependency injection. However (see below) they are also built to take into account dynamic systems where services can appear or disappear at any time.
All OSGI specific solutions I've seen so far lack the concept of scopes for DI. For example, Guice + guice-servlet has request scoped dependencies which makes writing web applications really clean and easy. Did I just miss that in the docs or are these concerns not covered by any of these frameworks?
I haven't seen any approach in the SCR world to add session-scoped or request-scoped services. However, SCR is a generic approach, and scoping can be handled at a more specific layer.
Since you're using Sling, I think there will be little need for session-scoped or request-scoped bindings, since Sling has built-in objects for each request which are appropriately created for the current user.
One good example is the JCR session. It is automatically constructed with the correct privileges, and it is in practice a request-scoped DAO. The same goes for the Sling ResourceResolver.
If you find yourself needing per-user work the simplest approach is to have services which receive a JCR Session or a Sling ResourceResolver and use those to perform the work you need. The results will be automatically adjusted for the privileges of the current user without any extra effort.
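A small sketch of that approach; PageListingService is a hypothetical name, and the component simply receives the caller's ResourceResolver:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Service;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

@Component
@Service(PageListingService.class)
public class PageListingService {

    // The request-scoped object is passed in by the caller (e.g. a servlet),
    // so the results automatically reflect the current user's privileges.
    public List<String> listChildNames(ResourceResolver resolver, String path) {
        List<String> names = new ArrayList<String>();
        Resource parent = resolver.getResource(path);
        if (parent != null) {
            for (Iterator<Resource> it = parent.listChildren(); it.hasNext();) {
                names.add(it.next().getName());
            }
        }
        return names;
    }
}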
Are JSR 330 and OSGi-based DI two different worlds? iPOJO for example brings its own annotations and Felix SCR Annotations seem to be an entirely different world.
Yes, they're different. You should keep in mind that although Spring and Guice are more mainstream, OSGi services are more complex and support more use cases. In OSGi, bundles (and implicitly services) are free to come and go at any time.
This means that when you have a component which depends on a service which has just become unavailable, your component is deactivated. Or when you receive a list of components (for instance, Servlet implementations) and one of them is deactivated, you are notified of that. To my knowledge, neither Spring nor Guice supports this, as their wirings are static.
That's a great deal of flexibility which OSGi gives you.
Does anybody have experience with building OSGI based systems and DI? Maybe even some sample code on github?
There's a large number of samples in the Sling Samples SVN repository. You should find most of what you need there.
Does anybody use different technologies like Guice and iPOJO together or is that just a crazy idea?
If you have frameworks which are configured with JSR 330 annotations, it does make sense to configure them at runtime using Guice or Spring or whatever works for you. However, as Neil Bartlett has pointed out, this will not work across bundles.
I'd just like to add a little more information to Robert's excellent answer, particularly with regard to JSR330 and DS.
Declarative Services, Blueprint, iPOJO and the other OSGi "component models" are primarily intended for injecting OSGi services. These are slightly harder to handle than regular dependencies because they can come and go at any time, including in response to external events (e.g. network disconnected) or user actions (e.g. bundle removed). Therefore all these component models provide an additional lifecycle layer over pure dependency injection frameworks.
This is the main reason why the DS annotations are different from the JSR330 ones... the JSR330 ones don't provide enough semantics to deal with lifecycle. For example they say nothing about:
When should the dependency be injected?
What should we do when the dependency is not currently available (i.e., is it optional or mandatory)?
What should we do when a service we are using goes away?
Can we dynamically switch from one instance of a service to another?
etc...
Unfortunately because the component models are primarily focused on services -- that is, the linkages between bundles -- they are comparatively spartan with regard to wiring up dependencies inside the bundle (although Blueprint does offer some support for this).
There should be no problem using an existing DI framework for wiring up dependencies inside the bundle. For example I had a customer that used Guice to wire up the internal pieces of some Declarative Services components. However I tend to question the value of doing this, because if you need DI inside your bundle it suggests that your bundle may be too big and incoherent.
Note that it is very important NOT to use a traditional DI framework to wire up components between bundles. If the DI framework needs to access a class from another bundle then that other bundle must expose its implementation details, which breaks the encapsulation that we seek in OSGi.
I have some experience in building applications using Aries Blueprint. It has some very nice features regarding OSGi services and config admin support.
If you search for some great examples have a look at the code of Apache Karaf which uses blueprint for all of its wiring.
See http://svn.apache.org/repos/asf/karaf/
I also have some tutorials for Blueprint and Apache Karaf on my website:
http://www.liquid-reality.de/display/liquid/Karaf+Tutorials
In your environment with the embedded Felix it will be a bit different, as you do not have the management features of Karaf, but you simply need to install the same bundles and it should work nicely.
I can recommend Bnd, and if you use the Eclipse IDE, especially Bndtools as well. With these you can avoid describing DS in XML and use annotations instead. There is a special @Reference annotation for DI, which also takes a filter so that you can reference only a specific subset of services (see the sketch below).
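A minimal sketch using the standard OSGi DS annotations as processed by Bnd/Bndtools; GreetingService and its language property are hypothetical:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// The service interface would normally live in a separate API bundle.
interface GreetingService { String greet(String name); }

@Component
public class GreetingConsumer {

    // Only services whose "language" property is "en" are candidates for injection here.
    @Reference(target = "(language=en)")
    private GreetingService greetingService;

    public void greet(String name) {
        System.out.println(greetingService.greet(name));
    }
}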
I am using OSGi and DI for my current project. I've chosen Gemini Blueprint because it is the successor of Spring Dynamic Modules; based on this, I suggest you read Spring Dynamic Modules in Action. This book will help you understand how to build the architecture and why it is good :)
Running into a similar architecture problem here - as Robert mentioned above in his answer:
If you find yourself needing per-user work the simplest approach is to have services which receive a JCR Session or a Sling ResourceResolver and use those to perform the work you need. The results will be automatically adjusted for the privileges of the current user without any extra effort.
Extrapolating from this (and from what I am currently coding), one approach would be to add a ResourceResolver parameter to any @Service methods, so that you can pass the appropriately request-scoped object down the execution chain.
Specifically, we've got an XXXXService / XXXXDao layer, called from XXXXServlet / XXXXViewHelper / JSP equivalents. So, managing all of these components via the OSGi @Service annotations, we can easily wire up the entire stack.
The downside here is that you need to litter your interface design with ResourceResolver or Session params.
Originally we tried to inject ResourceResolverFactory into the DAO layer, so that we could easily access the session at will via the factory. However, we are interacting with the session at multiple points in the hierarchy, and multiple times per request. This resulted in session-closed exceptions.
Is there a way to get at that per-request ResourceResolver reliably without having to pass it into every service method?
With request-scoped injection on the service layers, you could instead just pass the ResourceResolver as a constructor arg and use an instance variable. Of course, the downside here is that you'd have to think about request-scoped vs. prototype-scoped service code and separate them out accordingly.
This seems like it would be a common problem when you want to separate concerns into service/DAO code, leaving the JCR interactions in the DAO. Analogous to Hibernate: how can you easily get at the per-request Session to perform repository operations?
I haven't used EJB 3, but from reading a tutorial, EJB 3 looks like it is mostly for manipulating data in a database through JPA (of course, it can contain other business logic). Just curious: if no database is required, is it still beneficial to use EJB 3, or does it just add complexity to an application? Would POJOs be a better choice for the implementation?
A big part of the EJB benefits comes from transactions and persistence.
But even without them you may benefit from EJBs. They can give you a proven clustering and load-balancing model. They can give you declarative security. They can give you MDBs, which are a convenient way to listen to JMS queues/topics, and timers.
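To illustrate the MDB point, here is a minimal sketch of a message-driven bean listening on a hypothetical queue; the exact activation-config properties vary slightly between application servers:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms/orders")   // hypothetical queue name
})
public class OrderListener implements MessageListener {

    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String body = ((TextMessage) message).getText();
                // ... process the order ...
            }
        } catch (JMSException e) {
            throw new RuntimeException(e); // may trigger redelivery depending on configuration
        }
    }
}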
All of the above can be done using third-party libraries such as Spring. EJBs, though, are highly consistent, while to get, for instance, clustering and security you may need to combine two products, and it's not guaranteed they will work together well and won't need much glue.
EJBs are transactional, distributed components deployed on an app server that manages their lifecycle, threading, and other services. Persistence is just one kind of EJB. You might still find stateless, stateful, or message-driven EJBs useful, even if you don't want to use entity beans.
With that said, you can create POJO components that are stateful, stateless, persistent or message-driven. You don't need EJBs; something like Spring can be a good alternative.
Guys, I HAVE tried reading tons of stuff about EJB. And I don't get it. It seems that most of the authors have only a superficial knowledge of it. They basically say it's the business-logic 'stuff'. They don't show how it interacts with the app server and so on: what it does, how, and why?
It is a huge question, but not that huge. It is not like asking what physics is. You basically run your business code inside a container which handles all the connections, lookups, transactions, etc. There are alternatives to EJB, e.g. Spring.
The question is huge indeed. EJBs in a general sense try to enforce a design pattern that encapsulates all of your reusable code or "business logic" into a specific tier in your architecture. By doing this you can reuse this code for your web/presentation layer and web services for example. EJBs provide a way of persisting your data to a DB.
The trend in Java development nowadays is POJO-driven architectures that leverage dependency injection. Spring is a popular tool for facilitating this design pattern, and I would encourage you to explore it instead of EJB.
An enterprise bean is a server-side component that encapsulates the business logic of an application. The business logic is the code that fulfills the purpose of the application. In an inventory control application, for example, the enterprise beans might implement the business logic in methods called checkInventoryLevel and orderProduct. By invoking these methods, clients can access the inventory services provided by the application.
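To make the quoted example concrete, here is a sketch of what such an enterprise bean could look like as a stateless session bean; Product is a hypothetical JPA entity with a stockLevel property:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class InventoryBean {

    @PersistenceContext
    private EntityManager em;   // injected by the container; each method runs in a transaction

    public int checkInventoryLevel(long productId) {
        Product product = em.find(Product.class, productId);
        return product == null ? 0 : product.getStockLevel();
    }

    public void orderProduct(long productId, int quantity) {
        Product product = em.find(Product.class, productId);
        product.setStockLevel(product.getStockLevel() - quantity);
        // further business logic (reorder thresholds, auditing, ...) would go here
    }
}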