I am working with OSGi services right now, and I have a question about using services in OSGi. There are several different ways to register a service. Can anybody explain the difference between the OSGi ServiceTracker and Declarative Services? Which one is better?
In OSGi, the ServiceTracker is a programmatic way to acquire a reference to a service, i.e. you write ServiceTracker code that "tracks" a reference to another service and lets you use it when it becomes available.
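For illustration, a minimal ServiceTracker sketch might look like this (it assumes the standard OSGi LogService as the tracked service; the activator itself is just an example):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.service.log.LogService;
import org.osgi.util.tracker.ServiceTracker;

public class Activator implements BundleActivator {

    private ServiceTracker<LogService, LogService> tracker;

    public void start(BundleContext context) {
        // Open a tracker that watches LogService registrations.
        tracker = new ServiceTracker<>(context, LogService.class, null);
        tracker.open();

        // getService() returns null if no LogService is currently registered,
        // so the caller has to handle that state explicitly.
        LogService log = tracker.getService();
        if (log != null) {
            log.log(LogService.LOG_INFO, "LogService is available");
        }
    }

    public void stop(BundleContext context) {
        tracker.close(); // stop tracking and release the service
    }
}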
In contrast, Declarative Services (DS) allows you to declare dependencies that are injected into your component. DS is, as such, a form of dependency injection. The dependency graph between services, together with their start order, determines when your service starts. The cardinality property in a DS definition lets you declare whether a relationship is mandatory (1..1), multiple with at least one (1..n), optional (0..1) or multiple optional (0..n).
When you declare mandatory relationships, your service will not start until all dependencies are satisfied.
When you declare an optional relationship, your service will start regardless of the state of the dependency, but you need to take care in your code that the reference to the dependency might be null.
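As a rough sketch with the standard OSGi DS annotations (GreetingFormatter is a made-up helper service, nested here only to keep the snippet self-contained), the difference between a mandatory and an optional reference looks like this:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.log.LogService;

@Component
public class GreetingComponent {

    /** Hypothetical helper service, defined here only so the sketch is self-contained. */
    public interface GreetingFormatter {
        String format(String name);
    }

    // Mandatory (1..1): the component is not activated until a LogService is available.
    @Reference
    private LogService log;

    // Optional (0..1): the component activates anyway, so the field may be null.
    @Reference(cardinality = ReferenceCardinality.OPTIONAL)
    private volatile GreetingFormatter formatter;

    public void greet(String name) {
        String message = (formatter != null) ? formatter.format(name) : "Hello " + name;
        log.log(LogService.LOG_INFO, message);
    }
}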
From a practical perspective, ServiceTracker means a lot of boilerplate code to write and maintain. Given the dynamic nature of OSGi services, there are many states allowed by the OSGi spec that need to be taken into consideration.
DS gives you a clean way to declare and maintain your dependencies. Well-defined dependencies help you maintain the consistency of your runtime environment.
Declarative Services (DS) is quite easy to use and you avoid some of the boilerplate code associated with the ServiceTracker. If you go with plain OSGi, using only the ServiceTracker, you need to take care of several aspects of the dynamic nature of OSGi services: services can come and go, and your component needs to deal with that. If you use DS, most of this work is already done. You just define references to other services and DS injects those references when they become available. DS activates your component when its requirements are met.
If you use the Apache Felix SCR annotations or the annotations provided by bndlib, you can also avoid writing the XML required by Declarative Services. Recently the OSGi Alliance also published its own annotations. I think the ones provided by bndlib and the ones from OSGi are very similar, and I'm fairly sure the bnd tool can process both.
I used the Apache SCR annotations some time ago, but now I prefer bndlib because it includes annotations for Metatype and some classes that make implementing a Managed Service a lot easier. Metatype is a specification related to Managed Services: basically, it provides metadata that Config Admin implementations can use to offer a more user-friendly interface for configuring a component.
I know of two other alternatives: iPojo and Blueprint.
iPojo is quite powerful and feature-rich. It abstracts most of the OSGi plumbing and includes some nice features like EventAdmin and ConfigAdmin support.
I used Blueprint a bit, but because of the excessive use of XML I don't like it that much. You might say that Blueprint is like Spring for OSGi.
An OSGi ServiceTracker lets you register listeners for certain services, so you can react when a service becomes available.
Declarative Services, on the other hand, implicitly uses a service tracker to delay the execution of your bundle's activation code until the service dependencies have been resolved.
and which one is better?
It's quite simple to create and use some Declarative Services, especially with Apache Felix SCR annotations and the Apache Felix SCR Maven plugins:
https://felix.apache.org/site/scr-annotations.html
https://felix.apache.org/site/apache-felix-maven-scr-plugin-use.html
All dependencies must be declared in the pom.xml of your Maven project (for example, in a NetBeans Maven project).
For example, for org.osgi.core it would be:
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
<version>6.0.0</version>
<scope>provided</scope>
</dependency>
Related
I'm planning to build a Java-based system to handle different business processes where each of these is a particular module in the system. Most modules would depend on some of the other modules to handle their particular business process. In other words, top modules would consume some sort of basic services provided by underlying modules. Some modules will be developed from the very beginning, but some will be added to the system later. Next, some modules will expose RESTful interfaces to handle external input / output.
To handle all this, OSGi seems appropriate, but it's a bit difficult to learn with all the different "distributions" out there (Equinox, Felix, etc.) and I'm concerned about the ease of using the Spring framework and other 3rd party libraries within each module (starting with Spring 3.2 the different jars might not come with OSGi manifests).
On top of this, I'd like a central web portal to administer all bundles, thus with each new bundle there will be a new admin section.
That's why we developed OSGi-less modularity for Spring: https://github.com/griddynamics/banshun. Your feedback is appreciated!
Why do you need OSGi? Why not use a Web Server like Tomcat, and deploy your application as a war? You can deploy it on multiple servers in a cluster, and your application can scale on and on.
Why do you need Spring? It has become incredibly coupled, and it has a complexity that I find quite useless: OSGi applications tend to be built from small components communicating through services, voiding most of the advantages of the Spring wiring model, which assumes it is central.
And "hard to configure" is a strange remark; OSGi has excellent configuration support. It is just different from what you're used to.
Instead of using Spring, why not use OSGi Blueprint? It will give you an "easy" transition from Spring to OSGi.
First some background:
I'm working on some webapp prototype code based on Apache Sling, which is OSGi based and runs on Apache Felix. I'm still relatively new to OSGi even though I think I've grasped most concepts by now. However, what puzzles me is that I haven't been able to find a "full" dependency injection (DI) framework. I've successfully employed rudimentary DI using Declarative Services (DS). But my understanding is that DS is used to -- how do I put this? -- wire OSGi-registered services and components together. And for that it works fine, but I personally use DI frameworks like Guice to wire entire object graphs together and put objects in the correct scopes (think @RequestScoped or @SessionScoped, for example). However, none of the OSGi-specific frameworks I've looked at seem to support this concept.
I've started reading about OSGi Blueprint and iPOJO, but these frameworks seem to be more concerned with wiring OSGi services together than with providing a full DI solution. I have to admit that I haven't tried any samples yet, so my impression could be incorrect.
Being an extension to Guice, Peaberry is something I've experimented with; however, I found documentation very hard to find, and while I got basic DI working, a lot of guice-servlet's advanced functionality (automatic injection into filters, servlets, etc.) didn't work at all.
So, my questions are the following:
How do declarative services compare to "traditional" DI like Guice or Spring? Do they solve the same problem or are they geared towards different problems?
All OSGI specific solutions I've seen so far lack the concept of scopes for DI. For example, Guice + guice-servlet has request scoped dependencies which makes writing web applications really clean and easy. Did I just miss that in the docs or are these concerns not covered by any of these frameworks?
Are JSR 330 and OSGi-based DI two different worlds? iPOJO, for example, brings its own annotations, and the Felix SCR annotations seem to be an entirely different world.
Does anybody have experience with building OSGI based systems and DI? Maybe even some sample code on github?
Does anybody use different technologies like Guice and iPOJO together or is that just a crazy idea?
Sorry for the rather long question.
Any feedback is greatly appreciated.
Updates
Scoped injection: scoped injection is a useful mechanism for having objects from a specific lifecycle injected automatically. Think, for example, of code that relies on a Hibernate Session object created as part of a servlet filter. By declaring that dependency, you let the container rebuild the object graph for each request automatically. Maybe there are just different approaches to that?
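To make the Hibernate example concrete, a request-scoped binding in Guice (guice-servlet) could look roughly like this; the module and binding names are mine, not from any particular project:

import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.servlet.RequestScoped;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// A sketch of "scoped injection": every object that declares a dependency on
// Session gets the one Session that belongs to the current HTTP request.
public class PersistenceModule extends AbstractModule {

    @Override
    protected void configure() {
        // SessionFactory binding omitted; assume it is bound elsewhere (e.g. as a singleton).
    }

    @Provides
    @RequestScoped
    Session provideSession(SessionFactory sessionFactory) {
        return sessionFactory.openSession(); // opened once per request by Guice
    }
}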
JSR 330 vs DS: from all your excellent answers I see that these are two different things. That raises the question of how to deal with third-party libraries and frameworks that use JSR 330 annotations in an OSGi context. What's a good approach? Running a JSR 330 container within the bundle?
I appreciate all your answers, you've been very helpful!
Overall approach
The simplest way to get dependency injection with Apache Sling, and the one used throughout its codebase, is to use the maven-scr-plugin.
You can annotate your java classes and then at build time invoke the SCR plugin, either as a Maven plugin, or as an Ant task.
For instance, to register a servlet you could do the following:
@Component // signal that it's OSGi-managed
@Service(Servlet.class) // register as a Servlet service
public class SampleServlet implements Servlet {
    @Reference SlingRepository repository; // get a reference to the repository
}
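For completeness, a roughly equivalent sketch with the standard OSGi DS annotations (org.osgi.service.component.annotations) might look like the following; extending HttpServlet is only so the snippet compiles on its own, and a real Sling servlet would normally also declare sling.servlet.* properties:

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import org.apache.sling.jcr.api.SlingRepository;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = Servlet.class) // register as a Servlet service, managed by DS
public class SampleServlet extends HttpServlet {

    @Reference
    private transient SlingRepository repository; // injected by DS when available
}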
Specific answers
How do declarative services compare to "traditional" DI like Guice or Spring? Do they solve the same problem or are they geared towards different problems?
They solve the same problem - dependency injection. However (see below) they are also built to take into account dynamic systems where services can appear or disappear at any time.
All OSGI specific solutions I've seen so far lack the concept of scopes for DI. For example, Guice + guice-servlet has request scoped dependencies which makes writing web applications really clean and easy. Did I just miss that in the docs or are these concerns not covered by any of these frameworks?
I haven't seen any approach in the SCR world to add session-scoped or request-scoped services. However, SCR is a generic approach, and scoping can be handled at a more specific layer.
Since you're using Sling, I think there will be little need for session-scoped or request-scoped bindings, as Sling has built-in objects for each request which are appropriately created for the current user.
One good example is the JCR session. It is automatically constructed with correct privileges and it is in practice a request-scoped DAO. The same goes for the Sling resourceResolver.
If you find yourself needing per-user work the simplest approach is to have services which receive a JCR Session or a Sling ResourceResolver and use those to perform the work you need. The results will be automatically adjusted for the privileges of the current user without any extra effort.
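For example, a service method that simply takes the request's ResourceResolver could look like this (PageTitleService is a made-up name; the pattern of passing the resolver in is the point):

import java.util.ArrayList;
import java.util.List;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.osgi.service.component.annotations.Component;

@Component(service = PageTitleService.class)
public class PageTitleService {

    // The caller (servlet, model, filter) passes in its request-scoped ResourceResolver,
    // so everything below runs with the current user's privileges.
    public List<String> collectChildTitles(ResourceResolver resolver, String parentPath) {
        List<String> titles = new ArrayList<>();
        Resource parent = resolver.getResource(parentPath);
        if (parent != null) {
            for (Resource child : parent.getChildren()) {
                titles.add(child.getValueMap().get("jcr:title", child.getName()));
            }
        }
        return titles;
    }
}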
Are JSR 330 and OSGI based DI two different worlds? iPOJO for example brings its own annotations and Felix SCR Annotations seem to be an entirely different world.
Yes, they're different. You should keep in mind that although Spring and Guice are more mainstream, OSGi services are more complex and support more use cases. In OSGi, bundles (and implicitly services) are free to come and go at any time.
This means that when you have a component which depends on a service that just became unavailable, your component is deactivated. Or when you receive a list of components (for instance, Servlet implementations) and one of them is deactivated, you are notified of that. To my knowledge, neither Spring nor Guice supports this, as their wiring is static.
That's a great deal of flexibility which OSGi gives you.
Does anybody have experience with building OSGI based systems and DI? Maybe even some sample code on github?
There's a large number of samples in the Sling Samples SVN repository. You should find most of what you need there.
Does anybody use different technologies like Guice and iPOJO together or is that just a crazy idea?
If you have frameworks which are configured with JSR 330 annotations it does make sense to configure them at runtime using Guice or Spring or whatever works for you. However, as Neil Bartlett has pointed out, this will not work cross-bundles.
I'd just like to add a little more information to Robert's excellent answer, particularly with regard to JSR330 and DS.
Declarative Services, Blueprint, iPOJO and the other OSGi "component models" are primarily intended for injecting OSGi services. These are slightly harder to handle than regular dependencies because they can come and go at any time, including in response to external events (e.g. network disconnected) or user actions (e.g. bundle removed). Therefore all these component models provide an additional lifecycle layer over pure dependency injection frameworks.
This is the main reason why the DS annotations are different from the JSR 330 ones... the JSR 330 ones don't provide enough semantics to deal with lifecycle (a short DS sketch follows this list). For example, they say nothing about:
When should the dependency be injected?
What should we do when the dependency is not currently available (i.e., is it optional or mandatory)?
What should we do when a service we are using goes away?
Can we dynamically switch from one instance of a service to another?
etc...
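A small sketch of how the DS annotations encode those answers, using the standard OSGi annotations (Formatter is just a placeholder interface, not a real service):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component
public class ReportGenerator {

    /** Placeholder service interface, nested here so the sketch is self-contained. */
    public interface Formatter {
        String format(String text);
    }

    private volatile Formatter formatter;

    // Optional + dynamic: the component stays active whether or not a Formatter exists,
    // and is rewired on the fly when one appears or disappears.
    @Reference(cardinality = ReferenceCardinality.OPTIONAL,
               policy = ReferencePolicy.DYNAMIC,
               unbind = "unsetFormatter")
    protected void setFormatter(Formatter formatter) {
        this.formatter = formatter; // called when the service becomes available
    }

    protected void unsetFormatter(Formatter formatter) {
        this.formatter = null; // called when the service goes away
    }
}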
Unfortunately because the component models are primarily focused on services -- that is, the linkages between bundles -- they are comparatively spartan with regard to wiring up dependencies inside the bundle (although Blueprint does offer some support for this).
There should be no problem using an existing DI framework for wiring up dependencies inside the bundle. For example I had a customer that used Guice to wire up the internal pieces of some Declarative Services components. However I tend to question the value of doing this, because if you need DI inside your bundle it suggests that your bundle may be too big and incoherent.
Note that it is very important NOT to use a traditional DI framework to wire up components between bundles. If the DI framework needs to access a class from another bundle then that other bundle must expose its implementation details, which breaks the encapsulation that we seek in OSGi.
I have some experience in building applications using Aries Blueprint. It has some very nice features regarding OSGi services and config admin support.
If you're looking for some great examples, have a look at the code of Apache Karaf, which uses Blueprint for all of its wiring.
See http://svn.apache.org/repos/asf/karaf/
I also have some tutorials for Blueprint and Apache Karaf on my website:
http://www.liquid-reality.de/display/liquid/Karaf+Tutorials
In your environment with embedded Felix it will be a bit different, as you do not have the management features of Karaf, but you simply need to install the same bundles and it should work nicely.
I can recommend bnd, and if you use the Eclipse IDE, Bndtools as well. With these you can avoid describing DS in XML and use annotations instead. There is a dedicated Reference annotation for DI, which also takes a filter so you can reference only a specific subset of services.
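For example, a rough sketch of the filter idea (shown here with the standard OSGi DS annotations, which newer versions of bnd can also process; ConnectorService and the protocol property are made up):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component
public class HttpOnlyConsumer {

    /** Made-up service interface, included only to keep the snippet self-contained. */
    public interface ConnectorService {
        void connect();
    }

    // Only services registered with the property protocol=http satisfy this reference.
    @Reference(target = "(protocol=http)")
    private ConnectorService connector;
}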
I am using OSGi and DI for my current project. I chose Gemini Blueprint because it is the successor of Spring Dynamic Modules. Based on that, I suggest you read Spring Dynamic Modules in Action. This book will help you understand some parts and points of how to build the architecture and why it is good :)
Running into a similar architecture problem here - as Robert mentioned above in his answer:
If you find yourself needing per-user work the simplest approach is to have services which receive a JCR Session or a Sling ResourceResolver and use those to perform the work you need. The results will be automatically adjusted for the privileges of the current user without any extra effort.
Extrapolating from this (and what I am currently coding), one approach would be to add a resourceResolver parameter to any @Service methods, so that you can pass the appropriately request-scoped object down the execution chain.
Specifically we've got an XXXXService / XXXXDao layer, called from XXXXServlet / XXXXViewHelper / JSP equivalents. By managing all of these components via the OSGi @Service annotations, we can easily wire up the entire stack.
The downside here is that you need to litter your interface design with ResourceResolver or Session params.
Originally we tried to inject ResourceResolverFactory into the DAO layer, so that we could easily access the session at will via the factory. However, we are interacting with the session at multiple points in the hierarchy, and multiple times per request. This resulted in session-closed exceptions.
Is there a way to get at that per-request ResourceResolver reliably without having to pass it into every service method?
With request-scoped injection on the service layer, you could instead pass the ResourceResolver as a constructor arg and use an instance variable. Of course, the downside here is that you'd have to think about request-scope vs. prototype-scope service code and separate them out accordingly.
This seems like it would be a common problem when you want to separate concerns into service/DAO code, leaving the JCR interactions in the DAO. Analogous to Hibernate: how can you easily get at the per-request Session to perform repository operations?
I'm working with very large JSF/Facelets applications which use Spring for DI/bean management.
My applications have modular structure and I'm currently looking for approaches to standardize the modularization.
My goal is to compose a web application from a number of modules (possibly depending on each other). Each module may contain the following:
Classes;
Static resources (images, CSS, scripts);
Facelet templates;
Managed beans - Spring application contexts, with request, session and application-scoped beans (alternative is JSF managed beans);
Servlet API stuff - servlets, filters, listeners (this is optional).
What I'd like to avoid (almost at all costs) is the need to copy or extract module resources (like Facelets templates) to the WAR or to extend the web.xml for module's servlets, filters, etc. It must be enough to add the module (JAR, bundle, artifact, ...) to the web application (WEB-INF/lib, bundles, plugins, ...) to extend the web application with this module.
Currently I solve this task with a custom modularization solution which is heavily based on using classpath resources:
Special resources servlet serves static resources from classpath resources (JARs); a rough sketch of this idea follows this list.
Special Facelets resource resolver allows loading Facelet templates from classpath resources.
Spring loads application contexts via the pattern classpath*:com/acme/foo/module/applicationContext.xml - this loads application contexts defined in module JARs.
Finally, a pair of delegating servlets and filters delegate request processing to the servlets and filters configured in Spring application contexts from modules.
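As a rough illustration of the first point, a resource servlet of that kind could look something like this (the class name, URL mapping and META-INF/resources prefix are my assumptions, not the actual code):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Streams static files straight from module JARs on the classpath, e.g. a request to
// /module-resources/com/acme/foo/style.css is served from META-INF/resources/com/acme/foo/style.css
// inside whichever JAR contains it.
public class ClasspathResourceServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String path = "META-INF/resources" + req.getPathInfo();
        InputStream in = getClass().getClassLoader().getResourceAsStream(path);
        if (in == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        try (InputStream input = in; OutputStream out = resp.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = input.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}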
In the last few days I have read a lot about OSGi, and I have been considering how (and whether) I could use OSGi as a standardized modularization approach. I was thinking about how the individual tasks could be solved with OSGi:
Static resources - OSGi bundles that want to export static resources register ResourceLoader instances with the bundle context. A central ResourceServlet uses these resource loaders to load resources from bundles (a small sketch follows this list).
Facelet templates - similar to above, a central ResourceResolver uses services registered by bundles.
Managed beans - I have no idea how to use an expression like #{myBean.property} if myBean is defined in one of the bundles.
Servlet API stuff - use something like WebExtender/Pax Web to register servlets, filters and so on.
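A small sketch of that whiteboard-style idea (ResourceLoader is a made-up SPI; the central ResourceServlet that would track these services is not shown):

import java.io.InputStream;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class ModuleActivator implements BundleActivator {

    /** Made-up SPI that every resource-exporting module implements. */
    public interface ResourceLoader {
        InputStream load(String path);
    }

    public void start(BundleContext context) {
        // Export this module's static resources; the central servlet tracks all
        // ResourceLoader services and asks each one for the requested path.
        ResourceLoader loader = path -> getClass().getClassLoader()
                .getResourceAsStream("static" + path);
        context.registerService(ResourceLoader.class, loader, null);
    }

    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically on stop.
    }
}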
My questions are:
Am I reinventing the wheel here? Are there standard solutions for that? I've found a mention of Spring Slices but could not find much documentation about it.
Do you think OSGi is the right technology for the described task?
Is my sketch of an OSGi application more or less correct?
How should managed beans (especially request/session scope) be handled?
I'd be generally grateful for your comments.
What you're aiming to do sounds doable, with a few caveats:
The View Layer: First, your view layer sounds a little overstuffed. There are other ways to modularize JSF components by using custom components that will avoid the headaches involved with trying to create something as dramatic as late-binding managed beans.
The Modules Themselves: Second, your modules don't seem particularly modular. Your first bullet-list makes it sound as if you're trying to create interoperable web apps, rather than modules per se. My idea of a module is that each component has a well-defined, and more or less discrete, purpose. Like how ex underlies vi. If you're going down the OSGi route, then we should define modular like this: Modular, for the sake of this discussion, means that components are hot-swappable -- that is, they can be added and removed without breaking the app.
Dependencies: I'm a little concerned by your description of the modules as "possibly depending on each other." You probably (I hope) already know this, but your dependencies ought to form a directed acyclic graph. Once you introduce a circular dependency, you're asking for a world of hurt in terms of the app's eventual maintainability. One of the biggest weaknesses of OSGi is that it doesn't prevent circular dependencies, so it's up to you to enforce this. Otherwise your dependencies will grow like kudzu and gradually choke the rest of your system's ecosystem.
Servlets: Fuhgeddaboudit. You can't late-bind servlets into a web app, not until the Servlet 3.0 spec is in production (as Pascal pointed out). To launch a separate utility servlet, you'll need to put it into its own app.
OK, so much for the caveats. Let's think about how this might work:
You've defined your own JSF module to do... what, exactly? Let's give it a defined, fairly trivial purpose: a login screen. So you create your login screen, late-bind it using OSGi into your app and... then what? How does the app know the login functionality is there, if you haven't defined it in your .jspx page? How does the app know to navigate to something it can't know is there?
There are ways to get around this using conditional includes and the like (e.g., <c:if test="#{loginBean.notEmpty}">), but, like you said, things get a little hairy when your managed loginBean exists in another module that may not even have been introduced to the app yet. In fact, you'll get a servlet exception unless that loginBean exists. So what do you do?
You define an API in one of your modules. All the managed beans that you intend to share between modules must be specified as interfaces in this API layer. And all your modules must have default implementations of any of these interfaces that they intend to use. And this API must be shared between all interoperable modules. Then you can use OSGi and Spring to wire together the specified beans with their implementation.
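A tiny sketch of that layering (all names invented; the wiring could just as well be done with Spring DM instead of the DS annotation shown here):

import org.osgi.service.component.annotations.Component;

// Shared API: in the real setup this interface would live in its own API bundle
// and be the only thing exported to other modules.
interface LoginBean {
    boolean isLoggedIn();
    String getUserName();
}

// Login module: the default implementation, published as an OSGi service so other
// modules bind to the interface only.
@Component(service = LoginBean.class)
public class DefaultLoginBean implements LoginBean {

    private String userName; // set by the login flow, omitted here

    public boolean isLoggedIn() {
        return userName != null;
    }

    public String getUserName() {
        return userName;
    }
}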
I need to take a moment to point out that this is not how I would approach this problem. Not at all. Given something like as simple as a login page, or even as complicated as a stock chart, I'd personally prefer to create a custom JSF component. But if the requirement is "I want my managed beans to be modular (i.e., hot-swappable, etc)," this is the only way I know to make it work. And I'm not even entirely sure it will work. This email exchange suggests that it's a problem that JSF developers have only just started to work on.
I normally consider managed beans to be part of the view layer, and as such I use them only for view logic, and delegate everything else to the service layer. Making managed beans late-binding is, to my mind, promoting them out of the view layer and into the business logic. There's a reason why all those tutorials are so focused on services: because most of the time you want to consider what it would take for your app to run "headless," and how easy it would be to "skin" your view if, for instance, you wanted it to run, with all its functionality, on an Android phone.
But it sounds like a lot of what you're working with is itself view logic -- for instance, the need to swap in a different view template. OSGi/Spring should be able to help, but you'll need something in your app to choose between available implementations: pretty much what OSGi's Service Registry was built to do.
That leaves static resources. You can modularize these, but remember, you'll need to define an interface to retrieve these resources, and you'll need to provide a default implementation so your app doesn't choke if they're absent. If i18n is a consideration, this could be a good way to go. If you wanted to be really adventurous, then you could push your static resources into JNDI. That would make them completely hot-swappable, and save you the pain of trying to resolve which implementation to use programmatically, but there are some downsides: any failed lookup will cause your app to throw a NamingException. And it's overkill. JNDI is normally used in web apps for app configuration.
As for your remaining questions:
Am I reinventing the wheel here? Are there standard solutions for that?
You are, a little. I've seen apps that do this kind of thing, but you seem to have stumbled into a fairly unique set of requirements.
Do you think OSGi is the right technology for the described task?
If you need the modules to be hot-swappable, then your choices are OSGi and the lighter-weight ServiceLocator interface.
Is my sketch of an OSGi application more or less correct?
I can't really tell without knowing more about where your component boundaries are. At the moment, it sounds like you may be pushing OSGi to do more than it is capable of doing.
But don't take my word for it. I found other reading material in these places.
And since you ask about Spring Slices, this should be enough to get you started. You'll need a Git client, and it looks like you'll be training yourself on the app by looking through the source code. And it's very early prototype code.
I am facing the same problems in my current project. In my opinion, OSGi is the best and cleanest solution in terms of standards and future support, but currently you may hit some problems if you try using it in a web application:
there is no well-integrated solution between a web container and the OSGi platform yet.
OSGi may be too much for a custom build web application that is just searching for a simple modularized architecture. I would consider OSGi if my project needs to support third party extensions that are not 100% under our control, if the project needs hot redeployments, strict access rules between plugins, etc.
A custom solution based on class loaders and resource filters seems very appropriate for me.
As an example you can study the Hudson source code or the Java Plug-in Framework (JPF) project (http://jpf.sourceforge.net/).
As for extending the web.xml, we may be lucky with the Servlet 3.0 specification (http://today.java.net/pub/a/today/2008/10/14/introduction-to-servlet-3.html#pluggability-and-extensibility).
The "web module deployment descriptor fragment" (aka web-fragment.xml) introduced by the Servlet 3.0 specification would be nice here. The specification defines it as:
A web fragment is a logical partitioning of the web app in such a way that the frameworks being used within the web app can define all the artifacts without asking developers to edit or add information in the web.xml.
Java EE 6 is maybe not an option for you right now, though. Still, it would be the standardized solution.
Enterprise OSGi is a fairly new domain, so don't expect to find a solution that directly satisfies your need. That said, one of the things I found missing from Equinox (the OSGi engine behind Eclipse and hence the one with the largest user base) is a consistent configuration / DI service. In a recent project we had some similar needs and ended up building a simple configuration OSGi service.
One of the problems inherent to modular applications is DI, as module visibility can prevent class access in some cases. We got around this using a "registered" buddy policy, which is not ideal but works.
Other than configuration, you can take a look at the recently released Equinox book for guidance on using OSGi as base for creating modular applications. The examples may be specific to Equinox, but the principles would work with any OSGi framework. Link - http://equinoxosgi.org/
You should look into Spring DM Server (it's being transitioned to Eclipse Virgo but that's not been released yet). There's a lot of good things in the recent OSGi enterprise spec which has also just been released.
Some of the Spring DM tutorials will help, I'd imagine. But yes, it's possible to have both resources and classes loaded from outside the web bundle using standard modularity. In that, it's a good fit.
As for the session context - it gets handled as you would expect in a session. However, you might run into problems with sharing that session between web bundles, to the extent that I'm not sure it's even possible.
You could also look at having a single web bundle and then using e.g. the Eclipse extension registry to extend the capabilities of your web app.
What is the difference between Declarative Services and Blueprint services in OSGi, as both aim to achieve dependency injection in OSGi?
Are Blueprint services an alternative to Declarative Services?
Or do Blueprint services address the limitations (if any) of Declarative Services?
There isn't a trivial answer to that question, I'm afraid. I would recommend reading the specification of both to see the extent of the differences. Declarative Services is section 112 of the Service Compendium, Blueprint Container is section 121 of the same document. You can get the core and compendium documents here:
http://www.osgi.org/Download/Release4V42
For me though, the most significant difference is that (in DS terms) a Blueprint service can be made active without the services it depends on being present. The container creates proxy services which block until an actual implementation is made available. I believe this is akin to Spring's approach and that people who are used to using Spring's IOC/DI approach will get Blueprint instantly, though I could not comment from personal experience having never used Spring or Blueprint (yet).
Since we are dealing with OSGi services it is possible to mix and match DS and Blueprint as you see fit. So far I haven't found a need to do anything more complicated than can be achieved with DS, so am not sure what use case the Blueprint Container specification meets, though from the spec it does look as though Blueprint provides quite a lot of functionality in order to make component development simple.
I believe it has been added primarily because J2EE developers will find it familiar, but that is just my opinion. :)
I realize this is an old thread, but I think the question is still important.
I agree with the others who said that Blueprint hides the dynamic nature of OSGi better than DS does.
Also, it looks like DS was meant for binding between bundles, while Blueprint can also be used to build object graphs "for internal consumption".
My current project is leveraging Spring, and our architect has decided to let Spring manage Services, Repositories and Factory objects, but NOT domain objects. We are closely following domain-driven design. The reasoning behind not using Spring for domain objects is primarily that Spring only allows static dependency injection. What I mean by static dependency injection is that dependencies are specified in the XML configuration and they get "frozen".
I may be wrong, but my current understanding is that even though my domain only uses interfaces to communicate between objects, Spring's XML configuration forces me to specify a concrete dependency. Hence all the concrete dependencies have to be resolved at deployment time. Sometimes this is not feasible. Most of our use cases are based on injecting a particular type based on runtime data or a message received from an end user.
Most of our design follows the command pattern. Hence, when we receive a command, we would like to construct our domain model and, based on the data received in the command, inject a particular set of types into our aggregate root object. Due to Spring's inability to construct a domain model based on runtime data, we are forced to use static factory methods, builders and the Factory pattern.
Can someone please advise whether Spring has a problem with the above scenario?
I could use AOP to inject dependencies, but then I am not leveraging Spring's infrastructure.
I suggest you read the section in the Spring docs concerning Using AspectJ to dependency inject domain objects with Spring.
It's interesting that you said "I could use AOP to inject dependencies, but then I am not leveraging Spring's infrastructure", considering that AOP is a core part of Spring's infrastructure. The two go very well together.
The above link allows you to have Spring's AOP transparently inject dependencies into domain objects that are created without direct reference to the Spring infrastructure (e.g. using the new operator). It's very clever, but does require some deep-level classloading tinkering.
Spring's dependency injection/configuration is only meant for configuring low-level technical infrastructure, such as data sources, transaction management, remoting, servlet mount-points and so forth.
You use Spring to route between technical APIs and your services, and inside those services you just write normal Java code. Keeping Spring away from your domain model and service implementations is a good thing. For a start, you don't want to tie your application's business logic to one framework or let low-level technical issues "leak" into your application domain model. Java code is much easier to modify in the IDE than Spring's XML config, so keeping business logic in Java lets you deliver new features more rapidly and maintain the application more easily. Java is much more expressive than Spring's XML format, so you can model domain concepts more clearly if you stick to plain Java.
Spring's dependency injection (and dependency injection in general) is basically for wiring together Services, Repositories and Factories, etc. It's not supposed to directly handle things that need to be done dynamically in response to commands, etc., which includes most stuff with domain objects. Instead, it provides control over how those things are done by allowing you to wire in the objects you want to use to do them.