I am interested in using OSGi as a means of managing plugins for a project. That is, there can be many implementors of my interface, each appearing in its own separate OSGi bundle with the implementation class exported...
Declarative Services should be the way to go.
You can declare your interface as a service:
<service>
<provide interface="my.Interface"/>
<property name="foo" value="bar"/>
</service>
Each implementation of that interface can define bundle activation and deactivation methods.
But what is really neat is their nature: if you are using the latest SCR (the "Service Component Runtime", an "extender bundle" implementing the new and improved OSGi R4.2 DS - Declarative Services - specification), your classes will not import anything from the OSGi model. They remain pure POJOs.
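For illustration, a provider class behind such a component could be as plain as this (a sketch; the class and method names are hypothetical, and SCR looks the activation methods up by whatever names you declare in the component XML):
package my.provider;

// Plain POJO: the component itself needs no OSGi imports at all.
public class MyInterfaceImpl implements my.Interface {

    // Called by SCR when the component is activated.
    protected void activate() {
        // acquire resources here
    }

    // Called by SCR when the component is deactivated.
    protected void deactivate() {
        // release resources here
    }
}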
Then define another service which depends on your first service:
<reference name="myInterfaceServiceName"
interface="my.Interface"
bind="myActivationMethod" unbind="myDeactivationMethod"
cardinality="0..n"/>
That service will detect and list all the concrete instances of your first service and deal with them as you intend to.
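A sketch of the component class behind that <reference> (the class name is hypothetical; the bind/unbind method names match the XML, and with cardinality 0..n SCR calls them once per provider):
package my.consumer;

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class InterfaceTracker {

    private final List<my.Interface> implementations = new CopyOnWriteArrayList<my.Interface>();

    public void myActivationMethod(my.Interface impl) {
        implementations.add(impl);    // a new provider appeared
    }

    public void myDeactivationMethod(my.Interface impl) {
        implementations.remove(impl); // a provider went away
    }
}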
See the Eclipse Extensions and Declarative Services question for more details.
The presentation Component Oriented Development in OSGi with Declarative Services, Spring Dynamic Modules and Apache iPOJO, from EclipseCon 2009, will provide you with a concrete example.
This can be done declaratively (as VonC has detailed), or dynamically at runtime via the standard service registry.
Any implementer can simply register their implementations as a service and consumers can get them from the registry, which is pretty basic OSGi stuff. The services can also be registered with properties, so consumers can use these properties to distinguish between implementations when looking up the service.
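A minimal sketch of that pattern, reusing the my.Interface example from above (the "flavour" property is a placeholder, and this assumes the generified OSGi 4.3+ API):
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

public class RegistryExample {

    // Provider side: register an implementation with a distinguishing property.
    static void register(BundleContext context, my.Interface impl) {
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        props.put("flavour", "fast");                 // example property
        context.registerService(my.Interface.class.getName(), impl, props);
    }

    // Consumer side: look up only the implementations matching an LDAP filter.
    static ServiceReference<?>[] lookup(BundleContext context) throws InvalidSyntaxException {
        return context.getServiceReferences(my.Interface.class.getName(), "(flavour=fast)");
    }
}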
Related
I would like to know when a bundle in the environment registers a service using context.registerService(...).
Is there a listener like FrameworkEvent.STARTED or something?
Thanks.
Listening to service changes is very common in OSGi. The plain API way is to use a ServiceTracker. You can specify which services you are interested in and will get callbacks when such a service is registered or unregistered.
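For instance, a minimal ServiceTracker sketch (MyService is a placeholder interface; this assumes the generified OSGi 4.3+ tracker API):
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;

public class MyServiceWatcher implements BundleActivator {

    private ServiceTracker<MyService, MyService> tracker;

    public void start(BundleContext context) {
        tracker = new ServiceTracker<MyService, MyService>(context, MyService.class, null) {
            @Override
            public MyService addingService(ServiceReference<MyService> ref) {
                MyService service = super.addingService(ref); // gets the service object
                // a MyService has just been registered
                return service;
            }

            @Override
            public void removedService(ServiceReference<MyService> ref, MyService service) {
                // the MyService has been unregistered
                super.removedService(ref, service);
            }
        };
        tracker.open(); // start receiving callbacks
    }

    public void stop(BundleContext context) {
        tracker.close();
    }
}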
The recommended way is to use frameworks like declarative services (DS) or blueprint which also offer ways to listen for services.
This is how to listen for all services of an interface using DS. See also the javadoc of @Reference.
@Reference(unbind = "unbind")
public void bind(MyService my) { ... }
public void unbind(MyService my) { ... }
You can register a ServiceListener via BundleContext#addServiceListener.
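A short sketch of that (MyService is a placeholder interface; the filter uses the standard objectClass property):
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceEvent;
import org.osgi.framework.ServiceListener;

public class MyServiceListener implements ServiceListener {

    public void serviceChanged(ServiceEvent event) {
        if (event.getType() == ServiceEvent.REGISTERED) {
            // a matching service was just registered via registerService(...)
        } else if (event.getType() == ServiceEvent.UNREGISTERING) {
            // a matching service is about to go away
        }
    }

    public static void install(BundleContext context) throws InvalidSyntaxException {
        String filter = "(objectClass=" + MyService.class.getName() + ")";
        context.addServiceListener(new MyServiceListener(), filter);
    }
}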
For a real-world example, look at how the Gemini Blueprint framework works with service listeners: OsgiServiceCollection. There is an OsgiServiceCollection$BaseListener listener implementation.
I'm attempting to use Declarative Services to create a service bundle that provides functionality to another bundle. However, I want my Service Provider bundle to not start until it is needed. Let me describe my conditions.
There are two bundles:
-com.example.serviceprovider
-com.example.serviceconsumer
The Service Provider bundle provides a service using Declarative Services as follows:
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" enabled="true" immediate="true" name="samplerunnable1">
<implementation class="com.example.serviceprovider.SampleRunnable"/>
<service>
<provide interface="java.lang.Runnable"/>
</service>
</scr:component>
The Service Consumer references the provided services as follows:
<reference name="SampleRunnable"
interface="java.lang.Runnable"
bind="setRunnable"
unbind="unsetRunnable"
cardinality="1..n"
policy="dynamic"/>
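For reference, the consumer component class behind that XML would look roughly like this (a sketch; only the bind/unbind names are taken from the XML, the class name is hypothetical):
package com.example.serviceconsumer;

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class RunnableConsumer {

    private final List<Runnable> runnables = new CopyOnWriteArrayList<Runnable>();

    public void setRunnable(Runnable r) {
        runnables.add(r);      // called by SCR for each matching service
    }

    public void unsetRunnable(Runnable r) {
        runnables.remove(r);   // called when a service goes away
    }
}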
When both of these bundles are "ACTIVE" on start up, the Service Consumer has no trouble communicating with the service declared by the Service Provider. The problem happens when I try to have the service provider start in a lazy fashion.
After the Service Provider is set to load lazy this is what I get in the OSGi console:
osgi> ss
"Framework is launched."
id State Bundle
15 STARTING com.example.serviceconsumer_1.0.0.X
16 RESOLVED com.example.serviceprovider_1.0.0.X
What I would expect to see is that, even though bundle 16 is only "RESOLVED", it would have at least registered its service. But when I call the "bundle" command, it states "No registered services."
osgi> bundle 16
com.example.serviceprovider_1.0.0.X [17]
Id=17, Status=RESOLVED Data Root=C:\apache\apache-tomcat-.0.40\work\Catalina\localhost\examplesX\eclipse\configuration\org.eclipse.osgi\bundles\17\data
"No registered services."
No services in use.
No exported packages
Imported packages
org.osgi.framework; version="1.7.0"<org.eclipse.osgi_3.8.0.v20120529-1548 [0]>
No fragment bundles
Named class space
com.example.serviceprovider; bundle-version="1.0.0.X"[provided]
No required bundles
Maybe I've missed the fundamental concept of lazily loaded bundles and service registration. If a bundle is in the "RESOLVED" state, shouldn't it have all its "wires" connected (i.e., have a classloader, resolved import and export dependencies, and registered services)? If the Service Consumer tries to access the service, shouldn't that bundle transition to the "ACTIVE" state? What piece am I missing here?
Bundles in the RESOLVED state cannot provide services, and they will be ignored by Declarative Services. You should in general start all bundles during launch time, even if you want lazy loading behaviour. The key is to make the activation of the bundles cheap (or free!), and only pay for initialization of components when they are required.
DS takes care of lazy activation by default already. There is nothing you need to enable or change for this to happen. Essentially DS publishes the service entry in the registry, but it does not actually instantiate your component (or even load its class) until some client tries to use the service.
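For example (a sketch using the SampleRunnable from the question; note that this deferral only applies to a delayed component, i.e. one that is not declared immediate="true" as in the XML above):
package com.example.serviceprovider;

public class SampleRunnable implements Runnable {

    public SampleRunnable() {
        // With a delayed component, this only runs when a consumer
        // first requests the java.lang.Runnable service.
        System.out.println("SampleRunnable instantiated on first use");
    }

    public void run() {
        // real work here
    }
}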
Furthermore, because DS does not load the class until required, OSGi does not even need to create a ClassLoader for the bundle, so long as your bundle does not have a BundleActivator.
To reiterate, you should not seek to make your bundles stay in RESOLVED state. Such bundles can only export static code and resources, but they cannot "do" anything and they cannot participate in the service registry.
Declarative Services was designed for this case. Starting a bundle means that its functionality should be available; it does not mean it actually uses resources. Only stop bundles when you don't want their function.
This question is a good example of trying to control too much. In a component oriented world programmers should use lazy initialisation as much as possible but they should never attempt to control the life cycle.
I have a rather big set of services registered with registerService. For simplicity, let's assume they are looked up by some property name. So the pair of invocations is straightforward (I use pseudocode for the property spec):
context.registerService(
    IMyService.class.getName(), myServiceInst, {"name"="a"})
After that on client side:
context.getServiceReferences(IMyService.class.getName(), {"name"="a"})
For some reason I cannot register all possible combinations of name. Is it possible to intercept all OSGi queries so I could create services on the fly when they are queried?
I would like to have a basic solution that works on all layers of OSGi - meaning that the code above and code using (for example) Declarative Services will work the same way.
Take a look at Service Hooks in the core specification. They allow you to find out who is waiting for what services. Notice that this might imply parsing the filter if you're interested in what properties they're waiting for.
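As a rough sketch of that idea (assuming the generified OSGi 4.3+ API; the filter parsing and the actual on-the-fly registration are left out), a ListenerHook lets you see the filters that listeners are waiting on:
import java.util.Collection;
import org.osgi.framework.hooks.service.ListenerHook;

public class OnDemandListenerHook implements ListenerHook {

    public void added(Collection<ListenerInfo> listeners) {
        for (ListenerInfo info : listeners) {
            String filter = info.getFilter();   // may be null
            // parse the filter, e.g. look for (name=...), and register the
            // corresponding IMyService instance if it does not exist yet
        }
    }

    public void removed(Collection<ListenerInfo> listeners) {
        // optionally unregister services nobody is waiting for any more
    }
}
The hook itself is published like any other service, e.g. context.registerService(ListenerHook.class, new OnDemandListenerHook(), null).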
I think you have a couple of options:
Option 1:
If you need only one service object per client bundle (where the client bundle identifies the key-value pairs), consider using http://www.osgi.org/javadoc/r4v43/core/org/osgi/framework/ServiceFactory.html. The javadoc is pretty self-explanatory and you can easily find usage samples on Google. In this case you have to implement ServiceFactory; I am not sure how well this works with Declarative Services (please correct me if I am wrong - I have only used Blueprint, not Declarative Services).
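A sketch of such a factory (assuming the generified OSGi 4.3+ API; MyServiceImpl is a made-up implementation class). It is registered under the IMyService interface name just like a normal service, but the framework hands each requesting bundle whatever getService returns:
import org.osgi.framework.Bundle;
import org.osgi.framework.ServiceFactory;
import org.osgi.framework.ServiceRegistration;

public class MyServiceFactory implements ServiceFactory<IMyService> {

    public IMyService getService(Bundle bundle, ServiceRegistration<IMyService> registration) {
        // create (or look up) the instance appropriate for this bundle
        return new MyServiceImpl();
    }

    public void ungetService(Bundle bundle, ServiceRegistration<IMyService> registration,
                             IMyService service) {
        // release any resources held for this bundle
    }
}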
Option 2:
Create your services with the help of ConfigAdmin. You create a configuration from your client bundle, and your service provider bundle will catch that and export the necessary service. After the service is provided, you can catch the new service registration in the client. You can find good documentation at http://felix.apache.org/site/apache-felix-config-admin.html. With this option you will be able to get more services per client bundle, but I do not think you can use this with Declarative Services (you must catch the configuration changes programmatically).
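A rough sketch of the provider side as a ManagedServiceFactory (assuming the generified compendium API; the factory PID and class names are made up, and re-registration on configuration updates is omitted). A client would call configAdmin.createFactoryConfiguration("my.service.factory") and update it with the desired "name" property:
import java.util.Dictionary;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ManagedServiceFactory;

public class MyServiceManagedFactory implements ManagedServiceFactory {

    private final BundleContext context;
    private final Map<String, ServiceRegistration<?>> registrations =
            new ConcurrentHashMap<String, ServiceRegistration<?>>();

    public MyServiceManagedFactory(BundleContext context) {
        this.context = context;
    }

    public String getName() {
        return "my.service.factory";   // factory PID (made up)
    }

    public void updated(String pid, Dictionary<String, ?> properties) {
        // Register one IMyService per factory configuration created by a client.
        String name = (String) properties.get("name");
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        props.put("name", name);
        registrations.put(pid,
                context.registerService(IMyService.class.getName(), new MyServiceImpl(name), props));
    }

    public void deleted(String pid) {
        ServiceRegistration<?> reg = registrations.remove(pid);
        if (reg != null) {
            reg.unregister();
        }
    }
}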
Option 3:
Instead of registering IMyService, register an IMyServiceFactory as an OSGi service that has a createService(name) function. In this case the client bundles have to take care of the lifecycle of their IMyService objects (when no IMyService is used any more, they can "unget" the IMyServiceFactory).
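A sketch of that factory interface (names are made up, matching the createService(name) idea above):
public interface IMyServiceFactory {

    // Creates (or returns a cached) IMyService for the given name;
    // the caller is responsible for releasing it when it is no longer needed.
    IMyService createService(String name);
}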
I have an OSGi bundle (that is not owned by me - so I cannot change it!) that exposes (exports) a service EchoService, and I want to attach an aspect to methods of this service (so as to perform some pre/post processing around it). These are deployed on the Apache Felix container.
I've written my own OSGi bundle (that obviously imports the EchoService), and attaches Spring aspects to it using standard Spring AOP. However, looks like the aspects are not attached and my interceptor is not being invoked.
I suspect that this is because I'm trying to intercept a service that does not belong to my bundle (which seems reasonable). Is that correct? How can I overcome this?
Here's what my interceptor/aspect looks like:
@Before("serviceOperation()")
public void before(JoinPoint jp) {
    logger.debug("Entering method: " + jp.toShortString());
}

@AfterReturning("serviceOperation()")
public void after(JoinPoint jp) {
    logger.debug("Exiting method: " + jp.toShortString());
}
I'm not an AOP nor a Spring expert, but maybe I can give you some ideas. As far as I can see, Spring uses standard J2SE dynamic proxies for AOP proxies. Hence your clients should use the proxy instead of the original EchoService object. This is also true when you're using CGLIB proxies, because "the proxies are created by sub-classing the actual class".
If your client bundles ask for an EchoService, you have to pass them the proxy somehow. For this, inside an OSGi container, you should also export an EchoService (the proxy) and make sure that the clients use the proxied service/bundle, not the original. You can accomplish this by setting a different version number for the (proxied) package and setting this version as an import requirement in your client bundles. (I suppose you can modify the clients of EchoService.) For services, you can set a property when you register the proxy and modify the clients to query only for services which have this property.
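A sketch of that re-registration idea using a plain JDK dynamic proxy instead of Spring AOP, just to show the property part (the "proxied" property, the registrar class name and the EchoService import are assumptions):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;

public class EchoServiceProxyRegistrar {

    public static void register(BundleContext context, final EchoService original) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                // "before" advice would go here
                Object result = method.invoke(original, args);
                // "after returning" advice would go here
                return result;
            }
        };
        EchoService proxy = (EchoService) Proxy.newProxyInstance(
                EchoService.class.getClassLoader(), new Class<?>[] { EchoService.class }, handler);

        // Marker property so that clients can select the proxied service
        // with a filter such as (proxied=true).
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        props.put("proxied", Boolean.TRUE);
        context.registerService(EchoService.class.getName(), proxy, props);
    }
}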
If you are not able to modify the client bundles, another solution could be wrapping the original bundle as an internal jar in your bundle. You can call the wrapped bundle's activator from your activator and pass it a modified BundleContext. This BundleContext should catch the registerService calls and register the proxy object instead of the original EchoService. You can use a simple delegate pattern since BundleContext, ServiceListener, etc. are usually interfaces. I suppose it could work, but it may have other challenges.
I am planning an application that must provide services that are very much like those of a Java EE container to third party extension code. Basically, what this app does is find a set of work items (currently, the plan is to use Hibernate) and dispatch them to work item consumers.
The work item consumers load the item details, invoke third party extension code, and then if the third party code did not fail, update some state on the work item and commit all work done.
I am explicitly not writing this as a Java EE application. Essentially, however, my application must provide many of the services of a container: it must provide transaction management, connection pooling and management, and a certain amount of deployment support. How do I either A) provide these directly, or B) choose a third-party library to provide them? Due to a requirement of the larger project, the extension writers will be using Hibernate, if that makes any difference.
It's worth noting that, of all of the features I've mentioned, the one I know least about is transaction management. How can I provide this service to extension code running in my container?
I recommend using the Spring Framework. It provides a nice way to bring together a lot of the various services you are talking about.
For instance to address your specific needs:
Transaction Management/Connection pooling
I built a Spring-based stand-alone application that used Apache Commons connection pooling. Also, I believe Spring has some kind of transaction management built in.
Deployment support
I use Ant to deploy and run things as a front-loader. It works pretty well. I just fork a separate process using Ant to run my Spring stand-alone app.
Threading.
Spring has support for Quartz which deals well with threads and thread pools
DAO
Spring integrates nicely with Hibernate and other similar projects
Configuration
Using its xml property definitions -- Spring is pretty good for multiple-environment configuration.
Spring does have transaction management. You can define a DataSource in your application context using Apache DBCP (an org.apache.commons.dbcp.BasicDataSource) and an org.springframework.jdbc.datasource.DataSourceTransactionManager for that DataSource. After that, any object in your application can define its own transactions programmatically if you pass it the TransactionManager, or you can use AOP interceptors on the object's definition in your application context to define which methods need to be run inside a transaction.
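For the programmatic route, a sketch using Spring's TransactionTemplate (class and method names are made up; the transaction manager is assumed to be the DataSourceTransactionManager described above):
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class WorkItemProcessor {

    private final TransactionTemplate txTemplate;

    // The PlatformTransactionManager is injected from the application context.
    public WorkItemProcessor(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public void process(final Runnable extensionCode) {
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // Invoke the third-party extension code; if it throws a
                // RuntimeException the transaction is rolled back,
                // otherwise it commits when this callback returns.
                extensionCode.run();
            }
        });
    }
}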
Or, the easier approach nowadays with Spring is to use the @Transactional annotation on any method that needs to run inside a transaction, and to add something like this to your application context (assuming your transaction manager is named txManager):
<tx:annotation-driven transaction-manager="txManager"/>
This way your application will easily accept new components later on, which can get transaction management simply by using the @Transactional annotation or by directly creating transactions through a PlatformTransactionManager that they receive through a setter (so you can pass it when you define the object in your app context).
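For example, a sketch of such a component (class and method names are made up):
import org.springframework.transaction.annotation.Transactional;

public class WorkItemService {

    // Runs inside a transaction managed by txManager thanks to
    // <tx:annotation-driven/>; rolled back on a RuntimeException.
    @Transactional
    public void updateWorkItem(long workItemId) {
        // load the work item, invoke extension code, update its state
    }
}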
You could try Atomikos TransactionsEssentials for Java transaction management and connection pooling (JDBC+JMS) in a J2SE environment. No need for any appservers, and it is much more fun to work with ;-)