Performance of the Apache Karaf container while deploying bundles - java

I am creating an OSGi bundle and using Apache Karaf as the OSGi container. I am testing the application by adding log statements and placing the bundle in the deploy folder to deploy it. Everything works fine. During this testing the bundle ID increases with every redeployment, and after some iterations the activate method is called twice when the application is deployed. I verified the same bundle in a fresh Apache Karaf instance, where it works as expected: the activate method is called only once.
Note: The bundle is an application with some simple print statements.
1. Is this a performance issue in the Apache Karaf container caused by reaching higher bundle IDs, or some kind of caching problem in Apache Karaf?
2. Is this a problem with deploying the bundle through the deploy folder instead of osgi:install?

There are some issues with the deploy folder. It is monitored by Felix FileInstall, so the schedule on which it polls the file system determines how it reacts.
Using bundle:install is much more reliable and also works well for testing. Simply deploy your bundle to your local Maven repository using mvn install, then install it into Karaf using the mvn:groupId/artifactId/version URL.
If you then change your bundle, you can simply publish it with mvn install again and run bundle:update. This will reload the bundle from your local Maven repository.
If you use a Maven -SNAPSHOT version (which you should during development), then you can also use bundle:watch *. Karaf will then look for changes in the local Maven repository and automatically update the affected bundles.
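As a sketch, the whole round trip looks like this (the groupId/artifactId are hypothetical placeholders):

# in the project directory
mvn install

# in the Karaf console
bundle:install mvn:com.example/my-bundle/1.0.0-SNAPSHOT
bundle:watch *

After that, each subsequent mvn install is picked up automatically by the watch.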

Related

Inconsistent bundles integrity in Karaf between deployments

Background
I am using Karaf 4.2.0 on RHEL 6 with the latest available Oracle JDK 1.8.x.
For security reasons, I am trying to find the best way to validate the integrity of the bundles served by Karaf. The current approach I am using is to calculate SHA1 hashes of all bundle.jar files found at $KARAF_HOME/data/cache/bundle*/version0.0/ and compare them with the ones I have deployed to another instance of Karaf in a different environment.
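As a sketch, such a comparison can be done with something like the following (the path assumes the default Karaf data layout described above):

sha1sum $KARAF_HOME/data/cache/bundle*/version0.0/bundle.jar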
The deployment itself is fully automated and works every time. Before the deployment starts, Karaf is first stopped, then the data/cache, data/tmp and data/kar folders are cleaned up, Karaf is started up again, and the deployment is performed with the following two steps (sketched below):
Install a fat KAR that covers all third-party bundles my app needs to run, with: kar:install
Install my application bundles through a Karaf feature file hosted on a private Artifactory instance together with the referenced bundles, with: feature:repo-add -i
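For illustration, with hypothetical Maven coordinates the two steps look like:

kar:install mvn:com.example/my-app/1.0.0/kar
feature:repo-add -i mvn:com.example/my-app-features/1.0.0/xml/features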
The problem
Each deployment causes the third-party bundles in the data/cache/ folder to have different SHA1 hashes, even though the JARs' content is identical (verified by unpacking them and running a recursive diff). Moreover, the SHA1 does not match the one from Maven Central. It looks like Karaf is repackaging the JARs while copying them into data/cache, which changes the SHA1 sums.
For my own application bundles, the SHA1 hashes are consistent between application redeployments (and also between deployments of the same feature file to different environments) but always differ from the ones on my private Artifactory server.
Is there any way to bypass or fix this problem of inconsistent integrity for bundles served from Karaf's data/cache?

How to manage bundles/dependencies in embedded OSGi application?

I'm currently developing a plugin system in which I embed Apache Felix in my application. The plugins themselves are OSGi bundles. So far deploying the bundles works just fine, but I have trouble interacting with my bundles/plugins. I tried two approaches:
Register a "Plugin" service in my plugin and use a service listener in my "host" application to interact with the plugins.
The service listener is not invoked, and I can't cast the returned Plugin object because the Plugin.class of my host application is a different class than the Plugin.class inside the bundle.
Register a "PluginManager" service in the host application and look this manager up in the bundle.
In this case I'm again unable to cast the service object because of the same class "duplication" issue.
I understand why the classes are "duplicated" but I'm not sure what to do about it.
My current setup:
plugin-api maven module: provides the Plugin interface
app maven module: contains the app which embeds Apache Felix
dummy plugin maven module: has only a dependency on plugin-api
Is there a problem with the way my setup is structured? How can I access host services without creating a class mess? Should I create another module which is used to compile my plugin but is excluded from the bundle and later provided by the host via FRAMEWORK_SYSTEMPACKAGES_EXTRA?
You should define your Plugin API (and all the types it uses that do not come from the JVM) on the application side. If I were to do this, I would make an API bundle (yes, a bundle) that exports these packages.
Make sure that the plugins do not export the API themselves, or at least that they allow it to be imported.
In your application, before you start your embedded Felix framework, read the manifests of all JARs on the classpath with getResources("META-INF/MANIFEST.MF") and check each one for an Export-Package header. Then concatenate all these exported packages and set the OSGi framework property org.osgi.framework.system.packages.extra to the joined string.
This will export every package found on your classpath, including those of your API bundle. Since the framework now exports these packages, your plugins will use the standard classpath as their provider. Therefore, the API will have only one source and you will not end up in class hell.
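A minimal sketch of that bootstrap, assuming a hypothetical launcher class named PluginHost (the OSGi launch API and Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA are standard; the plugin install step is left out):

import java.io.InputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.ServiceLoader;
import java.util.jar.Manifest;

import org.osgi.framework.Constants;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class PluginHost {
    public static void main(String[] args) throws Exception {
        // Collect every Export-Package header found on the classpath.
        List<String> exports = new ArrayList<>();
        Enumeration<URL> manifests =
                PluginHost.class.getClassLoader().getResources("META-INF/MANIFEST.MF");
        while (manifests.hasMoreElements()) {
            try (InputStream in = manifests.nextElement().openStream()) {
                String exported = new Manifest(in).getMainAttributes()
                        .getValue("Export-Package");
                if (exported != null) {
                    exports.add(exported);
                }
            }
        }

        // Expose those packages through the system bundle; the constant's
        // value is "org.osgi.framework.system.packages.extra".
        Map<String, String> config = new HashMap<>();
        config.put(Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA, String.join(",", exports));

        // Start the embedded framework (Felix's FrameworkFactory is
        // discovered via the standard ServiceLoader mechanism).
        FrameworkFactory factory =
                ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(config);
        framework.start();

        // Install and start the plugin bundles here; since the Plugin API
        // packages now come from the system bundle, the host and the plugins
        // share one copy of Plugin.class and the casts work.
    }
}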

bnd OSGi project not reachable through Firefox

I have just started learning how to build a bnd OSGi project.
I try to run a very simple project without any error message, but when I go to localhost, it shows "HTTP ERROR: 404".
the simple class:
an Activator class:
rest build dependencies
Run dependencies
http error:
Thanks for your help!!
The latest 2.0.4 release of the org.amdatu.web.rest.wink bundle doesn't play well with Felix HTTP Jetty 3.x.
If you pin that bundle to version 2.0.3, things should work as expected. To do this, change the org.amdatu.web.rest.wink entry in the -runbundles list of your runbnd.bndrun to:
org.amdatu.web.rest.wink;version='[2.0.3,2.0.3]'
Your class is annotated with JAX-RS annotations and is published as an OSGi service. Whether this exposes the service as a REST resource depends on the bundles you install.
You have to install a bundle that watches for such services and creates the REST endpoints for them.
I think you at least need to also add the org.amdatu.web.wink bundle to your bndrun file.
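For illustration, the -runbundles list could then look like this (bnd syntax; the exact set of bundles depends on your project):

-runbundles: \
	org.amdatu.web.rest.wink;version='[2.0.3,2.0.3]',\
	org.amdatu.web.wink;version=latest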

OSGi: how to install a bundle from a remote machine?

I have a bundle:
<groupId>com.helloworld</groupId>
<artifactId>Helloworld</artifactId>
<version>1.0.0-SNAPSHOT</version>
Previously, the bundle and the OSGi container (FUSE ESB Enterprise) were on the same machine. I used the following command to install it from the local Maven repository:
FuseESB:karaf#root> install file:/home/li/.m2/repository/com/helloworld/Helloworld/1.0.0-SNAPSHOT/Helloworld-1.0.0-SNAPSHOT.jar
Now the bundle and the OSGi container are on different machines:
the bundle is on a machine whose IP is 192.168.122.22
How can I install this bundle remotely?
Notice that the argument to the install command is a URL, so you can install from any URL for which you have a URL handler available. For example:
install http://www.example.com/helloworld-1.0.jar
For Fuse ESB, or more generally for Apache Karaf-based servers, you have the Pax URL mvn URI prefix. This allows you to install bundles from Maven repositories. I propose always using this URI scheme instead of the file one.
In your case the command would be:
install mvn:com.helloworld/Helloworld/1.0.0-SNAPSHOT
This URI is even a little shorter than the file-based one. The big advantage, though, is that you have the full Maven resolution available. So the URI above will work for bundles from your local Maven repository but also from Maven Central.
Of course you typically will not deploy your own artifacts to Maven Central. So if you want to use this inside your company, you should set up a Maven repository manager like Nexus or Archiva. Then you deploy your own bundle into your company repository using mvn clean deploy. Of course this requires that you set up your pom correctly, but you will need that anyway for any larger project.
The last step needed is then to set up your Fuse ESB / Karaf to also use your company repository. This is done by adding the repository URI to the file etc/org.ops4j.pax.url.mvn.cfg.
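For example, a hypothetical entry in etc/org.ops4j.pax.url.mvn.cfg (the company repository URL is a placeholder):

org.ops4j.pax.url.mvn.repositories = \
    https://repo1.maven.org/maven2@id=central, \
    https://nexus.example.com/repository/releases@id=company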
Of course this is a little more work than the http URL that Neil proposed. The advantage is that it integrates very well with your Maven build process and makes your bundle mvn URIs independent of the location of your Maven repository. It will also allow you to mix your own bundles and open source bundles when you start to combine them using features.

Maven: running unit tests remotely

We are currently working on a distributed Java EE application and therefore have separate test and production systems.
Compiling and bundling are done via an Ant task. Now we want to deploy the JAR files of the different servers to the test servers and run the JUnit integration/function tests there. If they succeed, the current version should be deployed to the live servers.
Plain unit tests are executed by Hudson.
Is that possible with Maven and is there any information or best practice available?
Yes. Hudson has Maven integration. Take a look at this wiki and this link.
You can set unit test thresholds for your job to check whether it passes a certain number of test cases. If it does not, the deploy plugin will not be invoked and the app will not be deployed.
Take the JAR built by Ant and reuse it. I would add a Maven repository manager such as Artifactory, Archiva, or Nexus to your environment and deploy to it using Ivy. You almost certainly need a Maven repository to be happy with Maven for anything other than small-scale personal projects. http://ant.apache.org/ivy/
Use Maven to grab the JAR from the Maven Repository. For this, just use a normal Maven dependency declaration.
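For example, a minimal dependency declaration (the coordinates are placeholders):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>app-server</artifactId>
    <version>1.0.0</version>
</dependency>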
Run Maven on the QA server, with the JUnit tests declared in that project. If that succeeds, deploy the JAR to the production server. For this, the details depend on the production server. If it's a WAR, I would use Cargo, but if it's a JAR it really depends on what's executing the JAR - you might need some sort of file copy, scp, etc. http://cargo.codehaus.org/
Hudson and TeamCity both have deployment features as well. You just set up a job to run (in this case the Maven job) and tell the CI server to deploy on success.
