Efficient OSGi development workflow - java

I work on a product composed of many bundles running as features on top of Karaf. Typically our developers work on one bundle at a time. Our normal development cycle goes something like: code, compile, copy the bundle to the deploy folder, test. We've also found that hot deploy just refuses to override certain bundles that are installed as features without a server restart or a feature uninstall/reinstall, so sometimes the cycle is longer.
My question is: does anyone in the community have a better way? The way we do things works, but I feel like it's pretty slow and inefficient and I'm betting someone has come up with something better!
EDIT: I realize that I was pretty unclear in my question... We are using Equinox underneath Karaf. We also use Eclipse and Maven, although I don't know that using Maven is relevant.

Sounds like you want the dev:watch command. From the documentation:
The watch command can be used to help at development time. It allows you to configure a set of URLs that will be monitored. All bundles with a location matching the given URLs will be
automatically updated. This avoids the need for manually updating the bundles or even copying the bundle to the system folder if needed. Note that only Maven-based URLs and Maven snapshots will actually be updated automatically, so if you run
dev:watch *
it will actually monitor all bundles that have a location matching mvn:* and that have '-SNAPSHOT' in their URL.
Doing "dev:watch --help" from the Karaf shell will list its available flags and args.
Something similar is the PAX plugin.
Either of these will work quite nicely if you've got the m2e (Maven integration) plugin for Eclipse.
UPDATED: In my company we strive to be as TDD as possible, therefore a lot of development is done without explicitly starting Karaf. In the normal mix of unit tests we're also using Pax Exam, which is largely fantastic even when run from within Eclipse =)
This helps ensure we're not too tied to any Karaf specifics, as it runs with Equinox/Felix/Concierge (so I mock out various Karaf specifics we depend on, like JAAS authentication). Along with many other cool tools/functionality, it's capable of provisioning Karaf features, and using TinyBundles you can even create bundles on the fly (again useful for mocking/stubbing).
Pax Exam hooks into the JUnit framework by providing a JUnit runner; the latest version (2) is much faster and has a DSL-based API, so the tests are quite concise and readable.
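For a feel of what such a test looks like, here is a minimal sketch (the bundle coordinates and class names are placeholders, and the exact runner/annotation classes differ between Pax Exam versions; this follows the newer PaxExam runner style):

    import static org.junit.Assert.assertNotNull;
    import static org.ops4j.pax.exam.CoreOptions.junitBundles;
    import static org.ops4j.pax.exam.CoreOptions.mavenBundle;
    import static org.ops4j.pax.exam.CoreOptions.options;

    import javax.inject.Inject;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.Configuration;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.junit.PaxExam;
    import org.osgi.framework.BundleContext;

    @RunWith(PaxExam.class)
    public class GreetingServiceTest {

        @Inject
        private BundleContext bundleContext; // injected by Pax Exam once the framework is up

        @Configuration
        public Option[] config() {
            return options(
                    mavenBundle("com.example", "greeting-service", "1.0.0-SNAPSHOT"), // placeholder coordinates
                    junitBundles());
        }

        @Test
        public void frameworkBootsAndBundleContextIsInjected() {
            assertNotNull(bundleContext);
        }
    }

Each @Test method runs inside the provisioned OSGi framework, so the test exercises real bundle wiring rather than a plain classpath.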
Using Pax Exam gives us good test coverage and short development times. Where tests are less practical or we're hunting bugs that don't surface in tests, the dev:watch command is invaluable.
In summary: IMO you should definitely drive your development with tests (Pax Exam will slot into your existing build nicely, and once you get used to it you'll find development quicker). You can start using the dev:watch command immediately; it will certainly speed up your current situation.
UPDATE 2: In answer to another question I've added a Maven example of Pax Exam testing a ComponentFactory. Test Driven Development is arguably the most efficient workflow available to developers today. Link to question: osgi: Using ServiceFactories? and link to source code: http://dl.dropbox.com/u/2465717/net.earcam.example.servicecomponent_2011-08-16_15-52.tgz

I've had excellent results using Equinox in Eclipse - even hot code replace works properly. Granted, the target platform is small and we have only around 50 bundles of our own, but the workflow goes like this:
First, we have a target platform that contains all third-party and Eclipse bundles; Eclipse takes care of downloading and managing them. Then, the workspace has all the bundles of the project, grouped in 3-4 working sets. Compilation happens as usual on save; sometimes GWT needs to be recompiled, but even then the changes get picked up immediately because no deployment needs to happen - the running Equinox system uses the unpacked project folders as bundles. Running this from within Eclipse gives us hot code replace and on-the-fly changes to template files; only MANIFEST.MF/plugin.xml changes require refreshing the bundle - and even then it's usually faster to just restart the framework than to type in the console.

If you use Eclipse, Eclipse Libra may be useful for you. Libra can start Felix, Equinox and Knopflerfish inside Eclipse like any other server with WST. There are some YouTube videos showing how to use it.
I also wrote some tools that can help:
An OSGi bundle that picks up OSGi services that match the filter (osgitest=junit4). With that you do not write JUnit classes, but you can provide pre-configured objects (e.g. with OSGi Blueprint). JUnit then runs based on the annotations provided in the interface your service implements.
A Maven plugin that has the following useful goals:
Start an OSGi container and deploy the bundle Maven project with all of its dependencies (which are OSGi bundles, of course). The OSGi container is started with the help of Pax Exam, but the JUnit tests are started with the help of the OSGi bundle I wrote (which runs the OSGi services you may provide).
Create a folder that contains shortcuts to all dependencies of the project (located in the Maven repository or in the target directory of the project).
If the projects are deployed onto the server (Eclipse Libra), I only have to type update X, where X is the id of the bundle, and everything is refreshed rapidly. You do not have to re-compile the projects that are published to the server if you run Equinox in Libra, as it points to the target classes folder, which is refreshed as soon as you save your class or pom.xml.
If you do not publish your project onto the server but add it as a bundle in the container pointing to the shortcut folder, you can also run the update command on the OSGi console after running mvn install (without restarting the server).
A step-by-step guide is available at http://cookbook.everit.org/
With the method above it is possible to write tests TDD-style and run them as part of the Maven build on the CI server.
I hope you will find these tools as useful as I do!

It depends on the platform under Karaf: Felix or Equinox.
Equinox
Eclipse has excellent (or almost excellent) support for launching Equinox with bundles of your choice. The two things you need to prepare are:
The bundles being developed, available in the workspace as plug-in projects
Target platform, containing the remaining bundles of the application
Such a setup will allow you to easily make changes to your bundles, even at runtime, and to easily restart the runtime when required. I see Karaf as more suitable when you are developing on a remote system, where the bundles are deployed via SSH or FTP, or when you are using external build tools like Maven, which can automatically copy the bundle into the runtime after it is built.
If you are using Equinox, this gives you some extra edge, as the runtime will execute the code directly from the workspace.
Felix
Felix doesn't seem to have such support for launching from Eclipse (although there is work toward this, tracked in this Jira issue). You can also launch it as a normal Java application, but this is far from convenient. In this case, using Maven will be a much better alternative. You can still set up Eclipse to take full advantage of the other PDE features; only the launching will be done externally.
Summary
In summary, you can always automate everything through Maven, and Karaf will greatly help you in this regard. Eclipse will give you a little edge if you are using Equinox. You should be able to have hot code replace regardless of the method you are using, because hot code replace doesn't consider OSGi at all (except in the one case when you reload your bundle and a fresh class loader is created).

Related

Test dependencies of Java web app library

I am coding a java web app.
When I started, every time I needed to use an external package, I would download the jars manually and download all dependencies of each jar manually and place them in the libraries folder (in Netbeans).
As time went on, I started using a dependency manager (Ant).
Now, I would like to use my dependency manager for all of my external libraries.
If, after executing this change I run my application and it successfully deploys (no ClassNotFoundExceptions and no NoClassDefFoundErrors), is it safe to assume that I have not missed anything and that my application will run smoothly as far as the external packages go?
Or, do I need to individually test out each functionality in my web app to confirm that the changes I made to the libraries didn't change how the application runs?
It actually depends on the code inside these libraries. Only part of the classes are loaded at startup, thus you can miss something. There is also the possibility that you're loading some classes at runtime manually, i.e. via Class.forName(String), and that code has not been triggered at startup. Thus, I would say you can't be 100% sure.
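As a hedged illustration of that last point (the class and driver names below are invented for the example), a class referenced only via Class.forName is never touched at deployment time, so a missing jar only shows up when that code path actually runs:

    public class ReportService {

        // Nothing references the driver class statically, so the web app can
        // deploy cleanly even if the jar containing it went missing while
        // switching to the dependency manager; the failure only appears the
        // first time this method is invoked.
        public void openReportConnection() throws ClassNotFoundException {
            Class.forName("org.postgresql.Driver"); // resolved lazily, at runtime
            // ... obtain a java.sql.Connection and build the report ...
        }
    }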
Generally in Java there are 3 build approaches:
Imperative - you're saying "how to assemble your code". The typical example of this is Apache Ant.
Declarative - you're saying "which code you want to assemble". The typical example of this is Apache Maven.
Mixed - which takes the benefits of the previous systems. This is Gradle.
Hope it helps!

How to use SonarQube Maven plugins without running a(n explicit) server?

I'm getting started with SonarQube for JSF page static analysis[1] in Maven. I'm only really interested in using it from Maven, since I don't like the idea of introducing another build command.
After going through Analyzing the source code and the specific Maven guide, I gained the impression that the plugin can only be used after downloading, installing/unpacking and starting a SonarQube instance at localhost and specifying the connection information in the plugin declaration in the POM. The plugin configuration parameters confirm that.
While this workflow might have advantages, it is painful to use on CI services, and the need to start a service manually in order to be able to build doesn't seem very user friendly (given that other development tools like Selenium or Arquillian pull an entire browser, driver or server in the background without a single line of configuration). Am I missing a separate plugin or configuration which manages an embedded or otherwise temporary instance to perform the analysis with a single plugin declaration?
[1] I'm aware that there are other tools based on XML validation which could do the job, but setting up a much more powerful tool like SonarQube seems to be a more flexible approach which will probably pay off.
You don't have to install SonarQube on your build server, but a server is necessary to execute the analysis (results will be pushed to it). That means you need a working server somewhere, and then you have to set the required parameters:
sonar.host.url (http://localhost:9000 is a default value)
sonar.login and sonar.password (if your SonarQube server is secured)
See all Analysis Parameters.
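For example (the host name and credentials below are placeholders), the analysis can then be triggered from the project with a single Maven command:
mvn sonar:sonar -Dsonar.host.url=http://sonar.example.com:9000 -Dsonar.login=ci-user -Dsonar.password=secret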

How can a ninjaframework-web-application be split up?

I just started to develop a Java web application based on the ninjaframework. Everything works great, but: with all the ninja dependencies, the deployable war is around 25MB. I really hope I won't have to upload a 25MB Java archive all the time - especially since the dependencies won't change nearly as often as e.g. a stylesheet of my app.
Is there a practical solution to move the ninjaframework dependencies into a separate jar? I am working with Eclipse, therefore a solution that integrates with the IDE would be great.
So far, I have had a look into Maven dependency scoping and have (unsuccessfully) tried to move the dependencies into a separate project and refer to that project with a system-scoped dependency (which, in my understanding, I would be able to deploy as a separate jar file). I currently fail at building this dependency jar with Maven - but I also wonder if there are better approaches.
I deploy the application on a Tomcat server in a Plesk installation.
Another option would be to exclude libraries that you don't use. For instance, if you don't use JPA you can safely exclude it from the build via Maven's exclusions tag.
Background: Ninja 4 potentially bundles too many libraries by default. That's cool, because everything works out of the box without thinking about which libraries are needed. The downside is that the jar/war may be too big for what you want to do. There are discussions under way about making Ninja more modular - feel free to chime in on our mailing list :)
But as written above - you can cut Ninja's bundle down yourself using Maven's excludes.
If you have to use all the dependencies, there is no way to avoid deploying them with your application.
You don't say whether you are deploying into a container (maybe Tomcat). If you are, you can try to deploy the needed libraries into the container and set the Maven scope to provided to avoid redeploying the libraries.
Having the libraries provided by the container has benefits, but it can also be a burden. Depends strongly on your deployment and operation processes.

What's a practicable way for automated configuration, versioning and deployment of Maven-based java applications? [closed]

We're maintaining a medium-sized code base consolidated into a single multi(multi)-module Maven project. Overall the whole build has up to ten output artifacts for different system components (web applications (.war), utilities (.jar), etc.).
Our deployment process so far is based on simple bash scripts that build the requested artifacts via Maven, tag the SCM repository with information regarding the artifacts, the target environment and the current build timestamp, and then upload the artifacts to the chosen environment's application servers and issue commands to restart the running daemons.
Configuration of the built artifacts happens by means of Maven profiles and resource filtering, so our builds are specific to the target environment.
This process has served us well but for different reasons I would like to move forward towards a more sophisticated approach. Especially I would like to get rid of the bash scripts.
So what are the best practices regarding configuration, versioning and deployment of Maven-based Java applications?
Should our builds be environment agnostic and the configuration be done via config files on the target systems? If so how would a developer take care that new configuration options are included in the deployed config files on the various application servers?
Should we use Maven versioning, a.k.a. the Maven Release Plugin, to tag the various builds?
Is it a good idea to configure a CI server like Jenkins or Teamcity to build and optionally deploy our artifacts for us?
I like to think of there being two problem spaces:
building artifacts (ideally environment agnostic, as that means QA can take a hash of the artifact, run their tests on that artifact and, when it comes time to deploy, verify the hash so you know it's been QA'd - see the short hashing sketch after this list. If your build produces different artifacts depending on whether it's for QA's env, the staging env or the production env, then you have to do more work to ensure the artifact going into production has been tested by QA and staged in staging)
shipping artifacts into an environment. Where that environment requires configuration of the artifacts, the shipping process should include that configuration, either by putting the appropriate configuration files in the target environment and letting the artifacts pick that up, or by cracking open the artifacts, configuring them, and sealing them back up (but in a repeatable and deterministic fashion)
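As a small aside on the hashing point above, "take a hash of the artifact" is literally just a checksum comparison; a minimal sketch using only the JDK (the artifact path is a placeholder):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class ArtifactHash {

        // Prints the SHA-1 of a built artifact so it can be compared with the
        // checksum recorded when QA tested it.
        public static void main(String[] args) throws Exception {
            String artifact = args.length > 0 ? args[0] : "target/app-1.0.0.war"; // placeholder path
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            try (InputStream in = Files.newInputStream(Paths.get(artifact))) {
                byte[] buffer = new byte[8192];
                for (int read; (read = in.read(buffer)) != -1; ) {
                    sha1.update(buffer, 0, read);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : sha1.digest()) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(hex);
        }
    }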
Maven is designed for the first problem space. "The Maven way" is all about producing environment-agnostic build artifacts and publishing them to a binary artifact store. If you look at the Maven lifecycle, you will see that the phases stop after the artifact is deployed to the Maven repository (a binary artifact store). In short, Maven sees its job as done at that point. Additionally, there are lifecycle phases for unit testing and integration testing, both of which should be possible with an environment-agnostic artifact, but that is not the full set of testing that you require... Rather, to complete your testing you will need to actually deploy the built artifacts into a real environment.
Many people try to hijack Maven to move beyond its goal (myself included). For example, you have the cargo-maven-plugin and the ship-maven-plugin, which touch on aspects beyond the Maven end game (i.e. after the artifact gets to the Maven repository). Of these, I feel personally that the ship-maven-plugin (which I wrote, hence my previous "myself included") is closest to use "after Maven", because by default it is designed to operate not on the -SNAPSHOT version of the project that you have checked out on disk, but rather on a release version of the same project that it pulls from the remote repository, e.g.
mvn ship:ship -DshipVersion=2.5.1
IMO, cargo is aimed at use around the integration-test phase in the lifecycle, but again, you can hijack it for other purposes.
If you are producing shrink-wrapped software, i.e. the kind that a user buys and installs on their own system, the installer program itself is designed to configure the application for the end user's environment. It is fine to have the Maven build produce the installer, because the installer itself is (at least somewhat) environment agnostic. OK, it may be a Microsoft Windows-only installer, or a Linux-only installer, but it does not care which user's machine it gets installed on.
Nowadays, though, we tend to concentrate more on Software as a Service, so we are deploying the software onto servers that we control. It becomes more tempting to go over to the "Maven dark side", where build profiles are used to tweak the internal configuration of the build artifacts (after all, we only have three environments we deploy to) and we are moving fast, so we don't want to take the time to make the application pick up the environment-specific configuration from outside the built artifact (sound familiar?). The reason I call this the dark side is that you really are fighting the way Maven wants to work... You are always wondering whether the jar in the local repository was built with a different profile active, so you end up having to do full clean builds all the time. When it comes time to move from QA to staging, or from staging to production, you need to do a full build of the software... and all the unit and integration tests end up being run again (or you end up skipping them, and in turn skipping the sanity checks they may be providing on the artifacts being built), so in effect you are making life harder and more complex... just for the sake of putting a few profiles into the Maven pom.xml... Just think: if you had followed the Maven way, you'd just take the artifact from the repository and move that along the different environments, unmodified, unmolested, and with MD5, SHA1 (and hopefully GPG) signatures to prove that it is the same artifact.
So, you ask, how do we code the shipping to production...
Well there are a number of ways to tackle this problem. All of them share a similar set of core principles, namely
keep the recipe for shipping to an environment in a source control system
the recipe should ideally have two parts, an environment agnostic part, and the environment specific part.
You can use good old bash scripts, or you can use more "modern" tools such as Chef and Puppet, which are designed for this second problem space.
Recommendations
You should use the right tool for the right job.
If it were me, here's what I would do:
Cut releases with the Maven Release Plugin
The built artifacts should always be environment agnostic.
The built artifacts should contain "sensible defaults" for all configuration options. In other words, they should either blow up fast if a required configuration option with no sensible default is missing, or perform in a sensible way if an optional option is unspecified. An example of a required configuration option might be the database connection details (unless the app is happy to run with an in-memory DB); see the sketch after this list.
Pick a side in the Chef vs Puppet war (it doesn't matter which side, and you can change sides if you want). If you have an Ant mindset, Chef may suit you better; if you like dependency management magic, Puppet may suit you better.
Developers should have a say in defining the chef/puppet scripts for deployment, at least the environment agnostic part of those scripts.
Operations should define the production environment specific details of the chef/puppet deployment
Keep all those scripts in SCM.
Use Jenkins, or any CI, to automate as many of the steps as possible. The Promoted Builds plugin for Jenkins is your friend.
Your end game is that every commit, provided that it passes all required tests, *could* get deployed into production automatically (or perhaps with the gate of a person saying "go ahead")... Note: I'm not saying that you actually do this for every commit, only that you could.
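To illustrate the "sensible defaults" recommendation from the list above, here is a minimal sketch (the property names and defaults are invented for the example) of an artifact that blows up fast on a missing required setting and falls back to a default for an optional one:

    // Hypothetical configuration lookup; property names are illustrative only.
    public final class AppConfig {

        // Required: there is no sensible default, so fail fast at startup.
        public static String databaseUrl() {
            String url = System.getProperty("app.db.url");
            if (url == null || url.isEmpty()) {
                throw new IllegalStateException("Missing required configuration: app.db.url");
            }
            return url;
        }

        // Optional: fall back to a sensible default when unspecified.
        public static int httpPort() {
            return Integer.parseInt(System.getProperty("app.http.port", "8080"));
        }
    }

The environment-specific part of the chef/puppet recipe then only has to supply the handful of values that genuinely differ per environment.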
What I have used in the past, and which worked well, was Apache Karaf + iPOJO together with my version control system, which was Subversion (I would use Git today).
What the version control allowed me to do was deploy a versioned copy of Apache Karaf and my configuration files. Any changes made during development or on the production system (when something needed an urgent fix) would still be traced and could be checked in (including information about who made what change and when).
What Apache Karaf supports is dynamic deployment of Maven libraries from your Maven repository, i.e. you have configuration files which specify the versions of the jars you want to release, and it will download them as required from your Maven repo and run them. iPOJO adds components for these modules, which you can configure using property values (again versioned).
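To give a feel for the iPOJO side, below is a minimal sketch of a component whose behaviour is driven by such a (versioned) property value; the component name, property and interface are invented for illustration:

    import org.apache.felix.ipojo.annotations.Component;
    import org.apache.felix.ipojo.annotations.Property;
    import org.apache.felix.ipojo.annotations.Provides;
    import org.apache.felix.ipojo.annotations.Validate;

    // Hypothetical component: the property value comes from the versioned
    // configuration files, so every change to it remains traceable in SCM.
    @Component(name = "price-feed")
    @Provides
    public class PriceFeed implements Runnable {

        @Property(name = "feed.url", value = "http://localhost:8080/feed") // default, overridable per environment
        private String feedUrl;

        @Validate
        public void start() {
            System.out.println("Polling " + feedUrl);
        }

        @Override
        public void run() {
            // ... poll the configured feed ...
        }
    }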
This assumes you have control of the end-to-end path from development to deployment, but it can work very well even with multiple remote sites.

Achieving Eclipse-like OSGi launcher

I am building an OSGi application and need to create an Eclipse-like OSGi application launcher.
For those who do not know, when an OSGi application is run through Eclipse's OSGi framework, Equinox launches and automatically manages the order of bundles being started and stopped. From what I have experienced so far, it seems to be very efficient in what it does.
I want a similar piece of software to be able to create powerful distributable OSGi applications that can take a dynamic group of bundles, and without rewriting any code, start the application correctly and in the right bundle order.
I am curious to know how Eclipse achieves this result efficiently and how I can achieve the same result.
Thank you,
Steve
You have two options:
1) Use Pax Runner
2) Use the Eclipse bundle which serves as a starter (I believe it's org.eclipse.equinox.launcher)
Edit:
1*) For the Equinox starter options see this link, paragraph "Configurations and all that...". BTW, I was wrong: it's not the launcher bundle, it's the common and update bundles.
2*) For a Pax Runner example see this screencast.
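For reference, a bare-bones launcher can also be written directly against the standard OSGi launch API, which is roughly the mechanism such launchers build upon; a minimal sketch (the bundle locations are placeholders):

    import java.util.Arrays;
    import java.util.List;
    import java.util.ServiceLoader;

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.launch.Framework;
    import org.osgi.framework.launch.FrameworkFactory;

    public class MiniLauncher {

        public static void main(String[] args) throws Exception {
            // Picks up whichever framework implementation (Equinox, Felix, ...) is on the classpath.
            FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
            Framework framework = factory.newFramework(null);
            framework.init();
            framework.start();

            BundleContext context = framework.getBundleContext();
            // Placeholder bundle locations; a real launcher would read these from configuration.
            List<String> locations = Arrays.asList("file:bundles/my-api.jar", "file:bundles/my-impl.jar");
            for (String location : locations) {
                Bundle bundle = context.installBundle(location);
                if (bundle.getHeaders().get("Fragment-Host") == null) {
                    bundle.start(); // resolution/wiring order is worked out by the framework, not by install order
                }
            }

            framework.waitForStop(0);
        }
    }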
