Inconsistent bundle integrity in Karaf between deployments - java

Background
I am using Karaf 4.2.0 on RHEL 6 with the latest available Oracle JDK 1.8.x.
For security reasons, I am trying to find the best way to validate the integrity of the bundles served by Karaf. The current approach I am using is to calculate SHA1 hashes of all bundle.jar files found at $KARAF_HOME/data/cache/bundle*/version0.0/ and compare them with the ones I have deployed to another instance of Karaf in a different environment.
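The comparison itself could be as simple as a small shell script along these lines; a minimal sketch, assuming the cache layout quoted above (the exact directory pattern can differ between Karaf versions):
# Hash every cached bundle JAR and compare against a reference list from another environment
KARAF_HOME=/opt/karaf
sha1sum "$KARAF_HOME"/data/cache/bundle*/version0.0/bundle.jar \
  | awk '{print $1}' | sort > bundles.sha1
# reference.sha1 is produced the same way on the other environment
diff bundles.sha1 reference.sha1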
The deployment itself is fully automated and works every time. Before the deployment starts, Karaf is first stopped, then the data/cache, data/tmp and data/kar folders are cleaned up, Karaf is started up again and the deployment is performed with the following two steps:
Install a fat KAR that covers all third-party bundles my app needs to run, with: kar:install
Install my application bundles through a Karaf feature file hosted on a private Artifactory instance, together with the bundles it references, with: feature:repo-add -i (illustrated below)
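For illustration, the two steps look roughly like this in the Karaf console; the Maven coordinates below are placeholders, not the actual ones from my setup:
# Step 1: install the fat KAR containing all third-party bundles (coordinates are hypothetical)
kar:install mvn:com.example/thirdparty-kar/1.0.0/kar
# Step 2: register the feature repository from Artifactory and install all its features
feature:repo-add -i mvn:com.example/my-app-features/1.0.0/xml/features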
The problem
Each deployment causes the third-party bundles in the data/cache/ folder to have different SHA1 hashes, even though the JARs' content is identical (verified by unpacking them and running a recursive diff). Moreover, the SHA1 does not match the one from Maven Central. It looks like Karaf is repackaging the JARs while serving them from data/cache, which causes the difference in SHA1 sums.
For my own application bundles, their SHA1 hashes are consistent between application redeployments (and also deployments of the same feature file to different environments) but always differ from the ones on my private Artifactory server.
Is there any way to bypass/fix this problem of inconsistent integrity for bundles served from Karaf's data/cache?

Related

What is the enterprise use of the Jetty plugin in Maven?

I learned from some Maven tutorials that we can import Jetty as a Maven plugin like this:
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <version>6.1.10</version>
  <configuration>
    <scanIntervalSeconds>5</scanIntervalSeconds>
  </configuration>
</plugin>
And we can run the plugin like this:
$ mvn jetty:run
Also, we can change the port, the context path and lots of other stuff in this plugin.
As I understand it, we can use Jetty as a server like Tomcat, and we can deploy an application through it.
But the thing I don't understand is: what is the actual enterprise use of Jetty in Maven?
From the official documentation:
The Jetty Maven plugin is useful for rapid development and testing. You can add it to any webapp project that is structured according to the Maven defaults. The plugin can then periodically scan your project for changes and automatically redeploy the webapp if any are found. This makes the development cycle more productive by eliminating the build and deploy steps: you use your IDE to make changes to the project, and the running web container automatically picks them up, allowing you to test them straight away.
However (and maybe this addresses what you call "enterprise use"):
While the Jetty Maven Plugin can be very useful for development we do not recommend its use in a production capacity. In order for the plugin to work it needs to leverage many internal Maven apis and Maven itself is not a production deployment tool. We recommend either the traditional distribution deployment approach or using embedded Jetty.
The main usage is for testing. Jetty can also be started programmatically (see this example Java code), which means you can start the server directly from your code and interact with your REST API, for instance.
You can also use it for easier deployment of small applications: just package everything into a JAR which runs the server from the main method when executed via java -jar your-app.jar. You then don't need any dependencies installed except Java.
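A minimal sketch of that workflow, assuming a standard Maven project whose build produces a self-contained JAR (the artifact name is made up):
# Build the self-contained JAR (embedded Jetty is started from the main method)
mvn clean package
# Run it anywhere that has a JRE installed
java -jar target/your-app.jar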
As a side note, I currently work in Clojure (a JVM language based on Lisp), and many people deploy their application as a JAR which internally runs embedded Jetty, because this way it also starts a REPL which you can connect to remotely to debug your application while it is running.
I don't know exactly what you mean by enterprise use, but let's say you're developing a web application and it's a Maven project.
Each time you want to test whether the web application works correctly, you need to deploy the web archive (WAR) on a web server, e.g. Jetty or Tomcat. Usually this involves a couple of manual steps like:
Start the web server
Deploy the WAR on it
Where the Maven plugin comes in handy is that it allows you to just execute
mvn jetty:run-war
and it does all these steps automatically for you in a single command, saving you lots of time. The plugin is even able to redeploy the application once it notices changes have been made.
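As a small illustration of the configuration options mentioned in the question, the port can also be overridden from the command line; this sketch assumes the jetty.port system property understood by this generation of the plugin:
# Run the packaged WAR in embedded Jetty on a non-default port (jetty.port is an assumption)
mvn -Djetty.port=8081 jetty:run-war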

Performance of Apache-karaf container while deploying the bundle

I am creating an OSGi bundle and using Apache Karaf as the OSGi container. I am testing an application by adding log statements and placing it in the deploy folder to deploy it. Everything works fine. While testing, the bundle ID keeps increasing, and after some iterations of deploying the application, the activate method is called two times. I've verified that in a fresh Apache Karaf it works as expected: the activate method is called only once.
Note: The bundle is an application with some simple print statements.
1. Is this a performance issue in the Apache Karaf container caused by reaching a higher number of bundle IDs, or some kind of caching problem in Apache Karaf?
2. Is this a problem with deploying the bundle via the deploy folder instead of osgi:install?
There are some issues with the deploy folder. It is monitored by Felix Fileinstall, so the schedule on which it checks the file system determines how it reacts.
Using bundle:install is much more reliable and also works great for testing. Simply deploy your bundle to your local Maven repo by using maven install. Then install it into Karaf using the mvn:groupId/artifactId/version URL.
If you then change your bundle, you can simply upload it using maven install again and do an update. This will reload it from your local Maven repo.
If you use a Maven -SNAPSHOT version (which you should), then you can also use bundle:watch *. Karaf will then look for changes in the local Maven repo and automatically update the bundles.
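Put together, the development loop looks roughly like this; the bundle coordinates are hypothetical:
# Build and install the bundle into the local Maven repo
mvn clean install
# In the Karaf console: install and start the bundle from the local repo
bundle:install -s mvn:com.example/my-bundle/1.0.0-SNAPSHOT
# Let Karaf watch the local repo and reload SNAPSHOT bundles whenever they change
bundle:watch *
# After code changes, simply rebuild; Karaf picks up the new SNAPSHOT automatically
mvn clean install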

Building Artifacts on Jenkins for Multiple Environments and Uploading them to S3

Environment:
Around a dozen services: both WARs and JARs (run with the Tanuki wrapper)
Each service is being deployed to development, staging, and production: the environments use different web.xml files, Spring profiles (set on the Tanuki wrapper),...
We want to store the artifacts on AWS S3 so they can be easily fetched by our EC2 instances: some services run on multiple machines, AutoScaling instances automatically fetch the latest version once they boot, development artifacts should automatically be deployed once they are updated,...
Our current deployment process looks like this:
Jenkins builds and tests our code
Once we are satisfied, we do a release with the Artifactory plugin (only for production deployments; development and staging artifacts are based on plain Jenkins builds)
We use the Promoted Builds plugin with 3 different promotion jobs for each project (development, staging, production) to build the artifact for the desired environment and then upload it to S3
This works mostly OK, however we've run into multiple annoyances:
The Artifactory plugin cannot tag and update the source code because it is not able to commit to the repository (this might be specific to CloudBees, our Jenkins provider). Never mind - we can do that manually.
The Jenkins S3 plugin doesn't work properly with multiple upload targets (we are using one for each environment) - it will try to upload to all of them and duplicate the settings when you save the configuration: JENKINS-18563. Never mind - we are using a shell script for the upload (see the sketch after this list), which is working fine.
The promotion jobs will sometimes fail because they can be run on a different instance than the original build - resulting either in a failed build (because the files are missing) or in an outdated build (since an old version is being used). Again, this might happen due to the specific setup CloudBees is providing. This can be fixed by running the build again and hopefully having the promotion job run on the same instance as its build this time (which is pure luck in my understanding).
We've been advised by CloudBees to do a code checkout before running the promotion job, since the workspace is ephemeral and it's not guaranteed that it exists or that it's up to date. However, shell scripts in promotion jobs don't seem to honor environment variables as stated in another StackOverflow thread, even though they are linked below the textarea field. So svn checkout ${SVN_URL} -r ${SVN_REVISION} doesn't work.
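An upload script of this kind can be as simple as the following sketch, assuming the AWS CLI is available and one S3 prefix per environment (bucket and paths are made up):
# Upload the built artifact to the environment-specific S3 prefix
ENVIRONMENT=development
aws s3 cp target/myservice.war "s3://my-artifacts-bucket/${ENVIRONMENT}/myservice/myservice.war"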
I'm pretty sure this can be fixed by some workaround, but surely I must be doing something terribly wrong. I assume many others are deploying their applications in a similar fashion and there must be a better way - without various workarounds and annoyances.
Thanks for any pointers and reading everything to the end!
The biggest issue you face is that you actually want to do a build as part of the promotion. The promoted builds plugin is designed to do a few small actions, primarily on the archived artifacts of your build, or else to perform tagging or other such actions.
The design ensures that it gets a workspace on a slave. But (and this is a general Jenkins thing, not something CloudBees-specific), firstly, the workspace you get need not be the workspace that was used for the build you are trying to promote. It could be:
An empty workspace
The workspace of the most recent build of the project (which may be a newer build than you are trying to promote)
The workspace of an older build of the project (when you cannot get onto the slave that has the most recent build of the project)
Now which of these you get entirely depends on the load that your Jenkins cluster is under. When running on CloudBees, you are usually most likely to encounter either of the first two situations above.
The actions you place in your promotion should therefore be “self-sufficient”. So for example they will copy the archived artifacts into a specific directory and then use those copied artifacts to do their 'thing'. Or they will trigger a downstream build passing through parameters from the build that is promoted. Or they will perform some action on a remote server without using any local state (i.e. flipping the switch in a blue-green deployment)
In your case, you are fighting on multiple fronts. The second front you are fighting on is that you are fighting Maven.
Maven does not like it when a build profile being active produces a different, non-equivalent, artifact.
Maven can probably live with the release profile producing a version of the artifact with JavaScript minified and perhaps debug info stripped (though it would be better if it could be avoided)... but that is because the two artifacts are equivalent. A webapp with JavaScript minified should work just as well as one that is not minified (unless minification introduces bugs)
You use profiles to build completely different artifacts. This is bad as Maven does not store the active profile as part of the information that gets deployed to the Maven repository. Thus when you declare a dependency on one of the artifacts, you do not know which profile was active when that artifact was built. A common side-effect of this is that you end up with split-brain artifacts, where one part of a composite artifact is talking to the development DB and the other part is talking to the production DB. You can tell there is this bad code smell when developers routinely run mvn clean install -Pprod && mvn clean install -Pprod when they want to switch to building a production build because they have been burned by split-brain artifacts in the past. This is a symptom of bad design, not an issue with Maven (other than it would be nice to be able to create architecture specific builds of Maven artifacts... but they can pose their own issues)
The “Maven way” to handle the creation of different artifacts for different deployment environments is to use a separate module to repack the “generic” artifact. As is typical of Maven's subtle approach to handling bad architecture, you will need to add repack modules for each environment you want to target... which discourages having lots of different target environments... (i.e. if you have 10 webapps/services to deploy and three target deployment environments you will have 10 generic modules and 3x10 target specific modules... giving a 40 module build) and hopefully encourages you to find a way to have the artifact self-discover its correct configuration when deployed.
I suspect that the safest way to hack a solution is to rely on copying archived artifacts.
During the main build, create a file that will check out the same revision as is being built, e.g.
echo '#!/usr/bin/env sh' > checkout.sh
echo "svn revert . -R" >> checkout.sh
echo "svn update --force -r ${SVN_REVISION}" >> checkout.sh
Though you may want to write something a little more robust, e.g. ensuring that you are in an SVN working copy, ensuring that the working copy is using the correct remote URI, removing any files that are not needed, etc.
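A slightly more defensive generated script could look like the following sketch; SVN_URL and SVN_REVISION are the variables already referenced in the question and the snippet above, expanded at generation time so the revision and URL are baked in:
# Generate a more robust checkout.sh during the main build
cat > checkout.sh <<EOF
#!/usr/bin/env sh
# Fail early if the workspace is not an SVN working copy pointing at the expected URL
svn info . > /dev/null 2>&1 || { echo "Not an SVN working copy"; exit 1; }
svn info . | grep -q "^URL: ${SVN_URL}" || { echo "Working copy has the wrong URL"; exit 1; }
# Drop local modifications and unversioned files, then sync to the promoted revision
svn revert -R .
svn status --no-ignore | awk '/^[?I]/ {print \$2}' | xargs -r rm -rf
svn update --force -r ${SVN_REVISION}
EOF
chmod +x checkout.sh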
Ensure that the script you create is archived.
At the beginning of your promotion process, copy the archived checkout.sh into the workspace and then run that script. Then you should be able to do all your subsequent promotion steps as before.
A better solution would be to create build jobs for each of the promotion processes and pass the SVN revision through as a build parameter (but this will require more work for you to set up, as you already have the promotion processes in place).
The best solution is to fix your architecture so that you produce artifacts that discover their correct configuration from their target environment, and thus you can just deploy the same artifact to each required environment without having to rebuild artifacts at all.

Deployment from Nexus to different environment (Test, Prerelease, Production)

I have Java projects built with Maven, with artifacts (.jar, .war) deployed to the Nexus release repository. Jenkins is also used for CI (building every hour) and automatically deploys to Tomcat (the integration testing environment). We are using the maven-release-plugin for artifact deployment to Nexus; that is done on a local PC.
I need to automate deploying to other 3 environments: Test, Prerelease, Production.
There are 2 problems:
It is unlikely that I can use Jenkins for that, as Jenkins can't know when the current version is promoted as good & released.
The location of the .jar/.war is different after every release, e.g.:
http://nexusserver:8081/nexus/content/repositories/releases/com/company/projectname/component/0.2.4/
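Since only the version changes in that path, the download can be parameterized; a minimal sketch, assuming the artifact file follows the standard Maven naming convention component-<version>.war:
# Fetch a given release from Nexus; VERSION is the only part that changes per release
VERSION=0.2.4
BASE=http://nexusserver:8081/nexus/content/repositories/releases/com/company/projectname/component
curl -f -O "${BASE}/${VERSION}/component-${VERSION}.war"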
A somewhat similar question is:
Deploying from Nexus to Tomcat (via Jenkins/Hudson)
It sounds like you need what is often referred to as a "build pipeline" or "build pipeline manager" a term which I believe became popular with the (excellent) book "Continuous Delivery".
There is an open source Jenkins plugin called Build Pipeline Plugin that may meet your needs.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin

Maven: running Unit-Tests remotely

We are currently working on a distributed Java EE application and therefore have separate test and production systems.
Compiling and bundling is done via an Ant task. Now we want to deploy the JAR files of the different servers to the test servers and run the JUnit integration / functional tests there. If they succeed, the current version should be deployed to the live servers.
Plain Unit-Tests are executed by Hudson.
Is that possible with Maven and is there any information or best practice available?
Yes. Hudson has Maven integration. Take a look at this wiki and this link.
You can set unit test thresholds for your job to check whether it passes a certain number of test cases. If it does not, the deploy plugin will not get invoked and the app will not get deployed.
Take the JAR built from Ant and reuse it. I would add a Maven repository to your environment, such as Artifactory, Archiva, or Nexus, and deploy to it using Ivy. You almost certainly need to use a Maven repository to be happy with Maven for anything other than small-scale personal projects. http://ant.apache.org/ivy/
Use Maven to grab the JAR from the Maven Repository. For this, just use a normal Maven dependency declaration.
Run Maven on the QA server, with the JUnit tests declared in that project. If that succeeds, deploy the JAR to the production server. For this, the details depend on the production server. If it's a WAR, I would use Cargo, but if it's a JAR it really depends on what's executing the JAR - you might need some sort of file copy, scp, etc. http://cargo.codehaus.org/
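For the JAR case, the copy step can be as small as this sketch (host, path and service name are made up):
# Copy the tested JAR to the production host and restart the service that runs it
scp target/myapp.jar deploy@prod-server:/opt/myapp/myapp.jar
ssh deploy@prod-server 'service myapp restart'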
Hudson and TeamCity both have deployment features as well. You just set up a job to run (in this case the Maven job) and tell the CI server to deploy on success.
