Deployment from Nexus to different environments (Test, Prerelease, Production) - Java

I have Java projects built with Maven, with artifacts (.jar, .war) deployed to a Nexus release repository. Jenkins is also used for CI (building every hour) and automatically deploys to Tomcat (the integration-testing environment). We use the maven-release-plugin to deploy artifacts to Nexus; that is done on a local PC.
I need to automate deployment to the other three environments: Test, Prerelease, Production.
There are two problems:
It is unlikely that I can use Jenkins for that, as Jenkins can't know when the current version has been promoted as good and released.
The location of the .jar/.war is different after every release:
http://nexusserver:8081/nexus/content/repositories/releases/com/company/projectname/component/0.2.4/
A somewhat similar question is
Deploying from Nexus to Tomcat (via Jenkins/Hudson)

It sounds like you need what is often referred to as a "build pipeline" or "build pipeline manager", a term which I believe became popular with the (excellent) book Continuous Delivery.
There is an open source Jenkins plugin called Build Pipeline Plugin that may meet your needs.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin
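Whichever pipeline tool you pick, problem 2 (the version-specific path) can be sidestepped by resolving the artifact through Nexus's REST redirect service, as the answers further down this page do, rather than hard-coding the repository path. A minimal sketch, assuming the coordinates from the question and a VERSION parameter supplied by the promotion job:
# Resolve by GAV coordinates; Nexus redirects (hence -L) to the
# version-specific location, so the changing path no longer matters.
VERSION=0.2.4   # e.g. a Jenkins job parameter set at promotion time
curl -L "http://nexusserver:8081/nexus/service/local/artifact/maven/redirect?r=releases&g=com.company.projectname&a=component&v=${VERSION}&e=war" \
  -o component.war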

Related

Android library to work with local maven repository

I have an Android app which uses a library of my own. I am developing them at the same time, so when I have a change in my library I want to test it in my app as well.
They are set up as different projects since my library will also be used by other developers. The way we work is: we build the library on a CI platform and deploy it to an Artifactory server.
Then from the app I reference this library directly through Artifactory. This way, when my CI platform builds the app, the build process takes the lib from Artifactory and CI works smoothly.
This is a good way to work, but it is a pain in the ass when developing them in parallel, because I have to commit the changes, create a pull request, merge it into the development branch, and wait for CI to build it and deploy it to the Artifactory server, just so I can test it in the app.
Coming from Java EE development, I used mvn install, which puts the artifact in the local Maven repo, so I could use it from my web application right away.
I want to do something similar, i.e. have Gradle install my artifact into my local repo, so the artifact in the local repo is updated but not the one on the remote server. This way I can debug more easily while still keeping the CI setup in place.
But I have no idea how to do this with Gradle. The Artifactory plugin only seems to allow deployment to an Artifactory server.
Any ideas?
If you're using the Gradle maven plugin, you can run the install task to deploy the artifacts to your local Maven repo [1]. Once the artifacts are in your local Maven repo, you need to add mavenLocal() [2] as one of the repositories so the dependency can be resolved. One strategy I use is to always set a custom version for my local copy so that I can be certain the local version is getting picked up; if you choose not to do that, dependencies are resolved in the order the repositories are listed (so you'll need to ensure mavenLocal() comes before your Artifactory server).
[1] https://docs.gradle.org/current/userguide/maven_plugin.html, https://github.com/dcendents/android-maven-gradle-plugin
[2] https://discuss.gradle.org/t/how-to-use-maven-local-repository-for-gradle-build/2244
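A minimal sketch of this setup, with hypothetical coordinates (shown for a plain Java module; an Android library would use the android-maven-gradle-plugin from [1] to get the same install task):
// build.gradle of the library
apply plugin: 'java'
apply plugin: 'maven'            // adds the 'install' task

group = 'com.example'            // hypothetical coordinates
version = '1.0.0-LOCAL'          // custom version so the local copy is unmistakable

// build.gradle of the app: mavenLocal() listed first so the locally
// installed artifact wins during dependency resolution
repositories {
    mavenLocal()
    maven { url 'https://artifactory.example.com/libs-release' }  // hypothetical URL
}
dependencies {
    compile 'com.example:mylibrary:1.0.0-LOCAL'   // hypothetical
}
Run gradle install in the library, then rebuild the app; bump the -LOCAL version whenever you want to be certain the fresh copy is picked up.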

tomcat-maven-plugin in a GWT project. What is the difference between the org.codehaus.mojo and the org.apache.tomcat.maven plugin?

I am trying to set up a Tomcat server for a GWT application. I would like to configure the server to pick up my server-side code changes immediately.
While looking for helpful examples on the web, I found that there are two Maven plugins for this.
One from:
org.codehaus.mojo (which also provides a plugin for GWT in the Maven ecosystem).
And the second from: org.apache.tomcat.maven.
What is the difference between them? Which one should I choose for a GWT 2.7 Maven project? I will develop the app in the Eclipse IDE, so I would also like good integration with it, e.g. the Eclipse "Servers" tab/view.
From https://tomcat.apache.org/maven-plugin-2.2/
This is the new home for the Tomcat Maven Plugin (previously hosted at Codehaus).
The CodeHaus Mojo (now MojoHaus) Tomcat Maven Plugin is obsolete.
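For reference, declaring the Apache plugin in a pom.xml looks like this (2.2 was the current version at the time), and mvn tomcat7:run then starts an embedded Tomcat with the webapp deployed:
<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
</plugin>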
I have Maven archetypes that use the Tomcat Maven Plugin to fire up a server for development, with automatic redeployment of the webapp when classes change, at https://github.com/tbroyer/gwt-maven-archetypes
Note that they use a different Maven Plugin for GWT than the one from MojoHaus (ex-CodeHaus Mojo), one that works much better with multi-module builds.
I've never used Eclipse WTP (I tried it and had too much trouble, probably because I didn't really know how to use it properly), so I can't really comment, but I see no reason why it wouldn't work.

How can I automatically deploy a war from Nexus to Tomcat?

I have a Maven web project which gets built and deployed (both SNAPSHOT and release versions) to Nexus successfully. I would like to know if there is a feature/plugin in Nexus that picks up the released WAR and deploys it to a remote Tomcat automatically.
I know that you can deploy the WAR to a remote Tomcat using the tomcat-maven-plugin, but I would like to know if there is an alternative solution.
Please guide.
Typically you'd use a CI tool like Jenkins to run the Maven build that publishes your WAR file into Nexus. Nexus would then be used by whatever tool you use to push the WAR onto the target Tomcat environment.
There are lots and lots of options.
Jenkins post build SSH script
Run a post-build SSH task from Jenkins that does something like this on the target tomcat server:
curl "http://myrepo/nexus/service/local/artifact/maven/redirect?r=releases&g=myorg&a=myapp&v=1.1&e=war" \
-o /usr/local/share/tomcat7/webapps/myapp.war
service tomcat7 restart
Rundeck
My preference is to use Rundeck for deployments, because it has a Nexus plugin, providing convenient drop-down menus of available releases.
There is also a Rundeck plugin for Jenkins that can be used to orchestrate a CI process with Jenkins performing the build, hand-over to Rundeck for deployment, followed by a Jenkins call-back to run the integration tests.
Chef
I also use Chef, which can be used to automatically deploy software in a pull fashion.
The artifact cookbook has direct support for Nexus, whereas the application_java cookbook uses a more generic "pull from a URL" approach that also works well.
..
..
The list goes on, so I hope this helps.
We used UrbanCode for deployment automation; it retrieves the WAR from Artifactory/Nexus and deploys it to the target server.
I used the Nexus REST API; these endpoints download the artifact into the Jenkins workspace.
To deploy SNAPSHOT and release builds to Tomcat, we can create a parameterized Jenkins job and pass the parameters to the REST endpoint; to push onto a server like Tomcat, the "Deploy WAR/EAR" Jenkins plugin helps.
We can parameterize the endpoint and use it in a "Build" step with the "Execute shell" option:
wget --user=${UserName} --password=${Password} "http://192.168.49.131:8080/nexus/service/local/artifact/maven/redirect?r=releases&g=${GroupId}&a=${ArtifactId}&v=${Version}&e=${TypeOfArtifact}" --content-disposition
The actual Nexus endpoints look something like this:
wget --user=admin --password=admin123 "http://localhost:8080/nexus/service/local/artifact/maven/redirect?r=snapshots&g=org.codezarvis.artifactory&a=hushly&v=0.0.1-SNAPSHOT&e=jar" --content-disposition
wget --user=admin --password=admin123 "http://localhost:8080/nexus/service/local/artifact/maven/redirect?r=releases&g=org.codezarvis.artifactory&a=hushly&v=0.0.5&e=jar" --content-disposition

Building Artifacts on Jenkins for Multiple Environments and Uploading them to S3

Environment:
Around a dozen services: both WARs and JARs (run with the Tanuki wrapper)
Each service is being deployed to development, staging, and production: the environments use different web.xml files, Spring profiles (set on the Tanuki wrapper),...
We want to store the artifacts on AWS S3 so they can be easily fetched by our EC2 instances: some services run on multiple machines, AutoScaling instances automatically fetch the latest version once they boot, development artifacts should automatically be deployed once they are updated,...
Our current deployment process looks like this:
Jenkins builds and tests our code
Once we are satisfied, we do a release with the Artifactory plugin (only for production deployments; development and staging artifacts are based on plain Jenkins builds)
We use the Promoted Builds plugin with 3 different promotion jobs for each project (development, staging, production) to build the artifact for the desired environment and then upload it to S3
This works mostly OK, however we've run into multiple annoyances:
The Artifactory plugin cannot tag and update the source code, because it is not able to commit to the repository (this might be specific to CloudBees, our Jenkins provider). Never mind - we can do that manually.
The Jenkins S3 plugin doesn't properly work with multiple upload targets (we are using one for each environment) - it will try to upload to all of them and duplicate the settings when you save the configuration: JENKINS-18563. Never mind - we are using a shell script for the upload, which is working fine.
The promotion jobs will sometimes fail because they can be run on a different instance than the original build - resulting either in a failed build (because the files are missing) or in an outdated build (since an old version is used). Again, this might happen due to the specific setup CloudBees provides. It can be fixed by running the build again and hoping the promotion job runs on the same instance as its build this time (which is pure luck, in my understanding).
We've been advised by CloudBees to do a code checkout before running the promotion job, since the workspace is ephemeral and it's not guaranteed to exist or to be up to date. However, shell scripts in promotion jobs don't seem to honor environment variables, as stated in another StackOverflow thread, even though they are listed below the textarea field. So svn checkout ${SVN_URL} -r ${SVN_REVISION} doesn't work.
I'm pretty sure this can be fixed by some workaround, but surely I must be doing something terribly wrong. I assume many others are deploying their applications in a similar fashion and there must be a better way - without various workarounds and annoyances.
Thanks for any pointers and reading everything to the end!
The biggest issue you face is that you actually want to do a build as part of the promotion. The promoted builds plugin is designed to do a few small actions, primarily on the archived artifacts of your build, or else to perform tagging or other such actions.
The design ensures that it gets a workspace on a slave. But (and this is a general Jenkins thing, not CloudBees-specific) the workspace you get need not be the workspace that was used for the build you are trying to promote. It could be:
An empty workspace
The workspace of the most recent build of the project (which may be a newer build than you are trying to promote)
The workspace of an older build of the project (when you cannot get onto the slave that has the most recent build of the project)
Now which of these you get entirely depends on the load that your Jenkins cluster is under. When running on CloudBees, you are usually most likely to encounter either of the first two situations above.
The actions you place in your promotion should therefore be “self-sufficient”. So, for example, they will copy the archived artifacts into a specific directory and then use those copied artifacts to do their 'thing'; or they will trigger a downstream build, passing through parameters from the build being promoted; or they will perform some action on a remote server without using any local state (e.g. flipping the switch in a blue-green deployment).
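For example, a self-sufficient promotion shell step might fetch the promoted build's archived artifact over HTTP instead of trusting whatever workspace it lands in. A rough sketch, assuming the PROMOTED_JOB_NAME and PROMOTED_NUMBER variables that the Promoted Builds plugin exposes, and a hypothetical archived path:
# Fetch the archived WAR of the exact build being promoted, so it does not
# matter which slave or workspace this promotion step happens to run in.
curl -sSf "${JENKINS_URL}job/${PROMOTED_JOB_NAME}/${PROMOTED_NUMBER}/artifact/target/myapp.war" \
  -o myapp.war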
In your case, you are fighting on multiple fronts. The second front is that you are fighting Maven.
Maven does not like it when a build profile being active produces a different, non-equivalent, artifact.
Maven can probably live with the release profile producing a version of the artifact with JavaScript minified and perhaps debug info stripped (though it would be better if that could be avoided)... but that is because the two artifacts are equivalent: a webapp with minified JavaScript should work just as well as one that is not minified (unless minification introduces bugs).
You use profiles to build completely different artifacts. This is bad as Maven does not store the active profile as part of the information that gets deployed to the Maven repository. Thus when you declare a dependency on one of the artifacts, you do not know which profile was active when that artifact was built. A common side-effect of this is that you end up with split-brain artifacts, where one part of a composite artifact is talking to the development DB and the other part is talking to the production DB. You can tell there is this bad code smell when developers routinely run mvn clean install -Pprod && mvn clean install -Pprod when they want to switch to building a production build because they have been burned by split-brain artifacts in the past. This is a symptom of bad design, not an issue with Maven (other than it would be nice to be able to create architecture specific builds of Maven artifacts... but they can pose their own issues)
The “Maven way” to handle the creation of different artifacts for different deployment environments is to use a separate module to repack the “generic” artifact. As is typical of Maven's subtle approach to handling bad architecture, you will need to add repack modules for each environment you want to target... which discourages having lots of different target environments... (i.e. if you have 10 webapps/services to deploy and three target deployment environments you will have 10 generic modules and 3x10 target specific modules... giving a 40 module build) and hopefully encourages you to find a way to have the artifact self-discover its correct configuration when deployed.
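A sketch of one such repack module, assuming a hypothetical generic WAR named myapp: declared as a war-type dependency, it is overlaid by the maven-war-plugin by default, and files in this module's own src/main/webapp (e.g. a production web.xml) take precedence over the overlay:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>           <!-- hypothetical coordinates -->
  <artifactId>myapp-prod</artifactId>      <!-- production repack of the generic WAR -->
  <version>1.0.0</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <!-- the generic artifact being repacked -->
      <groupId>com.example</groupId>
      <artifactId>myapp</artifactId>
      <version>1.0.0</version>
      <type>war</type>
    </dependency>
  </dependencies>
</project>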
I suspect that the safest way to hack a solution is to rely on copying archived artifacts.
During the main build, create a file that will check out the same revision as the one being built, e.g.
# ${SVN_REVISION} expands during the main build, freezing the exact revision
echo '#!/usr/bin/env sh' > checkout.sh
echo "svn revert . -R" >> checkout.sh
echo "svn update --force -r ${SVN_REVISION}" >> checkout.sh
chmod +x checkout.sh
Though you may want to write something a little more robust, e.g. ensuring that you are in a SVN working copy, ensuring that the working copy is using the correct remote URI, removing any files that are not needed, etc
Ensure that the script you create is archived.
At the beginning of your promotion process, copy the archived checkout.sh into the workspace and run that script. Then you should be able to do all your subsequent promotion steps as before.
A better solution would be to create build jobs for each of the promotion processes and pass the SVN revision through as a build parameter (but that will require more work to set up, as you already have the promotion processes in place).
The best solution is to fix your architecture so that you produce artifacts that discover their correct configuration from their target environment, and thus you can just deploy the same artifact to each required environment without having to rebuild artifacts at all.
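Since the environments in the question already differ mainly by Spring profiles, one sketch of such self-discovery (assuming Spring's standard SPRING_PROFILES_ACTIVE mechanism) is to select the profile at runtime rather than at build time:
# Set once per environment (e.g. in the Tanuki wrapper config or EC2
# user data); the very same artifact then configures itself at start-up.
export SPRING_PROFILES_ACTIVE=production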

Maven: running Unit-Tests remotely

We are currently working on a distributed Java EE application and therefore have separate test and production systems.
Compiling and bundling are done via an Ant task. Now we want to deploy the JAR files of the different servers to the test servers and run the JUnit integration/function tests there. If they succeed, the current version should be deployed to the live servers.
Plain unit tests are executed by Hudson.
Is that possible with Maven and is there any information or best practice available?
Yes. Hudson has Maven integration. Take a look at this wiki and this link.
You can set unit-test thresholds for your job to check that it passes a certain number of test cases. If it does not, the deploy plugin will not be invoked and the app will not get deployed.
Take a JAR built from Ant and reuse it. I would add a Maven repository to your environment such as Artifactory, Archiva, or Nexus and deploy to that using Ivy. You almost certainly need to use a Maven repository to be happy with Maven for anything other than small scale personal projects. http://ant.apache.org/ivy/
Use Maven to grab the JAR from the Maven Repository. For this, just use a normal Maven dependency declaration.
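The dependency declaration in the QA project's pom.xml is just the standard form (coordinates hypothetical):
<dependency>
  <groupId>com.example</groupId>
  <artifactId>myservice</artifactId>
  <version>1.0.0</version>
</dependency>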
Run Maven on the QA server, with the JUnit tests declared in that project. If that succeeds, deploy the JAR to the production server. For this, the details depend on the production server. If it's a WAR, I would use Cargo, but if it's a JAR it really depends on what's executing the JAR - you might need some sort of file copy, scp, etc. http://cargo.codehaus.org/
Hudson and TeamCity both have deployment features as well. You just set up a job to run (in this case the Maven job) and tell the CI server to deploy on success.
