I'm trying to learn more about how big project builds are versioned by developer teams using Maven. For example, some projects have versions like 2.0.0-SNAPSHOT-g57517b7. What does that "g57517b7" represent exactly? And is it possible to automate a versioning process that increments that number, or some kind of build number, in Maven?
The last part of the version name looks like the current git commit id; the leading "g" is the prefix git describe puts in front of an abbreviated commit hash.
Have a look here: Include git commit hash in jar version
When I maven deploy a snapshot build such as myproject-1.0-SNAPSHOT maven will helpfully tag the snapshot with the date and the build number - something like myproject-1.0-20160720.182254-6.jar. Is there any way I can control the format of this unique tag?
In particular I'm trying to solve two problems:
I want to know the exact artifact that I just uploaded so that I can pull it into a docker image. There are potentially several builds in parallel for different developers so I need to get the exact version.
I want to tie the snapshot unique ID to the checkin id in git.
Use a concrete version number in the pom; this way you will have predictable builds that you can reuse later on.
And use e.g.:
mvn org.codehaus.mojo:versions-maven-plugin:2.1:set -DnewVersion=1.1 -DgenerateBackupPoms=false
to set the version, and do a mvn clean deploy afterwards; this way you will know which version you can use in Docker.
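For example, a minimal sketch of how both problems could be handled together, assuming a Jenkins-style BUILD_NUMBER variable, a plain git checkout and a Dockerfile that declares ARG APP_VERSION (all names below are placeholders, not prescribed by the answer above):

# derive a unique, traceable version from the CI build number and the git commit
VERSION="1.0.${BUILD_NUMBER}-g$(git rev-parse --short HEAD)"
mvn org.codehaus.mojo:versions-maven-plugin:2.1:set -DnewVersion="${VERSION}" -DgenerateBackupPoms=false
mvn clean deploy
# the exact same value can then be used for the Docker image
docker build --build-arg APP_VERSION="${VERSION}" -t myproject:"${VERSION}" .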
We are working on a legacy project, and the first task is to set up a DevOps pipeline for it.
The important thing is that we are very new to this area.
We are planning to use Jenkins and SonarQube for this purpose initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules)
We have to follow this build structure, as there are no plans to reorganise it as a single multi-module Maven project
Currently the builds and dependencies are managed manually
E.g.: the project is subdivided into 5 multi-module Maven projects,
say A,B,C,D and E
1. A and C are completely independent and can be easily built
2. B depends on the artifact generated by A (jar) and has multiple maven profiles (say tomcat and websphere, it is a webservice module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple maven profiles (say tomcat and websphere, it is a web project)
Based on the Jenkins documentation for handling this scenario, we are thinking about parameterized builds using the "parameterized build plugin" and the "extended choice parameter plugin". With the help of these plugins we are able to parameterize the profile name, but before each build the job waits for the profile parameter.
So we are still searching for a good solution to
1. Keep the dependency between projects and build all the projects if there is any change in SCM (SVN). For that we used "Build whenever a SNAPSHOT dependency is built" and the SCM polling option. Unfortunately this does not seem to work in our case (we set an interval of 5 minutes for SCM polling, but no build is triggered by test commits).
2. Even though we are able to parameterize the profile, this still seems to be a manual step (is there an option to automate this part too, i.e. builds with the tomcat profile and the websphere profile should happen sequentially?).
We are struggling to find a solution to cater all these core requirements. Any pointer would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you didn't get any response yet, I'll try to give some general advice.
There are usually multiple ways to reach some aim in Jenkins, each has its pros and cons. Choosing the most fitting solution depends on the specific requirements and your environment/setup.
However, you first need something that just works; then you can refine it.
You can get a quick result with the following:
Everything in one job
Configure your subversion repo (Multiple are possible) to be checked out into your workspace
Enable Poll SCM trigger
Build your modules/projects via Execute shell build steps; a sketch of such a step follows below. (Failed builds can be propagated to the job result by using exit 1 in an Execute shell build step.)
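A minimal sketch of such an Execute shell step, assuming the five project checkouts land in subdirectories A to E of the workspace and the profile names from the question apply:

set -e                                      # make the step fail as soon as any build fails
mvn -f A/pom.xml clean install              # independent
mvn -f C/pom.xml clean install              # independent
mvn -f B/pom.xml clean install -P tomcat    # uses A's jar from the local repository
mvn -f D/pom.xml clean install              # uses C
mvn -f E/pom.xml clean install -P tomcat    # uses A, B and D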
However, keep in mind that this will prevent advanced functionality on a per-project/module basis, such as mail notifications to the dev to blame, or trends of metrics like warnings or static code analysis.
The following solution is easier to extend in that direction.
Wrapper job around your various build jobs
Use Build step Trigger/call builds on other projects to build A, archive needed artifacts
Use Build step Trigger/call builds on other projects with parameter tomcat to build B's tomcat version; use the Copy Artifact Plugin to copy over the jar from A
...
Use Build step Trigger/call builds on other projects with parameter tomcat to build E's tomcat version. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify the parameter there if you need the artifact of, e.g., B's tomcat version
In this setup, monitoring the SVN is an issue: if you trigger it from polling SCM, the checkout happens in your wrapper workspace, while you don't actually need it checked out there, but in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, then you need to make sure the downstream jobs are executed on the same machine (there are plugins to do so).
Or, even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: For the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; this is the engine for the upcoming Jenkins version 2.0, see here.
I would create 5 different jobs for ABCDE.
As you mentioned, A and C would be standalone jobs, so I would just do mvn clean install/package/verify based on your need.
For B, I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally, for E, I would invoke top-level Maven targets five times: A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using Jenkins REST API (http[s]://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN)
Then you create an SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl, wget, python, ...); a sketch of such a hook follows below.
There are a lot of manuals on how to create SVN hooks; here's the first result for "SVN Hooks" on Google.
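A minimal sketch of such a post-commit hook, assuming curl is available on the SVN server; the Jenkins URL, job name and token are placeholders for your own values:

#!/bin/sh
# post-commit hook: ask Jenkins to start a build after every commit
JENKINS_URL="https://jenkins.example.com"
JOB="my-wrapper-job"
TOKEN="the-token-from-Trigger-builds-remotely"
curl -fsS "${JENKINS_URL}/job/${JOB}/build?token=${TOKEN}" > /dev/null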
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like Build Flow Plugin or Parameterized Trigger Plugin which can help create dependencies between builds, but I think Pipeline is the newest and considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is an overkill for you, go for the alternatives.
I would recommend making sure each build does a mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
Automate parameterized builds: "build with tomcat profile and websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do, you just check This build is parameterized in your build config and add a MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet; alternatively, see the shell sketch below for running both profiles from a single job.
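If triggering the job twice feels too manual, a minimal sketch of an Execute shell step that builds both profiles sequentially (profile names taken from the question):

# build the tomcat and the websphere variant one after the other
for MVN_PROFILE in tomcat websphere; do
    mvn clean install -P "${MVN_PROFILE}"
done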
Extra Tip:
While hacking your way through this, consider using Job Configuration History Plugin, it can help review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a bit different approach to fully de-couple the projects.
If you are able to set up your internal Artifactory (or any artifact repository), then in the Maven build I would treat each one of these dependencies as a third-party library, exactly as is done with any other external libraries you are using.
This way, each such project can be separately built and stored in the artifact repository, and when a dependent project is built it will just pick up the right version as specified in its pom file.
This way you'll have a separate build process for each one of the projects, and only the relevant (i.e. changed) projects will be built.
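For illustration, this is roughly what such a dependency would look like in the consuming project's pom (groupId, artifactId and version are placeholders):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>project-a</artifactId>
    <!-- a fixed, released version taken from the internal repository -->
    <version>1.2.0</version>
</dependency>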
I have a Maven 3 multi-module project (~50 modules) which is stored in Git. Multiple developers are working on this code and building it, and we also have automated build machines that run cold builds on every push.
Most individual changesets alter code in a fairly small number of modules, so it's a waste of time to rebuild the entire source tree with every change. However, I still want the final result of running the parent project build to be the same as if it had built the entire codebase. And I don't want to start manually versioning modules, as this would become a nightmare of criss-crossing version updates.
What I would like to do is add a plugin which intercepts some step in build or install, and takes a hash of the module contents (ideally pulled from Git), then looks in a shared binary repository for an artifact stored under that hash. If one is found, it uses that artifact and doesn't even execute the full build. If it finds nothing in the cache it performs the build as normal, then stores its artifact in the cache. It would also be good to rebuild any modules which have dependencies (direct or transient) which themselves had a cache miss.
Is there anything out there which does anything like this already? If not, what would be the cleanest way to go about adding it to Maven? It seems like plugins might be able to accomplish it, but for a couple pieces I'm having trouble finding the right way to attach to Maven. Specifically:
How can you intercept the "install" goal to check the cache, and only invoke the module's 'native' install goal on a cache miss?
How should a plugin pass state from one module to another regarding which cache misses have occurred in order to force rebuilds of dependencies with changes?
I'm also open to completely different ways to achieve the same end result (fewer redundant builds) although the more drastic the solution the less value it has for me in the near term.
I have previously implemented a more complicated solution with artifact version manipulation and deployment to private Maven repository. However, I think this will fit your needs better and is somewhat more simple:
Split your build into multiple builds (e.g., with a single build per module using maven -pl argument).
Setup parent-child relationships between these builds. (Bamboo even has additional support for figuring out Maven dependencies, but I'm not sure how it works.)
Configure Maven's settings.xml to use a different local repository location - specify a new directory inside your build working directory (see the sketch after this list). See docs: https://maven.apache.org/guides/mini/guide-configuring-maven.html
Use the mvn install goal to ensure newly built artifacts are added to that local repository
Use Bamboo artifact sharing to expose built artifacts from the local repository - you should probably filter this to include only the package(s) you're interested in
Set dependent builds to download all artifacts from parent builds and put them into the proper subdirectory of the local repository (which is customized to be in the working directory)
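A minimal sketch of steps 1 and 3 combined, assuming the Bamboo task runs from the checkout directory; -Dmaven.repo.local is a standard Maven property and is used here as a simpler alternative to editing settings.xml (module-b is a placeholder module name):

# build only module-b (plus the modules it depends on, via -am) into a local
# repository inside the working directory, so Bamboo can share it as an artifact
mvn -pl module-b -am install -Dmaven.repo.local="$PWD/.m2/repository"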
This should even work for feature branch builds thanks to the way Bamboo handles parent-child relations for branch builds.
Note that this implies that Maven will re-download all other dependencies, so you should use a private proxying Maven repository on your local network, such as Artifactory or Nexus.
If you want, I can also describe the more complicated scenario I've already implemented that involves modifying artifact versions and deploying to private Maven repository.
The Jenkins Maven Integration plugin (the Maven project job type) allows you to manage/minimize dependent builds:
whenever a SNAPSHOT dependency is built (determined by Maven)
after other projects are built (manually via Jenkins jobs)
And if you do a 'mvn deploy' to save the build into your corporate Maven repo then you don't have to worry about dependencies when builds run on slave Jenkins machines. The result is that no module is ever built unless it or one of its dependencies has changed.
Hopefully you can apply these principles to a solution with Bamboo.
I have a web application that we deploy to production whenever a feature is ready; sometimes that can be a couple of times a day, sometimes there can be a couple of weeks between releases.
Currently, we don't increment our version numbers for our project, and everything has been sitting at version 0.0.1-SNAPSHOT for well over a year.
I am wondering what the Maven way is for doing continuous delivery for web apps. It seems overkill to bump the version number on every commit, but never bumping the version number, as we are doing now, also seems wrong.
What is the recommend best practice for this type of Maven usage?
The problem is actually a two-fold one:
Advancing the project version number in each individual pom.xml file (and there can be many of them).
Updating the version numbers in all dependent components so they use the latest versions of each other.
I recommend the following presentation that discusses the practical realities of doing continuous delivery with Maven:
YouTube presentation on CD with Maven
Slides
The key takeaway is each build is a potential release, so don't use snapshots.
This is my summary based on the video linked by Mark O'Connor's answer.
The solution requires a DVCS like git and a CI server like Jenkins.
Don't use snapshot builds in the Continuous Delivery pipeline and don't use the maven release plugin.
Snapshot versions such as 1.0-SNAPSHOT are turned into real versions such as 1.0.buildNumber where the buildNumber is the Jenkins job number.
Algorithm steps (a shell sketch follows the list):
Jenkins clones the git repo with the source code, and say the source code has version 1.0-SNAPSHOT
Jenkins creates a git branch called 1.0.JENKINS-JOB-NUMBER so the snapshot version is turned into a real version 1.0.124
Jenkins invokes the maven versions plugin to change the version number in the pom.xml files from 1.0-SNAPSHOT to 1.0.JENKINS-JOB-NUMBER
Jenkins invokes mvn install
If the mvn install is a success then Jenkins will commit the branch 1.0.JENKINS-JOB-NUMBER and a real non-snapshot version is created with a proper tag in git to reproduce later. If the mvn install fails then Jenkins will just delete the newly created branch and fail the build.
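A minimal sketch of those steps as a Jenkins shell build step; BUILD_NUMBER is Jenkins' built-in variable, and the branch/commit handling is deliberately simplified:

set -e                                   # abort on the first failure
VERSION="1.0.${BUILD_NUMBER}"
git checkout -b "${VERSION}"
mvn versions:set -DnewVersion="${VERSION}" -DgenerateBackupPoms=false
mvn clean install
# only reached if the build succeeded: record the real, non-snapshot version
git commit -am "Release ${VERSION}"
git tag "${VERSION}"
# if mvn install fails, the script aborts above and the new branch can simply be deleted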
I highly recommend the video linked from Mark's answer.
Starting from Maven 3.2.1, continuous delivery friendly versions are supported out of the box: https://issues.apache.org/jira/browse/MNG-5576
You can use 3 predefined variables in version:
${changelist}
${revision}
${sha1}
So what you basically do is:
Set your version to e.g. 1.0.0-${revision}. (You can use mvn versions:set to do it quickly and correctly in multi-module project.)
Put a property <revision>SNAPSHOT</revision> in the pom for local development (see the pom sketch after this list).
In your CI environment run mvn clean install -Drevision=${BUILD_NUMBER} or something like this or even mvn clean verify -Drevision=${BUILD_NUMBER}.
You can use for example https://wiki.jenkins-ci.org/display/JENKINS/Version+Number+Plugin to generate interesting build numbers.
Once you find out that the build is stable (e.g. pass acceptance tests) you can push the version to Nexus or other repository. Any unstable builds just go to trash.
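For illustration, the relevant pom.xml fragments could look roughly like this (the coordinates are placeholders):

<groupId>com.example</groupId>
<artifactId>myapp</artifactId>
<version>1.0.0-${revision}</version>

<properties>
    <!-- default for local development; overridden on CI with -Drevision=${BUILD_NUMBER} -->
    <revision>SNAPSHOT</revision>
</properties>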
There are some great discussions and proposals on how to deal with the Maven version number and continuous delivery (CD) (I will add them after my part of the answer).
So first, my opinion on SNAPSHOT versions. In Maven, a SNAPSHOT indicates that the version before the -SNAPSHOT suffix is currently under development. Because of this, tools like Nexus or the maven-release-plugin have special treatment for SNAPSHOTs. In Nexus they are stored in a separate repository, and it is allowed to update multiple artifacts under the same SNAPSHOT version. So a SNAPSHOT can change without you knowing about it (because you never increment any number in your pom). Because of this I do not recommend using SNAPSHOT dependencies in a project, especially in a CD world, since the build is no longer reliable.
A SNAPSHOT as the project version would be a problem when your project is used by other ones, for the same reasons.
Another problem of SNAPSHOTs for me is that they are not really traceable or reproducible any more. When I see a version 0.0.1-SNAPSHOT in production I need to do some searching to find out when it was built and from which revision. When I find a release of this software on a filesystem I need to look at the pom.properties or MANIFEST file to see whether it is old garbage or maybe the latest and greatest version.
To avoid manually changing the version number (especially when you run multiple builds a day), let the build server change the number for you. So for development I would go with a
<major>.<minor>-SNAPSHOT
version, but when building a new release the build server could replace the SNAPSHOT with something more unique and traceable.
For example, one of these:
<major>.<minor>-b<buildNumber>
<major>.<minor>-r<scmNumber>
So the major and minor numbers can be used for marketing purposes, or just to show that a great new milestone has been reached, and they can be changed manually whenever you want. The buildNumber (the number from your continuous integration server) or the scmNumber (the revision from Subversion or Git) makes each release unique and traceable. When using the buildNumber or the Subversion revision, the project versions are even sortable (not with Git hashes). With the buildNumber or the scmNumber it is also quite easy to see which changes are in a release.
Another example is the versioning of Stack Overflow, which uses
<year>.<month>.<day>.<buildNumber>
And here are the missing links:
Versioning in a Pipeline
Continuous Delivery and Maven
DON'T DO THIS!
<Major>.<minor>-<build>
will bite you in the backside, because Maven treats anything after a hyphen as LEXICAL. This means, for example, that build 9 will sort lexically higher than build 10.
This is bad, because if you're asking for the latest version of something in Maven, the lexically higher value wins.
The solution is to use a decimal point instead of a hyphen preceding the build number.
DO THIS!
<Major>.<minor>.<build>
It's okay to have SNAPSHOT versions locally, but as part of a build, it's better to use
mvn versions:set -DnewVersion=${major}.${minor}.${build.number}
There are ways to derive the major/minor version from the pom, e.g. using help:evaluate and piping the result into an environment variable before invoking versions:set (a sketch follows below). This is dirty, but I really scratched my head (as did others in my team) to make it simpler, and (at the time) Maven wasn't mature enough to handle this. I believe Maven 3.2.1 might have something that goes some way towards helping with this, so this info may no longer be relevant.
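One possible version of that trick, assuming a reasonably recent maven-help-plugin (for the -DforceStdout flag) and a CI-provided BUILD_NUMBER variable:

# read the current version from the pom, strip the -SNAPSHOT suffix,
# then append the CI build number before setting the release version
BASE_VERSION=$(mvn -q help:evaluate -Dexpression=project.version -DforceStdout | sed 's/-SNAPSHOT//')
mvn versions:set -DnewVersion="${BASE_VERSION}.${BUILD_NUMBER}" -DgenerateBackupPoms=false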
It's okay for a bunch of developers to release on the same major.minor version - but it's always good to be mindful that minor changes are non-breaking and major version changes have some breaking API change, or deprecation of functionality/behaviour.
From a Continuous Delivery perspective every build is potentially releasable, therefore every check-in should create a build.
At my work for web apps we currently use this versioning pattern:
<jenkins build num>-<git-short-hash>
Example: 247-262e37b9.
This is nice because it gives you a version that is always unique and traceable back to the Jenkins build and git revision that produced it.
In Maven 3.2.1+ they finally killed the warnings for using a ${property} as a version, which makes it really easy to build these. Simply change all your poms to use <version>${revision}</version> and build with -Drevision=whatever (a sketch follows below). The only issue is that in your released poms the version will stay as ${revision} in the actual pom file, which can cause all sorts of weird issues. To solve this I wrote a simple Maven plugin (https://github.com/jeffskj/cd-versions-maven-plugin) which does the variable replacement in the file.
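A minimal sketch of such a build invocation, assuming a Jenkins BUILD_NUMBER variable and a git working copy:

# produces versions like 247-262e37b9, unique and traceable to the build and the commit
mvn clean deploy -Drevision="${BUILD_NUMBER}-$(git rev-parse --short HEAD)"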
As a starting point you may have a look at Maven: The Complete Reference. Project Versions.
Then there is a good post on versioning strategy.
I'm new to Maven, using the m2e plugin for Eclipse. I'm still wrapping my head around Maven, but it seems like whenever I need to import a new library, like java.util.List, now I have to manually go through the hassle of finding the right repository for the jar and adding it to the dependencies in the POM. This seems like a major hassle, especially since some jars can't be found in public repositories, so they have to be uploaded into the local repository.
Am I missing something about Maven in Eclipse? Is there a way to automatically update the POM when Eclipse automatically imports a new library?
I'm trying to understand how using Maven saves time/effort...
You picked a bad example: classes like java.util.List ship with the Java Standard Runtime and are available regardless of Maven configuration.
With that in mind, if you wanted to add something external, say Log4j, then you would need to add a project dependency on Log4j. Maven would then take the dependency information and create a "signature" to search for, first in the local cache, and then in the external repositories.
Such a signature might look like
groupId:artifactId:version
or perhaps
groupId:artifactId:version:classifier
This identifies a maven "module" which will then be downloaded and configured into your system. Once in place it adds all of the classes within the module to your configured project.
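For example, adding Log4j 1.x (version shown purely as an illustration) means declaring the following dependency in the pom:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>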
Maven principally saves time in downloading and organizing JAR files in your build. By defining a "standard" project layout and a "standard" build order, Maven eliminates a lot of the guesswork in the "why isn't my project building" sweepstakes. Also, you can use neat commands like "mvn dependency:tree" to print out a list of all the JARs your project depends on, recursively.
Warning note: If you are using the M2E plugin and Eclipse, you may also run into problems with the plugin itself. The 1.0 version (hosted at eclipse.org) was much less friendly than the previous 0.12 version (hosted at Sonatype). You can get around this to some extent by downloading and installing the "standalone" version of Maven from apache (maven.apache.org) and running Maven from the command line. This is actually much more stable than trying to run Maven inside Eclipse (in my personal experience) and may save you some pain as you try to learn about Maven.