Completely override Maven Release lifecycle - java

In effort to customize the release of a project I came across the following article:
http://www.sonatype.com/people/2011/01/using-the-maven-release-plugin-things-to-know/
In essence the following assumptions are made for using the default release plugin:
Your codebase is going to be versioned and released as a “unit”.
You are using an SCM tool and a repository manager.
You are performing your release from a single, “versionable” unit in SCM.
You are using standard version numbers.
You are publishing artifacts to a repository.
Our project meets almost none of these assumptions: we want to use a custom version scheme (independent of SCM, Maven, etc.), deploy the artifacts to a filesystem (not a repository), not have Maven touch SCM at all, and so on.
As recommended, we should probably define our own release lifecycle. I am therefore assuming we would need to override the release phase of the Maven default lifecycle to run our plugin. I guess I am missing the location of the required documentation. Is this even possible?

What you're doing may be too radical to be worth doing within Maven; you may find yourself effectively writing a parallel build system that is merely triggered from Maven.
Assuming the effort is worth it, I would start by configuring the various plugins (most of Maven's functionality is delivered through plugins, triggered by phases in the lifecycle) and have them execute different goals in different phases. You may need to write Maven plugins for whatever you want that doesn't already exist, which I expect you will need to do in such a radical case.
http://maven.apache.org/guides/plugin/guide-java-plugin-development.html
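As a sketch of what such a configuration could look like: you bind your own plugin's goal to an existing lifecycle phase so it runs there instead of the default behavior. The plugin coordinates and goal name below are hypothetical and assume you have written such a plugin:

<build>
  <plugins>
    <plugin>
      <!-- hypothetical in-house plugin implementing the custom release logic -->
      <groupId>com.example</groupId>
      <artifactId>custom-release-maven-plugin</artifactId>
      <version>1.0.0</version>
      <executions>
        <execution>
          <id>custom-release</id>
          <!-- run the custom release goal during the deploy phase -->
          <phase>deploy</phase>
          <goals>
            <goal>release</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

You would typically also skip the default deploy behavior (the maven-deploy-plugin supports a skip configuration) so that only your own logic runs in that phase.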

BOM override order (with overlapping BOMs)

I have a parent POM and a normal Maven project.
Both define BOMs in their dependencyManagement. In some cases, these BOMs may overlap, e.g. both specify a version for log4j.
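To make the overlap concrete, here is a minimal sketch of the kind of setup I mean (the coordinates are made up):

<dependencyManagement>
  <dependencies>
    <!-- both BOMs manage a version for, e.g., log4j -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>bom-a</artifactId>
      <version>1.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>bom-b</artifactId>
      <version>1.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>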
From my tests it seems that:
For overlapping BOMs in the parent POM, the first one wins, i.e. supplies the version for the artifact.
If a BOM from the child and from the parent overlap, then the version from the BOM in the child wins.
Unfortunately, I did not find any documentation about this.
Am I right and can I rely on this behaviour?
A logging framework traditionally lives "next to" your actual code, so it is a bit unclear how to handle this.
I found that separating the build phase dependencies from the deployment phase dependencies works for me.
The basic idea is that you write the code only requiring a dependency on the API of the logging framework (slf4j used to be a natural choice), and then you have several deployment Maven configurations (one for JBoss, one for WebSphere, one for running in your IDE, etc.) where you add the dependencies relevant to that deployment.
So my suggestion is to change your codebase accordingly to only have API dependencies for the logging framework, and then create a new Maven project for each actual deployment type. You might also want to bake in deployment specific configuration files if needed.
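A minimal sketch of that split, assuming slf4j as the API (the versions are only illustrative):

<!-- shared code module: compile only against the logging API -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.36</version>
</dependency>

<!-- a deployment-specific module then adds a concrete binding, e.g. for IDE runs -->
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.2.11</version>
  <scope>runtime</scope>
</dependency>

For JBoss or WebSphere deployments you would pick whichever binding matches the container's logging setup.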

How to manage dependencies for project support tooling like code generators?

Never found a really satisfactory solution to this. How do you do it? I am looking for inspiration for new approaches.
For context, assume I write a generator that takes a project resource and generates a code file. But it could be any other project support tool: validator, converter, deployer, etc. These are often manually triggered actions that do not run as part of the normal build.
Such tools typically require a few dependencies that are not required by the project itself at runtime.
Strategies that I have applied or considered in the past:
add the tool dependency to the project anyway, and either mark it "provided" or filter it out during the packaging process (this is what I usually do, but now I am in danger of writing normal project code that uses the tool dependency, potentially resulting in an error that only manifests at runtime; see the sketch after this list)
use a script (trying hard to avoid scripts and their hidden dependencies and complexities)
create separate support projects (trying hard to avoid project explosion, especially for seemingly small tasks that are handled by a few lines of code)
subprojects / modules (only vaguely aware of this option, never really tried it)
Maven plugin that is run via a profile with separate dependencies (trying to avoid the separate project required to maintain the custom plugin)
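For the first strategy, a minimal sketch of what the tool dependency looks like (the coordinates are made up):

<!-- tool-only dependency: available at build time, kept out of the runtime package -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>codegen-tool</artifactId>
  <version>1.0</version>
  <scope>provided</scope>
</dependency>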
Inspiration from answers and comments
separate tools project shared by multiple projects
I just realized that Maven and Eclipse already solved exactly this problem for a very specific "tool": test code.
Test code often needs additional dependencies not used by the application itself.
People obviously invested quite a bit to keep the "test / tool" infrastructure within the same project, as opposed to creating a separate test-project:
separate source locations (src/main/java, src/test/java)
separate resource locations (src/main/resources, src/test/resources)
a full-blown separate Maven dependency scope "test", complete with transitive resolution
separate compilation phases (compile / test) with separate dependency trees
Eclipse supports special JUnit launch configurations that are able to correctly resolve the test dependencies
probably more stuff that I am not aware of currently
So, I am strongly considering programming all my supporting tools as "JUnit test cases".
I am planning to create and commit shared JUnit launch configs for the team that execute just one specific "test case", which will run the tool logic instead of testing.
The problem I have to solve is avoiding running these dummy tests during the normal Maven test phase (one possible approach is sketched below).
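A hedged sketch of that exclusion, assuming a naming convention like *ToolRunner for the tool classes (the convention itself is made up):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- classes following the assumed tool naming convention never run as tests -->
      <exclude>**/*ToolRunner.java</exclude>
    </excludes>
  </configuration>
</plugin>

The tools could then still be launched individually from the IDE via the shared launch configs.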
Also, writing this, I realize that there is yet another such system already in place: the Maven plugin infrastructure, which also has a separate dependency resolution mechanism. So far, though, it seems necessary or at least customary to create separate projects for plugins. I will look into ways of writing and building project-specific Maven plugins without needing to create separate projects. I am thinking about generating the pom.xml needed for plugin compilation on the fly, and including all the test dependencies.

Why doesn't Gradle or Maven have a dependency version lock file?

I've recently been introduced to the concept of a dependency version lock file when reading about package managers like NPM, Yarn, Paket, Cargo, etc. My understanding is that it is a file that lists all direct and transitive dependencies along with their exact version number so subsequent builds are guaranteed to use an equivalent set of dependencies. This seems to be a desirable feature since many package managers have or are adopting the concept.
My questions are then:
Why doesn't Maven or Gradle use a lock file? Or if they do, why haven't I seen it?
What are the pros and cons of allowing version ranges in a package manager's dependency resolution strategy vs only allowing exact versions?
Maven does not have a way to achieve what you are asking for. Even if you set specific versions for your direct dependencies, which you should, your transitive dependencies can easily change unintentionally due to a seemingly unrelated change. For example, adding a dependency on a new library can give you an older version of an existing transitive dependency.
What you need is a dependencyManagement section that lists all your direct and transitive dependencies. Even then, you will not be able to detect when a transitive dependency is added or removed, which is something NPM, for example, can do; and when that happens, your dependencyManagement section no longer covers all of your dependencies. To detect those changes you could use something like dependency-lock-maven-plugin, which I have written. Using it also makes it less important to have everything in a dependencyManagement section, since changes in transitive dependencies will be detected.
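For reference, a minimal configuration sketch for the plugin; the coordinates, version, and goal name below follow its README at the time of writing, so double-check them against the project documentation:

<plugin>
  <groupId>se.vandmo</groupId>
  <artifactId>dependency-lock-maven-plugin</artifactId>
  <version>1.0</version> <!-- check the README for the current version -->
  <executions>
    <execution>
      <id>check</id>
      <!-- fails the build when resolved dependencies differ from the committed lock file -->
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The lock file itself is created once with the plugin's create-lock-file goal and committed to SCM.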
I would also recommend having https://maven.apache.org/enforcer/enforcer-rules/requireUpperBoundDeps.html in your build, since Maven chooses the version of a transitive dependency that is closest in the tree and not, as you might expect, the highest version.
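Enabling that rule looks like this:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <id>enforce-upper-bound-deps</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- fail when "nearest wins" resolution picks a version lower than one
               required elsewhere in the dependency tree -->
          <requireUpperBoundDeps/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>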
I have seen many runtime problems caused by developers accidentally changing transitive dependencies.
TL;DR: You do need something like a lock file in Maven, but it does not exist, for historical and ideological reasons.
I would not recommend using version ranges, since they make your build non-reproducible.
Nor do they behave as you might expect when it comes to transitive dependencies.
Dependency locking is a feature that reached maturity in Gradle 5.0:
https://docs.gradle.org/current/userguide/dependency_locking.html
Gradle's implementation was inspired by the Nebula plugin: https://github.com/nebula-plugins/gradle-dependency-lock-plugin
Version ranges work well when used as input to whatever updates your lock state. In Gradle, you can target the specific dependencies whose version ranges should be re-resolved:
gradle classes --update-locks org.apache.commons:commons-lang3,org.slf4j:slf4j-api
Or, you can just say "go update all my deps":
gradle dependencies --write-locks
Specifying resolution strategies is also worth reviewing, if you're looking into automation: https://docs.gradle.org/current/userguide/dependency_resolution.html
Maven, SBT, and Gradle all have what you're describing. It's called "using released (or fixed) versions". A released version looks like 1.2.3, as compared to a version range [1.2.3,) or a snapshot (1.2.3-SNAPSHOT).
If all your dependencies are using released versions, you will achieve what you're describing.
Version ranges are a valid form of version as well, depending on your use case, but I would normally advise against them unless they are used for parent POMs, or only during active development. Version ranges can come in handy when you would rather not keep updating the fixed version of a third-party artifact or parent POM, and you are certain that the respective artifact can in no way break things for you (and, trust me, breakage does happen a lot with version ranges). Fixed versions should be used when you want to guarantee that the code builds and works against what you originally developed and tested it with.
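To illustrate the difference (the artifact is chosen arbitrarily):

<!-- released (fixed) version: resolves to the same artifact on every build -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>3.12.0</version>
</dependency>

<!-- version range: resolved against the repository at build time,
     so two builds run at different times may pick up different versions -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>[3.0,4.0)</version>
</dependency>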
There is no need for a feature such as a "lock file", or anything like it, if your pom.xml strictly defines the versions of your dependencies.
If you read the documentation regarding dependency management, you will see that this is indeed so:
Maven
Gradle
SBT

Jenkins and Maven profiles

We are working on a legacy project, and the first task is to set up DevOps for it.
The important thing is we are very new to this area.
We are planning to use Jenkins and SonarQube for this purpose initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules)
We had to follow this build structure as there are no plans for reorganising it as a single multi-module Maven project
Currently the builds and dependencies are managed manually
E.g., the project is subdivided into five multi-module Maven projects,
say A, B, C, D and E:
1. A and C are completely independent and can be easily built
2. B depends on the artifact generated by A (jar) and has multiple maven profiles (say tomcat and websphere, it is a webservice module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple maven profiles (say tomcat and websphere, it is a web project)
Based on the Jenkins documentation for handling this scenario, we are thinking about parameterized builds using the Parameterized Build plugin and the Extended Choice Parameter plugin. With the help of these plugins we are able to parameterize the profile name, but before each build, the build waits for the profile parameter.
So we are still searching for a good solution to:
1. Keep the dependency between projects and build the whole set of projects if there is any change in SCM (SVN). For that we used "Build whenever a SNAPSHOT dependency is built" and the SCM polling option. Unfortunately this does not seem to work in our case (we set an interval of 5 minutes for SCM polling, but no build is triggered by test commits).
2. Even though we are able to parameterize the profile, this still seems to be a manual step (is there a way to automate this part too, i.e. builds with the tomcat and websphere profiles should run sequentially?).
We are struggling to find a solution to cater all these core requirements. Any pointer would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you didn't get any response yet, I'll try to give some general advice.
There are usually multiple ways to reach a given aim in Jenkins, each with its pros and cons. Choosing the most fitting solution depends on your specific requirements and your environment/setup.
However, you first need something that just works; then you can refine it.
You can get a quick result with the following:
Everything in one job
Configure your Subversion repo (multiple are possible) to be checked out into your workspace
Enable Poll SCM trigger
Build your modules/projects via Execute shell build steps. (A failed build can be propagated to the job result by using exit 1 in an Execute shell build step.)
However, keep in mind that this will prevent advanced functionality on a per-project/module basis, such as mail notifications to the developer to blame, or metric trends like warnings and static code analysis results.
The following solution is easier to extend in that direction.
Wrapper job around your various build jobs
Use the build step Trigger/call builds on other projects to build A; archive the needed artifacts
Use the build step Trigger/call builds on other projects with some parameter tomcat to build the B tomcat version; use the Copy Artifact Plugin to copy over the jar from A
...
Use the build step Trigger/call builds on other projects with some parameter tomcat to build the E tomcat version. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify a parameter there if you need the artifact of, e.g., the B tomcat version
In this setup, monitoring the SVN is an issue: if you trigger it via Poll SCM, the code will be checked out into your wrapper workspace, while you don't actually need it checked out there but in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, then you need to make sure the downstream jobs are executed on the same machine (there are plugins to do so).
Or, even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: For the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; it is the engine for the upcoming Jenkins version 2.0 (see here).
I would create five different jobs, one for each of A, B, C, D and E.
As you mentioned, A and C would be standalone jobs, so I would just do mvn clean install (or package/verify, based on your needs).
For B, I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally, for E, I would invoke top-level Maven targets five times: A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using Jenkins REST API (http[s]://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN)
Then you create an SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl, wget, Python, ...).
There are a lot of manuals on how to create SVN hooks; here's the first result on "SVN Hooks" from Google.
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like Build Flow Plugin or Parameterized Trigger Plugin which can help create dependencies between builds, but I think Pipeline is the newest and considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is overkill for you, go for the alternatives.
I would recommend making sure each build does an mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
Automate parameterized builds: "build with tomcat profile and websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do: you just check This build is parameterized in your build config and add an MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet.
Extra Tip:
While hacking your way through this, consider using the Job Configuration History Plugin; it can help you review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a slightly different approach to fully decouple the projects.
If you are able to set up an internal artifact repository (e.g. Artifactory), then I would treat each of the dependencies in the Maven build as a third-party library, exactly as is done with any other external libraries you use.
This way, each such project can be separately built and stored in the repository, and when a dependent project is built it will simply pick up the right version as specified in its POM file.
In the end you'll have a separate build process for each of the projects, and only relevant (i.e. changed) projects will be built.

Using cached artifacts in Maven to avoid redundant builds?

I have a Maven 3 multi-module project (~50 modules) which is stored in Git. Multiple developers are working on this code and building it, and we also have automated build machines that run cold builds on every push.
Most individual changesets alter code in a fairly small number of modules, so it's a waste of time to rebuild the entire source tree with every change. However, I still want the final result of running the parent project build to be the same as if it had built the entire codebase. And I don't want to start manually versioning modules, as this would become a nightmare of criss-crossing version updates.
What I would like to do is add a plugin which intercepts some step in build or install, takes a hash of the module contents (ideally pulled from Git), and then looks in a shared binary repository for an artifact stored under that hash. If one is found, it uses that artifact and doesn't even execute the full build. If it finds nothing in the cache, it performs the build as normal, then stores its artifact in the cache. It would also be good to rebuild any modules with dependencies (direct or transitive) that themselves had a cache miss.
Is there anything out there which does anything like this already? If not, what would be the cleanest way to go about adding it to Maven? It seems like plugins might be able to accomplish it, but for a couple of pieces I'm having trouble finding the right way to hook into Maven. Specifically:
How can you intercept the "install" goal to check the cache, and only invoke the module's 'native' install goal on a cache miss?
How should a plugin pass state from one module to another regarding which cache misses have occurred in order to force rebuilds of dependencies with changes?
I'm also open to completely different ways to achieve the same end result (fewer redundant builds) although the more drastic the solution the less value it has for me in the near term.
I have previously implemented a more complicated solution involving artifact version manipulation and deployment to a private Maven repository. However, I think this one will fit your needs better and is somewhat simpler:
Split your build into multiple builds (e.g., with a single build per module using Maven's -pl argument).
Set up parent-child relationships between these builds. (Bamboo even has additional support for figuring out Maven dependencies, but I'm not sure how it works.)
Configure Maven's settings.xml to use a different local repository location: specify a new directory inside your build working directory (see the settings.xml sketch after this list). See the docs: https://maven.apache.org/guides/mini/guide-configuring-maven.html
Use the mvn install goal to ensure newly built artifacts are added to the local repository.
Use Bamboo artifact sharing to expose the built artifacts from the local repository; you should probably filter this to include only the package(s) you're interested in.
Set dependent builds to download all artifacts from their parent builds and put them into the proper subdirectory of the local repository (which is customized to live inside the working directory).
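A sketch of the settings.xml change from the third step; the environment variable name is an assumption and depends on how your Bamboo agent exposes the working directory:

<settings>
  <!-- keep the local repository inside the build working directory so Bamboo
       can share it between related builds (the path is illustrative) -->
  <localRepository>${env.BUILD_WORKING_DIRECTORY}/.m2/repository</localRepository>
</settings>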
This should even work for feature branch builds thanks to the way Bamboo handles parent-child relations for branch builds.
Note that this implies Maven will re-download all other dependencies, so you should use a proxying private Maven repository on the local network, such as Artifactory or Nexus.
If you want, I can also describe the more complicated scenario I've already implemented that involves modifying artifact versions and deploying to private Maven repository.
The Jenkins plugin allows you to manage/minimize dependent builds:
whenever a SNAPSHOT dependency is built (determined by Maven)
after other projects are built (manually via Jenkins jobs)
And if you do an mvn deploy to save the build into your corporate Maven repo, then you don't have to worry about dependencies when builds run on slave Jenkins machines. The result is that no module is ever built unless it or one of its dependencies has changed.
Hopefully you can apply these principles to a solution with Bamboo.
