How to build only modified modules and their dependencies in Gradle

I use gitflow-incremental-builder with Maven in a monorepo. It allows me to:
Build only those modules in a feature branch that differ from the main branch.
Build only those modules that have changed since the last successful build tag.
When a library changes, build all the modules that use it.
Build a library if it is needed by other modules, but skip its tests if nothing in it changed (skipTestsForUpstreamModules).
Force a full build.
Changes are detected using git log and then used to adjust the Maven reactor configuration.
I am looking for a similar tool that does this for Gradle.

Gradle's Build Cache will automatically track the inputs and outputs of tasks and skip any task whose inputs have not changed.
Enabling Gradle Build Cache
It can be enabled locally by adding in gradle.properties
org.gradle.caching=true
or by adding a flag to the command line
./gradlew test --build-cache
Sharing the build cache
The build cache for a project can be shared across multiple machines over HTTP. A remote cache isn't required, though - the build cache still works when it is only stored locally.
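As a minimal sketch, a shared cache over HTTP can be configured in settings.gradle.kts roughly like this (the URL and the CI-only push policy are illustrative assumptions, not part of this answer's setup):
// settings.gradle.kts
buildCache {
    remote<HttpBuildCache> {
        // hypothetical cache node URL - replace with your own
        url = uri("https://example.com/cache/")
        // a common policy: only CI builds push to the shared cache, local builds only read
        isPush = System.getenv("CI") != null
    }
}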
Register task inputs
Gradle needs to know about all the inputs and outputs of tasks, otherwise tasks might be skipped when they shouldn't be, so make sure they are correctly registered.
For example, if some integration tests depend on an environment variable, then register the environment variable as a test-task input.
// build.gradle.kts
tasks.named("integrationTest") {
    // TEST_TASK_QUALITY is used in integration tests to change <blah blah blah>
    // register it as an input so Gradle knows when to re-run the tests
    inputs.property("TEST_TASK_QUALITY", providers.environmentVariable("TEST_TASK_QUALITY"))
}
Stable task outputs
Gradle will use the outputs of some tasks as the inputs of other tasks. If those outputs aren't stable, then Gradle will always re-run the dependent tasks.
For that reason, it's worth enabling reproducible builds in all projects.
// build.gradle.kts
tasks.withType<AbstractArchiveTask>().configureEach {
    isPreserveFileTimestamps = false
    isReproducibleFileOrder = true
}
Also, consider input normalization for any custom file formats your project uses.
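For example, if a file on the runtime classpath changes on every build (say, an embedded timestamp) without affecting behaviour, normalization can tell Gradle to ignore it. A minimal sketch, where the file name is a hypothetical example:
// build.gradle.kts
normalization {
    runtimeClasspath {
        // hypothetical volatile file; ignoring it keeps the runtime
        // classpath stable for up-to-date and build-cache checks
        ignore("**/build-info.properties")
    }
}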

Related

Example of debug/release Java build using Gradle?

I'm using (or trying to use) Gradle to build a plain Java (not Android) multi-module project, which contains a CLI and several micro-services. I have a simple, single-configuration build working.
I'd like to be able to build it two different ways: a "development" build with one group of settings and dependencies, and a "deployment" build with different settings and dependencies. Some settings and dependencies will overlap between the two.
In other build tools, this would correspond to "Debug" and "Release" build configurations. But for Gradle, I've seen build types, variants, flavors, and capabilities, and combinations of all of those—some of which seem to be Android specific, some depending on plugins that seem to have fallen out of date. But I can't seem to locate a straightforward example of a "traditional" debug/release build setup.
I have a simple approach working using manually created buildDebug, buildRelease, assembleDebug, assembleRelease, etc. tasks, but it feels like I'm working around Gradle rather than with it.
Does anyone have such an example who would be willing to share their work? Many thanks!
It looks like my early searches (e.g. "gradle debug and release builds") and my expectation of something built into Gradle sent me down the wrong rabbit hole. I finally stumbled across this question only after it occurred to me to search for "gradle equivalent of maven build profiles".
It's possible I'm missing a Gradle feature (e.g. variants) I could be taking advantage of, but it appears the correct solution may be:
ext {
    env = findProperty('env') ?: 'debug'
}

dependencies {
    // shared dependencies
    if (env == 'debug') {
        // debug build dependencies
    }
    if (env == 'release') {
        // release build dependencies
    }
}
The build is selected by setting the env property on the command line:
# debug build; can use either
$ gradle build
$ gradle build -Penv=debug
# release build
$ gradle build -Penv=release
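For builds using the Kotlin DSL, the same idea looks roughly like this (just a sketch of the approach above; the commented-out dependency is a hypothetical placeholder):
// build.gradle.kts
val env: String = (findProperty("env") as String?) ?: "debug"

dependencies {
    // shared dependencies
    if (env == "debug") {
        // debug build dependencies, e.g.:
        // runtimeOnly("com.h2database:h2:2.2.224")
    }
    if (env == "release") {
        // release build dependencies
    }
}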
Hope that helps a fellow Gradle newbie.

Gradle: Multiproject build - How to determine projects that need a rebuild?

I have a multi-project build setup. If I execute the "jar" task of any subproject, Gradle checks whether it needs to rebuild a dependent project by using org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.
Is there a way to access this information, e.g. to build a custom task, or a task in a custom plugin, which automatically copies the jars of these projects somewhere?
You should be able to use jar.didWork to determine whether the jar task actually did some work, if I remember correctly: https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html#getDidWork()
Or, maybe more appropriately, use something like the following:
gradle.taskGraph.afterTask { task, state ->
    // check anything on Task or TaskState, like didWork, executed, failure, noSource, skipMessage, skipped or upToDate
}
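For instance, here is a rough Kotlin DSL sketch that collects every jar that was actually rebuilt (the dist destination is a hypothetical choice, and note that afterTask is not compatible with Gradle's configuration cache):
// build.gradle.kts
gradle.taskGraph.afterTask {
    // the receiver is the task that just finished; state is its TaskState
    if (name == "jar" && state.didWork) {
        project.copy {
            from(outputs.files)
            into(project.rootDir.resolve("dist")) // hypothetical destination
        }
    }
}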

Jenkins and Maven profiles

We are working on a legacy project, and the first task is to set up DevOps for it.
The important thing is that we are very new to this area.
We are planning to use Jenkins and SonarQube for this purpose initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules).
We had to follow this build structure as there are no plans to reorganise it as a single multi-module Maven project.
Currently the builds and dependencies are managed manually.
E.g.: the project is subdivided into 5 multi-module Maven projects,
say A, B, C, D and E
1. A and C are completely independent and can be easily built
2. B depends on the artifact generated by A (a jar) and has multiple Maven profiles (say tomcat and websphere; it is a webservice module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple Maven profiles (say tomcat and websphere; it is a web project)
Based on the Jenkins documentation, to handle this scenario we are thinking about parameterized builds using the "parameterized build plugin" and the "extended choice parameter plugin". With the help of these plugins we are able to parameterize the profile name, but before each build, the builder waits for the profile parameter.
So we are still searching for a good solution to
1. keep the dependency between projects and build all of the projects if there is any change in SCM (SVN). For that we used "Build whenever a SNAPSHOT dependency is built" and the "SCM polling" option. Unfortunately this option seems not to be working in our case (we have set an interval of 5 minutes for SCM polling, but no build happens on test commits).
2. Even though we are able to parameterize the profile, this seems to be a manual step (is there an option to automate this part too, i.e. builds with the tomcat profile and the websphere profile should happen sequentially?).
We are struggling to find a solution to cater all these core requirements. Any pointer would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you didn't get any response yet, I'll try to give some general advice.
There are usually multiple ways to reach a given aim in Jenkins, each with its pros and cons. Choosing the most fitting solution depends on your specific requirements and your environment/setup.
However, you first need something that just works; then you can refine it.
You get a quick result with the following:
Everything in one job
Configure your Subversion repo (multiple repos are possible) to be checked out into your workspace
Enable the Poll SCM trigger
Build your modules/projects via Execute shell build steps. (Failed builds can be propagated to the job result by using exit 1 in an Execute shell build step.)
However, keep in mind that this will prevent advanced functionality on a per-project/module basis, such as mail notifications to the dev to blame, or trends of metrics like warnings or static code analysis.
The following solution is easier to extend in that direction.
Wrapper job around your various build jobs
Use the build step Trigger/call builds on other projects to build A, and archive the needed artifacts
Use the build step Trigger/call builds on other projects with a parameter tomcat to build the B tomcat version; use the Copy Artifact Plugin to copy over the jar from A
...
Use the build step Trigger/call builds on other projects with a parameter tomcat to build the E tomcat version. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify the parameter there if you need the artifact of, e.g., the B tomcat version
In this setup, monitoring SVN is an issue: if you trigger from Poll SCM, the repo will be checked out in the wrapper job's workspace, where you don't actually need it - you need it in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, then you *need* to make sure the downstream jobs are executed on the same machine (there are plugins to do so).
Or, even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: For the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; it is the engine for the upcoming Jenkins version 2.0, see here.
I would create 5 different jobs for A, B, C, D and E.
As you mentioned, A and C are standalone, so for those jobs I would just run mvn clean install/package/verify based on your need.
For B, I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally, for E, I would invoke top-level mvn targets 5 times: A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using Jenkins REST API (http[s]://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN)
Then you create an SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl, wget, Python, ...).
There are a lot of manuals on how to create SVN hooks, here's the first result on "SVN Hooks" from Google.
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like the Build Flow Plugin or the Parameterized Trigger Plugin, which can help create dependencies between builds, but I think Pipeline is the newest and considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is overkill for you, go for the alternatives.
I would recommend making sure each build does an mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
Automate parameterized builds: "build with tomcat profile and websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do: you just check This build is parameterized in your build config and add an MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet.
Extra Tip:
While hacking your way through this, consider using Job Configuration History Plugin, it can help review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a somewhat different approach to fully decouple the projects.
If you are able to set up an internal repository such as Artifactory, then in the Maven build I would treat each of the dependencies as a third-party library, exactly as is done with any other external libraries you are using.
This way, each such project can be separately built and stored in the repository, and when a dependent project is built it will just take the right version as specified in its pom file.
This way you'll have a different build process for each of the projects, and only relevant projects (relevant = changed) will be built.

Using cached artifacts in Maven to avoid redundant builds?

I have a Maven 3 multi-module project (~50 modules) which is stored in Git. Multiple developers are working on this code and building it, and we also have automated build machines that run cold builds on every push.
Most individual changes alter code in a fairly small number of modules, so it's a waste of time to rebuild the entire source tree with every change. However, I still want the final result of running the parent project build to be the same as if it had built the entire codebase. And I don't want to start manually versioning modules, as this would become a nightmare of criss-crossing version updates.
What I would like to do is add a plugin which intercepts some step in build or install, and takes a hash of the module contents (ideally pulled from Git), then looks in a shared binary repository for an artifact stored under that hash. If one is found, it uses that artifact and doesn't even execute the full build. If it finds nothing in the cache, it performs the build as normal, then stores its artifact in the cache. It would also be good to rebuild any modules which have dependencies (direct or transitive) that themselves had a cache miss.
Is there anything out there which does anything like this already? If not, what would be the cleanest way to go about adding it to Maven? It seems like plugins might be able to accomplish it, but for a couple pieces I'm having trouble finding the right way to attach to Maven. Specifically:
How can you intercept the "install" goal to check the cache, and only invoke the module's 'native' install goal on a cache miss?
How should a plugin pass state from one module to another regarding which cache misses have occurred in order to force rebuilds of dependencies with changes?
I'm also open to completely different ways to achieve the same end result (fewer redundant builds) although the more drastic the solution the less value it has for me in the near term.
I have previously implemented a more complicated solution involving artifact version manipulation and deployment to a private Maven repository. However, I think the following will fit your needs better and is somewhat simpler:
Split your build into multiple builds (e.g., a single build per module, using Maven's -pl argument).
Set up parent-child relationships between these builds. (Bamboo even has additional support for figuring out Maven dependencies, but I'm not sure how it works.)
Configure Maven's settings.xml to use a different local repository location - specify a new directory inside your build working directory. See the docs: https://maven.apache.org/guides/mini/guide-configuring-maven.html
Use the mvn install goal to ensure newly built artifacts are added to the local repository.
Use Bamboo artifact sharing to expose built artifacts from the local repository - you should probably filter this to include only the package(s) you're interested in.
Set dependent builds to download all artifacts from parent builds and put them into the proper subdirectory of the local repository (which is customized to be inside the working directory).
This should even work for feature branch builds thanks to the way Bamboo handles parent-child relations for branch builds.
Note that this implies Maven will re-download all other dependencies, so you should use a private Maven proxy repository on the local network, such as Artifactory or Nexus.
If you want, I can also describe the more complicated scenario I've already implemented that involves modifying artifact versions and deploying to private Maven repository.
Jenkins' Maven integration allows you to manage/minimize dependent builds, triggered either:
whenever a SNAPSHOT dependency is built (determined by Maven)
or after other projects are built (manually, via Jenkins jobs)
And if you do an mvn deploy to save the build into your corporate Maven repo, then you don't have to worry about dependencies when builds run on slave Jenkins machines. The result is that no module is ever built unless it or one of its dependencies has changed.
Hopefully you can apply these principles to a solution with Bamboo.

Manage groups of build configurations in Hudson

I'm using Hudson to build my application. I have several branches that come and go. Whenever there's a new branch, I have to set up the following builds for it:
a continuous build that runs after every change in SVN
a nightly build
a nightly site generation (I'm using Maven under the hood)
and a weekly integration build for some branches
Currently this means I need to copy four template configurations and set them up with the branch URL. I don't like this for two reasons:
It's redundant, so modifying something is error-prone and takes a lot of time.
I need four full checkouts of the product per branch on every build slave, plus four separate private Maven repositories, not to mention the built artifacts. This is a lot of wasted space.
What I'd like instead is to have one workspace and one configuration for all these builds. Is this possible with Hudson?
If you assume that your nightly build is the same as your continuous build, you can publish your continuous build artifacts into a folder/repository path that contains the date. Your second and subsequent builds of a day will then overwrite the previous builds of that day.
The site generation and the weekly integration build are more difficult, since you would need conditional build steps. (The idea is to run batch/shell scripts that determine whether it is time for the action (like the site build) and run it as part of that script.)
In my opinion the better solution is to write a batch/shell script (a Java program would work too) that copies your templates and replaces the SVN entry in all your new jobs. Then you have two steps for creating a new branch: first, run your script with the SVN path as the parameter, and second, tell Hudson to reload the configuration. The beauty of this solution is that you can change your templates when necessary without making changes to your scripts.
