I'm working on some code with several sub-projects, many of which depend on each other. For example, there's a utils project with various utilities, a queueing project with some code for managing queues of things, and then a mainApp project. The queueing project depends on utils, and mainApp depends on both utils and queueing. There are many more projects in addition, but hopefully you get the general idea.
We used to have a standard sbt submodule setup for this with one root build file, lots of sub-projects and the standard aggregate and dependsOn stuff (roughly the layout sketched after the list below). This worked, but was problematic:
It could be very slow to build and run tests for the project. Building everything takes about 15 minutes, and unit testing everything takes even longer.
Due to the way aggregate works, unit testing mainApp resulted in runs of all tests in utils and queueing even if they hadn't changed. True, you can do "test-quick", but that stops working when you change git branches and such, so you still end up running the whole test suite often.
While sbt does let you build just a subproject, you have to be in the root directory and remember to qualify the build. For example, you have to remember to run sbt utils/compile rather than sbt compile.
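For reference, the old setup was roughly the standard multi-project layout below, in one root build.sbt (project names taken from the description above; organization, versions and other settings omitted):

lazy val utils = project.in(file("utils"))

lazy val queueing = project.in(file("queueing"))
  .dependsOn(utils)

lazy val mainApp = project.in(file("mainApp"))
  .dependsOn(utils, queueing)

lazy val root = project.in(file("."))
  .aggregate(utils, queueing, mainApp)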
So we switched to completely separate projects in separate git repos. Each project builds and then deploys an artifact to Nexus. Thus, to build and test mainApp you build just it, and it pulls already-compiled .jar files for the other projects from the Nexus server. This makes it easier to work in just one project, but it becomes harder to make a change to, for example, utils and then use it in mainApp. Specifically, you often want to do something like add a method to utils and then immediately test that method in mainApp to see if it works before pushing a new version of utils. But now, to do that, you have to:
Add your code to utils
Bump the version number in utils
Run "sbt publish-loca"
Edit the build.sbt in mainApp and add -SNAPSHOT to the utils dependency (see the example after this list)
Run sbt update
Build and test mainApp
If that all works, you then have to push utils and mainApp in the right order to the continuous build server and be sure to wait for the utils build to complete before you push mainApp.
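The build.sbt edit mentioned above amounts to changing a line like the following in mainApp (organization name and version are made up):

libraryDependencies += "com.example" %% "utils" % "1.1-SNAPSHOT"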
Even worse, maintaining the versioning becomes really complicated. Suppose we start with all projects at version 1.0 and depending on version 1.0 of the other projects. Now suppose we find a bug in utils that affects mainApp. So we fix utils, change its version number to 1.1, update mainApp to depend on 1.1 and rebuild. Suppose the utils fix that solved mainApp's bug introduces a bug in queueing. The problem is that if you go to queueing and run its tests, they'll run against version 1.0 of utils (even if you use version ranges, since sbt only re-checks those periodically). However, when you build mainApp there will be a dependency conflict on utils which sbt will resolve. No matter how it resolves it, it will be "wrong" for at least one project. We can set the resolver to "strict", but then managing versions becomes very labor-intensive across the 10+ projects.
What I'm really looking for is a build setup that is the best of both worlds. Specifically:
I have a root dir with sub-dirs for each project
If pwd is one of those sub-dirs, all sbt commands refer to just that project. So, if pwd is queueing, then sbt test compiles and tests only queueing.
If there's a version of an artifact in Nexus, it is pulled from there, but if there's more recent code on the disk, the dependency is built locally and that is used instead.
Or something like that. Does anyone have any advice on how to set this up with sbt?
My library depends on another library; let's call it "lib". I want to test my library with multiple versions of lib, in an automated manner.
Test if my library compiles for each version of lib.
Run JUnit 5 tests for each version of lib.
Are there any existing solutions for this?
I could write a script that changes the version number of lib in my pom.xml and executes mvn compile and mvn surefire:test. I could also use profiles and automate this with a script. I was hoping there is a better way, through something like a Maven plugin.
Maven focuses on reproducible builds, which means that if you repeat the build at a later date you should get the same results, which in turn requires that the dependency versions are fixed.
This fundamental mindset is what you want to challenge. Maven won't like it even if it is for a good reason, and you will most likely need to have a separate full run for each version instead of looping inside Maven.
I was thinking that the way I would approach this is to have a bill-of-materials POM with a dependencyManagement section that lists the exact version you want to have, generated in the local filesystem before each run, and then orchestrate a run for each version you want to test.
You can also leverage your build system and have a repository which orchestrates this. GitHub Actions can do matrix builds, which might be what you need.
I'm trying to convert existing Java projects with Maven and Eclipse into Java 9+ modules. The projects have unit tests and the unit tests have test dependencies. I need the test dependencies to be available in the test code, but I don't want them exposed to the rest of the world in the published modules.
I think Testing in the Modular World describes the Maven solutions well. In summary one solution is to create one module-info.java in the main source folder and another in the testing folder. The file in the main folder has the real dependencies. The file in the test folder adds the test dependencies.
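To make that concrete, here is a minimal sketch of the two descriptors (module, package and test-library names are only illustrative, following the pattern from the article):

src/main/java/module-info.java:

module com.example.mylib {
    exports com.example.mylib;
}

src/test/java/module-info.java:

// Same module name as the main descriptor, opened for reflective test access,
// with the test-only dependencies added on top of the real ones.
open module com.example.mylib {
    exports com.example.mylib;
    requires org.junit.jupiter.api;
    requires jdk.httpserver; // a JDK module used only by a test (see below)
}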
The solution works well in Maven and I can build and run tests from the command line. However, when I import the project into Eclipse as a Maven project it balks. Eclipse complains that "build path contains duplicate entry module-info" and refuses to build the project at all.
Using the other suggested solution in the article with a module-info.test containing --add-reads has no effect and the build fails in both Maven and Eclipse as the tests can't find their dependencies.
To make matters more complex, I need to import the test dependencies from Maven, but I also need to import standard Java modules that are not used by the main code. For example, one unit test relies on the built-in web server provided by jdk.httpserver, and as it is part of the JDK, any magic done on the Maven test dependencies will miss it.
Is there a solution for this that works in Maven and Eclipse (latest versions)? It sounds like a very common problem and the module system has been around for a while by now.
Note that I really don't want to change the project settings in Eclipse. I can fiddle with plugins in the pom files, but adding a manual routine where all developers need to edit the generated/imported project settings manually is not an option.
EDIT:
There is an open Eclipse bug report for this, see Eclipse bug 536847. It seems it is not supported yet, but perhaps someone can suggest a workaround?
The Eclipse emulation of the multiple-classpaths-per-project feature in Maven has been broken for a very long time. The symptom is that you can have non-test classes using test dependencies just fine.
Essentially Eclipse just considers each project to have a single classpath instead of two parallel ones, which causes things like this to ... not do the right thing.
I would suggest splitting each of the problematic projects into two. One with the actual sources and one with the test sources (depending on the actual source). This will avoid the Eclipse bug and also allow you to use the newest version of Java for your tests while having your application built for an older version of Java.
I had code that worked correctly when executed during standard unit testing, but didn't work once it was compiled into a jar and added as a dependency of another project.
Finding the root cause and fixing it wasn't an issue, but it got me thinking about how I can test a freshly made jar artifact before deploying it anywhere, to make sure that it will work for end users and other projects. I have googled this topic for several hours but didn't find anything close to it.
Maybe I'm totally wrong and trying to achieve something weird, but I cannot figure out another way to verify compiled packages and be confident that they will work for others.
Some details about the project: it's a simple Java library with a few classes, using Gradle 5.5 as the build system and Travis CI as the CI/CD tool; for testing I'm using TestNG, but I can easily switch to JUnit if required.
If you're curious about the code that was not working when compiled into the package, here is a simplified version:
public String readResourceByURI() throws IOException, URISyntaxException
{
    // Fails with FileSystemNotFoundException when resource.txt is packaged inside a jar
    return new String(Files.readAllBytes(Paths.get(ClassLoader.getSystemClassLoader().getResource("resource.txt").toURI())));
}
This function will throw java.nio.file.FileSystemNotFoundException if packaged into a jar file. But as I said, the problem is not with the code...
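For completeness, a jar-safe variant of that method would read the resource through a stream instead of converting it to a file-system path (just a sketch assuming Java 9+ for InputStream.readAllBytes; again, the question is about testing the artifact, not about this particular fix):

public String readResourceByStream() throws IOException
{
    // Works whether resource.txt is a plain file on disk or an entry inside the jar
    try (InputStream in = ClassLoader.getSystemClassLoader().getResourceAsStream("resource.txt"))
    {
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }
}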
Ideally, I want to create a build pipeline that produces jar artifacts, which are then tested, and if the tests are successful those jars are automatically deployed to a repository (Maven and/or Bintray).
At the moment all tests are executed before jar creation, and as a result there is a chance that the compiled code inside the jar package will not work due to packaging.
So, to simplify my question: I'm looking for a Gradle configuration that can execute unit tests on a freshly made jar file.
That's what I came up with:
test {
// Add dependency on jar task, since it will be main target for testing
dependsOn jar
// Rearrange test classpath, add compiled JAR instead of main classes
classpath = project.sourceSets.test.output + configurations.testRuntimeClasspath + files(jar.archiveFile)
useTestNG()
}
Here I'm changing the default classpath for the test task by combining the folder with test classes, the runtime dependencies and the compiled JAR file. Not sure if it's the correct way to do it...
I don't think there is a good way to detect this kind of problem in a unit test. It is the kind of problem that is normally found in an integration test.
If your artifact / deliverable is a library, integration tests don't normally make a lot of sense. However, you could spend some time to create a sample or test application that uses your library, which you can then write an integration test for.
You would need to ask yourself whether there are enough potential errors of this kind to warrant doing that:
I don't imagine that you will make this particular mistake again soon.
Other problems of this nature might include assumptions in your library about the OS platform or (occasionally) Java versions ... which can only really be tested by running an application on the different platforms.
Maybe the pragmatic answer is to recognize that you cannot (or cannot afford to) test everything.
Having said that, one possible approach might be to choose a free-standing test runner (packaged as a non-GUI Java application). Then get Gradle to run the test runner as a scripted task with the JARs for your library and the unit tests on the classpath.
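As a sketch of that idea (assuming the tests were moved to JUnit, as the question allows, and that junit-platform-console-standalone is declared as a testRuntimeOnly dependency), a Gradle task along these lines would run the console launcher against the freshly built jar instead of the compiled main classes:

task testAgainstJar(type: JavaExec) {
    dependsOn jar
    // test classes + external test dependencies + the built jar (instead of the main classes)
    classpath = sourceSets.test.output + configurations.testRuntimeClasspath + files(jar.archiveFile)
    main = 'org.junit.platform.console.ConsoleLauncher'
    args '--scan-class-path'
}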
In Gradle you can try to execute a scripted task that runs code from your jar. It's a bit convoluted, but on a simple POC it works.
In the main Gradle project, create a subproject 'child'.
Add information about it in settings.gradle:
include 'child'
In build.gradle add this:
task externalTest(type: Copy) {
    // Copy the tests into the child project so they run against the parent jar
    from 'src/test'
    into 'child/src/test'
}
externalTest.dependsOn(jar)
externalTest.finalizedBy(':child:build')
build.dependsOn(externalTest)
And in child/build.gradle add the parent jar as a dependency:
dependencies {
    compile files('../build/libs/parent.jar')
}
Now when the main project is built, the child project will be built after jar creation.
We are working on a legacy project and the first task is to set up DevOps for it.
The important thing is that we are very new to this area.
We are planning to use Jenkins and SonarQube for this purpose initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules).
We had to follow this build structure as there are no plans for re-organising it as a single multi-module Maven project.
Currently the builds and dependencies are managed manually.
E.g. the project is subdivided into 5 multi-module Maven projects, say A, B, C, D and E:
1. A and C are completely independent and can be easily built
2. B depends on the artifact generated by A (jar) and has multiple maven profiles (say tomcat and websphere, it is a webservice module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple maven profiles (say tomcat and websphere, it is a web project)
Based on the Jenkins documentation for this scenario, we are thinking about parameterized builds using the "Parameterized Build" plugin and the "Extended Choice Parameter" plugin. With the help of these plugins we are able to parameterize the profile name, but before each build the builder waits for the profile parameter.
So we are still searching for a good solution to:
1. Keep the dependency between projects and build all affected projects if there is any change in SCM (SVN). For that we used "Build whenever a SNAPSHOT dependency is built" and the SCM polling option. Unfortunately this does not seem to work in our case (we set an interval of 5 minutes for SCM polling, but no build happens based on test commits).
2. Even though we are able to parameterize the profile, this is still a manual step (is there an option to automate this part too, i.e. builds with the tomcat profile and the websphere profile should happen sequentially?).
We are struggling to find a solution to cater all these core requirements. Any pointer would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you didn't get any response yet, I'll try to give some general advice.
There are usually multiple ways to reach some aim in Jenkins, each has its pros and cons. Choosing the most fitting solution depends on the specific requirements and your environment/setup.
However, you first need something that just works; then you can refine it.
You get a quick result with the following:
Everything in one job
Configure your Subversion repo (multiple are possible) to be checked out into your workspace
Enable the Poll SCM trigger
Build your modules/projects via Execute shell build steps. (Failed builds can be propagated to the job result by using exit 1 in an Execute shell build step.)
However, keep in mind that this will prevent advanced functionality on a per-project/module basis, such as mail notifications to the developer to blame, or trends of metrics like warnings or static code analysis results.
The following solution is easier to extend in that direction.
Wrapper job around your various build jobs
Use Build step Trigger/call builds on other projects to build A, archive needed artifacts
Use Build step Trigger/call builds on other projects with some parameter tomcat to build the B tomcat version, use the Copy Artifact Plugin to copy over the jar from A
...
Use Build step Trigger/call builds on other projects with some parameter tomcat to build the E tomcat version. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify a parameter there if you need the artifact of, e.g., the B tomcat version
In this setup, monitoring the SVN is an issue: if you trigger from SCM polling, it will check the code out into your wrapper workspace, while you don't actually need it checked out there but in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, then you need to make sure the downstream jobs are executed on the same machine (there are plugins to do so).
Or even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: For the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; this is the engine for the upcoming Jenkins version 2.0, see here.
I would create 5 different jobs for A, B, C, D and E.
As you mentioned, A and C would be standalone jobs, so I would just do mvn clean install/package/verify based on your need.
For B, I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally, for E, I would invoke top-level mvn targets 5 times: A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using Jenkins REST API (http[s]://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN)
Then you create an SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl, wget, Python, ...).
There are a lot of manuals on how to create SVN hooks, here's the first result on "SVN Hooks" from Google.
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like Build Flow Plugin or Parameterized Trigger Plugin which can help create dependencies between builds, but I think Pipeline is the newest and considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is an overkill for you, go for the alternatives.
I would recommend making sure each build does a mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
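To tie the above together, here is a rough sketch (job names A to E taken from the question, everything else illustrative) of a scripted Pipeline that enforces the build order:

node {
    // A and C are independent, so they can run in parallel
    parallel(
        'A': { build job: 'A' },
        'C': { build job: 'C' }
    )
    build job: 'B'   // needs A's artifact
    build job: 'D'   // needs C's artifact
    build job: 'E'   // needs A, B and D
}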
Automate parameterized builds: "build with tomcat profile and websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do, you just check This build is parameterized in your build config and add a MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet.
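For example, with the Pipeline approach from the earlier bullet, triggering the same parameterized job once per profile (job and parameter names are just illustrations) boils down to two lines:

build job: 'B', parameters: [string(name: 'MVN_PROFILE', value: 'tomcat')]
build job: 'B', parameters: [string(name: 'MVN_PROFILE', value: 'websphere')]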
Extra Tip:
While hacking your way through this, consider using Job Configuration History Plugin, it can help review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a somewhat different approach to fully decouple the projects.
If you are able to set up an internal artifact repository (e.g. Artifactory), then in the Maven build I would treat each of the dependencies as a third-party library, exactly as is done with any other external libraries you are using.
This way, each such project can be separately built and stored in the repository, and when a dependent project is built it will just take the right version as specified in its pom file.
This way you'll have a separate build process for each of the projects, and only relevant projects (relevant = changed) will be built.
I have a Maven 3 multi-module project (~50 modules) which is stored in Git. Multiple developers are working on this code and building it, and we also have automated build machines that run cold builds on every push.
Most individual changesets alter code in a fairly small number of modules, so it's a waste of time to rebuild the entire source tree with every change. However, I still want the final result of running the parent project build to be the same as if it had built the entire codebase. And I don't want to start manually versioning modules, as this would become a nightmare of criss-crossing version updates.
What I would like to do is add a plugin which intercepts some step in build or install, and takes a hash of the module contents (ideally pulled from Git), then looks in a shared binary repository for an artifact stored under that hash. If one is found, it uses that artifact and doesn't even execute the full build. If it finds nothing in the cache it performs the build as normal, then stores its artifact in the cache. It would also be good to rebuild any modules which have dependencies (direct or transient) which themselves had a cache miss.
Is there anything out there which does anything like this already? If not, what would be the cleanest way to go about adding it to Maven? It seems like plugins might be able to accomplish it, but for a couple pieces I'm having trouble finding the right way to attach to Maven. Specifically:
How can you intercept the "install" goal to check the cache, and only invoke the module's 'native' install goal on a cache miss?
How should a plugin pass state from one module to another regarding which cache misses have occurred in order to force rebuilds of dependencies with changes?
I'm also open to completely different ways to achieve the same end result (fewer redundant builds) although the more drastic the solution the less value it has for me in the near term.
I have previously implemented a more complicated solution involving artifact version manipulation and deployment to a private Maven repository. However, I think this will fit your needs better and is somewhat simpler:
Split your build into multiple builds (e.g., with a single build per module using Maven's -pl argument, as in mvn -pl some-module clean install).
Setup parent-child relationships between these builds. (Bamboo even has additional support for figuring out Maven dependencies, but I'm not sure how it works.)
Configure Maven settings.xml to use a different local repository location - specify a new directory inside your build working directory. See docs: https://maven.apache.org/guides/mini/guide-configuring-maven.html
Use mvn install goal to ensure newly built artifacts are added to local repository
Use Bamboo artifact sharing to expose built artifacts from local repository - you should probably filter this to include only the package(s) you're interested in
Set dependent builds to download all artifacts from parent builds and put them into proper subdirectory of local repository (which is customized to be in working directory)
This should even work for feature branch builds thanks to the way Bamboo handles parent-child relations for branch builds.
Note that this implies that Maven will re-download all other dependencies, so you should use a private Maven proxy repository on the local network, such as Artifactory or Nexus.
If you want, I can also describe the more complicated scenario I've already implemented that involves modifying artifact versions and deploying to private Maven repository.
The Jenkins plugin allows you to manage/minimize dependent builds
whenever a SNAPSHOT dependency is built (determined by Maven)
after other projects are built (manually via Jenkins jobs)
And if you do a 'mvn deploy' to save the build into your corporate Maven repo then you don't have to worry about dependencies when builds run on slave Jenkins machines. The result is that no module is ever built unless it or one of its dependencies has changed.
Hopefully you can apply these principles to a solution with Bamboo.