Maven plugins; using output of one as input for another - java

My problem is that I have written a Maven plugin to deploy the artifact to a user-specified location. I'm now trying to write another Maven plugin that takes this deployed artifact, changes some things and zips it up again.
I want to write the second plugin so that it uses the first plugin's information about where the artifact was deployed.
I don't know how to access this information from the first plugin.

I would agree with @Barend that if you can afford to make the changes before deploying, that would be the best strategy.
If you cannot do that, you can follow the strategy of the Maven Release Plugin. It runs in two phases, where the second run needs the output of the first. It manages this by keeping a temporary properties file in the project directory which carries information such as the tag name, the SNAPSHOT version and so on.
You could use the same approach with your plugins. Just remember that they become somewhat transactional: the second goal expects the first to have run before it can do its work.
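For example, the first plugin could write a small properties file under target/ that the second plugin reads back. The file name and keys below are hypothetical, just to sketch the handoff:

# target/deploy-info.properties, written by the first plugin (name and keys are made up)
deploy.location=/opt/releases/my-app
deploy.artifact=my-app-1.0.jar

The second plugin would load this file with java.util.Properties and fail fast with a clear error message if it is missing, i.e. if the first goal has not run yet.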

It seems to me that the easiest workaround is to reverse the order in which the plugins run.
Have Plugin B run first, using the known location under target/ to modify the artifact and then run Plugin A, deploying the modified artifact to the configured location.
If that's not an option, I suggest you simply duplicate the configuration value (so that both plugins are told about the new location in their <configuration> elements). This keeps both plugins independent, which is what Maven assumes them to be.
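If you want to avoid literally repeating the path, you can define it once as a Maven property and reference it from both <configuration> elements; the plugins themselves stay independent. The plugin coordinates and parameter names below are placeholders for your own plugins:

<properties>
  <deploy.location>/opt/releases/my-app</deploy.location>
</properties>

<plugin>
  <groupId>com.example</groupId>
  <artifactId>deploy-plugin</artifactId>
  <configuration>
    <targetLocation>${deploy.location}</targetLocation>
  </configuration>
</plugin>
<plugin>
  <groupId>com.example</groupId>
  <artifactId>rezip-plugin</artifactId>
  <configuration>
    <sourceLocation>${deploy.location}</sourceLocation>
  </configuration>
</plugin>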
A last option is to make Plugin B parse the entire POM and extract the information from Plugin A's <configuration> element, but I really can't recommend this. If you go this way, the two plugins are so closely intertwined that they're really just one plugin. This is poor design, violates the principle of least surprise and might cause nasty configuration problems down the line.

Related

How to add 70 local jars to a Maven project?

Why use Maven when you have such a quantity of local jars?
We have a client that has a lot of private and custom jars.
For example, commons-langMyCompanyCustom.jar, which is commons-lang.jar with 10 more classes in it.
So on their environment we use 100% Maven without local dependencies.
But on our site we have the jars for development in Eclipse and have Maven build with the public ones, but we do not have permission to add their jars in our organizational repository.
So we want to use the good Maven things: compile, test, build an uber-jar, add static code analysis, generate javadoc and sources jars, etc., rather than doing these things one by one with the help of Eclipse.
So we have 70 jars; some of them are public. If I look at the effective POM in their environment, I can find 50 of them in Maven Central, but the other 20 are what I call "custom" jars. Of course I searched for a solution, but all I found was this:
<dependency>
  <groupId>sample</groupId>
  <artifactId>com.sample</artifactId>
  <version>1.0</version>
  <scope>system</scope>
  <systemPath>${project.basedir}/src/main/resources/yourJar.jar</systemPath>
</dependency>
So do I have to add this to the development Maven profile for all 20 of them?
Is there an easy way, as in Gradle, where you can add a whole folder of jars to the existing dependencies?
Also installing one by one in every developer's repo is not acceptable.
Please forget the system scope, as mentioned before! It's too problematic...
Ideally:
Ideally, all your developers have access to a repository manager in your or their organization (if possible).
A central environment for your System Integration Testing, maybe?
Alternatively, you may have a central environment for testing where all the dependencies are provided. This approach can be used to simulate how a compilation would work as if it were in your client's environment. Plus, you only set up the jars once.
So on their environment we use 100% Maven without local dependencies. But on our site we have the jars for development in Eclipse and have Maven build with the public ones, but we do not have permission to add their jars in our organizational repository.
According to what you're saying in the above-quoted excerpt, I believe you want your build's pom.xml to assume that the dependencies will be present in the client setup.
Especially since you indicate that the organization doesn't give you permission to add their jars to your repository, I would use the provided scope.
As stated in the Maven docs, the definition of a provided dependency is as follows:
This is much like compile, but indicates you expect the JDK or a container to provide the dependency at runtime. For example, when building a web application for the Java Enterprise Edition, you would set the dependency on the Servlet API and related Java EE APIs to scope provided because the web container provides those classes. This scope is only available on the compilation and test classpath, and is not transitive.
So basically you assume that these dependencies will be present in your client's setup. However, this has some limitations: you can build solutions independently, but you cannot test them locally because you won't have the dependencies on your workstation.
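Applied to one of the custom jars, this would look like the snippet below; the coordinates are illustrative, since the custom jars will need to be given coordinates in whichever repository ends up hosting them:

<dependency>
  <groupId>com.client</groupId>
  <artifactId>commons-langMyCompanyCustom</artifactId>
  <version>1.0</version>
  <scope>provided</scope>
</dependency>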
If you won't even have access to the jars to configure your central environment, ask whether your client can provide a DEV/SIT environment.
None of the above? Inherit a parent pom.
To avoid the constant copy-paste process for every single (related) project, Maven has the tools to centralize dependency and plugin configurations, one of which is inheriting the configuration of a parent pom. As the documentation explains, it is quite simple:
First you create a project with just a pom.xml where you define everything you wish to centralize (watch out, certain items have slight differences in their constructs);
Set the packaging tag to pom: <packaging>pom</packaging>;
In the POMs that have to inherit these configurations, set the parent configuration tags in <parent> ... </parent> (the documentation is very clear on this; see the sketch below);
Now every time you update any "global" pom configuration, only the parent version has to be updated in every project. As a result, you only need to configure everything once.
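A minimal sketch of the two pieces (the coordinates are illustrative). The parent project's pom.xml:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>company-parent</artifactId>
  <version>1.0</version>
  <packaging>pom</packaging>
  <!-- centralized <properties>, <dependencyManagement> and <pluginManagement> go here -->
</project>

And in each inheriting project's pom.xml:

<parent>
  <groupId>com.example</groupId>
  <artifactId>company-parent</artifactId>
  <version>1.0</version>
</parent>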
You can also apply this together with the above-mentioned solutions, combining them into whatever fits your needs best.
But there is a big Maven world out there, so I advise a good read of its docs to learn what else is possible. I remember these situations because I've been in a situation similar to yours.
Good luck!
Another alternative is the project RepoTree.
This one creates a Maven repository directory (not a server) from another directory which contains just the .jars. In other words, it creates the necessary .pom files and directory structure. It takes into account only the precise information from metadata contained in the archives (MANIFEST.MF, pom.xml).
Utility to recursively install artifacts from a directory into a local Maven repository. Based on Aether 1.7.
This is 5 years old, but it should still work fine.
TL;DR: MavenHoe creates a Maven repository server (not a directory) which serves the artifacts from a directory, guessing what you asked for if needed. The purpose is to avoid complicated version synchronization: it simply takes whatever is closest to the requested G:A:V.
I have moved the MavenHoe project, which almost got lost in the decline of Google Code, to GitHub. I put it here for availability, in the form of a full answer:
One of the options you have when dealing with conditions like these is to take whatever comes, in the form of a directory of .jars, and treat it as a repository.
Some time ago I wrote a tool for that purpose. My situation was that we were building JBoss EAP and recompiled every single dependency.
That resulted in thousands of .jars which were most often the same as their Maven Central counterparts (plus security and bug fixes).
I needed the tests to run against these artifacts rather than the Central ones, while the Maven coordinates stayed the same.
Therefore, I wrote this "Maven repository/proxy", which serves the artifact if it finds something that could be it, and otherwise proxies the request to Central.
It can derive the G:A:V from three sources:
MANIFEST.MF
META-INF/.../pom.xml
Location of the file in the directory, in combination with a configuration file like this:
jboss-managed.jar org/jboss/man/ jboss-managed 2.1.0.SP1 jboss-managed-2.1.0.SP1.jar
getopt.jar gnu-getopt/ getopt 1.0.12-brew getopt-1.0.12-brew.jar
jboss-kernel.jar org/jboss/microcontainer/ jboss-kernel 2.0.6.GA jboss-kernel-2.0.6.GA.jar
jboss-logging-spi.jar org/jboss/logging/ jboss-logging-spi 2.1.0.GA jboss-logging-spi-2.1.0.GA.jar
...
The first column is the file name in the .zip; then come the groupId (with either slashes or dots), artifactId, version and artifact file name, respectively.
Your 70 files would be listed in this file.
See more information at this page:
https://rawgit.com/OndraZizka/MavenHoe/master/docs/README.html
The project is available here.
Feel free to fork and push it further if you don't find anything better.

Jenkins and Maven profiles

We are working on a legacy project, and the first task is to set up DevOps for it.
The important thing is that we are very new to this area.
We are planning to use Jenkins and SonarQube for this purpose initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules)
We had to follow this build structure as there are no plans to reorganize it as a single multi-module Maven project
Currently the builds and dependencies are managed manually
E.g., the project is subdivided into 5 multi-module Maven projects, say A, B, C, D and E:
1. A and C are completely independent and can be easily built
2. B depends on the artifact generated by A (a jar) and has multiple Maven profiles (say tomcat and websphere; it is a web service module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple Maven profiles (say tomcat and websphere; it is a web project)
Based on the Jenkins documentation for handling this scenario, we are thinking about parameterized builds using the "parameterized build plugin" and the "extended choice parameter plugin". With the help of these plugins we are able to parameterize the profile name, but before each build the builder waits for the profile parameter.
So we are still searching for a good solution to:
1. Keep the dependency between projects and build all the dependent projects if there is any change in SCM (SVN). For that we used "Build whenever a SNAPSHOT dependency is built" and the SCM polling option. Unfortunately this option seems not to work in our case (we set an interval of 5 minutes for SCM polling, but no build happens on test commits).
2. Even though we are able to parameterize the profile, this remains a manual step. Is there an option to automate this part too, i.e. builds with the tomcat profile and the websphere profile should happen sequentially?
We are struggling to find a solution to cater all these core requirements. Any pointer would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you didn't get any response yet, I'll try to give some general advice.
There are usually multiple ways to reach some aim in Jenkins, each has its pros and cons. Choosing the most fitting solution depends on the specific requirements and your environment/setup.
However, you first need something that just works; then you can refine it.
You get a quick result with the following:
Everything in one job
Configure your subversion repo (Multiple are possible) to be checked out into your workspace
Enable Poll SCM trigger
Build your modules/projects via Execute shell build steps. (Failed builds can be handed to the job result by using exit 1 in an Execute shell build step.)
However, keep in mind that this will prevent advanced functionality on a per-project/module basis, such as mail notifications to the developer to blame, or trends of metrics like warnings or static code analysis results.
The following solution is easier to extend in that direction.
Wrapper job around your various build jobs
Use the build step Trigger/call builds on other projects to build A, and archive the needed artifacts
Use the build step Trigger/call builds on other projects with some parameter tomcat to build the tomcat version of B; use the Copy Artifact Plugin to copy over the jar from A
...
Use the build step Trigger/call builds on other projects with some parameter tomcat to build the tomcat version of E. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify the parameter there if you need the artifact of, e.g., the tomcat version of B
In this setup, monitoring the SVN is an issue: if you trigger from SCM polling, the checkout happens in your wrapper job's workspace, while you don't actually need it checked out there but in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, you then *need* to make sure the downstream jobs are executed on the same machine (there are plugins to do so)
Or, even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: For the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; this is the engine for the upcoming Jenkins version 2.0, see here.
I would create 5 different jobs for A, B, C, D and E.
As you mentioned, A and C would be standalone jobs, so I would just run mvn clean install/package/verify based on your need.
For B I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally, for E, I would use Invoke top-level Maven targets five times: for A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has a built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using Jenkins REST API (http[s]://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN)
Then you create an SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl, wget, Python, ...).
There are a lot of manuals on how to create SVN hooks; here's the first Google result for "SVN hooks".
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like the Build Flow Plugin or the Parameterized Trigger Plugin, which can help create dependencies between builds, but I think Pipeline is the newest and is considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is an overkill for you, go for the alternatives.
I would recommend making sure each build does an mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
Automate parameterized builds: "builds with the tomcat profile and the websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do: you just check This build is parameterized in your build config and add an MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet.
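For reference, the two profiles might be declared in each project's pom.xml along these lines (the profile ids are taken from the question, the contents are illustrative); each triggered build would then run something like mvn clean install -P ${MVN_PROFILE}:

<profiles>
  <profile>
    <id>tomcat</id>
    <properties>
      <!-- container-specific settings go here -->
      <target.container>tomcat</target.container>
    </properties>
  </profile>
  <profile>
    <id>websphere</id>
    <properties>
      <target.container>websphere</target.container>
    </properties>
  </profile>
</profiles>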
Extra Tip:
While hacking your way through this, consider using Job Configuration History Plugin, it can help review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a somewhat different approach to fully decouple the projects.
If you are able to set up an internal artifact repository, I would treat each of the dependencies in the Maven build as a third-party library, exactly as is done with any other external library you use.
This way, each such project can be separately built and stored in the repository, and when a dependent project is built it will just take the right version as specified in its pom file.
This way you'll have a different build process for each of the projects, and only the relevant (relevant = changed) projects will be built.
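In practice this means each project deploys to the internal repository and declares the others as plain dependencies. A minimal sketch, with the repository id and URL as placeholders for your repository manager:

<distributionManagement>
  <repository>
    <id>internal-releases</id>
    <url>https://repo.example.com/releases</url>
  </repository>
</distributionManagement>

and in a dependent project, e.g. B depending on A:

<dependency>
  <groupId>com.example</groupId>
  <artifactId>project-a</artifactId>
  <version>1.2.3</version>
</dependency>

Each project is then built and published with mvn deploy, independently of the rest.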

Including .jar files in Github for consistency

I am new to using GitHub and have been trying to figure out this question by looking at other people's repositories, but I cannot figure it out. When people fork/clone repositories on GitHub to their local computers to develop on the project, is it expected that the cloned project is complete (i.e. it has all of the files it needs to run properly)? For example, if I were to use a third-party library in the form of a .jar file, should I include that .jar file in the repository so that my code is ready to run when someone clones it, or is it better to just make a note that you are using such-and-such third-party libraries, so the user will need to download those libraries elsewhere before they begin work? I am just trying to figure out the best practices for my code commits.
Thanks!
Basically it is as Chris said.
You should use a build system that has a package manager. This way you specify which dependencies you need, and it downloads them automatically. Personally, I have worked with Maven and Ant, so here is my experience:
Apache Maven:
A first word about Maven: it is not a package manager, it is a build system. It just includes a package manager, because for Java folks downloading the dependencies is part of the build process.
Maven comes with a nice set of defaults. This means you just use the archetype plugin to create a project ("mvn archetype:create" on the CLI). Think of an archetype as a template for your project. You can choose whatever archetype suits your needs best. In case you use some framework, there is probably an archetype for it; otherwise the simple-project archetype will be your choice. Afterwards your code goes to src/main/java, your test cases go to src/test/java and "mvn install" builds everything. Dependencies can be added to the pom in Maven's dependency format. http://search.maven.org/ is the place to look for dependencies. If you find one there, you can simply copy the XML snippet into your pom.xml (which has been created by Maven's archetype system for you).
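For illustration, a minimal pom.xml with a single dependency copied from search.maven.org might look like this (the coordinates are just an example):

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>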
In my experience, Maven is the fastest way to get a project with dependencies and test execution set up. I have also never seen a Maven build that worked on my machine fail somewhere else (except on computers with years-old Java versions). The charm is that Maven's default lifecycle (or build cycle) covers all your needs, and there are a lot of plugins for almost everything. However, you have a big problem if you want to do something that is not covered by Maven's lifecycle; I have only ever encountered that in mixed-language projects. As soon as you need anything but Java, you're screwed.
Apache Ivy:
I've only ever used it together with Apache Ant. Ivy is a package manager; Ant provides the build system. Ivy is integrated into Ant as a plugin. While Maven usually works out of the box, Ant requires you to write your build file manually. This allows for greater flexibility than Maven, but comes at the price of yet another file to write and maintain. Basically, Ant files are as complicated as any source code, which means you should comment and document them; otherwise you will not be able to maintain your build process later on.
Ivy itself is as easy as Maven's dependency system. You have an XML file which defines your dependencies. As with Maven, you can find the appropriate XML snippets on Maven Central (http://search.maven.org/).
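A minimal ivy.xml sketch (the organisation and module names are illustrative):

<ivy-module version="2.0">
  <info organisation="com.example" module="my-app"/>
  <dependencies>
    <dependency org="junit" name="junit" rev="4.12"/>
  </dependencies>
</ivy-module>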
In summary, I recommend Maven if you have a straightforward Java project. Ant is for cases where you need to do something special in your build.

Using cached artifacts in Maven to avoid redundant builds?

I have a Maven 3 multi-module project (~50 modules) which is stored in Git. Multiple developers are working on this code and building it, and we also have automated build machines that run cold builds on every push.
Most individual changelogs alter code in a fairly small number of modules, so it's a waste of time to rebuild the entire source tree with every change. However, I still want the final result of running the parent project build to be the same as if it had built the entire codebase. And I don't want to start manually versioning modules, as this would become a nightmare of criss-crossing version updates.
What I would like to do is add a plugin which intercepts some step in build or install, and takes a hash of the module contents (ideally pulled from Git), then looks in a shared binary repository for an artifact stored under that hash. If one is found, it uses that artifact and doesn't even execute the full build. If it finds nothing in the cache it performs the build as normal, then stores its artifact in the cache. It would also be good to rebuild any modules which have dependencies (direct or transient) which themselves had a cache miss.
Is there anything out there which does anything like this already? If not, what would be the cleanest way to go about adding it to Maven? It seems like plugins might be able to accomplish it, but for a couple of pieces I'm having trouble finding the right way to attach to Maven. Specifically:
How can you intercept the "install" goal to check the cache, and only invoke the module's 'native' install goal on a cache miss?
How should a plugin pass state from one module to another regarding which cache misses have occurred in order to force rebuilds of dependencies with changes?
I'm also open to completely different ways to achieve the same end result (fewer redundant builds) although the more drastic the solution the less value it has for me in the near term.
I have previously implemented a more complicated solution involving artifact version manipulation and deployment to a private Maven repository. However, I think the following will fit your needs better and is somewhat simpler:
Split your build into multiple builds (e.g., a single build per module, using Maven's -pl argument).
Set up parent-child relationships between these builds. (Bamboo even has additional support for figuring out Maven dependencies, but I'm not sure how it works.)
Configure Maven's settings.xml to use a different local repository location: specify a new directory inside your build's working directory. See the docs: https://maven.apache.org/guides/mini/guide-configuring-maven.html
Use the mvn install goal to ensure newly built artifacts are added to this local repository.
Use Bamboo artifact sharing to expose the built artifacts from the local repository; you should probably filter this to include only the package(s) you're interested in.
Set dependent builds to download all artifacts from their parent builds and put them into the proper subdirectory of the local repository (which is customized to be in the working directory).
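For the settings.xml step, a minimal sketch (the path is illustrative; point it inside the build's working directory):

<settings>
  <!-- local repository inside the build working directory; adjust the path -->
  <localRepository>/path/to/build/working/dir/.m2/repository</localRepository>
</settings>

Alternatively, the same can be done per invocation with mvn -Dmaven.repo.local=/path/to/repo install.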
This should even work for feature branch builds thanks to the way Bamboo handles parent-child relations for branch builds.
Note that this implies Maven will re-download all other dependencies, so you should use a private Maven proxy repository on the local network, such as Artifactory or Nexus.
If you want, I can also describe the more complicated scenario I've already implemented that involves modifying artifact versions and deploying to private Maven repository.
The Jenkins Maven plugin allows you to manage/minimize dependent builds, triggered either:
whenever a SNAPSHOT dependency is built (determined by Maven)
after other projects are built (manually via Jenkins jobs)
And if you do an mvn deploy to save the build into your corporate Maven repo, then you don't have to worry about dependencies when builds run on Jenkins slave machines. The result is that no module is ever built unless it, or one of its dependencies, has changed.
Hopefully you can apply these principles to a solution with Bamboo.

Maven requires manual dependency update?

I'm new to Maven, using the m2e plugin for Eclipse. I'm still wrapping my head around Maven, but it seems like whenever I need to import a new library, like java.util.List, I now have to manually go through the hassle of finding the right repository for the jar and adding it to the dependencies in the POM. This seems like a major hassle, especially since some jars can't be found in public repositories, so they have to be uploaded into the local repository.
Am I missing something about Maven in Eclipse? Is there a way to automatically update the POM when Eclipse automatically imports a new library?
I'm trying to understand how using Maven saves time/effort...
You picked a bad example: classes like java.util.List are part of the Java standard runtime and are there regardless of Maven configuration.
With that in mind, if you wanted to add something external, say Log4j, then you would need to add a project dependency on Log4j. Maven would then take the dependency information and create a "signature" to search for, first in the local cache, and then in the external repositories.
Such a signature might look like
groupId:artifactId:version
or perhaps
groupId:artifactId:version:classifier
This identifies a Maven "module" which will then be downloaded and configured into your system. Once in place, it adds all of the classes within the module to your configured project.
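For example, the Log4j dependency mentioned above would be declared in the pom.xml like this (the version is just an example):

<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>1.2.17</version>
</dependency>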
Maven principally saves time in downloading and organizing JAR files in your build. By defining a "standard" project layout and a "standard" build order, Maven eliminates a lot of the guesswork in the "why isn't my project building" sweepstakes. Also, you can use neat commands like "mvn dependency:tree" to print out a list of all the JARs your project depends on, recursively.
Warning note: If you are using the M2E plugin and Eclipse, you may also run into problems with the plugin itself. The 1.0 version (hosted at eclipse.org) was much less friendly than the previous 0.12 version (hosted at Sonatype). You can get around this to some extent by downloading and installing the "standalone" version of Maven from apache (maven.apache.org) and running Maven from the command line. This is actually much more stable than trying to run Maven inside Eclipse (in my personal experience) and may save you some pain as you try to learn about Maven.
