How does m2eclipse artifact search work?

Can someone explain the search function in m2eclipse? I am not clear on where this info is coming from or how to troubleshoot it when it can't find an artifact. I have two Eclipse installations, both with the m2eclipse plugin; one works (finds some artifacts but not others) and the second one doesn't return anything.

When using Maven from multiple Eclipse installations, one way to get consistent results is to install Maven separately and point Eclipse (under Preferences->Maven->Installations) to the external Maven instance in place of the embedded installation. An additional advantage of this approach is the ability to run Maven from the command line, independent of the IDE, to get a 'pure' view of the build process. This can be valuable when troubleshooting.
Regardless, m2eclipse follows the standard Maven practice of first locating a dependency in the local repository (typically {home directory}/.m2/repository), then turning to any 'remote' repositories. The local repository location can be found in Eclipse under Preferences->Maven->User Settings. If no other configuration has been done, the 'central' Maven repository at http://repo.maven.apache.org/maven2/ is the next location searched.
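If you want to confirm where a given installation is looking, check the settings.xml file that Preferences->Maven->User Settings points to. A minimal sketch of the relevant entry (the path below is only an example):

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <!-- where Maven (and m2eclipse) looks first; defaults to ${user.home}/.m2/repository -->
    <localRepository>/home/alice/.m2/repository</localRepository>
</settings>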
Since you are getting different results from each of your Eclipse installations, I would assume that they are looking at different repositories, although I am not certain how the settings would have ended up that way. It would be interesting to know whether the artifacts you are seeking are present in the registered repository locations.
Note that this assumes you are accessing released artifacts. If you are working with snapshots, the rules change a little and the configuration (in the settings.xml file) matters.
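For snapshots, the part of settings.xml that usually matters is a repository definition with snapshots enabled. A rough sketch, using a hypothetical internal repository URL:

<profiles>
    <profile>
        <id>internal-snapshots</id>
        <repositories>
            <repository>
                <id>company-snapshots</id>
                <url>https://repo.example.com/maven-snapshots/</url>
                <releases><enabled>false</enabled></releases>
                <snapshots><enabled>true</enabled></snapshots>
            </repository>
        </repositories>
    </profile>
</profiles>
<activeProfiles>
    <activeProfile>internal-snapshots</activeProfile>
</activeProfiles>

Without a snapshot-enabled repository configured somewhere, snapshot artifacts simply won't be resolved.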

Related

How to add 70 local jars to a Maven project?

Why use Maven when you have such a quantity of local jars?
So we have a client that has a lot of private and custom jars.
For example, commons-langMyCompanyCustom.jar, which is commons-lang.jar with 10 more classes in it.
So on their environment we use 100% Maven without local dependencies.
But on our site we have the jars for development in Eclipse and have Maven build with the public ones, but we do not have permission to add their jars in our organizational repository.
So we want to use the Maven good things: compile, test, build an uber-jar, add static code analysis, generate javadocs, source jars etc., rather than doing these things one by one with the help of Eclipse.
So we have 70 jars; some of them are public. If I look at the effective POM in their environment, I find 50 of them in Maven Central, but the other 20 are what I call "custom" jars. I searched for a solution, of course, but only found this:
<dependency>
    <groupId>sample</groupId>
    <artifactId>com.sample</artifactId>
    <version>1.0</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/src/main/resources/yourJar.jar</systemPath>
</dependency>
So do I have to add this to the development Maven profile for all 20 of them?
Is there an easy way, like in Gradle, where you can add a whole folder of jars to the existing dependencies?
Also, installing them one by one in every developer's repository is not acceptable.
Please forget the system scope, as mentioned before! Too problematic...
Ideally:
Ideally, all your developers have access to a repository manager in your or their organization (if possible).
A central environment for your System Integration Testing, maybe?
Alternatively, you may have a central environment for testing where all the dependencies are provided. This approach can be used to simulate how the compilation would work as if it were in your client's environment. Plus, you only set the jars up once.
So on their environment we use 100% Maven without local dependencies.
But on our site we have the jars for development in Eclipse and have
Maven build with the public ones, but we do not have permission to add
their jars in our organizational repository.
According to what you're saying in the above-quoted excerpt, I believe you want to have the provided scope set in your build's pom.xml, assuming that in the client's setup the dependencies will be present.
Especially since you indicate that the organization doesn't give you permission to add their jars to your repository, I would use the provided scope.
As stated in the Maven docs, the definition of a provided dependency is as follows:
This is much like compile, but indicates you expect the JDK or a container to provide the dependency at runtime. For example, when building a web application for the Java Enterprise Edition, you would set the dependency on the Servlet API and related Java EE APIs to scope provided because the web container provides those classes. This scope is only available on the compilation and test classpath, and is not transitive.
So basically you assume that these dependencies will be present in your client's setup. However, this has some limitations: you can build the solution independently, but you cannot test it locally because you won't have the dependencies on your workstation.
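As a rough illustration of declaring one of the 20 "custom" jars this way (the coordinates below are invented for the example):

<dependency>
    <groupId>com.client.internal</groupId>
    <artifactId>commons-lang-mycompany-custom</artifactId>
    <version>2.6-custom</version>
    <scope>provided</scope>
</dependency>

Keep in mind the artifact still has to be resolvable from some repository in whichever environment actually compiles the code (your client's, in this case).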
If you won't even have access to the jars to configure your central environment, ask whether your client can provide a DEV/SIT environment.
None of the above? Inherit a parent pom.
To avoid constant copy-pasting across every single (related) project, Maven has tools to centralize dependency and plugin configuration; one of them is inheriting the configuration of a parent POM. As explained in the documentation, it is quite simple:
First you create a project with just a pom.xml where you define everything you wish to centralize (watch out, certain items have slight differences in their constructs);
Set the packaging of that project to pom: <packaging>pom</packaging>;
In the POMs that have to inherit these configurations, set the parent coordinates inside <parent> ... </parent> (the documentation is very clear on this);
Now, every time you update any "global" POM configuration, only the parent version has to be updated in every project. As a result, you only need to configure everything once (see the sketch below).
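A minimal sketch of such a parent/child pair, with invented coordinates:

Parent pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.build</groupId>
    <artifactId>company-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <!-- centralized dependencyManagement, pluginManagement, properties, etc. go here -->
</project>

Child pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.example.build</groupId>
        <artifactId>company-parent</artifactId>
        <version>1.0.0</version>
    </parent>
    <artifactId>my-module</artifactId>
</project>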
You can also apply this together with the above-mentioned solutions, combining them to find the approach that best fits your needs.
But there is a big Maven world out there, so I advise a good read of its docs to learn more about your possibilities. I mention these approaches because I've been in a situation similar to the one you seem to be in now.
Good luck!
Another alternative is the project RepoTree.
This one creates a Maven repository directory (not a server) from another directory which contains just the .jars. In other words, it creates the necessary .pom files and directory structure. It takes into account only the precise information from metadata contained in the archives (MANIFEST.MF, pom.xml).
Utility to recursively install artifacts from a directory into a local Maven repository. Based on Aether 1.7.
This is 5 years old, but it should still work fine.
TL;DR: MavenHoe creates a Maven repository server (not a directory) which serves artifacts from a directory, guessing what you asked for if needed. The purpose is to avoid complicated version synchronization - it simply takes whatever is closest to the requested G:A:V.
I have moved the MavenHoe project, which almost got lost with the decline of Google Code, to GitHub. Therefore I am putting it here, in the form of a full answer, for availability:
One of the options you have when dealing with conditions like these is to take whatever comes, in the form of a directory of .jars, and treat it as a repository.
Some time ago I wrote a tool for that purpose. My situation was that we were building JBoss EAP and recompiled every single dependency.
That resulted in thousands of .jars which were most often the same as their Central counterpart (plus security and bug fixes).
I needed the tests to run against these artifacts rather than the Central ones. However, the Maven coordinates were the same.
Therefore, I wrote this "Maven repository/proxy" which provided the artifact if it found something that could be it, and if not, it proxied the request to Central.
It can derive the G:A:V from three sources:
MANIFEST.MF
META-INF/.../pom.xml
Location of the file in the directory, in combination with a configuration file like this:
jboss-managed.jar org/jboss/man/ jboss-managed 2.1.0.SP1 jboss-managed-2.1.0.SP1.jar
getopt.jar gnu-getopt/ getopt 1.0.12-brew getopt-1.0.12-brew.jar
jboss-kernel.jar org/jboss/microcontainer/ jboss-kernel 2.0.6.GA jboss-kernel-2.0.6.GA.jar
jboss-logging-spi.jar org/jboss/logging/ jboss-logging-spi 2.1.0.GA jboss-logging-spi-2.1.0.GA.jar
...
The first column is the file name in the .zip; then the groupId (with either slashes or dots), artifactId, version, and artifact file name, respectively.
Your 70 files would be listed in this file.
See more information at this page:
https://rawgit.com/OndraZizka/MavenHoe/master/docs/README.html
The project is available here.
Feel free to fork and push further, if you don't find anything better.

Maven - Best Practice Production Classpath/Jar Organisation? (Non-WAR/EAR)

Simple console Maven artifacts with shared dependencies (some also provide public APIs in addition to their own classes), all living on the same production server. How best to organise/install them on the production server?
My instinct is a single folder holding all the (version-numbered) jars (i.e. a 'flattened', dependency-populated 'repository'). However:
(a) I can't see how such a folder would have its population increased, on a per-dependency basis, from the Maven deployment repository;
(b) I can't see how a jar's manifest classpath would change from the default 'lib/...,lib/...' (i.e. relative to the 'main' jar, sensible for dev/test using Eclipse) to just '...,...'.
What is the recommended best practice regarding organisation on the production server?
Googling 'maven production classpath' (amongst others) resulted in http://blog.armstrongconsulting.com/?p=232, which seems related but is light on detail.
Any pointers?
How experienced are you with Maven? If you are, the process described in the blog you mention is pretty straightforward, even without going into details.
Re (a): Dependencies are downloaded from a remote Maven repository into a local Maven repository on demand. The default is ${user.home}/.m2/repository, or whatever <localRepository> at the beginning of your settings.xml specifies. See Introduction to Repositories. So there's no need for a single 'flattened'/dependency-populated 'repository' folder.
A local repository can also be populated manually with the install:install-file goal. But this can be a cumbersome process if there are many artifacts to install.
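For reference, installing a single jar manually looks roughly like this (the path and coordinates are placeholders):

mvn install:install-file -Dfile=libs/some-library-1.0.jar -DgroupId=com.example -DartifactId=some-library -Dversion=1.0 -Dpackaging=jar

Repeat (or script) this per jar, which is exactly why it becomes cumbersome with many artifacts.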
See Maven, Available Plugins for what the mentioned plugin:goals do.

Using cached artifacts in Maven to avoid redundant builds?

I have a Maven 3 multi-module project (~50 modules) which is stored in Git. Multiple developers are working on this code and building it, and we also have automated build machines that run cold builds on every push.
Most individual changelogs alter code in a fairly small number of modules, so it's a waste of time to rebuild the entire source tree with every change. However, I still want the final result of running the parent project build to be the same as if it had built the entire codebase. And I don't want to start manually versioning modules, as this would become a nightmare of criss-crossing version updates.
What I would like to do is add a plugin which intercepts some step in build or install, and takes a hash of the module contents (ideally pulled from Git), then looks in a shared binary repository for an artifact stored under that hash. If one is found, it uses that artifact and doesn't even execute the full build. If it finds nothing in the cache it performs the build as normal, then stores its artifact in the cache. It would also be good to rebuild any modules which have dependencies (direct or transient) which themselves had a cache miss.
Is there anything out there which does anything like this already? If not, what would be the cleanest way to go about adding it to Maven? It seems like plugins might be able to accomplish it, but for a couple pieces I'm having trouble finding the right way to attach to Maven. Specifically:
How can you intercept the "install" goal to check the cache, and only invoke the module's 'native' install goal on a cache miss?
How should a plugin pass state from one module to another regarding which cache misses have occurred in order to force rebuilds of dependencies with changes?
I'm also open to completely different ways to achieve the same end result (fewer redundant builds) although the more drastic the solution the less value it has for me in the near term.
I have previously implemented a more complicated solution involving artifact version manipulation and deployment to a private Maven repository. However, I think the following will fit your needs better and is somewhat simpler:
Split your build into multiple builds (e.g. a single build per module, using Maven's -pl argument).
Set up parent-child relationships between these builds. (Bamboo even has additional support for figuring out Maven dependencies, but I'm not sure how it works.)
Configure Maven's settings.xml to use a different local repository location - specify a new directory inside your build's working directory (see the sketch after these steps). See the docs: https://maven.apache.org/guides/mini/guide-configuring-maven.html
Use the mvn install goal to ensure newly built artifacts are added to that local repository.
Use Bamboo artifact sharing to expose the built artifacts from the local repository - you should probably filter this to include only the package(s) you're interested in.
Set dependent builds to download all artifacts from their parent builds and put them into the proper subdirectory of the local repository (which is customized to be inside the working directory).
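A rough sketch of steps 1 and 3 above; the paths and module name are placeholders, not Bamboo-provided values.

In the settings.xml used by each per-module build:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <!-- local repository kept inside the build working directory so Bamboo can share/restore it -->
    <localRepository>/path/to/build-working-dir/.repository</localRepository>
</settings>

Building a single module from the multi-module root:

mvn -s /path/to/settings.xml -pl some-module -am install

(-am also builds the module's required dependencies; drop it if the parent builds already provide them.)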
This should even work for feature branch builds thanks to the way Bamboo handles parent-child relations for branch builds.
Note that this implies Maven will re-download all other dependencies, so you should use a private proxy Maven repository on the local network, such as Artifactory or Nexus.
If you want, I can also describe the more complicated scenario I've already implemented that involves modifying artifact versions and deploying to private Maven repository.
The Jenkins plugin allows you to manage/minimize dependent builds:
whenever a SNAPSHOT dependency is built (determined by Maven)
after other projects are built (manually via Jenkins jobs)
And if you do a 'mvn deploy' to save the build into your corporate Maven repo, then you don't have to worry about dependencies when builds run on slave Jenkins machines. The result is that no module is ever built unless it or one of its dependencies has changed.
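For 'mvn deploy' to know where to push the build, the POM (typically a shared parent) needs a distributionManagement section pointing at the corporate repository. A minimal sketch with placeholder URLs:

<distributionManagement>
    <repository>
        <id>corp-releases</id>
        <url>https://repo.example.com/releases/</url>
    </repository>
    <snapshotRepository>
        <id>corp-snapshots</id>
        <url>https://repo.example.com/snapshots/</url>
    </snapshotRepository>
</distributionManagement>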
Hopefully you can apply these principles to a solution with Bamboo.

Apache Maven: where do dependencies and libraries installed locally end up?

I'm building a Java project that has a dependency on a library. Running mvn.bat clean install on the library produced the target subdirectories as expected, and the dependent project built fine with mvn.bat clean install as well.
What's not expected is that when I deleted the entire directory of the library, the outer project still built fine, even though the library it depends on was gone.
How does this work?
UPDATE: It turns out Maven keeps some sort of cache in %USERPROFILE%\.m2.
You are most likely thinking of your local repository, where everything you install locally (and everything Maven downloads for you from the central repository) is put for later use.
The behavior you describe is intentional; it allows building A once and then letting B reference it whenever needed, without having to recompile A every time. This is usually very desirable, especially in teams or with large code bases.
Note that for code that is still changing you should be using -SNAPSHOT artifacts. They are treated slightly differently.
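To illustrate the difference (coordinates and versions are invented): a release version is resolved into the local repository once and then reused, while a -SNAPSHOT version is periodically re-checked against remote repositories (by default once a day, configurable via the repository's updatePolicy):

<!-- release: resolved once, then served from the local repository -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>my-library</artifactId>
    <version>1.2.0</version>
</dependency>

<!-- snapshot: Maven looks for newer builds in remote repositories -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>my-library</artifactId>
    <version>1.3.0-SNAPSHOT</version>
</dependency>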
Your dependencies are always downloaded into .m2/repository.
If you want some predictability over the libraries your team downloads, you can put a repository manager like Nexus in place: https://repository.apache.org/index.html#welcome
Instead of downloading dependencies from Maven central, your developers will download their dependencies from this repository manager.

Why does Hudson ignore my profiles.xml file?

I have a Maven 2 project, with a pom.xml and a profiles.xml file at the same level.
The project configuration is provided by Maven profile properties:
dbhost=${dbhost}
dbport=${dbport}
// etc.
Locally, each developer customizes his build in profiles.xml. It works well.
For continuous integration, a CI profiles.xml has been put on our SCM server (at the same level as the pom.xml).
The problem is that Hudson simply ignores this file during the Maven build, whereas "-P hudsonprofile" is correctly set.
If the same profile is moved directly into the pom.xml, or into the global settings.xml, the build works. So we already have a solution.
I also know that the profiles.xml file is deprecated, but I would like to understand why the behaviour differs between the Hudson build and my local build...
Note: Hudson and my local build use the same version of Maven (2.2.1).
Sounds like a classpath problem to me. Why would Hudson not notice the profiles.xml? The only reason I can think of is that Hudson uses a different classpath than you would expect.
A best practice (at least in my experience) is to try to build the project from the command line on your CI server (where Hudson runs). If that works, then Hudson should work too, unless you have configured Maven in Hudson weirdly.
Also, adjusting Maven's settings.xml is not that bad, at least if you don't expect it to change too much. Even so, it is fixed quickly.
I think the best solution is to define the profiles directly in your pom.xml file, both for CI specifically and generically for local builds. The devs can then override any profile settings in their own personal settings.xml for local builds. This has the added benefit of not having to check in a profiles.xml file that will not work for the devs, forcing them to modify a versioned file and remember not to check in their changes. It also makes your build independent of a deprecated feature of Maven; after all, I would not count on the behavior of a deprecated feature in the first place. Hopefully this is an elegant solution that builds on what you already know works (a minimal sketch follows).
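A minimal sketch of that layout in pom.xml, using the profile id from the question ('hudsonprofile') and placeholder property values:

<profiles>
    <profile>
        <id>hudsonprofile</id>
        <properties>
            <dbhost>ci-db.example.com</dbhost>
            <dbport>5432</dbport>
        </properties>
    </profile>
    <profile>
        <id>local</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <dbhost>localhost</dbhost>
            <dbport>5432</dbport>
        </properties>
    </profile>
</profiles>

CI then runs mvn -P hudsonprofile ..., while each developer can override dbhost/dbport in a profile inside their own ~/.m2/settings.xml without touching any versioned file.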
