I have ended up in a situation where I need to create a Maven plugin which, as part of its job, needs to inspect a number of dependencies and find certain XML files.
(If anyone has a better way of reading files inside an artifact jar, please say so as that will very much also be considered an accepted answer)
I need to inspect a number of known dependencies that I have referenced as org.apache.maven.artifact.Artifact references and find all XML files within them. The only way I know of is to unpack the artifact and search through the file system. I basically need to do exactly what the Maven Dependency Plugin's "unpack" goal does. So how do I use the Maven Dependency Plugin from my own plugin? Do I simply use it as a normal dependency, or is there a more "Maven" way of doing it?
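On the parenthetical question about reading files inside an artifact jar: you may not need to unpack at all. The JDK's java.util.jar.JarFile can enumerate and read entries directly from the jar file on disk, which is exactly what Artifact.getFile() hands you. A minimal sketch (the class and method names are mine):

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarXmlScanner {

    /** Returns the names of all .xml entries inside the given jar file. */
    public static List<String> findXmlEntries(File jar) throws IOException {
        List<String> result = new ArrayList<>();
        try (JarFile jarFile = new JarFile(jar)) {
            Enumeration<JarEntry> entries = jarFile.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (!entry.isDirectory() && entry.getName().endsWith(".xml")) {
                    result.add(entry.getName());
                }
            }
        }
        return result;
    }
}
```

From a Mojo you would call it as `findXmlEntries(artifact.getFile())`, and `JarFile.getInputStream(entry)` then lets you read any matching entry without ever touching the file system.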
EDIT:
I encountered another thing which is close enough to this one that I will update the question instead of posting a new one.
If I need to use an outside dependency, such as Jackson, how do I include it in the plugin? It feels wrong to create a fat jar with the dependencies in it. Is there another trick I am missing?
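For what it's worth, a Maven plugin does not need a fat jar: dependencies declared in the plugin's own pom.xml are resolved at runtime and loaded into the plugin's isolated classloader, just like any other dependency. A sketch (coordinates of the plugin and the Jackson version are illustrative):

```xml
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>my-maven-plugin</artifactId>
    <version>1.0.0</version>
    <packaging>maven-plugin</packaging>

    <dependencies>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.1.0</version>
        </dependency>
    </dependencies>
</project>
```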
Related
I want to find all available versions of a dependency in my project using a Mojo. I need this information to create a complete dependency tree where not only the transitive dependencies are included, but also all available versions and then their respective dependencies.
The problem is that I can't simply download each individual metadata file since that would make the plugin too slow. What other ways are there to find all other versions through a Mojo and the Maven plugin API, and how do I achieve it?
Example of tree I'm trying to generate.
If I only look at the components specified in the pom, I will miss out on the dependencies a1.0 -> c1.1 and b1.0 -> d1.1.
To clarify what information I am missing: the following graph shows what would appear if I were to simply use dependency:tree.
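For reference, newer Maven versions (3.1+) expose the Aether resolver API to Mojos, and a single version-range request returns every version known to the configured repositories in one call, without downloading and parsing each metadata file yourself. A sketch of what that could look like inside a Mojo (the field wiring and coordinates are illustrative, and it requires the org.eclipse.aether / maven-resolver dependencies):

```
// Illustrative Mojo fragment, not a complete plugin.
@Component
private RepositorySystem repoSystem;            // org.eclipse.aether.RepositorySystem

@Parameter(defaultValue = "${repositorySystemSession}", readonly = true)
private RepositorySystemSession repoSession;

@Parameter(defaultValue = "${project.remoteProjectRepositories}", readonly = true)
private List<RemoteRepository> remoteRepos;

public void execute() throws MojoExecutionException {
    try {
        // "[0,)" is an open range: every version from 0 upwards.
        VersionRangeRequest request = new VersionRangeRequest();
        request.setArtifact(new DefaultArtifact("com.example:some-lib:[0,)"));
        request.setRepositories(remoteRepos);
        VersionRangeResult result = repoSystem.resolveVersionRange(repoSession, request);
        for (Version version : result.getVersions()) {
            getLog().info("available: " + version);
        }
    } catch (VersionRangeResolutionException e) {
        throw new MojoExecutionException("version lookup failed", e);
    }
}
```

Recursing over the resolved versions and their own dependencies would then let you build the full tree; caching results per artifact is advisable to keep it fast.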
I am new to using GitHub and have been trying to figure out this question by looking at other people's repositories, but I cannot figure it out. When people fork/clone repositories on GitHub to their local computers to develop on the project, is it expected that the cloned project is complete (i.e., it has all of the files that it needs to run properly)? For example, if I were to use a third-party library in the form of a .jar file, should I include that .jar file in the repository so that my code is ready to run when someone clones it, or is it better to just make a note that you are using such-and-such third-party libraries, so the user knows to download them before they begin work? I am just trying to figure out the best practices for my code commits.
Thanks!
Basically it is as Chris said.
You should use a build system that has a package manager. That way you specify which dependencies you need and they are downloaded automatically. Personally I have worked with Maven and Ant, so here is my experience:
Apache Maven:
First, a word about Maven: it is not a package manager. It is a build system. It just includes a package manager, because for Java folks downloading the dependencies is part of the build process.
Maven comes with a nice set of defaults. This means you just use the archetype plugin to create a project ("mvn archetype:generate" on the CLI). Think of an archetype as a template for your project. You can choose whatever archetype suits your needs best. In case you use some framework, there is probably an archetype for it; otherwise the simple-project archetype will be your choice. Afterwards your code goes to src/main/java, your test cases go to src/test/java, and "mvn install" will build everything. Dependencies can be added to the pom in Maven's dependency format. http://search.maven.org/ is the place to look for dependencies. If you find one there, you can simply copy the XML snippet into your pom.xml (which has been created by Maven's archetype system for you).
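For example, a snippet copied from search.maven.org drops straight into the `<dependencies>` section of the generated pom.xml (JUnit shown here as a stand-in):

```xml
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
</dependencies>
```

The next "mvn install" will download the jar (and its transitive dependencies) into your local repository and put it on the build classpath.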
In my experience, Maven is the fastest way to get a project with dependencies and test execution set up. I have also never seen a Maven build that worked on my machine fail somewhere else (except on computers with years-old Java versions). The charm is that Maven's default lifecycle (or build cycle) covers all your needs, and there are plugins for almost everything. However, you have a big problem if you want to do something that is not covered by Maven's lifecycle. I have only ever encountered that in mixed-language projects: as soon as you need anything but Java, you're screwed.
Apache Ivy:
I've only ever used it together with Apache Ant. Ivy is a package manager; Ant provides the build system, and Ivy is integrated into Ant as a plugin. While Maven usually works out of the box, Ant requires you to write your build file manually. This allows for greater flexibility than Maven, but comes at the price of yet another file to write and maintain. Basically, Ant files are as complicated as any source code, which means you should comment and document them; otherwise you will not be able to maintain your build process later on.
Ivy itself is as easy as Maven's dependency system. You have an XML file which defines your dependencies. As with Maven, you can find the appropriate XML snippets on Maven Central (http://search.maven.org/).
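An ivy.xml dependency file looks roughly like this (module coordinates are illustrative; org/name/rev map onto Maven's groupId/artifactId/version):

```xml
<ivy-module version="2.0">
    <info organisation="com.example" module="my-app"/>
    <dependencies>
        <dependency org="junit" name="junit" rev="4.11"/>
    </dependencies>
</ivy-module>
```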
As a summary, I recommend Maven in case you have a simple Java project. Ant is for cases where you need to do something special in your build.
My (Maven) project depends on both stanford-CoreNLP and stanford-Parser, and apparently the (lexicalized) parser of each dependency produces different outputs; they are not alike.
My question is: how can I determine which package the parser is loaded from? The parser class has the same name in both packages:
edu.stanford.nlp.parser.lexparser.LexicalizedParser
and Maven automatically loads the class from the stanford-CoreNLP package, while I want it to be loaded from stanford-Parser.
I'd appreciate your help and suggestions.
I would raise a bug asking them to move the lexical parser into a new maven artifact (or several of them), so you can distinguish them.
If that doesn't happen, you have two options:
Use the Maven shade plugin (as suggested by ooxi)
Delete the offending classes
Breakdown of the second approach:
Use your favorite ZIP tool to open the JAR archive.
Delete the offending packages.
Copy the original POM
Change the version to something like 1.1.MythBuster.1 or 1.1.no-lexer.1
Use mvn install:install-file to install the modified artifact in your local repo
Test it
Use mvn deploy:deploy-file to install the modified artifact in your company's repo
I prefer the second approach since it makes sure the build has a clean classpath, people know that you messed with the original file, and it's pretty obvious what is going on.
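If you'd rather script step 2 than click through a ZIP tool, the JDK's zip filesystem provider can delete entries from a jar in place. A small sketch (class and method names are mine):

```java
import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class JarClassRemover {

    /**
     * Deletes every file under the given package directory
     * (e.g. "edu/stanford/nlp/parser/lexparser") from the jar, in place.
     */
    public static void removePackage(Path jar, String packageDir) throws IOException {
        URI uri = URI.create("jar:" + jar.toUri());
        try (FileSystem fs = FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap())) {
            String prefix = "/" + packageDir + "/";
            try (Stream<Path> walk = Files.walk(fs.getPath("/"))) {
                // Collect first, then delete, so we don't mutate while walking.
                List<Path> doomed = walk
                        .filter(p -> Files.isRegularFile(p) && p.toString().startsWith(prefix))
                        .collect(Collectors.toList());
                for (Path p : doomed) {
                    Files.delete(p);
                }
            }
        } // closing the filesystem rewrites the jar without the deleted entries
    }
}
```

After running it, steps 3-7 (copy the POM, bump the version, install/deploy the file) proceed as above.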
I once had this problem and could solve it by using a virtual package depending on the two conflicting dependencies (in your case stanford-CoreNLP and stanford-Parser) and merging them using the Maven Shade plugin.
When shading, only one copy of the class will end up in the virtual package, depending on the order of the <dependency /> tags.
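Rather than relying on dependency order, shade's `<filters>` element can also exclude the conflicting package from one of the two jars explicitly. A sketch of such a configuration (coordinates and patterns are illustrative and should be checked against the actual artifacts):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <filters>
                    <filter>
                        <artifact>edu.stanford.nlp:stanford-corenlp</artifact>
                        <excludes>
                            <exclude>edu/stanford/nlp/parser/lexparser/**</exclude>
                        </excludes>
                    </filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>
```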
I want to use the Jackson JSON parser library in my Android project. I saw this library in the Maven repository, but I don't know how to use it. I've downloaded the sources and the Jackson jars from the Maven repository and attached the sources to the jars, but in logcat I saw the error message NoClassDefFoundError. When googling, I read that I have to declare the Jackson dependencies in a pom.xml file. I'm a newbie in Java development, so I don't know what all this means, and I have some questions:
1. How do I write a pom.xml for the Jackson library?
2. Where do I put this pom.xml?
3. Do I really need to install Maven if I just want to use the library?
4. What else do I need to begin working with the library?
No, you do not need to write a pom file, unless you are using Maven for building (in which case you need it regardless of Jackson).
What you need are just the Jackson jars -- there is more than one, since some projects only need certain pieces. This page:
http://wiki.fasterxml.com/JacksonDownload
should show what you need and where to get them. If you are starting from scratch, I would strongly recommend using Jackson 2.1 (not 1.9). You will then most likely need three jars (jackson-annotations, jackson-databind, jackson-core) -- although the minimum is just jackson-core, if you use the so-called "streaming API" (low-level, highest performance, but more work).
The benefit of using Maven would be that you can define a logical dependency (group and artifact id of the jar), and Maven resolves it to the physical jar, as well as to references to other jars.
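If you do end up adopting Maven later, the Jackson 2.x coordinates are declared like any other dependency in pom.xml (the version shown is one of the 2.1 releases):

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.1.0</version>
</dependency>
```

Declaring jackson-databind is enough to pull in jackson-core and jackson-annotations transitively.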
My problem is that I have written a Maven plugin to deploy the artifact to a user-specified location. I'm now trying to write another Maven plugin to use this deployed artifact, change some things, and zip it again.
I want to write the second plugin such that I use the first plugin to get the information about where the artifact was deployed.
I don't know how to access this information from the first plugin.
I would agree with @Barend that if you can afford to make the changes before deploying, that would be the best strategy.
If you cannot do that, you can follow the strategy of a plugin like the Maven Release Plugin. The Maven Release Plugin runs in two phases, where the second run needs the output of the first. It manages this by keeping a temporary properties file in the project directory which carries information like the tag name, the SNAPSHOT version name, etc.
You could use the same approach with your plugin. Just remember that your plugin will be sort of transactional, in that it expects the other goal to have run before it can do its work.
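The properties-file handoff boils down to very little code. A sketch of what the two plugins could share (the file name and property key are mine):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class DeployStateFile {

    private static final String FILE_NAME = "deploy-state.properties";

    /** Called by the first plugin after deploying: records where the artifact went. */
    public static void write(Path buildDir, String deployedLocation) throws IOException {
        Properties props = new Properties();
        props.setProperty("deployed.location", deployedLocation);
        Files.createDirectories(buildDir);
        try (OutputStream out = Files.newOutputStream(buildDir.resolve(FILE_NAME))) {
            props.store(out, "written by the deploy plugin, read by the repackaging plugin");
        }
    }

    /** Called by the second plugin: reads the location back, or null if plugin A never ran. */
    public static String read(Path buildDir) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(buildDir.resolve(FILE_NAME))) {
            props.load(in);
        }
        return props.getProperty("deployed.location");
    }
}
```

Writing the file under `${project.build.directory}` keeps it out of version control and ties its lifetime to `mvn clean`.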
It seems to me that the easiest workaround is to reverse the order in which the plugins run.
Have Plugin B run first, using the known location under target/ to modify the artifact and then run Plugin A, deploying the modified artifact to the configured location.
If that's no option, I suggest you simply duplicate the configuration value (so that both plugins are told about the new location in their <configuration> element). This keeps both plugins independent, which is what Maven assumes them to be.
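One way to do that without literally duplicating the value is a shared Maven property that both <configuration> elements reference (the property, plugin, and parameter names here are hypothetical):

```xml
<properties>
    <deploy.location>${project.build.directory}/deployed</deploy.location>
</properties>

<build>
    <plugins>
        <plugin>
            <artifactId>plugin-a</artifactId>
            <configuration>
                <targetDirectory>${deploy.location}</targetDirectory>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>plugin-b</artifactId>
            <configuration>
                <inputDirectory>${deploy.location}</inputDirectory>
            </configuration>
        </plugin>
    </plugins>
</build>
```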
A last option is to make Plugin B parse the entire POM and extract the information from Plugin A's <configuration> element, but I really can't recommend this. If you go this way, the two plugins are so closely intertwined that they're really just one plugin. This is poor design, violates the principle of least surprise, and might cause nasty configuration problems down the line.