I want to write a piece of Java code that can be executed with two different dependencies (or versions of a dependency), namely org.apache.poi. The code must run on a system with version 2 as well as on one with version 3 of org.apache.poi.
Unfortunately, some interfaces changed between versions 2 and 3, the code must be built slightly differently, and there is no way to upgrade both systems to the same org.apache.poi version.
So my questions are:
Is there a way to compile the code with both versions to not run into compiler errors?
Is there a way to execute the right code based on the available org.apache.poi version?
What would be an appropriate approach to solve this issue?
As an amendment:
I'm building code which shall work for two applications that provide an interface in different versions (the Maven scope of the dependency is provided).
If I have both dependencies in Maven, it picks one of them, and the if clauses fail to compile because Cell.CELL_TYPE_STRING or CellType.STRING is not available in the chosen dependency.
I would like the code to work regardless of which dependency is plugged into the application.
// working with old poi interface
if (cell != null && cell.getCellType() == Cell.CELL_TYPE_STRING
&& cell.getRichStringCellValue().getString().trim().equals(cellContent)) {
return row;
}
// working with new poi interface
if (cell != null && cell.getCellType() == CellType.STRING
&& cell.getRichStringCellValue().getString().trim().equals(cellContent)) {
return row;
}
This is probably opinion based, but it seems legit.
First, you will have to create a common interface that you will use to do your job.
Second, you will have to create adapter classes that implement that interface and do the required job using a particular version of the POI library.
Third, you will write an adapter factory that returns the proper adapter instance.
The adapter itself should provide an "isSupported" method that detects whether that adapter can be used, based on what kind of POI classes are currently loaded (detect by reflection - there must be some version-specific classes or other markers).
Then you will put each adapter into a separate Maven module, so each module can be compiled independently (thus you will have no class conflicts). Each module will have the POI dependency in "provided" scope, in the version that this adapter is going to support.
Either each module registers itself with the factory in your main module, or the factory itself detects all adapters that are available (like @ComponentScan in Spring).
Then you will pack everything into a single app bundle. The main module will use only the common interface. All in all it will be a kind of extensible plugin system; a minimal sketch follows.
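To make that concrete, here is a minimal, illustrative sketch (all names are invented for the example; only Class.forName and the POI class name are real):

// Common interface, in the main module (no POI dependency at all).
public interface CellMatcher {
    boolean isSupported();                        // can this adapter run with the loaded POI?
    boolean matches(Object cell, String content); // version-specific work happens behind this
}

// Adapter for the newer POI API, in its own module (POI in "provided" scope).
public class NewPoiCellMatcher implements CellMatcher {
    @Override
    public boolean isSupported() {
        try {
            // The CellType enum only exists in newer POI versions, so it works as a marker.
            Class.forName("org.apache.poi.ss.usermodel.CellType");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    @Override
    public boolean matches(Object cell, String content) {
        // Cast to org.apache.poi.ss.usermodel.Cell and compare against CellType.STRING here.
        return false;
    }
}

// Factory in the main module: picks the first adapter that reports support.
public class CellMatcherFactory {
    public static CellMatcher create(java.util.List<CellMatcher> adapters) {
        return adapters.stream()
                .filter(CellMatcher::isSupported)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No adapter supports the loaded POI version"));
    }
}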
I do not think there is a single "best way".
Nonetheless, we faced a similar issue in a few of our apps that share a common library. I ended up with a variant of @Antoniossss's approach, except that the library itself does not use dependency injection (the parent app may or may not, but the library is free of it).
To be more specific, and due to transitive dependencies, some of our apps need a certain version of Apache Lucene (e.g. 7.x.y or later) and others are stuck on older versions (5.5.x).
So we needed a way to build one of our libs against those versions, using Maven in our case.
What we ended up with uses the following principles:
We share some code, which is common between all versions of Lucene
We have specific code for each target version of Lucene that has an incompatible API (e.g. package changes, non-existing methods, ...)
We build as many JARs as there are supported versions of Lucene, with a naming scheme such as groupId:artifact-luceneVersion:version
Where the lib is used, direct access to the Lucene API is replaced by access to our specific classes
For example, in Lucene v5 there is an org.apache.lucene.analysis.synonym.SynonymFilterFactory facility. In v7 the same functionality is implemented using org.apache.lucene.analysis.synonym.SynonymGraphFilterFactory, i.e. same package, but different class.
What we end up with is providing a com.mycompany.SynonymFilterFactoryAdapter. In the v5 JAR, this class extends the Lucene v5 class, and likewise for v7 or any other version.
In the final app, we always instantiate the com.mycompany object, that behaves just the same as the native org.apache class.
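As an illustration (the adapter's package and constructor are this answer's invention; the Lucene class and its argument map are real), the v5 JAR would contain something like:

package com.mycompany;

// Same class name in every version-specific JAR; only the superclass differs.
// In the v7 JAR this same class extends SynonymGraphFilterFactory instead.
public class SynonymFilterFactoryAdapter
        extends org.apache.lucene.analysis.synonym.SynonymFilterFactory {

    public SynonymFilterFactoryAdapter(java.util.Map<String, String> args) {
        super(args); // Lucene analysis factories take their configuration as a String map
    }
}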
Project structure
The build system being Maven, we build it as follows:
project root
|- pom.xml
|- shared
|  |- src/main/java
|  |- src/test/java
|- v5
|  |- pom.xml
|- v7
|  |- pom.xml
Root pom
The root pom is a classic multimodule pom, but it does not declare the shared folder (notice that the shared folder has no pom).
<modules>
<module>v5</module>
<module>v7</module>
</modules>
The shared folder
The shared folder stores all non-version-specific code and the tests. On top of that, when a version-specific class is needed, the shared code does not code against the API of that class (e.g. it does not import org.apache.VersionSpecificStuff); it codes against com.mycompany.VersionSpecificStuffAdapter.
The implementation of this adapter is left to the version-specific folders.
Version specific folders
The v5 folder declares in its artifactId the Lucene version it compiles against, and of course declares that version as a dependency:
....
<artifactId>myartifact-lucene-5.5.0</artifactId>
....
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-analyzers-common</artifactId>
<version>5.5.0</version>
</dependency>
But the real "trick" is that it declares an external source folder for classes and tests using the build-helper-maven-plugin : see below how the source code from the shared folder is imported "as if" it was from this project itself.
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>3.0.0</version>
<executions>
<execution>
<id>add-5.5.0-src</id>
<phase>generate-sources</phase>
<goals>
<goal>add-source</goal>
</goals>
<configuration>
<sources>
<source>../shared/src/main/java</source>
</sources>
</configuration>
</execution>
<execution>
<id>add-5.5.0-test</id>
<phase>generate-test-sources</phase>
<goals>
<goal>add-test-source</goal>
</goals>
<configuration>
<sources>
<source>../shared/src/test/java</source>
</sources>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
For the whole implementation to work, it provides the Adapter implementations in its own source folder src/main/java, e.g.
package com.mycompany;

public class VersionSpecificStuffAdapter extends org.apache.VersionSpecificStuff {
}
If both the v5 and the v7 packages do it the same way, then client code using the com.mycompany.xxxAdapter will always compile and, under the hood, get the corresponding implementation of the library.
This is one way to do it. You can also, as already suggested, define your whole new interfaces and have clients of your lib code against your own interface. This is kind of cleaner, but depending on the case, may imply more work.
In your edit, you mention referring to constants that are not defined the same way, e.g. CellType.TYPE_XX.
In the version-specific code, you could either produce another constant MyCellType.TYPE_XX that would duplicate the actual constant under a stable name.
Or, in the case of an enum, you could create a CellTypeChecker util with a method isCellTypeXX(cell) that would be implemented in a version-specific way, as sketched below.
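A hedged sketch of that util (the interface and implementation names are invented; the POI calls mirror the old and new snippets from the question):

// Shared code sees only this interface.
public interface CellTypeChecker {
    boolean isStringCell(org.apache.poi.ss.usermodel.Cell cell);
}

// In the module built against the old POI API:
public class OldPoiCellTypeChecker implements CellTypeChecker {
    @Override
    public boolean isStringCell(org.apache.poi.ss.usermodel.Cell cell) {
        return cell != null && cell.getCellType() == org.apache.poi.ss.usermodel.Cell.CELL_TYPE_STRING;
    }
}

// In the module built against the new POI API:
public class NewPoiCellTypeChecker implements CellTypeChecker {
    @Override
    public boolean isStringCell(org.apache.poi.ss.usermodel.Cell cell) {
        return cell != null && cell.getCellType() == org.apache.poi.ss.usermodel.CellType.STRING;
    }
}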
v7 folder
It's pretty much the same structure, you just swap what changed between v5 and v7.
Caveats
This may not always scale.
If you have hundreds of types you need to adapt, this is cumbersome to say the least.
If you have 2 or more libs you need to cross-compile against (e.g. mylib-poi-1.0-lucene-5.5-guava-19-....) it's a mess.
If you have final classes to adapt, it gets harder.
You have to test to make sure every JAR has all the adapters. I do that by testing each adapted class in the shared test folder, along the lines of the sketch below.
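For instance, a shared test might look like this (JUnit 4 is an assumption; the adapter name is the one from the example above):

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

// Lives in shared/src/test/java, so it is compiled and run once per version-specific JAR.
public class SynonymFilterFactoryAdapterTest {

    @Test
    public void adapterClassIsOnTheClasspath() throws Exception {
        // Fails the v5 or v7 build if that module forgot to provide the adapter.
        assertNotNull(Class.forName("com.mycompany.SynonymFilterFactoryAdapter"));
    }
}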
Related
I have a Java 11 application which I develop using Maven and in the pom.xml I have a version declared.
<groupId>my.group.id</groupId>
<artifactId>artifact</artifactId>
<version>0.1.2.3</version>
I want to get this version at runtime e.g. using getClass().getPackage().getImplementationVersion() as it's described in this question. This works as long as I don't package my application as a modular runtime image using Jlink. Then I only get null returned from above call.
I package my application using:
jlink --output target/artifact-image --module-path target/dependencies --launcher MyApp=my.module.name/my.main.Class --add-modules my.module.name
jlink actually has a parameter --version, but this returns the jlink version instead of setting it for the generated artifact.
So, how can I get the version (of my Maven project) at runtime?
How to define it in the modular application?
How to get it into the modular application?
How to read it in the modular application?
I know I could define it in a resource file and simply read it from there, however I prefer to have it only in the pom.xml (= to have a single source of truth).
In the end I did this using the filtering function of the Maven Resources Plugin.
First, enable filtering in the pom.xml:
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
</build>
Then add a src/main/resources/my-version.properties file containing:
my.version=${project.version}
So you can use the following code in Java:
// Requires: java.io.IOException, java.util.Objects, java.util.Properties
Properties myProperties = new Properties();
try {
    myProperties.load(getClass().getResourceAsStream("/my-version.properties"));
} catch (IOException e) {
    throw new IllegalStateException(e);
}
String theVersion = Objects.requireNonNull((String) myProperties.get("my.version"));
I had a similar problem in my last job. I needed to get the version for modules/jars that are not a direct dependency of the application, as well as the module's own version. The classpath is assembled from multiple modules when the application starts, and the main application module has no knowledge of how many jars are added later.
That's why I came up with a different solution, which may be a little more elegant than having to read XML or properties from jar files.
The idea
Use a Java service loader approach to be able to add as many components/artifacts as needed later, each of which can contribute its own version at runtime. Create a very lightweight library with just a few lines of code to read, find, filter and sort all of the artifact versions on the classpath.
Create a Maven source code generator plugin that generates the service implementation for each of the modules at compile time, packaging a very simple service in each of the jars.
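For readers unfamiliar with the mechanism, this builds on the standard java.util.ServiceLoader pattern; a minimal, generic sketch (the interface and provider names are invented here, not the library's actual API):

// Service definition, shipped in the lightweight library.
public interface VersionProvider {
    String coordinates(); // e.g. "my.group:my-artifact:1.2.3"
}

// Generated into each jar at compile time, and registered inside that jar in
// META-INF/services/<fully-qualified name of VersionProvider>.
public class MyModuleVersionProvider implements VersionProvider {
    @Override
    public String coordinates() {
        return "my.group:my-artifact:1.2.3";
    }
}

// At runtime, iterating all registered providers from all jars on the classpath:
for (VersionProvider p : java.util.ServiceLoader.load(VersionProvider.class)) {
    System.out.println(p.coordinates());
}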
The solution
Part one of the solution is the artifact-version-service library, which can be found on GitHub and Maven Central now. It covers the service definition and a few ways to get the artifact versions at runtime.
Part two is the artifact-version-maven-plugin, which can also be found on GitHub and Maven Central. It is used to have a hassle-free generator implementing the service definition for each of the artifacts.
Examples
Fetching all modules with coordinates
No more reading jar manifests or property files, just a simple method call:
// iterate list of artifact dependencies
for (Artifact artifact : ArtifactVersionCollector.collectArtifacts()) {
// print simple artifact string example
System.out.println("artifact = " + artifact);
}
A sorted set of artifacts is returned. To modify the sorting order, provide a custom comparator:
new ArtifactVersionCollector(Comparator.comparing(Artifact::getVersion)).collect();
This way the list of artifacts is returned sorted by version numbers.
Find a specific artifact
ArtifactVersionCollector.findArtifact("de.westemeyer", "artifact-version-service");
Fetches the version details for a specific artifact.
Find artifacts with matching groupId(s)
Find all artifacts with groupId de.westemeyer (exact match):
ArtifactVersionCollector.findArtifactsByGroupId("de.westemeyer", true);
Find all artifacts where groupId starts with de.westemeyer:
ArtifactVersionCollector.findArtifactsByGroupId("de.westemeyer", false);
Sort result by version number:
new ArtifactVersionCollector(Comparator.comparing(Artifact::getVersion)).artifactsByGroupId("de.", false);
Implement custom actions on list of artifacts
By supplying a lambda, the very first example could be implemented like this:
ArtifactVersionCollector.iterateArtifacts(a -> {
System.out.println(a);
return false;
});
Installation
Add these two tags to all pom.xml files, or maybe to a company master pom somewhere:
<build>
<plugins>
<plugin>
<groupId>de.westemeyer</groupId>
<artifactId>artifact-version-maven-plugin</artifactId>
<version>1.1.1</version>
<executions>
<execution>
<goals>
<goal>generate-service</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>de.westemeyer</groupId>
<artifactId>artifact-version-service</artifactId>
<version>1.1.1</version>
</dependency>
</dependencies>
Feedback
It would be great if you could give the solution a try. Getting feedback about whether you think the solution fits your needs would be even better. So please don't hesitate to add a new issue on any of the github projects if you have any suggestions, feature requests, problems, whatsoever.
Licence
All of the source code is open source, free to use even for commercial products (MIT licence).
General question
I have two projects A and B; B has a dependency on A. I want to generate some code in B with an Annotation Processor, based on annotations on objects in A. When I run the compilation with the correct Processor implementation, only the annotated objects from B are picked up.
I understand that scanning other JARs must be disabled by default, because you usually don't want to do an annotation scan for all your dependencies. I also understand that it may be impossible to do what I want to do because of compiler magic - which I don't know a lot about - but I'm hoping it's not.
Specific case
My projects are called DB and WEB. WEB obviously depends on DB for its JPA access; this is configured in Maven. Due to a number of architectural choices, DB must remain a separate JAR. DB doesn't use Spring except for some annotations which are consumed by WEB; WEB uses Spring MVC.
I'm trying to generate the CrudRepository interfaces for all my JPA entities with an Annotation Processor. The @Repository objects are supposed to go in a repo package in the WEB project, so they can be used with @Autowired wherever in my WEB application. The annotation I'm performing the scan for is @javax.persistence.Entity, but I've also tried a custom annotation, with the same results.
import java.io.IOException;
import java.io.PrintWriter;
import java.io.Writer;
import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.ElementKind;
import javax.lang.model.element.TypeElement;
import javax.persistence.Entity;
import javax.tools.JavaFileObject;

@SupportedAnnotationTypes("javax.persistence.Entity")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class RepositoryFactory extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (Element e : roundEnv.getElementsAnnotatedWith(Entity.class)) {
            if (e.getKind() != ElementKind.CLASS) {
                continue;
            }
            // TODO: implement logic to skip manually implemented Repos
            try {
                String name = e.getSimpleName().toString();
                TypeElement clazz = (TypeElement) e;
                JavaFileObject f = processingEnv.getFiler().
                        createSourceFile("blagae.web.repo." + name + "Repo");
                try (Writer w = f.openWriter()) {
                    PrintWriter pw = new PrintWriter(w);
                    pw.println("package blagae.web.repo;");
                    pw.println("import org.springframework.data.repository.CrudRepository;");
                    pw.printf("import %s;\n", clazz.toString());
                    pw.println("import org.springframework.stereotype.Repository;");
                    pw.println("@Repository");
                    pw.printf("public interface %sRepo extends CrudRepository<%s, Long> {}\n", name, name);
                    pw.flush();
                }
            } catch (IOException ex) {
                Logger.getLogger(RepositoryFactory.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        return false;
    }
}
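For a hypothetical entity blagae.db.Account, the processor above would emit a file like this (derived directly from the println calls):

package blagae.web.repo;
import org.springframework.data.repository.CrudRepository;
import blagae.db.Account;
import org.springframework.stereotype.Repository;
@Repository
public interface AccountRepo extends CrudRepository<Account, Long> {}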
Ideally, I'd love for someone to tell me about an annotation that would be as simple as
@ComponentScan(basePackages = "blagae.db.*")
But of course, I'm not counting on that, because it would probably be documented somewhere. As a workaround, I could just add the Spring dependency to the DB project and generate the classes there, but they only serve a purpose in the Spring MVC app. I'm also wary of the config it might take to make this work.
UPDATE
Some extra info: I'm using the maven-processor-plugin, which I've verified to work well in the WEB project for classes that are defined there. However, I specifically want to access classes annotated in the dependency project DB. I have looked into the method AbstractProcessor::getSupportedOptions but it's unclear to me what I could do there.
Maven config:
<plugin>
<groupId>org.bsc.maven</groupId>
<artifactId>maven-processor-plugin</artifactId>
<version>2.2.4</version>
<configuration>
<processors>
<processor>blagae.utils.RepositoryFactory</processor>
</processors>
</configuration>
<executions>
<execution>
<id>process</id>
<goals>
<goal>process</goal>
</goals>
<phase>generate-sources</phase>
</execution>
</executions>
</plugin>
SUGGESTION
Another random thought I had would be to run a JavaCompiler process for the DB project in WEB, but how would I inject my Processor?
The annotation processor works during the compilation phase of your project (WEB in your case), and the compiler compiles only this project. The dependencies of the current project are already compiled, so the compiler (and as a result your annotation processor) doesn't touch (and has no access to) third-party libraries (DB).
You can try to extract the annotation processor into a separate project/jar and use it in both the WEB and DB projects. In this case the annotation processor will create the CrudRepository interfaces during the compilation phase of each concrete project, and all the classes generated in the DB project will be available in WEB.
Personally, I would extract the annotation processor in a separate maven module and add a dependency to it from the WEB module.
However, this doesn't matter that much for successfully triggering an annotation processor.
In order to have an annotation processor working, there are two things you have to provide:
a class that extends the javax.annotation.processing.AbstractProcessor class.
a special file, nested in the META-INF/services of the project.
Since you mentioned that currently no classes are generated, I would assume that you're missing the meta file. So, open your WEB project and navigate to the src/main/resources folder. Within, you have to create a META-INF folder with a nested services folder in it. Then, in services, create a file named javax.annotation.processing.Processor. The content of the file should list the fully-qualified class name(s) of your annotation processor(s). If there is more than one annotation processor, the fully-qualified class names should be on separate lines. But since you have just one, you'd have something like:
com.yourdomain.processor.RepositoryFactory
Note that you will have to replace this line with the actual fully-qualified class name of your annotation processor.
In the end, you should end up with the structure src/main/resources/META-INF/services/javax.annotation.processing.Processor.
This meta file is important, because otherwise the compiler is not aware of the user-defined annotation processors. With it in place, the compiler will make use of all the registered processors.
After that, when you do mvn clean install, all your modules will be cleaned and built. However, since the compiler will now be aware of your annotation processor, it will trigger it. All the generated sources will be located (by default) in the target/generated-sources folder. Moreover, they will all be under the package you've configured in the annotation processor, i.e. blagae.web.repo.
In order to use the generated sources within your code, you will have to add the target/generated-sources folder to the project classpath. If you don't want to rely on the IDE to do this, you can extend the Maven <build> by adding target/generated-sources. Something like:
<build>
<resources>
...
<resource>
<directory>${project.build.directory}/generated-sources</directory>
</resource>
</resources>
</build>
In your project A include the META-INF/beans.xml file,
which will contain the following:
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
version="1.1" bean-discovery-mode="all">
</beans>
and give it a try. You should use JavaEE 7/CDI 1.1. Refer to Java EE 7 Deployment Descriptors for more information.
You can also refer to this related question: How to @Inject object from different project module included as jar
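Once project A's jar is a bean archive, injection in project B works with plain CDI; a minimal sketch (the bean and consumer names are invented for the example):

// In project A (the jar carrying META-INF/beans.xml):
public class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// In project B; CDI discovers the bean from A's jar automatically:
import javax.inject.Inject;

public class GreetingConsumer {
    @Inject
    private GreetingService greetingService;

    public void sayHello() {
        System.out.println(greetingService.greet("world"));
    }
}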
I'm building (multiple) complex webservices with base XSD types from all kinds of standards (GML, SWE, XLINK, etc). Now, I would like to break up the compilation into more steps, preferably one for each of the standards I'm using.
Advantages:
1) I can create tooling libraries that I can re-use in all of my webservices for each of the standards.
2) I can make use of the power of the JAXB2 Basics plugin, which seems to work very nicely with the maven-jaxb2-plugin (org.jvnet.jaxb2.maven2), and create for instance interface bindings. This is in contrast to the jaxws-maven-plugin plugin.
The final step would be using the org.jvnet.jax-ws-commons:maven-jaxb2-plugin to create the actual web service that I can implement in an EJB (or call as a client).
Now, the org.jvnet.jaxb2.maven2:maven-jaxb2-plugin plugin allows me to refer to episodes by means of their Maven coordinates, as part of its configuration, like this:
<episodes>
<episode>
<groupId>org.example</groupId>
<artifactId>jaxb2-basics-test-episodes-a</artifactId>
</episode>
</episodes>
How can I do this by means of the org.jvnet.jax-ws-commons:maven-jaxb2-plugin? I've searched a lot, and experimented like this:
<plugin>
<groupId>org.jvnet.jax-ws-commons</groupId>
<artifactId>maven-jaxb2-plugin</artifactId>
<version>2.1</version>
<executions>
<execution>
<goals>
<goal>wsimport</goal>
</goals>
</execution>
</executions>
<configuration>
<wsdlDirectory>src/main/resources/</wsdlDirectory>
<wsdlFiles>
<wsdlFile>example.wsdl</wsdlFile>
</wsdlFiles>
<xjcArgs>
<xjcArg>-b</xjcArg>
<xjcArg>../cpt-xsd/target/generated-sources/xjc/META-INF/sun-jaxb.episode</xjcArg>
</xjcArgs>
<verbose>true</verbose>
</configuration>
</plugin>
This takes the episode file from the target dir of the (compiled) JAXB-dependent project. It sometimes even fails the Maven build (why, I have not figured out yet).
I've tried to use catalog files to make a mapping (I think I saw somewhere a catalog mapping that took Maven coordinates as destination), but haven't succeeded yet.
Are you aware of the OGC Schemas and Tools Project? (Disclaimer: I'm the author.)
Now, to your question. My guess would be that org.jvnet.jax-ws-commons:maven-jaxb2-plugin does not support the "Maven coordinates" as you call them. This was a feature I've specifically implemented for my org.jvnet.jaxb2.maven2:maven-jaxb2-plugin (disclaimer: I'm the author).
On the other hand, an episode file is nothing but a JAXB binding file. So you can simply extract this file from the JAR artifact (for instance using the maven-dependency-plugin) and then include it more or less like you do already. Just don't point to directories in other modules; this is not reliable.
Attempting to modify an existing Java/Tomcat app for deployment on Heroku following their tutorial and running into some issues with AppAssembler not finding the entry class. Running target/bin/webapp (or deploying to Heroku) results in Error: Could not find or load main class org.stopbadware.dsp.Main
Executing java -cp target/classes:target/dependency/* org.stopbadware.dsp.Main runs properly however. Here's the relevant portion of pom.xml:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>appassembler-maven-plugin</artifactId>
<version>1.1.1</version>
<configuration>
<assembleDirectory>target</assembleDirectory>
<programs>
<program>
<mainClass>org.stopbadware.dsp.Main</mainClass>
<name>webapp</name>
</program>
</programs>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>assemble</goal>
</goals>
</execution>
</executions>
</plugin>
My guess is mvn package is causing AppAssembler to not use the correct classpath - any suggestions?
Your artifact's packaging must be set to jar, otherwise the main class is not found.
<pom>
...
<packaging>jar</packaging>
...
</pom>
The artifact itself is added at the end of the classpath, so nothing other than a JAR file will have any effect.
Try:
mvn clean package jar:jar appassembler:assemble
Was able to solve this by adding "$BASEDIR"/classes to the CLASSPATH line in the generated script. Since the script gets rewritten on each call of mvn package I wrote a short script that calls mvn package and then adds the needed classpath entry.
Obviously a bit of a hack, but after 8+ hours of attempting a more "proper" solution this will have to do for now. Will certainly entertain any more elegant ways of correcting the classpath suggested here.
I was going through that tutorial some time ago and had a very similar issue. I came up with a slightly different approach which works for me very nicely.
First of all, as was mentioned before, you need to keep your POM's packaging set to jar (<packaging>jar</packaging>) - thanks to that, the appassembler plugin will generate a JAR file from your classes and add it to the classpath, so your error will go away.
Please note that in this tutorial Tomcat is instantiated from the application source directory. In many cases that is enough, but please note that with that approach you will not be able to utilize Servlet @WebServlet annotations, as /WEB-INF/classes in the sources is empty and Tomcat will not be able to scan your servlet classes. So the HelloServlet servlet from that tutorial will not work, unless you add some additional Tomcat initialization (resource configuration) as described here (BTW, you will find more SO questions talking about that resource configuration).
I took a bit of a different approach:
I run the org.apache.maven.plugins:maven-war-plugin plugin (exploded goal) during package and use the generated directory as the source directory of the application. With that approach my web application directory will have /WEB-INF/classes "populated" with classes. That in turn will allow Tomcat to perform its scanning job correctly (i.e. Servlet @WebServlet annotations will work).
I also had to change the source of my application in the launcher class:
public static void main(String[] args) throws Exception {
// Web application is generated in directory name as specified in build/finalName
// in maven pom.xml
String webappDirLocation = "target/embeddedTomcatSample/";
Tomcat tomcat = new Tomcat();
// ... remaining code does not change
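// For completeness, the remainder usually looks like this - a sketch based on
// the standard embedded Tomcat API, not necessarily the tutorial's exact code:
String webPort = System.getenv("PORT"); // Heroku provides the port to bind to
if (webPort == null || webPort.isEmpty()) {
    webPort = "8080";
}
tomcat.setPort(Integer.valueOf(webPort));
// Register the exploded webapp directory produced by the maven-war-plugin.
tomcat.addWebapp("/", new java.io.File(webappDirLocation).getAbsolutePath());
tomcat.start();
tomcat.getServer().await();
}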
Changes I made to the POM - I included the maven-war-plugin just before the appassembler plugin:
...
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.5</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>exploded</goal>
</goals>
</execution>
</executions>
</plugin>
...
Please note that the exploded goal is called.
I hope that small change will help you.
One more comment on that tutorial and the Maven build: note that the tutorial was written to show how simple it is to build an application and run it on Heroku. However, it is not the best approach to the Maven build.
Maven's recommendation is that you should adhere to producing one artifact per POM. In your case there should be two artifacts:
Tomcat launcher
Tomcat web application
Both should be built as separate POMs and referenced as modules from your parent POM. If you look at the complexity of that tutorial, it does not make much sense to split it into two modules. But if your application gets more and more complex (and the launcher gets some additional configuration etc.) it will make a lot of sense to make that "split". As a matter of fact, there are some "Tomcat launcher" libraries already created, so alternatively you could use one of them.
You can set the CLASSPATH_PREFIX environment variable:
export CLASSPATH_PREFIX=target/classes
which will get prepended to the classpath of the generated script.
The first thing is that you are using an old version of the appassembler-maven-plugin; the current version is 1.3.
What I don't understand is why you are defining the
<assembleDirectory>target</assembleDirectory>
folder. There exists a good default value for that, so usually you don't need it. Apart from that, you don't need to define an explicit execution bound to the package phase, because the appassembler-maven-plugin is bound to the package phase by default.
Furthermore, you can use the useWildcardClassPath configuration option to make your classpath shorter.
<configuration>
<useWildcardClassPath>true</useWildcardClassPath>
<repositoryLayout>flat</repositoryLayout>
...
</configuration>
And the error shown when calling the generated script comes down to the location of the repository folder (where all the dependencies are located) being different from the one defined in the generated script.
How would you structure Freemarker (or an alternative) as a templating code generator into a Maven project? I'm pretty new to Maven and would appreciate some help.
I want to generate some code from templates in my project. [a]
Rather than write my own, googling found FreeMarker, which appears to be used by Spring - a good enough reference for me. Though as I haven't started with it yet, any other suggestions that work well with Maven would be appreciated too.
This website tells me how to add it as a dependency to my pom.xml.
This SO question tells me where the generated sources should go. What I can't work out is how to tie it all together, so that my generated sources are generated from the templates and then used like regular sources for compile, test, jar, javadoc etc. Has anyone else used a template code generator for Java within Maven and could help?
[a] I know Generics would be the usual solution, and in fact I'm using them, but I have to use templates to cope with the primitive cases, without introducing copy/paste errors. Please trust me on this :-)
I had written a Maven plugin for this purpose. It uses the FreeMarker Pre Processor (FMPP).
Here's the fragment from pom.xml highlighting its usage:
<plugins>
<plugin>
<configuration>
<cfgFile>src/test/resources/freemarker/config.fmpp</cfgFile>
<outputDirectory>target/test/generated-sources/fmpp/</outputDirectory>
<templateDirectory>src/test/resources/fmpp/</templateDirectory>
</configuration>
<groupId>com.googlecode.fmpp-maven-plugin</groupId>
<artifactId>fmpp-maven-plugin</artifactId>
<version>1.0</version>
<executions>
<execution>
<phase>generate-sources</phase>
<goals>
<goal>generate</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
Here the cfgFile is the path where you keep the config file for FMPP (if you are not using any special data passing in FreeMarker, an empty file will be enough).
templateDirectory is where you keep the FreeMarker templates.
outputDirectory is where you want the output files to be generated.
I am in the process of writing detailed documentation highlighting the plugin's usage and will update the project website accordingly.
Here is another plugin for the job:
https://code.google.com/p/maven-replacer-plugin/
From the original description of the problem it sounds like you should consider creating a Maven Archetype (aka Project Template):
http://maven.apache.org/archetype/maven-archetype-plugin/
And it sounds like you might want to add some properties into the equation:
http://maven.apache.org/archetype/maven-archetype-plugin/examples/create-with-property-file.html
Maven Archetype functionality also provides a means of doing substitution using Apache Velocity (near enough the same as FreeMarker)... but I haven't worked that bit out yet.