I am working for a company that is using multiple Maven projects/modules to create what will eventually become one product. To help me explain, imagine a file structure similar to below:
- Parent Directory
  - Project_1
    - /src/
    - /target/
    - pom.xml
  - Project_2
    - /src/
    - /target/
    - pom.xml
Along the way we are using JUnit to unit test our code, and it is an important contractual requirement that we achieve above a certain percentage threshold of code coverage with our tests.
We are using JaCoCo to generate coverage reports in the form of an HTML website. JaCoCo itself is proving to be invaluable, but one major problem is that each project produces its own standalone site under its own /target/site/jacoco/ directory.
I have done some investigating myself and found that, unless I am mistaken, JaCoCo by default does not support merging coverage from multiple Maven projects into a single report.
So my question is: can anybody suggest an alternative solution, something that will allow us to bring multiple reports together on a single web server?
One option we have is to move all sites into individual folders on a web server and then have an index page linking them together, but it's "clumsy" at best. For example:
- Web Server
  - index.html
  - Project_1
    - (Generated report files)
  - Project_2
    - (Generated report files)
Any better suggestions would be greatly appreciated.
JaCoCo does not provide a simple way to do this as of today. However, the project wiki describes three alternatives: https://github.com/jacoco/jacoco/wiki/MavenMultiModule
Their most suitable approach involves creating a separate reporter module that depends on all the other modules (referred to in the GitHub article as "Strategy: Module with Dependencies").
The reporter module uses the jacoco:report-aggregate Maven goal (http://www.eclemma.org/jacoco/trunk/doc/report-aggregate-mojo.html) to fetch the individual modules' coverage data and bind it together into one report.
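For illustration, here is a minimal sketch of such a reporter module's POM; the coordinates and versions are hypothetical, and it assumes each listed module runs jacoco:prepare-agent during its own tests so that execution data exists:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>coverage-report</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>pom</packaging>

      <dependencies>
        <!-- one dependency per module whose coverage should be aggregated -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>Project_1</artifactId>
          <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>Project_2</artifactId>
          <version>1.0-SNAPSHOT</version>
        </dependency>
      </dependencies>

      <build>
        <plugins>
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.8.8</version> <!-- or whatever version you use elsewhere -->
            <executions>
              <execution>
                <id>report-aggregate</id>
                <phase>verify</phase>
                <goals>
                  <goal>report-aggregate</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </project>

Running mvn verify on the whole build then produces the combined report under the reporter module's target/site/jacoco-aggregate/ directory.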
An example project:
https://prismoskills.appspot.com/lessons/Maven/Chapter_06_-_Jacoco_report_aggregation.jsp
There are many different approaches you can go with.
First of all, you might want to consider something like Sonar: you compile all your modules and then run a Sonar analysis that inspects the coverage, among other things. Sonar takes the results and uploads them to the Sonar server (with the database and everything), so that you can see in the UI what went wrong.
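If you go that route, the analysis is typically just an extra goal appended to the build; for example (the server URL here is just the default local one, adjust as needed):

    mvn clean verify sonar:sonar -Dsonar.host.url=http://localhost:9000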
Another approach is rolling your own Maven plugin (assuming you're using Maven). JaCoCo also generates an XML report, if I'm not mistaken, so it can be parsed pretty easily. One could write a Maven plugin that identifies all such reports, parses them, and provides a unified view.
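As a rough sketch of the parsing side (plain JAXP, not a full Maven plugin; the class name and paths are made up for illustration), the report-level LINE counters from each module's jacoco.xml could be summed like this:

    import java.io.File;

    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class CoverageSummer {

        public static void main(String[] args) throws Exception {
            long covered = 0;
            long missed = 0;

            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            // jacoco.xml declares a DTD; don't try to fetch it while parsing
            factory.setFeature(
                "http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

            // args: one path per module, e.g. Project_1/target/site/jacoco/jacoco.xml
            for (String path : args) {
                Document doc = factory.newDocumentBuilder().parse(new File(path));
                // the report-level totals are <counter> children of the root <report>
                NodeList children = doc.getDocumentElement().getChildNodes();
                for (int i = 0; i < children.getLength(); i++) {
                    Node node = children.item(i);
                    if (node instanceof Element && "counter".equals(node.getNodeName())
                            && "LINE".equals(((Element) node).getAttribute("type"))) {
                        covered += Long.parseLong(((Element) node).getAttribute("covered"));
                        missed += Long.parseLong(((Element) node).getAttribute("missed"));
                    }
                }
            }

            // assumes at least one report was passed in
            System.out.printf("Aggregate line coverage: %.1f%%%n",
                    100.0 * covered / (covered + missed));
        }
    }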
Yet another approach is to fail the whole build when the coverage doesn't reach some threshold. I know it doesn't answer your question directly, but if you do it like this you more or less guarantee a minimum level of coverage (which can be raised from time to time at the project level).
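That doesn't even require a custom plugin; JaCoCo's own check goal can enforce it. A sketch (the 80% line-coverage minimum is just an example, and it assumes prepare-agent has already run so execution data exists):

    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>check-coverage</id>
          <goals>
            <goal>check</goal>
          </goals>
          <configuration>
            <rules>
              <rule>
                <element>BUNDLE</element>
                <limits>
                  <!-- fail the build below 80% line coverage -->
                  <limit>
                    <counter>LINE</counter>
                    <value>COVEREDRATIO</value>
                    <minimum>0.80</minimum>
                  </limit>
                </limits>
              </rule>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>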
Related
A common use case for me is that I need to deploy multiple Java applications/customizations into customer environments. Often these different parts have to be developed as separate Maven projects, where each project contains/produces multiple files of various types (jar, JSP, properties, XML, etc.). A lot of these extra files are generated via Java annotations and annotation-processor functionality.
The problem I'm facing is that multiple projects may produce the same files, which must then be merged together, and it is becoming increasingly harder to do this as the number of submodules goes up. The result is that I may mess up the merging and create either invalid or incomplete files, which are then deployed and cause problems in the target environment. Not to mention that this manual merging process makes it impossible to automate the builds.
I'm looking for a way to structure my projects or for plugins (or some other way) that may help, so that I can have an aggregator project that can collect all these files from all the projects and merge them into final versions that contains the contributions from each individual submodule. The goal is of course that building some sort of top project will result in a complete deployment package that can be directly installed without any further manual processing.
A good approach could have been if the aggregator project could run annotation processing across all the submodules and simply create the finished files, but as far as I understand the annotation processor, this is not really an option.
Do you know of any technique, plugin or something else that could help me accomplish this?
I have a Maven-based Java webapp that has a bunch of unit tests, integration tests, code coverage reports, etc., and some more technical details.
I would like to generate project information that aggregates all of the above in one place, so that it can be seen by others.
What are some of the tools available to achieve this?
- Remote server upload can be done via HTTP/DAV/SCP, etc.
- You can maintain a roadmap via Markdown/APT or other formats.
- Current open issues (maybe from JIRA) via the maven-changes-plugin?
- Technical details such as tools, etc. can be documented by the above as well.
- Unit test results can be covered by the usual Maven site generation (Surefire reporting); code coverage via Cobertura the old way, or via JaCoCo.
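Roughly, the pieces above boil down to a reporting section plus a site deployment target in the POM. A sketch (plugin versions omitted; the server id and URL are hypothetical, and scp may need the wagon-ssh extension depending on your Maven version):

    <reporting>
      <plugins>
        <!-- unit test results (Surefire report) -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-report-plugin</artifactId>
        </plugin>
        <!-- code coverage -->
        <plugin>
          <groupId>org.jacoco</groupId>
          <artifactId>jacoco-maven-plugin</artifactId>
        </plugin>
        <!-- open issues / release notes -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-changes-plugin</artifactId>
        </plugin>
      </plugins>
    </reporting>

    <distributionManagement>
      <!-- where mvn site-deploy uploads the generated site (scp/dav/http) -->
      <site>
        <id>company-docs</id>
        <url>scp://docs.example.com/var/www/project-site</url>
      </site>
    </distributionManagement>

Running mvn site-deploy then builds the site with all the configured reports and uploads it in one go.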
I need advice on how to structure a multi-tier GWT/Spring project so that Gradle can build the artifacts and deploy the correct jars.
Google hasn't helped much: I can find a number of articles on building multi-projects, and indeed on building GWT projects in Gradle; however, all of these seem incomplete for my problem domain, as I have encountered the following issues:
1. In the multi-project examples, the GWT dependencies end up included in the web application by the war plugin.
2. If I go down the single Gradle build route, then I lose the decoupling between the projects.
3. Both the client and server have dependencies on certain class files (for GWT-RPC); currently these are packaged in the client project, which has resulted, again, in a server dependency on the client (for the GWT-RPC DTO objects). This leads me to feel I need a third module exclusively for the shared class files, with the source also present in the gwt-client project (for the GWT compiler to pick them up).
So, the question is: has anyone come across a multi-tier GWT example that uses Gradle as the build tool and deals with some or all of the above issues?
Thanks in advance,
Ian.
We're using a single build, but we address point #2 ("coupling of projects") using the Classcycle Maven dependency plugin.
Ultimately, you want three genres of code: server, client and shared. The advantage of packaging those separately in separate jars (as you said in point #3) is that your server jar size will be decreased, and you could use more liberal source directories in your .gwt.xml file.
If you decide to use a single jar/war, then you will be including the extra, unused client classes on the server. This could lead to runtime exceptions from code leakage and (potentially?) worse performance on the server. We avoid the runtime exceptions by enforcing the layering separation at build time (using Classcycle), and the extra performance overhead from the extra client classes should be marginal. You can always strip out the client code from the jar after compile, using a post-build task.
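On the Maven side (since I can't speak for Gradle), that stripping can be a simple jar-plugin exclude. A sketch, assuming the client-only code lives under a client package (adjust the pattern to your layout):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <excludes>
          <!-- keep GWT client-only classes out of the server jar -->
          <exclude>**/client/**</exclude>
        </excludes>
      </configuration>
    </plugin>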
Sorry, I don't know much about gradle, but I figured I would try to help anyways.
We've just started looking at using JBehave for acceptance tests, and I was wondering how people that are using it organise the writing of stories and the storage of story files. It's just developers who are working on them at the moment, so we have the story files stored in the resources folder alongside the Java code that implements them.
I guess my actual question is how and where are you storing your story files and how does this work with the product owner or QA writing stories?
@MrWiggles
As t0rx said, you are lucky to have QA write stories/scenarios. Coming to your question: Behaviour-Driven Development encourages you to start by defining stories via scenarios that express the desired behaviour in a textual format.
You can run JBehave stories by configuring them in Maven (pom.xml); a sketch follows the folder layout below.
You can make a folder for storing your story files in your package structure, like below:
Your_Project
|-- Source_Code
|-- Stories
|-- Testing
`-- pom.xml
By configuring your stories in Maven, every time you build the project it will report which stories/scenarios succeeded and which failed.
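A sketch of that Maven wiring using the jbehave-maven-plugin (the *Stories.java embedder-class pattern and the phase are assumptions; adjust them to your conventions):

    <plugin>
      <groupId>org.jbehave</groupId>
      <artifactId>jbehave-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>run-stories</id>
          <phase>integration-test</phase>
          <goals>
            <goal>run-stories-as-embeddables</goal>
          </goals>
          <configuration>
            <scope>test</scope>
            <includes>
              <!-- embedder classes that map to the .story files -->
              <include>**/*Stories.java</include>
            </includes>
          </configuration>
        </execution>
      </executions>
    </plugin>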
QA will update the scenarios in the Stories folder, and the developer will implement the scenarios step by step, skipping steps that already exist (i.e. that were already developed for other scenarios).
QA simply runs the scenario/story and finds the result in a textual (understandable) format.
(Image: Behaviour-Driven Development in test levels.)
Some JBehave features that help with organising this:
Annotation-based configuration and Steps class specifications
Dependency Injection support allowing both configuration and Steps instances composed via your favourite container (Guice, PicoContainer, Spring).
Extensible story reporting: outputs stories executed in different human-readable file-based formats (HTML, TXT, XML). Fully style-able view.
Auto-generation of pending steps, so the build is not broken by a missing step, with the option to configure the build to break on pending steps.
Localisation of user stories, allowing them to be written in any language.
IDE integration: stories can be run as JUnit tests or via other annotation-based unit test frameworks, providing easy integration with your favourite IDE (see the minimal JUnit-runnable sketch after this list).
Ant integration: allows stories to be run via Ant task
Maven integration: allows stories to be run via Maven plugin at given build phase
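To make the JUnit/IDE point concrete, here is a minimal sketch; the class and step names are made up, and it relies on JBehave's default name resolver mapping a class named LoginStory to a login_story.story file on the classpath:

    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;
    import org.jbehave.core.configuration.Configuration;
    import org.jbehave.core.configuration.MostUsefulConfiguration;
    import org.jbehave.core.junit.JUnitStory;
    import org.jbehave.core.steps.InjectableStepsFactory;
    import org.jbehave.core.steps.InstanceStepsFactory;

    // Runs login_story.story as a plain JUnit test from the IDE or the build
    public class LoginStory extends JUnitStory {

        @Override
        public Configuration configuration() {
            return new MostUsefulConfiguration();
        }

        @Override
        public InjectableStepsFactory stepsFactory() {
            return new InstanceStepsFactory(configuration(), new LoginSteps());
        }
    }

    // In a real project this steps class would live in its own file
    class LoginSteps {

        @Given("a registered user")
        public void givenARegisteredUser() {
            // set up the test user
        }

        @When("the user logs in")
        public void whenTheUserLogsIn() {
            // drive the login through the UI or service layer
        }

        @Then("the dashboard is shown")
        public void thenTheDashboardIsShown() {
            // assert on the expected outcome
        }
    }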
If you are lucky enough to have the product owner or QA writing stories then you probably want them in a specific area of your source code repository so you can control access independently from your main source (and also give you more flexibility with when CI builds are triggered if you're doing that).
You'll likely find a lot of back-and-forth to minimise the number of new steps the devs have to write (i.e. stopping people from using ten different ways to write the same step), so you will also need to run with pending steps not failing the scenario (which is the default out of the box).
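Conversely, if you later decide pending steps should break the build, it's one configuration switch; a sketch, assuming a JUnitStory-style embedder as in the earlier answer:

    import org.jbehave.core.configuration.Configuration;
    import org.jbehave.core.configuration.MostUsefulConfiguration;
    import org.jbehave.core.failures.FailingUponPendingStep;

    // inside your embedder class:
    @Override
    public Configuration configuration() {
        // pending steps are marked pending (not failed) by default;
        // this strategy makes them fail the scenario instead
        return new MostUsefulConfiguration()
                .usePendingStepStrategy(new FailingUponPendingStep());
    }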
An alternative approach is that QA/product owner send scenarios to the devs who then cleanse them before adding to source control, but this puts effort back on the devs.
I'm working on a couple of web services that use JAXB bindings for the messages (in JAX-WS or spring-ws). When using these bindings there's always some code that is automatically generated from the WSDL to bind the message objects. I'm struggling to figure out the best way to make this work so that it's easy to work with, hard to break, and integrates nicely with IDEs (mostly Eclipse).
I think there are a couple of ways to go about this. The three main options I see right now are:
1. Generate code, keep the source artifacts and check them into the repository. Pros: integrates easily with IDEs (source highlighting, etc.), works within the build system. Cons: generated code changes each time you regenerate it, possibly creating noisy commits. It's also redundant, since the WSDL file is usually already checked in.
2. Generate code as part of the build process. Don't keep source artifacts, or only keep them in output directories. Pros: fixes all the cons of the previous one. Cons: harder to integrate with the IDE, though maybe this build step can be run automatically? I currently use this on one of my projects, but the first time I check out the project it appears broken, which is a minor nuisance.
3. Keep generated bindings in separate libraries (jars) included with Maven, or manually updated jars, depending on your build process. I got the idea from a thread on java.net. This seems more stable and uses explicit versioning, but feels a bit heavyweight.
Which one of these options would you implement and how? We're currently using maven and eclipse, so any ideas in that regard would be great. I think this problem generalises to most other build systems and IDE combinations though, even other languages perhaps.
I went for option 3. If you already host your own repository (and optionally CI), it's not that heavyweight. All it takes is a simple POM. It's even possible to include some utility/wrapper/builder classes (that often make life easier with generated classes) and use them in several projects.
I'd go for option 2 and generate code in the "standard" ${project.build.directory}/generated-sources/<toolname> location as part of the build process. Using generated sources is well supported by m2eclipse (use Maven > Update Project Configuration once sources have been generated) and, if I remember correctly, by the maven-eclipse-plugin as well (i.e. the folder will be added to the Java Build Path). Actually, I think NetBeans also handles this fine. Not sure about IDEA.
For the generation itself, you may need the maven-jaxb2-plugin if I understood correctly.
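If it helps, a sketch of that plugin's wiring for WSDL input (the schema directory is an assumption; generated sources land under target/generated-sources/xjc by default, which matches the layout above):

    <plugin>
      <groupId>org.jvnet.jaxb2.maven2</groupId>
      <artifactId>maven-jaxb2-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>generate</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <!-- compile the WSDL's embedded schema instead of a plain .xsd -->
        <schemaLanguage>WSDL</schemaLanguage>
        <schemaDirectory>src/main/resources/wsdl</schemaDirectory>
        <schemaIncludes>
          <include>*.wsdl</include>
        </schemaIncludes>
      </configuration>
    </plugin>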