How to test a compiled JAR file using Gradle

I had code that worked correctly when executed during standard unit testing, but failed once it was compiled into a jar and added as a dependency to another project.
Finding the root cause and fixing it wasn't hard, but it made me wonder how I can test a freshly built jar artifact before deploying it anywhere, to make sure it will work for end users and other projects. I googled the topic for several hours but didn't find anything even close to it.
Maybe I'm totally wrong and trying to achieve something weird, but I cannot figure out any other way to verify compiled packages and be confident that they will work for others.
Some details about the project: it's a simple Java library with a few classes, built with Gradle 5.5 and using Travis CI as the CI/CD tool. For testing I use TestNG, but I can easily switch to JUnit if required.
If you're curious about the code that stopped working when compiled into the package, here is a simplified version:
public String readResourceByURI() throws IOException, URISyntaxException
{
    // Fails inside a jar: the resource URL resolves to a jar: URI,
    // which the default file-system provider cannot open as a Path
    return new String(Files.readAllBytes(Paths.get(
            ClassLoader.getSystemClassLoader().getResource("resource.txt").toURI())));
}
This function throws java.nio.file.FileSystemNotFoundException when packaged into a jar file. But as I said, the problem is not the code itself...
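(For reference, a jar-safe variant reads the resource as a stream instead of resolving it to a file-system path. A minimal sketch, assuming Java 9+ for InputStream.readAllBytes:
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public String readResourceByStream() throws IOException
{
    // getResourceAsStream works both from the file system and from inside a jar
    try (InputStream in = ClassLoader.getSystemClassLoader().getResourceAsStream("resource.txt")) {
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }
}
)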
Ideally, I want a build pipeline that produces jar artifacts, which are then tested, and if the tests pass the jars are automatically deployed to a repository (Maven and/or Bintray).
At the moment all tests are executed before the jar is created, so there is a chance that the compiled code inside the jar package will not work due to packaging.
So, to simplify my question: I'm looking for a Gradle configuration that can execute unit tests against a freshly built jar file.

That's what I came up with:
test {
    // Add a dependency on the jar task, since the jar is the target under test
    dependsOn jar
    // Rearrange the test classpath: use the compiled JAR instead of the main classes
    classpath = project.sourceSets.test.output + configurations.testRuntimeClasspath + files(jar.archiveFile)
    useTestNG()
}
Here I'm changing the default classpath of the test task by combining the folder with compiled test classes, the runtime dependencies, and the compiled JAR file. I'm not sure it's the correct way to do it...

I don't think there is a good way to detect this kind of problem in a unit test. It is the kind of problem that is normally found in an integration test.
If your artifact / deliverable is a library, integration tests don't normally make a lot of sense. However, you could spend some time creating a sample or test application that uses your library, and then write an integration test for that.
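For instance, a tiny application that exercises the packaged code path can serve as a smoke test. A sketch, where ResourceReader stands in for whatever library class is under test:
import java.io.IOException;
import java.net.URISyntaxException;

public class LibrarySmokeTest
{
    public static void main(String[] args) throws IOException, URISyntaxException
    {
        // Calls into the packaged library; fails loudly if resource loading breaks inside the jar
        String content = new ResourceReader().readResourceByURI();
        if (content == null || content.isEmpty()) {
            System.err.println("Resource could not be read from the jar");
            System.exit(1);
        }
        System.out.println("OK");
    }
}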
You would need to ask yourself whether there are enough potential errors of this kind to warrant doing that:
I don't imagine that you will make this particular mistake again soon.
Other problems of this nature might include assumptions in your library about the OS platform or (occasionally) Java versions ... which can only really be tested by running an application on the different platforms.
Maybe the pragmatic answer is to recognize that you cannot (or cannot afford to) test everything.
Having said that, one possible approach might be to choose a free-standing test runner (packaged as a non-GUI Java application). Then get Gradle to run the test runner as a scripted task with the JARs for your library and the unit tests on the classpath.
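For example, a minimal free-standing runner using TestNG's programmatic API might look like the following (ReadResourceTest is a hypothetical test class); Gradle would then execute it with the built jar on the classpath instead of the main classes:
import org.testng.TestNG;

public class JarTestRunner
{
    public static void main(String[] args)
    {
        // Runs the existing test classes against whatever is on the classpath,
        // which here should be the built jar rather than the main classes directory
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class<?>[] { ReadResourceTest.class });
        testng.run();
        System.exit(testng.hasFailure() ? 1 : 0);
    }
}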

In Gradle you can try to execute a scripted task that runs code from your jar. It's a bit convoluted, but it works on a simple POC.
In the main Gradle project, create a subproject 'child'.
Add information about it in settings.gradle:
include 'child'
In build.gradle add this:
task externalTest(type: Copy) {
    // Copy the parent's tests into the child project so they run against the built jar
    from 'src/test'
    into 'child/src/test'
}
// Copy only after the jar exists, then trigger the child build (which compiles and runs the tests)
externalTest.dependsOn jar
externalTest.finalizedBy ':child:build'
// Hook the external test into the normal build
build.dependsOn externalTest
And in child/build.gradle, in the dependencies block, add the parent jar:
compile files('../build/libs/parent.jar')
Now when the main project is built, the child project will be built (and its tests executed) after the jar is created.

Related

Test dependencies for white-box unit testing Java modules with Maven and Eclipse

I'm trying to convert existing Java projects with Maven and Eclipse into Java 9+ modules. The projects have unit tests and the unit tests have test dependencies. I need the test dependencies to be available in the test code, but I don't want them exposed to the rest of the world in the published modules.
I think Testing in the Modular World describes the Maven solutions well. In summary, one solution is to create one module-info.java in the main source folder and another in the test folder. The file in the main folder declares the real dependencies; the file in the test folder adds the test dependencies.
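A sketch of that pattern, with a hypothetical module name and test dependency:
// src/main/java/module-info.java -- the real dependencies only
module com.example.mylib {
    exports com.example.mylib;
}

// src/test/java/module-info.java -- same module name, opened for the test
// framework's reflection, with the test dependencies added
open module com.example.mylib {
    exports com.example.mylib;
    requires org.junit.jupiter.api;
}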
The solution works well in Maven and I can build and run tests from the command line. However, when I import the project into Eclipse as a Maven project it balks. Eclipse complains that "build path contains duplicate entry module-info" and refuses to build the project at all.
Using the other suggested solution in the article with a module-info.test containing --add-reads has no effect and the build fails in both Maven and Eclipse as the tests can't find their dependencies.
To make matters more complex, I need to import the test dependencies from Maven, but I also need to import standard Java modules that are not used by the main code. For example, one unit test relies on the built-in web server provided by jdk.httpserver, and as it is part of the JDK, any magic done on the test dependencies will miss it.
Is there a solution for this that works in Maven and Eclipse (latest versions)? It sounds like a very common problem and the module system has been around for a while by now.
Note that I really don't want to change the project settings in Eclipse. I can fiddle with plugins in the pom files, but adding a manual routine where all developers need to edit the generated/imported project settings manually is not an option.
EDIT:
There is an open Eclipse bug report for this, see Eclipse bug 536847. It seems it is not supported yet, but perhaps someone can suggest a workaround?
The Eclipse emulation of Maven's multiple-classpaths-per-project feature has been broken for a very long time. The symptom is that you can have non-test classes using test dependencies just fine.
Essentially Eclipse just considers each project to have a single classpath instead of two parallel ones which causes things like this to ... not do the right thing.
I would suggest splitting each of the problematic projects into two. One with the actual sources and one with the test sources (depending on the actual source). This will avoid the Eclipse bug and also allow you to use the newest version of Java for your tests while having your application built for an older version of Java.

Eclipse + m2e + junit5 - already possible?

Tried to get Eclipse 2018-09 + Patch with Java 11 support, m2e, and junit5 working together.
As recommended in the junit5-modular-world example, I introduced a second module-info.java under test/java.
Eclipse's reaction astonished me:
I could not save that file after changing it.
It was only saved when I closed Eclipse entirely.
On re-opening, Eclipse was bewildered: it could not show any details of the project hosting multiple module-info.java files, just the project name.
Probably Eclipse identifies one project with one Java module, while mvn test obviously compiles and executes a different module than the one created by mvn install.
I tried every option I could think of. For now I have had to give up and fall back to JUnit 4.12.
Do you know of a better solution?
A secondary module-info.java in the test source folder is not supported by Eclipse at this time (but its behaviour if you try to do that should probably be improved).
For now, you probably won't need it at all:
Maven puts dependencies that are mentioned in the module-info.java on the module path; all others (e.g. test-only dependencies like junit) go on the class path, so they become part of the unnamed module. When the tests are compiled, command-line options are added so that the test code, which is treated as part of the module defined in the main source folder, can still read the unnamed module (via --add-reads modulename=ALL-UNNAMED); this makes junit visible to the test code.
Eclipse Photon and later also supports this behaviour.
Some background regarding the secondary test module-info.java: maven-compiler-plugin supports this since version 3.8 (see https://www.mail-archive.com/announce@maven.apache.org/msg00866.html, implemented in issue https://issues.apache.org/jira/browse/MCOMPILER-341), but I'm not aware that a matching maven-surefire-plugin has been released, so I think you currently wouldn't be able to run these kinds of tests with Maven.
Implementing support for a secondary test module-info.java in Eclipse may be possible, as long as it is a strict superset of the primary module-info.java in the main source folder, or maybe as long as they specify the same module and their contents get merged as in the "pro" build tool https://github.com/forax/pro. But nobody has worked on that yet.
What will probably be never supported in Eclipse, is to have a secondary test module-info.java that specifies a different module as Eclipse has the assumption that one java project belongs to only one module. But that shouldn't matter, as these tests can only use public and exported code of the main sources, so they can simply be put into their own maven module.
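In that setup, the test module's descriptor only needs to require the module under test plus the test framework; a hypothetical sketch:
module com.example.mylib.test {
    requires com.example.mylib;      // the module under test (public, exported API only)
    requires org.junit.jupiter.api;  // test-only dependency
}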

How to manage dependencies for project support tooling like code generators?

Never found a really satisfactory solution to this. How do you do it? I am looking for inspiration for new approaches.
For context, assume I write a generator that takes a project resource and generates a code file. But it could be any other project support tool: validator, converter, deployer, etc. These are often manually triggered actions that do not run as part of the normal build.
Such tools typically require a few dependencies that are not required by the project itself at runtime.
Strategies that I have applied or considered in the past:
add the tool dependency to the project anyway, and either mark it "provided" or filter it out during the packaging process (this is what I usually do, but it puts me in danger of writing normal project code that uses the tool dependency, potentially resulting in an error that only manifests at runtime)
use a script (trying hard to avoid scripts and their hidden dependencies and complexities)
create separate support projects (trying hard to avoid project explosion, especially for seemingly small tasks that are handled by a few lines of code)
subprojects / modules (only vaguely aware of this option, never really tried it)
maven plugin that is run with a profile with separate dependencies (trying to avoid the separate project required to maintain the custom plugin)
Inspiration from answers and comments
separate tools project shared by multiple projects
I just realized that maven and eclipse already solved exactly this problem for a very specific "tool": test code.
Test code often needs additional dependencies not used by the application itself.
People obviously invested quite a bit to keep the "test / tool" infrastructure within the same project, as opposed to creating a separate test-project:
separate source locations (src/main/java, src/test/java)
separate resource locations (src/main/resources, src/test/resources)
a full-blown separate maven dependency scope "test", complete with transitive resolution
separate compilation phases (compile / test) with separate dependency trees
eclipse supports special junit launch configurations that are able to correctly resolve the test dependencies
probably more stuff that I am not aware of currently
So, I am strongly considering programming all my supporting tools as "junit test cases".
I am planning to create and commit shared junit launch configs for the team that execute just one specific "test case", which will run the tool logic instead of testing.
The problem I have to solve is to avoid running these dummy tests during the normal maven test phase.
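One way to do that is to guard each tool-test with an assumption that only holds when a system property is set; a sketch using JUnit 4, where the class and property names are made up:
import org.junit.Assume;
import org.junit.Test;

public class CodeGeneratorTool
{
    @Test
    public void runGenerator()
    {
        // Skipped (reported as ignored) during the normal test phase;
        // run explicitly with -Dtools.run=true
        Assume.assumeTrue(Boolean.getBoolean("tools.run"));
        // ... tool logic goes here instead of assertions
    }
}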
Also, writing this, I realize that there is even another such system already in place: the maven plugin infrastructure, that also has a separate dependency resolution mechanism. Although, so far it seems necessary or normal to create separate projects to create plugins. I will look into ways of writing and building project specific maven plugins without needing to create separate projects. I am thinking about generating the pom.xml needed for plugin compilation on the fly, and including all the test dependencies.

Gradle: generate separate eclipse projects for unit tests

I would love to generate a separate Eclipse project from Gradle modules with unit tests: one project for the main module, and another one for the unit tests. This would help prevent the 'cross classpath' issues that happen in Eclipse because the test classpath becomes part of the main project classpath (modules with unit tests add extra entries to the classpath).
I wonder if anyone has tried this?
Thanks
I managed this in a recent PR, but the solution is messy and unpleasant: copy-pasting EclipsePlugin to create three new tasks to set up the second Eclipse project. I suspect the result is fragile against changes to Gradle, as well as being rather opaque for future maintainers.

Ultimate subproject sbt setup

I'm working on some code with several sub-projects many of which depend on each other. For example, there's a utils project with various utilities, a queueing project with some code for managing queues of things, and then a mainApp project. The queueing project depends on utils, and mainApp depends on both utils and queueing. There are many more projects in addition, but hopefully you get the general idea.
We used to have a standard sbt submodule setup for this with one root build file, lots of sub-projects, and the standard aggregate and dependsOn stuff. This worked, but was problematic:
It could be very slow to build and run tests for the project. Building everything takes about 15 minutes and unit testing everything takes even longer.
Due to the way aggregate works, unit testing mainApp resulted in runs of all tests in utils and queueing even if they hadn't changed. True, you can do "test-quick", but that stops working when you change git branches and such, so you still end up running the whole test suite often.
While sbt does let you build just a subproject, you have to be in the root directory and remember to qualify the build. For example, you have to remember to run sbt utils/compile rather than sbt compile.
So we switched to completely separate project and separate git repos. Each project builds and then deploys an artifact to Nexus. Thus, to build and test the mainApp you build just it and it pulls already compiled .jar files for the other projects from the nexus server. This makes it easier to work in just one project, but it becomes harder to make a change to, for example, utils and then use it in mainApp. Specifically, you often want to do something like add a method to utils and then immediately test that method in mainApp to see if it worked before pushing a new version of utils. But now, to do that, you have to:
Add your code to utils
Bump the version number in utils
Run "sbt publish-loca"
Edit the build.sbt in mainApp and add -SNAPSHOT to the utils dependency
Run sbt update
Build and test mainApp
If that all works, you then have to push utils and mainApp in the right order to the continuous build server and be sure to wait for the utils build to complete before you push mainApp.
Even worse, maintaining the versioning becomes really complicated. Suppose we start with all projects at version 1.0 and depending on version 1.0 of the other projects. Now suppose we find a bug in utils that affects mainApp. So we fix utils, change its version number to 1.1, update mainApp to depend on 1.1 and re-build. Suppose the code that fixed the bug for mainApp introduces a bug in queueing. The problem is that if you go to queueing and run its tests, they'll run against version 1.0 of utils (even if you use ranges, as sbt only re-checks those periodically). However, when you build mainApp there will be a dependency conflict on utils which sbt will resolve. No matter how it resolves it, it will be "wrong" for at least one project. We can set the resolver to "strict", but then managing versions becomes very labor-intensive across the 10+ projects.
What I'm really looking for is a build setup that is the best of both worlds. Specifically:
I have a root dir with sub-dirs for each project
If pwd is one of those sub-dirs, all sbt commands refer to just that project. So, if pwd is queueing, then sbt test compiles and tests only queueing.
If there's a version of an artifact in Nexus, it is pulled from there, but if there's more recent code on the disk, the dependency is built locally and that is used instead.
Or something like that. Does anyone have any advice on how to set this up with sbt?
