I have a project that uses a Java test framework (Cucumber) to run tests against a Python job (it does some SQL querying).
The project is built with Gradle and spins up a Docker instance for the Python environment to run the tests against.
However, debugging is proving difficult: when I make changes to the Python code, they are not picked up when rerunning the tests, resulting in the same result (failure) as the previous run.
I noticed the build files are not being updated, but even when I have done this manually and re-run the tests, I get the same result.
I have tried 'Invalidate Caches/Restart' but had no joy.
I have tried reimporting the project but no joy.
I then tried switching back to the master branch and ran the 'working' tests, but got a failure that could only have come from the code in the feature branch.
My knowledge is limited, but a logical guess is that the code is being packaged up somewhere and not refreshed (i.e. cached) on each test run.
I have also tried deleting IntelliJ's run configuration for the tests.
So I am now a little lost as to where it could be caching this, so that I can clear it and hopefully it will pick up the new changes.
Thanks
Did you look at the Gradle build output? Try running the Gradle build with --info; that will show you whether the Gradle build identifies the changes in the Python files or not.
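For example (assuming the Gradle wrapper and the standard test task):

./gradlew test --info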
If not, add the Python sources to the jar task's inputs:
task createJar(type: Jar) {
    // Declare the Python folder as a task input so changes to it mark the task out of date
    inputs.files(myPythonfolder)
    ..
}
It can also be achieved by adding a new task that takes the Python sources as input files, and then adding a dependency between the jar task and that Python task:
jar.dependsOn myPythonTask
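A minimal sketch of such a task, assuming a Copy task and a hypothetical src/main/python location (adjust both to your layout):

task myPythonTask(type: Copy) {
    // Copying gives the task declared inputs and outputs, so Gradle re-runs it
    // (and everything that depends on it) whenever the Python sources change
    from 'src/main/python'   // hypothetical location of the Python sources
    into "$buildDir/python"
}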
For more accurate details, add your project's build.gradle file here.
The project I am working on is Java and Gradle based, so I want to create a plugin/script that will run a certain Gradle task whenever I make changes to one of the files in the project.
For example:
There is an A.xml file and a Gradle task B. Whenever I change A.xml and save it, I want the Gradle task B to run.
I am using IntelliJ IDEA, and my initial thoughts are that it could be solved through plugins/scripts.
Can you suggest where to start? Should it be done through a plugin? Maybe there are automation settings in IntelliJ with file watchers or something. Thanks.
I tried searching for similar plugins and didn't find any. I read the documentation on plugin creation, and I think I could get the result that way, but isn't that overkill?
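As an aside, Gradle's continuous build mode may already cover this without a custom plugin, assuming B is an ordinary Gradle task with declared inputs; Gradle then watches those inputs and re-runs the task whenever they change:

gradle --continuous B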
I had code that worked correctly when executed during standard unit testing, but didn't work when it was compiled into a jar and added as a dependency to another project.
It wasn't hard to find the root cause and fix it, but it got me thinking: how can I test a freshly built jar artifact before deploying it anywhere, to make sure it will work for end users and other projects? I have googled this topic for several hours but didn't find anything close.
Maybe I'm totally wrong and trying to achieve something weird, but I cannot figure out any other way to verify compiled packages and be confident that they will work for others.
Some details about the project: a simple Java library with a few classes, using Gradle 5.5 as the build system and Travis CI as the CI/CD tool. For testing I'm using TestNG, but I can easily switch to JUnit if required.
If you're curious about the code that was not working when compiled into the package, here is a simplified version:
public String readResourceByURI() throws IOException, URISyntaxException
{
    // Resolving a classpath resource to a Path fails once the resource sits inside a jar
    return new String(Files.readAllBytes(Paths.get(ClassLoader.getSystemClassLoader().getResource("resource.txt").toURI())));
}
This function throws java.nio.file.FileSystemNotFoundException when packaged into a jar file. But as I said, the problem is not with the code...
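As an aside, a jar-safe variant reads the resource as a stream instead of resolving it to a file path; a minimal sketch, assuming Java 9+ for InputStream.readAllBytes (needs java.io.InputStream and java.nio.charset.StandardCharsets imports):

public String readResourceByStream() throws IOException
{
    // A stream works both from the plain filesystem and from inside a jar,
    // unlike Paths.get(...), which requires a mounted file system
    try (InputStream in = ClassLoader.getSystemClassLoader().getResourceAsStream("resource.txt")) {
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }
}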
Ideally, I want to create a build pipeline that produces jar artifacts, which are then tested, and if the tests are successful the jars are automatically deployed to a repository (Maven and/or Bintray).
At the moment all tests are executed before jar creation, and as a result there is a chance that the compiled code inside the jar package will not work due to packaging.
So, to simplify my question: I'm looking for a Gradle configuration that can execute unit tests against a freshly built jar file.
That's what I came up with:
test {
    // Add a dependency on the jar task, since it will be the main target for testing
    dependsOn jar
    // Rearrange the test classpath: add the compiled JAR instead of the main classes
    classpath = project.sourceSets.test.output + configurations.testRuntimeClasspath + files(jar.archiveFile)
    useTestNG()
}
Here I'm changing the default classpath for the test task by combining the folder with the test classes, the runtime dependencies, and the compiled JAR file. Not sure if it's the correct way to do it...
I don't think there is a good way to detect this kind of problem in a unit test. It is the kind of problem that is normally found in an integration test.
If your artifact / deliverable is a library, integration tests don't normally make a lot of sense. However, you could spend some time to create a sample or test application that uses your library, which you can then write an integration test for.
You would need to ask yourself whether there are enough potential errors of this kind to warrant doing that:
I don't imagine that you will make this particular mistake again soon.
Other problems of this nature might include assumptions in your library about the OS platform or (occasionally) Java versions ... which can only really be tested by running an application on the different platforms.
Maybe the pragmatic answer is to recognize that you cannot (or cannot afford to) test everything.
Having said that, one possible approach might be to choose a free-standing test runner (packaged as a non-GUI Java application). Then get Gradle to run the test runner as a scripted task with the JARs for your library and the unit tests on the classpath.
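For illustration, a sketch of such a scripted task in Gradle; org.testng.TestNG is TestNG's real command-line entry point, while the task name and the testng.xml suite file are assumptions:

task testAgainstJar(type: JavaExec) {
    dependsOn jar
    // Put the built jar on the classpath instead of the compiled main classes
    classpath = files(jar.archiveFile) + sourceSets.test.output + configurations.testRuntimeClasspath
    main = 'org.testng.TestNG'   // TestNG's console runner
    args 'testng.xml'            // hypothetical suite file
}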
In Gradle you can try to execute a scripted task that runs code from your jar. It is a bit convoluted, but it works on a simple POC.
In the main Gradle project, create a subproject 'child'.
Add information about it in settings.gradle:
include 'child'
In build.gradle add this:
task externalTest {
    // Note: this copy runs at configuration time, on every Gradle invocation,
    // so the test sources are in place before the child project is built
    copy {
        from 'src/test'
        into './child/src/test'
    }
}
externalTest.dependsOn(':child:build')
jar.finalizedBy externalTest   // run the external tests (and the child build) after the jar is created
And in child/build.gradle add the parent jar to the dependencies:
dependencies {
    compile files('../build/libs/parent.jar')
}
Now when the main project is built, the child project will be built after the jar is created.
I'm trying to run a complete Maven build of multiple projects for an automated build tool. If unit tests fail, but the project itself builds correctly, I want to be able to continue the build and detect this after the build completes. I tried doing this:
mvn clean package -Dmaven.test.failure.ignore=true -Dmaven.test.error.ignore=true -Dmaven.test.reportsDirectory=/Users/bfraser/misc/reports
The "maven.test.failure.ignore" and "maven.test.error.ignore" properties work fine. However, surefire seems to ignore the "maven.test.reportsDirectory" completely (in fact, if you look at the documentation for the test goal, the reportsDirectory property is not documented to be tied to the system variable). This may be because I'm building a multi-module project? All reports seem to go in the target/ folder of the subprojects.
It is very difficult for me to edit the POMs in an automated way, since many of them have parent POMs that might be on a Nexus repo somewhere, etc. I need to be able to do this externally to the project (preferably via command-line switches, but if I need to create some files, so be it... as long as I don't have to edit the project POM it's all good).
I just need to know if any test failed. I'm not particularly fussy about what/how many tests failed.
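Since only a yes/no signal is needed, one external option (a sketch assuming the default target/surefire-reports layout, given that the reports directory cannot be moved) is to scan the report XML after the build; any output means at least one test failed or errored:

find . -path '*/target/surefire-reports/*.xml' -exec grep -l -e '<failure' -e '<error' {} +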
I am currently thinking about potential solutions for building and running a Jenkins Maven project. I am a Jenkins noob, and what I currently have in mind is writing a Maven plugin that runs the project right after the build and test phases. This feels wrong...
So my basic question is: is it possible in Jenkins to configure a process that builds a Maven project and executes it right away, while taking care not to interfere with it by starting another process and rebuilding as soon as a change arrives?
If this is possible, it would simplify the task by removing the need to write a Maven plugin.
What do you mean by 'execute'? Is your program a jar? If you have to deploy it, there are tools in Jenkins to deploy during the build phase. If not, I think you can always add a 'post-build' command like java -jar nameofprogram.jar
In Jenkins you can configure jobs to execute multiple Maven targets when they run. I don't know if this answers your question, but you should be able to accomplish what you want by using 'post-build steps' and triggering certain behaviour from there.
We want to accelerate our build pipeline for a multi-module Java web application, which roughly consists of
compile/code analysis
unit tests
integration tests
GUI tests
At the moment each of these build steps starts from scratch, compiling and building again and again, which costs time and means that the artifacts we deploy to production are not the very files that went through the tests. Is it possible to get Maven to not recompile everything on subsequent steps, but instead run the tests against the previously compiled classes?
We are using Maven 3 to manage our dependencies and TeamCity as a build server (version 7 at the moment, planning to upgrade to 8 soon). I have tried to set up a build chain, doing a
mvn clean install
on the first step and then exporting all the */target/ folders to the following builds. Then ideally I would only run
mvn test
mvn integration-test
Unfortunately I have not been able to persuade Maven to do this properly. Either it compiles the classes again or produces errors.
Has anyone successfully done this kind of setup and has any pointers for me? Is this possible with Maven and is this even the right way to do things?
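One pointer that may help, as an aside rather than a verified setup: invoking the plugin goals directly, instead of the lifecycle phases, skips the earlier lifecycle steps such as compilation, e.g.:

mvn surefire:test
mvn failsafe:integration-test failsafe:verify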