My product uses a third-party dependency called matlab-control, version 4.1.0. This jar file allows Java to send commands to MATLAB.
The way we use it: MATLAB calls a Java command (triggered by the user), which performs a bunch of computation in the Java plugin; Java then sends a notification to MATLAB, which starts executing M-code with the data received from Java.
So far, the jar file has been stored in our repository and loaded directly in MATLAB.
We want to move to matlab-control v5.0.0, which is available from Maven, so we thought it was a good opportunity to get rid of the checked-in jar file and include it as a Maven dependency in our Gradle build.
However, the API does not seem to be visible to MATLAB, which throws an exception: 'can not find class org.n52.matlabcontrol.MatlabProxy'.
If I test the connection directly from Java (via a Java test), Java will itself launch MATLAB and request that a command be executed (for instance, a hello-world command). That works fine, and I can then instantiate a MatlabProxy. But if I launch MATLAB and try to access the MatlabProxy directly from it, the class cannot be found.
So it seems the matlab-control API is not exposed by my Gradle build. So far I have tried the following:
apply plugin: 'java-library'
dependencies {
    implementation('org.n52.matlab:matlab-control:5.0.0')
    api('org.n52.matlab:matlab-control:5.0.0')
}
But it is not working, so what am I missing?
Thanks to JB Nizet, who pointed me in the right direction, I found a solution.
I just had to create a configuration first, then assign dependencies to this configuration in order to copy them:
configurations {
    deployerJars
}

dependencies {
    deployerJars group: 'org.n52.matlab', name: 'matlab-control', version: '5.0.0'
}

task copyToLib(type: Copy) {
    into "$buildDir/libs"
    from configurations.deployerJars
}
I know there are a lot of questions that seem similar. I have also spent a few hours getting to grips with Gradle multiprojects. But I still don't understand what the best course of action is here. Incidentally I am using Groovy as my coding language, but explanations referencing Java would be just as good.
I have developed an Eclipse Gradle project, "ProjectA", which in particular has a class, IndexManager, which is responsible for creating and opening and querying Lucene indices.
Now I am developing a new Eclipse Gradle project, "ProjectB", which would like to use the IndexManager class from ProjectA.
That doesn't mean I want both projects to be part of a multi-project build. I don't want to compile the latest version of ProjectA each time I compile ProjectB; instead I would like ProjectB to depend on a specific version of ProjectA's IndexManager, with the option of upgrading to a new version at some future point, i.e. much like the sorts of dependencies you get from Maven or JCenter...
Both projects use the application plugin, so ProjectA produces an executable .jar file whose name incorporates the version. But currently this contains only the .class files, the resource files, and a MANIFEST.MF file containing the line "Manifest-Version: 1.0". Obviously it doesn't contain any of the dependencies (e.g. the Lucene jar files) needed by the .class files.
The application plugin also lets you produce a runnable distribution: this consists of an executable file (two in fact, one for *nix/Cygwin, one for Windows), as well as all the .jar dependencies needed to run it.
Could someone explain how I might package up this class, IndexManager (or alternatively all the classes in ProjectA), include it in the dependencies clause of ProjectB's build.gradle, and then use it in a given file (Groovy or Java) of ProjectB?
Or point to some tutorial about the best course of action?
One possible answer, which I seem to have found but find a bit unsatisfactory, is to take the class that is to be used by multiple projects (here IndexManager) and put it in a Gradle project specifically designed to be a Groovy library. To that end, you can kick it off by creating the project directory and then running:
$ gradle init --type groovy-library
... possible to do from the Cygwin prompt, but not from within Eclipse as far as I know. So you then have to import it into Eclipse. build.gradle in this library project then has to include the dependencies needed by IndexManager, in this case:
compile 'org.apache.lucene:lucene-analyzers-common:6.+'
compile 'org.apache.lucene:lucene-queryparser:6.+'
compile 'org.apache.lucene:lucene-highlighter:6.+'
compile 'commons-io:commons-io:2.6'
compile 'org.apache.poi:poi-ooxml:4.0.0'
compile 'ch.qos.logback:logback-classic:1.2.1'
After this, I ran gradle jar to create the .jar which contains this IndexManager class, initially without any fancy stuff in the manifest (e.g. name, version). And I put this .jar file in a dedicated local directory.
Then I created another Gradle project to use this .jar file, the critical dependency here being
compile files('D:/My Documents/software projects/misc/localJars/XGradleLibExp.jar' )
The file that uses this class looks like this:
package core
import XGradleLibExp.IndexManager
class Test {
    public static void main( args ) {
        println "hello xxx"
        Printer printer = new Printer()
        IndexManager im = new IndexManager( printer )
        def result = im.makeIndexFromDbaseTable()
        println "call result $result"
    }
}

class Printer {
    def outPS = new PrintStream(System.out, true, 'UTF-8' )
}
... I had designed IndexManager to use an auxiliary class, which had a property outPS. Groovy duck-typing means you just have to supply anything with such a property and hopefully things work.
The above arrangement didn't run: although you can do build and installdist without errors, the attempt to execute the distributed executable fails because the above 6 compile dependency lines are not present in build.gradle of the "consumer" project. When you put them in this "consumer" Gradle project's build.gradle, it works.
No doubt you can add the version to the generated .jar file, and thus keep older versions for use with "consumer" projects. What I don't understand is how to harness the mechanism that makes downloading and using the dependencies needed by the .jar as automatic as we are used to for things obtained from "real repositories".
PS in the course of my struggles today I seem to have found that Gradle's "maven-publish" plugin is not compatible with Gradle 5.+ (which I'm using). This may or may not be relevant: some people have talked of using a "local Maven repository". I have no idea whether this is the answer to my problem... Await input from an über-Gradle-geek... :)
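For what it's worth, the "local Maven repository" idea can be sketched roughly as follows (this is my illustration rather than a tested recipe; the group, artifact and version coordinates are placeholders). In the library project, the maven-publish plugin publishes the jar together with a POM listing its dependencies, so consumers pick them up transitively; running gradle publishToMavenLocal puts it in ~/.m2:

apply plugin: 'groovy'
apply plugin: 'maven-publish'

group = 'com.example'      // placeholder coordinates
version = '1.0.0'

publishing {
    publications {
        library(MavenPublication) {
            from components.java   // the jar plus its dependency metadata
        }
    }
}

The consumer project then resolves it like any other repository dependency:

repositories {
    mavenLocal()
}

dependencies {
    compile 'com.example:xgradlelibexp:1.0.0'   // placeholder coordinates
}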
You should be able to update the Eclipse model to reflect this project-to-project dependency. It looks something like this (in ProjectB's build.gradle):
apply plugin: 'eclipse'
eclipse {
    classpath.file.whenMerged {
        entries << new org.gradle.plugins.ide.eclipse.model.ProjectDependency('/ProjectA')
    }
    project.file.whenMerged {
        // add a project reference, which should show up in /ProjectB/.project's <projects> element
    }
}
These changes may only apply to the running (in-memory) data model, so they may not actually alter the .classpath and .project files. More info can be found here: https://docs.gradle.org/current/dsl/org.gradle.plugins.ide.eclipse.model.EclipseModel.html
This issue is discussed here: http://gradle.1045684.n5.nabble.com/Gradle-s-Eclipse-DSL-and-resolving-dependencies-to-workspace-projects-td4856525.html and a bug was opened but never resolved here: https://issues.gradle.org/browse/GRADLE-1014
I am trying to set up Gradle for a proper JNI compilation: I need to first build a shared library (with the C plugin), and then compile and test the Java code (which consumes the library).
Here is a sample of the build.gradle related to the native compilation:
model {
    components {
        yli(NativeLibrarySpec) {
            sources {
                c {
                    source {
                        srcDir 'src/main/c'
                        include "Yli.c"
                        commonFolders.each {
                            include "$it/**/*.c"
                        }
                    }
                }
            }
            buildTypes {
                release
            }
        }
    }
}
What is the best way to tell Gradle that compileJava should wait for the build of the NativeLibrarySpec?
Edit: When I try to add
compileJava.dependsOn(yliSharedLibrary)
I get the following error during gradle build:
* What went wrong:
A problem occurred evaluating root project 'yli'.
> Could not get unknown property 'sharedLibrary' for root project 'yli' of type org.gradle.api.Project.
Note: I used the command 'gradle tasks' in order to find the name of the task: 'yliSharedLibrary'.
I played around with this and discovered that you can access the tasks created by the software model within closures. For example, if you want to depend on one of the native tasks, you can do so with:
compileJava.dependsOn { yliNativeCompileTask }
Of course, if you want the Java task to come after the native one, but not force an actual dependency between them, you can use mustRunAfter():
compileJava.mustRunAfter { yliNativeCompileTask }
This syntax also works for declared inputs and outputs:
compileJava.inputs.files { yliNativeCompileTask }
Note that if you tie the inputs of a task to the outputs of another task, you don't have to explicitly declare a dependsOn. Gradle infers the task dependency.
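As a generic illustration of that last point (ordinary tasks, not the software model; the task names and file path are made up), wiring one task's inputs to another task's outputs is enough for Gradle to schedule them correctly:

task produceNative {
    def stub = file("$buildDir/native/libyli.so")
    outputs.file stub
    doLast {
        stub.parentFile.mkdirs()
        stub.text = 'placeholder'   // stands in for the real compile/link step
    }
}

task consumeNative {
    inputs.files produceNative.outputs.files   // Gradle now runs produceNative first
    doLast {
        println "native artifacts available: ${inputs.files.files}"
    }
}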
Disclaimer: I don't know if this is the correct way to do this, or how far you can take this approach.
One final thing: the old native software model is being replaced by a new set of native plugins based on Gradle's original model. It should be much easier to integrate Java projects with these new plugins, but you may want to wait until the plugins have been fully fleshed out before attempting a migration.
I use OperatingSystem.current() for my daily work with Gradle. Now I want to assemble my Java project for different platforms, so I manually change my build.gradle files to build for a specific OS.
My question: is there a way to specify the OS to use (as returned by OperatingSystem.current()) directly on the gradle command line? If not, what is the best strategy for cross-building?
Note: I depend on some libraries that themselves use OperatingSystem.current().
I'm cringing while writing this, it's wrong on so many levels - I suggest you avoid forcing Gradle to think it's on a different OS.
But assuming you can't avoid it ->
It all depends on the version of Gradle you're using; I'd assume you're on the latest version (in older versions this might be simpler).
OperatingSystem.current() works off the "os.name" system property, which you can override very simply with a -D flag on the Gradle command line.
BUT, and this is a big but, Gradle is not the problem here. The underlying JRE being used to execute the build contains OS-specific code - see UNIXProcess on UNIX systems.
The current implementation of UNIXProcess blocks overriding the "os.name" value as it performs validations on it.
It's possible you'd be able to bypass that by creating a class in the org.gradle.internal.os package that exposes the package-private OperatingSystem.resetCurrent() method, then forcing OperatingSystem.current() to re-evaluate, bypassing any real JRE checks.
Something like so:
print OperatingSystem.current()
System.setProperty("os.name", <some other OS>)
OperatingSystemWrapper.resetCurrent()
print OperatingSystem.current()
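The OperatingSystemWrapper referenced above is not a Gradle class; it would be a small helper of your own (e.g. dropped into buildSrc), placed in Gradle's internal package purely to reach the package-private method, so treat it as a fragile hack that may break between Gradle versions:

// buildSrc/src/main/groovy/org/gradle/internal/os/OperatingSystemWrapper.groovy
package org.gradle.internal.os

class OperatingSystemWrapper {
    // re-exposes the package-private reset so build scripts can call it
    static void resetCurrent() {
        OperatingSystem.resetCurrent()
    }
}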
After some additional conversation in the comments, I now understand that the real requirement here is to take a conditional dependency on native libs in a simple way that will allow the OP to remove the dependency on OperatingSystem.current().
Take this sample:
apply plugin: "java"
dependencies {
compile "org.lwjgl:lwjgl:3.2.0"
compile "org.lwjgl:lwjgl-platform:3.2.0:natives-windows"
compile "org.lwjgl:lwjgl-platform:3.2.0:natives-linux"
compile "org.lwjgl:lwjgl-platform:3.2.0:natives-osx"
}
One can add a conditional dependency via a "-P" flag (see https://docs.gradle.org/current/userguide/build_environment.html#sec:gradle_properties_and_system_properties):
apply plugin: "java"
dependencies {
compile "org.lwjgl:lwjgl:3.2.0"
if (buildos == "windows") {
compile "org.lwjgl:lwjgl-platform:3.2.0:natives-windows"
} else if (buildos == "linux") {
compile "org.lwjgl:lwjgl-platform:3.2.0:natives-linux"
} else if (buildos == "osx") {
compile "org.lwjgl:lwjgl-platform:3.2.0:natives-osx"
}
}
gradle build -Pbuildos=windows
A similar thing can be done with a "-D" flag, but then you need to access it with System.getProperty.
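For the -D variant, the value arrives as a JVM system property of the build, so it would be read along these lines (the property name and default are just examples):

def buildos = System.getProperty("buildos", "linux")   // default value is an assumption
// invoked as: gradle build -Dbuildos=windows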
One can take it even further by building custom tasks and configurations (inheriting from compile/implementation) for each flavor instead of relying on -P flags, roughly as sketched below.
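A rough sketch of that idea (the configuration names and copy destination are made up for illustration):

configurations {
    windowsNatives
    linuxNatives
    osxNatives
}

dependencies {
    compile "org.lwjgl:lwjgl:3.2.0"
    windowsNatives "org.lwjgl:lwjgl-platform:3.2.0:natives-windows"
    linuxNatives "org.lwjgl:lwjgl-platform:3.2.0:natives-linux"
    osxNatives "org.lwjgl:lwjgl-platform:3.2.0:natives-osx"
}

// one copy task per flavor; a packaging or dist task can then pick up the right directory
task copyWindowsNatives(type: Copy) {
    from configurations.windowsNatives
    into "$buildDir/natives/windows"
}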
I have a Gradle build script which has to call a Scala (or Java) method in a Task.
Currently, I have src/.../utils/lib.java. This file has a simple class with a static method that I would like to call in the build.gradle script, but I have no idea how to import this file and use this method.
This really depends on how your Gradle script is set up and what you are trying to accomplish. Without seeing it, it is hard to give a concrete, all-encompassing response. Here are the two most common methods I know of that would allow you to execute code in a Gradle script.
1) To directly answer your question: to execute a static Java method, it needs to be compiled first. If the rest of your build requires it to be executed, then you would need to structure this as a multi-project build.
https://docs.gradle.org/current/userguide/multi_project_builds.html
Assuming you have
apply plugin: 'java'
You would then be able to create a jar containing the class with your static method, and add a task of type Exec or JavaExec that executes it, roughly as sketched below.
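A minimal sketch of the JavaExec variant (the class name, argument and task name are placeholders; it assumes the static method is reachable from a main method in the compiled sources):

task runLibMethod(type: JavaExec) {
    classpath = sourceSets.main.runtimeClasspath   // wiring the classpath makes compilation run first
    main = 'utils.Lib'                             // hypothetical fully-qualified class name
    args = ['someArgument']                        // whatever the static method expects
}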
2) The easier and less complex approach:
Since Gradle allows you to execute Groovy code directly, I would try, if possible, to move the logic from the static method into the Gradle file. You can reference objects and methods of the org.gradle.api package to customize a lot of build actions.
https://docs.gradle.org/current/javadoc/
Here is an example from my build.gradle, used to copy resource files into the build directory after the compile step, before they are packaged into a jar.
compileJava {
    doLast {
        def f = file("src/main/java/resources/db/migration")
        println "Copying SQL from {$f} to {$buildDir/db/migration}"
        copy {
            from file(f)
            into file("$buildDir/db/migration")
            include('*.sql')
        }
    }
}
This may not be the easy solution that you were looking for but I hope it gives you some good guidance.
I am using the jOOQ code generation tool to generate source code for my schema (MySQL). I would like to generate the source code every time I compile my project, but I am not able to, because when I run the code generation Gradle task, the compiler starts complaining about references to the deleted source code.
Here is what I did:
Created an empty Spring Boot project.
Generated the source code using a config XML (jooq.xml).
Triggered code generation using a Gradle task.
build.gradle:
task generateJooqDatabaseSource(type: JavaExec) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'org.jooq.util.GenerationTool'
    args = ['/jooq.xml']
    standardOutput = System.out
    errorOutput = System.err
}
Used the generated source code and wrote SQL queries using jOOQ.
Everything is fine up to this point. But I don't want to push the generated Java classes to my project; I would like them to be created every time I compile.
So let's delete the generated source code and regenerate it (say, for my test environment).
But as soon as I run the Gradle task generateJooqDatabaseSource, it starts complaining about the generated code references:
error: package autogenered.jooq.code.db.tables does not exist
import autogenered.jooq.code.db.tables.Author;
I tried googling the problem and found suggestions to use plugins like Flyway, suggested here.
But I really don't want to add another plugin if it can be achieved easily without it.
PS: I only started using Gradle and jOOQ a couple of days ago; apologies if the answer is obvious.
Adding the following lines to build.gradle did the trick for me:
compileJava.dependsOn(generateJooqDatabaseSource)
generateJooqDatabaseSource.dependsOn = [processResources, processTestResources]
IntelliJ-specific configuration: I added the Gradle build task to be triggered every time I do Make Project (Ctrl+F9) or Re-build Project.