Let's say an interface MyInterface lives in jar A, and jar B implements it. If I want to use the interface, I have to take a dependency on both jar A and jar B. What is the best way to avoid this double dependency? The problem is that with Gradle you need to specify a version for jar A and for jar B. Updating in two places is tricky, and half of the time I see one of them updated but not the other. One option I am considering is to declare a MyInterface2 in jar B and ask the downstream consumers to use MyInterface2, so they can declare a dependency only on jar B; Gradle will still do the transitive dependency walk, so they only have one version to declare. Any thoughts/comments?
//======> Jar A <======
public interface MyInterface {
    public void doSomething();
}

// ======> Jar B <=======
public class Impl1 implements MyInterface {
    public void doSomething() { }
}
The Gradle file for jar B will look something like:
compile('package:jarA:<Version>')
======> Project 1 <============
MyInterface foo = new Impl1();
The consumer's Gradle file will look something like:
compile('package:jarA:<Version>')
compile('package:jarB:<Version>')
The problem I have: I want Gradle to pull in jar A as a transitive dependency, but if it is not there I get a compilation error saying it could not find MyInterface. And if I declare the version explicitly, I need to update it in two places and track the version dependencies.
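One thing I am looking at is the java-library plugin's api configuration, which exposes a dependency to consumers' compile classpaths transitively; a sketch of what I mean (untested, versions are placeholders as above):

// jar B's build.gradle
plugins {
    id 'java-library'
}

dependencies {
    // 'api' (unlike 'implementation') puts jarA on consumers' compile classpaths,
    // so MyInterface is visible to anyone who depends only on jarB
    api 'package:jarA:<Version>'
}

// a consumer's build.gradle then only declares:
dependencies {
    implementation 'package:jarB:<Version>'
}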
Is there an easier way?
Trying to create a custom Gradle plugin in Java: how do I get the resources path from inside the task class?
public class MyCustomPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        project.getTasks().register("doStuff", CustomTask.class);
    }
}

public class CustomTask extends DefaultTask {
    private final DirectoryProperty directoryProperty;

    // How do I get the Java project's resources dir from here?
    @Inject
    public CustomTask(ProjectLayout projectLayout) {
        directoryProperty = projectLayout.getBuildDirectory();
    }

    @TaskAction
    public void execute() {
        ...
    }
}
I would recommend not getting the directory inside the task, because the plugin that provides it might not be applied. Instead, do it from within the plugin that registers the task; this way you can also ensure that the necessary plugin is actually applied. If the task is used without a value assigned to its input, Gradle will display an error explaining that nothing was assigned.
With the kotlin-dsl:
@CacheableTask
abstract class CustomTask : DefaultTask() {
    @get:InputFiles
    abstract val resources: ConfigurableFileCollection
    //...
}
I cannot answer if @InputFiles is the right annotation for your use case, because I don't know what you want to do with the resource. Refer to the Gradle documentation for more information on the available annotations and what they do.
plugins {
    java
}

tasks.register<CustomTask>("customTask") {
    resources.setFrom(sourceSets.main.map { it.resources })
}
Notice the map {}, which ensures that our task has a dependency on the processResources task; this happens automatically because we stick to Gradle's provider API throughout.
Note that the resources are by default in one directory, but they don't have to be. This is why the resources are defined as a SourceDirectorySet and not as a Provider<Directory>. The same is true for anything that originates from the SourceSetContainer. It is easier to explain with Java source code: imagine you have Java and Kotlin, then you will have src/main/java and src/main/kotlin, hence two directories. The former has a **/*.java include filter, whereas the latter has a **/*.kt include filter. If we just want all the sources, we use sourceSets.main.map { it.java.sourceDirectories }, and if we want only one of the two it gets complicated. 😝
First, you'd have to ensure this is a Java project: either by applying the "java" plugin from your plugin (project.getPluginManager().apply("java")), or by only registering the task when the "java" plugin has been applied by the user (project.getPluginManager().withPlugin("java", ignored -> { project.getTasks().register(…); })).
You could then get the resources from the main source set:
SourceSetContainer sourceSets = project.getExtensions().getByType(SourceSetContainer.class);
// Use named() instead of getByName() if you prefer/need to use providers
SourceSet mainSourceSet = sourceSets.getByName(SourceSet.MAIN_SOURCE_SET_NAME);
SourceDirectorySet resources = mainSourceSet.getResources();
BTW, the best practice is to have tasks only declare their inputs and outputs (e.g. I need a set of directories, or files, as inputs, and my outputs will be one single file, or in one single directory) and have the actual wiring with default values be done by the plugin.
You could have the plugin register the task unconditionally and then, once the "java" plugin is applied, configure its inputs to the project resources; or you could conditionally register the task, or unconditionally apply the "java" plugin, as I showed above.
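For example, a rough Java sketch of the first variant (register unconditionally, wire conditionally), assuming CustomTask exposes a ConfigurableFileCollection property getResources() like the Kotlin version above:

import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.SourceSet;
import org.gradle.api.tasks.SourceSetContainer;
import org.gradle.api.tasks.TaskProvider;

public class MyCustomPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        // Register the task unconditionally...
        TaskProvider<CustomTask> doStuff =
                project.getTasks().register("doStuff", CustomTask.class);
        // ...and wire its default input only once the "java" plugin is applied
        project.getPluginManager().withPlugin("java", ignored -> {
            SourceSetContainer sourceSets =
                    project.getExtensions().getByType(SourceSetContainer.class);
            doStuff.configure(task -> task.getResources().setFrom(
                    sourceSets.getByName(SourceSet.MAIN_SOURCE_SET_NAME).getResources()));
        });
    }
}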
You can access the sources through the project's source sets:

@Inject
public CustomTask(Project project) {
    directoryProperty = project.getLayout().getBuildDirectory();
    sourceSet = project.getExtensions().getByType(SourceSetContainer.class).getByName("main");
}
See also the reference documentation here: https://docs.gradle.org/current/userguide/java_plugin.html#sec:java_project_layout
Consider the following interface
// src/MyInterface.java
interface MyInterface {
    public void quack();
}
which is used by the following application dynamically, i.e. its implementation is loaded at runtime. For demonstration purposes we'll just use the implementing class's name to determine which implementation to load.
// src/Main.java
class Main {
    public static void main(String[] args) {
        try {
            MyInterface obj = (MyInterface) Class.forName("Implementation")
                    .getDeclaredConstructor()
                    .newInstance();
            obj.quack();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
The following implementation of the interface is available:
// src/Implementation.java
class Implementation implements MyInterface {
    public void quack() {
        System.out.println("This is a sample implementation!");
    }
}
Intuitively, I would think that MyInterface provides information that is only relevant at compile time, such as which methods can be invoked on objects that implement it; it shouldn't be needed at runtime, since it doesn't provide any "executable code". But this is not the case: if I try to run the compiled Main.class without MyInterface.class, it complains:
$ javac -d bin/ src/*
$ rm bin/MyInterface.class
$ java -cp bin/ Main
Exception in thread "main" java.lang.NoClassDefFoundError: MyInterface
[...]
I guess it makes sense: it needs access to MyInterface's Class object to perform the cast to MyInterface, so it has to load MyInterface. But I feel there should be a way to make it a compile-time only dependency. How?
Some context
This question arose when I learned that there can be compile-time only dependencies, an example of which is the servlet API. I read that when compiling servlet code you need the servlet-api jar (in Tomcat's case), but at runtime it is not needed because the server provides an implementation. Since I didn't understand exactly how that could work, I set up the little experiment above. Did I misunderstand what that means?
Edit: this Gradle page mentions that a compile-time only dependency could be
Dependencies whose API is required at compile time but whose implementation is to be provided by a consuming library, application or runtime environment.
What would be an example of that? I find the sentence a bit confusing, because it seems to imply that the API is not needed at runtime and only the implementation is. From the answers, I gather that's not possible, right? (Unless one somehow implements a custom classloader?)
Yes, it looks like you misunderstood the servlet-api.jar example. You need it in your project as a compile-time dependency, but you don't have to bundle it, because Tomcat ships with that jar itself and adds it to the runtime classpath.
If you use classes/interfaces in your code, they have to end up on the classpath somehow, since your code depends on them.
Also, starting with Java 8, interfaces can have default implementations for methods ("executable code"), and interfaces can also hold constants.
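For illustration (not from the question), a version of MyInterface that carries both a constant and executable code:

interface MyInterface {
    String GREETING = "Quack";   // interface constant (implicitly public static final)

    void quack();

    // default method: executable code living in the interface itself
    default void quackTwice() {
        quack();
        quack();
    }
}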
It might be possible to run the application without the interface declaration, but in that case you would need to develop a custom ClassLoader that checks for an interface implementation and loads it instead of the interface itself.
Did I misunderstand what that means?
Yes.
You're talking about "provided" dependencies (at least, that's what Maven calls them). Such a dependency still must be present on the classpath/modulepath at both compile-time and runtime. However, you don't have to include the provided dependency with your application when deploying your application, because the target container/framework already includes the dependency.
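In Gradle, the closest match to Maven's provided scope is compileOnly; a sketch, using the servlet API coordinates as an example:

dependencies {
    // present on the compile classpath, but not packaged with the application;
    // the container (e.g. Tomcat) supplies it on the runtime classpath
    compileOnly 'javax.servlet:javax.servlet-api:4.0.1'
}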
Consider the following scenario:
Let's say I have a class A in the "src" folder of my project.
class A {
    void foo() {
        B b = new B();
    }
}
Class B is defined in another jar which is included as a dependency in build.gradle
class B extends C {
}
Now, class C is defined in yet another jar, which will be provided at runtime and not at compile time. Gradle is able to compile class A without error.
But when I import class C in class A, it gives a "class not found" error.
import other.C; // this line gives an error

class A {
    void foo() {
        B b = new B();
    }
}
Is it the desired behavior of the Java compiler to ignore class C if it is not imported directly?
Also, what happens if A calls a method on a B object that is declared in class C but not overridden in class B?
The exact answer to your question depends on the Java compiler version and whether or not it requires access to C for doing its job.
All in all, I would say that such a setup is fragile and you should not do it. If your library that defines A requires B, and B effectively makes use of C in its public API (as is the case with extends), then C should be made visible to your library.
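For example, the compiler definitely needs C as soon as A touches a member that only C declares (the method name here is hypothetical):

class A {
    void foo() {
        B b = new B();
        b.methodFromC();  // resolving this inherited member forces javac to load C
    }
}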
First of all, we need to understand how Java compilers work. Whatever you reference by name, or as a 'token' in your code, must be accessible to the compiler during the compilation phase.
Classes that you load using Class.forName or the getClass method, on the other hand, are not required to be available on the compile-time classpath.
What you are essentially referring to with compile time vs. runtime is packaging. Saying that a particular dependency, say class C, will be provided at runtime is an instruction to the bundling tasks to ignore that dependency while building the jar. So if you compile and deploy your application as a jar, class C will not be present in it, while the jar containing class B will be included in your deployment package.
If you provide the exact Gradle file, I might be able to answer more precisely.
So I've been making a plugins API for a Java project (to load JAR files externally), and I wanted to be able to install any Guice module from any plugin into my project's dependency graph.
What I did was have a PluginsModule whose configure method scans plugins for other modules and installs them using Java's ServiceLoader.
I made a test plugin with a module and confirmed that the module gets installed; no problems at this point. The problems appear when I do anything inside that module. For example, I bound some interface to an implementation in that plugin (to clear this up: I did the same thing without the plugin and it worked, so it's not a binding problem) and tried to inject it, and I get configuration errors saying there is no implementation for that interface.
public enum StandardGuiceModuleScanningStrategy implements GuiceModuleScanningStrategy {
    INSTANCE;

    @Override
    public Set<Module> scan(Path directory) throws IOException {
        File directoryAsFile = directory.toFile();
        File[] childrenFiles = directoryAsFile.listFiles();
        if (!directoryAsFile.isDirectory()
                || childrenFiles == null
                || childrenFiles.length == 0) {
            return Collections.emptySet();
        }
        Set<Module> modules = new HashSet<>();
        for (File childrenFile : childrenFiles) {
            ClassLoader directoryClassLoader = new URLClassLoader(
                    new URL[]{childrenFile.toURI().toURL()});
            ServiceLoader<Module> moduleServiceLoader = ServiceLoader.load(
                    Module.class, directoryClassLoader);
            moduleServiceLoader.forEach(modules::add);
        }
        return modules;
    }
}
In that implementation of my GuiceModuleScanningStrategy, as mentioned before, I used ServiceLoader. I also tried other things, like scanning the JAR file for Module classes and checking for a specific annotation.
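For completeness, the test plugin's jar registers its module in the standard provider-configuration file, which is how ServiceLoader discovers it (the class name here is just my test module):

META-INF/services/com.google.inject.Module
    com.example.testplugin.TestPluginModule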
All Guice modules annotated with @GuiceModule will be installed into a child injector. All classes annotated with @AutoBind will be bound to all the interfaces they inherit. You can also name the binding, which results in a named binding and overrides which interfaces are used. And if you don't want to use all the features, just override the StartupModule and bind only the features you want, or your own.
While exploring Guice, I had a question on the way the dependencies are injected.
Based on my understanding, one of the important aspects of DI is that the dependency is known and injected at runtime.
In Guice, to inject a dependency we either add a binding or implement a provider. Adding a binding takes a class object, which creates a compile-time dependency on that class. One way to avoid that is to implement a provider and let the provider use reflection to load the class dynamically.
public class BillingModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(CreditCardProcessor.class).toProvider(
                BofACreditCardProcessorProvider.class);
        bind(CreditCardProcessor.class).annotatedWith(BofA.class).toProvider(
                BofACreditCardProcessorProvider.class);
        bind(CreditCardProcessor.class).annotatedWith(Amex.class).toProvider(
                AmexCreditCardProcessorProvider.class);
    }

    @Provides
    PaymentProcessor createPaymentProcessor() {
        return new PayPalPaymentProcessor();
    }

    @Provides
    PayPalPaymentProcessor createPayPalPaymentProcessor() {
        return new PayPalPaymentProcessor();
    }
}
Is there a reason why Guice chose class objects over class names? That could have removed the compile-time dependency, right?
If your interface and implementation are defined in the same dependency (that is, in the same JAR file) then you already have a hard build dependency on the implementation, whether you use Guice or not.
Basically, as soon as you have:
public final class MyClass {
    public void doSomething(Foo foo) {
        // ...
    }
}
Then to compile MyClass a definition of Foo needs to be on the compile-time classpath.
The way to resolve this is to separate out the interface from the implementation. For example, if Foo is an interface, and FooImpl is the implementation of it, you would put FooImpl in a different dependency (that is, a different JAR file) from Foo.
Now, let's say you have two sub-projects in Maven:
foo-api/
    pom.xml
    src/main/java/com/foo/Foo.java
foo-impl/
    pom.xml
    src/main/java/com/foo/FooImpl.java
Where should the Guice module that binds Foo live? It shouldn't live in the foo-api project, it should live in the foo-impl project, alongside FooImpl.
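Such a module is just a plain Guice module living next to the implementation; a minimal sketch (the class body is mine, not from the question):

// foo-impl/src/main/java/com/foo/FooModule.java
package com.foo;

import com.google.inject.AbstractModule;

public final class FooModule extends AbstractModule {
    @Override
    protected void configure() {
        // the binding lives with the implementation, not with the API
        bind(Foo.class).to(FooImpl.class);
    }
}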
Now suppose you have a separate implementation of Foo (let's call it SuperFoo), and your project needs a Foo, but it could be either FooImpl or SuperFoo.
If we make SuperFoo its own project:
super-foo/
    pom.xml
    src/main/java/com/super/foo/SuperFoo.java
    src/main/java/com/super/foo/SuperFooModule.java
Now all your application code can simply @Inject Foo and use the foo. In your main() method (or wherever you create your Injector), you need to decide whether to install FooModule (from foo-impl) or SuperFooModule (from super-foo).
That is the place where reflection may be warranted. For example, you could have a configuration flag foo_module which could be set to either "com.foo.FooModule" or "com.super.foo.SuperFooModule". You could decide which one to install using code like this:
public static void main(String[] args) throws Exception {
    Config config = parseConfig(args);
    List<Module> modules = new ArrayList<>();
    modules.add(...); // application modules
    String fooModuleName = config.get("foo_module");
    Class<? extends Module> moduleClass =
            Class.forName(fooModuleName).asSubclass(Module.class);
    modules.add(moduleClass.getDeclaredConstructor().newInstance());
    Injector injector = Guice.createInjector(modules);
    injector.getInstance(MyApplication.class).run();
}
Of course, you could also use any other mechanism you like to select which module to install. In many cases you don't even really want to do this reflectively; you can simply change the code at the same time you change the build dependency.