What if I have a project that relies on an abstraction (an interface) but does not contain any implementation of that interface? I want to give that interface to someone who can implement it, so that I (or someone else who is going to use my software) will be able to use their implementation of my interface.
Thus, I have the following question: how can I, let's say, share that interface?
The idea that came to me is to make a JAR that contains the interface and give it to whoever is going to implement that interface. Afterwards, the implementer creates his own JAR and gives it to me, so I can use his JAR with the implementation of my interface. Is that a proper way to do it?
The purpose is to create a modular architecture, so that if I need a new game (per the example above), I'll take a JAR with the implemented interface and just plug it into my project.
Yes.
You should have a shared build artifact (JAR file) that contains only the interfaces, which your project and the implementing project can both depend on.
You may want to look into tools like Maven or Gradle for helping orchestrate your build process with dependencies like this. For example, you may want to publish your API JAR to a shared package repository that both developers can work with.
You may also want to look into java.util.ServiceLoader and the Service Locator pattern, for discovering which specific implementation(s) you have available.
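For example, here is a minimal sketch of the consumer side (GamePlugin is a hypothetical interface name standing in for whatever your shared API JAR defines):

// In the shared API JAR:
public interface GamePlugin {
    String name();
    void start();
}

// In your application, which depends only on the API JAR:
import java.util.ServiceLoader;

public class PluginHost {
    public static void main(String[] args) {
        // Finds every implementation on the classpath that registered itself
        // under META-INF/services/ using the interface's fully qualified name.
        ServiceLoader<GamePlugin> plugins = ServiceLoader.load(GamePlugin.class);
        for (GamePlugin plugin : plugins) {
            System.out.println("Loading game: " + plugin.name());
            plugin.start();
        }
    }
}

Dropping a new implementation JAR onto the classpath is then enough to make its plugin visible, without recompiling the host.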
Related
I am developing screenshot software which can load plugins from JARs. Those are developed against the API package, which is made of interfaces to implement, so the person who wants to make a plugin does not have to use the full source code.
This works well for adding actions ("Upload to host X", for example), but what if I want to send data the other way around, from a plugin TO the core? How am I supposed to do this?
The only solution I can think of would be to use callbacks, but I don't find this so clean...
By the way, is my solution of using interfaces that devs implement, which I then instantiate, correct? Or is there a better way?
Your solution is the most common way to implement such a scenario. You give plugins an instance of a class (instantiated by the core) and they can store it for future use (e.g. to pass data to the core or trigger another action). Normally the names of such classes end with Context (e.g. BundleContext, PluginContext, etc.).
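A minimal sketch of that pattern, with hypothetical names (Plugin, PluginContext, UploadPlugin):

// Defined in the API package; implemented and instantiated by the core.
public interface PluginContext {
    // Lets a plugin pass data back to the core.
    void sendToCore(String key, Object value);
}

// Plugins receive the context on initialization and may keep it.
public interface Plugin {
    void init(PluginContext context);
}

public class UploadPlugin implements Plugin {
    private PluginContext context;

    @Override
    public void init(PluginContext context) {
        this.context = context; // store for future use
    }

    public void onUploadFinished(String url) {
        context.sendToCore("lastUploadUrl", url); // plugin -> core
    }
}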
Another pattern is to use a sort of Mediator class: a class with static methods that plugins can use to send data to the core or trigger actions. I don't like it, and it's not a very clean solution, but it makes it much easier for plugin developers to access the API, as they don't need to store the context instance and respect its life cycle. This pattern is used widely in the IntelliJ IDEA architecture.
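For contrast, a mediator-style sketch (reusing the hypothetical PluginContext from above):

// A static facade the core initializes once at startup; plugins call it
// directly and never have to hold or manage a context instance.
public final class CoreMediator {
    private static volatile PluginContext context;

    private CoreMediator() {}

    public static void install(PluginContext ctx) { // called by the core
        context = ctx;
    }

    public static void sendToCore(String key, Object value) {
        context.sendToCore(key, value); // plugin -> core, via the facade
    }
}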
As you're developing a plugin-based system, I highly recommend taking a look at the OSGi architecture and APIs. They can be helpful in this regard.
A Thrift interface can be compiled for multiple languages. It's just text files, so why are there no online tools like SwaggerHub for it? I don't want to copy-paste the interface across all the projects that use it.
I also don't find it useful to package the interface in a JAR file, because only JVM languages can resolve it, and it's not a user-friendly approach. This is not only about Thrift; it applies to gRPC as well. I didn't find any docs concerned with this question, and couldn't find any best practices.
Assuming you have a .proto file with your interfaces, each subproject will need to know about the file. There are two main approaches to this problem: vendor the file, or copy the file.
Vendor the File
In this option, you make an additional project (like a separate git repo) which stores all your interface definitions. Each project that needs to know about the interfaces includes a reference (a git submodule or git subtree) to the interface project. When you build your project, the interfaces need to be synced and then used to generate the necessary code.
The downside of this approach is that git subtree and git submodule (or the equivalents in whatever version control you use) are more difficult to use and require extra work from people building your code. If you make changes to the interface in the vendored subproject, it can be difficult to apply those changes back upstream to the interface project.
Copy the File
In this option, you manually copy the file around between projects, and manually keep them in sync. Each time you make a change, you'll want to apply that change to every other project that depends on the interface. When using Protobuf though, it is important to note that you don't have to do this. Protos are designed to be highly backwards compatible.
For example, code that is migrating a proto definition from one form to another can actually use both forms at once. Old code will look at the old form, and new code can decide whether to look at the old or the new form. Once all users have been upgraded, you can remove the old form.
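As an illustration (the message and field names here are made up), a migration might carry both forms for a while:

syntax = "proto3";

message UserEvent {
  // Old form: kept so existing writers and readers continue to work.
  string created_at = 1 [deprecated = true];

  // New form: new code writes this and prefers it when reading,
  // falling back to created_at when it is unset.
  int64 created_at_millis = 2;
}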
The downside to this approach is that it pushes complexity into the decoding portion of your code. You end up needing to stay backwards compatible with an unknown number of older clients. Since not every project will be in sync with the interface definitions, all users of the interface need to be more flexible. This problem is not specific to Protobuf; it happens naturally to every evolving interface.
A second downside is having to manually copy changes. You must make sure never to reuse field numbers or names. If you have a lot of projects that depend on the interface, it's more work for you.
Which to Choose?
Neither approach is objectively better than the other. Each one pushes complexity into a different part of your build. From what I have seen, most people prefer to copy the file, since it is easier than learning advanced git commands.
Our Maven project provides an API for clients to interact with it; the API is just a few Java interfaces, which are implemented in the internal codebase...
Now, if we just build the JAR and publish it, anyone can see the internal classes we used for the implementation, yet we only need a few Java interfaces to be published (along with a few DTO classes, maybe)...
Is it possible to pick exactly which Java files to build a JAR for, and create two artifacts like product.jar/war and product-api.jar?
The purpose is to limit possible misuse of the code by other teams...
The best approach is to create separate Maven modules, like:
project-api
which contains only the interfaces and which can be used by others separately.
project-impl
one implementation, etc.
The above also makes testing easier and is a good choice with regard to separation of concerns.
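A minimal sketch of the parent POM for such a layout (group and artifact IDs are just examples):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>product-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <modules>
    <!-- Only the interfaces and DTOs; this is what clients depend on. -->
    <module>product-api</module>
    <!-- The internal implementation; depends on product-api. -->
    <module>product-impl</module>
  </modules>
</project>

Clients are then given only the product-api artifact, while product-impl stays internal.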
Your question is really about securing code rather than Maven in general. You can have a multi-module Maven project, but anyone can still download it and decompile it.
A few thoughts. Java doesn't have a built-in mechanism to fully support this, but there are workarounds:
1) When you package a project as a JAR, don't put the .java source files in the JAR/build. The code can still be decompiled back to Java, but at least you don't hand out the sources to start with.
2) You can obfuscate your code using the various tools available.
3) At the extreme, expose your API as web services, where you define a contract for request/response. No one can see your code...
I am looking for a tutorial on how to create a plugin system, preferably in Java, but I can't find any generic examples on Google (they are all about writing plugins) - can anyone explain, or link to an explanation of, how to achieve this?
A plugin system, at its core, is usually composed of two things:
1) An interface or set of interfaces that the plugin must implement so that the core system can use them.
2) A custom classloader that the main system uses to load the plugins, which are usually packaged as JARs.
The main system builds the classloader based on some predefined directory or a configuration file that specifies where the plugins exist. This loader iterates over the classes it finds, keeps the ones that implement the specified interface, and calls methods through that interface as appropriate for the system.
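A minimal sketch of such a loader, assuming a hypothetical Plugin interface defined by the core:

import java.io.File;
import java.lang.reflect.Modifier;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class PluginLoader {
    public static List<Plugin> loadPlugins(File pluginDir) throws Exception {
        List<Plugin> plugins = new ArrayList<>();
        File[] jars = pluginDir.listFiles((dir, name) -> name.endsWith(".jar"));
        if (jars == null) return plugins;
        for (File jar : jars) {
            // The loader must stay open for as long as its plugins are in use.
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { jar.toURI().toURL() },
                    PluginLoader.class.getClassLoader());
            try (JarFile jarFile = new JarFile(jar)) {
                Enumeration<JarEntry> entries = jarFile.entries();
                while (entries.hasMoreElements()) {
                    String entryName = entries.nextElement().getName();
                    if (!entryName.endsWith(".class")) continue;
                    String className = entryName
                            .substring(0, entryName.length() - ".class".length())
                            .replace('/', '.');
                    Class<?> cls = loader.loadClass(className);
                    // Keep only concrete classes implementing the plugin interface.
                    if (Plugin.class.isAssignableFrom(cls)
                            && !cls.isInterface()
                            && !Modifier.isAbstract(cls.getModifiers())) {
                        plugins.add((Plugin) cls.getDeclaredConstructor().newInstance());
                    }
                }
            }
        }
        return plugins;
    }
}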
Why don't you use something that's already there, like Equinox, or go one step further and use the Eclipse plugin system?
I am writing an application that will ship in several different versions (initially around 10 variations of the code base will exist, and will need to be maintained). Of course, 98% or so of the code will be the same amongst the different systems, and it makes sense to keep the code base intact.
My question is - what would be the preferred way to do this? Suppose, for instance, I have a class (MyClass) that is different in some versions (MyClassDifferent), and that class is referenced in a couple of places. I would like that reference to change depending on which version of the application I am compiling, rather than having to fork all the classes referring to MyClassDifferent as well. Preprocessor macros would be nice, but they bloat the code, and AFAIK only proof-of-concept implementations are available for Java.
I am considering something like a factory-pattern, coupled with a configuration file for each application. Does anyone have any tips or pointers?
You are on the right track: Factory patterns, configuration etc.
You could also put the system specific features in separate jar files and then you would only need to include the appropriate jar alongside your core jar file.
I'd second your factory approach, and you should have a closer look at Maven or Ant (depending on what you are using).
You can deploy the different configuration files that determine which classes are used based on parameters/profiles.
Preprocessor macros like the ones C/C++ has are not directly available for Java. It may be possible to emulate them via build scripts, but I wouldn't go down that road. My suggestion: stick with the factory approach.
Fortunately, you have several options:
1) ServiceLoader (built into Java 6): put your API class, like MyClass, in a JAR, then compile your application against this API. Put a separate implementation of MyClass in a separate JAR containing a /META-INF/services/com.foo.MyClass provider file (see the layout sketch after this list). You can then maintain several versions of your application simply by keeping a "distribution" of JARs. Your "main" class is just a bunch of ServiceLoader calls.
2) The same architecture as 1), but replacing META-INF services with Spring or Guice configuration.
3) OSGi.
4) Your own solution (e.g. the factory approach you described).
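For option 1), the JAR layout would be roughly as follows (names hypothetical, reusing com.foo.MyClass from above):

api.jar
    com/foo/MyClass.class                 (the interface)

impl-deluxe.jar
    com/foo/impl/DeluxeMyClass.class      (implements com.foo.MyClass)
    META-INF/services/com.foo.MyClass     (a text file whose single line is
                                           com.foo.impl.DeluxeMyClass)

Swapping impl-deluxe.jar for a different implementation JAR changes the behavior of a distribution without recompiling the main application.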
Look up the AbstractFactory design pattern, "Dependency Injection", and "Inversion of Control". Martin Fowler writes about these here.
Briefly, you ship JAR files with all the needed components. For each service point that can be customized, you define an interface for the service. Then you write one or more implementations of that interface. To create a service object, you ask an AbstractFactory for it, e.g.:
// Obtain the factory (it could also be injected or configured).
AbstractFactory factory = new AbstractFactory();
...
// The factory decides which ServiceXYZ implementation to instantiate.
ServiceXYZ s = factory.newServiceXYZ();
s.doThis();
s.doThat();
Inside your AbstractFactory you construct the appropriate ServiceXYZ object using the Java reflection methods Class.forName() and newInstance(). (Doing it this way means you don't have to have a particular ServiceXYZ implementation in the JAR files unless it makes sense. You can also build the objects normally.)
The actual class names are read in from a properties file unique to each site.
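A minimal sketch of such a factory, assuming a site-specific file named services.properties with a line like ServiceXYZ=com.example.site1.ServiceXYZImpl (the file name and keys are illustrative):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class AbstractFactory {
    private final Properties config = new Properties();

    public AbstractFactory() throws IOException {
        this("services.properties"); // default per-site config file
    }

    public AbstractFactory(String propertiesPath) throws IOException {
        // Each site ships its own properties file naming the classes to use.
        try (FileInputStream in = new FileInputStream(propertiesPath)) {
            config.load(in);
        }
    }

    public ServiceXYZ newServiceXYZ() throws ReflectiveOperationException {
        String className = config.getProperty("ServiceXYZ");
        Class<?> cls = Class.forName(className);
        // The modern replacement for the deprecated Class.newInstance().
        return (ServiceXYZ) cls.getDeclaredConstructor().newInstance();
    }
}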
You can roll your own solution easily enough, or use a framework like Spring, Guice, or Pico.