Suppose there is a REST service with several clients. Each client has to do roughly the same work to call the REST service: construct a URL, send a request, deserialize the response, map HTTP return codes to exceptions, etc.
This looks like duplicated code, so maybe it's a good idea to put the REST gateway code into a reusable library. In Java this could look like the following:
There is a jar file which depends on Jersey and contains the REST client code.
The REST client is a simple class or CDI bean.
All clients simply depend on this jar and call the members of the aforementioned class.
So no client has to worry about REST and can just call methods.
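For illustration, here is a minimal sketch of such a shared gateway (the class name, base URI, DTO and exception are made up; the JAX-RS client API that Jersey implements is assumed):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical shared REST gateway as it might live in the client jar.
public class UserServiceClient {

    // Hypothetical DTO shipped in the same jar.
    public static class User {
        public String id;
        public String name;
    }

    // Hypothetical exception the gateway maps HTTP codes to.
    public static class UserNotFoundException extends RuntimeException {
        public UserNotFoundException(String id) {
            super("No user with id " + id);
        }
    }

    private final Client client = ClientBuilder.newClient();
    private final String baseUri = "http://example.com/api"; // assumed endpoint

    public User getUser(String id) {
        // Construct the URL and send the request.
        Response response = client.target(baseUri)
                .path("users").path(id)
                .request(MediaType.APPLICATION_JSON)
                .get();
        // Map HTTP return codes to exceptions.
        if (response.getStatus() == 404) {
            throw new UserNotFoundException(id);
        }
        // Deserialize the response.
        return response.readEntity(User.class);
    }
}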
Is this a good idea?
(this question originates from this other question)
This can get you into trouble with dependencies. It is a quite typical dependency problem and not bound to REST or Jersey. But let's have a look at the following scenario:
Dependencies
Suppose there are two servers (S1 and S2), two clients (C1 and C2) and two libraries containing the REST client code for accessing the servers (L1 and L2). Both clients need to query both servers, so the call structure looks like this:
C1 ---> L1 ---> S1
  \     ^
   \   /
    \ /
     X
    / \
   /   \
  /     v
C2 ---> L2 ---> S2
Furthermore, both L1 and L2 depend on Jersey. The container (maybe you are running your application on a GlassFish or WildFly application server) also has a dependency on Jersey (or at least on the JAX-RS API [1]).
The simplified dependency structure of C1 looks like this:
          C1
        / | \
       /  |  \
      v   v   v
Container L1  L2
      |   |   |
      v   v   v
 Jersey Jersey Jersey
As long as all three versions of Jersey are the same, everything is fine. No problem whatsoever. But if the versions differ, you may run into nasty NoClassDefFoundErrors, NoSuchMethodErrors and the like [2].
Rigid Structures
If the versions are similar enough you won't run into these errors directly. Maybe it will work for quite a while. But then the following may happen:
You want to update your container. This updates the container's version of Jersey, which may become incompatible with the version L1 and L2 expect. ==> Boom. So using L1 and L2 prevents you from updating the container. You are stuck with the old version as long as L1 and L2 are not updated.
L2 is updated to a new version of Jersey. You can only use it if L1 and your container are updated as well. So you stick with the old version (you can do that because of the loose coupling of REST). But then new functionality is added to S2 which is only usable with the new version of L2, and again you are stuck.
Keep in mind that the errors may or may not occur. There is no guarantee that you get into trouble and there is no guarantee that it will work. It's a risk, a ticking bomb.
On the other hand, this is a simple example with just two services and two clients. With more of them, the risk increases, of course.
Stable Dependencies Principle
Dependencies should adhere to the Stable Dependencies Principle (SDP), which reads "Depend in the direction of stability." Uncle Bob defines it using the number of afferent and efferent dependencies, but SDP can be understood in a broader sense.
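For reference, his instability metric is

    I = Ce / (Ca + Ce)

where Ca is the number of afferent (incoming) dependencies and Ce the number of efferent (outgoing) ones. I ranges from 0 (maximally stable) to 1 (maximally unstable), and SDP says a package should only depend on packages whose I is lower than its own.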
The connection to SDP becomes obvious when we look at the given dependency structure and replace Jersey with Guava. This problem is really common with Guava, because Guava is typically not backwards compatible. It's a rather unstable library that can be used in an application (which is unstable itself), but you should never use it in a reusable library (which ought to be stable).
With Jersey it's not that obvious, because you may consider Jersey quite stable. In fact it is very stable with respect to Bob Martin's definition. But L2 may be quite stable, too. Maybe it's developed by another team, by some guy who left the company, and nobody dares to touch it because last year somebody tried and the C2 team ended up with dependency issues. Maybe L2 is also stable because there is some ongoing quarrel between the managers of the teams developing C1 and L2, so the L2 manager claims there are no resources, no budget or whatever, and updating L2 can only be done at the end of next year.
So depending on your code base, your organizational structure, etc., libraries can be more stable than you would wish them to be.
Exceptions and Remedies
There are some languages which have "isolated dependencies", meaning you can have multiple versions of the same library in the same application (Node.js with npm works this way, for example, by installing each package's dependencies as nested copies).
You could repackage Jersey (copy the source, change the package name and maintain it yourself; the maven-shade-plugin can automate this kind of package relocation). Then you can have the normal Jersey in version X and your repackaged Jersey in version Y. This works, but it has its own problems: it's a lot of work, and you end up maintaining software you didn't write.
The problem can be negligible if everything is developed in the same team. Then you have control over the whole thing and don't depend on other teams doing something. Nevertheless, as soon as your code base grows, updating to a new version can be a lot of work, as you cannot do it gradually anymore; you have to move many services to the new version at once.
Build tools like Maven give you some control over the versions you depend on. You may consider marking the Jersey dependency as optional in L1 and L2 and as provided in C1 and C2. Then it is not included in your WAR file, so there is just one version of Jersey (the one in the container). As long as this version is interface-compatible, it will work [3].
What to Do Instead
I would recommend just putting your representation classes, i.e. your DTOs, into a client jar and letting each client decide which library to use for making the REST call. Typically these REST gateways are quite simple, and sharing them between applications is normally not worth the risk. At least think about the issues mentioned here before running into them without even knowing.
[1] In order to keep the explanation simple, let's neglect the difference between Jersey and the JAX-RS API. The argument is the same.
[2] In reality it's a bit more complicated. Typically your build tool (Maven, etc.) will select one version of Jersey and put it into the WAR file. The other version of Jersey is "omitted for conflict" with the chosen one. This is OK as long as the versions are compatible. So you end up with two versions: the one in your container and the one in your WAR file. Whereas interface compatibility should be enough in the former case, it won't be enough for these remaining two versions.
[3] As there is only one version, the problem mentioned in [2] cannot occur. The problem with interface compatibility remains, of course.
Related
I need to support two builds with some differences in library versions, so I made two build profiles and this works fine, but now I have a versioning problem while preparing a release.
I use the major.minor.revision-qualifier versioning schema, where:
major - a breaking change
minor - backward-compatible changes (new features)
revision - a bug fix
qualifier - the only qualifier I use now is SNAPSHOT to mark unreleased versions.
But since I now have two builds, I need to add some qualifier to the release versions, e.g. 1.8.0-v1 and 1.8.0-v2, but then I won't be able to have two SNAPSHOT versions. Or I need to break the "rules" about major/minor version usage and make two "branches", e.g. release 1.8.0 and 1.9.0, and then increase only the last number no matter whether I am fixing a bug or adding new features.
I have a feeling I am doing something that is an antipattern; could anyone please give me some advice?
P.S. I already have a heavily reworked 2.x version, so I can't have separate "branches" as 2.x and 1.x versions, unless I change this new version to 3.0.
Update:
I guess I can't make this story short, so here we go.
In my project I used to have the ojdbc6 and aqapi jars (Oracle libraries); my app ran on Java 7 and Apache ServiceMix 5 with an Oracle 11 database. But then some clients updated to Oracle 12 and I needed new libraries for that, which only work on Java 8, while the ActiveMQ I use as part of ServiceMix 5 doesn't work on Java 8. So I updated to ServiceMix 7, and after some changes it works fine. The rest of the differences between the build profiles are the versions of ServiceMix-provided libraries (a complete list is redundant here, I guess).
In the end, despite the fact that the new JDBC driver is fully compatible with the old database (not completely sure about aqapi and the client side of ActiveMQ, but they should also be compatible), I can't force every client to update and reinstall Java/ServiceMix at the same time, but I still want to be able to fix and add stuff for all of them.
So I need to support two builds for different versions of ServiceMix, at least for now (it's a temporary solution, but as the proverb says, there is nothing more permanent than the temporary, so I want to do it in the best way possible).
P.S.
I decided to use profiles instead of a separate branch in the VCS because it looks like a much easier solution, but it doesn't matter in terms of the versioning problem.
As @Software Engineer said, after thinking about the reasons and writing the post update, I realised it's not a multi-profile problem, it's purely a versioning problem; it would be absolutely the same if I made a branch in the VCS.
So in the end I decided to make 1.x.x and 2.x.x versions, despite the fact that the changes are not that "breaking"; they are not fully backward-compatible (even though the new version can work with the old database, it still needs the new ServiceMix).
The multi-profile workaround doesn't look pretty, but I left it in place: it allows me to build both versions in one go (I use the mvn versions:set -DnewVersion command after the first build) and I don't need to maintain two branches this way, so it saves some time.
One app (Java) references two third-party jars (packageA and packageB), which reference packageC-0.1 and packageC-0.2 respectively. It would work well if packageC-0.2 were compatible with packageC-0.1. However, sometimes packageA uses something that is not supported by packageC-0.2, and Maven will put only one version of a jar on the classpath. This issue is also known as "Jar Hell".
It would be difficult in practice to rewrite packageA or force its developers to update packageC to 0.2.
How do you tackle these problems? This often happens in large-scale companies.
I should add that this problem mostly occurs in BIG companies, because a big company has a lot of departments and it would be very expensive to make the whole company update one dependency every time certain developers need new features from a newer version of some jar. This is not a big deal in small companies.
Any response will be highly appreciated.
Let me throw out a brick to attract a gem first, as the saying goes.
Alibaba is one of the largest e-commerce companies in the world, and we tackle these problems by creating an isolation container named Pandora. Its principle is simple: package those middlewares together and load them with different ClassLoaders, so that they can work well together even when they reference the same packages in different versions. But this needs a runtime environment provided by Pandora, which runs as a Tomcat process. I have to admit that this is a heavyweight solution. Pandora builds on the fact that the JVM identifies a class by its class loader plus its class name.
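To illustrate the class-loader trick in plain Java (jar paths and class name are made up; this is not actual Pandora code):

import java.net.URL;
import java.net.URLClassLoader;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // Two class loaders, each seeing its own copy of the library.
        // Passing null as parent prevents delegation to the application
        // class loader, so each loader resolves the class on its own.
        URLClassLoader v1 = new URLClassLoader(
                new URL[] { new URL("file:lib/packageC-0.1.jar") }, null);
        URLClassLoader v2 = new URLClassLoader(
                new URL[] { new URL("file:lib/packageC-0.2.jar") }, null);

        Class<?> c1 = v1.loadClass("com.example.c.Helper");
        Class<?> c2 = v2.loadClass("com.example.c.Helper");

        // Same fully qualified name, but two distinct classes to the JVM:
        System.out.println(c1 == c2); // false
    }
}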
If you know someone who may know the answer, share the link with them.
We are a large company and we have this problem a lot. We have large dependency trees that span several developer groups. What we do:
We manage versions with BOMs (Maven dependencyManagement lists) of "recommended versions", published by the maintainers of the jars. This way we make sure that recent versions of the artifacts are used.
We try to reduce the large dependency trees by separating the functionality used inside a developer group from the functionality offered to other groups.
But I admit that we are still trying to find better strategies. Let me also mention that using "microservices" is a strategy against this problem, but in many cases it is not a valid strategy for us (mainly because we could not have global transactions on databases any more).
This is a common problem in the Java world.
Your best option is to regularly maintain and update the dependencies of both packageA and packageB.
If you have control over those applications - make time to do it. If you don't have control, demand that the vendor or author make regular updates.
If both packageA and packageB are used internally, you can use the following practice: have all internal projects in your company refer to a parent in the Maven pom.xml that defines "up to date" versions of commonly used third-party libraries.
For example:
<properties>
    <framework.jersey>2.27</framework.jersey>
    <framework.spring>4.3.18.RELEASE</framework.spring>
    <framework.spring.security>4.2.7.RELEASE</framework.spring.security>
</properties>
Therefore, if your projects use Spring and they inherit the latest version of your company's parent POM, they will all use 4.3.18.RELEASE.
When a new version of Spring is released and desirable, you update your company's parent POM and force all other projects to use that latest version.
This will solve many of these dependency mismatch issues.
Don't worry, it's common in the Java world; you're not alone. Just google "jar hell" and you can understand the issue in the broader context.
By the way, mvn dependency:tree is your friend for isolating these dependency problems.
I agree with the answer of @JF Meier. In a Maven multi-module project, a dependencyManagement node is usually defined in the parent POM file to manage versions in one place: it declares the versions of commonly used dependencies, and modules that declare those dependencies then inherit the version without specifying it themselves. It looks like this:
In the parent POM:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.devzuz.mvnbook.proficio</groupId>
            <artifactId>proficio-model</artifactId>
            <version>${project.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>
In your module, you do not need to set the version:
<dependencies>
    <dependency>
        <groupId>com.devzuz.mvnbook.proficio</groupId>
        <artifactId>proficio-model</artifactId>
    </dependency>
</dependencies>
This avoids the problem of inconsistent versions.
This question can't be answered in general.
In the past we usually just didn't use different versions of the same dependency. If the version changed, team- or company-wide refactoring was necessary. I doubt using two versions at once is even possible with most build tools.
But to answer your question:
Simple answer: Don't use two versions of one dependency within one compilation unit (usually a module)
But if you really have to do this, you could write a wrapper module that references the legacy version of the library.
But my personal opinion is that within one module there should be no need for such constructs, because "one module" should be small enough to be manageable; otherwise this is a strong indicator that the project could use some modularization refactoring. However, I know very well that projects in "large-scale companies" can be a huge mess where no "good" option is available. I guess you are talking about a situation where packageA is owned by a different team than packageB... which is generally a very bad design decision, due to the lack of separation and the inherent dependency problems.
First of all, try to avoid the problem. As mentioned in @Henry's comment, don't use third-party libraries for trivial tasks.
However, we all use libraries. And sometimes we end up with the problem you describe, where we need two different versions of the same library. If library 'C' has removed and added some APIs between the two versions, and the removed APIs are needed by 'A' while 'B' needs the new ones, you have an issue.
In my company, we run our Java code inside an OSGi container. Using OSGi, you can modularize your code into "bundles", which are jar files with some special directives in their manifest file. Each bundle jar has its own class loader, so two bundles can use different versions of the same library. In your example, you could split the application code that uses packageA into one bundle, and the code that uses packageB into another. The two bundles can call each other's APIs, and it will all work fine as long as your bundles do not use packageC classes in the signatures of the methods used by the other bundle (known as API leakage).
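As a small illustration (all names invented), a bundle can expose a service whose interface mentions no packageC types at all, so nothing leaks across the bundle boundary:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Hypothetical service interface exposed to other bundles. It mentions
// no packageC types, so no packageC classes leak across the boundary.
interface Greeter {
    String greet(String name);
}

// Hypothetical activator: internally this bundle may use packageC-0.1,
// hidden behind the bundle's own class loader.
public class Activator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        context.registerService(Greeter.class, name -> "Hello " + name, null);
    }

    @Override
    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically.
    }
}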
To get started with OSGi, you can e.g. take a look at OSGi enRoute.
I used to manage versions with a tag in Git. But that was a long time ago, for stand-alone applications. Now the problem is that I have a web application, and clients connecting to the same application may expect to communicate with different versions of it.
So I added a path variable for the version, like this:
@PathParam("version") String version
And the client can specify the version in the URL:
https://whatever.com/v.2/show
Then across the code I added conditions like this:
if(version.equals("v.2") {
// Do something
}
else if(version.equals("v.3") {
// Do something else
}
else {
// Or something different
}
The problem is that my code is becoming very messy. So I decided to do it in a different way: I added this condition at only one point in the code, and from there I call different classes according to the version:
MyClassVersion2.java
MyClassVersion3.java
MyClassVersion4.java
The problem now is that I have a lot of duplication.
And I want to solve this problem as well. How can I build a web application that:
1) deals with multiple versions,
2) is not messy (without a lot of conditions),
3) doesn't have much duplication?
Normally, when we speak of an old version of an application, we mean that the behavior and appearance of that version is cast in stone and does not change. If you make even the slightest modification to the source files of that application, then its behavior and/or appearance may change, (and according to Murphy's law it will change,) which is unacceptable.
So, if I were you, I would lock all the source files of the old version in the source code repository, so that nobody can commit to them, ever. This approach solves the problem and dictates how you have to go about everything else: Every version would have to have its own set of source files which would be completely unrelated to the source files of all other versions.
Now, if the old versions of the application must have something in common with the newest version, and this thing changes, (say, the database,) then we are not exactly talking about different versions of the application, we have something more akin to different skins: The core of the application evolves, but users who picked a skin some time ago are allowed to stick with that skin. In this case, the polymorphism solution which has already been suggested by others might be a better approach.
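As a minimal sketch of that polymorphism approach (all names invented): resolve the version string once and dispatch to a version-specific class, instead of spreading conditions through the code.

import java.util.Map;

// Hypothetical per-version implementations of one operation.
interface ShowHandler {
    String show();
}

class ShowHandlerV2 implements ShowHandler {
    public String show() { return "v2 representation"; }
}

class ShowHandlerV3 implements ShowHandler {
    public String show() { return "v3 representation"; }
}

public class ShowResource {

    // The version string from the URL picks the implementation in one place.
    private static final Map<String, ShowHandler> HANDLERS = Map.of(
            "v.2", new ShowHandlerV2(),
            "v.3", new ShowHandlerV3());

    public String show(String version) {
        // Fall back to the newest version for unknown version strings.
        return HANDLERS.getOrDefault(version, new ShowHandlerV3()).show();
    }
}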
Your version number is in the part of the URL named the "Context Root".
You could release multiple different WAR files each of which is configured to respond on different Context Roots.
So one war for version 1, one war for version 2 etc.
This leaves you with code duplication.
So what you are really asking is, "how do I efficiently modularise Java web applications?".
This is a big question, and leads you into "Enterprise Java".
Essentially you need to solve it by abstracting your common code into a separate application. Usually this is called 'n-tier' design.
So you'd create an 'integration tier' application which your 'presentation' tier WAR files speak to.
The Integration tier contains all the common code so that it isn't repeated.
Your integration tier could use EJB or web services, etc.
Or you could investigate using OSGi.
I have 2 Maven artifacts with 2 versions each, let's say A1, A2, B1, B2. B1 depends on A1, B2 depends on A2. A1 and A2 are very similar; let's say A1 uses Java 7 and A2 uses Java 8 and lambdas.
All artifacts are used by our clients and sometimes they install the wrong artifact for their environment.
I want to build a base artifact A, which A1 and A2 will inherit from while adding their custom functionality, and another artifact, A_Client, which chooses at runtime, based on some properties (JDK and some others), which Ax module should be used. This way, our clients will only have to install A_Client and will not have to worry about the right version.
B1 and B2 are the same; the only difference is their Ax dependency. If I can merge A1 and A2 somehow, there will be only one B artifact available for clients, depending only on A_Client. This way I eliminate the B version hell too.
So, the question:
Is it possible to decide on a dependency at runtime? My guess is that it may be possible using OSGi or custom class loaders, but I have very limited knowledge of both areas, so any help is appreciated.
Maven
Maven is a build tool, so it won't help you at runtime. It is possible to activate different profiles based on the JDK used, but that is based on the system which builds the artifact, not the system which runs it.
OSGi
I haven't used OSGi yet, but from what I know it is a very different deployment and runtime model. It probably doesn't justify the effort just to prevent your customers from deploying the wrong version.
Deployment process
Not everything should be solved by a technical solution. Deploying the wrong artifact should definitely be solved with an adjusted deployment process. Think about:
Why did they deploy the wrong artifact?
Do they have a documented deployment process? How should the process be changed?
Can you make the artifacts more distinguishable? Could you rename them?
Can you make it fail fast? It should fail at deploy time, not once it's already in use.
This is an old question but deserves a modern answer that makes sense in the now-modular-JDK world.
What I really wanted in the original question was to have a JAR which could execute a different implementation, chosen at runtime, depending on the Java version it runs on. I thought that Maven could do that, since there were no other options available at the time. In the modular world, this is exactly what multi-release JARs solve.
If the Java version is not the issue, but there are two implementations which must be chosen between based on a system property, application state or whatever, you can package both of them as service implementations, add both to the module path, and in the client load them via ServiceLoader and decide at runtime which implementation is needed.
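A minimal sketch of that approach (the Codec interface and implementations are invented; each implementation would be registered in META-INF/services or via a provides clause in module-info.java):

import java.util.ServiceLoader;

// Hypothetical service interface shared by A_Client and both implementations.
public interface Codec {
    boolean supports(String environment);
    byte[] encode(byte[] data);
}

// In the client, pick an implementation at runtime:
class CodecSelector {
    static Codec select(String environment) {
        // Iterates over every implementation found on the module/class path.
        for (Codec codec : ServiceLoader.load(Codec.class)) {
            if (codec.supports(environment)) {
                return codec;
            }
        }
        throw new IllegalStateException("No Codec supports " + environment);
    }
}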
I have a Java-based server transmitting data from many remote devices to one app via TCP/IP. I need to develop several versions of it. How can I develop and then maintain them without having to code two separate projects? I'm asking not only about that project, but about different approaches in general.
Where the behaviour differs, make the behaviour "data driven" - typically by externalizing the data that drives the behaviour to properties files that are read at runtime/startup.
The goal is to have a single binary whose behaviour varies depending on the properties files found in the runtime environment.
Java supports this pattern through the Properties class, which offers convenient ways of loading properties. In fact, most websites operate this way; for example, the production database user/pass details are never (should never be) in the code. The sysadmins edit a properties file that is read at startup and protected by the operating system's file permissions.
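A small sketch of the pattern (file name and property key are made up):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class AppConfig {

    // Loads behaviour-driving settings from a file sysadmins can edit.
    public static Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties props = load("app.properties"); // hypothetical file
        // The same binary behaves differently depending on this flag.
        boolean legacyMode = Boolean.parseBoolean(
                props.getProperty("legacy.mode", "false"));
        System.out.println("legacy mode: " + legacyMode);
    }
}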
Other options are to use a database to store the data that drives behaviour.
It can be a very powerful pattern, but it can be abused too, so some discretion is advised.
I think you need to read up on Source Control Management (SCM) and Version Control Systems (VCS).
I would recommend setting up a Git or Subversion repository, adding the code initially to trunk, and then branching off as many branches as there are versions you'll be working on.
The idea of different versions is this:
You're developing your code and have it in your SCM's trunk (otherwise known as HEAD). At some point you consider the code stable enough for a release. You therefore create a tag (let's call it version 1.0). You cannot (should not) make changes to tags -- they're only there as a marker in time for you. If you have a client who has version 1.0 and reports bugs which you would like to fix, you create a branch based on a copy of your tag. The produced version would (normally) be 1.x (1.1, 1.2, etc.). When you're done with your fixes, you tag again and release the new version.
Usually, most of the development happens on your trunk.
When you are ready with certain fixes, or know that certain fixes have already been applied to your trunk, you can merge these changes to other branches, if necessary.
Base any other version on the previous one by reusing the code base, configuration and any other assets. If several versions must be in place at the same time, use configuration management practices. You should probably also consider some routing and client version checks on the server side. This is where "backward compatibility" comes into play.
The main approach is first to find and extract the code that doesn't change from one version to another. Try to maximize this part, to share as much of the code base as possible and to ease maintenance (fixing a bug for one means fixing it for all).
Then it depends on what really changes from one version to another. Ideally, the main project can use some abstract classes or interfaces that you implement for each specific version.
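As a sketch of that idea, using the TCP/IP server from the question (all names invented): the shared core carries the common logic, and each version implements only its variation point.

// Hypothetical shared core: fix a bug in handleMessage once, for every version.
public abstract class DeviceServer {

    public final void handleMessage(byte[] raw) {
        byte[] decoded = decode(raw);
        forwardToApp(decoded);
    }

    private void forwardToApp(byte[] decoded) {
        // Common delivery logic, identical in every version.
    }

    // The variation point each version implements.
    protected abstract byte[] decode(byte[] raw);
}

// One small class per version instead of a second project.
class LegacyProtocolServer extends DeviceServer {
    @Override
    protected byte[] decode(byte[] raw) {
        return raw; // old wire format would be decoded here
    }
}

class CurrentProtocolServer extends DeviceServer {
    @Override
    protected byte[] decode(byte[] raw) {
        return raw; // new wire format would be decoded here
    }
}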