We have a situation where some of our dependencies have conflicting dependencies.
We depend on A & B.
A depends on version a of X.
B depends on version b of X.
Are there any dependency management tools that handle this type of situation? I seem to recall a dependency management tool that dynamically loaded dependencies, which appeared to avoid ever running into a situation like the one above. I think you could somehow specify which version of X to load at a given moment.
Is it possible to do something like that? Is there any way, in code, to load and unload a dependency on an as-needed basis?
I have forgotten most of compiler theory. And I haven't dealt much with dependency management. So excuse any ignorance showing through. It's probably genuine!
You can use OSGi or some other framework that manages multiple class-loaders so that the conflicting versions don't end up in the same class loader.
You can try to do the same thing yourself on a small scale by creating class loaders.
You can use the maven-shade-plugin to rename the packages in one or more copies to avoid the conflict.
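As a rough sketch of the third option (the artifact ids and the relocated package prefix here are made up for illustration), a maven-shade-plugin relocation moves a dependency's packages into a private namespace so the two versions no longer collide:

```xml
<!-- Hypothetical example: relocate the conflicting dependency's
     packages into a private namespace inside the shaded jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.example.x</pattern>
            <shadedPattern>myapp.shaded.com.example.x</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The plugin rewrites both the class files and the bytecode references to them, so the shaded copy is fully independent of the original package.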
What do you normally do when you want to make use of some utility that is part of some Java library? I mean, if you add it as a Maven dependency, does the whole library get included in the assembled JAR?
I don't have storage problems of any sort and I don't mind the JAR getting bloated, but I'm curious whether there's some strategy to reduce the JAR size in this case.
Thanks
It is even worse: you not only add the "whole library" to your project, but also its dependencies, whether you need them or not.
Joachim Sauer is right in saying that you do not bundle dependencies into your artifact unless you want it to be runnable. But IMHO this just moves the problem to a different point. Eventually, you want to run the stuff. At some point, a runnable JAR, a WAR or an EAR is built and it will incorporate the whole dependency tree (minus the fact that you get only one version per artifact).
The Maven Shade plugin can help you minimise your JAR (see "Minimize an Uber Jar correctly" and "Using Shade-Plugin") by only adding the "necessary" classes. But this is of course tricky in general:
Classes referenced by non-Java elements are not found.
Classes used by reflection are not found.
If you have some dependency injection going on, you only use the interfaces in your code. So the implementations might get kicked out.
So your minimized JAR might in general just be too small.
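For reference, minimisation is enabled with a single flag (a sketch; the plugin version is omitted and the caveats above still apply):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <!-- Drop classes that are not statically reachable from the
             project's own code; reflection-only uses are NOT detected. -->
        <minimizeJar>true</minimizeJar>
      </configuration>
    </execution>
  </executions>
</plugin>
```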
As you see I don't know any general solution for this.
What are sensible approaches?
Don't build JARs that have five purposes, but build JARs that are small. To avoid a huge number of different build processes, use multi-module projects.
Don't add libraries as dependencies unless you really need them. It's better to duplicate a method that converts Strings than to add a whole String library just for that purpose.
Just as shown in the picture, our app (Java) references two third-party jars (packageA and packageB), and they reference packageC-0.1 and packageC-0.2 respectively. This would work well if packageC-0.2 were compatible with packageC-0.1. However, sometimes packageA uses something that is no longer supported in packageC-0.2, and Maven will only put one version of packageC on the classpath. This issue is also known as "Jar Hell".
It would be difficult in practice to rewrite packageA or force its developers to update to packageC-0.2.
How do you tackle these problems? This often happens in large companies.
I should add that this problem mostly occurs in big companies, because a big company has a lot of departments and it would be very expensive to make the whole company update a dependency every time certain developers need features from a newer version of some jar. This is not a big deal in small companies.
Any response will be highly appreciated.
Let me throw out a brick to attract a gem first (offer a rough idea to invite better ones).
Alibaba is one of the largest e-commerce companies in the world, and we tackle these problems by creating an isolation container named Pandora. Its principle is simple: package those middlewares together and load them with different ClassLoaders, so that they can work well together even if they reference the same packages in different versions. But this needs a runtime environment provided by Pandora, which runs as a Tomcat process. I have to admit that this is a heavyweight approach. Pandora builds on the fact that the JVM identifies a class by its class loader plus its class name.
If you know someone who may know better answers, please share the link with them.
We are a large company and we have this problem a lot. We have large dependency trees that span several developer groups. What we do:
We manage versions by BOMs (lists of Maven dependencyManagement) of "recommended versions" that are published by the maintainers of the jars. This way, we make sure that recent versions of the artifacts are used.
We try to reduce the large dependency trees by separating the functionality that is used inside a developer group from the one that they offer to other groups.
But I admit that we are still trying to find better strategies. Let me also mention that using "microservices" is a strategy against this problem, but in many cases it is not a valid strategy for us (mainly because we could not have global transactions on databases any more).
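A BOM import of the kind described above looks roughly like the following (the group and artifact ids are placeholders, not our actual artifacts):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Import the maintainers' "recommended versions" BOM, so all
         modules resolve the same versions without repeating them. -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>recommended-versions-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Individual modules then declare their dependencies without versions and pick up whatever the BOM recommends.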
This is a common problem in the java world.
Your best options are to regularly maintain and update dependencies of both packageA and packageB.
If you have control over those applications - make time to do it. If you don't have control, demand that the vendor or author make regular updates.
If both packageA and packageB are used internally, you can use the following practice: have all internal projects in your company refer to a parent in the Maven pom.xml that defines "up to date" versions of commonly used third-party libraries.
For example:
<framework.jersey>2.27</framework.jersey>
<framework.spring>4.3.18.RELEASE</framework.spring>
<framework.spring.security>4.2.7.RELEASE</framework.spring.security>
Therefore, if your projects "A" and "B" both use Spring and both use the latest version of your company's "parent" pom, they should both end up on 4.3.18.RELEASE.
When a new version of spring is released and desirable, you update your company's parent pom, and force all other projects to use that latest version.
This will solve many of these dependency mismatch issues.
Don't worry, it's common in the java world, you're not alone. Just google "jar hell" and you can understand the issue in the broader context.
By the way mvn dependency:tree is your friend for isolating these dependency problems.
I agree with JF Meier's answer. In a Maven multi-module project, unified version management is usually done with a dependencyManagement node in the parent POM: it declares the versions of the shared artifacts in one place, and the dependencies declared in the individual modules then do not need to specify a version at all. It looks like this:
In the parent POM:
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.devzuz.mvnbook.proficio</groupId>
      <artifactId>proficio-model</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
In your module, you do not need to set the version:
<dependencies>
  <dependency>
    <groupId>com.devzuz.mvnbook.proficio</groupId>
    <artifactId>proficio-model</artifactId>
  </dependency>
</dependencies>
This avoids the problem of inconsistent versions.
This question can't be answered in general.
In the past we usually just didn't use different versions of the same dependency. If a version changed, team- or company-wide refactoring was necessary. I doubt that mixing versions is possible with most build tools.
But to answer your question..
Simple answer: Don't use two versions of one dependency within one compilation unit (usually a module)
But if you really have to do this, you could write a wrapper module that references the legacy version of the library.
But my personal opinion is that within one module there should be no need for such constructs, because "one module" should be small enough to be manageable. Otherwise it might be a strong indicator that the project could use some modularization refactoring. However, I know very well that some projects in large companies can be a huge mess where no good option is available. I guess you are talking about a situation where packageA is owned by a different team than packageB... and this is generally a very bad design decision, due to the lack of separation and the inherent dependency problems.
First of all, try to avoid the problem. As mentioned in #Henry's comment, don't use 3rd party libraries for trivial tasks.
However, we all use libraries. And sometimes we end up with the problem you describe, where we need two different versions of the same library. If library 'C' has removed some APIs and added others between the two versions, and the removed APIs are needed by 'A' while 'B' needs the new ones, you have an issue.
In my company, we run our Java code inside an OSGi container. Using OSGi, you can modularize your code into "bundles", which are jar files with some special directives in their manifest file. Each bundle jar has its own classloader, so two bundles can use different versions of the same library. In your example, you could split the application code that uses 'packageA' into one bundle, and the code that uses 'packageB' into another. The two bundles can call each other's APIs, and it will all work fine as long as your bundles do not use 'packageC' classes in the signatures of the methods used by the other bundle (known as API leakage).
To get started with OSGi, you can e.g. take a look at OSGi enRoute.
For my job I use Spark every day. One of the problems comes from dependency conflicts. I can't help but think that they would all go away if people released their jars already shaded to their own namespace.
For internal jars, I'm considering doing this for all our dependencies. Other than a small bit of work, I'm seeing this as a good idea. Is there any drawbacks/risks I'm missing?
Some problems go away with shading, but new problems arise. One problem is that you take away the chance for your users to use a different (patched) version of a dependency than the version used in shading.
But the main risk of shading is that shaded classes end up exposed to clients.
So imagine you have two dependencies a and b, each shading log4j. When you include a and b, you get the classes a.shaded.log4j.Logger (v1.3) and b.shaded.log4j.Logger (v1.4) on your compile/runtime classpath. And you may have your own log4j.Logger (v1.5).
Then you want to do something with all Loggers in your system at runtime, but suddenly you have many different logger classes and class versions at runtime.
So shading is only without risk when you can make sure that clients will never see any instances of shaded classes via the API of your library. But this is very difficult to guarantee. Maybe with modules in Java 9 this will be a little less problematic, but even then, having just one known version of any class on the classpath is much easier to debug and manage than a wild mix of shaded classes with the same names but different versions.
In my project I have the dependencies displaytag and birt-runtime. displaytag needs itext-2.0.1.jar and birt-runtime needs itext-2.1.7.jar. How can I declare this in my pom.xml so that itext-2.0.1.jar is used only by displaytag and itext-2.1.7.jar only by birt-runtime? Could someone let me know how to achieve this?
Thanks in advance.
In a normal Java application this is not possible, because itext 2.1 and 2.0 will share the same classloader.
But Java APIs normally take care of backward compatibility, so it should be possible to include only 2.1.7.
If not, you need multiple classloaders and then it will become complicated.
Existing solutions:
You could try to add an OSGi container to your application and run both dependencies as separate OSGi bundles.
If you run a JBoss application server, you could create one module with displaytag and another one with birt-runtime.
DIY:
I've never done this myself, but you could try to manage your own classloaders in your application and load each of the dependent jars into its own classloader. This article seems to cover the topic.
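The mechanism this relies on can be demonstrated in a few lines: the JVM identifies a class by its classloader plus its name, so the same class file loaded through two sibling classloaders yields two distinct Class objects. This is a minimal sketch (it assumes the demo class is compiled to a regular class file on disk, so the code source location is available):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Demonstrates that class identity = (classloader, class name):
// the same .class file loaded via two isolated loaders produces
// two different Class objects that could carry different "versions".
public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Location of our own compiled classes (stand-in for a dependency jar).
        URL here = ClassLoaderDemo.class.getProtectionDomain()
                                        .getCodeSource().getLocation();

        // Two isolated loaders with a null (bootstrap-only) parent, so each
        // must define the class itself instead of delegating to the app loader.
        URLClassLoader a = new URLClassLoader(new URL[]{here}, null);
        URLClassLoader b = new URLClassLoader(new URL[]{here}, null);

        Class<?> fromA = a.loadClass("ClassLoaderDemo");
        Class<?> fromB = b.loadClass("ClassLoaderDemo");

        System.out.println(fromA == fromB);                            // false
        System.out.println(fromA.getName().equals(fromB.getName()));   // true

        a.close();
        b.close();
    }
}
```

In an application, each loader would point at a different version of the jar, and code run through loader a would see itext 2.0.1 while code run through loader b sees 2.1.7. The hard part is passing objects between the two worlds, which is exactly what containers like OSGi manage for you.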
Short answer: you can't.
Long answer: Maven is a build tool and has no effect on runtime class loading in your application. Normally what it generates is one (or more) artifact(s), typically jar or war files, that may or may not contain your project's dependencies (depending on your POM files).
What you want to achieve is done at runtime by class loaders but under normal circumstances you don't want to tamper with class loading.
What you could do is:
Exclude unnecessary transitive dependencies of a dependency in your pom by defining exclusions, this way only one version of itext would be used. Of course, this only works if your dependencies don't rely on the internals of itext and their public API's are compatible but this might be the cleanest and easiest solution.
Use a framework/container that has stronger control over class loading, such as an OSGi container. These containers provide bundles (somewhat equivalent to artifacts) with "private" class loaders, enabling your application to have multiple versions of the same library loaded without them interfering with each other. This solution, however, has other disadvantages, and I would only recommend it if you're already familiar with OSGi.
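The first option would look roughly like this in your pom.xml (the displaytag coordinates here are illustrative, not checked against your project):

```xml
<!-- Sketch: keep only birt-runtime's itext by excluding the transitive
     copy that displaytag would otherwise pull in. -->
<dependency>
  <groupId>displaytag</groupId>
  <artifactId>displaytag</artifactId>
  <version>1.2</version>
  <exclusions>
    <exclusion>
      <groupId>com.lowagie</groupId>
      <artifactId>itext</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

After this, `mvn dependency:tree` should show only one itext on the classpath, and you can verify that displaytag's PDF export still works against the newer version.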
I am a beginner in Spring and I am trying to learn it by reading some codes scattered around on the internet. When I look at the pom.xml of these codes, I almost always see that people use "spring-core" and "spring-context" next to each other as added dependencies for that project. When I look at the dependency hierarchy, I see that spring-core is already in the spring-context.
So my question: is it necessary to use both? Is there a difference between "spring-core" in the "spring-context" and "spring-core" as a separate artifact?
This is called a transitive dependency. If you don't declare spring-core, you still get it, since spring-context declares it.
Your code will work, today, whether you declare spring-core or not, because you get it anyway thanks to the transitive dependency mechanism. So this is an issue of best practice, not of whether it works or not.
The question to ask is, why do I need spring-core?
If your code doesn't directly reference it, then you don't need it, even if spring-context does. So, let spring-context take care of declaring that it requires spring-core. Don't declare it yourself.
If your code does directly reference something from spring-core, then you do need it, whether or not spring-context needs it. Maybe in a future version, spring-context won't depend on spring-core any more. So, you should declare spring-core.
In one sentence, you should explicitly declare those dependencies that you use yourself, even if they're also brought in transitively.
It's not important on a toy project. But it makes long-term maintenance a lot easier on big projects.
EDIT: spring-core and spring-context are so closely linked that the above advice doesn't matter. A common case where it does matter is when you have library A that depends on logging package L. If your own code uses L, then you'd better declare that dependency explicitly, because A could quite easily switch in future to use a different logging package. On the other hand, it's not so likely that spring-context will switch to a different provider for the functionality of spring-core...
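In pom.xml terms, the advice above amounts to something like this (the version numbers are placeholders):

```xml
<dependencies>
  <!-- Declared because our code uses the ApplicationContext API directly. -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>4.3.18.RELEASE</version>
  </dependency>
  <!-- Also declared explicitly because our code references spring-core
       types directly, even though spring-context already brings it in
       transitively. -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.3.18.RELEASE</version>
  </dependency>
</dependencies>
```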
Both Andrew and Gus have done full justice to your question. I would just like to elaborate some more from the Spring side of things. If you've just begun with Spring I think it's safe to assume that you're probably working on understanding how Dependency Injection works.
You must be going over samples with BeanFactory and ApplicationContext. The bean side of things (like XmlBeanFactory) is contained in the spring-beans jar, and the context side of things (like ClassPathXmlApplicationContext) is contained in the spring-context jar.
But to use either of the two Dependency Injection containers, you'd be making use of certain common classes (like ClassPathResource and FileSystemResource) provided by the core.io package, which is why your application depends on the spring-core jar as well.
Also, notice how the "string" property values that you define in your bean XML are automatically converted to the right data types, like a primitive or a wrapper type. That happens through the PropertyEditor support, also provided by the spring-core jar.