I have 2 Maven artifacts, each in 2 versions, let's say A1, A2, B1, B2. B1 depends on A1, B2 depends on A2. A1 and A2 are very similar; let's say A1 targets Java 7 and A2 targets Java 8 and uses lambdas.
All artifacts are used by our clients and sometimes they install the wrong artifact for their environment.
I want to build a base artifact A, which A1 and A2 will inherit and extend with custom functionality, plus another artifact, A_Client, which chooses at runtime, based on some properties (JDK version and a few others), which Ax module should be used. This way, our clients only have to install A_Client and never have to worry about picking the right version.
B1 and B2 are identical; the only difference is their Ax dependency. If I can merge A1 and A2 somehow, I can offer clients a single B artifact that depends only on A_Client. This way I eliminate the B version hell too.
So, the question:
Is it possible to decide on a dependency at runtime? My guess is that it may be possible using OSGi or custom class loaders, but I have very limited knowledge in both areas, so any help is appreciated.
Maven
Maven is a build tool, so it won't help you at runtime. It's possible to activate different profiles based on the JDK in use, but that depends on the system which builds the artifact, not the system which runs it.
OSGi
I haven't used OSGi yet, but what I do know is that it's a very different deployment and runtime model. It doesn't justify the effort just to prevent your customers from deploying the wrong version.
Deployment process
Not everything should be solved by a technical solution. Deploying the wrong artifact should definitely be addressed by adjusting the deployment process. Think about:
Why did they deploy the wrong artifact?
Do they have a documented deployment process? How should the process be changed?
Can you make the artifacts more distinguishable? Could you rename them?
Can you make it fail fast? It should fail at deploy time, not once it's already in use (see the sketch below).
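If deploy-time validation isn't available in your delivery pipeline, a startup check at least moves the failure to the earliest possible moment. Below is a minimal, illustrative sketch of such a fail-fast guard; the class name and the required version are made up, and a real check would verify whatever your artifact actually requires (Java version, mandatory system properties, etc.):

import java.util.Objects;

// Hypothetical startup guard: fail immediately if the running JVM is older than
// what this artifact was built for, instead of failing much later with an
// UnsupportedClassVersionError somewhere deep inside the code.
public final class RuntimeRequirements {

    private static final int REQUIRED_MAJOR_VERSION = 8; // illustrative value

    private RuntimeRequirements() {
    }

    public static void check() {
        String version = Objects.requireNonNull(System.getProperty("java.specification.version"));
        // "1.7"/"1.8" on old JVMs, "9", "11", ... on newer ones
        int major = version.startsWith("1.")
                ? Integer.parseInt(version.substring(2))
                : Integer.parseInt(version);
        if (major < REQUIRED_MAJOR_VERSION) {
            throw new IllegalStateException("This artifact requires Java " + REQUIRED_MAJOR_VERSION
                    + " or newer, but is running on " + version);
        }
    }
}

Calling RuntimeRequirements.check() from a static initializer of the entry point makes a wrong installation blow up on the very first start instead of at some random point in production.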
This is an old question but deserves a modern answer that makes sense in the now-modular-JDK world.
What I really wanted in the original question is to have a JAR which could execute a different implementation, chosen at runtime, depending on the Java version I'm running. I thought that Maven could do that, since there were no other options available at the time. In the modular world, this is exactly what multi-release JARs solve.
If the Java version is not the issue, but there are 2 implementations that must be chosen between based on a system property, application state or whatever, one can package both as service implementations, put both on the module path, and in the client load them via ServiceLoader and decide at runtime which implementation is needed.
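As a rough sketch of the ServiceLoader approach (the interface and class names below are invented for illustration): the shared artifact defines the service interface, each Ax artifact ships an implementation and registers it in META-INF/services/<fully qualified interface name> (or via a provides ... with ... clause in its module-info.java), and the client simply asks the loader for a suitable implementation.

import java.util.ServiceLoader;

// Hypothetical service interface living in the shared artifact (the "A_Client" idea).
public interface AService {
    boolean worksOnThisRuntime(); // e.g. checks the Java version or some system property
    void doWork();
}

// Client-side selection: iterate over all registered implementations found on the
// class path / module path and pick the first one that declares itself usable here.
final class AServiceLocator {
    static AService locate() {
        for (AService candidate : ServiceLoader.load(AService.class)) {
            if (candidate.worksOnThisRuntime()) {
                return candidate;
            }
        }
        throw new IllegalStateException("No suitable AService implementation found");
    }
}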
Related
Let's say I have a library, which provides two independent plugin interfaces, 2 implementations per plugin, and one parent POM file.
There are also some abstract tests in the "core", which the plugins have to implement and pass to be considered compliant.
From code perspective, the plugin interfaces don't depend on core at all.
Only core depends on plugins.
My assumptions are:
the core abstractions go into the "core" artifact and its tests are packaged in a test-jar.
each implementation of a "plugin" goes into a separate artifact (4 artifacts in this example).
the parent POM also goes into a separate artifact.
I have considered several options about how to structure the dependencies between each artifact, which can be boiled down to these 2:
Leave it at just 6 artifacts. Every "plugin" depends on "core". Every "plugin" and "core" depend on the parent artifact.
This makes it possible for the library users to only specify 2 artifacts in their pom.xml/build.gradle, because "core" is a transitive dependency.
BUT, given I have some changes to the "core" which cause a version bump, I would have to update every plugin implementation to bump the dependency version. Also, if users didn't specify the core explicitly, they now depend on an outdated core.
Extract plugin interfaces into separate artifacts, so that implementations no longer depend on core. Which now creates 8 artifacts.
Now, unlike the previous approach, library users can no longer skip the "core" dependency in their pom.xml/build.gradle - it is now a given that it has to be specified. Which means, they would have to depend on 3 artifacts.
But, overall, any update of the core no longer forces a cascading update of the plugins. The plugin implementations need version bumps only if their respective interface updates.
The downside is probably that I now have 2 more artifacts to maintain.
My questions are:
Which approach is the more correct one? Does it depend on project size or some other factors?
Are there other approaches?
Is it bad that users have to depend on plugins & "core" explicitly, even if plugins transitively bring "core" in the first approach?
Anything that is intrinsic to the problem and cannot be solved? (like, is it a given that 8 artifacts are to be maintained, with no way to minimize that?)
Is it correct to provide abstract tests in the "test-jar", if I want to make sure that all plugin implementations comply with the interface contracts? Or do I have to copy-paste the tests in each plugin implementation?
Reply to @vokail
Generally, if you release a new version of the core, you must release a new version of the plugin, right?
Currently the code is structured in such a way that plugins have no dependencies on the core. With the 1st scheme, if core updates, plugins must update. With the 2nd scheme, if core updates, plugins don't care.
I think it's possible to have more than two plugin implementations
plugin developers need to use only this as a dependency directly
True & true
plugin-api needs only core-api
Currently, I cannot invert the dependency in such a way. Plugins know nothing about the core, except the plugins API.
As a note, there are 2 plugin APIs. Their code doesn't depend on core and their code doesn't depend on each other.
With 1st scheme, all plugin APIs are inside a single core artifact.
With 2nd scheme all plugin APIs are in separate artifacts (so it's 1 core artifact, and 2 separate API artifacts = 3 artifacts in total).
core-api can be implemented by more than one core-impl (in the future)
Mhm... I don't see that happening in the future.
It's better for a plugin implementation to depend on an interface only, not on the core
To clarify, this is what I meant.
From library user perspective, 1st scheme looks like this:
// Implementaton of "A" api, variant 1
implementation 'library:plugin-a1-impl:1.0.0'
// Implementaton of "B" api, variant 2
implementation 'library:plugin-b2-impl:1.0.0'
// Both plugins transitively bring in "library:core:1.0.0".
// But if for example core:1.1.0 is released, it has to be included explicitly
2nd scheme looks like this:
// Implementaton of "A" api, variant 1
// Transitively brings in "library:plugin-a-api" - a new artifact
implementation 'library:plugin-a1-impl:1.0.0'
// Implementaton of "B" api, variant 2
// Transitively brings in "library:plugin-b-api" - a new artifact
implementation 'library:plugin-b2-impl:1.0.0'
// Core has to be explicitly specified, nobody depends on it, only core depends on plugins
implementation 'library:core:1.0.0'
just do one artifact and let people depend on that only (as an example of minimizing that).
Currently there are separate projects that depend on the library, and they use different plugin implementations. Users pick between different implementations of the same APIs depending on shared dependencies.
For example, there's A, and there are 2 implementations: A-oranges, A-apples. If the project already uses oranges, it imports A-oranges. If it already uses apples, it imports A-apples.
In other words, the plugins are more like adapters between the library and external projects.
Another depiction of the differences between 2 options:
Squares represent ".jar" artifacts. Circles inside a square represent interfaces/classes and their dependencies on each other.
It could be said that the code is DIP-compliant (Dependency Inversion Principle): both core and plugin implementations depend on abstractions.
It's only a question of artifact structuring - is it worth extracting abstractions into separate artifacts as well?
I suppose it depends on how, and how often, you release a new version of the core and the plugins. Generally, if you release a new version of the core, you must release a new version of the plugin, right? If not, please specify this.
I'm for solution 2, but with a small difference, as in the following example:
As you can see I've introduced a plugin-api artifact, with only interfaces used by plugins, because:
I think it's possible to have more than two plugin implementations
plugin developers need to use only this as a dependency directly
plugin-api needs only core-api
core-api can be implemented by more than one core-impl (in the future)
Following this approach, your focus will be to design plugin-api as well as you can, stabilize it and then let plugin developers do their job.
What if:
core-impl changes? For example, a bugfix or a new release. Ask yourself: do I need to change core-api, for example to provide a new feature to plugin-api? If so, release a new core-api and then a new plugin-api.
core-api changes? Same as before.
plugin-api changes? You need to change only the plugin-impls.
To answer your questions:
Which approach is the more correct one? Does it depend on project size or some other factors?
There is no "correct one", depends for sure on project size ( just count how many feature/methos/interfaces you have in core-api and plugin-api ), how many developers works on it and how your release process works
Are there other approaches?
See the answer above; you can also look at some big projects like the Apache or Eclipse Foundation ones to learn their patterns, but that depends heavily on the subject and can be a huge task.
Is it bad that users have to depend on plugins & "core" explicitly, even if plugins transitively bring "core" in the first approach?
To my understanding, yes. It's better for a plugin implementation to depend on an interface only, not on the core.
Anything that is intrinsic to the problem and cannot be solved? (like, is it a given that 8 artifacts are to be maintained, with no way to minimize that?)
Well, if you are alone and this is an open source project used only by yourself, don't overengineer it; just do one artifact and let people depend on that only (as an example of minimizing that).
Is it correct to provide abstract tests in the "test-jar", if I want to make sure that all plugin implementations comply with the interface contracts? Or do I have to copy-paste the tests in each plugin implementation?
For me it's better to have a plugin-api and let plugin implementations declare only that; it's clearer and more concise. For tests, I'm not sure whether you plan to test the implementations yourself or "ask" the plugin developers to do the tests. For sure copy-paste is not the right choice; you can use a command pattern or something similar to build these tests, see here.
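One common way to share such contract tests without copy-paste is an abstract base test class shipped in the API's test-jar, which every implementation extends; this is a variant of the same idea as the command-pattern suggestion above. A minimal sketch with JUnit, where the interface and method names are invented:

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

// Hypothetical plugin interface (in reality it lives in the plugin API artifact).
interface PluginA {
    Object process(String input);
}

// Abstract contract test shipped in the API's test-jar. Each implementation module
// extends it and only supplies the factory method, so the assertions are written
// once instead of being copy-pasted into every implementation.
public abstract class PluginAContractTest {

    /** Implementations return the instance under test. */
    protected abstract PluginA createPlugin();

    @Test
    public void processReturnsNonNullForValidInput() {
        assertNotNull(createPlugin().process("some valid input"));
    }

    // ... more contract tests shared by every implementation
}

An implementation module then only declares something like class OrangesPluginATest extends PluginAContractTest and overrides createPlugin() with its own instance.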
After the updated question, I'm still for solution 2; even if there are two separate plugin-apis, it's better to have different plugin-apis.
It's only a question of artifact structuring - is it worth extracting abstractions into separate artifacts as well?
I think yes, in the long run. If you separate them into different artifacts, you can change them independently; for example, changing something in plugin-apiA doesn't affect plugin-apiB. If you change the core, of course, it does.
Note: I think my diagram above can still work; can't you make an abstract set of interfaces for the plugin-api and have a common artifact for them?
If plugin A and B are two distinct types of plugin, then option 2 is the better pick; however:
If A depends on core#v1
If B depends on core#v2
Then core#v2 has to be binary compatible with core#v1, otherwise it will fail when, for example, someone depends on an implementation of A and an implementation of B: they will always have to upgrade the plugin versions in any case.
You could probably use the Java Module System to hide the details (e.g. only export an interface that is likely to never change), which makes solution 2 of Vokail unnecessary in some sense: you don't need a separate core-impl artifact because the Java module system ensures that, apart from your core module, no one accesses the details (the impl). This also allows you to reuse the same package.
If the A and B interfaces are in core, then the likelihood of a binary incompatibility drops.
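To illustrate the module idea (a sketch only; module and package names are invented), the core's module-info.java would export just the stable API packages, so plugins compile only against the interfaces while the implementation stays inaccessible:

// module-info.java of the hypothetical core module
module library.core {
    exports library.core.api;      // the stable interfaces that plugins and users compile against
    // library.core.internal is deliberately NOT exported, so nobody can link against the impl
}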
As shown in the picture, one (Java) app references two third-party jars (packageA and packageB), and they reference packageC-0.1 and packageC-0.2 respectively. It would work well if packageC-0.2 were compatible with packageC-0.1. However, sometimes packageA uses something that is no longer supported in packageC-0.2, and Maven will put only one version of a jar on the classpath. This issue is also known as "jar hell".
It would be difficult in practice to rewrite package A or force its developers to update packageC to 0.2.
How do you tackle these problems? This often happens in large companies.
I should add that this problem mostly occurs in BIG companies, because a big company has a lot of departments and it would be very expensive to have the whole company update a dependency every time certain developers want to use new features of a newer version of some jar. This is not a big deal in small companies.
Any response will be highly appreciated.
Let me "cast a brick to attract jade" first, i.e. offer a rough answer in the hope of drawing out better ones.
Alibaba is one of the largest e-commerce companies in the world, and we tackle these problems by creating an isolation container named Pandora. Its principle is simple: package those middlewares together and load them with different ClassLoaders so that they can work well together even if they reference the same packages in different versions. But this needs a runtime environment provided by Pandora, which runs as a Tomcat process. I have to admit that this is a heavyweight approach. Pandora is built on the fact that the JVM identifies a class by its classloader plus its class name.
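A stripped-down sketch of that classloader-isolation idea (the jar paths and the class name are made up): two sibling classloaders can each load their own copy of packageC, and the JVM treats the two classes as distinct even though they share a name.

import java.net.URL;
import java.net.URLClassLoader;

// Two classloaders, each with its own copy of packageC on its class path.
// Because the JVM identifies a class by classloader + class name, both versions
// can coexist in one process as long as their types never mix in one signature.
public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        URLClassLoader loaderV1 = new URLClassLoader(
                new URL[] { new URL("file:/libs/packageC-0.1.jar") }, null); // null parent: no delegation to the app class path
        URLClassLoader loaderV2 = new URLClassLoader(
                new URL[] { new URL("file:/libs/packageC-0.2.jar") }, null);

        Class<?> widgetV1 = loaderV1.loadClass("com.example.c.Widget"); // hypothetical class
        Class<?> widgetV2 = loaderV2.loadClass("com.example.c.Widget");

        System.out.println(widgetV1 == widgetV2); // false: same name, different classloaders
    }
}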
If you know someone who might know the answer, please share the link with them.
We are a large company and we have this problem a lot. We have large dependency trees that span several developer groups. What we do:
We manage versions by BOMs (lists of Maven dependencyManagement) of "recommended versions" that are published by the maintainers of the jars. This way, we make sure that recent versions of the artifacts are used.
We try to reduce the large dependency trees by separating the functionality that is used inside a developer group from the one that they offer to other groups.
But I admit that we are still trying to find better strategies. Let me also mention that using "microservices" is a strategy against this problem, but in many cases it is not a valid strategy for us (mainly because we could not have global transactions on databases any more).
This is a common problem in the java world.
Your best option is to regularly maintain and update the dependencies of both packageA and packageB.
If you have control over those applications - make time to do it. If you don't have control, demand that the vendor or author make regular updates.
If both packageA and packageB are used internally, you can use the following practice: have all internal projects in your company refer to a parent in the Maven pom.xml that defines "up to date" versions of commonly used third-party libraries.
For example:
<framework.jersey>2.27</framework.jersey>
<framework.spring>4.3.18.RELEASE</framework.spring>
<framework.spring.security>4.2.7.RELEASE</framework.spring.security>
Therefore, if your projects use Spring and they reference the latest version of your company's "parent" POM, they will all use 4.3.18.RELEASE.
When a new version of Spring is released and desirable, you update your company's parent POM and force all other projects to use that latest version.
This will solve many of these dependency mismatch issues.
Don't worry, it's common in the java world, you're not alone. Just google "jar hell" and you can understand the issue in the broader context.
By the way mvn dependency:tree is your friend for isolating these dependency problems.
I agree with the answer of @JF Meier. In a Maven multi-module project, unified version management is usually done through a dependencyManagement section in the parent POM. The dependencies declared there define the versions centrally; modules that declare one of those dependencies in their own dependencies section no longer need to specify a version. It looks like this:
In the parent POM:
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.devzuz.mvnbook.proficio</groupId>
      <artifactId>proficio-model</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
In your module, you do not need to set the version:
<dependencies>
  <dependency>
    <groupId>com.devzuz.mvnbook.proficio</groupId>
    <artifactId>proficio-model</artifactId>
  </dependency>
</dependencies>
This will avoid the problem of inconsistent versions.
This question can't be answered in general.
In the past we usually just didn't use dependencies of different versions. If the version was changed, team-/company-wide refactoring was necessary. I doubt it is possible with most build tools.
But to answer your question:
Simple answer: Don't use two versions of one dependency within one compilation unit (usually a module)
But if you really have to do this, you could write a wrapper module that references the legacy version of the library.
But my personal opinion is that within one module there should not be the need for these constructs because "one module" should be relatively small to be manageable. Otherwise it might be a strong indicator that the project could use some modularization refactoring. However, I know very well that some projects of "large-scale companies" can be a huge mess where no 'good' option is available. I guess you are talking about a situation where packageA is owned by a different team than packageB... and this is generally a very bad design decision due to the lack of separation and inherent dependency problems.
First of all, try to avoid the problem. As mentioned in @Henry's comment, don't use 3rd-party libraries for trivial tasks.
However, we all use libraries. And sometimes we end up with the problem you describe, where we need two different versions of the same library. If library 'C' has removed and added some APIs between the two versions, and the removed APIs are needed by 'A', while 'B' needs the new ones, you have an issue.
In my company, we run our Java code inside an OSGi container. Using OSGi, you can modularize your code in "bundles", which are jar files with some special directives in their manifest file. Each bundle jar has its own classloader, so two bundles can use different versions of the same library. In your example, you could split your application code that uses 'packageA' into one bundle, and the code that uses 'packageB' into another. The two bundles can call each other's APIs, and it will all work fine as long as your bundles do not use 'packageC' classes in the signature of the methods used by the other bundle (known as API leakage).
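To make the API-leakage point concrete, here is a small, purely illustrative sketch (all type names are invented, and the two variants would of course live in separate source files): the exported service must not mention packageC types in its signatures, otherwise the caller is forced onto the same packageC version.

// The bundle's own DTOs, safe to expose across the bundle boundary.
class OrderDto { /* fields omitted */ }
class PriceDto { /* fields omitted */ }

// Leaky: a packageC type crosses the bundle boundary, so every caller must resolve
// the exact same packageC version as this bundle (com.thirdparty.packagec stands
// for the hypothetical third-party library).
interface PricingServiceLeaky {
    com.thirdparty.packagec.PriceResult price(com.thirdparty.packagec.PriceRequest request);
}

// Not leaky: only the bundle's own types appear in the exported signature;
// packageC stays a private implementation detail behind Export-Package.
interface PricingService {
    PriceDto price(OrderDto order);
}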
To get started with OSGi, you can e.g. take a look at OSGi enRoute.
Suppose there is a REST service with several clients. Each client has to do roughly the same things to call the REST service: construct a URL, send a request, deserialize the response, map HTTP return codes to exceptions, etc.
This seems to be duplicate code so maybe it's a good idea to put the REST gateway code into a reusable library. In Java this could look like the following:
There is a jar file which depends on Jersey and contains the REST client code.
The REST client is a simple class or CDI bean.
All clients simply depend on this jar and call the members of the aforementioned class.
So no client has to worry about REST and can just call methods.
Is this a good idea?
(this question originates from this other question)
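For concreteness, such a shared gateway class might look roughly like the following sketch (type names are invented; the standard JAX-RS client API, which Jersey implements, is used for illustration):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

// Hypothetical representation returned by the service.
class CustomerDto {
    public String id;
    public String name;
}

// Hypothetical shared gateway: callers just invoke findCustomer(...) and never
// deal with URLs, Jersey/JAX-RS or HTTP status codes themselves.
public class CustomerGateway {

    private final Client client = ClientBuilder.newClient();
    private final String baseUrl;

    public CustomerGateway(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public CustomerDto findCustomer(String id) {
        return client.target(baseUrl)
                .path("customers").path(id)
                .request(MediaType.APPLICATION_JSON)
                .get(CustomerDto.class); // URL building, deserialization and error mapping hidden here
    }
}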
This can get you into trouble with dependencies. It is quite a typical dependency problem and not specific to REST or Jersey. But let's have a look at the following scenario:
Dependencies
Suppose there are two servers (S1 and S2), two clients (C1 and C2) and two libraries containing the REST client code for accessing the servers (L1 and L2). Both clients need to query both servers, so the call structure looks like this:
C1 ---> L1 ---> S1
  \      ^
   \    /
    X
   /    \
  /      v
C2 ---> L2 ---> S2
Furthermore both L1 and L2 depend on Jersey. Also the container (maybe you are running your application on a Glassfish or Wildfly application server) has a dependency on Jersey (or at least on the jax-rs API [1]).
The simplified dependency structure of C1 looks like this:
             C1
           /  |  \
          /   |   \
         <    v    >
  Container   L1    L2
      |       |     |
      v       v     v
   Jersey   Jersey  Jersey
As long as all three versions of Jersey are the same, everything is fine. No problem whatsoever. But if the versions differ, you may run into nasty NoClassDefFoundErrors, NoSuchMethodErrors and the like [2].
Rigid Structures
If the versions are similar enough you won't run into these errors directly. Maybe it will work for quite a while. But then the following may happen:
You want to update your container. This updates the container's version of Jersey, which may become incompatible. ==> Boom. So using L1 and L2 prevents you from updating. You are stuck with the old version as long as L1 and L2 are not updated.
L2 is updated to a new version of Jersey. You can only use it if L1 and your container are also updated. So you stick to the old version (you can do that because of the loose coupling of REST). But then new functionality is added to S2 which is only usable with a new version of L2 and again you are stuck.
Keep in mind that the errors may or may not occur. There is no guarantee that you get into trouble and there is no guarantee that it will work. It's a risk, a ticking bomb.
On the other hand this is a simple example with just two services and two clients. When there are more, the risk is increasing, of course.
Stable Dependencies Principle
Dependencies should adhere to the Stable Dependencies Principle (SDP), which reads "Depend in the direction of stability." Uncle Bob defines it using the number of afferent and efferent dependencies, but SDP can be understood in a broader sense.
The connection to SDP becomes obvious when we look at the given dependency structure and replace Jersey with Guava. This problem is really common with Guava. The problem is that Guava is typically not backwards compatible. It's a rather unstable library that can be used in an application (which is itself unstable), but you should never use it in a reusable library (which ought to be stable).
With Jersey it's not that obvious because you may consider Jersey quite stable. In fact it is very stable with respect to Bob Martin's definition. But L2 may be quite stable, too. Maybe it's developed by another team, by some guy who left the company and nobody dares to touch it because last year somebody tried to do so which resulted in the C2 team having dependency issues. Maybe L2 is also stable because there is some ongoing quarrel between the managers of the teams developing C1 and L2 so the L2 manager claims there are no resources, no budget or whatever so updating L2 can only be done at the end of next year.
So depending on your code base, your organizational structure, etc., libraries can be more stable than you would wish them to be.
Exceptions and Remedies
There are some languages which have "isolated dependencies", which means you can have multiple versions of the same library in the same application.
You could repackage Jersey (copy the source, change the package name and maintain it yourself). Then you can have the normal Jersey in version X and your repackaged Jersey in version Y. This works, but of course it has its own problems. It's a lot of work and you have to maintain software that was not written by you.
The problem can be negligible if everything is developed in the same team. Then you have control over the whole thing and you don't depend on other teams doing something. Nevertheless as soon as your code base grows, updating to a new version can be much work as you cannot do it gradually anymore but you have to move many services to a new version at once.
Build tools like Maven give you some control over the version you depend on. You may consider marking the Jersey dependency optional in L1 and L2 and provided in C1 and C2. Then it is not included in your war file, so there is just one version of Jersey (the one in the container). As long as this version is interface compatible, it will work [3].
What to Do Instead
I would recommend to just put your representation classes, i.e. your DTOs, into a client jar and let each client decide on which library to use for making the REST call. Typically those REST gateways are quite simple and sharing them between applications is normally not worth the risk. At least think about the issues mentioned here before running into them without even knowing.
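A minimal sketch of that recommendation (names invented): the shared client jar carries nothing but the representation classes, and every client builds the actual call with whatever JAX-RS implementation and version its own container already provides.

// Entire content of the shared client jar: just the DTO, no Jersey dependency at all.
public class CustomerDto {
    public String id;
    public String name;
}

// Each client application then does the (small) REST plumbing itself, e.g.:
//
//   CustomerDto dto = ClientBuilder.newClient()
//           .target(baseUrl).path("customers").path(id)
//           .request(MediaType.APPLICATION_JSON)
//           .get(CustomerDto.class);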
[1] In order to keep the explanation simple, let's neglect the difference between Jersey and the JAX-RS API. The argument is the same.
[2] In reality it's a bit more complicated. Typically your build tool (Maven, etc.) will select one version of Jersey and put it into the war file. The other version is "omitted for conflict" with the selected one. This is OK as long as the versions are compatible. So you end up with two versions: the one in your container and the one in your war file. Whereas interface compatibility should be enough in the former case, it won't be enough for these remaining two versions.
[3] As there is only one version, the problem mentioned in [2] cannot occur. The problem with interface compatibility stays of course.
I have a system consisting of multiple web applications (war) and libraries (jar). All of them use Maven and are under my control (source code, built artifacts in Nexus, ...). Let's say that application A uses library L1 directly and L2 indirectly (it is used from L1). I can easily check the dependency tree top-down from the application, using Maven's dependency:tree or graph:project plugins.
But how can I check who's using my library? From my example, I want to know whether A is the only application (or library) using L1, and whether L2 is used from L1 as well as from some other application, let's say B. Is there any plugin for Maven or Nexus, or should I try to write some script for that? What are your suggestions?
If you wish to achieve this on a repository level, Apache Archiva has a "used by" feature listed under the project information.
This is similar to what mvnrepository.com lists under its "used by" section of an artifact description.
Unfortunately, Nexus does not seem to provide an equivalent feature.
Now I suppose it would be a hassle to maintain yet another repository just for that, but it would probably be easier than what some other answers suggest, such as writing a plugin for Nexus. I believe Archiva can be configured to proxy other repositories.
Update
In fact, there's also a plugin for Nexus to achieve the "used by" feature.
As far as I know, nothing along these lines exists as an open source tool. You could write a Nexus plugin that traverses a repo and checks for usages of your component in all other components by iterating through all the POMs and analyzing them. This would be a rather heavy task to run, though, since it would have to look at all components and parse all the POMs.
In a similar fashion you could do it on a local repository with some other tool. However it probably makes more sense to parse the contents of a repo manager rather than a local repository.
I don't think there's a Maven way to do this. That being said, there are ways of doing this or similar things. Here's a handful of examples:
Open up your projects in your favorite IDE. For instance Eclipse will help you with impact analysis on a class level, which most of the time might be good enough
Use a simple "grep" on your source directory. This sounds a bit brusk (as well as stating the obvious), perhaps, but we've used this a lot
Use dependency analysis tools such as Sonargraph or Lattix
I am not aware of any public libraries for this job, so I wrote a customized app which does it for me.
I work with a distribution which involves more than 70 artifacts bundled together. Many times, after modifying an artifact, I want to ensure the changes are backward compatible (i.e. no compilation errors are introduced in dependent artifacts). To achieve this, it was crucial to know all dependents of the modified artifact.
Hence, I wrote an app which scans through all artifacts under a directory (and its subdirectories), extracts their pom.xml and searches (in the dependency section of the POM) for occurrences of the modified artifact.
(I did this in Java, although a shell/Windows script can do it even more compactly.)
I'll be happy to share code on github, if that could be of any help.
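In case it helps right away, a stripped-down sketch of such a scanner could look like this (plain text matching keeps it short; a real tool would parse the XML, look only inside the dependencies section, and skip the module that declares the artifactId itself):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Walks a directory tree, reads every pom.xml and prints the ones that mention
// the given artifactId, i.e. the candidate dependents of the modified artifact.
public class DependentFinder {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args[0]);   // e.g. the checkout containing all projects
        String artifactId = args[1];      // e.g. "L1"
        String needle = "<artifactId>" + artifactId + "</artifactId>";
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(p -> p.getFileName().toString().equals("pom.xml"))
                 .filter(p -> {
                     try {
                         return new String(Files.readAllBytes(p)).contains(needle);
                     } catch (IOException e) {
                         return false; // unreadable POMs are simply skipped in this sketch
                     }
                 })
                 .forEach(p -> System.out.println("Mentions " + artifactId + ": " + p));
        }
    }
}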
One way that might suit your needs is to create a master POM with all your Maven projects. Then you run the following command on the master POM:
mvn dependency:tree -DoutputType=graphml -DoutputFile=dependency.graphml
Open the generated file in yEd.
I used the instructions found here:
http://www.summa-tech.com/blog/2011/04/12/a-visual-maven-dependency-tree-view/
More interesting is probably: what would you do with this information? Inform the developers of A not to use library L1 or L2 anymore, because it has a critical bug?
In my opinion you should be able to create a blacklist of dependencies/parents/plugins on your repository manager. Once a project tries to deploy/upload itself with a blacklisted artifact, it should fail. I'm saying uploading and not downloading, because that might break a lot of projects. As far as I know, this is not yet available for any repository-manager.
One way to approach this problem is outside Java itself: write an OS-level monitoring script that tracks each fopen() on the jar file in question! Assuming this is a corporate environment, you might have to wait a few weeks (!) so that all consuming processes access the library at least once!
On Windows, you might use Sysinternals Process Monitor to do this:
http://technet.microsoft.com/en-us/sysinternals/bb896645
On Unix variants, you would use DTrace or strace.
IMHO, and also from my experience, looking for a technical solution to such a problem is often overkill. If the reason you want to know who is using your artifact (library) is that you want to ensure backward compatibility when you change it, or something similar, I think it is best done by communicating your changes through traditional channels and encouraging the other teams who might be using your library to talk about it (project blogs, wiki, email, a well-known location for documentation, a jour fixe, etc.).
In theory, you could write a script that crawls through each project in your repository, parses the Maven pom.xml (assuming they all use Maven) and sees whether they have declared a dependency on your artifact. If all the projects in your organization follow the standard Maven structure, it should be easy to write such a script (though if any of those projects depend on your artifact only via a transitive dependency, things get a bit trickier).
I have 2 Java packages, A & B. Let's say that some classes in package B want to use some classes in package A however, when a developer comes along and develops package C (or, say, application C), he/she will use my package B but I do not want him/her to be able to use the classes in A that B is using. That is to say, I want my classes in package A to be package private so that they are hidden from the application developer. However, I do want my own package B to be able to access those classes which are package private. Can this be done in Java? Do I basically just need to bite the bullet and make the classes public and just hope that the user doesn't try to use them? Or, do I need to replicate the classes that are in A inside of B?
My preference would be something that is not hack-y (i.e. I don't want to use reflection). Help?
You can do it with JDK 8 and its Project Jigsaw. You might want to take a look at the Project Jigsaw Quickstart Guide.
Unfortunately, Jigsaw is part of JDK 8 and it is not totally ready yet. It is not expected to be feature complete until January 2013 and will not be released before mid-2013.
However, you can already compile your classes with the JDK 8 preview and make your ideas work.
In this case, your problem can be solved by dividing your application in independent modules. You could do something like:
module foo {
    exports foo;
    permits bar;
    permits baz;
}
Here the module foo can be required only by modules named bar or baz. A dependence from a module of some other name upon foo will not be resolvable at compile time, install time, or run time. If no permits clauses are present then there are no such constraints.
I am not sure whether alternative frameworks like OSGi, of which you can find implementations in Apache Felix and Eclipse Equinox, offer some kind of functionality to implement these levels of encapsulation. You may want to investigate that a bit.
The problem with OSGi, in the absence of Jigsaw, is that whatever rules the framework enforces can be broken by reflection; once Jigsaw is ready for public use, though, these rules will be enforced by Java itself, as you read above, at compile time, install time and run time.
You can do this with OSGi. Android and JDK 6 as targets are not a problem in this case; there are OSGi frameworks running on Android, e.g. see mBedded Server for Android. You can download a free non-commercial version from the link.
You have several options how to do it in OSGi, depending on what you want to achieve.
Option 1 (recommended):
You can put packages A and B in one and the same bundle AB, and export only package B in the manifest of this bundle with Export-Package.
Package/application C or any other "user" app can import package B and use it. It cannot use and doesn't even see package A, because it is internal to the bundle AB. You don't need any special declarations or dependencies at the Java level; this will work with ANY Java version, because modularity and separate bundle class spaces are part of the OSGi basics and don't depend on the latest Java version or anything like that.
Option 2:
If for some reason you want packages A and B separated into different bundles, you can have that too: you export and import the packages in the manifests and then control which bundle has the right to import which package by using permissions (see the OSGi Permission and Conditional Permission services). However, this is more complicated to realize.
Option 3: You can also put package A in a Fragment bundle and allow it to attach to the bundle containing B. In this way B will have access to package A, but at the same time you'll be able to update package A separately during runtime if you want. Since packages in fragments are treated as private for the host bundle (in this case host is the bundle containing package B), Bundle C won't see A. It will only see what is exported by Bundle B.
Since you are not that familiar with OSGi, I recommend starting with Option 1; later, if needed, you can upgrade your approach to Option 3.
@Edwin Dalorzo: It is definitely not true that the rules in OSGi can be broken by reflection. Bundles have separate classloaders in OSGi. You can reflect from bundle C as much as you want on the classes in A, and the only thing you'll get is a ClassNotFoundException - believe me, I have seen it enough times ;)