I am writing a Java library right now that I publish as a Maven artifact and use in a different Java/Groovy project. I was wondering whether in general it is a good idea to write a library that depends on a certain version of Groovy (e.g. has a dependency on groovy-all-2.x.y).
Writing the library in plain Java instead would not be too much of an inconvenience.
What do you think?
Would it be better to use a generous version range for the Groovy dependency, or should I write a plain Java library instead?
I guess it depends on how you want it to be used.
If it's not a utility and you don't think other projects will use it, then do whatever you want.
If it's a utility designed to be used in testing, I don't think a Groovy dependency on the test classpath is too bad. I'm sure some projects would still avoid your utility because of the Groovy dependency.
If it's a general utility that you want people to use everywhere, then I'd say a Groovy dependency is definitely a bad idea. I certainly wouldn't use it, and I'm sure many others would avoid it for the same reason.
If you want maximum adoption of your utility, keep the dependencies as few as possible. Groovy is a huge, bloated dependency that many projects will avoid.
I would say it depends on the intended use of this library. If you only plan on using it yourself and are perfectly fine with the Groovy dependency as it is, then leave it that way. If the library is meant to be used by others, then the easiest thing for them might be for you to write it all in Java, since there is less that can go wrong when they try to use the library. It really comes down to whether the work required to switch it to Java is worth the benefit of having it all in Java.
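If you do keep the Groovy dependency, one way to soften its impact is to declare it so that it is not forced on consumers transitively. A sketch of the Maven POM fragment, assuming Maven and illustrative version numbers:

```xml
<!-- Illustrative coordinates; use whatever Groovy version you build against. -->
<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>2.4.21</version>
    <!-- optional = consumers must opt in to Groovy explicitly; use
         <scope>test</scope> instead if Groovy is only needed by tests -->
    <optional>true</optional>
</dependency>
```

Marking the dependency optional only helps if the Groovy-dependent parts of your API are themselves optional to use; otherwise consumers will hit NoClassDefFoundError at runtime.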
Most of the time, I don't like JavaScript and would prefer strict, compiled languages like Scala, Java, Haskell...
However, one thing that can be nice with JavaScript is being able to easily change the code of external dependencies. For example, if you have a bug and you suspect one of your dependency libraries, you can easily hack around and swap in your own override for a library method and check whether that is better. You can even add methods to the Array or String prototypes and things like that... You could even go into node_modules and temporarily alter the library code there if you wanted to.
In the JVM world, this seems to me like a heavy process just to get started:
Clone the dependency sources
Hack it
Compile it
Publish it to some local maven/ivy repository
Integrate the fixed version in your project
This is a pain; I just don't want to do that more than once a year.
Today I was trying to fix a bug in my app, and the library did not give me enough information. I would have loved to just be able to put a Logger on one line of that library to get better insight into what was happening, but instead I tried to hack with the debugger, with no success (the bug was not reproducible on my machine anyway...).
Isn't there any simple alternative for rapidly altering the code of a dependency?
I would be interested in any solution for Scala, Java, Clojure or any other JVM language.
I'm not looking for a production-deployable solution, just a quick solution to use locally and eventually deployable on a test env.
Edit: I'm talking about library internals that are not intended to be modified by the library author. Please assume that the class to change is final, not replaceable through library configuration, and not injectable into the library in any way.
In Clojure you can re-bind vars, also from other namespaces, by using intern. So as long as the code you want to alter is Clojure code, that's a possible way to monkeypatch.
(intern 'user 'inc dec)
(inc 1)
=> 0
This is not something to do lightly though, since it can and will lead to problems with other code not expecting this behavior. It can be handy to use during development to temporarily fix edge cases or bugs in other libraries, but don't use it in published libraries or production code.
Best to simply fork and fix these libraries, and send a pull request to have it fixed in the original library.
When you're writing a library yourself that you expect people need to extend or overload, implement it in Clojure protocols, where these changes can be restricted to the extending/overloading namespaces only.
I disagree that AspectJ is difficult to use; it, or another bytecode manipulation library, is your only realistic alternative.
Load-time weaving is a definite way around this issue. Depending on how you're using the class in question you might even be able to use a mocking library to achieve the same results, but something like AspectJ, which is specifically designed for augmentation and manipulation, would likely be the easiest.
I have created a library which supports an application, however in the newest version of the application the developer has changed the structure without changing the class names.
So version 1 of the application has classX in package A but version 2 has classX in package B. How can I develop my library in a way which allows supporting both of these in the same build?
Edit: My library is dependent on the application, not the other way around.
That is a bad decision on the application developer's part. If you still want to make it work, you would need to provide skeleton classes with the old structure that delegate calls to the new version of the class, but it would get very dirty.
It is better not to provide backward compatibility if the renaming decision is firm.
Short answer: You can't.
Real answer: Your library should be able to exist independently of any application that uses it. The purpose of a library is to provide a set of reusable, modular code that you can use in any application. If your library is directly dependent on application classes, then a redesign should be seriously considered, as your dependencies are backwards. For example, have A.classX and B.classX both implement some interface (or extend some class) that your library provides, then have the application pass instances of those objects, or Class objects for them, to the library.
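The inversion described above can be sketched in a few lines of Java. All names here are illustrative stand-ins, not from the question: the library owns the interface, and each application version adapts its own classX to it.

```java
// Library-side contract: the library depends only on this interface,
// never on application classes.
interface DataSource {
    String read();
}

// Library code works against the contract.
class LibraryProcessor {
    static String process(DataSource source) {
        return "processed:" + source.read();
    }
}

// Application side: each version wraps its own classX in an adapter.
class V1Adapter implements DataSource {   // would wrap A.classX in version 1
    public String read() { return "v1-data"; }
}

class V2Adapter implements DataSource {   // would wrap B.classX in version 2
    public String read() { return "v2-data"; }
}

public class InversionSketch {
    public static void main(String[] args) {
        // The application, not the library, chooses the concrete class,
        // so a package move in the application never touches the library.
        System.out.println(LibraryProcessor.process(new V1Adapter()));
        System.out.println(LibraryProcessor.process(new V2Adapter()));
    }
}
```

With this shape, a future version 3 moving classX again costs the application one new adapter and the library nothing.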
If your "library" can't be designed this way then consider integrating it into application code, making it a direct part of the application, and come up with a better team workflow for you, the other developer, and others to work on the same project together.
Quick fix answer: Do not provide backward compatibility, as Jigar Joshi states in his answer.
Bad answer: You could hack a fragile solution together with reflection if you really had to. But please note that the "real answer" is going to last in the long run. You are already seeing the issues with the design you have currently chosen (hence your question), and a reflection based solution isn't going to prevent that from happening again (or even be reliable).
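For completeness, the fragile reflection route would look roughly like this. The package names A and B are the hypothetical v1/v2 locations from the question; java.util.ArrayList stands in for a class that actually exists so the sketch runs as-is:

```java
public class ClassLocator {
    // Try each candidate fully qualified name until one resolves.
    static Class<?> resolve(String... candidates) {
        for (String name : candidates) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException ignored) {
                // not in this package; try the next one
            }
        }
        throw new IllegalStateException("class not found in any known package");
    }

    public static void main(String[] args) {
        // "A.classX" is the hypothetical v1 location; ArrayList plays the
        // role of the location that exists at runtime.
        Class<?> cls = resolve("A.classX", "java.util.ArrayList");
        System.out.println(cls.getName()); // prints java.util.ArrayList
    }
}
```

Everything after the lookup (constructors, method calls) also has to go through reflection, which is exactly why this approach is brittle: every rename or signature change fails at runtime instead of at compile time.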
What is the simplest way to add a compile-time step to analyse and modify the source code before it is compiled to byte code?
Can I write this in Java?
Would it be best written as an IDE plugin?
Can I write this in Java?
Yes, definitely. There are numerous Java-based libraries for manipulating bytecode:
Commons BCEL
ASM
Javassist
Would it be best written as an IDE plugin?
In my opinion, no. You didn't mention which IDE you're using, but from my own experience, writing an IDE plugin has a steeper learning curve than adding a custom step to a build tool like Ant/Maven/Gradle. Even if you aren't currently using one of these build tools, in my personal opinion, it would be easier to adopt one of these tools rather than write an IDE plugin.
Also, tying a build step to a particular IDE makes your build less portable. Two things to consider before going this route:
1) How you would run your build on a continuous integration server like Jenkins or Bamboo. It's not impossible to invoke a headless Eclipse/Netbeans build that uses custom plugins on a build server, but it's not nearly as straightforward as running a build that uses "standard" tools like Ant/Maven/Gradle.
2) How would it impact other members of your team? You'd need to find a way to distribute the plugin to each developer, deal with versioning and updates of the plugin, etc. Is everyone on your team using the same IDE?
I don't know anything about your project, your team (if you're working on a team), or the type of software you're developing so these considerations may not apply to you. I've only mentioned them as food for thought based on my own experiences.
What is the simplest way to add a compile-time step to analyse and modify the source code before it is compiled to byte code?
What are you using for your builds? Ant? Maven? Gradle? The exact steps you'd follow are highly dependent on your build tool.
Depending on what you're trying to accomplish, you may not need to write anything at all.
For example, analysing parts of the code and splitting the work into multiple threads where necessary
Check out AspectJ. You can probably write an aspect that intercepts calls to certain methods and submits them to an ExecutorService. There are off-the-shelf plugins to invoke the AspectJ compiler from most common build systems.
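The body of such an aspect's advice would amount to plain java.util.concurrent code. A minimal sketch, where heavyComputation is a made-up stand-in for whatever method the aspect would intercept:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSketch {
    // Stand-in for the intercepted method.
    static int heavyComputation(int x) {
        return x * 2;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // The aspect's advice would replace a direct call with a submit,
        // returning a Future instead of blocking the caller immediately.
        Future<Integer> result = pool.submit(() -> heavyComputation(21));
        System.out.println(result.get()); // prints 42
        pool.shutdown();
    }
}
```

Note the catch: turning a synchronous call into a Future changes the caller's contract, so this only works transparently for calls whose results are not needed immediately.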
If you do want to write something on your own, I think your best bet would be to write a custom Ant task. I suggest an Ant task because it's the lowest common denominator. It can of course be run using Ant, but both Maven and Gradle can invoke Ant tasks as well.
Write a new class that extends Task and do your thing in there.
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

public class MyTask extends Task {
    @Override
    public void execute() throws BuildException {
        // do your bytecode manipulation here...
    }
}
You'd invoke it like this from your Ant script:
<taskdef name="mytask" classname="MyTask" classpath="classes"/>
<mytask/>
Check out the Apache Axis2 code generation task for an example of doing build time code generation and how to deal with classpath issues/accessing your code.
First off, I'm coming (back) to Java from C#, so apologies if my terminology or philosophy doesn't quite line up.
Here's the background: we've got a growing collection of internal support tools written for the web. They use HTML5/AJAX/other buzzwords for the frontend and Java for the backend. These tools utilize a lightweight in-house framework so they can share an administrative interface for security and other configuration. Each tool has been written by a separate author and I expect that trend to continue, so I'd like to make it easy for future authors to stay "standardized" on the third-party libraries that we've already decided to use for things like DI, unit testing, ORM, etc.
Our package naming currently looks like this:
com.ourcompany.tools.framework
com.ourcompany.tools.apps.app1name
com.ourcompany.tools.apps.app2name
...and so on.
So here's my question: should each of these apps (and the framework) be treated as a separate project for purposes of Maven setup, Eclipse, etc?
We could have lots of apps appear here over time, so it seems like separation would keep dependencies cleaner and let someone jump in on a single tool more easily. On the other hand, (1) maybe "splitting" deeper portions of a package structure over multiple projects is a code smell and (2) keeping them combined would make tool writers more inclined to use third-party libraries already in place for the other tools.
FWIW, my initial instinct is to separate them.
What say you, Java gurus?
I would absolutely separate them. For the purposes of Maven, make sure each app/project has the appropriate dependencies to the framework/apps so you don't have to build everything when you just want to build a single app.
I keep my projects separated out, but use a parent pom for including all of the dependencies and other common properties. Individual tools / projects have a name and a reference to the parent project, and any project-specific dependencies, if any. This works for helping to keep to common libraries and dependencies, since the common ones are already all configured, but allows me to focus on the specific portion of the codebase that I need to work with.
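A minimal sketch of that layout (coordinates and versions are illustrative; the group id is taken from the question): the parent POM pins shared third-party versions in dependencyManagement, and each tool inherits them.

```xml
<!-- parent pom.xml -->
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.ourcompany.tools</groupId>
    <artifactId>tools-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.13.2</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>

<!-- app1name/pom.xml -->
<project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.ourcompany.tools</groupId>
        <artifactId>tools-parent</artifactId>
        <version>1.0.0</version>
    </parent>
    <artifactId>app1name</artifactId>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <!-- no version element: it is inherited from the parent -->
        </dependency>
    </dependencies>
</project>
```

Because child POMs omit version elements, bumping a shared library is a one-line change in the parent rather than an edit in every tool.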
I'd definitely separate these kind of things out into separate projects.
You should use Maven to handle the dependencies / build process automatically (both for your own internal shared libraries and third party dependencies). There won't be any issue having multiple applications reference the same shared libraries - you can even keep multiple versions around if you need to.
Couple of bonuses from this approach:
This forces you to think carefully about your API design for the shared projects which will be a good thing in the long run.
It will probably also give you about the right granularity for source code control - i.e. your developers can check out and work on specific applications or backend modules individually.
If there is a section of a project that is likely to be used on more than one project it makes sense to pull that out. It will make it a little cleaner as well if you need to update the code in one of the commonly used projects.
If you keep them together you will have fewer obstacles developing, building and deploying your tools.
We had the opposite situation, having many separate projects. After merging them into one project tree we are much more productive and this is more important to us than whatever conventions happen to be trending.
Is there a tool to detect unneeded jar-files?
For instance, say I have myapp.jar, which I can launch with a classpath containing hibernate.jar, junit.jar and easymock.jar. But actually it will work fine using only hibernate.jar, since the code that calls junit.jar is not reachable.
I realize that reflection might complicate things, but I could live with a tool that ignored reflection. Apart from that, it seems like a relatively simple problem to solve.
If there is no such tool, what is best practices for deciding which dependencies are needed? It seems to me that it must be a common problem.
This is not possible in a system that might use reflection.
That said, a static analysis tool could do a pretty good job if you don't use ANY reflection.
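To see why reflection defeats static reachability analysis, consider this minimal Java sketch: the class name is assembled at runtime, so no scan of the bytecode can determine which jar the call will need.

```java
public class ReflectionEscape {
    public static void main(String[] args) throws Exception {
        // The fully qualified name only exists at runtime; a static
        // analyser sees just string concatenation and Class.forName.
        String name = "java.util." + (args.length > 0 ? args[0] : "ArrayList");
        Object instance = Class.forName(name).getDeclaredConstructor().newInstance();
        System.out.println(instance.getClass().getName()); // prints java.util.ArrayList
    }
}
```

Frameworks that load classes from configuration files (Hibernate itself, JDBC drivers, etc.) do exactly this, which is why a purely static tool can report a jar as unused when it is in fact loaded reflectively.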
Have you taken a look at Dependency Finder?
http://depfind.sourceforge.net/
A handy list of most of the other available Java dependency tools is also available on that site.
I have used http://code.google.com/p/jarjar/ and found it to be pretty good.
Also, you will easily find out whether you have broken anything that relies on reflection if you have a good set of unit/acceptance tests :).
Something to add to Bill K's reply: you might not use reflection at all, but the JARs you are using might. I remember encountering something like that with Xalan and Xerces, where a ClassNotFoundException was thrown at runtime.