Can impact analysis be done in Eclipse? If a few classes and methods need to be changed, how can I find the impact of that change on the rest of the application code (other classes and methods)?
The core issue is that the application is not only core Java: there is also XML, JSP, framework code, etc.
One of the most advanced projects on this topic might be X-Ray.
You can try it and check whether it provides some of the answers you are looking for (note: I have not yet tested it).
X-Ray is an open-source software visualization plug-in for the Eclipse framework. It provides System Complexity View, Class and Package Dependency View for a given Java project.
Other advanced tools exist (but are not free) for exploring code dependencies:
nWire, by SO contributor extraordinaire Zviki Cohen (zvikico)
XDepend, now part of JArchitect (lets you extract, visualize, seek and control the structure of your applications and frameworks)
The simplest way (and still free) to do a quick dependency analysis remains, for me:
CDA - Class Dependency Analyzer
(not directly integrated into Eclipse, but very simple to use)
The simplest method: right-click the class or method you would like to change, select "Refactor" (or press Alt+Shift+T) and then the refactoring you propose to do (rename, move, change method signature, etc.). Then select "Preview" (or "Next", as the case may be). You'll then see the impact of the proposed change. For rename and move class, you'll also get the option to apply the changes to non-Java files. Besides that, you can use the search function.
Try the JRipple Eclipse plugin. It's a good one.
There is a plugin for jQAssistant available, which brings Test Impact Analysis to the Java world. The plugin is called jQAssistant Test Impact Analysis and is available via https://github.com/jqassistant-contrib/jqassistant-test-impact-analysis-plugin.
Most of the time, I don't like JavaScript and would prefer strict, compiled languages like Scala, Java, or Haskell...
However, one thing that can be nice with JavaScript is being able to easily change the code of external dependencies. For example, if you have a bug and you think it's caused by one of your dependency libraries, you can easily hack around and swap a library method with your own override and check if things get better. You can even add methods to the Array or String prototypes and things like that... One could even go into node_modules and alter the library code there temporarily.
In the JVM world this seems like a heavy process just to get started:
Clone the dependency sources
Hack it
Compile it
Publish it to some local Maven/Ivy repository (see the command sketch after this list)
Integrate the fixed version in your project
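For reference, the "publish locally" step with Maven is something like this (all coordinates hypothetical):
mvn install:install-file -Dfile=patched-lib-1.2.3.jar \
    -DgroupId=com.example -DartifactId=patched-lib \
    -Dversion=1.2.3-PATCHED -Dpackaging=jar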
This is a pain; I just don't want to do that more than once a year.
Today I was trying to fix a bug in my app, and the lib did not give me enough information. I would have loved to just put a Logger on one line of that lib to get better insight into what was happening, but instead I tried to hack around with the debugger, with no success (the bug was not reproducible on my computer anyway...).
Isn't there any simple alternative for rapidly altering the code of a dependency?
I would be interested in any solution for Scala, Java, Clojure or any other JVM language.
I'm not looking for a production-deployable solution, just a quick solution to use locally, and possibly deployable to a test environment.
Edit: I'm talking about library internals that are not intended to be modified by the library author. Please assume that the class to change is final, not replaceable through library configuration, and not injectable into the library in any way.
In Clojure you can re-bind vars, also from other namespaces, by using intern. So as long as the code you want to alter is Clojure code, that's a possible way to monkeypatch.
(intern 'user 'inc dec)  ; rebind the var named inc in the user namespace to dec
(inc 1)                  ; inc now resolves to the rebound var
=> 0
This is not something to do lightly though, since it can and will lead to problems with other code not expecting this behavior. It can be handy to use during development to temporarily fix edge cases or bugs in other libraries, but don't use it in published libraries or production code.
Best to simply fork and fix these libraries, and send a pull request to have it fixed in the original library.
When you're writing a library yourself that you expect people will need to extend or overload, implement it using Clojure protocols, so that these changes can be restricted to the extending/overloading namespaces only.
I disagree that AspectJ is difficult to use; it, or another bytecode manipulation library, is your only realistic alternative.
Load-time weaving is a definite way around this issue. Depending on how you're using the class in question you might even be able to use a mocking library to achieve the same results, but something like AspectJ, which is specifically designed for augmentation and manipulation, would likely be the easiest.
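To make that concrete, here is a minimal sketch of an annotation-style AspectJ aspect that wraps a library method with logging without touching the library source. The library class com.example.lib.Parser and its parse method are hypothetical, and this assumes load-time weaving is enabled (starting the JVM with -javaagent:aspectjweaver.jar and declaring the aspect in META-INF/aop.xml):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class LibraryTracingAspect {
    // Runs around every execution of the (hypothetical) library method,
    // so we can log its inputs and output without recompiling the library.
    @Around("execution(* com.example.lib.Parser.parse(..))")
    public Object traceParse(ProceedingJoinPoint pjp) throws Throwable {
        System.out.println("parse() called with "
                + java.util.Arrays.toString(pjp.getArgs()));
        Object result = pjp.proceed(); // invoke the original library code
        System.out.println("parse() returned " + result);
        return result;
    }
}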
I have a scenario where I have code written against version 1 of a library but I want to ship version 2 of the library instead. The code has shipped and is therefore not changeable. I'm concerned that it might try to access classes or members of the library that existed in v1 but have been removed in v2.
I figured it would be possible to write a tool to do a simple check to see if the code will link against the newer version of the library. I appreciate that the code may still be very broken even if the code links. I am thinking about this from the other side - if the code won't link then I can be sure there is a problem.
As far as I can see, I need to run through the bytecode checking for references, method calls and field accesses to library classes then use reflection to check whether the class/member exists.
I have a three-fold question:
(1) Does such a tool exist already?
(2) I have a niggling feeling it is much more complicated than I imagine, and that I have missed something major - is that the case?
(3) Do you know of a handy library that would allow me to inspect the bytecode such that I can find the method calls, references etc.?
Thanks!
I think that Clirr - a binary compatibility checker - can help here:
Clirr is a tool that checks Java libraries for binary and source compatibility with older releases. Basically you give it two sets of jar files and Clirr dumps out a list of changes in the public api. The Clirr Ant task can be configured to break the build if it detects incompatible api changes. In a continuous integration process Clirr can automatically prevent accidental introduction of binary or source compatibility problems.
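If you build with Maven, there is also a clirr-maven-plugin; as far as I remember, a check against a previous release looks something like this, where comparisonVersion names the released version to diff against (the version number here is hypothetical):
mvn clirr:check -DcomparisonVersion=1.0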
Changing the library in your IDE will reveal all possible compile-time errors.
You don't need anything else, unless your code uses another library, which in turn uses the updated library.
Be especially wary of Spring configuration files. Class names are configured as text and don't show up as missing until runtime.
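For example, a bean definition fragment like this one (class name hypothetical) compiles fine whatever is on the classpath, and only fails when the application context is loaded:
<!-- the class attribute is plain text as far as the compiler is concerned -->
<bean id="service" class="com.example.OldService"/>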
If you have access to the source code, you could just compile the source against the new library. If it doesn't compile, you definitely have a problem. If it compiles, you may still have a problem if the program uses reflection, some kind of IoC stuff like Spring, etc.
If you have unit tests, then you have a better chance of catching any linking errors.
If you only have the program's .class files, then I don't know of any tools that would help, besides decompiling the class files to source and compiling that source again against the new library, but that doesn't sound too healthy.
The checks you mentioned are done by the JVM/Java class loader, see e.g. Linking of Classes and Interfaces.
So "attempting to link" can be simply achieved by trying to run the application. Of course you could hoist the checks to run them yourself on your collection of .class/.jar files. I guess a bunch of 3rd party byte code manipulators like BCEL will also do similar checks for you.
I notice that you mention reflection in the tags. If you load classes/invoke methods through reflection, there's no way to analyse this in general.
Good luck!
We are currently studying a Java-based tool, primarily a reporting tool. It was developed in the 2000/2001 period and uses many open-source libraries like Apache Avalon, the MX4J adaptor, edu.oswego (the Java concurrent package), etc. The tool uses JDK 1.3.1 and the goal is to upgrade to JDK 1.5. We have also been asked to remove these 'outdated' packages and replace them with standard Java packages if possible.
Unfortunately, while we have the code available for study, it lacks any documentation, and it is really difficult to track the flow during debugging (the total number of classes written might be more than 1000).
What's the best way to understand this kind of tool? Is there any graphical tool to see the relationships between the classes?
Thanks,
SR
You could try some of the source code analyzer plugins for Eclipse. Tools like DIVER or X-Ray might be useful.
That's a common problem (unfortunately), and again unfortunately there is no easy solution.
There are many tools to help you (see below), but these are only helpers, they will not solve the problem for you.
I have found that a systematic approach is best. There is a good article on this:
"Swallowing an elephant in 10 easy steps", about understanding a large, undocumented system. It's about Perl, but the ideas are independent of the language.
Some tools that might help:
Step through interesting parts in a debugger (e.g. Eclipse's debugger)
Use Eclipse's "Call hierarchy" and "find references" to understand which part of the code uses what
Run tests with simple input data, understand what they produce
Write javadocs into the code documenting what you found, possibly correcting existing docs
Use tools to visualize class dependencies. I have used JDepend with some success (a quick sketch follows below); there are many others.
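A minimal sketch of driving JDepend from code, assuming the compiled classes live in build/classes (the path is hypothetical):

import java.util.Collection;
import jdepend.framework.JDepend;
import jdepend.framework.JavaPackage;

public class DependencyReport {
    public static void main(String[] args) throws Exception {
        JDepend jdepend = new JDepend();
        jdepend.addDirectory("build/classes"); // hypothetical class-file dir
        Collection packages = jdepend.analyze();
        for (Object o : packages) {
            JavaPackage p = (JavaPackage) o;
            // afferent = who depends on me, efferent = whom I depend on
            System.out.println(p.getName()
                    + " Ca=" + p.afferentCoupling()
                    + " Ce=" + p.efferentCoupling());
        }
    }
}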
Eclipse (and newer versions of NetBeans, and perhaps IntelliJ) has wonderful tools for analyzing large codebases:
Call hierarchy (CTRL + ALT + H) - you see the hierarchies of calls to/from a given method
Type hierarchy (F4) - you see the whole inheritance structure
Data hierarchy
Right click on item > References
many different search options
Any graphical tool to see the relationship between the classes?
If you want to see the relationships between classes you could try Green UML. It creates a nice UML class diagram out of your repository. It works in Eclipse.
I hope that helps.
You can do it easily in NetBeans.
Select the method signature and press Alt+F7 (or alternatively right-click and then click "Find Usages"); this will show you where a particular method is being called from.
The second option is a little hectic but may give some results: configure log4j for your project and add proper logging code to each method, as in the sketch below.
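A minimal sketch of that idea with log4j 1.x, assuming log4j is on the classpath and configured (class and method names are hypothetical):

import org.apache.log4j.Logger;

public class ReportService {
    private static final Logger LOG = Logger.getLogger(ReportService.class);

    public void generate(String reportId) {
        LOG.debug("generate() entered, reportId=" + reportId);
        // ... the actual work ...
        LOG.debug("generate() done");
    }
}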
Is there an IDE/tool/script/something that can show the call hierarchy and/or data flow in Scala+Java programs (preferably from source code)?
Or (as a backup plan) is there a tool that can show it using Java bytecode? (And preferably give the option to go to source code, if provided by user).
All that, preferably integrated into an IDE and/or Maven :-)
The requirement to support Scala is crucial in this question. I already know of, and use, such tools for Java in three IDEs. They do not work very well (actually: at all) when Scala is involved.
TIA
Poor man's call hierarchy: Comment the method out and see where your red squigglies show up. [/me ducks]
Did you try Eclipse?
SBT can do that. You'll have to check it out to get more information, because I haven't done it.
EDIT
Sorry, I confused things. SBT can generate component dependencies, not call hierarchy.
I have a rather large (several MLOC) application at hand that I'd like to split up into more maintainable separate parts. Currently the product comprises about 40 Eclipse projects, many of them having inter-dependencies. This alone makes a continuous build system unfeasible, because it would have to rebuild very much with each check-in.
Is there a "best practice" way of how to
identify parts that can immediately be separated
document inter-dependencies visually
untangle the existing code
handle "patches" we need to apply to libraries (currently handled by putting them in the classpath before the actual library)
If there are (free/open) tools to support this, I'd appreciate pointers.
Even though I do not have any experience with Maven, it seems to force a very modular design. I wonder whether this is something that can be retrofitted iteratively, or whether a project that wants to use it has to be laid out with modularity in mind right from the start.
Edit 2009-07-10
We are in the process of splitting out some core modules using Apache Ant/Ivy. A really helpful and well-designed tool, not imposing as much on you as Maven does.
I wrote down some more general details and personal opinion about why we are doing that on my blog - too long to post here and maybe not interesting to everyone, so follow at your own discretion: www.danielschneller.com
Using OSGi could be a good fit for you. It would allow you to create modules out of the application. You can also organize dependencies in a better way. If you define the interfaces between the different modules correctly, then you can use continuous integration, as you only have to rebuild the module that your check-in affected.
The mechanisms provided by OSGi will help you untangle the existing code. Because of the way the classloading works, it also helps you handle the patches in an easier way.
Some concepts of OSGi that seem to be a good match for you, as described on Wikipedia (a small manifest sketch follows the list):
The framework is conceptually divided into the following areas:
Bundles - Bundles are normal jar components with extra manifest headers.
Services - The services layer connects bundles in a dynamic way by offering a publish-find-bind model for plain old Java objects (POJOs).
Services Registry - The API for management services (ServiceRegistration, ServiceTracker and ServiceReference).
Life-Cycle - The API for life cycle management (install, start, stop, update, and uninstall bundles).
Modules - The layer that defines encapsulation and declaration of dependencies (how a bundle can import and export code).
Security - The layer that handles the security aspects by limiting bundle functionality to pre-defined capabilities.
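To give an idea of the "extra manifest headers" mentioned above, a bundle declares what it exports and imports in a few lines of its META-INF/MANIFEST.MF (names and versions here are hypothetical):
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.reporting.core
Bundle-Version: 1.0.0
Export-Package: com.example.reporting.api
Import-Package: org.apache.log4j;version="1.2.0"
The framework refuses to resolve the bundle unless some other bundle exports the imported packages, which is exactly the kind of dependency control described above.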
First: good luck & good coffee. You'll need both.
I once had a similar problem: legacy code with awful circular dependencies, even between classes from different packages, like org.example.pkg1.A depending on org.example.pkg2.B and vice versa.
I started with maven2 and fresh eclipse projects. First I tried to identify the most common functionalities (logging layer, common interfaces, common services) and created maven projects. Each time I was happy with a part, I deployed the library to the central nexus repository so that it was almost immediately available for other projects.
So I slowly worked up through the layers. maven2 handled the dependencies and the m2eclipse plugin provided a helpful dependency view. BTW, it's usually not too difficult to convert an Eclipse project into a Maven project; m2eclipse can do it for you, and you just have to create a few new folders (like src/main/java) and adjust the build path for source folders. It takes just a minute or two. But expect more difficulties if your project is an Eclipse plugin or RCP application and you want Maven not only to manage artifacts but also to build and deploy the application.
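A freshly extracted module needs little more than a minimal POM; a sketch with hypothetical coordinates (packaging defaults to jar):
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.legacy</groupId>
  <artifactId>logging-layer</artifactId>
  <version>1.0.0</version>
</project>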
In my opinion, Eclipse, Maven and Nexus (or any other Maven repository manager) are a good basis to start from. You're lucky if you have good documentation of the system architecture and this architecture is really implemented ;)
I had a similar experience with a small code base (40 kloc). There are no "rules":
I compiled with and without a "module" in order to see its usage
I started from "leaf modules", modules without other dependencies
I handled cyclic dependencies (this is a very error-prone task)
with Maven there is a great deal of documentation (reports) that can be deployed in your CI process
with Maven you can always see what uses what, both in the site and in NetBeans (with a very nice directed graph)
with Maven you can import library code into your codebase, apply source patches and compile it with your products (sometimes this is very easy, sometimes it is very difficult)
Check also Dependency Analyzer (screenshot at javalobby.org) and the NetBeans dependency graph (screenshot at zimmer428.net).
Maven is painful to migrate to for an existing system. However, it can cope with 100+ module projects without much difficulty.
The first thing you need to decide is what infrastructure you will move to. Should it be a lot of independently maintained modules (which translates to individual Eclipse projects), or will you treat it as a single chunk of code which is versioned and deployed as a whole? The first is well suited to migrating to a Maven-like build environment, the latter to having all the source code in at once.
In any case you WILL need a continuous integration system running. Your first task is to make the code base build automatically, so your CI system can watch over your source repository and rebuild when you change things. I decided on a non-Maven approach here: we focus on having an easy Eclipse environment, so I created a build environment using ant4eclipse and Team ProjectSet files (which we use anyway).
The next step would be getting rid of the circular dependencies - this will make your build simpler, get rid of Eclipse warnings, and eventually allow you to get to the "checkout, compile once, run" stage. This might take a while :-( When you migrate methods and classes, do not MOVE them; instead, extract or delegate them, leave their old name lying around, and mark them deprecated (see the sketch below). This will separate your untangling from your refactoring, and allow code "outside" your project to still work with the code inside your project.
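A minimal sketch of that extract-and-delegate idea (all package and class names hypothetical); the old class stays where outside code expects it and simply forwards:

// Old location - a file kept in place so external callers keep compiling.
package org.example.legacy;

/** @deprecated moved to org.example.reporting.ReportRenderer */
@Deprecated
public class ReportRenderer {
    public String render(String reportId) {
        // pure delegation to the code's new home
        return new org.example.reporting.ReportRenderer().render(reportId);
    }
}

// New location - a separate file in the untangled module.
package org.example.reporting;

public class ReportRenderer {
    public String render(String reportId) {
        return "rendered " + reportId; // the actual moved logic
    }
}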
You WILL benefit from a source repository which allows moving files while keeping their history. CVS is very weak in this regard.
I wouldn't recommend Maven for a legacy source code base. It could give you many headaches just trying to adapt everything to work with it.
I suppose what you need is to do an architectural layout of your project. A tool might help, but the most important part is to organize a logical view of the modules.
It's not free, but Structure101 will give you about as good as you will get in terms of tool support for hitting all your bullet points. But for the record, I'm biased, so you might want to check out SonarJ and Lattix too. ;-)