I help maintain and build on a fairly large Swing GUI, with a lot of complex interaction. Often I find myself fixing bugs that are the result of things getting into odd states due to some race condition somewhere else in the code.
As the code base gets large, I've found it's gotten less consistent about specifying via documentation which methods have threading restrictions: most commonly, methods that must be run on the Swing EDT. Similarly, it would be useful to know, and to have static awareness of, which of our custom listeners are specified to be notified on the EDT.
So it came to me that this should be something that could be easily enforced using annotations. Lo and behold, there exists at least one static analysis tool, CheckThread, that uses annotations to accomplish this. It seems to allow you to declare a method to be confined to a specific thread (most commonly the EDT), and will flag methods that try to call that method without also declaring themselves as confined to that thread.
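To make that concrete, here is a rough sketch of what such declarations might look like; the @ConfinedTo annotation below is a made-up stand-in for illustration, not necessarily CheckThread's actual API.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical confinement annotation; CheckThread's real annotations may be named differently.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.CLASS)
@interface ConfinedTo {
    String value(); // e.g. "EDT"
}

class StatusPanel {
    @ConfinedTo("EDT")
    void showStatus(String text) {
        // touches Swing components, so it must run on the EDT
    }

    @ConfinedTo("EDT")
    void onSaveClicked() {
        showStatus("Saving..."); // fine: the caller declares the same confinement
    }

    void backgroundSave() {
        showStatus("Saving..."); // a checker would flag this: caller declares no EDT confinement
    }
}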
So on the surface this just seems like a low-pain, huge-gain addition to the source and build cycle. My questions are:
Are there any success stories for people using CheckThread or similar libraries to enforce threading constraints? Any stories of failure? Why did it succeed/fail?
Is this good in theory? Are there theoretical downsides?
Is this good in practice? Is it worth it? What kind of value has it delivered?
If it works in practice, what are good tools to support this? I've just found CheckThread but admit I'm not entirely sure what I'm searching for to find other tools that do the same thing.
I know whether it's right for us depends on our scenario. But I've never heard of people using something like this in practice, and to be honest it doesn't seem to have taken hold much from some general browsing. So I'm wondering why.
This answer is more focused on the theory aspect of your question.
Fundamentally you are making an assertion: "This method runs only on certain threads". This assertion isn't really different from any other assertion you might make ("The method accepts only integers less than 17 for parameter X"). The issues are:
Where do such assertions come from?
Can static analyzers check them?
Where do you get such a static analyzer?
Mostly such assertions have to come from the software designers, as they are the only people that know the intentions. The traditional term for this is "Design by Contract",
although most DBC schemes cover only the current program state (C's assert macro), and they should really cover the program's past and future states ("temporal assertions"), e.g., "This routine will allocate a block of storage, and eventually some piece of code will deallocate it". One can build tools that try to determine heuristically what the assertions are (e.g., Engler's assertion-induction work; others have done work in this area). That's useful, but the false positives are an issue.

As a practical matter, asking the designers to code such assertions doesn't seem particularly onerous, and it makes for really good long-term documentation. Whether you code such assertions with a specific "contract" language construct, with an if statement ("if Debug && Not(assertion) Then Fail();"), or hide them in an annotation is really just a matter of convenience. It's nice when the language allows you to code such assertions directly.
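For instance, the EDT restriction from the question can be coded as a plain current-state assertion in Java; this is just a sketch, and the helper name is made up:

import javax.swing.SwingUtilities;

final class ThreadContracts {
    // Current-state assertion: "the caller runs on the Swing EDT".
    static void assertOnEdt() {
        if (!SwingUtilities.isEventDispatchThread()) {
            throw new IllegalStateException(
                "Expected the Event Dispatch Thread, but was: "
                    + Thread.currentThread().getName());
        }
    }
}

// Inside a method with the threading restriction:
//   void updateStatusLabel(String text) {
//       ThreadContracts.assertOnEdt(); // fails fast when the contract is violated
//       statusLabel.setText(text);
//   }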
Checking such assertions statically is difficult. If you stick with current-state-only assertions, the static analyzer pretty much has to do full data flow analysis of your entire application, because the information needed to satisfy the assertion likely comes from data created by another part of the application. (In your case, the "inside EDT" signal has to come from analyzing the whole call graph of the application to see if there is any call path that leads to the method from a thread which is NOT the EDT.) If you use temporal properties, the static check pretty much needs some kind of state-space verification logic in addition; these are presently still pretty much research tools. Even with all this machinery, static analyzers generally have to be "conservative" in their analyses: if they can't demonstrate that a suspected problem is absent, they pretty much have to assume it is present, because of the halting problem.
Where do you get such analyzers? Given all the machinery needed, they're hard to build, so you should expect them to be rare. If somebody has built one, great. If not... as a general rule, you don't want to do this yourself from scratch. The best long-term hope is to have generic program analysis machinery available on which to build such analyzers, to amortize the cost of building all the infrastructure. (I build program analyzer tool foundations; see our DMS Software Reengineering Toolkit.)
One way to make it "easier" to build such static analyzers is to restrict the cases they handle to a narrow scope, e.g., CheckThread. I'd expect CheckThread to do exactly what it presently does, and it is unlikely to get a lot stronger.
The reason that "assert" macros and other such dynamic "current state" checks are popular is that they can actually be implemented by a simple runtime test. That's pretty practical. The problem here is that you may never exercise a path that leads to a failed condition. So, for dynamic analysis, absence of detected failure is not really evidence of correctness. Still, it feels good.
Bottom line: static analyzers and dynamic analyzers each have their strengths.
We haven't tried any static analysis tools, but we've used AspectJ to write a simple aspect that detects at runtime when any code in java.awt or javax.swing is invoked outside the EDT. It has found several places in our code that were missing a SwingUtilities.invokeLater(). We run with this aspect enabled throughout our QA cycle, then turn it off shortly before release.
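For reference, a minimal sketch of that kind of aspect is below (annotation-style AspectJ, assuming compile-time weaving of the application code; the class name is invented and the real aspect may well differ):

import javax.swing.SwingUtilities;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class EdtViolationAspect {

    // Advise calls from woven application code into Swing/AWT, excluding the aspect itself.
    @Before("(call(* javax.swing..*.*(..)) || call(* java.awt..*.*(..))) && !within(EdtViolationAspect)")
    public void checkEdt(JoinPoint joinPoint) {
        if (!SwingUtilities.isEventDispatchThread()) {
            // Surface the violation with a stack trace during QA runs.
            System.err.println("Swing/AWT call off the EDT: " + joinPoint.getSignature());
            Thread.dumpStack();
        }
    }
}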
As requested, this doesn't pertain specifically to Java or the EDT, but I've seen good results with Coverity's concurrency static analysis checkers for C/C++. They did have a higher false positive rate than less complicated checkers, but the code owners seemed willing to put up with that, given how hard threading bugs can be to find via testing. The details are confidential, I'm afraid, but Dawson Engler's public papers (e.g., "Bugs as Deviant Behavior") are very good on the general approach of "The following «N» instances of your code do «X» before doing «Y»; this instance doesn't."
I work on a large legacy Java 8 (Android) application. We recently found a bug that was caused by an ignored method result: specifically, a caller of a send() method didn't take the right actions when the sending failed. It's been fixed, but now I want to add some static analysis to help find other existing bugs of the same nature in our code, and additionally to prevent new bugs of the same nature from being added in the future.
We already use FindBugs, PMD, Checkstyle, Lint, and SonarQube, so I figured that one of these probably already has the check I'm looking for and it just needs to be enabled. But after a few hours of searching and testing, I don't think that's the case.
For reference, this is the code I was testing with:
public class Application {
    public static void main(String[] args) {
        foo(); // I want this to be caught
        Bar aBar = new Bar();
        aBar.baz(); // I want this to be caught
    }

    static boolean foo() {
        return System.currentTimeMillis() % 2 == 0;
    }
}

public class Bar {
    boolean baz() {
        return System.currentTimeMillis() % 2 == 0;
    }
}
I want to catch this on the caller side since some callers may use the value while others do not. (The send() method described above was such a case.)
I found the following existing static analysis rules, but they only seem to apply in very specific circumstances (to avoid false positives) and don't work on my example:
1. Return values from functions without side effects should not be ignored (only for immutable classes in the Java API)
2. Method ignores exceptional return value (only for known methods like File.delete())
3. Method ignores return value (only for methods annotated with javax.annotation.CheckReturnValue I think...)
4. Method ignores return value, is this OK? (only when the return value is the same type as the type the method is invoked on)
5. Return value of method without side effect is ignored (only when the method does not produce any effect other than return value)
So far the best option seems to be #3, but it requires me to annotate EVERY method or class in my HUGE project. Java 9+ seems to allow annotating at the package level, but that's not an option for me; even if it was, the project has A LOT of packages. I really would like a way to configure this for my whole project from one or a few locations instead of needing to modify every file.
Lastly, I came across a Stack Overflow answer that showed me that IntelliJ has this check with a "Report all ignored non-library calls" option. Enabling it works as far as highlighting in the IDE goes, but I want this to cause a CI failure. I found there's a way to trigger this inspection from the command line using IntelliJ's tools, but that still outputs an XML/JSON file and I'd need to write custom code to parse that output. I'd also need to install IDE tools onto the CI machine, which seems like overkill.
Does anyone know of a better way to achieve what I want? I can't be the first person who cares only about false negatives and doesn't care about false positives. I feel like it should be manageable to require that any currently unused return value either be flagged or be explicitly marked as intentionally ignored, via an annotation or an assign-to-a-variable convention like the one Error Prone uses.
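For what it's worth, here is a hedged sketch of option #3 in use; it assumes the JSR-305 javax.annotation.CheckReturnValue annotation is on the classpath, and the class and method names are made up:

import javax.annotation.CheckReturnValue;

public class Mailer {

    @CheckReturnValue // FindBugs/SpotBugs and Error Prone flag callers that drop this result
    boolean send(String message) {
        return message != null && !message.isEmpty();
    }

    void caller() {
        send("hello");                  // would be reported: return value ignored
        boolean unused = send("hello"); // Error Prone-style convention for "ignored on purpose"
    }
}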
Scenarios like the one you describe invariably give rise to a substantial software defect (a true bug in every respect), made more frustrating and knotty because the code fails silently, which is what allowed the problem to remain hidden. Your desire to identify any similar hidden defects (and correct them) is easy to understand; however, I humbly suggest that static code analysis may not be the best strategy:
Working from the concerns you express in your question: a CheckReturnValue rule runs a high risk of producing a cascade of //Ignore code comments, rule-violation suppress clauses, and/or rule-suppression annotations that far outnumber the rule's positive defect detections.
The Java programming language further increases the likelihood of a high rule-suppression count once you take Java garbage collection into consideration and how it affects day-to-day development. Java garbage collection is based on reachability: instances that no longer have any live references become eligible for collection. It therefore makes perfect sense for Java developers to avoid holding unnecessary references, and to naturally adopt the practice of ignoring unimportant method call return values. The ignored instances simply fall off the local call stack, become unreachable, and are quickly garbage collected.
Shifting now from a negative perspective to a positive one, I offer some alternatives for your consideration that (I believe) will improve your results, as well as your probability of reaching a successful outcome.
Based on your description of the scenario and the resulting defect/bug, it feels like the proximate root cause of the problem is a unit testing failure or an integration testing failure. For the implementation of a send operation that may (and almost certainly will, at some point) fail, both unit testing and integration testing absolutely should have incorporated multiple failure scenarios and verified how those failures are handled. I obviously don't know, but I'm willing to bet that if you focus on creating and running unit tests and integration tests, the quality of the system will improve at every step, the improvements will be clearly evident, and you may very well uncover some or all of the hidden bugs that are the cause of your current concern, trepidation, stress, and worry.
Consider keeping the gist of your current static code analysis research alive, but shift your approach in a new direction. The first time I read your question, I was struck by the realization that the checks you would like to perform exist in multiple unrelated locations across the code base and are quickly becoming overly complex, that the specific details of the checks differ in many sections of code, and that each of the special cases makes the overall effort unrealistic. Basically, what you would like to implement represents a cross-cutting goal that falls across a sizable section of the code base, and the implementation details have made what is a fairly simple good idea ridiculously complex. Your question is almost a textbook example of a problem that is best solved with a cross-cutting, aspect-oriented approach.
If you have the time and interest, please take a look at the AspectJ framework, maybe code a few exploratory aspects, and let me know what you think. I'd like to hear your thoughts, if you feel like having a geeky dev conversation at some point. I really hope this is helpful.
You may use IntelliJ IDEA's inspection: Java | Probable bugs | Result of method call ignored, with the "Report all ignored non-library calls" option enabled. It catches both cases provided in your code sample.
When I receive code I have not seen before and need to refactor it into some sane state, I normally fix "cosmetic" things (like converting StringTokenizers to String#split(), replacing pre-1.2 collections with newer collections, making fields final, converting C-style arrays to Java-style arrays, ...) while reading the source code to get familiar with it.
Are there many people using this strategy (maybe it is some kind of "best practice" I don't know about?), or is this considered too dangerous, with not touching old code unless absolutely necessary generally preferred? Or is it more common to combine the "cosmetic cleanup" step with the more invasive "general refactoring" step?
What are the common "low-hanging fruits" when doing "cosmetic clean-up" (vs. refactoring with more invasive changes)?
In my opinion, "cosmetic cleanup" is "general refactoring." You're just changing the code to make it more understandable without changing its behavior.
I always refactor by attacking the minor changes first. The more readable you can make the code quickly, the easier it will be to do the structural changes later - especially since it helps you look for repeated code, etc.
I typically start by looking at code that is used frequently and will need to be changed often. (This has the biggest impact in the least time...) Variable naming is probably the easiest and safest "low hanging fruit" to attack first, followed by framework updates (collection changes, updated methods, etc.). Once those are done, breaking up large methods is usually my next step, followed by other typical refactorings.
There is no right or wrong answer here, as this depends largely on circumstances.
If the code is live, working, undocumented, and contains no testing infrastructure, then I wouldn't touch it. If someone comes back in the future and wants new features, I will try to work them into the existing code while changing as little as possible.
If the code is buggy, problematic, missing features, and was written by a programmer that no longer works with the company, then I would probably redesign and rewrite the whole thing. I could always still reference that programmer's code for a specific solution to a specific problem, but it would help me reorganize everything in my mind and in source. In this situation, the whole thing is probably poorly designed and it could use a complete re-think.
For everything in between, I would take the approach you outlined. I would start by cleaning up everything cosmetically so that I can see what's going on. Then I'd start working on whatever code stood out as needing the most work. I would add documentation as I come to understand how things work, so that it helps me remember what's going on.
Ultimately, remember that if you're going to be maintaining the code now, it should be up to your standards. Where it's not, you should take the time to bring it up to your standards - whatever that takes. This will save you a lot of time, effort, and frustration down the road.
The lowest-hanging cosmetic fruit is (in Eclipse, anyway) shift-control-F. Automatic formatting is your friend.
The first thing I do is try to hide as much as possible from the outside world. If the code is crappy, most of the time the person who implemented it did not know much about data hiding and the like.
So my advice, first thing to do:
Turn as many members and methods as private as you can without breaking the compilation.
As a second step I try to identify the interfaces. I replace the concrete classes with the interfaces in the methods of all related classes. This way you decouple the classes a bit.
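A minimal sketch of that step, with made-up class names: an interface is extracted from the concrete legacy class and used in the signatures of related classes.

interface CustomerRepository {
    Customer findById(long id);
}

// The existing concrete class now simply implements the extracted interface.
class JdbcCustomerRepository implements CustomerRepository {
    @Override
    public Customer findById(long id) {
        // existing legacy lookup code stays as it was
        return new Customer();
    }
}

class BillingService {
    private final CustomerRepository repository; // was: JdbcCustomerRepository

    BillingService(CustomerRepository repository) {
        this.repository = repository;
    }

    Customer customerFor(long id) {
        return repository.findById(id);
    }
}

class Customer { }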
Further refactoring can then be done more safely and locally.
You can buy a copy of Refactoring: Improving the Design of Existing Code by Martin Fowler; you'll find a lot of things you can do during your refactoring work.
Plus you can use tools provided by your IDE and other code analyzers such as FindBugs or PMD to detect problems in your code.
Resources:
www.refactoring.com
Wikipedia: List of tools for static code analysis in Java
On the same topic:
How do you refactor a large messy codebase?
Code analyzers: PMD & FindBugs
By starting with "cosmetic cleanup" you get a good overview of how messy the code is and this combined with better readability is a good beginning.
I always (yeah, right... sometimes there's something called a deadline that messes with me) start with this approach and it has served me very well so far.
You're on the right track. By doing the small fixes you'll be more familiar with the code and the bigger fixes will be easier to do with all the detritus out of the way.
Run a tool like JDepend, CheckStyle or PMD on the source. They can automatically point out loads of changes that are cosmetic but based on general refactoring rules.
I do not change old code except to reformat it using the IDE. There is too much risk of introducing a bug - or removing a bug that other code now depends upon! Or introducing a dependency that didn't exist such as using the heap instead of the stack.
Beyond the IDE reformat, I don't change code that the boss hasn't asked me to change. If something is egregious, I ask the boss if I can make changes and state a case of why this is good for the company.
If the boss asks me to fix a bug in the code, I make as few changes as possible. Say the bug is in a simple for loop. I'd refactor the loop into a new method. Then I'd write a test case for that method to demonstrate I have located the bug. Then I'd fix the new method. Then I'd make sure the test cases pass.
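A made-up illustration of that workflow: the suspect loop is extracted into its own method, a JUnit test pins the bug down, and only then is the method fixed.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

class InvoiceTotals {
    // Extracted from a larger method so the loop can be tested in isolation.
    static int sum(int[] amounts) {
        int total = 0;
        for (int i = 0; i < amounts.length; i++) { // the buggy version started at i = 1
            total += amounts[i];
        }
        return total;
    }
}

public class InvoiceTotalsTest {
    @Test
    public void sumIncludesFirstElement() {
        // Fails against the buggy loop, passes once it is fixed.
        assertEquals(6, InvoiceTotals.sum(new int[] {1, 2, 3}));
    }
}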
Yeah, I'm a contractor. Contracting gives you a different point of view. I recommend it.
There is one thing you should be aware of: the code you are starting with has been TESTED and approved, and your changes automatically mean that retesting must happen, as you may have inadvertently broken some behaviour elsewhere.
Besides, everybody makes errors. Every non-trivial change you make (changing StringTokenizer to split is not an automatic refactoring in e.g. Eclipse, so you write it yourself) is an opportunity for errors to creep in. Did you get the exact behaviour of a conditional right, or did you by mere mistake forget a !?
Hence, your changes imply retesting. That work may be quite substantial and severely overwhelm the small changes you have made.
I don't normally bother going through old code looking for problems. However, if I'm reading it, as you appear to be doing, and it makes my brain glitch, I fix it.
Common low-hanging fruits for me tend to be more about renaming classes, methods, fields etc., and writing examples of behaviour (a.k.a. unit tests) when I can't be sure of what a class is doing by inspection - generally making the code more readable as I read it. None of these are what I'd call "invasive" but they're more than just cosmetic.
From experience it depends on two things: time and risk.
If you have plenty of time then you can do a lot more, if not then the scope of whatever changes you make is reduced accordingly. As much as I hate doing it I have had to create some horrible shameful hacks because I simply didn't have enough time to do it right...
If the code you are working on has lots of dependencies or is critical to the application then make as few changes as possible - you never know what your fix might break... :)
It sounds like you have a solid idea of what things should look like so I am not going to say what specific changes to make in what order 'cause that will vary from person to person. Just make small localized changes first, test, expand the scope of your changes, test. Expand. Test. Expand. Test. Until you either run out of time or there is no more room for improvement!
BTW When testing you are likely to see where things break most often - create test cases for them (JUnit or whatever).
EXCEPTION:
Two things that I always find myself doing are reformatting (CTRL+SHIFT+F in Eclipse) and commenting code that is not obvious. After that I just hammer the most obvious nail first...
I have been using static code analysis on a project with more than 100,000 lines of Java code for quite a while now. I started with FindBugs, which gave me around 1500 issues at the beginning. I fixed the most severe ones over time and started using additional tools like PMD, Lint4J, JNorm and now Enerjy.
With the more severe issues being fixed, there is a huge number of low severity issues. How do you handle these low priority issues?
Do you try fixing all of them?
Or only in newly written code?
Do you regularly disable certain rules? (I found that I do on nearly any of the available tools).
And if you ignore or disable rules, do you document those? What do your managers say about "leaving some thousand low priority issues not fixed"? Do you use (multiple) tool-specific comments in the code, or is there a better way?
Keep in mind that static analysis is meant to generate a lot of false positives; this is the price you pay for generally avoiding false negatives. That is, they assume that you would much rather be told incorrectly that something looks suspicious (a false positive) instead of being told that something that's actually a problem is perfectly fine (a false negative).
So in general, you should be configuring these tools rather than accepting the out-of-the-box defaults, which generate a lot of noise.
Do you try fixing all of them?
On projects where I have technical control, my standard modus operandi is to encourage a culture where people review all new static analysis defects from our CI system. If, over a period of time, we decline to fix enough defects of a specific kind, we disable that rule, since it's become just noise. Every so often we'll look at the disabled rules to make sure that disabling them still makes sense.
But once we've turned off the less effective rules, yes, we fix all the defects. If you think that something is a problem, you should fix it, as long as it isn't outranked by the other things you have to do. Ignoring it is damaging to your team's culture and sends the wrong message.
And if you ignore or disable rules, do you document those?
The rules file is part of our project, so a commit message is sufficient to document the fact that such-and-such rules were disabled in this commit.
What do your managers say about "leaving some thousand low priority issues not fixed"?
Nothing. We make sure that they understand what we're doing and why, but they're usually (rightfully so) focused on higher-level metrics, like velocity and overall project health.
What do your managers say about "leaving some thousand low priority issues not fixed"?
I expect managers to prioritize: to decide (or, be told) whether any task is high- or low-priority, and to be happy with people's spending time on high-priority instead of low-priority tasks.
If you were to look at the analogy of your bug tracking database, a good number of those reported are low priority bugs that you'll probably never get to. Sure, they are real bugs and you would like to fix them but most programmers work under very real constraints and don't have the time to address every concern. I wrote an article recently about the special nature of static analysis defects.
One important difference about addressing static analysis bugs, though, is that they are typically much easier to deal with than a regularly reported bug. Thus a quick scan of the defects to identify not only the high priority items to fix but also the ones that are easiest to fix can be useful. Static analysis defects, after all, are detected very early in the development process, and the specific parts of the code in question are very plainly spelled out. Thus you'll likely catch quite a bit of low-hanging fruit among the lower priority ones.
The various strategies I've seen used to make this successful include:
* First of all, make sure the analysis is tuned properly. Static analysis comes "out of the box" with factory settings and can't possibly understand all code. If you can't tune it yourself, get some help (disclaimer, we provide some of that type of help). You'll lower the false positive rate and find more good bugs.
* Identify the characteristics that for the most part prioritize the defects (they could be specific categories, specific areas of the code, built-in prioritization scoring provided by the static analysis tool, etc.).
* Determine what threshold level is important and possibly make it an acceptance criterion (e.g. all high and critical issues need to be addressed)
* Make sure that each defect that blocks the acceptance criteria is addressed (addressed meaning that it at least has to be looked at because some could be false positives)
* Make sure that the ones marked false positive are checked, either through a peer code review process or through a tail end audit process so you don't have any mismarking problems.
Bottom line: choose what to focus on, review what is necessary, and don't fix everything unless your business requirements make you.
Decide what rules you want to follow at your company and disable the rest. A lot of the rules that you can set up with those tools are subjective style issues and can safely be ignored. You should document what rules you are following once in a company style guide (so you don't clutter your code with that information).
I wouldn't make a change to old code just based on the recommendation of a static analysis tool. That code is already tested and presumably working. If I need to go into the code and make a change for any other reason, then I'll run the analysis and make any changes it recommends.
The key in my mind is to at least review all the issues, even if you decide in the end not to fix them. The reason is that bugs, like misery, love company. I can't count how many times I've found all kinds of nasty bugs that FindBugs didn't report, but found them by looking at seemingly unimportant ones that it did report.
I personally find code analysis useful but overrated.
I can't say for Java, but with C# there is also such a thing as code analysis. I agree it gives you a lot of smart ideas about how to do things better, but at times the recommendations are just annoying. Some suggestions are not based on common sense but are only a matter of style. After some time playing around with the code analysis, I stopped using it. The reason is that I happened to disagree with many warnings and didn't want to follow the proposals.
In general, I would recommend doing what the code analysis says, but only up to a certain point. Whatever seems to be a matter of the personal style of whoever wrote the rule definitions can easily be ignored. You can add exceptions for rule 1, then 2, then 3... until it gets old. Then you just deactivate the feature.
Possible Duplicates:
What should I keep in mind in order to refactor huge code base?
When is it good (if ever) to scrap production code and start over?
I am currently working with some legacy source code files. They have quite a few problems because they were written by a database expert who does not know much about Java. For instance,
1. Fields in classes are public. No getters and setters.
2. Use raw types, not parameterized types.
3. Use static unnecessarily.
4. Super long method names.
5. Methods need too many parameters.
6. Repeat Yourself frequently.
I want to modify them so that they are more object-oriented. What are some best practices and effective/efficient approaches?
Read "Working Effectively with Legacy Code" by Michael Feathers. Great book - and obviously it'll be a lot more detailed than answers here. It's got lots of techniques for handling things sensibly.
It looks like you've identified a number of issues, which is a large part of the problem. Many of those sound like they can be fixed relatively easily - it's overall design and architecture which is harder to do, of course.
Are there already unit tests, or will you be adding those too?
Before you start, create a system-level regression test suite for the application. You need this so that you can verify that your changes don't break things.
To do the refactoring, you want to use a combination of a good IDE and a text search tool (e.g. grep). Use the text search tool to find occurrences of the "syndromes" that you want to fix, then use the IDE (and its built-in refactoring capabilities) to fix the instances ... one at a time.
For example, Eclipse allows you to rename a method or class, or generate getters and setters. So you'd cure a 'public' attribute with the following steps (a short before/after sketch comes after them):
Change the attribute to private.
Generate the getter and setter methods.
Save the file.
Go through all of the Java compilation errors resulting from the fact that the attribute is now private, and change the offending code to use the getter or setter as appropriate.
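A short before/after sketch of those steps (the field name is illustrative):

class AccountBefore {
    public double balance; // callers read and write the field directly
}

class AccountAfter {
    private double balance; // step 1: make the attribute private

    public double getBalance() { // step 2: generated getter and setter
        return balance;
    }

    public void setBalance(double balance) {
        this.balance = balance;
    }
}

// Step 4: each "balance has private access" compile error is fixed by switching
// the caller to getBalance() or setBalance().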
This approach will give you the low-hanging fruit. More fundamental design issues are more difficult, and may be impossible to fix without fundamental restructuring of the application. The refactoring capabilities will help you execute such changes, but deciding what to do is ultimately up to you.
Finally, my advice is to not be too ambitious. Go for incremental improvement, and be prepared to draw the line when the code is "good enough". You won't achieve perfection ... not even if you start from a clean slate ... so don't set your expectations high.
Is it just the code that is bad, or does it also hurt the user experience? Refactoring continuously is a good idea, but it should not be a goal unto itself. It should improve the application in terms of user interaction, maintainability, stability, performance, etc.
That is why I am not extremely fond of huge refactoring just to improve the code quality. Instead, refactor the code that you work with.
While working with a legacy system for several years, I have personally found that:
Create for yourself a vision of how you want the code to look when you're done. It should be attainable and contain a list of technology changes and general architecture changes. It may also be a good idea to make a rough prioritization of which classes are most critical to change. We lacked such a vision a few years ago, and while we refactored a lot, the code quality barely improved.
Now, you should restrict your refactoring to those that make you reach your vision. Don't fall into the trap of doing what appears good at the moment.
Focus on a particular component, and make it better. Then move on to the next. It's tempting to make huge changes that affect the entire system, but in truth you will introduce more problems than you solve.
Write integration regression tests. I.e., a few big tests that test a lot of functionality. It's not optimal, but it's the best you can do. Writing unit tests for every single class in your old system may end up a waste of time because it's not designed to be tested anyway and you want to redesign half of the classes.
Accept that it will take time.
Eclipse should be able to take care of #1 and help you work your way through many of the others.
As for converting poor OO code to good OO code, it is amazingly difficult. Often it seems easier to rewrite it from scratch.
I tend to go from the bottom up. As I'm working on some small section, I'll recognize a bunch of data that belongs together as a group, and I'll make a good object that replaces that code without changing anything else: very small changes, with constant tests between each change.
This makes for a mediocre design at best, but I honestly don't know if you can go from not-OO to good OO on a large project without dissecting the original program, understanding it, and using it as a template for the rewrite, and few projects allow this (even though it might be faster, you'll rarely if ever be able to convince management of that fact).
The point, I think, is risk.
The ugly code is just ugly, but it may well work; it has been tested and bug-fixed. If running code is changed, risk follows, so testing is critical.
You could refactor related code when you have to bug-fix, as a conservative solution.
Maybe the first challenge is to persuade your manager. :)
What's the problem with it not having getters and setters? I'd suggest refactoring those only when you need to add non-trivial getters or setters (e.g., with validation).
The rest sounds like you need to identify groups of values and create new types holding them, so instead of passing a String name, String address, int yearOfBirth, String[] accountNames, int[] balances you would pass a Customer around, which would in turn have an Account[].
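A rough sketch of that change, using the types named above (the exact fields are guesses for illustration):

class Account {
    final String name;
    final int balance;

    Account(String name, int balance) {
        this.name = name;
        this.balance = balance;
    }
}

class Customer {
    final String name;
    final String address;
    final int yearOfBirth;
    final Account[] accounts;

    Customer(String name, String address, int yearOfBirth, Account[] accounts) {
        this.name = name;
        this.address = address;
        this.yearOfBirth = yearOfBirth;
        this.accounts = accounts;
    }
}

// Before: void register(String name, String address, int yearOfBirth,
//                       String[] accountNames, int[] balances)
// After:  void register(Customer customer)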
IDEA Ultimate Edition has a code duplication detector that's very good (it's only missing a 'suggested solution' button!), and there are CPD etc.
I'd suggest that in a large legacy codebase you might waste time refactoring code only to find out it wasn't used anyway. I outlined some steps for removing unused code: http://rickyclarkson.blogspot.com/2009/12/deleting-code-what-first.html
How many of those "issues" are real problems and not just matters of style? Of this list, the only 'real' issue I can see is "Repeat Yourself frequently", and that's more of an ongoing maintenance problem that should be resolved during normal code maintenance when someone's going to be changing the code anyway.
I want to modify them so that they are more object-oriented.
Object orientation should not be your only goal when refactoring. The question you should ask yourself is: what is the expected ROI (better quality? easier enhancements? better sharing of this code across a team?). ROI is not just words; you should be prepared to measure the return on investment with numbers (even for the quality enhancements, for example). You should take the expected lifetime of your product into account when estimating the ROI.
You should also ask yourself how much code depends on the code you want to refactor. Refactoring a library could be easy, but could lead to a lot of changes in source code that depends on that library, an effort far larger than just refactoring the library itself.
Before touching any code, you should estimate the total work needed to finish refactoring the code and the dependent code, whether that means a total rewrite, a partial rewrite, or just an internal rewrite that doesn't touch the APIs.
With the costs and returns, you could decide if it's worth the effort to refactor your code.
I am building a spring mvc web application.
I plan on using hibernate.
I don't have much experience with obfuscating etc.
What are the potential downsides to obfuscating an application?
I understand that there might be issues with debugging the app, and recovering lost source code is also an issue.
Are there any known issues with the actual running of the application? Can bugs be introduced?
Since this is an area I am looking for general guidance, please feel free to open up any issues that I should be aware of.
There are certainly some potential performance/maintenance issues, but a good obfuscator will let you get round at least some of them. Things to look out for:
an obvious one: if your code calls methods by reflection or dynamically loads classes, then this is liable to fail if the class/method names are obfuscated; a good obfuscator will let you select class/method names not to obfuscate to get round this problem (a short sketch of the pitfall follows this list);
a similar issue can occur if not all of your application is compiled at the same time;
if it deals directly at the bytecode level, an obfuscator can create code that in principle a Java compiler cannot create (e.g. it can insert arbitrary GOTO instructions, whereas from Java these can only be created as part of a loop)-- this may be a bit theoretical, but if I were writing a JVM, I'd optimise performance for sequences of bytecodes that a Java compiler can create, not ones that it can't...
the obfuscator is liable to make other subtle changes to performance if it significantly alters the number of bytecodes in a method, or in some way changes whether a given method/piece of code hits thresholds for certain JVM optimisations (e.g. "inline methods with fewer than X bytecodes").
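A small sketch of the reflection pitfall from the first point above (the class name is made up): after renaming, the string literal still says "com.example.ReportPlugin" but no class of that name exists any more, unless it was excluded from obfuscation.

public class PluginLoader {
    Object loadPlugin() throws Exception {
        // The string literal survives obfuscation but the class name does not,
        // so this throws ClassNotFoundException at runtime unless the class is kept.
        Class<?> type = Class.forName("com.example.ReportPlugin");
        return type.getDeclaredConstructor().newInstance();
    }
}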
But as you can see, some of these effects are a little subtle and theoretical-- so to some extent what you need to do is soak-test your application after obfuscation, just as you would with any other major change.
You should also be careful not to assume that obfuscation hides your code/algorithm (if that is your intention) as much as you want it to-- use a decompiler to have a look at the contents of the resulting obfuscated classes.
Surprised no one has mentioned speed - in general, more obfuscated = slower-running code
[Edit] I can't believe this has -2. It is a correct answer.
Shortening identifiers and removing unused methods will decrease the file-size, but have 0 impact on the running speed (other than the few nanoseconds shaved off the loading time). In the meanwhile, most of the obfuscation of the program comes from added code:
Breaking 1 method into 5; interleaving methods; merging classes [aggregation transformations]
Splitting 1 arithmetic expression into 10; jumbling the control-flow [computation transformations]
And adding chunks of code that do nothing [opaque predicates]
are all common obfuscation techniques that cause a program to run slower.
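As a hand-written, purely illustrative sketch of what those transformations can look like in source form (real obfuscators work at the bytecode level, and output varies by tool):

class PriceCalculator {
    // Before obfuscation:
    int totalBefore(int unitPrice, int quantity) {
        return unitPrice * quantity;
    }

    // After splitting the arithmetic and adding an opaque predicate:
    int totalAfter(int unitPrice, int quantity) {
        int a = unitPrice << 1;               // arithmetic smeared across several steps
        int b = (a * quantity) >> 1;
        b += (unitPrice * quantity) - b;      // redundant work the JIT may not remove
        if ((quantity * quantity) % 4 == 2) { // opaque predicate: can never be true
            b = -b;
        }
        return b;
    }
}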
You may want to look at some of the comments here, to decide if obfuscating makes sense:
https://stackoverflow.com/questions/1988451/net-obfuscation
You may want to express why you want to obfuscate. IMO the best reasons are mainly to have a smaller application, as you can get rid of classes that aren't being used in your project, while obfuscating.
I have never seen bugs introduced, as long as you aren't using reflection to look things up by name, since private methods, for example, will have their names changed.
The biggest problem centers around that fact that obfuscating programs generally make a guarantee of not changing the behavior of their target program. In some cases it proves to be very hard to do this -- for example, imagine a program which checks the value of certain private fields via reflection from a string array. An obfuscator may not be able to tell that this string also needs to be updated correspondingly, and the result will be unexpected access errors that pop up at runtime.
Worse still, it may not be obvious that the behavior of a program has changed subtly -- then you may not know that there's a problem at all, until your customer finds it first and gets upset.
Generally, professional-grade obfuscation products are sophisticated enough to catch some kinds of problems and prevent them, but ultimately it can be challenging to cover all the bases. The best defense is to run unit tests against the obfuscated result and make sure that all your expected behavior continues to hold true.
One free one you might want to check out is Babel. It is designed to be used on the command line (like many other obfuscators); there is a Reflector add-in that will provide a UI for you.
When it comes to obfuscation, you really need to analyze what your goal is. In your case - if you have a web application (mvc) are you planning on selling it as a canned downloadable application? (if not and you keep the source on your web servers then you don't need it).
You might look at the components and pick only certain parts to obfuscate ... not the whole thing. In general, ASP.Net apps break pretty easily when you try to add obfuscation after you've developed them, due to all the reflection used.
Pretty much everything mentioned above is true ... it all depends on how many features you turn on to make it hard to reverse your code:
Renaming of members (fields/methods/events/properties) is most common (comes in different flavors: simple renaming of methods from something like GetId() to a() all the way to unreadable characters and removal of namespaces). BTW: This is where reflection usually breaks. Your assembly file may end up being smaller due to smaller strings being used too.
String encryption: this makes it harder to read the static strings used in your code. BTW: paired with renaming, this makes it difficult for you to debug your renaming problems ... so you might turn it on only after you have renaming working. This option also has to add code to the IL that decrypts the string right before it is used.
Code mangling ... this is what BlueRaja was referring to. It makes your code look like spaghetti code, to make it harder for someone to figure out. The CLR does not like this: it can't optimize things as easily, and your final code will most likely run slower due to the additional branching and some things not being inlined because of the IL rewriting used for this option. BTW: this option really does raise the bar on what it takes to reverse your source code, but may come with a performance hit.
Removal of unused code. Some obfuscators offer the option to trim any code that they find is not being used. This may make your assembly a little smaller if you have a lot of dead code hanging around ... but it is just a free benefit obfuscators throw in.
My advice is to only use it if you know why you are using it and design with that end in mind ... don't try to add it after you've finished your code (I've done that, and it's not fun).