I would like to transition our codebase from poorly written PHP code to poorly written Java, since I believe Java code is easier to tidy up. What are the pros and cons, and for those who have done it yourselves, would you recommend PtoJ for a project of about 300k ugly lines of code? Tips and tricks are most welcome; thanks!
Poorly written PHP is likely to be very hard to convert because a lot of the bad stuff in PHP just doesn't exist in Java (the same is true vice versa though, so don't take that as me saying Java is better - I'm going to keep well clear of that flame-war).
If you're talking about a legacy PHP app, then it's highly likely that your code contains a lot of procedural code and inline HTML, neither of which is going to convert well to Java.
If you're really unlucky, you'll have things like eval() statements, dynamic variable names (using $$ syntax), looped include() statements, reliance on the 'register_globals' flag, and worse. That kind of stuff will completely thwart any conversion attempt.
Your other major problem is that debugging the result after the conversion is going to be hell, even if you have beautiful code to start with. If you want to avoid regressions, you will basically need to go through the entire code base on both sides with a fine-tooth comb.
The only time you're going to get a satisfactory result from an automated conversion of this type is if you start with a reasonably tidy code base, written mostly in up-to-date OOP style.
In my opinion, you'd be better off doing the refactoring exercise before the conversion. But of course, given your question, that would rather defeat the point. Therefore my recommendation is to stick with PHP. PHP code can be very good, and even bad PHP can be polished up with a bit of refactoring.
[EDIT]
In answer to Jonas's question in the comments, 'what is the best way to refactor horrible PHP code?'
It really depends on the nature of the code. A large monolithic block of code (which describes a lot of the bad PHP I've seen) can be very hard (if not impossible) to implement unit tests for. You may find that functional tests are the only kind of tests you can write on the old code base. These would use Selenium or similar tools to run the code through the browser as if it were a user. If you can get a set of reliable functional tests written, it is good for helping you remain confident that you aren't introducing regressions.
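For illustration, here is a minimal sketch of such a functional test using Selenium WebDriver's Java bindings. The URL, field names and expected text are placeholders for whatever the legacy app actually renders; since the test only drives the app through the browser, it doesn't care what language the app is written in.

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertTrue;

class LegacyLoginFunctionalTest {

    private WebDriver driver;

    @BeforeEach
    void openBrowser() {
        driver = new ChromeDriver();   // assumes chromedriver is on the PATH
    }

    @Test
    void loginStillGreetsTheUser() {
        // Drive the legacy app exactly as a user would, through the browser.
        driver.get("http://localhost/legacy/login.php");              // hypothetical URL
        driver.findElement(By.name("username")).sendKeys("alice");    // hypothetical field names
        driver.findElement(By.name("password")).sendKeys("secret");
        driver.findElement(By.name("submit")).click();

        // Assert on rendered output, not internals, so the test survives refactoring.
        assertTrue(driver.getPageSource().contains("Welcome, alice"));
    }

    @AfterEach
    void closeBrowser() {
        driver.quit();
    }
}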
The good news is that it can be very easy - and satisfying - to rip apart bad code and rebuild it.
The way I've approached it in the past is to take a two-stage approach.
Stage one rewrites the monolithic code into decent quality procedural code. This is relatively easy, and the new code can be dropped into place as you go. This is where the bulk of the work happens, but you'll still end up with procedural code. Just better procedural code.
Stage two: once you've got a critical mass of reasonable quality procedural code, you can then refactor it again into an OOP model. This has to wait until later, because it is typically quite hard to convert old bad quality PHP straight into a set of objects. It also has to be done in fairly large chunks because you'll be moving large amounts of code into objects all at once. But if you did a good job in stage one, then stage two should be fairly straightforward.
When you've got it into objects, then you can start seriously thinking about unit tests.
I would say that automatic conversion from PHP to Java has the following:
pros:
quick and dirty, possibly making some project manager concerned with short-term delivery happy (assuming that you're lucky and the automatically generated code works without too much debugging, which I doubt)
cons:
ugly code: I doubt that automatic conversion from ugly PHP will generate anything but ugly Java
unmaintainable code: the automatically generated code is likely to be unmaintainable, or, at least, very difficult to maintain
bad approach: I assume you have a PHP Web application; in this case, I think that the automatic translation is unlikely to use Java best practices for Web applications, or the available frameworks
In summary
I would avoid automatic translation from PHP to Java, and I would at least consider rewriting the application from the ground up using Java. Especially if you have a Web application, choose a good Java framework for webapps, do a careful design, and proceed with an incremental implementation (one feature of your original PHP webapp at a time). With this approach, you'll end up with cleaner code that is easier to maintain and evolve ... and you may find out that the required time is not that much bigger than what you'd need to clean up and debug automatically generated code :)
P2J appears to be offline now, but I've written a proof-of-concept that converts a subset of PHP into Java. It uses the transpiler library for SWI-Prolog:
:- use_module(library(transpiler)).
:- set_prolog_flag(double_quotes,chars).
:- initialization(main).
main :-
Input = "function add($a,$b){ print $a.$b; return $a.$b;} function squared($a){ return $a*$a; } function add_exclamation_point($parameter){return $parameter.\"!\";}",
translate(Input,'php','java',X),
atom_chars(Y,X),
writeln(Y).
This is the program's output:
public static String add(String a,String b){
System.out.println(a+b);
return a+b;
}
public static int squared(int a){
return a*a;
}
public static String add_exclamation_point(String parameter){
return parameter+"!";
}
In contrast to other answers here, I would agree with your strategy to convert "PHP code to poorly written Java, since I believe Java code is easier to tidy up", but you need to make sure the tool that you are using doesn't introduce more bugs than you can handle.
An optimal strategy would be:
1) Do automated conversion
2) Get an MVP running with some basic tests
3) Start using the amazing Eclipse/IntelliJ refactoring tools to make the code more readable.
A modern Java IDE can refactor code with zero bugs when done properly. It can also tell you which methods are never called, along with a lot of other inspections.
I don't know how good "PtoJ" was, since its website has vanished, but you ideally want something that doesn't just translate the syntax, but the logic. I used php2java.com recently and it worked very well. I've also used various "syntax" converters (not just for PHP to Java, but also ObjC -> Swift, Java -> Swift), and even they work just fine if you put in the time to make things work afterwards.
Also, I found this interesting blog entry about what might have happened to numiton PtoJ (http://www.runtimeconverter.com/single-post/2017/11/14/What-happened-to-numition).
http://www.numiton.com/products/ntile-ptoj/translation-samples/web-and-db-access/mysql.html
Would you rather not use Hibernate?
Related
I have read a lot about test-driven design. My project is using tests, but currently they are written after the code has been written, and I am not clear how to do it in the other direction.
Simple example: I have a class Rectangle. It has private fields width and height with corresponding getters and setters. Common Java. Now, I want to add a function getArea() which returns the product of both, but I want to write the test first.
Of course, I can write a unit test. But it isn't just that the test fails: it does not even compile, because there is no getArea() function yet. Does that mean that writing the test always involves changing the production code to introduce dummies without functionality? Or do I have to write the test in a way that uses introspection? I don't like the latter approach, because it makes the code less readable, and later refactoring with tools will not discover it and will break the test, and I know that we refactor a lot. Also, adding 'dummies' may involve lots of changes, i.e. if I need additional fields, the database must be changed for Hibernate to continue to work, … that seems like way too many production-code changes when all I am doing is "writing tests". What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
Is there a way to do that?
Well, TDD does not mean that you cannot have anything in the production code before writing the test.
For example:
You put your method, e.g. getArea(param1, param2) in your production code with an empty body.
Then you write the test with valid input and your expected result.
You run the test and it will fail.
Then you change the production code and run the test again.
If it still fails: back to the previous step.
If it passes, you write the next test.
A quick introduction can be found for example here: codeutopia -> 5-step-method-to-make-test-driven-development-and-unit-testing-easy
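A minimal sketch of that rhythm, using the Rectangle example from the question and assuming JUnit 5; getArea() is the only name taken from the question, the rest is made up:

// src/main/java/Rectangle.java -- the stub exists so the test compiles, but it is deliberately wrong
public class Rectangle {
    private double width;
    private double height;

    public Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    public double getArea() {
        return 0;   // empty-body stub: the first test run must fail
    }
}

// src/test/java/RectangleTest.java -- written first, run, watched fail, then getArea() is filled in
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class RectangleTest {
    @Test
    void areaIsWidthTimesHeight() {
        assertEquals(12.0, new Rectangle(3.0, 4.0).getArea(), 1e-9);
    }
}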
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
There isn't, that I have ever seen, a way to write a test with a dependency on a new part of the API, and have that test immediately compile without first extending the API of the test subject.
It's introspection or nothing.
But it isn't just that the test fails: it does not even compile, because there is no getArea() function yet
Historically, writing code that couldn't compile was part of the rhythm of TDD. Write a little bit of test code, write a little bit of production code, write a little bit of test code, write a little bit of production code, and so on.
Robert Martin describes this as the nano-cycle of TDD
... the goal is always to promote the line by line granularity that I experienced while working with Kent so long ago.
I've abandoned the nano-cycle constraint in my own work. Perhaps I fail to appreciate it because I've never paired with Kent.
But I'm perfectly happy to write tests that don't compile, and then back fill the production code I need when the test is in a satisfactory state. That works well for me because I normally work in a development environment that can generate production implementations at just a few key strokes.
Another possibility is to consider a discipline like TDD as if you meant it, which does a lot more of the real work in the test source hierarchy before moving code into the production hierarchy.
I've been working on Android development for quite some time, but never fully adopted TDD in Android. However, I recently tried to develop my new app with complete TDD. So here is my opinion...
Does that mean that writing the test always involves changing the production code to introduce dummies without functionality?
I think the answer is yes. As I understand it, every test corresponds to a spec/use case I have for the software. So writing a failing test first is an attempt to capture the requirement specs in test code. Then, when I tried to fill in the production code to pass the just-written test case, I really tried to make just that work. After doing this for a while, I was pretty surprised how small my production code ended up, and how much of the requirements it was able to cover.
For me personally, all the failing test cases I wrote before the production code actually came from a list of questions I had brainstormed about the requirements, and I sometimes used them to explore edge cases of the requirements.
So the basic workflow is Red - Green - Refactor, which I got from the presentation by Bryan Breecham - https://www.infoq.com/presentations/tdd-lego/
About,
What I would like to have is a situation where I can actually only write code inside src/test/, not touching src/main at all, but without introspection.
For me, I think it's possible when you write all your production logic first; then the unit tests play the role of verifying the requirements. It's just the other way around. So overall I think TDD is the approach, but people may use unit tests for different purposes, e.g. to reduce testing time, etc.
When I receive code I have not seen before to refactor it into some sane state, I normally fix "cosmetic" things (like converting StringTokenizers to String#split(), replacing pre-1.2 collections by newer collections, making fields final, converting C-style arrays to Java-style arrays, ...) while reading the source code I have to get familiar with.
Are there many people using this strategy (maybe it is some kind of "best practice" I don't know about?), or is this considered too dangerous, and is not touching old code unless absolutely necessary generally preferred? Or is it more common to combine the "cosmetic cleanup" step with the more invasive "general refactoring" step?
What are the common "low-hanging fruits" when doing "cosmetic clean-up" (vs. refactoring with more invasive changes)?
In my opinion, "cosmetic cleanup" is "general refactoring." You're just changing the code to make it more understandable without changing its behavior.
I always refactor by attacking the minor changes first. The more readable you can make the code quickly, the easier it will be to do the structural changes later - especially since it helps you look for repeated code, etc.
I typically start by looking at code that is used frequently and will need to be changed often, first. (This has the biggest impact in the least time...) Variable naming is probably the easiest and safest "low hanging fruit" to attack first, followed by framework updates (collection changes, updated methods, etc). Once those are done, breaking up large methods is usually my next step, followed by other typical refactorings.
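To make the "cosmetic" category concrete, here is a small hypothetical before/after covering the kinds of mechanical fixes the question mentions (StringTokenizer to String#split(), C-style arrays, final fields). Note in passing that even a swap like this has edge cases (split keeps empty tokens where StringTokenizer skips them), which is exactly why the answers below insist on retesting:

// Before: pre-1.2 idioms, C-style array declaration, a field that could be final
class ReportPrinter {
    String separator = ";";

    void print(String line) {
        java.util.StringTokenizer t = new java.util.StringTokenizer(line, separator);
        String parts[] = new String[t.countTokens()];   // C-style array declaration
        for (int i = 0; t.hasMoreTokens(); i++) {
            parts[i] = t.nextToken();
        }
        for (int i = 0; i < parts.length; i++) {
            System.out.println(parts[i]);
        }
    }
}

// After: tidier surface (behaviour identical except that split keeps empty tokens)
class ReportPrinter {
    private final String separator = ";";

    void print(String line) {
        for (String part : line.split(separator)) {
            System.out.println(part);
        }
    }
}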
There is no right or wrong answer here, as this depends largely on circumstances.
If the code is live, working, undocumented, and contains no testing infrastructure, then I wouldn't touch it. If someone comes back in the future and wants new features, I will try to work them into the existing code while changing as little as possible.
If the code is buggy, problematic, missing features, and was written by a programmer that no longer works with the company, then I would probably redesign and rewrite the whole thing. I could always still reference that programmer's code for a specific solution to a specific problem, but it would help me reorganize everything in my mind and in source. In this situation, the whole thing is probably poorly designed and it could use a complete re-think.
For everything in between, I would take the approach you outlined. I would start by cleaning up everything cosmetically so that I can see what's going on. Then I'd start working on whatever code stood out as needing the most work. I would add documentation as I come to understand how things work, to help me remember what's going on.
Ultimately, remember that if you're going to be maintaining the code now, it should be up to your standards. Where it's not, you should take the time to bring it up to your standards - whatever that takes. This will save you a lot of time, effort, and frustration down the road.
The lowest-hanging cosmetic fruit is (in Eclipse, anyway) shift-control-F. Automatic formatting is your friend.
The first thing I do is try to hide as much as possible from the outside world. If the code is crappy, most of the time the guy who implemented it did not know much about data hiding and the like.
So my advice, first thing to do:
Make as many members and methods private as you can without breaking the compilation.
As a second step, I try to identify the interfaces. I replace the concrete classes with the interfaces in all methods of related classes. This way you decouple the classes a bit.
Further refactoring can then be done more safely and locally.
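A tiny sketch of that second step, with hypothetical names (OrderRepository, JdbcOrderRepository); the point is that callers now depend on the interface, so the concrete class can be refactored, or swapped out in tests, without touching them:

class Order { }   // placeholder for an existing domain type

// Extracted interface describing only what callers actually use
interface OrderRepository {
    Order findById(long id);
    void save(Order order);
}

// The existing concrete class now merely implements it
class JdbcOrderRepository implements OrderRepository {
    @Override public Order findById(long id) { return new Order(); }   // existing JDBC code would live here
    @Override public void save(Order order) { /* existing JDBC code would live here */ }
}

// Callers are changed to accept the interface instead of the concrete class
class InvoiceService {
    private final OrderRepository orders;   // was: JdbcOrderRepository orders

    InvoiceService(OrderRepository orders) {
        this.orders = orders;
    }
}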
You can buy a copy of Refactoring: Improving the Design of Existing Code by Martin Fowler; you'll find a lot of things you can do during your refactoring effort.
Plus you can use the tools provided by your IDE and other code analyzers such as FindBugs or PMD to detect problems in your code.
Resources :
www.refactoring.com
wikipedia - List of tools for static code analysis in java
On the same topic :
How do you refactor a large messy codebase?
Code analyzers: PMD & FindBugs
By starting with "cosmetic cleanup" you get a good overview of how messy the code is, and that, combined with better readability, is a good beginning.
I always (yeah, right... sometimes there's something called a deadline that messes with me) start with this approach, and it has served me very well so far.
You're on the right track. By doing the small fixes you'll be more familiar with the code and the bigger fixes will be easier to do with all the detritus out of the way.
Run a tool like JDepend, CheckStyle or PMD on the source. They can automatically do loads of changes that are cosmetic but based on general refactoring rules.
I do not change old code except to reformat it using the IDE. There is too much risk of introducing a bug - or removing a bug that other code now depends upon! Or introducing a dependency that didn't exist such as using the heap instead of the stack.
Beyond the IDE reformat, I don't change code that the boss hasn't asked me to change. If something is egregious, I ask the boss if I can make changes and state a case of why this is good for the company.
If the boss asks me to fix a bug in the code, I make as few changes as possible. Say the bug is in a simple for loop. I'd refactor the loop into a new method. Then I'd write a test case for that method to demonstrate I have located the bug. Then I'd fix the new method. Then I'd make sure the test cases pass.
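A hedged sketch of that sequence with made-up names (InvoiceReport, LineItem): the suspect loop is pulled out of the larger method so a focused JUnit test can pin the bug down before it is fixed.

import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

record LineItem(int amount) { }   // hypothetical domain type

class InvoiceReport {
    // Step 1: the suspect loop, extracted from a much larger method so it can be tested alone
    int sumAmounts(List<LineItem> items) {
        int total = 0;
        for (int i = 1; i < items.size(); i++) {   // bug preserved for now: it skips index 0
            total += items.get(i).amount();
        }
        return total;
    }
}

class InvoiceReportTest {
    // Step 2: a test that demonstrates the bug (it fails); step 3 is fixing the loop to start at 0
    @Test
    void sumsEveryLineItem() {
        assertEquals(30, new InvoiceReport().sumAmounts(List.of(new LineItem(10), new LineItem(20))));
    }
}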
Yeah, I'm a contractor. Contracting gives you a different point of view. I recommend it.
There is one thing you should be aware of. The code you are starting with has been TESTED and approved, and your changes automatically mean that retesting must happen, as you may have inadvertently broken some behaviour elsewhere.
Besides, everybody makes errors. Every non-trivial change you make (changing StringTokenizer to split is not an automatic feature in e.g. Eclipse, so you write it yourself) is an opportunity for errors to creep in. Did you get the exact behaviour of a conditional right, or did you by simple mistake forget a !?
Hence, your changes imply retesting. That work may be quite substantial and can easily dwarf the small changes you have made.
I don't normally bother going through old code looking for problems. However, if I'm reading it, as you appear to be doing, and it makes my brain glitch, I fix it.
Common low-hanging fruits for me tend to be more about renaming classes, methods, fields etc., and writing examples of behaviour (a.k.a. unit tests) when I can't be sure of what a class is doing by inspection - generally making the code more readable as I read it. None of these are what I'd call "invasive" but they're more than just cosmetic.
From experience it depends on two things: time and risk.
If you have plenty of time then you can do a lot more, if not then the scope of whatever changes you make is reduced accordingly. As much as I hate doing it I have had to create some horrible shameful hacks because I simply didn't have enough time to do it right...
If the code you are working on has lots of dependencies or is critical to the application then make as few changes as possible - you never know what your fix might break... :)
It sounds like you have a solid idea of what things should look like so I am not going to say what specific changes to make in what order 'cause that will vary from person to person. Just make small localized changes first, test, expand the scope of your changes, test. Expand. Test. Expand. Test. Until you either run out of time or there is no more room for improvement!
BTW When testing you are likely to see where things break most often - create test cases for them (JUnit or whatever).
EXCEPTION:
Two things that I always find myself doing are reformatting (CTRL+SHIFT+F in Eclipse) and commenting code that is not obvious. After that I just hammer the most obvious nail first...
Possible Duplicates:
What should I keep in mind in order to refactor huge code base?
When is it good (if ever) to scrap production code and start over?
I am currently working with some legacy source code files. They have quite a few problems because they were written by a database expert who does not know much about Java. For instance,
Fields in classes are public. No getters and setters.
Use raw types, not parameterized types.
Use static unnecessarily.
Super long method names.
Methods need too many parameters.
Repeat Yourself frequently.
I want to modify them so that they are more object-oriented. What are some best practices and effective/efficient approaches?
Read "Working Effectively with Legacy Code" by Michael Feathers. Great book - and obviously it'll be a lot more detailed than answers here. It's got lots of techniques for handling things sensibly.
It looks like you've identified a number of issues, which is a large part of the problem. Many of those sound like they can be fixed relatively easily - it's overall design and architecture which is harder to do, of course.
Are there already unit tests, or will you be adding those too?
Before you start, create a system-level regression test suite for the application. You need this so that you can verify that your changes don't break things.
To do the refactoring, you want to use a combination of a good IDE and a text search tool (e.g. grep). Use the text search tool to find occurrences of the "syndromes" that you want to fix, then use the IDE (and its builtin refactoring capabilities) to fix the instances ... one at a time.
For example, Eclipse allows you to rename a method or class, or generate getters and setters. So you'd cure a 'public' attribute with the steps below (a short before/after sketch follows the list):
Change the attribute to private.
Generate the getter and setter methods.
Save the file.
Go through all of the Java compilation errors resulting from the fact that the attribute is now private, and change to use the getter or setter as appropriate.
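A small before/after sketch of those steps on a hypothetical Widget class:

// Before: callers elsewhere do things like widget.label = "Save";
public class Widget {
    public String label;
}

// After steps 1 and 2: field made private, accessors generated by the IDE
public class Widget {
    private String label;

    public String getLabel() {
        return label;
    }

    public void setLabel(String label) {
        this.label = label;
    }
}

// Step 4: each resulting compile error is fixed one at a time, e.g.
//     widget.label = "Save";    becomes    widget.setLabel("Save");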
This approach will give you the low-hanging fruit. More fundamental design issues are more difficult, and may be impossible to fix without fundamental restructuring of the application. The refactoring capabilities will help you execute such changes, but deciding what to do is ultimately up to you.
Finally, my advice is to not be too ambitious. Go for incremental improvement, and be prepared to draw the line when the code is "good enough". You won't achieve perfection ... not even if you start from a clean slate ... so don't set your expectations high.
Is it just the code that is bad, or does it also hurt the user experience? Refactoring continuously is a good idea, but it should not be a goal unto itself. It should improve the application in terms of user interaction, maintainability, stability, performance, etc.
That is why I am not extremely fond of huge refactoring just to improve the code quality. Instead, refactor the code that you work with.
While working with a legacy system for several years, I have personally found that:
Create for yourself a vision of how you want the code after you're done. It should be attainable, contain a list of technology changes, general architecture changes. It may also be a good idea to make a rough priority of what classes are most critical to change. We lacked such a vision a few years ago, and while we refactored a lot, the code quality barely improved.
Now, you should restrict your refactoring to those that make you reach your vision. Don't fall into the trap of doing what appears good at the moment.
Focus on a particular component, and make it better. Then move on to the next. It's tempting to make huge changes that affect the entire system, but in truth you will introduce more problems than you solve.
Write integration regression tests. I.e., a few big tests that test a lot of functionality. It's not optimal, but it's the best you can do. Writing unit tests for every single class in your old system may end up a waste of time because it's not designed to be tested anyway and you want to redesign half of the classes.
Accept that it will take time.
Eclipse should be able to take care of #1 and help you work your way through many of the others.
As for converting poor OO code to good OO code it is amazingly difficult. Often it seems easier to rewrite it from scratch.
I tend to go from the bottom up. As I'm working on some small section I'll recognize a bunch of data that belongs together as a group and I'll make a good object that replaces that code without changing anything else--Very Small Changes with constant tests between each change.
This makes for a mediocre design at best, but I honestly don't know if you can go from not OO to good OO on a large project without dissecting the original program, understanding it and using it as a template for the rewrite and few projects allow this (even though it might be faster, you'll rarely if ever be able to convince management of that fact)
The point is risk, I think.
The ugly code is just ugly, but it may well work; it has been tested and bug-fixed. If runnable code is changed, risk follows, so testing is critical.
You could refactor related code when you have to fix a bug, as a conservative approach.
Maybe the first challenge is to persuade your manager :)
What's the problem with it not having getters and setters? I'd suggest refactoring those only when you need to add non-trivial getters or setters (e.g., with validation).
The rest sounds like you need to identify groups of values and create new types holding them, so instead of passing a String name, String address, int yearOfBirth, String[] accountNames, int[] balances you would pass a Customer around, which would in turn have an Account[].
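A rough sketch of that "introduce parameter object" move, using the hypothetical Customer and Account names from the paragraph above (records assume a recent Java version):

import java.util.List;

// Before: a long, error-prone parameter list with parallel arrays kept in sync by hand
//     void register(String name, String address, int yearOfBirth, String[] accountNames, int[] balances)

// After: values that always travel together get their own types
record Account(String name, int balance) { }

record Customer(String name, String address, int yearOfBirth, List<Account> accounts) { }

class Registration {
    void register(Customer customer) {
        // Same behaviour as before, but callers build one Customer instead of juggling five arguments.
        System.out.println("Registering " + customer.name() + " with " + customer.accounts().size() + " accounts");
    }
}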
IDEA Ultimate Edition has a code duplication detector that's very good (it's only missing a 'suggested solution' button!), and there are CPD etc.
I'd suggest that in a large legacy codebase you might waste time refactoring code only to find out it wasn't used anyway. I outlined some steps for removing unused code: http://rickyclarkson.blogspot.com/2009/12/deleting-code-what-first.html
How many of those "issues" are real problems and not just matters of style? Of this list, the only 'real' issue I can see is "Repeat Yourself frequently", and that's more of an ongoing maintenance problem that should be resolved during normal code maintenance when someone's going to be changing the code anyway.
I want to modify them so that they are more object-oriented.
Object-orientation should not be your only goal when refactoring. The question you should ask yourself is what the expected ROI is (better quality? easier enhancements? better sharing of this code across a team?). An ROI is not just words; you should be prepared to measure the return on investment with numbers (even for the quality improvements, for example). You should take into account the expected lifetime of your products when estimating the ROI.
You should also ask yourself how much code depends on the code you want to refactor. Refactoring a library could be easy, but it could lead to a lot of changes in the source code that depends on this library, an effort well beyond just refactoring the library itself.
Before touching any code, you should estimate the total work that needs to be done to finish refactoring the code and the dependent code. You should weigh up a total rewrite of the code, a partial rewrite, or just an internal rewrite without touching the APIs.
With the costs and returns, you could decide if it's worth the effort to refactor your code.
In school we were assigned to design a language and then to implement it (I'm having so much fun implementing it =)). My teacher told us to use yacc/lex, but I decided to go with Java + the regex API. Here is how the language I designed looks:
Program "my program"
var yourName = read()
if { equals("guy1" to yourName) }
print("hello my friend")
else
print("hello extranger")
end
Program End
Well, as you can see, it's a pretty basic language =).
I thought I could implement it in a very OOP fashion: make an abstract class Sentence, then have subclasses like VariableAssignment, IfSentence, etc., and have a class Program which is only a bunch of sentences, right? Then call an abstract method eval on all Sentences. So my initial approach to compile the language consisted of only two phases:
Identify the syntax of each line
Create the corresponding class for each line
Of course, if something goes wrong in either phase, I could raise an error.
My question is, am I doing it wrong? Should I go over all phases like the theory says (lexical, syntactical, semantical)? Should I continue with my naive two-phase compiler?
I won't ask the obvious question of why you're not following the advice of your instructor and using yacc/lex, because I know the answer. You wanted to go off and do something that you thought was cool and would help you learn. Unfortunately, that approach was recommended by your professor because, as another poster stated, a lot of very smart people before you have explored multiple approaches and spent vast quantities of time trying to find a good solution.
You can make a two-phase compiler work, but you will need to accept that it will never be as good as going through the full process, because it's harder to detect errors. A lot harder, in fact. In some cases, you won't even be able to tell that there's an error until it's too late, i.e. already compiled and attempting to run.
If you want to learn a lot more about it, go with the two phase approach and you will run into the same problems that the people before you ran into. Just be sure to understand that it will take you a lot longer to get to a final solution, you might be docked points on your project, and it might not work right.
That said, you're going to learn more about it than anyone else in the class. If you have the time to spare, I'd do it the way you are now. The knowledge might come in handy down the road. I would also talk to your professor and tell him that you're going to do it another way against his recommendations because you want to have a more thorough understanding. Perhaps he won't knock points off from your project for being ambitious, even if it turns out wrong.
After all, the point of doing projects in college is to learn.
A lot of smart people have thought about this and, from what I take from your post, they came to the conclusion that all the phases are needed.
So if you want your compiler to work, go the way the theory dictates.
If you want to understand why it dictates the phases, try the shortcut. It will probably take a lot longer.
Disclaimer: I have no idea about compiler theory
Another note: You have a problem; You decide to solve it using regexps; Now you have two problems
If you use regexes to parse each line, your language will have a very limited syntax.
You would not be able to parse each line using just a regular expression API if your syntax becomes more complex. Even the if { equals("guy1" to yourName) } would become impossible to parse with regexes if you start adding AND and OR operators, and what would happen if you start supporting escape characters like \n in your string literals?
The Java Regex API would be able to help you with the lexical analyzer, but you would have to write the parser from there. You could take one of several approaches:
If you're using Java, you could look at Antlr (which negates the need for writing a lexical analyzer with Java's regex library), or
You could write a recursive descent parser by hand (a tiny sketch follows below)
among others
(also, "Statement" is a synonym for "Sentence" that is more common in compiler texts)
If you want to use only regular expressions to parse your language, your language can only be regular. This is a big restriction: for example, arbitrarily deep nesting would be impossible, as you would have to teach your parser each nesting combination separately. I am not sure that building a Turing-complete regular language is even possible.
If you really want to get your hands dirty, code a recursive descent parser. If you want to understand compiler theory, use Antlr and concentrate on the principles, leaving the implementation to the parser generator.
BTW, why would you want to complicate your life with regex?!
I am building a spring mvc web application.
I plan on using hibernate.
I don't have much experience with obfuscating etc.
What are the potential downsides to obfuscating an application?
I understand that there might be issues with debugging the app, and recovering lost source code is also an issue.
Are there any known issues with the actual running of the application? Can bugs be introduced?
Since this is an area I am looking for general guidance, please feel free to open up any issues that I should be aware of.
There are certainly some potential performance/maintenance issues, but a good obfuscator will let you get round at least some of them. Things to look out for:
an obvious one: if your code calls methods by reflection or dynamically loads classes, then this is liable to fail if the class/method names are obfuscated; a good obfuscator will let you select class/method names not to obfuscate to get round this problem;
a similar issue can occur if not all of your application is compiled at the same time;
if it deals directly at the bytecode level, an obfuscator can create code that in principle a Java compiler cannot create (e.g. it can insert arbitrary GOTO instructions, whereas from Java these can only be created as part of a loop)-- this may be a bit theoretical, but if I were writing a JVM, I'd optimise performance for sequences of bytecodes that a Java compiler can create, not ones that it can't...
the obfuscator is liable to make other subtle changes to performance if it significantly alters the number of bytecodes in a method, or in some way changes whether a given method/piece of code hits thresholds for certain JVM optimisations (e.g. "inline methods with fewer than X bytecodes").
But as you can see, some of these effects are a little subtle and theoretical-- so to some extent what you need to do is soak-test your application after obfuscation, just as you would with any other major change.
You should also be careful not to assume that obfuscation hides your code/algorithm (if that is your intention) as much as you want it to-- use a decompiler to have a look at the contents of the resulting obfuscated classes.
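To make the first point above concrete, here is a made-up example of the kind of call that breaks once class names are renamed; the class name is hypothetical, and the keep rule shown is ProGuard-style syntax, so check your own obfuscator's exclusion mechanism:

public class PluginLoader {
    public static void main(String[] args) throws Exception {
        // Works before obfuscation; throws ClassNotFoundException afterwards, because the obfuscator
        // renames com.example.plugins.CsvExporter but cannot see that this string literal refers to it.
        Object exporter = Class.forName("com.example.plugins.CsvExporter")
                               .getDeclaredConstructor()
                               .newInstance();
        System.out.println("Loaded " + exporter.getClass().getName());
    }
}

// Typical remedy: exclude the reflectively-loaded class from renaming, e.g. (ProGuard-style)
//     -keep class com.example.plugins.CsvExporter { *; }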
Surprised no one has mentioned speed - in general, more obfuscated = slower-running code
[Edit] I can't believe this has -2. It is a correct answer.
Shortening identifiers and removing unused methods will decrease the file-size, but have 0 impact on the running speed (other than the few nanoseconds shaved off the loading time). In the meanwhile, most of the obfuscation of the program comes from added code:
Breaking 1 method into 5; interleaving methods; merging classes [aggregation transformations]
Splitting 1 arithmetic expression into 10; jumbling the control-flow [computation transformations]
And adding chunks of code that do nothing [opaque predicates]
are all common obfuscation techniques that cause a program to run slower.
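By way of illustration (hand-written, not the output of any particular obfuscator), here is roughly what splitting an expression plus an opaque predicate can do to a one-line method; the extra branch and scattered arithmetic are exactly the kind of thing that gets in the way of JIT optimisations such as inlining:

class Pricing {
    // Before: a one-liner the JIT will happily inline
    int price(int quantity) {
        return quantity * 10;
    }

    // After (illustrative only): same result, arithmetic split in two plus an always-true branch
    int priceObfuscated(int quantity) {
        int a = quantity << 3;                                // quantity * 8
        int b = quantity + quantity;                          // quantity * 2
        // Opaque predicate: q*q + q is always even, so this is always true, but not locally obvious.
        if ((quantity * quantity + quantity) % 2 == 0) {
            return a + b;                                     // 8q + 2q == 10q
        }
        return -1;                                            // dead branch that still costs code size and analysis
    }
}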
You may want to look at some of the comments here, to decide if obfuscating makes sense:
https://stackoverflow.com/questions/1988451/net-obfuscation
You may want to express why you want to obfuscate. IMO the best reasons are mainly to have a smaller application, as you can get rid of classes that aren't being used in your project, while obfuscating.
I have never seen bugs introduced, as long as you aren't using reflection; with reflection you may no longer find what you're looking for, as private methods, for example, will have their names changed.
The biggest problem centers around that fact that obfuscating programs generally make a guarantee of not changing the behavior of their target program. In some cases it proves to be very hard to do this -- for example, imagine a program which checks the value of certain private fields via reflection from a string array. An obfuscator may not be able to tell that this string also needs to be updated correspondingly, and the result will be unexpected access errors that pop up at runtime.
Worse still, it may not be obvious that the behavior of a program has changed subtly -- then you may not know that there's a problem at all, until your customer finds it first and gets upset.
Generally, professional-grade obfuscation products are sophisticated enough to catch some kinds of problems and prevent them, but ultimately it can be challenging to cover all the bases. The best defense is to run unit tests against the obfuscated result and make sure that all your expected behavior continues to hold true.
One free one you might want to check out is Babel. It is designed to be used on the command line (like many other obfuscators), but there is a Reflector add-in that will provide a UI for you.
When it comes to obfuscation, you really need to analyze what your goal is. In your case - if you have a web application (mvc) are you planning on selling it as a canned downloadable application? (if not and you keep the source on your web servers then you don't need it).
You might look at the components and pick only certain parts to obfuscate ... not the whole thing. In general, ASP.Net apps break pretty easily when you try to add obfuscation after you've developed them, due to all the reflection used.
Pretty much everything mentioned above is true ... it all depends on how many features you turn on to make it hard to reverse your code:
Renaming of members (fields/methods/events/properties) is most common (comes in different flavors: simple renaming of methods from something like GetId() to a() all the way to unreadable characters and removal of namespaces). BTW: This is where reflection usually breaks. Your assembly file may end up being smaller due to smaller strings being used too.
String encryption: this makes it harder to reverse the static strings used in your code. BTW: this, paired with renaming, makes it difficult for you to debug your renaming problems ... so you might turn it on after you have that working. This also has to add code that decrypts the string right before it is used, in the IL.
Code mangling ... this is what BlueRaja was referring to. It makes your code look like spaghetti code, to make it harder for someone to figure out. The CLR does not like this ... it can't optimize things as easily, and your final code will most likely run slower due to the additional branching and to things not being inlined, because of the IL rewriting used for this option. BTW: this option really does raise the bar on what it takes to reverse your source code, but it may come with a performance hit.
Removal of unused code. Some obfuscators offer you the option to trim any code that they find not being used. This may make your assembly a little smaller if you have a lot of dead code hanging around ... but it is just a free benefit obfuscators throw in.
My advice is to only use it if you know why you are using it and design with that end in mind ... don't try to add it after you've finished your code (I've done that and it's not fun)