Related
While studying the standard Java library and its classes, I couldn't help noticing that some of them have methods that, in my opinion, have next to no relevance to the class's purpose.
The methods I'm talking about are, for example, Integer#getInteger, which retrieves the value of a system property, or System#arraycopy, whose purpose is well defined by its name.
Still, both of these methods seem rather out of place, especially the first one, which for some reason ties reading system properties to a primitive-type wrapper class.
From my current point of view, such a method placement policy looks like a violation of a fundamental OOP design principle: each class must be dedicated to solving its particular set of problems, not turn itself into a Swiss army knife.
But since I don't think the Java designers are idiots, I assume there's some logic behind the decision to place those methods right where they are. So I'd be grateful if someone could explain what that logic really is.
Thanks!
Update
A few people have hinted that Java does have its illogical bits that are simply remnants of a turbulent past. Let me reformulate my question, then: why is Java so unwilling to mark its architectural flaws as deprecated? It's not as if existing deprecated features are likely to be removed in the foreseeable future, and marking things deprecated really helps people refrain from using them in new code.
This is a good thing to wonder about. For more recent features (such as generics, lambdas, etc.) there are several blogs and mailing-list posts that explain the choices made by the library designers, and these are very interesting to read.
In your case I expect the answer isn't too exciting. It's hard to tell exactly why those methods were added, but both classes have existed since JDK 1.0. In those days the quality of programming in general (and of Java and OO in particular) was perhaps lower, meaning there were fewer established practices and library makers had to invent many paradigms themselves. There were also other constraints at the time, such as object creation being expensive.
Many of those awkwardly designed methods and classes now have a better alternative (see Date and the java.time package).
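For instance, the contrast between the old Date/Calendar API and java.time looks roughly like this (just an illustration using standard JDK classes):

import java.util.Calendar;
import java.util.Date;
import java.time.LocalDate;

public class DateVsJavaTime {
    public static void main(String[] args) {
        // Old style: java.util.Calendar/Date, mutable, with 0-based months
        Calendar cal = Calendar.getInstance();
        cal.set(2020, Calendar.JANUARY, 15);
        Date legacy = cal.getTime();

        // New style: java.time (Java 8+), immutable and readable
        LocalDate modern = LocalDate.of(2020, 1, 15);
        System.out.println(legacy + " / " + modern.plusDays(30));
    }
}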
You would expect arraycopy to live in the Arrays class, but unfortunately it is not there.
Ideally the original method would be deprecated for a while and then removed. Many libraries follow this strategy. Java, however, is very conservative about this and only deprecates things that really should not be used (such as Thread.stop()). I don't think a method has ever been removed from Java due to deprecation. This means it is fairly easy to upgrade your software to a newer version of Java, but it comes at the cost of leaving some clutter in the libraries.
The fact that Java is so conservative about keeping new JDK/JRE versions compatible with older source code and binaries is both loved and hated. For a hobby project, or a small, actively developed project, upgrading to a new JVM that removes deprecated functions after a few years is not too difficult. But don't forget that many projects are not actively developed, or their developers have a hard time making changes safely, for instance because they lack a proper regression test suite. In such projects API changes cost a lot of time to adapt to, and run the risk of introducing bugs.
Also, libraries often try to support older versions of Java as well as newer ones; they would have a problem doing so if methods had been deleted.
The Integer example is probably just a design decision. If you want to interpret a system property as an Integer, you use java.lang.Integer; otherwise you would have to provide a getter method on System for each java.lang type. Something like:
System.getPropertyAsBoolean(String)
System.getPropertyAsByte(String)
System.getPropertyAsInteger(String)
...
And for each data type, you'd require one additional method for the default:
System.getPropertyAsBoolean(String, boolean)
System.getPropertyAsByte(String, byte)
...
Since the java.lang types already know how to parse strings (Integer.valueOf(String)), I am not too surprised to find a property getter here: convenience in trade for bending principles a tiny bit.
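To make the trade-off concrete, here is a small sketch (the property name is made up):

public class SystemPropertyDemo {
    public static void main(String[] args) {
        System.setProperty("my.app.threads", "8");   // made-up property, set here just for the demo

        // The convenience method on the wrapper class, with a default value
        int viaWrapper = Integer.getInteger("my.app.threads", 4);

        // What you would otherwise write by hand against System
        String raw = System.getProperty("my.app.threads");
        int byHand = (raw != null) ? Integer.parseInt(raw) : 4;

        System.out.println(viaWrapper + " " + byHand);
    }
}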
As for System.arraycopy, I guess it is an operation that depends on the underlying system: it probably copies memory from one location to another in a very efficient way. If I wanted to copy an array like that, I'd look for it in java.lang.System.
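For illustration, here are the JDK 1.0 call and the later java.util.Arrays convenience method (which, in common JDK implementations, delegates to System.arraycopy under the hood) side by side:

import java.util.Arrays;

public class ArrayCopyDemo {
    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 5};

        // The JDK 1.0 way: a static (native) method on System
        int[] dest = new int[3];
        System.arraycopy(src, 1, dest, 0, 3);          // copies {2, 3, 4}

        // The later convenience method on java.util.Arrays
        int[] slice = Arrays.copyOfRange(src, 1, 4);   // also {2, 3, 4}

        System.out.println(Arrays.toString(dest) + " " + Arrays.toString(slice));
    }
}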
"I assume that there's some logic behind a decision to place those
methods right where they are."
While that is often true, I have found that when something's off, this assumption is typically where you are being misled.
A language is in constant development, from the day someone proposes it to the day it becomes antiquated. In between those extremes are several phases that the language goes through. Especially if someone is spending money on it and wants people to use it, a very peculiar phase often occurs just before or after the first release:
The "we need this to work yesterday" phase.
This is where stuff like this happens: you have an almost complete language, but the programmers need to do something to show what the language can do, or a specific application needs a feature that was not designed into the language.
So where do we add this feature?
- well, wherever it makes the most sense to the particular programmer whose task it is to "make it work yesterday".
The logic may be that this is where the function makes the most sense, since it doesn't belong anywhere else and doesn't deserve a class of its own. It could also be something like: so far, we have never done an array copy without using System, so let's put arraycopy in there and save everyone an extra import.
In the next generation of the language, people will not move the feature, since some experienced programmers will complain. So the feature may instead be duplicated, and also found in a place where it makes more sense.
Much later, it may be marked as deprecated and eventually deleted, if anyone cares to clean it up.
I have read through a bunch of best practices online for JUnit and Java in general, and a big one that people like to point out is that fields and methods should be private unless you really need to let users access them. Class variables should be private with getters and setters, and the only methods you should expose should be ones that users will call directly.
My question: how strictly necessary are these rules when you have things like standalone apps that don't have any users? I'm currently working on something that will get run on a server maybe once a month. There are config files that the app uses that can be modified, but otherwise there is no real user interaction once it runs. I have mostly been following best practices but have run into issues with unit testing. A lot of the time it feels like I am just jumping through hoops to get my unit tests just right, and it would be much easier if the method or whatever were public or even protected instead.
I understand that encapsulation will make it easier to make changes behind the scenes without needing to change code all over, but without users to impact that seems a bit more flimsy. I am just making my current job harder on the off-chance it will save me time later. I've also seen all of the answers on this site saying that if you need to unit test a private method you are doing something wrong. But that is predicated on the idea that those methods should always be private, which is what I am questioning.
If I know that no one will be using the application (calling its methods from a jar or API or whatever) is there anything wrong with making everything protected? Or even public? What about keeping private fields but making every method public? Where is the balance between "correct" accessibility on pieces of code, and ease of use?
It is not "necessary", but applying standards of good design and coding principles even in the "small" projects will help you in the long run.
Yes, it takes discipline to write good software. Languages are tools that help you accomplish a goal. Like any tool, they can be misused, and when misused can be dangerous. Power tools, like a table saw, can be very dangerous if misused, so if you care about your own safety you always follow proper procedure, even if it might feel a little inconvenient (or you end up nicknamed "stubby").
I'd argue that it's on the small projects, where you want to cut corners and "just write the code", that adhering to the best practices is most important. You are training yourself in the proper use of your tools, so when it really matters you do the right thing automatically.
Also consider that projects that start out "small" can evolve over time to become quite large as you keep adding enhancements and new functionality. This is the nature of agile software development. If you followed best practices from the start you'll find it much easier to adapt as the project grows.
Another factor is that using OOP principles is a way of taming complexity. If you have a well-defined API and, for example, use only getters and setters, you can partition off the API from the implementation in your own mind. After writing class A, when writing a client of A, say B, you can think only about the API. Later when you need to enhance A you can easily tell what parts of A affect the API vs what parts are purely internal. If you didn't use encapsulation you'd have to scan your entire codebase to see if a change to A would break something else.
Do I apply this to EVERYTHING I write? No, of course not. I don't do this with short single-use scripts in dynamic languages (Perl, AWK, etc) but when writing Java I make it a point to always write "good" code, if only to keep my skills sharp.
There is generally no necessity to follow any rules as long as your code compiles and runs correctly.
However, code style "best practices" have proven to enhance code quality, especially over time as a project develops and matures. Making fields private makes your code more resilient to later changes; if you omit the getters/setters and access fields directly, any change to a field impacts related code much more directly.
While there is seemingly no advantage to a getter/setter at first, the advantage lies in the future: a getter forces any code working with the attribute through a single point of control, which in case of any change to that field helps mask its concrete representation/location and/or allows for polymorphism when required, without having to change or check all the existing callers.
Finally, the less surface (accessible methods/fields) a class exposes to other classes (users), the less you have to maintain. Reducing the exposed API to the absolute minimum reduces coupling between classes, which again is an advantage when something needs to be changed. Striving to hide the inner workings of every object as well as possible is not a goal in itself; it's the advantages that result from it that are the goal.
As always, good balance is required. But when in doubt, it is better to err/lean on the side of "source code quality" practices instead of taking too many shortcuts, as there are many different aspects to your "simple" question that one should consider:
It is hard to anticipate what will happen to some piece of software over time. Yes, you don't have any users today. But you know what: one major property of great tools is ... as soon as other people see them, they want to use them, too. And all of a sudden, you have users. And feature requests, bug reports, ... and make no mistake: first people will love you for the productivity gain from your tool; and then they will start to put pressure on you because all of a sudden your tool is essential for other people to meet their goals.
Many things are fine to be addressed via convention. Example: sometimes, if I would only be using public methods of my "class under test", unit tests become more complicated than necessary. In such a case, I absolutely have no problem about putting a getter here or there that allows me to inspect the internal state of my "class under test"; so that my test can trigger some activity; and then call the getter. I make those methods "package protected"; and I put a // used for unit testing above them. I have not seen problems coming out of that informal practice. Please note: those methods should only be used in test cases. No other production class is supposed to call them.
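To make that informal practice concrete, a minimal sketch (class and field names are invented):

// Production class; all names are invented for illustration
public class OrderProcessor {
    private int processedCount;

    public void process(String orderId) {
        // ... real work would happen here ...
        processedCount++;
    }

    // used for unit testing -- package-private on purpose; only tests in the same package call it
    int getProcessedCount() {
        return processedCount;
    }
}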
Regarding the core of your question on private stuff: I think one should always hide implementation details from the outside. Whenever you write a piece of code that is supposed to live longer than the next hour, you should do the right thing and try to write code of very high quality. Making the internals of your objects visible on the outside comes only with drawbacks; there is nothing positive in doing so. Good OO is about using models that come with certain behavior. Internal state should stay internal; there is no benefit in exposing it.
For the record: sometimes you have simple data containers, classes that only have some fields but no methods on them. In that case, yeah, make the fields public; there is not much advantage in providing getters/setters. (See "Clean Code" by Robert Martin, chapter 6, "Objects and Data Structures".)
I have a question about Java style. I've been programming Java for years, but primarily for my own purposes, where I didn't have to worry much about style, and I've just now got a job where I have to use it professionally. I'm asking because I'm about to have people really go over my code for the first time and I want to look like I know what I'm doing. Heh.
I'm developing a library that other people will make use of at my work. The way that other code will use my library is essentially to instantiate the main class and maybe call a method or two in that. They won't have to make use of any of my data structures, or any of the classes I use in the background to get things done. I will probably be the primary person who maintains this library, but other people are going to probably look at the code every once in a while.
So when I wrote this library, I just used the default no modifier access level for most of my fields, and even went so far as to have other classes occasionally read and possibly write from/to those fields directly. Since this is within my package this seemed like an OK way to do things, given that those fields aren't going to be visible from outside of the package, and it seemed to be unnecessary to make things private and provide getters and setters. No one but me is going to be writing code inside my package, this is closed source, etc.
My question is: is this going to look like bad style to other Java programmers? Should I provide getters and setters even when I know exactly what will be getting and setting my fields and I'm not worried about someone else writing something that will break my code?
Even within your closed-source package, encapsulation is a good idea.
Imagine that a bunch of classes within your package are accessing a particular property, and you realize that you need to, say, cache that property, or log all access to it, or switch from an actual stored value to a value you generate on-the-fly. You'd have to change a lot of classes that really shouldn't have to change. You're exposing the internal workings of a class to other classes that shouldn't need to know about those inner workings.
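A minimal sketch of that situation (names are invented); the point is that callers never need to change when the getter's body does:

public class UserProfile {
    private String displayName;

    public String getDisplayName() {
        if (displayName == null) {
            displayName = loadFromDatabase();          // e.g. switch to lazy/on-the-fly computation
        }
        System.out.println("displayName accessed");    // e.g. log or audit every access
        return displayName;
    }

    private String loadFromDatabase() {
        return "Ada Lovelace";                          // stand-in for the real lookup
    }
}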
I would adhere to a common style (and in this case provide setters/getters). Why?
it's good practise for when you work with other people or provide libraries for 3rd parties
a lot of Java frameworks assume getter/setter conventions and are tooled to look for these/expose them/interrogate them. If you don't do this, then your Java objects are closed off from these frameworks and libraries.
if you use setters/getters, you can easily refactor what's behind them. Just using the fields directly limits your ability to do this.
It's really tempting to adopt a 'just for me' approach, but a lot of conventions are there since stuff leverages off them, and/or are good practise for a reason. I would try and follow these as much as possible.
I don't think a good language should have ANY level of access except private--I'm not sure I see the benefit.
On the other hand, be careful about getters and setters in general--they have a lot of pitfalls:
They tend to encourage bad OO design (you generally want to ask your object to do something for you, not act on its attributes)
This bad OO design causes code related to your object to be spread around different objects and often leads to duplication.
setters make your object mutable (something that is always nice to avoid if you can)
setters and getters expose your internal structures (if you have a getter for an int, it's difficult to later change that to a double--you have to touch every place it was accessed and make sure it can handle a double without overflowing or causing an error; if you had just asked your object to manipulate the value in the first place, the only changes would be internal to your object).
Most Java developers will prefer to see getters and setters.
No one may be developing code in your package, but others are consuming it. By exposing an explicitly public interface, you can guarantee that external consumers use your interface as you expect.
If you expose a class' internal implementation publicly:
It isn't possible to prevent consumers from using the class inappropriately
There is lost control over entry/exit points; any public field may be mutated at any time
Coupling increases between the internal implementation and the external consumers
Maintaining getters and setters may take a little more time, but offers a lot more safety plus:
You can refactor your code any time, as drastically as you want, so long as you don't break your public API (getters, setters, and public methods)
Unit testing well-encapsulated classes is easier - you test the public interface and that's it (just your inputs/outputs to the "black box")
Inheritance, composition, and interface designs are all going to make more sense and be easier to design with decoupled classes
Decide you need to validate a value before it's set? The setter is the one place to put that check (see the sketch below).
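For example, something like this (names invented); every write goes through one checked path:

public class Account {
    private long balanceCents;

    public void setBalanceCents(long balanceCents) {
        if (balanceCents < 0) {
            throw new IllegalArgumentException("balance must not be negative: " + balanceCents);
        }
        this.balanceCents = balanceCents;
    }

    public long getBalanceCents() {
        return balanceCents;
    }
}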
It's up to you to decide if the benefits are worth the added time.
I wouldn't care much about the style per se (or any kind of dogma for that matter), but rather the convenience in maintainability that comes with a set of getter/setter methods. If you (or someone else) later needed to change the behavior associated with a change of one of those attributes (log the changes, make it thread-safe, sanitize input, etc.), but have already directly modified them in lots of other places in your code, you will have wished you used getter and setter methods instead.
I would be very loath to go into a code review with anything but private fields, with the possible exception of a protected field for the benefit of a subclass. It won't make you look good.
Sure, I think from the vantage point of a Java expert, you can justify the deviation from style, but since this is your first professional job using Java, you aren't really in that position.
So to answer your question directly: "Is this going to look like bad style?" Yes, it will.
Was your decision reasonable? Only if you are really confident that this code won't go anywhere. In a typical shop, there may be chances to reuse code, factor things out into utility classes, etc. This code won't be a candidate without significant changes. Some of those changes can be automated with IDEs, and are basically low risk, but if your library is at the point where it is stable, tested and used in production, encapsulating that later will be regarded as a bigger risk than was needed.
Since you're the only one writing code in your closed-source package/library, I don't think you should worry too much about style - just do what works best for you.
However, for me, I try to avoid directly accessing fields because it can make the code more difficult to maintain and read - even if I'm the sole maintainer.
Style is a matter of convention. There is no right answer as long as it is consistent.
I'm not a fan of camel case, but in the Java world camelCase rules supreme, and all member variables should be private.
private int data;
public int getData() { return data; }
public void setData(int i) { this.data = i; }
Follow the Official Java code convention by Sun (cough Oracle) and you should be fine.
http://java.sun.com/docs/codeconv/
To be brief: you said "I'm asking because I'm about to have people really go over my code for the first time and I want to look like I know what I'm doing." So change your code, because as it stands it makes it look like you do not know what you are doing.
The fact that you have raised the issue shows that you are aware it will probably look bad (this is a good thing), and it does. As has been mentioned, you are breaking fundamentals of OO design for expediency. This simply results in fragile and typically unmaintainable code.
Even though it's painful, coding up properties with getters and setters is a big win if you're ever going to use your objects in a context like JSP (the Expression Language in particular), OGNL, or another template language. If your objects follow the good old Bean conventions, then a whole lot of things will "just work" later on.
I find getters and setters a better way to program, and it's not only a matter of coding convention. No one knows the future: we may store a phone number as a plain string today, but tomorrow we might have to put a "-" between the area code and the number. If we have a getPhonenumber() method defined, we can do such beautification very easily.
So I would suggest we always follow this style of coding for better extensibility.
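For example (a made-up Contact class, just to show the idea):

public class Contact {
    private String phonenumber;                 // stored as plain digits, e.g. "2025550123"

    public Contact(String phonenumber) {
        this.phonenumber = phonenumber;
    }

    // Today this could just return the field; tomorrow it can format
    // the number without any caller having to change.
    public String getPhonenumber() {
        return phonenumber.replaceFirst("(\\d{3})(\\d{3})(\\d{4})", "$1-$2-$3");
    }
}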
Breaking encapsulation is a bad idea. All fields should be private. Otherwise the class can not itself ensure that its own invariants are kept, because some other class may accidentally modify the fields in a wrong way.
Having getters and setters for all fields is also a bad idea. A field with getter and setter is almost the same as a public field - it exposes the implementation details of the class and increases coupling. Code using those getters and setters easily violates OO principles and the code becomes procedural.
The code should be written so that it follows Tell Don't Ask. You can practice it for example by doing an Object Calisthenics exercise.
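A tiny made-up example of the difference:

public class Wallet {
    private long balanceCents;

    public Wallet(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    // "Ask" style would force callers to do:
    //   if (wallet.getBalanceCents() >= price) { wallet.setBalanceCents(...); }
    // Tell Don't Ask keeps the rule inside the object instead:
    public void pay(long priceCents) {
        if (priceCents > balanceCents) {
            throw new IllegalStateException("insufficient funds");
        }
        balanceCents -= priceCents;
    }
}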
Sometimes I use public final fields without getters/setters for short-lived objects that just carry some data (and will never do anything else by design).
Once on that subject: I'd really love it if Java had implicit getters and setters created with a property keyword...
Using encapsulation is a good idea even for closed source, as JacobM already pointed out. But if your code is acting as a library for another application, you cannot stop that application from accessing the classes that are meant for internal use. In other words, you cannot(?) enforce a restriction that a public class X may be used only by classes in your own application.
This is where I like the Eclipse plugin architecture, where you can declare which packages in your plugin dependent plugins may access at runtime. JSR 277 aimed at bringing this kind of modularity to the JDK, but it is dead now. Read more about it here:
http://neilbartlett.name/blog/2008/12/08/hope-fear-and-project-jigsaw/
Now the only option seems to be OSGi.
While I am well aware of the common pressure to use getters and setters everywhere regardless of the case, and the code review process leaves me no choice, I am still not convinced of the universal usefulness of this idea.
The reason: for data-carrying classes, over ten years of development there has not been a single case where I wrote anything other than setting the variable in the setter and reading it in the getter, while lots of time has been spent generating, understanding and maintaining this cargo-cult code that does not seem to make any sense.
A data class is a structure or record, not a real class. It does not do anything itself; other classes make changes to it. There should be no functionality in it at all, let alone functionality in the setters or getters. Java probably needs a separate keyword for a multi-field data record that has no methods.
On the other hand, the process now seems to have gone so far that it probably makes a lot of sense to put in getters and setters right from the beginning, even on your first day with a new team. It is important not to conflict with the team.
I have always worked on statically typed languages (C/C++, Java). I have been playing with Clojure and I really like it.
One thing I am worried about is: say that I have a function that takes 3 modules as arguments, and along the way the requirements change and I need to pass another module to it. I just change the function and the compiler complains everywhere I used it. But in Clojure it won't complain until the function is called. I can just do a regex search and replace, but it seems there is a chance of missing a call, and that will go unnoticed until the function is actually called. How do you guys deal with this?
This is one of the reasons automated testing/test driven development is even more important in dynamically typed languages. I haven't used Clojure (I mostly use Ruby), so unfortunately I can't recommend a specific testing framework.
The first thing I'd like to mention is that Bruce Eckel has written a very interesting article called Strong Typing vs Strong Testing (the link is down at the moment, unfortunately, but hopefully it will be up soon).
His idea is that when dealing with compiled languages, the compiler is just acting as the first, automatic level of testing. When making the move to a dynamic language, you lose this first level of automatic testing. But in both cases, this first, automatic level is just one part of testing, and not even a very important part.
His point is that if you're developing programs properly, i.e. with some form of tests and regression tests, the lack of a compiler will only force you to add some more, fairly basic tests anyway, which is why it's no big loss.
So I guess the first answer I'd give you is, focus on your testing, something you should be doing anyway, and such changes shouldn't affect you too badly.
The second thing I'd like to mention is many dynamic languages that I've seen (for example, Python) have much better abilities to change what methods/classes do without breaking existing code.
For example, with Python, if your method used to accept two parameters but now requires a third one, you can always add a default parameter without breaking any existing code, but that you can now utilize. This is a very basic technique, but in Python's case (and I assume most other dynamic languages as well), these techniques can get much more interesting; since they're dynamic, you can pretty much change the implementation of functions for specific modules, change what variables mean, etc.
I'd suggest looking at which techniques Clojure has that allow similar things, and deciding whether they apply in your situation.
You do the same thing you did if the method was part of a public interface that you weren't the only user of.
You add a new method with the extra module and change the old one to call the new one with a suitable default.
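In Java terms (since that's where the question is coming from), the same move would look roughly like this; all names here are invented stand-ins:

public class WindowBuilder {
    // stand-ins for the real types, purely for illustration
    public static class Module { }
    public static class Window { }

    // Old signature kept so existing callers keep compiling; it delegates with a default.
    public Window build(Module a, Module b, Module c) {
        return build(a, b, c, new Module());
    }

    // New signature with the extra module the requirements now demand.
    public Window build(Module a, Module b, Module c, Module d) {
        // ... assemble the window from all four modules ...
        return new Window();
    }
}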
Oh, and if your program is that big, make sure you have good tests (test-is should make that simpler than in Java).
Test coverage is definitely important. But a dynamically typed language will allow you to work in a different way. In a statically typed language (like Java), a change in the interface requires modifying all the callers. In Ruby, you could do this -- but probably won't. Instead, you'll probably add flexibility to the method in one of a few ways. Namely:
you tend to have very few methods that take as many as three parameters in Ruby (as opposed to Java). Because you don't have Java's statically typed interfaces, you break the problem down into smaller pieces and steps. It's much more common to write methods that take just one parameter, and then refactor when things become more complex.
it's possible-- and common-- to leave the old behavior in place while adding more arguments. For example, if you have to add a third argument to a two argument method, you will set its default value to preserve the old behavior (and save you a refactor). If you are familiar with Javascript libraries like jQuery, they take advantage of this everywhere with "optional" arguments.
similar to optional arguments, methods can grow to take a flexible parameter list. With solid test coverage, you can quite easily add a new behavior to an existing method and safely know you haven't broken the existing code. In Rails, methods like "render" take a wide range of options.
You're not completely without compiler support in Clojure. In the specific example you give, it's the arity of the function that changed, which would be picked up by compiling the Clojure code. I'm still making the strong -> dynamic typing transition and find this comforting!
You lose some level of refactoring and type safety when you move to dynamic languages. The more information the compiler has, the more it can do at compile time for you.
Tim Bray discusses it here, a critique of it by Cedric is here, and there is a post on Artima discussing it at length.
If you really need static typing, you can use https://github.com/clojure/core.typed and its Leiningen module to check the types being passed statically.
I've been a fan of EasyMock for many years now, and thanks to SO I came across references to PowerMock and its ability to mock constructors and static methods, both of which cause problems when retrofitting tests to a legacy codebase.
Obviously one of the huge benefits of unit testing (and TDD) is the way it leads to (forces?) a much cleaner design, and it seems to me that the introduction of PowerMock may detract from that. I would see this mostly manifesting itself as:
Going back to initialising collaborators rather than injecting them
Using statics rather than making the method belong to a collaborator
In addition to this, something doesn't quite sit right with me about my code being bytecode manipulated for the test. I can't really give a concrete reason for this, just that it makes me feel a little uneasy as it's just for the test and not for production.
At my current gig we're really pushing unit tests as a way for people to improve their coding practices, and it feels like introducing PowerMock into the equation may let people skip that step somewhat, so I'm loath to start using it. Having said that, I can really see how making use of it can cut down on the amount of refactoring that needs to be done to start testing a class.
I guess my question is: what are people's experiences of using PowerMock (or any similar library) for these features? Would you make use of them, and overall how much do you want your tests influencing your design?
I have to strongly disagree with this question.
There is no justification for a mocking tool that limits design choices. It's not just static methods that are ruled out by EasyMock, EasyMock Class Extension, jMock, Mockito, and others. These tools also prevent you from declaring classes and methods final, and that alone is a very bad thing. (If you need one authoritative source that defends the use of final for classes and methods, see the "Effective Java" book, or watch this presentation from the author.)
And "initialising collaborators rather than injecting them" often is the best design, in my experience. If you decompose a class that solves some complex problem by creating helper classes that are instantiated from that class, you can take advantage of the ability to safely pass specific data to those child objects, while at the same time hiding them from client code (which provided the full data used in the high-level operation). Exposing such helper classes in the public API violates the principle of information hiding, breaking encapsulation and increasing the complexity of client code.
The abuse of DI leads to stateless objects which really should be stateful because they will almost always operate on data that is specific to the business operation.
This is not only true for non-public helper classes, but also for public "business service" classes called from UI/presentation objects. Such service classes are usually code internal to a single business application, inherently not reusable, with only a few clients (often only one), because such code is by nature domain/use-case specific.
In such a case (a very common one, by the way) it makes much more sense to have the UI class directly instantiate the business service class, passing data provided by the user through a constructor.
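Roughly like this, as a sketch (all names are invented):

// UI/presentation class
public class TransferForm {
    public void onSubmit(String fromAccount, String toAccount, long amountCents) {
        // The UI creates the use-case-specific service directly and hands it exactly
        // the data the user provided, instead of going through an injected, stateless singleton.
        TransferService service = new TransferService(fromAccount, toAccount, amountCents);
        service.execute();
    }
}

class TransferService {
    private final String fromAccount;
    private final String toAccount;
    private final long amountCents;

    TransferService(String fromAccount, String toAccount, long amountCents) {
        this.fromAccount = fromAccount;
        this.toAccount = toAccount;
        this.amountCents = amountCents;
    }

    void execute() {
        // ... domain logic operating on the state captured in the constructor ...
    }
}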
Being able to easily write unit tests for code like this is precisely what led me to create the JMockit toolkit. I wasn't thinking about legacy code, but about simplicity and economy of design. The results I achieved so far convinced me that testability really is a function of two variables: the maintainability of production code, and the limitations of the mocking tool used to test that code. So, if you remove all limitations from the mocking tool, what do you get?
I totally agree that testability is not an end goal; this has been one of the things I have realized while developing PowerMock. I also agree that writing unit tests is one way of arriving at a good design. Using PowerMock should probably be the exception rather than the rule, at least for features such as expectations on constructors and static mocking.
The main motivation we have for using PowerMock is when using third party code that prevents your code from being testable. A good alternative is using an anti-corruption-layer that abstracts the third party code and makes it testable. However, sometimes I think the code is cleaner just using the standard APIs. A good example of this is the Java ME API. This is full of static method calls that prevent unit testing.
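Such an anti-corruption layer can be as small as this sketch (the static API and all names are made up):

// Hypothetical legacy/third-party API full of static calls
final class DeviceApi {
    static String readSetting(String key) {
        return "42"; // imagine this talks to the device
    }
}

// The anti-corruption layer: a small interface your own code depends on instead
interface Settings {
    String read(String key);
}

// Production implementation delegates to the static API...
class DeviceSettings implements Settings {
    public String read(String key) {
        return DeviceApi.readSetting(key);
    }
}

// ...while tests can supply a plain fake, no bytecode manipulation required.
class FakeSettings implements Settings {
    public String read(String key) {
        return "test-value";
    }
}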
The same problem can occur with legacy code. Some organizations are extremely afraid of modifying their existing code and in this case PowerMock can be used to introduce unit testing in the parts you are writing at the moment, without forcing big refactorings.
Our problem is specifying a set of best-practice rules that a rookie developer can follow for when to use PowerMock and when not to. Creating good design is really hard, and since PowerMock gives you more options, maybe it just gets harder for a beginner? I think a more experienced developer appreciates having more choices.
(founder of PowerMock)
I think you're right - if you need PowerMock, you probably have smelly code. Get rid of those statics.
However, I think you're wrong about bytecode instrumentation. I mock out concrete classes all the time using mockito - it keeps me from having to write an interface for every. single. class. That is much cleaner.
Only you can prevent code smells.
We've had many of the same questions arise in the .NET arena, regarding Typemock Isolator.
See this blog post.
I think that when people start to realize that Testability is not an end goal, and that design is learned in other ways, then we will stop letting our fear dictate which tools we use, or not use a more advanced technology when and if it becomes relevant.
Also, it makes sense to be able to choose the way you design based on the application's needs. Don't let a tool tell you how to design - it will leave you no choice.
(I work at Typemock, but was once against it)
I think you're right to be concerned. Refactoring legacy code to be testable isn't that hard in most cases once you've learned how.
Better to go a bit slower and have a supportive environment for learning than take a short cut and learn bad habits.
(And I just read this and feel like it is relevant.)
The layer of abstraction that PowerMock provides over reflection seems attractive, but it makes the tests brittle. More specifically:
Reflection relies on string names of the methods/fields etc. Any renaming will break the tests, and you then have to fix every test that accesses the method through reflection. With a test that doesn't require reflection, the IDE would have updated all the call sites as part of the rename.
PowerMock's features for stubbing new, static method calls, etc. make you look at implementation details of the function. Tests should ideally test the functionality, e.g.:
import java.util.*;

class Example {
    // old implementation
    List<String> functionOldImplementation() {
        return new ArrayList<>();
    }

    // old implementation changed to new
    List<String> functionNewImplementation() {
        return new LinkedList<>();
    }
}
Mocking the new ArrayList() call would break the tests after the above refactoring. Tests are needed most for catching regressions, and tests written this way fail even though the behaviour hasn't changed.
I recently came across this article; it addresses most of the points raised in the question, so I thought I'd share it.
Some key points from the article:
It took 1.5 years after the introduction of Java 7 to make PowerMock + Javassist compatible with it. Here is a note from the PowerMock change log:
Change log 1.5 (2012-12-04)
---------------------------
Upgraded to Javassist 3.17.1-GA, this means that PowerMock works in Java 7!
The PowerMock site says:
"Please note that PowerMock is mainly intended for people with expert
knowledge in unit testing. Putting it in the hands of junior
developers may cause more harm than good".