Is private necessary for a standalone Java app? [closed]

I have read through a bunch of best practices online for JUnit and Java in general, and a big one that people like to point out is that fields and methods should be private unless you really need to let users access them. Class variables should be private with getters and setters, and the only methods you should expose should be ones that users will call directly.
My question: how strictly necessary are these rules when you have things like standalone apps that don't have any users? I'm currently working on something that will get run on a server maybe once a month. There are config files that the app uses that can be modified, but otherwise there is no real user interaction once it runs. I have mostly been following best practices but have run into issues with unit testing. A lot of the time it feels like I am just jumping through hoops to get my unit tests set up just right, and it would be much easier if the method or whatever was public or even protected instead.
I understand that encapsulation will make it easier to make changes behind the scenes without needing to change code all over, but without users to impact that seems a bit more flimsy. I am just making my current job harder on the off-chance it will save me time later. I've also seen all of the answers on this site saying that if you need to unit test a private method you are doing something wrong. But that is predicated on the idea that those methods should always be private, which is what I am questioning.
If I know that no one will be using the application (calling its methods from a jar or API or whatever) is there anything wrong with making everything protected? Or even public? What about keeping private fields but making every method public? Where is the balance between "correct" accessibility on pieces of code, and ease of use?

It is not "necessary", but applying standards of good design and coding principles even in the "small" projects will help you in the long run.
Yes, it takes discipline to write good software. Languages are tools that help you accomplish a goal. Like any tool, they can be misused, and when misused can be dangerous. Power tools, like a table saw, can be very dangerous if misused, so if you care about your own safety you always follow proper procedure, even if it might feel a little inconvenient (or you end up nicknamed "stubby").
I'd argue that it's on the small projects, where you want to cut corners and "just write the code", that adhering to the best practices is most important. You are training yourself in the proper use of your tools, so when it really matters you do the right thing automatically.
Also consider that projects that start out "small" can evolve over time to become quite large as you keep adding enhancements and new functionality. This is the nature of agile software development. If you followed best practices from the start you'll find it much easier to adapt as the project grows.
Another factor is that using OOP principles is a way of taming complexity. If you have a well-defined API and, for example, use only getters and setters, you can partition off the API from the implementation in your own mind. After writing class A, when writing a client of A, say B, you can think only about the API. Later when you need to enhance A you can easily tell what parts of A affect the API vs what parts are purely internal. If you didn't use encapsulation you'd have to scan your entire codebase to see if a change to A would break something else.
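As a minimal sketch of that partition (the class names A and B follow the paragraph; the fields and methods are invented for illustration), B compiles against A's API only, so A's private parts can change freely:

    class A {
        private int[] samples = new int[0];          // purely internal

        // The API: the only thing B needs to think about.
        public double average() {
            if (samples.length == 0) return 0.0;
            int sum = 0;
            for (int s : samples) sum += s;
            return (double) sum / samples.length;
        }
    }

    class B {
        void report(A a) {
            // B depends only on A's API; how A stores its data
            // can change without B ever noticing.
            System.out.println("average = " + a.average());
        }
    }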
Do I apply this to EVERYTHING I write? No, of course not. I don't do this with short single-use scripts in dynamic languages (Perl, AWK, etc) but when writing Java I make it a point to always write "good" code, if only to keep my skills sharp.

There is generally no necessity to follow any rules as long as your code compiles and runs correctly.
However, code style "best practices" have proven to enhance code quality, especially over time as a project develops and matures. Making fields private makes your code more resilient to later changes; if you omit the getters/setters and access fields directly, any change to a field impacts related code much more directly.
While there is seemingly no advantage to a getter/setter at first, the advantage lies in the future: a getter forces any code working with the attribute through a single point of control, which, when changes related to that field arrive, helps mask the concrete representation/location of the field and/or allows for polymorphism when required, without changing or checking all the existing callers.
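For illustration, a small sketch of that single point of control (the class and fields are invented): the stored representation changes, yet no caller has to be touched.

    public class Account {
        // Was once "private double balance;" - switching the internal
        // representation to long cents touched only this class.
        private long balanceInCents;

        public double getBalance() {
            return balanceInCents / 100.0;
        }

        public void setBalance(double balance) {
            this.balanceInCents = Math.round(balance * 100);
        }
    }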
Finally, the less surface (accessible methods/fields) a class exposes to other classes (users), the less you have to maintain. Reducing the exposed API to the absolute minimum reduces coupling between classes, which again is an advantage when something needs to be changed. Striving to hide the inner workings of every object as well as possible is not a goal by itself; it's the advantages that result from it that are the goal.

As always, good balancing is required. But when in doubt, it is better to err on the side of "source code quality" practices instead of taking too many shortcuts, as there are many different aspects to your "simple" question that one should consider:
It is hard to anticipate what will happen to a piece of software over time. Yes, you don't have any users today. But you know what: one major property of great tools is ... as soon as other people see them, they want to use them, too. And all of a sudden, you have users. And feature requests, bug reports, ... And make no mistake: first people will love you for the productivity gain from your tool; then they will start to put pressure on you, because all of a sudden your tool is essential for other people to meet their goals.
Many things are fine to address via convention. Example: sometimes, if I used only the public methods of my "class under test", unit tests would become more complicated than necessary. In such a case, I have absolutely no problem with putting a getter here or there that allows me to inspect the internal state of my "class under test", so that my test can trigger some activity and then call the getter. I make those methods package-private, and I put a // used for unit testing comment above them. I have not seen problems come out of that informal practice. Please note: those methods should only be used in test cases. No other production class is supposed to call them.
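A sketch of that convention (the class and field are invented for illustration):

    public class BatchJob {
        private int retries;

        public void run() {
            // ... production logic that may update retries ...
        }

        // used for unit testing
        int retriesSeen() {          // package-private on purpose: visible
            return retries;          // to tests in the same package, but
        }                            // not part of the public API
    }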
Regarding the core of your question on private stuff: I think one should always hide implementation details from the outside. Whenever you write a piece of code that is supposed to live longer than the next hour, you should do the right thing and try to write code of very high quality. Making the internals of your objects visible to the outside comes only with drawbacks; there is nothing positive in doing so. Good OO is about using models that come with certain behavior. Internal state should stay internal; there is no benefit in exposing it. For the record: sometimes you have simple data containers - classes that only have some fields, but no methods on them. In that case, yeah, make the fields public; there is not much advantage in providing getters/setters. (See "Clean Code" by Robert Martin, chapter 6, "Objects and Data Structures".)
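A minimal sketch of such a data container (the fields are invented; the Point shape echoes that Clean Code chapter):

    public class Point {
        // A pure data carrier: public fields, no behavior at all.
        public double x;
        public double y;
    }

    // On Java 16+, a record states the same intent even more directly:
    //     record Point(double x, double y) {}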

Related

Code Standard - is it better to have a getter/setter even though they are never used?

The IDE has suggested adding a getter/setter to a private field. They are never used, and the field is only accessed from within the class.
What is the preferred coding style? Keeping the never-used methods?
I'm asking specifically about Java/Kotlin, but this is a general question.
There are a few distinctions that you need to know about to answer this question yourself - as it depends on a ton of things, far too many for you to have written down in the question:
For this entire answer it's important to think about the distinction between layers of code. These layers can be a bit hard to think about if the project you're imagining when thinking about layers is something small and written just by yourself. So don't do that - think about, say, Microsoft Word as a product. It's written by many people, over many years - entire departments and dev teams. It's somewhat modular (there's the "Mail Merge" system that doesn't interact, at all, with the 'show available fonts' dropdown).
What's the whole private fields, public getters/setters all about in the first place?
Fields are highly inflexible constructs. If you 'expose' a field (make it public), then there is no granularity available to you. The only knob you can twiddle is:
You can make the field unchangeable for everybody - you can't change it, nor can anybody else. (To do this, mark it final.)
That's it. You can't do anything else 'to' it. You can't have more fine-grained control over access (such as allowing code 'nearby' to change it, but not code further out), and you can't run some code as field writes/reads happen either. Perhaps you need more granularity. Keep in mind that we're trying to write code that will survive 10 years in an environment with 100 programmers, most of whom won't last the entire 10 years, in many different teams. So, imagine you wanted to:
Make it a field that everybody gets to read, but only 'your' code (that is, the programmers working on this particular corner of the codebase who are aware of this particular corner's rules and needs) should get to change.
Make it a field that everybody gets to read and write, but if it's not 'your' code doing the writing, a log line should be emitted.
Make it a field that nobody gets to write (not even you - it is initialized at object creation, and that's it; this makes it easier to reason about, which is why we 'handcuff' ourselves: when you need to maintain code for 10 years, walling certain things off and having a compiler that enforces it is quite useful), and 'outsiders' can read, but you want to tweak the read a bit - for example, substitute 'the current date' when the value is blank.
And so on.
Even more important is time: sometimes you start out just wanting to expose a field to everybody right now, but later on you realize: oh wait, we need to emit a log line. Or: oops, we need to return the current date if the value is blank.
If you just make a field public, you:
Do not have any of that granularity.
Even if you're okay with that now, you can't later on update your code and add stuff that needs this granularity; not without turning the field into a getter/setter pair, and that is not backwards compatible: you need to send a mail to those 100 developers or start refactoring their code, which is a huge undertaking.
Hence, even if you don't see any point or purpose in the granularity powers right now, it's still advisable to make that field private and add getters and setters: that way, if some currently unforeseeable request comes in later (such as: log the writes to this field, please!), you can add that feature without having to ask all the other 100 developers to pull the change and edit all their branches.
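A hedged sketch of that upgrade path (the class and field are invented): because callers already go through the accessors, both later requests land in one place.

    public class Document {
        private String title;

        public String getTitle() {
            // Added years later, no caller changed: substitute
            // a default when the value is blank.
            return (title == null || title.isEmpty()) ? "(untitled)" : title;
        }

        public void setTitle(String title) {
            // Also added later: log writes to this field.
            System.out.println("title changed to: " + title);
            this.title = title;
        }
    }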
YAGNI
A maxim in the programming world is YAGNI - You aren't gonna need it.
YAGNI is a dangerous beast - it applies -solely- to semi-local endeavours.
The basic principle of YAGNI is: code is a flowing concept, and you should never hesitate to make improvements, especially if you can't think of a way the improvement would break any existing usage. Hence, given that your development processes should be set up such that adding stuff is easy, don't add stuff until you need it - after all, if you add stuff even though you don't currently need it, maybe you never will, and you're just clogging up the code for no good reason. If somebody needs it, they can trivially add it then.
The problem with YAGNI is that predicate: YAGNI is based on the notion that making a change is quick and painless.
Imagine this scenario: the Microsoft Office development crew decides to write their own font rendering system, because what Windows delivers just looks bad on HiDPI screens. So they spend a ton of time and research on this, and with much fanfare release a new version. Everybody loves it.
The OS team comes knocking, and the MS Office team decides to 'hand over' the new font rendering engine to the OS team. To avoid having two teams spend resources on maintaining it, the next version of MS Office is pegged to run only on a new version of the OS that includes the new pipeline, and thus the MS Office team removes the font rendering engine from its own codebase - it's now the OS's job.
Whoops, any YAGNI is now quite a big problem: if there's something foreseeable and obvious the MS Office team needed but didn't add (or if the Windows OS team applied YAGNI to the API they expose to apps for font rendering), then the MS Office team needs to call the Windows OS team - which is in another country, works on different source control, and has entirely different versioning pipelines - and ask them for a change. It'll take 2 years before it's all said and done.
Linters/stylecheckers are tools, and fairly stupid ones at that
Any warning about style or suggestion about changes are just that - suggestions. These tools aren't perfect, and will absolutely suggest very silly things from time to time. You should never apply style advice until you understand why it is given and under what circumstances it should be followed, and you should feel free to tell linters/stylecheckers to buzz off if they are wrong.
Some dev shops put out absolutist rules ('you can NEVER check in code that fails our linter tool - we have git commit hooks that enforce this!'), but those shops are misguided: they seem to think that if only you rigorously apply enough style rules, the code will therefore be well written, bug free, and performant. This is obviously entirely false. You should absolutely help programmers (and might even lightly enforce this) to help themselves and avail themselves of the tools available for writing better code, but you can't beat the bird to make it sing, so to speak.
Thus, be aware that sometimes the best thing to do about a style suggestion - is to ignore it.
Back to your question
So, now you know what I'm driving at when I ask these questions, which naturally lead you to answering your own question:
Is the field even meant to be exposed in the first place? Anything you 'expose' is likely to be used by code that's relatively far removed from you (different team, different time, different context), and once you expose it, you have to continue to support it - any changes you make can't fundamentally change/remove what you exposed. So, perhaps just having a private field with no getters and setters is the best place to start:
If you're sure it makes no sense to expose it, then don't. Just leave them as private fields, the code in this source file can edit them, and other code cannot even assume this field exists - they should know nothing about it.
If you're sure it makes perfect sense to expose it - it is the very point of the class - then make a private field with a public getter (and, if you intend for it to be mutable, a setter), even if you don't see any need to do special stuff in that getter. Java programmers expect to access properties from other source files via getters and setters, and you keep the flexibility to change things later without breaking compatibility.
If you're not sure, then think about YAGNI: is this an API that is going to be exposed so far and wide it'll affect people who cannot easily modify the codebase? Then, sorry, you're going to have to think some more and make a decision. But most likely you're not writing that kind of code, and anybody who might want to access this thing could change the code fairly easily: it'll be you, or a colleague working in the same source tree. In that case, don't think about it too long - err on the side of caution and don't make getters and setters. If someone needs them later, let them make the call: with the benefit of the use case they have by then, they'll make a better-informed decision than you can now, without that benefit.

When and why do we need to divide a class into many classes?

I am a beginner Android developer. Currently I am developing an application, but my class is quite large because there are many UI components (to handle onClick, onProgressBarChanged, etc.).
Most of my components are dynamic, so I have methods to create those components.
For now, I have split some of the methods that initialize UI components into another class.
At this point, I am trying to think of, or search for, a good reason to split my class into several classes.
Advantages: maintainability, testability, reusability
Disadvantage: reduced runtime performance
Are there any advantages or disadvantages that I have missed?
Furthermore, I will divide a class when I find overlapping methods.
I am not sure whether there are other situations in which a class must be divided.
First, if you've never looked into refactoring, then I would strongly encourage you to do so. Martin Fowler has some excellent resources to get you started. But, I'm getting slightly ahead of myself.
To begin with, you split out classes to maintain a clear delineation of responsibilities. You can think of the SOLID principles here - in particular single responsibility: each class does one thing, and one thing very clearly.
If you notice that a method, let alone a class, is doing more than one thing, then that is a good time to stop and refactor - that is, take the code you have, and apply a particular, focused refactoring to it to improve readability and flow, while maintaining the same functionality. You're essentially looking for code smells - parts of the code that are suspect, not following a specific contract or methodology, or are legitimate anti-patterns - which are, themselves, practices that developers strive to avoid.
Programs that deal with UI (especially in Java) tend to be pretty verbose. What you should avoid doing is placing any conditional business logic in the UI layer, for ease of separability, testing and clarity. Make use of the Model-View-Controller pattern to understand and abstract away the necessary separations between the UI (Views), and the actual work that's needed to be done (Controllers), while maintaining some semblance of state (Models).
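As a hedged sketch of that separation (the names and the collaborator are invented): the UI layer only forwards the event, while the conditional logic lives in a plain class that a unit test can drive directly.

    // Controller: plain Java, unit-testable without any UI framework.
    public class SaveController {
        public interface UserRepository {      // hypothetical collaborator
            void save(String name);
        }

        private final UserRepository repository;

        public SaveController(UserRepository repository) {
            this.repository = repository;
        }

        public void onSaveClicked(String name) {
            if (name == null || name.isEmpty()) {
                throw new IllegalArgumentException("name is required");
            }
            repository.save(name);
        }
    }

    // The View (an Activity, say) only wires the event:
    //     saveButton.setOnClickListener(v ->
    //         controller.onSaveClicked(nameField.getText().toString()));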
We use OOP concepts in Android (core Java) application development. Splitting one class into many classes gives a good measure of maintainability, reusability, security, and ease of change during development.
For example: a Util class for database handling, a Network class for internet connections, a Dialog class for the different dialog types, and so on.
This way we can categorize our code and change or reuse it at any time. So it is good practice to follow OOP concepts during development.
Thanks

How to refactor legacy code effectively and efficiently? [duplicate]

Possible Duplicates:
What should I keep in mind in order to refactor huge code base?
When is it good (if ever) to scrap production code and start over?
I am currently working with some legacy source code files. They have quite a few problems because they were written by a database expert who does not know much about Java. For instance,
Fields in classes are public. No getters and setters.
Use raw types, not parameterized types.
Use static unnecessarily.
Super long method names.
Methods need too many parameters.
Repeat Yourself frequently.
I want to modify them so that they are more object-oriented. What are some best practices and effective/efficient approaches?
Read "Working Effectively with Legacy Code" by Michael Feathers. Great book - and obviously it'll be a lot more detailed than answers here. It's got lots of techniques for handling things sensibly.
It looks like you've identified a number of issues, which is a large part of the problem. Many of those sound like they can be fixed relatively easily - it's overall design and architecture which is harder to do, of course.
Are there already unit tests, or will you be adding those too?
Before you start, create a system-level regression test suite for the application. You need this so that you can verify that your changes don't break things.
To do the refactoring, you want to use a combination of a good IDE and a text search tool (e.g. grep). Use the text search tool to find occurrences of the "syndromes" that you want to fix, then use the IDE (and its built-in refactoring capabilities) to fix the instances ... one at a time.
For example, Eclipse allows you to rename a method or class, or generate getters and setters. So you'd cure a 'public' attribute by:
Change the attribute to private.
Generate the getter and setter methods.
Save the file.
Go through all of the Java compilation errors resulting from the fact that the attribute is now private, and change to use the getter or setter as appropriate.
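A before/after sketch of those steps (the class and field are invented):

    // Before: callers write job.name directly.
    //     public class Job { public String name; }

    // After:
    public class Job {
        private String name;                        // step 1: now private

        public String getName() { return name; }   // step 2: generated
        public void setName(String name) { this.name = name; }

        // Steps 3-4: save, then turn each resulting compile error at a
        // call site into job.getName() or job.setName(...) as appropriate.
    }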
This approach will give you the low-hanging fruit. More fundamental design issues are more difficult, and may be impossible to fix without fundamental restructuring of the application. The refactoring capabilities will help you execute such changes, but deciding what to do is ultimately up to you.
Finally, my advice is to not be too ambitious. Go for incremental improvement, and be prepared to draw the line when the code is "good enough". You won't achieve perfection ... not even if you start from a clean slate ... so don't set your expectations high.
Is it just the code that is bad, or does it also hurt the user experience? Refactoring continuously is a good idea, but it should not be a goal unto itself. It should improve the application in terms of user interaction, maintainability, stability, performance, etc.
That is why I am not extremely fond of huge refactoring just to improve the code quality. Instead, refactor the code that you work with.
While working with a legacy system for several years, I have personally found that:
Create for yourself a vision of how you want the code to look when you're done. It should be attainable, and contain a list of technology changes and general architecture changes. It may also be a good idea to make a rough prioritization of which classes are most critical to change. We lacked such a vision a few years ago, and while we refactored a lot, the code quality barely improved.
Now, you should restrict your refactoring to those that make you reach your vision. Don't fall into the trap of doing what appears good at the moment.
Focus on a particular component, and make it better. Then move on to the next. It's tempting to make huge changes that affect the entire system, but in truth you will introduce more problems than you solve.
Write integration regression tests. I.e., a few big tests that test a lot of functionality. It's not optimal, but it's the best you can do. Writing unit tests for every single class in your old system may end up a waste of time because it's not designed to be tested anyway and you want to redesign half of the classes.
Accept that it will take time.
Eclipse should be able to take care of #1 and help you work your way through many of the others.
As for converting poor OO code to good OO code it is amazingly difficult. Often it seems easier to rewrite it from scratch.
I tend to go from the bottom up. As I'm working on some small section I'll recognize a bunch of data that belongs together as a group and I'll make a good object that replaces that code without changing anything else--Very Small Changes with constant tests between each change.
This makes for a mediocre design at best, but I honestly don't know if you can go from non-OO to good OO on a large project without dissecting the original program, understanding it, and using it as a template for a rewrite - and few projects allow this (even though it might be faster, you'll rarely if ever be able to convince management of that fact).
The point, I think, is risk.
The ugly code is just ugly, but it works: it has been tested and bug-fixed. Changing running code introduces risk, so testing is critical.
A conservative approach is to refactor related code only when you have to fix a bug in it anyway.
Maybe the first challenge is to persuade your manager :)
What's the problem with it not having getters and setters? I'd suggest refactoring those only when you need to add non-trivial getters or setters (e.g., with validation).
The rest sounds like you need to identify groups of values and create new types holding them, so instead of passing a String name, String address, int yearOfBirth, String[] accountNames, int[] balances you would pass a Customer around, which would in turn have an Account[].
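A sketch of that refactoring (the Customer and Account shapes follow the answer; the details are invented):

    class Account {
        private final String name;
        private final int balance;

        Account(String name, int balance) {
            this.name = name;
            this.balance = balance;
        }

        String getName() { return name; }
        int getBalance() { return balance; }
    }

    class Customer {
        private final String name;
        private final String address;
        private final int yearOfBirth;
        private final Account[] accounts;

        Customer(String name, String address,
                 int yearOfBirth, Account[] accounts) {
            this.name = name;
            this.address = address;
            this.yearOfBirth = yearOfBirth;
            this.accounts = accounts;
        }

        Account[] getAccounts() { return accounts; }
    }

    // Before: process(String name, String address, int yearOfBirth,
    //                 String[] accountNames, int[] balances)
    // After:  process(Customer customer)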
IDEA Ultimate Edition has a code duplication detector that's very good (it's only missing a 'suggested solution' button!), and there are CPD etc.
I'd suggest that in a large legacy codebase you might waste time refactoring code only to find out it wasn't used anyway. I outlined some steps for removing unused code: http://rickyclarkson.blogspot.com/2009/12/deleting-code-what-first.html
How many of those "issues" are real problems and not just matters of style? Of this list, the only 'real' issue I can see is "Repeat Yourself frequently", and that's more of an ongoing maintenance problem that should be resolved during normal code maintenance when someone's going to be changing the code anyway.
I want to modify them so that they are more object-oriented.
Object-orientation should not be your only goal when refactoring. The question you should ask yourself is: what is the expected ROI (better quality? easier enhancements? better sharing of this code across a team?). ROI is not just words - you should be prepared to measure the return on investment with numbers (even the quality improvements, for example). You should take the expected lifetime of your product into account when estimating the ROI.
You should also ask yourself how much code depends on the code you want to refactor. Refactoring a library could be easy, but could lead to a lot of changes in the source code that depends on that library - a job far larger than refactoring the library itself.
Before touching any code, you should estimate the total work needed to finish refactoring the code and its dependent code, comparing a total rewrite, a partial rewrite, and an internal-only rewrite that leaves the APIs untouched.
With the costs and returns, you could decide if it's worth the effort to refactor your code.

Java getter/setter style question

I have a question about Java style. I've been programming Java for years, but primarily for my own purposes, where I didn't have to worry much about style, and I've just now got a job where I have to use it professionally. I'm asking because I'm about to have people really go over my code for the first time and I want to look like I know what I'm doing. Heh.
I'm developing a library that other people will make use of at my work. The way that other code will use my library is essentially to instantiate the main class and maybe call a method or two in that. They won't have to make use of any of my data structures, or any of the classes I use in the background to get things done. I will probably be the primary person who maintains this library, but other people are going to probably look at the code every once in a while.
So when I wrote this library, I just used the default no modifier access level for most of my fields, and even went so far as to have other classes occasionally read and possibly write from/to those fields directly. Since this is within my package this seemed like an OK way to do things, given that those fields aren't going to be visible from outside of the package, and it seemed to be unnecessary to make things private and provide getters and setters. No one but me is going to be writing code inside my package, this is closed source, etc.
My question is: is this going to look like bad style to other Java programmers? Should I provide getters and setters even when I know exactly what will be getting and setting my fields and I'm not worried about someone else writing something that will break my code?
Even within your closed-source package, encapsulation is a good idea.
Imagine that a bunch of classes within your package are accessing a particular property, and you realize that you need to, say, cache that property, or log all access to it, or switch from an actual stored value to a value you generate on-the-fly. You'd have to change a lot of classes that really shouldn't have to change. You're exposing the internal workings of a class to other classes that shouldn't need to know about those inner workings.
I would adhere to a common style (and in this case provide setters/getters). Why?
it's good practise for when you work with other people or provide libraries for 3rd parties
a lot of Java frameworks assume getter/setter conventions and are tooled to look for these/expose them/interrogate them. If you don't do this, then your Java objects are closed off from these frameworks and libraries.
if you use setters/getters, you can easily refactor what's behind them. Just using the fields directly limits your ability to do this.
It's really tempting to adopt a 'just for me' approach, but a lot of conventions are there since stuff leverages off them, and/or are good practise for a reason. I would try and follow these as much as possible.
I don't think a good language should have ANY level of access except private--I'm not sure I see the benefit.
On the other hand, also be careful with getters and setters in general - they have a lot of pitfalls:
They tend to encourage bad OO design (you generally want to ask your object to do something for you, not act on its attributes)
This bad OO design causes code related to your object to be spread around different objects and often leads to duplication.
setters make your object mutable (something that is always nice to avoid if you can)
Setters and getters expose your internal structures (if you have a getter for an int, it's difficult to later change it to a double - you have to touch every place it was accessed and make sure it can handle a double without overflowing or causing an error; if you had just asked your object to manipulate the value in the first place, the only changes would be internal to your object).
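A small sketch of that last pitfall (the names are invented): ask the object to manipulate the value, and the internal type stays free to change.

    public class Meter {
        private int reading;    // could become a double later without
                                // touching a single caller

        public void add(int amount) {
            reading += amount;  // the behavior lives with the data
        }

        public boolean exceeds(int limit) {
            return reading > limit;   // callers ask questions; they never
        }                             // see the raw field or its type
    }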
Most Java developers will prefer to see getters and setters.
No one may be developing code in your package, but others are consuming it. By exposing an explicitly public interface, you can guarantee that external consumers use your interface as you expect.
If you expose a class' internal implementation publicly:
It isn't possible to prevent consumers from using the class inappropriately
You lose control over entry/exit points: any public field may be mutated at any time
It increases coupling between the internal implementation and the external consumers
Maintaining getters and setters may take a little more time, but offers a lot more safety plus:
You can refactor your code any time, as drastically as you want, so long as you don't break your public API (getters, setters, and public methods)
Unit testing well-encapsulated classes is easier - you test the public interface and that's it (just your inputs/outputs to the "black box")
Inheritance, composition, and interface designs are all going to make more sense and be easier to design with decoupled classes
Decide you need to add some validation before a value is set? One good place is within a setter (see the sketch below).
It's up to you to decide if the benefits are worth the added time.
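For the validation point above, a minimal sketch (the class and rule are invented):

    public class Person {
        private int age;

        public void setAge(int age) {
            // The validation has exactly one home; no caller can bypass it.
            if (age < 0 || age > 150) {
                throw new IllegalArgumentException("age out of range: " + age);
            }
            this.age = age;
        }

        public int getAge() { return age; }
    }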
I wouldn't care much about the style per se (or any kind of dogma for that matter), but rather the convenience in maintainability that comes with a set of getter/setter methods. If you (or someone else) later needed to change the behavior associated with a change of one of those attributes (log the changes, make it thread-safe, sanitize input, etc.), but have already directly modified them in lots of other places in your code, you will have wished you used getter and setter methods instead.
I would be very loath to go into a code review with anything but private fields, with the possible exception of a protected field for the benefit of a subclass. It won't make you look good.
Sure, I think from the vantage point of a Java expert, you can justify the deviation from style, but since this is your first professional job using Java, you aren't really in that position.
So to answer your question directly: "Is this going to look like bad style?" Yes, it will.
Was your decision reasonable? Only if you are really confident that this code won't go anywhere. In a typical shop, there may be chances to reuse code, factor things out into utility classes, etc. This code won't be a candidate without significant changes. Some of those changes can be automated with IDEs, and are basically low risk, but if your library is at the point where it is stable, tested and used in production, encapsulating that later will be regarded as a bigger risk than was needed.
Since you're the only one writing code in your closed-source package/library, I don't think you should worry too much about style - just do what works best for you.
However, for me, I try to avoid directly accessing fields because it can make the code more difficult to maintain and read - even if I'm the sole maintainer.
Style is a matter of convention. There is no right answer as long as it is consistent.
I'm not a fan of camelCase, but in the Java world camelCase rules supreme, and all member variables should be private.
    getData();
    setData(int i);
Follow the official Java code conventions from Sun (cough, Oracle) and you should be fine.
http://java.sun.com/docs/codeconv/
To be brief: you said, "I'm asking because I'm about to have people really go over my code for the first time and I want to look like I know what I'm doing". So change your code, because as it stands it makes it look like you do not know what you are doing.
The fact that you have raised it shows that you are aware that it will probably look bad (this is a good thing), and it does. As has been mentioned, you are breaking fundamentals of OO design for expediency. This simply results in fragile, and typically unmaintainable code.
Even though it's painful, coding up properties with getters and setters is a big win if you're ever going to use your objects in a context like JSP (the Expression Language in particular), OGNL, or another template language. If your objects follow the good old Bean conventions, then a whole lot of things will "just work" later on.
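A sketch of those Bean conventions (the property is invented): a public no-arg constructor plus getX/setX pairs is what EL and OGNL resolve reflectively.

    public class User {
        private String email;

        public User() {}                 // no-arg constructor

        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    // In a JSP, ${user.email} finds getEmail() purely by convention.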
I find getters and setters the better way to program, and it's not only a matter of coding convention. No one knows the future, so we can store a simple string phone number today, but tomorrow we might have to put a "-" between the area code and the number; in that case, if we have a getPhoneNumber() method defined, we can do such beautifications very easily.
So I would argue we should always follow this style of coding for better extensibility.
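A sketch of that beautification (the formatting rule is invented): since callers already go through the getter, the change stays local.

    public class Contact {
        private String phoneNumber;   // stored as bare digits, e.g. "5551234567"

        public void setPhoneNumber(String phoneNumber) {
            this.phoneNumber = phoneNumber;
        }

        public String getPhoneNumber() {
            // Added later: put "-" between the area code and the number.
            if (phoneNumber != null && phoneNumber.length() == 10) {
                return phoneNumber.substring(0, 3) + "-" + phoneNumber.substring(3);
            }
            return phoneNumber;
        }
    }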
Breaking encapsulation is a bad idea. All fields should be private. Otherwise the class can not itself ensure that its own invariants are kept, because some other class may accidentally modify the fields in a wrong way.
Having getters and setters for all fields is also a bad idea. A field with getter and setter is almost the same as a public field - it exposes the implementation details of the class and increases coupling. Code using those getters and setters easily violates OO principles and the code becomes procedural.
The code should be written so that it follows Tell Don't Ask. You can practice it for example by doing an Object Calisthenics exercise.
Sometimes I use public final properties without getters/setters for short-lived objects which just carry some data (and will never do anything else, by design).
While on that topic, I'd really love it if Java had implicit getters and setters created via a property keyword...
Using encapsulation is a good idea even for closed source, as JacobM already pointed out. But if your code acts as a library for another application, you cannot stop that application from accessing the classes that are meant for internal use. In other words, you cannot(?) enforce the restriction that a public class X may only be used by classes within your own application.
This is where I like the Eclipse plugin architecture, where you can declare which packages in your plugin dependent plugins may access at runtime. JSR 277 aimed at bringing this kind of modularity to the JDK, but it is dead now. Read more about it here:
http://neilbartlett.name/blog/2008/12/08/hope-fear-and-project-jigsaw/
Now the only option seems to be OSGi.
While I am well aware of the common pressure to use getters and setters everywhere regardless of the case, and the code review process leaves me no choice, I am still not convinced of the universal usefulness of this idea.
The reason: for data-carrying classes, in over ten years of development there has never been a single case where I would write anything other than "set the variable" in the setter and "read the variable" in the getter, while lots of time has been spent generating, understanding, and maintaining this cargo-cult code that seems to make no sense.
A data class is a structure or record, not a true class. It does not do anything itself; other classes make changes to it. There should be no functionality there at all, let alone functionality in the setters or getters. Java probably needs a separate keyword for a multi-field data record that has no methods.
On the other hand, the practice seems to have gone so far by now that it probably makes sense to put getters and setters in from the beginning, even on day one in a new team. It is important not to conflict with the team.

Using PowerMock or How much do you let your tests affect your design? [closed]

I've been a fan of EasyMock for many years now, and thanks to SO I came across references to PowerMock and its ability to mock constructors and static methods, both of which cause problems when retrofitting tests to a legacy codebase.
Obviously one of the huge benefits of unit testing (and TDD) is the way it leads to (forces?) a much cleaner design, and it seems to me that the introduction of PowerMock may detract from that. I would see this mostly manifesting itself as:
Going back to initialising collaborators rather than injecting them
Using statics rather than making the method be owned by a collaborator
In addition to this, something doesn't quite sit right with me about my code being bytecode manipulated for the test. I can't really give a concrete reason for this, just that it makes me feel a little uneasy as it's just for the test and not for production.
At my current gig we're really pushing for the unit tests as a way for people to improve their coding practices, and it feels like introducing PowerMock into the equation may let people skip that step somewhat, so I'm loath to start using it. Having said that, I can really see how making use of it can cut down on the amount of refactoring that needs to be done to start testing a class.
I guess my question is, what are peoples experiences of using PowerMock (or any other similar library) for these features, would you make use of them and how much overall do you want your tests influencing your design?
I have to strongly disagree with this question.
There is no justification for a mocking tool that limits design choices. It's not just static methods that are ruled out by EasyMock, EasyMock Class Extension, jMock, Mockito, and others. These tools also prevent you from declaring classes and methods final, and that alone is a very bad thing. (If you need one authoritative source that defends the use of final for classes and methods, see the "Effective Java" book, or watch this presentation from the author.)
And "initialising collaborators rather than injecting them" often is the best design, in my experience. If you decompose a class that solves some complex problem by creating helper classes that are instantiated from that class, you can take advantage of the ability to safely pass specific data to those child objects, while at the same time hiding them from client code (which provided the full data used in the high-level operation). Exposing such helper classes in the public API violates the principle of information hiding, breaking encapsulation and increasing the complexity of client code.
The abuse of DI leads to stateless objects which really should be stateful because they will almost always operate on data that is specific to the business operation.
This is not only true for non-public helper classes, but also for public "business service" classes called from UI/presentation objects. Such service classes are usually internal code (to a single business application) that is inherently not reusable and have only a few clients (often only one) because such code is by nature domain/use-case specific.
In such a case (a very common one, by the way) it makes much more sense to have the UI class directly instantiate the business service class, passing data provided by the user through a constructor.
Being able to easily write unit tests for code like this is precisely what led me to create the JMockit toolkit. I wasn't thinking about legacy code, but about simplicity and economy of design. The results I achieved so far convinced me that testability really is a function of two variables: the maintainability of production code, and the limitations of the mocking tool used to test that code. So, if you remove all limitations from the mocking tool, what do you get?
I totally agree that Testability is not an end goal, this has been one of the things I have realized when developing PowerMock. I also agree that writing unit tests is one way of getting good design. Using PowerMock should probably be an exception rather than a rule, at least features such as expectations on constructors and static mocking.
The main motivation we have for using PowerMock is when using third party code that prevents your code from being testable. A good alternative is using an anti-corruption-layer that abstracts the third party code and makes it testable. However, sometimes I think the code is cleaner just using the standard APIs. A good example of this is the Java ME API. This is full of static method calls that prevent unit testing.
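A sketch of such an anti-corruption layer (the static third-party API is a stand-in invented for illustration):

    // Stand-in for a third-party class full of statics (e.g. a Java ME API).
    final class DeviceInfo {
        private DeviceInfo() {}
        static int getBatteryLevel() { return 87; }
    }

    // The anti-corruption layer: an interface we own...
    interface Battery {
        int level();
    }

    // ...with one production implementation delegating to the static call.
    class RealBattery implements Battery {
        @Override
        public int level() {
            return DeviceInfo.getBatteryLevel();
        }
    }

    // Production code depends on Battery; a test simply passes a fake:
    //     Battery fake = () -> 5;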
The same problem can occur with legacy code. Some organizations are extremely afraid of modifying their existing code and in this case PowerMock can be used to introduce unit testing in the parts you are writing at the moment, without forcing big refactorings.
Our problem is specifying a set of best-practice rules that a rookie developer can follow for when to use PowerMock and when not to. Creating good design is really hard, and since PowerMock gives you more options, maybe it just gets harder for a beginner? I think a more experienced developer appreciates having more choices.
(founder of PowerMock)
I think you're right - if you need PowerMock, you probably have smelly code. Get rid of those statics.
However, I think you're wrong about bytecode instrumentation. I mock out concrete classes all the time using mockito - it keeps me from having to write an interface for every. single. class. That is much cleaner.
Only you can prevent code smells.
We've had many of the same questions arise in the .NET arena, regarding Typemock Isolator.
See This blog post
I think that when people start to realize that Testability is not an end goal, and that design is learned in other ways, then we will stop letting our fear dictate which tools we use, or not use a more advanced technology when and if it becomes relevant.
Also, it makes sense to be able to choose the way you design, based on the application needs. don't let a tool tell you how to design - it will leave you no choice.
(I work at Typemock, but was once against it)
I think you're right to be concerned. Refactoring legacy code to be testable isn't that hard in most cases once you've learned how.
Better to go a bit slower and have a supportive environment for learning than take a short cut and learn bad habits.
(And I just read this and feel like it is relevant.)
The layer of abstraction that Powermock provides over reflection seems attractive, but it makes the tests brittle. More specifically:
Reflection relies on the string names of methods/fields etc. Any renaming breaks the tests, and then every test that reaches into the code through reflection has to be fixed by hand. With a test that doesn't require reflection, the IDE would have refactored all the renamings automatically.
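A runnable sketch of that brittleness (the class and field are invented): the string survives an IDE rename of the field, so the failure only shows up at run time.

    import java.lang.reflect.Field;

    public class ReflectionDemo {
        private int retryCount = 3;

        public static void main(String[] args) throws Exception {
            ReflectionDemo demo = new ReflectionDemo();
            // Renaming retryCount in the IDE updates direct accesses,
            // but NOT this string - the lookup then dies with
            // NoSuchFieldException at run time.
            Field f = ReflectionDemo.class.getDeclaredField("retryCount");
            f.setAccessible(true);
            System.out.println(f.get(demo));   // prints 3
        }
    }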
PowerMock's features for stubbing new and static method calls etc. make you depend on implementation details of the function. Tests should ideally test functionality, e.g.:
    import java.util.*;

    class Refactored {
        // old implementation
        void functionOldImplementation() {
            List<String> l = new ArrayList<>();
        }

        // old implementation changed to new
        void functionNewImplementation() {
            List<String> l = new LinkedList<>();
        }
    }
Mocking the new ArrayList() call would break the tests after the above refactoring. Tests are needed most for catching regressions, and tests written this way fail at exactly that point.
I recently came across this article; it addresses most of the points raised in the question, so I thought I'd share it.
Some key points from the article:
It took 1.5 years after Java 7's introduction to make PowerMock + Javassist compatible with it. Here is a note from the PowerMock change log:
Change log 1.5 (2012-12-04)
---------------------------
Upgraded to Javassist 3.17.1-GA, this means that PowerMock works in Java 7!
The PowerMock site says:
"Please note that PowerMock is mainly intended for people with expert
knowledge in unit testing. Putting it in the hands of junior
developers may cause more harm than good".
