Yesterday in an interview I was asked:
How does encapsulation work internally?
I was really confused, because to me encapsulation is simply
"The state of a class should only be accessed through its public interface."
but when it came to the internal working I ran out of words. So if anyone can explain that, it would be very helpful.
I agree with commenters that asking for clarification is a good idea - good questions can be even better than good answers, and they allow you to ensure that you are actually answering what the interviewer thinks they are asking.
In this case, I assume that they wanted you to explain how Java ensures that programmers do not violate encapsulation. This involves
built-in syntax and semantics for marking fields / methods as public, private, protected, or package-private (the default)
compiler checks to ensure that these access levels are not violated (see the sketch after this list)
(external) tools available to detect code smells relating to encapsulation, such as calls to overridable methods from within a constructor.
(somewhat more far-fetched) no direct access to program memory, making, for example, reinterpret-casts such as found in C / C++ unavailable in Java; this also preserves encapsulation.
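For the first two points, a minimal sketch of what the compiler enforces (class and field names are invented for illustration):

public class Account {

    private long balanceInCents; // state hidden behind the public interface

    public void deposit(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative deposit");
        balanceInCents += cents;
    }

    public long getBalanceInCents() {
        return balanceInCents;
    }
}

class Client {
    void tamper(Account a) {
        // a.balanceInCents = -1;  // rejected by the compiler: balanceInCents has private access in Account
        a.deposit(500);            // only the public interface is available
    }
}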
You could have ensured that this is what they wanted by asking "are you referring to how Java ensures that programmers do not violate encapsulation, that is, that they do not access the state of objects except through their public interface?"
Additional answers come to mind:
use of meaningful comments, easily accessible via JavaDoc both in-IDE and as browsable documentation, that allow programmers to understand how classes are meant to be used and composed.
strong coding conventions that enforce encapsulation, such as setting fields to the most restrictive access possible, and only making public those parts that should actually be public.
I read the following article about reflection in Java:
https://community.oracle.com/docs/DOC-983192
In it, the author describes how to change the values of an object's fields through reflection. He explains how to do it even if the field has private access.
A while back, I read Joshua Bloch's book "Effective Java". There, he says that, in order to prevent unsafe access to an object's fields, methods, etc., we should whenever possible give fields and methods the most restrictive modifier (i.e. private whenever possible, public or protected only if it is part of the exposed API).
My question is the following:
Why bother designing your classes not to expose sensitive information if it can be accessed through reflection anyway?
(Actually, I am asking for the piece of information that I am missing to understand this topic)
For one thing, 'private' is not meant as a security feature. See this similar question. Java has a security system, which is what you should use if you really want that kind of protection.
'private' in OOP is a signal of intent and is part of the contract of your class. By marking a field as 'private', you are stating that if somebody sneaks in and modifies stuff with reflection or something, then all guarantees you make in the rest of your class are no longer valid.
It's kind of like the fine print in the warranty of your TV or other devices - if you start digging around inside the wiring (the private fields, so to speak), then the warranty is void and Samsung or whoever it is won't cover the cost of repairing whatever you may screw up while you're in there.
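To make this concrete, here is a minimal sketch (class and field names invented) of the kind of reflective access the linked article describes; it only works if no SecurityManager or module restrictions forbid it:

import java.lang.reflect.Field;

class Counter {
    private int count = 0;   // 'private' states intent; it is not a security barrier

    public int getCount() { return count; }
}

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        Field f = Counter.class.getDeclaredField("count");
        f.setAccessible(true);            // may fail under a security manager or strict module rules
        f.setInt(c, 42);                  // writes the private field, bypassing the public interface
        System.out.println(c.getCount()); // prints 42 - the class's own guarantees no longer hold
    }
}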
I need your help in understanding a question.
Which of these cannot be treated as a friend in OOP?
Function
Class
Object
Operator function
I think the answer should be "Operator function", but I am not sure. Please can anyone explain this to me?
Thanks in advance.
Object.
An object is instantiated, the others are not.
Think about what 'friend' means. It's like a schema: you're defining access, but it's all done at compile time... an object is a run-time thing, so friendship between objects is meaningless and unenforceable. Once your code is compiled it's all reduced to pointers and references, and no checks are done.
Also, to further clarify: you couldn't create friendship relationships between objects and other objects, or between objects and anything else, at compile/code time, as you don't know what objects will exist and you can't reference them. Such behaviour, or similar behaviour anyway, COULD be implemented by a language, but the friendships would have to be added at run time, and while this would be quite an interesting feature of a high-level language, it would be quite a different feature from friendship as we know it.
Your question only makes sense for C++.
friend is not a contrast to OOP. friend helps OOP by allowing you to expose fewer member variables and member functions. friend allows you to expose your private members to one particular external component. Without friend, you would have to make the members public and expose them to the whole world.
Objects cannot be made friends. friend is a mechanism to control member access and hence, like the public, protected and private specifiers, a compile-time issue. Objects, in contrast, exist at run-time[*].
An "operator function" (the correct word would be "overloaded operator") is not that much different from a normal function, really. You can mostly consider overloaded operators as functions with funny names. As far as friend is concerned, there is no difference whether you call your function Add or +, for example.
[*] I realise that this is a slight oversimplification when you consider template metaprogramming or constexpr.
This might be a duplicate question, but I haven't found the answer yet.
Link 1
Encapsulation:
Encapsulation is the technique of making the fields in a class private
and providing access to the fields via public methods. If a field is
declared private, it cannot be accessed by anyone outside the class,
thereby hiding the fields within the class. For this reason, encapsulation is also referred to as data hiding
Link 2
Encapsulation:
"It […] refers to building a capsule, in the case a conceptual barrier, around some collection of things." — [Wirfs-Brock et al, 1990]
"As a process, encapsulation means the act of enclosing one or more items within a […] container. Encapsulation, as an entity, refers to a package or an enclosure that holds (contains, encloses) one or more items."
"If encapsulation was 'the same thing as information hiding,' then one might make the argument that 'everything that was encapsulated was also hidden.' This is not obviously not true."
Which one should I go with? Or have I misunderstood the definition?
The main point is that it doesn't really matter. Anyone can define a term in a slightly different way, and usually various authors adapt the meaning to the various contexts within which they use those terms.
You will not gain any enlightenment from trying to figure out which one is "right" and which one is "wrong". Quotes taken out of context are especially uninformative.
The important thing is to understand the underlying ideas without reference to the vocabulary items used to refer to them.
There is disagreement as to whether the definition of encapsulation should include data hiding, so this is going to be a strictly opinion-based answer. I believe that the latter definitions are more correct, since data hiding is not unique to OO programming. It is a separate feature that does not preclude encapsulation, which is the binding of functions/methods to a set of variables. In fact, data hiding was the hallmark of early modular programming in languages such as C and Pascal.
The first definition is very Java-centric. The second one is more generic. Both are correct. As to which one to go with, that's a subjective question. Since both are correct, I'd suggest going with the one you prefer...
Encapsulation is more than just data-hiding. It is decoupling internal data representation and implementation from the public interface. Thanks to encapsulation, as long as you don't break the interface contract, you can change internal implementation without anyone outside ever knowing. So I'd say encapsulation = data-hiding + implementation-hiding.
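A small sketch of that distinction (class and method names invented): the internal representation changes, but callers that stick to the public interface never notice.

// Version 1: Cartesian representation inside
public class Point {

    private double x, y;

    public Point(double x, double y) { this.x = x; this.y = y; }

    public double distanceToOrigin() { return Math.hypot(x, y); }
}

// Version 2: same public interface, but a polar representation inside.
// Client code that only calls the constructor and distanceToOrigin() is unaffected.
/*
public class Point {

    private double r, theta;

    public Point(double x, double y) { this.r = Math.hypot(x, y); this.theta = Math.atan2(y, x); }

    public double distanceToOrigin() { return r; }
}
*/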
I was wondering why Java has been designed without the friend directive that is available in C++ to allow finer control over which methods and instance variables are available from outside the package in which a class has been defined.
I don't see any practical reason nor any specific drawback, it seems just a design issue but something that wouldn't create any problem if added to the language.
Here are a few reasons off the top of my head:
friend is not required. It is convenient, but not required
friend supports bad design. If one class requires friend access to another, you're doing it wrong. (see above, convenient, not required).
friend breaks encapsulation. Basically, all my privates are belong to me, and that guy over there (my friend).
In general I think it was because of the added cognitive complexity and the low number of cases in which it creates an improvement.
I would say that the extremely huge number of lines of java in production at this moment can attest that the friend keyword is not really a big loss :).
Please see #dwb's answer for some more specific reasons.
Only a very naive and inexperienced programmer would advocate against friends. Of course it can be misused, but so can public data, yet that capability is provided.
Contrary to popular opinion, there are many cases, in particular for infrastructure capabilities, where friend access leads to BETTER design, not worse design. Encapsulation is often violated when a method is FORCED to be made public when it really shouldn't be, but we are left with no choice because Java does not support friends.
In addition to the aforementioned package visibility, Java also offers inner and anonymous classes which are not only friends by default, but also automatically have a reference to the containing class. Since creating such helper classes is probably the only reasonable way to use friend in C++, Java doesn't need it since it has another mechanism for that. Iterators are a very good example of this.
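A hedged sketch of that pattern (container and names invented): the anonymous inner class reads the outer class's private fields directly, much as a C++ friend would.

import java.util.Iterator;
import java.util.NoSuchElementException;

public class IntBox implements Iterable<Integer> {

    private int[] values;   // private state of the outer class
    private int size;

    public IntBox(int capacity) { values = new int[capacity]; }

    public void add(int v) { values[size++] = v; }

    @Override
    public Iterator<Integer> iterator() {
        // The inner class can use 'values' and 'size' directly,
        // without IntBox exposing them publicly.
        return new Iterator<Integer>() {
            private int cursor = 0;

            @Override
            public boolean hasNext() { return cursor < size; }

            @Override
            public Integer next() {
                if (!hasNext()) throw new NoSuchElementException();
                return values[cursor++];
            }
        };
    }
}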
Completely agree with spaceghost's statement in his answer
Contrary to popular opinion, there are many cases, in particular for infrastructure capabilities, where friend access leads to BETTER design, not worse design.
My example is simple - if a class A has to provide a special "friend" interface to class B in Java, we have to place them in the same package. No exceptions. In that case, if A is a friend of B and B is a friend of C, A has to be a friend of C, which isn't always true. This "friendship transitivity" breaks encapsulation more than any problem that C++ friendship could lead to.
Why not simply think of it as Java requiring friend classes to be co-located? The package-private visibility allows everyone in the same package to access those members. So you're not only limited to explicitly declared friends; you allow any (existing or future) friend to alter some members that are specifically designed for this purpose (but not your private stuff). You're still able to fully rely on encapsulation.
Just to add to the other answers:
There is the default package visibility in Java. So, you could call all classes in the same package neighbors. In that case you have explicit control of what you show to the neighbors - just members with package visibility.
So, it's not really a friend but can be similar. And yes, this too leads to bad design...
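A minimal sketch of that "neighbor" access (package and class names invented):

// file: engine/Engine.java
package engine;

public class Engine {
    int rpm;                 // package-private: visible to classes in the same package only
    private int temperature; // private: visible only inside Engine

    public int getRpm() { return rpm; }
}

// file: engine/Dashboard.java
package engine;

class Dashboard {
    void show(Engine e) {
        System.out.println(e.rpm);            // allowed: same-package ("neighbor") access
        // System.out.println(e.temperature); // rejected by the compiler: temperature is private
    }
}

// Classes in any other package see only Engine's public interface, e.g. getRpm().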
In my opinion some kind of friend feature (not necessarily very similar to C++'s) would be very helpful in some situations in Java. Currently we have package private/default access hacks to allow collaboration between tightly coupled classes in the same package (String and StringBuffer for instance), but this opens the private implementation interface up to the whole package. Between packages we have evil reflection hacks which causes a whole host of problems.
There is a bit of an additional complication in doing this in Java. C++ ignores access restrictions whilst resolving function overloads (and similar) - if a program compiles, #define private public shouldn't do anything. Java (mostly) discards non-accessible members. If friendship needed to be taken into account, then the resolution would be more complicated and less obvious.
The open-closed principle states that "Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification".
However, Joshua Bloch in his famous book "Effective Java" gives the following advice: "Design and document for inheritance, or else prohibit it", and encourages programmers to use the "final" modifier to prohibit subclassing.
I think these two principles clearly contradict each other (am I wrong?). Which principle do you follow when writing your code, and why? Do you leave your classes open, disallow inheritance on some of them (which ones?), or use the final modifier whenever possible?
Frankly, I think the open/closed principle is more an anachronism than not. It stems from the 80s and 90s, when OO frameworks were built on the principle that everything must inherit from something else and that everything should be subclassable.
This was most typified in UI frameworks of the era like MFC and Java Swing. In Swing, you have ridiculous inheritance where (iirc) button extends checkbox (or the other way around), giving one of them behaviour that isn't used (I think it's the setDisabled() call on checkbox). Why do they share an ancestry? No reason other than, well, they had some methods in common.
These days composition is favoured over inheritance. Whereas Java allowed inheritance by default, .Net took the (more modern) approach of disallowing it by default, which I think is more correct (and more consistent with Josh Bloch's principles).
DI/IoC have also further made the case for composition.
Josh Bloch also points out that inheritance breaks encapsulation and gives some good examples of why. It's also been demonstrated that changing the behaviour of Java collections is more consistent if done by delegation rather than extending the classes.
Personally, I largely view inheritance as little more than an implementation detail these days.
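A frequently cited sketch of why that is (the class name here is invented; the shape of the example follows Bloch's argument): a subclass that extends a collection depends on a detail of the superclass's implementation, which delegation to a wrapped collection would avoid.

import java.util.Collection;
import java.util.HashSet;
import java.util.List;

public class InstrumentedHashSet<E> extends HashSet<E> {

    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        // On current JDKs, HashSet inherits addAll from AbstractCollection,
        // which calls add() for each element - so every element is counted twice.
        return super.addAll(c);
    }

    public int getAddCount() { return addCount; }

    public static void main(String[] args) {
        InstrumentedHashSet<String> s = new InstrumentedHashSet<>();
        s.addAll(List.of("a", "b", "c"));
        System.out.println(s.getAddCount()); // prints 6, not 3
    }
}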
I don't think the two statements contradict each other. A type can be open for extension and still be closed for inheritance.
One way to do this is to employ dependency injection. Instead of creating instances of its own helper types, a type can have these supplied upon creation. This allows you to change the parts (i.e. open for extension) of the type without changing the type itself (i.e. close for modification).
In open-closed principle (open for extension, closed for modification) you can still use the final modifier. Here is one example:
public final class ClosedClass {

    private IMyExtension myExtension;

    public ClosedClass(IMyExtension myExtension) {
        this.myExtension = myExtension;
    }

    // methods that use the IMyExtension object
}

public interface IMyExtension {
    public void doStuff();
}
The ClosedClass is closed for modification inside the class, but open for extension through another one. In this case it can be of anything that implements the IMyExtension interface. This trick is a variation of dependency injection since we're feeding the closed class with another, in this case through the constructor. Since the extension is an interface it can't be final but its implementing class can be.
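A usage sketch (the implementing class is invented) showing the extension point in action:

public class LoggingExtension implements IMyExtension {

    @Override
    public void doStuff() {
        System.out.println("doing stuff");
    }

    public static void main(String[] args) {
        // Behaviour is extended by supplying a different IMyExtension,
        // while ClosedClass itself stays final and unmodified.
        ClosedClass closed = new ClosedClass(new LoggingExtension());
    }
}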
Using final on classes to close them in Java is similar to using sealed in C#. There are similar discussions about it on the .NET side.
I respect Joshua Bloch a great deal, and I consider Effective Java to pretty much be the Java bible. But I think that automatically defaulting to private access is often a mistake. I tend to make things protected by default so that they can at least be accessed by extending the class. This mostly grew out of a need to unit test components, but I also find it handy for overriding the default behavior of classes. I find it very annoying when I'm working in my own company's codebase and end up having to copy & modify the source because the author chose to "hide" everything. If it's at all in my power, I lobby to have the access changed to protected to avoid the duplication, which is far worse IMHO.
Also keep in mind that Bloch's background is in designing very public bedrock API libraries; the bar for getting such code "correct" must be set very high, so chances are it's not really the same situation as most code you'll be writing. Important libraries such as the JRE itself tend to be more restrictive in order to ensure that the language is not abused. See all the deprecated APIs in the JRE? It's almost impossible to change or remove them. Your codebase is probably not set in stone, so you do have the opportunity to fix things if it turns out you made a mistake initially.
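A small sketch (class names invented) of the protected-by-default style argued for above: a test or subclass can override one hook without copying the source.

public class ReportGenerator {

    public String generate() {
        return "Report generated at " + currentTime();
    }

    // protected rather than private, so a subclass (e.g. in a test) can override it
    protected long currentTime() {
        return System.currentTimeMillis();
    }
}

// In a test, the hook is replaced without touching ReportGenerator's source:
class FixedTimeReportGenerator extends ReportGenerator {
    @Override
    protected long currentTime() {
        return 0L; // deterministic value for assertions
    }
}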
Nowadays I use the final modifier by default, almost reflexively as part of the boilerplate. It makes things easier to reason about, when you know that a given method will always function as seen in the code you're looking at right now.
Of course, sometimes there are situations where a class hierarchy is exactly what you want, and it would be silly not to use one then. But be scared of hierarchies of more than two levels, or ones where non-abstract classes are further subclassed. A class should be either abstract or final.
Most of the time, using composition is the way to go. Put all the common machinery into one class, put the different cases into different classes, then compose instances to have a working whole.
You can call this "dependency injection", or "strategy pattern" or "visitor pattern" or whatever, but what it boils down to is using composition instead of inheritance to avoid repetition.
The two statements
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
and
Design and document for inheritance, or else prohibit it.
are not in direct contradiction with one another. You can follow the open-closed principle as long as you design and document for it (as per Bloch's advice).
I don't think that Bloch states that you should prefer to prohibit inheritance by using the final modifier, just that you should explicitly choose to allow or disallow inheritance in each class you create. His advice is that you should think about it and decide for yourself, instead of just accepting the default behavior of the compiler.
I don't think that the Open/closed principle as originally presented allows the interpretation that final classes can be extended through injection of dependencies.
In my understanding, the principle is all about not allowing direct changes to code that has been put into production, and the way to achieve that while still permitting modifications to functionality is to use implementation inheritance.
As pointed out in the first answer, this has historical roots. Decades ago, inheritance was in favor, developer testing was unheard of, and recompilation of the codebase often took too long.
Also, consider that in C++ the implementation details of a class (in particular, private fields) were commonly exposed in the ".h" header file, so if a programmer needed to change it, all clients would require recompilation. Notice this isn't the case with modern languages like Java or C#. Besides, I don't think developers back then could count on sophisticated IDEs capable of performing on-the-fly dependency analysis, avoiding the need for frequent full rebuilds.
In my own experience, I prefer to do the exact opposite: "classes should be closed for extension (final) by default, but open for modification". Think about it: today we favor practices like version control (makes it easy to recover/compare previous versions of a class), refactoring (which encourages us to modify code to improve design, or as a prelude to introducing new features), and developer testing, which provides a safety net when modifying existing code.