The question about access to protected members in Java has already been asked and answered many times, for example:
Java: protected access across packages
But I can't understand why it is implemented this way; see this explanation from The Java Programming Language (4th ed.):
"The reasoning behind the restriction is this: Each subclass inherits the contract of the superclass and expands that contract in some way. Suppose that one subclass, as part of its expanded contract, places constraints on the values of protected members of the superclass. If a different subclass could access the protected members of objects of the first subclass then it could manipulate them in a way that would break the first subclass's contract and this should not be permissible."
OK, that's clear, but consider this inheritance structure (extract from some code):
package package1;
public class A {
    protected int x;
}

package package2;
public class B extends A {
    public static void main(String[] args) {
        C subclass = new C();
        subclass.x = 7; // here any constraints can be broken - ??
    }
}

class C extends B {
    // class which places constraints on the value of protected member x
    ...
}
Here subclass.x = 7 is a valid statement which can still break C's contract.
What am I missing?
Edited (added): Maybe I should not apply the cited logic in this situation? If we were dealing with only one package, then no restrictions would exist at all. So maybe a direct inheritance chain is treated in a simplified way, meaning that the superclass must know what it is doing...
It's ultimately all about following contracts, as stated in your posted quote. If you're really worried that someone won't read the contract, then there's a defensive programming solution to all this that introduces validation on modification.
By this I mean that the code you posted can break contract; this, however, couldn't:
public class A {
    private int x;

    protected final void setX(int x) throws IllegalArgumentException {
        if (x < 0)
            throw new IllegalArgumentException("x cannot be negative");
        subValidateX(x);
        this.x = x;
    }

    /**
     * Subclasses that wish to provide extra validation should override this method
     */
    protected void subValidateX(int x) {
        // Intentional no-op
    }
}
Here, I've done three major things:
I made x private so it can only be assigned from within A (excluding things like reflection, of course),
I made the setter final which prevents subclasses from overriding it and removing my validation, and
I added a protected method that subclasses can override to provide extra validation on top of mine. Subclasses can thereby narrow the requirements on x, but never widen them to include things like negative integers, since my validation has already run by the time theirs is called.
There are lots of good resources for how to design for inheritance in Java, especially when it comes to super-defensive protect-the-contract API programming like my example above. I'd recommend looking them up on your favorite search engine.
Ultimately, though, the developer writing the subclass needs to be responsible enough to read documentation, especially when you get into interface implementation.
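The pattern above can be sketched end to end. This is a minimal, self-contained version of the idea, not the poster's exact code: Base mirrors the A class from the answer, and EvenOnly is a hypothetical subclass that narrows the contract through the validation hook.

```java
// Sketch of the validate-on-modification pattern described above.
// Base mirrors the A class from the answer; EvenOnly is hypothetical.
class Base {
    private int x;

    protected final void setX(int x) {
        if (x < 0)
            throw new IllegalArgumentException("x cannot be negative");
        subValidateX(x); // give subclasses a chance to narrow the contract
        this.x = x;
    }

    public int getX() { return x; }

    /** Subclasses override this to add validation; the final setter still runs first. */
    protected void subValidateX(int x) {
        // Intentional no-op
    }
}

class EvenOnly extends Base {
    @Override
    protected void subValidateX(int x) {
        if (x % 2 != 0)
            throw new IllegalArgumentException("x must be even");
    }
}
```

Because setX is final, EvenOnly cannot bypass the base validation; it can only add to it.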
Subclasses are implicitly "friends" of their parent, to borrow the C++ term. So since C inherits from B, it is natural that B can see C's x attribute.
Since C extends B, having
C c = new C();
c.x = 1;
is, with respect to your issue, exactly the same as
B b = new C();
b.x = 1;
The Java compiler doesn't consider the runtime type of the objects referred to by b and c in the above code; all it sees are the declared types, which are B and C, respectively. Now, since my second example obviously must work (the code in class B is accessing its own property, after all), it follows that the first example must work as well; otherwise it would mean that Java allows you to do less on a more specific type, which is a paradox.
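The equivalence can be shown in a compilable sketch. For brevity everything lives in one package here (so protected behaves like package access), which is enough to show that the compiler resolves both accesses the same way, by declared type:

```java
// Both accesses below compile identically: the compiler only looks at the
// declared types (C and B), never at the runtime type of the object.
class A { protected int x; }

class B extends A {
    int demo() {
        C c = new C();
        c.x = 7;        // legal: B accesses the inherited field via a subclass type
        B b = new C();
        b.x = 7;        // equally legal: same field, declared type B
        return c.x + b.x;
    }
}

class C extends B { }
```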
Related
In the code below I get a compiler error at b.printName();. As I understand it, the error is due to the fact that the compiler is effectively operating in a non-polymorphic way (i.e. the compiler is essentially only looking at the left side of the assignment, and therefore b is a Question). Since b is of type Question, and since Question does not have a no-args printName method, you get a compilation error. Is that correct?
Now, assuming that is correct, my question is why? Surely the compiler should know that Question b is referring to an object that does in fact support the no-args printName method? E.g. if you look at how the compiler behaves in terms of casting, there are examples where the compiler, for lack of a better word, acts polymorphically, or to put it another way, knows what's going on on the right-hand side of the assignment and acts upon that knowledge. An example would be if an interface type refers to an object that implements the interface; then the compiler looks at the right-hand side of the statement (i.e. the object that implements the interface) and decides no cast is required. So why doesn't the compiler act that way here? Why doesn't it see that the object in question is actually a Blue, and that a Blue does indeed support the no-arg method printName?
public class Polymorf3 {
    public static void main(String[] args) {
        Polymorf3 me = new Polymorf3();
        me.doStuff();
    }

    public void doStuff() {
        Bat a = new Bat();
        Question b = new Blue();
        //a.printName();
        a.printName(a.name);
        b.printName(); // Compiler error: required String, found no args
    }

    abstract class Question {
        String name = "Question_name";
        public void printName(String name) { System.out.println(name); }
    }

    class Bat extends Question {
        String name = "Bat_Bruce";
        //public void printName(){ System.out.println(name); }
    }

    class Blue extends Question {
        String name = "Clark";
        public void printName() { System.out.println(name); }
    }
}
Though b refers to a Blue object, since you declared it as Question b = new Blue();, the compiler treats it as type Question, and thus that is the only interface available to it without an explicit cast:
((Blue)b).printName();
Alternatively, you can declare it as Blue b = new Blue(); and b.printName(); will not throw a compile time error.
Essentially what's happening here is that you're declaring your new variable b at a higher level of abstraction, so the only printName method available to b is the one in the higher level of abstraction, the one with the args.
Edit:
OP asked why the compiler treats b as a Question even though it's initialized as Blue. Consider the following:
Question q = new Blue();
// ... some other code...
q = new Bat(); // Valid!!
q.printName("some string");
Now consider that tomorrow, some other developer comes in and changes it to the following:
Blue q = new Blue();
// ... some other code...
q = new Bat(); // Invalid!! Compiler error
q.printName("some string");
Declaring a variable at the highest level of abstraction required for your operation means you can later change the implementation more easily, without touching the rest of your code. That is why the Java compiler treats b as a Question: b can, at any time, be reassigned to an instance of Blue or Bat, so letting you call implementation-specific members such as the no-arg printName would break as soon as the implementation changed.
You seem to have misunderstood what polymorphism means. It means that you can treat an instance of the derived class as if it was an instance of the base class. That includes not calling methods on it that the base class doesn't provide. The variable type informs what methods you can call, and the instantiation type determines what implementations of those methods are run.
By putting your Blue instance in a Question variable, you are asking to treat it like a Question. If you wanted to call methods on your Question variable that are not provided by the Question class, then why have it be a Question variable at all? If you could call derived-class methods on a base class variable, it would not be a base class variable.
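If you do need the subclass-only method while holding a base-type variable, the usual idiom is an instanceof-guarded cast. A sketch, using trimmed-down versions of Question and Blue; as an assumption for checkability, the methods here return strings instead of printing:

```java
// Trimmed versions of the question's classes; returning strings instead of
// printing is an assumption made so the result is easy to verify.
abstract class Question {
    public String printName(String name) { return name; }
}

class Blue extends Question {
    public String printName() { return "Clark"; } // no-arg overload exists only on Blue
}

class Demo {
    static String callNoArg(Question q) {
        // q.printName();  // would not compile: Question has no no-arg overload
        if (q instanceof Blue) {       // guard before narrowing the view
            return ((Blue) q).printName();
        }
        return "not a Blue";
    }
}
```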
This question already has answers here:
Overriding public virtual functions with private functions in C++
(7 answers)
Changing Function Access Mode in Derived Class
(4 answers)
Closed 7 years ago.
I came from Java, where you are not allowed to reduce the visibility of a method in a derived class. For instance, the following does not compile in Java:
public class A {
    public void foo() { }
}

public class B extends A {
    @Override
    private void foo() { } // compile error
}
But, in C++ it's fine:
#include <iostream>

struct A {
    A() { }
    virtual ~A() { }
    A(A&&) { }
public:
    virtual void bar() { std::cout << "A" << std::endl; }
};

struct B : public A {
private:
    virtual void bar() { std::cout << "B" << std::endl; }
};

int main()
{
    A *a = new B;
    a->bar(); // prints B
    delete a;
}
DEMO
Where might it be useful? Moreover, is it safe to do so?
As you observed, the access specifier works based on the static type of the pointer, not the dynamic type. So specifying private in B in this case simply means the functions cannot be accessed through a pointer to B. This could therefore be useful for keeping things locked down in cases where the client should not be using pointers to B (or creating B's on the stack): basically, cases where B's constructor is private and you create B's (and possibly other children of A) through a factory that returns a unique_ptr<A>. In that case, you could just make all of B's methods private. In principle this prevents the client from "abusing" the interface by dynamic-casting the unique_ptr downwards and then accessing B's interface directly.
I don't really think this should be done though; it's a more Java-y approach than C++. In general if the client wants to use a derived object on the stack instead of on the heap through a base class pointer, they should be able. It gives better performance and it's easier to reason about. It also works better in generic code.
Edit: I think I should clarify. Consider the following code:
enum class Impl {FIRST, SECOND, THIRD};
unique_ptr<A> create(Impl i) {
...
}
Suppose this is the only way to create concrete instances that use A's interface. I could desire perhaps that the derived classes are pure implementation details. For instance, I could implement each of the three implementations in a different class, then later decide that two of the three can be lumped together into one class with different options, and so on. It's none of the user's business; their world is just A's interface plus the create function. But now suppose a user happens to look at the source and knows that the FIRST implementation is implemented using B. They want better performance, so they do this:
auto a = create(Impl::FIRST);
auto b = dynamic_cast<B *>(a.get());
// Use b below, potentially avoiding vtable
If a user has code like this, and you eliminate or rename the class B, their code will break. By making all of B's methods private, you make the b pointer useless to the user, ensuring that they use the interface as intended.
As I said before, I don't particularly advocate programming this way in C++. But there may be situations where you really do need the derived classes to be pure implementation details; in those cases changing the access specifier can help enforce it.
I want to write a program in java that consists of three classes, A, B, and C, such that B extends A and C extends B. Each class defines an instance variable (named "x").
How can I write a method in C to access and set A's version of x to a given value, without changing B or C's version?
I tried super.x but it didn't work.
Any help?
Thanks for your attention in advance.
You can access A's version of x like this:
((A)this).x
as long as x wasn't declared private in class A. I've just tested it.
Note that for fields, there is no overriding (as there is for methods). Thus, for an object of class C, there will be three x fields, but two of them can't be accessed normally because they are hidden by the other field named x. But casting the object as above will allow you to get at it, if it would have been visible if not hidden.
I think it is very poor practice to declare fields of the same name in a class and its subclasses. It's confusing. It can happen legitimately if, say, you have a class A and you later change the implementation of A and add a new private field z; in that case, it may not be possible to make sure no subclasses of A already have a field z, since you don't even always know what all the subclasses are (if A is a class you've distributed publicly, for instance). I think it's for that reason that Java allows you to have fields of the same name, and why the hiding rules are the way they are, because it allows things like this to work without breaking all the other subclasses. Other than that, though, I recommend not having fields of the same name in superclasses and subclasses. Perhaps if they're all private it might be OK, though.
Do the following
public static void main(String[] args) throws Exception {
C c = new C();
System.out.println("c:" + c.x);
System.out.println("a:" + ((A)c).x);
c.changeAX();
System.out.println("c:" + c.x);
System.out.println("a:" + ((A)c).x);
}
static class A {
int x;
}
static class B extends A {
int x;
}
static class C extends B {
int x;
public void changeAX() {
((A)this).x = 4;
}
}
Fields are resolved relative to the declared type of the reference. The above prints
c:0
a:0
c:0
a:4
The field will have to have at least protected visibility.
You don't want to be hiding class members, it's bad practice because it can easily confuse anyone trying to figure out which member you are referring to.
I misread your question. You can't do what you're trying to do.
Extending classes means adding information in several layers, ultimately resulting in one object. Although there are multiple layers, this doesn't mean that the layers are separate from each other.
The variable x will be defined at one level (probably A), and after that the other classes will use this variable (if it's declared protected), but they won't have their own copy of it. You can only access your direct superclass.
That class might give you additional access to its own superclass, but you don't have direct contact with the super-super class.
Let's say you have some Java code as follows:
public class Base{
public void m(int x){
// code
}
}
and then a subclass Derived, which extends Base as follows:
public class Derived extends Base{
public void m(int x){ //this is overriding
// code
}
public void m(double x){ //this is overloading
// code
}
}
and then you have some declarations as follows:
Base b = new Base();
Base d = new Derived();
Derived e = new Derived();
b.m(5); //works
d.m(6); //works
d.m(7.0); //does not compile
e.m(8.0); //works
For the one that does not compile, I understand that you are passing in a double into Base's version of the m method, but what I do not understand is... what is the point of ever having a declaration like "Base b = new Derived();" ?
It seems like a good way to run into all kinds of casting problems, and if you want to use a Derived object, why not just go for a declaration like for "e"?
Also, I'm a bit confused as to the meaning of the word "type" as it is used in Java. The way I learned it earlier this summer was, every object has one class, which corresponds to the name of the class following "new" when you instantiate an object, but an object can have as many types as it wants. For example, "e" has type Base, Derived, (and Object ;) ) but its class is Derived. Is this correct?
Also, if Derived implemented an interface called CanDoMath (while still extending Base), is it correct to say that it has type "CanDoMath" as well as Base, Derived, and Object?
I often write functions in the following form:
public Collection<MyObject> foo() {}
public void bar(Collection<MyObject> stuff){}
I could just as easily have made it ArrayList in both instances; however, what happens if I later decide to make the representation a Set? The answer is that I'd have a lot of refactoring to do, since I'd have changed my method contract. However, if I leave it as Collection I can seamlessly switch from ArrayList to HashSet at will. Using ArrayList as an example, it has the following types:
Serializable, Cloneable, Iterable<E>, Collection<E>, List<E>, RandomAccess
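A sketch of that contract in action; Repo and its methods are hypothetical names, not from the original post:

```java
import java.util.ArrayList;
import java.util.Collection;

// The public contract only names Collection, so the backing class can be
// swapped (e.g. for HashSet) without touching any caller.
class Repo {
    private Collection<String> stuff = new ArrayList<>(); // or new HashSet<>()

    public Collection<String> foo() { return stuff; }
    public void bar(Collection<String> in) { stuff.addAll(in); }
}
```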
There are a number of cases where confining yourself to a particular (sub)class is not desired, such as your e.m(8.0); case. Suppose, for example, you have a method called move that moves an object in the coordinate graph of a program. However, at the time you write the method you may have both cartesian and radial graphs, handled by different classes.
If you rely on knowing what the sub-class is, you force yourself into a position wherein higher levels of code must know about lower levels of code, when really they just want to rely on the fact that a particular method with a particular signature exists. There are lots of good examples:
Wanting to apply a query to a database while being agnostic to how the connection is made.
Wanting to authenticate a user, without having to know ahead of time the strategy being used.
Wanting to encrypt information, without needing to rip out a bunch of code when a better encryption technique comes along.
In these situations, you simply want to ensure the object has a particular type, which guarantees that particular method signatures are available. In this way your example is contrived; you're asking why not just use a class that has a method wherein a double is the signature's parameter, instead of a class where that isn't available. (Simply put; you can't use a class that doesn't have the available method.)
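The move example above can be sketched with an interface; Movable and both point classes are hypothetical names, not from the original post:

```java
// Higher-level code depends only on Movable; it never needs to know whether
// the point is stored in cartesian or polar form.
interface Movable {
    void move(double dx, double dy);
    double distanceFromOrigin();
}

class CartesianPoint implements Movable {
    private double x, y;
    public void move(double dx, double dy) { x += dx; y += dy; }
    public double distanceFromOrigin() { return Math.hypot(x, y); }
}

class RadialPoint implements Movable {
    private double r, theta;
    public void move(double dx, double dy) {
        // Translate in cartesian space, then convert back to polar.
        double x = r * Math.cos(theta) + dx;
        double y = r * Math.sin(theta) + dy;
        r = Math.hypot(x, y);
        theta = Math.atan2(y, x);
    }
    public double distanceFromOrigin() { return r; }
}
```

Code that takes a Movable works unchanged with either representation.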
There is another reason as well. Consider:
class Base {
    public void blah() {
        //code
    }
}

class Extended extends Base {
    private int superSensitiveVariable;

    public void setSuperSensitiveVariable(int value) {
        this.superSensitiveVariable = value;
    }

    public void blah() {
        //code
    }
}
//elsewhere
Base b = new Extended();
Extended e = new Extended();
Note that in the b case, I do not have access to the setter and thus can't muck up the super-sensitive variable accidentally. I can only do that in the e case. This helps ensure those things are only done in the right place.
Your definition of type is good, as is your understanding of what types a particular object would have.
What is the point of having Base b = new Derived();?
The point of this is using polymorphism to change your implementation. For example, someone might do:
List<String> strings = new LinkedList<String>();
If they do some profiling and find that the most common operation on this list is inefficient for the type of list, they can swap it out for an ArrayList. In this way you get flexibility.
if you want to use a Derived object
If you need the methods on the derived object, then you would use the derived object. Have a look at the BufferedInputStream class - you use this not because of its internal implementation but because it wraps an InputStream and provides convenience methods.
Also, I'm a bit confused as to the meaning of the word "type" as it is used in Java.
It sounds like your teacher is referring to interfaces and classes as "types". This is a reasonable abstraction, as a class that implements an interface and extends a class can be referred to in three ways, i.e.
public class Foo extends AbstractFoo implements Comparable<Foo>
// Usage
Comparable<Foo> comparable = new Foo();
AbstractFoo abstractFoo = new Foo();
Foo foo = new Foo();
An example of the types being used in different contexts:
new ArrayList<Comparable<Foo>>().add(new Foo()); // Foo can be in a collection of Comparable
new ArrayList<AbstractFoo>().add(new Foo());     // Also in an AbstractFoo collection
This is one of the classic problems in object-oriented design. When something like this happens, it usually means the design can be improved; there is almost always a somewhat elegant solution to these problems...
For example, why don't you pull the m that takes a double up into the base class?
With respect to your second question, an object can have more than one type, because Interfaces are also types, and classes can implement more than one interface.
This is possible in Java:
package x;
public class X {
// How can this method be public??
public Y getY() {
return new Y();
}
}
class Y {}
So what's a good reason the Java compiler lets me declare the getY() method as public? What's bothering me is: the class Y is package-private, but the accessor getY() declares it in its method signature. But outside of the x package, I can only assign the method's result to Object:
// OK
Object o = new X().getY();
// Not OK:
Y y = new X().getY();
OK. Now I could try to make up an example where this could be explained by method-result covariance. But to make things worse, I can also do this:
package x;
public class X {
public Y getY(Y result) {
return result;
}
}
class Y {}
Now I could never call getY(Y result) from outside of the x package. Why can I do that? Why does the compiler let me declare a method in a way that I cannot call it?
A lot of thinking has gone into the design of Java, but sometimes some sub-optimal design just slips through. The famous Java Puzzlers clearly demonstrate that.
Another package can still call the method with the package-private parameter; the easiest way is to pass it null. But the fact that you can still call it doesn't mean such a construct makes sense. It breaks the basic idea behind package-private: only the package itself should see it. Most people would agree that any code that makes use of this construct is at least confusing and has a bad smell to it. It would have been better not to allow it.
Just as a side note, the fact that it's allowed opens up some more corner cases. For example, from a different package doing Arrays.asList(new X().getY()) compiles, but throws an IllegalAccessError when executing because it tries to create an array of the inaccessible Y class. That just shows that this leaking of inaccessible types doesn't fit into the assumptions the rest of the language design makes.
But, like other unusual rules in Java, it was allowed in the first versions of Java. Because it's not such a big deal, and because backwards compatibility is more important for Java, improving this situation (disallowing it) simply isn't worth it anymore.
First of all, you could call the method. The trivial example is calling it within the same package.
A non-trivial example:
package x;
public class Z extends Y { }

// in package x2:
new x.X().getY(new x.Z()); // compiles
But that is not the point.
The point is, Java tries to forbid some of the obviously nonsensical designs, but it cannot forbid all.
What is nonsensical? That is very subjective.
If it's too strict, it's bad for development while things are still plastic.
The language spec is already way too complicated; adding ever more rules is beyond human capacity. [1]
[1] http://java.sun.com/docs/books/jls/third_edition/html/names.html#6.6
It is sometimes useful to have public methods that return an instance of a type that is not public, e.g. if this type implements an interface. Often factories work like this:
public interface MyInterface { }

class HiddenImpl implements MyInterface { }

public class Factory {
    public HiddenImpl createInstance() {
        return new HiddenImpl();
    }
}
Of course one could argue that the compiler could force the return value of createInstance() to be MyInterface in this case. However, there are at least two advantages of allowing it to be HiddenImpl. One is, that HiddenImpl could implement several separate interfaces, and the caller is free to choose as which type it wants to use the return value. The other is that callers from inside the package can use the same method to get an instance of HiddenImpl and use it as such, without the need for casting it or having two methods in the factory (one public MyInterface createInstance() and one package-protected HiddenImpl createPrivateInstance()) that do the same thing.
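The first advantage, that the hidden class may implement several interfaces and the caller picks the view, can be sketched like this (Greeter, Counter, and the class bodies are hypothetical; in a real multi-package layout HiddenImpl would be package-private next to Factory):

```java
interface Greeter { String greet(); }
interface Counter { int count(); }

// Would be package-private in the factory's package in a real layout.
class HiddenImpl implements Greeter, Counter {
    private int calls;
    public String greet() { calls++; return "hello"; }
    public int count() { return calls; }
}

class Factory {
    // Returning the concrete type lets each caller choose which
    // interface view to assign the result to.
    public HiddenImpl createInstance() { return new HiddenImpl(); }
}
```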
The reason for allowing something like public void setY(Y param) is similar. There may be public sub-types of Y, and callers from outside the package may pass instances of these types. Again, the same two advantages as above apply here (there may be several such sub-types, and callers from the same package may choose to pass Y instances directly).
A big reason to allow it is to allow for opaque types. Imagine the following scenario:
package x;

public interface Foo { }

class X
{
    public Foo getFoo()
    {
        return new Y();
    }

    class Y implements Foo { }
}
Here we have your situation (a package-protected inner class exported through public API), but this makes sense since as far as a caller is concerned the returned Y is an opaque type. That said, IIRC NetBeans does give a warning for this type of behaviour.
Couldn't you do something like:
Object answer = foo1.getY();
foo2.setY( foo1.getY().getClass().cast(answer) );
Yes, it is ugly and dumb and pointless, but you can still do it.
That said, I believe your original code would produce a compiler warning.