implementing interfaces after the fact - java

I think the following can't be done in Java. But I would be happy to learn how to implement something that resembles it.
Suppose we have a class C, that is already used in compiled code. (We can neither change that code nor the original definition of C).
Suppose further there is interesting code that could be re-used, if only C implemented interface I. It is, in fact, more or less trivial to derive a D that is just C plus the implementation of the interface methods.
Yet, it seems there is no way, once I have a C, to say: I want you to be a D, that is, a C implementing I.
(Side remark: I think the cast (D)c, where c's runtime type is C, should be allowed if D is a C and the only difference to C are added methods. This should be safe, should it not?)
How could one work around this calamity?
(I know of the factory design pattern, but this is not a solution, it seems. For, once we manage to create D's in all places where formerly were C's, somebody else finds another interface J useful and derives E extends C implements J. But E and D are incompatible, since they both add a different set of methods to C. So while we can always pass an E where a C is expected, we can't pass an E where a D is expected. Rather, now, we'd need a new class F extends C implements I,J.)

Couldn't you use a delegate class, i.e. a new class which wraps an instance of "Class C", but also implements "Interface I" ?
public class D implements I {
    private C c;

    public D(C c) {
        this.c = c;
    }

    public void method_from_class_C() {
        c.method_from_class_C();
    }

    // repeat ad nauseam for all of class C's public methods
    ...

    public void method_from_interface_I() {
        // does stuff
    }

    // and do the same for all of interface I's methods too
}
and then, if you need to invoke a function which normally takes a parameter of type I just do this:
result = some_function(new D(c));

If all that you need to be compatible with is interfaces, then no problem: take a look at dynamic proxy classes; that's basically how you implement interfaces at runtime in Java.
If you need similar runtime compatibility with classes, I suggest you take a look at the CGLIB or Javassist open-source libraries.
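A minimal sketch of the dynamic-proxy approach. The interface I, class C, and the greet/name methods here are hypothetical stand-ins for the question's types:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical stand-ins: C is the closed class we cannot change,
// I is the interface we want it to satisfy at runtime.
interface I {
    String greet();
}

class C {
    String name() { return "C"; }
}

class ProxyDemo {
    // Generates an implementation of I at runtime that delegates
    // to an existing C instance.
    static I asI(C c) {
        return (I) Proxy.newProxyInstance(
                I.class.getClassLoader(),
                new Class<?>[] { I.class },
                (Object proxy, Method method, Object[] args) -> {
                    if (method.getName().equals("greet")) {
                        return "hello from " + c.name();
                    }
                    throw new UnsupportedOperationException(method.getName());
                });
    }
}
```

Calling `ProxyDemo.asI(existingC)` yields an object that is a genuine I as far as the type system is concerned, with every interface call routed through the handler.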

If you (can) manage the ClassLoader that loads your class C then you can try to do some class-loading time shenanigans with bytecode instrumentation to make the class implement the interface.
The same can be done during build-time, of course. It might even be easier this way (as you don't need access to the ClassLoader).

(Side remark: I think the cast (D)c, where c's runtime type is C, should be allowed if D is a C and the only difference to C are added methods. This should be safe, should it not?)
Not at all. If you could make this cast, then you could compile code that attempted to call one of the "added methods" on this object, which would fail at runtime since that method does not exist in C.
I think you are imagining that the cast would detect the methods that are "missing" from C and delegate them to D automatically. I doubt that would be feasible, although I can't speak to the language design implications.
It seems to me the solution to your problem is:
Define class D, which extends C and implements I
Define a constructor D(C c) which essentially clones the state of the given C object into a new D object.
The D object can be passed to your existing code because it is a C, and it can be passed to code that wants an I because it is an I
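A minimal sketch of that solution, with hypothetical stand-ins for C and I (the value field and doubled() method are invented for illustration):

```java
// Hypothetical stand-ins for the question's C and I.
interface I {
    int doubled();
}

class C {
    protected int value;
    C(int value) { this.value = value; }
    int value() { return value; }
}

// D is-a C, so it can be passed to existing code expecting a C,
// and it also implements I.
class D extends C implements I {
    // Clones the state of the given C object into a new D object.
    D(C c) {
        super(c.value());
    }

    @Override
    public int doubled() {
        return value * 2;
    }
}
```

The cost is that the C's state must be copied field by field (or via its accessors), and subsequent changes to the original C are not reflected in the D.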

I believe what you want is possible by using java.lang.reflect.Proxy; in fact I have done something similar for a current project. However, it's quite a bit of work, and the resulting "hybrid objects" can expose strange behaviour (because method calls on them are routed to different concrete objects, there are problems when those methods try to call each other).

I think you can't do it because Java is statically typed. I believe it can be done in languages like Ruby and Python with the use of mixins.
As for Java, it definitely looks like a good use case for the Adapter design pattern (it was already proposed earlier as a "wrapper" object).


Is there a point in upcasting "this" reference in Java?

I have come across a weird piece of code. I was wondering if there is any usage for it.
class C extends B {
    int xB = 4;

    C() {
        System.out.println(this.xB);
        System.out.println(super.xB);
        System.out.println(((B)this).xB); // This is the weird code.
    }
}
The program prints 4, 10, 10. The public xB field of class B has the value 10.
In Java, you can only directly inherit from a single class. But you can have multiple indirect superclasses. Could this be used to upcast the "this" reference to one of those? Or is this bad programming practice and I should forget about it?
So "((B)this)" basically acts as if it were "super". We could just use super instead.
It does NOT generally do the same thing as super.
It does in this case, because fields do not have dynamic dispatch. They are resolved by their compile-time type. And you changed that with the cast.
But super.method() and ((SuperClass)this).method() are not the same. Methods are dispatched at runtime based on the actual type of the instance. The type-cast does not affect this at all.
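A small self-contained example of the difference (Base and Derived are hypothetical names):

```java
class Base {
    int x = 10;
    String who() { return "Base"; }
}

class Derived extends Base {
    int x = 4; // shadows Base.x -- bad practice, only for demonstration

    @Override
    String who() { return "Derived"; }

    void demo() {
        // Fields resolve against the compile-time type: the cast matters.
        System.out.println(this.x);              // 4
        System.out.println(((Base) this).x);     // 10
        // Methods dispatch on the runtime type: the cast changes nothing.
        System.out.println(((Base) this).who()); // Derived
        System.out.println(super.who());         // Base
    }
}
```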
I was wondering if people are using this structure to upcast "this" to indirect superclasses.
They don't have to, because they don't duplicate field names like that.
It is bad practice to shadow an inherited (visible) field in a subclass (exactly because it leads to confusion like this). So don't do that, and you won't have to use this cast.
And you cannot "upcast to indirect superclasses" at all where methods are concerned: You can call super.method() directly (if you are in the subclass), but not something like super.super.method().
this is an instance of C; it can be upcast to its direct (e.g. B) or indirect (e.g. Object) parent.
    C c = this;
    B b = (B)c;
    Object o = (Object)c;
Is this bad programming practice and I should forget about it?
It's a workaround since polymorphism doesn't work for fields. It's bad practice. Why would C need to declare xB if it's already defined in B, and B can grant its subclasses access to the field? It's weird, indeed.

Mocking locally created objects in java using Mockito2

I am writing module tests for a project using TestNG and Mockito 2. I want to mock a few methods which make outbound requests. Now, the object to mock is created locally within another object's method. So if I have, say, 4 classes A, B, C and D, such that A creates an object of type B, B creates an object of type C and so on, and an object of type D is to be mocked, I see I have two options to mock it.
Option 1 is to spy on objects of type A, B and C, inject the spy of B into A and of C into B, and finally inject a mock of D into C during object creation. Following is an example.
class A {
    public B createB() {
        return new B();
    }

    public void someMethod() {
        B b = createB();
    }
}
In this way I can spy on A and inject a mock object for B when createB is called. This way I can ultimately mock D.
Option 2 is to not mock the intermediate classes and instead have a Factory class like the one below:
class DFactory {
    private static D d;

    public static void setD(D newD) {
        d = newD;
    }

    public static D getD() {
        if (d != null) {
            return d;
        } else {
            return new D();
        }
    }
}
The above option is simple, but I am not sure if this is the right thing to do as it creates more static methods, something that should be avoided, I believe.
I would like to know which method should be preferred and if there is some other alternative.
Please note that I do not wish to use powermockito or any other such frameworks which encourage bad code design. I want to stick to mockito2. I am fine with refactoring my code to make it more testable.
The way you have it now, with A creating B and B creating C and C creating D, all of that creation consists of implementation details you can't see or change, specifically the creation of dependency objects.
You are admirably avoiding the use of PowerMockito, and you are also admirably interested in refactoring your code to handle this change well, which means delegating the choice of D to the creator of A. Though I understand that you only really mean for this choice to happen in testing, the language doesn't know that; you are choosing a different implementation for the dependency, and taking the choice away from C's implementation. This is known as inversion of control, or dependency injection. (You've probably heard of these terms before, but I introduce them at the end because they are typically associated with heavyweight frameworks that aren't really necessary for this conversation right now.)
It's a little trickier because it looks like you don't just need an implementation of D, but that you need to create new implementations of D. That makes things a little harder, but not by much, especially if you are able to use Java 8 lambdas and method references. Anywhere below that you see a reference to D::new, that's a method reference to D's constructor that could be accepted as a Supplier<D> parameter.
I would restructure your class in one of the following ways:
Construct A like new A(), but leave the control over the implementation of D for when you actually call A, like aInstance.doSomething(new D()) or aInstance.doSomething(D::new). This means that C would delegate to the caller every single time you call a method, giving more control to the callers. Of course, you might choose to offer an overload of aInstance.doSomething() that internally calls aInstance.doSomething(new D()), to make the default case easy.
Construct A like new A(D::new), where A calls new B(dSupplier), and B calls new C(dSupplier). This makes it harder to substitute B and C in unit tests, but if the only likely change is to have the network stack represented by D, then you are only changing your code as required for your use-case.
Construct A like new A(new B(new C(D::new))). This means that A is only involved with its direct collaborator B, and makes it much easier to substitute any implementation of B into A's unit tests. This assumes that A only needs a single instance of B without needing to create it, which may not be a good assumption; if all classes need to create new instances of their children, A would accept a Supplier<B>, and A's construction would look like new A(() -> new B(() -> new C(D::new))). This is compact, but complicated, and you might choose to create an AFactory class that manages the creation of A and the configuration of its dependencies.
If the third option is tempting for you, and you think you might want to automatically generate a class like AFactory, consider looking into a dependency injection framework like Guice or Dagger.
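The second option above can be sketched as follows. All class names and method bodies here are hypothetical stand-ins for the question's A, B, C and D:

```java
import java.util.function.Supplier;

// Hypothetical sketch: the Supplier<D> is threaded from A down to C,
// the only class that actually creates D instances.
class D {
    String call() { return "real network call"; }
}

class C {
    private final Supplier<D> dSupplier;
    C(Supplier<D> dSupplier) { this.dSupplier = dSupplier; }
    String doWork() { return dSupplier.get().call(); }
}

class B {
    private final C c;
    B(Supplier<D> dSupplier) { this.c = new C(dSupplier); }
    String doWork() { return c.doWork(); }
}

class A {
    private final B b;
    A(Supplier<D> dSupplier) { this.b = new B(dSupplier); }
    String doWork() { return b.doWork(); }
}
```

Production code constructs `new A(D::new)`; a test passes `new A(() -> mockD)` and never needs to touch B or C.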

design pattern / multiple inheritance workaround java

I have a Java package with a class hierarchy, say class A with subclasses A1, A2, A3, ...
For a specific application I need to make classes (of type A) which implement an interface B. These new classes will have some things in common so my first thought was to make a base class C which inherits from A and implements B, and my new classes C1, C2, C3, ... would just inherit from C. However, some of these classes Ci will need functionality existing in one of the Aj and by this method I'd need to re-implement such functionality (functionality not defined in A). For some of the Ci's I'd like to inherit behavior from the various Ai's, but at the same time inherit other behavior common to all Ci's which is in C. Of course, Java won't allow multiple inheritance so I can't do this directly. I certainly don't want to re-implement this much stuff just because MI is not supported, but I don't see a way around it. Any ideas?
Do you really need to subclass or do you just want code re-use?
Sounds like you could use the Composite and possibly the Strategy pattern:
Your Class C will have fields of type A and possibly B and delegate calls to them where appropriate. This gives you all the advantages of code re-use without the messiness of inheritance (single or multi)
Everyone seems to notice this problem when they first begin seeing the usefulness of class hierarchies. The problem stems from orthogonal concerns in the classes. Some of the subclasses share characteristic 1 and some share characteristic 2. Some have neither. Some have both.
If there was just one characteristic ... (Say, some had an inside and so they could be filled.) ... we would be fine. Make an intermediate subclass to handle those concerns. Some subclasses actually subclass the intermediate one. And we are done.
But there are two and there is no way to do the subclassing. I suppose multiple inheritance is a solution to that problem but it adds a level of complexity and subverts the simplicity of thinking that makes hierarchical class structures useful.
I find it best to use subclassing for the one concern that it solves easily. Then pick a way to isolate and share the code besides subclassing.
One solution is to extract the functionality and move it elsewhere. Then all the Aj's and Ci's can call it there. The advantage is that you don't copy and paste any code and it can be fixed in one place if it gets broken.
One: The code could go into the base class A and be given a name indicating it only applies to some of the children. Make the methods protected, then call them from the actual classes. It's ugly but effective.
A
    protected String formStringForSubclassesAjAndCi(String a, int b) {
        return a + b;
    }
Ci and Aj
    public String formString(String a, int b) {
        return formStringForSubclassesAjAndCi(a, b);
    }
Two: Similarly, you can put the shared code in some sort of helper class:
CiAjHelper
    public static String formStringForSubclassesAjAndCi(String a, int b) {
        return a + b;
    }
Aj and Ci
    public String formString(String a, int b) {
        return CiAjHelper.formStringForSubclassesAjAndCi(a, b);
    }
Three: The third way is to put the code in, say, Aj, and then call it from Ci by having an instance of Aj for each Ci instance and delegating the common functions to it. (It's still ugly.)
Aj
    public String formString(String a, int b) {
        return a + b;
    }
Ci
    private Aj instanceAj = new Aj();

    public String formString(String a, int b) {
        return instanceAj.formString(a, b);
    }

Why is access to protected members in Java implemented the way it is?

The question about access to protected member in Java was already asked and answered a lot of times, for example:
Java: protected access across packages
But I can't understand why it is implemented this way; see the explanation from "The Java Programming Language" (4th ed.):
"The reasoning behind the restriction is this: Each subclass inherits the contract of the superclass and expands that contract in some way. Suppose that one subclass, as part of its expanded contract, places constraints on the values of protected members of the superclass. If a different subclass could access the protected members of objects of the first subclass then it could manipulate them in a way that would break the first subclass's contract and this should not be permissible."
OK, that's clear, but consider this inheritance structure (extract from some code):
package package1;

public class A {
    protected int x;
}

package package2;

public class B extends A {
    public static void main(String[] args) {
        C subclass = new C();
        subclass.x = 7; // here any constraints can be broken - ??
    }
}

class C extends B {
    // class which places constraints on the value of protected member x
    ...
}
Here subclass.x = 7 is a valid statement which can still break C's contract.
What am I missing?
Edited (added): Maybe I should not apply the cited logic in this situation? If we were dealing with only one package, then no restrictions would exist at all. So maybe the direct inheritance chain is treated in a simplified way, meaning that the superclass must know what it is doing...
It's ultimately all about following contracts, as stated in your posted quote. If you're really worried that someone won't read the contract, then there's a defensive programming solution to all this that introduces validation on modification.
By this I mean that the code you posted can break contract; this, however, couldn't:
public class A {
    private int x;

    protected final void setX(int x) throws IllegalArgumentException {
        if (x < 0)
            throw new IllegalArgumentException("x cannot be negative");
        subValidateX(x);
        this.x = x;
    }

    /**
     * Subclasses that wish to provide extra validation should override this method
     */
    protected void subValidateX(int x) {
        // Intentional no-op
    }
}
Here, I've done three major things:
I made x private so it can only be assigned from within A (excluding things like reflection, of course),
I made the setter final which prevents subclasses from overriding it and removing my validation, and
I made a protected method that can be overridden by subclasses to provide extra validation in addition to mine to make sure that subclasses can narrow requirements on x, but not widen them to include things like negative integers since my validation already checked that.
There are lots of good resources for how to design for inheritance in Java, especially when it comes to super-defensive protect-the-contract API programming like my example above. I'd recommend looking them up on your favorite search engine.
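For illustration, a hypothetical subclass that narrows the contract via subValidateX. The base class repeats the A from above (with a getter added so the sketch is self-contained):

```java
// A repeats the defensive base class from the answer above.
class A {
    private int x;

    protected final void setX(int x) {
        if (x < 0)
            throw new IllegalArgumentException("x cannot be negative");
        subValidateX(x);
        this.x = x;
    }

    protected void subValidateX(int x) {
        // Intentional no-op
    }

    public int getX() { return x; }
}

// Hypothetical subclass: narrows the contract (x must also be even)
// but cannot widen it, because the final setX still rejects negatives
// before any subclass validation runs.
class EvenOnlyA extends A {
    @Override
    protected void subValidateX(int x) {
        if (x % 2 != 0)
            throw new IllegalArgumentException("x must be even");
    }
}
```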
Ultimately, though, the developer writing the subclass needs to be responsible enough to read documentation, especially when you get into interface implementation.
Inherited classes are implicitly friends with their parent. So as soon as C inherits from B, it is actually normal that B has visibility of C's x attribute.
Since C extends B, having
    C c = new C();
    c.x = 1;
is, with respect to your issue, exactly the same as
    B b = new C();
    b.x = 1;
The Java compiler doesn't consider the runtime type of the object referred to by b and c in the above code; all it sees is the declared type, which is B and C, respectively. Now, since my second example obviously must work (the code in class B is accessing its own property, after all), it follows that the first example must work as well; otherwise it would mean that Java allows you to do less on a more specific type, which is a paradox.

Java downcasting and is-A has-A relationship

Hi,
I have a downcasting question; I am a bit rusty in this area.
I have 2 classes like this:
class A { int i; String j; } // getters and setters
class B extends A { String k; } // getter and setter
I have a method like this, in a Utility helper class:
public static A converts(C c){}
Where C are objects that are retrieved from the database and then converted.
The problem is I want to call the above method by passing in a 'C' and getting back B.
So I tried this:
    B bClass = (B) Utility.converts(c);
So even though the above method returns A, I tried to downcast it to B, but I get a runtime ClassCastException.
Is there really no way around this? Do I have to write a separate converts() method which returns a B class type?
If I declare my class B like:
class B { String k; A a; } // So instead of extending A it has-a A; getters and setters also
then I can call my existing method like this:
    b.setA(Utility.converts(c));
This way I can reuse the existing method, even though the extends relationship makes more sense. What should I do? Any help much appreciated. Thanks.
The cast from type A to type B:
    B bClass = (B) Utility.converts(c);
doesn't work because objects of type A don't have all the methods that might be called from references of type B. What would you expect to happen if you called
    bClass.getK();
on the next line? The underlying object has no member variable k, so this cast is not allowed.
You can use references of the higher types in your class hierarchy to refer to objects of lower types, but not the other way around.
Without knowing more, I think the best thing to do is implement multiple methods
A aObj = Utility.convertToA(c);
B bObj = Utility.convertToB(c);
If B extends A, then you should still benefit from some code reuse in the constructors of your classes.
What's important here is what Utility.converts() actually returns - if it doesn't create a new B object and return it, there's no way to get a B from it.
(since you're getting a ClassCastException, it doesn't create a B inside)
You should work in the appropriate level of abstraction and write your method signatures to do the same. If the public/default interface of B is modified that heavily from A, then your method signature really should be returning a B. Otherwise, ditch trying to cast it, assign the result of .converts to a variable of type A, and treat it like an A even though its true type is really a B. You would be defeating the point of abstracting through inheritance if you tried to downcast here.
Without seeing your source code, I have no clue whether or not it makes sense to use composition in lieu of inheritance here. The above paragraph assumes what you say about "extends relationship makes more sense" is really true.
If your converts() method doesn't actually return a B, then there is no way to cast it to a B. Since you are getting a ClassCastException it clearly doesn't return a B.
You can of course write a converts(C c) that returns a B. But an alternative approach might be to write a constructor:
B(A a)
which creates a B based on the contents of the given A. Then you use converts() to turn your C into an A, and create a B from it.
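A minimal sketch of such a constructor, using the question's fields (the default value for k is an assumption):

```java
// Hypothetical sketch: B copies the state of an existing A and
// supplies a default for its own field k.
class A {
    int i;
    String j;
}

class B extends A {
    String k;

    B(A a) {
        this.i = a.i;
        this.j = a.j;
        this.k = ""; // default for the new field; adjust as needed
    }
}
```

With this in place, `B b = new B(Utility.converts(c));` reuses the existing converts method unchanged.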
