Call the same method signature with different implementations in Java

Given:
I have two different projects: legacy A and nextGen B.
There is a third project C which is shared between A and B.
Objective:
I have to do an averageCalculation() in the third project C, but the implementation is different for projects A and B. Using the same method signature but different implementations, how do I design this? Note: projects A and B should just call averageCalculation() with the same method signature.
Project C
interface I {
    void averageCalculation();
}
class CClass implements I {
    ?<averageCalculation() - for A>
    ?<averageCalculation() - for B>
}
Project A
{
    I i1 = new CClass();
    i1.averageCalculation();
}
Project B
{
    I i2 = new CClass();
    i2.averageCalculation();
}
Is the above approach correct? If so, how would I create two implementations of averageCalculation() in CClass?

Create two different classes that implement your interface, and use a different class in each project:
Project C
interface I {
    void averageCalculation();
}
class CClassForA implements I {
    public void averageCalculation() { ... } // implementation for A
}
class CClassForB implements I {
    public void averageCalculation() { ... } // implementation for B
}
Project A
{
    I i1 = new CClassForA();
    i1.averageCalculation();
}
Project B
{
    I i2 = new CClassForB();
    i2.averageCalculation();
}
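If it helps to see the same idea compile end to end, here is a minimal sketch. The double return type, the int[] parameter and the sample formulas are assumptions added for illustration; only the one-implementation-class-per-project layout comes from the answer above.
interface I {
    double averageCalculation(int[] values);
}

class CClassForA implements I {
    @Override
    public double averageCalculation(int[] values) {
        // hypothetical legacy (A) rule: arithmetic mean
        double sum = 0;
        for (int v : values) {
            sum += v;
        }
        return values.length == 0 ? 0 : sum / values.length;
    }
}

class CClassForB implements I {
    @Override
    public double averageCalculation(int[] values) {
        // hypothetical nextGen (B) rule: mean of min and max
        if (values.length == 0) {
            return 0;
        }
        int min = values[0], max = values[0];
        for (int v : values) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return (min + max) / 2.0;
    }
}

class DemoA {
    public static void main(String[] args) {
        I i1 = new CClassForA(); // project A would instantiate the A implementation
        System.out.println(i1.averageCalculation(new int[] {1, 2, 3, 4}));
    }
}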

I am not sure I understand your problem entirely, but here is a solution. You can't change the legacy projects, but you want A and B to conform to some interface I. You can do this by wrapping A and B in something that does conform to I, implementing I with A's and B's respective implementations.
public class Problem {
    public static class A {
        public int foo() {
            return 3;
        }
    }
    public static class B {
        public int foo() {
            return 5;
        }
    }
    public interface TheFoo {
        public int foo();
    }
    // Wrap the legacy classes so they conform to the interface
    public static class AWrapper extends A implements TheFoo {
        public int foo() {
            return super.foo();
        }
    }
    public static class BWrapper extends B implements TheFoo {
        public int foo() {
            return super.foo();
        }
    }
    public static void main(String[] args) {
        // TheFoo[] myFoos = new TheFoo[]{new A(), new B()}; // Won't work: A and B do not implement TheFoo
        TheFoo[] myFoos = new TheFoo[] { new AWrapper(), new BWrapper() };
        for (TheFoo curFoo : myFoos) {
            System.out.println(curFoo.foo());
        }
    }
}

An interface is just a contract that implementing classes have to fulfil. If a class needs two different implementations of a single method from an interface, then you should consider redesigning your project.
Why do you need CClass? You can have a class in Project A implement I and another class in Project B do the same.
EDIT: the compiler will not let you have two different implementations of a method with the same signature in one class. You do have the option to overload it, if that is what you want:
public interface SomeInter
{
    public void doSomething();
}
public class ImplClass implements SomeInter
{
    @Override
    public void doSomething() {
        // TODO Auto-generated method stub
    }

    // Overload: same name, different parameter list
    public void doSomething(String abc)
    {
        // TODO Auto-generated method stub
    }
}
Hope this helps!!

Related

Call base method on generic object of derived class

I'm trying to add a base interface with a method so that all derived classes have to implement the method or use a default method. What's the best way to go about getting this method callable? See the comment in the code block below.
public interface IA {}
public interface IB {
    public Integer doWork();
}
public interface IC extends IB {
}
class B implements IB {
    public Integer doWork() {
        return 2;
    }
}
class C extends B implements IC {
    @Override
    public Integer doWork() {
        return 7;
    }
}
// What do I need to do to cast clazz to an object so I can call the derived class' doWork method?
private Integer newClient(Class<T> clazz) {
    return ((B) clazz).doWork();
}
Ended up finding a solution:
B.class.cast(clazz);
As for how to ensure you call the derived class' method that overrides the base, that is a native behavior of Java.
Example Program:
public class Foo {
    static class A {
        int get() { return 0; }
    }
    static class B extends A {
        @Override
        int get() { return 1; }
    }
    public static void main(final String[] args)
    {
        A a = new A();
        B b1 = new B();
        A b2 = new B();
        printA(a);
        printA(b1);
        printA(b2);
    }
    public static <T extends A> void printA(T bObj) {
        System.out.println(bObj.get());
    }
}
Output:
0
1
1
Note that the output of b2.get() is the same as that of b1.get(), even though b2 is declared as type A and b1 as type B. This is because, even though we only hold an A reference in b2, the underlying object is still a B.
It seems that you only want to know how to instantiate the Class. Assuming it has a default constructor, you can do it this way:
private Integer newClient(Class<B> clazz) {
    try {
        return ((B) clazz.getConstructor().newInstance()).doWork();
    } catch ...
}
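A fuller sketch of the same idea, with the catch block filled in; the exception handling and the widened Class<? extends IB> parameter are assumptions, not part of the original answer.
public class ClientFactory {
    // Instantiates the given class reflectively and calls doWork() through the IB interface
    private Integer newClient(Class<? extends IB> clazz) {
        try {
            return clazz.getConstructor().newInstance().doWork();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not instantiate " + clazz, e);
        }
    }
}
Because the parameter is typed Class<? extends IB>, the compiler already knows the created object implements IB, and dynamic dispatch ensures that passing C.class calls the overriding doWork() in C.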

Is this an anti-pattern, or does it violate some design principles?

I am trying out design patterns and principles and have a question.
First of all, sorry for the bad coding-style habits!
I have an interface like ITest in this case:
public interface ITest
{
    public void method1();
}
and then implement the methods (and fields, if any) in a concrete class B like this:
public class B implements ITest
{
    // This is the method from the interface
    @Override
    public void method1()
    {
        System.out.println("method1");
    }
    // This is another method in class B
    public void method2()
    {
        System.out.println("method2");
    }
}
Now in the application code I use it like this:
public class Main
{
    public static void main(final String args[]) throws Exception
    {
        // One principle says:
        // program to an interface instead of an implementation
        ITest test = new B();
        // method from the interface
        test.method1();
        // this method is not accessible because it is not part of ITest
        test.method2(); // compile-time error
    }
}
You see that method2() from class B is not available, because the reference is typed as ITest.
Now, what if I need this 'important' method?
There are several possibilities. I could add it to the interface, or make class B abstract and extend it in another class, and so on, or declare the reference in the main() method like:
B test = new B();
But this would violate the principle.
So, I modified the interface to:
public interface ITest
{
    // A method to return the class-type B
    public B hook();
    public void method1();
}
And put the implementation in class B:
public class B implements ITest
{
    // this returns the object reference of itself
    @Override
    public B hook()
    {
        return this;
    }
    // This is the method from the interface
    @Override
    public void method1()
    {
        System.out.println("method1");
    }
    // This is the 'important' method in class B
    public void method2()
    {
        System.out.println("method2");
    }
}
Now in my main() method I can call both methods with a little hook/chaining mechanism, without creating a new reference; it does not violate the design principle, and I don't need an extra class for extension or abstraction.
public class Main
{
    public static void main(final String args[])
    {
        // program to an interface instead of an implementation
        ITest test = new B();
        // method from the interface
        test.method1();
        // method2 is not accessible through ITest, so we reach B through the method hook()
        // benefit: we don't need to create extra objects or additional classes, only a reference
        test.hook().method2();
        System.out.println("Are they both equal: " + test.equals(test.hook()));
    }
}
Also, I can encapsulate, inherit and abstract other methods, fields, etc.
This means that I can create more complex and flexible hierarchies.
My question now:
Is this a kind of anti-pattern or a bad design principle, or could we benefit from this?
Thank you for watching. :-)
Is this a kind of anti-pattern or a bad design principle, or could we benefit from this?
Yes, it is a bad pattern.
The problem stems from the fact that you have tightly coupled ITest to B. Say I want to create a new implementation of ITest - let's call it C.
public class C implements ITest
{
    @Override
    public B hook()
    {
        // How do I implement this?
    }
    @Override
    public void method1()
    {
        System.out.println("method1");
    }
}
There's no sane way we can implement this method. The only reasonable thing to do is to return null. Doing so would force any users of our interface to constantly perform defensive null checks.
If they're going to have to check every time before using the result of the method, they might as well just do an instanceof check and cast to B, as sketched below. So what value are you adding? You're just making the interface less coherent and more confusing.
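For comparison, the instanceof route would look something like this (a sketch only, using the ITest and B classes from the question; the calling code is made up):
ITest test = new B();
test.method1();
// the caller has to know about the concrete type anyway
if (test instanceof B) {
    ((B) test).method2();
}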
Adding a method returning B to an interface ITest implemented by B is definitely an awful design choice, because it forces other classes implementing ITest to return a B, for example:
public class C implements ITest {
    @Override
    public B hook()
    {
        return // What do I return here? C is not a B
    }
    ...
}
Your first choice is better:
B test1 = new B();
C test2 = new C();

Can a superclass method implementation depend on a child class field?

I am in the following situation.
I have an interface A which is implemented by classes B, C, and D.
public interface A {
    public String someMethod();
}
class B implements A {
    ObjectType1 model;
    @Override
    public String someMethod() {
        if (model instanceof X) {
            System.out.print(true);
        }
    }
}
class C implements A {
    ObjectType2 model;
    @Override
    public String someMethod() {
        if (model instanceof X) {
            System.out.print(true);
        }
    }
}
class D implements A {
    ObjectType3 model;
    @Override
    public String someMethod() {
        if (model instanceof X) {
            System.out.print(true);
        }
    }
}
As you can see, all the method implementations are the same, so I am duplicating code. My plan was to move the method to A and make A an abstract class, but the problem is that my method depends on the model field. So what would be my options to make this code better?
Btw, classes A, B, and C extend and implement other classes too.
EDIT
Modified the code: see the model field check.
I don't see any problem with the model field if you transform the interface A into an abstract class.
There is no need to reimplement the method in the subclasses if it is the same, unless you want to change its behavior (override it).
public abstract class A {
    // Make it protected so it is accessible by subclasses
    protected Object model;
    // Common behavior that will be inherited by subclasses
    public String someMethod() {
        if (model instanceof X) {
            return "x";
        } else {
            return "not x";
        }
    }
}
public class B extends A {
    // Subclasses may access a superclass's fields if they are protected or public.
    public void someOtherMethod() {
        System.out.println(super.model.toString());
    }
}
public class C extends A {
    // You may wish to override a parent's method behavior
    @Override
    public String someMethod() {
        return "subclass implements it differently";
    }
}
For your new code example, if you really want to do that in a procedural way, you can create an abstract superclass ObjectType for the models, and then it will be accessible to the parent as well.
However, I wouldn't do that. It seems to me that doing so is the very opposite of what object orientation tries to solve.
By using a subclass to define the behavior, you don't need procedural logic. That's precisely the point of using objects, inheritance, and overriding/implementing behavior as needed.
Create a parent class A with said field, and said function. Have the other classes extend A. No need to override them if they function the same.
To deduplicate, you can either make A an abstract class and move the implementation of the method and the field there, or create an abstract class, say E, that implements the interface with that method and field, and then have B, C, and D extend that class E.
For the more general question of depending on a subclass's field, you can create an abstract method getModel which the subclasses decide how to implement, by returning a model field or doing something else; a sketch follows below.
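A minimal sketch combining the intermediate class E with the abstract getModel idea; it assumes the ObjectType1 and X types from the question, and the class names simply mirror the question.
// Intermediate abstract class: implements the interface method once, for all subclasses
abstract class E implements A {
    // Subclasses decide what the model is
    protected abstract Object getModel();

    @Override
    public String someMethod() {
        return (getModel() instanceof X) ? "x" : "not x";
    }
}

class B extends E {
    ObjectType1 model;

    @Override
    protected Object getModel() {
        return model;
    }
}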
If you are using Java 8, you could use a default method in interface A, with a getter method for the model.
public interface A {
    default String someMethod() {
        if (getModel() instanceof X) {
            System.out.print(true);
        }
        return null; // placeholder
    }
    Object getModel();
}
Then implement the getModel() method in each implementing class.
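For example, one implementing class might look like this (a sketch; ObjectType1 is the type from the question):
class B implements A {
    ObjectType1 model;

    @Override
    public Object getModel() {
        return model;
    }
}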
If you're going to do this, model must be of the same (base) type in all derived objects. If it were the same type in all of them, there would be a case for putting the model in a base class. If the models are of different derived types, you need an accessor to get the one you want.
interface B {
    BaseModel getModel();

    default void doSomething() {
        BaseModel m = getModel();
        // do something with m
    }
}
class D implements B {
    DerivedModel model;

    @Override
    public DerivedModel getModel() {
        return model;
    }
}
If I were given a chance to refactor it, I would follow the approach below, leveraging Java 8 default methods:
interface A {
    default String someMethod(X objectType) {
        if (objectType instanceof X) {
            System.out.println(true);
        }
        // return something, for now returning the class name
        return objectType.getClass().toString();
    }
}
class B implements A {
    @Override
    public String someMethod(X objectType) {
        if (objectType instanceof X) {
            System.out.println(true);
        }
        // return "Hello"
        return "Hello";
    }
}
class C implements A {}
class D implements A {}
Usage:
public class Main implements A {
    public static void main(String[] args) {
        B b = new B();
        C c = new C();
        D d = new D();
        Main main = new Main();
        main.call(b);
        main.call(c);
        main.call(d);
    }
    public void call(A clazz) {
        ObjectType1 objectType1 = new ObjectType1();
        String type = clazz.someMethod(objectType1);
        System.out.println(type);
    }
}
interface X {
}
class ObjectType1 implements X {
}

Implement two interfaces in an anonymous class

I have two interfaces:
interface A {
    void foo();
}
interface B {
    void bar();
}
I am able to create anonymous instances of classes implementing either of these interfaces like so:
new A() {
    void foo() {}
}
or:
new B() {
    void bar() {}
}
I want to create an anonymous class that implements both interfaces. Something like this (fictitious syntax):
new A implements B {
    void foo() {}
    void bar() {}
}
This obviously gives a compile error: "B cannot be resolved to a type".
The workaround is quite simple:
class Aggregate implements A, B {
    public void foo() {}
    public void bar() {}
}
I then use Aggregate wherever I would have used the anonymous class.
I was wondering if it is even legal for an anonymous class to implement two interfaces.
"An anonymous inner class can extend one subclass or implement one
interface. Unlike non-anonymous classes (inner or otherwise), an anonymous
inner class cannot do both. In other words, it cannot both extend a class and
implement an interface, nor can it implement more than one interface. " (http://scjp.wikidot.com/nested-classes)
If you are determined to do this, you could declare a third interface, C:
public interface C extends A, B {
}
In this way, you can declare a single anonymous inner class, which is an implementation of C.
A complete example might look like:
public class MyClass {
    public interface A {
        void foo();
    }
    public interface B {
        void bar();
    }
    public interface C extends A, B {
        void baz();
    }

    public void doIt(C c) {
        c.foo();
        c.bar();
        c.baz();
    }

    public static void main(String[] args) {
        MyClass mc = new MyClass();
        mc.doIt(new C() {
            @Override
            public void foo() {
                System.out.println("foo()");
            }
            @Override
            public void bar() {
                System.out.println("bar()");
            }
            @Override
            public void baz() {
                System.out.println("baz()");
            }
        });
    }
}
The output of this example is:
foo()
bar()
baz()
To save some keystrokes (for example, if the interfaces have a lot of methods) you can do this:
abstract class Aggregate implements A, B {
}
Aggregate myObject = new Aggregate() {
    public void foo() {}
    public void bar() {}
};
Notice that the key is to declare Aggregate as abstract.
Note that you can make a named local class that implements the two interfaces:
void method() {
    class Aggregate implements A, B {
        public void foo() {}
        public void bar() {}
    }
    A a = new Aggregate();
    B b = new Aggregate();
}
This saves you from needing a class-level or top-level class declaration.
The result is called a local class. Local classes declared in instance methods are also inner classes, which means that they can reference the containing object instance.

Java - Change reference variable

So I know that the declared type of a reference variable cannot be changed.
I'm in a position where I have two different classes; let's call them A and B. They have the same methods (the methods have the same names), just specialized for each class.
I need a clever way of choosing which class to instantiate.
One way could be some boolean tests checking which option has been selected and then instantiating the corresponding class, but I fear that this might become bulky and ugly, so I'm trying to avoid that. There must be something more clever.
Currently I'm thinking of making a new class (e.g. C) that extends the same class as A and B.
I would then override the methods (as classes A and B also do, btw) and execute the methods depending on the setting (i.e. whether A or B is selected). The methods would return the same as they would in class A or B.
Hope I'm not talking complete gibberish.
One way to do this is to use the Factory Pattern.
Partial quote from Wikipedia:
Like other creational patterns, it deals with the problem of creating objects (products) without specifying the exact class of object that will be created.
E.g.
public abstract class Base {
    public abstract void doSomething();
}
public class A extends Base {
    public void doSomething() {
        System.out.println("A");
    }
}
public class B extends Base {
    public void doSomething() {
        System.out.println("B");
    }
}
public class C extends Base {
    public void doSomething() {
        System.out.println("C");
    }
}
public interface BaseFactory {
    public Base createBase(int condition);
}
public class DefaultBaseFactory implements BaseFactory {
    public Base createBase(int condition) {
        switch (condition) {
            case 0: return new A();
            case 1: return new B();
            case 3: return new C();
            default: return null;
        }
    }
}
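Usage might then look something like this (a short sketch; the condition value is arbitrary):
BaseFactory factory = new DefaultBaseFactory();
Base base = factory.createBase(1); // picks the B implementation here
base.doSomething();                // prints "B"
The caller only ever works with the Base type; which concrete class gets created is decided in one place, inside the factory.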
Your explanation is confusing. But it sounds like you should extend A and B from a common base class (or an interface):
abstract class Base {
    public abstract void someMethod();
}
class A extends Base {
    public void someMethod() { System.out.println("A"); }
}
class B extends Base {
    public void someMethod() { System.out.println("B"); }
}
That means you can do something like this:
Base base;
if (someCondition) {
    base = new A();
} else {
    base = new B();
}
base.someMethod();
