Imagine finding out if two shapes intersect. An intersection of two shapes may be either another shape, or nothing. If there is no intersects(Shape) method in Shape, then, I believe, the proper object-oriented solution would be:
public final class ShapesIntersection implements Maybe<Shape> {

    private final Shape a;
    private final Shape b;

    public ShapesIntersection(Shape a, Shape b) {
        this.a = a;
        this.b = b;
    }

    @Override
    public boolean isPresent() {
        // find out if shapes intersect
    }

    @Override
    public Shape get() {
        // find the common piece of two shapes
    }
}
In JDK, Optional is a final class, not an interface. To properly solve problems like this one, I'm going to write my own Maybe interface that will look like this:
public interface Maybe<T> {

    T get();

    boolean isPresent();

    default Optional<T> asOptional() {
        return isPresent()
                ? Optional.of(get())
                : Optional.empty();
    }
}
What caveats might there be if I stick to this solution and implement Maybe whenever I need optional behavior? Also, this task seems to be quite universal. Am I reinventing the wheel here by introducing my own Maybe interface?
I should add that the whole hassle with a separate class and interface is there to avoid implementing the behavior with static methods.
You are reinventing the wheel here. The reason Optional is final is that there is really no reason to change it, and its internal semantics require consistency across usage.
The real issue here is the logic of your constructor. You should not be using a constructor to determine the logic of the intersection. What you want is a (static?) method that performs the calculation for you, and returns the relevant Optional.
public static Optional<Shape> intersection(Shape a, Shape b) {
    // compute if there is an overlap
    if (!checkOverlaps(a, b)) {
        return Optional.empty();
    }
    Shape intersection = ....
    return Optional.of(intersection);
}
Note that the Optional.empty() and Optional.of(....) are factory methods that create appropriate instances of the Optional. Java 8 streams, functions, and other supporting structures use a number of static factory methods to create instances of these otherwise final classes.
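To round out the factory-method point, here is a minimal sketch, not part of the original answer, of how a caller might consume the returned Optional (the describeIntersection name is an illustrative assumption):

// Hypothetical caller of the intersection(...) method above.
static String describeIntersection(Shape a, Shape b) {
    return intersection(a, b)
            .map(shape -> "The shapes overlap in: " + shape)   // runs only if a value is present
            .orElse("The shapes do not overlap");              // fallback for Optional.empty()
}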
As rolfl said, this is a strange idea. Imagine you want to compute x^y for two ints. Sometimes it's undefined, so would you implement a Maybe<Integer>? And then another implementation for e.g. nCr(x, y)?
This sounds wrong, doesn't it? The problem is that you're binding the origin of the thing (intersection, power, choose) to the thing itself. But an intersection of two Shapes is nothing but a Shape again (or nothing at all, which can be nicely represented via Optional. Or even better with null; just call me old-school).
The OO approach makes no sense here, as there's no new kind of object. 2^2 is exactly the same thing as nCr(4, 1), and both are exactly of the same kind as 4.
Another thing is that you have to call the ShapesIntersection constructor. This is actually a static call, so you may as well write a static helper method instead.
Extending Shape by some IntersectableShape might make sense. There are cases when some operations are common enough for such a thing, see e.g. FluentIterable, but I doubt you'd make that many intersections.
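For completeness, a hedged sketch of what such an IntersectableShape could look like; the method name is an assumption, not something from the original answer:

import java.util.Optional;

// Hypothetical sketch only: a shape that can intersect itself with others.
public interface IntersectableShape extends Shape {
    // the overlapping shape, or Optional.empty() if there is none
    Optional<Shape> intersectionWith(Shape other);
}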
Imagine any Java class which is entirely immutable. I will use the following as an example:
public class Point2D {

    public final int x;
    public final int y;

    public Point2D(final int x, final int y) {
        this.x = x;
        this.y = y;
    }
}
Now consider adding an operator on this class: a method which takes one or more instances of Point2D, and returns a new Point2D.
There are two possibilities for this - a static method, or an instance method:
public static Point2D add(final Point2D first, final Point2D second) {
    return new Point2D(first.x + second.x, first.y + second.y);
}
or
public Point2D add(final Point2D other) {
    return new Point2D(this.x + other.x, this.y + other.y);
}
Is there any reason to pick one over the other? Is there any difference at all between the two? As far as I can tell their behaviour is identical, so any differences must be either in their efficiency, or how easy they are to work with as a programmer.
Using a static method prevents two things:
mocking the class with most mocking frameworks
overwriting the method in a subclass
Depending on context, these things can be okay, but they can also create serious grief in the long run.
Thus, personally, I only use static when there are really good reasons to do so.
Nonetheless, given the specific Point2D class from the question, I would tend to actually use the static methods. This class smells like it should have "value" semantics, so that two points for the same coordinates are equal and have the same hash code. I also don't see how you would meaningfully extend this class.
Imagine for example a Matrix2D class. There it might make a lot of sense to consider subclasses, such as SparseMatrix for example. And then, most likely, you would want to override computation intensive methods!
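As a purely illustrative sketch (class names assumed, bodies reduced to placeholders), that subclassing scenario might look like this:

// Hypothetical example: an instance method can be overridden by a specialized subclass.
class Matrix2D {
    public Matrix2D multiply(Matrix2D other) {
        // general-purpose multiplication (omitted); placeholder return
        return this;
    }
}

class SparseMatrix extends Matrix2D {
    @Override
    public Matrix2D multiply(Matrix2D other) {
        // exploits sparsity; only possible because multiply is an instance method
        return this;
    }
}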
There is no practical difference between the two. Where it matters most is in the area of OO design and readability.
The static version of the operation seems more aligned with the static factory pattern. In addition to using a common design pattern, it is a clear creational design, which seems to meet its intent: to create a new object.
On the other hand, instance methods that create new objects are very practical when it comes to immutable objects. The best example of this is the String methods (String.concat(string), etc.). In my opinion, this is more a question of practicality: you don't want to mutate the state of the object; you need to augment it, but the operation has to result in a new instance.
Is there any reason to pick one over the other?
There may be cases where one fits better than the other (for example, I'd prefer the static method to the instance version as the accumulator in a stream pipeline's reduction), but there is no evident, absolute preference to be claimed here. So...
I would use the static method for factory operations (although I'd call the method something more like create..., newInstance... for clarity)
I would use the instance method for transformation operations that return new instances, to avoid mutating the object (see the sketch below).
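A minimal sketch of those two suggestions combined, with illustrative method names (of as the static factory, add as the instance transformation):

// Illustrative sketch only; names and visibility choices are assumptions.
public final class Point2D {

    public final int x;
    public final int y;

    private Point2D(final int x, final int y) {
        this.x = x;
        this.y = y;
    }

    // static factory: clearly creational
    public static Point2D of(final int x, final int y) {
        return new Point2D(x, y);
    }

    // instance transformation: never mutates, always returns a new instance
    public Point2D add(final Point2D other) {
        return new Point2D(this.x + other.x, this.y + other.y);
    }
}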
First and foremost, if it is immutable, make it unsubclassable to others. Usually final is used, although you can also hide the constructor. Not particularly relevant in this case, but static creation methods allow common values to be reused, specialist implementations to be selected, and the ugly diamond (<>) notation to be elided. (If you call your static creation method of, it is clear to use when qualified with the type name.)
Addition is usually written as infix. If there are subexpressions involved, this will make the client code look much better, though the Java syntax will still force you to have parentheses everywhere. A static method requires qualification or an import static for the client (the latter not really helpful if the method has a name like add, and 'import *' is bad if there are other static methods that don't make sense without qualification).
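To illustrate the readability point, a small hypothetical comparison; a, b and c are assumed Point2D values, and both add variants from the question are assumed to exist:

// Reads left to right, close to infix notation.
static Point2D sumInstance(Point2D a, Point2D b, Point2D c) {
    return a.add(b).add(c);
}

// Needs qualification (or an import static) and nested calls.
static Point2D sumStatic(Point2D a, Point2D b, Point2D c) {
    return Point2D.add(Point2D.add(a, b), c);
}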
Reserve static methods for cases where the object is, in a sense, incidental to the function. For example String's join and format.
As for testing, it should not be necessary to mock a value class or static method. Immutable types should have trusted implementations and therefore not be subtypable by others.
I came up with this question while writing some specific code, but I'll try to keep the question as generic as possible.
Other similar questions refer to C#, which seems to have some language-specific handling for this, and the code below is Java, but again, let's try to keep it generic.
Let's say I have class A which implements interface I.
This is useful to me because I can implement methods that use A only as an I type and abstract away the implementation.
Let's now say I have class B which implements methods with the same names/signatures as the ones in interface I, but it doesn't declare that it implements the interface (and it's never referred to through I).
Should I always explicitly implement I?
Even if I don't use it (though I might in the future) for type abstraction?
A more meaningful, even if probably not realistic, example would be:
interface Printable {
    String print();
}

class A implements Printable {
    //code...
    public String print() { return "A"; }
    //code...
}

class B {
    //code...
    String print() { return "B"; }
    void otherMethod() { /*code*/ }
    //code...
}
class Test {
    public static void main(String[] args) {
        Printable a = new A();
        System.out.println(a.print());

        B b = new B();
        b.otherMethod();
        System.out.println(b.print());
    }
}
Are there any drawbacks to explicitly implementing, or not implementing, the interface Printable?
The only one I can think of is scalability for the second case, in the sense that if one day I want to explicitly use it as a Printable, I'll be able to do so without any more effort.
But is there anything else (patterns, optimization, good programming, style, ..) I should take into consideration?
In some cases the type hierarchy will affect the method call cost by not playing well with JIT method inlining. An example of that can be found in the Guava bug "ImmutableList (and others) offer awful performance in some cases due to size-optimized specializations" (#1268):
Many of the guava Immutable collections have a cute trick where they have specializations for zero (EmptyImmutableList) and one (SingletonImmutableList) element collections. These specializations take the form of subclasses of ImmutableList, to go along with the "Regular" implementation and a few other specializations like ReverseImmutable, SubList, etc.
Unfortunately, the result is that when these subclasses mix at some call site, the call is megamorphic, and performance is awful compared to classes without these specializations (worse by a factor of 20 or more).
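As a rough, hypothetical illustration (not taken from the bug report) of how such a call site ends up megamorphic:

import java.util.List;
import com.google.common.collect.ImmutableList;

class CallSiteExample {
    // If only one concrete ImmutableList subclass ever reaches l.size(), the JIT can
    // inline the call; once empty, singleton and regular lists all flow through this
    // same call site, it becomes megamorphic and is dispatched virtually instead.
    static int totalSize(List<ImmutableList<String>> lists) {
        int total = 0;
        for (ImmutableList<String> l : lists) {
            total += l.size();
        }
        return total;
    }
}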
I don't think there is a simple correct answer for this question.
However, if you do not implement the method, you should do this:
public void unusedBlahMethod() {
    throw new UnsupportedOperationException("operation blah not supported");
}
The advantages of omitting the unused method are:
You save yourself time and money (at least in the short term).
Since you don't need the method, it might not be clear to you how best to implement it anyway.
The disadvantages of omitting the method are:
If you need the method in the future, it will take longer to add it as you may have to refamiliarize yourself with the code, check-out, re-test, etc.
Throwing an UnsupportedOperationException may cause bugs in the future (though good test coverage should prevent that).
If you're writing disposable code, you don't need to write interfaces, but one day you might notice that you should have taken your time and written an interface.
The main advantage and purpose of interfaces is the flexibility of using different implementations: I can pass anything that offers the same functionality into a method, I can create a fake of it for test purposes, and I can create a decorator that behaves like the original object but logs the calls.
Example:
public interface A {
    void someMethod();
}

public class AImplementation implements A {
    @Override
    public void someMethod() {
        // implementation
    }
}

public class ADecorator implements A {

    private final A a;

    public ADecorator(A a) {
        this.a = a;
    }

    @Override
    public void someMethod() {
        System.out.println("Before method call");
        a.someMethod();
        System.out.println("After method call");
    }
}
Nice side effect: ADecorator works with every implementation of A.
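For instance, a minimal usage sketch (the class name Demo is an assumption):

public class Demo {
    public static void main(String[] args) {
        // The decorator is used wherever an A is expected.
        A a = new ADecorator(new AImplementation());
        a.someMethod();   // prints the "Before"/"After" lines around the real call
    }
}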
The cost for this flexibility isn't that high and if your code will live a little bit longer, you should take it.
I don't really have a lot of practical experience with either Java or OOP in general, so now I'm stuck with a problem that's probably really easy to work around, but I'm not sure at all what an elegant, OOP-oriented solution might look like.
So here's a simplified rundown:
Say I wanted to write some sort of calculating application which first of all contains several methods like:
static double sine(double x){...}
static double cosine(double x){...}
and so forth.
Some other static method would then perform some sort of calculation that involves the derivative of one of these functions. If we pretend there was no way to approximate that derivative, the easiest solution that came to mind for me was to wrap each of the methods above in a class and to let those classes implement an interface 'Differentiable'
that defines the method 'evaluateDerivative', e.g.:
interface Differentiable {
    double evaluateDerivative(double x);
}

class sine implements Differentiable {

    static double evaluate(double x) {
        return...;
    }

    public double evaluateDerivative(double x) {
        return cosine.evaluate(x);
    }
}
so if I needed the derivative of any method for another calculation I could simply do something like this:
static double returnDerivativePlusOne(Differentiable d, double x) {
    return d.evaluateDerivative(x) + 1;
}
Okay, now the problem is this: when I actually want to call the method above, I need an instance of the sine class, e.g.:
returnDerivativePlusOne(new sine(), 1);
which doesn't really make sense because the sine class only contains static methods (and maybe some final fields) so creating an object seems strange to me.
So, is there a different approach that would produce the same outcome in a more elegant way ? Any help would be appreciated.
Why not make the evaluateDerivative function static as well? Then there is no need for an interface.
To make use of polymorphism, we can do the following. Suppose we have two classes, Sine and Cosine, and an interface Differentiable.
interface Differentiable {
    double evaluateDerivative(double x);
}

class Sine implements Differentiable {
    static double evaluate(double x) { return...; }

    public double evaluateDerivative(double x) { return somevalue; }
}

class Cosine implements Differentiable {
    static double evaluate(double x) { return...; }

    public double evaluateDerivative(double x) { return somevalue; }
}
In that case, to make use of polymorphism, what you can do is:
double x = 1.0;
Differentiable d = new Sine();
double derivative = d.evaluateDerivative(x);

d = new Cosine();
derivative = d.evaluateDerivative(x);
Hope it helps.
Why do you want those methods to be static? If there is no particular reason, maybe your application should just create an instance of a calculating class at startup, and then use it?
In my opinion, if you really insist on creating objects that provide typical functions and you want to have derivatives of those functions as well, you may hand-code those derived methods (in another or even the same class). I actually can't see where you'd use polymorphism in such a case, as this is not a typical OO app (because your objects are just bundles of calculating methods).
What's more, if you really wanted to create derived classes to calculate derivatives, your evaluateDerivative method should return an object of the derived class, and not a number.
An elegant solution in this case would be, in my opinion, to create a kind of library containing the methods you want: just an easy-to-use bundle of methods, as your classes do not seem to provide anything more than calculating methods (for typical maths functions, for which there are already written functions as well). I'd still say that such a bundle (which may even be a static class) fulfills the Single Responsibility Principle (as it only provides some maths functions), but even that does not appear to be so important here. The rules for creating elegant OO solutions (like the SOLID rules, for example) are there to help you write code that is easier to manage and handy to build on top of. I can't see how you would build a bigger class hierarchy on top of your calculating class, so the simplest solution may be the best.
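A hedged sketch of such a bundle; the class name and the delegation to java.lang.Math are purely illustrative:

// Hypothetical utility class: just a bundle of calculating methods.
public final class MathFunctions {

    private MathFunctions() {
        // no instances needed
    }

    public static double sine(double x)           { return Math.sin(x); }
    public static double cosine(double x)         { return Math.cos(x); }
    public static double sineDerivative(double x) { return Math.cos(x); }
}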
I've been wondering how to best implement equals() for a family of classes that all implement the same interface (and the client is supposed to work only with said interface and never know about the implementing classes).
I haven't cooked up my own concrete example, but there are two examples in the JDK - java.lang.Number and java.lang.CharSequence that illustrate the decision:
boolean b1 = new Byte((byte) 0).equals(new Integer(0));
or with CharSequence
boolean b2 = "".equals(new StringBuilder());
Would you ideally want those to evaluate to true or not? Both types do implement the same datatype interface, and as a client working with Number (resp. CharSequence) instances I would have an easier life if equals compared the interface types instead of the implementing types.
Now this is not an ideal example, as the JDK exposes the implementing types to the public, but suppose we did not have to uphold compatibility with what is already there. From a designer's point of view: should equals check against the interface, or is it better the way it is, checking against the implementation?
Note: I understand that checking for equality against an interface can be very hard to actually implement properly in practice, and it's made even trickier since equal interfaces also need to return the same hashCode().
But those are only obstacles in implementation. Take CharSequence, for example: although the interface is pretty small, everything required for equality checks is present without revealing the internal structure of the implementation (so it is in principle possible to implement properly, even without knowing about future implementations in advance).
But I am more interested in the design aspect, not on how to actually implement it. I wouldn't decide solely based on how hard something is to implement.
Define an abstract class that implements your interface and defines final equals()/hashCode() methods and have your customers extend that instead:
public interface Somethingable {
    public void something();
}

public abstract class AbstractSomethingable implements Somethingable {

    public final boolean equals(Object obj) {
        // your consistent implementation
    }

    public final int hashCode() {
        // your consistent implementation
    }
}
Notice that by making your class abstract, you can implement the interface without defining the interface's methods.
Your customers still have to implement the something() method, but all their instances will use your code for equals()/hashCode() (because you've made those methods final).
The difference to your customers is:
Using the extends keyword instead of the implements keyword (minor)
Not being able to extend some other class of their choosing to use your API (could be minor, could be major - if it's acceptable then go for it)
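For example, a client class might then look like this (ConcreteSomething is a hypothetical name):

// The client only supplies something(); equals()/hashCode() come from the abstract base.
public class ConcreteSomething extends AbstractSomethingable {
    @Override
    public void something() {
        // client-specific behaviour
    }
}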
I would normally assume that "similar" objects would not be equal - for example, I wouldn't expect that Integer(1) would pass equals(Long(1)). I can imagine situations where that would be reasonable, but as the JDK needs to be a general-purpose API, you wouldn't be able to assume that that would always be the correct thing to do.
If you've got some sort of custom objects where it's reasonable, I think it's perfectly fine to implement an expanded definition of equals if you
are sure that you don't have some edge cases where you really do need the more specific equality (i.e. that would require the identical classes)
document it very clearly
make sure that hashCode behaves consistently with your new equals.
For what it's worth, I'd probably do an implementation-specific equals implementation (side note - don't forget to implement hashCode...). Interface-level equals() puts a pretty heavy burden on implementers of the interface - who might or might not be aware of the special requirement.
Often, implementation-level equals works fine, as your client only deals with one implementation (i.e. MyNumberProcessor can work on any Number, but in practice one instance of it would only have to handle Long, and maybe another only Double). Generics are a great way of making sure that happens.
In the rare case where it does matter, I would probably design the client to allow injection of a Comparator or - when not available - encapsulate my Numbers into a VarTypeNumber.
I'd try to add another equals method to my interface. How about this:
assertFalse(new Integer(0).equals(new Byte((byte) 0)));     // passes
assertTrue(new Integer(0).valueEquals(new Byte((byte) 0))); // hypothetical: would pass
This does not produce unexpected behaviour (different types equal) but keeps the possibility open to check for equal values.
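A possible shape of such a method, sketched on a made-up MyNumber interface (valueEquals is hypothetical and not part of the JDK):

interface MyNumber {
    long longValue();

    // Value comparison across implementations; equals() stays type-based.
    default boolean valueEquals(MyNumber other) {
        return other != null && this.longValue() == other.longValue();
    }
}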
There's a somewhat related topic in Effective Java where equals with instanceof and getClass is discussed. I can't remember the item number, though.
I would consider any implementation of equals that returns true for two objects that do not have the same concrete type to be extremely 'surprising' behavior. If you're operating inside a box where you know at compile time every possible implementor of the interface, you can fabricate equals that make sense with only interface methods, but that's not a reality for API/framework code.
You can't even be sure that nobody's going to write an implementation of the interface that mutates its internal state when you call the methods that you used to implement equals! Talk about confusing, an equals check that returns true and invalidates itself in the process?
--
This is what I understood to be the question as far as 'checking equality against the interface':
public interface Car {
    int speedKMH();
    String directionCardinal();
}

public class BoringCorrolla implements Car {

    private int speed;
    private String directionCardinal;

    public int speedKMH() { return speed; }
    public String directionCardinal() { return directionCardinal; }

    @Override
    public boolean equals(Object obj) {
        if (obj instanceof Car) {
            Car other = (Car) obj;
            return other.speedKMH() == speedKMH()
                    && other.directionCardinal().equals(directionCardinal());
        }
        return false;
    }
}

public class CRAZYTAXI implements Car, RandomCar {
    public int speedKMH() { return randomSpeed(); }
    public String directionCardinal() { return randomDirection(); }
}
It is possible to define equality among different classes.
In your case, the exact equality algorithm must be specified by the interface, so any class implementing the interface must abide by it. Better yet, since the algorithm depends only on information exposed by the interface, just implement it already, so subclasses can simply borrow it.
interface Foo { ... }

class Util {
    static int hashCode(Foo foo) { ... }

    static boolean equal(Foo a, Foo b) { ... }

    static boolean equal(Foo a, Object b) {
        return (b instanceof Foo) && equal(a, (Foo) b);
    }
}

class FooX implements Foo {
    public int hashCode() {
        return Util.hashCode(this);
    }

    public boolean equals(Object that) {
        return Util.equal(this, that);
    }
}
We've got a set of classes which derive from a common set of interfaces such that
IFoo -> BasicFoo, ReverseFoo, ForwardFoo
IBar -> UpBar, DownBar, SidewaysBar
IYelp -> Yip, Yap, Yup
wherein the constructor for the Foos looks like Foo(IBar, IYelp). These items are used throughout the project.
There exists another class which has a method whose signature is public double CalcSomething(IFoo, IAnotherClass) that is applied at some point to each and every Foo. We've had a request come down from above that one particular object composition, let's say a BasicFoo(UpBar, Yip), use a different algorithm from the one found in CalcSomething.
My first instinct was to say let's change the IFoo interface so we can move the logic down to the Foo class level, change the constructor to be Foo(IBar, IYelp, IStrategy), and then have the Foo objects encapsulate this logic. Unfortunately, we've also been told the design of the architecture stipulates that there be no dependencies between IFoo, its implementations, and IAnotherClass. They're adamant about this.
Ok, sure, then I thought I might use a visitor pattern but... how? The whole point of making the composition was so that no other class could see the implementation details. Reflection to look inside the objects, totally breaking encapsulation? Oh hell no.
So I've come here because I'm at a loss. Does anyone have any suggestions how we could treat a special case of one of the compositions without modifying the composition or breaking encapsulation? There has got to be a simple solution I'm over-looking.
A CalculationFactory that chooses an appropriate algorithm based on the type of IFoo you provide would solve the problem (at the cost of a conditional):
interface ICalcSomethingStrategy {
    public double CalcSomething(IFoo foo, IAnotherClass other);
}

class CalcSomethingStrategyFactory {
    ICalcSomethingStrategy CreateCalcSomethingStrategy(IFoo foo) {
        // I'm not sure whether this is the idiomatic Java way to check types D:
        if (foo.Bar instanceof UpBar && foo.Yelp instanceof Yip) {
            return new UnusualCalcSomethingStrategy();
        } else {
            return new StandardCalcSomethingStrategy();
        }
    }
}
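A call site might then look roughly like this (factory, foo and another are assumed to be in scope, with the types from the question):

// Usage sketch for the factory above.
double calcWithStrategy(CalcSomethingStrategyFactory factory, IFoo foo, IAnotherClass another) {
    ICalcSomethingStrategy strategy = factory.CreateCalcSomethingStrategy(foo);
    return strategy.CalcSomething(foo, another);
}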
In the spirit of KISS I would add a method isSpecial() to IFoo, and use that to decide which algorithm to use in CalcSomething().
This assumes that this is the only special case.
There's no way for calcSomething to avoid having the knowledge needed to do the "special" behavior, but other than that, you can maintain most of your encapsulation this way.
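A minimal sketch of what that could look like inside CalcSomething (specialCalculation and standardCalculation are hypothetical helpers):

public double CalcSomething(IFoo foo, IAnotherClass other) {
    // Branch once on what the IFoo reports about itself.
    if (foo.isSpecial()) {
        return specialCalculation(foo, other);
    }
    return standardCalculation(foo, other);
}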
Create a marker interface IQualifyForSpecialTreatment which extends IFoo. Extend BasicFoo to SpecialBasicFoo, and have it implement IQualifyForSpecialTreatment.
interface IQualifyForSpecialTreatment extends IFoo {
}
class SpecialBasicFoo extends BasicFoo implements IQualifyForSpecialTreatment {
...
}
You can then add another flavor of calcSomething:
double calcSomething(IQualifyForSpecialTreatment foo, IAnotherClass whatever) {
    // ... perform "special" variant of calculation
}

double calcSomething(IFoo foo, IAnotherClass whatever) {
    // ... perform "normal" variant of calculation
}