In Java I have an abstract class named Operation and three subclasses of it, called OperationActivation, OperationPayment and OperationSendEmail.
ADDED FROM COMMENT: The Operation* objects are EJB entity beans, so I can't have business logic inside them.
Now I want to create a processor class like this:
public class ProcessOperationService {
    public void processOperation(Operation operation) {
        out.println("process Operation");
        process(operation);
    }
    public void process(OperationActivation operationActivation) {
        out.println("process Activation");
    }
    public void process(OperationPayment operationPayment) {
        out.println("process Payment");
    }
    public void process(OperationSendEmail operationSendEmail) {
        out.println("process OperationSendEmail");
    }
}
Processing each operation requires different logic, so I want to have three different methods, one for each operation.
Of course this code doesn't compile. Am I missing something, or can it not be done that way?
You are mixing up overloading and polymorphic method handling. When you overload methods based on the parameter type, that is static polymorphism: the overload is resolved at compile time, so those methods must be called from code that knows the concrete type at compile time. You could possibly do the following, but it wouldn't be clean object-oriented code:
public class ProcessOperationService {
    public void processOperation(Operation operation) {
        out.println("process Operation");
        if (operation instanceof OperationActivation)
            process((OperationActivation) operation);
        else if (operation instanceof OperationPayment)
            process((OperationPayment) operation);
        ...
    }
    public void process(OperationActivation operationActivation) {
        out.println("process Activation");
    }
    ...
}
It would be much better to let the automatic run-time polymorphism work, by doing as Brian Agnew suggested, and making process be a method of each Operation subtype itself.
Shouldn't your Operation* objects be doing the work themselves? So you can write (say):
for (Operation op : ops) {
    op.process();
}
You can encapsulate the logic for each particular operation in its own class, and that way everything related to OperationPayment remains in the OperationPayment class. You don't need a Processor class (and so you don't need to modify a Processor class every time you add an Operation).
There are more complex patterns to enable objects to mediate with respect to what they need to execute, but I'm not sure you need something that complex at this stage.
Assumption: Operation* objects are subclasses of Operation
Unless the processOperation(Operation) method is performing some common functionality, you could just remove it and expose the process(Operation*) methods directly.
The Command Pattern (JavaWorld Explanation) might be useful, but it's tricky to tell exactly what properties you want from your question.
The problem with the code is that overload resolution happens at compile time, based on the static type of the argument. Inside processOperation the argument's static type is Operation, so the call process(operation) can only bind to a process(Operation) overload, which doesn't exist here; the subtype-specific process(Operation*) methods are never considered for that call.
If you really want/need the code above, I would suggest implementing the process(Operation*) methods and renaming the process(Operation) method to processCommon(Operation). Then, the first thing each process(Operation*) method does is call processCommon.
Alternatively, you can code exactly as Avi said, using instanceof comparisons.
Neither is ideal, but it will accomplish what you want.
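A minimal sketch of that processCommon refactoring, reusing the println calls from the question:

public class ProcessOperationService {
    // Shared logic formerly in process(Operation)
    private void processCommon(Operation operation) {
        System.out.println("process Operation");
    }

    public void process(OperationActivation operationActivation) {
        processCommon(operationActivation); // run the common part first
        System.out.println("process Activation");
    }

    public void process(OperationPayment operationPayment) {
        processCommon(operationPayment);
        System.out.println("process Payment");
    }

    public void process(OperationSendEmail operationSendEmail) {
        processCommon(operationSendEmail);
        System.out.println("process OperationSendEmail");
    }
}

Note that callers must still know the concrete subtype at compile time for the right overload to be chosen.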
So you have an abstract class called 'Operation' and it has 3 classes extending it. Not sure if this is what you are after, but I'd imagine it would be designed something like this:
Operation.java
public abstract class Operation {
    public abstract void process();
}
OperationActivation.java
public class OperationActivation extends Operation {
    public void process() {
        // Implement OperationActivation specific logic here
    }
}
OperationPayment.java
public class OperationPayment extends Operation {
    public void process() {
        // Implement OperationPayment specific logic here
    }
}
OperationSendEmail.java
public class OperationSendEmail extends Operation {
    public void process() {
        // Implement OperationSendEmail specific logic here
    }
}
ProcessOperationService.java
public class ProcessOperationService {
    public void processOperation(Operation operation) {
        out.println("process Operation");
        operation.process();
    }
}
Won't the Visitor pattern be of use here?
The class Operation can declare an "accept" method that takes a Visitor object, and the subclasses can provide the implementation:
public interface IOperationVisitor {
    public void visit(OperationActivation visited);
    public void visit(OperationPayment visited);
    public void visit(OperationSendEmail visited);
}

abstract class Operation {
    public abstract void accept(IOperationVisitor visitor);
}

class OperationActivation extends Operation {
    public void accept(IOperationVisitor visitor) {
        visitor.visit(this);
    }
}
Similarly, define the "accept" method for the classes OperationPayment and OperationSendEmail.
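For example, OperationPayment's version would be (OperationSendEmail is analogous):

class OperationPayment extends Operation {
    public void accept(IOperationVisitor visitor) {
        // 'this' has static type OperationPayment, so visit(OperationPayment) is chosen
        visitor.visit(this);
    }
}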
Now your class can implement the visitor:
public class ProcessOperationService implements IOperationVisitor {
    public void processOperation(Operation operation) {
        operation.accept(this);
    }
    public void visit(OperationActivation visited) {
        // OperationActivation specific implementation
    }
    public void visit(OperationPayment visited) {
        // OperationPayment specific implementation
    }
    public void visit(OperationSendEmail visited) {
        // OperationSendEmail specific implementation
    }
}
Related
I was wondering if it's frowned upon that, when designing a framework to be used by others, a class has some function as default behavior and expects its customers to override it if necessary. An example would be something like the following:
public class RecordProcessor<T extends Record> {
    // ...
    public void process() {
        // process record logic
    }
}
Consumers of this library create their concrete classes to process their own records of type T.
Now I want to add a function called preProcess() to offer consumers the ability to preprocess their records. It would then look something like this:
public class RecordProcessor<T extends Record> {
    // ...
    public void process() {
        preProcess();
        // process record logic
    }
    public void preProcess() {
        // By default no preprocessing
    }
}
I know I can make preProcess an abstract method, but I don't want to, for a couple of reasons:
- Not all customers need to preprocess their records.
- We have a pipeline structure that autodeploys pushed code, so making RecordProcessor an abstract class would immediately break our customers' applications.
Is making preProcess do nothing in the parent class and letting child classes override it considered bad practice? If not, what is the best way to let customers know that they now have the power to preprocess the records? Through Javadocs?
One approach is to mark the public method as final (but this might also break existing apps) and allow protected hook methods to be overridden. For example:
public class RecordProcessor<T extends Record> {
    // ...
    public final void process() {
        doPreProcess();
        doProcess();
        doPostProcess();
    }

    protected void doPreProcess() {
        // By default no preprocessing
        return;
    }

    protected void doProcess() {
        // some default implementation
    }

    protected void doPostProcess() {
        // By default no postprocessing
        return;
    }
}
Having some documentation should make it natural for other developers to recognize the optional extension methods.
I don't see anything wrong with having a hook method which does nothing. However, it should contain a return statement so static analysis tools won't complain.
UPDATE: in order to avoid breaking existing apps, if possible mark the existing method as deprecated and introduce a new method. For example:
public class RecordProcessor<T extends Record> {
    // ...
    public final void execute() {
        doPreProcess();
        doProcess();
        doPostProcess();
    }

    /** @deprecated use the execute() method instead. */
    @Deprecated
    public void process() {
        doProcess();
    }

    protected void doPreProcess() {
        // By default no preprocessing
        return;
    }

    protected void doProcess() {
        // some default implementation
    }

    protected void doPostProcess() {
        // By default no postprocessing
        return;
    }
}
Prefer composition over inheritance. If you want your clients to add custom pre-processing, then do it by delegating to a separate object.
public interface RecordPreProcessor<T extends Record> {
    public void process(T record);
}

public class RecordProcessor<T extends Record> {
    private RecordPreProcessor<T> recordPreProcessor = null;

    public void setRecordPreProcessor(RecordPreProcessor<T> recordPreProcessor) {
        this.recordPreProcessor = recordPreProcessor;
    }

    public void process() {
        // 'record' stands for whatever record this processor is currently handling
        if (recordPreProcessor != null) recordPreProcessor.process(record);
        // process record logic
    }
}
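Since RecordPreProcessor has a single method, a client could then plug in custom pre-processing with a lambda, no subclassing needed. A hypothetical usage sketch (MyRecord is an assumed Record subclass):

RecordProcessor<MyRecord> processor = new RecordProcessor<>();
processor.setRecordPreProcessor(record -> {
    // custom pre-processing for MyRecord goes here
});
processor.process();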
No, overriding is not discouraged in Java.
- The language allows overriding.
- The language makes all methods overridable by default.
- The Java class library includes examples of the same pattern.
Your approach is one reasonable way to allow subclasses to extend the behavior of their parent class. There are alternatives, such as passing a behavior as an object. However, there is no one true way.
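For example, the behavior-as-an-object alternative might look like this sketch; using java.util.function.Consumer as the hook type and taking the record as a parameter are my assumptions, not part of the original API:

import java.util.function.Consumer;

public class RecordProcessor<T extends Record> {
    private final Consumer<T> preProcessor;

    // The pre-processing behavior is supplied at construction time;
    // pass r -> {} for "no preprocessing"
    public RecordProcessor(Consumer<T> preProcessor) {
        this.preProcessor = preProcessor;
    }

    public void process(T record) {
        preProcessor.accept(record);
        // process record logic
    }
}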
One way you could improve your code is to mark preProcess() as protected. It's an implementation detail of the class. You don't want just anyone holding a RecordProcessor to decide they can call preProcess() by itself, right?
public class RecordProcessor<T extends Record> {
    ...
    protected void preProcess() {
    ^^^^^^^^^
        // By default no preprocessing
    }
}
Another way to improve this is to consider whether you intend anyone to create an instance of the superclass RecordProcessor. If you don't, make the class abstract, to prevent that. The class name can express that, if you like, or your coding guidelines call for it.
public abstract class AbstractRecordProcessor<T extends Record> {
       ^^^^^^^^       ^^^^^^^^
    ...
    protected void preProcess() {
        // By default no preprocessing
    }
}
One common way to document such methods is with the phrase "The default implementation does nothing. Subclasses may override this method ...". For example, below is the documentation for java.util.concurrent.FutureTask.done(). You can find more examples by searching for the first sentence of that phrase online.
public class FutureTask<V> implements RunnableFuture<V> {
    ...
    /**
     * Protected method invoked when this task transitions to state
     * {@code isDone} (whether normally or via cancellation). The
     * default implementation does nothing. Subclasses may override
     * this method to invoke completion callbacks or perform
     * bookkeeping. Note that you can query status inside the
     * implementation of this method to determine whether this task
     * has been cancelled.
     */
    protected void done() { }
}
What I ended up doing, which I also thought was pretty good, inspired by @tsolakp, was simply creating a child class of RecordProcessor, called something like PreprocessRecordProcessor. This has no way of interfering with existing code because nothing existing was touched. The class would look something like this:
public abstract class PreprocessRecordProcessor<T extends Record> extends RecordProcessor<T> {
    // ...
    public void process() {
        preProcess();
        super.process();
    }

    protected abstract void preProcess();
}
And if customers of this library would like to add their own logic, they can simply extend this class, and they'd be forced to provide pre-processing logic (as opposed to having the option to provide it, which may lead to unexpected results if they forget).
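A consumer's subclass would then look something like this (MyRecord being a hypothetical record type):

public class MyRecordProcessor extends PreprocessRecordProcessor<MyRecord> {
    @Override
    protected void preProcess() {
        // consumer-specific pre-processing; the compiler forces this to exist
    }
}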
I wanted to know the advantages and disadvantages of each of the following ways to differentiate between sub-classes of a common parent class and handle them differently. I know this is pretty basic, but I couldn't find a full comparison of these approaches anywhere.
For example:
- I have a Payment super abstract class and two extending classes, OneTimePayment and Subscription
- I have a method switchPaymentState that should handle each of these types differently
Option 1: Using instanceof
public void switchPaymentState(Payment payment) {
    if (payment instanceof OneTimePayment) {
        // do something
    } else if (payment instanceof Subscription) {
        // do something else
    }
}
Option 2: Using enum type argument (or other...)
public enum PaymentType {
    ONE_TIME_PAYMENT,
    SUBSCRIPTION;
}

public abstract class Payment {
    final PaymentType type;

    protected Payment(PaymentType type) {
        this.type = type;
    }
}

public class OneTimePayment extends Payment {
    public OneTimePayment() {
        super(PaymentType.ONE_TIME_PAYMENT);
    }
}

public class Subscription extends Payment {
    public Subscription() {
        super(PaymentType.SUBSCRIPTION);
    }
}
and then:
public void switchPaymentState(Payment payment) {
    switch (payment.type) {
        case ONE_TIME_PAYMENT:
            // do something
            break;
        case SUBSCRIPTION:
            // do something
            break;
    }
}
Option 3: Using overload methods
public void switchPaymentState(OneTimePayment payment) {
    // do something
}

public void switchPaymentState(Subscription payment) {
    // do something
}
So, which is the best way to go (or is there a completely different way?), and why?
EDIT:
The operations I need to do based on the class type are NOT operations on the class itself; I need to take some data from the payment and send it via other services, so solutions like implementing this functionality inside the classes and calling it regardless of the type will unfortunately not help in this case. Thanks!
The most modular way would be to use overriding.
You'll have a single switchPaymentState method which accepts the base type - Payment - and calls a method in the Payment class to do the handling. That method can be overridden in each sub-class of Payment.
public void switchPaymentState(Payment payment)
{
    payment.handlePayment();
}
Your switchPaymentState method doesn't have to know which sub-classes of Payment exist, and it doesn't have to change if you add new sub-classes tomorrow.
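A sketch of the overriding side, using the handlePayment name from above:

public abstract class Payment {
    public abstract void handlePayment();
}

public class OneTimePayment extends Payment {
    @Override
    public void handlePayment() {
        // one-time-payment-specific handling
    }
}

public class Subscription extends Payment {
    @Override
    public void handlePayment() {
        // subscription-specific handling
    }
}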
Your option 3 will in many cases not work, because overloading is resolved at compile time rather than at run time. If the static type of your reference is Payment, overloading cannot select the subclass-specific method.
In terms of object-oriented design, using overridden methods is the "cleanest" method. However, it has the disadvantage that similar functionality is spread over multiple classes, whereas in the switch and instanceof solutions everything is together.
An alternative that offers the best of both worlds is the so-called Visitor pattern. You create an interface PaymentVisitor with a method for each class you want to handle, as follows:
interface PaymentVisitor {
    void visitOneTimePayment(OneTimePayment payment);
    void visitSubscription(Subscription payment);
}

Then in your abstract superclass you add a method callVisitor:

abstract class Payment {
    ...
    abstract void callVisitor(PaymentVisitor visitor);
}

Which you implement in all your subclasses as follows:

class OneTimePayment extends Payment {
    ...
    @Override void callVisitor(PaymentVisitor visitor) {
        visitor.visitOneTimePayment(this);
    }
}

class Subscription extends Payment {
    ...
    @Override void callVisitor(PaymentVisitor visitor) {
        visitor.visitSubscription(this);
    }
}
Now, in all cases where you would otherwise write something like (in pseudo-Java):
switch (type of x) {
    case OneTimePayment:
        // Code
        break;
    case Subscription:
        // Code
        break;
}
You can now write, cleanly and type-safely:

x.callVisitor(new PaymentVisitor() {
    @Override public void visitOneTimePayment(OneTimePayment payment) {
        // Code
    }
    @Override public void visitSubscription(Subscription payment) {
        // Code
    }
});
Note also that the visitor is implemented in an anonymous inner class, so you still have access to all (effectively) final variables defined in the enclosing method body.
I think the switch is a bit of an anti-pattern regardless of how you do it. The more standard OO way would be to implement the same method or methods in both of the subclasses, and let each class manage things as appropriate. In other words
abstract class Payment {
    abstract void processPayment(BigDecimal amount);
    abstract void processRefund...
}

class OneTimePayment extends Payment {
    void processPayment(BigDecimal amount) { ... }
    void processRefund...
}
etc.
Also, unless you're reusing a considerable amount of code in the super class, consider an interface-based implementation instead of subclassing.
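A sketch of the interface-based variant, assuming the same method name:

import java.math.BigDecimal;

interface Payment {
    void processPayment(BigDecimal amount);
}

class OneTimePayment implements Payment {
    @Override
    public void processPayment(BigDecimal amount) {
        // one-time payment logic
    }
}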
I have an interface called Section and MapSection which extends Section. I have a list of Sections, and if one is a MapSection I need to do some additional processing. I can think of two ways to handle this. I can add a boolean isAMapSection() to the Section interface, but that leads to a lot of isA.. methods if I add more types. The other way I could think of is an instanceof check, but my OOP senses tell me this is not great either.
curSection instanceof MapSection
Which one of these is the right way? Or is there another way?
As mentioned in Oliver Charlesworth's comment above, you could use the Visitor design pattern to let your code take different actions depending on the type involved, without a bunch of instanceof checks or class comparisons.
For example, say you have two similar interfaces, Section and MapSection, where for grins we'll give MapSection one additional method:
interface Section {
    void someMethod();
    void accept(SectionVisitor visitor);
}

interface MapSection extends Section {
    void additionalProcessingMethod();
}
We'll also give Section the accept(...) method to allow action by a Visitor of type SectionVisitor whose interface looks like:
interface SectionVisitor {
    void visit(Section section);
    void visit(MapSection mapSection);
}
The visit method will hold code that knows which methods to call depending on the type passed into it.
A very simple concrete example could look like:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class VisitorTest {
    public static void main(String[] args) {
        Random random = new Random();
        List<Section> sectionList = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            Section section = random.nextBoolean() ? new ConcreteSection() : new ConcreteMapSection();
            sectionList.add(section);
        }
        SectionVisitor visitor = new ConcreteSectionVisitor();
        for (Section section : sectionList) {
            section.accept(visitor);
        }
    }
}
interface Section {
    void someMethod();
    void accept(SectionVisitor visitor);
}

interface MapSection extends Section {
    void additionalProcessingMethod();
}

interface SectionVisitor {
    void visit(Section section);
    void visit(MapSection mapSection);
}

class ConcreteSection implements Section {
    @Override
    public void someMethod() {
        System.out.println("someMethod in ConcreteSection");
    }

    @Override
    public void accept(SectionVisitor visitor) {
        visitor.visit(this);
    }
}

class ConcreteMapSection implements MapSection {
    @Override
    public void someMethod() {
        System.out.println("someMethod in ConcreteMapSection");
    }

    @Override
    public void additionalProcessingMethod() {
        System.out.println("additionalProcessingMethod in ConcreteMapSection");
    }

    @Override
    public void accept(SectionVisitor visitor) {
        visitor.visit(this);
    }
}

class ConcreteSectionVisitor implements SectionVisitor {
    @Override
    public void visit(Section section) {
        section.someMethod();
    }

    @Override
    public void visit(MapSection mapSection) {
        mapSection.someMethod();
        mapSection.additionalProcessingMethod();
    }
}
Best way might be to add a method additionalProcessing() to Section. Implement it to do your additional processing in MapSection, and leave it blank in your other implementations.
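A minimal sketch of that idea; using a Java 8 default method for the blank implementation is my assumption, not part of the original interfaces:

interface Section {
    // no-op by default, so existing implementations need no change
    default void additionalProcessing() { }
}

interface MapSection extends Section {
    // re-declared abstract: MapSection implementations must provide it
    @Override
    void additionalProcessing();
}

Calling code can then stay free of instanceof checks:

for (Section section : sections) { // 'sections' is an assumed List<Section>
    section.additionalProcessing();
}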
Sometimes it's fine to have an isXXX method (and the corresponding asXXX method is nice too), but it really depends on how open-ended your object hierarchy is.
For example in StAX the XMLEvent interface will have descendants that represent the different types of events that can come from an XML document. But the list of those types is closed (no-one's going to radically change the XML format any time soon) and very short (there are about 10 different types of events in the StAX API), so it's fine. These interfaces also define the primary nature of their implementations, you wouldn't realistically just tag an object with an XMLEvent interface like you do with Serializable or Iterable.
If your interface is more "behavioural" (for want of a better word), more optional (like Comparable) or too open-ended (like LayoutManager), things like the visitor or the strategy pattern may be more appropriate.
Judging just by the names Section and MapSection, your model seems to belong to the first category but really only you can make that decision. What I definitely wouldn't do is leave it to the client of the code to fool around with instanceof calls. One way or another the solution should be part of Section.
Given the following Class and Service layer signatures:
public class PersonActionRequest {
    PersonVO person;
    // ... other fields
}
public class MyServiceLayerClass {
    public void requestAction(PersonActionRequest request)
    {
        PersonVO abstractPerson = request.getPerson();
        // call appropriate executeAction method based on subclass of PersonVO
    }

    private void executeAction(PersonVO person) {}
    private void executeAction(EmployeeVO employee) {}
    private void executeAction(ManagerVO manager) {}
    private void executeAction(UnicornWranglerVO unicornWrangler) {}
}
As discussed here, Java will select the best method based on type information at compile time (i.e., it will always select executeAction(PersonVO person)).
What's the most appropriate way to select the correct method?
The internet tells me that using instanceof gets me slapped. However, I don't see the appropriate way to select the method without explicitly casting abstractPerson to one of the other concrete types.
EDIT: To clarify - the VO passed in is a simple value object exposed for web clients to instantiate and pass in. By convention it doesn't have methods on it; it's simply a data structure with fields.
For this reason, calling personVO.executeAction() is not an option.
Thanks
Marty
If executeAction was a method in a base class or interface common to PersonVO, EmployeeVO, ManagerVO and UnicornWranglerVO, you could just call abstractPerson.executeAction() instead of having multiple overloaded methods.
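That is, assuming the action could live on the hierarchy, something like this sketch:

public abstract class PersonVO {
    public abstract void executeAction();
}

public class MyServiceLayerClass {
    public void requestAction(PersonActionRequest request) {
        // dynamic dispatch picks the right subclass implementation
        request.getPerson().executeAction();
    }
}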
Your principal obstacle to polymorphism here seems to be a 'dumb struct' data object + 'manager class' service non-pattern. The 'more polymorphic' approach would be for execute() to be a method that the various person implementations override.
Assuming that can't change, the way you do multiple dispatch in Java is with visitor-looking callbacks.
public interface PersonVisitor {
    void executeAction(EmployeeVO employee);
    void executeAction(ManagerVO manager);
    void executeAction(UnicornWranglerVO unicornWrangler);
}

public abstract class PersonVO {
    public abstract void accept(PersonVisitor visitor);
}

public class EmployeeVO extends PersonVO {
    @Override
    public void accept(PersonVisitor visitor) {
        visitor.executeAction(this);
    }
}

public class MyServiceLayerClass implements PersonVisitor {
    public void requestAction(PersonActionRequest request)
    {
        PersonVO abstractPerson = request.getPerson();
        abstractPerson.accept(this);
    }

    public void executeAction(EmployeeVO employee) {}
    public void executeAction(ManagerVO manager) {}
    public void executeAction(UnicornWranglerVO unicornWrangler) {}
}
You could change the way you are approaching the design and use a Visitor, passing the executor into the Person and having the person type determine which method to call.
The Visitor pattern is often used to overcome Java lacking double-dispatch.
I would explicitly cast the abstractPerson. Not only does it ensure the JVM gets the right method, it makes it a hell of a lot easier to read and ensure you know what's going on.
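In other words, something like this inside requestAction (a sketch of the explicit-cast approach):

PersonVO abstractPerson = request.getPerson();
if (abstractPerson instanceof EmployeeVO) {
    executeAction((EmployeeVO) abstractPerson); // cast so the right overload binds
} else if (abstractPerson instanceof ManagerVO) {
    executeAction((ManagerVO) abstractPerson);
} else if (abstractPerson instanceof UnicornWranglerVO) {
    executeAction((UnicornWranglerVO) abstractPerson);
} else {
    executeAction(abstractPerson); // fall back to the PersonVO overload
}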
I've never been so good at design, because there are so many different possibilities, they all have pros and cons, and I'm never sure which to go with. Anyway, here's my problem: I need many different, loosely related classes to have validation. However, some of these classes will need extra information to do the validation. I want to have a validate method that can be used to validate an Object, and I want to determine whether an Object is validatable via an interface, say Validatable. The following are the two basic solutions I can have.
interface Validatable {
    public void validate() throws ValidateException;
}

interface Object1Validatable {
    public void validate(Object1Converse converse) throws ValidateException;
}

class Object1 implements Validatable, Object1Validatable {
    ...
    public void validate() throws ValidateException {
        throw new UnsupportedOperationException();
    }

    public void validate(Object1Converse converse) throws ValidateException {
        ...
    }
}

class Object2 implements Validatable {
    ...
    public void validate() throws ValidateException {
        ...
    }
}
This is the first solution, whereby I have a general global interface that anything validatable implements, and I could use validate() to validate. Object1 doesn't really support it, though, so its validate() is kind of defunct, whereas Object2 (and many other classes) support it properly.
Alternatively, I could have the following, which would leave me without a top-level interface.
interface Object1Validatable {
    public void validate(Object1Converse converse) throws ValidateException;
}

class Object1 implements Object1Validatable {
    ...
    public void validate(Object1Converse converse) throws ValidateException {
        ...
    }
}

interface Object2Validatable {
    public void validate() throws ValidateException;
}

class Object2 implements Object2Validatable {
    ...
    public void validate() throws ValidateException {
        ...
    }
}
I think the main problem I have is that I'm kind of stuck on the idea of having a top-level interface so that I can at least say X or Y object is validatable.
What about this:
interface Validatable {
    void validate(Validator v);
}

// Validator is whatever interface exposes the state the validation needs
interface Validator {
    void foo();
    void bar();
}

class Object1 implements Validatable {
    public void validate(Validator v) {
        v.foo();
        v.bar();
    }
}

class Object1Converse implements Validator {
    // ....
}

class Object2 implements Validatable {
    public void validate(Validator v) {
        // do whatever you need and ignore the validator?
    }
}
What do you care if Object2 receives an unneeded argument? If it is able to operate correctly without it, it can just ignore it, right?
If you are worried about introducing an unneeded dependency between Object2 and Object1Converse, then simply specify an interface to decouple them and use that as the validator.
Now I must add that having a mixed model, where you have both objects able to self-validate and objects which need external state information to validate, sounds weird.
Care to illustrate?
Perhaps the Apache Commons Validator project would be useful here - either directly or as a model for how to attack your problem. It effectively has a parallel set of objects that do the validation - so there is no interface on the objects, just the presence/absence of a related validator for the object/class.
This is in C#, but the same ideas can certainly be implemented in many other languages.
public class MyClass {
    // Properties and methods here
}

public class MyClassValidator : IValidator<MyClass> {
    IList<IValidatorError> IValidator.Validate(MyClass obj) {
        // Perform some checks here
    }
}
// ...
public void RegisterValidators() {
    Validators.Add<MyClassValidator>();
}

// ...
public void PerformSomeLogic() {
    var myobj = new MyClass { };
    // Set some properties, call some methods, etc.
    var v = Validators.Get<MyClass>();
    if (v.GetErrors(myobj).Count() > 0)
        throw new Exception();
    SaveToDatabase(myobj);
}
A simple solution to the "can an object be validated" problem is to add a third interface.
This third interface is an empty one that parents both of the others, meaning you can just check against that interface (assuming you aren't worried about someone spoofing being validatable), and then iteratively check against the possible validation interfaces if you need to actually validate.
Example:
interface Validateable
{
}

interface EmptyValidateable extends Validateable // interfaces extend other interfaces in Java
{
    void validate() throws ValidateException;
}

interface Objectvalidateable extends Validateable
{
    void validate(Object validateObj);
}
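The "check, then narrow" dispatch described above might look like this sketch ('context' is a hypothetical extra argument, and the caller handles ValidateException):

if (obj instanceof Validateable) {
    // obj claims to be validatable; narrow to the concrete flavor
    if (obj instanceof EmptyValidateable) {
        ((EmptyValidateable) obj).validate();
    } else if (obj instanceof Objectvalidateable) {
        ((Objectvalidateable) obj).validate(context);
    }
}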
}