Is there a way in Java to have methods that can be accessed only by certain classes?
class CommonClass {
    void methodAvailableForClassA() { /* code goes here */ }
    void methodAvailableForClassB() { /* code goes here */ }
}

class A {
    CommonClass cc;

    public void useCC() {
        cc.methodAvailableForClassA();
    }
}

class B {
    CommonClass cc;

    public void useCC() {
        cc.methodAvailableForClassB();
    }
}
What I am asking is whether there is a way to make certain methods available only to certain classes.
As I wrote in the comments, you're not providing enough context, and I suspect that this problem can be avoided with a better design.
That said, you can "hack" it by overloading the same method with the different class types and having the objects pass themselves to the method:
class CommonClass {
    void methodAvailableForClass(A a) { ... }
    void methodAvailableForClass(B b) { ... }
}

class A {
    CommonClass cc;

    public void useCC() {
        cc.methodAvailableForClass(this);
    }
}

class B {
    CommonClass cc;

    public void useCC() {
        cc.methodAvailableForClass(this);
    }
}
This is an X/Y Problem:
You are asking the wrong question: you are asking about a particular solution, which is inappropriate, rather than about the actual problem.
You should keep methods that are localized to a specific class specialization or implementation of an interface with that implementation.
This is known as High Cohesion and Loose Coupling.
You are trying to do the exact opposite of both of these things, and that is not a good path to be going down.
Any time you have something that is a catch all like CommonClass you are creating a tangled mess of dependencies on completely unrelated things.
Solution:
Those methods that are specialized for each of those classes should be owned by those classes that use them.
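A minimal sketch of that ownership change (the method names and bodies below are placeholders I've assumed, not taken from the question):

class A {
    public void useCC() {
        doSomethingOnlyANeeds();        // was CommonClass.methodAvailableForClassA()
    }

    private void doSomethingOnlyANeeds() { /* code goes here */ }
}

class B {
    public void useCC() {
        doSomethingOnlyBNeeds();        // was CommonClass.methodAvailableForClassB()
    }

    private void doSomethingOnlyBNeeds() { /* code goes here */ }
}

With the specialized logic living inside A and B, there is no shared CommonClass left for other classes to depend on.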
As mentioned elsewhere, you should be looking for loose coupling: as long as ClassA and ClassB both know about CommonClass, you've got strongly coupled code and you've got this kind of problem. However, if you split out separate interfaces that represent the various roles CommonClass may play, you can pass an instance of CommonClass to each of ClassA and ClassB, but as an instance of a separate role that only gives the client class access to specific methods.
e.g.
public interface RoleForA {
    void methodAvailableForClassA();
}

public interface RoleForB {
    void methodAvailableForClassB();
}

public class CommonClass implements RoleForA, RoleForB {
    @Override
    public void methodAvailableForClassA() { /* ... */ }

    @Override
    public void methodAvailableForClassB() { /* ... */ }
}

public class ClassA {
    private final RoleForA cc;

    public ClassA(RoleForA cc) {
        this.cc = cc;
    }

    public void useCC() {
        cc.methodAvailableForClassA();
    }
}

public class ClassB {
    private final RoleForB cc;

    public ClassB(RoleForB cc) {
        this.cc = cc;
    }

    public void useCC() {
        cc.methodAvailableForClassB();
    }
}
The major caveat here is that CommonClass shouldn't just be a dumping ground for all kinds of unrelated functionality. It needs to maintain cohesiveness so you should only do this for the right reason. One example of that might be providing a read-only interface to CommonClass to ClassA and a write-only interface to ClassB. The other thing to be aware of is that the interfaces involved shouldn't be simply tailored to the ClassA/B use cases or else any loose-coupling is purely illusory.
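As a concrete illustration of that read-only/write-only split (the interface and method names below are my own assumptions, not from the answer):

// The same backing object exposed through two narrow roles.
public interface ReadableSettings {
    String get(String key);
}

public interface WritableSettings {
    void put(String key, String value);
}

public class Settings implements ReadableSettings, WritableSettings {
    private final java.util.Map<String, String> values = new java.util.HashMap<>();

    @Override
    public String get(String key) {
        return values.get(key);
    }

    @Override
    public void put(String key, String value) {
        values.put(key, value);
    }
}

A reading client would then receive the object as a ReadableSettings and a writing client as a WritableSettings, so neither sees more than its role allows.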
My question is more a personal mind challenge than a production need... which means that although there are obviously better ways to achieve my goal*, I am curious about how - AND IF - I could do it this way.
*I am thus not interested in other ways at the moment.
I would like to "register" within a list several class objects (Foo.class, Bar.class, etc.) sharing a common static method inherited from a common parent class.
Then I want to iterate over this list and invoke that static method.
The following code is indeed wrong, but it may at least show what I am trying to achieve:
======== Class definitions
public class SomeGenericClass {
    // Not legal Java: a static method cannot be abstract.
    public abstract static String getType();
}

public class SomeSpecializedClassA extends SomeGenericClass {
    public static String getType() {
        return "I am of type A";
    }
}

public class SomeSpecializedClassB extends SomeGenericClass {
    public static String getType() {
        return "I am of type B";
    }
}
======== Main
class Main {
    void main() {
        List<Class<SomeGenericClass>> classes = new ArrayList<Class<SomeGenericClass>>();
        classes.add(SomeSpecializedClassA.class);
        classes.add(SomeSpecializedClassB.class);
        for ((SomeGenericClass.class) Class c : classes) {
            System.out.println(c.getMethod("getType", null).invoke(null, null));
        }
    }
}
========
Any idea?
sharing a common static method inherited from a common parent class.
This is impossible; static methods do not 'do' inheritance, hence why they are called static methods. There is NO way to specify that a given class adheres to a spec, where 'the spec' involves 'has static method XYZ'.
Why do you think Java has the cliché of having 'factories'? A factory is just a container concept where a single instance of a class is the place where you ask questions about the concept of another class: a "PersonFactory" is a class of which usually only a single instance exists, and it answers questions about persons in general. Most usually that question is "make me a new person" (a job the constructor normally does, and constructors don't 'do' specs/interfaces either), but anything else goes too.
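A minimal sketch of that factory idea (the Person and PersonFactory names are illustrative, not from the answer):

// One shared factory instance stands in for the "class-level" questions
// you would otherwise want to ask via static methods.
public interface PersonFactory {
    Person create(String name);   // replaces a would-be static factory method
    String getType();             // replaces the static getType() from the question
}

public class DefaultPersonFactory implements PersonFactory {
    @Override
    public Person create(String name) {
        return new Person(name);
    }

    @Override
    public String getType() {
        return "I am of type Person";
    }
}

public class Person {
    private final String name;

    public Person(String name) {
        this.name = name;
    }
}

Instances of PersonFactory can be put in a list and iterated over like any other object, which is exactly what the question was trying to do with Class objects.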
Then I want to iterate over this list, and invoke that static method.
Reflection can do this. It'd be horrible code style, hard to maintain, and all around entirely the wrong way to go about it. You're asking me: "May I have a gun because there is an annoying mosquito balanced on my left toe", and that's the bazooka. If you want to take it and let er rip, okay. Your funeral.
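For completeness, a sketch of the reflective route the paragraph above warns against, assuming SomeGenericClass drops the illegal abstract static declaration and each subclass simply happens to declare its own static getType():

import java.lang.reflect.Method;
import java.util.List;

public class StaticInvoker {
    public static void main(String[] args) throws Exception {
        List<Class<? extends SomeGenericClass>> classes =
                List.of(SomeSpecializedClassA.class, SomeSpecializedClassB.class);

        for (Class<? extends SomeGenericClass> c : classes) {
            // Look up the static getType() by name; nothing in the type system
            // guarantees it exists, which is why this is fragile.
            Method getType = c.getMethod("getType");
            System.out.println(getType.invoke(null)); // null receiver: static call
        }
    }
}

It compiles and runs, but the compiler cannot help you if a registered class forgets to declare getType(); you only find out at runtime.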
So what's the better way?
Why is 'static' important here? It's not. Register 'TypeOracle' objects:
public interface CommandHandlerFactory {
    String getCommand();

    CommandHandler makeHandler();
}

public interface CommandHandler {
    void handleCommand(UserInfo sendingUser, String cmdData);
}
public class WelcomeHandler implements CommandHandler {
    @Override
    public void handleCommand(UserInfo sendingUser, String cmdData) {
        sendMsg("Well hello there, " + sendingUser.getUserName() + "!");
    }
}
channelBot.registerHandler(new CommandHandlerFactory() {
    @Override
    public String getCommand() {
        return "/hello";
    }

    @Override
    public CommandHandler makeHandler() {
        return new WelcomeHandler();
    }
});
That's how you do it in a non-blow-your-feet-right-off fashion.
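A sketch of what the registry side of that could look like (ChannelBot's API is an assumption on my part; the answer only shows the registerHandler call):

import java.util.HashMap;
import java.util.Map;

// Hypothetical bot: keeps a command -> factory map and dispatches incoming commands.
public class ChannelBot {
    private final Map<String, CommandHandlerFactory> handlers = new HashMap<>();

    public void registerHandler(CommandHandlerFactory factory) {
        handlers.put(factory.getCommand(), factory);
    }

    public void onCommand(UserInfo sender, String command, String cmdData) {
        CommandHandlerFactory factory = handlers.get(command);
        if (factory != null) {
            factory.makeHandler().handleCommand(sender, cmdData);
        }
    }
}

No reflection, no static methods: the registry only ever deals with ordinary objects that implement a known interface.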
NB: A comment on your question suggests using ASM. This is an utterly nonsensical comment; ASM has nothing to do with this and can't help you. Ignore this comment.
I know the advantages of composition over inheritance, but in some situations instances of a class are created by a framework using the default constructor, so we cannot define a constructor with parameters, nor can we set attributes of the object using setter methods. To make this situation clear, consider the following example:
public class Main {
    public static void main(String... str) {
        TargetFramework.component(Child.class);
    }
}
Here the TargetFramework gets a class and will create an instance of that class behind the scenes using the default constructor.
Imagine I want to implement FrameworkInterface as below:
public interface FrameworkInterface {
    void setup();

    void doAction(Record record);

    void doAnotherAction(Record record, boolean isValid);
}
Now I can implement this interface in two ways considering Inheritance and Composition:
Approach 1: (Mixing and Matching Composition and Inheritance)
public abstract class Parent implements FrameworkInterface {
    RecordValidator recordValidator;

    @Override
    public abstract void setup();

    @Override
    public void doAction(Record record) {
        boolean isValid = recordValidator.validate(record);
        doAnotherAction(record, isValid);
    }

    @Override
    public void doAnotherAction(Record record, boolean isValid) {
    }
}
In this implementation I decided to use composition, and I've defined a RecordValidator as below:
public interface RecordValidator {
    boolean validate(Record record);
}
The problem here is that I can't set the RecordValidator in the Parent class when creating an instance, because instances of this class are created by the framework using the default constructor. I can, however, create it in the setup method of the child class that extends Parent, as below:
public class Child extends Parent {
    @Override
    public void setup() {
        recordValidator = new DefaultRecordValidator();
    }
}
The setup method of the FrameworkInterface will be called just after the instance is created by the default constructor, so we can use it to initialize our RecordValidator attribute. This feels like mixing and matching composition and inheritance to me, because I'm using composition together with inheritance. However, this approach has its own advantages, because I've separated the concern of record validation from the Parent class's concerns.
Approach 2: (Just Inheritance)
In this approach I've implemented the FrameworkInterface in the following way:
public abstract class Parent1 implements FrameworkInterface {
    @Override
    public void setup() {
    }

    @Override
    public void doAction(Record record) {
        boolean isValid = validate(record);
        doAnotherAction(record, isValid);
    }

    @Override
    public void doAnotherAction(Record record, boolean isValid) {
    }

    protected abstract boolean validate(Record record);
}
This way, instead of using composition and defining a RecordValidator, I've defined an abstract validate method in my Parent1 class so that the Child class can implement the validation behaviour. The Child class can then be implemented as follows:
public class Child extends Parent1 {
    @Override
    protected boolean validate(Record record) {
        return false;
    }
}
My question is:
Which approach is better for this situation and what are the pros and cons of them?
Which approach is better for this situation and what are the pros and cons of them?
I would argue that both of them are suboptimal to a degree where I would look for other solutions.
Looking at the sample code, there is, for example, no possibility to mock the dependencies of Child in either situation. You could introduce mock capabilities by implementing setters or special constructors that are only used for testing. The core problem I have with this setup, however, is that you bow to the framework.
I would recommend exploring other possibilities, e.g. do the necessary dependency injection manually, then "register" a finished bean with the framework. This is what Uncle Bob means when he talks about keeping the framework at arm's length.
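A sketch of that "arm's length" idea, under the assumption (purely hypothetical, since the question only shows TargetFramework.component(Class<?>)) that the framework offered some way to register an already-built instance:

public class Main {
    public static void main(String... args) {
        // Wire the object graph ourselves...
        RecordValidator validator = new DefaultRecordValidator();
        Child component = new Child(validator);   // assumes Child gains such a constructor

        // ...and only then hand the finished bean to the framework.
        // registerComponent(Object) is a hypothetical hook, not part of the question's API.
        TargetFramework.registerComponent(component);
    }
}

If no such hook exists, that is exactly the limitation worth raising with the framework maintainers, as described next.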
If we are talking about Java in particular and the framework does not allow any other solution, e.g. creating beans beforehand and registering them with the framework, I would contact the framework maintainers and ask them to implement CDI support, since this is a standardized way to handle Dependency Injection.
Looking at your example, you take two different approaches, i.e. you redefine the capabilities of Parent. Just as you did with Parent in the inheritance example, you could define abstract boolean validate(); in Parent, delegating the implementation to Child. I would even go a step further and define
public interface Parent extends FrameworkInterface, RecordValidator {
    ...
}
(all methods in Parent are either abstract or can be seen as defaults, the field can be removed). Thus, each class implementing this interface implements the methods as it sees fit.
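A sketch of what that could look like, with default methods standing in for the old Parent behavior (how the methods split between defaults and abstract members is my assumption):

public interface Parent extends FrameworkInterface, RecordValidator {

    @Override
    default void setup() {
        // nothing to initialize: the implementing class is its own validator
    }

    @Override
    default void doAction(Record record) {
        doAnotherAction(record, validate(record));
    }

    @Override
    default void doAnotherAction(Record record, boolean isValid) {
        // default no-op; implementors override as needed
    }

    // boolean validate(Record record) is inherited abstract from RecordValidator,
    // so every implementing class must supply it.
}

public class Child implements Parent {
    @Override
    public boolean validate(Record record) {
        return record != null;   // placeholder rule
    }
}

The framework can still instantiate Child with its default constructor, and no field needs to be injected at all.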
Hi, to refactor 50k+ lines of code, which one is better: inheritance or composition?
My approach is as follows:
Create a subclass that extends the parent class (the class that needs refactoring).
Create an interface for the subclass.
Transfer the inner public methods that are also declared in the subclass's interface to the child class.
Now, why this approach:
1. The parent class I want to refactor is a JSF @ManagedBean and a Spring @Component.
@Component
public class MBean extends ManagedBean {
    @Autowired
    transient SomeService someService;

    private void calltoPrivateMethod() {
        // 100 loc
    }

    public void calltoPublicMethod() {
        // 200 loc
    }

    public void getExportJson() {
        // 100 loc
        try {
            calltoPrivateMethod();
        } catch (Exception e) {
            // catch exception
        }
        try {
            calltoPublicMethod();
        } catch (Exception e) {
            // catch exception
        }
    }
}
Solution I tried
public interface ChildMBeanInterface {
    void calltoPublicMethod();
}

@Component
public class ChildMbean extends MBean implements ChildMBeanInterface {
    @Override
    public void calltoPublicMethod() {
        // 200 loc copied here
    }
}
@Component
public class MBean extends ManagedBean {
    @Autowired
    transient SomeService someService;

    @Autowired
    ChildMBeanInterface childMBeanInterface;

    public void getExportJson() {
        // 100 loc
        try {
            calltoPrivateMethod();
        } catch (Exception e) {
            // catch exception
        }
        try {
            childMBeanInterface.calltoPublicMethod();
        } catch (Exception e) {
            // catch exception
        }
    }
}
JSF code: the page directly calls getExportJson().
<p:commandLink id="exportCaseWithJsonId"
               value="Export Data" partialSubmit="true"
               onclick="PF('statusDialog').show();"
               action="#{MBean.getExportJson}" process="#this"
               immediate="true" />
So my question is: should my class structure look like this? Is my approach fine, or can it be improved? Please give suggestions.
MBean is a JSF managed bean and contains many other functions for different services. The functions called from JSF are public; however, some inner method calls are private as well as public.
In general, favor Composition over Inheritance. Inheritance has many limitations that don't apply to composition, and it can make things way too complicated.
Before getting started, you need to know which parts can be separated. Take a piece of paper, map out the usages of your fields and the relations of the methods. Try to determine which parts are isolated. Then move that part to a different class. (And obviously, you can't isolate parts of code that call methods or use fields of the super class.)
Here is an example of a piece of code that contains a lot of code regarding listeners.
class Foo {
    List<Listener> listeners;
    // and 101 other fields

    public void addListener(...) { }
    public boolean removeListener(...) { }
    private void notifyListeners(...) { }
    // and 101 other methods

    private void somethingHappens() {
        notifyListeners();
    }
}
In a case like this you could regard the listeners part as an isolated feature of the class. The fields and methods which are used by this part of code, are not used by other methods, meaning you could isolate them.
So, you could move them to a new "feature class" named Listeners for example.
class Listeners {
    List<Listener> listeners;

    public void add(...) { ... }
    public boolean remove(...) { ... }
    public void notifyListeners(...) { ... }
}
Now, in the original class, most of the code disappears.
class Foo {
    Listeners listeners = new Listeners();

    public Listeners getListeners() { ... }

    private void somethingHappens() {
        listeners.notifyListeners();
    }
}
(Note: the new Listeners() could also go in a protected createListeners() method, which still allows subclasses to override the behavior which you just isolated.)
Your class gets a lot thinner. But it does mean that the usages and signatures change a little. i.e. addListener(...) vs getListeners().add(...). And that may be a problem.
So, before you get started, you should determine whether that is a problem or not. For internal usage it shouldn't be a problem, but if you implemented an interface it certainly will be.
You could just add thin wrapper methods that forward requests. But often this won't be a big step forward. You moved some code, and you added some new. You may end up wondering if it's worth it. It's a trade-off worth considering if there are a lot of private methods and only a limited amount of public ones.
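A sketch of such a thin forwarding wrapper, kept only so existing callers keep compiling (the Listener parameter type is assumed):

class Foo {
    private final Listeners listeners = new Listeners();

    // Old public API preserved; the real logic now lives in Listeners.
    public void addListener(Listener listener) {
        listeners.add(listener);
    }

    public boolean removeListener(Listener listener) {
        return listeners.remove(listener);
    }
}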
Alternatively, sometimes with legacy code, you may just choose to divide your classes into collapsible sections. That in itself can be a step forward.
I have an interface FileService
And an implementation of it, FileServiceBean.
I want to be able to process multiple filetypes.
e.g. fileService.processFile(FileDescriptor);
Where FileDescriptor is a class, e.g.
public class FileDescriptor {
    @Column(name = "FILE_TYPE")
    protected String fileType;
}
Then I want multiple extensions of the FileServiceBean to process different filetypes. And FileServiceBean would have all the methods common to all filetypes.
e.g.
PhotoProcessingBean extends FileProcessingBean
VideoProcessingBean extends FileProcessingBean
How do I make the interface decide what implementation to use? I am rather new to this and not quite sure how to phrase the question when searching Google for an answer.
Ideally it would not just accept FileDescriptor; e.g. it could be something else, like just a File.
fileService.processFile(Object);
Well, in the end you have to put the decision logic somewhere; the only question is where.
I think this is a classic application of the factory pattern: you create an object (the "factory") whose sole purpose is deciding which concrete implementation of a common interface to create for a given case. See https://en.wikipedia.org/wiki/Factory_method_pattern
Along the lines of:
class PhotoProcessingBean extends FileProcessingBean { ... }
class VideoProcessingBean extends FileProcessingBean { ... }

class FileProcessingFactory {
    public static FileService createFileService(FileDescriptor descriptor) {
        switch (descriptor.getFileType()) {
            case "Photo": return new PhotoProcessingBean();
            case "Video": return new VideoProcessingBean();
            default: throw new IllegalArgumentException("Unknown file type"); // or some other error handling
        }
    }
}
And using it:
for (FileDescriptor descriptor : /* wherever they come from */) {
    FileService processor = FileProcessingFactory.createFileService(descriptor);
    processor.processFile(descriptor);
}
Sure enough you can also soften up the interface by accepting objects instead of file descriptors. This depends on the concrete application.
Assuming you have an interface:
public interface IFileService {
    void processFile();
}
And the FileProcessingBean class that implements this:
public class FileProcessingBean implements IFileService {
    // other code here

    @Override
    public void processFile() {
        // add code for implementation of method
    }
}
If you have two other classes that extend FileProcessingBean:
public class PhotoProcessingBean extends FileProcessingBean {
    @Override
    public void processFile() {
        System.out.println("Processing PHOTO...");
    }
}

public class VideoProcessingBean extends FileProcessingBean {
    @Override
    public void processFile() {
        System.out.println("Processing VIDEO...");
    }
}
If you would like to use it:
//This is an OOP concept called Polymorphism:
IFileService photoProcess = new PhotoProcessingBean();
IFileService videoProcess = new VideoProcessingBean();
Calling photoProcess.processFile() and videoProcess.processFile() would invoke the different implementations:
photoProcess.processFile();
videoProcess.processFile();
And you'll get the following output:
Processing PHOTO...
Processing VIDEO...
Regarding your point about not just accepting FileDescriptor but also 'something else', my recommendation would be to know exactly what sorts of arguments you are expecting, and then either implement overloaded methods or accept an interface. It would not be wise to use Object as a method argument, since Object is the superclass from which all objects descend. You would essentially be opening the 'floodgates' and potentially run into runtime errors.
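A small sketch of that recommendation: overload processFile for each concrete input type rather than accepting Object (this reshapes the IFileService above to take parameters, and the File overload is an assumed addition, not part of the question):

import java.io.File;

public interface IFileService {
    void processFile(FileDescriptor descriptor);

    // Hypothetical second entry point for callers that only have a raw file.
    void processFile(File file);
}

Each implementation then decides how to handle each input type, and the compiler rejects anything else.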
I am designing my game application and facing some troubles with OOP design.
I want to know some patterns that can help me, because Java has no multiple inheritance. I will describe my problem below, and also explain why multiple interfaces don't help me at all. Let's go.
What I want is a "class as a set of features". By a feature I mean a construction like:
field a;
field b;
field c;

method m1() {
    // use and change fields a, b, c
}

method m2() {
    // use and change fields a, b, c
}

// etc.
So, basically a feature is a set of methods and corresponding fields; it's very close to a Java interface.
When I say that a class implements "feature1", I mean that this class contains ALL the fields the feature needs and has implementations of all the feature-related methods.
When a class implements two features, the tricky part begins. There is a chance that two different features contain fields with the same name. The case of different types for such fields is out of scope here. What I want is "feature naming tolerance": if methodA() from feature A changes the field common_field, then methodB() from feature B, which also uses common_field, will see this change.
So, I want to create a set of features (basically interfaces) and their implementations. After this I want to create classes that will extend multiple features, without any copy-paste or other crap.
But I can't write this code in Java:
public static interface Feature1 {
    public void method1();
}

public static interface Feature2 {
    public void method2();
}

public static class Feature1Impl implements Feature1 {
    int feature1Field;
    int commonField;

    @Override
    public void method1() {
        feature1Field += commonField;
        commonField++;
    }
}

public static class Feature2Impl implements Feature2 {
    int feature2Field;
    int commonField;

    @Override
    public void method2() {
        commonField++;
    }
}

// Illegal: Java does not allow extending two classes.
public static class MyFeaturedClass extends Feature1Impl, Feature2Impl implements Feature1, Feature2 {
}
So, as you can see, the problem is really complex.
Below I'll describe why some standard approaches don't work here.
1) Use something like this:
public static class MyFeaturesClass implements Feature1, Feature2 {
    Feature1 feature1;
    Feature2 feature2;

    @Override
    public void method2() {
        feature2.method2();
    }

    @Override
    public void method1() {
        feature1.method1();
    }
}
OK, this is a really nice approach, but it does not provide "feature field name tolerance": the call to method2() will not change the field commonField in the object corresponding to feature1.
2) Use another design. Why do you need such an approach?
OK. In my game there is a "unit" concept. A unit is a MOVABLE and ALIVE object.
Movable objects have a position and a move() method. Alive objects have hp plus takeDamage() and die() methods.
There are MOVABLE objects in my game that aren't alive.
There are also ALIVE objects in my game that aren't movable (buildings, for example).
And when I implement movable and alive as classes that implement the interfaces, I really don't know what my Unit class should extend. In both cases I would end up copy-pasting.
The example above is really simple; actually I need a lot of different features for different game mechanics, and I will have a lot of different objects with different properties.
What I actually tried is:
Map<Field,Object> fields;
So any object in my game has such a map, and any method can be applied to any object. The implementation of a method just takes the needed fields from this map, does its job, and changes some of them. The problem with this approach is performance. First of all, I don't want to use the Double and Integer classes for double and int fields, and second, I want to have direct access to the fields of my objects (not through the map object).
Any suggestions?
PS. What I want as a result:
class A implements Feature1, Feature2, Feature3, Feature4, Feature5 {
    // All features have corresponding FeatureNImpl implementations.
    // Features 1-2-3 share fields, features 3-4 share fields, and features 5-1 share fields.
    // A really fast implementation with "shared field tolerance" is needed.
}
One possibility is to add another layer of interfaces: an XXXProvider interface could be defined for each possible common field, declaring a getter and a setter for it.
A feature implementation class would require the needed providers in its constructor. All access to common fields is done through these references.
A concrete game object class would implement the needed provider interfaces and feature interfaces. Through aggregation, it would add the feature implementations (passing this as the provider) and delegate the feature calls to them.
E.g.
public interface Feature1 {
    void methodF1();
}

public interface Feature2 {
    void methodF2();
}

public interface FieldAProvider {
    int getA();

    void setA(int a);
}

public class Feature1Impl implements Feature1 {
    private final FieldAProvider _a;

    Feature1Impl(FieldAProvider a) {
        _a = a;
    }

    @Override
    public void methodF1() {
        _a.setA(_a.getA() * 2);
    }
}

// Similar for Feature2Impl

public class GameObject implements Feature1, Feature2, FieldAProvider {
    int _fieldA;
    Feature1 _f1;
    Feature2 _f2;

    GameObject() {
        _f1 = new Feature1Impl(this);
        _f2 = new Feature2Impl(this);
    }

    @Override
    public int getA() {
        return _fieldA;
    }

    @Override
    public void setA(int a) {
        _fieldA = a;
    }

    @Override
    public void methodF1() {
        _f1.methodF1();
    }

    @Override
    public void methodF2() {
        _f2.methodF2();
    }
}
However, I don't think this is an optimal solution.