So right now, I have a Preprocessor class that generates a bunch of instance variable maps, and a Service class that has a setPreprocessor(Preprocessor x) method, so that an instance of the Service class is able to access the maps that the preprocessor generated.
At the moment, my Service class needs to call three methods in succession; for the sake of simplicity, let's call them executePhaseOne, executePhaseTwo, and executePhaseThree. Each of these three methods instantiates or modifies Service instance variables, some of which are references to the Service instance's Preprocessor object.
My code has this structure right now:
Preprocessor preprocessor = new Preprocessor();
preprocessor.preprocess();
Service service = new Service();
service.setPreprocessor(preprocessor);
service.executePhaseOne();
service.executePhaseTwo();
service.executePhaseThree();
To better organize my code, I want to put each executePhaseXXX() call in its own separate subclass of Service, and leave the common data structures for all the phases in the parent class Service. Then, I want to have an execute() method in the Service parent class that executes all three phases in succession:
class ServiceChildOne extends Service {
public void executePhaseOne() {
// Do stuff
}
}
class ServiceChildTwo extends Service {
public void executePhaseTwo() {
// Do stuff
}
}
class ServiceChildThree extends Service {
public void executePhaseThree() {
// Do stuff
}
}
EDIT:
The problem is, how do I write my execute() method in the Service parent class? I have:
public void execute() {
ServiceChildOne childOne = new ServiceChildOne();
ServiceChildTwo childTwo = new ServiceChildTwo();
ServiceChildThree childThree = new ServiceChildThree();
System.out.println(childOne.preprocessor); // prints null
childOne.executePhaseOne();
childOne.executePhaseTwo();
childOne.executePhaseThree();
}
However, my childOne, childTwo, and childThree objects aren't able to access the preprocessor instance variable that lives in the parent class Service... How could I get past this problem?
Use the protected modifier for your Preprocessor instance variable of Service, like so:
public class Service {
protected Preprocessor preprocessor;
}
Then each subclass of Service has a this.preprocessor.
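As a minimal sketch (class bodies abbreviated, names hypothetical), the inherited protected field is directly visible in the subclass once it has been set:

```java
class Preprocessor {
    // stand-in for the real map-building preprocessor
    java.util.Map<String, String> maps = new java.util.HashMap<>();
}

class Service {
    protected Preprocessor preprocessor;

    public void setPreprocessor(Preprocessor p) { this.preprocessor = p; }
}

class ServiceChildOne extends Service {
    public boolean executePhaseOne() {
        // this.preprocessor resolves to the field declared in Service
        return this.preprocessor != null;
    }
}
```

Note that this only helps once setPreprocessor has actually been called on that same instance; a freshly constructed child still sees null, which is exactly the symptom in the question.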
You could provide a method like getPreprocessorInstance() in your Service class that returns the Preprocessor instance.
Your preprocessor should be protected or public so that it is accessible from the child class.
You can read about modifiers here.
UPDATE
new ServiceChildOne(new Preprocessor());
.....
class ServiceChildOne extends Service {
public ServiceChildOne(Preprocessor preprocessor) {
super.preprocessor = preprocessor;
}
public void executePhaseOne() {
// Do stuff
}
}
It looks like your problem is that you have not one, but four different instances of Service - each of which has its own uninitialized copy of the base class variables.
The only solutions I can think of at the moment are, first, the rather poor design of making your Service member variables static - which, in effect, means you can only have one version of Service at a time. A better solution, to my mind, would be to not make the processing phases subclasses, but instead make them independent classes that take an instance of Service as a parameter.
EDIT:
For a quick example, the Service class could look like:
class Service
{
public Service() { ... }
public Preprocessor getPreprocessor() { ... }
public void setPreprocessor(Preprocessor preprocessor) { ... }
public Type2 getVariable2() { ... }
public void setVariable2(Type2 variable2) { ... }
...
}
and the phase classes could look something like:
class ServicePhaseOne
{
private Service m_dataHost;
public ServicePhaseOne(Service dataHost)
{
m_dataHost = dataHost;
}
public void executePhaseOne()
{
// Do phase 1 stuff
}
}
... and so on for phase 2 and phase 3.
The execute() method would then look like:
public void execute()
{
ServicePhaseOne phaseOne = new ServicePhaseOne(this);
ServicePhaseTwo phaseTwo = new ServicePhaseTwo(this);
ServicePhaseThree phaseThree = new ServicePhaseThree(this);
phaseOne.executePhaseOne();
phaseTwo.executePhaseTwo();
phaseThree.executePhaseThree();
}
Hi, I am using Spring Boot and creating microservices. I have a scenario where an object is created in one method and then used by other methods of the same class and by methods of other classes, but its scope should be limited to that single call.
class SharedObject {
private String name;
//getters setters
}
@Service
class FirstServiceImpl {
@Autowired
SecondServiceImpl second;
public void process() {
SharedObject obj = new SharedObject();
//...
process2(obj);
}
private void process2(SharedObject obj) {
//...
second.process(obj);
}
}
@Service
class SecondServiceImpl {
public void process(SharedObject obj) {
//...
}
}
Here, SharedObject needs to be created in the process method of FirstServiceImpl and accessed everywhere else, but each new call to process should create a new object. Given this, I could pass it to all the methods that need it, but is there a cleaner way to achieve this?
You can use ThreadLocal (https://docs.oracle.com/javase/8/docs/api/java/lang/ThreadLocal.html) for this use case. But use it with care: it can easily cause memory leaks if not handled properly.
There are some interesting articles to read on ThreadLocal.
https://dzone.com/articles/an-alternative-approach-to-threadlocal-using-sprin-1 - Better implementation of ThreadLocal
https://plumbr.io/blog/locked-threads/how-to-shoot-yourself-in-foot-with-threadlocals
https://javarevisited.blogspot.com/2013/01/threadlocal-memory-leak-in-java-web.html#axzz7iil9FiAC
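A rough sketch of that idea (the holder class name is made up), with remove() called in a finally block to avoid the leak those articles warn about:

```java
class SharedObject {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class SharedObjectHolder {
    private static final ThreadLocal<SharedObject> CURRENT = new ThreadLocal<>();

    public static void set(SharedObject obj) { CURRENT.set(obj); }
    public static SharedObject get() { return CURRENT.get(); }
    public static void clear() { CURRENT.remove(); } // call in finally, or pooled threads leak
}

class FirstServiceImpl {
    public String process() {
        SharedObject obj = new SharedObject();
        obj.setName("request-scoped");
        SharedObjectHolder.set(obj);
        try {
            // any method running on this thread can now read the object
            // without it being passed as a parameter
            return SharedObjectHolder.get().getName();
        } finally {
            SharedObjectHolder.clear(); // next call to process() starts fresh
        }
    }
}
```

Each thread (and therefore each request, in the usual one-thread-per-request model) gets its own copy; the clear() in finally is what keeps pooled threads from carrying stale state into the next request.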
I am currently reading "Thinking in Java 4th edition". In the Chapter "Interface" and the sub-chapter "Interfaces and factories", it states the following
An interface is intended to be a gateway to multiple implementations,
and a typical way to produce objects that fit the interface is the
Factory Method design pattern. Instead of calling a constructor
directly, you call a creation method on a factory object which
produces an implementation of the interface—this way, in theory, your
code is completely isolated from the implementation of the interface,
thus making it possible to transparently swap one implementation for
another. Here’s a demonstration showing the structure of the Factory
Method:
(for easy reference, the example codes quoted after my question)
My question is: why don't we just write the serviceConsumer method like this?
public static void serviceConsumer(Service s) {
s.method1();
s.method2();
}
In this case, the code depends on the interface Service but not the implementation. (It can also be swapped transparently, can't it?) So I don't really get the point of using a factory here, or of what the quoted passage states at the start.
-----------------------------below quoted from "Thinking in Java"------------------------------
//: interfaces/Factories.java
import static net.mindview.util.Print.*;
interface Service {
void method1();
void method2();
}
interface ServiceFactory {
Service getService();
}
class Implementation1 implements Service {
Implementation1() {} // Package access
public void method1() {
print("Implementation1 method1");
}
public void method2() {
print("Implementation1 method2");
}
}
class Implementation1Factory implements ServiceFactory {
public Service getService() {
return new Implementation1();
}
}
class Implementation2 implements Service {
Implementation2() {} // Package access
public void method1() {
print("Implementation2 method1");
}
public void method2() {
print("Implementation2 method2");
}
}
class Implementation2Factory implements ServiceFactory {
public Service getService() {
return new Implementation2();
}
}
public class Factories {
public static void serviceConsumer(ServiceFactory fact) {
Service s = fact.getService();
s.method1();
s.method2();
}
public static void main(String[] args) {
serviceConsumer(new Implementation1Factory());
// Implementations are completely interchangeable:
serviceConsumer(new Implementation2Factory());
}
}
/* Output:
Implementation1 method1
Implementation1 method2
Implementation2 method1
Implementation2 method2
*/ //:~
Well, nothing prevents you from writing such a method; the quoted statement is about the creation of the object itself.
In this case, the code depends on the interface "Service" but not the implementation
In both cases the code depends on the interface. The difference is that in your version, the Service is created outside the method serviceConsumer.
Maybe it will be clearer if you see a real use of Factory Method. The TIJ example is without any context.
My favorite example is Collection.iterator(), where Collection is the ServiceFactory and Iterator is the Service. You can see the calls in the serviceConsumer() but think of the following:
Collection c = new ArrayList(); // ArrayList is a Factory for its iterator
Iterator i = c.iterator(); // getService()
if (i.hasNext()) { ...}
If serviceConsumer were a method to print the collection (instead of something without context), you could see how passing a ServiceFactory (ArrayList) is better than passing the Service (Iterator). There is more encapsulation this way: the details of the Service are hidden in the method.
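To make that concrete, here is a hedged sketch of such a print-the-collection method (names are mine, not from the book). Passing the factory (the Iterable/Collection) rather than the service (the Iterator) keeps the iterator's creation and exhaustion hidden inside the method:

```java
import java.util.Iterator;

class IterationDemo {
    // The caller hands over the factory; the "service" (Iterator)
    // is created and consumed entirely inside this method.
    static String render(Iterable<String> factory) {
        StringBuilder sb = new StringBuilder();
        Iterator<String> it = factory.iterator(); // the getService() step
        while (it.hasNext()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(it.next());
        }
        return sb.toString();
    }
}
```

If the caller passed an Iterator instead, it could arrive half-consumed, and the method could not restart the traversal; accepting the factory avoids both problems.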
Here are some UML diagrams to help understand the similarities:
Factory method pattern
TIJ Example
Collection.iterator()
Note: The pink classes are actually anonymous classes that implement the Iterator interface type that corresponds to the Collection. They're not normally classes a client will instantiate any other way (hidden).
I have two projects in my Eclipse workspace which are actually two standalone web service applications (publishing SOAP). Usually I create two JAR files from the two projects and run them to publish the services. Both applications have the same methods.
Now I have to provide a wrapper on top of those two services, with a variable to distinguish between them.
If I try to access a method by passing a variable it should call the appropriate class's implementation.
In the example below, I'm passing animal as an integer: if animal is 1, the Cat class method should be called; if 2, then the Dog class method has to respond.
Wrapper wrap = new Wrapper();
wrap.makeNoise(animal); // 1=Bow-Bow, 2=Meow-Meow
Below are the two different Webservice application publishing SOAP
class Cat {
public void makeNoise(){
System.out.println("Meow-Meow");
}
}
Cat.java
class Dog {
public void makeNoise(){
System.out.println("Bow-Bow");
}
}
Dog.java
Please suggest how to implement this requirement.
Have you thought about using the Strategy pattern?
For example, you could declare an interface with a void method that starts a service.
Next, you implement that interface in wrappers, each of which starts a different web service. Finally, you use a switch or if statement to select (by the int) which implementation to assign to an interface reference; right after the if/switch you simply call that interface method to start the selected service. For example:
public class Test1 {
public interface IWebserviceWrapper {
void startWebservice();
}
public class Cat implements IWebserviceWrapper {
public void makeNoise() {
System.out.println("Meow-Meow");
}
@Override
public void startWebservice() {
this.makeNoise();
}
}
public class Dog implements IWebserviceWrapper {
public void makeNoiseORAnythingElse() {
System.out.println("Bow-Bow");
}
@Override
public void startWebservice() {
this.makeNoiseORAnythingElse();
}
}
public Test1() {
}
public IWebserviceWrapper chooseAnimal(int chosenParam) {
switch (chosenParam) {
case 1:
return new Dog();
case 2:
return new Cat();
default:
break;
}
return null;
}
public static void main(String[] args) {
Test1 example = new Test1();
int chosenService = 2;
IWebserviceWrapper service = example.chooseAnimal(chosenService);
service.startWebservice();
}
}
I hope this will help you. There is much more on the topic, so you should probably try to read more about design patterns.
Having a switch or if-else statement is always a threat to the Open/Closed Principle of the SOLID principles. Hence, as @Grzegor Mandela said, you can have a common interface through which client classes call the required implementations, selected via dynamic binding, which is a core idea behind the Inversion of Control pattern.
This way of having an adapter over a set of underlying classes also bears some similarity to the Adapter pattern.
Then you can use one of the factory patterns: a simple factory, where you move the object-creation logic into a separate class, or the Abstract Factory pattern, in which a separate class is created for instantiating each class's objects.
Using reflection to create objects by just passing the name of the web service could make the code more extensible.
You could just use the factory pattern
public class ServiceFactory {
public Service createService(int type) {
return type == 1 ? new Dog() : new Cat();
}
}
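For illustration (the question doesn't show a common interface, so the Service interface here is an assumption), the pieces would fit together like this:

```java
interface Service {
    String makeNoise();
}

class Dog implements Service {
    public String makeNoise() { return "Bow-Bow"; }
}

class Cat implements Service {
    public String makeNoise() { return "Meow-Meow"; }
}

class ServiceFactory {
    public Service createService(int type) {
        // all conditional creation logic lives in this one place
        return type == 1 ? new Dog() : new Cat();
    }
}
```

The calling code then works purely against Service and never mentions Cat or Dog, which is the point of isolating the conditional in the factory.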
It's a totally valid place to have conditional logic around object instantiation. Also, get rid of the "wrapper" term. It doesn't apply here because you aren't using decorators at any point. You may want to use the term "client" because it sounds like you are acting as the client to several web services.
I am learning about interfaces in Java. When I access an interface in my subclass's main method, I can access it in three ways. What are the differences between them? I'm a learner; could someone help me with this?
public interface interfa
{
void educationloan();
abstract void homeloan();
static int i = 10;
}
public class testinter implements interfa {
public static void main(String args[])
{
System.out.println("Sub class access a interface by implement");
testinter t = new testinter();
t.miniloan();
t.educationloan();
t.homeloan();
System.out.println("Super class access a only interface in sub class");
interfa a = new testinter();
a.educationloan();
//a.miniloan();
a.homeloan();
System.out.println("Annomys class access a only interface in sub class");
interfa xx = new interfa() {
@Override
public void homeloan() {
}
@Override
public void educationloan() {
// TODO Auto-generated method stub
}
};
xx.educationloan();
xx.homeloan();
}
}
So here is my question: which one can be used in which situation, and what are the differences?
First of all, you will get a compile-time error right away, as you haven't implemented the interface methods in the child class.
testinter t = new testinter();
t.miniloan();
t.educationloan(); // these methods need to be implemented first
t.homeloan();
Now regarding your interface implementation ways:
testinter t = new testinter();
t is an instance of a child class & can be used like a regular class object.
interfa a = new testinter();
The upside of using this approach: say you have used the reference a in n places in your code, and in the future you want to change the implementation of your interface to interfa a = new AnotherTestinter();. All you have to do is change that one line; the references themselves do not change. This is loose coupling; otherwise you would have to change the reference everywhere in the code. This approach is known as programming to an interface.
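A tiny sketch of the same idea using standard collections: the code below is written against the List interface, so swapping the implementation touches exactly one line.

```java
import java.util.ArrayList;
import java.util.List;

class SwapDemo {
    static int countLoans() {
        // Only this line changes if we swap in, say, new LinkedList<>()
        List<String> loans = new ArrayList<>();
        // Everything below the declaration is implementation-agnostic
        loans.add("education");
        loans.add("home");
        return loans.size();
    }
}
```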
Using an anonymous class:
interfa xx = new interfa() {
@Override
public void homeloan() {
}
@Override
public void educationloan() {
// TODO Auto-generated method stub
}
};
Anonymous classes enable you to make your code more concise. They enable you to declare and instantiate a class at the same time. They are like local classes except that they do not have a name. Use them if you need to use a local class only once.
So writing interfa xx = new interfa() { helps you define your methods educationloan() and homeloan() in the same place.
t itself is an instance of the child class; it can be used like any other class object.
a is an instance typed as the interface interfa. Here a can be used in both cases, where either the child type or the interface type is needed.
In the anonymous-class version you implement the methods of the interface in your own way; you can implement different behavior here instead of relying on an existing implementation.
N.B. I overlooked one thing: you will get a compiler error, as you haven't implemented the methods of interfa in the child class.
Example:
public class TestClass {
public static void main(String[] args) {
TestClass t = new TestClass();
}
private static void testMethod() {
abstract class TestMethod {
int a;
int b;
int c;
abstract void implementMe();
}
class DummyClass extends TestMethod {
void implementMe() {}
}
DummyClass dummy = new DummyClass();
}
}
I found out that the above piece of code is perfectly legal in Java. I have the following questions.
1. What is the use of ever having a class definition inside a method?
2. Will a class file be generated for DummyClass?
3. It's hard for me to imagine this concept in an object-oriented manner: having a class definition inside a behavior. Can someone explain it with equivalent real-world examples?
4. Abstract classes inside a method sound a bit crazy to me. But no interfaces are allowed. Is there any reason behind this?
This is called a local class.
2 is the easy one: yes, a class file will be generated.
1 and 3 are kind of the same question. You would use a local class where you never need to instantiate one or know about implementation details anywhere but in one method.
A typical use would be to create a throw-away implementation of some interface. For example you'll often see something like this:
//within some method
taskExecutor.execute( new Runnable() {
public void run() {
classWithMethodToFire.doSomething( parameter );
}
});
If you needed to create a bunch of these and do something with them, you might change this to
//within some method
class myFirstRunnableClass implements Runnable {
public void run() {
classWithMethodToFire.doSomething( parameter );
}
}
class mySecondRunnableClass implements Runnable {
public void run() {
classWithMethodToFire.doSomethingElse( parameter );
}
}
taskExecutor.execute(new myFirstRunnableClass());
taskExecutor.execute(new mySecondRunnableClass());
Regarding interfaces: I'm not sure if there's a technical issue that makes locally-defined interfaces a problem for the compiler, but even if there isn't, they wouldn't add any value. If a local class that implements a local interface were used outside the method, the interface would be meaningless. And if a local class was only going to be used inside the method, both the interface and the class would be implemented within that method, so the interface definition would be redundant.
Those are called local classes. You can find a detailed explanation and an example here. The example returns a specific implementation which we don't need to know about outside the method.
The class can't be seen (i.e., instantiated, or its methods accessed without reflection) from outside the method. Also, it can access the local variables defined in testMethod(), but only those declared before the class definition.
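A small sketch of that capture rule: the local class may read a local variable declared above it, provided the variable is final (or, since Java 8, effectively final):

```java
class LocalCaptureDemo {
    static int run() {
        final int base = 40; // must be (effectively) final to be captured
        class Adder {        // local class: invisible outside run()
            int add(int x) { return base + x; }
        }
        return new Adder().add(2);
    }
}
```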
I actually thought no such file would be written, until I just tried it: oh yes, such a file is created! It will be called something like A$1B.class, where A is the outer class and B is the local class.
Especially for callback functions (event handlers in GUIs, like onClick() when a Button is clicked etc.), it's quite usual to use "anonymous classes" - first of all because you can end up with a lot of them. But sometimes anonymous classes aren't good enough - especially, you can't define a constructor on them. In these cases, these method local classes can be a good alternative.
The real purpose of this is to allow us to create classes inline in function calls to console those of us who like to pretend that we're writing in a functional language ;)
The only case where you would want a full-blown method-local inner class instead of an anonymous class (a.k.a. a Java closure) is when the following conditions are met:
you need to supply an interface or abstract class implementation
you want to use some final parameters defined in calling function
you need to record some state of execution of the interface call.
E.g., somebody wants a Runnable, and you want to record when the execution started and ended.
With an anonymous class this is not possible; with a local class you can do it.
Here is an example to demonstrate my point:
private static void testMethod (
final Object param1,
final Object param2
) throws InterruptedException
{
class RunnableWithStartAndEnd implements Runnable {
Date start;
Date end;
public void run () {
start = new Date( );
try
{
evalParam1( param1 );
evalParam2( param2 );
...
}
finally
{
end = new Date( );
}
}
}
final RunnableWithStartAndEnd runnable = new RunnableWithStartAndEnd( );
final Thread thread = new Thread( runnable );
thread.start( );
thread.join( );
System.out.println( runnable.start );
System.out.println( runnable.end );
}
Before using this pattern though, please evaluate if plain old top-level class, or inner class, or static inner class are better alternatives.
The main reason to define inner classes (within a method or a class) is to deal with accessibility of members and variables of the enclosing class and method.
An inner class can access the enclosing class's private data members and operate on them. If defined within a method, it can also use the method's final (or effectively final) local variables.
Having inner classes also helps ensure the class is not accessible to the outside world. This holds true especially for UI programming in GWT, GXT, etc., where the JS-generating code is written in Java and the behavior for each button or event is defined by creating anonymous classes.
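A minimal sketch of that private-member access: the inner class reads the enclosing instance's private field directly, with no getter needed.

```java
class Outer {
    private int secret = 7;

    class Inner {
        // inner classes see the outer instance's private members
        int reveal() { return secret; }
    }

    static int demo() {
        Outer o = new Outer();
        return o.new Inner().reveal(); // inner instance bound to o
    }
}
```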
I came across a good example in Spring. The framework uses the concept of local class definitions inside a method to deal with various database operations in a uniform way.
Assume you have a code like this:
JdbcTemplate jdbcOperations = new JdbcTemplate(this.myDataSource);
jdbcOperations.execute("call my_stored_procedure()");
jdbcOperations.query(queryToRun, new MyCustomRowMapper(), withInputParams);
jdbcOperations.update(queryToRun, withInputParams);
Let's first look at the implementation of the execute():
@Override
public void execute(final String sql) throws DataAccessException {
if (logger.isDebugEnabled()) {
logger.debug("Executing SQL statement [" + sql + "]");
}
/**
* Callback to execute the statement
* (can access method-local state like the sql input parameter).
*/
class ExecuteStatementCallback implements StatementCallback<Object>, SqlProvider {
@Override
@Nullable
public Object doInStatement(Statement stmt) throws SQLException {
stmt.execute(sql);
return null;
}
@Override
public String getSql() {
return sql;
}
}
//transforms method input into a functional Object
execute(new ExecuteStatementCallback());
}
Please note the last line. Spring does this exact "trick" for the rest of the methods as well:
//uses local class QueryStatementCallback implements StatementCallback<T>, SqlProvider
jdbcOperations.query(...)
//uses local class UpdateStatementCallback implements StatementCallback<Integer>, SqlProvider
jdbcOperations.update(...)
The "trick" with local classes allows the framework to deal with all of those scenarios in a single method which accept those classes via StatementCallback interface.
This single method acts as a bridge between the actions (execute, update) and the common operations around them (e.g., execution, connection management, error translation, and DBMS console output):
public <T> T execute(StatementCallback<T> action) throws DataAccessException {
Assert.notNull(action, "Callback object must not be null");
Connection con = DataSourceUtils.getConnection(obtainDataSource());
Statement stmt = null;
try {
stmt = con.createStatement();
applyStatementSettings(stmt);
//
T result = action.doInStatement(stmt);
handleWarnings(stmt);
return result;
}
catch (SQLException ex) {
// Release Connection early, to avoid potential connection pool deadlock
// in the case when the exception translator hasn't been initialized yet.
String sql = getSql(action);
JdbcUtils.closeStatement(stmt);
stmt = null;
DataSourceUtils.releaseConnection(con, getDataSource());
con = null;
throw translateException("StatementCallback", sql, ex);
}
finally {
JdbcUtils.closeStatement(stmt);
DataSourceUtils.releaseConnection(con, getDataSource());
}
}
Everything is clear here, but I wanted to add another example of a reasonable use case for this type of class definition for future readers.
Regarding @jacob-mattison's answer: if we assume there are some common actions in these throw-away implementations of the interface, it's better to write them once while still keeping the implementations anonymous:
//within some method
abstract class myRunnableClass implements Runnable {
protected abstract void DO_AN_SPECIFIC_JOB();
public void run() {
someCommonCode();
//...
DO_AN_SPECIFIC_JOB();
//..
anotherCommonCode();
}
}
Then it's easy to use this defined class and just implement the specific task separately:
taskExecutor.execute(new myRunnableClass() {
protected void DO_AN_SPECIFIC_JOB() {
// Do something
}
});
taskExecutor.execute(new myRunnableClass() {
protected void DO_AN_SPECIFIC_JOB() {
// Do another thing
}
});