While implementing the factory pattern in my code, I came across an interesting idea: I can replace the if-else conditions in the factory method with reflection to make my code more dynamic.
Below is the code for both designs.
1) With if-else conditions
public static Pizza createPizza(String type) {
    Pizza pizza = null;
    if (type.equals(PizzaType.Cheese)) {
        pizza = new CheesePizza();
    } else if (type.equals(PizzaType.Tomato)) {
        pizza = new TomatoPizza();
    } else if (type.equals(PizzaType.Capsicum)) {
        pizza = new CapsicumPizza();
    } else {
        try {
            throw new Exception("Entered PizzaType is not Valid");
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
    return pizza;
}
2) With Reflection
public static Pizza createPizza(String type) {
    Pizza pizza = null;
    for (PizzaType value : PizzaType.values()) {
        if (type.equals(value.getPizzaTypeValue())) {
            String fullyQualifiedClassName = value.getClassNameByPizzaType(type);
            try {
                pizza = (Pizza) Class.forName(fullyQualifiedClassName).newInstance();
            } catch (InstantiationException | IllegalAccessException
                    | ClassNotFoundException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
    return pizza;
}
The second way looks very good to me because it makes the code more dynamic: I can create a property file mapping pizza types to class names, which serves the open/closed principle a bit better. If in the future the owner wants to add more pizzas to the PizzaStore, he would just make an entry in the property file and create one more subclass of Pizza.
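For illustration, a property-file-driven factory along those lines might look like the sketch below. The file format (`type=fullyQualifiedClassName`), the class names, and the method names are hypothetical, and `getDeclaredConstructor().newInstance()` is used as the non-deprecated replacement for `Class.newInstance()`:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.Properties;

interface Pizza {}

class CheesePizza implements Pizza {}

// Hypothetical sketch: a properties file maps pizza-type keys to fully
// qualified class names, e.g.  cheese=com.example.CheesePizza
class PropertyPizzaFactory {

    private final Properties types = new Properties();

    PropertyPizzaFactory(InputStream config) {
        try {
            types.load(config); // read "type=className" pairs
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    Pizza createPizza(String type) {
        String className = types.getProperty(type);
        if (className == null) {
            throw new IllegalArgumentException("Unknown pizza type: " + type);
        }
        try {
            // reflective instantiation via the no-arg constructor
            return (Pizza) Class.forName(className)
                                .getDeclaredConstructor()
                                .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot create " + className, e);
        }
    }
}
```

Adding a new pizza then means adding one line to the file and one subclass, with no change to the factory itself.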
But I read that reflection has many disadvantages; here are a few of them:
It is a hack of the compiler
Automated development tools are not able to work with reflection code
It's difficult to debug reflection code
Reflection complicates understanding and navigation in code
Significant performance penalty
So I am very curious to know which design is better, as I am very interested in making my code more and more dynamic.
You can also use a design in the middle. E.g.
public interface Factory<T> {
    public T newInstance();
}
public class TomatoPizzaFactory implements Factory<TomatoPizza> {
    @Override
    public TomatoPizza newInstance() {
        return new TomatoPizza();
    }
}
public class PizzaFactory {

    private Map<String, Factory<? extends Pizza>> factories = new HashMap<String, Factory<? extends Pizza>>();

    public PizzaFactory() {
        factories.put(PizzaType.Cheese, new CheesePizzaFactory());
        factories.put(PizzaType.Tomato, new TomatoPizzaFactory());
    }

    public Pizza createPizza(String type) {
        Factory<? extends Pizza> factory = factories.get(type);
        if (factory == null) {
            throw new IllegalArgumentException("Unknown pizza type");
        }
        return factory.newInstance();
    }
}
Implement a DefaultConstructorFactory for simple instantiations.
public class DefaultConstructorFactory<T> implements Factory<T> {

    private Class<T> type;

    public DefaultConstructorFactory(Class<T> type) {
        this.type = type;
    }

    public T newInstance() {
        try {
            return type.newInstance();
        } catch (InstantiationException e) {
            throw new IllegalStateException("Can not instantiate " + type, e);
        } catch (IllegalAccessException e) {
            throw new IllegalStateException("Can not instantiate " + type, e);
        }
    }
}
But I read that reflection has many disadvantages; here are a few of them:
It is a hack of the compiler
It can be a hack, but if you are writing infrastructure code you will often use reflection, especially when you write frameworks that must introspect classes at runtime, because the framework doesn't know at compile time which classes it will handle. Think of Hibernate, Spring, JPA, etc.
Automated development tools are not able to work with reflection code
That's true, because you shift a lot of compile-time issues to runtime, so you should handle reflection exceptions the way a compiler would and provide good error messages.
There is also only little refactoring support in some IDEs, so you must be careful when changing code that is used reflectively.
Nevertheless, you can write tests to find bugs fast.
It's difficult to debug reflection code
That's also true, because you can't jump into a method directly and you have to investigate the variables to find out which member of which object is accessed.
It is a bit more indirection.
Reflection complicates understanding and navigation in code
Yes, if it is used the wrong way. Only use reflection if you do not know the types you have to handle at runtime (infrastructure or framework code). Don't use reflection if you know the types and only want to minimize the code written; in that case, select a better design instead.
Significant performance penalty
I don't think so. Of course, reflection calls are indirect method invocations, and thus more code must be executed to do the same work as a direct method call, but the overhead is minimal.
If you use reflection in factories, the reflection overhead is insignificant compared to the lifetime of the created objects.
Also, your reflection code will be JIT-optimized. Since Java 1.4, the HotSpot compiler will inflate a method that is reflectively called more than 15 times by generating a pure-Java accessor.
See: https://blogs.oracle.com/buck/entry/inflation_system_properties, https://stackoverflow.com/a/7809300/974186 and https://stackoverflow.com/a/28809546/974186
Frameworks use reflection and generate a lot of proxies that use reflection, e.g. Spring's transaction management, JPA, etc. If it had a significant performance impact, all applications using these frameworks would be very slow, wouldn't they?
Using reflection this way is not recommended in general, but if creating objects dynamically is a top priority in your design, then it can be acceptable.
Another good option might be not to use different Pizza subtypes, but a single Pizza class that can be parameterized. That would make a lot of sense, since I don't think pizzas behave so very differently. It seems to me that a diverse range of pizzas could easily be created using the builder pattern.
By the way, another bad practice catches the eye here: the parameter of the factory method is a String, when you actually seem to have PizzaType, which is an enum, and you compare the String parameter to enum values. It would be much better to use enum values as the parameter, and a switch instead of the chained if-else.
But another problem here is that the enum duplicates the available pizza types. So when you add a new pizza type, you also have to update the enum and the factory method. That's too many files changed; this is a code smell.
I suggest giving the single Pizza class and the builder pattern a try.
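As a sketch of that suggestion (all names illustrative), a single parameterized Pizza built through a builder could look like this:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// One Pizza class parameterized by name and toppings, instead of one subclass
// per pizza type. Adding a new kind of pizza needs no new class at all.
class Pizza {

    private final String name;
    private final List<String> toppings;

    private Pizza(Builder builder) {
        this.name = builder.name;
        this.toppings = Collections.unmodifiableList(new ArrayList<>(builder.toppings));
    }

    public String getName() { return name; }
    public List<String> getToppings() { return toppings; }

    static class Builder {
        private String name;
        private final List<String> toppings = new ArrayList<>();

        Builder name(String name) { this.name = name; return this; }
        Builder topping(String topping) { toppings.add(topping); return this; }
        Pizza build() { return new Pizza(this); }
    }
}
```

Usage would then read, for example, `new Pizza.Builder().name("Cheese").topping("mozzarella").build()`.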
Reflection seems like overkill to me in this situation. Why don't you add a factory method Pizza create(); to your PizzaType interface?
Then just make every implementing class do the initialization, like:
class CheesePizza implements PizzaType {
    Pizza create() {
        return new CheesePizza();
    }
}
So you only end up with:
for (PizzaType value : PizzaType.values()) {
    if (type.equals(value)) {
        pizza = value.create();
    }
}
This way the loop in the first example wouldn't be ineffective, and it is easily expandable.
Also, you shouldn't worry about reflection performance too much if your code doesn't need to perform in real time. I agree that it makes the code unreadable though, so avoiding it is better in most cases.
Edit: I've overlooked that PizzaType is an enum, not an interface. In this case you may create an interface, or add a create method to your enum, but I don't think the latter is very favourable.
The if-else statements won't contribute to your design; they merely ensure that the pizzas exist in an inheritance hierarchy. Try to use the decorator pattern to minimise the hierarchy levels, and use a concrete class (like PlainPizza) to add your decorations to.
The decorator pattern is aimed at adding decorations dynamically at run time.
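A minimal sketch of that suggestion, with illustrative names: a concrete PlainPizza is wrapped by topping decorators at run time, so new toppings never deepen the hierarchy:

```java
// Decorator sketch: toppings wrap a base pizza instead of subclassing it.
interface Pizza {
    String description();
}

class PlainPizza implements Pizza {
    public String description() { return "plain pizza"; }
}

// Common base for all decorators: holds the wrapped pizza.
abstract class PizzaDecorator implements Pizza {
    protected final Pizza base;
    protected PizzaDecorator(Pizza base) { this.base = base; }
}

class Cheese extends PizzaDecorator {
    Cheese(Pizza base) { super(base); }
    public String description() { return base.description() + " + cheese"; }
}

class Tomato extends PizzaDecorator {
    Tomato(Pizza base) { super(base); }
    public String description() { return base.description() + " + tomato"; }
}
```

For example, `new Cheese(new Tomato(new PlainPizza()))` composes a tomato-and-cheese pizza at run time.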
If you have only a few classes returned by the factory (say three or four), you should just leave it in its simplest form with if statements:
if (type.equals("someType")) {
    return new SomeType();
} else if (type.equals("otherType")) {
    return new OtherType();
}
If you have more class types, or you predict frequent changes in this hierarchy, then you should switch to an implementation where the factory class holds a collection of the available implementations. Your factory class will have a field with the possible instance types:
public class MyFactory {
    private Map<String, Class<? extends ReturnedClass>> availableTypes;
}
It might be problematic how to fill this map with elements.
You may:
Hardcode them inside the constructor/init method of the factory. Note: each newly added class will require a change in the factory.
Add a method registerNewReturnedType(Class c) to the factory, and have each returned class register itself with the factory. Note: after adding a new class you will not have to change the factory.
(Recommended) Use dependency injection to fill map.
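The registration option might be sketched as follows. This variant registers a constructor reference (a `Supplier`) rather than a `Class` object as in the answer, which avoids reflection entirely; the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch: implementations register a constructor reference with the factory,
// so adding a new type never touches the factory code itself.
class MyFactory {

    private final Map<String, Supplier<Object>> availableTypes = new HashMap<>();

    public void registerNewReturnedType(String key, Supplier<Object> constructor) {
        availableTypes.put(key, constructor);
    }

    public Object create(String key) {
        Supplier<Object> constructor = availableTypes.get(key);
        if (constructor == null) {
            throw new IllegalArgumentException("Unknown type: " + key);
        }
        return constructor.get(); // invoke the registered constructor
    }
}
```

With dependency injection, the map contents would simply be provided by the container instead of explicit register calls.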
Related
I came up with this question while writing specific code, but I'll try to keep the question as generic as possible.
Other similar questions refer to C#, which seems to have some language-specific handling for this, and the code below is Java, but again, let's try to keep it generic.
Let's say I have class A which implements interface I.
This is useful to me because I can implement methods that use A only as type I and abstract away the implementation.
Let's now say I have class B which implements methods with the same names/signatures as the ones in interface I, but it doesn't implement the interface.
Should I always explicitly implement I?
Even if I don't use it (though I might in the future) for type abstraction?
A more meaningful, even if probably not realistic, example would be:
interface Printable {
    String print();
}

class A implements Printable {
    //code...
    public String print() { return "A"; }
    //code...
}

class B {
    //code...
    public String print() { return "B"; }
    public void otherMethod() { /*code*/ }
    //code...
}
class Test {
    public static void main(String[] args) {
        Printable a = new A();
        System.out.println(a.print());
        B b = new B();
        b.otherMethod();
        System.out.println(b.print());
    }
}
Are there any drawbacks on explicitly implementing, or not, the interface Printable?
The only one I can think of is scalability for the second case.
In the sense that if one day I'll want to explicitly use it as Printable, I'll be able to do so without any more effort.
But is there anything else (patterns, optimization, good programming, style, ..) I should take into consideration?
In some cases the type hierarchy will affect the method-call cost by not playing well with JIT method inlining. An example can be found in the Guava bug report "ImmutableList (and others) offer awful performance in some cases due to size-optimized specializations (#1268)":
Many of the guava Immutable collections have a cute trick where they have specializations for zero (EmptyImmutableList) and one (SingletonImmutableList) element collections. These specializations take the form of subclasses of ImmutableList, to go along with the "Regular" implementation and a few other specializations like ReverseImmutable, SubList, etc.
Unfortunately, the result is that when these subclasses mix at some call site, the call is megamorphic, and performance is awful compared to classes without these specializations (worse by a factor of 20 or more).
I don't think there is a simple correct answer for this question.
However, if you do not implement the method, you should do this:
public void unusedBlahMethod() {
    throw new UnsupportedOperationException("operation blah not supported");
}
The advantages of omitting the unused method are:
You save yourself time and money (at least in the short term).
Since you don't need the method, it might not be clear to you how best to implement it anyway.
The disadvantages of omitting the method are:
If you need the method in the future, it will take longer to add it as you may have to refamiliarize yourself with the code, check-out, re-test, etc.
Throwing an UnsupportedOperationException may cause bugs in the future (though good test coverage should prevent that).
If you're writing disposable code, you don't need to write interfaces, but one day you might notice that you should have taken your time and written an interface.
The main advantage and purpose of interfaces is the flexibility of using different implementations: I can pass something that offers the same functionality into a method, I can create a fake of it for test purposes, and I can create a decorator that behaves like the original object but logs the calls.
Example:
public interface A {
    void someMethod();
}

public class AImplementation implements A {
    @Override
    public void someMethod() {
        // implementation
    }
}

public class ADecorator implements A {

    private final A a;

    public ADecorator(A a) {
        this.a = a;
    }

    @Override
    public void someMethod() {
        System.out.println("Before method call");
        a.someMethod();
        System.out.println("After method call");
    }
}
Nice side effect: ADecorator works with every implementation of A.
The cost of this flexibility isn't that high, and if your code is going to live a little longer, you should pay it.
There is a possible optimization I could apply to one of my methods, if I can determine that another method in the same class is not overridden. It is only a slight optimization, so reflection is out of the question. Should I just make a protected method that returns whether or not the method in question is overridden, such that a subclass can make it return true?
I wouldn't do this. It violates encapsulation and changes the contract of what your class is supposed to do without implementers knowing about it.
If you must do it, though, the best way is to invoke:
getClass().getMethod("myMethod").getDeclaringClass();
If the class that's returned is your own, then the method is not overridden; if it's something else, that subclass has overridden it. Yes, this is reflection, but it's still pretty cheap.
I do like your protected-method approach, though. That would look something like this:
public class ExpensiveStrategy {

    public void expensiveMethod() {
        // ...
        if (employOptimization()) {
            // take a shortcut
        }
    }

    protected boolean employOptimization() {
        return false;
    }
}

public class TargetedStrategy extends ExpensiveStrategy {

    @Override
    protected boolean employOptimization() {
        return true; // Now we can shortcut ExpensiveStrategy.
    }
}
Well, my optimization is a small yield on a case-by-case basis; it only adds up to a lot because the method is called hundreds of times per second.
You might want to see just what the Java optimizer can do. Your hand-coded optimization might not be necessary.
If you decide that hand-coded optimization is necessary, the protected method approach you described is not a good idea because it exposes the details of your implementation.
How many times do you expect the method to be called during the lifetime of the program? Reflection on a single specific method should not be too bad. If it is not worth that much time over the lifetime of the program, my recommendation is to keep it simple and leave out the small optimization.
Jacob
Annotate subclasses that override the particular method with something like @OverridesMethodX.
Perform the necessary reflective work at class load time (i.e., in a static block), so that you publish the information via a final boolean flag. Then query the flag where and when you need it.
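That idea might be sketched as follows. The annotation name and classes are hypothetical, and for brevity the flag is resolved in a final field initializer per instance rather than in a static block; a real version could cache the result once per class:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker: a subclass declares that it overrides the method.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface OverridesMethodX {}

class Base {
    // resolved once when the instance is constructed; getClass() returns the
    // runtime (dynamic) type, so subclasses see their own annotation
    protected final boolean methodXOverridden =
            getClass().isAnnotationPresent(OverridesMethodX.class);
}

@OverridesMethodX
class Optimized extends Base {}
```

The hot path then tests a plain boolean instead of doing reflection on every call.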
Maybe there is a cleaner way to do this via the strategy pattern, though I do not know how the rest of your application and data are modeled; it seems like it might fit. It did for me when I was faced with a similar problem. You could have a heuristic that decides which strategy to use depending on the data to be processed.
Again, I do not have enough information on your specific usage to see whether this is overkill or not. However, I would refrain from changing the class signature for such a specific optimization. Usually when I feel the urge to go against the current, I take it as a sign that I had not foreseen a corner case when I designed the thing and that I should refactor it into a cleaner, more comprehensive solution.
However, beware: such refactoring done solely on optimization grounds almost inevitably leads to disaster. If this is the case, I would take the reflective approach suggested above. It does not alter the inheritance contract, and when done properly it needs to be done only once per subclass that requires it, for the runtime life of the application.
I know this is a slightly old question, but for the sake of other googlers:
I came up with a different solution using interfaces.
class FastSub extends Super {}

class SlowSub extends Super implements Super.LetMeHandleThis {
    public void doSomethingSlow() {
        // not optimized
    }
}

class Super {
    static interface LetMeHandleThis {
        void doSomethingSlow();
    }

    void doSomething() {
        if (this instanceof LetMeHandleThis)
            ((LetMeHandleThis) this).doSomethingSlow();
        else
            doSomethingFast();
    }

    private final void doSomethingFast() {
        // optimized
    }
}
or the other way around:
class FastSub extends Super implements Super.OptimizeMe {}

class SlowSub extends Super {
    void doSomethingSlow() {
        // not optimized
    }
}

class Super {
    static interface OptimizeMe {}

    void doSomething() {
        if (this instanceof OptimizeMe)
            doSomethingFast();
        else
            doSomethingSlow();
    }

    private final void doSomethingFast() {
        // optimized
    }

    void doSomethingSlow() {}
}
private static boolean isMethodImplemented(Object obj, String name) {
    try {
        Class<? extends Object> clazz = obj.getClass();
        return clazz.getMethod(name).getDeclaringClass().equals(clazz);
    } catch (SecurityException e) {
        log.error("{}", e);
    } catch (NoSuchMethodException e) {
        log.error("{}", e);
    }
    return false;
}
Reflection can be used to determine whether a method is overridden. The code is a little bit tricky; for instance, you need to be aware that the runtime class may be a subclass of the class that overrides the method.
You are going to see the same runtime classes over and over again. So you can save the results of the check in a WeakHashMap keyed on the Class.
See my code in java.awt.Component dealing with coalesceEvents for an example.
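A sketch of that caching approach might look like this. The names are illustrative, and the cache makes the simplifying assumption that only one method of interest is ever checked per class (a real version would key on the method as well):

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Cache the reflective override check per runtime class, so each class is
// inspected at most once. WeakHashMap lets unloaded classes be collected.
class OverrideCache {

    private static final Map<Class<?>, Boolean> CACHE =
            Collections.synchronizedMap(new WeakHashMap<>());

    static boolean overrides(Class<?> runtimeClass, Class<?> declaring, String method) {
        return CACHE.computeIfAbsent(runtimeClass, c -> {
            try {
                // overridden if the most specific declaration is below 'declaring'
                return !c.getMethod(method).getDeclaringClass().equals(declaring);
            } catch (NoSuchMethodException e) {
                return false;
            }
        });
    }
}

class BaseM {
    public void m() {}
}

class SubM extends BaseM {
    @Override
    public void m() {}
}

class SubNoOverride extends BaseM {}
```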
Another workaround, similar to overriding a protected method that returns true/false: I would suggest creating an empty marker interface, making the subclass implement this interface, and, inside the superclass, checking whether this instance is an instanceof that interface before calling the overridable expensive method.
I've recently started working on a new project in Java which will have a local database. As part of the design, I have created an AbstractEntity class - this is intended as an object representation of a row (or potential row) on the database.
I've run into a few issues early on in this design though and want to make sure I'm not going down a bad path. One particular method I'm having trouble with is the following:
public ArrayList retrieveEntities(String sql)
{
    ArrayList ret = new ArrayList();
    String query = "SELECT " + getColumnsForSelectStatement() + " FROM " + getTableName() + " WHERE " + sql;

    try (Connection conn = DatabaseUtil.createDatabaseConnection();
         Statement s = conn.createStatement();
         ResultSet rs = s.executeQuery(query))
    {
        while (rs.next())
        {
            AbstractEntity entity = factoryFromResultSet(rs);
            ret.add(entity);
        }
    }
    catch (SQLException sqle)
    {
        Debug.logSqlException(query, sqle);
    }
    return ret;
}
The idea behind this method is to have a generic way to retrieve things from the database, where the only thing I have to pass in are the conditions for the SQL. As it stands it works correctly, but I have two problems with it:
1) Type Safety
I can't seem to parameterise this method without causing compiler errors. ArrayList<AbstractEntity> is no good, and I can't seem to get ArrayList<? extends AbstractEntity> to work either. When I try the latter (which makes sense to me), the following line gives me a compiler error:
ArrayList<PlayerEntity> list = new PlayerEntity().retrieveEntities("1 = 1");
The error is 'Type mismatch: cannot convert from ArrayList<capture#1-of ? extends AbstractEntity> to ArrayList<PlayerEntity>'
Is there a way I can directly reference the superclass from an abstract class? This method isn't static, and since you cannot instantiate an abstract class, to call this I must always have an extending class. So why can't I reference its type?
2) Staticness
Ideally, I'd like for this method to be static. That way I can call PlayerEntity.retrieveEntities() directly, rather than making an object just to call it. However, since it refers to abstract methods I can't do this, so I'm stuck with it.
Both of the above gripes are ringing alarm bells in my head. Is there a better way to design this that avoids these problems, or better yet are there direct solutions to either of these problems that I'm missing?
I think you are reinventing the wheel. ORMs (object-relational mappers) have been around for many years and have proven very useful.
They're not bullet-proof, though. As the problem they intend to solve is quite difficult (the object-relational impedance mismatch), solutions usually have their difficulties as well.
In order to do their work in a flexible manner, some ORMs compromise performance, while others compromise simplicity of usage, etc. What I mean is that there are no perfect solutions here.
I'd like to point you to three different ORMs I've worked with in different projects:
Hibernate
ActiveJDBC
E-Bean
There are many comparisons and benchmarks out there that cover this topic in depth.
Hibernate is the most widely-used, it's robust and powerful, offers a lot of flexibility and performs well if used well. On the cons, it has a steep learning curve, it's a little bit complex for beginners and, in general, solutions that use Hibernate end up using Hibernate forever, since it's very easy to inadvertently let Hibernate sneak into your business layer.
ActiveJDBC is not very popular, but is the best ActiveRecord solution for Java. If you come from Ruby, this is your choice. Its API is very fluent and expressive and code that uses it is very easy to read and maintain. It's very simple and a really thin framework.
E-Bean is quite powerful, its API is fluent and expressive and the product provides adaptive behavior to optimize queries on-the-fly. It's simple to use and code that uses it has good readability and is easy to maintain.
Regarding type safety, I usually follow this approach:
public class AbstractRepository<T extends AbstractEntity> {

    protected final Class<T> entityClazz;

    protected AbstractRepository() {
        Type type = this.getClass().getGenericSuperclass();
        ParameterizedType paramType = (ParameterizedType) type;
        this.entityClazz = (Class<T>) paramType.getActualTypeArguments()[0];
        // TODO exception handling
    }

    public List<T> list(String sql) { // retrieveEntities => very long name
        List<T> ret = new ArrayList<>();
        String query = "SELECT " + getColumnsForSelectStatement() + " FROM " + getTableName() + " WHERE " + sql;
        try (Connection conn = DatabaseUtil.createDatabaseConnection();
             Statement s = conn.createStatement();
             ResultSet rs = s.executeQuery(query)) {
            while (rs.next()) {
                T entity = factoryFromResultSet(rs);
                ret.add(entity);
            }
        } catch (SQLException sqle) {
            Debug.logSqlException(query, sqle);
        }
        return ret;
    }

    protected T factoryFromResultSet(ResultSet rs) {
        try {
            // Create new entity instance by reflection
            T entity = entityClazz.getConstructor().newInstance();
            // Fill entity with result set data
            return entity;
        } catch (ReflectiveOperationException e) {
            // TODO exception handling
            throw new IllegalStateException("Can not instantiate " + entityClazz, e);
        }
    }
}
I've declared an abstract repository class, which needs to be extended with the right parameter types:
public class Person extends AbstractEntity {
}

public class PersonRepository extends AbstractRepository<Person> {
}

PersonRepository repo = new PersonRepository();
List<Person> people = repo.list("some SQL");
I usually separate the code that generates the entities from the actual entities' code, otherwise entities end up having many responsibilities and doing too much work. However, the ActiveRecord approach addresses this problem by letting the entities do all the work, and it's a very popular choice beyond the world of Java.
First point: if you make your method static, there is no sense in using polymorphism. Polymorphism works at runtime, but making the method static forces you to indicate the dynamic type at compile time, which is nonsense.
Concerning the return type of the method, you may first want to set it to ArrayList<? extends AbstractEntity>; otherwise you are just saying that you return a collection of any Object (i.e. ArrayList<Object>).
After this, you have to create a local variable of the same type as the return type, so as not to get a compile error.
Now, how do you populate this collection?
I am going to give you just two hints:
You can use Reflection, in particular you can invoke a class constructor by selecting at runtime the class you want to instantiate (use getClass()).
You can leverage the Template method pattern to end up having a flexible design and no duplicate code.
In general, however, the problem you are facing has already been solved. So, if you are just looking for a ready-to-use solution, you may take a look at a ORM framework like Hibernate.
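The Template Method hint can be sketched roughly as below. The class names are illustrative, and a simple `Iterable<String>` stands in for a ResultSet so the skeleton stays self-contained; only the varying step is left to the subclass:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

abstract class AbstractEntity {}

abstract class EntityLoader<T extends AbstractEntity> {

    // Template method: the retrieval algorithm is fixed here once...
    public final List<T> loadAll(Iterable<String> rows) {
        List<T> result = new ArrayList<>();
        for (String row : rows) {
            result.add(fromRow(row)); // ...and only this step varies
        }
        return result;
    }

    // Subclasses supply just the part that differs per entity type.
    protected abstract T fromRow(String row);
}

class PlayerEntity extends AbstractEntity {
    final String name;
    PlayerEntity(String name) { this.name = name; }
}

class PlayerLoader extends EntityLoader<PlayerEntity> {
    @Override
    protected PlayerEntity fromRow(String row) {
        return new PlayerEntity(row);
    }
}
```

This removes the duplicated loop from every entity class while keeping each loader fully type-safe.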
I've searched in many places, but I didn't find a good answer to my problem:
I have an enum, for example:
public enum Window { CLASSIC, MODERN }
and I need to separate the behavior of my application according to the enum value, like that:
switch (myWindow.getType()) {
    case CLASSIC: new ClassicWindow(...); break;
    case MODERN: new ModernWindow(...); break;
}
I know what you're thinking: simply put that in the enum and basta. However, this is not the only class depending on my enum, and I can't write as many object-creation methods as I have objects!
Simply put, what can I do in this situation? A friend of mine told me to get rid of the enum and use derived classes everywhere, but in the end I'd have to create as many instances as there are subclasses for all my tests!
In short, I'm stuck.
Do you know a best practice for this? Thanks.
You seem to be looking for a design pattern, rather than good practices for using enums. The code you're intending to write will be full of switch statements, with one condition for each possible value of the enum - making it hard to maintain and extend in the long run. A better idea would be to refactor each possible case's behavior in a separate class, maybe using the Abstract Factory pattern.
This is the factory pattern. This example actually shows exactly what you're doing.
You could either implement an interface in your enum and have them act as a factory:
interface WindowBuilder {
    Window makeWindow();
}

enum WindowType implements WindowBuilder {
    SIMPLE {
        public Window makeWindow() { return new SimpleWindow(); }
    },
    // [... other enum constants]
}
or you could use reflection and bind a class to the enum type to have them (again) work as a factory:
enum WindowType {
    SIMPLE(SimpleWindow.class),
    // [... other enum constants]
    ;

    private final Class<? extends Window> wndType;

    private WindowType(Class<? extends Window> wndType) {
        this.wndType = wndType;
    }

    public Window makeWindow() {
        // in fact you will need to either catch the exceptions newInstance can
        // throw, or declare that the method can throw them
        return this.wndType.newInstance();
    }
}
Either way you will be able to call them like that afterward:
Window window = myWindow.getType().makeWindow();
I am reading a stream which provides an identifier (a simple int). Depending on the int, different data follows, which I need to turn into objects. So far I have created a class for each object type, and each class provides a read(InputStream input) method which reads whatever data there is to be read for that kind of object (all object classes inherit from a common base class).
However, there are numerous ids and thus numerous classes. What is the most elegant way to determine and create the instance of the class?
The most naive approach I tried first was a switch-case block to create the instances, but I find that it clutters the code (unreasonably). It also forces me to have every class available at compile time.
My second try was to create a map that maps each int to a class and use newInstance() to create the objects. There is still the problem that I need to initialize the map, which still requires that I have every class available at compile time. It more or less just moved the clutter from one place to another.
Removing the compile-time dependencies is not required; it would just be a bonus if possible. The main goal is to avoid the boilerplate code.
Constraints: I don't want to add a library to solve this. Reflection is fine with me.
An alternative approach is to still use a Map but essentially use late-binding, if that's preferable. You could even store the config in a properties file like:
1=java.lang.String
2=my.class.Something
...etc...
You then do something like this:
Map<Integer,ObjectFactory> loader = ... // load from properties; fairly trivial
assuming:
public class ObjectFactory {

    private final String className;
    private transient Class clazz;

    public ObjectFactory(String className) {
        this.className = className;
    }

    public Object createInstance() {
        try {
            if (clazz == null) {
                clazz = Class.forName(className);
            }
            return clazz.newInstance();
        } catch (Exception e) {
            throw new IllegalStateException("Could not create " + className, e);
        }
    }
}
I think your map solution sounds fine, but move the initial map setup out of of the Java code and into a config file. (Class.forName will help here)
You could have a registry of prototypes.
A prototype of each class you want to be able to create (at a given point in time) could be added to your registry object at runtime; these prototypes would each have their own unique integer id.
When you want an object with id x, you just ask your registry object to clone and return the prototype whose id is x (or null if no such prototype is currently registered).
Internally the registry could be a (hash)map for quick retrieval, but it could just as easily be a list of prototypes. (Do make sure, of course, that all prototypes implement a common interface the registry can work with.) Best thing is: no need for reflection!
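A minimal sketch of such a registry, with illustrative names; a `copy()` method stands in for `Object.clone()` to avoid the checked-exception and cast noise:

```java
import java.util.HashMap;
import java.util.Map;

// Each prototype registers once under its int id; creation is just a copy,
// so no reflection and no compile-time switch over all types is needed.
interface Message {
    Message copy(); // covariant stand-in for clone()
}

class PrototypeRegistry {

    private final Map<Integer, Message> prototypes = new HashMap<>();

    public void register(int id, Message prototype) {
        prototypes.put(id, prototype);
    }

    public Message create(int id) {
        Message prototype = prototypes.get(id);
        return prototype == null ? null : prototype.copy();
    }
}

class PingMessage implements Message {
    @Override
    public Message copy() { return new PingMessage(); }
}
```

New message types only need to implement the interface and register themselves; the registry never changes.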