Is there an elegant, better approach to writing the following method?
private void throwException(Object obj) {
    if (obj instanceof ClassA) {
        ClassA resp = (ClassA) obj;
        throw new CustomException(resp.getMessage(), resp.getCode());
    } else if (obj instanceof ClassB) {
        ClassB resp = (ClassB) obj;
        throw new CustomException(resp.getMessage(), resp.getCode());
    }
}
Note that ClassA and ClassB have exactly the same properties. My point is that I want to avoid repeating the throw statement as much as possible.
Define a map like
Map<Class<?>, Function<Object, ? extends RuntimeException>> handlers = new LinkedHashMap<>();
The handlers map will contain a Function that maps the Object passed to throwException to an exception: the key is a class, and the value is the function that maps an object of that key's type to an exception.
Populate the above map as
handlers.put(ClassA.class, obj -> new CustomException(((ClassA) obj).getMessage(), ((ClassA) obj).getCode()));
handlers.put(ClassB.class, obj -> new CustomException(((ClassB) obj).getMessage(), ((ClassB) obj).getCode()));
With this, throwException would look like,
private void throwException(Object obj) {
    Function<Object, ? extends RuntimeException> handler = handlers.entrySet().stream()
            .filter(entry -> entry.getKey().isAssignableFrom(obj.getClass()))
            .map(Map.Entry::getValue)
            .findFirst()
            .orElseThrow(() -> new RuntimeException("No handler found")); // Or supply a default using orElseGet
    throw handler.apply(obj);
}
I agree that it moves the casting elsewhere to make the method look clean.
The key part is the line
.filter(entry -> entry.getKey().isAssignableFrom(obj.getClass()))
We are checking whether the object passed to throwException is of the type returned by entry.getKey() (the class used as the map key). So, if there is an inheritance hierarchy among the classes (ClassA, ClassB, ...), you must populate the map so that the most generic entries (say, Object.class) come after the more specific ones.
If Object.class were the first entry in handlers, its function (value) would always be picked for any object passed in.
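For example, a minimal sketch of a population order that keeps a generic fallback last (the IllegalArgumentException fallback is just an illustration):
Map<Class<?>, Function<Object, ? extends RuntimeException>> handlers = new LinkedHashMap<>();
// most specific classes first...
handlers.put(ClassA.class, obj -> new CustomException(((ClassA) obj).getMessage(), ((ClassA) obj).getCode()));
handlers.put(ClassB.class, obj -> new CustomException(((ClassB) obj).getMessage(), ((ClassB) obj).getCode()));
// ...the catch-all Object.class entry last, so it is only reached when nothing else matches
handlers.put(Object.class, obj -> new IllegalArgumentException("No specific handler for " + obj.getClass()));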
You can use Function<>s to wrap the getters into a custom interface beforehand.
interface Wrapper {
    String getMessage();
    int getCode();

    static <T> Function<Object, Wrapper> of(Class<T> type, Function<T, String> getMsg, Function<T, Integer> getCode) {
        return object -> {
            T typed = type.cast(object);
            return new Wrapper() {
                public String getMessage() { return getMsg.apply(typed); }
                public int getCode() { return getCode.apply(typed); }
            };
        };
    }
}
class Thrower {
    private static final Map<Class<?>, Function<Object, Wrapper>> wrappers = new HashMap<>();

    static {
        wrappers.put(A.class, Wrapper.of(A.class, A::getMessage, A::getCode));
        wrappers.put(B.class, Wrapper.of(B.class, B::getMessage, B::getCode));
    }

    void throwException(Object o) {
        Wrapper wrapper = wrappers.get(o.getClass()).apply(o);
        throw new CustomException(wrapper.getMessage(), wrapper.getCode());
    }
}
You can kill two birds with one stone here: this is a classic clean-code problem where the Visitor design pattern can replace multiple if/else instanceof checks, and with a little expansion it also covers the problem of repeatedly throwing a new CustomException.
Here is what I suggest:
First, it's better to change your design for ClassA and ClassB to:
abstract class ClassParent {
    // your fields

    public ClassParent(/* your fields */) {
        // initialize your fields
    }

    public abstract void accept(ClassVisitor cv);
}

class ClassA extends ClassParent {
    public ClassA(/* your fields */) {
        super(/* your fields */);
    }

    // other implementation

    public void accept(ClassVisitor cv) {
        cv.visit(this);
    }
}

class ClassB extends ClassParent {
    public ClassB(/* your fields */) {
        super(/* your fields */);
    }

    // other implementation

    public void accept(ClassVisitor cv) {
        cv.visit(this);
    }
}
Now define your visitors as:
interface ClassVisitor {
    void visit(ClassA classA);
    void visit(ClassB classB);
}

class Visitor implements ClassVisitor {
    public void visit(ClassA classA) {
        classA.doSomething();
    }

    public void visit(ClassB classB) {
        classB.doSomething();
    }
}
Now in your throwException you can define:
private ClassVisitor visitor = new Visitor();

public void throwException(ClassParent classParent) {
    classParent.accept(visitor);
    throw new CustomException(classParent.getMessage(), classParent.getCode());
}
This way you get cleaner, more maintainable, and more readable code while avoiding duplication.
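If you'd rather not expose getMessage()/getCode() on ClassParent, the visitor itself can build and throw the exception instead; here is a rough sketch under that assumption (ThrowingVisitor is just an illustrative name, and ClassA/ClassB are assumed to expose getMessage()/getCode() as in the original question):
class ThrowingVisitor implements ClassVisitor {
    @Override
    public void visit(ClassA classA) {
        throw new CustomException(classA.getMessage(), classA.getCode());
    }

    @Override
    public void visit(ClassB classB) {
        throw new CustomException(classB.getMessage(), classB.getCode());
    }
}

// usage: double dispatch picks the right visit() overload
public void throwException(ClassParent classParent) {
    classParent.accept(new ThrowingVisitor());
}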
I'm relatively new to generics in Java, so I apologize if this is something common that gets taught in schools (I'm pretty much self-taught). Let's say I have the interface and abstract class below
public interface IChallenge<T> {
    boolean handle(T e);
    Class<? extends T> getType();
}

public abstract class AbstractChallenge<T> implements IChallenge<T> {
    protected Class<T> clazz;

    protected AbstractChallenge(Class<T> clazz) {
        this.clazz = clazz;
    }

    @Override
    public Class<? extends T> getType() {
        return this.clazz;
    }
}
For every class that extends AbstractChallenge, the handle method takes a parameter of the type specified for the generic. So if I had a challenge class that gets triggered when an Event happens, I would have
public class EventChallenge extends AbstractChallenge<Event> {
    public EventChallenge() {
        super(Event.class);
    }

    @Override
    public boolean handle(Event e) { /* ... */ }
}
My problem comes when I'm trying to pass a specific class to the handle method. Since the generic can be any class, and there can be multiple challenges with the same type, I have the challenges stored in a map with their type as the key.
private Map<Something, List<AbstractChallenge<Something>>> challenges = new HashMap<>();
With the ultimate hope of achieving something along the lines of
List<AbstractChallenge<A>> specificChallenges = this.challenges.get(A.class);
specificChallenges.removeIf(challenge -> challenge.handle(A));
But I'm having a hard time figuring out what goes in 'Something'. If I put the wildcard ? there, IntelliJ says that handle must take a parameter of type 'capture of ?' when I pass it an A. The best I've gotten is to leave AbstractChallenge's type unspecified (raw), but I'd like a better solution.
Any ideas? Thanks!
What you seek is something like this (taken from the comments):
private Map<Class<?>, List<IChallenge<?>>> challenges = new HashMap<>();
A a = ...;
challenges.get(a.getClass())
.removeIf(challenger -> challenger.handle(a));
This is about as unsafe as it can be, and you can't do much about it because you don't know the actual type of T (neither does the compiler; the most it can do is infer it, and in this case the inferred type would be Object):
The key can be any type, for example Integer.class
The value can be any IChallenge<T>, and if T is not Integer (or Number, Object, i.e., any type in Integer's hierarchy), it may fail if the implementation uses the object it handles and performs a cast.
When you add (lambdas are shown for brevity; since IChallenge is not a functional interface, imagine equivalent implementations):
challenges.get(Integer.class).add((Number a) -> a.intValue() > 10); // #1
challenges.get(Integer.class).add((Integer a) -> a.intValue() > 10); // #2
challenges.get(Integer.class).add((Object a) -> a != null); // #3
challenges.get(Integer.class).add((String a) -> a.length() > 10); // #4
Here is an example:
Integer a = Integer.valueOf(5);
// #1 -> ok: a is a Number
challenges.get(a.getClass()).removeIf(c -> c.handle(a));
// #2 -> ok: a is an Integer
challenges.get(a.getClass()).removeIf(c -> c.handle(a));
// #3 -> ok: a is an Object
challenges.get(a.getClass()).removeIf(c -> c.handle(a));
// #4 -> ko: a is not a String
challenges.get(a.getClass()).removeIf(c -> c.handle(a));
If you wish to avoid that, but still be able to handle any challenge, you should ensure that the class holding/building the challenges does it correctly:
public <T> void addChallenge(Class<T> type, IChallenge<T> challenge) {
challenges.computeIfAbsent(type, ignored -> new ArrayList<>()).add(challenge);
}
While you could use the getType() you defined in IChallenge, I wanted to show how the type (the key) and the IChallenge (the value) can be kept consistent: normally, unless you give other classes write access to the map, this is safe because the compiler validates the type at insertion.
Therefore, when you remove them, you should never have a ClassCastException due to the type parameter of IChallenge.
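For example, assuming two hypothetical implementations, IntegerChallenge implements IChallenge<Integer> and StringChallenge implements IChallenge<String>, the mismatch from example #4 above is now rejected at compile time:
addChallenge(Integer.class, new IntegerChallenge());   // compiles: key and challenge types agree
// addChallenge(Integer.class, new StringChallenge()); // does not compile: IChallenge<String> under an Integer key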
You could also try playing with ? super T and ? extends T but that's another challenge.
--
Regarding your comment:
I'm not entirely sure how to go about using the addChallenge method you specified. Right now, I have a list of Class<?> for every challenge created, and when a specific challenge should be loaded, the program instantiates it using .newInstance(). Should I be doing it differently? I only need a certain amount of challenges loaded at once, not all – DeprecatedFrank
I am not telling you to load all challenges at once; I am merely telling you to use OOP to ensure that no one but your challenge holder (let's call it ChallengeHolder) manages the map, and manages it so that you avoid generics pitfalls:
class ChallengeHolder {
    private final Map<Class<?>, List<IChallenge<?>>> challenges;

    public ChallengeHolder() {
        this.challenges = new HashMap<>();
    }

    public <T> void addChallenge(Class<T> type, IChallenge<T> challenge) {
        challenges.computeIfAbsent(type, ignored -> new ArrayList<>()).add(challenge);
    }

    @SuppressWarnings("unchecked")
    public boolean handle(Object a) {
        List<IChallenge<?>> challengers = challenges.get(a.getClass());
        if (challengers == null) return false;
        return challengers.removeIf(c -> ((IChallenge<Object>) c).handle(a));
    }
}
Since there is no public access to challenges beyond what the ChallengeHolder class provides, there should be no problem with using Object or Class<T>.
If you need to create an IChallenge on demand, then you could perhaps use an implementation like this:
public class LazyChallenge<T> implements IChallenge<T> {
    private final Class<? extends IChallenge<T>> impl;
    private IChallenge<T> value;

    public LazyChallenge(Class<? extends IChallenge<T>> impl) {
        this.impl = impl;
    }

    public boolean handle(T o) {
        if (value == null) {
            try {
                value = impl.getConstructor().newInstance();
            } catch (ReflectiveOperationException e) { // covers the reflective exceptions your IDE would list
                throw new IllegalStateException(e);
            }
        }
        return value.handle(o);
    }

    // getType() from IChallenge omitted for brevity
}
You would then add it to ChallengeHolder:
challengeHolder.addChallenge(String.class, new LazyChallenge<>(StringChallenge.class));
Or you could use a lambda to avoid reflection:
public class LazyChallenge<T> implements IChallenge<T> {
    private final Supplier<IChallenge<T>> supplier;
    private IChallenge<T> value;

    public LazyChallenge(Supplier<IChallenge<T>> supplier) {
        this.supplier = supplier;
    }

    public boolean handle(T o) {
        if (value == null) {
            value = supplier.get();
        }
        return value.handle(o);
    }

    // getType() from IChallenge omitted for brevity
}
And:
challengeHolder.addChallenge(String.class, new LazyChallenge<>(StringChallenge::new));
And as an afterthought, you may directly use Supplier<IChallenge<?>> in place of IChallenge<?> in ChallengeHolder:
class ChallengeHolder {
    private final Map<Class<?>, List<Supplier<? extends IChallenge<?>>>> challenges;

    public ChallengeHolder() {
        this.challenges = new HashMap<>();
    }

    public <T> void addChallenge(Class<T> type, Supplier<IChallenge<T>> challenge) {
        challenges.computeIfAbsent(type, ignored -> new ArrayList<>()).add(challenge);
    }

    @SuppressWarnings("unchecked")
    public boolean handle(Object a) {
        List<Supplier<? extends IChallenge<?>>> challengers = challenges.get(a.getClass());
        if (challengers == null) return false;
        return challengers.removeIf(c -> ((IChallenge<Object>) c.get()).handle(a));
    }
}
StringChallenge existing = ... ;
// always reuse an existing
challengeHolder.addChallenge(String.class, () -> existing);
// bring a new challenge each time that ChallengeHolder::handle is called
challengeHolder.addChallenge(String.class, StringChallenge::new);
If I were to implement it, I would use the lambda approach, because you avoid reflection pitfalls (the try/catch, the visibility problems, especially given that Java 9+ introduced modules, ...).
The LazyChallenge defined above may help to avoid creating the StringChallenge more than once. In that case, it would be best to have it implement Supplier<IChallenge<T>> instead of IChallenge<T>.
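A minimal sketch of that idea, assuming the Supplier-based ChallengeHolder above (the name LazySupplier is illustrative):
public class LazySupplier<T> implements Supplier<IChallenge<T>> {
    private final Supplier<IChallenge<T>> delegate;
    private IChallenge<T> value;

    public LazySupplier(Supplier<IChallenge<T>> delegate) {
        this.delegate = delegate;
    }

    @Override
    public IChallenge<T> get() {
        if (value == null) {
            value = delegate.get(); // created once, then reused on every later call
        }
        return value;
    }
}

// usage
challengeHolder.addChallenge(String.class, new LazySupplier<>(StringChallenge::new));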
This whole digression does not change what I pointed out earlier: ensure that only ChallengeHolder reads and writes the map.
I have a lot of different objects being mapped, so I've written a bunch of static mapping methods and one giant switch-case method that reads the type (a built-in field) and then dispatches to the specialized mapping function.
Example:
public static SomeOtherObject mapFrom(SomeObject someObject) {
//... mapping logic
return someOtherObjectInstance;
}
//... a bunch of those
// root/main mapper
public static SomeOtherObjectBase mapFrom(SomeObjectBase someObjectBase) {
switch(someObjectBase.getType()) {
case SOME_TYPE: return mapFrom((SomeObject)someObjectBase);
//...
}
}
I then thought that I could probably convert this to an enum where each enumeration would be a mapper and would be bound to the type, thus avoiding a switch-case... something like this:
public enum SomeObjectMappers {
    SOME_TYPE_MAPPER(SOME_TYPE) {
        @Override
        SomeOtherObject mapFrom(SomeObject someObject) {
            //... mapping logic
            return someOtherObjectInstance;
        }
    },
    //... a bunch of those
    ;

    private final Type type;

    //constructor, getters...

    abstract <T extends SomeOtherObjectBase, U extends SomeObjectBase> T mapFrom(U obj);

    public static SomeOtherObjectBase mapFrom(SomeObjectBase obj) {
        return Arrays.stream(values())
                .filter(v -> v.getType() == obj.getType())
                .map(m -> m.mapFrom(obj))
                .findFirst()
                .orElse(null);
    }
}
However, this does not compile: for some reason the mapper implementation in SOME_TYPE_MAPPER does not accept the concrete subclasses SomeOtherObject and SomeObject as a valid signature for the abstract method.
Can this not be done?
So I have a use case where I need to copy objects of various classes (the classes may vary depending on the input type in a factory).
Here is a sample of what I am trying to do:
public interface DataUtil {

    // the main wrapper
    static Object copyObject(Object payload) {
        if (payload instanceof Human)
            return copyEntry((Human) payload);
        if (payload instanceof Car)
            return copyEntry((Car) payload);
        if (payload instanceof Planet)
            return copyEntry((Planet) payload);
        return payload;
    }

    static Human copyEntry(Human human) {
        return Human.builder()
                .name(human.getName())
                .age(human.getAge())
                .build();
    }

    static Car copyEntry(Car car) {
        return Car.builder()
                .model(car.getModel())
                .brand(car.getBrand())
                .build();
    }

    static Planet copyEntry(Planet planet) {
        // return builder like previous
    }
}
If you look at the copyObject function, it does the job as intended, but the issue is the return type. At present, to stay compatible, it returns an Object, but I would rather return the specific class (say Human or Car, for instance).
Is there a way to get this done with generics (using <T>)? Or is this a bad approach in the first place?
Is there a way to get this done with Generics (using <T>)? Or is this a bad approach in the first place?
It is a bad approach because you receive an Object as the parameter.
You cannot infer the concrete type from that, hence the instanceof checks you used, which is not a clean approach.
Here are two ideas (related enough):
1) Introducing a Copyable interface
You could introduce an interface that the classes of the objects you want to copy implement :
public interface Copyable<T> {
T copy(T t);
}
which could be implemented like this:
public class Human implements Copyable<Human> {
    @Override
    public Human copy(Human t) {
        return Human.builder()
                .name(t.getName())
                .age(t.getAge())
                .build();
    }
}
So the general copyObject() method could look like this:
// the main wrapper
static <T extends Copyable<T>> T copyObject(T payload) {
return payload.copy(payload);
}
And you could use it in this way :
Human human = new Human();
// set some fields ...
Human copiedHuman = copyObject(human); // compile
Car copiedCar = copyObject(human); // doesn't compile
2) Use the visitor pattern
As an alternative, this is also a good case for the visitor pattern: you want to apply processing according to the concrete type of the parameter.
It allows you to group the copy operations together, as in your current code.
The general copyObject() method could rely on a CopyVisitor that does the copy according to the concrete type of the parameter:
@SuppressWarnings("unchecked")
static <T extends Visited> T copyObject(T payload) {
    CopyVisitor visitor = new CopyVisitor();
    payload.accept(visitor);
    return (T) visitor.getCopy();
}
Where CopyVisitor implements a classic Visitor interface :
public interface Visitor {
void visitHuman(Human human);
void visitCar(Car car);
void visitPlanet(Planet planet);
}
in this way :
public class CopyVisitor implements Visitor {

    private Visited copy;

    @Override
    public void visitHuman(Human human) {
        copy = Human.builder()
                .name(human.getName())
                .age(human.getAge())
                .build();
    }

    @Override
    public void visitCar(Car car) {
        copy = Car.builder()
                .model(car.getModel())
                .brand(car.getBrand())
                .build();
    }

    @Override
    public void visitPlanet(Planet planet) {
        //...
    }

    public Visited getCopy() {
        return copy;
    }
}
The visited classes (Car, Human, Planet) would implement a specific interface to "accept" the visitor:
public interface Visited {
void accept(Visitor visitor);
}
such as :
public class Human implements Visited {
    @Override
    public void accept(Visitor visitor) {
        visitor.visitHuman(this);
    }
}
So you can use the copyObject() method in this way:
Human human = new Human();
// set some fields ...
Human copiedHuman = copyObject(human); // compile
Car copiedCar = copyObject(human); // doesn't compile
Unfortunately you have to do some unchecked casts like this:
static <TPayload> TPayload copyObject(Object payload) {
if (payload instanceof Human)
return (TPayload) copyEntry((Human) payload);
if (payload instanceof Car)
return (TPayload) copyEntry((Car) payload);
if (payload instanceof Planet)
return (TPayload) copyEntry((Planet) payload);
return (TPayload) payload;
}
But as mentioned in the comments this does not prevent you from writing:
Number n = DataUtil.copyObject("someString");
If you know what type the Object actually holds, you can do it with:
static <T> T copyObject(Object payload)
{
if (payload instanceof Human)
{
return (T) copyEntry((Human) payload);
}
if (payload instanceof Car)
{
return (T) copyEntry((Car) payload);
}
if (payload instanceof Planet)
{
return (T) copyEntry((Planet) payload);
}
return (T) payload;
}
and then:
Human h1 = new ...
Human h2 = copyObject(h1);
Even if Java's type erasure did not apply, you would need runtime type knowledge in your language to do this, a.k.a. "dependent typing".
So return type overloading, found in some languages, wouldn't help for runtime type switches such as over a List<Object>.
But why do you need this anyway? You will just collect all the object instances after the call in a new heterogeneous List again.
I have this project I'm working on and basically this is what I would like to achieve.
This is what I have:
MyObject obj = MyObject.builder()
.withValue("string")
.withAnotherValue("string")
.build();
MyObject obj = MyObject.builder()
.withValue("string")
.withAnotherValue("string")
.withField("key", "value")
.build();
So the step builder pattern forces the user to use the withValue() method and the withAnotherValue() method in that order. The withField() method is optional and can be used as many times as you want. I followed this website as an example: http://www.svlada.com/step-builder-pattern/
So what I would like to achieve is this:
MyObject obj = MyObject.builder(Type.ROCK)
.withColour("blue")
.withValue("string")
.withAnotherValue("string")
.build();
MyObject obj = MyObject.builder(Type.STONE)
.withWeight("heavy")
.withValue("string")
.withAnotherValue("string")
.withField("key", "value")
.build();
So in the builder() method you'd pass an enum type and, based on the enum, a different set of methods would appear. So for ROCK, withColour(), withValue() and withAnotherValue() are now mandatory, but for STONE, withWeight(), withValue() and withAnotherValue() are mandatory.
Is something like this possible? I have been trying to figure this out for the past two days, but I just can't seem to get it to expose specific methods for each type; it just shows all the methods in the Builder.
Any thoughts and help is much appreciated.
Code:
Enum
public enum Type implements ParameterType<Type> {
ROCK, STONE
}
ParameterType
interface ParameterType<T> {}
MyObject
public class MyObject implements Serializable {

    private static final long serialVersionUID = -4970453769180420689L;

    private List<Field> fields = new ArrayList<>();

    private MyObject() {
    }

    public interface Type {
        Value withValue(String value);
    }

    public interface Value {
        Build withAnotherValue(String anotherValue);
    }

    public interface Build {
        MyObject build();
    }

    public static Type builder(ParameterType<?> type) {
        return new Builder();
    }

    public static class Builder implements Build, Type, Value {

        private final List<Field> fields = new ArrayList<>();

        @Override
        public Build withAnotherValue(String anotherValue) {
            fields.add(new Field("AnotherValue", anotherValue));
            return this;
        }

        @Override
        public Value withValue(String value) {
            fields.add(new Field("Value", value));
            return this;
        }

        @Override
        public MyObject build() {
            MyObject myObject = new MyObject();
            myObject.fields.addAll(this.fields);
            return myObject;
        }
    }
}
This isn't possible using enum, but you could do this with a custom enum-like class:
public final class Type<B extends MyObject.Builder> {
private final Supplier<? extends B> supplier;
private Type(Supplier<? extends B> supplier) {
this.supplier = Objects.requireNonNull(supplier);
}
public B builder() {
return supplier.get();
}
public static final Type<MyObject.RockBuilder> ROCK =
new Type<>(MyObject.RockBuilder::new);
public static final Type<MyObject.StoneBuilder> STONE =
new Type<>(MyObject.StoneBuilder::new);
}
public class MyObject {
// ...
// And this method is probably superfluous at this point.
public static <B extends MyObject.Builder> B builder(Type<? extends B> type) {
return type.builder();
}
}
You could adapt that approach to a step builder easily, but there's a separate issue here. Since each step in a step builder specifies the next step in the return type, you can't re-use step interfaces very easily. You would need to declare, for example, separate interfaces RockValueStep, StoneValueStep, etc. because the interfaces themselves specify the step order.
The only simple way around that would be if the separate types (rock, stone, etc.) only strictly added steps such that e.g. Type.ROCK returns a ColourStep and Type.STONE returns a WeightStep, and both ColourStep and WeightStep return ValueStep:
// Rock builder starts here.
interface ColourStep { ValueStep withColour(String c); }
// Stone builder starts here.
interface WeightStep { ValueStep withWeight(String w); }
// Shared.
interface ValueStep { AnotherValueStep withValue(String v); }
And then:
public final class Type<B /* extends ABuilderStepMarker, possibly */> {
// (Constructor and stuff basically same as before.)
public static final Type<MyObject.ColourStep> ROCK =
new Type<>(/* implementation */::new);
public static final Type<MyObject.WeightStep> STONE =
new Type<>(/* implementation */::new);
}
The reasons this kind of thing can't be done using enum are pretty much:
enum can't be generic:
// This is an error.
enum Type<T> {
}
Although you could declare an abstract method on an enum and override it with a covariant return type, the covariant return type is never visible:
// This is valid code, but the actual type of
// Type.ROCK is just Type, so the return type of
// Type.ROCK.builder() is just MyObject.Builder,
// despite the override.
enum Type {
ROCK {
@Override
public MyObject.RockBuilder builder() {
return new MyObject.RockBuilder();
}
};
public abstract MyObject.Builder builder();
}
Considering you are looking for specific methods for a specific type of builder, having multiple builders, one for each type of MyObject that can be built may work best. You can create an interface that defines the builder and then put the common functionality into an abstract class, from which the individual builders extend. For example:
public interface Builder {
public MyObject build();
}
public abstract class AbstractBuilder implements Builder {

    private final List<Field> fields = new ArrayList<>();

    protected void addField(String key, String value) {
        fields.add(new Field(key, value));
    }

    @Override
    public MyObject build() {
        MyObject myObject = new MyObject();
        myObject.fields.addAll(this.fields);
        return myObject;
    }
}
public class StoneBuilder extends AbstractBuilder {
public StoneBuilder withValue(String value) {
addField("Value", value);
return this;
}
// ...More builder methods...
}
public class RockBuilder extends AbstractBuilder {
public RockBuilder withAnotherValue(String value) {
addField("AnotherValue", value);
return this;
}
// ...More builder methods...
}
This allows you to build MyObject instances in the following manner:
MyObject obj = new RockBuilder()
.withValue("string")
.build();
MyObject obj = new StoneBuilder()
.withAnotherValue("string")
.build();
Your question can be generalised as follows: "How can I write the following method?"
public <T extends AbstractBuilder> T builder(final SomeNonGenericObject object) {
// code goes here
}
And the answer is: "You cannot, because there is no way for the compiler to infer what the type of T is." The only way this is possible is by somehow passing T as a parameter:
public <T extends AbstractBuilder> T builder(final SomeNonGenericObject object, final Class<T> builderClass) {
// code goes here
}
or
public <T extends AbstractBuilder> T builder(final SomeGenericObject<T> object) {
// code goes here
}
For example:
public <T extends AbstractBuilder> T builder(final Supplier<T> supplier) {
    return supplier.get();
}
final Supplier<RockBuilder> rockBuilderSupplier = RockBuilder::new;

builder(rockBuilderSupplier)
        .withColour("blue")
        // etc
Or simply use Justin Albano's answer, which works just as well.
I was reading how to instantiate a generic and after reading and applying this answer; I would like to know what would be the differences between expecting a Supplier<T> vs. expecting a new instance of T.
Example:
abstract class AbstractService<T extends AbstractEntity> {
    protected abstract Supplier<T> makeNewThing(); // supplier is expected

    public T myMethod() {
        T object = makeNewThing().get(); // local object by calling supplier
        object.doStuff();
        return object;
    }
}
class CarService extends AbstractService<Car> {
public Supplier<Car> makeNewThing(){
return Car::new;
}
}
vs.
abstract class AbstractService<T extends AbstractEntity> {
    protected abstract T makeNewThing(); // object is expected, newness is assumed

    public T myMethod() {
        T object = makeNewThing(); // local object by calling constructor
        object.doStuff();
        return object;
    }
}
class CarService extends AbstractService<Car> {
public Car makeNewThing(){
return new Car();
}
}
The only thing I can think of is that expecting a supplier ensures that a new object will be created, but when expecting an object we can only assume that the implementing classes are calling the constructor and not re-using an existing instance.
I'd like to know of other objective differences and possible use cases, if any. Thanks in advance.
Using a Supplier postpones the creation of the instance.
This means that you might avoid a creation of an unnecessary instance.
For example, suppose you pass the output of makeNewThing() to some method.
public void makeNewThingSometimes (T newInstance)
{
if (someCondition) {
this.instance = newInstance;
}
}
public void makeNewThingSometimes (Supplier<T> supplier)
{
if (someCondition) {
this.instance = supplier.get();
}
}
Calling the first variant requires creating an instance of T even if you are not going to use it.
Calling the second variant only creates an instance of T when necessary.
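To make the difference concrete, a small call-site sketch (using Car from the question as T):
// eager: a Car is constructed before the call, even if someCondition turns out to be false
makeNewThingSometimes(new Car());

// lazy: the Car is only constructed inside the method, and only if someCondition is true
makeNewThingSometimes(Car::new);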
Using a Supplier can save both storage (if the created instance requires a significant amount of memory) and time (if the execution of the constructor is expensive).
The only thing I can think of is that expecting a supplier ensures
that a new object will be created,
Not necessarily.
You implement the Supplier in this way :
return SomeEntityImplementation::new;
But you could have implemented it in this other way :
if (myCachedObject != null){
return (()-> myCachedObject);
}
return SomeEntityImplementation::new;
Both ways may be used to return a cached object or create a new one.
One of Supplier's advantages, when the Supplier creates an object, is that the object is actually created only when the Supplier.get() method is invoked.
Note that in your example, using a Supplier doesn't bring any advantage, since in both cases (with or without Supplier) the object creation is already performed lazily: when the factory method is invoked.
To take advantage of it, you should have a method that accepts a Supplier<T> as a parameter, as in the Eran and Dasblinkenlight examples.
Another Supplier advantage is that it lets you implement factories that can return many kinds of things.
Using a Supplier gives shorter, more readable code and, besides that, doesn't rely on Java reflection.
Supposing that you want to create the object from an enum value, you could write:
public enum MyBaseClassFactory {
ENUM_A (A::new),
ENUM_B (B::new),
ENUM_C (C::new),
ENUM_D (D::new);
private Supplier<BaseClass> supplier;
MyBaseClassFactory (Supplier<BaseClass> supplier){
this.supplier = supplier;
}
public BaseClass createObject(){
return supplier.get();
}
}
You could then use it like this:
BaseClass base = MyBaseClassFactory.ENUM_A.createObject();
Without Supplier, you would have to use reflection (which may fail at runtime) or write more verbose and less maintainable code.
For example, with reflection:
public enum MyEnumFactoryClass {
    ENUM_A(A.class), ENUM_B(B.class), ENUM_C(C.class), ENUM_D(D.class);

    private final Class<? extends BaseClass> clazz;

    MyEnumFactoryClass(Class<? extends BaseClass> clazz) {
        this.clazz = clazz;
    }

    public BaseClass createObject() {
        try {
            return clazz.getConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
For example, without reflection but with more verbose code:
public enum MyEnumFactoryClass {
    ENUM_A {
        @Override
        public BaseClass createObject() {
            return new A();
        }
    },
    ENUM_B {
        @Override
        public BaseClass createObject() {
            return new B();
        }
    },
    ENUM_C {
        @Override
        public BaseClass createObject() {
            return new C();
        }
    },
    ENUM_D {
        @Override
        public BaseClass createObject() {
            return new D();
        }
    };

    public abstract BaseClass createObject();
}
You could of course take advantage of Supplier in a similar way by using it with a Map<String, Supplier<BaseClass>>.
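For instance, a rough sketch of that map-based registry (the string keys are arbitrary and just for illustration):
// registry of factories keyed by a string identifier
Map<String, Supplier<BaseClass>> factories = new HashMap<>();
factories.put("A", A::new);
factories.put("B", B::new);

BaseClass base = factories.get("A").get(); // creates an A only when requested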
The first solution is more flexible, because an extra level of indirection in object creation lets users of your class library change the source of new items independently of the ServiceImpl<SomeEntityImplementation> class.
You can make a new Supplier<T> instance without subclassing or recompiling ServiceImpl, because there is an extra level of indirection. ServiceImpl could be implemented as follows:
class ServiceImpl<SomeEntityImplementation> {
    private final Supplier<SomeEntityImplementation> supplier;

    public Supplier<SomeEntityImplementation> makeNewThing() {
        return supplier;
    }

    public ServiceImpl(Supplier<SomeEntityImplementation> s) {
        supplier = s;
    }
}
This makes it possible for users of ServiceImpl to provide their own Supplier<T>, which is not possible with the second approach, in which the source of new items is merged into the implementation of the service itself.
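For example (Car is taken from the question; the shared-instance variant is just to show the flexibility):
// a fresh Car for every call to makeNewThing().get()
ServiceImpl<Car> freshCars = new ServiceImpl<>(Car::new);

// or reuse one shared instance, without touching ServiceImpl at all
Car shared = new Car();
ServiceImpl<Car> sharedCar = new ServiceImpl<>(() -> shared);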