Generic OR instead of AND <T extends Number | CharSequence> - java

Is it possible to generically parameterize a method accepting EITHER ClassA OR InterfaceB ?
Does Not Compile Due to | Pseudocode
public <T extends Number | CharSequence> void orDoer(T someData){ // ... }
i.e. instead of writing multiple method signatures, I would like this one method to accept either a Number or CharSequence as an argument
Should Pass with a Number OR CharSequence argument
orDoer(new Integer(6));
int somePrimitive = 4;
orDoer(somePrimitive);
orDoer("a string of chars");

If you really want to do that, you'll need to wrap your accepted classes inside a custom class of your own. In your example case, probably something like:
public class OrDoerElement {
private final Number numberValue;
private final CharSequence charSequenceValue;
private OrDoerElement(Number number, CharSequence charSequence) {
this.numberValue = number;
this.charSequenceValue = charSequence;
}
public static OrDoerElement fromCharSequence(CharSequence value) {
return new OrDoerElement(null, value);
}
public static OrDoerElement fromNumber(Number value) {
return new OrDoerElement(value, null);
}
}
And your orDoer method becomes:
public void orDoer(OrDoerElement someData) { .... }
Then you can build one of those and use in your method using either:
orDoer(OrDoerElement.fromCharSequence("a string of chars"));
orDoer(OrDoerElement.fromNumber(new Integer(6)));
But honestly, that sounds a bit too complex and too much work just to be able to call a method with different parameter types. Are you sure you can't achieve the same using two methods, and a third method for the common logic?
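For reference, a minimal sketch of that plainer alternative: two public overloads delegating to one private method for the shared logic (doOrWork is just an illustrative name).
public void orDoer(Number someData) {
    doOrWork(someData);
}

public void orDoer(CharSequence someData) {
    doOrWork(someData);
}

private void doOrWork(Object someData) {
    // common logic; someData is known to be either a Number or a CharSequence
}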

Is using an anonymous abstract class an option for you? When I need type safe parameters or return types, I use some variant of the code below. That being said, I agree with the other comments here, and am curious what benefit you really derive when you're enforcing a type safety for a group of objects that don't have all that much in common.
public abstract class Doer<T> {
// note: "do" is a reserved keyword in Java, so the method needs another name
public void doWith(T obj) {
// do some stuff.
}
}
// calling method
new Doer<Number>(){}.doWith(new Integer(5));

For the original question:
public void orDoer(Object someData){
assert someData instanceof Number || someData instanceof CharSequence;
// ...
}
In your more specific case, the assert statement can use introspection to check whether the object has the specifics you want, i.e. look for a constructor taking a String, probe creating a new instance of the object from the incoming object's toString() result, and compare the two for equality:
public void orDoer(Object someData) {
assert isUniconstructable(someData);
}
public static boolean isUniconstructable(Object object) {
try {
return object.equals(object.getClass().getConstructor(String.class)
.newInstance(object.toString()));
} catch (InstantiationException | IllegalAccessException | InvocationTargetException
| NoSuchMethodException | RuntimeException e) {
return false;
}
}
(Because of the exceptions that may be thrown, we need to wrap the assert test into its own function.)
Be aware that introspection may break under Android's ProGuard code shrinking, which rewrites class names: instead of YourClass, an obfuscated name such as Q is stored in the database, and when you want to restore it with a later version of your app that has more classes, Q may refer to something different. See the ProGuard website for more information on this; I just wanted to note that you should be aware of it when using introspection on Android.

Related

Is it possible in java to make generic with unstable number of classes? [duplicate]

I am looking to create a particular type of interface in Java (although this is just as applicable to regular classes). This interface would need to contain some method, say, invoke; it would be called with a varying amount of parameters depending on the generic type arguments supplied.
As an example:
public interface Foo<T...> {
public void invoke(T... args);
}
// In some other class
public static Foo<Float, String, Integer> bar = new Foo<Float, String, Integer>() {
@Override
public void invoke(Float arg1, String arg2, Integer arg3) {
// Do whatever
}
};
To explain, briefly, how this could be used (and provide some context), consider a class Delegator: the class takes a varying number of generic types, and has a single method - invoke, with these parameter types. The method passes on its parameters to an object in a list: an instance of IDelegate, which takes the same generic types. This allows Delegator to choose between several delegate methods (defined inside IDelegate) without having to create a new class for each specific list of parameter types.
Is anything like this available? I have read about variadic templates in C++, but cannot find anything similar in Java. Is any such thing available? If not, what would be the cleanest way to emulate the same data model?
Is anything like this available? I have read about variadic templates
in C++, but cannot find anything similar in Java. Is any such thing
available?
No, this feature is not available in Java.
No, there is nothing like that directly available. However if you use a library with Tuple classes you can simulate it by just making the interface
interface Foo<T> {
void invoke(T t);
}
(This interface is essentially the same as Consumer<T>.)
Then you could do for example
Foo<Tuple<String, Integer, Date, Long>> foo = new Foo<>() {
...
}
You would need a separate Tuple type for each number of parameters. If you have a Tuple class for 4 parameters, but not one for 5, you could squeeze an extra parameter in by using a Pair class.
Foo<Tuple<String, Integer, Date, Pair<Long, BigDecimal>>> foo = ...
By nesting tuple types in this way you get an unlimited number of parameters. However, these workarounds are really ugly, and I would not use them.
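For concreteness, here is what that (admittedly ugly) workaround could look like with a hand-rolled Pair class rather than any particular tuple library; this is only an illustrative sketch.
class Pair<A, B> {
    final A first;
    final B second;
    Pair(A first, B second) { this.first = first; this.second = second; }
}

interface Foo<T> {
    void invoke(T t);
}

class NestedTupleDemo {
    public static void main(String[] args) {
        // "three parameters" simulated by nesting a Pair inside a Pair
        Foo<Pair<String, Pair<Integer, Long>>> foo =
                t -> System.out.println(t.first + " " + t.second.first + " " + t.second.second);
        foo.invoke(new Pair<>("id", new Pair<>(42, 7L)));
    }
}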
Given the context you provided, I would recommend using a List as the parameter. If these parameters have something in common, you can constrain your list to <T extends CommonParent> instead of using List<Object>. If not, you may still want to use a marker interface.
Here is an example.
public class Main {
public static void main(String[] args) {
delegate(asList(new ChildOne(1), new ChildTwo(5), new ChildOne(15)));
}
private static <T extends Parent> void delegate(List<T> list) {
list.forEach(item -> {
switch (item.type) {
case ONE: delegateOne((ChildOne) item); break;
case TWO: delegateTwo((ChildTwo) item); break;
default: throw new UnsupportedOperationException("Type not supported: " + item.type);
}
});
}
private static void delegateOne(ChildOne childOne) {
System.out.println("child one: x=" + childOne.x);
}
private static void delegateTwo(ChildTwo childTwo) {
System.out.println("child two: abc=" + childTwo.abc);
}
}
public class Parent {
public final Type type;
public Parent(Type type) {
this.type = type;
}
}
public enum Type {
ONE, TWO
}
public class ChildOne extends Parent {
public final int x;
public ChildOne(int x) {
super(Type.ONE);
this.x = x;
}
}
public class ChildTwo extends Parent {
public final int abc;
public ChildTwo(int abc) {
super(Type.TWO);
this.abc = abc;
}
}
The biggest flaw of this solution is that children have to specify their type via an enum that must correspond to the casts in the switch statement, so whenever you change one of those two places you have to remember to change the other, because the compiler will not tell you. You will only find such a mistake by running the code and executing the specific branch, so test-driven development is recommended.

Java casting an object passed to method to its original type

I have a list called itemsData of objects of class EtcStruct, but the class can differ depending on the file I want to use (the class is full of fields with setters and getters):
ObservableList<EtcStruct> itemsData = FXCollections.observableArrayList();
I'm passing it to a method that's supposed to work for any object type I choose and run the invoked method from the file.
public static void parseToFile(ObservableList itemsData){
EtcStruct itemObject = (EtcStruct) itemsData.get(0);
System.out.print((int) reflectedmethod.invoke(itemObject));
}
The code above works, but what I want to achieve is to make the method work without hard-coding the object type, so it stays flexible for whatever struct class I plan to use.
I tried passing the struct's Class and calling .getClass(); it returns the original type, but I don't know what to do with it to create a new object of itemsData's original type and cast the itemsData element.
public static void parseToFile(ObservableList itemsData,Class c){
Object itemObject = c.newInstance();
Object newobject = curClass.newInstance();
newobject = c.cast(itemsList.get(0));
}
The above seemed dumb to me and obviously didn't work.
After reading your comment I understand better why one would use reflection in your case. A GUI builder/editor is an example where reflection is used to provide an interface to set/get the values of components. Still, IMHO, reflection isn't a tool you would design for when you own the classes and are the primary designer. If possible you should strive for something more like this:
interface Parsable {
default int parse() {
System.out.println("Here I do something basic");
return 0;
}
}
class BasicStruct implements Parsable { }
class EtcStruct implements Parsable {
@Override
public int parse() {
System.out.println("Here I do something specific to an EtcStruct");
return 1;
}
}
// If some structs have a parent-child relationship
// you can alternatively `extend EtcStruct` for example.
class OtherStruct extends EtcStruct {
@Override
public int parse() {
super.parse();
System.out.println("Here I do something specific to an OtherStruct");
return 2;
}
}
static void parseToFile(Parsable parsable) {
System.out.println(parsable.parse());
}
// If you use a generic with a specific class you don't
// have to guess or care which kind it is!
static void parseToFile(ObservableList<Parsable> parsables) {
for (Parsable p : parsables) {
parseToFile(p);
}
}
public static void main(String[] args) {
ObservableList<Parsable> parsables = FXCollections.observableArrayList();
parsables.add(new BasicStruct());
parsables.add(new EtcStruct());
parsables.add(new OtherStruct());
parseToFile(parsables);
}
Output:
Here I do something basic
0
Here I do something specific to an EtcStruct
1
Here I do something specific to an EtcStruct
Here I do something specific to an OtherStruct
2
Of course, this is just an example that needs to be altered to meet your needs.
But what I still don't get is: if you're able to parse from a file, why can't you parse to one? Nonetheless, I slapped some code together to show you how I might parse an object to a file, manually, when dealing with Objects only.
The idea is to satisfy a bean-like contract. That is, each structure should provide a parameter-less constructor, all fields you want managed by reflection will follow Java naming convention and will have both a public setter and getter.
Don't get caught up in the file writing; that will be determined by your needs. Just notice that by following this convention I can treat any Object as a parsable structure. A less refined version is here for reference:
public void parseToFile(Object object) throws IOException, InvocationTargetException, IllegalAccessException {
FileOutputStream fos = new FileOutputStream("example" + object.getClass().getSimpleName());
List<Method> getters = Arrays.stream(object.getClass().getMethods())
.filter(method -> method.getName().startsWith("get") && !method.getName().endsWith("Class"))
.collect(Collectors.toList());
for (Method getter : getters) {
String methodName = getter.getName();
String key = String.valueOf(Character.toLowerCase(methodName.charAt(3))) +
methodName.substring(4, methodName.length());
fos.write((key + " : " + String.valueOf(getter.invoke(object)) + "\n").getBytes());
}
fos.close();
}
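For example, a hypothetical bean that satisfies this contract and could be passed straight in (the names are illustrative):
public class ExampleStruct {
    private String name;
    private int count;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getCount() { return count; }
    public void setCount(int count) { this.count = count; }
}

// Usage:
// ExampleStruct s = new ExampleStruct();
// s.setName("foo");
// s.setCount(3);
// parseToFile(s);   // writes lines like "name : foo" and "count : 3" to exampleExampleStruct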
I think you can still use generics to keep static typing. Try parameterizing your parseToFile function. Here is an example:
public static void parseToFile(ObservableList<EtcStruct> itemsData){
EtcStruct itemObject = itemsData.get(0);
System.out.print((int) reflectedmethod.invoke(itemObject));
}

Which contract is satisfied [duplicate]

If I have two interfaces, both quite different in their purposes but with the same method signature, how do I make a class implement both without being forced to write a single method that serves both interfaces, along with some convoluted logic that checks which interface the call is being made for and invokes the proper code?
In C#, this is overcome by what is called explicit interface implementation. Is there any equivalent in Java?
No, there is no way to implement the same method in two different ways in one class in Java.
That can lead to many confusing situations, which is why Java has disallowed it.
interface ISomething {
void doSomething();
}
interface ISomething2 {
void doSomething();
}
class Impl implements ISomething, ISomething2 {
public void doSomething() {} // There can only be one implementation of this method.
}
What you can do is compose a class out of two classes that each implement a different interface. Then that one class will have the behavior of both interfaces.
class CompositeClass {
ISomething class1;
ISomething2 class2;
void doSomething1(){class1.doSomething();}
void doSomething2(){class2.doSomething();}
}
There's no real way to solve this in Java. You could use inner classes as a workaround:
interface Alfa { void m(); }
interface Beta { void m(); }
class AlfaBeta implements Alfa {
private int value;
public void m() { ++value; } // Alfa.m()
public Beta asBeta() {
return new Beta(){
public void m() { --value; } // Beta.m()
};
}
}
Although it doesn't allow casts from AlfaBeta to Beta, downcasts are generally evil anyway; and if it can be expected that an Alfa instance often has a Beta aspect too, and for some reason (usually optimization is the only valid reason) you want to be able to convert it to Beta, you could make a sub-interface of Alfa with a Beta asBeta() method in it.
If you are encountering this problem, it is most likely because you are using inheritance where you should be using delegation. If you need to provide two different, albeit similar, interfaces for the same underlying model of data, then you should use a view to cheaply provide access to the data using some other interface.
To give a concrete example for the latter case, suppose you want to implement both Collection and MyCollection (which does not inherit from Collection and has an incompatible interface). You could provide a Collection getCollectionView() and MyCollection getMyCollectionView() functions which provide a light-weight implementation of Collection and MyCollection, using the same underlying data.
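A rough sketch of that view idea; MyCollection here is a made-up incompatible interface, purely to show two views backed by the same data.
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

interface MyCollection {
    int count();
    String item(int index);
}

class DataModel {
    private final List<String> items = new ArrayList<>();

    public Collection<String> getCollectionView() {
        return Collections.unmodifiableList(items);   // cheap view, no copying
    }

    public MyCollection getMyCollectionView() {
        return new MyCollection() {                   // another view over the same list
            public int count() { return items.size(); }
            public String item(int index) { return items.get(index); }
        };
    }
}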
For the former case... suppose you really want an array of integers and an array of strings. Instead of inheriting from both List<Integer> and List<String>, you should have one member of type List<Integer> and another member of type List<String>, and refer to those members, rather than try to inherit from both. Even if you only needed a list of integers, it is better to use composition/delegation over inheritance in this case.
The "classical" Java problem also affects my Android development...
The reason seems to be simple:
More frameworks/libraries you have to use, more easily things can be out of control...
In my case, I have a BootStrapperApp class inherited from android.app.Application,
whereas the same class should also implement a Platform interface of a MVVM framework in order to get integrated.
Method collision occurred on a getString() method, which is announced by both interfaces and should have differenet implementation in different contexts.
The workaround (ugly..IMO) is using an inner class to implement all Platform methods, just because of one minor method signature conflict...in some case, such borrowed method is even not used at all (but affected major design semantics).
I tend to agree C#-style explicit context/namespace indication is helpful.
The only solution that came to my mind is using reference objects to the one on which you want to implement multiple interfaces.
E.g., supposing you have two interfaces to implement:
public interface Framework1Interface {
void method(Object o);
}
and
public interface Framework2Interface {
void method(Object o);
}
you can enclose them into two Facador objects:
public class Facador1 implements Framework1Interface {
private final ObjectToUse reference;
public static Framework1Interface Create(ObjectToUse ref) {
return new Facador1(ref);
}
private Facador1(ObjectToUse refObject) {
this.reference = refObject;
}
@Override
public boolean equals(Object obj) {
if (obj instanceof Framework1Interface) {
return this == obj;
} else if (obj instanceof ObjectToUse) {
return reference == obj;
}
return super.equals(obj);
}
@Override
public void method(Object o) {
reference.methodForFrameWork1(o);
}
}
and
public class Facador2 implements Framework2Interface {
private final ObjectToUse reference;
public static Framework2Interface Create(ObjectToUse ref) {
return new Facador2(ref);
}
private Facador2(ObjectToUse refObject) {
this.reference = refObject;
}
@Override
public boolean equals(Object obj) {
if (obj instanceof Framework2Interface) {
return this == obj;
} else if (obj instanceof ObjectToUse) {
return reference == obj;
}
return super.equals(obj);
}
@Override
public void method(Object o) {
reference.methodForFrameWork2(o);
}
}
In the end, the class you wanted should look something like:
public class ObjectToUse {
private Framework1Interface facFramework1Interface;
private Framework2Interface facFramework2Interface;
public ObjectToUse() {
}
public Framework1Interface getAsFramework1Interface() {
if (facFramework1Interface == null) {
facFramework1Interface = Facador1.Create(this);
}
return facFramework1Interface;
}
public Framework2Interface getAsFramework2Interface() {
if (facFramework2Interface == null) {
facFramework2Interface = Facador2.Create(this);
}
return facFramework2Interface;
}
public void methodForFrameWork1(Object o) {
}
public void methodForFrameWork2(Object o) {
}
}
You can now use the getAs* methods to "expose" your class.
You can use an Adapter pattern in order to make these work. Create two adapter for each interface and use that. It should solve the problem.
All well and good when you have total control over all of the code in question and can implement this upfront.
Now imagine you have an existing public class used in many places with a method
public class MyClass{
private String name;
MyClass(String name){
this.name = name;
}
public String getName(){
return name;
}
}
Now you need to pass it into the off-the-shelf WizzBangProcessor, which requires classes to implement the WBPInterface... which also has a getName() method, but instead of your concrete implementation, this interface expects the method to return the name of a type of Wizz Bang Processing.
In C# it would be trivial:
public class MyClass : WBPInterface{
private String name;
String WBPInterface.getName(){
return "MyWizzBangProcessor";
}
MyClass(String name){
this.name = name;
}
public String getName(){
return name;
}
}
In Java, though, you are going to have to identify every point in the existing deployed code base where you need to convert from one interface to the other. Sure, the WizzBangProcessor company should have used getWizzBangProcessName(), but they are developers too. In their context getName was fine. Actually, outside of Java, most other OO-based languages support this. Java is rare in forcing all interfaces to be implemented with the same method name.
Most other languages have a compiler that is more than happy to take an instruction saying "this method in this class, which matches the signature of this method in this implemented interface, is its implementation". After all, the whole point of defining interfaces is to allow the definition to be abstracted from the implementation. (Don't even get me started on default methods in Java interfaces, let alone default overriding... because sure, every component designed for a road car should be able to get slammed into a flying car and just work; hey, they are both cars. I'm sure the default functionality of, say, your sat nav will not be affected by default pitch and roll inputs, because cars only yaw!)

Avoiding instanceof in Java

Having a chain of "instanceof" operations is considered a "code smell". The standard answer is "use polymorphism". How would I do it in this case?
There are a number of subclasses of a base class; none of them are under my control. An analogous situation would be with the Java classes Integer, Double, BigDecimal etc.
if (obj instanceof Integer) {NumberStuff.handle((Integer)obj);}
else if (obj instanceof BigDecimal) {BigDecimalStuff.handle((BigDecimal)obj);}
else if (obj instanceof Double) {DoubleStuff.handle((Double)obj);}
I do have control over NumberStuff and so on.
I don't want to use many lines of code where a few lines would do. (Sometimes I make a HashMap mapping Integer.class to an instance of IntegerStuff, BigDecimal.class to an instance of BigDecimalStuff etc. But today I want something simpler.)
I'd like something as simple as this:
public static handle(Integer num) { ... }
public static handle(BigDecimal num) { ... }
But Java just doesn't work that way.
I'd like to use static methods when formatting. The things I'm formatting are composite, where a Thing1 can contain an array Thing2s and a Thing2 can contain an array of Thing1s. I had a problem when I implemented my formatters like this:
class Thing1Formatter {
private static Thing2Formatter thing2Formatter = new Thing2Formatter();
public void format(Thing1 thing) {
thing2Formatter.format(thing.innerThing2);
}
}
class Thing2Formatter {
private static Thing1Formatter thing1Formatter = new Thing1Formatter();
public void format(Thing2 thing) {
thing1Formatter.format(thing.innerThing1);
}
}
Yes, I know the HashMap and a bit more code can fix that too. But the "instanceof" seems so readable and maintainable by comparison. Is there anything simple but not smelly?
Note added 5/10/2010:
It turns out that new subclasses will probably be added in the future, and my existing code will have to handle them gracefully. The HashMap on Class won't work in that case because the Class won't be found. A chain of if statements, starting with the most specific and ending with the most general, is probably the best after all:
if (obj instanceof SubClass1) {
// Handle all the methods and properties of SubClass1
} else if (obj instanceof SubClass2) {
// Handle all the methods and properties of SubClass2
} else if (obj instanceof Interface3) {
// Unknown class but it implements Interface3
// so handle those methods and properties
} else if (obj instanceof Interface4) {
// likewise. May want to also handle case of
// object that implements both interfaces.
} else {
// New (unknown) subclass; do what I can with the base class
}
You might be interested in this entry from Steve Yegge's Amazon blog: "when polymorphism fails". Essentially he's addressing cases like this, when polymorphism causes more trouble than it solves.
The issue is that to use polymorphism you have to make the logic of "handle" part of each 'switching' class - i.e. Integer etc. in this case. Clearly this is not practical. Sometimes it isn't even logically the right place to put the code. He recommends the 'instanceof' approach as being the lesser of several evils.
As with all cases where you are forced to write smelly code, keep it buttoned up in one method (or at most one class) so that the smell doesn't leak out.
As highlighted in the comments, the visitor pattern would be a good choice, but without direct control over the target/acceptor/visitee you can't implement that pattern directly. Here's one way the visitor pattern could still be used even though you have no direct control over the subclasses, by using wrappers (taking Integer as an example):
public class IntegerWrapper {
private Integer integer;
public IntegerWrapper(Integer anInteger){
integer = anInteger;
}
//Access the integer directly such as
public Integer getInteger() { return integer; }
//or method passthrough...
public int intValue() { return integer.intValue(); }
//then implement your visitor:
public void accept(NumericVisitor visitor) {
visitor.visit(this);
}
}
Of course, wrapping a final class might be considered a smell of its own, but maybe it's a good fit with your subclasses. Personally, I don't think instanceof is that bad a smell here, especially if it is confined to one method, and I would happily use it (probably over my own suggestion above). As you say, it's quite readable, type-safe and maintainable. As always, keep it simple.
Instead of a huge if, you can put the instances you handle in a map (key: class, value: handler).
If the lookup by key returns null, call a special handler method which tries to find a matching handler (for example by calling isInstance() on every key in the map).
When a handler is found, register it under the new key.
This makes the general case fast and simple and allows you to handle inheritance.
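A minimal sketch of that lookup-with-fallback registry, assuming a simple Handler interface (the names are illustrative):
import java.util.HashMap;
import java.util.Map;

interface Handler {
    void handle(Object o);
}

class HandlerRegistry {
    private final Map<Class<?>, Handler> handlers = new HashMap<>();

    public void register(Class<?> type, Handler handler) {
        handlers.put(type, handler);
    }

    public Handler find(Object o) {
        Handler handler = handlers.get(o.getClass());
        if (handler == null) {
            // slow path: look for a registered supertype via isInstance()
            for (Map.Entry<Class<?>, Handler> entry : handlers.entrySet()) {
                if (entry.getKey().isInstance(o)) {
                    handler = entry.getValue();
                    handlers.put(o.getClass(), handler);   // register under the new key
                    break;
                }
            }
        }
        return handler;   // may still be null if nothing matches
    }
}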
You can use reflection:
public final class Handler {
public static void handle(Object o) {
try {
Method handler = Handler.class.getMethod("handle", o.getClass());
handler.invoke(null, o);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public static void handle(Integer num) { /* ... */ }
public static void handle(BigDecimal num) { /* ... */ }
// to handle new types, just add more handle methods...
}
You can expand on the idea to generically handle subclasses and classes that implement certain interfaces.
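For instance, one way to extend that dispatcher to subclasses is to walk up the class hierarchy until a matching handle overload is found; this is a sketch under that assumption, not part of the original answer.
public static void handle(Object o) {
    Class<?> c = o.getClass();
    while (c != null && c != Object.class) {          // stop before reaching handle(Object) itself
        try {
            Handler.class.getMethod("handle", c).invoke(null, o);
            return;
        } catch (NoSuchMethodException e) {
            c = c.getSuperclass();                     // try the parent type next
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    throw new IllegalArgumentException("No handler for " + o.getClass());
}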
I think that the best solution is a HashMap with Class as key and Handler as value. Note that the HashMap-based solution runs in constant time, Θ(1), while the smelly if-instanceof-else chain runs in linear time, O(N), where N is the number of links in the chain (i.e. the number of different classes to be handled). So asymptotically the HashMap-based solution is faster by a factor of N.
Consider that you need to handle different descendants of the Message class differently: Message1, Message2, etc. Below is a code snippet for HashMap-based handling.
public class YourClass {
private class Handler {
public void go(Message message) {
// the default implementation just notifies that it doesn't handle the message
System.out.printf(
"Possibly due to a typo, an empty handler is set to handle a message of type %s : %s%n",
message.getClass().toString(), message.toString());
}
}
private Map<Class<? extends Message>, Handler> messageHandling =
new HashMap<Class<? extends Message>, Handler>();
// Constructor of your class is a place to initialize the message handling mechanism
public YourClass() {
messageHandling.put(Message1.class, new Handler() { public void go(Message message) {
//TODO: IMPLEMENT HERE SOMETHING APPROPRIATE FOR Message1
} });
messageHandling.put(Message2.class, new Handler() { public void go(Message message) {
//TODO: IMPLEMENT HERE SOMETHING APPROPRIATE FOR Message2
} });
// etc. for Message3, etc.
}
// The method in which you receive a variable of base class Message, but you need to
// handle it in accordance to of what derived type that instance is
public void handleMessage(Message message) {
Handler handler = messageHandling.get(message.getClass());
if (handler == null) {
System.out.printf(
"Don't know how to handle message of type %s : %s%n",
message.getClass().toString(), message.toString());
} else {
handler.go(message);
}
}
}
More info on usage of variables of type Class in Java: http://docs.oracle.com/javase/tutorial/reflect/class/classNew.html
You could consider the Chain of Responsibility pattern. For your first example, something like:
public abstract class StuffHandler {
private StuffHandler next;
public final boolean handle(Object o) {
boolean handled = doHandle(o);
if (handled) { return true; }
else if (next == null) { return false; }
else { return next.handle(o); }
}
public void setNext(StuffHandler next) { this.next = next; }
protected abstract boolean doHandle(Object o);
}
public class IntegerHandler extends StuffHandler {
@Override
protected boolean doHandle(Object o) {
if (!(o instanceof Integer)) {
return false;
}
NumberHandler.handle((Integer) o);
return true;
}
}
and then similarly for your other handlers. Then it's a case of stringing together the StuffHandlers in order (most specific to least specific, with a final 'fallback' handler), and your dispatcher code is just firstHandler.handle(o);.
(An alternative is to, rather than using a chain, just have a List<StuffHandler> in your dispatcher class, and have it loop through the list until handle() returns true).
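A small sketch of that list-based variant, assuming the StuffHandler class from above with no next handler set:
import java.util.ArrayList;
import java.util.List;

class Dispatcher {
    private final List<StuffHandler> handlers = new ArrayList<>();

    public void addHandler(StuffHandler handler) {
        handlers.add(handler);                 // order matters: most specific first
    }

    public boolean dispatch(Object o) {
        for (StuffHandler handler : handlers) {
            if (handler.handle(o)) {           // stop at the first handler that accepts the object
                return true;
            }
        }
        return false;                          // nothing matched
    }
}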
Just go with the instanceof. All the workarounds seem more complicated. Here is a blog post that talks about it: http://www.velocityreviews.com/forums/t302491-instanceof-not-always-bad-the-instanceof-myth.html
I have solved this problem using reflection (around 15 years back, in the pre-generics era).
GenericClass object = (GenericClass) Class.forName(specificClassName).newInstance();
I have defined one Generic Class ( abstract Base class). I have defined many concrete implementations of base class. Each concrete class will be loaded with className as parameter. This class name is defined as part of configuration.
Base class defines common state across all concrete classes and concrete classes will modify the state by overriding abstract rules defined in base class.
At that time, I didn't know the name of this mechanism; it is known as reflection.
A few more alternatives are listed in this article: Map and enum, apart from reflection.
Add a method in BaseClass which returns the name of the class, and override that method in each subclass with the specific class name:
public class BaseClass{
// properties and methods
public String classType(){
return BaseClass.class.getSimpleName();
}
}
public class SubClass1 extends BaseClass{
// properties and methods
@Override
public String classType(){
return SubClass1.class.getSimpleName();
}
}
public class SubClass2 extends BaseClass{
// properties and methods
@Override
public String classType(){
return SubClass2.class.getSimpleName();
}
}
Now use a switch statement in the following way:
switch(obj.classType()){
case "SubClass1":
// do subclass1 task
break;
case "SubClass2":
// do subclass2 task
break;
}
What I use for Java 8:
void checkClass(Object object) {
if (object.getClass().toString().equals("class MyClass")) {
//your logic
}
}

Type-safe method reflection in Java

Is there any practical way to reference a method on a class in a type-safe manner? A basic example is if I wanted to create something like the following utility function:
public Result validateField(Object data, String fieldName,
ValidationOptions options) { ... }
In order to call it, I would have to do:
validateField(data, "phoneNumber", options);
Which forces me to either use a magic string, or declare a constant somewhere with that string.
I'm pretty sure there's no way to get around that with the stock Java language, but is there some kind of (production-grade) pre-compiler or alternative compiler that may offer a workaround (similar to how AspectJ extends the Java language)? It would be nice to do something like the following instead:
public Result validateField(Object data, Method method,
ValidationOptions options) { ... }
And call it with:
validateField(data, Person.phoneNumber.getter, options);
As others mention, there is no real way to do this... and I've not seen a precompiler that supports it. The syntax would be interesting, to say the least. Even in your example, it could only cover a small subset of the potential reflective possibilities that a user might want, since it won't handle non-standard accessors or methods that take arguments, etc.
Even if it's impossible to check at compile time, if you want bad code to fail as soon as possible then one approach is to resolve referenced Method objects at class initialization time.
Imagine you have a utility method for looking up Method objects that throws an error or runtime exception when the lookup fails:
public static Method lookupMethod( Class c, String name, Class... args ) {
// do the lookup or throw an unchecked exception with a really good error message
try {
return c.getMethod( name, args );
} catch ( NoSuchMethodException e ) {
throw new IllegalArgumentException( "No method " + name + " on " + c, e );
}
}
Then in your classes, have constants to preresolve the methods you will use:
public class MyClass {
private static final Method GET_PHONE_NUM = MyUtils.lookupMethod( PhoneNumber.class, "getPhoneNumber" );
....
public void someMethod() {
validateField(data, GET_PHONE_NUM, options);
}
}
At least then it will fail as soon as MyClass is loaded the first time.
I use reflection a lot, especially bean property reflection and I've just gotten used to late exceptions at runtime. But that style of bean code tends to error late for all kinds of other reasons, being very dynamic and all. For something in between, the above would help.
There isn't anything in the language yet - but part of the closures proposal for Java 7 includes method literals, I believe.
I don't have any suggestions beyond that, I'm afraid.
Check out https://proxetta.jodd.org/refs/methref. It uses the Jodd proxy library (Proxetta) to proxy your type. Not sure about its performance characteristics, but it does provide type safety.
An example: Suppose Str.class has method .boo(), and you want to get its name as the string "boo":
String methodName = Methref.of(Str.class).name(Str::boo);
There's more to the API than the example above: https://oblac.github.io/jodd-site/javadoc/jodd/methref/Methref.html
Is there any practical way to reference a method on a class in a type-safe manner?
First of all, reflection is type-safe. It is just that it is dynamically typed, not statically typed.
So, assuming that you want a statically typed equivalent of reflection, the theoretical answer is that it is impossible. Consider this:
Method m;
if (arbitraryFunction(obj)) {
m = obj.getClass().getDeclaredMethod("foo", ...);
} else {
m = obj.getClass().getDeclaredMethod("bar", ...);
}
Can we do this so that runtime type exceptions cannot happen? In general NO, since this would entail proving that arbitraryFunction(obj) terminates. (This is equivalent to the Halting Problem, which is proven to be unsolvable in general, and is intractable using state-of-the-art theorem-proving technology ... AFAIK.)
And I think that this road-block would apply to any approach where you could inject arbitrary Java code into the logic that is used to reflectively select a method from an object's class.
To my mind, the only moderately practical approach at the moment would be to replace the reflective code with something that generates and compiles Java source code. If this process occurs before you "run" the application, you've satisfied the requirement for static type-safety.
I was more asking about reflection in which the result is always the same; i.e., Person.class.getMethod("getPhoneNumber", null) would always return the same method, and it's entirely possible to resolve it at compile time.
What happens if after compiling the class containing this code, you change Person to remove the getPhoneNumber method?
The only way you can be sure that you can resolve getPhoneNumber reflectively is if you can somehow prevent Person from being changed. But you can't do that in Java. Runtime binding of classes is a fundamental part of the language.
(For record, if you did that for a method that you called non-reflectively, you would get an IncompatibleClassChangeError of some kind when the two classes were loaded ...)
It has been pointed out that in Java 8 and later you could declare your validator something like this:
public Result validateField(Object data,
SomeFunctionalInterface function,
ValidationOptions options) { ... }
where SomeFunctionalInterface corresponds to the (loosely speaking) common signature of the methods you are validating.
Then you can call it with a method reference; e.g.
validateField(data, SomeClass::someMethod, options)
This approach is statically type-safe. You will get a compilation error if SomeClass doesn't have someMethod or if it doesn't conform to SomeFunctionalInterface.
But you can't use a string to denote the method name. Looking up a method by name would entail either reflection ... or something else that side-steps static (i.e. compile time / load time) type safety.
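For concreteness, a small sketch of what that Java 8 variant might look like, assuming the validated members are no-argument getters; FieldAccessor is an illustrative name, and Result/ValidationOptions are the placeholder types from the question.
@FunctionalInterface
interface FieldAccessor<T, R> {
    R access(T target);
}

class FieldValidator {
    public <T, R> Result validateField(T data, FieldAccessor<T, R> accessor, ValidationOptions options) {
        R value = accessor.access(data);   // statically checked: Person::getPhoneNumber must exist and match
        // ... validate value according to options
        return new Result();
    }
}

// usage:
// validator.validateField(person, Person::getPhoneNumber, options);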
Java lacks the syntax sugar to do something as nice as Person.phoneNumber.getter. But if Person is an interface, you could record the getter method using a dynamic proxy. You could record methods on non-final classes as well using CGLib, the same way Mockito does it.
MethodSelector<Person> selector = new MethodSelector<Person>(Person.class);
selector.select().getPhoneNumber();
validateField(data, selector.getMethod(), options);
Code for MethodSelector: https://gist.github.com/stijnvanbael/5965609
Inspired by mocking frameworks, we could dream up the following syntax:
validator.validateField(data, options).getPhoneNumber();
Result validationResult = validator.getResult();
The trick is the generic declaration:
class Validator {
public <T> T validateField(T data, ValidationOptions options) {...}
}
Now the return type of the method is the same as your data object's type and you can use code completion (and static checking) to access all the methods, including the getter methods.
As a downside, the code isn't quite intuitive to read, since the call to the getter doesn't actually get anything, but instead instructs the validator to validate the field.
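A minimal sketch of how that recording trick could work, assuming the data type is an interface (java.lang.reflect.Proxy cannot proxy concrete classes); doValidate, Result and ValidationOptions are illustrative names.
import java.lang.reflect.Proxy;

class Validator {
    private Result result;

    @SuppressWarnings("unchecked")
    public <T> T validateField(T data, ValidationOptions options) {
        Class<?> type = data.getClass().getInterfaces()[0];
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[] { type },
                (proxy, method, args) -> {
                    // the recorded getter call only tells us which field to validate
                    result = doValidate(data, method.getName(), options);
                    return null;   // fine for reference-typed getters; primitive returns would need defaults
                });
    }

    public Result getResult() { return result; }

    private Result doValidate(Object data, String getterName, ValidationOptions options) {
        // ... look up the real value on data (e.g. via reflection) and validate it
        return new Result();
    }
}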
Another possible option would be to annotate the fields in your data class:
class FooData {
@Validate(new ValidationOptions(...))
private PhoneNumber phoneNumber;
}
And then just call:
FooData data;
validator.validate(data);
to validate all fields according to the annotated options.
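A rough sketch of that annotation-driven variant; note that a Java annotation cannot take new ValidationOptions(...) as a value, so this sketch (with made-up names) uses a plain String attribute instead.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Validate {
    String options() default "";
}

class AnnotationValidator {
    public void validate(Object data) throws IllegalAccessException {
        for (Field field : data.getClass().getDeclaredFields()) {
            Validate rule = field.getAnnotation(Validate.class);
            if (rule == null) continue;          // only annotated fields get validated
            field.setAccessible(true);
            Object value = field.get(data);
            // ... apply the options in rule.options() to value
        }
    }
}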
The framework picklock lets you do the following:
class Data {
private PhoneNumber phoneNumber;
}
interface OpenData {
PhoneNumber getPhoneNumber(); //is mapped to the field phoneNumber
}
Object data = new Data();
PhoneNumber number = ObjectAccess
.unlock(data)
.features(OpenData.class)
.getPhoneNumber();
This works in a similar way for setters and private methods. Of course, this is only a wrapper for reflection, but the exception occurs at unlock time, not at call time. If you need it at build time, you could write a unit test with:
assertThat(Data.class, providesFeaturesOf(OpenData.class));
I found a way to get the Method instance using lambdas. Currently it works only on interface methods, though.
It works using net.jodah:typetools which is a very lightweight library.
https://github.com/jhalterman/typetools
public final class MethodResolver {
private interface Invocable<I> {
void invokeWithParams(I instance, Class<?>[] parameterTypes) throws Throwable;
}
interface ZeroParameters<I, R> extends Invocable<I> {
R invoke(I instance) throws Throwable;
@Override
default void invokeWithParams(I instance, Class<?>[] parameterTypes) throws Throwable {
invoke(instance);
}
}
public static <I, R> Method toMethod0(ZeroParameters<I, R> call) {
return toMethod(ZeroParameters.class, call, 1);
}
interface OneParameters<I, P1, R> extends Invocable<I> {
R invoke(I instance, P1 p1) throws Throwable;
@Override
default void invokeWithParams(I instance, Class<?>[] parameterTypes) throws Throwable {
invoke(instance, param(parameterTypes[1]));
}
}
public static <I, P1, R> Method toMethod1(OneParameters<I, P1, R> call) {
return toMethod(OneParameters.class, call, 2);
}
interface TwoParameters<I, P1, P2, R> extends Invocable<I> {
R invoke(I instance, P1 p1, P2 p2) throws Throwable;
@Override
default void invokeWithParams(I instance, Class<?>[] parameterTypes) throws Throwable {
invoke(instance, param(parameterTypes[1]), param(parameterTypes[2]));
}
}
public static <I, P1, P2, R> Method toMethod2(TwoParameters<I, P1, P2, R> call) {
return toMethod(TwoParameters.class, call, 3);
}
private static final Map<Class<?>, Object> parameterMap = new HashMap<>();
static {
parameterMap.put(Boolean.class, false);
parameterMap.put(Byte.class, (byte) 0);
parameterMap.put(Short.class, (short) 0);
parameterMap.put(Integer.class, 0);
parameterMap.put(Long.class, (long) 0);
parameterMap.put(Float.class, (float) 0);
parameterMap.put(Double.class, (double) 0);
}
@SuppressWarnings("unchecked")
private static <T> T param(Class<?> type) {
return (T) parameterMap.get(type);
}
private static <I> Method toMethod(Class<?> callType, Invocable<I> call, int responseTypeIndex) {
Class<?>[] typeData = TypeResolver.resolveRawArguments(callType, call.getClass());
Class<?> instanceClass = typeData[0];
Class<?> responseType = responseTypeIndex != -1 ? typeData[responseTypeIndex] : Void.class;
AtomicReference<Method> ref = new AtomicReference<>();
I instance = createProxy(instanceClass, responseType, ref);
try {
call.invokeWithParams(instance, typeData);
} catch (final Throwable e) {
throw new IllegalStateException("Failed to call no-op proxy", e);
}
return ref.get();
}
@SuppressWarnings("unchecked")
private static <I> I createProxy(Class<?> instanceClass, Class<?> responseType,
AtomicReference<Method> ref) {
return (I) Proxy.newProxyInstance(MethodResolver.class.getClassLoader(),
new Class[] {instanceClass},
(proxy, method, args) -> {
ref.set(method);
return parameterMap.get(responseType);
});
}
}
Usage:
Method method = MethodResolver.toMethod2(SomeIFace::foobar);
System.out.println(method); // public abstract example.Result example.SomeIFace.foobar(java.lang.String,boolean)
Method get = MethodResolver.<Supplier, Object>toMethod0(Supplier::get);
System.out.println(get); // public abstract java.lang.Object java.util.function.Supplier.get()
Method accept = MethodResolver.<IntFunction, Integer, Object>toMethod1(IntFunction::apply);
System.out.println(accept); // public abstract java.lang.Object java.util.function.IntFunction.apply(int)
Method apply = MethodResolver.<BiFunction, Object, Object, Object>toMethod2(BiFunction::apply);
System.out.println(apply); // public abstract java.lang.Object java.util.function.BiFunction.apply(java.lang.Object,java.lang.Object)
Unfortunately you have to create a new interface and method based on the parameter count and whether the method returns void or not.
However, if you have a somewhat fixed/limited method signature/parameter types, then this becomes quite handy.
