Given a java.lang.reflect.Method object, is there any way to determine whether the method is purely functional (i.e., given the same input it will always produce the same output, and it is stateless; in other words, the function does not depend on its environment)?
No, there's no way to do it.
Reflection does not allow you to inspect the actual code behind the method.
And even if that were possible, the actual analysis would probably be ... tricky, to say the least.
No there is no way to do that with reflection or any other mechanism.
The developer knows if the method is functional. For example, Spring has a @Cacheable annotation that gives a hint to the application that the method is functional, so the result can be cached for a given set of arguments. (Spring will wrap your object in a proxy that provides the caching behavior.)
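For illustration, a minimal sketch of what such a hint looks like in code, assuming Spring's caching support is configured; PriceService and its lookup method are hypothetical names:

import java.math.BigDecimal;

import org.springframework.cache.annotation.Cacheable;

public class PriceService {

    // The annotation is the developer's promise that the method behaves functionally,
    // so Spring's caching proxy may reuse the result for the same argument.
    @Cacheable("prices")
    public BigDecimal priceFor(String productId) {
        return lookUpPrice(productId);
    }

    // Hypothetical expensive, but deterministic, lookup.
    private BigDecimal lookUpPrice(String productId) {
        return new BigDecimal(productId.length()); // placeholder computation
    }
}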
is there any way to determine whether the method is purely functional (i.e., given the same input, it will always produce the same output)
I know it's not what you've asked for, but unit tests may help you with this.
No. Reflection cannot read the bytecode of the method, so you can't really tell what a method does or even what other classes it uses.
Reflection will not help you here. If you really want to inspect it at run time, you can try javap -c classname.class. But it would be better to avoid such hacks.
Related
I'm investigating ways to ensure a Java class only calls a limited set of allowed methods from other classes. The use case I have receives the class via standard Java serialization.
The approach I want to try is to simply list the methods the class calls and only run the code if they all pass a short whitelist.
The question I have: how do I list the methods used in that class?
This is not a perfect solution, but you could use it if you can't find something better. You can use javap: if you're on Linux, run in the command line (or run a process using Runtime.exec()): javap -verbose /path/to/my/classfile.class | grep invoke and you'll get the binary signatures that the class "calls" from other classes. Yes, I know, it's not what you wanted, but you could use it as a last resort.
If you need a more "javaish" solution, you could have a look at a java library called "asm": http://asm.ow2.org/
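As a rough sketch of the ASM route (assuming a recent ASM version that provides Opcodes.ASM9; the class and variable names here are made up), you can visit each method body and print every invocation it contains:

import java.io.FileInputStream;
import java.io.IOException;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class CalledMethodsLister {

    public static void main(String[] args) throws IOException {
        // Read the class file whose outgoing calls we want to check against the whitelist.
        ClassReader reader = new ClassReader(new FileInputStream(args[0]));
        reader.accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String descriptor,
                                             String signature, String[] exceptions) {
                // Return a visitor that reports every method invocation instruction.
                return new MethodVisitor(Opcodes.ASM9) {
                    @Override
                    public void visitMethodInsn(int opcode, String owner, String name,
                                                String descriptor, boolean isInterface) {
                        System.out.println(owner + "." + name + descriptor);
                    }
                };
            }
        }, 0);
    }
}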
You could pass a dynamic proxy object to the caller, which checks each method call against your whitelist and throws an exception when the call is not allowed.
Dynamic proxies basically allow you to insert a piece of code between the caller's method invocation and the actual invocation of the called method.
I'd really think through whether you actually need this, though. Dynamic proxies are useful, but they can also be very confusing and annoying to debug.
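A minimal sketch of that idea with plain java.lang.reflect.Proxy (the helper name and the use of a name-based whitelist are just assumptions for the example):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Set;

public class WhitelistProxy {

    // Wraps a target so that only the whitelisted method names may be invoked through the proxy.
    @SuppressWarnings("unchecked")
    public static <T> T restrict(T target, Class<T> iface, Set<String> allowedMethods) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (!allowedMethods.contains(method.getName())) {
                throw new SecurityException("Call not allowed: " + method.getName());
            }
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                                          new Class<?>[] { iface }, handler);
    }
}

For instance, restrict(someList, List.class, Set.of("size", "get")) would let callers read the list but fail on add().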
I understand at a low level what static (compile time) and dynamic (runtime) bindings are.
I understand to some extent why it's important to know that (e.g., the fact that generics are resolved statically helps explain what you can and cannot do, etc).
What I don't understand is why the choices were made one way or another - e.g., Java uses static binding for overloaded methods, and dynamic binding for overridden ones. Why is that? Is it a design choice, is it something that is obvious and unavoidable for people that understand the deep functioning of Java, or is it something one needs to learn (rather than understand)?
The question is: how can the compiler know at compile time which method to call in the case of overriding? You must understand this:
List list = someObject.getAList();
list.add(whatever);
Now, suppose the getAList() method can return any of several List implementations based on some criteria. How can the compiler know which implementation is returned, and hence which add() method to call? As you can see, this can only be decided at runtime. With overloading that is not the case, and everything is clear at compile time. I hope this makes the point clear.
[Edited]
Bringing the discussion from the comments into the actual answer.
It can't be known until runtime. Understand it this way: the instantiation of a particular class may depend on an argument provided by the user. How would the compiler know which argument the user will pass, and therefore which class to instantiate? Or, easier still, how would the compiler know whether the flow will go into the if block or the else block? Why do you think we have both checked and runtime exceptions? Take divide-by-zero: for example n/m, where m becomes 0 as the result of some calculation. Obviously the compiler cannot say there will be an ArithmeticException, because m is not known right away. Since all this information is unavailable at compile time, the compiler likewise cannot know which overriding method will be called.
Dynamic binding is used when you override methods because the JVM needs to decide at runtime which method (code) to execute, based on the runtime type of the object. With an overloaded method there is nothing to decide at runtime; the compiler can figure out at compile time which method will be called, which results in faster execution.
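A small sketch that makes the contrast concrete (all class names are invented for the example):

class Animal {
    void speak() { System.out.println("..."); }
}

class Dog extends Animal {
    @Override
    void speak() { System.out.println("Woof"); }
}

class Printer {
    void print(Object o) { System.out.println("Object version"); }
    void print(String s) { System.out.println("String version"); }
}

public class BindingDemo {
    public static void main(String[] args) {
        Animal a = new Dog();
        a.speak();       // overriding: resolved at runtime from the actual type -> "Woof"

        Printer p = new Printer();
        Object text = "hello";
        p.print(text);   // overloading: resolved at compile time from the declared type -> "Object version"
    }
}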
To me the reason seems fairly obvious...
Overloading is something the compiler resolves at compile time, whereas overriding is late binding, resolved at runtime...
Understanding the OOP concepts may help you.
I want to know if it is possible to create an Annotation that enforces a specific Return Type?
What I would like is a way to create an Annotation and use it like:
@MarkerAnnotationForcingStringReturnType
public String someWebflowMethod(...){
...
return "someString";
}
Then I can have any method name I want, but it will have to return a String, for example.
Doesn't the method signature itself enforce a specific return type?
You could create your own annotation and then write your own annotation processor which could enforce it.
I don't know of one built in... and frankly I'm not sure I see the point. If you're going to be vigilant enough to write the annotation, why would you not be vigilant enough to get the method return type right? Under what circumstances would you get the annotation right but the method wrong?
By themselves, annotations are simply metadata - that is, "comments" that are attached to bits of code and, depending on their retention policy, may be included in the bytecode itself.
You could write a processor using apt that would validate this at compile-time if you really wanted to.
But I find that declaring a method to return String is just as robust and more easily understandable by a typical Java developer. Do you really have a good reason for doing this? If you're worried that someone might change the return type, they might just as easily delete your annotation. If you want to prevent someone from making these changes, a comment is arguably more effective than an annotation that they might remove for "being superfluous" (hey, we're talking about developers that you're expecting to invalidate base requirements here through incompetence/lack of awareness...).
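If you do go the processor route, a rough sketch might look like this (the annotation's fully qualified name com.example.MarkerAnnotationForcingStringReturnType is assumed, and the processor still has to be registered, e.g. via META-INF/services):

import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.MarkerAnnotationForcingStringReturnType")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class StringReturnTypeProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                if (!(element instanceof ExecutableElement)) {
                    continue;
                }
                ExecutableElement method = (ExecutableElement) element;
                // Report a compile-time error if the annotated method does not return String.
                if (!"java.lang.String".equals(method.getReturnType().toString())) {
                    processingEnv.getMessager().printMessage(
                            Diagnostic.Kind.ERROR,
                            "Methods annotated with @MarkerAnnotationForcingStringReturnType must return String",
                            element);
                }
            }
        }
        return true;
    }
}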
Suppose I have an interface with lots of methods that I want to mock for a test, and suppose that I don't need it to do anything, I just need the object under test to have an instance of it. For example, I want to run some performance testing/benchmarking over a certain bit of code and don't want the methods on this interface to contribute.
There are plenty of tools to do that easily, for example
Interface mock = Mockito.mock(Interface.class);
ObjectUnderTest obj = ...
obj.setItem(mock);
or whatever.
However, they all come with some runtime overhead that I would rather avoid:
Mockito records all calls, stashing the arguments for verification later
JMock and others (I believe) require you to define what they are going to do (not such a big deal), and then execution goes through a proxy of various sorts to actually invoke the method.
Good old java.lang.reflect.Proxy and friends all go through at least a few more method calls on the stack before getting to the method to be invoked, often reflectively.
(I'm willing to be corrected on any of the details of those examples, but I believe the principle holds.)
What I'm aiming for is a "real" no-op implementation of the interface, such as I could write by hand with everything returning null, false or 0. But that doesn't help if I'm feeling lazy and the interface has loads of methods. So, how can I generate and instantiate such a no-op implementation of an arbitrary interface at runtime?
There are tools available, such as PowerMock and CGLib, that use bytecode generation, but only as part of a larger mocking/proxying context, and I haven't yet figured out what to pick out of their internals.
OK, so the example may be a little contrived and I doubt that proxying will have too substantial an impact on the timings, but I'm curious now as to how to generate such a class. Is it easy in CGLib, ASM?
EDIT: Yes, this is premature optimisation and there's no real need to do it. After writing this question I think the last sentence didn't quite make my point that I'm more interested in how to do it in principle, and easy ways into dynamic class-generation than the actual use-case I gave. Perhaps poorly worded from the start.
Not sure if this is what you're looking for, but the "new class" wizard in Eclipse lets you build a new class and specify superclass and/or interface(s). If you let it, it will auto-code up dummy implementations of all interface/abstract methods (returning null unless void). It's pretty painless to do.
I suspect the other "big name" IDEs, such as NetBeans and IntelliJ IDEA, have similar facilities.
EDIT:
Looking at your question again, I wonder why you'd be concerned about performance of auto proxies when dealing with test classes. It seems to me that if performance is an issue, you should be testing "real" functionality, and if you're dealing with mostly-unimplemented classes anyway then you shouldn't be in a testing situation where performance matters.
It would take a little work to build the utility, but probably not too hard for a basic vanilla Java interface without "edge cases" (annotations, etc): use Javassist code generation to textually create a class at runtime that implements null versions of every method defined on the interface. This would be different from Javassist ProxyFactory (or CGLib Enhancer) proxy objects, which would still have a few layers of indirection. I think there would be no overhead in the resulting class from this direct bytecode-generation approach. If you are brave you could also dive into ASM to do the same thing.
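For what it's worth, a rough Javassist sketch of that idea (untested; it assumes Javassist is on the classpath and a plain interface with no generics or default-method complications, and the helper name is invented):

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtNewMethod;

public class NoOpGenerator {

    // Generates and instantiates a class whose methods all return null, false or 0.
    public static <T> T noOpImplementation(Class<T> iface) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass cc = pool.makeClass(iface.getName() + "NoOp");
        cc.addInterface(pool.get(iface.getName()));

        for (Method m : iface.getMethods()) {
            if (!Modifier.isAbstract(m.getModifiers())) {
                continue; // skip static and default methods
            }
            StringBuilder src = new StringBuilder("public ")
                    .append(m.getReturnType().getCanonicalName()).append(' ')
                    .append(m.getName()).append('(');
            Class<?>[] params = m.getParameterTypes();
            for (int i = 0; i < params.length; i++) {
                if (i > 0) {
                    src.append(", ");
                }
                src.append(params[i].getCanonicalName()).append(" p").append(i);
            }
            src.append(") { ");
            Class<?> ret = m.getReturnType();
            if (ret == void.class) {
                src.append("return;");
            } else if (!ret.isPrimitive()) {
                src.append("return null;");
            } else if (ret == boolean.class) {
                src.append("return false;");
            } else {
                src.append("return (").append(ret.getName()).append(") 0;");
            }
            src.append(" }");
            cc.addMethod(CtNewMethod.make(src.toString(), cc));
        }
        // makeClass() is expected to give the class a default constructor.
        return iface.cast(cc.toClass().newInstance());
    }
}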
Is there any way we can inject new methods and properties into classes at runtime?
http://nurkiewicz.blogspot.com/2009/09/injecting-methods-at-runtime-to-java.html states we may do that by using Groovy.
Is it possible using just Java?
Is it possible using just Java?
The simple answer is an emphatic "You don't want to do that!".
It is technically possible, but not without resorting to extremely complex, expensive and fragile tricks like bytecode modification [1]. And even then, you have to rely on dynamic loading to access the modified type and (probably) reflection to make use of its new members. In short, you would be creating lots of pain for yourself, for little if any gain.
Java is a statically typed language, and adding / modifying class type signatures can break the static typing contract of a class.
[1] AspectJ and the like allow you to inject additional behaviour into a class, but it is probably not the "runtime" injection that you are after. Certainly, the injected methods won't be available for statically compiled code to call.
So if you were really crazy, you could do something like what they outline here. What you could do is load the .java file, find the correct insertion point, add whatever methods you need, call the Java compiler and reload the class (a rough sketch of that step follows below). Good luck debugging that mess, though :)
Edit: This actually might be of some use...
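A bare-bones sketch of that compile-and-reload step (it needs a JDK at runtime, and the paths and class name are placeholders):

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class RecompileAndReload {

    public static Class<?> recompile(String sourcePath, String outputDir, String className)
            throws Exception {
        // Compile the (already edited) source file with the system compiler.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null || compiler.run(null, null, null, "-d", outputDir, sourcePath) != 0) {
            throw new IllegalStateException("Compilation failed");
        }
        // Load the freshly compiled class in a new class loader so the old definition is not reused.
        URLClassLoader loader = new URLClassLoader(new URL[] { Paths.get(outputDir).toUri().toURL() });
        return Class.forName(className, true, loader);
    }
}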
You can do some quite funky things with AOP, although genuine modification of classes at runtime is a pretty hairy technique that needs a lot of classloading magic and sleight of hand.
What is easier is using AOP techniques to generate a subclass of your target class and to introduce new methods into that instead, what AOP calls a "mixin" or an "introduction". See here to read how Spring AOP does it, although this may be quite lame compared to what you're actually trying to achieve.
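For reference, a sketch of what a Spring AOP introduction looks like in AspectJ-annotation style (Auditable, AuditableImpl and the pointcut pattern are made up; beans whose classes match the pattern get the extra interface on their proxies):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.DeclareParents;

// Hypothetical interface and default implementation to be mixed in.
interface Auditable {
    void markAudited();
    boolean isAudited();
}

class AuditableImpl implements Auditable {
    private boolean audited;
    public void markAudited() { audited = true; }
    public boolean isAudited() { return audited; }
}

@Aspect
public class MixinAspect {

    // Every matching bean gains the Auditable interface at proxy-creation time.
    @DeclareParents(value = "com.example.service.*+", defaultImpl = AuditableImpl.class)
    public static Auditable mixin;
}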
Is it possible using just Java?
Quite so, the "only" thing you have to do is define an instrumentation agent which supplies an appropriate ClassFileTransformer, and you'll have to use reflection to invoke the added methods. Odds are this isn't what you want to do, though, but it's doable and there's a well-defined interface for it. If you want to modify existing methods you may be interested in something like AspectJ.
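A skeleton of what such an agent looks like (the target class name is a placeholder, and the actual bytecode rewriting, e.g. with ASM or Javassist, is left out):

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class MethodInjectingAgent {

    // Referenced by the Premain-Class entry in the agent jar's manifest
    // and activated with -javaagent:agent.jar on the command line.
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                if (!"com/example/Target".equals(className)) {
                    return null; // leave every other class untouched
                }
                // Rewrite classfileBuffer here to add the new methods, then return the new bytes.
                return classfileBuffer;
            }
        });
    }
}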
While it might be possible, it is not useful.
How would you access these new fields and methods?
You could not use these methods and fields directly (as "ordinary" fields and methods), since they wouldn't be compiled in.
If all you want is the possibility to add "properties" and "methods", you can use a Map<String, Object> for the "dynamic properties", and a Map<String, SuitableInterface> for the "dynamic methods", and look them up by name.
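A tiny sketch of that approach (SuitableInterface is given a single generic call method here purely for the example):

import java.util.HashMap;
import java.util.Map;

interface SuitableInterface {
    Object call(Object... args);
}

class DynamicObject {

    private final Map<String, Object> properties = new HashMap<>();
    private final Map<String, SuitableInterface> methods = new HashMap<>();

    void setProperty(String name, Object value) { properties.put(name, value); }

    Object getProperty(String name) { return properties.get(name); }

    void defineMethod(String name, SuitableInterface body) { methods.put(name, body); }

    Object invoke(String name, Object... args) { return methods.get(name).call(args); }
}

Usage would then look like obj.defineMethod("greet", args -> "Hello, " + args[0]); followed by obj.invoke("greet", "world");.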
If you need an extension language for Java, an embedded dynamic language (such as JavaScript or Groovy) can be added; most of these can access arbitrary Java objects and methods.