Python static method vs. Java static method - java

In Java, only one copy of a static method is stored in memory, and we can call it over and over again. This is done for performance and to save space.
Previously, someone at work claimed that a static function in Python does not work the same way as in Java. Is this correct?
Someone also claimed that every time we call a Python static method, the Python interpreter still needs to spend time instantiating an object first. Is this correct?
class A(object):
    @staticmethod
    def static_1():
        print('i am static')

The Python counterpart of a Java static method is @classmethod (or @staticmethod, if no reference to the class is needed).

Accessing a Python static method does not involve creating a new object. Python just returns the original function object.
(Accessing a non-static method does involve creating a new method object, but this is cheap. In particular, it only bundles together a reference to the function and a reference to self; it does not involve copying the function.)
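A quick way to see this (a minimal sketch; the class A and the method names are just placeholders):

class A(object):
    @staticmethod
    def static_1():
        print('i am static')

    def regular(self):
        pass

a = A()

# Accessing the static method just hands back the original function object.
print(A.static_1 is A.static_1)   # True: no new object is created

# Accessing an instance method builds a small bound-method object on each access,
# bundling a reference to the function with a reference to `a`.
print(a.regular == a.regular)     # True: same function, same instance
print(a.regular is a.regular)     # False: a fresh method object each time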

There is a difference between Python the language and a specific Python implementation.
Also, almost all languages store only one copy of each method in memory. This applies to static, class, and regular methods alike. The savings you mention come from not needing to pass the pointer to 'self' in Python or 'this' in Java. One less parameter to pass can add up to a large saving for methods called from within the innermost long-running loops.
As for storing the methods themselves: implementations like PyPy continually perform JIT compilation of methods into machine code, and keep recompiling based on updated statistics about how each method performs. I believe similar behaviour occurs in Java.

Related

Method references on instances - why not call method directly (because how can it execute in a different context)?

I am trying to understand this example code from Oracle Learning on Lambdas and Method References:
String city = "Munich";
Supplier<String> lambda = city::toUpperCase;
System.out.println(lambda.get());
Why didn't they simply call
city.toUpperCase();
Isn't this method tied to the specific instance variable city?
So how would it execute in a different context to provide the benefits of lambdas? I am unable to understand that.
In that limited code snippet you show, you would indeed just call city.toUpperCase();. There is no point in using a lambda there.
You may be missing a larger lesson. I suspect the author of that tutorial code was demonstrating the effect of such code, explaining the equivalent behavior. You should link to the exact tutorial page for greater context.
The point of a lambda is that you want to execute that method reference elsewhere in the code base. Rather than immediately execute that method in the current code, you want some other context of code to run that method. You want to pass that method reference as an argument to some other method call.
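As a minimal sketch of that idea (the printTwice method is invented here purely for illustration), the method reference is captured in one place and invoked later by different code:

import java.util.function.Supplier;

public class DeferredCall {
    // Some other context of code: it decides when, and how often, to run the supplier.
    static void printTwice(Supplier<String> source) {
        System.out.println(source.get());
        System.out.println(source.get());
    }

    public static void main(String[] args) {
        String city = "Munich";
        // Nothing runs here; we only capture the method reference bound to `city`.
        Supplier<String> upper = city::toUpperCase;
        printTwice(upper);   // toUpperCase() is executed later, inside printTwice
    }
}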
Because then it wouldn't be a good example for "Learning on Lambdas and Method References" ...

Object Memory Management Python Vs Java

Can someone explain the way Python manages memory during the creation of an object of a class?
For example, in Java we can only declare member variables, and the initialisation happens inside the constructor. That means memory is used when an object is constructed.
But in Python we can initialise a class variable outside the __init__ method. Where is this data stored?
As a precursor, this question has already been answered here, and this may also be a good reference. However, I will try to explain it again. The __init__ method in Python is used by convention; although it is a special method, it is not required. Memory management in Python involves a private heap containing all Python objects and data structures. If you initialize a class variable outside the __init__ method, it is still stored on that heap, alongside the attributes initialized inside __init__; the difference is that a class variable lives in the class's own dictionary, while attributes assigned in __init__ live in each instance's dictionary. Hope this helps!
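A small sketch of that difference (the names Counter, shared, and value are chosen only for illustration):

class Counter(object):
    shared = 0                   # class variable: stored once, in Counter.__dict__

    def __init__(self, value):
        self.value = value       # instance variable: stored in each instance's __dict__

c1 = Counter(1)
c2 = Counter(2)

print('shared' in Counter.__dict__)   # True  -> lives on the class object
print('shared' in c1.__dict__)        # False -> not duplicated per instance
print(c1.__dict__)                    # {'value': 1}
print(c2.__dict__)                    # {'value': 2}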

Why most of the java.lang.reflect.Array class methods are 'native'

I have gone through What code and how does java.lang.reflect.Array create a new array at runtime?. I understand that they are implemented in a native language ('C'), but my question is why almost all of the java.lang.reflect.Array class methods are native.
My guess and understanding is that it is to improve performance, or to let the JVM allocate contiguous memory for arrays.
Is my understanding of the native methods in the Array class correct, or am I missing anything?
The reflect.Array.newInstance method uses native code because it must use native code. This has nothing inherently to do with performance but is a result of the fact that the Java language cannot express this operation.
To show that it's a language limitation and not strictly related to performance, here is some valid code which creates a new array without directly invoking any native method.
Object x = new String[0];
newInstance, however, takes an arbitrary Class<?> value and then creates the corresponding array of the represented type. That construct is not possible in plain Java: it cannot be expressed by the type system or by the normal "new array" syntax.
// This production is NOT VALID in Java, as T is not a type
// (T is a variable that evaluates to an object representing a type)
Class<?> T = String.class;
Object x = new T[0];
// -> error: cannot find symbol T
Because such a production is not allowed, a native method (which has access to the JVM internals) is used to create the new array instance of the corresponding type.
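For contrast, here is a minimal sketch of the reflective form that newInstance makes possible, where the element type is only known as a Class<?> value at run time:

import java.lang.reflect.Array;

public class ReflectiveArrayDemo {
    public static void main(String[] args) {
        // The element type arrives as a Class<?> object, not as a compile-time type.
        Class<?> elementType = String.class;

        // Equivalent in effect to `new String[5]`, but works for any Class<?> value.
        Object array = Array.newInstance(elementType, 5);

        System.out.println(array.getClass());         // class [Ljava.lang.String;
        System.out.println(Array.getLength(array));   // 5
    }
}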
While the above argues for the case of newInstance needing to be native, I believe many of the other reflect.Array methods (which are get/set methods) could be handled in plain Java with the use of specialized casting; in these cases the argument for performance holds sway.
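To illustrate, here is a rough, hypothetical plain-Java sketch of what a non-native get could look like; the real reflect.Array.get is native and handles more cases, so this is only meant to show that the casting approach is expressible in the language:

// Hypothetical sketch, not the real implementation of java.lang.reflect.Array.get.
public class PlainJavaArrayGet {
    static Object arrayGet(Object array, int index) {
        if (array instanceof Object[])  return ((Object[]) array)[index];
        if (array instanceof int[])     return ((int[]) array)[index];     // auto-boxed to Integer
        if (array instanceof long[])    return ((long[]) array)[index];    // auto-boxed to Long
        if (array instanceof double[])  return ((double[]) array)[index];  // auto-boxed to Double
        // ... the remaining primitive component types would be handled the same way ...
        throw new IllegalArgumentException("Argument is not an array");
    }

    public static void main(String[] args) {
        System.out.println(arrayGet(new int[] {7, 8, 9}, 1));        // 8
        System.out.println(arrayGet(new String[] {"a", "b"}, 0));    // a
    }
}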
However, most code does not use the Array reflection (this includes "multi-valued data structures" such as ArrayList), but simply uses normal Java array access which is directly translated to the appropriate Java bytecode without going through reflect.Array or the native methods it uses.
Conclusion:
Java already provides fast array access through the JVM's execution of the bytecode. HotSpot, the "official" JVM, is written in C++, which is "native" code - but this execution of array-related bytecode is independent of reflect.Array and the native methods it uses.
newInstance uses a native method because it must use a native method or otherwise dynamically generate and execute bytecode.
Other reflect.Array methods that could be expressed in Java are native methods for a combination of performance, dispatch simplicity, and "why not" - it's just as easy to add a second or third native method.
Arrays are at the heart of all multi-valued data structures. Arrays require using segments of memory on the host machine, which means accessing memory in a safe and machine-specific manner - and that requires calls to the underlying operating system.
Such calls are native because, to perform them, you must move out of Java and into the host environment. At some point every operation must be handed over to the host machine, which actually implements it using the local OS and hardware.

Does marking a method's arguments as final make the method call faster?

I have seen time-sensitive backtracking programs written this way, and I guess it makes the compiler avoid some memory copying and produce a faster method call, which I guess would be useful in recursive programs.
But this is speculation by me, I'd like a detailed explanation/article on this or a refutation.
It has zero impact on performance - indeed, it has no runtime effect at all.
If you compile a class that contains 2 methods - one with the parameters marked as final, and the other without - and then look at the bytecode that gets generated for each method, you'll note that there is no difference (other than the method name).
All the final keyword does in this context is make it so that you cannot reassign that variable within the method.
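You can verify this yourself with a minimal sketch like the following: compile it and compare the two methods with javap -c.

public class FinalParamDemo {
    // Parameter marked final: the only effect is that `n` cannot be reassigned in the body.
    static int squareFinal(final int n) {
        return n * n;
    }

    // Identical method without final.
    static int squarePlain(int n) {
        return n * n;
    }
}

Running javap -c FinalParamDemo after compiling shows that the generated instructions are identical for both method bodies.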

How to identify if an object returned was created during the execution of a method - Java

Original Question: Given a method, I would like to determine whether an object it returns was created during the execution of that method. What sort of static analysis can or should I use?
Reworked Question: Given a method, I would like to determine whether an object created in that method may be returned by it. So, if I add all instantiations of the return type within that method to a set, is there an analysis that will tell me, for each member of the set, whether it may or may not be returned? Additionally, would it be possible to not limit the set to a single method, but to include all methods called by the original method, to account for delegation?
This is not specific to any invocation.
It looks like method escape analysis may be the answer.
Thanks everyone for your suggestions.
Your question seems to call for a simple "reaching" analysis ("does a new value reach a return statement?") if you are interested in any invocation and only in values created by a method-local new. If you need to know whether any invocation can return a new value produced by any subcomputation, you need to compute the possible call graph and determine whether any called function can return a new value, or pass a new value from a called function up to its parent.
There are a number of Java static analysis frameworks.
SOOT is a byte-code based analysis framework. You could probably implement your static query using this.
The DMS Software Reengineering Toolkit is a generic engine for building custom analyzers and transformation tools. It has a full Java front end, and computes various useful base analyses (def/use chains, call graph) on source code. It can process class files but presently only to get type information.
If you wanted a dynamic analysis, either by itself or as a way to tighten up the static analysis, DMS can be used to instrument the source code in arbitrary ways by inserting code to track allocations.
I'm not sure if this would work for your circumstances, but one simple approach would be to populate a newly added 'instantiatedTime' field in the constructor of the object and compare that with the time the method call was made. This assumes you have access to the source for the object in question.
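A rough sketch of that idea (the Widget class and field name are invented here, and it assumes the clock resolution is fine enough to tell the two timestamps apart):

class Widget {
    // Newly added field, set exactly once when the object is constructed.
    final long instantiatedTime = System.nanoTime();
}

public class Check {
    static Widget someMethod() {
        return new Widget();   // created inside the method under inspection
    }

    public static void main(String[] args) {
        long callTime = System.nanoTime();
        Widget w = someMethod();
        boolean createdDuringCall = w.instantiatedTime >= callTime;
        System.out.println("Created during the call? " + createdDuringCall);
    }
}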
Are you sure static analysis is the right tool for the job? Static analysis can give you a result in some cases but not in all.
When running the JVM under a debugger, it assigns objects increasing object IDs, which you can fetch via System.identityHashCode(Object o). You can use this fact to build a test case that creates an object (the checkpoint) and then calls the method. If the returned object has an id greater than the checkpoint id, then you know the object was created in the method.
Disclaimer: this is observed behaviour under a debugger, on Windows XP.
I have a feeling that this is impossible to do without a specially modified JVM. Here are some approaches ... and why they won't work in general.
The Static Analysis approach will work in simple cases. However, something like this is likely to stump any current generation static analysis tool:
// Bad design alert ... don't try this at home!
public class LazySingletonStringFactory {
    private String s;

    public String create(String initial) {
        if (s == null) {
            s = new String(initial);
        }
        return s;
    }
}
For a static analyser to figure out if a given call to LazySingletonStringFactory.create(...) returns a newly created String it must figure out that it has not been called previously. The Halting Problem tells us that this is theoretically impossible in some cases, and in practice this is beyond the "state of the art".
The IdentityHashCode approach may work in a single-threaded application that completes without the garbage collector running. However, if the GC runs you will get incorrect answers. And if you have multiple threads, then (depending on the JVM) you may find that objects are allocated in different "spaces" resulting in object "id" creation sequence that is no longer monotonic across all threads.
The Code Instrumentation approach works if you can modify the code of the classes you are concerned about, whether by direct source-code changes, annotation-based code injection, or some kind of bytecode processing. However, in general you cannot do these things for all classes.
(I'm not aware of any other approaches that are materially different to the above three ... but feel free to suggest them as a comment.)
Not sure of a reliable way to do this statically.
You could use:
AspectJ or a similar AOP library could be used to instrument classes and increment a counter on object creation (see the sketch after this list)
a custom classloader (or JVM agent, but classloader is easier) could be used similarly
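As a hedged sketch of the AspectJ option above (annotation-style; the Widget class and aspect name are invented, and the code must be woven with the AspectJ compiler or load-time weaver to take effect):

import java.util.concurrent.atomic.AtomicLong;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;

// Counts every construction of com.example.Widget once weaving is enabled.
@Aspect
public class CreationCounter {
    public static final AtomicLong CREATED = new AtomicLong();

    @After("execution(com.example.Widget.new(..))")
    public void onWidgetCreated() {
        CREATED.incrementAndGet();
    }
}

A test could then snapshot CreationCounter.CREATED.get() before calling the method under inspection and compare it with the value afterwards to see whether any instances were constructed during the call.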
