According to the following link, the Java stack frame contains the local variables, the operand stack, and the current class constant pool reference.
http://blog.jamesdbloom.com/JVMInternals.html
Also, from Oracle's "The Structure of the Java Virtual Machine", Section 2.6.3, "Dynamic Linking": "Each frame (§2.6) contains a reference to the run-time constant pool (§2.5.5) for the type of the current method to support dynamic linking of the method code."
I have also read that the object in the heap has a pointer/reference to the class data.
https://www.artima.com/insidejvm/ed2/jvm6.html
The stack frame will contain the "current class constant pool reference" and also it will have the reference to the object in heap which in turn will also point to the class data. Is this not redundant??
For example.
public class Honda {
    public void run() {
        System.out.println("honda is running");
    }

    public static void main(String[] args) {
        Honda h = new Honda();
        h.run(); // output: honda is running
    }
}
When h.run() is about to be executed, the JVM will create a new stack frame and push h onto that stack frame. h will point to the object in the heap, which in turn will have a pointer to the class data of Honda. The stack frame will also have the current class constant pool reference. Is this correct? If not, please shed some light on this.
Is this not redundant??
Maybe it is redundant for instance methods and constructors.
It isn't redundant for static methods or class initialization pseudo-methods.
It is also possible that the (supposedly) redundant reference gets optimized away by the JIT compiler. (Or maybe it isn't optimized away ... because they have concluded that the redundancy leads to faster execution on average.) Or maybe the actual implementation of the JVM1 is just different.
Bear in mind that the JVM spec is describing an idealized stack frame. The actual implementation may be different ... provided that it behaves the way that the spec says it should.
On #EJP's point on normativeness, the only normative references for Java are the JLS and JVM specifications, and the Javadoc for the class library. You can also consult the source code of the JVM itself. The specifications say what should happen, and the code (in a sense) says what does happen. An article you might find in a published paper or a web article is not normative, and may well be incorrect or out of date.
1 - The actual implementation may vary from one version to the next, or between vendors. Furthermore, I have heard of a JVM implementation where a bytecode rewriter transformed from standard bytecodes to another abstract machine language at class load time. It wasn't a great idea from a performance perspective ... but it was certainly within the spirit of the JVM spec.
The stack frame will contain the "current class constant pool reference" and also it will have the reference to the object in heap which in turn will also point to the class data. Is this not redundant??
You missed the precondition of that statement, or you misquoted it, or it was just plainly wrong where you saw it.
The "reference to the object in heap" is only added for non-static method, and it refers to the hidden this parameter.
As it says in section "Local Variables Array":
The array of local variables contains all the variables used during the execution of the method, including a reference to this, all method parameters and other locally defined variables. For class methods (i.e. static methods) the method parameters start from zero, however, for instance method the zero slot is reserved for this.
So, for static methods, there is no redundancy.
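For illustration, here is a hypothetical class (not taken from the question) with the local variable slots that javac typically assigns noted in comments; the exact numbering is illustrative, but slot 0 holds this only in the instance method:

class Slots {
    void instanceMethod(int a) {       // slot 0: this, slot 1: a
        int b = a + 1;                 // slot 2: b
        System.out.println(b);
    }

    static void staticMethod(int a) {  // slot 0: a (there is no 'this')
        int b = a + 1;                 // slot 1: b
        System.out.println(b);
    }
}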
Could the constant pool reference be eliminated when this is present? Yes, but then there would need to be a different way to locate the constant pool reference, requiring different bytecode instructions, so that would be a different kind of redundancy.
Always having the constant pool reference available in a well-known location in the stack frame, simplifies the bytecode logic.
There are two points here. First, there are static methods which are invoked without a this reference. Second, the actual class of an object instance is not necessarily the declaring class of the method whose code we are actually executing. The purpose of the constant pool reference is to enable resolving of symbolic references and loading of constants referenced by the code. In both cases, we need the constant pool of the class containing the currently executed code, even if the method might be inherited by the actual class of the this reference (in case of a private method invoked by another inherited method, we have a method invoked with a this instance of a class which formally does not even inherit the method).
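To make the private-method case above concrete, here is a small sketch (the class names are made up for illustration). The executing code of secret() is declared in Parent, so Parent's constant pool is needed to resolve the println call, even though this refers to a Child instance and Child does not inherit secret():

class Parent {
    private void secret() {
        // code declared in Parent, but 'this' may be a Child
        System.out.println("'this' is a " + getClass().getName());
    }

    public void run() {
        secret();
    }
}

class Child extends Parent {
    public static void main(String[] args) {
        new Child().run(); // prints: 'this' is a Child
    }
}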
It might even be the case that the currently executed code is contained in an interface, so we never have instances of it, but still a class file with a constant pool which must be available when executing the code. This does not only apply to Java 8 and newer, which allow static and default methods in interfaces; earlier versions also might need to execute the <clinit> method of an interface to initialize its static fields.
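A hypothetical interface illustrating this: it is never instantiated, yet its class file has its own constant pool that must be available while its code runs, e.g. in the <clinit> method initializing GREETING, in the static factory method, and in the default method:

interface Greeter {
    String GREETING = System.getProperty("greeting", "hello"); // initialised in <clinit>

    static Greeter standardOut() {      // static interface method (Java 8+)
        return () -> System.out.println(GREETING);
    }

    void greet();

    default void greetTwice() {         // default method (Java 8+)
        greet();
        greet();
    }
}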
By the way, even if an instance method is invoked with an object reference associated with this in its first local variable, there is no requirement for the bytecode instructions to keep it there. If not needed, it might get overwritten by an arbitrary value, reusing the variable slot for other purposes. This does not preclude that subsequent instructions need the constant pool, which, as said, does not need to belong to the actual class of this anyway.
Of course, that pool reference is a logical construct anyway. Implementations may transform the code to use a shared pool or not to need a pool at all when all references have been resolved already, etc. After inlining, code may not even have a dedicated stack frame anymore.
Related
As per the definition in Java, if a class doesn't occupy memory and just acts like a template, then why are we creating objects (in the main method) inside the curly braces of the class? Doesn't that mean the class now also occupies memory because of the objects present inside of it?
Trying to understand the definition of a class
There are three concepts to keep separate here: the class, the instance, and the stack.
class SomeClass {
    static int staticValue = 0;

    /* non-static */ int instanceValue = 0;

    int someMethod() {
        int stackValue = 42;
        SomeClass instance = new SomeClass();
        // ...
        return stackValue;
    }
}
The class acts as a template, yes. In some languages other than Java, the class takes up no memory of its own: it merely describes the memory layout of the class's instances. For a beginner definition of OOP concepts you can think of that as true.
In Java this is not quite true for three reasons:
1. There is an object instance for SomeClass, accessible via SomeClass.class, which does take up memory. This instance allows you to look up information about the class itself, which is sometimes called "reflection".
2. The static field staticValue is shared among all instances of SomeClass, so in a sense the class takes up a small amount of memory to contain this shared value.
3. SomeClass contains methods like someMethod, and that code has to be in memory in order to run. If you're willing to consider code as requiring memory, and that the code is associated with the class, then the class consumes memory. People talking about OOP concepts aren't usually talking about the memory consumed by the code itself, though.
This can be compared to instances of the class SomeClass, which at a minimum contain a separate value of instanceValue for every instance you create. Instances don't have their own code, and do (in Java) contain a reference to their Class instance accessible via getClass().
Finally, the method someMethod and your main example do use references and local variables that consume memory, but in a different place than the instances or classes. This place is called a "stack", in part because as you call methods and those call further methods, the stack grows like a stack of papers on a desk. This means that there may be many copies of stackValue existing at once, one for each time you have called someMethod that hasn't finished yet. Each value of stackValue is discarded whenever its corresponding invocation of someMethod returns. These aren't directly tied to classes or instances, other than that they are code that might be considered associated with a class as in #3 above. Disregarding the memory consumed by the compiled code itself, this stack usage does not contribute to SomeClass or its instances consuming any more memory in ways that matter to OOP.
(Instances created with new are not a part of a "stack" but rather are part of the "heap", at this level of explanation, and that includes the SomeClass.class instance and any instances of SomeClass. Some languages require careful management of the heap's memory, but Java manages it for you through a process called garbage collection. Primitives like stackValue and the reference named instance are kept on the stack, though.)
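A small, hypothetical demo (class and identifier names invented here) of the three kinds of storage just described: the shared static field, the per-instance field on the heap, and the per-invocation local variable on the stack:

public class MemoryDemo {
    static int staticValue = 0;          // one copy, associated with the class
    int instanceValue = 0;               // one copy per instance, on the heap

    int someMethod() {
        int stackValue = 42;             // lives in this invocation's stack frame
        return stackValue + instanceValue;
    }

    public static void main(String[] args) {
        MemoryDemo a = new MemoryDemo();
        MemoryDemo b = new MemoryDemo();

        MemoryDemo.staticValue = 7;                           // shared via the class
        a.instanceValue = 1;                                  // per-instance; b is unaffected
        System.out.println(b.instanceValue);                  // 0
        System.out.println(a.getClass() == MemoryDemo.class); // true: one Class instance
        System.out.println(a.someMethod());                   // 43
    }
}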
Preface
I have been experimenting with ByteBuddy and ASM, but I am still a beginner in ASM and between beginner and advanced in ByteBuddy. This question is about ByteBuddy and about JVM bytecode limitations in general.
Situation
I had the idea of creating global mocks for testing by instrumenting constructors in such a way that instructions like these are inserted at the beginning of each constructor:
if (GlobalMockRegistry.isMock(getClass()))
    return;
FYI, the GlobalMockRegistry basically wraps a Set<Class<?>> and if that set contains a certain class, then isMock(Class<?> clazz) would return true. The advantage of that concept is that I can (de)activate global mocking for each class during runtime, because if multiple tests run in the same JVM process, one test might need a certain global mock and the next one might not.
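A minimal sketch of how such a registry could look; the registry is the question's own idea, and every method name apart from isMock is an assumption made purely for illustration:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public final class GlobalMockRegistry {
    private static final Set<Class<?>> MOCKED_CLASSES = ConcurrentHashMap.newKeySet();

    public static void activate(Class<?> clazz) {
        MOCKED_CLASSES.add(clazz);      // enable global mocking for this class
    }

    public static void deactivate(Class<?> clazz) {
        MOCKED_CLASSES.remove(clazz);   // disable global mocking for this class
    }

    public static boolean isMock(Class<?> clazz) {
        return MOCKED_CLASSES.contains(clazz);
    }
}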
What the if(...) return; instructions above want to achieve is that if mocking is active, the constructor should not do anything:
no this() or super() calls, → update: impossible
no field initialisations, → update: possible
no other side effects. → update: might be possible, see my update below
The result would be an object with uninitialised fields that did not create any (possibly expensive) side effects such as resource allocation (database connection, file creation, you name it). Why would I want that? Could I not just create an instance with Objenesis and be happy? Not if I want a global mock, i.e. mock objects I cannot inject because they are created somewhere inside methods or field initialisers I do not have control over. Please do not worry about what method calls on such an object would do if its instance fields are not properly initialised. Just assume I have instrumented the methods to return stub results, too. I know how to do that already; in the context of this question, only the constructors are the problem.
Questions / problems
Now if I try to simulate the desired result in Java source code, I meet the following limitations:
I cannot insert any code before this() or super(). I could mitigate that by also instrumenting the super class hierarchy with the same if(...) return;, but would like to know if I could in theory use ASM to insert my code before this() or super() using a method visitor. Or would the byte code of the instrumented class somehow be verified during loading or retransformation and then rejected because the byte code is "illegal"? I would like to know before I start learning ASM because I want to avoid wasting time for an idea which is not feasible.
If the class contains final instance fields, I also cannot enter a return before all of those fields have been initialised in the constructor. That might happen at the very end of a complex constructor which performs lots of side effects before actually initialising the last field. So the question is similar to the previous one: Can I use ASM to insert my if(...) return; before any fields (including final ones) are initialised and produce a valid class which I could not produce using javac and will not be rejected when loaded or retransformed?
BTW, if it is relevant, we are talking about Java 8+, i.e. at the time of writing this that would be Java versions 8 to 14.
If anything about this question is unclear, please do not hesitate to ask follow-up questions, so I can improve it.
Update after discussing Antimony's answer
I think this approach could work: call the constructor chain while avoiding any side effects, resulting in a newly initialised instance with all fields empty (null, 0, false):
In order to avoid calling this.getClass(), I need to hard-code the mock target's class name directly into all constructors up the parent chain. I.e. if two "global mock" target classes have the same parent class(es), multiple of the following if blocks would be woven into each corresponding parent class, one for each hard-coded child class name.
In order to avoid any side effects from objects being created or methods being called, I need to call a super constructor myself, using null/zero/false values for each argument. That would not matter because the next parent class up the chain would have a similar code block so that the arguments given do not matter anyway.
// Avoid accessing 'this.getClass()'
if (GlobalMockRegistry.isMock(Sub.class)) {
    // Identify and call any parent class constructor, ideally a default constructor.
    // If none exists, call another one using default values like null, 0, false.
    // In the class derived from Object, just call 'Object.<init>'.
    super(null, 0, false);
    return;
}
// Here follows the original byte code, i.e. the normal super/this call and
// everything else the original constructor does.
Note to myself: Antimony's answer explains "uninitialised this" very nicely. Another related answer can be found here.
Next update after evaluating my new idea
I managed to validate my new idea with a proof of concept. As my JVM byte code knowledge is too limited and I am not used to the way of thinking it requires (stack frames, local variable tables, "reverse" logic of first pushing/popping variables, then applying an operation on them, not being able to easily debug), I just implemented it in Javassist instead of ASM, which in comparison was a breeze after failing miserably with ASM after hours of trial & error.
I can take it from here and I want to thank user Antimony for his very instructive answer + comments. I do know that theoretically the same solution could be implemented using ASM, but it would be exceedingly difficult in comparison because its API is too low level for the task at hand. ByteBuddy's API is too high level, Javassist was just right for me in order to get quick results (and easily maintainable Java code) in this case.
Yes and no. Java bytecode is much less restrictive than Java (source) in this regard. You can put any bytecode you want before the constructor call, as long as you don't actually access the uninitialized object. (The only operations allowed on an uninitialized this value are calling a constructor, setting private fields declared in the same class, and comparing it against null).
Bytecode is also more flexible in where and how you make the constructor call. For example, you can call one of two different constructors in an if statement, or you can wrap the super constructor call in a "try block", both things that are impossible at the Java language level.
Apart from not accessing the uninitialized this value, the only restriction* is that the object has to be definitely initialized along any path that returns from the constructor call. This means the only way to avoid initializing the object is to throw an exception. While being much laxer than Java itself, the rules for Java bytecode were still very deliberately constructed so it is impossible to observe uninitialized objects. In general, Java bytecode is still required to be memory safe and type safe, just with a much looser type system than Java itself. Historically, Java applets were designed to run untrusted code in the JVM, so any method of bypassing these restrictions was a security vulnerability.
* The above is talking about traditional bytecode verification, as that is what I am most familiar with. I believe stackmap verification behaves similarly though, barring implementation bugs in some versions of Java.
P.S. Technically, Java can have code execute before the constructor call. If you pass arguments to the constructor, those expressions are evaluated first, and hence the ability to place bytecode before the constructor call is required in order to compile Java code. Likewise, the ability to set private fields declared in the same class is used to set synthetic variables that arise from the compilation of nested classes.
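A small sketch of that point (class and method names invented here): the argument expression is evaluated before the superclass constructor runs, so Java code does execute before the constructor call, and if that expression throws, no instance is ever initialised:

class Resource {
    Resource(int handle) {
        System.out.println("Resource(" + handle + ")");
    }
}

class ManagedResource extends Resource {
    ManagedResource() {
        // open() runs before Resource.<init>; it must be static because
        // 'this' is not yet available at this point
        super(open());
    }

    private static int open() {
        System.out.println("runs before the super constructor");
        return 42;
    }
}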
If the class contains final instance fields, I also cannot enter a return before all of those fields have been initialised in the constructor.
This, however, is eminently possible. The only restriction is that you call some constructor or superconstructor on the uninitialized this value. (Since all constructors recursively have this restriction, this will ultimately result in java.lang.Object's constructor being called). However, the JVM doesn't care what happens after that. In particular, it only cares that the fields have some well typed value, even if it is the default value (null for objects, 0 for ints, etc.) So there is no need to execute the field initializers to give them a meaningful value.
Is there any other way to get the type to be instantiated other than this.getClass() from a super class constructor?
Not as far as I am aware. There's no special opcode for magically getting the Class associated with a given value. Foo.class is just syntactic sugar which is handled by the Java compiler.
In the Java 8 sources we can find a rather tricky way of dealing with JIT optimization inside the class Class:
/*
 * Private constructor. Only the Java Virtual Machine creates Class objects.
 * This constructor is not used and prevents the default constructor being
 * generated.
 */
private Class(ClassLoader loader) {
    // Initialize final field for classLoader. The initialization value of non-null
    // prevents future JIT optimizations from assuming this final field is null.
    classLoader = loader;
}
So, this constructor is never invoked, but the JIT will be "cheated" by this trick.
My question is: could it be implemented in a slightly different way, let's say
private Class() {
    classLoader = (ClassLoader) (new Object());
}
This is absolutely meaningless logic, but does it matter if the constructor will never be invoked?
Would such a trick also prevent the JIT from making this optimization?
In Java 6 and Java 7 (and Java 8 before update 40), the constructor is as simple as private Class() {}, but in these versions, there is no classLoader field either.
This implies that the association between Class and ClassLoader had to be maintained in a special, JVM-specific way; thus, getClassLoader() had to call into a native method, not necessarily involving JNI, but rather handled as a JVM intrinsic operation, still requiring special care inside the JVM’s native code. Further, the garbage collector had to know about the special relationship.
In contrast, hiding a field from Reflection is not so complicated, while now having an ordinary field simplifies the JVM’s native code, most notably the getClassLoader() operation and the garbage collector implementation(s). It might also be simpler for the optimizer to inline field accesses if it’s an ordinary field.
Now, when the Class objects are created via special JVM code, not using the declared constructor, it could contradict an optimizing JIT’s assumptions made by analyzing the constructor’s actual code to predict the possible values for this final field.
Note that nobody said that the current JIT is that smart. The comment talks about hypothetical “future JIT optimizations”. Having a constructor initializing the field with a parameter value is in line with what the JVM actually does.
In contrast, a constructor like your suggested classLoader = (ClassLoader)(new Object()); could lead a hypothetical optimizer to conclude that this field can’t be initialized with an actual ClassLoader instance as that code can never complete normally.
The commentary in the source of Class states that the initialisation value lets future JIT optimisation know that the classLoader field is not null. So the optimizer might do an even better job in the future.
To prevent optimisation, just declare your fields volatile.
In Java:
class Base {
    public Base() { System.out.println("Base::Base()"); virt(); }
    void virt() { System.out.println("Base::virt()"); }
}

class Derived extends Base {
    public Derived() { System.out.println("Derived::Derived()"); virt(); }
    void virt() { System.out.println("Derived::virt()"); }
}

public class Main {
    public static void main(String[] args) {
        new Derived();
    }
}
This will output
Base::Base()
Derived::virt()
Derived::Derived()
Derived::virt()
However, in C++ the result is different:
Base::Base()
Base::virt() // ← Not Derived::virt()
Derived::Derived()
Derived::virt()
(See http://www.parashift.com/c++-faq-lite/calling-virtuals-from-ctors.html for C++ code)
What causes such a difference between Java and C++? Is it the time when the vtable is initialized?
EDIT: I do understand the Java and C++ mechanisms. What I want to know is the insight behind this design decision.
Both approaches clearly have disadvantages:
In Java, the call goes to a method which cannot use this properly because its members haven’t been initialised yet.
In C++, an unintuitive method (i.e. not the one in the derived class) is called if you don’t know how C++ constructs classes.
Why each language does what it does is an open question but both probably claim to be the “safer” option: C++’s way prevents the use of uninitialised members; Java’s approach allows polymorphic semantics (to some extent) inside a class’ constructor (which is a perfectly valid use-case).
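A hedged sketch, reusing the Base/Derived names from the question's example but adding an instance field, shows the Java-side hazard concretely: the override runs before Derived's field initialiser has executed.

class Base {
    Base() {
        virt();                    // dispatches to the override on a half-built object
    }

    void virt() { }
}

class Derived extends Base {
    private String name = "initialised";   // assigned only after Base's constructor returns

    @Override
    void virt() {
        System.out.println("name = " + name);
    }

    public static void main(String[] args) {
        new Derived();             // prints: name = null
    }
}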
Well you have already linked to the FAQ's discussion, but that’s mainly problem-oriented, not going into the rationales, the why.
In short, it’s for type safety.
This is one of the few cases where C++ beats Java and C# on type safety. ;-)
When you create a class A, in C++ you can let each A constructor initialize the new instance so that all common assumptions about its state, called the class invariant, hold. For example, part of a class invariant can be that a pointer member points to some dynamically allocated memory. When each publicly available method preserves the class invariant, then it’s guaranteed to hold also on entry to each method, which greatly simplifies things – at least for a well-chosen class invariant!
No further checking is then necessary in each method.
In contrast, using two-phase initialization such as in Microsoft's MFC and ATL libraries you can never be quite sure whether everything has been properly initialized when a method (non-static member function) is called. This is very similar to Java and C#, except that in those languages the lack of class invariant guarantees comes from these languages merely enabling but not actively supporting the concept of a class invariant. In short, Java and C# virtual methods called from a base class constructor can be called down on a derived instance that has not yet been initialized, where the (derived) class invariant has not yet been established!
So, this C++ language support for class invariants is really great, helping do away with a lot of checking and a lot of frustrating perplexing bugs.
However, it makes it a bit difficult to do derived-class-specific initialization in a base class constructor, e.g. doing general things in a topmost GUI Widget class’ constructor.
The FAQ item “Okay, but is there a way to simulate that behavior as if dynamic binding worked on the this object within my base class's constructor?” goes a little into that.
For a more full treatment of the most common case, see also my blog article “How to avoid post-construction by using Parts Factories”.
Regardless of how it's implemented, it's a difference in what the language definition says should happen. Java allows you to call functions on a derived object that hasn't been fully initialized (it has been zero-initialized, but its constructor has not run). C++ doesn't allow that; until the derived class's constructor has run, there is no derived class.
Hopefully this will help:
When your line new Derived() executes, the first thing that happens is the memory allocation. The program will allocate a chunk of memory big enough to hold both the members of Base and Derived. At this point, there is no object. It's just uninitialized memory.
When Base's constructor has completed, the memory will contain an object of type Base, and the class invariant for Base should hold. There is still no Derived object in that memory.
During the construction of base, the Base object is in a partially-constructed state, but the language rules trust you enough to let you call your own member functions on a partially-constructed object. The Derived object isn't partially constructed. It doesn't exist.
Your call to the virtual function ends up calling the base class's version because at that point in time, Base is the most derived type of the object. If it were to call Derived::virt, it would be invoking a member function of Derived with a this-pointer that is not of type Derived, breaking type safety.
Logically, a class is something that gets constructed, has functions called on it, and then gets destroyed. You can't call member functions on an object that hasn't been constructed, and you can't call member functions on an object after it's been destroyed. This is fairly fundamental to OOP, the C++ language rules are just helping you avoid doing things that break this model.
In Java, method invocation is based on the runtime type of the object, which is why it behaves like that (I don't know much about C++).
Here your object is of type Derived, so the JVM invokes the method on the Derived object.
If I understand the virtual concept correctly, the equivalent in Java is abstract; your code right now is not really virtual code in Java terms.
Happy to update my answer if something is wrong.
Actually I want to know what's the insight behind this design decision
It may be that in Java, every type derives from Object, every Object is some kind of leaf type, and there's a single JVM in which all objects are constructed.
In C++, many types aren't virtual at all. Furthermore, in C++, the base class and the subclass can be compiled to machine code separately: so the base class does what it does without knowing whether it's a superclass of something else.
Constructors are not polymorphic in case of both C++ and Java languages, whereas a method could be polymorphic in both languages. This means, when a polymorphic method appears inside a constructor, the designers would be left with two choices.
Either strictly conform to the semantics of the non-polymorphic constructor and thus consider any polymorphic method invoked within a constructor as non-polymorphic. This is how C++ does it.§
Or compromise the strict semantics of the non-polymorphic constructor and adhere to the strict semantics of a polymorphic method, so that polymorphic methods invoked from constructors are always polymorphic. This is how Java does it.
Since neither strategy offers any real benefit over the other, and the Java way of doing it reduces a lot of overhead (no need to differentiate polymorphism based on the context of constructors), and since Java was designed after C++, I would presume the designers of Java opted for the 2nd option, seeing the benefit of less implementation overhead.
Added on 21-Dec-2016
§ Since the statement “any polymorphic method invoked within a constructor is non-polymorphic ... this is how C++ does it” might be confusing without careful scrutiny of the context, I’m adding a formalization to precisely qualify what I meant.
If class C has a direct definition of some virtual function F and its ctor has an invocation to F, then any (indirect) invocation of C’s ctor on an instance of child class T will not influence the choice of F; and in fact, C::F will always be invoked from C’s ctor. In this sense, invocation of virtual F is less-polymorphic (compared to say, Java which will choose F based on T)
Further, it is important to note that, if C inherits the definition of F from some parent P and has not overridden F, then C’s ctor will invoke P::F, and even this, IMHO, can be determined statically.
My professor said that whenever I use a static method from a class, the whole class gets loaded into memory and then the method is executed.
My question is: if a class contains 100 methods and 50 different variables, and I call one static method from that class, does the complete class (100 methods and 50 variables) get loaded into memory? That seems inefficient in terms of memory and performance. How does Java deal with this kind of issue?
True, the class byte-code is loaded when you call a static method (but once, not every time). The same also happens when you call a non-static method; in the latter case an instance also must be created. Thus, in the sense of your question, it is a false dichotomy. Because Java is a dynamic language and platform (with a JIT), the runtime efficiency can increase significantly between method invocations. Thus, it is best to write clear and concise code (that is, Write Dumb Code). If the clearest way to implement your solution is static methods, then use them.
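A small demo of the "loaded once" point (class names invented here): the static initializer of Heavy runs a single time, on first use of the class, no matter how many of its static methods are called afterwards.

class Heavy {
    static {
        System.out.println("Heavy initialised");   // runs exactly once
    }

    static void one() { System.out.println("one"); }
    static void two() { System.out.println("two"); }
}

public class LoadOnceDemo {
    public static void main(String[] args) {
        Heavy.one();   // triggers initialisation of Heavy, then prints "one"
        Heavy.two();   // Heavy is already initialised; just prints "two"
    }
}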