Possible to duplicate current java stack using JNI - java

I am trying to record the arguments passed to a method before it is called using bytecode instrumentation.
Currently, while instrumenting with Java code, I have to first pop all the args into locals, then push them again twice (once for my recording method, for which all primitive types have to be converted to their boxed types, and once for the actual method call).
What I would ideally like to do is just duplicate the entire stack for the number of args pushed for the method call. However, the JVM bytecode's dup instruction only duplicates the topmost value of the stack.
Is it possible using JNI to somehow duplicate the entire stack in one go?

No. The operand stack effectively goes away once the method is JIT-compiled to native code, so even if you did manage to manipulate the stack directly, its layout would change (and values would move into registers) on the fly.
You can reasonably easily duplicate the top four slots of the stack (using dup2_x2), but any further and you'll probably need to use local variables.
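For reference, a rough sketch of the locals-based workaround using ASM's MethodVisitor API. This is illustrative only: the call site Target.process(int, String), the Recorder.recordArgs helper, and the nextLocal slot (e.g. obtained from LocalVariablesSorter.newLocal) are all assumptions, not part of the question.

import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

class ArgSpiller {
    // Stack just before the instrumented call: ..., receiver, intArg, stringArg
    static void duplicateArgs(MethodVisitor mv, int nextLocal) {
        int strSlot = nextLocal;        // the topmost value is spilled first
        int intSlot = nextLocal + 1;

        // spill the two arguments into free local slots
        mv.visitVarInsn(Opcodes.ASTORE, strSlot);
        mv.visitVarInsn(Opcodes.ISTORE, intSlot);

        // first copy: boxed, handed to the (hypothetical) recording helper
        mv.visitVarInsn(Opcodes.ILOAD, intSlot);
        mv.visitMethodInsn(Opcodes.INVOKESTATIC, "java/lang/Integer",
                "valueOf", "(I)Ljava/lang/Integer;", false);
        mv.visitVarInsn(Opcodes.ALOAD, strSlot);
        mv.visitMethodInsn(Opcodes.INVOKESTATIC, "my/agent/Recorder",
                "recordArgs", "(Ljava/lang/Object;Ljava/lang/Object;)V", false);

        // second copy: original types, restoring the stack for the real call
        mv.visitVarInsn(Opcodes.ILOAD, intSlot);
        mv.visitVarInsn(Opcodes.ALOAD, strSlot);
        // ... the original INVOKEVIRTUAL Target.process(int, String) follows here
    }
}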

Related

java.lang.VerifyError errors using Java ASM

I am trying to write an instrumentation module for Java programs. One particular instrumentation I am looking to add is collecting all the objects in a method's argument list and do some processing on them.
Currently, to get the list of object arguments, I pop all the method args from the stack and then push them back one by one, adding my instrumentation call in between. While this mostly works, in large programs I see some errors of the form
java.lang.VerifyError, [1] (****) Incompatible argument to function
Does popping an object and then pushing it back onto the stack change its type somehow? Alternatively, is there a better solution for duplicating 'N' arguments from the stack without popping?
Where are you popping your arguments to? You need to store them in the local variable array, I assume? It is perfectly possible that you overwrite variables that are already stored there but are accessed later. In this case, you would have changed the types of the stored variables, which yields an error during verification.
As verification is a deterministic process, simply compare the bytecode of a failing method to the verifier's complaint and make sure that the types match.
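If you are using ASM, one way to avoid clobbering existing slots is to let LocalVariablesSorter allocate fresh ones. A minimal sketch, assuming a recent ASM with the asm-commons module on the classpath (the class name and helper method are illustrative):

import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;
import org.objectweb.asm.Type;
import org.objectweb.asm.commons.LocalVariablesSorter;

class SafeSpiller extends LocalVariablesSorter {
    SafeSpiller(int access, String descriptor, MethodVisitor next) {
        super(Opcodes.ASM9, access, descriptor, next);
    }

    // Spill the reference on top of the stack into a slot that is guaranteed
    // not to collide with the method's existing local variables.
    int spillReference() {
        int slot = newLocal(Type.getObjectType("java/lang/Object"));
        // emit through the delegate: indices returned by newLocal are already final
        mv.visitVarInsn(Opcodes.ASTORE, slot);
        return slot;
    }
}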

Instrument intermediary local method call within a method body

I know (using either BCEL or ASM, for instance) that it is possible to somehow access the local variables of a method... but I need something more. What I would like is:
to get the type of such a variable (or a way to convert it from the signature)
to know (distinguish) when this variable is used (either has its value assigned, or is passed as a parameter)
when this variable is used as a parameter, to know which method call it was passed to
to break "method chains" into their respective method calls and get their return values so I can manipulate them
The basic idea is that I would like to "instrument" methods a bit in the same way a debugger does (though limited to the first frame depth...).
Any pointers appreciated.
If more information is needed, feel free to ask.
This is only possible using a bytecode-level API. cglib does not expose such an API, so you have to choose between ASM, BCEL and Javassist; I would recommend ASM, which has the best documentation.
What you would need to do:
Parse the signature of the method; ASM offers utilities for that. This gives you each argument type by its internal name. You then need to map these types to their local variable indices.
Find any use of the variable at that index.
This is, however, quite a difficult task. In order to predict what your code does, you would have to emulate the method invocation. The JVM is a stack machine, and arguments can be placed on the operand stack as the result of an arbitrary chain of instructions. Therefore, you would effectively have to interpret every bytecode instruction that you find. You will, more or less, need to write your own simplistic interpreter, which is quite a task.
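For illustration, ASM's asm-analysis module already ships with such an interpreter. A minimal sketch, assuming you have read the class into a tree-API ClassNode and have the owner's internal name and a MethodNode at hand (BasicInterpreter only tracks coarse type categories; SimpleVerifier tracks actual reference types):

import java.util.Arrays;
import org.objectweb.asm.Type;
import org.objectweb.asm.tree.MethodNode;
import org.objectweb.asm.tree.analysis.Analyzer;
import org.objectweb.asm.tree.analysis.AnalyzerException;
import org.objectweb.asm.tree.analysis.BasicInterpreter;
import org.objectweb.asm.tree.analysis.BasicValue;
import org.objectweb.asm.tree.analysis.Frame;

class FrameDump {
    static void printFrames(String ownerInternalName, MethodNode method) throws AnalyzerException {
        // Step 1: the argument types, straight from the method descriptor.
        Type[] argumentTypes = Type.getArgumentTypes(method.desc);
        System.out.println("arguments: " + Arrays.toString(argumentTypes));

        // Step 2: simulate the method; frames[i] holds the types in the locals
        // and on the operand stack just before instruction i.
        Analyzer<BasicValue> analyzer = new Analyzer<>(new BasicInterpreter());
        Frame<BasicValue>[] frames = analyzer.analyze(ownerInternalName, method);

        for (int i = 0; i < frames.length; i++) {
            Frame<BasicValue> frame = frames[i];
            if (frame == null) {
                continue; // unreachable instruction
            }
            // frame.getLocal(n) and frame.getStack(n) give the inferred types
            // just before instruction i of method.instructions.
            System.out.println("insn " + i + ": stack depth = " + frame.getStackSize());
        }
    }
}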

What is a stack map frame

I've recently been looking at the Java Virtual Machine Specification (JVMS) to try to better understand what makes my programs work, but I've found a section that I'm not quite getting...
Section 4.7.4 describes the StackMapTable attribute, and in that section the document goes into detail about stack map frames. The issue is that it's a little wordy and I learn best by example, not by reading.
I understand that the first stack map frame is derived from the method descriptor, but I don't understand how (which is supposedly explained here.) Also, I don't entirely understand what the stack map frames do. I would assume they're similar to blocks in Java, but it appears as though you can't have stack map frames inside each other.
Anyway, I have two specific questions:
What do the stack map frames do?
How is the first stack map frame created?
and one general question:
Can someone provide an explanation less wordy and easier to understand than the one given in the JVMS?
Java requires all classes that are loaded to be verified, in order to maintain the security of the sandbox and ensure that the code is safe to optimize. Note that this is done on the bytecode level, so the verification does not verify invariants of the Java language, it merely verifies that the bytecode makes sense according to the rules for bytecode.
Among other things, bytecode verification makes sure that instructions are well formed, that all the jumps are to valid instructions within the method, and that all instructions operate on values of the correct type. The last one is where the stack map comes in.
The thing is that bytecode by itself contains no explicit type information. Types are determined implicitly through dataflow analysis. For example, an iconst instruction creates an integer value. If you store it in slot 1, that slot now has an int. If control flow merges from code which stores a float there instead, the slot is now considered to have invalid type, meaning that you can't do anything more with that value until overwriting it.
Historically, the bytecode verifier inferred all the types using these dataflow rules. Unfortunately, it is impossible to infer all the types in a single linear pass through the bytecode because a backwards jump might invalidate already inferred types. The classic verifier solved this by iterating through the code until everything stopped changing, potentially requiring multiple passes.
However, verification makes class loading slow in Java. Oracle decided to solve this by adding a new, faster verifier that can verify bytecode in a single pass. To do this, they required all new classes starting in Java 7 (with Java 6 in a transitional state) to carry metadata about their types, so that the bytecode can be verified in a single pass. Since the bytecode format itself can't be changed, this type information is stored separately in an attribute called StackMapTable.
Simply storing the type of every single value at every single point in the code would obviously take up a lot of space and be very wasteful. In order to make the metadata smaller and more efficient, they decided to have it only list the types at positions which are targets of jumps. If you think about it, this is the only time you need the extra information to do a single-pass verification. In between jump targets, all control flow is linear, so you can infer the types at the in-between positions using the old inference rules.
Each position where types are explicitly listed is known as a stack map frame. The StackMapTable attribute contains a list of frames in order, though they are usually expressed as a difference from the previous frame in order to reduce data size. If there are no frames in the method, which occurs when control flow never joins (i.e. the CFG is a tree), then the StackMapTable attribute can be omitted entirely.
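To make that concrete, here is a tiny (purely illustrative) Java method; the point after the if is exactly the kind of place that needs an explicit frame, because two paths reach it:

static int abs(int x) {
    int r = x;
    if (x < 0) {     // one path skips the negation, the other takes it
        r = -x;
    }
    // Control flow merges here, so the class file records a stack map frame
    // for this offset: locals = [int, int], operand stack = [].
    return r;
}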
So this is the basic idea of how StackMapTable works and why it was added. The last question is how the implicit initial frame is created. The answer of course is that at the beginning of the method, the operand stack is empty and the local variable slots have the types given by the types of the method parameters, which are determined from the method descriptor.
If you're used to Java, there are a few minor differences to how method parameter types work at the bytecode level. First off, virtual methods have an implicit this as first parameter. Second, boolean, byte, char, and short do not exist at the bytecode level. Instead, they are all implemented as ints behind the scenes.
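As an illustrative sketch, for the instance method below the descriptor is (ZBLjava/lang/String;J)I, and the implicit initial frame has an empty operand stack and locals of [ FrameExample (this), int, int, java/lang/String, long ], with the second half of the long occupying one more slot:

class FrameExample {
    // descriptor: (ZBLjava/lang/String;J)I
    // initial locals: 0 = this, 1 = int (boolean), 2 = int (byte),
    //                 3 = java/lang/String, 4-5 = long
    int m(boolean flag, byte b, String s, long count) {
        return flag ? b : (int) count + s.length();
    }
}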

What's the relationship between MethodHandles.foldArguments and MethodHandle.asCollector?

The Javadoc for MethodHandles.foldArguments contains this parenthetical note:
(Note that dropArguments can be used to remove any arguments that either the combiner or the target does not wish to receive. If some of the incoming arguments are destined only for the combiner, consider using asCollector instead, since those arguments will not need to be live on the stack on entry to the target.)
First, I'm confused whether this suggests replacing foldArguments with dropArguments+asCollector, replacing foldArguments+dropArguments with asCollector, replacing foldArguments+dropArguments with foldArguments+asCollector, etc.
Secondly, I don't understand why MethodHandles.asCollector is relevant at all here.
The note doesn't say "If you just want to collect arguments into an array, use asCollector", it seems to imply asCollector is a general replacement for foldArguments (possibly in some combination with dropArguments), which it is not.
The bit about "live on the stack on entry to the target" seems to imply I should first collect any arguments "destined only for the combiner" into an array with asCollector before sending them to the combiner. I don't understand how adding an array creation and extra level of indirection is going to help anything, especially because if the resulting method handle gets inlined the JVM will try to optimize out the array creation anyway. If the combiner-only args are dropped with dropArguments, the JVM should be able to prove they aren't used in the target. If for some reason the JVM can't prove the combiner-only args are not used in the target and thus must keep them alive, surely the array created by asCollector will be alive and thus keep its contents alive as well. They'll be on the heap rather than the stack, but I don't see how that helps (especially if they're references to objects already on the heap).
Java 8 added MethodHandles.collectArguments, which combines foldArguments and dropArguments in the obvious way to implement combiner-only arguments. The Javadoc for collectArguments does not mention asCollector as an alternative, suggesting any advice to use asCollector instead no longer holds, but the foldArguments Javadoc still contains the confusing parenthetical note.
What's the relationship (if any) between MethodHandles.foldArguments and MethodHandle.asCollector?
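For concreteness, this is the baseline foldArguments behaviour I have in mind (sum and describe are throwaway methods, purely for illustration):

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class FoldDemo {
    static int sum(int a, int b) { return a + b; }

    static String describe(int sum, int a, int b) { return a + "+" + b + "=" + sum; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle combiner = lookup.findStatic(FoldDemo.class, "sum",
                MethodType.methodType(int.class, int.class, int.class));
        MethodHandle target = lookup.findStatic(FoldDemo.class, "describe",
                MethodType.methodType(String.class, int.class, int.class, int.class));

        // The combiner runs on the leading (int, int) arguments and its result
        // becomes the target's first argument; the original arguments follow.
        MethodHandle folded = MethodHandles.foldArguments(target, combiner);
        System.out.println((String) folded.invokeExact(2, 3)); // prints 2+3=5
    }
}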

What happens if I manually change the bytecode before running it?

I am a little bit curious about what happens if I manually change something in the bytecode before execution. For instance, suppose I assign an int variable to a byte variable without a cast, or remove a semicolon somewhere in the program, or do anything else that would normally lead to a compile-time error. As I know, all compile-time errors are checked by the compiler before the .class file is produced. So what happens when, after successfully compiling a program, I change the bytecode manually? Is there a mechanism to handle this? If not, how does the program behave when executed?
EDIT :-
Hot Licks, Darksonn and manouti have already given correct, satisfying answers. I will just summarize here for readers looking for an answer to this type of question:
Every Java virtual machine has a class-file verifier, which ensures that loaded class files have a proper internal structure. If the class-file verifier discovers a problem with a class file, it throws an exception. Because a class file is just a sequence of binary data, a virtual machine can't know whether a particular class file was generated by a well-meaning Java compiler or by shady crackers bent on compromising the integrity of the virtual machine. As a consequence, all JVM implementations have a class-file verifier that can be invoked on untrusted classes, to make sure the classes are safe to use.
Refer to this for more details.
You certainly can use a hex editor (e.g., the free "HDD Hex Editor Neo") or some other tool to modify the bytes of a Java .class file. But obviously, you must do so in a way that maintains the file's "integrity" (tables all in the correct format, etc.). Furthermore (and much trickier), any modification you make must pass muster with the JVM's "verifier", which essentially re-checks everything that javac verified while compiling the program.
The verification process occurs during class loading and is quite complex. Basically, a data flow analysis is done on each procedure to assure that only the correct data types can "reach" a point where the data type is assumed. Eg, you can't change a load operation to load a reference to a HashMap onto the "stack" when the eventual user of the loaded reference will be assuming it's a String. (But enumerating all the checks the verifier does would be a major task in itself. I can't remember half of them, even though I wrote the verifier for the IBM iSeries JVM.)
(If you're asking if one can "jailbreak" a Java .class file to introduce code that does unauthorized things, the answer is no.)
You will most likely get a java.lang.VerifyError:
Thrown when the "verifier" detects that a class file, though well formed, contains some sort of internal inconsistency or security problem.
You can certainly do this, and there are even tools to make it easier, like http://set.ee/jbe/. The Java runtime will run your modified bytecode just as it would run the bytecode emitted by the compiler. What you're describing is a Java-specific case of a binary patch.
The semicolon example wouldn't be an issue, since semicolons are only for the convenience of the compiler and don't appear in the bytecode.
Either the bytecode executes normally and performs the instructions given, or the JVM rejects it.
I played around with programming directly in Java bytecode some time ago using Jasmin, and I noticed some things.
If the bytecode you edit it into makes sense, it will of course run as expected. However, there are some bytecode patterns that are rejected with a VerifyError.
For the specific example of out-of-bounds access, you can compile code with out-of-bounds accesses just fine; it will just get you an ArrayIndexOutOfBoundsException at runtime:
int[] arr = new int[20];
for (int i = 0; i < 100; i++) {
    arr[i] = i;
}
However, you can construct bytecode that is more fundamentally flawed than that. To give an example, I'll explain some things first.
Java bytecode works with a stack, and instructions operate on the top elements of the stack.
The stack naturally has different depths at different places in the program, but sometimes a goto in the bytecode can cause the stack to look different depending on how you reached a given point.
The stack might contain object, int, and then you store the object in an object array and the int in an int array. Elsewhere in the bytecode, a goto jumps to that same code, but now the stack contains int, object, which would result in an int being stored in an object array and vice versa.
This is just one example of the kind of thing that makes bytecode fundamentally flawed. The JVM detects these kinds of flaws when the class is loaded at runtime, and emits a VerifyError if something doesn't work.
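Here is a sketch of how you could build exactly such a class and watch the verifier reject it. It assumes ASM is on the classpath; the class name BadMerge, the method broken and the flag-based branch are all made up for illustration:

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.Label;
import org.objectweb.asm.MethodVisitor;
import static org.objectweb.asm.Opcodes.*;

public class BadMergeDemo {
    // A tiny class loader so we can hand the JVM our hand-made bytes.
    static class Definer extends ClassLoader {
        Class<?> define(String name, byte[] bytes) {
            return defineClass(name, bytes, 0, bytes.length);
        }
    }

    public static void main(String[] args) throws Exception {
        ClassWriter cw = new ClassWriter(0); // no COMPUTE_FRAMES: write exactly what we say
        // A version-49 (Java 5) class file carries no StackMapTable, so the old
        // inference verifier is used and we don't have to fake any frames.
        cw.visit(V1_5, ACC_PUBLIC, "BadMerge", null, "java/lang/Object", null);

        MethodVisitor mv = cw.visitMethod(ACC_PUBLIC | ACC_STATIC, "broken",
                "(Z)Ljava/lang/Object;", null, null);
        mv.visitCode();
        Label elseBranch = new Label();
        Label merge = new Label();
        mv.visitVarInsn(ILOAD, 0);                  // load the boolean flag
        mv.visitJumpInsn(IFEQ, elseBranch);
        mv.visitInsn(ICONST_1);                     // path A: an int on the stack
        mv.visitJumpInsn(GOTO, merge);
        mv.visitLabel(elseBranch);
        mv.visitTypeInsn(NEW, "java/lang/Object");  // path B: a reference on the stack
        mv.visitLabel(merge);                       // the two stacks disagree here
        mv.visitInsn(ARETURN);
        mv.visitMaxs(1, 1);
        mv.visitEnd();
        cw.visitEnd();

        Definer loader = new Definer();
        loader.define("BadMerge", cw.toByteArray());
        try {
            // Initialization forces linking, which is when verification runs.
            Class.forName("BadMerge", true, loader);
        } catch (VerifyError expected) {
            System.out.println("Rejected: " + expected.getMessage());
        }
    }
}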
