According to this topic:
Reasons of getting a java.lang.VerifyError
A java.lang.VerifyError occurs if the version of the JVM executing the code is newer than the version of the JVM that was used for compilation.
We can always fix this problem using the following JVM option: -XX:-UseSplitVerifier.
According to this answer:
https://stackoverflow.com/a/16467026/2674303
using this option is 'perfectly safe'.
Thus I don't understand why java.lang.VerifyError is a problem that prevents successful compilation. Please clarify. Maybe it is not safe for libraries which instrument bytecode?
The Q&A's that you link to refer to a specific kind of verification error that it is safe to work around by using an alternative verifier. However, there are other kinds of verify error that you should not ignore ... and that you cannot deal with that way.
In short, the advice in the linked question does not apply in general. In general:
Switching to the alternative verifier probably won't help.
If you disable the verifier entirely, you risk running code that may violate the JVM's runtime type-safety (etc) constraints. That can lead to security issues and / or heap corruption and hard JVM crashes.
If you have a specific VerifyError that you need advice on, please include the full exception message and stacktrace, and describe the situation in which it occurs. Note that Andrey's answer is correct that a common cause of verification errors is bugs in code that is doing "bytecode engineering" for various purposes. Often, the fix is to change to a different version of the corresponding dependency.
VerifyError happens when your byte code is incorrect in some way. For example, when it tries to read from an uninitialized variable or assigns a primitive value to a field of type Object. An instrumentation library may have bugs leading to generation of such malformed bytecode. If you can share the exact details of the error, we could probably be more specific about the exact cause.
The goal is to force tools and libraries to generate a correct StackMapTable attribute whenever they manipulate Java bytecode, so that the JVM does not have to do the slow and complicated type-inference phase of verification, but only the fast and simple type-checking phase.
-XX:-UseSplitVerifier was deprecated in Java 8; it won't help any more.
VerifyError is thrown by the JVM during class loading, so it appears at runtime, not at compile time.
We cannot always fix the problem by using the old verifier. There is one particular case where this helps: when the class is valid for the older verifier but is missing the StackMapTable attribute, which was introduced in JVM 1.6 and became obligatory in JVM 1.7.
If you want to get rid of a VerifyError and -XX:-UseSplitVerifier does not help, that means your class is incorrect from the JVM's point of view. You can still turn off verification entirely (e.g. with -Xverify:none), but this can cause problems.
Bytecode manipulation is always dangerous when you don't know what you're doing. The main bytecode manipulation libraries now support StackMapFrame calculation, so they can generate bytecode that is valid for every VM (see the sketch below). Still, a user can generate bytecode that is incorrect at the class-file level; in that case the JVM will still throw a VerifyError on load, and the split verifier won't help. Only disabling verification will force the VM to load such a class, but errors may then occur when it executes.
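For illustration, here is a minimal sketch (assuming the ASM library, org.ow2.asm, is on the classpath) of how a bytecode-manipulation tool can ask ASM to recompute stack map frames so the rewritten class still passes the type-checking verifier:

    import org.objectweb.asm.ClassReader;
    import org.objectweb.asm.ClassWriter;

    public class FrameRecomputingRewriter {
        public static byte[] rewrite(byte[] originalClass) {
            ClassReader reader = new ClassReader(originalClass);
            // COMPUTE_FRAMES makes ASM recalculate the StackMapTable from scratch
            // instead of copying possibly stale frames from the input class.
            ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_FRAMES);
            // No transformation here; plug a ClassVisitor between reader and writer to instrument.
            reader.accept(writer, 0);
            return writer.toByteArray();
        }
    }

With frames recomputed this way, the class file stays acceptable to the fast type-checking verifier, so -XX:-UseSplitVerifier is not needed.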
Related
As of Java 17, --illegal-access is effectively obsolete: https://openjdk.java.net/jeps/403
Any use of this option, whether with permit, warn, debug, or deny,
will have no effect other than to issue a warning message. We expect
to remove the --illegal-access option entirely in a future release.
Because of this, using openjdk17 early access builds, I'm seeing an issue with jackson https://github.com/FasterXML/jackson-databind/issues/3168. It seems to me that they're advocating --add-opens usage and struggle to envisage a holistic "fix".
I'd like to avoid adding --add-opens because if it's not jackson, it's the next dependency. I don't want to have to change JVM args across environments because of dependency changes. How do I avoid this?
From this article it seems that you can avoid resorting to --add-opens by exporting the modules at runtime through the methods of the Burningwave Core library:
org.burningwave.core.assembler.StaticComponentContainer.Modules.exportAllToAll()
org.burningwave.core.assembler.StaticComponentContainer.Modules.exportPackageToAllUnnamed("java.base","java.lang")
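A minimal sketch of how those calls could be wired in, assuming the Burningwave Core dependency is on the classpath (check the linked article for the exact coordinates and behaviour):

    import org.burningwave.core.assembler.StaticComponentContainer;

    public class ModuleOpener {
        public static void openJdkInternals() {
            // Broadest option: export every package of every module to every other module.
            StaticComponentContainer.Modules.exportAllToAll();
            // More targeted alternative: only java.base/java.lang to unnamed modules.
            // StaticComponentContainer.Modules.exportPackageToAllUnnamed("java.base", "java.lang");
        }
    }

Call this once at startup, before Jackson (or any other reflective library) touches the JDK internals.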
You don't, JDK internals are encapsulated for a reason.
...
...
Okay, are they gone now?
You can use Overlord by Mackenzie Scott to do various incredibly dangerous things nobody should ever do, including but not limited to:
Creating objects without calling their constructors
Casting values to incompatible types
Managing memory directly, and indeed,
Forcibly accessing JDK internals.
Specifically, see (or, rather, don't see) Overlord.breakEncapsulation(Class, Class, boolean) and Overlord.allowAccess(Class, Class, boolean).
In my application a lot of warnings are appearing. To remove those warnings I'm using @SuppressWarnings annotations. Would anything happen in my code if I used several @SuppressWarnings annotations?
The @SuppressWarnings annotation does not change anything about the way your code works. The only thing it does is keep your compiler or IDE from complaining about specific warnings.
If you feel you need to use @SuppressWarnings a lot, then you should take a close look at why you get those warnings. It's a sign that you might be doing things incorrectly - you get warnings for a reason.
The @SuppressWarnings annotation disables certain compiler warnings. In this case, the warnings about deprecated code ("deprecation") and unused local variables or unused private methods ("unused"). This article explains the possible values.
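For example, a small sketch showing both values on a single method (note that javac itself only reports some of these categories; IDEs such as Eclipse report the rest):

    import java.util.Date;

    public class SuppressionDemo {
        // Suppresses only the listed categories, and only inside this method.
        @SuppressWarnings({"deprecation", "unused"})
        void legacyCall() {
            Date d = new Date();
            int year = d.getYear();   // deprecated method: no "deprecation" warning here
            int scratch = 42;         // unused local variable: no "unused" warning here (IDE-level)
        }
    }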
Depends on what warnings you are suppressing. If they are related to APIs that are available only in newer versions, your app will crash on older versions. Some warnings, on the other hand, are informational and point to common causes of bugs, so it really depends on which warning you are suppressing.
If you mean "will my project break?" or "will it run slower?", most probably the answer is no. You can be fine suppressing warnings if they are trivial and you understand what they are signaling and why they are there.
For example, an unused variable warning. Maybe you have defined it and plan to use it in the near future, but the warning annoys you. Although I strongly suggest you use a source code version control system like Git/Mercurial so you can safely delete code and recover it a few days later.
But always check every warning you're suppressing: they are there for a purpose. For example, deprecated warnings: maybe your code runs fine, but in the next version of the JVM that deprecated method/class may have disappeared.
Always understand what you're doing
I would like to mark usage of certain methods provided by the JRE as deprecated. How do I do this?
You can't. Only code within your control can have the @Deprecated annotation added. Any attempt to reverse engineer the bytecode will result in a non-portable JRE. This is contrary to Java's write once, run anywhere methodology.
You can't deprecate JRE methods, but you can add warnings or even compile errors to your build system, e.g. using AspectJ (a sketch follows after the Eclipse steps below), or forbid the use of given methods in the IDE.
For example in Eclipse:
Go to Project properties --> Java Compiler --> Errors/Warnings, then enable project-specific settings and expand the "Deprecated and restricted APIs" category:
"Forbidden reference (access rule)"
Obviously you could instrument or override the class, adding the @Deprecated annotation, but it's not a clean solution.
Add such restrictions to your coding guidelines, and enforce as part of your code review process.
You can only do this if you are building your own JRE! In that case, just add @Deprecated above the corresponding code block. But if you are using Oracle's JRE, there is no way to do so!
In what context? Do you mean you want to be able to easily configure your IDE to inhibit use of certain API? Or are you trying to dictate to the world what APIs you prohibit? Or are you trying to do something at runtime?
If the first case, Eclipse, and I assume other IDEs, allow you to mark any API as forbidden, discouraged, or accessible at the package or class level.
If you mean the second, you can't, of course. That would be silly.
If you are trying to prohibit certain methods from being called at runtime, you can configure a security policy to prevent code loaded from specified locations from being able to call specific methods that check with the SecurityManager, if one is installed.
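As a minimal sketch of that last case (it relies on the legacy SecurityManager API, which is deprecated for removal in recent JDKs, so treat it as illustrative only):

    // Vetoes process execution for all code; install early, e.g. in main().
    public class NoExecSecurityManager extends SecurityManager {
        @Override
        public void checkExec(String cmd) {
            throw new SecurityException("Executing external processes is forbidden: " + cmd);
        }
    }

    // System.setSecurityManager(new NoExecSecurityManager());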
You can compile your own version of the class and add it to the boot class path or lib/ext directory. http://docs.oracle.com/javase/tutorial/ext/basics/install.html This will change the JDK and the JRE.
In fact, you can remove the method for compilation, and your program won't compile if it is used.
Snihalani: Just so that I get this straight ...
You want to 'deprecate methods in the JRE' in order to 'Making sure people don't use java's implementation and use my implementation from now on.' ?
First of all: you can't change anything in the JRE, nor are you allowed to; it's the property of Oracle. You might be able to change something locally if you want to go through the trouble, but that'll just be in your local JRE, not in the ones that can be downloaded from the Oracle webpage.
Next to that, nobody has your implementation, so how would we be able to use it anyway? The implementations provided by Oracle do exactly what they should do, and when a flaw/bug/... is found it'll be corrected or replaced by a new method (at which point the original method becomes deprecated).
But what mostly worries me is that you would go and change implementations with something you came up with. This reminds me quite a lot of phishing and similar techniques: having us run your code, without knowing what it does, without even knowing we are running your code. After all, if you had access to the original code and could "build" the JRE, what's to stop you from altering the code in the original method?
Deprecated is a way for the author to say:
"Yup ... I did this in the past, but it seems that there are problems with the method.
just in order not to change the behaviour of existing applications using this method, I will not change this method, rather mark it as deprecated, and add a method that solves this problem".
You are not the author, so it isn't up to you to decide whether or not the methods work the way they should anyway.
There are known compatibility issues with JDK7 compiled code using instrumentation.
As per http://www.oracle.com/technetwork/java/javase/compatibility-417013.html
Classfiles with version number 51 are exclusively verified using the type-checking verifier, and thus the methods must have StackMapTable attributes when appropriate. For classfiles with version 50, the Hotspot JVM would (and continues to) failover to the type-inferencing verifier if the stackmaps in the file were missing or incorrect. This failover behavior does not occur for classfiles with version 51 (the default version for Java SE 7).
Any tool that modifies bytecode in a version 51 classfile must be sure to update the stackmap information to be consistent with the bytecode in order to pass verification.
The solution is to use -XX:-UseSplitVerifier as summarised here:
https://community.oracle.com/blogs/fabriziogiudici/2012/05/07/understanding-subtle-new-behaviours-jdk-7
How safe is it? I suppose Oracle has put this check in for a reason. If I don't use it, I may be risking some other issues.
What can be consequences of using -XX:-UseSplitVerifier?
Thanks,
Piotr.
In short, it's perfectly safe.
Since Java 6, Oracle's compiler has made class files with a StackMapTable. The basic idea is that the compiler can explicitly specify what the type of an object is, instead of making the runtime do it. That provides a tiny speedup in the runtime, in exchange for some extra time during compile and some complexity in the compiled class file (the aforementioned StackMapTable).
As an experimental feature, it was not enabled by default in the Java 6 compiler. The runtime defaults to verifying the object types itself if no StackMapTable exists.
Until Java 7, when Oracle made it mandatory: the compiler generates them, and the runtime verifies them. It still uses the old verifier if the StackMapTable isn't there... but only on class files from Java 6 or earlier (version 50). Java 7 class files (version 51) are required to use the StackMapTable, and so the runtime won't cut them the same slack.
That's only a problem if your classfiles were generated without a StackMapTable. For instance, if you used a non-Oracle JVM. Or if you messed with bytecode afterwards -- like instrumenting it for use with a debugger, optimizer, or code coverage analyzer.
But you can get around it! Oracle's JVM provides the -XX:-UseSplitVerifier option to force the runtime to fall back to the old type verifier. It doesn't care about the StackMapTable.
In practice, the hoped-for optimization in runtime speed and efficiency hasn't materialized: if it exists, it hasn't been enough for anyone to notice. As the new type verifier doesn't provide any new features (just the optimization), it's perfectly safe to shut it off.
Oracle's explanation is at http://www.oracle.com/technetwork/java/javase/compatibility-417013.html if you search for JSR 202.
Yes -- it's safe. As Judebert says, it just slows class loading slightly.
To add a little more info: What exactly is a StackMap Table? Well, the Bytecode verifier needs to make two passes over the code in the class file to validate proper types of data are being passed around and used. The first pass, which is the slower one, does flow analysis of all the code's branches to see what type of data could be on the stack at each bytecode instruction. The second pass looks at each instruction to see if it can validly operate on all those types.
Here's the key: the compiler already has all the information at hand that the first pass generates - so (in Java 6 & 7) it stores it in a StackMap table in the class file.
This speeds up class loading because the class loader doesn't have to do that first pass. That's why it's called a Split Verifier, because the work is split between the compiler and the runtime loading mechanism. When you use the -XX:-UseSplitVerifier option, you tell Java to do both passes at class load time (and to ignore any StackMap table). Many products (like profilers that modify bytecode at load time) did not know about the StackMap table initially, so when they modified classes at load time, the StackMap table from the compiler was out of date and caused errors.
SO, to summarize, the -XX:-UseSplitVerifier option slows class loading. It does not affect security, runtime performance or functionality.
Stack Map Frames became mandatory in Java 7, and "prashant" argues that the idea is flawed and proposes that developers always use the -XX:-UseSplitVerifier flag to avoid using them.
Read more: Java 7 Bytecode Verifier: Huge backward step for the JVM
I have a scenario where I have code written against version 1 of a library but I want to ship version 2 of the library instead. The code has shipped and is therefore not changeable. I'm concerned that it might try to access classes or members of the library that existed in v1 but have been removed in v2.
I figured it would be possible to write a tool to do a simple check to see if the code will link against the newer version of the library. I appreciate that the code may still be very broken even if the code links. I am thinking about this from the other side - if the code won't link then I can be sure there is a problem.
As far as I can see, I need to run through the bytecode checking for references, method calls and field accesses to library classes then use reflection to check whether the class/member exists.
I have three-fold question:
(1) Does such a tool exist already?
(2) I have a niggling feeling it is much more complicated than I imagine and that I have missed something major - is that the case?
(3) Do you know of a handy library that would allow me to inspect the bytecode such that I can find the method calls, references etc.?
Thanks!
I think that Clirr - a binary compatibility checker - can help here:
Clirr is a tool that checks Java libraries for binary and source compatibility with older releases. Basically you give it two sets of jar files and Clirr dumps out a list of changes in the public api. The Clirr Ant task can be configured to break the build if it detects incompatible api changes. In a continuous integration process Clirr can automatically prevent accidental introduction of binary or source compatibility problems.
Changing the library in your IDE will result in all possible compile-time errors.
You don't need anything else, unless your code uses another library, which in turn uses the updated library.
Be especially wary of Spring configuration files. Class names are configured as text and don't show up as missing until runtime.
If you have access to the source code, you could just compile source against the new library. If it doesn't compile, you have definitely a problem. If it compiles you may still have a problem if the program uses reflection, some kind of IoC stuff like Spring etc.
If you have unit tests, then you have a better chance of catching any linking errors.
If you only have the .class files of the program, then I don't know of any tools that would help besides decompiling the class files to source and compiling that source again against the new library, but that doesn't sound too healthy.
The checks you mentioned are done by the JVM/Java class loader, see e.g. Linking of Classes and Interfaces.
So "attempting to link" can be simply achieved by trying to run the application. Of course you could hoist the checks to run them yourself on your collection of .class/.jar files. I guess a bunch of 3rd party byte code manipulators like BCEL will also do similar checks for you.
I notice that you mention reflection in the tags. If you load classes/invoke methods through reflection, there's no way to analyse this in general.
Good luck!