Will the JVM optimise out unused fields?

I'm trying to get to learn more about the JVM when it comes to optimising my code and was curious whether (or more specifically in which ways) it optimises out unused fields?
I assume that if you have a field within a class that is never written to or read from, then when the code is run this field will not exist within the class. Say you had a class that looked like this:
public class Foo {
    public final int A;
    public final float B;
    private final long[] C = new long[512];
}
and you only used fields A and B, then you can probably see how initializing, maintaining and freeing field C is a waste of time for what is essentially garbage data. Firstly, would I be correct in assuming the JVM would spot this?
Now my second and more important example is whether the JVM takes inheritance into consideration here? Say for example Foo looked more like this:
public class Foo {
    public final int A;
    public final float B;
    private final long[] C = new long[512];

    public long get(int i) {
        return C[i];
    }
}
then I assume that this class would be stored somewhere in memory kinda like:
[ A:4 | B:4 | C:1024 ]
so if I had a second class that looked like this:
public class Bar extends Foo {
    public final long D;

    @Override public long get(int i) {
        return i * D;
    }
}
then suddenly this means that field C is never used, so would an instance of Bar in memory look like this:
[ A:4 | B:4 | C:1024 | D:8 ] or [ A:4 | B:4 | D:8 ]

To prove that a field is entirely unused, i.e. not only unused until now but also unused in the future, it is not enough for it to be private and unused within the declaring class. Fields may also get accessed via Reflection or similar. Frameworks building upon this may even be in a different module; serialization, for example, is implemented inside the java.base module.
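For illustration, here is a sketch (class and field names modelled on the question's Foo) showing that even a private field no method ever reads remains observable via reflection, so removing it would be an observable change:

```java
import java.lang.reflect.Field;

public class ReflectionProbe {
    static class Foo {
        private final long[] c = new long[512]; // never read by Foo's own code
    }

    public static void main(String[] args) throws Exception {
        Foo foo = new Foo();
        // Reflection can still locate and read the field, so the JVM cannot
        // silently drop it from the class layout.
        Field f = Foo.class.getDeclaredField("c");
        f.setAccessible(true);
        long[] c = (long[]) f.get(foo);
        System.out.println(c.length); // prints 512
    }
}
```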
Further, in cases where the garbage collection of objects would be observable, e.g. for classes with nontrivial finalize() methods or weak references pointing to the objects, additional restrictions apply:
JLS §12.6.1, Implementing Finalization
Optimizing transformations of a program can be designed that reduce the number of objects that are reachable to be less than those which would naively be considered reachable. For example, a Java compiler or code generator may choose to set a variable or parameter that will no longer be used to null to cause the storage for such an object to be potentially reclaimable sooner.
Another example of this occurs if the values in an object’s fields are stored in registers. The program may then access the registers instead of the object, and never access the object again. This would imply that the object is garbage. Note that this sort of optimization is only allowed if references are on the stack, not stored in the heap.
This section also gives an example where such optimization would be forbidden:
class Foo {
    private final Object finalizerGuardian = new Object() {
        protected void finalize() throws Throwable {
            /* finalize outer Foo object */
        }
    };
}
The specification emphasizes that even if otherwise entirely unused, the inner object must not get finalized before the outer object has become unreachable.
This wouldn’t apply to long[] arrays, which can’t have a finalizer, but it makes more checks necessary while reducing the versatility of such a hypothetical optimization.
Since typical execution environments for Java allow new code to be added dynamically, it is impossible to prove that such an optimization would stay unobservable. So the answer is: in practice, there is no optimization that would eliminate an unused field from a class.
There is, however, a special case. The JVM may optimize a particular use case of the class when the object’s entire lifetime is covered by the code the optimizer is looking at. This is checked by Escape Analysis.
When the preconditions are met, Scalar Replacement may be performed, which will eliminate the heap allocation and turn the fields into the equivalent of local variables. Once your object has been decomposed into the three variables A, B, and C, they are subject to the same optimizations as local variables. So they may end up in CPU registers instead of RAM, or get eliminated entirely if they are never read or contain a predictable value.
Note that in this case, you don’t have to worry about the inheritance relation. Since this optimization only applies to a code path spanning the object’s entire lifetime, it includes its allocation; hence, its exact type is known. And all methods operating on the object must have been inlined already.
Since by this point, the outer object doesn’t exist anymore, eliminating the unused inner object also wouldn’t contradict the specification cited above.
So there’s no optimization removing an unused field in general, but for a particular Foo or Bar instance, it may happen. In those cases, even the existence of methods potentially using the field wouldn’t pose a problem, as the optimizer knows at that point whether they are actually invoked during the object’s lifetime.
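A minimal sketch of such a case, assuming HotSpot with escape analysis enabled (the default): the Foo instance below never escapes sum, so after inlining the JIT may scalar-replace it and drop the unused array allocation entirely. Comparing runs with and without the HotSpot flag -XX:-EliminateAllocations is one way to observe the difference in allocation rate.

```java
public class ScalarReplacementDemo {
    static class Foo {
        final int a;
        final float b;
        final long[] c = new long[512]; // candidate for elimination

        Foo(int a, float b) { this.a = a; this.b = b; }
    }

    // The instance never escapes this method, so a and b may become plain
    // locals (registers) and the never-read array may not be allocated at all.
    static int sum(int x) {
        Foo foo = new Foo(x, 1.0f);
        return foo.a + (int) foo.b;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += sum(i); // hot loop, so sum gets JIT-compiled
        }
        System.out.println(total);
    }
}
```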

Related

What's the best way to declare a constant in Kotlin?

When using Kotlin, there are mainly two ways of declaring a constant:
class Foo {
    companion object {
        private const val a = "A"
    }
}
And:
class Foo {
    private val a = "A"
}
which one is better?
I found that the companion way is equivalent to
public static final class Companion {
    @NotNull
    private static final String a = "A";
}
in Java.
In this question
If a constant is not static, Java will allocate a memory for that constant in every object of the class (i.e., one copy of the constant per object).
If a constant is static, there will be only one copy of the constant
for that class (i.e., one copy per class).
Yes that's true.
BUT
If I have 100 or more constants in one class, all static, when are they released? They are always in memory.
They won't be released until the program is killed/terminated, right?
So I think the second way
class Foo {
    private val a = "A"
}
is the right way, as any instance will be released at some point and then the memory is freed.
Not quite sure if I missed something. Any comments? Thanks!
BUT If I have 100 or more contacts in one class all static, when they are released?
You're going to write the info of 100 contacts in a source file? That's.. generally not where such data is supposed to go, but okay. 100.
You know what? That's bush league. Let's make it 10,000 contacts, all written out in one gigantic kotlin or java file.
It's still peanuts, memory wise. What's the share of a penny compared to the GDP of the world? Something along those lines. Whenever you load a class file, that's in memory. And is never getting released. And it doesn't matter one iota.
All those constant values are part of the class def and are loaded at least once in that sense no matter what you do.
The correct way is obviously for static data (i.e. things that are inherent to the class and do not ever change) to be loaded exactly once, instead of 'x times, where x is the number of instances'. It's semantically correct (by its nature, unchanging, class-global stuff is static; that's what static means), and it increases the 'load' of the fact that this class has been touched by only a few percentage points: you're just adding references. All those strings are loaded only once, and they are loaded whether you make this stuff static or not, because they are part of the class definition. (Where do you imagine those strings go if there are 0 instances? The JVM is not going to reload that class file from disk every time you call new Foo()!) Whereas if it's once-per-instance, you might be looking at millions of refs for no reason at all.
There are many ways to define constants in Kotlin. They differ in their scope and use of namespaces, memory usage, and ability to inherit and override.
There's no single ‘best’ approach; it depends on what your constants represent, and how you're using them.
Here are some (simplified) examples:
At the top (file) level:
const val a = "A"
The declaration is ‘loose’ in a file, not contained in any class. This is usually the simplest and most concise way — but it may not occur to folks who are used to Java, as it has no direct Java equivalent.
The constant is available anywhere in the file (as a bare a); and if not private, it can also be used anywhere else (either as a fully-qualified list.of.packages.a, or if that's imported, simply as a). It can't be inherited or overridden.
In a companion object:
class A {
    companion object {
        const val a = "A"
    }
}
If you know Java, this is roughly equivalent to a static field (as the question demonstrates). As with a top-level declaration, there is exactly one instance of the property in memory.
The main difference is that it's now part of A, which affects its scope and accessibility: it's available anywhere within A and its companion object, and (unless you restrict it) it can also be used elsewhere (as list.of.packages.A.a, as A.a if A is in scope, and simply as a if the whole thing is imported). (You can't inherit from a singleton such as a companion object, so it can't be inherited or overridden.)
In a class:
class A {
    val a = "A"
}
This differs both in concept and in practice, because every instance of A has its own property. This means that each instance of A will take an extra 4 or 8 bytes (or whatever the platform needs to store a reference) — even though they all hold the same reference. (The String object itself is interned.)
If A or a are closed (as here), that's unlikely to make good sense either in terms of the meaning of the code or its memory usage. (If you only have a few instances, it won't make much difference — but what if you have hundreds of thousands of instances in memory?) However, if A and a are both open, then subclasses can override the value, which can be handy. (However, see below.)
Once again, the property is available anywhere within A, and (unless restricted) anywhere that can see A. (Note that the property can't be const in this case, which means the compiler can't inline uses of it.)
In a class, with an explicit getter:
class A {
    val a get() = "A"
}
This is conceptually very similar to the previous case: every instance of A has its own property, which can be overridden in subclasses. And it's accessed in exactly the same way.
However, the implementation is more efficient. This version provides the getter function — and because that makes no reference to a backing field, the compiler doesn't create one. So you get all the benefits of a class property, but without the memory overhead.
As enum values:
enum class A {
    A
}
This makes sense only if you have a number of these values which are all examples of some common category; but if you do, then this is usually a much better way to group them together and make them available as named constants.
As values in a structure such as an array or map:
val letterConstants = mapOf('a' to "A")
This approach makes good sense if you want to look values up programmatically; and if you have a lot of values and want to avoid polluting namespaces, it can still make sense even if you only ever access it with constant keys.
It can also be loaded up (or extended) at runtime (e.g. from a file or database).
(I'm sure there are other approaches, too, that I haven't thought of.)
As I said, it's hard to recommend a particular implementation, because it'll depend upon the problem you're trying to solve: what the constants mean, what they're associated with, and how and where they'll be used.
The ability to override makes a difference between the two. Take a look at the following example: speed might be a property that is constant across a class, but you might want a different constant value for different subclasses.
open class Animal {
    companion object {
        val speed: Int = 1
    }

    var x: Int = 0

    fun move() {
        x += speed
        print("${this} moved to ${x}\n")
    }
}

class Dog : Animal() {
    companion object {
        val speed: Int = 3
    }
}

class Cat : Animal() {
    companion object {
        val speed: Int = 2
    }
}
fun main() {
    val a = Animal()
    a.move()
    val c = Cat()
    c.move()
    val d = Dog()
    d.move()
}
Output:
Animal#49c2faae moved to 1
Cat#17f052a3 moved to 1
Dog#685f4c2e moved to 1
This doesn't work because speed in move() always refers to Animal.speed. So in this case, you want speed to be an instance member instead of a static (companion) one.
open class Animal {
    open val speed: Int = 1
    var x: Int = 0

    fun move() {
        x += speed
        print("${this} moved to ${x}\n")
    }
}

class Dog : Animal() {
    override val speed: Int = 3
}

class Cat : Animal() {
    override val speed: Int = 2
}
Output:
Animal#37f8bb67 moved to 1
Cat#1d56ce6a moved to 2
Dog#17f052a3 moved to 3
As a general practice, if a value is absolutely independent of individual instances, make it static. In contrast, if a property sounds like it belongs to the individual instance rather than to the type, even if it is constant across all instances (for the time being), I would make it an instance member, as it is likely subject to change in future development. Though it is totally fine to make it static until you actually find the need to change it. For the above example, you might even eventually change speed to var instead of val when you later find that every individual dog has a different speed. Just do what suits your needs at the moment :)

Hypothetical partial object deallocation in Java

The following is a hypothetical problem. I'm purely interested in whether the effect is SOMEHOW achievable, by any obscure means imaginable (the Unsafe API, JNI, ASM, etc.). It is not an XY problem and I don't ever plan to write code like that! I'm just curious about internals.
Let's assume that we have this very simple hierarchy in Java:
class Cupcake {
    public String kind;
    // ...
}

class WholeRealityItself extends Cupcake {
    public Object[] wholeMatterOfUniverse;

    // transform internal state because reasons
    public void performBigBangAndFluctuateACupcake() {
        // ... chaotic spacetime fluctuations produce a cupcake
        this.kind = "quantum_with_sprinkles";
    }
}
Our process is as follows:
WholeRealityItself reality = new WholeRealityItself();
reality.performBigBangAndFluctuateACupcake();
Cupcake cupcake = (Cupcake) reality; // upcast
// from now on the object will only be accessed via its supertype and never downcast
Putting it into words:
We create an object of subtype, that has a lot of memory allocated.
We perform some internal state transformation on this subtype object.
We upcast the object to its supertype and from now on we will ONLY refer to it by its supertype and never downcast.
So now our JVM holds a Cupcake reference to an internal WholeRealityItself object with memory that (the programmer knows) will never be accessed again. Yes, I know that references and actual allocated objects are two different things and upcasts/downcasts make the program just "reinterpret" an object.
Completely ignoring the fact that this abomination of a code is unusable and should be replaced with a sane builder/factory/copy or whatever, just assume for the sake of argument that we want it that way. The point is not how to achieve the same effect but whether the following is possible:
Can you force a narrowing of the actual allocated OBJECT to convert its internals to a Cupcake from a WholeRealityItself and force deallocation of wholeMatterOfUniverse?
AKA - can you SOMEHOW slice an underlying allocated OBJECT? Last questions about object slicing are from ~10 years ago.
AKA - can you SOMEHOW slice an underlying allocated OBJECT?
No you can't.
The object is represented by a heap node. If you did anything to interfere with the size or type (class) of the heap node, you are liable to crash the garbage collector.
I guess you could use abstraction-breaking (nasty!) reflection to identify and assign null to all of the fields added by the subclass. But the problem is that you can't do anything about methods of the superclass that the subclass has overridden. If those methods refer to any of the fields that you have assigned null to, and something calls them, you have a potentially broken (smashed) object.
My advice: create a brand new Cupcake object using the relevant state from your WholeRealityItself object. It will have a different reference. Deal with that.
This David Wheeler quotation may be relevant ...
"All problems in computer science can be solved by another level of indirection."
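The "brand new Cupcake" advice could be sketched with a copy constructor, a hypothetical addition to the question's classes:

```java
public class CupcakeCopyDemo {
    static class Cupcake {
        public String kind;

        Cupcake() {}

        // Copies only the Cupcake-level state; the subclass's huge array
        // is simply never referenced from here.
        Cupcake(Cupcake other) {
            this.kind = other.kind;
        }
    }

    static class WholeRealityItself extends Cupcake {
        public Object[] wholeMatterOfUniverse = new Object[1 << 20];

        public void performBigBangAndFluctuateACupcake() {
            this.kind = "quantum_with_sprinkles";
        }
    }

    public static void main(String[] args) {
        WholeRealityItself reality = new WholeRealityItself();
        reality.performBigBangAndFluctuateACupcake();
        Cupcake cupcake = new Cupcake(reality); // different reference, as warned
        reality = null; // wholeMatterOfUniverse is now unreachable and collectable
        System.out.println(cupcake.kind); // prints quantum_with_sprinkles
    }
}
```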
Neither upcasts nor downcasts change the object itself; they only influence how the compiler treats the reference to the object.
One example is overridden methods: which method is called depends entirely on the runtime type of the object, not on the reference type that the compiler uses:
class Cupcake {
    public String kind;
    // ...

    public void printMeOut() {
        System.out.println("Cupcake");
    }
}

class WholeRealityItself extends Cupcake {
    public Object[] wholeMatterOfUniverse;

    @Override
    public void printMeOut() {
        System.out.println("WholeRealityItself");
    }

    public void performBigBangAndFluctuateACupcake() {
        //...
    }
}
After your sample code
WholeRealityItself reality = new WholeRealityItself();
reality.performBigBangAndFluctuateACupcake();
Cupcake cupcake = (Cupcake) reality; // upcast
// from now on the object will only be accessed via its supertype and never downcast
the call cupcake.printMeOut(); will print out "WholeRealityItself" every time, no matter how much time has passed since the upcast.
You are talking about upcasting, and nothing happens to your object. The only way to free the memory would be to have a constructor in Cupcake that takes another Cupcake as input and uses only the needed parts. After that, you could release the WholeRealityItself reference.

Can modern JVMs optimize different instances of the same class differently?

Say I have 2 instances of the same class, but they behave differently (follow different code paths) based on a final boolean field set at construction time. So, something like:
public class Foo {
    private final boolean flag;

    public Foo(boolean flagValue) {
        this.flag = flagValue;
    }

    public void f() {
        if (flag) {
            doSomething();
        } else {
            doSomethingElse();
        }
    }
}
2 instances of Foo with different values for flag could in theory be backed by 2 different assemblies, thereby eliminating the cost of the if (sorry for the contrived example, it's the simplest one I could come up with).
so my question is - do any JVMs actually do this? or is a single class always backed by a single assembly?
Yes, JVMs do this form of optimization. In your case, it would be a result of inlining and adaptive optimization after observing the value to always be true. Consider the following code:
Foo foo = new Foo(true);
foo.f();
It is trivial for HotSpot to prove that Foo is always an actual instance of Foo at the call site of f, which allows the VM to simply copy-paste the code of the method, thus eliminating the virtual dispatch. After inlining, the example is reduced to:
Foo foo = new Foo(true);
if (foo.flag) {
    doSomething();
} else {
    doSomethingElse();
}
This, again, allows the code to be reduced to:
Foo foo = new Foo(true);
foo.doSomething();
Whether the optimization can be applied therefore depends on the monomorphism of the call site of foo and the stability of flag at this call site. (The VM profiles your methods for such patterns.) The less the VM is able to predict the outcome of your program, the less optimization is applied.
If the example were as trivial as the above code, the JIT would probably also erase the object allocation and simply call doSomething. Also, for the trivial case where the value of the field can be trivially proven to be true, the VM does not even need to optimize adaptively but can simply apply the above optimization. There is a great tool named JITWatch that lets you look into how your code gets optimized.
The following applies to hotspot, other JVMs may apply different optimizations.
If those instances are in turn assigned to static final fields and then referred to by other code, and the VM is started with -XX:+TrustFinalNonStaticFields, then those instances can participate in constant folding and inlining, and CONSTANT.f() can result in different branches being eliminated.
Another approach available to privileged code is creating anonymous classes instead of instances via sun.misc.Unsafe.defineAnonymousClass(Class<?>, byte[], Object[]) and patching a class constant for each class, but ultimately that also has to be referenced through a class constant to have any effect on optimizations.
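A sketch of that static final setup (whether folding actually happens depends on the JVM build and flags; the program's observable behavior is identical either way):

```java
public class ConstantFoldingDemo {
    static class Foo {
        final boolean flag;
        Foo(boolean flag) { this.flag = flag; }
        int f() { return flag ? 1 : 2; }
    }

    // Static final roots: with -XX:+TrustFinalNonStaticFields the JIT may
    // treat FOO_TRUE.flag as a constant and fold f() down to "return 1".
    static final Foo FOO_TRUE = new Foo(true);
    static final Foo FOO_FALSE = new Foo(false);

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += FOO_TRUE.f() + FOO_FALSE.f();
        }
        System.out.println(sum); // prints 3000000
    }
}
```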

Java Memory Model: Is it safe to create a cyclical reference graph of final instance fields, all assigned within the same thread?

Can somebody who understand the Java Memory Model better than me confirm my understanding that the following code is correctly synchronized?
class Foo {
    private final Bar bar;

    Foo() {
        this.bar = new Bar(this);
    }
}

class Bar {
    private final Foo foo;

    Bar(Foo foo) {
        this.foo = foo;
    }
}
I understand that this code is correct but I haven't worked through the whole happens-before math. I did find two informal quotations that suggest this is lawful, though I'm a bit wary of completely relying on them:
The usage model for final fields is a simple one: Set the final fields for an object in that object's constructor; and do not write a reference to the object being constructed in a place where another thread can see it before the object's constructor is finished. If this is followed, then when the object is seen by another thread, that thread will always see the correctly constructed version of that object's final fields. It will also see versions of any object or array referenced by those final fields that are at least as up-to-date as the final fields are. [The Java® Language Specification: Java SE 7 Edition, section 17.5]
Another reference:
What does it mean for an object to be properly constructed? It simply means that no reference to the object being constructed is allowed to "escape" during construction. (See Safe Construction Techniques for examples.) In other words, do not place a reference to the object being constructed anywhere where another thread might be able to see it; do not assign it to a static field, do not register it as a listener with any other object, and so on. These tasks should be done after the constructor completes, not in the constructor. [JSR 133 (Java Memory Model) FAQ, "How do final fields work under the new JMM?"]
Yes, it is safe. Your code does not introduce a data race. Hence, it is synchronized correctly. All objects of both classes will always be visible in their fully initialized state to any thread that is accessing the objects.
For your example, this is quite straight-forward to derive formally:
For the thread that is constructing the objects, all observed field values need to be consistent with program order. Because of this intra-thread consistency, when constructing Bar, the Foo value handed to it is observed correctly and is never null. (This might seem trivial, but a memory model also regulates "single-threaded" memory orderings.)
For any thread that is getting hold of a Foo instance, its referenced Bar value can only be read via the final field. This introduces a dereference ordering between reading of the address of the Foo object and the dereferencing of the object's field pointing to the Bar instance.
If another thread is therefore capable of observing the Foo instance altogether (in formal terms, there exists a memory chain), this thread is guaranteed to observe this Foo fully constructed, meaning that its Bar field contains a fully initialized value.
Note that it does not even matter that the Bar instance's field is itself final if the instance can only be read via Foo. Adding the modifier does not hurt and better documents the intentions, so you should add it. But, memory-model-wise, you would be okay even without it.
Note that the JSR-133 cookbook that you quoted only describes an implementation of the memory model rather than the memory model itself. In many points, it is too strict. One day, the OpenJDK might no longer align with this implementation and instead implement a less strict model that still fulfills the formal requirements. Never code against an implementation; always code against the specification! For example, do not rely on a memory barrier being placed after the constructor, which is how HotSpot more or less implements it. These things are not guaranteed to stay and might even differ across hardware architectures.
The quoted rule that you should never let a this reference escape from a constructor is also too narrow a view of the problem. You should not let it escape to another thread. If you, for example, handed it to a virtually dispatched method, you could no longer control where the instance would end up, which is therefore a very bad practice! However, constructors are not dispatched virtually, and you can safely create circular references in the manner you depicted. (I assume that you are in control of Bar and its future changes. In a shared code base, you should document tightly that the constructor of Bar must not let the reference slip out.)
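For illustration, a sketch of the kind of escape being warned about; the Registry class is invented for the example:

```java
public class EscapeDemo {
    static class Registry {
        static volatile Foo lastCreated; // readable by any thread
    }

    static class Foo {
        final Bar bar;

        Foo() {
            Registry.lastCreated = this; // BAD: 'this' escapes before bar is
                                         // assigned; a concurrent reader may
                                         // observe lastCreated.bar == null
            this.bar = new Bar(this);
        }
    }

    static class Bar {
        final Foo foo;
        Bar(Foo foo) { this.foo = foo; }
    }

    public static void main(String[] args) {
        Foo foo = new Foo();
        // Within the constructing thread the field is always initialized;
        // the hazard only exists for other threads.
        System.out.println(foo.bar != null); // prints true
    }
}
```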
Immutable objects (with only final fields) are only "threadsafe" after they are properly constructed, meaning their constructor has completed. (The VM probably accomplishes this with a memory barrier after the constructor of such objects.)
Let's see how to make your example surely unsafe:
If the Bar constructor stored a this-reference where another thread could see it, this would be unsafe because Bar isn't constructed yet.
If the Bar constructor stored the foo-reference where another thread could see it, this would be unsafe because foo isn't constructed yet.
If the Bar constructor read some foo fields, then (depending on the order of initialization inside the Foo constructor) these fields would still be uninitialized. That's not a thread-safety problem, just an effect of the order of initialization. (Calling a virtual method inside a constructor has the same issues.)
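The third point can be demonstrated directly. This sketch (hypothetical field names) compiles, because the blank final is read through another reference, and it shows the default value being observed:

```java
public class InitOrderDemo {
    static class Foo {
        final Bar bar;
        final int answer;

        Foo() {
            this.bar = new Bar(this); // Bar reads 'answer' before it is assigned
            this.answer = 42;
        }
    }

    static class Bar {
        final int observed;

        Bar(Foo foo) {
            this.observed = foo.answer; // sees the default value 0, not 42
        }
    }

    public static void main(String[] args) {
        System.out.println(new Foo().bar.observed); // prints 0
    }
}
```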
References to immutable objects (only final fields) which are created by a new-expression are always safe to access (no uninitialized fields visible). But the objects referenced in these final fields may show uninitialized values if those references were obtained from a constructor giving away its this-reference.
As Assylias already wrote: because in your example the constructors stored no references where another thread could see them, your example is "threadsafe". The created Foo object can safely be handed to other threads.

What's the difference between anonymous classes in Java and closures?

It looks like anonymous class provides the basic functionality of closure, is that true?
There is almost no difference. In fact, there is an old saying about closures and objects: closures are the poor man's objects, and objects are the poor man's closures. Both are equally powerful in terms of what they can do. We are only arguing over expressiveness.
In Java we model closures with anonymous objects. In fact, a little history here: originally Java had the ability to modify the outer scope without the use of final. This worked fine for objects allocated in the local method scope, but when it came to primitives it caused lots of controversy. Primitives are allocated on the stack, so in order for them to live past the execution of the outer method, Java would have to allocate memory on the heap and move those members into the heap. At that time people were very new to garbage collection and they didn't trust it, so the claim was that Java shouldn't allocate memory without explicit instruction from the programmer. In an effort to strike a compromise, Java decided to use the final keyword.
http://madbean.com/2003/mb2003-49/
Now the interesting thing is that Java could remove that restriction and make the final keyword optional, now that everyone is more comfortable with the garbage collector, and it could be completely compatible from a language perspective. The workaround for this issue is simple anyway: define instance variables on your anonymous object and you can modify those as much as you wish. In fact, that could be an easy way to implement closure-style references to local scope: have the compiler add public instance variables to the anonymous class and rewrite the source code to use those instead of stack variables.
public Object someFunction() {
    int someValue = 0;
    SomeAnonymousClass implementation = new SomeAnonymousClass() {
        public void callback() {
            someValue++; // not legal today: someValue must be (effectively) final
        }
    };
    implementation.callback();
    return someValue;
}
Would be rewritten to:
public Object someFunction() {
    SomeAnonymousClass implementation = new SomeAnonymousClass() {
        public int someValue = 0;

        public void callback() {
            someValue++;
        }
    };
    implementation.callback();
    // all references to someValue could be rewritten to
    // use this instance variable instead.
    return implementation.someValue;
}
I think the reason people complain about Anonymous inner classes has more to do with static typing vs dynamic typing. In Java we have to define an agreed upon interface for the implementor of the anonymous class and the code accepting the anonymous class. We have to do that so we can type check everything at compile time. If we had 1st class functions then Java would need to define a syntax for declaring a method's parameters and return types as a data type to remain a statically typed language for type safety. This would almost be as complex as defining an interface. (An interface can define multiple methods, a syntax for declaring 1st class methods would only be for one method). You could think of this as a short form interface syntax. Under the hood the compiler could translate the short form notation to an interface at compile time.
There are a lot of things that could be done to Java to improve the Anonymous Class experience without ditching the language or major surgery.
As far as they both affect otherwise "private" scoping, in a very limited sense, yes. However, there are so many differences that the answer might as well be no.
Since Java lacks the ability to handle blocks of code as true R-values, inner classes cannot pass blocks of code as is typically done in continuations. Therefore the closure as a continuation technique is completely missing.
While the lifetime of a class to be garbage collected is extended by people holding inner classes (similar to closures keeping variables alive while being rebound to the closure), the ability of Java to do renaming via binding is limited to comply with the existing Java syntax.
And to allow threads to properly avoid stomping on each other's data under Java's thread contention model, inner classes are further restricted to accessing data that is guaranteed not to change, aka final locals.
This completely ignores the other kind of inner class (aka static nested classes), which is slightly different in feel. In other words, it touches upon a few items that closures could handle, but falls short of the minimum requirements that most people would consider necessary to be called a closure.
IMHO, They serve a similar purpose, however a closure is intended to be more concise and potentially provide more functionality.
Say you want to use a local variable using an anonymous class.
final int[] i = { 0 };
final double[] d = { 0.0 };
Runnable run = new Runnable() {
    public void run() {
        d[0] = i[0] * 1.5;
    }
};
executor.submit(run);
Closures avoid the need for most of the boilerplate coding by allowing you to write just what is intended.
int i = 0;
double d = 0.0;
Runnable run = { => d = i * 1.5; };
executor.submit(run);
or even
executor.submit({ => d = i * 1.5; });
or if closures support code blocks.
executor.submit() {
d = i * 1.5;
}
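For what it's worth, since Java 8 lambdas deliver much of this concision. Captured locals must still be effectively final, so the single-element-array trick from the anonymous-class example above is still needed when mutation is required:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LambdaClosureDemo {
    public static void main(String[] args) throws Exception {
        final int[] i = { 2 };
        final double[] d = { 0.0 };

        ExecutorService executor = Executors.newSingleThreadExecutor();
        // The lambda captures i and d just like the anonymous class did,
        // without the Runnable boilerplate.
        executor.submit(() -> d[0] = i[0] * 1.5).get();
        executor.shutdown();

        System.out.println(d[0]); // prints 3.0
    }
}
```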
