How to find if an object is referencing another at runtime - java

Is it possible to check at runtime whether an object has a direct or indirect reference to another object?
(I know I can use VisualVM or a similar tool to analyze a heap dump, but I'd like to automate this at runtime.)
I'm working with WeakHashMaps, doing something like this:
import java.util.WeakHashMap;

public class MyClass {

    // the Runnable will eventually be removed if the key is collected by the GC
    private static final WeakHashMap<Object, Runnable> map = new WeakHashMap<>();

    public static void main(String[] args) {
        MyClass a = new MyClass(2);
        MyClass b = new MyClass(20);
        a = null; // no more strong references to a
        b = null; // no more strong references to b
        System.gc();
        for (Runnable r : map.values()) {
            r.run();
        }
        // will print (20), because using format() in the lambda causes a strong
        // reference to MyClass (b), so the WeakHashMap will never remove b
    }

    public MyClass(int i) {
        if (i < 10) {
            map.put(this, () -> {
                System.out.println(i);
            });
        } else {
            map.put(this, () -> {
                // this is subtle, but calling format() creates a strong reference
                // from the Runnable to this MyClass instance
                System.out.println(format(i));
            });
        }
    }

    private String format(Integer i) {
        return "(" + i + ")";
    }
}
In the code, the two instances of MyClass add themselves (as keys) and a Runnable (as value) to the WeakHashMap.
For the first instance (a), the Runnable simply calls System.out.println(), and once a is no longer referenced (a = null) the entry is removed from the map.
For the second instance (b), the Runnable also calls format(), an instance method of MyClass. This creates a strong reference to b, and adding the Runnable to the map results in a stalemate: the value holds an indirect strong reference to the key, preventing collection by the garbage collector.
Now, I know how to avoid these situations (for instance, by using a WeakReference inside the lambda; see the sketch below), but this is really easy to miss in a real scenario and will cause a memory leak.
So, prior to adding the pair to the map, I'd like to check whether the value somehow references the key, and throw an exception if so.
This would be a "debug" task and would be disabled in production, so I don't care if it is slow or a hack.
--- update ---
I'm trying to deal with weak listeners, and to avoid them being collected immediately if they are not referenced elsewhere.
So I register them as notifier.addWeakListener(holder, e -> { ... });
which adds the listener to a WeakHashMap, preventing the listener from being collected as long as holder lives.
But any reference to the holder inside the listener creates the same stalemate :(
Is there a better way?

The Reflection API gives you access to all fields of a runtime object (and its runtime type, and possibly the Class object). In theory, you could traverse the tree of your instance's fields (and static fields on the class), the fields' fields, etc.
While this is possible, I doubt it would be feasible. You write that you don't care about performance, but it may be too slow even for development runs. Which brings us to Rule 1 of implementing your own cache: don't do it.
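To illustrate the idea anyway, here is a hedged sketch of such a reflective reachability check. ReferenceWalker and its methods are hypothetical names, static fields are deliberately skipped, and on JDK 9+ setAccessible may fail for JDK-internal classes unless the corresponding packages are opened:

import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Hypothetical debug helper: walks the object graph starting at 'value'
// and reports whether 'target' is reachable from it.
final class ReferenceWalker {

    static boolean references(Object value, Object target) {
        Set<Object> visited = Collections.newSetFromMap(new IdentityHashMap<>());
        return walk(value, target, visited);
    }

    private static boolean walk(Object current, Object target, Set<Object> visited) {
        if (current == null || !visited.add(current)) {
            return false;
        }
        if (current == target) {
            return true;
        }
        Class<?> type = current.getClass();
        if (type.isArray()) {
            if (!type.getComponentType().isPrimitive()) {
                for (int i = 0; i < Array.getLength(current); i++) {
                    if (walk(Array.get(current, i), target, visited)) {
                        return true;
                    }
                }
            }
            return false;
        }
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if (f.getType().isPrimitive() || Modifier.isStatic(f.getModifiers())) {
                    continue; // primitive fields cannot reference objects; statics are skipped here
                }
                f.setAccessible(true);
                try {
                    if (walk(f.get(current), target, visited)) {
                        return true;
                    }
                } catch (ReflectiveOperationException e) {
                    // ignore fields we cannot read in this debug-only check
                }
            }
        }
        return false;
    }
}

In the question's scenario, one could call ReferenceWalker.references(runnable, key) before map.put(key, runnable) and throw if it returns true.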

There is already a built-in feature for associations which are automatically cleaned up: ordinary instance fields. I.e.
import java.lang.ref.WeakReference;

public class MyClass {
    public static void main(String[] args) {
        MyClass a = new MyClass(2);
        MyClass b = new MyClass(20);
        WeakReference<MyClass> aRef = new WeakReference<>(a), bRef = new WeakReference<>(b);
        a = null; // no more strong references to a
        b = null; // no more strong references to b
        System.gc();
        if (aRef.get() == null) System.out.println("a collected");
        if (bRef.get() == null) System.out.println("b collected");
    }

    Runnable r;

    public MyClass(int i) {
        if (i < 10) {
            r = () -> System.out.println(i);
        } else {
            r = () -> {
                // a reference from the Runnable to MyClass is no problem here
                System.out.println(format(i));
            };
        }
    }

    private String format(Integer i) {
        return "(" + i + ")";
    }
}
You can put these associated objects into a weak hashmap as keys, to allow them to get garbage collected, which, of course, will only happen when the particular MyClass instance, which still holds a strong reference to it, gets garbage collected:
import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

public class MyClass {
    public static void main(String[] args) {
        MyClass a = new MyClass(2);
        MyClass b = new MyClass(20);
        for (Runnable r : REGISTERED) r.run();
        System.out.println("cleaning up");
        a = null; // no more strong references to a
        b = null; // no more strong references to b
        System.gc();
        // empty on common JRE implementations
        for (Runnable r : REGISTERED) r.run();
    }

    static Set<Runnable> REGISTERED = Collections.newSetFromMap(new WeakHashMap<>());

    Runnable r;

    public MyClass(int i) {
        r = i < 10 ?
            () -> System.out.println(i) :
            () -> {
                // a reference from the Runnable to MyClass is no problem here
                System.out.println(format(i));
            };
        REGISTERED.add(r);
    }

    private String format(Integer i) {
        return "(" + i + ")";
    }
}
But note that what works smoothly in this simple test setup is nothing you should rely on, especially as you mentioned weak listeners.
In production environments, the garbage collector runs when there are memory needs, which is not connected to application logic, i.e. whether particular actions implemented as listeners should be executed or not. One possible scenario would be that there is always enough memory, so the garbage collector never runs and obsolete listeners keep being executed forever.
But you may encounter problems in the other direction too. Your question suggests that it might be possible to write your listeners (Runnable in the example) in a way that they don't contain references to the instance whose lifetime ought to determine the listener's lifetime (the MyClass instance). This raises the question of how the lifetimes of these objects are connected at all. You have to keep strong references to these key objects for the sake of keeping the listeners alive, which is error prone too.
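For the addWeakListener scenario from the update, one possible pattern is to let the holder keep the only strong reference to its listener and let the notifier hold listeners only weakly; a minimal sketch with a hypothetical Notifier class:

import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;
import java.util.function.Consumer;

// Hypothetical notifier: holds its listeners only weakly.
class Notifier<E> {
    private final Set<Consumer<E>> listeners =
        Collections.newSetFromMap(new WeakHashMap<>());

    void addWeakListener(Consumer<E> listener) {
        listeners.add(listener);
    }

    void fire(E event) {
        for (Consumer<E> l : listeners) l.accept(event);
    }
}

// The holder keeps the only strong reference to its listener,
// so the listener lives exactly as long as the holder does,
// and it may freely reference the holder.
class Holder {
    private final Consumer<String> listener = e -> System.out.println(this + ": " + e);

    Holder(Notifier<String> notifier) {
        notifier.addWeakListener(listener);
    }
}

The listener then lives exactly as long as its holder and may freely reference it, because the chain holder -> listener is the strong one and the notifier's set is weak.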

Related

StringCoding has threadLocal [duplicate]

Does anyone have an example of how to do this? Are they handled by the garbage collector? I'm using Tomcat 6.
The javadoc says this:
"Each thread holds an implicit reference to its copy of a thread-local variable as long as the thread is alive and the ThreadLocal instance is accessible; after a thread goes away, all of its copies of thread-local instances are subject to garbage collection (unless other references to these copies exist).
If your application or (if you are talking about request threads) container uses a thread pool that means that threads don't die. If necessary, you would need to deal with the thread locals yourself. The only clean way to do this is to call the ThreadLocal.remove() method.
There are two reasons you might want to clean up thread locals for threads in a thread pool:
to prevent memory (or hypothetically resource) leaks, or
to prevent accidental leakage of information from one request to another via thread locals.
Thread local memory leaks should not normally be a major issue with bounded thread pools, since any thread locals are likely to get overwritten eventually, i.e. when the thread is reused. However, if you make the mistake of creating new ThreadLocal instances over and over again (instead of using a static variable to hold a singleton instance), the thread local values won't get overwritten and will accumulate in each thread's threadlocals map. This could result in a serious leak.
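A minimal sketch of the difference (class and field names are only illustrative):

// Good: one ThreadLocal instance shared by all requests; each thread
// only ever holds one entry for it, which gets overwritten on reuse.
class Formats {
    static final ThreadLocal<StringBuilder> BUFFER =
        ThreadLocal.withInitial(StringBuilder::new);
}

// Bad: a fresh ThreadLocal per call; each invocation adds an entry to the
// current thread's ThreadLocalMap that is never overwritten and can accumulate.
class Leaky {
    String format(int i) {
        ThreadLocal<StringBuilder> buffer = ThreadLocal.withInitial(StringBuilder::new);
        return buffer.get().append('(').append(i).append(')').toString();
    }
}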
Assuming that you are talking about thread locals that are created / used during a webapp's processing of an HTTP request, then one way to avoid the thread local leaks is to register a ServletRequestListener with your webapp's ServletContext and implement the listener's requestDestroyed method to clean up the thread locals for the current thread (a sketch follows below).
Note that in this context you also need to consider the possibility of information leaking from one request to another.
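A minimal sketch of such a listener, reusing the hypothetical Formats.BUFFER thread local from the sketch above; register it via a <listener> element in web.xml (or @WebListener on Servlet 3.0+):

import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;

public class ThreadLocalCleanupListener implements ServletRequestListener {

    @Override
    public void requestInitialized(ServletRequestEvent sre) {
        // nothing to do when the request starts
    }

    @Override
    public void requestDestroyed(ServletRequestEvent sre) {
        // remove our own thread locals before the worker thread
        // goes back into the pool
        Formats.BUFFER.remove();
    }
}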
Here is some code to clean all thread local variables from the current thread when you do not have a reference to the actual thread local variable. You can also generalize it to clean up thread local variables for other threads:
private void cleanThreadLocals() {
    try {
        // Get a reference to the thread locals table of the current thread
        Thread thread = Thread.currentThread();
        Field threadLocalsField = Thread.class.getDeclaredField("threadLocals");
        threadLocalsField.setAccessible(true);
        Object threadLocalTable = threadLocalsField.get(thread);

        // Get a reference to the array holding the thread local variables inside the
        // ThreadLocalMap of the current thread
        Class<?> threadLocalMapClass = Class.forName("java.lang.ThreadLocal$ThreadLocalMap");
        Field tableField = threadLocalMapClass.getDeclaredField("table");
        tableField.setAccessible(true);
        Object table = tableField.get(threadLocalTable);

        // The key to the ThreadLocalMap is a WeakReference object. The referent field
        // of this object is a reference to the actual ThreadLocal variable
        Field referentField = Reference.class.getDeclaredField("referent");
        referentField.setAccessible(true);

        for (int i = 0; i < Array.getLength(table); i++) {
            // Each entry in the table array of ThreadLocalMap is an Entry object
            // representing the thread local reference and its value
            Object entry = Array.get(table, i);
            if (entry != null) {
                // Get a reference to the thread local object and remove it from the table;
                // the referent may be null for stale entries whose key was already collected
                ThreadLocal<?> threadLocal = (ThreadLocal<?>) referentField.get(entry);
                if (threadLocal != null) {
                    threadLocal.remove();
                }
            }
        }
    } catch (Exception e) {
        // Rethrow as unchecked; alternatively, log and continue if a failed cleanup is tolerable
        throw new IllegalStateException(e);
    }
}
There is no way to clean up ThreadLocal values except from within the thread that put them in there in the first place (or when the thread is garbage collected, which is not the case with worker threads). This means you should take care to clean up your ThreadLocals when a servlet request is finished (or before transferring AsyncContext to another thread in Servlet 3), because after that point you may never get a chance to enter that specific worker thread again, and hence will leak memory in situations where your web app is undeployed while the server is not restarted.
A good place to do such cleanup is ServletRequestListener.requestDestroyed().
If you use Spring, all the necessary wiring is already in place, you can simply put stuff in your request scope without worrying about cleaning them up (that happens automatically):
RequestContextHolder.getRequestAttributes().setAttribute("myAttr", myAttr, RequestAttributes.SCOPE_REQUEST);
. . .
RequestContextHolder.getRequestAttributes().getAttribute("myAttr", RequestAttributes.SCOPE_REQUEST);
Reading the Javadoc documentation again carefully:
'Each thread holds an implicit reference to its copy of a thread-local variable as long as the thread is alive and the ThreadLocal instance is accessible; after a thread goes away, all of its copies of thread-local instances are subject to garbage collection (unless other references to these copies exist).'
There is no need to clean anything; there is an 'AND' condition for the leak to survive. So even in a web container where the threads outlive the application, as long as the webapp's classes are unloaded (only being referenced from a static field of a class loaded by the parent class loader would prevent this, and that has nothing to do with ThreadLocal but is a general issue with shared jars holding static data), the second leg of the AND condition is no longer met, so the thread-local copy is eligible for garbage collection.
Thread locals can't be the cause of memory leaks, as long as the implementation meets the documentation.
I would like to contribute my answer to this question even though it's old. I had been plagued by the same problem (gson threadlocal not getting removed from the request thread), and had even gotten comfortable restarting the server anytime it ran out of memory (which sucks big time!!).
In the context of a Java web app that is set to dev mode (in that the server is set to bounce every time it senses a change in the code, and possibly also running in debug mode), I quickly learned that threadlocals can be awesome and sometimes a pain. I was using a threadlocal Invocation for every request. Inside the Invocation, I'd sometimes also use gson to generate my response. I would wrap the Invocation inside a 'try' block in the filter and destroy it inside a 'finally' block.
What I observed (I have no metrics to back this up for now) is that if I made changes to several files and the server was constantly bouncing in between my changes, I'd get impatient and restart the server (Tomcat, to be precise) from the IDE. More often than not, I'd end up with an 'Out of memory' exception.
How I got around this was to include a ServletRequestListener implementation in my app, and my problem vanished. I think what was happening is that in the middle of a request, if the server bounced several times, my threadlocals were not getting cleared up (gson included), so I'd get this warning about the threadlocals, and two or three warnings later the server would crash. With the ServletRequestListener explicitly closing my threadlocals, the gson problem vanished.
I hope this makes sense and gives you an idea of how to overcome threadlocal issues. Always close them around their point of usage. In the ServletRequestListener, test each threadlocal wrapper, and if it still has a valid reference to some object, destroy it at that point.
I should also point out that you should make it a habit to wrap a threadlocal as a static variable inside a class. That way you can be sure that by destroying it in the ServletRequestListener, you won't have to worry about other instances of the same class hanging around.
@lyaffe's answer is the best possible for Java 6. There are a few issues that this answer resolves using what is available in Java 8.
@lyaffe's answer was written for Java 6 before MethodHandle became available. It suffers from performance penalties due to reflection. If used as below, MethodHandle provides zero-overhead access to fields and methods.
@lyaffe's answer also goes through the ThreadLocalMap.table explicitly and is prone to bugs. There is a method ThreadLocalMap.expungeStaleEntries() now available that does the same thing.
The code below has 3 initialization methods to minimize the cost of invoking expungeStaleEntries().
private static final MethodHandle s_getThreadLocals = initThreadLocals();
private static final MethodHandle s_expungeStaleEntries = initExpungeStaleEntries();
private static final ThreadLocal<Object> s_threadLocals = ThreadLocal.withInitial(() -> getThreadLocals());

public static void expungeThreadLocalMap()
{
    Object threadLocals;

    threadLocals = s_threadLocals.get();
    try
    {
        s_expungeStaleEntries.invoke(threadLocals);
    }
    catch (Throwable e)
    {
        throw new IllegalStateException(e);
    }
}

private static Object getThreadLocals()
{
    ThreadLocal<Object> local;
    Object result;
    Thread thread;

    local = new ThreadLocal<>();
    local.set(local); // Force ThreadLocal to initialize Thread.threadLocals
    thread = Thread.currentThread();
    try
    {
        result = s_getThreadLocals.invoke(thread);
    }
    catch (Throwable e)
    {
        throw new IllegalStateException(e);
    }
    return (result);
}

private static MethodHandle initThreadLocals()
{
    MethodHandle result;
    Field field;

    try
    {
        field = Thread.class.getDeclaredField("threadLocals");
        field.setAccessible(true);
        result = MethodHandles.
            lookup().
            unreflectGetter(field);
        result = Objects.requireNonNull(result, "result is null");
    }
    catch (NoSuchFieldException | SecurityException | IllegalAccessException e)
    {
        throw new ExceptionInInitializerError(e);
    }
    return (result);
}

private static MethodHandle initExpungeStaleEntries()
{
    MethodHandle result;
    Class<?> clazz;
    Method method;
    Object threadLocals;

    threadLocals = getThreadLocals();
    clazz = threadLocals.getClass();
    try
    {
        method = clazz.getDeclaredMethod("expungeStaleEntries");
        method.setAccessible(true);
        result = MethodHandles.
            lookup().
            unreflect(method);
    }
    catch (NoSuchMethodException | SecurityException | IllegalAccessException e)
    {
        throw new ExceptionInInitializerError(e);
    }
    return (result);
}
The JVM automatically cleans up all the reference-less objects that are within the ThreadLocal object.
Another way to clean up those objects (for example, thread-unsafe objects that you keep per thread) is to put them inside some object-holder class, which basically holds them, and to override the finalize method to clean up the object that resides within it. Again, it depends on the garbage collector and its policies when it invokes the finalize method.
Here is a code sample:
public class MyObjectHolder {

    private MyObject myObject;

    public MyObjectHolder(MyObject myObj) {
        myObject = myObj;
    }

    public MyObject getMyObject() {
        return myObject;
    }

    protected void finalize() throws Throwable {
        myObject.cleanItUp();
    }
}

public class SomeOtherClass {
    static ThreadLocal<MyObjectHolder> threadLocal = new ThreadLocal<MyObjectHolder>();
    .
    .
    .
}
final ThreadLocal<T> old = backend;
// try to clean up by reflection
try {
    // BGN copy from apache ThreadUtils#getAllThreads
    ThreadGroup systemGroup = Thread.currentThread().getThreadGroup();
    while (systemGroup.getParent() != null) {
        systemGroup = systemGroup.getParent();
    }
    int count = systemGroup.activeCount();
    Thread[] threads;
    do {
        threads = new Thread[count + (count / 2) + 1]; // slightly grow the array size
        count = systemGroup.enumerate(threads, true);
        // return value of enumerate() must be strictly less than the array size according to javadoc
    } while (count >= threads.length);
    // END

    // remove by reflection
    final Field threadLocalsField = Thread.class.getDeclaredField("threadLocals");
    threadLocalsField.setAccessible(true);
    Class<?> threadLocalMapClass = Class.forName("java.lang.ThreadLocal$ThreadLocalMap");
    Method removeMethod = threadLocalMapClass.getDeclaredMethod("remove", ThreadLocal.class);
    removeMethod.setAccessible(true);
    for (int i = 0; i < count; i++) {
        final Object threadLocalMap = threadLocalsField.get(threads[i]);
        if (threadLocalMap != null) {
            removeMethod.invoke(threadLocalMap, old);
        }
    }
}
catch (Exception e) {
    throw new ThreadLocalAttention(e);
}

Broken singleton without volatile example

I know that, in theory, to implement a correct singleton, in addition to double-checked locking and synchronized we should make the instance field volatile.
But in real life I cannot produce an example that exposes the problem. Maybe there is a JVM flag that would disable some optimisation, or allow the runtime to do such a code permutation?
Here is the code that, as I understand it, should print to the console from time to time, but it doesn't:
class Data {
    int i;

    Data() {
        i = Math.abs(new Random().nextInt()) + 1; // Just not 0
    }
}

class Keeper {
    private Data data;

    Data getData() {
        if (data == null)
            synchronized (this) {
                if (data == null)
                    data = new Data();
            }
        return data;
    }
}

@Test
void foo() throws InterruptedException {
    Keeper[] sharedInstance = new Keeper[]{new Keeper()};
    Thread t1 = new Thread(() -> {
        while (true)
            sharedInstance[0] = new Keeper();
    });
    t1.start();
    final Thread t2 = new Thread(() -> {
        while (true)
            if (sharedInstance[0].getData().i == 0)
                System.out.println("GOT IT!!"); // This actually does not happen
    });
    t2.start();
    t1.join();
}
Could someone provide code that clearly demonstrates the described theoretical problem with the missing volatile?
A very good article about it:
https://shipilev.net/blog/2014/safe-public-construction/
You can find examples at the end.
And be aware of the following:
x86 is Total Store Order hardware, meaning the stores are visible for all processors in some total order. That is, if compiler actually presented the program stores in the same order to hardware, we may be reasonably sure the initializing stores of the instance fields would be visible before seeing the reference to the object itself. Even if your hardware is total-store-ordered, you can not be sure the compiler would not reorder within the allowed memory model spec. If you turn off -XX:+StressGCM -XX:+StressLCM in this experiment, all cases would appear correct since the compiler did not reorder much.
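For completeness, the usual fix (which the question deliberately omits) is to declare the field volatile and read it into a local variable; a minimal sketch based on the Keeper class from the question:

class Keeper {
    private volatile Data data;

    Data getData() {
        Data result = data;                      // one volatile read for the fast path
        if (result == null) {
            synchronized (this) {
                result = data;
                if (result == null) {
                    data = result = new Data();  // safe publication via the volatile write
                }
            }
        }
        return result;
    }
}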

WeakReference not collected in curly brackets?

This fails
public void testWeak() throws Exception {
    waitGC();
    {
        Sequence a = Sequence.valueOf("123456789");
        assert Sequence.used() == 1;
        a.toString();
    }
    waitGC();
}

private void waitGC() throws InterruptedException {
    Runtime.getRuntime().gc();
    short count = 0;
    while (count < 100 && Sequence.used() > 0) {
        Thread.sleep(10);
        count++;
    }
    assert Sequence.used() == 0 : "Not removed!";
}
The test fails, reporting "Not removed!".
This works:
public void testAWeak() throws Exception {
    waitGC();
    extracted();
    waitGC();
}

private void extracted() throws ChecksumException {
    Sequence a = Sequence.valueOf("123456789");
    assert Sequence.used() == 1;
    a.toString();
}

private void waitGC() throws InterruptedException {
    Runtime.getRuntime().gc();
    short count = 0;
    while (count < 100 && Sequence.used() > 0) {
        Thread.sleep(10);
        count++;
    }
    assert Sequence.used() == 0 : "Not removed!";
}
It seems like the curly brackets do not affect the weakness.
Are there any official resources on this?
Scope is a compile-time thing. It does not determine the reachability of objects at runtime; it only has an indirect influence due to implementation details.
Consider the following variation of your test:
static boolean WARMUP;

public void testWeak1() throws Exception {
    variant1();
    WARMUP = true;
    for (int i = 0; i < 10000; i++) variant1();
    WARMUP = false;
    variant1();
}

private void variant1() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
    }
    if (!WARMUP) System.out.println("variant1: "
        + (waitGC(track) ? "collected" : "not collected"));
}

public void testWeak2() throws Exception {
    variant2();
    WARMUP = true;
    for (int i = 0; i < 10000; i++) variant2();
    WARMUP = false;
    variant2();
}

private void variant2() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
        if (!WARMUP) System.out.println("variant2: "
            + (waitGC(track) ? "collected" : "not collected"));
    }
}

static class Trackable {
    final AtomicBoolean backRef;

    public Trackable(AtomicBoolean backRef) {
        this.backRef = backRef;
    }

    @Override
    protected void finalize() throws Throwable {
        backRef.set(true);
    }
}

private boolean waitGC(AtomicBoolean b) throws InterruptedException {
    for (int count = 0; count < 10 && !b.get(); count++) {
        Runtime.getRuntime().gc();
        Thread.sleep(1);
    }
    return b.get();
}
on my machine, it prints:
variant1: not collected
variant1: collected
variant2: not collected
variant2: collected
If you can’t reproduce it, you may have to raise the number of warmup iterations.
What it demonstrates: whether a is in scope (variant 2) or not (variant 1) doesn’t matter, in either case, the object has not been collected in cold execution, but got collected after a number of warmup iterations, in other words, after the optimizer kicked in.
Formally, a is always eligible for garbage collection at the point we’re invoking waitGC(), as it is unused from this point. This is how reachability is defined:
A reachable object is any object that can be accessed in any potential continuing computation from any live thread.
In this example, the object cannot be accessed by a potential continuing computation, as no subsequent computation that would access the object exists. However, there is no guarantee that a particular JVM's garbage collector is always capable of identifying all of those objects at each time. In fact, even a JVM having no garbage collector at all would still comply with the specification, though perhaps not the intent.
The possibility of code optimizations having an effect on the reachability analysis has also been explicitly mentioned in the specification:
Optimizing transformations of a program can be designed that reduce the number of objects that are reachable to be less than those which would naively be considered reachable. For example, a Java compiler or code generator may choose to set a variable or parameter that will no longer be used to null to cause the storage for such an object to be potentially reclaimable sooner.
So what happens technically?
As said, scope is a compile-time thing. At the bytecode level, leaving the scope defined by the curly braces has no effect. The variable a is out of scope, but its storage within the stack frame still exists holding the reference until overwritten by another variable or until the method completes. The compiler is free to reuse the storage for another variable, but in this example, no such variable exists. So the two variants of the example above actually generate identical bytecode.
In an unoptimized execution, the still existing reference within the stack frame is treated like a reference preventing the object's collection. In an optimized execution, the reference is only held until its last actual use. Inlining of its fields can allow its collection even earlier, up to the point that it is collected right after construction (or not constructed at all, if it didn't have a finalize() method). The extreme end is finalize() being called on a strongly reachable object in Java 8…
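If you need the opposite guarantee, namely that an object stays reachable up to a specific point despite such optimizations, Java 9 added java.lang.ref.Reference.reachabilityFence; a minimal sketch reusing Trackable and waitGC from above:

private void variantFenced() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    Trackable a = new Trackable(track);
    try {
        a.toString();
        System.out.println("fenced: "
            + (waitGC(track) ? "collected" : "not collected")); // always "not collected"
    } finally {
        // keeps 'a' strongly reachable until this point, even under optimization
        Reference.reachabilityFence(a);
    }
}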
Things change when you insert another variable, e.g.
private void variant1() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
    }
    String message = "variant1: ";
    if (!WARMUP) System.out.println(message
        + (waitGC(track) ? "collected" : "not collected"));
}
Then, the storage of a is reused by message after a's scope ended (that is, of course, compiler specific), and the object gets collected even in the unoptimized execution.
Note that the crucial aspect is the actual overwriting of the storage. If you use
private void variant1() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
    }
    if (!WARMUP)
    {
        String message = "variant1: "
            + (waitGC(track) ? "collected" : "not collected");
        System.out.println(message);
    }
}
The message variable uses the same storage as a, but its assignment only happens after the invocation of waitGC(track), so you get the same unoptimized execution behavior as in the original variant.
By the way, don't use short for local loop variables. Java always uses int for byte, short, char, and int calculations (as you know, e.g. when trying to write shortVariable = shortVariable + 1;), and requiring it to truncate the result value to short (which still happens implicitly when you use shortVariable++) adds an additional operation. So if you thought using short improved efficiency, note that it is actually the opposite.

Iterating a WeakHashMap

I'm using a WeakHashMap concurrently. I want to achieve fine-grained locking based on an Integer parameter; if thread A needs to modify a resource identified by Integer a and thread B does the same for resource identified by Integer b, then they need not to be synchronized. However, if there are two threads using the same resource, say thread C is also using a resource identified by Integer a, then of course thread A and C need to synchronize on the same Lock.
When there are no more threads that need the resource with ID X then the Lock in the Map for key=X can be removed. However, another thread can come in at that moment and try to use the lock in the Map for ID=X, so we need global synchronization when adding/removing the lock. (This would be the only place where every thread must synchronize, regardless of the Integer parameter) But, a thread cannot know when to remove the lock, because it doesn't know it is the last thread using the lock.
That's why I'm using a WeakHashMap: when the ID is no longer used, the key-value pair can be removed when the GC wants it.
To make sure I have a strong reference to the key of an already existing entry, and exactly that object reference that forms the key of the mapping, I need to iterate the keySet of the map:
synchronized (mrLocks) {
    // ... do other stuff

    for (Integer entryKey : mrLocks.keySet()) {
        if (entryKey.equals(id)) {
            key = entryKey;
            break;
        }
    }

    // if key==null, no thread has a strong reference to the Integer
    // key, so no thread is doing work on resource with id, so we can
    // add a mapping (new Integer(id) => new ReentrantLock()) here as
    // we are in a synchronized block. We must keep a strong reference
    // to the newly created Integer, because otherwise the id-lock mapping
    // may already have been removed by the time we start using it, and
    // then other threads will not use the same Lock object for this
    // resource
}
Now, can the content of the Map change while iterating it? I think not, because by calling mrLocks.keySet(), I created a strong reference to all keys for the scope of iteration. Is that correct?
As the API makes no assertions about the keySet(), I would recommend a cache usage like this:
private static Map<Integer, Reference<Integer>> lockCache = Collections.synchronizedMap(new WeakHashMap<>());

public static Object getLock(Integer i)
{
    Integer monitor = null;
    synchronized (lockCache) {
        Reference<Integer> old = lockCache.get(i);
        if (old != null)
            monitor = old.get();
        // if no monitor exists yet
        if (monitor == null) {
            /* clone i to avoid strong references
               to the map's key besides the Object returned
               by this method.
            */
            monitor = new Integer(i);
            lockCache.remove(monitor); // just to be sure
            lockCache.put(monitor, new WeakReference<>(monitor));
        }
    }
    return monitor;
}
This way you are holding a reference to the monitor (the key itself) while locking on it, and you allow the GC to collect it once it is no longer used.
Edit:
After the discussion about payload in the comments I thought about a solution with two caches:
private static Map<Integer, Reference<ReentrantLock>> lockCache = new WeakHashMap<>();
private static Map<ReentrantLock, Integer> keyCache = new WeakHashMap<>();

public static ReentrantLock getLock(Integer i)
{
    ReentrantLock lock = null;
    synchronized (lockCache) {
        Reference<ReentrantLock> old = lockCache.get(i);
        if (old != null)
            lock = old.get();
        // if no lock exists, or it got cleared from keyCache already but not from lockCache yet
        if (lock == null || !keyCache.containsKey(lock)) {
            /* clone i to avoid strong references
               to the map's key besides the Object returned
               by this method.
            */
            Integer cacheKey = new Integer(i);
            lock = new ReentrantLock();
            lockCache.remove(cacheKey); // just to be sure
            lockCache.put(cacheKey, new WeakReference<>(lock));
            keyCache.put(lock, cacheKey);
        }
    }
    return lock;
}
As long as a strong reference to the payload (the lock) exists, the strong reference to the mapped integer in keyCache avoids the removal of the payload from the lockCache cache.
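A possible usage sketch of the second variant; callers are expected to hold the returned lock only for the duration of their critical section:

// id identifies the resource; threads using the same id get the same lock
ReentrantLock lock = getLock(id);
lock.lock();
try {
    // ... work on the resource identified by id ...
} finally {
    lock.unlock();
}
// once no thread keeps a strong reference to this lock anymore,
// both cache entries become eligible for garbage collection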

Java multi-thread

I'm a novice Java programmer and I'm kind of confused by the following code snippet. Does it mean the first thread coming in will share the lock with the third one? I hope someone can help me clarify. Thanks in advance.
public class T_6 extends Thread {
    static Object o = new Object();
    static int counter = 0;
    int id;

    public T_6(int id) {
        this.id = id;
    }

    public void run() {
        if (counter++ == 1) // confused in here.
            o = new Object();
        synchronized (o) {
            System.err.println(id + " --->");
            try {
                sleep(1000);
            } catch (InterruptedException e) {
                System.err.println("Interrupted!");
            }
            System.err.println(id + " <---");
        }
    }

    public static void main(String args[]) {
        new T_6(1).start();
        new T_6(2).start();
        new T_6(3).start();
    }
}
When you reach the increment and the if, you perform a typical check-then-act operation. The problem here is that several threads can get there at the same time. This means they may work with local copies of the counter. The different threads may all have a local copy of 0, meaning they will all count up to 1 and all create new objects. But the object is stored in a static field, of which they may or may not have local copies. In short, whatever happens here is accidental. They may end up synchronizing on the same object, but they may also try to synchronize on different objects, meaning they won't synchronize at all.
You should have a look at the final and volatile keywords.
final means a reference can't be repointed once pointed somewhere. This is a good idea for locks. If you change your declaration to
final static Object o = new Object();
you are guaranteed that o cannot change, and all synchronizations will be over the same object.
volatile means it is forbidden for the VM to store a thread-local copy of a variable. All reads and writes must be to memory. This means that all threads will see writes that other threads do.
To ensure proper synchronization between multiple threads, all of them must acquire the lock on the same object, or else synchronization will not be achieved.
Take a look at this part of your code:
if ( counter++ == 1 ) //confused in here.
o = new Object();
This part is not necessary at all to make the code thread-safe. Remove the above code, which is causing the confusion. You have already created an instance of the object when declaring it. Now, to ensure thread safety between all the threads, make them acquire the lock on the same object which you have already created.
Look here: static final Object o = new Object();
Just make the object reference final, to ensure you do not assign a new value anywhere else in the code, mistakenly or intentionally. You can directly use this object in a synchronized block to ensure thread safety.
Does it mean the first thread coming in will share the lock with the
third one?
Yes, moreover, due to:
non-volatile static int counter = 0 variable
non-atomic operation ++
each thread will have its own copy of the variable counter. It means that the following condition is never true:
if ( counter++ == 1 )
o = new Object();
That's why all of these threads will share the same lock on the object o that was initialized when o was declared.
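Pulling the answers' suggestions together, here is a minimal sketch of the snippet with the confusing check-then-act removed and the lock made final (the counter, if still needed at all, is shown as an AtomicInteger, which is an assumption about the intent):

import java.util.concurrent.atomic.AtomicInteger;

public class T_6 extends Thread {
    static final Object o = new Object();                      // one shared, immutable lock reference
    static final AtomicInteger counter = new AtomicInteger();  // atomic, if a counter is still needed
    int id;

    public T_6(int id) {
        this.id = id;
    }

    public void run() {
        counter.incrementAndGet();
        synchronized (o) { // all threads synchronize on the same object
            System.err.println(id + " --->");
            try {
                sleep(1000);
            } catch (InterruptedException e) {
                System.err.println("Interrupted!");
            }
            System.err.println(id + " <---");
        }
    }

    public static void main(String[] args) {
        new T_6(1).start();
        new T_6(2).start();
        new T_6(3).start();
    }
}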
