Greetings, fellow SO users.
I am currently writing a class whose instances will serve as a cache of JavaBean PropertyDescriptors. You can call a method getPropertyDescriptor(Class clazz, String propertyName) which will return the appropriate PropertyDescriptor. If it wasn't retrieved previously, the BeanInfo instance for the class is obtained and the right descriptor located. The result is then stored for the class and property-name pair, so the next time it can be returned right away without the lookup or the BeanInfo.
A first concern was that multiple invocations for the same class could overlap. This was solved simply by synchronizing on the clazz parameter: multiple invocations for the same class are synchronized, while invocations for different classes can continue unhindered. This seemed like a decent compromise between thread-safety and liveness.
Now, at some point certain classes that have been introspected might need to be unloaded. I can't simply keep references to them, since that might result in a classloader leak. Also, the Introspector class of the JavaBeans API mentions that classloader destruction should be combined with a flush of the Introspector: http://download.oracle.com/javase/6/docs/api/java/beans/Introspector.html
So, I've added a method flushDirectory(ClassLoader cl) that will remove any class from the cache and flush it from the Introspector (with Introspector.flushFromCaches(Class clz)) provided it was loaded with that classloader.
Now I have a new concern regarding synchronization. No new mappings should be added to the cache while a flush is in progress, and the flush should not start while lookups are still going on. In other words, the basic problem is:
How do I make sure one piece of code may be run by multiple threads while another piece of code can only be run by one thread and prohibits those other pieces from running? It is a sort of one-way synchronization.
First I tried a combination of a java.util.concurrent.Lock and an AtomicInteger to keep count of the number of invocations in progress, but noticed that a Lock can only be acquired; there is no way to check whether it is currently held without acquiring it. Now I'm using plain synchronization on an Object in combination with the atomic integer. Here's a trimmed-down version of my class:
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
public class DescriptorDirectory {
private final ClassPropertyDirectory classPropertyDirectory = new ClassPropertyDirectory();
private final Object flushingLock = new Object();
private final AtomicInteger accessors = new AtomicInteger(0);
public DescriptorDirectory() {}
public PropertyDescriptor getPropertyDescriptor(final Class<?> clazz, final String propertyName) throws Exception {
//First incrementing the accessor count.
synchronized(flushingLock) {
accessors.incrementAndGet();
}
PropertyDescriptor result;
//Synchronizing on the directory Class root
//This is preferable to a full method synchronization since two lookups for
//different classes can never be on the same directory path and won't collide
synchronized(clazz) {
result = classPropertyDirectory.getPropertyDescriptor(clazz, propertyName);
if(result == null) {
//PropertyDescriptor wasn't loaded yet
//First we need bean information regarding the parent class
final BeanInfo beanInfo;
try {
beanInfo = Introspector.getBeanInfo(clazz);
} catch(final IntrospectionException e) {
accessors.decrementAndGet();
throw e;
//TODO: throw specific
}
//Now we must find the PropertyDescriptor of our target property
final PropertyDescriptor[] propList = beanInfo.getPropertyDescriptors();
for (int i = 0; (i < propList.length) && (result == null); i++) {
final PropertyDescriptor propDesc = propList[i];
if(propDesc.getName().equals(propertyName))
result = propDesc;
}
//If no descriptor was found, something's wrong with the name or access
if(result == null) {
accessors.decrementAndGet();
//TODO: throw specific
throw new Exception("No property with name \"" + propertyName + "\" could be found in class " + clazz.getName());
}
//Adding mapping
classPropertyDirectory.addMapping(clazz, propertyName, result);
}
}
accessors.decrementAndGet();
return result;
}
public void flushDirectory(final ClassLoader cl) {
//We wait until all getPropertyDescriptor() calls in progress have completed.
synchronized(flushingLock) {
while(accessors.intValue() > 0) {
try {
Thread.sleep(100);
} catch(final InterruptedException e) {
//No show stopper
}
}
for(final Iterator<Class<?>> it =
classPropertyDirectory.classMap.keySet().iterator(); it.hasNext();) {
final Class<?> clazz = it.next();
if(clazz.getClassLoader().equals(cl)) {
it.remove();
Introspector.flushFromCaches(clazz);
}
}
}
}
//The rest of the inner classes are omitted...
}
I believe this should work. Suppose thread 1 calls the get... method and thread 2 calls the flush... method at the same time. If thread 1 gets the lock on flushingLock first, thread 2 will wait for the accessor count to return to 0. In the meantime, new calls to get... can't continue since thread 2 will now have the flushingLock. If thread 2 got the lock first, it will wait for the accessors to go down to 0 while calls to get... will wait until the flush is complete.
Can anyone see problems with this approach? Are there scenarios I'm overlooking? Or perhaps I'm overcomplicating things. Most of all, some java.util.concurrent class might provide exactly what I'm doing here, or there may be a standard pattern for this problem that I'm not aware of.
Sorry for the length of this post. It's not that complex, but still far from a simple matter, so I figured some discussion of the right approach would be interesting.
Thanks to everyone who reads this and in advance for any answers.
As far as I understand you can use a ReadWriteLock here:
private ReadWriteLock lock = new ReentrantReadWriteLock();
private Lock readLock = lock.readLock();
private Lock writeLock = lock.writeLock();
public PropertyDescriptor getPropertyDescriptor(final Class<?> clazz, final String propertyName) throws Exception {
readLock.lock();
try {
...
} finally {
readLock.unlock();
}
}
public void flushDirectory(final ClassLoader cl) {
writeLock.lock();
try {
...
} finally {
writeLock.unlock();
}
}
Also, synchronizing on a Class instance looks suspicious to me - it can interfere with some other synchronization. Perhaps it would be better to use a thread-safe Map of Future<PropertyDescriptor> (see, for example, Synchronization in a HashMap cache).
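A minimal sketch of that Future-based cache (hypothetical class and method names; the ClassLoader-based flush from the question is omitted and only the (class, property) lookup is shown):

import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class DescriptorCache {

    // Keyed on "fully.qualified.ClassName#propertyName".
    private final ConcurrentMap<String, Future<PropertyDescriptor>> cache =
            new ConcurrentHashMap<>();

    public PropertyDescriptor get(final Class<?> clazz, final String propertyName) throws Exception {
        String key = clazz.getName() + "#" + propertyName;
        FutureTask<PropertyDescriptor> task =
                new FutureTask<>(() -> findDescriptor(clazz, propertyName));
        Future<PropertyDescriptor> future = cache.computeIfAbsent(key, k -> task);
        if (future == task) {
            task.run();          // this thread won the race and performs the introspection
        }
        try {
            return future.get(); // other callers block only for the same (class, property) pair
        } catch (ExecutionException e) {
            cache.remove(key, future);   // let a later call retry after a failure
            throw (Exception) e.getCause();
        }
    }

    private PropertyDescriptor findDescriptor(Class<?> clazz, String propertyName) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(clazz);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getName().equals(propertyName)) {
                return pd;
            }
        }
        throw new Exception("No property \"" + propertyName + "\" in " + clazz.getName());
    }
}

The FutureTask ensures the introspection for a given class/property pair runs at most once, while lookups for different keys never contend with each other or with a flush of unrelated entries.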
Related
We need to lock a method responsible for loading database data into a HashMap-based cache.
A possible situation is that a second thread tries to access the method while the first thread is still loading the cache.
We consider the second thread's effort in this case to be superfluous. We would therefore like to have that second thread wait until the first thread is finished, and then return (without loading the cache again).
What I have works, but it seems quite inelegant. Are there better solutions?
private static final ReentrantLock cacheLock = new ReentrantLock();
private void loadCachemap() {
if (cacheLock.tryLock()) {
try {
this.cachemap = retrieveParamCacheMap();
} finally {
cacheLock.unlock();
}
} else {
try {
cacheLock.lock(); // wait until thread doing the load is finished
} finally {
try {
cacheLock.unlock();
} catch (IllegalMonitorStateException e) {
logger.error("loadCachemap() finally {}",e);
}
}
}
}
I prefer a more resilient approach using read locks AND write locks. Something like:
private static final ReadWriteLock cacheLock = new ReentrantReadWriteLock();
private static final Lock cacheReadLock = cacheLock.readLock();
private static final Lock cacheWriteLock = cacheLock.writeLock();
private void loadCache() throws Exception {
// Expiry.
while (storeCache.expired(CachePill)) {
/**
* Allow only one in - all others will wait for 5 seconds before checking again.
*
* Eventually the one that got in will finish loading, refresh the Cache pill and let all the waiting ones out.
*
* Also waits until all read locks have been released - not sure if that might cause problems under busy conditions.
*/
if (cacheWriteLock.tryLock(5, TimeUnit.SECONDS)) {
try {
// Got a lock! Start the rebuild if still out of date.
if (storeCache.expired(CachePill)) {
rebuildCache();
}
} finally {
cacheWriteLock.unlock();
}
}
}
}
Note that storeCache.expired(CachePill) detects a stale cache, which may be more than you want, but the concept here is the same: establish a write lock before updating the cache, which will deny all read attempts until the rebuild is done. Also, manage multiple write attempts in a loop of some sort, or just drop out and let the read lock wait for access.
A read from the cache now looks like this:
public Object load(String id) throws Exception {
Store store = null;
// Make sure cache is fresh.
loadCache();
try {
// Establish a read lock so we do not attempt a read while the cache is being updated.
cacheReadLock.lock();
store = storeCache.get(id);
} finally {
// Make sure the lock is cleared.
cacheReadLock.unlock();
}
return store;
}
The primary benefit of this form is that read access does not block other read access but everything stops cleanly during a rebuild - even other rebuilds.
You didn't say how complicated your structure is and how much concurrency / congestion you need. There are many ways to address your need.
If your data is simple, use a ConcurrentHashMap or similar to hold it. Then just read and write from your threads without additional locking.
Another alternative is to use the actor model and put reads and writes on the same queue.
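For the ConcurrentHashMap option, a minimal sketch might look like this (hypothetical class; computeIfAbsent runs the loader at most once per missing key):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class SimpleCache<K, V> {

    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();

    // Reads never block each other; the loader runs at most once per missing key.
    public V get(K key, Function<K, V> loader) {
        return map.computeIfAbsent(key, loader);
    }

    // Writers simply replace entries; readers see either the old or the new value.
    public void put(K key, V value) {
        map.put(key, value);
    }

    // "Flushing" is just clearing or selectively removing entries.
    public void invalidateAll() {
        map.clear();
    }
}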
If all you need is to fill a read-only map that is initialized from the database once requested, you could use any form of double-checked locking, which can be implemented in a number of ways. The easiest variant would be the following:
private volatile Map<T, V> cacheMap;
public void loadCacheMap() {
if (cacheMap == null) {
synchronized (this) {
if (cacheMap == null) {
cacheMap = retrieveParamCacheMap();
}
}
}
}
But I would personally prefer to avoid any form of synchronization here and just make sure that the initialization is done before any other thread can access the map (for example via an init method in a DI container). In this case you would even avoid the overhead of volatile.
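A sketch of that eager-initialization variant (hypothetical class; retrieveParamCacheMap() stands in for the database load from the question, and the final field makes the fully built map visible to any thread that obtains the reference after construction):

import java.util.HashMap;
import java.util.Map;

public class ParamCache {

    // A final field assigned in the constructor is safely published along with
    // the object reference, so no volatile or locking is needed for readers.
    private final Map<String, String> cacheMap;

    public ParamCache() {
        this.cacheMap = retrieveParamCacheMap(); // e.g. invoked once by the DI container at startup
    }

    public String lookup(String key) {
        return cacheMap.get(key);
    }

    private Map<String, String> retrieveParamCacheMap() {
        // load the data from the database here
        return new HashMap<>();
    }
}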
EDIT: The answer above works only when an initial load is expected. In the case of multiple updates, you could replace the tryLock with some other form of test-and-test-and-set, for example using something like this:
private final AtomicReference<CountDownLatch> sync =
new AtomicReference<>(new CountDownLatch(0));
private void loadCacheMap() throws InterruptedException {
CountDownLatch oldSync = sync.get();
if (oldSync.getCount() == 0) { // if nobody updating now
CountDownLatch newSync = new CountDownLatch(1);
if (sync.compareAndSet(oldSync, newSync)) {
cacheMap = retrieveParamCacheMap();
newSync.countDown();
return;
}
}
sync.get().await();
}
After looking at this question, I think I want to wrap ThreadLocal to add a reset behavior.
I want to have something similar to a ThreadLocal, with a method I can call from any thread to set all the values back to the same value. So far I have this:
public class ThreadLocalFlag {
private ThreadLocal<Boolean> flag;
private List<Boolean> allValues = new ArrayList<Boolean>();
public ThreadLocalFlag() {
flag = new ThreadLocal<Boolean>() {
@Override protected Boolean initialValue() {
Boolean value = false;
allValues.add(value);
return value;
}
};
}
public boolean get() {
return flag.get();
}
public void set(Boolean value) {
flag.set(value);
}
public void setAll(Boolean value) {
for (Boolean tlValue : allValues) {
tlValue = value;
}
}
}
I'm worried that the autoboxing of the primitive may mean the copies I've stored in the list will not reference the same variables referenced by the ThreadLocal if I try to set them. I've not yet tested this code, and with something tricky like this I'm looking for some expert advice before I continue down this path.
Someone will ask "Why are you doing this?". I'm working in a framework where there are other threads that callback into my code, and I don't have references to them. Periodically I want to update the value in a ThreadLocal variable they use, so performing that update requires that the thread which uses the variable do the updating. I just need a way to notify all these threads that their ThreadLocal variable is stale.
I'm flattered that there is new criticism recently regarding this three year old question, though I feel the tone of it is a little less than professional. The solution I provided has worked without incident in production during that time. However, there are bound to be better ways to achieve the goal that prompted this question, and I invite the critics to supply an answer that is clearly better. To that end, I will try to be more clear about the problem I was trying to solve.
As I mentioned earlier, I was using a framework where multiple threads are using my code, outside my control. That framework was QuickFIX/J, and I was implementing the Application interface. That interface defines hooks for handling FIX messages, and in my usage the framework was configured to be multithreaded, so that each FIX connection to the application could be handled simultaneously.
However, the QuickFIX/J framework only uses a single instance of my implementation of that interface for all the threads. I'm not in control of how the threads get started, and each is servicing a different connection with different configuration details and other state. It was natural to let some of that state, which is frequently accessed but seldom updated, live in various ThreadLocals that load their initial value once the framework has started the thread.
Elsewhere in the organization, we had library code to allow us to register for callbacks for notification of configuration details that change at runtime. I wanted to register for that callback, and when I received it, I wanted to let all the threads know that it's time to reload the values of those ThreadLocals, as they may have changed. That callback comes from a thread I don't control, just like the QuickFIX/J threads.
My solution below uses ThreadLocalFlag (a wrapped ThreadLocal<AtomicBoolean>) solely to signal the other threads that it may be time to update their values. The callback calls setAll(true), and the QuickFIX/J threads call set(false) when they begin their update. I have downplayed the concurrency issues of the ArrayList because the only time the list is added to is during startup, and my use case was smaller than the default size of the list.
I imagine the same task could be done with other interthread communication techniques, but for what it's doing, this seemed more practical. I welcome other solutions.
Interacting with objects in a ThreadLocal across threads
I'll say up front that this is a bad idea. ThreadLocal is a special class which offers speed and thread-safety benefits if used correctly. Attempting to communicate across threads with a ThreadLocal defeats the purpose of using the class in the first place.
If you need access to an object across multiple threads there are tools designed for this purpose, notably the thread-safe collections in java.util.concurrent such as ConcurrentHashMap, which you can use to replicate a ThreadLocal by using Thread objects as keys, like so:
ConcurrentHashMap<Thread, AtomicBoolean> map = new ConcurrentHashMap<>();
// pass map to threads, let them do work, using Thread.currentThread() as the key
// Update all known thread's flags
for(AtomicBoolean b : map.values()) {
b.set(true);
}
Clearer, more concise, and avoids using ThreadLocal in a way it's simply not designed for.
Notifying threads that their data is stale
I just need a way to notify all these threads that their ThreadLocal variable is stale.
If your goal is simply to notify other threads that something has changed you don't need a ThreadLocal at all. Simply use a single AtomicBoolean and share it with all your tasks, just like you would your ThreadLocal<AtomicBoolean>. As the name implies updates to an AtomicBoolean are atomic and visible cross-threads. Even better would be to use a real synchronization aid such as CyclicBarrier or Phaser, but for simple use cases there's no harm in just using an AtomicBoolean.
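The answer suggests a single AtomicBoolean; one caveat is that if every worker must react, a single flag that any one worker clears can be missed by the others, so a hedged variation is to publish a version counter that each worker compares against the last version it applied (names below are hypothetical):

import java.util.concurrent.atomic.AtomicLong;

public class ConfigVersion {

    private final AtomicLong version = new AtomicLong();

    // Called from the configuration-callback thread whenever settings change.
    public void bumpVersion() {
        version.incrementAndGet();
    }

    public long current() {
        return version.get();
    }
}

// In each worker thread (sketch):
//   long seen = configVersion.current();
//   ...
//   long now = configVersion.current();
//   if (now != seen) {
//       reloadThreadLocalState();   // hypothetical per-thread refresh
//       seen = now;
//   }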
Creating an updatable "ThreadLocal"
All of that said, if you really want to implement a globally update-able ThreadLocal your implementation is broken. The fact that you haven't run into issues with it is only a coincidence and future refactoring may well introduce hard-to-diagnose bugs or crashes. That it "has worked without incident" only means your tests are incomplete.
First and foremost, an ArrayList is not thread-safe. You simply cannot use it (without external synchronization) when multiple threads may interact with it, even if they will do so at different times. That you aren't seeing any issues now is just a coincidence.
Storing the objects as a List prevents us from removing stale values. If you call ThreadLocal.set() it will append to your list without removing the previous value, which introduces both a memory leak and the potential for unexpected side-effects if you anticipated these objects becoming unreachable once the thread terminated, as is usually the case with ThreadLocal instances. Your use case avoids this issue by coincidence, but there's still no need to use a List.
Here is an implementation of an IterableThreadLocal which safely stores and updates all existing instances of the ThreadLocal's values, and works for any type you choose to use:
import java.util.Iterator;
import java.util.concurrent.ConcurrentMap;
import com.google.common.collect.MapMaker;
/**
* Class extends ThreadLocal to enable user to iterate over all objects
* held by the ThreadLocal instance. Note that this is inherently not
* thread-safe, and violates both the contract of ThreadLocal and much
* of the benefit of using a ThreadLocal object. This class incurs all
* the overhead of a ConcurrentHashMap, perhaps you would prefer to
* simply use a ConcurrentHashMap directly instead?
*
* If you do really want to use this class, be wary of its iterator.
* While it is as threadsafe as ConcurrentHashMap's iterator, it cannot
* guarantee that all existing objects in the ThreadLocal are available
* to the iterator, and it cannot prevent you from doing dangerous
* things with the returned values. If the returned values are not
* properly thread-safe, you will introduce issues.
*/
public class IterableThreadLocal<T> extends ThreadLocal<T>
implements Iterable<T> {
private final ConcurrentMap<Thread,T> map;
public IterableThreadLocal() {
map = new MapMaker().weakKeys().makeMap();
}
@Override
public T get() {
T val = super.get();
map.putIfAbsent(Thread.currentThread(), val);
return val;
}
@Override
public void set(T value) {
map.put(Thread.currentThread(), value);
super.set(value);
}
/**
* Note that this method fundamentally violates the contract of
* ThreadLocal, and exposes all objects to the calling thread.
* Use with extreme caution, and preferably only when you know
* no other threads will be modifying / using their ThreadLocal
* references anymore.
*/
@Override
public Iterator<T> iterator() {
return map.values().iterator();
}
}
As you can hopefully see this is little more than a wrapper around a ConcurrentHashMap, and incurs all the same overhead as using one directly, but hidden in the implementation of a ThreadLocal, which users generally expect to be fast and thread-safe. I implemented it for demonstration purposes, but I really cannot recommend using it in any setting.
It won't be a good idea to do that since the whole point of thread local storage is, well, thread locality of the value it contains - i.e. that you can be sure that no other thread than your own thread can touch the value. If other threads could touch your thread local value, it won't be "thread local" anymore and that will break the memory model contract of thread local storage.
Either you have to use something other than ThreadLocal (e.g. a ConcurrentHashMap) to store the value, or you need to find a way to schedule an update on the threads in question.
You could use Google Guava's MapMaker to create a static final concurrent, weak-key identity hash map of type Map<Thread, Map<String, Object>>, where the second map is a ConcurrentHashMap. That way you'd be pretty close to a ThreadLocal, except that you can iterate through the map.
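A sketch of that idea (hypothetical class; it assumes Guava's MapMaker, whose weakKeys() mode compares keys by identity):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import com.google.common.collect.MapMaker;

public final class ThreadScopedValues {

    // weakKeys() gives weak, identity-compared keys, so entries vanish
    // once their Thread has been garbage collected.
    private static final ConcurrentMap<Thread, Map<String, Object>> VALUES =
            new MapMaker().weakKeys().makeMap();

    private ThreadScopedValues() {}

    // Each thread gets (and lazily creates) its own inner map.
    public static Map<String, Object> forCurrentThread() {
        return VALUES.computeIfAbsent(Thread.currentThread(), t -> new ConcurrentHashMap<>());
    }

    // Unlike a real ThreadLocal, any thread may iterate over all per-thread maps.
    public static Iterable<Map<String, Object>> all() {
        return VALUES.values();
    }
}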
I'm disappointed in the quality of the answers received for this question; I have found my own solution.
I wrote my test case today, and found the only issue with the code in my question is the Boolean. Boolean is not mutable, so my list of references wasn't doing me any good. I had a look at this question, and changed my code to use AtomicBoolean, and now everything works as expected.
public class ThreadLocalFlag {
private ThreadLocal<AtomicBoolean> flag;
private List<AtomicBoolean> allValues = new ArrayList<AtomicBoolean>();
public ThreadLocalFlag() {
flag = new ThreadLocal<AtomicBoolean>() {
@Override protected AtomicBoolean initialValue() {
AtomicBoolean value = new AtomicBoolean();
allValues.add(value);
return value;
}
};
}
public boolean get() {
return flag.get().get();
}
public void set(boolean value) {
flag.get().set(value);
}
public void setAll(boolean value) {
for (AtomicBoolean tlValue : allValues) {
tlValue.set(value);
}
}
}
Test case:
public class ThreadLocalFlagTest {
private static ThreadLocalFlag flag = new ThreadLocalFlag();
private static boolean runThread = true;
@AfterClass
public static void tearDownOnce() throws Exception {
runThread = false;
flag = null;
}
/**
* @throws Exception if there is any issue with the test
*/
@Test
public void testSetAll() throws Exception {
startThread("ThreadLocalFlagTest-1", false);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
//ignore
}
startThread("ThreadLocalFlagTest-2", true);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
//ignore
}
startThread("ThreadLocalFlagTest-3", false);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
//ignore
}
startThread("ThreadLocalFlagTest-4", true);
try {
Thread.sleep(8000L); //watch the alternating values
} catch (InterruptedException e) {
//ignore
}
flag.setAll(true);
try {
Thread.sleep(8000L); //watch the true values
} catch (InterruptedException e) {
//ignore
}
flag.setAll(false);
try {
Thread.sleep(8000L); //watch the false values
} catch (InterruptedException e) {
//ignore
}
}
private void startThread(String name, boolean value) {
Thread t = new Thread(new RunnableCode(value));
t.setName(name);
t.start();
}
class RunnableCode implements Runnable {
private boolean initialValue;
RunnableCode(boolean value) {
initialValue = value;
}
@Override
public void run() {
flag.set(initialValue);
while (runThread) {
System.out.println(Thread.currentThread().getName() + ": " + flag.get());
try {
Thread.sleep(4000L);
} catch (InterruptedException e) {
//ignore
}
}
}
}
}
I'd like to see if there's a good pattern for sharing a context across all classes and subthreads of a top-level thread without using InheritableThreadLocal.
I've got several top-level processes that each run in their own thread. These top-level processes often spawn temporary subthreads.
I want each top-level process to have and manage its own database connection.
I do not want to pass around the database connection from class to class and from thread to subthread (my associate calls this the "community bicycle" pattern). These are big top-level processes and it would mean editing probably hundreds of method signatures to pass around this database connection.
Right now I call a singleton to get the database connection manager. The singleton uses InheritableThreadLocal so that each top-level process has its own version of it. While I know some people have problems with singletons, it means I can just say DBConnector.getDBConnection(args) (to paraphrase) whenever I need the correctly managed connection. I am not tied to this method if I can find a better and yet still-clean solution.
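For context, a minimal sketch of what such a singleton might look like (hypothetical names and a simplified signature; this is not the actual DBConnector):

import java.sql.Connection;

public final class DBConnector {

    // Each top-level thread (and, via inheritance, its subthreads) sees its own connection.
    private static final InheritableThreadLocal<Connection> CONNECTION =
            new InheritableThreadLocal<>();

    private DBConnector() {}

    // Called once by each top-level process before it spawns subthreads.
    public static void bind(Connection connection) {
        CONNECTION.set(connection);
    }

    public static Connection getDBConnection() {
        Connection c = CONNECTION.get();
        if (c == null) {
            throw new IllegalStateException("No connection bound to this thread tree");
        }
        return c;
    }
}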
For various reasons InheritableThreadLocal is proving to be tricky. (See this question.)
Does anyone have a suggestion to handle this kind of thing that doesn't require either InheritableThreadLocal or passing around some context object all over the place?
Thanks for any help!
Update: I've managed to solve the immediate problem (see the linked question) but I'd still like to hear about other possible approaches. forty-two's suggestion below is good and does work (thanks!), but see the comments for why it's problematic. If people vote for jtahlborn's answer and tell me that I'm being obsessive for wanting to avoid passing around my database connection then I will relent, select that as my answer, and revise my world-view.
I haven't tested this, but the idea is to create a customized ThreadPoolExecutor that knows how to get the context object and uses beforeExecute() to transfer the context object to the thread that is going to execute the task. To be a good citizen, you should also clear the context object in afterExecute(), but I leave that as an exercise.
public class XyzThreadPoolExecutor extends ThreadPoolExecutor {
public XyzThreadPoolExecutor() {
super(3, 3, 100, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(), new MyThreadFactory());
}
@Override
public void execute(Runnable command) {
/*
* get the context object from the calling thread
*/
Object context = null;
super.execute(new MyRunnable(context, command));
}
@Override
protected void beforeExecute(Thread t, Runnable r) {
((MyRunnable)r).updateThreadLocal((MyThread) t);
super.beforeExecute(t, r);
}
private static class MyThreadFactory implements ThreadFactory {
@Override
public Thread newThread(Runnable r) {
return new MyThread(r);
}
}
private class MyRunnable implements Runnable {
private final Object context;
private final Runnable delegate;
public MyRunnable(Object context, Runnable delegate) {
super();
this.context = context;
this.delegate = delegate;
}
void updateThreadLocal(MyThread thread) {
thread.setContext(context);
}
@Override
public void run() {
delegate.run();
}
}
private static class MyThread extends Thread {
public MyThread(Runnable target) {
super(target);
}
public void setContext(Object context) {
// set the context object here using thread local
}
}
}
the "community bicycle" solution (as you call it) is actually much better than the global (or pseudo global) singleton that you are currently using. it makes the code testable and it makes it very easy to choose which classes use which context. if done well, you don't need to add the context object to every method signature. you generally ensure that all the "major" classes have a reference to the current context, and that any "minor" classes have access to the relevant "major" class. one-off methods which may need access to the context will need their method signatures updated, but most classes should have the context available through a member variable.
As a ThreadLocal is essentially a Map keyed on your thread, couldn't you implement a Map keyed on your thread's name? All you then need is an effective naming strategy that meets your requirements.
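A sketch of that idea (hypothetical names; it assumes each top-level process and its subthreads share a thread-name prefix such as "processA-..."):

import java.sql.Connection;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public final class NamedContextRegistry {

    private static final ConcurrentMap<String, Connection> CONNECTIONS = new ConcurrentHashMap<>();

    private NamedContextRegistry() {}

    // Called once per top-level process, e.g. register("processA", connection).
    public static void register(String processName, Connection connection) {
        CONNECTIONS.put(processName, connection);
    }

    // Derive the owning process from the current thread's name,
    // e.g. "processA-worker-3" -> "processA".
    public static Connection lookup() {
        String name = Thread.currentThread().getName();
        String processName = name.contains("-") ? name.substring(0, name.indexOf('-')) : name;
        return CONNECTIONS.get(processName);
    }
}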
As a Lisper, I very much agree with your worldview and would consider it a shame if you were to revise it. :-)
If it were me, I would simply use a ThreadGroup for each top-level process, and associate each connection with the group the caller is running in. If using in conjunction with thread pools, just ensure the pools use threads in the correct thread group (for instance, by having a pool per thread group).
Example implementation:
public class CachedConnection {
/* Whatever */
}
public class ProcessContext extends ThreadGroup {
private static final Map<ProcessContext, Map<Class, Object>> contexts = new WeakHashMap<ProcessContext, Map<Class, Object>>();
public static <T> T getContext(Class<T> cls) {
ProcessContext tg = currentContext();
Map<Class, Object> ctx;
synchronized(contexts) {
if((ctx = contexts.get(tg)) == null)
contexts.put(tg, ctx = new HashMap<Class, Object>());
}
synchronized(ctx) {
Object cur = ctx.get(cls);
if(cur != null)
return(cls.cast(cur));
T new_t;
try {
new_t = cls.newInstance();
} catch(Exception e) {
throw(new RuntimeException(e));
}
ctx.put(cls, new_t);
return(new_t);
}
}
public static ProcessContext currentContext() {
ThreadGroup tg = Thread.currentThread().getThreadGroup();
while(true) {
if(tg instanceof ProcessContext)
return((ProcessContext)tg);
tg = tg.getParent();
if(tg == null)
throw(new IllegalStateException("Not running in a ProcessContext"));
}
}
}
If you then simply make sure to run all your threads in a proper ProcessContext, you can get a CachedConnection anywhere by calling ProcessContext.getContext(CachedConnection.class).
Of course, as mentioned above, you would have to make sure that any other threads you may delegate work to also run in the correct ProcessContext, but I'm pretty sure that problem is inherent in your description -- you would obviously need to specify somehow which one of multiple contexts your delegation workers run in. If anything, it could be conceivable to modify ProcessContext as follows:
public class ProcessContext extends ThreadGroup {
/* getContext() as above */
private static final ThreadLocal<ProcessContext> tempctx = new ThreadLocal<ProcessContext>();
public static ProcessContext currentContext() {
if(tempctx.get() != null)
return(tempctx.get());
ThreadGroup tg = Thread.currentThread().getThreadGroup();
while(true) {
if(tg instanceof ProcessContext)
return((ProcessContext)tg);
tg = tg.getParent();
if(tg == null)
throw(new IllegalStateException("Not running in a ProcessContext"));
}
}
public class RunnableInContext implements Runnable {
private final Runnable delegate;
public RunnableInContext(Runnable delegate) {this.delegate = delegate;}
public void run() {
ProcessContext old = tempctx.get();
tempctx.set(ProcessContext.this);
try {
delegate.run();
} finally {
tempctx.set(old);
}
}
}
public static Runnable wrapInContext(Runnable delegate) {
return(currentContext().new RunnableInContext(delegate));
}
}
That way, you could use ProcessContext.wrapInContext() to pass a Runnable which, when run, inherits its context from where it was created.
(Note that I haven't actually tried the above code, so it may well be full of typos.)
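Usage might then look something like this (hypothetical demo class; it assumes the submitting code is already running inside a ProcessContext, since wrapInContext() captures the caller's context at submission time):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ProcessContextDemo {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Called from a thread that is already running inside some ProcessContext.
    void delegateWork() {
        // The wrapper carries the caller's context, so the pool thread
        // resolves the same per-process CachedConnection.
        pool.submit(ProcessContext.wrapInContext(() -> {
            CachedConnection conn = ProcessContext.getContext(CachedConnection.class);
            // ... use the per-process connection ...
        }));
    }
}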
I would not support your world-view or jtahlborn's idea, even on the grounds that it is more testable.
First, paraphrasing what I have understood from your problem statement:
There are 3 or 4 top-level processes, each essentially having a thread of its own, and the connection object is what differs between them.
Some basic characteristics of the Connection need to be set up once.
The child threads in no way change the Connection object passed to them from the top-level threads.
Here is what I propose: you do need the one-time set-up of your Connection, but then in each of your top-level processes you 1) do further processing of that Connection, 2) keep an InheritableThreadLocal (so the child threads of your top-level thread will see the modified connection object), and 3) pass these thread-implementing classes (MyThread1, MyThread2, MyThread3, ... MyThread4) to the Executor. (This is different from your other linked question; if you need some gating, a Semaphore is the better approach.)
The reason I say this is no less testable than jtahlborn's approach is that there, too, you would need to provide a mocked Connection object, just as you would here. Conceptually, passing the object around and keeping it in a ThreadLocal are one and the same (an InheritableThreadLocal is a map that gets passed along by Java's built-in mechanism; nothing bad there, I believe).
EDIT: I did take into account that this is a closed system and that there are no "free" threads tampering with the connection.
I have a common interface for a number of singleton implementations. The interface defines an initialization method which can throw a checked exception.
I need a factory which will return cached singleton implementations on demand, and wonder if the following approach is thread-safe.
UPDATE1: Please don't suggest any 3rd-party libraries, as that would require obtaining legal clearance due to possible licensing issues :-)
UPDATE2: This code will likely be used in an EJB environment, so it's preferable not to spawn additional threads or use anything like that.
interface Singleton
{
void init() throws SingletonException;
}
public class SingletonFactory
{
private static ConcurrentMap<String, AtomicReference<? extends Singleton>> CACHE =
new ConcurrentHashMap<String, AtomicReference<? extends Singleton>>();
public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
throws SingletonException
{
String key = clazz.getName();
if (CACHE.containsKey(key))
{
return readEventually(key);
}
AtomicReference<T> ref = new AtomicReference<T>(null);
if (CACHE.putIfAbsent(key, ref) == null)
{
try
{
T instance = clazz.newInstance();
instance.init();
ref.set(instance); // ----- (1) -----
return instance;
}
catch (Exception e)
{
throw new SingletonException(e);
}
}
return readEventually(key);
}
#SuppressWarnings("unchecked")
private static <T extends Singleton> T readEventually(String key)
{
T instance = null;
AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
do
{
instance = ref.get(); // ----- (2) -----
}
while (instance == null);
return instance;
}
}
I'm not entirely sure about lines (1) and (2). I know that the referenced object is declared as a volatile field in AtomicReference, and hence changes made at line (1) should become immediately visible at line (2) - but I still have some doubts...
Other than that, I think the use of ConcurrentHashMap addresses the atomicity of putting a new key into the cache.
Do you guys see any concerns with this approach? Thanks!
P.S.: I know about the static holder class idiom - I don't use it because of ExceptionInInitializerError (which any exception thrown during singleton instantiation is wrapped into) and the subsequent NoClassDefFoundError, which are not something I want to catch. Instead, I'd like to leverage the advantage of a dedicated checked exception by catching it and handling it gracefully rather than parsing the stack trace of the EIIE or NCDFE.
You have gone to a lot of work to avoid synchronization, and I assume the reason for doing this is for performance concerns. Have you tested to see if this actually improves performance vs a synchronized solution?
The reason I ask is that the concurrent classes tend to be slower than the non-concurrent ones, not to mention the additional level of indirection with the atomic reference. Depending on your thread contention, a naive synchronized solution may actually be faster (and easier to verify for correctness).
Additionally, I think you can end up with an infinite loop when a SingletonException is thrown during a call to instance.init(). The reason is that a concurrent thread waiting in readEventually will never find its instance (since an exception was thrown while another thread was initializing it). Maybe this is the correct behaviour for your case, or maybe you want to set some special sentinel value for the instance to trigger an exception in the waiting thread.
Having all of these concurrent/atomic things would cause more lock issues than just putting
synchronized(clazz){}
blocks around the getter. Atomic references are for references that are UPDATED and where you don't want collisions. Here you have a single writer, so you do not care about that.
You could optimize it further by having a hashmap, and only if there is a miss, use the synchronized block:
public static <T> T get(Class<T> cls) throws Exception {
// No-lock fast path
T ref = cls.cast(cache.get(cls));
if (ref != null) {
return ref;
}
// Miss, so take the creation lock
synchronized (cls) { // prevents the same singleton being created twice
synchronized (cache) { // prevents table rebuild/transfer contention -- rare
// Double-check in case another thread created it while we waited
ref = cls.cast(cache.get(cls));
if (ref == null) {
ref = cls.newInstance();
cache.put(cls, ref);
}
return ref;
}
}
}
Consider using Guava's CacheBuilder. For example:
private static LoadingCache<Class<? extends Singleton>, Singleton> singletons = CacheBuilder.newBuilder()
.build(
new CacheLoader<Class<? extends Singleton>, Singleton>() {
public Singleton load(Class<? extends Singleton> key) throws SingletonException {
try {
Singleton singleton = key.newInstance();
singleton.init();
return singleton;
}
catch (SingletonException se) {
throw se;
}
catch (Exception e) {
throw new SingletonException(e);
}
}
});
public static <T extends Singleton> T getSingletonInstance(Class<T> clazz) throws SingletonException {
try {
return clazz.cast(singletons.get(clazz));
} catch (ExecutionException e) {
// the loader wraps every checked failure in SingletonException, so the cause is one
throw (SingletonException) e.getCause();
}
}
Note: this example is untested and uncompiled.
Guava's underlying Cache implementation will handle all caching and concurrency logic for you.
This looks like it would work, although I might consider some sort of sleep, even a nanosecond or so, when testing for the reference to be set. The spin-test loop is going to be extremely expensive.
Also, I would consider improving the code by passing the AtomicReference to readEventually() so you can avoid the containsKey() and then putIfAbsent() race condition. So the code would be:
AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
if (ref != null) {
return readEventually(ref);
}
AtomicReference<T> newRef = new AtomicReference<T>(null);
AtomicReference<T> oldRef = CACHE.putIfAbsent(key, newRef);
if (oldRef != null) {
return readEventually(oldRef);
}
...
The code is not generally thread safe because there is a gap between the CACHE.containsKey(key) check and the CACHE.putIfAbsent(key, ref) call. It is possible for two threads to call simultaneously into the method (especially on multi-core/processor systems) and both perform the containsKey() check, then both attempt to do the put and creation operations.
I would protect the execution of the getSingletonInstance() method using either a lock or by synchronizing on a monitor of some sort.
google "Memoizer". basically, instead of AtomicReference, use Future.
I'm running a process in a separate thread with a timeout, using an ExecutorService and a Future (example code here) (the thread "spawning" takes place in an AOP Aspect).
Now, the main thread is a Resteasy request. Resteasy uses one or more ThreadLocal variables to store some context information that I need to retrieve at some point in my REST method call. The problem is that, since the work is now running in a new thread, the ThreadLocal variables are lost.
What would be the best way to "propagate" whatever ThreadLocal variable is used by Resteasy to the new thread? It seems that Resteasy uses more than one ThreadLocal variable to keep track of context information and I would like to "blindly" transfer all the information to the new thread.
I have looked at subclassing ThreadPoolExecutor and using the beforeExecute method to pass the current thread to the pool, but I couldn't find a way to pass the ThreadLocal variables to the pool.
Any suggestion?
Thanks
The set of ThreadLocal instances associated with a thread is held in private members of each Thread. Your only chance to enumerate these is to do some reflection on the Thread; that way, you can override the access restrictions on the thread's fields.
Once you can get the set of ThreadLocals, you could copy them into the background threads using the beforeExecute() and afterExecute() hooks of ThreadPoolExecutor, or by creating a Runnable wrapper for your tasks that intercepts the run() call to set and unset the necessary ThreadLocal instances. Actually, the latter technique might work better, since it would give you a convenient place to store the ThreadLocal values at the time the task is queued.
Update: Here's a more concrete illustration of the second approach. Contrary to my original description, all that is stored in the wrapper is the calling thread, which is interrogated when the task is executed.
static Runnable wrap(Runnable task)
{
Thread caller = Thread.currentThread();
return () -> {
Iterable<ThreadLocal<?>> vars = copy(caller);
try {
task.run();
}
finally {
for (ThreadLocal<?> var : vars)
var.remove();
}
};
}
/**
* For each {@code ThreadLocal} in the specified thread, copy the thread's
* value to the current thread.
*
* @param caller the calling thread
* @return all of the {@code ThreadLocal} instances that are set on current thread
*/
private static Collection<ThreadLocal<?>> copy(Thread caller)
{
/* Use a nasty bunch of reflection to do this. */
throw new UnsupportedOperationException();
}
Based on @erickson's answer I wrote this code. It works for inheritable ThreadLocals: it builds the list of inheritableThreadLocals using the same method that is used in the Thread constructor. Of course, I use reflection to do this. I also override the executor class.
public class MyThreadPoolExecutor extends ThreadPoolExecutor
{
@Override
public void execute(Runnable command)
{
super.execute(new Wrapped(command, Thread.currentThread()));
}
}
Wrapper:
private class Wrapped implements Runnable
{
private final Runnable task;
private final Thread caller;
public Wrapped(Runnable task, Thread caller)
{
this.task = task;
this.caller = caller;
}
public void run()
{
Iterable<ThreadLocal<?>> vars = null;
try
{
vars = copy(caller);
}
catch (Exception e)
{
throw new RuntimeException("error when coping Threads", e);
}
try {
task.run();
}
finally {
for (ThreadLocal<?> var : vars)
var.remove();
}
}
}
copy method:
public static Iterable<ThreadLocal<?>> copy(Thread caller) throws Exception
{
List<ThreadLocal<?>> threadLocals = new ArrayList<>();
Field field = Thread.class.getDeclaredField("inheritableThreadLocals");
field.setAccessible(true);
Object map = field.get(caller);
Field table = Class.forName("java.lang.ThreadLocal$ThreadLocalMap").getDeclaredField("table");
table.setAccessible(true);
Method method = ThreadLocal.class
.getDeclaredMethod("createInheritedMap", Class.forName("java.lang.ThreadLocal$ThreadLocalMap"));
method.setAccessible(true);
Object o = method.invoke(null, map);
Field field2 = Thread.class.getDeclaredField("inheritableThreadLocals");
field2.setAccessible(true);
field2.set(Thread.currentThread(), o);
Object tbl = table.get(o);
int length = Array.getLength(tbl);
for (int i = 0; i < length; i++)
{
Object entry = Array.get(tbl, i);
Object value = null;
if (entry != null)
{
Method referentField = Class.forName("java.lang.ThreadLocal$ThreadLocalMap$Entry").getMethod(
"get");
referentField.setAccessible(true);
value = referentField.invoke(entry);
threadLocals.add((ThreadLocal<?>) value);
}
}
return threadLocals;
}
As I understand your problem, you can have a look at InheritableThreadLocal, which is meant to pass ThreadLocal variables from a parent thread's context to a child thread's context.
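A tiny illustration of that behaviour (hypothetical class; the value is copied into the child when the child Thread is constructed, so it must be set before the child is created):

public class InheritableDemo {

    private static final InheritableThreadLocal<String> CONTEXT = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        CONTEXT.set("request-42");                       // set in the parent thread
        Thread child = new Thread(() ->
                System.out.println("child sees: " + CONTEXT.get()));  // prints "request-42"
        child.start();
        child.join();
    }
}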
I don't like the reflection approach. An alternative solution would be to implement an executor wrapper and pass the object directly as a ThreadLocal context to all child threads, propagating the parent context.
public class PropagatedObject {
private ThreadLocal<ConcurrentHashMap<String, Object>> data = new ThreadLocal<>();
//put, set, merge methods, etc
}
==>
public class ObjectAwareExecutor extends AbstractExecutorService {
private final ExecutorService delegate;
private final PropagatedObject objectAbsorber;
public ObjectAwareExecutor(ExecutorService delegate, PropagatedObject objectAbsorber){
this.delegate = delegate;
this.objectAbsorber = objectAbsorber;
}
@Override
public void execute(final Runnable command) {
final ConcurrentHashMap<String, Object> parentContext = objectAbsorber.get();
delegate.execute(() -> {
try{
objectAbsorber.set(parentContext);
command.run();
}finally {
parentContext.putAll(objectAbsorber.get());
objectAbsorber.clean();
}
});
objectAbsorber.merge(parentContext);
}
// (the remaining ExecutorService methods would simply delegate to 'delegate')
}
Here is an example that passes the current LocaleContext of the parent thread to a child thread spawned by CompletableFuture (which by default uses the ForkJoinPool).
Just define everything you want to do in the child thread inside a Runnable block. When the CompletableFuture executes the Runnable, the child thread is in control, and voilà: the parent's ThreadLocal state is set in the child's ThreadLocal.
The limitation here is that not the entire ThreadLocal is copied over; only the LocaleContext is. Since a ThreadLocal is private to the Thread it belongs to, using reflection to get and set it in the child is wacky stuff that might lead to memory leaks or a performance hit.
So if you know which values you are interested in from the ThreadLocal, this solution is much cleaner.
public void parentClassMethod(Request request) {
LocaleContext currentLocale = LocaleContextHolder.getLocaleContext();
executeInChildThread(() -> {
LocaleContextHolder.setLocaleContext(currentLocale);
//Do whatever else you wanna do
});
//Continue stuff you want to do with parent thread
}
private void executeInChildThread(Runnable runnable) {
try {
CompletableFuture.runAsync(runnable)
.get();
} catch (Exception e) {
LOGGER.error("something is wrong");
}
}
If you look at ThreadLocal code you can see:
public T get() {
Thread t = Thread.currentThread();
...
}
The current thread cannot be overwritten.
Possible solutions:
Look at the Java 7 fork/join mechanism (but I think it's a bad way to go).
Look at the endorsed mechanism for overriding the ThreadLocal class in your JVM.
Try to rewrite RESTEasy (you can use the refactoring tools in your IDE to replace all ThreadLocal usage; it looks easy enough).