I am trying to find out whether it is possible to create a Java dynamic proxy that automatically closes AutoCloseable resources, so that callers don't have to remember to wrap such resources in a try-with-resources block.
For example, I have a JedisPool whose getResource method can be used like this:
try (Jedis jedis = jedisPool.getResource()) {
// use jedis client
}
So far I have done something like this:
class JedisProxy implements InvocationHandler {
private final JedisPool pool;
public JedisProxy(JedisPool pool) {
this.pool = pool;
}
public static JedisCommands newInstance(Pool<Jedis> pool) {
return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
JedisCommands.class.getClassLoader(),
new Class[] { JedisCommands.class },
new JedisProxy(pool));
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
try (Jedis client = pool.getResource()) {
return method.invoke(client, args);
} catch (InvocationTargetException e) {
throw e.getTargetException();
} catch (Exception e) {
throw e;
}
}
}
Now, each time I call a method on Jedis (JedisCommands), the call goes through the proxy, which gets a new client from the pool, executes the method, and returns the resource to the pool.
It works fine, but when I want to execute multiple methods on the client, a resource is taken from the pool and returned again for each method call (which might be time consuming). Do you have any idea how to improve that?
You would end up with your own "transaction manager" in which you normally would return the object to the pool immediately, but if you had started a "transaction" the object wouldn't be returned to the pool until you've "committed" the "transaction".
Suddenly your problem with using try-with-resources turns into an actual problem due to the use of a hand-crafted custom mechanism.
Pros of using try-with-resources:
Language built-in feature
Allows you to attach a catch block, and the resources are still released
Simple, consistent syntax, so that even if a developer weren't familiar with it, he would see all the Jedis code surrounded by it and (hopefully) think "So this must be the correct way to use this"
Cons:
You need to remember to use it
Pros of your suggestion (you can tell me if I forgot anything):
Automatic closing even if the developer doesn't close the resource, preventing a resource leak
Cons:
Extra code always means extra places to find bugs in
If you don't create a "transaction" mechanism, you may suffer from a performance hit (I'm not familiar with [jr]edis or your project, so I can't say whether it's really an issue or not)
If you do create it, you'll have even more extra code which is prone to bugs
Syntax is no longer simple, and will be confusing to anyone coming to the project
Exception handling becomes more complicated
You'll be making all your proxy-calls through reflection (a minor issue, but hey, it's my list ;)
Possibly more, depending on what the final implementation will be
If you think I'm not making valid points, please tell me. Otherwise my assertion will remain "you have a 'solution' looking for a problem".
I don't think this is going in the right direction. After all, developers should get used to handling resources correctly, and IDEs/compilers are able to issue warnings when AutoCloseable resources aren't handled using try(…){}…
However, the task of creating a proxy that decorates all invocations, together with a way to decorate a batch of multiple actions as a whole, is of a general nature and therefore has a general solution:
class JedisProxy implements InvocationHandler {
private final JedisPool pool;
public JedisProxy(JedisPool pool) {
this.pool = pool;
}
public static JedisCommands newInstance(Pool<Jedis> pool) {
return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
JedisCommands.class.getClassLoader(),
new Class[] { JedisCommands.class },
new JedisProxy(pool));
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
try (Jedis client = pool.getResource()) {
return method.invoke(client, args);
} catch (InvocationTargetException e) {
throw e.getTargetException();
}
}
public static void executeBatch(JedisCommands c, Consumer<JedisCommands> action) {
InvocationHandler ih = Proxy.getInvocationHandler(c);
if(!(ih instanceof JedisProxy))
throw new IllegalArgumentException();
try(JedisCommands actual=((JedisProxy)ih).pool.getResource()) {
action.accept(actual);
}
}
public static <R> R executeBatch(JedisCommands c, Function<JedisCommands,R> action){
InvocationHandler ih = Proxy.getInvocationHandler(c);
if(!(ih instanceof JedisProxy))
throw new IllegalArgumentException();
try(JedisCommands actual=((JedisProxy)ih).pool.getResource()) {
return action.apply(actual);
}
}
}
Note that the type conversion of a Pool<Jedis> to a JedisPool looked suspicious to me but I didn’t change anything in that code as I don’t have these classes to verify it.
Now you can use it like
JedisCommands c=JedisProxy.newInstance(pool);
c.someAction(); // acquire-someAction-close
JedisProxy.executeBatch(c, jedi-> {
jedi.someAction();
jedi.anotherAction();
}); // acquire-someAction-anotherAction-close
ResultType foo = JedisProxy.executeBatch(c, jedi-> {
jedi.someAction();
return jedi.someActionReturningValue(…);
}); // acquire-someAction-someActionReturningValue-close-return the value
The batch execution requires the instance to be a proxy, otherwise an exception is thrown as it’s clear that this method cannot guarantee a particular behavior for an unknown instance with an unknown life cycle.
Also, developers now have to be aware of the proxy and the batch execution feature, just like they have to be aware of resources and the try(…){} statement when not using a proxy. On the other hand, if they aren't, they lose performance when invoking multiple methods on a proxy without the batch method, whereas they leak resources when invoking multiple methods without try(…){} on an actual, non-proxy resource…
Related
I have a continuously running service that I want stopped cleanly if it hits an exception. This fits well with the try-with-resources paradigm, but it's not really a resource that needs to be "closed".
To clarify based on comments, my code looks something like this
class Service {
Resource resource;
State state;
boolean keepRunning = false;
void start() {
keepRunning = true;
resource = new Resource();
new Thread(() -> {
while(keepRunning) {
Data data = resource.pull();
state.update(data);
// ... do stuff with state
}
}).start();
}
void stop() {
keepRunning = false;
}
}
class Main {
void run() {
Service service = new Service();
service.start();
}
}
Is there a pattern that lets me use the syntactic sugar that try-with-resources provides while still not abusing try-with-resources with things that are not resources?
Of course, it is totally okay to use try-with-resources with anything you want. The only requirement is that the class implements the AutoCloseable interface (see its documentation). With that interface you need to implement a close method, which is then called by the try-with-resources construct.
That is why this interface is there, you are allowed to implement it for your own classes.
For example, if you have a service that needs to be properly shut down, also in the error case, you may implement AutoCloseable and give the close method a proper implementation.
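For instance, a rough sketch of that idea applied to the asker's Service (details assumed; the field and method names come from the question):
class Service implements AutoCloseable {
    private volatile boolean keepRunning;

    void start() {
        keepRunning = true;
        new Thread(() -> {
            while (keepRunning) {
                // pull data, update state, ...
            }
        }).start();
    }

    void stop() {
        keepRunning = false;
    }

    @Override
    public void close() {   // called by try-with-resources, also when an exception is thrown
        stop();
    }
}

// usage: the service is stopped even if the block exits with an exception
try (Service service = new Service()) {
    service.start();
    // ... do work
}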
However, it's meaningless and will confuse readers of your program if you make something AutoCloseable when there is no intuition at all about what close means for that object. In such cases you should probably look for other constructs.
It's a bit like implementing Iterable such that you can use the enhanced for loop like:
for (Item item : myObject) {
...
}
You may do it, if it makes sense. Otherwise it will confuse people.
So I have been going through "Effective Java, 2nd Ed."
In Item 7 he talks about not using finalizers because they can cause a lot of problems.
But instead of using finalizers we can "provide an explicit termination method", and an example of those is the close method. I did not understand what these termination methods are, or what the differences are between them and finalizers.
I came to the conclusion that terminating an object is like nulling it, so that the resources are released, but I don't think I understand the difference that well, so I'd appreciate any help.
Thanks!
But instead of using finalizers we can "provide an explicit termination method", and an example of those is the close method.
The author refers to a close() method that provides a way to clean up an object by freeing the resources it uses.
For example, when you create and manipulate an InputStream or an OutputStream, you don't want to rely on Java finalizers (which may exist for some subclasses of these interfaces; for example, the FileInputStream class defines a finalize() method) to release the resources associated with the stream. Instead, you want to use the method the API provides for that purpose, void close(), as it is more reliable than a finalizer.
java.sql.Statement works in the same way: it provides a close() method to release the JDBC resources associated with the statement instance.
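As a small illustration (the file name is made up), calling the explicit termination method, here indirectly via try-with-resources, releases the resource deterministically instead of waiting for a finalizer that may run late or never:
import java.io.FileInputStream;
import java.io.IOException;

public class ExplicitTermination {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream("data.bin")) {   // hypothetical file
            int firstByte = in.read();
            System.out.println("first byte: " + firstByte);
        }   // in.close() is called here, even if read() throws
    }
}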
I came to the conclusion that terminating an object is like nulling it, so that the resources are released.
Assigning a reference to null will not necessarily free all the resources that should be freed. Besides, if the object or a field of the object is still referenced by another live object, the object will not be eligible for garbage collection.
Finally, being garbage collected may also take some time. Why wait if we no longer need the object?
The main difference between an explicit termination method and finalize() is that the latter is not guaranteed to be called. It is called eventually during garbage collection, which might, to be honest, never occur. Let's consider the following three classes.
class Foo {
@Override
public void finalize() {
System.out.println("Finalize Foo");
}
}
class Bar implements Closeable {
@Override
public void close() {
System.out.println("Close Bar");
}
}
class Baz implements AutoCloseable {
@Override
public void close() {
System.out.println("Close Baz");
}
}
The first one overrides the finalize() method inherited from Object. Bar and Baz implement the two interfaces that are handled by ARM (Automatic Resource Management).
Foo foo = new Foo();
new Foo();
try (Bar bar = new Bar(); Baz baz = new Baz()) { // this is ARM
System.out.println("termination example");
}
Bar bar = null;
try {
bar = new Bar();
// ...
} finally {
if (bar != null) {
bar.close();
}
}
This example should return:
termination example
Close Baz
Close Bar
Close Bar
The finalize() method of Foo never gets called because Foo is not garbage collected. The JVM has resources available, so as a performance optimization it does not perform garbage collection. Furthermore, an object may never be garbage collected even though the application finishes: even the second created instance of Foo is not garbage collected, because there are plenty of resources for the JVM to thrive on.
The second example, with ARM, is a lot better: it creates both resources (one implementing java.io.Closeable and one implementing java.lang.AutoCloseable; it's worth mentioning that Closeable extends AutoCloseable, which is why it works with ARM). ARM guarantees that both of these resources are closed, that one is closed even when the other throws, and so on. The last snippet, with try/finally, does by hand something similar to what ARM does, but with a lot of unnecessary boilerplate code that ARM saves you.
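To make that closing guarantee concrete, here is a small (assumed) variation of the ARM example above: both close() methods still run before the catch block, even though the body throws:
try (Bar bar = new Bar(); Baz baz = new Baz()) {
    throw new IllegalStateException("boom");   // the body fails
} catch (IllegalStateException e) {
    // by the time we get here, "Close Baz" and then "Close Bar" have already been
    // printed: ARM closes the resources in reverse order before the catch runs
    System.out.println("caught after cleanup");
}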
Something that will make you a better developer:
But it's still not perfect. There is still a burden on the programmer to remember to close the object. The absence of destructors in Java forces the developer either to remember to close the resource, or to remember to use ARM, and so on. There is a good design pattern (well explained by Venkat Subramaniam): the Loan Pattern. A simple example of the loan pattern:
class Loan {
private Loan() {
}
public Loan doSomething(int m) {
System.out.println("Did something " + m);
if (new Random().nextBoolean()) {
throw new RuntimeException("Didn't see that coming");
}
return this;
}
public Loan doOtherThing(int n) {
System.out.println("Did other thing " + n);
return this;
}
private void close() {
System.out.println("Closed");
}
public static void loan(Consumer<Loan> toPerform) {
Loan loan = new Loan();
try {
toPerform.accept(loan);
} catch (Exception e) {
e.printStackTrace();
} finally {
loan.close();
}
}
}
You can use it like that:
class Main {
public static void main(String[] args) {
Loan.loan(loan -> loan.doOtherThing(2)
.doSomething(3)
.doOtherThing(3));
}
}
It relieves the developer of the burden of closing the resource, because it has already been handled for them. If one of these methods throws, it's handled and the developer does not have to bother. The close method and constructor are private so as not to tempt the developer to call them directly.
Some 3rd party library swallowed an Exception:
String getAnswer(){
try{
// do stuff, modify instance state, maybe throw some exceptions
// ...
return computeAnswer();
}catch (SomeException e){
return null;
}
}
As much as I want to change it into:
String getAnswer() throws SomeException{
// do stuff, modify instance state, maybe throw some exceptions
// ...
return computeAnswer();
}
I can't, because the library is already packaged into a jar. So, is there a way to bring the exception back?
I don't need to rethrow, a stacktrace with exception and message would work too.
I don't think reflection would help here, Unsafe perhaps?
Yes I know I can use a debugger to find out what's happening, but that wouldn't be very useful if I need the exception at runtime for logging and stuff like that
You can do it without reflection or AOP. The main idea is to throw another (unchecked) exception in the constructor of SomeException. There are some limitations (see at the end of this answer) but I hope it fits your needs.
You need to replace the SomeException with a new version (just create a SomeException.java file in the original package but in your src directory) with something like :
package com.thirdpartylibrary; // must be the same package as the library's SomeException
public class SomeException extends Exception {
public static class SomeExceptionWrapperException extends RuntimeException {
public SomeExceptionWrapperException(final SomeException ex) {
super(ex.getMessage(), ex);
}
}
public SomeException(final String message) {
super(message);
throw new SomeExceptionWrapperException(this); //<=== the key is here
}
}
The SomeExceptionWrapperException has to be unchecked (it must extend RuntimeException or Error). It will be our wrapper to carry the SomeException across the ugly 3rd party catch(...).
Then you can catch the SomeExceptionWrapperException in your code (and possibly rethrow the original SomeException):
//original, unmodifiable 3rdParty code, here as a example
public String getAnswer() {
try {
//some code
throw new SomeException("a message");
} catch (final SomeException e) {
return null;
}
}
//a wrapper around getAnswer to unwrap the `SomeException`
public String getAnswerWrapped() throws SomeException {
try {
return getAnswer();
} catch (final SomeExceptionWrapperException e) {
throw (SomeException) e.getCause();
}
}
@Test(expected = SomeException.class)
public void testThrow() throws SomeException {
final String t = getAnswerWrapped();
}
The test will be green, as the original SomeException will be thrown.
Limitations:
This solution will not work if either:
SomeException is in java.lang, as you cannot replace java.lang classes (see Replacing java class?)
the 3rd party method has a catch (Throwable e) (which would be horrible and should motivate you to ignore the whole 3rd party library)
To solve this within your constraints, I would use aspects (something like AspectJ), attach advice to the creation of your exception, and log it (or call some arbitrary method) there.
http://www.ibm.com/developerworks/library/j-aspectj/
If all you're looking for is to log the stacktrace + exception message, you could do that at the point you're throwing your exception.
See Get current stack trace in Java to get the stack trace. You can simply use Throwable.getMessage() to get the message and write it out.
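A small sketch of that idea (reusing the hypothetical names that appear in the ThreadLocal example below): the stack trace is captured when the exception is constructed, so you can log it right before throwing:
public void doSomethingInYourCode() throws SomeException {
    SomeException e = new SomeException("Message Text");
    // log it yourself before throwing, since the 3rd party caller will swallow it
    System.err.println("About to throw: " + e.getMessage());
    e.printStackTrace();   // the stack trace was filled in when the exception was created
    throw e;
}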
But if you need the actual Exception within your code, you could try and add the exception into a ThreadLocal.
To do this, you would need a class like this that can store the exception:
package threadLocalExample;
public class ExceptionKeeper
{
private static ThreadLocal<Exception> threadLocalKeeper = new ThreadLocal<Exception>();
public static Exception getException()
{
return threadLocalKeeper.get();
}
public static void setException(Exception e)
{
threadLocalKeeper.set(e);
}
public static void clearException()
{
threadLocalKeeper.set(null);
}
}
... then in your code which throws the Exception, the code that the 3rd party library calls, you can do something like this to record the exception before you throw it:
package threadLocalExample;
public class ExceptionThrower
{
public ExceptionThrower()
{
super();
}
public void doSomethingInYourCode() throws SomeException
{
boolean someBadThing = true;
if (someBadThing)
{
// this is bad, need to throw an exception!
SomeException e = new SomeException("Message Text");
// but first, store it in a ThreadLocal because that 3rd party
// library I use eats it
ExceptionKeeper.setException(e);
// Throw the exception anyway - hopefully the library will be fixed
throw e;
}
}
}
... then in your overall code, the one that calls the third party library, it can setup and use the ThreadLocal class like this:
package threadLocalExample;
import thirdpartylibrary.ExceptionEater;
public class MainPartOfTheProgram
{
public static void main(String[] args)
{
// call the 3rd party library function that eats exceptions
// but first, prepare the exception keeper - clear out any data it may have
// (may not need to, but good measure)
ExceptionKeeper.clearException();
try
{
// now call the exception eater. It will eat the exception, but the ExceptionKeeper
// will have it
ExceptionEater exEater = new ExceptionEater();
exEater.callSomeThirdPartyLibraryFunction();
// check the ExceptionKeeper for the exception
Exception ex = ExceptionKeeper.getException();
if (ex != null)
{
System.out.println("Aha! The library ate my exception, but I found it");
}
}
finally
{
// Wipe out any data in the ExceptionKeeper. ThreadLocals are real good
// ways of creating memory leaks, and you would want to start from scratch
// next time anyway.
ExceptionKeeper.clearException();
}
}
}
Beware of ThreadLocals. They have their use, but they are a great way of creating memory leaks. So if your application has a lot of threads that would execute this code, be sure to look at the memory footprint and make sure the ThreadLocals aren't taking up too much memory. Being sure to clear out the ThreadLocal's data when you know you no longer need it should prevent that.
A JVMTI agent can help; see the related question.
I've made an agent that calls Throwable.printStackTrace() for every thrown exception, but you may easily change the callback to invoke any other Java method.
A rather dirty trick that could do the job with less effort than AOP or de-/recompiling the JAR:
If you can copy the source code, you can create a patched version of the class in question with your version of the getAnswer method. Then put it on your classpath before the third party library that contains the unwanted version of getAnswer.
Problems could arise if SomeException is not a RuntimeException and other third party code calls getAnswer. In that situation I am not sure what the resulting behavior would be. But you could circumvent this by wrapping SomeException in a custom RuntimeException.
Could you not just use a reference variable to call that method and, if the result is null, display a message or throw an exception, whatever you want?
If you're using Maven, you could exclude packages of the library; see Dependency Exclusions.
I hope this helps.
If you have the source of the throwing class, you can add it "in the original package but in your src directory" using the technique @Benoît has pointed out. Then just change
return null;
to
return e;
or
e.printStackTrace();
etc.
This would be quicker than making a new Exception.
After looking at this question, I think I want to wrap ThreadLocal to add a reset behavior.
I want to have something similar to a ThreadLocal, with a method I can call from any thread to set all the values back to the same value. So far I have this:
public class ThreadLocalFlag {
private ThreadLocal<Boolean> flag;
private List<Boolean> allValues = new ArrayList<Boolean>();
public ThreadLocalFlag() {
flag = new ThreadLocal<Boolean>() {
@Override protected Boolean initialValue() {
Boolean value = false;
allValues.add(value);
return value;
}
};
}
public boolean get() {
return flag.get();
}
public void set(Boolean value) {
flag.set(value);
}
public void setAll(Boolean value) {
for (Boolean tlValue : allValues) {
tlValue = value;
}
}
}
I'm worried that the autoboxing of the primitive may mean the copies I've stored in the list will not reference the same variables referenced by the ThreadLocal if I try to set them. I've not yet tested this code, and with something tricky like this I'm looking for some expert advice before I continue down this path.
Someone will ask "Why are you doing this?". I'm working in a framework where there are other threads that callback into my code, and I don't have references to them. Periodically I want to update the value in a ThreadLocal variable they use, so performing that update requires that the thread which uses the variable do the updating. I just need a way to notify all these threads that their ThreadLocal variable is stale.
I'm flattered that there is new criticism recently regarding this three year old question, though I feel the tone of it is a little less than professional. The solution I provided has worked without incident in production during that time. However, there are bound to be better ways to achieve the goal that prompted this question, and I invite the critics to supply an answer that is clearly better. To that end, I will try to be more clear about the problem I was trying to solve.
As I mentioned earlier, I was using a framework where multiple threads are using my code, outside my control. That framework was QuickFIX/J, and I was implementing the Application interface. That interface defines hooks for handling FIX messages, and in my usage the framework was configured to be multithreaded, so that each FIX connection to the application could be handled simultaneously.
However, the QuickFIX/J framework only uses a single instance of my implementation of that interface for all the threads. I'm not in control of how the threads get started, and each is servicing a different connection with different configuration details and other state. It was natural to let some of that state, which is frequently accessed but seldom updated, live in various ThreadLocals that load their initial value once the framework has started the thread.
Elsewhere in the organization, we had library code to allow us to register for callbacks for notification of configuration details that change at runtime. I wanted to register for that callback, and when I received it, I wanted to let all the threads know that it's time to reload the values of those ThreadLocals, as they may have changed. That callback comes from a thread I don't control, just like the QuickFIX/J threads.
My solution below uses ThreadLocalFlag (a wrapped ThreadLocal<AtomicBoolean>) solely to signal the other threads that it may be time to update their values. The callback calls setAll(true), and the QuickFIX/J threads call set(false) when they begin their update. I have downplayed the concurrency issues of the ArrayList because the only time the list is added to is during startup, and my use case was smaller than the default size of the list.
I imagine the same task could be done with other interthread communication techniques, but for what it's doing, this seemed more practical. I welcome other solutions.
Interacting with objects in a ThreadLocal across threads
I'll say up front that this is a bad idea. ThreadLocal is a special class which offers speed and thread-safety benefits if used correctly. Attempting to communicate across threads with a ThreadLocal defeats the purpose of using the class in the first place.
If you need access to an object across multiple threads there are tools designed for this purpose, notably the thread-safe collections in java.util.concurrent such as ConcurrentHashMap, which you can use to replicate a ThreadLocal by using Thread objects as keys, like so:
ConcurrentHashMap<Thread, AtomicBoolean> map = new ConcurrentHashMap<>();
// pass map to threads, let them do work, using Thread.currentThread() as the key
// Update all known thread's flags
for(AtomicBoolean b : map.values()) {
b.set(true);
}
Clearer, more concise, and avoids using ThreadLocal in a way it's simply not designed for.
Notifying threads that their data is stale
I just need a way to notify all these threads that their ThreadLocal variable is stale.
If your goal is simply to notify other threads that something has changed you don't need a ThreadLocal at all. Simply use a single AtomicBoolean and share it with all your tasks, just like you would your ThreadLocal<AtomicBoolean>. As the name implies updates to an AtomicBoolean are atomic and visible cross-threads. Even better would be to use a real synchronization aid such as CyclicBarrier or Phaser, but for simple use cases there's no harm in just using an AtomicBoolean.
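A minimal sketch of that simpler approach (the class and variable names are illustrative, not from the question):
import java.util.concurrent.atomic.AtomicBoolean;

public class SharedFlagExample {
    private static final AtomicBoolean somethingChanged = new AtomicBoolean(false);

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            while (!somethingChanged.get()) {      // atomic, and visible to every thread
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            }
            System.out.println(Thread.currentThread().getName() + " saw the change");
        };
        new Thread(worker, "worker-1").start();
        new Thread(worker, "worker-2").start();

        Thread.sleep(100);
        somethingChanged.set(true);                // one call notifies all workers
    }
}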
Creating an updatable "ThreadLocal"
All of that said, if you really want to implement a globally update-able ThreadLocal your implementation is broken. The fact that you haven't run into issues with it is only a coincidence and future refactoring may well introduce hard-to-diagnose bugs or crashes. That it "has worked without incident" only means your tests are incomplete.
First and foremost, an ArrayList is not thread-safe. You simply cannot use it (without external synchronization) when multiple threads may interact with it, even if they will do so at different times. That you aren't seeing any issues now is just a coincidence.
Storing the objects as a List prevents us from removing stale values. If you call ThreadLocal.set() it will append to your list without removing the previous value, which introduces both a memory leak and the potential for unexpected side-effects if you anticipated these objects becoming unreachable once the thread terminated, as is usually the case with ThreadLocal instances. Your use case avoids this issue by coincidence, but there's still no need to use a List.
Here is an implementation of an IterableThreadLocal which safely stores and updates all existing instances of the ThreadLocal's values, and works for any type you choose to use:
import java.util.Iterator;
import java.util.concurrent.ConcurrentMap;
import com.google.common.collect.MapMaker;
/**
* Class extends ThreadLocal to enable user to iterate over all objects
* held by the ThreadLocal instance. Note that this is inherently not
* thread-safe, and violates both the contract of ThreadLocal and much
* of the benefit of using a ThreadLocal object. This class incurs all
* the overhead of a ConcurrentHashMap, perhaps you would prefer to
* simply use a ConcurrentHashMap directly instead?
*
* If you do really want to use this class, be wary of its iterator.
* While it is as threadsafe as ConcurrentHashMap's iterator, it cannot
* guarantee that all existing objects in the ThreadLocal are available
* to the iterator, and it cannot prevent you from doing dangerous
* things with the returned values. If the returned values are not
* properly thread-safe, you will introduce issues.
*/
public class IterableThreadLocal<T> extends ThreadLocal<T>
implements Iterable<T> {
private final ConcurrentMap<Thread,T> map;
public IterableThreadLocal() {
map = new MapMaker().weakKeys().makeMap();
}
@Override
public T get() {
T val = super.get();
map.putIfAbsent(Thread.currentThread(), val);
return val;
}
@Override
public void set(T value) {
map.put(Thread.currentThread(), value);
super.set(value);
}
/**
* Note that this method fundamentally violates the contract of
* ThreadLocal, and exposes all objects to the calling thread.
* Use with extreme caution, and preferably only when you know
* no other threads will be modifying / using their ThreadLocal
* references anymore.
*/
@Override
public Iterator<T> iterator() {
return map.values().iterator();
}
}
As you can hopefully see this is little more than a wrapper around a ConcurrentHashMap, and incurs all the same overhead as using one directly, but hidden in the implementation of a ThreadLocal, which users generally expect to be fast and thread-safe. I implemented it for demonstration purposes, but I really cannot recommend using it in any setting.
It won't be a good idea to do that since the whole point of thread local storage is, well, thread locality of the value it contains - i.e. that you can be sure that no other thread than your own thread can touch the value. If other threads could touch your thread local value, it won't be "thread local" anymore and that will break the memory model contract of thread local storage.
Either you have to use something other than ThreadLocal (e.g. a ConcurrentHashMap) to store the value, or you need to find a way to schedule an update on the threads in question.
You could use google guava's map maker to create a static final ConcurrentWeakReferenceIdentityHashmap with the following type: Map<Thread, Map<String, Object>> where the second map is a ConcurrentHashMap. That way you'd be pretty close to ThreadLocal except that you can iterate through the map.
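A rough sketch of that suggestion, assuming Guava's MapMaker is available (class and method names are illustrative):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import com.google.common.collect.MapMaker;

public class PerThreadState {
    // weak, identity-based keys let entries disappear once a thread dies
    private static final ConcurrentMap<Thread, Map<String, Object>> STATE =
            new MapMaker().weakKeys().makeMap();

    // roughly what ThreadLocal.get() would give you for the current thread
    public static Map<String, Object> current() {
        return STATE.computeIfAbsent(Thread.currentThread(), t -> new ConcurrentHashMap<>());
    }

    // unlike ThreadLocal, any thread can walk over every thread's state
    public static Iterable<Map<String, Object>> all() {
        return STATE.values();
    }
}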
I'm disappointed in the quality of the answers received for this question; I have found my own solution.
I wrote my test case today, and found the only issue with the code in my question is the Boolean. Boolean is not mutable, so my list of references wasn't doing me any good. I had a look at this question, and changed my code to use AtomicBoolean, and now everything works as expected.
public class ThreadLocalFlag {
private ThreadLocal<AtomicBoolean> flag;
private List<AtomicBoolean> allValues = new ArrayList<AtomicBoolean>();
public ThreadLocalFlag() {
flag = new ThreadLocal<AtomicBoolean>() {
@Override protected AtomicBoolean initialValue() {
AtomicBoolean value = new AtomicBoolean();
allValues.add(value);
return value;
}
};
}
public boolean get() {
return flag.get().get();
}
public void set(boolean value) {
flag.get().set(value);
}
public void setAll(boolean value) {
for (AtomicBoolean tlValue : allValues) {
tlValue.set(value);
}
}
}
Test case:
public class ThreadLocalFlagTest {
private static ThreadLocalFlag flag = new ThreadLocalFlag();
private static boolean runThread = true;
@AfterClass
public static void tearDownOnce() throws Exception {
runThread = false;
flag = null;
}
/**
* @throws Exception if there is any issue with the test
*/
@Test
public void testSetAll() throws Exception {
startThread("ThreadLocalFlagTest-1", false);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
//ignore
}
startThread("ThreadLocalFlagTest-2", true);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
//ignore
}
startThread("ThreadLocalFlagTest-3", false);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
//ignore
}
startThread("ThreadLocalFlagTest-4", true);
try {
Thread.sleep(8000L); //watch the alternating values
} catch (InterruptedException e) {
//ignore
}
flag.setAll(true);
try {
Thread.sleep(8000L); //watch the true values
} catch (InterruptedException e) {
//ignore
}
flag.setAll(false);
try {
Thread.sleep(8000L); //watch the false values
} catch (InterruptedException e) {
//ignore
}
}
private void startThread(String name, boolean value) {
Thread t = new Thread(new RunnableCode(value));
t.setName(name);
t.start();
}
class RunnableCode implements Runnable {
private boolean initialValue;
RunnableCode(boolean value) {
initialValue = value;
}
@Override
public void run() {
flag.set(initialValue);
while (runThread) {
System.out.println(Thread.currentThread().getName() + ": " + flag.get());
try {
Thread.sleep(4000L);
} catch (InterruptedException e) {
//ignore
}
}
}
}
}
I have a common interface for a number of singleton implementations. The interface defines an initialization method, which can throw a checked exception.
I need a factory that will return cached singleton implementations on demand, and I wonder whether the following approach is thread-safe.
UPDATE1: Please don't suggest any 3rd party libraries, as that would require obtaining legal clearance due to possible licensing issues :-)
UPDATE2: this code is likely to be used in an EJB environment, so it's preferable not to spawn additional threads or use stuff like that.
interface Singleton
{
void init() throws SingletonException;
}
public class SingletonFactory
{
private static ConcurrentMap<String, AtomicReference<? extends Singleton>> CACHE =
new ConcurrentHashMap<String, AtomicReference<? extends Singleton>>();
public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
throws SingletonException
{
String key = clazz.getName();
if (CACHE.containsKey(key))
{
return readEventually(key);
}
AtomicReference<T> ref = new AtomicReference<T>(null);
if (CACHE.putIfAbsent(key, ref) == null)
{
try
{
T instance = clazz.newInstance();
instance.init();
ref.set(instance); // ----- (1) -----
return instance;
}
catch (Exception e)
{
throw new SingletonException(e);
}
}
return readEventually(key);
}
@SuppressWarnings("unchecked")
private static <T extends Singleton> T readEventually(String key)
{
T instance = null;
AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
do
{
instance = ref.get(); // ----- (2) -----
}
while (instance == null);
return instance;
}
}
I'm not entirely sure about lines (1) and (2). I know that the referenced object is declared as a volatile field in AtomicReference, and hence changes made at line (1) should become immediately visible at line (2) - but I still have some doubts...
Other than that, I think the use of ConcurrentHashMap addresses the atomicity of putting a new key into the cache.
Do you guys see any concerns with this approach? Thanks!
P.S.: I know about the static holder class idiom - I don't use it because of ExceptionInInitializerError (which any exception thrown during singleton instantiation gets wrapped into) and the subsequent NoClassDefFoundError, which are not something I want to catch. Instead, I'd like to leverage the advantage of a dedicated checked exception by catching it and handling it gracefully rather than parsing the stack trace of EIIE or NCDFE.
You have gone to a lot of work to avoid synchronization, and I assume the reason for doing this is performance. Have you tested to see whether this actually improves performance vs. a synchronized solution?
The reason I ask is that the Concurrent classes tend to be slower than the non-concurrent ones, not to mention the additional level of redirection with the atomic reference. Depending on your thread contention, a naive synchronized solution may actually be faster (and easier to verify for correctness).
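For reference, a minimal sketch (not from the original post) of such a naive synchronized factory; it assumes the same Singleton interface and SingletonException(Exception) constructor that appear in the question:
import java.util.HashMap;
import java.util.Map;

public class SynchronizedSingletonFactory {
    private static final Map<String, Singleton> CACHE = new HashMap<>();

    public static synchronized <T extends Singleton> T getSingletonInstance(Class<T> clazz)
            throws SingletonException {
        String key = clazz.getName();
        Singleton instance = CACHE.get(key);
        if (instance == null) {
            try {
                instance = clazz.newInstance();   // same reflective creation as in the question
                instance.init();                  // init() may throw SingletonException
            } catch (Exception e) {
                throw new SingletonException(e);  // SingletonException(Exception) assumed from the question
            }
            CACHE.put(key, instance);
        }
        return clazz.cast(instance);
    }
}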
Additionally, I think that you can possibly end up with an infinite loop when a SingletonException is thrown during a call to instance.init(). The reason being that a concurrent thread waiting in readEventually will never end up finding its instance (since an exception was thrown while another thread was initializing the instance). Maybe this is the correct behaviour for your case, or maybe you want to set some special value to the instance to trigger an exception to be thrown to the waiting thread.
Having all of these concurrent/atomic things would cause more lock issues than just putting
synchronized(clazz){}
blocks around the getter. Atomic references are for references that are UPDATED and where you don't want collisions. Here you have a single writer, so you do not care about that.
You could optimize it further by having a HashMap and, only when there is a miss, using the synchronized block:
private static final Map<Class<?>, Object> cache = new HashMap<>();

public static <T> T get(Class<T> cls) throws Exception {
    // No-lock try
    T ref = cls.cast(cache.get(cls));
    if (ref != null) {
        return ref;
    }
    // Miss, so use create lock
    synchronized (cls) {            // without this, a singleton could be created twice
        synchronized (cache) {      // prevent table rebuild/transfer contention -- RARE
            // Double-check the cache in case another thread created it while we waited
            ref = cls.cast(cache.get(cls));
            if (ref == null) {
                ref = cls.newInstance();
                cache.put(cls, ref);
            }
            return ref;
        }
    }
}
Consider using Guava's CacheBuilder. For example:
private static final LoadingCache<Class<? extends Singleton>, Singleton> singletons = CacheBuilder.newBuilder()
.build(
new CacheLoader<Class<? extends Singleton>, Singleton>() {
public Singleton load(Class<? extends Singleton> key) throws SingletonException {
try {
Singleton singleton = key.newInstance();
singleton.init();
return singleton;
}
catch (SingletonException se) {
throw se;
}
catch (Exception e) {
throw new SingletonException(e);
}
}
});
public static <T extends Singleton> T getSingletonInstance(Class<T> clazz) throws SingletonException {
    try {
        return clazz.cast(singletons.get(clazz));   // get() reports loader failures as ExecutionException
    } catch (ExecutionException e) {
        throw new SingletonException(e);
    }
}
Note: this example is untested and uncompiled.
Guava's underlying Cache implementation will handle all caching and concurrency logic for you.
This looks like it would work, although I might consider some sort of sleep, even a nanosecond or so, when testing for the reference to be set. The spin-wait loop is going to be extremely expensive.
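For instance (an illustrative tweak, not part of the original code), the waiting loop in readEventually() could back off briefly instead of spinning flat out:
do {
    instance = ref.get(); // ----- (2) -----
    if (instance == null) {
        // park for roughly a microsecond instead of burning a core in a tight loop
        java.util.concurrent.locks.LockSupport.parkNanos(1_000L);
    }
} while (instance == null);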
Also, I would consider improving the code by passing the AtomicReference to readEventually() so you can avoid the containsKey() and then putIfAbsent() race condition. So the code would be:
AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
if (ref != null) {
return readEventually(ref);
}
AtomicReference<T> newRef = new AtomicReference<T>(null);
AtomicReference<T> oldRef = CACHE.putIfAbsent(key, newRef);
if (oldRef != null) {
return readEventually(oldRef);
}
...
The code is not generally thread safe because there is a gap between the CACHE.containsKey(key) check and the CACHE.putIfAbsent(key, ref) call. It is possible for two threads to call simultaneously into the method (especially on multi-core/processor systems) and both perform the containsKey() check, then both attempt to do the put and creation operations.
I would protect the execution of the getSingletonInstance() method using either a lock or by synchronizing on a monitor of some sort.
Google "Memoizer". Basically, instead of AtomicReference, use a Future.
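For completeness, here is a sketch of that Memoizer idea (popularized by Java Concurrency in Practice) adapted to the question's code; the SingletonException(Exception) constructor is assumed from the question:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class MemoizingSingletonFactory {
    private static final ConcurrentMap<String, Future<Singleton>> CACHE =
            new ConcurrentHashMap<>();

    public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
            throws SingletonException {
        String key = clazz.getName();
        Future<Singleton> f = CACHE.get(key);
        if (f == null) {
            FutureTask<Singleton> task = new FutureTask<Singleton>(() -> {
                T instance = clazz.newInstance();
                instance.init();
                return instance;
            });
            f = CACHE.putIfAbsent(key, task);
            if (f == null) {            // this thread won the race and runs the initialization
                f = task;
                task.run();
            }
        }
        try {
            return clazz.cast(f.get()); // losing threads block here instead of spinning
        } catch (ExecutionException e) {
            CACHE.remove(key, f);       // allow a retry after a failed init()
            throw new SingletonException(e.getCause() instanceof Exception
                    ? (Exception) e.getCause() : e);   // SingletonException(Exception) assumed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new SingletonException(e);
        }
    }
}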