I have a class whose fields have to be lazily initialized.
class Some {

    public Some getPrevious() {
        final Some result = previous;
        if (result != null) {
            return result;
        }
        synchronized (this) {
            if (previous == null) {
                previous = computePrevious();
            }
            return previous;
        }
    }

    // all fields are final and initialized in the constructor
    private final String name;

    // this is a lazily initialized self-type value
    private volatile Some previous;
}
Now SonarCloud keeps complaining about java:S3077:
Use a thread-safe type; adding "volatile" is not enough to make this field thread-safe.
Is there anything wrong with the code?
Can (or should) I ignore it?
What about using AtomicReference? Isn't it overkill?
A 'thread-safe type' means one that can be used by several threads without issues.
So if Some is immutable, it is a 'thread-safe type' for the purposes of S3077.
If it is a class which is designed to be used by several threads, e.g. a ConcurrentHashMap, then it is also a 'thread-safe type'.
If you google S3077 you can find useful discussions which answer your question, e.g. https://community.sonarsource.com/t/java-rule-s3077-should-not-apply-to-references-to-immutable-objects/15200
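If you would rather satisfy the rule than suppress it, an AtomicReference also works, although it mostly just wraps the same volatile semantics. A minimal sketch, assuming the same computePrevious() as in the question:

import java.util.concurrent.atomic.AtomicReference;

class Some {
    // AtomicReference counts as a "thread-safe type", so S3077 no longer fires
    private final AtomicReference<Some> previous = new AtomicReference<>();

    public Some getPrevious() {
        Some result = previous.get();
        if (result == null) {
            synchronized (this) {
                result = previous.get();
                if (result == null) {
                    result = computePrevious();
                    previous.set(result);
                }
            }
        }
        return result;
    }

    private Some computePrevious() {
        return new Some();   // placeholder for the real computation
    }
}

Whether this is overkill is a matter of taste; it removes the warning without changing the locking strategy.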
The book "Java Concurrency in Practice" mentions that the following code is not thread-safe:
@NotThreadSafe
public class DoubleCheckedLocking {
    private static Resource resource;

    public static Resource getInstance() {
        if (resource == null) {
            synchronized (DoubleCheckedLocking.class) {
                if (resource == null)
                    resource = new Resource();
            }
        }
        return resource;
    }
}
It is not thread-safe because:
- one thread can create a new instance of Resource
- another thread, evaluating the "if" condition at the same time, can see a non-null reference even though the Resource object is not yet completely initialized (the sketch below shows the standard fix)
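The standard fix, which the book notes works on Java 5 and later, is to make the field volatile, so that the write of the fully constructed Resource happens-before any read that observes the non-null reference. A minimal sketch:

public class DoubleCheckedLocking {
    // volatile: a thread that sees a non-null reference also sees a fully initialized Resource
    private static volatile Resource resource;

    public static Resource getInstance() {
        Resource result = resource;                    // one volatile read on the fast path
        if (result == null) {
            synchronized (DoubleCheckedLocking.class) {
                result = resource;
                if (result == null) {
                    resource = result = new Resource();
                }
            }
        }
        return result;
    }
}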
This question contains similar code. The resources are stored in a ConcurrentHashMap, and people say that it is thread-safe. Something like this:
public class DoubleCheckedLocking2 {
    private static ConcurrentHashMap<String, ComplexObject> cache =
            new ConcurrentHashMap<String, ComplexObject>();

    public static ComplexObject getInstance(String key) {
        ComplexObject result = cache.get(key);
        if (result == null) {
            synchronized (DoubleCheckedLocking2.class) {
                ComplexObject currentValue = cache.get(key);
                if (currentValue == null) {
                    result = new ComplexObject();
                    cache.put(key, result);
                } else {
                    result = currentValue;
                }
            }
        }
        return result;
    }
}
Why does storing the values in a ConcurrentHashMap make the code thread-safe? I think it is still possible that the ComplexObject won't be completely initialized and this "partial object" will be saved in the map, and other threads will then read partial, not fully initialized objects.
I think I know what "happens-before" is; I've analyzed the code in JDK 8.0_31 and I still don't know the answer.
I am aware of methods like computeIfAbsent and putIfAbsent. I know that this code can be written differently. I just want to know the details that make this code thread-safe.
Happens-before actually is the key here. There is a happens-before edge extending from map.put(key, object) to a subsequent map.get(key), therefore the object you retrieve is at least as up to date as it was at the time it was stored in the map.
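As an illustration (a sketch, not from the original question): any field written before the put() is guaranteed to be visible to a thread that obtains the same object via get(), even without volatile or final fields, because the put() and the get() on the same key form a happens-before pair.

import java.util.concurrent.ConcurrentHashMap;

class HappensBeforeDemo {
    static final ConcurrentHashMap<String, ComplexObject> cache =
            new ConcurrentHashMap<String, ComplexObject>();

    static class ComplexObject {
        int state;                         // deliberately not volatile or final

        ComplexObject() {
            state = 42;                    // written before the object is published
        }
    }

    public static void main(String[] args) {
        // Thread A: construct and publish via put()
        new Thread(new Runnable() {
            public void run() {
                cache.put("key", new ComplexObject());
            }
        }).start();

        // Thread B: if get() returns the object, the put() happens-before this get(),
        // so state is guaranteed to be 42, never a half-initialized default 0
        new Thread(new Runnable() {
            public void run() {
                ComplexObject o = cache.get("key");
                if (o != null) {
                    System.out.println(o.state);   // always prints 42
                }
            }
        }).start();
    }
}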
TaggedLogger has only a single String field, tag.
public class TaggedLogger {

    private final String tag;

    public static TaggedLogger forInstance(Object instance) {
        return new TaggedLogger(getTagOfInstance(instance));
    }

    public static String getTagOfInstance(Object instance) {
        return getTagOfClass(instance.getClass());
    }

    public static TaggedLogger forClass(Class<?> someClass) {
        return new TaggedLogger(getTagOfClass(someClass));
    }

    public static String getTagOfClass(Class<?> someClass) {
        return someClass.getName();
    }

    public static TaggedLogger withTag(String tag) {
        return new TaggedLogger(tag);
    }

    private TaggedLogger(String tag) {
        this.tag = tag;
    }

    public void debug(Object obj) {
        Log.d(getTag(), String.valueOf(obj));
    }

    public String getTag() {
        return tag;
    }

    public void exception(String message) {
        Log.e(getTag(), String.valueOf(message));
    }

    public void exception(Throwable exception) {
        Log.e(getTag(), String.valueOf(exception.getMessage()), exception);
    }

    public void exception(Throwable exception, String additionalMessage) {
        Log.e(getTag(), String.valueOf(exception.getMessage()), exception);
        Log.e(getTag(), String.valueOf(additionalMessage));
    }

    public void info(Object obj) {
        Log.i(getTag(), String.valueOf(obj.toString()));
    }
}
And TaggedLoggers is used to get cached TaggedLogger instances (or create new ones and put them in the cache):
public class TaggedLoggers {

    // the cache must be initialized before GLOBAL, otherwise getCachedWithTag
    // would hit a null map during class initialization
    private static final Map<String, TaggedLogger> cache = new HashMap<String, TaggedLogger>();

    public static final TaggedLogger GLOBAL = getCachedWithTag("GLOBAL");

    public static TaggedLogger getCachedForInstance(Object obj) {
        return getCachedWithTag(TaggedLogger.getTagOfInstance(obj));
    }

    public static TaggedLogger getCachedForClass(Class<?> someClass) {
        return getCachedWithTag(TaggedLogger.getTagOfClass(someClass));
    }

    public static TaggedLogger getCachedWithTag(String tag) {
        TaggedLogger logger = cache.get(tag);
        if (logger == null) {
            logger = TaggedLogger.withTag(tag);
            cache.put(tag, logger);
        }
        return logger;
    }
}
Is there any use in the TaggedLoggers class?
Actually, I often use TaggedLogger for logging, using arguments as tags. For example:
public class FragmentUtils {

    public static void showMessage(Fragment fragment, String message, int toastDuration) {
        TaggedLoggers.getCachedForInstance(fragment).debug(message);
        Context context = fragment.getActivity();
        if (context == null) {
            return;
        }
        Toast toast = Toast.makeText(context, message, toastDuration);
        toast.show();
    }
}
So, caching TaggedLogger instances actually helps me avoid a lot of unnecessary instances.
But should I do so?
Caching of existing instances can help a lot or can kill performance.
When creating a new instance you have to consider two factors:
The time to set up the instance itself, that is, allocate RAM, initialize fields and execute the constructor
The time taken by the garbage collector to clean up after the instance is no longer reachable
This used to take a lot of time on older JVMs; the garbage collector has since been improved, and creating and throwing away small instances is usually not the big problem it once was, but it still has its cost. I don't know exactly how much the Android VMs have been optimized.
In this case it depends on how often you create and throw away these instances, which you said is very often.
When instead reusing them, you have to consider two factors:
The time to look up the existing instance, that is, a map lookup
The RAM that is kept full of effectively unused instances
So, in this case, it depends on how many different instances you have. If you end up with thousands of TaggedLoggers, then looking them up in the map and keeping all that stuff in RAM could hurt performance more than creating and throwing them away.
If the TaggedLoggers number in the hundreds, caching is probably better; if they go into the thousands, it is probably better to instantiate and throw away.
However, I would question whether you need a TaggedLogger at all. If you always have the tag String, can't you simply call the logger method directly, or have only a (possibly even static) façade in front of it, instead of instances that contain only a piece of information (the tag string) that you already have?
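For illustration, a minimal sketch of such a static façade; the class name Logs and its methods are made up here, not part of the original code:

import android.util.Log;

// Hypothetical static façade: callers pass the object they already have,
// so no TaggedLogger instances need to be created or cached.
public final class Logs {

    private Logs() {}

    public static void debug(Object caller, Object message) {
        Log.d(caller.getClass().getName(), String.valueOf(message));
    }

    public static void info(Object caller, Object message) {
        Log.i(caller.getClass().getName(), String.valueOf(message));
    }

    public static void exception(Object caller, Throwable t) {
        Log.e(caller.getClass().getName(), String.valueOf(t.getMessage()), t);
    }
}

Usage would then be a single call such as Logs.debug(fragment, message).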
Creating and garbage collecting TaggedLoggers would be nearly free, so there is no real benefit to caching them. But since TaggedLoggers uses a HashMap instead of a ConcurrentHashMap, there is the potential, if it is called from multiple threads, for hard-to-debug problems, up to and including an infinite loop if two threads try to resize the map at the same time.
It provides little if any benefit, creates additional complexity, and may create problems.
See also: A Beautiful Race Condition
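If the cache is kept at all, a thread-safe sketch using ConcurrentHashMap.computeIfAbsent (Java 8+) would look roughly like this:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TaggedLoggers {

    private static final ConcurrentMap<String, TaggedLogger> cache =
            new ConcurrentHashMap<String, TaggedLogger>();

    public static TaggedLogger getCachedWithTag(String tag) {
        // computeIfAbsent is atomic: each tag gets exactly one TaggedLogger,
        // and concurrent access/resizing is handled safely by the map itself
        return cache.computeIfAbsent(tag, TaggedLogger::withTag);
    }
}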
I've been using the LazyReference class for a few years (not on a regular basis of course, but sometimes it is very useful). The class can be seen here. Credits go to Robbie Vanbrabant (class author) and Joshua Bloch with his famous "Effective Java, 2nd ed." (original code).
The class works correctly (in Java 5+) but there is one little potential issue. If instanceProvider returns null (well, it must not according to the Guice Provider.get() contract, but…) then on every execution of LazyReference.get() the LOCK will be held and instanceProvider.get() will be called over and over again. It looks like a fitting punishment for those who break contracts (he-he), but what if one really needs to lazily initialize a field with the possibility of a null value?
I modified LazyReference a little bit:
public class LazyReference<T> {

    private final Object LOCK = new Object();
    private volatile T instance;
    private volatile boolean isNull;
    private final Provider<T> instanceProvider;

    private LazyReference(Provider<T> instanceProvider) {
        this.instanceProvider = instanceProvider;
    }

    public T get() {
        T result = instance;
        if (result == null && !isNull) {
            synchronized (LOCK) {
                result = instance;
                if (result == null && !isNull) {
                    instance = result = instanceProvider.get();
                    isNull = (result == null);
                }
            }
        }
        return result;
    }
}
IMHO it should work just fine (if you have another opinion, please post your comments and criticism). But I wonder what will happen if I remove the volatile modifier from the isNull boolean (leaving it on instance, of course)? Will it still work correctly?
The above code has a race condition: instance may be set to the "real" null from the result of instanceProvider.get() before isNull has been set.
Are you sure it wouldn't be easier to just scrap this complicated nonsense and synchronise properly? I bet you will not be able to measure any difference in performance, and it will be easier to verify that your code is correct.
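For reference, the plain-synchronization version this answer suggests would look roughly like this (a sketch; Provider is the same Guice-style provider as in the question, and the factory for constructing the reference is omitted):

public class LazyReference<T> {

    private final Provider<T> instanceProvider;
    private T instance;
    private boolean initialized;

    LazyReference(Provider<T> instanceProvider) {
        this.instanceProvider = instanceProvider;
    }

    // plain synchronization: trivially correct, and usually impossible to
    // distinguish from double-checked locking in real-world measurements
    public synchronized T get() {
        if (!initialized) {
            instance = instanceProvider.get();   // a null result is cached like any other value
            initialized = true;
        }
        return instance;
    }
}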
As pointed out by Neil Coffey, this code contains a race condition, but it can easily be fixed as follows (note that instance doesn't need to be volatile):
public class LazyReference<T> {

    private T instance;
    private volatile boolean initialized;
    ...

    public T get() {
        if (!initialized) {
            synchronized (LOCK) {
                if (!initialized) {
                    instance = instanceProvider.get();
                    initialized = true;
                }
            }
        }
        return instance;
    }
}
Let's say we have a CountryList object in our application that should return the list of countries. Loading the countries is a heavy operation, so the list should be cached.
Additional requirements:
CountryList should be thread-safe
CountryList should load lazily (only on demand)
CountryList should support the invalidation of the cache
CountryList should be optimized considering that the cache will be invalidated very rarely
I came up with the following solution:
public class CountryList {

    private static final Object ONE = new Integer(1);

    // MapMaker is from the Google Collections Library
    private Map<Object, List<String>> cache = new MapMaker()
        .initialCapacity(1)
        .makeComputingMap(
            new Function<Object, List<String>>() {
                @Override
                public List<String> apply(Object from) {
                    return loadCountryList();
                }
            });

    private List<String> loadCountryList() {
        // HEAVY OPERATION TO LOAD DATA
    }

    public List<String> list() {
        return cache.get(ONE);
    }

    public void invalidateCache() {
        cache.remove(ONE);
    }
}
What do you think about it? Do you see something bad about it? Is there another way to do it? How can I make it better? Should I look for a totally different solution in cases like this?
Thanks.
Google Collections actually supplies just the thing for this sort of thing: Supplier.
Your code would be something like:
private Supplier<List<String>> supplier = new Supplier<List<String>>() {
    public List<String> get() {
        return loadCountryList();
    }
};

// volatile reference so that changes are published correctly; see invalidate()
private volatile Supplier<List<String>> memorized = Suppliers.memoize(supplier);

public List<String> list() {
    return memorized.get();
}

public void invalidate() {
    memorized = Suppliers.memoize(supplier);
}
Thank you all, especially user "gid" who gave the idea.
My target was to optimize the performance of the get() operation, considering that the invalidate() operation will be called very rarely.
I wrote a testing class that starts 16 threads, each calling the get() operation one million times. With this class I profiled several implementations on my 2-core machine.
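The exact testing class isn't shown here; a rough sketch of such a harness (the class name, constants, and structure are my own, assuming the ICountryList interface used further below) might look like this:

import java.util.concurrent.CountDownLatch;

public class CountryListBenchmark {

    private static final int THREADS = 16;
    private static final int CALLS_PER_THREAD = 1000000;

    public static long measureMillis(final ICountryList countryList) throws InterruptedException {
        final CountDownLatch start = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(THREADS);
        for (int i = 0; i < THREADS; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        start.await();                     // release all threads at once
                        for (int c = 0; c < CALLS_PER_THREAD; c++) {
                            countryList.list();            // the operation under test
                        }
                    } catch (InterruptedException ignored) {
                    } finally {
                        done.countDown();
                    }
                }
            }).start();
        }
        long t0 = System.nanoTime();
        start.countDown();
        done.await();
        return (System.nanoTime() - t0) / 1000000;          // elapsed milliseconds
    }
}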
Testing results
Implementation            Time
no synchronisation        0.6 sec
normal synchronisation    7.5 sec
with MapMaker            26.3 sec
with Suppliers.memoize    8.2 sec
with optimized memoize    1.5 sec
1) "No synchronisation" is not thread-safe, but gives us the best performance that we can compare to.
@Override
public List<String> list() {
    if (cache == null) {
        cache = loadCountryList();
    }
    return cache;
}

@Override
public void invalidateCache() {
    cache = null;
}
2) "Normal synchronisation" - pretty good performace, standard no-brainer implementation
@Override
public synchronized List<String> list() {
    if (cache == null) {
        cache = loadCountryList();
    }
    return cache;
}

@Override
public synchronized void invalidateCache() {
    cache = null;
}
3) "with MapMaker" - very poor performance.
See my question at the top for the code.
4) "with Suppliers.memoize" - good performance. But as the performance the same "Normal synchronisation" we need to optimize it or just use the "Normal synchronisation".
See the answer of the user "gid" for code.
5) "with optimized memoize" - the performnce comparable to "no sync"-implementation, but thread-safe one. This is the one we need.
The cache-class itself:
(The Supplier interface used here is from the Google Collections Library; it has just one method, get(). See http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/base/Supplier.html)
public class LazyCache<T> implements Supplier<T> {

    private final Supplier<T> supplier;
    private volatile Supplier<T> cache;

    public LazyCache(Supplier<T> supplier) {
        this.supplier = supplier;
        reset();
    }

    private void reset() {
        cache = new MemoizingSupplier<T>(supplier);
    }

    @Override
    public T get() {
        return cache.get();
    }

    public void invalidate() {
        reset();
    }

    private static class MemoizingSupplier<T> implements Supplier<T> {
        final Supplier<T> delegate;
        volatile T value;

        MemoizingSupplier(Supplier<T> delegate) {
            this.delegate = delegate;
        }

        @Override
        public T get() {
            if (value == null) {
                synchronized (this) {
                    if (value == null) {
                        value = delegate.get();
                    }
                }
            }
            return value;
        }
    }
}
Example use:
public class BetterMemoizeCountryList implements ICountryList {

    LazyCache<List<String>> cache = new LazyCache<List<String>>(new Supplier<List<String>>() {
        @Override
        public List<String> get() {
            return loadCountryList();
        }
    });

    @Override
    public List<String> list() {
        return cache.get();
    }

    @Override
    public void invalidateCache() {
        cache.invalidate();
    }

    private List<String> loadCountryList() {
        // this should normally load a full list from the database,
        // but just for this instance we mock it with:
        return Arrays.asList("Germany", "Russia", "China");
    }
}
Whenever I need to cache something, I like to use the Proxy pattern. Doing it with this pattern offers separation of concerns. Your original object can be concerned with lazy loading. Your proxy (or guardian) object can be responsible for validation of the cache.
In detail:
Define a CountryList class which is thread-safe, preferably using synchronized blocks or other locks.
Extract this class's interface into a CountryQueryable interface.
Define another object, CountryListProxy, that implements CountryQueryable.
Only allow the CountryListProxy to be instantiated, and only allow it to be referenced through its interface.
From here, you can insert your cache invalidation strategy into the proxy object. Save the time of the last load, and upon the next request to see the data, compare the current time to the cache time. Define a tolerance level, where, if too much time has passed, the data is reloaded.
As far as Lazy Load, refer here.
Now for some good down-home sample code:
public interface CountryQueryable {

    public void operationA();

    public String operationB();
}
public class CountryList implements CountryQueryable {

    private boolean loaded;

    public CountryList() {
        loaded = false;
    }

    // This particular operation might be able to function without
    // the extra loading.
    @Override
    public void operationA() {
        // Do whatever.
    }

    // This operation may need to load the extra stuff.
    @Override
    public String operationB() {
        if (!loaded) {
            load();
            loaded = true;
        }
        // Do whatever.
        return whatever;
    }

    private void load() {
        // Do the loading of the lazy load here.
    }
}
public class CountryListProxy implements CountryQueryable {

    // In accordance with the Proxy pattern, we hide the target
    // instance inside of our Proxy instance.
    private CountryQueryable actualList;

    // Keep track of the last time we cached.
    private long lastCached;

    // Define a tolerance time, 2000 milliseconds, before refreshing
    // the cache.
    private static final long TOLERANCE = 2000L;

    public CountryListProxy() {
        // You might even retrieve this object from a Registry.
        actualList = new CountryList();
        // Initialize it to something stupid.
        lastCached = Long.MIN_VALUE;
    }

    @Override
    public synchronized void operationA() {
        if ((System.currentTimeMillis() - lastCached) > TOLERANCE) {
            // Refresh the cache.
            lastCached = System.currentTimeMillis();
        } else {
            // Cache is okay.
        }
    }

    @Override
    public synchronized String operationB() {
        if ((System.currentTimeMillis() - lastCached) > TOLERANCE) {
            // Refresh the cache.
            lastCached = System.currentTimeMillis();
        } else {
            // Cache is okay.
        }
        return whatever;
    }
}
public class Client {

    public static void main(String[] args) {
        CountryQueryable queryable = new CountryListProxy();
        // Do your thing.
    }
}
Your needs seem pretty simple here. The use of MapMaker makes the implementation more complicated than it has to be. The whole double-checked locking idiom is tricky to get right, and only works on 1.5+. And to be honest, it's breaking one of the most important rules of programming:
Premature optimization is the root of all evil.
The double-checked locking idiom tries to avoid the cost of synchronization in the case where the cache is already loaded. But is that overhead really causing problems? Is it worth the cost of more complex code? I say assume it is not until profiling tells you otherwise.
Here's a very simple solution that requires no 3rd-party code (ignoring the JCIP annotation). It makes the assumption that an empty list means the cache hasn't been loaded yet. It also prevents the contents of the country list from escaping to client code that could potentially modify the returned list. If this is not a concern for you, you could remove the call to Collections.unmodifiableList().
public class CountryList {

    @GuardedBy("cache")
    private final List<String> cache = new ArrayList<String>();

    private List<String> loadCountryList() {
        // HEAVY OPERATION TO LOAD DATA
    }

    public List<String> list() {
        synchronized (cache) {
            if (cache.isEmpty()) {
                cache.addAll(loadCountryList());
            }
            return Collections.unmodifiableList(cache);
        }
    }

    public void invalidateCache() {
        synchronized (cache) {
            cache.clear();
        }
    }
}
I'm not sure what the map is for. When I need a lazy, cached object, I usually do it like this:
public class CountryList
{
    private static List<Country> countryList;

    public static synchronized List<Country> get()
    {
        if (countryList == null)
            countryList = load();
        return countryList;
    }

    private static List<Country> load()
    {
        ... whatever ...
    }

    public static synchronized void forget()
    {
        countryList = null;
    }
}
I think this is similar to what you're doing but a little simpler. If you have a need for the map and the ONE that you've simplified away for the question, okay.
If you want it thread-safe, you should synchronize the get and the forget.
What do you think about it? Do you see something bad about it?
Bleah - you are using a complex data structure, MapMaker, with several features (map access, concurrency-friendly access, deferred construction of values, etc.) for the sake of the single feature you are after (deferred creation of a single expensive-to-construct object).
While reusing code is a good goal, this approach adds additional overhead and complexity. In addition, it misleads future maintainers: when they see a map data structure there, they will think there is a map of keys/values, when there is really only one thing (the list of countries). Simplicity, readability, and clarity are key to future maintainability.
Is there another way to do it? How can I make it better? Should I look for a totally different solution in cases like this?
Seems like you are after lazy-loading. Look at solutions to other SO lazy-loading questions. For example, this one covers the classic double-check approach (make sure you are using Java 1.5 or later):
How to solve the "Double-Checked Locking is Broken" Declaration in Java?
Rather than simply repeat the solution code here, I think it is useful to read the discussion about lazy loading via double-checking there to grow your knowledge base. (Sorry if that comes off as pompous - just trying to teach to fish rather than feed, blah blah blah ...)
There is a library out there (from Atlassian) with a util class called LazyReference. LazyReference is a reference to an object that can be lazily created (on first get). It is guaranteed thread-safe, and the init is also guaranteed to occur only once - if two threads call get() at the same time, one thread will compute and the other thread will block and wait.
See some sample code:
final LazyReference<MyObject> ref = new LazyReference<MyObject>() {
    protected MyObject create() throws Exception {
        // Do some useful object construction here
        return new MyObject();
    }
};

// thread1
MyObject myObject = ref.get();

// thread2
MyObject myObject = ref.get();
This looks OK to me (I assume MapMaker is from Google Collections?). Ideally you wouldn't need to use a Map because you don't really have keys, but as the implementation is hidden from any callers I don't see this as a big deal.
This is way too simple to need the ComputingMap stuff. You only need a dead-simple implementation where all methods are synchronized, and you should be fine. This will obviously block the first thread hitting it (getting it), and any other thread hitting it while the first thread loads the cache (and the same again if anyone calls invalidateCache - where you should also decide whether invalidateCache should load the cache anew or just null it out, letting the next attempt at getting it block), but then all threads should go through nicely.
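A minimal sketch of that dead-simple version (assuming the same loadCountryList() as in the question; the mock body here is a placeholder):

import java.util.Arrays;
import java.util.List;

public class CountryList {

    private List<String> cache;

    public synchronized List<String> list() {
        if (cache == null) {
            cache = loadCountryList();   // first caller (and callers after invalidation) pay the cost
        }
        return cache;
    }

    public synchronized void invalidateCache() {
        cache = null;                    // next list() call reloads
    }

    private List<String> loadCountryList() {
        // HEAVY OPERATION TO LOAD DATA (placeholder)
        return Arrays.asList("Germany", "Russia", "China");
    }
}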
Use the Initialization on demand holder idiom
public class CountryList {

    private CountryList() {}

    private static class CountryListHolder {
        // List is an interface, so the holder has to be given a concrete, fully loaded list;
        // loadCountryList() stands in here for the expensive load
        static final List<Country> INSTANCE = loadCountryList();
    }

    public static List<Country> getInstance() {
        return CountryListHolder.INSTANCE;
    }

    ...
}
Follow up to Mike's solution above. My comment didn't format as expected... :(
Watch out for synchronization issues in operationB, especially since load() is slow:
public String operationB() {
    if (!loaded) {
        load();
        loaded = true;
    }
    // Do whatever.
    return whatever;
}
You could fix it this way (note that loaded is a primitive boolean, so you need to synchronize on the instance or a dedicated lock object rather than on the field itself):
public String operationB() {
    synchronized (this) {
        if (!loaded) {
            load();
            loaded = true;
        }
    }
    // Do whatever.
    return whatever;
}
Make sure you ALWAYS synchronize on every access to the loaded variable.