Any purpose of using a LoadableDetachableModel in a DataProvider?

Since it is still not 100% clear to me when an LDM should be used, I tried a simple memory test.
I created a DataView with a DataProvider that simply creates a list of a few hundred entities, each holding some big data (a long String):
private class HeavyDataProvider implements IDataProvider<HeavyBean> {

    @Override
    public void detach() {
    }

    @Override
    public Iterator<? extends HeavyBean> iterator(int first, int count) {
        List<HeavyBean> l = newArrayList();
        for (int i = 0; i < this.size(); i++) {
            l.add(new HeavyBean());
        }
        return l.iterator();
    }

    @Override
    public IModel<HeavyBean> model(HeavyBean heavyBean) {
        return new CompoundPropertyModel<HeavyBean>(heavyBean);
    }

    @Override
    public int size() {
        return 500;
    }
}
Using Wicket's DebugBar I see this creates a page with a size of 5 MB. The Javadoc of IDataProvider states that the model returned by the model method above is usually a detachable one, so I changed the method to:
@Override
public IModel<HeavyBean> model(final HeavyBean heavyBean) {
    return new LoadableDetachableModel<HeavyBean>() {
        @Override
        protected HeavyBean load() {
            return heavyBean;
        }
    };
}
Naively I expected the page size to shrink considerably, since the HeavyBeans would no longer be part of the model. Actual result: 5 MB. Since the model detaches the HeavyBean, something else must still be holding on to it (the DataView? the Item?).
I saw other examples where DataView and DataProvider are combined in a similar fashion, but it is unclear to me what the point is, since it does not seem to make any difference to the page size/memory usage.
So, am I misunderstanding or doing something wrong (likely), or are LDMs useless in DataProviders?
Side question (sorry), in which scenario would you use an LDM?

Your implementation of the LDM is just plain wrong. It holds a direct reference to the bean itself and just returns it, so the bean will be serialized along with the model, making it completely pointless.
You should do something like this:
@Override
public IModel<HeavyBean> model(final HeavyBean heavyBean) {
    final Integer id = heavyBean.getId();
    return new LoadableDetachableModel<HeavyBean>() {
        @Override
        protected HeavyBean load() {
            return ServiceLocator.get(HeavyDao.class).get(id);
        }
    };
}
If you use the wicket-ioc module, the HeavyDao reference could be injected into the enclosing page/component.
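A minimal sketch of that idea, assuming the wicket-spring flavour of wicket-ioc and an injector configured in the application (HeavyDao and the panel name are hypothetical):

import org.apache.wicket.markup.html.panel.Panel;
import org.apache.wicket.spring.injection.annot.SpringBean;

public class HeavyBeanPanel extends Panel {

    // Wicket injects a serializable proxy here, so an LDM's load() can call
    // heavyDao.get(id) without dragging the real DAO into the session.
    @SpringBean
    private HeavyDao heavyDao;

    public HeavyBeanPanel(String id) {
        super(id);
    }
}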
I think Wicket is really easy to use, but you must understand the basics of Java serialization, or you may end up with a very bloated HTTP session.

For the LDM to work, you have to actually detach the data in the detach() method. LDMs are meant to be used with databases, where you can restore/load the data on the next request knowing only an ID. So in detach() you would throw away all data except the ID (or whatever you need to reload the data when needed), and in load() (is that the right name? I can't look up the API right now) you would restore the data.
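A minimal sketch of that detach/load cycle, written as a plain IModel (HeavyDao and ServiceLocator are the hypothetical lookup helpers from the answer above):

public class HeavyBeanModel implements IModel<HeavyBean> {

    private final Integer id;          // survives serialization
    private transient HeavyBean bean;  // thrown away on detach

    public HeavyBeanModel(HeavyBean bean) {
        this.id = bean.getId();
        this.bean = bean;
    }

    @Override
    public HeavyBean getObject() {
        if (bean == null) {
            bean = ServiceLocator.get(HeavyDao.class).get(id); // reload on demand
        }
        return bean;
    }

    @Override
    public void setObject(HeavyBean object) {
        throw new UnsupportedOperationException("read-only model");
    }

    @Override
    public void detach() {
        bean = null; // keep only the id between requests
    }
}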
Hope that helps.

Related

How to create a custom BodyPublisher for Java 11 HttpRequest

I'm trying to create a custom BodyPublisher that would serialize my JSON object. I could just serialize the JSON when I'm creating the request and use the ofByteArray method of BodyPublishers, but I would rather use a custom publisher.
public class CustomPublisher implements HttpRequest.BodyPublisher {

    private byte[] bytes;

    public CustomPublisher(ObjectNode jsonData) {
        ...
        // Serialize jsonData to bytes
        ...
    }

    @Override
    public long contentLength() {
        if (bytes == null) return 0;
        return bytes.length;
    }

    @Override
    public void subscribe(Flow.Subscriber<? super ByteBuffer> subscriber) {
        CustomSubscription subscription = new CustomSubscription(subscriber, bytes);
        subscriber.onSubscribe(subscription);
    }

    private class CustomSubscription implements Flow.Subscription {

        private final Flow.Subscriber<? super ByteBuffer> subscriber;
        private boolean cancelled;
        private Iterator<Byte> byterator;

        private CustomSubscription(Flow.Subscriber<? super ByteBuffer> subscriber, byte[] bytes) {
            this.subscriber = subscriber;
            this.cancelled = false;
            List<Byte> bytelist = new ArrayList<>();
            for (byte b : bytes) {
                bytelist.add(b);
            }
            this.byterator = bytelist.iterator();
        }

        @Override
        public void request(long n) {
            if (cancelled) return;
            if (n < 0) {
                subscriber.onError(new IllegalArgumentException());
            } else if (byterator.hasNext()) {
                subscriber.onNext(ByteBuffer.wrap(new byte[]{byterator.next()}));
            } else {
                subscriber.onComplete();
            }
        }

        @Override
        public void cancel() {
            this.cancelled = true;
        }
    }
}
This implementation works, but only if the subscription's request method gets called with 1 as the parameter. Then again, that is what happens when I use it with HttpRequest.
I'm pretty sure this is not the preferred or optimal way of creating the custom subscription, but I have yet to find a better way to make it work.
I would greatly appreciate it if anyone can lead me to a better path.
You are right to avoid making a byte array out of it, as that would create memory issues for large objects.
I wouldn’t try to write a custom publisher. Rather, just take advantage of the factory method HttpRequest.BodyPublishers.ofInputStream.
HttpRequest.BodyPublisher publisher =
    HttpRequest.BodyPublishers.ofInputStream(() -> {
        PipedInputStream in = new PipedInputStream();
        ForkJoinPool.commonPool().submit(() -> {
            try (PipedOutputStream out = new PipedOutputStream(in)) {
                objectMapper.writeTree(
                    objectMapper.getFactory().createGenerator(out),
                    jsonData);
            }
            return null;
        });
        return in;
    });
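For completeness, a hypothetical way to plug that publisher into a request (the URI and header values are placeholders):

HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/api"))   // placeholder endpoint
        .header("Content-Type", "application/json")
        .POST(publisher)
        .build();

HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());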
As you have noted, you can use HttpRequest.BodyPublishers.ofByteArray. That is fine for relatively small objects, but I program for scalability out of habit. The problem with assuming code won’t need to scale is that other developers will assume it is safe to pass large objects, without realizing the impact on performance.
Writing your own body publisher will be a lot of work. Its subscribe method is inherited from Flow.Publisher.
The documentation for the subscribe method starts with this:
Adds the given Subscriber if possible.
Each time your subscribe method is called, you need to add the Subscriber to some sort of collection, create an implementation of Flow.Subscription, and immediately pass it to the subscriber's onSubscribe method. Your Subscription implementation needs to send back one or more ByteBuffers, only when the Subscription's request method is called, by invoking the corresponding Subscriber's (not just any Subscriber's) onNext method, and once you've sent all of the data, you must call the same Subscriber's onComplete() method. On top of that, the Subscription implementation needs to handle cancel requests.
You can make a lot of this easier by extending SubmissionPublisher, which is a default implementation of Flow.Publisher, and then adding a contentLength() method to it. But as the SubmissionPublisher documentation shows, you still have a fair amount of work to do, for even a minimal working implementation.
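If you did go that route, a rough sketch (not a complete or battle-tested implementation) might look like this; it leans on SubmissionPublisher for the subscription bookkeeping and only adds contentLength():

import java.net.http.HttpRequest;
import java.nio.ByteBuffer;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BytesBodyPublisher extends SubmissionPublisher<ByteBuffer>
        implements HttpRequest.BodyPublisher {

    private final byte[] bytes;

    public BytesBodyPublisher(byte[] bytes) {
        this.bytes = bytes;
    }

    @Override
    public long contentLength() {
        return bytes.length;
    }

    @Override
    public void subscribe(Flow.Subscriber<? super ByteBuffer> subscriber) {
        super.subscribe(subscriber);        // SubmissionPublisher manages the Subscription
        submit(ByteBuffer.wrap(bytes));     // delivered as the subscriber requests items
        close();                            // triggers onComplete once the buffer is consumed
    }
}

Note that this sketch really only supports a single use: once close() has been called, a later re-subscribe (for example on a retry) would see an already-completed publisher.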
The HttpRequest.BodyPublishers.of… methods will do all of this for you. ofByteArray is okay for small objects, but ofInputStream will work for any object you could ever pass in.

Is a Java object with hundreds of methods expensive?

I have a class like below, with hundreds of methods:
public class APIMethods {

    public ToView toView;

    public APIMethods(ToView toView) {
        this.toView = toView;
    }

    public static final int SUCCESS = 1;
    public static final int ERROR = 0;

    public void registerAnonymous(String deviceId, String installRef, final int requestCode) {
        APIInterface apiService =
                RetrofitClientInstance.getRetrofitInstance().create(APIInterface.class);
        JsonObject obj = new JsonObject();
        obj.addProperty("androidId", deviceId);
        obj.addProperty("projectId", 0);
        obj.addProperty("ChannelName", installRef);
        Call<Response<BasicUser>> call = apiService.registerAnonymous("application/json", Utils.getFlavorId(), obj);
        call.enqueue(new Callback<Response<BasicUser>>() {
            @Override
            public void onResponse(Call<Response<BasicUser>> call, Response<Response<BasicUser>> response) {
                Response<BasicUser> mResponse;
                try {
                    mResponse = response.body();
                    if (mResponse.getErrorCode() == 0)
                        toView.updateView(requestCode, SUCCESS, mResponse);
                    else
                        toView.updateView(requestCode, ERROR, mResponse);
                } catch (Exception e) {
                    mResponse = new Response<>();
                    mResponse.setErrorCode(-1);
                    toView.updateView(requestCode, ERROR, mResponse);
                    e.printStackTrace();
                }
            }

            @Override
            public void onFailure(Call<Response<BasicUser>> call, Throwable t) {
                Response<BasicUser> numberValidationResponse = new Response<BasicUser>();
                numberValidationResponse.setErrorCode(-1);
                toView.updateView(requestCode, ERROR, numberValidationResponse);
            }
        });
    }

    // ... and dozens of such methods
}
So in my other classes everywhere in my application, I simply instantiate the class and call the method that I want:
APIMethods api = new APIMethods(this);
api.registerAnonymous(Utils.getAndroidId(this), BuildConfig.FLAVOR, STATE_REGISTER_ANONYMOUS);
My question is: how expensive is this object (api)? Note that in each class, only a few methods of the object are called.
The object is not expensive at all.
An object contains a pointer to the object's class, and the methods are stored with the class. Essentially, the methods are all shared. An object of a class with no methods and an object of a class with 10000 methods are the same size (assuming everything else is equal).
The situation would be different if you had 100 fields instead of 100 methods.
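If you want to see that empirically, the OpenJDK JOL tool can print an instance's layout; a hypothetical check (assuming the org.openjdk.jol:jol-core dependency) would report the same instance size for both classes:

import org.openjdk.jol.info.ClassLayout;

class NoMethods { int x; }
class ManyMethods { int x; void m1() {} void m2() {} /* ...and hundreds more */ }

public class SizeDemo {
    public static void main(String[] args) {
        // Both print the same instance size: methods live in the class, not in the object.
        System.out.println(ClassLayout.parseInstance(new NoMethods()).toPrintable());
        System.out.println(ClassLayout.parseInstance(new ManyMethods()).toPrintable());
    }
}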
You may want to think about whether having hundreds of methods in a single class is a good idea. Is the code easy to understand and maintain? Is this an example of the "God object" anti-pattern? https://en.m.wikipedia.org/wiki/God_object
This seems like a classic example of the XY problem. Your actual problem is how to make the code readable, but you're actually asking about whether a class with hundreds of methods is expensive.
It being expensive is the least of your concerns - you should be more worried about maintenance. There's no reason at all that any class should ever be that large, especially if you have a lot of independent methods and each class is only calling a few of them. This will make the class very hard to understand - having them all in one place will not improve the situation.
Some of the comments have already pointed this out, but you should, at a minimum, break this up topically.
Even better, refactor this to the Strategy pattern and use a Factory to pick which one to use. That will meet your goal of ease of use while avoiding the problem of having hundreds of unrelated methods in one place.
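As a rough illustration of that direction (the names here are hypothetical and the Retrofit plumbing is elided), each endpoint becomes its own small strategy and a factory hands back the right one:

public interface ApiCall {
    void execute(ToView toView, int requestCode);
}

public class RegisterAnonymousCall implements ApiCall {

    private final String deviceId;
    private final String installRef;

    public RegisterAnonymousCall(String deviceId, String installRef) {
        this.deviceId = deviceId;
        this.installRef = installRef;
    }

    @Override
    public void execute(ToView toView, int requestCode) {
        // enqueue the Retrofit call here, exactly as registerAnonymous() does today
    }
}

public final class ApiCallFactory {

    public static ApiCall registerAnonymous(String deviceId, String installRef) {
        return new RegisterAnonymousCall(deviceId, installRef);
    }

    // ...one small factory method per endpoint, grouped by topic
}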
Try to write a cohesive class: keep only the methods that are relevant to the class and that serve its purpose.
The link below describes the importance of cohesion for a class:
https://www.decodejava.com/coupling-cohesion-java.htm

Avoid construction of new objects if the class can operate stateless

(This question was closed on code review so I think I should ask here)
Let's say I have a factory like this (it's from an interview):
public class ControllersFactoryImpl implements ControllersFactory {

    private final SessionKeeper sessionKeeper;
    private final ScoreKeeper scoreKeeper;

    public ControllersFactoryImpl(final SessionKeeper sessionKeeper, final ScoreKeeper scoreKeeper) {
        this.sessionKeeper = sessionKeeper;
        this.scoreKeeper = scoreKeeper;
    }

    @Override
    public Controller makeLoginController(final int userId) {
        return new LoginController(userId, sessionKeeper);
    }

    @Override
    public Controller makePostUserScoreController(final int levelId, final String session, final int score) {
        return new AddScoreController(levelId, session, score, sessionKeeper, scoreKeeper);
    }

    @Override
    public Controller makeHighScoreController(final int levelId) {
        return new HighScoreController(levelId, scoreKeeper);
    }
}
Since one of the requirements was to handle several calls at a time (like millions), they told me this solution could be improved: because I'm always calling new, we get a huge spawning of new objects that each do a single stateless operation, and the garbage collector could run into problems trying to clean them up.
Controller is an interface with a single method, execute().
Avoiding the use of a constructor puzzles me, because the only alternative I can think of is to give the execute method a varargs argument, and I don't really like that solution because the code is not very readable that way.
Do you have any alternatives?
This is the code for the controller:
public interface Controller {
    String execute();
}
And this is where the controller is used:
Controller controller = null;
try {
    if (exchange.isGet()) {
        final Matcher mLogin = loginPattern.matcher(path);
        if (mLogin.matches()) {
            controller = factory.makeLoginController(Integer.parseInt(mLogin.group(1)));
            contentType = TEXT_PLAIN;
        }
        Matcher mHighScore = highScorePattern.matcher(path);
        if (mHighScore.matches()) {
            controller = factory.makeHighScoreController((Integer.parseInt(mHighScore.group(1))));
            contentType = TEXT_CSV;
            exchange.setContentDisposition("attachment; fileName=data.csv");
        }
    } else if (exchange.isPost()) {
        final Matcher mScore = userScorePattern.matcher(path);
        if (mScore.matches()) {
            final Matcher mSession = sessionKeyPattern.matcher(httpExchange.getRequestURI().getQuery());
            if (mSession.matches()) {
                final Scanner s = new Scanner(httpExchange.getRequestBody());
                final int score = Integer.parseInt(s.hasNext() ? s.next() : "0");
                controller = factory.makePostUserScoreController(Integer.parseInt(mScore.group(1)), mSession.group(1), score);
                contentType = TEXT_PLAIN;
            }
        }
    }
    if (controller != null) {
        exchange.sendOk();
        buildResponse(exchange, controller, contentType);
    } else exchange.sendNotFound();
} catch (ExpiredSessionException e) {
    exchange.sendUnauthorized();
    exchange.setContentType(TEXT_PLAIN);
    exchange.setContent("Session Expired");
} catch (Exception e) {
    log(e.getMessage());
    httpExchange.sendResponseHeaders(500, 0);
} finally {
    httpExchange.getResponseBody().close();
}
Disclaimer: I'm aware of the if-else situation, but with the time available I didn't get to refactor that part.
It's possible to change the code the way you want.
private void buildResponse(Exchange exchange, Controller controller, String contentType) throws IOException {
    exchange.setContentType(contentType);
    exchange.setContent(controller.execute());
}
since one of the requirements was to handle several calls at a time (like millions) they told me that this solution could be improved because in this way we had a huge spawning of new objects (since I'm always calling new)
This sounds like a very very premature optimization. Does the program do any real work, like reading a file or iterating something? If so, then many bigger objects get created and caring about the controller creation is ridiculous.
Anyway, there's a Scanner allocated.
Your controller is not really stateless, it's immutable at best. Its state consists e.g. of levelId, session, score, sessionKeeper, scoreKeeper.
execute method a var-args argument
This means creating an array, which is about the same overhead you wanted to avoid.
Anyway, it looks like the controller currently just complicates the design and you might be better off not using it. However, as the program grows, you may see that using a controller is a good idea as it nicely separates different actions.
I'd just try it out as is. Get millions of requests, determine the bottleneck, and redesign it in case of problems. Till you run into performance problems, keep your design as clean as possible.
Clean design means flexible design and that's the best starting point for optimizations. Code perfectly optimized for imaginary problems is a non-maintainable mess, getting slow in face of real problems and hopeless to improve.
If you really had to eliminate the controller creation, then you can't store any information in them. So you could create an
enum Controller {
    LOGIN {
        ...
    },
    POST_USERS_SCORE {
        ...
    },
    HIGH_SCORE {
        ...
    };

    abstract String execute(int levelId, String session, int score);
}
where each implementation would ignore the arguments it doesn't need. This is a bit messy, but not as messy as mutable design could get. With mutable controllers you could pool and recycle them, but this is rarely a good idea.

Disabling an if-Condition for one static method call by setting a static field

We have a class in our project, let's call it AttributeUpdater, that handles copying values from one entity to another. Its core method traverses the attributes of an entity and copies them, as specified, into a second one. During that loop the AttributeUpdater collects all reports, which contain information about which values were overwritten during copying, into a list for eventual logging purposes. This list is cleared if the entity whose values were overwritten was never persisted to the database, because in that case you would only be overwriting default values, and logging that is deemed redundant. In pseudo Java code:
public class AttributeUpdater {

    public static CopyResult updateAttributes(Entity source, Entity target, String[] attributes) {
        List<CopyReport> reports = new ArrayList<CopyReport>();
        for (String attribute : attributes) {
            reports.add(copy(source, target, attribute));
        }
        if (target.isNotPersisted()) {
            reports.clear();
        }
        return new CopyResult(reports);
    }
}
Now someone had the epiphany that there is one case in which the reports actually matter even if the entity has not been persisted yet. This would not be a big deal if I could just add another parameter to the method signature, but that is out of the question due to the actual structure of the class and the amount of refactoring it would require. Since the method is static, the only other solution I came up with is adding a flag as a static field and setting it just before the call:
public class AttributeUpdater {

    public static final ThreadLocal<Boolean> isDeletionEnabled = new ThreadLocal<Boolean>() {
        @Override
        protected Boolean initialValue() {
            return Boolean.TRUE;
        }
    };

    public static Boolean getDeletionEnabled() { return isDeletionEnabled.get(); }

    public static void setDeletionEnabled(Boolean b) { isDeletionEnabled.set(b); }

    public static CopyResult updateAttributes(Entity source, Entity target, String[] attributes) {
        List<CopyReport> reports = new ArrayList<CopyReport>();
        for (String attribute : attributes) {
            reports.add(copy(source, target, attribute));
        }
        if (isDeletionEnabled.get() && target.isNotPersisted()) {
            reports.clear();
        }
        return new CopyResult(reports);
    }
}
ThreadLocal is a container used for thread safety. This solution, while it does the job, has one major drawback, at least for me: for all the other methods that assume the reports are deleted, there is now no way of guaranteeing that those reports will be deleted as expected. Again, refactoring is not an option. So I came up with this:
public class AttributeUpdater {

    private static final ThreadLocal<Boolean> isDeletionEnabled = new ThreadLocal<Boolean>() {
        @Override
        protected Boolean initialValue() {
            return Boolean.TRUE;
        }
    };

    public static Boolean getDeletionEnabled() { return isDeletionEnabled.get(); }

    public static void disableDeletionForNextCall() { isDeletionEnabled.set(Boolean.FALSE); }

    public static CopyResult updateAttributes(Entity source, Entity target, String[] attributes) {
        List<CopyReport> reports = new ArrayList<CopyReport>();
        for (String attribute : attributes) {
            reports.add(copy(source, target, attribute));
        }
        if (isDeletionEnabled.get() && target.isNotPersisted()) {
            reports.clear();
        }
        isDeletionEnabled.set(Boolean.TRUE);
        return new CopyResult(reports);
    }
}
This way I can guarantee that for old code the method will always behave as it did before the change. The downside, especially for nested entities, is that I will be accessing the ThreadLocal container a lot: iterating over one of those means calling disableDeletionForNextCall() for each nested element. Also, since the method is called a lot overall, there are valid performance concerns.
TL;DR: Look at pseudo Java source code. First one is old code, second and third are different attempts to allow deletion disabling. Parameters cannot be added to method signature.
Is there a possibility to determine which solution is better or is this merely a philosophical issue? Or is there even a better solution to this problem?
The obvious way to decide which solution is better in terms of performance would be benchmarking this. As both solutions access the thread-local variable at least for reading, I doubt that they would differ too much. You could perhaps combine them like this:
if (!isDeletionEnabled.get())
    isDeletionEnabled.set(Boolean.TRUE);
else if (target.isNotPersisted())
    reports.clear();
In this case, you get the benefit of the second solution (guaranteed resetting of the flag) without unnecessary writes.
I doubt there will be much practical difference. With a bit of luck, the HotSpot JVM will compile the thread-local access into some nice native code that works without too much of a performance penalty, though I have no actual experience there.
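If you do want numbers, a rough micro-benchmark sketch (assuming the JMH library; names and variants are illustrative) could compare the two flavours of flag handling:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class FlagBenchmark {

    private static final ThreadLocal<Boolean> FLAG =
            ThreadLocal.withInitial(() -> Boolean.TRUE);

    @Benchmark
    public boolean readOnly() {
        // variant 1: only read the flag, reset it elsewhere when needed
        return FLAG.get();
    }

    @Benchmark
    public boolean readAndReset() {
        // variant 2: unconditionally reset on every call
        boolean value = FLAG.get();
        FLAG.set(Boolean.TRUE);
        return value;
    }
}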

Thread-safe cache of one object in java

Let's say we have a CountryList object in our application that should return the list of countries. Loading the countries is a heavy operation, so the list should be cached.
Additional requirements:
CountryList should be thread-safe
CountryList should load lazy (only on demand)
CountryList should support the invalidation of the cache
CountryList should be optimized considering that the cache will be invalidated very rarely
I came up with the following solution:
public class CountryList {

    private static final Object ONE = new Integer(1);

    // MapMaker is from the Google Collections Library
    private Map<Object, List<String>> cache = new MapMaker()
        .initialCapacity(1)
        .makeComputingMap(
            new Function<Object, List<String>>() {
                @Override
                public List<String> apply(Object from) {
                    return loadCountryList();
                }
            });

    private List<String> loadCountryList() {
        // HEAVY OPERATION TO LOAD DATA
    }

    public List<String> list() {
        return cache.get(ONE);
    }

    public void invalidateCache() {
        cache.remove(ONE);
    }
}
What do you think about it? Do you see anything bad about it? Is there another way to do it? How can I make it better? Should I look for a totally different solution in this case?
Thanks.
Google Collections actually supplies just the thing for this sort of thing: Supplier.
Your code would be something like:
private Supplier<List<String>> supplier = new Supplier<List<String>>() {
    public List<String> get() {
        return loadCountryList();
    }
};

// volatile reference so that changes are published correctly, see invalidate()
private volatile Supplier<List<String>> memorized = Suppliers.memoize(supplier);

public List<String> list() {
    return memorized.get();
}

public void invalidate() {
    memorized = Suppliers.memoize(supplier);
}
Thanks to you all, especially to user "gid", who gave the idea.
My target was to optimize the performance of the get() operation, considering that the invalidate() operation will be called very rarely.
I wrote a testing class that starts 16 threads, each calling the get() operation one million times. With this class I profiled some implementations on my 2-core machine.
Testing results:

Implementation            Time
no synchronisation        0.6 sec
normal synchronisation    7.5 sec
with MapMaker             26.3 sec
with Suppliers.memoize    8.2 sec
with optimized memoize    1.5 sec
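The testing class itself was not posted; a rough sketch of a harness matching that description (ICountryList and the implementations under test are the ones from this question) could look like this:

import java.util.concurrent.CountDownLatch;

public class CountryListBenchmark {

    private static final int THREADS = 16;
    private static final int CALLS_PER_THREAD = 1000000;

    public static void main(String[] args) throws InterruptedException {
        final ICountryList countries = new BetterMemoizeCountryList(); // swap in the variant to measure
        final CountDownLatch done = new CountDownLatch(THREADS);
        long start = System.currentTimeMillis();
        for (int i = 0; i < THREADS; i++) {
            new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < CALLS_PER_THREAD; j++) {
                        countries.list();
                    }
                    done.countDown();
                }
            }).start();
        }
        done.await();
        System.out.println("Took " + (System.currentTimeMillis() - start) + " ms");
    }
}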
1) "No synchronisation" is not thread-safe, but gives us the best performance that we can compare to.
@Override
public List<String> list() {
    if (cache == null) {
        cache = loadCountryList();
    }
    return cache;
}

@Override
public void invalidateCache() {
    cache = null;
}
2) "Normal synchronisation" - pretty good performace, standard no-brainer implementation
@Override
public synchronized List<String> list() {
    if (cache == null) {
        cache = loadCountryList();
    }
    return cache;
}

@Override
public synchronized void invalidateCache() {
    cache = null;
}
3) "with MapMaker" - very poor performance.
See my question at the top for the code.
4) "with Suppliers.memoize" - good performance. But as the performance the same "Normal synchronisation" we need to optimize it or just use the "Normal synchronisation".
See the answer of the user "gid" for code.
5) "with optimized memoize" - the performnce comparable to "no sync"-implementation, but thread-safe one. This is the one we need.
The cache-class itself:
(The Supplier interfaces used here is from Google Collections Library and it has just one method get(). see http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/base/Supplier.html)
public class LazyCache<T> implements Supplier<T> {

    private final Supplier<T> supplier;
    private volatile Supplier<T> cache;

    public LazyCache(Supplier<T> supplier) {
        this.supplier = supplier;
        reset();
    }

    private void reset() {
        cache = new MemoizingSupplier<T>(supplier);
    }

    @Override
    public T get() {
        return cache.get();
    }

    public void invalidate() {
        reset();
    }

    private static class MemoizingSupplier<T> implements Supplier<T> {

        final Supplier<T> delegate;
        volatile T value;

        MemoizingSupplier(Supplier<T> delegate) {
            this.delegate = delegate;
        }

        @Override
        public T get() {
            if (value == null) {
                synchronized (this) {
                    if (value == null) {
                        value = delegate.get();
                    }
                }
            }
            return value;
        }
    }
}
Example use:
public class BetterMemoizeCountryList implements ICountryList {

    LazyCache<List<String>> cache = new LazyCache<List<String>>(new Supplier<List<String>>() {
        @Override
        public List<String> get() {
            return loadCountryList();
        }
    });

    @Override
    public List<String> list() {
        return cache.get();
    }

    @Override
    public void invalidateCache() {
        cache.invalidate();
    }

    private List<String> loadCountryList() {
        // this should normally load the full list from the database,
        // but just for this example we mock it with:
        return Arrays.asList("Germany", "Russia", "China");
    }
}
Whenever I need to cache something, I like to use the Proxy pattern. Doing it with this pattern offers separation of concerns: your original object can be concerned with lazy loading, and your proxy (or guardian) object can be responsible for validating the cache.
In detail:
Define a CountryList class which is thread-safe, preferably using synchronization blocks or other semaphore locks.
Extract this class's interface into a CountryQueryable interface.
Define another object, CountryListProxy, that implements CountryQueryable.
Only allow the CountryListProxy to be instantiated, and only allow it to be referenced through its interface.
From here, you can insert your cache invalidation strategy into the proxy object. Save the time of the last load, and upon the next request for the data, compare the current time to the cache time. Define a tolerance level where, if too much time has passed, the data is reloaded.
As far as lazy loading goes, refer here.
Now for some good down-home sample code:
public interface CountryQueryable {

    public void operationA();

    public String operationB();
}

public class CountryList implements CountryQueryable {

    private boolean loaded;

    public CountryList() {
        loaded = false;
    }

    // This particular operation might be able to function without
    // the extra loading.
    @Override
    public void operationA() {
        // Do whatever.
    }

    // This operation may need to load the extra stuff.
    @Override
    public String operationB() {
        if (!loaded) {
            load();
            loaded = true;
        }
        // Do whatever.
        return whatever;
    }

    private void load() {
        // Do the loading of the lazy load here.
    }
}

public class CountryListProxy implements CountryQueryable {

    // In accordance with the Proxy pattern, we hide the target
    // instance inside of our Proxy instance.
    private CountryQueryable actualList;

    // Keep track of the last time we cached.
    private long lastCached;

    // Define a tolerance time, 2000 milliseconds, before refreshing
    // the cache.
    private static final long TOLERANCE = 2000L;

    public CountryListProxy() {
        // You might even retrieve this object from a Registry.
        actualList = new CountryList();
        // Initialize it to something stupid.
        lastCached = Long.MIN_VALUE;
    }

    @Override
    public synchronized void operationA() {
        if ((System.currentTimeMillis() - lastCached) > TOLERANCE) {
            // Refresh the cache.
            lastCached = System.currentTimeMillis();
        } else {
            // Cache is okay.
        }
    }

    @Override
    public synchronized String operationB() {
        if ((System.currentTimeMillis() - lastCached) > TOLERANCE) {
            // Refresh the cache.
            lastCached = System.currentTimeMillis();
        } else {
            // Cache is okay.
        }
        return whatever;
    }
}

public class Client {

    public static void main(String[] args) {
        CountryQueryable queryable = new CountryListProxy();
        // Do your thing.
    }
}
Your needs seem pretty simple here. The use of MapMaker makes the implementation more complicated than it has to be. The whole double-checked locking idiom is tricky to get right, and only works on 1.5+. And to be honest, it's breaking one of the most important rules of programming:
Premature optimization is the root of
all evil.
The double-checked locking idiom tries to avoid the cost of synchronization in the case where the cache is already loaded. But is that overhead really causing problems? Is it worth the cost of more complex code? I say assume it is not until profiling tells you otherwise.
Here's a very simple solution that requires no 3rd party code (ignoring the JCIP annotation). It does make the assumption that an empty list means the cache hasn't been loaded yet. It also prevents the contents of the country list from escaping to client code that could potentially modify the returned list. If this is not a concern for you, you could remove the call to Collections.unmodifiableList().
public class CountryList {

    @GuardedBy("cache")
    private final List<String> cache = new ArrayList<String>();

    private List<String> loadCountryList() {
        // HEAVY OPERATION TO LOAD DATA
    }

    public List<String> list() {
        synchronized (cache) {
            if (cache.isEmpty()) {
                cache.addAll(loadCountryList());
            }
            return Collections.unmodifiableList(cache);
        }
    }

    public void invalidateCache() {
        synchronized (cache) {
            cache.clear();
        }
    }
}
I'm not sure what the map is for. When I need a lazy, cached object, I usually do it like this:
public class CountryList {

    private static List<Country> countryList;

    public static synchronized List<Country> get() {
        if (countryList == null)
            countryList = load();
        return countryList;
    }

    private static List<Country> load() {
        // ... whatever ...
    }

    public static synchronized void forget() {
        countryList = null;
    }
}
I think this is similar to what you're doing but a little simpler. If you have a need for the map and the ONE that you've simplified away for the question, okay.
If you want it thread-safe, you should synchronize the get and the forget.
What do you think about it? Do you see something bad about it?
Bleah - you are using a complex data structure, MapMaker, with several features (map access, concurrency-friendly access, deferred construction of values, etc) because of a single feature you are after (deferred creation of a single construction-expensive object).
While reusing code is a good goal, this approach adds additional overhead and complexity. In addition, it misleads future maintainers when they see a map data structure there into thinking that there's a map of keys/values in there when there is really only 1 thing (list of countries). Simplicity, readability, and clarity are key to future maintainability.
Is there other way to do it? How can i make it better? Should i look for totally another solution in this cases?
Seems like you are after lazy-loading. Look at solutions to other SO lazy-loading questions. For example, this one covers the classic double-check approach (make sure you are using Java 1.5 or later):
How to solve the "Double-Checked Locking is Broken" Declaration in Java?
Rather than just simply repeat the solution code here, I think it is useful to read the discussion about lazy loading via double-check there to grow your knowledge base. (sorry if that comes off as pompous - just trying teach to fish rather than feed blah blah blah ...)
There is a library out there (from Atlassian) with a util class called LazyReference. LazyReference is a reference to an object that can be lazily created (on first get). It is guaranteed thread-safe, and the init is also guaranteed to only occur once: if two threads call get() at the same time, one thread will compute the value and the other will block and wait.
see a sample code:
final LazyReference<MyObject> ref = new LazyReference<MyObject>() {
    protected MyObject create() throws Exception {
        // Do some useful object construction here
        return new MyObject();
    }
};

// thread1
MyObject myObject = ref.get();

// thread2
MyObject myObject = ref.get();
This looks ok to me (I assume MapMaker is from google collections?) Ideally you wouldn't need to use a Map because you don't really have keys but as the implementation is hidden from any callers I don't see this as a big deal.
This is way too simple to need the ComputingMap stuff. You only need a dead simple implementation where all methods are synchronized, and you should be fine. This will obviously block the first thread hitting it (getting it), and any other thread hitting it while the first thread loads the cache (and the same again if anyone calls the invalidateCache thing; there you also have to decide whether invalidateCache should load the cache anew or just null it out, letting the first subsequent get block), but then all threads should go through nicely.
Use the Initialization on demand holder idiom
public class CountryList {

    private CountryList() {}

    private static class CountryListHolder {
        static final List<Country> INSTANCE = new ArrayList<Country>();
    }

    public static List<Country> getInstance() {
        return CountryListHolder.INSTANCE;
    }

    ...
}
Follow up to Mike's solution above. My comment didn't format as expected... :(
Watch out for synchronization issues in operationB, especially since load() is slow:
public String operationB() {
    if (!loaded) {
        load();
        loaded = true;
    }
    // Do whatever.
    return whatever;
}
You could fix it this way:
public String operationB() {
    synchronized (this) {   // you cannot lock on the boolean field itself; use a shared lock
        if (!loaded) {
            load();
            loaded = true;
        }
    }
    // Do whatever.
    return whatever;
}
Make sure you ALWAYS synchronize on every access to the loaded variable.
