Why is the DAO method so slow in ORMLite?

I have a method that looks like this:
public Dao<ModelStore, Integer> getDaoStore() throws SQLException {
    return BaseDaoImpl.createDao(getConnectionSource(), ModelStore.class);
}
When I call getDaoStore it takes quite a long time. In my logs I can see that the GC runs after every call to it, so I'm guessing there's a lot going on in this call.
Is there a way to speed this up?

A deep examination of Android-land has revealed that because of a grossly inefficient Method.equals() implementation, annotations under Android are very slow and extremely GC-intensive. We added table configuration files in version 4.26 that bypass this and make ORMLite start much, much faster. See this question and this thread on the mailing list.
We continue to improve annotation speeds. See also: ORMLite poor performance on Android?
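For reference, the table-config approach looks roughly like the following sketch. The class name follows the usual ORMLite convention and the details may differ between versions, so treat it as an illustration rather than the exact recipe:
// Build-time utility, run on a desktop JVM rather than the device: it writes
// a configuration file that ORMLite reads at startup instead of scanning
// annotations reflectively.
public class DatabaseConfigUtil extends OrmLiteConfigUtil {
    public static void main(String[] args) throws Exception {
        // writes res/raw/ormlite_config.txt for the annotated model classes
        writeConfigFile("ormlite_config.txt");
    }
}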
DAO creation is a relatively expensive process. ORMLite creates a data representation of both the class and the fields in the class and builds a number of other utility classes that help with the various DAO functionality. You should make sure that you call the createDao method only once per DAO class, not once per use. I assume this is under Android @Pzanno?
In 4.16 we added a DaoManager whose job it is to cache the Dao classes and this was improved in version 4.20. You should then always use it to create your Daos. Something like the following code is recommended:
private Dao<ModelStore, Integer> modelStoreDao = null;
...
public Dao<ModelStore, Integer> getDaoStore() throws SQLException {
    if (modelStoreDao == null) {
        modelStoreDao = DaoManager.createDao(getConnectionSource(),
                ModelStore.class);
    }
    return modelStoreDao;
}
Hope this helps. A memory audit of ORMLite is probably also in order; it's been a while since I looked at its consumption.

Related

Time taken to execute all methods in a method stack

A lot of times while writing applications, I wish to profile and measure the time taken by all methods in a stack trace. What I mean is, say:
Method A --> Method B --> Method C ...
Method A internally calls B, which might call another method. I wish to know the time taken to execute inside each method. This way, in a web application, I can know precisely what percentage of time is being consumed by which part of the code.
To explain further: most of the time in a Spring application, I write an aspect that collects information for every method call of a class and finally gives me a summary. But I hate doing this; it's repetitive and verbose, and I have to keep changing the regex to accommodate different classes. Instead I would like this:
@Monitor
public void generateReport(int id) {
    ...
}
Adding the annotation to a method would trigger the instrumentation API to collect statistics on the time taken by this method and any method it subsequently calls. When this method exits, the collection stops. I think this should be relatively easy to implement.
The question is: are there any reasonable alternatives that let me do this for general Java code? Or any quick way of collecting this information? Or even a Spring plugin for Spring applications?
PS: Exactly like XRebel, which generates beautiful summaries of the time taken by the security, DAO, service, etc. parts of the code. But it costs a bomb. If you can afford it, you should definitely buy it.
You want to write a Java agent. Such an agent allows you to redefine a class when it is loaded. This way, you can implement an aspect without polluting your source code. I have written a library, Byte Buddy, which makes this fairly easy.
For your monitor example, you could use Byte Buddy as follows:
new AgentBuilder.Default()
    .rebase(declaresMethod(isAnnotatedWith(Monitor.class)))
    .transform((builder, type) -> builder
        .method(isAnnotatedWith(Monitor.class))
        .intercept(MethodDelegation.to(MonitorInterceptor.class)));
class MonitorInterceptor {

    @RuntimeType
    public static Object intercept(@Origin String method,
                                   @SuperCall Callable<?> zuper) throws Exception {
        long start = System.currentTimeMillis();
        try {
            return zuper.call();
        } finally {
            System.out.println(method + " took "
                    + (System.currentTimeMillis() - start));
        }
    }
}
The above agent, once built, can then be installed on an instance of the Instrumentation interface, which is provided to any Java agent.
As an advantage over using Spring, the above agent will work for any Java instance, not only for Spring beans.
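For completeness, a minimal sketch of that wiring, matching the snippet above. The agent class name here is hypothetical, and the matcher/transformer signatures vary between Byte Buddy versions, so check them against the version you use:
import static net.bytebuddy.matcher.ElementMatchers.declaresMethod;
import static net.bytebuddy.matcher.ElementMatchers.isAnnotatedWith;

import java.lang.instrument.Instrumentation;
import net.bytebuddy.agent.builder.AgentBuilder;
import net.bytebuddy.implementation.MethodDelegation;

public class MonitorAgent {
    // invoked by the JVM when started with -javaagent:monitor-agent.jar
    public static void premain(String arguments, Instrumentation instrumentation) {
        new AgentBuilder.Default()
            .rebase(declaresMethod(isAnnotatedWith(Monitor.class)))
            .transform((builder, type) -> builder
                .method(isAnnotatedWith(Monitor.class))
                .intercept(MethodDelegation.to(MonitorInterceptor.class)))
            .installOn(instrumentation);
    }
}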
I don't know if there's already a library doing this, nor can I give you ready-to-use code, but I can describe how you can implement it on your own.
First of all, I assume it's no problem to include AspectJ in your project. Then create an annotation, e.g. @Monitor, which acts as a marker for the time measurement of whatever you like.
Then create a simple data structure holding the information you want to track. An example could be the following:
public class OperationMonitoring {
    boolean active = false;
    List<MethodExecution> methodExecutions = new ArrayList<>();
}

public class MethodExecution {
    MethodExecution invoker;
    List<MethodExecution> invocations = new ArrayList<>();
    long startTime;
    long endTime;
}
Then create an @Around advice for all methods. On execution, check whether the called method is annotated with your monitoring annotation. If it is, start monitoring each method execution in this thread. A simple example could look like:
@Aspect
public class MonitoringAspect {

    private ThreadLocal<OperationMonitoring> operationMonitorings = new ThreadLocal<>();

    @Around("execution(* *.*(..))")
    public Object monitoring(ProceedingJoinPoint pjp) throws Throwable {
        Method method = extractMethod(pjp);
        if (method == null) {
            return pjp.proceed();
        }
        OperationMonitoring monitoring = null;
        if (method.isAnnotationPresent(Monitoring.class)) {
            monitoring = operationMonitorings.get();
            if (monitoring != null) {
                if (!monitoring.active) {
                    monitoring.active = true;
                }
            } else {
                // Create a new OperationMonitoring object and set it
            }
        }
        if (monitoring == null) {
            // this method is not annotated, but is the tracking already active?
            monitoring = operationMonitorings.get();
        }
        Object result;
        if (monitoring != null && monitoring.active) {
            // do monitoring stuff (record start/end times) around the invocation
            result = pjp.proceed();
        } else {
            // invoke the called method without monitoring
            result = pjp.proceed();
        }
        // Stop the monitoring by setting monitoring.active = false if this method
        // was annotated with @Monitoring (and it started the monitoring).
        return result;
    }

    private Method extractMethod(JoinPoint joinPoint) {
        if (joinPoint.getKind().equals(JoinPoint.METHOD_EXECUTION)
                && joinPoint.getSignature() instanceof MethodSignature) {
            return ((MethodSignature) joinPoint.getSignature()).getMethod();
        }
        return null;
    }
}
The code above is just a how-to. I would also restructure it, but I wrote it in a text field, so please be aware of architectural flaws. As mentioned in the comment at the end, this solution does not support multiple annotated methods along the call path, but that would be easy to add.
A limitation of this approach is that it fails when you start additional threads during a tracked path. Adding support for threads started inside a monitored thread is not that easy. That's also the reason why IoC frameworks have their own features for handling threads, so they are able to track this.
I hope you understand the general concept of this, if not feel free to ask further questions.
This is the exact reason why I built the open source tool stagemonitor, which uses Byte Buddy to insert profiling code. If you want to monitor a web application, you don't have to alter or annotate your code. If you have a standalone application, there is a @MonitorRequests annotation you can use.
You say you want to know the percentage of time taken within each routine on the stack.
I assume you mean inclusive time.
I also assume you mean wall-clock time, on the theory that if one of those lower-level callees happens to do some I/O, locking, etc., you don't want to be blind to that.
So a stack-sampling profiler that samples on wall-clock time will be getting the right kind of information.
The percentage time that A takes is the percentage of samples containing A, same for B, etc.
To get the percentage of A's time used by B, it is the percentage of samples containing A that happen to have B at the next level below.
The information is all in the stack samples, but it may be hard to get the profiler to extract just the information you want.
You also say you want precise percentage.
That means you also need a large number of stack samples.
For example, if you want to shrink the uncertainty of your measurements by a factor of 10, you need 100 times as many samples.
In my experience finding performance problems, I am willing to tolerate an uncertainty of 10% or more, because my goal is to find big wastage, not to know with precision how bad it is.
So I take samples manually, and look at them manually.
In fact, if you look at the statistics, you only have to see something wasteful on as few as two samples to know it's bad, and the fewer samples you take before seeing it twice, the worse it is.
(Example: If the problem wastes 30% of time, it takes on average 2/30% = 6.67 samples to see it twice. If it wastes 90% of time, it only takes 2.2 samples, on average.)
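To spell out the arithmetic behind that example: waiting to see a problem twice, when it appears in each sample with probability p, is a negative binomial process, so the expected number of samples is

E[n] = \frac{2}{p}, \qquad p = 0.30 \Rightarrow E[n] \approx 6.7, \qquad p = 0.90 \Rightarrow E[n] \approx 2.2

which matches the figures above.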

Best practice in android class constructor

What is the correct way to initialise my class in Android? The class is called Compilation, and all its values live in the DB.
I can do the following:
1
public Compilation(int id)
{
    // get db singleton here and fill all values
    // however I feel this is bad OO because nobody knows I am doing this
}
2
public Compilation(int id, SQLiteDatabase db)
{
    // use the provided db to get the info
    // however now all calling classes will have to get the db for me
}
3
// get all compilations at once
SQLiteDatabase db = DatabaseHelper.getInstance().getReadableDatabase();
Cursor c = db.rawQuery("SELECT * FROM Compilation", null);
while (c.moveToNext())
{
    // get all params here
    Compilation comp = new Compilation(a, b, c, d, e);
}

public Compilation(a, b, c, d, e)
{
    // just assign all the values given to private vars
}
The problem I see with this is that the Compilation class is no longer self-contained; it needs another class to initialise it.
Which one is the proper way to do it?
The general rule in software design tells us that we should create classes that have minimal dependencies on other parts of the software system. That way we end up with classes that are more reusable.
The first alternative you have proposed is the worst one, because it creates a very tight dependency on one specific data provider (SQLite). Maintenance of such a class can be a nightmare (just imagine that the next release of Android comes with MySQL :) and you want to switch from SQLite).
The second one is somewhat better if you replace the constructor parameter's concrete class with an interface, creating what we call dependency injection (a sketch follows below). However, there are better ways of doing dependency injection on Android (check out, for example, Dagger).
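A minimal sketch of that interface-based variant; the interface name and its method are hypothetical:
// Compilation depends on an abstraction instead of SQLiteDatabase directly,
// so tests can supply a fake and the data provider can be swapped later.
public interface CompilationSource
{
    Cursor queryCompilationById(int id);
}

public class Compilation
{
    public Compilation(int id, CompilationSource source)
    {
        Cursor c = source.queryCompilationById(id);
        // read this Compilation's fields from the cursor
    }
}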
The third one seems to me the best fit, since you don't create any dependency. Perhaps to ease the creation of such classes (and to make the code a little bit more "enterprise"), you could create a factory class that creates the instances of the Compilation class (more about this here).
In the very end, however, this is not a question about Android best practices but about software design decisions, which highly depend on what you are trying to do!
All your options are correct, but I think a factory-based approach will work fine; I've used it on different occasions. I just wrote down a skeleton of such an alternative approach.
public class CompilationFactory
{
    // DB instance and/or cache implementation (HashMap based or via 3rd party lib)
    static
    {
        // DB init stuff here
        // if your app logic allows it you can also cache Compilation to avoid
        // reading the DB multiple times
    }

    public static Compilation compilationForId(int id)
    {
        // either read your Compilation from the DB or from the precomputed cache
    }
}
I wouldn't do it this way. I'd use an empty constructor, then use an Application (http://developer.android.com/reference/android/app/Application.html) and serve myself the DB from the Application so I don't have to keep instantiating it - roughly like the sketch below. Might be too advanced if you're doing a school project, but yeah..
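Something like this sketch; the Application subclass name is illustrative, and DatabaseHelper is the singleton from the question:
public class MyApplication extends Application {
    private SQLiteDatabase db;

    @Override
    public void onCreate() {
        super.onCreate();
        // open the database once for the whole process
        db = DatabaseHelper.getInstance().getReadableDatabase();
    }

    public SQLiteDatabase getDb() {
        return db;
    }
}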
This depends slightly on the implementation of your Database - if you're using content providers or not, etc.
All of the provided examples are "correct" in the sense that they will work. That said, number 3 is a red flag to me: without further code to clarify, you run the risk of calling getReadableDatabase more than once, which is unnecessary.
Beyond that point, it's difficult to know exactly what to recommend. There are fancy ways to do this, and they might be overkill depending on your project's nature.
I'm going to operate under the assumption that you have a class which manages Compilations. Keeping it simple in this case, something like the following:
public class CompilationManager {

    private ArrayList<Compilation> myCompilations = new ArrayList<Compilation>();
    SQLiteDatabase db;

    public CompilationManager() {
        db = DatabaseHelper.getInstance().getReadableDatabase();
    }

    public void loadCompilations() {
        Cursor c = db.rawQuery("SELECT * FROM Compilation", null);
        while (c.moveToNext()) {
            Compilation comp = new Compilation(c);
            myCompilations.add(comp);
        }
        c.close();
    }
}

public class Compilation {
    public Compilation(Cursor c) {
        // do the actual retrieval, setting of fields etc.
        getCompilationFromCursor(c);
    }
}

Someone breaks my sorting of a list - now which approach to choose: return unmodifiable List or a new List altogether?

I'm using lists on a JSF web server to, for example, access the data model from web pages. These lists are accessed from various other places as well, though (web services, tools).
There is a piece of code which gets broken by someone re-sorting the list I return. By "someone" I mean someone on my developer team - we are the only ones using this code. I have roughly 300 references to this function, and it could be performance-relevant to do the fix nicely:
The list can have anywhere from 1 to 10,000 entries, and commonly I will have maybe 10-100 such lists. In reality I will very often have around 20 lists with 8 entries each - so not such a big deal - but I can have more sometimes.
By the way, I am talking about a function something like this:
public List<MyObject> getMyObjectList() {
    if (this.myObjects == null) {
        myObjects = new ArrayList<MyObject>(myObjectsMap.values());
    }
    return myObjects;
}
Now I could of course do the return like this:
public List<MyObject> getMyObjectList() {
    if (this.myObjects == null) {
        myObjects = new ArrayList<MyObject>(myObjectsMap.values());
    }
    return Collections.unmodifiableList(myObjects);
}
But this will break things, eventually in several places across different projects/applications.
It would IMHO be cleanest to return an unmodifiable list, add Javadoc, and fix everything that breaks. But :-D this is work; I'd probably have to test roughly 10 applications.
On the other hand I could just return a new list, e.g.
public List<MyObject> getMyObjectList() {
    return new ArrayList<MyObject>(myObjectsMap.values());
}
This is little to no work - but what about the performance impact? Other than that, if someone deletes items from the list I returned, expecting to change the underlying data, the application will silently break.
So:
What is the performance issue? Is it an issue?
What would you do?
What would you do?
If I understand you correctly, this is a production library used in several applications. And, like it or not, the de-facto contract for getMyObjectList() is that the user can sort the list without getting an error or exception.
I would immediately change this method and return a defensive copy:
// good idea
public List<MyObject> getMyObjectList() {
    return new ArrayList<MyObject>(myObjectsMap.values());
}
You have now fixed the problem where someone is sorting your internal collection and you have not broken your contract. In fact you can even update the Javadoc and tell the user that they can do whatever they want with the copy.
This may or may not cause a performance problem. Remember, the objects in the collection are not being copied - they are still being shared. You are just creating a new array list and whatever internal objects it needs to keep track of the objects.
If it turns out that these copies are causing a performance problem, then you can consider enhancing your class to include a read-only cache of your internal collection. To access this you must give the method a new name - ex getMySharedObjectList and you can gradually update client code to use this new method as performance needs require.
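A sketch of that enhancement; the cache invalidation is the part you would have to wire up yourself wherever myObjectsMap changes:
// shared read-only view; set sharedObjectList = null on every change to myObjectsMap
private List<MyObject> sharedObjectList;

// defensive copy: callers may sort or modify freely
public List<MyObject> getMyObjectList() {
    return new ArrayList<MyObject>(myObjectsMap.values());
}

// shared unmodifiable view for performance-critical callers
public List<MyObject> getMySharedObjectList() {
    if (sharedObjectList == null) {
        sharedObjectList = Collections.unmodifiableList(
                new ArrayList<MyObject>(myObjectsMap.values()));
    }
    return sharedObjectList;
}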
But don't do it this way. I think this method is particularly bad:
// bad idea
public List<MyObject> getMyObjectList() {
    if (this.myObjects == null) {
        myObjects = new ArrayList<MyObject>(myObjectsMap.values());
    }
    return Collections.unmodifiableList(myObjects);
}
You have created a situation where it is very easy for myObjects to get out of sync with myObjectsMap. (What happens when an item is added to myObjectsMap after someone called getMyObjectList?) At the same time, you are creating a new unmodifiable wrapper every time someone calls the method, so you have given up whatever theoretical performance gains you had in the first place.
Anyway, good luck. Hope this helps.
If you can afford to test the applications, I'd go with the unmodifiableList. It will save you from other related issues in the future.

reliably forcing Guava map eviction to take place

EDIT: I've reorganized this question to reflect the new information that has since become available.
This question is based on the responses to a question by Viliam concerning Guava Maps' use of lazy eviction: Laziness of eviction in Guava's maps
Please read this question and its response first, but essentially the conclusion is that Guava maps do not asynchronously calculate and enforce eviction. Given the following map:
ConcurrentMap<String, MyObject> cache = new MapMaker()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .makeMap();
Once ten minutes has passed following access to an entry, it will still not be evicted until the map is "touched" again. Known ways to do this include the usual accessors - get() and put() and containsKey().
The first part of my question [solved]: what other calls cause the map to be "touched"? Specifically, does anyone know if size() falls into this category?
The reason for wondering this is that I've implemented a scheduled task to occasionally nudge the Guava map I'm using for caching, using this simple method:
public static void nudgeEviction() {
    cache.containsKey("");
}
However I'm also using cache.size() to programmatically report the number of objects contained in the map, as a way to confirm this strategy is working. But I haven't been able to see a difference from these reports, and now I'm wondering if size() also causes eviction to take place.
Answer: So Mark has pointed out that in release 9, eviction is invoked only by the get(), put(), and replace() methods, which would explain why I wasn't seeing an effect for containsKey(). This will apparently change with the next version of guava which is set for release soon, but unfortunately my project's release is set sooner.
This puts me in an interesting predicament. Normally I could still touch the map by calling get(""), but I'm actually using a computing map:
ConcurrentMap<String, MyObject> cache = new MapMaker()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .makeComputingMap(loadFunction);
where loadFunction loads the MyObject corresponding to the key from a database. It's starting to look like I have no easy way of forcing eviction until r10. But even being able to reliably force eviction is put into doubt by the second part of my question:
The second part of my question [solved]: In reaction to one of the responses to the linked question, does touching the map reliably evict all expired entries? In the linked answer, Niraj Tolia indicates otherwise, saying eviction is potentially only processed in batches, which would mean multiple calls to touch the map might be needed to ensure all expired objects were evicted. He did not elaborate, however this seems related to the map being split into segments based on concurrency level. Assuming I used r10, in which a containsKey("") does invoke eviction, would this then be for the entire map, or only for one of the segments?
Answer: maaartinus has addressed this part of the question:
Beware that containsKey and other reading methods only run postReadCleanup, which does nothing except on every 64th invocation (see DRAIN_THRESHOLD). Moreover, it looks like all cleanup methods work with a single Segment only.
So it looks like calling containsKey("") wouldn't be a viable fix, even in r10. This reduces my question to the title: How can I reliably force eviction to occur?
Note: Part of the reason my web app is noticeably affected by this issue is that when I implemented caching I decided to use multiple maps - one for each class of my data objects. So with this issue there is the possibility that one area of code is executed, causing a bunch of Foo objects to be cached, and then the Foo cache isn't touched again for a long time so it doesn't evict anything. Meanwhile Bar and Baz objects are being cached from other areas of code, and memory is being eaten. I'm setting a maximum size on these maps, but this is a flimsy safeguard at best (I'm assuming its effect is immediate - still need to confirm this).
UPDATE 1: Thanks to Darren for linking the relevant issues - they now have my votes. So it looks like a resolution is in the pipeline, but seems unlikely to be in r10. In the meantime, my question remains.
UPDATE 2: At this point I'm just waiting for a Guava team member to give feedback on the hack maaartinus and I put together (see answers below).
LAST UPDATE: feedback received!
I just added the method Cache.cleanUp() to Guava. Once you migrate from MapMaker to CacheBuilder you can use that to force eviction.
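After migrating, that could look roughly like the following sketch. The loader body stands in for the question's loadFunction, and in later Guava releases the loading variant of the builder returns a LoadingCache, so adjust to the release you are on:
LoadingCache<String, MyObject> cache = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(new CacheLoader<String, MyObject>() {
            @Override
            public MyObject load(String key) throws Exception {
                return loadFunction.apply(key); // the existing computing function
            }
        });

// in the scheduled task, instead of nudging with containsKey(""):
cache.cleanUp();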
I was wondering about the same issue you described in the first part of your question. From what I can tell from looking at the source code for Guava's CustomConcurrentHashMap (release 9), it appears that entries are evicted on the get(), put(), and replace() methods. The containsKey() method does not appear to invoke eviction. I'm not 100% sure because I took a quick pass at the code.
Update:
I also found a more recent version of CustomConcurrentHashMap in Guava's git repository, and it looks like containsKey() has been updated to invoke eviction.
Both release 9 and the latest version I just found do not invoke eviction when size() is called.
Update 2:
I recently noticed that Guava r10 (yet to be released) has a new class called CacheBuilder. Basically this class is a forked version of the MapMaker but with caching in mind. The documentation suggests that it will support some of the eviction requirements you are looking for.
I reviewed the updated code in r10's version of the CustomConcurrentHashMap and found what looks like a scheduled map cleaner. Unfortunately, that code appears unfinished at this point but r10 looks more and more promising each day.
Beware that containsKey and other reading methods only run postReadCleanup, which does nothing except on every 64th invocation (see DRAIN_THRESHOLD). Moreover, it looks like all cleanup methods work with a single Segment only.
The easiest way to enforce eviction seems to be to put some dummy object into each segment. For this to work, you'd need to analyze CustomConcurrentHashMap.hash(Object), which is surely not a good idea, as this method may change at any time. Moreover, depending on the key class, it may be hard to find a key with a hashCode ensuring it lands in a given segment.
You could use reads instead, but you would have to repeat them 64 times per segment. Here, it'd be easy to find a key with an appropriate hashCode, since any object is allowed as an argument.
Maybe you could hack into the CustomConcurrentHashMap source code instead, it could be as trivial as
public void runCleanup() {
    final Segment<K, V>[] segments = this.segments;
    for (int i = 0; i < segments.length; ++i) {
        segments[i].runCleanup();
    }
}
but I wouldn't do it without a lot of testing and/or an OK by a guava team member.
Yep, we've gone back and forth a few times on whether these cleanup tasks should be done on a background thread (or pool), or should be done on user threads. If they were done on a background thread, this would eventually happen automatically; as it is, it'll only happen as each segment gets used. We're still trying to come up with the right approach here - I wouldn't be surprised to see this change in some future release, but I also can't promise anything or even make a credible guess as to how it will change. Still, you've presented a reasonable use case for some kind of background or user-triggered cleanup.
Your hack is reasonable, as long as you keep in mind that it's a hack, and liable to break (possibly in subtle ways) in future releases. As you can see in the source, Segment.runCleanup() calls runLockedCleanup and runUnlockedCleanup: runLockedCleanup() will have no effect if it can't lock the segment, but if it can't lock the segment it's because some other thread has the segment locked, and that other thread can be expected to call runLockedCleanup as part of its operation.
Also, in r10, there's CacheBuilder/Cache, analogous to MapMaker/Map. Cache is the preferred approach for many current users of makeComputingMap. It uses a separate CustomConcurrentHashMap, in the common.cache package; depending on your needs, you may want your GuavaEvictionHacker to work with both. (The mechanism is the same, but they're different Classes and therefore different Methods.)
I'm not a big fan of hacking into or forking external code until absolutely necessary. This problem occurs in part due to an early decision for MapMaker to fork ConcurrentHashMap, thereby dragging in a lot of complexity that could have been deferred until after the algorithms were worked out. By patching above MapMaker, the code is robust to library changes so that you can remove your workaround on your own schedule.
An easy approach is to use a priority queue of weak-reference tasks and a dedicated thread. This has the drawback of creating many stale no-op tasks, which can become excessive due to the O(lg n) insertion penalty. It works reasonably well for small, less frequently used caches. It was the original approach taken by MapMaker, and it's simple to write your own decorator.
A more robust choice is to mirror the lock amortization model with a single expiration queue. The head of the queue can be volatile so that a read can always peek to determine if it has expired. This allows all reads to trigger an expiration and an optional clean-up thread to check regularly.
By far the simplest is to use concurrencyLevel(1) to force MapMaker to use a single segment. This reduces the write concurrency, but most caches are read-heavy, so the loss is minimal. The original hack to nudge the map with a dummy key would then work fine. This would be my preferred approach, but the other two options are okay if you have high write loads.
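A sketch of that preferred option, combining the single segment with the question's scheduled nudge (the dummy key is arbitrary):
public class SingleSegmentCache {
    // one segment, so a single "touch" covers the whole map
    static final ConcurrentMap<String, MyObject> cache = new MapMaker()
            .concurrencyLevel(1)
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .makeMap();

    // the original scheduled nudge now reaches every entry; per maaartinus's
    // caveat, read-triggered cleanup may only run on every 64th invocation,
    // so schedule it more often than strictly necessary
    public static void nudgeEviction() {
        cache.containsKey("");
    }
}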
I don't know if it is appropriate for your use case, but your main concern about the lack of background cache eviction seems to be memory consumption, so I would suggest using softValues() on the MapMaker to allow the garbage collector to reclaim entries from the cache when a low-memory situation occurs. It could easily be the solution for you. I have used this on a subscription server (ATOM) where entries are served through a Guava cache using SoftReferences for values.
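That variant is a one-line change to the map definition from the question (sketch):
ConcurrentMap<String, MyObject> cache = new MapMaker()
        .softValues()                             // GC may reclaim values under memory pressure
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .makeComputingMap(loadFunction);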
Based on maaartinus's answer, I came up with the following code which uses reflection rather than directly modifying the source (If you find this useful please upvote his answer!). While it will come at a performance penalty for using reflection, the difference should be negligible since I'll run it about once every 20 minutes for each caching Map (I'm also caching the dynamic lookups in the static block which will help). I have done some initial testing and it appears to work as intended:
public class GuavaEvictionHacker {

    //Class objects necessary for reflection on Guava classes - see Guava docs for info
    private static final Class<?> computingMapAdapterClass;
    private static final Class<?> nullConcurrentMapClass;
    private static final Class<?> nullComputingConcurrentMapClass;
    private static final Class<?> customConcurrentHashMapClass;
    private static final Class<?> computingConcurrentHashMapClass;
    private static final Class<?> segmentClass;

    //MapMaker$ComputingMapAdapter#cache points to the wrapped CustomConcurrentHashMap
    private static final Field cacheField;

    //CustomConcurrentHashMap#segments points to the array of Segments (map partitions)
    private static final Field segmentsField;

    //CustomConcurrentHashMap$Segment#runCleanup() enforces eviction on the calling Segment
    private static final Method runCleanupMethod;
    static {
        try {
            //look up Classes
            computingMapAdapterClass = Class.forName("com.google.common.collect.MapMaker$ComputingMapAdapter");
            nullConcurrentMapClass = Class.forName("com.google.common.collect.MapMaker$NullConcurrentMap");
            nullComputingConcurrentMapClass = Class.forName("com.google.common.collect.MapMaker$NullComputingConcurrentMap");
            customConcurrentHashMapClass = Class.forName("com.google.common.collect.CustomConcurrentHashMap");
            computingConcurrentHashMapClass = Class.forName("com.google.common.collect.ComputingConcurrentHashMap");
            segmentClass = Class.forName("com.google.common.collect.CustomConcurrentHashMap$Segment");

            //look up Fields and set accessible
            cacheField = computingMapAdapterClass.getDeclaredField("cache");
            segmentsField = customConcurrentHashMapClass.getDeclaredField("segments");
            cacheField.setAccessible(true);
            segmentsField.setAccessible(true);

            //look up the cleanup Method and set accessible
            runCleanupMethod = segmentClass.getDeclaredMethod("runCleanup");
            runCleanupMethod.setAccessible(true);
        }
        catch (ClassNotFoundException cnfe) {
            throw new RuntimeException("ClassNotFoundException thrown in GuavaEvictionHacker static initialization block.", cnfe);
        }
        catch (NoSuchFieldException nsfe) {
            throw new RuntimeException("NoSuchFieldException thrown in GuavaEvictionHacker static initialization block.", nsfe);
        }
        catch (NoSuchMethodException nsme) {
            throw new RuntimeException("NoSuchMethodException thrown in GuavaEvictionHacker static initialization block.", nsme);
        }
    }
    /**
     * Forces eviction to take place on the provided Guava Map. The Map must be an instance
     * of either {@code CustomConcurrentHashMap} or {@code MapMaker$ComputingMapAdapter}.
     *
     * @param guavaMap the Guava Map to force eviction on.
     */
    public static void forceEvictionOnGuavaMap(ConcurrentMap<?, ?> guavaMap) {
        try {
            //we need to get the CustomConcurrentHashMap instance
            Object customConcurrentHashMap;

            //get the type of what was passed in
            Class<?> guavaMapClass = guavaMap.getClass();

            //if it's a CustomConcurrentHashMap we have what we need
            if (guavaMapClass == customConcurrentHashMapClass) {
                customConcurrentHashMap = guavaMap;
            }
            //if it's a NullConcurrentMap (auto-evictor), return early
            else if (guavaMapClass == nullConcurrentMapClass) {
                return;
            }
            //if it's a computing map we need to pull the instance from the adapter's "cache" field
            else if (guavaMapClass == computingMapAdapterClass) {
                customConcurrentHashMap = cacheField.get(guavaMap);

                //get the type of what we pulled out
                Class<?> innerCacheClass = customConcurrentHashMap.getClass();

                //if it's a NullComputingConcurrentMap (auto-evictor), return early
                if (innerCacheClass == nullComputingConcurrentMapClass) {
                    return;
                }
                //otherwise make sure it's a ComputingConcurrentHashMap - error if it isn't
                else if (innerCacheClass != computingConcurrentHashMapClass) {
                    throw new IllegalArgumentException("Provided ComputingMapAdapter's inner cache was an unexpected type: " + innerCacheClass);
                }
            }
            //error for anything else passed in
            else {
                throw new IllegalArgumentException("Provided ConcurrentMap was not an expected Guava Map: " + guavaMapClass);
            }

            //pull the array of Segments out of the CustomConcurrentHashMap instance
            Object[] segments = (Object[]) segmentsField.get(customConcurrentHashMap);

            //loop over them and invoke the cleanup method on each one
            for (Object segment : segments) {
                runCleanupMethod.invoke(segment);
            }
        }
        catch (IllegalAccessException iae) {
            throw new RuntimeException(iae);
        }
        catch (InvocationTargetException ite) {
            throw new RuntimeException(ite.getCause());
        }
    }
}
I'm looking for feedback on whether this approach is advisable as a stopgap until the issue is resolved in a Guava release, particularly from members of the Guava team when they get a minute.
EDIT: updated the solution to allow for auto-evicting maps (NullConcurrentMap or NullComputingConcurrentMap residing in a ComputingMapAdapter). This turned out to be necessary in my case, since I'm calling this method on all of my maps and a few of them are auto-evictors.

Generating singletons

This might sound like a weird idea and I haven't thought it through properly yet.
Say you have an application that ends up requiring a certain number of singletons to do some I/O for example. You could write one singleton and basically reproduce the code as many times as needed.
However, as programmers we're supposed to come up with inventive solutions that avoid redundancy or repetition of any kind. What would be a solution for making multiple somethings that could each act as a singleton?
P.S: This is for a project where a framework such as Spring can't be used.
You could introduce an abstraction like this:
public abstract class Singleton<T> {
    private T object;

    public synchronized T get() {
        if (object == null) {
            object = create();
        }
        return object;
    }

    protected abstract T create();
}
Then for each singleton, you just need to write this:
public final Singleton<Database> database = new Singleton<Database>() {
    @Override
    protected Database create() {
        // connect to the database, return the Database instance
    }
};

public final Singleton<LogCluster> logs = new Singleton<LogCluster>() {
    ...
Then you can use the singletons by writing database.get(). If the singleton hasn't been created, it is created and initialized.
The reason people probably don't do this, and prefer to just repeatedly write something like this:
private Database database;

public synchronized Database getDatabase() {
    if (database == null) {
        // connect to the database, assign the database field
    }
    return database;
}

private LogCluster logs;

public synchronized LogCluster getLogs() {
    ...
Is because in the end it is only one more line of code for each singleton, and the chance of getting the initialize-singleton pattern wrong is pretty low.
However, as programmers we're supposed to come up with inventive solutions that avoid redundancy or repetition of any kind.
That is not correct. As programmers, we are supposed to come up with solutions that meet the following criteria:
meet the functional requirements; e.g. perform as required without bugs,
are delivered within the mandated timeframe,
are maintainable; e.g. the next developer can read and modify the code,
perform fast enough for the task in hand, and
can be reused in future tasks.
(These criteria are roughly ordered by decreasing priority, though different contexts may dictate a different order.)
Inventiveness is NOT a requirement, and "avoid[ing] redundancy or repetition of any kind" is not either. In fact both of these can be distinctly harmful ... if the programmer ignores the real criteria.
Bringing this back to your question. You should only be looking for alternative ways to do singletons if it is going to actually make the code more maintainable. Complicated "inventive" solutions may well return to bite you (or the people who have to maintain your code in the future), even if they succeed in reducing the number of lines of repeated code.
And as others have pointed out (e.g. @BalusC), current thinking is that the singleton pattern should be avoided in a lot of classes of application.
There does exist a multiton pattern. Regardless, I am 60% certain that the real solution to the original problem is an RDBMS.
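For reference, a minimal multiton sketch (one lazily created instance per key; the class name is illustrative):
public class Multiton {
    private static final Map<String, Multiton> INSTANCES = new HashMap<String, Multiton>();

    private Multiton() {
    }

    public static synchronized Multiton getInstance(String key) {
        Multiton instance = INSTANCES.get(key);
        if (instance == null) {
            instance = new Multiton();
            INSTANCES.put(key, instance);
        }
        return instance;
    }
}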
@BalusC is right, but I will say it more strongly: singletons are evil in all contexts.
Webapps, desktop apps, etc. Just don't do it.
All a singleton is in reality is a global wad of data. Global data is bad. It makes proper unit testing impossible. It makes tracing down weird bugs much, much harder.
The Gang of Four book is flat out wrong here. Or at least obsolete by a decade and a half.
If you want only one instance, have a factory that makes only one. It's easy.
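A sketch of that factory idea (names hypothetical): the factory owns the single instance, and because callers receive it through an object rather than a static accessor, a unit test can substitute a fake factory.
public class DatabaseFactory {
    private Database instance;

    public synchronized Database get() {
        if (instance == null) {
            instance = new Database(); // created exactly once per factory
        }
        return instance;
    }
}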
How about passing a parameter to the function that creates the singleton (for example, its name or specialization), which knows to create a singleton for each unique parameter?
I know you asked about Java, but here is a solution in PHP as an example:
abstract class Singleton
{
    protected function __construct()
    {
    }

    final public static function getInstance()
    {
        static $instances = array();
        $calledClass = get_called_class();

        if (!isset($instances[$calledClass]))
        {
            $instances[$calledClass] = new $calledClass();
        }

        return $instances[$calledClass];
    }

    final private function __clone()
    {
    }
}
Then you just write:
class Database extends Singleton {}
