Java Map with TimeToLive Associated with each key/value pair

Recently I was asked (in an interview) to design a HashMap with a TTL associated with each key. I did it using an approach similar to the one below, but according to the interviewer this is not a good approach: it needs to iterate over the whole map, and if the map holds millions of entries that becomes a bottleneck.
Is there a better approach? His other concern was that a thread is kept running in the background even though the next TTL may be hours away.
class CleanerThread extends Thread {
    @Override
    public void run() {
        System.out.println("Initiating Cleaner Thread..");
        while (true) {
            cleanMap();
            try {
                Thread.sleep(expiryInMillis / 2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private void cleanMap() {
        long currentTime = new Date().getTime();
        for (K key : timeMap.keySet()) {
            if (currentTime > (timeMap.get(key) + expiryInMillis)) {
                V value = remove(key);
                timeMap.remove(key);
                System.out.println("Removing : " + sdf.format(new Date()) + " : " + key + " : " + value);
            }
        }
    }
}

It would be better to use a LinkedHashMap so that you can preserve the insertion order. In fact, LinkedHashMap extends HashMap. If running a thread is the problem, then you could create a custom implementation of the map by extending LinkedHashMap. Inside the class, override the get method.
EDIT: Based on onkar's comment, it is better to override get instead of put, as this prevents retrieval of expired items.
import java.util.Date;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class MyLinkedHashMap<K> extends LinkedHashMap<K, Date> {
    private static final long expiryTime = 100000L;
    private long currentOldest = 0L;

    @Override
    public Date get(Object key) {
        long currentTime = new Date().getTime();
        if ((currentOldest > 0L) && (currentOldest + expiryTime) > currentTime) {
            // Even the oldest key has not expired yet, so nothing can
            // have expired: skip the cleanup pass entirely.
            return super.get(key);
        }
        Iterator<Map.Entry<K, Date>> iter = this.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<K, Date> entry = iter.next();
            long entryTime = entry.getValue().getTime();
            if (currentTime >= entryTime + expiryTime) {
                iter.remove();
            } else {
                // Since this is a LinkedHashMap, insertion order is preserved:
                // all elements after the current entry were inserted later,
                // so there is no need to check them if this one has not expired.
                currentOldest = entryTime;
                break;
            }
        }
        return super.get(key);
    }
}

Extend HashMap and its entry type, add a field for the expiry time, and override the get method to return the value only if the expiry time is still in the future, i.e. expiryTime > System.currentTimeMillis().
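A minimal sketch of that idea (illustrative names; for simplicity it wraps a HashMap and stores an expiry timestamp next to each value, rather than subclassing HashMap's internal entry class, which is not designed for extension):
import java.util.HashMap;

public class ExpiringHashMap<K, V> {
    // Holds a value together with its absolute expiry timestamp (epoch millis).
    private static class Expiring<V> {
        final V value;
        final long expiresAt;
        Expiring(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final HashMap<K, Expiring<V>> map = new HashMap<>();

    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Expiring<>(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns the value only if its expiry time is still in the future;
    // expired entries are dropped lazily, with no background thread.
    public V get(K key) {
        Expiring<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (e.expiresAt <= System.currentTimeMillis()) {
            map.remove(key);
            return null;
        }
        return e.value;
    }
}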

When you talk TTLs and want to access entries in order of their TTL values, you should use a PriorityQueue or PriorityBlockingQueue, or a heap-backed map (I don't know whether Java ships a HeapMap implementation, though).
Whenever you insert an item, it gets shuffled to its proper ordering position within the collection.
So if you only want to take out TTLs that have expired, you just check/take: you will get the earliest-expiring entries first, and you keep checking/taking until you hit the first whose TTL has not expired yet. That's where you stop.
Because a PriorityQueue guarantees (if you implemented the compareTo functions properly) that the head is always the element with the earliest expiry, after hitting the first unexpired entry a) that entry will be the one closest to expiry and b) all the others will expire later. The last item to come out of the queue - independent of the sequence you put them in - will be the one with the latest expiry.
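A hedged sketch of that scheme, pairing the value map with a PriorityQueue ordered by expiry time (names are illustrative):
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Sketch: the cleanup pass only polls entries that have actually expired,
// instead of scanning the whole map.
class TtlMap<K, V> {
    private static class Expiry<K> implements Comparable<Expiry<K>> {
        final K key;
        final long expiresAt; // epoch millis
        Expiry(K key, long expiresAt) {
            this.key = key;
            this.expiresAt = expiresAt;
        }
        @Override
        public int compareTo(Expiry<K> other) {
            return Long.compare(this.expiresAt, other.expiresAt);
        }
    }

    private final Map<K, V> map = new HashMap<>();
    private final PriorityQueue<Expiry<K>> queue = new PriorityQueue<>();

    public void put(K key, V value, long ttlMillis) {
        map.put(key, value);
        // Note: re-putting a key leaves a stale queue entry behind;
        // handling that is omitted in this sketch.
        queue.add(new Expiry<>(key, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        evictExpired();
        return map.get(key);
    }

    private void evictExpired() {
        long now = System.currentTimeMillis();
        while (!queue.isEmpty() && queue.peek().expiresAt <= now) {
            map.remove(queue.poll().key);
        }
    }
}
A java.util.concurrent.DelayQueue behaves similarly and additionally lets a cleaner thread block in take() until the next entry actually expires, which addresses the interviewer's concern about a background thread spinning when the next TTL is hours away.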

Best Data Structure for fast retrieval, update, and keeping ordering

The problem is as follows:
I need to keep track of url + click count.
I need to be able to quickly update the click count for a url when a user clicks on it.
I need to be able to quickly retrieve the top 10 URLs by click count.
NOTE: Assume you cannot use a database.
What is the best data structure to achieve this?
I have thought about using a map, but a map doesn't keep track of the ordering of the top 10 clicks.
You need an additional List<Map.Entry<URL, Integer>> for holding the top ten, with T being the click count of the lowermost entry.
If you count another click and the new count is still not greater than T: do nothing.
If the increased count is greater than T, check whether the URL is in the list or not. If it is, just re-sort the list. If it is not, add this entry to the list, sort, and delete the last entry if the list has more than 10 entries. Update T.
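A rough sketch of this bookkeeping, assuming the counts live in a Map<String, Integer> and the top-ten list is kept sorted by descending count (all names are illustrative):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class TopTenTracker {
    private final Map<String, Integer> counts = new HashMap<>();
    private final List<String> topTen = new ArrayList<>(); // sorted by count, descending
    private int threshold = 0; // T: click count of the lowermost top-ten entry

    public void click(String url) {
        int count = counts.merge(url, 1, Integer::sum);
        if (count <= threshold && !topTen.contains(url)) {
            return; // cannot enter the top ten yet: do nothing
        }
        if (!topTen.contains(url)) {
            topTen.add(url);
        }
        // Re-sort the (at most 11-element) list and trim it back to ten.
        topTen.sort((a, b) -> Integer.compare(counts.get(b), counts.get(a)));
        if (topTen.size() > 10) {
            topTen.remove(topTen.size() - 1);
        }
        // T stays 0 while the list still has room.
        threshold = topTen.size() < 10 ? 0 : counts.get(topTen.get(topTen.size() - 1));
    }

    public List<String> topTen() {
        return new ArrayList<>(topTen);
    }
}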
The best data structure I can think of is a TreeSet.
The elements of a TreeSet are sorted, so you can easily find the top items.
Also make sure you maintain a separate comparator class for URL which implements Comparator, so you can put your logic for keeping elements sorted by count into it. Use this comparator while creating the TreeSet. Insertion/update/delete/get operations all happen in O(log n).
Here is the code for how you would define the structure.
TreeSet<URL> treeSet = new TreeSet<URL>(new URLComparator());
class URL {
    private String url;
    int count;

    public URL(String string, int i) {
        url = string;
        count = i;
    }

    @Override
    public int hashCode() {
        return url.hashCode();
    }

    @Override // No need to write this method. Just used it for testing.
    public String toString() {
        return "url : " + url + " ,count : " + count + "\n";
    }
}
One more note: use the hash code of the url string as the hash code of your URL class (as the overridden method above does).
This is how you define the URLComparator class; the compare logic is based on the URL count.
class URLComparator implements Comparator<URL> {
    @Override
    public int compare(URL o1, URL o2) {
        return Integer.compare(o2.count, o1.count);
    }
}
Testing
TreeSet<URL> treeSet = new TreeSet<URL>(new URLComparator());
treeSet.add(new URL("url1", 12));
treeSet.add(new URL("url2", 0));
treeSet.add(new URL("url3", 5));
System.out.println(treeSet);
Output:
[url : url1 ,count : 12
, url : url3 ,count : 5
, url : url2 ,count : 0
]
To print the top 10 elements, use the following code.
Iterator<URL> iterator = treeSet.iterator();
int count = 0;
while (count < 10 && iterator.hasNext()) {
    System.out.println(iterator.next());
    count++;
}
You can use a Map<String, Integer> for this use case, as:
It keeps track of key (url) and value (click count).
You can put an updated click count for a url into the map when a user clicks on it.
You can retrieve the top 10 click counts after sorting the map entries by value:
// create a list out of the entryset of your map
Set<Map.Entry<String, Integer>> set = map.entrySet();
List<Map.Entry<String, Integer>> list = new ArrayList<>(set);
// this can be clubbed in another stub to act on the top 'N' click counts
list.sort((o1, o2) -> (o2.getValue()).compareTo(o1.getValue()));
list.stream().limit(10).forEach(entry ->
        System.out.println(entry.getKey() + " ==== " + entry.getValue()));
Using a Map, you will have to sort the values to get the top 10 urls, which gives you O(n log n) complexity using a comparator to sort by values.
Another way is:
Using a doubly linked list (of size 10) together with a HashMap (proceeding the way an LRU cache does).
Retrieve/update will be O(1).
The top 10 results will be the items in the list.
Structure of Doubly list :
class UrlAndCountNode {
    String url;
    int count;
    UrlAndCountNode next;
    UrlAndCountNode prev;
}
Structure of Map:
Map<String, UrlAndCountNode>
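A hedged sketch of the update step under that design, using the UrlAndCountNode above (insertion of brand-new urls is omitted; after an increment, the node bubbles toward the head past nodes it now outranks, keeping the list sorted by descending count):
import java.util.HashMap;
import java.util.Map;

class ClickList {
    private final Map<String, UrlAndCountNode> index = new HashMap<>();
    private UrlAndCountNode head; // node with the highest count

    // Look the node up in O(1) via the HashMap, then re-position it.
    void click(String url) {
        UrlAndCountNode node = index.get(url);
        if (node == null) {
            return; // insertion of new urls is omitted in this sketch
        }
        node.count++;
        // Bubble the node toward the head while it now outranks its predecessor.
        while (node.prev != null && node.prev.count < node.count) {
            swapWithPrev(node);
        }
    }

    private void swapWithPrev(UrlAndCountNode node) {
        UrlAndCountNode p = node.prev;
        // Relink so that node sits immediately before p.
        if (p.prev != null) {
            p.prev.next = node;
        }
        node.prev = p.prev;
        p.next = node.next;
        if (node.next != null) {
            node.next.prev = p;
        }
        node.next = p;
        p.prev = node;
        if (head == p) {
            head = node;
        }
    }
}
Note that a single increment only needs to move the node past neighbours whose count is now smaller, but in the worst case (long runs of equal counts) the scan is linear; a truly O(1) update needs count buckets, as in an LFU cache.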
That's an interesting question IMO. It seems you need something that is sorted by clicks, but at the same time you need to alter these values. The only way to do that with a sorted data structure is to remove the entry you want to update and put the updated one back; simply updating the clicks in place will not work. As such, I think keeping them sorted by clicks is the better option.
The downside is that entries with the same number of clicks would collide in a sorted set, so something like Guava's Multiset is a much better option.
As such I would do this:
static class Holder {
    private final String name;
    private final int clicks;

    public Holder(String name, int clicks) {
        super();
        this.name = name;
        this.clicks = clicks;
    }

    public String getName() {
        return name;
    }

    public int getClicks() {
        return clicks;
    }

    @Override
    public String toString() {
        return "name = " + name + " clicks = " + clicks;
    }
}
And methods would look like this:
private static List<Holder> firstN(Multiset<Holder> set, int n) {
    return set.stream().limit(n).collect(Collectors.toList());
}

private static void updateOne(Multiset<Holder> set, String urlName, int more) {
    Iterator<Holder> iter = set.iterator();
    int currentClicks = 0;
    boolean found = false;
    while (iter.hasNext()) {
        Holder h = iter.next();
        if (h.getName().equals(urlName)) {
            currentClicks = h.getClicks();
            iter.remove();
            found = true;
        }
    }
    if (found) {
        set.add(new Holder(urlName, currentClicks + more));
    }
}

Java Hashmap - Multiple thread put

We've recently had a discussion at work about whether we need to use ConcurrentHashMap or whether we can simply use a regular HashMap in our multithreaded environment. There are two arguments for HashMap: it is faster than ConcurrentHashMap, so we should use it if possible; and ConcurrentModificationException apparently only appears when you iterate over the map while it is being modified, so "if we only PUT and GET from the map, what is the problem with the regular HashMap?" was the argument.
I thought that concurrent PUT actions, or a concurrent PUT and READ, could lead to exceptions, so I put together a test to show this. The test is simple: create 10 threads, each of which writes the same 1000 key-value pairs into the map again and again for 5 seconds, then print the resulting map.
The results were quite confusing actually:
Length:1299
Errors recorded: 0
I thought each key-value pair was unique in a HashMap, but looking through the map, I can find multiple key-value pairs that are identical. I expected either some kind of exception or corrupted keys or values, but I did not expect this. How does this occur?
Here's the code I used, for reference:
public class ConcurrentErrorTest
{
    static final long runtime = 5000;
    static final AtomicInteger errCount = new AtomicInteger();
    static final int count = 10;

    public static void main(String[] args) throws InterruptedException
    {
        List<Thread> threads = new LinkedList<>();
        final Map<String, Integer> map = getMap();
        for (int i = 0; i < count; i++)
        {
            Thread t = getThread(map);
            threads.add(t);
            t.start();
        }
        for (int i = 0; i < count; i++)
        {
            threads.get(i).join(runtime + 1000);
        }
        for (String s : map.keySet())
        {
            System.out.println(s + " " + map.get(s));
        }
        System.out.println("Length:" + map.size());
        System.out.println("Errors recorded: " + errCount.get());
    }

    private static Map<String, Integer> getMap()
    {
        Map<String, Integer> map = new HashMap<>();
        return map;
    }

    private static Map<String, Integer> getConcMap()
    {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        return map;
    }

    private static Thread getThread(final Map<String, Integer> map)
    {
        return new Thread(new Runnable() {
            @Override
            public void run()
            {
                long start = System.currentTimeMillis();
                long now = start;
                while (now - start < runtime)
                {
                    try
                    {
                        for (int i = 0; i < 1000; i++)
                            map.put("i=" + i, i);
                        now = System.currentTimeMillis();
                    }
                    catch (Exception e)
                    {
                        System.out.println("P - Error occurred: " + e.toString());
                        errCount.incrementAndGet();
                    }
                }
            }
        });
    }
}
What you're faced with seems to be a TOCTTOU class problem. (Yes, this kind of bug happens so often, it's got its own name. :))
When you insert an entry into a map, at least the following two things need to happen:
Check whether the key already exists.
If the check returned true, update the existing entry; if it didn't, add a new one.
If these two don't happen atomically (as they would in a correctly synchronized map implementation), then several threads can come to the conclusion that the key doesn't exist yet in step 1, but by the time they reach step 2, that isn't true any more. So multiple threads will happily insert an entry with the same key.
Please note that this isn't the only problem that can happen, and depending on the implementation and your luck with visibility, you can get all kinds of different and unexpected failures.
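A minimal illustration of the race (a hypothetical snippet, not from the question's test; the check and the act are two separate operations on a plain HashMap, while ConcurrentHashMap offers them as one atomic call):
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class CheckThenAct {
    private final Map<String, Integer> map = new HashMap<>();
    private final ConcurrentMap<String, Integer> concurrentMap = new ConcurrentHashMap<>();

    void racy(String key, Integer value) {
        // Two threads may both see "absent" here, then both insert.
        if (!map.containsKey(key)) { // step 1: check
            map.put(key, value);     // step 2: act (the check may be stale by now)
        }
    }

    void atomic(String key, Integer value) {
        // The check and the insert happen as one atomic operation.
        concurrentMap.putIfAbsent(key, value);
    }
}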
In a multithreaded environment, you should always use ConcurrentHashMap if you are going to perform any operation other than get.
Most of the time you won't get an exception, but you will definitely get corrupted data, because each thread may be working from a stale view of the map's contents.
When threads perform a put and first check for the key's existence, multiple threads can find it absent and each insert the entry.

Search multiple HashMaps at the same time

tldr: How can I search for an entry in multiple (read-only) Java HashMaps at the same time?
The long version:
I have several dictionaries of various sizes stored as HashMap<String, String>. Once they are read in, they are never to be changed (strictly read-only).
I want to check whether any dictionary has stored an entry with my key, and which one.
My code was originally looking for a key like this:
public DictionaryEntry getEntry(String key) {
    for (int i = 0; i < _numDictionaries; i++) {
        HashMap<String, String> map = getDictionary(i);
        if (map.containsKey(key))
            return new DictionaryEntry(map.get(key), i);
    }
    return null;
}
Then it got a little more complicated: my search string could contain typos, or could be a variant of the stored entry. For example, if the stored key was "banana", it is possible that I'd look up "bannana" or "a banana", but would still like the entry for "banana" returned. Using the Levenshtein distance, I now loop through all dictionaries and each entry in them:
public DictionaryEntry getEntry(String key) {
    for (int i = 0; i < _numDictionaries; i++) {
        HashMap<String, String> map = getDictionary(i);
        for (Map.Entry entry : map.entrySet()) {
            // Calculate Levenshtein distance, store closest match etc.
        }
    }
    // return closest match or null.
}
So far everything works as it should and I'm getting the entry I want. Unfortunately I have to look up around 7000 strings, in five dictionaries of various sizes (~ 30 - 70k entries) and it takes a while. From my processing output I have the strong impression my lookup dominates overall runtime.
My first idea to improve the runtime was to search all dictionaries in parallel. Since none of the dictionaries is ever changed and no more than one thread accesses a dictionary at the same time, I don't see any safety concerns.
The question is just: how do I do this? I have never used multithreading before. My search only came up with ConcurrentHashMap (but to my understanding, I don't need that) and the Runnable class, where I'd have to put my processing into the run() method. I think I could rewrite my current class to fit Runnable, but I was wondering if there is maybe a simpler way to do this (or: how can I do it simply with Runnable? Right now my limited understanding suggests I'd have to restructure a lot).
Since I was asked to share the Levenshtein logic: it's really nothing fancy, but here you go:
private int _maxLSDistance = 10;

public Map.Entry getClosestMatch(String key) {
    Map.Entry _closestMatch = null;
    int _lsDistance = Integer.MAX_VALUE; // distance of the stored closest match
    if (key == null) {
        return null;
    }
    for (Map.Entry entry : _dictionary.entrySet()) {
        // Perfect match
        if (entry.getKey().equals(key)) {
            return entry;
        }
        // Similar match
        else {
            int dist = StringUtils.getLevenshteinDistance((String) entry.getKey(), key);
            // If "dist" is smaller than the threshold and smaller than the distance of the already stored entry
            if (dist < _maxLSDistance) {
                if (_closestMatch == null || dist < _lsDistance) {
                    _closestMatch = entry;
                    _lsDistance = dist;
                }
            }
        }
    }
    return _closestMatch;
}
In order to use multi-threading in your case, it could be something like the following.
The "monitor" class, which basically stores the results and coordinates the threads:
public class Results {
    private int nrOfDictionaries = 4;
    private ArrayList<String> results = new ArrayList<String>();

    public void prepare() {
        nrOfDictionaries = 4;
        results = new ArrayList<String>();
    }

    public synchronized void oneDictionaryFinished() {
        nrOfDictionaries--;
        System.out.println("one dictionary finished");
        notifyAll();
    }

    public synchronized boolean isReady() throws InterruptedException {
        while (nrOfDictionaries != 0) {
            wait();
        }
        return true;
    }

    public synchronized void addResult(String result) {
        results.add(result);
    }

    public ArrayList<String> getAllResults() {
        return results;
    }
}
The thread itself, which can be set to search a specific dictionary:
public class ThreadDictionarySearch extends Thread {
    // the actual dictionary
    private String dictionary;
    private Results results;

    public ThreadDictionarySearch(Results results, String dictionary) {
        this.dictionary = dictionary;
        this.results = results;
    }

    @Override
    public void run() {
        for (int i = 0; i < 4; i++) {
            // search dictionary;
            results.addResult("result of " + dictionary);
            System.out.println("adding result from " + dictionary);
        }
        results.oneDictionaryFinished();
    }
}
And the main method for demonstration:
public static void main(String[] args) throws Exception {
    Results results = new Results();
    ThreadDictionarySearch threadA = new ThreadDictionarySearch(results, "dictionary A");
    ThreadDictionarySearch threadB = new ThreadDictionarySearch(results, "dictionary B");
    ThreadDictionarySearch threadC = new ThreadDictionarySearch(results, "dictionary C");
    ThreadDictionarySearch threadD = new ThreadDictionarySearch(results, "dictionary D");
    threadA.start();
    threadB.start();
    threadC.start();
    threadD.start();
    if (results.isReady()) {
        // it stays here until all dictionaries are searched,
        // because in "Results" it's told to wait() while not finished
        for (String string : results.getAllResults()) {
            System.out.println("RESULT: " + string);
        }
    }
}
I think the easiest would be to use a stream over the entry set:
public DictionaryEntry getEntry(String key) {
    for (int i = 0; i < _numDictionaries; i++) {
        HashMap<String, String> map = getDictionary(i);
        map.entrySet().parallelStream().forEach((entry) -> {
            // Calculate Levenshtein distance, store closest match etc.
        });
    }
    // return closest match or null.
}
Provided you are using Java 8, of course. You could also wrap the outer loop in an IntStream. And you could directly use Stream.reduce to get the entry with the smallest distance.
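For example, a sketch of that last idea (Stream.min with a comparator is equivalent to the reduce here; StringUtils.getLevenshteinDistance is the same Apache Commons Lang helper used in the question):
import java.util.Comparator;
import java.util.Map;
import java.util.Optional;
import org.apache.commons.lang3.StringUtils; // commons-lang3; adjust the import for older versions

// Sketch: the entry whose key has the smallest Levenshtein distance to the search key.
static Optional<Map.Entry<String, String>> closest(Map<String, String> map, String key) {
    return map.entrySet().parallelStream()
            .min(Comparator.comparingInt(e -> StringUtils.getLevenshteinDistance(e.getKey(), key)));
}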
Maybe try thread pools:
ExecutorService es = Executors.newFixedThreadPool(_numDictionaries);
for (int i = 0; i < _numDictionaries; i++) {
    // prepare a Runnable implementation that contains the logic of your search
    es.submit(prepared_runnable);
}
I believe you may also try to find a quick estimate for strings that completely do not match (i.e. a significant difference in length), and use it to finish your logic ASAP, moving on to the next candidate.
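A hedged sketch of the thread-pool idea with Callables that return results, instead of a bare Runnable (_numDictionaries and getDictionary come from the question; searchDictionary is a hypothetical helper):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: one Callable per dictionary; searchDictionary is a hypothetical
// helper that runs the Levenshtein loop over a single map.
DictionaryEntry parallelLookup(String key) throws Exception {
    ExecutorService es = Executors.newFixedThreadPool(_numDictionaries);
    List<Future<DictionaryEntry>> futures = new ArrayList<>();
    for (int i = 0; i < _numDictionaries; i++) {
        final int index = i;
        futures.add(es.submit(() -> searchDictionary(getDictionary(index), key)));
    }
    DictionaryEntry best = null;
    for (Future<DictionaryEntry> f : futures) {
        DictionaryEntry candidate = f.get(); // blocks until that task finishes
        // keep the closer of the two candidates (distance comparison elided)
        if (best == null) {
            best = candidate;
        }
    }
    es.shutdown();
    return best;
}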
I have strong doubts that HashMaps are a suitable solution here, especially if you want to have some fuzzing and stop words. You should utilize a proper full-text search solution like Elasticsearch or Apache Solr, or at least an available engine like Apache Lucene.
That being said, you can use a poor man's version: create an array of your maps and a SortedMap, iterate over the array, take the keys of the current HashMap, and store them in the SortedMap together with the index of their HashMap. To retrieve a key, you first search the SortedMap for said key, get the respective HashMap from the array using the stored index, and look up the key in only that one HashMap. This should be fast enough without the need for multiple threads to dig through the HashMaps. However, you could make the code below into a Runnable and have multiple lookups run in parallel.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class Search {

    public static void main(String[] arg) {
        if (arg.length == 0) {
            System.out.println("Must give a search word!");
            System.exit(1);
        }
        String searchString = arg[0].toLowerCase();
        /*
         * Populating our HashMaps.
         */
        HashMap<String, String> english = new HashMap<String, String>();
        english.put("banana", "fruit");
        english.put("tomato", "vegetable");
        HashMap<String, String> german = new HashMap<String, String>();
        german.put("Banane", "Frucht");
        german.put("Tomate", "Gemüse");
        /*
         * Now we create our ArrayList of HashMaps for fast retrieval
         */
        List<HashMap<String, String>> maps = new ArrayList<HashMap<String, String>>();
        maps.add(english);
        maps.add(german);
        /*
         * This is our index
         */
        SortedMap<String, Integer> index = new TreeMap<String, Integer>(String.CASE_INSENSITIVE_ORDER);
        /*
         * Populating the index:
         */
        for (int i = 0; i < maps.size(); i++) {
            // We iterate through our HashMaps...
            HashMap<String, String> currentMap = maps.get(i);
            for (String key : currentMap.keySet()) {
                /* ...and populate our index with lowercase versions of the keys,
                 * referencing the map from which the key originates.
                 */
                index.put(key.toLowerCase(), i);
            }
        }
        // In case our index contains our search string...
        if (index.containsKey(searchString)) {
            /*
             * ...we find out in which of the maps stored in "maps"
             * the word in the index originated.
             */
            Integer mapIndex = index.get(searchString);
            /*
             * Next, we look up said map.
             */
            HashMap<String, String> origin = maps.get(mapIndex);
            /*
             * Last, we retrieve the value from the origin map
             */
            String result = origin.get(searchString);
            /*
             * The above steps can be shortened to
             * String result = maps.get(index.get(searchString).intValue()).get(searchString);
             */
            System.out.println(result);
        } else {
            System.out.println("\"" + searchString + "\" is not in the index!");
        }
    }
}
Please note that this is a rather naive implementation only provided for illustration purposes. It doesn't address several problems (you can't have duplicate index entries, for example).
With this solution, you are basically trading startup speed for query speed.
Okay!
Since your concern is to get a faster response, I would suggest you divide the work between threads.
Say you have 5 dictionaries: maybe keep three dictionaries with one thread and let another thread take care of the remaining two.
Then whichever thread finds the match first will halt or terminate the other thread.
You may need some extra logic to do that division of work, but that won't affect your performance time.
And maybe you need a few more changes in your code to get your close match:
for (Map.Entry entry : _dictionary.entrySet()) {
You are using the entry set, but you never use the entry values, and getting the entry set can be a bit more expensive. I would suggest you just use keySet, since you are not really interested in the values of that map:
for (String key : _dictionary.keySet()) {
For more details on the performance of maps, please read this link: Map performances
Iteration over the collection-views of a LinkedHashMap requires time proportional to the size of the map, regardless of its capacity. Iteration over a HashMap is likely to be more expensive, requiring time proportional to its capacity.

Summing values in a List. Could I be doing this more efficiently?

I have a list of Fact objects. Each object has a Date field (reportingDate) and a long field (numberSaved). There are several results for each reportingDate. I'm trying to get a sum of all numberSaved values for each reporting date. Currently, I'm doing it like this:
private static List<Fact> sumFacts(List<Fact> facts) {
    List<Fact> summedFacts = new ArrayList<Fact>();
    for (Fact fact : facts) {
        boolean found = false;
        for (Fact sumFact : summedFacts) {
            if (sumFact.getReportingDate().equals(fact.getReportingDate())) {
                found = true;
                sumFact.setNumberSaved(sumFact.getNumberSaved() + fact.getNumberSaved());
            }
        }
        if (!found) summedFacts.add(fact);
    }
    return summedFacts;
}
public class Fact {
    String reportingDate;
    long numberSaved;

    public String getReportingDate() {
        return reportingDate;
    }

    public void setReportingDate(String reportingDate) {
        this.reportingDate = reportingDate;
    }

    public long getNumberSaved() {
        return numberSaved;
    }

    public void setNumberSaved(long numberSaved) {
        this.numberSaved = numberSaved;
    }
}
For each item in the original list, it iterates through the new list looking for a matching Date. If it finds an object with a matching date, it adds its numberSaved value to it. If it makes it through the whole list without finding a matching date, it adds itself to the new list.
Is there a more efficient way that I could be summing the values into a list of Fact objects with unique dates?
EDIT:
I forgot to mention that I need to maintain the order of the items
Instead of keeping your facts in a List and iterating over it (producing an O(n^2) complexity), you could store them in a map from the reporting date to the fact object, giving you an O(n) complexity:
private static List<Fact> sumFacts(List<Fact> facts) {
    // LinkedHashMap keeps the first-seen order of reporting dates,
    // which the question's EDIT asks to maintain.
    Map<String, Fact> summedFacts = new LinkedHashMap<String, Fact>();
    for (Fact fact : facts) {
        Fact summedFact = summedFacts.get(fact.getReportingDate());
        if (summedFact == null) {
            summedFacts.put(fact.getReportingDate(), fact);
        } else {
            summedFact.setNumberSaved(summedFact.getNumberSaved() + fact.getNumberSaved());
        }
    }
    return new ArrayList<Fact>(summedFacts.values());
}
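If Java 8 is available, a shorter equivalent is possible with streams; this is a sketch built on the question's Fact class, where the LinkedHashMap supplier keeps the first-seen order of reporting dates (per the question's EDIT):
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

private static List<Fact> sumFactsWithStreams(List<Fact> facts) {
    // Group by date (in first-seen order) and sum the numberSaved values.
    Map<String, Long> sums = facts.stream().collect(Collectors.groupingBy(
            Fact::getReportingDate, LinkedHashMap::new,
            Collectors.summingLong(Fact::getNumberSaved)));
    List<Fact> summed = new ArrayList<>();
    sums.forEach((date, total) -> {
        Fact f = new Fact(); // Fact as defined in the question
        f.setReportingDate(date);
        f.setNumberSaved(total);
        summed.add(f);
    });
    return summed;
}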
The only way this could be faster is if both lists were sorted by some key (most likely your date that you are using). Checking for the existence of an object in an unsorted list is O(n), and you are doing this for every element of another list, making the problem O(m * n).
This shows that your solution is as efficient as it can be without presorting lists.
The most you can improve is to use List.add(int, Object) to insert new items at the front of the list, so that they are not looped over again.
You could greatly increase performance by using a Hashtable for summedFacts (read more at http://docs.oracle.com/javase/7/docs/api/java/util/Hashtable.html).
You would convert your date to a string and use it as the key of the Hashtable. The value will hold the sum for the Fact objects with the same date.
Hashtable access is O(1) on average, so this solution leads to an O(n) implementation instead of your O(n*m) one.
For example:
private static Hashtable<String, Fact> sumFacts(List<Fact> facts) {
    Hashtable<String, Fact> summedFacts = new Hashtable<String, Fact>();
    for (Fact fact : facts) {
        // Check whether an item with this date was already added. If not, add it.
        Fact currentFact = summedFacts.get(fact.getReportingDate());
        if (currentFact == null) {
            summedFacts.put(fact.getReportingDate(), fact); // add the value to the Hashtable
        } else {
            // If the date is already there, then perform the addition.
            currentFact.setNumberSaved(fact.getNumberSaved() + currentFact.getNumberSaved());
        }
    }
    return summedFacts;
}

HashMap and ArrayList adding while iterating/looping

I have a game where every X seconds it will write changed values in memory back to my DB. These values are stored in containers (HashMaps and ArrayLists) when the data they hold is edited.
For simplicity let's pretend I have only 1 container to write to the DB:
public static HashMap<String, String> dbEntitiesDeletesBacklog = new HashMap<String, String>();
My DB writing loop:
Timer dbUpdateJob = new Timer();
dbUpdateJob.schedule(new TimerTask() {
    public void run() {
        long startTime = System.nanoTime();
        boolean updateEntitiesTableSuccess = UpdateEntitiesTable();
        if (!updateEntitiesTableSuccess) {
            try {
                conn.rollback();
            } catch (SQLException e) {
                e.printStackTrace();
                logger.fatal(e.getMessage());
                System.exit(1);
            }
        } else { // everything saved to DB - commit time
            try {
                conn.commit();
            } catch (SQLException e) {
                e.printStackTrace();
                logger.fatal(e.getMessage());
                System.exit(1);
            }
        }
        logger.debug("Time to save to DB: " + (System.nanoTime() - startTime) / 1000000 + " milliseconds");
    }
}, 0, 10000); //TODO:: figure out the perfect saving delay
My update method:
private boolean UpdateEntitiesTable() {
    Iterator<Entry<String, String>> it = dbEntitiesDeletesBacklog.entrySet().iterator();
    while (it.hasNext()) {
        Entry<String, String> pairs = it.next();
        String tmpEntityId = pairs.getKey();
        int deletedSuccess = UPDATE("DELETE" +
                " FROM " + DB_NAME + ".entities" +
                " WHERE entity_id=(?)", new String[]{tmpEntityId});
        if (deletedSuccess != 1) {
            logger.error("Entity " + tmpEntityId + " was unable to be deleted.");
            return false;
        }
        it.remove(); // the iterator removal suffices; a second map.remove() here would be redundant
    }
    return true;
}
Do I need to create some sort of locking mechanism while 'saving to DB' for the dbEntitiesDeletesBacklog HashMap and the other containers not included in this excerpt? I would think I do, because the code creates its iterator and then loops: what if something is added after the iterator is created and before it is done looping through the entries? I'm sorry this is more of a process question and less of a code-help question (since I included so much sample code), but I wanted to make sure it was easy to understand what I am trying to do and asking.
Same question for my other containers which I use like so:
public static ArrayList<String> dbCharacterDeletesBacklog = new ArrayList<String>();
private boolean DeleteCharactersFromDB() {
    for (String deleteWho : dbCharacterDeletesBacklog) {
        int deleteSuccess = MyDBSyncher.UPDATE("DELETE FROM " + DB_NAME + ".characters" +
                " WHERE name=(?)",
                new String[]{deleteWho});
        if (deleteSuccess != 1) {
            logger.error("Character(deleteSuccess): " + deleteSuccess);
            return false;
        }
    }
    dbCharacterDeletesBacklog.clear();
    return true;
}
Thanks so much, as always, for any help on this. It is greatly appreciated!!
At the very least, you need a synchronized map (via Collections.synchronizedMap) if you are accessing your map concurrently; otherwise you may experience non-deterministic behaviour.
Further than that, as you suggest, you also need to lock your map during iteration. From the javadoc for Collections.synchronizedMap() the suggestion is:
It is imperative that the user manually synchronize on the returned
map when iterating over any of its collection views:
Map m = Collections.synchronizedMap(new HashMap());
...
Set s = m.keySet(); // Needn't be in synchronized block
...
synchronized (m) { // Synchronizing on m, not s!
    Iterator i = s.iterator(); // Must be in synchronized block
    while (i.hasNext())
        foo(i.next());
}
Failure to follow this advice may result in non-deterministic
behavior.
Alternatively, use a ConcurrentHashMap instead of a regular HashMap to avoid requiring synchronization during iteration. For a game, this is likely a better option since you avoid locking your collection for a long period of time.
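For instance, a small sketch of what that buys you (the field name mirrors the question's; the actual DB call is elided; ConcurrentHashMap's iterators are weakly consistent and never throw ConcurrentModificationException):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Writer threads may keep calling put() on this map from other threads...
static final Map<String, String> dbEntitiesDeletesBacklog = new ConcurrentHashMap<>();

// ...while the DB-sync thread iterates without external locking.
static void syncToDb() {
    for (Map.Entry<String, String> entry : dbEntitiesDeletesBacklog.entrySet()) {
        // write the delete to the DB here, then:
        dbEntitiesDeletesBacklog.remove(entry.getKey());
    }
}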
Possibly even better, consider rotating through new collections, such that every time you update the database you grab the collection and replace it with a new empty one where all new updates are written, avoiding locking the collection while the database writes are occurring. The collections in this case would be managed by some container to make this grab-and-replace thread safe. Note: you cannot expose the underlying collection to modifying code in this case, since you need to keep its reference strictly private for the swap to be effective (and to not introduce any race conditions).
Here is a sample of what I will be using. I am posting it here in the hope that it will help someone else with a similar issue.
public class MyDBSyncher {
    public static boolean running = false;
    public static HashMap<String, String> dbEntitiesInsertsBacklog_A = new HashMap<String, String>();
    public static HashMap<String, String> dbEntitiesInsertsBacklog_B = new HashMap<String, String>();

    public MyDBSyncher() {
        Timer dbUpdateJob = new Timer();
        dbUpdateJob.schedule(new TimerTask() {
            public void run() {
                running = true;
                boolean updateEntitiesTableSuccess = UpdateEntitiesTable();
                running = false;
            }
        }, 0, 10000); //TODO:: figure out the perfect saving delay
    }

    public HashMap getInsertableEntitiesHashMap() {
        if (running) {
            return dbEntitiesInsertsBacklog_B;
        } else {
            return dbEntitiesInsertsBacklog_A;
        }
    }

    private boolean UpdateEntitiesTable() {
        Iterator<Entry<String, String>> it2 = getInsertableEntitiesHashMap().entrySet().iterator();
        while (it2.hasNext()) {
            Entry<String, String> pairs = it2.next();
            String tmpEntityId = pairs.getKey();
            //some DB updates here
            it2.remove(); // the iterator removal suffices; no separate map.remove() needed
        }
        return true;
    }
}
