I'm trying to support modification (a deactivate() function call) of the following data structure in a thread-safe manner:
private static Map<String, Set<DBPartitionId>> dbPartitionStatus = new HashMap<String, Set<DBPartitionId>>();
public void deactivate(DBPartitionId partition) throws Exception {
synchronized (dbPartitionStatus) {
Set<DBPartitionId> partitions = dbPartitionStatus.get(serviceName);
if (partitions == null) {
partitions = new HashSet<DBPartitionId>();
}
partitions.add(partition);
dbPartitionStatus.put(serviceName, partitions);
}
}
If I were to replace the synchronization with a ConcurrentHashMap and ConcurrentSkipListSet duo, there would still be a race condition.
I was wondering if there is a cleaner way of achieving synchronization here (using java.util.concurrent).
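To make the race I'm worried about concrete, the naive replacement I had in mind looks roughly like this (just a sketch; serviceName is the same field used above):
private static Map<String, Set<DBPartitionId>> dbPartitionStatus =
        new ConcurrentHashMap<String, Set<DBPartitionId>>();

public void deactivate(DBPartitionId partition) {
    Set<DBPartitionId> partitions = dbPartitionStatus.get(serviceName);
    if (partitions == null) {
        partitions = new ConcurrentSkipListSet<DBPartitionId>();
        // Race: two threads can both see null, create separate sets, and the
        // second put() silently discards whatever the first thread added.
        dbPartitionStatus.put(serviceName, partitions);
    }
    partitions.add(partition);
}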
There should be no race conditions with the following implementation:
private static final ConcurrentMap<String, Set<DBPartitionId>> dbPartitionStatus =
    new ConcurrentHashMap<String, Set<DBPartitionId>>();

public void deactivate(DBPartitionId partition) {
    Set<DBPartitionId> partitions = dbPartitionStatus.get(serviceName);
    if (partitions == null) {
        partitions = new ConcurrentSkipListSet<DBPartitionId>();
        Set<DBPartitionId> p = dbPartitionStatus.putIfAbsent(serviceName, partitions);
        if (p != null) {
            partitions = p;
        }
    }
    partitions.add(partition);
}
I personally cannot see any issues with this sort of approach:
private static ConcurrentHashMap<String, ConcurrentSkipListSet<DBPartitionId>> dbPartitionStatus = new ConcurrentHashMap<>();

public boolean deactivate(DBPartitionId partition) throws Exception {
    ConcurrentSkipListSet<DBPartitionId> partitions = dbPartitionStatus.get(serviceName);
    if (partitions == null) {
        // Create a new set
        partitions = new ConcurrentSkipListSet<DBPartitionId>();
        // Attempt to add it atomically; if we added it, ev will be null.
        ConcurrentSkipListSet<DBPartitionId> ev = dbPartitionStatus.putIfAbsent(serviceName, partitions);
        // If non-null, someone else added a set first, so use that one.
        if (ev != null)
            partitions = ev;
    }
    // will return true if added successfully...
    return partitions.add(partition);
}
There is also the option of calling putIfAbsent() unconditionally, which does the get/put on the map in one "atomic" operation; however, it has the additional overhead in this case that you have to construct an empty set to pass in on every call.
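For what it's worth, on Java 8+ a computeIfAbsent-based sketch avoids constructing the empty set unless the key is actually absent (this assumes the same serviceName field as in the question, and that DBPartitionId is Comparable, as ConcurrentSkipListSet requires):
private static final ConcurrentMap<String, Set<DBPartitionId>> dbPartitionStatus =
        new ConcurrentHashMap<>();

public boolean deactivate(DBPartitionId partition) {
    // The mapping function is applied at most once per key, so the set is only
    // created when the key is genuinely absent.
    return dbPartitionStatus
            .computeIfAbsent(serviceName, k -> new ConcurrentSkipListSet<>())
            .add(partition);
}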
I have this method:
public List<String> composeList(DataBaseObject dBO) throws Exception {
    List<String> valueList = new ArrayList<>();
    Object object = dBO;
    for (String separatedFieldName : separatedFieldNames) {
        object = PropertyUtils.getProperty(object, separatedFieldName);
        valueList.add(object.toString());
    }
    return valueList;
}
I have a list of 1000 dBO objects and would like to call this method in a multi-threaded way.
But the return value of each call also goes into a list.
Here is the caller:
List<List<String>> valueLists = new ArrayList<>();
for (DataBaseObject dBO : listOfDBOs)
valueLists.add(composeList(dBO));
Since machines nowadays have multiple cores, I was wondering how I can make use of them. How do I call composeList in parallel and store the results in one ArrayList?
I know I can use Collections.synchronizedList, but the execution time of composeList is so small that I would end up adding elements one after another; even though the code is multi-threaded, it would effectively still run sequentially, because every add() takes the lock on the synchronized list.
This might sound like a design question, but it is still programming related, and I would really appreciate any help with this situation.
Java 8 parallel streams are designed for exactly this situation.
// Note: PropertyUtils.getProperty declares checked exceptions, so in real code the call
// would need to be wrapped (e.g. rethrown as a RuntimeException) inside the lambda.
List<String> dbFieldValues = dbObjectList.parallelStream()
        .flatMap(db -> separatedFieldNames.parallelStream()
                .map(fn -> PropertyUtils.getProperty(db, fn).toString()))
        .collect(Collectors.toList());
Assuming the collection separatedFieldNames supports parallel streams (e.g. an ArrayList), this will use multiple threads without any need to create them yourself.
Note that this assumes there are no side-effects to getProperty.
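If you prefer to keep the List<List<String>> shape from the caller in the question, a similar sketch works (assuming composeList wraps getProperty's checked exceptions so it can be used from a lambda):
List<List<String>> valueLists = listOfDBOs.parallelStream()
        .map(dBO -> composeList(dBO))   // each dBO is composed independently
        .collect(Collectors.toList());  // results come back in the original order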
Possibly this solution is a little "traditional style", without any of the cool new things like streams, but I would do something like the following:
public class AnyClass {
    private static final AtomicInteger index = new AtomicInteger(0);
    private static final Object lock = new Object();

    public class ComposerThread implements Runnable {
        private List<DataBaseObject> dboList;
        private List<List<String>> valueList;
        private List<String> fieldNames;

        public ComposerThread(List<DataBaseObject> dboList, List<List<String>> valueList,
                              List<String> fieldNames) {
            this.valueList = valueList;
            this.dboList = dboList;
            this.fieldNames = fieldNames;
        }

        public void run() {
            int i = index.getAndIncrement(); // thread takes the next unprocessed index
            while (i < dboList.size()) {
                DataBaseObject dbo = dboList.get(i);
                List<String> list = new ArrayList<>();
                try {
                    for (String separatedFieldName : fieldNames) {
                        Object value = PropertyUtils.getProperty(dbo, separatedFieldName);
                        list.add(value.toString());
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e); // getProperty declares checked exceptions
                }
                synchronized (lock) { // addition to the shared result list is synchronized
                    valueList.add(list);
                }
                i = index.getAndIncrement(); // thread takes the next unprocessed index
            }
        }
    }
    ....
}
Note that the AtomicInteger index and the Object lock have to be static final, because they are shared between the threads for synchronization. Now you can use the ComposerThread inner class in the same parent class:
public class AnyClass {
    ....
    private List<List<String>> composeValueList(List<DataBaseObject> dboList,
            List<String> fieldNames, int threadCount) throws InterruptedException {
        index.set(0); // reset the index before processing dboList
        List<List<String>> valueList = new ArrayList<List<String>>();
        Thread[] pool = new Thread[threadCount];
        for (int i = 0; i < pool.length; i++) {
            pool[i] = new Thread(new ComposerThread(dboList, valueList, fieldNames));
            pool[i].start();
        }
        for (Thread thread : pool) {
            thread.join(); // just wait until all of them are done
        }
        return valueList;
    }
}
As you can see, you can even set the number of threads.
Read about the ExecutorService and ForkJoin frameworks in Java; they may help you achieve what you want.
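For example, here is a minimal ExecutorService sketch (assuming the composeList method and listOfDBOs list from the question, and that the enclosing method can handle InterruptedException and ExecutionException):
ExecutorService executor = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());

List<Future<List<String>>> futures = new ArrayList<>();
for (DataBaseObject dBO : listOfDBOs) {
    // each task composes one dBO independently; a Callable may throw checked exceptions
    futures.add(executor.submit(() -> composeList(dBO)));
}

List<List<String>> valueLists = new ArrayList<>();
for (Future<List<String>> future : futures) {
    valueLists.add(future.get()); // blocks until that task is done
}
executor.shutdown();
The results end up in valueLists in the same order as listOfDBOs, because the futures are collected in submission order.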
I'm trying to multi-thread an import job, but I'm running into a problem where it causes duplicate data. I need to keep my map outside of the loop so all my threads can update and read from it, but I can't do this without making it final, and with it being final I can't update the map. Currently I need to put my Map object in the run method, but the problem comes when the values are not initially in the database and each thread creates a new one. This results in duplicate data in the database. Does anybody know how to do some sort of callback to update my map outside?
ExecutorService executorService = Executors.newFixedThreadPool(10);
final Map<Integer, Object> map = new HashMap<>();
map.putAll(/* populate from database */);
for (int i = 0; i < 10; i++) {
    executorService.execute(new Runnable() {
        public void run() {
            while ((line = br.readLine()) != null) {
                if (map.containsKey(123)) {
                    // read map object
                    session.update(object);
                } else {
                    map.put(123, someObject);
                    session.save(object);
                }
                if (rowCount % 250 == 0)
                    tx.commit();
            }
        }
    });
}
executorService.shutdown();
You need to use some synchronization technique.
The problematic part is when different threads try to put data into the map at the same time.
Example:
Thread 1 checks whether there is an object with key 123 in the map. Before thread 1 adds a new object to the map, thread 2 runs. Thread 2 also checks for key 123, finds nothing, and both threads then add an object for key 123 and save a new row. This causes the duplicates...
You can read more about synchronization here
http://docs.oracle.com/javase/tutorial/essential/concurrency/sync.html
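A rough sketch of how the check-then-act can be made atomic with ConcurrentHashMap's putIfAbsent (session and someObject are placeholders taken from the question):
ConcurrentMap<Integer, Object> map = new ConcurrentHashMap<>();

// putIfAbsent returns null only for the single thread that actually inserted the
// value, so only that thread saves a new row; every other thread sees the existing one.
Object previous = map.putIfAbsent(123, someObject);
if (previous == null) {
    session.save(someObject);   // we won the race: insert
} else {
    session.update(previous);   // already present: update the existing object
}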
Based on your problem description, it appears that you want a map whose data is consistent and where you always see the latest up-to-date data without missing any updates.
In this case, wrap your map with Collections.synchronizedMap(). This will ensure that all reads and writes to the map are synchronized, so you are guaranteed to look up a key against the latest data in the map and guaranteed to write to the map exclusively.
Refer to this SO discussion for a difference between the concurrency techniques used with maps.
Also, one more thing: defining a Map as final does not mean you cannot modify the map; you can definitely add and remove elements from it. What you cannot do, however, is change the variable to point to another map. This is illustrated by the simple code snippet below:
private final Map<Integer, String> testMap = Collections.synchronizedMap(new HashMap<Integer, String>());

testMap.put(1, "Tom"); // OK
testMap.remove(1); // OK
testMap = new HashMap<Integer, String>(); // ERROR! Cannot reassign a variable with the final modifier
I would suggest the following solution:
Use a ConcurrentHashMap.
Don't update and commit inside your crawling threads.
Trigger the save and commit in a separate thread when your map reaches a critical size.
Pseudocode sample:
final Object lock = new Object();
...
executorService.execute(new Runnable() {
    public void run() {
        ...
        synchronized (lock) {
            if (concurrentMap.size() > 250) {
                // drain the accumulated entries and hand them off for saving
                saveInASeparateThread(new ArrayList<>(concurrentMap.values()));
                concurrentMap.clear();
            }
        }
    }
});
The following logic resolved my issue. The code below isn't tested.
ExecutorService executorService = Executors.newFixedThreadPool(10);
final Map<Integer, MyObject> map = new ConcurrentHashMap<>();
map.putAll(myObjectMap); // existing records from the database, keyed by id
List<Future<String>> futures = new ArrayList<>();
for (int i = 0; i < 10; i++) {
    final int thread = i;
    Future<String> future = executorService.submit(new Callable<String>() {
        public String call() throws Exception {
            List<MyObject> list;
            CSVReader reader = new CSVReader(new InputStreamReader(csvFile.getStream()));
            list = bean.parse(strategy, reader);
            int listSize = list.size();
            int rowCount = 0;
            for (MyObject myObject : list) {
                rowCount++;
                Integer key = myObject.getId();
                if (map.putIfAbsent(key, myObject) == null) {
                    session.save(myObject);
                } else {
                    myObject = map.get(key);
                    // Do something
                    session.update(myObject);
                }
                if (rowCount % 250 == 0 || rowCount == listSize) {
                    tx.flush();
                    tx.clear();
                }
            }
            tx.commit();
            return "Thread " + thread + " completed.";
        }
    });
    futures.add(future);
}
for (Future<String> future : futures) {
    System.out.println(future.get());
}
executorService.shutdown();
I have a data structure like Map<Key, Set<Value>>. I'm trying to implement the following scenario:
Several producers update this map adding new values either to already existing keys or to new keys (in which case new map entries are created).
A consumer periodically polls a limited number of entries from the map and passes them to a processor.
Here's my take:
private static final int MAX_UPDATES_PER_PASS = 100;
private final ConcurrentHashMap<Key, Set<Value>> updates = new ConcurrentHashMap<Key, Set<Value>>();
@Override
public void updatesReceived(Key key, Set<Value> values) {
Set<Value> valuesSet = updates.get(key);
if (valuesSet == null){
valuesSet = Collections.newSetFromMap(new ConcurrentHashMap<Value, Boolean>());
Set<Value> previousValues = updates.putIfAbsent(key, valuesSet);
if (previousValues != null){
valuesSet = previousValues;
}
}
valuesSet.addAll(values);
}
private class UpdatesProcessor implements Runnable {
@Override
public void run() {
int updatesProcessed = 0;
Map<Key, Set<Value>> valuesToProcess = new HashMap<Key, Set<Value>>();
Iterator<Map.Entry<Key, Set<Value>>> iterator = updates.entrySet().iterator();
while(iterator.hasNext() && updatesProcessed < MAX_UPDATES_PER_PASS) {
Map.Entry<Key, Set<Value>> next = iterator.next();
iterator.remove(); // <-- here
Key key = next.getKey();
Set<Value> values = valuesToProcess.get(key);
if (values == null){
values = new HashSet<Value>();
valuesToProcess.put(key, values);
}
values.addAll(next.getValue());
updatesProcessed++;
}
if (!valuesToProcess.isEmpty()){
process(valuesToProcess);
}
}
}
The method updatesReceived() is called by producers of values from arbitrary threads. The UpdatesProcessor is scheduled for periodic execution through a ScheduledExecutorService, so it too can be called from arbitrary threads.
Every single value should be processed exactly once. No more no less. I don't care if a value gets processed sooner or later, but eventually it should.
I want it to be fast and furious, so I don't want to synchronize everything up.
This clumsy code with the iterator in the UpdatesProcessor serves one single goal which could be easily achieved if there was something like ConcurrentHashMap.poll(). But there isn't.
So, to the questions. First, is this guaranteed to work or not? After I call iterator.remove() the entry is removed from the map, and any additional values would go to a new entry's set, right?
And second, am I complicating things? Is there a common approach to (data structure for) this kind of scenario?
I am aggregating multiple values for keys in a multi-threaded environment. The keys are not known in advance. I thought I would do something like this:
class Aggregator {
protected ConcurrentHashMap<String, List<String>> entries =
new ConcurrentHashMap<String, List<String>>();
public Aggregator() {}
public void record(String key, String value) {
List<String> newList =
Collections.synchronizedList(new ArrayList<String>());
List<String> existingList = entries.putIfAbsent(key, newList);
List<String> values = existingList == null ? newList : existingList;
values.add(value);
}
}
The problem I see is that every time this method runs, I need to create a new instance of an ArrayList, which I then throw away (in most cases). This seems like unjustified abuse of the garbage collector. Is there a better, thread-safe way of initializing this kind of structure without having to synchronize the record method? I am somewhat surprised by the decision to have the putIfAbsent method not return the newly-created element, and by the lack of a way to defer instantiation until it is called for (so to speak).
Java 8 introduced an API to cater for this exact problem, making a 1-line solution:
public void record(String key, String value) {
entries.computeIfAbsent(key, k -> Collections.synchronizedList(new ArrayList<String>())).add(value);
}
For Java 7:
public void record(String key, String value) {
List<String> values = entries.get(key);
if (values == null) {
entries.putIfAbsent(key, Collections.synchronizedList(new ArrayList<String>()));
// At this point, there will definitely be a list for the key.
// We don't know or care which thread's new object is in there, so:
values = entries.get(key);
}
values.add(value);
}
This is the standard code pattern when populating a ConcurrentHashMap.
The special method putIfAbsent(K, V) will either put your value object in, or if another thread got there before you, it will ignore your value object. Either way, after the call to putIfAbsent(K, V), get(key) is guaranteed to be consistent between threads and therefore the above code is threadsafe.
The only wasted overhead is if some other thread adds a new entry at the same time for the same key: You may end up throwing away the newly created value, but that only happens if there is not already an entry and there's a race that your thread loses, which would typically be rare.
As of Java-8 you can create Multi Maps using the following pattern:
public void record(String key, String value) {
entries.computeIfAbsent(key,
k -> Collections.synchronizedList(new ArrayList<String>()))
.add(value);
}
The ConcurrentHashMap documentation (not the general contract) specifies that the ArrayList will only be created once for each key, at the slight initial cost of delaying updates while the ArrayList is being created for a new key:
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentHashMap.html#computeIfAbsent-K-java.util.function.Function-
In the end, I implemented a slight modification of @Bohemian's answer. His proposed solution overwrites the values variable with the putIfAbsent call, which creates the same problem I had before. The code that seems to work looks like this:
public void record(String key, String value) {
List<String> values = entries.get(key);
if (values == null) {
values = Collections.synchronizedList(new ArrayList<String>());
List<String> values2 = entries.putIfAbsent(key, values);
if (values2 != null)
values = values2;
}
values.add(value);
}
It's not as elegant as I'd like, but it's better than the original that creates a new ArrayList instance at every call.
Created two versions based on Gene's answer
public static <K,V> void putIfAbsentMultiValue(ConcurrentHashMap<K,List<V>> entries, K key, V value) {
List<V> values = entries.get(key);
if (values == null) {
values = Collections.synchronizedList(new ArrayList<V>());
List<V> values2 = entries.putIfAbsent(key, values);
if (values2 != null)
values = values2;
}
values.add(value);
}
public static <K,V> void putIfAbsentMultiValueSet(ConcurrentMap<K,Set<V>> entries, K key, V value) {
Set<V> values = entries.get(key);
if (values == null) {
values = Collections.synchronizedSet(new HashSet<V>());
Set<V> values2 = entries.putIfAbsent(key, values);
if (values2 != null)
values = values2;
}
values.add(value);
}
They both work well.
This is a problem I also looked for an answer to. The putIfAbsent method does not actually solve the extra object creation problem; it just makes sure that one of those objects doesn't replace another. But the race conditions among threads can still cause multiple objects to be instantiated. I could find 3 solutions for this problem (and I would follow this order of preference):
1- If you are on Java 8, the best way to achieve this is probably the new computeIfAbsent method of ConcurrentMap. You just need to give it a computation function which will be executed synchronously (at least for the ConcurrentHashMap implementation). Example:
private final ConcurrentMap<String, List<String>> entries =
new ConcurrentHashMap<String, List<String>>();
public void method1(String key, String value) {
entries.computeIfAbsent(key, s -> new ArrayList<String>())
.add(value);
}
This is from the javadoc of ConcurrentHashMap.computeIfAbsent:
If the specified key is not already associated with a value, attempts to compute its value using the given mapping function and enters it into this map unless null. The entire method invocation is performed atomically, so the function is applied at most once per key. Some attempted update operations on this map by other threads may be blocked while computation is in progress, so the computation should be short and simple, and must not attempt to update any other mappings of this map.
2- If you cannot use Java 8, you can use Guava's LoadingCache, which is thread-safe. You define a load function for it (just like the compute function above), and you can be sure that it will be called synchronously. Example:
private final LoadingCache<String, List<String>> entries = CacheBuilder.newBuilder()
.build(new CacheLoader<String, List<String>>() {
@Override
public List<String> load(String s) throws Exception {
return new ArrayList<String>();
}
});
public void method2(String key, String value) {
entries.getUnchecked(key).add(value);
}
3- If you cannot use Guava either, you can always synchronise manually and use double-checked locking. Example:
private final ConcurrentMap<String, List<String>> entries =
new ConcurrentHashMap<String, List<String>>();
public void method3(String key, String value) {
List<String> existing = entries.get(key);
if (existing != null) {
existing.add(value);
} else {
synchronized (entries) {
List<String> existingSynchronized = entries.get(key);
if (existingSynchronized != null) {
existingSynchronized.add(value);
} else {
List<String> newList = new ArrayList<>();
newList.add(value);
entries.put(key, newList);
}
}
}
}
I made an example implementation of all those 3 methods and additionally, the non-synchronized method, which causes extra object creation: http://pastebin.com/qZ4DUjTr
The memory waste (and GC cost, etc.) of creating empty ArrayLists is addressed in Java 1.7.0_40, so don't worry too much about creating empty ArrayLists.
Reference : http://javarevisited.blogspot.com.tr/2014/07/java-optimization-empty-arraylist-and-Hashmap-cost-less-memory-jdk-17040-update.html
The approach with putIfAbsent has the fastest execution time; it is from 2 to 50 times faster than the "lambda" approach in environments with high contention. The lambda isn't the reason for this slowdown; the issue is the mandatory synchronisation inside computeIfAbsent prior to the Java 9 optimisations.
The benchmark:
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
public class ConcurrentHashMapTest {
private final static int numberOfRuns = 1000000;
private final static int numberOfThreads = Runtime.getRuntime().availableProcessors();
private final static int keysSize = 10;
private final static String[] strings = new String[keysSize];
static {
for (int n = 0; n < keysSize; n++) {
strings[n] = "" + (char) ('A' + n);
}
}
public static void main(String[] args) throws InterruptedException {
for (int n = 0; n < 20; n++) {
testPutIfAbsent();
testComputeIfAbsentLamda();
}
}
private static void testPutIfAbsent() throws InterruptedException {
final AtomicLong totalTime = new AtomicLong();
final ConcurrentHashMap<String, AtomicInteger> map = new ConcurrentHashMap<String, AtomicInteger>();
final Random random = new Random();
ExecutorService executorService = Executors.newFixedThreadPool(numberOfThreads);
for (int i = 0; i < numberOfThreads; i++) {
executorService.execute(new Runnable() {
@Override
public void run() {
long start, end;
for (int n = 0; n < numberOfRuns; n++) {
String s = strings[random.nextInt(strings.length)];
start = System.nanoTime();
AtomicInteger count = map.get(s);
if (count == null) {
count = new AtomicInteger(0);
AtomicInteger prevCount = map.putIfAbsent(s, count);
if (prevCount != null) {
count = prevCount;
}
}
count.incrementAndGet();
end = System.nanoTime();
totalTime.addAndGet(end - start);
}
}
});
}
executorService.shutdown();
executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
System.out.println("Test " + Thread.currentThread().getStackTrace()[1].getMethodName()
+ " average time per run: " + (double) totalTime.get() / numberOfThreads / numberOfRuns + " ns");
}
private static void testComputeIfAbsentLamda() throws InterruptedException {
final AtomicLong totalTime = new AtomicLong();
final ConcurrentHashMap<String, AtomicInteger> map = new ConcurrentHashMap<String, AtomicInteger>();
final Random random = new Random();
ExecutorService executorService = Executors.newFixedThreadPool(numberOfThreads);
for (int i = 0; i < numberOfThreads; i++) {
executorService.execute(new Runnable() {
@Override
public void run() {
long start, end;
for (int n = 0; n < numberOfRuns; n++) {
String s = strings[random.nextInt(strings.length)];
start = System.nanoTime();
AtomicInteger count = map.computeIfAbsent(s, (k) -> new AtomicInteger(0));
count.incrementAndGet();
end = System.nanoTime();
totalTime.addAndGet(end - start);
}
}
});
}
executorService.shutdown();
executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
System.out.println("Test " + Thread.currentThread().getStackTrace()[1].getMethodName()
+ " average time per run: " + (double) totalTime.get() / numberOfThreads / numberOfRuns + " ns");
}
}
The results:
Test testPutIfAbsent average time per run: 115.756501 ns
Test testComputeIfAbsentLamda average time per run: 276.9667055 ns
Test testPutIfAbsent average time per run: 134.2332435 ns
Test testComputeIfAbsentLamda average time per run: 223.222063625 ns
Test testPutIfAbsent average time per run: 119.968893625 ns
Test testComputeIfAbsentLamda average time per run: 216.707419875 ns
Test testPutIfAbsent average time per run: 116.173902375 ns
Test testComputeIfAbsentLamda average time per run: 215.632467375 ns
Test testPutIfAbsent average time per run: 112.21422775 ns
Test testComputeIfAbsentLamda average time per run: 210.29563725 ns
Test testPutIfAbsent average time per run: 120.50643475 ns
Test testComputeIfAbsentLamda average time per run: 200.79536475 ns
I have a bunch of log files and I want to process them in Java, but I want to sort them first so I can have more human-readable results.
My Log Class :
public class Log{
//only relevant fields here
private String countryCode;
private AccessType accessType;
...etc..
}
AccessType is an enum with the values WEB, API and OTHER.
I'd like to group Log objects by both countryCode and accessType, so that the end product is a log list.
I got this working for grouping Logs into a log list by countryCode, like this:
public List<Log> groupByCountryCode(String countryCode) {
Map<String, List<Log>> map = new HashMap<String, List<Log>>();
for (Log log : logList) {
String key = log.getCountryCode();
if (map.get(key) == null) {
map.put(key, new ArrayList<Log>());
}
map.get(key).add(log);
}
List<Log> sortedByCountryCodeLogList = map.get(countryCode);
return sortedByCountryCodeLogList;
}
from this @Kaleb Brasee example:
Group by field name in Java
Here is what I've been trying for some time now, and I'm really stuck.
public List<Log> groupByCountryCode(String countryCode) {
Map<String, Map<AccessType, List<Log>>> map = new HashMap<String, Map<AccessType, List<Log>>>();
AccessType mapKey = null;
List<Log> innerList = null;
Map<AccessType, List<Log>> innerMap = null;
// inner sort
for (Log log : logList) {
String key = log.getCountryCode();
if (map.get(key) == null) {
map.put(key, new HashMap<AccessType, List<Log>>());
innerMap = new HashMap<AccessType, List<Log>>();
}
AccessType innerMapKey = log.getAccessType();
mapKey = innerMapKey;
if (innerMap.get(innerMapKey) == null) {
innerMap.put(innerMapKey, new ArrayList<Log>());
innerList = new ArrayList<Log>();
}
innerList.add(log);
innerMap.put(innerMapKey, innerList);
map.put(key, innerMap);
map.get(key).get(log.getAccessType()).add(log);
}
List<Log> sortedByCountryCodeLogList = map.get(countryCode).get(mapKey);
return sortedByCountryCodeLogList;
}
I'm not sure I know what I'm doing anymore
Your question is confusing. You want to sort the list, but you are creating many new lists, then discarding all but one of them?
Here is a method to sort the list. Note that Collections.sort() uses a stable sort. (This means that the original order of items within a group of country code and access type is preserved.)
class MyComparator implements Comparator<Log> {
public int compare(Log a, Log b) {
if (a.getCountryCode().equals(b.getCountryCode())) {
/* Country code is the same; compare by access type. */
return a.getAccessType().ordinal() - b.getAccessType().ordinal();
} else
return a.getCountryCode().compareTo(b.getCountryCode());
}
}
Collections.sort(logList, new MyComparator());
If you really want to do what your code is currently doing, at least skip the creation of unnecessary lists:
public List<Log> getCountryAndAccess(String cc, AccessType access) {
List<Log> sublist = new ArrayList<Log>();
for (Log log : logList)
if (cc.equals(log.getCountryCode()) && (log.getAccessType() == access))
sublist.add(log);
return sublist;
}
If you're able to use it, Google's Guava library has an Ordering class that might be able to help simplify things. Something like this might work:
Ordering<Log> byCountryCode = new Ordering<Log>() {
@Override
public int compare(Log left, Log right) {
return left.getCountryCode().compareTo(right.getCountryCode());
}
};
Ordering<Log> byAccessType = new Ordering<Log>() {
@Override
public int compare(Log left, Log right) {
return left.getAccessType().compareTo(right.getAccessType());
}
};
Collections.sort(logList, byCountryCode.compound(byAccessType));
You should create the new inner map first, then add it to the outer map:
if (map.get(key) == null) {
innerMap = new HashMap<AccessType, List<Log>>();
map.put(key, innerMap);
}
and similarly for the list element. This avoids creating unnecessary map elements which will then be overwritten later.
Overall, the simplest is to use the same logic as in your first method, i.e. if the element is not present in the map, insert it, then just get it from the map:
for (Log log : logList) {
String key = log.getCountryCode();
if (map.get(key) == null) {
map.put(key, new HashMap<AccessType, List<Log>>());
}
innerMap = map.get(key);
AccessType innerMapKey = log.getAccessType();
if (innerMap.get(innerMapKey) == null) {
innerMap.put(innerMapKey, new ArrayList<Log>());
}
innerMap.get(innerMapKey).add(log);
}
Firstly, it looks like you're adding each log entry twice with the final line map.get(key).get(log.getAccessType()).add(log); inside your for loop. I think you can do without that, given the code above it.
After fixing that, to return your List<Log> you can do:
List<Log> sortedByCountryCodeLogList = new ArrayList<Log>();
for (List<Log> nextLogs : map.get(countryCode).values()) {
sortedByCountryCodeLogList.addAll(nextLogs);
}
I think the code above should flatten it down into one list, still grouped by country code and access type (though not in insertion order, since you used HashMap and not LinkedHashMap), which I think is what you want.
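As a side note, if Java 8 is an option, a nested Collectors.groupingBy gives the same two-level grouping in one statement (a sketch against your Log class; "US" and AccessType.WEB are just example lookups):
Map<String, Map<AccessType, List<Log>>> grouped = logList.stream()
        .collect(Collectors.groupingBy(Log::getCountryCode,
                                       Collectors.groupingBy(Log::getAccessType)));

// All logs for one country code and access type:
List<Log> usWebLogs = grouped.get("US").get(AccessType.WEB);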