I am pretty new to Java and I'm having a hard time solving this problem.
Let's say I have two ArrayLists, personIdList and titelList, which get filled from a database by iterating over ResultSet objects:
personIdList.add(repPersonId.getLong("personID"));
titelList.add(repTitel.getString("titel"));
The first ArrayList (personIdList) contains IDs of different composers, like so:
[34, 34, 34, 37, 38, 133, 232, 232, 285, 285, 285, 285]
The second list (titelList) contains titles from those composers, like so:
[Symphonie, Sinfonia Concertante, Oper, Symphonie, Ouverture zur Oper, Ouverture zur Oper, Konzert für zwei Klaviere, Sinfonie, Chöre aus der Schauspielmusik, Requiem, Klavierkonzert, Klavierkonzert]
Can I somehow establish a connection between those lists? The composer ID should be connected to the corresponding title.
For example (pseudocode): personIdList.get(34) should give me all titles that are connected to the ID 34.
Do I have to use ArrayLists, or is there already something that does that?
As RC said, if you're not going to map it with an ORM, you can iterate over the id list and create a new Map (a key-value pair collection):
Map<Integer, List<String>> personTitles = new HashMap<>();
for (Integer id : personIdList) {
    // your solution for retrieving the titles from the database goes here:
    // presumably an EntityManager query or similar that takes `id` as a
    // parameter and returns that person's titles as a List<String>
    List<String> titlesForId = ...; // placeholder for the retrieved titles
    personTitles.put(id, titlesForId);
}
And then you can just say personTitles.get(34), which should retrieve your list of titles for that id.
If you can post more code of how you retrieve entries from the database, that would be helpful.
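If the two lists really are parallel (the i-th title belongs to the i-th id, as in the example data), you could also skip the extra queries and zip them directly. A minimal sketch, assuming personIdList holds the Long ids from getLong and titelList the matching titles at the same index:
Map<Long, List<String>> titlesById = new HashMap<>();
for (int i = 0; i < personIdList.size(); i++) {
    // create the list for this id on first use, then append the matching title
    titlesById.computeIfAbsent(personIdList.get(i), k -> new ArrayList<>())
              .add(titelList.get(i));
}
titlesById.get(34L); // [Symphonie, Sinfonia Concertante, Oper]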
You can write your own collector which will create the desired map:
Map<Integer, List<String>> groupedIds = personIdList.stream()
        .collect(new CustomCollector(titlelist.iterator()));
groupedIds.get(34); // [Symphonie, Sinfonia Concertante, Oper]
Below is the custom collector.
import java.util.*;
import java.util.function.*;
import java.util.stream.Collector;

public class CustomCollector implements Collector<Integer, Map<Integer, List<String>>, Map<Integer, List<String>>> {

    private final Iterator<String> iterator;

    public CustomCollector(Iterator<String> iterator) {
        this.iterator = iterator;
    }

    @Override
    public Supplier<Map<Integer, List<String>>> supplier() {
        return HashMap::new;
    }

    @Override
    public BiConsumer<Map<Integer, List<String>>, Integer> accumulator() {
        return (map, id) -> {
            List<String> list = map.get(id);
            if (list == null) {
                list = new ArrayList<>();
                map.put(id, list);
            }
            list.add(iterator.next());
        };
    }

    @Override
    public BinaryOperator<Map<Integer, List<String>>> combiner() {
        return (m1, m2) -> m1;
    }

    @Override
    public Function<Map<Integer, List<String>>, Map<Integer, List<String>>> finisher() {
        return HashMap::new;
    }

    @Override
    public Set<Characteristics> characteristics() {
        return Collections.emptySet();
    }
}
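For comparison, here is a sketch that needs no custom collector at all, assuming (as the collector does) that both lists are index-aligned: stream over the indices and group each title under the id at the same position.
Map<Integer, List<String>> groupedIds = IntStream.range(0, personIdList.size())
        .boxed()
        .collect(Collectors.groupingBy(personIdList::get,
                Collectors.mapping(titlelist::get, Collectors.toList())));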
There is a good collections framework in the Guava library (provided by Google). If you are flexible about using external jars, then use a ListMultimap from Guava:
ListMultimap<Long, String> multimap = ArrayListMultimap.create();
multimap.put(repPersonId.getLong("personID"), repTitel.getString("titel"));
What it does is accept more than one value for one key and store the values in a list.
If you want the keys, you can get them as a Set by calling
Set<Long> keySet = multimap.keySet();
When retrieving the values for a key from the multimap, it returns a list:
List<String> personTitles = multimap.get(personId);
This way you can retrieve the data without creating a list and putting it into the map yourself, which makes it easier to read than a map of ArrayLists.
If you want all the values (in your case, the titles), you can get them with
Collection<String> titles=multimap.values();
The returned Collection can be copied into a Set if you don't want duplicates, or into a List if you want all the values in the multimap regardless of duplicates.
So I think this is very handy and easier to understand than a map of ArrayLists.
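Putting it together for the question's data, a sketch of filling and querying the multimap could look like this, assuming personIdList and titelList are index-aligned (variable names taken from the question):
ListMultimap<Long, String> multimap = ArrayListMultimap.create();
for (int i = 0; i < personIdList.size(); i++) {
    multimap.put(personIdList.get(i), titelList.get(i));
}
List<String> titlesFor34 = multimap.get(34L); // [Symphonie, Sinfonia Concertante, Oper]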
Usually you would use an object and fill it, normally via an ORM mapping or similar.
But if you want a homemade solution: do the indices of the two lists at least match?
That is, is arr1.get(1) related to arr2.get(1)?
I'm in a weird situation: I have a JSON API that takes an array of neighborhood names as keys and an array of restaurant names as values, which gets GSON-parsed into a Restaurant object (defined with a String for the neighborhood and a List<String> for the restaurants). The system stores that data in a map whose keys are the neighborhood names and whose values are a set of restaurant names in that neighborhood. Therefore, I want to implement a function that takes the input from the API, groups the values by neighborhood, and concatenates the lists of restaurants.
Being constrained by Java 8, I can't use more recent constructs such as flatMapping to do everything in one line, and the best solution I've found is this one, which uses an intermediate map to store a Set of Lists before concatenating those lists into a Set to be stored as the value in the final map:
public Map<String, Set<String>> parseApiEntriesIntoMap(List<Restaurant> restaurants) {
    if (restaurants == null) {
        return null;
    }
    Map<String, Set<String>> restaurantListByNeighborhood = new HashMap<>();
    // Here we group by neighborhood and concatenate the lists of restaurants into a set
    Map<String, Set<List<String>>> map =
            restaurants.stream().collect(groupingBy(Restaurant::getNeighborhood,
                    Collectors.mapping(Restaurant::getRestaurantList, toSet())));
    map.forEach((n, r) -> restaurantListByNeighborhood.put(n, Sets.newHashSet(Iterables.concat(r))));
    return restaurantListByNeighborhood;
}
I feel like there has to be a way to get rid of that intermediate map and do everything in one line. Does someone have a better solution that would allow me to do this?
With Java 8 you could simply use toMap with a merge function defined as:
public Map<String, Set<String>> parseApiEntriesIntoMap(List<Restaurant> restaurants) {
    // read below about the null check
    return restaurants.stream()
            .collect(Collectors.toMap(Restaurant::getNeighborhood,
                    r -> new HashSet<>(r.getRestaurantList()), (set1, set2) -> {
                        set1.addAll(set2);
                        return set1;
                    }));
}
Apart from that, reconsider the null check and early return at the start of your method:
if (restaurants == null) {
    return null;
}
When you deal with empty collections and maps instead of null, that check becomes redundant: the code above returns an empty Map for an empty List by the nature of the stream-and-collect operation itself.
Note: if you later need code more closely resembling flatMapping for future upgrades, you can use the implementation provided in this answer.
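For reference, a sketch of what that could look like once Collectors.flatMapping is available (Java 9+), assuming the same Restaurant accessors as above:
Map<String, Set<String>> result = restaurants.stream()
        .collect(Collectors.groupingBy(Restaurant::getNeighborhood,
                Collectors.flatMapping(r -> r.getRestaurantList().stream(),
                        Collectors.toSet())));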
Alternatively, a solution without streams would look similar and use Map.merge with a similar BiFunction:
public Map<String, Set<String>> parseApiEntriesIntoMap(List<Restaurant> restaurants) {
    Map<String, Set<String>> restaurantListByNeighborhood = new HashMap<>();
    for (Restaurant restaurant : restaurants) {
        restaurantListByNeighborhood.merge(restaurant.getNeighborhood(),
                new HashSet<>(restaurant.getRestaurantList()),
                (strings, strings2) -> {
                    strings.addAll(strings2);
                    return strings;
                });
    }
    return restaurantListByNeighborhood;
}
You can also flatten the Set<List<String>> after collecting them using Collectors.collectingAndThen
Map<String, Set<String>> res1 = list.stream()
        .collect(Collectors.groupingBy(Restaurant::getNeighborhood,
                Collectors.mapping(Restaurant::getRestaurantList,
                        Collectors.collectingAndThen(Collectors.toSet(),
                                set -> set.stream().flatMap(List::stream).collect(Collectors.toSet())))));
I want to build a Map whose entries are sorted by value. I receive a list of purchases containing {customerId, purchaseAmount}, and want to build a map which maps each customer to their total purchase amount. A single customer may have multiple purchases.
Finally, I want to process this information customer-by-customer, in order of decreasing total purchase amount, meaning that I process the highest-spending customer first and the lowest-spending customer last.
My initial solution was to build a Map (using HashMap), convert this Map to a List (LinkedList), sort this List in decreasing order, and then process it. This is an O(n log n) solution, and I believe it is the best possible time complexity. However, I want to know if there is some way to leverage a data structure such as TreeMap, which is inherently sorted. By default it is sorted by its keys, but I want it sorted by value. My current solution is below.
import java.util.*;

public class MessageProcessor {

    public static void main(String[] args) {
        List<Purchase> purchases = new ArrayList<>();
        purchases.add(new Purchase(1, 10));
        purchases.add(new Purchase(2, 20));
        purchases.add(new Purchase(3, 10));
        purchases.add(new Purchase(1, 22));
        purchases.add(new Purchase(2, 100));
        processPurchases(purchases);
    }

    private static void processPurchases(List<Purchase> purchases) {
        Map<Integer, Double> map = new HashMap<>();
        for (Purchase p : purchases) {
            if (!map.containsKey(p.customerId)) {
                map.put(p.customerId, p.purchaseAmt);
            } else {
                double value = map.get(p.customerId);
                map.put(p.customerId, value + p.purchaseAmt);
            }
        }
        List<Purchase> list = new LinkedList<>();
        for (Map.Entry<Integer, Double> entry : map.entrySet()) {
            list.add(new Purchase(entry.getKey(), entry.getValue()));
        }
        System.out.println(list);
        Comparator<Purchase> comparator = Comparator.comparing(p -> p.getPurchaseAmt());
        list.sort(comparator.reversed());
        // Process list
        // ...
    }
}

class Purchase {
    int customerId;
    double purchaseAmt;

    public Purchase(int customerId, double purchaseAmt) {
        this.customerId = customerId;
        this.purchaseAmt = purchaseAmt;
    }

    public double getPurchaseAmt() {
        return this.purchaseAmt;
    }
}
The current code accomplishes what I want to do, but I would like to know if there is a way to avoid transforming the Map into a List and then sorting the List with my custom Comparator, perhaps using some kind of sorted Map. Any advice would be appreciated, as would suggestions on how to make my code more readable or idiomatic. Thanks, this is my first post on StackOverflow.
First of all, a TreeMap does not work for you, because it is sorted by its keys, not by its values. Another alternative would be a LinkedHashMap, which keeps its entries in insertion order.
You also can use Java Streams to process your List:
Map<Integer, Double> map = purchases.stream()
        .collect(Collectors.toMap(Purchase::getCustomerId, Purchase::getPurchaseAmt, (a, b) -> a + b));
This creates a map with the customerId as key and the sum of all purchases as value. Next you can sort that by using another stream and collecting it into a LinkedHashMap:
LinkedHashMap<Integer, Double> sorted = map.entrySet().stream()
        .sorted(Comparator.comparing(Map.Entry<Integer, Double>::getValue).reversed())
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (a, b) -> {
            throw new IllegalStateException("");
        }, LinkedHashMap::new));
At the end you can create a new list again if you need it:
List<Purchase> list = sorted.entrySet().stream()
        .map(e -> new Purchase(e.getKey(), e.getValue()))
        .collect(Collectors.toList());
If you want more basic information about Java Streams, here is an official tutorial.
I need to validate that a map (String to String) contains no entry whose key and value are equal, compared case-insensitively. For example:
("hello", "helLo") // is not a valid entry
I was wondering if Google Collections' Iterables combined with Predicates could somehow solve this problem easily.
Yes, I could write a simple iteration over the entries myself, but I'm wondering if something already exists.
I'm looking for something along the lines of Iterables.tryFind(fromToMaster, Predicates.isEqualEntry(IGNORE_CASE)).isPresent()
If you want to use Guava, you can use the Maps utilities, specifically the filterEntries function.
An example that keeps only the entries where the key does not equal the value (ignoring case) could look like this:
Map<String, String> map = new HashMap<>();
map.put("hello", "helLo");
map.put("Foo", "bar");

Map<String, String> filtered = Maps.filterEntries(map, new Predicate<Map.Entry<String, String>>() {
    @Override
    public boolean apply(Map.Entry<String, String> input) {
        return !input.getKey().equalsIgnoreCase(input.getValue());
    }
});

System.out.println(filtered); // will print {Foo=bar}
However, there is no default Predicate in Guava's Predicates that I know of that does what you want.
Addition:
If you want a validation mechanism without creating a new map, you can use Iterables and its any method to iterate over the map's entry set. To make the condition more readable, I would assign the predicate to a variable or a member field of the class you are working in.
Predicate<Map.Entry<String, String>> keyEqualsValueIgnoreCase = new Predicate<Map.Entry<String, String>>() {
    @Override
    public boolean apply(Map.Entry<String, String> input) {
        return input.getKey().equalsIgnoreCase(input.getValue());
    }
};

if (Iterables.any(map.entrySet(), keyEqualsValueIgnoreCase)) {
    throw new IllegalStateException();
}
Or, if you need the offending entry, you can use the Iterables#tryFind method and the returned Optional:
Optional<Map.Entry<String, String>> invalid = Iterables.tryFind(map.entrySet(), keyEqualsValueIgnoreCase);
if (invalid.isPresent()) {
    throw new IllegalStateException("Invalid entry " + invalid.get());
}
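If plain Java 8 streams are an option instead of Guava, the same validation can be written as one expression (a sketch using the same map as above):
boolean hasInvalidEntry = map.entrySet().stream()
        .anyMatch(e -> e.getKey().equalsIgnoreCase(e.getValue()));
if (hasInvalidEntry) {
    throw new IllegalStateException();
}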
I want to store keys with multiple values in a map.
For example: I am reading the keys, which are Strings, from one ArrayList and the values, which are Integers, from another ArrayList:
Keys Values
humans 50
elfs 20
dwarfs 30
humans 40
elfs 10
and I want to store this information like this: Map<String, ArrayList<Integer>>
[humans = {50,40}]
[elfs = {20,10}]
[dwarfs = {30}]
Is it possible to do this?
I recommend using the Guava Multimap. Alternatively, your
Map<String, ArrayList<Integer>>
will also accomplish this. When doing a put, determine whether there is already a list associated with the key; if there is, your put becomes a get(key).add(value), otherwise it becomes a put of a new list containing the value. Likewise, a remove removes a value from the associated list, or removes the list entirely if that would leave it empty (a sketch of this is shown below).
Also, a Map<String, HashSet<Integer>> will probably result in better performance than a map of lists; obviously don't do this if you want to associate duplicate values with a key.
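A minimal sketch of that put/remove bookkeeping with plain collections (class and method names are just for illustration):
import java.util.*;

class MultiValueMap {
    private final Map<String, List<Integer>> map = new HashMap<>();

    // put: create the list on first use, then append the value
    void putValue(String key, Integer value) {
        map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    // remove: drop the value, and drop the whole list once it becomes empty
    void removeValue(String key, Integer value) {
        List<Integer> values = map.get(key);
        if (values != null) {
            values.remove(value); // removes by object, not by index
            if (values.isEmpty()) {
                map.remove(key);
            }
        }
    }

    List<Integer> getValues(String key) {
        return map.getOrDefault(key, Collections.emptyList());
    }
}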
I do this:
import java.util.*;

public class StringToListInt {

    private Map<String, List<Integer>> stringToListInt;

    public StringToListInt() {
        stringToListInt = new HashMap<String, List<Integer>>();
    }

    public void addInt(String string, Integer someValue) {
        List<Integer> listInt = stringToListInt.get(string);
        if (listInt == null) {
            listInt = new ArrayList<Integer>();
            stringToListInt.put(string, listInt);
        }
        listInt.add(someValue);
    }

    public List<Integer> getInts(String string) {
        return stringToListInt.get(string);
    }
}
If you add in some Generics, I imagine you would end up with something very similar to Guava's MultiMap without the dependency.
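Usage with the data from the question might then look like this (a quick sketch, not part of the original class):
StringToListInt races = new StringToListInt();
races.addInt("humans", 50);
races.addInt("elfs", 20);
races.addInt("dwarfs", 30);
races.addInt("humans", 40);
races.addInt("elfs", 10);

System.out.println(races.getInts("humans")); // [50, 40]
System.out.println(races.getInts("elfs"));   // [20, 10]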
I'm wondering if anyone knows a good way to remove duplicate values in a LinkedHashMap. I have a LinkedHashMap with pairs of String and List<String>, and I'd like to remove duplicates across the ArrayLists to improve some downstream processing.
The only thing I can think of is keeping a log of the processed values as I iterate over the map and then through each ArrayList, checking whether I've encountered a value previously. This approach seems like it would degrade in performance as the lists grow. Is there a way to pre-process the map to remove duplicates from the ArrayList values?
To illustrate, if I have
String1>List1 (a, b, c)
String2>List2 (c, d, e)
I would want to remove "c" so there are no duplicates across the Lists within the HashMap.
I believe you could create a second map that can be sorted by its values (alphabetically or numerically), then do a single sweep through the sorted entries, checking whether the current value is equal to the next one; if it is, remove the next one and keep the index the same, so you stay at the same position in the sorted list.
Or, when you are adding values, you can check whether the map already contains that value.
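A minimal sketch of that check-while-adding idea, using one shared Set to track the values already seen across all lists (class and method names are illustrative):
import java.util.*;

class DeduplicatingMap {
    private final Map<String, List<String>> map = new LinkedHashMap<>();
    private final Set<String> seen = new HashSet<>(); // values seen across all lists

    // only adds the value if it has not appeared in any list yet
    void addValue(String key, String value) {
        if (seen.add(value)) {
            map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        }
    }

    Map<String, List<String>> asMap() {
        return map;
    }
}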
Given your clarification, you want something like this:
class KeyValue {
    public String key;
    public Object value;

    KeyValue(String key, Object value) {
        this.key = key;
        this.value = value;
    }

    public boolean equals(Object o) {
        // boilerplate omitted, only use the value field for comparison
    }

    public int hashCode() {
        return value.hashCode();
    }
}

public void deduplicate(Map<String, List<Object>> items) {
    Set<KeyValue> kvs = new HashSet<KeyValue>();
    for (Map.Entry<String, List<Object>> entry : items.entrySet()) {
        String key = entry.getKey();
        List<Object> values = entry.getValue();
        for (Object value : values) {
            kvs.add(new KeyValue(key, value));
        }
        values.clear();
    }
    for (KeyValue kv : kvs) {
        items.get(kv.key).add(kv.value);
    }
}
Using a set will remove the duplicate values, and the KeyValue lets us preserve the original hash key while doing so. Add getters and setters or generics as needed. This will also modify the original map and the lists in it in place. I also think the performance for this should be O(n).
I'm assuming you need unique elements (contained in your Lists) and not unique Lists.
If you need no association between the Map's key and elements in its associated List, just add all of the elements individually to a Set.
If you add all of the Lists to a Set, it will contain the unique List objects, not unique elements of the Lists, so you have to add the elements individually.
(you can, of course, use addAll to make this easier)
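For example, a quick sketch with the data from the question, collecting every element of every list into a single Set:
Map<String, List<String>> map = new LinkedHashMap<>();
map.put("String1", Arrays.asList("a", "b", "c"));
map.put("String2", Arrays.asList("c", "d", "e"));

// addAll keeps only one copy of each element
Set<String> uniqueElements = new LinkedHashSet<>();
for (List<String> values : map.values()) {
    uniqueElements.addAll(values);
}
System.out.println(uniqueElements); // [a, b, c, d, e]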
So, to clarify... You essentially have K, [V1...Vn] and you want unique values for all V?
public void add(HashMap<String, List<Object>> map, HashMap<Object, String> listObjects,
                String key, List<Object> values) {
    List<Object> uniqueValues = new ArrayList<Object>();
    for (int i = 0; i < values.size(); i++) {
        if (!listObjects.containsKey(values.get(i))) {
            listObjects.put(values.get(i), key);
            uniqueValues.add(values.get(i));
        }
    }
    map.put(key, uniqueValues);
}
Essentially, we have another HashMap that stores the list values and we remove the non-unique ones when adding a list to the map. This also gives you the added benefit of knowing which list a value occurs in.
Using Guava:
Map<V, K> uniques = new LinkedHashMap<V, K>();
for (Map.Entry<K, List<V>> entry : mapWithDups.entrySet()) {
    for (V v : entry.getValue()) {
        uniques.put(v, entry.getKey());
    }
}
ListMultimap<K, V> uniqueLists = Multimaps.invertFrom(Multimaps.forMap(uniques),
        ArrayListMultimap.create());
Map<K, List<V>> uniqueListsMap = (Map) uniqueLists.asMap(); // only if necessary
which should preserve the ordering of the values, and keep them unique. If you can use a ListMultimap<K, V> for your result -- which you probably can -- then go for it, otherwise you can probably just cast uniqueLists.asMap() to a Map<K, List<V>> (with some abuse of generics, but with guaranteed type safety).
As others have noted, you could check the value as you add, but, if you have to do it after the fact:
static public void removeDups(Map<String, List<String>> in) {
    ArrayList<String> allValues = new ArrayList<String>();
    for (List<String> inValue : in.values())
        allValues.addAll(inValue);

    HashSet<String> uniqueSet = new HashSet<String>(allValues);
    for (String unique : uniqueSet)
        allValues.remove(unique);

    // anything left over was a duplicate
    HashSet<String> nonUniqueSet = new HashSet<String>(allValues);
    for (List<String> inValue : in.values())
        inValue.removeAll(nonUniqueSet);
}

public static void main(String[] args) {
    HashMap<String, List<String>> map = new HashMap<String, List<String>>();
    map.put("1", new ArrayList<>(Arrays.asList("a", "b", "c", "a")));
    map.put("2", new ArrayList<>(Arrays.asList("d", "e", "f")));
    map.put("3", new ArrayList<>(Arrays.asList("a", "e")));
    System.out.println("Before");
    System.out.println(map);
    removeDups(map);
    System.out.println("After");
    System.out.println(map);
}
generates an output of
Before
{3=[a, e], 2=[d, e, f], 1=[a, b, c, a]}
After
{3=[], 2=[d, f], 1=[b, c]}