I have List<Person> persons = new ArrayList<>(); and I want to list all unique names. I mean, if there are "John", "Max", "John", "Greg", then I want to list only "Max" and "Greg". Is there some way to do it with a Java stream?
We can use streams and Collectors.groupingBy to count how many occurrences we have of each name, then keep only the names that appear exactly once:
List<String> res = persons.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
        .entrySet()
        .stream()
        .filter(e -> e.getValue() == 1)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
System.out.println(res); // [Max, Greg]
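Since the question actually has a List<Person>, the same approach works by grouping on the name instead of the element itself. A self-contained sketch, assuming a minimal hypothetical Person class with a getName() accessor:

```java
import java.util.*;
import java.util.stream.Collectors;

public class UniqueNames {
    // Hypothetical minimal Person; the real class presumably has more fields
    static class Person {
        private final String name;
        Person(String name) { this.name = name; }
        String getName() { return name; }
    }

    static List<String> uniqueNames(List<Person> persons) {
        // Count occurrences per name, then keep names seen exactly once
        return persons.stream()
                .collect(Collectors.groupingBy(Person::getName, Collectors.counting()))
                .entrySet()
                .stream()
                .filter(e -> e.getValue() == 1)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Person> persons = List.of(
                new Person("John"), new Person("Max"),
                new Person("John"), new Person("Greg"));
        // Prints Max and Greg (iteration order of the intermediate HashMap is unspecified)
        System.out.println(uniqueNames(persons));
    }
}
```

Note that groupingBy collects into a HashMap, so the order of the surviving names is not guaranteed; use a TreeMap or LinkedHashMap supplier if order matters.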
List<String> persons = new ArrayList<>();
persons.add("Max");
persons.add("John");
persons.add("John");
persons.add("Greg");
List<String> uniqueNames = persons.stream()
        .filter(person -> Collections.frequency(persons, person) == 1)
        .collect(Collectors.toList());
A first-guess solution:
persons.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
        .entrySet()
        .stream()
        .filter(entry -> entry.getValue() == 1)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
Here is my solution (note that distinct() keeps one copy of each duplicated name, rather than dropping duplicated names entirely):
List<String> persons = new ArrayList<>();
persons.add("John");
persons.add("John");
persons.add("MAX");
persons.add("Greg");
persons.stream()
        .distinct()
        .sorted()
        .collect(Collectors.toList());
This is an old post, but I'd like to propose yet another approach based on a custom collector:
public static <T> Collector<T, ?, List<T>> excludingDuplicates() {
    return Collector.<T, Map<T, Boolean>, List<T>>of(
            LinkedHashMap::new,
            (map, elem) -> map.compute(elem, (k, v) -> v == null),
            (left, right) -> {
                right.forEach((k, v) -> left.merge(k, v, (o, n) -> false));
                return left;
            },
            m -> m.keySet().stream().filter(m::get).collect(Collectors.toList()));
}
Here I'm using Collector.of to create a custom collector that accumulates elements into a LinkedHashMap: if the element is not yet present as a key, its value is set to true; otherwise it is set to false. The merge function is only applied for parallel streams: it merges the right map into the left map by attempting to put each entry of the right map into the left map, changing the value of already-present keys to false. Finally, the finisher function returns a list of the keys of the map whose values are true.
This method can be used as follows:
List<String> people = Arrays.asList("John", "Max", "John", "Greg");
List<String> result = people.stream().collect(excludingDuplicates());
System.out.println(result); // [Max, Greg]
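Because the combiner path only runs for parallel streams, it's worth checking that the result is unchanged when using parallelStream(). A self-contained sketch repeating the collector from above:

```java
import java.util.*;
import java.util.stream.Collector;
import java.util.stream.Collectors;

public class ParallelDemo {
    // Same custom collector as in the answer above
    public static <T> Collector<T, ?, List<T>> excludingDuplicates() {
        return Collector.<T, Map<T, Boolean>, List<T>>of(
                LinkedHashMap::new,
                (map, elem) -> map.compute(elem, (k, v) -> v == null),
                (left, right) -> {
                    right.forEach((k, v) -> left.merge(k, v, (o, n) -> false));
                    return left;
                },
                m -> m.keySet().stream().filter(m::get).collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        List<String> people = Arrays.asList("John", "Max", "John", "Greg");
        // The combiner only kicks in here, when the stream is split across threads
        List<String> result = people.parallelStream().collect(excludingDuplicates());
        System.out.println(result); // [Max, Greg]
    }
}
```

The collector is created without the CONCURRENT or UNORDERED characteristics, so encounter order is preserved even in parallel and the output matches the sequential run.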
And here's another approach simpler than using a custom collector:
Map<String, Boolean> duplicates = new LinkedHashMap<>();
people.forEach(elem -> duplicates.compute(elem, (k, v) -> v != null));
duplicates.values().removeIf(v -> v);
Set<String> allUnique = duplicates.keySet();
System.out.println(allUnique); // [Max, Greg]
You can try the code below.
List<Person> uniquePersons = personList.stream()
        .collect(Collectors.groupingBy(Person::getName))
        .entrySet()
        .stream()
        .filter(entry -> entry.getValue().size() == 1)
        .map(entry -> entry.getValue().get(0))
        .collect(Collectors.toList());
This should remove all the duplicate elements.
List<String> persons = new ArrayList<>();
persons.add("John");
persons.add("John");
persons.add("MAX");
persons.add("Greg");
Set<String> set = new HashSet<>();
Set<String> duplicateSet = new HashSet<>();
for (String p : persons) {
    if (!set.add(p)) { // add returns false if the element was already present
        duplicateSet.add(p);
    }
}
System.out.println(duplicateSet); // [John]
set.removeAll(duplicateSet);
System.out.println(set);
You can simply use Collections.frequency to check how many times an element occurs in the list, as shown below, to filter out the duplicates:
List<String> listInputs = new ArrayList<>();
//add your users
List<String> listOutputs = new ArrayList<>();
for (String value : listInputs) {
    if (Collections.frequency(listInputs, value) == 1) {
        listOutputs.add(value);
    }
}
System.out.println(listOutputs);
I have two lists, and I need to check that every product (from products) has a code (from productCodes).
List<String> productCodes = List.of("X_14_AA_85", "X_14_BB_85", "X_14_ZZ_85");
List<String> products = List.of("AA", "BB", "CC", "ZZ");
// I want to achieve a collection of (product code, product)
// according to whether the product name exists in the product code
// key - product code, value - product
/*
Map<String, String> map = Map.of(
"AA", "X_14_AA_85",
"BB", "X_14_BB_85",
"CC", null, // null if code doesn't exist
"ZZ", "X_14_ZZ_85"
);
*/
// after a filter with null keys I could return a message something like this
// List<String> nullableProducts = List.of("CC");
// return "I could prompt that there's no code for product/s: " + nullableProducts;
Is there a way with streams to filter by list item values?
You can stream the keySet and filter null values:
Java 16+:
List<String> list = map.keySet().stream()
.filter(key -> map.get(key) == null).toList();
Java 15 and older:
List<String> list = map.keySet().stream()
.filter(key -> map.get(key) == null)
.collect(Collectors.toList());
Note: You can't instantiate an unmodifiable Map using Map.of() with null keys or values. Instead, you can do:
Map<String, String> map = new HashMap<>();
map.put("AA", "X_14_AA_85");
map.put("BB", "X_14_BB_85");
map.put("CC", null);
map.put("ZZ", "X_14_ZZ_85");
If the purpose is to get a map containing null values, this has to be implemented with a custom accumulator, because Collectors.toMap throws a NullPointerException when putting a null value:
List<String> productCodes = List.of("X_14_AA_85", "X_14_BB_85", "X_14_ZZ_85");
List<String> products = List.of("AA", "BB", "CC", "ZZ");
Map<String, String> mapCodes = products.stream()
        .distinct()
        .collect(
                HashMap::new,
                (m, p) -> m.put(p, productCodes.stream()
                        .filter(pc -> pc.contains(p))
                        .findFirst()
                        .orElse(null)),
                HashMap::putAll
        );
// -> {AA=X_14_AA_85, BB=X_14_BB_85, CC=null, ZZ=X_14_ZZ_85}
Then the list of non-matched products may be retrieved as follows:
List<String> nonMatchedProducts = mapCodes.entrySet()
        .stream()
        .filter(e -> e.getValue() == null)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
// -> [CC]
However, since findFirst returns an Optional, it may be used along with Collectors.toMap, and the non-matched values can then be filtered out using Optional::isEmpty (Java 11+):
Map<String, Optional<String>> mapCodes2 = products.stream()
        .distinct()
        .collect(Collectors.toMap(
                p -> p,
                p -> productCodes.stream().filter(pc -> pc.contains(p)).findFirst()
        ));
// -> {AA=Optional[X_14_AA_85], BB=Optional[X_14_BB_85], CC=Optional.empty, ZZ=Optional[X_14_ZZ_85]}

List<String> nonMatchedProducts2 = mapCodes2.entrySet()
        .stream()
        .filter(e -> e.getValue().isEmpty())
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
// -> [CC]
Alternatively, the null/empty values may not be stored at all; then the non-matched products can be found by removing all the matched ones:
Map<String, String> map3 = new HashMap<>();
for (String p : products) {
    productCodes.stream()
            .filter(pc -> pc.contains(p))
            .findFirst()
            .ifPresent(pc -> map3.put(p, pc)); // only matched pairs
}
// -> {AA=X_14_AA_85, BB=X_14_BB_85, ZZ=X_14_ZZ_85}
List<String> nonMatchedProducts3 = new ArrayList<>(products);
nonMatchedProducts3.removeAll(map3.keySet());
// -> [CC]
Given your two lists, I would do something like this. I added two products that contain non-existent codes.
List<String> products =
List.of("X_14_AA_85", "X_14_SS_88", "X_14_BB_85", "X_14_ZZ_85", "X_16_RR_85");
List<String> productCodes = List.of("AA", "BB", "CC", "ZZ");
Declare a lambda to extract the code, and copy the codes to a set for efficient lookup. In fact, since duplicate codes aren't needed, a set would be the preferred data structure from the start.
Assuming the product code is in the same place and of the same length, you can do it like this using substring. Otherwise you may need a regular expression to parse the product string.
Function<String, String> extractCode =
        code -> code.substring(5, 7);
Set<String> productCodeSet = new HashSet<>(productCodes);
And run it like this.
List<String> missingCodes = products.stream()
        .filter(product -> !productCodeSet.contains(extractCode.apply(product)))
        .toList();
System.out.println("There are no codes for the following products: " + missingCodes);
Prints
There are no codes for the following products: [X_14_SS_88, X_16_RR_85]
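If the code's position or length varies, the substring extractor can be swapped for a regex-based one, as the answer suggests. A self-contained sketch; the pattern below is an assumption that the code is the two-uppercase-letter group between underscores:

```java
import java.util.*;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MissingCodes {
    // Hypothetical pattern: two uppercase letters surrounded by underscores
    private static final Pattern CODE = Pattern.compile("_([A-Z]{2})_");

    static List<String> missingCodes(List<String> products, Set<String> productCodeSet) {
        Function<String, String> extractCode = product -> {
            Matcher m = CODE.matcher(product);
            return m.find() ? m.group(1) : ""; // empty string never matches the set
        };
        return products.stream()
                .filter(product -> !productCodeSet.contains(extractCode.apply(product)))
                .toList();
    }

    public static void main(String[] args) {
        List<String> products =
                List.of("X_14_AA_85", "X_14_SS_88", "X_14_BB_85", "X_14_ZZ_85", "X_16_RR_85");
        Set<String> productCodeSet = new HashSet<>(List.of("AA", "BB", "CC", "ZZ"));
        System.out.println("There are no codes for the following products: "
                + missingCodes(products, productCodeSet));
        // prints: There are no codes for the following products: [X_14_SS_88, X_16_RR_85]
    }
}
```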
Consider the following data structures:
ArrayList<HashMap<String, String>> entries = new ArrayList<>();
ArrayList<String> keyNamesToInclude = new ArrayList<>();
This code creates a copy of entries, but with hashmaps only including the keys in keyNamesToInclude:
ArrayList<HashMap<String, String>> copies = new ArrayList<>();
for (HashMap<String, String> entry: entries) {
HashMap<String, String> copy = new HashMap<>();
for (String keyName: keyNamesToInclude) {
copy.put(keyName, entry.get(keyName));
}
copies.add(copy);
}
How would one create this with Streams in a functional way?
It is better to convert keyNamesToInclude into a Set to make key lookups fast. Then use List::stream to get a Stream<HashMap>, and for each map collect the filtered stream of its entries into a new map, and the maps into a list:
Set<String> keys = new HashSet<>(keyNamesToInclude); // removes possible duplicates

List<Map<String, String>> copies = entries.stream() // Stream<HashMap>
        .map(m -> m.entrySet()
                .stream()
                .filter(e -> keys.contains(e.getKey()))
                .collect(Collectors.toMap(
                        Map.Entry::getKey, Map.Entry::getValue)))
        .collect(Collectors.toList());
If it is very important for copies to have the concrete implementations ArrayList and HashMap, casting or special forms of the collectors may be needed, even though Collectors.toList() currently returns an ArrayList and Collectors.toMap a HashMap (neither is guaranteed by the specification):
// casting
ArrayList<HashMap<String, String>> copies2 = (ArrayList) entries.stream()
        .map(m -> (HashMap<String, String>) m.entrySet()
                .stream()
                .filter(e -> keys.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)))
        .collect(Collectors.toList());

// special collectors
// toMap(keyMapper, valueMapper, mergeFunction, mapFactory)
// toList -> toCollection(ArrayList::new)
ArrayList<HashMap<String, String>> copies3 = entries.stream()
        .map(m -> m.entrySet()
                .stream()
                .filter(e -> keys.contains(e.getKey()))
                .collect(Collectors.toMap(
                        Map.Entry::getKey, Map.Entry::getValue, (a, b) -> a, HashMap::new)))
        .collect(Collectors.toCollection(ArrayList::new));
You can do something like:
Set<String> setNamesToInclude = new HashSet<>(keyNamesToInclude);

List<Map<String, String>> copies = entries.stream()
        .map(hm -> hm.keySet()
                .stream()
                .filter(setNamesToInclude::contains)
                .collect(Collectors.toMap(Function.identity(), hm::get)))
        .collect(Collectors.toList());
Maybe a somewhat more efficient solution, which looks each key up in the hashmap instead of scanning a list:
List<HashMap<String, String>> copies = maps.stream().map(e -> {
    HashMap<String, String> copy = new HashMap<>();
    keys.forEach(k -> {
        String value = e.get(k);
        if (value != null)
            copy.put(k, value);
    });
    return copy;
}).collect(Collectors.toList());
You can do something like the following:
return entries.stream()
        .map(hashmap -> hashmap.entrySet().stream()
                .filter(e -> keyNamesToInclude.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)))
        .collect(Collectors.toList());
I recommend using a Set for keyNamesToInclude, since lookups are a little more efficient. This solution also filters empty maps out of the final list.
List<Map<String, String>> entries = List.of(
Map.of("1", "one", "2", "two", "3", "three"),
Map.of("2", "two", "3", "three"), Map.of("1", "one"),
Map.of("6", "six", "7", "seven"));
entries.forEach(System.out::println);
Set<String> keyNamesToInclude = Set.of("2", "3", "6");
- stream the original list of maps
- for each map, filter the required keys
- build the new map
- keep only the non-empty maps
- create the list
List<Map<String, String>> copies = entries.stream()
        .map(m -> m.entrySet().stream()
                .filter(e -> keyNamesToInclude.contains(e.getKey()))
                .collect(Collectors.toMap(Entry::getKey, Entry::getValue)))
        .filter(m -> !m.isEmpty())
        .toList(); // java 16 - can be replaced with a collector
copies.forEach(System.out::println);
Prints four lines of source and three lines of result
{3=three, 2=two, 1=one}
{3=three, 2=two}
{1=one}
{7=seven, 6=six}
{2=two, 3=three}
{2=two, 3=three}
{6=six}
Input: List<Foo> rawdata
Desired Output: Map<Foo, Bar>
Method implementation: Java 8
public Map<Foo, Bar> getMap(List<Foo> rawData) {
    rawData.stream().flatMap(d -> {
        val key = method1(d);
        val value = method2(d);
    })
    // Now I want to create a map using the key/value
    // pair which was obtained from calling method1 and method2.
}
I did try this:
public Map<Foo, Bar> getMap(List<Foo> rawData) {
    return rawData.stream()
            .flatMap(d -> {
                val key = method1(d);
                val value = method2(d);
                return Stream.of(new Object[]{key, value});
            })
            .collect(Collectors.toMap(
                    arr -> (Foo) arr[0],
                    arr -> (Bar) arr[1]));
}
However, I want to see if there is a less expensive or otherwise better way of doing this.
There's no point in creating an intermediate stream and then using flatMap().
You can just collect() directly into a map:
public Map<Foo, Bar> getMap(List<Foo> rawData) {
    return rawData.stream()
            .collect(Collectors.toMap(foo -> method1(foo), foo -> method2(foo)));
    // OR: Collectors.toMap(this::method1, this::method2)
}
You can use the toMap(keyMapper, valueMapper, mergeFunction) overload with three parameters, which allows merging duplicates into a list, for example:
List<String[]> list = List.of(
        new String[]{"Foo1", "Bar1"},
        new String[]{"Foo1", "Bar2"},
        new String[]{"Foo2", "Baz"});

Map<String, List<String>> map1 = list.stream()
        .collect(Collectors.toMap(
                e -> e[0],
                e -> new ArrayList<>(List.of(e[1])),
                (list1, list2) -> {
                    list1.addAll(list2);
                    return list1;
                }));
System.out.println(map1);
// {Foo1=[Bar1, Bar2], Foo2=[Baz]}
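The same merge-into-a-list behavior can also be expressed with Collectors.groupingBy plus Collectors.mapping, which avoids the hand-written merge function. A sketch using the same data:

```java
import java.util.*;
import java.util.stream.Collectors;

public class GroupToLists {
    static Map<String, List<String>> group(List<String[]> pairs) {
        // groupingBy classifies by the key; mapping collects the values into a list
        return pairs.stream()
                .collect(Collectors.groupingBy(
                        e -> e[0],
                        Collectors.mapping(e -> e[1], Collectors.toList())));
    }

    public static void main(String[] args) {
        List<String[]> list = List.of(
                new String[]{"Foo1", "Bar1"},
                new String[]{"Foo1", "Bar2"},
                new String[]{"Foo2", "Baz"});
        // {Foo1=[Bar1, Bar2], Foo2=[Baz]} (HashMap iteration order not guaranteed)
        System.out.println(group(list));
    }
}
```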
If you are sure that there are no duplicates, you can use the toMap(keyMapper, valueMapper) overload with two parameters:
List<String[]> list = List.of(
        new String[]{"Foo1", "Bar1"},
        new String[]{"Foo2", "Baz"});

Map<String, String> map2 = list.stream()
        .collect(Collectors.toMap(e -> e[0], e -> e[1]));
System.out.println(map2);
// {Foo1=Bar1, Foo2=Baz}
Alternatively, you can collect a list of Map.Entry objects:
List<String[]> list = List.of(
        new String[]{"Foo1", "Bar1"},
        new String[]{"Foo1", "Bar2"},
        new String[]{"Foo2", "Baz"});

List<Map.Entry<String, String>> list1 = list.stream()
        .map(arr -> Map.entry(arr[0], arr[1]))
        .collect(Collectors.toList());
System.out.println(list1);
// [Foo1=Bar1, Foo1=Bar2, Foo2=Baz]
See also: Ordering Map<String, Integer> by List<String> using streams
Map<A, List<B>> xFunction() {
    Map<A, List<B>> mapList = new HashMap<>();
    List<A> aList = x.getAList();
    for (A a : aList) {
        List<B> bList = getBList(a.getId());
        mapList.put(a, bList);
    }
    return mapList;
}
How can I convert this to Java 8 with collect and groupingBy or mapping?
I tried something like:
x.getAList()
        .stream()
        .map(a -> getBList(a.getId())) // returns a list of B
        .collect(Collectors.groupingBy(...))
Cheers
You need Collectors.toMap:
Map<A, List<B>> map =
        x.getAList()
                .stream()
                .collect(Collectors.toMap(Function.identity(), a -> getBList(a.getId())));
@Eran was first, but to reproduce the original behavior you should use the toMap collector with a merge function, because by default Java throws an IllegalStateException for entries with duplicate keys:
x.getAList()
        .stream()
        .collect(Collectors.toMap(Function.identity(), a -> getBList(a.getId()), (u, u2) -> u2));
I'm trying to find a more elegant way, using Java 8, to create a map that groups field values by field names than the following:
@Test
public void groupFieldValuesByFieldNames() {
    Person lawrence = aPerson().withFirstName("Lawrence").withLastName("Warren").born();
    Person gracie = aPerson().withFirstName("Gracie").withLastName("Ness").born();

    Map<String, List<String>> valuesByFieldNames = new HashMap<>();
    Stream.of(lawrence, gracie).forEach(person -> {
        valuesByFieldNames.computeIfAbsent("lastName", s -> new ArrayList<>()).add(person.getLastName());
        valuesByFieldNames.computeIfAbsent("firstName", s -> new ArrayList<>()).add(person.getFirstName());
    });

    assertThat(valuesByFieldNames, hasEntry("lastName", asList("Warren", "Ness")));
    assertThat(valuesByFieldNames, hasEntry("firstName", asList("Lawrence", "Gracie")));
}
Try this.
Map<String, List<String>> valuesByFieldNames = Stream.of(lawrence, gracie)
        .flatMap(p -> Stream.of(new String[]{"firstName", p.getFirstName()},
                new String[]{"lastName", p.getLastName()}))
        .collect(Collectors.groupingBy(a -> a[0],
                Collectors.mapping(a -> a[1], Collectors.toList())));
Or more generally
Map<String, List<String>> valuesByFieldNames = Stream.of(lawrence, gracie)
        .flatMap(p -> Stream.of(new AbstractMap.SimpleEntry<>("firstName", p.getFirstName()),
                new AbstractMap.SimpleEntry<>("lastName", p.getLastName())))
        .collect(Collectors.groupingBy(Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
You can have the following, which will work correctly in parallel:
Map<String, List<String>> valuesByFieldNames =
        Stream.of(lawrence, gracie).collect(HashMap::new, (m, p) -> {
            m.computeIfAbsent("lastName", s -> new ArrayList<>()).add(p.getLastName());
            m.computeIfAbsent("firstName", s -> new ArrayList<>()).add(p.getFirstName());
        }, (m1, m2) -> m2.forEach((k, v) -> m1.merge(k, v, (l1, l2) -> {
            l1.addAll(l2);
            return l1;
        })));
What this does is collect each person into a mutable HashMap. The accumulator computes the last-name and first-name lists by invoking computeIfAbsent, just like your initial code. The combiner merges two maps by iterating over the entries of the second map and merging each key into the first map; in case of a conflict, the value is the concatenation of the two lists.