Counting the values of a map - java

The following map contains data of the form:
1946-01-12;13:00:00;0.3;G
1946-01-12;18:00:00;-2.8;Y
1946-01-13;07:00:00;-6.2;G
1946-01-13;13:00:00;-4.7;G
The dates are the keys.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.LocalDate;
import java.time.LocalTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;
public class WeatherDataHandler {

    private NavigableMap<LocalDate, List<Weather>> weatherData = new TreeMap<>();

    public void loadData(String filePath) throws IOException {
        List<String> fileData = Files.readAllLines(Paths.get(filePath));
        for (String str : fileData) {
            List<String> parsed = parseData(str);
            LocalDate date = LocalDate.parse(parsed.get(0));
            LocalTime time = LocalTime.parse(parsed.get(1));
            double temperature = Double.parseDouble(parsed.get(2));
            String quality = parsed.get(3);
            Weather weather = new Weather(date, time, temperature, quality);
            List<Weather> entries;
            entries = new ArrayList<Weather>();
            if (weatherData.get(date) == null) {
                entries.add(weather);
                weatherData.put(date, entries);
            } else {
                entries = weatherData.get(date);
                entries.add(weather);
            }
        }
    }

    private List<String> parseData(String str) {
        return Arrays.asList(str.split(";"));
    }
}
Now I want to implement a method that counts the number of entries for every key, or in other words, the number of times every date occurs in the list. It should return all dates between two dates (given as user input) and the number of values for every key. I started with the following code:
/**
 * Search for missing values between the two dates (inclusive) assuming there
 * should be 24 measurement values for each day (once every hour). Result is
 * sorted by date.
 */
public Map<LocalDate, Integer> missingValues(LocalDate dateFrom, LocalDate dateTo) {
    Map<LocalDate, Integer> counts = weatherData.subMap(dateFrom, dateTo)
            .values().stream()
            .flatMap(List::stream)
            .filter(p -> p.getValue()
            .collect(Collectors.toMap(p -> p.getKey(), p -> p.getValue()));
}
but I am having trouble with the filter and collect steps. How can I finish this?

Use Collectors.groupingBy to group by the date of the Weather class and Collectors.counting() to count the size of every group.
Map<LocalDate, Long> counts = weatherData.subMap(dateFrom, dateTo)
        .values()
        .stream()
        .flatMap(List::stream)
        .collect(Collectors.groupingBy(p -> p.getDate(), Collectors.counting()));
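Two details worth double-checking against your method signature: subMap(dateFrom, dateTo) excludes dateTo, while your Javadoc says both dates are inclusive, and the method is declared to return Map<LocalDate, Integer> rather than Map<LocalDate, Long>. A sketch that handles both, assuming Weather exposes the getDate() accessor used in the snippet above:
public Map<LocalDate, Integer> missingValues(LocalDate dateFrom, LocalDate dateTo) {
    return weatherData.subMap(dateFrom, true, dateTo, true)   // inclusive on both ends
            .values().stream()
            .flatMap(List::stream)
            .collect(Collectors.groupingBy(
                    Weather::getDate,
                    TreeMap::new,                             // keep the result sorted by date
                    Collectors.summingInt(w -> 1)));          // Integer count per date
}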

What you effectively need is to group by the date as the key element of the map. For the value, you want how many times each key has occurred.
For this, you will have to use Collectors.groupingBy while collecting to the map, and for the value it is simply Collectors.counting(). See the code snippet below for more detail:
Map<LocalDate, Long> countMap = weatherData
        .subMap(dateFrom, dateTo)
        .values()
        .stream()
        .flatMap(List::stream)
        .collect(Collectors.groupingBy(p -> p.getDate(), Collectors.counting()));

Since you are using streams to simplify your code, you may also want to do the following.
Instead of doing
List<Weather> entries;
entries = new ArrayList<Weather>();
if (weatherData.get(date) == null) {
    entries.add(weather);
    weatherData.put(date, entries);
} else {
    entries = weatherData.get(date);
    entries.add(weather);
}
You can just do
weatherData.computeIfAbsent(date, k->new ArrayList<>()).add(weather);
It says: if the key date is not there, put in a new ArrayList as the value; in either case, return the value. Since the value is the list that was either just inserted or already there, you can then simply add the weather instance to it.
The compiler already knows the type of list since it derived that from your NavigableMap instance.
If you want to check it out first, try this simple example.
Map<Integer,List<Integer>> map = new HashMap<>();
System.out.println(map);
map.computeIfAbsent(10, k-> new ArrayList<>()).add(20);
System.out.println(map);
map.computeIfAbsent(10, k-> new ArrayList<>()).add(30);
System.out.println(map);
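Run as-is, those three println calls print:
{}
{10=[20]}
{10=[20, 30]}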
computeIfAbsent has been available since Java 8.

Related

Remove elements from hashset inside hashmap while iterating through java stream [duplicate]

I have a HashMap in Java with a String key and a HashSet value. The HashSet may contain many PlacementBundle objects.
public Map<String, Set<PlacementBundle>> placementByConcept;
I am trying to remove the value from the HashSet while iterating the map which matches a specific condition.
I tried the below code but cannot remove the matching element from the HashSet.
placementByConcept.entrySet()
        .stream()
        .map(e -> e.getValue()
                .removeIf(s -> s.getBeatObjectiveId().equals("non-scored")));
You can use forEach. Note that map in your code is an intermediate operation; since the stream has no terminal operation, the lambda is never executed. forEach is a terminal operation, so the removeIf actually runs:
placementByConcept.entrySet()
        .forEach(e -> e.getValue().removeIf(s -> s.getBeatObjectiveId().equals("non-scored")));
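An equivalent variant iterates the value sets directly, again assuming the sets are mutable:
placementByConcept.values()
        .forEach(bundles -> bundles.removeIf(s -> s.getBeatObjectiveId().equals("non-scored")));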
import java.util.HashMap;

public class Remove {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(1, "Stack");
        map.put(2, "Overflow");
        map.put(3, "StackOverflow");

        int keyToBeRemoved = 2;
        System.out.println("Original HashMap: " + map);

        // remove the entry whose key matches keyToBeRemoved
        map.entrySet()
                .removeIf(entry -> keyToBeRemoved == entry.getKey());

        System.out.println("New HashMap: " + map);
    }
}
Output:
Original HashMap: {1=Stack, 2=Overflow, 3=StackOverflow}
New HashMap: {1=Stack, 3=StackOverflow}
In your case Set<PlacementBundle> may be an immutable collection, and you can't remove an element from it.
(Thank you Holger for pointing out that this assumption may not be true for the asked question.)
If the Set is an immutable collection and you use forEach as suggested in the accepted answer, you will get an UnsupportedOperationException.
import lombok.Builder;
import lombok.Data;
import lombok.extern.slf4j.Slf4j;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

@Slf4j
public class Test {

    public static void main(String[] args) {
        Map<String, Set<PlacementBundle>> placementByConcept = new HashMap<>();
        placementByConcept.put("concept1", Set.of(
                PlacementBundle.builder().beatObjectiveId("scored").build(),
                PlacementBundle.builder().beatObjectiveId("non-scored").build())
        );
        placementByConcept.put("concept2", Set.of(
                PlacementBundle.builder().beatObjectiveId("scored").build(),
                PlacementBundle.builder().beatObjectiveId("non-scored").build())
        );
        log.info("Original: {}", placementByConcept);

        /* This won't give any exception, but it won't remove the entries either:
           map is lazy and never executed without a terminal operation */
        placementByConcept.entrySet()
                .stream()
                .map(e -> e.getValue()
                        .removeIf(s -> s.getBeatObjectiveId().equals("non-scored")));
        log.info("Does not work: {}", placementByConcept);

        /* This will give you an UnsupportedOperationException because the sets are immutable */
        // placementByConcept.entrySet().forEach(e -> e.getValue().removeIf(s -> s.getBeatObjectiveId().equals("non-scored")));

        /* This is one of the correct ways */
        for (Map.Entry<String, Set<PlacementBundle>> entry : placementByConcept.entrySet()) {
            var filtered = entry.getValue().stream()
                    .filter(placementBundle -> !placementBundle.getBeatObjectiveId().equals("non-scored"))
                    .collect(Collectors.toUnmodifiableSet());
            log.debug("New Value Set: {}", filtered);
            entry.setValue(filtered);
        }
        log.info("After: {}", placementByConcept);
    }
}

@Builder
@Data
class PlacementBundle {
    private String beatObjectiveId;
}
Output:
Original: {concept2=[PlacementBundle(beatObjectiveId=scored), PlacementBundle(beatObjectiveId=non-scored)], concept1=[PlacementBundle(beatObjectiveId=scored), PlacementBundle(beatObjectiveId=non-scored)]}
Does not work: {concept2=[PlacementBundle(beatObjectiveId=scored), PlacementBundle(beatObjectiveId=non-scored)], concept1=[PlacementBundle(beatObjectiveId=scored), PlacementBundle(beatObjectiveId=non-scored)]}
After: {concept2=[PlacementBundle(beatObjectiveId=scored)], concept1=[PlacementBundle(beatObjectiveId=scored)]}
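If the value sets really are immutable (as with Set.of above), Map.replaceAll is a more compact alternative to the explicit entry loop; a small sketch reusing the same PlacementBundle accessor:
placementByConcept.replaceAll((concept, bundles) -> bundles.stream()
        .filter(b -> !b.getBeatObjectiveId().equals("non-scored"))
        .collect(Collectors.toUnmodifiableSet()));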

Calculate occurrence of property in a list of objects and convert objects to list of new objects with that counter

I have a list of Person objects. id is one of the properties of the Person class. The list can contain duplicate Persons with the same id and different other properties.
Now I need to calculate the occurrences of each id, and after that convert the grouped Persons, ordered by timestamp, to another object which has an occurrences property.
I came up with something like this and I'm wondering if there is any way to simplify the process:
List<Person> persons = Arrays.asList(
        new Person("A", "2021-12-27"),
        new Person("B", "2021-12-26"),
        new Person("A", "2021-12-25")
);

Map<String, Long> personToOccurrence = persons
        .stream()
        .collect(Collectors.groupingBy(Person::getUserName, Collectors.counting()));

List<NewClass> convertedPersonsWithOccurrences = persons
        .stream()
        .filter(distinctByKey(Person::getUserName))
        .map(i -> convertToNewClass(i, personToOccurrence.get(i.getUserName())))
        .collect(Collectors.toList());

class Person {
    String id;
    Date timestamp;
}

private static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
    Set<Object> seen = ConcurrentHashMap.newKeySet();
    return t -> seen.add(keyExtractor.apply(t));
}
So in the convertedPersonsWithOccurrences I expect to have:
NewClass("A", "2021-12-27", 2)
NewClass("B", "2021-12-26", 1)
where the last attribute is the number of occurrences of the id attribute in the persons list.
EDIT:
In general, these classes have been simplified. In the real case, convertToNewClass should accept a whole object which has multiple fields and convert it to a new class. That's why I gave the example of passing the whole Person object and the occurrences.
You can use the toMap collector with a merge function. In the merge function you need to compare the timestamp property and increment the count value.
Use LocalDate instead of Date.
persons.stream()
        .collect(Collectors.toMap(Person::getId,
                v -> new NewClass(v.id, v.getTimestamp(), 1L),
                (o1, o2) -> {
                    // keep the value with the newer timestamp and sum the counts,
                    // so that more than two duplicates are accumulated correctly
                    if (o1.getTimestamp().isAfter(o2.getTimestamp())) {
                        o1.setCount(o1.getCount() + o2.getCount());
                        return o1;
                    } else {
                        o2.setCount(o2.getCount() + o1.getCount());
                        return o2;
                    }
                }))
        .values();
Since you have a map of user ids, you can skip the distinct stuff in the second calculation by looking at the keys of the map, which are by definition distinct.
If you use the base groupingBy() instead of adding a downstream counting() collector, you get a list of Persons for each group, which makes getting the newest (or oldest) timestamp of each group easy.
Self-contained Java 17+ example:
import java.util.*;
import java.util.stream.*;

public class Demo {

    private static record Person(String id, Date timestamp) {}
    private static record NewClass(String id, Date timestamp, int count) {}

    private static Date makeDate(int year, int month, int day) {
        Calendar c = Calendar.getInstance();
        c.clear();
        c.set(year, month - 1, day);
        return c.getTime();
    }

    public static void main(String[] args) {
        List<Person> persons = List.of(new Person("A", makeDate(2021, 12, 27)),
                new Person("B", makeDate(2021, 12, 26)),
                new Person("A", makeDate(2021, 12, 25)));

        Map<String, List<Person>> groupedPersons =
                persons.stream().collect(Collectors.groupingBy(Person::id));

        List<NewClass> convertedPersonsWithOccurences =
                groupedPersons.entrySet().stream()
                        .map(e -> new NewClass(e.getKey(),
                                e.getValue().stream()
                                        .max((a, b) -> a.timestamp.compareTo(b.timestamp))
                                        .orElseThrow()
                                        .timestamp,
                                e.getValue().size()))
                        .collect(Collectors.toList());

        System.out.println(convertedPersonsWithOccurences);
    }
}
displays
[NewClass[id=A, timestamp=Mon Dec 27 00:00:00 PST 2021, count=2], NewClass[id=B, timestamp=Sun Dec 26 00:00:00 PST 2021, count=1]]
when run.
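If you are on Java 12 or newer, Collectors.teeing lets you compute the count and the newest timestamp in a single pass over each group. A self-contained sketch (it uses records and LocalDate, which differ from the original Person/NewClass classes, so treat it as an illustration rather than a drop-in):
import java.time.LocalDate;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class TeeingDemo {
    record Person(String id, LocalDate timestamp) {}
    record NewClass(String id, LocalDate timestamp, long count) {}

    public static void main(String[] args) {
        List<Person> persons = List.of(
                new Person("A", LocalDate.parse("2021-12-27")),
                new Person("B", LocalDate.parse("2021-12-26")),
                new Person("A", LocalDate.parse("2021-12-25")));

        Collection<NewClass> result = persons.stream()
                .collect(Collectors.groupingBy(
                        Person::id,
                        Collectors.teeing(
                                Collectors.maxBy(Comparator.comparing(Person::timestamp)), // newest per group
                                Collectors.counting(),                                     // occurrences per group
                                (newest, count) -> new NewClass(
                                        newest.orElseThrow().id(),
                                        newest.orElseThrow().timestamp(),
                                        count))))
                .values();

        result.forEach(System.out::println);
    }
}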

Get only key from List of Map object using stream

I am trying to get only the key values from a List of Map objects using streams in Java 8.
When I stream the List of Map objects I am getting Stream<List<String>> instead of List<String>.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamTest {

    public static void main(String[] args) {
        System.out.println("Hello World");
        Map<String, String> a = new HashMap<String, String>();
        a.put("1", "Bharathi");
        a.put("2", "Test");
        a.put("3", "Hello");

        List<Map<String, String>> b = new ArrayList<>();
        b.add(a);
        System.out.println("Hello World" + b);

        /*
         * b.stream().map(c-> c.entrySet().stream().collect( Collectors.toMap(entry ->
         * entry.getKey(), entry -> entry.getValue())));
         */
        Stream<List<String>> map2 = b.stream()
                .map(c -> c.entrySet().stream().map(map -> map.getKey()).collect(Collectors.toList()));
        //List<List<String>> collect = map2.map(v -> v).collect(Collectors.toList());
    }
}
How do I get only the keys from the List of Map objects?
You can use flatMap over the keySet of each Map within the list:
List<String> output = lst.stream()
.flatMap(mp -> mp.keySet().stream())
.collect(Collectors.toList());
You can simply flatMap it:
b.stream().flatMap(m -> m.keySet().stream()).collect(Collectors.toList())
Of course flatMap is the stream-based solution; however, you can also do it in a simple non-stream way.
List<String> map2 = new ArrayList<>();
b.forEach(map -> map2.addAll(map.keySet()));
You also can use a slightly more declarative way:
List<String> collect = b.stream()
.map(Map::keySet) // map maps to key sets
.flatMap(Collection::stream) // union all key sets to the one stream
.collect(Collectors.toList()); // collect the stream to a new list
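One caveat: if several maps in the list share keys, the flatMap approaches above keep the duplicates. Collecting to a Set (or adding distinct()) deduplicates them, for example:
Set<String> uniqueKeys = b.stream()
        .flatMap(m -> m.keySet().stream())
        .collect(Collectors.toSet());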

Java 8 stream Map<String, List<String>> sum of values for each key

I am not so familiar with Java 8 (still learning) and am looking to see if I could find an equivalent of the code below using streams.
The code mainly tries to get the corresponding double value for each String value and then sums them up. I could not find much help anywhere on this pattern. I am not sure if using streams would clean up the code or would make it messier.
// safe assumptions - String/List (Key/Value) cannot be null or empty
// inputMap --> Map<String, List<String>>
Map<String, Double> finalResult = new HashMap<>();
for (Map.Entry<String, List<String>> entry : inputMap.entrySet()) {
    Double score = 0.0;
    for (String current : entry.getValue()) {
        score += computeScore(current);
    }
    finalResult.put(entry.getKey(), score);
}

private Double computeScore(String a) { .. }
Map<String, Double> finalResult = inputMap.entrySet()
        .stream()
        .collect(Collectors.toMap(
                Entry::getKey,
                e -> e.getValue()
                        .stream()
                        .mapToDouble(str -> computeScore(str))
                        .sum()));
The code above iterates over the map and creates a new map with the same keys. Before putting each value, it iterates over the corresponding list, computes a score for each list element by calling computeScore(), and sums those scores to produce the value.
You could also use the forEach method along with the stream API to yield the result you're seeking.
Map<String, Double> resultSet = new HashMap<>();
inputMap.forEach((k, v) -> resultSet.put(k, v.stream()
        .mapToDouble(s -> computeScore(s))
        .sum()));
s -> computeScore(s) could be changed to a method reference: this::computeScore if computeScore is an instance method of the enclosing class, or T::computeScore, where T is the name of the class containing computeScore, if it is static.
How about this one:
Map<String, Double> finalResult = inputMap.entrySet()
        .stream()
        .map(entry -> new AbstractMap.SimpleEntry<String, Double>( // maps each key to a new Entry<String, Double>
                entry.getKey(),                                    // the same key
                entry.getValue().stream()
                        .mapToDouble(string -> computeScore(string))
                        .sum()))                                   // List<String> mapped to doubles and summed
        .collect(Collectors.toMap(Entry::getKey, Entry::getValue)); // collected by the same key and the newly calculated value
The version above can be merged into a single collect(..) call:
Map<String, Double> finalResult = inputMap.entrySet()
        .stream()
        .collect(Collectors.toMap(
                Entry::getKey,                                       // keeps the same key
                entry -> entry.getValue()
                        .stream()                                    // List<String> -> Stream<String>
                        .mapToDouble(string -> computeScore(string)) // Stream<String> -> DoubleStream
                        .sum()));                                    // and summed
The key parts:
collect(..) performs a reduction on the elements using a certain strategy with a Collector.
Entry::getKey is a shortcut for entry -> entry.getKey(). A function for mapping the key.
entry -> entry.getValue().stream() returns the Stream<String>
mapToDouble(..) returns a DoubleStream. This has an aggregating operation sum(..) which sums the elements, producing the new value for the Map.
Regardless of whether you use the stream-based or the loop-based solution, it would be beneficial and add some clarity and structure to extract the inner loop into a method:
private double computeScore(Collection<String> strings)
{
return strings.stream().mapToDouble(this::computeScore).sum();
}
Of course, this could also be implemented using a loop, but ... that's exactly the point: This method can now be called, either in the outer loop, or on the values of a stream of map entries.
The outer loop or stream could also be pulled into a method. In the example below, I generalized this a bit: The type of the keys of the map does not matter. Neither does whether the values are List or Collection instances.
As an alternative to the currently accepted answer, the stream-based solution here does not fill a new map that is created manually. Instead, it uses a Collector.
(This is similar to other answers, but I think that the extracted computeScore method greatly simplifies the otherwise rather ugly lambdas that are necessary for the nested streams)
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.stream.Collectors;

public class ToStreamOrNotToStream
{
    public static void main(String[] args)
    {
        ToStreamOrNotToStream t = new ToStreamOrNotToStream();

        Map<String, List<String>> inputMap = new LinkedHashMap<String, List<String>>();
        inputMap.put("A", Arrays.asList("1.0", "2.0", "3.0"));
        inputMap.put("B", Arrays.asList("2.0", "3.0", "4.0"));
        inputMap.put("C", Arrays.asList("3.0", "4.0", "5.0"));

        System.out.println("Result A: " + t.computeA(inputMap));
        System.out.println("Result B: " + t.computeB(inputMap));
    }

    private <T> Map<T, Double> computeA(
        Map<T, ? extends Collection<String>> inputMap)
    {
        Map<T, Double> finalResult = new HashMap<>();
        for (Entry<T, ? extends Collection<String>> entry : inputMap.entrySet())
        {
            double score = computeScore(entry.getValue());
            finalResult.put(entry.getKey(), score);
        }
        return finalResult;
    }

    private <T> Map<T, Double> computeB(
        Map<T, ? extends Collection<String>> inputMap)
    {
        return inputMap.entrySet().stream().collect(
            Collectors.toMap(Entry::getKey, e -> computeScore(e.getValue())));
    }

    private double computeScore(Collection<String> strings)
    {
        return strings.stream().mapToDouble(this::computeScore).sum();
    }

    private double computeScore(String a)
    {
        return Double.parseDouble(a);
    }
}
I found it somewhat shorter:
value = startDates.entrySet().stream().mapToDouble(Entry::getValue).sum();

How to print a list by adding a new line after every 3rd element in a list in java lambda expression?

Suppose I have a list as below
Collection<?> mainList = new ArrayList<String>();
mainList=//some method call//
Currently, I am displaying the elements in the list as
System.out.println(mainList.stream().map(Object::toString).collect(Collectors.joining(",")).toString());
And I got the result as
a,b,c,d,e,f,g,h,i
How can I print this list adding a new line after every 3rd element, so that it prints the result below?
a,b,c
d,e,f
g,h,i
Note: This is similar to How to Add newline after every 3rd element in arraylist in java?. But there the formatting is done while reading the file itself.
I want to do it while printing the output.
If you want to stick to the Java Stream API, your problem can be solved by partitioning the initial list into sublists of size 3, representing each sublist as a String, and joining the results with \n.
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

final class PartitionListExample {

    public static void main(String[] args) {
        final Collection<String> mainList = Arrays.asList("a", "b", "c", "d", "e", "f", "g", "h", "i");
        final AtomicInteger idx = new AtomicInteger(0);
        final int size = 3;

        // Partition the list into a list of lists of size 3
        final Collection<List<String>> rows = mainList.stream()
                .collect(Collectors.groupingBy(
                        it -> idx.getAndIncrement() / size
                ))
                .values();

        // Write each row on a new line as a string
        final String result = rows.stream()
                .map(row -> String.join(",", row))
                .collect(Collectors.joining("\n"));

        System.out.println(result);
    }
}
There are third-party libraries that provide utility classes that make list partitioning easier (e.g. Guava or Apache Commons Collections), but this solution is built on the Java 8 SDK only.
What it does is:
first we collect all elements by grouping them by an assigned row index and store the values as lists (e.g. {0=[a,b,c], 1=[d,e,f], 2=[g,h,i]})
then we take a list of all values like [[a,b,c],[d,e,f],[g,h,i]]
finally we represent the list of lists as a String where each row is separated by \n
Output Demo
Running the program above prints the following output to the console:
a,b,c
d,e,f
g,h,i
Getting more from the example
Alnitak played with this example some more and came up with a shorter solution by using Collectors.joining(",") as the downstream collector of groupingBy and String.join("\n", rows) at the end, instead of triggering another stream reduction.
final Collection<String> rows = mainList.stream()
        .collect(Collectors.groupingBy(
                it -> idx.getAndIncrement() / size,
                Collectors.joining(",")
        ))
        .values();

// Write each row on a new line as a string
final String result = String.join("\n", rows);
System.out.println(result);
Final note
Keep in mind that this is not the most efficient way to print a list of elements in your desired format, but partitioning a list of arbitrary elements gives you flexibility when creating the final result and is pretty easy to read and understand.
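For completeness, when the source is a random-access List rather than an arbitrary Collection, index-based partitioning avoids building the intermediate grouping map entirely. A minimal sketch, not taken from the answers above:
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

final class IndexPartitionExample {

    public static void main(String[] args) {
        final List<String> mainList = List.of("a", "b", "c", "d", "e", "f", "g", "h", "i");
        final int size = 3;

        // One sublist per chunk of 3; chunks joined with commas, rows joined with line separators
        final String result = IntStream.range(0, (mainList.size() + size - 1) / size)
                .mapToObj(i -> String.join(",",
                        mainList.subList(i * size, Math.min((i + 1) * size, mainList.size()))))
                .collect(Collectors.joining(System.lineSeparator()));

        System.out.println(result);
    }
}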
A side remark: in your actual code, map(Object::toString) could be removed if you replaced
Collection<?> mainList = new ArrayList<String>(); with
Collection<String> mainList = new ArrayList<String>();.
If you manipulate Strings, create a Collection of String rather than a Collection of ?.
But there the formatting is done while reading the file itself. I want to
do it while printing the output.
After getting the joined String, using replaceAll("(\\w*,\\w*,\\w*,)", "$1" + System.lineSeparator()) should do the job.
It will search for every series of three tokens each followed by a , character and replace it with the same match ($1 -> captured group) concatenated with a line separator.
Besides this:
String collect = mainList.stream().collect(Collectors.joining(","));
can be simplified to:
String collect = String.join(",", mainList);
Sample code :
public static void main(String[] args) {
Collection<String> mainList = Arrays.asList("a","b","c","d","e","f","g","h","i", "j");
String formattedValues = String.join(",", mainList).replaceAll("(\\w*,\\w*,\\w*,)", "$1" + System.lineSeparator());
System.out.println(formattedValues);
}
Output :
a,b,c,
d,e,f,
g,h,i,
j
Another approach that hasn't been answered here is to create a custom Collector.
import java.util.*;
import java.util.function.BiConsumer;
import java.util.function.BinaryOperator;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Collector;
import java.util.stream.Collectors;

public class PartitionListInPlace {

    static class MyCollector implements Collector<String, List<List<String>>, String> {

        private final List<List<String>> buckets;
        private final int bucketSize;

        public MyCollector(int numberOfBuckets, int bucketSize) {
            this.bucketSize = bucketSize;
            this.buckets = new ArrayList<>(numberOfBuckets);
            for (int i = 0; i < numberOfBuckets; i++) {
                buckets.add(new ArrayList<>(bucketSize));
            }
        }

        @Override
        public Supplier<List<List<String>>> supplier() {
            return () -> this.buckets;
        }

        @Override
        public BiConsumer<List<List<String>>, String> accumulator() {
            return (buckets, element) -> buckets
                    .stream()
                    .filter(x -> x.size() < bucketSize)
                    .findFirst()
                    .orElseGet(() -> {
                        ArrayList<String> nextBucket = new ArrayList<>(bucketSize);
                        buckets.add(nextBucket);
                        return nextBucket;
                    })
                    .add(element);
        }

        @Override
        public BinaryOperator<List<List<String>>> combiner() {
            return (b1, b2) -> {
                throw new UnsupportedOperationException();
            };
        }

        @Override
        public Function<List<List<String>>, String> finisher() {
            return buckets -> buckets.stream()
                    .map(x -> x.stream()
                            .collect(Collectors.joining(", ")))
                    .collect(Collectors.joining(System.lineSeparator()));
        }

        @Override
        public Set<Characteristics> characteristics() {
            return new HashSet<>();
        }
    }

    public static void main(String[] args) {
        Collection<String> mainList = Arrays.asList("a", "b", "c", "d", "e", "f", "g", "h", "i", "j");
        String formattedValues = mainList
                .stream()
                .collect(new MyCollector(mainList.size() / 3, 3));
        System.out.println(formattedValues);
    }
}
Explanation
This is a mutable collector that should not be used in parallel. If you need to process the stream in parallel, you will have to make this collector thread safe, which is pretty easy if you don't care about the order of the elements.
The combiner throws an exception because it is never called: we run the stream sequentially.
The set of Characteristics contains none that interest us; you can verify this by reading the Javadoc.
The accumulator fetches the bucket in which we want to insert the element: the element is inserted into the first bucket that has space, otherwise we create a new bucket and add it there.
The finisher is quite simple: join the contents of each bucket with , and join the buckets themselves with System.lineSeparator().
Remember
Do not use this collector to process parallel streams.
Output
a, b, c
d, e, f
g, h, i
j
