Example SQL Result
dataResult
Code Amt TotalAmtPerCode
A1 4 0
A1 4 0
B1 4 0
B1 5 0
A1 6 0
With this result, I would like to ask how to compute the TotalAmtPerCode.
The expected result should be
Code Amt TotalAmtPerCode
A1 4 14
A1 4 14
B1 4 9
B1 5 9
A1 6 14
Sample code:
for (Map<String, Object> data : dataResult) {
    Long total = computeTotalAmount(dataResult, data.get(DBColumn.Code.name()).toString());
    container.setTotalAmtPerCode(total);
}
Function that computes the total amount:
private static long computeTotalAmount(List<Map<String, Object>> list, String code) {
    long total = 0;
    for (Map<String, Object> data : list) {
        if (code.equals(data.get(DBColumn.Code.name()))) {
            // sum the amount column (DBColumn.Amt is assumed to match the Amt column above)
            total = total + Long.parseLong(data.get(DBColumn.Amt.name()).toString());
        }
    }
    return total;
}
This is working fine, but I would like to ask for an optimization of this code: if I loop over 10,000 records, it checks the first record's Code, then iterates over all 10,000 records again to find that Code and sum its amounts, then does the same for the second record, and so on.
Welcome to StackOverflow :)
As far as I can see, you need to group by Code and sum the Amt values for each code (that sum is the TotalAmtPerCode). There is a method Stream::collect that transforms the values into the desired output using the Collectors::groupingBy collector, which groups a Stream<T> into a Map<K, V> where V is either a collection or, as here, an aggregated value.
Map<String, Integer> map = dataResult.stream()
    .collect(Collectors.groupingBy(                  // group to a Map
        d -> d.get(DBColumn.Code.name()).toString(), // key is the code
        Collectors.summingInt(d -> Integer.parseInt(d.get(DBColumn.Amt.name()).toString())))); // value is the sum of amounts
Note:
You might need to adjust d -> d.get(DBColumn.Code.name()) and d.get(DBColumn.Amt.name()) according to your data model to get the Code and the Amt - I don't know the exact data model.
I assume the Amt fits into an int. Otherwise, use Collectors.summingLong.
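If the totals then need to be written back onto each row (the container.setTotalAmtPerCode(total) call from the question), a second single pass over dataResult is enough; a rough sketch reusing the map built above and the question's container:
// second pass: one O(1) map lookup per row instead of re-scanning the whole list
for (Map<String, Object> data : dataResult) {
    Integer total = map.get(data.get(DBColumn.Code.name()).toString());
    container.setTotalAmtPerCode(total.longValue());
}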
You could use Collectors.groupingBy():
Map<String, Long> collect = list.stream()
.collect(Collectors.groupingBy(
p -> p.getFirst(),
Collectors.summingLong(p -> p.getSecond())
)
);
This groups the input by some classifier (here it's p -> p.getFirst(), which in your case will probably be something like data.get(DBColumn.Code.name())) and sums the values (p -> p.getSecond(), which must be changed to something like Long.valueOf(data.get(DBColumn.Amt.name()).toString())).
Note: getFirst() and getSecond() are methods from org.springframework.data.util.Pair.
Example:
List<Pair<String, Long>> list = new ArrayList<>();
list.add(Pair.of("A1", 1L));
list.add(Pair.of("A1", 2L));
list.add(Pair.of("B1", 1L));
Map<String, Long> collect = list.stream()
.collect(Collectors.groupingBy(
p -> p.getFirst(),
Collectors.summingLong(p -> p.getSecond())
)
);
System.out.println(collect);
Output:
{A1=3, B1=1}
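Putting those replacements into the question's structure, the adapted pipeline might look roughly like this (a sketch; DBColumn.Amt is assumed to be the enum constant for the Amt column):
Map<String, Long> totals = dataResult.stream()
    .collect(Collectors.groupingBy(
        d -> d.get(DBColumn.Code.name()).toString(),                                          // classifier: the code
        Collectors.summingLong(d -> Long.parseLong(d.get(DBColumn.Amt.name()).toString())))); // value: sum of amounts (DBColumn.Amt assumed)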
Related
Given a List<Integer> l and a factor int f, I would like to use a stream to create a Map<Integer, Map<Integer, Long>> m such that the parent map has keys that are the index within l divided by f, and the value is a map of values to counts.
If the list is {1,1,1,4} and the factor is f=2 I would like to get:
0 ->
{
1 -> 2
}
1 ->
{
1 -> 1
4 -> 1
}
Basically, I'm hoping for a stream version of:
Map<Integer, Map<Integer, Long>> m = new HashMap<>();
for (int i = 0; i < l.size(); i++) {
m.computeIfAbsent(i/f, k -> new HashMap<>())
.compute(l.get(i), (k, v) -> v==null?1:v+1);
}
I realize it is fairly similar to this question about collecting a map of maps and I understand how to do a much simpler groupingBy with a count:
Map<Integer, Long> m = l.stream()
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
But I do not understand how to put those two ideas together without iterating.
Because I am working with indexes as one of the keys, I imagine that rather than starting with l.stream() I will start with IntStream.range(0, l.size()).boxed(), which lets me get the first key (i -> i/f) and the second key (i -> l.get(i)), but I still don't know how to properly collect the counts.
Here is a solution.
public static void main(String[] args) {
final List<Integer> l = List.of(1,1,1,4);
final int f = 2;
final var value = IntStream.range(0,l.size())
.boxed()
.collect(Collectors.groupingBy(i -> i/f, Collectors.groupingBy(l::get, Collectors.counting())));
System.out.println(value);
}
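For the sample list {1,1,1,4} with f = 2, this prints {0={1=2}, 1={1=1, 4=1}} (the exact map iteration order is not guaranteed).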
Not sure if this is a personal requirement, but sometimes using standard loops instead of streams is not necessarily a bad thing.
You can use a collectingAndThen collector, which takes a downstream collector and a finisher function, as the downstream of the outer groupingBy. In the finisher you can convert the collected values (sublists) into a map of counts:
List<Integer> list = List.of(1, 1, 1, 4);
int fac = 2;
AtomicInteger ai = new AtomicInteger();
Map<Integer,Map<Integer,Long>> result =
list.stream()
.collect(Collectors.groupingBy(
i -> ai.getAndIncrement() / fac,
Collectors.collectingAndThen(
Collectors.toList(), val -> val.stream()
.collect(Collectors.groupingBy(Function.identity(),
Collectors.counting())))));
System.out.println(result);
I'm storing information in the lastByFirst variable:
{Peter=[Leigh], George=[Barron, Trickett, Evans],
Paul-Courtenay=[Hyu], Ryan=[Smith], Toby=[Geller, Williams],
Simon=[Bush, Milosic, Quarterman, Brown]}
How can I print the 3 first names that appear with the most last names, together with the number of appearances? lastByFirst contains something like the above, and I would like to print it this way:
Simon: 4
George: 3
Toby: 2
Map<String, List<String>> lastByFirst = PeopleProcessor.lastnamesByFirstname(PeopleSetup.people);
My attempt was something like this:
var store = lastByFirst.entrySet()
    .stream()
    .collect(Collectors.groupingBy(Person::getLastName,
        Collectors.counting()))
    .toString();
store should be equal to:
Simon: 4
George: 3
Toby: 2
Here's one that first converts the map of lists into a map of list sizes, and then picks the top three entries by size:
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
public class Demo {
public static void main(String[] args) {
Map<String, List<String>> lastByFirst =
Map.of("Peter", List.of("Leigh"), "George", List.of("Barron", "Trickett", "Evans"),
"Paul-Courtenay", List.of("Hyu"), "Ryan", List.of("Smith"),
"Toby", List.of("Geller", "Williams"), "Simon", List.of("Bush", "Milosic", "Quaterman", "Brown"));
List<Map.Entry<String, Integer>> topThree =
lastByFirst.entrySet().stream()
.collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, e -> e.getValue().size()))
.entrySet()
.stream()
.sorted(Comparator.<Map.Entry<String, Integer>, Integer>comparing(Map.Entry::getValue).reversed())
.limit(3)
.collect(Collectors.toList());
System.out.println(topThree);
}
}
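The println above prints the entries in their Map.Entry form (roughly [Simon=4, George=3, Toby=2]); to get the exact Name: count lines asked for, a small finishing step could be:
topThree.forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));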
You can:
sort in descending order by size
select the first three elements
reduce to one string
//1
List<Map.Entry<String, List<String>>> entryList = lastByFirst.entrySet()
.stream()
.sorted((e2, e1) -> Integer.compare(e1.getValue().size(), e2.getValue().size()))
.toList();
//2
String result = IntStream.range(0, 3)
.mapToObj(entryList::get)
.map(e -> String.format("%s: %d\n", e.getKey(), e.getValue().size()))
.collect(Collectors.joining()); //3
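For the sample map, result then contains the three lines Simon: 4, George: 3 and Toby: 2.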
If you already have a map of people grouped by first name, you can address the problem of finding the 3 most frequent first names in linear time O(n), which is faster than sorting the whole data set.
If instead of picking the 3 most frequent first names this were generalized to the m most frequent, the time complexity would be O(n + m * log m) (which for small values of m is close to linear time).
To implement it using streams, we can utilize a custom collector, which can be created using the static method Collector.of().
As a mutable container of the collector, we can use a TreeMap sorted in natural order, where the key represents the number of people having the same first name and the value is the first name itself.
In order to retain only the m most frequent names, we need to track the size of the TreeMap, and when it gets exceeded we remove the first entry (i.e. the entry having the lowest key).
public static <K, V> Collector<Map.Entry<K, List<V>>, ?, NavigableMap<Integer, K>>
getEntryCollector(int size) {
return Collector.of(
TreeMap::new,
(NavigableMap<Integer, K> map, Map.Entry<K, List<V>> entry) -> {
if (map.size() < size || map.firstKey() < entry.getValue().size()) { // the container hasn't reached the maximum size, or the frequency of the offered name is higher than the lowest existing frequency
map.put(entry.getValue().size(), entry.getKey());
}
if (map.size() > size) map.remove(map.firstKey()); // the size of the container has been exceeded
},
(NavigableMap<Integer, K> left, NavigableMap<Integer, K> right) -> { // merging the two containers with partial results obtained during the parallel execution
left.putAll(right);
while (left.size() > size) left.remove(left.firstKey());
return left;
}
);
}
main()
public static void main(String args[]) {
Map<String, List<String>> lastByFirst =
Map.of("Peter", List.of("Leigh"), "George", List.of("Barron", "Trickett", "Evans"),
"Paul-Courtenay", List.of("Hyu"), "Ryan", List.of("Smith"), "Toby", List.of("Geller", "Williams"),
"Simon", List.of("Bush", "Milosic", "Quarterman", "Brown"));
NavigableMap<Integer, String> nameByFrequency =
lastByFirst.entrySet().stream()
.collect(getEntryCollector(3));
nameByFrequency.entrySet().stream() // printing the result, sorting in reversed order applied only for demo purposes
.sorted(Map.Entry.comparingByKey(Comparator.<Integer>naturalOrder().reversed()))
.forEach(entry -> System.out.println(entry.getValue() + ": " + entry.getKey()));
}
Output:
Simon: 4
George: 3
Toby: 2
Here is another solution using StreamEx:
EntryStream.of(lastByFirst)
    .mapValues(List::size)                        // first name -> number of last names
    .reverseSorted(Map.Entry.comparingByValue())  // sort by count, descending
    .limit(3)
    .toList()
    .forEach(System.out::println);
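If adding StreamEx is not an option, roughly the same pipeline can be written with plain JDK streams; a sketch:
lastByFirst.entrySet().stream()
    .sorted(Comparator.comparingInt((Map.Entry<String, List<String>> e) -> e.getValue().size()).reversed())
    .limit(3)
    .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue().size()));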
I have a stream over a simple Java data class like:
class Developer{
private Long id;
private String name;
private Integer codePost;
private Integer codeLevel;
}
I would like to apply this filter to my stream :
if 2 devs have the same codePost but different codeLevel values, keep the dev with codeLevel = 5
keep all devs that have the same codePost and the same codeLevel
Example
ID  name            codePost  codeLevel
1   Alan stonly     30        4
2   Peter Zola      20        4
3   Camilia Frim    30        5
4   Antonio Alcant  40        4
or in Java:
Developer dev1 = new Developer(1L, "Alan stonly", 30, 4);
Developer dev2 = new Developer(2L, "Peter Zola", 20, 4);
Developer dev3 = new Developer(3L, "Camilia Frim ", 30, 5);
Developer dev4 = new Developer(4L, "Antonio Alcant", 40, 4);
Stream<Developer> developers = Stream.of(dev1, dev2, dev3, dev4);
As mentioned in the comments, Collectors.toMap should be used here with the merge function (and optionally a map supplier, e.g. LinkedHashMap::new to keep insertion order):
Stream.of(dev1, dev2, dev3, dev4)
.collect(Collectors.toMap(
Developer::getCodePost,
dev -> dev,
(d1, d2) -> Stream.of(d1, d2)
.filter(d -> d.getCodeLevel() == 5)
.findFirst()
.orElse(d1),
LinkedHashMap::new // keep insertion order
))
.values()
.forEach(System.out::println);
The merge function may be implemented with ternary operator too:
(d1, d2) -> d1.getCodeLevel() == 5 ? d1 : d2.getCodeLevel() == 5 ? d2 : d1
Output:
Developer(id=3, name=Camilia Frim , codePost=30, codeLevel=5)
Developer(id=2, name=Peter Zola, codePost=20, codeLevel=4)
Developer(id=4, name=Antonio Alcant, codePost=40, codeLevel=4)
If the output needs to be sorted in another order, values() should be sorted as values().stream().sorted(DeveloperComparator) with a custom developer comparator, e.g. Comparator.comparingLong(Developer::getId) or Comparator.comparing(Developer::getName) etc.
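For instance (a minimal sketch, assuming the collected map from the snippet above is stored in a variable named byCodePost):
// byCodePost: the LinkedHashMap produced by the collect above (variable name assumed)
byCodePost.values().stream()
    .sorted(Comparator.comparingLong(Developer::getId))   // or e.g. Comparator.comparing(Developer::getName)
    .forEach(System.out::println);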
Update
As the devs sharing the same codeLevel should NOT be filtered out, the following (a bit clumsy) solution is possible on the basis of Collectors.collectingAndThen and Collectors.groupingBy:
input list is grouped into a map of codePost to the list of developers
then the List<Developer> values in the map are filtered to keep the devs with max codeLevel
// added two more devs
Developer dev5 = new Developer (5L,"Donkey Hot",40,3);
Developer dev6 = new Developer (6L,"Miguel Servantes",40,4);
Stream.of(dev1, dev2, dev3, dev4, dev5, dev6)
.collect(Collectors.collectingAndThen(Collectors.groupingBy(
Developer::getCodePost
), map -> {
map.values()
.stream()
.filter(devs -> devs.size() > 1)
.forEach(devs -> {
int maxLevel = devs.stream()
.mapToInt(Developer::getCodeLevel)
.max().orElse(5);
devs.removeIf(x -> x.getCodeLevel() != maxLevel);
});
return map;
}))
.values()
.stream()
.flatMap(List::stream)
.sorted(Comparator.comparingLong(Developer::getId))
.forEach(System.out::println);
Output:
Developer(id=2, name=Peter Zola, codePost=20, codeLevel=4)
Developer(id=3, name=Camilia Frim , codePost=30, codeLevel=5)
Developer(id=4, name=Antonio Alcant, codePost=40, codeLevel=4)
Developer(id=6, name=Miguel Servantes, codePost=40, codeLevel=4)
Let's say I have a list of an object; let's call this object Order, which has quantity and price as fields.
For example, the Order list contains the values below:
Quantity Price
5 200
6 100
3 200
1 300
Now I want to use Java 8 to transform this list in the following way:
Quantity Price
8 200
6 100
1 300
Price is the unique value to group on, and all quantities for the same price are summed; I want to form a new list based on this.
Please suggest how I can do this with Java 8 lambda expressions, thanks.
The following Stream does the trick:
List<Order> o = orders.stream().collect(
Collectors.collectingAndThen(
Collectors.groupingBy(Order::getPrice,Collectors.summingInt(Order::getQuantity)),
map -> map.entrySet().stream()
.map(e -> new Order(e.getKey(), e.getValue()))
.collect(Collectors.toList())));
Let's break this down:
The following code returns a Map<Integer, Integer> which contains the price as the key (the unique value you want to base the summing on) and the summed quantities as the value. The key method is Collectors.groupingBy with a classifier describing the key and a downstream defining the value, which here is the sum of quantities, hence Collectors.summingInt (the exact collector depends on the quantity type):
Map<Integer, Integer> map = orders.stream().collect(
    Collectors.groupingBy(                           // I want a Map<Integer, Integer>
        Order::getPrice,                             // price is the key
        Collectors.summingInt(Order::getQuantity))); // sum of quantities is the value
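For the sample data above, this intermediate map would contain {200=8, 100=6, 300=1} (the entry order depends on the map implementation).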
The desired structure is List<Order>, therefore you want to use the Collectors.collectingAndThen method with a Collector<T, A, R> downstream and Function<R, RR> finisher. The downstream is the grouping from the first point, the finisher will be a conversion of Map<Integer, Integer> back to List<Order>:
List<Order> o = orders.stream().collect(
Collectors.collectingAndThen(
grouping, // you know this one ;)
map -> map.entrySet()
.stream() // iterate entries
.map(e -> new Order(e.getKey(), e.getValue())) // key = price, value = summed quantity; match your Order constructor's parameter order
.collect(Collectors.toList()))); // as a List<Order>
I have a list of Records, which have two fields: a LocalDateTime instant and a Double data.
I want to group all the records by hour and create a Map<Integer, Double>, where the keys (Integer) are hours and the values (Double) are the last data of that hour minus the first data of that hour.
What I have done so far is following:
Function<Record, Integer> keyFunc = rec->rec.getInstant().getHour();
Map<Integer, List<Record>> valueMap = records.stream().collect(Collectors.groupingBy(keyFunc));
I want the value map to hold Double instead of List<Record>.
For example, the list of records can be the following:
Instant Data
01:01:24 23.7
01:02:34 24.2
01:05:23 30.2
...
01:59:27 50.2
02:03:23 54.4
02:04:23 56.3
...
02:58:23 70.3
...
etc
Resulting map should be:
Key Value
1 26.5 (50.2-23.7)
2 15.9 (70.3-54.4)
...
You are mostly looking for Collectors.mapping within the groupingBy.
Map<Integer, List<Double>> valueMap = records.stream()
.collect(Collectors.groupingBy(keyFunc,
Collectors.mapping(Record::getData, Collectors.toList())));
This groups Records by their instant's hour, collecting the corresponding data values into a List as the map values. Based on the further comments:
I want to subtract the first data from the last data
Yes, the list will be sorted based on instant
you can use the grouped map to get the desired output as:
Map<Integer, Double> output = new HashMap<>();
valueMap.forEach((k, v) -> output.put(k, v.get(v.size() - 1) - v.get(0)));
Alternatively, you could use Collectors.mapping with Collectors.collectingAndThen further as:
Map<Integer, Double> valueMap = records.stream()
.collect(Collectors.groupingBy(keyFunc,
Collectors.mapping(Record::getData,
Collectors.collectingAndThen(
Collectors.toList(), recs -> recs.get(recs.size() - 1) - recs.get(0)))));
You can use collectingAndThen as a downstream collector to groupingBy, and use the two extreme values of each group to compute the difference:
Map<Integer, Double> result = records.stream()
.collect(
Collectors.groupingBy(rec -> rec.getInstant().getHour(),
Collectors.collectingAndThen(
Collectors.toList(),
list -> {
//please handle the case of 1 entry only
list.sort(Comparator.comparing(Record::getInstant));
return list.get(list.size() - 1).getData()
- list.get(0).getData();
})));
Collectors.groupingBy(rec -> rec.getInstant().getHour()) will group entries by hour. As used here, Collectors.collectingAndThen will take the hourly entries as lists, sort each such list by the instant field, then find the difference between the two extreme elements.
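On Java 12+, a teeing collector could compute the same difference without sorting each sublist; a minimal sketch, assuming Record exposes getInstant() and getData() as above:
Map<Integer, Double> result = records.stream()
    .collect(Collectors.groupingBy(
        rec -> rec.getInstant().getHour(),
        Collectors.teeing(
            Collectors.minBy(Comparator.comparing(Record::getInstant)),  // earliest record of the hour
            Collectors.maxBy(Comparator.comparing(Record::getInstant)),  // latest record of the hour
            (min, max) -> max.get().getData() - min.get().getData())));  // last - first; groups are never empty, so get() is safe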
Based on the comment that the list would be sorted on timestamp, the following would work:
Map<Integer, Double> valueMap = records.stream()
.collect(Collectors.groupingBy(rec -> rec.getInstant().getHour(),
Collectors.mapping(Record::getData,
Collectors.collectingAndThen(Collectors.toList(),recs -> recs.get(recs.size()-1) - recs.get(0)))));