I have the following classes:
class A {
private Long id;
private Long rid; //Joins A with B1 and B2.
//Other data.
}
class B1 {
private Long rid; //Joins with A
private Long cid; //Joins with C
//other data.
}
class B2 {
private Long rid; //Joins with A
private Long cid; //Joins with C
//other data.
}
class C {
private Long id;
//other data.
}
A comes from a table in database. B1, B2 and C from b1, b2 and c tables respectively.
Now my logic is like following:
Get latest 1000 rows of A.
From A, get B1 and B2 on element rid.
From B1 and B2, get C.
Now, compare A and C on some parameter and produce the report.
In WebFlux, my code is as follows (Java 17):
class StageResult {
private List<A> aList;
private List<Long> rids;
private List<Long> cids1;
private List<Long> cids2;
private List<C> c;
}
var stageResult = new StageResult();
var page = PageRequest.of(0, 1000);
getA(page).collectList()
.flatMap(aList -> {
stageResult.setAList(aList);
stageResult.setRids(aList.stream().map(A::getRid).collect(Collectors.toList()));
return Mono.just(stageResult);
})
.flatMapMany(x -> getB1(x.getRids())) //Stage 2
.map(B1::getCid).collectList()
.flatMap(cids -> {
stageResult.setCids1(cids);
return Mono.just(stageResult);
})
.flatMapMany(x -> getB2(x.getRids())) //Stage 3
.map(B2::getCid).collectList()
.flatMap(cids -> {
stageResult.setCids2(cids);
return Mono.just(stageResult);
})
.flatMapMany(x -> {
var list = new ArrayList<Long>(x.getCids1());
list.addAll(x.getCids2());
return getC(list)
.collectList().map(y -> {
x.setC(y);
return x;
});
}).flatMap(x -> {
//Compare the element
});
In this case, stages 2 and 3 can be executed in parallel. I just want to know how we can execute stages 2 and 3 in parallel. One approach is to have stage 1 create separate stage 2 and stage 3 pipelines: cache the result of stage 1, then zip stage 2 and stage 3 before stage 4 and use the combined result.
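A rough, hypothetical sketch of that cache-and-zip idea (with the repository calls stubbed out with fixed data, since getA/getB1/getB2 are my own methods) would be:

```java
import java.util.ArrayList;
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ZipStagesSketch {
    // Stand-ins for the real repository calls (hypothetical data).
    static Flux<Long> getB1(List<Long> rids) { return Flux.just(10L, 11L); }
    static Flux<Long> getB2(List<Long> rids) { return Flux.just(20L); }

    static List<Long> allCids() {
        // Stage 1, cached so both downstream stages reuse the same result.
        Mono<List<Long>> rids = Flux.just(1L, 2L, 3L).collectList().cache();

        Mono<List<Long>> cids1 = rids.flatMapMany(ZipStagesSketch::getB1).collectList(); // Stage 2
        Mono<List<Long>> cids2 = rids.flatMapMany(ZipStagesSketch::getB2).collectList(); // Stage 3

        // zip subscribes to both sources eagerly, so stages 2 and 3 can run concurrently.
        return Mono.zip(cids1, cids2)
                .map(t -> {
                    var combined = new ArrayList<>(t.getT1());
                    combined.addAll(t.getT2());
                    return combined;
                })
                .block(); // blocking only for this standalone demo
    }

    public static void main(String[] args) {
        System.out.println(allCids());
    }
}
```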
You can use Flux.merge to resolve publishers in parallel.
Merge data from Publisher sequences contained in an array / vararg into an interleaved merged sequence. Unlike concat, sources are subscribed to eagerly.
.flatMapMany(x -> Flux.merge(
getB1(x.getRids()).map(B1::getCid), //Stage 2
getB2(x.getRids()).map(B2::getCid) //Stage 3
)
)
.collectList()
btw
It doesn't make sense to create "pseudo-async" publishers using Mono.just and then resolve using flatMap.
.flatMap(cids -> {
stageResult.setCids1(cids);
return Mono.just(stageResult);
})
You can use a simple map instead.
.map(cids -> {
stageResult.setCids1(cids);
return stageResult;
})
Related
I have this code which is used to get a list from a reactive spring JPA repository:
Mono<List<ActivePairs>> list = activePairsService.findAllOrdered().collectList();
ActivePairs obj = new ActivePairs().pair("All");
I would like to add an item at the beginning of the list:
I tried:
list.mergeWith(new ActivePairs().pair("All"));
But I get:
Required type: Publisher
<? extends java.util.List<io.domain.ActivePairs>>
Provided: ActivePairs
Do you know what is the proper way to implement this?
EDIT:
@Query(value = "SELECT ......")
Flux<ActivePairs> findAll();
Obj:
@Table("active_pairs")
public class ActivePairs implements Serializable {
private static final long serialVersionUID = 1L;
@Id
private Long id;
@Column("pair")
private String pair;
..... getter, setter, toString();
}
........
Mono<List<ActivePairs>> resultList = findAll().collectList();
resultList.subscribe(
addressList -> {
addressList.add(0,new ActivePairs().id(1111L).pair("All"));
},
error -> error.printStackTrace()
);
Using the above code, "All" is not added to the list.
You can use Mono.doOnNext() to add to the inner list in transit:
list.doOnNext(l -> l.add(0, new ActivePairs().pair("All")))
Edit
Not sure why it doesn't work for you. Here is an example that shows the working concept:
Mono<List<String>> source = Flux.just("A", "B", "C").collectList();
source
.doOnNext(list -> list.add(0, "All"))
.subscribe(
System.out::println,
Throwable::printStackTrace,
() -> System.out.println("Done"));
prints:
[All, A, B, C]
Done
It's impossible to modify the List<ActivePairs> from outside the Mono<List<ActivePairs>> before it is emitted. Since Mono and Flux are asynchronous/non-blocking by nature, the List isn't available yet; you can't get it except by waiting until it arrives, and that's what blocking is.
You can subscribe to the Mono from the calling method and the Subscriber you pass will get the List when it becomes available. E.g
list.subscribe(
addressList -> {
addressList.add(0, new ActivePairs().pair("All"));
},
error -> error.printStackTrace(),
() -> System.out.println("completed")
)
NOTE: There are also the methods Mono::doOnSuccess and Mono::doOnNext:
Mono::doOnSuccess triggers when the Mono completes successfully; the result is either T or null. That means the processing itself finished successfully regardless of the state of the data: it is executed even when no data is present, as long as the pipeline itself succeeded.
Mono::doOnNext triggers when the data is emitted successfully, which means the data is available and present.
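The difference can be seen with a tiny standalone example that logs which callbacks fire (blocking only for the sake of the demo):

```java
import java.util.ArrayList;
import java.util.List;
import reactor.core.publisher.Mono;

public class DoOnDemo {
    // Collects which callbacks fire for a given Mono.
    static List<String> events(Mono<String> mono) {
        List<String> log = new ArrayList<>();
        mono.doOnNext(v -> log.add("next:" + v))
            .doOnSuccess(v -> log.add("success:" + v))
            .block(); // blocking only for the demo
        return log;
    }

    public static void main(String[] args) {
        // Empty Mono: no data, so doOnNext never fires; doOnSuccess fires with null.
        System.out.println(events(Mono.empty()));
        // Mono with data: both fire, doOnNext first.
        System.out.println(events(Mono.just("data")));
    }
}
```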
To get the list from the Mono, use the block() method (note that this blocks the calling thread), like this:
List<ActivePairs> pairs = list.block();
When you do System.out.println(pairs), you should see the item added to your list.
public class Call {
private String status;
private String callName;
}
I have a list of calls and I have to create a summary, like this:
public class CallSummary {
private String callName;
private List<ItemSummary> items;
}
public class ItemSummary {
private String status;
private Integer percentage;
}
My goal is to show the percentage of calls with each status, like:
INBOUND_CALL : {
FAILED = 30%
SUCCESS = 70%
}
How can I do it using Java 8 streams and Collectors?
The idea behind the grouping would be to nest it in such a way that you have a call name and then a status-based count lookup available. I would also suggest using an enumeration for the status:
enum CallStatus {
FAILED, SUCCESS
}
and adapting it in other classes as
class Call {
private CallStatus status;
private String callName;
}
Then you can implement a nested grouping and start off with an intermediate result such as:
List<Call> sampleCalls = List.of(new Call(CallStatus.SUCCESS,"naman"),new Call(CallStatus.FAILED,"naman"),
new Call(CallStatus.SUCCESS,"diego"), new Call(CallStatus.FAILED,"diego"), new Call(CallStatus.SUCCESS,"diego"));
Map<String, Map<CallStatus, Long>> groupedMap = sampleCalls.stream()
.collect(Collectors.groupingBy(Call::getCallName,
Collectors.groupingBy(Call::getStatus, Collectors.counting())));
which would give you an output of
{diego={FAILED=1, SUCCESS=2}, naman={FAILED=1, SUCCESS=1}}
and from that you can further evaluate the percentages as well (though representing them as Integer may lose precision, depending on how you compute them).
To solve it further, you can keep another Map for the name-based count lookup as:
Map<String, Long> nameBasedCount = calls.stream()
.collect(Collectors.groupingBy(Call::getCallName, Collectors.counting()));
and further, compute summaries of type CallSummary in a List as :
List<CallSummary> summaries = groupedMap.entrySet().stream()
.map(entry -> new CallSummary(entry.getKey(), entry.getValue().entrySet()
.stream()
.map(en -> new ItemSummary(en.getKey(), percentage(en.getValue(),
nameBasedCount.get(entry.getKey()))))
.collect(Collectors.toList()))
).collect(Collectors.toList());
where percentage can be implemented by you using the signature int percentage(long val, long total), aligned with the data type chosen in ItemSummary.
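A minimal version of that helper (using integer division, which truncates and therefore matches values such as 66 rather than 67 in the sample result below) could be:

```java
public class PercentageHelper {
    // Integer division truncates: e.g. 2 of 3 becomes 66, not 67.
    static int percentage(long val, long total) {
        return (int) (val * 100 / total);
    }

    public static void main(String[] args) {
        System.out.println(percentage(2, 3)); // 66
        System.out.println(percentage(1, 2)); // 50
    }
}
```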
Sample result:
[
CallSummary(callName=diego, items=[ItemSummary(status=FAILED, percentage=33), ItemSummary(status=SUCCESS, percentage=66)]),
CallSummary(callName=naman, items=[ItemSummary(status=FAILED, percentage=50), ItemSummary(status=SUCCESS, percentage=50)])
]
The following collects to a status -> percent map, which you can then convert to your output model. This code assumes a getStatus method.
List<Call> calls;
Map<String,Double> statusPercents = calls.stream()
.collect(Collectors.groupingBy(Call::getStatus,
Collectors.collectingAndThen(Collectors.counting(),
n -> 100.0 * n / calls.size())));
I realise this code is a bit hard to read. The chain of collectors groups the calls by status, then counts each group, and finally converts the count to a percentage. You could (arguably) make it more readable by having interim variables for the collectors:
Function<Long, Double> percentFunction = n -> 100.0 * n / calls.size(); // var can't infer a lambda's type
var collectPercent = collectingAndThen(counting(), percentFunction);
var collectStatusPercentMap = groupingBy(Call::getStatus, collectPercent);
You also want to group by call name, but that's really just the same thing: use groupingBy on the name and then reduce each list of calls to a CallSummary.
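Sketching that combination (class shapes assumed from the question, with a status enum as in the other answer; within each name group, the percentage is computed against that group's size):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CallPercentDemo {
    enum CallStatus { FAILED, SUCCESS }
    record Call(CallStatus status, String callName) {}

    // Outer grouping by call name; inner: count per status, divided by the group size.
    static Map<String, Map<CallStatus, Double>> summarize(List<Call> calls) {
        return calls.stream().collect(Collectors.groupingBy(Call::callName,
                Collectors.collectingAndThen(Collectors.toList(),
                        group -> group.stream().collect(Collectors.groupingBy(Call::status,
                                Collectors.collectingAndThen(Collectors.counting(),
                                        n -> 100.0 * n / group.size()))))));
    }

    public static void main(String[] args) {
        var calls = List.of(
                new Call(CallStatus.SUCCESS, "INBOUND_CALL"),
                new Call(CallStatus.SUCCESS, "INBOUND_CALL"),
                new Call(CallStatus.FAILED, "INBOUND_CALL"));
        System.out.println(summarize(calls)); // FAILED ~33.3, SUCCESS ~66.7
    }
}
```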
I have a list of elements, let's call it "keywords", like this:
public class Keyword {
Long id;
String name;
String owner;
Date createdTime;
Double price;
Date metricDay;
Long position;
}
The thing is that there is a keyword for every single day. For example:
Keyword{id=1, name="kw1", owner="Josh", createdTime="12/12/1992", price="0.1", metricDay="11/11/1999", position=109}
Keyword{id=1, name="kw1", owner="Josh", createdTime="12/12/1992", price="0.3", metricDay="12/11/1999", position=108}
Keyword{id=1, name="kw1", owner="Josh", createdTime="12/12/1992", price="0.2", metricDay="13/11/1999", position=99}
Keyword{id=2, name="kw2", owner="Josh", createdTime="13/12/1992", price="0.6", metricDay="13/11/1999", position=5}
Keyword{id=2, name="kw2", owner="Josh", createdTime="13/12/1992", price="0.1", metricDay="14/11/1999", position=4}
Keyword{id=3, name="kw3", owner="Josh", createdTime="13/12/1992", price="0.1", metricDay="13/11/1999", position=8}
Then, from this list, I would like to gather the metrics from all those different days into one single list per keyword. First, I created a class like this:
public class KeywordMetric {
Double price;
Date metricDay;
Long position;
}
And what I would like to achieve is to go from the first list to a structure like this:
public class KeywordMerged {
Long id;
String name;
String owner;
List<KeywordMetric> metricList;
}
Example of what I expect:
KeywordMerged{id=1, name="kw1", owner="Josh", createdTime="12/12/1992", metricList=[KeywordMetric{price=0.1,metricDay="11/11/1999",position=109},KeywordMetric{price=0.3,metricDay="12/11/1999",position=108},KeywordMetric{price=0.2,metricDay="13/11/1999",position=99}]
KeywordMerged{id=2, name="kw2", owner="Josh", createdTime="13/12/1992", metricList=[KeywordMetric{price=0.6,metricDay="13/11/1999",position=5},KeywordMetric{price=0.1,metricDay="14/11/1999",position=4}]
KeywordMerged{id=3, name="kw3", owner="Josh", createdTime="13/12/1992", metricList=[KeywordMetric{price=0.1,metricDay="13/11/1999",position=8}]
I know how to do this with a lot of loops and mutable variables, but I can't figure out how to do it with streams and lambda operations. I was able to group all related keywords by id with this:
Map<Long, List<Keyword>> kwL = kwList.stream()
.collect(groupingBy(Keyword::getId));
And I know that with .forEach() I could iterate over that Map, but can't figure out how to make the collect() method of streams pass from List to KeywordMerged.
You can try to use the Collectors.toMap(...) instead. Where:
Keyword::getId is a key mapper function.
KeywordMerged.from(...) performs a transformation: Keyword => KeywordMerged
(left, right) -> { .. } combines metrics for entities with identical ids.
Collection<KeywordMerged> result = keywords.stream()
.collect(Collectors.toMap(
Keyword::getId,
k -> KeywordMerged.from(k), // you can replace this lambda with a method reference
(left, right) -> {
left.getMetricList().addAll(right.getMetricList());
return left;
}))
.values();
A transformation method might look something like this:
public class KeywordMerged {
public static KeywordMerged from(Keyword k) {
KeywordMetric metric = new KeywordMetric();
metric.setPrice(k.getPrice());
metric.setMetricDay(k.getMetricDay());
metric.setPosition(k.getPosition());
KeywordMerged merged = new KeywordMerged();
merged.setId(k.getId());
merged.setName(k.getName());
merged.setOwner(k.getOwner());
merged.setMetricList(new ArrayList<>(Arrays.asList(metric)));
return merged;
}
}
I think you've got the basic idea. So, refactor according to your needs...
A slightly different approach. First you collect the Map of keywords grouped by id:
Map<Long, List<Keyword>> groupedData = keywords.stream()
.collect(Collectors.groupingBy(k -> k.getId()));
Further you convert your map to the list of desired format:
List<KeywordMerged> finalData = groupedData.entrySet().stream()
.map(k -> new KeywordMerged(k.getValue().get(0).getId(),
k.getValue().stream()
.map(v -> new KeywordMetric(v.getMetricDay(), v.getPrice(), v.getPosition()))
.collect(Collectors.toList())))
.collect(Collectors.toList());
This works on the grouped data: transforming the map creates a KeywordMerged object, which receives the id as an argument (you can extend it further yourself) along with the previously grouped data converted to a List<KeywordMetric>.
EDIT: I believe with some extraction to methods you can make it look much nicer :)
I want to design a rule engine to filter incoming objects as follow:
At the beginning, I have three different classes: A, B, C. The list of classes is dynamic, i.e. I want to extend this to work with classes D, E, etc. if they are added later.
public class A {
int a1;
String a2;
boolean a3;
Date a4;
List<String> a5;
...
}
public class B {
String b1;
boolean b2;
Date b3;
long b4;
...
}
public class C {
String c1;
boolean c2;
Date c3;
long c4;
...
}
There will be different objects of class A, B, or C that are gonna be filtered by my rule engine.
The users can define different rules based on a set of predefined operations that each member variable of a class can possibly have.
An example of some operations:
a1 can have operations like: >=, <=, or between some range, etc.
a2 can have operations like: is not, or is, etc.
a3 can only be true or false.
a4 can be: before certain date, after certain date, or between, etc.
a5 can be: exists or not exists in the list, etc.
Some example rules for an object of class A would be:
a1 is between 0 and 100, and a2 is not "ABC", and a3 is false, and a4 is before today, and a5 contains "cdf", etc.
a2 is "abc", and a3 is true, and a4 is between some date range.
etc.
One bullet is one rule. In a rule, there can be one criterion or more (multiple criteria AND'ed together).
Each criterion is defined by a member variable with an operation that I can apply on that variable.
The rule engine must be able to process rules defined by users for each object of class A, B, or C. For every rule (A Rule, B Rule, or C Rule) coming in, the return will be a list of objects that matched the specified rule.
I can create Criterion, Criteria, ARule, BRule, CRule, Operation objects, etc; and I can go with the Brute Force way of doing things; but that's gonna be a lot of if...else... statements.
I appreciate all ideas of any design patterns/design method that I can use to make this clean and extendable.
Thank you very much for your time.
Sounds like a rule is actually a Predicate that is formed by and-ing other predicates. With Java 8, you can let the users define predicates for properties:
Predicate<A> operationA1 = a -> a.getA1() >= 10; // a1 is int
Predicate<A> operationA2 = a -> a.getA2().startsWith("a"); // a2 is String
Predicate<A> operationA3 = a -> a.getA3(); // == true; a3 is boolean
Predicate<A> ruleA = operationA1.and(operationA2).and(operationA3);
Now you can stream your List<A>, filter and collect to a new list:
List<A> result = listOfA.stream()
.filter(ruleA)
.collect(Collectors.toList());
You can use similar approaches for B and C.
Now, there are several ways to abstract all this. Here's one possible solution:
public static <T, P> Predicate<T> operation(
Function<T, P> extractor,
Predicate<P> condition) {
return t -> condition.test(extractor.apply(t));
}
This method creates a predicate (that represents one of your operations) based on a Function that will extract the property from either A, B or C (or future classes) and on a Predicate over that property.
For the same examples I've shown above, you could use it this way:
Predicate<A> operation1A = operation(A::getA1, p -> p >= 10);
Predicate<A> operation2A = operation(A::getA2, p -> p.startsWith("a"));
Predicate<A> operation3A = operation(A::getA3, p -> p); // p == true ?
But, as the method is generic, you can also use it for instances of B:
Predicate<B> operation1B = operation(B::getB1, p -> p.startsWith("z"));
Predicate<B> operation2B = operation(B::getB2, p -> !p); // p == false ?
Predicate<B> operation3B = operation(B::getB3, p -> p.before(new Date()));
Now that you have defined some operations, you need a generic way to create a rule out from the operations:
public static <T> Predicate<T> rule(Predicate<T>... operations) {
return Arrays.stream(operations).reduce(Predicate::and).orElse(t -> true);
}
This method creates a rule by and-ing the given operations. It first creates a stream from the given array and then reduces this stream by applying the Predicate#and method to the operations. You can check the Arrays#stream, Stream#reduce and Optional#orElse docs for details.
So, to create a rule for A, you could do:
Predicate<A> ruleA = rule(
operation(A::getA1, p -> p >= 10),
operation(A::getA2, p -> p.startsWith("a")),
operation(A::getA3, p -> p));
List<A> result = listOfA.stream()
.filter(ruleA)
.collect(Collectors.toList());
I have an entity with 10 fields:
Class Details{
String item;
String name;
String type;
String origin;
String color;
String quality;
String country;
String quantity;
Boolean availability;
String price;
}
I have a restful endpoint that serves a List. I want the user to be able to provide search filters for each field.
Currently I have QueryParam for each field. Then I filter by using java8 stream:
List<Detail> details;
details.stream().filter(detail-> detail.getItem()==item).filter(detail-> detail.getName()==name).....collect(Collectors.toList());
If I have 50 other classes with multiple fields that I want to filter, Is there a way of generalizing this?
You can compose such predicates with .and() and .or(), allowing you to define a single aggregate predicate that applies all the checks you would like, rather than trying to chain n .filter() calls. This enables arbitrarily complex predicates that can be constructed at runtime.
// Note that you shouldn't normally use == on objects
Predicate<Detail> itemPredicate = d-> item.equals(d.getItem());
Predicate<Detail> namePredicate = d-> name.equals(d.getName());
details.stream()
.filter(itemPredicate.and(namePredicate))
.collect(Collectors.toList());
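For the "constructed at runtime" part, one possible sketch (field names borrowed from the question, reduced to two fields for brevity) is to collect a predicate only for each parameter that was actually supplied, then reduce them with Predicate::and:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilterDemo {
    record Detail(String item, String name) {}

    static List<Detail> filter(List<Detail> details, String item, String name) {
        // Only add a check for parameters the caller actually supplied.
        List<Predicate<Detail>> checks = new ArrayList<>();
        if (item != null) checks.add(d -> item.equals(d.item()));
        if (name != null) checks.add(d -> name.equals(d.name()));
        // Reduce the checks into one predicate; with no filters, everything passes.
        Predicate<Detail> combined = checks.stream().reduce(Predicate::and).orElse(d -> true);
        return details.stream().filter(combined).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        var details = List.of(new Detail("pen", "blue"), new Detail("pen", "red"));
        System.out.println(filter(details, "pen", "red")); // only the red pen
        System.out.println(filter(details, null, null));   // no filters: both pass
    }
}
```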
If you want to avoid reflection, how about something like this?
static enum DetailQueryParams {
ITEM("item", d -> d.item),
NAME("name", d -> d.name),
TYPE("type", d -> d.type),
ORIGIN("origin", d -> d.origin),
COLOR("color", d -> d.color),
QUALITY("quality", d -> d.quality),
COUNTRY("country", d -> d.country),
QUANTITY("quantity", d -> d.quantity),
AVAILABILITY("availability", d -> d.availability),
PRICE("price", d -> d.price);
private String name;
private Function<Detail, Object> valueExtractor;
private DetailQueryParams(String name,
Function<Detail, Object> valueExtractor) {
this.name = name;
this.valueExtractor = valueExtractor;
}
public static Predicate<Detail> mustMatchDetailValues(
Function<String, Optional<String>> queryParamGetter) {
return Arrays.asList(values()).stream()
.map(p -> queryParamGetter.apply(p.name)
.map(q -> (Predicate<Detail>)
d -> String.valueOf(p.valueExtractor.apply(d)).equals(q))
.orElse(d -> true))
.reduce(Predicate::and)
.orElse(d -> true);
}
}
And then, assuming that you can access query params by e.g. request.getQueryParam(String name) which returns a String value or null, use the code by calling the following:
details.stream()
.filter(DetailQueryParams.mustMatchDetailValues(
name -> Optional.ofNullable(request.getQueryParam(name))))
.collect(Collectors.toList());
What the method basically does is:
- for each possible query param
- get its value from the request
- if value is present build predicate which
- gets field value from detail object and convert to string
- check that both strings (queried and extracted) matches
- if value is not present return predicate that always returns true
- combine resulting predicates using and
- use always true as fallback (which here never actually happens)
Of course this could also be extended to generate predicates depending on the actual value type instead of comparing strings so that e.g. ranges could be requested and handled via "priceMin" and/or "priceMax".
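For example, hypothetical priceMin/priceMax parameters could be turned into a typed range predicate roughly like this (the String price is parsed to a number for the comparison, and a missing bound degenerates to an always-true check):

```java
import java.util.Optional;
import java.util.function.Predicate;

public class RangeParamDemo {
    record Detail(String price) {}

    // Hypothetical extension: build a numeric range check from optional min/max params.
    static Predicate<Detail> priceRange(Optional<String> min, Optional<String> max) {
        Predicate<Detail> lower = min.map(m -> (Predicate<Detail>)
                d -> Double.parseDouble(d.price()) >= Double.parseDouble(m)).orElse(d -> true);
        Predicate<Detail> upper = max.map(m -> (Predicate<Detail>)
                d -> Double.parseDouble(d.price()) <= Double.parseDouble(m)).orElse(d -> true);
        return lower.and(upper);
    }

    public static void main(String[] args) {
        Predicate<Detail> p = priceRange(Optional.of("10"), Optional.of("20"));
        System.out.println(p.test(new Detail("15"))); // in range
        System.out.println(p.test(new Detail("25"))); // above max
    }
}
```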
A lambda has the shape detail -> condition, so we can put as many conditions as required into a single filter:
details.stream().filter(detail -> detail.getItem().equals(item) && detail.getName().equals(name) && ...)