Add an item into a Mono list - Java

I have this code, which is used to get a list from a reactive Spring JPA repository:
Mono<List<ActivePairs>> list = activePairsService.findAllOrdered().collectList();
ActivePairs obj = new ActivePairs().pair("All");
I would like to add an item at the beginning of the list.
I tried:
list.mergeWith(new ActivePairs().pair("All"));
But I get:
Required type: Publisher<? extends java.util.List<io.domain.ActivePairs>>
Provided: ActivePairs
Do you know what is the proper way to implement this?
EDIT:
@Query(value = "SELECT ......")
Flux<ActivePairs> findAll();
Obj:
@Table("active_pairs")
public class ActivePairs implements Serializable {
    private static final long serialVersionUID = 1L;
    @Id
    private Long id;
    @Column("pair")
    private String pair;
    // ..... getter, setter, toString();
}
........
Mono<List<ActivePairs>> resultList = findAll().collectList();
resultList.subscribe(
    addressList -> {
        addressList.add(0, new ActivePairs().id(1111L).pair("All"));
    },
    error -> error.printStackTrace()
);
Using the above code, "All" is not added to the list.

You can use Mono.doOnNext() to add to the inner list in transit:
list.doOnNext(list -> list.add(0, new ActivePairs().pair("All")))
Edit
Not sure why it doesn't work for you. Here is an example that shows the working concept:
Mono<List<String>> source = Flux.just("A", "B", "C").collectList();
source
    .doOnNext(list -> list.add(0, "All"))
    .subscribe(
        System.out::println,
        Throwable::printStackTrace,
        () -> System.out.println("Done"));
prints:
[All, A, B, C]
Done

It's impossible to modify the List<ActivePairs> from outside the Mono<List<ActivePairs>> before it has been emitted. Since Mono and Flux are asynchronous/non-blocking by nature, the List may not be available yet; you can't get at it except by waiting until it arrives, and that waiting is exactly what blocking is.
You can subscribe to the Mono from the calling method, and the Subscriber you pass will receive the List when it becomes available. E.g.:
list.subscribe(
    addressList -> {
        addressList.add(0, new ActivePairs().pair("All"));
    },
    error -> error.printStackTrace(),
    () -> System.out.println("completed without a value")
);
NOTE:
There are also the methods Mono::doOnSuccess and Mono::doOnNext:
Mono::doOnSuccess triggers when the Mono completes successfully - the result is either T or null. That means the processing itself finished successfully regardless of the state of the data; it is executed even when no data is available or present, as long as the pipeline itself succeeded.
Mono::doOnNext triggers when the data is emitted successfully, which means the data is available and present.
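A minimal sketch of that difference, assuming Reactor is on the classpath:
Mono<String> withValue = Mono.just("data");
Mono<String> empty = Mono.empty();
withValue
    .doOnNext(v -> System.out.println("doOnNext: " + v))       // fires: data is present
    .doOnSuccess(v -> System.out.println("doOnSuccess: " + v)) // fires with "data"
    .subscribe();
empty
    .doOnNext(v -> System.out.println("doOnNext: " + v))       // never fires: nothing is emitted
    .doOnSuccess(v -> System.out.println("doOnSuccess: " + v)) // fires with null
    .subscribe();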
To get the list from the Mono, use the block() method, like this:
List<ActivePairs> pairs = list.block();
When you do System.out.println(pairs), you should see the item added to your list.
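Putting the two suggestions together, a minimal sketch that mutates the collected list and then blocks for it (block() must not be called on a reactive scheduler thread; the service call is the one from the question):
List<ActivePairs> pairs = activePairsService.findAllOrdered()
    .collectList()
    .doOnNext(l -> l.add(0, new ActivePairs().pair("All"))) // prepend the "All" entry
    .block();                                               // wait for the list to be emitted
System.out.println(pairs); // the "All" entry is now at index 0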

Related

Streaming over a Java list to populate a Map

Java 11 here. I have the following POJO:
@Data // Lombok; adds getters, setters, all-args constructor, and equals and hashCode
public class Flimflam {
    private String merf;
    private String tarf;
    private Boolean isFlerf;
}
I have a method that validates a Flimflam and returns a List<String> of any errors encountered while validating it. I can change this to return Optional<List<String>> if anyone thinks that's helpful for some reason, especially when dealing with the Stream API:
public List<String> validateFlimflam(Flimflam flimflam) {
    List<String> errors = new ArrayList<>();
    // ... validation code omitted for brevity
    // the 'errors' list is populated with any errors; otherwise it stays empty
    return errors;
}
I want to stream (Stream API) through a List<Flimflam> and populate a Map<Flimflam,List<String>> errors map, where the key of the map is a Flimflam that failed validation, and its corresponding value is the list of validation error strings.
I can achieve this the "old fashioned" way like so:
List<Flimflam> flimflams = getSomehow();
Map<Flimflam, List<String>> errorsMap = new HashMap<>();
for (Flimflam ff : flimflams) {
    List<String> errors = validateFlimflam(ff);
    if (!errors.isEmpty()) {
        errorsMap.put(ff, errors);
    }
}
How can I accomplish this via the Stream API?
Like this:
Map<Flimflam, List<String>> errorsMap = flimflams.stream()
    .collect(Collectors.toMap(f -> f, f -> validateFlimflam(f)));
toMap takes two parameters: (keyMapper, valueMapper).
In your case the key mapper is the object from the stream itself, and the value mapper calls validateFlimflam on that object.
It is hard to tell where exactly your validateFlimflam method is defined. I suspect it is not in the Flimflam class itself, since there would be no need to pass an instance of itself to the method. So I presume it is a method external to that class. Assuming that, I would proceed as follows:
// thisClass = the instance containing validateFlimflam; could be set to this
Map<Flimflam, List<String>> errorsMap =
    flimflams.stream().collect(Collectors.toMap(f -> f,
        thisClass::validateFlimflam));
If, by chance, Flimflam does contain validateFlimflam, you could do it like this. Note that this presumes the method takes no arguments, as they wouldn't be necessary:
Map<Flimflam, List<String>> errorsMap =
    flimflams.stream().collect(Collectors.toMap(f -> f,
        Flimflam::validateFlimflam));
Finally, if the containing class is some other class and the validateFlimflam method is declared static, then you could do it like this, using the containing class name rather than an instance. In this case, the method would take an argument as defined:
Map<Flimflam, List<String>> errorsMap =
    flimflams.stream().collect(Collectors.toMap(f -> f,
        SomeClass::validateFlimflam));
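If you also want to keep the original behaviour of only storing entries whose error list is non-empty, one possible sketch (assuming validateFlimflam is reachable from the calling class) is to build the entries first and filter before collecting:
Map<Flimflam, List<String>> errorsMap = flimflams.stream()
    .map(f -> Map.entry(f, validateFlimflam(f))) // validate each Flimflam exactly once
    .filter(e -> !e.getValue().isEmpty())        // keep only the ones that failed validation
    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));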

How to change an Iterable to an ArrayList with a filter

I have code :
@GetMapping("/goal/{id}")
public String goalInfo(@PathVariable(value = "id") long id, Model model) {
    if (!goalRepository.existsById(id)) {
        return "redirect:/goal";
    }
    Iterable<SubGoal> subGoal = subGoalRepository.findAll();
    ArrayList<SubGoal> subGoals = new ArrayList<>();
    // How to refactor this?
    for (SubGoal sub : subGoal) {
        if (sub.getParentGoal().getId().equals(id)) {
            subGoals.add(sub);
        }
    }
    if (subGoals.size() > 0) {
        goalPercent(id, subGoal);
    }
    Optional<Goal> goal = goalRepository.findById(id);
    ArrayList<Goal> result = new ArrayList<>();
    goal.ifPresent(result::add);
    model.addAttribute("goal", result);
    model.addAttribute("subGoal", subGoals);
    return "goal/goal-info";
}
Here I get the sub-goals from the repository and filter these values.
How can I do it without the foreach? I want to use Streams or something else.
You don't need to declare an Iterable in your code to filter your ArrayList; the Stream API handles the iteration for you. You can use:
subGoals = subGoals.stream()
    .filter(subGoal -> /* Here goes your filter condition */)
    .collect(Collectors.toList());
To convert Iterable to Stream use StreamSupport.stream(iter.spliterator(), par).
Iterable<SubGoal> subGoal = subGoalRepository.findAll();
List<SubGoal> subGoals = StreamSupport
    .stream(subGoal.spliterator(), false)
    .filter(sub -> sub.getParentGoal().getId().equals(id))
    .collect(toList()); // static import `Collectors.toList()`
...
Additionally, this part can also be a single statement.
Before (three statements):
Optional<Goal> goal = goalRepository.findById(id);
ArrayList<Goal> result = new ArrayList<>();
goal.ifPresent(result::add);
After (single statement):
List<Goal> result = goalRepository.findById(id)
.map(goal -> singletonList(goal)) // Collections.singletonList()
.orElse(emptyList()); // Collections.emptyList()
Updates
1. singletonList(), emptyList()
These are just factory methods for creating a single-element list and an empty list.
You can change this part to any function that takes a Goal as input and produces a List as output, and to any empty list.
For example,
.map(goal -> Arrays.asList(goal))
.orElse(new ArrayList<>());
or
.map(goal -> {
ArrayList<Goal> l = new ArrayList<>();
l.add(goal);
return l;
})
...
2. I changed the list type to List<Goal>, not ArrayList<Goal>
Sorry, I missed the explanation about that.
In OOP, using an interface type is better practice than using a concrete class in many situations.
If you have to use the ArrayList<> type explicitly or want to specify the actual list implementation for some reason, you can also use toCollection() like below:
.collect(toCollection(ArrayList::new)) // you can specify the actual list instance
Thanks to @John Bollinger and @hfontanez for pointing this out.
This is client-side filtering and is extremely inefficient. Instead, simply declare this method on your repository interface:
Collection<SubGoal> findByParentGoalId(Long id); // or Stream, Iterable; derived from the parentGoal.id property path
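With that in place, the controller body could shrink to roughly the following sketch (findByParentGoalId is an assumed derived-query name based on the parentGoal property, and goalPercent is assumed to accept the filtered collection):
@GetMapping("/goal/{id}")
public String goalInfo(@PathVariable(value = "id") long id, Model model) {
    if (!goalRepository.existsById(id)) {
        return "redirect:/goal";
    }
    Collection<SubGoal> subGoals = subGoalRepository.findByParentGoalId(id); // filtering happens in the database
    if (!subGoals.isEmpty()) {
        goalPercent(id, subGoals);
    }
    goalRepository.findById(id)
        .ifPresent(goal -> model.addAttribute("goal", List.of(goal)));
    model.addAttribute("subGoal", subGoals);
    return "goal/goal-info";
}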

How can I get original elements in a class after groupBy in RxJava2?

I have a list of JSON elements. Each JSON element is represented as a Java class named Foo, for example. This Foo class has other fields which are themselves Java classes. I am trying to group these Foo elements by Bar id, and after that I want to perform other operations on the grouped elements, like filter and sort.
public class Foo {
    private int id;
    private Bar bar;
    private Baz baz;
    private int qty;
}
public class Bar {
    private int id;
    private String name;
}
public class Baz {
    private int id;
    private String type;
}
I tried something like this to see the result after the groupBy operation, but it didn't print anything. However, if I provide simple elements like String or Integer instead of Foo and group those with the same approach, it works.
List<Foo> myInput = new ArrayList<>();
myInput.add(...);
myInput.add(...);
Observable.fromIterable(myInput)
.groupBy(el -> el.getBar().getId())
.concatMapSingle(Observable::toList)
.subscribe(System.out::println);
This code works with this input and prints out grouped elements afterwards:
Observable<String> animals = Observable.just(
"Tiger", "Elephant", "Cat", "Chameleon", "Frog", "Fish", "Flamingo");
animals.groupBy(animal -> animal.charAt(0))
.concatMapSingle(Observable::toList)
.subscribe(System.out::println);
What I am trying to do is something like this:
Observable.fromIterable(myInput)
.filter(el -> el.getBaz().getType().equals("type1"))
.groupBy(el -> el.getBar().getId())
//.filter(...)
//.sorted(...)
How can I group elements, retrieve the grouped elements, and apply other operations to them? Could you also explain it a little so that I can understand what is happening under the hood?
From the docs of Observable#groupBy:
Groups the items emitted by an ObservableSource according to a specified criterion, and emits these grouped items as GroupedObservables.
From the docs of GroupedObservable:
An Observable that has been grouped by key, the value of which can be obtained with getKey().
So GroupedObservable is a subclass of Observable, and you can apply the usual map, filter, flatMap, etc. operators to it.
For example, if you want to sum the qtys of the grouped Foos per each Bar, you can map the GroupedObservable in this way:
Observable.fromIterable(foos)
    .groupBy(foo -> foo.getBar().getId())
    .flatMapSingle(grouped -> grouped.map(foo -> foo.getQty()).reduce(0, Integer::sum))
    .subscribe(System.out::println);
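Coming back to the original question (filter, group by Bar id, then sort each group and get it back as a list), a sketch along these lines should work, assuming Foo exposes a getQty() getter:
Observable.fromIterable(myInput)
    .filter(el -> el.getBaz().getType().equals("type1"))  // keep only "type1" elements
    .groupBy(el -> el.getBar().getId())                   // one GroupedObservable per Bar id
    .concatMapSingle(group -> group
        .sorted(Comparator.comparingInt(Foo::getQty))     // sort within each group
        .toList())                                        // emit each group as a List<Foo>
    .subscribe(System.out::println);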

Remake list with some condition

There are two entities:
class GiftCertificate {
Long id;
List<Tag> tags;
}
class Tag {
Long id;
String name;
}
There is a list
List<GiftCertificate>
which contains, for example, the following data:
<1, [1, "Tag1"]>, <2, null>, <1, [2, "Tag2"]>. (It does not contain a set of tags, but only one tag or does not have it at all).
I need to do so that in the result it was this:
<1, {[1," Tag1 "], [2," Tag2 "]}>, <2, null>. I mean, add to the set of the first object a tag from the third GiftCertificate and at the same time delete the 3rd one. I would like to get at least some ideas on how to do this. it would be nice to use stream.
Probably not the most efficient way, but it might help:
private List<GiftCertificate> joinCertificates(List<GiftCertificate> giftCertificates) {
    return giftCertificates.stream()
        .collect(Collectors.groupingBy(GiftCertificate::getId))
        .entrySet().stream()
        .map(entry -> new GiftCertificate(entry.getKey(), joinTags(entry.getValue())))
        .collect(Collectors.toList());
}

private List<Tag> joinTags(List<GiftCertificate> giftCertificates) {
    return giftCertificates.stream()
        .flatMap(giftCertificate -> Optional.ofNullable(giftCertificate.getTags()).stream().flatMap(Collection::stream))
        .collect(Collectors.toList());
}
You can do what you want with streams and with the help of a dedicated custom constructor and a couple of helper methods in GiftCertificate. Here's the constructor:
public GiftCertificate(GiftCertificate another) {
this.id = another.id;
this.tags = new ArrayList<>(another.tags);
}
This works as a copy constructor. We're creating a new list of tags, so that if the list of tags of either one of the GiftCertificate instances is modified, the other one won't be affected. (This is just a basic OO concept: encapsulation.)
Then, in order to add another GiftCertificate's tags to this GiftCertificate's list of tags, you could add the following method to GiftCertificate:
public GiftCertificate addTagsFrom(GiftCertificate another) {
tags.addAll(another.tags);
return this;
}
And also, a helper method that returns whether the list of tags is empty or not will come in very handy:
public boolean hasTags() {
return tags != null && !tags.isEmpty();
}
Finally, with these three simple methods in place, we're ready to use all the power of streams to solve the problem in an elegant way:
Collection<GiftCertificate> result = certificates.stream()
.filter(GiftCertificate::hasTags) // keep only gift certificates with tags
.collect(Collectors.toMap(
GiftCertificate::getId, // group by id
GiftCertificate::new, // use our dedicated constructor
GiftCertificate::addTagsFrom)) // merge the tags here
.values();
This uses Collectors.toMap to create a map that groups gift certificates by id, merging the tags. Then, we keep the values of the map.
Here's the equivalent solution, without streams:
Map<Long, GiftCertificate> map = new LinkedHashMap<>(); // preserves insertion order
certificates.forEach(cert -> {
if (cert.hasTags()) {
map.merge(
cert.getId(),
new GiftCertificate(cert),
GiftCertificate::addTagsFrom);
}
});
Collection<GiftCertificate> result = map.values();
And here's a variant with a slight performance improvement:
Map<Long, GiftCertificate> map = new LinkedHashMap<>(); // preserves insertion order
certificates.forEach(cert -> {
if (cert.hasTags()) {
map.computeIfAbsent(
cert.getId(),
k -> new GiftCertificate(k)) // or GiftCertificate::new
.addTagsFrom(cert);
}
});
Collection<GiftCertificate> result = map.values();
This solution requires the following constructor:
public GiftCertificate(Long id) {
this.id = id;
this.tags = new ArrayList<>();
}
The advantage of this approach is that new GiftCertificate instances will be created only if there's no other entry in the map with the same id.
Java 9 introduced the flatMapping collector, which is particularly well suited to problems like this. Break the task into two steps: first, build a map of gift certificate IDs to lists of tags, and then assemble a new list of GiftCertificate objects:
import static java.util.stream.Collectors.flatMapping;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.toList;
......
Map<Long, List<Tag>> gcIdToTags = gcs.stream()
    .collect(groupingBy(
        GiftCertificate::getId,
        flatMapping(
            gc -> gc.getTags() == null ? Stream.empty() : gc.getTags().stream(),
            toList()
        )
    ));
List<GiftCertificate> r = gcIdToTags.entrySet().stream()
    .map(e -> new GiftCertificate(e.getKey(), e.getValue()))
    .collect(toList());
This assumes that GiftCertificate has a constructor that accepts Long id and List<Tag> tags.
Note that this code deviates from your requirements by creating an empty list instead of null in case there are no tags for a gift certificate id. Using null instead of an empty list is just a very lousy design and forces you to pollute your code with null checks everywhere.
The first argument to flatMapping can also be written as gc -> Stream.ofNullable(gc.getTags()).flatMap(List::stream) if you find that more readable.

map vs flatMap in Reactor

I've found a lot of answers regarding RxJava, but I want to understand how it works in Reactor.
My current understanding is very vague: I tend to think of map as being synchronous and flatMap as being asynchronous, but I can't really get my head around it.
Here is an example:
files.flatMap { it ->
    Mono.just(Paths.get(UPLOAD_ROOT, it.filename()).toFile())
        .map { destFile ->
            destFile.createNewFile()
            destFile
        }
        .flatMap(it::transferTo)
}.then()
I have files (a Flux<FilePart>) and I want to copy them to some UPLOAD_ROOT on the server.
This example is taken from a book.
I can change all the .map to .flatMap and vice versa and everything still works. I wonder what the difference is.
map is for synchronous, non-blocking, 1-to-1 transformations
flatMap is for asynchronous (non-blocking) 1-to-N transformations
The difference is visible in the method signature:
map takes a Function<T, U> and returns a Flux<U>
flatMap takes a Function<T, Publisher<V>> and returns a Flux<V>
That's the major hint: you can pass a Function<T, Publisher<V>> to a map, but it wouldn't know what to do with the Publishers, and that would result in a Flux<Publisher<V>>, a sequence of inert publishers.
On the other hand, flatMap expects a Publisher<V> for each T. It knows what to do with it: subscribe to it and propagate its elements in the output sequence. As a result, the return type is Flux<V>: flatMap will flatten each inner Publisher<V> into the output sequence of all the Vs.
About the 1-N aspect:
for each <T> input element, flatMap maps it to a Publisher<V>. In some cases (e.g. an HTTP request), that publisher will emit only one item, in which case we're pretty close to an async map.
But that's the degenerate case. The generic case is that a Publisher can emit multiple elements, and flatMap works just as well.
For an example, imagine you have a reactive database and you flatMap from a sequence of user IDs, with a request that returns a user's set of Badge. You end up with a single Flux<Badge> of all the badges of all these users.
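A small self-contained sketch of that 1-N behaviour, with plain strings standing in for the badges:
import reactor.core.publisher.Flux;

public class FlatMapOneToMany {
    public static void main(String[] args) {
        Flux<String> allBadges = Flux.just(1, 2, 3)
            // each user id maps to a Publisher that emits several badges
            .flatMap(userId -> Flux.just("gold-" + userId, "silver-" + userId));
        // prints all badges of all users, flattened into a single sequence
        allBadges.subscribe(System.out::println);
    }
}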
Is map really synchronous and non-blocking?
Yes: it is synchronous in the way the operator applies it (a simple method call, and then the operator emits the result) and non-blocking in the sense that the function itself shouldn't block the operator calling it. In other terms it shouldn't introduce latency. That's because a Flux is still asynchronous as a whole. If it blocks mid-sequence, it will impact the rest of the Flux processing, or even other Flux.
If your map function is blocking/introduces latency but cannot be converted to return a Publisher, consider publishOn/subscribeOn to offset that blocking work on a separate thread.
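As an illustration of that last point, a minimal sketch that offloads a blocking map onto a worker thread (boundedElastic is the Reactor scheduler usually recommended for blocking work; blockingLookup is a hypothetical helper):
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class BlockingMapExample {
    // stand-in for some blocking lookup
    static String blockingLookup(int id) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "value-" + id;
    }

    public static void main(String[] args) throws InterruptedException {
        Flux.range(1, 3)
            .publishOn(Schedulers.boundedElastic())   // operators below run on a worker thread
            .map(BlockingMapExample::blockingLookup)  // the blocking call no longer stalls the source thread
            .subscribe(System.out::println);
        Thread.sleep(1000); // crude wait so the async pipeline can finish in this demo
    }
}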
The flatMap method is similar to the map method, with the key difference that the function you provide to it should return a Mono<T> or Flux<T>.
Using the map method would result in a Mono<Mono<T>>
whereas using flatMap results in a Mono<T>.
For example, it is useful when you have to make a network call to retrieve data with a Java API that returns a Mono, and then another network call that needs the result of the first one.
// Signature of the HttpClient.get method
Mono<JsonObject> get(String url);
// The two urls to call
String firstUserUrl = "my-api/first-user";
String userDetailsUrl = "my-api/users/details/"; // needs the id at the end
// Example with map
Mono<Mono<JsonObject>> result = HttpClient.get(firstUserUrl).
map(user -> HttpClient.get(userDetailsUrl + user.getId()));
// This results with a Mono<Mono<...>> because HttpClient.get(...)
// returns a Mono
// Same example with flatMap
Mono<JsonObject> bestResult = HttpClient.get(firstUserUrl).
flatMap(user -> HttpClient.get(userDetailsUrl + user.getId()));
// Now the result has the type we expected
Also, it allows for handling errors precisely:
public class UserApi {
    private HttpClient httpClient;
    Mono<User> findUser(String username) {
        String queryUrl = "http://my-api-address/users/" + username;
        return Mono.fromCallable(() -> httpClient.get(queryUrl))
            .flatMap(response -> {
                if (response.statusCode == 404) return Mono.error(new NotFoundException("User " + username + " not found"));
                else if (response.statusCode == 500) return Mono.error(new InternalServerErrorException());
                else if (response.statusCode != 200) return Mono.error(new Exception("Unknown error calling my-api"));
                return Mono.just(response.data);
            });
    }
}
How map works internally in Reactor.
First, create a Player class:
@Data
@AllArgsConstructor
public class Player {
    String firstName;
    String lastName;
}
Now create some instances of the Player class:
Flux<Player> players = Flux.just(
        "Zahid Khan",
        "Arif Khan",
        "Obaid Sheikh")
    .map(fullname -> {
        String[] split = fullname.split("\\s");
        return new Player(split[0], split[1]);
    });

StepVerifier.create(players)
    .expectNext(new Player("Zahid", "Khan"))
    .expectNext(new Player("Arif", "Khan"))
    .expectNext(new Player("Obaid", "Sheikh"))
    .verifyComplete();
What’s important to understand about the map() is that the mapping is
performed synchronously, as each item is published by the source Flux.
If you want to perform the mapping asynchronously, you should consider
the flatMap() operation.
How flatMap works internally:
Flux<Player> players = Flux.just(
        "Zahid Khan",
        "Arif Khan",
        "Obaid Sheikh")
    .flatMap(fullname ->
        Mono.just(fullname)
            .map(p -> {
                String[] split = p.split("\\s");
                return new Player(split[0], split[1]);
            })
            .subscribeOn(Schedulers.parallel()));

List<Player> playerList = Arrays.asList(
    new Player("Zahid", "Khan"),
    new Player("Arif", "Khan"),
    new Player("Obaid", "Sheikh"));

StepVerifier.create(players)
    .expectNextMatches(playerList::contains)
    .expectNextMatches(playerList::contains)
    .expectNextMatches(playerList::contains)
    .verifyComplete();
Internally in the flatMap(), a map() operation is applied to the Mono to transform the String into a Player. Furthermore, subscribeOn() indicates that each subscription should take place on a parallel thread. In the absence of subscribeOn(), flatMap() behaves synchronously.
map is for synchronous, non-blocking, one-to-one transformations, while flatMap is for asynchronous (non-blocking) one-to-many transformations.
