I have the following code that I want to translate to Java 8 streams:
public ReleaseResult releaseReources() {
    List<String> releasedNames = new ArrayList<>();
    Stream<SomeResource> stream = this.someResources();
    Iterator<SomeResource> it = stream.iterator();

    while (it.hasNext() && releasedNames.size() < MAX_TO_RELEASE) {
        SomeResource resource = it.next();
        if (!resource.isTaken()) {
            resource.release();
            releasedNames.add(resource.getName());
        }
    }

    return new ReleaseResult(releasedNames, it.hasNext(), MAX_TO_RELEASE);
}
The method someResources() returns a Stream<SomeResource>, and the ReleaseResult class is as follows:
public class ReleaseResult {
    private int releasedCount;
    private List<String> releasedNames;
    private boolean hasMoreItems;
    private int releaseLimit;

    public ReleaseResult(List<String> releasedNames,
            boolean hasMoreItems, int releaseLimit) {
        this.releasedNames = releasedNames;
        this.releasedCount = releasedNames.size();
        this.hasMoreItems = hasMoreItems;
        this.releaseLimit = releaseLimit;
    }

    // getters & setters
}
My attempt so far:
public ReleaseResult releaseReources() {
    List<String> releasedNames = this.someResources()
            .filter(resource -> !resource.isTaken())
            .limit(MAX_TO_RELEASE)
            .peek(SomeResource::release)
            .map(SomeResource::getName)
            .collect(Collectors.toList());
    return new ReleaseResult(releasedNames, ???, MAX_TO_RELEASE);
}
The problem is that I can't find a way to know if there are pending resources to process. I've thought of using releasedNames.size() == MAX_TO_RELEASE, but this doesn't take into account the case where the stream of resources has exactly MAX_TO_RELEASE elements.
Is there a way to do the same with Java 8 streams?
Note: I'm not looking for answers like "you don't have to do everything with streams" or "using loops and iterators is fine". I'm OK if using an iterator and a loop is the only way or just the best way. It's just that I'd like to know if there's a non-murky way to do the same.
Since you don’t want to hear that you don’t need streams for everything and that loops and iterators are fine, let’s demonstrate the point with a clean solution that doesn’t rely on peek:
public ReleaseResult releaseReources() {
    return this.someResources()
        .filter(resource -> !resource.isTaken())
        .limit(MAX_TO_RELEASE + 1)
        .collect(
            () -> new ReleaseResult(new ArrayList<>(), false, MAX_TO_RELEASE),
            (result, resource) -> {
                List<String> names = result.getReleasedNames();
                if (names.size() == MAX_TO_RELEASE) {
                    result.setHasMoreItems(true);
                } else {
                    resource.release();
                    names.add(resource.getName());
                }
            },
            (r1, r2) -> {
                List<String> names = r1.getReleasedNames();
                names.addAll(r2.getReleasedNames());
                if (names.size() > MAX_TO_RELEASE) {
                    r1.setHasMoreItems(true);
                    names.remove(MAX_TO_RELEASE);
                }
            }
        );
}
This assumes that // getters & setters includes getters and setters for all non-final fields of ReleaseResult, and that getReleasedNames() returns the list by reference. Otherwise you would have to provide a specialized Collector with non-public access to ReleaseResult (implementing another builder type or temporary storage would be an unnecessary complication; ReleaseResult looks like it was designed exactly for this use case).
We could conclude that for any nontrivial loop code that doesn’t fit into the stream’s intrinsic operations, you can find a collector solution that basically does the same as the loop in its accumulator function, but suffers from the requirement of always having to provide a combiner function. Ok, in this case we can prepend a filter(…).limit(…) so it’s not that bad…
I just noticed: if you ever dare to use this with a parallel stream, you need a way to reverse the effect of releasing the last element in the combiner, in case the combined size exceeds MAX_TO_RELEASE. Generally, limits and parallel processing never play well together.
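A sketch of what that reversal could look like, assuming a hypothetical SomeResource.unrelease() and a ReleaseResult variant that keeps the released resources themselves (getReleasedResources() is invented here for illustration):

// hypothetical combiner for the parallel case; unrelease() and
// getReleasedResources() do not exist in the question's classes
(r1, r2) -> {
    List<SomeResource> released = r1.getReleasedResources();
    released.addAll(r2.getReleasedResources());
    while (released.size() > MAX_TO_RELEASE) {
        SomeResource extra = released.remove(released.size() - 1);
        extra.unrelease(); // undo the side effect performed by the accumulator
        r1.setHasMoreItems(true);
    }
}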
I don't think there's a nice way to do this. I've found a hack that does it lazily. What you can do is convert the Stream to an Iterator, convert the Iterator back to another Stream, do the Stream operations, then finally test the Iterator for a next element!
Iterator<SomeResource> it = this.someResources().iterator();
List<String> list = StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false)
        .filter(resource -> !resource.isTaken())
        .limit(MAX_TO_RELEASE)
        .peek(SomeResource::release)
        .map(SomeResource::getName)
        .collect(Collectors.toList());
return new ReleaseResult(list, it.hasNext(), MAX_TO_RELEASE);
The only thing I can think of is
// a List, rather than a Stream, is required here
List<SomeResource> list = someResources().collect(Collectors.toList());

List<Integer> indices = IntStream.range(0, list.size())
        .filter(i -> !list.get(i).isTaken())
        .limit(MAX_TO_RELEASE)
        .boxed() // IntStream has no collect(Collector) overload, so box first
        .collect(Collectors.toList());

List<String> names = indices.stream()
        .map(list::get)
        .peek(SomeResource::release)
        .map(SomeResource::getName)
        .collect(Collectors.toList());
Then (I think) there are unprocessed elements if
names.size() == MAX_TO_RELEASE
&& (indices.isEmpty() || indices.get(indices.size() - 1) < list.size() - 1)
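Putting it together, a sketch that feeds this condition into the ReleaseResult constructor from the question:

boolean hasMore = names.size() == MAX_TO_RELEASE
        && (indices.isEmpty() || indices.get(indices.size() - 1) < list.size() - 1);
return new ReleaseResult(names, hasMore, MAX_TO_RELEASE);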
I'd like to turn what I'm doing into a lambda: I loop through one list (listRegistrationTypeWork), check whether the child list (getRegistrationTypeWorkAuthors) is != null, and if it is, loop through it looking for an authorCoauthor equal to type and increment a count, to find out how many records within the lists have this same type.
public int qtyMaximumWorksByAuthorCoauthor(AuthorCoauthor type) {
    int count = 0;
    for (RegistrationTypeWork tab : listRegistrationTypeWork) {
        if (CollectionUtils.isNotEmpty(tab.getRegistrationTypeWorkAuthors())) {
            for (RegistrationTypeWorkAuthors author : tab.getRegistrationTypeWorkAuthors()) {
                if (author.getAuthorCoauthor().equals(type)) {
                    count++;
                }
            }
        }
    }
    return count;
}
Your statement is not entirely clear on what transforming to a lambda expression would mean, but I am assuming you would like to turn your imperative looping into a functional, stream- and lambda-based one.
This should be straightforward using:
filter to filter out the unwanted values from both of your collections
flatMap to flatten all inner collections into a single stream so that you can operate your count on it as a single source
public int qtyMaximumWorksByAuthorCoauthor(AuthorCoauthor type) {
    return (int) listRegistrationTypeWork.stream()
            .filter(tab -> tab.getRegistrationTypeWorkAuthors() != null)
            .flatMap(tab -> tab.getRegistrationTypeWorkAuthors().stream())
            .filter(author -> type.equals(author.getAuthorCoauthor()))
            .count(); // count() returns a long, hence the cast to match the int signature
}
In addition to Thomas' fine comment, I think you would want to write your stream something like this.
long count = listRegistrationTypeWork.stream()
        // map all RegistrationTypeWork into Optionals of their lists of
        // RegistrationTypeWorkAuthors, so that lists that are actually null are handled safely
        .map(registrationTypeWork -> Optional.ofNullable(registrationTypeWork.getRegistrationTypeWorkAuthors()))
        // this removes all empty Optionals from the stream
        .flatMap(Optional::stream)
        // this turns the stream of lists of RegistrationTypeWorkAuthors
        // into a stream of plain RegistrationTypeWorkAuthors
        .flatMap(Collection::stream)
        // this filters out RegistrationTypeWorkAuthors which are of a different type
        .filter(registrationTypeWorkAuthors -> type.equals(registrationTypeWorkAuthors.getAuthorCoauthor()))
        .count();
// count() returns a long, so you either need to return a long in your method
// signature or cast the long to an integer
return (int) count;
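One caveat about the target version: Optional::stream only exists since Java 9. On Java 8, a sketch of the same step (collapsing the two flatMap calls into one) could look like this:

long count = listRegistrationTypeWork.stream()
        .map(work -> Optional.ofNullable(work.getRegistrationTypeWorkAuthors()))
        // Java 8 stand-in for .flatMap(Optional::stream).flatMap(Collection::stream)
        .flatMap(optional -> optional.map(Collection::stream).orElseGet(Stream::empty))
        .filter(author -> type.equals(author.getAuthorCoauthor()))
        .count();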
I am trying to do the following operations on a Flux/Publisher which can only be read once (think database results which can be read once). But this question is generic enough that it can be answered in a functional programming context without Reactor knowledge:
1. count unique items
2. check if an element exists
3. don't call the publisher/flux generator multiple times
distinctAndHasElement(4, Flux.just(1,2,3,3,4,4,5));
Mono<Pair<Long, Boolean>> distinctAndHasElement(int toCheck, Flux<Integer> intsFlux) {
    // code that doesn't work, due to the use of a non-final local variable
    boolean found = false;
    return intsFlux
            .map(x -> {
                if (toCheck == x) {
                    found = true;
                }
                return x;
            })
            .distinct()
            .count()
            .map(x -> Pair.of(x, found));
}
We just need the ability to fan out into 2 functions that operate on the same type/domain, and zip the final result.
The following doesn't work due to constraint #3:
Flux<Integer> distinct = intsFlux.distinct();
Mono<Boolean> found = distinct.hasElement(toCheck);
Mono<Long> count = distinct.count();
return Mono.zip(count, found);
What you're attempting to do is a reduction of your dataset. It means that you attempt to create a single result by merging your initial elements.
Note that count can be considered a kind of reduction, and in your case you want an advanced kind of count operation that also checks if at least one of the input elements is equal to a given value.
With Reactor (and many other stream frameworks), you can use the reduce operator.
Let's try your first example with it:
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.util.function.Tuple2;
import reactor.util.function.Tuples;

public class CountAndCheck {

    static Mono<Tuple2<Long, Boolean>> distinctAndHasElement(int toCheck, Flux<Integer> intsFlux) {
        return intsFlux
                .distinct()
                .reduce(Tuples.of(0L, false), (intermediateResult, nextElement) -> {
                    return Tuples.of(intermediateResult.getT1() + 1L,
                            intermediateResult.getT2() || toCheck == nextElement);
                });
    }

    public static void main(String[] args) {
        System.out.println(distinctAndHasElement(2, Flux.just(1, 2, 2, 3, 4, 4)).block());
    }
}
The above program prints: [4,true]
Note: You can use the scan operator instead of reduction, to get a flux of every intermediate step in the reduction operation. It can be useful to understand how reduction is performed.
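For illustration, a sketch of the same pipeline with scan, reusing toCheck and intsFlux from above; scan first emits the seed, then the running (count, found) pair after each element:

intsFlux.distinct()
        .scan(Tuples.of(0L, false), (acc, next) ->
                Tuples.of(acc.getT1() + 1L, acc.getT2() || toCheck == next))
        .subscribe(System.out::println); // prints every intermediate pair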
You can broadcast your Flux as described in the documentation.
Flux<Integer> distinct = intsFlux.distinct().publish().autoConnect(2);
Mono<Boolean> found = distinct.hasElement(toCheck);
Mono<Long> count = distinct.count();
return Mono.zip(count, found);
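A note on the design, assuming standard Reactor semantics: publish() turns the flux into a ConnectableFlux, and autoConnect(2) defers the subscription to the source until two subscribers are present; Mono.zip subscribes both hasElement and count, so the once-readable source is consumed exactly once. Wrapped in the question's distinctAndHasElement signature, a quick check might look like:

System.out.println(distinctAndHasElement(4, Flux.just(1, 2, 3, 3, 4, 4, 5)).block()); // [5,true]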
I have a complicated requirement where a list of records has comments in it. We have reporting functionality where each and every change should be logged and reported; hence, as per our design, we create a whole new record even if a single field has been updated.
Now we want to get the history of comments (reverse-sorted by timestamp) stored in our db. After running the query I got the list of comments, but it contains duplicate entries because some other field was changed. It also contains null entries.
I wrote the following code to remove duplicate and null entries:
List<Comment> toRet = new ArrayList<>();
dbCommentHistory.forEach(ele -> {
    // directly copy if toRet is empty
    if (!toRet.isEmpty()) {
        int lastIndex = toRet.size() - 1;
        Comment lastAppended = toRet.get(lastIndex);

        // if comment is null, don't proceed
        if (ele.getComment() == null) {
            return;
        }

        // remove if we have the same comment as last time
        if (StringUtils.compare(ele.getComment(), lastAppended.getComment()) == 0) {
            toRet.remove(lastIndex);
        }
    }

    // add element to new list
    toRet.add(ele);
});
This logic works fine and has been tested, but I want to convert this code to use lambdas, streams and other Java 8 features.
You can use the following snippet:
Collection<Comment> result = dbCommentHistory.stream()
        .filter(c -> c.getComment() != null)
        .collect(Collectors.toMap(Comment::getComment, Function.identity(),
                (first, second) -> second, LinkedHashMap::new))
        .values();
If you need a List instead of a Collection you can use new ArrayList<>(result).
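The copy can also be folded into the pipeline with Collectors.collectingAndThen, a sketch using the same toMap collector:

List<Comment> result = dbCommentHistory.stream()
        .filter(c -> c.getComment() != null)
        .collect(Collectors.collectingAndThen(
                Collectors.toMap(Comment::getComment, Function.identity(),
                        (first, second) -> second, LinkedHashMap::new),
                map -> new ArrayList<>(map.values())));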
If you have implemented the equals() method in your Comment class like the following
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    return Objects.equals(comment, ((Comment) o).comment);
}
you can just use this snippet:
List<Comment> result = dbCommentHistory.stream()
        .filter(c -> c.getComment() != null)
        .distinct()
        .collect(Collectors.toList());
But this would keep the first comment, not the last.
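If you want the last occurrence with distinct() instead, one sketch (relying on the same equals() as above) is to reverse the list before and after:

List<Comment> reversed = new ArrayList<>(dbCommentHistory);
Collections.reverse(reversed);
List<Comment> result = reversed.stream()
        .filter(c -> c.getComment() != null)
        .distinct() // now keeps the last occurrence of each comment
        .collect(Collectors.toList());
Collections.reverse(result); // restore the original order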
If I'm understanding the logic in the question's code, you want to remove consecutive repeated comments, but keep duplicates if there is some different comment in between in the input list.
In this case simply using .distinct() (once equals and hashCode have been properly defined) won't work as intended, as non-consecutive duplicates would be eliminated as well.
The more "streamy" solution here is a custom Collector that removes only the consecutive duplicates as it folds elements into the accumulator.
static final Collector<Comment, Deque<Comment>, List<Comment>> COMMENT_COLLECTOR = Collector.of(
        ArrayDeque::new, // supplier
        (deque, comment) -> { // accumulator: skip consecutive duplicates
            if (deque.isEmpty()
                    || !Objects.equals(deque.peekLast().getComment(), comment.getComment())) {
                deque.addLast(comment);
            }
        },
        (d1, d2) -> { // combiner: discard d2's first element if identical to d1's last
            if (d1.isEmpty()) {
                return d2;
            }
            if (!d2.isEmpty()
                    && Objects.equals(d1.peekLast().getComment(), d2.peekFirst().getComment())) {
                d2.pollFirst();
            }
            d1.addAll(d2);
            return d1;
        },
        ArrayList::new); // finisher: the accumulator is a Deque, so copy it into a List
Notice that Deque (in java.util) offers convenient operations to access the first and last elements of a collection, but it does not implement List; that is why the collector above uses a finisher to copy the result into a List. ArrayDeque is the plain array-based implementation (what ArrayList is to List).
By default the collector receives the elements in input stream order, so this works. I know it is not much less code, but it is as good as it gets. If you define a static comparison method for Comment that handles null elements and comments gracefully, you can make it a bit more compact:
static boolean sameComment(final Comment a, final Comment b) {
    if (a == b) {
        return true;
    } else if (a == null || b == null) {
        return false;
    } else {
        return Objects.equals(a.getComment(), b.getComment());
    }
}
static final Collector<Comment, Deque<Comment>, List<Comment>> COMMENT_COLLECTOR = Collector.of(
        ArrayDeque::new, // supplier
        (deque, comment) -> { // accumulator
            if (!sameComment(deque.peekLast(), comment)) {
                deque.addLast(comment);
            }
        },
        (d1, d2) -> { // combiner: discard d2's first element if identical to d1's last
            if (d1.isEmpty()) {
                return d2;
            }
            if (sameComment(d1.peekLast(), d2.peekFirst())) {
                d2.pollFirst();
            }
            d1.addAll(d2);
            return d1;
        },
        ArrayList::new); // finisher
----------
Perhaps you would prefer to declare a proper (named) class that implements the Collector to make it clearer and avoid defining lambdas for each Collector action, or at least implement the lambdas passed to Collector.of as static methods to improve readability.
Now the code to do the actual work is rather trivial:
List<Comment> unique = dbCommentHistory.stream()
.collect(COMMENT_COLLECTOR);
That is it. However, it may become a bit more involved if you want to handle null Comment (element) instances; the code above already handles the comment string being null by considering it equal to another null string. To skip null elements, filter them out first:
List<Comment> unique = dbCommentHistory.stream()
        .filter(Objects::nonNull)
        .collect(COMMENT_COLLECTOR);
Your code can be simplified a bit. Notice that this solution does not use stream/lambdas but it seems to be the most succinct option:
List<Comment> toRet = new ArrayList<>(dbCommentHistory.size());
Comment last = null;
for (final Comment ele : dbCommentHistory) {
    if (ele != null && (last == null || !Objects.equals(last.getComment(), ele.getComment()))) {
        toRet.add(last = ele);
    }
}
The outcome is not exactly the same as the question's code, as in the latter null elements might be added to toRet, but it seems to me that you actually may want to remove them completely instead. It is easy to modify the code (making it a bit longer) to get the same output, though.
If you insist on using .forEach, that would not be that difficult either; in that case last would need to be calculated at the beginning of the lambda. Here you may want to use an ArrayDeque so that you can conveniently use peekLast:
Deque<Comment> toRet = new ArrayDeque<>(dbCommentHistory.size());
dbCommentHistory.forEach(ele -> {
    if (ele != null) {
        final Comment last = toRet.peekLast();
        if (last == null || !Objects.equals(last.getComment(), ele.getComment())) {
            toRet.addLast(ele);
        }
    }
});
I have a List of objects that look like this:
{
    value=500
    category="GROCERY"
},
{
    value=300
    category="GROCERY"
},
{
    value=100
    category="FUEL"
},
{
    value=300
    category="SMALL APPLIANCE REPAIR"
},
{
    value=200
    category="FUEL"
}
I would like to transform that into a List of objects that looks like this:
{
    value=800
    category="GROCERY"
},
{
    value=300
    category="FUEL"
},
{
    value=300
    category="SMALL APPLIANCE REPAIR"
}
Basically add up all the values with the same category.
Should I be using flatMap? Reduce? I don't understand the nuances of these to figure it out.
Help?
EDIT:
There are close duplicates of this question:
Is there an aggregateBy method in the stream Java 8 api?
and
Sum attribute of object with Stream API
But in both cases, the end result is a map, not a list
The final solution I used, based on the answers by @AndrewTobilko and @JBNizet, was:

List<MyClass> myClassList = list.stream()
        .collect(Collectors.groupingBy(MyClass::getCategory,
                Collectors.summingInt(MyClass::getValue)))
        .entrySet().stream()
        .map(e -> new MyClass(e.getKey(), e.getValue()))
        .collect(Collectors.toList());
The Collectors class provides a groupingBy collector that allows you to perform a 'group by' operation on a stream (similar to GROUP BY in databases). Under the assumption that your list elements are of type MyObjects, the following code should work:
Map<String, Integer> valueByCategory = myObjects.stream()
        .collect(Collectors.groupingBy(MyObjects::getCategory,
                Collectors.summingInt(MyObjects::getValue)));
The code basically groups your stream by each category and runs a Collector on each group that sums up the return value of getValue() of every stream element.
See https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collectors.html
With static imports of the Collectors methods:
list.stream().collect(groupingBy(Class::getCategory, summingInt(Class::getValue)));
You will get a Map<String, Integer>. Class has to have getValue and getCategory methods so that you can write the method references, something like:
public class Class {
    private String category;
    private int value;

    public String getCategory() { return category; }
    public int getValue() { return value; }
}
Reduce-based method:
List<Obj> values = list.stream()
        .collect(Collectors.groupingBy(Obj::getCategory,
                Collectors.reducing((a, b) -> new Obj(a.getValue() + b.getValue(), a.getCategory()))))
        .values().stream()
        .map(Optional::get)
        .collect(Collectors.toList());
The downside is the secondary stream() call to unwrap the Optional<Obj> results, and the intermediate Map<String, Optional<Obj>> object.
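A sketch that avoids the Optional entirely, using Collectors.toMap with a merge function (assuming the same Obj(value, category) constructor used above):

List<Obj> values = new ArrayList<>(list.stream()
        .collect(Collectors.toMap(Obj::getCategory, Function.identity(),
                (a, b) -> new Obj(a.getValue() + b.getValue(), a.getCategory())))
        .values());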
I can suggest an alternative variant (less readable) using sorting:
List<Obj> values2 = list.stream()
        .sorted((o1, o2) -> o1.getCategory().compareTo(o2.getCategory()))
        .collect(
                LinkedList<Obj>::new,
                (ll, obj) -> {
                    Obj last = null;
                    if (!ll.isEmpty()) {
                        last = ll.getLast();
                    }
                    if (last == null || !last.getCategory().equals(obj.getCategory())) {
                        ll.add(new Obj(obj.getValue(), obj.getCategory())); // deep copy here
                    } else {
                        last.setValue(last.getValue() + obj.getValue());
                    }
                },
                (list1, list2) -> {
                    // for parallel execution, do a simple merge join here
                    throw new RuntimeException("parallel evaluation not supported");
                }
        );
Here we sort the list of Objs by category and then process it sequentially, squashing consecutive objects from the same category.
Unfortunately, there is no method in Java to do this without manually keeping the last element or an element list (see also Collect successive pairs from a stream).
Working example with both snippets can be checked here: https://ideone.com/p3bKV8
With an Iterable<T>, it's easy:
T last = null;
for (T t : iterable) {
    if (last != null && last.compareTo(t) > 0) {
        return false;
    }
    last = t;
}
return true;
But I can't think of a clean way to do the same thing for a Stream<T> that avoids consuming all the elements when it doesn't have to.
There are several methods to iterate over the successive pairs of the stream. For example, you can check this question. Of course my favourite method is to use the library I wrote:
boolean unsorted = StreamEx.of(sourceStream)
        .pairMap((a, b) -> a.compareTo(b) > 0)
        .has(true);
It's a short-circuiting operation: it will finish as soon as it finds a misordered pair. It also works fine with parallel streams.
This is a sequential, state-holding solution:
IntStream stream = IntStream.of(3, 3, 5, 6, 6, 9, 10);
final AtomicInteger max = new AtomicInteger(Integer.MIN_VALUE);
boolean sorted = stream.allMatch(n -> n >= max.getAndSet(n));
Parallelizing it would require introducing ranges. The state, max, might be dealt with differently, but the above seems simplest.
You can grab the Stream's underlying spliterator and check if it has the SORTED characteristic. Since it's a terminal operation, you can't use the Stream afterwards (but you can create another one from this spliterator; see also Convert Iterable to Stream using Java 8 JDK).
For example:
Stream<Integer> st = Stream.of(1, 2, 3);
// false
boolean isSorted = st.spliterator().hasCharacteristics(Spliterator.SORTED);

Stream<Integer> st2 = Stream.of(1, 2, 3).sorted();
// true
boolean isSorted2 = st2.spliterator().hasCharacteristics(Spliterator.SORTED);
My example shows that the SORTED characteristic appears only if you get the Stream from a source that reports the SORTED characteristic, or if you call sorted() at some point in the pipeline.
One could argue that Stream.iterate(0, x -> x + 1); creates a SORTED stream, but there is no knowledge about the semantics of the function applied iteratively. The same applies to Stream.of(...).
If the pipeline is infinite, then this is the only way to know. If not, and the spliterator does not report this characteristic, you'd need to go through the elements and check whether they are in sorted order.
This is what you already did with your iterator approach, but then you need to consume some elements of the Stream (in the worst case, all of them). You can make the task parallelizable with some extra code; then it's up to you to decide whether it's worth it or not...
You could hijack a reduction operation to save the last value and compare it to the current value, throwing an exception if a pair isn't sorted:

list.stream().reduce((last, curr) -> {
    if (((Comparable) curr).compareTo(last) < 0) {
        // must be unchecked: a lambda here cannot throw a checked Exception
        throw new IllegalStateException("not sorted");
    }
    return curr;
});
EDIT: I forked another answer's example and replaced it with my code to show it only does the requisite number of checks.
http://ideone.com/ZMGnVW
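To turn the exception into a boolean result, a small wrapper along these lines would do (a sketch; isSorted is a made-up helper name):

static <T extends Comparable<T>> boolean isSorted(List<T> list) {
    try {
        list.stream().reduce((last, curr) -> {
            if (curr.compareTo(last) < 0) {
                throw new IllegalStateException("unsorted pair found");
            }
            return curr;
        });
        return true;
    } catch (IllegalStateException e) {
        return false;
    }
}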
You could use allMatch with a multi-line lambda, checking the current value against the previous one. You'll have to wrap the last value into an array, though, so the lambda can modify it.
// infinite stream with one pair of unsorted numbers
IntStream s = IntStream.iterate(0, x -> x != 1000 ? x + 2 : x - 1);

// terminates as soon as the first unsorted pair is found
int[] last = {Integer.MIN_VALUE};
boolean sorted = s.allMatch(x -> {
    boolean b = x >= last[0];
    last[0] = x;
    return b;
});
Alternatively, just get the iterator from the stream and use a simple loop.
A naive solution uses the stream's Iterator:
public static <T extends Comparable<T>> boolean isSorted(Stream<T> stream) {
    Iterator<T> i = stream.iterator();
    if (!i.hasNext()) return true;
    T current = i.next();
    while (i.hasNext()) {
        T next = i.next();
        if (current == null || current.compareTo(next) > 0) return false;
        current = next;
    }
    return true;
}
Edit: It would also be possible to use a spliterator to parallelize the task, but the gains would be questionable and the increase in complexity is probably not worth it.
I don't know how good it is, but I just got an idea: make a list out of your Stream, of Integers or Strings or anything.
I have written this for a List<String> listOfStream:
long countSorted = IntStream.range(1, listOfStream.size())
        .map(index -> {
            // >= 0 so that equal neighbours still count as in order
            if (listOfStream.get(index).compareTo(listOfStream.get(index - 1)) >= 0) {
                return 0;
            }
            return index;
        })
        .sum();
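With this, countSorted == 0 means no adjacent pair was out of order, i.e. the list is sorted:

boolean sorted = (countSorted == 0);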