Split a flux into two fluxes - head and tail - java

I want to split a flux into two fluxes, where the first one contains the first item of the original flux and the second one takes the rest of the items.
After applying a custom transformation myLogic on each flux, I want to combine them into one flux, preserving the order of the original flux.
Example:
S: student
S': student after applying myLogic
Emitted flux: s1 -> s2 -> s3 -> s4
The first split flux: s1' => myLogic
The second split flux: s2' -> s3' -> s4' => myLogic
The combined flux: s1' -> s2' -> s3' -> s4'

It is enough to use the standard Flux methods take and skip to separate the head and tail elements. Calling cache beforehand is also useful to avoid subscribing to the source twice.
import java.util.function.Function;
import reactor.core.publisher.Flux;

class Util {
    static <T, V> Flux<V> dualTransform(
            Flux<T> originalFlux,
            int cutpointIndex,
            Function<T, V> transformHead,
            Function<T, V> transformTail
    ) {
        // cache() replays the source to both downstream subscribers
        // (head and tail), so the upstream is only subscribed to once
        var cached = originalFlux.cache();
        var head = cached.take(cutpointIndex).map(transformHead);
        var tail = cached.skip(cutpointIndex).map(transformTail);
        return Flux.concat(head, tail);
    }

    static void test() {
        var sample = Flux.just("a", "b", "c", "d");
        var result = dualTransform(
                sample,
                1,
                x -> "{" + x.toUpperCase() + "}",
                x -> "(" + x + ")"
        );
        result.doOnNext(System.out::print).subscribe();
        // prints: {A}(b)(c)(d)
    }
}

There's a simpler solution to your problem. You don't need to split and merge the events from the publisher. You can make use of index(), which keeps track of the order in which events are published.
Flux<String> values = Flux.just("s1", "s2", "s3");
values.index((i, v) -> {
    // i is the 0-based index of the emitted element
    if (i == 0) {
        return v.toUpperCase();
    } else {
        return v.toLowerCase();
    }
}).subscribe(System.out::println); // prints: S1 s2 s3

Here's a hacky way to do this:
boolean[] seenFirst = new boolean[]{false}; // use an array because you cannot reassign non-final local variables inside lambdas
originalFlux
    .flatMap(item -> {
        if (!seenFirst[0]) {
            seenFirst[0] = true;
            return runLogicForFirst(item);
        } else {
            return runLogicForRest(item);
        }
    })

Instead of creating two separate Flux objects and then merging them, you can just zip your original Flux with another Flux<Boolean> that's only ever true on the first element.
You can then do your processing conditionally as you please in a normal map() call without having to merge separate publishers later on:
Flux<String> values = Flux.just("A", "B", "C", "D", "E", "F", "G");
Flux.zip(Flux.concat(Flux.just(true), Flux.just(false).repeat()), values)
.map(x -> x.getT1() ? "_"+x.getT2().toUpperCase()+"_" : x.getT2().toLowerCase())
.subscribe(System.out::print); // prints "_A_bcdefg"

Related

Java Stream reduce unexplained behaviour

Can anyone please point me in the right direction, as I cannot understand the issue?
I am executing the following method.
private static void reduce_parallelStream() {
    List<String> vals = Arrays.asList("a", "b");
    List<String> join = vals.parallelStream().reduce(new ArrayList<String>(),
            (List<String> l, String v) -> {
                l.add(v);
                return l;
            }, (a, b) -> {
                a.addAll(b);
                return a;
            }
    );
    System.out.println(join);
}
It prints
[null, a, null, a]
I cannot understand why it puts two nulls in the resultant list. I expected the answer to be
[a, b]
as it is a parallel stream, so the first parameter to reduce,
new ArrayList()
would probably be called twice, once for each input value a and b.
Then the accumulator function would probably be called twice as well, receiving each input ("a" and "b") along with one of the seeded lists, so that a is added to list 1 and b is added to list 2 (or vice versa). Afterwards the combiner would combine both lists, but that doesn't happen.
Interestingly, if I put a print statement inside my accumulator to print the value of the input, the output changes. So the following
private static void reduce_parallelStream() {
    List<String> vals = Arrays.asList("a", "b");
    List<String> join = vals.parallelStream().reduce(new ArrayList<String>(),
            (List<String> l, String v) -> {
                System.out.printf("l is %s", l);
                l.add(v);
                System.out.printf("l is %s", l);
                return l;
            }, (a, b) -> {
                a.addAll(b);
                return a;
            }
    );
    System.out.println(join);
}
results in this output
l is []l is [b]l is [b, a]l is [b, a][b, a, b, a]
Can anyone please explain.
You should be using Collections.synchronizedList() when working with parallelStream(), because ArrayList is not thread-safe and you get unexpected behavior when accessing it concurrently, as you are doing with parallelStream().
I have modified your code and now it's working correctly:
private static void reduce_parallelStream() {
    List<String> vals = Arrays.asList("a", "b");
    // use a synchronized list with parallelStream()
    List<String> join = vals.parallelStream().reduce(Collections.synchronizedList(new ArrayList<>()),
            (l, v) -> {
                l.add(v);
                return l;
            },
            (a, b) -> a // don't use addAll() here, or the output gets duplicated like [a, b, a, b]
    );
    System.out.println(join);
}
Output:
Sometimes you'll get this output:
[a, b]
And sometimes this one:
[b, a]
Reason for this is that it's a parallelStream() so you can't be sure about the order of execution.
as it is a parallel stream so the first parameter to reduce new ArrayList()
would probably be called twice for each input value a and b.
That's where you are wrong. The first parameter is a single ArrayList instance, not a lambda expression that could produce multiple ArrayList instances.
Therefore, the entire reduction operates on a single ArrayList instance. When multiple threads modify that ArrayList in parallel, the results may change on each execution.
Your combiner even adds all the elements of that list to the very same list.
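To see that sharing concretely, here is a small hedged sketch (not from the original answer; the printed interleaving varies by run, but the identity checks illustrate the point):
List<String> vals = Arrays.asList("a", "b");
List<String> identity = new ArrayList<>();
vals.parallelStream().reduce(identity,
        (l, v) -> {
            // every leaf task starts from the same identity instance
            System.out.println("accumulator sees identity: " + (l == identity));
            l.add(v);
            return l;
        },
        (a, b) -> {
            // both partial results are that same instance, so addAll duplicates
            System.out.println("combiner sees same list twice: " + (a == b));
            a.addAll(b);
            return a;
        });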
You can obtain the expected [a,b] output if both the accumulator and combiner functions will produce a new ArrayList instead of mutating their input ArrayList:
List<String> join = vals.parallelStream().reduce(
        new ArrayList<String>(),
        (List<String> l, String v) -> {
            List<String> cl = new ArrayList<>(l);
            cl.add(v);
            return cl;
        }, (a, b) -> {
            List<String> ca = new ArrayList<>(a);
            ca.addAll(b);
            return ca;
        }
);
That said, you shouldn't be using reduce at all. collect is the correct way to perform a mutable reduction:
List<String> join = vals.parallelStream()
        .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
As you can see, here, unlike in reduce, the first parameter you pass is a Supplier<ArrayList<String>>, which can be used to generate as many intermediate ArrayList instances as necessary.
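As a hedged illustration of that difference, one can count the supplier invocations (the AtomicInteger and the exact count are illustrative; the count depends on how the pipeline splits):
AtomicInteger created = new AtomicInteger();
List<String> out = vals.parallelStream().collect(
        () -> {
            created.incrementAndGet(); // one fresh list per leaf task
            return new ArrayList<String>();
        },
        ArrayList::add,
        ArrayList::addAll);
System.out.println(out + " built from " + created.get() + " intermediate list(s)");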
It is rather simple: the first argument is the identity, or I would say the "zero", to start with. For parallelStream usage this value is reused, which means concurrency problems (the null from an add) and duplicates.
This can be patched by:
final ArrayList<String> zero = new ArrayList<>();
List<String> join = vals.parallelStream().reduce(zero,
        (List<String> l, String v) -> {
            if (l == zero) {
                l = new ArrayList<>();
            }
            l.add(v);
            return l;
        }, (a, b) -> {
            // See comment of Holger:
            if (a == zero) return b;
            if (b == zero) return a;
            a.addAll(b);
            return a;
        }
);
Safe.
You might wonder why reduce has no overload that takes an identity-providing function. The reason is that collect should have been used here.
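For completeness, a minimal sketch of that collect-based alternative (assuming the same vals list as above); toList() handles the parallel case safely on its own:
List<String> join = vals.parallelStream().collect(Collectors.toList());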

What is the (kind of) inverse operation to Java's Stream.flatMap()?

The Stream.flatMap() operation transforms a stream of
a, b, c
into a stream that contains zero or more elements for each input element, e.g.
a1, a2, c1, c2, c3
Is there the opposite operations that batches up a few elements into one new one?
It is not reduce(), because that produces only one result.
It is not collect(), because that only fills a container (afaiu).
It is not forEach(), because that returns void and works through side effects.
Does it exist? Can I simulate it in any way?
Finally I figured out that flatMap is its own "inverse", so to speak. I had overlooked that flatMap does not necessarily increase the number of elements; it may also decrease the number of elements by emitting an empty stream for some of the elements. To implement a group-by operation, the function called by flatMap needs minimal internal state, namely the most recent element. It either returns an empty stream or, at the end of a group, the reduced-to group representative.
Here is a quick implementation, where groupBorder must return true if the two elements passed in do not belong to the same group, i.e. the group border lies between them. The combiner is the group function that combines, for example, (1,a), (1,a), (1,a) into (3,a), given that your group elements are tuples of (int, string).
public class GroupBy<X> implements Function<X, Stream<X>> {

    private final BiPredicate<X, X> groupBorder;
    private final BinaryOperator<X> combiner;
    private X latest = null;

    public GroupBy(BiPredicate<X, X> groupBorder,
                   BinaryOperator<X> combiner) {
        this.groupBorder = groupBorder;
        this.combiner = combiner;
    }

    @Override
    public Stream<X> apply(X elem) {
        // TODO: add test on end marker as additional parameter for constructor
        if (elem == null) {
            return latest == null ? Stream.empty() : Stream.of(latest);
        }
        if (latest == null) {
            latest = elem;
            return Stream.empty();
        }
        if (groupBorder.test(latest, elem)) {
            Stream<X> result = Stream.of(latest);
            latest = elem;
            return result;
        }
        latest = combiner.apply(latest, elem);
        return Stream.empty();
    }
}
There is one caveat though: to ship the last group of the whole stream, an end marker must be appended as the last element of the stream. The code above assumes it is null, but an additional end-marker tester could be added.
I could not come up with a solution that does not rely on the end marker.
Further, I did not convert between incoming and outgoing element types. For a unique operation this would just work; for a count operation, a previous step would have to map the individual elements to a counting object.
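To make the contract concrete, here is a hedged usage sketch (Java 9+ Map.entry stands in for the (int, string) tuples, the trailing null is the end marker described above, and all names are illustrative):
GroupBy<Map.Entry<Integer, String>> grouper = new GroupBy<>(
        (x, y) -> !x.getValue().equals(y.getValue()),                 // border: the value changes
        (x, y) -> Map.entry(x.getKey() + y.getKey(), x.getValue()));  // combine: sum the counts

Stream.of(Map.entry(1, "a"), Map.entry(1, "a"), Map.entry(1, "b"), null)
        .flatMap(grouper)
        .forEach(System.out::println);
// 2=a
// 1=b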
Take a look at collapse in StreamEx
StreamEx.of("a1", "a2", "c1", "c2", "c3").collapse((a, b) -> a.charAt(0) == b.charAt(0))
.map(e -> e.substring(0, 1)).forEach(System.out::println);
Or my fork with more function: groupBy, split, sliding...
StreamEx.of("a1", "a2", "c1", "c2", "c3").collapse((a, b) -> a.charAt(0) == b.charAt(0))
.map(e -> e.substring(0, 1)).forEach(System.out::println);
// a
// c
StreamEx.of("a1", "a2", "c1", "c2", "c3").splitToList(2).forEach(System.out::println);
// [a1, a2]
// [c1, c2]
// [c3]
StreamEx.of("a1", "a2", "c1", "c2", "c3").groupBy(e -> e.charAt(0))
.forEach(System.out::println);
// a=[a1, a2]
// c=[c1, c2, c3]
You can hack your way around it. See the following examples:
Stream<List<String>> stream = Stream.of("Cat", "Dog", "Whale", "Mouse")
        .collect(Collectors.collectingAndThen(
                Collectors.partitioningBy(a -> a.length() > 3),
                map -> Stream.of(map.get(true), map.get(false))
        ));
IntStream.range(0, 10)
        .mapToObj(n -> IntStream.of(n, n / 2, n / 3))
        .reduce(IntStream.empty(), IntStream::concat)
        .forEach(System.out::println);
As you can see, the elements are mapped to streams too, and then concatenated into one large stream.
This is what I came up with:
interface OptionalBinaryOperator<T> extends BiFunction<T, T, Optional<T>> {

    static <T> OptionalBinaryOperator<T> of(BinaryOperator<T> binaryOperator,
                                            BiPredicate<T, T> biPredicate) {
        return (t1, t2) -> biPredicate.test(t1, t2)
                ? Optional.of(binaryOperator.apply(t1, t2))
                : Optional.empty();
    }
}

class StreamUtils {

    public static <T> Stream<T> reducePartially(Stream<T> stream,
                                                OptionalBinaryOperator<T> conditionalAccumulator) {
        Stream.Builder<T> builder = Stream.builder();
        stream.reduce((t1, t2) -> conditionalAccumulator.apply(t1, t2).orElseGet(() -> {
            builder.add(t1);
            return t2;
        })).ifPresent(builder::add);
        return builder.build();
    }
}
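A hedged usage sketch of the above, merging runs that share a first character (the merge function and sample data are illustrative, not from the original answer):
Stream<String> merged = StreamUtils.reducePartially(
        Stream.of("a1", "a2", "c1", "c2", "c3"),
        OptionalBinaryOperator.of(
                (s1, s2) -> s1 + "+" + s2,                  // how to merge two neighbors
                (s1, s2) -> s1.charAt(0) == s2.charAt(0))); // when they belong together
merged.forEach(System.out::println);
// a1+a2
// c1+c2+c3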
Unfortunately, I did not have the time to make it lazy, but it can be done by writing a custom Spliterator delegating to stream.spliterator() that would follow the logic above (instead of utilizing stream.reduce(), which is a terminal operation).
PS. I just realized you wanted a <T,U> conversion, and I wrote about a <T,T> conversion. If you can first map from T to U and then use the function above, then that's it (even if it's suboptimal).
If it's something more complex, the kind of condition for reducing/merging would need to be defined before proposing an API (e.g. Predicate<T>, BiPredicate<T,T>, BiPredicate<U,T>, or maybe even Predicate<List<T>>).
A bit like StreamEx, you could implement the Spliterator manually. For example,
collectByTwos(Stream.of(1, 2, 3, 4), (x, y) -> String.format("%d%d", x, y))
... returns a stream of "12", "34" using the code below:
public static <X, Y> Stream<Y> collectByTwos(Stream<X> inStream, BiFunction<X, X, Y> mapping) {
    Spliterator<X> origSpliterator = inStream.spliterator();
    Iterator<X> origIterator = Spliterators.iterator(origSpliterator);
    boolean isParallel = inStream.isParallel();
    long newSizeEst = (origSpliterator.estimateSize() + 1) / 2;
    Spliterators.AbstractSpliterator<Y> lCombinedSpliterator =
            new Spliterators.AbstractSpliterator<>(newSizeEst, origSpliterator.characteristics()) {
                @Override
                public boolean tryAdvance(Consumer<? super Y> action) {
                    if (!origIterator.hasNext()) {
                        return false;
                    }
                    X lNext1 = origIterator.next();
                    if (!origIterator.hasNext()) {
                        throw new IllegalArgumentException("Trailing elements of the stream would be ignored.");
                    }
                    X lNext2 = origIterator.next();
                    action.accept(mapping.apply(lNext1, lNext2));
                    return true;
                }
            };
    return StreamSupport.stream(lCombinedSpliterator, isParallel)
            .onClose(inStream::close);
}
(I think this may likely be incorrect for parallel streams.)
Helped mostly by the StreamEx answer above by user_3380739, you can use groupRuns (docs here):
StreamEx.of("a1", "a2", "c1", "c2", "c3").groupRuns( t, u -> t.charAt(0) == u.charAt(0) )
.forEach(System.out::println);
// a=[a1, a2]
// c=[c1, c2, c3]

Count the same items in a row in Java 8 Stream API

I have a bean and a stream
public class TokenBag {
private String token;
private int count;
// Standard constructor and getters here
}
Stream<String> src = Stream.of("a", "a", "a", "b", "b", "a", "a");
and want to apply some intermediate operation to the stream that returns another stream of TokenBag objects. In this example there must be three: ("a", 3), ("b", 2) and ("a", 2).
Please think of it as a very simplistic example. In reality there will be much more complicated logic than just counting the same values in a row; actually I am trying to design a simple parser that accepts a stream of tokens and returns a stream of objects.
Also please note that it must stay a stream (with no intermediate accumulation), and that in this example it must really count the same values in a row (this differs from grouping).
Will appreciate your suggestions about the general approach to this task solution.
Map<String, Long> result = src.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
System.out.println(result);
This will give the following output:
{a=5, b=2}
You can then go ahead, iterate over the map, and create TokenBag objects.
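That last step might look like this sketch (assuming a TokenBag(String token, int count) constructor, which the bean above only hints at):
List<TokenBag> bags = result.entrySet().stream()
        .map(e -> new TokenBag(e.getKey(), e.getValue().intValue()))
        .collect(Collectors.toList());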
Stream<String> src = Stream.of("a", "a", "a", "a", "b", "b", "b");
// collect to map
Map<String, Long> counted = src
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
// collect to list
List<TokenBag> tokenBags = counted.entrySet().stream().map(m -> new TokenBag(m.getKey(), m.getValue().intValue()))
.collect(Collectors.toList());
First group it to a Map and then map the entries to a TokenBag:
Map<String, Long> values = src.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
List<TokenBag> tokenBags = values.entrySet().stream().map(entry -> {
    TokenBag tb = new TokenBag();
    tb.setToken(entry.getKey());
    tb.setCount(entry.getValue().intValue());
    return tb;
}).collect(Collectors.toList());
You'd need to convert your stream to a Spliterator and then adapt this spliterator into a custom one that partially reduces some elements according to your logic (in your example it would need to count equal elements until a different element appears). Then, you'd need to turn your spliterator back into a new stream.
Bear in mind that this can't be 100% lazy, as you'd need to eagerly consume some elements from the backing stream in order to create a new TokenBag element for the new stream.
Here's the code for the custom spliterator:
public class CountingSpliterator
        extends Spliterators.AbstractSpliterator<TokenBag>
        implements Consumer<String> {

    private final Spliterator<String> source;
    private String currentToken;
    private String previousToken;
    private int tokenCount = 0;
    private boolean tokenHasChanged;

    public CountingSpliterator(Spliterator<String> source) {
        super(source.estimateSize(), source.characteristics());
        this.source = source;
    }

    @Override
    public boolean tryAdvance(Consumer<? super TokenBag> action) {
        while (source.tryAdvance(this)) {
            if (tokenHasChanged) {
                action.accept(new TokenBag(previousToken, tokenCount));
                tokenCount = 1;
                return true;
            }
        }
        if (tokenCount > 0) {
            action.accept(new TokenBag(currentToken, tokenCount));
            tokenCount = 0;
            return true;
        }
        return false;
    }

    @Override
    public void accept(String newToken) {
        if (currentToken != null) {
            previousToken = currentToken;
        }
        currentToken = newToken;
        if (previousToken != null && !previousToken.equals(currentToken)) {
            tokenHasChanged = true;
        } else {
            tokenCount++;
            tokenHasChanged = false;
        }
    }
}
So this spliterator extends Spliterators.AbstractSpliterator and also implements Consumer. The code is quite complex, but the idea is that it adapts one or more tokens from the source spliterator into an instance of TokenBag.
For every accepted token from the source spliterator, the count for that token is incremented, until the token changes. At this point, a TokenBag instance is created with the token and the count and is immediately pushed to the Consumer<? super TokenBag> action parameter. Also, the counter is reset to 1. The logic in the accept method handles token changes, border cases, etc.
Here's how you should use this spliterator:
Stream<String> src = Stream.of("a", "a", "a", "b", "b", "a", "a");
Stream<TokenBag> stream = StreamSupport.stream(
new CountingSpliterator(src.spliterator()),
false); // false means sequential, we don't want parallel!
stream.forEach(System.out::println);
If you override toString() in TokenBag, the output is:
TokenBag{token='a', count=3}
TokenBag{token='b', count=2}
TokenBag{token='a', count=2}
A note on parallelism: I don't know how to parallelize this partial-reduce task, I even don't know if it's at all possible. But if it were, I doubt it would produce any measurable improvement.
Create a map and then collect the map into the list:
Stream<String> src = Stream.of("a", "a", "a", "a", "b", "b", "b");
Map<String, Long> m = src.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
List<TokenBag> tokenBags = m.entrySet().stream()
        .map(e -> new TokenBag(e.getKey(), e.getValue().intValue()))
        .collect(Collectors.toList());

RxJava: dynamically create Observables and send the final result as Observable

I am using RxJava, and I want to dynamically create a number of Observables based on some condition. Once I'm done with creating them, I want to do some processing on the different values returned by the observables and then emit a single Observable to which I can subscribe. Here is how my code looks:
List<String> valueList = ....
List<Observable<String>> listOfObservables = new ArrayList<Observable<String>>();
for (int i = 0; i < valueList.size(); i++) {
    listOfObservables.add(SomeClass.doOperation(valueList.get(i)));
    // SomeClass.doOperation will return an Observable<String>
}
return Observable.merge(listOfObservables);
But here, I want to do some operation on the values emitted by the different Observables in listOfObservables and finally return it as a single Observable<String>.
With Observable.zip(), I can do this like:
return Observable.zip(observable1, observable2, (string1, string2) -> {
    // joining final string here
    return string1 + string2;
});
But that works only because I know the number of arguments here. Please let me know how I can achieve this for a dynamic number of Observables.
Use the zip overload that takes a variable number of arguments; it has the signature
<R> Observable<R> zip(Iterable<? extends Observable<?>> ws,
                      FuncN<? extends R> zipFunction)
Example usage:
List<String> valueList = ....
return Observable.from(valueList)
        .map(string -> SomeClass.doOperationThatReturnsObservable(string))
        .toList()
        .flatMap(listOfObs -> Observable.zip(listOfObs, (Object[] results) -> {
            // do something with the strings in the array
            return Arrays.stream(results)
                    .map(Object::toString)
                    .collect(Collectors.joining(","));
        }));

Java 8 lambda get and remove element from list

Given a list of elements, I want to get the element with a given property and remove it from the list. The best solution I found is:
ProducerDTO p = producersProcedureActive
        .stream()
        .filter(producer -> producer.getPod().equals(pod))
        .findFirst()
        .get();
producersProcedureActive.remove(p);
Is it possible to combine get and remove in a lambda expression?
To remove an element from the list:
objectA.removeIf(x -> conditions);
e.g.:
objectA.removeIf(x -> blockedWorkerIds.contains(x));
List<String> str1 = new ArrayList<String>();
str1.add("A");
str1.add("B");
str1.add("C");
str1.add("D");
List<String> str2 = new ArrayList<String>();
str2.add("D");
str2.add("E");
str1.removeIf(x -> str2.contains(x));
str1.forEach(System.out::println);
OUTPUT:
A
B
C
Although the thread is quite old, I still thought to provide a solution, using Java 8.
Make use of the removeIf function. Time complexity is O(n).
producersProcedureActive.removeIf(producer -> producer.getPod().equals(pod));
API reference: removeIf docs
Assumption: producersProcedureActive is a List
NOTE: With this approach you won't be able to get hold of the deleted item.
Consider using vanilla java iterators to perform the task:
public static <T> T findAndRemoveFirst(Iterable<? extends T> collection, Predicate<? super T> test) {
    T value = null;
    for (Iterator<? extends T> it = collection.iterator(); it.hasNext();)
        if (test.test(value = it.next())) {
            it.remove();
            return value;
        }
    return null;
}
Advantages:
It is plain and obvious.
It traverses only once and only up to the matching element.
You can do it on any Iterable even without stream() support (at least those implementing remove() on their iterator).
Disadvantages:
You cannot do it in place as a single expression (auxiliary method or variable required)
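For illustration, a hedged usage sketch of the helper above (list contents are made up; the ArrayList wrapper matters because Arrays.asList alone does not support iterator removal):
List<String> names = new ArrayList<>(Arrays.asList("ann", "bob", "cid"));
String removed = findAndRemoveFirst(names, s -> s.startsWith("b"));
System.out.println(removed); // bob
System.out.println(names);   // [ann, cid]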
As for the
Is it possible to combine get and remove in a lambda expression?
other answers clearly show that it is possible, but you should be aware of
Search and removal may traverse the list twice.
A ConcurrentModificationException may be thrown when removing an element from the list being iterated.
The direct solution would be to invoke ifPresent(consumer) on the Optional returned by findFirst(). This consumer will be invoked when the optional is not empty. The benefit also is that it won't throw an exception if the find operation returned an empty optional, like your current code would do; instead, nothing will happen.
If you want to return the removed value, you can map the Optional to the result of calling remove:
producersProcedureActive.stream()
        .filter(producer -> producer.getPod().equals(pod))
        .findFirst()
        .map(p -> {
            producersProcedureActive.remove(p);
            return p;
        });
But note that the remove(Object) operation will again traverse the list to find the element to remove. If you have a list with random access, like an ArrayList, it would be better to make a Stream over the indexes of the list and find the first index matching the predicate:
IntStream.range(0, producersProcedureActive.size())
        .filter(i -> producersProcedureActive.get(i).getPod().equals(pod))
        .boxed()
        .findFirst()
        .map(i -> producersProcedureActive.remove((int) i));
With this solution, the remove(int) operation operates directly on the index.
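The (int) cast there matters: List has both remove(int index) and remove(Object o), and a boxed Integer selects the latter. A small sketch of the difference (sample data is illustrative):
List<Integer> xs = new ArrayList<>(Arrays.asList(10, 20, 30));
Integer i = 1;
xs.remove(i);           // remove(Object): looks for the element 1, which is absent
xs.remove((int) i);     // remove(int): removes the element at index 1
System.out.println(xs); // [10, 30]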
You can use filter of Java 8, and create another list if you don't want to change the old list:
List<ProducerDTO> result = producersProcedureActive
        .stream()
        .filter(producer -> producer.getPod().equals(pod))
        .collect(Collectors.toList());
I'm sure this will be an unpopular answer, but it works...
ProducerDTO[] p = new ProducerDTO[1];
producersProcedureActive
        .stream()
        .filter(producer -> producer.getPod().equals(pod))
        .findFirst()
        .ifPresent(producer -> { producersProcedureActive.remove(producer); p[0] = producer; });
p[0] will either hold the found element or be null.
The "trick" here is circumventing the "effectively final" problem by using an array reference that is effectively final, but setting its first element.
With Eclipse Collections you can use detectIndex along with remove(int) on any java.util.List.
List<Integer> integers = Lists.mutable.with(1, 2, 3, 4, 5);
int index = Iterate.detectIndex(integers, i -> i > 2);
if (index > -1) {
    integers.remove(index);
}
Assert.assertEquals(Lists.mutable.with(1, 2, 4, 5), integers);
If you use the MutableList type from Eclipse Collections, you can call the detectIndex method directly on the list.
MutableList<Integer> integers = Lists.mutable.with(1, 2, 3, 4, 5);
int index = integers.detectIndex(i -> i > 2);
if (index > -1) {
    integers.remove(index);
}
Assert.assertEquals(Lists.mutable.with(1, 2, 4, 5), integers);
Note: I am a committer for Eclipse Collections
The logic below is a solution that does not modify the original list:
List<String> str1 = new ArrayList<String>();
str1.add("A");
str1.add("B");
str1.add("C");
str1.add("D");
List<String> str2 = new ArrayList<String>();
str2.add("D");
str2.add("E");
List<String> str3 = str1.stream()
.filter(item -> !str2.contains(item))
.collect(Collectors.toList());
str1 // ["A", "B", "C", "D"]
str2 // ["D", "E"]
str3 // ["A", "B", "C"]
When you want to get multiple elements from a List into a new list (filtering with a predicate) and remove them from the existing list, I could not find a proper answer anywhere.
Here is how we can do it using Java Streaming API partitioning.
Map<Boolean, List<ProducerDTO>> classifiedElements = producersProcedureActive
        .stream()
        .collect(Collectors.partitioningBy(producer -> producer.getPod().equals(pod)));

// get two new lists
List<ProducerDTO> matching = classifiedElements.get(true);
List<ProducerDTO> nonMatching = classifiedElements.get(false);

// OR assign the non-matching elements back to the existing reference
producersProcedureActive = classifiedElements.get(false);
This way you effectively remove the filtered elements from the original list and add them to a new list.
Refer to the "5.2. Collectors.partitioningBy" section of this article.
As others have suggested, this might be a use case for loops and iterables. In my opinion, this is the simplest approach. If you want to modify the list in-place, it cannot be considered "real" functional programming anyway. But you could use Collectors.partitioningBy() in order to get a new list with elements which satisfy your condition, and a new list of those which don't. Of course with this approach, if you have multiple elements satisfying the condition, all of those will be in that list and not only the first.
The task is: get *and* remove an element from the list.
p.stream().collect(Collectors.collectingAndThen(
        Collector.of(
                ArrayDeque::new,
                (a, producer) -> {
                    if (producer.getPod().equals(pod))
                        a.addLast(producer);
                },
                (a1, a2) -> a1,
                rslt -> rslt.pollFirst() // the first match, or null
        ),
        e -> {
            if (e != null)
                p.remove(e); // remove
            return e;        // get
        }));
// assuming getTipoOcorrenciaRegistro() and getNome() return Strings, compare with equals() rather than ==
resumoRemessaPorInstrucoes.removeIf(item ->
        item.getTipoOcorrenciaRegistro().equals(TipoOcorrenciaRegistroRemessa.PEDIDO_PROTESTO.getNome()) ||
        item.getTipoOcorrenciaRegistro().equals(TipoOcorrenciaRegistroRemessa.SUSTAR_PROTESTO_BAIXAR_TITULO.getNome()));
Combining my initial idea and your answers, I reached what seems to be the solution to my own question:
public ProducerDTO findAndRemove(String pod) {
    ProducerDTO p = null;
    try {
        p = IntStream.range(0, producersProcedureActive.size())
                .filter(i -> producersProcedureActive.get(i).getPod().equals(pod))
                .boxed()
                .findFirst()
                .map(i -> producersProcedureActive.remove((int) i))
                .get();
        logger.debug(p);
    } catch (NoSuchElementException e) {
        logger.error("No producer found with POD [" + pod + "]");
    }
    return p;
}
It removes the object using remove(int), which does not traverse the list again (as suggested by @Tunaki), and it returns the removed object to the caller.
I read your answers suggesting that I choose safe methods like ifPresent instead of get, but I cannot find a way to use them in this scenario.
Are there any important drawbacks in this kind of solution?
Edit following @Holger's advice
This should be the function I needed
public ProducerDTO findAndRemove(String pod) {
    return IntStream.range(0, producersProcedureActive.size())
            .filter(i -> producersProcedureActive.get(i).getPod().equals(pod))
            .boxed()
            .findFirst()
            .map(i -> producersProcedureActive.remove((int) i))
            .orElseGet(() -> {
                logger.error("No producer found with POD [" + pod + "]");
                return null;
            });
}
A variation of the above:
import static java.util.function.Predicate.not;

final Optional<MyItem> myItem = originalCollection.stream().filter(myPredicate(someInfo)).findFirst();
final List<MyItem> myOtherItems = originalCollection.stream().filter(not(myPredicate(someInfo))).toList();

private Predicate<MyItem> myPredicate(Object someInfo) {
    return myItem -> myItem.someField() == someInfo;
}
