Generating primes with LongStream and jOOλ leads to StackOverflowError - java

For educational purposes I want to create a stream of prime numbers using Java 8. Here's my approach. The number x is prime if it has no prime divisors not exceeding sqrt(x). So, assuming I already have a stream of primes, I can check this with the following predicate:
x -> Seq.seq(primes()).limitWhile(p -> p <= Math.sqrt(x)).allMatch(p -> x % p != 0)
Here I used the jOOλ library (0.9.10, if it matters) just for the limitWhile operation, which is absent from the standard Stream API. Now, knowing some previous prime prev, I can generate the next prime by iterating over the numbers until I find one that matches this predicate:
prev -> LongStream.iterate(prev + 1, i -> i + 1)
        .filter(x -> Seq.seq(primes()).limitWhile(p -> p <= Math.sqrt(x))
                .allMatch(p -> x % p != 0))
        .findFirst()
        .getAsLong()
Putting everything together I wrote the following primes() method:
public static LongStream primes() {
    return LongStream.iterate(2L,
            prev -> LongStream.iterate(prev + 1, i -> i + 1)
                    .filter(x -> Seq.seq(primes())
                            .limitWhile(p -> p <= Math.sqrt(x))
                            .allMatch(p -> x % p != 0))
                    .findFirst()
                    .getAsLong());
}
Now to launch this I use:
primes().forEach(System.out::println);
Unfortunately it fails with unpleasant StackOverflowError which looks like this:
Exception in thread "main" java.lang.StackOverflowError
at java.util.stream.ReferencePipeline$StatelessOp.opIsStateful(ReferencePipeline.java:624)
at java.util.stream.AbstractPipeline.<init>(AbstractPipeline.java:211)
at java.util.stream.ReferencePipeline.<init>(ReferencePipeline.java:94)
at java.util.stream.ReferencePipeline$StatelessOp.<init>(ReferencePipeline.java:618)
at java.util.stream.LongPipeline$3.<init>(LongPipeline.java:225)
at java.util.stream.LongPipeline.mapToObj(LongPipeline.java:224)
at java.util.stream.LongPipeline.boxed(LongPipeline.java:201)
at org.jooq.lambda.Seq.seq(Seq.java:2481)
at Primes.lambda$2(Primes.java:13)
at Primes$$Lambda$4/1555009629.test(Unknown Source)
at java.util.stream.LongPipeline$8$1.accept(LongPipeline.java:324)
at java.util.Spliterators$LongIteratorSpliterator.tryAdvance(Spliterators.java:2009)
at java.util.stream.LongPipeline.forEachWithCancel(LongPipeline.java:160)
at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:529)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:516)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:502)
at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.LongPipeline.findFirst(LongPipeline.java:474)
at Primes.lambda$0(Primes.java:14)
at Primes$$Lambda$1/918221580.applyAsLong(Unknown Source)
at java.util.stream.LongStream$1.nextLong(LongStream.java:747)
at java.util.Spliterators$LongIteratorSpliterator.tryAdvance(Spliterators.java:2009)
...
You might think that I deserve what I get: I called primes() recursively inside the primes() method itself. However, let's just change the method return type to Stream<Long> and use Stream.iterate instead, leaving everything else as is:
public static Stream<Long> primes() {
    return Stream.iterate(2L,
            prev -> LongStream.iterate(prev + 1, i -> i + 1)
                    .filter(x -> Seq.seq(primes())
                            .limitWhile(p -> p <= Math.sqrt(x))
                            .allMatch(p -> x % p != 0))
                    .findFirst()
                    .getAsLong());
}
Now it works like a charm! Not very fast, but in a couple of minutes I get prime numbers exceeding 1000000 without any exceptions. The result is correct, which can be checked against a table of primes:
System.out.println(primes().skip(9999).findFirst());
// prints Optional[104729] which is actually 10000th prime.
So the question is: what's wrong with the first LongStream-based version? Is it a jOOλ bug, a JDK bug, or am I doing something wrong?
Note that I'm not interested in alternative ways to generate primes, I want to know what's wrong with this specific code.

It seems that LongStream and Stream behave differently when streams are produced by iterate. The following code illustrates the distinction:
LongStream.iterate(1, i -> {
    System.out.println("LongStream incrementing " + i);
    return i + 1;
}).limit(1).count();

Stream.iterate(1L, i -> {
    System.out.println("Stream incrementing " + i);
    return i + 1;
}).limit(1).count();
The output is
LongStream incrementing 1
So LongStream calls the function even if only the first element is needed, while Stream does not. This explains the exception you are getting.
I don't know if this should be called a bug. The Javadoc doesn't specify this behavior one way or the other, although it would be nice if it were consistent.
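The same difference can be observed without reading console output by counting the calls. This is a sketch (the class name IterateLaziness is made up); note that the primitive result of 1 is the Java 8 behavior — later JDKs reworked LongStream.iterate and may report 0:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.LongStream;
import java.util.stream.Stream;

public class IterateLaziness {
    // How many times the iteration function runs when only the seed of a
    // boxed Stream.iterate is consumed. The boxed version is lazy: 0.
    static int boxedCalls() {
        AtomicInteger calls = new AtomicInteger();
        Stream.iterate(1L, i -> { calls.incrementAndGet(); return i + 1; })
              .limit(1)
              .count();
        return calls.get();
    }

    // Same probe for the primitive LongStream.iterate. On Java 8 this
    // returns 1: the next value is computed before the seed is handed out.
    static int primitiveCalls() {
        AtomicInteger calls = new AtomicInteger();
        LongStream.iterate(1L, i -> { calls.incrementAndGet(); return i + 1; })
                  .limit(1)
                  .count();
        return calls.get();
    }

    public static void main(String[] args) {
        System.out.println("boxed calls: " + boxedCalls());         // 0
        System.out.println("primitive calls: " + primitiveCalls()); // 1 on Java 8
    }
}
```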
One way to fix it is to hardcode the initial sequence of primes:
public static LongStream primes() {
    return LongStream.iterate(2L,
            prev -> prev == 2 ? 3 :
                    prev == 3 ? 5 :
                    LongStream.iterate(prev + 1, i -> i + 1)
                            .filter(x -> Seq.seq(primes())
                                    .limitWhile(p -> p <= Math.sqrt(x))
                                    .allMatch(p -> x % p != 0))
                            .findFirst()
                            .getAsLong());
}

You can reproduce this difference in much simpler ways. Consider the following two versions of (equally inefficient) recursive long-enumeration streams, which can be called as follows to produce the sequence 1-5:
longs().limit(5).forEach(System.out::println);
This will cause the same StackOverflowError:
public static LongStream longs() {
    return LongStream.iterate(1L, i ->
            1L + longs().skip(i - 1L)
                        .findFirst()
                        .getAsLong());
}
This will work:
public static Stream<Long> longs() {
    return Stream.iterate(1L, i ->
            1L + longs().skip(i - 1L)
                        .findFirst()
                        .get());
}
The reason
The boxed Stream.iterate() implementation is optimised as follows:
final Iterator<T> iterator = new Iterator<T>() {
    @SuppressWarnings("unchecked")
    T t = (T) Streams.NONE;

    @Override
    public boolean hasNext() {
        return true;
    }

    @Override
    public T next() {
        return t = (t == Streams.NONE) ? seed : f.apply(t);
    }
};
unlike the LongStream.iterate() version:
final PrimitiveIterator.OfLong iterator = new PrimitiveIterator.OfLong() {
    long t = seed;

    @Override
    public boolean hasNext() {
        return true;
    }

    @Override
    public long nextLong() {
        long v = t;
        t = f.applyAsLong(t);
        return v;
    }
};
Notice how the boxed iterator calls the function only after the seed has been returned, whereas the primitive iterator caches the next value prior to returning the seed.
This means that when you use a recursive iteration function with the primitive iterator, the first value in the stream can never be produced, because the next value is fetched prematurely.
This can probably be reported as a JDK bug, and it also explains Misha's observation.

Related

Is there a better way to terminate iterator.forEachRemaining

I wrote the following code
List<Integer> ll = new ArrayList<>();
numbers.forEach(n1 -> {
    numbers.iterator().forEachRemaining((Consumer<? super Integer>) n2 -> {
        numbers.iterator().forEachRemaining((Consumer<? super Integer>) n3 -> {
            if (n1 + n2 + n3 == 1234)
                ll.addAll(Arrays.asList(n1, n2, n3));
            throw new RuntimeException("elements found");
        });
    });
});
I try to find 3 elements in an array which build a sum of 1234. Is there a better way to terminate the last forEachRemaining? Is there maybe a better solution with the Stream API, without using three for loops (i, j, k)?
Edit: since I got much feedback: this code is only for educational purposes (to better understand streams and iterators). It is not the way to solve the problem (find three elements in an array that build the sum of 1234). I assumed that forEachRemaining would prevent duplicate sums of elements in the array; I was wrong, lesson learned.
If you must solve it using streams (which I consider reasonable only for educational purposes), you have to stream over indices, not elements. Otherwise duplicate elements leak into a false result, as I showed in a comment under the question.
public static void main(String[] args) {
    List<Integer> numbers = new ArrayList<>();
    numbers.add(34);
    numbers.add(600);
    numbers.add(600);
    int[] result = IntStream.range(0, numbers.size()).boxed()
            .flatMap(first -> IntStream.range(0, numbers.size()).filter(second -> second != first).boxed()
                    .flatMap(second -> IntStream.range(0, numbers.size()).filter(third -> third != second && third != first).boxed()
                            .map(third -> new int[] {numbers.get(first), numbers.get(second), numbers.get(third)})
                            .filter(arr -> IntStream.of(arr).sum() == 1234)))
            .findFirst()
            .orElse(null);
    System.out.println(Arrays.toString(result));
}
Everybody who advises you to use a plain old for loop (no matter what SO rule that violates) is right. Just use it. Streams are a powerful concept, but for different kinds of tasks than yours.
Do it like
List<Integer> doYourThingWith(List<Integer> numbers) {
    for (Integer n1 : numbers) {
        for (Integer n2 : numbers) {
            for (Integer n3 : numbers) {
                if (yourConditionIsTrue) return Arrays.asList(n1, n2, n3);
            }
        }
    }
    return null;
}
...
List<Integer> result = doYourThingWith(yourSetOfNumbers);
You can do it like this:
List<Integer> ll = numbers.stream()
        .flatMap(a -> numbers.stream()
                .flatMap(b -> numbers.stream()
                        .filter(c -> a + b + c == 1234)
                        .map(c -> Arrays.asList(a, b, c))))
        .findFirst()
        .orElse(Collections.emptyList());

find largest item in list that exceeds a constant value

Given a list of prices, I want to find the index of the largest price that exceeds a certain minimum. My current solution looks like this:
public class Price {
    public static Integer maxPriceIndex(List<Integer> prices, Integer minPrice) {
        OptionalInt maxPriceIndexResult = IntStream.range(0, prices.size())
                .reduce((a, b) -> prices.get(a) > prices.get(b) ? a : b);
        if (maxPriceIndexResult.isPresent()) {
            int maxPriceIndex = maxPriceIndexResult.getAsInt();
            int maxFuturePrice = prices.get(maxPriceIndex);
            if (maxFuturePrice > minPrice) {
                return maxPriceIndex;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Integer> prices = Arrays.asList(5, 3, 2);
        Integer result = maxPriceIndex(prices, 6);
        System.out.println("Final result: " + result);
    }
}
I don't like this mix of imperative and functional code, but can't figure out a way of changing the reducer so that it also compares the price with minPrice. Is there a purely functional solution to this problem?
You can do the filter before finding the max.
IntStream.range(0, prices.size())
        .filter(i -> prices.get(i) > minPrice)
        .reduce((a, b) -> prices.get(a) > prices.get(b) ? a : b);
Apart from filtering the stream as you process it, you can take the max with a custom comparator instead of using reduce:
return IntStream.range(0, prices.size())
        .filter(i -> prices.get(i) > minPrice)
        .boxed()
        .max(Comparator.comparingInt(prices::get))
        .orElse(null);
Judging from all the answers you got, it's easy to make poorly performing implementations by accident. Not one of the other answers is as fast as the code you originally wrote. (@MikeFHay's is pretty good, though.)
Maybe just do:
int index = IntStream.range(0, prices.size())
        .reduce((a, b) -> prices.get(a) > prices.get(b) ? a : b)
        .orElse(-1);
return (index >= 0 && prices.get(index) > minPrice) ? index : null;
Optionals and Streams are handy to have around, but their use is not mandatory, and you don't have to jump through hoops to use them.
What you really want here is an OptionalInt.filter or OptionalInt.boxed, but Java doesn't provide them.
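On Java 9 and later, OptionalInt.stream() can stand in for the missing filter/boxed: it bridges into an IntStream, which has both. A sketch (the wrapper class MaxPriceIndex is a made-up name; the approach assumes Java 9+):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.IntStream;

public class MaxPriceIndex {
    // Same semantics as the original: find the index of the global max,
    // then keep it only if that price exceeds minPrice.
    static Integer maxPriceIndex(List<Integer> prices, int minPrice) {
        return IntStream.range(0, prices.size())
                .reduce((a, b) -> prices.get(a) > prices.get(b) ? a : b)
                .stream()                       // OptionalInt -> IntStream (Java 9+)
                .filter(i -> prices.get(i) > minPrice)
                .boxed()
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        List<Integer> prices = Arrays.asList(5, 3, 2);
        System.out.println(maxPriceIndex(prices, 6)); // null: max price 5 <= 6
        System.out.println(maxPriceIndex(prices, 4)); // 0: prices.get(0) == 5 > 4
    }
}
```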
First filter out all values not greater than minPrice, then sort them in reverse order; next, take the first (largest) element with findFirst and map it to its index, or return null if the list is empty:
return list.stream()
        .filter(i -> i > minPrice)
        .sorted(Comparator.reverseOrder())
        .findFirst()
        .map(v -> list.indexOf(v))
        .orElse(null);
If you want the last index of the max element, you can use the lastIndexOf method:
.map(v -> list.lastIndexOf(v))

Generate infinite parallel stream

Problem
Hi, I have a function that returns an infinite stream of results generated in parallel (yes, parallel is much faster in this case). So obviously (or not) I used
Stream<Something> stream = Stream.generate(this::myGenerator).parallel()
It works, however... it doesn't when I want to limit the result (everything is fine when the stream is sequential). I mean, it creates results when I do something like
stream.peek(System.out::println).limit(2).collect(Collectors.toList())
but even when the peek output shows more than 10 elements, collect is still not finalized (generation is slow, so those 10 can take even a minute)... and that is the easy example. Actually, limiting the results is a future requirement; the main expectation is to keep producing only better-than-recent results until the user kills the process (another case is to return the first result I can, throwing an exception if nothing else helps [findFirst didn't work, even when I had more elements on the console and no more results for about 30 seconds]).
So, the question is...
how to cope with that? My idea was also to use RxJava, and there is a second question: how to achieve a similar result with that tool (or another).
Code sample
public Stream<Solution> generateSolutions() {
    final Solution initialSolution = initialSolutionMaker.findSolution();
    return Stream.concat(
            Stream.of(initialSolution),
            Stream.generate(continuousSolutionMaker::findSolution)
    ).parallel();
}

new Solver(instance).generateSolutions()
        .map(Solution::getPurpose)
        .peek(System.out::println)
        .limit(5)
        .collect(Collectors.toList());
The implementation of findSolution is not important. It has some side effects, like adding to a solutions repository (a synchronized singleton, etc.), but nothing more.
As explained in the already linked answer, the key to an efficient parallel stream is to use a stream source that already has an intrinsic size, instead of using an unsized or even infinite stream and applying a limit to it. Injecting a size doesn't work with the current implementation at all, while ensuring that a known size doesn't get lost is much easier. Even if the exact size can't be retained, as when applying a filter, the size is still carried along as an estimated size.
So instead of
Stream.generate(this::myGenerator).parallel()
      .peek(System.out::println)
      .limit(2)
      .collect(Collectors.toList())
just use
IntStream.range(0, /* limit */ 2).unordered().parallel()
         .mapToObj(unused -> this.myGenerator())
         .peek(System.out::println)
         .collect(Collectors.toList())
Or, closer to your sample code
public Stream<Solution> generateSolutions(int limit) {
    final Solution initialSolution = initialSolutionMaker.findSolution();
    return Stream.concat(
            Stream.of(initialSolution),
            IntStream.range(1, limit).unordered().parallel()
                     .mapToObj(unused -> continuousSolutionMaker.findSolution())
    );
}

new Solver(instance).generateSolutions(5)
        .map(Solution::getPurpose)
        .peek(System.out::println)
        .collect(Collectors.toList());
Unfortunately this is expected behavior. As I remember, I've seen at least two topics on this matter; here is one of them.
The idea is that Stream.generate creates an unordered infinite stream, and limit will not introduce the SIZED flag. Because of this, when you spawn parallel execution on that stream, individual tasks have to sync their execution to see whether they have reached that limit; by the time that sync happens, multiple elements may already have been processed. For example, this:
Stream.iterate(0, x -> x + 1)
        .peek(System.out::println)
        .parallel()
        .limit(2)
        .collect(Collectors.toList());
and this:
IntStream.of(1, 2, 3, 4)
        .peek(System.out::println)
        .parallel()
        .limit(2)
        .boxed()
        .collect(Collectors.toList());
will always generate two elements in the List (Collectors.toList) and will always output two elements also (via peek).
On the other hand this:
Stream<Integer> stream = Stream.generate(new Random()::nextInt).parallel();
List<Integer> list = stream
        .peek(x -> {
            System.out.println("Before " + x);
        })
        .map(x -> {
            System.out.println("Mapping x " + x);
            return x;
        })
        .peek(x -> {
            System.out.println("After " + x);
        })
        .limit(2)
        .collect(Collectors.toList());
will generate two elements in the List, but it may process many more that are later discarded by the limit. This is what you are actually seeing in your example.
The only sane way of doing that (as far as I can tell) would be to create a custom Spliterator. I have not written many of them, but here is my attempt:
static class LimitingSpliterator<T> implements Spliterator<T> {
    private int limit;
    private final Supplier<T> generator;

    private LimitingSpliterator(Supplier<T> generator, int limit) {
        if (limit <= 0) {
            throw new IllegalArgumentException("limit must be positive");
        }
        this.limit = limit;
        this.generator = Objects.requireNonNull(generator);
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> consumer) {
        if (limit == 0) {
            return false;
        }
        T nextElement = generator.get();
        --limit;
        consumer.accept(nextElement);
        return true;
    }

    @Override
    public LimitingSpliterator<T> trySplit() {
        if (limit <= 1) {
            return null;
        }
        int half = limit >> 1;
        limit = limit - half;
        return new LimitingSpliterator<>(generator, half);
    }

    @Override
    public long estimateSize() {
        // SIZED promises an exact size, so report the remaining limit.
        return limit;
    }

    @Override
    public int characteristics() {
        return SIZED;
    }
}
And the usage would be:
StreamSupport.stream(new LimitingSpliterator<>(new Random()::nextInt, 7), true)
             .peek(System.out::println)
             .collect(Collectors.toList());

Java predicate - match against first predicate [duplicate]

I've just started playing with Java 8 lambdas, and I'm trying to implement some of the things that I'm used to in functional languages.
For example, most functional languages have some kind of find function that operates on sequences or lists and returns the first element for which the predicate is true. The only way I can see to achieve this in Java 8 is:
lst.stream()
.filter(x -> x > 5)
.findFirst()
However this seems inefficient to me, as the filter will scan the whole list, at least to my understanding (which could be wrong). Is there a better way?
No, filter does not scan the whole stream. It's an intermediate operation, which returns a lazy stream (actually, all intermediate operations return lazy streams). To convince yourself, you can simply run the following test:
List<Integer> list = Arrays.asList(1, 10, 3, 7, 5);
int a = list.stream()
        .peek(num -> System.out.println("will filter " + num))
        .filter(x -> x > 5)
        .findFirst()
        .get();
System.out.println(a);
Which outputs:
will filter 1
will filter 10
10
You see that only the first two elements of the stream are actually processed.
So you can go with your approach which is perfectly fine.
However this seems inefficient to me, as the filter will scan the whole list
No it won't - it will "break" as soon as the first element satisfying the predicate is found. You can read more about laziness in the stream package javadoc, in particular (emphasis mine):
Many stream operations, such as filtering, mapping, or duplicate removal, can be implemented lazily, exposing opportunities for optimization. For example, "find the first String with three consecutive vowels" need not examine all the input strings. Stream operations are divided into intermediate (Stream-producing) operations and terminal (value- or side-effect-producing) operations. Intermediate operations are always lazy.
return dataSource.getParkingLots()
        .stream()
        .filter(parkingLot -> Objects.equals(parkingLot.getId(), id))
        .findFirst()
        .orElse(null);
I had to filter out only one object from a list of objects, so I used this. Hope it helps.
In addition to Alexis C's answer: if you are working with a list in which you are not sure whether the element you are searching for exists, use this.
Integer a = list.stream()
        .peek(num -> System.out.println("will filter " + num))
        .filter(x -> x > 5)
        .findFirst()
        .orElse(null);
Then you could simply check whether a is null.
Already answered by #AjaxLeung, but in comments and hard to find.
For check only
lst.stream()
.filter(x -> x > 5)
.findFirst()
.isPresent()
is simplified to
lst.stream()
.anyMatch(x -> x > 5)
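A quick self-contained check that the two forms agree (sample list borrowed from the earlier answer; AnyMatchDemo is a throwaway name):

```java
import java.util.Arrays;
import java.util.List;

public class AnyMatchDemo {
    public static void main(String[] args) {
        List<Integer> lst = Arrays.asList(1, 10, 3, 7, 5);

        // Both expressions answer "is there any element > 5?"
        boolean viaFindFirst = lst.stream()
                                  .filter(x -> x > 5)
                                  .findFirst()
                                  .isPresent();
        boolean viaAnyMatch = lst.stream().anyMatch(x -> x > 5);

        System.out.println(viaFindFirst + " " + viaAnyMatch); // true true
    }
}
```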
import org.junit.Test;

import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Stream is ~30 times slower for the same operation...
public class StreamPerfTest {
    int iterations = 100;
    List<Integer> list = Arrays.asList(1, 10, 3, 7, 5);

    // 55 ms
    @Test
    public void stream() {
        for (int i = 0; i < iterations; i++) {
            Optional<Integer> result = list.stream()
                    .filter(x -> x > 5)
                    .findFirst();
            System.out.println(result.orElse(null));
        }
    }

    // 2 ms
    @Test
    public void loop() {
        for (int i = 0; i < iterations; i++) {
            Integer result = null;
            for (Integer walk : list) {
                if (walk > 5) {
                    result = walk;
                    break;
                }
            }
            System.out.println(result);
        }
    }
}
A generic utility function with looping seems a lot cleaner to me:
public static <T> T find(List<T> elements, Predicate<T> p) {
    for (T item : elements) if (p.test(item)) return item;
    return null;
}

public static <T> T find(T[] elements, Predicate<T> p) {
    for (T item : elements) if (p.test(item)) return item;
    return null;
}
In use:
List<Integer> intList = Arrays.asList(1, 2, 3, 4, 5);
Integer[] intArr = new Integer[]{1, 2, 3, 4, 5};
System.out.println(find(intList, i -> i % 2 == 0)); // 2
System.out.println(find(intArr, i -> i % 2 != 0)); // 1
System.out.println(find(intList, i -> i > 5)); // null
Improved one-liner answer: if you are looking for a boolean return value, we can do better by adding isPresent:
return dataSource.getParkingLots().stream().filter(parkingLot -> Objects.equals(parkingLot.getId(), id)).findFirst().isPresent();

How to use Java 8 streams to find all values preceding a larger value?

Use Case
Through some coding Katas posted at work, I stumbled on this problem that I'm not sure how to solve.
Using Java 8 Streams, given a list of positive integers, produce a
list of integers where the integer preceded a larger value.
[10, 1, 15, 30, 2, 6]
The above input would yield:
[1, 15, 2]
since 1 precedes 15, 15 precedes 30, and 2 precedes 6.
Non-Stream Solution
public List<Integer> findSmallPrecedingValues(final List<Integer> values) {
    List<Integer> result = new ArrayList<Integer>();
    for (int i = 0; i < values.size(); i++) {
        Integer next = (i + 1 < values.size() ? values.get(i + 1) : -1);
        Integer current = values.get(i);
        if (current < next) {
            result.add(current); // List has no push(); add() appends the element
        }
    }
    return result;
}
What I've Tried
The problem I have is I can't figure out how to access next in the lambda.
return values.stream().filter(v -> v < next).collect(Collectors.toList());
Question
Is it possible to retrieve the next value in a stream?
Should I be using map and mapping to a Pair in order to access next?
Using IntStream.range:
static List<Integer> findSmallPrecedingValues(List<Integer> values) {
    return IntStream.range(0, values.size() - 1)
            .filter(i -> values.get(i) < values.get(i + 1))
            .mapToObj(values::get)
            .collect(Collectors.toList());
}
It's certainly nicer than an imperative solution with a large loop, but still a bit meh as far as the goal of "using a stream" in an idiomatic way.
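Run against the question's sample input, the method produces the expected result (wrapped here in a throwaway class, PrecedingValues, purely for a quick check):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PrecedingValues {
    // Keep values.get(i) whenever the following element is larger.
    static List<Integer> findSmallPrecedingValues(List<Integer> values) {
        return IntStream.range(0, values.size() - 1)
                .filter(i -> values.get(i) < values.get(i + 1))
                .mapToObj(values::get)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(findSmallPrecedingValues(Arrays.asList(10, 1, 15, 30, 2, 6)));
        // [1, 15, 2]
    }
}
```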
Is it possible to retrieve the next value in a stream?
Nope, not really. The best cite I know of for that is in the java.util.stream package description:
The elements of a stream are only visited once during the life of a stream. Like an Iterator, a new stream must be generated to revisit the same elements of the source.
(Retrieving elements besides the current element being operated on would imply they could be visited more than once.)
We could also technically do it in a couple other ways:
Statefully (very meh).
Using a stream's iterator is technically still using the stream.
That's not pure Java 8, but recently I've published a small library called StreamEx which has a method exactly for this task:
// Find all numbers where the integer preceded a larger value.
Collection<Integer> numbers = Arrays.asList(10, 1, 15, 30, 2, 6);
List<Integer> res = StreamEx.of(numbers)
        .pairMap((a, b) -> a < b ? a : null)
        .nonNull()
        .toList();
assertEquals(Arrays.asList(1, 15, 2), res);
The pairMap operation is internally implemented using a custom spliterator. As a result, you get quite clean code which does not depend on whether the source is a List or anything else. Of course, it works fine with parallel streams as well.
I've committed a testcase for this task.
It's not a one-liner (it's a two-liner), but this works:
List<Integer> result = new ArrayList<>();
values.stream().reduce((a,b) -> {if (a < b) result.add(a); return b;});
Rather than solving it by "looking at the next element", this solves it by "looking at the previous element", which reduce() gives you for free. I have bent its intended usage by injecting a code fragment that populates the list based on a comparison of the previous and current elements, then returns the current element so that the next iteration will see it as its previous element.
Some test code:
List<Integer> result = new ArrayList<>();
IntStream.of(10, 1, 15, 30, 2, 6).reduce((a,b) -> {if (a < b) result.add(a); return b;});
System.out.println(result);
Output:
[1, 15, 2]
The accepted answer works fine whether the stream is sequential or parallel, but it can suffer if the underlying List is not random access, due to multiple calls to get.
If your stream is sequential, you might roll this collector:
public static Collector<Integer, ?, List<Integer>> collectPrecedingValues() {
    int[] holder = {Integer.MAX_VALUE};
    return Collector.of(ArrayList::new,
            (l, elem) -> {
                if (holder[0] < elem) l.add(holder[0]);
                holder[0] = elem;
            },
            (l1, l2) -> {
                throw new UnsupportedOperationException("Don't run in parallel");
            });
}
and a usage:
List<Integer> precedingValues = list.stream().collect(collectPrecedingValues());
Nevertheless, you could also implement a collector that works for both sequential and parallel streams. The only catch is that you need to apply a final transformation, but here you have control over the List implementation, so you won't suffer from get performance.
The idea is to first generate a list of pairs (each represented by an int[] array of size 2) containing the values of the stream sliced by a window of size two with a gap of one. When we need to merge two lists, we check for emptiness and fill the gap of the last element of the first list with the first element of the second list. Then we apply a final transformation to keep only the desired values and map them to the desired output.
It might not be as simple as the accepted answer, but well it can be an alternative solution.
public static Collector<Integer, ?, List<Integer>> collectPrecedingValues() {
    return Collectors.collectingAndThen(
            Collector.of(() -> new ArrayList<int[]>(),
                    (l, elem) -> {
                        if (l.isEmpty()) l.add(new int[]{Integer.MAX_VALUE, elem});
                        else l.add(new int[]{l.get(l.size() - 1)[1], elem});
                    },
                    (l1, l2) -> {
                        if (l1.isEmpty()) return l2;
                        if (l2.isEmpty()) return l1;
                        l2.get(0)[0] = l1.get(l1.size() - 1)[1];
                        l1.addAll(l2);
                        return l1;
                    }),
            l -> l.stream()
                    .filter(arr -> arr[0] < arr[1])
                    .map(arr -> arr[0])
                    .collect(Collectors.toList()));
}
You can then wrap these two collectors in a utility collector method, check whether the stream is parallel with isParallel, and then decide which collector to return.
If you're willing to use a third party library and don't need parallelism, then jOOλ offers SQL-style window functions as follows
System.out.println(
    Seq.of(10, 1, 15, 30, 2, 6)
       .window()
       .filter(w -> w.lead().isPresent() && w.value() < w.lead().get())
       .map(w -> w.value())
       .toList()
);
Yielding
[1, 15, 2]
The lead() function accesses the next value in traversal order from the window.
Disclaimer: I work for the company behind jOOλ
You can achieve that by using a bounded queue to store the elements flowing through the stream (this is based on the idea I described in detail here: Is it possible to get next element in the Stream?).
The example below first defines an instance of the BoundedQueue class, which stores the elements going through the stream (if you don't like the idea of extending LinkedList, see the link above for an alternative, more generic approach). Later you just examine the two subsequent elements, thanks to that helper class:
public class Kata {
    public static void main(String[] args) {
        List<Integer> input = new ArrayList<Integer>(asList(10, 1, 15, 30, 2, 6));

        class BoundedQueue<T> extends LinkedList<T> {
            public BoundedQueue<T> save(T curElem) {
                if (size() == 2) { // we need to know only two subsequent elements
                    pollLast();    // remove last to keep only requested number of elements
                }
                offerFirst(curElem);
                return this;
            }

            public T getPrevious() {
                return (size() < 2) ? null : getLast();
            }

            public T getCurrent() {
                return (size() == 0) ? null : getFirst();
            }
        }

        BoundedQueue<Integer> streamHistory = new BoundedQueue<Integer>();

        final List<Integer> answer = input.stream()
                .map(i -> streamHistory.save(i))
                .filter(e -> e.getPrevious() != null)
                .filter(e -> e.getCurrent() > e.getPrevious())
                .map(e -> e.getPrevious())
                .collect(Collectors.toList());

        answer.forEach(System.out::println);
    }
}
