Can anyone please point me in the right direction, as I cannot understand the issue? I am executing the following method.
private static void reduce_parallelStream() {
    List<String> vals = Arrays.asList("a", "b");
    List<String> join = vals.parallelStream().reduce(new ArrayList<String>(),
            (List<String> l, String v) -> {
                l.add(v);
                return l;
            }, (a, b) -> {
                a.addAll(b);
                return a;
            }
    );
    System.out.println(join);
}
It prints
[null, a, null, a]
I cannot understand why it puts two nulls in the resulting list. I expected the answer to be
[a, b]
as it is a parallel stream, so the first parameter to reduce,
new ArrayList()
would probably be called twice, once for each input value a and b.
Then the accumulator function would probably be called twice, as it is a parallelStream, passing each input ("a" and "b") in each call along with the list provided by the seeded value. So a is added to list 1 and b is added to list 2 (or vice versa). Afterwards the combiner would combine both lists, but that doesn't happen.
Interestingly, if I put a print statement inside my accumulator to print the value of the input, the output changes. So the following
private static void reduce_parallelStream() {
    List<String> vals = Arrays.asList("a", "b");
    List<String> join = vals.parallelStream().reduce(new ArrayList<String>(),
            (List<String> l, String v) -> {
                System.out.printf("l is %s", l);
                l.add(v);
                System.out.printf("l is %s", l);
                return l;
            }, (a, b) -> {
                a.addAll(b);
                return a;
            }
    );
    System.out.println(join);
}
results in this output
l is []l is [b]l is [b, a]l is [b, a][b, a, b, a]
Can anyone please explain?
You should be using Collections.synchronizedList() when working with parallelStream(), because ArrayList is not thread-safe and you get unexpected behavior when accessing it concurrently, as you are doing with parallelStream().
I have modified your code, and now it works correctly:
private static void reduce_parallelStream() {
    List<String> vals = Arrays.asList("a", "b");
    // Use a synchronized list with parallelStream()
    List<String> join = vals.parallelStream().reduce(Collections.synchronizedList(new ArrayList<>()),
            (l, v) -> {
                l.add(v);
                return l;
            }, (a, b) -> a // don't use addAll() here, or the output gets duplicated like [a, b, a, b]
    );
    System.out.println(join);
}
Output:
Sometimes you'll get this output:
[a, b]
And sometimes this one:
[b, a]
The reason for this is that it's a parallelStream(), so you can't be sure about the order of execution.
as it is a parallel stream, so the first parameter to reduce, new ArrayList(),
would probably be called twice, once for each input value a and b.
That's where you are wrong. The first parameter is a single ArrayList instance, not a lambda expression that could produce multiple ArrayList instances.
Therefore, the entire reduction operates on a single ArrayList instance. When multiple threads modify that ArrayList in parallel, the results may change in each execution.
Your combiner actually adds all the elements of a List to the same List.
You can obtain the expected [a, b] output if both the accumulator and combiner functions produce a new ArrayList instead of mutating their input ArrayList:
List<String> join = vals.parallelStream().reduce(
        new ArrayList<String>(),
        (List<String> l, String v) -> {
            List<String> cl = new ArrayList<>(l);
            cl.add(v);
            return cl;
        }, (a, b) -> {
            List<String> ca = new ArrayList<>(a);
            ca.addAll(b);
            return ca;
        }
);
That said, you shouldn't be using reduce at all. collect is the correct way to perform a mutable reduction:
List<String> join = vals.parallelStream()
        .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
As you can see, here, unlike in reduce, the first parameter you pass is a Supplier<ArrayList<String>>, which can be used to generate as many intermediate ArrayList instances as necessary.
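For this particular case you don't even need the three-argument form; a minimal sketch using the built-in Collectors.toList(), which handles the parallel combining for you:
List<String> join = vals.parallelStream()
        .collect(Collectors.toList());
System.out.println(join); // [a, b] — encounter order is preserved even in parallel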
It is rather simple: the first argument is the identity, or I would say the zero to start with. For parallelStream usage this value is reused, which means concurrency problems (the null from an add) and duplicates.
This can be patched by:
final ArrayList<String> zero = new ArrayList<>();
List<String> join = vals.parallelStream().reduce(zero,
        (List<String> l, String v) -> {
            if (l == zero) {
                l = new ArrayList<>();
            }
            l.add(v);
            return l;
        }, (a, b) -> {
            // See comment of Holger:
            if (a == zero) return b;
            if (b == zero) return a;
            a.addAll(b);
            return a;
        }
);
Safe.
You might wonder why reduce has no overload taking an identity-providing function.
The reason is that collect should have been used here.
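For completeness, a sketch of what that looks like here, using the built-in Collectors.toCollection as the identity-providing function:
List<String> join = vals.parallelStream()
        .collect(Collectors.toCollection(ArrayList::new)); // the supplier provides a fresh list for each partial result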
Related
I have a list of unsorted strings, where entries are one of {A,B,C,D}:
List<String> strings = new ArrayList<>(Arrays.asList("A","C","B","D","D","A","B","C","A","D","B","D","A","C"));
I need to sort/group them in a custom order, taking one item at a time, to get a result like:
[A, B, C, D, A, B, C, D, A, B, C, D, A, D]
I am struggling to come up with an idea of how to do so. Any help?
I have tried to use a custom Comparator<String>, but I was not able to implement the logic that first A < second A and first D < second A.
I also tried Collectors.groupingBy:
Collection<List<String>> coll = strings.stream().collect(Collectors.groupingBy(s -> s)).values();
which groups same strings into groups.
[[A, A, A, A], [B, B, B], [C, C, C], [D, D, D, D]]
But I am not sure how to take one element at a time from the above lists until no elements are left. Does anyone have an approach for how to proceed here? I need a hint in the right direction.
Building a whole new list could lead to some other solutions, for example:
// assumes static imports of Function.identity(), Collectors.groupingBy() and Collectors.counting()
Map<String, Long> counts = strings.stream().collect(groupingBy(identity(), TreeMap::new, counting()));
List<String> ordered = new ArrayList<>();
while (!counts.isEmpty()) {
    for (Iterator<Map.Entry<String, Long>> it = counts.entrySet().iterator(); it.hasNext(); ) {
        Map.Entry<String, Long> entry = it.next();
        ordered.add(entry.getKey());
        long newCount = entry.getValue() - 1;
        if (newCount == 0) {
            it.remove();
        } else {
            entry.setValue(newCount);
        }
    }
}
With strings being the input list and ordered the output.
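For instance, with the sample input from the question (my trace, not part of the original answer), the counts map starts as {A=4, B=3, C=3, D=4} and each pass of the while loop appends one round of the remaining keys:
List<String> strings = Arrays.asList("A","C","B","D","D","A","B","C","A","D","B","D","A","C");
// passes 1-3 each append A, B, C, D; pass 4 appends A, D (B and C are exhausted)
// ordered: [A, B, C, D, A, B, C, D, A, B, C, D, A, D]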
Add a number prefix to each value, sort, and remove the prefix, with the limitation that the list size cannot be far bigger than the number prefix:
List<String> strings = new ArrayList<>(Arrays.asList("A","C","B","D","D","A","B","C","A","D","B","D","A","C"));
Map<String, Integer> m = new HashMap<>();
List<String> result = strings.stream()
        .map(i -> String.format("%dx%s", (100000 + m.merge(i, 1, (n, w) -> n + w)), i))
        .sorted()
        .map(i -> i.replaceFirst("^\\d+x", ""))
        .collect(Collectors.toList());
This is roughly the same logic as sp00m's answer, but implemented with two streams:
Map<String, Long> groups = strings.stream()
        .collect(Collectors.groupingBy(Function.identity(),
                TreeMap::new,
                Collectors.counting()));
List<String> result = IntStream.range(0, groups.values().stream()
                .mapToInt(Long::intValue).max().orElseThrow())
        .mapToObj(c -> groups.keySet().stream().filter(k -> groups.get(k) > c))
        .flatMap(Function.identity())
        .collect(Collectors.toList());
The sorting is taken care of by the TreeMap. Just be sure that your actual list elements are comparable (or that you give the right TreeMap supplier).
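If the elements are not naturally comparable, a minimal sketch of supplying the TreeMap with an explicit comparator (customOrder here is a placeholder for whatever ordering you need):
Comparator<String> customOrder = Comparator.naturalOrder(); // placeholder
Map<String, Long> groups = strings.stream()
        .collect(Collectors.groupingBy(Function.identity(),
                () -> new TreeMap<>(customOrder),
                Collectors.counting()));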
Well, here is another approach.
First, we could get a list of lists with all items, as you described in your question.
[
[A, A, A, A],
[B, B, B],
[C, C, C],
[D, D, D, D]
]
Collection<List<String>> chunks = strs.stream()
        .collect(Collectors.groupingBy(Function.identity(), TreeMap::new, Collectors.toList()))
        .values();
You could insert a custom Comparator by replacing TreeMap::new by () -> new TreeMap<>(comparator).
Then we could just use this to get all ABCD groups.
IntStream.iterate(0, i -> i + 1)
        .mapToObj(i -> chunks.stream()
                .map(sublist -> i < sublist.size() ? sublist.get(i) : null)
                .filter(Objects::nonNull)
                .toList())
        .takeWhile(list -> !list.isEmpty())
        .forEach(System.out::println);
What happens here is that we loop over each sublist and take the 1st element, then take each 2nd element, et cetera.
This becomes more readable if you put the code to get a certain index of a bunch of lists into a separate method:
public static <T> Stream<T> nthElement(Collection<? extends List<T>> list, int index) {
    return list.stream()
            .map(sublist -> index < sublist.size() ? sublist.get(index) : null)
            .filter(Objects::nonNull);
}

IntStream.iterate(0, i -> i + 1)
        .mapToObj(i -> nthElement(chunks, i).toList())
        .takeWhile(list -> !list.isEmpty())
        .forEach(System.out::println);
Not as elegant, but clearer as to what is going on: you can place an integer before each string indicating the number of times that string value has been encountered. Then just sort normally and use a regex to remove the integer values.
public static void main(String[] args) {
    List<String> strings = new ArrayList<>(Arrays.asList("A","C","B","D","D","A","B","C","A","D","B","D","A","C"));
    List<String> sortableString = stringTransform(strings);
    sortableString.stream().sorted().forEach(s -> System.err.print(s.replaceAll("[0-9]", "")));
}

private static List<String> stringTransform(List<String> stringList) {
    Map<String, Integer> stringMap = new HashMap<>();
    List<String> result = new ArrayList<>();
    for (String string : stringList) {
        Integer integer = stringMap.get(string);
        if (integer == null) {
            integer = 0;
        } else {
            integer++;
        }
        stringMap.put(string, integer);
        result.add(integer + string);
    }
    return result;
}
There is a List in which the elements are mutually exclusive according to certain conditions.
Now I need to split it into multiple Lists according to this mutual exclusion condition:
Mutually exclusive elements cannot appear in the same child List after partitioning.
The number of child Lists after segmentation should be minimized.
-------------For example----------------------
Original list: [A, B, C]
A and C are mutually exclusive, A and B are not mutually exclusive, and B and C are not mutually exclusive.
It can be divided into [A], [B, C] or [C], [A, B].
Do not split into [A], [B], [C], because then the total number of sublists is not the minimum.
Who can help me?
From what I understand, you want to partition a set of elements based on an arbitrary comparison between any two elements in the set. I don't think Java comes with that functionality out of the box. One way you can do it is the following:
public class Partition<T> {

    public List<Set<T>> partition(List<T> list, BiPredicate<T, T> partitionCondition) {
        List<Set<T>> partition = new ArrayList<>();
        while (!list.isEmpty()) {
            // get the first of the remaining elements on the original list
            T firstElement = list.remove(0);
            // add the first element into a subset;
            // no element of this subset may be mutually exclusive with firstElement
            Set<T> subset = new HashSet<>();
            subset.add(firstElement);
            // get all remaining elements which can reside in the same subset as firstElement
            List<T> notMutuallyExclusive = list.stream()
                    .filter(e -> !partitionCondition.test(firstElement, e))
                    .collect(Collectors.toList());
            // add them to the subset of firstElement
            subset.addAll(notMutuallyExclusive);
            // add the subset to the partition (list of subsets)
            partition.add(subset);
            // remove the added elements from the original list
            list.removeAll(notMutuallyExclusive);
        }
        return partition;
    }
}
You can test your scenario like this:
public class PartitionSmallTest {

    private BiPredicate<String, String> areMutuallyExclusive() {
        return (left, right) -> ("A".equals(left) && "C".equals(right)) || ("C".equals(left) && "A".equals(right));
    }

    @Test
    public void test() {
        List<String> list = new ArrayList<>(Arrays.asList("A", "B", "C"));
        List<Set<String>> expected = new ArrayList<>();
        expected.add(Set.of("A", "B"));
        expected.add(Set.of("C"));
        List<Set<String>> actual = new Partition<String>().partition(list, areMutuallyExclusive());
        Assert.assertEquals(expected, actual);
    }
}
I want to split a flux into two fluxes, where the first one has the first item of the original flux and the second one takes the rest of the items.
After applying a custom transformation myLogic on each flux I want to combine them into one flux preserving the order of the original flux.
Example:
S: student
S': student after applying myLogic
Emitted flux: s1 -> s2 -> s3 -> s4
The first split flux: s1' => myLogic
The second split flux: s2' -> s3' -> s4' => myLogic
The combined flux: s1' -> s2' -> s3' -> s4'
It is enough to use the standard Flux methods take and skip to separate head and tail elements. Calling cache before that is also useful to avoid subscription duplication.
class Util {
    static <T, V> Flux<V> dualTransform(
            Flux<T> originalFlux,
            int cutpointIndex,
            Function<T, V> transformHead,
            Function<T, V> transformTail
    ) {
        var cached = originalFlux.cache();
        var head = cached.take(cutpointIndex).map(transformHead);
        var tail = cached.skip(cutpointIndex).map(transformTail);
        return Flux.concat(head, tail);
    }

    static void test() {
        var sample = Flux.just("a", "b", "c", "d");
        var result = dualTransform(
                sample,
                1,
                x -> "{" + x.toUpperCase() + "}",
                x -> "(" + x + ")"
        );
        result.doOnNext(System.out::print).subscribe();
        // prints: {A}(b)(c)(d)
    }
}
There's a simpler solution to your problem. You don't need to split and merge the events from the publisher. You can make use of index(), which keeps track of the order in which events are published.
Flux<String> values = Flux.just("s1", "s2", "s3");
values.index((i, v) -> {
if (i == 0) {
return v.toUpperCase();
} else {
return v.toLowerCase();
}
});
Here's a hacky way to do this:
boolean[] seen = new boolean[]{false}; // use an array, as you cannot reassign local variables inside lambdas
originalFlux
        .flatMap(v -> {
            if (!seen[0]) {
                seen[0] = true;
                return runLogicForFirst(v);
            } else {
                return runLogicForRest(v);
            }
        })
Instead of creating two separate Flux objects and then merging them, you can just zip your original Flux with another Flux<Boolean> that's only ever true on the first element.
You can then do your processing conditionally as you please in a normal map() call without having to merge separate publishers later on:
Flux<String> values = Flux.just("A", "B", "C", "D", "E", "F", "G");
Flux.zip(Flux.concat(Flux.just(true), Flux.just(false).repeat()), values)
.map(x -> x.getT1() ? "_"+x.getT2().toUpperCase()+"_" : x.getT2().toLowerCase())
.subscribe(System.out::print); // prints "_A_bcdefg"
public static void main(String[] args) throws IOException {
    Set<String> set = new HashSet<>();
    set.add("{}");
    set.add("{a}");
    set.add("{b}");
    set.add("{a, b}");
    set.add("{a, c}");
    sortedSet(set);
}

public static void sortedSet(Set<String> set) {
    List<String> setList = new ArrayList<>(set);
    // sort in alphabetical order
    List<String> orderedByAlpha = setList.stream()
            .sorted((s1, s2) -> s1.compareToIgnoreCase(s2))
            .collect(Collectors.toList());
    System.out.println(orderedByAlpha);
}
I am trying to sort alphabetically but the output I get is this :
[{a, b}, {a, c}, {a}, {b}, {}]
but it should be:
[{a}, {a, b}, {a, c}, {b}, {}]
Your output doesn't match your code. You are showing 2D array lists, but you're converting to a 1D ArrayList, which doesn't make sense.
public static void main(String[] args) {
    test(Arrays.asList("a", "d", "f", "a", "b"));
}

static void test(List<String> setList) {
    List<String> out = setList.stream().sorted((a, b) -> a.compareToIgnoreCase(b)).collect(Collectors.toList());
    System.out.println(out);
}
This is properly sorting 1D arrays, so you're correct there.
You'll probably need to implement your own comparator to compare the 2D array lists to sort them.
Instead of having the source as a List<String>, I'd recommend you have it as a List<Set<String>>, e.g.
List<Set<String>> setList = new ArrayList<>();
setList.add(new HashSet<>(Arrays.asList("a","b")));
setList.add(new HashSet<>(Arrays.asList("a","c")));
setList.add(new HashSet<>(Collections.singletonList("a")));
setList.add(new HashSet<>(Collections.singletonList("b")));
setList.add(new HashSet<>());
Then apply the following comparator along with the mapping operation to yield the expected result:
List<String> result = setList.stream()
        .sorted(Comparator.comparing((Function<Set<String>, Boolean>) Set::isEmpty)
                .thenComparing(s -> String.join("", s),
                        String.CASE_INSENSITIVE_ORDER))
        .map(Object::toString)
        .collect(Collectors.toList());
and this prints:
[[a], [a, b], [a, c], [b], []]
Note that currently the result is a list of strings, where each string is the string representation of a given set. If, however, you want the result to be a List<Set<String>>, then simply remove the map operation above.
Edit:
Managed to get a solution working based on your initial idea....
So, first, you need a completely new comparator, as just (s1, s2) -> s1.compareToIgnoreCase(s2) will not suffice.
Given the input:
Set<String> set = new HashSet<>();
set.add("{}");
set.add("{a}");
set.add("{b}");
set.add("{a, b}");
set.add("{a, c}");
and the following stream pipeline:
List<String> result = set.stream()
        .map(s -> s.replaceAll("[^A-Za-z]+", ""))
        .sorted(Comparator.comparing(String::isEmpty)
                .thenComparing(String.CASE_INSENSITIVE_ORDER))
        .map(s -> Arrays.stream(s.split(""))
                .collect(Collectors.joining(", ", "{", "}")))
        .collect(Collectors.toList());
Then we would have a result of:
[{a}, {a, b}, {a, c}, {b}, {}]
Well, as @Aomine and @Holger noted already, you need a custom comparator.
But IMHO their solutions look over-engineered. You don't need any costly operations like split and substring:
String.substring creates a new String object and calls System.arraycopy() under the hood.
String.split is even more costly: it iterates over your string and calls String.substring multiple times. Moreover, it creates an ArrayList to store all the substrings. If the number of substrings is big enough, the ArrayList will need to expand its capacity (perhaps more than once), causing further calls to System.arraycopy().
For your simple case I would slightly modify the code of the built-in String.compareTo method:
Comparator<String> customComparator = (s1, s2) -> {
    int len1 = s1.length();
    int len2 = s2.length();
    if (len1 == 2) return 1;  // "{}" sorts last
    if (len2 == 2) return -1;
    int lim = Math.min(len1, len2) - 1;
    for (int k = 1; k < lim; k++) {  // skip the surrounding braces
        char c1 = s1.charAt(k);
        char c2 = s2.charAt(k);
        if (c1 != c2) {
            return c1 - c2;
        }
    }
    return len1 - len2;
};
It will compare the strings with complexity O(n), where n is the length of the shorter string. At the same time it will neither create any new objects nor perform any array copying.
The same comparator can be implemented using Stream API:
Comparator<String> customComparatorUsingStreams = (s1, s2) -> {
    if (s1.length() == 2) return 1;
    if (s2.length() == 2) return -1;
    return IntStream.range(1, Math.min(s1.length(), s2.length()) - 1)
            .map(i -> s1.charAt(i) - s2.charAt(i))
            .filter(i -> i != 0)
            .findFirst()
            .orElse(0);
};
You can use your custom comparator like this:
List<String> orderedByAlpha = setList.stream()
        .sorted(customComparatorUsingStreams)
        .collect(Collectors.toList());
System.out.println(orderedByAlpha);
A take on it (slightly similar to the answer by Aomine) would be to strip the strings of the characters that make String#compareTo() fail, in this case '{' and '}'. Also, the special case that an empty string ("{}") is to be sorted after the rest needs to be taken care of.
The following code implements such a comparator:
static final Comparator<String> COMPARE_IGNORING_CURLY_BRACES_WITH_EMPTY_LAST = (s1, s2) -> {
    Function<String, String> strip = string -> string.replaceAll("[{}]", "");
    String strippedS1 = strip.apply(s1);
    String strippedS2 = strip.apply(s2);
    return strippedS1.isEmpty() || strippedS2.isEmpty() ?
            strippedS2.length() - strippedS1.length() :
            strippedS1.compareTo(strippedS2);
};
Of course, this is not the most efficient solution. If efficiency is truly important here, I would loop through the characters, like String#compareTo() does, as suggested by ETO.
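For reference, a minimal usage sketch, assuming the same input set as in the question:
List<String> result = set.stream()
        .sorted(COMPARE_IGNORING_CURLY_BRACES_WITH_EMPTY_LAST)
        .collect(Collectors.toList());
System.out.println(result); // [{a}, {a, b}, {a, c}, {b}, {}]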
The Stream.flatMap() operation transforms a stream of
a, b, c
into a stream that contains zero or more elements for each input element, e.g.
a1, a2, c1, c2, c3
Is there an opposite operation that batches up a few elements into one new one?
It is not .reduce(), because that produces only one result.
It is not collect(), because that only fills a container (afaiu).
It is not forEach(), because that returns void and works via side effects.
Does it exist? Can I simulate it in any way?
Finally I figured out that flatMap is its own "inverse", so to say. I had overlooked that flatMap does not necessarily increase the number of elements; it may also decrease the number of elements by emitting an empty stream for some of the elements. To implement a group-by operation, the function called by flatMap needs minimal internal state, namely the most recent element. It either returns an empty stream or, at the end of a group, the reduced-to group representative.
Here is a quick implementation, where groupBorder must return true if the two elements passed in do not belong to the same group, i.e. the group border lies between them. The combiner is the group function that combines, for example, (1,a), (1,a), (1,a) into (3,a), given that your group elements are (int, string) tuples.
public class GroupBy<X> implements Function<X, Stream<X>> {

    private final BiPredicate<X, X> groupBorder;
    private final BinaryOperator<X> combiner;
    private X latest = null;

    public GroupBy(BiPredicate<X, X> groupBorder,
                   BinaryOperator<X> combiner) {
        this.groupBorder = groupBorder;
        this.combiner = combiner;
    }

    @Override
    public Stream<X> apply(X elem) {
        // TODO: add test on end marker as additional parameter for constructor
        if (elem == null) {
            return latest == null ? Stream.empty() : Stream.of(latest);
        }
        if (latest == null) {
            latest = elem;
            return Stream.empty();
        }
        if (groupBorder.test(latest, elem)) {
            Stream<X> result = Stream.of(latest);
            latest = elem;
            return result;
        }
        latest = combiner.apply(latest, elem);
        return Stream.empty();
    }
}
There is one caveat though: to ship the last group of the whole stream, an end marker must be stuck as the last element into the stream. The above code assumes it is null, but an additional end-marker-tester could be added.
I could not come up with a solution that does not rely on the end marker.
Further, I did not convert between incoming and outgoing element types. For a unique operation, this would just work. For a count operation, a previous step would have to map individual elements to a counting object.
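To illustrate, a minimal usage sketch (my example, not from the original answer) that merges runs of elements sharing a first character; note the null end marker, and that the stateful function only makes sense on a sequential stream:
BiPredicate<String, String> border = (x, y) -> x.charAt(0) != y.charAt(0);
BinaryOperator<String> combine = (x, y) -> x + "+" + y;
Stream.concat(Stream.of("a1", "a2", "c1", "c2", "c3"), Stream.of((String) null)) // null = end marker
        .flatMap(new GroupBy<>(border, combine))
        .forEach(System.out::println);
// prints: a1+a2
//         c1+c2+c3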
Take a look at collapse in StreamEx
StreamEx.of("a1", "a2", "c1", "c2", "c3").collapse((a, b) -> a.charAt(0) == b.charAt(0))
.map(e -> e.substring(0, 1)).forEach(System.out::println);
Or my fork with more functions: groupBy, split, sliding...
StreamEx.of("a1", "a2", "c1", "c2", "c3").collapse((a, b) -> a.charAt(0) == b.charAt(0))
.map(e -> e.substring(0, 1)).forEach(System.out::println);
// a
// c
StreamEx.of("a1", "a2", "c1", "c2", "c3").splitToList(2).forEach(System.out::println);
// [a1, a2]
// [c1, c2]
// [c3]
StreamEx.of("a1", "a2", "c1", "c2", "c3").groupBy(e -> e.charAt(0))
.forEach(System.out::println);
// a=[a1, a2]
// c=[c1, c2, c3]
You can hack your way around. See the following example:
Stream<List<String>> stream = Stream.of("Cat", "Dog", "Whale", "Mouse")
        .collect(Collectors.collectingAndThen(
                Collectors.partitioningBy(a -> a.length() > 3),
                map -> Stream.of(map.get(true), map.get(false))
        ));
IntStream.range(0, 10)
        .mapToObj(n -> IntStream.of(n, n / 2, n / 3))
        .reduce(IntStream.empty(), IntStream::concat)
        .forEach(System.out::println);
As you see, the elements are mapped to streams too, and then concatenated into one large stream.
This is what I came up with:
interface OptionalBinaryOperator<T> extends BiFunction<T, T, Optional<T>> {

    static <T> OptionalBinaryOperator<T> of(BinaryOperator<T> binaryOperator,
                                            BiPredicate<T, T> biPredicate) {
        return (t1, t2) -> biPredicate.test(t1, t2)
                ? Optional.of(binaryOperator.apply(t1, t2))
                : Optional.empty();
    }
}

class StreamUtils {

    public static <T> Stream<T> reducePartially(Stream<T> stream,
                                                OptionalBinaryOperator<T> conditionalAccumulator) {
        Stream.Builder<T> builder = Stream.builder();
        stream.reduce((t1, t2) -> conditionalAccumulator.apply(t1, t2).orElseGet(() -> {
            builder.add(t1);
            return t2;
        })).ifPresent(builder::add);
        return builder.build();
    }
}
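A quick usage sketch (my own example): merging consecutive elements that share a first character, so the accumulator only fires within a group:
Stream<String> merged = StreamUtils.reducePartially(
        Stream.of("a1", "a2", "c1", "c2", "c3"),
        OptionalBinaryOperator.of(
                (a, b) -> a + "," + b,               // how to merge two group members
                (a, b) -> a.charAt(0) == b.charAt(0) // when they belong to the same group
        ));
merged.forEach(System.out::println);
// prints: a1,a2
//         c1,c2,c3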
Unfortunately, I did not have the time to make it lazy, but it can be done by writing a custom Spliterator delegating to stream.spliterator() that would follow the logic above (instead of utilizing stream.reduce(), which is a terminal operation).
PS. I just realized you wanted <T,U> conversion, and I wrote about <T,T> conversion. If you can first map from T to U, and then use the function above, then that's it (even if it's suboptimal).
If it's something more complex, the kind of condition for reducing/merging would need to be defined before proposing an API (e.g. Predicate<T>, BiPredicate<T,T>, BiPredicate<U,T>, or maybe even Predicate<List<T>>).
A bit like StreamEx, you could implement the Spliterator manually. For example,
collectByTwos(Stream.of(1, 2, 3, 4), (x, y) -> String.format("%d%d", x, y))
... returns a stream of "12", "34" using the code below:
public static <X, Y> Stream<Y> collectByTwos(Stream<X> inStream, BiFunction<X, X, Y> mapping) {
    Spliterator<X> origSpliterator = inStream.spliterator();
    Iterator<X> origIterator = Spliterators.iterator(origSpliterator);
    boolean isParallel = inStream.isParallel();
    long newSizeEst = (origSpliterator.estimateSize() + 1) / 2;
    Spliterators.AbstractSpliterator<Y> lCombinedSpliterator =
            new Spliterators.AbstractSpliterator<>(newSizeEst, origSpliterator.characteristics()) {
                @Override
                public boolean tryAdvance(Consumer<? super Y> action) {
                    if (!origIterator.hasNext()) {
                        return false;
                    }
                    X lNext1 = origIterator.next();
                    if (!origIterator.hasNext()) {
                        throw new IllegalArgumentException("Trailing elements of the stream would be ignored.");
                    }
                    X lNext2 = origIterator.next();
                    action.accept(mapping.apply(lNext1, lNext2));
                    return true;
                }
            };
    return StreamSupport.stream(lCombinedSpliterator, isParallel)
            .onClose(inStream::close);
}
(I think this may likely be incorrect for parallel streams.)
Helped mostly by the StreamEx answer above by user_3380739, you can use groupRuns (docs here):
StreamEx.of("a1", "a2", "c1", "c2", "c3").groupRuns((t, u) -> t.charAt(0) == u.charAt(0))
        .forEach(System.out::println);
// a=[a1, a2]
// c=[c1, c2, c3]