"Double" composition with CompletableFuture - java

I'm trying to avoid nesting CompletableFuture when combining 2 independent ones with a BiFunction that returns a third one.
Currently, using thenCombine() does not cut it:
// What I have
public CompletableFuture<CompletableFuture<C>> doStuff() {
    CompletableFuture<A> aFuture = makeSomeA();
    CompletableFuture<B> bFuture = makeSomeB();
    CompletableFuture<CompletableFuture<C>> cFuture = aFuture.thenCombine(bFuture, this::makeSomeC);
    return cFuture;
}
// What I want
public CompletableFuture<C> doStuff() {
    CompletableFuture<A> aFuture = makeSomeA();
    CompletableFuture<B> bFuture = makeSomeB();
    // obv this method does not exist
    CompletableFuture<C> c = aFuture.thenBicompose(bFuture, this::makeSomeC);
    return c;
}
private CompletableFuture<A> makeSomeA() {...}
private CompletableFuture<B> makeSomeB() {...}
private CompletableFuture<C> makeSomeC(A a, B b) {...}
I'm basically trying to find a way that looks like haskell if there was a CompletableFuture monad:
doStuff :: CompletableFuture C
doStuff = do
  a <- makeSomeA
  b <- makeSomeB
  makeSomeC a b
makeSomeA :: CompletableFuture A
makeSomeB :: CompletableFuture B
makeSomeC :: A -> B -> CompletableFuture C
I read somewhere that join() is the flatMap of CompletableFuture, so I think I could use that method to do something like aFuture.thenCombine(bFuture, ((Function<CompletableFuture<C>,C>) CompletableFuture::join).compose(this::makeSomeC)), but I'm not sure this is the proper/advised way to go. And I cannot say that it helps readability in any way...
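Spelled out (and using BiFunction.andThen instead of Function.compose, since makeSomeC takes two arguments), that idea would look roughly like this, with the caveat that join() blocks whichever thread runs the combining function:
CompletableFuture<C> cFuture = aFuture.thenCombine(
        bFuture,
        ((BiFunction<A, B, CompletableFuture<C>>) this::makeSomeC)
                .andThen(CompletableFuture::join)); // blocks until makeSomeC's future completes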

To me it looks like thenCompose is the direct equivalent of the Haskell Monad.bind.
thenCompose can be nested in the same way as Haskell's monadic bind, which is also what a do-expression desugars to. Using that, your problem can be solved like this:
public CompletableFuture<C> doStuff() {
    CompletableFuture<A> aFuture = makeSomeA();
    CompletableFuture<B> bFuture = makeSomeB();
    return aFuture.thenCompose(a -> bFuture.thenCompose(b -> makeSomeC(a, b)));
}
Explanation of the types
This can be seen by inspecting the types of the functions.
Monad bind -- which is written >>= in Haskell -- has the following type:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
thenCompose in Java has the following signature:
public <U> CompletionStage<U> thenCompose(Function<? super T, ? extends CompletionStage<U>> fn)
The above converted to Haskell syntax, with an extra parameter as an explicit this, looks like this:
thenCompose :: CompletionStage T -> (T -> CompletionStage U) -> CompletionStage U
We can see that this has the same structure as the Haskell type. The differences are the names, and the fact that Haskell's support for higher-kinded types cannot be expressed exactly by Java's interfaces.
Note on the Haskell code in the question
But I am a bit puzzled by your Haskell code. To me it looks like it is doing the following:
public CompletableFuture<C> doStuff() {
    return makeSomeA().thenCompose(a -> makeSomeB().thenCompose(b -> makeSomeC(a, b)));
}
That is, it waits until the makeSomeA operation has completed before starting makeSomeB. The Java code, on the other hand, starts the two operations in parallel and only waits for both results before starting on C. But maybe it is a laziness thing.

I guess the simplest solution is just to apply a thenCompose(identity()) afterwards:
public CompletableFuture<C> doStuff() {
    CompletableFuture<A> aFuture = makeSomeA();
    CompletableFuture<B> bFuture = makeSomeB();
    CompletableFuture<CompletableFuture<C>> cFuture = aFuture.thenCombine(bFuture, this::makeSomeC);
    return cFuture.thenCompose(Function.identity());
}
Alternatively, introduce a simple Pair class to combine the results of A and B and use thenCompose():
public CompletableFuture<C> doStuff() {
    CompletableFuture<A> aFuture = makeSomeA();
    CompletableFuture<B> bFuture = makeSomeB();
    CompletableFuture<Pair<A, B>> cFuture = aFuture.thenCombine(bFuture, Pair::new);
    return cFuture.thenCompose(p -> makeSomeC(p.a, p.b));
}
private static class Pair<A, B> {
    A a;
    B b;

    public Pair(A a, B b) {
        this.a = a;
        this.b = b;
    }
}
And a third alternative without Pair:
public CompletableFuture<C> doStuff() {
    CompletableFuture<A> aFuture = makeSomeA();
    CompletableFuture<B> bFuture = makeSomeB();
    CompletableFuture<Void> cFuture = CompletableFuture.allOf(aFuture, bFuture);
    return cFuture.thenCompose(__ -> makeSomeC(aFuture.join(), bFuture.join()));
}
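If this pattern comes up often, the combine-then-flatten variant can also be wrapped in a small generic helper, roughly the thenBicompose the question wished for (the name and static placement are made up):
static <A, B, C> CompletableFuture<C> thenBiCompose(
        CompletableFuture<A> aFuture,
        CompletableFuture<B> bFuture,
        BiFunction<? super A, ? super B, ? extends CompletableFuture<C>> fn) {
    // thenCombine produces the nested future, thenCompose(identity()) flattens it
    return aFuture.thenCombine(bFuture, fn).thenCompose(Function.identity());
}
Usage would then be return thenBiCompose(aFuture, bFuture, this::makeSomeC);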

Related

Java Project Reactor subscribeOn behavior for Mono chains

I am learning about Project Reactor and am confused about some questions regarding subscribeOn().
My blocking code looks like this (say getA(), getB(), getC(A, B), getD(A, B), and getE(C, D) are all time-consuming functions):
public E someMethod() {
    A a = getA();
    B b = getB();
    C c = getC(a, b);
    D d = getD(a, b);
    return getE(c, d);
}
Now I want to change it into a non-blocking implementation, so I wrote:
public E someMethodUnblocking() {
    Mono<A> a = Mono.fromCallable(this::getA).subscribeOn(Schedulers.boundedElastic());
    Mono<B> b = Mono.fromCallable(this::getB).subscribeOn(Schedulers.boundedElastic());
    Mono<C> c = Mono.zip(a, b, (aa, bb) -> getC(aa, bb)).subscribeOn(Schedulers.boundedElastic());
    Mono<D> d = Mono.zip(a, b, (aa, bb) -> getD(aa, bb)).subscribeOn(Schedulers.boundedElastic());
    return Mono.zip(c, d, (cc, dd) -> getE(cc, dd)).block();
}
Does this look like the correct implementation? Will there be a difference if I remove the subscribeOn() when creating Mono a and b?
Your implementation subscribes twice to the (cold) Monos a and b, and will therefore run your costly getA() and getB() methods twice.
To avoid that, you should cache() the first two monos.
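A possible corrected version, keeping the structure (and the getters) from the question:
Mono<A> a = Mono.fromCallable(this::getA).subscribeOn(Schedulers.boundedElastic()).cache();
Mono<B> b = Mono.fromCallable(this::getB).subscribeOn(Schedulers.boundedElastic()).cache();
// cache() makes a and b replay their single result, so getA()/getB() run only once
// even though both zips below subscribe to them.
Mono<C> c = Mono.zip(a, b, (aa, bb) -> getC(aa, bb)).subscribeOn(Schedulers.boundedElastic());
Mono<D> d = Mono.zip(a, b, (aa, bb) -> getD(aa, bb)).subscribeOn(Schedulers.boundedElastic());
return Mono.zip(c, d, (cc, dd) -> getE(cc, dd)).block();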

Why does a NoSuchMethodException appear when the method obviously exists?

I have been trying to solve this problem with my code but it keeps throwing a NoSuchMethodException no matter what I do. Does this have to do with imports I have added?
The code is:
import java.util.Optional;
import java.util.function.BinaryOperator;
import java.util.function.Function;
import java.util.function.LongPredicate;
import java.util.function.Predicate;
import java.util.stream.LongStream;
public class Dishouse {
    public Function<Long, Long> dionym() {
        Function<Long, Long> meth = (a) -> a - 62;
        return meth;
    }

    public Function<Float, Float> femality(Function<Float, Float> sorbonist) {
        Function<Float, Float> func = (a) -> a / 92;
        return func.compose(sorbonist);
    }

    public Optional<Long> bristler(LongStream s, Predicate<Long> p,
                                   BinaryOperator<Long> b) {
        LongStream result = s.filter((LongPredicate) p);
        Long count = result.count();
        long stre[] = result.toArray();
        if (count > 1) {
            Optional<Long> e = null;
            for (int i = 0; i < stre.length; i++) {
                e = Optional.of(b.apply(stre[i], stre[i + 1]));
            }
            return e;
        } else {
            return Optional.empty();
        }
    }
}
The error is:
Method bristler not found: java.lang.NoSuchMethodException: Dishouse.bristler(java.util.stream.Stream, java.util.function.Predicate, java.util.function.BinaryOperator)
Above, I have included the whole class Dishouse and no other class or method interacts with this class.
It doesn't exist with that signature. LongStream isn't a Stream. Possibly you want Stream<Long> instead; the two are not the same thing (one operates on primitive longs, the other on Long objects).
For the same reason, casting a Predicate<Long> to a LongPredicate cannot work. Casts just assert types, they don't convert anything (unless you cast from one primitive type to another), and a Predicate<Long> isn't a LongPredicate.
NB: Your code can return either null, Optional.empty(), or some non-empty Optional, which is a gigantic code smell. This isn't how you are supposed to use Optional.
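For what it's worth, what bristler seems to be trying to do is essentially a reduction, and Stream.reduce already returns an Optional. Ignoring the original count > 1 special case, a sketch could look like this:
public Optional<Long> bristler(LongStream s, Predicate<Long> p, BinaryOperator<Long> b) {
    // boxed() bridges LongStream to Stream<Long>; reduce() yields Optional.empty() for an empty stream
    return s.boxed().filter(p).reduce(b);
}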
LongStream is not a sub-interface of Stream. To map a Stream<Long> to a LongStream, use the following:
Stream<Long> sl = ...
LongStream ls = sl.mapToLong(x -> x);
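If you instead want to keep the LongStream and Predicate<Long> parameters, the predicate can be adapted with a method reference rather than a cast, at the cost of boxing each element:
LongStream result = s.filter(p::test); // p::test is adapted to a LongPredicate; no cast involved
// or work with boxed values throughout:
Stream<Long> boxed = s.boxed().filter(p);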

IntelliJ - method reference may change semantics

I have the following code:
return userService.getAll()
        .stream()
        .map(User::getRoleName)
        .map(roleName -> roleService.findRoleByName(roleName))
        .collect(Collectors.toSet());
It seems that roleName -> roleService.findRoleByName(roleName) could be replaced by a method reference (namely, a reference to an instance method of a particular object), but IntelliJ IDEA warns that this may change semantics.
How could it change semantics? Would it change semantics?
If you do not follow the rules of clean functional programming, there may be a change of semantics when you convert a lambda to a method reference.
The difference is that for a method reference the receiver expression is evaluated once, when the stream pipeline is built, whereas in the lambda it is re-evaluated on each execution of the lambda.
Here is a short self-contained example for demonstration:
import java.util.function.Function;
import java.util.stream.Stream;

public class Test40 {
    public static void main(String[] args) {
        Function<Integer, Integer> f2 = n -> 2 * n;
        Function<Integer, Integer> f3 = n -> 3 * n;
        Function<Integer, Integer>[] funcArray = new Function[1];
        funcArray[0] = f2;
        Stream.of(1, 2, 3)
              .peek(n -> { if (n > 1) funcArray[0] = f3; })
              .map(funcArray[0]::apply)        // Method reference; '::apply' could be omitted
              .forEach(System.out::print);     // prints 246
        System.out.println();
        funcArray[0] = f2;
        Stream.of(1, 2, 3)
              .peek(n -> { if (n > 1) funcArray[0] = f3; })
              .map(n -> funcArray[0].apply(n)) // Lambda
              .forEach(System.out::print);     // prints 269
        System.out.println();
    }
}
How to avoid this problem: Do not use side effects when working with streams. Do not use peek for processing! This method exists mainly to support debugging (have a look at the javadoc).
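In the code from the question, roleService is presumably a field that never changes, so the conversion should be safe. The only observable difference is that roleService is dereferenced once, when the method reference is created, so a null roleService would fail at stream construction rather than at the first element:
return userService.getAll()
        .stream()
        .map(User::getRoleName)
        .map(roleService::findRoleByName) // roleService is captured once, when this reference is created
        .collect(Collectors.toSet());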

Continuous state reduction with Flux

Let's say I have two event types (A and B) and Fluxes that generate them somehow:
Flux<A> aFlux = ...;
Flux<B> bFlux = ...;
and also a type that holds the current state denoted by type S:
class S {
    final int val;
}
I want to create the following:
final S sInitial = ...;
Flux<S> sFlux = Flux.merge(aFlux, bFlux)
        .scan((sCurr, e) -> {
            if (e instanceof A) {
                return mapA(sCurr, (A) e);
            } else if (e instanceof B) {
                return mapB(sCurr, (B) e);
            } else {
                throw new RuntimeException("invalid event");
            }
        })
        .startWith(sInitial);
where sCurr is the instance of S that was last emitted by sFlux (starting with sInitial), and mapA / mapB return the new value of type S. Both S and sInitial are immutable.
That is, I want to:
Continuously output the latest state ...
... that is being generated ...
... based on the current state and the received event ...
... as prescribed by the mapper functions
Is there a way to reorganize the above stream flow in some other way, especially in order to avoid using instanceof?
You could add an interface and implement it for your A and B classes:
interface ToSConvertible {
    S toS(S s);
}
Now you could use the reactor.core.publisher.Flux#scan(A, java.util.function.BiFunction<A,? super T,A>) method:
Flux<S> sFlux = Flux.merge(aFlux, bFlux)
        .scan(sInitial, (s, e) -> e.toS(s));
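A rough sketch of the event classes (assuming the mapA / mapB logic from the question can be moved into, or delegated to from, the events):
class A implements ToSConvertible {
    @Override
    public S toS(S s) {
        return mapA(s, this); // same state transition as before, without instanceof
    }
}

class B implements ToSConvertible {
    @Override
    public S toS(S s) {
        return mapB(s, this);
    }
}
The merged flux of A and B events can then be folded into S by scan without any casts.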

Collect arguments to apply to curried functions in Java/Scala

I would like to create a class in Java 8 which is able to recursively create an object which has a method that takes a function parameter based on the parameters I added.
For example, I would like to be able to do this:
new X().param(23).param("some String").param(someObject)
    .apply((Integer a) -> (String b) -> (Object c) -> f(a, b, c))
The apply method would then apply the collected parameters to the given function.
I feel this should be possible without reflection while maintaining type safety, but I can't quite figure out how. A solution in Scala is also welcome, if I can translate it to Java 8. If it's not possible, I'll also accept an answer that explains why.
What I have so far is essentially this:
class ParamCmd<A, X> {
    final A param;

    public ParamCmd(A param) {
        this.param = param;
    }

    public <B> ParamCmd<B, Function<A, X>> param(B b) {
        return new ParamCmd<>(b);
    }

    public void apply(Function<A, X> f) {
        // this part is unclear to me
    }

    public static void main(String[] args) {
        new ParamCmd<Integer, String>(0).param("oops").param(new Object())
            // the constructed function parameters are reversed relative to declaration
            .apply((Object c) -> (String b) -> (Integer a) ->
                "args were " + a + " " + b + " " + c
            );
    }
}
As noted in the code comments, my problems are keeping the function parameters in the order of the calls of param(), and actually applying the parameters.
For an unlimited number of parameters, the only solution I could think of uses heterogeneous lists in Scala.
It probably isn't feasible in Java, as there is type-level computation going on with path-dependent types.
Using heterogeneous lists and path-dependent types:
import scala.language.higherKinds

object Main extends App {
  val builder1 = HCons(23, HCons("Hello", HNil))
  val builder2 = HCons(42L, builder1)

  val res1: String = builder1.apply(i => s => i + s)
  val res2: String = builder2.apply(l => i => s => (i + l) + s)

  println(res1) // 23Hello
  println(res2) // 65Hello
}

sealed trait HList {
  type F[Res]
  def apply[Res]: F[Res] => Res
}

case class HCons[Head, HTail <: HList](head: Head, tail: HTail) extends HList {
  type F[Res] = Head => (tail.type)#F[Res]
  def apply[Res]: F[Res] => Res = f => tail.apply(f(head))
}

case object HNil extends HList {
  type F[Res] = Res
  def apply[Res]: F[Res] => Res = identity
}
This code prints:
23Hello
65Hello
The second, more limited way of doing this, which might work in Java, is to create a separate class for each function arity, where each class wraps the value and returns the class for the next arity, up to some maximal length. See the Applicative Builder in Scalaz: "Scalaz Applicative Builder".
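A rough sketch of what that could look like in Java, up to three parameters (all class names here are made up for illustration):
import java.util.function.Function;

// Each ParamsN wraps one more value; param(...) returns the next arity and apply(...)
// feeds the collected values, in declaration order, to a curried function.
class Params1<A> {
    final A a;
    Params1(A a) { this.a = a; }
    <B> Params2<A, B> param(B b) { return new Params2<>(a, b); }
    <R> R apply(Function<A, R> f) { return f.apply(a); }
}

class Params2<A, B> {
    final A a; final B b;
    Params2(A a, B b) { this.a = a; this.b = b; }
    <C> Params3<A, B, C> param(C c) { return new Params3<>(a, b, c); }
    <R> R apply(Function<A, Function<B, R>> f) { return f.apply(a).apply(b); }
}

class Params3<A, B, C> {
    final A a; final B b; final C c;
    Params3(A a, B b, C c) { this.a = a; this.b = b; this.c = c; }
    <R> R apply(Function<A, Function<B, Function<C, R>>> f) { return f.apply(a).apply(b).apply(c); }
}
Usage then mirrors the question: new Params1<>(23).param("some String").param(someObject).apply((Integer a) -> (String b) -> (Object c) -> f(a, b, c)).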
This doesn't answer your question. However, maybe it helps someone to find a solution, or to explain why it isn't possible in Java and/or Scala.
It can be done in C++, with an arbitrary number of parameters, and without losing type safety. The call site looks as follows. Unfortunately, the lambda syntax in C++ is quite verbose.
bar{}.param(23).param("some String").param(4.2).apply(
    [](int i) {
        return [=](std::string s) {
            return [=](double d) {
                std::cout << i << ' ' << s << ' ' << d << '\n';
            };
        };
    });
Following is the definition of foo and bar. The implementation is straightforward. However, I doubt that it is possible to build something like this in Java, because of the way type parameters work there. Generics in Java can only be used to avoid type casts, and that's not enough for this use case.
template <typename Param, typename Tail>
struct foo {
    Param _param;
    Tail _tail;

    template <typename P>
    auto param(P p) {
        return foo<P, foo>{p, *this};
    }

    template <typename Function>
    auto apply(Function function) {
        return _tail.apply(function)(_param);
    }
};

struct bar {
    template <typename P>
    auto param(P p) {
        return foo<P, bar>{p, *this};
    }

    template <typename Function>
    auto apply(Function function) {
        return function;
    }
};
Sorry, I can only give some leads in Scala:
Perhaps it would help to have a look at http://www.scala-lang.org/api/2.10.4/index.html#scala.Function$
.apply((Integer a) -> (String b) -> (Object c) -> f(a,b,c))
pretty much looks like Function.uncurried
param(23).param("some String").param(someObject)
could be implemented using a list as an accumulator if you don't care about type safety. If you want to keep the types, you could use the HList from Shapeless (https://github.com/milessabin/shapeless), which comes with a handy tupled method.
Implementation of param():
import shapeless._
import HList._
import syntax.std.traversable._
class Method(val l: HList = HNil) {
  def param(p: Any) = new Method(p :: l)
}
Example
scala> val m = new Method().param(1).param("test")
m: Method = Method#1130ad00
scala> m.l
res8: shapeless.HList = test :: 1 :: HNil
