How to implement a Function1 in Java (its compose and andThen methods)? - java

I am working with the Akka Java APIs; in one of my actors I want to receive a callback and process the result on completion.
I want to achieve something like:
Future future = Patterns.ask(actorRefMap.get(order.getInstrument()), order, 500);
future.onComplete(getSender().tell(String.format("{} order processed for instrument {} with price {}", order.getOrderType(), order.getInstrument(), order.getPrice()), getSelf()), getContext().dispatcher());
With my current code I am getting the error: wrong first argument. Found: 'void', required: 'scala.Function1'. How do we implement scala.Function1 in Java?

You need to pass a function, not the (void) result of calling tell(...) eagerly; scala.Function1 receives the completed value as its single argument. The classic way to implement it from Java is to extend akka.dispatch.OnComplete, which implements Function1 for you. Note that getSender() is captured before the ask, since the sender may have changed by the time the callback fires:
final ActorRef sender = getSender();
final ActorRef self = getSelf();
Future<Object> future = Patterns.ask(actorRefMap.get(order.getInstrument()), order, 500);
future.onComplete(new OnComplete<Object>() {
    @Override
    public void onComplete(Throwable failure, Object result) {
        sender.tell(String.format("%s order processed for instrument %s with price %s", order.getOrderType(), order.getInstrument(), order.getPrice()), self);
    }
}, getContext().dispatcher());
... the essential part is passing a callback:
future.onComplete(callback, dispatcher)
instead of evaluating the statement in place:
future.onComplete(getSender().tell(...), dispatcher)
Also note that String.format uses %s placeholders, not the {} style of SLF4J loggers. And if an API requires scala.Function1 instead of java.util.function.Function, make sure you import the Java DSL (akka.actor.typed.javadsl.AskPattern), not the Scala DSL ...
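The question's title also asks about compose and andThen; those have the same meaning as on java.util.function.Function, which mirrors scala.Function1. A minimal, self-contained sketch (plain JDK, no Akka or Scala involved; the function names are illustrative):

```java
import java.util.function.Function;

public class Function1Demo {
    // Two simple functions to show composition order.
    static final Function<Integer, Integer> ADD_ONE   = x -> x + 1;
    static final Function<Integer, Integer> DOUBLE_IT = x -> x * 2;

    public static void main(String[] args) {
        // f.andThen(g): apply f first, then g -> doubleIt(addOne(3)) = 8
        System.out.println(ADD_ONE.andThen(DOUBLE_IT).apply(3));
        // f.compose(g): apply g first, then f -> addOne(doubleIt(3)) = 7
        System.out.println(ADD_ONE.compose(DOUBLE_IT).apply(3));
    }
}
```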

Related

Chaining functions that return Vavr Either

I have a series of functions that take in a Request object and return a Vavr Either.
The Either will contain a Result object if the task is complete or a modified Request object if the task needs to be completed by another function.
The thought was that I could chain them together by doing something like this:
// Note: The Request object parameter is modified by the function
// before being returned in the Either.
Function<Request, Either<Request,Result>> function1;
Function<Request, Either<Request,Result>> function2;
Function<Request, Either<Request,Result>> function3;
Function<Request, Result> terminalFunction;
Result result = function1.apply(request)
.flatMapLeft(function2)
.flatMapLeft(function3)
.fold(terminalFunction, r->r);
But apparently flatMapLeft is not a thing, so I just end up with nested Eithers on the left side. Any ideas on how I can achieve this functionality? I'm open to alternative libraries.
Edit:
Result result = function1.apply(request)
.fold(function2, Either::right)
.fold(function3, Either::right)
.fold(terminalFunction, r->r);
Seems like this should work instead, but Intellij is giving this error on the second fold line:
no instance(s) of type variable(s) exist so that capture of ? extends Object conforms to Request
You need monadic composition on your Request side, which is the left side in your type signatures, but Either's monadic composition (flatMap) works on the right side. So you either need to swap the sides in your function definitions, or pass each result through Either.swap(), e.g. with Vavr's Function1:
Function1.of(SomeType::function1).andThen(Either::swap)
Essentially, each of your function[1-3] would then become of type:
Function<Request, Either<Result, Request>>
Then your call chain becomes:
Result result = function1.apply(request)
.flatMap(function2)
.flatMap(function3)
.swap()
.getOrElseGet(terminalFunction);
Result result = function1.apply(request)
.fold(function2, Either::<Request, Result>right)
.fold(function3, Either::<Request, Result>right)
.fold(terminalFunction, r->r);
This appears to work, although it's a little clunky. Is this an abuse of the library? Would be interested in hearing some alternative approaches.
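To see why the swap puts the chain back on the happy path, here is a self-contained sketch using a stripped-down Either stand-in rather than Vavr itself (all names and types are illustrative: Request is a String, Result is an Integer):

```java
import java.util.function.Function;

public class EitherChainDemo {
    // Minimal stand-in for Vavr's Either: flatMap is right-biased,
    // so the value you keep threading must sit on the right.
    interface Either<L, R> {
        <R2> Either<L, R2> flatMap(Function<R, Either<L, R2>> f);
        <T> T fold(Function<L, T> ifLeft, Function<R, T> ifRight);
    }
    static <L, R> Either<L, R> left(L l) {
        return new Either<L, R>() {
            public <R2> Either<L, R2> flatMap(Function<R, Either<L, R2>> f) { return left(l); }
            public <T> T fold(Function<L, T> ifLeft, Function<R, T> ifRight) { return ifLeft.apply(l); }
        };
    }
    static <L, R> Either<L, R> right(R r) {
        return new Either<L, R>() {
            public <R2> Either<L, R2> flatMap(Function<R, Either<L, R2>> f) { return f.apply(r); }
            public <T> T fold(Function<L, T> ifLeft, Function<R, T> ifRight) { return ifRight.apply(r); }
        };
    }

    // Result (Integer) on the LEFT, still-pending Request (String) on the
    // RIGHT -- the swapped orientation the answer recommends.
    static Integer process(String request) {
        Function<String, Either<Integer, String>> f1 = req -> right(req + ">f1");
        Function<String, Either<Integer, String>> f2 = req -> left(42);           // finishes early
        Function<String, Either<Integer, String>> f3 = req -> right(req + ">f3"); // never reached
        return f1.apply(request)
                .flatMap(f2)
                .flatMap(f3)
                .fold(result -> result, String::length); // terminal function on the right
    }

    public static void main(String[] args) {
        System.out.println(process("order")); // f2 short-circuits the chain with 42
    }
}
```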

Vert.x how to add type parameter to promise in lambda function

I have recently started using the Vert.x framework in Java and I am still new to it.
Normally, when we create a future the following way, the SMObj type parameter is automatically applied to the promise1 promise:
Future<SMObj> future = Future.future(promise1 -> {
----
});
What I want to know is: when I use vertx.executeBlocking as in the following code segment, is there a way to set the type parameter to SMObj on promise2 (e.g. so that promise2 is a Promise<SMObj>)?
vertx.executeBlocking(promise2->{
----code-----
promise2.complete(SMObj);
}, blockRes->{
----code-----
}
);
Sure, you just have to supply the type argument explicitly on the call (an explicit type witness, not to be confused with the diamond operator):
vertx.<SMObj>executeBlocking(promise2 -> {
promise2.complete(SMObjInstance);
}, blockRes -> {
---- code ---
});
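The same inference problem can be reproduced without Vert.x. A sketch with a hypothetical generic method, and CompletableFuture standing in for Promise, showing where the explicit type argument goes:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class TypeWitnessDemo {
    // Simplified stand-in for vertx.executeBlocking: a generic method whose
    // type parameter T cannot be inferred from the lambda's parameter alone.
    static <T> T executeBlocking(Consumer<CompletableFuture<T>> blockingBody) {
        CompletableFuture<T> promise = new CompletableFuture<>();
        blockingBody.accept(promise);
        return promise.join();
    }

    public static void main(String[] args) {
        // The explicit <String> type witness pins promise2 to
        // CompletableFuture<String> inside the lambda body.
        String result = TypeWitnessDemo.<String>executeBlocking(
                promise2 -> promise2.complete("SMObj result"));
        System.out.println(result);
    }
}
```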

Can Spark Streaming do Anything Other Than Word Count?

I'm trying to get to grips with Spark Streaming but I'm having difficulty. Despite reading the documentation and analysing the examples, I wish to do something more than a word count on a text file/stream/Kafka queue, which is all the docs seem to cover.
I wish to listen to an incoming Kafka message stream, group messages by key and then process them. The code below is a simplified version of the process: get the stream of messages from Kafka, reduce by key to group messages by message key, then process them.
JavaPairDStream<String, byte[]> groupByKeyList = kafkaStream.reduceByKey((bytes, bytes2) -> bytes);
groupByKeyList.foreachRDD(rdd -> {
List<MyThing> myThingsList = new ArrayList<>();
MyCalculationCode myCalc = new MyCalculationCode();
rdd.foreachPartition(partition -> {
while (partition.hasNext()) {
Tuple2<String, byte[]> keyAndMessage = partition.next();
MyThing aSingleMyThing = MyThing.parseFrom(keyAndMessage._2); //parse from protobuffer format
myThingsList.add(aSingleMyThing);
}
});
List<MyResult> results = myCalc.doTheStuff(myThingsList);
//other code here to write results to file
});
When debugging, I see that inside the while (partition.hasNext()) loop, myThingsList has a different memory address than the List<MyThing> myThingsList declared in the outer foreachRDD.
When List<MyResult> results = myCalc.doTheStuff(myThingsList); is called, there are no results because myThingsList is a different instance of the List.
I'd like a solution to this problem, but would prefer a reference to documentation to help me understand why this is not working (as anticipated) and how I can solve it for myself. I don't mean a link to the single page of Spark documentation, but rather a section/paragraph or, preferably, a link to JavaDoc that does not provide Scala examples with non-functional commented code.
The reason you're seeing different list addresses is that Spark doesn't execute foreachPartition locally on the driver; it has to serialize the function and send it over to the executor handling the processing of the partition. You have to remember that although working with the code feels like everything runs in a single location, the calculation is actually distributed.
The first problem I see with your code has to do with your reduceByKey, which takes two byte arrays and returns the first. Is that really what you want to do? That means you're effectively dropping parts of the data; perhaps you're looking for combineByKey, which will allow you to return a JavaPairDStream<String, List<byte[]>>.
Regarding parsing of your protobuf, it looks to me like you don't want foreachRDD; you need an additional map to parse the data:
kafkaStream
.combineByKey(/* implement logic */)
.flatMap(x -> x._2)
.map(proto -> MyThing.parseFrom(proto))
.map(myThing -> myCalc.doStuff(myThing))
.foreachRDD(/* After all the processing, do stuff with result */)
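The underlying rule, returning new values from transformations instead of mutating a list captured from the enclosing scope, can be sketched without Spark; the names below are illustrative stand-ins:

```java
import java.util.List;
import java.util.stream.Collectors;

public class MapInsteadOfMutate {
    // Stand-in for MyThing.parseFrom: decode a byte[] payload.
    static String parse(byte[] payload) { return new String(payload); }

    // Returning new values from a transformation is the shape Spark's map()
    // expects. A list captured from the enclosing scope would be serialized
    // with the closure and mutated on a remote executor -- never the copy
    // the driver still holds.
    static List<String> parseAll(List<byte[]> partition) {
        return partition.stream()
                .map(MapInsteadOfMutate::parse)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<byte[]> partition = List.of("a".getBytes(), "b".getBytes());
        System.out.println(parseAll(partition));
    }
}
```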

Vertx Future does not wait

Since I'm using Vert.x 3.1 in my stack, I was thinking of using the Future feature that the toolkit brings, but after reading the API it seems pretty limited to me. I cannot even find a way to make the future wait for an Observable.
Here my code
public Observable<CommitToOrderCommand> validateProductRestrictions(CommitToOrderCommand cmd) {
Future<Observable<CommitToOrderCommand>> future = Future.future();
orderRepository.getOrder(cmd, cmd.orderId)
.flatMap(order -> validateOrderProducts(cmd, order))
.subscribe(map -> checkMapValues(map, future, cmd));
Observable<CommitToOrderCommand> result = future.result();
if(errorFound){
throw MAX_QUANTITY_PRODUCT_EXCEED.create("Fail"/*restrictions.getBulkBuyLimit().getDescription())*/);
}
return result;
}
private void checkMapValues(Multimap<String, BigDecimal> totalUnitByRestrictions, Future<Observable<CommitToOrderCommand>> future,
CommitToOrderCommand cmd) {
for (String restrictionName : totalUnitByRestrictions.keySet()) {
Restrictions restrictions = Restrictions.valueOf(restrictionName);
if (totalUnitByRestrictions.get(restrictionName)
.stream()
.reduce(BigDecimal.ZERO, BigDecimal::add)
.compareTo(restrictions.getBulkBuyLimit()
.getMaxQuantity()) == 1) {
errorFound = true;
}
}
future.complete(Observable.just(cmd));
}
In the onComplete of my first Observable I'm checking the results, and when it finishes I complete the future to unblock the operation.
But I'm seeing that future.result() does not block until future.complete() is invoked, as I was expecting. Instead it just returns null.
Any idea what's wrong here?
Regards.
The Vert.x future doesn't block; rather, it works with a handler that is invoked when a result has been injected (see setHandler and isComplete).
If the outer layer of code requires an Observable, you don't need to wrap it in a Future, just return Observable<T>. Future<Observable<T>> doesn't make much sense, you're mixing two ways of doing async results.
Note that there are ways to collapse an Observable into a Future, but the difficulty is that an Observable may emit several items whereas a Future can hold only a single item. You already took care of that by collecting your results into a single emission of map.
Since this Observable only ever emits one item, if you want a Future out of it you should subscribe to it and call future.complete(yourMap) in the onNext handler. Also define an onError handler that calls future.fail.
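That wiring can be sketched without Vert.x or RxJava; here CompletableFuture stands in for the Vert.x Future and a one-shot interface stands in for the single-emission Observable (all names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class ObservableToFuture {
    // A tiny single-emission "observable" stand-in (not real RxJava).
    interface Single<T> {
        void subscribe(Consumer<T> onNext, Consumer<Throwable> onError);
    }

    // The wiring the answer describes: complete the future in onNext,
    // fail it in onError.
    static <T> CompletableFuture<T> toFuture(Single<T> single) {
        CompletableFuture<T> future = new CompletableFuture<>();
        single.subscribe(future::complete, future::completeExceptionally);
        return future;
    }

    public static void main(String[] args) {
        Single<String> validation = (onNext, onError) -> onNext.accept("validated");
        System.out.println(toFuture(validation).join());
    }
}
```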

How to chain Guava futures?

I'm trying to create a small service to accept a file upload, unzip it and then delete the uploaded file. Those three steps should be chained as futures. I'm using the Google Guava library.
Workflow is:
A future to download the file; if that operation completes, then a future to unzip the file; if unzipping is done, a future to delete the original uploaded file.
But honestly, it isn't clear to me how I would chain the futures, or even how to create them in Guava's way. The documentation is simply terse and unclear. OK, there is a transform method but no concrete example at all, and the chain method is deprecated.
I miss RxJava library.
Futures.transform is not fluently chainable like RxJava, but you can still use it to set up Futures that depend on one another. Here is a concrete example:
final ListeningExecutorService service = MoreExecutors.listeningDecorator(Executors.newCachedThreadPool());
final ListenableFuture<FileClass> fileFuture = service.submit(() -> fileDownloader.download());
final ListenableFuture<UnzippedFileClass> unzippedFileFuture = Futures.transform(fileFuture,
//need to cast this lambda
(Function<FileClass, UnzippedFileClass>) file -> fileUnzipper.unzip(file));
final ListenableFuture<Void> deletedFileFuture = Futures.transform(unzippedFileFuture,
(Function<UnzippedFileClass, Void>) unzippedFile -> fileDeleter.delete(unzippedFile));
deletedFileFuture.get(); //or however you want to wait for the result
This example assumes fileDownloader.download() returns an instance of FileClass, fileUnzipper.unzip() returns an UnzippedFileClass, etc. If fileDownloader.download() instead returns a ListenableFuture<FileClass>, use AsyncFunction instead of Function.
This example also uses Java 8 lambdas for brevity. If you are not using Java 8, pass in anonymous implementations of Function or AsyncFunction instead:
Futures.transform(fileFuture, new AsyncFunction<FileClass, UnzippedFileClass>() {
@Override
public ListenableFuture<UnzippedFileClass> apply(final FileClass input) throws Exception {
return fileUnzipper.unzip(input);
}
});
More info on transform here: http://docs.guava-libraries.googlecode.com/git-history/release/javadoc/com/google/common/util/concurrent/Futures.html#transform (scroll or search for "transform" -- deep linking appears to be broken currently)
Guava extends the Future interface with ListenableFuture for this purpose.
Something like this should work:
Runnable downloader, unzipper;
ListeningExecutorService service = MoreExecutors.listeningDecorator(Executors.newCachedThreadPool());
service.submit(downloader).addListener(unzipper, service);
I would include deleting the file in the unzipper, since it is a near instantaneous action, and it would complicate the code to separate it.
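For comparison, the same download → unzip → delete chain with the JDK's CompletableFuture, where thenApply plays the role of Futures.transform (the step implementations here are illustrative stand-ins):

```java
import java.util.concurrent.CompletableFuture;

public class PipelineDemo {
    // Illustrative stand-ins for the three steps in the question.
    static String download()         { return "upload.zip"; }
    static String unzip(String zip)  { return zip.replace(".zip", ""); }
    static String delete(String dir) { return "deleted archive for " + dir; }

    // Each thenApply runs when the previous stage completes, exactly the
    // chaining the question asks for.
    static CompletableFuture<String> pipeline() {
        return CompletableFuture.supplyAsync(PipelineDemo::download)
                .thenApply(PipelineDemo::unzip)
                .thenApply(PipelineDemo::delete);
    }

    public static void main(String[] args) {
        System.out.println(pipeline().join());
    }
}
```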
